Grassmann Algebra
Exploring extended vector algebra with Mathematica
John Browne
2009 9 3
Grassmann Algebra Book.nb 2
Table of Contents
1 Introduction
1.1 Background
The mathematical representation of physical entities
The central concept of the Ausdehnungslehre
Comparison with the vector and tensor algebras
Algebraicizing the notion of linear dependence
Grassmann algebra as a geometric calculus
1.15 Terminology
Terminology
1.16 Summary
To be completed
2.3 Exterior Linear Spaces
Composing m-elements
Composing elements automatically
Spaces and congruence
The associativity of the exterior product
Transforming exterior products
2.5 Bases
Bases for exterior linear spaces
Declaring a basis in GrassmannAlgebra
Composing bases of exterior linear spaces
Composing palettes of basis elements
Standard ordering
Indexing basis elements of exterior linear spaces
2.6 Cobases
Definition of a cobasis
The cobasis of unity
Composing palettes of cobasis elements
The cobasis of a cobasis
2.7 Determinants
Determinants from exterior products
Properties of determinants
The Laplace expansion technique
Calculating determinants
2.8 Cofactors
Cofactors from exterior products
The Laplace expansion in cofactor form
Exploring the calculation of determinants using minors and cofactors
Transformations of cobases
Exploring transformations of cobases
2.10 Simplicity
The concept of simplicity
All (n−1)-elements are simple
Conditions for simplicity of a 2-element in a 4-space
Conditions for simplicity of a 2-element in a 5-space
2.11 Exterior division
The definition of an exterior quotient
Division by a 1-element
Division by a k-element
Automating the division process
2.14 Summary
3.2 Duality
The notion of duality
Examples: Obtaining the dual of an axiom
Summary: The duality transformation algorithm
3.5 The Common Factor Axiom
Motivation
The Common Factor Axiom
Extension of the Common Factor Axiom to general elements
Special cases of the Common Factor Axiom
Dual versions of the Common Factor Axiom
Application of the Common Factor Axiom
When the common factor is not simple
3.10 Summary
4 Geometric Interpretations
4.1 Introduction
4.2 Geometrically Interpreted 1-elements
Vectors
Points
Declaring a basis for a bound vector space
Composing vectors and points
Example: Calculation of the centre of mass
4.6 m-planes
m-planes defined by points
m-planes defined by m-vectors
m-planes as exterior quotients
Computing exterior quotients
The m-vector of a bound m-vector
4.14 Summary
5 The Complement
5.1 Introduction
5.13 Summary
6.12 Angle
Defining the angle between elements
The angle between a vector and a bivector
The angle between two bivectors
The volume of a parallelepiped
6.13 Projection
To be completed.
8 Exploring Mechanics
8.1 Introduction
8.2 Force
Representing force
Systems of forces
Equilibrium
Force in a metric 3-plane
8.3 Momentum
The velocity of a particle
Representing momentum
The momentum of a system of particles
The momentum of a system of bodies
Linear momentum and the mass centre
Momentum in a metric 3-plane
9 Grassmann Algebra
9.1 Introduction
9.2 Grassmann Numbers
Creating Grassmann numbers
Body and soul
Even and odd components
The grades of a Grassmann number
Working with complex scalars
11.8 Octonions
To be completed
12.14 Clifford Algebras of a 2-Space
The Clifford product table in 2-space
Product tables in a 2-space with an orthogonal basis
Case 1: {e1 ∘ e1 → +1, e2 ∘ e2 → +1}
Case 2: {e1 ∘ e1 → +1, e2 ∘ e2 → −1}
Case 3: {e1 ∘ e1 → −1, e2 ∘ e2 → −1}
12.17 Rotations
To be completed
13.6 Matrix Powers
Positive integer powers
Negative integer powers
Non-integer powers of matrices with distinct eigenvalues
Integer powers of matrices with distinct eigenvalues
13.11 Supermatrices
To be completed
Appendices
Contents
A1 Guide to GrassmannAlgebra
A3 Notation
A4 Glossary
A5 Bibliography
Bibliography
A note on sources to Grassmann's work
Index
Preface
The origins of this book
This book has grown out of an interest in Grassmann's work over the past three decades. There is
something fascinating about the beauty with which the mathematical structures Grassmann
discovered (invented, if you will) describe the physical world, and something also fascinating
about how these beautiful structures have been largely lost to the mainstreams of mathematics and
science.
Hermann Günther Grassmann was born in 1809 in Stettin, near the border of Germany and Poland.
He was only 23 when he discovered the method of adding and multiplying points and vectors
which was to become the foundation of his Ausdehnungslehre. In 1839 he composed a work on the
study of tides entitled Theorie der Ebbe und Flut, which was the first work ever to use vectorial
methods. In 1844 Grassmann published his first Ausdehnungslehre (Die lineale Ausdehnungslehre
ein neuer Zweig der Mathematik) and in the same year won a prize for an essay which expounded a
system satisfying an earlier search by Leibniz for an 'algebra of geometry'. Despite these
achievements, Grassmann received virtually no recognition.
A more detailed biography of Grassmann may be found at the end of the book.
The term 'Exterior Calculus' will be reserved for the calculus of exterior differential forms,
originally developed by Elie Cartan from the Ausdehnungslehre. This is an area in which there are
many excellent texts, and which is outside the scope of this book.
The term 'Grassmann algebra' will be used to describe that body of algebraic theory and results
based on the Ausdehnungslehre, but extended to include more recent results and viewpoints. This
will be the basic focus of this book.
Finally, the term 'GrassmannAlgebra' will be used to refer to the Mathematica based software
package which accompanies it.
The intrinsic power of Grassmann algebra arises from its fundamental product operation, the
exterior product. The exterior product codifies the property of linear dependence, so essential for
modern applied mathematics, directly into the algebra. Simple non-zero elements of the algebra
may be viewed as representing constructs of linearly independent elements. For example, a simple
bivector is the exterior product of two vectors; a line is represented by the exterior product of two
points; a plane is represented by the exterior product of three points.
The primary focus of this book is to provide a readable account in modern notation of Grassmann's
major algebraic contributions to mathematics and science. I would like it to be accessible to
scientists and engineers, students and professionals alike. Consequently I have tried to avoid all
mathematical terminology which does not make an essential contribution to understanding the
basic concepts. The only assumption I have made about the reader's background is some
familiarity with basic linear algebra.
The secondary concern of this book is to provide an environment for exploring applications of the
Grassmann algebra. For general applications in higher dimensional spaces, computations by hand
in any algebra become tedious, indeed limiting, thus restricting the hypotheses that can be
explored. For this reason the book is integrated with a package for exploring Grassmann algebra,
called GrassmannAlgebra. GrassmannAlgebra has been developed in Mathematica. You can read
the book without using the package, or you can use the package to extend the examples in the text,
experiment with hypotheses, or explore your own interests.
Chapter 1: Introduction provides a brief preparatory overview of the book, introducing the seminal
concepts of each chapter, and solidifying them with simple examples. This chapter is designed to
give you a global appreciation with which better to understand the detail of the chapters which
follow. However, it is independent of those chapters, and may be read as far as your interest takes
you.
Chapters 2 to 6: The Exterior Product, The Regressive Product, Geometric Interpretations, The
Complement, and The Interior Product develop the basic theory. These form the essential core for
a working knowledge of Grassmann algebra and for the explorations in subsequent chapters. They
should be read sequentially.
Chapters 7 to 13: Exploring Screw Algebra, Mechanics, Grassmann Algebra, The Generalized
Product, Hypercomplex Algebra, Clifford Algebra, and Grassmann Matrix Algebra may be read
independently of each other, with the proviso that the chapters on hypercomplex and Clifford
algebra depend on the notion of the generalized product.
Some computational sections within the text use GrassmannAlgebra or Mathematica. Wherever
possible, explanation is provided so that the results are still intelligible without a knowledge of
Mathematica. Mathematica input/output dialogue is in indented Courier font, with the input in
bold. For example:
   GrassmannSimplify[1 + x + x ∧ x]
   1 + x
Any chapter or section whose title begins with Exploring … may be omitted on first reading
without unduly disturbing the flow of concept development. Exploratory chapters cover separate
applications. Exploratory sections treat topics that are: important for the justification of later
developments, but not essential for their understanding; of a specialized nature; or, results which
appear to be new and worth documenting.
As with most books these days, this book has its own website:
http://sites.google.com/site/grassmannalgebra
From this site you will be able to: email me, download a copy of the book or package, learn of any
updates to them, check for known errata or bugs, let me know of any errata or bugs you find, get
hints and FAQs for using the package, and (eventually) a link to where you can find a hard copy of
the book.
Acknowledgements
Finally I would like to acknowledge those who were instrumental in the evolution of this book:
Cecil Pengilley who originally encouraged me to look at applying Grassmann's work to
engineering; the University of Melbourne, Xerox Corporation, the University of Rochester and
Swinburne University of Technology which sheltered me while I pursued this interest; Janet Blagg
who peerlessly edited the text; my family who had to put up with my writing, and finally to
Stephen Wolfram who created Mathematica and provided me with a visiting scholar's grant to
work at Wolfram Research Institute in Champaign where I began developing the
GrassmannAlgebra package.
Above all however, I must acknowledge Hermann Grassmann. His contribution to mathematics
and science puts him among the great thinkers of the nineteenth century.
For I have every confidence that the effort I have applied to the science reported upon here, which has
occupied a considerable span of my lifetime and demanded the most intense exertions of my powers, is not to
be lost. … a time will come when it will be drawn forth from the dust of oblivion and the ideas laid down
here will bear fruit. … some day these ideas, even if in an altered form, will reappear and with the passage of
time will participate in a lively intellectual exchange. For truth is eternal, it is divine; and no phase in the
development of truth, however small the domain it embraces, can pass away without a trace. It remains even
if the garments in which feeble men clothe it fall into dust.
Hermann Grassmann
in the foreword to the Ausdehnungslehre of 1862
translated by Lloyd Kannenberg
1 Introduction
1.1 Background
As a case in point we may take the concept of force. It is well known that a force is not
satisfactorily represented by a (free) vector, yet contemporary practice is still to use a (free) vector
calculus for this task. The deficiency may be made up for by verbal appendages to the
mathematical statements: for example 'where the force f acts along the line through the point P'.
Such verbal appendages, being necessary, and yet not part of the calculus being used, indicate that
the calculus itself is not adequate to model force satisfactorily. In practice this inadequacy is coped
with in terms of a (free) vector calculus by the introduction of the concept of moment. The
conditions of equilibrium of a rigid body include a condition on the sum of the moments of the
forces about any point. The justification for this condition is not well treated in contemporary texts.
It will be shown later however that by representing a force correctly in terms of an element of the
Grassmann algebra, both force-vector and moment conditions for the equilibrium of a rigid body
are natural consequences of the algebraic processes alone.
Since the application of Grassmann algebra to mechanics was known during the nineteenth century
one might wonder why, with the 'progress of science', it is not currently used. Indeed the same
question might be asked with respect to its application in many other fields. To attempt to answer
these questions, a brief biography of Grassmann is included as an appendix. In brief, the scientific
world was probably not ready in the nineteenth century for the new ideas that Grassmann
proposed, and now, in the twenty-first century, seems only just becoming aware of their potential.
A line may be defined by the exterior product of any two distinct points on it. Similarly, a plane
may be defined by the exterior product of any three points in it, and so on for higher dimensions.
This independence with respect to the specific points chosen is an important and fundamental
property of the exterior product. Each time a higher dimensional object is required it is simply
created out of a lower dimensional one by multiplying by a new element in a new dimension.
Intersections of elements are also obtainable as products.
Simple elements of the Grassmann algebra may be interpreted as defining subspaces of a linear
space. The exterior product then becomes the operation for building higher dimensional subspaces
(higher order elements) from a set of lower dimensional independent subspaces. A second product
operation called the regressive product may then be defined for determining the common lower
dimensional subspaces of a set of higher dimensional non-independent subspaces.
Rather than a sub-algebra of the tensor algebra, it is perhaps more meaningful to view the
Grassmann algebra as a super-algebra of the three-dimensional vector algebra since both
commonly use invariant (component free) notations. The principal differences are that the
Grassmann algebra has a dual axiomatic structure, can treat higher order elements than vectors, can
differentiate between points and vectors, generalizes the notion of 'cross product', is independent of
dimension, and possesses the structure of a true algebra.
If vectors x1, x2, x3, … are linearly dependent, then it turns out that their exterior product is zero:
x1 ∧ x2 ∧ x3 ∧ … = 0. If they are independent, their exterior product is non-zero.
Conversely, if the exterior product of vectors x1 , x2 , x3 , … is zero, then the vectors are linearly
dependent. Thus the exterior product brings the critical notion of linear dependence into the realm
of direct algebraic manipulation.
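This equivalence can be checked numerically: the basis components of x1 ∧ … ∧ xm are the m×m minors of the matrix of the vectors' components, and they all vanish exactly when the vectors are dependent. The following sketch uses Python with NumPy rather than the GrassmannAlgebra package used in this book; the function name is our own.

```python
import numpy as np
from itertools import combinations

def wedge_coefficients(vectors):
    """Components of x1 ^ ... ^ xm on the basis m-blades: one m x m minor
    of the component matrix per choice of m columns."""
    m = len(vectors)
    A = np.array(vectors, dtype=float)          # m x n component matrix
    n = A.shape[1]
    return {cols: np.linalg.det(A[:, cols])
            for cols in combinations(range(n), m)}

# Dependent vectors: every coefficient (minor) vanishes.
dep = wedge_coefficients([[1, 2, 3], [2, 4, 6]])
print(all(abs(c) < 1e-12 for c in dep.values()))   # True

# Independent vectors: at least one coefficient is non-zero.
ind = wedge_coefficients([[1, 0, 0], [0, 1, 0]])
print(any(abs(c) > 1e-12 for c in ind.values()))   # True
```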
Although this might appear to be a relatively minor addition to linear algebra, we expect to
demonstrate in this book that nothing could be further from the truth: the consequences of being
able to model linear dependence with a product operation are far reaching, both in facilitating an
understanding of current results, and in the generation of new results for many of the algebras and
their entities used in science and engineering today. These include of course linear and multilinear
algebra, but also vector and tensor algebra, screw algebra, the complex numbers, quaternions,
octonions, Clifford algebra, and Pauli and Dirac algebra.
For applications to the physical world, however, the Grassmann algebra possesses a critical
capability that no other algebra possesses so directly: it can distinguish between points and vectors
and treat them as separate tensorial entities. Lines and planes are examples of higher order
constructs from points and vectors, which have both position and direction. A line can be
represented by the exterior product of any two points on it, or by any point on it with a vector
parallel to it.
A plane can be represented by the exterior product of any point on it with a bivector parallel to it,
any two points on it with a vector parallel to it, or any three points on it.
Finally, it should be noted that the Grassmann algebra subsumes all of real algebra, the exterior
product reducing in this case to the usual product operation amongst real numbers.
Here then is a geometric calculus par excellence. We hope you enjoy exploring it.
The fundamental defining characteristic of the exterior product is its anti-symmetry. That is, the
product changes sign if the order of the factors is reversed.
x ∧ y = −y ∧ x      1.1
From this we can easily show the equivalent relation, that the exterior product of a vector with
itself is zero.
x ∧ x = 0      1.2
The exterior product is associative, distributive, and behaves linearly as expected with scalars.
Suppose x and y are expressed in terms of basis elements e1, e2, e3:
x = a1 e1 + a2 e2 + a3 e3
y = b1 e1 + b2 e2 + b3 e3
Here, the ai and bi are of course scalars. Taking the exterior product of x and y and multiplying
out the product allows us to express the bivector x Ô y as a linear combination of basis bivectors.
x ∧ y = (a1 e1 + a2 e2 + a3 e3) ∧ (b1 e1 + b2 e2 + b3 e3)
The first simplification we can make is to put all basis bivectors of the form ei ∧ ei to zero. The
second simplification is to use the anti-symmetry of the product and collect the terms of the
bivectors which are not essentially different (that is, those that may differ only in the order of their
factors, and hence differ only by a sign). The product x ∧ y can then be written:
x ∧ y = (a1 b2 − a2 b1) e1 ∧ e2 + (a2 b3 − a3 b2) e2 ∧ e3 + (a3 b1 − a1 b3) e3 ∧ e1
The scalar factors appearing here are just those which would have appeared in the usual vector
cross product of x and y. However, there is an important difference. The exterior product
expression does not require the vector space to have a metric, while the cross product, because it
generates a vector orthogonal to x and y, necessarily assumes a metric. Furthermore, the exterior
product is valid for any number of vectors in spaces of arbitrary dimension, while the cross product
is necessarily confined to products of two vectors in a space of three dimensions.
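The correspondence with the cross product can be seen numerically. This is an illustrative Python/NumPy sketch, not the book's Mathematica dialogue; the component values are arbitrary.

```python
import numpy as np

a = np.array([2.0, 3.0, 5.0])   # components of x
b = np.array([7.0, 1.0, 4.0])   # components of y

# Coefficients of x ^ y on the basis bivectors e1^e2, e2^e3, e3^e1
xy = np.array([a[0]*b[1] - a[1]*b[0],
               a[1]*b[2] - a[2]*b[1],
               a[2]*b[0] - a[0]*b[2]])

# The cross product carries the same scalar factors, but attaches them
# to the vectors e3, e1, e2 -- which is what requires a metric.
cross = np.cross(a, b)
print(np.allclose(xy, cross[[2, 0, 1]]))   # True
```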
Now introduce a third vector z:
z = c1 e1 + c2 e2 + c3 e3
x ∧ y ∧ z =
((a1 b2 − a2 b1) e1 ∧ e2 + (a2 b3 − a3 b2) e2 ∧ e3 + (a3 b1 − a1 b3) e3 ∧ e1) ∧
(c1 e1 + c2 e2 + c3 e3)
Adopting the same simplification procedures as before we obtain the trivector x Ô y Ô z expressed
in basis form.
x ∧ y ∧ z =
(a1 b2 c3 − a3 b2 c1 + a2 b3 c1 + a3 b1 c2 − a1 b3 c2 − a2 b1 c3) e1 ∧ e2 ∧ e3
A trivector in a space of three dimensions has just one component. Its coefficient is the
determinant of the coefficients of the original three vectors. Clearly, if these three vectors had been
linearly dependent, this determinant would have been zero. In a metric space, this coefficient
would be proportional to the volume of the parallelepiped formed by the vectors x, y, and z. Hence
the geometric interpretation of the algebraic result: if x, y, and z lie in a common planar direction,
that is, if they are dependent, then the volume of the parallelepiped they define is zero.
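Both observations are easy to verify numerically. A sketch in Python with NumPy (rather than the book's GrassmannAlgebra package), with illustrative component values:

```python
import numpy as np

def trivector_coefficient(x, y, z):
    """Coefficient of e1^e2^e3 in x ^ y ^ z, written out term by term."""
    a, b, c = x, y, z
    return (a[0]*b[1]*c[2] - a[2]*b[1]*c[0] + a[1]*b[2]*c[0]
            + a[2]*b[0]*c[1] - a[0]*b[2]*c[1] - a[1]*b[0]*c[2])

x = np.array([1.0, 2.0, 0.0])
y = np.array([0.0, 1.0, 3.0])
z = np.array([2.0, 0.0, 1.0])

# The single component equals the determinant of the component matrix.
print(np.isclose(trivector_coefficient(x, y, z),
                 np.linalg.det(np.array([x, y, z]))))   # True

# Dependent vectors (here z = x + y) give a zero trivector: zero volume.
print(np.isclose(trivector_coefficient(x, y, x + y), 0.0))   # True
```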
We see here also that the exterior product begins to give geometric meaning to the often
inscrutable operations of the algebra of determinants. In fact we shall see that all the operations of
determinants are straightforward consequences of the properties of the exterior product.
In three-dimensional metric vector algebra, the vanishing of the scalar triple product of three
vectors is often used as a criterion of their linear dependence, whereas in fact the vanishing of their
exterior product (valid also in a non-metric space) would suffice. It is interesting to note that the
notation for the scalar triple product, or 'box' product, is Grassmann's original notation, viz [x y z].
Finally, we can see that the exterior product of more than three vectors in a three-dimensional
space will always be zero, since they must be dependent.
The word 'vector' however, is in current practice used in two distinct ways. The first and traditional
use endows the vector with its well-known geometric properties of direction, sense, and (possibly)
magnitude. In the second and more recent, the term vector may refer to any element of a linear
space, even though the space is devoid of geometric context.
In this book, we adopt the traditional practice and use the term vector only when we intend it to
have its traditional geometric interpretation of a (free) arrow-like entity. When referring to an
element of a linear space which we are not specifically interpreting geometrically, we simply use
the term element. The exterior product of m 1-elements of a linear space will thus be referred to as
an m-element. (In the GrassmannAlgebra package however, we have had to depart somewhat from
this convention in the interests of common usage: symbols representing 1-elements are called
VectorSymbols).
In Grassmann algebra however, a further dimension arises. By introducing a new element called
the Origin into the basis of a vector space which has the interpretation of a point, we are able to
distinguish two types of 1-element: vectors and points. This simple distinction leads to a
sophisticated and powerful tableau of free (involving only vectors) and bound (involving points)
entities with which to model geometric and physical systems.
Science and engineering make use of mathematics by endowing its constructs with geometric or
physical interpretations. We will use the term entity to refer to an element which we specifically
wish to endow with a geometric or physical interpretation. For example we would say that
(geometric) points and vectors and (physical) positions and directions are 1-entities because they
are represented by 1-elements, while (geometric) bound vectors and bivectors and (physical) forces
and angular momenta are 2-entities because they are represented by 2-elements. Points, vectors,
bound vectors and bivectors are examples of geometric entities. Positions, directions, forces and
momenta are examples of physical entities.
Points, lines, planes and hyperplanes may also be conveniently considered for computational
purposes as geometric entities, or we may also define them in the more common way as a set of
points: the set of all points in the entity. (A 1-element is in an m-element if their exterior product is
zero.) We call such a set of points a geometric object. We interpret elements of a linear space
geometrically or physically, while we represent geometric or physical entities by elements of a
linear space.
An m-element may be denoted by a symbol underscripted with the value m. For example:
a_4 = x ∧ y ∧ u ∧ v
For simplicity, however, we do not generally denote 1-elements with an underscripted '1'.
The grade of a scalar is 0. We shall see that this is a natural consequence of the exterior product
axioms formulated for elements of general grade.
The dimension of the underlying linear space of 1-elements is denoted by n. Elements of grade
greater than n are zero.
… ∧ x ∧ y ∧ … = −(… ∧ y ∧ x ∧ …)
In fact, interchanging the order of any two non-adjacent 1-element factors will also change the sign
of the product.
… ∧ x ∧ … ∧ y ∧ … = −(… ∧ y ∧ … ∧ x ∧ …)
To see why this is so, suppose the number of factors between x and y is m. First move y to the
immediate left of x. This will cause m+1 changes of sign. Then move x to the position that y
vacated. This will cause m changes of sign. In all there will be 2m+1 changes of sign, equivalent to
just one sign change.
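The 2m+1 count can be checked directly by treating the factors as a permutation and counting adjacent transpositions; a small Python sketch (not from the book):

```python
def permutation_sign(perm):
    """Sign of a permutation, computed by counting the adjacent
    transpositions a bubble sort needs to restore ascending order."""
    perm, swaps = list(perm), 0
    for _ in range(len(perm)):
        for j in range(len(perm) - 1):
            if perm[j] > perm[j + 1]:
                perm[j], perm[j + 1] = perm[j + 1], perm[j]
                swaps += 1
    return (-1) ** swaps

# Swapping the first and last of m+2 factors (m factors in between)
# costs 2m+1 adjacent transpositions: always odd, so one sign change.
for m in range(6):
    factors = list(range(m + 2))
    factors[0], factors[-1] = factors[-1], factors[0]
    print(m, permutation_sign(factors))   # always -1
```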
Note that it is only elements of odd grade that anti-commute. If, in a product of two elements, at
least one of them is of even grade, then the elements commute. For example, 2-elements commute
with all other elements.
(x ∧ y) ∧ z = z ∧ (x ∧ y)
(a α_m) ∧ β_k = α_m ∧ (a β_k) = a (α_m ∧ β_k)      1.5
• An exterior product is anti-commutative whenever the grades of the factors are both odd.
α_m ∧ β_k = (−1)^(m k) β_k ∧ α_m      1.6
• The exterior product is both left and right distributive under addition.
(α_m + β_m) ∧ γ_r = α_m ∧ γ_r + β_m ∧ γ_r      α_m ∧ (β_r + γ_r) = α_m ∧ β_r + α_m ∧ γ_r      1.7
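The sign behaviour of these axioms can be exercised with a minimal model of the exterior product on basis blades. This is an illustrative Python sketch under our own representation (a dictionary from sorted index tuples to coefficients), not the book's GrassmannAlgebra package.

```python
def sort_sign(indices):
    """Bubble-sort basis indices into ascending order, tracking the sign
    picked up from adjacent swaps of 1-element factors. A repeated index
    means a repeated factor, so the blade is zero."""
    idx, sign = list(indices), 1
    for _ in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    if len(set(idx)) != len(idx):
        return None, 0
    return tuple(idx), sign

def wedge(a, b):
    """Exterior product of elements written as {basis-index-tuple: coefficient}."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            key, sign = sort_sign(ia + ib)
            if key is not None:
                out[key] = out.get(key, 0) + sign * ca * cb
    return {k: c for k, c in out.items() if c != 0}

x = {(1,): 1}                      # the 1-element e1
y = {(2,): 1}                      # e2
B = {(2, 3): 1}                    # the 2-element e2 ^ e3

print(wedge(x, y), wedge(y, x))    # {(1, 2): 1} {(1, 2): -1} -- odd grades anticommute
print(wedge(x, B) == wedge(B, x))  # True -- grades 1 and 2: (-1)^(1*2) = +1
print(wedge(x, x))                 # {}   -- a repeated factor gives zero
```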
The underlying beauty of the Ausdehnungslehre is due to this symmetry, which in turn is due to the
fact that linear spaces of m-elements and linear spaces of (n−m)-elements have the same dimension.
This too is the key to the duality of the exterior and regressive products. For example, the exterior
product of m 1-elements is an m-element. The dual to this is that the regressive product of m
(n−1)-elements is an (n−m)-element. This duality has the same form as that in a Boolean algebra: if the
exterior product corresponds to a type of 'union' then the regressive product corresponds to a type
of 'intersection'.
It is this duality that permits the definition of complement, and hence interior, inner and scalar
products defined in later chapters. To underscore this duality we adopt here the ∨ ('vee') symbol
for the regressive product operation.
Thus we can define the space of x ∧ y as the space of all 1-elements z such that x ∧ y ∧ z = 0. We
extend this to more general elements by defining the space of a simple element α_m as the space of
all 1-elements β such that α_m ∧ β = 0.
We will also need the notion of congruence. Two elements are congruent if one is a scalar factor
times the other. For example x and 2x are congruent; x ∧ y and −x ∧ y are congruent. Congruent
elements define the same subspace. We denote congruence by the symbol ≡. For two elements,
congruence is equivalent to linear dependence. For more than two elements however, although the
elements may be linearly dependent, they are not necessarily congruent. The following concepts of
union and intersection only make sense up to congruence.
The regressive product enables us to calculate intersections of elements. Thus it is the product
operation correctly dual to the exterior product. In fact, in Chapter 3: The Regressive Product, we
will develop the axiom set for the regressive product from that of the exterior product, and also
show that we can reverse the procedure.
(a α_m) ∨ β_k = α_m ∨ (a β_k) = a (α_m ∨ β_k)      1.10
• A regressive product is anti-commutative whenever the complementary grades of the factors are
both odd.
• The regressive product is both left and right distributive under addition.
(α_m + β_m) ∨ γ_r = α_m ∨ γ_r + β_m ∨ γ_r      α_m ∨ (β_r + γ_r) = α_m ∨ β_r + α_m ∨ γ_r      1.12
To make this connection we need to introduce a further axiom which we call the Common Factor
Axiom. The form of the Common Factor Axiom may seem somewhat arbitrary, but it is in fact one
of the simplest forms which enable intersections to be calculated. This can be seen in the following
application of the axiom to a vector 3-space.
Suppose x, y, and z are three independent vectors in a vector 3-space. The Common Factor Axiom
says that the regressive product of the two bivectors x ∧ z and y ∧ z may also be expressed as the
regressive product of a 3-element x ∧ y ∧ z with their common factor z.
(x ∧ z) ∨ (y ∧ z) = (x ∧ y ∧ z) ∨ z
Since the space is 3-dimensional, we can write any 3-element such as x ∧ y ∧ z as a scalar factor
(a, say) times the unit 3-element 1_3 (introduced in axiom 1.9).
(x ∧ y ∧ z) ∨ z = (a 1_3) ∨ z = a z
This then gives us the axiomatic structure to say that the regressive product of two elements
possessing an element in common is congruent to that element. Another way of saying this is: the
regressive product of two elements defines their intersection.
(x ∧ z) ∨ (y ∧ z) ≡ z
Of course this is just a simple case. More generally, let α_m, β_k, and γ_p be simple elements with
m + k + p = n, where n is the dimension of the space. Then the Common Factor Axiom states that
(α_m ∧ γ_p) ∨ (β_k ∧ γ_p) = (α_m ∧ β_k ∧ γ_p) ∨ γ_p
There are many rearrangements and special cases of this formula which we will encounter in later
chapters. For example, when p is zero, the Common Factor Axiom shows that the regressive
product of an m-element with an (n−m)-element is a scalar which can be expressed in the
alternative form of a regressive product with the unit 1.
α_m ∨ β_(n−m) = (α_m ∧ β_(n−m)) ∨ 1
The Common Factor Axiom allows us to prove a particularly useful result: the Common Factor
Theorem. The Common Factor Theorem expresses any regressive product in terms of exterior
products alone. This of course enables us to calculate intersections of arbitrary elements.
Most importantly however we will see later that the Common Factor Theorem has a counterpart
expressed in terms of exterior and interior products, called the Interior Common Factor Theorem.
This forms the principal expansion theorem for interior products and from which we can derive all
the important theorems relating exterior and interior products.
The Interior Common Factor Theorem, and the Common Factor Theorem upon which it is based,
are possibly the most important theorems in the Grassmann algebra.
In the next section we informally apply the Common Factor Theorem to obtain the intersection of
two bivectors in a three-dimensional space.
z = (x ∧ y) ∨ (u ∧ v)
We will see in Chapter 3: The Regressive Product that this can be expanded by the Common
Factor Theorem to give
(x ∧ y) ∨ (u ∧ v) = (x ∧ y ∧ v) ∨ u − (x ∧ y ∧ u) ∨ v    1.14
But we have already seen in Section 1.2 that in a 3-space, the exterior product of three vectors will,
in any given basis, give the basis trivector, multiplied by the determinant of the components of the
vectors making up the trivector.
Additionally, we note that the regressive product (intersection) of a vector with an element like the
basis trivector completely containing the vector, will just give an element congruent to itself. Thus
the regressive product leads us to an explicit expression for an intersection of the two bivectors.
z ≅ Det[x, y, v] u − Det[x, y, u] v
Here Det[x,y,v] is the determinant of the components of x, y, and v. We could also have
obtained an equivalent formula expressing z in terms of x and y instead of u and v by simply
interchanging the order of the bivector factors in the original regressive product.
Note carefully however, that this formula only finds the common factor up to congruence, because
until we determine an explicit expression for the unit n-element 1_n in terms of basis elements
(which we do by introducing the complement operation in Chapter 5), we cannot usefully use
axiom 1.9 above. Nevertheless, this is not to be seen as a restriction. Rather, as we shall see in the
next section, it leads to interesting insights as to what can be accomplished when we work in spaces
without a metric (projective spaces).
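The congruence z ≅ Det[x, y, v] u − Det[x, y, u] v can be checked numerically. The following is an illustrative sketch in Python rather than in the book's GrassmannAlgebra package; the helper names (det3, scale, sub) and sample vectors are our own.

```python
# Check z ≅ Det[x,y,v] u - Det[x,y,u] v for two bivectors in 3-space.

def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

def scale(s, v):
    return [s * vi for vi in v]

def sub(a, b):
    return [ai - bi for ai, bi in zip(a, b)]

# Two bivectors x∧y and u∧v spanning intersecting planes
x, y = [1, 0, 0], [0, 1, 0]          # the e1-e2 plane
u, v = [0, 1, 0], [0, 0, 1]          # the e2-e3 plane

z = sub(scale(det3(x, y, v), u), scale(det3(x, y, u), v))

# z must be dependent on {x, y} and on {u, v}: it lies in both planes
print(z, det3(x, y, z), det3(u, v, z))   # both determinants are zero
```

Here z comes out proportional to e₂, the common direction of the two coordinate planes, confirming that the regressive product computes the intersection.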
As discussed in Section 1.2, the term 'vector' is often used to refer to an element of a linear space
with no intention of implying an interpretation. In this book however, we reserve the term for a
particular type of geometric interpretation: that associated with representing direction.
But an element of a linear space may also be interpreted as a point. Of course vectors may also be
used to represent points, but only relative to another given point (the information for which is not
in the vector). These vectors are properly called position vectors. Common practice often omits
explicit reference to this other given point, or perhaps may refer to it verbally. Points can be
represented satisfactorily in many cases by position vectors alone, but when both position and
direction are required in the same element we must distinguish mathematically between the two.
To describe true position in a three-dimensional physical space, a linear space of four dimensions
is required, one for an origin point, and the other three for the three spatial directions. Since the
exterior product is independent of the dimension of the underlying space, it can deal satisfactorily
with points and vectors together. The usual three-dimensional vector algebra however cannot.
Suppose x, y, and z are elements of a linear space interpreted as vectors. Vectors always have a
direction and sense. But only when the linear space has a metric do they also have a magnitude.
Since to this stage we have not yet introduced the notion of metric, we will only be discussing
interpretations and applications which do not require elements (other than congruent elements) to
be commensurable.
Of course, vectors may be summed in a space with no metric: the geometric interpretation of this
operation being the standard 'triangle rule'.
As we have seen in the previous sections, multiplying vectors together with the exterior product
leads to higher order entities like bivectors and trivectors. These entities are interpreted as
representing directions and their higher dimensional analogues.
P = 𝒪 + x    1.15
Q = P + y = (𝒪 + x) + y = 𝒪 + (x + y)
[Figure: adding the vector y to the point P = 𝒪 + x carries it to the point Q, with x + y running from the origin 𝒪 to Q.]
Q − P = (𝒪 + x + y) − (𝒪 + x) = y
The sum of two points gives the point halfway between them with a weight of 2.
P₁ + P₂ = (𝒪 + x₁) + (𝒪 + x₂) = 2 (𝒪 + (x₁ + x₂)/2) = 2 P_c
The sum of two points is a point of weight 2 located mid-way between them
◆ Historical Note
The point was originally considered the fundamental geometric entity of interest. However the difference of
points was clearly no longer a point, since reference to the origin had been lost. Sir William Rowan Hamilton
coined the term 'vector' for this new entity since adding a vector to a point 'carried' the point to a new point.
Determining a mass-centre
A classic application of a sum of weighted points is in the determination of a centre of mass.
Consider a collection of points Pi weighted with masses mi . The sum of the weighted points gives
the point PG at the mass-centre (centre of gravity) weighted with the total mass M.
To show this, first add the weighted points and collect the terms involving the origin.
P_G = 𝒪 + (m₁ x₁ + m₂ x₂ + m₃ x₃ + ⋯) / (m₁ + m₂ + m₃ + ⋯)
• To fix ideas, we take a simple example demonstrating that centres of mass can be accumulated
in any order. Suppose we have three points P, Q, and R with masses p, q, and r. The centres of
mass taken two at a time are given by
(p + q) G_PQ = p P + q Q
(q + r) G_QR = q Q + r R
(p + r) G_PR = p P + r R
Now take the centre of mass of each of these with the other weighted point. Clearly, the three sums
will be equal.
(p + q) G_PQ + r R = (q + r) G_QR + p P = (p + r) G_PR + q Q = p P + q Q + r R = (p + q + r) G_PQR
[Figure: the triangle P, Q, R showing the pairwise mass-centres G_PQ, G_QR, G_PR and the common mass-centre G_PQR.]
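The accumulation of mass-centres can be checked numerically. A minimal sketch in Python (not the book's GrassmannAlgebra package), representing a weighted point m P = m 𝒪 + m x as the pair (m, m x); the helper names wpoint, wadd, and centre are ours.

```python
# Weighted points as pairs (weight, weighted position vector): m P = (m, m x).

def wpoint(m, x):
    return (m, [m * xi for xi in x])

def wadd(A, B):
    """Sum of weighted points: weights add, weighted positions add."""
    return (A[0] + B[0], [a + b for a, b in zip(A[1], B[1])])

def centre(A):
    """Recover the unit-weight point of a weighted point."""
    m, mx = A
    return [v / m for v in mx]

p, q, r = 2.0, 3.0, 5.0
P, Q, R = [0.0, 0.0], [4.0, 0.0], [0.0, 4.0]

# Accumulate the mass-centre two points at a time, in different orders
G1 = wadd(wadd(wpoint(p, P), wpoint(q, Q)), wpoint(r, R))
G2 = wadd(wadd(wpoint(q, Q), wpoint(r, R)), wpoint(p, P))

print(G1[0], centre(G1))   # total mass 10.0, mass-centre [1.2, 2.0]
assert G1 == G2            # the order of accumulation is immaterial
```

The result is the point G_PQR at the mass-centre, weighted with the total mass p + q + r, regardless of the order in which the pairwise centres are formed.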
In practice, we usually compute with lines by computing with their bound vectors. For example, to
get the intersection of two lines in the plane, we take the regressive product of their bound vectors.
By abuse of terminology we may therefore often refer to a bound vector as a line.
A bound vector can be defined by the exterior product of a point and a vector, or of two points. In
the first case we represent the line L through the point P in the direction of x by any entity
congruent to the exterior product of P and x. In the second case we can introduce Q as P + x to get
the same result.
L ≅ P ∧ x ≅ P ∧ (Q − P) ≅ P ∧ Q
[Figure] Two different depictions of a bound vector in its line: L ≅ P ∧ x and L ≅ P ∧ Q.
A line is independent of the specific points used to define it. To see this, consider any other point R
on the line. Since R is on the line it can be represented by the sum of P with an arbitrary scalar
multiple of the vector x:
L ≅ R ∧ x = (P + a x) ∧ x = P ∧ x
A line may also be represented by the exterior product of any two points on it.
L ≅ P ∧ R = P ∧ (P + a x) = a P ∧ x
Note that the bound vectors P ∧ x and P ∧ R are different, but congruent. They therefore define the
same line.
[Figure: a second point R = P + a x in the line of the bound vector P ∧ x.]
These concepts extend naturally to higher dimensional constructs. For example a plane P may be
represented by the exterior product of a single point on it together with a bivector in the direction of
the plane, any two points on it together with a vector in it (not parallel to the line joining the
points), or any three points on it (not in the same line).
To build higher dimensional geometric entities from lower dimensional ones, we simply take their
exterior product. For example we can build a line by taking the exterior product of a point with any
point or vector exterior to it. Or we can build a plane by taking the exterior product of a line with
any point or vector exterior to it.
L₁ ≅ P₁ ∧ x₁ = (𝒪 + n₁) ∧ x₁ = 𝒪 ∧ x₁ + n₁ ∧ x₁
L₂ ≅ P₂ ∧ x₂ = (𝒪 + n₂) ∧ x₂ = 𝒪 ∧ x₂ + n₂ ∧ x₂
[Figure: the lines L₁ and L₂ through points P₁ and P₂ with directions x₁ and x₂.]
The point of intersection of L₁ and L₂ is the point P given by (congruent to) the regressive product
of the lines L₁ and L₂.
P ≅ L₁ ∨ L₂ = (𝒪 ∧ x₁ + n₁ ∧ x₁) ∨ (𝒪 ∧ x₂ + n₂ ∧ x₂)
The Common Factor Theorem for the regressive product of elements of the form
(x ∧ y) ∨ (u ∧ v) in a linear space of three dimensions was introduced as formula 1.14 in Section
1.3 as:

(x ∧ y) ∨ (u ∧ v) = (x ∧ y ∧ v) ∨ u − (x ∧ y ∧ u) ∨ v
Since a bound 2-space is three dimensional (its basis contains three elements - the origin and two
vectors), we can use this formula to expand each of the terms in P:
The term 𝒪 ∧ x₁ ∧ 𝒪 is zero because of the exterior product of repeated factors. The four terms
involving the exterior product of three vectors, for example n₁ ∧ x₁ ∧ x₂, are also zero since any
three vectors in a two-dimensional vector space must be dependent (the vector space is 2-
dimensional since it is the vector sub-space of a bound 2-space). Hence we can express the point of
intersection as congruent to the weighted point P:
If we express the vectors in terms of a basis, e₁ and e₂ say, we can reduce this formula (after some
manipulation) to:
P ≅ 𝒪 + (Det[n₂, x₂] / Det[x₁, x₂]) x₁ − (Det[n₁, x₁] / Det[x₁, x₂]) x₂
Here, the determinants are the determinants of the coefficients of the vectors in the given basis.
To verify that P does indeed lie on both the lines L₁ and L₂, we only need to carry out the
straightforward verification that the products P ∧ L₁ and P ∧ L₂ are both zero.
Although in this simple case this approach is certainly more complex than the standard algebraic
approach in the plane, its interest lies in the facts that it is immediately generalizable to
intersections of any geometric objects in spaces of any number of dimensions, and that it leads to
easily computable solutions.
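The intersection formula above is indeed easily computable. A sketch in Python (not the book's GrassmannAlgebra package); det2 and intersect are our own helper names, and line i is taken to pass through the point with position vector nᵢ in the direction xᵢ.

```python
# Intersection of two lines in the plane via
# P ≅ 𝒪 + (Det[n2,x2] x1 - Det[n1,x1] x2) / Det[x1,x2].

def det2(a, b):
    """Determinant of the 2x2 matrix with rows a, b."""
    return a[0] * b[1] - a[1] * b[0]

def intersect(n1, x1, n2, x2):
    d = det2(x1, x2)          # zero iff the lines are parallel
    return [(det2(n2, x2) * x1[i] - det2(n1, x1) * x2[i]) / d
            for i in range(2)]

# The horizontal line y = 1 and the vertical line x = 1
n1, x1 = [0.0, 1.0], [1.0, 0.0]
n2, x2 = [1.0, 0.0], [0.0, 1.0]

p = intersect(n1, x1, n2, x2)
print(p)    # the point (1, 1)
```

The returned coordinates are those of the intersection point relative to the origin 𝒪, the weight having been normalized away by the division by Det[x₁, x₂].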
Consider a linear space of dimension n with basis e₁, e₂, …, eₙ. The set of all the essentially
different m-element products of these basis elements forms the basis of another linear space, but
this time of dimension Binomial[n, m]. For example, when n is 3, the linear space of 2-elements has three
elements in its basis: e₁ ∧ e₂, e₁ ∧ e₃, e₂ ∧ e₃.
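The count of essentially different basis m-elements can be enumerated directly; this short Python sketch (our own, not the book's package) lists the strictly increasing index combinations.

```python
# The m-element basis as strictly increasing index combinations;
# its size is the binomial coefficient Binomial[n, m] = C(n, m).
from itertools import combinations
from math import comb

n, m = 3, 2
basis = [tuple(f"e{i}" for i in c) for c in combinations(range(1, n + 1), m)]
print(basis)                                # the three basis 2-elements
print(len(basis) == comb(n, m) == comb(n, n - m))   # True
```

The final check also illustrates the point made next: since C(n, m) = C(n, n−m), the spaces of m-elements and (n−m)-elements have the same dimension.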
The anti-symmetric nature of the exterior product means that there are just as many basis elements
in the linear space of (n−m)-elements as there are in the linear space of m-elements. Because these
linear spaces have the same dimension, we can set up a correspondence between m-elements and
(n−m)-elements. That is, given any m-element, we can define its corresponding (n−m)-element. The
(n−m)-element is called the complement of the m-element. Normally this correspondence is set up
between basis elements and extended to all other elements by linearity.
ē₁ = e₂ ∧ e₃  ⟹  e₁ ∧ ē₁ = e₁ ∧ e₂ ∧ e₃
ē₂ = e₃ ∧ e₁  ⟹  e₂ ∧ ē₂ = e₁ ∧ e₂ ∧ e₃
ē₃ = e₁ ∧ e₂  ⟹  e₃ ∧ ē₃ = e₁ ∧ e₂ ∧ e₃
The Euclidean complement is the simplest type of complement and defines a Euclidean metric, that
is, where the basis elements are mutually orthonormal. This was the only type of complement
considered by Grassmann. In Chapter 5: The Complement, we will show however, that
Grassmann's concept of complement is easily extended to more general metrics. Note carefully that
we will be using the notion of complement to define the notions of orthogonality and metric, and
until we do this, we will not be relying on their existence in Chapters 2, 3, and 4.
With the definitions above, we can now proceed to define the Euclidean complement of a general
1-element x = a e₁ + b e₂ + c e₃. To do this we need to endow the complement operation with the
property of linearity so that it has meaning for linear combinations of basis elements.
x̄ = (a e₁ + b e₂ + c e₃)¯ = a ē₁ + b ē₂ + c ē₃ = a e₂ ∧ e₃ + b e₃ ∧ e₁ + c e₁ ∧ e₂
In Section 1.6 we will see that the complement of an element is orthogonal to the element, because
we will define the interior product (and hence inner and scalar products) using the complement.
We can start to see how the scalar product of a 1-element with itself might arise by expanding the
product x ∧ x̄:
x ∧ x̄ = (a e₁ + b e₂ + c e₃) ∧ (a e₂ ∧ e₃ + b e₃ ∧ e₁ + c e₁ ∧ e₂)
      = (a² + b² + c²) e₁ ∧ e₂ ∧ e₃
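This expansion can be checked concretely. In the sketch below (Python, our own helpers, not the book's package) a 2-element is stored by its components on the basis (e₂∧e₃, e₃∧e₁, e₁∧e₂), on which the Euclidean complement of x = a e₁ + b e₂ + c e₃ has the same components (a, b, c).

```python
# x ∧ x̄ reduces to a dot product times the unit 3-element.

def complement1(x):
    """Components of the complement of a 1-element,
    on the 2-element basis (e2∧e3, e3∧e1, e1∧e2)."""
    return list(x)

def wedge_1_2(x, Y):
    """Coefficient of e1∧e2∧e3 in x ∧ Y, Y given on the basis above.
    Only matching terms survive; the rest repeat a basis factor."""
    return x[0] * Y[0] + x[1] * Y[1] + x[2] * Y[2]

x = [2, -1, 3]
coeff = wedge_1_2(x, complement1(x))
print(coeff)    # a² + b² + c² = 4 + 1 + 9 = 14
```

The coefficient of e₁∧e₂∧e₃ is exactly a² + b² + c², foreshadowing the scalar product of x with itself.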
The complement of the basis 2-elements can be defined in a manner analogous to that for 1-
elements, that is, such that the exterior product of a basis 2-element with its complement is equal to
the basis 3-element. The complement of a 2-element in 3-space is therefore a 1-element.
(e₂ ∧ e₃)¯ = e₁  ⟹  (e₂ ∧ e₃) ∧ (e₂ ∧ e₃)¯ = e₂ ∧ e₃ ∧ e₁ = e₁ ∧ e₂ ∧ e₃
(e₃ ∧ e₁)¯ = e₂  ⟹  (e₃ ∧ e₁) ∧ (e₃ ∧ e₁)¯ = e₃ ∧ e₁ ∧ e₂ = e₁ ∧ e₂ ∧ e₃
(e₁ ∧ e₂)¯ = e₃  ⟹  (e₁ ∧ e₂) ∧ (e₁ ∧ e₂)¯ = e₁ ∧ e₂ ∧ e₃
1̄ = e₁ ∧ e₂ ∧ e₃
(e₁ ∧ e₂ ∧ e₃)¯ = 1
Summarizing these results for the Euclidean complement of the basis elements of a Grassmann
algebra in three dimensions shows the essential symmetry of the complement operation.
Complement Palette
  Basis           Complement
  1               e₁ ∧ e₂ ∧ e₃
  e₁              e₂ ∧ e₃
  e₂              −(e₁ ∧ e₃)
  e₃              e₁ ∧ e₂
  e₁ ∧ e₂         e₃
  e₁ ∧ e₃         −e₂
  e₂ ∧ e₃         e₁
  e₁ ∧ e₂ ∧ e₃    1
x = a e₁ + b e₂ + c e₃

x̄ = (a e₁ + b e₂ + c e₃)¯ = a ē₁ + b ē₂ + c ē₃ = a e₂ ∧ e₃ + b e₃ ∧ e₁ + c e₁ ∧ e₂

x̿ = (a e₂ ∧ e₃ + b e₃ ∧ e₁ + c e₁ ∧ e₂)¯ = a (e₂ ∧ e₃)¯ + b (e₃ ∧ e₁)¯ + c (e₁ ∧ e₂)¯ = a e₁ + b e₂ + c e₃

⟹ x̿ = x
More generally, as we shall see in Chapter 5: The Complement, we can show that the complement
of the complement of any element is the element itself, apart from a possible sign.
This result is independent of the correspondence that we set up between the m-elements and (n−m)-
elements of the space, except that the correspondence must be symmetric. This is equivalent to the
requirement that the metric tensor (and inner product) be symmetric.
Whereas in a 3-space, the complement of the complement of a 1-element is the element itself, in a
2-space it turns out to be the negative of the element.
Complement Palette
  Basis       Complement
  1           e₁ ∧ e₂
  e₁          e₂
  e₂          −e₁
  e₁ ∧ e₂     1
Although this sign dependence on the dimension of the space and grade of the element might
appear arbitrary, it turns out to capture some essential properties of the elements in their spaces to
which we have become accustomed. For example in 2-space the complement x̄ of a vector x
rotates it anticlockwise by π/2. Taking the complement x̿ of the vector x̄ rotates it anticlockwise by a
further π/2 into −x.
x = a e₁ + b e₂
x̄ = (a e₁ + b e₂)¯ = a e₂ − b e₁

x̿ = (a e₂ − b e₁)¯ = −a e₁ − b e₂

⟹ x̿ = −x
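This rotation behaviour is easy to verify numerically. A short Python sketch (our own helper names, not the book's package): on components the 2-space complement acts as (a, b) → (−b, a).

```python
# The 2-space Euclidean complement as an anticlockwise rotation by pi/2;
# applying it twice negates the vector.
import math

def complement2d(v):
    a, b = v
    return (-b, a)        # a e1 + b e2  →  a e2 - b e1

def rotate(v, theta):
    a, b = v
    return (a * math.cos(theta) - b * math.sin(theta),
            a * math.sin(theta) + b * math.cos(theta))

x = (3.0, 1.0)
xbar = complement2d(x)

# The complement agrees (to rounding) with a rotation by pi/2
assert all(abs(u - w) < 1e-12 for u, w in zip(xbar, rotate(x, math.pi / 2)))
# The double complement is -x
assert complement2d(xbar) == (-3.0, -1.0)
print(xbar)
```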
However, although this may be derived in this simple case, to develop the Grassmann algebra for
general metrics, we will assume this relationship holds independent of the metric. It thus takes on
the mantle of an axiom.
This axiom, which we call the Complement Axiom, is the quintessential formula of the Grassmann
algebra. It expresses the duality of its two fundamental operations on elements and their
complements. We note the formal similarity to de Morgan's law in Boolean algebra.
We will also see that adopting this formula for general complements will enable us to compute the
complement of any element of a space once we have defined the complements of its basis 1-
elements.
• As an example consider two vectors x and y in 3-space, and their exterior product. The Complement
Axiom becomes

(x ∧ y)¯ = x̄ ∨ ȳ
The complement of the bivector x Ô y is a vector. The complements of x and y are bivectors. The
regressive product of these two bivectors is a vector. The following graphic depicts this
relationship, and the orthogonality of the elements. We discuss orthogonality in the next section.
The interior product of an element α_m with an element β_k is denoted α_m ⨼ β_k and defined to be the
regressive product of α_m with the complement of β_k:

α_m ⨼ β_k = α_m ∨ β̄_k    1.20
The grade of an interior product α_m ⨼ β_k may be seen from the definition to be m + (n−k) − n = m − k.
Note that while the grade of a regressive product depends on the dimension of the underlying linear
space, the grade of an interior product is independent of the dimension of the underlying space.
This independence underpins the important role the interior product plays in the Grassmann
algebra - the exterior product sums grades while the interior product differences them. However,
grades may be arbitrarily summed, but not arbitrarily differenced, since there are no elements of
negative grade.
Thus the order of factors in an interior product is important. When the grade of the first element is
less than that of the second element, the result is necessarily zero.
A special case of the interior or inner product in which the two factors of the product are of grade 1
is called a scalar product to conform to the common usage.
In Chapter 6 we will show that the inner product is symmetric, that is, the order of the factors is
immaterial.
α_m ⨼ β_m = β_m ⨼ α_m    1.21
When the inner product is between simple elements it can be expressed as the determinant of the
array of scalar products according to the following formula:

(α₁ ∧ α₂ ∧ ⋯ ∧ α_m) ⨼ (β₁ ∧ β₂ ∧ ⋯ ∧ β_m) = Det[αᵢ ⨼ βⱼ]

For example, the inner product of two 2-elements α₁ ∧ α₂ and β₁ ∧ β₂ may be written:

(α₁ ∧ α₂) ⨼ (β₁ ∧ β₂) = (α₁ ⨼ β₁)(α₂ ⨼ β₂) − (α₁ ⨼ β₂)(α₂ ⨼ β₁)
α_m ⨼ (β₁ ∧ β₂ ∧ … ∧ β_k) = α_m ∨ (β₁ ∧ β₂ ∧ … ∧ β_k)¯ =
α_m ∨ β̄₁ ∨ β̄₂ ∨ … ∨ β̄_k = ((α_m ⨼ β₁) ⨼ β₂) ⨼ … ⨼ β_k
α_m ⨼ (β₁ ∧ β₂ ∧ … ∧ β_k) = (α_m ⨼ β₁) ⨼ (β₂ ∧ … ∧ β_k) = ((α_m ⨼ β₁) ⨼ β₂) ⨼ … ⨼ β_k    1.24
For example, the inner product of two bivectors can be rewritten to display the interior product of
the first bivector with either of the vectors of the second:

(x ∧ y) ⨼ (u ∧ v) = ((x ∧ y) ⨼ u) ⨼ v = −((x ∧ y) ⨼ v) ⨼ u
It is the straightforward and consistent derivation of formulae like 1.24 from definition 1.20 using
only the fundamental exterior, regressive and complement operations, that shows how powerful
Grassmann's approach is. The alternative approach of simply introducing an inner product onto a
space cannot bring such power to bear.
Orthogonality
As is well known, two 1-elements are said to be orthogonal if their scalar product is zero.
More generally, a 1-element x is orthogonal to a simple element α_m if and only if their interior
product α_m ⨼ x is zero.

α_m ⨼ (x₁ ∧ x₂ ∧ ⋯ ∧ x_k) = (α_m ⨼ x₁) ⨼ (x₂ ∧ ⋯ ∧ x_k)

Hence it becomes immediately clear that if α_m ⨼ x₁ is zero then so is the whole product
α_m ⨼ (x₁ ∧ x₂ ∧ ⋯ ∧ x_k).
|A|² = A ⨼ A = (α₁ ∧ α₂ ∧ ⋯ ∧ α_m) ⨼ (α₁ ∧ α₂ ∧ ⋯ ∧ α_m) = Det[αᵢ ⨼ αⱼ]    1.25
Under a geometric interpretation of the space in which vectors are interpreted as representing
displacements, the concept of measure corresponds to the concept of magnitude. The magnitude of
a vector is its length, the magnitude of a bivector is the area of the parallelogram formed by its two
vectors, and the magnitude of a trivector is the volume of the parallelepiped formed by its three
vectors. The magnitude of a scalar is the scalar itself.
|x ∧ y|² = (x ∧ y) ⨼ (x ∧ y) = Det[{x ⨼ x, x ⨼ y}, {x ⨼ y, y ⨼ y}]    1.27
Of course, a bivector may be expressed in an infinity of ways as the exterior product of two
vectors, for example
B = x ∧ y = x ∧ (y + a x)
From this, the square of its area may be written in either of two ways
B ⨼ B = (x ∧ y) ⨼ (x ∧ y)

B ⨼ B = (x ∧ (y + a x)) ⨼ (x ∧ (y + a x))
However, multiplying out these expressions using formula 1.23 shows that terms cancel in the
second expression, thus reducing them both to the same expression.
B ⨼ B = (x ⨼ x)(y ⨼ y) − (x ⨼ y)²
Thus the measure of a bivector is independent of the actual vectors used to express it.
Geometrically interpreted this means that the area of the corresponding parallelogram (with sides
corresponding to the displacements represented by the vectors) is independent of its shape. These
results extend straightforwardly to simple elements of any grade.
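The shape-independence of the measure can be checked numerically. In the Python sketch below (our own helpers, not the book's package), the scalar product of 1-elements is the ordinary dot product, so B ⨼ B is the Gram determinant.

```python
# |x∧y|² as the Gram determinant (x⨼x)(y⨼y) - (x⨼y)²,
# and its invariance under the shear y → y + a x.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def area_sq(x, y):
    """Square of the measure of the bivector x∧y."""
    return dot(x, x) * dot(y, y) - dot(x, y) ** 2

x = [1.0, 2.0, 0.0]
y = [0.0, 1.0, 2.0]
a = 7.0
y2 = [yi + a * xi for xi, yi in zip(x, y)]   # same bivector, different factors

print(area_sq(x, y), area_sq(x, y2))   # equal: the measure ignores the shear
```

Both evaluations give the same value, confirming that the area of the parallelogram does not depend on which pair of vectors is used to factor the bivector.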
The measure of an element is equal to the measure of its complement. By the definition of the
interior product, and formulae 1.18 and 1.19 we have
ᾱ_m ⨼ ᾱ_m = ᾱ_m ∨ α̿_m = ᾱ_m ∨ (−1)^(m(n−m)) α_m = (−1)^(m(n−m)) ᾱ_m ∨ α_m = α_m ∨ ᾱ_m = α_m ⨼ α_m
A unit element α̂_m can be defined by the ratio of the element to its measure.
α̂_m = α_m / |α_m|    1.29
e₁ ⨼ e₁ = e₁ ∨ ē₁ = e₁ ∨ (e₂ ∧ e₃) = (e₁ ∧ e₂ ∧ e₃) ∨ 1 = 1̄ ∨ 1 = 1

e₁ ⨼ e₂ = e₁ ∨ ē₂ = e₁ ∨ (e₃ ∧ e₁) = (e₁ ∧ e₃ ∧ e₁) ∨ 1 = 0 ∨ 1 = 0

(e₁ ∧ e₂) ⨼ e₁ = (e₁ ∧ e₂) ∨ ē₁ =
(e₁ ∧ e₂) ∨ (e₂ ∧ e₃) = (e₁ ∧ e₂ ∧ e₃) ∨ e₂ = 1̄ ∨ e₂ = e₂
If a basis 2-element does not contain a given basis 1-element, then their interior product is zero:

(e₁ ∧ e₂) ⨼ e₃ = (e₁ ∧ e₂) ∨ ē₃ = (e₁ ∧ e₂) ∨ (e₁ ∧ e₂) = 0    1.30
α_m ⨼ β_k = ∑ᵢ₌₁ⁿ (αᵢ ⨼ β_k) α′ᵢ

α_m = α₁ ∧ α′₁ = α₂ ∧ α′₂ = ⋯ = αₙ ∧ α′ₙ

where each αᵢ is of grade k and each α′ᵢ is of grade m−k.
For example, the Interior Common Factor Theorem may be used to prove a relationship involving
the interior product of a 1-element x with the exterior product of two factors, neither of which need
be simple. This relationship and the special cases that derive from it find application
throughout the explorations in the rest of this book.
(α_m ∧ β_k) ⨼ x = (α_m ⨼ x) ∧ β_k + (−1)^m α_m ∧ (β_k ⨼ x)    1.31
B = x₁ ∧ x₂

B ⨼ x = (x₁ ∧ x₂) ⨼ x = (x ⨼ x₁) x₂ − (x ⨼ x₂) x₁

B ∧ (B ⨼ x) = 0
The resulting vector B ⨼ x is also orthogonal to x. We can show this by taking its scalar product
with x, or by using formula 1.24.

(B ⨼ x) ⨼ x = B ⨼ (x ∧ x) = 0
If B̂ is the unit bivector of B, the projection x∥ of x onto B is given by

x∥ = −B̂ ⨼ (B̂ ⨼ x)
x⊥ = (B̂ ∧ x) ⨼ B̂
These concepts may easily be extended to geometric entities of higher grade. We will explore this
further in Chapter 6.
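In a metric 3-space the unit bivector B̂ can be represented by its unit normal n, and −B̂ ⨼ (B̂ ⨼ x) then becomes the familiar −n × (n × x). This identification, and the helper names below, are ours; the sketch is in Python rather than the book's package.

```python
# Projection of a vector onto the plane of a bivector via a double
# cross product: proj = -n×(n×x) with n the unit normal of the plane.

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

n = [0.0, 0.0, 1.0]                   # unit normal of the e1-e2 plane
x = [1.0, 2.0, 3.0]

proj = [-c for c in cross(n, cross(n, x))]
print(proj)                           # the e3 component is removed
assert dot(proj, n) == 0.0            # the projection lies in the plane
```

The remainder x − proj is the component x⊥ orthogonal to the plane, matching the decomposition given above.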
Because our generalization reduces to the usual definition under the usual circumstances, we take
the liberty of continuing to refer to the generalized cross product as, simply, the cross product.
The cross product of α_m and β_k is denoted α_m × β_k and is defined as the complement of their exterior
product: α_m × β_k = (α_m ∧ β_k)¯. The cross product of an m-element and a k-element is thus an (n−(m+k))-element.
This definition preserves the basic property of the cross product: that the cross product of two
elements is an element orthogonal to both, and reduces to the usual notion for vectors in a three
dimensional metric vector space. For 1-elements xi the definition has the following consequences,
independent of the dimension of the space.
• The box product, or triple vector product, is an (n−3)-element, and therefore a scalar only in
three dimensions.

(x₁ × x₂) ⨼ x₃ = (x₁ ∧ x₂ ∧ x₃)¯    1.34
• The scalar product of two cross products is a scalar in any number of dimensions.
• The cross product of two cross products is a (4−n)-element, and therefore a 1-element only in
three dimensions. It corresponds to the regressive product of two exterior products.
The cross product of the three-dimensional vector calculus requires the space to have a metric,
since it is defined to be orthogonal to its factors. Equation 1.35 however, shows that in this
particular case, the result does not require a metric.
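In three dimensions the box product is the familiar scalar triple product, equal to the determinant of the components. A quick numerical check in Python (our own helpers, not the book's package):

```python
# (x1 × x2) · x3 equals the determinant of the 3x3 component matrix.

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def det3(a, b, c):
    return dot(a, cross(b, c))       # expansion along the first row

x1, x2, x3 = [1, 2, 3], [0, 1, 4], [5, 6, 0]

box = dot(cross(x1, x2), x3)
print(box, det3(x1, x2, x3))         # both give the same scalar
assert box == det3(x1, x2, x3)
```

The agreement reflects the formula above: in 3-space the complement of the trivector x₁ ∧ x₂ ∧ x₃ is its scalar coefficient, the determinant of the components.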
Introduction
In this section we explore the different possible types of linear product axiom that can exist
between two elements of a linear space. Our exploration will take a different approach to that used
by Grassmann in his Ausdehnungslehre of 1862 (Chapter 2, Section 3: The various types of
product structure), but we will arrive at essentially the same conclusions: that there are just two
types of linear product axioms: one asserting the symmetry of the product operation, and the other
asserting the antisymmetry of the product operation.
Of course, this does not mean to imply that all linear products must be either symmetric or
antisymmetric! Only that, if there is an axiom relating the products of two arbitrary elements of a
linear space, then it must either define the product operation to be symmetric, or must define it to
be antisymmetric. Intuitively, this makes sense, corresponding to there being just two parities: even
and odd.
Suppose a and b are (any) two distinct non-zero elements of a given linear space. To help make the
development more concrete, we can conceive of them as 1-elements or vectors, but this is not
necessary. A linear product axiom involving a and b is defined as one that remains valid under the
substitutions
a → p a + q b    b → r a + s b
where p, q, r, and s are scalars, perhaps zero. We also assume distributivity and commutativity
with scalars.
We denote the product with a dot: a · b. Note that, while a and b must both belong to the same
linear space, we make no assertions as to where the product belongs.
There are two essentially different forms of product we can form from the two distinct elements a
and b: those involving an element with itself (a · a and b · b); and those involving distinct elements
(a · b and b · a).
For a product of an element with itself, there are just two distinct possibilities: either it is zero, or it
is non-zero. In what follows, we shall take each of these two cases and explore for any relations it
might imply on products of distinct elements.
(a + b) · (a + b) = 0
Since we have assumed the product is distributive, we can expand it to give
(a + b) · (a + b) = a · a + a · b + b · a + b · b = 0
a · b + b · a = 0
This means that, if a linear product is defined to be nilpotent (that is, a · a = 0), then necessarily it
is antisymmetric. Conversely, if it is antisymmetric, it is also nilpotent.
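A concrete instance of a nilpotent product is the 2-space wedge on components, (a₁, a₂) · (b₁, b₂) = a₁b₂ − a₂b₁. The Python sketch below (ours, for illustration) checks on random samples that nilpotency and antisymmetry go together, as derived above.

```python
# A nilpotent product: the 2-D wedge returning a scalar.
import random

def wedge2(u, v):
    return u[0] * v[1] - u[1] * v[0]

random.seed(0)
for _ in range(100):
    u = [random.randint(-9, 9), random.randint(-9, 9)]
    v = [random.randint(-9, 9), random.randint(-9, 9)]
    assert wedge2(u, u) == 0                 # nilpotent: u·u = 0
    assert wedge2(u, v) == -wedge2(v, u)     # hence antisymmetric
print("nilpotency and antisymmetry confirmed on random samples")
```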
The most general linear relationship between these products may be written, with scalar
coefficients α, β, γ, and δ:

A: α (a · a) + β (a · b) + γ (b · a) + δ (b · b) = 0
C: 2 α (a · a) + 2 δ (b · b) = 0
Since, for the case we are considering, no product of an element with itself is zero, we conclude
that both α and δ must be zero. Thus, equation A becomes

D: β (a · b) + γ (b · a) = 0
Putting b equal to a in equation D gives

E: (β + γ) (a · a) = 0
Again, since for this case a · a is not zero, we must have that β + γ is zero, that is, γ is equal to −β.
Replace γ by −β in equation D to give

F: β (a · b − b · a) = 0
Now, either β is zero, or a · b − b · a is zero. In the case β is zero we also have that γ is zero.
Additionally, from equation C we have that both α and δ are zero. This condition leads to no linear
axiom of the form A being possible.
The remaining possibility is that a · b − b · a = 0. Thus we can summarize the results as leading
to either of the two possibilities

a · a ≠ 0, b · b ≠ 0  ⟹  either a · b − b · a = 0, or no linear product axiom exists    1.38

This means that if a linear product is defined such that the product of any non-zero element with
itself is never zero, then necessarily, either no linear relation is implied between products or the
product operation is symmetric, that is a · b = b · a.
◆ Summary
In sum, there exist just two types of linear product axioms: one asserting the symmetry of the
product operation, and the other asserting the antisymmetry of the product operation. This does not
preclude linear products which have no axioms relating products of two elements.
Examples
◆ Symmetric products
Inner products: These are symmetric for factors of (equal) arbitrary grade.
Scalar products: These are a special case of inner product for factors of grade 1.
Exterior products: These are symmetric for factors of (equal) even grade.
Algebraic products: The ordinary algebraic product is symmetric. It is equivalent to the special
cases of the inner or exterior products of elements of zero grade.
◆ Antisymmetric products
Exterior products: These are antisymmetric for factors of (equal) odd grade.
Regressive products: These are antisymmetric for factors of (equal) grade, but one whose parity is
different from that of the dimension of the space.
Generalized products: These are antisymmetric for factors of (equal) grade, but one whose parity
is different from that of the order of the product.
To confirm this, we take the simplest case of the Clifford product of two 1-elements, and posit that
there does exist a linear product axiom. Our expectation is that this will lead to a contradiction.
B: α (a ∧ a + a ⨼ a) + β (a ∧ b + a ⨼ b) + γ (b ∧ a + b ⨼ a) + δ (b ∧ b + b ⨼ b) = 0
Since the exterior and scalar product operations are distinct, leading to elements of different grade,
we can write B as two separate linear product axioms:
C: α (a ∧ a) + β (a ∧ b) + γ (b ∧ a) + δ (b ∧ b) = 0
D: α (a ⨼ a) + β (a ⨼ b) + γ (b ⨼ a) + δ (b ⨼ b) = 0
E: (β − γ) (a ∧ b) = 0
F: α (a ⨼ a) + (β + γ) (a ⨼ b) + δ (b ⨼ b) = 0
G: α (a ⨼ a) + 2 β (a ⨼ b) + δ (b ⨼ b) = 0
Put a equal to −a in G and subtract the result from G to give 4 β (a ⨼ b) = 0. Hence β must be
zero, giving
H: α (a ⨼ a) + δ (b ⨼ b) = 0

Now put b equal to a to get (α + δ)(a ⨼ a) = 0, whence α must be equal to −δ, yielding finally

I: a ⨼ a − b ⨼ b = 0
This is clearly a contradiction (as we suspected might be the case), hence there is no linear product
axiom relating Clifford products of 1-elements.
1.15 Terminology
Terminology
Grassmann algebra is really quite a simple structure, straightforwardly generated by introducing a
product operation onto the elements of a linear space to generate a suite of new linear spaces. If we
only wish to describe this algebra and its elements, the terminology can be quite compact and
consistent with accepted practice.
However in applications of the algebra, for example to geometry and physics, we may want to
work in an interpreted version of the algebra. This new interpretation may be as simple as viewing
one of the basis elements of the underlying linear space as an origin point, and the rest as vectors,
but the interpreted algebraic, and therefore terminological, distinctions are now significantly
increased. Whereas the exterior product on an algebraically uninterpreted basis multiplied the basis
elements into a single suite of higher grade elements, the exterior product on such an interpreted
basis multiplies the basis elements into two suites of interpreted higher grade elements - those
containing the origin as a factor, and those which do not.
This is of course more complex, and requires more terminology to make all the distinctions. Deciding on the terminology to use, however, poses a challenge, because many of the distinctions, while new to some, are known to others by different names. For example, E. A. Milne in his Vectorial Mechanics, although not mentioning Grassmann or the exterior product, uses the term 'line vector' for the entity we model by the exterior product of a point with a vector. Robert Stawell Ball in his A Treatise on the Theory of Screws, only briefly mentioning Grassmann, uses the word 'screw' for the entity we model by the sum of a 'line vector' and what is currently known as a bivector.
One compromise, adopted to avoid tedious extra-locution, is to extend the term space in this book
to also mean a Grassmann algebra. I have done this because it seems to me that when we visualize
space, it is not only inhabited by points and vectors, but also by lines, planes, bivectors, trivectors,
and other multi-dimensional entities.
■ Linear space

In this book a linear space is defined in the usual way as consisting of an abelian group under addition, a field, and a (scalar) multiplication operation between their elements. The dimension of a linear space is the number of elements in a basis of the space.

The underlying linear space L₁ of a Grassmann algebra is the linear space which, together with the exterior product operation, generates the algebra. The field of L₁ is denoted L₀, and the dimension of L₁ by n (or, when using GrassmannAlgebra, ★n or ★D).

An exterior linear space Lₘ is the linear space whose basis consists of all the essentially different exterior products of basis elements of L₁, m at a time. The dimension of Lₘ is therefore (n choose m). Its elements are called m-elements. The integer m is called the grade of Lₘ and its elements.
L₁ :  e₁, e₂, e₃, …, eₙ   (1-elements)
■ Grassmann algebra

■ Space

In this book, the term space is another term for a Grassmann algebra. The term n-space is another term for a Grassmann algebra whose underlying linear space is of dimension n. By abuse of terminology we will refer to the dimension and basis of an n-space as that of its underlying linear space. In GrassmannAlgebra you can declare that you want to work in an n-space by entering ★!n.

The space of a simple m-element is the m-space whose underlying linear space consists of all the 1-elements in the m-element. A 1-element is said to be in a simple m-element if and only if their exterior product is zero. A basis of the space of a simple m-element may be taken as any set of m independent factors of the m-element. An n-space and the space of its n-element are identical.
■ Vector space
A vector space is a space in which the elements of the underlying linear space have been
interpreted as vectors. This interpretation views a vector as an (unlocated) direction and
graphically depicts it as an arrow. An m-element in a vector space is called an m-vector or
multivector. A 2-vector is also called a bivector. A 3-vector is also called a trivector. A simple m-
vector is viewed as a multi-dimensional direction, or m-direction.
An element of a vector space may also be called a geometric entity (or simply, entity) to emphasize
that it has a geometric interpretation. An m-vector is an entity of grade m. Vectors are 1-entities,
bivectors are 2-entities.
A vector n-space is a vector space whose underlying linear space is of dimension n. A vector 3-
space is thus richer than the usual 3-dimensional vector algebra, since it also contains bivectors and
trivectors, as well as the exterior product operation.
L₁ :  e₁, e₂, e₃, …, eₙ   (vectors)
The space of a simple m-vector is the m-space whose underlying linear space consists of all the
vectors in the m-vector.
■ Bound space
A bound space is a vector space to whose basis has been added an origin. The origin is an element
with the geometric interpretation of a point. A bound n-space is a vector n-space to whose basis
has been added an origin. The symbol n refers to the number of vectors in the basis of the bound
space, thus making the dimension of a bound n-space n+1. In GrassmannAlgebra you can declare
that you want to work in a bound n-space by entering ★"n. You can determine the dimension of a space by entering ★D, which in this case would return n+1. Thus the underlying linear space of a bound n-space is of dimension n+1.
An element of a bound space may also be called a geometric entity (or simply, entity) to emphasize
it has a geometric interpretation. An m-entity is an entity of grade m. Vectors and points are 1-
entities. Bivectors and bound vectors are 2-entities.
A bound n-space is thus richer than a vector n-space, since as well as containing vectorial
(unlocated) entities (vectors, bivectors, …), it also contains bound (located) entities (points, bound
vectors, …) and sums of these. In contradistinction to the term bound space, a vector space may
sometimes be called a free space. A bound 3-space is a closer algebraic model to physical 3-space
than a vector 3-space, even though its underlying linear space is of four dimensions.
The space of a bound simple m-vector is the (m+1)-space whose underlying linear space consists of
all the points and vectors in the bound simple m-vector.
■ Geometric objects
The geometric object of a bound simple entity is the set of all points in the entity, that is, all points
whose exterior product with the bound simple entity is zero.
The geometric object of a bound scalar or weighted point is the point itself. The geometric object
of a bound vector is a line (an infinite set of points). The geometric object of a bound simple
bivector is a plane (a doubly infinite set of points).
Since the properties of geometric objects are well modelled algebraically by their corresponding
entities, we find it convenient to compute with geometric objects by computing with their
corresponding bound simple entities; thus for example computing the intersection of two lines by
computing the regressive product of two bound vectors.
Two elements are said to be congruent if one is a scalar multiple (not zero) of the other.
The scalar multiple associated with two congruent elements is called the congruence factor.
Two congruent elements are said to be of opposite orientation if their congruence factor is negative.
The origin of a bound space is the basis element designated as the origin.
A vector in a bound space is a 1-element that does not involve the origin.
A point is the sum of the origin with a vector. This vector is called the position vector of the point.
A weighted point is a scalar multiple of a point. The scalar multiple is called the weight of the point.
The sum of a point and a vector is another point. The vector may be said to carry the first point
into the second.
The sum of a bound simple m-vector and a simple (m+1)-vector containing the m-vector is another
bound simple m-vector. The simple (m+1)-vector may be said to carry the first bound simple m-
vector into the second.
Congruent vectors of opposite orientation are also said to have opposite sense.
Congruent bound simple elements are said to define the same position and direction.
A bound simple m-vector that has been carried to a new bound simple m-vector by the addition of
an (m+1)-vector may be said to have been carried to a new position. The m-direction of the bound
simple m-vector is unchanged.
The geometric object of a bound simple entity may be said to have the position and direction of its
entity.
The notions of position, direction, and sense are geometric interpretations on the algebraic
elements.
1.16 Summary
To be completed
2.1 Introduction
The exterior product is the natural fundamental product operation for elements of a linear space.
Although it is not a closed operation (that is, the product of two elements is not itself an element of
the same linear space), the products it generates form a series of new linear spaces, the totality of
which can be used to define a true algebra, which is closed.
The exterior product is naturally and fundamentally connected with the notion of linear
dependence. Several 1-elements are linearly dependent if and only if their exterior product is zero.
All the properties of determinants flow naturally from the simple axioms of the exterior product.
The notion of 'exterior' is equivalent to the notion of linear independence, since elements which are
truly exterior to one another (that is, not lying in the same space) will have a non-zero product. An
exterior product of 1-elements has a straightforward geometric interpretation as the multi-
dimensional equivalent to the 1-elements from which it is constructed. And if the space possesses a
metric, its measure or magnitude may be interpreted as a length, area, volume, or hyper-volume
according to the grade of the product.
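The link between linear dependence, determinants, and the exterior product is easy to check numerically. The following Python sketch is a minimal model of our own devising (not the GrassmannAlgebra package used in this book): an element is a dictionary mapping sorted tuples of basis indices ("blades") to coefficients. The coefficient of e₁∧e₂∧e₃ in v₁∧v₂∧v₃ is exactly the 3×3 determinant of the coordinates, and it vanishes when the vectors are dependent.

```python
# Minimal exterior-product model: an element is {blade-tuple: coefficient},
# where a blade is a sorted tuple of basis indices.
def blade_sort(indices):
    """Sort indices by adjacent transpositions, tracking the sign; 0 on repeats."""
    idx, sign = list(indices), 1
    for i in range(1, len(idx)):
        j = i
        while j > 0 and idx[j] < idx[j - 1]:
            idx[j], idx[j - 1] = idx[j - 1], idx[j]
            sign, j = -sign, j - 1
    return (tuple(idx), sign) if len(set(idx)) == len(idx) else ((), 0)

def wedge(a, b):
    """Exterior product of two elements in the dictionary representation."""
    out = {}
    for ba, ca in a.items():
        for bb, cb in b.items():
            blade, sign = blade_sort(ba + bb)
            if sign:
                out[blade] = out.get(blade, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v}

def vec(*coords):
    """A 1-element with the given coordinates on e1, e2, e3, ..."""
    return {(i,): c for i, c in enumerate(coords, start=1) if c}

v1, v2, v3 = vec(1, 2, 3), vec(0, 1, 4), vec(5, 6, 0)
triple = wedge(wedge(v1, v2), v3)
print(triple)                              # {(1, 2, 3): 1}, the 3x3 determinant
print(wedge(wedge(v1, v2), vec(1, 3, 7)))  # {} : v1 + v2 is dependent on v1, v2
```

The single surviving coefficient is the determinant of the matrix whose rows are the three coordinate vectors; a dependent third factor kills the product entirely.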
However, the exterior product does not require the linear space to possess a metric. This is in direct
contrast to the three-dimensional vector calculus in which the vector (or cross) product does
require a metric, since it is defined as the vector orthogonal to its two factors, and orthogonality is
a metric concept. Some of the results which use the cross product can equally well be cast in terms
of the exterior product, thus avoiding unnecessary assumptions.
We start the chapter with an informal discussion of the exterior product, and then collect the
axioms for exterior linear spaces together more formally into a set of 12 axioms which combine
those of the underlying field and underlying linear space with the specific properties of the exterior
product. In later chapters we will derive equivalent sets for the regressive and interior products, to
which we will often refer.
Next, we pin down the linear space notions that we have introduced axiomatically by introducing a
basis onto the underlying (primary) linear space and showing how this can induce a basis onto each
of the other linear spaces generated by the exterior product. A constantly useful partner to a basis
element of any of these exterior linear spaces is its cobasis element. The exterior product of a basis
element with its cobasis element always gives the basis n-element of the algebra. We develop the
notion of cobasis following that of basis.
The next three sections look at some standard topics in linear algebra from a Grassmannian
viewpoint: determinants, cofactors, and the solution of systems of linear equations. We show that
all the well-known properties of determinants, cofactors, and linear equations proceed directly
from the properties of the exterior product.
The next two sections discuss two concepts dependent on the exterior product that have no direct
counterpart in standard linear algebra: simplicity and exterior division. If an element is simple it
can be factorized into 1-elements. If a simple element is divided by another simple element
contained in it, the result is not unique. Both these properties will find application later in our
explorations of geometry.
At the end of the chapter, we take the opportunity to introduce the concepts of span and cospan of a
simple element, and the concept that we can define multilinear forms involving the spans and
cospans of an element which are invariant to any refactorization of the element. Following this we
explore their application to calculating the union and intersection of two simple elements, and the
factorization of a simple element. These applications involve only the exterior product. But we will
continue to use the concepts of span, cospan and multilinear forms throughout the book, where
they will generally involve other product operations.
We call L₁ a linear space rather than a vector space, and its elements 1-elements rather than vectors, because we will be interpreting its elements as points as well as vectors.

A second linear space, denoted L₂, may be constructed as the space of sums of exterior products of 1-elements taken two at a time. The exterior product operation is denoted by ∧, and has the following properties:
a (x∧y) = (a x)∧y        2.1

x∧(y + z) = x∧y + x∧z        2.2

(y + z)∧x = y∧x + z∧x        2.3

x∧x = 0        2.4

An important property of the exterior product (which is sometimes taken as an axiom) is the anti-symmetry of the product of two 1-elements.

x∧y = -y∧x        2.5
This may be proved from the distributivity and nilpotency axioms since:

(x + y)∧(x + y) = 0
⟹  x∧x + x∧y + y∧x + y∧y = 0
⟹  x∧y + y∧x = 0
⟹  x∧y = -y∧x
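The same expansion can be checked numerically. This is a hedged Python sketch (a minimal model of the exterior product, not the GrassmannAlgebra package): expanding (x + y)∧(x + y) to zero forces the cross terms x∧y and y∧x to cancel.

```python
# (x + y)^(x + y) = 0 forces x^y = -y^x: a numeric check of the expansion,
# using a dictionary model {blade-tuple: coefficient} of the exterior product.
def blade_sort(indices):
    idx, sign = list(indices), 1
    for i in range(1, len(idx)):
        j = i
        while j > 0 and idx[j] < idx[j - 1]:
            idx[j], idx[j - 1] = idx[j - 1], idx[j]
            sign, j = -sign, j - 1
    return (tuple(idx), sign) if len(set(idx)) == len(idx) else ((), 0)

def wedge(a, b):
    out = {}
    for ba, ca in a.items():
        for bb, cb in b.items():
            blade, sign = blade_sort(ba + bb)
            if sign:
                out[blade] = out.get(blade, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v}

def add(a, b):
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

x = {(1,): 2, (2,): 3}    # x = 2 e1 + 3 e2
y = {(2,): 1, (3,): 5}    # y = e2 + 5 e3
print(wedge(add(x, y), add(x, y)))    # {} : nilpotency
print(add(wedge(x, y), wedge(y, x)))  # {} : hence x^y = -y^x
```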
An element of L₂ will be called a 2-element (of grade 2) and is denoted by a kernel letter with a '2' written below (rendered here as a subscript). For example:

α₂ = x∧y + z∧w + ⋯

It is important to note the distinction between a 2-element and a simple 2-element (a distinction of no consequence in L₁, where all elements are simple). A 2-element is in general a sum of simple 2-elements, and is not generally expressible as the product of two 1-elements (except where L₁ is of dimension n ≤ 3). The structure of L₂, whilst still that of a linear space, is thus richer than that of L₁; that is, it has more properties.

By way of example, the following shows that when L₁ is of dimension 3, every element of L₂ is simple.

Suppose that L₁ has basis e₁, e₂, e₃. Then a basis of L₂ is the set of all essentially different (linearly independent) products of basis elements of L₁ taken two at a time: e₁∧e₂, e₂∧e₃, e₃∧e₁. (The product e₁∧e₂ is not considered essentially different from e₂∧e₁ in view of the anti-symmetry property).

α₂ = a e₁∧e₂ + b e₂∧e₃ + c e₃∧e₁

Without loss of generality, suppose a ≠ 0. Then α₂ can be recast in the form below, thus proving the proposition.

α₂ = a (e₁ - (b/a) e₃) ∧ (e₂ - (c/a) e₃)
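The factorization can be verified by expanding it back out. The Python sketch below (a minimal dictionary model of the exterior product, not the GrassmannAlgebra package) checks it for sample values a = 2, b = 3, c = 5, using exact rational arithmetic.

```python
from fractions import Fraction

# Check the factorization alpha2 = a (e1 - (b/a) e3) ^ (e2 - (c/a) e3)
# in a dictionary model {blade-tuple: coefficient} of the exterior product.
def blade_sort(indices):
    idx, sign = list(indices), 1
    for i in range(1, len(idx)):
        j = i
        while j > 0 and idx[j] < idx[j - 1]:
            idx[j], idx[j - 1] = idx[j - 1], idx[j]
            sign, j = -sign, j - 1
    return (tuple(idx), sign) if len(set(idx)) == len(idx) else ((), 0)

def wedge(u, v):
    out = {}
    for bu, cu in u.items():
        for bv, cv in v.items():
            blade, sign = blade_sort(bu + bv)
            if sign:
                out[blade] = out.get(blade, 0) + sign * cu * cv
    return {k: v for k, v in out.items() if v}

a, b, c = Fraction(2), Fraction(3), Fraction(5)
f1 = {(1,): 1, (3,): -b / a}    # e1 - (b/a) e3
f2 = {(2,): 1, (3,): -c / a}    # e2 - (c/a) e3
product = {k: a * v for k, v in wedge(f1, f2).items()}
# Equals a e1^e2 + b e2^e3 + c e3^e1 (with e3^e1 stored as minus e1^e3):
print(product)
```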
■ Historical Note

In the Ausdehnungslehre of 1844 Grassmann denoted the exterior product of two symbols α and β by a simple concatenation, viz. α β; whilst in the Ausdehnungslehre of 1862 he enclosed them in square brackets, viz. [α β]. This notation has survived in the three-dimensional vector calculus as the 'box' product [α β γ] used for the triple scalar product. Modern usage denotes the exterior product operation by the wedge ∧, thus α∧β. Amongst other writers, Whitehead [1898] used the 1844 version whilst Forder [1941] and Cartan [1922] followed the 1862 version.
If you have just loaded the GrassmannAlgebra package, you will see these symbols listed in the
status panes at the top of the palette. (You can load the package by clicking the red title button on
the GrassmannAlgebra Palette, and get help on any of the topics or commands by clicking the
associated ? button.)
DeclareScalarSymbols[{α, β, γ}]
{α, β, γ}

You will now see these new symbols reflected in the Current ScalarSymbols pane.

ScalarSymbols
{α, β, γ}

You can always return to the default declaration by entering the command DeclareDefaultScalarSymbols or its alias ★S. (You can enter this with the keystroke combination Esc *5 Esc S, or by using the 5-star symbol on the palette.)

★S
{a, b, c, d, e, f, g, h}

The same procedures apply mutatis mutandis for vector symbols, with the word "Scalar" replaced by the word "Vector" and ★S by ★V.
† As well as symbols in the lists ScalarSymbols and VectorSymbols, you can also include patterns. Any expression which conforms to such a pattern is considered to be a symbol of the respective type. For further information, check the Preferences section in the palette.
† Note that the five-pointed star symbol ★ (Esc *5 Esc) is used in GrassmannAlgebra as the first character in each of its aliases to commonly used commands, objects or functions.

† You can reset all the default declarations (ScalarSymbols, VectorSymbols, Basis and Metric) by entering ★A, which you can access from the palette. We will do this often throughout the book to ensure we are starting our computations from the default environment.

A quick way to enter x∧y is to type x, click on ∧ in the palette, then type y. Other ways are to type x Esc ^ Esc y, or click the □∧□ button on the palette. The first placeholder will be selected automatically for immediate typing. To move to the next placeholder, press the tab key. To produce a multifactored exterior product, click the palette button as many times as required. To take the exterior product with an existing expression, select the expression and click the □∧□ button.

For example, to type (x + y)∧(x + z), first type x + y and select it, click the □∧□ button, then type (x + z).
† For further information, check the Expression Composition section in the palette.
Composing m-elements
In the previous section the exterior product was introduced as an operation on the elements of the linear space L₁ to produce elements belonging to a new space L₂. Just as L₂ was composed from sums of exterior products of elements of L₁ two at a time, so Lₘ may be composed from sums of exterior products of elements of L₁, m at a time.

x₁∧x₂∧⋯∧xₘ        xᵢ ∈ L₁
αₘ = a₁ (x₁∧x₂∧⋯∧xₘ) + a₂ (y₁∧y₂∧⋯∧yₘ) + ⋯        aᵢ ∈ L₀

The exterior linear space L₀ has essentially only one element in its basis, which we take as the unit 1. All other elements of L₀ are scalar multiples of this basis element.

The exterior linear space Lₙ (where n is the dimension of L₁) also has essentially only one element in its basis: e₁∧e₂∧⋯∧eₙ. All other elements of Lₙ are scalar multiples of this basis element.
Any m-element, where m > n, is zero, since there are no more than n independent elements
available from which to compose it, and the nilpotency property causes it to vanish.
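The dimension counts above, and the vanishing of elements of grade greater than n, can be illustrated in a few lines of Python (a stand-alone sketch, independent of the GrassmannAlgebra package):

```python
from math import comb
from itertools import combinations

# Dimensions of the exterior linear spaces over a 4-dimensional L1:
n = 4
dims = [comb(n, m) for m in range(n + 1)]
print(dims)       # [1, 4, 6, 4, 1]
print(sum(dims))  # 16 = 2**4 basis elements in the whole algebra

# Any exterior product of more than n basis factors must repeat a factor,
# so every m-element with m > n is zero: no 5-subset of {1,2,3,4} exists.
print(list(combinations(range(1, n + 1), 5)))  # []
```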
ComposeSimpleForm[a x + b x + c x + d x]    (the four x's carry the underscripts 0, 1, 2, 3, denoting their grades)

a x₀ + b x₁ + c x₂∧x₃ + d x₄∧x₅∧x₆

GrassmannAlgebra understands that the new symbols generated are vector symbols. You can verify this by checking the Current VectorSymbols status pane in the palette, or by entering VectorSymbols.

VectorSymbols
{p, q, r, s, t, u, v, w, x, y, z, x₁, x₂, x₃, x₄, x₅, x₆}
† For further information, check the Expression Composition section in the palette.
The space of a simple m-element is the m-space whose underlying linear space consists of all the 1-
elements in the m-element. A basis of the space of a simple m-element may be taken as any set of
m independent factors of the m-element. An n-space and the space of its n-element are identical.
We may also say that α is in the space of αₘ, or that αₘ defines its own space.

A simple element βₖ is said to be contained in another simple element αₘ if and only if the space of βₖ is a subset of that of αₘ.

We say that two elements are congruent if one is a scalar multiple of the other, and denote the relationship with the symbol ≡. For example we may write:

αₘ ≡ a αₘ

We can therefore say that if two elements are congruent, then their spaces are the same.

Note that whereas several congruent elements are linearly dependent, several linearly dependent elements are not necessarily congruent.

When we have one element equal to a scalar multiple of another, αₘ = a βₘ say, we may sometimes take the liberty of writing the scalar multiple as a quotient of the two elements:

a = αₘ / βₘ
These notions will be encountered many times in the rest of the book.
† An overview of the terminology used in this book can found at the end of Chapter 1.
From this associativity together with the anti-symmetric property of 1-elements it may be shown that the exterior product is anti-symmetric in all (1-element) factors. That is, a transposition of any two 1-element factors changes the sign of the product. For example:

x₁∧x₂∧x₃∧x₄ = -x₃∧x₂∧x₁∧x₄ = x₃∧x₄∧x₁∧x₂ = ⋯
Furthermore, from the nilpotency axiom, a product with two identical 1-element factors is zero. For example:

x₁∧x₂∧x₃∧x₂ = 0

It should be noted that for simple elements αₘ, αₘ∧αₘ = 0, but that non-simple elements do not necessarily possess this property, as the following example shows.

(e₁∧e₂ + e₃∧e₄)∧(e₁∧e₂ + e₃∧e₄) = 2 e₁∧e₂∧e₃∧e₄

To expand an exterior product, you can use the function GrassmannExpand. For example, to expand the product in the previous example:

X = GrassmannExpand[(e₁∧e₂ + e₃∧e₄)∧(e₁∧e₂ + e₃∧e₄)]
e₁∧e₂∧e₁∧e₂ + e₁∧e₂∧e₃∧e₄ + e₃∧e₄∧e₁∧e₂ + e₃∧e₄∧e₃∧e₄
An alias for GrassmannExpand is ★#, which you can find on the GrassmannAlgebra palette. To simplify the expression you would use the GrassmannSimplify function. An alias for GrassmannSimplify is ★$, which you can also find on the palette.

However in this case, you must first tell GrassmannAlgebra that you have chosen a 4-space with basis {e₁, e₂, e₃, e₄}, otherwise it will not know that the eᵢ are basis 1-elements. You can do that by entering the alias ★!4. (We will discuss the detail of declaring bases in a later section.)

★!4
{e₁, e₂, e₃, e₄}

★$[X]
2 e₁∧e₂∧e₃∧e₄
To expand and then simplify the expression in one operation, you can use the single function GrassmannExpandAndSimplify (which has the alias ★%).

★%[(e₁∧e₂ + e₃∧e₄)∧(e₁∧e₂ + e₃∧e₄)]
2 e₁∧e₂∧e₃∧e₄
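The same result can be reproduced outside Mathematica. This Python sketch (a minimal dictionary model of the exterior product, not the GrassmannAlgebra package) squares the non-simple 2-element and, for contrast, a simple one:

```python
# (e1^e2 + e3^e4)^(e1^e2 + e3^e4) = 2 e1^e2^e3^e4, checked in a small
# dictionary model {blade-tuple: coefficient} of the exterior product.
def blade_sort(indices):
    idx, sign = list(indices), 1
    for i in range(1, len(idx)):
        j = i
        while j > 0 and idx[j] < idx[j - 1]:
            idx[j], idx[j - 1] = idx[j - 1], idx[j]
            sign, j = -sign, j - 1
    return (tuple(idx), sign) if len(set(idx)) == len(idx) else ((), 0)

def wedge(a, b):
    out = {}
    for ba, ca in a.items():
        for bb, cb in b.items():
            blade, sign = blade_sort(ba + bb)
            if sign:
                out[blade] = out.get(blade, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v}

A = {(1, 2): 1, (3, 4): 1}    # non-simple: e1^e2 + e3^e4
print(wedge(A, A))            # {(1, 2, 3, 4): 2}
B = {(1, 2): 1}               # simple: e1^e2
print(wedge(B, B))            # {}
```

The two cross terms e₁∧e₂∧e₃∧e₄ and e₃∧e₄∧e₁∧e₂ carry the same sign (an even permutation relates them), which is why the square of a non-simple element can survive.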
† Note that you can also use these functions to expand and/or simplify any Grassmann expression.
† For further information, check the Expression Transformation section in the palette.
Summary of axioms
The axioms for exterior linear spaces are summarized here for future reference. They are a
composite of the requisite field, linear space, and exterior product axioms, and may be used to
derive the properties discussed informally in the preceding sections. Composite statements are
given in list form.
(αₘ + βₘ) + γₘ = αₘ + (βₘ + γₘ)        2.8
■ ∧8: There is a unit scalar which acts as an identity under the exterior product.

■ ∧9: Non-zero scalars have inverses with respect to the exterior product.
αₘ ∧ βₖ = (-1)^(m k) βₖ ∧ αₘ        2.16

■ ∧11: Additive identities act as multiplicative zero elements under the exterior product.

■ ∧12: The exterior product is both left and right distributive under addition.

(αₘ + βₘ) ∧ γₖ = αₘ ∧ γₖ + βₘ ∧ γₖ        2.18
αₘ ∧ (βₖ + γₖ) = αₘ ∧ βₖ + αₘ ∧ γₖ

αₘ ∧ (a βₖ) = (a αₘ) ∧ βₖ = a (αₘ ∧ βₖ)        2.19
It can be seen from the above axioms that each of the linear spaces has its own zero. Apart from grade, these zeros all have the same properties, so that for simplicity in computations we will denote them all by the one symbol 0.

However, this also means that we cannot determine the grade of this symbol 0 from the symbol alone. In practice, we overcome this problem by defining the grade of 0 to be the (undefined) symbol ★0.
Grassmann algebras

Under the foregoing axioms it may be directly shown that:

1. L₀ is a field.
2. The Lₘ are linear spaces over L₀.
3. The direct sum of the Lₘ is an algebra.

This algebra is called a Grassmann algebra. Its elements are sums of elements from the Lₘ, thus allowing closure over both addition and exterior multiplication.

For example, we can expand and simplify the product of two elements of the algebra to give another element.

★%[(1 + 2 x + 3 x∧y + 4 x∧y∧z)∧(1 + 2 x + 3 x∧y + 4 x∧y∧z)]
1 + 4 x + 6 x∧y + 8 x∧y∧z
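The closure of the algebra under this product can be checked with the same dictionary model used earlier (a Python sketch of our own, not the GrassmannAlgebra package), extended so that the empty blade () holds the scalar part:

```python
# The product above, 1 + 4x + 6 x^y + 8 x^y^z, reproduced in a dictionary
# model where the empty blade () holds the scalar part of a Grassmann number.
def blade_sort(indices):
    idx, sign = list(indices), 1
    for i in range(1, len(idx)):
        j = i
        while j > 0 and idx[j] < idx[j - 1]:
            idx[j], idx[j - 1] = idx[j - 1], idx[j]
            sign, j = -sign, j - 1
    return (tuple(idx), sign) if len(set(idx)) == len(idx) else ((), 0)

def wedge(a, b):
    out = {}
    for ba, ca in a.items():
        for bb, cb in b.items():
            blade, sign = blade_sort(ba + bb)
            if sign:
                out[blade] = out.get(blade, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v}

# Basis indices 1, 2, 3 stand for x, y, z.
X = {(): 1, (1,): 2, (1, 2): 3, (1, 2, 3): 4}  # 1 + 2x + 3 x^y + 4 x^y^z
print(wedge(X, X))  # {(): 1, (1,): 4, (1, 2): 6, (1, 2, 3): 8}
```

Every cross term between the non-scalar components contains a repeated 1-element factor and vanishes, so only the scalar-times-component terms survive.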
A Grassmann algebra is also a linear space of dimension 2ⁿ, where n is the dimension of the underlying linear space L₁. We will sometimes refer to the Grassmann algebra whose underlying linear space is of dimension n as the Grassmann algebra of n-space.
If one of these factors is a scalar (βₖ = a, say; k = 0), the axiom reduces to:

αₘ ∧ a = a ∧ αₘ

Since by axiom ∧6, each of these terms is an m-element, we may permit the exterior product to subsume the normal field multiplication. Thus, if a is a scalar, a ∧ αₘ is equivalent to a αₘ. The latter (conventional) notation will usually be adopted.
In the usual definitions of linear spaces no discussion is given to the nature of the product of a
scalar and an element of the space. A notation is usually adopted (that is, the omission of the
product sign) that leads one to suppose this product to be of the same nature as that between two
scalars. From the axioms above it may be seen that both the product of two scalars and the product
of a scalar and an element of the linear space may be interpreted as exterior products.
Factoring scalars
In GrassmannAlgebra scalars can be factored out of a product by GrassmannSimplify (alias ★$) so that they are collected at the beginning. For example:

★$[5 (2 x)∧(3 y)∧(b z)]
30 b x∧y∧z

If any of the factors in the product is scalar, it will also be collected at the beginning. Here a is a scalar since it has been declared as one (by default).

★$[5 (2 x)∧(3 a)∧(b z)]
30 a b x∧z

GrassmannSimplify works with any Grassmann expression, or lists (to any level) of Grassmann expressions:

★$[{{1 + a∧b, a∧x}, {z∧b, 1 - z∧x}}] // MatrixForm
{{1 + a b, a x}, {b z, 1 + x∧z}}    (displayed as a matrix)
Grassmann expressions
A Grassmann expression is any well-formed expression recognized by GrassmannAlgebra as a
valid element of a Grassmann algebra. To check whether an expression is considered to be a
Grassmann expression, you can use the function GrassmannExpressionQ.
GrassmannExpressionQ[1 + x∧y]
True

GrassmannExpressionQ[{x∧y, a x, x y}]
{True, True, False}
† For further information, check the Expression Analysis section in the palette.
At this stage we have not yet begun to use expressions for which the grade is other than obvious by inspection. However, as will be seen later, the grade will not always be obvious, especially when general Grassmann expressions, for example those involving Clifford or hypercomplex numbers, are being considered. A Clifford number may, for example, have components of different grade.
In GrassmannAlgebra you can use the function Grade (alias ★G) to calculate the grade of a Grassmann expression. Grade works with single Grassmann expressions or with lists (to any level) of expressions.

Grade[x∧y∧z]
3

Grade[{1, x, x∧y, x∧y∧z}]
{0, 1, 2, 3}

It will also calculate the grades of the elements in a more general expression, returning them in a list.

Grade[1 + x + x∧y + x∧y∧z]
{0, 1, 2, 3}

† As mentioned in the summary of axioms above, the zero symbol 0 will be used indiscriminately for the zero element of any of the exterior linear spaces. The grade of the symbol 0 is therefore ambiguous. In GrassmannAlgebra, whenever Grade encounters a 0 it will return the symbol ★0. This symbol may be read as "the grade of zero". Grade also returns ★0 when it encounters an element whose notional grade is greater than the dimension of the currently declared space (and hence is zero).
As an example, we first declare a 3-space, then compute the grades of three elements. The first
element 0 could be the zero of any exterior linear space, hence its grade is returned as ¯0. The
second element, the unit scalar 1, is of grade 0 (this 0 is a true scalar!). The third element has a
notional grade of 4, and hence reduces to 0 in the currently declared 3-space.
2.5 Bases
A basis for L₂ may be constructed from the eᵢ by assembling all the essentially different non-zero products eᵢ₁∧eᵢ₂ (i₁, i₂ : 1, …, n). Two products are essentially different if they do not involve the same 1-elements. There are obviously (n choose 2) such products, making (n choose 2) also the dimension of L₂.

In general, a basis for Lₘ may be constructed from the eᵢ by taking all the essentially different products eᵢ₁∧eᵢ₂∧⋯∧eᵢₘ (i₁, i₂, …, iₘ : 1, …, n). Lₘ is thus (n choose m)-dimensional.
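This assembly of essentially different products is just the enumeration of m-element subsets of the basis, which can be sketched in Python (a plain stand-in for illustration, not the package's own basis machinery):

```python
from itertools import combinations
from math import comb

# Basis of each exterior linear space induced from a declared basis of L1,
# assembling the essentially different products e_i1 ^ e_i2 ^ ... ^ e_im.
basis1 = ["e1", "e2", "e3", "e4"]
n = len(basis1)
# L0 has the single basis element 1; the remaining spaces follow:
for m in range(1, n + 1):
    blades = ["^".join(c) for c in combinations(basis1, m)]
    assert len(blades) == comb(n, m)  # dimension of Lm is (n choose m)
    print(m, blades)
# m = 2 gives the 6 products e1^e2, e1^e3, e1^e4, e2^e3, e2^e4, e3^e4
```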
When first loaded, GrassmannAlgebra sets up a default basis of {e₁, e₂, e₃}. You can see this from the Current Basis pane on the palette. This is the basis of a 3-dimensional linear space which may be interpreted as a 3-dimensional vector space if you wish. (We will be exploring interpretations other than the vectorial later.)

DeclareBasis[{i, j, k}]
{i, j, k}

Notice that the Current Basis pane of the palette now reads {i, j, k}.

DeclareBasis either takes a list (of your own basis elements) or a positive integer as its argument. By using the positive integer form you can simply declare a subscripted basis of any dimension you please.

DeclareBasis[8]
{e₁, e₂, e₃, e₄, e₅, e₆, e₇, e₈}

An optional argument gives you control over the kernel symbol used for the basis elements.

DeclareBasis[8, e]
{e₁, e₂, e₃, e₄, e₅, e₆, e₇, e₈}

A convenient way to change your space to one of dimension n is by entering the symbol ★!n with n a positive integer. The simplest way to enter this is from the palette using the ★!□ symbol, and entering the integer into the placeholder □.
¯!4
{e1, e2, e3, e4}
You can return to the default basis by entering DeclareBasis[3] or ¯!3. You can set all preferences back to their default values by entering DeclareAllDefaultPreferences (or its alias ¯A).
¯!3
{e1, e2, e3}
The basis of any of the exterior linear spaces of the algebra may be composed with BasisL:
BasisL[0]
{1}
BasisL[1]
{e1, e2, e3}
BasisL[2]
{e1 ∧ e2, e1 ∧ e3, e2 ∧ e3}
BasisL[3]
{e1 ∧ e2 ∧ e3}
BasisL[]
{{1}, {e1, e2, e3}, {e1 ∧ e2, e1 ∧ e3, e2 ∧ e3}, {e1 ∧ e2 ∧ e3}}
You can combine these into a single list by using Mathematica's inbuilt Flatten function.
Flatten[BasisL[]]
{1, e1, e2, e3, e1 ∧ e2, e1 ∧ e3, e2 ∧ e3, e1 ∧ e2 ∧ e3}
This is a basis of the Grassmann algebra whose underlying linear space is 3-dimensional. As a linear space, the algebra has 2³ = 8 dimensions corresponding to its 8 basis elements.
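The construction of these bases can be sketched outside Mathematica as well. The following is a minimal Python illustration (not part of the GrassmannAlgebra package; the function name grassmann_basis is our own) that builds the graded basis of the algebra from a list of 1-element names, mirroring BasisL[m], BasisL[] and Flatten[BasisL[]]:

```python
from itertools import combinations

def grassmann_basis(names):
    """Basis of the Grassmann algebra on the 1-element basis `names`,
    grouped by grade m = 0..n, each grade in standard (lexicographic)
    order. Grade 0 is the single element 1."""
    n = len(names)
    return [["^".join(c) if c else "1" for c in combinations(names, m)]
            for m in range(n + 1)]

# Flattening the grades gives all 2^n basis elements of the algebra:
flat = [b for grade in grassmann_basis(["e1", "e2", "e3"]) for b in grade]
```

Because combinations yields subsets in lexicographic order, each grade comes out in the standard ordering described below.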
Suppose now we declare a 4-space:
¯!4
{e1, e2, e3, e4}
Entering BasisPalette would then give you the 16 basis elements of the corresponding Grassmann algebra:
BasisPalette
Basis Palette
L_0: 1
L_1: e1, e2, e3, e4
L_2: e1 ∧ e2, e1 ∧ e3, e1 ∧ e4, e2 ∧ e3, e2 ∧ e4, e3 ∧ e4
L_3: e1 ∧ e2 ∧ e3, e1 ∧ e2 ∧ e4, e1 ∧ e3 ∧ e4, e2 ∧ e3 ∧ e4
L_4: e1 ∧ e2 ∧ e3 ∧ e4
You can click on any of the buttons to get its contents pasted into your notebook. For example, you can compose Grassmann numbers of your choice by clicking on the relevant basis elements.
X = a + b e1 + c e1 ∧ e3 + d e1 ∧ e3 ∧ e4
Standard ordering
The standard ordering of the basis of L_1 is defined as the ordering of the elements in the declared list of basis elements. This ordering induces a natural standard ordering on the basis elements of L_m: if the basis elements of L_1 were letters of the alphabet arranged alphabetically, then the basis elements of L_m would be words arranged alphabetically. Equivalently, if the basis elements were digits, the ordering would be numeric.
For example, if we take {A, B, C, D} as basis, we can see from its BasisPalette that the basis elements for each of the bases are arranged alphabetically.
[Basis Palette for {A, B, C, D}, with columns L_0 through L_4]
For example, suppose we have a basis {e1, e2, e3, e4} for L_1; then the standard basis for L_3 is given by:
¯!4; BasisL[3]
{e1 ∧ e2 ∧ e3, e1 ∧ e2 ∧ e4, e1 ∧ e3 ∧ e4, e2 ∧ e3 ∧ e4}
We could, if we wished, set up rules to denote these basis elements more compactly, carrying the grade 3 as a label on the new symbols. For example:
{e1 ∧ e2 ∧ e3 → e1⁽³⁾, e1 ∧ e2 ∧ e4 → e2⁽³⁾, e1 ∧ e3 ∧ e4 → e3⁽³⁾, e2 ∧ e3 ∧ e4 → e4⁽³⁾}
We will find this more compact notation of some use in later theoretical derivations. Note however that GrassmannAlgebra is set up to accept declarations only for the basis of L_1, from which it generates the bases of the other exterior linear spaces of the algebra.
2.6 Cobases
Definition of a cobasis
The notion of cobasis of a basis will be conceptually and notationally useful in our development of later concepts. The cobasis of a basis of L_m is simply the basis of L_(n−m) which has been ordered in a special way relative to the basis of L_m.
Let {e1, e2, …, en} be a basis for L_1. The cobasis element associated with a basis element ei is denoted e̲i and is defined as the product of the remaining basis elements, ordered such that the exterior product of a basis element with its cobasis element is equal to the basis n-element of L_n:

ei ∧ e̲i == e1 ∧ e2 ∧ … ∧ en

That is:

e̲i = (−1)^(i−1) e1 ∧ … ∧ êi ∧ … ∧ en    2.21

(the circumflex denoting that ei is omitted from the product).
The choice of the underbar notation to denote the cobasis may be viewed as a mnemonic to indicate that the element e̲i is the basis n-element of L_n with ei 'struck out' from it.
Cobasis elements have similar properties to Euclidean complements, which are denoted with overbars (see Chapter 5: The Complement). However, it should be noted that the underbar denotes an element, not an operation: it is defined only for declared basis elements, so an underbarred version of an arbitrary element is not defined.
More generally, the cobasis element of the basis m-element ei == ei1 ∧ … ∧ eim of L_m is denoted e̲i and is defined as the product of the remaining basis elements, ordered such that:

ei ∧ e̲i == e1 ∧ e2 ∧ … ∧ en    2.22

From the above definition it can be seen that the exterior product of a basis element with the cobasis element of another basis element is zero. Hence we can write:
ei ∧ e̲j == δij e1 ∧ e2 ∧ … ∧ en    2.24

The cobasis of unity
The cobasis element of unity, 1̲, is defined in the same way by:

1 ∧ 1̲ == e1 ∧ e2 ∧ … ∧ en

Thus:

1̲ == e1 ∧ e2 ∧ … ∧ en == e⁽ⁿ⁾    2.25

Any of these three representations will be called the basis n-element of the space. 1̲ may also be described as the cobasis of unity. Note carefully that 1̲ may be different in different bases.
¯!3; CobasisPalette
Cobasis Palette
Basis           Cobasis
1               e1 ∧ e2 ∧ e3
e1              e2 ∧ e3
e2              −(e1 ∧ e3)
e3              e1 ∧ e2
e1 ∧ e2         e3
e1 ∧ e3         −e2
e2 ∧ e3         e1
e1 ∧ e2 ∧ e3    1
If you want to compose a palette for another basis, first declare the basis.
[Cobasis Palette for the newly declared basis, arranged as above]
The cobasis of a cobasis
Since the cobasis element e̲j of a basis m-element is itself a basis (n−m)-element, it too has a cobasis element. This is denoted e̳j and is defined, as expected, as the product of the remaining basis elements such that:

e̲i ∧ e̳j == δij e1 ∧ e2 ∧ … ∧ en    2.26

On the other hand, interchanging an m-element and an (n−m)-element in an exterior product gives:

ei ∧ e̲j == (−1)^(m(n−m)) e̲j ∧ ei

which, by comparison with the definition of the cobasis, shows that:

e̳i == (−1)^(m(n−m)) ei    2.27
That is, the cobasis element of a cobasis element of an element is (apart from a possible sign) equal
to the element itself. We shall find this formula useful as we develop general formulae for the
complement and interior products in later chapters.
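The sign bookkeeping above is easy to check mechanically. The following Python sketch (our own helper functions, not part of the GrassmannAlgebra package) computes the cobasis of a basis m-element by taking the complementary index set with the sign of the permutation that restores standard order, and verifies the double-cobasis sign (−1)^(m(n−m)):

```python
def perm_sign(seq):
    """Sign of the permutation that sorts seq (distinct integers),
    computed by counting inversions."""
    sign, lst = 1, list(seq)
    for i in range(len(lst)):
        for j in range(i + 1, len(lst)):
            if lst[i] > lst[j]:
                sign = -sign
    return sign

def cobasis(indices, n):
    """Cobasis of the basis m-element e_I (I = indices) in n-space.
    Returns (sign, complement) so that e_I ^ (sign * e_complement)
    equals the basis n-element e_1 ^ ... ^ e_n."""
    comp = tuple(j for j in range(1, n + 1) if j not in indices)
    return perm_sign(tuple(indices) + comp), comp
```

For example, in 3-space cobasis((2,), 3) gives (−1, (1, 3)), agreeing with the palette entry e2 → −(e1 ∧ e3).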
2.7 Determinants
The determinant:

D == | a11 a12 … a1n |
     | a21 a22 … a2n |
     |  ⋮   ⋮   ⋱  ⋮  |
     | an1 an2 … ann |

may be calculated by considering each of its rows (or columns) as a 1-element. Here, we consider rows. Development by columns may be obtained mutatis mutandis. Introduce an arbitrary basis ei in order to encode the position of each entry in its row, and let:

ai == ai1 e1 + ai2 e2 + … + ain en

Then form the exterior product of all the ai. We can arrange the ai in rows to mirror the rows of the array:

a1 ∧ a2 ∧ … ∧ an == (a11 e1 + a12 e2 + … + a1n en)
                  ∧ (a21 e1 + a22 e2 + … + a2n en)
                  ⋮
                  ∧ (an1 e1 + an2 e2 + … + ann en)

Expanding this product and collecting terms shows that it is a scalar multiple of the basis n-element, and that the scalar is precisely D:

a1 ∧ a2 ∧ … ∧ an == D e1 ∧ e2 ∧ … ∧ en

Because L_n is of dimension 1, we can express this uniquely as:

D == (a1 ∧ a2 ∧ … ∧ an) / (e1 ∧ e2 ∧ … ∧ en)    2.28
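This definition can be turned directly into code. Below is a small Python sketch (an illustration under our own naming, not the GrassmannAlgebra package) that represents a multivector as a dictionary from sorted index tuples to coefficients, wedges the rows together, and reads off the determinant as the coefficient of e1 ∧ … ∧ en:

```python
def wedge(u, v):
    """Exterior product of two multivectors, each a dict
    {sorted index tuple: coefficient}."""
    out = {}
    for s, a in u.items():
        for t, b in v.items():
            if set(s) & set(t):
                continue  # repeated basis factor => zero
            merged = list(s + t)
            sign = 1  # sign of the permutation sorting the indices
            for i in range(len(merged)):
                for j in range(i + 1, len(merged)):
                    if merged[i] > merged[j]:
                        sign = -sign
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0) + sign * a * b
    return {k: c for k, c in out.items() if c != 0}

def vec(coeffs):
    """Encode a row (a_i1, ..., a_in) as the 1-element sum a_ij e_j."""
    return {(j,): c for j, c in enumerate(coeffs) if c != 0}

def det_by_wedge(rows):
    """Wedge all rows; the coefficient of e_1^...^e_n is the determinant."""
    n = len(rows)
    m = vec(rows[0])
    for r in rows[1:]:
        m = wedge(m, vec(r))
    return m.get(tuple(range(n)), 0)
```

Swapping two rows flips the sign of the wedge and hence of the result, exactly as the determinant properties below require.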
Properties of determinants
All the well-known properties of determinants proceed directly from the properties of the exterior
product.
… ∧ ai ∧ … ∧ aj ∧ … == −(… ∧ aj ∧ … ∧ ai ∧ …)

… ∧ ai ∧ … ∧ ai ∧ … == 0

a1 ∧ … ∧ (k ai) ∧ … == k (a1 ∧ … ∧ ai ∧ …)

a1 ∧ … ∧ (Σi ai) ∧ … == Σi (a1 ∧ … ∧ ai ∧ …)

The exterior product is unchanged if to any factor is added multiples of other factors.

a1 ∧ … ∧ (ai + Σ_{j≠i} kj aj) ∧ … == a1 ∧ … ∧ ai ∧ …

The Laplace expansion technique
The determinant may also be developed by first expanding the exterior product of (say) the first p of the ai, then expanding the product of the remaining n−p, and finally multiplying the two results together. Each of the first two operations produces an element with Binomial[n, p] == Binomial[n, n−p] terms.
A generalization of the Laplace expansion technique is evident from the fact that the exterior product of the ai may be effected in any grouping and sequence which facilitates the computation.
Calculating determinants
As an example, take the following matrix. We can calculate its determinant with Mathematica's inbuilt Det function:

Det[( a 0 0 0 0 0 0 0
      0 0 0 c 0 0 0 a
      0 b 0 0 0 f 0 0
      0 0 0 0 e 0 0 0
      0 0 e 0 0 0 g 0
      0 f 0 d 0 c 0 0
      g 0 0 0 0 0 0 h
      0 0 0 0 d 0 b 0 )]

a b² c² e² h − a b c e² f² h

To calculate the same determinant by exterior products, we first declare a basis of dimension 8,

DeclareBasis[8]
{e1, e2, e3, e4, e5, e6, e7, e8}

and then expand and simplify the exterior product of the 8 rows or columns expressed as 1-elements. Here we use ExpandAndSimplifyExteriorProducts (as it is somewhat faster in this specialized case than the more general GrassmannExpandAndSimplify) to expand by columns.

ExpandAndSimplifyExteriorProducts[
 (a e1 + g e7) ∧ (b e3 + f e6) ∧ (e e5) ∧ (c e2 + d e6) ∧
 (e e4 + d e8) ∧ (f e3 + c e6) ∧ (g e5 + b e8) ∧ (a e2 + h e7)]

(a b² c² e² h − a b c e² f² h) e1 ∧ e2 ∧ e3 ∧ e4 ∧ e5 ∧ e6 ∧ e7 ∧ e8
Speeding up calculations
For maximum speed it is clearly better to use Mathematica's inbuilt Det wherever possible. If
however, you are using GrassmannExpandAndSimplify or
ExpandAndSimplifyExteriorProducts it is usually faster to apply the generalized Laplace
expansion technique. That is, the order in which the products are calculated is arranged with the
objective of minimizing the number of resulting terms produced at each stage. You can do this by
dividing the product up into groupings of factors where each grouping contains as few basis
elements as possible in common with the others. You then calculate the result from each grouping
before combining them into the final result. Of course, you must ensure that the sign of the overall
regrouping is the same as the original expression.
In the example below, we have rearranged the determinant calculation above into three groupings,
which we calculate successively before combining the results. Our determinant calculation then
becomes
G = ExpandAndSimplifyExteriorProducts;
-G[G[(a e1 + g e7) ∧ (c e2 + d e6) ∧ (a e2 + h e7)] ∧
  G[(b e3 + f e6) ∧ (f e3 + c e6)] ∧
  G[(e e5) ∧ (e e4 + d e8) ∧ (g e5 + b e8)]]

a b c e² (b c − f²) h e1 ∧ e2 ∧ e3 ∧ e4 ∧ e5 ∧ e6 ∧ e7 ∧ e8
This grouping took a third of the time of the direct approach above.
Historical Note
Grassmann applied his Ausdehnungslehre to the theory of determinants and linear
equations quite early in his work. Later, Cauchy published his technique of 'algebraic keys'
which essentially duplicated Grassmann's results. To claim priority, Grassmann was led to
publish his only paper in French, obviously directed at Cauchy: 'Sur les différents genres de
multiplication' ('On different types of multiplication') [1855]. For a complete treatise on the
theory of determinants from a Grassmannian viewpoint see R. F. Scott [1880].
2.8 Cofactors
Cofactors from exterior products
To explore cofactors, return to the exterior product form of the determinant:

a1 ∧ a2 ∧ … ∧ an == (a11 e1 + a12 e2 + … + a1n en)
                  ∧ (a21 e1 + a22 e2 + … + a2n en)
                  ⋮
                  ∧ (an1 e1 + an2 e2 + … + ann en)

Consider first the exterior product of all the factors but the first:

a2 ∧ … ∧ an == (a21 e1 + a22 e2 + … + a2n en)
             ⋮
             ∧ (an1 e1 + an2 e2 + … + ann en)

Multiplying out these n−1 factors results in an expression of the form:

a2 ∧ … ∧ an == a̲11 (e2 ∧ e3 ∧ … ∧ en) + a̲12 (−e1 ∧ e3 ∧ … ∧ en) + … + a̲1n ((−1)^(n−1) e1 ∧ e2 ∧ … ∧ en−1)

Here, the signs attached to the basis (n−1)-elements have been specifically chosen so that together they correspond to the cobasis elements of e1 to en. The underscored scalar coefficients have yet to be interpreted. Thus we can write:

a2 ∧ … ∧ an == a̲11 e̲1 + a̲12 e̲2 + … + a̲1n e̲n

The original product then becomes:

a1 ∧ a2 ∧ … ∧ an == (a11 e1 + a12 e2 + … + a1n en) ∧ (a̲11 e̲1 + a̲12 e̲2 + … + a̲1n e̲n)

Multiplying this out, and remembering that the exterior product of a basis element with the cobasis element of another basis element is zero, we get the sum:

a1 ∧ a2 ∧ … ∧ an == (a11 a̲11 + a12 a̲12 + … + a1n a̲1n) e1 ∧ e2 ∧ … ∧ en == D e1 ∧ e2 ∧ … ∧ en

The coefficients a̲1j are thus the cofactors of the a1j.
Mnemonically, we can visualize a̲ij as the determinant with the row and column containing aij struck out, the underscore suggesting the 'striking out'.
Of course, there is nothing special in the choice of the element a1 about which to expand the determinant. We could have written the expansion in terms of any factor ai (row or column), giving D == ai1 a̲i1 + ai2 a̲i2 + … + ain a̲in.
It can be seen immediately that the product of the elements of a row with the cofactors of any other
row is zero, since in the exterior product formulation the row must have been included in the
calculation of the cofactors.
Summarizing these results for both row and column expansions, we have:

Σ_{j=1}^{n} aij a̲kj == Σ_{j=1}^{n} aji a̲jk == D δik    2.29

or, in matrix terms, with Ac denoting the matrix of cofactors a̲ij and I the identity matrix:

A Acᵀ == Aᵀ Ac == D I
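These orthogonality relations are straightforward to verify numerically. The following Python sketch (our own helper names, using a plain first-row cofactor expansion for the determinant) builds the cofactor matrix and checks equation 2.29 entry by entry:

```python
def minor(m, i, j):
    """Matrix m with row i and column j struck out."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]

def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j))
               for j in range(len(m)))

def cofactor_matrix(m):
    n = len(m)
    return [[(-1) ** (i + j) * det(minor(m, i, j)) for j in range(n)]
            for i in range(n)]

def check_row_cofactor_identity(m):
    """sum_j a[i][j] * C[k][j] == det(m) * delta(i, k)  (equation 2.29)."""
    n, d, c = len(m), det(m), cofactor_matrix(m)
    return all(sum(m[i][j] * c[k][j] for j in range(n))
               == (d if i == k else 0)
               for i in range(n) for k in range(n))
```

The off-diagonal zeros reflect the remark above: the product of a row with the cofactors of a different row repeats that row inside the wedge, which annihilates the product.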
The Laplace expansion in cofactor form
We have already met the Laplace expansion technique; in this section we revisit it more specifically in the context of the cofactor. The results will be important in deriving later results.
Let the ei be basis m-elements and the e̲i be their corresponding cobasis (n−m)-elements (i : 1, …, ν, where ν == Binomial[n, m]). To fix ideas, suppose we expand a determinant about the first m rows:

(a1 ∧ … ∧ am) ∧ (am+1 ∧ … ∧ an) == (a1 e1 + a2 e2 + … + aν eν) ∧ (a̲1 e̲1 + a̲2 e̲2 + … + a̲ν e̲ν)

Here, the ai and a̲i are simply the coefficients resulting from the partial expansions. As with basis 1-elements, the exterior product of a basis m-element with the cobasis element of another basis m-element is zero. That is:

ei ∧ e̲j == δij e1 ∧ e2 ∧ … ∧ en

Hence:

D == a1 a̲1 + a2 a̲2 + … + aν a̲ν

This is the Laplace expansion. The ai are minors and the a̲i are their cofactors.
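As a concrete cross-check, here is a Python sketch (our own function names; a sketch of the classical formula rather than the GrassmannAlgebra implementation) that expands a determinant about its first m rows as a sum of products of m×m minors with their complementary cofactors:

```python
from itertools import combinations

def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def sub(mat, rows, cols):
    return [[mat[i][j] for j in cols] for i in rows]

def laplace_first_rows(mat, m):
    """Laplace expansion of det(mat) about the first m rows:
    sum over column sets J of sign(J) * minor * complementary minor."""
    n = len(mat)
    top, bottom = range(m), range(m, n)
    total = 0
    for J in combinations(range(n), m):
        K = [j for j in range(n) if j not in J]
        sign = (-1) ** (sum(J) + sum(top))  # 0-based row/column index sums
        total += sign * det(sub(mat, top, J)) * det(sub(mat, bottom, K))
    return total
```

For any m the expansion reproduces the full determinant, which is exactly the statement D == Σ ai a̲i above.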
Exploring the calculation of determinants using minors and cofactors
Our example involves a fourth order determinant. To begin, then, we declare a 4-space, compose a list of 4 vectors named x, y, z and w, and call the list M. The new scalar coefficients generated by ComposeVector are automatically added to the list of declared scalars.
We can extract the scalar coefficients from M using the GrassmannAlgebra function ExtractCoefficients. The result is a 4×4 matrix Mc whose determinant we proceed to calculate.

Mc = ExtractCoefficients[Basis] /@ M; MatrixForm[Mc]

( a1 a2 a3 a4
  b1 b2 b3 b4
  c1 c2 c3 c4
  d1 d2 d3 d4 )
Expanding and simplifying the exterior product of rows two, three and four gives the minors of the first row as coefficients of the resulting 3-element.

A = ¯%[y ∧ z ∧ w]

(-b3 c2 d1 + b2 c3 d1 + b3 c1 d2 - b1 c3 d2 - b2 c1 d3 + b1 c2 d3) e1 ∧ e2 ∧ e3 +
(-b4 c2 d1 + b2 c4 d1 + b4 c1 d2 - b1 c4 d2 - b2 c1 d4 + b1 c2 d4) e1 ∧ e2 ∧ e4 +
(-b4 c3 d1 + b3 c4 d1 + b4 c1 d3 - b1 c4 d3 - b3 c1 d4 + b1 c3 d4) e1 ∧ e3 ∧ e4 +
(-b4 c3 d2 + b3 c4 d2 + b4 c2 d3 - b2 c4 d3 - b3 c2 d4 + b2 c3 d4) e2 ∧ e3 ∧ e4

The cobasis of the current basis {e1, e2, e3, e4} can be displayed by entering Cobasis.

Cobasis
{e2 ∧ e3 ∧ e4, -(e1 ∧ e3 ∧ e4), e1 ∧ e2 ∧ e4, -(e1 ∧ e2 ∧ e3)}
By comparing the basis elements in the expansion of A with the cobasis elements of the current basis above, we see that both e1 ∧ e2 ∧ e3 and e1 ∧ e3 ∧ e4 require a change of sign to bring them into the required cobasis form. The crux of the calculation is to transfer this required sign to the coefficients of those basis elements (hence changing the sign of the coefficients).
We denote the cobasis elements of {e1, e2, e3, e4} by {e̲1, e̲2, e̲3, e̲4} and use a rule to replace the basis elements of A with the correctly signed values.

A1 = A /. {e2 ∧ e3 ∧ e4 -> e̲1,
 e1 ∧ e3 ∧ e4 -> -e̲2, e1 ∧ e2 ∧ e4 -> e̲3, e1 ∧ e2 ∧ e3 -> -e̲4}

(-b4 c3 d2 + b3 c4 d2 + b4 c2 d3 - b2 c4 d3 - b3 c2 d4 + b2 c3 d4) e̲1 -
(-b4 c3 d1 + b3 c4 d1 + b4 c1 d3 - b1 c4 d3 - b3 c1 d4 + b1 c3 d4) e̲2 +
(-b4 c2 d1 + b2 c4 d1 + b4 c1 d2 - b1 c4 d2 - b2 c1 d4 + b1 c2 d4) e̲3 -
(-b3 c2 d1 + b2 c3 d1 + b3 c1 d2 - b1 c3 d2 - b2 c1 d3 + b1 c2 d3) e̲4

Extracting the coefficients of the e̲i then gives the cofactors a̲i of the first row:

{-b4 c3 d2 + b3 c4 d2 + b4 c2 d3 - b2 c4 d3 - b3 c2 d4 + b2 c3 d4,
 b4 c3 d1 - b3 c4 d1 - b4 c1 d3 + b1 c4 d3 + b3 c1 d4 - b1 c3 d4,
 -b4 c2 d1 + b2 c4 d1 + b4 c1 d2 - b1 c4 d2 - b2 c1 d4 + b1 c2 d4,
 b3 c2 d1 - b2 c3 d1 - b3 c1 d2 + b1 c3 d2 + b2 c1 d3 - b1 c2 d3}
The determinant is then the sum of products of the first-row coefficients ai with their cofactors a̲i:

D1 = a1 a̲1 + a2 a̲2 + a3 a̲3 + a4 a̲4

a4 (b3 c2 d1 - b2 c3 d1 - b3 c1 d2 + b1 c3 d2 + b2 c1 d3 - b1 c2 d3) +
a3 (-b4 c2 d1 + b2 c4 d1 + b4 c1 d2 - b1 c4 d2 - b2 c1 d4 + b1 c2 d4) +
a2 (b4 c3 d1 - b3 c4 d1 - b4 c1 d3 + b1 c4 d3 + b3 c1 d4 - b1 c3 d4) +
a1 (-b4 c3 d2 + b3 c4 d2 + b4 c2 d3 - b2 c4 d3 - b3 c2 d4 + b2 c3 d4)

This can be easily verified as equal to the determinant of the original array computed using Mathematica's Det function.

Expand[D1] == Det[Mc]
True
Expand and simplify the product of the first and second rows.
B1 = ¯%[x ∧ y]

(-a2 b1 + a1 b2) e1 ∧ e2 + (-a3 b1 + a1 b3) e1 ∧ e3 + (-a4 b1 + a1 b4) e1 ∧ e4 +
(-a3 b2 + a2 b3) e2 ∧ e3 + (-a4 b2 + a2 b4) e2 ∧ e4 + (-a4 b3 + a3 b4) e3 ∧ e4
Expand and simplify the product of the third and fourth rows.
B2 = ¯%[z ∧ w]

(-c2 d1 + c1 d2) e1 ∧ e2 + (-c3 d1 + c1 d3) e1 ∧ e3 + (-c4 d1 + c1 d4) e1 ∧ e4 +
(-c3 d2 + c2 d3) e2 ∧ e3 + (-c4 d2 + c2 d4) e2 ∧ e4 + (-c4 d3 + c3 d4) e3 ∧ e4
Extract the coefficients of these basis 2-elements: those of B1 give the minors ai, and those of B2, taken with the appropriate signs and in reverse order, give the cofactors a̲i.
Now, noting the required changes in sign between the basis of L_2 and its cobasis:

{BasisL[2], CobasisL[2]}

{{e1 ∧ e2, e1 ∧ e3, e1 ∧ e4, e2 ∧ e3, e2 ∧ e4, e3 ∧ e4},
 {e3 ∧ e4, -(e2 ∧ e4), e2 ∧ e3, e1 ∧ e4, -(e1 ∧ e3), e1 ∧ e2}}

The coefficients of B2 are:

{-c2 d1 + c1 d2, c3 d1 - c1 d3, -c4 d1 + c1 d4,
 -c3 d2 + c2 d3, c4 d2 - c2 d4, -c4 d3 + c3 d4}

The determinant is then:

D2 = a1 a̲1 + a2 a̲2 + a3 a̲3 + a4 a̲4 + a5 a̲5 + a6 a̲6
This can be easily verified as equal to the determinant of the original array computed using
Mathematica's Det function.
Expand[D2] == Det[Mc]
True
Transformations of cobases
In this section we show that if a basis of the underlying linear space is transformed by a
transformation whose components are aij , then its cobasis is transformed by the induced
transformation whose components are the cofactors of the aij . For simplicity in what follows, we
use Einstein's summation convention in which a summation over repeated indices is understood.
Let aij be a transformation on the basis elements ej giving the new basis elements εi; that is, εi == aij ej.
Let a̲ij be the corresponding transformation on the cobasis elements e̲j giving the new cobasis elements ε̲i; that is, ε̲i == a̲ij e̲j.
Using the properties of the Kronecker delta, we can simplify the right side to show that the a̲ij are indeed the cofactors of the aij; that is, the cobasis transforms under the induced transformation described above.
Exploring transformations of cobases
The commands will be valid in an arbitrary-dimensional space, but we exhibit the 3-dimensional case as we proceed.
1 Reset all the defaults and declare the elements of the transformation as scalar symbols.

¯A; DeclareExtraScalarSymbols[a__]

3 Construct a formula for the adjoint Cn of An (the matrix of cofactors) using Mathematica's inbuilt functions.

8 Expand and simplify the exterior products on both sides of the equation.

Fn_ := Expand[ExpandAndSimplifyExteriorProducts[Ln]] == Expand[Rn]; F3

True
2.9 Solution of linear equations
Consider a system of m linear equations in n unknowns xi:

a11 x1 + a12 x2 + … + a1n xn == a1
a21 x1 + a22 x2 + … + a2n xn == a2
⋮
am1 x1 + am2 x2 + … + amn xn == am

Declare a basis {e1, …, em}, and collect the coefficients of each unknown xi into a 1-element Ci, and the constants into a 1-element C0:

Ci == a1i e1 + a2i e2 + … + ami em
C0 == a1 e1 + a2 e2 + … + am em

Multiplying the jth equation through by ej and adding the resulting equations then gives the system in the form:

x1 C1 + x2 C2 + … + xn Cn == C0    2.31

To obtain an equation from which some of the unknowns xi have been eliminated, it is only necessary to multiply the linear system through by the exterior product of the corresponding Ci.
If m == n and a solution for xi exists, it is obtained by eliminating x1, …, xi−1, xi+1, …, xn; that is, by multiplying the linear system through by C1 ∧ … ∧ Ci−1 ∧ Ci+1 ∧ … ∧ Cn:

xi C1 ∧ C2 ∧ … ∧ Cn == C1 ∧ … ∧ Ci−1 ∧ C0 ∧ Ci+1 ∧ … ∧ Cn    2.32

In this form we only have to calculate the (n−1)-element C1 ∧ … ∧ Ci−1 ∧ Ci+1 ∧ … ∧ Cn once. Dividing through by C1 ∧ C2 ∧ … ∧ Cn gives:

xi == (C1 ∧ … ∧ Ci−1 ∧ C0 ∧ Ci+1 ∧ … ∧ Cn) / (C1 ∧ C2 ∧ … ∧ Cn)    2.33
All the well-known properties of solutions to systems of linear equations proceed directly from the
properties of the exterior products of the Ci .
Consider, for example, the system of three equations:

a - 2 b + 3 c + 4 d == 2
2 a + 7 c - 5 d == 9
a + b + c + d == 8

To solve this system, first declare a basis of dimension at least equal to the number of equations. In this example, we declare a 4-space so that we can add another equation later.

DeclareBasis[4]
{e1, e2, e3, e4}

Next define a 1-element for each unknown which encodes its coefficients, and one which encodes the constants:

Ca = e1 + 2 e2 + e3;
Cb = -2 e1 + e3;
Cc = 3 e1 + 7 e2 + e3;
Cd = 4 e1 - 5 e2 + e3;
C0 = 2 e1 + 9 e2 + 8 e3;

The system may then be written:

a Ca + b Cb + c Cc + d Cd == C0

Suppose we wish to eliminate a and d, thus giving a relationship between b and c. To accomplish this we multiply the system equation through by Ca ∧ Cd.

(a Ca + b Cb + c Cc + d Cd) ∧ Ca ∧ Cd == C0 ∧ Ca ∧ Cd

Or, since the terms involving a and d will obviously be eliminated by their product with Ca ∧ Cd, we have more simply:

(b Cb + c Cc) ∧ Ca ∧ Cd == C0 ∧ Ca ∧ Cd
This can be put in a form similar to that of equations 2.32 and 2.33 by dividing through by the right side.

((b Cb + c Cc) ∧ Ca ∧ Cd) / (C0 ∧ Ca ∧ Cd) == 1

Expanding and simplifying the numerator and denominator then gives the required relationship between b and c.

¯%[(b Cb + c Cc) ∧ Ca ∧ Cd] / ¯%[C0 ∧ Ca ∧ Cd] == 1

(1/63) (27 b - 29 c) == 1
Now add a fourth equation, b - 3 c + d == 7, and redefine the coefficient elements accordingly:

Ca = e1 + 2 e2 + e3;
Cb = -2 e1 + e3 + e4;
Cc = 3 e1 + 7 e2 + e3 - 3 e4;
Cd = 4 e1 - 5 e2 + e3 + e4;
C0 = 2 e1 + 9 e2 + 8 e3 + 7 e4;

The unknown b is then given by equation 2.33 as:

b == (Ca ∧ C0 ∧ Cc ∧ Cd) / (Ca ∧ Cb ∧ Cc ∧ Cd)

b == ¯%[Ca ∧ C0 ∧ Cc ∧ Cd] / ¯%[Ca ∧ Cb ∧ Cc ∧ Cd]

b == -30/41
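Since the wedge of n 1-elements in an n-space is just the determinant of their components times the basis n-element, equation 2.33 reduces here to a ratio of two 4×4 determinants. The following Python sketch (our own det helper; exact arithmetic via fractions.Fraction) reproduces the result above:

```python
from fractions import Fraction

def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

# Components of the coefficient 1-elements of the four-equation system
Ca = [1, 2, 1, 0]
Cb = [-2, 0, 1, 1]
Cc = [3, 7, 1, -3]
Cd = [4, -5, 1, 1]
C0 = [2, 9, 8, 7]

# b = (Ca ^ C0 ^ Cc ^ Cd) / (Ca ^ Cb ^ Cc ^ Cd): each wedge collapses
# to the determinant of the stacked component rows.
b = Fraction(det([Ca, C0, Cc, Cd]), det([Ca, Cb, Cc, Cd]))
```

The denominator wedge is nonzero (the Ci are independent), so the quotient is well defined.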
2.10 Simplicity
An element is simple if it is the exterior product of 1-elements. We extend this definition to scalars by defining all scalars to be simple. Clearly also, since any n-element always reduces to a product of 1-elements, all n-elements are simple. Thus we see immediately that all 0-elements, 1-elements, and n-elements are simple. In the next section we show that all (n−1)-elements are also simple.
In 2-space, therefore, all elements of L_0, L_1, and L_2 are simple. In 3-space, all elements of L_0, L_1, L_2 and L_3 are simple. In higher-dimensional spaces, elements of grade m with 2 ≤ m ≤ n−2 are therefore the only ones that may not be simple. In 4-space, the only elements that may not be simple are those of grade 2.
There is a straightforward way of testing the simplicity of 2-elements not shared by elements of higher grade: a 2-element is simple if and only if its exterior square is zero. (The exterior square of α is α ∧ α.) Since odd elements (elements of odd grade) anti-commute, the exterior square of any odd element will be zero, even if it is not simple. And an even element of grade 4 or higher may be of the form of the exterior product of a 1-element with a non-simple 3-element, whence its exterior square is zero without its being simple.
We will return to a further discussion of simplicity from the point of view of factorization in Chapter 3: The Regressive Product.
All (n−1)-elements are simple
Consider first the sum of two simple (n−1)-elements. Since they can differ by at most one 1-element factor (otherwise they would together contain more than n independent factors), we can express them as α ∧ b1 and α ∧ b2, where α is a simple (n−2)-element. Summing these, and factoring out the common (n−2)-element, gives a simple (n−1)-element:

α ∧ b1 + α ∧ b2 == α ∧ (b1 + b2)

Any (n−1)-element can be expressed as a sum of simple (n−1)-elements. We can therefore prove the general case by supposing pairs of simple elements to be combined to form another simple element, until just one simple element remains.
Conditions for simplicity of a 2-element in a 4-space
We compose a general bivector B in a 4-space:

¯!4; B = ComposeBivector[a]

a1 e1 ∧ e2 + a2 e1 ∧ e3 + a3 e1 ∧ e4 + a4 e2 ∧ e3 + a5 e2 ∧ e4 + a6 e3 ∧ e4

Since we are supposing B to be simple, B ∧ B == 0. To see how this constrains the coefficients ai of the terms of B, we expand and simplify the product:

¯%[B ∧ B]

(2 a3 a4 - 2 a2 a5 + 2 a1 a6) e1 ∧ e2 ∧ e3 ∧ e4

We thus see that the condition for a 2-element in a 4-dimensional space to be simple may be written:

a3 a4 - a2 a5 + a1 a6 == 0    2.34
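The exterior-square computation behind condition 2.34 can be checked directly. This Python sketch (our own helper names; coefficients are listed on the standard-ordered pairs e12, e13, e14, e23, e24, e34) computes the coefficient of e1 ∧ e2 ∧ e3 ∧ e4 in B ∧ B for a 4-space bivector:

```python
from itertools import combinations

def wedge_square_coeff(a):
    """Coefficient of e1^e2^e3^e4 in B^B, where B has coefficients
    a = (a1..a6) on the standard-ordered 2-element basis of 4-space.
    Equals 2(a3 a4 - a2 a5 + a1 a6)."""
    pairs = list(combinations(range(1, 5), 2))  # standard ordering
    coeff = dict(zip(pairs, a))
    total = 0
    for p in pairs:
        for q in pairs:
            if set(p) & set(q):
                continue  # repeated factor => zero
            merged = list(p + q)
            sign = 1  # sign of the permutation sorting the four indices
            for i in range(4):
                for j in range(i + 1, 4):
                    if merged[i] > merged[j]:
                        sign = -sign
            total += sign * coeff[p] * coeff[q]
    return total

def is_simple_bivector_4space(a):
    return wedge_square_coeff(a) == 0
```

For instance, (e1 + e2) ∧ (e3 + e4), with coefficients (0, 1, 1, 1, 1, 0), passes the test, while the classic non-simple bivector e1 ∧ e2 + e3 ∧ e4 fails it.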
Conditions for simplicity of a 2-element in a 5-space
Similarly, in a 5-space:

¯!5; B = ComposeBivector[a]

a1 e1 ∧ e2 + a2 e1 ∧ e3 + a3 e1 ∧ e4 + a4 e1 ∧ e5 + a5 e2 ∧ e3 +
a6 e2 ∧ e4 + a7 e2 ∧ e5 + a8 e3 ∧ e4 + a9 e3 ∧ e5 + a10 e4 ∧ e5

B2 = ¯%[B ∧ B]

(2 a3 a5 - 2 a2 a6 + 2 a1 a8) e1 ∧ e2 ∧ e3 ∧ e4 +
(2 a4 a5 - 2 a2 a7 + 2 a1 a9) e1 ∧ e2 ∧ e3 ∧ e5 +
(2 a4 a6 - 2 a3 a7 + 2 a1 a10) e1 ∧ e2 ∧ e4 ∧ e5 +
(2 a4 a8 - 2 a3 a9 + 2 a2 a10) e1 ∧ e3 ∧ e4 ∧ e5 +
(2 a7 a8 - 2 a6 a9 + 2 a5 a10) e2 ∧ e3 ∧ e4 ∧ e5
For the bivector B to be simple we require this product B2 to be zero, hence each of its five
coefficients must be zero simultaneously. We can extract the coefficients by using
GrassmannAlgebra's ExtractCoefficients function, and then thread them into a list of five
equations.
Finally, we can solve these five equations with Mathematica's inbuilt Solve function.
By inspection we see that these three solutions correspond to the first three equations.
Consequently, from the generality of the expression for the bivector we expect any three equations
to suffice.
In summary, we can say that a 2-element in a 5-space is simple if any three of the following equations are satisfied:

a3 a5 - a2 a6 + a1 a8 == 0
a4 a5 - a2 a7 + a1 a9 == 0
a4 a6 - a3 a7 + a1 a10 == 0    2.35
a4 a8 - a3 a9 + a2 a10 == 0
a7 a8 - a6 a9 + a5 a10 == 0

The concept of simplicity will be explored in more detail in Chapter 3: The Regressive Product.
Suppose we have a 4-element X which we would like to factorize. X is expressed in terms of six 1-elements, and is known to be simple. For reference purposes we begin with the element in factored form X0 to assure ourselves that it is simple, but the example will take its expanded form X as starting point. The factorization we will obtain is of course unlikely to be of the same form as X0.

X0 = (2 x + 3 y) ∧ (u - v - x) ∧ (2 w - 5 z + y + u) ∧ (z - x);
X = ¯%[X0]

3 u ∧ v ∧ x ∧ y + 2 u ∧ v ∧ x ∧ z + 3 u ∧ v ∧ y ∧ z + 6 u ∧ w ∧ x ∧ y +
4 u ∧ w ∧ x ∧ z + 6 u ∧ w ∧ y ∧ z - 14 u ∧ x ∧ y ∧ z - 6 v ∧ w ∧ x ∧ y -
4 v ∧ w ∧ x ∧ z - 6 v ∧ w ∧ y ∧ z + 17 v ∧ x ∧ y ∧ z + 6 w ∧ x ∧ y ∧ z

We declare the si as scalar symbols and compose a generic 1-element f1, whose exterior product with X we will require to be zero:

¯¯s[s_];
f1 = s1 u + s2 v + s3 w + s4 x + s5 y + s6 z;
X1 = ¯%[f1 ∧ X] == 0

(-6 s1 - 6 s2 + 3 s3) u ∧ v ∧ w ∧ x ∧ y +
(-4 s1 - 4 s2 + 2 s3) u ∧ v ∧ w ∧ x ∧ z + (-6 s1 - 6 s2 + 3 s3) u ∧ v ∧ w ∧ y ∧ z +
(17 s1 + 14 s2 + 3 s4 - 2 s5 + 3 s6) u ∧ v ∧ x ∧ y ∧ z +
(6 s1 + 14 s3 + 6 s4 - 4 s5 + 6 s6) u ∧ w ∧ x ∧ y ∧ z +
(6 s2 - 17 s3 - 6 s4 + 4 s5 - 6 s6) v ∧ w ∧ x ∧ y ∧ z == 0
Since the coefficients of each of these terms must be zero, we now have some equations of constraint between the coefficients of f1. In this case it is easy to see that the first three are equivalent, so we can satisfy them by substituting 2 s1 + 2 s2 for s3.

X2 = (X1 /. s3 → 2 s1 + 2 s2) // Simplify

(17 s1 + 14 s2 + 3 s4 - 2 s5 + 3 s6)
(u ∧ v ∧ x ∧ y ∧ z + 2 u ∧ w ∧ x ∧ y ∧ z - 2 v ∧ w ∧ x ∧ y ∧ z) == 0

X3 = X2 /. s5 → (17 s1 + 14 s2 + 3 s4 + 3 s6)/2 // Simplify

True
Finally, we can substitute back into the expression for f1 to get an expression for a generic factor of X, which we call f2.

f2 = f1 /. s3 → 2 s1 + 2 s2 /. s5 → (17 s1 + 14 s2 + 3 s4 + 3 s6)/2

u s1 + v s2 + w (2 s1 + 2 s2) + x s4 + z s6 + (1/2) y (17 s1 + 14 s2 + 3 s4 + 3 s6)
X can now be constructed as the exterior product of any four linearly independent versions of f2. As an example, we create 1-elements xi by letting si equal 1, and the others equal 0.

x1 = f2 /. {s1 → 1, s2 → 0, s4 → 0, s6 → 0}
u + 2 w + (17 y)/2

x2 = f2 /. {s1 → 0, s2 → 1, s4 → 0, s6 → 0}
v + 2 w + 7 y

x3 = f2 /. {s1 → 0, s2 → 0, s4 → 1, s6 → 0}
x + (3 y)/2

x4 = f2 /. {s1 → 0, s2 → 0, s4 → 0, s6 → 1}
(3 y)/2 + z

X4 = ¯%[x1 ∧ x2 ∧ x3 ∧ x4]

(3/2) u ∧ v ∧ x ∧ y + u ∧ v ∧ x ∧ z + (3/2) u ∧ v ∧ y ∧ z + 3 u ∧ w ∧ x ∧ y +
2 u ∧ w ∧ x ∧ z + 3 u ∧ w ∧ y ∧ z - 7 u ∧ x ∧ y ∧ z - 3 v ∧ w ∧ x ∧ y -
2 v ∧ w ∧ x ∧ z - 3 v ∧ w ∧ y ∧ z + (17/2) v ∧ x ∧ y ∧ z + 3 w ∧ x ∧ y ∧ z
This turns out to be half of X. So by doubling one of the xi, say x4, we have the final result, which we can verify.

¯%[X == (u + 2 w + (17 y)/2) ∧ (v + 2 w + 7 y) ∧ (x + (3 y)/2) ∧ (3 y + 2 z)]

True
A non-simple element need not possess any 1-element factors at all. Consider the 2-element x ∧ y + u ∧ v and a generic 1-element f:

X = x ∧ y + u ∧ v;
f = s1 x + s2 y + s3 u + s4 v;
¯%[f ∧ X]

s1 u ∧ v ∧ x + s2 u ∧ v ∧ y + s3 u ∧ x ∧ y + s4 v ∧ x ∧ y

Clearly this expression can only be zero if all the coefficients si are zero, which in turn implies f must be zero. Hence in this case, X has no 1-element factors.
More generally, an element being non-simple does not imply that it has no 1-element factors. Here is a non-simple element with two 1-element factors.

X = (x ∧ y + u ∧ v) ∧ w ∧ z;
f = s1 x + s2 y + s3 u + s4 v + s5 w + s6 z;
¯%[f ∧ X]

-s1 u ∧ v ∧ w ∧ x ∧ z - s2 u ∧ v ∧ w ∧ y ∧ z + s3 u ∧ w ∧ x ∧ y ∧ z + s4 v ∧ w ∧ x ∧ y ∧ z
2.11 Exterior division

The definition of an exterior quotient
If α is an (m+k)-element and β is a k-element which divides it, the exterior quotient of α by β is an m-element γ defined by:

γ == α / β  ⟹  γ ∧ β == α    2.36

Division by a 1-element
Consider the quotient of a simple (m+1)-element α by a 1-element b. Since α contains b, we can write it as a1 ∧ a2 ∧ … ∧ am ∧ b. However, we could also have written this numerator in the more general form:

(a1 + t1 b) ∧ (a2 + t2 b) ∧ … ∧ (am + tm b) ∧ b

where the ti are arbitrary scalars. It is in this more general form that the numerator must be written before b can be 'divided out'. Thus the quotient may be written:

(a1 ∧ a2 ∧ … ∧ am ∧ b) / b == (a1 + t1 b) ∧ (a2 + t2 b) ∧ … ∧ (am + tm b)    2.37
Division by a k-element
Suppose now we have a quotient with a simple k-element denominator. Since the denominator is contained in the numerator, we can write the quotient as:

((a1 ∧ a2 ∧ … ∧ am) ∧ (b1 ∧ b2 ∧ … ∧ bk)) / (b1 ∧ b2 ∧ … ∧ bk)

To prepare for the 'dividing out' of the b factors, we rewrite the numerator in the more general form:

((a1 ∧ a2 ∧ … ∧ am) ∧ (b1 ∧ b2 ∧ … ∧ bk)) / (b1 ∧ b2 ∧ … ∧ bk)
== (a1 + t11 b1 + … + t1k bk) ∧ (a2 + t21 b1 + … + t2k bk) ∧ … ∧ (am + tm1 b1 + … + tmk bk)    2.38

For m == 1 this reduces to:

(a ∧ b1 ∧ b2 ∧ … ∧ bk) / (b1 ∧ b2 ∧ … ∧ bk) == a + t1 b1 + … + tk bk    2.39

Special cases
When the numerator is a scalar multiple a of the denominator, the quotient is just the scalar:

(a b1 ∧ b2 ∧ … ∧ bk) / (b1 ∧ b2 ∧ … ∧ bk) == a    2.40

And when the denominator is a scalar b, the quotient is exact:

(b a1 ∧ a2 ∧ … ∧ am) / b == a1 ∧ a2 ∧ … ∧ am    2.41
Automating the division process
The GrassmannAlgebra function ExteriorQuotient computes exterior quotients, introducing arbitrary scalar parameters ¯ti where required. The exterior quotient of the basis trivector e1 ∧ e2 ∧ e3 by the basis vector e1, for example, results in the variable bivector (e2 + e1 ¯t1) ∧ (e3 + e1 ¯t2).

ExteriorQuotient[(e1 ∧ e2 ∧ e3) / e1]

(e2 + e1 ¯t1) ∧ (e3 + e1 ¯t2)
ExteriorQuotient requires that the factors in the denominator are also explicitly displayed in the numerator.

ExteriorQuotient[((x - y) ∧ (y - z) ∧ (z - x)) / ((z - x) ∧ (x - y))]

y - z + (x - y) ¯t1 + (-x + z) ¯t2

More generally, you can use ExteriorQuotient on any Grassmann expression involving exterior quotients.

ExteriorQuotient[a (e1 ∧ e2 ∧ e3)/e1 + b (e1 ∧ e2 ∧ e3)/(e1 ∧ e2) + c (e1 ∧ e2 ∧ e3)/(e1 ∧ e2 ∧ e3)]

c + b (e3 + e1 ¯t1 + e2 ¯t2) + a (e2 + e1 ¯t3) ∧ (e3 + e1 ¯t4)
In order to explore the space of a simple m-element we can of course choose, as a basis for its underlying linear space, any set of m independent 1-elements whose exterior product is congruent to the given m-element. Such sets span the space of the simple m-element, and by extension, we call each of them a span of the m-element.
For example, if A == x ∧ y ∧ z, then we can say that a span of A is {x, y, z}. But of course, {x + y, y, z} and {a x, y, z} are also spans of A.
Practically, however, the simple m-elements we are working with will often be available to us in some known factored form, that is, as a given exterior product of m 1-elements. In these cases it is convenient for computational purposes to choose these factors to span the element, and, by abuse of terminology, to call the list of these factors the span of the element. (We can read "the span" to mean "the current working span" of the element.)
For example, if A == x ∧ y ∧ z, then we say that the span of A is {x, y, z}.
More generally, we define the k-span of a simple m-element as the list of all the essentially different exterior products of grade k formed from the span (or 1-span) of the element.
Given this preliminary preamble, we can now say that the most straightforward way to conceive of the span of a simple m-element is as the basis of an m-space, where the basis is the list of factors of the m-element in order. Similarly, the k-span of a simple m-element may be conceived of as the basis of the concomitant exterior product space of grade k, and the k-cospan of the element may be conceived of as its cobasis.
The k-span of a simple m-element has no natural notion of invariance attached to it, because the same m-element, factorized differently, will generate different k-spans. On the other hand, the pair comprising the k-span of an m-element and its concomitant (m−k)-span can, if present together appropriately in a multilinear form, generate results invariant under different factorizations. Many expressions in the algebra are multilinear forms, and an understanding of their properties is instructive. We will explore these forms in the sections to follow. But first, we solidify what these definitions mean by seeing how to compose a span.
Composing spans
In this section we discuss how to compose the span (1-span), k-span or k-cospan of a simple m-
element. As a concrete case, let us take a 4-element A. The compositions are independent of the
dimension of the currently declared space, but any computations with them will require the
dimension of the space to be at least that of the m-element. First we compose A. (By using
ComposeSimpleForm, the ai are automatically added to the list of declared vector symbols.)
A = ComposeSimpleForm[a, 4]
a1 ∧ a2 ∧ a3 ∧ a4
To compose the span of A, you can use the GrassmannAlgebra function ComposeSpan[A][1] or its alternative representation ¯#₁[A]. To compose the 2-span of A you can use ComposeSpan[A][2] or its alternative ¯#₂[A].
To compose the cospan of A, you can use ComposeCospan[A][1] or ¯#¹[A]. To compose the 2-cospan of A you can use ComposeCospan[A][2] or ¯#²[A]. (You can enter the 2 in ¯#² as either a superscript or a power.) Note that while a k-span is a list of k-elements, a k-cospan is a list of (m-k)-elements.
Since the exterior product is Listable (like all Grassmann products in GrassmannAlgebra), the
exterior product of a k-span and its k-cospan gives a list of 4-elements which simplify to 4 copies
of A.
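The claim that each k-span element wedged with its corresponding k-cospan element reproduces A can be checked with a small Python sketch (again illustrative, not package code): the k-cospan element is the exterior product of the complementary factors, signed by the permutation that restores the standard factor order.

```python
from itertools import combinations

factors = [1, 2, 3, 4]   # stand-ins for the factors a1, a2, a3, a4 of A

def perm_sign(seq):
    """Sign of the permutation that sorts seq (distinct entries)."""
    seq, sign = list(seq), 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def k_span(k):
    return list(combinations(factors, k))

def k_cospan(k):
    """Complementary factors of each k-span element, signed so that
    span_i ^ cospan_i == A exactly (not just up to sign)."""
    out = []
    for c in combinations(factors, k):
        rest = tuple(f for f in factors if f not in c)
        out.append((perm_sign(c + rest), rest))
    return out

# The 3-cospan of a1^a2^a3^a4 is {a4, -a3, a2, -a1}, as in the text.
print(k_cospan(3))   # [(1, (4,)), (-1, (3,)), (1, (2,)), (-1, (1,))]

# Each span/cospan pair rebuilds A with overall sign +1.
for c, (s, rest) in zip(k_span(2), k_cospan(2)):
    assert s * perm_sign(c + rest) == 1
```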
In the discussion of multilinear forms to follow we will usually use the alternative representation
¯# since its compact nature makes the composition of the forms clearer. Further syntax for using
¯# follows.
† To compose the k-spans or k-cospans for several values of k you can enter the values of k enclosed in ¯[ ]. For example
¯#¯[1,3][A]
{{a2 ∧ a3 ∧ a4, -(a1 ∧ a3 ∧ a4), a1 ∧ a2 ∧ a4, -(a1 ∧ a2 ∧ a3)}, {a4, -a3, a2, -a1}}
† If you want to multiply the elements of a span by scalar coefficients you can use Mathematica's
inbuilt listability attribute of Times, and simply multiply the span by the list of coefficients
(making sure that the two lists are the same length). The alias ¯¯S stands for
DeclareExtraScalarSymbols, and is here used to declare any subscripted s as a scalar.
¯¯S[s_];
{s1, s2, s3, s4, s5, s6} ¯#₂[A]
{s1 a1 ∧ a2, s2 a1 ∧ a3, s3 a1 ∧ a4, s4 a2 ∧ a3, s5 a2 ∧ a4, s6 a3 ∧ a4}
† If you want to multiply the elements of a span by scalar coefficients and then sum them, you can
use Mathematica's Dot (.) function.
† If you want to use some other type of listable product between your elements (other than
Times) and then sum them, you can use the GrassmannAlgebra alias for Mathematica's Total
function ¯S which simply sums the elements of any list.
† To pick out the ith element in any of these lists you can use Mathematica's inbuilt Part function by giving a subscript to the list in the form 〚i〛. For example
¯#₂[A]〚3〛
a1 ∧ a4
Thus we can write A as the exterior product of any two corresponding terms from a k-span and a k-
cospan.
True
Example: Refactorizations
Let us suppose we have a simple element A for which we have a factorization, and that we wish to
obtain a factorization that has a given 1-element X (which we know belongs to A) as one of the
factors. Take again the example of the previous section.
A = (e4 - 2 e7) ∧ (3 e5 + e2) ∧ (e1 + 2 e2); X = e1 - 6 e5;
By taking the 2-span of A and multiplying each of the elements by X we get 3 candidates.
¯#₂[A] ∧ X
{(e4 - 2 e7) ∧ (e2 + 3 e5) ∧ (e1 - 6 e5),
 (e4 - 2 e7) ∧ (e1 + 2 e2) ∧ (e1 - 6 e5),
 (e2 + 3 e5) ∧ (e1 + 2 e2) ∧ (e1 - 6 e5)}
¯%[¯#₂[A] ∧ X]
{-(e1 ∧ e2 ∧ e4) + 2 e1 ∧ e2 ∧ e7 + 3 e1 ∧ e4 ∧ e5 + 6 e1 ∧ e5 ∧ e7 + 6 e2 ∧ e4 ∧ e5 + 12 e2 ∧ e5 ∧ e7,
 -2 e1 ∧ e2 ∧ e4 + 4 e1 ∧ e2 ∧ e7 + 6 e1 ∧ e4 ∧ e5 + 12 e1 ∧ e5 ∧ e7 + 12 e2 ∧ e4 ∧ e5 + 24 e2 ∧ e5 ∧ e7,
 0}
-(e1 ∧ e2 ∧ e4) + 2 e1 ∧ e2 ∧ e7 + 3 e1 ∧ e4 ∧ e5 + 6 e1 ∧ e5 ∧ e7 + 6 e2 ∧ e4 ∧ e5 + 12 e2 ∧ e5 ∧ e7
Multilinear forms
In the chapters to follow we introduce various new Grassmann products in addition to the exterior
product: the regressive, interior, inner, generalized, hypercomplex and Clifford products. In the
computation of expressions involving these products we will often come across formulae which
express the computation as a sum of terms where each term itself is a product. These sums must
have the same natural invariance to transformations possessed by the original expression, and it
turns out they are all examples of multilinear forms.
M[a1, a2, …, α1 b1 + α2 b2, …, am] ==
  α1 M[a1, a2, …, b1, …, am] + α2 M[a1, a2, …, b2, …, am]
where the αi are scalars and the bi are also 1-elements. In this book however, because the M are
usually viewed as products, we write them in their infix form, just as we have seen that
Mathematica allows us to do with the exterior product: entering an expression in the argument
form above displays it in its infix form.
Wedge[a1, a2, …, α1 b1 + α2 b2, …, am] ==
  α1 Wedge[a1, a2, …, b1, …, am] + α2 Wedge[a1, a2, …, b2, …, am]
a1 ∧ a2 ∧ … ∧ (α1 b1 + α2 b2) ∧ … ∧ am ==
  α1 a1 ∧ a2 ∧ … ∧ b1 ∧ … ∧ am + α2 a1 ∧ a2 ∧ … ∧ b2 ∧ … ∧ am
More specifically, we say that a product operation ⊗ is a linear product if it is both left and right distributive under addition, and permits scalars to be factored out.
In the chapters to follow, the multilinear forms of interest may involve two, or sometimes three, different linear product operations. To represent these we choose three initially undefined infix operators from Mathematica's library of symbols: ⊗, ⊕, and ⊙ (CircleTimes, CirclePlus, CircleDot). We make them Listable (as all the Grassmann products are), and make them behave linearly within any functions we define that are intended to operate on expressions involving them.
Now if we replace a2, say, by α1 b1 + α2 b2, we can expand the expression to get a sum of two expressions.
Since the individual product operations are linear, the whole expression is multilinear: substituting
for any of the Gi or ai with a sum of terms will lead to a sum of expressions.
In later applications these linear products will of course be Grassmann products (exterior,
regressive, interior, inner, generalized, hypercomplex or Clifford), so the multilinear form F will
simply be a Grassmann expression (an element of the algebra under consideration).
Defining m:k-forms
In this section we define in more detail the particular types of multilinear forms we will find useful
later in the book. We will find it convenient to call them m:k-forms because, for any given simple
m-element, there are forms of grade k, where k runs from 0 to m. To begin consider a simple m-
element A.
A == a1 ∧ a2 ∧ … ∧ ak ∧ ak+1 ∧ … ∧ am
The k-span and k-cospan of A are denoted ¯#ₖ[A] and ¯#ᵏ[A]. Each of these lists contains n = Binomial[m, k] elements, of grade k and grade m-k respectively. The ith element of either list is denoted with a subscripted 〚i〛.

Fm:k == ∑(i=1..n) (G1 ⊕ ¯#ₖ[A]〚i〛) ⊙ (G2 ⊗ ¯#ᵏ[A]〚i〛)    2.44

where A is a simple m-element, k can range from 0 to m, and i can range from 1 to n = Binomial[m, k].
Fm:k == ∑(i=1..n) (G1 ⊕ ¯#ₖ[A]〚i〛) ⊙ (G2 ⊗ ¯#ᵏ[A]〚i〛)

∑(i=1..n) (G1 ⊕ ¯#ₖ[A]〚i〛) ⊙ (G2 ⊗ ¯#ᵏ[A]〚i〛)

∑(i=1..n) (¯#ₖ[A]〚i〛) ⊙ (¯#ᵏ[A]〚i〛)    2.45

∑(i=1..n) (G1 ⊕ ¯#ₖ[A]〚i〛) ⊙ (¯#ᵏ[A]〚i〛)

∑(i=1..n) (¯#ₖ[A]〚i〛) ⊙ (G2 ⊗ ¯#ᵏ[A]〚i〛)
Composing m:k-forms
All the product operations we will be dealing with are defined with the Mathematica attribute
Listable. This means that to build up expressions, you only need to enter a product of lists (with
the same number of elements in each), or the product of a single element and a list, to get a list of
the products. To see how this works in building up forms, we can take the simple case of
A = a1 Ô a2 Ô a3 and show the steps as follows.
If at any stage you want to sum the terms of a list, you can apply Mathematica's Total function,
or its GrassmannAlgebra alias ¯S.
Because all our product operations are defined to be listable you can compose the forms in the
previous section without explicitly indexing the terms (i) and specifying their number (n). All you
need to do is eliminate the index and replace the summation sign with ¯S.
For example if A = a1 Ô a2 Ô a3 , the following two expressions give the same result.
A = a1 ∧ a2 ∧ a3;
∑(i=1..3) (G1 ⊕ ¯#₁[A]〚i〛) ⊙ (G2 ⊗ ¯#¹[A]〚i〛)
Note that, because there are a number of product operators with their own precedence, it is advisable to ensure the expected behaviour by liberal use of parentheses in the input expression.
ExpandAndSimplifyForm[F]
a c (G1 ⊕ x) ⊙ (G2 ⊗ z) + b c (G1 ⊕ y) ⊙ (G2 ⊗ z)
A = a1 ∧ a2 ∧ a3;
A = (a x + b y) ∧ a2 ∧ a3;
ExpandAndSimplifyForm[F3,1]
A = a1 ∧ a2 ∧ a3;
Aa = a1 ∧ a2 ∧ a3 /. a1 -> a1 + α a2
(a1 + α a2) ∧ a2 ∧ a3
¯%[A == Aa]
True
Now consider the expression F involving the factors of A and make the same substitution.
ExpandAndSimplifyForm[Fa]
(G1 ⊕ a1) ⊙ (G2 ⊗ a2 ∧ a3) + α (G1 ⊕ a2) ⊙ (G2 ⊗ a2 ∧ a3)
If the form F had been invariant to this transformation, as the 3-element A was, Fa would have simplified back again to F. But Fa now has an extra term which we notice displays a2 twice, but in different parts of the product. This isolates the two occurrences of a2 so that the antisymmetry of the exterior product cannot reduce their product to zero.
This suggests that we should antisymmetrize the original form F to create a new form Fb by adding a term featuring the same factors ai of A but reordered to bring a2 to first place. If we do this we find Fb is invariant to the substitution a1 -> a1 + α a2.
Fc = Fb /. a1 -> a1 + α a2
(G1 ⊕ a2) ⊙ (G2 ⊗ -((a1 + α a2) ∧ a3)) + (G1 ⊕ (a1 + α a2)) ⊙ (G2 ⊗ a2 ∧ a3)
ExpandAndSimplifyForm[Fb == Fc]
True
Of course, if we want a form which is antisymmetric with respect to substitution of all of its factors
ai , we will need to include all the essentially different decompositions of the 3-element into a 1-
element and a 2-element.
Now, replacing a1 by a1 + α a2 + β a3 in this form leaves the form invariant to the substitution.
Fe = Fd /. a1 -> a1 + α a2 + β a3
(G1 ⊕ a2) ⊙ (G2 ⊗ -((a1 + α a2 + β a3) ∧ a3)) +
 (G1 ⊕ a3) ⊙ (G2 ⊗ (a1 + α a2 + β a3) ∧ a2) +
 (G1 ⊕ (a1 + α a2 + β a3)) ⊙ (G2 ⊗ a2 ∧ a3)
ExpandAndSimplifyForm[Fd == Fe]
True
The form Fd may be recognized as the m:k-form F3,1 introduced in the previous section.
Properties of m:k-forms
To this point we have discussed simple m-elements A and the m:k-forms Fm,k based on them.
A == a1 ∧ a2 ∧ … ∧ ak ∧ ak+1 ∧ … ∧ am
Our interest is to show that any transformation which leaves A invariant, also leaves any of its
associated m:k-forms Fm,k invariant. These transformations are equivalent to refactorizations of
the m-element: we want to show that an m:k-form is unchanged no matter what factorization of A
we use.
A = a1 ∧ a2 ∧ a3;
Expanding and simplifying Aa gives, as expected, the original factorization A multiplied by the
determinant of the coefficients. (But first we need to declare the coefficients as scalars.)
¯¯S[a__];
Ab = ¯%[Aa]
(-a1,3 a2,2 a3,1 + a1,2 a2,3 a3,1 + a1,3 a2,1 a3,2 - a1,1 a2,3 a3,2 - a1,2 a2,1 a3,3 + a1,1 a2,2 a3,3) a1 ∧ a2 ∧ a3
det = Det[{{a1,1, a1,2, a1,3}, {a2,1, a2,2, a2,3}, {a3,1, a3,2, a3,3}}]
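The determinant's appearance here can be mimicked numerically in a rough Python sketch (illustrative only; the coefficient array below is made up): if each factor ai is replaced by ∑j a[i][j] aj, the coefficient picked up by a1 ∧ a2 ∧ a3 is the signed sum over permutations of products of coefficients, i.e. exactly the determinant of the array.

```python
from itertools import permutations

# A hypothetical coefficient array a[i][j] for the substitution
# ai -> Sum_j a[i][j] aj applied to the factors of A = a1 ^ a2 ^ a3.
a = [[2, 1, 0],
     [0, 1, 3],
     [1, 0, 1]]

def wedge_coefficient(rows):
    """Coefficient of a1 ^ a2 ^ a3 in the wedge of the transformed
    factors: the signed sum over permutations, i.e. the determinant."""
    n, total = len(rows), 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):              # sign from the inversion count
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        term = 1
        for i in range(n):
            term *= rows[i][p[i]]
        total += sign * term
    return total

print(wedge_coefficient(a))   # 5, the determinant of the array
```

Any array with wedge_coefficient equal to 1 (such as the identity) leaves A unchanged, matching the invariance statement below.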
Any set of coefficients whose determinant is unity will provide a transformation to which A is
invariant. Now let us compose the range of 3:k-forms for A, and for Aa .
F3,k = Table[¯S[(G1 ⊕ ¯#ᵢ[A]) ⊙ (G2 ⊗ ¯#ⁱ[A])], {i, 0, 3}] // ExpandAndSimplifyForm
{(G1 ⊕ 1) ⊙ (G2 ⊗ a1 ∧ a2 ∧ a3),
 (G1 ⊕ a1) ⊙ (G2 ⊗ a2 ∧ a3) - (G1 ⊕ a2) ⊙ (G2 ⊗ a1 ∧ a3) + (G1 ⊕ a3) ⊙ (G2 ⊗ a1 ∧ a2),
 (G1 ⊕ a1 ∧ a2) ⊙ (G2 ⊗ a3) - (G1 ⊕ a1 ∧ a3) ⊙ (G2 ⊗ a2) + (G1 ⊕ a2 ∧ a3) ⊙ (G2 ⊗ a1),
 (G1 ⊕ a1 ∧ a2 ∧ a3) ⊙ (G2 ⊗ 1)}
Fa,3,k = (Table[¯S[(G1 ⊕ ¯#ᵢ[Aa]) ⊙ (G2 ⊗ ¯#ⁱ[Aa])], {i, 0, 3}] //
     ExpandAndSimplifyForm // Simplify) /. {Simplify[det] -> Δ, Simplify[-det] -> -Δ}
{Δ (G1 ⊕ 1) ⊙ (G2 ⊗ a1 ∧ a2 ∧ a3),
 Δ ((G1 ⊕ a1) ⊙ (G2 ⊗ a2 ∧ a3) - (G1 ⊕ a2) ⊙ (G2 ⊗ a1 ∧ a3) + (G1 ⊕ a3) ⊙ (G2 ⊗ a1 ∧ a2)),
 Δ ((G1 ⊕ a1 ∧ a2) ⊙ (G2 ⊗ a3) - (G1 ⊕ a1 ∧ a3) ⊙ (G2 ⊗ a2) + (G1 ⊕ a2 ∧ a3) ⊙ (G2 ⊗ a1)),
 Δ (G1 ⊕ a1 ∧ a2 ∧ a3) ⊙ (G2 ⊗ 1)}
Here we have used Mathematica's inbuilt Simplify to extract the determinant of the transformation and denote it symbolically by Δ. If we divide by Δ, we can easily see that this list of forms is equal to the original list of forms.
F3,k == Fa,3,k / Δ
True
Thus transformations (factorizations) which leave the 3-element A invariant, also leave all its 3:k-
forms invariant.
The complete span of a simple m-element A is denoted ¯#¯[A] and defined to be the ordered list of all the k-spans of A, with k ranging from 0 to m.
Similarly the complete cospan of a simple m-element A is denoted ¯#¯[A] and defined to be the ordered list of all the k-cospans of A, with k ranging from 0 to m.
A = u ∧ x ∧ y ∧ z;
¯#¯[A]
{{1}, {u, x, y, z}, {u ∧ x, u ∧ y, u ∧ z, x ∧ y, x ∧ z, y ∧ z},
 {u ∧ x ∧ y, u ∧ x ∧ z, u ∧ y ∧ z, x ∧ y ∧ z}, {u ∧ x ∧ y ∧ z}}
¯#¯[A]
{{u ∧ x ∧ y ∧ z}, {x ∧ y ∧ z, -(u ∧ y ∧ z), u ∧ x ∧ z, -(u ∧ x ∧ y)},
 {y ∧ z, -(x ∧ z), x ∧ y, u ∧ z, -(u ∧ y), u ∧ x}, {z, -y, x, -u}, {1}}
Since the exterior product is Listable, simplifying the exterior product of a complete span with
its complete cospan gives lists of the original m-element.
Taking the exterior product of a complete cospan with a complete span gives a similar result, but this time the exterior products of the k-cospan elements with the k-span elements include a sign (-1)^(k (m-k)).
{{u ∧ x ∧ y ∧ z},
 {-(u ∧ x ∧ y ∧ z), -(u ∧ x ∧ y ∧ z), -(u ∧ x ∧ y ∧ z), -(u ∧ x ∧ y ∧ z)},
 {u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z, u ∧ x ∧ y ∧ z},
 {-(u ∧ x ∧ y ∧ z), -(u ∧ x ∧ y ∧ z), -(u ∧ x ∧ y ∧ z), -(u ∧ x ∧ y ∧ z)},
 {u ∧ x ∧ y ∧ z}}
If m is odd then k(m-k) will be even for all k, and hence there will be no change in sign. If m is
even, then k(m-k) will alternate between even and odd as k ranges from 0 to m. Thus we can
construct a list of signs dependent on the parity of m.
For example
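As a quick illustration (a plain Python sketch, not the package's own sign construction), the list of signs (-1)^(k(m-k)) for k = 0, …, m can be generated directly, confirming the parity behaviour just described:

```python
def cospan_span_signs(m):
    """Signs (-1)^(k(m-k)) picked up when a k-cospan element is
    wedged with its k-span element, for k = 0, ..., m."""
    return [(-1) ** (k * (m - k)) for k in range(m + 1)]

print(cospan_span_signs(3))   # [1, 1, 1, 1]      : m odd, no sign changes
print(cospan_span_signs(4))   # [1, -1, 1, -1, 1] : m even, alternating
```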
With this sign construction, we can now determine a relationship between products of complete
spans and cospans. Below we verify the identity for m ranging from 1 to 5.
¯!6;
Table[¯$[¯#¯[A_m] ∧ ¯#¯[A_m] == (-1)^¯rr[m] (¯#¯[A_m] ∧ ¯#¯[A_m])], {m, 1, 5}]
In the more general case of a form based on the complete span and cospan of a simple m-element
A, we can obtain a similar identity providing we reverse all the lists, that is, reverse each list of
elements and the list of these lists. To simplify this two-level reversing process we define a
function ReverseAll with alias ¯R.
As might be expected, the identity remains valid no matter to which form the sign change
H-1L¯rr@mD or the ReverseAll operation is applied. Let
To verify these identities we first define some functions Ji @mD representing them which simplify
the terms, and then tabulate the combined results for m ranging from 1 to 5.
J1[m_] := ¯([F1[m]] == ¯([¯R[(-1)^¯rr[m] F2[m]]] == ¯([(-1)^¯rr[m] ¯R[F2[m]]];
J2[m_] := ¯([F2[m]] == ¯([¯R[(-1)^¯rr[m] F1[m]]] == ¯([(-1)^¯rr[m] ¯R[F1[m]]];
The intersection of two simple elements A and B is the element C of highest grade which is
common to both A and B.
A == A* ∧ C;  B == B* ∧ C;  A* ∧ C ∧ B* ≠ 0
Two elements A and B are said to be disjoint if C is of grade 0, that is, a scalar: thus A ∧ B ≠ 0. The formulae that we are going to develop still remain valid in this case, so we will not need to consider it further.
The first thing to note is that defining the union and intersection only defines them up to
congruence because they still remain valid if we multiply and divide the factors in the formulae by
an arbitrary scalar.
A == (A*/c) ∧ (c C);  B == (B*/c) ∧ (c C);  (A*/c) ∧ (c C) ∧ (B*/c) ≠ 0
The observation that the union and intersection are only defined up to congruence leads us to consider how we might develop a formula for the union and intersection which is not dependent on an arbitrary congruence factor. Suppose ⊙ is a linear product as discussed in the previous section, not equal to the exterior product, but otherwise arbitrary. Then the ⊙ product of A and B can be written as the ⊙ product of their union and intersection, and the congruence factors cancel.
A ⊙ (B* ∧ C) == (A ∧ B*) ⊙ C
Suppose B is of grade m, and B* is of grade k, then C is of grade m-k. To compute the union A Ô B*
we need to 'partition' B into such k and m-k elements, and then find the maximal value of k (equal
to k* , say) for which A Ô B* is not zero. To do this partitioning, B has to be simple, and we need to
know some factorization of it. If we partition B using its span and cospan for various k and for the
given factorization, then our formula becomes a special case of an m:k-form, and invariance of the
result to the actual factorization of B is guaranteed.
Hence we write the formula for the ⊙ product of the possibly intersecting simple elements A and B in terms of their union and intersection as

A ⊙ B == ¯S[(A ∧ ¯#ₖ*[B]) ⊙ ¯#ᵏ*[B]]    2.53

where k* is the maximum grade for which the right hand side is not zero.
In the sections to follow, we will make these notions more concrete with some examples.
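The mechanics of formula 2.53 can be sketched in Python (an illustrative model only, not the GrassmannAlgebra implementation): multivectors are dictionaries mapping sorted basis-index tuples to coefficients, and the intersection is accumulated from the k-span/k-cospan pairs of B. With A the expanded 3-element used in the worked examples of this chapter and B = e1 ∧ e2 ∧ e4, taking k = 2 recovers an intersection congruent to e1 + 2 e2.

```python
from itertools import combinations

def perm_sign(seq):
    """Sign of the permutation sorting seq; assumes distinct entries."""
    seq, sign = list(seq), 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def wedge(x, y):
    """Exterior product of multivectors held as {index tuple: coeff}."""
    out = {}
    for ix, cx in x.items():
        for iy, cy in y.items():
            idx = ix + iy
            if len(set(idx)) < len(idx):
                continue                      # repeated basis factor -> 0
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + perm_sign(idx) * cx * cy
    return {k: v for k, v in out.items() if v}

# A = (e4 - 2 e7) ^ (3 e5 + e2) ^ (e1 + 2 e2), expanded:
A = {(1, 2, 4): -1, (1, 2, 7): 2, (1, 4, 5): 3,
     (1, 5, 7): 6, (2, 4, 5): 6, (2, 5, 7): 12}

def intersection(A, B_factors, k):
    """Sum over the k-span/k-cospan pairs of B of the coefficient of
    the full n-element in A ^ span_i, times the signed cospan_i."""
    result = {}
    for c in combinations(B_factors, k):
        rest = tuple(f for f in B_factors if f not in c)
        sign = perm_sign(c + rest)
        top = wedge(A, {c: 1})        # at most one term (the n-element)
        scalar = sum(top.values())
        if scalar:
            result[rest] = result.get(rest, 0) + sign * scalar
    return result

print(intersection(A, [1, 2, 4], 2))   # {(2,): 12, (1,): 6}: 6 e1 + 12 e2
```

The result is only determined up to congruence, as the text explains: any scalar multiple of 6 e1 + 12 e2 is an equally valid intersection.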
DeclareExtraVectorSymbols[α_, β_, γ_];
Example: 2:k-forms
A = α1 ∧ α2 ∧ γ1;  B = β1 ∧ γ1;
Here it is obvious that our formula should give us
(α1 ∧ α2 ∧ γ1 ∧ 1) ⊙ (β1 ∧ γ1)
(α1 ∧ α2 ∧ γ1 ∧ β1) ⊙ γ1 + (α1 ∧ α2 ∧ γ1 ∧ γ1) ⊙ (-β1)
(α1 ∧ α2 ∧ γ1 ∧ β1 ∧ γ1) ⊙ 1
Clearly, when simplified, the first form (k = 0) is equal to A ⊙ B, and the third form (k = 2) is zero. The second form (k = 1) reduces to a single term, giving us the expected expression for the union and intersection. To simplify a form, or a list of forms, we use ExpandAndSimplifyForm, or its alias ¯(.
¯([T] // TableForm
(α1 ∧ α2 ∧ γ1) ⊙ (β1 ∧ γ1)
-(α1 ∧ α2 ∧ β1 ∧ γ1) ⊙ γ1
0
(Note that ¯( has automatically put the factors in the second form into canonical order, based in this case on the alphabetical order of the symbols used.)
Example: 3:k-forms
A = α1 ∧ α2 ∧ γ1;  B = β1 ∧ β2 ∧ γ1;
(α1 ∧ α2 ∧ γ1 ∧ 1) ⊙ (β1 ∧ β2 ∧ γ1)
(α1 ∧ α2 ∧ γ1 ∧ β1) ⊙ (β2 ∧ γ1) + (α1 ∧ α2 ∧ γ1 ∧ β2) ⊙ (-(β1 ∧ γ1)) + (α1 ∧ α2 ∧ γ1 ∧ γ1) ⊙ (β1 ∧ β2)
(α1 ∧ α2 ∧ γ1 ∧ β1 ∧ β2) ⊙ γ1 + (α1 ∧ α2 ∧ γ1 ∧ β1 ∧ γ1) ⊙ (-β2) + (α1 ∧ α2 ∧ γ1 ∧ β2 ∧ γ1) ⊙ β1
(α1 ∧ α2 ∧ γ1 ∧ β1 ∧ β2 ∧ γ1) ⊙ 1
¯([T] // TableForm
(α1 ∧ α2 ∧ γ1) ⊙ (β1 ∧ β2 ∧ γ1)
-(α1 ∧ α2 ∧ β1 ∧ γ1) ⊙ (β2 ∧ γ1) + (α1 ∧ α2 ∧ β2 ∧ γ1) ⊙ (β1 ∧ γ1)
(α1 ∧ α2 ∧ β1 ∧ β2 ∧ γ1) ⊙ γ1
0
We would not expect that interchanging the order of A and B in the form would make any
difference to the results, except perhaps for a change in sign.
A = α1 ∧ α2 ∧ γ1;  B = β1 ∧ β2 ∧ γ1;
(β1 ∧ β2 ∧ γ1) ⊙ (α1 ∧ α2 ∧ γ1)
-(α1 ∧ β1 ∧ β2 ∧ γ1) ⊙ (α2 ∧ γ1) + (α2 ∧ β1 ∧ β2 ∧ γ1) ⊙ (α1 ∧ γ1)
(α1 ∧ α2 ∧ β1 ∧ β2 ∧ γ1) ⊙ γ1
0
Neither would we expect that interchanging the order of the span ¯#k and cospan ¯#k functions in
the form would make any difference to the results, except again perhaps for a change in sign.
However this interchange does reverse the order of the forms, making the form displaying the
union and intersection come after the zero form, rather than before it.
A = α1 ∧ α2 ∧ γ1;  B = β1 ∧ β2 ∧ γ1;
0
(α1 ∧ α2 ∧ β1 ∧ β2 ∧ γ1) ⊙ γ1
-(α1 ∧ α2 ∧ β1 ∧ γ1) ⊙ (β2 ∧ γ1) + (α1 ∧ α2 ∧ β2 ∧ γ1) ⊙ (β1 ∧ γ1)
(α1 ∧ α2 ∧ γ1) ⊙ (β1 ∧ β2 ∧ γ1)
Example: 2:k-forms
We begin with the same elements as in the first example above. But now, we express all the 1-
element factors in terms of the current basis. The process is independent of the actual basis, so we
can choose it arbitrarily for the example, say an 8-space.
¯!8 ;
We compose A as the product of three factors to explicitly display the intersection as the last factor.
But the algorithm does not need to have A in its factored form, so we expand and simplify before
applying the formula.
A = ¯%[(e4 - 2 e7) ∧ (3 e5 + e2) ∧ (e1 + 2 e2)]
-(e1 ∧ e2 ∧ e4) + 2 e1 ∧ e2 ∧ e7 + 3 e1 ∧ e4 ∧ e5 + 6 e1 ∧ e5 ∧ e7 + 6 e2 ∧ e4 ∧ e5 + 12 e2 ∧ e5 ∧ e7
The second element B must be supplied to the algorithm in factored form, otherwise its spans and
cospans cannot be computed. However the factorization can be arbitrary, so we expand it and
refactor it differently so that it does not explicitly display the intersection we have chosen for the
example.
In this case we know that k equal to 1 will give us the result we are looking for.
Of course, this is in expanded form. We would like to factorize it to display the union and
intersection explicitly. We can do this with GrassmannAlgebra's FactorizeForm.
F2 = FactorizeForm[F1]
(e1 - 6 e5) ∧ (e2 + 3 e5) ∧ (e3 + (3 e5)/2) ∧ (e4 - 2 e7) ⊙ (2 e1 + 4 e2)
The intersection displayed here is congruent to the factor that we introduced as our 'hidden'
intersection in the A and B supplied to the form, and is therefore a correct result as any congruent
factor would be.
Although the union does not explicitly display the intersection, we can verify that the intersection
is indeed a factor of it, and that it is also a factor of A and B as supplied to the formula. Let U be the
union and J be the intersection.
U = (e1 - 6 e5) ∧ (e2 + 3 e5) ∧ (e3 + (3 e5)/2) ∧ (e4 - 2 e7);
J = 2 e1 + 4 e2;
then the following are zero as expected.
¯%[{U ∧ J, A ∧ J, B ∧ J}]
{0, 0, 0}
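These zero-tests can be modelled in a few lines of Python (illustrative only, not package code): with multivectors held as {sorted index tuple: coefficient} dictionaries, wedging the expanded A with J = 2 e1 + 4 e2 gives the empty multivector, confirming that J is a factor of A.

```python
def wedge(x, y):
    """Exterior product of multivectors held as {index tuple: coeff}."""
    out = {}
    for ix, cx in x.items():
        for iy, cy in y.items():
            idx = list(ix + iy)
            if len(set(idx)) < len(idx):
                continue                    # repeated basis factor -> 0
            sign = 1
            for i in range(len(idx)):       # bubble sort, tracking sign
                for j in range(len(idx) - 1 - i):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * cx * cy
    return {k: v for k, v in out.items() if v}

# A in expanded form, and the claimed common factor J = 2 e1 + 4 e2
A = {(1, 2, 4): -1, (1, 2, 7): 2, (1, 4, 5): 3,
     (1, 5, 7): 6, (2, 4, 5): 6, (2, 5, 7): 12}
J = {(1,): 2, (2,): 4}

print(wedge(A, J))                 # {} : A ^ J == 0, so J divides A
print(bool(wedge(A, {(3,): 1})))   # True: e3 is not a factor of A
```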
All the other factors of the original A and B are also easily verified to belong to U.
Example 1
We take the simplest example with A non-simple, and a 1-element intersection g1 , which does not
occur in the non-simple 2-element factor of A.
A = (α1 ∧ α2 + α3 ∧ α4) ∧ α5 ∧ γ1;  B = β1 ∧ γ1;
Since the intersection is a 1-element, and B is a 2-element, we can get the union and intersection
from the 1-span and 1-cospan of B.
Upon simplification, the second term is clearly zero, leaving the expected result as the first term and showing, in this case, that the union is not simple.
Example 2
Now let B contain one of the 1-element factors which occurs in the non-simple 2-element factor of A, say α1.
A = (α1 ∧ α2 + α3 ∧ α4) ∧ α5 ∧ γ1;  B = α1 ∧ γ1;
Upon simplification, the second term is again zero, but the first term, the union, is now simple.
† Thus we conclude that under some circumstances (Example 1) the union and intersection
formula will give straightforwardly interpretable results. But in others (Example 2) one would
need to pay special attention to their interpretation.
A = ¯%[(e4 - 2 e7) ∧ (3 e5 + e2) ∧ (e1 + 2 e2)]
-(e1 ∧ e2 ∧ e4) + 2 e1 ∧ e2 ∧ e7 + 3 e1 ∧ e4 ∧ e5 + 6 e1 ∧ e5 ∧ e7 + 6 e2 ∧ e4 ∧ e5 + 12 e2 ∧ e5 ∧ e7
But as we have seen in the sections above, it is really only the number of distinct symbols (hence independent 1-elements) that is important in computing unions and intersections. Thus we can simplify our computations by considering A as a 3-element in the 5-space {e1, e2, e4, e5, e7}, and B as a simple 3-element, using the formula involving the 2-span and 2-cospan of B.
B1 = e1 ∧ e2 ∧ e4;
(e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ⊙ (6 e1 + 12 e2)
The factor of interest, our first factor of A, is the intersection 6 e1 + 12 e2. We can ignore the union as it is not of any significance in this procedure. Repeating the procedure for the remaining possible independent 3-elements we can construct for B gives
BB = ¯#₃[e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7]
{e1 ∧ e2 ∧ e4, e1 ∧ e2 ∧ e5, e1 ∧ e2 ∧ e7, e1 ∧ e4 ∧ e5, e1 ∧ e4 ∧ e7,
 e1 ∧ e5 ∧ e7, e2 ∧ e4 ∧ e5, e2 ∧ e4 ∧ e7, e2 ∧ e5 ∧ e7, e4 ∧ e5 ∧ e7}
{(e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ⊙ (6 e1 + 12 e2),
 0,
 (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ⊙ (3 e1 + 6 e2),
 (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ⊙ (2 e1 - 12 e5),
 (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ⊙ (6 e4 - 12 e7),
 (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ⊙ (-e1 + 6 e5),
 (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ⊙ (2 e2 + 6 e5),
 (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ⊙ (-3 e4 + 6 e7),
 (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ⊙ (-e2 - 3 e5),
 (e1 ∧ e2 ∧ e4 ∧ e5 ∧ e7) ⊙ (-e4 + 2 e7)}
By inspection we can see that there are 4 different factors. We can take the simplest form congruent to each. Clearly they are not independent.
¯#₃[AA]
{(e1 + 2 e2) ∧ (e1 - 6 e5) ∧ (e4 - 2 e7),
 (e1 + 2 e2) ∧ (e1 - 6 e5) ∧ (e2 + 3 e5),
 (e1 + 2 e2) ∧ (e4 - 2 e7) ∧ (e2 + 3 e5),
 (e1 - 6 e5) ∧ (e4 - 2 e7) ∧ (e2 + 3 e5)}
By inspection the second factorization in the list, because it does not display all the symbols of A,
will be zero. The others are all candidates for a factorization of A, and all can be seen to be
congruent to A when expanded and simplified.
¯%[¯#₃[AA]]
{-2 e1 ∧ e2 ∧ e4 + 4 e1 ∧ e2 ∧ e7 + 6 e1 ∧ e4 ∧ e5 + 12 e1 ∧ e5 ∧ e7 + 12 e2 ∧ e4 ∧ e5 + 24 e2 ∧ e5 ∧ e7,
 0,
 -(e1 ∧ e2 ∧ e4) + 2 e1 ∧ e2 ∧ e7 + 3 e1 ∧ e4 ∧ e5 + 6 e1 ∧ e5 ∧ e7 + 6 e2 ∧ e4 ∧ e5 + 12 e2 ∧ e5 ∧ e7,
 -(e1 ∧ e2 ∧ e4) + 2 e1 ∧ e2 ∧ e7 + 3 e1 ∧ e4 ∧ e5 + 6 e1 ∧ e5 ∧ e7 + 6 e2 ∧ e4 ∧ e5 + 12 e2 ∧ e5 ∧ e7}
† In Chapter 3, we will explore an equivalent algorithm for factorizing simple elements, based on
the regressive product as the multilinear product.
2.14 Summary
This chapter has introduced the exterior product. Its codification of linear dependence is the
foundation of Grassmann's contribution to the mathematical and physical sciences. From it he was
able to build an algebraic system widely applicable to much of the physical world.
We began the chapter by formally extending the axioms of a linear space to include the exterior
product. We then showed how the exterior product leads naturally to an understanding of
determinants and linear systems of equations. Although Grassmann was the first to develop an
algebraic structure which naturally handled these now accepted denizens of "Linear Algebra", we
showed, in discussing the concepts of simplicity and exterior division, that his exterior product
spaces (although themselves linear spaces) had the potential to be much richer than the simple
linear space of Linear Algebra.
At the end of the chapter we undertook an analysis inspired by Grassmann of the various types of possible product structures. This analysis is really quite fundamental to his contribution, and confirms the central role played by the exterior and interior products, both in their own right and, as we shall see later in the book, in the construction of higher level products (the Clifford product, generalized Grassmann product and hypercomplex product) as sums of these two.
The chapter has not yet introduced any geometric or physical interpretations for the elements of the
algebra, as, being a mathematical system, its power lies in its potentially multiple interpretations.
Nevertheless, I am acutely aware that comprehension is enhanced by context, particularly the
geometric. I beg your indulgence that the next chapter is still devoid of any depiction of the entities
discussed, but hope that Chapter 4: Geometric Interpretations and subsequent examples will
redress this balance.
Chapter 3 will discover that the symmetry in the suite of linear spaces created by the exterior
product leads to a beautiful "dual" product: the regressive product. Equipped with these dual
products, Chapter 4 will then be able finally to provide a geometric interpretation for all the
different types of elements of the spaces. Chapter 5 shows how the symmetry further enables the
definition of a new operation: the complement. Then Chapter 6 shows how to combine all of these
to define the interior product, a much more general product than the scalar product. These chapters
form the critical foundation to Grassmann's algebra. The rest is exploration!
3.1 Introduction
Since the linear spaces L_m and L_(n-m) are of the same dimension Binomial[n, m] = Binomial[n, n-m], and hence are isomorphic, the opportunity exists to define a product operation (called the regressive product) dual to the exterior product such that theorems involving exterior products of m-elements have duals involving regressive products of (n-m)-elements.
Very roughly speaking, if the exterior product is associated with the notion of union, then the
regressive product is associated with the notion of intersection.
The regressive product appears to be little used in the recent literature. Grassmann's original
development did not distinguish notationally between the exterior and regressive product
operations. Instead he capitalized on the inherent duality and used a notation which, depending on
the grade of the elements in the product, could only sensibly be interpreted by one or the other
operation. This was a very elegant idea, but its subtlety may have been one of the reasons the
notion has become lost. (See "Grassmann's notation for the regressive product" in Section 3.3.)
However, since the regressive product is a simple dual operation to the exterior product, an
enticing and powerful symmetry is lost by ignoring it. We will find that its 'intersection' properties
are a useful conceptual and algorithmic addition to non-metric geometry, and that its algebraic
properties enable a firm foundation for the development of metric geometry. Some of the results
which are usually proven in metric spaces via the inner and cross products can also be proven
using the exterior and regressive products, thus showing that the result is independent of whether
or not the space has a metric.
The approach adopted in this book is to distinguish between the two product operations by using
different notations, just as Boolean algebra has its dual operations of union and intersection. We
will find that this approach does not detract from the elegance of the results. We will also find that
differentiating the two operations explicitly enhances the simplicity and power of the derivation of
results.
Since the commonly accepted modern notation for the exterior product operation is the 'wedge'
symbol Ô, we will denote the regressive product operation by a 'vee' symbol Ó. Note however that
this (unfortunately) does not correspond to the Boolean algebra usage of the 'vee' for union and the
'wedge' for intersection.
3.2 Duality
∧ → ∨        a_m → a_(n-m)        L_m → L_(n-m)    3.1
Note that here we are undertaking the construction of a mathematical structure and thus there is no specific mapping implied between individual elements at this stage. In Chapter 5: The Complement, we will introduce a mapping between the elements of L_m and L_(n-m) which will lead to the definition of complement and interior product.
{α_m ∈ L_m, β_k ∈ L_k}  ⟹  {α_m ∧ β_k ∈ L_(m+k)}
To form the dual of this axiom, replace ∧ with ∨, and the grades of elements and spaces by their complementary grades.
$$\Big\{\underset{n-m}{\alpha}\in\Lambda_{n-m},\ \underset{n-k}{\beta}\in\Lambda_{n-k}\Big\} \;\Rightarrow\; \Big\{\underset{n-m}{\alpha}\vee\underset{n-k}{\beta}\in\Lambda_{n-(m+k)}\Big\}$$
Although this is indeed the dual of axiom ∧6, it is not necessary to display the grades of what are
arbitrary elements in the more complex form specifically involving the dimension n of the space. It
will be more convenient to display them as grades denoted by simple symbols like m and k, as were
the grades of the elements of the original axiom. To effect this transformation most expeditiously
we first let m′ = n−m, k′ = n−k to get
$$\Big\{\underset{m'}{\alpha}\in\Lambda_{m'},\ \underset{k'}{\beta}\in\Lambda_{k'}\Big\} \;\Rightarrow\; \Big\{\underset{m'}{\alpha}\vee\underset{k'}{\beta}\in\Lambda_{(m'+k')-n}\Big\}$$
The grade of the space to which the regressive product belongs is n−(m+k) = n−((n−m′)+(n−k′)) =
(m′+k′)−n.
Finally, since the primes are no longer necessary, we drop them. Then the final form of the axiom
dual to ∧6, which we label ∨6, becomes:
$$\Big\{\underset{m}{\alpha}\in\Lambda_m,\ \underset{k}{\beta}\in\Lambda_k\Big\} \;\Rightarrow\; \Big\{\underset{m}{\alpha}\vee\underset{k}{\beta}\in\Lambda_{m+k-n}\Big\}$$
In words, this says that the regressive product of an m-element and a k-element is an
(m+k−n)-element.
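As a quick computational sketch (ours, not the book's, and purely grade bookkeeping), the dual grade rule can be obtained mechanically from the exterior one by complementing grades before and after:

```python
def exterior_grade(m, k):
    """Grade of the exterior product of an m-element and a k-element."""
    return m + k

def regressive_grade(m, k, n):
    """Grade of the regressive product in an n-space, obtained by duality:
    complement the grades, apply the exterior rule, complement the result."""
    return n - exterior_grade(n - m, n - k)  # simplifies to m + k - n

# Two 2-elements (planes through the origin) in a 3-space meet in a 1-element.
print(regressive_grade(2, 2, 3))  # 1
# Two 1-elements in a 3-space: grade -1, so the product must vanish.
print(regressive_grade(1, 1, 3))  # -1
```

A negative result signals a vanishing regressive product, mirroring the dual of the theorem that an exterior product vanishes when m+k exceeds n.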
Axiom ∧8 says:
There is a unit scalar which acts as an identity under the exterior product.
For simplicity we do not normally display designated scalars with an underscripted zero. However,
the duality transformation will be clearer if we rewrite the axiom with $\underset{0}{1}$ in place of 1.
Replace ∧ with ∨, and the grades of elements and spaces by their complementary grades.
In words, this says that there is a unit n-element which acts as an identity under the regressive
product.
Replace ∧ with ∨, and the grades of elements by their complementary grades. Note that it is only
the grades as shown in the underscripts which should be replaced, not those figuring elsewhere in
the formula.
$$\underset{n-m}{\alpha}\vee\underset{n-k}{\beta} = (-1)^{mk}\ \underset{n-k}{\beta}\vee\underset{n-m}{\alpha}$$
Replace arbitrary grades m (wherever they occur in the formula) with n−m′, and k with n−k′, then
drop the primes. An arbitrary grade is one without a specific value such as 0, 1, or n.
$$\underset{m}{\alpha}\vee\underset{k}{\beta} = (-1)^{(n-m)(n-k)}\ \underset{k}{\beta}\vee\underset{m}{\alpha}$$
In words this says that the regressive product of elements of odd complementary grade is anti-
commutative.
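A one-line sketch (ours, not the book's) of the sign rule just derived:

```python
def regressive_swap_sign(m, k, n):
    """Sign relating alpha_m v beta_k to beta_k v alpha_m in an n-space,
    per the dual anticommutativity axiom: (-1)^((n-m)(n-k))."""
    return (-1) ** ((n - m) * (n - k))

# Two 2-elements in a 3-space: both complementary grades are 1 (odd),
# so the regressive product anticommutes.
print(regressive_swap_sign(2, 2, 3))  # -1
# A 2-element and a 3-element in a 4-space: complementary grades 2 and 1,
# so the product commutes.
print(regressive_swap_sign(2, 3, 4))  # 1
```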
1. Replace ∧ with ∨, and the grades of elements and spaces by their complementary grades.
2. Replace arbitrary grades m with n−m′, k with n−k′. Drop the primes.
∨8: There is a unit n-element which acts as an identity under the regressive product.
∨9: Non-zero n-elements have inverses with respect to the regressive product.
$$\Big\{\mathcal{E}\mathit{xists}\Big[\underset{n}{\tfrac{1}{a}},\ \underset{n}{\tfrac{1}{a}}\in\Lambda_n\Big],\ \underset{n}{1} = \underset{n}{a}\vee\underset{n}{\tfrac{1}{a}},\ \mathcal{F}\mathit{orAll}\Big[\underset{n}{a},\ \underset{n}{a}\neq 0\in\Lambda_n\Big]\Big\} \tag{3.5}$$
∨12: The regressive product is both left and right distributive under addition.
$$\Big(\underset{m}{\alpha}+\underset{m}{\beta}\Big)\vee\underset{k}{\gamma} = \underset{m}{\alpha}\vee\underset{k}{\gamma} + \underset{m}{\beta}\vee\underset{k}{\gamma}\qquad
\underset{m}{\alpha}\vee\Big(\underset{k}{\beta}+\underset{k}{\gamma}\Big) = \underset{m}{\alpha}\vee\underset{k}{\beta} + \underset{m}{\alpha}\vee\underset{k}{\gamma} \tag{3.8}$$
$$\underset{m}{\alpha}\vee\Big(a\,\underset{k}{\beta}\Big) = \Big(a\,\underset{m}{\alpha}\Big)\vee\underset{k}{\beta} = a\,\Big(\underset{m}{\alpha}\vee\underset{k}{\beta}\Big) \tag{3.9}$$
The duality algorithm has generated a unit n-element $\underset{n}{1}$ which acts as the multiplicative identity for
the regressive product, just as the unit scalar 1 (or $\underset{0}{1}$) acts as the multiplicative identity for the
exterior product.
$$e_1\wedge e_2\wedge\cdots\wedge e_n = \bar{c}\ \underset{n}{1} \tag{3.10}$$
Defining $\underset{n}{1}$ any more specifically than this is normally done by imposing a metric onto the space.
This we do in Chapter 5: The Complement, and Chapter 6: The Interior Product. It turns out then
that $\underset{n}{1}$ is the n-element whose measure (magnitude, volume) is unity.
On the other hand, for geometric application in spaces without a metric, for example the
calculation of intersections of lines, planes, and hyperplanes, it is inconsequential that we only
know $\underset{n}{1}$ up to congruence, because we will see that if an element defines a geometric entity then
any element congruent to it will define the same geometric entity.
$$\underset{n}{1} = \underset{n}{1}\vee\underset{n}{1} \tag{3.11}$$
By letting $\underset{n}{\alpha} = a\,\underset{n}{1}$, we can show that regressive products with an n-element allow a sort of
associativity with the exterior product.
$$\Big(\underset{n}{\alpha}\vee\underset{k}{\beta}\Big)\wedge\underset{j}{\gamma} = \underset{n}{\alpha}\vee\Big(\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big) \tag{3.12}$$
Division by n-elements.
$$\underset{n}{\beta}\vee\underset{m}{\alpha} = \underset{n}{\beta}\vee\underset{m}{\gamma}\ \Rightarrow\ \underset{m}{\alpha}=\underset{m}{\gamma} \tag{3.13}$$
Axiom ∨9 says that every (non-zero) n-element has an inverse with respect to the regressive
product and its unit n-element.
$$\Big\{\mathcal{E}\mathit{xists}\Big[\underset{n}{\tfrac{1}{a}},\ \underset{n}{\tfrac{1}{a}}\in\Lambda_n\Big],\ \underset{n}{1} = \underset{n}{a}\vee\underset{n}{\tfrac{1}{a}},\ \mathcal{F}\mathit{orAll}\Big[\underset{n}{a},\ \underset{n}{a}\neq 0\in\Lambda_n\Big]\Big\}$$
Suppose we have an n-element $\underset{n}{\alpha}$ expressed in terms of some basis. Then, according to 3.10, we can
express it as a scalar multiple (a, say) of the unit n-element $\underset{n}{1}$.
We may then write the inverse $\underset{n}{\alpha}^{-1}$ of $\underset{n}{\alpha}$ with respect to the regressive product as the scalar multiple
$\tfrac{1}{a}$ of $\underset{n}{1}$.
$$\underset{n}{\alpha} = a\,\underset{n}{1}\ \iff\ \underset{n}{\alpha}^{-1} = \tfrac{1}{a}\,\underset{n}{1}$$
We can see this by taking the regressive product of $a\,\underset{n}{1}$ with $\tfrac{1}{a}\,\underset{n}{1}$:
$$\underset{n}{\alpha}\vee\underset{n}{\alpha}^{-1} = \Big(a\,\underset{n}{1}\Big)\vee\Big(\tfrac{1}{a}\,\underset{n}{1}\Big) = \underset{n}{1}\vee\underset{n}{1} = \underset{n}{1}$$
If instead $\underset{n}{\alpha}$ is expressed in terms of basis elements, we have
$$\underset{n}{\alpha} = b\ e_1\wedge e_2\wedge\cdots\wedge e_n = b\,\bar{c}\ \underset{n}{1}$$
$$\underset{n}{\alpha}^{-1} = \frac{1}{b\,\bar{c}}\ \underset{n}{1} = \frac{1}{b\,\bar{c}^{\,2}}\ e_1\wedge e_2\wedge\cdots\wedge e_n$$
In summary:
$$\underset{n}{\alpha} = a\,\underset{n}{1}\ \iff\ \underset{n}{\alpha}^{-1} = \tfrac{1}{a}\,\underset{n}{1} \tag{3.14}$$
$$\underset{n}{\alpha} = b\ e_1\wedge e_2\wedge\cdots\wedge e_n\ \iff\ \underset{n}{\alpha}^{-1} = \frac{1}{b\,\bar{c}^{\,2}}\ e_1\wedge e_2\wedge\cdots\wedge e_n \tag{3.15}$$
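The inverse formulae can be checked numerically. The sketch below is ours, with illustrative (assumed) values for the basis coefficient b and the congruence factor, working purely with coefficients relative to the unit n-element:

```python
from fractions import Fraction

# Illustrative (assumed) values: alpha = b e1^...^en, with e1^...^en = c 1_n.
b, c = Fraction(3), Fraction(2)

alpha_coeff = b * c                              # coefficient of 1_n in alpha (3.10)
inverse_basis_coeff = Fraction(1) / (b * c * c)  # coefficient of e1^...^en in the inverse (3.15)
inverse_coeff = inverse_basis_coeff * c          # coefficient of 1_n in the inverse

# The regressive product of two n-elements multiplies their 1_n-coefficients,
# so alpha v alpha^{-1} = 1_n exactly when these coefficients multiply to 1.
print(alpha_coeff * inverse_coeff)  # 1
```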
If q and r are the orders of two magnitudes A and B, and n that of the principal
domain, then the order of the product [A B] is first equal to q+r if q+r is smaller than
n, and second equal to q+r−n if q+r is greater than or equal to n.
Translating this into the terminology used in this book we have:
If q and r are the grades of two elements A and B, and n that of the underlying linear
space, then the grade of the product [A B] is first equal to q+r if q+r is smaller than n,
and second equal to q+r−n if q+r is greater than or equal to n.
Translating this further into the notation used in this book we have:
$$[A\ B]\ \Leftrightarrow\ \underset{p}{A}\wedge\underset{q}{B}\qquad p+q<n$$
$$[A\ B]\ \Leftrightarrow\ \underset{p}{A}\vee\underset{q}{B}\qquad p+q\geq n$$
Grassmann called the product [A B] in the first case a progressive (exterior) product, and in the
second case a regressive (exterior) product. (In current terminology, which we adopt in this book,
the progressive exterior product is called simply the exterior product, and the regressive exterior
product is called simply the regressive product).
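Grassmann's grade rule is easy to state as code. The following sketch (ours) returns which product his [A B] denotes, and its grade:

```python
def grassmann_product(q, r, n):
    """Grassmann's 1862 rule: [A B] is progressive of grade q+r when q+r < n,
    and regressive of grade q+r-n when q+r >= n."""
    if q + r < n:
        return ("progressive", q + r)
    return ("regressive", q + r - n)

print(grassmann_product(1, 1, 3))  # ('progressive', 2)
print(grassmann_product(2, 2, 3))  # ('regressive', 1)
# The boundary case q+r = n is classified by Grassmann as regressive (a scalar),
# whereas separate notations keep the exterior product of such elements an n-element.
print(grassmann_product(1, 2, 3))  # ('regressive', 0)
```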
In the equivalence above, Grassmann has opted to define the product of two elements whose
grades sum to the dimension n of the space as regressive, and thus a scalar. However, the more
explicit notation that we have adopted identifies that some definition is still required for the
progressive (exterior) product of two such elements.
Denoting the two products differently enables us to define correctly the exterior product of two
elements whose grades sum to n: it is an n-element. Grassmann, by his choice of notation,
effectively defined it as a scalar. In modern terminology this is equivalent to confusing scalars
with pseudo-scalars. A separate notation for the two products thus avoids this tensorially invalid
confusion.
We can see, then, how the regressive product, in not being explicitly denoted, may have become lost.
1. Replace ∨ with ∧, and the grades of elements and spaces by their complementary grades.
2. Replace arbitrary grades m with n−m′, k with n−k′. Drop the primes.
This differs from the algorithm for obtaining the dual of an exterior product axiom only in the
replacement of ∨ with ∧ instead of vice versa.
It is easy to see that applying this algorithm to the regressive product axioms generates the original
exterior product axiom set.
1. Replace each product operation by its dual operation, and the grades of elements and spaces by
their complementary grades.
2. Replace arbitrary grades m with n−m′, k with n−k′. Drop the primes.
Since this algorithm applies to both sets of axioms, it also applies to any theorem. Thus to each
theorem involving exterior or regressive products corresponds a dual theorem obtained by applying
the algorithm. We call this the Grassmann Duality Principle (or, where the context is clear, simply
the Duality Principle).
To every theorem involving exterior and regressive products, a dual theorem may be obtained
by:
1. Replacing each product operation by its dual operation, and the grades of elements and
spaces by their complementary grades.
2. Replacing arbitrary grades m with n−m′, k with n−k′, then dropping the primes.
Example 1
The following theorem says that the exterior product of two elements is zero if the sum of their
grades is greater than the dimension of the linear space.
$$\Big\{\underset{m}{\alpha}\wedge\underset{k}{\beta} = 0,\quad m+k-n>0\Big\}$$
The dual theorem states that the regressive product of two elements is zero if the sum of their
grades is less than the dimension of the linear space.
$$\Big\{\underset{m}{\alpha}\vee\underset{k}{\beta} = 0,\quad n-(k+m)>0\Big\}$$
Example 2
This theorem below says that the exterior product of an element with itself is zero if it is of odd
grade.
$$\Big\{\underset{m}{\alpha}\wedge\underset{m}{\alpha} = 0,\quad m\in\{\text{OddPositiveIntegers}\}\Big\}$$
The dual theorem states that the regressive product of an element with itself is zero if its
complementary grade is odd.
$$\Big\{\underset{m}{\alpha}\vee\underset{m}{\alpha} = 0,\quad (n-m)\in\{\text{OddPositiveIntegers}\}\Big\}$$
Again, we can recover the original theorem by calculating the dual of this one.
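These two theorems are easy to check concretely. The sketch below is our own minimal implementation (not the GrassmannAlgebra package), representing a multivector as a dict mapping sorted tuples of basis indices to coefficients:

```python
def wedge(x, y):
    """Exterior product of multivectors stored as {sorted-index-tuple: coeff}
    dicts, e.g. {(1, 2): 1} for e1^e2."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            if set(a) & set(b):
                continue  # a repeated 1-element factor makes the term vanish
            merged = a + b
            # parity of the permutation that sorts the concatenated indices
            inv = sum(1 for i in range(len(merged))
                        for j in range(i + 1, len(merged))
                        if merged[i] > merged[j])
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0) + (-1) ** inv * ca * cb
    return {k: v for k, v in out.items() if v != 0}

# Odd grade: the 1-element a = e1 + e2 satisfies a^a = 0.
a = {(1,): 1, (2,): 1}
print(wedge(a, a))  # {}

# Even grade: b = e1^e2 + e3^e4 gives b^b = 2 e1^e2^e3^e4, which is not zero,
# so the odd-grade restriction in the theorem is essential.
b = {(1, 2): 1, (3, 4): 1}
print(wedge(b, b))  # {(1, 2, 3, 4): 2}
```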
Example 3
The following theorem states that the exterior product of unity with itself any number of times
remains unity.
$$1\wedge 1\wedge\cdots\wedge 1\wedge\underset{m}{\alpha} = \underset{m}{\alpha}$$
The dual theorem states the corresponding fact for the regressive product of unit n-elements $\underset{n}{1}$.
$$\underset{n}{1}\vee\underset{n}{1}\vee\cdots\vee\underset{n}{1}\vee\underset{m}{\alpha} = \underset{m}{\alpha}$$
Our first examples show how Dual may be used to develop dual axioms. For example, to
take the dual of axiom ∧10, simply enter:
Dual[$\underset{m}{\alpha}\wedge\underset{k}{\beta} = (-1)^{mk}\,\underset{k}{\beta}\wedge\underset{m}{\alpha}$]
$$\underset{m}{\alpha}\vee\underset{k}{\beta} = (-1)^{(n-k)(n-m)}\,\underset{k}{\beta}\vee\underset{m}{\alpha}$$
$$\Big\{\mathcal{E}\mathit{xists}\Big[\underset{n}{1},\ \underset{n}{1}\in\Lambda_n\Big],\ \underset{m}{\alpha} = \underset{n}{1}\vee\underset{m}{\alpha}\Big\}$$
To apply Dual to an axiom involving more than one statement, collect the statements in a list. For
example, the dual of axiom ∧9 is obtained as:
Dual[{$\mathcal{E}\mathit{xists}[a^{-1},\ a^{-1}\in\Lambda_0]$, $1 = a\wedge a^{-1}$, $\mathcal{F}\mathit{orAll}[a,\ a\neq 0\in\Lambda_0]$}]
$$\Big\{\mathcal{E}\mathit{xists}\Big[\underset{n}{\tfrac{1}{a}},\ \underset{n}{\tfrac{1}{a}}\in\Lambda_n\Big],\ \underset{n}{1} = \underset{n}{a}\vee\underset{n}{\tfrac{1}{a}},\ \mathcal{F}\mathit{orAll}\Big[\underset{n}{a},\ \underset{n}{a}\neq 0\in\Lambda_n\Big]\Big\}$$
Note here that we purposely prevent Mathematica's built-in Exists and ForAll from
evaluating by writing them in script.
Our fourth example is to generate the dual of the Common Factor Axiom. This axiom will be
introduced in the next section. Note that the dimension of the space must be entered as ¯n in order
for Dual to recognize it as such.
CommonFactorAxiom =
$$\Big\{\Big(\underset{m}{\alpha}\wedge\underset{j}{\gamma}\Big)\vee\Big(\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big) = \Big(\underset{m}{\alpha}\wedge\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big)\vee\underset{j}{\gamma},\quad m+k+j-n = 0\Big\};$$
Dual[CommonFactorAxiom]
$$\Big\{\Big(\underset{m}{\alpha}\vee\underset{j}{\gamma}\Big)\wedge\Big(\underset{k}{\beta}\vee\underset{j}{\gamma}\Big) = \Big(\underset{m}{\alpha}\vee\underset{k}{\beta}\vee\underset{j}{\gamma}\Big)\wedge\underset{j}{\gamma},\quad -j-k-m+2n = 0\Big\}$$
We can verify that the dual of the dual of the axiom is the axiom itself.
CommonFactorAxiom == Dual[Dual[CommonFactorAxiom]]
True
Our fifth example is to obtain the dual of one of the product formulae derived in a later section of
this chapter. Again, the dimension of the space must be entered as ¯n.
F = $\Big(\underset{m}{\alpha}\wedge\underset{k}{\beta}\Big)\vee\underset{n-1}{x} = \Big(\underset{m}{\alpha}\vee\underset{n-1}{x}\Big)\wedge\underset{k}{\beta} + (-1)^m\,\underset{m}{\alpha}\wedge\Big(\underset{k}{\beta}\vee\underset{n-1}{x}\Big)$;
Dual[F]
$$\Big(\underset{m}{\alpha}\vee\underset{k}{\beta}\Big)\wedge x = (-1)^{n-m}\,\underset{m}{\alpha}\vee\Big(\underset{k}{\beta}\wedge x\Big) + \Big(\underset{m}{\alpha}\wedge x\Big)\vee\underset{k}{\beta}$$
Again, we can verify that the dual of the dual of the formula is the formula itself.
F == Dual[Dual[F]]
True
• You can use the GrassmannAlgebra function Dual with any expression involving exterior and
regressive products of graded, vector or scalar symbols.
• You can include multiple statements by combining them in a list.
• The statements should not include any expressions which will evaluate when entered.
• The dimension of the underlying linear space should be denoted as ¯n.
• Avoid using scalar or vector symbols as (underscripted) grades if they appear other than as
underscripts elsewhere in the expression.
Motivation
Although the axiom sets for the exterior and regressive products have been posited, it still remains
to propose an axiom explicitly relating the two types of products.
The axiom set for the regressive product, which we have created above simply as a dual axiom set
to that of the exterior product, asserts that in an n-space the regressive product of an m-element and
a k-element is an (m+k-n)-element. Our first criterion for the new axiom then, is that it can be used
to compute this new element.
Earlier we have also remarked on some enticing correspondences between the dual exterior and
regressive products and the dual union and intersection operations of a Boolean algebra. We can
already see that the (non-zero) exterior product of simple elements is an element whose space is the
'union' of the spaces of its factors. Our second criterion then, is that the new axiom enable the 'dual'
of this property: the (non-zero) regressive product of simple elements is an element whose space is
the 'intersection' of the spaces of its factors.
This criterion leads us first to return to Chapter 2: Unions and Intersections, where, with an
arbitrary multilinear product operation (which we denoted ⊛), we were able to compute the union
and intersection of simple elements. In the cases explored the simple elements were of arbitrary
grade. We saw there that, given simple elements A*∧C and C∧B*, we could obtain their union and
intersection as (congruent to) A*∧C∧B* and C respectively.
$$A \circledast B = (A^{*}\wedge C)\circledast(C\wedge B^{*}) = (A^{*}\wedge C\wedge B^{*})\circledast C$$
Since ⊛ is an arbitrary multilinear product (other than the exterior product), we can satisfy the
second criterion by positing the same form for the regressive product.
What additional requirements on this would one need in order to satisfy the first criterion, that is,
that the axiom enables a result which does not involve the regressive product? Axiom ∨8 above
gives us the clue.
$$\Big\{\mathcal{E}\mathit{xists}\Big[\underset{n}{1},\ \underset{n}{1}\in\Lambda_n\Big],\ \underset{m}{\alpha} = \underset{n}{1}\vee\underset{m}{\alpha}\Big\}$$
If the grade of A*∧C∧B* is equal to the dimension n of the space (or, in computations, ¯n), then
since all n-elements are congruent we can write A*∧C∧B* = c $\underset{n}{1}$, where c is some scalar,
showing that the right hand side is congruent to the common factor (intersection) C.
$$(A^{*}\wedge C)\vee(C\wedge B^{*}) = (A^{*}\wedge C\wedge B^{*})\vee C = \Big(c\,\underset{n}{1}\Big)\vee C = c\,C \cong C$$
$$\Big\{\Big(\underset{m}{\alpha}\wedge\underset{j}{\gamma}\Big)\vee\Big(\underset{j}{\gamma}\wedge\underset{k}{\beta}\Big) = \Big(\underset{m}{\alpha}\wedge\underset{j}{\gamma}\wedge\underset{k}{\beta}\Big)\vee\underset{j}{\gamma},\quad m+k+j-n=0\Big\}$$
$$\Big\{\Big(\underset{m}{\alpha}\wedge\underset{j}{\gamma}\Big)\vee\Big(\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big) = \Big(\underset{m}{\alpha}\wedge\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big)\vee\underset{j}{\gamma},\quad m+k+j-n=0\Big\}$$
As we will see, an axiom of this form works very well. Indeed, it turns out to be one of the
fundamental underpinnings of the algebra. We call it the Common Factor Axiom and explore it
further below.
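The axiom can be checked on a concrete instance. The sketch below is entirely our own: it computes the regressive product in a 3-space via a basis complement (anticipating the complement of Chapter 5; in 3-space the complement map used here is its own inverse). This is one concrete model satisfying the axioms, not the book's axiomatic development.

```python
N = 3  # dimension; basis indices 1..3

def wedge(x, y):
    """Exterior product of multivectors stored as {sorted-index-tuple: coeff}."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            if set(a) & set(b):
                continue  # repeated factor: term vanishes
            merged = a + b
            inv = sum(1 for i in range(len(merged))
                        for j in range(i + 1, len(merged))
                        if merged[i] > merged[j])
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0) + (-1) ** inv * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def comp(x):
    """Basis complement: e_S -> sign * e_{S-complement}, the sign being the
    parity of the permutation (S, S-complement). For N = 3 this map is its
    own inverse on every grade."""
    out = {}
    for a, ca in x.items():
        rest = tuple(i for i in range(1, N + 1) if i not in a)
        merged = a + rest
        inv = sum(1 for i in range(len(merged))
                    for j in range(i + 1, len(merged))
                    if merged[i] > merged[j])
        out[rest] = out.get(rest, 0) + (-1) ** inv * ca
    return {k: v for k, v in out.items() if v != 0}

def vee(x, y):
    """Regressive product, computed through the complement."""
    return comp(wedge(comp(x), comp(y)))

# Common Factor Axiom instance: alpha = e1, beta = e3, gamma = e2, n = 3.
e1, e2, e3 = {(1,): 1}, {(2,): 1}, {(3,): 1}
lhs = vee(wedge(e1, e2), wedge(e3, e2))   # (alpha^gamma) v (beta^gamma)
rhs = vee(wedge(wedge(e1, e3), e2), e2)   # (alpha^beta^gamma) v gamma
print(lhs == rhs, lhs)  # True {(2,): -1} -- both sides are -e2, congruent to gamma
```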
Historical Note
The approach we have adopted in this chapter of treating the common factor relation as an
axiom is effectively the same as Grassmann used in his first Ausdehnungslehre (1844) but
differs from the approach that he used in his second Ausdehnungslehre (1862). (See
Chapter 3, Section 5 in Kannenberg.) In the 1862 version Grassmann proves this relation
from another which is (almost) the same as the Complement Axiom that we introduce in
Chapter 5: The Complement. Whitehead [1898], and other writers in the Grassmannian
tradition follow his 1862 approach.
The relation which Grassmann used in the 1862 Ausdehnungslehre is in effect equivalent
to assuming the space has a Euclidean metric (his Ergänzung or supplement). However the
Common Factor Axiom does not depend on the space having a metric; that is, it is
completely independent of any correspondence we set up between $\Lambda_m$ and $\Lambda_{n-m}$. Hence we
would rather not adopt an approach which introduces an unnecessary constraint, especially
since we want to show later that the Ausdehnungslehre is easily extended to more general
metrics than the Euclidean.
$$\Big\{\Big(\underset{m}{\alpha}\wedge\underset{j}{\gamma}\Big)\vee\Big(\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big) = \Big(\underset{m}{\alpha}\wedge\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big)\vee\underset{j}{\gamma},\quad m+k+j-n=0\Big\} \tag{3.17}$$
Thus, the regressive product of two elements $\underset{m}{\alpha}\wedge\underset{j}{\gamma}$ and $\underset{k}{\beta}\wedge\underset{j}{\gamma}$ with a common factor $\underset{j}{\gamma}$ is equal to
the regressive product of the 'union' of the elements, $\underset{m}{\alpha}\wedge\underset{k}{\beta}\wedge\underset{j}{\gamma}$, with the common factor $\underset{j}{\gamma}$ (their
'intersection').
If $\underset{m}{\alpha}$ and $\underset{k}{\beta}$ still contain some simple elements in common, then the product $\underset{m}{\alpha}\wedge\underset{k}{\beta}\wedge\underset{j}{\gamma}$ is zero;
hence, by the definition above, $\Big(\underset{m}{\alpha}\wedge\underset{j}{\gamma}\Big)\vee\Big(\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big)$ is also zero. In what follows, we suppose
that $\underset{m}{\alpha}\wedge\underset{k}{\beta}\wedge\underset{j}{\gamma}$ is not zero.
Since the union $\underset{m}{\alpha}\wedge\underset{k}{\beta}\wedge\underset{j}{\gamma}$ is an n-element, we can write it as some scalar factor c, say, of the unit
n-element $\underset{n}{1}$, giving the congruence form of the axiom:
$$\Big\{\Big(\underset{m}{\alpha}\wedge\underset{j}{\gamma}\Big)\vee\Big(\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big) \cong \underset{j}{\gamma},\quad m+k+j-n=0\Big\} \tag{3.18}$$
It is easy to see, by using the anticommutativity axiom ∧10, that the axiom may be arranged
in any of a number of alternative forms, the most useful of which are:
$$\Big\{\Big(\underset{m}{\alpha}\wedge\underset{j}{\gamma}\Big)\vee\Big(\underset{j}{\gamma}\wedge\underset{k}{\beta}\Big) = \Big(\underset{m}{\alpha}\wedge\underset{j}{\gamma}\wedge\underset{k}{\beta}\Big)\vee\underset{j}{\gamma},\quad m+k+j-n=0\Big\} \tag{3.19}$$
$$\Big\{\Big(\underset{j}{\gamma}\wedge\underset{m}{\alpha}\Big)\vee\Big(\underset{j}{\gamma}\wedge\underset{k}{\beta}\Big) = \Big(\underset{j}{\gamma}\wedge\underset{m}{\alpha}\wedge\underset{k}{\beta}\Big)\vee\underset{j}{\gamma},\quad m+k+j-n=0\Big\} \tag{3.20}$$
And since, for the regressive product, any n-element commutes with any other element, we can
always rewrite the right hand side with the common factor first.
$$\Big\{\Big(\underset{m}{\alpha}\wedge\underset{j}{\gamma}\Big)\vee\Big(\underset{j}{\gamma}\wedge\underset{k}{\beta}\Big) = \underset{j}{\gamma}\vee\Big(\underset{m}{\alpha}\wedge\underset{j}{\gamma}\wedge\underset{k}{\beta}\Big),\quad m+k+j-n=0\Big\} \tag{3.21}$$
Consider two simple elements $\underset{m}{\alpha_1}$ and $\underset{m}{\alpha_2}$. Then the Common Factor Axiom can be written for each
as:
$$\Big(\underset{m}{\alpha_1}\wedge\underset{j}{\gamma}\Big)\vee\Big(\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big) = \Big(\underset{m}{\alpha_1}\wedge\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big)\vee\underset{j}{\gamma}$$
$$\Big(\underset{m}{\alpha_2}\wedge\underset{j}{\gamma}\Big)\vee\Big(\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big) = \Big(\underset{m}{\alpha_2}\wedge\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big)\vee\underset{j}{\gamma}$$
Adding these two equations and using the distributivity of ∧ and ∨ gives:
$$\Big(\Big(\underset{m}{\alpha_1}+\underset{m}{\alpha_2}\Big)\wedge\underset{j}{\gamma}\Big)\vee\Big(\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big) = \Big(\Big(\underset{m}{\alpha_1}+\underset{m}{\alpha_2}\Big)\wedge\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big)\vee\underset{j}{\gamma}$$
Extending this process, we see that the formula remains true for arbitrary $\underset{m}{\alpha}$ and $\underset{k}{\beta}$, provided $\underset{j}{\gamma}$ is
simple.
$$\Big\{\Big(\underset{m}{\alpha}\wedge\underset{j}{\gamma}\Big)\vee\Big(\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big) = \Big(\underset{m}{\alpha}\wedge\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big)\vee\underset{j}{\gamma} \cong \underset{j}{\gamma},\quad m+k+j-n=0\Big\} \tag{3.22}$$
This is an extended version of the Common Factor Axiom. It states that: the regressive product of
two arbitrary elements containing a simple common factor is congruent to that factor.
For applications involving computations in a non-metric space, particularly those with a geometric
interpretation, we will see that the congruence form is not restrictive. Indeed, it will be quite
illuminating. For more general applications in metric spaces we will see that the associated scalar
factor is no longer arbitrary but is determined by the metric imposed.
When the grades do not conform to the requirement shown, the product is zero.
$$\Big\{\Big(\underset{m}{\alpha}\wedge\underset{j}{\gamma}\Big)\vee\Big(\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big) = 0,\quad m+k+j-n\neq 0\Big\} \tag{3.23}$$
If there is no common factor (other than the scalar 1), then the axiom reduces to:
$$\Big\{\underset{m}{\alpha}\vee\underset{k}{\beta} = \Big(\underset{m}{\alpha}\wedge\underset{k}{\beta}\Big)\vee 1 \in \Lambda_0,\quad m+k-n=0\Big\} \tag{3.24}$$
The following version yields a sort of associativity for products which are scalar.
$$\Big\{\Big(\underset{m}{\alpha}\wedge\underset{k}{\beta}\Big)\vee\underset{j}{\gamma} = \underset{m}{\alpha}\vee\Big(\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big) \in \Lambda_0,\quad m+k+j-n=0\Big\} \tag{3.25}$$
To prove this, we can use the previous formula to show that each side of the equation is equal to
$\Big(\underset{m}{\alpha}\wedge\underset{k}{\beta}\wedge\underset{j}{\gamma}\Big)\vee 1$.
$$\Big\{\Big(\underset{m}{\alpha}\vee\underset{j}{\gamma}\Big)\wedge\Big(\underset{k}{\beta}\vee\underset{j}{\gamma}\Big) = \Big(\underset{m}{\alpha}\vee\underset{k}{\beta}\vee\underset{j}{\gamma}\Big)\wedge\underset{j}{\gamma} \in \Lambda_j,\quad m+k+j-2n=0\Big\} \tag{3.26}$$
The duals of the three special cases in the previous section are:
$$\Big\{\Big(\underset{m}{\alpha}\vee\underset{j}{\gamma}\Big)\wedge\Big(\underset{k}{\beta}\vee\underset{j}{\gamma}\Big) = 0,\quad m+k+j-2n\neq 0\Big\} \tag{3.27}$$
$$\Big\{\underset{m}{\alpha}\wedge\underset{k}{\beta} = \Big(\underset{m}{\alpha}\vee\underset{k}{\beta}\Big)\wedge\underset{n}{1} \in \Lambda_n,\quad m+k-n=0\Big\} \tag{3.28}$$
$$\Big\{\Big(\underset{m}{\alpha}\vee\underset{k}{\beta}\Big)\wedge\underset{j}{\gamma} = \underset{m}{\alpha}\wedge\Big(\underset{k}{\beta}\vee\underset{j}{\gamma}\Big) \in \Lambda_n,\quad m+k+j-2n=0\Big\} \tag{3.29}$$
Suppose we have two general 2-elements X and Y in a 3-space and we wish to find a formula for
their 1-element intersection Z. Because X and Y are in a 3-space, we are assured that they are
simple.
$$X = x_1\, e_1\wedge e_2 + x_2\, e_1\wedge e_3 + x_3\, e_2\wedge e_3$$
$$Y = y_1\, e_1\wedge e_2 + y_2\, e_1\wedge e_3 + y_3\, e_2\wedge e_3$$
$$Z = X\vee Y = (x_1\, e_1\wedge e_2 + x_2\, e_1\wedge e_3 + x_3\, e_2\wedge e_3)\vee(y_1\, e_1\wedge e_2 + y_2\, e_1\wedge e_3 + y_3\, e_2\wedge e_3)$$
Expanding this product, and remembering that the regressive product of identical basis 2-elements
is zero, we obtain:
$$Z = (x_1 y_2 - x_2 y_1)\,(e_1\wedge e_2)\vee(e_1\wedge e_3) + (x_1 y_3 - x_3 y_1)\,(e_1\wedge e_2)\vee(e_2\wedge e_3) + (x_2 y_3 - x_3 y_2)\,(e_1\wedge e_3)\vee(e_2\wedge e_3)$$
We can now apply the Common Factor Axiom to each of these regressive products:
$$Z = (x_1 y_2 - x_2 y_1)\,(e_1\wedge e_2\wedge e_3)\vee e_1 + (x_1 y_3 - x_3 y_1)\,(e_1\wedge e_2\wedge e_3)\vee e_2 + (x_2 y_3 - x_3 y_2)\,(e_1\wedge e_2\wedge e_3)\vee e_3$$
Thus, in sum, we have the general congruence relation for the intersection of two 2-elements in a
3-space.
$$(x_1\, e_1\wedge e_2 + x_2\, e_1\wedge e_3 + x_3\, e_2\wedge e_3)\vee(y_1\, e_1\wedge e_2 + y_2\, e_1\wedge e_3 + y_3\, e_2\wedge e_3)$$
$$\cong\ (x_1 y_2 - x_2 y_1)\, e_1 + (x_1 y_3 - x_3 y_1)\, e_2 + (x_2 y_3 - x_3 y_2)\, e_3 \tag{3.30}$$
The result on the right hand side almost looks as if it could have been obtained from a cross-
product operation. However the cross-product requires the space to have a metric, while this does
not. We will see in Chapter 6 how, once we have introduced a metric, we can transform this into
the formula for the cross product. This is an example of how the regressive product can generate
results, independent of whether the space has a metric; whereas the usual vector algebra, in using
the cross-product, must assume that it does.
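The congruence relation is essentially a componentwise formula, and can be sketched directly (our code, not the GrassmannAlgebra package):

```python
def meet_2elements_3space(x, y):
    """Coefficients (with respect to e1, e2, e3) of the 1-element congruent to
    the regressive product of two 2-elements in a 3-space; x and y are the
    coefficients with respect to e1^e2, e1^e3, e2^e3."""
    x1, x2, x3 = x
    y1, y2, y3 = y
    return (x1 * y2 - x2 * y1,   # e1
            x1 * y3 - x3 * y1,   # e2
            x2 * y3 - x3 * y2)   # e3

# The planes e1^e2 and e1^e3 meet in the line of e1.
print(meet_2elements_3space((1, 0, 0), (0, 1, 0)))  # (1, 0, 0)
```

Swapping the arguments negates every component, in agreement with the anticommutativity of the regressive product of two 2-elements in a 3-space.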
We can check that Z is indeed common to X and Y by determining whether the exterior product
of Z with each of X and Y is zero. We compose the expressions for X, Y, and Z, and then use ¯%
(the palette alias of GrassmannExpandAndSimplify) to do the check.
GrassmannAlgebra already knows that the special congruence symbol ¯c is scalar, and
ComposeBivector will automatically declare the scalar coefficients as ScalarSymbols.
Expand[¯%[{Z ∧ X, Z ∧ Y}]]
{0, 0}
4-space
In a 4-space we have only 4 independent 1-elements available. Let C = u∧v + x∧y (a non-simple
2-element), and let A and B be 3-elements with C apparently a common factor. First, suppose we
simplify them before applying the Common Factor Axiom.
$$A = u\wedge C = u\wedge(u\wedge v + x\wedge y) = u\wedge x\wedge y$$
$$B = C\wedge x = (u\wedge v + x\wedge y)\wedge x = u\wedge v\wedge x$$
$$A\vee B = (u\wedge x\wedge y)\vee(u\wedge v\wedge x) = (y\wedge u\wedge x)\vee(-\,u\wedge x\wedge v)$$
The Common Factor Axiom gives us the correct result for the simplified factors, because the
simplified factors have a simple common factor (but of course it is not C):
$$A\vee B = (y\wedge u\wedge x\wedge v)\vee(x\wedge u)$$
Now if we formally (that is, looking only at the form, and thus incorrectly) apply the Common
Factor Axiom without simplifying, we get
$$A\vee B = (u\wedge C)\vee(C\wedge x) = (u\wedge C\wedge x)\vee C$$
Substituting back for C on the right hand side gives zero, which is of course an incorrect result.
In sum: the Common Factor Axiom gives the correct result (as expected) when applied to terms
resulting from the complete expansion of a regressive product, since these terms will either have
simple common factors, or be zero. On the other hand, it will not necessarily give the correct
result when the symbol for the common factor takes on a non-simple value.
The example in the previous section applied the Common Factor Axiom to two elements by
expanding all the terms in their regressive product, applying the Common Factor Axiom to each of
the terms, and then factoring the result. This is always possible. But in situations where there is a
large number of terms, it may not be very efficient. The Common Factor Theorem will enable
more efficient solutions.
Let us begin by returning to the motivation of the Common Factor Axiom where we recast the
union and intersection formula from Chapter 2 to become the axiom.
$$A\vee B = (A^{*}\wedge C)\vee(C\wedge B^{*}) = (A^{*}\wedge C\wedge B^{*})\vee C$$
Here, the grade of A*∧C∧B* is equal to the dimension of the space. Since an n-element will
commute with an element of any grade under the regressive product, the term on the right hand
side can always be rewritten as C∨(A*∧C∧B*). This latter form is often convenient from a
mnemonic point of view, and we may make such exchanges without further comment.
There are two different forms of the Common Factor Theorem which we will be developing: The
A form and the B form. It is useful to have both forms at hand, since depending on the case, one of
them is likely to be more computationally efficient than the other; and formulae resulting from
each, while ultimately yielding the same result, give distinctly different forms to their results. The
A form requires A to be simple and obtains the common factor C from the factors of A by spanning
A (that is, composing an appropriate span and cospan of A). The B form requires B to be simple and
obtains the common factor C from the factors of B by spanning B.
The A form
The basic formula for the A form begins with the Common Factor Axiom. Let us display the
grades explicitly: let A be of grade m, B be of grade k, and C be of grade j; A* is then of grade
m−j = n−k.
$$\underset{m}{A}\vee\underset{k}{B} = \Big(\underset{m-j}{A^{*}}\wedge\underset{j}{C}\Big)\vee\underset{k}{B} = \Big(\underset{m-j}{A^{*}}\wedge\underset{k}{B}\Big)\vee\underset{j}{C}$$
The crux of the development comes from the intrinsic relationship between A, its span, and its
cospan: that is, for any i and s, A is equal to the exterior product of the ith s-span element and the
corresponding ith s-cospan element.
We now focus on the right hand side of the Common Factor Axiom. If, for a given value of i, we
were to replace A* by the ith span element $\mathrm{Span}_{m-j}[A]_{\langle i\rangle}$ and C by the corresponding cospan
element $\mathrm{Cospan}_{m-j}[A]_{\langle i\rangle}$, we would get the term
$$\Big(\mathrm{Span}_{m-j}[A]_{\langle i\rangle}\wedge\underset{k}{B}\Big)\vee \mathrm{Cospan}_{m-j}[A]_{\langle i\rangle}$$
Clearly, this term may be different for different values of i since, for example, wherever the span
element of A contains a factor of the common factor, the term will be zero, because B will contain
the same factor. In other cases it may be non-zero.
As it turns out, obtaining the common factor requires the sum of these terms. We will see why in
the next section when we discuss the proof of the theorem.
$$\underset{m}{A}\vee\underset{k}{B} = \sum_{i=1}^{\nu}\Big(\mathrm{Span}_{m-j}[A]_{\langle i\rangle}\wedge\underset{k}{B}\Big)\vee \mathrm{Cospan}_{m-j}[A]_{\langle i\rangle} \tag{3.33}$$
The index i ranges from 1 to ν, where ν is equal to $\binom{m}{j}$ (or equivalently $\binom{m}{m-j}$).
Because the exterior and regressive products have Mathematica's Listable attribute, we do not
actually need to index the list elements and then sum them. If we write ¯S as a shorthand for
Mathematica's inbuilt Total function, which sums the elements of a list, we can write
To make these ideas more concrete we now apply the formula to some simple examples before
proceeding to a proof in the next section.
Generalizations
• It is evident that if both elements are simple, expansion in factors of the element of lower grade
will be the more efficient.
• When neither $\underset{m}{\alpha}$ nor $\underset{k}{\beta}$ is simple, the Common Factor Theorem can still be applied to the simple
component terms of the product.
• Multiple regressive products may be treated by successive applications of the formulae above.
Example 1
Suppose we are working in a 5-space and A and B are expressed in such a way as to display their
common factor C. We declare the extra vector symbols with the alias ¯¯V.
¯!5; A = α₁∧γ₁∧γ₂; B = γ₁∧γ₂∧β₁∧β₂; ¯¯V[α_, β_, γ_];
Now if we simply evaluate the formula, we notice that two of the three terms are zero due to
repeated factors in the exterior product.
F = $(\alpha_1\wedge\gamma_1\wedge\gamma_2\wedge\beta_1\wedge\beta_2)\vee(\gamma_1\wedge\gamma_2) + (\gamma_1\wedge\gamma_1\wedge\gamma_2\wedge\beta_1\wedge\beta_2)\vee\big({-}(\alpha_1\wedge\gamma_2)\big) + (\gamma_2\wedge\gamma_1\wedge\gamma_2\wedge\beta_1\wedge\beta_2)\vee(\alpha_1\wedge\gamma_1)$
¯%[F]
$(\gamma_1\wedge\gamma_2)\vee(\alpha_1\wedge\beta_1\wedge\beta_2\wedge\gamma_1\wedge\gamma_2)$
Because the second factor in the regressive product is an n-element, we can immediately write this
as congruent to the common factor:
$$(\gamma_1\wedge\gamma_2)\vee(\alpha_1\wedge\beta_1\wedge\beta_2\wedge\gamma_1\wedge\gamma_2) \cong \gamma_1\wedge\gamma_2$$
Example 2
We now take the same example as we did earlier when we multiplied out the regressive product
term by term. X and Y are both 2-elements in a 3-space, and we wish to compute a 1-element Z
common to them (remember Z can only be computed up to congruence).
$$\{x_1\, e_1\wedge e_2 + x_2\, e_1\wedge e_3 + x_3\, e_2\wedge e_3,\quad y_1\, e_1\wedge e_2 + y_2\, e_1\wedge e_3 + y_3\, e_2\wedge e_3\}$$
To apply the Common Factor Theorem in the form above, we need X in factored form.
Xf = ExteriorFactorize[X]
$$x_1\left(e_1 - \frac{x_3}{x_1}\,e_3\right)\wedge\left(e_2 + \frac{x_2}{x_1}\,e_3\right)$$
To compute the span of an element, the element needs to be a simple product of 1-elements, not a
scalar multiple of such a product. The GrassmannAlgebra function ¯# ignores any such scalar
factors. In applications where only a result up to congruence is required, this is entirely
satisfactory. Otherwise, if we wish to include the scalar factor, we will need to multiply it into one
of the other factors, say the first.
Xf = Xf /. a_ (b_ ∧ c__) :> (a b) ∧ c
$$\left(x_1\, e_1 - x_3\, e_3\right)\wedge\left(e_2 + \frac{x_2}{x_1}\,e_3\right)$$
The span and cospan of Xf are both of grade 1. We display them for reference.
F = ¯%[F]
$$(-x_2 y_1 + x_1 y_2)\ e_1\vee(e_1\wedge e_2\wedge e_3) + (-x_3 y_1 + x_1 y_3)\ e_2\vee(e_1\wedge e_2\wedge e_3) + (-x_3 y_2 + x_2 y_3)\ e_3\vee(e_1\wedge e_2\wedge e_3)$$
Writing this in congruence form gives us the common factor required, and corroborates the result
originally obtained by applying the Common Factor Axiom to each term of the full expansion.
$$\underset{m}{A} = \underset{m-j}{A^{*}}\wedge\underset{j}{C}\qquad\qquad \underset{k}{B} = \underset{j}{C}\wedge\underset{k-j}{B^{*}}$$
$$\underset{m}{A} = \underset{m-j}{A_1}\wedge\underset{j}{\bar A_1} = \underset{m-j}{A_2}\wedge\underset{j}{\bar A_2} = \cdots = \underset{m-j}{A_\nu}\wedge\underset{j}{\bar A_\nu}$$
Here $A_i$ and $\bar A_i$ denote the ith span and cospan elements of A, and ν is equal to $\binom{m}{j}$, or
equivalently $\binom{m}{m-j}$. Express the common factor C in terms of the cospan elements:
$$\underset{j}{C} = a_1\,\underset{j}{\bar A_1} + a_2\,\underset{j}{\bar A_2} + \cdots + a_\nu\,\underset{j}{\bar A_\nu}$$
The exterior product of the ith (m−j)-span element $\underset{m-j}{A_i}$ with C can be expanded to give
$$\underset{m-j}{A_i}\wedge\underset{j}{C} = \underset{m-j}{A_i}\wedge\Big(a_1\,\underset{j}{\bar A_1} + a_2\,\underset{j}{\bar A_2} + \cdots + a_\nu\,\underset{j}{\bar A_\nu}\Big) = a_1\,A_i\wedge\bar A_1 + a_2\,A_i\wedge\bar A_2 + \cdots + a_i\,A_i\wedge\bar A_i + \cdots + a_\nu\,A_i\wedge\bar A_\nu$$
And indeed, because by definition the exterior product of an (m−j)-span element (say, $A_2$) and any
of the non-corresponding cospan elements (say, $\bar A_3$) is zero, we can write for any i
$$\underset{m-j}{A_i}\wedge\underset{j}{C} = a_i\,\underset{m-j}{A_i}\wedge\underset{j}{\bar A_i} = a_i\,\underset{m}{A}$$
$$Z = A\vee B = \Big(\underset{m-j}{A^{*}}\wedge\underset{j}{C}\Big)\vee\Big(\underset{j}{C}\wedge\underset{k-j}{B^{*}}\Big) = \Big(\underset{m-j}{A^{*}}\wedge\underset{j}{C}\wedge\underset{k-j}{B^{*}}\Big)\vee\underset{j}{C} = \Big(\underset{m}{A}\wedge\underset{k-j}{B^{*}}\Big)\vee\underset{j}{C}$$
Substituting for the common factor in the right hand side gives
$$Z = \Big(\underset{m}{A}\wedge\underset{k-j}{B^{*}}\Big)\vee\underset{j}{C} = \Big(\underset{m}{A}\wedge\underset{k-j}{B^{*}}\Big)\vee\Big(a_1\,\underset{j}{\bar A_1} + a_2\,\underset{j}{\bar A_2} + \cdots + a_\nu\,\underset{j}{\bar A_\nu}\Big)$$
By the distributivity of the exterior and regressive products, and their behaviour with scalars, the
scalar factors can be transferred and attached to $\underset{m}{A}$ using
$$a_i\,\underset{m}{A} = \underset{m-j}{A_i}\wedge\underset{j}{C}$$
$$Z = \Big(\underset{m-j}{A_1}\wedge\underset{j}{C}\wedge\underset{k-j}{B^{*}}\Big)\vee\bar A_1 + \Big(\underset{m-j}{A_2}\wedge\underset{j}{C}\wedge\underset{k-j}{B^{*}}\Big)\vee\bar A_2 + \cdots + \Big(\underset{m-j}{A_\nu}\wedge\underset{j}{C}\wedge\underset{k-j}{B^{*}}\Big)\vee\bar A_\nu$$
Finally, since $\underset{j}{C}\wedge\underset{k-j}{B^{*}} = \underset{k}{B}$,
$$Z = \Big(\underset{m-j}{A_1}\wedge\underset{k}{B}\Big)\vee\bar A_1 + \Big(\underset{m-j}{A_2}\wedge\underset{k}{B}\Big)\vee\bar A_2 + \cdots + \Big(\underset{m-j}{A_\nu}\wedge\underset{k}{B}\Big)\vee\bar A_\nu = \sum_{i=1}^{\nu}\Big(\underset{m-j}{A_i}\wedge\underset{k}{B}\Big)\vee\underset{j}{\bar A_i} \tag{3.35}$$
In the case where both A and B are simple but not of the same grade, the form which decomposes
the element of lower grade will generate the least number of terms and be more computationally
efficient. Multiple regressive products may be treated by successive applications of the theorem in
either of its forms.
$$\Big\{\underset{m}{A}\vee\underset{k}{B} = \sum_{i=1}^{\nu}\Big(\underset{m-j}{A_i}\wedge\underset{k}{B}\Big)\vee\underset{j}{\bar A_i},\quad m-j+k-n = 0,\quad \underset{m}{A} = \underset{m-j}{A_1}\wedge\underset{j}{\bar A_1} = \cdots = \underset{m-j}{A_\nu}\wedge\underset{j}{\bar A_\nu},\quad \nu = \binom{m}{j}\Big\} \tag{3.36}$$
In span form:
$$\underset{m}{A}\vee\underset{k}{B} = \sum_{i=1}^{\nu}\Big(\mathrm{Span}_{m-j}[A]_{\langle i\rangle}\wedge\underset{k}{B}\Big)\vee \mathrm{Cospan}_{m-j}[A]_{\langle i\rangle}$$
n
: A Ó B ã ‚ A Ô Bi Ó Bi , m - j + k - ¯n ã 0,
m k m k-j j
i=1
3.37
k
B ã B1 Ô B1 ã B2 Ô B2 ã ! ã Bn Ô Bn , nã >
k j k-j j k-j j k-j j
n
A Ó B ã ‚ A Ô ¯#j BBF Ó ¯#j BBF
m k m k PiT k PiT
i=1
An important special case is the regressive product of a simple n-element $\underset{n}{\alpha}$ with a 1-element b.
Here j = 1 and the span elements are of grade n−1, so the Common Factor Theorem gives
$$\underset{n}{\alpha}\vee b = \sum_{i=1}^{n}\Big(\underset{n-1}{\alpha_i}\wedge b\Big)\vee a_i$$
where:
$$\underset{n}{\alpha} = a_1\wedge a_2\wedge\cdots\wedge a_n = (-1)^{n-i}\,(a_1\wedge a_2\wedge\cdots\wedge \hat a_i\wedge\cdots\wedge a_n)\wedge a_i$$
The symbol $\hat a_i$ means that the ith factor is missing from the product. Substituting in the Common
Factor Theorem gives:
$$\underset{n}{\alpha}\vee b = \sum_{i=1}^{n}(-1)^{n-i}\,\big((a_1\wedge a_2\wedge\cdots\wedge \hat a_i\wedge\cdots\wedge a_n)\wedge b\big)\vee a_i = \sum_{i=1}^{n}(a_1\wedge a_2\wedge\cdots\wedge a_{i-1}\wedge b\wedge a_{i+1}\wedge\cdots\wedge a_n)\vee a_i$$
$$(a_1\wedge a_2\wedge\cdots\wedge a_n)\vee b = \sum_{i=1}^{n}(a_1\wedge a_2\wedge\cdots\wedge a_{i-1}\wedge b\wedge a_{i+1}\wedge\cdots\wedge a_n)\vee a_i \tag{3.38}$$
Writing this out in full shows that we can expand the expression simply by interchanging b
successively with each of the factors of $a_1\wedge a_2\wedge\cdots\wedge a_n$, and summing the results.
$$(a_1\wedge a_2\wedge\cdots\wedge a_n)\vee b = (b\wedge a_2\wedge\cdots\wedge a_n)\vee a_1 + (a_1\wedge b\wedge\cdots\wedge a_n)\vee a_2 + \cdots + (a_1\wedge a_2\wedge\cdots\wedge b)\vee a_n \tag{3.39}$$
We can make the result express more explicitly the decomposition of b in terms of the ai by
'dividing through' by a. (The quotient of two n-elements has been defined previously as the
n
quotient of their scalar coefficients.)
n
a1 Ô a2 Ô ! Ô ai-1 Ô b Ô ai+1 Ô ! Ô an
bã‚ ai 3.40
i=1
a1 Ô a2 Ô ! Ô an
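Since the quotient of two n-elements is the quotient of their scalar coefficients, formula 3.40 is just Cramer's rule in exterior-algebra dress. A minimal numeric sketch in plain Python (not the GrassmannAlgebra package; the vectors are hypothetical):

```python
# Decompose b in the basis {a1, a2, a3} via formula 3.40: each coefficient
# is the quotient of two n-elements, i.e. a ratio of determinants.
import numpy as np

a = [np.array([1.0, 0, 2]),    # a1
     np.array([0.0, 1, 1]),    # a2
     np.array([3.0, 1, 0])]    # a3
b = np.array([2.0, 5, -1])

denom = np.linalg.det(np.column_stack(a))      # a1 ^ a2 ^ a3
coeffs = []
for i in range(3):
    cols = list(a)
    cols[i] = b                                # b replaces ai in the numerator
    coeffs.append(np.linalg.det(np.column_stack(cols)) / denom)

recomposed = sum(c * ai for c, ai in zip(coeffs, a))
assert np.allclose(recomposed, b)
```

The i-th numerator determinant is exactly the n-element $a_1\wedge\cdots\wedge b\wedge\cdots\wedge a_n$ of formula 3.40.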
This expression is, of course, equivalent to that which would have been obtained from Grassmann's
approach to solving linear equations (see Chapter 2: The Exterior Product).
Again, we take the problem of finding the 1-element common to two 2-elements in a 3-space. This
time, however, we take a numerical example and suppose that we know the factors of at least one
of the 2-elements. Let:
x1 = 3 e1∧e2 + 2 e1∧e3 + 3 e2∧e3
x2 = (5 e2 + 7 e3) ∧ e1
We keep x1 fixed initially and apply the Common Factor Theorem to rewrite the regressive
product as a sum of the two products, each due to one of the essentially different rearrangements of
x2 :
x1 ∨ x2 = (x1 ∧ e1) ∨ (5 e2 + 7 e3) − (x1 ∧ (5 e2 + 7 e3)) ∨ e1
The next step is to expand out the exterior products with x1 . Often this step can be done by
inspection, since all products with a repeated basis element factor will be zero.
x1 ∨ x2 = (3 e2∧e3∧e1) ∨ (5 e2 + 7 e3) − (10 e1∧e3∧e2 + 21 e1∧e2∧e3) ∨ e1
x1 ∨ x2 = (e1∧e2∧e3) ∨ (−11 e1 + 15 e2 + 21 e3)
        = ¯c (−11 e1 + 15 e2 + 21 e3) ≡ −11 e1 + 15 e2 + 21 e3
We can check that this result is correct by taking its exterior product with each of x1 and x2 . We
should get zero in both cases, indicating that the factor determined is indeed common to both
original 2-elements. Here we use GrassmannExpandAndSimplify in its alias form ¯%.
¯%[(5 e2 + 7 e3) ∧ e1 ∧ (−11 e1 + 15 e2 + 21 e3)]
0
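The same zero-product check can be reproduced outside Mathematica. A sketch in plain Python (not the GrassmannAlgebra package), storing each 2-element in 3-space by its three coefficients:

```python
# A 2-element in 3-space is stored as (c12, c13, c23); wedging with a
# 1-element z gives the e1^e2^e3 coefficient  c12*z3 - c13*z2 + c23*z1.
def wedge_2_1(c, z):
    c12, c13, c23 = c
    z1, z2, z3 = z
    return c12 * z3 - c13 * z2 + c23 * z1

x1 = (3, 2, 3)          # 3 e1^e2 + 2 e1^e3 + 3 e2^e3
x2 = (-5, -7, 0)        # (5 e2 + 7 e3) ^ e1 = -5 e1^e2 - 7 e1^e3
z  = (-11, 15, 21)      # the common factor found above

assert wedge_2_1(x1, z) == 0    # z lies in x1
assert wedge_2_1(x2, z) == 0    # z lies in x2
```

Both exterior products vanish, confirming the factor is common to both original 2-elements.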
Ï Using ToCommonFactor
x1 = 3 e1∧e2 + 2 e1∧e3 + 3 e2∧e3;
x2 = (5 e2 + 7 e3) ∧ e1;
¯A; ToCommonFactor[x1 ∨ x2]
¯c (−11 e1 + 15 e2 + 21 e3)
The scalar ¯c is called the congruence factor. The congruence factor has already been defined in Section 3.3 above as the connection between the current basis n-element and the unit n-element $\underset{n}{1}$ (in GrassmannAlgebra input we use ¯n instead of n, in case n is being used elsewhere in your computations).

$$e_1\wedge e_2\wedge\cdots\wedge e_n \;=\; \bar c\;\underset{\bar n}{1}$$
Since this connection cannot be defined unless a metric is introduced, the congruence factor remains arbitrary in non-metric spaces. Although such arbitrariness may appear at first sight to be disadvantageous, it is, on the contrary, highly elucidating. In applications to the computation of unions and intersections of spaces, perhaps under a geometric interpretation where they represent lines, planes and hyperplanes, the notion of congruence becomes central, since spaces can only be determined up to congruence.
In the example above, x1 and x2 were expressed in terms of basis elements. ToCommonFactor
can also apply the Common Factor Theorem to more general elements. For example, if x1 and x2
were expressed as symbolic bivectors, the common factor expansion could still be performed.
x1 = x ∧ y;
x2 = u ∧ v;
ToCommonFactor[x1 ∨ x2]
¯c (y |u∧v∧x| − x |u∧v∧y|)
The expression |u∧v∧x| represents the coefficient of u∧v∧x when u∧v∧x is expressed in terms of the current basis n-element (in this case a 3-element).
X = ComposeBasisForm[u ∧ v ∧ x]
(e1 u1 + e2 u2 + e3 u3) ∧ (e1 v1 + e2 v2 + e3 v3) ∧ (e1 x1 + e2 x2 + e3 x3)
¯%[X]
(−u3 v2 x1 + u2 v3 x1 + u3 v1 x2 − u1 v3 x2 − u2 v1 x3 + u1 v2 x3) e1∧e2∧e3
|u∧v∧x| = −u3 v2 x1 + u2 v3 x1 + u3 v1 x2 − u1 v3 x2 − u2 v1 x3 + u1 v2 x3
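That coefficient is simply a 3×3 determinant. A sketch in Python with hypothetical numeric vectors (not the package's symbolic output):

```python
# |u^v^x| is the determinant of the matrix whose columns are u, v, x
# expressed in the current basis; we check it against the expansion above.
import numpy as np

u, v, x = np.array([1., 2, 0]), np.array([0., 1, 3]), np.array([2., 0, 1])
box = np.linalg.det(np.column_stack([u, v, x]))

u1, u2, u3 = u; v1, v2, v3 = v; x1, x2, x3 = x
expanded = (-u3*v2*x1 + u2*v3*x1 + u3*v1*x2
            - u1*v3*x2 - u2*v1*x3 + u1*v2*x3)
assert np.isclose(box, expanded)
```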
If you wish to apply the A form of the Common Factor Theorem explicitly, you can use ToCommonFactorA; for the B form, use ToCommonFactorB. The results will be equal, but they may look different.
We illustrate by taking the example in the section above and expanding it both ways. Again define:
x1 = x ∧ y;
x2 = u ∧ v;
ToCommonFactorA[x1 ∨ x2]
¯c (y |u∧v∧x| − x |u∧v∧y|)
ToCommonFactorB[x1 ∨ x2]
¯c (−v |u∧x∧y| + u |v∧x∧y|)
To see most directly that these are equal, we express the vectors in terms of basis elements.
Z = ComposeBasisForm[x1 ∨ x2]
(e1 x1 + e2 x2 + e3 x3) ∧ (e1 y1 + e2 y2 + e3 y3) ∨ (e1 u1 + e2 u2 + e3 u3) ∧ (e1 v1 + e2 v2 + e3 v3)
ToCommonFactorA[Z]
¯c (e1 (u3 v1 x2 y1 − u1 v3 x2 y1 − u2 v1 x3 y1 + u1 v2 x3 y1 − u3 v1 x1 y2 + u1 v3 x1 y2 + u2 v1 x1 y3 − u1 v2 x1 y3) +
    e2 (u3 v2 x2 y1 − u2 v3 x2 y1 − u3 v2 x1 y2 + u2 v3 x1 y2 − u2 v1 x3 y2 + u1 v2 x3 y2 + u2 v1 x2 y3 − u1 v2 x2 y3) +
    e3 (u3 v2 x3 y1 − u2 v3 x3 y1 − u3 v1 x3 y2 + u1 v3 x3 y2 − u3 v2 x1 y3 + u2 v3 x1 y3 + u3 v1 x2 y3 − u1 v3 x2 y3))
ToCommonFactorB[Z] == ToCommonFactorA[Z]
True
For example, consider the regressive product of two symbolic 2-elements in a 3-space. Attempting
to compute the common factor fails, since neither factor can be decomposed.
¯A; ToCommonFactor[a₂ ∨ b₂]
a₂ ∨ b₂
However, if either one of the factors is decomposable, then a formula for the common factor can be generated. The results agree in either case; remember that ¯c is an arbitrary scalar factor.
ToCommonFactor can find the common factor of any number of elements. As an example, we take the regressive product of three 3-elements in a 4-space. This is a 1-element, since 3 + 3 + 3 − 2×4 = 1. Remember also that a 3-element in a 4-space is necessarily simple (see Chapter 2: The Exterior Product).
To demonstrate this we first declare a 4-space, and then use the GrassmannAlgebra function ComposeBasisForm to compose the required product in basis form.
¯!4; X = ComposeBasisForm[a₃ ∨ b₃ ∨ g₃]
Xc = ToCommonFactor[X]
Note that the congruence factor ¯c is to the second power. This is because the original expression contained the regressive product operator twice: one ¯c effectively stands in for each basis 4-element that is produced during the calculation (3+3+3 = 4+4+1).
It is easy to check that this result is indeed a common factor by taking its exterior product with
each of the 3-elements. For example:
To show this, consider the (non-zero) regressive product $\underset{m}{a}\vee\underset{k}{b}$, where $\underset{m}{a}$ and $\underset{k}{b}$ are simple and $m+k\ge n$. The 1-element factors of $\underset{m}{a}$ and $\underset{k}{b}$ must then have a common subspace of dimension $m+k-n=j$. Let $\underset{j}{g}$ be a simple $j$-element which spans this common subspace. We can then write:

$$\underset{m}{a}\vee\underset{k}{b} \;=\; \Big(\underset{m-j}{a}\wedge\underset{j}{g}\Big)\vee\Big(\underset{k-j}{b}\wedge\underset{j}{g}\Big) \;=\; \Big(\underset{m-j}{a}\wedge\underset{k-j}{b}\wedge\underset{j}{g}\Big)\vee\underset{j}{g} \;\equiv\; \underset{j}{g}$$

The Common Factor Axiom then shows us that since $\underset{j}{g}$ is simple, so is the original product $\underset{m}{a}\vee\underset{k}{b}$ of simple elements.
As an example of the foregoing result we calculate the regressive product of two 3-elements in a 4-
space. We begin by declaring a 4-space and creating two general 3-elements.
¯!4; Z = ComposeBasisForm[x₃ ∨ y₃]
Zc = ToCommonFactor[Z]
¯c ((−x3,2 y3,1 + x3,1 y3,2) e1∧e2 + (−x3,3 y3,1 + x3,1 y3,3) e1∧e3 +
    (−x3,3 y3,2 + x3,2 y3,3) e1∧e4 + (−x3,4 y3,1 + x3,1 y3,4) e2∧e3 +
    (−x3,4 y3,2 + x3,2 y3,4) e2∧e4 + (−x3,4 y3,3 + x3,3 y3,4) e3∧e4)
We can show that this 2-element is simple by confirming that its exterior product with itself is
zero. (This technique was discussed in Chapter 2: The Exterior Product.)
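The same simplicity test can be phrased as a Plücker relation on the coefficients. A sketch in plain Python (independent of the package), using a hypothetical decomposable 2-element u∧v in a 4-space:

```python
# For a simple 2-element u^v in 4-space the coefficients p[i,j] satisfy the
# single Plücker condition p12*p34 - p13*p24 + p14*p23 = 0, which is what
# the vanishing exterior square expresses.
from itertools import combinations
import random

u = [random.randint(-9, 9) for _ in range(4)]
v = [random.randint(-9, 9) for _ in range(4)]
p = {(i, j): u[i]*v[j] - u[j]*v[i] for i, j in combinations(range(4), 2)}

assert p[0,1]*p[2,3] - p[0,2]*p[1,3] + p[0,3]*p[1,2] == 0
```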
$$(a_1\wedge\cdots\wedge a_m)\vee\underset{n-1}{b_1}\vee\cdots\vee\underset{n-1}{b_m}$$
$$=\;(a_1\wedge\cdots\wedge a_m)\vee\underset{n-1}{b_j}\vee(-1)^{\,j-1}\,\underset{n-1}{b_1}\vee\cdots\vee\check b_j\vee\cdots\vee\underset{n-1}{b_m}$$
$$=\;\sum_i(-1)^{\,i-1}\Big(a_i\vee\underset{n-1}{b_j}\Big)\,(a_1\wedge\cdots\wedge\check a_i\wedge\cdots\wedge a_m)\vee(-1)^{\,j-1}\,b_1\vee\cdots\vee\check b_j\vee\cdots\vee b_m$$
$$=\;\sum_i(-1)^{\,i+j}\Big(a_i\vee\underset{n-1}{b_j}\Big)\,(a_1\wedge\cdots\wedge\check a_i\wedge\cdots\wedge a_m)\vee b_1\vee\cdots\vee\check b_j\vee\cdots\vee b_m$$

Repeating the expansion on the remaining factors yields $\operatorname{Det}\big[a_i\vee\underset{n-1}{b_j}\big]$, the determinant of the matrix whose elements are the scalars $a_i\vee\underset{n-1}{b_j}$. For example, with $m=2$:

$$(a_1\wedge a_2)\vee\underset{n-1}{b_1}\vee\underset{n-1}{b_2} \;=\; \Big(a_1\vee\underset{n-1}{b_1}\Big)\Big(a_2\vee\underset{n-1}{b_2}\Big)-\Big(a_1\vee\underset{n-1}{b_2}\Big)\Big(a_2\vee\underset{n-1}{b_1}\Big)$$
Let the new basis be $\{b_1,\ldots,b_n\}$ and let $\underset{n}{b}=b_1\wedge\cdots\wedge b_n$; then the Common Factor Theorem permits us to write:

$$\underset{n}{b}\vee\underset{m}{a} \;=\; \sum_{i=1}^{\nu}\Big(\underset{n-m}{b_i}\wedge\underset{m}{a}\Big)\vee\underset{m}{b_i}$$

where $\nu=\binom{n}{m}$ and

$$\underset{n}{b} \;=\; \underset{n-m}{b_1}\wedge\underset{m}{b_1} \;=\; \underset{n-m}{b_2}\wedge\underset{m}{b_2} \;=\;\cdots\;=\; \underset{n-m}{b_\nu}\wedge\underset{m}{b_\nu}$$

We can visualize how the formula operates by writing $\underset{m}{a}$ and $\underset{n}{b}$ as simple products and then exchanging the $b_i$ with the $a_i$ in all the essentially different ways possible whilst always retaining the original ordering. To make this more concrete, suppose $n$ is 5 and $m$ is 2:
$$(b_1\wedge b_2\wedge b_3\wedge b_4\wedge b_5)\vee(a_1\wedge a_2) \;=\; (a_1\wedge a_2\wedge b_3\wedge b_4\wedge b_5)\vee(b_1\wedge b_2)$$
$$+\,(a_1\wedge b_2\wedge a_2\wedge b_4\wedge b_5)\vee(b_1\wedge b_3)$$
$$+\,(a_1\wedge b_2\wedge b_3\wedge a_2\wedge b_5)\vee(b_1\wedge b_4)$$
$$+\,(a_1\wedge b_2\wedge b_3\wedge b_4\wedge a_2)\vee(b_1\wedge b_5)$$
$$+\,(b_1\wedge a_1\wedge a_2\wedge b_4\wedge b_5)\vee(b_2\wedge b_3)$$
$$+\,(b_1\wedge a_1\wedge b_3\wedge a_2\wedge b_5)\vee(b_2\wedge b_4)$$
$$+\,(b_1\wedge a_1\wedge b_3\wedge b_4\wedge a_2)\vee(b_2\wedge b_5)$$
$$+\,(b_1\wedge b_2\wedge a_1\wedge a_2\wedge b_5)\vee(b_3\wedge b_4)$$
$$+\,(b_1\wedge b_2\wedge a_1\wedge b_4\wedge a_2)\vee(b_3\wedge b_5)$$
$$+\,(b_1\wedge b_2\wedge b_3\wedge a_1\wedge a_2)\vee(b_4\wedge b_5)$$
Now let the new $n$-elements on the right-hand side be written as scalar factors $\beta_i$ times the new basis $n$-element $\underset{n}{b}$:

$$\underset{n-m}{b_i}\wedge\underset{m}{a} \;=\; \beta_i\,\underset{n}{b}$$

Substituting gives:

$$\underset{n}{b}\vee\underset{m}{a} \;=\; \sum_{i=1}^{\nu}\beta_i\,\underset{n}{b}\vee\underset{m}{b_i} \;=\; \underset{n}{b}\vee\sum_{i=1}^{\nu}\beta_i\,\underset{m}{b_i}$$

$$\underset{m}{a} \;=\; \sum_{i=1}^{\nu}\beta_i\,\underset{m}{b_i},\qquad \beta_i=\frac{\underset{n-m}{b_i}\wedge\underset{m}{a}}{\underset{n}{b}}\tag{3.42}$$
Consider a 4-space with basis $\{e_1,e_2,e_3,e_4\}$ and let $\underset{3}{a}$ be a 3-element expressed in terms of this basis.

¯!4; a₃ = 2 e1∧e2∧e3 − 3 e1∧e3∧e4 − 2 e2∧e3∧e4;
Suppose now that we have another basis related to the e basis by:
b1 = 2 e1 + 3 e3;
b2 = 5 e3 − e4;
b3 = e1 − e3;
b4 = e1 + e2 + e3 + e4;

We wish to express $\underset{3}{a}$ in terms of the new basis. Instead of inverting the transformation to find the $e_i$ in terms of the $b_i$, substituting for the $e_i$ in $\underset{3}{a}$, and simplifying, we can almost write the result required by inspection.

$$\underset{4}{b}\vee\underset{3}{a} \;=\; \Big(b_1\wedge\underset{3}{a}\Big)\vee(b_2\wedge b_3\wedge b_4) - \Big(b_2\wedge\underset{3}{a}\Big)\vee(b_1\wedge b_3\wedge b_4) + \Big(b_3\wedge\underset{3}{a}\Big)\vee(b_1\wedge b_2\wedge b_4) - \Big(b_4\wedge\underset{3}{a}\Big)\vee(b_1\wedge b_2\wedge b_3)$$
$$=\;(e_1\wedge e_2\wedge e_3\wedge e_4)\vee\big((-4\,b_2\wedge b_3\wedge b_4)+(-2\,b_1\wedge b_3\wedge b_4)+(-2\,b_1\wedge b_2\wedge b_4)+(b_1\wedge b_2\wedge b_3)\big)$$

¯%[b1∧b2∧b3∧b4]
−5 e1∧e2∧e3∧e4

$$(b_1\wedge b_2\wedge b_3\wedge b_4)\vee\underset{3}{a} \;=\; -\tfrac{1}{5}\,(b_1\wedge b_2\wedge b_3\wedge b_4)\vee\big((-4\,b_2\wedge b_3\wedge b_4)+(-2\,b_1\wedge b_3\wedge b_4)+(-2\,b_1\wedge b_2\wedge b_4)+(b_1\wedge b_2\wedge b_3)\big)$$

$$\underset{3}{a} \;=\; \tfrac{4}{5}\,b_2\wedge b_3\wedge b_4 + \tfrac{2}{5}\,b_1\wedge b_3\wedge b_4 + \tfrac{2}{5}\,b_1\wedge b_2\wedge b_4 - \tfrac{1}{5}\,b_1\wedge b_2\wedge b_3$$

We can easily check this by expanding and simplifying the right-hand side to give both sides in terms of the original e basis.

a₃ == ¯%[(4/5) b2∧b3∧b4 + (2/5) b1∧b3∧b4 + (2/5) b1∧b2∧b4 − (1/5) b1∧b2∧b3]
True
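The same check can be carried out numerically, outside Mathematica. A sketch in plain Python: a 3-element's component on $e_i\wedge e_j\wedge e_k$ is the 3×3 minor of the factor matrix on those coordinates.

```python
# Verify the expansion of a (grade 3) in the b basis by comparing the
# components of both sides in the e basis, using exact rational arithmetic.
from fractions import Fraction as F
from itertools import combinations

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def wedge3(u, v, w):
    return {t: det3([[u[i] for i in t], [v[i] for i in t], [w[i] for i in t]])
            for t in combinations(range(4), 3)}

b1, b2, b3, b4 = [2,0,3,0], [0,0,5,-1], [1,0,-1,0], [1,1,1,1]
alpha = {(0,1,2): 2, (0,1,3): 0, (0,2,3): -3, (1,2,3): -2}  # 2e123 - 3e134 - 2e234

rhs = {t: (F(4,5)*wedge3(b2,b3,b4)[t] + F(2,5)*wedge3(b1,b3,b4)[t]
           + F(2,5)*wedge3(b1,b2,b4)[t] - F(1,5)*wedge3(b1,b2,b3)[t])
       for t in combinations(range(4), 3)}
assert all(rhs[t] == alpha[t] for t in rhs)
```

All four components agree exactly with those of the original 3-element.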
For the special case of a 1-element, the scalar coefficients $\beta_i$ in formula 3.42 can be written more explicitly, and without undue complexity, resulting in a decomposition of $a$ in terms of $n$ independent elements $b_i$, equivalent to expressing $a$ in the basis $b_i$.

$$a \;=\; \sum_{i=1}^{n}\;\frac{b_1\wedge b_2\wedge\cdots\wedge b_{i-1}\wedge a\wedge b_{i+1}\wedge\cdots\wedge b_n}{b_1\wedge b_2\wedge\cdots\wedge b_i\wedge\cdots\wedge b_n}\;b_i\tag{3.43}$$

Here the denominators are the same in each term but are expressed this way to show the positioning of the factor $a$ in the numerator.
This result has already been introduced in the section above, Example: The decomposition of a 1-element.
For the case $m=1$, $\underset{n}{b}\vee a$ may be expanded by the Common Factor Theorem to give:

$$(b_1\wedge b_2\wedge\cdots\wedge b_n)\vee a \;=\; \sum_{i=1}^{n}\,(b_1\wedge b_2\wedge\cdots\wedge b_{i-1}\wedge a\wedge b_{i+1}\wedge\cdots\wedge b_n)\vee b_i$$

$$(b_1\wedge b_2\wedge b_3\wedge\cdots\wedge b_n)\vee a \;=\; (a\wedge b_2\wedge b_3\wedge\cdots\wedge b_n)\vee b_1 + (b_1\wedge a\wedge b_3\wedge\cdots\wedge b_n)\vee b_2 + (b_1\wedge b_2\wedge a\wedge\cdots\wedge b_n)\vee b_3 + \cdots + (b_1\wedge b_2\wedge b_3\wedge\cdots\wedge a)\vee b_n$$

Renaming $a$ as $b_0$ and collecting all the terms on one side gives the alternating identity:

$$\sum_{i=0}^{n}(-1)^i\,(b_0\wedge b_1\wedge\cdots\wedge\check b_i\wedge\cdots\wedge b_n)\vee b_i \;=\; 0\tag{3.44}$$

For example, suppose $b_0,b_1,b_2,b_3$ are four dependent 1-elements which span a 3-space; then the formula reduces to the identity:

$$(b_1\wedge b_2\wedge b_3)\vee b_0 - (b_0\wedge b_2\wedge b_3)\vee b_1 + (b_0\wedge b_1\wedge b_3)\vee b_2 - (b_0\wedge b_1\wedge b_2)\vee b_3 \;=\; 0\tag{3.45}$$
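In a 3-space the regressive product of a 3-element with a 1-element is congruent to a determinant times that 1-element, so identity 3.45 becomes a classical determinant identity. A numeric sketch in Python (independent of the package):

```python
# Check: det[b1 b2 b3] b0 - det[b0 b2 b3] b1 + det[b0 b1 b3] b2
#        - det[b0 b1 b2] b3 = 0   for any four vectors in 3-space.
import numpy as np

rng = np.random.default_rng(0)
b = [rng.integers(-9, 9, 3).astype(float) for _ in range(4)]

def d(u, v, w):
    return np.linalg.det(np.column_stack([u, v, w]))

lhs = (d(b[1], b[2], b[3]) * b[0] - d(b[0], b[2], b[3]) * b[1]
       + d(b[0], b[1], b[3]) * b[2] - d(b[0], b[1], b[2]) * b[3])
assert np.allclose(lhs, 0)
```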
We can get GrassmannAlgebra to check this formula by composing each of the bi in basis form,
and finding the common factor of each term. (To use ComposeBasisForm, we will first have to
give the bi new names which do not already involve subscripts.)
¯A; B = (b1∧b2∧b3)∨b0 − (b0∧b2∧b3)∨b1 + (b0∧b1∧b3)∨b2 − (b0∧b1∧b2)∨b3;
B = B /. {b0 → w, b1 → x, b2 → y, b3 → z}
−((w∧x∧y)∨z) + (w∧x∧z)∨y − (w∧y∧z)∨x + (x∧y∧z)∨w
Bn = ComposeBasisForm[B]
−((e1 w1 + e2 w2 + e3 w3)∧(e1 x1 + e2 x2 + e3 x3)∧(e1 y1 + e2 y2 + e3 y3) ∨ (e1 z1 + e2 z2 + e3 z3)) +
(e1 w1 + e2 w2 + e3 w3)∧(e1 x1 + e2 x2 + e3 x3)∧(e1 z1 + e2 z2 + e3 z3) ∨ (e1 y1 + e2 y2 + e3 y3) −
(e1 w1 + e2 w2 + e3 w3)∧(e1 y1 + e2 y2 + e3 y3)∧(e1 z1 + e2 z2 + e3 z3) ∨ (e1 x1 + e2 x2 + e3 x3) +
(e1 x1 + e2 x2 + e3 x3)∧(e1 y1 + e2 y2 + e3 y3)∧(e1 z1 + e2 z2 + e3 z3) ∨ (e1 w1 + e2 w2 + e3 w3)
ToCommonFactor[Bn] == 0
True
We start with three basis elements whose exterior product is equal to the basis $n$-element. The basis $n$-element may also be expressed as the cobasis $\underline{1}$ of the unit element $1$ of $\Lambda_0$.

$$\underset{m}{e_i}\wedge\underset{k}{e_j}\wedge\underset{p}{e_s} \;=\; e_1\wedge e_2\wedge\cdots\wedge e_n \;=\; \underline{1} \;=\; \bar c\,\underset{\bar n}{1}$$

The Common Factor Axiom can be written for these basis elements as:

$$\Big(\underset{m}{e_i}\wedge\underset{p}{e_s}\Big)\vee\Big(\underset{k}{e_j}\wedge\underset{p}{e_s}\Big) \;=\; \Big(\underset{m}{e_i}\wedge\underset{k}{e_j}\wedge\underset{p}{e_s}\Big)\vee\underset{p}{e_s},\qquad m+k+p=\bar n\tag{3.46}$$

$$\underset{m}{e_i}\wedge\underset{p}{e_s} \;=\; (-1)^{mk}\,\underline{\underset{k}{e_j}}$$
$$\underset{k}{e_j}\wedge\underset{p}{e_s} \;=\; \underline{\underset{m}{e_i}}\qquad\quad \underset{p}{e_s} \;=\; \underline{\underset{m}{e_i}\wedge\underset{k}{e_j}}\qquad\quad \underset{m}{e_i}\wedge\underset{k}{e_j}\wedge\underset{p}{e_s} \;=\; \underline{1}$$

Substituting these four elements into the Common Factor Axiom above gives:

$$(-1)^{mk}\,\underline{\underset{k}{e_j}}\vee\underline{\underset{m}{e_i}} \;=\; \underline{1}\vee\underline{\big(\underset{m}{e_i}\wedge\underset{k}{e_j}\big)}$$

$$\underline{\underset{m}{e_i}}\vee\underline{\underset{k}{e_j}} \;=\; \underline{1}\vee\underline{\big(\underset{m}{e_i}\wedge\underset{k}{e_j}\big)} \;=\; \bar c\;\underline{\underset{m}{e_i}\wedge\underset{k}{e_j}}\tag{3.47}$$
It can be seen that in this form the Common Factor Axiom does not specifically display the
common factor, and indeed remains valid for all basis elements, independent of their grades.
In sum: Given any two basis elements, the cobasis element of their exterior product is congruent to
the regressive product of their cobasis elements.
First, consider basis elements $e_1$ and $e_2$ of an $n$-space and their cobasis elements $\underline{e_1}$ and $\underline{e_2}$. The regressive product of $\underline{e_1}$ and $\underline{e_2}$ is given by:

$$\underline{e_1}\vee\underline{e_2} \;=\; (e_2\wedge e_3\wedge\cdots\wedge e_n)\vee(-\,e_1\wedge e_3\wedge\cdots\wedge e_n)$$
$$\underline{e_1}\vee\underline{e_2} \;=\; (e_1\wedge e_2\wedge e_3\wedge\cdots\wedge e_n)\vee(e_3\wedge\cdots\wedge e_n)$$

We can write this either in the form already derived in the section above:

$$\underline{e_1}\vee\underline{e_2} \;=\; \underline{1}\vee\underline{(e_1\wedge e_2)}$$
$$\underline{e_1}\vee\underline{e_2} \;=\; \bar c\;\underline{(e_1\wedge e_2)}$$
$$\underline{e_1}\vee\underline{e_2}\vee\underline{e_3} \;=\; \bar c\,\underline{(e_1\wedge e_2)}\vee\underline{e_3} \;=\; \bar c\,\underline{1}\vee\underline{(e_1\wedge e_2\wedge e_3)} \;=\; \bar c^{\,2}\,\underline{(e_1\wedge e_2\wedge e_3)}$$

$$\underline{e_1}\vee\underline{e_2}\vee\cdots\vee\underline{e_m} \;=\; \bar c^{\,m-1}\,\underline{(e_1\wedge e_2\wedge\cdots\wedge e_m)}\tag{3.48}$$
A special case which we will have occasion to use in Chapter 5 is where the result reduces to a 1-
element.
In sum: The regressive product of cobasis elements of basis 1-elements is congruent to the cobasis
element of their exterior product.
In fact, this formula is just an instance of a more general result which says that: The regressive
product of cobasis elements of any grade is congruent to the cobasis element of their exterior
product. We will discuss a result very similar to this in more detail after we have defined the
complement of an element in Chapter 5.
Let $\underset{m}{a}$ be the simple element to be factorized. Choose first an $(n{-}m)$-element $\underset{n-m}{b}$ whose product with $\underset{m}{a}$ is non-zero. Next, choose a set of $m$ independent 1-elements $b_j$ whose products with $\underset{n-m}{b}$ are also non-zero.

$$\underset{m}{a} \;=\; a\;a_1\wedge a_2\wedge\cdots\wedge a_m,\qquad a_j \;=\; \underset{m}{a}\vee\Big(b_j\wedge\underset{n-m}{b}\Big)\tag{3.50}$$

The scalar factor $a$ may be determined simply by equating any two corresponding terms of the original element and the factorized version.
Note that no factorization is unique. Had different bj been chosen, a different factorization would
have been obtained. Nevertheless, any one factorization may be obtained from any other by adding
multiples of the factors to each factor.
If an element is simple, then the exterior product of the element with itself is zero. The converse,
however, is not true in general for elements of grade higher than 2, for it only requires the element
to have just one simple factor to make the product with itself zero.
If the method is applied to the factorization of a non-simple element, the result will still be a simple
element. Thus an element may be tested to see if it is simple by applying the method of this
section: if the factorization is not equivalent to the original element, the hypothesis of simplicity
has been violated.
Suppose we have a 2-element $\underset{2}{a}$ which we wish to show is simple, and which we wish to factorize. There are 5 independent 1-elements in the expression for $\underset{2}{a}$: $v, w, x, y, z$; hence we can choose $n$ to be 5. Next, we choose $\underset{n-m}{b}$ (= $\underset{3}{b}$) arbitrarily as $x\wedge y\wedge z$, $b_1$ as $v$, and $b_2$ as $w$. Our two factors then become:

$$a_1 \;\equiv\; \underset{2}{a}\vee\Big(b_1\wedge\underset{3}{b}\Big) \;\equiv\; \underset{2}{a}\vee(v\wedge x\wedge y\wedge z)$$
$$a_2 \;\equiv\; \underset{2}{a}\vee\Big(b_2\wedge\underset{3}{b}\Big) \;\equiv\; \underset{2}{a}\vee(w\wedge x\wedge y\wedge z)$$

In determining $a_1$ the Common Factor Theorem permits us to write for arbitrary 1-elements $x$ and $y$:

$$(x\wedge y)\vee(v\wedge x\wedge y\wedge z) \;=\; (x\wedge v\wedge x\wedge y\wedge z)\vee y - (y\wedge v\wedge x\wedge y\wedge z)\vee x$$
$$a_1 \;\equiv\; -(w\wedge v\wedge x\wedge y\wedge z)\vee v + (w\wedge v\wedge x\wedge y\wedge z)\vee z \;\equiv\; v-z$$
$$(x\wedge y)\vee(w\wedge x\wedge y\wedge z) \;=\; (x\wedge w\wedge x\wedge y\wedge z)\vee y - (y\wedge w\wedge x\wedge y\wedge z)\vee x$$
$$a_2 \;\equiv\; (v\wedge w\wedge x\wedge y\wedge z)\vee w + (v\wedge w\wedge x\wedge y\wedge z)\vee x + (v\wedge w\wedge x\wedge y\wedge z)\vee y + (v\wedge w\wedge x\wedge y\wedge z)\vee z \;\equiv\; w+x+y+z$$
$$a_1\wedge a_2 \;\equiv\; (v-z)\wedge(w+x+y+z)$$

Verification by expansion of this product shows that this is indeed a factorization of the original element.
The development is most clearly apparent from a specific example, but one that is general enough to cover the general concept. Consider a 3-element in a 5-space, where we suppose the coefficients to be such as to ensure the simplicity of the element.

$$\underset{3}{a} \;=\; a_1\,e_1{\wedge}e_2{\wedge}e_3 + a_2\,e_1{\wedge}e_2{\wedge}e_4 + a_3\,e_1{\wedge}e_2{\wedge}e_5 + a_4\,e_1{\wedge}e_3{\wedge}e_4 + a_5\,e_1{\wedge}e_3{\wedge}e_5 + a_6\,e_1{\wedge}e_4{\wedge}e_5 + a_7\,e_2{\wedge}e_3{\wedge}e_4 + a_8\,e_2{\wedge}e_3{\wedge}e_5 + a_9\,e_2{\wedge}e_4{\wedge}e_5 + a_{10}\,e_3{\wedge}e_4{\wedge}e_5$$

$$b_1\wedge\underset{2}{b} \;=\; e_1\wedge e_4\wedge e_5 \;=\; \underline{e_2\wedge e_3}$$
$$b_2\wedge\underset{2}{b} \;=\; e_2\wedge e_4\wedge e_5 \;=\; -\,\underline{e_1\wedge e_3}$$
$$b_3\wedge\underset{2}{b} \;=\; e_3\wedge e_4\wedge e_5 \;=\; \underline{e_1\wedge e_2}$$

$$\underset{3}{a}\vee\underline{e_2\wedge e_3} \;=\; (a_1\,e_1{\wedge}e_2{\wedge}e_3 + a_7\,e_2{\wedge}e_3{\wedge}e_4 + a_8\,e_2{\wedge}e_3{\wedge}e_5)\vee\underline{e_2\wedge e_3}$$
$$a_1 \;\equiv\; \underset{3}{a}\vee\underline{e_2\wedge e_3} \;\equiv\; a_1 e_1 + a_7 e_4 + a_8 e_5$$
$$a_2 \;\equiv\; \underset{3}{a}\vee\big({-\,\underline{e_1\wedge e_3}}\big) \;\equiv\; a_1 e_2 - a_4 e_4 - a_5 e_5$$
$$a_3 \;\equiv\; \underset{3}{a}\vee\underline{e_1\wedge e_2} \;\equiv\; a_1 e_3 + a_2 e_4 + a_3 e_5$$

It is clear from inspecting the product of the first terms in each 1-element that the product requires a scalar divisor of $a_1^{\,2}$. The final result is then:

$$\underset{3}{a} \;=\; \frac{1}{a_1^{\,2}}\;a_1\wedge a_2\wedge a_3 \;=\; \frac{1}{a_1^{\,2}}\,(a_1e_1+a_7e_4+a_8e_5)\wedge(a_1e_2-a_4e_4-a_5e_5)\wedge(a_1e_3+a_2e_4+a_3e_5)$$

We verify the factorization by multiplying out the factors and comparing the result with the original expression. When we do this we obtain a result which still requires some conditions to be met: those ensuring the original element is simple.
First we declare a 5-dimensional basis and compose $\underset{3}{a}$ (this automatically declares the coefficients $a_i$ to be scalar).
A = ¯%[(1/a1²) (a1 e1 + a7 e4 + a8 e5)∧(a1 e2 − a4 e4 − a5 e5)∧(a1 e3 + a2 e4 + a3 e5)]
a1 e1∧e2∧e3 + a2 e1∧e2∧e4 + a3 e1∧e2∧e5 + a4 e1∧e3∧e4 + a5 e1∧e3∧e5 + ((−a3 a4 + a2 a5)/a1) e1∧e4∧e5 + a7 e2∧e3∧e4 + a8 e2∧e3∧e5 + ((−a3 a7 + a2 a8)/a1) e2∧e4∧e5 + ((−a5 a7 + a4 a8)/a1) e3∧e4∧e5

For this expression to be congruent to the original expression we clearly need to apply the simplicity conditions for a general 3-element in a 5-space. Once this is done we retrieve the original 3-element $\underset{3}{a}$ with which we began. The simplicity conditions can be expressed by the following rules, which we apply to the expression A. We will discuss them in the next section.

A /. {(−a3 a4 + a2 a5)/a1 → a6, (−a3 a7 + a2 a8)/a1 → a9, (−a5 a7 + a4 a8)/a1 → a10}
a1 e1∧e2∧e3 + a2 e1∧e2∧e4 + a3 e1∧e2∧e5 + a4 e1∧e3∧e4 + a5 e1∧e3∧e5 + a6 e1∧e4∧e5 + a7 e2∧e3∧e4 + a8 e2∧e3∧e5 + a9 e2∧e4∧e5 + a10 e3∧e4∧e5

The simplicity conditions, and the factorization they validate, are thus:

$$a_2a_5 - a_3a_4 - a_1a_6 \;=\; 0$$
$$a_2a_8 - a_3a_7 - a_1a_9 \;=\; 0$$
$$a_4a_8 - a_5a_7 - a_1a_{10} \;=\; 0$$

$$\frac{1}{a_1^{\,2}}\,(a_1e_1+a_7e_4+a_8e_5)\wedge(a_1e_2-a_4e_4-a_5e_5)\wedge(a_1e_3+a_2e_4+a_3e_5)$$
This factorization is of course not unique.
Suppose we have a simple m-element expressed as a sum of terms, each of which is of the form
a x Ô y Ô ! Ô z, where a denotes a scalar (which may be unity) and the x, y, !, z are symbols
denoting 1-element factors.
Suppose we have a 2-element in a 4-space, and we wish to apply this algorithm to obtain a
factorization. We have already seen that such an element is in general not simple. We may
however use the preceding algorithm to obtain the simplicity conditions on the coefficients.
$$\underset{2}{a} \;=\; a_1\,e_1{\wedge}e_2 + a_2\,e_1{\wedge}e_3 + a_3\,e_1{\wedge}e_4 + a_4\,e_2{\wedge}e_3 + a_5\,e_2{\wedge}e_4 + a_6\,e_3{\wedge}e_4$$

• Select e1.
• Drop the terms a4 e2∧e3 + a5 e2∧e4 + a6 e3∧e4 since they do not contain e1.
• Factor e1 from a1 e1∧e2 + a2 e1∧e3 + a3 e1∧e4 and eliminate it to give the factor a1:

$$a_1 \;=\; a_1e_2 + a_2e_3 + a_3e_4$$

• Select e2.
• Drop the terms a2 e1∧e3 + a3 e1∧e4 + a6 e3∧e4 since they do not contain e2.
• Factor e2 from a1 e1∧e2 + a4 e2∧e3 + a5 e2∧e4 and eliminate it to give the factor a2:

$$a_2 \;=\; -a_1e_1 + a_4e_3 + a_5e_4$$

$$a_1\wedge a_2 \;=\; (a_1e_2+a_2e_3+a_3e_4)\wedge(-a_1e_1+a_4e_3+a_5e_4) \;=\; a_1\Big(a_1\,e_1{\wedge}e_2 + a_2\,e_1{\wedge}e_3 + a_3\,e_1{\wedge}e_4 + a_4\,e_2{\wedge}e_3 + a_5\,e_2{\wedge}e_4 + \frac{-a_3a_4+a_2a_5}{a_1}\,e_3{\wedge}e_4\Big)$$

• Comparing this product to the original 2-element $\underset{2}{a}$ gives the final factorization as

$$\underset{2}{a} \;=\; \frac{1}{a_1}\,(a_1e_2+a_2e_3+a_3e_4)\wedge(-a_1e_1+a_4e_3+a_5e_4)$$

provided that the simplicity condition on the coefficients is satisfied:

$$a_1a_6 + a_3a_4 - a_2a_5 \;=\; 0$$
In sum: A 2-element in a 4-space may be factorized if and only if a condition on the coefficients is
satisfied.
$$a_1\,e_1{\wedge}e_2 + a_2\,e_1{\wedge}e_3 + a_3\,e_1{\wedge}e_4 + a_4\,e_2{\wedge}e_3 + a_5\,e_2{\wedge}e_4 + a_6\,e_3{\wedge}e_4$$
$$=\;\frac{1}{a_1}\,(a_1e_1 - a_4e_3 - a_5e_4)\wedge(a_1e_2 + a_2e_3 + a_3e_4)\qquad\Longleftrightarrow\qquad a_3a_4 - a_2a_5 + a_1a_6 \;=\; 0\tag{3.51}$$
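Formula 3.51 can be checked numerically. A Python sketch (independent of the package) with hypothetical coefficients chosen to satisfy the simplicity condition:

```python
# Expand (a1 e1 - a4 e3 - a5 e4) ^ (a1 e2 + a2 e3 + a3 e4), divide by a1,
# and compare coefficients with the original 2-element of formula 3.51.
from fractions import Fraction as F

a1, a2, a3, a4, a5 = 1, 2, 3, 4, 5
a6 = F(a2*a5 - a3*a4, a1)          # forced by a3 a4 - a2 a5 + a1 a6 = 0

f1 = {0: a1, 2: -a4, 3: -a5}       # a1 e1 - a4 e3 - a5 e4
f2 = {1: a1, 2: a2, 3: a3}         # a1 e2 + a2 e3 + a3 e4

wedge = {}
for i, ci in f1.items():
    for j, cj in f2.items():
        if i < j:
            wedge[(i, j)] = wedge.get((i, j), 0) + ci*cj
        elif j < i:
            wedge[(j, i)] = wedge.get((j, i), 0) - ci*cj

target = {(0,1): a1, (0,2): a2, (0,3): a3, (1,2): a4, (1,3): a5, (2,3): a6}
assert all(F(c, a1) == target[k] for k, c in wedge.items())
```

With these numbers the condition forces a6 = −2, and the expanded product divided by a1 reproduces every coefficient.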
This time we have a numerical example of a 3-element in 5-space. We wish to determine if the
element is simple, and if so, to obtain a factorization of it. To achieve this, we first assume that it is
simple, and apply the factorization algorithm. We will verify its simplicity (or its non-simplicity)
by comparing the results of this process to the original element.
$$\underset{3}{a} \;=\; -3\,e_1{\wedge}e_2{\wedge}e_3 - 4\,e_1{\wedge}e_2{\wedge}e_4 + 12\,e_1{\wedge}e_2{\wedge}e_5 + 3\,e_1{\wedge}e_3{\wedge}e_4 - 3\,e_1{\wedge}e_3{\wedge}e_5 + 8\,e_1{\wedge}e_4{\wedge}e_5 + 6\,e_2{\wedge}e_3{\wedge}e_4 - 18\,e_2{\wedge}e_3{\wedge}e_5 + 12\,e_3{\wedge}e_4{\wedge}e_5$$

• Select e2∧e3.
• Drop the terms not containing it, factor it from the remainder, and eliminate it to give a1.

$$a_1 \;\equiv\; -3e_1 + 6e_4 - 18e_5 \;\equiv\; -e_1 + 2e_4 - 6e_5$$

• Select e1∧e3.
• Drop the terms not containing it, factor it from the remainder, and eliminate it to give a2.

$$a_2 \;\equiv\; 3e_2 + 3e_4 - 3e_5 \;\equiv\; e_2 + e_4 - e_5$$

• Select e1∧e2.
• Drop the terms not containing it, factor it from the remainder, and eliminate it to give a3.

$$a_3 \;\equiv\; -3e_3 - 4e_4 + 12e_5$$

$$a_1\wedge a_2\wedge a_3 \;=\; (-e_1+2e_4-6e_5)\wedge(e_2+e_4-e_5)\wedge(-3e_3-4e_4+12e_5)$$
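As a cross-check, this factorization can be verified numerically (up to congruence) outside Mathematica. A Python sketch expanding the three factors and testing proportionality to the original coefficient list:

```python
# Expand a1 ^ a2 ^ a3 and check the result is a scalar multiple of the
# original 3-element, i.e. the element is simple and the factors are right.
from fractions import Fraction as F
from itertools import combinations

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def wedge3(u, v, w):
    return {t: det3([[u[i] for i in t], [v[i] for i in t], [w[i] for i in t]])
            for t in combinations(range(5), 3)}

a = {(0,1,2): -3, (0,1,3): -4, (0,1,4): 12, (0,2,3): 3, (0,2,4): -3,
     (0,3,4): 8, (1,2,3): 6, (1,2,4): -18, (1,3,4): 0, (2,3,4): 12}
f = wedge3([-1,0,0,2,-6], [0,1,0,1,-1], [0,0,-3,-4,12])

ratios = {F(f[t], a[t]) for t in a if a[t] != 0}
assert len(ratios) == 1                      # proportional: congruent factorization
assert all(f[t] == 0 for t in a if a[t] == 0)
```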
Multiplying this out (here using GrassmannExpandAndSimplify in its alias form) gives:
• Comparing this product to the original 3-element a verifies a final factorization as:
3
This factorization is, of course, not unique. For example, a slightly simpler factorization could be
obtained by subtracting twice the first factor from the third factor to obtain:
Factorization of (n−1)-elements
The foregoing method may be used to prove constructively that any (n−1)-element is simple by obtaining a factorization and subsequently verifying its validity. Let the general (n−1)-element be:

$$\underset{n-1}{a} \;=\; \sum_{i=1}^{n}\,a_i\,\underline{e_i},\qquad a_1\neq 0$$

Choose $\underset{1}{b}=e_1$ and $b_j=e_j$, and apply the Common Factor Theorem to obtain (for $j\neq 1$):
$$\underset{n-1}{a}\vee(e_1\wedge e_j) \;=\; \Big(\underset{n-1}{a}\wedge e_j\Big)\vee e_1 - \Big(\underset{n-1}{a}\wedge e_1\Big)\vee e_j$$
$$=\;\Big(\sum_{i=1}^{n}a_i\,\underline{e_i}\wedge e_j\Big)\vee e_1 - \Big(\sum_{i=1}^{n}a_i\,\underline{e_i}\wedge e_1\Big)\vee e_j$$
$$=\;(-1)^{n-1}\Big(a_j\,\underset{n}{e}\vee e_1 - a_1\,\underset{n}{e}\vee e_j\Big)$$
$$=\;(-1)^{n-1}\,\underset{n}{e}\vee(a_j e_1 - a_1 e_j)$$

$$a_1\underline{e_1} + a_2\underline{e_2} + \cdots + a_n\underline{e_n} \;\equiv\; (a_2e_1 - a_1e_2)\wedge(a_3e_1 - a_1e_3)\wedge\cdots\wedge(a_ne_1 - a_1e_n),\qquad a_1\neq 0\tag{3.52}$$

$$\sum_{i=1}^{n}a_i\,\underline{e_i} \;\equiv\; \bigwedge_{i=2}^{n}\,(a_ie_1 - a_1e_i),\qquad a_1\neq 0\tag{3.53}$$

By putting equation 3.53 in its equality form rather than its congruence form, we have

$$\sum_{i=1}^{n}a_i\,\underline{e_i} \;=\; \frac{(-1)^{n-1}}{a_1^{\,n-2}}\,\bigwedge_{i=2}^{n}\,(a_ie_1 - a_1e_i),\qquad a_1\neq 0\tag{3.54}$$
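Formula 3.54 is easy to test numerically for a particular dimension. A Python sketch for n = 4 (independent of the package), with hypothetical coefficients:

```python
# n = 4: the cobasis elements are co(e1)=e234, co(e2)=-e134, co(e3)=e124,
# co(e4)=-e123. Check sum(ai co(ei)) against the scaled wedge of formula 3.54.
from fractions import Fraction as F
from itertools import combinations

a1, a2, a3, a4 = 3, -2, 5, 7

def det3(m):
    r1, r2, r3 = m
    return (r1[0]*(r2[1]*r3[2] - r2[2]*r3[1])
            - r1[1]*(r2[0]*r3[2] - r2[2]*r3[0])
            + r1[2]*(r2[0]*r3[1] - r2[1]*r3[0]))

def wedge3(u, v, w):
    return {t: det3([[u[i] for i in t], [v[i] for i in t], [w[i] for i in t]])
            for t in combinations(range(4), 3)}

u2 = [a2, -a1, 0, 0]               # a2 e1 - a1 e2
u3 = [a3, 0, -a1, 0]               # a3 e1 - a1 e3
u4 = [a4, 0, 0, -a1]               # a4 e1 - a1 e4
rhs = {t: F((-1)**3, a1**2) * c for t, c in wedge3(u2, u3, u4).items()}

# sum(ai co(ei)) = a1 e234 - a2 e134 + a3 e124 - a4 e123, component-wise:
lhs = {(0,1,2): -a4, (0,1,3): a3, (0,2,3): -a2, (1,2,3): a1}
assert rhs == lhs
```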
The left and right sides of formula 3.54 are easily coded in GrassmannAlgebra as functions of the dimension n of the space.

L[n_] := (¯!n; a1^(n−2) ∑_{i=1}^{n} ai CobasisElement[ei])

This enables us to tabulate the formula for different dimensions. For example:
In each case we first declare a basis of the requisite dimension by entering ¯!n;, and then create a general (n−1)-element X with scalars ai using the GrassmannAlgebra function ComposeMElement.

ExteriorFactorize[X]
a1 (e1 + (a4 e4)/a1) ∧ (e2 − (a3 e4)/a1) ∧ (e3 + (a2 e4)/a1)

¯!5; X = ComposeMElement[4, a]
a1 e1∧e2∧e3∧e4 + a2 e1∧e2∧e3∧e5 + a3 e1∧e2∧e4∧e5 + a4 e1∧e3∧e4∧e5 + a5 e2∧e3∧e4∧e5
ExteriorFactorize[X]
a1 (e1 − (a5 e5)/a1) ∧ (e2 + (a4 e5)/a1) ∧ (e3 − (a3 e5)/a1) ∧ (e4 + (a2 e5)/a1)

¯!6; X = ComposeMElement[5, a]
a1 e1∧e2∧e3∧e4∧e5 + a2 e1∧e2∧e3∧e4∧e6 + a3 e1∧e2∧e3∧e5∧e6 + a4 e1∧e2∧e4∧e5∧e6 + a5 e1∧e3∧e4∧e5∧e6 + a6 e2∧e3∧e4∧e5∧e6

ExteriorFactorize[X]
a1 (e1 + (a6 e6)/a1) ∧ (e2 − (a5 e6)/a1) ∧ (e3 + (a4 e6)/a1) ∧ (e4 − (a3 e6)/a1) ∧ (e5 + (a2 e6)/a1)
¯A; ¯!4;
X = ComposeMElement[3, x]
x1 e1∧e2∧e3 + x2 e1∧e2∧e4 + x3 e1∧e3∧e4 + x4 e2∧e3∧e4
Y = ComposeMElement[3, y]
y1 e1∧e2∧e3 + y2 e1∧e2∧e4 + y3 e1∧e3∧e4 + y4 e2∧e3∧e4
We can obtain the 2-element common factor of these two 3-elements by applying ToCommonFactor to their regressive product.
Z = ToCommonFactor[X ∨ Y]
¯c ((−x2 y1 + x1 y2) e1∧e2 + (−x3 y1 + x1 y3) e1∧e3 + (−x3 y2 + x2 y3) e1∧e4 + (−x4 y1 + x1 y4) e2∧e3 + (−x4 y2 + x2 y4) e2∧e4 + (−x4 y3 + x3 y4) e3∧e4)
Applying ExteriorFactorize to this common factor shows that it is simple and can be factorized.
ExteriorFactorize[Z]
¯c (−x2 y1 + x1 y2) (e1 + (e3 (x4 y1 − x1 y4))/(−x2 y1 + x1 y2) + (e4 (x4 y2 − x2 y4))/(−x2 y1 + x1 y2)) ∧
   (e2 + (e3 (x3 y1 − x1 y3))/(x2 y1 − x1 y2) + (e4 (x3 y2 − x2 y3))/(x2 y1 − x1 y2))
Because this is a 2-element in a 4-space we can of course also prove its simplicity by expanding
and simplifying its exterior square to show that it is zero.
Expand[¯%[Z ∧ Z]]
0
¯!4; ExteriorFactorize[x∧y + u∧v]
u∧v + x∧y
¯!4; X = ComposeMElement[2, a]
a1 e1∧e2 + a2 e1∧e3 + a3 e1∧e4 + a4 e2∧e3 + a5 e2∧e4 + a6 e3∧e4
X1 = ExteriorFactorize[X]
If[{a3 a4 − a2 a5 + a1 a6 == 0}, a1 (e1 − (a4 e3)/a1 − (a5 e4)/a1) ∧ (e2 + (a2 e3)/a1 + (a3 e4)/a1)]
This Mathematica If function has syntax If[C,F], where C is a list of constraints on the scalar
symbols of the expression, and F is the factorization if the constraints are satisfied. Hence the
above may be read: if the simplicity condition is satisfied, then the factorization is as given, else
the element is not factorizable.
Instead of a list of constraints, we can also recast the list of constraints in predicate form by
applying And to the list. (In this example, where there is only one constraint, the effect is simply to
remove the braces from the constraint.)
X1 = X1 /. c_List :> And @@ c
If[a3 a4 − a2 a5 + a1 a6 == 0, a1 (e1 − (a4 e3)/a1 − (a5 e4)/a1) ∧ (e2 + (a2 e3)/a1 + (a3 e4)/a1)]
If we are able to assert the condition required, then the 2-element is indeed simple and the
factorization is valid. Substituting this condition into the predicate form of the If statement, yields
true for the predicate, hence the factorization is returned.
X1 /. a6 → (a2 a5 − a3 a4)/a1
a1 (e1 − (a4 e3)/a1 − (a5 e4)/a1) ∧ (e2 + (a2 e3)/a1 + (a3 e4)/a1)
¯!5; X = ComposeMElement[2, a]
a1 e1∧e2 + a2 e1∧e3 + a3 e1∧e4 + a4 e1∧e5 + a5 e2∧e3 + a6 e2∧e4 + a7 e2∧e5 + a8 e3∧e4 + a9 e3∧e5 + a10 e4∧e5
X1 = ExteriorFactorize[X]
If[{a3 a5 − a2 a6 + a1 a8 == 0, a4 a5 − a2 a7 + a1 a9 == 0, a4 a6 − a3 a7 + a1 a10 == 0},
   a1 (e1 − (a5 e3)/a1 − (a6 e4)/a1 − (a7 e5)/a1) ∧ (e2 + (a2 e3)/a1 + (a3 e4)/a1 + (a4 e5)/a1)]
In this case of a 2-element in a 5-space, we have a list of three constraints, which we can turn into a
predicate by applying And to the list.
X1 = X1 /. c_List :> And @@ c
If[a3 a5 − a2 a6 + a1 a8 == 0 && a4 a5 − a2 a7 + a1 a9 == 0 && a4 a6 − a3 a7 + a1 a10 == 0,
   a1 (e1 − (a5 e3)/a1 − (a6 e4)/a1 − (a7 e5)/a1) ∧ (e2 + (a2 e3)/a1 + (a3 e4)/a1 + (a4 e5)/a1)]

X1 /. {a8 → −(a3 a5 − a2 a6)/a1, a9 → −(a4 a5 − a2 a7)/a1, a10 → −(a4 a6 − a3 a7)/a1}
a1 (e1 − (a5 e3)/a1 − (a6 e4)/a1 − (a7 e5)/a1) ∧ (e2 + (a2 e3)/a1 + (a3 e4)/a1 + (a4 e5)/a1)
¯!4; X = 2 (x∧y) + b (y∧z) − 5 (x∧w) + 7 a (y∧w) − 4 (z∧w);
ExteriorFactorize[X]
If[{8 + 5 b == 0}, 5 (w − (2 y)/5) ∧ (x − (7 a y)/5 + (4 z)/5)]
Ì A 4-element in a 5-space
¯!5; X = ComposeMElement[4, a]
a1 e1∧e2∧e3∧e4 + a2 e1∧e2∧e3∧e5 + a3 e1∧e2∧e4∧e5 + a4 e1∧e3∧e4∧e5 + a5 e2∧e3∧e4∧e5
SimpleQ[X]
True
If SimpleQ is applied to an element which is not simple, False is returned.
SimpleQ[x∧y + u∧v]
False
If SimpleQ is applied to an element which may be simple conditional on the values taken by some of its scalar coefficients, the conditional result will be returned.
Ì A 2-element in a 4-space
¯!4; X = ComposeMElement[2, a]
a1 e1∧e2 + a2 e1∧e3 + a3 e1∧e4 + a4 e2∧e3 + a5 e2∧e4 + a6 e3∧e4
SimpleQ[X]
If[{a3 a4 − a2 a5 + a1 a6 == 0}, True]
If we are able to assert the condition required, that a3 a4 − a2 a5 + a1 a6 == 0, then the 2-element is indeed simple.
SimpleQ[X /. a6 → −(a3 a4 − a2 a5)/a1]
True
Ì A 2-element in a 5-space
¯!5; X = ComposeMElement[2, a]
a1 e1∧e2 + a2 e1∧e3 + a3 e1∧e4 + a4 e1∧e5 + a5 e2∧e3 + a6 e2∧e4 + a7 e2∧e5 + a8 e3∧e4 + a9 e3∧e5 + a10 e4∧e5
SimpleQ[X]
If[{a3 a5 − a2 a6 + a1 a8 == 0, a4 a5 − a2 a7 + a1 a9 == 0, a4 a6 − a3 a7 + a1 a10 == 0}, True]
In this case of a 2-element in a 5-space, we have three conditions on the coefficients to satisfy
before being able to assert that the element is simple.
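These three conditions are Plücker relations, and every decomposable 2-element satisfies them automatically. A Python sketch (independent of the package), with the coefficients a1…a10 matching the lexicographic ordering of the basis 2-elements above:

```python
# Build a decomposable 2-element u^v in 5-space and check that the three
# SimpleQ conditions (a3 a5 - a2 a6 + a1 a8, etc.) all vanish.
from itertools import combinations
import random

u = [random.randint(-9, 9) for _ in range(5)]
v = [random.randint(-9, 9) for _ in range(5)]
a = {(i, j): u[i]*v[j] - u[j]*v[i] for i, j in combinations(range(5), 2)}

# a1=a[0,1], a2=a[0,2], ..., a10=a[3,4] in the text's numbering.
c1 = a[0,3]*a[1,2] - a[0,2]*a[1,3] + a[0,1]*a[2,3]   # a3 a5 - a2 a6 + a1 a8
c2 = a[0,4]*a[1,2] - a[0,2]*a[1,4] + a[0,1]*a[2,4]   # a4 a5 - a2 a7 + a1 a9
c3 = a[0,4]*a[1,3] - a[0,3]*a[1,4] + a[0,1]*a[3,4]   # a4 a6 - a3 a7 + a1 a10
assert c1 == 0 and c2 == 0 and c3 == 0
```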
SimpleQ will also check m-elements expressed in terms of (independent) vector symbols.
¯!4; X = 2 (x∧y) + b (y∧z) − 5 (x∧w) + 7 a (y∧w) − 4 (z∧w);
SimpleQ[X]
If[{8 + 5 b == 0}, True]
The first formula to be developed (which forms a basis for much of the rest) is an expansion for the regressive product of an (n−1)-element with the exterior product of two arbitrary elements. We call this (and its dual) the Product Formula.
$$\Big(\underset{m}{a}\wedge\underset{k}{b}\Big)\vee\underset{n-1}{x} \;=\; \Big(\underset{m}{a}\vee\underset{n-1}{x}\Big)\wedge\underset{k}{b} \;+\; (-1)^m\,\underset{m}{a}\wedge\Big(\underset{k}{b}\vee\underset{n-1}{x}\Big)\tag{3.55}$$
To prove this, suppose initially that a and b are simple and can be expressed as a ã a1 Ô ! Ô am
m k m
and b ã b1 Ô ! Ô bk . Applying the Common Factor Theorem gives
k
a Ô b Ó x ã HHa1 Ô ! Ô am L Ô Hb1 Ô ! Ô bk LL Ó x ã
m k n-1 n-1
m
‚ H-1Li-1 Jai Ô x N Ó HHa1 Ô ! Ô ·i Ô ! Ô am L Ô Hb1 Ô ! Ô bk LL
n-1
i=1
k
+ ‚ H-1Lm+j-1 Jbj Ô x N Ó HHa1 Ô ! Ô am L Ô Hb1 Ô ! Ô ·j Ô ! Ô bk LL
n-1
j=1
Since the ai Ô x and bj Ô x are n-elements, we can rearrange the parentheses in the terms of the
n-1 n-1
sums by using the property of n-elements that they behave with regressive products just like scalars
behave with exterior products: A Ó B Ô C Ø KA Ó BO Ô C. So the right hand side becomes
n b c n b c
   Σ_{i=1}^{m} (-1)^(i-1) ((ai∧x_(n-1)) ∨ (a1∧⋯∧ǎi∧⋯∧am)) ∧ (b1∧⋯∧bk)

   + Σ_{j=1}^{k} (-1)^(m+j-1) (-1)^(m(k-1)) ((bj∧x_(n-1)) ∨ (b1∧⋯∧b̌j∧⋯∧bk)) ∧ (a1∧⋯∧am)
Reapplying the Common Factor Theorem in reverse enables us to condense the sums back to the
right-hand side of formula 3.55, which proves the result for simple elements.
This result may be extended in a straightforward manner to the case where α_m and β_k are not simple:
since a non-simple element may be expressed as a sum of simple terms, and the formula is valid
for each term, by addition it is valid for the sum.
We can calculate the dual of this formula by applying the GrassmannAlgebra function Dual.
(Note that we use ¯n instead of n in order to let the Dual function know of its special meaning as
the dimension of the space.)
Dual[ α_m∧β_k ∨ x_(¯n-1) == (α_m ∨ x_(¯n-1))∧β_k + (-1)^m α_m∧(β_k ∨ x_(¯n-1)) ]

α_m∨β_k ∧ x == (-1)^(-m+¯n) α_m∨(β_k∧x) + (α_m∧x)∨β_k

α_m∨β_k ∧ x  ==  (α_m∧x)∨β_k  +  (-1)^(¯n-m) α_m∨(β_k∧x)        (3.56)
If the 1-element x is replaced by the exterior product of two 1-elements, so that

α_m∨β_k ∧ (x1∧x2) == (α_m∨β_k ∧ x1)∧x2

then the right-hand side may be expanded by applying the above Product Formula twice to obtain:
Each successive application doubles the number of terms. We started with two terms on the right
hand side of the basic Product Formula. By applying it again we obtain a Product Formula with
four terms on the right hand side. The next application would give us eight terms as shown in the
Product Formula below.
α_m∨β_k ∧ (x1∧x2∧x3) ==

   (α_m∧x1)∨(β_k∧x2∧x3) - (α_m∧x2)∨(β_k∧x1∧x3) +
   (α_m∧x3)∨(β_k∧x1∧x2) + (α_m∧x1∧x2∧x3)∨β_k +
   (-1)^(n-m) ( α_m∨(β_k∧x1∧x2∧x3) + (α_m∧x1∧x2)∨(β_k∧x3) -
   (α_m∧x1∧x3)∨(β_k∧x2) + (α_m∧x2∧x3)∨(β_k∧x1) )        (3.58)
Because this derivation process is simply the repeated application of a formula, we can get
Mathematica to do it for us automatically.
DeriveProductFormulaOnce[F_, x_] :=
  GrassmannExpand[F ∧ x] /. (a_. * (u_ ∨ v_)) ∧ z_ :>
    a * ((u ∧ z) ∨ v) + (-1)^(¯n - RawGrade[u]) * a * (u ∨ (v ∧ z));
Each term is now transformed into two terms of the forms a*((u∧z)∨v) and
(-1)^(¯n-RawGrade[u])*a*(u∨(v∧z)). Here ¯n is the (symbolic) dimension of the underlying linear
space, and the function RawGrade computes the grade of an element without consideration of the
dimension of the currently declared space. Both these features enable the formula derivation to
apply to an element of any grade.
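The essential combinatorial effect of DeriveProductFormulaOnce — each application sends the new 1-element factor either to the α side or to the β side of every existing term — can be mimicked in Python. This is a structural sketch only (signs and the GrassmannAlgebra pattern machinery are ignored, and the names are mine):

```python
from functools import reduce

def expand_once(terms, x):
    # each term (span, cospan) stands for (alpha ^ span) v (beta ^ cospan);
    # the rewrite rule sends the new factor x to either side, doubling the terms
    out = []
    for span, cospan in terms:
        out.append((span + (x,), cospan))   # the (u ^ z) v v branch
        out.append((span, cospan + (x,)))   # the u v (v ^ z) branch
    return out

j = 4
factors = [f"x{i}" for i in range(1, j + 1)]
terms = reduce(expand_once, factors, [((), ())])
print(len(terms))  # 2**j = 16: one term per subset of the factors
# the terms of span-grade r number binomial(j, r), as in the general formula
by_grade = [sum(1 for s, c in terms if len(s) == r) for r in range(j + 1)]
print(by_grade)    # [1, 4, 6, 4, 1]
```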
DeriveProductFormula takes the left-hand side of the original Product Formula (A∨B)∧X,
and uses Mathematica's Fold function to repeatedly apply DeriveProductFormulaOnce to
the previous result, each time including a new 1-element factor of X (or of the form resulting from
applying ComposeSimpleForm to X).
DPFSimplify is a set of rules which simplify any products of the form (-1)^(a + b m + c ¯n) which arise
in the derivation. Here, a, b, and c may be odd or even integers. The symbol m may be symbolic;
the symbol ¯n is always symbolic.
† This code is explanatory only. You do not need to enter it to have DeriveProductFormula
work.
We can test out the code by seeing if we get the same results as the formulas derived by hand
above.
¯A; DeriveProductFormula[α_m∨β_k ∧ x]

α_m∨β_k ∧ x == (-1)^(-m+¯n) α_m∨(β_k∧x) + (α_m∧x)∨β_k
DeriveProductFormula[α_m∨β_k ∧ x_2]

α_m∨β_k ∧ (x1∧x2) ==
   α_m∨(β_k∧x1∧x2) + (-1)^(-m+¯n) ( -(α_m∧x1)∨(β_k∧x2) + (α_m∧x2)∨(β_k∧x1) ) + (α_m∧x1∧x2)∨β_k
DeriveProductFormula[α_m∨β_k ∧ x_3]

α_m∨β_k ∧ (x1∧x2∧x3) ==
   (α_m∧x1)∨(β_k∧x2∧x3) - (α_m∧x2)∨(β_k∧x1∧x3) + (α_m∧x3)∨(β_k∧x1∧x2) +
   (-1)^(-m+¯n) ( α_m∨(β_k∧x1∧x2∧x3) + (α_m∧x1∧x2)∨(β_k∧x3) -
   (α_m∧x1∧x3)∨(β_k∧x2) + (α_m∧x2∧x3)∨(β_k∧x1) ) + (α_m∧x1∧x2∧x3)∨β_k
DeriveProductFormula[α_m∨β_k ∧ x_4]

α_m∨β_k ∧ (x1∧x2∧x3∧x4) ==
   α_m∨(β_k∧x1∧x2∧x3∧x4) + (α_m∧x1∧x2)∨(β_k∧x3∧x4) - (α_m∧x1∧x3)∨(β_k∧x2∧x4) +
   (α_m∧x1∧x4)∨(β_k∧x2∧x3) + (α_m∧x2∧x3)∨(β_k∧x1∧x4) - (α_m∧x2∧x4)∨(β_k∧x1∧x3) +
   (α_m∧x3∧x4)∨(β_k∧x1∧x2) + (-1)^(-m+¯n) ( -(α_m∧x1)∨(β_k∧x2∧x3∧x4) +
   (α_m∧x2)∨(β_k∧x1∧x3∧x4) - (α_m∧x3)∨(β_k∧x1∧x2∧x4) +
   (α_m∧x4)∨(β_k∧x1∧x2∧x3) - (α_m∧x1∧x2∧x3)∨(β_k∧x4) + (α_m∧x1∧x2∧x4)∨(β_k∧x3) -
   (α_m∧x1∧x3∧x4)∨(β_k∧x2) + (α_m∧x2∧x3∧x4)∨(β_k∧x1) ) + (α_m∧x1∧x2∧x3∧x4)∨β_k
Although these outputs are not quite as easy to read as the manually derived formulas in the
previous section (because the outputs omit any brackets deemed unnecessary with the inbuilt
precedence of the operators), one can at least see in these cases that the formulas are equivalent.
F1 = DeriveProductFormula[α_m∨β_k ∧ x_4]

α_m∨β_k ∧ (x1∧x2∧x3∧x4) ==
   α_m∨(β_k∧x1∧x2∧x3∧x4) + (α_m∧x1∧x2)∨(β_k∧x3∧x4) - (α_m∧x1∧x3)∨(β_k∧x2∧x4) +
   (α_m∧x1∧x4)∨(β_k∧x2∧x3) + (α_m∧x2∧x3)∨(β_k∧x1∧x4) - (α_m∧x2∧x4)∨(β_k∧x1∧x3) +
   (α_m∧x3∧x4)∨(β_k∧x1∧x2) + (-1)^(-m+¯n) ( -(α_m∧x1)∨(β_k∧x2∧x3∧x4) +
   (α_m∧x2)∨(β_k∧x1∧x3∧x4) - (α_m∧x3)∨(β_k∧x1∧x2∧x4) +
   (α_m∧x4)∨(β_k∧x1∧x2∧x3) - (α_m∧x1∧x2∧x3)∨(β_k∧x4) + (α_m∧x1∧x2∧x4)∨(β_k∧x3) -
   (α_m∧x1∧x3∧x4)∨(β_k∧x2) + (α_m∧x2∧x3∧x4)∨(β_k∧x1) ) + (α_m∧x1∧x2∧x3∧x4)∨β_k
The first thing we note is that, apart possibly from the sign (-1)^(-m+¯n), the terms on the right-hand
side are all products of the form

(α_m ∧ x^i_(4-r)) ∨ (β_k ∧ x^i_r) ,   x_j == x^i_r ∧ x^i_(4-r)

This means we can compute them using the r-span and r-cospan of x_j. For j equal to 4, r will range
from 0 to 4. Thus we have
{(α_m∧x1∧x2∧x3∧x4)∨(β_k∧1)}

{(α_m∧x2∧x3∧x4)∨(β_k∧x1), (α_m∧(-(x1∧x3∧x4)))∨(β_k∧x2),
 (α_m∧x1∧x2∧x4)∨(β_k∧x3), (α_m∧(-(x1∧x2∧x3)))∨(β_k∧x4)}

{(α_m∧x3∧x4)∨(β_k∧x1∧x2), (α_m∧(-(x2∧x4)))∨(β_k∧x1∧x3), (α_m∧x2∧x3)∨(β_k∧x1∧x4),
 (α_m∧x1∧x4)∨(β_k∧x2∧x3), (α_m∧(-(x1∧x3)))∨(β_k∧x2∧x4), (α_m∧x1∧x2)∨(β_k∧x3∧x4)}

{(α_m∧x4)∨(β_k∧x1∧x2∧x3), (α_m∧(-x3))∨(β_k∧x1∧x2∧x4),
 (α_m∧x2)∨(β_k∧x1∧x3∧x4), (α_m∧(-x1))∨(β_k∧x2∧x3∧x4)}

{(α_m∧1)∨(β_k∧x1∧x2∧x3∧x4)}
But all these terms can be computed at once using the complete span and complete cospan of x.
{ {(α_m∧x1∧x2∧x3∧x4)∨(β_k∧1)},

  {(α_m∧x2∧x3∧x4)∨(β_k∧x1), (α_m∧(-(x1∧x3∧x4)))∨(β_k∧x2),
   (α_m∧x1∧x2∧x4)∨(β_k∧x3), (α_m∧(-(x1∧x2∧x3)))∨(β_k∧x4)},

  {(α_m∧x3∧x4)∨(β_k∧x1∧x2), (α_m∧(-(x2∧x4)))∨(β_k∧x1∧x3), (α_m∧x2∧x3)∨(β_k∧x1∧x4),
   (α_m∧x1∧x4)∨(β_k∧x2∧x3), (α_m∧(-(x1∧x3)))∨(β_k∧x2∧x4), (α_m∧x1∧x2)∨(β_k∧x3∧x4)},

  {(α_m∧x4)∨(β_k∧x1∧x2∧x3), (α_m∧(-x3))∨(β_k∧x1∧x2∧x4),
   (α_m∧x2)∨(β_k∧x1∧x3∧x4), (α_m∧(-x1))∨(β_k∧x2∧x3∧x4)},

  {(α_m∧1)∨(β_k∧x1∧x2∧x3∧x4)} }
Now we need to multiply the terms by a scalar factor which is 1 when the grade of the span is
even, and (-1)^(¯n-m) when it is odd. Since the lists of terms in the collection above are arranged
according to the grades of their spans, we then need a list whose elements alternate between 1 and
(-1)^(¯n-m). To construct this list, we first define a function ¯r which returns an alternating list of
0s and 1s of the correct length j + 1, multiply this list by ¯n - m, and then use it as a power.
Mathematica's inbuilt Listable attributes of Times and Power will again return the result we
want.
(-1)^(¯r[4] (¯n-m))

{1, (-1)^(-m+¯n), 1, (-1)^(-m+¯n), 1}
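This sign construction transcribes directly into Python (the names rho and sign_list are mine; with numeric n and m the symbolic powers collapse to ±1):

```python
def rho(j):
    # exponents 0, 1, 0, 1, ... of length j + 1 (the function called r-bar above)
    return [r % 2 for r in range(j + 1)]

def sign_list(j, n, m):
    # 1 for even span grade, (-1)**(n - m) for odd span grade
    return [(-1) ** (e * (n - m)) for e in rho(j)]

print(sign_list(4, n=5, m=2))  # [1, -1, 1, -1, 1]
print(sign_list(4, n=6, m=2))  # n - m even: [1, 1, 1, 1, 1]
```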
{ {(α_m∧x1∧x2∧x3∧x4)∨(β_k∧1)},

  {(-1)^(-m+¯n) (α_m∧x2∧x3∧x4)∨(β_k∧x1), (-1)^(-m+¯n) (α_m∧(-(x1∧x3∧x4)))∨(β_k∧x2),
   (-1)^(-m+¯n) (α_m∧x1∧x2∧x4)∨(β_k∧x3), (-1)^(-m+¯n) (α_m∧(-(x1∧x2∧x3)))∨(β_k∧x4)},

  {(α_m∧x3∧x4)∨(β_k∧x1∧x2), (α_m∧(-(x2∧x4)))∨(β_k∧x1∧x3), (α_m∧x2∧x3)∨(β_k∧x1∧x4),
   (α_m∧x1∧x4)∨(β_k∧x2∧x3), (α_m∧(-(x1∧x3)))∨(β_k∧x2∧x4), (α_m∧x1∧x2)∨(β_k∧x3∧x4)},

  {(-1)^(-m+¯n) (α_m∧x4)∨(β_k∧x1∧x2∧x3), (-1)^(-m+¯n) (α_m∧(-x3))∨(β_k∧x1∧x2∧x4),
   (-1)^(-m+¯n) (α_m∧x2)∨(β_k∧x1∧x3∧x4), (-1)^(-m+¯n) (α_m∧(-x1))∨(β_k∧x2∧x3∧x4)},

  {(α_m∧1)∨(β_k∧x1∧x2∧x3∧x4)} }
Flattening this list and then summing the terms gives the final result.
F2 = α_m∨β_k ∧ (x1∧x2∧x3∧x4) ==

   (α_m∧1)∨(β_k∧x1∧x2∧x3∧x4) + (-1)^(-m+¯n) (α_m∧(-x1))∨(β_k∧x2∧x3∧x4) +
   (-1)^(-m+¯n) (α_m∧x2)∨(β_k∧x1∧x3∧x4) + (-1)^(-m+¯n) (α_m∧(-x3))∨(β_k∧x1∧x2∧x4) +
   (-1)^(-m+¯n) (α_m∧x4)∨(β_k∧x1∧x2∧x3) + (α_m∧(-(x1∧x3)))∨(β_k∧x2∧x4) +
   (α_m∧(-(x2∧x4)))∨(β_k∧x1∧x3) + (-1)^(-m+¯n) (α_m∧(-(x1∧x2∧x3)))∨(β_k∧x4) +
   (-1)^(-m+¯n) (α_m∧(-(x1∧x3∧x4)))∨(β_k∧x2) + (α_m∧x1∧x2)∨(β_k∧x3∧x4) +
   (α_m∧x1∧x4)∨(β_k∧x2∧x3) + (α_m∧x2∧x3)∨(β_k∧x1∧x4) +
   (α_m∧x3∧x4)∨(β_k∧x1∧x2) + (-1)^(-m+¯n) (α_m∧x1∧x2∧x4)∨(β_k∧x3) +
   (-1)^(-m+¯n) (α_m∧x2∧x3∧x4)∨(β_k∧x1) + (α_m∧x1∧x2∧x3∧x4)∨(β_k∧1)
Applying GrassmannExpandAndSimplify will simplify the signs and allow us to confirm the
result is identical to that produced by DeriveProductFormula.
To summarize the results above: we can take the Product Formula derived in the previous section
and rewrite it in its computational form using the notions of complete span and complete cospan.
The computational form relies on the fact that in GrassmannAlgebra the exterior and regressive
products have an inbuilt Listable attribute. (This attribute means that a product of lists is
automatically converted to a list of products.)
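The Listable behavior can be imitated in Python; this sketch (names mine) threads a scalar operation over nested list arguments the way Mathematica threads Times and Power over lists:

```python
def listable(op):
    # wrap a scalar operation so a product of lists becomes a list of products
    def wrapped(a, b):
        if isinstance(a, list) and isinstance(b, list):
            return [wrapped(x, y) for x, y in zip(a, b)]
        if isinstance(a, list):
            return [wrapped(x, b) for x in a]
        if isinstance(b, list):
            return [wrapped(a, y) for y in b]
        return op(a, b)
    return wrapped

times = listable(lambda x, y: x * y)
print(times([1, 2, 3], [4, 5, 6]))  # [4, 10, 18]
print(times(2, [4, 5, 6]))          # [8, 10, 12]
```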
α_m∨β_k ∧ x_j ==
   ¯S[ Flatten[ (-1)^(¯r[j] (¯n-m)) (α_m ∧ ¯"[x_j]) ∨ (β_k ∧ ¯"¯[x_j]) ] ]        (3.59)

In this formula, ¯"[x_j] and ¯"¯[x_j] are the complete span and complete cospan of x_j. The sign
function ¯r just creates a list of exponents alternating between 0 and 1, so the scalar factor
alternates between 1 and (-1)^(-m+¯n). For example, for j equal to 8:

(-1)^(¯r[8] (¯n-m))

{1, (-1)^(-m+¯n), 1, (-1)^(-m+¯n), 1, (-1)^(-m+¯n), 1, (-1)^(-m+¯n), 1}
Comparing the results of DeriveProductFormula with this computational form verifies their
equivalence for values of j from 1 to 10.

{True, True, True, True, True, True, True, True, True, True}
F1 = GeneralProductFormula[α_m∨β_k ∧ (x∧y∧z)]

α_m∨β_k ∧ x∧y∧z ==
   (-1)^(-m+¯n) (α_m∧1)∨(β_k∧x∧y∧z) + (α_m∧x)∨(β_k∧y∧z) +
   (α_m∧(-y))∨(β_k∧x∧z) + (α_m∧z)∨(β_k∧x∧y) + (-1)^(-m+¯n) (α_m∧(-(x∧z)))∨(β_k∧y) +
   (-1)^(-m+¯n) (α_m∧x∧y)∨(β_k∧z) + (-1)^(-m+¯n) (α_m∧y∧z)∨(β_k∧x) + (α_m∧x∧y∧z)∨(β_k∧1)
We can express the 3-element as a product of different 1-element factors by adding, to any given
factor, scalar multiples of the other factors. For example
F2 = GeneralProductFormula[α_m∨β_k ∧ (x∧y∧(z + a x + b y))]

α_m∨β_k ∧ x∧y∧(a x + b y + z) ==
   (-1)^(-m+¯n) (α_m∧1)∨(β_k∧x∧y∧(a x + b y + z)) + (α_m∧x)∨(β_k∧y∧(a x + b y + z)) +
   (α_m∧(-y))∨(β_k∧x∧(a x + b y + z)) + (α_m∧(a x + b y + z))∨(β_k∧x∧y) +
   (-1)^(-m+¯n) (α_m∧(-(x∧(a x + b y + z))))∨(β_k∧y) +
   (-1)^(-m+¯n) (α_m∧x∧y)∨(β_k∧(a x + b y + z)) +
   (-1)^(-m+¯n) (α_m∧y∧(a x + b y + z))∨(β_k∧x) + (α_m∧x∧y∧(a x + b y + z))∨(β_k∧1)
¯%[F1 == F2]

True
Thus we can write the General Product formula in the alternative form
α_m∨β_k ∧ x_j == ¯S[ Flatten[ (-1)^(¯r[j] (¯n-m))
   (-1)^(¯rr[j]) ¯R[ (α_m ∧ ¯"¯[x_j]) ∨ (β_k ∧ ¯"[x_j]) ] ] ]        (3.60)
Just as we encapsulated the first computable formula, we can similarly encapsulate this alternative
one.
Again comparing the results of DeriveProductFormula with this alternative formula verifies
their equivalence for values of j from 1 to 10.
8True, True, True, True, True, True, True, True, True, True<
x_j == ¯S[ ⋯ ]
If the element to be decomposed is a 1-element, that is j = 1, there are just two terms, reducing the
decomposition formula to:

x == ( (-1)^(m+¯n) α_m∨(β_(¯n-m)∧x) + (α_m∧x)∨β_(¯n-m) ) / (α_m∨β_(¯n-m))

To make this a little more readable we add some parentheses and absorb the sign by interchanging
the order of the factors x and β_(¯n-m).

x == (α_m∨(x∧β_(¯n-m))) / (α_m∨β_(¯n-m)) + ((α_m∧x)∨β_(¯n-m)) / (α_m∨β_(¯n-m))        (3.62)
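In the lowest-dimensional case, m = 1 and ¯n = 2, this decomposition is just Cramer's rule for resolving a vector along two independent directions: the quotients of 2-elements become ratios of 2×2 determinants. A Python sketch of that special case (my own coordinate interpretation, not GrassmannAlgebra code):

```python
from fractions import Fraction

def det2(u, v):
    # the single coefficient of the 2-element u ∧ v in a 2-space
    return u[0] * v[1] - u[1] * v[0]

def decompose(x, a, b):
    # split x into a component congruent to a and one congruent to b:
    # the m = 1, n = 2 case of the decomposition formula
    d = det2(a, b)
    c_a = Fraction(det2(x, b), d)   # coefficient of a
    c_b = Fraction(det2(a, x), d)   # coefficient of b
    return c_a, c_b

a, b = (1, 1), (1, -2)
x = (5, -1)
ca, cb = decompose(x, a, b)
print(ca, cb)  # x == ca * a + cb * b
```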
F1 = DeriveProductFormula[α_m∨β_k ∧ x]

α_m∨β_k ∧ x == (-1)^(-m+¯n) α_m∨(β_k∧x) + (α_m∧x)∨β_k
Dual[F1]

α_m∧β_k ∨ x_(¯n-1) == (-1)^m α_m∧(β_k∨x_(¯n-1)) + (α_m∨x_(¯n-1))∧β_k
F2 = DeriveProductFormula[α_m∨β_k ∧ x_2]

α_m∨β_k ∧ (x1∧x2) ==
   α_m∨(β_k∧x1∧x2) + (-1)^(-m+¯n) ( -(α_m∧x1)∨(β_k∧x2) + (α_m∧x2)∨(β_k∧x1) ) + (α_m∧x1∧x2)∨β_k

Dual[F2]

α_m∧β_k ∨ x1∨x2 == α_m∧(β_k∨x1∨x2) +
   (-1)^m ( -(α_m∨x1)∧(β_k∨x2) + (α_m∨x2)∧(β_k∨x1) ) +
   (α_m∨x1∨x2)∧β_k        (each xi here of grade ¯n-1)
F3 = DeriveProductFormula[α_m∨β_k ∧ x_3]

α_m∨β_k ∧ (x1∧x2∧x3) ==
   (α_m∧x1)∨(β_k∧x2∧x3) - (α_m∧x2)∨(β_k∧x1∧x3) + (α_m∧x3)∨(β_k∧x1∧x2) +
   (-1)^(-m+¯n) ( α_m∨(β_k∧x1∧x2∧x3) + (α_m∧x1∧x2)∨(β_k∧x3) -
   (α_m∧x1∧x3)∨(β_k∧x2) + (α_m∧x2∧x3)∨(β_k∧x1) ) + (α_m∧x1∧x2∧x3)∨β_k
Dual[F3]

α_m∧β_k ∨ x1∨x2∨x3 ==
   (α_m∨x1)∧(β_k∨x2∨x3) - (α_m∨x2)∧(β_k∨x1∨x3) +
   (α_m∨x3)∧(β_k∨x1∨x2) + (-1)^m ( α_m∧(β_k∨x1∨x2∨x3) +
   (α_m∨x1∨x2)∧(β_k∨x3) - (α_m∨x1∨x3)∧(β_k∨x2) +
   (α_m∨x2∨x3)∧(β_k∨x1) ) + (α_m∨x1∨x2∨x3)∧β_k        (each xi of grade ¯n-1)
α_m∨β_k ∧ x_p == Σ_{r=0}^{p} (-1)^(r(¯n-m)) Σ_{i=1}^{n} (α_m ∧ x^i_(p-r)) ∨ (β_k ∧ x^i_r)        (3.63)

x_p == x^1_r ∧ x^1_(p-r) == x^2_r ∧ x^2_(p-r) == ⋯ == x^n_r ∧ x^n_(p-r) ,   n == C(p, r)
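The bookkeeping behind this formula — enumerating the C(p, r) complementary factorizations of x_p together with the permutation sign that restores the original factor order — is easy to check numerically. A Python sketch (names mine; the printed signs for p = 4, r = 2 match the signed 2-element spans listed earlier in this section):

```python
from itertools import combinations

def perm_sign(seq):
    # sign of the permutation given by seq, via its inversion count
    inv = sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
              if seq[i] > seq[j])
    return -1 if inv % 2 else 1

def spans(p, r):
    # r-grade parts (attached to beta) and (p-r)-grade parts (attached to alpha),
    # with the sign making cospan ∧ span equal to x1 ∧ ... ∧ xp
    idx = range(1, p + 1)
    out = []
    for cospan in combinations(idx, r):
        span = tuple(i for i in idx if i not in cospan)
        out.append((perm_sign(cospan + span), span, cospan))
    return out

p = 4
counts = [len(spans(p, r)) for r in range(p + 1)]
print(counts)        # [1, 4, 6, 4, 1]: C(p, r) factorizations for each r
print(sum(counts))   # 2**p = 16 terms in total
print([s for s, sp, co in spans(4, 2)])  # [1, -1, 1, 1, -1, 1]
```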
3.10 Summary
This chapter has introduced the regressive product as a true dual to the exterior product. This
means that to every theorem T involving exterior and regressive products there corresponds a dual
theorem Dual[T] such that T == Dual[Dual[T]].
But although the regressive product axioms assert that the regressive product of an m-element with
a k-element is an (m+k-n)-element (where n is the dimension of the underlying linear space), the
axiom sets for the exterior and regressive products alone do not provide a mechanism for deriving
such an element explicitly. A further explicit axiom involving both exterior and regressive products
is necessary. We called this axiom the Common Factor Axiom and motivated it by a combination
of algebraic and geometric argument. If the exterior product is viewed as a sort of 'union' of
independent elements, the regressive product may be viewed as a sort of 'intersection'. Because
Grassmann considered only Euclidean spaces, used the same notation for both exterior and
regressive products, and equated scalars and pseudo-scalars, the Common Factor Axiom was
effectively hidden in his notation.
The Common Factor Axiom was then extended to prove one of the most important formulae in the
Grassmann algebra, the Common Factor Theorem. This theorem enables the regressive product of
any two arbitrary elements of the algebra to be computed in an effective manner. It will be shown
in Chapter 5: The Complement that if the underlying linear space is endowed with a metric, then
the result is specific, and depends on the metric for its precise value. Otherwise, if there is no
metric, the element is specific only up to congruence (that is, up to an arbitrary scalar factor). It
was then shown how the Common Factor Theorem could be used in the factorization of simple
elements.
Finally, it was shown how the Common Factor Theorem leads to a suite of formulas called Product
Formulas. These formulas expand expressions involving an exterior product of an element with a
regressive product of elements, or a regressive product of an element with an exterior product of
elements. In Chapter 6 we will show how these lead naturally to formulas where the regressive
products are replaced by interior products. These interior product forms of the product formulas
will find application throughout the rest of the book.
The next chapter is an interlude in the development of the algebraic fundamentals. In it we begin to
explore one of Grassmann algebra's most enticing interpretations: geometry. But at this stage we
will only be discussing non-metric geometry. Chapters 5 and 6 to follow will develop the algebra's
metric concepts, and thus complete the fundamentals required for subsequent applications and
geometric interpretations in metric space.
4 Geometric Interpretations
4.1 Introduction
In Chapter 2, the exterior product operation was introduced onto the elements of a linear space L1,
enabling the construction of a series of new linear spaces Lm possessing new types of elements. In
this chapter, the elements of L1 will be interpreted, some as vectors, some as points. This will be
done by singling out one particular element of L1 and conferring upon it the interpretation of origin
point. All the other elements of the linear space then divide into two categories. Those that involve
the origin point will be called (weighted) points, and those that do not will be called vectors. As
this distinction is developed in succeeding sections it will be seen to be both illuminating and
consistent with accepted notions. Vectors and points will be called geometric interpretations of the
elements of L1.
Some of the more important consequences, however, of the distinction between vectors and points
arise from the distinctions thereby generated in the higher grade spaces Lm. It will be shown that a
simple element of Lm takes on two interpretations. The first, that of a multivector (or m-vector), is
when the m-element can be expressed in terms of vectors alone. The second, that of a bound
multivector (or bound m-vector), is when the m-element requires both points and vectors to express
it. These simple interpreted elements will be found useful for defining geometric entities such as
lines and planes and their higher dimensional analogues known as multiplanes (or m-planes).
Unions and intersections of multiplanes may then be calculated straightforwardly by using the
bound multivectors which define them. A multivector may be visualized as a 'free' entity with no
location. A bound multivector may be visualized as 'bound' through a location in space.
It is not only simple interpreted elements which will be found useful in applications, however. In
Chapter 7: Exploring Screw Algebra and Chapter 8: Exploring Mechanics, a basis for a theory of
mechanics is developed whose principal quantities (for example, systems of forces, momentum of
a system of particles, velocity of a rigid body) may be represented by a general interpreted
2-element, that is, by the sum of a bound vector and a bivector.
In the literature of the nineteenth century, wherever vectors and points were considered together,
vectors were introduced as point differences. When it is required to designate physical quantities it
is not satisfactory that all vectors should arise as the differences of points. In later literature, this
problem appeared to be overcome by designating points by their position vectors alone, making
vectors the fundamental entities [Gibbs 1886]. This approach is not satisfactory either, since by
excluding points much of the power of the calculus for dealing with free and located entities
together is excluded. In this book we do not require that vectors be defined in terms of points, but
rather, propose a difference of interpretation between the origin element and those elements not
involving the origin. This approach permits the existence of points and vectors together without the
vectors necessarily arising as point differences.
In this chapter, as in the preceding chapters, L and the spaces L do not yet have a metric. That is,
1 m
there is no way of calculating a measure or magnitude associated with an element. The
interpretation discussed therefore may also be supposed non-metric. In the next chapter a metric
will be introduced onto the uninterpreted spaces and the consequences of this for the interpreted
elements developed.
In summary then, it is the aim of this chapter to set out the distinct non-metric geometric
interpretations of m-elements brought about by the interpretation of one specific element of L as an
1
origin point.
Vectors
Ï Depicting vectors
The most common current geometric interpretation for an element of a linear space is that of a
vector. We suppose in this chapter that the linear space does not have a metric (that is, we cannot
calculate magnitudes). Such a vector has the geometric properties of direction and sense (but no
location and no magnitude), and will be graphically depicted (in the usual way) by a directed and
sensed line segment thus:
This depiction is unsatisfactory in that the line segment has a definite location and length whilst the
vector it is depicting is supposed to possess neither of these properties. One way perhaps to depict
an unlocated entity is to show it in many locations.
Another way to emphasize that a vector has no location is show it on a dynamic graphic. If you are
viewing this notebook live with the GrassmannAlgebra package loaded, you can use your mouse
to manipulate the vector below to reinforce that any of the vector's locations that retain its direction
and sense are equally satisfactory depictions for it. This dynamic depiction is a somewhat more
faithful way of depicting something without locating it. But it is still not fully satisfactory, since we
are depicting the property of no-location by using any location.
A linear space, all of whose elements are interpreted as vectors, will be called a vector space.
The geometric interpretation of the addition operation of a vector space is the triangle (or
parallelogram) rule. To effect a sum of two vectors using the triangle rule in this 'unlocated' view
of vectors, you would slide them (always in a direction parallel to themselves) from where you had
each of them conveniently docked, until their tails touched, make the sum in the standard textbook
manner, then park the result anywhere you please.
† From this point onwards we will always show vectors docked in a visually convenient location,
often one that is suggestive of the operation we wish to perform. Of course as discussed above,
this does not mean it is actually located there.
Ï Comparing vectors
In a metric space we can define the magnitude of a vector and hence compare vectors by
comparing their magnitudes even if they are not in the same direction. In spaces where no metric
has been imposed, as are the spaces we are discussing in this chapter, we cannot define the
magnitude of a vector, and hence we cannot compare it with another vector in a different direction.
However, we can compare vectors in the same direction. Vectors in the same direction are
congruent. That is, one is a scalar multiple (not zero) of the other. This scalar multiple is called the
congruence factor.
Thus vectors a x and b x are congruent with congruence factor a/b or b/a. Most often it is not the
magnitude of the congruence factor that is important, but its sign. If it is positive, the vectors may
be said to have the same orientation. If it is negative the vectors may be said to have an opposite
orientation. Orientation is thus a relative concept. It applies in the same way to elements of any
grade. For the special case of vectors, however, the term sense is often used synonymously with
orientation.
(Figure: the congruent vectors x, -x and 2x.)
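The congruence test is simple arithmetic. A Python sketch (my own helper, working on coordinate tuples; a None result means the vectors are not scalar multiples of each other):

```python
from fractions import Fraction

def congruence_factor(u, v):
    # return the scalar c with u == c v, or None if u and v are not congruent
    pairs = [(p, q) for p, q in zip(u, v) if p != 0 or q != 0]
    if not pairs or any(q == 0 for p, q in pairs):
        return None
    c = Fraction(pairs[0][0], pairs[0][1])
    return c if all(Fraction(p, q) == c for p, q in pairs) else None

x = (2, -4, 6)
print(congruence_factor((1, -2, 3), x))   # 1/2: positive, same orientation
print(congruence_factor((-2, 4, -6), x))  # -1: negative, opposite orientation
print(congruence_factor((1, 0, 0), x))    # None: not congruent
```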
Points
Ï The origin
In order to describe position in space it is necessary to have a reference point. This point is usually
called the origin.
Rather than the standard technique of implicitly assuming the origin and working only with vectors
to describe position, we find it important for later applications to augment the vector space with the
origin as a new element to create a new linear space with one more element in its basis. For
reasons which will appear later, such a linear space will be called a bound vector space.
The only difference between the origin element and the vector elements of the linear space is their
interpretation. The origin element is interpreted as a point. We will denote it in GrassmannAlgebra
by ¯", which you can access from the palette, or type as a five-star alias followed by a
double-struck capital O (Esc *5 Esc, then Esc dsO Esc).
¯!
The bound vector space in addition to its vectors and its origin now possesses a new set of
elements requiring interpretation: those formed from the sum of the origin and a vector.
P == ¯" + x
It is these elements that will be used to describe position and that we will call points. The vector x
is called the position vector of the point P. A position vector is just like any other vector and is
therefore, of course, located nowhere (even though we show it in a conveniently suggestive
position!).
It follows immediately from this definition of a point that the difference of two points is a vector:
P - Q == (¯" + p) - (¯" + q) == p - q == x
Remember that the bound vector space we are discussing does not yet have a metric. That is, the
distance between two points (the magnitude of the vector equal to the point difference) is not
meaningful. However, the relative distances between points on the same line can be measured by
means of their intensities.
Q == P + y == (¯" + x) + y == ¯" + (x + y)

(Figure: the point P with position vector x, the vector y, and the point Q with position vector x + y.)
and so a vector may be viewed as a carrier of points. That is, the addition of a vector to a point
carries or transforms it to another point. (See the historical note below).
The sum of two points is not quite another point. Rather, it is a point with weight 2, situated
mid-way between the two points.

P1 + P2 == (¯" + x1) + (¯" + x2)
        == 2 ¯" + (x1 + x2) == 2 (¯" + (1/2)(x1 + x2)) == 2 Pc
Wherever pictorially feasible we will show weighted points with their weights attached to their
names, and/or a size or colour change to distinguish them from the 'pure' points on the same
graphic.
The sum of two points is a point of weight 2 located mid-way between them
Similarly, the sum of n points is a point with weight n, situated at the centre of mass (centre of
gravity) of the points.
A scalar multiple of a point (of the form m P or m∧P) will be called a weighted point. Weighted
points are summed just like a set of point masses is deemed equivalent to their total mass located at
their centre of mass. For example, the sum of two weighted points with weights (masses) m1 and
m2 is equal to the weighted point with weight (m1 + m2) located on the line between them. The
location is the point Pc about which the 'mass moments' m1 l1 and m2 l2 are equal.
(Figure: weighted points m1 P1 and m2 P2 at distances l1 and l2 from the sum (m1 + m2) Pc, with m1 l1 == m2 l2.)
(Σ mi) Pc == Σ mi Pi
We can also easily express this equation in terms of the position vectors of the weighted points.

Σ mi Pi == Σ mi (¯" + xi) == (Σ mi) ( ¯" + (Σ mi xi)/(Σ mi) )

Thus the sum of a number of mass-weighted points Σ mi Pi is equivalent to the centre of gravity
¯" + (Σ mi xi)/(Σ mi) weighted by the total mass Σ mi. A numerical example is included later in this section.
As will be seen in Section 4.4 below, a weighted point may also be viewed as a bound scalar.
Ï Historical Note
Sir William Rowan Hamilton in his Lectures on Quaternions [Hamilton 1853] was the first
to introduce the notion of vector as 'carrier' of points.
... I regard the symbol B-A as denoting "the step from A to B": namely, that step by making which,
from the given point A, we should reach or arrive at the sought point B; and so determine, generate,
mark or construct that point. This step, (which we always suppose to be a straight line) may also in
my opinion be properly called a vector; or more fully, it may be called "the vector of the point B
from the point A": because it may be considered as having for its office, function, work, task or
business, to transport or carry (in Latin vehere) a moveable point, from the given or initial position
A, to the sought or final position B.
¯"2

{¯#, e1, e2}

(Figure: declaring a basis for the plane — the origin ¯# with basis vectors e1 and e2.)
We do not depict the basis vectors at right angles because the space does not yet have a metric, and
orthogonality is a metric concept.
As always, you can confirm your currently declared basis by checking the status pane at the top of
the palette.
We may often precede a calculation with one of these DeclareBasis aliases followed by a semi-
colon. This accomplishes the declaration of the basis but for brevity suppresses the confirming
output. For example, below we declare a bound 3-space, and then compose a palette of the basis of
the algebra constructed on it.
¯"3 ; BasisPalette

Basis Palette
L0:  1
L1:  ¯!,  e1,  e2,  e3
L2:  ¯!∧e1,  ¯!∧e2,  ¯!∧e3,  e1∧e2,  e1∧e3,  e2∧e3
L3:  ¯!∧e1∧e2,  ¯!∧e1∧e3,  ¯!∧e2∧e3,  e1∧e2∧e3
L4:  ¯!∧e1∧e2∧e3
Of course, you can declare your own bound vector space basis if you wish. For example if you
want the vector basis elements to be i, j, and k, you could enter
DeclareBasis[{i, j, k, ¯"}]

{¯#, i, j, k}

(DeclareBasis will always rearrange the ordering to make the origin ¯" come first).
Ï Composing vectors
Once you have declared a basis for your bound vector space, say ¯"3 ,
¯"3
8¯#, e1 , e2 , e3 <
you can quickly compose any vectors in this basis with ComposeVector or its alias ¯&Ñ. The
placeholder Ñ is used for entering the symbol upon which you want the coefficients to be based.
Here we choose a.
Va = ¯&a
a1 e1 + a2 e2 + a3 e3
If you enter ¯&Ñ, you get an expression with placeholders in which you can enter your own
coefficients just by clicking on any placeholder, and tabbing through them.
¯&Ñ
Ñ e1 + Ñ e2 + Ñ e3
Note however that in this case if you want any symbolic coefficients you enter to be recognized as
scalar symbols, you would need to ensure this yourself.
◆ Composing points
Points are composed in the same way, via the analogous point-composition function and its alias (the name ComposePoint is assumed here, by analogy with ComposeVector):
Pb = ComposePoint[b]
ℴ + b₁ e₁ + b₂ e₂ + b₃ e₃
ComposePoint[□]
ℴ + □ e₁ + □ e₂ + □ e₃
Now you can use the points and vectors you have composed in expressions. For example, here we add Pb and 2 Va, then use GrassmannSimplify (or its palette alias) to collect their coefficients.
Pb + 2 Va // GrassmannSimplify
ℴ + (2 a₁ + b₁) e₁ + (2 a₂ + b₂) e₂ + (2 a₃ + b₃) e₃
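The bookkeeping that GrassmannSimplify performs here can be mimicked numerically. A minimal sketch in plain Python (not the GrassmannAlgebra package), representing a 1-element of the bound 3-space by its coordinates on the assumed basis {ℴ, e₁, e₂, e₃}: a vector has origin coefficient 0, a point has origin coefficient 1.

```python
# Coordinates on the basis {origin, e1, e2, e3}; the first entry is the
# coefficient of the origin: 0 for a vector, 1 for a (pure) point.
def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(k, u):
    return tuple(k * a for a in u)

Va = (0, 1, 2, 3)             # the vector  e1 + 2 e2 + 3 e3
Pb = (1, 4, 5, 6)             # the point   origin + 4 e1 + 5 e2 + 6 e3
print(add(Pb, scale(2, Va)))  # (1, 6, 9, 12) -- still a point of weight 1
```

Adding a vector to a point leaves the origin coefficient untouched, which is why a point plus a vector is again a point.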
𝔼₃
{ℴ, e₁, e₂, e₃}
M₁ = 2 P₁;  P₁ = ℴ + e₁ + 3 e₂ − 4 e₃;
M₂ = 4 P₂;  P₂ = ℴ + 2 e₁ − e₂ − 2 e₃;
M₃ = 7 P₃;  P₃ = ℴ − 5 e₁ + 3 e₂ − 6 e₃;
M₄ = 5 P₄;  P₄ = ℴ + 4 e₁ + 2 e₂ − 9 e₃;
M = ∑ᵢ₌₁⁴ Mᵢ
5 (ℴ + 4 e₁ + 2 e₂ − 9 e₃) + 7 (ℴ − 5 e₁ + 3 e₂ − 6 e₃) +
2 (ℴ + e₁ + 3 e₂ − 4 e₃) + 4 (ℴ + 2 e₁ − e₂ − 2 e₃)
Simplifying this gives a weighted point with weight 18, the scalar attached to the origin.
M = GrassmannExpandAndSimplify[M]
18 ℴ − 5 e₁ + 33 e₂ − 103 e₃
To take the weight out as a factor, that is, to express the result in the form mass × point, we can use ToWeightedPointForm.
M = ToWeightedPointForm[M]
18 (ℴ − (5 e₁)/18 + (11 e₂)/6 − (103 e₃)/18)
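This weighted-point computation is easy to check directly. A minimal sketch in plain Python (coordinates on the assumed basis {ℴ, e₁, e₂, e₃}, not the GrassmannAlgebra package): sum the mass-weighted coordinates, then divide the vector part by the weight to recover the mass × point form.

```python
from fractions import Fraction

# Points as coordinates on the basis {origin, e1, e2, e3}.
P1 = (1, 1, 3, -4); P2 = (1, 2, -1, -2)
P3 = (1, -5, 3, -6); P4 = (1, 4, 2, -9)
masses = (2, 4, 7, 5)

# M = 2 P1 + 4 P2 + 7 P3 + 5 P4
M = [sum(m * p[i] for m, p in zip(masses, (P1, P2, P3, P4))) for i in range(4)]
print(M)                # [18, -5, 33, -103]: 18 origin - 5 e1 + 33 e2 - 103 e3

# mass x point form: divide the vector part by the weight (origin coefficient)
weight = M[0]
centre = [Fraction(c, weight) for c in M[1:]]
print(weight, centre)   # 18 [Fraction(-5, 18), Fraction(11, 6), Fraction(-103, 18)]
```

The weight 18 is the total mass, and the fractions are exactly the coordinates of the point Pc in the text.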
[Figure: the weighted points 2 P₁, 4 P₂, 7 P₃ and 5 P₄, together with their sum, the weighted point 18 Pc]
Thus the total mass is 18, situated at the point Pc ≡ ℴ − (5 e₁)/18 + (11 e₂)/6 − (103 e₃)/18. If you are reading this notebook in Mathematica, you can rotate this graphic, and check the positions of the points by hovering your mouse over their symbols.
◆ A note on terminology
The term bound as in the bound vector P∧x indicates that the vector x is conceived of as bound through the point P, rather than to the point P, since the latter conception would give the incorrect impression that the vector was located at the point P. By adhering to the terminology bound through, we get a slightly more correct impression of the 'freedom' that the vector enjoys.
The term bound vector space will be used to denote a vector space to whose basis has been added
an origin point. This should be read as bound vector-space, rather than bound-vector space.
The term bound m-space will be used to denote an m-dimensional vector space to whose basis has
been added an origin point.
Bivectors
◆ Depicting bivectors
Earlier in this chapter, a vector was depicted graphically by a directed and sensed line segment
supposed to be located nowhere in particular.
In like manner we depict a simple bivector by depicting its two component vectors supposed
located nowhere in particular. But to indicate that they are part of the same exterior product, the
vectors are depicted conveniently docked head to tail in the order of the product, and 'joined' by a
somewhat transparent parallelogram in the same colour, named by default with the exterior product
symbol.
This bivector defines a planar 2-direction, the precise 2-dimensional analog of the unlocated
vector. Its vectors are located nowhere, and it is located nowhere. The parallelogram should be
viewed as an artifact which is used to visually tie the two vectors together in the exterior product.
We motivate this choice by noting that if this bivector were in a metric space, the area of the
parallelogram would correspond to the magnitude of the bivector. We will discuss this further in
Chapter 6.
If we change the sign of both vectors in the product, we do not change the bivector, but we get a
different depiction.
Changing the signs of both vectors gives the same bivector.
Any bivector congruent to this bivector (that is, a scalar multiple of it) will represent the same 2-
direction. Reversing the order of the factors in a bivector is equivalent to multiplying it by -1,
producing an 'opposite orientation'.
This parallelogram depiction of the simple bivector is still misleading in a further major respect that did not arise for vectors: it incorrectly suggests a specific shape of the parallelogram. Indeed, since x∧y ≡ x∧(x + y), another valid depiction of this simple bivector would be a parallelogram with sides constructed from the vectors x and x + y.
x ∧ y ≡ x ∧ (x + y)
x ∧ y ≡ x ∧ (k x + y)
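The shape-independence identity above can be checked in coordinates. A minimal sketch in plain Python (an assumed 3-coordinate representation, not the GrassmannAlgebra package): a bivector of a 3-dimensional vector space is written out on the basis bivectors e₁∧e₂, e₁∧e₃, e₂∧e₃.

```python
# The exterior product of two vectors of a 3-space, as coefficients on
# the basis bivectors (e1^e2, e1^e3, e2^e3).
def wedge(u, v):
    return (u[0]*v[1] - u[1]*v[0],
            u[0]*v[2] - u[2]*v[0],
            u[1]*v[2] - u[2]*v[1])

x = (1, 2, 0)
y = (0, 1, 3)
k = 5
# Adding any multiple of x to y leaves the bivector unchanged:
xy  = wedge(x, y)
xky = wedge(x, tuple(k*a + b for a, b in zip(x, y)))
print(xy == xky)            # True
print(wedge(x, x))          # (0, 0, 0): the exterior square of a vector vanishes
```

The equality follows from bilinearity together with x∧x ≡ 0, which is exactly what the two printed lines exhibit.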
However in what follows, just as we will usually depict a bivector docked in a location convenient
for the discussion, so too we will usually depict it in the shape most convenient for the discussion
at hand, usually one with the simplest factors.
In the following chapter a metric will be introduced onto Λ₁, from which a metric is induced onto Λ₂.
This will permit the definition of the measure of a vector (its length) and the measure of the simple
bivector (its area).
The measure of a simple bivector is geometrically interpreted as the area of the parallelogram
formed by its two vectors. However, as demonstrated above, the bivector can be expressed in terms
of an infinite number of pairs of vectors. Despite this, the simple geometric fact that they have the
same base and height shows that parallelograms formed from all of them have the same area. For
example, the areas of the parallelograms in the previous figure are the same. Thus the area
definition of the measure of the bivector is truly an invariant measure. From this point of view the
parallelogram depiction in a metric space is correctly suggestive, although the parallelogram is not
of fixed shape.
In Chapter 6 it will be shown that this parallelogram-area notion of measure is simply the 2-
dimensional case of a very much more general notion of measure applicable to entities of any
grade.
A sum of simple bivectors is called a bivector. In two and three dimensions all bivectors are simple.
This will have important consequences for our exploration of screw algebra in Chapter 7 and its
application to mechanics in Chapter 8.
Earlier in the chapter it was remarked that a vector may be viewed as a 'carrier' of points.
Analogously, a simple bivector may be viewed as a carrier of bound vectors. This view will be
more fully explored in the next section.
Bound vectors
In mechanics the concept of force is paramount. In Chapter 8: Exploring Mechanics we will show
that a force may be represented by a bound vector, and that a system of forces may be represented
by a sum of bound vectors.
It has already been shown that a bound vector may be expressed either as the product of a point
with a vector or as the product of two points.
P₁ ∧ x ≡ P₁ ∧ (P₂ − P₁) ≡ P₁ ∧ P₂,   where P₂ ≡ P₁ + x
The bound vector in the form P₁∧x defines a line through the point P₁ in the direction of x. Similarly, in the form P₁∧P₂ it defines the line through P₁ and P₂.
Here we use the word 'define' in the context discussed in Chapter 2 in the section Spaces and congruence. To say that a bound vector B defines a line means that the line may be defined as the set of all points P which belong to the space of B, that is, such that B ∧ P ≡ 0. A consequence of this definition is that any other bound vector congruent to B defines the same line.
We depict a bound vector as located in its line in either of two ways, by:
1. Two points and their vector difference.
2. A single point and a vector.
In both cases the vector indicates the order of the factors in the exterior product.
These graphical depictions of the bound vector are each misleading in their own way.
The first (point-vector) depiction suggests that the vector lies in the line. Since the vector is located nowhere, it is certainly not bound to the point. However, as a component of the bound vector, we will often find it convenient to imagine it docked in the line, and to speak of it as bound through the point. And the point, again as a component of the bound vector, can be anywhere in the line, since if P and P* are any two points in the line, P − P* is a vector in the same direction as x, and the bound vector can be expressed as either P∧x or P*∧x.
(P − P*) ∧ x ≡ 0   ⟹   P ∧ x ≡ P* ∧ x
Hence, although a lone vector is located nowhere, and a lone point is immoveably fixed, the bound
vector formed by the exterior product of the two is bound to a line through the point in the
direction of the vector. And to add to this representational conundrum, the bound vector has no
specific location in the line. The following diagram depicts a bound vector docked in two different
locations in the line.
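The point-independence of a bound vector can be verified numerically. A minimal sketch in plain Python (coordinates on the assumed basis {ℴ, e₁, e₂, e₃} of a bound 3-space, not the GrassmannAlgebra package): a 2-element is represented by its six coefficients on the basis 2-elements, taken in the index order (0,1), (0,2), (0,3), (1,2), (1,3), (2,3), where index 0 is the origin.

```python
from itertools import combinations

# Exterior product of two 1-elements of the bound 3-space, as the
# 6-tuple of coefficients on the basis 2-elements.
def wedge(u, v):
    return tuple(u[i]*v[j] - u[j]*v[i] for i, j in combinations(range(4), 2))

P  = (1, 2, 0, 1)                          # a point on the line
x  = (0, 1, 1, 0)                          # the line's direction vector
Ps = tuple(p + 2*a for p, a in zip(P, x))  # another point of the line: P + 2x
print(wedge(P, x) == wedge(Ps, x))         # True: the same bound vector
```

Shifting the point along the line adds a multiple of x∧x ≡ 0, so the bound vector is unchanged.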
[Figure: the same bound vector docked at two different points, P and P*, of its line]
The second (point-point) depiction suggests that the depicted points are of specific importance over
other points in the line. However, any pair of points in the line with the same vector difference may
be used to express the bound vector. Let x ≡ Q − P ≡ Q* − P*; then
P ∧ x ≡ P* ∧ x   ⟹   P ∧ (Q − P) ≡ P* ∧ (Q* − P*)   ⟹   P ∧ Q ≡ P* ∧ Q*
A bound vector displaying its ability to look like a sliding pair of points.
It has been mentioned in the previous section that a simple bivector may be viewed as a 'carrier' of bound vectors. To see this, take any bound vector P∧x and a bivector whose space contains x. The bivector may be expressed in terms of x and some other vector, say y, yielding y∧x. Thus:
P ∧ x + y ∧ x ≡ (P + y) ∧ x ≡ P* ∧ x
The geometric interpretation of the addition of such a simple bivector to a bound vector is then
similar to that for the addition of a vector to a point, that is, a shift in position of the bound element.
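The carrier identity P∧x + y∧x ≡ (P + y)∧x can be checked in the same coordinate sketch (plain Python on the assumed basis {ℴ, e₁, e₂, e₃}, not the GrassmannAlgebra package):

```python
from itertools import combinations

# 1-elements as coordinates on {origin, e1, e2, e3}; 2-elements as
# 6-tuples on the basis 2-elements, indices (0,1),(0,2),(0,3),(1,2),(1,3),(2,3).
def wedge(u, v):
    return tuple(u[i]*v[j] - u[j]*v[i] for i, j in combinations(range(4), 2))

def add2(s, t):
    return tuple(a + b for a, b in zip(s, t))

P = (1, 1, 2, 3)          # a point
x = (0, 1, 0, 0)          # the bound vector's vector
y = (0, 0, 5, -2)         # any other vector: y^x is a bivector containing x
Py = tuple(a + b for a, b in zip(P, y))

lhs = add2(wedge(P, x), wedge(y, x))   # P^x + y^x
rhs = wedge(Py, x)                     # (P + y)^x
print(lhs == rhs)                      # True: the bivector 'carried' the bound vector
```

Adding the bivector y∧x has simply shifted the bound vector from the point P to the point P + y.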
◆ Composing bivectors
Once you have declared a basis for your bound vector space, say 𝔼₄,
𝔼₄
{ℴ, e₁, e₂, e₃, e₄}
you can quickly compose any bivectors in this basis with ComposeBivector or its palette alias. The placeholder □ is used for entering the symbol upon which you want the coefficients to be based. Here we choose a.
Ba = ComposeBivector[a]
a₁ e₁∧e₂ + a₂ e₁∧e₃ + a₃ e₁∧e₄ + a₄ e₂∧e₃ + a₅ e₂∧e₄ + a₆ e₃∧e₄
Just as with vectors and points, if you enter the alias with its placeholder □ left empty, you get an expression with placeholders in which you can enter your own coefficients. (But if you want to do further manipulation, make sure they are declared as scalar symbols.)
□ e₁∧e₂ + □ e₁∧e₃ + □ e₁∧e₄ + □ e₂∧e₃ + □ e₂∧e₄ + □ e₃∧e₄
In a 4-space, bivectors are not necessarily simple. If you want to compose a simple bivector, you can use ComposeSimpleBivector or its palette alias.
Bb = ComposeSimpleBivector[b]
(b₁ e₁ + b₂ e₂ + b₃ e₃ + b₄ e₄) ∧ (b₅ e₁ + b₆ e₂ + b₇ e₃ + b₈ e₄)
Similarly, to compose a bound vector you can use ComposeBoundVector, or its alias ℒ□. The symbol ℒ has been chosen because, as we have seen, bound vectors are often used to define lines.
L1 = ComposeBoundVector[q]
(ℴ + q₁ e₁ + q₂ e₂ + q₃ e₃ + q₄ e₄) ∧ (q₅ e₁ + q₆ e₂ + q₇ e₃ + q₈ e₄)
◆ The sum of two parallel bound vectors whose vectors are equal but of opposite sense
Consider a space of any dimension. Let 𝒮₁ ≡ P∧x and 𝒮₂ ≡ Q∧y be two bound vectors. Then their sum 𝒮 is
𝒮 ≡ 𝒮₁ + 𝒮₂ ≡ P ∧ x + Q ∧ y
Since the bound vectors are parallel, we can write y ≡ a x (where a is a scalar), so that
𝒮 ≡ P ∧ x + Q ∧ y ≡ P ∧ x + Q ∧ (a x) ≡ (P + a Q) ∧ x
In the special case that y is equal to −x (that is, a is −1), 𝒮 reduces to a bivector:
𝒮 ≡ (P − Q) ∧ x
In mechanics (see Chapters 7 and 8) this is equivalent to two oppositely directed forces of equal magnitude reducing to a couple.
◆ The sum of two parallel bound vectors is in general a bound vector parallel to them
Supposing now that a is not −1, then P + a Q is a weighted point of weight (1 + a) situated on the line joining P and Q. We can shift this weight to the vectorial term (so that the bound vector again becomes the product of a (pure) point and a vector) by writing 𝒮 as
𝒮 ≡ R ∧ ((1 + a) x),   R ≡ (P + a Q)/(1 + a)
The sum of two parallel bound vectors is a bound vector parallel to them.
In mechanics, this is equivalent to two parallel forces reducing to a resultant force parallel to them.
If the two parallel forces are of equal magnitude (a is equal to 1), the resultant force passes through
a point mid-way between them.
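The resultant formula R ≡ (P + a Q)/(1 + a) can be verified in coordinates. A minimal sketch in plain Python (the same assumed {ℴ, e₁, e₂, e₃} coordinate representation as above, not the GrassmannAlgebra package), using exact fractions for the weighted-point division:

```python
from fractions import Fraction
from itertools import combinations

def wedge(u, v):
    return tuple(u[i]*v[j] - u[j]*v[i] for i, j in combinations(range(4), 2))

P = (1, 0, 0, 0)                       # the origin, as a point
Q = (1, 2, 0, 0)                       # a second point on a parallel line
x = (0, 0, 1, 0)
a = 3                                  # the second vector is y = a x

S = tuple(s + t for s, t in
          zip(wedge(P, x), wedge(Q, tuple(a*c for c in x))))  # P^x + Q^(a x)

R = tuple(Fraction(p + a*q, 1 + a) for p, q in zip(P, Q))     # (P + a Q)/(1+a)
resultant = wedge(R, tuple((1 + a)*c for c in x))             # R^((1+a) x)
print(S == resultant)                  # True
print(R)                               # a pure point: weight (first entry) is 1
```

R comes out with origin coefficient 1, confirming that the weight (1 + a) has been shifted entirely onto the vector factor.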
𝒮 ≡ P ∧ x + Q ∧ y ≡ R ∧ (x + y) + (P − R) ∧ x + (Q − R) ∧ y ≡ R ∧ (x + y) + B
Because our two bound vectors are not parallel, the sum of their vectors cannot be zero. Hence the first term is a new bound vector which is not zero. The remaining terms are simple bivectors. In the
general dimensional case, this sum of bivectors B may not be reducible to a simple bivector. In 3-
dimensional space, the sum of bivectors B can always be reduced to a single simple bivector. In the
plane, a point R can always be found which makes the sum zero, leaving us with the result that the
sum of two bound vectors in the plane (whose sum of vectors is not zero) can always be reduced to
a single bound vector.
This point R is in fact the point of intersection of the lines of the bound vectors. We have seen
earlier in the chapter that a bound vector is independent of the point used to express it, provided
that the point lies on its line. We can thus imagine 'sliding' each bound vector along its line until its
point reaches the point of intersection with the line of the other bound vector. We can then take the
common point out of the sum as a factor, and we are left with the vector of the resultant being the
sum of the vectors of the bound vectors.
This result applies, of course, not only to bound vectors in the plane, but to any pair of intersecting bound vectors in a space of any dimension, since together they form a planar subspace.
As we might expect, given any two bound vectors in the plane, we can check if they intersect or
are parallel by extracting a common factor from their regressive product. If they intersect, their
regressive product will give their point of intersection. If they are parallel, it will give their
common vector.
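In the plane this intersect-or-parallel test has a familiar coordinate form. A minimal sketch in plain Python (an illustration of the idea, not the package's ToCommonFactor): writing points homogeneously as (w, x, y) with w = 1, both the line through two points and the meet of two lines are given, up to a scalar factor, by the 3-dimensional cross product. A zero leading coordinate in the meet signals parallel lines, and the result is then their common direction vector.

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# Lines as exterior products of pairs of points (homogeneous (w, x, y)):
L1 = cross((1, 0, 0), (1, 1, 0))   # the line y = 0
L2 = cross((1, 0, 0), (1, 0, 1))   # the line x = 0
L3 = cross((1, 0, 1), (1, 1, 1))   # the line y = 1 (parallel to L1)

print(cross(L1, L2))   # (1, 0, 0): they meet in the point at the origin
print(cross(L1, L3))   # (0, -1, 0): weight 0 -- parallel, common direction is e1
```

The parallel case returns a weight-zero element, i.e. a vector, exactly as the text describes for the regressive product of parallel bound vectors.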
ToCommonFactor[B₁ ∨ B₂]
(ℴ + c x + b y) ⌊ℴ ∧ x ∧ y⌋
a weighted point congruent to the point ℴ + c x + b y.
If the lines of the bound vectors in 3-space intersect, then the bound vectors together belong to a plane and the results of the previous section apply. We are thus left to explore the remaining case, in which the bound vectors are neither parallel nor intersecting. In this case, the exterior product of the bound vectors will not be zero.
As we have seen above, the sum of two bound vectors in a space of any number of dimensions,
may always be reduced to the sum of a bound vector and a bivector. The resultant bound vector
can be chosen through an arbitrary point, but its vector must be the sum of the vectors of the
original two bound vectors. The bivector will depend on the choice of the point defining the
resultant bound vector. And in 3-space this bivector is guaranteed to be simple.
𝒮 ≡ P ∧ x + Q ∧ y ≡ R ∧ (x + y) + B
To visualize how this works, we take a simple example. Define the two bound vectors as:
𝒮₁ ≡ ℴ ∧ e₁,   𝒮₂ ≡ (ℴ + 2 e₂) ∧ e₃
Choose a convenient point through which to define the resultant bound vector, say half way between the points used to define the component bound vectors.
ℛ ≡ (ℴ + e₂) ∧ (e₁ + e₃)
B ≡ 𝒮₁ + 𝒮₂ − ℛ ≡ ℴ ∧ e₁ + (ℴ + 2 e₂) ∧ e₃ − (ℴ + e₂) ∧ (e₁ + e₃) ≡ e₂ ∧ (e₃ − e₁)
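This little summation checks out numerically. A minimal sketch in plain Python (coordinates on the assumed basis {ℴ, e₁, e₂, e₃}, not the GrassmannAlgebra package):

```python
from itertools import combinations

# 2-elements as 6-tuples on the basis 2-elements, indices in the order
# (0,1), (0,2), (0,3), (1,2), (1,3), (2,3); index 0 is the origin.
def wedge(u, v):
    return tuple(u[i]*v[j] - u[j]*v[i] for i, j in combinations(range(4), 2))

O, e1, e2, e3 = (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)

def lin(*terms):   # integer linear combination of 1-elements
    return tuple(sum(k * v[i] for k, v in terms) for i in range(4))

S1 = wedge(O, e1)                               # origin ^ e1
S2 = wedge(lin((1, O), (2, e2)), e3)            # (origin + 2 e2) ^ e3
R  = wedge(lin((1, O), (1, e2)), lin((1, e1), (1, e3)))

B = tuple(a + b - c for a, b, c in zip(S1, S2, R))
# B should be the bivector e2 ^ (e3 - e1):
print(B == wedge(e2, lin((1, e3), (-1, e1))))   # True
```

The residue B has no origin components at all: it is a pure (simple) bivector, as claimed.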
The following depictions give two different views of the same summation. The original component
bound vectors are shown with a somewhat ghostly opacity, while the resultant bound vector and
bivector are shown in full-blooded colours.
The sum of two bound vectors. View 1. The sum of two bound vectors. View 2.
We shall see in the next section that the sum of any number of bound vectors in 3-space may be reduced to the sum of a single bound vector and a single bivector.
• In Chapter 8: Exploring Mechanics, we will see that bound vectors correspond to forces and bivectors to moments. If the resultant bound vector force is chosen through a point which makes it and its bivector moment orthogonal, such a system of forces is called a wrench.
We have seen in the sections above that the sum of two bound vectors in the plane is either a bound
vector or a bivector or zero. We have also seen that the sum of a bound vector and a bivector is
another bound vector.
A sum of any number of bound vectors in the plane, therefore, is clearly also reducible to a bound
vector, a bivector, or zero; for we can simply add them two at a time.
In 3-space we can present the same sort of argument. The sum of two bound vectors in 3-space is
either a bound vector, a simple bivector, the sum of a bound vector and a simple bivector, or zero.
In addition, the sum of two simple bivectors in 3-space is itself a simple bivector.
Thus again, we can add our bound vectors two at a time, and will end up with a bound vector, a
simple bivector, the sum of a bound vector and a simple bivector, or zero.
In 3-space, the sum of a bound vector and a bivector can also always be recast back into the sum of
two bound vectors by adding and subtracting a bound vector.
P ∧ x + y ∧ z ≡ (P ∧ x − P ∧ z) + (P ∧ z + y ∧ z) ≡ P ∧ (x − z) + (P + y) ∧ z
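The add-and-subtract recasting above is an identity that coordinates confirm at once. A minimal sketch in plain Python (the assumed {ℴ, e₁, e₂, e₃} representation again, not the GrassmannAlgebra package):

```python
from itertools import combinations

def wedge(u, v):
    return tuple(u[i]*v[j] - u[j]*v[i] for i, j in combinations(range(4), 2))

def plus(s, t):
    return tuple(a + b for a, b in zip(s, t))

P = (1, 2, -1, 3)                                        # a point
x = (0, 1, 0, 2); y = (0, 0, 1, 1); z = (0, 3, 1, 0)     # three vectors

lhs = plus(wedge(P, x), wedge(y, z))                     # P^x + y^z
Pxz = wedge(P, tuple(a - b for a, b in zip(x, z)))       # P ^ (x - z)
Pyz = wedge(tuple(a + b for a, b in zip(P, y)), z)       # (P + y) ^ z
print(lhs == plus(Pxz, Pyz))                             # True
```

Both sides agree component by component, so the bound-vector-plus-bivector sum has indeed been recast as a sum of two bound vectors.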
Sums of bound vectors in n-space (n > 3) differ from sums in 3-space in only one aspect: the bivector may not be simple.
If the bivector is simple, a sum of bound vectors in n-space has essentially the same properties as
one in 3-space.
A non-zero sum of bound vectors S can be reduced to a single bound vector only in the case that the vector of the bound vector is a factor of the bivector. That is, S can be expressed in the form
S ≡ P ∧ x + y ∧ x ≡ (P + y) ∧ x
in which case its exterior square vanishes:
S ∧ S ≡ ((P + y) ∧ x) ∧ ((P + y) ∧ x) ≡ 0
Suppose now that the vector of the bound vector is not a factor of the bivector. Then we have S ≡ P∧x + y∧z, where x, y, and z are independent. The exterior square of S is then
S ∧ S ≡ (P ∧ x + y ∧ z) ∧ (P ∧ x + y ∧ z) ≡ 2 P ∧ x ∧ y ∧ z
This forms a straightforward test to see if the reduction of the sum to a single bound vector can be
effected. If the exterior square is zero, the sum can be reduced to a single bound vector. If the
exterior square is not zero, then the sum cannot be reduced to a single bound vector. We have
already seen this property of exterior squares of 2-elements in our discussion of simplicity in
Chapter 3.
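The exterior-square test is easy to run in coordinates. For a 2-element S of the bound 3-space with components (s₀₁, s₀₂, s₀₃, s₁₂, s₁₃, s₂₃), the square S∧S works out to 2(s₀₁s₂₃ − s₀₂s₁₃ + s₀₃s₁₂) times ℴ∧e₁∧e₂∧e₃ — a minimal sketch under the same assumed coordinate representation as before (plain Python, not the GrassmannAlgebra package):

```python
from itertools import combinations

def wedge(u, v):
    return tuple(u[i]*v[j] - u[j]*v[i] for i, j in combinations(range(4), 2))

def plus(s, t):
    return tuple(a + b for a, b in zip(s, t))

def ext_square(s):
    # coefficient of origin^e1^e2^e3 in S^S
    return 2 * (s[0]*s[5] - s[1]*s[4] + s[2]*s[3])

O, e1, e2, e3 = (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)
P, y = (1, 2, 0, 5), (0, 1, 4, 0)

reducible = plus(wedge(P, e1), wedge(y, e1))    # P^x + y^x, common vector x = e1
stuck     = plus(wedge(P, e1), wedge(e2, e3))   # P^x + y^z, x, y, z independent
print(ext_square(reducible), ext_square(stuck)) # 0 2
```

A zero square certifies the sum is reducible to a single bound vector; a non-zero square rules it out, matching the simplicity criterion of Chapter 3.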
A non-zero sum of bound vectors ∑ Pᵢ∧xᵢ (except in the case ∑ xᵢ ≡ 0) may always be reduced to the sum of a bound vector and a bivector (or zero), since, by choosing an arbitrary point P, ∑ Pᵢ∧xᵢ may always be written in the form:
∑ Pᵢ ∧ xᵢ ≡ P ∧ (∑ xᵢ) + ∑ (Pᵢ − P) ∧ xᵢ    4.1
The first term on the right hand side is a bound vector. The second term is a bivector. This
decomposition is clearly valid in a space of any dimension.
If ∑ xᵢ ≡ 0 then the sum is a bivector. In spaces of vector dimension greater than 3, the bivector may not be simple.
𝔸; 𝔼₃;
b₁ = P₁ ∧ x₁;  P₁ = ℴ + e₁ + 3 e₂ − 4 e₃;  x₁ = e₁ − e₃;
b₂ = P₂ ∧ x₂;  P₂ = ℴ + 2 e₁ − e₂ − 2 e₃;  x₂ = e₁ − e₂ + e₃;
b₃ = P₃ ∧ x₃;  P₃ = ℴ − 5 e₁ + 3 e₂ − 6 e₃;  x₃ = 2 e₁ + 3 e₂;
b₄ = P₄ ∧ x₄;  P₄ = ℴ + 4 e₁ + 2 e₂ − 9 e₃;  x₄ = 5 e₃;
B = ∑ᵢ₌₁⁴ bᵢ
(ℴ + 4 e₁ + 2 e₂ − 9 e₃) ∧ (5 e₃) + (ℴ − 5 e₁ + 3 e₂ − 6 e₃) ∧ (2 e₁ + 3 e₂) +
(ℴ + e₁ + 3 e₂ − 4 e₃) ∧ (e₁ − e₃) + (ℴ + 2 e₁ − e₂ − 2 e₃) ∧ (e₁ − e₂ + e₃)
By expanding these products, simplifying and collecting terms, we obtain the sum of a bound vector (through the origin) ℴ ∧ (4 e₁ + 2 e₂ + 5 e₃) and a bivector −25 e₁∧e₂ + 39 e₁∧e₃ + 22 e₂∧e₃. We can use GrassmannExpandAndSimplify to do the computations for us.
B = GrassmannExpandAndSimplify[B]
ℴ ∧ (4 e₁ + 2 e₂ + 5 e₃) − 25 e₁∧e₂ + 39 e₁∧e₃ + 22 e₂∧e₃
We could just as well have expressed this 2-element as bound through (for example) the point ℴ + e₁. To do this, we simply add e₁ ∧ (4 e₁ + 2 e₂ + 5 e₃) to the bound vector and subtract it from the bivector to get:
(ℴ + e₁) ∧ (4 e₁ + 2 e₂ + 5 e₃) − 27 e₁∧e₂ + 34 e₁∧e₃ + 22 e₂∧e₃
We can of course also express the bivector in factored form in many ways. Using ExteriorFactorize (discussed in Chapter 2) gives a default result:
ExteriorFactorize[−27 e₁∧e₂ + 34 e₁∧e₃ + 22 e₂∧e₃]
−27 (e₁ + (22 e₃)/27) ∧ (e₂ − (34 e₃)/27)
Finally, we can check to see if the sum can be reduced to a bound vector by calculating its exterior
square.
GrassmannExpandAndSimplify[B ∧ B]
−230 ℴ ∧ e₁ ∧ e₂ ∧ e₃
Since this is not zero, the reduction of the sum to a single bound vector is not possible in this case.
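The whole worked example — the decomposition into a bound vector through the origin plus a bivector, and the exterior square −230 — can be replayed numerically. A minimal sketch in plain Python (coordinates on the assumed basis {ℴ, e₁, e₂, e₃}, not the GrassmannAlgebra package):

```python
from itertools import combinations

# 2-element components in the index order (0,1),(0,2),(0,3),(1,2),(1,3),(2,3),
# index 0 being the origin.
def wedge(u, v):
    return tuple(u[i]*v[j] - u[j]*v[i] for i, j in combinations(range(4), 2))

points  = [(1, 1, 3, -4), (1, 2, -1, -2), (1, -5, 3, -6), (1, 4, 2, -9)]
vectors = [(0, 1, 0, -1), (0, 1, -1, 1), (0, 2, 3, 0), (0, 0, 0, 5)]

B = [0] * 6
for p, x in zip(points, vectors):
    B = [a + b for a, b in zip(B, wedge(p, x))]

print(B[:3])   # [4, 2, 5]: the bound part, origin^(4 e1 + 2 e2 + 5 e3)
print(B[3:])   # [-25, 39, 22]: the bivector part
print(2 * (B[0]*B[5] - B[1]*B[4] + B[2]*B[3]))   # -230: the exterior square
```

The non-zero exterior square coefficient −230 confirms, independently of the package, that this sum cannot be reduced to a single bound vector.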
P ∧ P₁ ∧ P₂ ∧ x ∧ y ≡ P ∧ (P₁ − P) ∧ (P₂ − P) ∧ x ∧ y
A sum of bound m-vectors may always be reduced to the sum of a bound m-vector and an (m+1)-
vector (with the proviso that either or both may be zero).
These interpreted elements and their relationships will be discussed further in the following
sections.
The m-vector
The simple m-vector, or multivector, is the multidimensional equivalent of the vector. As with a
vector, it does not have the property of location. The m-dimensional vector space of a simple m-
vector may be used to define the multidimensional direction of the m-vector.
If we had access to m-dimensional paper, we would depict the simple m-vector, just as we did for
the bivector, by depicting its vectors conveniently docked head to tail in the order of the product,
and 'joined' by a somewhat transparent m-dimensional parallelepiped in the same colour.
In practice of course, we will only be depicting vectors, bivectors and trivectors. A trivector may
be depicted by its three vectors in order joined by a parallelepiped.
The orientation is given by the order of the factors in the simple m-vector. An interchange of any
two factors produces an m-vector of opposite orientation. By the anti-symmetry of the exterior
product, there are just two distinct orientations.
In Chapter 6: The Interior Product, it will be shown that the measure of a trivector may be
geometrically interpreted as the volume of this parallelepiped. However, the depiction of the
simple trivector in the manner above suffers from similar defects to those already described for the
bivector: namely, it incorrectly suggests a specific location and shape of the parallelepiped.
A simple m-vector may also be viewed as a carrier of bound simple (m−1)-vectors in a manner analogous to that already described for the simple bivector as a carrier of bound vectors. Thus, a simple trivector may be viewed as a carrier for bound simple bivectors. (See below for a depiction of this.)
A sum of simple m-vectors (that is, an m-vector) is not necessarily reducible to a simple m-vector, except in Λ₀, Λ₁, Λₙ₋₁, and Λₙ.
∑ Pᵢ ∧ aᵢ ≡ ∑ (P + bᵢ) ∧ aᵢ ≡ P ∧ (∑ aᵢ) + ∑ bᵢ ∧ aᵢ    4.2
(each aᵢ here is a simple m-vector). The first term P ∧ (∑ aᵢ) is a bound m-vector provided that ∑ aᵢ ≠ 0, and the second term is an (m+1)-vector.
When m = n (n being the dimension of the underlying vector space), any bound n-vector is but a
scalar multiple of the basis (n+1)-element. (The default basis (n+1)-element is denoted
¯" Ô e1 Ô e2 Ô ! Ô en .)
The graphical depiction of bound simple m-vectors presents even greater difficulties than those
already discussed for bound vectors. As in the case of the bound vector, the point used to express
the bound simple m-vector is not unique.
In Section 4.6 we will see how bound simple m-vectors may be used to define multiplanes.
P₀ ∧ x₁ ∧ x₂ ∧ ⋯ ∧ xₘ
≡ P₀ ∧ (P₁ − P₀) ∧ (P₂ − P₀) ∧ ⋯ ∧ (Pₘ − P₀)
≡ P₀ ∧ P₁ ∧ P₂ ∧ ⋯ ∧ Pₘ
Conversely, as we have already seen, a product of m+1 points may always be expressed as the
product of a point and a simple m-vector by subtracting one of the points from all of the others.
The m-vector of a bound simple m-vector P₀∧P₁∧P₂∧⋯∧Pₘ may thus be expressed in terms of these points as:
(P₁ − P₀) ∧ (P₂ − P₀) ∧ ⋯ ∧ (Pₘ − P₀)
A particularly symmetrical formula results from the expansion of this product reducing it to a form
no longer showing preference for P0 .
A bound bivector is a 3-element. Any exterior product of three 1-elements, in which at least one 1-
element is a point, is a bound simple bivector. (For brevity in this section, we will suppose all the
bound bivectors discussed are bound simple bivectors.)
[Figure: the points P, Q ≡ P + x, and R ≡ P + y]
We can construct the same bound bivector ℬ from these points in a number of ways.
Thus we have nine (inessentially different) ways of constructing or denoting the same bound
bivector. From here on we will usually only discuss bound bivectors in any of the three
conceptually distinct forms in which points are denoted before vectors:
We depict each of these forms in a slightly different way: in each case emphasizing the entities
involved in the form, but in all cases showing the order of factors in the product by the chain of
arrows. (These graphical conventions will also apply to our discussion of bound trivectors later.)
The bound bivector ℬ defines a plane through the points P, Q and R in the directions of x and y. Here we use the word 'define' in the context discussed in Chapter 2 in the section Spaces and congruence. To say that a bound bivector ℬ defines a plane means that the plane may be defined as the set of all points P which belong to the space of ℬ, that is, such that ℬ ∧ P ≡ 0. A consequence of this definition is that any other bound bivector congruent to ℬ defines the same plane.
These graphical depictions of the bound bivector are each misleading in their own way. Since the
bound bivector defines a plane, we can conceive of it as lying in the plane. However, just as a
bound vector can lie anywhere in its line, a bound bivector can lie anywhere in its plane.
The following diagram depicts a bound bivector docked in three different locations in its plane.
The depictions each have the same vectors, but differ in the point of the plane they use.
Hence, although a lone bivector is located nowhere, and a lone point is immoveably fixed, the
bound bivector formed by the exterior product of the two is bound to a plane through the point in
the direction of the bivector.
It has been shown earlier that a simple bivector may be viewed as a 'carrier' of bound vectors. In a similar manner, a simple trivector may be viewed as a 'carrier' of bound bivectors. To see this, take any bound bivector P∧y∧z and a trivector x∧y∧z. Thus:
P ∧ y ∧ z + x ∧ y ∧ z ≡ (P + x) ∧ y ∧ z
The geometric interpretation of the addition of such a simple trivector to a bound bivector is then
similar to that for the addition of a vector to a point, that is, a shift in position of the bound element.
◆ Composing m-vectors
𝔼₃
{ℴ, e₁, e₂, e₃}
Even though we are in a bound vector space, we can still compose m-vectors with ComposeMVector. In a bound 3-space, we have three possible m-vectors. Here we base the coefficients on the symbol a.
Similarly we can compose simple m-vectors with ComposeSimpleMVector. In this case we use
a placeholder to enable later input of specific coefficients.
The vector space of a simple m-vector is an m-dimensional vector space. Conversely, the m-
dimensional vector space may be said to be defined by the m-vector.
The point space of a bound simple m-vector is called an m-plane (sometimes multiplane). Thus the
point space of a bound vector is a 1-plane (or line) and the point space of a bound simple bivector
is a 2-plane (or, simply, a plane). The m-plane will be said to be defined by the bound simple m-
vector.
The geometric interpretation for the notion of set inclusion is taken as 'to lie in'. Thus for example,
a point may be said to lie in an m-plane.
The point and vector spaces for a bound simple m-element are tabulated below.
Two congruent bound simple m-vectors P∧α and a P∧α (where α is a simple m-vector and a is a scalar) define the same m-plane. Thus, for example, the point ℴ + x and the weighted point 2(ℴ + x) define the same point.
We have defined an m-plane as a set of points defined by a bound simple m-vector. It will often
turn out however to be more convenient and conceptually fruitful to work with m-planes as if they
were the bound simple m-vectors which define them. This is in fact the approach taken by
Grassmann and the early workers in the Grassmannian tradition (for example, [Whitehead 1898],
[Hyde 1884-1906] and [Forder 1941]). This will be satisfactory provided that the equality
relationship we define for m-planes is that of congruence rather than the more specific equals.
Thus we may, when speaking in a geometric context, refer to a bound simple m-vector as an m-
plane and vice versa. Hence in saying P is an m-plane, we are also saying that all a P (where a is a
scalar factor, not zero) is the same m-plane.
Coordinate spaces
The coordinate spaces of a Grassmann algebra are the spaces defined by the basis elements.
The coordinate m-spaces are the spaces defined by the basis elements of Λₘ. For example, if Λ₁ has basis {ℴ, e₁, e₂, e₃}, that is, we are working in a bound vector 3-space, then the Grassmann algebra it generates has basis:
𝔼₃; LBasis
{{1}, {ℴ, e₁, e₂, e₃},
 {ℴ∧e₁, ℴ∧e₂, ℴ∧e₃, e₁∧e₂, e₁∧e₃, e₂∧e₃},
 {ℴ∧e₁∧e₂, ℴ∧e₁∧e₃, ℴ∧e₂∧e₃, e₁∧e₂∧e₃}, {ℴ∧e₁∧e₂∧e₃}}
Each one of these basis elements defines a coordinate space. Most familiar are the coordinate m-planes. The coordinate 1-planes ℴ∧e₁, ℴ∧e₂, ℴ∧e₃ define the coordinate axes, while the coordinate 2-planes ℴ∧e₁∧e₂, ℴ∧e₁∧e₃, ℴ∧e₂∧e₃ define the coordinate planes. Additionally, however, there are the coordinate vectors e₁, e₂, e₃ and the coordinate bivectors e₁∧e₂, e₁∧e₃, e₂∧e₃.
Perhaps less familiar is the fact that there are no coordinate m-planes in a vector space, but rather
simply coordinate m-vectors.
Geometric dependence
In Chapter 2 the notion of dependence was discussed for elements of a linear space. Non-zero 1-
elements are said to be dependent if and only if their exterior product is zero.
If the elements concerned have been endowed with a geometric interpretation, the notion of
dependence takes on an additional geometric interpretation, as the following table shows.
Thus, co-directionality and coincidence, while being distinct interpretive geometric notions, are
equivalent algebraically.
Geometric duality
The concept of duality introduced in Chapter 3 is most striking when interpreted geometrically.
Suppose:
P defines a point
L defines a line
p defines a plane
V defines a 3-plane
In what follows we tabulate the dual relationships of these entities to each other.
◆ Duality in a plane
In a plane there are just three types of geometric entity: points, lines and planes. In the table below
we can see that in the plane, points and lines are 'dual' entities, and planes and scalars are 'dual'
entities, because their definitions convert under the application of the Duality Principle.
L ≡ P₁ ∧ P₂           P ≡ L₁ ∨ L₂
p ≡ P₁ ∧ P₂ ∧ P₃     1 ≡ L₁ ∨ L₂ ∨ L₃
p ≡ L ∧ P             1 ≡ P ∨ L
◆ Duality in a 3-plane
In the 3-plane there are just four types of geometric entity: points, lines, planes and 3-planes. In the
table below we can see that in the 3-plane, lines are self-dual, points and planes are now dual, and
scalars are now dual to 3-planes.
L ≡ P₁ ∧ P₂                L ≡ p₁ ∨ p₂
p ≡ P₁ ∧ P₂ ∧ P₃          P ≡ p₁ ∨ p₂ ∨ p₃
V ≡ P₁ ∧ P₂ ∧ P₃ ∧ P₄    1 ≡ p₁ ∨ p₂ ∨ p₃ ∨ p₄
p ≡ L ∧ P                  P ≡ L ∨ p
V ≡ L₁ ∧ L₂                1 ≡ L₁ ∨ L₂
V ≡ p ∧ P                  1 ≡ P ∨ p
◆ Duality in an n-plane
From these cases the types of relationships in higher dimensions may be composed straightforwardly. For example, if P defines a point and H defines a hyperplane (an (n−1)-plane), then we have the dual formulations:
H ≡ P₁ ∧ P₂ ∧ ⋯ ∧ Pₙ₋₁     P ≡ H₁ ∨ H₂ ∨ ⋯ ∨ Hₙ₋₁
4.6 m-Planes
In an earlier section m-planes were defined as point spaces of bound simple m-vectors. In this
section m-planes will be considered from three other aspects: the first in terms of a simple exterior
product of points, the second as an m-vector and the third as an exterior quotient.
𝒫 = P0 ∧ P1 ∧ P2 ∧ ⋯ ∧ Pm

P ∧ P0 ∧ x1 ∧ x2 ∧ ⋯ ∧ xm = 0
This equation is equivalent to the statement: there exist scalars a, a0, ai, not all zero, such that:

a P + a0 P0 + Σ ai xi = 0

And since this is only possible if a = −a0 (since for the sum to be zero, it must reduce to a sum of vectors), we can rewrite the condition as:

a (P − P0) + Σ ai xi = 0

or equivalently:

(P − P0) ∧ x1 ∧ x2 ∧ ⋯ ∧ xm = 0
We are thus led to the following alternative definition of an m-plane: the m-plane defined by the bound simple m-vector P0 ∧ x1 ∧ x2 ∧ ⋯ ∧ xm is the set of points:

{P : (P − P0) ∧ x1 ∧ x2 ∧ ⋯ ∧ xm = 0}

This is of course equivalent to the usual definition of an m-plane. That is, since the vectors (P − P0), x1, x2, ⋯, xm are dependent, then for scalar parameters ti:

(P − P0) = t1 x1 + t2 x2 + ⋯ + tm xm        4.4
P ∧ x1 ∧ x2 ∧ ⋯ ∧ xm = P0 ∧ x1 ∧ x2 ∧ ⋯ ∧ xm

'Solving' for P, and noting from Section 2.11 that the quotient of an (m+1)-element by an m-element contained in it is a 1-element with m arbitrary scalar parameters, we can write:

P = (P0 ∧ x1 ∧ x2 ∧ ⋯ ∧ xm)/(x1 ∧ x2 ∧ ⋯ ∧ xm) = P0 + t1 x1 + t2 x2 + ⋯ + tm xm

P = (P0 ∧ x1 ∧ x2 ∧ ⋯ ∧ xm)/(x1 ∧ x2 ∧ ⋯ ∧ xm)        4.5
Note that all the 1-elements in the numerator must be declared vector symbols, so we need to
declare any points we use.
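The equivalence of the two definitions can be illustrated with a small plain-Python sketch (hypothetical numeric values, not the GrassmannAlgebra package): a point P lies on the m-plane exactly when (P − P0) ∧ x1 ∧ ⋯ ∧ xm vanishes, that is, when every maximal minor of the matrix of these vectors is zero.

```python
from itertools import combinations

def wedge_is_zero(vectors):
    """True iff the exterior product of the given 1-elements is zero,
    i.e. every maximal minor of their coefficient matrix vanishes."""
    m, n = len(vectors), len(vectors[0])
    def det(mat):
        if len(mat) == 1:
            return mat[0][0]
        return sum((-1) ** j * mat[0][j] *
                   det([r[:j] + r[j + 1:] for r in mat[1:]]) for j in range(len(mat)))
    return all(det([[v[c] for c in cols] for v in vectors]) == 0
               for cols in combinations(range(n), m))

P0 = [0, 0, 1]
x1, x2 = [1, 0, 2], [0, 1, -1]    # a 2-plane through P0 in a 3-space
P = [v0 + 3 * v1 - 2 * v2 for v0, v1, v2 in zip(P0, x1, x2)]   # P0 + 3 x1 - 2 x2

assert wedge_is_zero([[p - q for p, q in zip(P, P0)], x1, x2])   # P is on the plane
assert not wedge_is_zero([[1, 0, 0], x1, x2])                    # a generic vector is not
```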
Points

The exterior quotient of a weighted point by its weight gives (as would be expected) the (pure) point.

ExteriorQuotient[(P ∧ a)/a]
P
Points in a line

The exterior quotient of a bound vector by its vector gives all the points in the line defined by the bound vector, parametrized by the scalar t1.

ExteriorQuotient[(P ∧ x)/x]
P + t1 x
We can see that any other point used to define the bound vector gives effectively the same parametrization.

ExteriorQuotient[((P + a x) ∧ x)/x]
P + a x + t1 x
Points in a plane

The exterior quotient of a bound simple bivector by its simple bivector gives all the points in the plane defined by the bound simple bivector, parametrized by the scalars t1 and t2.

ExteriorQuotient[(P ∧ x ∧ y)/(x ∧ y)]
P + t1 x + t2 y
Points in a 3-plane

The exterior quotient of a bound simple trivector by its simple trivector gives all the points in the 3-plane defined by the bound simple trivector, parametrized by the scalars t1, t2 and t3.

ExteriorQuotient[(P ∧ x ∧ y ∧ z)/(x ∧ y ∧ z)]
P + t1 x + t2 y + t3 z
Lines in a plane

The exterior quotient of a bound simple bivector by a vector in it gives a set of bound vectors. These bound vectors define lines in the plane of the bound simple bivector, parametrized by the scalars t1 and t2. They are characterized by the fact that their exterior product with the vector gives the original bound simple bivector.

ExteriorQuotient[(P ∧ x ∧ y)/y]
(P + t1 y) ∧ (x + t2 y)
Lines in a 3-plane

The exterior quotient of a bound simple trivector by a bivector in it gives a set of bound vectors. These bound vectors define lines in the 3-plane of the bound simple trivector, parametrized by the scalars t1, t2, t3 and t4. They are characterized by the fact that their exterior product with the bivector gives the original bound simple trivector.

ExteriorQuotient[(P ∧ x ∧ y ∧ z)/(y ∧ z)]
(P + t1 y + t2 z) ∧ (x + t3 y + t4 z)
Planes in a 3-plane

The exterior quotient of a bound simple trivector by a vector in it gives a set of bound bivectors. These bound bivectors define planes in the 3-plane of the bound simple trivector, parametrized by the scalars t1, t2 and t3. They are characterized by the fact that their exterior product with the vector gives the original bound simple trivector.

ExteriorQuotient[(P ∧ x ∧ y ∧ z)/z]
(P + t1 z) ∧ (x + t2 z) ∧ (y + t3 z)
The interesting property of this operation is that when it is applied twice, the result is zero. Operationally, ∂² = 0. For example:

∂(P) = 1
∂(P0 ∧ P1) = P1 − P0
∂(P1 − P0) = 1 − 1 = 0
∂(P0 ∧ P1 ∧ P2) = P1 ∧ P2 + P2 ∧ P0 + P0 ∧ P1
∂(P1 ∧ P2 + P2 ∧ P0 + P0 ∧ P1) = P2 − P1 + P0 − P2 + P1 − P0 = 0
This property of nilpotence is shared by the boundary operator of algebraic topology and the
exterior derivative. Furthermore, if a product with a given 1-element is considered an operation,
then the exterior, regressive and interior products are all likewise nilpotent.
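This nilpotence can be sketched in plain Python (an illustration of the behavior above, not the package's own operator) by encoding a product of points as an ordered index tuple and letting a boundary-like operator delete one factor at a time with alternating signs.

```python
def boundary(chain):
    """Apply the operator to a sum of products of points, encoded as
    {(i1, ..., ik): coefficient}: delete each factor in turn with alternating signs."""
    out = {}
    for simplex, coeff in chain.items():
        for pos in range(len(simplex)):
            face = simplex[:pos] + simplex[pos + 1:]
            out[face] = out.get(face, 0) + (-1) ** pos * coeff
    return {k: v for k, v in out.items() if v != 0}

tetra = {(0, 1, 2, 3): 1}          # P0 ^ P1 ^ P2 ^ P3
faces = boundary(tetra)            # P1^P2^P3 - P0^P2^P3 + P0^P1^P3 - P0^P1^P2
assert faces == {(1, 2, 3): 1, (0, 2, 3): -1, (0, 1, 3): 1, (0, 1, 2): -1}
assert boundary(faces) == {}       # nilpotence: applying the operator twice gives zero
```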
† In Chapter 6 we will show that under the hybrid metric we will adopt for bound vector spaces (in which the origin is orthogonal to all vectors), taking the interior product of any bound element with the origin 𝕆 has the same effect as the ∂ operator.

Applied to a bound m-vector, this operator generates the m-vector, thus creating a 'free' entity from a bound one. Applied a second time to the result, it gives zero, since the m-vector is already free.
† Another way of generating the m-vector from a bound m-vector is to compose the first co-span of the bound m-vector, and then sum the elements.

W = P1 ∧ P2 ∧ P3 ∧ P0;

Declare the Pi as 'vector' symbols (that is, as 1-elements), and sum the elements of its first co-span; call the sum V1.

V1 = P0 ∧ P1 ∧ P2 − P0 ∧ P1 ∧ P3 + P0 ∧ P2 ∧ P3 − P1 ∧ P2 ∧ P3

We can see this has the simple form we expected by factorizing it.

V2 = ExteriorFactorize[V1]
(P0 − P3) ∧ (P1 − P3) ∧ (P2 − P3)

Applying the summed first co-span operation to each of these terms again gives zero, as do the other operators.
For simplicity of exposition we refer to a bound vector as 'a line', rather than as 'defining a line', and to a bound bivector as 'a plane', rather than as 'defining a plane'.
Lines in a plane

To explore lines in a plane, we first declare the basis of the plane: ℙ2.

ℙ2
{𝕆, e1, e2}

A line in a plane can be written in several forms. The most intuitive form perhaps is as a product of two points 𝕆 + x and 𝕆 + y, where x and y are position vectors.
L ≡ (𝕆 + x) ∧ (𝕆 + y)
We can automatically generate a basis form for each of the points by using the GrassmannAlgebra ComposePoint function, or its alias form.

L ≡ ComposePoint[x] ∧ ComposePoint[y]
L ≡ (𝕆 + e1 x1 + e2 x2) ∧ (𝕆 + e1 y1 + e2 y2)
Or, we can express the line as the product of any point in it and a vector parallel to it. For example:

L ≡ (𝕆 + x) ∧ (y − x) ≡ (𝕆 + y) ∧ (y − x)
If we want to compose a line element in this form, we can also use ComposeLineElement or its alias. By leaving the placeholders unfilled we get a template for entering our own scalar coefficients.

ComposeLineElement
(𝕆 + □ e1 + □ e2) ∧ (□ e1 + □ e2)
Alternatively, we can express L without specific reference to points in it. For example:

L ≡ 𝕆 ∧ (a e1 + b e2) + c e1 ∧ e2

The first term 𝕆 ∧ (a e1 + b e2) is a vector bound through the origin, and hence defines a line through the origin. The second term c e1 ∧ e2 is a bivector whose addition represents a shift in the line parallel to itself, away from the origin.
We know that this can indeed represent a line, since we can factorize it into any of the forms:

† A line of gradient b/a through the point with coordinate c/b on the 𝕆 ∧ e1 axis:

L ≡ (𝕆 + (c/b) e1) ∧ (a e1 + b e2)
† A line of gradient b/a through the point with coordinate −c/a on the 𝕆 ∧ e2 axis:

L ≡ (𝕆 − (c/a) e2) ∧ (a e1 + b e2)
L ≡ (ab/c) (𝕆 − (c/a) e2) ∧ (𝕆 + (c/b) e1)

Of course the scalar factor ab/c is inessential, so we can just as well say:

L ≡ (𝕆 − (c/a) e2) ∧ (𝕆 + (c/b) e1)
The expression of the line above in terms of a pair of points requires the four coordinates of the points. Expressed without specific reference to points, we seem to need three parameters. However, the last expression shows, as expected, that only two parameters are really necessary (viz. y = m x + c).
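As a numerical check of this last factorization (a plain Python sketch with hypothetical values a = 2, b = 3, c = 4, not the GrassmannAlgebra package): computing the product of the two intercept points as 2 × 2 minors of their homogeneous coordinates recovers (a, b, c) up to the inessential factor ab/c.

```python
from fractions import Fraction as F
from itertools import combinations

def point_product(p, q):
    """2 x 2 minors of two homogeneous points on (origin, e1, e2):
    components on (O^e1, O^e2, e1^e2)."""
    return [p[i] * q[j] - p[j] * q[i] for i, j in combinations(range(3), 2)]

a, b, c = F(2), F(3), F(4)
p = [F(1), F(0), -c / a]      # the point  O - (c/a) e2
q = [F(1), c / b, F(0)]       # the point  O + (c/b) e1

w = point_product(p, q)
# scaling by a b / c recovers O ^ (a e1 + b e2) + c e1 ^ e2
assert [(a * b / c) * comp for comp in w] == [a, b, c]
```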
Lines in a 3-plane
Our normal notion of three-dimensional space corresponds to a 3-plane. Lines in a 3-plane have the
same form when expressed in coordinate-free notation as they do in a plane (2-plane). Remember
that a 3-plane is a bound vector 3-space whose basis may be chosen as 3 independent vectors and a
point, or equivalently as 4 independent points. For example, we can still express a line in a 3-plane
in any of the following equivalent forms.
L ≡ (𝕆 + x) ∧ (𝕆 + y)
L ≡ (𝕆 + x) ∧ (y − x)
L ≡ (𝕆 + y) ∧ (y − x)
L ≡ 𝕆 ∧ (y − x) + x ∧ y
The coordinate form however will appear somewhat different from that in the 2-plane case. To explore this, we redeclare the basis with ℙ3, and use ComposePoint to define a line as the product of two points.

ℙ3
{𝕆, e1, e2, e3}

L = ComposePoint[x] ∧ ComposePoint[y]
(𝕆 + e1 x1 + e2 x2 + e3 x3) ∧ (𝕆 + e1 y1 + e2 y2 + e3 y3)
We can expand and simplify this product with GrassmannExpandAndSimplify.

L = GrassmannExpandAndSimplify[L]
𝕆 ∧ (e1 (−x1 + y1) + e2 (−x2 + y2) + e3 (−x3 + y3)) +
  (−x2 y1 + x1 y2) e1 ∧ e2 + (−x3 y1 + x1 y3) e1 ∧ e3 + (−x3 y2 + x2 y3) e2 ∧ e3

The scalar coefficients in this expression are sometimes called the Plücker coordinates of the line.
Alternatively, we can express L in terms of basis elements, but without specific reference to points or vectors in it. For example:

L = 𝕆 ∧ (a e1 + b e2 + c e3) + d e1 ∧ e2 + e e2 ∧ e3 + f e1 ∧ e3;

The first term 𝕆 ∧ (a e1 + b e2 + c e3) is a vector bound through the origin, and hence defines a line through the origin. The second term d e1 ∧ e2 + e e2 ∧ e3 + f e1 ∧ e3 is a bivector whose addition represents a shift in the line parallel to itself, away from the origin. In order to effect this shift, however, it is necessary that the bivector contain the vector a e1 + b e2 + c e3. Hence there will be some constraint on the coefficients d, e, and f. To determine this we only need to determine the condition that the exterior product of the vector and the bivector is zero.
GrassmannExpandAndSimplify[(a e1 + b e2 + c e3) ∧ (d e1 ∧ e2 + e e2 ∧ e3 + f e1 ∧ e3)] == 0
(c d + a e − b f) e1 ∧ e2 ∧ e3 == 0
Alternatively, this constraint amongst the coefficients could have been obtained by noting that in order to be a line, L must be simple, hence its exterior product with itself must be zero.

GrassmannExpandAndSimplify[L ∧ L] == 0
(2 c d + 2 a e − 2 b f) 𝕆 ∧ e1 ∧ e2 ∧ e3 == 0
Thus the constraint that the coefficients must obey in order for a general bound vector of the form L to be a line in a 3-plane is:

c d + a e − b f = 0
This constraint is sometimes referred to as the Plücker identity.
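The identity can be checked numerically in plain Python (an illustrative sketch, not the GrassmannAlgebra package): expanding the product of two arbitrary points as above, the resulting coefficients always satisfy c d + a e − b f = 0.

```python
from random import randint

# two hypothetical points O + x and O + y defining the line
x = [randint(-9, 9) for _ in range(3)]
y = [randint(-9, 9) for _ in range(3)]

a, b, c = y[0] - x[0], y[1] - x[1], y[2] - x[2]    # coefficients of O^e1, O^e2, O^e3
d = x[0] * y[1] - x[1] * y[0]                      # coefficient of e1^e2
f = x[0] * y[2] - x[2] * y[0]                      # coefficient of e1^e3
e = x[1] * y[2] - x[2] * y[1]                      # coefficient of e2^e3

assert c * d + a * e - b * f == 0                  # the Plücker identity
```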
Given this constraint, and supposing neither a, b nor c to be zero, we can factorize the line into any of the following forms:

L = (𝕆 + (f/c) e1 + (e/c) e2) ∧ (a e1 + b e2 + c e3)

L = (𝕆 − (d/a) e2 − (f/a) e3) ∧ (a e1 + b e2 + c e3)

L = (𝕆 + (d/b) e1 − (e/b) e3) ∧ (a e1 + b e2 + c e3)

Each of these forms represents a line in the direction of a e1 + b e2 + c e3 intersecting a coordinate plane. For example, the first form intersects the 𝕆 ∧ e1 ∧ e2 coordinate plane in the point 𝕆 + (f/c) e1 + (e/c) e2, with coordinates (f/c, e/c, 0).
The most compact form, in terms of the number of scalar parameters used, is when L is expressed as the product of two points, each of which lies in a coordinate plane.

L = (ac/f) (𝕆 − (d/a) e2 − (f/a) e3) ∧ (𝕆 + (f/c) e1 + (e/c) e2);

We can verify that this formulation gives us the original form of the line by expanding and simplifying the product and substituting the constraint relation previously obtained.

GrassmannExpandAndSimplify[Expand[GrassmannExpandAndSimplify[L] /. (c d + a e → f b)]]
𝕆 ∧ (a e1 + b e2 + c e3) + d e1 ∧ e2 + f e1 ∧ e3 + e e2 ∧ e3
Similar expressions may be obtained for L in terms of points lying in the other coordinate planes. To summarize, there are three possibilities in a 3-plane, corresponding to the number of different pairs of coordinate 2-planes in the 3-plane.
L ≡ (𝕆 + x1 e1 + x2 e2) ∧ (𝕆 + y2 e2 + y3 e3) ≡ P ∧ Q
L ≡ (𝕆 + x1 e1 + x2 e2) ∧ (𝕆 + z1 e1 + z3 e3) ≡ P ∧ R        4.7
L ≡ (𝕆 + y2 e2 + y3 e3) ∧ (𝕆 + z1 e1 + z3 e3) ≡ Q ∧ R
As with a line in a 2-plane, we find that a line in a 3-plane is expressed with the minimum number of parameters by expressing it as the product of two points, each in one of the coordinate planes. In this form, there are just 4 independent scalar parameters (coordinates) required to express the line.

Suppose we have a line expressed as the exterior product of points in two coordinate planes, say 𝕆 ∧ e1 ∧ e2 and 𝕆 ∧ e2 ∧ e3.

L = (𝕆 + x1 e1 + x2 e2) ∧ (𝕆 + y2 e2 + y3 e3);
We can always find the point on the line in the third coordinate plane 𝕆 ∧ e3 ∧ e1 by computing the intersection of the line and the third plane.
To use GrassmannAlgebra for this, we first need to declare the space, and the scalar symbols. The intersection of the line and the third coordinate plane is given by finding their common factor.

F = ToCommonFactor[L ∨ (𝕆 ∧ e1 ∧ e3)]
𝕔 (𝕆 (x2 − y2) − e1 x1 y2 + e3 x2 y3)

R = ToPointForm[F]
𝕆 − (e1 x1 y2)/(x2 − y2) + (e3 x2 y3)/(x2 − y2)
We can confirm that this point is both on the line and on the coordinate plane by simplifying their
exterior products.
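A quick numerical check of this construction (plain Python with hypothetical coordinate values, not the GrassmannAlgebra package): the point R above is exactly the affine combination of P and Q whose e2 component vanishes, so it lies on the line and in the third coordinate plane.

```python
from fractions import Fraction as F

x1, x2, y2, y3 = F(1), F(2), F(5), F(7)           # hypothetical coordinates
P = [F(1), x1, x2, F(0)]      # O + x1 e1 + x2 e2, homogeneous coords on (O, e1, e2, e3)
Q = [F(1), F(0), y2, y3]      # O + y2 e2 + y3 e3

# the point R computed above
R = [F(1), -x1 * y2 / (x2 - y2), F(0), x2 * y3 / (x2 - y2)]

# the affine combination t P + (1 - t) Q whose e2 component vanishes
t = y2 / (y2 - x2)
S = [t * p + (1 - t) * q for p, q in zip(P, Q)]

assert S == R          # R is on the line through P and Q
assert R[2] == 0       # ... and has no e2 component, so it is in the O^e1^e3 plane
```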
Lines in a 4-plane
Lines in a 4-plane have the same form when expressed in coordinate-free notation as they do in
any multiplane.
To obtain the Plücker coordinates of a line in a 4-plane, express the line as the exterior product of
two points and multiply it out. The resulting coefficients of the basis elements are the Plücker
coordinates of the line.
Additionally, from the results above, we can expect that a line in a 4-plane may be expressed with the least number of scalar parameters as the exterior product of two points, each point lying in one of the coordinate 3-planes. For example, the expression for the line as the product of points in the coordinate 3-planes 𝕆 ∧ e1 ∧ e2 ∧ e3 and 𝕆 ∧ e2 ∧ e3 ∧ e4 is

L = (𝕆 + x1 e1 + x2 e2 + x3 e3) ∧ (𝕆 + y2 e2 + y3 e3 + y4 e4)
Lines in an m-plane
The formulae below summarize some of the expressions for defining a line, valid in a multiplane
of any dimension.
L ≡ (𝕆 + x) ∧ (𝕆 + y)
L ≡ (𝕆 + x) ∧ (y − x)
L ≡ (𝕆 + y) ∧ (y − x)        4.8
L ≡ 𝕆 ∧ (y − x) + x ∧ y
A line can be expressed in terms of the 2m coordinates of any two points on it.

L ≡ (𝕆 + x1 e1 + x2 e2 + ⋯ + xm em) ∧ (𝕆 + y1 e1 + y2 e2 + ⋯ + ym em)        4.9

When multiplied out, the expression for the line takes a form explicitly displaying the Plücker coordinates of the line.
Alternatively, a line in an m-plane 𝕆 ∧ e1 ∧ e2 ∧ ⋯ ∧ em can be expressed in terms of its intersections with two of its coordinate (m−1)-planes, say 𝕆 ∧ e1 ∧ ⋯ ∧ êi ∧ ⋯ ∧ em and 𝕆 ∧ e1 ∧ ⋯ ∧ êj ∧ ⋯ ∧ em. The notation êi means that the ith element or term is missing.

L ≡ (𝕆 + x1 e1 + ⋯ + x̂i + ⋯ + xm em) ∧ (𝕆 + y1 e1 + ⋯ + ŷj + ⋯ + ym em)        4.11

This formulation indicates that a line in m-space has at most 2(m−1) independent parameters required to describe it.

It also implies that in the special case when the line lies in one of the coordinate (m−1)-spaces, it can be even more economically expressed as the product of two points, each lying in one of the coordinate (m−2)-spaces contained in the (m−1)-space. And so on.
Planes in a 3-plane

A plane Π in a 3-plane can be written in several forms. The most intuitive form perhaps is as a product of three non-collinear points P, Q, and R.

Π ≡ P ∧ Q ∧ R

P = 𝕆 + x
Q = 𝕆 + y
R = 𝕆 + z
Or, we can express it as the product of any two different points in it and a vector parallel to it (but not in the direction of the line joining the two points). For example:

Π ≡ (𝕆 + x) ∧ (𝕆 + y) ∧ (z − y)

Or, we can express it as the product of any point in it and any two independent vectors parallel to it. For example:

Π ≡ (𝕆 + x) ∧ (y − x) ∧ (z − y)
Or, we can express it as the product of any point in it and any line in it not through the point. For example:

Π ≡ (𝕆 + x) ∧ L

Or, we can express it as the product of any line in it and any vector parallel to it (but not parallel to the line). For example:

Π ≡ (y − x) ∧ L
Given a basis, we can always express the plane in terms of the coordinates of the points or vectors in the expressions above. However, the form which requires the least number of coordinates is that which expresses the plane as the exterior product of its three points of intersection with the coordinate axes.
If the plane is parallel to one of the coordinate axes, say 𝕆 ∧ e3, it may be expressed as:

Π ≡ (𝕆 + a e1) ∧ (𝕆 + b e2) ∧ e3

Whereas, if it is parallel to two of the coordinate axes, say 𝕆 ∧ e2 and 𝕆 ∧ e3, it may be expressed as:

Π ≡ (𝕆 + a e1) ∧ e2 ∧ e3
If we wish to express a plane as the exterior product of its intersection points with the coordinate axes, we first determine its points of intersection with the axes and then take the exterior product of the resulting points.

For example, to express a particular plane Π in terms of its intersections with the coordinate axes, we calculate the intersection points with the axes.
ToCommonFactor[Π ∨ (𝕆 ∧ e1)]
𝕔 (13 𝕆 + 329 e1)

ToCommonFactor[Π ∨ (𝕆 ∧ e2)]
𝕔 (38 𝕆 + 329 e2)

ToCommonFactor[Π ∨ (𝕆 ∧ e3)]
𝕔 (48 𝕆 + 329 e3)

We then take the product of these points (ignoring the weights) to form the plane. We can check, for example, that the exterior product of the plane with the first of these points is zero, confirming that the point lies on the plane.

GrassmannExpandAndSimplify[Π ∧ (𝕆 + (329/13) e1)]
0
Planes in a 4-plane

From the results above, we can expect that a plane in a 4-plane is most economically expressed as the product of three points, each point lying in one of the coordinate 2-planes. (This is analogous to expressing a line in a 3-plane as the product of two points, each point lying in one of the coordinate 2-planes.)

If a plane is expressed in any other form, we can express it in the form above by first determining its points of intersection with the coordinate planes and then taking the exterior product of the resulting points.
Planes in an m-plane

A plane in an m-plane is most economically expressed as the product of three points, each point lying in one of the coordinate (m−2)-planes.

This formulation indicates that a plane in an m-plane has at most 3(m−2) independent scalar parameters required to describe it.
We can define a geometric entity as an element of an exterior product space which has a specific
geometric interpretation. Examples we have already met include points, lines, planes, m-planes,
vectors, bivectors, and m-vectors.
These entities are characterized by the fact that they are simple. That is, they may be factorized into a product of 1-elements. The 1-element factors are, of course, not unique. However, this is not a shortcoming. Indeed, the power of such a representation for computations owes much to this non-uniqueness. For example, if we know two lines intersect in a point P, we can represent them immediately and expressly as products involving P (say, P ∧ Q and P ∧ R).

If an element is not simple, then it is not a candidate for geometric entities of this sort. (We will look at geometric entities represented by non-simple elements in Chapter 7: Exploring Screw Algebra.) Apart from 0-, 1-, (n−1)- and n-elements, an m-element of an n-space is not necessarily simple. Thus in the plane (a three-dimensional linear space), n is equal to 3, and all entities (scalars, points, lines and planes) are simple. In space, n is equal to 4, and hence 2-elements are the only ones that are potentially non-simple. This means that only a subset of the 2-elements of space (the simple 2-elements) can represent lines.
These comments extend to a space of any larger number of dimensions. In particular, (n-1)-
elements in a space of n dimensions are simple. Hence any (n-1)-element represents a hyperplane.
For other elements (apart from the trivial 0 and n-elements), only a subset of them can represent a
geometric entity. This subset may be defined by a set of constraints between the coefficients of the
element which ensures that it be simple.
If we are constructing a geometric entity as the exterior product of known independent points, or as
the regressive product of hyperplanes, then we are assured that the result will be simple. In the
examples of the previous section, we have seen several cases of this approach. In what follows, we
will explore the results that a consideration of simplicity produces.
Lines in space

ℙ3; A = ComposeMElement[2, a]
a1 𝕆 ∧ e1 + a2 𝕆 ∧ e2 + a3 𝕆 ∧ e3 + a4 e1 ∧ e2 + a5 e1 ∧ e3 + a6 e2 ∧ e3

A1 = ExteriorFactorize[A]
If[{a3 a4 − a2 a5 + a1 a6 == 0},
  a1 (𝕆 − (a4 e2)/a1 − (a5 e3)/a1) ∧ (e1 + (a2 e2)/a1 + (a3 e3)/a1)]

This says that the given factorization is possible if and only if the condition a3 a4 − a2 a5 + a1 a6 = 0 on the coefficients of A is met.
There is a straightforward way of testing the simplicity of 2-elements not shared by elements of higher grade: a 2-element is simple if and only if its exterior square is zero. (The exterior square of A is A ∧ A.) In our case, we can write A as A = 𝕆 ∧ b + B, where b is a vector and B is a bivector. [Since elements of odd grade anti-commute, the exterior square of an odd element is zero even if it is not simple. And an even element of grade 4 or higher may take the form of the exterior product of a 1-element with a non-simple 3-element, whence its exterior square is zero without its being simple.]
Thus we have the reduced requirement only that b ∧ B = 0. In the case of a point 3-space:

b = a1 e1 + a2 e2 + a3 e3;
B = a4 e1 ∧ e2 + a5 e1 ∧ e3 + a6 e2 ∧ e3;

GrassmannExpandAndSimplify[b ∧ B]
(a3 a4 − a2 a5 + a1 a6) e1 ∧ e2 ∧ e3
At this point we should note that these results have not involved any metric concepts. The simple requirement that b ∧ B be zero has led to the required constraint.
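The computation of b ∧ B can be sketched in a few lines of plain Python (an illustration, not the GrassmannAlgebra package): the single e1 ∧ e2 ∧ e3 component of b ∧ B is exactly a3 a4 − a2 a5 + a1 a6, and it vanishes whenever B is of the form x ∧ y with b = x.

```python
def wedge_vector_bivector(b, B):
    """e1^e2^e3 component of b ^ B, for b = a1 e1 + a2 e2 + a3 e3 and
    B = a4 e1^e2 + a5 e1^e3 + a6 e2^e3."""
    a1, a2, a3 = b
    a4, a5, a6 = B
    return a3 * a4 - a2 * a5 + a1 * a6

x, y = (1, 2, 3), (4, 5, 6)
# B = x ^ y is simple, so x ^ B must vanish
B = (x[0] * y[1] - x[1] * y[0], x[0] * y[2] - x[2] * y[0], x[1] * y[2] - x[2] * y[1])
assert wedge_vector_bivector(x, B) == 0
assert wedge_vector_bivector((0, 0, 1), (1, 0, 0)) == 1   # e3 ^ (e1 ^ e2) = e1^e2^e3
```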
On the other hand, if we require from the start that b ∧ B = 0, that is, that B contains the factor b, then we can write B = a ∧ b, and so write A immediately as the product of a point and a vector.

A = 𝕆 ∧ b + a ∧ b = (𝕆 + a) ∧ b
The vector a may be visualized as the position vector of a point on the line, and the vector b as a vector in the direction of the line. The coefficients of a ∧ b are the same as the coefficients of the cross product a × b. The cross product is the complement of the exterior product, and is orthogonal to it. (The cross product and orthogonality are metric concepts which will be introduced in the next chapter.)
The (Plücker) coordinates of a line in space are often given by the coefficients of the two vectors a and a × b. Composing a and b in basis form, and then calculating their exterior product, displays the coefficients of a × b. And as we would expect, the first three components, as a vector, are orthogonal to the second three.
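A quick numerical illustration (plain Python, hypothetical vectors): the e_i ∧ e_j coefficients of a ∧ b carry the same numbers as the cross product a × b (reordered and with a sign), and a × b is orthogonal to both a and b.

```python
a = (2, -1, 3)                    # hypothetical vectors
b = (4, 0, 5)

wedge = (a[0] * b[1] - a[1] * b[0],   # e1 ^ e2
         a[0] * b[2] - a[2] * b[0],   # e1 ^ e3
         a[1] * b[2] - a[2] * b[1])   # e2 ^ e3
cross = (a[1] * b[2] - a[2] * b[1],   # (a x b)_1
         a[2] * b[0] - a[0] * b[2],   # (a x b)_2
         a[0] * b[1] - a[1] * b[0])   # (a x b)_3

# the wedge components are a reordering (with one sign change) of the cross product
assert wedge == (cross[2], -cross[1], cross[0])
# orthogonality, a metric notion
assert sum(u * v for u, v in zip(a, cross)) == 0
assert sum(u * v for u, v in zip(b, cross)) == 0
```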
In sum: despite the common practice of developing the Plücker coordinates in terms of dot and cross products, we can see from the Grassmannian approach with which we began that this is neither necessary nor particularly illuminating. The coordinates of a line do not need to rely on metric notions. And if the geometry is projective, they perhaps should not.
Lines in 4-space

ℙ4; A = ComposeMElement[2, a]
a1 𝕆 ∧ e1 + a2 𝕆 ∧ e2 + a3 𝕆 ∧ e3 + a4 𝕆 ∧ e4 + a5 e1 ∧ e2 +
  a6 e1 ∧ e3 + a7 e1 ∧ e4 + a8 e2 ∧ e3 + a9 e2 ∧ e4 + a10 e3 ∧ e4

Factorizing gives

A1 = ExteriorFactorize[A]
If[{a3 a5 − a2 a6 + a1 a8 == 0,
    a4 a5 − a2 a7 + a1 a9 == 0, a4 a6 − a3 a7 + a1 a10 == 0},
  a1 (𝕆 − (a5 e2)/a1 − (a6 e3)/a1 − (a7 e4)/a1) ∧ (e1 + (a2 e2)/a1 + (a3 e3)/a1 + (a4 e4)/a1)]
On the other hand, adopting the method of the previous section, we can write

b = a1 e1 + a2 e2 + a3 e3 + a4 e4;
B = a5 e1 ∧ e2 + a6 e1 ∧ e3 + a7 e1 ∧ e4 + a8 e2 ∧ e3 + a9 e2 ∧ e4 + a10 e3 ∧ e4;

GrassmannExpandAndSimplify[b ∧ B]
(a3 a5 − a2 a6 + a1 a8) e1 ∧ e2 ∧ e3 + (a4 a5 − a2 a7 + a1 a9) e1 ∧ e2 ∧ e4 +
  (a4 a6 − a3 a7 + a1 a10) e1 ∧ e3 ∧ e4 + (a4 a8 − a3 a9 + a2 a10) e2 ∧ e3 ∧ e4
Again, we arrive at the same conditions, except that this approach yields one extra condition. However, it is easy to show that the number of independent conditions is still only three. Clearly, because this product will always be a trivector, in a space of n vector dimensions it will always have c = n(n−1)(n−2)/6 components, and hence spawn c conditions.
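The claim that only three of the four conditions are independent can be checked numerically (a plain Python sketch): the combination a1 c4 = a2 c3 − a3 c2 + a4 c1 holds identically for the four trivector coefficients above, so when a1 ≠ 0 the fourth condition follows from the first three.

```python
from random import randint

a = [0] + [randint(-9, 9) for _ in range(10)]   # a[1]..a[10], as in the coefficients above

c1 = a[3] * a[5] - a[2] * a[6] + a[1] * a[8]    # e1^e2^e3 component of b ^ B
c2 = a[4] * a[5] - a[2] * a[7] + a[1] * a[9]    # e1^e2^e4
c3 = a[4] * a[6] - a[3] * a[7] + a[1] * a[10]   # e1^e3^e4
c4 = a[4] * a[8] - a[3] * a[9] + a[2] * a[10]   # e2^e3^e4

n = 4
assert n * (n - 1) * (n - 2) // 6 == 4                 # C(4, 3) components in all
assert a[1] * c4 == a[2] * c3 - a[3] * c2 + a[4] * c1  # the fourth condition is dependent
```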
On the other hand, if we require from the start that b ∧ B = 0, that is, that B contains the factor b, then we can write B = a ∧ b, and so write A immediately as the product of a point and a vector.

A = 𝕆 ∧ b + a ∧ b = (𝕆 + a) ∧ b

The vector a may be visualized as the position vector of a point on the line, and the vector b as a vector in the direction of the line. The coefficients of a ∧ b cannot be expressed as the coefficients of the cross product as before, since the vector space is now 4-dimensional.
The (Plücker) coordinates of a line in 4-space must now be given by the coefficients of the vector a and the bivector a ∧ b. Composing a and b in basis form and extracting the coefficients of the resulting bound 2-element Cp displays these coordinates.

ExtractCoefficients[LBasis][Cp]
{a1, a2, a3, a4, −a2 b1 + a1 b2, −a3 b1 + a1 b3,
 −a4 b1 + a1 b4, −a3 b2 + a2 b3, −a4 b2 + a2 b4, −a4 b3 + a3 b4}
In sum: in a point 4-space, the Grassmannian approach generalizes, whereas the approach developed through the dot and cross products does not.
First declare the basis of the plane, and then define the lines.

ℙ2
{𝕆, e1, e2}

L1 = (𝕆 + a e1) ∧ (𝕆 + b e2);
L2 = (𝕆 + c e1) ∧ (𝕆 + d e2);
To find the point of intersection of these two lines, we first find their common factor. The common
factor of two elements can be found by applying ToCommonFactor to their regressive product.
Remember that common factors are only determinable up to congruence (that is, to within an
arbitrary scalar multiple). Hence the common factor we determine will, in general, be a weighted
point.
wP = ToCommonFactor[L1 ∨ L2]
𝕔 ((b c − a d) 𝕆 + (a b c − a c d) e1 + (−a b d + b c d) e2)

We can convert this weighted point to a form which separates the weight from the point by applying ToWeightedPointForm.

ToWeightedPointForm[wP]
(b c − a d) 𝕔 (𝕆 + (a c (b − d) e1)/(b c − a d) + (b (−a + c) d e2)/(b c − a d))

P = ToPointForm[wP]
𝕆 + (a c (b − d) e1)/(b c − a d) + (b (−a + c) d e2)/(b c − a d)
To verify that this point lies on both lines, we can take its exterior product with each of the lines and show the results to be zero.

In the special case in which the lines are parallel, that is b c − a d = 0, their intersection is no longer a point, but a vector defining their common direction.
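The intersection point found above can be checked against elementary coordinate geometry (a plain Python sketch with hypothetical values): L1 and L2 are the intercept-form lines x/a + y/b = 1 and x/c + y/d = 1, and the formula gives a point satisfying both equations.

```python
from fractions import Fraction as F

a, b, c, d = F(2), F(3), F(5), F(4)       # hypothetical intercepts
px = a * c * (b - d) / (b * c - a * d)    # e1 coordinate of the intersection point
py = b * (c - a) * d / (b * c - a * d)    # e2 coordinate

assert px / a + py / b == 1               # the point lies on L1
assert px / c + py / d == 1               # ... and on L2
```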
First declare the basis of the 3-plane, and then define the line and plane.

ℙ3
{𝕆, e1, e2, e3}

L = (𝕆 + a e1 + b e2) ∧ (𝕆 + c e2 + d e3);
Π = (𝕆 + e e1) ∧ (𝕆 + f e2) ∧ (𝕆 + g e3);
To find the point of intersection of the line with the plane, we first determine their common factor (a weighted point) by applying ToCommonFactor to their regressive product, and then drop the weight by applying ToPointForm.

P = ToPointForm[ToCommonFactor[L ∨ Π]]
𝕆 + (a e (d f + (c − f) g) e1)/(d e f − (b e − c e + a f) g) +
  (f (b e (d − g) + c (−a + e) g) e2)/(d e f − (b e − c e + a f) g) −
  (d (b e + (a − e) f) g e3)/(d e f − (b e − c e + a f) g)
To verify that this point lies on both the line and the plane, we can take its exterior product with each of them and show the results to be zero.

GrassmannSimplify[GrassmannExpandAndSimplify[{L ∧ P, P ∧ Π}]]
{0, 0}
In the special case in which the line is parallel to the plane, their intersection is no longer a point,
but a vector defining their common direction. When the line lies in the plane, the result from the
calculation will be zero.
First declare the basis of the 3-plane, and then define the planes (here in intercept form).

ℙ3
{𝕆, e1, e2, e3}

Π1 = (𝕆 + a e1) ∧ (𝕆 + b e2) ∧ (𝕆 + c e3);
Π2 = (𝕆 + e e1) ∧ (𝕆 + f e2) ∧ (𝕆 + g e3);

To find the line of intersection of the two planes, we first determine their common factor (a line) by applying ToCommonFactor to their regressive product, and then (if we wish) drop the (arbitrary) congruence factor 𝕔 by setting it equal to unity. Finally, applying GrassmannSimplify collects the terms in a convenient form with the origin 𝕆 factored out.

L = GrassmannSimplify[ToCommonFactor[Π1 ∨ Π2] /. 𝕔 → 1]
𝕆 ∧ (a e (c f − b g) e1 + b f (−c e + a g) e2 + c (b e − a f) g e3) +
  a b e f (−c + g) e1 ∧ e2 + a c e (b − f) g e1 ∧ e3 + b c (−a + e) f g e2 ∧ e3
This is of course a 2-element in a 4-space, and hence is not necessarily simple. However, we can verify in a number of ways that this result is indeed simple, and therefore does define a line. The simplest way is to apply the GrassmannAlgebra function SimpleQ.

SimpleQ[L]
True

Perhaps more convincing is to factorize L so as to display it as the product of a point and a vector.

Lp = ExteriorFactorize[L]
a e (c f − b g) (𝕆 + (b f (−c + g) e2)/(−c f + b g) + (c (b − f) g e3)/(−c f + b g)) ∧
  (e1 − ((b c e f − a b f g) e2)/(a c e f − a b e g) + ((b c e g − a c f g) e3)/(a c e f − a b e g))
We can extract the point and vector as specific expressions by applying the GrassmannAlgebra
function ExtractArguments.
To verify that both the point and the vector lie in both planes, we can take their exterior product
with each of the planes and show the results to be zero.
In the special case in which the planes are parallel, their intersection is no longer a line, but a
bivector defining their common 2-direction.
▪ The problem

Show that the osculating planes at any three points to the curve defined by

P == 𝒪 + n e1 + n² e2 + n³ e3

intersect at a point coplanar with these three points. Here, n is a scalar parametrizing the curve.

▪ The solution

The osculating plane Π to the curve at the point P is given by Π == P ∧ P′ ∧ P″, where P′ and P″ are the first and second derivatives of P with respect to n.

P = 𝒪 + n e1 + n² e2 + n³ e3;
P′ = e1 + 2 n e2 + 3 n² e3;
P″ = 2 e2 + 6 n e3;
Π = P ∧ P′ ∧ P″;

We can declare the space to be a 3-plane and subscripts of the parameter n to be scalar, and then use GrassmannSimplify to derive the expression for the osculating plane as a function of n. (We divide out a common multiple of 2 to make the resulting expressions a little simpler.)

𝒪 ∧ (e1∧e2 + 3 n e1∧e3 + 3 n² e2∧e3) + n³ e1∧e2∧e3
P1 = 𝒪 + n1 e1 + n1² e2 + n1³ e3;
P2 = 𝒪 + n2 e1 + n2² e2 + n2³ e3;
P3 = 𝒪 + n3 e1 + n3² e2 + n3³ e3;

The osculating planes Π1, Π2, and Π3 at these three points are obtained by substituting n1, n2, and n3 for n in Π. The (weighted) point of intersection of these three planes may be obtained by calculating their common factor. The congruence factor 𝔠 appears squared because there are two regressive product operations.
wP = ToCommonFactor[Π1 ∨ Π2 ∨ Π3]

The scalar coefficient attached to the origin factors out of the expression so that we can extract the point from the resulting weighted point to give the point of intersection Q as

Q = ToPointForm[wP]

𝒪 + n1 n2 n3 e3 + (1/3) (n1 + n2 + n3) e1 + (1/3) (n2 n3 + n1 (n2 + n3)) e2
Finally, to show that this point of intersection Q is coplanar with the points P1, P2, and P3, we compute their exterior product.

Simplify[𝒢[P1 ∧ P2 ∧ P3 ∧ Q]]
0
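The coplanarity claim is easy to probe numerically outside the GrassmannAlgebra package. The sketch below (Python with NumPy — an independent check, not the book's code) represents each point by its homogeneous coordinates (weight, e1, e2, e3); four points are coplanar exactly when the determinant of their four coordinate rows vanishes, which is the condition P1 ∧ P2 ∧ P3 ∧ Q == 0.

```python
import numpy as np

def curve_point(n):
    # homogeneous coordinates (weight, e1, e2, e3) of P(n) = O + n e1 + n^2 e2 + n^3 e3
    return np.array([1.0, n, n**2, n**3])

def intersection_point(n1, n2, n3):
    # the point Q found above: O + (n1+n2+n3)/3 e1 + (n1 n2 + n1 n3 + n2 n3)/3 e2 + n1 n2 n3 e3
    return np.array([1.0, (n1 + n2 + n3)/3,
                     (n1*n2 + n1*n3 + n2*n3)/3, n1*n2*n3])

n1, n2, n3 = 1.0, 2.0, 3.0
M = np.vstack([curve_point(n1), curve_point(n2), curve_point(n3),
               intersection_point(n1, n2, n3)])
# P1 ^ P2 ^ P3 ^ Q == 0 exactly when the 4x4 determinant of the rows vanishes
print(abs(np.linalg.det(M)) < 1e-9)   # True
```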
The shadow
In Chapter 3 we developed a formula which expressed a j-element as a sum of components, the element being decomposed with respect to a pair of elements α (of grade m) and β (of grade n-m) which together span the whole space.

X ≡ Σ_{i=1}^{2^j} (-1)^{𝔤(X_{c_i}) (j+1)} ((α ∧ X_{b_i}) ∨ (X_{c_i} ∧ β)) / (α ∨ β)
X ≡ (α ∨ (X ∧ β))/(α ∨ β) + ⋯ + ((α ∧ X) ∨ β)/(α ∨ β)

It can be seen that the last term can be rearranged as the shadow of X on β excluding α.

((α ∧ X) ∨ β)/(α ∨ β) ≡ (β ∨ (X ∧ α))/(β ∨ α)

If j = 1, the decomposition formula reduces to the sum of just two components, x_α and x_β, where x_α lies in α and x_β lies in β.

x ≡ x_α + x_β ≡ (α ∨ (x ∧ β))/(α ∨ β) + (β ∨ (x ∧ α))/(β ∨ α)    4.12

x ≡ Σ_{i=1}^{n} ((b1 ∧ b2 ∧ ⋯ ∧ x ∧ ⋯ ∧ bn)/(b1 ∧ b2 ∧ ⋯ ∧ bi ∧ ⋯ ∧ bn)) bi
We now explore these formulae with a number of geometric examples, beginning with the simplest
case of decomposition in a 2-space.
Decomposition in a 2-space
Suppose we have a vector 2-space defined by the bivector α∧β. We wish to decompose a vector x in the space to give one component in α and the other in β. Applying the decomposition formula above, and noting that x∧β and x∧α are bivectors while x∨β and x∨α are scalars, we can use formula 3.26 and axiom ∨8 to give:

x_α ≡ (α ∨ (x ∧ β))/(α ∨ β) ≡ (α ∨ ((x ∨ β) 1_n))/(α ∨ β) ≡ ((x ∨ β)/(α ∨ β)) (α ∨ 1_n)
    ≡ ((x ∨ β)/(α ∨ β)) α ≡ ((x ∧ β)/(α ∧ β)) α
x_β ≡ (β ∨ (x ∧ α))/(β ∨ α) ≡ (β ∨ ((x ∨ α) 1_n))/(β ∨ α) ≡ ((x ∨ α)/(β ∨ α)) (β ∨ 1_n)
    ≡ ((x ∨ α)/(β ∨ α)) β ≡ ((α ∧ x)/(α ∧ β)) β

(We are able to replace the coefficient (x ∨ β)/(α ∨ β) on the right hand side by (x ∧ β)/(α ∧ β) because we are in a 2-space. See the section on Exterior Division in Chapter 2.)
The coefficients of α and β are scalars showing that x_α is congruent to α and x_β is congruent to β. If each of the three vectors is expressed in basis form, we can determine these scalars more specifically. For example:

x = x1 e1 + x2 e2 ; α = a1 e1 + a2 e2 ; β = b1 e1 + b2 e2 ;

(x ∨ β)/(α ∨ β) ≡ ((x1 e1 + x2 e2) ∨ (b1 e1 + b2 e2))/((a1 e1 + a2 e2) ∨ (b1 e1 + b2 e2))
               ≡ ((x1 b2 - x2 b1) e1 ∨ e2)/((a1 b2 - a2 b1) e1 ∨ e2) ≡ (x1 b2 - x2 b1)/(a1 b2 - a2 b1)

Finally then we can express the original vector x as the required sum of two components, one in α and one in β.

x ≡ ((x1 b2 - x2 b1)/(a1 b2 - a2 b1)) α + ((x1 a2 - x2 a1)/(b1 a2 - b2 a1)) β
We can easily check that the right hand side does indeed reduce to x by composing x, α, and β, and then simplifying:

((x1 b2 - x2 b1)/(a1 b2 - a2 b1)) α + ((x1 a2 - x2 a1)/(b1 a2 - b2 a1)) β // Simplify

e1 x1 + e2 x2
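The same decomposition is quickly verified numerically. This sketch (Python with NumPy — an illustration, not the book's Mathematica code) computes the two scalar coefficients as ratios of 2×2 determinants, which are exactly the exterior product ratios above:

```python
import numpy as np

def ext2(u, v):
    # coefficient of e1 ^ e2 in u ^ v, i.e. the 2x2 determinant
    return u[0]*v[1] - u[1]*v[0]

def decompose(x, a, b):
    """x = (x^b)/(a^b) a + (a^x)/(a^b) b, valid when a^b != 0."""
    xa = ext2(x, b) / ext2(a, b) * a
    xb = ext2(a, x) / ext2(a, b) * b
    return xa, xb

x = np.array([3.0, 5.0])
a = np.array([1.0, 2.0])
b = np.array([2.0, -1.0])
xa, xb = decompose(x, a, b)
print(np.allclose(xa + xb, x))   # True
```

Note that (a∧x)/(a∧b) is the same scalar as the (x1 a2 - x2 a1)/(b1 a2 - b2 a1) written above.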
These same calculations apply mutatis mutandis to decomposing a point P into two component weighted points congruent to two given points Q and R in a line. For variety here, we take a more general and less coordinate-based approach. Making the substitutions

x → P, α → Q, β → R

then the decomposition becomes

P ≡ ((P ∧ R)/(Q ∧ R)) Q + ((Q ∧ P)/(Q ∧ R)) R

Because P is a point, we would expect the sum of its two component weights to be unity.

(P ∧ R)/(Q ∧ R) + (Q ∧ P)/(Q ∧ R) ≡ (P ∧ (R - Q))/(Q ∧ R)
We can see immediately that this is indeed so because adding them gives the same bound vector in
the numerator and the denominator. We can either note this equivalence from our previous
discussion of bound vectors, or do a substitution. For a substitution, let r and q be scalars, and v be
a vector in (parallel to) the line.
Q = P - q v
R = P + r v

(P ∧ R)/(Q ∧ R) ≡ (P ∧ (P + r v))/((P - q v) ∧ (P + r v)) ≡ (r (P ∧ v))/((r + q) (P ∧ v)) ≡ r/(r + q)

(Q ∧ P)/(Q ∧ R) ≡ ((P - q v) ∧ P)/((P - q v) ∧ (P + r v)) ≡ (q (P ∧ v))/((r + q) (P ∧ v)) ≡ q/(r + q)
P ≡ (r/(r + q)) Q + (q/(r + q)) R
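A quick numeric check of this weighted-point decomposition (a Python/NumPy sketch using homogeneous coordinates (weight, position), not the book's notation):

```python
import numpy as np

# A point carries weight 1 in its first slot; a vector carries weight 0.
P = np.array([1.0, 2.0, 3.0])      # point at position (2, 3)
v = np.array([0.0, 1.0, -1.0])     # a vector parallel to the line
q, r = 2.0, 5.0
Q = P - q*v                        # Q = P - q v
R = P + r*v                        # R = P + r v

# P == r/(r+q) Q + q/(r+q) R, and the two component weights sum to unity
print(np.allclose(r/(r+q)*Q + q/(r+q)*R, P))     # True
print(abs(r/(r+q) + q/(r+q) - 1.0) < 1e-12)      # True
```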
Again, these same calculations apply mutatis mutandis to the situation above. Although the result
turns out to be trivial in terms of our understanding that a vector may be expressed as the
difference of two points, we include the example to emphasize that the decomposition formula
applies easily to either bound or free entities. Let
P → x

then the decomposition becomes

x ≡ ((x ∧ R)/(Q ∧ R)) Q + ((Q ∧ x)/(Q ∧ R)) R
Because x is a vector, we would expect the sum of its two component weights to be zero.
(x ∧ R)/(Q ∧ R) + (Q ∧ x)/(Q ∧ R) ≡ (x ∧ (R - Q))/(Q ∧ R)
Again, we can see immediately that this is so because adding them gives a zero bivector in the
numerator. For a substitution, let a be a scalar; then R - Q is a vector in (parallel to) the line.
x = a (R - Q)

(x ∧ R)/(Q ∧ R) ≡ (a (R - Q) ∧ R)/(Q ∧ R) ≡ -a (Q ∧ R)/(Q ∧ R) ≡ -a

(Q ∧ x)/(Q ∧ R) ≡ (Q ∧ a (R - Q))/(Q ∧ R) ≡ a (Q ∧ R)/(Q ∧ R) ≡ a
Hence, in these terms, the final decomposition reduces to
x ≡ -a Q + a R
Decomposition in a 3-space
Suppose we have a vector 3-space represented by the trivector α∧β∧γ. We wish to decompose a vector x in this 3-space to give one component in α∧β and the other in γ. Applying the decomposition formula gives:

x_α∧β ≡ ((α ∧ β) ∨ (x ∧ γ))/((α ∧ β) ∨ γ)

x_γ ≡ (γ ∨ (x ∧ α ∧ β))/(γ ∨ (α ∧ β))

Because x∧α∧β is a 3-element it can be seen immediately that the component x_γ can be written as a scalar multiple of γ where the scalar is expressed either as a ratio of regressive products (scalars) or exterior products (n-elements).

x_γ ≡ ((x ∨ (α ∧ β))/(γ ∨ (α ∧ β))) γ ≡ ((α ∧ β ∧ x)/(α ∧ β ∧ γ)) γ

The component x_α∧β will be a linear combination of α and β. To show this we can expand the expression above for x_α∧β using the Common Factor Axiom.

x_α∧β ≡ ((α ∧ β) ∨ (x ∧ γ))/((α ∧ β) ∨ γ) ≡ ((α ∧ x ∧ γ) ∨ β)/((α ∧ β) ∨ γ) - ((β ∧ x ∧ γ) ∨ α)/((α ∧ β) ∨ γ)

Rearranging these two terms into a similar form as that derived for x_γ gives:

x_α∧β ≡ ((x ∧ β ∧ γ)/(α ∧ β ∧ γ)) α + ((α ∧ x ∧ γ)/(α ∧ β ∧ γ)) β
Of course we could have obtained this decomposition directly by using the results of the
decomposition formula 3.36 for decomposing a 1-element into a linear combination of the factors
of an n-element.
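The 3-space decomposition above is Cramer's rule in disguise: each scalar coefficient is a ratio of 3×3 determinants (exterior products of three vectors). A Python/NumPy sketch under that reading (an illustration, not the book's code):

```python
import numpy as np

def ext3(u, v, w):
    # coefficient of e1 ^ e2 ^ e3 in u ^ v ^ w, i.e. the 3x3 determinant
    return float(np.linalg.det(np.column_stack([u, v, w])))

a = np.array([1.0, 0.0, 1.0])
b = np.array([0.0, 2.0, 1.0])
g = np.array([1.0, 1.0, 0.0])
x = np.array([3.0, -1.0, 2.0])

abg  = ext3(a, b, g)
x_ab = ext3(x, b, g)/abg * a + ext3(a, x, g)/abg * b   # component in a ^ b
x_g  = ext3(a, b, x)/abg * g                           # component in g
print(np.allclose(x_ab + x_g, x))   # True
```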
Suppose we have a plane represented by the bound bivector Q∧R∧S, where Q, R and S are points. We wish to decompose a point P in this plane in two ways. The first, as weighted points at Q, R, and S. The second as a weighted point at S and a weighted point in the line represented by Q∧R. Applying the decomposition formula 3.36 gives:

P ≡ ((P ∧ R ∧ S)/(Q ∧ R ∧ S)) Q + ((Q ∧ P ∧ S)/(Q ∧ R ∧ S)) R + ((Q ∧ R ∧ P)/(Q ∧ R ∧ S)) S
Alternatively, we can group these terms two at a time to obtain a weighted point in the corresponding line. For example, adding the first two terms together gives us the point which we denote P_Q∧R.

P_Q∧R ≡ ((P ∧ R ∧ S)/(Q ∧ R ∧ S)) Q + ((Q ∧ P ∧ S)/(Q ∧ R ∧ S)) R

P ≡ P_Q∧R + ((Q ∧ R ∧ P)/(Q ∧ R ∧ S)) S
Decomposition in a 4-space
Suppose we have a vector 4-space represented by the 4-element α∧β∧γ∧δ and we wish to decompose a vector x in this 4-space into two vectors, one in α∧β and one in γ∧δ.

x ≡ x_α∧β + x_γ∧δ
Hence we can write

x_α∧β ≡ ((x ∧ β ∧ γ ∧ δ)/(α ∧ β ∧ γ ∧ δ)) α + ((α ∧ x ∧ γ ∧ δ)/(α ∧ β ∧ γ ∧ δ)) β

x_γ∧δ ≡ ((α ∧ β ∧ x ∧ δ)/(α ∧ β ∧ γ ∧ δ)) γ + ((α ∧ β ∧ γ ∧ x)/(α ∧ β ∧ γ ∧ δ)) δ
As we have previously remarked, we can rearrange these components in whatever combinations or
interpretations (as points or vectors) we require.
P ≡ Σ_{i=1}^{n} ((a1 ∧ a2 ∧ ⋯ ∧ P ∧ ⋯ ∧ an)/(a1 ∧ a2 ∧ ⋯ ∧ ai ∧ ⋯ ∧ an)) ai    4.13
As we have already seen in the examples above, the components are simply arranged in whatever
combinations are required to achieve the decomposition.
The apparent differences stem in the main from the way in which the intersection of two geometric
elements is treated in the Projective Geometry on the one hand, and Grassmann geometry on the
other. For example, in the plane, both geometries will say that two lines always intersect. In
Projective geometry the lines intersect in either a point or a point-at-infinity. In Grassmann
geometry the lines intersect in either a point or a vector. We see here that there is no essential
difference, but an inessential difference of definition or interpretation: a vector and a point-at-
infinity are essentially the same object.
The advantage of the Grassmann geometry interpretation however, is that one can add a metric to it
if one wishes to use metric concepts (for example the inner product).
But this correspondence is not as arbitrary as it might first appear. It turns out that the result of
computing the intersection of two lines approaching parallelism in the Grassmann algebra gives us
a result which can be viewed on the one hand as a point approaching infinity, and on the other hand
as a weighted point approaching a vector. That is, the point of intersection of two lines moves off
towards infinity as the lines approach parallelism, and in the limit becomes a finite vector.
To see this we need only apply the methods previously developed for the calculation of
intersections to two such lines. For simplicity, and without loss of generality, we can represent the
two lines as follows:
• L1 by a bound vector through a point P, in the direction of the vector x; and
• L2 by a bound vector through the point P + y, with the vector of the line differing in direction
from x by a scalar multiple e of y.
𝔅2; DeclareExtraScalarSymbols[ε];
DeclareExtraVectorSymbols[P];
L1 = P ∧ x;
L2 = (P + y) ∧ (x - ε y);
ToCommonFactor[L1 ∨ L2]

𝔠 (-x |P ∧ x ∧ y| - P ε |P ∧ x ∧ y|)
Since the congruence factor 𝔠 and |P ∧ x ∧ y| (the coefficient of the exterior product in any basis) are all scalars, we can say that the intersection is congruent to the weighted point wQ given by

wQ ≅ ε P + x

By factoring out the scalar factor ε, we can also say that the intersection is congruent to the point Q given by
Q ≅ P + x/ε

As ε approaches zero, the lines approach parallelism, the point of intersection Q has an ever-increasing vector x/ε, causing its position to approach infinity, and the weighted point wQ has an ever-decreasing influence from the point P, causing it to approach the (ultimately) common vector x.
It may be remarked that in this calculation, the point of intersection does not involve the vector y
which specifies the position of L2 . In general then, we may conclude that the resulting vector and
hence the resulting point-at-infinity is the same for all lines parallel to L1 .
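This limiting behaviour can be replayed in plain homogeneous line coordinates (a Python/NumPy sketch, not the GrassmannAlgebra calculation): intersect L1 = P ∧ x with L2 = (P + y) ∧ (x - ε y) and watch ε → 0.

```python
import numpy as np

def line_through(p, d):
    # line a*X + b*Y + c*W == 0 through the point p with direction d
    a, b = d[1], -d[0]
    return np.array([a, b, -(a*p[0] + b*p[1])])

P = np.array([0.0, 0.0])           # take P at the origin
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

for eps in (0.1, 0.001):
    # the cross product of two lines gives their homogeneous intersection (X, Y, W)
    pt = np.cross(line_through(P, x), line_through(P + y, x - eps*y))
    X, Y, W = pt / pt[0]           # scale so the coefficient of the direction x is 1
    # the weight W equals eps: the position X/W recedes to infinity as eps -> 0,
    # while the scaled weighted point eps*P + x tends to the common vector x
    print(eps, W)
```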
We can now see how the plane may be said to contain a line approaching a line-at-infinity if we define it by the product of these two weighted points.

This line has the property that in the limit, it contains all the points-at-infinity of the plane. We can show this by taking its exterior product with a third arbitrary point approaching a point-at-infinity, and obtain an element that becomes zero in the limit as ε approaches zero.

In fact, in the limit, the line-at-infinity is the bivector of the plane. We can see this by letting ε approach zero in the expression for L∞ or its expansion.
Projective 3-space
In projective 3-space, all the above concepts of the projective plane carry over naturally. We can
see the parallels between Projective Geometry and Grassmann Geometry as follows:
Projective Geometry: Non-parallel planes intersect in lines. Parallel planes intersect in a line-at-
infinity. All the lines-at-infinity belong to a single plane-at-infinity.
Grassmann Geometry: Non-parallel planes intersect in lines. Parallel planes intersect in a bivector.
All the bivectors belong to a single trivector.
Homogeneous coordinates
▪ The plane

P = a0 𝒪 + a1 e1 + a2 e2;

The elements of the triplet {a0, a1, a2} are called the homogeneous coordinates of the point P. In projective geometry, the triplet {a0, a1, a2} and the triplet {k a0, k a1, k a2}, where k is a scalar, are defined to represent the same point. In the Grassmann approach, this corresponds to weighted points with different weights also defining the same point.
𝔅2; Cobasis
{e1∧e2, -(𝒪∧e2), 𝒪∧e1}

A line in the plane can be represented up to congruence by a linear combination of these cobasis elements

L = b0 e1∧e2 + b1 (-𝒪∧e2) + b2 𝒪∧e1;
The elements of the triplet {b0, b1, b2} are called the homogeneous coordinates of the line L. The equation of the line may then be obtained from the condition that a point lies in the line, that is, their exterior product is zero.

P ∧ L == 0

On simplifying the product, we recover the equation of the line in homogeneous coordinates (a0 b0 + a1 b1 + a2 b2 == 0) by viewing the line as fixed and the point as variable.

DeclareExtraScalarSymbols[a_, b_];
𝒢[P ∧ L] == 0

(a0 b0 + a1 b1 + a2 b2) 𝒪∧e1∧e2 == 0
However, we can also view the point as fixed, and the line as variable, wherein the same form of
equation describes a variable line through a fixed point.
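The incidence condition a0 b0 + a1 b1 + a2 b2 == 0 can be exercised directly. In this Python sketch (an independent illustration; the line coordinates follow the cobasis convention above), the join of two points yields the triplet {b0, b1, b2}, and its dot product with a point's coordinates {a0, a1, a2} vanishes exactly when the point lies on the line:

```python
import numpy as np

def join(p, q):
    """Line {b0, b1, b2} through points p = (1, p1, p2) and q = (1, q1, q2),
    expressed on the cobasis {e1^e2, -(O^e2), O^e1}."""
    b0 = p[1]*q[2] - p[2]*q[1]
    b1 = p[2] - q[2]
    b2 = q[1] - p[1]
    return np.array([b0, b1, b2])

P = np.array([1.0, 2.0, 3.0])
Q = np.array([1.0, -1.0, 5.0])
L = join(P, Q)
R = np.array([1.0, 0.0, 0.0])     # the origin, not on this line
print(np.dot(P, L), np.dot(Q, L), np.dot(R, L))   # 0.0 0.0 13.0
```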
𝔅3; P = a0 𝒪 + a1 e1 + a2 e2 + a3 e3;

The elements of the quartet {a0, a1, a2, a3} are called the homogeneous coordinates of the point P.

Cobasis
{e1∧e2∧e3, -(𝒪∧e2∧e3), 𝒪∧e1∧e3, -(𝒪∧e1∧e2)}

Hence now it is planes in the 3-space that can be represented up to congruence by a linear combination of these cobasis elements

The elements of the quartet {b0, b1, b2, b3} are called the homogeneous coordinates of the plane Π. The equation of the plane may then be obtained from the condition that a point lies in the plane, that is, their exterior product is zero.

P ∧ Π == 0

On simplifying the product, we recover the equation of the plane in homogeneous coordinates by viewing the plane as fixed and the point as variable.

𝒢[P ∧ Π] == 0

(a0 b0 + a1 b1 + a2 b2 + a3 b3) 𝒪∧e1∧e2∧e3 == 0
And as in the case of the plane, we can also view the point as fixed, and the plane as variable,
wherein the same form of equation describes a variable plane through a fixed point.
The cases of points and lines in 2-space and points and planes in 3-space generalize in an obvious
manner to points and hyperplanes ((n-1)-planes) in n-space.
The reason that the relationships here are so straightforward is that 1-elements and (n-1)-elements are simple. Hence they can immediately represent geometric objects like points and hyperplanes as linear sums of the basis elements.
This is not, in general, the case for elements of intermediate grade. For example, in a 3-space, 2-
elements are not necessarily simple. Hence lines in space have no such simple form as a linear sum
of the basis 2-elements. We can easily compose such a form and check for its simplicity.
𝔅3; A = ComposeMElement[2, a]
a1 𝒪∧e1 + a2 𝒪∧e2 + a3 𝒪∧e3 + a4 e1∧e2 + a5 e1∧e3 + a6 e2∧e3

SimpleQ[A]
If[{a3 a4 - a2 a5 + a1 a6 == 0}, True]
Thus, such a form will be simple, and thus able to represent a line, only if the coefficients are constrained by the relation
a3 a4 - a2 a5 + a1 a6 == 0
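This constraint is the Plücker condition, and it is easy to probe numerically. In the Python sketch below (an illustration with an assumed coordinate ordering, not package code), a 2-element built as the exterior product of two points always satisfies it, while generic coefficients generally do not:

```python
import numpy as np

def wedge_coeffs(P, Q):
    """Coefficients {a1,...,a6} of P ^ Q on the basis
    {O^e1, O^e2, O^e3, e1^e2, e1^e3, e2^e3}, for P, Q given as (w, x, y, z)."""
    pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    return np.array([P[i]*Q[j] - P[j]*Q[i] for i, j in pairs])

def simple_defect(a):
    # zero exactly when the 2-element is simple, i.e. represents a line
    return a[2]*a[3] - a[1]*a[4] + a[0]*a[5]

P = np.array([1.0, 1.0, 2.0, 3.0])
Q = np.array([1.0, -2.0, 0.0, 1.0])
print(simple_defect(wedge_coeffs(P, Q)))   # 0.0  (simple: a line through P and Q)
print(simple_defect(np.ones(6)))           # 1.0  (not simple)
```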
Duality
The concept of duality in Grassmann algebra corresponds closely to the concept of duality in
projective geometry. For example, consider the expression for a line as the exterior product of two
points.
L ≡ P ∧ Q
This may be read variously as:
The points P and Q lie on the line L.
The line L passes through the points P and Q.
The points P and Q join to form the line L.
The line L is the join of the points P and Q.
In a plane, the dual expression to "a line is the exterior product of two points", is "a point is the
regressive product of two lines".
P ≡ L ∨ M
This may be read variously as:
The lines L and M pass through the point P.
The point P lies on the lines L and M.
The lines L and M intersect to form the point P.
The point P is the intersection of the lines L and M.
The collinearity of three points is defined by

P ∧ Q ∧ R ≡ 0

The concurrency of three lines is defined by

L ∨ M ∨ N ≡ 0
Duality extends naturally to a space of any dimension. Remember that, just as in projective
geometry, these entities and expressions are only defined up to congruence.
Desargues' theorem
As an example of how Grassmann algebra can be used in Projective Geometry, we prove Desargues' theorem. It should be noted that the methods used are simply applications of the exterior and regressive products and, as are all the applications in this chapter, entirely non-metric. This is in contradistinction to "proofs" of the theorem using the dot and cross products of 3-dimensional vector algebra, which are metric concepts.
If two triangles have corresponding vertices joined by concurrent lines, then the
intersections of corresponding sides are collinear. That is, if PU, QV, and RW all pass
through one point O, then the intersections X = PQ.UV, Y = PR.UW and Z = QR.VW all
lie on one line.
Desargues' theorem.
𝔅2;

For simplicity, and without loss of generality, we may take the fixed point O as the origin 𝒪. We can then define the points P, Q, and R simply in terms of the origin 𝒪 and three vectors p, q, and r.
And since U, V, and W are on the same lines (respectively) through O as are P, Q, and R, we can define U, V, and W using scalar multiples (a, b, and c, say) of the position vectors of P, Q, and R. To confirm that these triplets of points are collinear, we can compute their exterior products.
X = ToCommonFactor[(P ∧ Q) ∨ (U ∧ V)]

𝔠 (a p |𝒪 ∧ p ∧ q| - a b p |𝒪 ∧ p ∧ q| - b q |𝒪 ∧ p ∧ q| + a b q |𝒪 ∧ p ∧ q| + 𝒪 (a |𝒪 ∧ p ∧ q| - b |𝒪 ∧ p ∧ q|))

Here X is a weighted point. We can extract the associated point by using ToPointForm. (Remember that |𝒪 ∧ p ∧ q| represents the scalar coefficient of 𝒪 ∧ p ∧ q in any basis that might be chosen, and 𝔠 is the scalar congruence factor.)

X = ToPointForm[X]

𝒪 + ((a - a b)/(a - b)) p + (((-1 + a) b)/(a - b)) q
We can now check whether these points are collinear by taking their exterior product. The first level of simplification gives

F = 𝒢[X ∧ Y ∧ Z]

𝒪 ∧ ((((-1 + a) a b (-1 + c))/((a - b) (a - c)) - (a (-1 + b) b (-1 + c))/((a - b) (b - c)) + (a b (-1 + c)²)/((a - c) (b - c))) p∧q +
(((-1 + a) (a - a b) c)/((a - b) (a - c)) + (a (-1 + b)² c)/((a - b) (b - c)) + ((-1 + b) c (a - a c))/((a - c) (b - c))) p∧r +
(((-1 + a)² b c)/((a - b) (a - c)) + ((1 - a) (-1 + b) b c)/((a - b) (b - c)) + ((-1 + a) b (-1 + c) c)/((a - c) (b - c))) q∧r)
On simplifying the scalar coefficients we get zero as expected, thus proving the theorem.

𝒢[Simplify[F]]
0
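The whole argument can also be replayed numerically. The Python/NumPy sketch below (an independent check, not the book's calculation) uses the standard homogeneous-coordinate device: for points written as (w, x, y), the cross product of two points is the line joining them, the cross product of two lines is their point of intersection, and three points are collinear when the determinant of their coordinate rows vanishes.

```python
import numpy as np

def join(p, q): return np.cross(p, q)     # line through two points
def meet(l, m): return np.cross(l, m)     # intersection point of two lines

def collinear(p, q, r):
    return abs(np.linalg.det(np.vstack([p, q, r]))) < 1e-9

O = np.array([1.0, 0.0, 0.0])             # the centre of perspectivity
P = np.array([1.0, 1.0, 0.0])
Q = np.array([1.0, 0.0, 1.0])
R = np.array([1.0, 1.0, 1.0])
# U, V, W on the lines OP, OQ, OR: scalar multiples of the position vectors
U = O + 3.0*(P - O); V = O + 4.0*(Q - O); W = O + 5.0*(R - O)

X = meet(join(P, Q), join(U, V))
Y = meet(join(P, R), join(U, W))
Z = meet(join(Q, R), join(V, W))
print(collinear(X, Y, Z))   # True
```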
Pappus' theorem

As a second example, we prove Pappus' theorem. The methods are essentially the same as used for Desargues' theorem in the previous section.

Given two sets of collinear points P, Q, R, and U, V, W, then the intersection points X, Y, Z of line pairs PV and QU, PW and RU, QW and RV, are collinear.
Pappus' theorem.
𝔅2;
As our example, we take the more general case in which the lines through the two sets of collinear
points are not parallel. For simplicity, and without loss of generality, we take their intersection
point as the origin. We can then define the points P, Q, R, and U, V, W by means of two vectors p
and u; and four scalars a, b, c, d.
P = 𝒪 + p;
Q = 𝒪 + a p;
R = 𝒪 + b p;
U = 𝒪 + u;
V = 𝒪 + c u;
W = 𝒪 + d u;
By definition these triplets of points are collinear.
We can now check whether the intersection points X, Y, and Z are collinear by taking their exterior product; a value of zero proves the theorem.

F = 𝒢[X ∧ Y ∧ Z] // Simplify
0
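Pappus' configuration can be checked the same way as Desargues' (a Python/NumPy sketch with joins and meets as cross products of homogeneous (w, x, y) coordinates; the specific scalars are arbitrary choices):

```python
import numpy as np

def join(p, q): return np.cross(p, q)     # line through two points
def meet(l, m): return np.cross(l, m)     # intersection point of two lines

O = np.array([1.0, 0.0, 0.0])
p = np.array([0.0, 1.0, 0.2])             # direction of the first line
u = np.array([0.0, 0.3, 1.0])             # direction of the second line
P, Q, R = O + p, O + 2*p, O + 3*p
U, V, W = O + u, O + 2.5*u, O + 4*u

X = meet(join(P, V), join(Q, U))
Y = meet(join(P, W), join(R, U))
Z = meet(join(Q, W), join(R, V))
print(abs(np.linalg.det(np.vstack([X, Y, Z]))) < 1e-6)   # True: X, Y, Z collinear
```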
Projective n-space
A projective n-space is, essentially, a bound n-space; that is, a linear space of n+1 dimensions, in which one of the basis elements is interpreted as a point, say the origin, and the other n basis elements are interpreted as vectors. In projective n-space (n ≥ 2), the concepts of the projective plane carry over naturally.

Here, a 0-plane is a point, a 1-plane is a line, a 2-plane is a plane, and so on. Two (n-1)-planes may be said to be parallel if their (n-1)-directions are the same (that is, their (n-1)-vectors are congruent).

We can see how this works more concretely by thinking in terms of the actual bound elements representing these geometric objects. The two (n-1)-planes may be written P ∧ α and Q ∧ β, where P and Q are points and α and β are (n-1)-vectors.
The Common Factor Theorem shows that P ∧ α and Q ∧ β will have a common (n-1)-element, C say. If α is congruent to β, then C will be congruent to them both, and will be an (n-1)-vector. If α is not congruent to β, then C will be of the form C = R ∧ γ, where γ is an (n-2)-vector, thus representing an (n-2)-plane.
In sum, we can see that Projective Geometry differs in no essential way from the geometry which is a natural consequence of Grassmann's algebra. The superficial differences, as is the case with many mathematical systems covering the same applications, reside in definition or interpretation.
Regions of space
Some geometric applications require an assessment of the region in which a point is located. Once
we have a test which determines if a point is inside or outside a region, we can then define the
region as the set of points which satisfy the test.
In this section we begin with the very simplest of regions, and build up to more complex ones of
higher dimension. A closed region of dimension r is bounded by regions of dimension r-1. A
closed region is often called a polytope, a generalization of the notion of polygon and polyhedron.
Regions of a plane
Consider a line L in a plane. The line will divide the plane into two regions (half-planes). A half-
plane region is an open region if it does not contain any points of the line.
Any point P will be located in either one of the half-planes or in the line itself.
We can form the exterior product PÔL. This exterior product will vary in weight as P varies over
the regions, but will only change sign when P crosses the line. The product (and hence the weight)
is of course zero when P is on the line itself.
Suppose now we have a reference point Q, not in the line, and we want to find if P and Q are on the
same side of the line. All we need to do is to take the ratio of PÔL to QÔL. If the ratio is positive, P
and Q are on the same side. If the ratio is negative, they are on opposite sides. If the ratio is zero, P
is in the line.
The figure shows the line L dividing the plane: Q and P1 lie on the side where (P1 ∧ L)/(Q ∧ L) > 0, and P2 on the side where (P2 ∧ L)/(Q ∧ L) < 0.
As can be seen from the figure above, the reason this occurs is because the ordering of the points in
the product PÔL is clockwise on one side of the line, and anti-clockwise on the other. (Remember
that the line L itself can be expressed as the product of any two points on it.) Because the line L is
in both the numerator and denominator of the ratio, the ratio is invariant with respect to how the
line is expressed (for example, which two points are chosen to express it).
If we know in which region the point Q is located, we can therefore determine the region in which
the point P is located. In the sections below we will see how this simple result can be extended in a
boolean manner to regions of higher dimension, and regions with more complex boundaries.
▪ Example

As shown in the graphic below, let L be a line through the origin 𝒪 in the direction of the basis vector e2, the reference point Q be at the point 𝒪 + e1, and points P1 and P2 be at the points 𝒪 + a e1 + e2 and 𝒪 - b e1 + e2 respectively. The scalar coefficients a and b are strictly positive.

The figure shows P1 and Q on the side of L where (P1 ∧ L)/(Q ∧ L) = a > 0, and P2 on the side where (P2 ∧ L)/(Q ∧ L) = -b < 0.
The products of each of the three points with the line are:
(P1 ∧ L)/(Q ∧ L) == a > 0        (P2 ∧ L)/(Q ∧ L) == -b < 0
Thus we conclude that P1 is on the same side of the line L as Q, while P2 is on the opposite side.
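The sign test is a single determinant ratio. A Python sketch of this example (helper names are illustrative; the "triple" below is the coefficient of 𝒪∧e1∧e2 in X ∧ A ∧ B, where the line is L = A ∧ B):

```python
def triple(X, A, B):
    # coefficient of O ^ e1 ^ e2 in X ^ A ^ B: a 3x3 determinant, expanded exactly
    return (X[0]*(A[1]*B[2] - A[2]*B[1])
          - X[1]*(A[0]*B[2] - A[2]*B[0])
          + X[2]*(A[0]*B[1] - A[1]*B[0]))

def side_ratio(P, Q, A, B):
    """(P^L)/(Q^L) for the line L = A^B: positive when P and Q are on the
    same side of L, negative when on opposite sides, zero when P is on L."""
    return triple(P, A, B) / triple(Q, A, B)

A  = (1.0, 0.0, 0.0)      # the origin O
B  = (1.0, 0.0, 1.0)      # O + e2, so L is the line through O in direction e2
Qr = (1.0, 1.0, 0.0)      # reference point O + e1
P1 = (1.0, 2.5, 1.0)      # O + a e1 + e2 with a = 2.5
P2 = (1.0, -1.5, 1.0)     # O - b e1 + e2 with b = 1.5
print(side_ratio(P1, Qr, A, B), side_ratio(P2, Qr, A, B))   # 2.5 -1.5
```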
Regions of a line
Before we explore more complex regions and higher dimensions, let us take a step back to consider
a line as the complete space (a 1-space), and a point R in the line as dividing it into two regions.
The point R now takes the role of the line L in the previous example. Suppose the 1-space has basis {𝒪, e1}, and for simplicity we take R to be the origin 𝒪. Following the pattern of the example above, we let the reference point Q be the point 𝒪 + e1, and points P1 and P2 be the points 𝒪 + a e1 and 𝒪 - b e1 respectively. Here, as above, the scalar coefficients a and b are strictly positive.
The figure shows the points in the order P2, R, Q, P1 along the line, with (P1 ∧ R)/(Q ∧ R) > 0 and (P2 ∧ R)/(Q ∧ R) < 0.
The products of each of these three points with the dividing point R are:
(P1 ∧ R)/(Q ∧ R) == a > 0        (P2 ∧ R)/(Q ∧ R) == -b < 0
Thus we conclude that P1 is on the same side of the dividing point R as Q, while P2 is on the
opposite side.
The figure shows two lines L1 and L2 dividing the plane into four quadrants, with the reference point Q in one of them. For example, P2 satisfies (P2 ∧ L1)/(Q ∧ L1) < 0 and (P2 ∧ L2)/(Q ∧ L2) > 0, while P4 satisfies (P4 ∧ L1)/(Q ∧ L1) > 0 and (P4 ∧ L2)/(Q ∧ L2) < 0.
Although for easier comprehension, the graphic shows the two lines as if they are at right angles,
and the points as if they are equidistant from the intersection, remember that all the explorations in
this chapter are non-metric, and so angles and distances at this stage are undefined.
Region    L1                         L2
P1        (P1∧L1)/(Q∧L1) > 0         (P1∧L2)/(Q∧L2) > 0
P2        (P2∧L1)/(Q∧L1) < 0         (P2∧L2)/(Q∧L2) > 0
P3        (P3∧L1)/(Q∧L1) < 0         (P3∧L2)/(Q∧L2) < 0
P4        (P4∧L1)/(Q∧L1) > 0         (P4∧L2)/(Q∧L2) < 0
▪ Example

To see how this works, suppose we know that some point Q is in some quadrant, and we would like to know which quadrant another given point P is in. We simply calculate the two ratios and compare their results to the table. We do this for a numerical example below.

L1 = (𝒪 + e2) ∧ (𝒪 + 2 e1 - e2);
L2 = (𝒪 - e2) ∧ (𝒪 + 2 e1 + e2);
Q = 𝒪 - 4 e1 + 3 e2;

To see if a point P is in the same quadrant as the reference point Q, we simply check that the signs of both ratios (P ∧ L1)/(Q ∧ L1) and (P ∧ L2)/(Q ∧ L2) are positive. Below we write a function CheckP to take a random point P and calculate the value of the combined predicate for it.

CheckP := Module[{P},
  P = 𝒪 + RandomReal[{-5, 5}] e1 + RandomReal[{-5, 5}] e2;
  {P, 𝒢[P ∧ L1]/𝒢[Q ∧ L1] > 0 && 𝒢[P ∧ L2]/𝒢[Q ∧ L2] > 0}]
Below we check 10 random points and find that three of them return True for CheckP and are
thus in the same quadrant as Q.
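A Python analogue of CheckP (a sketch, not the book's code: RandomReal becomes a seeded NumPy generator, and each line is held as its two defining points):

```python
import numpy as np

rng = np.random.default_rng(0)

def triple(X, A, B):
    # coefficient of O ^ e1 ^ e2 in X ^ A ^ B for homogeneous points (w, x, y)
    return (X[0]*(A[1]*B[2] - A[2]*B[1])
          - X[1]*(A[0]*B[2] - A[2]*B[0])
          + X[2]*(A[0]*B[1] - A[1]*B[0]))

L1 = ((1.0, 0.0, 1.0), (1.0, 2.0, -1.0))    # (O + e2) ^ (O + 2 e1 - e2)
L2 = ((1.0, 0.0, -1.0), (1.0, 2.0, 1.0))    # (O - e2) ^ (O + 2 e1 + e2)
Qr = (1.0, -4.0, 3.0)                       # reference point O - 4 e1 + 3 e2

def check_p():
    P = (1.0, rng.uniform(-5, 5), rng.uniform(-5, 5))
    same = all(triple(P, *L) / triple(Qr, *L) > 0 for L in (L1, L2))
    return P, same

hits = sum(same for _, same in (check_p() for _ in range(10)))
print(hits, "of 10 random points lie in the same quadrant as Q")
```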
The figure shows three lines L1, L2, and L3 dividing the plane into seven regions, represented by the points P1 to P7.

For a given region, represented by the point Pi, we can form the ratio as we did above for each of the lines Lj and tabulate the regions as follows:
Region    L1                         L2                         L3
P1        (P1∧L1)/(Q∧L1) > 0         (P1∧L2)/(Q∧L2) > 0         (P1∧L3)/(Q∧L3) > 0
▪ Example

L1 = (𝒪 + e2) ∧ (𝒪 + 2 e1 - e2);
L2 = (𝒪 - e2) ∧ (𝒪 + 2 e1 + e2);
L3 = (𝒪 - e1 - 2 e2) ∧ (𝒪 + 3 e1 - 2 e2);
Q = 𝒪 - 3 e1 + e2;

To see if a point P is in the centre triangle (the region in which P4 lies), we check that the signs of the ratios are +, -, +, according to the table above, and hence write the predicate CheckP4 for this triangular region as

CheckP4 := Module[{P},
  P = 𝒪 + RandomReal[{-3, 3}] e1 + RandomReal[{-3, 3}] e2;
  {P, 𝒢[P ∧ L1]/𝒢[Q ∧ L1] > 0 && 𝒢[P ∧ L2]/𝒢[Q ∧ L2] < 0 && 𝒢[P ∧ L3]/𝒢[Q ∧ L3] > 0}]
Below we check 10 random points and find that only one of them returns True for CheckP4 and is thus in the centre triangle.
In the section which follows this one, we will extend this example to look at the construction of a 5-
star. To make use of common geometry in the two examples, we will define the sides of the
pentagon in this example by the vertices of the 5-star (as shown below). We locate the reference
point Q in the middle of the pentagon.
Each half-plane in the diagram, on the side of its line which include the reference point Q, is
coloured an opaque grey. Regions which overlap therefore show themselves in a darker grey.
There are four shades of grey shown. The pentagonal region is the overlap of all five half-planes,
and hence shows in the darkest grey. The lightest shade is the overlap of two half-planes.
Q = 𝒪;
P1 = 𝒪 + 2 e2;
P2 = 𝒪 + 2 e1 + e2;
P3 = 𝒪 + 2 e1 - 2 e2;
P4 = 𝒪 - 2 e1 - 2 e2;
P5 = 𝒪 - 2 e1 + e2;
L1 = P1 ∧ P3;
L2 = P2 ∧ P4;
L3 = P3 ∧ P5;
L4 = P4 ∧ P1;
L5 = P5 ∧ P2;
The pentagon internal to the 5-star is the region (including the boundary) defined by the set of
points P such that the predicate FiveStarPentagon below is True.
FiveStarPentagon[P_] := And[
  𝒢[P ∧ L1]/𝒢[Q ∧ L1] ≥ 0, 𝒢[P ∧ L2]/𝒢[Q ∧ L2] ≥ 0, 𝒢[P ∧ L3]/𝒢[Q ∧ L3] ≥ 0,
  𝒢[P ∧ L4]/𝒢[Q ∧ L4] ≥ 0, 𝒢[P ∧ L5]/𝒢[Q ∧ L5] ≥ 0];
FiveStarPentagon[Q]
True
The inside of the 5-star pentagon (excluding the boundary) is defined by replacing the sign ≥ by the sign > in the formula above.
FiveStarPentagonInside[P_] := And[
  𝒢[P ∧ L1]/𝒢[Q ∧ L1] > 0, 𝒢[P ∧ L2]/𝒢[Q ∧ L2] > 0, 𝒢[P ∧ L3]/𝒢[Q ∧ L3] > 0,
  𝒢[P ∧ L4]/𝒢[Q ∧ L4] > 0, 𝒢[P ∧ L5]/𝒢[Q ∧ L5] > 0];
FiveStarPentagonBoundary[P_] :=
  FiveStarPentagon[P] && Not[FiveStarPentagonInside[P]]
To check, we choose three points one inside, one on, and one outside the pentagon boundary. For
the point on the boundary we choose the point M, say, mid-way between P2 and P5 . For the other
two we perturb the position of M by a small vertical displacement e. As expected, only the point M
on the boundary returns True.
M = (P2 + P5)/2; e = 0.000001 e2;
FiveStarPentagonBoundary /@ {M - e, M, M + e}
{False, True, False}
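These predicates translate directly into an executable sketch (Python, exact cofactor determinants; an illustration, not the package code). The ≥ test and the strict > test combine into the boundary test exactly as above:

```python
def triple(X, A, B):
    # coefficient of O ^ e1 ^ e2 in X ^ A ^ B for homogeneous points (w, x, y)
    return (X[0]*(A[1]*B[2] - A[2]*B[1])
          - X[1]*(A[0]*B[2] - A[2]*B[0])
          + X[2]*(A[0]*B[1] - A[1]*B[0]))

def pt(x, y): return (1.0, x, y)

Q = pt(0, 0)
P1, P2, P3, P4, P5 = pt(0, 2), pt(2, 1), pt(2, -2), pt(-2, -2), pt(-2, 1)
lines = [(P1, P3), (P2, P4), (P3, P5), (P4, P1), (P5, P2)]

def pentagon(P, strict=False):
    # inside (or, when not strict, on) the pentagon: every ratio has the right sign
    ok = (lambda t: t > 0) if strict else (lambda t: t >= 0)
    return all(ok(triple(P, *L) / triple(Q, *L)) for L in lines)

def boundary(P):
    return pentagon(P) and not pentagon(P, strict=True)

# midpoint of P2 and P5, perturbed vertically as in the text
print([boundary(pt(0.0, 1.0 + d)) for d in (-1e-6, 0.0, 1e-6)])   # [False, True, False]
```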
Note that the pentagon boundary cannot be defined by a Boolean expression of predicates in which
the ratios are zero, because the lines from which the boundary is formed extend beyond the
boundary.
The plane outside the 5-star pentagon is defined simply by the negation of the
FiveStarPentagon predicate. Since FiveStarPentagon includes the boundary,
FiveStarPentagonOutside excludes it.
FiveStarPentagonOutside[P_] := Not[FiveStarPentagon[P]]
For example, the vertices of the 5-star are outside the pentagon.
Although in this example it has been unnecessary to know the vertices of the pentagon, we can of
course obtain these points by applying ToCommonFactor to each of the pairs of intersecting lines.
The vertices of the 5-star are the same as the points we chose in the pentagon example in the
previous section, and the edges of the 5-star are segments of the lines we chose.
There are many ways of creating a Boolean expression to define the 5-star. The most conceptually
straightforward way is to use either a disjunction or conjunction of sub-regions - each of which is
itself defined by a Boolean expression. A disjunction (Boolean Or) would be used when the region
being defined is composed of one or other of the sub-regions. A conjunction (Boolean And) would
be used when the region being defined is the overlap of the sub-regions.
We avoid discussing solutions requiring knowledge of the vertices of the inner pentagon, as these
may not be known (although of course, as shown in the previous section, they can be easily
calculated using the regressive product). However, in the diagram below we name them as points
Ai (opposite to Pi ) in order to explain our construction.
◆ The triangle
We view the 5-star as two polygons: the triangle P2 P5 A1 , and the quadrilateral P1 P3 A1 P4 . The
triangular region can be defined as the intersection of the three half-planes defined by the lines L2 ,
L3 , L5 , and which include the point Q. This region is shown in the darkest grey in the figure below.
◆ The quadrilateral
The region defined by the quadrilateral can be viewed as the intersection of the (infinite) region
below the lines L1 and L4 , and the (infinite) region above the lines L2 and L3 . The region below
the lines L1 and L4 can be defined by the conjunction of the regions on the positive side (relative to
Q) of the lines L1 and L4 .
BelowL1L4[P_] := And[𝒢[P ⋀ L1]/𝒢[Q ⋀ L1] ≥ 0, 𝒢[P ⋀ L4]/𝒢[Q ⋀ L4] ≥ 0]
The region above the lines L2 and L3 can be defined by the disjunction of the regions on the
positive side (relative to Q) of the lines L2 and L3 . This region is shown in the figure below with
various opacities of red.
AboveL2L3[P_] := Or[𝒢[P ⋀ L2]/𝒢[Q ⋀ L2] ≥ 0, 𝒢[P ⋀ L3]/𝒢[Q ⋀ L3] ≥ 0]
◆ The 5-star
And the 5-star is the disjunction of the triangle and the quadrilateral.
To test this formula, we can choose test points on, inside and outside the 5-star. The vertices are on
the 5-star, so should all yield True.
And so also are the vertices of the interior pentagon explored in the previous section.
We can add small perturbations to these vertices to create points just outside the 5-star. These
should all yield False.
d = 0.000001 e1; e = 0.000001 e2;
{FiveStar /@ {P1 + e, P2 + e, P3 + e, P4 + e, P5 + e},
 FiveStar /@ {A1 - e, A2 - d, A3 + e, A4 + e, A5 + d}}
{{False, False, False, False, False},
 {False, False, False, False, False}}
Perturbations to create points just inside the 5-star should all yield True.
To define a general explicit formula for a 5-star region, all we need to do is to collect the regions
into one expression.
This formula is valid for any set of five lines intersecting to form the 5-star, and any reference
point Q inside the 5-star region. We test the formula with the points just inside the vertices of the
pentagon as above.
FiveStar[P, Q, L ⋀ S]
Here, L is the list of lines forming the original 5-star. Since the exterior product operation Wedge
is Listable, taking the exterior product of this list of lines with the point S gives the
corresponding list of planes through S.
We then truncate the (infinite) pyramidal cone thus formed by a sixth plane, equivalent to the plane of the original planar 5-star, by adding an extra predicate term. We define the plane ℙ as the exterior product of any three of the original 5-star vertices.
ℙ = P1 ⋀ P2 ⋀ P3
We must also redefine the location of the reference point Q. Since the original point Q is now on
the boundary of the new three dimensional region, we will need to relocate it to an interior point.
𝒢[P ⋀ ℙ]/𝒢[Q ⋀ ℙ] ≥ 0
We can now define the solid region FiveStar3D as the conjunction of these two regions.
FiveStar3D[P_] := And[FiveStar[P, Q, L ⋀ S], 𝒢[P ⋀ ℙ]/𝒢[Q ⋀ ℙ] ≥ 0]
Here is our original data for the planar 5-star:
P1 = ℯ + 2 e2;
P2 = ℯ + 2 e1 + e2;
P3 = ℯ + 2 e1 - 2 e2;
P4 = ℯ - 2 e1 - 2 e2;
P5 = ℯ - 2 e1 + e2;
L1 = P1 ⋀ P3;
L2 = P2 ⋀ P4;
L3 = P3 ⋀ P5;
L4 = P4 ⋀ P1;
L5 = P5 ⋀ P2;
L = {L1, L2, L3, L4, L5};
We now change to a 3-space (declaring a 3-space basis as in Chapter 2), and define the two new points and the new plane.
Q = ℯ + e3;
S = ℯ + 2 e3;
ℙ = P1 ⋀ P2 ⋀ P3;
To verify the formula, we can first check that the 5-star pyramid contains all its defining points.
We can also check that points clustered sufficiently close to the origin are inside the region.
Table[FiveStar3D[ℯ + RandomReal[0.5] e1 +
    RandomReal[0.5] e2 + RandomReal[0.5] e3], {10}]
{True, True, True, True, True, True, True, True, True, True}
We can add small perturbations to these vertices to create points just outside the 5-star. These
should all yield False.
Summary
• In 1-space, a hyperplane is a point.
In 2-space, a hyperplane is a line.
In 3-space, a hyperplane is a plane.
In n-space, a hyperplane is an (n-1)-plane.
• If two points P and Q lie on the same side of a hyperplane H, then the ratio R of the exterior products of the two points with the hyperplane is positive; if they lie on opposite sides, the ratio is negative; and if P lies on the hyperplane, the ratio is zero:

R = (P ⋀ H)/(Q ⋀ H) > 0

R = (P ⋀ H)/(Q ⋀ H) < 0

R = (P ⋀ H)/(Q ⋀ H) == 0
• The point Q may be considered as a reference point, and the region in which it is located as the reference region.
• The point P may be considered as a variable point, and the region in which it is located as the region of interest.
• Regions of space may be defined by Boolean functions of the predicates R > 0, R == 0, R < 0, R ≥ 0, and R ≤ 0.
Geometric expressions
In this chapter, we have been exploring non-metric geometry involving geometric entities: points,
lines, planes, ..., hyperplanes. In this section we will explore expressions, called geometric
expressions, involving exterior and regressive products of these entities - the exterior product to
extend them, and the regressive product to intersect them. When geometric expressions are equated to zero they become geometric equations, leading to corresponding algebraic equations for curves and surfaces of various degrees.
One of the most enticing features of Grassmann algebra applied to geometry which we will also
explore in this section, is the idea that some geometric expressions may not only be geometrically
depicted, but may also double as prescriptions for their geometric construction.
The types of geometric expressions we will explore are exemplified by the one leading to the equation of a conic section in the plane (discussed in the next section):

P ⋀ P1 ⋀ (L1 ⋁ (P0 ⋀ (L2 ⋁ (P ⋀ P2)))) == 0
Here, the factors P0 , P1 , P2 , L1 and L2 are fixed (constant) entities defining the parameters of the
conic section. P is considered the independent variable, and since it occurs twice, the expression
represents an equation of the second degree. (If P had occurred m times in the expression, then the
equation would have been of degree m.)
The reduction of a geometric expression to an algebraic equation for a curve or surface requires
that the expression be either a scalar, or an n-element. The expression above is in fact the exterior
product of three points, which reduces to a 3-element in the plane (a linear space of 3 dimensions).
Each time a regressive product is simplified to its common factor, for example when two lines
intersect, the result is in general a weighted point multiplied by an arbitrary scalar congruence
factor (symbolized ¯c in GrassmannAlgebra). The zero on the right hand side of the equation
means that we can effectively ignore the weights and congruence factors, and visualize each
regressive product as resulting simply in a point. In computations however, it is most effective to
retain these weights to avoid coordinates with denominators.
For a geometric expression to result in an n-element it will usually be the exterior product of n
points (3 points for the plane, 4 points for space). Geometrically this may be interpreted as the
points all belonging to a hyperplane in the space, that is, collinear in the plane, coplanar in 3-space.
For a geometric expression to result in a scalar it will usually be the regressive product of n hyperplanes (3 lines for the plane, 4 planes for 3-space). Geometrically this may be interpreted as the hyperplanes all belonging to a point in the space, that is, three lines intersecting in one point in the plane, four planes intersecting in one point in 3-space.
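The planar case of this observation is easy to check numerically: representing lines by the cross products of homogeneous point triples (1, x, y), the regressive product of three lines is, up to a congruence factor, the determinant of their line coordinates, and it vanishes exactly when the lines share a point. A hedged Python sketch (helper names are ours):

```python
import numpy as np

def join(A, B):
    """Line through two homogeneous points: the cross product A x B."""
    return np.cross(A, B)

def det3(L1, L2, L3):
    """Scalar regressive product of three lines, up to a congruence factor."""
    return np.linalg.det(np.array([L1, L2, L3]))

X = np.array([1.0, 2.0, 3.0])                       # common point e + 2 e1 + 3 e2
A, B, C = (np.array(p, float) for p in [(1, 5, 1), (1, -1, 4), (1, 0, -2)])

concurrent = [join(X, A), join(X, B), join(X, C)]   # three lines through X
generic    = [join(A, B), join(B, C), join(X, A)]   # no common point

print(abs(det3(*concurrent)) < 1e-9)   # True: the three lines meet in a point
print(det3(*generic) != 0)             # True: a nonzero scalar otherwise
```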
◆ Historical Note
Suppose we have a fixed line L through two points in the plane, and we want to derive its normal
algebraic equation. Let the line L be defined by the points Pa and Pb .
P = ℯ + x1 e1 + x2 e2; P ⋀ L == 0
(ℯ + e1 x1 + e2 x2) ⋀ (ℯ + a1 e1 + a2 e2) ⋀ (ℯ + b1 e1 + b2 e2) == 0
The algebraic equation to this line is obtained by simplifying this 3-element and extracting its
scalar coefficient by dividing through by the basis 3-element of the space
𝒢[P ⋀ L]/(ℯ ⋀ e1 ⋀ e2) == 0
-a2 b1 + a1 b2 + a2 x1 - b2 x1 - a1 x2 + b1 x2 == 0
A x1 + B x2 + C == 0
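Numerically, the same coefficients fall out of a single cross product of the two homogeneous point triples: the line through (1, a1, a2) and (1, b1, b2) has coordinates (a1 b2 - a2 b1, a2 - b2, b1 - a1), that is, (C, A, B). A quick Python check against the expanded 3-element above (the variable values are arbitrary samples, and the helper names are ours):

```python
import numpy as np

a1, a2, b1, b2 = 2.0, -1.0, 5.0, 3.0        # sample coordinates for Pa and Pb

# Line through the two points: the coefficients (C, A, B) of A x1 + B x2 + C == 0
C, A, B = np.cross([1, a1, a2], [1, b1, b2])

def book_form(x1, x2):
    """The expanded 3-element coefficient printed above."""
    return -a2*b1 + a1*b2 + a2*x1 - b2*x1 - a1*x2 + b1*x2

for x1, x2 in [(0, 0), (1, 4), (-2, 7)]:    # the two forms agree everywhere
    assert np.isclose(A*x1 + B*x2 + C, book_form(x1, x2))

# both defining points satisfy the equation
assert np.isclose(A*a1 + B*a2 + C, 0) and np.isclose(A*b1 + B*b2 + C, 0)
print("line:", A, "x1 +", B, "x2 +", C, "== 0")
```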
Simplifying this expression gives us the equation to a plane in 3-space in the more recognizable form A + B x1 + C x2 + D x3 == 0. Here we introduce the GrassmannAlgebra function ExpandAndCollect for collecting the coefficients of the algebraic equation.
ExpandAndCollect[𝒢[P ⋀ ℙ]/(ℯ ⋀ e1 ⋀ e2 ⋀ e3), {x1, x2, x3}, 3] == 0
-a3 b2 c1 + a2 b3 c1 + a3 b1 c2 - a1 b3 c2 - a2 b1 c3 +
 a1 b2 c3 + (a3 b2 - a2 b3 - a3 c2 + b3 c2 + a2 c3 - b2 c3) x1 +
 (-a3 b1 + a1 b3 + a3 c1 - b3 c1 - a1 c3 + b1 c3) x2 +
 (a2 b1 - a1 b2 - a2 c1 + b2 c1 + a1 c2 - b1 c2) x3 == 0
C + C1 x1 + C2 x2 + C3 x3 + … + Cn xn == 0
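The collected coefficients above are just the cofactors of a 4 × 4 determinant whose rows are the homogeneous points (1, x1, x2, x3), (1, a1, a2, a3), (1, b1, b2, b3), (1, c1, c2, c3). A short numeric check (random sample values; helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.integers(-5, 6, size=(3, 3)).astype(float)   # points Pa, Pb, Pc

def plane_det(x):
    """Coefficient of the 4-element P ^ Pa ^ Pb ^ Pc."""
    rows = [np.concatenate(([1.0], v)) for v in (x, a, b, c)]
    return np.linalg.det(np.array(rows))

def book_form(x):
    """The collected coefficients printed above."""
    (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = a, b, c
    x1, x2, x3 = x
    return (-a3*b2*c1 + a2*b3*c1 + a3*b1*c2 - a1*b3*c2 - a2*b1*c3 + a1*b2*c3
            + (a3*b2 - a2*b3 - a3*c2 + b3*c2 + a2*c3 - b2*c3) * x1
            + (-a3*b1 + a1*b3 + a3*c1 - b3*c1 - a1*c3 + b1*c3) * x2
            + (a2*b1 - a1*b2 - a2*c1 + b2*c1 + a1*c2 - b1*c2) * x3)

for _ in range(5):
    x = rng.uniform(-3, 3, size=3)
    assert np.isclose(plane_det(x), book_form(x))
print("4 x 4 determinant and collected form agree")
```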
The idea is that an equation of the general form P ⋀ H == 0 still holds, where P is a variable point, and H is a hyperplane in the space. But for higher degree curves, H will itself need to be dependent on the point P. Thus for a conic section, P will occur twice in the product, leading ultimately to an equation of the second degree in its coordinates.
The hyperplane in a 2-space is a line, so in this section we are looking for ways in which this line (𝕃a, say) can be nontrivially dependent on P.
In what follows, we will use double-struck symbols 𝕃i for line parameters which will not appear in the final expression. We will also use the congruence relation when defining intersections of lines, since these normally lead to weighted points (although, as we have previously mentioned, the weights are inconsequential in the final result). Thus we begin with
P ⋀ 𝕃a == 0
If P is a simple exterior factor of 𝕃a, the equation is true for all P, leading us nowhere.
How can 𝕃a depend linearly on P, without P being an (exterior) factor of 𝕃a? The answer lies in the regressive product. Geometrically, a line is most meaningfully constructed as an exterior product of points (P1 and Q1, say). Hence
𝕃a == P1 ⋀ Q1
So one of these points must depend on P. Thus in turn, Q1 must be a regressive product, since in the plane the regressive product of two lines yields a point. Thus
Q1 ≡ L1 ⋁ 𝕃b
Again, allowing P to be a simple exterior factor of one of these lines leads to a trivial dependence, so we leave one of them constant (L1, say) and replace the other, 𝕃b, by an exterior product of points, neither of which is P.
𝕃b == P0 ⋀ Q2
As in the previous steps, we leave one factor constant, and express the other as a product (in this case regressive).
Q2 ≡ L2 ⋁ 𝕃c
Finally, we are linearly far enough away from P on our chain of lines through points to come back and intersect it non-trivially.
𝕃c == P ⋀ P2
To obtain the equation of the conic we collect these relations
0 == P ⋀ 𝕃a
𝕃a == P1 ⋀ Q1
Q1 ≡ L1 ⋁ 𝕃b
𝕃b == P0 ⋀ Q2
Q2 ≡ L2 ⋁ 𝕃c
𝕃c == P ⋀ P2
and eliminate 𝕃a, 𝕃b, 𝕃c, Q1 and Q2 to give the final equation for the conic section.
Note that since interchanging the order of factors in an exterior or regressive product changes at
most its sign, and that this change is inconsequential to the final result, we can if we wish, rewrite
the geometric equation in any of a number of ways, in particular, in reverse order.
((((P2 ⋀ P) ⋁ L2) ⋀ P0) ⋁ L1) ⋀ P1 ⋀ P == 0        (4.15)
We do of course still need to verify that this formula and construction leads to a scalar second
degree equation in two variables. This we do in the next section.
DeclareExtraScalarSymbols[{a, b, c, d, f, g, h, k, l, m}];
P0 = ℯ + f e1 + g e2;
P1 = ℯ + a e1 + b e2;
P2 = ℯ + c e1 + d e2;
L1 = (ℯ + h e1) ⋀ (ℯ + k e2);
L2 = (ℯ + l e1) ⋀ (ℯ + m e2);
Q2 = CongruenceSimplify[L2 ⋁ (P ⋀ P2)]
(the output is a weighted point whose coordinates are linear in x and y)
Q1 = CongruenceSimplify[L1 ⋁ (P0 ⋀ Q2)]
(the output is again a weighted point, with coordinates linear in x and y)
4. Construct a line 𝕃a through Q1 and P1 to intersect line 𝕃c in point P. Q1, P1 and P are thus on the same line, and the left hand side of the equation (which we call 𝒞1) becomes
𝒞1 = CongruenceSimplify[P ⋀ P1 ⋀ Q1]
(the output is a scalar multiple of ℯ ⋀ e1 ⋀ e2, quadratic in x and y)
We now divide out the basis n-element and collect the coefficients of x and y.
𝒞2 = ExpandAndCollect[𝒞1/(ℯ ⋀ e1 ⋀ e2), {x, y}, 2] == 0
(the output is a second-degree polynomial equation in x and y whose coefficients are polynomials in the ten scalar parameters; it is too long to display here)
To check if this gives the correct form, we can substitute in some values for the coordinates of the
points and lines.
𝒞2 /. {a → 2, b → 1, c → 1, d → 2, f → 1, g → 1, h → 1, k → 0, l → 1, m → 3}
-3 + 6 x - 3 x^2 + 2 x y - y^2 == 0
This is an ellipse.
(Plot: the curve -3 + 6 x - 3 x^2 + 2 x y - y^2 == 0 over 0 ≤ x ≤ 3, 0 ≤ y ≤ 3, an ellipse.)
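The whole chain of joins and meets can be replayed independently of GrassmannAlgebra in homogeneous coordinates, where the join of two points and the meet of two lines are both cross products. With the sample values substituted above, the chain reproduces the printed ellipse equation (a sketch; the helper names are ours):

```python
import numpy as np

def join(A, B): return np.cross(A, B)   # line through two points
def meet(L, M): return np.cross(L, M)   # intersection point of two lines

# Fixed entities matching the substituted values above
P0 = np.array([1, 1, 1]); P1 = np.array([1, 2, 1]); P2 = np.array([1, 1, 2])
L1 = join([1, 1, 0], [1, 0, 0])
L2 = join([1, 1, 0], [1, 0, 3])

def conic(x, y):
    """P ^ P1 ^ (L1 v (P0 ^ (L2 v (P ^ P2)))) as a 3 x 3 determinant."""
    P = np.array([1, x, y])
    Q2 = meet(L2, join(P, P2))          # L2 v (P ^ P2)
    Q1 = meet(L1, join(P0, Q2))         # L1 v (P0 ^ Q2)
    return np.linalg.det(np.array([P, P1, Q1], dtype=float))

ellipse = lambda x, y: -3 + 6*x - 3*x**2 + 2*x*y - y**2

for x, y in [(0, 0), (1, 1), (0.5, -2), (2, 3), (-1, 0.25)]:
    assert np.isclose(conic(x, y), ellipse(x, y))
print("the chain reproduces -3 + 6 x - 3 x^2 + 2 x y - y^2 == 0")
```

With these conventions the two sides happen to agree exactly, not merely up to a scalar factor.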
P0 ⋀ Q1 ⋀ Q2 == 0
But this time, P0 is a fixed point while Q1 and Q2 are functions of the variable point P. Because we
are looking for a more symmetric formula, let us suppose the form for Q1 is the same as that for Q2
in the previous formulation:
Qi ≡ Li ⋁ (P ⋀ Pi)
Thus we can imagine two fixed lines L1 and L2 and three fixed points P0 , P1 , and P2 . The
prescription to construct the conic section is as follows:
The second and third lines through Qi and Pi satisfy the conditions that Qi ⋀ P ⋀ Pi == 0. Thus their intersection gives the point P.
We now have two formulas purporting to describe a conic section: the formula above, and the
formula we first derived.
𝒮2 == P0 ⋀ (L1 ⋁ (P ⋀ P1)) ⋀ (L2 ⋁ (P ⋀ P2))
To see if they are the same, we could choose a basis, substitute symbolic coordinates, and check
that the same second degree equation results. An alternative method is to prove the identity of the
formulas from the Common Factor Theorem by applying CongruenceSimplify to the
symbolic expressions in the formulas. To indicate that the symbols for the lines represent elements
of grade 2 we underscript them with their grade. To indicate that the symbols for the points
represent elements of grade 1 we declare them as vector symbols.
CongruenceSimplify[L1 ⋁ (P ⋀ P1)]
|P1 ⋀ L1| P - |P ⋀ L1| P1
Remember that the bracketing-bar expressions are scalar quantities. |Z| is to be interpreted as the coefficient of the n-element Z when expressed in the current basis. See ToCongruenceForm.
We can expand a complete formula in a similar manner to get an alternative expression for the
conic section. For example, the first formula is
CongruenceSimplify[P ⋀ P1 ⋀ (L1 ⋁ (P0 ⋀ (L2 ⋁ (P ⋀ P2))))] == 0

(|P ⋀ L2| |P2 ⋀ L1| - |P ⋀ L1| |P2 ⋀ L2|) P ⋀ P0 ⋀ P1 +
 |P ⋀ L2| |P0 ⋀ L1| P ⋀ P1 ⋀ P2 == 0        (4.17)
This is just one of a number of alternative forms. Expanding the second formula gives a different
expression. In order to show that they are in fact equal we need to note that in the formulae there
are four points involved, of which only three are independent in the plane. Hence we can arbitrarily
express one of them (say P) in terms of the other three. (We use the symbol wP to emphasize that
the expression we substitute is actually a weighted point).
wP = a P0 + b P1 + c P2 ;
In this form, the Pi can be visualized as representing a weighted point basis for the plane, and a, b
and c as variable coefficients of the weighted point wP in this basis. The formulae for comparison
then become
𝒮1 = wP ⋀ P1 ⋀ (L1 ⋁ (P0 ⋀ (L2 ⋁ (wP ⋀ P2))));
𝒮1 = GrassmannCollect[Expand[CongruenceSimplify[𝒮1]]]
(a^2 |P0 ⋀ L1| |P0 ⋀ L2| + a b |P0 ⋀ L1| |P1 ⋀ L2| + a c |P0 ⋀ L2| |P2 ⋀ L1| +
 b c |P1 ⋀ L2| |P2 ⋀ L1| - b c |P1 ⋀ L1| |P2 ⋀ L2|) P0 ⋀ P1 ⋀ P2
2009 9 3
Grassmann Algebra Book.nb 307
𝒮2 = P0 ⋀ (L1 ⋁ (wP ⋀ P1)) ⋀ (L2 ⋁ (wP ⋀ P2));
𝒮2 = GrassmannCollect[Expand[CongruenceSimplify[𝒮2]]]
(a^2 |P0 ⋀ L1| |P0 ⋀ L2| + a b |P0 ⋀ L1| |P1 ⋀ L2| + a c |P0 ⋀ L2| |P2 ⋀ L1| +
 b c |P1 ⋀ L2| |P2 ⋀ L1| - b c |P1 ⋀ L1| |P2 ⋀ L2|) P0 ⋀ P1 ⋀ P2
𝒮1 == 𝒮2
True
Li == O ⋀ Ri
But since any points on the lines (except O) will be satisfactory, we can choose them
advantageously as the intersections with the lines P0 Ô Pi .
R1 ≡ L1 ⋁ (P0 ⋀ P2)
R2 ≡ L2 ⋁ (P0 ⋀ P1)
P0 ≡ (P1 ⋀ R2) ⋁ (P2 ⋀ R1)
P0 ⋀ (L1 ⋁ (P ⋀ P1)) ⋀ (L2 ⋁ (P ⋀ P2)) == 0

((P1 ⋀ R2) ⋁ (P2 ⋀ R1)) ⋀ ((O ⋀ R1) ⋁ (P ⋀ P1)) ⋀ ((O ⋀ R2) ⋁ (P ⋀ P2)) == 0        (4.18)
Note the symmetry. There are five fixed points, O, P1 , P2 , R1 , R2 , and the variable point P. Each
point occurs twice. Swapping P1 and P2 while at the same time swapping R1 and R2 leaves the
equation unchanged.
We can show that these five points satisfy the equation, and thus all lie on the conic section. But
since a conic section constructed through five points is unique, this equation represents the general
conic section, or general quadric curve in the plane.
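Formula 4.18 can also be tested numerically for an arbitrary placement of the five points. Translating joins and meets into cross products of homogeneous coordinates (a hedged sketch; helper names are ours), the expression vanishes at each of the five points but not at a generic sixth point:

```python
def cross(a, b):
    """Join of two points, or meet of two lines (homogeneous coordinates)."""
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def det3(a, b, c):
    """Coefficient of the exterior product of three points."""
    return (a[0]*(b[1]*c[2] - b[2]*c[1]) - a[1]*(b[0]*c[2] - b[2]*c[0])
            + a[2]*(b[0]*c[1] - b[1]*c[0]))

# Five fixed points in general position; integer coordinates keep the test exact
O = (1, 0, 0); P1 = (1, 3, 1); P2 = (1, 1, 3); R1 = (1, 4, -1); R2 = (1, -2, 2)

def F(P):
    """((P1^R2) v (P2^R1)) ^ ((O^R1) v (P^P1)) ^ ((O^R2) v (P^P2))"""
    X = cross(cross(P1, R2), cross(P2, R1))
    Y = cross(cross(O, R1), cross(P, P1))
    Z = cross(cross(O, R2), cross(P, P2))
    return det3(X, Y, Z)

print([F(P) for P in (O, P1, P2, R1, R2)])   # [0, 0, 0, 0, 0]
print(F((1, 1, 1)) != 0)                     # True: a generic point is not on it
```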
We show this as follows. First, we can immediately read off from the equation that it is satisfied when P coincides with either P1 or P2. To show it for O, R1 and R2 we can expand the regressive products by the Common Factor Theorem, as we did in the previous section, to give a different view of the equation.
DeclareExtraVectorSymbols[{P, O, P1, P2, R1, R2}];
ℰ1 = CongruenceSimplify[
   ((P1 ⋀ R2) ⋁ (P2 ⋀ R1)) ⋀ ((O ⋀ R1) ⋁ (P ⋀ P1)) ⋀ ((O ⋀ R2) ⋁ (P ⋀ P2))] == 0
|O ⋀ P ⋀ P1| |P ⋀ P2 ⋀ R2| |P2 ⋀ R1 ⋀ R2| O ⋀ P1 ⋀ R1 -
 |O ⋀ P ⋀ P2| |P ⋀ P1 ⋀ R1| |P2 ⋀ R1 ⋀ R2| O ⋀ P1 ⋀ R2 +
 |O ⋀ P ⋀ P1| |P ⋀ P2 ⋀ R2| |P1 ⋀ P2 ⋀ R1| O ⋀ R1 ⋀ R2 -
 |O ⋀ P ⋀ P1| |O ⋀ P ⋀ P2| |P2 ⋀ R1 ⋀ R2| P1 ⋀ R1 ⋀ R2 == 0
It is clear from the first factor in each of the terms that the equation is satisfied when P is equal to
O. But it is not yet clear that the equation is satisfied when P is equal to R1 or R2 . To show this we
try applying CongruenceSimplify to a rearranged form where the factors of the regressive
products are reversed.
ℰ2 = CongruenceSimplify[
   ((P2 ⋀ R1) ⋁ (P1 ⋀ R2)) ⋀ ((P ⋀ P1) ⋁ (O ⋀ R1)) ⋀ ((P ⋀ P2) ⋁ (O ⋀ R2))] == 0
|O ⋀ P ⋀ R1| |O ⋀ P2 ⋀ R2| |P1 ⋀ R1 ⋀ R2| P ⋀ P1 ⋀ P2 -
 |O ⋀ P ⋀ R1| |O ⋀ P2 ⋀ R2| |P1 ⋀ P2 ⋀ R2| P ⋀ P1 ⋀ R1 +
 |O ⋀ P ⋀ R2| |O ⋀ P1 ⋀ R1| |P1 ⋀ P2 ⋀ R2| P ⋀ P2 ⋀ R1 -
 |O ⋀ P ⋀ R1| |O ⋀ P ⋀ R2| |P1 ⋀ P2 ⋀ R2| P1 ⋀ P2 ⋀ R1 == 0
Now we see that not only is the equation satisfied when P is equal to O, but also when P is equal to
R1 . We try again with the factors of only the second and third regressive products reversed.
ℰ3 = CongruenceSimplify[
   ((P1 ⋀ R2) ⋁ (P2 ⋀ R1)) ⋀ ((P ⋀ P1) ⋁ (O ⋀ R1)) ⋀ ((P ⋀ P2) ⋁ (O ⋀ R2))] == 0
-|O ⋀ P ⋀ R2| |O ⋀ P1 ⋀ R1| |P2 ⋀ R1 ⋀ R2| P ⋀ P1 ⋀ P2 +
 |O ⋀ P ⋀ R1| |O ⋀ P2 ⋀ R2| |P1 ⋀ P2 ⋀ R1| P ⋀ P1 ⋀ R2 -
 |O ⋀ P ⋀ R2| |O ⋀ P1 ⋀ R1| |P1 ⋀ P2 ⋀ R1| P ⋀ P2 ⋀ R2 +
 |O ⋀ P ⋀ R1| |O ⋀ P ⋀ R2| |P1 ⋀ P2 ⋀ R1| P1 ⋀ P2 ⋀ R2 == 0
This time we see immediately that the equation is satisfied when P is equal to R2.
Of course it would have been more direct to show these sorts of results by substituting the point for P in the equation, and then simplifying. For example, substituting R2 for P in the above equation, then simplifying, gives zero as expected.
We have thus shown that the five fixed points, O, P1 , P2 , R1 , R2 , all lie on the conic section.
Here is a hyperbola as a graphic example, plotted from the original formula above.
(Figure: a hyperbola through the five points O, P1, P2, R1, R2, with the fixed lines L1, L2, the point P0, and the construction points Q1, Q2 marked.)
And here is a circle. The only difference being the relative location of the five points.
(Figure: a circle produced by the same construction, with the points O, P0, P1, P2, R1, R2, the lines L1, L2, and the construction points Q1, Q2 marked.)
Dual constructions
The Principle of Duality discussed in Chapter 3 says that to every formula such as
P0 ⋀ (L1 ⋁ (P ⋀ P1)) ⋀ (L2 ⋁ (P ⋀ P2)) == 0
there corresponds a dual formula in which exterior and regressive products are interchanged, and m-
elements are interchanged with (n-m)-elements: in this case, points are interchanged with lines.
Thus we can write another formula for a conic section as
L0 ⋁ (P1 ⋀ (ℒ ⋁ L1)) ⋁ (P2 ⋀ (ℒ ⋁ L2)) == 0
where the Pi are fixed points, the Li are fixed lines, and ℒ is a variable line.
This formula says that three lines intersect at one point. One line L0 is fixed. The other two are of the form
𝕃i == Pi ⋀ (ℒ ⋁ Li)
To see most directly how such a formula translates into an algebraic equation, we take a numerical
example with fixed points and lines defined:
P1 = ℯ + 2 e1 + e2;
P2 = ℯ + e1 + 2 e2;
L0 = (ℯ + 5 e1) ⋀ (ℯ + 4 e2);
L1 = ℯ ⋀ (ℯ + e1);
L2 = (ℯ - e1) ⋀ (ℯ + 3 e2);
We can define the variable line ℒ as the product of its intersections with the coordinate axes, and substitute it into the formula (which we call 𝒳1):
ℒ = (ℯ + x e1) ⋀ (ℯ + y e2); 𝒳1 = L0 ⋁ (P1 ⋀ (ℒ ⋁ L1)) ⋁ (P2 ⋀ (ℒ ⋁ L2));
Expand[CongruenceSimplify[𝒳1]] == 0
198 x y - 90 x^2 y - 21 y^2 - 80 x y^2 + 37 x^2 y^2 == 0
We can solve for y in terms of x, but remember that these are line coordinates, not point
coordinates.
y = 18 (-11 x + 5 x^2)/(-21 - 80 x + 37 x^2);
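Both the quartic and its solved branch can be checked with exact rational arithmetic, and the dual chain itself can be replayed in coordinates, with joins and meets again as cross products. A hedged Python sketch (the helper names and the coordinate reconstruction of the variable line are ours):

```python
from fractions import Fraction as Fr

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def det3(a, b, c):
    return (a[0]*(b[1]*c[2] - b[2]*c[1]) - a[1]*(b[0]*c[2] - b[2]*c[0])
            + a[2]*(b[0]*c[1] - b[1]*c[0]))

P1 = (1, 2, 1); P2 = (1, 1, 2)            # the fixed points of the dual example
L0 = cross((1, 5, 0), (1, 0, 4))          # the fixed lines, as point joins
L1 = cross((1, 0, 0), (1, 1, 0))
L2 = cross((1, -1, 0), (1, 0, 3))

def dual(x, y):
    """L0 v (P1 ^ (Lam v L1)) v (P2 ^ (Lam v L2)), Lam cutting the axes at x, y."""
    Lam = cross((1, x, 0), (1, 0, y))     # the variable line
    return det3(L0, cross(P1, cross(Lam, L1)), cross(P2, cross(Lam, L2)))

quartic = lambda x, y: 198*x*y - 90*x**2*y - 21*y**2 - 80*x*y**2 + 37*x**2*y**2

assert dual(Fr(1), Fr(1)) == quartic(1, 1) == 44   # the two forms agree exactly
for x in (Fr(2), Fr(3), Fr(5), Fr(-1)):
    y = 18*(-11*x + 5*x**2) / (-21 - 80*x + 37*x**2)   # the solved branch
    assert dual(x, y) == 0 and quartic(x, y) == 0
print("the dual chain reproduces the printed quartic")
```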
By plotting a line for each pair of line coordinates, we see that they form the envelope of a conic
section as expected.
and replace the 2-space hyperplanes (the lines) with 3-space hyperplanes (planes). Thus we can rewrite the formula as
P ⋀ P1 ⋀ (Π1 ⋁ (P0 ⋀ (Π2 ⋁ (P ⋀ L)))) == 0
Note that the point P2 becomes the line L in order to make the innermost term into a plane.
𝕃2 ≡ Π2 ⋁ ℙc
ℙc == P ⋀ L
𝕃1 ≡ Π1 ⋁ ℙb
ℙb == P0 ⋀ 𝕃2
0 == P ⋀ ℙa
ℙa == P1 ⋀ 𝕃1
We now derive the algebraic equation in the same manner as we have previously for the planar cases.
DeclareExtraScalarSymbols[{a, b, c, d, f, g, h, k, l, m, p, q, r, s, t, u}];
P0 = ℯ + a e1 + b e2 + c e3;
P1 = ℯ + d e1 + f e2 + g e3;
L = (ℯ + h e1 + k e2) ⋀ (ℯ + l e2 + m e3);
Π1 = (ℯ + p e1) ⋀ (ℯ + q e2) ⋀ (ℯ + r e3);
Π2 = (ℯ + s e1) ⋀ (ℯ + t e2) ⋀ (ℯ + u e3);
P = ℯ + x e1 + y e2 + z e3;
𝒜1 = ExpandAndCollect[
  CongruenceSimplify[P ⋀ P1 ⋀ (Π1 ⋁ (P0 ⋀ (Π2 ⋁ (P ⋀ L))))]/(ℯ ⋀ e1 ⋀ e2 ⋀ e3),
  {x, y, z}, 3]
(the output is a second-degree polynomial in x, y and z whose coefficients are polynomials in the sixteen scalar parameters; it is far too long to display here)
Substituting some random values for the coordinates of the fixed points, line and planes gives us the equation:
The graphic below depicts this conic together with its defining points, line and planes. The
construction lines are not shown as they are difficult to portray unambiguously in one view.
Note that the line L and the point P1 are on the surface. This is clear from the geometric equation.
A typical factor of the formula, for example the first, simplifies to the weighted point:
Q1 = CongruenceSimplify[L1 ⋁ (P ⋀ P1)]
(the output is a weighted point whose coordinates are linear in x and y)
𝒳2 = CongruenceSimplify[𝒳1]/(ℯ ⋀ e1 ⋀ e2) == 0
(the output is a lengthy expression, cubic in the coordinates x and y)
It is easy to check that the expression is indeed a cubic, either by inspection, or by application of ExpandAndCollect, as in the previous section.
It is instructive perhaps to see a graphical depiction of how the formula works. In the graphic
below we plot a typical cubic in the plane defined by the three points P1 , P2 , P3 , and the three lines
L1 , L2 , and L3 . The point P is located by the condition that the three points Q1 , Q2 , and Q3 are
concurrent, where each of the points Qi is defined as the intersection of the line P ∧ Pi with the line Li.
Qi = Li ∨ (P ∧ Pi)
As can be seen from the graphic, the cubic curve passes through nine special points: the three
points P1 , P2 , P3 , the three intersections of the three lines L1 , L2 , and L3 , and the intersections R1
of these three lines with the lines through the points Pi taken two at a time.
R1 = L1 ∨ (P2 ∧ P3)
R2 = L2 ∨ (P3 ∧ P1)
R3 = L3 ∨ (P1 ∧ P2)
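These joins and meets can be checked independently of the GrassmannAlgebra package. In the plane, a point with coordinates (x, y) can be represented by the homogeneous triple (1, x, y); then the join of two points (the line through them) and the meet of two lines (their intersection) both reduce, up to a congruence factor, to ordinary cross products. The following Python sketch (the names cross, point, join and meet are our own) illustrates the construction used in formulae such as R1 = L1 ∨ (P2 ∧ P3):

```python
# Join and meet in the plane via homogeneous coordinate triples.
# A point with coordinates (x, y) is represented as (1, x, y); the join of
# two points and the meet of two lines are both cross products, each
# determined only up to a non-zero congruence factor.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def point(x, y):
    return (1.0, x, y)

join = cross   # P ∧ Q : the line through the points P and Q
meet = cross   # L ∨ M : the point of intersection of the lines L and M

# Example: intersect the line y = 1 with the line joining (0,0) and (2,2).
L1 = join(point(0, 1), point(1, 1))
R1 = meet(L1, join(point(0, 0), point(2, 2)))
x, y = R1[1] / R1[0], R1[2] / R1[0]   # divide out the weight: (1.0, 1.0)
```

Dividing out the weight (the first component) recovers the ordinary coordinates of the intersection point, here (1, 1), as expected for the lines y = 1 and y = x.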
These special points are displayed as large semi-opaque red points (perhaps behind a blue point).
Pascal's Theorem
Our discussion up to this point leads us to Pascal's Theorem, which may be stated: If a hexagon is
inscribed in a conic, then opposite sides intersect in collinear points.
In the previous sections we have shown that the geometric equation below, involving five fixed
points and a variable point P, leads to the algebraic equation of a conic section:
To explore Pascal's Theorem further we find it conceptually advantageous to start with a fresh
notation. We denote the six points by P1 , P2 , P3 , P4 , P5 , and P6 , and suppose that they are placed
on the conic in arbitrary (but of course separate) positions. Let us consider the hexagon formed by
these points in order, so that the sides S1 to S6 may be written, starting with P1 as
S1 = P1 ∧ P2; S2 = P2 ∧ P3; S3 = P3 ∧ P4; S4 = P4 ∧ P5; S5 = P5 ∧ P6; S6 = P6 ∧ P1;
The pairs of opposite sides can then be intersected to give the collinear points Z1 , Z2 and Z3 .
Z1 = S1 ∨ S4; Z2 = S2 ∨ S5; Z3 = S3 ∨ S6;
We will call these points Pascal points. The Pascal line ℒ on which these points lie can be written alternatively as
ℒ ≡ Z1 ∧ Z2 ≡ Z1 ∧ Z3 ≡ Z2 ∧ Z3
The formula is then constructed from the condition that the points Z1, Z2, and Z3 all lie on ℒ, that is, their exterior product is zero.
Z1 ∧ Z2 ∧ Z3 = 0

(P1 ∧ P2 ∨ P4 ∧ P5) ∧ (P2 ∧ P3 ∨ P5 ∧ P6) ∧ (P3 ∧ P4 ∨ P6 ∧ P1) = 0

((P1 ∧ P2) ∨ (P4 ∧ P5)) ∧ ((P2 ∧ P3) ∨ (P5 ∧ P6)) ∧ ((P3 ∧ P4) ∨ (P6 ∧ P1)) = 0    4.20
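This condition can also be verified numerically without any special package. Representing a point (x, y) by the triple (1, x, y), a side becomes the cross product of its two endpoints, a Pascal point the cross product of two opposite sides, and the vanishing exterior product Z1 ∧ Z2 ∧ Z3 the vanishing of a 3 x 3 determinant. A Python sketch, with six arbitrarily chosen points on the unit circle:

```python
import math

# Numerical check of Pascal's theorem for six points on the unit circle.
# Points are homogeneous triples (1, x, y); joins and meets are cross
# products; Z1 ∧ Z2 ∧ Z3 = 0 becomes a vanishing 3 x 3 determinant.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def det3(r0, r1, r2):
    return (r0[0] * (r1[1] * r2[2] - r1[2] * r2[1])
          - r0[1] * (r1[0] * r2[2] - r1[2] * r2[0])
          + r0[2] * (r1[0] * r2[1] - r1[1] * r2[0]))

angles = [0.3, 1.1, 1.9, 3.0, 4.2, 5.5]          # arbitrary positions
P = [(1.0, math.cos(t), math.sin(t)) for t in angles]

S = [cross(P[i], P[(i + 1) % 6]) for i in range(6)]   # the six sides
Z = [cross(S[i], S[i + 3]) for i in range(3)]         # the Pascal points

pascal = det3(*Z)   # vanishes, to rounding error, by Pascal's theorem
```

Changing the six angles to any other distinct values leaves the determinant at rounding-error level, while perturbing one point off the circle makes it distinctly non-zero.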
The graphic below depicts these entities for six points on a circle. The sides are shown in dark
green and their extensions in light green. The Pascal line for this hexagon is shown in red.
Z2
S5
P5
P6
S6
S1
P2 Z1 P1
S2 S4
S3
P3 P4 Z3
Pascal's theorem
As may be observed, this is essentially the same graphic as Constructing a circle through 5 points
in the section above on Conic sections through five points, except that the points have been
renamed as follows:
The diagram above demonstrates Pascal's Theorem in a conic in which two opposite sides cross
within the conic, hence making the Pascal line intersect it. The following example demonstrates the
Theorem for an ellipse in which all the opposite sides intersect outside the conic. We take the
ellipse
(x/4)² + (y/3)² = 1
and choose some symmetrically placed points around it:
EllipsePoints =
  {P1 → ¯" + 3 e2, P2 → ¯" + 2 e1 - (3√3/2) e2, P3 → ¯" + 4 e1,
   P4 → ¯" - 3 e2, P5 → ¯" - 4 e1, P6 → ¯" - 2 e1 - (3√3/2) e2};
We defined the sides Si above by
S1 = P1 ∧ P2; S2 = P2 ∧ P3; S3 = P3 ∧ P4; S4 = P4 ∧ P5; S5 = P5 ∧ P6; S6 = P6 ∧ P1;
To compute the three Pascal points we take the regressive product of opposite sides, substitute in
the values EllipsePoints, and use CongruenceSimplify (in its short form ¯1) and
ToPointForm to do the simplifications.
{¯# + 4(-1 + √3) e1 - 3√3 e2,
 ¯# - 3√3 e2, ¯# + (4 - 4√3) e1 - 3√3 e2}
Pascal's Theorem says that these three points are collinear, hence their exterior product should be
zero.
¯%[Z1 ∧ Z2 ∧ Z3]

0
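As an independent cross-check on the first of the three Pascal points, we can redo the computation in plain Python with homogeneous triples (1, x, y), where joins and meets are cross products (the GrassmannAlgebra package is not assumed). Dividing out the weight should reproduce the coordinates 4(-1 + √3) and -3√3:

```python
import math

# The book's computation gives the first Pascal point of the ellipse example
# as O + 4(-1 + sqrt(3)) e1 - 3 sqrt(3) e2. Recompute it with cross products.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

r3 = math.sqrt(3)
P1, P2 = (1, 0, 3), (1, 2, -1.5 * r3)   # points on (x/4)^2 + (y/3)^2 = 1
P4, P5 = (1, 0, -3), (1, -4, 0)

S1 = cross(P1, P2)   # the side P1 ∧ P2
S4 = cross(P4, P5)   # the side P4 ∧ P5
Z1 = cross(S1, S4)   # the Pascal point S1 ∨ S4, as a weighted point

x, y = Z1[1] / Z1[0], Z1[2] / Z1[0]
# x = 4(-1 + sqrt(3)) ≈ 2.928, y = -3 sqrt(3) ≈ -5.196, as in the book
```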
The Pascal line may be obtained as the exterior product of any pair of the points.
{(4 - 4√3) ¯# ∧ e1 + 12(-3 + √3) e1 ∧ e2,
 (8 - 8√3) ¯# ∧ e1 + 24(-3 + √3) e1 ∧ e2,
 (4 - 4√3) ¯# ∧ e1 + 12(-3 + √3) e1 ∧ e2}
Since the weight is not important, we can divide through by it and simplify to obtain the Pascal line
in the form
ℒ ≡ (¯" - 3√3 e2) ∧ e1
P1
S6 S1
P5 P3
S5 S2
S4 S3
P6 P2
P4
Z3 Z2 Z1
RegularHexagonPoints =
  Table[Pi → ¯" + Cos[i π/3] e1 + Sin[i π/3] e2, {i, 1, 6}]

{P1 → ¯# + e1/2 + (√3/2) e2, P2 → ¯# - e1/2 + (√3/2) e2, P3 → ¯# - e1,
 P4 → ¯# - e1/2 - (√3/2) e2, P5 → ¯# + e1/2 - (√3/2) e2, P6 → ¯# + e1}
The intersections of the opposite sides of the regular hexagon give
{-√3 e1, -(√3/2) e1 - (3/2) e2, (√3/2) e1 - (3/2) e2}
Note that these are all vectors, not points as expected, because the opposite sides are all parallel.
These vectors are dependent (as of course they must be in the plane); and consequently, their
exterior product is zero.
¯%[Z1 ∧ Z2 ∧ Z3]

0
Thus Pascal's Theorem is demonstrated in this example to hold for pairs of sides intersecting in
points at infinity (vectors). The three points at infinity lie on the same line at infinity. That is, the
three vectors lie in the same bivector. (Of course, this must be the case for any three vectors in the
plane.)
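This degenerate case is also easy to confirm numerically. Using homogeneous triples (1, x, y) and cross products for joins and meets as before, the meet of two parallel sides comes out with a zero first component, which is precisely the statement that it is a vector rather than a weighted point. A Python sketch:

```python
import math

# For the regular hexagon each pair of opposite sides is parallel, so each
# meet S_i ∨ S_{i+3} has zero weight: it is a vector (a point at infinity).

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

P = [(1.0, math.cos(i * math.pi / 3), math.sin(i * math.pi / 3))
     for i in range(1, 7)]
S = [cross(P[i], P[(i + 1) % 6]) for i in range(6)]   # the six sides
Z = [cross(S[i], S[i + 3]) for i in range(3)]         # meets of opposite sides

weights = [z[0] for z in Z]   # all zero: the Z_i are vectors, not points
```

Since the three meets all have zero weight, their triple exterior product vanishes automatically, as any three vectors in the plane are dependent.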
Pascal lines
Ï Hexagons in a conic
We have seen how Pascal's Theorem establishes a line associated with any hexagon in a conic.
Suppose we have fixed the positions of the six points anywhere we wish on the conic, and named
them in any order we wish. How many essentially different hexagons (whose sides may of course
cross each other) can we form by joining these six points in some order?
As before, we name these points P1 , P2 , P3 , P4 , P5 and P6 . Without loss of generality we can start
with P1 and join it in any of five ways to the other five points. From that point we then have four
ways of forming the next side; then three ways; then two ways. This makes 5 × 4 × 3 × 2 = 120
hexagons. However, for each hexagon P1 Pa Pb Pc Pd Pe there corresponds the same hexagon
traversed in the reverse order P1 Pe Pd Pc Pb Pa . Thus there are just 60 essentially different
hexagons. We use Mathematica's Permutations function to list them and Union to eliminate
the ones traversed in reverse order.
hexagons =
  Join[{P1}, #] & /@ Union[Permutations[{P2, P3, P4, P5, P6}],
    SameTest → (#1 === Reverse[#2] &)]
{{P1, P2, P3, P4, P5, P6}, {P1, P2, P3, P4, P6, P5},
{P1, P2, P3, P5, P4, P6}, {P1, P2, P3, P5, P6, P4},
{P1, P2, P3, P6, P4, P5}, {P1, P2, P3, P6, P5, P4},
{P1, P2, P4, P3, P5, P6}, {P1, P2, P4, P3, P6, P5},
{P1, P2, P4, P5, P3, P6}, {P1, P2, P4, P5, P6, P3},
{P1, P2, P4, P6, P3, P5}, {P1, P2, P4, P6, P5, P3},
{P1, P2, P5, P3, P4, P6}, {P1, P2, P5, P3, P6, P4},
{P1, P2, P5, P4, P3, P6}, {P1, P2, P5, P4, P6, P3},
{P1, P2, P5, P6, P3, P4}, {P1, P2, P5, P6, P4, P3},
{P1, P2, P6, P3, P4, P5}, {P1, P2, P6, P3, P5, P4},
{P1, P2, P6, P4, P3, P5}, {P1, P2, P6, P4, P5, P3},
{P1, P2, P6, P5, P3, P4}, {P1, P2, P6, P5, P4, P3},
{P1, P3, P2, P4, P5, P6}, {P1, P3, P2, P4, P6, P5},
{P1, P3, P2, P5, P4, P6}, {P1, P3, P2, P5, P6, P4},
{P1, P3, P2, P6, P4, P5}, {P1, P3, P2, P6, P5, P4},
{P1, P3, P4, P2, P5, P6}, {P1, P3, P4, P2, P6, P5},
{P1, P3, P4, P5, P2, P6}, {P1, P3, P4, P6, P2, P5},
{P1, P3, P5, P2, P4, P6}, {P1, P3, P5, P2, P6, P4},
{P1, P3, P5, P4, P2, P6}, {P1, P3, P5, P6, P2, P4},
{P1, P3, P6, P2, P4, P5}, {P1, P3, P6, P2, P5, P4},
{P1, P3, P6, P4, P2, P5}, {P1, P3, P6, P5, P2, P4},
{P1, P4, P2, P3, P5, P6}, {P1, P4, P2, P3, P6, P5},
{P1, P4, P2, P5, P3, P6}, {P1, P4, P2, P6, P3, P5},
{P1, P4, P3, P2, P5, P6}, {P1, P4, P3, P2, P6, P5},
{P1, P4, P3, P5, P2, P6}, {P1, P4, P3, P6, P2, P5},
{P1, P4, P5, P2, P3, P6}, {P1, P4, P5, P3, P2, P6},
{P1, P4, P6, P2, P3, P5}, {P1, P4, P6, P3, P2, P5},
{P1, P5, P2, P3, P4, P6}, {P1, P5, P2, P4, P3, P6},
{P1, P5, P3, P2, P4, P6}, {P1, P5, P3, P4, P2, P6},
{P1, P5, P4, P2, P3, P6}, {P1, P5, P4, P3, P2, P6}}
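The same enumeration is easy to reproduce outside Mathematica. A Python sketch using itertools, where each hexagon is canonicalized by keeping whichever of a label sequence and its reversal is lexicographically smaller:

```python
from itertools import permutations

# Fix P1 first, permute the labels of the remaining five points, and keep
# exactly one of each pair of mutually reversed traversals.
hexagons = [(1,) + p for p in permutations((2, 3, 4, 5, 6))
            if p <= tuple(reversed(p))]

count = len(hexagons)   # 60 essentially different hexagons
```

Since a sequence of five distinct labels can never equal its own reversal, the 120 permutations pair off exactly, giving the 60 hexagons counted above.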
We take the example of the ellipse and its points from the previous section and display their
hexagons.
Ï Pascal points
When two opposite sides of a hexagon are extended they intersect in a point which we have called
a Pascal point. Pascal's Theorem says that the three Pascal points lie on the one line. We can
convert each hexagon in the list of hexagons above to a list of three formulae, one formula for each
point.
allPascalPointFormulae =
  hexagons /. {Pa_, Pb_, Pc_, Pd_, Pe_, Pf_} :>
    {(Pa ∧ Pb) ∨ (Pd ∧ Pe), (Pb ∧ Pc) ∨ (Pe ∧ Pf), (Pc ∧ Pd) ∨ (Pf ∧ Pa)}
88P1 Ô P2 Ó P4 Ô P5 , P2 Ô P3 Ó P5 Ô P6 , P3 Ô P4 Ó P6 Ô P1 <,
8P1 Ô P2 Ó P4 Ô P6 , P2 Ô P3 Ó P6 Ô P5 , P3 Ô P4 Ó P5 Ô P1 <,
8P1 Ô P2 Ó P5 Ô P4 , P2 Ô P3 Ó P4 Ô P6 , P3 Ô P5 Ó P6 Ô P1 <,
8P1 Ô P2 Ó P5 Ô P6 , P2 Ô P3 Ó P6 Ô P4 , P3 Ô P5 Ó P4 Ô P1 <,
8P1 Ô P2 Ó P6 Ô P4 , P2 Ô P3 Ó P4 Ô P5 , P3 Ô P6 Ó P5 Ô P1 <,
8P1 Ô P2 Ó P6 Ô P5 , P2 Ô P3 Ó P5 Ô P4 , P3 Ô P6 Ó P4 Ô P1 <,
8P1 Ô P2 Ó P3 Ô P5 , P2 Ô P4 Ó P5 Ô P6 , P4 Ô P3 Ó P6 Ô P1 <,
8P1 Ô P2 Ó P3 Ô P6 , P2 Ô P4 Ó P6 Ô P5 , P4 Ô P3 Ó P5 Ô P1 <,
8P1 Ô P2 Ó P5 Ô P3 , P2 Ô P4 Ó P3 Ô P6 , P4 Ô P5 Ó P6 Ô P1 <,
8P1 Ô P2 Ó P5 Ô P6 , P2 Ô P4 Ó P6 Ô P3 , P4 Ô P5 Ó P3 Ô P1 <,
8P1 Ô P2 Ó P6 Ô P3 , P2 Ô P4 Ó P3 Ô P5 , P4 Ô P6 Ó P5 Ô P1 <,
8P1 Ô P2 Ó P6 Ô P5 , P2 Ô P4 Ó P5 Ô P3 , P4 Ô P6 Ó P3 Ô P1 <,
8P1 Ô P2 Ó P3 Ô P4 , P2 Ô P5 Ó P4 Ô P6 , P5 Ô P3 Ó P6 Ô P1 <,
8P1 Ô P2 Ó P3 Ô P6 , P2 Ô P5 Ó P6 Ô P4 , P5 Ô P3 Ó P4 Ô P1 <,
8P1 Ô P2 Ó P4 Ô P3 , P2 Ô P5 Ó P3 Ô P6 , P5 Ô P4 Ó P6 Ô P1 <,
8P1 Ô P2 Ó P4 Ô P6 , P2 Ô P5 Ó P6 Ô P3 , P5 Ô P4 Ó P3 Ô P1 <,
8P1 Ô P2 Ó P6 Ô P3 , P2 Ô P5 Ó P3 Ô P4 , P5 Ô P6 Ó P4 Ô P1 <,
8P1 Ô P2 Ó P6 Ô P4 , P2 Ô P5 Ó P4 Ô P3 , P5 Ô P6 Ó P3 Ô P1 <,
8P1 Ô P2 Ó P3 Ô P4 , P2 Ô P6 Ó P4 Ô P5 , P6 Ô P3 Ó P5 Ô P1 <,
8P1 Ô P2 Ó P3 Ô P5 , P2 Ô P6 Ó P5 Ô P4 , P6 Ô P3 Ó P4 Ô P1 <,
8P1 Ô P2 Ó P4 Ô P3 , P2 Ô P6 Ó P3 Ô P5 , P6 Ô P4 Ó P5 Ô P1 <,
8P1 Ô P2 Ó P4 Ô P5 , P2 Ô P6 Ó P5 Ô P3 , P6 Ô P4 Ó P3 Ô P1 <,
8P1 Ô P2 Ó P5 Ô P3 , P2 Ô P6 Ó P3 Ô P4 , P6 Ô P5 Ó P4 Ô P1 <,
8P1 Ô P2 Ó P5 Ô P4 , P2 Ô P6 Ó P4 Ô P3 , P6 Ô P5 Ó P3 Ô P1 <,
8P1 Ô P3 Ó P4 Ô P5 , P3 Ô P2 Ó P5 Ô P6 , P2 Ô P4 Ó P6 Ô P1 <,
8P1 Ô P3 Ó P4 Ô P6 , P3 Ô P2 Ó P6 Ô P5 , P2 Ô P4 Ó P5 Ô P1 <,
8P1 Ô P3 Ó P5 Ô P4 , P3 Ô P2 Ó P4 Ô P6 , P2 Ô P5 Ó P6 Ô P1 <,
8P1 Ô P3 Ó P5 Ô P6 , P3 Ô P2 Ó P6 Ô P4 , P2 Ô P5 Ó P4 Ô P1 <,
8P1 Ô P3 Ó P6 Ô P4 , P3 Ô P2 Ó P4 Ô P5 , P2 Ô P6 Ó P5 Ô P1 <,
8P1 Ô P3 Ó P6 Ô P5 , P3 Ô P2 Ó P5 Ô P4 , P2 Ô P6 Ó P4 Ô P1 <,
8P1 Ô P3 Ó P2 Ô P5 , P3 Ô P4 Ó P5 Ô P6 , P4 Ô P2 Ó P6 Ô P1 <,
8P1 Ô P3 Ó P2 Ô P6 , P3 Ô P4 Ó P6 Ô P5 , P4 Ô P2 Ó P5 Ô P1 <,
8P1 Ô P3 Ó P5 Ô P2 , P3 Ô P4 Ó P2 Ô P6 , P4 Ô P5 Ó P6 Ô P1 <,
8P1 Ô P3 Ó P6 Ô P2 , P3 Ô P4 Ó P2 Ô P5 , P4 Ô P6 Ó P5 Ô P1 <,
8P1 Ô P3 Ó P2 Ô P4 , P3 Ô P5 Ó P4 Ô P6 , P5 Ô P2 Ó P6 Ô P1 <,
8P1 Ô P3 Ó P2 Ô P6 , P3 Ô P5 Ó P6 Ô P4 , P5 Ô P2 Ó P4 Ô P1 <,
8P1 Ô P3 Ó P4 Ô P2 , P3 Ô P5 Ó P2 Ô P6 , P5 Ô P4 Ó P6 Ô P1 <,
8P1 Ô P3 Ó P6 Ô P2 , P3 Ô P5 Ó P2 Ô P4 , P5 Ô P6 Ó P4 Ô P1 <,
8P1 Ô P3 Ó P2 Ô P4 , P3 Ô P6 Ó P4 Ô P5 , P6 Ô P2 Ó P5 Ô P1 <,
8P1 Ô P3 Ó P2 Ô P5 , P3 Ô P6 Ó P5 Ô P4 , P6 Ô P2 Ó P4 Ô P1 <,
8P1 Ô P3 Ó P4 Ô P2 , P3 Ô P6 Ó P2 Ô P5 , P6 Ô P4 Ó P5 Ô P1 <,
8P1 Ô P3 Ó P5 Ô P2 , P3 Ô P6 Ó P2 Ô P4 , P6 Ô P5 Ó P4 Ô P1 <,
8P1 Ô P4 Ó P3 Ô P5 , P4 Ô P2 Ó P5 Ô P6 , P2 Ô P3 Ó P6 Ô P1 <,
8P1 Ô P4 Ó P3 Ô P6 , P4 Ô P2 Ó P6 Ô P5 , P2 Ô P3 Ó P5 Ô P1 <,
8P1 Ô P4 Ó P5 Ô P3 , P4 Ô P2 Ó P3 Ô P6 , P2 Ô P5 Ó P6 Ô P1 <,
8P1 Ô P4 Ó P6 Ô P3 , P4 Ô P2 Ó P3 Ô P5 , P2 Ô P6 Ó P5 Ô P1 <,
8P1 Ô P4 Ó P2 Ô P5 , P4 Ô P3 Ó P5 Ô P6 , P3 Ô P2 Ó P6 Ô P1 <,
8P1 Ô P4 Ó P2 Ô P6 , P4 Ô P3 Ó P6 Ô P5 , P3 Ô P2 Ó P5 Ô P1 <,
8P1 Ô P4 Ó P5 Ô P2 , P4 Ô P3 Ó P2 Ô P6 , P3 Ô P5 Ó P6 Ô P1 <,
8P1 Ô P4 Ó P6 Ô P2 , P4 Ô P3 Ó P2 Ô P5 , P3 Ô P6 Ó P5 Ô P1 <,
8P1 Ô P4 Ó P2 Ô P3 , P4 Ô P5 Ó P3 Ô P6 , P5 Ô P2 Ó P6 Ô P1 <,
8P1 Ô P4 Ó P3 Ô P2 , P4 Ô P5 Ó P2 Ô P6 , P5 Ô P3 Ó P6 Ô P1 <,
8P1 Ô P4 Ó P2 Ô P3 , P4 Ô P6 Ó P3 Ô P5 , P6 Ô P2 Ó P5 Ô P1 <,
8P1 Ô P4 Ó P3 Ô P2 , P4 Ô P6 Ó P2 Ô P5 , P6 Ô P3 Ó P5 Ô P1 <,
8P1 Ô P5 Ó P3 Ô P4 , P5 Ô P2 Ó P4 Ô P6 , P2 Ô P3 Ó P6 Ô P1 <,
8P1 Ô P5 Ó P4 Ô P3 , P5 Ô P2 Ó P3 Ô P6 , P2 Ô P4 Ó P6 Ô P1 <,
8P1 Ô P5 Ó P2 Ô P4 , P5 Ô P3 Ó P4 Ô P6 , P3 Ô P2 Ó P6 Ô P1 <,
8P1 Ô P5 Ó P4 Ô P2 , P5 Ô P3 Ó P2 Ô P6 , P3 Ô P4 Ó P6 Ô P1 <,
8P1 Ô P5 Ó P2 Ô P3 , P5 Ô P4 Ó P3 Ô P6 , P4 Ô P2 Ó P6 Ô P1 <,
8P1 Ô P5 Ó P3 Ô P2 , P5 Ô P4 Ó P2 Ô P6 , P4 Ô P3 Ó P6 Ô P1 <<
This list comprises 3 × 60 = 180 formulae. But some of them define the same point, because we can interchange the factors in the regressive product, or interchange the factors in each exterior product, without affecting the result. To reduce these formulae to a list of unique formulae, we can apply GrassmannSimplify to put the products into canonical order, resulting in the duplicates only differing by a sign (which is unimportant).
¯"2 ; pascalPointFormulae =
  Union[¯$[Flatten[allPascalPointFormulae]] /. -p_ :> p]
{P1 ∧ P2 ∨ P3 ∧ P4, P1 ∧ P2 ∨ P3 ∧ P5, P1 ∧ P2 ∨ P3 ∧ P6,
P1 ∧ P2 ∨ P4 ∧ P5, P1 ∧ P2 ∨ P4 ∧ P6, P1 ∧ P2 ∨ P5 ∧ P6,
P1 ∧ P3 ∨ P2 ∧ P4, P1 ∧ P3 ∨ P2 ∧ P5, P1 ∧ P3 ∨ P2 ∧ P6,
P1 ∧ P3 ∨ P4 ∧ P5, P1 ∧ P3 ∨ P4 ∧ P6, P1 ∧ P3 ∨ P5 ∧ P6, P1 ∧ P4 ∨ P2 ∧ P3,
P1 ∧ P4 ∨ P2 ∧ P5, P1 ∧ P4 ∨ P2 ∧ P6, P1 ∧ P4 ∨ P3 ∧ P5, P1 ∧ P4 ∨ P3 ∧ P6,
P1 ∧ P4 ∨ P5 ∧ P6, P1 ∧ P5 ∨ P2 ∧ P3, P1 ∧ P5 ∨ P2 ∧ P4, P1 ∧ P5 ∨ P2 ∧ P6,
P1 ∧ P5 ∨ P3 ∧ P4, P1 ∧ P5 ∨ P3 ∧ P6, P1 ∧ P5 ∨ P4 ∧ P6, P1 ∧ P6 ∨ P2 ∧ P3,
P1 ∧ P6 ∨ P2 ∧ P4, P1 ∧ P6 ∨ P2 ∧ P5, P1 ∧ P6 ∨ P3 ∧ P4, P1 ∧ P6 ∨ P3 ∧ P5,
P1 ∧ P6 ∨ P4 ∧ P5, P2 ∧ P3 ∨ P4 ∧ P5, P2 ∧ P3 ∨ P4 ∧ P6, P2 ∧ P3 ∨ P5 ∧ P6,
P2 ∧ P4 ∨ P3 ∧ P5, P2 ∧ P4 ∨ P3 ∧ P6, P2 ∧ P4 ∨ P5 ∧ P6, P2 ∧ P5 ∨ P3 ∧ P4,
P2 ∧ P5 ∨ P3 ∧ P6, P2 ∧ P5 ∨ P4 ∧ P6, P2 ∧ P6 ∨ P3 ∧ P4, P2 ∧ P6 ∨ P3 ∧ P5,
P2 ∧ P6 ∨ P4 ∧ P5, P3 ∧ P4 ∨ P5 ∧ P6, P3 ∧ P5 ∨ P4 ∧ P6, P3 ∧ P6 ∨ P4 ∧ P5}
Length[pascalPointFormulae]

45
There are thus in general 45 distinct Pascal point formulae for the hexagons of a conic. Of course,
since at this stage we have yet to discuss Pascal lines, these formulae are also valid for the 60
hexagons formed from any six points - not necessarily those constrained to lie on a conic.
It should be noted that these are distinct formulae. The number of actual distinct points resulting
from their application to any six given points may be less. Some may be duplicates, for example,
when the hexagon has some symmetry, and some may reduce to vectors when the hexagon sides
are parallel.
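The count of 45 can also be confirmed combinatorially in a few lines of Python. A formula (Pa ∧ Pb) ∨ (Pc ∧ Pd) is determined by an unordered pair of unordered point-pairs, so we canonicalize each of the 180 formulae as a set of sets and count the distinct ones:

```python
from itertools import permutations

# Each of the 60 hexagons contributes three Pascal-point formulae
# (Pa ∧ Pb) ∨ (Pc ∧ Pd), built from its pairs of opposite sides. Both the
# exterior factors and the regressive factors may be interchanged, so a
# formula is canonically a frozenset of two frozensets of point labels.

hexagons = [(1,) + p for p in permutations((2, 3, 4, 5, 6))
            if p <= tuple(reversed(p))]          # the 60 hexagons

formulae = set()
for h in hexagons:
    for i in range(3):
        side1 = frozenset((h[i], h[(i + 1) % 6]))          # side S_{i+1}
        side2 = frozenset((h[(i + 3) % 6], h[(i + 4) % 6]))  # opposite side
        formulae.add(frozenset((side1, side2)))

count = len(formulae)   # 45 distinct Pascal point formulae
```

This matches the direct count: there are C(6,2) · C(4,2) / 2 = 45 unordered pairs of disjoint point-pairs, and each arises from some hexagon.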
For the ellipse and hexagon points discussed above, we can apply the formulae to get the actual
points.
pascalPoints =
  ToPointForm[¯1[pascalPointFormulae //. EllipsePoints]]
4
:¯# + 4 - e1 - 3 e2 , ¯# + I8 - 4 3 M e1 ,
3
3
¯# + 2 I-1 + 3 M e1 - I-1 + 3 M e2 , ¯# + 4 I-1 + 3 M e1 - 3 3 e2 ,
2
4 e1 9
¯# + -2 3 e2 , ¯# + 2 I1 + 3 M e1 - I1 + 3 M e2 ,
3 2
4
¯# + 4 + e1 - 3 e2 , ¯# + 4 I2 + 3 M e1 - 3 I1 + 3 M e2 ,
3
3 3 e2
¯# + 2 I2 + 3 M e1 - , -96 e1 + 72 e2 ,
2
¯# + 4 I1 + 3 M e1 - 3 3 e2 , ¯# - 4 I2 + 3 M e1 + 3 I3 + 3 M e2 ,
3 3 e2
¯# - 3 3 e2 , ¯# - 3 e2 , ¯# - , ¯#, ¯# - 3 e2 ,
2
¯# - 3 3 e2 , ¯# + 4 I2 + 3 M e1 + 3 I3 + 3 M e2 ,
3 3 e2
¯# - 4 I1 + 3 M e1 - 3 3 e2 , ¯# - 2 I2 + 3 M e1 - ,
2
96 e1 + 72 e2 , ¯# - 4 I2 + 3 M e1 - 3 I1 + 3 M e2 ,
4 9
¯# - I3 + 3 M e1 - 3 e2 , ¯# - 2 I1 + 3 M e1 - I1 + 3 M e2 ,
3 2
4 e1 3
¯# - -2 3 e2 , ¯# + I2 - 2 3 M e1 - I-1 + 3 M e2 ,
3 2
¯# + I4 - 4 3 M e1 - 3 3 e2 , ¯# + 4 I-2 + 3 M e1 ,
4
¯# + -4 + e1 - 3 e2 , ¯# + I8 - 4 3 M e1 + 3 I-3 + 3 M e2 ,
3
9
¯# + 2 I-1 + 3 M e1 - I-1 + 3 M e2 , ¯# - 3 3 e2 ,
2
3
¯# + 4 I2 + 3 M e1 , ¯# - 2 I1 + 3 M e1 - I1 + 3 M e2 ,
2
9
¯# + I2 - 2 3 M e1 - I-1 + 3 M e2 ,
2
¯# + I8 - 4 3 M e1 + I3 - 3 3 M e2 , ¯# - 3 e2 ,
3 3 3 e2
¯# + 2 I1 + 3 M e1 - I1 + 3 M e2 , ¯# + I4 - 2 3 M e1 - ,
2 2
3 3 e2
-48 3 e1 , ¯# + 2 I-2 + 3 M e1 - ,
2
¯# + 4 I-2 + 3 M e1 + 3 I-3 + 3 M e2 ,
¯# - 4 I2 + 3 M e1 , ¯# + 4 I-2 + 3 M e1 + I3 - 3 3 M e2 >
Two of these points occur in triplicate, ¯# - 3 e2 and ¯# - 3√3 e2; and three of the entities are vectors (points at infinity).

{-96 e1 + 72 e2, 96 e1 + 72 e2, -48√3 e1}
Thus, when we plot these points we should be able to count 38 points in blue, and 3 vectors
(without their arrowheads) in green.
Ï Pascal lines
Since there are 60 essentially different hexagons one can form from 6 given points, there are thus
60 corresponding Pascal lines. We can display the equations for these lines.
PascalLineEquations =
  allPascalPointFormulae /. {Q1_, Q2_, Q3_} :> Q1 ∧ Q2 ∧ Q3 == 0
{(P1 ∧ P2 ∨ P4 ∧ P5) ∧ (P2 ∧ P3 ∨ P5 ∧ P6) ∧ (P3 ∧ P4 ∨ P6 ∧ P1) == 0,
(P1 ∧ P2 ∨ P4 ∧ P6) ∧ (P2 ∧ P3 ∨ P6 ∧ P5) ∧ (P3 ∧ P4 ∨ P5 ∧ P1) == 0,
(P1 ∧ P2 ∨ P5 ∧ P4) ∧ (P2 ∧ P3 ∨ P4 ∧ P6) ∧ (P3 ∧ P5 ∨ P6 ∧ P1) == 0,
(P1 ∧ P2 ∨ P5 ∧ P6) ∧ (P2 ∧ P3 ∨ P6 ∧ P4) ∧ (P3 ∧ P5 ∨ P4 ∧ P1) == 0,
(P1 ∧ P2 ∨ P6 ∧ P4) ∧ (P2 ∧ P3 ∨ P4 ∧ P5) ∧ (P3 ∧ P6 ∨ P5 ∧ P1) == 0,
(P1 ∧ P2 ∨ P6 ∧ P5) ∧ (P2 ∧ P3 ∨ P5 ∧ P4) ∧ (P3 ∧ P6 ∨ P4 ∧ P1) == 0,
(P1 ∧ P2 ∨ P3 ∧ P5) ∧ (P2 ∧ P4 ∨ P5 ∧ P6) ∧ (P4 ∧ P3 ∨ P6 ∧ P1) == 0,
(P1 ∧ P2 ∨ P3 ∧ P6) ∧ (P2 ∧ P4 ∨ P6 ∧ P5) ∧ (P4 ∧ P3 ∨ P5 ∧ P1) == 0,
(P1 ∧ P2 ∨ P5 ∧ P3) ∧ (P2 ∧ P4 ∨ P3 ∧ P6) ∧ (P4 ∧ P5 ∨ P6 ∧ P1) == 0,
(P1 ∧ P2 ∨ P5 ∧ P6) ∧ (P2 ∧ P4 ∨ P6 ∧ P3) ∧ (P4 ∧ P5 ∨ P3 ∧ P1) == 0,
(P1 ∧ P2 ∨ P6 ∧ P3) ∧ (P2 ∧ P4 ∨ P3 ∧ P5) ∧ (P4 ∧ P6 ∨ P5 ∧ P1) == 0,
(P1 ∧ P2 ∨ P6 ∧ P5) ∧ (P2 ∧ P4 ∨ P5 ∧ P3) ∧ (P4 ∧ P6 ∨ P3 ∧ P1) == 0,
,
We can verify these equations for the points we have chosen on the ellipse.
Union[Simplify[¯1[PascalLineEquations /. EllipsePoints]]]

{True}
To generate formulae for the Pascal lines themselves we only have to choose any two of its Pascal
points. Here we choose the first two.
PascalLineFormulae =
  allPascalPointFormulae /. {Q1_, Q2_, Q3_} :> Q1 ∧ Q2
8HP1 Ô P2 Ó P4 Ô P5 L Ô HP2 Ô P3 Ó P5 Ô P6 L, HP1 Ô P2 Ó P4 Ô P6 L Ô HP2 Ô P3 Ó P6 Ô P5 L,
HP1 Ô P2 Ó P5 Ô P4 L Ô HP2 Ô P3 Ó P4 Ô P6 L, HP1 Ô P2 Ó P5 Ô P6 L Ô HP2 Ô P3 Ó P6 Ô P4 L,
HP1 Ô P2 Ó P6 Ô P4 L Ô HP2 Ô P3 Ó P4 Ô P5 L, HP1 Ô P2 Ó P6 Ô P5 L Ô HP2 Ô P3 Ó P5 Ô P4 L,
HP1 Ô P2 Ó P3 Ô P5 L Ô HP2 Ô P4 Ó P5 Ô P6 L, HP1 Ô P2 Ó P3 Ô P6 L Ô HP2 Ô P4 Ó P6 Ô P5 L,
HP1 Ô P2 Ó P5 Ô P3 L Ô HP2 Ô P4 Ó P3 Ô P6 L, HP1 Ô P2 Ó P5 Ô P6 L Ô HP2 Ô P4 Ó P6 Ô P3 L,
HP1 Ô P2 Ó P6 Ô P3 L Ô HP2 Ô P4 Ó P3 Ô P5 L, HP1 Ô P2 Ó P6 Ô P5 L Ô HP2 Ô P4 Ó P5 Ô P3 L,
HP1 Ô P2 Ó P3 Ô P4 L Ô HP2 Ô P5 Ó P4 Ô P6 L, HP1 Ô P2 Ó P3 Ô P6 L Ô HP2 Ô P5 Ó P6 Ô P4 L,
HP1 Ô P2 Ó P4 Ô P3 L Ô HP2 Ô P5 Ó P3 Ô P6 L, HP1 Ô P2 Ó P4 Ô P6 L Ô HP2 Ô P5 Ó P6 Ô P3 L,
HP1 Ô P2 Ó P6 Ô P3 L Ô HP2 Ô P5 Ó P3 Ô P4 L, HP1 Ô P2 Ó P6 Ô P4 L Ô HP2 Ô P5 Ó P4 Ô P3 L,
HP1 Ô P2 Ó P3 Ô P4 L Ô HP2 Ô P6 Ó P4 Ô P5 L, HP1 Ô P2 Ó P3 Ô P5 L Ô HP2 Ô P6 Ó P5 Ô P4 L,
HP1 Ô P2 Ó P4 Ô P3 L Ô HP2 Ô P6 Ó P3 Ô P5 L, HP1 Ô P2 Ó P4 Ô P5 L Ô HP2 Ô P6 Ó P5 Ô P3 L,
HP1 Ô P2 Ó P5 Ô P3 L Ô HP2 Ô P6 Ó P3 Ô P4 L, HP1 Ô P2 Ó P5 Ô P4 L Ô HP2 Ô P6 Ó P4 Ô P3 L,
HP1 Ô P3 Ó P4 Ô P5 L Ô HP3 Ô P2 Ó P5 Ô P6 L, HP1 Ô P3 Ó P4 Ô P6 L Ô HP3 Ô P2 Ó P6 Ô P5 L,
HP1 Ô P3 Ó P5 Ô P4 L Ô HP3 Ô P2 Ó P4 Ô P6 L, HP1 Ô P3 Ó P5 Ô P6 L Ô HP3 Ô P2 Ó P6 Ô P4 L,
HP1 Ô P3 Ó P6 Ô P4 L Ô HP3 Ô P2 Ó P4 Ô P5 L, HP1 Ô P3 Ó P6 Ô P5 L Ô HP3 Ô P2 Ó P5 Ô P4 L,
HP1 Ô P3 Ó P2 Ô P5 L Ô HP3 Ô P4 Ó P5 Ô P6 L, HP1 Ô P3 Ó P2 Ô P6 L Ô HP3 Ô P4 Ó P6 Ô P5 L,
HP1 Ô P3 Ó P5 Ô P2 L Ô HP3 Ô P4 Ó P2 Ô P6 L, HP1 Ô P3 Ó P6 Ô P2 L Ô HP3 Ô P4 Ó P2 Ô P5 L,
HP1 Ô P3 Ó P2 Ô P4 L Ô HP3 Ô P5 Ó P4 Ô P6 L, HP1 Ô P3 Ó P2 Ô P6 L Ô HP3 Ô P5 Ó P6 Ô P4 L,
HP1 Ô P3 Ó P4 Ô P2 L Ô HP3 Ô P5 Ó P2 Ô P6 L, HP1 Ô P3 Ó P6 Ô P2 L Ô HP3 Ô P5 Ó P2 Ô P4 L,
HP1 Ô P3 Ó P2 Ô P4 L Ô HP3 Ô P6 Ó P4 Ô P5 L, HP1 Ô P3 Ó P2 Ô P5 L Ô HP3 Ô P6 Ó P5 Ô P4 L,
HP1 Ô P3 Ó P4 Ô P2 L Ô HP3 Ô P6 Ó P2 Ô P5 L, HP1 Ô P3 Ó P5 Ô P2 L Ô HP3 Ô P6 Ó P2 Ô P4 L,
HP1 Ô P4 Ó P3 Ô P5 L Ô HP4 Ô P2 Ó P5 Ô P6 L, HP1 Ô P4 Ó P3 Ô P6 L Ô HP4 Ô P2 Ó P6 Ô P5 L,
HP1 Ô P4 Ó P5 Ô P3 L Ô HP4 Ô P2 Ó P3 Ô P6 L, HP1 Ô P4 Ó P6 Ô P3 L Ô HP4 Ô P2 Ó P3 Ô P5 L,
HP1 Ô P4 Ó P2 Ô P5 L Ô HP4 Ô P3 Ó P5 Ô P6 L, HP1 Ô P4 Ó P2 Ô P6 L Ô HP4 Ô P3 Ó P6 Ô P5 L,
HP1 Ô P4 Ó P5 Ô P2 L Ô HP4 Ô P3 Ó P2 Ô P6 L, HP1 Ô P4 Ó P6 Ô P2 L Ô HP4 Ô P3 Ó P2 Ô P5 L,
HP1 Ô P4 Ó P2 Ô P3 L Ô HP4 Ô P5 Ó P3 Ô P6 L, HP1 Ô P4 Ó P3 Ô P2 L Ô HP4 Ô P5 Ó P2 Ô P6 L,
HP1 Ô P4 Ó P2 Ô P3 L Ô HP4 Ô P6 Ó P3 Ô P5 L, HP1 Ô P4 Ó P3 Ô P2 L Ô HP4 Ô P6 Ó P2 Ô P5 L,
HP1 Ô P5 Ó P3 Ô P4 L Ô HP5 Ô P2 Ó P4 Ô P6 L, HP1 Ô P5 Ó P4 Ô P3 L Ô HP5 Ô P2 Ó P3 Ô P6 L,
HP1 Ô P5 Ó P2 Ô P4 L Ô HP5 Ô P3 Ó P4 Ô P6 L, HP1 Ô P5 Ó P4 Ô P2 L Ô HP5 Ô P3 Ó P2 Ô P6 L,
HP1 Ô P5 Ó P2 Ô P3 L Ô HP5 Ô P4 Ó P3 Ô P6 L, HP1 Ô P5 Ó P3 Ô P2 L Ô HP5 Ô P4 Ó P2 Ô P6 L<
We can explicitly compute the Pascal lines (bound vectors) for the ellipse by substituting in our
chosen points.
3
¯# + 2 I1 + 3 M e1 - I1 + 3 M e2 ,
2
4
¯# + 4 - e1 - 3 e2 Ô I¯# - 3 e2 M,
3
4 e1
¯# + -2 3 e2 Ô I¯# - 3 e2 M,
3
3
¯# + 2 I-1 + 3 M e1 - I-1 + 3 M e2 Ô
2
I¯# - 4 I-2 + 3 M e1 - 3 I-1 + 3 M e2 M,
4 e1
¯# + -2 3 e2 Ô I¯# - 4 I-2 + 3 M e1 - 3 I-1 + 3 M e2 M,
3
4 3 3 e2
¯# + 4 - e1 - 3 e2 Ô ¯# + 2 I-2 + 3 M e1 - ,
3 2
3 3 e2
I¯# - 4 I-2 + 3 M e1 M Ô ¯# + 2 I-2 + 3 M e1 - ,
2
4
¯# + 4 - e1 - 3 e2 Ô I-48 3 e1 M,
3
I¯# + 4 I-1 + 3 M e1 - 3 3 e2 M Ô I48 3 e1 M,
3 3 e2
I¯# - 4 I-2 + 3 M e1 M Ô ¯# - 2 I-2 + 3 M e1 - ,
2
3 3 e2
I¯# + 4 I-1 + 3 M e1 - 3 3 e2 M Ô ¯# - 2 I-2 + 3 M e1 - ,
2
H-96 e1 + 72 e2 L Ô I¯# - 3 3 e2 M,
I¯# + 4 I1 + 3 M e1 - 3 3 e2 M Ô I¯# - 3 3 e2 M,
9
H96 e1 - 72 e2 L Ô ¯# + 2 I-1 + 3 M e1 - I-1 + 3 M e2 ,
2
I¯# - 4 I2 + 3 M e1 + 3 I3 + 3 M e2 M Ô
9
¯# + 2 I-1 + 3 M e1 - I-1 + 3 M e2 ,
2
I¯# + 4 I1 + 3 M e1 - 3 3 e2 M Ô
I¯# - 4 I-2 + 3 M e1 + 3 I-3 + 3 M e2 M,
I¯# - 4 I2 + 3 M e1 + 3 I3 + 3 M e2 M Ô
I¯# - 4 I-2 + 3 M e1 + 3 I-3 + 3 M e2 M,
I¯# + 4 I2 + 3 M e1 - 3 I1 + 3 M e2 M Ô
I¯# + 4 I-2 + 3 M e1 + 3 I-3 + 3 M e2 M,
3 3 e2
¯# + 2 I2 + 3 M e1 - Ô
2
3 3 e2 3 3 e2
¯# - 2 I-2 + 3 M e1 - , ¯# + 2 I2 + 3 M e1 - Ô
2 2
3 3 e2
¯# - Ô I¯# + 4 I-2 + 3 M e1 + 3 I-3 + 3 M e2 M,
2
3 3 e2
I¯# - 3 e2 M Ô ¯# - 2 I-2 + 3 M e1 - ,
2
,
3 3 e2
¯# - Ô I¯# - 4 I-2 + 3 M e1 - 3 I-1 + 3 M e2 M,
2
3 3 e2
I¯# - 3 3 e2 M Ô ¯# + 2 I-2 + 3 M e1 - ,
2
I¯# - 3 3 e2 M Ô I¯# - 4 I2 + 3 M e1 M,
3
I¯# - 3 3 e2 M Ô ¯# + 2 I1 + 3 M e1 - I1 + 3 M e2 ,
2
3
H96 e1 + 72 e2 L Ô ¯# + 2 I1 + 3 M e1 - I1 + 3 M e2 ,
2
H-96 e1 - 72 e2 L Ô I¯# - 3 e2 M,
I¯# - 4 I1 + 3 M e1 - 3 3 e2 M Ô I¯# - 4 I2 + 3 M e1 M,
I¯# - 4 I1 + 3 M e1 - 3 3 e2 M Ô I-48 3 e1 M,
I¯# + 4 I2 + 3 M e1 + 3 I3 + 3 M e2 M Ô
I¯# + 4 I-2 + 3 M e1 - 3 I-1 + 3 M e2 M,
I¯# + 4 I2 + 3 M e1 + 3 I3 + 3 M e2 M Ô
3 3 e2
¯# + 2 I-2 + 3 M e1 - >
2
We can plot these 60 Pascal lines on their Pascal points. Lines normally have an infinite extent, but
to make it clearer which lines correspond to which points, we have plotted them spanning just their
three points, where of course one of the points may be at infinity. Thus if the three Pascal points
(or vectors) of a Pascal line were Pa, Pb, and Pc we have plotted the line segments (Pa, Pb), (Pb, Pc), and (Pc, Pa). If this is being read as a Mathematica notebook, actual line expressions
are discernible as Tooltips. However care should be taken when attempting to correlate the
expressions shown with lines, as some lines may overlap.
Sixty Pascal lines each joining its three points or points at infinity
Finally, we plot the sixty Pascal lines and bivectors of a regular hexagon. In this case we have
shown the arrowheads on the line-bound vectors and on the bivectors. The lack of apparent
symmetry is an artifact caused by the vectors and bivectors showing more information (their signs,
weights and orientation) than the geometric lines and 2-directions that they are being used to
represent.
4.14 Summary
In this chapter we have explored the implications of interpreting one of the elements of the
underlying linear space as an (origin) point (and the rest as vectors). This is a slightly different
approach to that adopted by Grassmann and workers in the earlier Grassmannian tradition, of
interpreting all the basis elements of the underlying linear space as points, and then treating any
consequent point differences as vectors.
Although the modern context is to interpret elements of a linear space most commonly as vectors,
workers in the early nineteenth century treated the point as the paramount entity of a linear space.
Indeed it was Hamilton who coined the term vector in 1853 as noted in the Historical Note at the
beginning of this chapter.
As we hope to have demonstrated in this chapter, the modern context is significantly poorer for
ignoring the possibility of interpreting the elements of a linear space such that they can include
points. We can probably trace this development back to Gibbs' early work before he became
familiar with Grassmann's work, where he unwrapped Hamilton's quaternion operation into the dot
and cross product.
Of course, this chapter involves no metric concepts other than the comparison of weights of points
(their scalar factors) or the relative lengths of vectors in the same direction. This has enabled us to
see what types of geometric theorems and constructions can be done without the concept of a
metric. Or, what is equivalent, with an arbitrary metric. The notion of congruence has been central.
The next two chapters will introduce the two seminal concepts with which we will explore metric geometry: the complement, and the interior product. Armed with these, we will also be able to define the notions of length and angle, and more specifically, orthogonality. Because the Grassmann algebra extends the vector algebra to higher grade entities, we will find these notions also extend naturally to higher grade entities.
5 The Complement
5.1 Introduction
Up to this point various linear spaces and the dual exterior and regressive product operations have
been introduced. The elements of these spaces were incommensurable unless they were congruent,
that is, nowhere was there involved the concept of measure or magnitude of an element by which it
could be compared with any other element. Thus the subject of the last chapter on Geometric
Interpretations was explicitly non-metric geometry; or, to put it another way, it was what geometry
is before the ability to compare or measure is added.
The question then arises, how do we associate a measure with the elements of a Grassmann algebra
in a consistent way? Of course, we already know the approach that has developed over the last
century, that of defining a metric tensor. In this book we will indeed define a metric tensor, but we
will take an approach which develops from the concepts of the exterior product and the duality
operations which are the foundations of the Grassmann algebra. This will enable us to see how the
metric tensor on the underlying linear space generates metric tensors on the exterior linear spaces
of higher grade; the implications of the symmetry of the metric tensor; and how consistently to
generalize the notion of inner product to elements of arbitrary grade.
One of the consequences of the anti-symmetry of the exterior product of 1-elements is that the exterior linear space of m-elements has the same dimension as the exterior linear space of (n-m)-elements. We have already seen this property evidenced in the notions of duality and cobasis element. And one of the consequences of the notion of duality is that the regressive product of an m-element and an (n-m)-element is a scalar. Thus there is the opportunity of defining for each m-element a corresponding 'co-m-element' of grade n-m such that the regressive product of these two elements gives a scalar. We will see that this scalar measures the square of the 'magnitude' of the m-element or the (n-m)-element, and corresponds to the inner product of either of them with itself.
We will also see that the notion of orthogonality is defined by the correspondence between m-
elements and their 'co-m-elements'. But most importantly, the definition of this inner product as a
regressive product of an m-element with a 'co-m-element' is immediately generalizable to elements
of arbitrary grade, thus permitting a theory of interior products to be developed which is consistent
with the exterior and regressive product axioms and which, via the notion of 'co-m-element', leads
to explicit and easily derived formulae between elements of arbitrary (and possibly different) grade.
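As a concrete illustration (not taken from the book), the following Python sketch computes a Euclidean 'co-m-element' of a basis m-element e_i1 ∧ … ∧ e_im in n dimensions: the basis element on the complementary indices, signed so that the exterior product of the element with its complement is +e_1 ∧ … ∧ e_n. It assumes the Euclidean metric g_ij = δ_ij, and the names perm_sign and complement are our own.

```python
# Euclidean complement of a basis m-element, represented as a sorted tuple
# of indices. The complement is the complementary index tuple, signed by
# the permutation that reorders (indices, rest) into (1, 2, ..., n).

def perm_sign(seq):
    # Sign of the permutation seq, counted by inversions.
    inv = sum(1 for i in range(len(seq))
                for j in range(i + 1, len(seq)) if seq[i] > seq[j])
    return -1 if inv % 2 else 1

def complement(indices, n):
    rest = tuple(i for i in range(1, n + 1) if i not in indices)
    return perm_sign(tuple(indices) + rest), rest

# In 3 dimensions: the complement of e1 ∧ e2 is +e3, while the complement
# of e2 is -(e1 ∧ e3), i.e. e3 ∧ e1, since e2 ∧ e3 ∧ e1 = +e1 ∧ e2 ∧ e3.
s12, c12 = complement((1, 2), 3)   # (+1, (3,))
s2, c2 = complement((2,), 3)       # (-1, (1, 3))
```

Applying the map twice returns the original basis element up to the sign (-1)^(m(n-m)), a property that will reappear when the complement is developed formally.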
The foundation of the notion of measure or metric then is the notion of 'co-m-element'. In this book
we use the term complement rather than 'co-m-element'. In this chapter we will develop the notion
of complement in preparation for the development of the notions of interior product and
orthogonality in the next chapter.
The complement of an element is denoted with a horizontal bar over the element. For example, the
complements of a, x + y and x ∧ y are denoted by writing a bar over a, over x + y, and over x ∧ y respectively.
Finally, it should be noted that the term 'complement' may be used either to refer to an operation
(the operation of taking the complement of an element), or to the element itself (which is the result
of the operation).
Ï Historical Note
Grassmann introduced the notion of complement (Ergänzung) into the Ausdehnungslehre of 1862
[Grassmann 1862]. He denoted the complement of an element x by preceding it with a vertical bar, viz |x.
For mnemonic reasons, particularly in the derivation of formulas using the Complement Axiom, the notation
for the complement used in this book is rather the horizontal bar: x.
In discussing the complement, Grassmann defines the product of the n basis elements (the basis n-element)
to be unity. That is [e1 e2 … en] = 1 or, in the present notation, e1 ∧ e2 ∧ … ∧ en = 1. Since Grassmann discussed only the Euclidean complement (equivalent to imposing a Euclidean metric gij = δij), this statement in the present notation is equivalent to the complement of 1 being 1. The introduction of such an identity, however, destroys the essential duality between L_m and L_(n-m), which requires rather the identity
Grassmann's term Ergänzung may be translated as either complement or supplement (as well as other
meanings, for example completion). Among the earlier workers in the Grassmannian tradition, Whitehead
used supplement, while Hyde used complement. Kannenberg, in his excellent translation of the
Ausdehnungslehre of 1862, has used supplement.
Whitehead also gives an interesting historical note on extensions of the Ausdehnungslehre to general metric
concepts. [Book VI (Theory of Metrics) in A Treatise on Universal Algebra (p 369)]
In more modern works the Ergänzung for general metrics has become known as the Hodge Star operator.
But we do not use the term here since our aim is to develop the concept by showing how it can be built
straightforwardly on the foundations laid by Grassmann.
In this text we have chosen to use the word complement for its more recently added evocative meanings in set theory and linear algebra (viz. orthogonal complement) and for its mnemonic evocation of the concept of co-m-element; but most especially to differentiate it slightly from Grassmann's original use as may be found in Kannenberg's translation, because our aim is to extend its meaning to include a more general metric (the Riemannian metric) than Grassmann initially envisaged.
$$\alpha_m \in \underset{m}{\Lambda} \;\Rightarrow\; \overline{\alpha_m} \in \underset{n-m}{\Lambda} \qquad (5.1)$$

The grade of the complement of an element is the complementary grade of the element. (The complementary grade of an m-element in an algebra whose underlying linear space has dimension n is n−m.)
For scalars a and b, the complement of a sum of elements (perhaps of different grades) is the sum
of the complements of the elements. The complement of a scalar multiple of an element is the
scalar multiple of the complement of the element.
Note that for the terms on each side of expression 5.3 to be non-zero we require m+k ≤ n, while in expression 5.4 we require m+k ≥ n.
Expressions 5.3 and 5.4 are duals of each other. We call these dual expressions the complement
axiom. Note its enticing similarity to De Morgan's law in Boolean algebra. If we wish, we can
confirm this duality by applying the GrassmannAlgebra function Dual.
Dual[$\overline{\alpha_m \wedge \beta_k} = \overline{\alpha_m} \vee \overline{\beta_k}$]

$$\overline{\alpha_m \vee \beta_k} = \overline{\alpha_m} \wedge \overline{\beta_k}$$
This axiom is of central importance in the development of the properties of the complement and of the interior, inner and scalar products, and of formulae relating these with the exterior and regressive products. In particular, it permits us to generate consistently the complements of basis m-elements from the complements of basis 1-elements, and hence, via the linearity axiom, the complements of arbitrary elements.
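To make this generation of complements concrete, here is a small Python sketch for the Euclidean case (our own illustration, not part of the GrassmannAlgebra package): a basis blade is represented by its tuple of indices, and its complement is the blade on the remaining indices, signed so that blade ∧ complement = +(e1 ∧ … ∧ en).

```python
def perm_sign(seq):
    """Sign of the permutation sorting seq (a sequence of distinct ints)."""
    sign = 1
    items = list(seq)
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] > items[j]:
                sign = -sign
    return sign

def euclidean_complement(indices, n):
    """Euclidean complement of the blade e_i1 ^ ... ^ e_im in n-space:
    the blade on the remaining indices, signed so that
    blade ^ complement = +(e_1 ^ ... ^ e_n)."""
    rest = tuple(i for i in range(1, n + 1) if i not in indices)
    return perm_sign(tuple(indices) + rest), rest

# Matches the 3-space palette of Section 5.4:
# complement of e2 is -(e1 ^ e3); complement of e1 ^ e3 is -e2.
print(euclidean_complement((2,), 3))     # (-1, (1, 3))
print(euclidean_complement((1, 3), 3))   # (-1, (2,))
```

Applying the function twice reproduces, for the Euclidean case, the sign $(-1)^{m(n-m)}$ discussed in Section 5.6.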
The forms 5.3 and 5.4 may be written for any number of elements. To see this, let $\beta_k = \gamma_p \wedge \delta_q$ and apply the axiom again. In general then, expressions 5.3 and 5.4 may be stated in the equivalent forms:
In Grassmann's work, this axiom was hidden in his notation. However, since modern notation
explicitly distinguishes the progressive and regressive products, this axiom needs to be explicitly
stated.
Axiom 1 says that the complement of an m-element is an (n−m)-element. Clearly then the complement of an (n−m)-element is an m-element. Thus the complement of the complement of an m-element is itself an m-element.
In the interests of symmetry and simplicity we will require that the complement of the complement of an element be equal (apart from a possible sign) to the element itself. Although consistent algebras could no doubt be developed by rejecting this axiom, it will turn out to be an essential underpinning to the development of the standard Riemannian metric concepts to which we are accustomed. For example, its satisfaction will require the metric tensor to be symmetric.
It will turn out that, in compliance with standard results, the index f(m,n) is m(n−m). But this result is more in the nature of a theorem than an axiom. We discuss this further in Section 5.6. Although this text does not consider pseudo-Riemannian metrics, the equivalent value for the index f would in that case turn out to be m(n−m)+s, where s is the signature of the metric.
In Chapter 3: The Regressive Product, we showed in Section 3.3 that the unit n-element is congruent to any basis n-element, and wrote this result in a form involving an (as yet undetermined) scalar congruence factor $\bar{c}$.

$$e_1 \wedge e_2 \wedge \cdots \wedge e_n = \bar{c}\; \underset{n}{1}$$

In the discussions and non-Mathematica derivations that follow in this chapter, it turns out to be more notationally convenient to use the inverse of this factor, which we denote $\kappa$, and rewrite this requirement as

$$\overline{1} = \underset{n}{1} = \kappa\; e_1 \wedge e_2 \wedge \cdots \wedge e_n \qquad (5.8)$$
This is the axiom which finally enables us to define the unit n-element. This axiom requires that in a metric space the unit n-element be identical to the complement of unity. The hitherto unspecified scalar constant $\kappa$ may now be determined from the specific complement mapping or metric under consideration.
The dual of this axiom is $\overline{\underset{n}{1}} = 1$, since $1$ and $\underset{n}{1}$ are duals. We can confirm this by applying the Dual function. (Note that Dual requires us to specify unambiguously that n is the dimension of the space concerned by writing it as ¯n.)

Dual[$\overline{1} = \underset{¯n}{1}$]

$$\overline{\underset{¯n}{1}} = 1$$
Hence taking the complement of expression 5.8 and using this dual axiom tells us that the
complement of the complement of unity is unity. This result clearly complies with axiom 4 .
$$\overline{\overline{1}} = \overline{\underset{n}{1}} = 1 \qquad (5.9)$$

We can now write the unit n-element as $\overline{1}$ instead of $\underset{n}{1}$ in any Grassmann algebra that has a metric. For example, in a metric space, $\overline{1}$ becomes the unit for the regressive product.

$$\alpha_m = \overline{1} \vee \alpha_m \qquad (5.10)$$

$$\alpha_m = \sum a_i\; \underset{m}{e_i} \;\Leftrightarrow\; \overline{\alpha_m} = \sum a_i\; \overline{\underset{m}{e_i}} \qquad (5.11)$$

$$\overline{e_1 \wedge e_2 \wedge e_3 \wedge e_4} = \overline{e_1 \wedge e_2} \vee \overline{e_3 \wedge e_4} = \overline{e_1 \wedge e_2 \wedge e_3} \vee \overline{e_4}$$
The complement axiom enables us to define the complement of a basis m-element in terms only of
the complements of basis 1-elements.
$$\overline{e_1 \wedge e_2 \wedge \cdots \wedge e_m} = \overline{e_1} \vee \overline{e_2} \vee \cdots \vee \overline{e_m} \qquad (5.12)$$

Thus in order to define the complement of any element in a Grassmann algebra, we need only define the complements of the basis 1-elements, that is, the correspondence between basis 1-elements and basis (n−1)-elements.

$$\overline{e_1} = a_1\; e_1 \wedge e_2 + a_2\; e_1 \wedge e_3 + a_3\; e_2 \wedge e_3$$
However, it will be much more notationally convenient to define the complements of basis 1-
elements as a linear combination of their cobasis elements. (For a definition of cobasis elements,
see Chapter 2: The Exterior Product.)
Substituting for the cobasis symbols we see that we get a somewhat notationally different, though
equivalent, form to the definition with which we began.
$$\overline{e_i} = \sum_{j=1}^{n} a_{ij}\; \underline{e_j}$$
For reasons which the reader will perhaps discern from the choice of notation involving $g_{ij}$, we extract the scalar factor $\kappa$ as an explicit factor from the coefficients of the complement mapping. We assume for simplicity that $\kappa$ is positive. It will turn out to be the reciprocal of the 'volume' (measure) of the basis n-element.

$$\overline{e_i} = \kappa \sum_{j=1}^{n} g_{ij}\; \underline{e_j} \qquad (5.13)$$
This then is the form in which we will define the complement of basis 1-elements.
The scalar $\kappa$ and the scalars $g_{ij}$ are at this point otherwise entirely arbitrary. However, we must still ensure that our definition satisfies the complement of a complement axiom 4. We will see that in order to satisfy axiom 4, some constraints will need to be imposed on $\kappa$ and the $g_{ij}$. Therefore, our task now in the ensuing sections is to determine these constraints. We begin by determining $\kappa$ in terms of the $g_{ij}$.
A consideration of axiom 5 in the form $\overline{\overline{1}} = 1$ enables us to determine a relationship between the scalar $\kappa$ and the $g_{ij}$. First we apply some axioms (in order): the complement of axiom 5, axiom 2, and axiom 3.
$$\overline{\overline{1}} = \overline{\underset{n}{1}} = \overline{\kappa\; e_1 \wedge e_2 \wedge \cdots \wedge e_n}$$
$$= \kappa\; \overline{e_1 \wedge e_2 \wedge \cdots \wedge e_n}$$
$$= \kappa\; \overline{e_1} \vee \overline{e_2} \vee \cdots \vee \overline{e_n}$$

Then we use the definition of the complement of a 1-element in terms of cobasis elements (formula 5.13):

$$= \kappa\, \kappa^n \left( \sum_{j=1}^{n} g_{1j}\, \underline{e_j} \right) \vee \left( \sum_{j=1}^{n} g_{2j}\, \underline{e_j} \right) \vee \cdots \vee \left( \sum_{j=1}^{n} g_{nj}\, \underline{e_j} \right)$$

Now because $\underline{e_i} \vee \underline{e_j}$ is equal to $-\,\underline{e_j} \vee \underline{e_i}$ (see the axioms for the regressive product), just as $e_i \wedge e_j$ is equal to $-\,e_j \wedge e_i$, we can rewrite this regressive product (see the section on determinants in Chapter 2: The Exterior Product) as

$$= \kappa\, \kappa^n \left| g_{ij} \right|\; \underline{e_1} \vee \underline{e_2} \vee \cdots \vee \underline{e_n}$$

In Chapter 3: The Regressive Product, we explore the regressive product of cobasis elements. Formula 3.41 is useful here:

$$\underline{e_1} \vee \underline{e_2} \vee \cdots \vee \underline{e_m} = \bar{c}^{\,m-1}\; \underline{e_1 \wedge e_2 \wedge \cdots \wedge e_m}$$

Put m equal to n, and $\bar{c}$ equal to $\frac{1}{\kappa}$ (as per our original definition of $\kappa$ at the beginning of this chapter), and note that the cobasis element of the basis n-element is 1, to then give

$$\overline{\overline{1}} = \kappa\, \kappa^n \left| g_{ij} \right| \frac{1}{\kappa^{n-1}}\; \underline{e_1 \wedge e_2 \wedge \cdots \wedge e_n}$$
$$= \kappa\, \kappa^n \left| g_{ij} \right| \frac{1}{\kappa^{n-1}}\; 1$$
$$= \kappa^2 \left| g_{ij} \right|$$
Ï Example in 2 dimensions

$$\overline{\overline{1}} = \overline{\underset{2}{1}} = \overline{\kappa\; e_1 \wedge e_2}$$
$$= \kappa\; \overline{e_1 \wedge e_2}$$
$$= \kappa\; \overline{e_1} \vee \overline{e_2}$$
$$= \kappa\, \kappa^2 \left| g_{ij} \right|\; \underline{e_1} \vee \underline{e_2}$$
$$= \kappa\, \kappa^2 \left| g_{ij} \right| \frac{1}{\kappa}\; \underline{e_1 \wedge e_2}$$
$$= \kappa\, \kappa^2 \left| g_{ij} \right| \frac{1}{\kappa}\; 1$$
$$= \kappa^2 \left| g_{ij} \right|$$
Ï Example in 3 dimensions

$$\overline{\overline{1}} = \overline{\underset{3}{1}} = \overline{\kappa\; e_1 \wedge e_2 \wedge e_3}$$
$$= \kappa\; \overline{e_1 \wedge e_2 \wedge e_3}$$
$$= \kappa\; \overline{e_1} \vee \overline{e_2} \vee \overline{e_3}$$
$$= \kappa\, \kappa^3 \left| g_{ij} \right|\; \underline{e_1} \vee \underline{e_2} \vee \underline{e_3}$$
$$= \kappa\, \kappa^3 \left| g_{ij} \right| \frac{1}{\kappa^2}\; \underline{e_1 \wedge e_2 \wedge e_3}$$
$$= \kappa\, \kappa^3 \left| g_{ij} \right| \frac{1}{\kappa^2}\; 1$$
$$= \kappa^2 \left| g_{ij} \right|$$
The previous section showed that in order for this definition to conform to axiom 5 ($\overline{\overline{1}} = 1$), the scalar $\kappa$ must be related to the determinant of the scalar coefficients $g_{ij}$ by

$$\kappa^2 \left| g_{ij} \right| = 1$$

Although we have not yet identified the $g_{ij}$ with the metric tensor, our assumption in this text of only considering Riemannian metrics (rather than pseudo-Riemannian metrics, for example) permits us now to further suppose that the determinant of the $g_{ij}$ is positive. Combining this with our assumption above that $\kappa$ is positive enables us to write

$$\kappa = \frac{1}{\sqrt{\left| g_{ij} \right|}}$$

Hence from this point on, we take the value of $\kappa$ to be the reciprocal of the positive square root of the determinant of the coefficients $g_{ij}$ of the complement mapping.

$$\kappa = \frac{1}{\sqrt{\left| g_{ij} \right|}} \qquad (5.14)$$
$$B = \begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{pmatrix} \qquad G = \begin{pmatrix} g_{11} & g_{12} & \cdots & g_{1n} \\ g_{21} & g_{22} & \cdots & g_{2n} \\ \vdots & \vdots & & \vdots \\ g_{n1} & g_{n2} & \cdots & g_{nn} \end{pmatrix}$$

$$\overline{e_i} = \frac{1}{\sqrt{\left| g_{ij} \right|}} \sum_{j=1}^{n} g_{ij}\; \underline{e_j}$$

$$\overline{B} = \frac{1}{\sqrt{\left| G \right|}}\; G\; \underline{B}$$

where $\underline{B}$ denotes the corresponding column of cobasis elements:

$$\begin{pmatrix} \overline{e_1} \\ \overline{e_2} \\ \vdots \\ \overline{e_n} \end{pmatrix} = \frac{1}{\sqrt{\left| G \right|}} \begin{pmatrix} g_{11} & g_{12} & \cdots & g_{1n} \\ g_{21} & g_{22} & \cdots & g_{2n} \\ \vdots & \vdots & & \vdots \\ g_{n1} & g_{n2} & \cdots & g_{nn} \end{pmatrix} \begin{pmatrix} \underline{e_1} \\ \underline{e_2} \\ \vdots \\ \underline{e_n} \end{pmatrix}$$
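As a numeric illustration of this matrix form (a Python sketch with our own names, not package code), the coefficients of each $\overline{e_i}$ in the cobasis are simply the rows of G scaled by $\kappa = 1/\sqrt{|G|}$. Here, for a 2-space:

```python
import math

def complement_coefficients(G):
    """Rows of kappa * G, kappa = 1/sqrt(det G), for a 2x2 symmetric
    metric G: row i gives the coefficients of the complement of e_i
    in the cobasis (cobasis of e1 is e2, cobasis of e2 is -e1)."""
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    kappa = 1.0 / math.sqrt(det)
    return [[kappa * gij for gij in row] for row in G]

# Euclidean metric: complement of e1 = e2, complement of e2 = -e1,
# agreeing with the Euclidean complement palette below.
print(complement_coefficients([[1.0, 0.0], [0.0, 1.0]]))
```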
2009 9 3
Grassmann Algebra Book.nb 350
𝔼2 ; ComplementPalette

Complement Palette
Basis          Complement
1              e1 ∧ e2
e1             e2
e2             -e1
e1 ∧ e2        1
𝔼3 ; ComplementPalette

Complement Palette
Basis          Complement
1              e1 ∧ e2 ∧ e3
e1             e2 ∧ e3
e2             -(e1 ∧ e3)
e3             e1 ∧ e2
e1 ∧ e2        e3
e1 ∧ e3        -e2
e2 ∧ e3        e1
e1 ∧ e2 ∧ e3   1
𝔼4 ; ComplementPalette

Complement Palette
Basis               Complement
1                   e1 ∧ e2 ∧ e3 ∧ e4
e1                  e2 ∧ e3 ∧ e4
e2                  -(e1 ∧ e3 ∧ e4)
e3                  e1 ∧ e2 ∧ e4
e4                  -(e1 ∧ e2 ∧ e3)
e1 ∧ e2             e3 ∧ e4
e1 ∧ e3             -(e2 ∧ e4)
e1 ∧ e4             e2 ∧ e3
e2 ∧ e3             e1 ∧ e4
e2 ∧ e4             -(e1 ∧ e3)
e3 ∧ e4             e1 ∧ e2
e1 ∧ e2 ∧ e3        e4
e1 ∧ e2 ∧ e4        -e3
e1 ∧ e3 ∧ e4        e2
e2 ∧ e3 ∧ e4        -e1
e1 ∧ e2 ∧ e3 ∧ e4   1
The Euclidean complement of a basis 1-element is its cobasis (n−1)-element. For a basis 1-element $e_i$ then we have:

$$\overline{e_i} = \underline{e_i} = (-1)^{i-1}\; e_1 \wedge \cdots \wedge \widehat{e_i} \wedge \cdots \wedge e_n \qquad (5.15)$$

where the symbol $\widehat{\ }$ means that the corresponding element is missing from the product.

This simple relationship extends naturally to complements of basis elements of any grade.

$$\overline{\underset{m}{e_i}} = \underline{\underset{m}{e_i}} \qquad (5.16)$$

$$\overline{1} = e_1 \wedge e_2 \wedge \cdots \wedge e_n \qquad (5.18)$$

$$\overline{e_1 \wedge e_2 \wedge \cdots \wedge e_n} = 1 \qquad (5.19)$$

Finally, we can see that exterior and regressive products of basis elements with complements take on particularly simple forms.

$$\underset{m}{e_i} \wedge \overline{\underset{m}{e_j}} = \delta_{ij}\; \overline{1} \qquad (5.21)$$

$$\underset{m}{e_i} \vee \overline{\underset{m}{e_j}} = \delta_{ij} \qquad (5.22)$$

These forms will be the basis for the definition of the inner product in the next chapter.
We note that the concept of cobasis which we introduced in Chapter 2: The Exterior Product,
despite its formal similarity to the Euclidean complement, is only a notational convenience. We do
not define it for linear combinations of elements as the definition of a complement requires.
From the basis element formulae above we can immediately derive formulae for the exterior and regressive products of general elements $\alpha_m$ and $\beta_m$.

$$\alpha_m = \sum a_i\; \underset{m}{e_i} \qquad \beta_m = \sum b_i\; \underset{m}{e_i} \qquad \overline{\alpha_m} = \sum a_i\; \overline{\underset{m}{e_i}} \qquad \overline{\beta_m} = \sum b_i\; \overline{\underset{m}{e_i}}$$

$$\alpha_m \wedge \overline{\beta_m} = \left( \sum a_i\; \underset{m}{e_i} \right) \wedge \left( \sum b_k\; \overline{\underset{m}{e_k}} \right) = \left( \sum a_i b_i \right) \overline{1}$$

$$\alpha_m \wedge \overline{\beta_m} = \left( \sum a_i b_i \right) \overline{1} \qquad (5.23)$$

Taking the complement of this formula, or else doing a derivation mutatis mutandis, leads to the regressive product form.

$$\alpha_m \vee \overline{\beta_m} = \sum a_i b_i \qquad (5.24)$$

$$\alpha_m \wedge \overline{\beta_m} = \left( \alpha_m \vee \overline{\beta_m} \right) \overline{1} \qquad (5.25)$$
In the case that $\alpha_m$ is equal to $\beta_m$, and writing $\sum a_i^2$ as $\left\| \alpha_m \right\|^2$, we obtain

$$\alpha_m \wedge \overline{\alpha_m} = \left( \sum a_i^2 \right) \overline{1} = \left\| \alpha_m \right\|^2\; \overline{1} \qquad (5.26)$$

$$\alpha_m \vee \overline{\alpha_m} = \sum a_i^2 = \left\| \alpha_m \right\|^2 \qquad (5.27)$$

$$x = \sum a_i\; e_i \qquad y = \sum b_i\; e_i$$

$$x \wedge \overline{y} = \left( \sum a_i b_i \right) \overline{1} \qquad x \vee \overline{y} = \sum a_i b_i \qquad (5.28)$$

The complement of a scalar a can then be expressed in any of the following forms:

$$\overline{a} = a\; \overline{1} = \overline{1}\; a = a \wedge \overline{1}$$
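Formula 5.23 can be spot-checked numerically. The following Python sketch (our own bookkeeping, assuming a Euclidean 3-space) wedges $x$ with $\overline{y}$ and recovers the coefficient of $e_1 \wedge e_2 \wedge e_3$, which 5.23 predicts to be $\sum a_i b_i$:

```python
def wedge_with_complement(a, b):
    """Coefficient of e1^e2^e3 in x ^ (y-bar) for x = sum a_i e_i and
    y = sum b_i e_i in Euclidean 3-space; formula 5.23 predicts a.b."""
    # Euclidean complements of the basis vectors (Section 5.4 palette):
    comp = {1: (1, (2, 3)), 2: (-1, (1, 3)), 3: (1, (1, 2))}
    total = 0.0
    for i in (1, 2, 3):
        for j in (1, 2, 3):
            s, rest = comp[j]
            if i in rest:          # repeated factor: the wedge is zero
                continue
            seq = (i,) + rest      # reorder (i, rest) into (1, 2, 3)
            sign = 1
            for p in range(3):
                for q in range(p + 1, 3):
                    if seq[p] > seq[q]:
                        sign = -sign
            total += a[i - 1] * b[j - 1] * s * sign
    return total

print(wedge_with_complement([1, 2, 3], [4, 5, 6]))   # 32.0 = 1*4 + 2*5 + 3*6
```

Only the diagonal terms survive, each with a positive sign, which is exactly the content of 5.23.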
Orthogonality
The specific complement mapping that we impose on a Grassmann algebra will define the notion
of orthogonality for that algebra. A simple element and its complement will be referred to as being
orthogonal to each other. In standard linear space terminology, the space of a simple element and
the space of its complement are said to be orthogonal complements of each other.
This orthogonality is total. That is, every 1-element in a given simple m-element is orthogonal to
every 1-element in the complement of the m-element.
In the special case of a 2-dimensional vector space, the complement of a vector is itself a vector.
In a vector three-space, the complement of a vector is a bivector. And of course the complement of
a bivector is a vector.
More generally, we can say that two simple elements of the same grade, $\alpha_m$ and $\beta_m$, are orthogonal if and only if the exterior product of one with the complement of the other is zero.

$$\alpha_m \wedge \overline{\beta_m} = 0 \qquad \beta_m \wedge \overline{\alpha_m} = 0$$
By taking the complement of these equations, and applying both the complement axiom, and the
complement of a complement axiom, we see that we can also say that two simple elements of the
same grade a and b, are orthogonal if and only if the regressive product of one with the
complement of the other is zero.
Consider the bivector $x \wedge y$. Then $\overline{x \wedge y}$ is orthogonal to $x \wedge y$. But since $\overline{x \wedge y} = \overline{x} \vee \overline{y}$, this also means that the intersection of the two (n−1)-spaces defined by $\overline{x}$ and $\overline{y}$ is orthogonal to $x \wedge y$. We can depict this in 3-space as follows:
In particular the regressive product (discussed in Chapter 3) and the interior product (to be
discussed in the next chapter) have simple representations in terms of the exterior and complement
operations.
We have already introduced the complement axiom 3 (equation 5.3) as part of the definition of the complement.

$$\overline{\alpha_m \vee \beta_k} = \overline{\alpha_m} \wedge \overline{\beta_k}$$

Taking the complement of both sides of this axiom, noting that the grade of $\alpha_m \vee \beta_k$ is m+k−n, and using the complement of a complement axiom 4 gives:

$$\overline{\overline{\alpha_m \vee \beta_k}} = \overline{\overline{\alpha_m} \wedge \overline{\beta_k}} = (-1)^{f(m+k-n,\,n)}\; \alpha_m \vee \beta_k$$

Hence, any regressive product can be written as the complement of an exterior product of complements.

In Section 5.6 below, we will show that f(m,n) is equal to m(n−m), so that f(m+k−n, n) becomes (m+k−n)(n−(m+k−n)), which, as an exponent of −1, simplifies to (m+k)(m+k−n). For reference here, we include this version of the formula below, although it has not yet been derived.
The expression of a regressive product in terms of an exterior product and a double application of
the complement operation poses an interesting conundrum. On the left hand side, there is a product
which is defined in a completely non-metric way, and is thus independent of any metric imposed
on the space. On the right hand side we have a double application of the complement operation,
each of which requires a metric. This leads us to the notion that the complement operation applied
twice in this way is independent of any metric used to define it. A second application of the
complement operation effectively 'cancels out' the metric elements introduced during the first
application.
If on the other hand it is an n-element, it must be a scalar multiple (a, say) of the unit n-element $\overline{1}$. Hence:

$$\alpha_m \wedge \overline{\beta_m} = a\; \overline{1} \;\Leftrightarrow\; \overline{\alpha_m \wedge \overline{\beta_m}} = a\; \overline{\overline{1}} = a$$
$$\Leftrightarrow\; (-1)^{m(n-m)}\; \overline{\overline{\beta_m} \wedge \alpha_m} = a \;\Leftrightarrow\; (-1)^{m(n-m)}\; \overline{\overline{\beta_m}} \vee \overline{\alpha_m} = a$$
$$\Leftrightarrow\; (-1)^{m(n-m)+f(m,n)}\; \beta_m \vee \overline{\alpha_m} = a$$
We will see in the following chapter how this formula is the foundation of the definition of the inner product. It shows how a scalar may be obtained from the regressive product of an element with the complement of an element of the same grade. It also shows the 'equivalent' definition in terms of the exterior product.
If f(m,n) is equal to m(n−m), as we will soon show, then the exponent m(n−m)+f(m,n) is even, and the last formula simplifies, giving

$$\beta_m \vee \overline{\alpha_m} = a$$
In the sections below we determine the function f(m,n) in axiom 4 (formula 5.7) above:

$$\overline{\overline{\alpha_m}} = (-1)^{f(m,n)}\; \alpha_m$$

To find the function f(m,n) we proceed to calculate the complement of the complement of an m-element. What we also discover along the way is that for a formula like this to be valid, the coefficients $g_{ij}$ of our metric mapping must also be symmetric. And, as we have already previewed in Section 5.5 above, f(m,n) must be equal to m(n−m).
to give

$$\overline{\overline{e_i}} = \kappa \sum_{j=1}^{n} g_{ij}\; \overline{\underline{e_j}}$$
In order to determine $\overline{\overline{e_i}}$, we need to obtain an expression for the complement $\overline{\underline{e_j}}$ of a cobasis element of a 1-element $e_j$. Note that whereas the complement of a cobasis element is defined, the cobasis element of a complement is not. To simplify the development we consider, without loss of generality, a specific basis element $e_1$.
First we express the cobasis element as a basis (n−1)-element, and then use the complement axiom to express the right-hand side as a regressive product of complements of 1-elements.

$$\overline{\underline{e_1}} = \overline{e_2 \wedge e_3 \wedge \cdots \wedge e_n} = \overline{e_2} \vee \overline{e_3} \vee \cdots \vee \overline{e_n}$$

$$\overline{\underline{e_1}} = \kappa \left( g_{21}\, \underline{e_1} + g_{22}\, \underline{e_2} + \cdots + g_{2n}\, \underline{e_n} \right) \vee \kappa \left( g_{31}\, \underline{e_1} + g_{32}\, \underline{e_2} + \cdots + g_{3n}\, \underline{e_n} \right) \vee \cdots \vee \kappa \left( g_{n1}\, \underline{e_1} + g_{n2}\, \underline{e_2} + \cdots + g_{nn}\, \underline{e_n} \right)$$

$$\overline{\underline{e_1}} = \kappa^{n-1} \sum_{j=1}^{n} g^{c}_{1j}\, (-1)^{j-1}\; \underline{e_1} \vee \underline{e_2} \vee \cdots \vee \widehat{\underline{e_j}} \vee \cdots \vee \underline{e_n}$$
In this expression, the notation $\widehat{\underline{e_j}}$ means that the jth factor has been omitted. The scalars $g^{c}_{1j}$ are the cofactors of the $g_{1j}$ in the array $g_{ij}$. Further discussion of cofactors, and how they result from such products of (n−1)-elements, may be found in Chapter 2. The results for regressive products are identical mutatis mutandis to those for the exterior product.
By equation 3.42, regressive products of the form above simplify to a scalar multiple of the missing element. (Remember, $\kappa$ is now equal to the inverse of $\bar{c}$ for the purposes of this chapter.)

$$(-1)^{j-1}\; \underline{e_1} \vee \underline{e_2} \vee \cdots \vee \widehat{\underline{e_j}} \vee \cdots \vee \underline{e_n} = \frac{1}{\kappa^{n-2}}\, (-1)^{n-1}\; e_j$$

The expression for $\overline{\underline{e_1}}$ then simplifies to:

$$\overline{\underline{e_1}} = (-1)^{n-1}\, \kappa \sum_{j=1}^{n} g^{c}_{1j}\; e_j$$
The final step is to see that the actual basis element chosen ($e_1$) was, as expected, of no significance in determining the final form of the formula. We thus have the more general result:

$$\overline{\underline{e_i}} = (-1)^{n-1}\, \kappa \sum_{j=1}^{n} g^{c}_{ij}\; e_j \qquad (5.37)$$

Thus we have determined the complement of the cobasis element of a basis 1-element as a specific 1-element. The coefficients of this 1-element are the products of the scalar $\kappa$ with cofactors of the $g_{ij}$, and a possible sign depending on the dimension of the space.

$$\overline{\underline{B}} = (-1)^{n-1}\, \frac{1}{\sqrt{\left| G \right|}}\; G^{c}\; B$$

where B is a column matrix of basis vectors, and $G^{c}$ is the matrix of cofactors of elements of G.
Our major use of this formula is to derive an expression for the complement of a complement of a
basis element. This we do below.
$$\overline{\overline{e_i}} = \kappa \sum_{j=1}^{n} g_{ij}\; \overline{\underline{e_j}}$$

We can now substitute for the $\overline{\underline{e_j}}$ from the formula derived in the previous section to get:

$$\overline{\overline{e_i}} = (-1)^{n-1}\, \kappa^2 \sum_{j=1}^{n} g_{ij} \sum_{k=1}^{n} g^{c}_{jk}\; e_k$$
$$\overline{\overline{B}} = (-1)^{n-1}\, \frac{1}{\left| G \right|}\; G\, G^{c}\; B$$

In Chapter 2 the sum over j of the terms $g_{ij}\, g^{c}_{kj}$ was shown to be equal to the determinant $\left| g_{ij} \right|$ whenever i equals k, and zero otherwise. That is:

$$\sum_{j=1}^{n} g_{ij}\, g^{c}_{kj} = \left| g_{ij} \right|\, \delta_{ik}$$
Note carefully that in this sum the order of the subscripts in the cofactor term is reversed compared with that in the expression for $\overline{\overline{e_i}}$.
Thus we conclude that if and only if the array $g_{ij}$ is symmetric, that is $g_{ij} = g_{ji}$, can we express the complement of a complement of a basis 1-element in terms only of itself and no other basis element.
Furthermore, since we have already shown in Section 5.3 that $\kappa^2 \left| g_{ij} \right| = 1$, we also have that:

$$\overline{\overline{e_i}} = (-1)^{n-1}\; e_i$$

In sum: in order to satisfy the complement of a complement axiom for m-elements it is necessary that $g_{ij} = g_{ji}$. For 1-elements it is also sufficient.
Below we shall show that the symmetry of the $g_{ij}$ is also sufficient for m-elements.
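The cofactor identity invoked here is easy to confirm numerically; the following stand-alone Python sketch (the symmetric matrix and its values are our own choice) verifies it for a 3×3 case:

```python
def cofactor_matrix(M):
    """Matrix of cofactors of a 3x3 matrix M."""
    cof = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            r = [x for x in range(3) if x != i]
            c = [x for x in range(3) if x != j]
            minor = (M[r[0]][c[0]] * M[r[1]][c[1]]
                     - M[r[0]][c[1]] * M[r[1]][c[0]])
            cof[i][j] = (-1) ** (i + j) * minor
    return cof

g = [[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
ghat = cofactor_matrix(g)
det = sum(g[0][j] * ghat[0][j] for j in range(3))   # Laplace expansion

# sum_j g[i][j] * ghat[k][j] equals det(g) when i == k and 0 otherwise:
# for i != k it is the expansion of a determinant with a repeated row.
for i in range(3):
    for k in range(3):
        s = sum(g[i][j] * ghat[k][j] for j in range(3))
        assert abs(s - (det if i == k else 0.0)) < 1e-9
print(det)   # 8.0
```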
$$\overline{\overline{e_1 \wedge e_2 \wedge \cdots \wedge e_m}} = \overline{\overline{e_1} \vee \overline{e_2} \vee \cdots \vee \overline{e_m}} = \overline{\overline{e_1}} \wedge \overline{\overline{e_2}} \wedge \cdots \wedge \overline{\overline{e_m}}$$

$$\overline{\overline{e_1 \wedge e_2 \wedge \cdots \wedge e_m}} = (-1)^{m(n-1)}\; e_1 \wedge e_2 \wedge \cdots \wedge e_m$$

But of course the form of this result is valid for any basis element $\underset{m}{e_i}$. Writing $(-1)^{m(n-1)}$ in the equivalent form $(-1)^{m(n-m)}$ (since the parity of m(n−1) is equal to the parity of m(n−m)), we obtain that for any basis element of any grade:

$$\overline{\overline{\underset{m}{e_i}}} = (-1)^{m(n-m)}\; \underset{m}{e_i}$$
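The parity claim is easily confirmed exhaustively (a trivial Python check; the bound 12 is arbitrary): m(n−1) and m(n−m) differ by m(m−1), a product of consecutive integers, hence always even.

```python
# (-1)^(m(n-1)) equals (-1)^(m(n-m)) because m(n-1) - m(n-m) = m(m-1)
# is a product of consecutive integers and therefore even.
ok = all((m * (n - 1)) % 2 == (m * (n - m)) % 2
         for n in range(1, 13) for m in range(0, n + 1))
print(ok)   # True
```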
Finally then, we have shown that, provided the complement mapping $g_{ij}$ is symmetric and the constant $\kappa$ is such that $\kappa^2 \left| g_{ij} \right| = 1$, the complement of a complement axiom is satisfied by an otherwise arbitrary mapping, with sign $(-1)^{m(n-m)}$.
Ï Special cases
$$\overline{\overline{a}} = a \qquad (5.40)$$

$$\overline{\overline{\alpha_n}} = \alpha_n \qquad (5.41)$$

The complement of the complement of any element in a 3-space is the element itself, since $(-1)^{m(3-m)}$ is positive for m equal to 0, 1, 2, or 3.

$$\overline{\overline{\alpha_m}} = \alpha_m$$

Alternatively, we can say that $\overline{\overline{\alpha_m}} = \alpha_m$ except when $\alpha_m$ is of odd grade in an even-dimensional space. The simplest case of an element of odd grade in an even-dimensional space is a vector in a vector 2-space.
$$\overline{\overline{x}} = -x$$
Idempotent complements
No simple element (except zero) can be equal to its complement. This is because the exterior product of a simple element with its complement is congruent to the unit n-element (which is not zero), a property that follows directly from the definition of the complement.
However, this is not necessarily true for non-simple elements. Let S be the sum of a simple m-element ($\alpha_m$, say) and its complement.

$$S = \alpha_m + \overline{\alpha_m}$$

$$\overline{S} = \overline{\alpha_m} + \overline{\overline{\alpha_m}} = \overline{\alpha_m} + (-1)^{m(n-m)}\; \alpha_m$$

Since $(-1)^{m(n-m)}$ is equal to $(-1)^{m(n-1)}$, we can see that $\overline{S} = S$ if and only if the dimension n of the space is odd, or the grade m is even.
For example, in a vector 3-space, the complement of the sum of a vector and its bivector complement is identical to the sum itself.

$$\overline{x + \overline{x}} = x + \overline{x}$$

And in a point 3-space, the complement of the sum of a 2-element and its 2-element complement is identical to the sum itself.

$$\overline{\underset{2}{B} + \overline{\underset{2}{B}}} = \underset{2}{B} + \overline{\underset{2}{B}}$$
First, you can transform expressions involving Clifford, hypercomplex, generalized Grassmann,
interior, inner, or scalar products to expressions specifically involving scalar products of basis
elements. Your results involving these scalar products of basis elements are valid for any metric. If
you wish to make a specific metric apply to your results, you can declare that metric, and use
ToMetricElements to perform the substitution of the currently declared metric elements for the
corresponding scalar products of basis elements. We will discuss these transformations after we
have introduced the interior product in the next chapter.
Second, if you have elements expressed in terms of basis elements, you can compute their
complements by first taking their complement (applying GrassmannComplement, or using the
OverBar in the GrassmannAlgebra Palette), and then applying ConvertComplements. This
will use the currently declared metric to convert the complements of basis elements to
complementary basis elements. We will discuss these transformations in the section to follow this
one.
Third, if you want to display the metrics induced by the various exterior product spaces, you can
use MetricL for an individual space or MetricPalette to display all the induced metrics. We
discuss these in the sections below.
The default metric assumed by GrassmannAlgebra is the Euclidean metric. In the case that the GrassmannAlgebra package has just been loaded, entering Metric will show the components of the default Euclidean metric tensor.

Metric
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}

These components are arranged as the elements of a 3×3 matrix because the default basis is 3-dimensional.

Basis
{e1, e2, e3}

However, if the dimension of the basis is changed, the default metric changes accordingly.

DeclareBasis[{¯𝕆, i, j, k}]
{¯𝕆, i, j, k}

Metric
{{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}}
Declaring a metric
GrassmannAlgebra permits us to declare a metric as a matrix with any numeric or symbolic
components. There are just three conditions to which a valid matrix must conform in order to be a
metric:
1) It must be symmetric (and hence square).
2) Its order must be the same as the dimension of the declared linear space.
3) Its components must be scalars.
It is up to the user to ensure the first two conditions. The third is handled by GrassmannAlgebra, which automatically assumes all symbols in the matrix to be scalars.
DeclareMetric[M]
{{α, 0, ν}, {0, β, 0}, {ν, 0, γ}}

Metric
{{α, 0, ν}, {0, β, 0}, {ν, 0, γ}}
ScalarQ[{α, β, ν}]
{True, True, True}

You can check that the components of G are now considered scalars.

ScalarQ[G]
{{True, True, True}, {True, True, True}, {True, True, True}}
You can test to see if a symbol is a component of the currently declared metric by using MetricQ.
Suppose we are working in 3-space with the general metric defined above. Then the metrics on $\underset{0}{\Lambda}$, $\underset{1}{\Lambda}$, $\underset{2}{\Lambda}$, $\underset{3}{\Lambda}$ can be calculated by entering:

𝔼3 ; DeclareMetric[g];
M0 = MetricL[0]; MatrixForm[M0]
(1)
M1 = MetricL[1]; MatrixForm[M1]

$$\begin{pmatrix} g_{1,1} & g_{1,2} & g_{1,3} \\ g_{1,2} & g_{2,2} & g_{2,3} \\ g_{1,3} & g_{2,3} & g_{3,3} \end{pmatrix}$$

M2 = MetricL[2]; MatrixForm[M2]

$$\begin{pmatrix} -g_{1,2}^2 + g_{1,1} g_{2,2} & -g_{1,2} g_{1,3} + g_{1,1} g_{2,3} & -g_{1,3} g_{2,2} + g_{1,2} g_{2,3} \\ -g_{1,2} g_{1,3} + g_{1,1} g_{2,3} & -g_{1,3}^2 + g_{1,1} g_{3,3} & -g_{1,3} g_{2,3} + g_{1,2} g_{3,3} \\ -g_{1,3} g_{2,2} + g_{1,2} g_{2,3} & -g_{1,3} g_{2,3} + g_{1,2} g_{3,3} & -g_{2,3}^2 + g_{2,2} g_{3,3} \end{pmatrix}$$

M3 = MetricL[3]; MatrixForm[M3]

$$\left( -g_{1,3}^2 g_{2,2} + 2 g_{1,2} g_{1,3} g_{2,3} - g_{1,1} g_{2,3}^2 - g_{1,2}^2 g_{3,3} + g_{1,1} g_{2,2} g_{3,3} \right)$$
It is easy to verify the symmetry of the induced metrics in any particular case by inspection. However, to automate this we need only ask Mathematica to compare the metric to its transpose. For example, we can verify the symmetry of the metric induced on $\underset{3}{\Lambda}$ by a general metric in 4-space.
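The induced metrics can be mimicked outside Mathematica: the entry of the metric on $\underset{m}{\Lambda}$ indexed by two basis m-elements is the m×m minor of G on the corresponding rows and columns. Here is a Python sketch (our own code, not MetricL), checked against the 3-space metric {{α, 0, ν}, {0, β, 0}, {ν, 0, γ}} with sample values α=2, β=3, ν=1, γ=5 of our choosing:

```python
from itertools import combinations

def induced_metric(G, m):
    """Metric induced on the m-elements by the metric G on 1-elements:
    entries are the m x m minors of G indexed by basis m-elements in
    standard (lexicographic) order."""
    n = len(G)
    basis = list(combinations(range(n), m))

    def minor(rows, cols):
        if len(rows) == 1:
            return G[rows[0]][cols[0]]
        total = 0
        for p, c in enumerate(cols):   # expand along the first row
            total += ((-1) ** p * G[rows[0]][c]
                      * minor(rows[1:], cols[:p] + cols[p + 1:]))
        return total

    return [[minor(r, c) for c in basis] for r in basis]

a, b, v, g = 2.0, 3.0, 1.0, 5.0
G = [[a, 0, v], [0, b, 0], [v, 0, g]]
M2 = induced_metric(G, 2)          # order: e1^e2, e1^e3, e2^e3
M3 = induced_metric(G, 3)
print(M2[0][0], M2[1][1], M2[0][2])   # ab, ag - v^2, -bv -> 6.0 9.0 -3.0
print(M3[0][0])                       # abg - bv^2 -> 27.0
```

The symmetry of the induced metric, and the fact that the metric on $\underset{3}{\Lambda}$ is the determinant, both drop out of this construction.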
As an example, take the general metric in 3-space discussed in the previous section. We will show that we can obtain the tensor of the cofactors of the elements of the metric tensor of $\underset{1}{\Lambda}$ by reordering and resigning the elements of the metric tensor induced on $\underset{2}{\Lambda}$.

The metric on $\underset{1}{\Lambda}$ is given as a correspondence $G_1$ between the basis elements and cobasis elements of $\underset{1}{\Lambda}$.

$$\overline{\begin{pmatrix} e_1 \\ e_2 \\ e_3 \end{pmatrix}} = \kappa\; G_1 \begin{pmatrix} e_2 \wedge e_3 \\ -(e_1 \wedge e_3) \\ e_1 \wedge e_2 \end{pmatrix}$$

The metric on $\underset{2}{\Lambda}$ is given as a correspondence $G_2$ between the basis elements and cobasis elements of $\underset{2}{\Lambda}$.

$$\overline{\begin{pmatrix} e_1 \wedge e_2 \\ e_1 \wedge e_3 \\ e_2 \wedge e_3 \end{pmatrix}} = \kappa\; G_2 \begin{pmatrix} e_3 \\ -e_2 \\ e_1 \end{pmatrix}$$

$$G_2 = \begin{pmatrix} -g_{1,2}^2 + g_{1,1} g_{2,2} & -g_{1,2} g_{1,3} + g_{1,1} g_{2,3} & -g_{1,3} g_{2,2} + g_{1,2} g_{2,3} \\ -g_{1,2} g_{1,3} + g_{1,1} g_{2,3} & -g_{1,3}^2 + g_{1,1} g_{3,3} & -g_{1,3} g_{2,3} + g_{1,2} g_{3,3} \\ -g_{1,3} g_{2,2} + g_{1,2} g_{2,3} & -g_{1,3} g_{2,3} + g_{1,2} g_{3,3} & -g_{2,3}^2 + g_{2,2} g_{3,3} \end{pmatrix}$$
We can transform the columns of basis elements in this equation into the form of the preceding one with the transformation T, which is simply determined as:

$$T = \begin{pmatrix} 0 & 0 & 1 \\ 0 & -1 & 0 \\ 1 & 0 & 0 \end{pmatrix}$$

$$T \begin{pmatrix} e_1 \wedge e_2 \\ e_1 \wedge e_3 \\ e_2 \wedge e_3 \end{pmatrix} = \begin{pmatrix} e_2 \wedge e_3 \\ -(e_1 \wedge e_3) \\ e_1 \wedge e_2 \end{pmatrix} \qquad T \begin{pmatrix} e_3 \\ -e_2 \\ e_1 \end{pmatrix} = \begin{pmatrix} e_1 \\ e_2 \\ e_3 \end{pmatrix}$$

And since this transformation is its own inverse, we can also transform $G_2$ to $T.G_2.T$. We now expect this transformed array to be the array of cofactors of the metric tensor $G_1$. We can easily check this in Mathematica by entering the predicate
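As an independent numeric cross-check, here is a Python sketch using sample values α=2, β=3, ν=1, γ=5 of our own choosing for the metric of the previous section; it confirms that T.G2.T reproduces the matrix of cofactors of G1:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def cofactors(M):
    """3x3 matrix of cofactors of M."""
    cof = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            r = [x for x in range(3) if x != i]
            c = [x for x in range(3) if x != j]
            cof[i][j] = (-1) ** (i + j) * (M[r[0]][c[0]] * M[r[1]][c[1]]
                                           - M[r[0]][c[1]] * M[r[1]][c[0]])
    return cof

a, b, v, g = 2.0, 3.0, 1.0, 5.0
G1 = [[a, 0, v], [0, b, 0], [v, 0, g]]
# Metric induced on the 2-elements (order e1^e2, e1^e3, e2^e3):
G2 = [[a * b, 0, -b * v], [0, a * g - v * v, 0], [-b * v, 0, b * g]]
T = [[0, 0, 1], [0, -1, 0], [1, 0, 0]]

print(matmul(matmul(T, G2), T) == cofactors(G1))   # True
```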
𝔼4 ; MetricPalette

Metric Palette

L0:  (1)

L1:
( 1 0 0 0 )
( 0 1 0 0 )
( 0 0 1 0 )
( 0 0 0 1 )

L2:
( 1 0 0 0 0 0 )
( 0 1 0 0 0 0 )
( 0 0 1 0 0 0 )
( 0 0 0 1 0 0 )
( 0 0 0 0 1 0 )
( 0 0 0 0 0 1 )

L3:
( 1 0 0 0 )
( 0 1 0 0 )
( 0 0 1 0 )
( 0 0 0 1 )

L4:  (1)
Metric Palette

L0:  (1)

L1:
( α  0  ν )
( 0  β  0 )
( ν  0  γ )

L2:
( αβ    0        -βν )
( 0     αγ - ν²  0   )
( -βν   0        βγ  )

L3:  (αβγ - βν²)

And for the general metric in 3-space:

Metric Palette

L0:  (1)

L1:
( g1,1  g1,2  g1,3 )
( g1,2  g2,2  g2,3 )
( g1,3  g2,3  g3,3 )

L2:
( -g1,2² + g1,1 g2,2       -g1,2 g1,3 + g1,1 g2,3    -g1,3 g2,2 + g1,2 g2,3 )
( -g1,2 g1,3 + g1,1 g2,3   -g1,3² + g1,1 g3,3        -g1,3 g2,3 + g1,2 g3,3 )
( -g1,3 g2,2 + g1,2 g2,3   -g1,3 g2,3 + g1,2 g3,3    -g2,3² + g2,2 g3,3     )

L3:  (-g1,3² g2,2 + 2 g1,2 g1,3 g2,3 - g1,1 g2,3² - g1,2² g3,3 + g1,1 g2,2 g3,3)

You can paste any of these metric matrices into your notebook by clicking on the palette.

( -g1,2² + g1,1 g2,2       -g1,2 g1,3 + g1,1 g2,3    -g1,3 g2,2 + g1,2 g2,3 )
( -g1,2 g1,3 + g1,1 g2,3   -g1,3² + g1,1 g3,3        -g1,3 g2,3 + g1,2 g3,3 )
( -g1,3 g2,2 + g1,2 g2,3   -g1,3 g2,3 + g1,2 g3,3    -g2,3² + g2,2 g3,3     )
Entering a complement
To enter the expression for the complement of a Grassmann expression X in GrassmannAlgebra, enter GrassmannComplement[X]. For example, to enter the expression for the complement of x∧y, enter:

GrassmannComplement[x ∧ y]

$$\overline{x \wedge y}$$

Or, you can simply select the expression x∧y and click the overbar button on the GrassmannAlgebra palette (or this one).
Note that an expression with a bar over it symbolizes another expression (the complement of the original expression), not a GrassmannAlgebra operation on the expression. Hence no conversions will occur by entering an overbarred expression.
GrassmannComplement will also work for lists, matrices, or tensors of elements. For example, here is a matrix:

M // MatrixForm

$$\begin{pmatrix} a\; e_1 \wedge e_2 & b\; e_2 \\ -b\; e_2 & c\; e_3 \wedge e_1 \end{pmatrix}$$
Ï Euclidean metric
Here we repeat the construction of the complement palette of Section 5.4 in a 2-space.

𝔼2 ; ComplementPalette

Complement Palette
Basis          Complement
1              e1 ∧ e2
e1             e2
e2             -e1
e1 ∧ e2        1
Palettes of complements for other bases are obtained by declaring the bases, then entering the command ComplementPalette.

Complement Palette
Basis          Complement
1              # ∧ $
#              $
$              -#
# ∧ $          1
Ï Non-Euclidean metric
Ì 2-space
Complement Palette
Basis          Complement
1              (e1 ∧ e2)/√¯g
e1             (g1,1 e2)/√¯g - (g1,2 e1)/√¯g
e2             (g1,2 e2)/√¯g - (g2,2 e1)/√¯g
e1 ∧ e2        √¯g
† The symbol ¯g represents the determinant of the metric tensor. It (or expressions involving it) can be evaluated in any space with any declared metric using ToMetricElements.

ToMetricElements[¯g]

The palette is arranged to 'collapse' the full determinant expression under the symbol ¯g, since it is often large. However, if you wish, you can include the full form of ¯g in the palette by substituting for it.
ComplementPalette /. ¯g → ToMetricElements[¯g]

Complement Palette
Basis          Complement
1              (e1 ∧ e2)/√(-g1,2² + g1,1 g2,2)
e1             (g1,1 e2)/√(-g1,2² + g1,1 g2,2) - (g1,2 e1)/√(-g1,2² + g1,1 g2,2)
e2             (g1,2 e2)/√(-g1,2² + g1,1 g2,2) - (g2,2 e1)/√(-g1,2² + g1,1 g2,2)
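As a numeric check of this 2-space palette (a Python sketch of our own; the matrix below encodes the palette rows over the basis {1, e1, e2, e1∧e2}, and the sample metric values are ours), applying the mapping twice yields the signs $(-1)^{m(n-m)}$ of Section 5.6: +1 on grades 0 and 2, −1 on grade 1.

```python
import math

def complement_matrix(g11, g12, g22):
    """Complement mapping of the 2-space palette above, as a matrix on
    the basis (1, e1, e2, e1^e2): row i holds the coordinates of the
    complement of the i-th basis element."""
    d = g11 * g22 - g12 * g12          # determinant, the palette's g-bar
    s = math.sqrt(d)
    return [
        [0, 0, 0, 1 / s],              # comp(1)     = (e1^e2)/sqrt(d)
        [0, -g12 / s, g11 / s, 0],     # comp(e1)    = (g11 e2 - g12 e1)/sqrt(d)
        [0, -g22 / s, g12 / s, 0],     # comp(e2)    = (g12 e2 - g22 e1)/sqrt(d)
        [s, 0, 0, 0],                  # comp(e1^e2) = sqrt(d)
    ]

def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(4)) for k in range(4)]
            for i in range(4)]

C = complement_matrix(2.0, 1.0, 1.0)   # sample symmetric metric, det = 1
CC = matmul(C, C)                       # complement of a complement
expected = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]]
print(all(abs(CC[i][k] - expected[i][k]) < 1e-12
          for i in range(4) for k in range(4)))   # True
```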
Ì 3-space
Complement Palette
Basis          Complement
1              (e1 ∧ e2 ∧ e3)/√¯g
e1 ∧ e2        ((-g1,2² + g1,1 g2,2) e3 + (g1,2 g1,3 - g1,1 g2,3) e2 + (-g1,3 g2,2 + g1,2 g2,3) e1)/√¯g
e1 ∧ e3        ((-g1,2 g1,3 + g1,1 g2,3) e3 + (g1,3² - g1,1 g3,3) e2 + (-g1,3 g2,3 + g1,2 g3,3) e1)/√¯g
e2 ∧ e3        ((-g1,3 g2,2 + g1,2 g2,3) e3 + (g1,3 g2,3 - g1,2 g3,3) e2 + (-g2,3² + g2,2 g3,3) e1)/√¯g
e1 ∧ e2 ∧ e3   √¯g

ToMetricElements[¯g]
-g1,3² g2,2 + 2 g1,2 g1,3 g2,3 - g1,1 g2,3² - g1,2² g3,3 + g1,1 g2,2 g3,3
¯A; ComplementPalette
Complement Palette
Basis Complement
1 e1 Ô e2 Ô e3
e1 e2 Ô e3
e2 -He1 Ô e3 L
e3 e1 Ô e2
e1 Ô e2 e3
e1 Ô e3 -e2
e2 Ô e3 e1
e1 Ô e2 Ô e3 1
LBasis
881<, 8e1 , e2 , e3 <, 8e1 Ô e2 , e1 Ô e3 , e2 Ô e3 <, 8e1 Ô e2 Ô e3 <<
GrassmannComplement@LBasisD
ConvertComplements@GrassmannComplement@LBasisDD
88e1 Ô e2 Ô e3 <, 8e2 Ô e3 , -He1 Ô e3 L, e1 Ô e2 <, 8e3 , -e2 , e1 <, 81<<
To convert complements of basis elements in a non-Euclidean space, we apply the same three steps as in the previous section.
ComplementPalette

Complement Palette
Basis          Complement
1              (e1 ∧ e2 ∧ e3)/√¯g
e1             (n e1 ∧ e2 + a e2 ∧ e3)/√¯g
e2             -(b e1 ∧ e3)/√¯g
e3             (g e1 ∧ e2 + n e2 ∧ e3)/√¯g
e1 ∧ e2        (-b n e1 + a b e3)/√¯g
e1 ∧ e3        ((-a g + n²) e2)/√¯g
e2 ∧ e3        (b g e1 - b n e3)/√¯g
e1 ∧ e2 ∧ e3   √¯g
LBasis
881<, 8e1 , e2 , e3 <, 8e1 Ô e2 , e1 Ô e3 , e2 Ô e3 <, 8e1 Ô e2 Ô e3 <<
GrassmannComplement@LBasisD
ConvertComplements@GrassmannComplement@LBasisDD
{{(e1 ∧ e2 ∧ e3)/√¯g},
 {(n e1 ∧ e2 + a e2 ∧ e3)/√¯g, -(b e1 ∧ e3)/√¯g, (g e1 ∧ e2 + n e2 ∧ e3)/√¯g},
 {(-b n e1 + a b e3)/√¯g, ((-a g + n²) e2)/√¯g, (b g e1 - b n e3)/√¯g},
 {√¯g}}
ToMetricElements@¯gD
a b g - b n2
† The palette is arranged to 'collapse' the full determinant expression under the symbol ¯g since it
is often large. However if you wish, you can include the full form of ¯g in the palette by
substituting for it.
ComplementPalette /. ¯g → ToMetricElements[¯g]

Complement Palette
Basis          Complement
1              (e1 ∧ e2 ∧ e3)/√(a b g - b n²)
e1             (n e1 ∧ e2 + a e2 ∧ e3)/√(a b g - b n²)
e2             -(b e1 ∧ e3)/√(a b g - b n²)
e3             (g e1 ∧ e2 + n e2 ∧ e3)/√(a b g - b n²)
e1 ∧ e2        (-b n e1 + a b e3)/√(a b g - b n²)
e1 ∧ e3        ((-a g + n²) e2)/√(a b g - b n²)
e2 ∧ e3        (b g e1 - b n e3)/√(a b g - b n²)
e1 ∧ e2 ∧ e3   √(a b g - b n²)
The complement of a complement of any element in 3-space is the element itself. We can verify
this for the basis elements by applying the complement conversion procedure twice. This example
verifies the result for a general metric in 3-space.
DeclareMetric@gD
88g1,1 , g1,2 , g1,3 <, 8g1,2 , g2,2 , g2,3 <, 8g1,3 , g2,3 , g3,3 <<
G = ConvertComplements[GrassmannComplement[LBasis]]
{{(e1 ∧ e2 ∧ e3)/√¯g},
 {(g1,3 e1 ∧ e2 - g1,2 e1 ∧ e3 + g1,1 e2 ∧ e3)/√¯g,
  (g2,3 e1 ∧ e2 - g2,2 e1 ∧ e3 + g1,2 e2 ∧ e3)/√¯g,
  (g3,3 e1 ∧ e2 - g2,3 e1 ∧ e3 + g1,3 e2 ∧ e3)/√¯g}, …}
Taking the complement of these and converting them again returns us to the original list of basis
elements.
ConvertComplements@GrassmannComplement@GDD
881<, 8e1 , e2 , e3 <, 8e1 Ô e2 , e1 Ô e3 , e2 Ô e3 <, 8e1 Ô e2 Ô e3 <<
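The double-complement behaviour for a Euclidean metric can be sketched combinatorially: the complement of a basis m-element e_S is, up to sign, the basis element on the complementary index set, with the sign fixed by requiring e_S ∧ complement(e_S) = e1 ∧ … ∧ en. The following Python sketch (illustrative names, not the package's implementation) verifies that in 3-space the complement of the complement is always the element itself, while in 4-space a vector picks up the sign (−1)^(m(n−m)).

```python
from itertools import combinations

def complement(subset, n):
    """Euclidean complement of a basis m-element e_S in n-space:
    returns (sign, complementary subset) with e_S ^ e_Sc = sign * e_1^...^e_n."""
    comp = tuple(i for i in range(1, n + 1) if i not in subset)
    perm = list(subset) + list(comp)
    # parity of the permutation taking (1..n) to perm, by selection sort
    sign, seen = 1, list(perm)
    for i in range(len(seen)):
        while seen[i] != i + 1:
            j = seen.index(i + 1)
            seen[i], seen[j] = seen[j], seen[i]
            sign = -sign
    return sign, comp

# In 3-space the complement of the complement is always the element itself...
n = 3
for m in range(n + 1):
    for s in combinations(range(1, n + 1), m):
        s1, c1 = complement(s, n)
        s2, c2 = complement(c1, n)
        assert c2 == s and s1 * s2 == 1
# ...while in general the sign is (-1)^(m(n-m)), e.g. -1 for a vector in 4-space
s1, c1 = complement((1,), 4)
s2, c2 = complement(c1, 4)
assert c2 == (1,) and s1 * s2 == (-1) ** (1 * 3)
```
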
† ConvertComplements will take any Grassmann expression and convert any complements of
basis elements in it. Suppose we are in a 3-space with a Euclidean metric.
¯A; ConvertComplementsB2 + a b e2 Ô e2 Ô c e3 F
2 + a b c e1 Ô e2
DeclareMetric@gD; F = ConvertComplementsB2 + a b e2 Ô e2 Ô c e3 F
Simplify@FD
¯g
¯A; ConvertComplementsB2 + a b e2 Ô e2 Ô c x F
2+abcx
DeclareMetric@gD; ConvertComplementsB2 + a b e2 Ô e2 Ô c x F
2 + a b c x g2,2
GrassmannSimplify, on the other hand, only includes simplification rules which are
independent of the metric. Generally, these rules apply when an expression involving complements
can be expressed without the complements, or can be expressed by another form deemed simpler,
often involving the interior product (which will be discussed in the next chapter).
† In the example discussed in the previous section we see that using GrassmannSimplify
gives a result involving the interior product e2 û e2 (in this case a scalar product) which is
valid independent of the metric.
¯%B2 + a b e2 Ô e2 Ô c x F
2 + a b c He2 û e2 L x
In the case of a Euclidean metric, the scalar product e2 û e2 in the result is equal to 1. In the case
of a general metric it is equal to g2,2 .
¯A; ¯%B:x, x, x Ó y, x Ô y, Hu Ô vL Ó y, u Ô v Ô y,
Hu Ô vL Ó y Ó z, u Ô v Ô y Ô z, Hx Ô yL û z, Hx Ô yL Ó z>F
8x, x, x û y, x û y, u Ô v û y, u Ô v û y,
u Ô v û y Ô z, u Ô v û y Ô z, Hx û yL z, x Ô y Ô z<
The dimension of the space may change the sign of some expressions, or indeed make them zero.
Here are the same examples in 4-space, giving two sign-changes, and one zero.
¯!4 ; ¯%B:x, x, x Ó y, x Ô y, Hu Ô vL Ó y, u Ô v Ô y,
Hu Ô vL Ó y Ó z, u Ô v Ô y Ô z, Hx Ô yL û z, Hx Ô yL Ó z>F
The exterior and regressive products are taken as basic. If a complement operation (or equivalently,
a metric) is introduced (defined) on the space, then the exterior and regressive products can be
expressed in terms of each other and the complement. The interior product operation can also be
expressed in terms either of the exterior product and the complement, or else the regressive product
and the complement. To complete the suite of conversion possibilities, the exterior and regressive
products can be expressed in terms of the interior product and the complement, and the
complement of an element can be expressed as its interior product with the unit n-element.
In this chapter, we discuss only the conversions between the exterior product, the regressive
product, and the complement. The examples below show the effects of these conversions on a
single expression in both 3 and 4 dimensional spaces. In a 3-space, the complement of the
complement of an element of any grade is the element itself, hence these conversions in a 3-space
show no extra signs. In spaces of other dimension however, the results may show signs dependent
on the grades of the elements. This is exemplified by the 4-dimensional cases.
¯A; G = a Ô b Ó g Ô d ;
m k p q
Converting the exterior products to regressive products and complements yields a possible sign
change in 4-space depending on the grades of the elements.
Converting the regressive products to exterior products also yields a possible sign change.
:a Ô b Ô g Ô d, H-1Lk+m+p+q a Ô b Ô g Ô d>
m k p q m k p q
Zs = ¯%@ZD
H-a2 b1 + a1 b2 L e1 Ô e2 Ó e1 Ô e3 +
H-a3 b1 + a1 b3 L e1 Ô e2 Ó e2 Ô e3 + H-a3 b2 + a2 b3 L e1 Ô e3 Ó e2 Ô e3
Ze = RegressiveToExterior@ZD
a1 e1 Ô e2 Ô b1 e1 Ô e2 + a1 e1 Ô e2 Ô b2 e1 Ô e3 + a1 e1 Ô e2 Ô b3 e2 Ô e3 +
a2 e1 Ô e3 Ô b1 e1 Ô e2 + a2 e1 Ô e3 Ô b2 e1 Ô e3 + a2 e1 Ô e3 Ô b3 e2 Ô e3 +
a3 e2 Ô e3 Ô b1 e1 Ô e2 + a3 e2 Ô e3 Ô b2 e1 Ô e3 + a3 e2 Ô e3 Ô b3 e2 Ô e3
ConvertComplements@Ze D
H-a2 b1 + a1 b2 L e1 + H-a3 b1 + a1 b3 L e2 + H-a3 b2 + a2 b3 L e3
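Since the complement of a complement is the identity in a Euclidean 3-space, the regressive product of two bivectors can be computed as A ∨ B = complement(Ā ∧ B̄). The Python sketch below (illustrative, with bivectors stored as components on e1∧e2, e1∧e3, e2∧e3) reproduces the coefficient pattern of the result above.

```python
def complement_bivector(c):
    """Bivector (c12, c13, c23) -> vector: e1^e2 -> e3, e1^e3 -> -e2, e2^e3 -> e1."""
    c12, c13, c23 = c
    return (c23, -c13, c12)

def wedge(u, v):
    """Exterior product of two vectors, as bivector components (c12, c13, c23)."""
    return (u[0]*v[1] - u[1]*v[0], u[0]*v[2] - u[2]*v[0], u[1]*v[2] - u[2]*v[1])

def regressive(a, b):
    """A v B = complement(A-bar ^ B-bar) for bivectors in Euclidean 3-space."""
    return complement_bivector(wedge(complement_bivector(a), complement_bivector(b)))

a1, a2, a3 = 2, 3, 5
b1, b2, b3 = 7, 11, 13
# matches (a1 b2 - a2 b1) e1 + (a1 b3 - a3 b1) e2 + (a2 b3 - a3 b2) e3
assert regressive((a1, a2, a3), (b1, b2, b3)) == \
    (a1*b2 - a2*b1, a1*b3 - a3*b1, a2*b3 - a3*b2)
```
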
DeclareMetric@gD
88g1,1 , g1,2 , g1,3 <, 8g1,2 , g2,2 , g2,3 <, 8g1,3 , g2,3 , g3,3 <<
ConvertComplements[Ze]
  √¯g (-a2 b1 + a1 b2) e1 + √¯g (-a3 b1 + a1 b3) e2 + √¯g (-a3 b2 + a2 b3) e3
Xe = RegressiveToExterior@XD
9e1 Ô e2 Ô e2 Ô e3 Ô e3 Ô e1 , e1 Ô e2 Ô e3 Ô e1 Ô e2 Ô e3 Ô e1 =
ConvertComplements@Xe D
81, e1 <
DeclareMetric@gD
88g1,1 , g1,2 , g1,3 <, 8g1,2 , g2,2 , g2,3 <, 8g1,3 , g2,3 , g3,3 <<
Applying ConvertComplements assumes this metric, and shows us that both results now have
the determinant of the metric tensor as multipliers.
ConvertComplements@Xe D
8¯g, ¯g e1 <
† Note that because there were two regressive product operations, √¯g was generated twice (giving ¯g). In the earlier example of the regressive product of bivectors, there was only one regressive product operation, so √¯g was generated once.
x ã a e1 + b e2
Since this is a Euclidean vector space, we can depict the basis vectors at right angles to each other.
But note that since it is a vector space, we do not depict an origin.
$$\overline{x} \equiv \overline{a\,e_1 + b\,e_2} \equiv a\,e_2 - b\,e_1$$
Remember, the Euclidean complement of a basis element is (formally identical to) its cobasis
element, and a basis element and its cobasis element are defined by their exterior product being the
basis n-element, in this case e1 Ôe2 .
It is clear from the depiction above that x and its complement x̄ are at right angles to each other, thus verifying our geometric interpretation of the algebraic notion of orthogonality: a simple element and its complement are orthogonal.
$$\overline{\overline{x}} \equiv \overline{a\,e_2 - b\,e_1} \equiv -a\,e_1 - b\,e_2 \equiv -x$$
Or, we could have used the complement of a complement axiom:
$$\overline{\overline{x}} \equiv (-1)^{1(2-1)}\,x \equiv -x$$
Continuing to take complements we find that we eventually return to the original element.
$$\overline{\overline{\overline{x}}} \equiv -\overline{x} \qquad \overline{\overline{\overline{\overline{x}}}} \equiv x$$
In a vector 2-space, taking the complement of a vector is thus equivalent to rotating the vector by
one right angle counterclockwise.
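This quarter-turn interpretation is easy to check numerically. The sketch below (plain Python, components on e1 and e2) applies the Euclidean 2-space complement four times.

```python
def complement2(v):
    """Euclidean complement in a vector 2-space: a e1 + b e2 -> a e2 - b e1."""
    a, b = v
    return (-b, a)

x = (3.0, 4.0)
# One complement is a quarter-turn counterclockwise; four return x itself
assert complement2(x) == (-4.0, 3.0)
assert complement2(complement2(x)) == (-3.0, -4.0)                # x-double-bar = -x
assert complement2(complement2(complement2(complement2(x)))) == x
```
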
x ã a e1 + b e2
Since this is a non-Euclidean vector space, the basis vectors are generally not at right angles to
each other. We declare a general metric, and display the Complement and Metric palettes.
¯!2 ; DeclareMetric@gD;
Grid@88ComplementPalette, MetricPalette<<D
Complement Palette                       Metric Palette
Basis      Complement                    L0: (1)
1          (e1 ∧ e2)/√¯g                 L1: {{g1,1, g1,2}, {g1,2, g2,2}}
e1         (g1,1 e2 - g1,2 e1)/√¯g       L2: (-g1,2² + g1,1 g2,2)
e2         (g1,2 e2 - g2,2 e1)/√¯g
e1 ∧ e2    √¯g

x̄ ≡ ConvertComplements[OverBar[a e1 + b e2]]
x̄ ≡ (e2 (a g1,1 + b g1,2))/√¯g + (e1 (-a g1,2 - b g2,2))/√¯g

x̿ ≡ ConvertComplements[OverBar[OverBar[a e1 + b e2]]]
x̿ ≡ -a e1 - b e2
The triple complement ≡ ConvertComplements[OverBar[OverBar[OverBar[a e1 + b e2]]]]
Ï Example
Suppose we take a simple metric and calculate x ≡ (e1 + e2)/2 and its complements.
Complement Palette                  Metric Palette
Basis      Complement               L0: (1)
1          e1 ∧ e2                  L1: {{2, 1}, {1, 1}}
e1         -e1 + 2 e2               L2: (1)
e2         -e1 + e2
e1 ∧ e2    1
x ≡ (e1 + e2)/2;

x̄ ≡ ConvertComplements[OverBar[(e1 + e2)/2]]
x̄ ≡ -e1 + (3/2) e2

x̿ ≡ ConvertComplements[OverBar[x̄]]
x̿ ≡ -(1/2) e1 - (1/2) e2

The complement of x̿ ≡ e1 - (3/2) e2
[Figure: x, its complement x̄, and the double complement x̿ depicted against the basis vectors e1 and e2.]
We can see that although the basis vectors are neither unit vectors nor orthogonal, still, as in the Euclidean case, x̿ is equal to -x, and the complement of x̿ is equal to -x̄.
The standard conceptual vehicle for exploring the notion of orthogonality is the interior product
and its special cases, the inner and scalar products. These we will develop in the next chapter
where we will revisit examples of this type, and connect them more closely with familiar concepts.
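A quick numerical check of this example, using the palette entries ē1 = -e1 + 2 e2 and ē2 = -e1 + e2 for the metric {{2, 1}, {1, 1}} and extending by linearity (a Python sketch, not the package's code):

```python
# Complements of the basis vectors, read off the palette for g = {{2, 1}, {1, 1}}
E1BAR = (-1.0, 2.0)   # e1-bar = -e1 + 2 e2
E2BAR = (-1.0, 1.0)   # e2-bar = -e1 + e2

def complement(v):
    """Extend the palette entries to an arbitrary vector by linearity."""
    return (v[0]*E1BAR[0] + v[1]*E2BAR[0], v[0]*E1BAR[1] + v[1]*E2BAR[1])

x = (0.5, 0.5)                          # x = (e1 + e2)/2
xb = complement(x)
assert xb == (-1.0, 1.5)                # x-bar = -e1 + (3/2) e2
xbb = complement(xb)
assert xbb == (-0.5, -0.5)              # x-double-bar = -x
assert complement(xbb) == (1.0, -1.5)   # triple complement = -x-bar
```
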
x ã a e1 + b e2 + c e3
¯!3 ; ComplementPalette
Complement Palette
Basis Complement
1 e1 Ô e2 Ô e3
e1 e2 Ô e3
e2 -He1 Ô e3 L
e3 e1 Ô e2
e1 Ô e2 e3
e1 Ô e3 -e2
e2 Ô e3 e1
e1 Ô e2 Ô e3 1
$$\overline{x} \equiv \overline{a\,e_1 + b\,e_2 + c\,e_3} \equiv a\,e_2\wedge e_3 - b\,e_1\wedge e_3 + c\,e_1\wedge e_2$$
Remember, the Euclidean complement of a basis element is (formally identical to) its cobasis
element, and a basis element and its cobasis element are defined by their exterior product being the
basis n-element, in this case e1 Ô e2 Ô e3 .
Taking the complement of x̄ gives x̿. Note that this result differs in sign from the two-dimensional case.
$$\overline{\overline{x}} \equiv \overline{a\,e_2\wedge e_3 - b\,e_1\wedge e_3 + c\,e_1\wedge e_2} \equiv a\,e_1 + b\,e_2 + c\,e_3 \equiv x$$
$$\overline{\overline{x}} \equiv (-1)^{1(3-1)}\,x \equiv x$$
Since in a vector 3-space the complement of the complement of a vector, or of a bivector, returns us to the original element, continuing to take complements as we did in the two-dimensional case leads us to no further elements:
$$\overline{\overline{x}} \equiv x \qquad \overline{\overline{\overline{x}}} \equiv \overline{x}$$
The bivector x̄ is a sum of components, each in one of the coordinate bivectors. We have already shown in Chapter 2: The Exterior Product that a bivector in a 3-space is simple, and hence can be factored into the exterior product of two vectors. It is in this form that the bivector will be most easily interpreted as a geometric entity. There is an infinity of vectors that are orthogonal to the vector x, but they are all contained in the bivector x̄. You can express the bivector x̄ as the exterior product of two vectors, again in an infinity of ways. The GrassmannAlgebra function ExteriorFactorize finds one of them for you. Others may be obtained by adding vectors from the first factor to the second factor, or vice versa.
ExteriorFactorize[a e2 ∧ e3 - b e1 ∧ e3 + c e1 ∧ e2]
  c (e1 - (a/c) e3) ∧ (e2 - (b/c) e3)

The vector x is orthogonal to each of these factors, since the exterior product of each of them with the bivector x̄ is zero.

GrassmannSimplify[{(e1 - (a/c) e3) ∧ (a e2 ∧ e3 - b e1 ∧ e3 + c e1 ∧ e2),
                   (e2 - (b/c) e3) ∧ (a e2 ∧ e3 - b e1 ∧ e3 + c e1 ∧ e2)}]
  {0, 0}
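The zero exterior products can be confirmed numerically. The sketch below (plain Python; a vector wedged with a bivector in 3-space leaves only an e1∧e2∧e3 coefficient v1 c23 − v2 c13 + v3 c12) checks both factors against the bivector, with sample values a, b, c chosen as exact binary floats.

```python
def vector_wedge_bivector(v, c):
    """v = (v1, v2, v3), bivector c = (c12, c13, c23);
    returns the coefficient of e1^e2^e3 in v ^ C."""
    return v[0]*c[2] - v[1]*c[1] + v[2]*c[0]

a, b, c = 2.0, 4.0, 8.0
bivec = (c, -b, a)                      # a e2^e3 - b e1^e3 + c e1^e2
f1 = (1.0, 0.0, -a / c)                 # e1 - (a/c) e3
f2 = (0.0, 1.0, -b / c)                 # e2 - (b/c) e3
assert vector_wedge_bivector(f1, bivec) == 0.0
assert vector_wedge_bivector(f2, bivec) == 0.0
# and the factors recover the bivector: c * (f1 ^ f2) == bivec
w = (f1[0]*f2[1] - f1[1]*f2[0], f1[0]*f2[2] - f1[2]*f2[0], f1[1]*f2[2] - f1[2]*f2[1])
assert tuple(c * t for t in w) == bivec
```
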
The complements of each of these vectors will be bivectors which contain the original vector x.
We can verify this easily with ConvertComplements and GrassmannSimplify.
GrassmannSimplify[ConvertComplements[
  {OverBar[c e1 - a e3] ∧ (a e1 + b e2 + c e3), OverBar[c e2 - b e3] ∧ (a e1 + b e2 + c e3)}]]
  {0, 0}
x ã a e1 + b e2 + c e3
¯!3 ; DeclareMetric@gD;
x̄ ≡ ConvertComplements[OverBar[a e1 + b e2 + c e3]]
x̄ ≡ ((a g1,3 + b g2,3 + c g3,3) e1 ∧ e2)/√¯g + ⋯

x̿ ≡ ConvertComplements[OverBar[OverBar[a e1 + b e2 + c e3]]]
x̿ ≡ a e1 + b e2 + c e3
Ï Example
Suppose we take the metric used in the example above on the non-Euclidean complement in a
vector 2-space and extend it orthogonally to the third dimension.
Complement Palette                            Metric Palette
Basis          Complement                     L0: (1)
1              e1 ∧ e2 ∧ e3                   L1: {{2, 1, 0}, {1, 1, 0}, {0, 0, 1}}
e1             -(e1 ∧ e3) + 2 e2 ∧ e3         L2: {{1, 0, 0}, {0, 2, 1}, {0, 1, 1}}
e2             -(e1 ∧ e3) + e2 ∧ e3           L3: (1)
e3             e1 ∧ e2
e1 ∧ e2        e3
e1 ∧ e3        e1 - 2 e2
e2 ∧ e3        e1 - e2
e1 ∧ e2 ∧ e3   1
We now calculate the complement of the same vector x ≡ (e1 + e2)/2 and verify that x̿ is equal to x.
x ≡ (e1 + e2)/2;

x̄ ≡ ConvertComplements[OverBar[(e1 + e2)/2]]
x̄ ≡ -(e1 ∧ e3) + (3/2) e2 ∧ e3

x̿ ≡ ConvertComplements[OverBar[x̄]]
x̿ ≡ (1/2) e1 + (1/2) e2

ExteriorFactorize[-(e1 ∧ e3) + (3/2) e2 ∧ e3]
  -(e1 - (3/2) e2) ∧ e3
and observe again that the vector x is orthogonal to each of these factors, since clearly the exterior
product of the complement of x with each of these factors is zero.
GrassmannSimplify[{(e1 - (3/2) e2) ∧ (-(e1 ∧ e3) + (3/2) e2 ∧ e3),
                   e3 ∧ (-(e1 ∧ e3) + (3/2) e2 ∧ e3)}]
  {0, 0}
More general metrics make sense in vector spaces, because the entities all have the same
interpretation. This leads us to consider hybrid metrics in bound spaces in which the vector
subspace has a general metric but the origin is orthogonal to all vectors. We can therefore adopt a
Euclidean metric for the origin, and a more general metric for the vector subspace.
$$G_{ij} \equiv \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & g_{11} & \cdots & g_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & g_{n1} & \cdots & g_{nn} \end{pmatrix} \tag{5.42}$$
These hybrid metrics will be useful later when we discuss screw algebra in Chapter 7 and
mechanics in Chapter 8. Of course the permitted transformations on a space with such a metric
must be restricted to those which maintain the orthogonality of the origin point to the vector
subspace.
It will be convenient in what follows to refer to a bound space with underlying linear space of
dimension n+1 as an n-plane.
All the formulae for complements in the vector subspace still hold. In an n-plane, the vector
subspace has dimension n, and hence the complement operation for the n-plane and its vector
subspace will not be the same.
We will denote the complement operation in the vector subspace by using an overvector (a right-arrow accent) instead of an overbar, and call it a vector space complement. The vector space complement (overvector) operation can be entered from the Basic Operations palette by using the button under the complement (overbar) button.
The constant c (the scalar factor in the definition of the complement) will be the same in both spaces, and still equates to the inverse of the square root of the determinant g of the metric tensor. Hence it is easy to show that in a bound space with the metric tensor above:
$$\overline{1} \equiv \frac{1}{\sqrt{g}}\;\mathcal{O}\wedge e_1\wedge e_2\wedge\cdots\wedge e_n \tag{5.43}$$

$$\vec{1} \equiv \frac{1}{\sqrt{g}}\;e_1\wedge e_2\wedge\cdots\wedge e_n \tag{5.44}$$

$$\overline{1} \equiv \mathcal{O}\wedge\vec{1} \tag{5.45}$$

$$\overline{\mathcal{O}} \equiv \vec{1} \tag{5.46}$$

In this interpretation $\overline{1}$ will be called the unit n-plane, while $\vec{1}$ will be called the unit n-vector. The unit n-plane is the unit n-vector bound through the origin. (Here $\mathcal{O}$ denotes the origin.)
$$\vec{e_i} \equiv \frac{1}{\sqrt{g}}\sum_{j=1}^{n} g_{ij}\,\underline{e}_j \tag{5.47}$$

In the n-plane, the complement $\overline{e_i}$ of a basis vector $e_i$ is given by the metric of the n-plane as:

$$\overline{e_i} \equiv \frac{1}{\sqrt{g}}\sum_{j=1}^{n} g_{ij}\left(-\mathcal{O}\wedge\underline{e}_j\right)$$

$$\overline{e_i} \equiv -\mathcal{O}\wedge\vec{e_i} \tag{5.48}$$

More generally we have that for a basis m-vector, its complement in the n-plane is related to its complement in the vector subspace by:
$$\overline{\underset{m}{e}_i} \equiv (-1)^m\,\mathcal{O}\wedge\vec{\underset{m}{e}_i} \tag{5.49}$$

$$\overline{\underset{m}{\alpha}} \equiv (-1)^m\,\mathcal{O}\wedge\vec{\underset{m}{\alpha}} \tag{5.50}$$

† The vector space complement of any exterior product involving the origin $\mathcal{O}$ is undefined.

$$\overline{\underset{m}{\alpha}}\wedge\overline{\underset{k}{\beta}} \equiv \left((-1)^m\,\mathcal{O}\wedge\vec{\underset{m}{\alpha}}\right)\wedge\left((-1)^k\,\mathcal{O}\wedge\vec{\underset{k}{\beta}}\right) \equiv 0$$

The complement of this equation is also zero, showing that the regressive product of vectorial elements in a bound space is zero.

$$\overline{\underset{m}{\alpha}}\wedge\overline{\underset{k}{\beta}} \equiv \overline{\underset{m}{\alpha}\vee\underset{k}{\beta}} \equiv \overline{0} \equiv 0$$
For a regressive product to be non-zero, it needs to be able to assemble a copy of the unit n-
element out of its factors. If it cannot, the regressive product is zero. In this case, the space is a
bound space whose unit n-element contains the origin ¯" but the origin is missing from both
factors of the product, hence it is zero.
It is important to note that the complement axiom is only defined for complements in the full
space. Thus, we cannot say that
$$\overrightarrow{\underset{m}{\alpha}\wedge\underset{k}{\beta}} \equiv \vec{\underset{m}{\alpha}}\vee\vec{\underset{k}{\beta}}$$

A simple example using basis elements in a Euclidean point 3-space will suffice. Substituting basis elements and using the definitions for the vector-space complement shows that the left side is non-zero, while the right side is zero.

$$\overrightarrow{e_1\wedge e_2} \equiv e_3$$

$$\vec{e_1}\vee\vec{e_2} \equiv (e_2\wedge e_3)\vee(e_3\wedge e_1) \equiv 0$$
1 1 n
ã e1 Ô e2 Ô ! Ô en Ó -¯" Ô ‚ gij ej
¯g ¯g j=1 Å
1 n
ã- ‚ gij Jej Ô ej N Ó J¯" Ô ej N
¯g Å Å
j=1
1 n
ã- ‚ gij Jej Ô ¯" Ô ej N Ó ej
¯g j=1 Å Å
1 n
ã ‚ gij 1 Ó ej
¯g j=1 Å
1 n
ã 1Ó ‚ gij ej ã 1 Ó ei ã ei
¯g j=1 Å
$$\overline{\mathcal{O}\wedge e_i} \equiv \vec{e_i} \tag{5.51}$$
More generally, we can show that for a general m-vector, the complement of the m-vector bound
through the origin in an n-plane is simply the complement of the m-vector in the vector subspace of
the n-plane.
$$\overline{\mathcal{O}\wedge\underset{m}{\alpha}} \equiv \vec{\underset{m}{\alpha}} \tag{5.52}$$
$$\overline{\overline{\underset{m}{\alpha}}} \equiv (-1)^{m(n-m)}\,\underset{m}{\alpha}$$

The vector space complement of the vector space complement of $\underset{m}{\alpha}$ in the vector subspace is

$$\vec{\vec{\underset{m}{\alpha}}} \equiv (-1)^{m(n-1-m)}\,\underset{m}{\alpha}$$

Hence

$$\overline{\overline{\underset{m}{\alpha}}} \equiv (-1)^{m}\;\vec{\vec{\underset{m}{\alpha}}} \tag{5.53}$$

We can verify this using the formulae derived in the previous section.

$$\overline{\underset{m}{\alpha}} \equiv (-1)^m\,\mathcal{O}\wedge\vec{\underset{m}{\alpha}}$$

$$\overline{\overline{\underset{m}{\alpha}}} \equiv (-1)^m\,\overline{\mathcal{O}\wedge\vec{\underset{m}{\alpha}}} \equiv (-1)^m\,\vec{\vec{\underset{m}{\alpha}}}$$
OverVector@e1 Ô e2 D
e1 Ô e2
8e2 Ô e3 , x, e2 Ô e3 , x<
If, on the other hand, the basis of your currently declared space does contain the origin ¯", then
ConvertComplements will convert any expressions containing OverVector complements to
their equivalent OverBar forms.
”
¯"3 ; ConvertComplementsA9e1 , x, e1 , x=E
9e2 Ô e3 , ¯# Ô x, -H¯# Ô e2 Ô e3 L, x=
9¯#, ¯# Ô e1 , ¯# Ô x + ¯#, ¯# + e2 Ô e3 =
Complement Palette
Basis           Complement
1               ¯! ∧ e1 ∧ e2
¯!              e1 ∧ e2
e1              -(¯! ∧ e2)
e2              ¯! ∧ e1
¯! ∧ e1         e2
¯! ∧ e2         -e1
e1 ∧ e2         ¯!
¯! ∧ e1 ∧ e2    1
The complement table tells us that each basis vector is orthogonal to the axis involving the other
basis vector. For example, e2 is orthogonal to the axis ¯" Ô e1 .
Suppose now we take a general vector x ã a e1 + b e2 . The complement of this vector is:
Again, this is an axis (bound vector) through the origin orthogonal to the vector x.
Now let us take a general point P ã ¯" + x and explore what element is orthogonal to this point.
$$\overline{P} \equiv \overline{\mathcal{O}+x} \equiv e_1\wedge e_2 + \mathcal{O}\wedge(b\,e_1 - a\,e_2)$$
The effect of adding the bivector e1 Ôe2 to x is to shift the bound vector ¯" Ô Hb e1 - a e2 L
parallel to itself. We can factor e1 Ôe2 into the exterior product of two vectors, one parallel to
b e1 - a e2 .
e1 Ô e2 ã z Ô Hb e1 - a e2 L
P ã H¯" + zL Ô Hb e1 - a e2 L
The vector z is the position vector of any point on the line defined by the bound vector P. A
particular point of interest is the point on the line closest to the point P or the origin. The position
vector z would then be a vector orthogonal to the direction of the line. Thus we can write e1 Ôe2 as:
$$e_1\wedge e_2 \equiv -\,\frac{(a\,e_1 + b\,e_2)\wedge(b\,e_1 - a\,e_2)}{a^2+b^2}$$
The final expression for the complement of the point P ã ¯" + x ã ¯" + a e1 + b e2 can then be
written as a bound vector in a direction orthogonal to the position vector of P.
$$\overline{P} \equiv \left(\mathcal{O} - \frac{a\,e_1 + b\,e_2}{a^2+b^2}\right)\wedge(b\,e_1 - a\,e_2) \equiv P^{*}\wedge(b\,e_1 - a\,e_2) \tag{5.54}$$
This formula turns out to be a special case of a more general formula which we will derive later in
the section, and for which we now propose some convenient notation.
$$P \equiv \mathcal{O} + x \qquad P^{*} \equiv \mathcal{O} + x^{*} \qquad x^{*} \equiv -\frac{x}{|x|^{2}} \tag{5.55}$$
We call the point P* the inverse point to P. Inverse points are situated on the same line through the
origin, and on opposite sides of it. The product of their distances from the origin is unity.
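The inverse-point relation x* ≡ −x/|x|² is easy to verify numerically (a Python sketch with an illustrative helper name):

```python
import math

def inverse_point(x):
    """x* = -x/|x|^2, the position vector of the inverse point P* of P = origin + x."""
    n2 = sum(t*t for t in x)
    return tuple(-t / n2 for t in x)

x = (3.0, 4.0)
xs = inverse_point(x)
# Inverse points lie on the same line through the origin, on opposite sides,
# and the product of their distances from the origin is unity.
assert xs == (-3.0/25.0, -4.0/25.0)
assert abs(math.hypot(*x) * math.hypot(*xs) - 1.0) < 1e-12
```
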
Remembering that the vector space complement of the position vector x is denoted $\vec{x}$, we can now rewrite our derived formula in a more condensed form.

$$\overline{P} \equiv P^{*}\wedge\left(-\vec{x}\right) \tag{5.56}$$
[Figure: the point P ≡ 𝒪 + x, its inverse point P* ≡ 𝒪 + x*, and the complement P̄ ≡ P* ∧ (−x⃗), a bound vector through P* orthogonal to x.]
Of course, since the bound space of a plane is a three-dimensional linear space, the complement of
the complement of a point is the point itself, just like the complement of a complement of a vector
in a vector 3-space is the vector itself.
Pbar = (¯! - (a e1 + b e2)/(a² + b²)) ∧ (b e1 - a e2); ConvertComplements[OverBar[Pbar]]
  ¯! + a e1 + b e2
¯"3 ; ComplementPalette
Complement Palette
Basis Complement
1 ¯! Ô e1 Ô e2 Ô e3
¯! e1 Ô e2 Ô e3
e1 -H¯! Ô e2 Ô e3 L
e2 ¯! Ô e1 Ô e3
e3 -H¯! Ô e1 Ô e2 L
¯! Ô e1 e2 Ô e3
¯! Ô e2 -He1 Ô e3 L
¯! Ô e3 e1 Ô e2
e1 Ô e2 ¯! Ô e3
e1 Ô e3 -H¯! Ô e2 L
e2 Ô e3 ¯! Ô e1
¯! Ô e1 Ô e2 e3
¯! Ô e1 Ô e3 -e2
¯! Ô e2 Ô e3 e1
e1 Ô e2 Ô e3 -¯!
¯! Ô e1 Ô e2 Ô e3 1
Each basis vector is orthogonal to the coordinate plane involving the other basis vectors. For example, e2 is orthogonal to the coordinate plane through the origin containing e1 and e3.
Suppose now we take a general vector x ã a e1 + b e2 + c e3 . The complement of this vector is:
$$\overline{x} \equiv \overline{a\,e_1 + b\,e_2 + c\,e_3} \equiv -a\,\mathcal{O}\wedge e_2\wedge e_3 + b\,\mathcal{O}\wedge e_1\wedge e_3 - c\,\mathcal{O}\wedge e_1\wedge e_2 \equiv \mathcal{O}\wedge(-a\,e_2\wedge e_3 + b\,e_1\wedge e_3 - c\,e_1\wedge e_2)$$
Again, this is a plane (bound bivector) through the origin orthogonal to the vector x.
$$\overline{P} \equiv \overline{\mathcal{O}+x} \equiv e_1\wedge e_2\wedge e_3 + \mathcal{O}\wedge(-a\,e_2\wedge e_3 + b\,e_1\wedge e_3 - c\,e_1\wedge e_2)$$
The effect of adding the trivector e1 Ôe2 Ôe3 to x is to shift the bound bivector
¯" Ô H- a e2 Ô e3 + b e1 Ô e3 - c e1 Ô e2 L parallel to itself. We can factor e1 Ôe2 Ôe3 into the
exterior product of a vector and a bivector congruent to - a e2 Ô e3 + b e1 Ô e3 - c e1 Ô e2 .
$$e_1\wedge e_2\wedge e_3 \equiv z\wedge(-a\,e_2\wedge e_3 + b\,e_1\wedge e_3 - c\,e_1\wedge e_2)$$
$$\overline{P} \equiv (\mathcal{O}+z)\wedge(-a\,e_2\wedge e_3 + b\,e_1\wedge e_3 - c\,e_1\wedge e_2)$$
The vector z is the position vector of any point on the plane defined by the bound bivector P. A
particular point of interest is the point on the plane closest to the point P or the origin. The position
vector z would then be a vector orthogonal to the plane. Thus we can write e1 Ôe2 Ôe3 as:
$$e_1\wedge e_2\wedge e_3 \equiv -\,\frac{(a\,e_1 + b\,e_2 + c\,e_3)\wedge(-a\,e_2\wedge e_3 + b\,e_1\wedge e_3 - c\,e_1\wedge e_2)}{a^2+b^2+c^2}$$
The final expression for the complement of the point P ã ¯" + x ã ¯" + a e1 + b e2 + c e3 can
then be written as a bound bivector in a direction orthogonal to the position vector of P.
$$\overline{P} \equiv \left(\mathcal{O} - \frac{a\,e_1 + b\,e_2 + c\,e_3}{a^2+b^2+c^2}\right)\wedge(-a\,e_2\wedge e_3 + b\,e_1\wedge e_3 - c\,e_1\wedge e_2) \tag{5.57}$$
From this result we see that our condensed formula is also valid in a point 3-space.
$$\overline{P} \equiv P^{*}\wedge\left(-\vec{x}\right) \tag{5.58}$$
In a point 3-space, the complement of the complement of a point is the negative of the point itself,
or equivalently, the point with a weight of -1.
P = ¯! + a e1 + b e2 + c e3; ConvertComplements[OverBar[OverBar[P]]]
  -¯! - a e1 - b e2 - c e3
Consider a bound simple m-vector $P\wedge\underset{m}{\alpha}$ in an n-plane, where the position vector x of the point P is orthogonal to $\underset{m}{\alpha}$, that is, to each of the 1-element factors of $\underset{m}{\alpha}$.

$$P\wedge\underset{m}{\alpha} \equiv (\mathcal{O}+x)\wedge\underset{m}{\alpha} \equiv \mathcal{O}\wedge\underset{m}{\alpha} + x\wedge\underset{m}{\alpha}$$

$$\left(x\wedge\underset{m}{\alpha}\right)\vee\overline{x} \equiv (x\wedge\alpha_1\wedge\alpha_2\wedge\alpha_3\wedge\cdots\wedge\alpha_m)\vee\overline{x} \equiv (x\wedge\overline{x})\vee(\alpha_1\wedge\alpha_2\wedge\cdots\wedge\alpha_m) - (\alpha_1\wedge\overline{x})\vee(x\wedge\alpha_2\wedge\alpha_3\wedge\cdots\wedge\alpha_m) + (\alpha_2\wedge\overline{x})\vee(x\wedge\alpha_1\wedge\alpha_3\wedge\cdots\wedge\alpha_m) - \cdots + (-1)^m\,(\alpha_m\wedge\overline{x})\vee(x\wedge\alpha_1\wedge\alpha_2\wedge\cdots\wedge\alpha_{m-1})$$

But since we have chosen x to be orthogonal to each of the $\alpha_i$, the terms $\alpha_i\wedge\overline{x}$ are zero, so that
$$\left(x\wedge\underset{m}{\alpha}\right)\vee\overline{x} \equiv (x\wedge\overline{x})\vee\underset{m}{\alpha}$$
By taking the complement of this expression, we can manipulate it into the form we want.

$$\overline{\left(x\wedge\underset{m}{\alpha}\right)\vee\overline{x}} \equiv \overline{(x\wedge\overline{x})\vee\underset{m}{\alpha}}$$

First apply the complement axiom (once on the left and twice on the right).

$$\overline{x\wedge\underset{m}{\alpha}}\wedge\overline{\overline{x}} \equiv \left(\overline{x}\vee\overline{\overline{x}}\right)\wedge\overline{\underset{m}{\alpha}}$$
Now, the regressive product of an element with its complement is a scalar which, as we will see in the next chapter, is the scalar product of the element with itself. This scalar product may also be written as the square of the magnitude (measure) of the element, |x|². Hence on the right side we have

$$\overline{x\wedge\underset{m}{\alpha}}\wedge\overline{\overline{x}} \equiv (-1)^{n-1}\,|x|^{2}\;\overline{\underset{m}{\alpha}}$$

Interchanging the factors on the left side gives

$$\overline{x\wedge\underset{m}{\alpha}}\wedge\overline{\overline{x}} \equiv (-1)^{n-1+m}\;\overline{\overline{x}}\wedge\overline{x\wedge\underset{m}{\alpha}}$$

so that

$$(-1)^{m}\;\overline{\overline{x}}\wedge\overline{x\wedge\underset{m}{\alpha}} \equiv |x|^{2}\;\overline{\underset{m}{\alpha}}$$
”
¯" Ô Kx Ô x Ô aO ã ¯" Ô KH-1Lm †x§2 aO
m m
We can now return to the formula for the bound element under consideration and take its
complement.
$$P\wedge\underset{m}{\alpha} \equiv (\mathcal{O}+x)\wedge\underset{m}{\alpha} \equiv \mathcal{O}\wedge\underset{m}{\alpha} + x\wedge\underset{m}{\alpha}$$

$$\overline{P\wedge\underset{m}{\alpha}} \equiv \overline{\mathcal{O}\wedge\underset{m}{\alpha}} + \overline{x\wedge\underset{m}{\alpha}} \equiv \vec{\underset{m}{\alpha}} + (-1)^{m+1}\,\mathcal{O}\wedge\overrightarrow{x\wedge\underset{m}{\alpha}}$$

Substituting for $\vec{\underset{m}{\alpha}}$ from our derivation above gives

$$\overline{P\wedge\underset{m}{\alpha}} \equiv (-1)^{m}\,\frac{x}{|x|^{2}}\wedge\overrightarrow{x\wedge\underset{m}{\alpha}} + (-1)^{m+1}\,\mathcal{O}\wedge\overrightarrow{x\wedge\underset{m}{\alpha}} \equiv \left(\mathcal{O} - \frac{x}{|x|^{2}}\right)\wedge\left(-\overrightarrow{\underset{m}{\alpha}\wedge x}\right)$$

$$\overline{P\wedge\underset{m}{\alpha}} \equiv \left(\mathcal{O} - \frac{x}{|x|^{2}}\right)\wedge\left(-\overrightarrow{\underset{m}{\alpha}\wedge x}\right) \qquad P \equiv \mathcal{O}+x,\qquad \underset{m}{\alpha}\odot x \equiv 0 \tag{5.59}$$
This formula holds only under the condition that the position vector x of the point P is orthogonal to $\underset{m}{\alpha}$. Here we have preempted the notation of the next chapter by using $\underset{m}{\alpha}\odot x \equiv 0$ (the interior product) to express this condition.
As a check, we see that putting $\underset{m}{\alpha}$ equal to unity in this formula gives us the formula we derived earlier in the previous section.

$$\overline{P} \equiv \left(\mathcal{O} - \frac{x}{|x|^{2}}\right)\wedge\left(-\vec{x}\right) \qquad P \equiv \mathcal{O}+x$$

$$\overline{P\wedge\alpha} \equiv \left(\mathcal{O} - \frac{x}{|x|^{2}}\right)\wedge\overrightarrow{x\wedge\alpha} \qquad P \equiv \mathcal{O}+x,\qquad \alpha\odot x \equiv 0 \tag{5.60}$$
† In a point 3-space, the complement of this bound vector becomes a second bound vector
(orthogonal to the first one).
$$\overline{P\wedge a\wedge b} \equiv \left(\mathcal{O} - \frac{x}{|x|^{2}}\right)\wedge\left(-\overrightarrow{x\wedge a\wedge b}\right) \qquad P \equiv \mathcal{O}+x,\qquad (a\wedge b)\odot x \equiv 0 \tag{5.61}$$
In geometric terms, this can be interpreted as saying that the complement of a line defined by two
points is the intersection of the complements of the points.
† The simplest case is in the plane, where the complement of the points are themselves lines, and
whose intersection is a point.
[Figure: in the plane, the complements of the points P and Q are the lines P̄ and Q̄; their intersection is the point R̄ ≡ P̄ ∨ Q̄ ≡ complement(P ∧ Q), and its complement R ≡ P ∧ Q is the line through P and Q. The figure also shows the inverse points P*, Q*, R* and the position vectors x, y, z and x*, y*, z*.]
† In a point 3-space, the same relations hold, but their interpretations are different. The
complement P of a point is a bound bivector as we have depicted earlier. The complements of
two different points P and Q yield two distinct bound bivectors which intersect in a line R, say.
And the complement of this line R is the line passing through the points P and Q.
† The complement of a bound bivector can be composed as the regressive product of the
complements of three points. In a point 3-space, this results in the intersection of three bound
bivectors yielding a point. The complement of this point is a bound bivector passing through the
original three points.
† The simple examples explored in the sections above are straightforwardly extended mutatis
mutandis to higher dimensional spaces, but of course they are more challenging to depict!
Reciprocal bases
Up to this point we have only considered one basis for L1, which we have denoted with subscripted indices, for example ei. We will refer to this basis as the standard basis. Introducing a second basis reciprocal to this standard basis enables us to write the formulae for complements in a more symmetric way.
This section will summarize formulae relating basis elements and their cobases and complements
in terms of reciprocal bases. For simplicity, we adopt the Einstein summation convention.
In L1 the metric tensor $g_{ij}$ forms the relationship between the reciprocal bases.

$$e_i \equiv g_{ij}\,e^{j} \tag{5.62}$$

This relationship induces a metric tensor on each Lm.

$$\underset{m}{e}_i \equiv \underset{m}{g}_{ij}\,\underset{m}{e}^{j} \tag{5.63}$$

$$\overline{e_i} \equiv \frac{1}{\sqrt{g}}\;g_{ij}\,\underline{e}_j \tag{5.64}$$

Here $g \equiv |g_{ij}|$ is the determinant of the metric tensor. Taking the complement of formula 5.62 and substituting from formula 5.64 gives:

$$\overline{e^{\,i}} \equiv \frac{1}{\sqrt{g}}\;\underline{e}_i \tag{5.65}$$
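The defining property of the reciprocal basis, eᵢ · eʲ = δᵢʲ, can be checked numerically by taking the components of eʲ in the standard basis to be the rows of the inverse metric (a Python sketch for a 2-space, assuming the inner product of 1-elements is u·g·v as developed in the next chapter):

```python
def reciprocal_basis(g):
    """Components of the reciprocal basis e^i in the standard basis,
    i.e. the rows of the inverse metric g^ij, for a 2-space metric g."""
    det = g[0][0]*g[1][1] - g[0][1]*g[1][0]
    return [[ g[1][1]/det, -g[0][1]/det],
            [-g[1][0]/det,  g[0][0]/det]]

def dot(u, v, g):
    """Inner product of two vectors given by components in the standard basis."""
    return sum(u[i]*g[i][j]*v[j] for i in range(2) for j in range(2))

g = [[2.0, 1.0], [1.0, 1.0]]
e = [[1.0, 0.0], [0.0, 1.0]]        # standard basis e_i
er = reciprocal_basis(g)            # reciprocal basis e^i
for i in range(2):
    for j in range(2):
        assert dot(e[i], er[j], g) == (1.0 if i == j else 0.0)
```
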
$$\overline{e_i} \equiv \sqrt{g}\;\underline{e}^{\,i} \tag{5.66}$$

$$\overline{\underset{m}{e}_i} \equiv \sqrt{g}\;\underline{\underset{m}{e}}^{\,i} \tag{5.67}$$

$$\overline{\underset{m}{e}^{\,i}} \equiv \frac{1}{\sqrt{g}}\;\underline{\underset{m}{e}}_i \tag{5.68}$$

$$\frac{1}{\sqrt{g}}\;e_1\wedge e_2\wedge\cdots\wedge e_n \equiv \overline{1} \equiv \sqrt{g}\;e^{1}\wedge e^{2}\wedge\cdots\wedge e^{n} \tag{5.69}$$

$$\overline{e_1\wedge e_2\wedge\cdots\wedge e_n} \equiv \sqrt{g} \tag{5.70}$$

$$\overline{e^{1}\wedge e^{2}\wedge\cdots\wedge e^{n}} \equiv \frac{1}{\sqrt{g}} \tag{5.71}$$

$$\overline{e_i} \equiv \sqrt{g}\,(-1)^{i-1}\;e^{1}\wedge\cdots\wedge\widehat{e^{\,i}}\wedge\cdots\wedge e^{n} \tag{5.72}$$

$$\overline{e^{\,i}} \equiv \frac{1}{\sqrt{g}}\,(-1)^{i-1}\;e_1\wedge\cdots\wedge\widehat{e_i}\wedge\cdots\wedge e_n \tag{5.73}$$

$$\overline{e^{\,i_1}\wedge\cdots\wedge e^{\,i_m}} \equiv \frac{1}{\sqrt{g}}\,(-1)^{K_m}\;e_1\wedge\cdots\wedge\widehat{e_{i_1}}\wedge\cdots\wedge\widehat{e_{i_m}}\wedge\cdots\wedge e_n \tag{5.75}$$

(A circumflex marks an omitted factor.)
Taking the complement of this formula and applying formula 5.69 gives the required result.
ã ¯g H-1Lm Hn-mL ei
m
Similarly:
1
emi ã H-1Lm Hn-mL ei 5.77
¯g m
ei ã g ei
m m
1
ãJ ¯g N H-1Lm Hn-mL ei
¯g m
ã H-1Lm Hn-mL ei
m
A similar result is obtained for the complement of the complement of a reciprocal basis element.
$$\underset{m}{e}_i\wedge\overline{\underset{m}{e}^{\,j}} \equiv \underset{m}{e}_i\wedge\frac{1}{\sqrt{g}}\,\underline{\underset{m}{e}}_j \equiv \delta_i^{\,j}\;\overline{1}$$

$$\underset{m}{e}^{\,i}\wedge\overline{\underset{m}{e}_j} \equiv \underset{m}{e}^{\,i}\wedge\sqrt{g}\;\underline{\underset{m}{e}}^{\,j} \equiv \delta^{i}_{\,j}\;\overline{1}$$

$$\underset{m}{e}_i\wedge\overline{\underset{m}{e}^{\,j}} \equiv \delta_i^{\,j}\;\overline{1} \qquad\quad \underset{m}{e}^{\,i}\wedge\overline{\underset{m}{e}_j} \equiv \delta^{i}_{\,j}\;\overline{1} \tag{5.78}$$
The exterior product of a basis element with the complement of any other basis element in the
same basis is equal to the corresponding component of the metric tensor times the unit n-element.
$$\underset{m}{e}_i\wedge\overline{\underset{m}{e}_j} \equiv \underset{m}{e}_i\wedge\overline{\left(\underset{m}{g}_{jk}\,\underset{m}{e}^{\,k}\right)} \equiv \underset{m}{e}_i\wedge\underset{m}{g}_{jk}\,\frac{1}{\sqrt{g}}\,\underline{\underset{m}{e}}_k \equiv \underset{m}{g}_{ij}\;\overline{1}$$

$$\underset{m}{e}^{\,i}\wedge\overline{\underset{m}{e}^{\,j}} \equiv \underset{m}{e}^{\,i}\wedge\overline{\left(\underset{m}{g}^{\,jk}\,\underset{m}{e}_k\right)} \equiv \underset{m}{e}^{\,i}\wedge\underset{m}{g}^{\,jk}\,\sqrt{g}\;\underline{\underset{m}{e}}^{\,k} \equiv \underset{m}{g}^{\,ij}\;\overline{1}$$

$$\underset{m}{e}_i\wedge\overline{\underset{m}{e}_j} \equiv \underset{m}{g}_{ij}\;\overline{1} \qquad\quad \underset{m}{e}^{\,i}\wedge\overline{\underset{m}{e}^{\,j}} \equiv \underset{m}{g}^{\,ij}\;\overline{1} \tag{5.79}$$
$$e_i\wedge\overline{e^{\,j}} \equiv \delta_i^{\,j}\;\overline{1} \qquad\quad e^{i}\wedge\overline{e_j} \equiv \delta^{i}_{\,j}\;\overline{1} \tag{5.80}$$
$$\underset{m}{e}_i\wedge\overline{\underset{m}{e}^{\,j}} \equiv \delta_i^{\,j}\;\overline{1} \qquad\quad \underset{m}{e}_i\vee\overline{\underset{m}{e}^{\,j}} \equiv \delta_i^{\,j} \tag{5.82}$$

$$\underset{m}{e}_i\wedge\overline{\underset{m}{e}_j} \equiv \underset{m}{g}_{ij}\;\overline{1} \qquad\quad \underset{m}{e}_i\vee\overline{\underset{m}{e}_j} \equiv \underset{m}{g}_{ij} \tag{5.84}$$

$$\underset{m}{e}^{\,i}\wedge\overline{\underset{m}{e}^{\,j}} \equiv \underset{m}{g}^{\,ij}\;\overline{1} \qquad\quad \underset{m}{e}^{\,i}\vee\overline{\underset{m}{e}^{\,j}} \equiv \underset{m}{g}^{\,ij} \tag{5.85}$$
$$\underset{m}{\alpha} \equiv \alpha_1\wedge\alpha_2\wedge\cdots\wedge\alpha_m$$

Take any other n−m 1-elements $\alpha_{m+1}, \alpha_{m+2}, \ldots, \alpha_n$ such that $\alpha_1, \alpha_2, \ldots, \alpha_m, \alpha_{m+1}, \ldots, \alpha_n$ form an independent set, and thus a basis of L1. Then by equation 5.76 we have:

$$\overline{\underset{m}{\alpha}} \equiv \overline{\alpha_1\wedge\alpha_2\wedge\cdots\wedge\alpha_m} \equiv \sqrt{g_{\alpha}}\;\alpha^{m+1}\wedge\alpha^{m+2}\wedge\cdots\wedge\alpha^{n}$$
5.13 Summary
In this chapter we have shown that, by defining the complement operation on a basis of L1 as

$$\overline{e_i} \equiv c\sum_{j=1}^{n} g_{ij}\;\underline{e}_j$$

and by adopting the complement axiom

$$\overline{\underset{m}{\alpha}\wedge\underset{k}{\beta}} \equiv \overline{\underset{m}{\alpha}}\vee\overline{\underset{k}{\beta}}$$

as the mechanism for extending the complement to higher grade elements, the requirement that

$$\overline{\overline{\underset{m}{\alpha}}} \equiv (-1)^{m(n-m)}\,\underset{m}{\alpha}$$

constrains $g_{ij}$ to be symmetric, and the constant $c$ to have the value $\pm\frac{1}{\sqrt{g}}$, where $g$ is the determinant of $g_{ij}$.

The metric tensor $g_{ij}$ was introduced as a mapping between the two linear spaces L1 and L(n-1).

To this point none of these concepts has yet been related to notions of interior, inner or scalar products. This will be addressed in the next chapter, where we will also see that the constraint $c \equiv \pm\frac{1}{\sqrt{g}}$ is equivalent to requiring that the magnitude of the unit n-element be unity.

Once we have constrained $g_{ij}$ to be symmetric, we are able to introduce the standard notion of a reciprocal basis. The formulae for complements of basis elements in any of the spaces Lm then become much simpler to express.
This chapter also discusses how the complement operation can be interpreted geometrically both in
vector spaces and in bound spaces. In both spaces the complement axiom holds, but its geometric
interpretation in the two spaces is quite different.
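The constraint above can be checked numerically in a small case. The sketch below (plain Python, not part of the GrassmannAlgebra package; the metric values are arbitrary assumptions) implements the complement on 1-elements of a 2-space, where the cobasis of $e_1$ is $e_2$ and the cobasis of $e_2$ is $-e_1$, and verifies that with a symmetric metric and $\varepsilon = 1/\sqrt{g}$ the complement of the complement of any 1-element is $(-1)^{1\,(n-1)} = -1$ times the element.

```python
import math

# A 1-element in a 2-space is a coefficient pair (a1, a2) over the basis {e1, e2}.
# Sample symmetric metric (an assumption for illustration); g = det = 5.
g11, g12, g22 = 2.0, 1.0, 3.0
g = g11 * g22 - g12 * g12

def complement(a):
    """Complement of a 1-element: e_i-bar = (1/sqrt g) * sum_j g_ij * cobasis(e_j).
    In a 2-space the cobasis of e1 is e2 and the cobasis of e2 is -e1."""
    a1, a2 = a
    c1 = a1 * g11 + a2 * g12   # multiplies cobasis(e1) = e2
    c2 = a1 * g12 + a2 * g22   # multiplies cobasis(e2) = -e1
    s = 1.0 / math.sqrt(g)
    return (-s * c2, s * c1)

def double_complement(a):
    return complement(complement(a))
```

If the metric were not symmetric, the double complement would pick up a component off the original direction; symmetry is exactly what makes the map square to $-1$ here.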
6.1 Introduction
To this point we have defined three important operations in the Grassmann algebra: the exterior
product, the regressive product, and the complement.
In this chapter we introduce the interior product, the fourth operation of fundamental importance
to the algebra. The interior product of two elements is defined as the regressive product of one
element with the complement of the other. Whilst the exterior product of an m-element and a k-element generates an (m+k)-element, the interior product of an m-element and a k-element (m ≥ k) generates an (m−k)-element. This means that the interior product of two elements of equal grade is
a 0-element (or scalar). The interior product of an element with itself is a scalar, and it is this scalar
that is used to define the measure of the element.
The interior product of two 1-elements corresponds to the usual notion of inner, scalar, or dot
product. But we will see that the notion of measure is not restricted to 1-elements. Just as one may
associate the measure of a vector with a length, the measure of a bivector may be associated with
an area, and the measure of a trivector with a volume.
If the exterior product of an m-element and a 1-element is zero, then it is known that the 1-element
is contained in the m-element. If the interior product of an m-element and a 1-element is zero, then
this means that the 1-element is contained in the complement of the m-element. In this case it may
be said that the 1-element is orthogonal to the m-element.
The basing of the notion of interior product on the notions of regressive product and complement
follows here the Grassmannian tradition rather than that of the current literature which introduces
the inner product onto a linear space as an arbitrary extra definition. We do this in the belief that it
is the most straightforward way to obtain consistency within the algebra and to see and exploit the
relationships between the notions of exterior product, regressive product, complement and interior
product, and to discover and prove formulae relating them.
We use the term 'interior' in addition to 'inner' to signal that the products are not quite the same. In
traditional usage the inner product has resulted in a scalar. The interior product is however more
general, being able to operate on two elements of any and perhaps different grades. We reserve the
term inner product for the interior product of two elements of the same grade. An inner product of
two elements of grade 1 is called a scalar product. In sum: inner products are scalar whilst, in
general, interior products are not.
We denote the interior product with a small circle with a 'bar' through it, thus $\ominus$. This is to signify that it has a more extended meaning than the inner product. Thus the interior product of $\alpha_{(m)}$ with $\beta_{(k)}$ becomes $\alpha_{(m)}\ominus\beta_{(k)}$. This product is zero if $m < k$. Thus the order is important: for a non-zero product the element of higher grade should be on the left. The interior product has the same left associativity as the negation operator, or minus sign. It is possible to define both left and right interior products, but in practice the added complexity is not rewarded by an increase in utility.
We will see in Chapter 10: The Generalized Product that the interior product of two elements can
be expressed as a certain generalized product, which is independent of the order of its factors.
This immediately suggests a definition for the inner product of an element with itself as $\alpha_{(m)}\vee\overline{\alpha_{(m)}}$, and, by extension, a formula for the inner product of any two $m$-elements as $\alpha_{(m)}\vee\overline{\beta_{(m)}}$.
The result of taking the regressive product of $\alpha_{(m)}$ and $\overline{\alpha_{(m)}}$ in the reverse order introduces a potential change of sign:

$\overline{\alpha_{(m)}}\vee\alpha_{(m)} \;=\; (-1)^{m(n-m)}\;\alpha_{(m)}\vee\overline{\alpha_{(m)}}$

Because of the form of Grassmann's definition of the complement, the element $\alpha_{(m)}\vee\overline{\alpha_{(m)}}$ is always a non-negative scalar. However, it is clear from the above that the element $\overline{\alpha_{(m)}}\vee\alpha_{(m)}$ may well not be. And furthermore, it may depend on the dimension of the space. Hence in the definition of the inner product it is important that the complemented element be the second factor.
Thus, the interior product does not depend on the dimension of the space, since the dependence on
the dimension of the regressive product and the complement operations 'cancel' each other out.
This makes it, like the exterior product, of special importance in the algebra.
If $m < k$, then $\alpha_{(m)}\vee\overline{\beta_{(k)}}$ is necessarily zero (otherwise the grade of the product would be negative), hence:
An important convention
In order to avoid unnecessarily distracting caveats on every formula involving interior products, in
the rest of this book we will suppose that the grade of the first factor is always greater than or
equal to the grade of the second factor. The formulae will remain true even if this is not the case,
but they will be trivially so by virtue of their terms reducing to zero.
To deal with the apparent asymmetry of the interior product, some authors (for example E. Tonti, On the Formal Structure of Physical Theories, page 68, 1975) have introduced the notion of left and right interior products equivalent to the definitions:

$\alpha_{(m)}\,\llcorner\;\beta_{(k)} \;=\; \alpha_{(m)}\vee\overline{\beta_{(k)}} \;\in\; \Lambda^{m-k} \qquad m\geq k$

$\alpha_{(m)}\,\lrcorner\;\beta_{(k)} \;=\; \overline{\alpha_{(m)}}\vee\beta_{(k)} \;\in\; \Lambda^{k-m} \qquad k\geq m$

$\qquad (6.4)$

Here, $\alpha_{(m)}\,\llcorner\;\beta_{(k)}$ is the right interior product, and $\alpha_{(m)}\,\lrcorner\;\beta_{(k)}$ is the left interior product. By interchanging the order of the factors in one of the regressive products, and then interchanging the m-element with the k-element, we can see that they are related by the formula:

$\alpha_{(m)}\,\llcorner\;\beta_{(k)} \;=\; (-1)^{k(n-m)}\;\beta_{(k)}\,\lrcorner\;\alpha_{(m)}$
In this book, we do not follow this dual-definition approach since we have found it to be an
unnecessary complication, especially with its dependence on grade and the dimension of the space.
In Chapter 10 we will discover a new product (the generalized Grassmann product) which leads to
the same interior product independent of the order of its factors, thus retrieving, in this wider
system, the sought-after symmetry.
Historical note
Grassmann and workers in the Grassmannian tradition define the interior product of two elements as the
product of one with the complement of the other, the product being either exterior or regressive depending on
which interpretation produces a non-zero result. Furthermore, when the grades of the elements are equal, it is
defined either way. This definition involves the confusion between scalars and n-elements discussed in
Chapter 5, Section 5.1 (equivalent to assuming a Euclidean metric and identifying scalars with pseudo-
scalars). It is to obviate this inconsistency and restriction on generality that the approach adopted here bases
its definition of the interior product explicitly on the regressive exterior product.
The grade of the interior product of two elements is the difference of their grades.
$\alpha_{(m)}\in\Lambda^m,\quad \beta_{(k)}\in\Lambda^k \quad\Longrightarrow\quad \alpha_{(m)}\ominus\beta_{(k)}\in\Lambda^{m-k} \qquad (6.6)$
Thus, in contradistinction to the regressive product, the grade of an interior product does not
depend on the dimension of the underlying linear space.
If the grade of the first factor is less than that of the second, the interior product is zero.
The following formulae are derived directly from the associativity of the regressive product, and
show that the interior product is not associative.
The interior product of an element with the unit scalar 1 does not change it. The interior product of scalars is equivalent to ordinary scalar multiplication. The complement operation may be viewed as the interior product with the unit n-element $\overline{1}$.
$\alpha_{(m)}\ominus 1 \;=\; \alpha_{(m)} \qquad (6.10)$

$\overline{1}\ominus\overline{1} \;=\; 1 \qquad (6.12)$

$\overline{\alpha_{(m)}} \;=\; \overline{1}\ominus\alpha_{(m)} \qquad (6.13)$

The interior product of the complement of a scalar and the reciprocal of the scalar is the unit n-element.

$\overline{1} \;=\; \overline{a}\ominus\frac{1}{a} \qquad (6.14)$
The interior product of two elements is equal (apart from a possible sign) to the interior product of
their complements in reverse order.
If the elements are of the same grade, the interior product of two elements is equal to the interior
product of their complements. (It will be shown later that, because of the symmetry of the metric
tensor, the interior product of two elements of the same grade is symmetric, hence the order of the
factors on either side of the equation may be reversed.)
Every exterior linear space has a zero element whose interior product with any other element is
zero.
$\left(\alpha_{(m)}+\beta_{(m)}\right)\ominus\gamma_{(r)} \;=\; \alpha_{(m)}\ominus\gamma_{(r)} \;+\; \beta_{(m)}\ominus\gamma_{(r)} \qquad (6.18)$

$\alpha_{(m)}\ominus\left(\beta_{(r)}+\gamma_{(r)}\right) \;=\; \alpha_{(m)}\ominus\beta_{(r)} \;+\; \alpha_{(m)}\ominus\gamma_{(r)} \qquad (6.19)$
Orthogonality
Orthogonality is a concept generated by the complement operation.
If $x$ is a 1-element and $\alpha_{(m)}$ is a simple m-element, then $x$ and $\alpha_{(m)}$ may be said to be linearly dependent if and only if $\alpha_{(m)}\wedge x = 0$, that is, $x$ is contained in (the space of) $\alpha_{(m)}$.

Similarly, $x$ and $\alpha_{(m)}$ are said to be orthogonal if and only if $\overline{\alpha_{(m)}}\wedge x = 0$, that is, $x$ is contained in $\overline{\alpha_{(m)}}$.
Just as the vanishing of the exterior product of two simple elements $\alpha_{(m)}$ and $\xi_{(k)}$ implies only that they have some 1-element in common, so the vanishing of their interior product implies only that $\overline{\alpha_{(m)}}$ and $\xi_{(k)}$ $(m\geq k)$ have some 1-element in common, and conversely. That is, there is a 1-element $x$ such that the following implications hold:

$\alpha_{(m)}\wedge\xi_{(k)} \;=\; 0 \;\Longleftrightarrow\; \left\{\alpha_{(m)}\wedge x = 0,\;\; \xi_{(k)}\wedge x = 0\right\}$

$\alpha_{(m)}\ominus\xi_{(k)} \;=\; 0 \;\Longleftrightarrow\; \left\{\overline{\alpha_{(m)}}\wedge x = 0,\;\; \xi_{(k)}\wedge x = 0\right\}$

$\alpha_{(m)}\ominus\xi_{(k)} \;=\; 0 \;\Longleftrightarrow\; \left\{\alpha_{(m)}\ominus x = 0,\;\; \xi_{(k)}\wedge x = 0\right\} \qquad (6.20)$
The vector $B\ominus x$ can be expanded by the Interior Common Factor Theorem (which we will discuss later) to give:

$B\ominus x \;=\; \left(x_1\wedge x_2\right)\ominus x \;=\; \left(x\ominus x_1\right)x_2 \;-\; \left(x\ominus x_2\right)x_1$

$B\wedge\left(B\ominus x\right) \;=\; 0$

The resulting vector $B\ominus x$ is also orthogonal to $x$. We can show this by taking its scalar product with $x$.
$\left(B\ominus x\right)\ominus x \;=\; B\ominus\left(x\wedge x\right) \;=\; 0$
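In a Euclidean 3-space this expansion can be checked with ordinary vector algebra, since there the interior product of the bivector $x_1\wedge x_2$ with a vector corresponds to the double cross product $(x_1\times x_2)\times x$. The following sketch (plain Python, not the GrassmannAlgebra package; the sample vectors are arbitrary assumptions) verifies the expansion and the orthogonality property numerically.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def interior_bivector_vector(x1, x2, x):
    """B interior x for B = x1 ^ x2, via the Interior Common Factor expansion:
    (x1 ^ x2) interior x = (x . x1) x2 - (x . x2) x1."""
    return tuple(dot(x, x1) * b - dot(x, x2) * a for a, b in zip(x1, x2))

# arbitrary sample vectors
x1, x2, x = (1.0, 2.0, -1.0), (0.5, -1.0, 3.0), (2.0, 0.0, 1.0)
bx = interior_bivector_vector(x1, x2, x)
```

The result lies in the plane of $x_1$ and $x_2$ (it is a combination of them by construction) and is orthogonal to $x$, exactly as the text describes.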
The unit bivector $\hat{B}$ is defined by

$\hat{B} \;=\; \frac{B}{\sqrt{B\ominus B}}$

The analysis above has shown us that in a vector n-space, $B\ominus x$ lies in $B$ and is orthogonal to $x$. It is also orthogonal to the component $x_{\parallel}$ of $x$ in $B$, and orthogonal to the component $x_{\perp}$ of $x$ orthogonal to $B$. We can depict this in a vector 3-space as follows:
In a vector 2-space, the same analysis is valid. However, $x_{\perp}$ is zero because it involves the exterior product of three vectors in a 2-space; and $x_{\parallel}$ is identical to $x$ because the unit bivector $\hat{B}$ must be equal to the unit 2-element of the space $\overline{1}$. (We discussed unit n-elements in Chapter 5.)

$x_{\perp} \;=\; \left(\hat{B}\wedge x\right)\ominus\hat{B} \;=\; 0$

$x_{\parallel} \;=\; -\hat{B}\ominus\left(\hat{B}\ominus x\right) \;=\; -\overline{1}\ominus\left(\overline{1}\ominus x\right) \;=\; -\overline{1}\ominus\overline{x} \;=\; -\overline{\overline{x}} \;=\; -(-x) \;=\; x$
In addition to the definitions and formulae we have already derived for the interior product there
are many more that can be derived by successive application of just three groups of axioms - the
anti-commutativity, complement, and complement of a complement axioms. We list them here
again for reference.
Anticommutativity axioms
Complement axioms
Formula 1

$\alpha_{(m)}\ominus\beta_{(k)} \;=\; \alpha_{(m)}\vee\overline{\beta_{(k)}} \;=\; (-1)^{k(n-m)}\;\overline{\beta_{(k)}}\vee\alpha_{(m)}$

Formula 2

Formula 3

$\alpha_{(m)}\ominus\beta_{(k)} \;=\; \alpha_{(m)}\vee\overline{\beta_{(k)}} \;=\; (-1)^{m(n-m)}\;\overline{\overline{\alpha_{(m)}}\wedge\beta_{(k)}} \;=\; (-1)^{(m+k)(n-m)}\;\overline{\beta_{(k)}}\ominus\overline{\alpha_{(m)}}$

$\alpha_{(m)}\ominus\beta_{(k)} \;=\; (-1)^{(m+k)(n-m)}\;\overline{\beta_{(k)}}\ominus\overline{\alpha_{(m)}} \qquad (6.26)$
Formula 4

$\alpha_{(m)}\ominus\overline{\beta_{(k)}} \;=\; \alpha_{(m)}\vee\overline{\overline{\beta_{(k)}}} \;=\; (-1)^{k(n-k)}\;\alpha_{(m)}\vee\beta_{(k)}$

Formula 5

Formula 6

Formula 7

$\overline{\overline{\alpha_{(m)}}\ominus\beta_{(k)}} \;=\; \overline{\overline{\alpha_{(m)}\wedge\beta_{(k)}}} \;=\; (-1)^{(m+k)(n-(m+k))}\;\alpha_{(m)}\wedge\beta_{(k)}$
Formula 8

Formula 9
$\alpha_{(m)}\ominus\left(\beta_{(k_1)}\wedge\beta_{(k_2)}\wedge\cdots\wedge\beta_{(k_p)}\right) \;=\; \alpha_{(m)}\vee\overline{\left(\beta_{(k_1)}\wedge\beta_{(k_2)}\wedge\cdots\wedge\beta_{(k_p)}\right)} \;=\; \alpha_{(m)}\vee\overline{\beta_{(k_1)}}\vee\overline{\beta_{(k_2)}}\vee\cdots\vee\overline{\beta_{(k_p)}}$

$\;=\; \alpha_{(m)}\ominus\beta_{(k_1)}\ominus\beta_{(k_2)}\ominus\cdots\ominus\beta_{(k_p)}$

$\alpha_{(m)}\ominus\left(\beta_{(k_1)}\wedge\beta_{(k_2)}\wedge\cdots\wedge\beta_{(k_p)}\right) \;=\; \alpha_{(m)}\ominus\beta_{(k_1)}\ominus\beta_{(k_2)}\ominus\cdots\ominus\beta_{(k_p)} \qquad (6.33)$
If the interior product of any of the $\beta_{(k_i)}$ with $\alpha_{(m)}$ is zero, then the complete interior product is zero. We can see this by rearranging the factors in the exterior product to bring that factor into first position.

By interchanging the order of the factors in the exterior product we can derive alternative expressions. For the case of two factors we have:

$\alpha_{(m)}\ominus\left(\beta_{(k)}\wedge\gamma_{(r)}\right) \;=\; (-1)^{k\,r}\;\alpha_{(m)}\ominus\left(\gamma_{(r)}\wedge\beta_{(k)}\right)$
GrassmannAlgebra has Mathematica's natural precedence ranking for its operators, which it uses to simplify the appearance of output by removing brackets if the input conforms to the precedence.

• The interior product operator has the same precedence as the minus operator.

{((((α⊖β)⊖γ)⊖δ)⊖…, α⊖(β⊖(γ⊖(δ⊖…)))}
{α⊖β⊖γ⊖δ⊖…, α⊖(β⊖(γ⊖(δ⊖…)))}

• The exterior product operator has a higher precedence than the interior product operator.

{α⊖β∧γ, α⊖(β∧γ), (α⊖β)∧γ}
{α⊖β∧γ, α⊖β∧γ, (α⊖β)∧γ}

• The regressive product operator has a higher precedence than the interior product operator.

{α⊖β∨γ, α⊖(β∨γ), (α⊖β)∨γ}
{α⊖β∨γ, α⊖β∨γ, (α⊖β)∨γ}

• The exterior product operator has a higher precedence than the regressive product operator.

{α∨β∧γ, α∨(β∧γ), (α∨β)∧γ}
{α∨β∧γ, α∨β∧γ, (α∨β)∧γ}
A = $\left\{\alpha_{(m)}\ominus\beta_{(k)}\ominus\gamma_{(j)},\;\; \alpha_{(m)}\ominus\left(\beta_{(k)}\ominus\gamma_{(j)}\right)\right\}$;

¯!3; InteriorToRegressive[A]

$\left\{\alpha_{(m)}\vee\overline{\beta_{(k)}}\vee\overline{\gamma_{(j)}},\;\; \alpha_{(m)}\vee\overline{\beta_{(k)}\vee\overline{\gamma_{(j)}}}\right\}$

• Converting interior products to regressive products is independent of the dimension of the space since the definitional formula is used.
• In a 3-space, the complement of a complement of an element is the element itself, hence there are no extra signs resulting from the conversion.

¯!3; InteriorToExterior[A]

$\left\{\overline{\overline{\left(\overline{\overline{\alpha_{(m)}}\wedge\beta_{(k)}}\right)}\wedge\gamma_{(j)}},\;\; \overline{\overline{\alpha_{(m)}}\wedge\overline{\overline{\beta_{(k)}}\wedge\gamma_{(j)}}}\right\}$

• In a 4-space however, this is not so, so there may be extra signs resulting from the conversion.

¯!4; InteriorToExterior[A]

InteriorToExterior[A] // AggregateSigns

• Converting exterior and regressive products to interior products may utilize the conversion in which the complement of an element is replaced by its interior product with the unit n-element $\overline{1}$.

B = $\left\{\alpha_{(m)}\wedge\overline{\beta_{(k)}}\wedge\overline{\gamma_{(j)}},\;\; \alpha_{(m)}\wedge\overline{\beta_{(k)}}\vee\overline{\gamma_{(j)}}\right\}$;

¯!3; ToInteriorForm[B]

¯!4; ToInteriorForm[B]

And in a 5-space, the signs return to those of the (also odd) 3-space.
¯!5; ToInteriorForm[B]
We begin by choosing one of the elements, $a_1$ say, arbitrarily and setting this to be the first element $e_1$ in the orthogonal set to be created.

$e_1 \;=\; a_1$

To create a second element $e_2$ orthogonal to $e_1$ within the space concerned, we choose a second element of the space, $a_2$ say, and form the interior product.

$e_2 \;=\; \left(e_1\wedge a_2\right)\ominus e_1$

From our discussion of the interior product of a simple bivector and a vector above, we can see that $e_2$ is orthogonal to $e_1$. Alternatively we can take their interior product to see that it is zero.

We can also see that $a_2$ lies in the 2-element $e_1\wedge e_2$ (that is, $a_2$ is a linear combination of $e_1$ and $e_2$) by taking their exterior product and expanding it.

$e_3 \;=\; \left(e_1\wedge e_2\wedge a_3\right)\ominus\left(e_1\wedge e_2\right)$

$e_4 \;=\; \left(e_1\wedge e_2\wedge e_3\wedge a_4\right)\ominus\left(e_1\wedge e_2\wedge e_3\right)$

$\vdots$

In general then, the $(i+1)$th element $e_{i+1}$ of the orthogonal set $e_1, e_2, e_3, \ldots$ is obtained from the previous $i$ elements and the $(i+1)$th element $a_{i+1}$ of the original set.

Following a similar procedure to the one used for the second element, we can easily show that $e_{i+1}$ is orthogonal to $e_1\wedge e_2\wedge\cdots\wedge e_i$ and hence to each of the $e_1, e_2, \ldots, e_i$, and that $a_i$ lies in $e_1\wedge e_2\wedge\cdots\wedge e_i$ and hence is a linear combination of $e_1, e_2, \ldots, e_i$.

This procedure is, as might be expected, equivalent to the Gram-Schmidt orthogonalization process.
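The second-element construction can be mirrored numerically. In a Euclidean space, $e_2 = (e_1\wedge a_2)\ominus e_1$ expands (by the Interior Common Factor Theorem) to $(e_1\ominus e_1)\,a_2 - (e_1\ominus a_2)\,e_1$, which is the classical unnormalized Gram-Schmidt step. A minimal sketch, assuming Euclidean scalar products and arbitrary sample vectors:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def next_orthogonal(e1, a2):
    """e2 = (e1 ^ a2) interior e1 = (e1 . e1) a2 - (e1 . a2) e1,
    the interior-product form of the unnormalized Gram-Schmidt step."""
    return tuple(dot(e1, e1) * y - dot(e1, a2) * x for x, y in zip(e1, a2))

a1 = (1.0, 1.0, 0.0)
a2 = (2.0, 0.0, 1.0)
e1 = a1
e2 = next_orthogonal(e1, a2)
```

The resulting $e_2$ differs from the textbook Gram-Schmidt residual $a_2 - \mathrm{proj}_{e_1}(a_2)$ only by the scalar factor $e_1\ominus e_1$, which does not affect orthogonality.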
$\left\{\left(\alpha_{(m)}\vee\gamma_{(j)}\right)\wedge\left(\beta_{(k)}\vee\gamma_{(j)}\right) \;=\; \left(\alpha_{(m)}\vee\beta_{(k)}\vee\gamma_{(j)}\right)\wedge\gamma_{(j)}, \qquad m+k+j = 2n\right\}$

Interchanging the order of the factors and replacing $\alpha_{(m)}$ and $\beta_{(k)}$ by their complements gives

$\left\{\left(\gamma_{(j)}\vee\overline{\alpha_{(m)}}\right)\wedge\left(\gamma_{(j)}\vee\overline{\beta_{(k)}}\right) \;=\; \left(\gamma_{(j)}\vee\overline{\alpha_{(m)}}\vee\overline{\beta_{(k)}}\right)\wedge\gamma_{(j)}, \qquad j = m+k\right\}$
$\alpha_{(m)}\vee\beta_{(s)} \;=\; \sum_{i=1}^{\nu}\left(\alpha^i_{(m-p)}\wedge\beta_{(s)}\right)\vee\alpha^i_{(p)}$

where $\alpha_{(m)} = \alpha^1_{(m-p)}\wedge\alpha^1_{(p)} = \alpha^2_{(m-p)}\wedge\alpha^2_{(p)} = \cdots = \alpha^\nu_{(m-p)}\wedge\alpha^\nu_{(p)}$, $\alpha_{(m)}$ is simple, $p = m+s-n$, and $\nu = \binom{m}{p}$.
Suppose now that $\beta_{(s)} = \overline{\beta_{(k)}}$, $k = n-s = m-p$. Substituting for $\beta_{(s)}$ and using the formula $\alpha_{(m)}\wedge\overline{\beta_{(m)}} = \left(\alpha_{(m)}\ominus\beta_{(m)}\right)\overline{1}$ (which is derived in the section below: The Inner Product) allows us to write the term in brackets as
$\alpha^i_{(m-p)}\wedge\beta_{(s)} \;=\; \alpha^i_{(k)}\wedge\overline{\beta_{(k)}} \;=\; \left(\alpha^i_{(k)}\ominus\beta_{(k)}\right)\overline{1}$

$\alpha_{(m)}\ominus\beta_{(k)} \;=\; \sum_{i=1}^{\nu}\left(\alpha^i_{(k)}\ominus\beta_{(k)}\right)\alpha^i_{(m-k)} \qquad (6.37)$

$\alpha_{(m)} \;=\; \alpha^1_{(k)}\wedge\alpha^1_{(m-k)} \;=\; \alpha^2_{(k)}\wedge\alpha^2_{(m-k)} \;=\; \cdots \;=\; \alpha^\nu_{(k)}\wedge\alpha^\nu_{(m-k)}$

where $k\leq m$, $\nu = \binom{m}{k}$, and $\alpha_{(m)}$ is simple.
The formula indicates that an interior product of a simple element with another, not necessarily
simple, element of equal or lower grade, may be expressed in terms of the factors of the simple
element of higher grade.
When $\alpha_{(m)}$ is not simple, it may always be expressed as the sum of simple components: $\alpha_{(m)} = \alpha^1_{(m)} + \alpha^2_{(m)} + \alpha^3_{(m)} + \cdots$ (the $i$ in the $\alpha^i_{(m)}$ are superscripts, not powers). From the linearity of the interior product, the Interior Common Factor Theorem may then be applied to each of the simple terms.

$\alpha_{(m)}\ominus\beta_{(k)} \;=\; \left(\alpha^1_{(m)}+\alpha^2_{(m)}+\alpha^3_{(m)}+\cdots\right)\ominus\beta_{(k)} \;=\; \alpha^1_{(m)}\ominus\beta_{(k)} + \alpha^2_{(m)}\ominus\beta_{(k)} + \alpha^3_{(m)}\ominus\beta_{(k)} + \cdots$
Thus, the Interior Common Factor Theorem may be applied to the interior product of any two elements, simple or non-simple. Indeed, the application to components in the preceding expansion may be applied to the components of $\alpha_{(m)}$ even if it is simple.

To make the Interior Common Factor Theorem more explicit, suppose $\alpha_{(m)} = \alpha_1\wedge\alpha_2\wedge\cdots\wedge\alpha_m$; then:
β is a scalar

β is a 1-element

$\left(\alpha_1\wedge\alpha_2\wedge\cdots\wedge\alpha_m\right)\ominus\beta \;=\; \sum_{i=1}^{m}\left(\alpha_i\ominus\beta\right)(-1)^{i-1}\,\alpha_1\wedge\cdots\wedge\check{\alpha}_i\wedge\cdots\wedge\alpha_m \qquad (6.40)$

InteriorCommonFactorTheorem[$\alpha_{(2)}\ominus\beta$]

$\alpha_{(2)}\ominus\beta \;=\; \left(\alpha_1\wedge\alpha_2\right)\ominus\beta \;=\; -\left(\alpha_2\ominus\beta\right)\alpha_1 + \left(\alpha_1\ominus\beta\right)\alpha_2$

InteriorCommonFactorTheorem[$\alpha_{(3)}\ominus\beta$]

$\alpha_{(3)}\ominus\beta \;=\; \left(\alpha_1\wedge\alpha_2\wedge\alpha_3\right)\ominus\beta \;=\; \left(\alpha_3\ominus\beta\right)\alpha_1\wedge\alpha_2 - \left(\alpha_2\ominus\beta\right)\alpha_1\wedge\alpha_3 + \left(\alpha_1\ominus\beta\right)\alpha_2\wedge\alpha_3$

InteriorCommonFactorTheorem[$\alpha_{(4)}\ominus\beta$]

$\alpha_{(4)}\ominus\beta \;=\; \left(\alpha_1\wedge\alpha_2\wedge\alpha_3\wedge\alpha_4\right)\ominus\beta \;=\; -\left(\alpha_4\ominus\beta\right)\alpha_1\wedge\alpha_2\wedge\alpha_3 + \left(\alpha_3\ominus\beta\right)\alpha_1\wedge\alpha_2\wedge\alpha_4 - \left(\alpha_2\ominus\beta\right)\alpha_1\wedge\alpha_3\wedge\alpha_4 + \left(\alpha_1\ominus\beta\right)\alpha_2\wedge\alpha_3\wedge\alpha_4$
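For $m = 3$ in a Euclidean 3-space the expansion can be cross-checked with classical vector algebra: identifying each bivector $u\wedge v$ with its dual vector $u\times v$, the right-hand side of the expansion should equal $\det(a_1,a_2,a_3)\cdot b$, since $a_1\wedge a_2\wedge a_3 = \det\cdot\,e_1\wedge e_2\wedge e_3$ and the interior product of the unit trivector with $b$ is the bivector dual to $b$. A sketch under those identifications, with arbitrary sample vectors:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def det3(a, b, c):
    return dot(a, cross(b, c))

def interior_trivector_vector(a1, a2, a3, b):
    """(a1^a2^a3) interior b expanded per the theorem; each bivector u^v is
    represented by its Euclidean dual vector u x v."""
    t1 = cross(a1, a2)   # dual of a1 ^ a2
    t2 = cross(a1, a3)   # dual of a1 ^ a3
    t3 = cross(a2, a3)   # dual of a2 ^ a3
    return tuple(dot(a3, b) * p - dot(a2, b) * q + dot(a1, b) * r
                 for p, q, r in zip(t1, t2, t3))

a1, a2, a3 = (1.0, 0.0, 2.0), (0.0, 1.0, -1.0), (3.0, 1.0, 0.0)
b = (2.0, -1.0, 0.5)
```
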
β is a 2-element

$\left(\alpha_1\wedge\alpha_2\wedge\cdots\wedge\alpha_m\right)\ominus\beta_{(2)} \;=\; \sum_{i,j}\left(\left(\alpha_i\wedge\alpha_j\right)\ominus\beta_{(2)}\right)(-1)^{i+j-1}\,\alpha_1\wedge\cdots\wedge\check{\alpha}_i\wedge\cdots\wedge\check{\alpha}_j\wedge\cdots\wedge\alpha_m \qquad (6.41)$

InteriorCommonFactorTheorem[$\alpha_{(3)}\ominus\beta_{(2)}$]

$\alpha_{(3)}\ominus\beta_{(2)} \;=\; \left(\alpha_1\wedge\alpha_2\wedge\alpha_3\right)\ominus\beta_{(2)} \;=\; \left(\beta_{(2)}\ominus\left(\alpha_2\wedge\alpha_3\right)\right)\alpha_1 - \left(\beta_{(2)}\ominus\left(\alpha_1\wedge\alpha_3\right)\right)\alpha_2 + \left(\beta_{(2)}\ominus\left(\alpha_1\wedge\alpha_2\right)\right)\alpha_3$

InteriorCommonFactorTheorem[$\alpha_{(4)}\ominus\beta_{(2)}$]

$\alpha_{(4)}\ominus\beta_{(2)} \;=\; \left(\alpha_1\wedge\alpha_2\wedge\alpha_3\wedge\alpha_4\right)\ominus\beta_{(2)} \;=\; \left(\beta_{(2)}\ominus\left(\alpha_3\wedge\alpha_4\right)\right)\alpha_1\wedge\alpha_2 - \left(\beta_{(2)}\ominus\left(\alpha_2\wedge\alpha_4\right)\right)\alpha_1\wedge\alpha_3 + \left(\beta_{(2)}\ominus\left(\alpha_2\wedge\alpha_3\right)\right)\alpha_1\wedge\alpha_4 + \left(\beta_{(2)}\ominus\left(\alpha_1\wedge\alpha_4\right)\right)\alpha_2\wedge\alpha_3 - \left(\beta_{(2)}\ominus\left(\alpha_1\wedge\alpha_3\right)\right)\alpha_2\wedge\alpha_4 + \left(\beta_{(2)}\ominus\left(\alpha_1\wedge\alpha_2\right)\right)\alpha_3\wedge\alpha_4$
β is a 3-element

InteriorCommonFactorTheorem[$\alpha_{(4)}\ominus\beta_{(3)}$]

$\alpha_{(4)}\ominus\beta_{(3)} \;=\; \left(\alpha_1\wedge\alpha_2\wedge\alpha_3\wedge\alpha_4\right)\ominus\beta_{(3)} \;=\; -\left(\beta_{(3)}\ominus\left(\alpha_2\wedge\alpha_3\wedge\alpha_4\right)\right)\alpha_1 + \left(\beta_{(3)}\ominus\left(\alpha_1\wedge\alpha_3\wedge\alpha_4\right)\right)\alpha_2 - \left(\beta_{(3)}\ominus\left(\alpha_1\wedge\alpha_2\wedge\alpha_4\right)\right)\alpha_3 + \left(\beta_{(3)}\ominus\left(\alpha_1\wedge\alpha_2\wedge\alpha_3\right)\right)\alpha_4$
$A_{(m)}\ominus B_{(k)} \;=\; \sum_{i=1}^{\nu}\left(A^i_{(k)}\ominus B_{(k)}\right)A^i_{(m-k)} \qquad (6.42)$

$A_{(m)} \;=\; A^1_{(k)}\wedge A^1_{(m-k)} \;=\; A^2_{(k)}\wedge A^2_{(m-k)} \;=\; \cdots \;=\; A^\nu_{(k)}\wedge A^\nu_{(m-k)}$

where $k\leq m$, $\nu = \binom{m}{k}$, and $A_{(m)}$ is simple.
Just as we did for the Common Factor Theorem in Chapter 3, we can express this in a
computational form. If
then
n
A û B ã ‚ K¯"k BAF û BO ¯"k BAF 6.44
m k m PiT k m PiT
i=1
m
where k § m, and n ! K O, and A is simple.
k m
(The Dot function is an operation on lists, and should not be confused with the interior product.)
In the next section we will discuss the inner product (where the grades of the factors are equal),
and show that the inner product is symmetric. Thus we can write the above formulas in the
alternative forms:
n
A û B ã ‚ KB û ¯"k BAF O ¯"k BAF 6.47
m k k m PiT m PiT
i=1
Examples

Here are the three computational forms compared to the result from the InteriorCommonFactorTheorem function which implements the initially derived theorem.

m = 3; k = 2; n = 3;

$A_{(3)}\ominus B_{(2)} \;=\; \left(B_{(2)}\ominus\left(A_2\wedge A_3\right)\right)A_1 - \left(B_{(2)}\ominus\left(A_1\wedge A_3\right)\right)A_2 + \left(B_{(2)}\ominus\left(A_1\wedge A_2\right)\right)A_3$

$A_{(3)}\ominus B_{(2)} \;=\; \left(B_{(2)}\ominus\left(A_2\wedge A_3\right)\right)A_1 - \left(B_{(2)}\ominus\left(A_1\wedge A_3\right)\right)A_2 + \left(B_{(2)}\ominus\left(A_1\wedge A_2\right)\right)A_3$

$A_{(3)}\ominus B_{(2)} \;=\; \left(B_{(2)}\ominus\left(A_2\wedge A_3\right)\right)A_1 - \left(B_{(2)}\ominus\left(A_1\wedge A_3\right)\right)A_2 + \left(B_{(2)}\ominus\left(A_1\wedge A_2\right)\right)A_3$

InteriorCommonFactorTheorem[$A_{(3)}\ominus B_{(2)}$]

$A_{(3)}\ominus B_{(2)} \;=\; \left(A_1\wedge A_2\wedge A_3\right)\ominus B_{(2)} \;=\; \left(B_{(2)}\ominus\left(A_2\wedge A_3\right)\right)A_1 - \left(B_{(2)}\ominus\left(A_1\wedge A_3\right)\right)A_2 + \left(B_{(2)}\ominus\left(A_1\wedge A_2\right)\right)A_3$
$\left\{\alpha_{(m)}\vee\beta_{(k)} \;=\; \left(\alpha_{(m)}\wedge\beta_{(k)}\right)\vee 1 \;\in\; \Lambda^0, \qquad m+k-n = 0\right\}$
$\alpha_{(m)}\ominus\beta_{(m)} \;=\; \left(\alpha_{(m)}\wedge\overline{\beta_{(m)}}\right)\vee 1 \qquad (6.50)$

The dual to this special case of the Common Factor Axiom says that an exterior product of two elements whose grades sum to the dimension of the space is equal to the (scalar) regressive product of the two elements multiplied by the unit n-element.

$\left\{\alpha_{(m)}\wedge\beta_{(k)} \;=\; \left(\alpha_{(m)}\vee\beta_{(k)}\right)1_{(n)} \;\in\; \Lambda^n, \qquad m+k-n = 0\right\}$

Now that we have defined the complement operation, the unit n-element $1_{(n)}$ becomes $\overline{1}$. We can then rewrite this dual axiom as

$\alpha_{(m)}\wedge\overline{\beta_{(m)}} \;=\; \left(\alpha_{(m)}\ominus\beta_{(m)}\right)\overline{1} \qquad (6.51)$
Taking the complement of this equation gives another form for the inner product.

$\alpha_{(m)}\ominus\beta_{(m)} \;=\; \overline{\alpha_{(m)}\wedge\overline{\beta_{(m)}}} \;=\; \overline{\alpha_{(m)}}\vee\overline{\overline{\beta_{(m)}}} \;=\; (-1)^{m(n-m)}\;\overline{\alpha_{(m)}}\vee\beta_{(m)} \;=\; \beta_{(m)}\vee\overline{\alpha_{(m)}} \;=\; \beta_{(m)}\ominus\alpha_{(m)}$

This symmetry is, of course, a property that we would expect of an inner product. However it is interesting to remember that the major result which permits this symmetry is that the complement operation has been required to satisfy the complement of a complement axiom, that is, that the complement of the complement of an element should be, apart from a possible sign, equal to the element itself.
$\alpha_{(m)}\ominus\beta_{(m)} \;=\; \left(\alpha_1\wedge\alpha_2\wedge\cdots\wedge\alpha_m\right)\ominus\left(\beta_1\wedge\beta_2\wedge\cdots\wedge\beta_m\right)$

$\alpha_{(m)}\ominus\left(\beta_{(k_1)}\wedge\beta_{(k_2)}\wedge\cdots\wedge\beta_{(k_p)}\right) \;=\; \alpha_{(m)}\ominus\beta_{(k_1)}\ominus\beta_{(k_2)}\ominus\cdots\ominus\beta_{(k_p)}$

$\left(\left(\alpha_1\wedge\alpha_2\wedge\cdots\wedge\alpha_m\right)\ominus\beta_1\right)\ominus\left(\beta_2\wedge\cdots\wedge\beta_m\right)$

or, by rearranging the $\beta$ factors to bring $\beta_j$ to the beginning of the product.

The Interior Common Factor Theorem gives the first factor of this expression as:

$\left(\alpha_1\wedge\alpha_2\wedge\cdots\wedge\alpha_m\right)\ominus\beta_j \;=\; \sum_{i=1}^{m}\left(\alpha_i\ominus\beta_j\right)(-1)^{i-1}\,\alpha_1\wedge\alpha_2\wedge\cdots\wedge\check{\alpha}_i\wedge\cdots\wedge\alpha_m$
$\left(\alpha_1\wedge\alpha_2\wedge\cdots\wedge\alpha_m\right)\ominus\left(\beta_1\wedge\beta_2\wedge\cdots\wedge\beta_m\right) \;=\; \sum_{i=1}^{m}\left(\alpha_i\ominus\beta_j\right)(-1)^{i+j}\left(\alpha_1\wedge\cdots\wedge\check{\alpha}_i\wedge\cdots\wedge\alpha_m\right)\ominus\left(\beta_1\wedge\cdots\wedge\check{\beta}_j\wedge\cdots\wedge\beta_m\right) \qquad (6.55)$

$\left(\alpha_1\wedge\alpha_2\wedge\cdots\wedge\alpha_m\right)\ominus\left(\beta_1\wedge\beta_2\wedge\cdots\wedge\beta_m\right) \;=\; \begin{vmatrix} \alpha_1\ominus\beta_1 & \alpha_1\ominus\beta_2 & \cdots & \alpha_1\ominus\beta_m \\ \alpha_2\ominus\beta_1 & \alpha_2\ominus\beta_2 & \cdots & \alpha_2\ominus\beta_m \\ \vdots & \vdots & & \vdots \\ \alpha_m\ominus\beta_1 & \alpha_m\ominus\beta_2 & \cdots & \alpha_m\ominus\beta_m \end{vmatrix} \qquad (6.57)$
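In a Euclidean 3-space this determinant form can be verified directly: the inner product of two bivectors $a_1\wedge a_2$ and $b_1\wedge b_2$ equals the 2×2 determinant of scalar products, which in turn equals $(a_1\times a_2)\cdot(b_1\times b_2)$ (Lagrange's identity). A sketch (plain Python, arbitrary sample vectors):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def inner_bivectors(a1, a2, b1, b2):
    """(a1^a2) inner (b1^b2) as the 2x2 determinant of scalar products."""
    return dot(a1, b1) * dot(a2, b2) - dot(a1, b2) * dot(a2, b1)

a1, a2 = (1.0, 2.0, 0.0), (0.0, 1.0, 1.0)
b1, b2 = (2.0, 1.0, 1.0), (1.0, 0.0, 3.0)
```
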
ComposeScalarProductMatrix

$\begin{pmatrix} \alpha_1\ominus\beta_1 & \alpha_1\ominus\beta_2 & \alpha_1\ominus\beta_3 \\ \alpha_2\ominus\beta_1 & \alpha_2\ominus\beta_2 & \alpha_2\ominus\beta_3 \\ \alpha_3\ominus\beta_1 & \alpha_3\ominus\beta_2 & \alpha_3\ominus\beta_3 \end{pmatrix}$

Det[M1]

$-\left(\alpha_1\ominus\beta_3\right)\left(\alpha_2\ominus\beta_2\right)\left(\alpha_3\ominus\beta_1\right) + \left(\alpha_1\ominus\beta_2\right)\left(\alpha_2\ominus\beta_3\right)\left(\alpha_3\ominus\beta_1\right) + \left(\alpha_1\ominus\beta_3\right)\left(\alpha_2\ominus\beta_1\right)\left(\alpha_3\ominus\beta_2\right) - \left(\alpha_1\ominus\beta_1\right)\left(\alpha_2\ominus\beta_3\right)\left(\alpha_3\ominus\beta_2\right) - \left(\alpha_1\ominus\beta_2\right)\left(\alpha_2\ominus\beta_1\right)\left(\alpha_3\ominus\beta_3\right) + \left(\alpha_1\ominus\beta_1\right)\left(\alpha_2\ominus\beta_2\right)\left(\alpha_3\ominus\beta_3\right)$
• If you just want to display M1 to look like a determinant, you can use Mathematica's Grid and BracketingBar.
BracketingBar[Grid[M1]]

$\begin{vmatrix} \alpha_1\ominus\beta_1 & \alpha_1\ominus\beta_2 & \alpha_1\ominus\beta_3 \\ \alpha_2\ominus\beta_1 & \alpha_2\ominus\beta_2 & \alpha_2\ominus\beta_3 \\ \alpha_3\ominus\beta_1 & \alpha_3\ominus\beta_2 & \alpha_3\ominus\beta_3 \end{vmatrix}$

• If we reverse the order of the two elements and then take the transpose of the resulting scalar product matrix we get a matrix M2 which differs from M1 only by the ordering of its scalar products.

$\begin{pmatrix} \beta_1\ominus\alpha_1 & \beta_2\ominus\alpha_1 & \beta_3\ominus\alpha_1 \\ \beta_1\ominus\alpha_2 & \beta_2\ominus\alpha_2 & \beta_3\ominus\alpha_2 \\ \beta_1\ominus\alpha_3 & \beta_2\ominus\alpha_3 & \beta_3\ominus\alpha_3 \end{pmatrix}$
This shows clearly how the symmetry of the inner product depends on the symmetry of the scalar
product (which in turn depends on the symmetry of the metric tensor).
ToScalarProducts

• To obtain the expansion of the inner product more directly, you can use ToScalarProducts.

ToScalarProducts[$\alpha_{(3)}\ominus\beta_{(3)}$]

Applying ToScalarProducts expands any interior products until the expression only contains scalar products.

X2 = ToScalarProducts[X1]

$a\,x + c\left(-\left(e_1\ominus e_3\right)\left(e_2\ominus e_2\right) + \left(e_1\ominus e_2\right)\left(e_2\ominus e_3\right)\right) + b\left(e_1\ominus e_3\right)e_1\wedge e_2 - b\left(e_1\ominus e_2\right)e_1\wedge e_3 + b\left(e_1\ominus e_1\right)e_2\wedge e_3$
You can convert these scalar products to metric elements with ToMetricElements. For the default Euclidean metric you would get:

X3 = ToMetricElements[X2]

$a\,x + b\;e_2\wedge e_3$
For a general metric you get the original expression with the general metric elements substituted
for the scalar products.
DeclareMetric[g]; X4 = ToMetricElements[X2]

$a\,x - c\,g_{1,3}\,g_{2,2} + c\,g_{1,2}\,g_{2,3} + b\,g_{1,3}\;e_1\wedge e_2 - b\,g_{1,2}\;e_1\wedge e_3 + b\,g_{1,1}\;e_2\wedge e_3$
Here, $\alpha_{(m)}$ and $\beta_{(m)}$ refer to any basis elements. The reader is referred further to Chapter 2 for a discussion of cobases, and to Chapter 5 for a discussion of the complement and the metric tensor.

$e_i\ominus e^j \;=\; \delta_i^{\,j} \qquad e^i\ominus e_j \;=\; \delta^i_{\,j} \qquad (6.59)$

$e_i\ominus e_j \;=\; \overline{e_i}\ominus\overline{e_j} \;=\; g\;\underline{e}^i\ominus\underline{e}^j \;=\; g_{ij} \qquad (6.60)$

$e^i\ominus e^j \;=\; \overline{e^i}\ominus\overline{e^j} \;=\; \frac{1}{g}\;\underline{e}_i\ominus\underline{e}_j \;=\; g^{ij} \qquad (6.61)$
The measure of an m-element $\alpha_{(m)}$ is denoted $\left|\alpha_{(m)}\right|$ and defined as the positive square root of the inner product of $\alpha_{(m)}$ with itself.
$\left|\alpha_{(m)}\right| \;=\; \sqrt{\alpha_{(m)}\ominus\alpha_{(m)}} \qquad (6.63)$

The measure of unity (or its negative) is, as would be expected, equal to unity.

The measure of an n-element $\alpha_{(n)} = a\;e_1\wedge\cdots\wedge e_n$ is the absolute value of its single scalar coefficient multiplied by the square root $\sqrt{g}$ of the determinant $g$ of the metric tensor (which, in this book, we assume to be positive).
The measure of the unit n-element (or its negative) is, as would be expected, equal to unity.
The measure of an element is equal to the measure of its complement, since, as we have shown above, $\alpha_{(m)}\ominus\alpha_{(m)} = \overline{\alpha_{(m)}}\ominus\overline{\alpha_{(m)}}$.

(The different heights of the bracketing bars in this formula is an artifact of Mathematica's typesetting, and is inconsequential to the meaning of the notation.)

The square of the measure of a simple element $\alpha_{(m)}$ is equal to the determinant of the scalar products of its factors.

$\left|\alpha_{(m)}\right|^2 \;=\; \left(\alpha_1\wedge\alpha_2\wedge\cdots\wedge\alpha_m\right)\ominus\left(\alpha_1\wedge\alpha_2\wedge\cdots\wedge\alpha_m\right) \;=\; \mathrm{Det}\left[\alpha_i\ominus\alpha_j\right] \qquad (6.68)$

In a Euclidean metric, the measure of an element is given by the familiar 'root sum of squares' formula yielding a measure that is always real and positive.

$\alpha_{(m)} \;=\; \sum a_i\,e_i \;\;\Longrightarrow\;\; \left|\alpha_{(m)}\right| \;=\; \sqrt{\sum a_i^{\,2}} \qquad (6.69)$
Unit elements
The unit m-element associated with the element $\alpha_{(m)}$ is denoted $\hat{\alpha}_{(m)}$ and may now be defined as:

$\hat{\alpha}_{(m)} \;=\; \frac{\alpha_{(m)}}{\left|\alpha_{(m)}\right|} \qquad (6.70)$
The unit scalar $\hat{a}$ corresponding to the scalar $a$ is equal to $+1$ if $a$ is positive and is equal to $-1$ if $a$ is negative.

$\hat{a} \;=\; \frac{a}{|a|} \;=\; \begin{cases} +1 & a > 0 \\ -1 & a < 0 \end{cases} \qquad (6.71)$

$\hat{1} \;=\; 1 \qquad \widehat{-1} \;=\; -1 \qquad (6.72)$
The unit n-element $\hat{\alpha}_{(n)}$ corresponding to the n-element $\alpha_{(n)} = a\;e_1\wedge\cdots\wedge e_n$ is equal to the unit n-element $\overline{1}$ if $a$ is positive, and equal to $-\overline{1}$ if $a$ is negative.

$\hat{\alpha}_{(n)} \;=\; \frac{\alpha_{(n)}}{\left|\alpha_{(n)}\right|} \;=\; \frac{a\;e_1\wedge\cdots\wedge e_n}{|a|\sqrt{g}} \;=\; \begin{cases} \overline{1} & a > 0 \\ -\overline{1} & a < 0 \end{cases} \qquad (6.73)$

$\hat{\overline{1}} \;=\; \overline{1} \qquad \widehat{-\overline{1}} \;=\; -\overline{1} \qquad (6.74)$

The measure of a unit m-element $\hat{\alpha}_{(m)}$ is therefore always unity.

$\left|\hat{\alpha}_{(m)}\right| \;=\; 1 \qquad (6.75)$
Calculating measures
To calculate the measure of any simple element you can use the GrassmannAlgebra function
Measure.
Measure[x∧y]

$\sqrt{-\left(x\ominus y\right)^2 + \left(x\ominus x\right)\left(y\ominus y\right)}$

Measure will compute the measure of a simple element using the currently declared metric. For example, with the default Euclidean metric, the measure of this 3-element is 5.

Measure[(2 e1 − 3 e2)∧(e1 + e2 + e3)∧(5 e2 + e3)]

5
DeclareMetric[g]

{{g1,1, g1,2, g1,3}, {g1,2, g2,2, g2,3}, {g1,3, g2,3, g3,3}}

Measure[(2 e1 − 3 e2)∧(e1 + e2 + e3)∧(5 e2 + e3)]

$\sqrt{50\,g_{1,2}\,g_{1,3}\,g_{2,3} + 25\,g_{1,1}\,g_{2,2}\,g_{3,3} - 25\left(g_{1,3}^{\,2}\,g_{2,2} + g_{1,1}\,g_{2,3}^{\,2} + g_{1,2}^{\,2}\,g_{3,3}\right)}$
Measure will automatically apply any simplifications due to the symmetry of the scalar product.
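This example can be reproduced with elementary matrix arithmetic outside the GrassmannAlgebra package: for an n-element of an n-space whose factors are the rows of a coefficient matrix A, the squared measure under a metric G is det(A G Aᵀ) = (det A)² det G, which for the Euclidean metric (G the identity) gives the value 5 computed above. A minimal sketch:

```python
def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def matmul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(a):
    return [list(row) for row in zip(*a)]

def measure_3element(A, G):
    """Measure of a1 ^ a2 ^ a3 (rows of A) under metric G: sqrt(det(A G A^T))."""
    return det3(matmul(matmul(A, G), transpose(A))) ** 0.5

# the 3-element (2 e1 - 3 e2) ^ (e1 + e2 + e3) ^ (5 e2 + e3)
A = [[2, -3, 0], [1, 1, 1], [0, 5, 1]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

Substituting a symbolic G here reproduces, up to expansion, the general-metric radicand displayed above.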
Scalars: m = 0

$|a|^2 \;=\; a^2$

The measure of a scalar is its absolute value.

Measure[a]

$\sqrt{a^2}$
Vectors: m = 1

$|x|^2 \;=\; x\ominus x$

If $x$ is interpreted as a vector, then $|x|$ is called the length of $x$.

Measure[x]

$\sqrt{x\ominus x}$
Bivectors: m = 2

$|x\wedge y|^2 \;=\; \left(x\wedge y\right)\ominus\left(x\wedge y\right) \;=\; \begin{vmatrix} x\ominus x & x\ominus y \\ y\ominus x & y\ominus y \end{vmatrix}$

Measure[x∧y]

$\sqrt{-\left(x\ominus y\right)^2 + \left(x\ominus x\right)\left(y\ominus y\right)}$

Graphic showing a parallelogram formed by two vectors x and y, and its area.

Because of the nilpotent properties of the exterior product, the measure of the bivector is independent of the way in which it is expressed in terms of its vector factors.

Measure[(x + y)∧y]

$\sqrt{-\left(x\ominus y\right)^2 + \left(x\ominus x\right)\left(y\ominus y\right)}$
Trivectors: m = 3

Measure[x∧y∧z]

$\sqrt{-\left(x\ominus z\right)^2\left(y\ominus y\right) + 2\left(x\ominus y\right)\left(x\ominus z\right)\left(y\ominus z\right) - \left(x\ominus x\right)\left(y\ominus z\right)^2 - \left(x\ominus y\right)^2\left(z\ominus z\right) + \left(x\ominus x\right)\left(y\ominus y\right)\left(z\ominus z\right)}$
$\mathcal{O}\ominus\mathcal{O} \;=\; 1 \qquad \alpha_{(m)}\ominus\mathcal{O} \;=\; 0 \qquad e_i\ominus e_j \;=\; g_{ij} \qquad (6.76)$

$P\ominus P \;=\; \left(\mathcal{O}+x\right)\ominus\left(\mathcal{O}+x\right) \;=\; \mathcal{O}\ominus\mathcal{O} + 2\;\mathcal{O}\ominus x + x\ominus x \;=\; 1 + x\ominus x$
Although this is at first sight a strange result, it is due entirely to the hybrid metric referred to
above. Unlike a vector difference of two points whose measure starts off from zero and increases
continuously as the two points move further apart, the measure of a single point starts off from
unity as it coincides with the origin and increases continuously as it moves further away from it.
The measure of the difference of two points is, as expected, equal to the measure of the associated
vector.
$P_1 \;=\; \mathcal{O} + x_1 \qquad P_2 \;=\; \mathcal{O} + x_2$

$\left(P_1 - P_2\right)\ominus\left(P_1 - P_2\right) \;=\; P_1\ominus P_1 - 2\;P_1\ominus P_2 + P_2\ominus P_2$
$\;=\; \left(1 + x_1\ominus x_1\right) - 2\left(1 + x_1\ominus x_2\right) + \left(1 + x_2\ominus x_2\right)$
$\;=\; x_1\ominus x_1 - 2\;x_1\ominus x_2 + x_2\ominus x_2$
$\;=\; \left(x_1 - x_2\right)\ominus\left(x_1 - x_2\right)$
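These point computations can be mirrored by representing a point P = 𝒪 + x as a pair (origin coefficient, vector part) and using the hybrid metric of 6.76, under which 𝒪⊖𝒪 = 1 and 𝒪 is orthogonal to all vectors. A sketch assuming a Euclidean vector metric (the sample vectors are arbitrary):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def point(x):
    """P = origin + x, as (origin coefficient, vector part)."""
    return (1.0, tuple(x))

def interior(p, q):
    """Hybrid metric of 6.76: origin.origin = 1, origin.vector = 0,
    vectors paired with the (here Euclidean) vector metric."""
    (a, u), (b, v) = p, q
    return a * b + dot(u, v)

def sub(p, q):
    (a, u), (b, v) = p, q
    return (a - b, tuple(s - t for s, t in zip(u, v)))

x1, x2 = (1.0, 2.0, 2.0), (0.5, -1.0, 1.0)
P1, P2 = point(x1), point(x2)
```

The difference of two points has origin coefficient zero, so its self interior product reduces to that of the associated vector, exactly as derived above.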
$\left(\alpha_{(m)}\ominus\beta_{(m)}\right)\left(x\ominus z\right) \;=\; \left(\alpha_{(m)}\wedge x\right)\ominus\left(\beta_{(m)}\wedge z\right) + \left(\alpha_{(m)}\ominus z\right)\ominus\left(\beta_{(m)}\ominus x\right)$

If we put $x = z = P$ and $\beta_{(m)} = \alpha_{(m)}$ in this formula, and then rearrange the order we get:

$\left(P\wedge\alpha_{(m)}\right)\ominus\left(P\wedge\alpha_{(m)}\right) \;=\; \left(\alpha_{(m)}\ominus\alpha_{(m)}\right)\left(P\ominus P\right) - \left(\alpha_{(m)}\ominus P\right)\ominus\left(\alpha_{(m)}\ominus P\right)$

$\left(P\wedge\alpha_{(m)}\right)\ominus\left(P\wedge\alpha_{(m)}\right) \;=\; \left(\alpha_{(m)}\ominus\alpha_{(m)}\right)\left(1 + x\ominus x\right) - \left(\alpha_{(m)}\ominus x\right)\ominus\left(\alpha_{(m)}\ominus x\right)$

$\left(\alpha_{(m)}\wedge x\right)\ominus\left(\alpha_{(m)}\wedge x\right) \;=\; \left(\alpha_{(m)}\ominus\alpha_{(m)}\right)\left(x\ominus x\right) - \left(\alpha_{(m)}\ominus x\right)\ominus\left(\alpha_{(m)}\ominus x\right)$

$\left(P\wedge\alpha_{(m)}\right)\ominus\left(P\wedge\alpha_{(m)}\right) \;=\; \alpha_{(m)}\ominus\alpha_{(m)} + \left(\alpha_{(m)}\wedge x\right)\ominus\left(\alpha_{(m)}\wedge x\right) \qquad (6.78)$
$\left|P\wedge\alpha_{(m)}\right|^2 \;=\; \left|\alpha_{(m)}\right|^2 + \left|\alpha_{(m)}\wedge x\right|^2 \qquad (6.79)$

$\left|P\wedge\hat{\alpha}_{(m)}\right|^2 \;=\; 1 + \left|\hat{\alpha}_{(m)}\wedge x\right|^2 \qquad (6.80)$
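For a single vector a (m = 1), formula 6.79 reduces to pure arithmetic: |P∧a|² computed as the 2×2 Gram determinant of P and a under the hybrid point metric should equal |a|² + |a∧x|². A sketch assuming Euclidean vector scalar products and arbitrary sample data:

```python
def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def gram2(pp, pa, aa):
    """Squared measure of a 2-element from its 2x2 Gram determinant."""
    return pp * aa - pa * pa

x = (1.0, -2.0, 0.5)   # position vector of the point P = origin + x
a = (2.0, 1.0, 1.0)    # the vector bound through P

# hybrid-metric scalar products (origin.origin = 1, origin.vector = 0)
PP = 1 + dot(x, x)       # P interior P
Pa = dot(x, a)           # P interior a
aa = dot(a, a)           # a interior a

lhs = gram2(PP, Pa, aa)                       # |P ^ a|^2
ax2 = dot(a, a) * dot(x, x) - dot(a, x)**2    # |a ^ x|^2
rhs = aa + ax2                                # |a|^2 + |a ^ x|^2
```

The agreement is exact: expanding (1 + x⊖x)(a⊖a) − (x⊖a)² regroups term by term into a⊖a plus the bivector Gram determinant.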
Note that the measure of a multivector bound through the origin ($x = 0$) is just the measure of the multivector alone.

$\left|\mathcal{O}\wedge\hat{\alpha}_{(m)}\right| \;=\; 1 \qquad (6.82)$
If $x_{\perp}$ is the component of $x$ orthogonal to $\alpha_{(m)}$, then we can once again apply the triangle formula above to give

$\left(\alpha_{(m)}\wedge x_{\perp}\right)\ominus\left(\alpha_{(m)}\wedge x_{\perp}\right) \;=\; \left(\alpha_{(m)}\ominus\alpha_{(m)}\right)\left(x_{\perp}\ominus x_{\perp}\right) - \left(\alpha_{(m)}\ominus x_{\perp}\right)\ominus\left(\alpha_{(m)}\ominus x_{\perp}\right)$
$\left(\alpha_{(m)}\wedge x\right)\ominus\left(\alpha_{(m)}\wedge x\right) \;=\; \left(\alpha_{(m)}\ominus\alpha_{(m)}\right)\left(x_{\perp}\ominus x_{\perp}\right)$

We can thus write formulas for the measure of a bound element in the alternative forms:

$\left|P\wedge\alpha_{(m)}\right|^2 \;=\; \left|\alpha_{(m)}\right|^2\left(1 + \left|x_{\perp}\right|^2\right) \qquad (6.83)$

$\left|P\wedge\hat{\alpha}_{(m)}\right|^2 \;=\; 1 + \left|x_{\perp}\right|^2 \qquad (6.84)$
Thus the measure of a bound unit multivector indicates its minimum distance from the origin.

To see this, begin with a simple bound m-vector, take its interior product with the origin, and then apply the Common Factor Theorem.

$P\wedge\alpha_{(m)} \;=\; P\wedge\alpha_1\wedge\alpha_2\wedge\cdots$

All terms except the first are clearly zero, and since $\mathcal{O}\ominus P$ reduces to 1, the first term reduces to $\alpha_{(m)}$.

$\left(P\wedge\alpha_{(m)}\right)\ominus\mathcal{O} \;=\; \alpha_{(m)} \qquad (6.85)$
Ï Example

Suppose we are in a bound 3-space, and we consider the bound bivector expressed as the product of three points: P ∧ Q ∧ R. Such an element could, for example, define a plane. We want to find its bivector. (The points have evidently been declared in basis form, with P = 𝒪 + e1, Q = 𝒪 + e2, R = 𝒪 + e3, which accounts for the basis elements in the outputs below.)

A = P ∧ Q ∧ R;

B1 = 𝒢[A ∘ 𝒪]

(𝒪 ∧ e1 ∧ e2) ∘ 𝒪 − (𝒪 ∧ e1 ∧ e3) ∘ 𝒪 + (𝒪 ∧ e2 ∧ e3) ∘ 𝒪 + (e1 ∧ e2 ∧ e3) ∘ 𝒪

We can convert these to scalar products using ToScalarProducts (which is based on the Interior Common Factor Theorem).

B2 = ToScalarProducts[B1]

𝒪 ∧ ((𝒪∘e2 − 𝒪∘e3) e1 + (−(𝒪∘e1) + 𝒪∘e3) e2 + (𝒪∘e1 − 𝒪∘e2) e3) + (𝒪∘𝒪 + 𝒪∘e3) e1 ∧ e2 + (−(𝒪∘𝒪) − 𝒪∘e2) e1 ∧ e3 + (𝒪∘𝒪 + 𝒪∘e1) e2 ∧ e3

Now, when we apply ToMetricElements, all the scalar products of basis vectors with the origin are put to zero, and we get the required bivector.

B3 = ToMetricElements[B2]

e1 ∧ e2 − e1 ∧ e3 + e2 ∧ e3

B4 = ExteriorFactorize[B3]

(e1 − e3) ∧ (e2 − e3)

In terms of the original points we can rewrite this bivector just as we would if we were deriving it geometrically from the plane of P ∧ Q ∧ R.

B5 = (P − R) ∧ (Q − R)

Expanding this bivector gives an important form related to the original simple exterior product P ∧ Q ∧ R.

B6 = P ∧ Q − P ∧ R + Q ∧ R
This form was discussed in Chapter 4: m-Planes: The m-vector of a bound m-vector.
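The bivector identity underlying B5 and B6 can be checked numerically. A small sketch (assuming only that a simple bivector a ∧ b can be represented by the antisymmetric matrix outer(a, b) − outer(b, a)):

```python
import numpy as np

# Check (P - R) ^ (Q - R) = P^Q - P^R + Q^R for position vectors,
# representing a simple bivector a ^ b as an antisymmetric matrix.
def wedge(a, b):
    return np.outer(a, b) - np.outer(b, a)

rng = np.random.default_rng(2)
p, q, r = rng.normal(size=(3, 3))

lhs = wedge(p - r, q - r)
rhs = wedge(p, q) - wedge(p, r) + wedge(q, r)
assert np.allclose(lhs, rhs)
```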
𝕍3 ; DeclareMetric[g]

{{g1,1, g1,2, g1,3}, {g1,2, g2,2, g2,3}, {g1,3, g2,3, g3,3}}

MetricL[2]

{{−g1,2² + g1,1 g2,2, −g1,2 g1,3 + g1,1 g2,3, −g1,3 g2,2 + g1,2 g2,3},
 {−g1,2 g1,3 + g1,1 g2,3, −g1,3² + g1,1 g3,3, −g1,3 g2,3 + g1,2 g3,3},
 {−g1,3 g2,2 + g1,2 g2,3, −g1,3 g2,3 + g1,2 g3,3, −g2,3² + g2,2 g3,3}}

MetricPalette

(The Metric Palette displays the induced metric on each exterior linear space: L(0) gives the scalar 1, L(1) the metric gi,j itself, L(2) the matrix of 2×2 minors shown above, and L(3) the single determinant −g1,3² g2,2 + 2 g1,2 g1,3 g2,3 − g1,1 g2,3² − g1,2² g3,3 + g1,1 g2,2 g3,3.)

𝕍3 ; B = BasisL[2]

{e1 ∧ e2, e1 ∧ e3, e2 ∧ e3}
Second, we use the Mathematica function Outer to construct the matrix of all combinations of
interior products of these basis elements.
M2 = ComposeScalarProductMatrix[M1]; MatrixForm[M2]

A 3×3 matrix whose entries are the 2×2 matrices of scalar products of the factors of the basis 2-elements:

( [e1∘e1, e1∘e2; e2∘e1, e2∘e2]  [e1∘e1, e1∘e3; e2∘e1, e2∘e3]  [e1∘e2, e1∘e3; e2∘e2, e2∘e3] )
( [e1∘e1, e1∘e2; e3∘e1, e3∘e2]  [e1∘e1, e1∘e3; e3∘e1, e3∘e3]  [e1∘e2, e1∘e3; e3∘e2, e3∘e3] )
( [e2∘e1, e2∘e2; e3∘e1, e3∘e2]  [e2∘e1, e2∘e3; e3∘e1, e3∘e3]  [e2∘e2, e2∘e3; e3∘e2, e3∘e3] )

Taking the determinant of each of these 2×2 matrices (as M4, say) gives:

{{−g1,2² + g1,1 g2,2, −g1,2 g1,3 + g1,1 g2,3, −g1,3 g2,2 + g1,2 g2,3},
 {−g1,2 g1,3 + g1,1 g2,3, −g1,3² + g1,1 g3,3, −g1,3 g2,3 + g1,2 g3,3},
 {−g1,3 g2,2 + g1,2 g2,3, −g1,3 g2,3 + g1,2 g3,3, −g2,3² + g2,2 g3,3}}

We can have Mathematica check that this is the same matrix as obtained from the MetricL function.

M4 == MetricL[2]

True
𝕍4 ; DeclareMetric[g]

{{g1,1, g1,2, g1,3, g1,4}, {g1,2, g2,2, g2,3, g2,4}, {g1,3, g2,3, g3,3, g3,4}, {g1,4, g2,4, g3,4, g4,4}}

MetricMatrix[3] // MatrixForm

This displays a 4×4 matrix whose (i, j)-th entry is the 3×3 matrix of metric elements with row indices drawn from the i-th basis 3-element and column indices from the j-th. For example the (1,1) entry (from e1∧e2∧e3 with itself) is

[g1,1, g1,2, g1,3; g1,2, g2,2, g2,3; g1,3, g2,3, g3,3]

and the (1,4) entry (from e1∧e2∧e3 with e2∧e3∧e4) is

[g1,2, g1,3, g1,4; g2,2, g2,3, g2,4; g2,3, g3,3, g3,4]

Of course, we are only displaying the metric tensor in this form. Its elements are correctly the determinants of the matrices displayed, not the matrices themselves.
$$\square\vee\underset{n-m}{x}\;\;\Rightarrow\;\;\square\vee\overline{\underset{m}{x}}\;\;\Rightarrow\;\;\square\circ\underset{m}{x}$$

The Interior Common Factor Theorem plays the same role in this development that the Common Factor Theorem did in Chapter 3.
The basic interior Product Formula is a transformation of the first regressive Product Formula in Chapter 3. Beginning with

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\vee\underset{n-1}{x} = \left(\underset{m}{\alpha}\vee\underset{n-1}{x}\right)\wedge\underset{k}{\beta} + (-1)^m\,\underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\vee\underset{n-1}{x}\right)$$

and applying the substitution above, we obtain:

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ x = \left(\underset{m}{\alpha}\circ x\right)\wedge\underset{k}{\beta} + (-1)^m\,\underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ x\right)\tag{6.86}$$
The basic interior Product Formula may also be shown using the Interior Common Factor Theorem. To do this, suppose initially that $\underset{m}{\alpha}$ and $\underset{k}{\beta}$ are simple. Then they can be expressed as $\underset{m}{\alpha} = \alpha_1\wedge\cdots\wedge\alpha_m$ and $\underset{k}{\beta} = \beta_1\wedge\cdots\wedge\beta_k$. Thus:

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ x = \left((\alpha_1\wedge\cdots\wedge\alpha_m)\wedge(\beta_1\wedge\cdots\wedge\beta_k)\right)\circ x$$
$$= \sum_{i=1}^{m}(-1)^{i-1}(x\circ\alpha_i)\,(\alpha_1\wedge\cdots\wedge\widehat{\alpha_i}\wedge\cdots\wedge\alpha_m)\wedge(\beta_1\wedge\cdots\wedge\beta_k)$$
$$+ \sum_{j=1}^{k}(-1)^{m+j-1}(x\circ\beta_j)\,(\alpha_1\wedge\cdots\wedge\alpha_m)\wedge(\beta_1\wedge\cdots\wedge\widehat{\beta_j}\wedge\cdots\wedge\beta_k)$$

(the circumflex marking an omitted factor). The first sum collects to $\left(\underset{m}{\alpha}\circ x\right)\wedge\underset{k}{\beta}$ and the second to $(-1)^m\,\underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ x\right)$, as required.
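Formula 6.86 can be checked numerically in the simplest case m = k = 1. This is an illustrative sketch (the bivector a ∧ b is modelled as an antisymmetric matrix, an assumed representation outside the GrassmannAlgebra package):

```python
import numpy as np

# Check the basic interior Product Formula 6.86 for m = k = 1:
# (a ^ b) o x = (a o x) b - a (b o x).
# The bivector a ^ b is the antisymmetric matrix outer(a,b) - outer(b,a);
# contracting it on the left with x implements the interior product.
rng = np.random.default_rng(3)
a, b, x = rng.normal(size=(3, 4))   # works in any dimension, here 4

M = np.outer(a, b) - np.outer(b, a)
lhs = x @ M                          # (a ^ b) o x
rhs = (a @ x) * b - (b @ x) * a
assert np.allclose(lhs, rhs)
```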
If x is of a grade higher than 1, then similar relations hold, but with extra terms on the right-hand side. For example, if we replace x by $x_1\wedge x_2$ and note that:

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ(x_1\wedge x_2) = \underset{m}{\alpha}\wedge\underset{k}{\beta}\circ x_1\circ x_2$$

then the right-hand side may be expanded by applying the basic Product Formula twice. The first application is
$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ(x_1\wedge x_2) = \left(\underset{m}{\alpha}\circ x_1\right)\wedge\underset{k}{\beta}\circ x_2 + (-1)^m\,\underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ x_1\right)\circ x_2$$

The second application then expands each of these terms:

$$\left(\underset{m}{\alpha}\circ x_1\right)\wedge\underset{k}{\beta}\circ x_2 = \left(\underset{m}{\alpha}\circ x_1\circ x_2\right)\wedge\underset{k}{\beta} + (-1)^{m-1}\left(\underset{m}{\alpha}\circ x_1\right)\wedge\left(\underset{k}{\beta}\circ x_2\right)$$

$$(-1)^m\,\underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ x_1\right)\circ x_2 = (-1)^m\left(\underset{m}{\alpha}\circ x_2\right)\wedge\left(\underset{k}{\beta}\circ x_1\right) + \underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ x_1\circ x_2\right)$$
As with the product formula for regressive products, successive application doubles the number of
terms. We started with two terms on the right hand side of the basic interior Product Formula. By
applying it again we obtain a Product Formula with four terms on the right hand side. The next
application would give us eight terms.
Ï Deriving interior Product Formulas from the dual regressive Product Formulas

A second derivation may be obtained directly by first computing the Dual of a regressive Product Formula, and then directly using the substitution

$$\square\vee\underset{n-m}{x}\;\;\Rightarrow\;\;\square\vee\overline{\underset{m}{x}}\;\;\Rightarrow\;\;\square\circ\underset{m}{x}$$

F1 = DeriveProductFormula[$\underset{m}{\alpha}\vee\underset{k}{\beta}\wedge\underset{2}{x}$]

$$\underset{m}{\alpha}\vee\underset{k}{\beta}\wedge x_1\wedge x_2 = \underset{m}{\alpha}\vee\left(\underset{k}{\beta}\wedge x_1\wedge x_2\right) + (-1)^{n-m}\left(-\left(\underset{m}{\alpha}\wedge x_1\right)\vee\left(\underset{k}{\beta}\wedge x_2\right) + \left(\underset{m}{\alpha}\wedge x_2\right)\vee\left(\underset{k}{\beta}\wedge x_1\right) + \left(\underset{m}{\alpha}\wedge x_1\wedge x_2\right)\vee\underset{k}{\beta}\right)$$
F2 = Dual[F1]

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\vee\underset{n-1}{x_1}\vee\underset{n-1}{x_2} = \underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\vee\underset{n-1}{x_1}\vee\underset{n-1}{x_2}\right) + (-1)^m\left(-\left(\underset{m}{\alpha}\vee\underset{n-1}{x_1}\right)\wedge\left(\underset{k}{\beta}\vee\underset{n-1}{x_2}\right) + \left(\underset{m}{\alpha}\vee\underset{n-1}{x_2}\right)\wedge\left(\underset{k}{\beta}\vee\underset{n-1}{x_1}\right) + \left(\underset{m}{\alpha}\vee\underset{n-1}{x_1}\vee\underset{n-1}{x_2}\right)\wedge\underset{k}{\beta}\right)$$

F3 = F2 //. A_ ∨ $\underset{n-1}{x\_}$ ⧴ A ∘ x

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ x = \left(\underset{m}{\alpha}\circ x\right)\wedge\underset{k}{\beta} + (-1)^m\,\underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ x\right)$$
which we encode as R1.

$$(Z\circ x)\circ y = Z\circ(x\wedge y)$$

which we encode as R2.

𝕍[x_];
F0 = $\underset{m}{\alpha}\wedge\underset{k}{\beta}$;

Take the interior product of F0 with x1, and apply both rules as many times as they are applicable to get F1.

$$\left(\underset{m}{\alpha}\circ x_1\right)\wedge\underset{k}{\beta} + (-1)^m\,\underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ x_1\right)$$

Repeat by taking the interior product of F1 with x2, expanding, and applying the rules to get F2.
$$(-1)^{-1+m}\left(\underset{m}{\alpha}\circ x_1\right)\wedge\left(\underset{k}{\beta}\circ x_2\right) + (-1)^m\left(\underset{m}{\alpha}\circ x_2\right)\wedge\left(\underset{k}{\beta}\circ x_1\right) + \left(\underset{m}{\alpha}\circ(x_1\wedge x_2)\right)\wedge\underset{k}{\beta} + (-1)^{2m}\,\underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ(x_1\wedge x_2)\right)$$

which simplifies to

$$(-1)^{1+m}\left(\underset{m}{\alpha}\circ x_1\right)\wedge\left(\underset{k}{\beta}\circ x_2\right) + (-1)^m\left(\underset{m}{\alpha}\circ x_2\right)\wedge\left(\underset{k}{\beta}\circ x_1\right) + \left(\underset{m}{\alpha}\circ(x_1\wedge x_2)\right)\wedge\underset{k}{\beta} + \underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ(x_1\wedge x_2)\right)$$

Repeating the process with a third 1-element x3 gives F3:

$$\left(\underset{m}{\alpha}\circ x_1\right)\wedge\left(\underset{k}{\beta}\circ(x_2\wedge x_3)\right) - \left(\underset{m}{\alpha}\circ x_2\right)\wedge\left(\underset{k}{\beta}\circ(x_1\wedge x_3)\right) + \left(\underset{m}{\alpha}\circ x_3\right)\wedge\left(\underset{k}{\beta}\circ(x_1\wedge x_2)\right) + (-1)^m\left(\underset{m}{\alpha}\circ(x_1\wedge x_2)\right)\wedge\left(\underset{k}{\beta}\circ x_3\right) + (-1)^{1+m}\left(\underset{m}{\alpha}\circ(x_1\wedge x_3)\right)\wedge\left(\underset{k}{\beta}\circ x_2\right) + (-1)^m\left(\underset{m}{\alpha}\circ(x_2\wedge x_3)\right)\wedge\left(\underset{k}{\beta}\circ x_1\right) + \left(\underset{m}{\alpha}\circ(x_1\wedge x_2\wedge x_3)\right)\wedge\underset{k}{\beta} + (-1)^m\,\underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ(x_1\wedge x_2\wedge x_3)\right)$$

Collecting these results:

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ(x_1\wedge x_2\wedge x_3) = \left(\underset{m}{\alpha}\circ x_1\right)\wedge\left(\underset{k}{\beta}\circ(x_2\wedge x_3)\right) - \left(\underset{m}{\alpha}\circ x_2\right)\wedge\left(\underset{k}{\beta}\circ(x_1\wedge x_3)\right) + \left(\underset{m}{\alpha}\circ x_3\right)\wedge\left(\underset{k}{\beta}\circ(x_1\wedge x_2)\right) + (-1)^m\left(\left(\underset{m}{\alpha}\circ(x_1\wedge x_2)\right)\wedge\left(\underset{k}{\beta}\circ x_3\right) - \left(\underset{m}{\alpha}\circ(x_1\wedge x_3)\right)\wedge\left(\underset{k}{\beta}\circ x_2\right) + \left(\underset{m}{\alpha}\circ(x_2\wedge x_3)\right)\wedge\left(\underset{k}{\beta}\circ x_1\right) + \underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ(x_1\wedge x_2\wedge x_3)\right)\right) + \left(\underset{m}{\alpha}\circ(x_1\wedge x_2\wedge x_3)\right)\wedge\underset{k}{\beta}\tag{3.65}$$
DeriveInteriorProductFormulaOnce[F_, x_] :=
  GrassmannExpand[F ∘ x] //.
   {s_. (A_ ∧ B_) ∘ z_ ⧴ s (A ∘ z) ∧ B + s (-1)^RawGrade[A] A ∧ (B ∘ z),
    (Z_ ∘ z_) ∘ y_ ⧴ Z ∘ (z ∧ y)};

H1 = DeriveInteriorProductFormula[$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ x_1$]

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ x_1 = \left(\underset{m}{\alpha}\circ x_1\right)\wedge\underset{k}{\beta} + (-1)^m\,\underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ x_1\right)$$

H2 = DeriveInteriorProductFormula[$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ\underset{2}{x}$]

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ(x_1\wedge x_2) = (-1)^{1+m}\left(\underset{m}{\alpha}\circ x_1\right)\wedge\left(\underset{k}{\beta}\circ x_2\right) + (-1)^m\left(\underset{m}{\alpha}\circ x_2\right)\wedge\left(\underset{k}{\beta}\circ x_1\right) + \left(\underset{m}{\alpha}\circ(x_1\wedge x_2)\right)\wedge\underset{k}{\beta} + \underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ(x_1\wedge x_2)\right)$$
H3 = DeriveInteriorProductFormula[$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ\underset{3}{x}$]

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ(x_1\wedge x_2\wedge x_3) = \left(\underset{m}{\alpha}\circ x_1\right)\wedge\left(\underset{k}{\beta}\circ(x_2\wedge x_3)\right) - \left(\underset{m}{\alpha}\circ x_2\right)\wedge\left(\underset{k}{\beta}\circ(x_1\wedge x_3)\right) + \left(\underset{m}{\alpha}\circ x_3\right)\wedge\left(\underset{k}{\beta}\circ(x_1\wedge x_2)\right) + (-1)^m\left(\underset{m}{\alpha}\circ(x_1\wedge x_2)\right)\wedge\left(\underset{k}{\beta}\circ x_3\right) + (-1)^{1+m}\left(\underset{m}{\alpha}\circ(x_1\wedge x_3)\right)\wedge\left(\underset{k}{\beta}\circ x_2\right) + (-1)^m\left(\underset{m}{\alpha}\circ(x_2\wedge x_3)\right)\wedge\left(\underset{k}{\beta}\circ x_1\right) + \left(\underset{m}{\alpha}\circ(x_1\wedge x_2\wedge x_3)\right)\wedge\underset{k}{\beta} + (-1)^m\,\underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ(x_1\wedge x_2\wedge x_3)\right)$$

We can verify that these agree with the formulas derived earlier:

{$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ x_1$ == F1 === H1, $\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ(x_1\wedge x_2)$ == F2 === H2, $\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ(x_1\wedge x_2\wedge x_3)$ == F3 === H3}
As in Chapter 3, we can take the interior Product Formula derived in the previous sections and rewrite it in its computational form using the notions of complete span and complete cospan. The computational form relies on the fact that in GrassmannAlgebra the exterior and interior products have an inbuilt Listable attribute.

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ\underset{j}{x} = \Sigma\!\left[\mathrm{Flatten}\!\left[(-1)^{\rho[j]\,m}\left(\underset{m}{\alpha}\circ\mathrm{span}_j[x]\right)\wedge\left(\underset{k}{\beta}\circ\mathrm{cospan}_j[x]\right)\right]\right]\tag{3.66}$$

In this formula, spanⱼ[x] and cospanⱼ[x] are the complete span and complete cospan of $\underset{j}{x}$. The sign function ρ creates a list of elements alternating between 1 and −1.
For example, for j equal to 2, and simplifying the right-hand side, we have

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ(x_1\wedge x_2) = -(-1)^m\left(\underset{m}{\alpha}\circ x_1\right)\wedge\left(\underset{k}{\beta}\circ x_2\right) + (-1)^m\left(\underset{m}{\alpha}\circ x_2\right)\wedge\left(\underset{k}{\beta}\circ x_1\right) + \left(\underset{m}{\alpha}\circ(x_1\wedge x_2)\right)\wedge\underset{k}{\beta} + \underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ(x_1\wedge x_2)\right)$$
Ï Comparing results

DeriveInteriorProductFormula[$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ\underset{j}{x}$] /.
F1 = ComputeInteriorProductFormula[$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ(x\wedge y\wedge z)$]

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ(x\wedge y\wedge z) = \left(\underset{m}{\alpha}\circ x\right)\wedge\left(\underset{k}{\beta}\circ(y\wedge z)\right) - \left(\underset{m}{\alpha}\circ y\right)\wedge\left(\underset{k}{\beta}\circ(x\wedge z)\right) + \left(\underset{m}{\alpha}\circ z\right)\wedge\left(\underset{k}{\beta}\circ(x\wedge y)\right) + (-1)^m\left(\underset{m}{\alpha}\circ(x\wedge y)\right)\wedge\left(\underset{k}{\beta}\circ z\right) - (-1)^m\left(\underset{m}{\alpha}\circ(x\wedge z)\right)\wedge\left(\underset{k}{\beta}\circ y\right) + (-1)^m\left(\underset{m}{\alpha}\circ(y\wedge z)\right)\wedge\left(\underset{k}{\beta}\circ x\right) + \left(\underset{m}{\alpha}\circ(x\wedge y\wedge z)\right)\wedge\underset{k}{\beta} + (-1)^m\,\underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ(x\wedge y\wedge z)\right)$$
We can express the 3-element as a product of different 1-element factors by adding, to any given factor, scalar multiples of the other factors. For example:

F2 = ComputeInteriorProductFormula[$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ(x\wedge y\wedge(z + a\,x + b\,y))$]

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ(x\wedge y\wedge(a\,x + b\,y + z)) = \left(\underset{m}{\alpha}\circ x\right)\wedge\left(\underset{k}{\beta}\circ(y\wedge(a\,x + b\,y + z))\right) - \left(\underset{m}{\alpha}\circ y\right)\wedge\left(\underset{k}{\beta}\circ(x\wedge(a\,x + b\,y + z))\right) + \left(\underset{m}{\alpha}\circ(a\,x + b\,y + z)\right)\wedge\left(\underset{k}{\beta}\circ(x\wedge y)\right) + (-1)^m\left(\underset{m}{\alpha}\circ(x\wedge y)\right)\wedge\left(\underset{k}{\beta}\circ(a\,x + b\,y + z)\right) - (-1)^m\left(\underset{m}{\alpha}\circ(x\wedge(a\,x + b\,y + z))\right)\wedge\left(\underset{k}{\beta}\circ y\right) + (-1)^m\left(\underset{m}{\alpha}\circ(y\wedge(a\,x + b\,y + z))\right)\wedge\left(\underset{k}{\beta}\circ x\right) + \left(\underset{m}{\alpha}\circ(x\wedge y\wedge(a\,x + b\,y + z))\right)\wedge\underset{k}{\beta} + (-1)^m\,\underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ(x\wedge y\wedge(a\,x + b\,y + z))\right)$$

𝒢[F1 == F2]

True
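The invariance exploited here — that a simple element is unchanged when a multiple of one factor is added to another — can be illustrated numerically. A sketch under the assumption that a trivector of three vectors in R³ is represented by their determinant:

```python
import numpy as np

# Check x ^ y ^ z = x ^ y ^ (z + a x + b y): adding multiples of the
# other factors to a factor leaves the 3-element (the determinant
# of its factors in R^3) unchanged.
rng = np.random.default_rng(4)
x, y, z = rng.normal(size=(3, 3))
a, b = 0.7, -1.3

d1 = np.linalg.det(np.column_stack([x, y, z]))
d2 = np.linalg.det(np.column_stack([x, y, z + a * x + b * y]))
assert np.isclose(d1, d2)
```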
Thus we can write the interior Product Formula in the alternative form

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ\underset{j}{x} = \Sigma\!\left[\mathrm{Flatten}\!\left[\,\cdots\,\right]\right]\tag{3.67}$$

Just as we encapsulated the first computable formula, we can similarly encapsulate this alternative one. We call it the ComputeInteriorProductFormula.

Ï Comparing results

For a single 1-element, the formula reduces to the basic interior Product Formula:

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ x = \left(\underset{m}{\alpha}\circ x\right)\wedge\underset{k}{\beta} + (-1)^m\,\underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ x\right)$$

Putting $\underset{k}{\beta} = x$ gives:

$$\left(\underset{m}{\alpha}\wedge x\right)\circ x = \left(\underset{m}{\alpha}\circ x\right)\wedge x + (-1)^m\,(x\circ x)\,\underset{m}{\alpha}$$
$$\underset{m}{\alpha} = (-1)^m\left(\left(\underset{m}{\alpha}\wedge\hat x\right)\circ\hat x - \left(\underset{m}{\alpha}\circ\hat x\right)\wedge\hat x\right)\tag{6.87}$$

Here, $\hat x$ is a unit element:

$$\hat x = \frac{x}{\sqrt{x\circ x}}$$

Note that $\left(\underset{m}{\alpha}\wedge\hat x\right)\circ\hat x$ and $\left(\underset{m}{\alpha}\circ\hat x\right)\wedge\hat x$ are orthogonal.

For completeness we repeat the basic interior Product Formula introduced at the beginning of the section.

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ x = \left(\underset{m}{\alpha}\circ x\right)\wedge\underset{k}{\beta} + (-1)^m\,\underset{m}{\alpha}\wedge\left(\underset{k}{\beta}\circ x\right)\tag{6.88}$$
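The decomposition 6.87 and the orthogonality of its two terms can be checked numerically for m = 1, again using the assumed antisymmetric-matrix model of a bivector:

```python
import numpy as np

# Check formula 6.87 for m = 1: a = (-1)((a ^ x̂) o x̂ - (a o x̂) x̂),
# and that the two terms are orthogonal.
rng = np.random.default_rng(5)
a = rng.normal(size=3)
x = rng.normal(size=3)
xh = x / np.linalg.norm(x)                 # unit 1-element x̂

M = np.outer(a, xh) - np.outer(xh, a)      # bivector a ^ x̂
term1 = xh @ M                              # (a ^ x̂) o x̂
term2 = (a @ xh) * xh                       # (a o x̂) ^ x̂ for m = 1
assert np.allclose(-(term1 - term2), a)     # (-1)^1 (term1 - term2) == a
assert np.isclose(term1 @ term2, 0.0)       # the two terms are orthogonal
```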
The corresponding regressive form is:

$$\underset{m}{\alpha}\vee\underset{k}{\beta}\wedge x = \left(\underset{m}{\alpha}\wedge x\right)\vee\underset{k}{\beta} + (-1)^{n-m}\,\underset{m}{\alpha}\vee\left(\underset{k}{\beta}\wedge x\right)$$

Applying the complement substitution yields interior Product Formula 2:

$$\left(\underset{m}{\alpha}\circ\underset{k}{\beta}\right)\wedge x = \left(\underset{m}{\alpha}\wedge x\right)\circ\underset{k}{\beta} + (-1)^{m-1}\,\underset{m}{\alpha}\circ\left(\underset{k}{\beta}\circ x\right)\tag{6.89}$$
This product formula is the basis for the derivation of a set of formulas of geometric interest later
in the chapter: the Triangle Formulas.
An interior product of elements can always be expressed as the interior product of their complements in reverse order.

$$\underset{m}{\alpha}\circ\underset{k}{\beta} = (-1)^{(m-k)(n-m)}\;\overline{\underset{k}{\beta}}\circ\overline{\underset{m}{\alpha}}$$

When applied to interior Product Formula 2, the terms on the right-hand side interchange forms.

$$\left(\underset{m}{\alpha}\wedge x\right)\circ\underset{k}{\beta} = (-1)^{(m-k)(n-m)+n+k+1}\;\overline{\underset{k}{\beta}}\circ\left(\overline{\underset{m}{\alpha}}\circ x\right)$$

$$\underset{m}{\alpha}\circ\left(\underset{k}{\beta}\circ x\right) = (-1)^{(m-k)(n-m)+m+1}\left(\overline{\underset{k}{\beta}}\wedge x\right)\circ\overline{\underset{m}{\alpha}}$$
Hence interior Product Formula 2 may be expressed in a variety of ways, for example:
Taking the interior product of interior Product Formula 2 with a second 1-element x2 gives:

$$\left(\left(\underset{m}{\alpha}\circ\underset{k}{\beta}\right)\wedge x_1\right)\circ x_2 = \left(\underset{m}{\alpha}\wedge x_1\right)\circ\left(\underset{k}{\beta}\wedge x_2\right) + (-1)^{m+k}\left(\underset{m}{\alpha}\circ x_2\right)\circ\left(\underset{k}{\beta}\circ x_1\right)\tag{6.91}$$

$$\underset{p}{x} = \underset{r}{x_1}\wedge\underset{p-r}{x_1} = \underset{r}{x_2}\wedge\underset{p-r}{x_2} = \cdots = \underset{r}{x_\nu}\wedge\underset{p-r}{x_\nu},\qquad \nu = \binom{p}{r}$$
Now replace $\underset{k}{\beta}$ with $\underset{n-k}{\beta}$, $\underset{n-k}{\beta}\wedge\underset{r}{x_i}$ with $(-1)^{r(n-r)}\,\underset{n-k}{\beta}\vee\underset{r}{x_i}$, and n−k with k to give:

$$\left(\underset{m}{\alpha}\circ\underset{k}{\beta}\right)\wedge\underset{p}{x} = \sum_{r=0}^{p}(-1)^{r(m-r)}\sum_{i=1}^{\nu}\left(\underset{m}{\alpha}\wedge\underset{p-r}{x_i}\right)\circ\left(\underset{k}{\beta}\circ\underset{r}{x_i}\right)\tag{6.92}$$

$$\underset{p}{x} = \underset{r}{x_1}\wedge\underset{p-r}{x_1} = \cdots = \underset{r}{x_\nu}\wedge\underset{p-r}{x_\nu},\qquad \nu = \binom{p}{r}$$

$$\underset{m}{\alpha}\wedge\underset{k}{\beta}\circ\underset{p}{x} = \sum_{r=0}^{p}(-1)^{r\,m}\sum_{i=1}^{\nu}\left(\underset{m}{\alpha}\circ\underset{p-r}{x_i}\right)\wedge\left(\underset{k}{\beta}\circ\underset{r}{x_i}\right)\tag{6.93}$$

$$\underset{p}{x} = \underset{r}{x_1}\wedge\underset{p-r}{x_1} = \cdots = \underset{r}{x_\nu}\wedge\underset{p-r}{x_\nu},\qquad \nu = \binom{p}{r}$$
$$\underset{p}{x} = \sum_{r=0}^{p}(-1)^{r(m-r)}\sum_{i=1}^{\nu}\left(\underset{m}{\hat\alpha}\wedge\underset{p-r}{x_i}\right)\circ\left(\underset{m}{\hat\alpha}\circ\underset{r}{x_i}\right)\tag{6.94}$$

$$\underset{p}{x} = \underset{r}{x_1}\wedge\underset{p-r}{x_1} = \cdots = \underset{r}{x_\nu}\wedge\underset{p-r}{x_\nu},\qquad \nu = \binom{p}{r}$$
If, from a series of m 1-elements, one forms the exterior products of any given grade, and conjoins each of them in an interior product with its supplementary combination, then the sum of these products is zero, that is:

$$S_1\circ C_1 + S_2\circ C_2 + \cdots = 0$$
This theorem is a useful source of identities involving sums of interior products. We shall also see
in Chapter 10: Exploring the Generalized Product, that it has an important role to play in proving
the Generalized Product Theorem which shows the equivalence of two different forms for the
expansion of the generalized product in terms of interior and exterior products. Chapter 10 will
also discuss the generalization of the Zero Interior Sum Theorem leading to a Zero Generalized
Sum Theorem.
As we will see in the following sections, the Zero Interior Sum Theorem turns out to be another
result associated with the invariance of an m:k-form to a factorization of the m-element A.
Here are the simplest cases. It will be immediately clear from these cases that m:k-sums in which k is less than m − k are zero, since each interior product term is individually zero.

F2,1
$$a_1\circ a_2 + a_2\circ(-a_1)$$

F3,1
$$a_1\circ(a_2\wedge a_3) + a_2\circ(-(a_1\wedge a_3)) + a_3\circ(a_1\wedge a_2)$$

F3,2
$$(a_1\wedge a_2)\circ a_3 + (a_1\wedge a_3)\circ(-a_2) + (a_2\wedge a_3)\circ a_1$$

F4,1
$$a_1\circ(a_2\wedge a_3\wedge a_4) + a_2\circ(-(a_1\wedge a_3\wedge a_4)) + a_3\circ(a_1\wedge a_2\wedge a_4) + a_4\circ(-(a_1\wedge a_2\wedge a_3))$$

F4,2
$$(a_1\wedge a_2)\circ(a_3\wedge a_4) + (a_1\wedge a_3)\circ(-(a_2\wedge a_4)) + (a_1\wedge a_4)\circ(a_2\wedge a_3) + (a_2\wedge a_3)\circ(a_1\wedge a_4) + (a_2\wedge a_4)\circ(-(a_1\wedge a_3)) + (a_3\wedge a_4)\circ(a_1\wedge a_2)$$

F4,3
$$(a_1\wedge a_2\wedge a_3)\circ a_4 + (a_1\wedge a_2\wedge a_4)\circ(-a_3) + (a_1\wedge a_3\wedge a_4)\circ a_2 + (a_2\wedge a_3\wedge a_4)\circ(-a_1)$$

To verify the theorem, we only need to convert the interior products to scalar products and simplify.

{0, 0, 0, 0}
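The case F3,2 can also be verified numerically. A sketch, assuming the standard expansion of the interior product of a simple bivector with a 1-element:

```python
import numpy as np

# Check the Zero Interior Sum Theorem for F_{3,2}:
# (a1 ^ a2) o a3 + (a1 ^ a3) o (-a2) + (a2 ^ a3) o a1 == 0,
# using (a ^ b) o c = (c.a) b - (c.b) a.
def ip(a, b, c):
    return (c @ a) * b - (c @ b) * a

rng = np.random.default_rng(6)
a1, a2, a3 = rng.normal(size=(3, 5))    # any dimension; here 5

total = ip(a1, a2, a3) + ip(a1, a3, -a2) + ip(a2, a3, a1)
assert np.allclose(total, 0)
```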
$$A = a_1\wedge(a_2 + a\,a_1)\wedge(a_3 + b\,a_1 + c\,a_2)\wedge\cdots$$

Beginning with the first two factors $a_1$ and $a_2 + a\,a_1$, we can choose the scalar a to make the factors orthogonal.

$$a_1\circ(a_2 + a\,a_1) = 0 \;\;\Longrightarrow\;\; a = -\frac{a_1\circ a_2}{a_1\circ a_1}$$

For the first three factors, we can choose the scalars to make all three factors mutually orthogonal. Let

f13 = $a_1\circ(a_3 + b\,a_1 + c\,a_2)$;

We can follow through the standard Gram-Schmidt process, or we can use Mathematica to solve for the coefficients.
It is of interest to note that these scalars may be expressed in terms of interior products of 2-elements.

It is easy to confirm that with these definitions for a, b, and c, the first three factors of A are mutually orthogonal. First we make lists of the factors and the values for the scalars we have derived.

F = {f1 → a1, f2 → a2 + a a1, f3 → a3 + b a1 + c a2};

Finally we confirm that this refactorization does not change the original product.

𝒢[f1 ∧ f2 ∧ f3 /. F /. ToScalarProducts[…]]

a1 ∧ a2 ∧ a3
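The same confirmation can be sketched numerically, solving the Gram-Schmidt conditions for a, b, c directly (an illustration in Python; the exterior product of three vectors in R³ is taken as their determinant):

```python
import numpy as np

# Choose a, b, c so the factors a1, f2, f3 are mutually orthogonal,
# then confirm the exterior product a1 ^ f2 ^ f3 equals a1 ^ a2 ^ a3.
rng = np.random.default_rng(7)
a1, a2, a3 = rng.normal(size=(3, 3))

a = -(a1 @ a2) / (a1 @ a1)
f2 = a2 + a * a1                            # orthogonal to a1
c = -(f2 @ a3) / (f2 @ a2)                  # from f2 . f3 == 0
b = -(a1 @ a3 + c * (a1 @ a2)) / (a1 @ a1)  # from a1 . f3 == 0
f3 = a3 + b * a1 + c * a2

assert np.isclose(a1 @ f2, 0) and np.isclose(a1 @ f3, 0)
assert np.isclose(f2 @ f3, 0)
assert np.isclose(np.linalg.det(np.column_stack([a1, f2, f3])),
                  np.linalg.det(np.column_stack([a1, a2, a3])))
```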
For the first result we rely on the discussion of multilinear forms in Chapter 2. For the second
result we rely on the well-known Gram-Schmidt process discussed in the previous section.
$$A = a_1\wedge a_2\wedge a_3\wedge\cdots\wedge a_m$$
Any m:k-form which is the sum of interior products of k-span and k-cospan elements is given by
the expression:
By the Gram-Schmidt process A may be expressed as the exterior product of mutually orthogonal
factors.
The form based on this refactorization is clearly zero since it is composed of sums of terms, each
of which involves the interior product of mutually orthogonal factors.
Ï Example

A = a1 ∧ a2 ∧ a3;

$$(a_1\wedge a_2)\circ a_3 + (a_1\wedge a_3)\circ(-a_2) + (a_2\wedge a_3)\circ a_1$$

A = a1 ∧ (a2 + a a1) ∧ (a3 + b a1 + c a2);

$$(a_1\wedge(a\,a_1 + a_2))\circ(b\,a_1 + c\,a_2 + a_3) + (a_1\wedge(b\,a_1 + c\,a_2 + a_3))\circ(-a\,a_1 - a_2) + ((a\,a_1 + a_2)\wedge(b\,a_1 + c\,a_2 + a_3))\circ a_1$$

Since these factors are mutually orthogonal, each term is individually zero. Hence $H_{3,2} = 0$. Since $H_{3,2}$ is equal to the original interior sum $F_{3,2}$, the theorem is confirmed.

𝒢[H32 == F32]

True
Because our generalization reduces to the usual definition under the usual circumstances, we take the liberty of continuing to refer to the generalized cross product as, simply, the cross product.

The cross product of $\underset{m}{\alpha}$ and $\underset{k}{\beta}$ is denoted $\underset{m}{\alpha}\times\underset{k}{\beta}$ and is defined as the complement of their exterior product:

$$\underset{m}{\alpha}\times\underset{k}{\beta} = \overline{\underset{m}{\alpha}\wedge\underset{k}{\beta}}$$

The cross product of an m-element and a k-element is thus an (n−(m+k))-element. This definition preserves the basic property of the cross product — that the cross product of two elements is an element orthogonal to both — and reduces to the usual notion for vectors in a three-dimensional metric vector space.
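That the complement of an exterior product reproduces the classical cross product in Euclidean R³ can be checked directly. A sketch, assuming the standard component form of the complement on the bivector basis:

```python
import numpy as np

# In Euclidean R^3 the complement of a ^ b equals the classical cross
# product: with bivector components c_ij = a_i b_j - a_j b_i, the
# complement maps e2^e3 -> e1, e3^e1 -> e2, e1^e2 -> e3.
rng = np.random.default_rng(8)
a, b = rng.normal(size=(2, 3))

c = lambda i, j: a[i] * b[j] - a[j] * b[i]
complement = np.array([c(1, 2), c(2, 0), c(0, 1)])
assert np.allclose(complement, np.cross(a, b))
```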
$$(x_1\times x_2)\circ x_3 = \overline{x_1\wedge x_2}\circ x_3 = \overline{x_1\wedge x_2\wedge x_3}$$

$$(x_1\times x_2)\circ x_3 = \overline{x_1\wedge x_2\wedge x_3}\tag{6.98}$$

Because Grassmann identified n-elements with their scalar Euclidean complements (see the historical note in the introduction to Chapter 5), he considered $x_1\wedge x_2\wedge x_3$ in a 3-space to be a scalar. His notation for the exterior product of elements was to use square brackets $[x_1\,x_2\,x_3]$, thus originating the 'box' product notation for $(x_1\times x_2)\circ x_3$ used in three-dimensional vector algebra in the early decades of the twentieth century.

† Note that this product, unlike the other products above, does not rely on a metric, because the expression involves only exterior and regressive products.
$$\underset{m}{\alpha}\in\Lambda_m,\;\;\underset{k}{\beta}\in\Lambda_k \;\;\Longrightarrow\;\; \underset{m}{\alpha}\times\underset{k}{\beta}\in\Lambda_{n-(m+k)}\tag{6.101}$$

The cross product is not associative. However, it can be expressed in terms of exterior and interior products.

The cross product of an element with the unit scalar 1 yields its complement. Thus the complement operation may be viewed as the cross product with unity.

$$1\times\underset{m}{\alpha} = \underset{m}{\alpha}\times 1 = \overline{\underset{m}{\alpha}}\tag{6.104}$$

The cross product of an element with a scalar yields that scalar multiple of its complement.

$$a\times\underset{m}{\alpha} = \underset{m}{\alpha}\times a = a\,\overline{\underset{m}{\alpha}}\tag{6.105}$$

The cross product of 1 with itself is the unit n-element. The cross product of a scalar and its reciprocal is also the unit n-element.

$$1\times 1 = a\times\frac{1}{a} = \overline{1}\tag{6.106}$$

The cross product of two elements is equal (apart from a possible sign) to their cross product in reverse order. The cross product of two 1-elements is anti-commutative, just as is the exterior product.

$$\underset{m}{\alpha}\times\underset{k}{\beta} = (-1)^{m k}\,\underset{k}{\beta}\times\underset{m}{\alpha}\tag{6.107}$$

Ï ×12: The cross product is both left and right distributive under addition.

$$\left(\underset{m}{\alpha}+\underset{m}{\beta}\right)\times\underset{r}{\gamma} = \underset{m}{\alpha}\times\underset{r}{\gamma} + \underset{m}{\beta}\times\underset{r}{\gamma}\tag{6.109}$$

$$\underset{m}{\alpha}\times\left(\underset{r}{\beta}+\underset{r}{\gamma}\right) = \underset{m}{\alpha}\times\underset{r}{\beta} + \underset{m}{\alpha}\times\underset{r}{\gamma}\tag{6.110}$$

$$1\times\underset{m}{\alpha} = \underset{m}{\alpha}\times 1 = \overline{\underset{m}{\alpha}}$$

We can therefore write the exterior, regressive, and interior products in terms only of the cross product.
These formulae show that any result in the Grassmann algebra involving exterior, regressive or interior products, or complements, could be expressed in terms of the generalized cross product alone. This is somewhat reminiscent of the role played by the Sheffer stroke (or Peirce arrow) symbol of Boolean algebra.
Triangle components

In this section several formulae will be developed which are generalizations of the fundamental Pythagorean relationship for the right-angled triangle and of the trigonometric identity $\cos^2\theta + \sin^2\theta = 1$. These formulae will be found useful in determining projections, shortest distances between multiplanes, and a variety of other geometric results. The starting point is product formula 6.89 with k = m.

$$\left(\underset{m}{\alpha}\circ\underset{m}{\beta}\right)x = \left(\underset{m}{\alpha}\wedge x\right)\circ\underset{m}{\beta} + (-1)^{m-1}\,\underset{m}{\alpha}\circ\left(\underset{m}{\beta}\circ x\right)$$

By dividing through by the inner product $\underset{m}{\alpha}\circ\underset{m}{\beta}$, we obtain a decomposition of the 1-element x into two components.

$$x = \frac{\left(\underset{m}{\alpha}\wedge x\right)\circ\underset{m}{\beta}}{\underset{m}{\alpha}\circ\underset{m}{\beta}} + (-1)^{m-1}\,\frac{\underset{m}{\alpha}\circ\left(\underset{m}{\beta}\circ x\right)}{\underset{m}{\alpha}\circ\underset{m}{\beta}}\tag{6.118}$$
Recall formula 6.91:

$$\left(\left(\underset{m}{\alpha}\circ\underset{k}{\beta}\right)\wedge x_1\right)\circ x_2 = \left(\underset{m}{\alpha}\wedge x_1\right)\circ\left(\underset{k}{\beta}\wedge x_2\right) + (-1)^{m+k}\left(\underset{m}{\alpha}\circ x_2\right)\circ\left(\underset{k}{\beta}\circ x_1\right)$$

With k = m this becomes:

$$\left(\underset{m}{\alpha}\circ\underset{m}{\beta}\right)(x\circ z) = \left(\underset{m}{\alpha}\wedge x\right)\circ\left(\underset{m}{\beta}\wedge z\right) + \left(\underset{m}{\alpha}\circ z\right)\circ\left(\underset{m}{\beta}\circ x\right)\tag{6.119}$$

When $\underset{m}{\alpha}$ is equal to $\underset{m}{\beta}$ and x is equal to z, this reduces to a relationship similar to the fundamental trigonometric identity.

$$\hat x\circ\hat x = \left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\left(\underset{m}{\hat\alpha}\wedge\hat x\right) + \left(\underset{m}{\hat\alpha}\circ\hat x\right)\circ\left(\underset{m}{\hat\alpha}\circ\hat x\right)\tag{6.120}$$
$$\left\|\underset{m}{\hat\alpha}\wedge\hat x\right\|^2 + \left\|\underset{m}{\hat\alpha}\circ\hat x\right\|^2 = 1\tag{6.121}$$
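Triangle formula 6.121 can be sketched numerically for m = 1, where it is exactly sin² + cos² = 1 (the squared measure of a 2-element is taken as a Gram determinant, an assumed representation):

```python
import numpy as np

# Check |â ^ x̂|^2 + |â o x̂|^2 = 1 (formula 6.121) for unit vectors,
# with |â ^ x̂|^2 the Gram determinant (â.â)(x̂.x̂) - (â.x̂)^2.
rng = np.random.default_rng(9)
a = rng.normal(size=4); a /= np.linalg.norm(a)
x = rng.normal(size=4); x /= np.linalg.norm(x)

wedge_sq = (a @ a) * (x @ x) - (a @ x) ** 2
ip_sq = (a @ x) ** 2
assert np.isclose(wedge_sq + ip_sq, 1.0)
```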
Similarly, formula 6.118 reduces to a more obvious decomposition for x in terms of a unit m-element.

$$\hat x = \left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha} + (-1)^{m-1}\,\underset{m}{\hat\alpha}\circ\left(\underset{m}{\hat\alpha}\circ\hat x\right)\tag{6.122}$$

Because of its geometric significance when $\underset{m}{\alpha}$ is simple, the properties of this equation are worth investigating further. It will be shown that the square (scalar product with itself) of each of its terms is the corresponding term of formula 6.120. That is:

$$\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}\right)\circ\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}\right) = \left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\tag{6.123}$$

$$\left(\underset{m}{\hat\alpha}\circ\left(\underset{m}{\hat\alpha}\circ\hat x\right)\right)\circ\left(\underset{m}{\hat\alpha}\circ\left(\underset{m}{\hat\alpha}\circ\hat x\right)\right) = \left(\underset{m}{\hat\alpha}\circ\hat x\right)\circ\left(\underset{m}{\hat\alpha}\circ\hat x\right)\tag{6.124}$$

Further, taking the scalar product of formula 6.122 with itself and using formulae 6.120, 6.123, and 6.124 yields the result that the terms on the right-hand side of formula 6.122 are orthogonal.

$$\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}\right)\circ\left(\underset{m}{\hat\alpha}\circ\left(\underset{m}{\hat\alpha}\circ\hat x\right)\right) = 0\tag{6.125}$$

It is these facts that suggest the name triangle formulae for formulae 6.121 and 6.122.

To prove 6.123, first regroup the product:

$$\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}\right)\circ\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}\right) = (-1)^m\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}\right)\right)\circ\underset{m}{\hat\alpha}$$

Focusing now on the first factor of the inner product on the right-hand side we get:
Focusing now on the first factor of the inner product on the right-hand side we get:
$$\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}\right) = \left(a_1\wedge\cdots\wedge a_m\wedge\hat x\right)\circ\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}\right)$$

Note that the second factor in the interior product of the right-hand side is of grade 1. We apply the Interior Common Factor Theorem to give:

$$\left(a_1\wedge\cdots\wedge a_m\wedge\hat x\right)\circ\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}\right) = (-1)^m\left(\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}\right)\circ\hat x\right)a_1\wedge\cdots\wedge a_m + \sum_i(-1)^{i-1}\left(\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}\right)\circ a_i\right)a_1\wedge\cdots\wedge\widehat{a_i}\wedge\cdots\wedge a_m\wedge\hat x$$

But each of the coefficients in the sum vanishes:

$$\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}\right)\circ a_i = \left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\left(\underset{m}{\hat\alpha}\wedge a_i\right) = 0$$

Hence:

$$\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}\right) = (-1)^m\left(\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}\right)\circ\hat x\right)\underset{m}{\hat\alpha} = (-1)^m\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\right)\underset{m}{\hat\alpha}$$

Since $\underset{m}{\hat\alpha}\circ\underset{m}{\hat\alpha} = 1$, we have finally that:

$$\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}\right)\circ\left(\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}\right) = \left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\left(\underset{m}{\hat\alpha}\wedge\hat x\right)$$

The measure of $\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}$ is therefore the same as the measure of $\underset{m}{\hat\alpha}\wedge\hat x$.

$$\left\|\left(\underset{m}{\hat\alpha}\wedge\hat x\right)\circ\underset{m}{\hat\alpha}\right\| = \left\|\underset{m}{\hat\alpha}\wedge\hat x\right\|\tag{6.126}$$

By the complement-reversal relation:

$$\left(\underset{m}{\alpha}\wedge x\right)\circ\underset{m}{\alpha} = (-1)^{n-m-1}\,\overline{\underset{m}{\alpha}}\circ\overline{\left(\underset{m}{\alpha}\wedge x\right)} = (-1)^{n-m-1}\,\overline{\underset{m}{\alpha}}\circ\left(\overline{\underset{m}{\alpha}}\circ x\right)$$
$$\left(\underset{m}{\alpha}\wedge x\right)\circ\underset{m}{\alpha} = (-1)^{n-m-1}\,\overline{\underset{m}{\alpha}}\circ\left(\overline{\underset{m}{\alpha}}\circ x\right)\tag{6.127}$$

$$\underset{m}{\alpha}\circ\left(\underset{m}{\alpha}\circ x\right) = (-1)^{m-1}\left(\overline{\underset{m}{\alpha}}\wedge x\right)\circ\overline{\underset{m}{\alpha}}\tag{6.128}$$

Since the proof of formula 6.123 is valid for any simple $\underset{m}{\hat\alpha}$, it is also valid for $\overline{\underset{m}{\hat\alpha}}$, since this is also simple. The proof of formula 6.124 is then completed by applying formula 6.127.

The next four sections will look at some applications of these formulae.
6.12 Angle
$$\cos^2\theta = \left\|\hat x_1\circ\hat x_2\right\|^2\tag{6.129}$$

$$\left\|\hat x_1\wedge\hat x_2\right\|^2 + \left\|\hat x_1\circ\hat x_2\right\|^2 = 1\tag{6.130}$$

This, together with the definition for angle above, implies that:

$$\sin^2\theta = \left\|\hat x_1\wedge\hat x_2\right\|^2\tag{6.131}$$
$$\sin^2\theta = \left\|\hat x_1\times\hat x_2\right\|^2$$

Recalling the triangle formula

$$\left\|\underset{m}{\hat\alpha}\wedge\hat x\right\|^2 + \left\|\underset{m}{\hat\alpha}\circ\hat x\right\|^2 = 1$$

we have more generally:

$$\sin^2\theta = \left\|\underset{m}{\hat\alpha}\wedge\hat x\right\|^2\tag{6.132}$$

$$\cos^2\theta = \left\|\underset{m}{\hat\alpha}\circ\hat x\right\|^2\tag{6.133}$$

Diagram of three vectors showing the angles between them, and the angle between the bivector and the vector.

The angle between any two of the vectors is given by formula 6.129.

$$\cos^2\theta_{ij} = \left\|\hat x_i\circ\hat x_j\right\|^2$$

The cosine of the angle between the vector x1 and the bivector x2 ∧ x3 may be obtained from formula 6.124.

$$\left\|\underset{m}{\hat\alpha}\circ\hat x\right\|^2 = \frac{\left((x_2\wedge x_3)\circ x_1\right)\circ\left((x_2\wedge x_3)\circ x_1\right)}{\left((x_2\wedge x_3)\circ(x_2\wedge x_3)\right)(x_1\circ x_1)}$$
A = $\dfrac{\left((x_2\wedge x_3)\circ x_1\right)\circ\left((x_2\wedge x_3)\circ x_1\right)}{\left((x_2\wedge x_3)\circ(x_2\wedge x_3)\right)(x_1\circ x_1)}$ ;

First, convert the interior products to scalar products and then convert the scalar products to the angle form given by formula 6.129. GrassmannAlgebra provides the function ToAngleForm for doing this in one operation.

𝕍[x_]; ToAngleForm[A, θ]

This result may be verified by elementary geometry to be $\cos^2\theta$, where θ is defined in the diagram above. Thus we see a verification that formula 6.124 permits the calculation of the angle between a vector and an m-vector in terms of the angles between the given vector and any m vector factors of the m-vector.
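The same calculation can be sketched numerically, comparing the interior-product expression with the geometric angle between the vector and the plane (representation choices are assumptions; this is not the GrassmannAlgebra code):

```python
import numpy as np

# Compute A = |((x2^x3) o x1)|^2 / (|x2^x3|^2 |x1|^2) and check it
# equals the squared cosine of the angle between x1 and the plane
# of x2 and x3, using (a ^ b) o c = (c.a) b - (c.b) a and the Gram
# determinant for the squared measure of a bivector.
def ip(a, b, c):
    return (c @ a) * b - (c @ b) * a

def biv_sq(a, b):
    return (a @ a) * (b @ b) - (a @ b) ** 2

rng = np.random.default_rng(10)
x1, x2, x3 = rng.normal(size=(3, 3))

w = ip(x2, x3, x1)
A = (w @ w) / (biv_sq(x2, x3) * (x1 @ x1))

# Compare with the cosine computed from the orthogonal projection of
# x1 onto the plane spanned by x2 and x3.
Q, _ = np.linalg.qr(np.column_stack([x2, x3]))
proj = Q @ (Q.T @ x1)
assert np.isclose(A, (proj @ proj) / (x1 @ x1))
```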
Diagram of three vectors showing the angles between them, and the angle between the bivector
and the vector.
In a 3-space we can find the vector complements of the bivectors, and then find the angle between these vectors. This is equivalent to the cross product formulation. Let θ be the angle between the vectors. From formulas 6.28 and 6.67 we can remove the complement operations to get the simpler expression:

$$\cos\theta = \frac{(x_1\wedge x_2)\circ(x_3\wedge x_4)}{\left\|x_1\wedge x_2\right\|\,\left\|x_3\wedge x_4\right\|}\tag{6.134}$$

Note that, in contradistinction to the 3-space formulation using cross products, this formulation is valid in a space of any number of dimensions.

An actual calculation is most readably expressed by expanding each of the terms separately. We can either represent the $x_i$ in terms of basis elements or deal with them directly. A direct expansion, valid for any metric, is:

ToScalarProducts[(x1 ∧ x2) ∘ (x3 ∧ x4)]

−(x1 ∘ x4)(x2 ∘ x3) + (x1 ∘ x3)(x2 ∘ x4)
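This expansion can be confirmed numerically in the Euclidean case. A sketch using antisymmetric-matrix representatives of the bivectors (an assumed model):

```python
import numpy as np

# Check (x1 ^ x2) o (x3 ^ x4) = (x1.x3)(x2.x4) - (x1.x4)(x2.x3).
# The Frobenius product of the antisymmetric representatives double-
# counts the independent components, hence the factor 1/2.
def wedge(a, b):
    return np.outer(a, b) - np.outer(b, a)

rng = np.random.default_rng(11)
x1, x2, x3, x4 = rng.normal(size=(4, 5))

lhs = 0.5 * np.sum(wedge(x1, x2) * wedge(x3, x4))
rhs = (x1 @ x3) * (x2 @ x4) - (x1 @ x4) * (x2 @ x3)
assert np.isclose(lhs, rhs)
```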
Measure[x1 ∧ x2] Measure[x3 ∧ x4]

The volume of the parallelepiped defined by three vectors is the measure of their exterior product:

V = Measure[x1 ∧ x2 ∧ x3]

$$\sqrt{-(x_1\circ x_3)^2(x_2\circ x_2) + 2(x_1\circ x_2)(x_1\circ x_3)(x_2\circ x_3) - (x_1\circ x_2)^2(x_3\circ x_3) - (x_1\circ x_1)(x_2\circ x_3)^2 + (x_1\circ x_1)(x_2\circ x_2)(x_3\circ x_3)}$$

Note that this has been simplified somewhat as permitted by the symmetry of the scalar product. By putting this in its angle form we get the usual expression for the volume of a parallelepiped:

ToAngleForm[V, θ]

$$\sqrt{-\left\|x_1\right\|^2\left\|x_2\right\|^2\left\|x_3\right\|^2\left(-1 + \cos^2\theta_{1,2} + \cos^2\theta_{1,3} + \cos^2\theta_{2,3} - 2\cos\theta_{1,2}\cos\theta_{1,3}\cos\theta_{2,3}\right)}$$

We can of course use the same approach in any number of dimensions. For example, the 'volume' of a 4-dimensional parallelepiped in terms of the lengths of its sides and the angles between them is:

𝕍4 ; ToAngleForm[Measure[x1 ∧ x2 ∧ x3 ∧ x4], θ]

$$\sqrt{\left\|x_1\right\|^2\left\|x_2\right\|^2\left\|x_3\right\|^2\left\|x_4\right\|^2\left(1 - \cos^2\theta_{2,3} - \cdots\right)}$$
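The three-vector angle form can be checked numerically against the Gram-determinant definition of the volume (a sketch; the Gram-determinant representation of the squared measure is an assumption consistent with the expansions above):

```python
import numpy as np

# Check the parallelepiped volume in angle form:
# V = |x1||x2||x3| sqrt(1 - c12^2 - c13^2 - c23^2 + 2 c12 c13 c23),
# where cij = cos(theta_ij), against V = sqrt(det Gram(x1, x2, x3)).
rng = np.random.default_rng(12)
X = rng.normal(size=(3, 3))           # rows x1, x2, x3
gram = X @ X.T
V = np.sqrt(np.linalg.det(gram))

n = np.linalg.norm(X, axis=1)
c = gram / np.outer(n, n)             # matrix of cosines
angle_form = n[0] * n[1] * n[2] * np.sqrt(
    1 - c[0, 1]**2 - c[0, 2]**2 - c[1, 2]**2
    + 2 * c[0, 1] * c[0, 2] * c[1, 2])
assert np.isclose(V, angle_form)
```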
6.13 Projection
To be completed.
7.1 Introduction
In Chapter 8: Exploring Mechanics, we will see that systems of forces and momenta, and the
velocity and infinitesimal displacement of a rigid body may be represented by the sum of a bound
vector and a bivector. We have already noted in Chapter 1 that a single force is better represented
by a bound vector than by a (free) vector. Systems of forces are then better represented by sums of
bound vectors; and a sum of bound vectors may always be reduced to the sum of a single bound
vector and a single bivector.
We call the sum of a bound vector and a bivector a 2-entity. These geometric entities are therefore
worth exploring for their ability to represent the principal physical entities of mechanics.
In this chapter we begin by establishing some properties of 2-entities in an n-plane, and then show
how, in a 3-plane, they take on a particularly symmetrical and potent form. This form, a 2-entity in
a 3-plane, is called a screw. Since it is in the 3-plane that we wish to explore 3-dimensional
mechanics we then explore the properties of screws in more detail.
The objective of this chapter then, is to lay the algebraic and geometric foundations for the chapter
on mechanics to follow.
Historical Note
The classic text on screws and their application to mechanics is by Sir Robert Stawell Ball: A
Treatise on the Theory of Screws (1900). Ball was aware of Grassmann's work as he explains in the
Biographical Notes to this text in a comment on the Ausdehnungslehre of 1862.
This remarkable work, a development of an earlier volume (1844), by the same author,
contains much that is of instruction and interest in connection with the present theory.
... Here we have a very general theory, which includes screw coordinates as a particular
case.
The principal proponent of screw theory from a Grassmannian viewpoint was Edward Wyllys
Hyde. In 1888 he wrote a paper entitled 'The Directional Theory of Screws' on which Ball
comments, again in the Biographical Notes to A Treatise on the Theory of Screws.
The author writes: "I shall define a screw to be the sum of a point-vector and a plane-
vector perpendicular to it, the former being a directed and posited line, the latter the
product of two vectors, hence a directed but not posited plane." Prof. Hyde proves by his
[sic] calculus many of the fundamental theorems in the present theory in a very concise
manner.
$$S = P\wedge\alpha + \beta$$

Here, P is a point, α a vector and β a bivector. Remember that only in vector 1-, 2- and 3-spaces are bivectors necessarily simple. In what follows we will show that in a metric space the point P may always be chosen in such a way that the bivector β is orthogonal to the vector α. This property, when specialized to three-dimensional space, is important for the theory of screws to be developed in the rest of the chapter. We show it as follows:

To the above equation, add and subtract the bound vector $P^*\wedge\alpha$ such that $(P - P^*)\circ\alpha = 0$, giving:

$$S = P^*\wedge\alpha + \beta^*$$
$$\beta^* = \beta + (P - P^*)\wedge\alpha$$

We require the new bivector to be orthogonal to α:

$$\left(\beta + (P - P^*)\wedge\alpha\right)\circ\alpha = 0$$
$$\beta\circ\alpha + \left(\alpha\circ(P - P^*)\right)\alpha - (\alpha\circ\alpha)(P - P^*) = 0$$

But from our first condition on the choice of P* (that is, $(P - P^*)\circ\alpha = 0$) we see that the second term is zero, giving:

$$P^* = P - \frac{\beta\circ\alpha}{\alpha\circ\alpha}$$

whence β* may be expressed as:

$$\beta^* = \beta + \frac{\beta\circ\alpha}{\alpha\circ\alpha}\wedge\alpha$$

Finally, substituting for P* and β* in the expression for S above gives the canonical form for the 2-entity S, in which the new bivector component is now orthogonal to the vector α. The bound vector component defines a line called the central axis of S.
$$S = \left(P - \frac{\beta\circ\alpha}{\alpha\circ\alpha}\right)\wedge\alpha + \left(\beta + \frac{\beta\circ\alpha}{\alpha\circ\alpha}\wedge\alpha\right)\tag{7.1}$$
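The orthogonality achieved by this choice can be sketched numerically in a 3-space (an illustration with assumed representations: the bivector as an antisymmetric matrix, its interior product with a vector a as a @ M):

```python
import numpy as np

# Check that b* = b + ((b o a)/(a o a)) ^ a satisfies b* o a = 0:
# the shifted bivector is orthogonal to the vector of the 2-entity.
def wedge(u, v):
    return np.outer(u, v) - np.outer(v, u)

rng = np.random.default_rng(13)
a = rng.normal(size=3)                       # the vector of S
M = wedge(*rng.normal(size=(2, 3)))          # the bivector b of S

b_dot_a = a @ M                              # b o a, a vector
shift = b_dot_a / (a @ a)
M_star = M + wedge(shift, a)                 # the new bivector b*

assert np.allclose(a @ M_star, 0)            # b* o a == 0
```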
Ï Canonical forms in ℙ1

In a bound vector space of one dimension, that is a 1-plane, there are points, vectors, and bound vectors. There are no bivectors. Hence every 2-entity is of the form S = P∧α and is therefore in some sense already in its canonical form.

Ï Canonical forms in ℙ2

In a bound vector space of two dimensions (the plane), every bound element can be expressed as a bound vector.

To see this we note that any bivector is simple and can be expressed using the vector of the bound vector as one of its factors. For example, if P is a point and α, β1, β2 are vectors, any bound element in the plane can be expressed in the form:

$$P\wedge\alpha + \beta_1\wedge\beta_2$$

But because any bivector in the plane can be expressed as a scalar factor times any other bivector, we can also write:

$$\beta_1\wedge\beta_2 = k\wedge\alpha$$

for some vector k, so that the bound element may now be written as the bound vector:

$$P\wedge\alpha + \beta_1\wedge\beta_2 = P\wedge\alpha + k\wedge\alpha = (P + k)\wedge\alpha = P^*\wedge\alpha$$
We can use GrassmannAlgebra to verify that formula 7.1 gives this result in a 2-space. First we
declare the 2-space, and write a and b in terms of basis elements:
'2 ; a = a e1 + b e2 ; b = c e1∧e2 ;
For the canonical expression [7.1] above we wish to show that the bivector term is zero. We do this
by converting the interior products to scalar products using ToScalarProducts.
Simplify[ToScalarProducts[b + ((b⋅a)/(a⋅a))∧a]]

0
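The same verification can be sketched symbolically outside Mathematica. In a 2-plane a bivector has a single component, so writing b = c e1∧e2 and a = a1 e1 + a2 e2 in an orthonormal basis and expanding the interior products by hand shows the bivector term of [7.1] cancelling. The component formulas in the comments are assumptions of this SymPy sketch.

```python
import sympy as sp

a1, a2, c = sp.symbols('a1 a2 c')

# In an orthonormal basis of a 2-plane:
#   b = c e1^e2,  a = a1 e1 + a2 e2
#   b.a = c (a1 e2 - a2 e1)   (using (x^y).z = (x.z)y - (y.z)x)
#   v^a has the single component v1 a2 - v2 a1 (coefficient of e1^e2)
ba = (-c * a2, c * a1)               # components of b.a
aa = a1**2 + a2**2                   # a.a
v = (ba[0] / aa, ba[1] / aa)         # (b.a)/(a.a)
wedge_coeff = v[0] * a2 - v[1] * a1  # coefficient of e1^e2 in v^a

# The bivector term of [7.1], b + ((b.a)/(a.a))^a, vanishes in the plane:
assert sp.simplify(c + wedge_coeff) == 0
```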
In sum: A bound 2-element in the plane, P∧a + b, may always be expressed as a bound vector:
S = P∧a + b = (P − (b⋅a)/(a⋅a))∧a    7.2
Canonical forms in a 3-plane
In a bound vector space of three dimensions, that is a 3-plane, every 2-element can be expressed as
the sum of a bound vector and a bivector orthogonal to the vector of the bound vector. (The
bivector is necessarily simple, because all bivectors in a 3-space are simple.)
Such canonical forms are called screws, and will be discussed in more detail in the sections to
follow.
Creating 2-entities
A 2-entity may be created by applying the GrassmannAlgebra function CreateElement. For
example in 3-dimensional space we create a 2-entity based on the symbol s by entering:
'3 ; S = CreateElement[s]  (the symbol s is entered with an underscript 2, denoting a 2-element)

s1 𝒪∧e1 + s2 𝒪∧e2 + s3 𝒪∧e3 + s4 e1∧e2 + s5 e1∧e3 + s6 e2∧e3

Note that CreateElement automatically declares the generated symbols si as scalars by adding
the pattern s_. We can confirm this by entering Scalars.
Scalars
To explicitly factor out the origin and express the 2-element in the form S = P∧a + b, we can use
the GrassmannAlgebra function GrassmannSimplify.

GrassmannSimplify[S]

𝒪∧(e1 s1 + e2 s2 + e3 s3) + s4 e1∧e2 + s5 e1∧e3 + s6 e2∧e3
Complements in an n-plane
In this section an expression will be developed for the complement of a 2-entity in an n-plane with
a metric. In an n-plane, the complement of the sum of a bound vector and a bivector is the sum of a
bound (n-2)-vector and an (n-1)-vector. It is of pivotal consequence for the theory of mechanics
that such a complement in a 3-plane (the usual three-dimensional space) is itself the sum of a
bound vector and a bivector. Geometrically, a quantity and its complement have the same measure
but are orthogonal. In addition, for a 2-element (such as the sum of a bound vector and a bivector)
in a 3-plane, the complement of the complement of an entity is the entity itself. These results will
find application throughout Chapter 8: Exploring Mechanics.
The metric we choose to explore is the hybrid metric Gij defined in [5.33], in which the origin is
orthogonal to all vectors, but otherwise the vector space metric is arbitrary.
Consider a 2-entity X referred to the origin:

X = 𝒪∧a + b

Taking the complement of X and applying formulae 5.41 and 5.43 derived in Chapter 5 gives:

X̄ = 𝒪∧b̄ + ā

X = 𝒪∧a + b  ⇔  X̄ = 𝒪∧b̄ + ā    7.3
It may be seen that the effect of taking the complement of X when it is in a form referred to the
origin is equivalent to interchanging the vector a and the bivector b whilst taking their vector space
complements. This means that the vector that was bound through the origin becomes a free (n−1)-
vector, whilst the bivector that was free becomes an (n−2)-vector bound through the origin.
Now refer X to an arbitrary point P = 𝒪 + n by adding and subtracting the term n∧a:

X = 𝒪∧a + n∧a + b − n∧a = (𝒪 + n)∧a + (b − n∧a)

Or, equivalently:

X = P∧a + bp    bp = b − n∧a
By manipulating the complement of P∧a, we can write it in the alternative form −ā⋅P:

(P∧a)‾ = (−a∧P)‾ = −ā∨P̄ = −ā⋅P

Further, from formula 5.41, we have that:

−ā⋅P = (𝒪∧ā)⋅P

(on the left the bar denotes the full complement of a; on the right, its free complement). Similarly,
the complement of bp is 𝒪∧b̄p. So the relationship between X and its complement can finally be
written:

X = P∧a + bp  ⇔  X̄ = 𝒪∧b̄p + (𝒪∧ā)⋅P    7.4
Remember, this formula is valid for the hybrid metric [5.33] in an n-plane of arbitrary dimension.
We explore its application to 3-planes in the section below.
A screw S is a 2-entity which may be expressed in the form:

S = P∧a + s ā    7.5

where:

a is the vector of the screw
P∧a is the central axis of the screw
s is the pitch of the screw
ā is the bivector of the screw

Remember that ā is the (free) complement of the vector a in the three-plane, and hence is a simple
bivector.
The unit screw Ŝ corresponding to S is obtained by dividing S by the magnitude of its vector a:

Ŝ = P∧â + s â̄    7.6

Note that the unit screw does not have unit magnitude. The magnitude of a screw will be discussed
in Section 7.5.
To obtain the pitch of a screw, take the exterior product of S = P∧a + s ā with the vector a. The
first term in the expansion of the right-hand side is zero, leaving:

S∧a = s ā∧a

Further, by taking the free complement of this expression and invoking the definition of the interior
product we get:

(S∧a)‾ = s (ā∧a)‾ = s a∨ā = s a⋅a

Dividing through by the square of the magnitude of a gives:

(Ŝ∧â)‾ = s â⋅â = s

In sum: We can obtain the pitch of a screw from any of the following formulae:

s = (S∧a)/(ā∧a) = (S∧a)‾/(a⋅a) = (Ŝ∧â)‾    7.7
To obtain the central axis, take the interior product of S = P∧a + s ā with the vector a. The second
term in the expansion of the right-hand side is zero (since an element is always orthogonal to its
complement), leaving:
S⋅a = (P∧a)⋅a
The right-hand side of this equation may be expanded using the Interior Common Factor Theorem
to give:
S⋅a = (a⋅P) a − (a⋅a) P
By taking the exterior product of this expression with a, we eliminate the first term on the right-
hand side to get:
(S⋅a)∧a = −(a⋅a) (P∧a)
By dividing through by the square of the magnitude of a, we can express this in terms of unit
elements.
(S⋅â)∧â = −(P∧a)
In sum: We can obtain the central axis P∧a of a screw from any of the following formulae:

P∧a = −((S⋅a)∧a)/(a⋅a) = −(S⋅â)∧â = â∧(S⋅â)    7.8
Substituting from formulae 7.7 and 7.8 into S = P∧a + s ā gives:

S = −(S⋅â)∧â + ((S∧a)‾/(a⋅a)) ā

In order to transform the second term into the form we want, we note first that (S∧a)‾ is a scalar. So
that we can write:

(S∧a)‾ ā = ((S∧a)‾∧a)‾ = (S∧a)∨ā = (S∧a)⋅a

Dividing through by a⋅a then expresses the second term in terms of unit elements, and S can be
written as:

S = −(S⋅â)∧â + (S∧â)⋅â    7.9

This type of decomposition is in fact valid for any m-element S and 1-element a, as we have shown
in Chapter 6, formula 6.68. The central axis is the term −(S⋅â)∧â, which is the component of S
parallel to a, and the term (S∧â)⋅â is s ā, the component of S orthogonal to a.
8 Exploring Mechanics
8.1 Introduction
Grassmann algebra applied to the field of mechanics effects an interesting synthesis between
concepts now regarded as disparate. In particular we will explore the synthesis it
can effect between the concepts of force and moment; velocity and angular velocity; and linear
momentum and angular momentum.
This synthesis has an important concomitant: the form in which the mechanical entities are
represented is, for many results of interest, independent of the dimension of the space involved.
It may be argued therefore, that such a representation is more fundamental than one specifically
requiring a three-dimensional context (as indeed does that which uses the Gibbs-Heaviside vector
algebra).
This is a more concrete result than may be apparent at first sight since the form, as well as being
valid for spaces of dimension three or greater, is also valid for spaces of dimension zero, one or
two. Of most interest, however, is the fact that the complementary form of a mechanical entity
takes on a different form depending on the number of dimensions concerned. For example, the
velocity of a rigid body is the sum of a bound vector and a bivector in a space of any dimension.
Its complement is dependent on the dimension of the space, but in each case it may be viewed as
representing the points of the body (if they exist) which have zero velocity. In three dimensions the
complementary velocity can specify the axis of rotation of the rigid body, while in two dimensions
it can specify the centre (point) of rotation.
Furthermore, some results in mechanics (for example, Newton's second law or the conditions of
equilibrium of a rigid body) will be shown not to require the space to be a metric space. On the
other hand, use of the Gibbs-Heaviside 'cross product' to express angular momentum or the
moment condition immediately supposes the space to possess a metric.
Mechanics as it is known today is in the strange situation of being a field of mathematical physics
in which location is very important, and of which the calculus traditionally used (being vectorial)
can take no proper account. As already discussed in Chapter 1, one may take the example of the
concept of force. A (physical) force is not satisfactorily represented by a vector, yet contemporary
practice is still to use a vector calculus for this task. To patch up this inadequacy the concept of
moment is introduced and the conditions of equilibrium of a rigid body augmented by a condition
on the sum of the moments. The justification for this condition is often not well treated in
contemporary texts.
Many of these texts will of course rightly state that forces are not (represented by) free vectors and
yet will proceed to use the Gibbs-Heaviside calculus to denote and manipulate them. Although
confusing, this inaptitude is usually offset by various comments attached to the symbolic
descriptions and calculations. For example, a position is represented by a 'position vector'. A
position vector is described as a (free?) vector with its tail (fixed?) to the origin (point?) of the
coordinate system. The coordinate system itself is supposed to consist of an origin point and a
number of basis vectors. But whereas the vector calculus can cope with the vectors, it cannot cope
with the origin. This confusion between vectors, free vectors, bound vectors, sliding vectors and
position vectors would not occur if the calculus used to describe them were a calculus of position
(points) as well as direction (vectors).
In order to describe the phenomena of mechanics in purely vectorial terms it has been necessary
therefore to devise the notions of couple and moment as notions almost distinct from that of force,
thus effectively splitting in two all the results of mechanics. In traditional mechanics, to every
'linear' quantity: force, linear momentum, translational velocity, etc., corresponds an 'angular'
quantity: moment, angular momentum, angular velocity.
In this chapter we will show that by representing mechanical quantities correctly in terms of
elements of a Grassmann algebra this dichotomy disappears and mechanics takes on a more unified
form. In particular we will show that there exists a screw ℱ representing (including moments) a
system of forces (remember that a system of forces in three dimensions is not necessarily
replaceable by a single force); a screw ℳ representing the momentum of a system of bodies (linear
and angular); and a screw 𝒱 representing the velocity of a rigid body (linear and angular): all
invariant combinations of the linear and angular components. For example, the velocity 𝒱 is a
complete characterization of the kinematics of the rigid body independent of any particular point
on the body used to specify its motion. Expressions for the work, power and kinetic energy of a
system of forces and a rigid body will be shown to be determinable by an interior product between
the relevant screws. For example, the interior product of ℱ with 𝒱 will give the power of the system
of forces acting on the rigid body, that is, the sum of the translational and rotational powers.
Historical Note
The application of the Ausdehnungslehre to mechanics has attracted far fewer proponents than
might be expected. This may be due to the fact that Grassmann himself was late going into the
subject. His 'Die Mechanik nach den Principien der Ausdehnungslehre' (1877) was written just a
few weeks before his death. Furthermore, in the period before his ideas became completely
submerged by the popularity of the Gibbs-Heaviside system (around the turn of the century) there
were few people with sufficient understanding of the Ausdehnungslehre to break new ground using
its methods.
There are only three people who have written substantially in English using the original concepts
of the Ausdehnungslehre: Edward Wyllys Hyde (1888), Alfred North Whitehead (1898), and
Henry James Forder (1941). Each of them has discussed the theory of screws in more or less detail,
but none has addressed the complete field of mechanics. The principal works in other languages
are in German, and apart from Grassmann's paper in 1877 mentioned above, are the book by
Jahnke (1905) which includes applications to mechanics, and the short monograph by Lotze (1922)
which lays particular emphasis on rigid body mechanics.
8.2 Force
Representing force
The notion of force as used in mechanics involves the concepts of magnitude, sense, and line of
action. It will be readily seen in what follows that such a physical entity may be faithfully
represented by a bound vector. Similarly, all the usual properties of systems of forces are faithfully
represented by the analogous properties of sums of bound vectors. For the moment it is not
important whether the space has a metric, or what its dimension is.
ℱ = P∧f    8.1
The vector f is called the force vector and expresses the sense and direction of the force. In a
metric space it would also express its magnitude.
The point P is any point on the line of action of the force. It can be expressed as the sum of the
origin point 𝒪 and the position vector n of the point.
This simple representation delineates clearly the role of the (free) vector f. The vector f represents
all the properties of the force except the position of its line of action. The operation P∧ has the
effect of 'binding' the force vector to the line through P.
On the other hand, the expression of a bound vector as a product of points is also useful when
expressing certain types of forces, for example gravitational forces. Newton's law of gravitation for
the force exerted on a point mass m1 (weighted point) by a point mass m2 may be written:

ℱ12 = (G m1∧m2) / R³    8.2
Note that this expression correctly changes sign if the masses are interchanged.
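Formula 8.2 can be illustrated numerically by carrying the bound vector as the pair (force vector, moment about the origin). The helper below is an illustrative sketch, not a package function; it assumes the denominator R³, so that the magnitude of the force vector is G m1 m2 / R².

```python
import numpy as np

G = 6.674e-11

def grav_bound_force(m1, p1, m2, p2):
    """Bound-vector form of Newton's gravitation as a (vector, moment) pair.

    Returns (f, M): the force vector on mass 1 and its moment about the
    origin, together encoding the line of action through both points.
    (Assumption: P1^f is represented by (f, p1 x f), and the denominator
    is R cubed so that |f| = G m1 m2 / R^2.)
    """
    r = p2 - p1
    R = np.linalg.norm(r)
    f = G * m1 * m2 * r / R**3
    return f, np.cross(p1, f)

m1, p1 = 5.0, np.array([1., 0., 0.])
m2, p2 = 3.0, np.array([1., 4., 3.])      # separation R = 5
f, M = grav_bound_force(m1, p1, m2, p2)
f21, M21 = grav_bound_force(m2, p2, m1, p1)

# Interchanging the masses reverses the sign of the bound vector,
# mirroring the antisymmetry of m1 ^ m2:
assert np.allclose(f, -f21)
assert np.allclose(M, -M21)
```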
Systems of forces
A system of forces may be represented by a sum of bound vectors.
ℱ = Σ ℱi = Σ Pi∧fi    8.3
A sum of bound vectors is not necessarily reducible to a bound vector: a system of forces is not
necessarily reducible to a single force. However, a sum of bound vectors is in general reducible to
the sum of a bound vector and a bivector. This is done simply by adding and subtracting the term
P∧(Σ fi) to and from the sum [8.3].
ℱ = P∧(Σ fi) + Σ (Pi − P)∧fi    8.4
This 'adding and subtracting' operation is a common one in our treatment of mechanics. We call it
referring the sum to the point P.
Note that although the expression for ℱ now involves the point P, it is completely independent of P
since the terms involving P cancel. The sum may be said to be invariant with respect to any point
used to express it.
Examining the terms in the expression 8.4 above we can see that since Σ fi is a vector (f, say) the
first term reduces to the bound vector P∧f representing a force through the chosen point P. The
vector f is called the resultant force vector.

f = Σ fi

The second term is a sum of bivectors of the form (Pi − P)∧fi. Such a bivector may be seen to
faithfully represent a moment: specifically, the moment of the force represented by Pi∧fi about
the point P. Let this moment be denoted MiP.

MiP = (Pi − P)∧fi
To see that it is more reasonable that a moment be represented as a bivector rather than as a vector,
one has only to consider that the physical dimensions of a moment are the product of a length and
a force unit.
The expression for moment above clarifies distinctly how it can arise from two located entities: the
bound vector Pi Ôfi and the point P, and yet itself be a 'free' entity. The bivector, it will be
remembered, has no concept of location associated with it. This has certainly been a source of
confusion among students of mechanics using the usual Gibbs-Heaviside three-dimensional vector
calculus. It is well known that the moment of a force about a point does not possess the property of
location, and yet it still depends on that point. While notions of 'free' and 'bound' are not properly
mathematically characterized, this type of confusion is likely to persist.
The second term in [8.4], representing the sum of the moments of the forces about the point P, will
be denoted:

MP = Σ (Pi − P)∧fi

Then, any system of forces may be represented by the sum of a bound vector and a bivector.

ℱ = P∧f + MP    8.5

The bound vector P∧f represents a force through an arbitrary point P with force vector equal to the
sum of the force vectors of the system.

The bivector MP represents the sum of the moments of the forces about the same point P.
Suppose the system of forces be referred to some other point P*. The system of forces may then be
written in either of the two forms:

P∧f + MP = P*∧f + MP*

Solving for MP* gives us a formula relating the moment sum about different points.

MP* = MP + (P − P*)∧f

If the bound vector term P∧f in formula 8.5 is zero then the system of forces is called a couple. If
the bivector term MP is zero then the system of forces reduces to a single force.
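In a metric 3-plane the moment bivector MP may be represented by its dual Gibbs moment vector, so formula 8.4 and the moment-transfer formula can be checked numerically. A minimal NumPy sketch (the helper refer_to and the sample system are illustrative, not from the GrassmannAlgebra package):

```python
import numpy as np

def refer_to(system, p):
    """Refer a system of forces to the point with position vector p.

    system : list of (p_i, f_i) pairs, each force f_i acting through
             the point with position vector p_i.
    Returns (f, M_p): the resultant force vector and the moment sum
    about p, in the dual cross-product form valid in a metric 3-plane.
    """
    f = sum(fi for _, fi in system)
    M = sum(np.cross(pi - p, fi) for pi, fi in system)
    return f, M

system = [(np.array([1., 0., 0.]), np.array([0., 2., 0.])),
          (np.array([0., 1., 3.]), np.array([1., 0., -1.]))]

P  = np.array([0., 0., 0.])
Ps = np.array([2., -1., 5.])

f,  M_P  = refer_to(system, P)
fs, M_Ps = refer_to(system, Ps)

# The resultant force vector is independent of the reference point...
assert np.allclose(f, fs)
# ...and the moment sums obey M_P* = M_P + (P - P*) x f:
assert np.allclose(M_Ps, M_P + np.cross(P - Ps, f))
```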
Equilibrium
If a sum of forces is zero, that is:

ℱ = P∧f + MP = 0

then it is straightforward to show that each of P∧f and MP must be zero. Indeed, if such were not
the case then the bound vector would be equal to the negative of the bivector, a possibility
excluded by the implicit presence of the origin in a bound vector, and its absence in a bivector.
Furthermore, P∧f being zero implies f is zero.

These considerations lead us directly to the basic theorem of statics. For a body to be in
equilibrium, the sum of the forces must be zero.

ℱ = Σ ℱi = 0    8.6
This one expression encapsulates both the usual conditions for equilibrium of a body, that is: that
the sum of the force vectors be zero; and that the sum of the moments of the forces about an
arbitrary point P be zero.
In three dimensions, the complement of an exterior product is equivalent to the usual cross product.
M̄P = Σ (Pi − P) × fi
Thus, a system of forces in a metric 3-plane may be reduced to a single force through an arbitrarily
chosen point P plus the vector-space complement of the usual moment vector about P.
ℱ = P∧f + (Σ (Pi − P) × fi)‾    8.7
8.3 Momentum
Ṗ = dP/dt = dn/dt = ṅ    8.8
Representing momentum
As may be suspected from Newton's second law, momentum is of the same tensorial nature as
force. The momentum of a particle is represented by a bound vector as is a force. The momentum
of a system of particles (or of a rigid body, or of a system of rigid bodies) is representable by a sum
of bound vectors as is a system of forces.
The momentum of a particle comprises three factors: the mass, the position, and the velocity of the
particle. The mass is represented by a scalar, the position by a point, and the velocity by a vector.
ℳ = P∧(m Ṗ)    8.9

The momentum of a particle may be viewed either as a momentum vector bound to a line through
the position of the particle, or as a velocity vector bound to a line through the point mass. The
simple description [8.9] delineates clearly the role of the (free) vector m Ṗ. It represents all the
properties of the momentum of a particle except its position.

ℳ = Σ ℳi = Σ Pi∧(mi Ṗi)    8.10
A sum of bound vectors is not necessarily reducible to a bound vector: the momentum of a system
of particles is not necessarily reducible to the momentum of a single particle. However, a sum of
bound vectors is in general reducible to the sum of a bound vector and a bivector. This is done
simply by adding and subtracting the term P∧(Σ mi Ṗi) to and from the sum [8.10].

ℳ = P∧(Σ mi Ṗi) + Σ (Pi − P)∧(mi Ṗi)    8.11
It cannot be too strongly emphasized that although the momentum of the system has now been
referred to the point P, it is completely independent of P.
Examining the terms in formula 8.11 we see that since Σ mi Ṗi is a vector (l, say) the first term
reduces to the bound vector P∧l representing the (linear) momentum of a 'particle' situated at the
point P. The vector l is called the linear momentum of the system. The 'particle' may be viewed as
a particle with mass equal to the total mass of the system and with velocity equal to the velocity of
the centre of mass.

l = Σ mi Ṗi

The second term is a sum of bivectors of the form (Pi − P)∧(mi Ṗi). Such a bivector may be seen
to faithfully represent the moment of momentum, or angular momentum, of the system of particles
about the point P. Let this moment of momentum for particle i be denoted LiP.

LiP = (Pi − P)∧(mi Ṗi)

The expression for angular momentum LiP above clarifies distinctly how it can arise from two
located entities: the bound vector Pi∧(mi Ṗi) and the point P, and yet itself be a 'free' entity.
Similarly to the notion of moment, the notion of angular momentum treated using the three-
dimensional Gibbs-Heaviside vector calculus has caused some confusion amongst students of
mechanics: the angular momentum of a particle about a point depends on the positions of the
particle and of the point and yet itself has no property of location.
The second term in formula 8.11, representing the sum of the moments of momenta about the point
P, will be denoted LP.
LP = Σ (Pi − P)∧(mi Ṗi)
Then the momentum ℳ of any system of particles may be represented by the sum of a bound vector
and a bivector.

ℳ = P∧l + LP

The bound vector P∧l represents the linear momentum of the system referred to an arbitrary point
P.

The bivector LP represents the angular momentum of the system about the point P.
Suppose a number of bodies with momenta ℳi; then the total momentum Σ ℳi may be written:

Σ ℳi = Σ Pi∧li + Σ LPi

Referring this momentum sum to a point P by adding and subtracting the term P∧(Σ li) we obtain:

Σ ℳi = P∧(Σ li) + Σ (Pi − P)∧li + Σ LPi

It is thus natural to represent the linear momentum vector l of the system of bodies by:

l = Σ li

and the angular momentum of the system about P by:

LP = Σ (Pi − P)∧li + Σ LPi
That is, the momentum of a system of bodies is of the same form as the momentum of a system of
particles. The linear momentum vector of the system is the sum of the linear momentum vectors of
the bodies. The angular momentum of the system is made up of two parts: the sum of the angular
momenta of the bodies about their respective chosen points; and the sum of the moments of the
linear momenta of the bodies (referred to their respective chosen points) about the point P.
Again it may be worth emphasizing that the total momentum ℳ is independent of the point P,
whilst its component terms are dependent on it. However, we can easily convert the formulae to
refer to some other point, say P*.

ℳ = P∧l + LP = P*∧l + LP*
Hence the angular momentum referred to the new point is given by:
LP* = LP + (P − P*)∧l
The centre of mass PG is the point defined by:

M PG = Σ mi Pi

where M is the total mass of the system. The momentum referred to a point P may then be written:

ℳ = P∧(M ṖG) + LP
If now we refer the momentum to the centre of mass, that is, we choose P to be PG , we can write
the momentum of the system as:
ℳ = PG∧(M ṖG) + LPG    8.13
Thus the momentum ℳ of a system is equivalent to the momentum of a particle of mass equal to
the total mass M of the system situated at the centre of mass PG plus the angular momentum about
the centre of mass.
In three dimensions, the complement of the exterior product of two vectors is equivalent to the
usual cross product.
L̄P = Σ (Pi − P) × li
Thus, the momentum of a system in a metric 3-plane may be reduced to the momentum of a single
particle through an arbitrarily chosen point P plus the vector-space complement of the usual
moment of momentum vector about P.
To obtain the rate of change of the momentum of a system of particles with respect to time, we
differentiate the expression [8.10]:

ℳ = Σ Pi∧(mi Ṗi)

ℳ̇ = Σ Ṗi∧(mi Ṗi) + Σ Pi∧(ṁi Ṗi) + Σ Pi∧(mi P̈i)
In what follows, it will be supposed for simplicity that the masses are not time varying, and since
the first term is zero, this expression becomes:
ℳ̇ = Σ Pi∧(mi P̈i)    8.15
Alternatively, writing the momentum as ℳ = P∧l + LP, the rate of change of momentum with
respect to time is:

ℳ̇ = Ṗ∧l + P∧l̇ + L̇P    8.16

The term Ṗ∧l, or what is equivalent, the term Ṗ∧(M ṖG), is zero under any of the following
conditions:

• P is a fixed point.
• l is zero.
• Ṗ is parallel to ṖG.
• P is equal to PG.
In particular then, when the point to which the momentum is referred is the centre of mass, the rate
of change of momentum [8.16] becomes:

ℳ̇ = PG∧l̇ + L̇PG    8.17
Newton's second law for a system may now be stated in the single equation:

ℱ = ℳ̇    8.18

This equation captures Newton's law in its most complete form, encapsulating both linear and
angular terms. Substituting for ℱ and ℳ̇ from equations 8.5 and 8.16 we have:

P∧f + MP = Ṗ∧l + P∧l̇ + L̇P    8.19
By equating the bound vector terms of [8.19] we obtain the vector equation:

f = l̇

By equating the bivector terms of [8.19] we obtain the bivector equation:

MP = Ṗ∧l + L̇P

In a metric 3-plane, it is the vector complement of this bivector equation which is usually given as
the moment condition.

M̄P = Ṗ × l + L̄̇P
If P is a fixed point so that Ṗ is zero, Newton's law [8.18] is equivalent to the pair of equations:

f = l̇
MP = L̇P    8.20
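The pair of equations [8.20] can be checked numerically in their metric 3-plane, cross-product form, for particles moving under constant forces about a fixed reference point. A NumPy sketch under those assumptions (all names are illustrative):

```python
import numpy as np

# Two particles moving under constant forces; P is the (fixed) origin.
m  = np.array([2.0, 3.0])
x0 = np.array([[1., 0., 0.], [0., 2., 1.]])   # initial positions
v0 = np.array([[0., 1., 0.], [1., 0., -1.]])  # initial velocities
F  = np.array([[0., 0., -2.], [4., 0., 0.]])  # constant forces

def state(t):
    a = F / m[:, None]
    return x0 + v0 * t + 0.5 * a * t**2, v0 + a * t

def angular_momentum(t):                      # L_P, P at the origin
    x, v = state(t)
    return sum(np.cross(xi, mi * vi) for xi, mi, vi in zip(x, m, v))

def moment(t):                                # M_P: moments of the forces
    x, _ = state(t)
    return sum(np.cross(xi, Fi) for xi, Fi in zip(x, F))

# Central-difference estimate of dL_P/dt matches M_P, i.e. M_P = L_P-dot:
t, h = 0.7, 1e-6
dL = (angular_momentum(t + h) - angular_momentum(t - h)) / (2 * h)
assert np.allclose(dL, moment(t), atol=1e-5)
```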
9 Grassmann Algebra
9.1 Introduction
The Grassmann algebra is an algebra of "numbers" composed of linear sums of elements from any
of the exterior linear spaces generated from a given linear space. These are numbers in the same
sense that, for example, a complex number or a matrix is a number. The essential property that an
algebra has, in addition to being a linear space is, broadly speaking, one of closure under a product
operation. That is, the algebra has a product operation such that the product of any elements of the
algebra is also an element of the algebra.
Thus the exterior product spaces Lm discussed up to this point are not algebras, since products
(exterior, regressive or interior) of their elements are not generally elements of the same space.
However, there are two important exceptions. L0 is not only a linear space, but an algebra (and a
field) as well under the exterior and interior products. Ln is an algebra and a field under the
regressive product.
Many of our examples will be using GrassmannAlgebra functions, and because we will often be
changing the dimension of the space depending on the example under consideration, we will
indicate a change without comment simply by entering the appropriate DeclareBasis symbol
from the palette. For example the following entry:
&3 ;
A basis for L is obtained from the currently declared basis of L1 by collecting together all the basis
elements of the various Lm. This may be generated by entering BasisL[].
&3 ; BasisL[]

{1, e1, e2, e3, e1∧e2, e1∧e3, e2∧e3, e1∧e2∧e3}
If we wish to generate a general symbolic Grassmann number in the currently declared basis, we
can use the GrassmannAlgebra function CreateGrassmannNumber.

CreateGrassmannNumber[symbol] creates a Grassmann number with scalar coefficients
formed by subscripting the symbol given. For example, if we enter CreateGrassmannNumber[x],
we obtain:

CreateGrassmannNumber[x]

x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3
The positional ordering of the xi with respect to the basis elements in any given term is due to
Mathematica's internal ordering routines, and does not affect the meaning of the result.
The CreateGrassmannNumber operation automatically adds the pattern for the generated
coefficients xi to the list of declared scalars, as we see by entering Scalars.
Scalars
CreateGrassmannNumber[□]

□ + □ e1 + □ e2 + □ e3 + □ e1∧e2 + □ e1∧e3 + □ e2∧e3 + □ e1∧e2∧e3
Note that the coefficients entered here will not be automatically put on the list of declared scalars.
If several Grassmann numbers are required, we can apply the operation to a list of symbols. That
is, CreateGrassmannNumber is Listable. Below we generate three numbers in 2-space.
Grassmann numbers can of course be entered in general symbolic form, not just using basis
elements. For example a Grassmann number might take the form:
2 + x − 3 y + 5 x∧y + x∧y∧z
In some cases it might be faster to change basis temporarily in order to generate a template with all
but the required scalars formed.
DeclareBasis[{x, y, z}];
T = Table[CreateGrassmannNumber[□], {3}]; &3 ; MatrixForm[T]

□ + x □ + y □ + z □ + □ x∧y + □ x∧z + □ y∧z + □ x∧y∧z
□ + x □ + y □ + z □ + □ x∧y + □ x∧z + □ y∧z + □ x∧y∧z
□ + x □ + y □ + z □ + □ x∧y + □ x∧z + □ y∧z + □ x∧y∧z
The scalar component of a Grassmann number is called its body. The body may be obtained from a
Grassmann number by using the Body function. Suppose we have a general Grassmann number X
in a 3-space.
&3 ; X = CreateGrassmannNumber[x]

x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3
Body[X]

x0
The rest of a Grassmann number is called its soul. The soul may be obtained from a Grassmann
number by using the Soul function. The soul of the number X is:
Soul[X]

e1 x1 + e2 x2 + e3 x3 + x4 e1∧e2 + x5 e1∧e3 + x6 e2∧e3 + x7 e1∧e2∧e3
Body and Soul apply to Grassmann numbers whose components are given in any form.
{Body[Z], Soul[Z]}

{x⋅y + 2 (x⋅z) + z∨y∧x, −4 x + x∧y⋅z − 2 x∧y}
The first two terms of the body are scalar because they are interior products of 1-elements (scalar
products). The third term of the body is a scalar in a space of 3 dimensions.
Body and Soul are both Listable. Taking the Soul of the list of Grassmann numbers
generated in 2-space above gives
Soul[{x0 + e1 x1 + e2 x2 + x3 e1∧e2,
      y0 + e1 y1 + e2 y2 + y3 e1∧e2,
      z0 + e1 z1 + e2 z2 + z3 e1∧e2}] // MatrixForm

e1 x1 + e2 x2 + x3 e1∧e2
e1 y1 + e2 y2 + y3 e1∧e2
e1 z1 + e2 z2 + z3 e1∧e2
a∧b = (−1)^(m k) b∧a    (where a is of grade m and b is of grade k)
Thus factors of odd grade anti-commute; whereas, if one of the factors is of even grade, they
commute. This means that, if X is a Grassmann number whose components are all of odd grade,
then it is nilpotent.
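These grade-based notions are easy to exhibit outside Mathematica. The sketch below is not the GrassmannAlgebra package: it represents a Grassmann number as a dict mapping sorted tuples of basis indices to scalar coefficients, with the exterior product carrying the sign of the sorting permutation; body, soul and the even/odd split then fall out of the grade (the tuple length).

```python
from collections import defaultdict

def wedge(x, y):
    """Exterior product of Grassmann numbers given as dicts mapping
    sorted tuples of basis indices to scalar coefficients; the empty
    tuple () indexes the scalar (body) term."""
    out = defaultdict(float)
    for u, a in x.items():
        for v, b in y.items():
            if set(u) & set(v):
                continue                  # repeated basis factor vanishes
            merged = list(u + v)
            sign = 1                      # sign of the sorting permutation
            for i in range(len(merged)):
                for j in range(i + 1, len(merged)):
                    if merged[i] > merged[j]:
                        sign = -sign
            out[tuple(sorted(merged))] += sign * a * b
    return {k: c for k, c in out.items() if c != 0}

def body(x): return {k: c for k, c in x.items() if not k}
def soul(x): return {k: c for k, c in x.items() if k}
def even_grade(x): return {k: c for k, c in x.items() if len(k) % 2 == 0}
def odd_grade(x):  return {k: c for k, c in x.items() if len(k) % 2 == 1}

# X = 2 + 3 e1 + 5 e2 + 7 e1^e2^e3 (odd part of grades 1 and 3)
X = {(): 2.0, (1,): 3.0, (2,): 5.0, (1, 2, 3): 7.0}
assert body(X) == {(): 2.0}
assert wedge({(2,): 1.0}, {(1,): 1.0}) == {(1, 2): -1.0}   # anticommutation
assert wedge(odd_grade(X), odd_grade(X)) == {}             # odd => nilpotent
```

The last assertion illustrates the claim above: cross terms of odd factors cancel in pairs by the commutation rule, so a Grassmann number with only odd-grade components squares to zero.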
The even components of a Grassmann number can be extracted with the function EvenGrade and
the odd components with OddGrade.
X
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3
EvenGrade@XD
x0 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3
OddGrade@XD
e1 x1 + e2 x2 + e3 x3 + x7 e1 Ô e2 Ô e3
It should be noted that the result returned by EvenGrade or OddGrade is dependent on the
dimension of the current space. For example, the calculations above for the even and odd
components of the number X were carried out in a 3-space. If now we change to a 2-space and
repeat the calculations, we will get a different result for OddGrade[X] because the term of grade
three is necessarily zero in this 2-space.
&2 ; OddGrade@XD
e1 x1 + e2 x2 + e3 x3
For EvenGrade and OddGrade to apply, it is not necessary that the Grassmann number be in
terms of basis elements or simple exterior products. Applying EvenGrade and OddGrade to the
number Z below still separates the number into its even and odd components:
To test whether a number is even or odd we use EvenGradeQ and OddGradeQ. Consider again
the general Grassmann number X in 3-space.
8EvenGradeQ@XD, OddGradeQ@XD,
EvenGradeQ@EvenGrade@XDD, OddGradeQ@EvenGrade@XDD<
8False, False, True, False<
EvenGradeQ and OddGradeQ are not Listable. When applied to a list of elements, they
require all the elements in the list to conform in order to return True. They can of course still be
mapped over a list of elements to question the evenness or oddness of the individual components.
Finally, there is the question as to whether the Grassmann number 0 is even or odd. It may be
recalled from Chapter 2 that the single symbol 0 is actually a shorthand for the zero element of any
of the exterior linear spaces, and hence is of indeterminate grade. Entering Grade[0] will return a
flag Grade0 which can be manipulated as appropriate. Therefore, both EvenGradeQ[0] and
OddGradeQ[0] will return False.
X
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3
Grade@XD
80, 1, 2, 3<
Grade will also work with more general Grassmann number expressions. For example we can
apply it to the number Z which we previously defined above.
Z
x û y + x Ô H2 + yL û H-2 + zL + z Ó y Ô x
2009 9 3
Grassmann Algebra Book.nb 507
Grade@ZD
80, 1, 2<
Because it may be necessary to expand out a Grassmann number into a sum of terms (each of
which has a definite grade) before the grades can be calculated, and because Mathematica will
reorganize the ordering of those terms in the sum according to its own internal algorithms, direct
correspondence with the original form is often lost. However, if a list of the grades of each term in
an expansion of a Grassmann number is required, we can expand the number with GrassmannSimplify, convert the resulting sum to a list of terms, and then map Grade over the list.
For example:
A = %@ZD
-4 x + x û y + 2 Hx û zL + x Ô y û z - x Ô y Ô z Ó 1 - 2 x Ô y
A = List üü A
8-4 x, x û y, 2 Hx û zL, x Ô y û z, -Hx Ô y Ô z Ó 1L, -2 x Ô y<
Grade êü A
81, 0, 0, 1, 0, 2<
To extract the components of a Grassmann number of a given grade or grades, we can use the
GrassmannAlgebra ExtractGrade[m][X] function which takes a Grassmann number and
extracts the components of grade m.
Extracting the components of grade 1 from the number Z defined above gives:
ExtractGrade@1D@ZD
-4 x + x Ô y û z
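Grade extraction has an equally direct counterpart in the dictionary representation used in the earlier sketches (again our own illustrative helper, analogous to but not the package's ExtractGrade[m][X]): a monomial's grade is just the length of its index tuple.

```python
def extract_grade(m, X):
    """Components of X of grade m; a monomial's grade is its tuple length."""
    return {k: v for k, v in X.items() if len(k) == m}

# x0 + x1 e1 + x2 e2 + x4 e1^e2 + x7 e1^e2^e3, with sample coefficients:
X = {(): 4, (1,): 1, (2,): 2, (1, 2): 7, (1, 2, 3): 5}
assert extract_grade(1, X) == {(1,): 1, (2,): 2}
assert extract_grade(2, X) == {(1, 2): 7}
assert extract_grade(4, X) == {}     # no grade-4 component
```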
DeclareExtraScalars@8y_ , z_ <D
ExtractGrade@1D@8x0 + e1 x1 + e2 x2 + x3 e1 Ô e2, y0 + e1 y1 + e2 y2 + y3 e1 Ô e2,
z0 + e1 z1 + e2 z2 + z3 e1 Ô e2<D êê MatrixForm
e1 x1 + e2 x2
e1 y1 + e2 y2
e1 z1 + e2 z2
DeclareScalars@8a, b, c, d, 2, Â, ‰<D
8a, b, c, d<
ScalarQ@8a, b, c, 2, Â, ‰, 2 - 3 Â<D
8True, True, True, True, True, True, True<
This feature allows the GrassmannAlgebra package to deal with complex numbers just as it would
any other scalars.
%@HHa + Â bL xL Ô HÂ yLD
HÂ a - bL x Ô y
X = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3
We now create a second Grassmann number Y so that we can look at various operations applied to
two Grassmann numbers in 3-space.
Y = CreateGrassmannNumber@yD
y0 + e1 y1 + e2 y2 + e3 y3 + y4 e1 Ô e2 + y5 e1 Ô e3 + y6 e2 Ô e3 + y7 e1 Ô e2 Ô e3
The exterior product of two general Grassmann numbers in 3-space is obtained by multiplying out
the numbers termwise and simplifying the result.
%@X Ô YD
x0 y0 + e1 Hx1 y0 + x0 y1 L + e2 Hx2 y0 + x0 y2 L +
e3 Hx3 y0 + x0 y3 L + Hx4 y0 - x2 y1 + x1 y2 + x0 y4 L e1 Ô e2 +
Hx5 y0 - x3 y1 + x1 y3 + x0 y5 L e1 Ô e3 +
Hx6 y0 - x3 y2 + x2 y3 + x0 y6 L e2 Ô e3 +
Hx7 y0 + x6 y1 - x5 y2 + x4 y3 + x3 y4 - x2 y5 + x1 y6 + x0 y7 L e1 Ô e2 Ô e3
When the bodies of the numbers are zero we get only components of grades two and three.
%@Soul@XD Ô Soul@YDD
H-x2 y1 + x1 y2 L e1 Ô e2 + H-x3 y1 + x1 y3 L e1 Ô e3 + H-x3 y2 + x2 y3 L e2 Ô e3 +
Hx6 y1 - x5 y2 + x4 y3 + x3 y4 - x2 y5 + x1 y6 L e1 Ô e2 Ô e3
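The structure of this product is easy to confirm with the dictionary sketch introduced earlier (an illustrative re-implementation, not the package): wedging two bodiless numbers leaves only components of grade two and higher, and the grade-2 coefficients follow the x1 y2 - x2 y1 pattern visible above.

```python
def wedge(X, Y):
    """Exterior product on {sorted basis-index tuple: coefficient} dicts."""
    Z = {}
    for a, ca in X.items():
        for b, cb in Y.items():
            if set(a) & set(b):
                continue
            m, sign = list(a + b), 1
            for i in range(len(m)):
                for j in range(i + 1, len(m)):
                    if m[i] > m[j]:
                        sign = -sign
            k = tuple(sorted(m))
            Z[k] = Z.get(k, 0) + sign * ca * cb
    return {k: v for k, v in Z.items() if v != 0}

# Bodiless numbers (souls) in 3-space, with sample numeric coefficients:
Xs = {(1,): 1, (2,): 2, (3,): 3, (1, 2): 4, (1, 3): 5, (2, 3): 6, (1, 2, 3): 7}
Ys = {(1,): 1, (2,): -1, (3,): 2, (1, 2): 3, (1, 3): 1, (2, 3): -2, (1, 2, 3): 4}
P = wedge(Xs, Ys)
assert min(len(k) for k in P) >= 2        # only grades two and three survive
assert P[(1, 2)] == 1 * (-1) - 2 * 1      # the x1 y2 - x2 y1 pattern
```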
Z = %@X Ó YD
e1 Ô e2 Ô e3 Ó Hx7 y0 + x6 y1 - x5 y2 + x4 y3 + x3 y4 - x2 y5 + x1 y6 + x0 y7 +
e1 Hx7 y1 - x5 y4 + x4 y5 + x1 y7 L + e2 Hx7 y2 - x6 y4 + x4 y6 + x2 y7 L +
e3 Hx7 y3 - x6 y5 + x5 y6 + x3 y7 L + Hx7 y4 + x4 y7 L e1 Ô e2 +
Hx7 y5 + x5 y7 L e1 Ô e3 + Hx7 y6 + x6 y7 L e2 Ô e3 + x7 y7 e1 Ô e2 Ô e3 L
We cannot obtain the result in the form of a pure exterior product without relating the unit n-
element to the basis n-element. In general these two elements may be related by a scalar multiple
which we have denoted !. In 3-space this relationship is given by 1_3 == ! e1 Ô e2 Ô e3. To obtain the
result as an exterior product, but one that will also involve the scalar !, we use the
GrassmannAlgebra function ToCongruenceForm.
Z1 = ToCongruenceForm@ZD
H1/!L Hx7 y0 + x6 y1 - x5 y2 + x4 y3 + x3 y4 - x2 y5 + x1 y6 + x0 y7 +
e1 Hx7 y1 - x5 y4 + x4 y5 + x1 y7 L + e2 Hx7 y2 - x6 y4 + x4 y6 + x2 y7 L +
e3 Hx7 y3 - x6 y5 + x5 y6 + x3 y7 L + Hx7 y4 + x4 y7 L e1 Ô e2 +
Hx7 y5 + x5 y7 L e1 Ô e3 + Hx7 y6 + x6 y7 L e2 Ô e3 + x7 y7 e1 Ô e2 Ô e3 L
%@Z1D
x7 y0 + x6 y1 - x5 y2 + x4 y3 + x3 y4 - x2 y5 + x1 y6 + x0 y7 +
e1 Hx7 y1 - x5 y4 + x4 y5 + x1 y7 L + e2 Hx7 y2 - x6 y4 + x4 y6 + x2 y7 L +
e3 Hx7 y3 - x6 y5 + x5 y6 + x3 y7 L + Hx7 y4 + x4 y7 L e1 Ô e2 +
Hx7 y5 + x5 y7 L e1 Ô e3 + Hx7 y6 + x6 y7 L e2 Ô e3 + x7 y7 e1 Ô e2 Ô e3
X
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3
The complement of X is denoted by an overbar. Entering the overbarred symbol applies the complement operation, but does
not simplify it in any way.
X
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3
Further simplification to an explicit Grassmann number depends on the metric of the space
concerned and may be accomplished by applying GrassmannSimplify.
GrassmannSimplify will look at the metric and make the necessary transformations. In the
Euclidean metric assumed by GrassmannAlgebra as the default we have:
%@ X D
e3 x4 - e2 x5 + e1 x6 + x7 + x3 e1 Ô e2 - x2 e1 Ô e3 + x1 e2 Ô e3 + x0 e1 Ô e2 Ô e3
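In the Euclidean case the complement simply sends each basis monomial to its complementary monomial with a sign. The following Python sketch (our own helper, valid only for this Euclidean 3-space case, not the package's metric-aware machinery) reproduces the pattern of the output above: the sign is the parity of the permutation that rearranges the concatenated indices into (1, 2, 3).

```python
def complement3(X):
    """Euclidean complement in 3-space on {index tuple: coefficient} dicts:
    each monomial maps to its complementary monomial, signed by the parity
    that sorts the concatenation (monomial, complement) into (1, 2, 3)."""
    Z = {}
    for k, c in X.items():
        comp = tuple(i for i in (1, 2, 3) if i not in k)
        m, sign = list(k + comp), 1
        for i in range(len(m)):
            for j in range(i + 1, len(m)):
                if m[i] > m[j]:
                    sign = -sign
        Z[comp] = Z.get(comp, 0) + sign * c
    return Z

# x0=1, x1=2, x2=3, x3=4, x4=5, x5=6, x6=7, x7=8:
X = {(): 1, (1,): 2, (2,): 3, (3,): 4, (1, 2): 5, (1, 3): 6, (2, 3): 7, (1, 2, 3): 8}
C = complement3(X)
# Matches the output above: x0 -> e1^e2^e3, x1 -> e2^e3, x2 -> -e1^e3, ...
assert C[(1, 2, 3)] == 1 and C[(2, 3)] == 2 and C[(1, 3)] == -3
assert C[(1, 2)] == 4 and C[(3,)] == 5 and C[(2,)] == -6 and C[(1,)] == 7
assert C[()] == 8
```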
For metrics other than Euclidean, the expression for the complement will be more complex. We
can explore the complement of a general Grassmann number in a general metric by entering
DeclareMetric[$] and then applying GrassmannSimplify to X . As an example, we take
the complement of a general Grassmann number in a 2-space with a general metric. We choose a 2-
space rather than a 3-space, because the result is less complex to display.
&2 ; DeclareMetric@-D
88$1,1 , $1,2 <, 8$1,2 , $2,2 <<
U = CreateGrassmannNumber@uD
u0 + e1 u1 + e2 u2 + u3 e1 Ô e2
%@ U D êê Simplify
W1 = %@U û VD
u0 w0 + e1 u1 w0 + e2 u2 w0 + HHe1 û e1 L u1 + He1 û e2 L u2 L w1 +
He1 Ô e2 û e1 L u3 w1 + HHe1 û e2 L u1 + He2 û e2 L u2 L w2 +
He1 Ô e2 û e2 L u3 w2 + He1 Ô e2 û e1 Ô e2 L u3 w3 + u3 w0 e1 Ô e2
Note that there are no products of the form ei ûHej Ôek L as these have been put to zero by
GrassmannSimplify. If we wish, we can convert the remaining interior products to scalar
products using the GrassmannAlgebra function ToScalarProducts.
W2 = ToScalarProducts@W1 D
u0 w0 + e1 u1 w0 + e2 u2 w0 + He1 û e1 L u1 w1 + He1 û e2 L u2 w1 -
He1 û e2 L e1 u3 w1 + He1 û e1 L e2 u3 w1 + He1 û e2 L u1 w2 +
He2 û e2 L u2 w2 - He2 û e2 L e1 u3 w2 + He1 û e2 L e2 u3 w2 -
He1 û e2 L2 u3 w3 + He1 û e1 L He2 û e2 L u3 w3 + u3 w0 e1 Ô e2
Finally, we can substitute the values from the currently declared metric tensor for the scalar
products using the GrassmannAlgebra function ToMetricForm. For example, if we first declare
a general metric:
DeclareMetric@-D
88$1,1 , $1,2 <, 8$1,2 , $2,2 <<
ToMetricForm@W2 D
u0 w0 + e1 u1 w0 + e2 u2 w0 + u1 w1 $1,1 + e2 u3 w1 $1,1 +
u2 w1 $1,2 - e1 u3 w1 $1,2 + u1 w2 $1,2 + e2 u3 w2 $1,2 - u3 w3 $21,2 +
u2 w2 $2,2 - e1 u3 w2 $2,2 + u3 w3 $1,1 $2,2 + u3 w0 e1 Ô e2
x0 y0 + x1 y1 + x2 y2 + x3 y3 + x4 y4 + e3 Hx3 y0 + x5 y1 + x6 y2 + x7 y4 L +
x5 y5 + e2 Hx2 y0 + x4 y1 - x6 y3 - x7 y5 L + x6 y6 +
e1 Hx1 y0 - x4 y2 - x5 y3 + x7 y6 L + x7 y7 + Hx4 y0 + x7 y3 L e1 Ô e2 +
Hx5 y0 - x7 y2 L e1 Ô e3 + Hx6 y0 + x7 y1 L e2 Ô e3 + x7 y0 e1 Ô e2 Ô e3
&3 ; X = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3
Y = CreateGrassmannNumber@yD
y0 + e1 y1 + e2 y2 + e3 y3 + y4 e1 Ô e2 + y5 e1 Ô e3 + y6 e2 Ô e3 + y7 e1 Ô e2 Ô e3
Z = %@X û YD
x0 y0 + e1 x1 y0 + e2 x2 y0 + e3 x3 y0 +
HHe1 û e1 L x1 + He1 û e2 L x2 + He1 û e3 L x3 L y1 + He1 Ô e2 û e1 L x4 y1 +
He1 Ô e3 û e1 L x5 y1 + He2 Ô e3 û e1 L x6 y1 + He1 Ô e2 Ô e3 û e1 L x7 y1 +
HHe1 û e2 L x1 + He2 û e2 L x2 + He2 û e3 L x3 L y2 + He1 Ô e2 û e2 L x4 y2 +
He1 Ô e3 û e2 L x5 y2 + He2 Ô e3 û e2 L x6 y2 + He1 Ô e2 Ô e3 û e2 L x7 y2 +
HHe1 û e3 L x1 + He2 û e3 L x2 + He3 û e3 L x3 L y3 + He1 Ô e2 û e3 L x4 y3 +
He1 Ô e3 û e3 L x5 y3 + He2 Ô e3 û e3 L x6 y3 + He1 Ô e2 Ô e3 û e3 L x7 y3 +
HHe1 Ô e2 û e1 Ô e2 L x4 + He1 Ô e2 û e1 Ô e3 L x5 + He1 Ô e2 û e2 Ô e3 L x6 L y4 +
He1 Ô e2 Ô e3 û e1 Ô e2 L x7 y4 +
HHe1 Ô e2 û e1 Ô e3 L x4 + He1 Ô e3 û e1 Ô e3 L x5 + He1 Ô e3 û e2 Ô e3 L x6 L y5 +
He1 Ô e2 Ô e3 û e1 Ô e3 L x7 y5 +
HHe1 Ô e2 û e2 Ô e3 L x4 + He1 Ô e3 û e2 Ô e3 L x5 + He2 Ô e3 û e2 Ô e3 L x6 L y6 +
He1 Ô e2 Ô e3 û e2 Ô e3 L x7 y6 + He1 Ô e2 Ô e3 û e1 Ô e2 Ô e3 L x7 y7 +
x4 y0 e1 Ô e2 + x5 y0 e1 Ô e3 + x6 y0 e2 Ô e3 + x7 y0 e1 Ô e2 Ô e3
We can expand these interior products to inner products and replace them by elements of the
metric tensor by using the GrassmannAlgebra function ToMetricForm. For example in the case
of a Euclidean metric we have:
1; ToMetricForm@ZD
x0 y0 + e1 x1 y0 + e2 x2 y0 + e3 x3 y0 + x1 y1 + e2 x4 y1 + e3 x5 y1 + x2 y2 -
e1 x4 y2 + e3 x6 y2 + x3 y3 - e1 x5 y3 - e2 x6 y3 + x4 y4 + e3 x7 y4 +
x5 y5 - e2 x7 y5 + x6 y6 + e1 x7 y6 + x7 y7 + x4 y0 e1 Ô e2 + x7 y3 e1 Ô e2 +
x5 y0 e1 Ô e3 - x7 y2 e1 Ô e3 + x6 y0 e2 Ô e3 + x7 y1 e2 Ô e3 + x7 y0 e1 Ô e2 Ô e3
A = HH1 + xL Ô H2 + y + y Ô xLL û H3 + zL
H1 + xL Ô H2 + y + y Ô xL û H3 + zL
Ÿ Expanding products
ExpandProducts performs the first level of the simplification operation by simply expanding
out any products. It treats scalars as elements of the Grassmann algebra, just like those of higher
grade. Taking the expression A and expanding out both the exterior and interior products gives:
A1 = ExpandProducts@AD
1Ô2û3 + 1Ô2ûz + 1Ôyû3 + 1Ôyûz + xÔ2û3 + xÔ2ûz + xÔyû3 +
xÔyûz + 1ÔyÔxû3 + 1ÔyÔxûz + xÔyÔxû3 + xÔyÔxûz
Ÿ Factoring scalars
FactorScalars collects all the scalars in each term, which figure either by way of ordinary
multiplication or by way of exterior multiplication, and multiplies them as a scalar factor at the
beginning of each term. Remember that both the exterior and interior product of scalars is a scalar.
Mathematica will also automatically rearrange the terms in this sum according to its internal
ordering algorithm.
A2 = FactorScalars@A1 D
6 + 6 x + 3 y + 2 H1 û zL + 2 Hx û zL + y û z + x Ô y û z +
yÔxûz + xÔyÔxûz + 3 xÔy + 3 yÔx + 3 xÔyÔx
A3 = ZeroSimplify@A2 D
6 + 6 x + 3 y + 2 Hx û zL + y û z + x Ô y û z + y Ô x û z + 3 x Ô y + 3 y Ô x
Ÿ Reordering factors
In order to simplify further it is necessary to put the factors of each term into a canonical order so
that terms of opposite sign in an exterior product will cancel and any inner product will be
transformed into just one of its two symmetric forms. This reordering may be achieved by using
the GrassmannAlgebra ToStandardOrdering function. (The rules for GrassmannAlgebra
standard ordering are given in the package documentation.)
A4 = ToStandardOrdering@A3 D
6 + 6 x + 3 y + 2 Hx û zL + y û z
Ÿ Simplifying expressions
We can perform all the simplifying operations above with GrassmannSimplify. Applying
GrassmannSimplify to our original expression A we get the expression A4 .
%@A4 D
6 + 6 x + 3 y + 2 Hx û zL + y û z
To refer to an exterior power n of a Grassmann number X (but not to compute it), we will use the
usual notation X^n.
As an example we take the general element of 3-space, X, introduced at the beginning of this
chapter, and calculate some powers:
X = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3
%@X Ô XD
x20 + 2 e1 x0 x1 + 2 e2 x0 x2 + 2 e3 x0 x3 + 2 x0 x4 e1 Ô e2 + 2 x0 x5 e1 Ô e3 +
2 x0 x6 e2 Ô e3 + H-2 x2 x5 + 2 Hx3 x4 + x1 x6 + x0 x7 LL e1 Ô e2 Ô e3
%@X Ô X Ô XD
Direct computation is generally an inefficient way to calculate a power, as the terms are all
expanded out before checking to see which are zero. It will be seen below that we can generally
calculate powers more efficiently by using knowledge of which terms in the product would turn
out to be zero.
Xe = EvenGrade@XD
x0 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3
The soul of the even part of a general Grassmann number in 3-space just happens to be nilpotent
due to its being simple. (See Section 2.10.)
Xes = EvenSoul@XD
x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3
%@Xes Ô Xes D
0
However, in the general case an even soul will not be nilpotent, but its powers will become zero
when the grades of the terms in the power expansion exceed the dimension of the space. We can
show this by calculating the powers of an even soul in spaces of successively higher dimensions.
Let
Zes = m Ô p + q Ô r Ô s Ô t + u Ô v Ô w Ô x Ô y Ô z;
Xo = OddGrade@XD
e1 x1 + e2 x2 + e3 x3 + x7 e1 Ô e2 Ô e3
%@Xo Ô Xo D
0
X^p == HXe + Xo L^p == Xe^p + p Xe^Hp-1L Ô Xo
where the exponents p and p-1 denote exterior powers of the Grassmann numbers. (Since the even part Xe commutes with Xo, and Xo Ô Xo == 0, all higher binomial terms vanish.) For
example, the cube of a general Grassmann number X in 3-space may then be calculated as
This algorithm is used in the general power function GrassmannPower described below.
The formula developed above for positive powers of a Grassmann number still applies to the specific
case of numbers without a body. If we can determine an expression for the highest non-zero power
of an even number with no body, we can substitute it into the formula to obtain the required result.
Let S be a Grassmann number with no body. Express S as a sum of components s_i, where i is
the grade of the component. (Components may be a sum of several terms. Components with no
subscript are 1-elements by default.)
S = s + s_2 + s_3;
%@S Ô SD
s Ô s_2 + s_2 Ô s
S Ô S == 2 s Ô s_2
It is clear from this that because this expression is of grade three, further multiplication by S would
give zero. It thus represents an expression for the highest non-zero power (the square) of a general
bodyless element in 3-space.
The highest non-zero power pmax of an even Grassmann number (with no body) can be seen to be
that which enables the smallest term (of grade 2 and hence commutative) to be multiplied by itself
the largest number of times, without the result being zero. For a space of even dimension n this is
obviously pmax = n/2.
&4 ; X_2 = CreateElement@x_2D
x1 e1 Ô e2 + x2 e1 Ô e3 + x3 e1 Ô e4 + x4 e2 Ô e3 + x5 e2 Ô e4 + x6 e3 Ô e4
The square of this number is a 4-element. Any further multiplication by a bodyless Grassmann
number will give zero.
%@X_2 Ô X_2D
H-2 x2 x5 + 2 Hx3 x4 + x1 x6 LL e1 Ô e2 Ô e3 Ô e4
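This square, and the fact that one further bodiless factor annihilates it, can be checked numerically with the exterior-product sketch used earlier (an illustrative re-implementation, not the package).

```python
def wedge(X, Y):
    """Exterior product on {sorted basis-index tuple: coefficient} dicts."""
    Z = {}
    for a, ca in X.items():
        for b, cb in Y.items():
            if set(a) & set(b):
                continue
            m, sign = list(a + b), 1
            for i in range(len(m)):
                for j in range(i + 1, len(m)):
                    if m[i] > m[j]:
                        sign = -sign
            k = tuple(sorted(m))
            Z[k] = Z.get(k, 0) + sign * ca * cb
    return {k: v for k, v in Z.items() if v != 0}

# A pure 2-element in 4-space (cf. the example above):
x1, x2, x3, x4, x5, x6 = 2, 3, 5, 7, 11, 13
X2 = {(1, 2): x1, (1, 3): x2, (1, 4): x3, (2, 3): x4, (2, 4): x5, (3, 4): x6}
sq = wedge(X2, X2)
# The square is a pure 4-element with coefficient -2 x2 x5 + 2 (x3 x4 + x1 x6):
assert sq == {(1, 2, 3, 4): -2 * x2 * x5 + 2 * (x3 * x4 + x1 * x6)}
assert wedge(sq, X2) == {}     # any further bodiless factor gives zero
```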
Substituting s_2 for Xe and n/2 for p into formula 9.1 for the pth power gives:

X^p == s_2^Hn/2 - 1L Ô Hs_2 + Hn/2L Xo L

The only odd component Xo capable of generating a non-zero term is the one of grade 1. Hence Xo
is equal to s and we have finally that:

The maximum non-zero power of a bodiless Grassmann number in a space of even dimension n is
equal to n/2 and may be computed from the formula

X^pmax == X^Hn/2L == s_2^Hn/2 - 1L Ô HHn/2L s + s_2L    Hn evenL    9.2
A similar argument may be offered for odd-dimensional spaces. The highest non-zero power of an
even Grassmann number (with no body) can be seen to be that which enables the smallest term (of
grade 2 and hence commutative) to be multiplied by itself the largest number of times, without the
result being zero. For a space of odd dimension n this is obviously Hn-1L/2. However the highest
power of a general bodiless number will be one greater than this due to the possibility of
multiplying this even product by the element of grade 1. Hence pmax = Hn+1L/2.
Substituting s_2 for Xe, s for Xo, and Hn+1L/2 for p into the formula for the pth power, and noting
that the term involving the power of s_2 only is zero, leads to the following:

The maximum non-zero power of a bodiless Grassmann number in a space of odd dimension n is
equal to Hn+1L/2 and may be computed from the formula

X^pmax == X^HHn+1L/2L == HHn + 1L/2L s_2^HHn-1L/2L Ô s    Hn oddL    9.3
A formula which gives the highest power for both even and odd spaces is

pmax == H1/4L H2 n + 1 - H-1L^nL    9.4
The following table gives the maximum non-zero power of a bodiless Grassmann number and the
formula for it for spaces up to dimension 8.

n   pmax   X^pmax
1   1      s
2   1      s + s_2
3   2      2 s Ô s_2
4   2      H2 s + s_2L Ô s_2
5   3      3 s Ô s_2 Ô s_2
6   3      H3 s + s_2L Ô s_2 Ô s_2
7   4      4 s Ô s_2 Ô s_2 Ô s_2
8   4      H4 s + s_2L Ô s_2 Ô s_2 Ô s_2
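The table, and formula 9.4 behind it, can be checked by brute force: build a generic bodiless number, wedge it with itself until the result vanishes, and compare the count with (2n + 1 - (-1)^n)/4. The Python sketch below does this (an illustrative check, not the package's GrassmannPower machinery); distinct prime coefficients are used so that no accidental numeric cancellation masks a structurally non-zero power.

```python
from itertools import chain, combinations

def wedge(X, Y):
    """Exterior product on {sorted basis-index tuple: coefficient} dicts."""
    Z = {}
    for a, ca in X.items():
        for b, cb in Y.items():
            if set(a) & set(b):
                continue
            m, sign = list(a + b), 1
            for i in range(len(m)):
                for j in range(i + 1, len(m)):
                    if m[i] > m[j]:
                        sign = -sign
            k = tuple(sorted(m))
            Z[k] = Z.get(k, 0) + sign * ca * cb
    return {k: v for k, v in Z.items() if v != 0}

def primes(k):
    """First k primes, by trial division."""
    ps, n = [], 2
    while len(ps) < k:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

def generic_soul(n):
    """A bodiless number in n-space with one term per basis monomial;
    distinct prime coefficients rule out accidental cancellation."""
    keys = list(chain.from_iterable(
        combinations(range(1, n + 1), g) for g in range(1, n + 1)))
    return dict(zip(keys, primes(len(keys))))

def highest_power(X):
    """Largest p for which the exterior power X^p is non-zero."""
    P, p = dict(X), 1
    while True:
        P = wedge(P, X)
        if not P:
            return p
        p += 1

for n in range(1, 7):
    assert highest_power(generic_soul(n)) == (2 * n + 1 - (-1) ** n) // 4
```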
&3 ; X3 = Soul@CreateGrassmannNumber@xDD
e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3
8e1 x1 + e2 x2 + e3 x3 , x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 <
%@X3 Ô X3 ã 2 s Ô s_2D
True
&4 ; X4 = Soul@CreateGrassmannNumber@xDD
e1 x1 + e2 x2 + e3 x3 + e4 x4 + x5 e1 Ô e2 + x6 e1 Ô e3 +
x7 e1 Ô e4 + x8 e2 Ô e3 + x9 e2 Ô e4 + x10 e3 Ô e4 + x11 e1 Ô e2 Ô e3 +
x12 e1 Ô e2 Ô e4 + x13 e1 Ô e3 Ô e4 + x14 e2 Ô e3 Ô e4 + x15 e1 Ô e2 Ô e3 Ô e4
8e1 x1 + e2 x2 + e3 x3 + e4 x4 ,
x5 e1 Ô e2 + x6 e1 Ô e3 + x7 e1 Ô e4 + x8 e2 Ô e3 + x9 e2 Ô e4 + x10 e3 Ô e4 <
%@X4 Ô X4 ã H2 s + s_2L Ô s_2D
True
&3 ; X = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3
Finding an inverse of X is equivalent to finding the Grassmann number A such that XÔAã1 or AÔ
Xã1. In what follows we shall show that A commutes with X and therefore is a unique inverse
which we may call the inverse of X. To find A we need to solve the equation XÔA-1ã0. First we
create a general Grassmann number for A.
A = CreateGrassmannNumber@aD
a0 + e1 a1 + e2 a2 + e3 a3 + a4 e1 Ô e2 + a5 e1 Ô e3 + a6 e2 Ô e3 + a7 e1 Ô e2 Ô e3
%@X Ô A - 1D
-1 + a0 x0 + e1 Ha1 x0 + a0 x1 L + e2 Ha2 x0 + a0 x2 L +
e3 Ha3 x0 + a0 x3 L + Ha4 x0 + a2 x1 - a1 x2 + a0 x4 L e1 Ô e2 +
Ha5 x0 + a3 x1 - a1 x3 + a0 x5 L e1 Ô e3 +
Ha6 x0 + a3 x2 - a2 x3 + a0 x6 L e2 Ô e3 +
Ha7 x0 + a6 x1 - a5 x2 + a4 x3 + a3 x4 - a2 x5 + a1 x6 + a0 x7 L e1 Ô e2 Ô e3
To make this expression zero we need to solve for the ai in terms of the xi which make the
coefficients zero. This is most easily done with the GrassmannSolve function to be introduced
later. Here, to see more clearly the process, we do it directly with Mathematica's Solve function.
Xr = A ê. S
1/x0 - e1 x1/x0^2 - e2 x2/x0^2 - e3 x3/x0^2 - x4 e1 Ô e2/x0^2 - x5 e1 Ô e3/x0^2 -
x6 e2 Ô e3/x0^2 + H2 x3 x4 - 2 x2 x5 + 2 x1 x6 - x0 x7 L e1 Ô e2 Ô e3/x0^3
To verify that this is indeed the inverse and that the inverse commutes, we calculate the products of
X and Xr
To avoid having to rewrite the coefficients for the Solve equations, we can use the
GrassmannAlgebra function GrassmannSolve (discussed in Section 9.6) to get the same results.
To use GrassmannSolve we only need to enter a single undefined symbol (here we have used
Y) for the Grassmann number we are looking to solve for.
GrassmannSolve@X Ô Y ã 1, YD
::Y Ø 1/x0 - e1 x1/x0^2 - e2 x2/x0^2 - e3 x3/x0^2 - x4 e1 Ô e2/x0^2 - x5 e1 Ô e3/x0^2 -
x6 e2 Ô e3/x0^2 - H-2 x3 x4 + 2 x2 x5 - 2 x1 x6 + x0 x7 L e1 Ô e2 Ô e3/x0^3>>
It is evident from this example that (at least in 3-space), for a Grassmann number to have an
inverse, it must have a non-zero body.
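The role of the body is transparent in a numerical check: the pattern of the inverse is a (terminating) geometric series in the soul, as developed in the next subsection. The following Python sketch (illustrative helpers with exact rational arithmetic, not the package's algorithm) computes such an inverse and verifies that it commutes with the original number.

```python
from fractions import Fraction

def wedge(X, Y):
    """Exterior product on {sorted basis-index tuple: coefficient} dicts."""
    Z = {}
    for a, ca in X.items():
        for b, cb in Y.items():
            if set(a) & set(b):
                continue
            m, sign = list(a + b), 1
            for i in range(len(m)):
                for j in range(i + 1, len(m)):
                    if m[i] > m[j]:
                        sign = -sign
            k = tuple(sorted(m))
            Z[k] = Z.get(k, 0) + sign * ca * cb
    return {k: v for k, v in Z.items() if v != 0}

def scale(X, s):
    return {k: s * v for k, v in X.items()}

def add(X, Y):
    Z = dict(X)
    for k, v in Y.items():
        Z[k] = Z.get(k, 0) + v
    return {k: v for k, v in Z.items() if v != 0}

def inverse(X, n):
    """X^-1 = (1/x0)(1 - S/x0 + (S/x0)^2 - ...), where S is the soul;
    the series terminates because powers of S vanish beyond dimension n."""
    x0 = Fraction(X.get((), 0))
    if x0 == 0:
        raise ValueError("no inverse: the body is zero")
    S = {k: Fraction(v) for k, v in X.items() if k}
    inv = term = {(): 1 / x0}
    for _ in range(n):
        term = scale(wedge(term, S), -1 / x0)
        inv = add(inv, term)
    return inv

# A Grassmann number in 3-space with non-zero body:
X = {(): 2, (1,): 3, (2,): -1, (3,): 4, (1, 2): 5, (2, 3): 1, (1, 2, 3): 7}
Xi = inverse(X, 3)
assert wedge(X, Xi) == {(): 1} and wedge(Xi, X) == {(): 1}
```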
To calculate an inverse directly, we can use the function GrassmannInverse. To see the pattern
for inverses in general, we calculate the inverses of a general Grassmann number in 1, 2, 3, and 4-
spaces:
Ï Inverse in a 1-space
&1 ; X1 = CreateGrassmannNumber@xD
x0 + e1 x1
X1 r = GrassmannInverse@X1 D
1/x0 - e1 x1/x0^2
Ï Inverse in a 2-space
&2 ; X2 = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + x3 e1 Ô e2
X2 r = GrassmannInverse@X2 D
1/x0 - e1 x1/x0^2 - e2 x2/x0^2 - x3 e1 Ô e2/x0^2
Ï Inverse in a 3-space
&3 ; X3 = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3
X3 r = GrassmannInverse@X3 D
1/x0 - e1 x1/x0^2 - e2 x2/x0^2 - e3 x3/x0^2 - x4 e1 Ô e2/x0^2 - x5 e1 Ô e3/x0^2 -
x6 e2 Ô e3/x0^2 + HH-2 x2 x5 + 2 Hx3 x4 + x1 x6 LL/x0^3 - x7/x0^2L e1 Ô e2 Ô e3
Ï Inverse in a 4-space
&4 ; X4 = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + e3 x3 + e4 x4 + x5 e1 Ô e2 + x6 e1 Ô e3 +
x7 e1 Ô e4 + x8 e2 Ô e3 + x9 e2 Ô e4 + x10 e3 Ô e4 + x11 e1 Ô e2 Ô e3 +
x12 e1 Ô e2 Ô e4 + x13 e1 Ô e3 Ô e4 + x14 e2 Ô e3 Ô e4 + x15 e1 Ô e2 Ô e3 Ô e4
X4 r = GrassmannInverse@X4 D
1/x0 - e1 x1/x0^2 - e2 x2/x0^2 - e3 x3/x0^2 - e4 x4/x0^2 - x5 e1 Ô e2/x0^2 -
x6 e1 Ô e3/x0^2 - x7 e1 Ô e4/x0^2 - x8 e2 Ô e3/x0^2 - x9 e2 Ô e4/x0^2 - x10 e3 Ô e4/x0^2 +
HH-2 x2 x6 + 2 Hx3 x5 + x1 x8 LL/x0^3 - x11/x0^2L e1 Ô e2 Ô e3 +
HH-2 x2 x7 + 2 Hx4 x5 + x1 x9 LL/x0^3 - x12/x0^2L e1 Ô e2 Ô e4 + ...
H1 + bL Ô I1 - b + b^2 - b^3 + b^4 - ... ± b^qM == 1 ± b^Hq+1L
We can see this most easily by writing the expansion in the form below and noting that all the
intermediate terms cancel.
1 - b + b^2 - b^3 + b^4 - ... ± b^q
  + b - b^2 + b^3 - b^4 + ... ∓ b^q ± b^Hq+1L
Alternatively, consider the product in the reverse order I1 - b + b^2 - b^3 + b^4 - ... ± b^qM Ô H1 + bL.
Expanding this gives precisely the same result. Hence a Grassmann number and its inverse
commute.
Furthermore, it has been shown in the previous section that for a bodiless Grassmann number in a
space of n dimensions, the greatest non-zero power is pmax = H1/4L H2 n + 1 - H-1L^nL. Thus if q is
equal to pmax, b^Hq+1L is equal to zero, and the identity becomes:

H1 + bL Ô I1 - b + b^2 - ... ± b^pmax M == 1
If now we have a general Grassmann number X, say, we can write X as X == x0 H1 + bL, so that if Xs
is the soul of X, then

X == x0 H1 + bL == x0 + Xs,    b == Xs/x0

X^-1 == H1/x0L I1 - Xs/x0 + HXs/x0L^2 - HXs/x0L^3 + ... ± HXs/x0L^pmax M    9.6
We tabulate some examples for X == x0 + Xs, expressing the inverse X^-1 both in terms of Xs and x0
alone, and in terms of the odd and even components s and s_2 of Xs.
It is easy to verify the formulae of the table for dimensions 1 and 2 from the results above. The
formulae for spaces of dimension 3 and 4 are given below. But for a slight rearrangement of terms
they are identical to the results above.
8e1 x1 + e2 x2 + e3 x3 , x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 <
%@H1/x0L H1 - Xs/x0 + H2/x0^2L s Ô s_2L Ô Hx0 + Xs LD
8e1 x1 + e2 x2 + e3 x3 + e4 x4 ,
x5 e1 Ô e2 + x6 e1 Ô e3 + x7 e1 Ô e4 + x8 e2 Ô e3 + x9 e2 Ô e4 + x10 e3 Ô e4 <
%@H1/x0L H1 - Xs/x0 + H1/x0^2L H2 s + s_2L Ô s_2L Ô Hx0 + Xs LD
? GrassmannPower
GrassmannPower@X,nD computes the nth power of a Grassmann
number X in a space of the currently declared number of
dimensions. n may be any numeric or symbolic quantity.
Y = GrassmannPower@X, 3D
Z = GrassmannPower@X, -3D
1/x0^3 - 3 e1 x1/x0^4 - 3 e2 x2/x0^4 - 3 x3 e1 Ô e2/x0^4
%@8Y Ô Z, Z Ô Y<D
81, 1<
Ï General powers
GrassmannPower will also work with general elements that are not expressed in terms of basis
elements.
X = 1 + x + x Ô y + x Ô y Ô z;
%@8Y Ô Z, Z Ô Y<D
81, 1<
Ï Symbolic powers
We take a general Grassmann number in 2-space.
&2 ; X = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + x3 e1 Ô e2
Y = GrassmannPower@X, nD
x0^n + n e1 x0^H-1+nL x1 + n e2 x0^H-1+nL x2 + n x0^H-1+nL x3 e1 Ô e2
Z = GrassmannPower@X, -nD
x0^-n - n e1 x0^H-1-nL x1 - n e2 x0^H-1-nL x2 - n x0^H-1-nL x3 e1 Ô e2
Before we verify that these are indeed inverses of each other, we need to declare the power n to be
a scalar.
DeclareExtraScalars@nD;
%@8Y Ô Z, Z Ô Y<D
81, 1<
x0^n + n e1 x0^H-1+nL x1 + n e2 x0^H-1+nL x2 + n e3 x0^H-1+nL x3 +
n x0^H-1+nL x4 e1 Ô e2 + n x0^H-1+nL x5 e1 Ô e3 + n x0^H-1+nL x6 e2 Ô e3 +
Hx0^H-2+nL HHn - n^2L x2 x5 + H-n + n^2L Hx3 x4 + x1 x6 LL + n x0^H-1+nL x7 L e1 Ô e2 Ô e3
Z = GrassmannPower@X, -nD
x0^-n - n e1 x0^H-1-nL x1 - n e2 x0^H-1-nL x2 - n e3 x0^H-1-nL x3 -
n x0^H-1-nL x4 e1 Ô e2 - n x0^H-1-nL x5 e1 Ô e3 - n x0^H-1-nL x6 e2 Ô e3 +
Hx0^H-2-nL Hx3 Hn x4 + n^2 x4 L - n x2 x5 - n^2 x2 x5 + n x1 x6 + n^2 x1 x6 L -
n x0^H-1-nL x7 L e1 Ô e2 Ô e3
%@8Y Ô Z, Z Ô Y<D
81, 1<
&3 ; X = CreateGrassmannNumber@xD; Y = GrassmannPower@X, 1/2D
Sqrt@x0D + e1 x1/H2 Sqrt@x0DL + e2 x2/H2 Sqrt@x0DL + e3 x3/H2 Sqrt@x0DL +
x4 e1 Ô e2/H2 Sqrt@x0DL + x5 e1 Ô e3/H2 Sqrt@x0DL + x6 e2 Ô e3/H2 Sqrt@x0DL +
HH-x3 x4 + x2 x5 - x1 x6 L/H4 x0^H3/2LL + x7/H2 Sqrt@x0DLL e1 Ô e2 Ô e3
Simplify@%@X ã Y Ô YDD
True
x0^p + p e1 x0^H-1+pL x1 + p e2 x0^H-1+pL x2 + p x0^H-1+pL x3 e1 Ô e2
Z = GrassmannPower@X, -pD
x0^-p - p e1 x0^H-1-pL x1 - p e2 x0^H-1-pL x2 - p x0^H-1-pL x3 e1 Ô e2
%@8Y Ô Z, Z Ô Y<D
81, 1<
Z = a - b - 1 + Hb + 3 - 5 aL e1 + Hd - 2 c + 6L e3 + He + 4 aL e1 Ô e2 +
Hc + 2 + 2 gL e2 Ô e3 + H-1 + e - 7 g + bL e1 Ô e2 Ô e3 ;
We can calculate the values of the parameters by using the GrassmannAlgebra function
GrassmannScalarSolve.
GrassmannScalarSolve@Z ã 0D
::a Ø 1/2, b Ø -1/2, c Ø -1, d Ø -8, e Ø -2, g Ø -1/2>>
GrassmannScalarSolve will also work for several equations and for general symbols (other
than basis elements).
Z1 = 8a H-xL + b y Ô H-2 xL + f == c x + d x Ô y,
H7 - aL y == H13 - dL y Ô x - f<;
GrassmannScalarSolve@Z1 D
::a Ø 7, c Ø -7, d Ø 13, b Ø 13/2, f Ø 0>>
? GrassmannScalarSolve
GrassmannScalarSolve@eqnsD attempts to find the values of those
HdeclaredL scalar variables which make the equations True.
GrassmannScalarSolve@eqns,scalarsD attempts to find the
values of the scalars which make the equations True. If
not already in the DeclaredScalars list the scalars will
be added to it. GrassmannScalarSolve@eqns,scalars,elimsD
attempts to find the values of the scalars which make the
equations True while eliminating the scalars elims. If
the equations are not fully solvable, GrassmannScalarSolve
will still find the values of the scalar variables which
reduce the number of terms in the equations as much as
possible, and will additionally return the reduced equations.
? GrassmannSolve
GrassmannSolve@eqns,varsD attempts to solve an equation or set
of equations for the Grassmann number variables vars. If
the equations are not fully solvable, GrassmannSolve will
still find the values of the Grassmann number variables
which reduce the number of terms in the equations as much as
possible, and will additionally return the reduced equations.
Suppose first that we have an equation involving an unknown Grassmann number, A, say. For
example if X is a general number in 3-space, and we want to find its inverse A as we did in the
Section 9.5 above, we can solve the following equation for A:
&3 ; X = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3
R = GrassmannSolve@X Ô A ã 1, AD
1 e1 x1 e2 x2 e3 x3 x4 e1 Ô e2 x5 e1 Ô e3
::A Ø - - - - - -
x0 x20 x20 x20 x20 x20
x6 e2 Ô e3 H-2 x3 x4 + 2 x2 x5 - 2 x1 x6 + x0 x7 L e1 Ô e2 Ô e3
- >>
x20 x30
%@X Ô A ã 1 ê. RD
8True<
In general, GrassmannSolve can solve the same sorts of equations that Mathematica's inbuilt
Solve routines can deal with because it uses Solve as its main calculation engine. In particular, it
can handle powers of the unknowns. Further, it does not require the equations necessarily to be
expressed in terms of basis elements. For example, we can take a quadratic equation in a:
Q = a Ô a + 2 H1 + x + x Ô yL Ô a + H1 + 2 x + 2 x Ô yL ã 0;
S = GrassmannSolve@Q, aD
88a Ø -1 + x C1 + C2 x Ô y<<
Here there are two arbitrary parameters C1 and C2 introduced by the GrassmannSolve function.
(C is a symbol reserved by Mathematica for the generation of unknown coefficients in the solution
of equations.)
Now test whether a is indeed a solution to the equation:
%@Q ê. SD
8True<
Note that here we have a double infinity of solutions, parametrized by the pair {C1 ,C2 }. This
means that for any values of C1 and C2 , S is a solution. For example for various C1 and C2 :
GrassmannSolve can also take several equations in several unknowns. As an example we take
the following pair of equations:
Q = 8H1 + xL Ô b + x Ô y Ô a ã 3 + 2 x Ô y,
H1 - yL Ô a Ô a + H2 + y + x + 5 x Ô yL ã 7 - 5 x<;
%@Q ê. SD
88True, True<, 8True, True<<
Note that GrassmannAlgebra assumes that all other symbols (here, x and y) are 1-elements. We
can add the symbol x to the list of Grassmann numbers to be solved for.
::a Ø ..., b Ø -18/H-11 + C2^2L + y H2 - C2 - 108 C1/H-11 + C2^2L^2 + 12/H-11 + C2^2L - 6 C2/H-11 + C2^2LL,
x Ø y C1 + H1/6L H5 - C2^2L>, :a Ø y C3 , b Ø 18/11 + 203 y/121, x Ø 5/6 - 31 y/36>>
6 11 121 6 36
Again we get two solutions, but this time involving some arbitrary scalar constants.
%@Q ê. SD êê Simplify
88True, True<, 8True, True<<
X ã YÔZ
or to the equation:
X ã ZÔY
To distinguish these two possibilities, we call the solution Z to the first equation XãYÔZ the left
exterior quotient of X by Y because the denominator Y multiplies the quotient Z from the left.
Correspondingly, we call the solution Z to the second equation XãZÔY the right exterior quotient
of X by Y because the denominator Y multiplies the quotient Z from the right.
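When Y is invertible, both quotients reduce to products with the inverse: the left quotient is Y^-1 Ô X and the right quotient is X Ô Y^-1. This can be checked numerically with the sketch helpers used earlier (illustrative re-implementations with exact rational arithmetic, not the package's GrassmannSolve machinery).

```python
from fractions import Fraction

def wedge(X, Y):
    """Exterior product on {sorted basis-index tuple: coefficient} dicts."""
    Z = {}
    for a, ca in X.items():
        for b, cb in Y.items():
            if set(a) & set(b):
                continue
            m, sign = list(a + b), 1
            for i in range(len(m)):
                for j in range(i + 1, len(m)):
                    if m[i] > m[j]:
                        sign = -sign
            k = tuple(sorted(m))
            Z[k] = Z.get(k, 0) + sign * ca * cb
    return {k: v for k, v in Z.items() if v != 0}

def scale(X, s):
    return {k: s * v for k, v in X.items()}

def add(X, Y):
    Z = dict(X)
    for k, v in Y.items():
        Z[k] = Z.get(k, 0) + v
    return {k: v for k, v in Z.items() if v != 0}

def inverse(X, n):
    """Terminating geometric series for the inverse (non-zero body)."""
    x0 = Fraction(X.get((), 0))
    S = {k: Fraction(v) for k, v in X.items() if k}
    inv = term = {(): 1 / x0}
    for _ in range(n):
        term = scale(wedge(term, S), -1 / x0)
        inv = add(inv, term)
    return inv

# Left and right exterior quotients of X by an invertible Y in 2-space:
Y = {(): 2, (1,): 1, (2,): -3, (1, 2): 5}
X = {(): 4, (1,): 2, (2,): 1, (1, 2): -1}
Yi = inverse(Y, 2)
Zl = wedge(Yi, X)    # left quotient:  X == Y ^ Zl
Zr = wedge(X, Yi)    # right quotient: X == Zr ^ Y
assert wedge(Y, Zl) == X
assert wedge(Zr, Y) == X
```

Note that Zl and Zr differ in general, exactly as the two GrassmannSolve results below differ in their e1 Ô e2 terms.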
To solve for Z in either case, we can use the GrassmannSolve function. For example if X and Y
are general elements in a 2-space, the left exterior quotient is obtained as:
GrassmannSolve@X ã Y Ô Z, 8Z<D
::Z Ø x0/y0 - e1 H-x1 y0 + x0 y1 L/y0^2 -
e2 H-x2 y0 + x0 y2 L/y0^2 - H-x3 y0 + x2 y1 - x1 y2 + x0 y3 L e1 Ô e2/y0^2>>
GrassmannSolve@X ã Z Ô Y, 8Z<D
::Z Ø x0/y0 - e1 H-x1 y0 + x0 y1 L/y0^2 -
e2 H-x2 y0 + x0 y2 L/y0^2 - H-x3 y0 - x2 y1 + x1 y2 + x0 y3 L e1 Ô e2/y0^2>>
? LeftGrassmannDivide
LeftGrassmannDivide@X,YD calculates
the Grassmann number Z such that X == YÔZ.
? RightGrassmannDivide
RightGrassmannDivide@X,YD calculates
the Grassmann number Z such that X == ZÔY.
A shorthand infix notation for the division operation is provided by the down-vector symbols ⇃ (LeftGrassmannDivide) and ⇂ (RightGrassmannDivide). The previous examples would then take the form:
X ⇃ Y
x0/y0 - e1 (-x1 y0 + x0 y1)/y0² - e2 (-x2 y0 + x0 y2)/y0²
  - (-x3 y0 + x2 y1 - x1 y2 + x0 y3) e1 ∧ e2/y0²

X ⇂ Y
x0/y0 - e1 (-x1 y0 + x0 y1)/y0² - e2 (-x2 y0 + x0 y2)/y0²
  - (-x3 y0 - x2 y1 + x1 y2 + x0 y3) e1 ∧ e2/y0²
{1 ⇃ X, 1 ⇂ X}
{1/x0 - e1 x1/x0² - e2 x2/x0² - x3 e1 ∧ e2/x0²,
 1/x0 - e1 x1/x0² - e2 x2/x0² - x3 e1 ∧ e2/x0²}
Ï Division by a scalar
Division by a scalar simply divides each component of the Grassmann number by the scalar, as expected.

{X ⇃ a, X ⇂ a}
{x0/a + e1 x1/a + e2 x2/a + x3 e1 ∧ e2/a,
 x0/a + e1 x1/a + e2 x2/a + x3 e1 ∧ e2/a}
{X ⇃ X, X ⇂ X}
{1, 1}

{(a X) ⇃ X, X ⇂ (a X)}
{a, 1/a}

{X ⇃ (1 + e1 ∧ e2), X ⇂ (1 + e1 ∧ e2)}
{x0 + e1 x1 + e2 x2 + (-x0 + x3) e1 ∧ e2,
 x0 + e1 x1 + e2 x2 + (-x0 + x3) e1 ∧ e2}
(x ∧ y) ⇃ x
y + x C1 + C2 x ∧ y

The quotient is returned as a linear combination of elements containing arbitrary scalar constants C1 and C2. To see that this is indeed a left quotient, we can multiply it by the denominator from the left and see that we get the numerator.

%[x ∧ (y + x C1 + C2 x ∧ y)]
x ∧ y

We get a slightly different result for the right quotient, which we verify by multiplying from the right.

(x ∧ y) ⇂ x
-y + x C1 + C2 x ∧ y

%[(-y + x C1 + C2 x ∧ y) ∧ x]
x ∧ y
¯!3 ; (x ∧ y ∧ z) ⇃ (x - 2 y + 3 z + x ∧ z)
z (3/2 - 9 C1/2 - 3 C2 - 3 C3/2) + x (1/2 - 3 C1/2 - C2 - C3/2) +
  y (-1 + 3 C1 + 2 C2 + C3) + C1 x ∧ y + C2 x ∧ z + C3 y ∧ z + C4 x ∧ y ∧ z
In some circumstances we may not want the most general result. Say, for example, we knew that
the result we wanted had to be a 1-element. We can use the GrassmannAlgebra function
ExtractGrade.
ExtractGrade[1][(x ∧ y ∧ z) ⇃ (x - 2 y + 3 z + x ∧ z)]
z (3/2 - 9 C1/2 - 3 C2 - 3 C3/2) +
  x (1/2 - 3 C1/2 - C2 - C3/2) + y (-1 + 3 C1 + 2 C2 + C3)
Should there be insufficient information to determine a result, the quotient will be returned unchanged. For example:

(x ∧ y) ⇃ z
(x ∧ y) ⇃ z
? GrassmannFactor
GrassmannFactor[X,F] attempts to factorize the Grassmann number X
  into the form given by F. F must be an expression involving the
  symbols of X, together with sufficient scalar coefficients whose
  values are to be determined. GrassmannFactor[X,F,S] attempts to
  factorize the Grassmann number X by determining the scalars S.
  If the scalars S are not already declared as scalars, they will
  be automatically declared. If no factorization can be effected
  in the form given, the original expression will be returned.
¯!2 ; X = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + x3 e1 ∧ e2

Xf = GrassmannFactor[X,
  (a + b e1 + c e2) ∧ (f + g e1 + h e2), {a, b, c, f, g, h}]
(x0/f + e2 (-h x0 + f x2)/f² + e1 (-h x0 x1 + f x1 x2 + f x0 x3)/(f² x2)) ∧
  (f + h e2 + e1 (h x1 - f x3)/x2)
This factorization is valid for any values of the scalars {a,b,c,f,g,h} which appear in the
result, as we can verify by applying GrassmannSimplify to the result.
%[Xf]
x0 + e1 x1 + e2 x2 + x3 e1 ∧ e2
¯!3 ; Y = CreateElement[y₂]
y1 e1 ∧ e2 + y2 e1 ∧ e3 + y3 e2 ∧ e3

Xf = GrassmannFactor[Y,
  (a e1 + b e2 + c e3) ∧ (d e1 + e e2 + f e3), {a, b, c, d, e, f}]
{(c e3 + e1 (y2 + c (-f y1 + e y2)/y3)/f + e2 (c e + y3)/f) ∧
   (e e2 + f e3 + e1 (-f y1 + e y2)/y3),
 (b e2 + e1 (y1 + b e y2/y3)/e - e3 y3/e) ∧ (e e2 + e e1 y2/y3)}

%[Xf]
{y1 e1 ∧ e2 + y2 e1 ∧ e3 + y3 e2 ∧ e3, y1 e1 ∧ e2 + y2 e1 ∧ e3 + y3 e2 ∧ e3}
¯!4 ; X = -14 w ∧ x ∧ y + 26 w ∧ x ∧ z + 11 w ∧ y ∧ z + 13 x ∧ y ∧ z;

Since every (n−1)-element in an n-space is simple, we know the 3-element can be factorized. Suppose also that we would like the factorization in the following form:

J = (x α1 + y α2) ∧ (y β2 + z β3) ∧ (z γ3 + w γ4)
(x α1 + y α2) ∧ (y β2 + z β3) ∧ (z γ3 + w γ4)
XJ = GrassmannFactor[X, J]
(x α1 + 11 y α1/26) ∧ (-14 y/(α1 γ4) + 26 z/(α1 γ4)) ∧ (w γ4 - 13 z γ4/14)

%[XJ]
-14 x ∧ y ∧ w + 13 x ∧ y ∧ z + 26 x ∧ z ∧ w + 11 y ∧ z ∧ w
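This factorization is easy to verify numerically. The sketch below (plain Python with exact rationals; the bitmask encoding of blades is my own, and α1 and γ4 are set to 1 since they cancel in the product) wedges the three 1-element factors together over the basis order x, y, z, w and recovers the four coefficients of X.

```python
from fractions import Fraction as F

def wedge_sign(m1, m2):
    """Reordering sign for blade bitmasks; 0 if a basis factor is shared."""
    if m1 & m2:
        return 0
    s, i = 1, 0
    while m2 >> i:
        if m2 >> i & 1 and bin(m1 >> (i + 1)).count("1") % 2:
            s = -s
        i += 1
    return s

def wedge(u, v):
    """Exterior product of multivectors stored as {blade_bitmask: coefficient}."""
    out = {}
    for m1, c1 in u.items():
        for m2, c2 in v.items():
            sg = wedge_sign(m1, m2)
            if sg:
                out[m1 | m2] = out.get(m1 | m2, 0) + sg * c1 * c2
    return {m: c for m, c in out.items() if c != 0}

x, y, z, w = 1, 2, 4, 8                  # bitmasks for the basis 1-elements
f1 = {x: F(1), y: F(11, 26)}             # x α1 + 11 y α1/26, with α1 = 1
f2 = {y: F(-14), z: F(26)}               # (-14 y + 26 z)/(α1 γ4), with α1 γ4 = 1
f3 = {w: F(1), z: F(-13, 14)}            # w γ4 - 13 z γ4/14, with γ4 = 1
XJ = wedge(wedge(f1, f2), f3)
assert XJ == {x | y | w: F(-14), x | y | z: F(13),
              x | z | w: F(26), y | z | w: F(11)}
```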
It has been shown in Section 9.5 that the degree pmax of the highest non-zero power of a Grassmann number in a space of n dimensions is given by pmax = (1/4)(2 n + 1 - (-1)^n).
If we expand the series up to the term in this power, it then becomes a formula for a function f of a Grassmann number in that space. The tables below give explicit expressions for a function f[X] of a Grassmann variable in spaces of dimensions 1 to 4.
n   pmax   f[X]
1   1      f[x0] + f'[x0] Xs
2   1      f[x0] + f'[x0] Xs
3   2      f[x0] + f'[x0] Xs + (1/2) f''[x0] Xs²
4   2      f[x0] + f'[x0] Xs + (1/2) f''[x0] Xs²
If we replace the square of the soul of X by its simpler formulation in terms of the components of X of the first and second grade, we get a computationally more convenient expression in the case of dimensions 3 and 4.
n   pmax   f[X]
1   1      f[x0] + f'[x0] Xs
2   1      f[x0] + f'[x0] Xs
3   2      f[x0] + f'[x0] Xs + f''[x0] s ∧ s₂
4   2      f[x0] + f'[x0] Xs + (1/2) f''[x0] (2 s + s₂) ∧ s₂

(Here s and s₂ denote the components of the soul Xs of grades 1 and 2.)
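These closed forms can be cross-checked numerically. The sketch below (plain Python, independent of the package; the bitmask blade encoding and helper names are my own) implements the exterior product in 3-space, checks that the soul is nilpotent with index pmax + 1 = 3, and sums the exponential series; its e1∧e2∧e3 coefficient then agrees with e^x0 (x3 x4 - x2 x5 + x1 x6 + x7).

```python
import math

def wedge_sign(m1, m2):
    """Reordering sign for blade bitmasks; 0 if a basis factor is shared."""
    if m1 & m2:
        return 0
    s, i = 1, 0
    while m2 >> i:
        if m2 >> i & 1 and bin(m1 >> (i + 1)).count("1") % 2:
            s = -s
        i += 1
    return s

def wedge(u, v):
    """Exterior product of multivectors stored as {blade_bitmask: coefficient}."""
    out = {}
    for m1, c1 in u.items():
        for m2, c2 in v.items():
            sg = wedge_sign(m1, m2)
            if sg:
                out[m1 | m2] = out.get(m1 | m2, 0.0) + sg * c1 * c2
    return out

def grassmann_exp(x):
    """exp(X) = e^body * (1 + Xs + Xs^2/2! + ...); the series terminates
    because the soul Xs is nilpotent (pmax = 2 in 3-space)."""
    body = x.get(0, 0.0)
    soul = {m: c for m, c in x.items() if m != 0}
    total, term, p = {0: 1.0}, {0: 1.0}, 1
    while True:
        term = wedge(term, soul)
        if not any(abs(c) > 1e-12 for c in term.values()):
            break
        for m, c in term.items():
            total[m] = total.get(m, 0.0) + c / math.factorial(p)
        p += 1
    return {m: math.exp(body) * c for m, c in total.items()}

# X = x0 + x1 e1 + x2 e2 + x3 e3 + x4 e1^e2 + x5 e1^e3 + x6 e2^e3 + x7 e1^e2^e3
x0, x1, x2, x3, x4, x5, x6, x7 = 0.3, 0.5, -1.0, 2.0, 0.7, 0.25, -0.6, 1.1
X = {0: x0, 1: x1, 2: x2, 4: x3, 3: x4, 5: x5, 6: x6, 7: x7}

# pmax = (2n + 1 - (-1)^n)/4 = 2 in 3 dimensions: the soul cubes to zero
soul = {m: c for m, c in X.items() if m != 0}
cube = wedge(wedge(soul, soul), soul)
assert all(abs(c) < 1e-12 for c in cube.values())

eX = grassmann_exp(X)
assert abs(eX[7] - math.exp(x0) * (x3*x4 - x2*x5 + x1*x6 + x7)) < 1e-9
```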
? GrassmannFunctionForm
GrassmannFunctionForm[{f[x],x},Xb,Xs] expresses the function f[x]
  of a Grassmann number in a space of the currently declared
  number of dimensions in terms of the symbols Xb and Xs, where Xb
  represents the body, and Xs represents the soul of the number.
  GrassmannFunctionForm[{f[x,y,z,...],{x,y,z,...}},{Xb,Yb,Zb,...},
  {Xs,Ys,Zs,...}] expresses the function f[x,y,z,...]
  of Grassmann numbers x,y,z,... in terms of the symbols
  Xb,Yb,Zb,... and Xs,Ys,Zs,..., where Xb,Yb,Zb,... represent the
  bodies, and Xs,Ys,Zs,... represent the souls of the numbers.
  GrassmannFunctionForm[f[Xb,Xs]] expresses the function f[X] of
  a Grassmann number X = Xb + Xs in a space of the currently declared
  number of dimensions in terms of body and soul symbols Xb and
  Xs. GrassmannFunctionForm[f[{Xb,Yb,Zb,...},{Xs,Ys,Zs,...}]]
  is equivalent to the second form above.
? GrassmannFunction
GrassmannFunction[{f[x],x},X] or GrassmannFunction[f[X]] computes
  the function f of a single Grassmann number X in a space of
  the currently declared number of dimensions. In the first
  form f[x] may be any formula; in the second, f must be a pure
  function. GrassmannFunction[{f[x,y,z,..],{x,y,z,..}},{X,Y,Z,..}]
  or GrassmannFunction[f[X,Y,Z,..]] computes the function
  f of Grassmann numbers X,Y,Z,... In the first form
  f[x,y,z,..] may be any formula (with xi corresponding
  to Xi); in the second, f must be a pure function. In
  both these forms the result of the function may be
  dependent on the ordering of the arguments X,Y,Z,...
In the sections below we explore the GrassmannFunction operation and give some examples.
We will work in a 3-space with the general Grassmann number X.
¯!3 ; X = CreateGrassmannNumber[x]
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 ∧ e2 + x5 e1 ∧ e3 + x6 e2 ∧ e3 + x7 e1 ∧ e2 ∧ e3
GrassmannFunction[X^a]
x0^a + a x0^(a-1) e1 x1 + a x0^(a-1) e2 x2 + a x0^(a-1) e3 x3 +
  a x0^(a-1) x4 e1 ∧ e2 + a x0^(a-1) x5 e1 ∧ e3 + a x0^(a-1) x6 e2 ∧ e3 +
  (x0^(a-2) ((a - a²) x2 x5 + (-a + a²) (x3 x4 + x1 x6)) + a x0^(a-1) x7) e1 ∧ e2 ∧ e3
GrassmannFunction[X^(-1)]
1/x0 - e1 x1/x0² - e2 x2/x0² - e3 x3/x0² - x4 e1 ∧ e2/x0² - x5 e1 ∧ e3/x0² -
  x6 e2 ∧ e3/x0² + ((-2 x2 x5 + 2 (x3 x4 + x1 x6))/x0³ - x7/x0²) e1 ∧ e2 ∧ e3
Ï Exponential functions
Here is the exponential of a general Grassmann number in 3-space.
expX = GrassmannFunction[Exp[X]]
ℯ^x0 + ℯ^x0 e1 x1 + ℯ^x0 e2 x2 + ℯ^x0 e3 x3 + ℯ^x0 x4 e1 ∧ e2 + ℯ^x0 x5 e1 ∧ e3 +
  ℯ^x0 x6 e2 ∧ e3 + ℯ^x0 (x3 x4 - x2 x5 + x1 x6 + x7) e1 ∧ e2 ∧ e3

expXn = GrassmannFunction[Exp[-X]]
Ï Logarithmic functions
The logarithm of X is:

GrassmannFunction[Log[X]]
Log[x0] + e1 x1/x0 + e2 x2/x0 + e3 x3/x0 + x4 e1 ∧ e2/x0 + x5 e1 ∧ e3/x0 +
  x6 e2 ∧ e3/x0 + ((-x3 x4 + x2 x5 - x1 x6)/x0² + x7/x0) e1 ∧ e2 ∧ e3
We expect the logarithm of the previously calculated exponential to be the original number X.
We explore the classic trigonometric relation Sin[X]² + Cos[X]² == 1 and show that it remains true even for Grassmann numbers.
tanX = GrassmannFunction[Tan[X]]

sinX = GrassmannFunction[Sin[X]]
Sin[x0] + Cos[x0] e1 x1 + Cos[x0] e2 x2 + Cos[x0] e3 x3 +
  Cos[x0] x4 e1 ∧ e2 + Cos[x0] x5 e1 ∧ e3 + Cos[x0] x6 e2 ∧ e3 +
  (Sin[x0] (-x3 x4 + x2 x5 - x1 x6) + Cos[x0] x7) e1 ∧ e2 ∧ e3
secX = GrassmannFunction[Sec[X]]
Sec[x0] + Sec[x0] e1 x1 Tan[x0] +
  Sec[x0] e2 x2 Tan[x0] + Sec[x0] e3 x3 Tan[x0] +
  Sec[x0] x4 Tan[x0] e1 ∧ e2 + Sec[x0] x5 Tan[x0] e1 ∧ e3 +
  Sec[x0] x6 Tan[x0] e2 ∧ e3 + (Sec[x0]³ (x3 x4 - x2 x5 + x1 x6) +
  Sec[x0] (x7 Tan[x0] + (x3 x4 - x2 x5 + x1 x6) Tan[x0]²)) e1 ∧ e2 ∧ e3
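The identity can be confirmed numerically with a small standalone sketch (plain Python, not the package; the bitmask blade encoding and helper names are my own) that builds f(X) = f(x0) + f'(x0) Xs + (1/2) f''(x0) Xs∧Xs, the body-and-soul form for 3-space from the tables above, and checks that Sin[X]² + Cos[X]² reduces to 1.

```python
import math

def wedge_sign(m1, m2):
    """Reordering sign for blade bitmasks; 0 if a basis factor is shared."""
    if m1 & m2:
        return 0
    s, i = 1, 0
    while m2 >> i:
        if m2 >> i & 1 and bin(m1 >> (i + 1)).count("1") % 2:
            s = -s
        i += 1
    return s

def wedge(u, v):
    """Exterior product of multivectors stored as {blade_bitmask: coefficient}."""
    out = {}
    for m1, c1 in u.items():
        for m2, c2 in v.items():
            sg = wedge_sign(m1, m2)
            if sg:
                out[m1 | m2] = out.get(m1 | m2, 0.0) + sg * c1 * c2
    return out

def apply_function(derivs, x):
    """f(X) = f(x0) + f'(x0) Xs + (1/2) f''(x0) Xs^2, the 3-space form
    (pmax = 2); derivs = [f, f', f''] as scalar functions."""
    x0 = x.get(0, 0.0)
    soul = {m: c for m, c in x.items() if m != 0}
    out = {0: derivs[0](x0)}
    for m, c in soul.items():
        out[m] = out.get(m, 0.0) + derivs[1](x0) * c
    for m, c in wedge(soul, soul).items():
        out[m] = out.get(m, 0.0) + 0.5 * derivs[2](x0) * c
    return out

X = {0: 0.4, 1: 0.8, 2: -0.3, 4: 1.2, 3: 0.5, 5: -0.9, 6: 0.35, 7: 2.0}
sinX = apply_function([math.sin, math.cos, lambda t: -math.sin(t)], X)
cosX = apply_function([math.cos, lambda t: -math.sin(t), lambda t: -math.cos(t)], X)

total = wedge(sinX, sinX)
for m, c in wedge(cosX, cosX).items():
    total[m] = total.get(m, 0.0) + c

assert abs(total[0] - 1.0) < 1e-12          # scalar part is 1
assert all(abs(c) < 1e-12 for m, c in total.items() if m != 0)
```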
%[X ∧ Y]
x0 y0 + e1 (x1 y0 + x0 y1) +
  e2 (x2 y0 + x0 y2) + (x3 y0 - x2 y1 + x1 y2 + x0 y3) e1 ∧ e2
However, to allow for the two different exterior products that X and Y can have together, the variables in the second argument {X,Y} may be interchanged. The parameters x and y of the first argument {x y,{x,y}} are simply dummy variables and can stay the same. Thus Y ∧ X may be obtained by evaluating:
%[Y ∧ X]
x0 y0 + e1 (x1 y0 + x0 y1) +
  e2 (x2 y0 + x0 y2) + (x3 y0 + x2 y1 - x1 y2 + x0 y3) e1 ∧ e2
expX = GrassmannFunction[Exp[X]]
ℯ^x0 + ℯ^x0 e1 x1 + ℯ^x0 e2 x2 + ℯ^x0 x3 e1 ∧ e2

expY = GrassmannFunction[Exp[Y]]
ℯ^y0 + ℯ^y0 e1 y1 + ℯ^y0 e2 y2 + ℯ^y0 y3 e1 ∧ e2
If we compute their product using the exterior product operation we observe that the order is important: a different result is obtained when the order is reversed.

%[expX ∧ expY]
%[expY ∧ expX]
We can compute these two results also by using GrassmannFunction. Note that the second
argument in the second computation has its components in reverse order.
On the other hand, if we wish to compute the exponential of the sum X + Y we need to use
GrassmannFunction, because there can be two possibilities for its definition. That is, although
Exp[x+y] appears to be independent of the order of its arguments, an interchange in the 'ordering'
argument from {X,Y} to {Y,X} will change the signs in the result.
In sum: A function of several Grassmann variables may have different results depending on the
ordering of the variables in the function, even if their usual (scalar) form is commutative.
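The order sensitivity can be seen numerically in 2-space, where pmax = 1 gives the closed form Exp[X] = ℯ^x0 (1 + Xs). The sketch below (plain Python, with my own coefficient-list encoding over the basis {1, e1, e2, e1∧e2}) shows that expX ∧ expY and expY ∧ expX agree in grades 0 and 1 but differ in their e1∧e2 coefficient by 2 ℯ^(x0+y0) (x1 y2 - x2 y1).

```python
import math

def wedge(a, b):
    """Exterior product in 2-space on coefficient lists [c0, c1, c2, c12]."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return [a0*b0, a0*b1 + a1*b0, a0*b2 + a2*b0,
            a0*b3 + a1*b2 - a2*b1 + a3*b0]

def gexp(x):
    """Exp of a Grassmann number in 2-space: e^body * (1 + soul)."""
    s = math.exp(x[0])
    return [s, s*x[1], s*x[2], s*x[3]]

X = [0.2, 1.0, -0.5, 0.7]
Y = [-0.1, 0.4, 2.0, 0.3]
XY, YX = wedge(gexp(X), gexp(Y)), wedge(gexp(Y), gexp(X))

assert XY[:3] == YX[:3]                      # grades 0 and 1 agree
diff = XY[3] - YX[3]
expected = 2 * math.exp(X[0] + Y[0]) * (X[1]*Y[2] - X[2]*Y[1])
assert abs(diff - expected) < 1e-9           # grade 2 differs by the cross term
```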
9.10 Summary
To be completed
10.1 Introduction
In this chapter we define and explore a new product which we call the generalized Grassmann product. The generalized Grassmann product was originally developed by the author in order to treat the hypercomplex and Clifford products of general elements in a succinct manner. The hypercomplex and Clifford products of general elements can lead to quite complex expressions; however, we will show that such expressions always reduce to simple sums of generalized Grassmann products. We discuss the hypercomplex product in the next chapter, and the Clifford product in Chapter 12.
The generalized Grassmann product is really a sequence of products indexed by a parameter, called
the order of the product. When the order is zero, the product reduces to the exterior product. When
the order is its maximum value, the product reduces to the interior product. In between, the
sequence of generalized Grassmann products makes a transition between the two.
For example, consider an exterior product of simple elements. It only takes one 1-element to be
common to both factors of the exterior product for the product to be zero. On the other end of the
sequence, even if one of the simple elements has all its 1-element factors in common with the other
simple element, their interior product is not zero. The elements of the sequence of generalized
products require progressively more common 1-elements in their factors for them to reduce to zero.
Specifically, if two simple elements contain a common factor of grade g, then their generalized
Grassmann products of order less than g are zero.
The generalized Grassmann product has another enticing property. Although the interior product of elements of different grade is not commutative, the generalized Grassmann product form of the interior product is commutative.
For brevity in the rest of this book, where no confusion will arise, we may call the generalized
Grassmann product simply the generalized product.
Writing α_m ⊛λ β_k for the generalized Grassmann product of order λ of an m-element α_m and a k-element β_k, and ∘ for the interior product, the defining formula is:

α_m ⊛λ β_k == Σ_{j=1}^{C(k,λ)} (α_m ∘ β_j) ∧ β'_j        [10.1]

β_k == β_1 ∧ β'_1 == β_2 ∧ β'_2 == …,   with each β_j of grade λ and each β'_j of grade k−λ

Here, β_k is expressed in all of the C(k,λ) essentially different arrangements of its 1-element factors into a λ-element and a (k−λ)-element. This pairwise decomposition of a simple exterior product is the same as that used in the Common Factor Theorem in Chapters 3 and 6.
The grade of a generalized Grassmann product α_m ⊛λ β_k is therefore m + k − 2λ, and, like the grades of the exterior and interior products in terms of which it is defined, is independent of the dimension of the underlying space.
Note the similarity to the form of the Interior Common Factor Theorem. However, in the definition
of the generalized product there is no requirement for the interior product on the right hand side to
be an inner product, since an exterior product has replaced the ordinary multiplication operator.
For brevity in the discussion that follows, we will assume, without explicitly drawing attention to the fact, that an element undergoing a factorization is necessarily simple.
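For simple elements built from orthonormal basis 1-elements, the definition can be implemented directly. The sketch below (plain Python; the bitmask blade representation and the assumption of an orthonormal metric are mine, not the package's generality) expands definition [10.1] for basis blades, and checks that order 0 recovers the exterior product, that the maximum order recovers the interior product, and that a common factor of grade 2 annihilates the products of orders 0 and 1.

```python
from itertools import combinations

def wedge_sign(m1, m2):
    """Sign of e_{m1} ^ e_{m2} for blade bitmasks; 0 if a factor is shared."""
    if m1 & m2:
        return 0
    s, i = 1, 0
    while m2 >> i:
        if m2 >> i & 1 and bin(m1 >> (i + 1)).count("1") % 2:
            s = -s
        i += 1
    return s

def interior(a, b):
    """Interior product a ∘ b of orthonormal blades with grade(b) <= grade(a):
    zero unless every factor of b occurs in a; otherwise remove them, with
    the sign fixed by a == b ^ rest."""
    if b & ~a:
        return 0, 0
    rest = a & ~b
    return wedge_sign(b, rest), rest

def generalized(a, b, lam):
    """Definition [10.1]: sum of (a ∘ bj) ^ rest over all decompositions
    of the blade b into a lam-element bj and its (k - lam)-element rest."""
    bits = [i for i in range(b.bit_length()) if b >> i & 1]
    out = {}
    for sub in combinations(bits, lam):
        bj = sum(1 << i for i in sub)
        rest = b & ~bj
        s_int, blade = interior(a, bj)
        s = wedge_sign(bj, rest) * s_int * wedge_sign(blade, rest)
        if s:
            out[blade | rest] = out.get(blade | rest, 0) + s
    return out

e = lambda *idx: sum(1 << (i - 1) for i in idx)

# Order 0 reduces to the exterior product ...
assert generalized(e(1, 2), e(3, 4), 0) == {e(1, 2, 3, 4): 1}
# ... and order Min[m, k] to the interior product:
s, blade = interior(e(1, 2, 3), e(1, 2))
assert generalized(e(1, 2, 3), e(1, 2), 2) == {blade: s}
# e1^e2^e3 and e1^e2^e4 share a common factor of grade 2, so their
# generalized products of orders 0 and 1 vanish, while order 2 survives:
assert generalized(e(1, 2, 3), e(1, 2, 4), 0) == {}
assert generalized(e(1, 2, 3), e(1, 2, 4), 1) == {}
assert generalized(e(1, 2, 3), e(1, 2, 4), 2) != {}
```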
Case 0 < λ < Min[m, k]: Reduction to exterior and interior products
When the order λ of the generalized product is greater than zero but less than the smaller of the grades of the factors in the product, the product may be expanded in terms of either factor (provided it is simple). The formula for expansion in terms of the first factor is similar to the definition [10.1], which expands in terms of the second factor.
α_m ⊛λ β_k == Σ_{i=1}^{C(m,λ)} α'_i ∧ (β_k ∘ α_i)        0 < λ < Min[m, k]        [10.3]

α_m == α_1 ∧ α'_1 == α_2 ∧ α'_2 == …,   with each α_i of grade λ and each α'_i of grade m−λ
In Section 10.3 we will prove that the product can actually be expanded in terms of both factors, and that this formula is an almost immediate consequence.
For completeness it should be remarked that [10.3] is also valid for λ equal to 0 or Min[m,k]. We shall include these special cases in the ranges for λ in subsequent formulas.
α_m ⊛λ β_k == α_m ∘ β_k        λ == k ≤ m
α_m ⊛λ β_k == β_k ∘ α_m        λ == m ≤ k        [10.4]
This is a particularly enticing property of the generalized product. If the interior product of two elements of unequal grade is non-zero, an interchange in the order of their factors will give an interior product which is zero: if an interior product of two factors is to be non-zero, it must have the factor of larger grade first. The interior product is non-commutative. By contrast, the generalized product form of the interior product of two elements of unequal grade is commutative, and equal to that interior product which is non-zero, whichever one it may be.

α_m ⊛k β_k == β_k ⊛k α_m == α_m ∘ β_k        k ≤ m
This result, in which a sum of terms is identically zero, is a source of many useful identities
between exterior and interior products. It will be explored in Section 10.4 below.
To show this, we start with the definition [10.1] of the generalized Grassmann product.
α_m ⊛λ β_k == Σ_{j=1}^{C(k,λ)} (α_m ∘ β_j) ∧ β'_j
From the Interior Common Factor Theorem we can write the interior product α_m ∘ β_j as:

α_m ∘ β_j == Σ_{i=1}^{C(m,λ)} (α_i ∘ β_j) α'_i
Substituting this in the above expression for the generalized product gives:

α_m ⊛λ β_k == Σ_{j=1}^{C(k,λ)} Σ_{i=1}^{C(m,λ)} (α_i ∘ β_j) α'_i ∧ β'_j
The expansion of the generalized product in terms of both factors α_m and β_k is thus given by:

α_m ⊛λ β_k == Σ_{i=1}^{C(m,λ)} Σ_{j=1}^{C(k,λ)} (α_i ∘ β_j) α'_i ∧ β'_j        0 ≤ λ ≤ Min[m, k]        [10.8]

α_m == α_1 ∧ α'_1 == α_2 ∧ α'_2 == …,   β_k == β_1 ∧ β'_1 == β_2 ∧ β'_2 == …
To show this, rewrite the symmetric form for the product of β_k with α_m, that is, β_k ⊛λ α_m. We then reverse the order of the exterior product to obtain the terms of α_m ⊛λ β_k, except possibly for a sign. The inner product is of course symmetric, and so can be written in either order.

β_k ⊛λ α_m == Σ_{j=1}^{C(k,λ)} Σ_{i=1}^{C(m,λ)} (β_j ∘ α_i) β'_j ∧ α'_i
           == Σ_{j=1}^{C(k,λ)} Σ_{i=1}^{C(m,λ)} (α_i ∘ β_j) (-1)^((m-λ)(k-λ)) α'_i ∧ β'_j
It is easy to see, by using the distributivity of the generalized product, that this relationship holds also for non-simple α_m and β_k.

α_m ⊛λ β_k == (-1)^((m-λ)(k-λ)) β_k ⊛λ α_m == (-1)^((m-λ)(k-λ)) Σ_{i=1}^{C(m,λ)} (β_k ∘ α_i) ∧ α'_i

Interchanging the order of the terms of the exterior product then gives the required alternative expression:

α_m ⊛λ β_k == Σ_{i=1}^{C(m,λ)} α'_i ∧ (β_k ∘ α_i)        0 ≤ λ ≤ Min[m, k]

α_m == α_1 ∧ α'_1 == α_2 ∧ α'_2 == …
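The quasi-commutativity sign can be verified exhaustively on basis blades. The sketch below (plain Python; the bitmask encoding of blades is my own, and an orthonormal metric is assumed) implements [10.1] for blades and checks α ⊛λ β == (-1)^((m-λ)(k-λ)) β ⊛λ α over random pairs in a 6-space.

```python
from itertools import combinations
from random import Random

def wedge_sign(m1, m2):
    """Sign of e_{m1} ^ e_{m2} for blade bitmasks; 0 if a factor is shared."""
    if m1 & m2:
        return 0
    s, i = 1, 0
    while m2 >> i:
        if m2 >> i & 1 and bin(m1 >> (i + 1)).count("1") % 2:
            s = -s
        i += 1
    return s

def interior(a, b):
    """a ∘ b for orthonormal blades with grade(b) <= grade(a): remove b's
    factors from a with the sign fixed by a == b ^ rest."""
    if b & ~a:
        return 0, 0
    rest = a & ~b
    return wedge_sign(b, rest), rest

def generalized(a, b, lam):
    """alpha ⊛λ beta per definition [10.1], summed over the C(k, λ)
    decompositions of the blade b."""
    bits = [i for i in range(b.bit_length()) if b >> i & 1]
    out = {}
    for sub in combinations(bits, lam):
        bj = sum(1 << i for i in sub)
        rest = b & ~bj
        s_int, blade = interior(a, bj)
        s = wedge_sign(bj, rest) * s_int * wedge_sign(blade, rest)
        if s:
            out[blade | rest] = out.get(blade | rest, 0) + s
    return out

rng = Random(1)
for _ in range(300):
    a = rng.randrange(1, 64)            # a random blade in 6-space
    b = rng.randrange(1, 64)
    m, k = bin(a).count("1"), bin(b).count("1")
    for lam in range(min(m, k) + 1):
        sign = (-1) ** ((m - lam) * (k - lam))
        ba = generalized(b, a, lam)
        assert generalized(a, b, lam) == {bl: sign * c for bl, c in ba.items()}
```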
GeneralizedProduct[λ][X, Y]

where λ is the order of the product and X and Y are the factors. On entering this expression into Mathematica it will display in the form:

X ⊛λ Y

Alternatively, you can click on the generalized product button □ ⊛□ □ on the GrassmannAlgebra palette, and then tab through the placeholders, entering the elements of the product as you go.
To enter multiple products, click repeatedly on the button. For example, to enter a concatenated product of four elements, click three times:
□ ⊛□ □ ⊛□ □ ⊛□ □

If you select this expression and enter it, you will see how the inherently binary product is grouped.

((□ ⊛□ □) ⊛□ □) ⊛□ □
Tabbing through the placeholders and entering factors and product orders, we might obtain something like:

((X ⊛4 Y) ⊛3 Z) ⊛2 W

FullForm[((X ⊛4 Y) ⊛1 Z) ⊛0 W]
GeneralizedProduct[0][GeneralizedProduct[1][GeneralizedProduct[4][X, Y], Z], W]
You can edit the expression at any stage to group the products in a different order.
GeneralizedProduct[4][X, GeneralizedProduct[2][GeneralizedProduct[3][Y, Z], W]]
Finally, you can also enter a generalized product sequentially by typing the first factor, then clicking on the product operator ⊛□ from the first column of the GrassmannAlgebra palette, then typing the second factor.
These sums, although appearing different because expressed in terms of interior products, are of
course equal, as has been shown in the previous section.
The GrassmannAlgebra function ToInteriorProducts will take any generalized product and
convert it to a sum involving exterior and interior products by expanding whichever factor leads to
the simplest expression. If the elements are given in symbolic grade-underscripted form,
ToInteriorProducts will first create a corresponding simple exterior product before
expanding.
For example, suppose the generalized product is given in symbolic form as α_m ⊛2 β_3. Applying ToInteriorProducts gives:

A = ToInteriorProducts[α_m ⊛2 β_3]
(α_m ∘ β1 ∧ β2) ∧ β3 - (α_m ∘ β1 ∧ β3) ∧ β2 + (α_m ∘ β2 ∧ β3) ∧ β1

B = ToInteriorProducts[α_4 ⊛2 β_k]
(β_k ∘ α1 ∧ α2) ∧ α3 ∧ α4 - (β_k ∘ α1 ∧ α3) ∧ α2 ∧ α4 + (β_k ∘ α1 ∧ α4) ∧ α2 ∧ α3 +
  (β_k ∘ α2 ∧ α3) ∧ α1 ∧ α4 - (β_k ∘ α2 ∧ α4) ∧ α1 ∧ α3 + (β_k ∘ α3 ∧ α4) ∧ α1 ∧ α2
Note that in the expansion A there are C(3,2) = 3 terms, while in expansion B there are C(4,2) = 6 terms.
If now we let m equal 4 and k equal 3, these two expressions become equal.
A1 = ToInteriorProducts[α_m ⊛2 β_3] /. α_m → ComposeSimpleForm[α_4]
(α1 ∧ α2 ∧ α3 ∧ α4 ∘ β1 ∧ β2) ∧ β3 -
  (α1 ∧ α2 ∧ α3 ∧ α4 ∘ β1 ∧ β3) ∧ β2 + (α1 ∧ α2 ∧ α3 ∧ α4 ∘ β2 ∧ β3) ∧ β1

B1 = ToInteriorProducts[α_4 ⊛2 β_k] /. β_k → ComposeSimpleForm[β_3]
(β1 ∧ β2 ∧ β3 ∘ α1 ∧ α2) ∧ α3 ∧ α4 - (β1 ∧ β2 ∧ β3 ∘ α1 ∧ α3) ∧ α2 ∧ α4 +
  (β1 ∧ β2 ∧ β3 ∘ α1 ∧ α4) ∧ α2 ∧ α3 + (β1 ∧ β2 ∧ β3 ∘ α2 ∧ α3) ∧ α1 ∧ α4 -
  (β1 ∧ β2 ∧ β3 ∘ α2 ∧ α4) ∧ α1 ∧ α3 + (β1 ∧ β2 ∧ β3 ∘ α3 ∧ α4) ∧ α1 ∧ α2
Although these expressions are equal, they appear different because of their expansion in terms of
interior products. To display them in the same form, we need to reduce the interior products to
inner (and exterior) products.
Consider the product of the previous section with m equal to 4 and k equal to 3. To explore this product we must first increase the dimension of the space to at least 4 (since it is zero in a space of lesser dimension). Then we convert any interior products to inner products by applying ToInnerProducts, and simplify the result with GrassmannExpandAndSimplify.
¯!4 ; A2 = ¯%[ToInnerProducts[A1]]
(α3 ∧ α4 ∘ β2 ∧ β3) α1 ∧ α2 ∧ β1 - (α3 ∧ α4 ∘ β1 ∧ β3) α1 ∧ α2 ∧ β2 +
(α3 ∧ α4 ∘ β1 ∧ β2) α1 ∧ α2 ∧ β3 - (α2 ∧ α4 ∘ β2 ∧ β3) α1 ∧ α3 ∧ β1 +
(α2 ∧ α4 ∘ β1 ∧ β3) α1 ∧ α3 ∧ β2 - (α2 ∧ α4 ∘ β1 ∧ β2) α1 ∧ α3 ∧ β3 +
(α2 ∧ α3 ∘ β2 ∧ β3) α1 ∧ α4 ∧ β1 - (α2 ∧ α3 ∘ β1 ∧ β3) α1 ∧ α4 ∧ β2 +
(α2 ∧ α3 ∘ β1 ∧ β2) α1 ∧ α4 ∧ β3 + (α1 ∧ α4 ∘ β2 ∧ β3) α2 ∧ α3 ∧ β1 -
(α1 ∧ α4 ∘ β1 ∧ β3) α2 ∧ α3 ∧ β2 + (α1 ∧ α4 ∘ β1 ∧ β2) α2 ∧ α3 ∧ β3 -
(α1 ∧ α3 ∘ β2 ∧ β3) α2 ∧ α4 ∧ β1 + (α1 ∧ α3 ∘ β1 ∧ β3) α2 ∧ α4 ∧ β2 -
(α1 ∧ α3 ∘ β1 ∧ β2) α2 ∧ α4 ∧ β3 + (α1 ∧ α2 ∘ β2 ∧ β3) α3 ∧ α4 ∧ β1 -
(α1 ∧ α2 ∘ β1 ∧ β3) α3 ∧ α4 ∧ β2 + (α1 ∧ α2 ∘ β1 ∧ β2) α3 ∧ α4 ∧ β3
B2 = ¯%[ToInnerProducts[B1]]
(α3 ∧ α4 ∘ β2 ∧ β3) α1 ∧ α2 ∧ β1 - (α3 ∧ α4 ∘ β1 ∧ β3) α1 ∧ α2 ∧ β2 +
(α3 ∧ α4 ∘ β1 ∧ β2) α1 ∧ α2 ∧ β3 - (α2 ∧ α4 ∘ β2 ∧ β3) α1 ∧ α3 ∧ β1 +
(α2 ∧ α4 ∘ β1 ∧ β3) α1 ∧ α3 ∧ β2 - (α2 ∧ α4 ∘ β1 ∧ β2) α1 ∧ α3 ∧ β3 +
(α2 ∧ α3 ∘ β2 ∧ β3) α1 ∧ α4 ∧ β1 - (α2 ∧ α3 ∘ β1 ∧ β3) α1 ∧ α4 ∧ β2 +
(α2 ∧ α3 ∘ β1 ∧ β2) α1 ∧ α4 ∧ β3 + (α1 ∧ α4 ∘ β2 ∧ β3) α2 ∧ α3 ∧ β1 -
(α1 ∧ α4 ∘ β1 ∧ β3) α2 ∧ α3 ∧ β2 + (α1 ∧ α4 ∘ β1 ∧ β2) α2 ∧ α3 ∧ β3 -
(α1 ∧ α3 ∘ β2 ∧ β3) α2 ∧ α4 ∧ β1 + (α1 ∧ α3 ∘ β1 ∧ β3) α2 ∧ α4 ∧ β2 -
(α1 ∧ α3 ∘ β1 ∧ β2) α2 ∧ α4 ∧ β3 + (α1 ∧ α2 ∘ β2 ∧ β3) α3 ∧ α4 ∧ β1 -
(α1 ∧ α2 ∘ β1 ∧ β3) α3 ∧ α4 ∧ β2 + (α1 ∧ α2 ∘ β1 ∧ β2) α3 ∧ α4 ∧ β3
By inspection we see that these two expressions are the same. Checking with Mathematica verifies
this.
A2 == B2
True
The identity of the forms A2 and B2 is an example of the expansion of a generalized product in
terms of either factor yielding the same result.
2009 9 3
Consider a generalized product a Û b of an arbitrary m-element a andGrassmann
a simple Algebra Book.nb
k-element b 555
m l k m k
expressed as a simple exterior product. We need to be specific about the grade k, so for this
example we choose k to be 3. We can allow m to be symbolic, but should assume it to be greater
than or equal to k in order to have the resulting formulas remain valid should we later substitute
specific values.
S = Spine[β_3]
{1, β1, β2, β3, β1 ∧ β2, β1 ∧ β3, β2 ∧ β3, β1 ∧ β2 ∧ β3}

Sc = Cospine[β_3]
We can now generate the list of products of the form (α_m ∘ s) ∧ c, where s is an element of S and c is the corresponding element of Sc, by threading these operations across the two lists.
T1 = Thread[Thread[α_m ∘ S] ∧ Sc]
{(α_m ∘ 1) ∧ β1 ∧ β2 ∧ β3, (α_m ∘ β1) ∧ β2 ∧ β3,
 (α_m ∘ β2) ∧ -(β1 ∧ β3), (α_m ∘ β3) ∧ β1 ∧ β2, (α_m ∘ β1 ∧ β2) ∧ β3,
 (α_m ∘ β1 ∧ β3) ∧ -β2, (α_m ∘ β2 ∧ β3) ∧ β1, (α_m ∘ β1 ∧ β2 ∧ β3) ∧ 1}
The collection of elements of this list which are of the same grade (g, say) will form the terms in
the sum of the generalized product of the same grade g. We can see the distribution of grades by
applying Grade to the list.
Grade[T1]
{3 + m, 1 + m, 1 + m, 1 + m, -1 + m, -1 + m, -1 + m, -3 + m}
We can select the terms of a given grade (using Select), add them together (using Plus), and tabulate these sums over the different grades g (using Table). We also reverse the list to get the grades in decreasing order.
T2 = Reverse[Table[Plus @@ Select[T1, GradeQ[g]], {g, m - 3, m + 3, 2}]]
{(α_m ∘ 1) ∧ β1 ∧ β2 ∧ β3,
 (α_m ∘ β2) ∧ -(β1 ∧ β3) + (α_m ∘ β1) ∧ β2 ∧ β3 + (α_m ∘ β3) ∧ β1 ∧ β2,
 (α_m ∘ β1 ∧ β2) ∧ β3 + (α_m ∘ β1 ∧ β3) ∧ -β2 + (α_m ∘ β2 ∧ β3) ∧ β1,
 (α_m ∘ β1 ∧ β2 ∧ β3) ∧ 1}
2009 9 3
Grassmann Algebra Book.nb 556
{α_m ⊛0 β_3, α_m ⊛1 β_3, α_m ⊛2 β_3, α_m ⊛3 β_3}

TableForm[Thread[T3 == T2]]
α_m ⊛0 β_3 == (α_m ∘ 1) ∧ β1 ∧ β2 ∧ β3
α_m ⊛1 β_3 == (α_m ∘ β2) ∧ -(β1 ∧ β3) + (α_m ∘ β1) ∧ β2 ∧ β3 + (α_m ∘ β3) ∧ β1 ∧ β2
α_m ⊛2 β_3 == (α_m ∘ β1 ∧ β2) ∧ β3 + (α_m ∘ β1 ∧ β3) ∧ -β2 + (α_m ∘ β2 ∧ β3) ∧ β1
α_m ⊛3 β_3 == (α_m ∘ β1 ∧ β2 ∧ β3) ∧ 1
Finally, we can collect these steps into a single function, generalize it for arbitrary simple β_k, and restrict it to apply only to the grades of factors we have assumed in this development. That is, the grade of α should be symbolic, but constrained to be greater than or equal to the grade of β, which should itself be greater than 1.
GeneralizedProductList[GeneralizedProduct[l_][a_, b_]] /;
   (Head[RawGrade[a]] === Symbol ||
      RawGrade[a] >= RawGrade[b]) && RawGrade[b] > 1 :=
 Module[{m, k, B, Bc, T1, T2, T3, g, l}, m = RawGrade[a];
  k = RawGrade[b]; B = Spine[b]; Bc = Cospine[b];
  T1 = Thread[Thread[a ∘ B] ∧ Bc];
  T2 = Reverse[Table[Plus @@ Select[T1, RawGrade[#] == g &],
     {g, m - k, m + k, 2}]];
  T3 = Table[a ⊛l b, {l, 0, k}]; TableForm[Thread[T3 == T2]]];
Ï Example
Here are the formulas for a 2-element and a 4-element as second factors.
GeneralizedProductList[α_m ⊛λ β_2]
α_m ⊛0 β_2 == (α_m ∘ 1) ∧ β1 ∧ β2
α_m ⊛1 β_2 == (α_m ∘ β1) ∧ β2 + (α_m ∘ β2) ∧ -β1
α_m ⊛2 β_2 == (α_m ∘ β1 ∧ β2) ∧ 1
GeneralizedProductList[α_m ⊛λ β_4]
α_m ⊛0 β_4 == (α_m ∘ 1) ∧ β1 ∧ β2 ∧ β3 ∧ β4
α_m ⊛1 β_4 == (α_m ∘ β2) ∧ -(β1 ∧ β3 ∧ β4) + (α_m ∘ β4) ∧ -(β1 ∧ β2 ∧ β3) +
  (α_m ∘ β1) ∧ β2 ∧ β3 ∧ β4 + (α_m ∘ β3) ∧ β1 ∧ β2 ∧ β4
α_m ⊛2 β_4 == (α_m ∘ β1 ∧ β3) ∧ -(β2 ∧ β4) + (α_m ∘ β2 ∧ β4) ∧ -(β1 ∧ β3) +
  (α_m ∘ β1 ∧ β2) ∧ β3 ∧ β4 + (α_m ∘ β1 ∧ β4) ∧ β2 ∧ β3 +
  (α_m ∘ β2 ∧ β3) ∧ β1 ∧ β4 + (α_m ∘ β3 ∧ β4) ∧ β1 ∧ β2
α_m ⊛3 β_4 == (α_m ∘ β1 ∧ β2 ∧ β3) ∧ β4 + (α_m ∘ β1 ∧ β2 ∧ β4) ∧ -β3 +
  (α_m ∘ β1 ∧ β3 ∧ β4) ∧ β2 + (α_m ∘ β2 ∧ β3 ∧ β4) ∧ -β1
α_m ⊛4 β_4 == (α_m ∘ β1 ∧ β2 ∧ β3 ∧ β4) ∧ 1
A:  α_m ⊛λ β_k == Σ_{j=1}^{C(k,λ)} (α_m ∘ β_j) ∧ β'_j

B:  α_m ⊛λ β_k == Σ_{j=1}^{C(k,λ)} (α_m ∧ β'_j) ∘ β_j        [10.10]

β_k == β_1 ∧ β'_1 == β_2 ∧ β'_2 == …,   with each β_j of grade λ and each β'_j of grade k−λ

The identity between the A form and the B form is the source of many useful relations in the Grassmann and Clifford algebras. We call it the Generalized Product Theorem.
In the previous sections, the identity between the expansions in terms of either of the two factors was shown to be at the inner product level. The identity between the A form and the B form is at a further level of complexity: that of the scalar product. That is, in order to show the identity between the two forms, a generalized product may need to be reduced to an expression involving exterior and scalar products.
We can also show straightforwardly that the A and B forms are equal for λ equal to 1 and k−1, hence proving the theorem for all generalized products of the form α_m ⊛λ β_k in which k is less than or equal to 3. I have not yet found such a straightforward proof for the general case.
The example of the next section will extend this result to show that the theorem is true for m and k
equal to 4 and l equal to 2. By using the quasi-commutativity of the generalized product we can
thus show that the theorem is valid for all generalized products in a 4-space. The example will also
make clearer the detail by which the equality of the two forms is achieved.
Ï λ = 1
We begin with the B form for λ equal to 1, and show that it reduces to the A form for λ equal to 1.

α_m ⊛1 β_k == Σ_{j=1}^{k} (α_m ∧ β'_j) ∘ β_j        β_k == β_1 ∧ β'_1 == β_2 ∧ β'_2 == …

For a 1-element x we have the expansion identity:

(α_m ∧ β_k) ∘ x == (α_m ∘ x) ∧ β_k + (-1)^m α_m ∧ (β_k ∘ x)

which, applied to each summand (with x taken as β_j), gives two types of terms:

α_m ⊛1 β_k == Σ_{j=1}^{k} (α_m ∘ β_j) ∧ β'_j + (-1)^m Σ_{j=1}^{k} α_m ∧ (β'_j ∘ β_j)

The first sum is of the A form, while the second sum is zero on account of the Zero Interior Sum theorem. Hence we have proved the theorem for λ equal to 1.
Ï λ = k−1
This time we begin with the A form for λ equal to k−1, and show that it reduces to the B form for λ equal to k−1.

α_m ⊛(k−1) β_k == Σ_{j=1}^{k} (α_m ∘ β_j) ∧ β'_j        β_k == β_1 ∧ β'_1 == β_2 ∧ β'_2 == …

Here each β_j is of grade k−1 and each β'_j is a 1-element. For a 1-element x we have the companion identity:

(α_m ∘ β_k) ∧ x == (α_m ∧ x) ∘ β_k + (-1)^(m+1) α_m ∘ (β_k ∘ x)

which enables us to expand the summand again to give two types of terms:

α_m ⊛(k−1) β_k == Σ_{j=1}^{k} (α_m ∧ β'_j) ∘ β_j + (-1)^(m+1) Σ_{j=1}^{k} α_m ∘ (β_j ∘ β'_j)

The first sum is of the B form, while the second sum is zero on account of the Zero Interior Sum theorem. Hence we have proved the theorem for λ equal to k−1.
A = ToInteriorProducts[α_m ⊛2 β_4]
(α_m ∘ β1 ∧ β2) ∧ β3 ∧ β4 - (α_m ∘ β1 ∧ β3) ∧ β2 ∧ β4 + (α_m ∘ β1 ∧ β4) ∧ β2 ∧ β3 +
  (α_m ∘ β2 ∧ β3) ∧ β1 ∧ β4 - (α_m ∘ β2 ∧ β4) ∧ β1 ∧ β3 + (α_m ∘ β3 ∧ β4) ∧ β1 ∧ β2

B = ToInteriorProductsB[α_m ⊛2 β_4]
α_m ∧ β1 ∧ β2 ∘ β3 ∧ β4 - α_m ∧ β1 ∧ β3 ∘ β2 ∧ β4 + α_m ∧ β1 ∧ β4 ∘ β2 ∧ β3 +
  α_m ∧ β2 ∧ β3 ∘ β1 ∧ β4 - α_m ∧ β2 ∧ β4 ∘ β1 ∧ β3 + α_m ∧ β3 ∧ β4 ∘ β1 ∧ β2
Note that the A form and the B form at this first (interior product) level of their expansion both have the same number of terms (in this case C(4,2) = 6). Although the expressions also both have the same concatenation of factors, their grouping is different. Due to Mathematica's inbuilt precedence, parentheses are not needed in the second case: the interior product has lower precedence than the exterior product, so parentheses effectively surround the exterior products.
The next step is to convert these two expressions to their inner product forms. But to do this we will need to specify the grade of α_m. For this example, we take m equal to 4, and replace α_m in the expressions above by 𝔸 (and consequently we also need to increase the dimension of the space above the default value of 3, to prevent 𝔸 being treated as zero).

¯!6 ; 𝔸 = ComposeSimpleForm[α_4]
α1 ∧ α2 ∧ α3 ∧ α4
A1 = ¯%[ToInnerProducts[A /. α_m → 𝔸]]
(α3 ∧ α4 ∘ β3 ∧ β4) α1 ∧ α2 ∧ β1 ∧ β2 - (α3 ∧ α4 ∘ β2 ∧ β4) α1 ∧ α2 ∧ β1 ∧ β3 +
(α3 ∧ α4 ∘ β2 ∧ β3) α1 ∧ α2 ∧ β1 ∧ β4 + (α3 ∧ α4 ∘ β1 ∧ β4) α1 ∧ α2 ∧ β2 ∧ β3 -
(α3 ∧ α4 ∘ β1 ∧ β3) α1 ∧ α2 ∧ β2 ∧ β4 + (α3 ∧ α4 ∘ β1 ∧ β2) α1 ∧ α2 ∧ β3 ∧ β4 -
(α2 ∧ α4 ∘ β3 ∧ β4) α1 ∧ α3 ∧ β1 ∧ β2 + (α2 ∧ α4 ∘ β2 ∧ β4) α1 ∧ α3 ∧ β1 ∧ β3 -
(α2 ∧ α4 ∘ β2 ∧ β3) α1 ∧ α3 ∧ β1 ∧ β4 - (α2 ∧ α4 ∘ β1 ∧ β4) α1 ∧ α3 ∧ β2 ∧ β3 +
(α2 ∧ α4 ∘ β1 ∧ β3) α1 ∧ α3 ∧ β2 ∧ β4 - (α2 ∧ α4 ∘ β1 ∧ β2) α1 ∧ α3 ∧ β3 ∧ β4 +
(α2 ∧ α3 ∘ β3 ∧ β4) α1 ∧ α4 ∧ β1 ∧ β2 - (α2 ∧ α3 ∘ β2 ∧ β4) α1 ∧ α4 ∧ β1 ∧ β3 +
(α2 ∧ α3 ∘ β2 ∧ β3) α1 ∧ α4 ∧ β1 ∧ β4 + (α2 ∧ α3 ∘ β1 ∧ β4) α1 ∧ α4 ∧ β2 ∧ β3 -
(α2 ∧ α3 ∘ β1 ∧ β3) α1 ∧ α4 ∧ β2 ∧ β4 + (α2 ∧ α3 ∘ β1 ∧ β2) α1 ∧ α4 ∧ β3 ∧ β4 +
(α1 ∧ α4 ∘ β3 ∧ β4) α2 ∧ α3 ∧ β1 ∧ β2 - (α1 ∧ α4 ∘ β2 ∧ β4) α2 ∧ α3 ∧ β1 ∧ β3 +
(α1 ∧ α4 ∘ β2 ∧ β3) α2 ∧ α3 ∧ β1 ∧ β4 + (α1 ∧ α4 ∘ β1 ∧ β4) α2 ∧ α3 ∧ β2 ∧ β3 -
(α1 ∧ α4 ∘ β1 ∧ β3) α2 ∧ α3 ∧ β2 ∧ β4 + (α1 ∧ α4 ∘ β1 ∧ β2) α2 ∧ α3 ∧ β3 ∧ β4 -
(α1 ∧ α3 ∘ β3 ∧ β4) α2 ∧ α4 ∧ β1 ∧ β2 + (α1 ∧ α3 ∘ β2 ∧ β4) α2 ∧ α4 ∧ β1 ∧ β3 -
(α1 ∧ α3 ∘ β2 ∧ β3) α2 ∧ α4 ∧ β1 ∧ β4 - (α1 ∧ α3 ∘ β1 ∧ β4) α2 ∧ α4 ∧ β2 ∧ β3 +
(α1 ∧ α3 ∘ β1 ∧ β3) α2 ∧ α4 ∧ β2 ∧ β4 - (α1 ∧ α3 ∘ β1 ∧ β2) α2 ∧ α4 ∧ β3 ∧ β4 +
(α1 ∧ α2 ∘ β3 ∧ β4) α3 ∧ α4 ∧ β1 ∧ β2 - (α1 ∧ α2 ∘ β2 ∧ β4) α3 ∧ α4 ∧ β1 ∧ β3 +
(α1 ∧ α2 ∘ β2 ∧ β3) α3 ∧ α4 ∧ β1 ∧ β4 + (α1 ∧ α2 ∘ β1 ∧ β4) α3 ∧ α4 ∧ β2 ∧ β3 -
(α1 ∧ α2 ∘ β1 ∧ β3) α3 ∧ α4 ∧ β2 ∧ β4 + (α1 ∧ α2 ∘ β1 ∧ β2) α3 ∧ α4 ∧ β3 ∧ β4
B1 = ¯%BToInnerProductsBB ê. a Ø 2FF
m
It can be seen that at this inner product level these two expressions are not of the same form: B1
contains terms additional to those of A1. The interior products of A1 are only of the form
(ai ∧ aj ∘ br ∧ bs), whereas the extra terms of B1 are of the forms (bi ∧ bj ∘ br ∧ bs)
and (ai ∧ bj ∘ br ∧ bs). Calculating their difference gives:
AB = B1 - A1
H2 Hb1 Ô b2 û b3 Ô b4 L - 2 Hb1 Ô b3 û b2 Ô b4 L + 2 Hb1 Ô b4 û b2 Ô b3 LL
a1 Ô a2 Ô a3 Ô a4 +
H-Ha4 Ô b2 û b3 Ô b4 L + a4 Ô b3 û b2 Ô b4 - a4 Ô b4 û b2 Ô b3 L a1 Ô a2 Ô a3 Ô b1 +
Ha4 Ô b1 û b3 Ô b4 - a4 Ô b3 û b1 Ô b4 + a4 Ô b4 û b1 Ô b3 L a1 Ô a2 Ô a3 Ô b2 +
H-Ha4 Ô b1 û b2 Ô b4 L + a4 Ô b2 û b1 Ô b4 - a4 Ô b4 û b1 Ô b2 L a1 Ô a2 Ô a3 Ô b3 +
Ha4 Ô b1 û b2 Ô b3 - a4 Ô b2 û b1 Ô b3 + a4 Ô b3 û b1 Ô b2 L a1 Ô a2 Ô a3 Ô b4 +
Ha3 Ô b2 û b3 Ô b4 - a3 Ô b3 û b2 Ô b4 + a3 Ô b4 û b2 Ô b3 L a1 Ô a2 Ô a4 Ô b1 +
H-Ha3 Ô b1 û b3 Ô b4 L + a3 Ô b3 û b1 Ô b4 - a3 Ô b4 û b1 Ô b3 L a1 Ô a2 Ô a4 Ô b2 +
Ha3 Ô b1 û b2 Ô b4 - a3 Ô b2 û b1 Ô b4 + a3 Ô b4 û b1 Ô b2 L a1 Ô a2 Ô a4 Ô b3 +
H-Ha3 Ô b1 û b2 Ô b3 L + a3 Ô b2 û b1 Ô b3 - a3 Ô b3 û b1 Ô b2 L a1 Ô a2 Ô a4 Ô b4 +
H-Ha2 Ô b2 û b3 Ô b4 L + a2 Ô b3 û b2 Ô b4 - a2 Ô b4 û b2 Ô b3 L a1 Ô a3 Ô a4 Ô b1 +
Ha2 Ô b1 û b3 Ô b4 - a2 Ô b3 û b1 Ô b4 + a2 Ô b4 û b1 Ô b3 L a1 Ô a3 Ô a4 Ô b2 +
H-Ha2 Ô b1 û b2 Ô b4 L + a2 Ô b2 û b1 Ô b4 - a2 Ô b4 û b1 Ô b2 L a1 Ô a3 Ô a4 Ô b3 +
Ha2 Ô b1 û b2 Ô b3 - a2 Ô b2 û b1 Ô b3 + a2 Ô b3 û b1 Ô b2 L a1 Ô a3 Ô a4 Ô b4 +
Ha1 Ô b2 û b3 Ô b4 - a1 Ô b3 û b2 Ô b4 + a1 Ô b4 û b2 Ô b3 L a2 Ô a3 Ô a4 Ô b1 +
H-Ha1 Ô b1 û b3 Ô b4 L + a1 Ô b3 û b1 Ô b4 - a1 Ô b4 û b1 Ô b3 L a2 Ô a3 Ô a4 Ô b2 +
Ha1 Ô b1 û b2 Ô b4 - a1 Ô b2 û b1 Ô b4 + a1 Ô b4 û b1 Ô b2 L a2 Ô a3 Ô a4 Ô b3 +
H-Ha1 Ô b1 û b2 Ô b3 L + a1 Ô b2 û b1 Ô b3 - a1 Ô b3 û b1 Ô b2 L a2 Ô a3 Ô a4 Ô b4
By inspection, we can see that each scalar coefficient is a zero interior sum, thus making AB equal
to zero and proving the theorem for this particular case. We can also verify this directly by
reducing the inner products of AB to scalar products.
¯%@ToScalarProducts@ABDD
0
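The vanishing of each such coefficient can be checked numerically. The sketch below is not part of the GrassmannAlgebra package: it assumes a Euclidean metric, expands the inner product of two simple 2-elements as the 2×2 Gram determinant of scalar products, and evaluates the first coefficient above, (b1∧b2 ∘ b3∧b4) - (b1∧b3 ∘ b2∧b4) + (b1∧b4 ∘ b2∧b3), for random vectors:

```python
import random

def dot(x, y):
    # Euclidean scalar product of two vectors
    return sum(p * q for p, q in zip(x, y))

def inner2(x, y, u, v):
    # inner product of simple 2-elements (x^y).(u^v), expanded as the
    # 2x2 Gram determinant of scalar products
    return dot(x, u) * dot(y, v) - dot(x, v) * dot(y, u)

random.seed(1)
b1, b2, b3, b4 = ([random.uniform(-1, 1) for _ in range(5)] for _ in range(4))

# the coefficient of a1^a2^a3^a4 in the difference AB (up to the factor 2)
s = inner2(b1, b2, b3, b4) - inner2(b1, b3, b2, b4) + inner2(b1, b4, b2, b3)
assert abs(s) < 1e-12
```

Since the cancellation is an identity in the scalar products bi∘bj, it holds for any symmetric metric, not just the Euclidean one used in this sketch.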
The principal difference between an expansion using the A form, and one using the B form is that
the A form needs only to be expanded to the inner product level to show the identity between two
equal expressions. On the other hand, expansion of the B form requires expansion to the scalar
product level, often involving significantly more computation.
We take the same generalized product α[4] Δ_2 β[3] that we discussed in the previous section, and show
that we need to expand the product to the scalar product level before the identity of the two
expansions becomes evident.
8a1 Ô a2 Ô a3 Ô a4 , b1 Ô b2 Ô b3 <
A = ToInteriorProductsBB2 Û %F
2
a1 Ô a2 Ô a3 Ô a4 Ô b1 û b2 Ô b3 -
a1 Ô a2 Ô a3 Ô a4 Ô b2 û b1 Ô b3 + a1 Ô a2 Ô a3 Ô a4 Ô b3 û b1 Ô b2
To expand with respect to α[4] we reverse the order of the generalized product to give β[3] Δ_2 α[4], and
B = ToInteriorProductsBB% Û 2F
2
a1 Ô a2 Ô b1 Ô b2 Ô b3 û a3 Ô a4 - a1 Ô a3 Ô b1 Ô b2 Ô b3 û a2 Ô a4 +
a1 Ô a4 Ô b1 Ô b2 Ô b3 û a2 Ô a3 + a2 Ô a3 Ô b1 Ô b2 Ô b3 û a1 Ô a4 -
a2 Ô a4 Ô b1 Ô b2 Ô b3 û a1 Ô a3 + a3 Ô a4 Ô b1 Ô b2 Ô b3 û a1 Ô a2
A1 = ¯%@ToInnerProducts@ADD
Ha4 Ô b1 û b2 Ô b3 - a4 Ô b2 û b1 Ô b3 + a4 Ô b3 û b1 Ô b2 L a1 Ô a2 Ô a3 +
H-Ha3 Ô b1 û b2 Ô b3 L + a3 Ô b2 û b1 Ô b3 - a3 Ô b3 û b1 Ô b2 L a1 Ô a2 Ô a4 +
Ha3 Ô a4 û b2 Ô b3 L a1 Ô a2 Ô b1 -
Ha3 Ô a4 û b1 Ô b3 L a1 Ô a2 Ô b2 + Ha3 Ô a4 û b1 Ô b2 L a1 Ô a2 Ô b3 +
Ha2 Ô b1 û b2 Ô b3 - a2 Ô b2 û b1 Ô b3 + a2 Ô b3 û b1 Ô b2 L a1 Ô a3 Ô a4 -
Ha2 Ô a4 û b2 Ô b3 L a1 Ô a3 Ô b1 + Ha2 Ô a4 û b1 Ô b3 L a1 Ô a3 Ô b2 -
Ha2 Ô a4 û b1 Ô b2 L a1 Ô a3 Ô b3 + Ha2 Ô a3 û b2 Ô b3 L a1 Ô a4 Ô b1 -
Ha2 Ô a3 û b1 Ô b3 L a1 Ô a4 Ô b2 + Ha2 Ô a3 û b1 Ô b2 L a1 Ô a4 Ô b3 +
H-Ha1 Ô b1 û b2 Ô b3 L + a1 Ô b2 û b1 Ô b3 - a1 Ô b3 û b1 Ô b2 L a2 Ô a3 Ô a4 +
Ha1 Ô a4 û b2 Ô b3 L a2 Ô a3 Ô b1 -
Ha1 Ô a4 û b1 Ô b3 L a2 Ô a3 Ô b2 + Ha1 Ô a4 û b1 Ô b2 L a2 Ô a3 Ô b3 -
Ha1 Ô a3 û b2 Ô b3 L a2 Ô a4 Ô b1 + Ha1 Ô a3 û b1 Ô b3 L a2 Ô a4 Ô b2 -
Ha1 Ô a3 û b1 Ô b2 L a2 Ô a4 Ô b3 + Ha1 Ô a2 û b2 Ô b3 L a3 Ô a4 Ô b1 -
Ha1 Ô a2 û b1 Ô b3 L a3 Ô a4 Ô b2 + Ha1 Ô a2 û b1 Ô b2 L a3 Ô a4 Ô b3
B1 = ¯%@ToInnerProducts@BDD
Ha3 Ô a4 û b2 Ô b3 L a1 Ô a2 Ô b1 -
Ha3 Ô a4 û b1 Ô b3 L a1 Ô a2 Ô b2 + Ha3 Ô a4 û b1 Ô b2 L a1 Ô a2 Ô b3 -
Ha2 Ô a4 û b2 Ô b3 L a1 Ô a3 Ô b1 + Ha2 Ô a4 û b1 Ô b3 L a1 Ô a3 Ô b2 -
Ha2 Ô a4 û b1 Ô b2 L a1 Ô a3 Ô b3 + Ha2 Ô a3 û b2 Ô b3 L a1 Ô a4 Ô b1 -
Ha2 Ô a3 û b1 Ô b3 L a1 Ô a4 Ô b2 + Ha2 Ô a3 û b1 Ô b2 L a1 Ô a4 Ô b3 +
Ha2 Ô a3 û a4 Ô b3 - a2 Ô a4 û a3 Ô b3 + a2 Ô b3 û a3 Ô a4 L a1 Ô b1 Ô b2 +
H-Ha2 Ô a3 û a4 Ô b2 L + a2 Ô a4 û a3 Ô b2 - a2 Ô b2 û a3 Ô a4 L a1 Ô b1 Ô b3 +
Ha2 Ô a3 û a4 Ô b1 - a2 Ô a4 û a3 Ô b1 + a2 Ô b1 û a3 Ô a4 L a1 Ô b2 Ô b3 +
Ha1 Ô a4 û b2 Ô b3 L a2 Ô a3 Ô b1 - Ha1 Ô a4 û b1 Ô b3 L a2 Ô a3 Ô b2 +
Ha1 Ô a4 û b1 Ô b2 L a2 Ô a3 Ô b3 - Ha1 Ô a3 û b2 Ô b3 L a2 Ô a4 Ô b1 +
Ha1 Ô a3 û b1 Ô b3 L a2 Ô a4 Ô b2 - Ha1 Ô a3 û b1 Ô b2 L a2 Ô a4 Ô b3 +
H-Ha1 Ô a3 û a4 Ô b3 L + a1 Ô a4 û a3 Ô b3 - a1 Ô b3 û a3 Ô a4 L a2 Ô b1 Ô b2 +
Ha1 Ô a3 û a4 Ô b2 - a1 Ô a4 û a3 Ô b2 + a1 Ô b2 û a3 Ô a4 L a2 Ô b1 Ô b3 +
H-Ha1 Ô a3 û a4 Ô b1 L + a1 Ô a4 û a3 Ô b1 - a1 Ô b1 û a3 Ô a4 L a2 Ô b2 Ô b3 +
Ha1 Ô a2 û b2 Ô b3 L a3 Ô a4 Ô b1 -
Ha1 Ô a2 û b1 Ô b3 L a3 Ô a4 Ô b2 + Ha1 Ô a2 û b1 Ô b2 L a3 Ô a4 Ô b3 +
Ha1 Ô a2 û a4 Ô b3 - a1 Ô a4 û a2 Ô b3 + a1 Ô b3 û a2 Ô a4 L a3 Ô b1 Ô b2 +
H-Ha1 Ô a2 û a4 Ô b2 L + a1 Ô a4 û a2 Ô b2 - a1 Ô b2 û a2 Ô a4 L a3 Ô b1 Ô b3 +
Ha1 Ô a2 û a4 Ô b1 - a1 Ô a4 û a2 Ô b1 + a1 Ô b1 û a2 Ô a4 L a3 Ô b2 Ô b3 +
H-Ha1 Ô a2 û a3 Ô b3 L + a1 Ô a3 û a2 Ô b3 - a1 Ô b3 û a2 Ô a3 L a4 Ô b1 Ô b2 +
Ha1 Ô a2 û a3 Ô b2 - a1 Ô a3 û a2 Ô b2 + a1 Ô b2 û a2 Ô a3 L a4 Ô b1 Ô b3 +
H-Ha1 Ô a2 û a3 Ô b1 L + a1 Ô a3 û a2 Ô b1 - a1 Ô b1 û a2 Ô a3 L a4 Ô b2 Ô b3 +
H2 Ha1 Ô a2 û a3 Ô a4 L - 2 Ha1 Ô a3 û a2 Ô a4 L + 2 Ha1 Ô a4 û a2 Ô a3 LL b1 Ô b2 Ô b3
By observation we can see that A1 and B1 contain some common terms. We compute the
difference of the two inner product forms and collect terms with the same exterior product factor:
AB = B1 - A1
-Ha4 Ô b1 û b2 Ô b3 - a4 Ô b2 û b1 Ô b3 + a4 Ô b3 û b1 Ô b2 L a1 Ô a2 Ô a3 -
H-Ha3 Ô b1 û b2 Ô b3 L + a3 Ô b2 û b1 Ô b3 - a3 Ô b3 û b1 Ô b2 L a1 Ô a2 Ô a4 -
Ha2 Ô b1 û b2 Ô b3 - a2 Ô b2 û b1 Ô b3 + a2 Ô b3 û b1 Ô b2 L a1 Ô a3 Ô a4 +
Ha2 Ô a3 û a4 Ô b3 - a2 Ô a4 û a3 Ô b3 + a2 Ô b3 û a3 Ô a4 L a1 Ô b1 Ô b2 +
H-Ha2 Ô a3 û a4 Ô b2 L + a2 Ô a4 û a3 Ô b2 - a2 Ô b2 û a3 Ô a4 L a1 Ô b1 Ô b3 +
Ha2 Ô a3 û a4 Ô b1 - a2 Ô a4 û a3 Ô b1 + a2 Ô b1 û a3 Ô a4 L a1 Ô b2 Ô b3 -
H-Ha1 Ô b1 û b2 Ô b3 L + a1 Ô b2 û b1 Ô b3 - a1 Ô b3 û b1 Ô b2 L a2 Ô a3 Ô a4 +
H-Ha1 Ô a3 û a4 Ô b3 L + a1 Ô a4 û a3 Ô b3 - a1 Ô b3 û a3 Ô a4 L a2 Ô b1 Ô b2 +
Ha1 Ô a3 û a4 Ô b2 - a1 Ô a4 û a3 Ô b2 + a1 Ô b2 û a3 Ô a4 L a2 Ô b1 Ô b3 +
H-Ha1 Ô a3 û a4 Ô b1 L + a1 Ô a4 û a3 Ô b1 - a1 Ô b1 û a3 Ô a4 L a2 Ô b2 Ô b3 +
Ha1 Ô a2 û a4 Ô b3 - a1 Ô a4 û a2 Ô b3 + a1 Ô b3 û a2 Ô a4 L a3 Ô b1 Ô b2 +
H-Ha1 Ô a2 û a4 Ô b2 L + a1 Ô a4 û a2 Ô b2 - a1 Ô b2 û a2 Ô a4 L a3 Ô b1 Ô b3 +
Ha1 Ô a2 û a4 Ô b1 - a1 Ô a4 û a2 Ô b1 + a1 Ô b1 û a2 Ô a4 L a3 Ô b2 Ô b3 +
H-Ha1 Ô a2 û a3 Ô b3 L + a1 Ô a3 û a2 Ô b3 - a1 Ô b3 û a2 Ô a3 L a4 Ô b1 Ô b2 +
Ha1 Ô a2 û a3 Ô b2 - a1 Ô a3 û a2 Ô b2 + a1 Ô b2 û a2 Ô a3 L a4 Ô b1 Ô b3 +
H-Ha1 Ô a2 û a3 Ô b1 L + a1 Ô a3 û a2 Ô b1 - a1 Ô b1 û a2 Ô a3 L a4 Ô b2 Ô b3 +
H2 Ha1 Ô a2 û a3 Ô a4 L - 2 Ha1 Ô a3 û a2 Ô a4 L + 2 Ha1 Ô a4 û a2 Ô a3 LL b1 Ô b2 Ô b3
In order to show the equality of the two forms, we need to show that the above difference is zero.
As in the previous section we may do this by converting the inner products to scalar products.
Alternatively we may observe that the coefficients of the terms are zero by virtue of the Zero
Interior Sum Theorem discussed later in this chapter.
¯%@ToScalarProducts@ABDD
0
This is an example of how the B form of the generalized product may be expanded in terms of
either factor.
When λ is equal to the larger of the grades of the factors, the generalized product reduces to a
single interior product, which is zero by virtue of its left factor being of lesser grade than its right
factor. Suppose λ = k > m. Then:
α[m] Δ_λ β[k] == α[m] ∘ β[k] == 0,    λ = k > m
These relationships are the source of an interesting suite of identities relating exterior and interior
products. We take some examples; in each case we verify that the result is zero by converting the
expression to its scalar product form.
But we will need to circumvent the default behaviour of ToInteriorProducts by not initially
telling it the grade of one of the terms (since it would normally return 0 for these types of product).
We also need to declare some vector symbols and use the B variant of ToInteriorProducts
(ToInteriorProductsB) to force the desired expansion.
DeclareExtraVectorSymbols@a_ D;
Ï Example 1
A123 = ToInteriorProductsBBa Û bF ê. a Ø a1
2 3
a1 Ô b1 û b2 Ô b3 - a1 Ô b2 û b1 Ô b3 + a1 Ô b3 û b1 Ô b2
¯$@ToScalarProducts@A123 DD
0
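Under the same Gram-determinant expansion of the inner products (a sketch assuming a Euclidean metric; the helper names are hypothetical), the identity a1∧b1 ∘ b2∧b3 - a1∧b2 ∘ b1∧b3 + a1∧b3 ∘ b1∧b2 == 0 can be checked with random vectors:

```python
import random

def dot(x, y):
    return sum(p * q for p, q in zip(x, y))

def inner2(x, y, u, v):
    # (x^y).(u^v) expanded as a 2x2 Gram determinant of scalar products
    return dot(x, u) * dot(y, v) - dot(x, v) * dot(y, u)

random.seed(2)
a1, b1, b2, b3 = ([random.uniform(-1, 1) for _ in range(4)] for _ in range(4))

s = inner2(a1, b1, b2, b3) - inner2(a1, b2, b1, b3) + inner2(a1, b3, b1, b2)
assert abs(s) < 1e-12
```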
Ï Example 2
A124 = ToInteriorProductsBBa Û bF ê. a Ø a1
2 4
a1 Ô b1 Ô b2 û b3 Ô b4 - a1 Ô b1 Ô b3 û b2 Ô b4 + a1 Ô b1 Ô b4 û b2 Ô b3 +
a1 Ô b2 Ô b3 û b1 Ô b4 - a1 Ô b2 Ô b4 û b1 Ô b3 + a1 Ô b3 Ô b4 û b1 Ô b2
¯$@ToScalarProducts@A124 DD
0
Ï Example 3
A234 = ToInteriorProductsBBa Û bF ê. a Ø a1 Ô a2
3 4
-Ha1 Ô a2 Ô b1 û b2 Ô b3 Ô b4 L + a1 Ô a2 Ô b2 û b1 Ô b3 Ô b4 -
a1 Ô a2 Ô b3 û b1 Ô b2 Ô b4 + a1 Ô a2 Ô b4 û b1 Ô b2 Ô b3
¯$@ToScalarProducts@A234 DD
0
If two simple elements contain a common factor of grade g, then their generalized Grassmann
product of order g reduces to a simple interior product, and their generalized products of order
less than g are zero.
We can show this most directly from the B form of the generalized product formula 10.10. The B
form is
α[m] Δ_λ β[k] == Σ_{j=1}^{C(k,λ)} (α[m] ∧ βj[k-λ]) ∘ βj[λ]

β[k] == β1[λ] ∧ β1[k-λ] == β2[λ] ∧ β2[k-λ] == ⋯

α[m] == γ[λ] ∧ A[a],    β[k] == γ[λ] ∧ B[b]
Substituting these products into the formula above enables us to begin writing the generalized
product sum:
But we quickly see that, because the subsequent terms must each contain a repeated factor in
their exterior product, they must all reduce to zero. Hence, for simple elements, if the order of the
generalized product is equal to the grade of the common factor, the generalized product reduces to
a simple interior product.
In a similar way, we can see that if the order of the generalized product is less than the grade of the
common factor, all terms must contain a repeated factor in their exterior product, and the
generalized product reduces to zero.
Of course, this result is also valid in the case that either or both of A[a] and B[b] are scalars.
Congruent factors
If both A[a] and B[b] are scalars (that is, the grades a and b are equal to 0), they may be factored out of the formulas.
γ[m] Δ_λ γ[m] == γ[m] ∘ γ[m],  λ = m        γ[m] Δ_λ γ[m] == 0,  λ ≠ m        10.15
If α[n] and β[n] are both n-elements, then they are congruent (that is, they differ by at most a scalar factor), and:
α[n] Δ_λ β[n] == α[n] ∘ β[n],  λ = n        α[n] Δ_λ β[n] == 0,  λ ≠ n        10.16
These results give rise to a series of relationships among the exterior and interior products of
factors of a simple element. These relationships are independent of the dimension of the space
concerned. For example, applying the result to simple 2, 3 and 4-elements gives the following
relationships at the interior product level:
Ï Example: 2-elements
ToInteriorProductsBa Û aF ã 0
2 1 2
Ka û a1 O Ô a2 - Ka û a2 O Ô a1 ã 0
2 2
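This 2-element relationship can be checked numerically, representing a 2-element x∧y by the antisymmetric matrix x yᵀ - y xᵀ and assuming the contraction convention (x∧y)∘z == (x∘z)y - (y∘z)x; the difference vanishes under either sign convention, since flipping the convention flips both terms:

```python
import random

def dot(x, y): return sum(p * q for p, q in zip(x, y))
def scale(c, x): return [c * p for p in x]
def sub(x, y): return [p - q for p, q in zip(x, y)]

def wedge(x, y):
    # represent the 2-element x^y by the antisymmetric matrix x y^T - y x^T
    n = len(x)
    return [[x[i] * y[j] - x[j] * y[i] for j in range(n)] for i in range(n)]

def contract(x, y, z):
    # interior product (x^y).z of a simple 2-element with a 1-element,
    # with the assumed sign convention (x^y).z == (x.z) y - (y.z) x
    return sub(scale(dot(x, z), y), scale(dot(y, z), x))

random.seed(3)
a1, a2 = ([random.uniform(-1, 1) for _ in range(4)] for _ in range(2))

# ((a1^a2).a1)^a2 - ((a1^a2).a2)^a1 should vanish
t1 = wedge(contract(a1, a2, a1), a2)
t2 = wedge(contract(a1, a2, a2), a1)
diff = max(abs(t1[i][j] - t2[i][j]) for i in range(4) for j in range(4))
assert diff < 1e-12
```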
Ï Example: 3-elements
ToInteriorProductsBa Û aF ã 0
3 1 3
a1 Ô a2 Ô a û a3 - a1 Ô a3 Ô a û a2 + a2 Ô a3 Ô a û a1 ã 0
3 3 3
ToInteriorProductsBa Û aF ã 0
3 2 3
a û a1 Ô a2 Ô a3 - a û a1 Ô a3 Ô a2 + a û a2 Ô a3 Ô a1 ã 0
3 3 3
Ï Example: 4-elements
ToInteriorProductsBa Û aF ã 0
4 1 4
Ka û a1 O Ô a2 Ô a3 Ô a4 - Ka û a2 O Ô a1 Ô a3 Ô a4 +
4 4
Ka û a3 O Ô a1 Ô a2 Ô a4 - Ka û a4 O Ô a1 Ô a2 Ô a3 ã 0
4 4
ToInteriorProductsBa Û aF ã 0
4 2 4
a1 Ô a2 Ô Ka û a3 Ô a4 O - a1 Ô a3 Ô Ka û a2 Ô a4 O + a1 Ô a4 Ô Ka û a2 Ô a3 O +
4 4 4
a2 Ô a3 Ô Ka û a1 Ô a4 O - a2 Ô a4 Ô Ka û a1 Ô a3 O + a3 Ô a4 Ô Ka û a1 Ô a2 O ã 0
4 4 4
ToInteriorProductsBa Û aF ã 0
4 3 4
Ka û a1 Ô a2 Ô a3 O Ô a4 - Ka û a1 Ô a2 Ô a4 O Ô a3 +
4 4
Ka û a1 Ô a3 Ô a4 O Ô a2 - Ka û a2 Ô a3 Ô a4 O Ô a1 ã 0
4 4
At the inner product level, some of the generalized products yield further relationships. For
example, α[4] Δ_2 α[4] confirms the Zero Interior Sum Theorem.
¯%BToInnerProductsBa Û aFF ã 0
4 2 4
X[m] == α1[m] + α2[m] + α3[m] + ⋯
From the axioms for the exterior product it is straightforward to show that X[m] ∧ X[m] == (-1)^m X[m] ∧ X[m]. This
means, of course, that the exterior product of a general m-element with itself is zero if m is odd.
In a similar manner, the formula for the quasi-commutativity of the generalized product is:

α[m] Δ_λ β[k] == (-1)^((m-λ)(k-λ)) β[k] Δ_λ α[m]
This shows that the generalized product of a general m-element with itself satisfies:

X[m] Δ_λ X[m] == (-1)^(m-λ) X[m] Δ_λ X[m]
Thus the generalization of the above result for the exterior product is that the generalized product
of a general (not necessarily simple) m-element with itself is zero if m - λ is odd, or alternatively, if
just one of m and λ is odd.
(α1[m] + α2[m] + α3[m] + ⋯) Δ_λ (α1[m] + α2[m] + α3[m] + ⋯) == 0,    m - λ odd        10.17
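This parity result can be exercised in a small orthonormal-basis model. The sketch below is not the GrassmannAlgebra package (the helper names are hypothetical): it assumes the symmetric form of the generalized product and a Euclidean metric, so that e_A ∘ e_B is 1 when A and B are the same index set and 0 otherwise, and confirms that X Δ_1 X vanishes for the non-simple 2-element X = e1∧e2 + e3∧e4, where m - λ = 1 is odd:

```python
from itertools import combinations

def perm_sign(seq):
    # sign of the permutation sorting seq
    inv = sum(1 for i in range(len(seq))
                for j in range(i + 1, len(seq)) if seq[i] > seq[j])
    return (-1) ** inv

def split(S, A):
    # factor e_S as sign * (e_A ^ e_rest); return the sign and the remainder
    rest = tuple(i for i in S if i not in A)
    return perm_sign(list(A) + list(rest)), rest

def gp(S, T, lam):
    # order-lam generalized product of orthonormal basis elements e_S, e_T:
    # sum over lam-subsets A of S, B of T of (e_A . e_B) e_{S\A} ^ e_{T\B}
    out = {}
    for A in combinations(S, lam):
        for B in combinations(T, lam):
            if A != B:
                continue  # orthonormal basis: e_A . e_B vanishes
            sa, ra = split(S, A)
            sb, rb = split(T, B)
            if set(ra) & set(rb):
                continue  # repeated factor: the exterior product is zero
            key = tuple(sorted(ra + rb))
            out[key] = out.get(key, 0) + sa * sb * perm_sign(ra + rb)
    return {k: v for k, v in out.items() if v != 0}

# X = e1^e2 + e3^e4: expand X gp_1 X by bilinearity; it should vanish
X = {(1, 2): 1, (3, 4): 1}
total = {}
for S, cs in X.items():
    for T, ct in X.items():
        for k, v in gp(S, T, 1).items():
            total[k] = total.get(k, 0) + cs * ct * v
assert {k: v for k, v in total.items() if v != 0} == {}
```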
Orthogonal factors
The right hand side of formula 10.11 can be expanded using the Interior Common Factor Theorem
(see Chapter 6, The Interior Product) below:
α[m] ∘ β[k] == Σ_{i=1}^{ν} (αi[k] ∘ β[k]) αi[m-k]

α[m] == α1[k] ∧ α1[m-k] == α2[k] ∧ α2[m-k] == ⋯ == αν[k] ∧ αν[m-k]

To do this we put

α[m] == γ[λ] ∧ A[a] ∧ B[b],    β[k] == γ[λ]
γ[λ] == γ1 ∧ γ2 ∧ ⋯ ∧ γi ∧ ⋯ ∧ γλ

A[a] == A1 ∧ A2 ∧ ⋯ ∧ Ai ∧ ⋯ ∧ Aa

B[b] == B1 ∧ B2 ∧ ⋯ ∧ Bi ∧ ⋯ ∧ Bb
Each of the subsequent terms in the sum on the right hand side of this expansion will contain at
least one of the Ai or Bj in the inner product. Thus, if γ[λ] is totally orthogonal to both A[a] and B[b], then
these terms will be zero and the formula becomes simply:
(γ[λ] ∧ A[a]) Δ_λ (γ[λ] ∧ B[b]) == (γ[λ] ∧ A[a] ∧ B[b]) ∘ γ[λ] == (γ[λ] ∘ γ[λ]) A[a] ∧ B[b],
γi ∘ Aj == 0,  γi ∘ Bj == 0        10.18
(A simple element is totally orthogonal to another simple element if each of its 1-element factors is
orthogonal to each of the 1-element factors of the other element.)
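Formula 10.18 can be illustrated in a miniature orthonormal-basis model (a sketch with hypothetical helper names, assuming a Euclidean metric and the symmetric form of the generalized product). Take γ = e1, A = e2, B = e3: γ is orthogonal to both A and B, so (e1∧e2) Δ_1 (e1∧e3) should reduce to (e1∘e1) e2∧e3 = e2∧e3:

```python
from itertools import combinations

def perm_sign(seq):
    # sign of the permutation sorting seq
    inv = sum(1 for i in range(len(seq))
                for j in range(i + 1, len(seq)) if seq[i] > seq[j])
    return (-1) ** inv

def split(S, A):
    # factor e_S as sign * (e_A ^ e_rest)
    rest = tuple(i for i in S if i not in A)
    return perm_sign(list(A) + list(rest)), rest

def gp(S, T, lam):
    # order-lam generalized product of orthonormal basis elements e_S, e_T
    out = {}
    for A in combinations(S, lam):
        for B in combinations(T, lam):
            if A != B:
                continue  # orthonormal basis: e_A . e_B vanishes
            sa, ra = split(S, A)
            sb, rb = split(T, B)
            if set(ra) & set(rb):
                continue  # repeated factor: the exterior product is zero
            key = tuple(sorted(ra + rb))
            out[key] = out.get(key, 0) + sa * sb * perm_sign(ra + rb)
    return {k: v for k, v in out.items() if v != 0}

# (e1^e2) gp_1 (e1^e3) == e2^e3, i.e. (gamma . gamma) A ^ B
assert gp((1, 2), (1, 3), 1) == {(2, 3): 1}
```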
It is also often the case that we work with a Euclidean metric. The examples below tabulate the
effect of the generalized product on the basis elements of a Grassmann algebra in both the general
metric and the Euclidean metric cases.
These results will be important when we discuss hypercomplex and Clifford algebras in later
chapters.
Here is the piece of code used to generate the examples in the next section. If you want to generate
your own examples, you will need to click anywhere in the box, and then press the Enter (or Shift-
Enter) key.
TabulateBasisProducts@n_DBx_ Û y_F :=
l
Here are tabulations of the generalized products of all the basis elements of a Grassmann algebra of
3-space.
TabulateBasisProducts@3DBe1 Ô e2 Ô e3 Û e1 Ô e2 Ô e3 F
l
TabulateBasisProducts@3DBe1 Ô e2 Ô e3 Û e1 Ô e2 F
l
TabulateBasisProducts@3DBe1 Ô e2 Ô e3 Û e1 F
l
TabulateBasisProducts@3DBe1 Ô e2 Û e1 Ô e2 F
l
TabulateBasisProducts@3DBe1 Ô e2 Û e1 Ô e3 F
l
TabulateBasisProducts@3DBe1 Ô e2 Û e1 F
l
TabulateBasisProducts@3DBe1 Ô e2 Û e3 F
l
TabulateBasisProducts@3DBe1 Û e1 F
l
TabulateBasisProducts@3DBe1 Û e2 F
l
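The tabulations above can be reproduced in miniature with a short sketch (an assumption-laden model, not the GrassmannAlgebra package): basis m-elements are tuples of indices, the metric is Euclidean (orthonormal basis), and the generalized product is computed from its symmetric form:

```python
from itertools import combinations

def perm_sign(seq):
    # sign of the permutation sorting seq
    inv = sum(1 for i in range(len(seq))
                for j in range(i + 1, len(seq)) if seq[i] > seq[j])
    return (-1) ** inv

def split(S, A):
    # factor e_S as sign * (e_A ^ e_rest); return the sign and the remainder
    rest = tuple(i for i in S if i not in A)
    return perm_sign(list(A) + list(rest)), rest

def gp(S, T, lam):
    # order-lam generalized product of orthonormal basis elements e_S, e_T:
    # sum over lam-subsets A of S, B of T of (e_A . e_B) e_{S\A} ^ e_{T\B},
    # where orthonormality gives e_A . e_B == 1 if A == B, else 0
    out = {}
    for A in combinations(S, lam):
        for B in combinations(T, lam):
            if A != B:
                continue
            sa, ra = split(S, A)
            sb, rb = split(T, B)
            if set(ra) & set(rb):
                continue  # repeated factor: the exterior product is zero
            key = tuple(sorted(ra + rb))
            out[key] = out.get(key, 0) + sa * sb * perm_sign(ra + rb)
    return {k: v for k, v in out.items() if v != 0}

assert gp((1,), (1,), 1) == {(): 1}        # e1 gp_1 e1 is the scalar 1
assert gp((1,), (2,), 1) == {}             # e1 gp_1 e2 vanishes
assert gp((1, 2), (1,), 1) == {(2,): 1}    # (e1^e2) gp_1 e1 == e2
assert gp((1, 2, 3), (1, 2), 2) == {(3,): 1}  # common factor of grade 2
```

The last assertion reproduces the common-factor reduction discussed below: when the order of the product equals the grade of the common factor, the generalized product collapses to an interior product. Any sign differences from the book's tabulations would reflect the fixed convention assumed in `split`.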
This suggests a way in which we might be able to find the common factor of two elements. In
Chapter 3: The Regressive Product, we developed the Common Factor Theorem based on the
regressive product. Based on this theorem, we were able to derive an Interior Common Factor
Theorem in Chapter 6. The formula defining the generalized product is a generalization of this
theorem. Thus, it is not so surprising that the generalized product might have application to finding
common factors.
In Chapter 3 we explored finding common factors without yet having introduced the notion of
metric. The generalized product formula above, however, supposes that a metric has been
introduced onto the underlying linear space. But although the interior product (γ[λ] ∧ A[a] ∧ B[b]) ∘ γ[λ]
does depend on the metric, the factors γ[λ] ∧ A[a] ∧ B[b] and γ[λ] do not.
The formulas for finding common factors using the regressive product developed in Chapter 3
required that the sum of the grades of the elements containing the common factors be greater
than the dimension of the underlying linear space. The formula above has the advantage of being
independent of the dimension of the space. A concomitant disadvantage, however, is that the
product γ[λ] ∧ A[a] ∧ B[b] may not be displayed in simple form, making it harder to extract the common
factor γ[λ].
We note also that even though the formula seems to display the common factor γ[λ] explicitly, this is
only because the arguments γ[λ] ∧ A[a] and γ[λ] ∧ B[b] to the generalized product are displayed explicitly in
an already factorized form. In fact we are faced with the same situation that occurs with the
common factor calculation based on the regressive product: a common factor is determined only up
to congruence. We can see this by noting that the formula above shows that, for any scalar s, s γ[λ] is
also a common factor:

(s γ[λ] ∧ A[a]/s) Δ_λ (s γ[λ] ∧ B[b]/s) == (1/s) (γ[λ] ∧ A[a] ∧ B[b]) ∘ (s γ[λ])
Ï Ÿ Example
To begin, suppose we are working in a space of large enough dimension that none of the
products is zero simply because its grade exceeds the dimension of the space. For the purpose of
this example we set the dimension (arbitrarily) to 8.
¯!8 ;
X = 3 e1 Ô e2 Ô e3 + 2 e1 Ô e2 Ô e5 -
14 e1 Ô e3 Ô e5 - 3 e2 Ô e3 Ô e4 + 2 e2 Ô e4 Ô e5 - 14 e3 Ô e4 Ô e5 ;
Y = 6 e1 Ô e2 Ô e3 Ô e4 - 30 e1 Ô e2 Ô e3 Ô e5 -
4 e1 Ô e2 Ô e4 Ô e5 - 15 e1 Ô e3 Ô e4 Ô e5 + 42 e2 Ô e3 Ô e4 Ô e5 ;
8SimpleQ@XD, SimpleQ@YD<
8True, True<
¯%@X Ô YD
0
With the knowledge that they are simple, and (because their exterior product is zero) have a
(simple) common factor (of still unknown grade) we begin computing the progressively higher
orders of generalized products. The first non-zero generalized product will be an expression from
which we can extract the common factor. The order of this non-zero generalized product will give
the grade of the common factor.
The zero-order generalized product is just the exterior product that we computed above. Its
returning 0 means that the common factor is of grade at least 1.
¯%BX Û YF
0
The first-order generalized product also returns 0. This means that the common factor is of grade at
least 2.
¯%BX Û YF
1
The second-order generalized product does not return 0. This means that the common factor is of
grade 2.
Z = ¯%BX Û YF
2
-129 He1 Ô e2 Ô e3 Ô e4 Ô e5 û e1 Ô e3 L -
86 He1 Ô e2 Ô e3 Ô e4 Ô e5 û e1 Ô e5 L + 36 He1 Ô e2 Ô e3 Ô e4 Ô e5 û e2 Ô e3 L +
24 He1 Ô e2 Ô e3 Ô e4 Ô e5 û e2 Ô e5 L - 129 He1 Ô e2 Ô e3 Ô e4 Ô e5 û e3 Ô e4 L -
168 He1 Ô e2 Ô e3 Ô e4 Ô e5 û e3 Ô e5 L + 86 He1 Ô e2 Ô e3 Ô e4 Ô e5 û e4 Ô e5 L
Since we can only extract the common factor up to congruence, we can simply replace each term
of the form e1∧e2∧e3∧e4∧e5 ∘ ei∧ej with ei∧ej.
Z1 = Z ê. e1 Ô e2 Ô e3 Ô e4 Ô e5 û e_ Ø e
-129 e1 Ô e3 - 86 e1 Ô e5 + 36 e2 Ô e3 +
24 e2 Ô e5 - 129 e3 Ô e4 - 168 e3 Ô e5 + 86 e4 Ô e5
We can verify that Z1 is a common factor of each of X and Y by taking their generalized products
of order 1 and seeing if they do indeed result in zero.
¯%BToInnerProductsB¯%B:Z1 Û X, Z1 Û Y>FFF
1 1
80, 0<
Z2 = ExteriorFactorize@Z1 D
12 e2 56 e5 2 e5
-129 e1 - - e4 - Ô e3 +
43 43 3
Remember that an exterior factorization is never unique, reflecting the fact that any linear
combination of these factors is a factor of both X and Y.
12 e2 56 e5 2 e5
Z3 = a e1 - - e4 - + b e3 + ;
43 43 3
¯%@8Z3 Ô X, Z3 Ô Y<D
80, 0<
Summary of properties
The following simple results may be obtained from the definition and results derived above.
The only non-zero generalized product with a scalar is of order zero, leading to the ordinary field
product.
a Δ_0 α[m] == α[m] Δ_0 a == a ∧ α[m] == a ∘ α[m] == a α[m]        10.22
(a α[m]) Δ_λ β[k] == α[m] Δ_λ (a β[k]) == a (α[m] Δ_λ β[k])        10.25
The generalized product of congruent elements of grade m is equal to their interior product
whenever the order λ of the generalized product is equal to the grade of the elements, and is zero
otherwise.
α[m] Δ_λ β[m] == α[m] ∘ β[m],  λ = m        α[m] Δ_λ β[m] == 0,  λ ≠ m        10.27
The generalized product of elements containing a common factor is zero whenever the order λ of the
generalized product is less than the grade of the common factor.
The generalized product of elements containing a common factor reduces to an interior product
whenever the order λ of the generalized product is equal to the grade of the common factor.
The generalized product of a general (not necessarily simple) m-element with itself is zero if m - λ is
odd.
(α1[m] + α2[m] + α3[m] + ⋯) Δ_λ (α1[m] + α2[m] + α3[m] + ⋯) == 0,    m - λ odd        10.30
The generalized product of elements containing a common factor reduces to an exterior product
whenever the common factor is totally orthogonal to the other elements.
(γ[λ] ∧ A[a]) Δ_λ (γ[λ] ∧ B[b]) == (γ[λ] ∧ A[a] ∧ B[b]) ∘ γ[λ] == (γ[λ] ∘ γ[λ]) A[a] ∧ B[b],
γi ∘ Aj == 0,  γi ∘ Bj == 0        10.31
To explore this we start with the symmetric form of the generalized product:
α[m] Δ_λ β[k] == Σ_{i=1}^{C(m,λ)} Σ_{j=1}^{C(k,λ)} (αi[λ] ∘ βj[λ]) αi[m-λ] ∧ βj[k-λ],    0 < λ < Min[m, k]

α[m] == α1[λ] ∧ α1[m-λ] == α2[λ] ∧ α2[m-λ] == ⋯,    β[k] == β1[λ] ∧ β1[k-λ] == β2[λ] ∧ β2[k-λ] == ⋯
For simplicity, and without loss of generality (given the quasi-commutativity of the generalized
product), we can assume k is less than or equal to m, so that λ is then less than k.
We begin by looking at the simplest case, that in which the order λ is equal to 1. The formula
above then becomes:
α[m] Δ_1 β[k] == Σ_{i=1}^{m} Σ_{j=1}^{k} (αi[1] ∘ βj[1]) αi[m-1] ∧ βj[k-1]

α[m] == α1[1] ∧ α1[m-1] == α2[1] ∧ α2[m-1] == ⋯,    β[k] == β1[1] ∧ β1[k-1] == β2[1] ∧ β2[k-1] == ⋯
To get a clearer picture of the form of this sum, we can take a specific example, say for m equal to
3 and k equal to 2:
α[3] Δ_1 β[2] == Σ_{i=1}^{3} Σ_{j=1}^{2} (ai ∘ bj) αi[2] ∧ βj[1]
G = ToScalarProductsBa Û bF
3 1 2
Thus we can see that the generalized product is a linear combination of terms, each of which is an
exterior product with a scalar product as coefficient.
The distinguishing feature of this linear combination is the way in which it combines all the
essentially different combinations of 1-element factors of the original elements. Before we proceed
further, we will explore ways in which we can visualize the essential components that make up this
type of expression.
Key to visualizing the generalized product is to show how it may be built up from four lists: the λ-element factors of each element, and their complementary (cofactor) lists.
As a first example, we will take m equal to 3, k equal to 2, and l equal to 1, as above. Composing
the lists and displaying them in MatrixForm for easier visualization gives
8A, Ac<
a1 a2 Ô a3
: a2 , -Ha1 Ô a3 L >
a3 a1 Ô a2
8B, Bc<
8H b1 b2 L, H b2 -b1 L<
AûB
a1
a2 û H b1 b2 L
a3
and multiply it out (using the interior product as the product operation) by applying the
GrassmannAlgebra function MatrixProduct (or its alias ¯B). (Note that MatrixProduct
removes any MatrixForm wrappers.)
Mi = ¯B@A û BD êê MatrixForm
a1 û b1 a1 û b2
a2 û b1 a2 û b2
a3 û b1 a3 û b2
Ac Ô Bc
a2 Ô a3
-Ha1 Ô a3 L Ô H b2 -b1 L
a1 Ô a2
and multiply it out (using the exterior product as the product operation). We won't simplify the
double negative signs in order to retain the correspondence with the previous expressions.
Finally, we can put these two matrices side by side to show the interior and exterior
components of the final expression.
Mi Me
a1 û b1 a1 û b2 a2 Ô a3 Ô b2 a2 Ô a3 Ô -b1
a2 û b1 a2 û b2 -Ha1 Ô a3 L Ô b2 -Ha1 Ô a3 L Ô -b1
a3 û b1 a3 û b2 a1 Ô a2 Ô b2 a1 Ô a2 Ô -b1
The generalized product is now the sum of the element-by-element products (not the matrix
product) of these matrices.
Ÿ Visualization examples
We can collect the operations of the previous section into a simple function to easily visualize
any generalized product.
VizualizeGeneralizedProductBX_ Û Y_F :=
l_
Ï Example 1
In this first example we visualize all the non-zero generalized products of a 3-element and 2-
element.
VizualizeGeneralizedProductBa Û bF
3 0 2
H 1 û 1 L H a1 Ô a2 Ô a3 Ô b1 Ô b2 L
VizualizeGeneralizedProductBa Û bF
3 1 2
a1 û b1 a1 û b2 a2 Ô a3 Ô b2 a2 Ô a3 Ô -b1
a2 û b1 a2 û b2 -Ha1 Ô a3 L Ô b2 -Ha1 Ô a3 L Ô -b1
a3 û b1 a3 û b2 a1 Ô a2 Ô b2 a1 Ô a2 Ô -b1
VizualizeGeneralizedProductBa Û bF
3 2 2
a1 Ô a2 û b1 Ô b2 a3 Ô 1
a1 Ô a3 û b1 Ô b2 -a2 Ô 1
a2 Ô a3 û b1 Ô b2 a1 Ô 1
Ï Example 2
In this example we visualize some non-zero generalized products of a 4-element and 3-element.
VizualizeGeneralizedProductBa Û bF
4 1 3
a1 û b1 a1 û b2 a1 û b3 a2 Ô a3 Ô a4 Ô b2 Ô b3 a2 Ô a3 Ô a4 Ô -Hb1 Ô b3 L
a2 û b1 a2 û b2 a2 û b3 -Ha1 Ô a3 Ô a4 L Ô b2 Ô b3 -Ha1 Ô a3 Ô a4 L Ô -Hb1 Ô b3
a3 û b1 a3 û b2 a3 û b3 a1 Ô a2 Ô a4 Ô b2 Ô b3 a1 Ô a2 Ô a4 Ô -Hb1 Ô b3 L
a4 û b1 a4 û b2 a4 û b3 -Ha1 Ô a2 Ô a3 L Ô b2 Ô b3 -Ha1 Ô a2 Ô a3 L Ô -Hb1 Ô b3
VizualizeGeneralizedProductBa Û bF
4 2 3
a1 Ô a2 û b1 Ô b2 a1 Ô a2 û b1 Ô b3 a1 Ô a2 û b2 Ô b3
a1 Ô a3 û b1 Ô b2 a1 Ô a3 û b1 Ô b3 a1 Ô a3 û b2 Ô b3
a1 Ô a4 û b1 Ô b2 a1 Ô a4 û b1 Ô b3 a1 Ô a4 û b2 Ô b3
a2 Ô a3 û b1 Ô b2 a2 Ô a3 û b1 Ô b3 a2 Ô a3 û b2 Ô b3
a2 Ô a4 û b1 Ô b2 a2 Ô a4 û b1 Ô b3 a2 Ô a4 û b2 Ô b3
a3 Ô a4 û b1 Ô b2 a3 Ô a4 û b1 Ô b3 a3 Ô a4 û b2 Ô b3
a3 Ô a4 Ô b3 a3 Ô a4 Ô -b2 a3 Ô a4 Ô b1
-Ha2 Ô a4 L Ô b3 -Ha2 Ô a4 L Ô -b2 -Ha2 Ô a4 L Ô b1
a2 Ô a3 Ô b3 a2 Ô a3 Ô -b2 a2 Ô a3 Ô b1
a1 Ô a4 Ô b3 a1 Ô a4 Ô -b2 a1 Ô a4 Ô b1
-Ha1 Ô a3 L Ô b3 -Ha1 Ô a3 L Ô -b2 -Ha1 Ô a3 L Ô b1
a1 Ô a2 Ô b3 a1 Ô a2 Ô -b2 a1 Ô a2 Ô b1
VizualizeGeneralizedProductBa Û bF
4 3 3
a1 Ô a2 Ô a3 û b1 Ô b2 Ô b3 a4 Ô 1
a1 Ô a2 Ô a4 û b1 Ô b2 Ô b3 -a3 Ô 1
a1 Ô a3 Ô a4 û b1 Ô b2 Ô b3 a2 Ô 1
a2 Ô a3 Ô a4 û b1 Ô b2 Ô b3 -a1 Ô 1
The components
Now that we are able to visualize the generalized product in terms of its exterior and interior
product components, it becomes easier to see the influence on the product result of its factors. In
the examples above we expressed the generalized product as the sum of the element-by-element
(ordinary) products of an interior product matrix, and an exterior product matrix.
VizualizeGeneralizedProductBa Û bF
3 1 2
a1 û b1 a1 û b2 a2 Ô a3 Ô b2 a2 Ô a3 Ô -b1
a2 û b1 a2 û b2 -Ha1 Ô a3 L Ô b2 -Ha1 Ô a3 L Ô -b1
a3 û b1 a3 û b2 a1 Ô a2 Ô b2 a1 Ô a2 Ô -b1
The interior products of this generalized product of order 1 comprise all the essentially different
inner products between a 1-element of α[3] and a 1-element of β[2].
a1 û b1 a1 û b2
a2 û b1 a2 û b2
a3 û b1 a3 û b2
Consider now the inner (scalar, in this case) product of a general element belonging to α[3] with a
general element belonging to β[2]. Expanding and simplifying the product gives
¯%@Ha a1 + b a2 + c a3 L û Hd b1 + e b2 LD
a d Ha1 û b1 L + a e Ha1 û b2 L + b d Ha2 û b1 L +
b e Ha2 û b2 L + c d Ha3 û b1 L + c e Ha3 û b2 L
We can see from this that β[2] is totally orthogonal to α[3] if and only if every one of the 6 scalar
products is zero.
a2 Ô a3 Ô b2 a2 Ô a3 Ô -b1
-Ha1 Ô a3 L Ô b2 -Ha1 Ô a3 L Ô -b1
a1 Ô a2 Ô b2 a1 Ô a2 Ô -b1
The exterior product of a general element belonging to α[3] with a general element belonging to β[2] is:
¯%@Ha a1 + b a2 + c a3 L Ô Hd b1 + e b2 LD
a d a1 Ô b1 + a e a1 Ô b2 + b d a2 Ô b1 + b e a2 Ô b2 + c d a3 Ô b1 + c e a3 Ô b2
Ï Common elements
G = ToScalarProductsBa Û bF
3 1 2
G1 = ¯$@G ê. b1 Ø a1 D
-Ha1 û b2 L a1 Ô a2 Ô a3 + Ha1 û a3 L a1 Ô a2 Ô b2 -
Ha1 û a2 L a1 Ô a3 Ô b2 + Ha1 û a1 L a2 Ô a3 Ô b2
G2 = ¯$@G1 ê. b2 Ø a2 D
0
-Ha4 û b3 L a1 Ô a2 Ô a3 Ô b1 Ô b2 + Ha4 û b2 L a1 Ô a2 Ô a3 Ô b1 Ô b3 -
Ha4 û b1 L a1 Ô a2 Ô a3 Ô b2 Ô b3 + Ha3 û b3 L a1 Ô a2 Ô a4 Ô b1 Ô b2 -
Ha3 û b2 L a1 Ô a2 Ô a4 Ô b1 Ô b3 + Ha3 û b1 L a1 Ô a2 Ô a4 Ô b2 Ô b3 -
Ha2 û b3 L a1 Ô a3 Ô a4 Ô b1 Ô b2 + Ha2 û b2 L a1 Ô a3 Ô a4 Ô b1 Ô b3 -
Ha2 û b1 L a1 Ô a3 Ô a4 Ô b2 Ô b3 + Ha1 û b3 L a2 Ô a3 Ô a4 Ô b1 Ô b2 -
Ha1 û b2 L a2 Ô a3 Ô a4 Ô b1 Ô b3 + Ha1 û b1 L a2 Ô a3 Ô a4 Ô b2 Ô b3
H1 = ¯$@H ê. b1 Ø a1 D
-Ha1 û b3 L a1 Ô a2 Ô a3 Ô a4 Ô b2 + Ha1 û b2 L a1 Ô a2 Ô a3 Ô a4 Ô b3 -
Ha1 û a4 L a1 Ô a2 Ô a3 Ô b2 Ô b3 + Ha1 û a3 L a1 Ô a2 Ô a4 Ô b2 Ô b3 -
Ha1 û a2 L a1 Ô a3 Ô a4 Ô b2 Ô b3 + Ha1 û a1 L a2 Ô a3 Ô a4 Ô b2 Ô b3
H2 = ¯$@H1 ê. b2 Ø a2 D
0
G = ToScalarProductsBa Û bF
3 1 2
G1 = ¯%@G ê. b1 Ø Ha a1 + b a2 + c a3 LD
H-a Ha1 û b2 L - b Ha2 û b2 L - c Ha3 û b2 LL a1 Ô a2 Ô a3 +
Ha Ha1 û a3 L + b Ha2 û a3 L + c Ha3 û a3 LL a1 Ô a2 Ô b2 +
H-a Ha1 û a2 L - b Ha2 û a2 L - c Ha2 û a3 LL a1 Ô a3 Ô b2 +
Ha Ha1 û a1 L + b Ha1 û a2 L + c Ha1 û a3 LL a2 Ô a3 Ô b2
G2 = ¯%@G1 ê. b2 Ø Hd a1 + e a2 + f a3 LD
0
Ï A larger expression
-Ha4 û b3 L a1 Ô a2 Ô a3 Ô b1 Ô b2 + Ha4 û b2 L a1 Ô a2 Ô a3 Ô b1 Ô b3 -
Ha4 û b1 L a1 Ô a2 Ô a3 Ô b2 Ô b3 + Ha3 û b3 L a1 Ô a2 Ô a4 Ô b1 Ô b2 -
Ha3 û b2 L a1 Ô a2 Ô a4 Ô b1 Ô b3 + Ha3 û b1 L a1 Ô a2 Ô a4 Ô b2 Ô b3 -
Ha2 û b3 L a1 Ô a3 Ô a4 Ô b1 Ô b2 + Ha2 û b2 L a1 Ô a3 Ô a4 Ô b1 Ô b3 -
Ha2 û b1 L a1 Ô a3 Ô a4 Ô b2 Ô b3 + Ha1 û b3 L a2 Ô a3 Ô a4 Ô b1 Ô b2 -
Ha1 û b2 L a2 Ô a3 Ô a4 Ô b1 Ô b3 + Ha1 û b1 L a2 Ô a3 Ô a4 Ô b2 Ô b3
H1 = ¯$@H ê. b3 Ø z ê. a4 Ø zD
Hz û b2 L z Ô a1 Ô a2 Ô a3 Ô b1 - Hz û b1 L z Ô a1 Ô a2 Ô a3 Ô b2 +
Hz û a3 L z Ô a1 Ô a2 Ô b1 Ô b2 - Hz û a2 L z Ô a1 Ô a3 Ô b1 Ô b2 +
Hz û a1 L z Ô a2 Ô a3 Ô b1 Ô b2 - Hz û zL a1 Ô a2 Ô a3 Ô b1 Ô b2
H2 = ¯$@H1 ê. b2 Ø y ê. a2 Ø yD
0
More completely
H3 = ¯%@H ê. b1 Ø Ha a1 + b a2 + c a3 + d a4 LD
H-a Ha1 û b3 L - b Ha2 û b3 L - c Ha3 û b3 L - d Ha4 û b3 LL a1 Ô a2 Ô a3 Ô a4 Ô b2 +
Ha Ha1 û b2 L + b Ha2 û b2 L + c Ha3 û b2 L + d Ha4 û b2 LL a1 Ô a2 Ô a3 Ô a4 Ô b3 +
H-a Ha1 û a4 L - b Ha2 û a4 L - c Ha3 û a4 L - d Ha4 û a4 LL a1 Ô a2 Ô a3 Ô b2 Ô b3 +
Ha Ha1 û a3 L + b Ha2 û a3 L + c Ha3 û a3 L + d Ha3 û a4 LL a1 Ô a2 Ô a4 Ô b2 Ô b3 +
H-a Ha1 û a2 L - b Ha2 û a2 L - c Ha2 û a3 L - d Ha2 û a4 LL a1 Ô a3 Ô a4 Ô b2 Ô b3 +
Ha Ha1 û a1 L + b Ha1 û a2 L + c Ha1 û a3 L + d Ha1 û a4 LL a2 Ô a3 Ô a4 Ô b2 Ô b3
H4 = ¯%@H3 ê. b2 Ø He a1 + f a2 + g a3 + h a4 LD
0
a2 Ô a3 Ô b2 a2 Ô a3 Ô -b1
-Ha1 Ô a3 L Ô b2 -Ha1 Ô a3 L Ô -b1
a1 Ô a2 Ô b2 a1 Ô a2 Ô -b1
88a2 Ô a3 Ô b2 , a2 Ô a3 Ô -b1 <,
8-Ha1 Ô a3 L Ô b2 , -Ha1 Ô a3 L Ô -b1 <, 8a1 Ô a2 Ô b2 , a1 Ô a2 Ô -b1 <<
Mi1 = MatrixForm[{{a1 ⋅ b1, a1 ⋅ b2}, {a2 ⋅ b1, a2 ⋅ b2}, {a3 ⋅ b1, a3 ⋅ b2}}] /.
  b1 → (a a1 + b a2 + c a3) /. b2 → (d a1 + e a2 + f a3)
a1 û Ha a1 + b a2 + c a3 L a1 û Hd a1 + e a2 + f a3 L
a2 û Ha a1 + b a2 + c a3 L a2 û Hd a1 + e a2 + f a3 L
a3 û Ha a1 + b a2 + c a3 L a3 û Hd a1 + e a2 + f a3 L
Me2 = ¯% /@ Me1
d a1 Ô a2 Ô a3 -a a1 Ô a2 Ô a3
e a1 Ô a2 Ô a3 -b a1 Ô a2 Ô a3
f a1 Ô a2 Ô a3 -c a1 Ô a2 Ô a3
This reduces to
a1 û d Ha a1 + b a2 + c a3 L a1 û H-aL Hd a1 + e a2 + f a3 L
a2 û e Ha a1 + b a2 + c a3 L a2 û H-bL Hd a1 + e a2 + f a3 L
a3 û f Ha a1 + b a2 + c a3 L a3 û H-cL Hd a1 + e a2 + f a3 L
It can be seen that for every scalar product there is a corresponding negated copy which cancels it when the terms are summed.
These two cases are disjoint: factors cannot both be totally orthogonal and have elements in common, since elements in common give rise to scalar products of the form
a1 û a1
For suppose the simplest case
G = ToScalarProducts[a_3 ○_1 b_2]
If b2 is orthogonal to the ai
Go = G /. {a1 ⋅ b2 → 0, a2 ⋅ b2 → 0, a3 ⋅ b2 → 0}
(a3 ⋅ b1) a1 ∧ a2 ∧ b2 - (a2 ⋅ b1) a1 ∧ a3 ∧ b2 + (a1 ⋅ b1) a2 ∧ a3 ∧ b2
== ((a1 ∧ a2 ∧ a3) ⋅ b1) ∧ b2
A larger expression
Ha3 Ô a4 û b2 Ô b3 L a1 Ô a2 Ô b1 - Ha3 Ô a4 û b1 Ô b3 L a1 Ô a2 Ô b2 +
Ha3 Ô a4 û b1 Ô b2 L a1 Ô a2 Ô b3 - Ha2 Ô a4 û b2 Ô b3 L a1 Ô a3 Ô b1 +
Ha2 Ô a4 û b1 Ô b3 L a1 Ô a3 Ô b2 - Ha2 Ô a4 û b1 Ô b2 L a1 Ô a3 Ô b3 +
Ha2 Ô a3 û b2 Ô b3 L a1 Ô a4 Ô b1 - Ha2 Ô a3 û b1 Ô b3 L a1 Ô a4 Ô b2 +
Ha2 Ô a3 û b1 Ô b2 L a1 Ô a4 Ô b3 + Ha1 Ô a4 û b2 Ô b3 L a2 Ô a3 Ô b1 -
Ha1 Ô a4 û b1 Ô b3 L a2 Ô a3 Ô b2 + Ha1 Ô a4 û b1 Ô b2 L a2 Ô a3 Ô b3 -
Ha1 Ô a3 û b2 Ô b3 L a2 Ô a4 Ô b1 + Ha1 Ô a3 û b1 Ô b3 L a2 Ô a4 Ô b2 -
Ha1 Ô a3 û b1 Ô b2 L a2 Ô a4 Ô b3 + Ha1 Ô a2 û b2 Ô b3 L a3 Ô a4 Ô b1 -
Ha1 Ô a2 û b1 Ô b3 L a3 Ô a4 Ô b2 + Ha1 Ô a2 û b1 Ô b2 L a3 Ô a4 Ô b3
J1 = ¯$@J ê. b3 Ø z ê. a4 Ø zD
-Hz Ô a3 û b1 Ô b2 L z Ô a1 Ô a2 +
Hz Ô a2 û b1 Ô b2 L z Ô a1 Ô a3 + Hz Ô b2 û a2 Ô a3 L z Ô a1 Ô b1 -
Hz Ô b1 û a2 Ô a3 L z Ô a1 Ô b2 - Hz Ô a1 û b1 Ô b2 L z Ô a2 Ô a3 -
Hz Ô b2 û a1 Ô a3 L z Ô a2 Ô b1 + Hz Ô b1 û a1 Ô a3 L z Ô a2 Ô b2 +
Hz Ô b2 û a1 Ô a2 L z Ô a3 Ô b1 - Hz Ô b1 û a1 Ô a2 L z Ô a3 Ô b2 +
Hz Ô a3 û z Ô b2 L a1 Ô a2 Ô b1 - Hz Ô a3 û z Ô b1 L a1 Ô a2 Ô b2 -
Hz Ô a2 û z Ô b2 L a1 Ô a3 Ô b1 + Hz Ô a2 û z Ô b1 L a1 Ô a3 Ô b2 +
Hz Ô a1 û z Ô b2 L a2 Ô a3 Ô b1 - Hz Ô a1 û z Ô b1 L a2 Ô a3 Ô b2
J2 = ¯$@J1 ê. b2 Ø x ê. a3 Ø xD
Hx Ô a2 û z Ô b1 - x Ô b1 û z Ô a2 L x Ô z Ô a1 +
H-Hx Ô a1 û z Ô b1 L + x Ô b1 û z Ô a1 L x Ô z Ô a2 +
Hx Ô z û a1 Ô a2 L x Ô z Ô b1 + Hx Ô z û z Ô b1 L x Ô a1 Ô a2 -
Hx Ô z û z Ô a2 L x Ô a1 Ô b1 + Hx Ô z û z Ô a1 L x Ô a2 Ô b1 -
Hx Ô z û x Ô b1 L z Ô a1 Ô a2 + Hx Ô z û x Ô a2 L z Ô a1 Ô b1 -
Hx Ô z û x Ô a1 L z Ô a2 Ô b1 + Hx Ô z û x Ô zL a1 Ô a2 Ô b1
¯%@ToScalarProducts@J2DD
H-Hx û b1 L Hz û a2 L + Hx û a2 L Hz û b1 LL x Ô z Ô a1 +
HHx û b1 L Hz û a1 L - Hx û a1 L Hz û b1 LL x Ô z Ô a2 +
H-Hx û a2 L Hz û a1 L + Hx û a1 L Hz û a2 LL x Ô z Ô b1 +
H-Hx û b1 L Hz û zL + Hx û zL Hz û b1 LL x Ô a1 Ô a2 +
HHx û a2 L Hz û zL - Hx û zL Hz û a2 LL x Ô a1 Ô b1 +
H-Hx û a1 L Hz û zL + Hx û zL Hz û a1 LL x Ô a2 Ô b1 +
HHx û zL Hx û b1 L - Hx û xL Hz û b1 LL z Ô a1 Ô a2 +
H-Hx û zL Hx û a2 L + Hx û xL Hz û a2 LL z Ô a1 Ô b1 +
HHx û zL Hx û a1 L - Hx û xL Hz û a1 LL z Ô a2 Ô b1 +
I-Hx û zL2 + Hx û xL Hz û zLM a1 Ô a2 Ô b1
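The coefficient of a1 ∧ a2 ∧ b1 in this last expansion, (x ⋅ x)(z ⋅ z) - (x ⋅ z)², is the Gram determinant of x and z, and it vanishes precisely when x and z are linearly dependent. This can be cross-checked numerically (a Python sketch with arbitrary Euclidean coordinates, outside GrassmannAlgebra):

```python
import numpy as np

rng = np.random.default_rng(1)
x, z = rng.standard_normal((2, 4))

# The coefficient (x.x)(z.z) - (x.z)^2 ...
coeff = x @ x * (z @ z) - (x @ z) ** 2

# ... equals the Gram determinant of {x, z}.
gram = np.linalg.det(np.array([[x @ x, x @ z], [z @ x, z @ z]]))
print(coeff, gram)

# For dependent x and z the coefficient vanishes (up to rounding).
z_dep = 2.5 * x
print(x @ x * (z_dep @ z_dep) - (x @ z_dep) ** 2)
```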
J3 = ¯$@J2 ê. b1 Ø y ê. a2 Ø yD
H-Hx Ô y û z Ô a1 L + x Ô z û y Ô a1 - x Ô a1 û y Ô zL x Ô y Ô z
¯%@ToScalarProducts@J3DD
0
(a1 ∧ y ∧ x ∧ z) ○_2 (y ∧ x ∧ z) == 0
(g_l ∧ A_a) ○ (g_l ∧ B_b) == g_l ∧ ((A_a ∧ B_b) ⋅ g_l) == (g_l ⋅ g_l) (A_a ∧ B_b)   10.32
gi ⋅ Aj == 0,  gi ⋅ Bj == 0
(-1)^(p m) (a_m ○_l ((g_p ∧ b_k) ⋅ g_p))   10.34
g_p = g1 ∧ g2 ∧ ⋯ ∧ gp,  a_m ⋅ gi == b_k ⋅ gi == 0
Ï Hypothesis
(g_l ∧ A_m) ○_m (g_l ∧ B_k) == g_l ∧ ((A_m ○_(m-l) B_k) ⋅ g_l) == (g_l ⋅ g_l) (A_m ○_(m-l) B_k),
gi ⋅ Aj == 0,  gi ⋅ Bj == 0
A = (a1 ∧ a2 ∧ g1); B = (b1 ∧ b2 ∧ b3 ∧ g1);
¯$[ToInnerProducts[A ○ B]]
0
-Hg1 û g1 L a1 Ô a2 Ô b1 Ô b2 Ô b3 + Hb3 û g1 L a1 Ô a2 Ô b1 Ô b2 Ô g1 -
Hb2 û g1 L a1 Ô a2 Ô b1 Ô b3 Ô g1 + Hb1 û g1 L a1 Ô a2 Ô b2 Ô b3 Ô g1 -
Ha2 û g1 L a1 Ô b1 Ô b2 Ô b3 Ô g1 + Ha1 û g1 L a2 Ô b1 Ô b2 Ô b3 Ô g1
AB2 = ¯$[ToScalarProducts[(a1 ∧ a2 ∧ b1 ∧ b2 ∧ b3 ∧ g1) ○_1 g1]]
-Hg1 û g1 L a1 Ô a2 Ô b1 Ô b2 Ô b3 + Hb3 û g1 L a1 Ô a2 Ô b1 Ô b2 Ô g1 -
Hb2 û g1 L a1 Ô a2 Ô b1 Ô b3 Ô g1 + Hb1 û g1 L a1 Ô a2 Ô b2 Ô b3 Ô g1 -
Ha2 û g1 L a1 Ô b1 Ô b2 Ô b3 Ô g1 + Ha1 û g1 L a2 Ô b1 Ô b2 Ô b3 Ô g1
AB4 = ¯$[AB3 // OrthogonalSimplify[{{a1 ∧ a2, g1}, {b1 ∧ b2 ∧ b3, g1}}]]
Ha2 û b3 L Hg1 û g1 L a1 Ô b1 Ô b2 - Ha2 û b2 L Hg1 û g1 L a1 Ô b1 Ô b3 +
Ha2 û b1 L Hg1 û g1 L a1 Ô b2 Ô b3 - Ha1 û b3 L Hg1 û g1 L a2 Ô b1 Ô b2 +
Ha1 û b2 L Hg1 û g1 L a2 Ô b1 Ô b3 - Ha1 û b1 L Hg1 û g1 L a2 Ô b2 Ô b3
AB5 = ¯$[ToScalarProducts[(a1 ∧ a2) ○_1 (b1 ∧ b2 ∧ b3)]]
Orthogonal elements 2
Taking the exterior product of an element with a generalized product is equivalent to taking the
exterior product only with the elements which are not orthogonal to it.
Consider an element made up of two factors, B and G, and a third element A totally orthogonal to
B. Then the generalized product of A with GÔ B is equal to the exterior product of B with the
generalized product of A and G. That is, there is a sort of quasi-associativity between the factors.
For orthogonal elements, the generalized products cease being zero when the order of the product
equals or exceeds the grade of the common factor. We assume in the following that
di ⋅ ej == 0
(a_m ∧ d_r) ○_l (b_k ∧ e_p) ≠ 0   l < m + k
(a_m ∧ d_r) ○_l (b_k ∧ e_p) ==   l == m + k
(a_m ∧ d_r) ○_l (b_k ∧ e_p) == 0   l > m + k
Ï (a_m ∧ d_r) ○_l e_p   di ⋅ ej == 0   l ≤ Min[m, p]
(a_m ∧ d_r) ○_l e_p ≠ 0   l < m
(a_m ∧ d_r) ○_l e_p == d_r ∧ (a_m ○_l e_p)   l ≤ Min[m, p]
(a_m ∧ d_r) ○_l e_p == 0   l > m
a_m ○_l (g_p ∧ b_k) == (a_m ○_l g_p) ∧ b_k   ai ⋅ bj == 0   l ≤ Min[m, p]
A = (a1 ∧ a2 ∧ d1); B = (e1 ∧ e2 ∧ e3);
VizualizeGeneralizedProduct[A ○_0 B]
( 1 ⋅ 1 ) ( a1 ∧ a2 ∧ d1 ∧ e1 ∧ e2 ∧ e3 )
VizualizeGeneralizedProduct[A ○_1 B] //
  OrthogonalSimplify[{{d1 ∧ d2 ∧ d3, e1 ∧ e2 ∧ e3 ∧ e4}}]
a1 û e1 a1 û e2 a1 û e3
a2 û e1 a2 û e2 a2 û e3
0 0 0
a2 Ô d1 Ô e2 Ô e3 a2 Ô d1 Ô -He1 Ô e3 L a2 Ô d1 Ô e1 Ô e2
-Ha1 Ô d1 L Ô e2 Ô e3 -Ha1 Ô d1 L Ô -He1 Ô e3 L -Ha1 Ô d1 L Ô e1 Ô e2
a1 Ô a2 Ô e2 Ô e3 a1 Ô a2 Ô -He1 Ô e3 L a1 Ô a2 Ô e1 Ô e2
ToScalarProducts[(a1 ∧ a2) ○_1 (e1 ∧ e2 ∧ e3)]
VizualizeGeneralizedProduct[A ○_2 B] //
  OrthogonalSimplify[{{d1 ∧ d2 ∧ d3, e1 ∧ e2 ∧ e3 ∧ e4}}]
H a1 Ô a2 û e1 Ô e2 a1 Ô a2 û e1 Ô e3 a1 Ô a2 û e2 Ô e3 L H 1 Ô e3 1 Ô -e2 1 Ô e1 L
AB2a = ¯$[ToInnerProducts[A ○_2 B] //
  OrthogonalSimplify[{{d1 ∧ d2 ∧ d3, e1 ∧ e2 ∧ e3 ∧ e4}}]]
Ha1 Ô a2 û e3 Ô e4 L d1 Ô d2 Ô d3 Ô e1 Ô e2 -
Ha1 Ô a2 û e2 Ô e4 L d1 Ô d2 Ô d3 Ô e1 Ô e3 +
Ha1 Ô a2 û e2 Ô e3 L d1 Ô d2 Ô d3 Ô e1 Ô e4 +
Ha1 Ô a2 û e1 Ô e4 L d1 Ô d2 Ô d3 Ô e2 Ô e3 -
Ha1 Ô a2 û e1 Ô e3 L d1 Ô d2 Ô d3 Ô e2 Ô e4 + Ha1 Ô a2 û e1 Ô e2 L d1 Ô d2 Ô d3 Ô e3 Ô e4
Ha1 Ô a2 û e3 Ô e4 L d1 Ô d2 Ô d3 Ô e1 Ô e2 -
Ha1 Ô a2 û e2 Ô e4 L d1 Ô d2 Ô d3 Ô e1 Ô e3 +
Ha1 Ô a2 û e2 Ô e3 L d1 Ô d2 Ô d3 Ô e1 Ô e4 +
Ha1 Ô a2 û e1 Ô e4 L d1 Ô d2 Ô d3 Ô e2 Ô e3 -
Ha1 Ô a2 û e1 Ô e3 L d1 Ô d2 Ô d3 Ô e2 Ô e4 + Ha1 Ô a2 û e1 Ô e2 L d1 Ô d2 Ô d3 Ô e3 Ô e4
AB2a ã AB2b
True
VizualizeGeneralizedProduct[A ○_3 B] //
  OrthogonalSimplify[{{d1 ∧ d2 ∧ d3, e1 ∧ e2 ∧ e3 ∧ e4}}]
Inner::incom :
  Length 0 of dimension 1 in {} is incommensurate with
  length 1 of dimension 1 in {{e1 ∧ e2 ∧ e3}}. More…
Inner::incom :
  Length 0 of dimension 1 in {} is incommensurate
  with length 1 of dimension 1 in {{1}}. More…
Inner[CircleMinus, {}, {{e1 ∧ e2 ∧ e3}}, Plus]
Inner[Wedge, {}, {{1}}, Plus]
VizualizeGeneralizedProduct[A ○_4 B] //
  OrthogonalSimplify[{{d1 ∧ d2 ∧ d3, e1 ∧ e2 ∧ e3 ∧ e4}}]
0 d3 Ô 1
0 -d2 Ô 1
0 d1 Ô 1
0 -a2 Ô 1
0 a1 Ô 1
1 ○_m b_k == Σ_{j=1}^{Binomial[k,m]} (1 ⋅ bj_m) ∧ bj_(k-m) == Σ_{j=1}^{Binomial[k,m]} (1 ∧ bj_(k-m)) ⋅ bj_m
1 ○_0 b_k == b_k
When m is greater than zero, the first A form is clearly zero because the grade of the left factor of
the interior product (1) is less than that of the right factor. Hence the B form is zero also. Thus we
have the immediate result that
Σ_{j=1}^{Binomial[k,m]} bj_(k-m) ⋅ bj_m == 0   10.37
b_k == b1_m ∧ b1_(k-m) == b2_m ∧ b2_(k-m) == b3_m ∧ b3_(k-m) == ⋯
⟹ b1_(k-m) ⋅ b1_m + b2_(k-m) ⋅ b2_m + b3_(k-m) ⋅ b3_m + ⋯ == 0
If m is less than or equal to k-m, this sum is zero as shown by 10.16. However, the sum created by
interchanging the order of the factors in the interior products is also zero, since each term in the
sum now becomes zero by virtue of the second factor being of higher grade than the first.
b1_m ⋅ b1_(k-m) + b2_m ⋅ b2_(k-m) + b3_m ⋅ b3_(k-m) + ⋯ == 0
These two results are collected together and referred to as the Zero Interior Sum Theorem. It is
valid for 0 < m < k.
10.38
b_k == b1_m ∧ b1_(k-m) == b2_m ∧ b2_(k-m) == b3_m ∧ b3_(k-m) == ⋯
⟹ b1_m ⋅ b1_(k-m) + b2_m ⋅ b2_(k-m) + b3_m ⋅ b3_(k-m) + ⋯ == 0
⟹ b1_(k-m) ⋅ b1_m + b2_(k-m) ⋅ b2_m + b3_(k-m) ⋅ b3_m + ⋯ == 0
Base[b_3]
{1, b1, b2, b3, b1 ∧ b2, b1 ∧ b3, b2 ∧ b3, b1 ∧ b2 ∧ b3}
Cobase[b_3]
If we take the interior products of the corresponding elements of this list (using Thread), and then
add (Plus) together the cases (Cases) in which the grade of the second factor is equal to the
chosen value of m (here we choose m equal to 1), we get the terms of the zero interior sum.
Plus @@
  Cases[Thread[Base[b_3] ⋅ Cobase[b_3]], a_ ⋅ b_ /; RawGrade[b] == 1]
(b1 ∧ b2) ⋅ b3 + (b1 ∧ b3) ⋅ (-b2) + (b2 ∧ b3) ⋅ b1
We then give our function a name and add a condition 0 < m < RawGrade[b] to restrict the
formulas to the valid range of m.
Ï Examples
ComposeZeroInteriorSum[1][b_2]
b1 ⋅ b2 + b2 ⋅ (-b1)
For a 3-element, with m equal to 1 (one 1-element factor to the right of the interior product sign),
we get
ComposeZeroInteriorSum[1][b_3]
(b1 ∧ b2) ⋅ b3 + (b1 ∧ b3) ⋅ (-b2) + (b2 ∧ b3) ⋅ b1
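This m = 1, k = 3 sum can also be checked with explicit coordinates. Using the Euclidean expansion (a ∧ b) ⋅ c == (b ⋅ c) a - (a ⋅ c) b (the same pattern as the expansions earlier in this chapter), the three cobasis-signed terms cancel identically. A Python sketch with arbitrary coordinates (the helper name is ours):

```python
import numpy as np

rng = np.random.default_rng(2)
b1, b2, b3 = rng.standard_normal((3, 5))

def ip21(a, b, c):
    """Interior product (a ^ b) . c of a 2-element with a 1-element."""
    return (b @ c) * a - (a @ c) * b

# b1 ^ b2 . b3  +  b1 ^ b3 . (-b2)  +  b2 ^ b3 . b1
s = ip21(b1, b2, b3) - ip21(b1, b3, b2) + ip21(b2, b3, b1)
print(np.linalg.norm(s))  # vanishes up to rounding
```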
For m equal to 2 (two 1-element factors to the right of the interior product sign, and hence one to the left), we get
ComposeZeroInteriorSum[2][b_3]
b1 ⋅ (b2 ∧ b3) + b2 ⋅ (-(b1 ∧ b3)) + b3 ⋅ (b1 ∧ b2)
In this case, each of the interior products is zero in its own right. However, remember that ComposeZeroInteriorSum is a GrassmannAlgebra "composer", and hence by design does not effect any simplifications.
For an element of grade 4, we get non-trivial results for m equal to 1, and for m equal to 2.
ComposeZeroInteriorSum[1][b_4]
(b1 ∧ b2 ∧ b3) ⋅ b4 + (b1 ∧ b2 ∧ b4) ⋅ (-b3) + (b1 ∧ b3 ∧ b4) ⋅ b2 + (b2 ∧ b3 ∧ b4) ⋅ (-b1)
Z1 = ComposeZeroInteriorSum[2][b_4]
(b1 ∧ b2) ⋅ (b3 ∧ b4) + (b1 ∧ b3) ⋅ (-(b2 ∧ b4)) + (b1 ∧ b4) ⋅ (b2 ∧ b3) +
(b2 ∧ b3) ⋅ (b1 ∧ b4) + (b2 ∧ b4) ⋅ (-(b1 ∧ b3)) + (b3 ∧ b4) ⋅ (b1 ∧ b2)
Some zero interior sums can be simplified. For example the previous result can be written
Z2 = ¯$[Z1]
2 (b1 ∧ b2 ⋅ b3 ∧ b4) - 2 (b1 ∧ b3 ⋅ b2 ∧ b4) + 2 (b1 ∧ b4 ⋅ b2 ∧ b3)
We can also verify that these expressions are indeed zero by converting them to their scalar
product form. For example:
¯$[ToScalarProducts[Z2]]
0
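The scalar identity behind Z2 is worth isolating: with the inner product of 2-elements expanded as (a ∧ b) ⋅ (c ∧ d) == (a ⋅ c)(b ⋅ d) - (a ⋅ d)(b ⋅ c), the three terms cancel for any four 1-elements, whatever their coordinates. A numerical cross-check in Python (arbitrary coordinates; the helper name is ours):

```python
import numpy as np

rng = np.random.default_rng(3)
b1, b2, b3, b4 = rng.standard_normal((4, 6))

def ip22(a, b, c, d):
    """Inner product (a ^ b) . (c ^ d) = (a.c)(b.d) - (a.d)(b.c)."""
    return (a @ c) * (b @ d) - (a @ d) * (b @ c)

# 2 (b1^b2 . b3^b4) - 2 (b1^b3 . b2^b4) + 2 (b1^b4 . b2^b3)
z2 = 2 * (ip22(b1, b2, b3, b4) - ip22(b1, b3, b2, b4) + ip22(b1, b4, b2, b3))
print(z2)  # vanishes up to rounding
```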
b_k == b1_m ∧ b1_(k-m) == b2_m ∧ b2_(k-m) == b3_m ∧ b3_(k-m) == ⋯
⟹ b1_m ○_l b1_(k-m) + b2_m ○_l b2_(k-m) + b3_m ○_l b3_(k-m) + ⋯ == 0   l ≠ 0   10.39
⟹ b1_(k-m) ○_l b1_m + b2_(k-m) ○_l b2_m + b3_(k-m) ○_l b3_m + ⋯ == 0   l ≠ 0
To compose a zero generalized sum, we simply modify the code for the zero interior sum to
replace the interior product by the generalized product, and add another argument and condition for
l.
ComposeZeroGeneralizedSum[m_, l_][b_] /;
  0 < m < RawGrade[b] && l > 0 := Plus @@ Cases[
Ï Ÿ Examples
Here is the only valid non-trivial zero generalized sum for a 4-element that does not reduce to a
zero interior sum.
ComposeZeroGeneralizedSum[2, 1][b_4]
(b1 ∧ b2) ○_1 (b3 ∧ b4) + (b1 ∧ b3) ○_1 (-(b2 ∧ b4)) + (b1 ∧ b4) ○_1 (b2 ∧ b3) +
(b2 ∧ b3) ○_1 (b1 ∧ b4) + (b2 ∧ b4) ○_1 (-(b1 ∧ b3)) + (b3 ∧ b4) ○_1 (b1 ∧ b2)
Here we tabulate the first 50 cases to confirm that they reduce to zero.
Flatten[Table[
  ¯%[ToScalarProducts[ComposeZeroGeneralizedSum[m, l][b_k]]],
80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<
b_5 == b1 ∧ b2 ∧ b3 ∧ b4 ∧ b5
b1_m ○_l b1_(5-m) + b2_m ○_l b2_(5-m) + b3_m ○_l b3_(5-m) + ⋯ == 0
b_5 == b1_m ∧ b1_(5-m) == b2_m ∧ b2_(5-m) == b3_m ∧ b3_(5-m) == ⋯
The generalized product is zero when l is greater than the minimum of 5 and 5-m. It is also quasi-
commutative, only changing by a (possible) sign when its factors are interchanged:
bi_m ○_l bi_(k-m) == (-1)^((m-l)(k-m-l)) bi_(k-m) ○_l bi_m
F1 == b1_1 ○_1 b1_4 + b2_1 ○_1 b2_4 + b3_1 ○_1 b3_4 + ⋯ == 0
F2 == b1_2 ○_1 b1_3 + b2_2 ○_1 b2_3 + b3_2 ○_1 b3_3 + ⋯ == 0
F3 == b1_2 ○_2 b1_3 + b2_2 ○_2 b2_3 + b3_2 ○_2 b3_3 + ⋯ == 0
Since the terms in F1 and F3 are already interior products, they are zero by the Zero Interior Sum Theorem. It remains to show that F2 is zero.
To display the terms of F2 for b_5, we use the ComposeZeroGeneralizedSum function defined in the previous section. Specifically, the terms of F2 can be written
Z1 = ComposeZeroGeneralizedSum[2, 1][b_5]
(b1 ∧ b2 ∧ b3) ○_1 (b4 ∧ b5) + (b1 ∧ b2 ∧ b4) ○_1 (-(b3 ∧ b5)) +
(b1 ∧ b2 ∧ b5) ○_1 (b3 ∧ b4) + (b1 ∧ b3 ∧ b4) ○_1 (b2 ∧ b5) + (b1 ∧ b3 ∧ b5) ○_1 (-(b2 ∧ b4)) +
(b1 ∧ b4 ∧ b5) ○_1 (b2 ∧ b3) + (b2 ∧ b3 ∧ b4) ○_1 (-(b1 ∧ b5)) +
(b2 ∧ b3 ∧ b5) ○_1 (b1 ∧ b4) + (b2 ∧ b4 ∧ b5) ○_1 (-(b1 ∧ b3)) + (b3 ∧ b4 ∧ b5) ○_1 (b1 ∧ b2)
Before we proceed with the proof for this specific case, we verify first that Z1 is indeed zero.
¯%[ToScalarProducts[Z1]]
0
Z2 = ToInteriorProducts@Z1 D
b1 Ô Hb2 Ô b3 Ô b4 û b5 L - b1 Ô Hb2 Ô b3 Ô b5 û b4 L +
b1 Ô Hb2 Ô b4 Ô b5 û b3 L - b1 Ô Hb3 Ô b4 Ô b5 û b2 L - b2 Ô Hb1 Ô b3 Ô b4 û b5 L +
b2 Ô Hb1 Ô b3 Ô b5 û b4 L - b2 Ô Hb1 Ô b4 Ô b5 û b3 L + b2 Ô Hb3 Ô b4 Ô b5 û b1 L +
b3 Ô Hb1 Ô b2 Ô b4 û b5 L - b3 Ô Hb1 Ô b2 Ô b5 û b4 L + b3 Ô Hb1 Ô b4 Ô b5 û b2 L -
b3 Ô Hb2 Ô b4 Ô b5 û b1 L - b4 Ô Hb1 Ô b2 Ô b3 û b5 L + b4 Ô Hb1 Ô b2 Ô b5 û b3 L -
b4 Ô Hb1 Ô b3 Ô b5 û b2 L + b4 Ô Hb2 Ô b3 Ô b5 û b1 L + b5 Ô Hb1 Ô b2 Ô b3 û b4 L -
b5 Ô Hb1 Ô b2 Ô b4 û b3 L + b5 Ô Hb1 Ô b3 Ô b4 û b2 L - b5 Ô Hb2 Ô b3 Ô b4 û b1 L
Ï 2. Collect together the terms having the same exterior product factor
A simple way to do this is to replace the exterior product with an arbitrary scalar multiple. In this
form it becomes clear that each sum of the terms associated with each scalar multiple is a zero
interior sum, and hence zero, making the complete expression zero, and thus proving the theorem
for this case.
Z3 = Z2 /. bi_ ∧ B_ :> bi B
Hb2 Ô b3 Ô b4 û b5 L b1 - Hb2 Ô b3 Ô b5 û b4 L b1 +
Hb2 Ô b4 Ô b5 û b3 L b1 - Hb3 Ô b4 Ô b5 û b2 L b1 - Hb1 Ô b3 Ô b4 û b5 L b2 +
Hb1 Ô b3 Ô b5 û b4 L b2 - Hb1 Ô b4 Ô b5 û b3 L b2 + Hb3 Ô b4 Ô b5 û b1 L b2 +
Hb1 Ô b2 Ô b4 û b5 L b3 - Hb1 Ô b2 Ô b5 û b4 L b3 + Hb1 Ô b4 Ô b5 û b2 L b3 -
Hb2 Ô b4 Ô b5 û b1 L b3 - Hb1 Ô b2 Ô b3 û b5 L b4 + Hb1 Ô b2 Ô b5 û b3 L b4 -
Hb1 Ô b3 Ô b5 û b2 L b4 + Hb2 Ô b3 Ô b5 û b1 L b4 + Hb1 Ô b2 Ô b3 û b4 L b5 -
Hb1 Ô b2 Ô b4 û b3 L b5 + Hb1 Ô b3 Ô b4 û b2 L b5 - Hb2 Ô b3 Ô b4 û b1 L b5
To confirm that the complete expression is zero, we need first to declare the bi as scalar symbols
so that GrassmannSimplify can effect the collection of terms resulting from expanding the
interior products to scalar products.
DeclareExtraScalarSymbols[b_]; ¯%[ToScalarProducts[Z3]]
H = ((x ○_0 y) ○_1 z) ≠ (x ○_0 (y ○_1 z));
ToScalarProducts[H]
y (x ⋅ z) - x (y ⋅ z) ≠ x (y ⋅ z)
Clearly the two sides of the relation are not in general equal. Hence the generalized product is not
in general associative.
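The failure of associativity is easy to exhibit with explicit 1-elements. The two sides printed above reduce to the vectors y (x ⋅ z) - x (y ⋅ z) and x (y ⋅ z), which generically differ. A Python sketch with arbitrary Euclidean coordinates (outside GrassmannAlgebra):

```python
import numpy as np

rng = np.random.default_rng(4)
x, y, z = rng.standard_normal((3, 3))

lhs = y * (x @ z) - x * (y @ z)   # expansion of the left-nested product
rhs = x * (y @ z)                 # expansion of the right-nested product
print(np.allclose(lhs, rhs))      # False for generic x, y, z
```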
form (x_m ○_l y_k) ○ z_p and x_m ○_l (y_k ○ z_p). Since we have shown above that these forms will in general be different, we will refer to the first one as the A form, and to the second as the B form.
We define the signed triple generalized product of form B to be the triple generalized product of form B multiplied by the factor (-1)^sB, where sB = m l + ½ l (1 + l) + k m + ½ m (1 + m).
where sA = m l + ½ l (l + 1) + (m + k) (n - l) + ½ (n - l) (n - l + 1).
We define a triple generalized sum of form B to be the sum of all the signed triple generalized products of form B of the same elements and which have the same grade n.
Σ_{l=0}^{n} (-1)^sB (x_m ○_l (y_k ○_(n-l) z_p))
where sB = m l + ½ l (l + 1) + k (n - l) + ½ (n - l) (n - l + 1).
For example, we tabulate below the triple generalized sums of form A for grades of 0, 1 and 2:
TableForm
0   (x_m ○_0 y_k) ○_0 z_p
Similarly the triple generalized sums of form B for grades of 0, 1 and 2 are
TableForm
0   x_m ○_0 (y_k ○_0 z_p)
1   (-1)^(1+k) (x_m ○_0 (y_k ○_1 z_p)) + (-1)^(1+m) (x_m ○_1 (y_k ○_0 z_p))
Σ_{l=0}^{n} (-1)^sA ((x_m ○_l y_k) ○_(n-l) z_p) == Σ_{l=0}^{n} (-1)^sB (x_m ○_l (y_k ○_(n-l) z_p))   10.40
where sA = m l + ½ l (l + 1) + (m + k) (n - l) + ½ (n - l) (n - l + 1)
and sB = m l + ½ l (l + 1) + k (n - l) + ½ (n - l) (n - l + 1).
If this conjecture can be shown to be true, then the associativity of the Clifford product of a general
Grassmann expression can be straightforwardly proven using the definition of the Clifford product
in terms of exterior and interior products (or, what is equivalent, in terms of generalized products).
That is, the rules of Clifford algebra may be entirely determined by the axioms and theorems of the
Grassmann algebra, making it unnecessary, and indeed potentially inconsistent, to introduce any
special Clifford algebra axioms.
A = TripleGeneralizedSumA[2][x_3, y_2, z]
B = TripleGeneralizedSumB[2][x_3, y_2, z]
We could demonstrate the equality of these two sums directly by converting their difference into
scalar product form, thus allowing terms to cancel.
Expand@ToScalarProducts@A - BDD
0
However it is more instructive to convert each of the terms in the difference separately to scalar
products.
X = List @@ (A - B)
{(x_3 ○_0 y_2) ○_2 z, -((x_3 ○_2 y_2) ○_0 z), x_3 ○_1 (y_2 ○_1 z),
 (x_3 ○_1 y_2) ○_1 z, x_3 ○_2 (y_2 ○_0 z), -(x_3 ○_0 (y_2 ○_2 z))}
X1 = Expand@ToScalarProducts@XDD
80, -Hx2 û y2 L Hx3 û y1 L z Ô x1 + Hx2 û y1 L Hx3 û y2 L z Ô x1 +
Hx1 û y2 L Hx3 û y1 L z Ô x2 - Hx1 û y1 L Hx3 û y2 L z Ô x2 -
Hx1 û y2 L Hx2 û y1 L z Ô x3 + Hx1 û y1 L Hx2 û y2 L z Ô x3 ,
-Hz û y2 L Hx3 û y1 L x1 Ô x2 + Hz û y1 L Hx3 û y2 L x1 Ô x2 +
Hz û y2 L Hx2 û y1 L x1 Ô x3 - Hz û y1 L Hx2 û y2 L x1 Ô x3 -
Hz û y2 L Hx1 û y1 L x2 Ô x3 + Hz û y1 L Hx1 û y2 L x2 Ô x3 ,
Hz û y2 L Hx3 û y1 L x1 Ô x2 - Hz û y1 L Hx3 û y2 L x1 Ô x2 -
Hz û y2 L Hx2 û y1 L x1 Ô x3 + Hz û y1 L Hx2 û y2 L x1 Ô x3 -
Hz û x3 L Hx2 û y2 L x1 Ô y1 + Hz û x2 L Hx3 û y2 L x1 Ô y1 +
Hz û x3 L Hx2 û y1 L x1 Ô y2 - Hz û x2 L Hx3 û y1 L x1 Ô y2 +
Hz û y2 L Hx1 û y1 L x2 Ô x3 - Hz û y1 L Hx1 û y2 L x2 Ô x3 +
Hz û x3 L Hx1 û y2 L x2 Ô y1 - Hz û x1 L Hx3 û y2 L x2 Ô y1 -
Hz û x3 L Hx1 û y1 L x2 Ô y2 + Hz û x1 L Hx3 û y1 L x2 Ô y2 -
Hz û x2 L Hx1 û y2 L x3 Ô y1 + Hz û x1 L Hx2 û y2 L x3 Ô y1 +
Hz û x2 L Hx1 û y1 L x3 Ô y2 - Hz û x1 L Hx2 û y1 L x3 Ô y2 ,
Hx2 û y2 L Hx3 û y1 L z Ô x1 - Hx2 û y1 L Hx3 û y2 L z Ô x1 -
Hx1 û y2 L Hx3 û y1 L z Ô x2 + Hx1 û y1 L Hx3 û y2 L z Ô x2 +
Hx1 û y2 L Hx2 û y1 L z Ô x3 - Hx1 û y1 L Hx2 û y2 L z Ô x3 +
Hz û x3 L Hx2 û y2 L x1 Ô y1 - Hz û x2 L Hx3 û y2 L x1 Ô y1 -
Hz û x3 L Hx2 û y1 L x1 Ô y2 + Hz û x2 L Hx3 û y1 L x1 Ô y2 -
Hz û x3 L Hx1 û y2 L x2 Ô y1 + Hz û x1 L Hx3 û y2 L x2 Ô y1 +
Hz û x3 L Hx1 û y1 L x2 Ô y2 - Hz û x1 L Hx3 û y1 L x2 Ô y2 +
Hz û x2 L Hx1 û y2 L x3 Ô y1 - Hz û x1 L Hx2 û y2 L x3 Ô y1 -
Hz û x2 L Hx1 û y1 L x3 Ô y2 + Hz û x1 L Hx2 û y1 L x3 Ô y2 , 0<
X2 = Plus üü X1
0
TestTripleGeneralizedSumConjecture[n_][x_,y_,z_]:=
Expand[ToScalarProducts[TripleGeneralizedSumA[n][x,y,z]-
TripleGeneralizedSumB[n][x,y,z]]]
As an example of this procedure we run through the first 320 cases. A value of zero will validate
the conjecture for that case.
Table[TestTripleGeneralizedSumConjecture[n][x_m, y_k, z_p],
80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<
A conjecture
As an example we suggest a conjecture for a relationship amongst generalized products of order 1
of the following form, valid for any m, k, and p.
The signs (-1)^x and (-1)^y are at this point unknown, but if the conjecture is true we expect them to be simple functions of m, k, p and their products. We conjecture therefore that in any particular case they will be either +1 or -1.
Y = ToScalarProducts[b_k ○_1 g_p ○_1 a_m];
X + Y + Z == 0 || X + Y - Z == 0 || X - Y + Z == 0 || X - Y - Z == 0]]
It can be seen that the formula is valid for some values of x and y in the first 64 cases. This gives us some confidence that our efforts may not be wasted if we wished to prove the formula and/or find the values of x and y in terms of m, k, and p.
We can test this by reducing the expression to scalar products and tabulating cases.
Flatten[Table[A = ToScalarProducts[(g_p ∧ a_m) ○_l (g_p ∧ b_k)];
80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<
Of course, this result is also valid in the case that either or both of the grades of a_m and b_k are zero.
Ÿ The case l ≥ p
For the case where l is equal to, or greater than, the grade of the intersection p, the generalized product of such intersecting elements may be expressed as the interior product of the common factor g_p with a generalized product of the remaining factors. This generalized product is of order lower by the grade of the common factor. Hence the formula is only valid for l ≥ p (since the generalized product has only been defined for non-negative orders).
10.44
(g_p ∧ a_m) ○_l (g_p ∧ b_k) == (-1)^(p m) (a_m ○_(l-p) ((g_p ∧ b_k) ⋅ g_p))
We can test these formulae by reducing the expressions on either side to scalar products and
tabulating cases. For l > p, the first formula gives
FlattenBTableB
80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<
The second formula can be derived from the first by using the quasi-commutativity of the
generalized product. To check, we tabulate the first few cases:
FlattenB
80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<
FlattenB
80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<
80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<
Flatten[Table[ToScalarProducts[a_m ○_l b_k] /.
  OrthogonalSimplificationRules[{{a_m, b_k}}],
80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<
Consider the generalized product a_m ○_l (g_p ∧ b_k) of an element a_m and another element g_p ∧ b_k in which a_m is totally orthogonal to b_k. But since ai ⋅ bj == 0, the interior products of a_m and any factors of g_p ∧ b_k in the expansion of the generalized product will be zero whenever they contain any factors of b_k. That is, the only non-zero terms in the expansion of the generalized product are of the form (a_m ⋅ gi_l) ∧ gi_(p-l) ∧ b_k. Hence:
Flatten[Table[ToScalarProducts[a_m ○_l (g_p ∧ b_k) - (a_m ○_l g_p) ∧ b_k] /.
  OrthogonalSimplificationRules[{{a_m, b_k}}],
80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<
By using the quasi-commutativity of the generalized and exterior products, we can transform the
above results to:
(a_m ∧ g_p) ○_l b_k == (-1)^(m l) a_m ∧ (g_p ○_l b_k)   ai ⋅ bj == 0   l ≤ Min[k, p]   10.50
Flatten[Table[ToScalarProducts[(a_m ∧ g_p) ○_l b_k - (-1)^(m l) a_m ∧ (g_p ○_l b_k)] /.
  OrthogonalSimplificationRules[{{a_m, b_k}}],
80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<
Ÿ The case l ≥ p
Consider now the case of three simple elements a_m, b_k and g_p where g_p is totally orthogonal to both a_m and b_k (and hence to a_m ∧ b_k). A simple element g_p is totally orthogonal to an element a_m if and only if
The generalized product of the elements g_p ∧ a_m and g_p ∧ b_k of order l ≥ p is the scalar factor g_p ⋅ g_p times the generalized product of the factors a_m and b_k, but of an order lower by the grade of the common factor; that is, of order l - p.
Note here that the factors of g_p are not necessarily orthogonal to each other. Neither are the factors of a_m, nor the factors of b_k.
We can test this out by making a table of cases. Here we look at the first 25 cases.
Flatten[Table[ToScalarProducts[(g_p ∧ a_m) ○_l (g_p ∧ b_k) - (g_p ⋅ g_p) (a_m ○_(l-p) b_k)] /.
80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<
By combining the results of the previous section with those of this section, we see that we can also
write that:
(g_p ∧ a_m) ○_l b_k == a_m ○_l (g_p ∧ b_k) == (-1)^(p m) (a_m ○_l ((g_p ∧ b_k) ⋅ g_p))   10.53
g_p = g1 ∧ g2 ∧ ⋯ ∧ gp,  a_m ⋅ gi == b_k ⋅ gi == 0
p m k
FlattenB
80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<
In spaces of all dimensions, generalized products in which one of the factors is a scalar will reduce
to either the usual (field or linear space) product by the scalar quantity (when the order of the
product is zero) or else to zero (when the order of the product is greater than zero). Except for a
brief discussion of the implications of this for 0-space, further discussions will assume that the
factors of the generalized products are not scalar.
In spaces of dimension 0, 1, or 2, all generalized products reduce either to the ordinary field
product, the exterior product, the interior product, or zero. Hence they bring little new to a study of
these spaces.
These results will be clear upon reference to the simple properties of generalized products
discussed in previous sections. The only result that may not be immediately obvious is that of the
generalized product of order one of two 2-elements in a 2-space, for example a Û b. In a 2-space, a
2 1 2 2
and b must be congruent (differ only by (possibly) a scalar factor), and hence by formula 10.17 is
2
zero.
If one of the factors is a scalar, then the only non-zero generalized product is that of order zero,
equivalent to the ordinary field product (and, incidentally, also to the exterior and interior products).
Ÿ 0-space
In a space of zero dimensions (the underlying field of the Grassmann algebra), the generalized
product reduces to the exterior product and hence to the usual scalar (field) product.
ToScalarProducts[a ○_0 b]
a b
Higher order products (for example a ○_1 b) are of course zero, since the order of the product is greater than the minimum of the grades of the factors (which in this case is zero).
In sum: The only non-zero generalized product in a space of zero dimensions is the product of
order zero, equivalent to the underlying field product.
Ÿ 1-space
(a e1) ○_0 (b e1) == a b (e1 ∧ e1) == 0
ToScalarProducts[(a e1) ○_0 (b e1)]
0
(a e1) ○_1 (b e1) == a b (e1 ⋅ e1)
ToScalarProducts[(a e1) ○_1 (b e1)]
a b (e1 ⋅ e1)
In sum: The only non-zero generalized product of elements (other than where one is a scalar) in a
space of one dimension is the product of order one, equivalent to the scalar product.
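The two 1-space reductions can be mimicked with plain numbers. Representing the 1-element a e1 by its single coordinate a, and assuming a Euclidean metric e1 ⋅ e1 = 1 (any metric value works the same way), a Python sketch:

```python
e1_dot_e1 = 1.0     # assumed metric value e1 . e1
a, b = 3.0, -2.0    # coordinates of the 1-elements a e1 and b e1

order0 = a * b * 0.0          # a b (e1 ^ e1): the wedge of e1 with itself is 0
order1 = a * b * e1_dot_e1    # a b (e1 . e1): the scalar product
print(order0, order1)         # 0.0 -6.0
```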
Ÿ 2-space
(a e1 + b e2) ○_0 (c e1 + d e2) == (a e1 + b e2) ∧ (c e1 + d e2)
(a e1 + b e2) ○_1 (c e1 + d e2) == (a e1 + b e2) ⋅ (c e1 + d e2)
ToScalarProducts[(a e1 + b e2) ○_1 (c e1 + d e2)]
The product of the first order of a 1-element and a 2-element is also commutative and reduces to
their interior product.
(a e1 + b e2) ○_1 (c e1 ∧ e2) ==
  (c e1 ∧ e2) ○_1 (a e1 + b e2) == (c e1 ∧ e2) ⋅ (a e1 + b e2)
ToScalarProducts[(a e1 + b e2) ○_1 (c e1 ∧ e2)]
The product of the first order of two 2-elements is zero. This can be determined directly from
[10.17].
a_m ○_l a_m == 0,   l ≠ m
ToScalarProducts[(a e1 ∧ e2) ○_1 (b e1 ∧ e2)]
ToScalarProducts[(a e1 + b e2) ○_2 (c e1 + d e2)]
Products of the second order of two 2-elements reduce to their inner product:
(a e1 ∧ e2) ○_2 (b e1 ∧ e2) == (a e1 ∧ e2) ⋅ (b e1 ∧ e2)
ToInnerProducts[(a e1 ∧ e2) ○_2 (b e1 ∧ e2)]
(a e1 ∧ e2) ⋅ (b e1 ∧ e2)
In sum: The only non-zero generalized products of elements (other than where one is a scalar) in a
space of two dimensions reduce to exterior, interior, inner or scalar products.
10.17 Summary
† To be completed
11 Exploring Hypercomplex
Algebra
Ï
11.1 Introduction
In this chapter we explore the relationship of Grassmann algebra to the hypercomplex algebras.
Typical examples of hypercomplex algebras are the algebras of the real numbers ℝ, complex numbers ℂ, quaternions ℍ, octonions (or Cayley numbers) 𝕆, sedenions 𝕊, and Clifford algebras Cℓ. We will show that each of these algebras can be generated as an algebra of Grassmann numbers under a new product which we take the liberty of calling the hypercomplex product.
This hypercomplex product of an m-element a_m and a k-element b_k is denoted a_m ∘ b_k and defined as a sum of signed generalized Grassmann products of a_m with b_k.

a_m ∘ b_k == Σ_{l=0}^{Min[m,k]} s_{m,l,k} (a_m ○_l b_k)   11.1

Here, the sign s_{m,l,k} is a function of the grades m and k of the factors and the order l of the generalized product and takes the values +1 or -1 depending on the type of hypercomplex product being defined.
Note particularly that this approach does not require the algebra to be defined in terms of
generators or basis elements other than those of the Grassmann algebra. We will see in what
follows that this leads to considerably more insight on the nature of the hypercomplex algebras
than afforded by the traditional algebraic approach. The generalized Grassmann products on the
right-hand side subsume and extend the principal dimension-independent products of geometric
linear algebra: the exterior and interior products. It is enticing to postulate that many algebras with
a geometric interpretation may be defined by sums of generalized products.
Our approach then, is not to define a hypercomplex algebra by defining new (hypercomplex)
elements, but rather to define a new product called the hypercomplex product which acts on
ordinary Grassmann numbers. This approach has the conceptual advantage that the hypercomplex
algebra can now sit comfortably inside the Grassmann algebra and be used in geometric
constructions just like the exterior and interior products.
Strictly speaking, this means that there are no hypercomplex numbers per se, only Grassmann numbers acting under the hypercomplex product. The hypercomplex product defined above, in which the hypercomplex signs σ[m,λ,k] are not yet specified, may be viewed as a definition of the most unconstrained hypercomplex algebra. Different hypercomplex algebras may be defined by constraining these signs to have given values. The traditional hypercomplex algebras will have one set of constraints, while the Clifford algebras will have another. And just as with the Grassmann algebras, they will have significantly different properties depending on the dimension of their underlying space. We will also see that one of the major sources of difference is the fact that spaces of dimension higher than three can have non-simple elements. For example, this is the reason why the algebra of octonions, which lives in a Grassmann algebra with an underlying space of three dimensions, is the largest hypercomplex algebra with a scalar norm.
There is some care needed with terminology. In our approach to hypercomplex algebras, there are
no hypercomplex numbers per se, but only different types of hypercomplex product (that is, with
different sets of constraints on the hypercomplex signs), and spaces of different dimension. These
two factors together give the traditional hypercomplex algebras their individual flavours. So, what
then is a quaternion? From our approach, it is not an object, but rather a Grassmann number living
in a two-dimensional subspace (the Grassmann algebra on an underlying linear space of two
dimensions has 4 basis elements) responding to a specific type of hypercomplex product. With this
meaning, and in order to concord with traditional usage, we shall continue to speak of
hypercomplex numbers, such as quaternions, as if they were objects in their own right.
"Hypercomplex number" should be read as "Grassmann number under the hypercomplex product".
"Quaternion" should be read as "Hypercomplex number in a Grassmann algebra of two
dimensions". More carefully, by "Grassmann algebra of two dimensions", we mean a Grassmann
algebra constructed on a two-dimensional subspace of the underlying linear space.
It turns out that the real numbers are hypercomplex numbers in a subspace of zero dimensions, the
complex numbers are hypercomplex numbers in a subspace of one dimension, the quaternions are
hypercomplex numbers in a subspace of two dimensions, the octonions are hypercomplex numbers
in a subspace of three dimensions, and the sedenions are hypercomplex numbers in a subspace of
four dimensions. The number of base elements in the hypercomplex algebra is equal to the number
of basis elements in the corresponding Grassmann algebra.
We can see from this approach that different hypercomplex numbers can now live and work in the
same space. For example, complex numbers can operate on vectors, or quaternions on octonions.
In Chapter 12: Exploring Clifford Algebra we will also show that the Clifford product may be defined in a space of any number of dimensions as a hypercomplex product with signs:

σ[m,λ,k] == (-1)^(λ(m-λ) + λ(λ-1)/2)
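As a quick numerical sketch (Python; the helper name is ours, not part of the GrassmannAlgebra package), the Clifford sign above can be tabulated for small grades:

```python
# Clifford-product sign from the formula above:
# sigma[m, lam, k] = (-1)^(lam*(m - lam) + lam*(lam - 1)/2).
# Note that it depends only on m and lam, not on k.
def clifford_sign(m, lam):
    return (-1) ** (lam * (m - lam) + lam * (lam - 1) // 2)

# Sign patterns for the first few grades of the first factor:
print([clifford_sign(1, lam) for lam in range(2)])  # [1, 1]
print([clifford_sign(2, lam) for lam in range(3)])  # [1, -1, -1]
print([clifford_sign(3, lam) for lam in range(4)])  # [1, 1, -1, -1]
```

The m == 1 pattern [1, 1] is what makes the Clifford product of two 1-elements come out as x•y + x∧y in Chapter 12.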
We could also explore the algebras generated by products of the type defined by definition 11.1, but which have some of the σ[m,λ,k] defined to be zero. For example the relations

σ[m,λ,k] == 1 for λ == 0;  σ[m,λ,k] == 0 for λ > 0

define the algebra with the hypercomplex product reducing to the exterior product, and

σ[m,λ,k] == 1 for λ == Min[m,k];  σ[m,λ,k] == 0 for λ ≠ Min[m,k]

define the algebra with the hypercomplex product reducing to the interior product.
Both of these definitions however lead to products having zero divisors; that is, some products can be zero even though neither of their factors is zero. Because one of the principally useful characteristics of the scalar-normed hypercomplex numbers (that is, hypercomplex numbers up to and including the octonions) is that they have no zero divisors, we shall limit ourselves in this chapter to exploring just those algebras.
That is, we shall always assume that σ[m,λ,k] is not zero.
We begin by exploring the constraints we would require on the hypercomplex signs σ[m,λ,k] in definition 11.1 which will lead to algebras with a scalar norm. It turns out that we can ensure a scalar norm (up to the octonions) by constraining only a subset of the signs. The specification of the rest of the signs will then be explored by considering spaces of increasing dimension, starting with a space of zero dimensions generating the real numbers. In order to embed the algebras defined on lower dimensional spaces within those defined on higher dimensional spaces, we will maintain the lower dimensional relations we determine for the hypercomplex signs when we explore the higher dimensional spaces.
Finally, we note that this approach to hypercomplex numbers is invariant in a tensorial sense. The traditional product tables for the hypercomplex numbers are recovered in our approach by assuming the underlying space has a Euclidean metric. However, we will show that the hypercomplex product definition in 11.1 above is still meaningful under different metrics. Indeed, a simple change of metric can lead to a whole range of new hypercomplex algebras.
Distributivity
Distributivity of hypercomplex products over sums of elements can be shown from definition 11.1 using the distributivity of the generalized Grassmann products. For sums of elements of the same grade m:

(α1 + α2 + …) ∘ β_k == α1 ∘ β_k + α2 ∘ β_k + …
α_m ∘ (β1 + β2 + …) == α_m ∘ β1 + α_m ∘ β2 + …    11.2

and for sums of elements of different grades m1, m2, … and k1, k2, …:

(α1_m1 + α2_m2 + …) ∘ β_k == α1_m1 ∘ β_k + α2_m2 ∘ β_k + …
α_m ∘ (β1_k1 + β2_k2 + …) == α_m ∘ β1_k1 + α_m ∘ β2_k2 + …    11.3
Factorization of scalars
By using the definition 11.1 of the hypercomplex product and the properties of the generalized Grassmann product we can easily show that scalars may be factorized out of any hypercomplex product:

(a α_m) ∘ β_k == α_m ∘ (a β_k) == a (α_m ∘ β_k)    11.4
Multiplication by scalars
The next property we will require of the hypercomplex product is that it behaves as expected when one of the elements is a scalar. That is:

a ∘ β_m == β_m ∘ a == a β_m    11.5

From the relations 11.5 and the definition 11.1 we can determine the constraints on the σ[m,λ,k] which will accomplish this.

β_m ∘ a == Σ{λ=0…Min[m,0]} σ[m,λ,0] (β_m Δλ a) == σ[m,0,0] (β_m ∧ a) == σ[m,0,0] a β_m

a ∘ β_m == Σ{λ=0…Min[0,m]} σ[0,λ,m] (a Δλ β_m) == σ[0,0,m] (a ∧ β_m) == σ[0,0,m] a β_m

Hence the first constraints we impose on the σ[m,λ,k] to ensure the properties we require for multiplication by scalars are that:

σ[m,0,0] == σ[0,0,m] == 1    11.6
HypercomplexScalarConstraints := {σ[m_,0,0] ⧴ 1, σ[0,0,k_] ⧴ 1}    11.7

These rules are the first in a collection which we will eventually use to define the hypercomplex algebras.
The conjugate
The conjugate of a Grassmann number X is denoted Xc and is defined as the body Xb (scalar part) of X minus the soul Xs (non-scalar part) of X.

X == Xb + Xs    Xc == Xb - Xs    11.9
Example
With a 3-space declared, let X be a general Grassmann number:
X == a0 + a1 e1 + a2 e2 + a3 e3 + a4 e1∧e2 + a5 e1∧e3 + a6 e2∧e3 + a7 e1∧e2∧e3

{Body[X], Soul[X]}
{a0, a1 e1 + a2 e2 + a3 e3 + a4 e1∧e2 + a5 e1∧e3 + a6 e2∧e3 + a7 e1∧e2∧e3}

GrassmannConjugate[X]
a0 - a1 e1 - a2 e2 - a3 e3 - a4 e1∧e2 - a5 e1∧e3 - a6 e2∧e3 - a7 e1∧e2∧e3
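The body/soul/conjugate decomposition is easy to sketch outside Mathematica as well. The following Python fragment (our own minimal representation, not the GrassmannAlgebra package) stores a Grassmann number as a dictionary from sorted tuples of basis indices to coefficients:

```python
# Minimal sketch (ours, not the GrassmannAlgebra package): a Grassmann
# number as a dict from sorted tuples of basis indices to coefficients,
# e.g. {(): a0, (1,): a1, (1, 2): a4} for a0 + a1 e1 + a4 e1^e2.
def body(x):
    return {(): x.get((), 0)}                    # scalar part

def soul(x):
    return {k: v for k, v in x.items() if k}     # non-scalar part

def conjugate(x):
    # Xc = Xb - Xs: keep the scalar term, negate every other term.
    return {k: (v if k == () else -v) for k, v in x.items()}

X = {(): 2, (1,): 3, (1, 2): 5}                  # 2 + 3 e1 + 5 e1^e2
assert body(X) == {(): 2}
assert soul(X) == {(1,): 3, (1, 2): 5}
assert conjugate(X) == {(): 2, (1,): -3, (1, 2): -5}
```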
The norm
One of the important properties that ℝ, ℂ, ℍ, and 𝕆 possess in common is that of having a positive scalar norm.
The norm of a Grassmann number X is denoted 𝒩[X] and is defined as the hypercomplex product of X with its conjugate.
The norm of an m-element α_m is then denoted 𝒩[α_m] and defined as the hypercomplex product of α_m with its conjugate αc_m. If α_m is not scalar, then αc_m is simply -α_m. Hence, for a simple m-element (for which the only surviving generalized product is the interior product term):

𝒩[α_m] == α_m ∘ αc_m == -σ[m,m,m] (α_m • α_m)    11.11

Equation 11.11 then implies that for the norm to be a positive scalar quantity we must have σ[m,m,m] == -1, in which case

𝒩[α_m] == α_m • α_m    11.12

σ[m,m,m] == -1,  m ≠ 0    11.13

The norm of a scalar a is just a² == a•a, so formula 11.12 applies also for m equal to zero.
HypercomplexSquareConstraints := {σ[m_,m_,m_] /; m ≠ 0 ⧴ -1}    11.14
These rules are the second in a collection which we will eventually use to define the hypercomplex
algebras.
We remark that, although the constraints 11.14 have been determined from a consideration of the hypercomplex product of a simple element with itself, they must also now apply to the interior product term of any hypercomplex product of two elements of the same grade. From formula 11.1, the hypercomplex product of two elements of the same grade can be written

α_m ∘ β_m == Σ{λ=0…m} σ[m,λ,m] (α_m Δλ β_m) == σ[m,0,m] (α_m Δ0 β_m) + σ[m,1,m] (α_m Δ1 β_m) + … + σ[m,m,m] (α_m Δm β_m)

The last term involves σ[m,m,m], and we must take it to be -1 here also.
This expansion also reminds us that while α_m ∘ α_m is necessarily scalar, α_m ∘ β_m is not.
To investigate the norm further we look at the product A∘A. Suppose A is the sum of two (not necessarily simple) elements of different grade: an m-element α_m and a k-element β_k. Then A∘A becomes:

A ∘ A == (α_m + β_k) ∘ (α_m + β_k) == α_m∘α_m + α_m∘β_k + β_k∘α_m + β_k∘β_k
We would like the norm of a general Grassmann number to be a scalar quantity. But none of the generalized product components of α_m∘β_k or β_k∘α_m can be scalar if m and k are different, so we must choose to make β_k∘α_m equal to -(α_m∘β_k) as a fundamental defining axiom of the hypercomplex product, thus eliminating both products from the expression for the norm. This requirement of antisymmetry can always be satisfied because, since the grades are different, the two products are always distinguishable.

α_m ∘ β_k == -(β_k ∘ α_m),  m ≠ k    11.15
α_m ∘ β_k == Σ{λ=0…Min[m,k]} σ[m,λ,k] (α_m Δλ β_k)

Now write the definition for the factors in the reverse order, and reverse the order of the generalized Grassmann products on the right hand side back to their initial order. (The formula for doing this is in the previous chapter.)

β_k ∘ α_m == Σ{λ=0…Min[m,k]} σ[k,λ,m] (β_k Δλ α_m) == Σ{λ=0…Min[m,k]} σ[k,λ,m] (-1)^((m-λ)(k-λ)) (α_m Δλ β_k)

Applying the antisymmetry axiom 11.15 then requires

σ[m,λ,k] == -(-1)^((m-λ)(k-λ)) σ[k,λ,m]

This in turn implies that

if (m-λ)(k-λ) is odd, then σ[m,λ,k] == σ[k,λ,m];
if (m-λ)(k-λ) is even, then σ[m,λ,k] == -σ[k,λ,m].
To see when (m-λ)(k-λ) is odd or even, tabulate the parities:

m     k     λ     m-λ   k-λ   (m-λ)(k-λ)
even  even  even  even  even  even
even  even  odd   odd   odd   odd
even  odd   even  even  odd   even
even  odd   odd   odd   even  even
odd   even  even  odd   even  even
odd   even  odd   even  odd   even
odd   odd   even  odd   odd   odd
odd   odd   odd   even  even  even

This table can be summarized by the rule: if m and k are of the same parity, but λ is of the opposite parity, then the parity of (m-λ)(k-λ) is odd; otherwise it is even. This can be encoded in the constraint:
HypercomplexAntisymmetryConstraints :=
  {σ[m_?EvenQ, l_?OddQ, k_?EvenQ] /; m < k ⧴ σ[k,l,m],
   σ[m_?OddQ, l_?EvenQ, k_?OddQ] /; m < k ⧴ σ[k,l,m],
   σ[m_, l_, k_] /; m < k ⧴ -σ[k,l,m]}    11.16

Note that this constraint does not constrain the values of the remaining σ[m,λ,k] for m > k.
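The parity rule behind these patterns is easy to verify exhaustively; a small Python check (ours, purely for illustration):

```python
# Exhaustive check of the parity rule: (m - l)(k - l) is odd exactly when
# m and k have the same parity and l has the opposite parity.
def rule(m, k, l):
    return m % 2 == k % 2 and l % 2 != m % 2

for m in range(10):
    for k in range(10):
        for l in range(min(m, k) + 1):
            assert (((m - l) * (k - l)) % 2 == 1) == rule(m, k, l)
print("parity rule verified")
```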
Now suppose A is a sum of elements of different grades:

A == α_m + β_k + γ_p + …

The antisymmetry axiom then allows us to write A∘A as the sum of hypercomplex squares of the components of different grade of A:

A ∘ A == (α_m + β_k + γ_p + …) ∘ (α_m + β_k + γ_p + …) == α_m∘α_m + β_k∘β_k + γ_p∘γ_p + …

If X additionally contains a scalar term a, so that X == a + α_m + β_k + γ_p + …, the norm becomes

𝒩[X] == a² - α_m∘α_m - β_k∘β_k - γ_p∘γ_p - …
𝒩[a + α_m + β_k + γ_p + …] == a² - α_m∘α_m - β_k∘β_k - γ_p∘γ_p - …    11.17

Suppose now that the components of A == α_m + β_k + γ_p + … are simple. Then by 11.12 and 11.15 we have that the norm of A is positive and may be written as the sum of inner products of the individual simple elements:

𝒩[a + α_m + β_k + γ_p + …] == a² + α_m•α_m + β_k•β_k + γ_p•γ_p + …    11.18
We now focus our attention on the product α_m∘α_m (where α_m may or may not be simple). The following analysis will be critical to our understanding of the properties of hypercomplex numbers in spaces of dimension greater than 3.

Suppose α_m is the sum of two simple elements α1 and α2. The product α_m∘α_m then becomes:

α_m ∘ α_m == (α1 + α2) ∘ (α1 + α2) == α1∘α1 + α1∘α2 + α2∘α1 + α2∘α2

Since α1 and α2 are simple, α1∘α1 and α2∘α2 are scalar, and can be written:

α1∘α1 + α2∘α2 == -(α1•α1) - (α2•α2)

The expression of the remaining terms α1∘α2 and α2∘α1 in terms of exterior and interior products is more interesting and complex. We discuss this in Section 11.4.
Scalar constraints
HypercomplexScalarConstraints
{σ[m_,0,0] ⧴ 1, σ[0,0,k_] ⧴ 1}

Square constraints
HypercomplexSquareConstraints
{σ[m_,m_,m_] /; m ≠ 0 ⧴ -1}

Antisymmetry constraints
HypercomplexAntisymmetryConstraints
{σ[m_?EvenQ, l_?OddQ, k_?EvenQ] /; m < k ⧴ σ[k,l,m],
 σ[m_?OddQ, l_?EvenQ, k_?OddQ] /; m < k ⧴ σ[k,l,m],
 σ[m_, l_, k_] /; m < k ⧴ -σ[k,l,m]}

Note that whereas the scalar and square constraints gave specific values, these antisymmetry constraints still leave half of each pair unspecified.

Hypercomplex constraints
¯H := Flatten[{HypercomplexScalarConstraints, HypercomplexSquareConstraints, HypercomplexAntisymmetryConstraints}]
Residual constraints
Although these constraints cover a significant range of values of the hypercomplex signs, there are two families of residual signs that must be specified by other considerations:

σ[m,λ,k],  m > k
σ[m,λ,m],  m > λ

We shall explore these further in the sections below.
Returning to our definition of the hypercomplex product 11.1, and specializing it to the case where both elements are of the same grade, we get:

α1 ∘ α2 == Σ{λ=0…m} σ[m,λ,m] (α1 Δλ α2)
α2 ∘ α1 == Σ{λ=0…m} σ[m,λ,m] (α2 Δλ α1)

The generalized products on the right may now be reversed, together with the change of sign (-1)^((m-λ)(m-λ)) == (-1)^(m-λ):

α2 ∘ α1 == Σ{λ=0…m} (-1)^(m-λ) σ[m,λ,m] (α1 Δλ α2)

Adding the two products gives

α1 ∘ α2 + α2 ∘ α1 == Σ{λ=0…m} (1 + (-1)^(m-λ)) σ[m,λ,m] (α1 Δλ α2)    11.19

Because we will need to refer to this sum of products subsequently, we will call it the symmetrized product of two m-elements (because the expression does not change if the order of the elements is reversed).
The term 1 + (-1)^(m-λ) will be 2 for m and λ both even or both odd, and zero otherwise. To get a clearer picture of what this formula entails, we write it out explicitly for the lower values of m. Note that we use the fact already established that σ[m,m,m] == -1.

m == 0:
a1 ∘ a2 + a2 ∘ a1 == 2 a1 a2    11.20

m == 1:
α1 ∘ α2 + α2 ∘ α1 == -2 (α1 • α2)    11.21

m == 2:
α1 ∘ α2 + α2 ∘ α1 == 2 (σ[2,0,2] (α1 ∧ α2) - α1 • α2)    11.22

m == 3:
α1 ∘ α2 + α2 ∘ α1 == 2 (σ[3,1,3] (α1 Δ1 α2) - α1 • α2)    11.23

m == 4:
α1 ∘ α2 + α2 ∘ α1 == 2 (σ[4,0,4] (α1 ∧ α2) + σ[4,2,4] (α1 Δ2 α2) - α1 • α2)    11.24
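The selection of terms in these examples comes entirely from the coefficient 1 + (-1)^(m-λ); a small Python check (ours) confirms which λ survive:

```python
# The coefficient 1 + (-1)^(m - lam) in formula 11.19 is 2 when lam has
# the same parity as m, and 0 otherwise, so only those lam survive.
def coeff(m, lam):
    return 1 + (-1) ** (m - lam)

assert [coeff(3, lam) for lam in range(4)] == [0, 2, 0, 2]    # m = 3: lam = 1, 3
assert [coeff(4, lam) for lam in range(5)] == [2, 0, 2, 0, 2] # m = 4: lam = 0, 2, 4
```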
It is apparent from formula 11.19 and the examples given above that the body of the symmetrized product α1 ∘ α2 + α2 ∘ α1 is the inner product term, in which λ becomes equal to m; that is, 2 σ[m,m,m] (α1 • α2). And since the hypercomplex square constraint requires that σ[m,m,m] == -1, we can thus write:

Body[α1 ∘ α2 + α2 ∘ α1] == -2 (α1 • α2)

Suppose now that we have an m-element written as the sum of two m-elements:

α_m == α1 + α2

Then Body[α_m ∘ α_m] == -(α1•α1) - 2(α1•α2) - (α2•α2) == -((α1 + α2) • (α1 + α2)) == -(α_m • α_m).

Hence, even if a component element like α_m is not simple, the body of its hypercomplex square is given by the same expression as if it were simple, that is, the negative of its inner product square. It is straightforward to see that this is valid for α_m being a sum of any number of m-elements. Thus we can say that the body of the hypercomplex square of an arbitrary m-element is the negative of its inner product square.
The soul of the hypercomplex square is then the sum of the remaining (non-scalar) terms of the symmetrized product:

Soul[α_m ∘ α_m] == Σ{λ=0…m-2} (1 + (-1)^(m-λ)) σ[m,λ,m] (α1 Δλ α2)    11.27

(Here we have used the fact that, because of the coefficient (1 + (-1)^(m-λ)), the term with λ equal to m-1 is always zero, so we only need to sum to m-2.)
For convenience in generating examples of this expression, we temporarily define the soul of α_m∘α_m as a function of m, and denote it h[m]. To make it easier to read we convert the generalized products where possible to exterior and interior products.

h[m_] := Σ{λ=0…m-2} (1 + (-1)^(m-λ)) σ[m,λ,m] (α1 Δλ α2) // SimplifyGeneralizedProducts

With this function we can make a palette of souls for various values of m. We do this below for m from 1 to 6.
m == 3:  2 σ[3,1,3] (α1 Δ1 α2)
m == 4:  2 σ[4,0,4] (α1 ∧ α2) + 2 σ[4,2,4] (α1 Δ2 α2)
m == 5:  2 σ[5,1,5] (α1 Δ1 α2) + 2 σ[5,3,5] (α1 Δ3 α2)

Of particular note is the soul of the symmetrized product of two different simple 2-elements:

m == 2:  2 σ[2,0,2] (α1 ∧ α2)
This 4-element is the simplest critical potentially non-scalar residue in a norm calculation. It is always zero in spaces of dimension 2 or 3. Hence the norm of a Grassmann number in these spaces under the hypercomplex product is guaranteed to be scalar.
In a space of 4 dimensions, on the other hand, this element may not be zero. Hence the space of
dimension 3 is the highest-dimensional space in which the norm of a Grassmann number is
guaranteed to be scalar.
Now consider a general Grassmann number

X == a + α_m + β_k + γ_p + …

Here, we suppose a to be scalar, and the rest of the terms to be nonscalar simple or non-simple elements.

𝒩[X] == a² - α_m∘α_m - β_k∘β_k - γ_p∘γ_p - …

The hypercomplex square α_m∘α_m of an element has, in general, both a body and a soul.

Body[α_m ∘ α_m] == -(α_m • α_m)

The soul of α_m∘α_m depends on the terms of which it is composed. If α_m == α1 + α2 + α3 + … then

Soul[α_m ∘ α_m] == Σ{λ=0…m-2} (1 + (-1)^(m-λ)) σ[m,λ,m] (α1 Δλ α2 + α1 Δλ α3 + α2 Δλ α3 + …)

If the component elements of different grade of X are simple, then its soul is zero, and its norm becomes the scalar:

𝒩[X] == a² + α_m•α_m + β_k•β_k + γ_p•γ_p + …
It is only in 3-space that we are guaranteed that all the components of a Grassmann number are
simple. Therefore it is only in 3-space that we are guaranteed that the norm of a Grassmann
number under the hypercomplex product is scalar.
In one dimension all 1-elements are of the form a e, where a is a scalar and e is the basis element. Let α be a 1-element. From the definition of the hypercomplex product we see that the hypercomplex product of a 1-element with itself is the (possibly signed) scalar product:

α ∘ α == Σ{λ=0…Min[1,1]} σ[1,λ,1] (α Δλ α) == σ[1,0,1] (α ∧ α) + σ[1,1,1] (α • α) == σ[1,1,1] (α • α)
In the usual notation, the product of two complex numbers would be written:

(a + ⅈ b)(c + ⅈ d) == (a c - b d) + ⅈ (b c + a d)

In the general hypercomplex notation we have:

(a + b e) ∘ (c + d e) == (a c + b d (e ∘ e)) + (b c + a d) e == (a c + b d σ[1,1,1] (e • e)) + (b c + a d) e

Isomorphism with the complex numbers is then obtained by constraining σ[1,1,1] (e • e) to be -1. One immediate interpretation that we can explore to satisfy this is that e is a unit 1-element (with e • e == 1), and σ[1,1,1] == -1.

This then is the constraint we will impose to allow the incorporation of complex numbers in the hypercomplex structure.

σ[1,1,1] == -1    11.28
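With σ[1,1,1] == -1 and e • e == 1, the hypercomplex product on pairs (a, b) standing for a + b e is exactly complex multiplication. A Python sketch (function name ours):

```python
# Sketch: in a 1-space, represent a + b e as the pair (a, b). With
# sigma[1,1,1] = -1 and e.e = 1 the hypercomplex product reproduces
# complex multiplication.
def hyper(x, y):
    a, b = x
    c, d = y
    return (a * c - b * d, b * c + a * d)  # (ac + bd*sigma*(e.e), bc + ad)

assert hyper((1, 2), (3, 4)) == (-5, 10)   # (1 + 2i)(3 + 4i) = -5 + 10i
assert hyper((0, 1), (0, 1)) == (-1, 0)    # e o e = -1
```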
ⅈ ∘ ⅈ == -(ⅈ • ⅈ) == -1    11.30

In this interpretation, instead of ⅈ being a new entity with the special property ⅈ² == -1, the focus is shifted to interpreting it as a unit 1-element acting under a new product operation ∘. Complex numbers are then interpreted as Grassmann numbers in a space of one dimension under the hypercomplex product operation.

For example, the norm of a complex number a + β == a + b e == a + b ⅈ can be calculated as:

𝒩[a + β] == a² + β • β == a² + b²    11.31
The product table for the types of hypercomplex products that can be encountered in a 2-space can be entered by using the GrassmannAlgebra function HypercomplexProductTable.

T = HypercomplexProductTable[{1, a, b, a∧b}]
{{1∘1, 1∘a, 1∘b, 1∘(a∧b)},
 {a∘1, a∘a, a∘b, a∘(a∧b)},
 {b∘1, b∘a, b∘b, b∘(a∧b)},
 {(a∧b)∘1, (a∧b)∘a, (a∧b)∘b, (a∧b)∘(a∧b)}}

To make this easier to read we format this with the GrassmannAlgebra function TableTemplate.

TableTemplate[T]
Products where one of the factors is a scalar have already been discussed. Products of a 1-element
with itself have been discussed in the section on complex numbers. Applying these results to the
table above enables us to simplify it to:
1      a          b          a∧b
a      -(a•a)     a∘b        a∘(a∧b)
b      b∘a        -(b•b)     b∘(a∧b)
a∧b    (a∧b)∘a    (a∧b)∘b    (a∧b)∘(a∧b)
In this table there are four essentially different new products which we have not yet discussed.

a ∘ b == σ[1,0,1] (a ∧ b) - (a • b)
b ∘ a == -σ[1,0,1] (a ∧ b) - (a • b)    11.32

a ∘ (a∧b) == Σ{λ=0…Min[1,2]} σ[1,λ,2] (a Δλ (a∧b)) == σ[1,0,2] (a ∧ (a∧b)) + σ[1,1,2] ((a∧b) • a)

(a∧b) ∘ a == Σ{λ=0…Min[2,1]} σ[2,λ,1] ((a∧b) Δλ a) == σ[2,0,1] ((a∧b) ∧ a) + σ[2,1,1] ((a∧b) • a)
Since the first term in each case, involving only exterior products (with the repeated factor a), is zero, we obtain:

a ∘ (a∧b) == σ[1,1,2] ((a∧b) • a)
(a∧b) ∘ a == σ[2,1,1] ((a∧b) • a)    11.33

The generalized product of two identical elements is zero in all cases except for that which reduces to the inner product. From this fact and the definition of the hypercomplex product we obtain

(a∧b) ∘ (a∧b) == Σ{λ=0…Min[2,2]} σ[2,λ,2] ((a∧b) Δλ (a∧b)) == σ[2,2,2] ((a∧b) • (a∧b))

(a∧b) ∘ (a∧b) == σ[2,2,2] ((a∧b) • (a∧b))    11.34
Collecting these results gives the table:

1      a                          b                          a∧b
a      -(a•a)                     σ[1,0,1] (a∧b) - (a•b)     σ[1,1,2] ((a∧b)•a)
b      -σ[1,0,1] (a∧b) - (a•b)   -(b•b)                     σ[1,1,2] ((a∧b)•b)
a∧b    σ[2,1,1] ((a∧b)•a)        σ[2,1,1] ((a∧b)•b)         σ[2,2,2] ((a∧b)•(a∧b))    11.35
Now suppose the basis is orthonormal, so that

a•a == b•b == 1,  a•b == 0,  (a∧b)•a == (a•a) b,  (a∧b)•(a∧b) == (a•a)(b•b)

The hypercomplex product table then becomes:

1      a                 b                 a∧b
a      -1                σ[1,0,1] (a∧b)    σ[1,1,2] b
b      -σ[1,0,1] (a∧b)   -1                -σ[1,1,2] a
a∧b    σ[2,1,1] b        -σ[2,1,1] a       σ[2,2,2]    11.36
Writing i for a, j for b, and k for a∧b, the table takes the form:

1    i               j               k
i    -1              σ[1,0,1] k      σ[1,1,2] j
j    -σ[1,0,1] k     -1              -σ[1,1,2] i
k    σ[2,1,1] j      -σ[2,1,1] i     σ[2,2,2]    11.37
2009 9 3
Grassmann Algebra Book.nb 640
Choosing the remaining signs to be σ[1,0,1] == 1, σ[1,1,2] == -1, and σ[2,1,1] == 1 (with σ[2,2,2] == -1 already fixed by the square constraint) yields the standard quaternion product table:

1    i     j     k
i    -1    k     -j
j    -k    -1    i
k    j     -i    -1    11.38

Substituting these values back into the product table 11.35 gives a hypercomplex product table in terms only of exterior and interior products. This table defines the real, complex, and quaternion product operations.

1      a                   b                   a∧b
a      -(a•a)              (a∧b) - (a•b)       -((a∧b)•a)
b      -(a∧b) - (a•b)      -(b•b)              -((a∧b)•b)
a∧b    (a∧b)•a             (a∧b)•b             -((a∧b)•(a∧b))    11.39
A general quaternion Q may then be written

Q == a + α + a∧b    11.40

Here, a is a scalar, α is a 1-element and a∧b is congruent to any 2-element of the space.

The norm of Q is denoted 𝒩[Q] and given as the hypercomplex product of Q with its conjugate Qc. Expanding using formula 11.5 and the product table then allows us to write the norm of a quaternion either in terms of the hypercomplex product or the inner product:

𝒩[Q] == a² + α • α + (a∧b) • (a∧b)    11.42
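For a Euclidean metric this is the familiar quaternion norm. A quick Python check using the standard Hamilton product formula (equivalent to table 11.38; names ours):

```python
# Hamilton's product formula for quaternions (a, b, c, d) = a + bi + cj + dk.
# Conjugation negates the i, j, k parts; q times conj(q) is a scalar norm.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def conj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)

q = (1, 2, 3, 4)
assert qmul(q, conj(q)) == (1 + 4 + 9 + 16, 0, 0, 0)  # scalar norm 30
```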
In terms of i, j, and k

Q == a + b i + c j + d k

1      a           b           a∧b
a      -(a•a)      a∧b         -(a•a) b
b      -(a∧b)      -(b•b)      (b•b) a
a∧b    (a•a) b     -(b•b) a    -(a•a)(b•b)    11.43
In the notation we have used, the generators are 1, a, b, and aÔb. The parameters are aûa and bûb.
12 Exploring Clifford Algebra
12.1 Introduction
In this chapter we define a new type of algebra: the Clifford algebra. Clifford algebra was
originally proposed by William Kingdon Clifford (Clifford ) based on Grassmann's ideas.
Clifford algebras have now found an important place as mathematical systems for describing some
of the more fundamental theories of mathematical physics. Clifford algebra is based on a new kind
of product called the Clifford product.
In this chapter we will show how the Clifford product can be defined straightforwardly in terms of
the exterior and interior products that we have already developed, without introducing any new
axioms. This approach has the dual advantage of ensuring consistency and enabling all the results
we have developed previously to be applied to Clifford numbers.
In the previous chapter we have gone to some lengths to define and explore the generalized
Grassmann product. In this chapter we will use the generalized Grassmann product to define the
Clifford product in its most general form as a simple sum of generalized products. Computations in
general Clifford algebras can be complicated, and this approach at least permits us to render the
complexities in a straightforward manner.
From the general case, we then show how Clifford products of intersecting and orthogonal
elements simplify. This is the normal case treated in introductory discussions of Clifford algebra.
Clifford algebras occur throughout mathematical physics, the most well known being the real
numbers, complex numbers, quaternions, and complex quaternions. In this book we show how
Clifford algebras can be firmly based on the Grassmann algebra as a sum of generalized
Grassmann products.
Historical Note
The seminal work on Clifford algebra is by Clifford in his paper Applications of Grassmann's
Extensive Algebra that he published in the American Journal of Mathematics Pure and Applied in
1878 (Clifford ). Clifford became a great admirer of Grassmann and one of those rare
contemporaries who appears to have understood his work. The first paragraph of this paper
contains the following passage.
Until recently I was unacquainted with the Ausdehnungslehre, and knew only so much of it as is
contained in the author's geometrical papers in Crelle's Journal and in Hankel's Lectures on Complex
Numbers. I may, perhaps, therefore be permitted to express my profound admiration of that
extraordinary work, and my conviction that its principles will exercise a vast influence on the future
of mathematical science.
The Clifford product of an m-element α_m and a k-element β_k is denoted α_m ⋆ β_k and defined as the sum of signed generalized Grassmann products:

α_m ⋆ β_k == Σ{λ=0…Min[m,k]} (-1)^(λ(m-λ) + λ(λ-1)/2) (α_m Δλ β_k)    12.1
The most surprising property of the Clifford product is its associativity, even though it is defined in terms of products which are themselves non-associative. The associativity of the Clifford product is not directly evident from the definition.
Expanding the definition for low grades of the second factor gives:

α_m ⋆ β_0:  α Δ0 β
α_m ⋆ β_1:  α Δ0 β - (-1)^m (α Δ1 β)
α_m ⋆ β_2:  α Δ0 β - (-1)^m (α Δ1 β) - α Δ2 β
RawGrade[α_3 ⋆ β_2]
{1, 3, 5}

Given a space of a certain dimension, some of these elements may necessarily be zero (and thus result in a grade of Grade0), because their grade is larger than the dimension of the space. For example, in a 3-space:

Grade[α_3 ⋆ β_2]
{1, 3, Grade0}
In the general discussion of the Clifford product that follows, we will assume that the dimension of
the space is high enough to avoid any terms of the product becoming zero because their grade
exceeds the dimension of the space. In later more specific examples, however, the dimension of the
space becomes an important factor in determining the structure of the particular Clifford algebra
under consideration.
For example, α_3 ⋆ β_2 may be expressed as the sum of three signed generalized products of grades 1, 3 and 5. In GrassmannAlgebra we can effect this conversion by applying the ToGeneralizedProducts function.

ToGeneralizedProducts[α_3 ⋆ β_2]
α Δ0 β + α Δ1 β - α Δ2 β
Multiple Clifford products can be expanded in the same way. For example:

ToGeneralizedProducts[α_3 ⋆ β_2 ⋆ γ]

X1 = ToGeneralizedProducts[X]
x0 Δ0 y0 + x0 Δ0 (e1 y1) + x0 Δ0 (e2 y2) + x0 Δ0 (y3 e1∧e2) + …
ToInteriorProducts[α_3 ⋆ β_2]
-((α1∧α2∧α3) • (β1∧β2)) + ((α1∧α2∧α3) • β1) ∧ β2 - ((α1∧α2∧α3) • β2) ∧ β1 + α1∧α2∧α3∧β1∧β2

Note that ToInteriorProducts works with explicit elements, but also, as above, automatically creates a simple exterior product from an underscripted element.
ToInteriorProducts expands the Clifford product by using the A form of the generalized
product. A second function ToInteriorProductsB expands the Clifford product by using the
B form of the expansion to give a different but equivalent expression.
ToInteriorProductsB[α_3 ⋆ β_2]
-((α1∧α2∧α3) • (β1∧β2)) - (α1∧α2∧α3∧β1) • β2 + (α1∧α2∧α3∧β2) • β1 + α1∧α2∧α3∧β1∧β2

In this example, the difference is evidenced in the second and third terms.

ToInnerProducts[α_3 ⋆ β_2]
-((α2∧α3) • (β1∧β2)) α1 + ((α1∧α3) • (β1∧β2)) α2 - ((α1∧α2) • (β1∧β2)) α3 - (α3•β2) α1∧α2∧β1 + (α3•β1) α1∧α2∧β2 + (α2•β2) α1∧α3∧β1 - (α2•β1) α1∧α3∧β2 - (α1•β2) α2∧α3∧β1 + (α1•β1) α2∧α3∧β2 + α1∧α2∧α3∧β1∧β2

ToScalarProducts[α_3 ⋆ β_2]
We can easily work out the number of permutations to achieve this rearrangement as ½ m(m-1). Mathematica automatically simplifies (-1)^(m(m-1)/2) to ⅈ^(m(m-1)).

α_m† == (-1)^(m(m-1)/2) α_m == ⅈ^(m(m-1)) α_m    12.3

GrassmannReverse[{1, x, x∧y, 3 α_3, β_k, α_m • β_k - γ_p}]
{1, x, y∧x, 3 α3∧α2∧α1, βk∧…∧β2∧β1, (αm∧…∧α2∧α1) • (βk∧…∧β2∧β1) - (αm∧…∧α2∧α1) • (γp∧…∧γ2∧γ1)}
On the other hand, if we wish just to put exterior products into reverse form (that is, reversing the factors, and changing the sign of the product so that the result remains equal to the original), we can use the GrassmannAlgebra function ReverseForm.

ReverseForm[{1, x, x∧y, α_3, β_k, α_m • β_k - γ_p}]
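The reversal operation itself is simple to sketch in Python (our own minimal version, acting on a tuple of 1-element factors):

```python
# Reversal of a simple m-element given as a tuple of 1-element factors:
# reverse the factors and attach the sign (-1)^(m(m-1)/2) of formula 12.3.
def reverse_sign(m):
    return (-1) ** (m * (m - 1) // 2)

def grassmann_reverse(factors):
    return reverse_sign(len(factors)), tuple(reversed(factors))

# The sign pattern repeats with period 4: +, +, -, -, +, +, ...
assert [reverse_sign(m) for m in range(6)] == [1, 1, -1, -1, 1, 1]
assert grassmann_reverse(("a1", "a2", "a3")) == (-1, ("a3", "a2", "a1"))
```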
The Clifford product of a scalar a with a simple m-element simply multiplies it by the scalar:

ToScalarProducts[a ⋆ α_m]
a α1∧α2∧…∧αm

a ⋆ α_m == a α_m    12.4

Hence the Clifford product of any number of scalars is just their underlying field product.
ToScalarProducts[a ⋆ b ⋆ c]
a b c

ToScalarProducts[x ⋆ y]
x•y + x∧y

ToScalarProducts[x ⋆ y ⋆ z]
z (x•y) - y (x•z) + x (y•z) + x∧y∧z

ToScalarProducts[w ⋆ x ⋆ y ⋆ z]
(w•z)(x•y) - (w•y)(x•z) + (w•x)(y•z) + (y•z) w∧x - (x•z) w∧y + (x•y) w∧z + (w•z) x∧y - (w•y) x∧z + (w•x) y∧z + w∧x∧y∧z
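These scalar-product expansions can be checked numerically in an orthonormal basis, where the Clifford product of basis blades reduces to sign bookkeeping. A Python sketch (ours, not the GrassmannAlgebra package):

```python
# Clifford product of basis blades in an orthonormal basis, with a blade
# written as a sorted tuple of indices. Multiply by concatenating the
# index words, then bubble-sort: each swap flips the sign, and adjacent
# equal indices contract away (e.e = 1).
def blade_mul(a, b):
    coeff, word = 1, list(a) + list(b)
    i = 0
    while i < len(word) - 1:
        if word[i] == word[i + 1]:             # e_i e_i = 1
            del word[i:i + 2]
            i = max(i - 1, 0)
        elif word[i] > word[i + 1]:            # e_j e_i = -e_i e_j
            word[i], word[i + 1] = word[i + 1], word[i]
            coeff = -coeff
            i = max(i - 1, 0)
        else:
            i += 1
    return coeff, tuple(word)

assert blade_mul((1,), (1,)) == (1, ())        # x*x = x.x for a unit vector
assert blade_mul((1,), (2,)) == (1, (1, 2))    # e1*e2 = e1^e2
assert blade_mul((2,), (1,)) == (-1, (1, 2))   # e2*e1 = -(e1^e2)
assert blade_mul((1, 2), (1, 2)) == (-1, ())   # (e1^e2)*(e1^e2) = -1
```

The first two assertions are the content of ToScalarProducts[x ⋆ y] above, restricted to orthonormal basis vectors.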
α_m ⋆ x == α_m ∧ x - (-1)^m (α_m • x)    12.7

Multiplying through by (-1)^m and using α_m ∧ x == (-1)^m (x ∧ α_m), this may be rewritten as

(-1)^m (α_m ⋆ x) == x ∧ α_m - α_m • x

so we can add and subtract equations 12.6 and 12.7 to express the exterior and interior products of a 1-element and an m-element in terms of Clifford products:

x ∧ α_m == ½ (x ⋆ α_m + (-1)^m (α_m ⋆ x))    12.8

α_m • x == ½ (x ⋆ α_m - (-1)^m (α_m ⋆ x))    12.9

α_m ⋆ β_2 == α_m ∧ β_2 - (-1)^m (α_m Δ1 β_2) - α_m • β_2    12.10
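For m == 1, formulas 12.8 and 12.9 are the familiar polarization identities x∧y == ½(x⋆y - y⋆x) and x•y == ½(x⋆y + y⋆x). A numeric check in coordinates (ours), assuming the two-factor expansion x⋆y == x•y + x∧y shown earlier:

```python
# Polarization identities for two 1-elements, checked in coordinates.
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def wedge(x, y):
    # components (i < j) of x^y
    n = len(x)
    return {(i, j): x[i] * y[j] - x[j] * y[i]
            for i in range(n) for j in range(i + 1, n)}

def clifford(x, y):
    # grade-0 and grade-2 parts of x*y, per the expansion x*y = x.y + x^y
    return dot(x, y), wedge(x, y)

x, y = [1, 2, 0], [3, 0, 4]
sxy, wxy = clifford(x, y)
syx, wyx = clifford(y, x)
assert sxy == (sxy + syx) / 2                            # x.y, formula 12.9
assert all(wxy[k] == (wxy[k] - wyx[k]) / 2 for k in wxy) # x^y, formula 12.8
```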
RawGrade[(x∧y) ⋆ (u∧v)]
{0, 2, 4}

ToGeneralizedProducts[(x∧y) ⋆ (u∧v)]
(x∧y) Δ0 (u∧v) - (x∧y) Δ1 (u∧v) - (x∧y) Δ2 (u∧v)

The first of these generalized products is equivalent to an exterior product of grade 4. The second is a generalized product of grade 2. The third reduces to an interior product of grade 0. We can see this more explicitly by converting to interior products.

ToInteriorProducts[(x∧y) ⋆ (u∧v)]
-((u∧v) • (x∧y)) - ((x∧y) • u) ∧ v + ((x∧y) • v) ∧ u + u∧v∧x∧y

We can convert the middle terms to inner (in this case scalar) products.

ToInnerProducts[(x∧y) ⋆ (u∧v)]
-((u∧v) • (x∧y)) + (v•y) u∧x - (v•x) u∧y - (u•y) v∧x + (u•x) v∧y + u∧v∧x∧y

Finally we can express the Clifford number in terms only of exterior and scalar products.

ToScalarProducts[(x∧y) ⋆ (u∧v)]
(u•y)(v•x) - (u•x)(v•y) + (v•y) u∧x - (v•x) u∧y - (u•y) v∧x + (u•x) v∧y + u∧v∧x∧y
γ_p ∘ γ_p ≡ Σ_{λ=0}^{p} (-1)^(λ(p-λ) + λ(λ-1)/2) γ_p △_λ γ_p

Since the only non-zero generalized product of the form γ_p △_λ γ_p is that for which λ = p, that is γ_p⊙γ_p, we have immediately that:

γ_p ∘ γ_p ≡ (-1)^(p(p-1)/2) γ_p⊙γ_p

Or, alternatively:

γ_p⊙γ_p ≡ γ_p† ∘ γ_p ≡ γ_p ∘ γ_p†        (12.11)
α_m ∘ β_k ≡ Σ_{λ=0}^{min(m,k)} (-1)^(λ(m-λ) + λ(λ-1)/2) α_m △_λ β_k

Alternate forms for the generalized product have been discussed in Chapter 10. The generalized product, and hence the Clifford product, may be expanded by decomposition of α_m, by decomposition of β_k, or by decomposition of both α_m and β_k.
α_m △_λ β_k ≡ Σ_{i=1}^{C(m,λ)} α_i^(m-λ) ∧ (β_k ⊙ α_i^(λ))        α_m ≡ α₁^(λ)∧α₁^(m-λ) ≡ α₂^(λ)∧α₂^(m-λ) ≡ ⋯

(Here C(m,λ) is the binomial coefficient, and α_i^(λ) denotes the λ-element of the i-th essentially different decomposition of α_m into a λ-element and an (m-λ)-element.)

Substituting this into the expression for the Clifford product gives:

α_m ∘ β_k ≡ Σ_{λ=0}^{min(m,k)} Σ_{i=1}^{C(m,λ)} (-1)^(λ(m-λ) + λ(λ-1)/2) α_i^(m-λ) ∧ (β_k ⊙ α_i^(λ))        α_m ≡ α₁^(λ)∧α₁^(m-λ) ≡ α₂^(λ)∧α₂^(m-λ) ≡ ⋯
Our objective is to rearrange the formula for the Clifford product so that the signs are absorbed into it, making the form of the formula independent of the values of m and λ. We can do this by writing (-1)^(λ(λ-1)/2) α_i^(λ) ≡ α_i^(λ)† (where α_i^(λ)† is the reverse of α_i^(λ)) and interchanging the order of the decomposition of α_m into a λ-element and an (m-λ)-element to absorb the (-1)^(λ(m-λ)) factor: (-1)^(λ(m-λ)) α_m ≡ α₁^(m-λ)∧α₁^(λ) ≡ α₂^(m-λ)∧α₂^(λ) ≡ ⋯.

α_m ∘ β_k ≡ Σ_{λ=0}^{min(m,k)} Σ_{i=1}^{C(m,λ)} α_i^(m-λ) ∧ (β_k ⊙ α_i^(λ)†)        α_m ≡ α₁^(m-λ)∧α₁^(λ) ≡ α₂^(m-λ)∧α₂^(λ) ≡ ⋯
Because of the symmetry of the expression with respect to λ and (m-λ), we can set μ equal to (m-λ), whereupon λ becomes (m-μ). This enables us to write the formula a little more simply by arranging for the factors of grade μ to come before those of grade (m-μ). Finally, because of the inherent arbitrariness of the symbol μ, we can change it back to λ to get the formula in a more accustomed form.

α_m ∘ β_k ≡ Σ_{λ=0}^{min(m,k)} Σ_{i=1}^{C(m,λ)} α_i^(λ) ∧ (β_k ⊙ α_i^(m-λ)†)        α_m ≡ α₁^(λ)∧α₁^(m-λ) ≡ α₂^(λ)∧α₂^(m-λ) ≡ ⋯        (12.12)
The right hand side of this expression is a sum of interior products. In GrassmannAlgebra we can develop the Clifford product α_m ∘ β_k as a sum of interior products in this particular form by using ToInteriorProductsD (note the final 'D'). Since α_m is to be decomposed, it must have an explicit numerical grade.
Example 1

We can expand any explicit elements.

ToInteriorProductsD[(x∧y) ∘ (u∧v)]
x∧y ⊙ v∧u + x∧(u∧v⊙y) - y∧(u∧v⊙x) + u∧v∧x∧y

Note that this is a different (although equivalent) result from that obtained using ToInteriorProducts.

ToInteriorProducts[(x∧y) ∘ (u∧v)]
-(u∧v ⊙ x∧y) - (x∧y⊙u)∧v + (x∧y⊙v)∧u + u∧v∧x∧y
Example 2

We can also enter the elements as graded variables and have GrassmannAlgebra create the requisite products.

ToInteriorProductsD[α₂ ∘ β₂]
β₁∧β₂ ⊙ α₂∧α₁ + α₁∧(β₁∧β₂⊙α₂) - α₂∧(β₁∧β₂⊙α₁) + α₁∧α₂∧β₁∧β₂
Example 3

The formula shows that its form does not depend on the grade of β_k. Thus we can still obtain an expansion for a general β_k. For example we can take:

A = ToInteriorProductsD[α₂ ∘ β_k]
β_k ⊙ α₂∧α₁ + α₁∧(β_k⊙α₂) - α₂∧(β_k⊙α₁) + α₁∧α₂∧β_k
Example 4

Note that if k is less than the grade of the first factor, some of the interior product terms may be zero, thus simplifying the expression.

B = ToInteriorProductsD[α₂ ∘ b]
-(α₁∧α₂ ⊙ b) + b∧α₁∧α₂

A1 = A /. k → 1
b ⊙ α₂∧α₁ + α₁∧(b⊙α₂) - α₂∧(b⊙α₁) + α₁∧α₂∧b

Although this does not immediately look like the expression B above, we can see that it is the same by noting that the first term of A1 is zero, and expanding their difference to scalar products.

ToScalarProducts[B - A1]
0
α_m ∘ β_k ≡ Σ_{λ=0}^{min(m,k)} (-1)^(λ(m-λ) + λ(λ-1)/2 + (m-λ)(k-λ)) β_k △_λ α_m

≡ Σ_{λ=0}^{min(m,k)} Σ_{i=1}^{C(m,λ)} (-1)^(k(m-λ) + λ(λ-1)/2) (β_k ∧ α_i^(m-λ)) ⊙ α_i^(λ)

As before we write (-1)^(λ(λ-1)/2) α_i^(λ) ≡ α_i^(λ)† and interchange the order of β_k and α_i^(m-λ) to absorb the sign (-1)^(k(m-λ)) to get:

α_m ∘ β_k ≡ Σ_{λ=0}^{min(m,k)} Σ_{i=1}^{C(m,λ)} (α_i^(m-λ) ∧ β_k) ⊙ α_i^(λ)†        α_m ≡ α₁^(λ)∧α₁^(m-λ) ≡ α₂^(λ)∧α₂^(m-λ) ≡ ⋯
In order to get this into a form for direct comparison with our previously derived result in formula 12.12, we write (-1)^(λ(m-λ)) α_m ≡ α₁^(m-λ)∧α₁^(λ) ≡ α₂^(m-λ)∧α₂^(λ) ≡ ⋯, then interchange λ and (m-λ) as before to get finally:

α_m ∘ β_k ≡ Σ_{λ=0}^{min(m,k)} Σ_{i=1}^{C(m,λ)} (-1)^(λ(m-λ)) (α_i^(λ) ∧ β_k) ⊙ α_i^(m-λ)†        α_m ≡ α₁^(λ)∧α₁^(m-λ) ≡ α₂^(λ)∧α₂^(m-λ) ≡ ⋯        (12.13)

As before, the right hand side of this expression is a sum of interior products. In GrassmannAlgebra we can develop the Clifford product α_m ∘ β_k in this form by using ToInteriorProductsC. This is done in Section 12.6 below.
α_m ∘ β_k ≡ Σ_{λ=0}^{min(m,k)} Σ_{j=1}^{C(k,λ)} (-1)^(λ(m-λ)) (α_m ⊙ β_j^(λ)†) ∧ β_j^(k-λ)        β_k ≡ β₁^(λ)∧β₁^(k-λ) ≡ β₂^(λ)∧β₂^(k-λ) ≡ ⋯        (12.14)

α_m ∘ β_k ≡ Σ_{λ=0}^{min(m,k)} Σ_{j=1}^{C(k,λ)} (-1)^(λ(m-λ)) (α_m ∧ β_j^(k-λ)) ⊙ β_j^(λ)†        β_k ≡ β₁^(λ)∧β₁^(k-λ) ≡ β₂^(λ)∧β₂^(k-λ) ≡ ⋯        (12.15)
We can similarly decompose the Clifford product in terms of both α_m and β_k. The sign (-1)^(λ(λ-1)/2) can be absorbed by taking the reverse of either α_i^(λ) or β_j^(λ).

α_m ∘ β_k ≡ Σ_{λ=0}^{min(m,k)} Σ_{i=1}^{C(m,λ)} Σ_{j=1}^{C(k,λ)} (-1)^(λ(m-λ)) (α_i^(λ)† ⊙ β_j^(λ)) α_i^(m-λ) ∧ β_j^(k-λ)        (12.16)

α_m ≡ α₁^(λ)∧α₁^(m-λ) ≡ α₂^(λ)∧α₂^(m-λ) ≡ ⋯        β_k ≡ β₁^(λ)∧β₁^(k-λ) ≡ β₂^(λ)∧β₂^(k-λ) ≡ ⋯
The D form

X = ToInteriorProductsD[α₄ ∘ β_k]
β_k ⊙ α₄∧α₃∧α₂∧α₁ + α₁∧(β_k⊙α₄∧α₃∧α₂) - α₂∧(β_k⊙α₄∧α₃∧α₁) + α₃∧(β_k⊙α₄∧α₂∧α₁) - α₄∧(β_k⊙α₃∧α₂∧α₁) + α₁∧α₂∧(β_k⊙α₄∧α₃) - α₁∧α₃∧(β_k⊙α₄∧α₂) + α₁∧α₄∧(β_k⊙α₃∧α₂) + α₂∧α₃∧(β_k⊙α₄∧α₁) - α₂∧α₄∧(β_k⊙α₃∧α₁) + α₃∧α₄∧(β_k⊙α₂∧α₁) + α₁∧α₂∧α₃∧(β_k⊙α₄) - α₁∧α₂∧α₄∧(β_k⊙α₃) + α₁∧α₃∧α₄∧(β_k⊙α₂) - α₂∧α₃∧α₄∧(β_k⊙α₁) + α₁∧α₂∧α₃∧α₄∧β_k
The C form

Note that since the exterior product has higher precedence in Mathematica than the interior product, α₁∧β_k⊙α₂ is equivalent to (α₁∧β_k)⊙α₂, and no parentheses are necessary in the terms of the C form.

ToInteriorProductsC[α₄ ∘ β_k]
β_k ⊙ α₄∧α₃∧α₂∧α₁ - α₁∧β_k⊙α₄∧α₃∧α₂ + α₂∧β_k⊙α₄∧α₃∧α₁ - α₃∧β_k⊙α₄∧α₂∧α₁ + α₄∧β_k⊙α₃∧α₂∧α₁ + α₁∧α₂∧β_k⊙α₄∧α₃ - α₁∧α₃∧β_k⊙α₄∧α₂ + α₁∧α₄∧β_k⊙α₃∧α₂ + α₂∧α₃∧β_k⊙α₄∧α₁ - α₂∧α₄∧β_k⊙α₃∧α₁ + α₃∧α₄∧β_k⊙α₂∧α₁ - α₁∧α₂∧α₃∧β_k⊙α₄ + α₁∧α₂∧α₄∧β_k⊙α₃ - α₁∧α₃∧α₄∧β_k⊙α₂ + α₂∧α₃∧α₄∧β_k⊙α₁ + α₁∧α₂∧α₃∧α₄∧β_k
Either form is easy to write down directly by observing simple mnemonic rules.

1. Calculate the α_i^(λ)

These are all the essentially different combinations of factors of all grades. The list has the same form as the basis of the Grassmann algebra whose n-element is α_m.

A = CobasisΛ[]
{α₁∧α₂∧α₃∧α₄, α₂∧α₃∧α₄, -(α₁∧α₃∧α₄), α₁∧α₂∧α₄, -(α₁∧α₂∧α₃), α₃∧α₄, -(α₂∧α₄), α₂∧α₃, α₁∧α₄, -(α₁∧α₃), α₁∧α₂, α₄, -α₃, α₂, -α₁, 1}

R = GrassmannReverse[A]
{α₄∧α₃∧α₂∧α₁, α₄∧α₃∧α₂, -(α₄∧α₃∧α₁), α₄∧α₂∧α₁, -(α₃∧α₂∧α₁), α₄∧α₃, -(α₄∧α₂), α₃∧α₂, α₄∧α₁, -(α₃∧α₁), α₂∧α₁, α₄, -α₃, α₂, -α₁, 1}
S = Thread[β_k ⊙ R]
{β_k⊙α₄∧α₃∧α₂∧α₁, β_k⊙α₄∧α₃∧α₂, β_k⊙-(α₄∧α₃∧α₁), β_k⊙α₄∧α₂∧α₁, β_k⊙-(α₃∧α₂∧α₁), β_k⊙α₄∧α₃, β_k⊙-(α₄∧α₂), β_k⊙α₃∧α₂, β_k⊙α₄∧α₁, …
5. Take the exterior product of the elements of the two lists B and S

T = Thread[B ∧ S]
{1∧(β_k⊙α₄∧α₃∧α₂∧α₁), α₁∧(β_k⊙α₄∧α₃∧α₂), α₂∧(β_k⊙-(α₄∧α₃∧α₁)), α₃∧(β_k⊙α₄∧α₂∧α₁), α₄∧(β_k⊙-(α₃∧α₂∧α₁)), α₁∧α₂∧(β_k⊙α₄∧α₃), α₁∧α₃∧(β_k⊙-(α₄∧α₂)), α₁∧α₄∧(β_k⊙α₃∧α₂), α₂∧α₃∧(β_k⊙α₄∧α₁), α₂∧α₄∧(β_k⊙-(α₃∧α₁)), α₃∧α₄∧(β_k⊙α₂∧α₁), α₁∧α₂∧α₃∧(β_k⊙α₄), α₁∧α₂∧α₄∧(β_k⊙-α₃), α₁∧α₃∧α₄∧(β_k⊙α₂), α₂∧α₃∧α₄∧(β_k⊙-α₁), α₁∧α₂∧α₃∧α₄∧(β_k⊙1)}
6. Add the terms and simplify the result by factoring out the scalars

U = FactorScalars[Plus @@ T]
β_k ⊙ α₄∧α₃∧α₂∧α₁ + α₁∧(β_k⊙α₄∧α₃∧α₂) - α₂∧(β_k⊙α₄∧α₃∧α₁) + α₃∧(β_k⊙α₄∧α₂∧α₁) - α₄∧(β_k⊙α₃∧α₂∧α₁) + α₁∧α₂∧(β_k⊙α₄∧α₃) - α₁∧α₃∧(β_k⊙α₄∧α₂) + α₁∧α₄∧(β_k⊙α₃∧α₂) + α₂∧α₃∧(β_k⊙α₄∧α₁) - α₂∧α₄∧(β_k⊙α₃∧α₁) + α₃∧α₄∧(β_k⊙α₂∧α₁) + α₁∧α₂∧α₃∧(β_k⊙α₄) - α₁∧α₂∧α₄∧(β_k⊙α₃) + α₁∧α₃∧α₄∧(β_k⊙α₂) - α₂∧α₃∧α₄∧(β_k⊙α₁) + α₁∧α₂∧α₃∧α₄∧β_k

X == U
True
(γ_p∧α_m) ∘ (γ_p∧β_k) ≡ Σ_{λ=0}^{min(m,k)+p} (-1)^(λ(p+m-λ) + λ(λ-1)/2) (γ_p∧α_m) △_λ (γ_p∧β_k)

(γ_p∧α_m) ∘ (γ_p∧β_k) ≡ Σ_{λ=0}^{min(m,k)+p} (-1)^(p + λ(m-λ) + λ(λ-1)/2) ((γ_p∧α_m) △_{λ-p} β_k) ⊙ γ_p

Since the terms on the right hand side are zero for λ < p, we can define μ = λ - p and rewrite the right hand side as:

(γ_p∧α_m) ∘ (γ_p∧β_k) ≡ (-1)^w Σ_{μ=0}^{min(m,k)} (-1)^(μ(p+m-μ) + μ(μ-1)/2) ((γ_p∧α_m) △_μ β_k) ⊙ γ_p

where w = m p + p(p-1)/2. Hence we can finally cast the right hand side as a Clifford product by taking out the second γ_p factor.

(γ_p∧α_m) ∘ (γ_p∧β_k) ≡ (-1)^(m p + p(p-1)/2) ((γ_p∧α_m) ∘ β_k) ⊙ γ_p        (12.17)

In a similar fashion we can show that the right hand side can be written in the alternative form:

(γ_p∧α_m) ∘ (γ_p∧β_k) ≡ (-1)^(p(p-1)/2) (α_m ∘ (γ_p∧β_k)) ⊙ γ_p        (12.18)

By reversing the order of α_m and γ_p in the first factor, formula 12.17 can be written as:
(α_m∧γ_p) ∘ (γ_p∧β_k) ≡ (-1)^(p(p-1)/2) ((γ_p∧α_m) ∘ β_k) ⊙ γ_p        (12.19)

And by noting that the reverse γ_p† of γ_p is (-1)^(p(p-1)/2) γ_p, we can absorb the sign into the formulae by changing one of the γ_p to γ_p†. For example:

(α_m∧γ_p†) ∘ (γ_p∧β_k) ≡ ((γ_p∧α_m) ∘ β_k) ⊙ γ_p        (12.20)

(α_m∧γ_p†) ∘ (γ_p∧β_k) ≡ (-1)^(m p) (α_m ∘ (γ_p∧β_k)) ⊙ γ_p        (12.22)

The relations derived up to this point are completely general and apply to Clifford products in an arbitrary space with an arbitrary metric. For example, they do not require any of the elements to be orthogonal.

(α_m∧γ_p†) ∘ γ_p ≡ (-1)^(m p) (α_m ∘ γ_p) ⊙ γ_p ≡ (-1)^(m p) (α_m∧γ_p) ⊙ γ_p

Next, put α_m equal to unity, and then, to enable comparison with the previous formulae, replace β_k by α_m.
γ_p† ∘ (γ_p∧α_m) ≡ (γ_p ∘ α_m) ⊙ γ_p ≡ (γ_p∧α_m) ⊙ γ_p

Finally we note that, since the far right hand sides of these two equations are equal, all the expressions are equal.

(α_m∧γ_p†) ∘ γ_p ≡ γ_p† ∘ (γ_p∧α_m) ≡ (γ_p ∘ α_m) ⊙ γ_p ≡ (γ_p∧α_m) ⊙ γ_p ≡ (-1)^(m p) (α_m ∘ γ_p) ⊙ γ_p        (12.24)

By putting α_m to unity in these equations we can recover the relation 12.11 between the Clifford product of identical elements and their interior product.

Thus we see immediately from the definition of the Clifford product that the Clifford product of two totally orthogonal elements is equal to their exterior product.

α_m ∘ (γ_p∧β_k) ≡ Σ_{λ=0}^{min(m,k+p)} (-1)^(λ(m-λ) + λ(λ-1)/2) α_m △_λ (γ_p∧β_k)
α_m △_λ (γ_p∧β_k) ≡ (α_m △_λ γ_p) ∧ β_k        α_i⊙β_j ≡ 0        λ ≤ min(m, p)

Hence:

α_m ∘ (γ_p∧β_k) ≡ (α_m ∘ γ_p) ∧ β_k        α_i⊙β_j ≡ 0

Similarly, expressing (α_m∧γ_p) ∘ β_k in terms of generalized products and substituting from equation 10.37 gives:

(α_m∧γ_p) ∘ β_k ≡ Σ_{λ=0}^{min(p,k)} (-1)^(λ(m+p-λ) + λ(λ-1)/2 + m λ) α_m ∧ (γ_p △_λ β_k)

But (-1)^(λ(m+p-λ) + λ(λ-1)/2 + m λ) ≡ (-1)^(λ(p-λ) + λ(λ-1)/2), hence we can write:

(α_m∧γ_p) ∘ β_k ≡ α_m ∧ (γ_p ∘ β_k)        α_i⊙β_j ≡ 0
? OrthogonalSimplificationRules

OrthogonalSimplificationRules[{{X1,Y1},{X2,Y2},…}] develops a list of rules which put to zero all the scalar products of a 1-element from Xi and a 1-element from Yi. Xi and Yi may be either variables, basis elements, exterior products of these, or graded variables.
Flatten[Table[ToScalarProducts[(α_m∧γ_p) ∘ β_k - α_m∧(γ_p ∘ β_k)] /. OrthogonalSimplificationRules[{{α_m, β_k}}], …]]

{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
Orthogonal union

We have shown in Section 12.7 above that the Clifford product of arbitrary intersecting elements may be expressed by:

(α_m∧γ_p†) ∘ (γ_p∧β_k) ≡ (-1)^(m p) ((α_m∧γ_p) ∘ β_k) ⊙ γ_p

(α_m∧γ_p†) ∘ (γ_p∧β_k) ≡ (-1)^(m p) (α_m ∘ (γ_p∧β_k)) ⊙ γ_p

But we have also shown in Section 12.8 that if α_m and β_k are totally orthogonal, then:

(α_m∧γ_p) ∘ β_k ≡ α_m ∧ (γ_p ∘ β_k)

α_m ∘ (γ_p∧β_k) ≡ (α_m ∘ γ_p) ∧ β_k

Hence the right hand sides of the first equations may be written, in the orthogonal case, as:

(α_m∧γ_p†) ∘ (γ_p∧β_k) ≡ (-1)^(m p) (α_m ∧ (γ_p ∘ β_k)) ⊙ γ_p        α_i⊙β_j ≡ 0        (12.28)
Orthogonal intersection

Consider the case of three simple elements α_m, β_k and γ_p, where γ_p is totally orthogonal to both α_m and β_k (and hence to α_m∧β_k). A simple element γ_p is totally orthogonal to an element α_m if and only if α_m⊙γ_i ≡ 0 for all 1-element factors γ_i belonging to γ_p. Here, we do not assume that α_m and β_k are orthogonal.
As before we consider the Clifford product of intersecting elements, but because of the orthogonality relationships, we can replace the generalized products on the right hand side by the right hand side of equation 10.39.

(γ_p∧α_m) ∘ (γ_p∧β_k) ≡ Σ_{λ=0}^{min(m,k)+p} (-1)^(λ(p+m-λ) + λ(λ-1)/2) (γ_p∧α_m) △_λ (γ_p∧β_k)

≡ Σ_{λ=p}^{min(m,k)+p} (-1)^(λ(p+m-λ) + λ(λ-1)/2) (γ_p⊙γ_p) α_m △_{λ-p} β_k

(γ_p∧α_m) ∘ (γ_p∧β_k) ≡ (γ_p⊙γ_p) Σ_{μ=0}^{min(m,k)} (-1)^((μ+p)(m-μ) + (μ+p)(μ+p-1)/2) α_m △_μ β_k

≡ (-1)^(m p + p(p-1)/2) (γ_p⊙γ_p) Σ_{μ=0}^{min(m,k)} (-1)^(μ(m-μ) + μ(μ-1)/2) α_m △_μ β_k

(γ_p∧α_m) ∘ (γ_p∧β_k) ≡ (-1)^(m p + p(p-1)/2) (γ_p⊙γ_p) (α_m ∘ β_k)

Note carefully that, for this formula to be true, α_m and β_k are not necessarily orthogonal. However, if they are orthogonal we can express the Clifford product on the right hand side as an exterior product.
(α_m∧γ_p†) ∘ (γ_p∧β_k) ≡ (γ_p⊙γ_p) α_m∧β_k        α_i⊙β_j ≡ α_i⊙γ_s ≡ β_j⊙γ_s ≡ 0        (12.31)
Arbitrary elements

The main results of the preceding sections concerning Clifford products of intersecting and orthogonal elements are summarized below.

γ_p† ∘ γ_p ≡ γ_p ∘ γ_p† ≡ γ_p⊙γ_p        (12.32)

γ_p ∘ γ_p ≡ γ_p†⊙γ_p ≡ γ_p⊙γ_p†        (12.33)

γ_p ∘ γ_p ∘ γ_p ≡ (γ_p†⊙γ_p) γ_p ≡ (γ_p⊙γ_p†) γ_p        (12.34)

γ_p ∘ γ_p ∘ γ_p ∘ γ_p ≡ (γ_p⊙γ_p)²        (12.35)
(α_m∧γ_p†) ∘ γ_p ≡ γ_p† ∘ (γ_p∧α_m) ≡ (γ_p ∘ α_m) ⊙ γ_p ≡ (γ_p∧α_m) ⊙ γ_p ≡ (-1)^(m p) (α_m ∘ γ_p) ⊙ γ_p        (12.36)

(α_m∧γ_p†) ∘ (γ_p∧β_k) ≡ (-1)^(m p) (α_m ∘ (γ_p∧β_k)) ⊙ γ_p        (12.38)

(α_m∧γ_p†) ∘ (γ_p∧β_k) ≡ (-1)^(m p) (α_m ∧ (γ_p ∘ β_k)) ⊙ γ_p        α_i⊙β_j ≡ 0        (12.41)
(α_m∧γ_p†) ∘ (γ_p∧β_k) ≡ (γ_p⊙γ_p) (α_m ∘ β_k)        α_i⊙γ_s ≡ β_j⊙γ_s ≡ 0        (12.43)

Orthogonal elements

(α_m∧γ_p) ∘ (γ_p∧β_k) ≡ (γ_p†⊙γ_p) α_m∧β_k        α_i⊙β_j ≡ α_i⊙γ_s ≡ β_j⊙γ_s ≡ 0        (12.45)
If we know that all the factors of a Clifford product are totally orthogonal, then we can interchange the Clifford product and the exterior product at will. Hence, for totally orthogonal elements, the Clifford and exterior products are associative, and we do not need to include parentheses.

Note carefully, however, that this associativity does not extend to the factors of the m-, k-, or p-elements unless the factors of the element concerned are mutually orthogonal, in which case we could, for example, write:

α₁ ∘ α₂ ∘ ⋯ ∘ α_m ≡ α₁∧α₂∧⋯∧α_m        α_i⊙α_j ≡ 0, i ≠ j        (12.47)

Example

For example, if x∧y and z are totally orthogonal, that is x⊙z ≡ 0 and y⊙z ≡ 0, then we can write

(x∧y) ∘ z ≡ x∧y∧z ≡ x∧(y ∘ z) ≡ -y∧(x ∘ z)

But since x is not necessarily orthogonal to y, these are not the same as

x ∘ y ∘ z ≡ (x ∘ y)∧z ≡ x ∘ (y∧z)

We can see the difference by reducing the Clifford products to scalar products.

ToScalarProducts[{x ∘ y ∘ z, (x ∘ y)∧z, x ∘ (y∧z)}] /. OrthogonalSimplificationRules[{{x∧y, z}}]
{z (x⊙y) + x∧y∧z, z (x⊙y) + x∧y∧z, z (x⊙y) + x∧y∧z}
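The same distinction shows up in a bitmask model of the algebra. In the illustrative Python sketch below (independent of the GrassmannAlgebra package), basis blades are bitmasks over orthonormal basis vectors; blades with no common factor multiply exactly as under the exterior product, while a common factor contracts through the metric.

```python
def reorder_sign(a, b):
    # Sign from merging two basis blades (bitmasks) into ascending order.
    a >>= 1
    total = 0
    while a:
        total += bin(a & b).count("1")
        a >>= 1
    return -1 if total % 2 else 1

def clifford(u, v, metric):
    # Clifford product of {blade: coeff} multivectors, diagonal metric.
    out = {}
    for ba, ca in u.items():
        for bb, cb in v.items():
            s = reorder_sign(ba, bb)
            for i, g in enumerate(metric):
                if ba & bb & (1 << i):
                    s *= g
            out[ba ^ bb] = out.get(ba ^ bb, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c}

metric = [1, 1, 1]
# (e1 ^ e2) o e3: no common 1-element factors, so the Clifford
# product is simply the exterior product e1 ^ e2 ^ e3.
orth = clifford({0b011: 1}, {0b100: 1}, metric)
# e1 o (e1 ^ e2): the common factor e1 contracts, leaving e2.
mixed = clifford({0b001: 1}, {0b011: 1}, metric)
```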
Formula 12.45 tells us that the Clifford product of (possibly) intersecting and totally orthogonal elements is given by:

(α_m∧γ_p) ∘ (γ_p∧β_k) ≡ (γ_p†⊙γ_p) α_m∧β_k        α_i⊙β_j ≡ α_i⊙γ_s ≡ β_j⊙γ_s ≡ 0

Note that it is not necessary that the factors of α_m be orthogonal to each other. The same is true for the factors of β_k or of γ_p.

We begin by writing a Clifford product of three elements, and associating the first pair. The elements contain factors which are specific to the element (α_m, β_k, ω_r), pairwise common (γ_p, δ_q, ε_s), and common to all three (θ_z). This product will therefore represent the most general case, since other cases may be obtained by letting one or more factors reduce to unity.
≡ ((θ_z†∧γ_p†)⊙(θ_z∧γ_p)) (α_m∧ε_s∧β_k∧δ_q) ∘ (δ_q∧ε_s∧ω_r∧θ_z)

≡ ((θ_z†∧γ_p†)⊙(θ_z∧γ_p)) (-1)^(s k) ((ε_s†∧δ_q†)⊙(ε_s∧δ_q)) α_m∧β_k∧(ω_r∧θ_z)

≡ (-1)^(s k) ((θ_z†∧γ_p†)⊙(θ_z∧γ_p)) ((ε_s†∧δ_q†)⊙(ε_s∧δ_q)) α_m∧β_k∧ω_r∧θ_z

On the other hand we can associate the second pair to obtain the same result.

≡ (-1)^(z k + z r + k s) ((θ_z†∧δ_q†)⊙(θ_z∧δ_q)) ((ε_s†∧γ_p†)⊙(ε_s∧γ_p)) (α_m∧θ_z)∧β_k∧ω_r

≡ (-1)^(s k) ((θ_z†∧γ_p†)⊙(θ_z∧γ_p)) ((ε_s†∧δ_q†)⊙(ε_s∧δ_q)) α_m∧β_k∧ω_r∧θ_z

In sum: we have shown that the Clifford product of possibly intersecting but otherwise totally orthogonal elements is associative. Any factors which make up a given individual element are not specifically involved, and hence it is of no consequence to the associativity of the element with another whether or not these factors are mutually orthogonal.

(θ_z†∧γ_p†)⊙(θ_z∧γ_p) ≡ (θ_z†⊙θ_z)(γ_p†⊙γ_p)

(ε_s†∧δ_q†)⊙(ε_s∧δ_q) ≡ (ε_s†⊙ε_s)(δ_q†⊙δ_q)
A mnemonic for making this transformation is then:

1. Rearrange the factors in a Clifford product to get common factors adjacent to the Clifford product symbol, taking care to include any change of sign due to the quasi-commutativity of the exterior product.

2. Replace the common factors by their inner product, but with one copy being reversed.

3. If there are no common factors in a Clifford product, the Clifford product can be replaced by the exterior product.
Remember that for these relations to hold, all the elements must be totally orthogonal to each other. Note that if, in addition, the 1-element factors of any of these elements, γ_p say, are orthogonal to each other, then:
Consider the Clifford product α_m ∘ β_k ∘ γ_p, where there are no restrictions on the factors α_m, β_k and γ_p. It has been shown in Section 6.3 that an arbitrary simple m-element may be expressed in terms of m orthogonal 1-element factors (the Gram-Schmidt orthogonalization process). Suppose that n such orthogonal 1-elements e₁, e₂, ⋯, e_n have been found in terms of which α_m, β_k, and γ_p can be expressed. Writing the m-elements, k-elements and p-elements formed from the e_i as e_j^(m), e_r^(k), and e_s^(p) respectively, we can write:

α_m ≡ Σ a_j e_j^(m)        β_k ≡ Σ b_r e_r^(k)        γ_p ≡ Σ c_s e_s^(p)

α_m ∘ β_k ∘ γ_p ≡ (Σ a_j e_j^(m)) ∘ (Σ b_r e_r^(k)) ∘ (Σ c_s e_s^(p)) ≡ Σ Σ Σ a_j b_r c_s e_j^(m) ∘ e_r^(k) ∘ e_s^(p)

But we have already shown in the previous section that the Clifford product of orthogonal elements is associative. That is:

(e_j^(m) ∘ e_r^(k)) ∘ e_s^(p) ≡ e_j^(m) ∘ (e_r^(k) ∘ e_s^(p)) ≡ e_j^(m) ∘ e_r^(k) ∘ e_s^(p)

(α_m ∘ β_k) ∘ γ_p ≡ Σ Σ Σ a_j b_r c_s e_j^(m) ∘ e_r^(k) ∘ e_s^(p) ≡ α_m ∘ (β_k ∘ γ_p)

The associativity of the Clifford product is usually taken as an axiom. However, in this book we have chosen to show that associativity is a consequence of our definition of the Clifford product in terms of exterior and interior products. In this way we can ensure that the Grassmann and Clifford algebras are consistent.
• Declare a space of the given dimension. It does not matter what the metric is, since we do not use it.
• Create general elements of the given grades in a space of the given dimension.
• To make the code more readable, define a function which converts a product to scalar product form.
• Compute the products associated in the two different ways, and subtract them.
• The associativity test is successful if a result of 0 is returned.

Y = CreateElement[ψ_k]; Z = CreateElement[ζ_p];

CliffordAssociativityTest[4][2, 2, 2]
0

Or, we can tabulate a number of results together. For example, elements of all grades in all spaces of dimension 0, 1, 2, and 3.

Table[CliffordAssociativityTest[n][m, k, p], {n, 0, 3}, {m, 0, n}, {k, 0, n}, {p, 0, m}]
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
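The same kind of tabulated check can be run outside Mathematica. The illustrative Python sketch below (bitmask blades over an orthogonal basis with a diagonal metric; it does not reproduce the package's CliffordAssociativityTest, only its idea) compares the two associations of the product over pseudo-random integer multivectors, so the comparison is exact.

```python
import random

def reorder_sign(a, b):
    # Sign from merging two basis blades (bitmasks) into ascending order.
    a >>= 1
    total = 0
    while a:
        total += bin(a & b).count("1")
        a >>= 1
    return -1 if total % 2 else 1

def clifford(u, v, metric):
    # Clifford product of {blade: coeff} multivectors, diagonal metric.
    out = {}
    for ba, ca in u.items():
        for bb, cb in v.items():
            s = reorder_sign(ba, bb)
            for i, g in enumerate(metric):
                if ba & bb & (1 << i):
                    s *= g
            out[ba ^ bb] = out.get(ba ^ bb, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c}

random.seed(0)
metric = [1, -1, 1, -1]          # any diagonal metric will do
n = len(metric)

def random_mv():
    # A general multivector with integer coefficients on all 2^n blades.
    return {b: random.randint(-5, 5) for b in range(1 << n)}

failures = 0
for _ in range(20):
    u, v, w = random_mv(), random_mv(), random_mv()
    left = clifford(clifford(u, v, metric), w, metric)
    right = clifford(u, clifford(v, w, metric), metric)
    if left != right:
        failures += 1
# failures == 0: the two associations always agree
```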
products of even order (α_m ∘ β_k)_e, and those whose terms are generalized products of odd order (α_m ∘ β_k)_o.

α_m ∘ β_k ≡ Σ_{λ=0}^{min(m,k)} (-1)^(λ(m-λ) + λ(λ-1)/2) α_m △_λ β_k

(α_m ∘ β_k)_e ≡ Σ_{λ=0,2,4,⋯}^{min(m,k)} (-1)^(λ/2) α_m △_λ β_k        (12.51)

(Here, the limit of the sum is understood to mean the greatest even integer less than or equal to min(m,k).)

(α_m ∘ β_k)_o ≡ (-1)^m Σ_{λ=1,3,5,⋯}^{min(m,k)} (-1)^((λ+1)/2) α_m △_λ β_k        (12.52)

(In this case, the limit of the sum is understood to mean the greatest odd integer less than or equal to min(m,k).)

Note carefully that the evenness and oddness to which we refer is that of the order of the generalized product, not the grade of the Clifford product.
(β_k ∘ α_m)_e ≡ Σ_{λ=0,2,4,⋯}^{min(m,k)} (-1)^(λ/2) β_k △_λ α_m ≡ Σ_{λ=0,2,4,⋯}^{min(m,k)} (-1)^(λ/2 + (m-λ)(k-λ)) α_m △_λ β_k

(β_k ∘ α_m)_e ≡ (-1)^(m k) Σ_{λ=0,2,4,⋯}^{min(m,k)} (-1)^(λ/2) α_m △_λ β_k        (12.53)

Hence, the even terms of both products are equal, except when both factors of the product are of odd grade.

(β_k ∘ α_m)_o ≡ (-1)^k Σ_{λ=1,3,5,⋯}^{min(m,k)} (-1)^((λ+1)/2 + (m-λ)(k-λ)) α_m △_λ β_k

(β_k ∘ α_m)_o ≡ -(-1)^(m k) (-1)^m Σ_{λ=1,3,5,⋯}^{min(m,k)} (-1)^((λ+1)/2) α_m △_λ β_k        (12.55)
(α_m ∘ β_k)_e ≡ ½ (α_m ∘ β_k + (-1)^(m k) β_k ∘ α_m)        (12.59)

(α_m ∘ β_k)_o ≡ ½ (α_m ∘ β_k - (-1)^(m k) β_k ∘ α_m)        (12.60)

α_m∧β ≡ ½ (α_m ∘ β + (-1)^m β ∘ α_m)        (12.61)

½ (α_m ∘ β - (-1)^m β ∘ α_m) ≡ -(-1)^m α_m⊙β        (12.62)

½ (α_m ∘ β₂ + β₂ ∘ α_m) ≡ α_m∧β₂ - α_m⊙β₂        (12.63)

½ (α_m ∘ β₂ - β₂ ∘ α_m) ≡ -(-1)^m α_m △₁ β₂        (12.64)
Broadly speaking, an algebra can be constructed from a set of elements of a linear space which has a product operation, provided all products of the elements are again elements of the set.

In what follows we shall discuss Clifford algebras. The generating elements will be selected subsets of the basis elements of the full Grassmann algebra on the space. The product operation will be the Clifford product, which we have defined in the first part of this chapter in terms of the generalized Grassmann product. Thus, Clifford algebras may be viewed as living very much within the Grassmann algebra, relying on it for both their elements and their operations.

In this development, therefore, a particular Clifford algebra is ultimately defined by the values of the scalar products of basis vectors of the underlying Grassmann algebra, and thus by the metric on the space.

In many cases we will be defining the specific Clifford algebras in terms of orthogonal (not necessarily orthonormal) basis elements.

Clifford algebras include the real numbers, complex numbers, quaternions, biquaternions, and the Pauli and Dirac algebras.

Real algebra

All the products that we have developed up to this stage (the exterior, interior, generalized Grassmann and Clifford products) possess the valuable and consistent property that, when applied to scalars, they yield results equivalent to the usual (underlying field) product. The field has been identified with a space of zero dimensions, and the scalars identified with its elements.

Thus the real algebra is isomorphic with the Clifford algebra of a space of zero dimensions.
DeclareBasis[{e₁}]; BasisΛ[]
{1, e₁}
There are only four possible Clifford products of these basis elements. We can construct a table of these products by using the GrassmannAlgebra function CliffordProductTable.

CliffordProductTable[]
{{1 ∘ 1, 1 ∘ e₁}, {e₁ ∘ 1, e₁ ∘ e₁}}

Usually however, to make the products easier to read and use, we will display them in the form of a palette using the GrassmannAlgebra function PaletteForm. (We can click on the palette to enter any of its expressions into the notebook.)

C1 = CliffordProductTable[]; PaletteForm[C1]

1 ∘ 1    1 ∘ e₁
e₁ ∘ 1   e₁ ∘ e₁

In the general case any Clifford product may be expressed in terms of exterior and interior products. We can see this by applying ToInteriorProducts to the table (although only interior, here scalar, products result from this simple case).

C2 = ToInteriorProducts[C1]; PaletteForm[C2]

1     e₁
e₁    e₁⊙e₁

Different Clifford algebras may be generated depending on the metric chosen for the space. In this example we can see that the types of Clifford algebra which we can generate in a 1-space depend only on the choice of a single scalar value for the scalar product e₁⊙e₁. The Clifford product of two general elements of the algebra is

(a + b e₁) ∘ (c + d e₁) // ToScalarProducts
a c + b d (e₁⊙e₁) + b c e₁ + a d e₁

It is clear that if we choose e₁⊙e₁ ≡ -1, we have an algebra isomorphic to the complex algebra. The basis 1-element e₁ then plays the role of the imaginary unit 𝕚. We can generate this particular algebra immediately by declaring the metric, and then generating the product table.
DeclareBasis[{𝕚}]; DeclareMetric[{{-1}}];
CliffordProductTable[] // ToScalarProducts // ToMetricForm // PaletteForm

1    𝕚
𝕚    -1

However, our main purpose in discussing this very simple example in so much detail is to emphasize that, even in this case, there are an infinite number of Clifford algebras on a 1-space, depending on the choice of the scalar value for e₁⊙e₁. The complex algebra, although it has surely proven itself to be the most useful, is just one among many.

Finally, we note that all Clifford algebras possess the real algebra as their simplest even subalgebra.
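The isomorphism with the complex algebra can be exercised directly in a small bitmask model of the algebra (illustrative Python, independent of the GrassmannAlgebra package): with the single metric value e₁⊙e₁ = -1, multiplying elements of the form a + b e₁ reproduces complex multiplication.

```python
def reorder_sign(a, b):
    # Sign from merging two basis blades (bitmasks) into ascending order.
    a >>= 1
    total = 0
    while a:
        total += bin(a & b).count("1")
        a >>= 1
    return -1 if total % 2 else 1

def clifford(u, v, metric):
    # Clifford product of {blade: coeff} multivectors, diagonal metric.
    out = {}
    for ba, ca in u.items():
        for bb, cb in v.items():
            s = reorder_sign(ba, bb)
            for i, g in enumerate(metric):
                if ba & bb & (1 << i):
                    s *= g
            out[ba ^ bb] = out.get(ba ^ bb, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c}

metric = [-1]                    # e1 . e1 = -1

def cx(a, b):
    # the general element a + b e1 of the 1-space algebra
    return {0b0: a, 0b1: b}

prod = clifford(cx(2, 3), cx(4, 5), metric)
# (2 + 3i)(4 + 5i) = -7 + 22i
```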
1 ∘ 1          1 ∘ e₁          1 ∘ e₂          1 ∘ (e₁∧e₂)
e₁ ∘ 1         e₁ ∘ e₁         e₁ ∘ e₂         e₁ ∘ (e₁∧e₂)
e₂ ∘ 1         e₂ ∘ e₁         e₂ ∘ e₂         e₂ ∘ (e₁∧e₂)
(e₁∧e₂) ∘ 1    (e₁∧e₂) ∘ e₁    (e₁∧e₂) ∘ e₂    (e₁∧e₂) ∘ (e₁∧e₂)
To see the way in which these Clifford products reduce to generalized Grassmann products we can apply the GrassmannAlgebra function ToGeneralizedProducts.

C2 = ToGeneralizedProducts[C1]; PaletteForm[C2]

1 △₀ 1     1 △₀ e₁               1 △₀ e₂               ⋯
e₁ △₀ 1    e₁ △₀ e₁ + e₁ △₁ e₁   e₁ △₀ e₂ + e₁ △₁ e₂   ⋯
e₂ △₀ 1    e₂ △₀ e₁ + e₂ △₁ e₁   e₂ △₀ e₂ + e₂ △₁ e₂   ⋯
C3 = ToInteriorProducts[C2]; PaletteForm[C3]

1        e₁                e₂               ⋯
e₁       e₁⊙e₁             e₁⊙e₂ + e₁∧e₂    ⋯
e₂       e₁⊙e₂ - e₁∧e₂     e₂⊙e₂            ⋯
e₁∧e₂    -(e₁∧e₂ ⊙ e₁)     -(e₁∧e₂ ⊙ e₂)    -(e₁∧e₂ ⊙ e₁∧e₂) - ⋯

C4 = ToScalarProducts[C3]; PaletteForm[C4]

1        e₁                        e₂                        ⋯
e₁       e₁⊙e₁                     e₁⊙e₂ + e₁∧e₂             ⋯
e₂       e₁⊙e₂ - e₁∧e₂             e₂⊙e₂                     ⋯
e₁∧e₂    (e₁⊙e₂) e₁ - (e₁⊙e₁) e₂   (e₂⊙e₂) e₁ - (e₁⊙e₂) e₂   (e₁⊙e₂)² ⋯

It should be noted that the GrassmannAlgebra operation ToScalarProducts also applies the function ToStandardOrdering. Hence scalar products are reordered to present the basis element with the lower index first. For example, the scalar product e₂⊙e₁ does not appear in the table above.

C5 = C4 /. e₁⊙e₂ → 0; PaletteForm[C5]

1        e₁             e₂            e₁∧e₂
e₁       e₁⊙e₁          e₁∧e₂         (e₁⊙e₁) e₂
e₂       -(e₁∧e₂)       e₂⊙e₂         -(e₂⊙e₂) e₁
e₁∧e₂    -(e₁⊙e₁) e₂    (e₂⊙e₂) e₁    -(e₁⊙e₁)(e₂⊙e₂)

This table defines all the Clifford algebras on the 2-space. Different Clifford algebras may be generated by choosing different metrics for the space, that is, by choosing the two scalar values e₁⊙e₁ and e₂⊙e₂.

Note however that allocating a scalar value a to e₁⊙e₁ and b to e₂⊙e₂ leads to essentially the same structure as allocating b to e₁⊙e₁ and a to e₂⊙e₂.
In the rest of what follows, however, we will restrict ourselves to metrics in which e₁⊙e₁ ≡ ±1 and e₂⊙e₂ ≡ ±1, whence there are three cases of interest.

As observed previously, the case {e₁⊙e₁ → -1, e₂⊙e₂ → +1} is isomorphic to the case {e₁⊙e₁ → +1, e₂⊙e₂ → -1}, so we do not need to consider it.
1        e₁          e₂       e₁∧e₂
e₁       1           e₁∧e₂    e₂
e₂       -(e₁∧e₂)    1        -e₁
e₁∧e₂    -e₂         e₁       -1

Inspecting this table for interesting structures or substructures, we note first that the even subalgebra (that is, the algebra based on products of the even basis elements) is isomorphic to the complex algebra. For our own explorations we can use the palette to construct a product table for the subalgebra, or we can create a table using the GrassmannAlgebra function TableTemplate, and edit it by deleting the middle rows and columns.

TableTemplate[C6]

1        e₁          e₂       e₁∧e₂
e₁       1           e₁∧e₂    e₂
e₂       -(e₁∧e₂)    1        -e₁
e₁∧e₂    -e₂         e₁       -1

1        e₁∧e₂
e₁∧e₂    -1

If we want to create a palette for the subalgebra, we have to edit the normal list (matrix) form and then apply PaletteForm. For even subalgebras we can also apply the GrassmannAlgebra function EvenCliffordProductTable, which creates a Clifford product table from just the basis elements of even grade. We then set the metric we want, convert the Clifford product to scalar products, evaluate the scalar products according to the chosen metric, and display the resulting table as a palette.
1        e₁∧e₂
e₁∧e₂    -1

Here we see that the basis element e₁∧e₂ has the property that its (Clifford) square is -1. We can see how this arises by carrying out the elementary operations on the product. Note that e₁⊙e₁ ≡ e₂⊙e₂ ≡ 1, since we have assumed the 2-space under consideration is Euclidean.

Thus the pair {1, e₁∧e₂} generates an algebra under the Clifford product operation, isomorphic to the complex algebra. It is also the basis of the even Grassmann algebra of the 2-space.
1        e₁          e₂       e₁∧e₂
e₁       1           e₁∧e₂    e₂
e₂       -(e₁∧e₂)    -1       e₁
e₁∧e₂    -e₂         -e₁      1

1        e₁          e₂       e₁∧e₂
e₁       -1          e₁∧e₂    -e₂
e₂       -(e₁∧e₂)    -1       e₁
e₁∧e₂    e₂          -e₁      -1
We can see this isomorphism more clearly by substituting the usual quaternion symbols (here we
Mathematica's double-struck symbols and choose the correspondence 8e1 Ø3,e2 Ø4,e1 Ôe2 Ø!<.
2009 9 3
Grassmann Algebra Book.nb 682
We can see this isomorphism more clearly by substituting the usual quaternion symbols (here we
Mathematica's double-struck symbols and choose the correspondence 8e1 Ø3,e2 Ø4,e1 Ôe2 Ø!<.
1  𝕚   𝕛   𝕜
𝕚  -1  𝕜   -𝕛
𝕛  -𝕜  -1  𝕚
𝕜  𝕛   -𝕚  -1
Having verified that the structure is indeed quaternionic, let us return to the original specification
in terms of the basis of the Grassmann algebra. A quaternion can be written in terms of these basis
elements as:
Q == a + b e1 + c e2 + d e1 ∧ e2 == (a + b e1) + (c + d e1) ∧ e2
Now, because e1 and e2 are orthogonal, e1 ∧ e2 is equal to e1 ∘ e2. But for any further calculations we will need to use the Clifford product form. Hence we write
Q == a + b e1 + c e2 + d e1 ∘ e2 == (a + b e1) + (c + d e1) ∘ e2        12.66
Hence under one interpretation, each of e1 and e2 and their Clifford product e1 ∘ e2 behaves as a different imaginary unit. Under the second interpretation, a quaternion is a complex number with imaginary unit e2, whose components are complex numbers based on e1 as the imaginary unit.
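Both interpretations can be checked numerically. The sketch below uses an assumed 2 × 2 complex matrix representation in which e1 and e2 both square to -1 and anticommute (any faithful representation would serve equally well):

```python
import numpy as np

# Representatives of two anticommuting imaginary units.
e1 = np.array([[1j, 0], [0, -1j]])
e2 = np.array([[0, 1], [-1, 0]], dtype=complex)
e12 = e1 @ e2                   # the third imaginary unit e1 ∘ e2
I2 = np.eye(2)

print(np.allclose(e1 @ e1, -I2), np.allclose(e2 @ e2, -I2),
      np.allclose(e12 @ e12, -I2))        # three distinct imaginary units

# A quaternion as a complex number with imaginary unit e2, whose
# components are complex numbers based on e1 (cf. equation 12.66):
a, b, c, d = 1.0, 2.0, -3.0, 0.5
Q_flat = a*I2 + b*e1 + c*e2 + d*e12
Q_nested = (a*I2 + b*e1) + (c*I2 + d*e1) @ e2
print(np.allclose(Q_flat, Q_nested))
```

The nested and flat forms agree, and all three units square to -1.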
(The raw Clifford product table C1 for the basis of 3-space, with entries running from 1 ∘ 1 to (e1 ∧ e2 ∧ e3) ∘ (e1 ∧ e2 ∧ e3), extends off the printed page.)
MatrixForm[Metric]
1 0 0
0 1 0
0 0 1
Since the basis elements are orthogonal, we can use the GrassmannAlgebra function
CliffordToOrthogonalScalarProducts for computing the Clifford products. This
function is faster for larger calculations than the more general ToScalarProducts used in the
previous examples.
C2 = ToMetricForm[CliffordToOrthogonalScalarProducts[C1]];
PaletteForm[C2]
(The resulting 8 × 8 multiplication table of the Clifford algebra of Euclidean 3-space extends off the printed page.)
We can show that this Clifford algebra of Euclidean 3-space is isomorphic to the Pauli algebra. Pauli's representation of the algebra was by means of 2 × 2 matrices over the complex field:
σ1 = {{0, 1}, {1, 0}}    σ2 = {{0, -i}, {i, 0}}    σ3 = {{1, 0}, {0, -1}}
{I ↔ 1, σ1 ↔ e1, σ2 ↔ e2, σ3 ↔ e3, σ1 σ2 ↔ e1 ∧ e2,
 σ1 σ3 ↔ e1 ∧ e3, σ2 σ3 ↔ e2 ∧ e3, σ1 σ2 σ3 ↔ e1 ∧ e2 ∧ e3}
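The correspondence can be verified directly with the numerical Pauli matrices; the relations checked below are exactly those of the Clifford table of Euclidean 3-space (a Python illustration, independent of GrassmannAlgebra):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# ei ∘ ei == 1 in the Euclidean metric
assert all(np.allclose(s @ s, I2) for s in (s1, s2, s3))
# distinct basis elements anticommute (they are orthogonal)
assert np.allclose(s1 @ s2, -(s2 @ s1))
assert np.allclose(s1 @ s3, -(s3 @ s1))
assert np.allclose(s2 @ s3, -(s3 @ s2))
# the element e1 ∧ e2 ∧ e3 corresponds to sigma1 sigma2 sigma3 == i I
assert np.allclose(s1 @ s2 @ s3, 1j * I2)
print("Pauli relations verified")
```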
We perform these steps in sequence. Because of the size of the output, only the electronic version
will show the complete tables.
(The table C3, with 1 replaced by I and the basis elements by products of the σi, extends off the printed page.)
C4 = C3 /. {I → {{1, 0}, {0, 1}}, σ1 → {{0, 1}, {1, 0}},
    σ2 → {{0, -i}, {i, 0}}, σ3 → {{1, 0}, {0, -1}}};
MatrixForm[C4]
(The resulting 8 × 8 table of 2 × 2 matrices is shown only in the electronic version.)
Step 3:
Here we let the first row (column) of the product table correspond back to the basis elements of the Grassmann representation, and make the substitution throughout the whole table.
C5 = C4 /. Thread[First[C4] → BasisΛ[]] /.
    Thread[-First[C4] → -BasisΛ[]]
{{1, e1, e2, e3, e1 ∧ e2, e1 ∧ e3, e2 ∧ e3, e1 ∧ e2 ∧ e3},
 {e1, 1, e1 ∧ e2, e1 ∧ e3, e2, e3, e1 ∧ e2 ∧ e3, e2 ∧ e3},
 {e2, -(e1 ∧ e2), 1, e2 ∧ e3, -e1, -(e1 ∧ e2 ∧ e3), e3, -(e1 ∧ e3)},
 {e3, -(e1 ∧ e3), -(e2 ∧ e3), 1, e1 ∧ e2 ∧ e3, -e1, -e2, e1 ∧ e2},
 {e1 ∧ e2, -e2, e1, e1 ∧ e2 ∧ e3, -1, -(e2 ∧ e3), e1 ∧ e3, -e3},
 {e1 ∧ e3, -e3, -(e1 ∧ e2 ∧ e3), e1, e2 ∧ e3, -1, -(e1 ∧ e2), e2},
 {e2 ∧ e3, e1 ∧ e2 ∧ e3, -e3, e2, -(e1 ∧ e3), e1 ∧ e2, -1, -e1},
 {e1 ∧ e2 ∧ e3, e2 ∧ e3, -(e1 ∧ e3), e1 ∧ e2, -e3, e2, -e1, -1}}
Step 4: Verification
C5 == C2
True
C4 = EvenCliffordProductTable[] //
    CliffordToOrthogonalScalarProducts //
    ToMetricForm; PaletteForm[C4]
1        e1 ∧ e2      e1 ∧ e3      e2 ∧ e3
e1 ∧ e2  -1           -(e2 ∧ e3)   e1 ∧ e3
e1 ∧ e3  e2 ∧ e3      -1           -(e1 ∧ e2)
e2 ∧ e3  -(e1 ∧ e3)   e1 ∧ e2      -1
From this multiplication table we can see that the even subalgebra of the Clifford algebra of 3-space is isomorphic to the quaternions. To see the isomorphism more clearly, replace the bivectors e2 ∧ e3, e1 ∧ e3, and e1 ∧ e2 by 𝕚, 𝕛, and 𝕜 respectively.
1  𝕜   𝕛   𝕚
𝕜  -1  -𝕚  𝕛
𝕛  𝕚   -1  -𝕜
𝕚  -𝕛  𝕜   -1
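In the Pauli representation the three bivectors are the products σ2 σ3, σ1 σ3 and σ1 σ2, and a short numerical check (a Python illustration) confirms that they multiply exactly as 𝕚, 𝕛 and 𝕜:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# bivectors e2∧e3, e1∧e3, e1∧e2 in the Pauli representation
qi, qj, qk = s2 @ s3, s1 @ s3, s1 @ s2

assert all(np.allclose(q @ q, -I2) for q in (qi, qj, qk))  # squares are -1
assert np.allclose(qi @ qj, qk)                            # i j == k
assert np.allclose(qj @ qk, qi)                            # j k == i
assert np.allclose(qk @ qi, qj)                            # k i == j
print("quaternion relations verified")
```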
Biquaternions
We now explore the metric in which {e1 ⊙ e1 → -1, e2 ⊙ e2 → -1, e3 ⊙ e3 → -1}.
(The 8 × 8 Clifford product table for this metric extends off the printed page.)
Note that every one of the basis elements of the Grassmann algebra (except 1 and e1 ∧ e2 ∧ e3) acts as an imaginary unit under the Clifford product. This enables us to build up a general element of the algebra as a sum of nested complex numbers, or of nested quaternions. To show this, we begin by writing a general element in terms of the basis elements of the Grassmann algebra:
QQ == a + b e1 + c e2 + d e3 + e e1 ∧ e2 + f e1 ∧ e3 + g e2 ∧ e3 + h e1 ∧ e2 ∧ e3
Or, as previously argued, since the basis 1-elements are orthogonal, we can replace the exterior product by the Clifford product and rearrange the terms in the sum to give
QQ == (a + b e1 + c e2 + e e1 ∘ e2) + (d + f e1 + g e2 + h e1 ∘ e2) ∘ e3
Q1 == a + b e1 + c e2 + e e1 ∘ e2 == (a + b e1) + (c + e e1) ∘ e2
Q2 == d + f e1 + g e2 + h e1 ∘ e2 == (d + f e1) + (g + h e1) ∘ e2
QQ == Q1 + Q2 ∘ e3
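The pattern of imaginary units, and the nested decomposition, can be checked in an assumed matrix representation of this anti-Euclidean metric, taking ek = i σk so that each basis 1-element squares to -1 (a Python sketch, not the GrassmannAlgebra computation):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
e1, e2, e3 = (1j * sk for sk in s)     # e_k ∘ e_k == -1
I2 = np.eye(2)

vectors = [e1, e2, e3]
bivectors = [e1 @ e2, e1 @ e3, e2 @ e3]
trivector = e1 @ e2 @ e3

# every basis element except 1 and e1∧e2∧e3 squares to -1 ...
assert all(np.allclose(b @ b, -I2) for b in vectors + bivectors)
# ... while the trivector squares to +1
assert np.allclose(trivector @ trivector, I2)

# QQ == Q1 + Q2 ∘ e3 with quaternionic Q1 and Q2
a, b, c, d, e, f, g, h = np.random.rand(8)
QQ = a*I2 + b*e1 + c*e2 + d*e3 + e*e1@e2 + f*e1@e3 + g*e2@e3 + h*trivector
Q1 = a*I2 + b*e1 + c*e2 + e*e1@e2
Q2 = d*I2 + f*e1 + g*e2 + h*e1@e2
assert np.allclose(QQ, Q1 + Q2 @ e3)
```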
Historical Note
This complex sum of two quaternions was called a biquaternion by Hamilton [Hamilton, Elements of Quaternions, p 133], but Clifford, in a footnote to his Preliminary Sketch of Biquaternions [Clifford, Mathematical Papers, Chelsea], says: "Hamilton's biquaternion is a quaternion with complex coefficients; but it is convenient (as Prof. Pierce remarks) to suppose from the beginning that all scalars may be complex. As the word is thus no longer wanted in its old meaning, I have made bold to use it in a new one."
Hamilton uses the word biscalar for a complex number and bivector [p 225, Elements of Quaternions] for a complex vector, that is, for a vector x + i y, where x and y are vectors; and the word biquaternion for a complex quaternion q0 + i q1, where q0 and q1 are quaternions. He emphasizes here that "… i is the (scalar) imaginary of algebra, and not a symbol for a geometrically real right versor …"
Hamilton introduces his biquaternion as the quotient of a bivector (his usage) by a (real) vector.
(The raw Clifford product table C1 for the basis of 4-space, with entries running from 1 ∘ 1 to (e1 ∧ e2 ∧ e3 ∧ e4) ∘ (e1 ∧ e2 ∧ e3 ∧ e4), extends off the printed page.)
Metric
{{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}}
Since the basis elements are orthogonal, we can use the faster GrassmannAlgebra function CliffordToOrthogonalScalarProducts for computing the Clifford products.
C2 = CliffordToOrthogonalScalarProducts[C1] // ToMetricForm;
PaletteForm[C2]
(The 16 × 16 multiplication table of the Clifford algebra of Euclidean 4-space extends off the printed page.)
This table is well off the page in the printed version, but we can condense the notation for the basis elements of Λ by replacing ei ∧ … ∧ ej with ei,…,j. To do this we can use the GrassmannAlgebra function ToBasisIndexedForm. For example
1 - (e1 ∧ e2 ∧ e3) // ToBasisIndexedForm
1 - e1,2,3
To display the table in condensed notation we make up a rule for each of the basis elements.
C3 =
  C2 /. Reverse[Thread[BasisΛ[] → ToBasisIndexedForm[BasisΛ[]]]];
PaletteForm[C3]
(Even in the condensed notation, the 16 × 16 table extends off the printed page.)
C3 = EvenCliffordProductTable[] //
    CliffordToOrthogonalScalarProducts //
    ToMetricForm; PaletteForm[C3]
(The 8 × 8 multiplication table of the even subalgebra of the Clifford algebra of Euclidean 4-space extends off the printed page.)
To generate the Dirac algebra we need the Minkowski metric, in which there is one time-like basis element with e1 ⊙ e1 == +1, and three space-like basis elements with ei ⊙ ei == -1.
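In matrix form, the Minkowski metric condition is the anticommutation relation γμ γν + γν γμ == 2 ημν I. A numerical check with the standard 4 × 4 Dirac matrices (a Python illustration):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
O2, I2 = np.zeros((2, 2), dtype=complex), np.eye(2)

# standard Dirac representation: gamma0 block-diagonal, gammak off-diagonal
g0 = np.block([[I2, O2], [O2, -I2]])
gk = [np.block([[O2, sk], [-sk, O2]]) for sk in s]
gammas = [g0] + gk
eta = np.diag([1.0, -1.0, -1.0, -1.0])

I4 = np.eye(4)
for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * I4)
print("Dirac anticommutation relations verified")
```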
C6 = CliffordToOrthogonalScalarProducts[C1] // ToMetricForm;
PaletteForm[C6]
(The 16 × 16 multiplication table of the Clifford algebra with the Minkowski metric extends off the printed page.)
To confirm that this structure is isomorphic to the Dirac algebra we can go through the same
procedure that we followed in the case of the Pauli algebra.
C7 = (C6 // ReplaceNegativeUnit) /.
    {1 → I, e1 → γ0, e2 → γ1, e3 → γ2, e4 → γ3} /.
    Wedge → Dot; PaletteForm[C7]
(The 16 × 16 table C7, expressed in terms of I and the γi, extends off the printed page.)
C8 = C7 /. {I → …, γ0 → …, γ1 → …, γ2 → …, γ3 → …}; MatrixForm[C8]
(The substitution replaces I and the γi by their 4 × 4 Dirac matrix representations; the full rule is legible only in the electronic version.)
(The resulting 16 × 16 table of 4 × 4 matrices is shown only in the electronic version.)
Step 3:
C9 = C8 /. Thread[First[C8] → BasisΛ[]] /.
    Thread[-First[C8] → -BasisΛ[]];
Step 4: Verification
C9 == C6
True
To generate the Clifford algebra of the complex quaternions we need a metric in which the scalar products of the basis elements satisfy ei ⊙ ei == -1 (with ei ⊙ ej == 0 for i ≠ j). That is
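Such a metric can also be realised concretely. The sketch below builds four mutually anticommuting 4 × 4 matrices, each squaring to -1 (one possible construction, using Kronecker products of Pauli matrices, offered only as an illustration):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Four mutually anticommuting Hermitian square roots of the identity,
# each multiplied by i so that it squares to -1.
es = [1j * np.kron(s1, s1), 1j * np.kron(s1, s2),
      1j * np.kron(s1, s3), 1j * np.kron(s2, I2)]

I4 = np.eye(4)
for a in range(4):
    assert np.allclose(es[a] @ es[a], -I4)                   # ei ∘ ei == -1
    for b in range(a + 1, 4):
        assert np.allclose(es[a] @ es[b], -(es[b] @ es[a]))  # orthogonality
print("metric relations verified")
```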
C6 = CliffordToOrthogonalScalarProducts[C1] // ToMetricForm;
PaletteForm[C6]
(The 16 × 16 multiplication table for this anti-Euclidean metric extends off the printed page.)
Note that both the vectors and the bivectors act as imaginary units under the Clifford product. This enables us to build up a general element of the algebra as a sum of nested complex numbers, quaternions, or complex quaternions. To show this, we begin by writing a general element in terms of the basis elements of the Grassmann algebra:
X = CreateGrassmannNumber[a]
a0 + e1 a1 + e2 a2 + e3 a3 + e4 a4 + a5 e1 ∧ e2 + a6 e1 ∧ e3 +
 a7 e1 ∧ e4 + a8 e2 ∧ e3 + a9 e2 ∧ e4 + a10 e3 ∧ e4 + a11 e1 ∧ e2 ∧ e3 +
 a12 e1 ∧ e2 ∧ e4 + a13 e1 ∧ e3 ∧ e4 + a14 e2 ∧ e3 ∧ e4 + a15 e1 ∧ e2 ∧ e3 ∧ e4
We can then factor this expression to display it as a nested sum of numbers of the form Ck == ai + aj e1. (Of course, the factorization could equally well be based on any other basis element.)
X == (a0 + a1 e1) + (a2 + a5 e1) ∧ e2 +
  ((a3 + a6 e1) + (a8 + a11 e1) ∧ e2) ∧ e3 + ((a4 + a7 e1) +
  (a9 + a12 e1) ∧ e2 + ((a10 + a13 e1) + (a14 + a15 e1) ∧ e2) ∧ e3) ∧ e4
Again, we can rewrite each of these elements in the form Qk == Ci + Cj ∧ e2 to get
X == Q1 + Q2 ∧ e3 + (Q3 + Q4 ∧ e3) ∧ e4
X == QQ1 + QQ2 ∧ e4
Since the basis 1-elements are orthogonal, we can replace the exterior product by the Clifford product to give
X == (a0 + a1 e1) + (a2 + a5 e1) ∘ e2 +
  ((a3 + a6 e1) + (a8 + a11 e1) ∘ e2) ∘ e3 + ((a4 + a7 e1) +
  (a9 + a12 e1) ∘ e2 + ((a10 + a13 e1) + (a14 + a15 e1) ∘ e2) ∘ e3) ∘ e4
 == Q1 + Q2 ∘ e3 + (Q3 + Q4 ∘ e3) ∘ e4
 == QQ1 + QQ2 ∘ e4
12.17 Rotations
To be completed
13.1 Introduction
This chapter introduces the concept of a matrix of Grassmann numbers, which we will call a
Grassmann matrix. Wherever it makes sense, the operations discussed will work also for listed
collections of the components of tensors of any order as per the Mathematica representation: a set
of lists nested to a certain number of levels. Thus, for example, an operation may also work for
vectors or a list containing only one element.
We begin by discussing some quick methods for generating matrices of Grassmann numbers,
particularly matrices of symbolic Grassmann numbers, where it can become tedious to define all
the scalar factors involved. We then discuss the common algebraic operations like multiplication
by scalars, addition, multiplication, taking the complement, finding the grade of elements,
simplification, taking components or determining the type of elements involved.
Separate sections are devoted to discussing the notions of transpose, determinant and adjoint of a
Grassmann matrix. These notions do not carry over directly into the algebra of Grassmann matrices
due to the non-commutative nature of Grassmann numbers.
Matrix powers are then discussed, and the inverse of a Grassmann matrix defined in terms of its
positive integer powers. Non-integer powers are defined for matrices whose bodies have distinct
eigenvalues. The determination of the eigensystem of a Grassmann matrix is discussed for this
class of matrix.
Finally, functions of Grassmann matrices are discussed based on the determination of their eigensystems. We show that relationships that we expect of scalars, for example Log[Exp[A]] == A or Sin[A]^2 + Cos[A]^2 == 1, can still apply to Grassmann matrices.
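The eigensystem approach can be illustrated for an ordinary numeric matrix with distinct eigenvalues (a Python sketch of the method, not the GrassmannAlgebra implementation):

```python
import numpy as np

def matrix_function(A, f):
    # Apply f through the eigensystem: valid when A has distinct eigenvalues.
    w, V = np.linalg.eig(A)
    return V @ np.diag(f(w)) @ np.linalg.inv(V)

A = np.array([[1.0, 2.0], [0.5, -1.0]])   # eigenvalues ±sqrt(2), distinct

# Log[Exp[A]] == A
assert np.allclose(matrix_function(matrix_function(A, np.exp), np.log), A)

# Sin[A]^2 + Cos[A]^2 == 1
S = matrix_function(A, np.sin)
C = matrix_function(A, np.cos)
assert np.allclose(S @ S + C @ C, np.eye(2))
print("scalar identities hold for the matrix functions")
```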
? CreateMatrixForm
CreateMatrixForm[D][X,S] constructs an array of the specified
  dimensions with copies of the expression X formed by indexing
  its scalars and variables with subscripts. S is an optional
  list of excluded symbols. D is a list of dimensions of
  the array (which may be symbolic for lists and matrices).
  Note that an indexed scalar is not recognised as a scalar
  unless it has an underscripted 0, or is declared as a scalar.
Suppose we require a 2 × 2 matrix of general Grassmann numbers of the form X (given below) in a 2-space.
X = a + b e1 + c e2 + d e1 ∧ e2
a + b e1 + c e2 + d e1 ∧ e2
We declare the 2-space, and then create a 2 × 2 matrix of the form X.
We need to declare the extra scalars that we have formed. This can be done with just one pattern (provided we wish all symbols with this pattern to be scalar).
DeclareExtraScalars[{Subscript[_, _, _]}]
Y = CreateGrassmannNumber[□]
□ + □ e1 + □ e2 + □ e1 ∧ e2
CreateMatrixForm[{2, 2}][Y]
{{□ + □ e1 + □ e2 + □ e1 ∧ e2, □ + □ e1 + □ e2 + □ e1 ∧ e2},
 {□ + □ e1 + □ e2 + □ e1 ∧ e2, □ + □ e1 + □ e2 + □ e1 ∧ e2}}
{{2 + □ e1 + □ e2 + □ e1 ∧ e2, □ + □ e1 + □ e2 + □ e1 ∧ e2},
 {□ + □ e1 + □ e2 + □ e1 ∧ e2, □ + □ e1 + □ e2 + □ e1 ∧ e2}}
• Finish off your editing and give the matrix a name.
Z = CreateGrassmannNumber[□]
□ + x □ + y □ + z □ + □ x ∧ y + □ x ∧ z + □ y ∧ z + □ x ∧ y ∧ z
CreateMatrixForm[{3}][Z] // MatrixForm
□ + x □ + y □ + z □ + □ x ∧ y + □ x ∧ z + □ y ∧ z + □ x ∧ y ∧ z
□ + x □ + y □ + z □ + □ x ∧ y + □ x ∧ z + □ y ∧ z + □ x ∧ y ∧ z
□ + x □ + y □ + z □ + □ x ∧ y + □ x ∧ z + □ y ∧ z + □ x ∧ y ∧ z
DeclareBasis[B]
{e1, e2, e3}
Matrix simplification
Simplification of the elements of a matrix may be effected by using GrassmannSimplify. For
example, suppose we are working in a Euclidean 3-space and have a matrix B:
𝒢[B]
{{1, 2 + e1 ⊙ e2}, {e3, 3}}
If we wish to simplify any orthogonal elements, we can use ToMetricForm. Since we are in a Euclidean space the scalar product term is zero.
ToMetricForm[𝒢[B]]
{{1, 2}, {e3, 3}}
A // MatrixForm
( 2 + 3 e1        e1 + 6 e2    )
( 5 - 9 e1 ∧ e2   e2 + e1 ∧ e2 )
Grade[A] // MatrixForm
( {0, 1}   1      )
( {0, 2}   {1, 2} )
M // MatrixForm
( a1,1 + e1 b1,1 + e2 c1,1 + d1,1 e1 ∧ e2   a1,2 + e1 b1,2 + e2 c1,2 + d1,2 e1 ∧ e2 )
( a2,1 + e1 b2,1 + e2 c2,1 + d2,1 e1 ∧ e2   a2,2 + e1 b2,2 + e2 c2,2 + d2,2 e1 ∧ e2 )
Body[M] // MatrixForm
( a1,1   a1,2 )
( a2,1   a2,2 )
Soul[M] // MatrixForm
( e1 b1,1 + e2 c1,1 + d1,1 e1 ∧ e2   e1 b1,2 + e2 c1,2 + d1,2 e1 ∧ e2 )
( e1 b2,1 + e2 c2,1 + d2,1 e1 ∧ e2   e1 b2,2 + e2 c2,2 + d2,2 e1 ∧ e2 )
EvenGrade[M] // MatrixForm
( a1,1 + d1,1 e1 ∧ e2   a1,2 + d1,2 e1 ∧ e2 )
( a2,1 + d2,1 e1 ∧ e2   a2,2 + d2,2 e1 ∧ e2 )
OddGrade[M] // MatrixForm
( e1 b1,1 + e2 c1,1   e1 b1,2 + e2 c1,2 )
( e1 b2,1 + e2 c2,1   e1 b2,2 + e2 c2,2 )
Finally, we can extract the elements of any specific grade using ExtractGrade.
ExtractGrade[2][M] // MatrixForm
( d1,1 e1 ∧ e2   d1,2 e1 ∧ e2 )
( d2,1 e1 ∧ e2   d2,2 e1 ∧ e2 )
A // MatrixForm
( 2 + 3 e1        e1 + 6 e2    )
( 5 - 9 e1 ∧ e2   e2 + e1 ∧ e2 )
EvenGradeQ, OddGradeQ, EvenSoulQ and GradeQ all interrogate the individual elements of a matrix.
EvenGradeQ[A] // MatrixForm
( False   False )
( True    False )
OddGradeQ[A] // MatrixForm
( False   True  )
( False   False )
EvenSoulQ[A] // MatrixForm
( False   False )
( False   False )
GradeQ[1][A] // MatrixForm
( False   True  )
( False   False )
To check the type of a complete matrix we can ask whether it is free of either True or False values.
FreeQ[GradeQ[1][A], False]
False
FreeQ[EvenSoulQ[A], True]
True
Multiplication by scalars
Due to the Listable attribute of the Times operation in Mathematica, multiplication by scalars
behaves as expected. For example, multiplying the Grassmann matrix A created in Section 13.2 by
the scalar a gives:
A // MatrixForm
( 2 + 3 e1        e1 + 6 e2    )
( 5 - 9 e1 ∧ e2   e2 + e1 ∧ e2 )
a A // MatrixForm
( a (2 + 3 e1)        a (e1 + 6 e2)    )
( a (5 - 9 e1 ∧ e2)   a (e2 + e1 ∧ e2) )
Matrix addition
Due to the Listable attribute of the Plus operation in Mathematica, matrices of the same size
add automatically.
A + M
{{2 + 3 e1 + a1,1 + e1 b1,1 + e2 c1,1 + d1,1 e1 ∧ e2,
  e1 + 6 e2 + a1,2 + e1 b1,2 + e2 c1,2 + d1,2 e1 ∧ e2},
 {5 + a2,1 + e1 b2,1 + e2 c2,1 - 9 e1 ∧ e2 + d2,1 e1 ∧ e2,
  e2 + a2,2 + e1 b2,2 + e2 c2,2 + e1 ∧ e2 + d2,2 e1 ∧ e2}}
𝒢[A + M]
{{2 + a1,1 + e1 (3 + b1,1) + e2 c1,1 + d1,1 e1 ∧ e2,
  a1,2 + e1 (1 + b1,2) + e2 (6 + c1,2) + d1,2 e1 ∧ e2},
 {5 + a2,1 + e1 b2,1 + e2 c2,1 + (-9 + d2,1) e1 ∧ e2,
  a2,2 + e1 b2,2 + e2 (1 + c2,2) + (1 + d2,2) e1 ∧ e2}}
M // MatrixForm
( a1,1 + e1 b1,1 + e2 c1,1 + d1,1 e1 ∧ e2   a1,2 + e1 b1,2 + e2 c1,2 + d1,2 e1 ∧ e2 )
( a2,1 + e1 b2,1 + e2 c2,1 + d2,1 e1 ∧ e2   a2,2 + e1 b2,2 + e2 c2,2 + d2,2 e1 ∧ e2 )
M̄ // MatrixForm
( e2 b1,1 - e1 c1,1 + d1,1 + a1,1 e1 ∧ e2   e2 b1,2 - e1 c1,2 + d1,2 + a1,2 e1 ∧ e2 )
( e2 b2,1 - e1 c2,1 + d2,1 + a2,1 e1 ∧ e2   e2 b2,2 - e1 c2,2 + d2,2 + a2,2 e1 ∧ e2 )
Matrix products
As with the product of two Grassmann numbers, entering the product of two matrices of Grassmann numbers does not effect any multiplication. For example, take the matrix A in a 2-space created in the previous section.
MatrixForm[A] ∧ MatrixForm[A]
( 2 + 3 e1        e1 + 6 e2    )   ( 2 + 3 e1        e1 + 6 e2    )
( 5 - 9 e1 ∧ e2   e2 + e1 ∧ e2 ) ∧ ( 5 - 9 e1 ∧ e2   e2 + e1 ∧ e2 )
To multiply out the product A ∧ A, we use the GrassmannAlgebra MatrixProduct function which performs the multiplication. The short-hand for MatrixProduct is a script capital M, ℳ, obtained by typing [ESC]scM[ESC].
ℳ[A ∧ A]
{{(2 + 3 e1) ∧ (2 + 3 e1) + (e1 + 6 e2) ∧ (5 - 9 e1 ∧ e2),
  (2 + 3 e1) ∧ (e1 + 6 e2) + (e1 + 6 e2) ∧ (e2 + e1 ∧ e2)},
 {(5 - 9 e1 ∧ e2) ∧ (2 + 3 e1) + (e2 + e1 ∧ e2) ∧ (5 - 9 e1 ∧ e2),
  (5 - 9 e1 ∧ e2) ∧ (e1 + 6 e2) + (e2 + e1 ∧ e2) ∧ (e2 + e1 ∧ e2)}}
To take the product and simplify at the same time, apply the GrassmannSimplify function.
Regressive products
We can also take the regressive product.
Interior products
We calculate an interior product of two matrices in the same way.
P = 𝒢[ℳ[A ⊙ A]]
{{4 + 9 (e1 ⊙ e1) + 11 e1 + 30 e2, 3 (e1 ⊙ e1) + 19 (e1 ⊙ e2) + 6 (e2 ⊙ e2)},
 {10 - 27 (e1 ∧ e2 ⊙ e1) - 9 (e1 ∧ e2 ⊙ e1 ∧ e2) + 5 e2 - 13 e1 ∧ e2,
  e2 ⊙ e2 - 9 (e1 ∧ e2 ⊙ e1) - 53 (e1 ∧ e2 ⊙ e2) + e1 ∧ e2 ⊙ e1 ∧ e2}}
To convert the interior products to scalar products, use the GrassmannAlgebra function ToScalarProducts on the matrix.
P1 = ToScalarProducts[P]
{{4 + 9 (e1 ⊙ e1) + 11 e1 + 30 e2, 3 (e1 ⊙ e1) + 19 (e1 ⊙ e2) + 6 (e2 ⊙ e2)},
 {10 + 9 (e1 ⊙ e2)^2 - 9 (e1 ⊙ e1) (e2 ⊙ e2) + 27 (e1 ⊙ e2) e1 + 5 e2 -
   27 (e1 ⊙ e1) e2 - 13 e1 ∧ e2, -(e1 ⊙ e2)^2 + e2 ⊙ e2 + (e1 ⊙ e1) (e2 ⊙ e2) +
   9 (e1 ⊙ e2) e1 + 53 (e2 ⊙ e2) e1 - 9 (e1 ⊙ e1) e2 - 53 (e1 ⊙ e2) e2}}
This product can be simplified further if the values of the scalar products of the basis elements are known. In a Euclidean space, we obtain:
ToMetricForm[P1] // MatrixForm
( 13 + 11 e1 + 30 e2       9                )
( 1 - 22 e2 - 13 e1 ∧ e2   2 + 53 e1 - 9 e2 )
Clifford products
The Clifford product of two matrices can be calculated in stages. First we multiply the elements and expand and simplify the terms with GrassmannSimplify.
C1 = 𝒢[ℳ[A ∘ A]]
{{2 ∘ 2 + 3 × 2 ∘ e1 + 3 e1 ∘ 2 + e1 ∘ 5 + 9 e1 ∘ e1 - 9 e1 ∘ (e1 ∧ e2) +
   6 e2 ∘ 5 - 54 e2 ∘ (e1 ∧ e2), 2 ∘ e1 + 6 × 2 ∘ e2 + 3 e1 ∘ e1 +
   19 e1 ∘ e2 + e1 ∘ (e1 ∧ e2) + 6 e2 ∘ e2 + 6 e2 ∘ (e1 ∧ e2)},
 {5 ∘ 2 + 3 × 5 ∘ e1 + e2 ∘ 5 - 9 e2 ∘ (e1 ∧ e2) - 9 (e1 ∧ e2) ∘ 2 +
   (e1 ∧ e2) ∘ 5 - 27 (e1 ∧ e2) ∘ e1 - 9 (e1 ∧ e2) ∘ (e1 ∧ e2),
  5 ∘ e1 + 6 × 5 ∘ e2 + e2 ∘ e2 + e2 ∘ (e1 ∧ e2) - 9 (e1 ∧ e2) ∘ e1 -
   53 (e1 ∧ e2) ∘ e2 + (e1 ∧ e2) ∘ (e1 ∧ e2)}}
Next we can convert the Clifford products into exterior and scalar products with ToScalarProducts.
C2 = ToScalarProducts[C1]
{{4 + 9 (e1 ⊙ e1) + 17 e1 + 9 (e1 ⊙ e2) e1 +
   54 (e2 ⊙ e2) e1 + 30 e2 - 9 (e1 ⊙ e1) e2 - 54 (e1 ⊙ e2) e2,
  3 (e1 ⊙ e1) + 19 (e1 ⊙ e2) + 6 (e2 ⊙ e2) + 2 e1 - (e1 ⊙ e2) e1 -
   6 (e2 ⊙ e2) e1 + 12 e2 + (e1 ⊙ e1) e2 + 6 (e1 ⊙ e2) e2 + 19 e1 ∧ e2},
 {10 - 9 (e1 ⊙ e2)^2 + 9 (e1 ⊙ e1) (e2 ⊙ e2) + 15 e1 - 27 (e1 ⊙ e2) e1 +
   9 (e2 ⊙ e2) e1 + 5 e2 + 27 (e1 ⊙ e1) e2 - 9 (e1 ⊙ e2) e2 - 13 e1 ∧ e2,
  (e1 ⊙ e2)^2 + e2 ⊙ e2 - (e1 ⊙ e1) (e2 ⊙ e2) + 5 e1 - 9 (e1 ⊙ e2) e1 -
   54 (e2 ⊙ e2) e1 + 30 e2 + 9 (e1 ⊙ e1) e2 + 54 (e1 ⊙ e2) e2}}
ToMetricForm[C2] // MatrixForm
( 13 + 71 e1 + 21 e2                9 - 4 e1 + 13 e2 + 19 e1 ∧ e2 )
( 19 + 24 e1 + 32 e2 - 13 e1 ∧ e2   -49 e1 + 39 e2                )
Bᵀ ∧ Aᵀ == (Be + Bo)ᵀ ∧ (Ae + Ao)ᵀ ==
 (Beᵀ + Boᵀ) ∧ (Aeᵀ + Aoᵀ) == Beᵀ ∧ Aeᵀ + Beᵀ ∧ Aoᵀ + Boᵀ ∧ Aeᵀ + Boᵀ ∧ Aoᵀ
If the elements of two Grassmann matrices commute, then just as for the usual case, the transpose of a product is the product of the transposes in reverse order. If they anti-commute, then the transpose of a product is the negative of the product of the transposes in reverse order. Thus, the corresponding terms in the two expansions above which involve an even matrix will be equal, and the last terms involving two odd matrices will differ by a sign:
Thus
(A ∧ B)ᵀ == Bᵀ ∧ Aᵀ   ⟺   Ao ∧ Bo == 0        13.2
TestTransposeRelation[A_, B_] :=
  Module[{T}, T = Transpose;
   𝒢[T[ℳ[A ∧ B]]] ==
    𝒢[ℳ[T[B] ∧ T[A]] - 2 ℳ[T[OddGrade[B]] ∧ T[OddGrade[A]]]]]
A // MatrixForm
( 2 + 3 e1        e1 + 6 e2    )
( 5 - 9 e1 ∧ e2   e2 + e1 ∧ e2 )
M // MatrixForm
( a1,1 + e1 b1,1 + e2 c1,1 + d1,1 e1 ∧ e2   a1,2 + e1 b1,2 + e2 c1,2 + d1,2 e1 ∧ e2 )
( a2,1 + e1 b2,1 + e2 c2,1 + d2,1 e1 ∧ e2   a2,2 + e1 b2,2 + e2 c2,2 + d2,2 e1 ∧ e2 )
TestTransposeRelation[M, A]
True
It should be remarked that for the transpose of a product to be equal to the product of the
transposes in reverse order, the evenness of the matrices is a sufficient condition, but not a
necessary one. All that is required is that the exterior product of the odd components of the
matrices be zero. This may be achieved without the odd components themselves necessarily being
zero.
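The relationship tested by TestTransposeRelation can also be checked in a concrete representation. The sketch below represents the Grassmann algebra of 2-space by (assumed) Jordan-Wigner matrices, so that matrix products model exterior products, and verifies (A ∧ B)ᵀ == Bᵀ ∧ Aᵀ - 2 Boᵀ ∧ Aoᵀ for two sample matrices:

```python
import numpy as np

# Jordan-Wigner matrices: a faithful representation of the Grassmann
# algebra on two generators e1, e2 (ek ek = 0, e1 e2 = -e2 e1).
s3 = np.diag([1.0, -1.0])
sm = np.array([[0.0, 0.0], [1.0, 0.0]])
e1 = np.kron(sm, np.eye(2))
e2 = np.kron(s3, sm)
one = np.eye(4)
P = np.kron(s3, s3)          # parity operator: P X P negates odd elements

def odd(x):                  # odd-grade part of a represented number
    return (x - P @ x @ P) / 2

def mprod(A, B):             # the Grassmann matrix product
    return [[sum(A[i][k] @ B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def gtrans(A):               # transpose the matrix without touching entries
    return [[A[j][i] for j in range(2)] for i in range(2)]

def modd(A):                 # OddGrade applied entry by entry
    return [[odd(A[i][j]) for j in range(2)] for i in range(2)]

A = [[2*one + 3*e1, e1 + 6*e2], [5*one - 9*e1@e2, e2 + e1@e2]]
B = [[one + e2, 4*one + e1], [e1@e2, 7*e1 - e2]]

lhs = gtrans(mprod(A, B))
rhs1 = mprod(gtrans(B), gtrans(A))
rhs2 = mprod(gtrans(modd(B)), gtrans(modd(A)))
ok = all(np.allclose(lhs[i][j], rhs1[i][j] - 2*rhs2[i][j])
         for i in range(2) for j in range(2))
print(ok)
```

The check succeeds for arbitrary entries, since the relation is an identity of the algebra rather than a property of particular matrices.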
( x   0 )
( 0   y )
The natural definition of the determinant of this matrix would be the product of the diagonal elements. However, which product should it be, x ∧ y or y ∧ x? The possible non-commutativity of the Grassmann numbers on the diagonal may give rise to different products of the diagonal elements, depending on the ordering of the elements.
On the other hand, if the product of the diagonal elements of a diagonal Grassmann matrix is
unique, as would be the case for even elements, the determinant may be defined uniquely as that
product.
To calculate the determinant of an even Grassmann matrix, we can use the same basic algorithm as for scalar matrices. In GrassmannAlgebra this is implemented with the function GrassmannDeterminant. For example:
B = EvenGrade[M]; MatrixForm[B]
( a1,1 + d1,1 e1 ∧ e2   a1,2 + d1,2 e1 ∧ e2 )
( a2,1 + d2,1 e1 ∧ e2   a2,2 + d2,2 e1 ∧ e2 )
B1 = GrassmannDeterminant@BD
Ha1,1 + d1,1 e1 Ô e2 L Ô Ha2,2 + d2,2 e1 Ô e2 L -
Ha1,2 + d1,2 e1 Ô e2 L Ô Ha2,1 + d2,1 e1 Ô e2 L
Note that GrassmannDeterminant does not carry out any simplification, but leaves the products in raw form to make the process easier to follow. To simplify the result one needs to use GrassmannSimplify.
GrassmannSimplify[B1]
-a1,2 a2,1 + a1,1 a2,2 + (a2,2 d1,1 - a2,1 d1,2 - a1,2 d2,1 + a1,1 d2,2) e1 ∧ e2
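Even Grassmann numbers commute, so this determinant can be checked in a few lines of code. The sketch below (Python as a language-neutral stand-in for the Mathematica session; all names are illustrative) models an even element a + d e1∧e2 as a coefficient pair and reproduces the expanded form printed by GrassmannSimplify, with numeric stand-ins for the symbolic coefficients:

```python
# Even Grassmann numbers in a 2-space commute, and (e1^e2)^(e1^e2) = 0,
# so an even element a + d e1^e2 can be modelled as the pair (a, d).
def wedge(p, q):
    (a1, d1), (a2, d2) = p, q
    return (a1 * a2, a1 * d2 + a2 * d1)

def sub(p, q):
    return (p[0] - q[0], p[1] - q[1])

def grassmann_det_2x2(m):
    # Same algorithm as for scalar matrices: m11 ^ m22 - m12 ^ m21.
    return sub(wedge(m[0][0], m[1][1]), wedge(m[0][1], m[1][0]))

# Numeric stand-ins for the symbolic coefficients a[i,j] and d[i,j].
a = [[2, 3], [5, 7]]
d = [[11, 13], [17, 19]]
m = [[(a[i][j], d[i][j]) for j in range(2)] for i in range(2)]

det = grassmann_det_2x2(m)
# The expanded form produced by GrassmannSimplify above:
# (a11 a22 - a12 a21) + (a22 d11 - a21 d12 - a12 d21 + a11 d22) e1^e2
expected = (a[0][0] * a[1][1] - a[0][1] * a[1][0],
            a[1][1] * d[0][0] - a[1][0] * d[0][1]
            - a[0][1] * d[1][0] + a[0][0] * d[1][1])
print(det == expected)  # True
```

The closed form for the product of even elements is what makes the determinant well defined here: the ordering of the diagonal factors does not matter.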
GrassmannMatrixProduct[A ∧ A]
{{x ∧ x + y ∧ u, x ∧ y + y ∧ v}, {u ∧ x + v ∧ u, u ∧ y + v ∧ v}}
A3 = GrassmannSimplify[A ∧ A2]; MatrixForm[A3]
( -(u ∧ v ∧ y) + 2 u ∧ x ∧ y    v ∧ x ∧ y )
( -(u ∧ v ∧ x)                  -2 u ∧ v ∧ y + u ∧ x ∧ y )
Because this matrix has no body, some power of it must be zero; which power depends on the dimension of the space, in this case 4. Here we see that it is the fourth power.
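The nilpotency claim is easy to test directly. The sketch below (Python standing in for the Mathematica session; `wedge` and `mat_wedge` are illustrative helper names, not GrassmannAlgebra functions) implements the exterior product on basis monomials and confirms that the cube of this bodiless matrix is non-zero while its fourth power vanishes in a 4-space:

```python
from itertools import product

# A dict-based sketch of the exterior algebra: an element maps each basis
# monomial (a frozenset of generator indices) to its coefficient.
def wedge(p, q):
    out = {}
    for (m1, c1), (m2, c2) in product(p.items(), q.items()):
        if m1 & m2:
            continue                      # repeated 1-element factor => 0
        # sign of the permutation merging the two ascending monomials
        sign = (-1) ** sum(1 for i in m1 for j in m2 if i > j)
        key = m1 | m2
        out[key] = out.get(key, 0) + sign * c1 * c2
    return {k: c for k, c in out.items() if c != 0}

def mat_wedge(A, B):
    n = len(A)
    C = [[{} for _ in range(n)] for _ in range(n)]
    for i, j, k in product(range(n), repeat=3):
        for key, c in wedge(A[i][k], B[k][j]).items():
            C[i][j][key] = C[i][j].get(key, 0) + c
    return [[{k: c for k, c in e.items() if c != 0} for e in row] for row in C]

# Four independent 1-elements in a 4-space, standing in for x, y, u, v.
x, y, u, v = ({frozenset([i]): 1} for i in range(4))
A = [[x, y], [u, v]]                      # a bodiless matrix

powers = [A]
for _ in range(3):
    powers.append(mat_wedge(powers[-1], A))

print(any(c for row in powers[2] for e in row for c in e.values()))  # True: cube nonzero
print(all(not e for row in powers[3] for e in row))                  # True: A^4 == 0
```

The intermediate `powers[2]` agrees term by term with the matrix A3 displayed above, so the vanishing fourth power is exactly the cancellation of grade-4 terms.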
B = A + IdentityMatrix[2]; MatrixForm[B]
( 1 + x    y )
( u        1 + v )
GrassmannIntegerPower
Although we will see later that the GrassmannAlgebra function GrassmannMatrixPower deals with more general cases than the positive integer powers discussed here, we can write a simple function GrassmannIntegerPower to calculate any positive integer power. Like GrassmannMatrixPower, it stores its results for further use in the same Mathematica session, which makes subsequent calculations involving powers already calculated much faster.
GrassmannIntegerPower[M_, n_?PositiveIntegerQ] :=
  Module[{P},
    P = NestList[GrassmannSimplify[GrassmannMatrixProduct[M ∧ #]] &, M, n - 1];
    GrassmannIntegerPower[M, m_, Dimension] := First[Take[P, {m}]];
    Last[P]]
B
{{1 + x, y}, {u, 1 + v}}
GrassmannIntegerPower[B, 4] // MatrixForm
( 1 + 4 x - 6 u ∧ y - 4 u ∧ v ∧ y + 8 u ∧ x ∧ y    4 y - 6 v ∧ y + 6 x ∧ y + 4 v ∧ x ∧ y )
( 4 u - 6 u ∧ v + 6 u ∧ x - 4 u ∧ v ∧ x            1 + 4 v + 6 u ∧ y - 8 u ∧ v ∧ y + 4 u ∧ x ∧ y )
However, on the way to calculating the fourth power, the function GrassmannIntegerPower has remembered the values of all the integer powers up to the fourth, in this case the second, third and fourth. We can get immediate access to these by adding the Dimension of the space as a third argument. The Dimension is included because the power of a matrix may take a different form in spaces of different dimensions (some terms vanish when their grade exceeds the dimension of the space), so a stored power is recalled only for the dimension in which it was computed.
Dimension
4
GrassmannMatrixPower
The principal and more general function for calculating matrix powers provided by
GrassmannAlgebra is GrassmannMatrixPower. Here we verify that it gives the same result as
our simple GrassmannIntegerPower function.
GrassmannMatrixPower[B, 4]
{{1 + 4 x - 6 u ∧ y - 4 u ∧ v ∧ y + 8 u ∧ x ∧ y, 4 y - 6 v ∧ y + 6 x ∧ y + 4 v ∧ x ∧ y},
 {4 u - 6 u ∧ v + 6 u ∧ x - 4 u ∧ v ∧ x, 1 + 4 v + 6 u ∧ y - 8 u ∧ v ∧ y + 4 u ∧ x ∧ y}}
The matrix A^-4 is calculated by GrassmannMatrixPower by taking the fourth power of the inverse. We could of course get the same result by taking the inverse of the fourth power.
GrassmannMatrixInverse[GrassmannMatrixPower[A, 4]]
{{1 + 10 x ∧ y, -4 x}, {-4 y, 1 - 10 x ∧ y}}
Let A be the 2×2 matrix which differs from the matrix discussed above by having distinct eigenvalues 1 and 2. We will discuss eigenvalues in Section 13.9 below.
sqrtA = GrassmannMatrixPower[A, 1/2]
{{1 + (-(3/2) + √2) x ∧ y, (-1 + √2) x},
 {(-1 + √2) y, √2 + (1/4) (-4 + 3 √2) x ∧ y}}

GrassmannSimplify[GrassmannMatrixProduct[sqrtA ∧ sqrtA]]
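The square root can also be checked numerically. In the sketch below (Python; an illustrative dict-based model of Grassmann numbers, not part of GrassmannAlgebra), wedge-squaring the matrix with s = √2 recovers {{1, x}, {y, 2}} to floating-point accuracy:

```python
import math
from itertools import product

# Represent a Grassmann number on generators x, y as a dict mapping a
# frozenset of generator indices to its coefficient.
def wedge(p, q):
    out = {}
    for (m1, c1), (m2, c2) in product(p.items(), q.items()):
        if m1 & m2:
            continue                      # repeated factor => 0
        sign = (-1) ** sum(1 for i in m1 for j in m2 if i > j)
        out[m1 | m2] = out.get(m1 | m2, 0.0) + sign * c1 * c2
    return out

def mat_wedge(A, B):
    C = [[{} for _ in range(2)] for _ in range(2)]
    for i, j, k in product(range(2), repeat=3):
        for m, c in wedge(A[i][k], B[k][j]).items():
            C[i][j][m] = C[i][j].get(m, 0.0) + c
    return C

one, x, y, xy = frozenset(), frozenset([0]), frozenset([1]), frozenset([0, 1])
s = math.sqrt(2)
sqrtA = [[{one: 1.0, xy: s - 1.5}, {x: s - 1.0}],
         [{y: s - 1.0}, {one: s, xy: (3 * s - 4) / 4}]]
A = [[{one: 1.0}, {x: 1.0}], [{y: 1.0}, {one: 2.0}]]

sq = mat_wedge(sqrtA, sqrtA)
ok = all(abs(sq[i][j].get(m, 0.0) - A[i][j].get(m, 0.0)) < 1e-12
         for i in range(2) for j in range(2) for m in (one, x, y, xy))
print(ok)  # True
```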
More generally, we can extract a symbolic pth root. But to have the simplest result we need to ensure that p has been declared a scalar first:
DeclareExtraScalars[{p}]
pthRootA = GrassmannMatrixPower[A, 1/p]
{{1 + (-1 + 2^(1/p) - 1/p) x ∧ y, (-1 + 2^(1/p)) x},
 {(-1 + 2^(1/p)) y, 2^(1/p) + (-1 + 2^(1/p) - 2^(-1 + 1/p)/p) x ∧ y}}
If the power required is not integral and the eigenvalues are not distinct, GrassmannMatrixPower will return a message and the unevaluated input. Let B be a simple 2×2 matrix with equal eigenvalues 1 and 1:

GrassmannMatrixPower[B, 1/2]

Eigenvalues::notDistinct: The matrix {{1, x}, {y, 1}} does not have distinct scalar eigenvalues. The operation applies only to matrices with distinct scalar eigenvalues.

GrassmannMatrixPower[{{1, x}, {y, 1}}, 1/2]
? GrassmannDistinctEigenvaluesMatrixPower
GrassmannDistinctEigenvaluesMatrixPower[A, p] calculates the power p of a Grassmann matrix A with distinct eigenvalues. p may be either numeric or symbolic.
For example, suppose we wish to calculate the 100th power of the matrix A below.
A // MatrixForm
( 1    x )
( y    2 )

DistinctEigenvaluesQ[A]
True

A
{{1, x}, {y, 2}}
GrassmannDistinctEigenvaluesMatrixPower[A, 100]
{{1 + 1 267 650 600 228 229 401 496 703 205 275 x ∧ y,
  1 267 650 600 228 229 401 496 703 205 375 x},
 {1 267 650 600 228 229 401 496 703 205 375 y,
  1 267 650 600 228 229 401 496 703 205 376 -
  62 114 879 411 183 240 673 338 457 063 425 x ∧ y}}
(I + Xk) ∧ (I - Xk + Xk^2 - Xk^3 + Xk^4 - … ± Xk^q) == I ± Xk^(q+1)

Now, since Xk has no body, its highest non-zero power can be at most the dimension n of the space. That is, Xk^(n+1) == 0. Thus

(I + Xk) ∧ (I - Xk + Xk^2 - Xk^3 + Xk^4 - … ± Xk^n) == I    13.3
If now we have a general Grassmann matrix X, say, we can write X as X == Xb + Xs, where Xb is the body of X and Xs is the soul of X, and then take Xb out as a factor to get:

X == Xb + Xs == Xb (I + Xb^-1 Xs) == Xb (I + Xk)

Pre- and post-multiplying the equation above for the inverse of (I + Xk) by Xb and Xb^-1 respectively gives:

Xb (I + Xk) ∧ (I - Xk + Xk^2 - Xk^3 + Xk^4 - … ± Xk^n) Xb^-1 == I

Since the first factor is simply X, we finally obtain the inverse of X in terms of Xk == Xb^-1 Xs as:

X^-1 == (I - (Xb^-1 Xs) + (Xb^-1 Xs)^2 - … ± (Xb^-1 Xs)^n) Xb^-1    13.5
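Formula 13.5 uses nothing about Xk except that it is nilpotent, so the algebra can be exercised with any nilpotent stand-in. In the sketch below (Python; a strictly upper-triangular soul plays the nilpotent role that the grading plays for a genuine Grassmann soul), the truncated alternating series reproduces the exact inverse:

```python
from fractions import Fraction as F

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    return [[c * a for a in row] for row in A]

def identity(n):
    return [[F(int(i == j)) for j in range(n)] for i in range(n)]

n = 3
# Body: an invertible diagonal matrix; soul: strictly upper triangular,
# hence nilpotent -- the role the Grassmann grading plays in the text.
Xb  = [[F(2), F(0), F(0)], [F(0), F(3), F(0)], [F(0), F(0), F(5)]]
Xs  = [[F(0), F(1), F(4)], [F(0), F(0), F(2)], [F(0), F(0), F(0)]]
iXb = [[F(1, 2), F(0), F(0)], [F(0), F(1, 3), F(0)], [F(0), F(0), F(1, 5)]]

Xk = mat_mul(iXb, Xs)                     # Xk = Xb^-1 Xs, nilpotent

# X^-1 = (I - Xk + Xk^2 - ... ± Xk^n) Xb^-1      (formula 13.5)
K, P = identity(n), identity(n)
for i in range(1, n + 1):
    P = mat_mul(P, Xk)
    K = mat_add(K, mat_scale(F((-1) ** i), P))
iX = mat_mul(K, iXb)

X = mat_add(Xb, Xs)
print(mat_mul(X, iX) == identity(n))  # True
```

Because Xk^(n+1) = 0, the series terminates and the result is an exact inverse, not an approximation.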
GrassmannMatrixInverse
This formula is straightforward to implement as a function. In GrassmannAlgebra we can calculate
the inverse of a Grassmann matrix X by using GrassmannMatrixInverse.
? GrassmannMatrixInverse
GrassmannMatrixInverse[X] computes the inverse of the matrix X of Grassmann numbers in a space of the currently declared number of dimensions.
GrassmannMatrixInverse[X_] :=
  Module[{B, S, iB, iBS, K},
    B = Body[X]; S = Soul[X];
    iB = Inverse[B]; iBS = GrassmannSimplify[GrassmannMatrixProduct[iB ∧ S]];
    K = Sum[(-1)^i GrassmannMatrixPower[iBS, i], {i, 0, Dimension}];
    GrassmannSimplify[GrassmannMatrixProduct[K ∧ iB]]]
iA = GrassmannMatrixInverse[A]; MatrixForm[iA]
( 1 + x ∧ y    -x )
( -y           1 - x ∧ y )
iA = GrassmannMatrixInverse[A] // Simplify
{{(1/6) (-(e1 ∧ e4) - e2 ∧ e4), (1/18) (6 + e1 ∧ e4 - e2 ∧ e4 + e1 ∧ e2 ∧ e4), -(e4/3)},
 {(1/12) (6 + e1 ∧ e4 + e2 ∧ e4 - 3 e1 ∧ e2 ∧ e3 + e1 ∧ e2 ∧ e4),
  (1/36) (-6 - 6 e1 - e1 ∧ e4 + e2 ∧ e4 + 3 e1 ∧ e2 ∧ e3),
  (1/6) (e4 + e1 ∧ e4 + 3 e2 ∧ e3)},
 {(1/4) (-2 e1 - e1 ∧ e2 ∧ e4), (1/36) (6 e1 - 12 e2 + 13 e1 ∧ e2 ∧ e4),
  (1/6) (6 + 5 e1 ∧ e4 + 2 e2 ∧ e4 - 3 e1 ∧ e2 ∧ e3)}}
Finally, if we try to find the inverse of a matrix with a singular body we get the expected type of
error messages.
GrassmannMatrixInverse[A]
Inverse::sing: Matrix {{1, 2}, {1, 2}} is singular.
GrassmannSimplify[GrassmannMatrixProduct[GrassmannMatrixInverse[M] ∧ Y]]

Alternatively, the equations may need to be expressed as X ∧ M == Y, in which case the solution X is given by:

GrassmannSimplify[GrassmannMatrixProduct[Y ∧ GrassmannMatrixInverse[M]]]
? GrassmannLeftLinearSolve
GrassmannLeftLinearSolve[M, Y] calculates a matrix or vector X which solves the equation M ∧ X == Y.
Y = {3 + 2 x ∧ y, 7 - 5 x}; MatrixForm[Y]
( 3 + 2 x ∧ y )
( 7 - 5 x     )
GrassmannLeftLinearSolve gives:

GrassmannSimplify[GrassmannMatrixProduct[M ∧ X]] == Y
True

GrassmannRightLinearSolve gives:

GrassmannSimplify[GrassmannMatrixProduct[X ∧ M]] == Y
True
The pair {X, Λ} is called the eigensystem of A. If X is invertible we can post-multiply by X^-1 to get:

A == X ∧ Λ ∧ X^-1    13.7
X^-1 ∧ A ∧ X == Λ    13.8
It is this decomposition which allows us to define functions of Grassmann matrices. This will be
discussed further in Section 13.10.
The number of scalar equations resulting from the matrix equation A ∧ X == X ∧ Λ is n^2. The number of unknowns that we are likely to be able to determine is therefore n^2. There are n unknown eigenvalues in Λ, leaving n^2 - n unknowns to be determined in X. Since X occurs on both sides of the equation, we need only determine its columns (or rows) to within a scalar multiple. Hence there are really only n^2 - n unknowns to be determined in X, which leads us to hope that a decomposition of this form is indeed possible.
The process we adopt is to assume unknown general Grassmann numbers for the diagonal components of Λ and for the components of X. If the dimension of the space is N, there will be 2^N scalar coefficients to be determined for each of the unknowns. We then use Mathematica's powerful Solve function to obtain the values of these unknowns. In practice, during the calculations, we assume that the basis of the space is composed of just the non-scalar symbols existing in the matrix A. This reduces the computation should the currently declared basis be of higher dimension than the number of different 1-elements in the matrix, and also allows the eigensystem to be obtained for matrices which are not expressed in terms of basis elements.
Ab ∧ Xb + (Ab ∧ Xs + As ∧ Xb + As ∧ Xs) == Xb ∧ Λb + (Xb ∧ Λs + Xs ∧ Λb + Xs ∧ Λs)
The first term on each side is the body of the equation, and the second terms (in brackets) its soul. The body of the equation is simply the eigensystem equation for the body of A, and shows that the scalar components of the unknown X and Λ matrices are simply the eigensystem of the body of A.
Ab ∧ Xb == Xb ∧ Λb    13.9
By decomposing the soul of the equation into a sequence of equations relating matrices of elements of different grades, the solution for the unknown elements of X and Λ may be obtained sequentially by using the solutions from the equations of lower grade. For example, the equation for the elements of grade 1 is
Ab ∧ Xs1 + As1 ∧ Xb == Xb ∧ Λs1 + Xs1 ∧ Λb    13.10

where As1, Xs1, and Λs1 are the grade-1 components of As, Xs, and Λs. Here we know Ab and As1, and have already solved for Xb and Λb, leaving just Xs1 and Λs1 to be determined. However, although this process would be helpful if trying to solve by hand, it is not necessary when using the Solve function in Mathematica.
GrassmannMatrixEigensystem
The function implemented in GrassmannAlgebra for calculating the eigensystem of a matrix of
Grassmann numbers is GrassmannMatrixEigensystem. It is capable of calculating the
eigensystem only of matrices whose body has distinct eigenvalues.
? GrassmannMatrixEigensystem
GrassmannMatrixEigensystem[A] calculates a list comprising the matrix of eigenvectors and the diagonal matrix of eigenvalues for a Grassmann matrix A whose body has distinct eigenvalues.
If the matrix does not have distinct eigenvalues, GrassmannMatrixEigensystem will return a message telling us. For example:

GrassmannMatrixEigensystem[A]

Eigenvalues::notDistinct: The matrix {{2, x}, {y, 2}} does not have distinct scalar eigenvalues. The operation applies only to matrices with distinct scalar eigenvalues.

GrassmannMatrixEigensystem[{{2, x}, {y, 2}}]
If the matrix has distinct eigenvalues, a list of two matrices is returned. The first is the matrix of
eigenvectors (whose columns have been normalized with an algorithm which tries to produce as
simple a form as possible), and the second is the (diagonal) matrix of eigenvalues.
To verify that this is indeed the eigensystem for A we can evaluate the equation A ∧ X == X ∧ Λ.
{{0, 0, 1},
 {(-2 + I) - (3 + (7 I)/2) e1 - (5/2 + (5 I)/2) e2 + (17/4 - (7 I)/2) e1 ∧ e2,
  (-2 - I) - (3 - (7 I)/2) e1 - (5/2 - (5 I)/2) e2 + (17/4 + (7 I)/2) e1 ∧ e2, 0},
 {1, 1, 0}}

Λ
{{-I + (1 + (9 I)/2) e1 - (1/2 - (7 I)/2) e2 - (15/4 - (9 I)/2) e1 ∧ e2, 0, 0},
 {0, I + (1 - (9 I)/2) e1 - (1/2 + (7 I)/2) e2 - (15/4 + (9 I)/2) e1 ∧ e2, 0},
 {0, 0, 2 + e1 + 2 e1 ∧ e2}}
A == X ∧ Λ ∧ X^-1

Hence we can write its exterior square as:

A ∧ A == X ∧ Λ ∧ X^-1 ∧ X ∧ Λ ∧ X^-1 == X ∧ Λ^2 ∧ X^-1

This result is easily generalized to give an expression for any power of a matrix.
A^p == X ∧ Λ^p ∧ X^-1    13.11

∑ ap A^p == ∑ ap X ∧ Λ^p ∧ X^-1 == X ∧ (∑ ap Λ^p) ∧ X^-1

and hence any function f[A] defined by a linear combination of powers (series):

f[A] == ∑ ap A^p
Λ == DiagonalMatrix[{λ1, λ2, …, λm}]

its pth power is just the diagonal matrix of the pth powers of the elements.

Λ^p == DiagonalMatrix[{λ1^p, λ2^p, …, λm^p}]

Hence:

∑ ap Λ^p == DiagonalMatrix[{∑ ap λ1^p, ∑ ap λ2^p, …, ∑ ap λm^p}]
Hence, finally we have an expression for the function f of the matrix A in terms of the eigenvector
matrix and a diagonal matrix in which each diagonal element is the function f of the corresponding
eigenvalue.
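The same pipeline — diagonalize, apply f to each eigenvalue, transform back — can be sketched for an ordinary 2×2 scalar matrix with distinct eigenvalues (Python; the helper names are illustrative). Computing exp by this route and then log recovers the original matrix:

```python
import math

# f(A) = X f(L) X^-1 for a 2x2 scalar matrix with distinct eigenvalues.
def eig_2x2(A):
    (a, b), (c, d) = A
    disc = math.sqrt((a - d) ** 2 + 4 * b * c)
    l1, l2 = (a + d + disc) / 2, (a + d - disc) / 2
    # columns are eigenvectors (b != 0 assumed): (A - l I)(b, l - a) = 0
    X = [[b, b], [l1 - a, l2 - a]]
    return X, [l1, l2]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv_2x2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_func(A, f):
    X, (l1, l2) = eig_2x2(A)
    L = [[f(l1), 0], [0, f(l2)]]          # f acts on the diagonal alone
    return mat_mul(mat_mul(X, L), inv_2x2(X))

A = [[1.0, 1.0], [1.0, 2.0]]
expA = mat_func(A, math.exp)
logexpA = mat_func(expA, math.log)        # should recover A
print(all(abs(logexpA[i][j] - A[i][j]) < 1e-9
          for i in range(2) for j in range(2)))  # True
```

GrassmannMatrixFunction applies the same decomposition, with Grassmann-valued eigenvalues and eigenvectors in place of the scalars here.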
GrassmannMatrixFunction
We have already seen in Chapter 9 how to calculate a function of a Grassmann number by using
GrassmannFunction. We use GrassmannFunction and the formula above to construct
GrassmannMatrixFunction for calculating functions of matrices.
? GrassmannMatrixFunction
GrassmannMatrixFunction[f[A]] calculates a function f of a Grassmann matrix A with distinct eigenvalues, where f is a pure function. GrassmannMatrixFunction[{fx, x}, A] calculates a function fx of a Grassmann matrix A with distinct eigenvalues, where fx is a formula with x as variable.
It should be remarked that, just as functions of a Grassmann number will have different forms in
spaces of different dimension, so too will functions of a Grassmann matrix.
expA = GrassmannMatrixFunction[Exp[A]]
{{E + E x + (1/2) E (-3 + E^2) x ∧ z, (1/2) E (-1 + E^2) x ∧ z},
 {-E + E^3 + (1/2) E (-3 + E^2) x - (E z)/2 - (E^3 z)/2 + 3 E x ∧ z,
  E^3 - E^3 z + (1/2) (E + E^3) x ∧ z}}
We then calculate the logarithm of this exponential and see that it is A as we expect.
GrassmannMatrixFunction[Log[expA]]
{{1 + x, x ∧ z}, {2, 3 - z}}
Trigonometric functions
We compute the square of the sine and the square of the cosine of A:
Symbolic matrices
B = {{a + x, x ∧ z}, {b, c - z}}; MatrixForm[B]
( a + x    x ∧ z )
( b        c - z )
GrassmannMatrixFunction[Exp[B]]
{{E^a (1 + x) + (b (-E^a + a E^a - c E^a + E^c) x ∧ z)/(a - c)^2, ((E^a - E^c) x ∧ z)/(a - c)},
 {(1/(a - c)^3) (b ((a - c) (-c E^a + c E^c - E^a x - c E^a x + E^c x -
      E^a z + E^c z - c E^c z + a (E^a - E^c + E^a x + E^c z)) +
    (1 + b) (-2 E^a - c E^a + 2 E^c - c E^c + a (E^a + E^c)) x ∧ z)),
  -E^c (-1 + z) + (b (E^a - E^c - a E^c + c E^c) x ∧ z)/(a - c)^2}}
Symbolic functions
Suppose we wish to compute the symbolic function f of the matrix A.
A // MatrixForm
( 1 + x    x ∧ z )
( 2        3 - z )
If we try to compute this we get an incorrect result because the derivatives evaluated at various
scalar values are not recognized by GrassmannSimplify as scalars. Here we print out just three
lines of the result.
Short[GrassmannMatrixFunction[f[A]], 3]
{{<<1>>}, {-f[1] + f[3] + (1/2) x ∧ f[1] - (1/2) x ∧ f[3] + <<22>> +
   x ∧ z (2 f'[1] + (5/2) x f'[1] + z f'[1] + (1/2) f'[3] + x f'[3] + 2 z f'[3]),
  f[3] + <<9>> + x ∧ z ((1/2) x f'[1] + (1/2) f'[3] + z f'[3])}}
We see from this that the derivatives in question are the zeroth and first, f[_] and f'[_]. We therefore declare these patterns to be scalars, so that the derivatives evaluated at any scalar arguments will be considered by GrassmannSimplify as scalars.

DeclareExtraScalars[{f[_], f'[_]}]
fA = GrassmannMatrixFunction[f[A]]
{{f[1] + x f'[1] - (1/2) x ∧ z (f[1] - f[3] + 2 f'[1]), -(1/2) (f[1] - f[3]) x ∧ z},
 {-f[1] - (1/2) x f[1] - (1/2) z f[1] + f[3] + (1/2) x f[3] + (1/2) z f[3] -
   x f'[1] - z f'[3] + (3/2) x ∧ z (f[1] - f[3] + f'[1] + f'[3]),
  f[3] - z f'[3] + (1/2) x ∧ z (f[1] - f[3] + 2 f'[3])}}
The function f may be replaced by any specific function. We replace it here by Exp and check that
the result is the same as that given above.
13.11 Supermatrices
To be completed.
A1 Guide to GrassmannAlgebra
To be completed.
A2 A Brief Biography of
Grassmann
Hermann Günther Grassmann was born in 1809 in Stettin, a town in Pomerania a short distance
inland from the Baltic. His father Justus Günther Grassmann taught mathematics and physical
science at the Stettin Gymnasium. Hermann was no child prodigy. His father used to say that he
would be happy if Hermann became a craftsman or a gardener.
In 1827 Grassmann entered the University of Berlin with the intention of studying theology. As his
studies progressed he became more and more interested in studying philosophy. At no time whilst
a student in Berlin was he known to attend a mathematics lecture.
Grassmann was however only 23 when he made his first important geometric discovery: a method
of adding and multiplying lines. This method was to become the foundation of his
Ausdehnungslehre (extension theory). His own account of this discovery is given below.
Grassmann was interested ultimately in a university post. In order to improve his academic
standing in science and mathematics he composed in 1839 a work (over 200 pages) on the study of
tides entitled Theorie der Ebbe und Flut. This work contained the first presentation of a system of
spatial analysis based on vectors, including vector addition and subtraction, vector differentiation,
and the elements of the linear vector function, all developed for the first time. His examiners failed
to see its importance.
Around Easter of 1842 Grassmann began to turn his full energies to the composition of his first
'Ausdehnungslehre', and by the autumn of 1843 he had finished writing it. The following is an
excerpt from the foreword in which he describes how he made his seminal discovery. The
translation is by Lloyd Kannenberg (Grassmann 1844).
The initial incentive was provided by the consideration of negatives in geometry; I was used to regarding the
displacements AB and BA as opposite magnitudes. From this it follows that if A, B, C are points of a straight
line, then AB + BC = AC is always true, whether AB and BC are directed similarly or oppositely, that is even
if C lies between A and B. In the latter case AB and BC are not interpreted merely as lengths, but rather their
directions are simultaneously retained as well, according to which they are precisely oppositely oriented.
Thus the distinction was drawn between the sum of lengths and the sum of such displacements in which the
directions were taken into account. From this there followed the demand to establish this latter concept of a
sum, not only for the case that the displacements were similarly or oppositely directed, but also for all other
cases. This can most easily be accomplished if the law AB + BC = AC is imposed even when A, B, C do not
lie on a single straight line.
Thus the first step was taken toward an analysis that subsequently led to the new branch of mathematics
presented here. However, I did not then recognize the rich and fruitful domain I had reached; rather, that
result seemed scarcely worthy of note until it was combined with a related idea.
While I was pursuing the concept of product in geometry as it had been established by my father, I concluded
that not only rectangles but also parallelograms in general may be regarded as products of an adjacent pair of
their sides, provided one again interprets the product, not as the product of their lengths, but as that of the
two displacements with their directions taken into account. When I combined this concept of the product
with that previously established for the sum, the most striking harmony resulted; thus whether I multiplied
the sum (in the sense just given) of two displacements by a third displacement lying in the same plane, or the
individual terms by the same displacement and added the products with due regard for their positive and
negative values, the same result obtained, and must always obtain.
This harmony did indeed enable me to perceive that a completely new domain had thus been disclosed, one
that could lead to important results. Yet this idea remained dormant for some time since the demands of my
job led me to other tasks; also, I was initially perplexed by the remarkable result that, although the laws of
ordinary multiplication, including the relation of multiplication to addition, remained valid for this new type
of product, one could only interchange factors if one simultaneously changed the sign (i.e. changed + into −
and vice versa).
As with his earlier work on tides, the importance of this work was ignored. Since few copies were
sold, most ended by being used as waste paper by the publisher. The failure to find acceptance for
Grassmann's ideas was probably due to two main reasons. The first was that Grassmann was just a
simple schoolteacher, and had none of the academic charisma that other contemporaries, like
Hamilton for example, had. History seems to suggest that the acceptance of radical discoveries
often depends more on the discoverer than the discovery.
The second reason is that Grassmann adopted the format and the approach of the modern
mathematician. He introduced and developed his mathematical structure axiomatically and
abstractly. The abstract nature of the work, initially devoid of geometric or physical significance,
was just too new and formal for the mathematicians of the day and they all seemed to find it too
difficult. More fully than any earlier mathematician, Grassmann seems to have understood the
associative, commutative and distributive laws; yet still, great mathematicians like Möbius found it
unreadable, and Hamilton was led to write to De Morgan that to be able to read Grassmann he
'would have to learn to smoke'.
In the year of publication of the Ausdehnungslehre (1844) the Jablonowski Society of Leipzig
offered a prize for the creation of a mathematical system fulfilling the idea that Leibniz had
sketched in 1679. Grassmann entered with 'Die Geometrische Analyse geknüpft an die von Leibniz erfundene geometrische Charakteristik', and was awarded the prize. Yet as with the Ausdehnungslehre it was
subsequently received with almost total silence.
However, in the few years following, three of Grassmann's contemporaries were forced to take
notice of his work because of priority questions. In 1845 Saint-Venant published a paper in which
he developed vector sums and products essentially identical to those already occurring in
Grassmann's earlier works (Barré 1845). In 1853 Cauchy published his method of 'algebraic keys'
for solving sets of linear equations (Cauchy 1853). Algebraic keys behaved identically to
Grassmann's units under the exterior product. In the same year Saint-Venant published an
interpretation of the algebraic keys geometrically and in terms of determinants (Barré 1853). Since
such were fundamental to Grassmann's already published work he wrote a reply for Crelle's
Journal in 1855 entitled 'Sur les différents genres de multiplication' in which he claimed priority
over Cauchy and Saint-Venant and published some new results (Grassmann 1855).
It was not until 1853 that Hamilton heard of the Ausdehnungslehre. He set to reading it and soon
after wrote to De Morgan.
I have recently been reading … more than a hundred pages of Grassmann's Ausdehnungslehre, with great
admiration and interest …. If I could hope to be put in rivalship with Des Cartes on the one hand and with
Grassmann on the other, my scientific ambition would be fulfilled.
During the period 1844 to 1862 Grassmann published seventeen scientific papers, including
important papers in physics, and a number of mathematics and language textbooks. He edited a
political paper for a time and published materials on the evangelization of China. This, on top of a
heavy teaching load and the raising of a large family. However, this same period saw only few
mathematicians ~ Hamilton, Cauchy, Möbius, Saint-Venant, Bellavitis and Cremona ~ having
any acquaintance with, or appreciation of, his work.
This apparently was a mistake, for the reception accorded this new work was as quiet as that
accorded the first, although it contained many new results including a solution to Pfaff's problem.
Friedrich Engel (the editor of Grassmann's collected works) comments: 'As in the first
Ausdehnungslehre so in the second: matters which Grassmann had published in it were later
independently rediscovered by others, and only much later was it realized that Grassmann had
published them earlier' (Engel 1896).
Thus Grassmann's works were almost totally neglected for forty-five years after his first discovery.
In the second half of the 1860s recognition slowly started to dawn on his contemporaries, among
them Hankel, Clebsch, Schlegel, Klein, Noth, Sylvester, Clifford and Gibbs. Gibbs discovered
Grassmann's works in 1877 (the year of Grassmann's death), and Clifford discovered them in depth
about the same time. Both became quite enthusiastic about Grassmann's new mathematics.
Grassmann's activities after 1862 continued to be many and diverse. His contribution to philology
rivals his contribution to mathematics. In 1849 he had begun a study of Sanskrit and in 1870
published his Wörterbuch zum Rig-Veda, a work of 1784 pages, and his translation of the Rig-Veda,
a work of 1123 pages, both still in use today. In addition he published on mathematics, languages,
botany, music and religion. In 1876 he was made a member of the American Oriental Society, and
received an honorary doctorate from the University of Tübingen.
On 26 September 1877 Hermann Grassmann died, departing from a world only just beginning to
recognize the brilliance of the mathematical creations of one of its most outstanding eclectics.
A3 Notation
To be completed.
Operations
∧    exterior product operation
¯    complement operation
| |  measure
=    defined equal to
==   equal to
≡    congruence
Elements
a, b, c, …    scalars
x, y, z, …    1-elements or vectors
α, β, γ, …    m-elements (the underscript m giving the grade)
X, Y, Z, …    Grassmann numbers
ν1, ν2, …    position vectors
P1, P2, …    points
L1, L2, …    lines
Π1, Π2, …    planes
Declarations
¯!n declare a linear or vector space of n dimensions
¯"n declare an n-space comprising an origin point and a vector space of n dimensions
Special objects
1 unit scalar
1 unit n-element
Spaces
L0    linear space of scalars or field of scalars
Basis elements
ei    basis 1-element or covariant basis element
e̅i    cobasis element of ei (similarly for basis m-elements)
A4 Glossary
To be completed.
Ausdehnungslehre
The term Ausdehnungslehre is variously translated as 'extension theory', 'theory of extension', or 'calculus of extension'. It refers to Grassmann's original work and other early work in the same notational and conceptual tradition.

Bivector
A bivector is a sum of simple bivectors.

Bound vector
A bound vector B is the exterior product of a point and a vector: B == P ∧ x. It may also always be expressed as the exterior product of two points.

Bound bivector
A bound bivector B is the exterior product of a point P and a bivector W: B == P ∧ W.

Cofactor
The cofactor of a minor M of a matrix A is the signed determinant formed from the rows and columns of A which are not in M. The sign may be determined from (-1)^(Σ(ri + ci)), where ri and ci are the row and column numbers.
Direction
A direction is the space of a vector and is therefore the set of all vectors parallel to a given vector.

Displacement
A displacement is a physical interpretation of a vector. It may also be viewed as the difference of two points.

Force
A force is a physical entity represented by a bound vector. This differs from common usage, in which a force is represented by a vector. For reasons discussed in the text, common usage does not provide a satisfactory model.

Force vector
A force vector is the vector of the bound vector representing the force.
Geometric entities
Points, lines, planes, … are geometric entities. Each is defined as the space of a geometrically interpreted element.
A point is a geometric 1-entity.
A line is a geometric 2-entity.
A plane is a geometric 3-entity.

Geometric interpretations
Points, weighted points, vectors, bound vectors, bivectors, … are geometric interpretations of m-elements.

Grade
The grade of an m-element is m.
The grade of a simple m-element is the number of 1-element factors in it.
The grade of the exterior product of an m-element and a k-element is m+k.
The grade of the regressive product of an m-element and a k-element is m+k-n.
The grade of the complement of an m-element is n-m.
The grade of the interior product of an m-element and a k-element is m-k.
The grade of a scalar is zero.
(The dimension n is the dimension of the underlying linear space.)

GrassmannAlgebra
The concatenated italicized term GrassmannAlgebra refers to the Mathematica software package which accompanies this book.

A Grassmann algebra
A Grassmann algebra is the direct sum of an underlying linear space L1, its field L0, and the exterior linear spaces Lm (2 ≤ m ≤ n):
L0 ⊕ L1 ⊕ L2 ⊕ … ⊕ Lm ⊕ … ⊕ Ln
Intersection
An intersection of two simple elements is any of the congruent elements defined by the intersection of their spaces.
Ï Line
A line is the space of a bound vector. Thus a line consists of all the (perhaps weighted)
points on it and all the vectors parallel to it.
Ï Linear space
A linear space is a mathematical structure defined by a standard set of axioms. It is often
referred to simply as a 'space'.
Ï Minor
A minor of a matrix A is the determinant (or sometimes matrix) of degree (or order) k
formed from A by selecting the elements at the intersection of k distinct columns and k
distinct rows.
Ï m-direction
An m-direction is the space of a simple m-vector. It is also therefore the set of all vectors
parallel to a given simple m-vector.
Ï m-element
An m-element is a sum of simple m-elements.
Ï m-plane
An m-plane is the space of a bound simple m-vector. Thus an m-plane consists of all the
(perhaps weighted) points on it and all the vectors parallel to it.
Ï m-vector
An m-vector is a sum of simple m-vectors.
Ï n-algebra
The term n-algebra is an alias for the phrase Grassmann algebra with an underlying
linear space of n dimensions.
Ï Origin
The origin 𝒪 is the geometric interpretation of a specific 1-element as a reference point.
Ï Plane
A plane is the space of a bound simple bivector. Thus a plane consists of all the (perhaps
weighted) points on it and all the vectors parallel to it.
Ï Point
A point P is the sum of the origin 𝒪 and a vector x: P = 𝒪 + x.
Ï Point mass
A point mass is a physical interpretation of a weighted point.
Ï Physical entities
Point masses, displacements, velocities, forces, moments, angular velocities, … are
physical entities. Each is represented by a geometrically interpreted element.
A point mass, displacement or velocity is a physical 1-entity.
A force, moment or angular velocity is a physical 2-entity.
Ï Physical representations
Points, weighted points, vectors, bound vectors, bivectors, … are geometric
representations of physical entities such as point masses, displacements, velocities,
forces, moments.
Physical entities are represented by geometrically interpreted elements.
Ï Scalar
A scalar is an element of the field L₀ of the underlying linear space L₁.
A scalar is of grade zero.
Ï Screw
A screw is a geometrically interpreted 2-element in a three-dimensional (physical) space
(four-dimensional linear space) in which the bivector is orthogonal to the vector of the
bound vector.
The bivector is necessarily simple since the vector subspace is three-dimensional.
Ï Simple element
A simple element is an element which may be expressed as the exterior product of 1-
elements.
Ï Simple bivector
A simple bivector V is the exterior product of two vectors: V = x∧y.
Ï Simple m-element
A simple m-element is the exterior product of m 1-elements.
Ï Simple m-vector
A simple m-vector is the exterior product of m vectors.
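In components, the simple bivector x∧y has as its coefficients on the basis bivectors eᵢ∧eⱼ the 2-by-2 minors xᵢyⱼ − xⱼyᵢ. A minimal Python sketch (illustrative only, not the GrassmannAlgebra package):

```python
# Hedged sketch: the components of the simple bivector x ∧ y on the
# basis bivectors e_i ∧ e_j (i < j) are the minors x_i*y_j - x_j*y_i.
from itertools import combinations

def wedge2(x, y):
    """Components of x ∧ y indexed by the pairs (i, j), i < j."""
    return {(i, j): x[i] * y[j] - x[j] * y[i]
            for i, j in combinations(range(len(x)), 2)}

x, y = [1, 2, 3], [4, 5, 6]
# x ∧ x = 0, reflecting the antisymmetry of the exterior product:
print(wedge2(x, x))  # {(0, 1): 0, (0, 2): 0, (1, 2): 0}
print(wedge2(x, y))  # {(0, 1): -3, (0, 2): -6, (1, 2): -3}
```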
Ï 2-direction
A 2-direction is the space of a simple bivector. It is therefore the set of all vectors parallel
to a given simple bivector.
Ï Union
A union of two simple elements is any of the congruent elements which define the union
of their spaces.
Ï Vector
A vector is a geometric interpretation of a 1-element.
Ï Vector space
A vector space is a linear space in which the elements are interpreted as vectors.
Ï Weighted point
A weighted point is a scalar multiple a of a point P: a P.
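Points and weighted points add by adding their weights and their vector parts, so the sum of two unit points is a weighted point of weight 2 located at their midpoint. A minimal Python sketch of this (illustrative only, not the GrassmannAlgebra package; the (weight, vector) pair representation is an assumption for this example):

```python
# Hedged sketch: a geometrically interpreted 1-element modelled as a
# pair (weight, coordinates).  A point P = Origin + x has weight 1; a
# weighted point a*P scales both the weight and the vector part.

def point(x):
    """The point P = Origin + x, as a (weight, coordinates) pair."""
    return (1, x)

def scale(a, p):
    """The weighted point a*P."""
    w, x = p
    return (a * w, [a * xi for xi in x])

def add(p, q):
    """Sum of two weighted points: weights and vector parts add."""
    (w1, x1), (w2, x2) = p, q
    return (w1 + w2, [a + b for a, b in zip(x1, x2)])

# The sum of two unit points is a weight-2 point at their midpoint:
P, Q = point([0, 0]), point([2, 4])
w, x = add(P, Q)
print(w, [xi / w for xi in x])  # 2 [1.0, 2.0]
```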
A5 Bibliography
To be completed.
Bibliography
Note
This bibliography is specifically directed at covering the algebraic implications of
Grassmann's work. It is outside the scope of this book to cover any applications of
Grassmann's work to calculus (for example, the theory of exterior differential forms), nor
to the discussion of other algebraic systems (for example Clifford algebra) except where
they clearly intersect with Grassmann's algebraic concepts. Of course, a result which may
be expressed by an author in one system may well find expression in others.
Ì Armstrong H L 1959
'On an alternative definition of the vector product in n-dimensional vector analysis'
Matrix and Tensor Quarterly, IX no 4, pp 107-110.
The author proposes a definition equivalent to the complement of the exterior product of
n−1 vectors.
Ì Ball R S 1876
The Theory of Screws: A Study in the Dynamics of a Rigid Body
Dublin
See the note on Hyde (1888).
Ì Ball R S 1900
A Treatise on the Theory of Screws
Cambridge University Press (1900). Reprinted 1998.
A classical work on the theory of screws, containing an annotated bibliography in which
Ball refers to the Ausdehnungslehre of 1862: 'This remarkable work … contains much
that is of instruction and interest in connection with the present theory … . Here we have
a very general theory which includes screw coordinates as a special case.' Ball does not
use Grassmann's methods in his treatise.
Ì Barton H 1927
'A Modern Presentation of Grassmann's Tensor Analysis'
American Journal of Mathematics, XLIX, pp 598-614.
This paper covers similar ground to that of Moore (1926).
Ì Bourbaki N 1948
Algèbre
Actualités Scientifiques et Industrielles No.1044, Paris
Chapter III treats multilinear algebra.
Ì Brand L 1947
Vector and Tensor Analysis
Wiley, New York.
Contains a chapter on motor algebra which, according to the author in his preface '… is
apparently destined to play an important role in mechanics as well as in line geometry'.
There is also a chapter on quaternions.
Ì Buchheim A 1884-1886
'On the Theory of Screws in Elliptic Space'
Proceedings of the London Mathematical Society, xiv (1884) pp 83-98, xvi (1885) pp
15-27, xvii (1886) pp 240-254, xvii p 88.
The author writes 'My special object is to show that the Ausdehnungslehre supplies all
the necessary materials for a calculus of screws in elliptic space. Clifford was apparently
led to construct his theory of biquaternions by the want of such a calculus, but
Grassmann's method seems to afford a simpler and more natural means of expression
than biquaternions.' (xiv, p 90) Later he extends this theory to '… all kinds of space.' (xvi,
p 15)
Ì Burali-Forti C 1897
Introduction à la Géométrie Différentielle suivant la Méthode de H. Grassmann
Gauthier-Villars, Paris
This work covers both algebra and differential geometry in the tradition of the Peano
approach to the Ausdehnungslehre.
Mostly a treatise on standard vector analysis, but it does contain an appendix (pp
176-198) on the methods of the Ausdehnungslehre and some interesting historical notes
on the vector calculi and their notations. The authors use the wedge ∧ to denote Gibbs'
cross product ×, and use the symbol initially introduced by Grassmann to denote the scalar
or inner product.
Ì Cartan É 1922
Leçons sur les Invariants Intégraux
Hermann, Paris
In this work Cartan develops the theory of exterior differential forms, remarking that he
has called them 'extérieures' since they obey Grassmann's rules of 'multiplication
extérieures'.
Ì Cartan É 1938
Leçons sur la Théorie des Spineurs
Hermann, Paris
In the introduction Cartan writes "One of the principal aims of this work is to develop the
theory of spinors systematically by giving a purely geometrical definition of these
mathematical entities: because of this geometrical origin, the matrices used by physicists
in quantum mechanics appear of their own accord, and we can grasp the profound origin
of the property, possessed by Clifford algebras, of representing rotations in space having
any number of dimensions". Contains a short section on multivectors and complements
(the French term is supplément).
Ì Cartan É 1946
Leçons sur la Géométrie des Espaces de Riemann
Gauthier-Villars, Paris
This is a classic text (originating in 1925) on the application of the exterior calculus to
differential geometry. Cartan begins with a chapter on exterior algebra and its expression
in tensor index notation. He uses square brackets to denote the exterior product (as did
Grassmann), and a wedge to denote the cross product.
Ì Carvallo M E 1892
'La Méthode de Grassmann'
Nouvelles Annales de Mathématiques, serie 3 XI, pp 8-37.
An exposition of some of Grassmann's methods applied to three dimensional geometry
following the approach of Peano. It does not treat the interior product.
Ì Chevalley C 1955
The Construction and Study of Certain Important Algebras
Mathematical Society of Japan, Tokyo.
Lectures given at the University of Tokyo on graded, tensor, Clifford and exterior
algebras.
Ì Clifford W K 1873
'Preliminary Sketch of Biquaternions'
Proceedings of the London Mathematical Society, IV, nos 64 and 65, pp 381-395
This paper includes an interesting discussion of the geometric nature of mechanical
quantities. Clifford adopts the term 'rotor' for the bound vector, and 'motor' for the general
sum of rotors. By analogy with the quaternion as a quotient of vectors he defines the
biquaternion as a quotient of motors.
Ì Clifford W K 1878
'Applications of Grassmann's Extensive Algebra'
American Journal of Mathematics Pure and Applied, I, pp 350-358
In this paper Clifford lays the foundations for general Clifford algebras.
Ì Clifford W K 1882
Mathematical Papers
Reprinted by Chelsea Publishing Co, New York (1968).
Of particular interest in addition to his two published papers above are the otherwise
unpublished notes:
'Notes on Biquaternions' (~1873)
'Further Note on Biquaternions' (1876)
'On the Classification of Geometric Algebras' (1876)
'On the Theory of Screws in a Space of Constant Positive Curvature' (1876).
Ì Coffin J G 1909
Vector Analysis
Wiley, New York.
This is the second English text in the Gibbs-Heaviside tradition. It contains an appendix
comparing the various notations in use at the time, including his view of the
Grassmannian notation.
Ì Collins J V 1899-1900
'An elementary Exposition of Grassmann's Ausdehnungslehre or Theory of Extension'
American Mathematical Monthly, 6 (1899) pp 193-198, 261-266, 297-301; 7 (1900) pp
31-35, 163-166, 181-187, 207-214, 253-258.
This work follows in summary form the Ausdehnungslehre of 1862 as regards general
theory but differs in its discussion of applications. It includes applications to geometry
and brief applications to linear equations, mechanics and logic.
Ì Coolidge J L 1940
'Grassmann's Calculus of Extension' in
A History of Geometrical Methods
Oxford University Press, pp 252-257
This brief treatment of Grassmann's work is characterized by its lack of clarity. The
author variously describes an exterior product as 'essentially a matrix' and as 'a vector
perpendicular to the factors' (p 254). And confusion arises between Grassmann's matrix
and the division of two exterior products (p 256).
Ì Cox H 1882
'On the application of Quaternions and Grassmann's Ausdehnungslehre to different kinds
of Uniform Space'
Cambridge Philosophical Transactions, XIII part II, pp 69-143.
The author shows that the exterior product is the multiplication required to describe non-
metric geometry, for 'it involves no ideas of distance' (p 115). He then discusses exterior,
regressive and interior products, applying them to geometry, systems of forces, and linear
complexes, using the notation of 1844. In other papers Cox applies the
Ausdehnungslehre to non-Euclidean geometry (1873) and to the properties of circles
(1890).
Ì Dibag I 1974
'Factorization in Exterior Algebras'
Journal of Algebra, 30, pp 259-262
The author develops necessary and sufficient conditions for an m-element to have a
certain number of 1-element factors. He also shows that an (n−2)-element in an odd
dimensional space always has a 1-element factor.
Ì Drew T B 1961
Handbook of Vector and Polyadic Analysis
Reinhold, New York
Ì Fehr H 1899
Application de la Méthode Vectorielle de Grassmann à la Géométrie Infinitésimale
Georges Carré, Paris
Comprises an initial chapter on exterior algebra as well as standard differential geometry.
Ì Fleming W H 1965
Functions of Several Variables
Addison-Wesley
Contains a chapter on exterior algebra.
Ì Forder H G 1941
The Calculus of Extension
Cambridge
This text is one of the most recent devoted to an exposition of Grassmann's methods.
It is an extensive work (490 pages) largely using Grassmann's notations and relying
primarily on the Ausdehnungslehre of 1862. Its application is particularly to geometry
including many examples well illustrating the power of the methods, a chapter on forces,
screws and linear complexes, and a treatment of the algebra of circles.
Ì Gibbs J W 1886
'On multiple algebra'
Address to the American Association for the Advancement of Science
In Collected Works, Gibbs 1928, vol 2.
This paper is probably the most authoritative historical comparison of the different
'vectorial' algebras of the time. Gibbs was obviously very enthusiastic about the
Ausdehnungslehre, and shows himself here to be one of Grassmann's greatest proponents.
Ì Gibbs J W 1891
'Quaternions and the Ausdehnungslehre'
Nature, 44, pp 79-82. Also in Collected Works, Gibbs 1928.
Gibbs compares Hamilton's Quaternions with Grassmann's Ausdehnungslehre and
concludes that '... Grassmann's system is of indefinitely greater extension ...'. Here he also
concludes that to Grassmann must be attributed the discovery of matrices. Gibbs
published a further three papers in Nature (also in Collected Works, Gibbs 1928) on the
relationship between quaternions and vector analysis, providing an enlightening insight
into the quaternion-vector analysis controversy of the time.
Ì Gibbs J W 1928
The Collected Works of J. Willard Gibbs Ph.D. LL.D.
Two volumes. Longmans, New York.
In part 2 of Volume 2 is reprinted Gibbs' only personal work on vector analysis: Elements
of Vector Analysis, Arranged for the Use of Students of Physics (1881-1884). This was
not published elsewhere.
Ì Grassmann H G 1844
Die lineale Ausdehnungslehre: ein neuer Zweig der Mathematik
Leipzig.
The full title is Die lineale Ausdehnungslehre: ein neuer Zweig der Mathematik
dargestellt und durch Anwendungen auf die übrigen Zweige der Mathematik, wie auch
auf die Statik, Mechanik, die Lehre vom Magnetismus und die Krystallonomie erläutert.
This first book on Grassmann's new mathematics is known shortly as Die
Ausdehnungslehre von 1844. It develops the theory of exterior multiplication and division
and regressive exterior multiplication. It does not treat complements or interior products
in the way of the Ausdehnungslehre of 1862. The original Die Ausdehnungslehre von
1844 was republished in 1878. The best source to this work is Volume 1 of Grassmann's
collected works (1896), of which an English translation has been made by Lloyd C.
Kannenberg (1995).
Ì Grassmann H G 1855
'Sur les différents genres de multiplication'
Crelle's Journal, 49, pp 123-141
This paper was written to claim priority over Cauchy for a method of solving linear
equations.
Ì Grassmann H G 1862
Die Ausdehnungslehre. Vollständig und in strenger Form
Berlin.
This is Grassmann's second attempt to publish his new discoveries in book form. It
adopts a substantially different approach to the Ausdehnungslehre of 1844, relying more
on the theorem-proof approach. The work comprises two main parts: the first on the
exterior algebra and the second on the theory of functions. The first part includes chapters
on addition and subtraction, products in general, combinatorial products (exterior and
regressive), the interior product, and applications to geometry. This is probably
Grassmann's most important work. The best source is Volume 1 of Grassmann's collected
works (1896), of which an English translation has been made by Lloyd C. Kannenberg
(2000). In the collected works edition, the editor Friedrich Engel has appended extensive
notes and comments, which Kannenberg has also translated.
Ì Grassmann H G 1878
'Verwendung der Ausdehnungslehre fur die allgemeine Theorie der Polaren und den
Zusammenhang algebraischer Gebilde'
Crelle's Journal, 84, pp 273-283.
This is Grassmann's last paper. It contains, among other material, his most complete
discussion on the notion of 'simplicity'.
Ì Grassmann H G 1896-1911
Hermann Grassmanns Gesammelte Mathematische und Physikalische Werke
Teubner, Leipzig. Volume 1 (1896), Volume 2 (1902, 1904), Volume 3 (1911).
Grassmann's complete collected works appeared between 1896 and 1911 under the
editorship of Friedrich Engel and with the collaboration of Jakob Lüroth, Eduard Study,
Justus Grassmann, Hermann Grassmann jr. and Georg Scheffers. The following is a
summary of their contents.
Volume 1
Die lineale Ausdehnungslehre: ein neuer Zweig der Mathematik (1844)
Geometrische Analyse: geknüpft an die von Leibniz erfundene geometrische
Charakteristik (1847)
Die Ausdehnungslehre. Vollständig und in strenger Form (1862)
Volume 2
Papers on geometry, analysis, mechanics and physics
Volume 3
Theorie der Ebbe und Flut (1840)
Further papers on mathematical physics.
Parts of Volume 1 (Die lineale Ausdehnungslehre (1844) and Geometrische Analyse
(1847)) together with selected papers on mathematics and physics have been translated
into English by Lloyd C Kannenberg and published as A New Branch of Mathematics
(1995). Geometrische Analyse is Grassmann's prize-winning essay fulfilling Leibniz'
search for an algebra of geometry. The remainder of Volume 1 (Die Ausdehnungslehre
(1862)) has been translated into English by Lloyd C Kannenberg and published as
Extension Theory (2000).
Volume 2 comprises papers on geometry, analysis, analytical mechanics and
mathematical physics plus two texts, one on arithmetic and the other on trigonometry.
Volume 3 comprises Grassmann's earliest major work (Theorie der Ebbe und Flut
(1840)) and further papers, particularly on wave theory. The Theory of Tides begins to
apply Grassmann's new approach to vector analysis.
A comprehensive treatment in three books using the methods of the elder H. Grassmann.
Ì Greenberg M J 1976
'Element of area via exterior algebra'
American Mathematical Monthly, 83, pp 274-275
The author suggests that the treatment of elements of area by using the exterior product
would be a more satisfactory treatment than that normally given in calculus texts.
Ì Greub W H 1967
Multilinear Algebra
Springer-Verlag, Berlin.
Contains chapters on exterior algebra.
Ì Gurevich G B 1964
Foundations of the Theory of Algebraic Invariants
Noordhoff, Groningen
Contains a chapter on m-vectors (called here polyvectors) with an extensive treatment of
the conditions for divisibility of an m-vector by one or more vectors (pp 354-395).
Ì Hamilton W R 1853
Lectures on Quaternions
Dublin
The first English text on quaternions. The introduction briefly mentions Grassmann.
Ì Hamilton W R 1866
Elements of Quaternions
Dublin
The editor Charles Joly in his preface remarks in relation to the quaternion's associativity
that "For example, Grassmann's multiplication is sometimes associative, but sometimes it
is not." The exterior product is of course associative. However, here it seems Joly may
be suffering from a confusion caused by Grassmann's notation for expressions in which
both exterior and regressive products appear.
The second edition of 1899 has been reprinted in 1969 by Chelsea Publishing Company.
Ì Hardy A S 1895
Elements of Quaternions
Ginn and Company, Boston
The author claims this to be an introduction to quaternions at an elementary level.
Ì Heath A E 1917
'Hermann Grassmann 1809-1877'
The Monist, 27, pp 1-21
'The Neglect of the Work of H. Grassmann'
The Monist, 27, pp 22-35
'The Geometric Analysis of Hermann Grassmann and its connection with Leibniz's
characteristic'
The Monist, 27, pp 36-56
Ì Hestenes D 1966
Space-Time Algebra
Gordon and Breach
This work is a seminal exposition of Clifford algebra emphasising the geometric nature
of the quantities and operations involved. The author writes "… ideas of Grassmann are
used to motivate the construction of Clifford algebra and to provide a geometric
interpretation of Clifford numbers. This is to be contrasted with other treatments of
Clifford algebra which are for the most part formal algebra. By insisting on Grassmann's
geometric viewpoint, we are led to look upon the Dirac algebra with new eyes." (p 2)
Ì Hestenes D 1968
'Multivector Calculus'
Journal of Mathematical Analysis and Applications, 24, pp 313-325
In the words of the author: "The object of this paper is to show how differential and
integral calculus in many dimensions can be greatly simplified using Clifford algebra."
Ì Hodge W V D 1952
Theory and Applications of Harmonic Integrals
Cambridge
The "star" operator defined by Hodge is of similar nature to Grassmann's complement
operator.
Ì Hunt K H 1970
Screw systems in Spatial Kinematics (Screw Systems Surveyed and Applied to Jointed
Rigid Bodies)
Report MMERS 3, Department of Mechanical Engineering, Monash University, Clayton,
Australia
Although in this report the author has intentionally confined himself to well known
methods of pure and analytical geometry, he is a strong proponent of the screw as being
the natural language for investigating spatial mechanisms.
Ì Hyde E W 1884
'Calculus of Direction and Position'
American Journal of Mathematics, VI, pp 1-13
In this paper the author compares quaternions to the methods of the Ausdehnungslehre
and concludes that Grassmann's system is far preferable as a system of directed quantities.
Ì Hyde E W 1888
'Geometric Division of Non-congruent Quantities'
Annals of Mathematics, 4, pp 9-18
This paper deals with the concept of exterior division more extensively than did
Grassmann.
Ì Hyde E W 1888
'The Directional Theory of Screws'
Annals of Mathematics, 4, pp 137-155
This paper is an account of the theory of screws using the Ausdehnungslehre. Hyde
claims that "A screw evidently belongs thoroughly to the realm of the Directional
Calculus and will not be easily or naturally treated by Cartesian methods; and Ball's
treatment is throughout essentially Cartesian in its nature." Here he is referring to The
Theory of Screws: A Study in the Dynamics of a Rigid Body (1876). In A Treatise on the
Theory of Screws (1900) Ball comments: "Prof. Hyde proves by his [sic] calculus many
of the fundamental theorems in the present theory in a very concise manner" (p 531).
Ì Hyde E W 1890
The Directional Calculus based upon the Methods of Hermann Grassmann
Ginn and Company, Boston
The author discusses geometric applications in 2 and 3 dimensions including screws and
complements of bound elements (for example, points, lines and planes).
Ì Hyde E W 1906
Grassmann's Space Analysis
Wiley, New York
In the words of the author: "This little book is an attempt to present simply and concisely
the principles of the "Extensive Analysis" as fully developed in the comprehensive
treatises of Hermann Grassmann, restricting the treatment however to the geometry of
two and three dimensional space."
Ì Jahnke E 1905
Vorlesungen über die Vektorenrechnung mit Anwendungen auf Geometrie, Mechanik und
Mathematische Physik.
Teubner, Leipzig
This work is full of examples of application of the Ausdehnungslehre (notation of 1862)
to geometry, mechanics and physics. Only two and three dimensional problems are
considered.
Ì Lasker E 1896
'An Essay on the Geometrical Calculus'
Proceedings of the London Mathematical Society, XXVIII, pp 217-260
This work differs from most of the other papers of this era on the geometrical
applications of the Ausdehnungslehre by concentrating on a space of arbitrary dimension
rather than two or three.
Ì Lewis G N 1910
'On four-dimensional vector calculus and its application in electrical theory'
Proceedings of the American Academy of Arts and Sciences, XLVI, pp 165-181
A specialization of the Ausdehnungslehre to four dimensions and its applications to
electromagnetism in Minkowskian terms. The author introduces the new concepts with
the minimum of explanation: for example, the anti-symmetric properties of bivectors and
trivectors are justified as conventions! (p 167).
Ì Lotze A 1922
Die Grundgleichungen der Mechanik insbesondere Starrer Körper
Teubner, Leipzig
This short monograph is one of the rare works addressing mechanics using the methods
of the Ausdehnungslehre. It treats the dynamics of systems of particles and the kinematics
and dynamics of rigid bodies.
Ì Macfarlane A 1904
Bibliography of Quaternions and Allied Systems of Mathematics
Dublin
Published for the International Association for Promoting the Study of Quaternions and
Allied Systems of Mathematics, this bibliography together with supplements to 1913
contains about 2500 articles including many on the Ausdehnungslehre and vector analysis.
Ì Marcus M 1966
'The Cauchy-Schwarz inequality in the exterior algebra'
Quarterly Journal of Mathematics, 17, pp 61-63
The author shows that a classical inequality for positive definite hermitian matrices is a
special case of the Cauchy-Schwarz inequality in the appropriate exterior algebra.
Ì Marcus M 1975
Finite Dimensional Multilinear Algebra
Marcel Dekker, New York
Part II contains chapters on Grassmann and Clifford algebras.
Ì Mehmke R 1913
Vorlesungen über Punkt- und Vektorenrechnung (2 volumes)
Teubner, Leipzig
Volume 1 (394 pages) deals with the analysis of bound elements (points, lines and
planes) and projective geometry.
Ì Milne E A 1948
Vectorial Mechanics
Methuen, London
An exposition of three-dimensional vectorial (and tensorial) mechanics using 'invariant'
notation. Milne is one of the rare authors who realises that physical forces and the linear
momenta of particles are better modelled by line vectors (bound vectors) rather than by
the usual vector algebra. He treats systems of line vectors (bound vectors) as vector pairs.
Ì Moore C L E 1926
'Grassmannian Geometry in Riemannian Space'
Journal of Mathematics and Physics, 5, pp 191-200
This paper treats the complement, and the exterior, regressive and interior products in a
Riemannian space in tensor index notation using the alternating tensors and the
generalised Kronecker symbol. This classic use of tensor notation does not enhance the
readability of the exposition.
Ì Murnaghan F D 1925
'The Generalised Kronecker Symbol and its Application to the Theory of Determinants'
American Mathematical Monthly, 32, pp 233-241
The generalised Kronecker symbol is essentially the generalisation to an exterior product
space of the usual Kronecker symbol.
Ì Murnaghan F D 1925
'The Tensor Character of the Generalised Kronecker Symbol'
Bulletin of the American Mathematical Society, 31, pp 323-329
The author states "It will be readily recognised that there is an intimate connection here
with Grassmann's Ausdehnungslehre, and we believe, in fact, that a systematic exposition
of this theory with the aid of the generalised Kronecker symbol would help to make it
more widely understood".
Ì Peano G 1895
'Essay on Geometrical Calculus'
in Selected Works of Giuseppe Peano, Chapter XV, pp 169-188
Allen and Unwin, London
The essay is translated from 'Saggio di calcolo geometrico' Atti, Accad. Sci. Torino, 31,
(1895-6) pp 952-975. Peano claims to have understood Grassmann's ideas by
reconstructing them himself. His geometric ideas come through with clarity,
substantiating his claim. Peano's principal exposition of Grassmann's work was in
Calcolo geometrico secondo l'Ausdehnungslehre di H. Grassmann, Turin (1888), of which
a translation of selected passages appears on pp 90-100 of the above Selected Works.
Ì Peano G 1901
Formulaire de Mathematiques
Gauthier-Villars, Paris
A compendium of axioms and results. The last part (pp 192-209) is devoted to point and
vector spaces and includes some interesting historical comments.
Ì Pedoe D 1967
'On a geometrical theorem in exterior algebra'
Canadian Journal of Mathematics, 19, pp 1187-1191
The author remarks 'This paper owes its inspiration to the remarkable book by H. G.
Forder The Calculus of Extension … Forder introduces many concepts which I find
difficult to bring down to earth. But the methods developed in his book are powerful
ones, and it is evident that much work can usefully be done in simplifying and
interpreting some of the concepts he uses'. He does not mention Grassmann.
Ì Saddler W 1927
'Apolar triads on a cubic curve'
Proceedings of the London Mathematical Society, Series 2, 26, pp 249-256
'Apolar tetrads on the Grassmann quartic surface'
Journal of the London Mathematical Society, 2, pp 185-189
Exterior algebra applied to geometric construction.
Ì Sain M 1976
'The Growing Algebraic Presence in Systems Engineering: An Introduction'
Proceedings of the IEEE, 64, p 96
A modern algebraic discussion culminating in the definition of the exterior algebra and
its relationship to the theory of determinants and some systems theoretical applications.
Ì Schouten J A 1951
Tensor Analysis for Physicists
Oxford
Contains a chapter on m-vectors from a tensor-analytic viewpoint.
Ì Schweitzer A R 1950
Bulletin of the American Mathematical Society, 56
'Grassmann's extensive algebra and modern number theory' (Part I, p 355; Part II, p 458)
'On the place of the algebraic equation in Grassmann's extensive algebra' (p 459)
'On the derivation of the regressive product in Grassmann's geometrical calculus' (p 463)
'A metric generalisation of Grassmann's geometric calculus' (p 464)
Résumés only of these papers are printed.
Ì Scott R F 1880
A Treatise on the Theory of Determinants
Cambridge, London
The author states in the preface that 'The principal novelty in the treatise lies in its
systematic use of Grassmann's alternate units, by means of which the study of
determinants is, I believe, much simplified.'
Ì Shepard G C 1966
Vector Spaces of Finite Dimension
Oliver and Boyd, London
Chapter IV contains an introduction to exterior products via tensors and multilinear
algebra.
Ì Thomas J M 1962
Systems and Roots
W Byrd Press
The author uses exterior algebraic concepts in some of his network analysis.
Ì Tonti E 1972
Accademia Nazionale dei Lincei, Serie VIII, Volume LII
'On the Mathematical Structure of a Large Class of Physical Theories' (Fasc.1, p 48)
'A Mathematical Model for Physical Theories' (Fasc.2-3, p 176, 351)
These papers begin the author's investigations into the structure of physical theories, in
which he considers the geometrical calculus to play an important part.
Ì Tonti E 1975
On the Formal Structure of Physical Theories
Report of the Instituto di Matematica del Politecnico di Milano
In this report the author constructs a classification scheme for physical quantities and the
equations of physical theories. The mathematical structures needed for this, which are
reviewed in the report, are algebraic topology, exterior algebra, exterior differential
forms, and Clifford algebra. The author shows that the underlying structure of physical
theories is fundamentally capable of a geometric interpretation.
Ì Whitehead A N 1898
A Treatise on Universal Algebra (Volume 1)
Cambridge
No further volumes appeared. This is probably the best and most complete exposition of
Grassmann's works in English (586 pages). The author recreates many of Grassmann's
results, in many cases extending and clarifying them with original contributions.
Whitehead considers non-Euclidean metrics and spaces of arbitrary dimension. However,
like Grassmann, he does not distinguish between n-elements and scalars.
Ì Willmore T J 1959
Introduction to Differential Geometry
Oxford
Includes a brief discussion of exterior algebra and its application to differential geometry
(p 189).
Ì Wilson E B 1901
Vector Analysis
New York
The first formally published book entirely devoted to presenting the Gibbs-Heaviside
system of vector analysis based on Gibbs' lectures and Heaviside's papers in the
Electrician in 1893. (Wilson uses the term 'bivector', but by it means a vector with real
and imaginary parts.)
Ì Ziwet A 1885-6
'A Brief Account of H. Grassmann's Geometrical Theories'
Annals of Mathematics, 2, (1885 pp 1-11; 1886 pp 25-34)
In the words of the author: 'It is the object of the present paper to give in the simplest
form possible, a succinct account of Grassmann's mathematical theories and methods in
their application to plane geometry'. (Follows in the main Schlegel's System der
Raumlehre.)
Die Ausdehnungslehre von 1862, fully titled: Die Ausdehnungslehre. Vollständig und in strenger
Form is perhaps Grassmann's most important mathematical work. It comprises two main parts: the
first devoted basically to the Ausdehnungslehre (212 pages) and the second to the theory of
functions (155 pages). The Collected Works edition contains 98 pages of notes and comments. The
discussion on the Ausdehnungslehre includes chapters on addition and subtraction, products in
general, progressive and regressive products, interior products, and applications to geometry. A
Cartesian metric is assumed.
Both versions of Grassmann's Ausdehnungslehre have been translated into English by Lloyd C Kannenberg.
The 1844 version is published as A New Branch of Mathematics: The Ausdehnungslehre of 1844
and Other Works, Open Court 1995. The translation contains Die Ausdehnungslehre von 1844,
Geometrische Analyse, selected papers on mathematics and physics, a bibliography of Grassmann's
principal works, and extensive editorial notes. The 1862 version is published as Extension Theory.
It contains work on both the theory of extension and the theory of functions. Particularly useful are
the editorial and supplementary notes.
Apart from these translations, probably the best and most complete exposition on the
Ausdehnungslehre in English is in Alfred North Whitehead's A Treatise on Universal Algebra
(Whitehead 1898). Whitehead saw Grassmann's work as one of the foundation stones on which he
hoped to build an algebraic theory which united the several important and new mathematical
systems which emerged during the nineteenth century: the algebra of symbolic logic,
Grassmann's theory of extension, quaternions, matrices and the general theory of linear algebras.
The second most complete exposition of the Ausdehnungslehre is Henry George Forder's The
Theory of Extension (Forder 1941). Forder's interest is mainly in the geometric applications of the
theory of extension.
The only other books on Grassmann in English are those by Edward Wyllys Hyde, The Directional
Calculus (Hyde 1890) and Grassmann's Space Analysis (Hyde 1906). They treat the theory of
extension in two and three-dimensional geometric contexts and include some applications to
statics. Several topics such as Hyde's treatment of screws are original contributions.
The seminal papers on Clifford algebra are by William Kingdon Clifford and can be found in his
collected works Mathematical Papers (Clifford 1882), republished in a facsimile edition by
Chelsea.
Fortunately for those interested in the evolution of the emerging 'geometric algebras', The
International Association for Promoting the Study of Quaternions and Allied Systems of
Mathematics published a bibliography (Macfarlane 1913) which, together with supplements to
1913, contains about 2500 articles. It therefore most likely contains all the works on the
Ausdehnungslehre and related subjects up to 1913.
The only other recent text devoted specifically to Grassmann algebra (to the author's knowledge as
of 2001) is Arno Zaddach's Grassmanns Algebra in der Geometrie, BI-Wissenschaftsverlag,
(Zaddach 1994).
Index
To be completed.