
Grassmann Algebra

Exploring extended vector algebra with Mathematica

Incomplete draft Version 0.50 September 2009

John Browne

Quantica Publishing, Melbourne, Australia

2009 9 3
Grassmann Algebra Book.nb 2


Table of Contents

1 Introduction
1.1 Background
The mathematical representation of physical entities
The central concept of the Ausdehnungslehre
Comparison with the vector and tensor algebras
Algebraicizing the notion of linear dependence
Grassmann algebra as a geometric calculus

1.2 The Exterior Product


The anti-symmetry of the exterior product
Exterior products of vectors in a three-dimensional space
Terminology: elements and entities
The grade of an element
Interchanging the order of the factors in an exterior product
A brief summary of the properties of the exterior product

1.3 The Regressive Product


The regressive product as a dual product to the exterior product
Unions and intersections of spaces
A brief summary of the properties of the regressive product
The Common Factor Axiom
The intersection of two bivectors in a three dimensional space

1.4 Geometric Interpretations


Points and vectors
Sums and differences of points
Determining a mass-centre
Lines and planes
The intersection of two lines

1.5 The Complement


The complement as a correspondence between spaces
The Euclidean complement
The complement of a complement
The Complement Axiom

1.6 The Interior Product


The definition of the interior product
Inner products and scalar products
Sequential interior products
Orthogonality
Measure and magnitude
Calculating interior products
Expanding interior products
The interior product of a bivector and a vector
The cross product

1.7 Exploring Screw Algebra


To be completed

1.8 Exploring Mechanics


To be completed

1.9 Exploring Grassmann Algebras


To be completed

1.10 Exploring the Generalized Product


To be completed

1.11 Exploring Hypercomplex Algebras


To be completed

1.12 Exploring Clifford Algebras


To be completed

1.13 Exploring Grassmann Matrix Algebras


To be completed

1.14 The Various Types of Linear Product


Introduction
Case 1: The product of an element with itself is zero
Case 2: The product of an element with itself is non-zero
Examples

1.15 Terminology
Terminology

1.16 Summary
To be completed

2 The Exterior Product


2.1 Introduction

2.2 The Exterior Product


Basic properties of the exterior product
Declaring scalar and vector symbols in GrassmannAlgebra
Entering exterior products

2.3 Exterior Linear Spaces


Composing m-elements
Composing elements automatically
Spaces and congruence
The associativity of the exterior product
Transforming exterior products


2.4 Axioms for Exterior Linear Spaces


Summary of axioms
Grassmann algebras
On the nature of scalar multiplication
Factoring scalars
Grassmann expressions
Calculating the grade of a Grassmann expression

2.5 Bases
Bases for exterior linear spaces
Declaring a basis in GrassmannAlgebra
Composing bases of exterior linear spaces
Composing palettes of basis elements
Standard ordering
Indexing basis elements of exterior linear spaces

2.6 Cobases
Definition of a cobasis
The cobasis of unity
Composing palettes of cobasis elements
The cobasis of a cobasis

2.7 Determinants
Determinants from exterior products
Properties of determinants
The Laplace expansion technique
Calculating determinants

2.8 Cofactors
Cofactors from exterior products
The Laplace expansion in cofactor form
Exploring the calculation of determinants using minors and cofactors
Transformations of cobases
Exploring transformations of cobases

2.9 Solution of Linear Equations


Grassmann's approach to solving linear equations
Example solution: 3 equations in 4 unknowns
Example solution: 4 equations in 4 unknowns

2.10 Simplicity
The concept of simplicity
All (n-1)-elements are simple
Conditions for simplicity of a 2-element in a 4-space
Conditions for simplicity of a 2-element in a 5-space

2.11 Exterior division


The definition of an exterior quotient
Division by a 1-element
Division by a k-element
Automating the division process


2.12 Multilinear forms


The span of a simple element
Composing spans
Example: Refactorizations
Multilinear forms
Defining m:k-forms
Composing m:k-forms
Expanding and simplifying m:k-forms
Developing invariant forms
Properties of m:k-forms
The complete span of a simple element

2.13 Unions and intersections


Union and intersection as a multilinear form
Where the intersection is evident
Where the intersection is not evident
Intersection with a non-simple element
Factorizing simple elements

2.14 Summary

3 The Regressive Product


3.1 Introduction

3.2 Duality
The notion of duality
Examples: Obtaining the dual of an axiom
Summary: The duality transformation algorithm

3.3 Properties of the Regressive Product


Axioms for the regressive product
The unit n-element
The inverse of an n-element
Grassmann's notation for the regressive product

3.4 The Duality Principle


The dual of a dual
The Grassmann Duality Principle
Using the GrassmannAlgebra function Dual

3.5 The Common Factor Axiom


Motivation
The Common Factor Axiom
Extension of the Common Factor Axiom to general elements
Special cases of the Common Factor Axiom
Dual versions of the Common Factor Axiom
Application of the Common Factor Axiom
When the common factor is not simple


3.6 The Common Factor Theorem


Development of the Common Factor Theorem
Proof of the Common Factor Theorem
The A and B forms of the Common Factor Theorem
Example: The decomposition of a 1-element
Example: Applying the Common Factor Theorem
Automating the application of the Common Factor Theorem

3.7 The Regressive Product of Simple Elements


The regressive product of simple elements
The regressive product of (n-1)-elements
Regressive products leading to scalar results
Expressing an element in terms of another basis
Exploration: The cobasis form of the Common Factor Axiom
Exploration: The regressive product of cobasis elements

3.8 Factorization of Simple Elements


Factorization using the regressive product
Factorizing elements expressed in terms of basis elements
The factorization algorithm
Factorization of (n-1)-elements
Factorizing simple m-elements
Factorizing contingently simple m-elements
Determining if an element is simple

3.9 Product Formulas for Regressive Products


The Product Formula
Deriving Product Formulas
Deriving Product Formulas automatically
Computing the General Product Formula
Comparing the two forms of the Product Formula
The invariance of the General Product Formula
Alternative forms for the General Product Formula
The Decomposition Formula
Exploration: Dual forms of the General Product Formulas

3.10 Summary

4 Geometric Interpretations
4.1 Introduction

4.2 Geometrically Interpreted 1-elements


Vectors
Points
Declaring a basis for a bound vector space
Composing vectors and points
Example: Calculation of the centre of mass


4.3 Geometrically Interpreted 2-elements


Simple geometrically interpreted 2-elements
Bivectors
Bound vectors
Composing bivectors and bound vectors
The sum of two parallel bound vectors
The sum of two non-parallel bound vectors
Sums of bound vectors
Example: Reducing a sum of bound vectors

4.4 Geometrically Interpreted m-Elements


Types of geometrically interpreted m-elements
The m-vector
The bound m-vector
Bound simple m-vectors expressed by points
Bound simple bivectors
Composing m-vectors and bound m-vectors

4.5 Geometrically Interpreted Spaces


Vector and point spaces
Coordinate spaces
Geometric dependence
Geometric duality

4.6 m-planes
m-planes defined by points
m-planes defined by m-vectors
m-planes as exterior quotients
Computing exterior quotients
The m-vector of a bound m-vector

4.7 Line Coordinates


Lines in a plane
Lines in a 3-plane
Lines in a 4-plane
Lines in an m-plane

4.8 Plane Coordinates


Planes in a 3-plane
Planes in a 4-plane
Planes in an m-plane
The coordinates of geometric entities

4.9 Calculation of Intersections


The intersection of two lines in a plane
The intersection of a line and a plane in a 3-plane
The intersection of two planes in a 3-plane
Example: The osculating plane to a curve

4.10 Decomposition into Components


The shadow
Decomposition in a 2-space
Decomposition in a 3-space
Decomposition in a 4-space
Decomposition of a point or vector in an n-space

4.11 Projective Space


The intersection of two lines in a plane
The line at infinity in a plane
Projective 3-space
Homogeneous coordinates
Duality
Desargues' theorem
Pappus' theorem
Projective n-space

4.12 Regions of Space


Regions of space
Regions of a plane
Regions of a line
Planar regions defined by two lines
Planar regions defined by three lines
Creating a pentagonal region
Creating a 5-star region
Creating a 5-star pyramid
Summary

4.13 Geometric Constructions


Geometric expressions
Geometric equations for lines and planes
The geometric equation of a conic section in the plane
The geometric equation as a prescription to construct
The algebraic equation of a conic section in the plane
An alternative geometric equation of a conic section in the plane
Conic sections through five points
Dual constructions
Constructing conics in space
A geometric equation for a cubic in the plane
Pascal's Theorem
Pascal lines

4.14 Summary

5 The Complement
5.1 Introduction

5.2 Axioms for the Complement


The grade of a complement
The linearity of the complement operation
The complement axiom
The complement of a complement axiom
The complement of unity

5.3 Defining the Complement


The complement of an m-element
The complement of a basis m-element
Defining the complement of a basis 1-element
Constraints on the value of !
Choosing the value of !
Defining the complements in matrix form

5.4 The Euclidean Complement


Tabulating Euclidean complements of basis elements
Formulae for the Euclidean complement of basis elements
Products leading to a scalar or n-element

5.5 Complementary Interlude


Alternative forms for complements
Orthogonality
Visualizing the complement axiom
The regressive product in terms of complements
Glimpses of the inner product

5.6 The Complement of a Complement


The complement of a complement axiom
The complement of a cobasis element
The complement of the complement of a basis 1-element
The complement of the complement of a basis m-element
The complement of the complement of an m-element
Idempotent complements

5.7 Working with Metrics


Working with metrics
The default metric
Declaring a metric
Declaring a general metric
Calculating induced metrics
The metric for a cobasis
Creating palettes of induced metrics

5.8 Calculating Complements


Entering a complement
Creating palettes of complements of basis elements
Converting complements of basis elements
Simplifying expressions involving complements
Converting expressions involving complements to specified forms
Converting regressive products of basis elements in a metric space

5.9 Complements in a vector space


The Euclidean complement in a vector 2-space
The non-Euclidean complement in a vector 2-space
The Euclidean complement in a vector 3-space
The non-Euclidean complement in a vector 3-space

5.10 Complements in a bound space


Metrics in a bound space
The complement of an m-vector
Products of vectorial elements in a bound space
The complement of an element bound through the origin
The complement of the complement of an m-vector
Calculating with vector space complements

5.11 Complements of bound elements


The Euclidean complement of a point in the plane
The Euclidean complement of a point in a point 3-space
The complement of a bound element
Euclidean complements of bound elements
The regressive product of point complements

5.12 Reciprocal Bases


Reciprocal bases
The complement of a basis element
The complement of a cobasis element
The complement of a complement of a basis element
The exterior product of basis elements
The regressive product of basis elements
The complement of a simple element is simple

5.13 Summary

6 The Interior Product


6.1 Introduction

6.2 Defining the Interior Product


Definition of the inner product
Definition of the interior product
Implications of the regressive product axioms
Orthogonality
Example: The interior product of a simple bivector and a vector

6.3 Properties of the Interior Product


Implications of the Complement Axiom
Extended interior products
Converting interior products
Example: Orthogonalizing a set of 1-elements

6.4 The Interior Common Factor Theorem


The Interior Common Factor Formula
The Interior Common Factor Theorem
Examples of the Interior Common Factor Theorem
The computational form of the Interior Common Factor Theorem

6.5 The Inner Product


Implications of the Common Factor Axiom
The symmetry of the inner product
The inner product of complements
The inner product of simple elements
Calculating inner products
Inner products of basis elements

6.6 The Measure of an m-element


The definition of measure
Unit elements
Calculating measures
The measure of free elements
The measure of bound elements
Determining the multivector of a bound multivector

6.7 The Induced Metric Tensor


Calculating induced metric tensors
Using scalar products to construct induced metric tensors
Displaying induced metric tensors as a matrix of matrices

6.8 Product Formulae for Interior Products


The basic interior Product Formula
Deriving interior Product Formulas
Deriving interior Product Formulas automatically
Computable forms of interior Product Formulas
The invariance of interior Product Formulas
An alternative form for the interior Product Formula
The interior decomposition formula
Interior Product Formulas for 1-elements
Interior Product Formulas in terms of double sums

6.9 The Zero Interior Sum Theorem


The zero interior sum
Composing interior sums
The Gram-Schmidt process
Proving the Zero Interior Sum Theorem

6.10 The Cross Product


Defining a generalized cross product
Cross products involving 1-elements
Implications of the axioms for the cross product
The cross product as a universal product
Cross product formulae

6.11 The Triangle Formulae


Triangle components
The measure of the triangle components
Equivalent forms for the triangle components

6.12 Angle
Defining the angle between elements
The angle between a vector and a bivector
The angle between two bivectors
The volume of a parallelepiped

6.13 Projection
To be completed.

6.14 Interior Products of Interpreted Elements


To be completed.

6.15 The Closest Approach of Multiplanes


To be completed.

7 Exploring Screw Algebra


7.1 Introduction

7.2 A Canonical Form for a 2-Entity


The canonical form
Canonical forms in an n-plane
Creating 2-entities

7.3 The Complement of 2-Entity


Complements in an n-plane
The complement referred to the origin
The complement referred to a general point

7.4 The Screw


The definition of a screw
The unit screw
The pitch of a screw
The central axis of a screw
Orthogonal decomposition of a screw

7.5 The Algebra of Screws


To be completed

7.6 Computing with Screws


To be completed


8 Exploring Mechanics
8.1 Introduction

8.2 Force
Representing force
Systems of forces
Equilibrium
Force in a metric 3-plane

8.3 Momentum
The velocity of a particle
Representing momentum
The momentum of a system of particles
The momentum of a system of bodies
Linear momentum and the mass centre
Momentum in a metric 3-plane

8.4 Newton's Law


Rate of change of momentum
Newton's second law

8.5 The Angular Velocity of a Rigid Body


To be completed.

8.6 The Momentum of a Rigid Body


To be completed.

8.7 The Velocity of a Rigid Body


To be completed.

8.8 The Complementary Velocity of a Rigid Body


To be completed.

8.9 The Infinitesimal Displacement of a Rigid Body


To be completed.

8.10 Work, Power and Kinetic Energy


To be completed.

9 Grassmann Algebra
9.1 Introduction

9.2 Grassmann Numbers


Creating Grassmann numbers
Body and soul
Even and odd components
The grades of a Grassmann number
Working with complex scalars


9.3 Operations with Grassmann Numbers


The exterior product of Grassmann numbers
The regressive product of Grassmann numbers
The complement of a Grassmann number
The interior product of Grassmann numbers

9.4 Simplifying Grassmann Numbers


Elementary simplifying operations
Expanding products
Factoring scalars
Checking for zero terms
Reordering factors
Simplifying expressions

9.5 Powers of Grassmann Numbers


Direct computation of powers
Powers of even Grassmann numbers
Powers of odd Grassmann numbers
Computing positive powers of Grassmann numbers
Powers of Grassmann numbers with no body
The inverse of a Grassmann number
Integer powers of a Grassmann number
General powers of a Grassmann number

9.6 Solving Equations


Solving for unknown coefficients
Solving for an unknown Grassmann number

9.7 Exterior Division


Defining exterior quotients
Special cases of exterior division
The non-uniqueness of exterior division

9.8 Factorization of Grassmann Numbers


The non-uniqueness of factorization
Example: Factorizing a Grassmann number in 2-space
Example: Factorizing a 2-element in 3-space
Example: Factorizing a 3-element in 4-space

9.9 Functions of Grassmann Numbers


The Taylor series formula
The form of a function of a Grassmann number
Calculating functions of Grassmann numbers
Powers of Grassmann numbers
Exponential and logarithmic functions of Grassmann numbers
Trigonometric functions of Grassmann numbers
Functions of several Grassmann numbers

10 Exploring the Generalized Grassmann Product


10.1 Introduction

10.2 Defining the Generalized Product


Definition of the generalized product
Case λ = 0: Reduction to the exterior product
Case 0 < λ < Min[m, k]: Reduction to exterior and interior products
Case λ = Min[m, k]: Reduction to the interior product
Case Min[m, k] < λ < Max[m, k]: Reduction to zero
Case λ = Max[m, k]: Reduction to zero
Case λ > Max[m, k]: Undefined

10.3 The Symmetric Form of the Generalized Product


Expansion of the generalized product in terms of both factors
The quasi-commutativity of the generalized product
Expansion in terms of the other factor

10.4 Calculating with Generalized Products


Entering a generalized product
Reduction to interior products
Reduction to inner products
Example: Case Min[m, k] < λ < Max[m, k]: Reduction to zero

10.5 The Generalized Product Theorem


The A and B forms of a generalized product
Example: Verification of the Generalized Product Theorem
Verification that the B form may be expanded in terms of either factor

10.6 Products with Common Factors


Products with common factors
Congruent factors
Orthogonal factors
Generalized products of basis elements
Finding common factors

10.7 The Zero Interior Sum Theorem


The Zero Interior Sum theorem
Composing a zero interior sum

10.8 The Zero Generalized Sum


The zero generalized sum conjecture
Generating the zero generalized sum
Exploring the conjecture

10.9 Nilpotent Generalized Products


Nilpotent products of simple elements
Nilpotent products of non-simple elements

10.10 Properties of the Generalized Product


Summary of properties


10.11 The Triple Generalized Sum Conjecture


The generalized Grassmann product is not associative
The triple generalized sum
The triple generalized sum conjecture
Exploring the triple generalized sum conjecture
An algorithm to test the conjecture

10.12 Exploring Conjectures


A conjecture
Exploring the conjecture

10.13 The Generalized Product of Intersecting Elements


The case λ < p
The case λ ≥ p
The special case of λ = p

10.14 The Generalized Product of Orthogonal Elements


The generalized product of totally orthogonal elements
The generalized product of partially orthogonal elements

10.15 The Generalized Product of Intersecting Orthogonal Elements


The case λ < p
The case λ ≥ p

10.16 Generalized Products in Lower Dimensional Spaces


Generalized products in 0, 1, and 2-spaces
0-space
1-space
2-space

10.17 Generalized Products in 3-Space


To be completed

11 Exploring Hypercomplex Algebra


11.1 Introduction

11.2 Some Initial Definitions and Properties


The conjugate
Distributivity
The norm
Factorization of scalars
Multiplication by scalars
The real numbers ℝ

11.3 The Complex Numbers ℂ


Constraints on the hypercomplex signs
Complex numbers as Grassmann numbers under the hypercomplex product

11.4 The Hypercomplex Product in a 2-Space


Tabulating products in 2-space
The hypercomplex product of two 1-elements
The hypercomplex product of a 1-element and a 2-element
The hypercomplex square of a 2-element
The product table in terms of exterior and interior products

11.5 The Quaternions ℍ


The product table for orthonormal elements
Generating the quaternions
The norm of a quaternion
The Cayley-Dickson algebra

11.6 The Norm of a Grassmann number


The norm
The norm of a simple m-element
The skew-symmetry of products of elements of different grade
The norm of a Grassmann number in terms of hypercomplex products
The norm of a Grassmann number of simple components
The norm of a non-simple element

11.7 Products of two different elements of the same grade


The symmetrized sum of two m-elements
Symmetrized sums for elements of different grades
The body of a symmetrized sum
The soul of a symmetrized sum
Summary of results of this section

11.8 Octonions
To be completed

12 Exploring Clifford Algebra


12.1 Introduction

12.2 The Clifford Product


Definition of the Clifford product
Tabulating Clifford products
The grade of a Clifford product
Clifford products in terms of generalized products
Clifford products in terms of interior products
Clifford products in terms of inner products
Clifford products in terms of scalar products

12.3 The Reverse of an Exterior Product


Defining the reverse
Computing the reverse

12.4 Special Cases of Clifford Products


The Clifford product with scalars
The Clifford product of 1-elements
The Clifford product of an m-element and a 1-element
The Clifford product of an m-element and a 2-element
The Clifford product of two 2-elements
The Clifford product of two identical elements

12.5 Alternate Forms for the Clifford Product


Alternate expansions of the Clifford product
The Clifford product expressed by decomposition of the first factor
Alternative expression by decomposition of the first factor
The Clifford product expressed by decomposition of the second factor
The Clifford product expressed by decomposition of both factors

12.6 Writing Down a General Clifford Product


The form of a Clifford product expansion
A mnemonic way to write down a general Clifford product

12.7 The Clifford Product of Intersecting Elements


General formulae for intersecting elements
Special cases of intersecting elements

12.8 The Clifford Product of Orthogonal Elements


The Clifford product of totally orthogonal elements
The Clifford product of partially orthogonal elements
Testing the formulae

12.9 The Clifford Product of Intersecting Orthogonal Elements


Orthogonal union
Orthogonal intersection

12.10 Summary of Special Cases of Clifford Products


Arbitrary elements
Arbitrary and orthogonal elements
Orthogonal elements
Calculating with Clifford products

12.11 Associativity of the Clifford Product


Associativity of orthogonal elements
A mnemonic formula for products of orthogonal elements
Associativity of non-orthogonal elements
Testing the general associativity of the Clifford product

12.13 Clifford Algebra


Generating Clifford algebras
Real algebra
Clifford algebras of a 1-space

12.14 Clifford Algebras of a 2-Space


The Clifford product table in 2-space
Product tables in a 2-space with an orthogonal basis
Case 1: {e1 ∘ e1 → +1, e2 ∘ e2 → +1}
Case 2: {e1 ∘ e1 → +1, e2 ∘ e2 → -1}
Case 3: {e1 ∘ e1 → -1, e2 ∘ e2 → -1}


12.15 Clifford Algebras of a 3-Space


The Clifford product table in 3-space
Cℓ3: The Pauli algebra
Cℓ3+: The Quaternions
The Complex subalgebra
Biquaternions

12.16 Clifford Algebras of a 4-Space


The Clifford product table in 4-space
Cℓ4: The Clifford algebra of Euclidean 4-space
Cℓ4+: The even Clifford algebra of Euclidean 4-space
Cℓ1,3: The Dirac algebra
Cℓ0,4: The Clifford algebra of complex quaternions

12.17 Rotations
To be completed

13 Exploring Grassmann Matrix Algebra


13.1 Introduction

13.2 Creating Matrices


Creating general symbolic matrices
Creating matrices with specific coefficients
Creating matrices with variable elements

13.3 Manipulating Matrices


Matrix simplification
The grades of the elements of a matrix
Taking components of a matrix
Determining the type of elements

13.4 Matrix Algebra


Multiplication by scalars
Matrix addition
The complement of a matrix
Matrix products

13.5 Transpose and Determinant


Conditions on the transpose of an exterior product
Checking the transpose of a product
Conditions on the determinant of a general matrix
Determinants of even Grassmann matrices

13.6 Matrix Powers


Positive integer powers
Negative integer powers
Non-integer powers of matrices with distinct eigenvalues
Integer powers of matrices with distinct eigenvalues


13.7 Matrix Inverses


A formula for the matrix inverse
GrassmannMatrixInverse

13.8 Matrix Equations


Two types of matrix equations
Solving matrix equations

13.9 Matrix Eigensystems


Exterior eigensystems of Grassmann matrices
GrassmannMatrixEigensystem

13.10 Matrix Functions


Distinct eigenvalue matrix functions
GrassmannMatrixFunction
Exponentials and Logarithms
Trigonometric functions
Symbolic matrices
Symbolic functions

13.11 Supermatrices
To be completed

Appendices

Contents
A1 Guide to GrassmannAlgebra

A2 A Brief Biography of Grassmann

A3 Notation

A4 Glossary

A5 Bibliography
Bibliography
A note on sources to Grassmann's work

Index


Preface
The origins of this book

This book has grown out of an interest in Grassmann's work over the past three decades. There is
something fascinating about the beauty with which the mathematical structures Grassmann
discovered (invented, if you will) describe the physical world, and something also fascinating
about how these beautiful structures have been largely lost to the mainstreams of mathematics and
science.

Who was Grassmann?

Hermann Günther Grassmann was born in 1809 in Stettin, near the border of Germany and Poland.
He was only 23 when he discovered the method of adding and multiplying points and vectors
which was to become the foundation of his Ausdehnungslehre. In 1839 he composed a work on the
study of tides entitled Theorie der Ebbe und Flut, which was the first work ever to use vectorial
methods. In 1844 Grassmann published his first Ausdehnungslehre (Die lineale Ausdehnungslehre
ein neuer Zweig der Mathematik) and in the same year won a prize for an essay which expounded a
system satisfying an earlier search by Leibniz for an 'algebra of geometry'. Despite these
achievements, Grassmann received virtually no recognition.

In 1862 Grassmann re-expounded his ideas from a different viewpoint in a second
Ausdehnungslehre (Die Ausdehnungslehre. Vollständig und in strenger Form). Again the work
was met with resounding silence from the mathematical community, and it was not until the latter
part of his life that he received any significant recognition from his contemporaries. Of these, most
significant were J. Willard Gibbs who discovered his works in 1877 (the year of Grassmann's
death), and William Kingdon Clifford who discovered them in depth about the same time. Both
became quite enthusiastic about this new mathematics.

A more detailed biography of Grassmann may be found at the end of the book.

From the Ausdehnungslehre to GrassmannAlgebra

The term 'Ausdehnungslehre' is variously translated as 'extension theory', 'theory of extension',
or 'calculus of extension'. In this book we will use these terms to refer to Grassmann's original work
and to other early work in the same notational and conceptual tradition (particularly that of Edward
Wyllys Hyde, Henry George Forder and Alfred North Whitehead).

The term 'Exterior Calculus' will be reserved for the calculus of exterior differential forms,
originally developed by Elie Cartan from the Ausdehnungslehre. This is an area in which there are
many excellent texts, and which is outside the scope of this book.

The term 'Grassmann algebra' will be used to describe that body of algebraic theory and results
based on the Ausdehnungslehre, but extended to include more recent results and viewpoints. This
will be the basic focus of this book.

Finally, the term 'GrassmannAlgebra' will be used to refer to the Mathematica-based software
package which accompanies it.

2009 9 3
Grassmann Algebra Book.nb 23

Ï What is Grassmann algebra?

The intrinsic power of Grassmann algebra arises from its fundamental product operation, the
exterior product. The exterior product codifies the property of linear dependence, so essential for
modern applied mathematics, directly into the algebra. Simple non-zero elements of the algebra
may be viewed as representing constructs of linearly independent elements. For example, a simple
bivector is the exterior product of two vectors; a line is represented by the exterior product of two
points; a plane is represented by the exterior product of three points.

Ï The focus of this book

The primary focus of this book is to provide a readable account in modern notation of Grassmann's
major algebraic contributions to mathematics and science. I would like it to be accessible to
scientists and engineers, students and professionals alike. Consequently I have tried to avoid all
mathematical terminology which does not make an essential contribution to understanding the
basic concepts. The only assumption I have made about the reader's background is that they have
some familiarity with basic linear algebra.

The secondary concern of this book is to provide an environment for exploring applications of the
Grassmann algebra. For general applications in higher dimensional spaces, computations by hand
in any algebra become tedious, indeed limiting, thus restricting the hypotheses that can be
explored. For this reason the book is integrated with a package for exploring Grassmann algebra,
called GrassmannAlgebra. GrassmannAlgebra has been developed in Mathematica. You can read
the book without using the package, or you can use the package to extend the examples in the text,
experiment with hypotheses, or explore your own interests.

Ï The power of Mathematica

Mathematica is a powerful system for doing mathematics on a computer. It has an inbuilt
programming language ideal for extending its capabilities to other mathematical systems like
Grassmann algebra. It also has a sophisticated mathematical typesetting capability. This book uses
both. The chapters are typeset by Mathematica in its standard notebook format, making the book
interactive in its electronic version with the GrassmannAlgebra package. GrassmannAlgebra can
turn an exploration requiring days by hand into one requiring just minutes of computing time.

Ï How you can use this book

Chapter 1: Introduction provides a brief preparatory overview of the book, introducing the seminal
concepts of each chapter, and solidifying them with simple examples. This chapter is designed to
give you a global appreciation with which better to understand the detail of the chapters which
follow. However, it is independent of those chapters, and may be read as far as your interest takes
you.

Chapters 2 to 6: The Exterior Product, The Regressive Product, Geometric Interpretations, The
Complement, and The Interior Product develop the basic theory. These form the essential core for
a working knowledge of Grassmann algebra and for the explorations in subsequent chapters. They
should be read sequentially.

Chapters 7 to 13: Exploring Screw Algebra, Mechanics, Grassmann Algebra, The Generalized
Product, Hypercomplex Algebra, Clifford Algebra, and Grassmann Matrix Algebra, may be read
independently of each other, with the proviso that the chapters on hypercomplex and Clifford
algebra depend on the notion of the generalized product.

Some computational sections within the text use GrassmannAlgebra or Mathematica. Wherever
possible, explanation is provided so that the results are still intelligible without a knowledge of
Mathematica. Mathematica input/output dialogue is in indented Courier font, with the input in
bold. For example:

GrassmannSimplify[1 + x + x ∧ x]
1 + x

Any chapter or section whose title begins with Exploring … may be omitted on first reading
without unduly disturbing the flow of concept development. Exploratory chapters cover separate
applications. Exploratory sections treat topics that are: important for the justification of later
developments, but not essential for their understanding; of a specialized nature; or, results which
appear to be new and worth documenting.

Ï The GrassmannAlgebra website

As with most books these days, this book has its own website:
http://sites.google.com/site/grassmannalgebra
From this site you will be able to: email me, download a copy of the book or package, learn of any
updates to them, check for known errata or bugs, let me know of any errata or bugs you find, get
hints and FAQs for using the package, and (eventually) link to where you can find a hardcopy of the
book.

Ï Acknowledgements

Finally I would like to acknowledge those who were instrumental in the evolution of this book:
Cecil Pengilley who originally encouraged me to look at applying Grassmann's work to
engineering; the University of Melbourne, Xerox Corporation, the University of Rochester and
Swinburne University of Technology which sheltered me while I pursued this interest; Janet Blagg
who peerlessly edited the text; my family who had to put up with my writing, and finally to
Stephen Wolfram who created Mathematica and provided me with a visiting scholar's grant to
work at Wolfram Research Institute in Champaign where I began developing the
GrassmannAlgebra package.

Above all however, I must acknowledge Hermann Grassmann. His contribution to mathematics
and science puts him among the great thinkers of the nineteenth century.

I hope you enjoy exploring this beautiful mathematical system.

For I have every confidence that the effort I have applied to the science reported upon here, which has
occupied a considerable span of my lifetime and demanded the most intense exertions of my powers, is not to
be lost. … a time will come when it will be drawn forth from the dust of oblivion and the ideas laid down
here will bear fruit. … some day these ideas, even if in an altered form, will reappear and with the passage of
time will participate in a lively intellectual exchange. For truth is eternal, it is divine; and no phase in the
development of truth, however small the domain it embraces, can pass away without a trace. It remains even
if the garments in which feeble men clothe it fall into dust.

Hermann Grassmann
in the foreword to the Ausdehnungslehre of 1862
translated by Lloyd Kannenberg


1 Introduction

1.1 Background

The mathematical representation of physical entities


Three of the more important mathematical systems for representing the entities of contemporary
engineering and physical science are the (three-dimensional) vector algebra, the more general
tensor algebra, and geometric algebra. Grassmann algebra is more general than vector algebra,
overlaps aspects of the tensor algebra, and underpins geometric algebra. It predates all three. In this
book we will show that it is only via Grassmann algebra that many of the geometric and physical
entities commonly used in the engineering and physical sciences may be represented
mathematically in a way which correctly models their pertinent properties and leads
straightforwardly to principal results.

As a case in point we may take the concept of force. It is well known that a force is not
satisfactorily represented by a (free) vector, yet contemporary practice is still to use a (free) vector
calculus for this task. The deficiency may be made up for by verbal appendages to the
mathematical statements: for example 'where the force f acts along the line through the point P'.
Such verbal appendages, being necessary, and yet not part of the calculus being used, indicate that
the calculus itself is not adequate to model force satisfactorily. In practice this inadequacy is coped
with in terms of a (free) vector calculus by the introduction of the concept of moment. The
conditions of equilibrium of a rigid body include a condition on the sum of the moments of the
forces about any point. The justification for this condition is not well treated in contemporary texts.
It will be shown later however that by representing a force correctly in terms of an element of the
Grassmann algebra, both force-vector and moment conditions for the equilibrium of a rigid body
are natural consequences of the algebraic processes alone.

Since the application of Grassmann algebra to mechanics was known during the nineteenth century
one might wonder why, with the 'progress of science', it is not currently used. Indeed the same
question might be asked with respect to its application in many other fields. To attempt to answer
these questions, a brief biography of Grassmann is included as an appendix. In brief, the scientific
world was probably not ready in the nineteenth century for the new ideas that Grassmann
proposed, and now, in the twenty-first century, seems only just becoming aware of their potential.


The central concept of the Ausdehnungslehre


Grassmann's principal contribution to the physical sciences was his discovery of a natural language
of geometry from which he derived a geometric calculus of significant power. For a mathematical
representation of a physical phenomenon to be 'correct' it must be of a tensorial nature and since
many 'physical' tensors have direct geometric counterparts, a calculus applicable to geometry may
be expected to find application in the physical sciences.

The word 'Ausdehnungslehre' is most commonly translated as 'theory of extension', the
fundamental product operation of the theory then becoming known as the exterior product. The
notion of extension has its roots in the interpretation of the algebra in geometric terms: an element
of the algebra may be 'extended' to form a higher order element by its (exterior) product with
another, in the way that a point may be extended to a line, or a line to a plane by a point exterior to
it. The notion of exteriorness is equivalent algebraically to that of linear independence. If the
exterior product of elements of grade 1 (for example, points or vectors) is non-zero, then they are
independent.

A line may be defined by the exterior product of any two distinct points on it. Similarly, a plane
may be defined by the exterior product of any three points in it, and so on for higher dimensions.
This independence with respect to the specific points chosen is an important and fundamental
property of the exterior product. Each time a higher dimensional object is required it is simply
created out of a lower dimensional one by multiplying by a new element in a new dimension.
Intersections of elements are also obtainable as products.

Simple elements of the Grassmann algebra may be interpreted as defining subspaces of a linear
space. The exterior product then becomes the operation for building higher dimensional subspaces
(higher order elements) from a set of lower dimensional independent subspaces. A second product
operation called the regressive product may then be defined for determining the common lower
dimensional subspaces of a set of higher dimensional non-independent subspaces.

Comparison with the vector and tensor algebras


The Grassmann algebra is a tensorial algebra, that is, it concerns itself with the types of
mathematical entities and operations necessary to describe physical quantities in an invariant
manner. In fact, it has much in common with the algebra of anti-symmetric tensors, the exterior
product being equivalent to the anti-symmetric tensor product. Nevertheless, there are conceptual
and notational differences which make the Grassmann algebra richer and easier to use.

Rather than a sub-algebra of the tensor algebra, it is perhaps more meaningful to view the
Grassmann algebra as a super-algebra of the three-dimensional vector algebra since both
commonly use invariant (component free) notations. The principal differences are that the
Grassmann algebra has a dual axiomatic structure, can treat higher order elements than vectors, can
differentiate between points and vectors, generalizes the notion of 'cross product', is independent of
dimension, and possesses the structure of a true algebra.


Algebraicizing the notion of linear dependence


Another way of viewing Grassmann algebra is as linear or vector algebra onto which has been
introduced a product operation which algebraicizes the notion of linear dependence. This product
operation is called the exterior product and is symbolized with a wedge Ô.

If vectors x1, x2, x3, … are linearly dependent, then it turns out that their exterior product is zero:
x1 ∧ x2 ∧ x3 ∧ … = 0. If they are independent, their exterior product is non-zero.

Conversely, if the exterior product of vectors x1 , x2 , x3 , … is zero, then the vectors are linearly
dependent. Thus the exterior product brings the critical notion of linear dependence into the realm
of direct algebraic manipulation.
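This equivalence can be checked numerically: the components of x1 ∧ … ∧ xk in a basis are the k×k minors of the matrix whose rows are the vectors, so they all vanish exactly when the vectors are dependent. The following is a small illustrative Python sketch (not the GrassmannAlgebra package; the function name is mine):

```python
from itertools import combinations

def wedge_components(vectors):
    """Components of v1 ∧ ... ∧ vk in the basis {e_i1 ∧ ... ∧ e_ik}:
    the k×k minors of the matrix whose rows are the vectors."""
    k = len(vectors)
    n = len(vectors[0])

    def minor(rows, cols):
        # Laplace expansion along the first remaining row (fine for small k)
        if len(cols) == 1:
            return vectors[rows[0]][cols[0]]
        return sum((-1) ** j * vectors[rows[0]][c]
                   * minor(rows[1:], cols[:j] + cols[j + 1:])
                   for j, c in enumerate(cols))

    return {cols: minor(tuple(range(k)), cols) for cols in combinations(range(n), k)}

# x3 = x1 + x2 is dependent on x1 and x2, so x1 ∧ x2 ∧ x3 vanishes
x1, x2, x3 = [1, 0, 0], [0, 1, 0], [1, 1, 0]
print(all(c == 0 for c in wedge_components([x1, x2, x3]).values()))   # True
```

The same function applied to just x1 and x2 returns a non-zero component, confirming their independence.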

Although this might appear to be a relatively minor addition to linear algebra, we expect to
demonstrate in this book that nothing could be further from the truth: the consequences of being
able to model linear dependence with a product operation are far reaching, both in facilitating an
understanding of current results, and in the generation of new results for many of the algebras and
their entities used in science and engineering today. These include of course linear and multilinear
algebra, but also vector and tensor algebra, screw algebra, the complex numbers, quaternions,
octonions, Clifford algebra, and Pauli and Dirac algebra.

Grassmann algebra as a geometric calculus


Most importantly however, Grassmann's contribution has enabled the operations and entities of all
of these algebras to be interpretable geometrically, thus enabling us to bring to bear the power of
geometric visualization and intuition into our algebraic manipulations.

It is well known that a vector x1 may be interpreted geometrically as representing a direction in
space. If the space has a metric, then the magnitude of x1 is interpreted as its length. The
introduction of the exterior product enables us to extend the entities of the space to higher
dimensions. The exterior product of two vectors x1 ∧ x2, called a bivector, may be visualized as the
two-dimensional analogue of a direction, that is, a planar direction. Neither vectors nor bivectors
are interpreted as being located anywhere since they do not possess sufficient information to
specify independently both a direction and a position. If the space has a metric, then the magnitude
of x1 ∧ x2 is interpreted as its area, and similarly for higher order products.

Depicting a simple bivector. This bivector is located nowhere.

For applications to the physical world, however, the Grassmann algebra possesses a critical
capability that no other algebra possesses so directly: it can distinguish between points and vectors
and treat them as separate tensorial entities. Lines and planes are examples of higher order
constructs from points and vectors, which have both position and direction. A line can be
represented by the exterior product of any two points on it, or by any point on it with a vector
parallel to it.

Point∧vector depiction    Point∧point depiction


Two different depictions of a bound vector in its line.

A plane can be represented by the exterior product of any point on it with a bivector parallel to it,
any two points on it with a vector parallel to it, or any three points on it.

Point∧vector∧vector    Point∧point∧vector    Point∧point∧point


Three different depictions of a bound bivector.

Finally, it should be noted that the Grassmann algebra subsumes all of real algebra, the exterior
product reducing in this case to the usual product operation amongst real numbers.

Here then is a geometric calculus par excellence. We hope you enjoy exploring it.

1.2 The Exterior Product

The anti-symmetry of the exterior product


The exterior product of two vectors x and y of a linear space yields the bivector x ∧ y. The
bivector is not a vector, and so does not belong to the original linear space. In fact the bivectors
form their own linear space.

The fundamental defining characteristic of the exterior product is its anti-symmetry. That is, the
product changes sign if the order of the factors is reversed.


x ∧ y = -y ∧ x        1.1

From this we can easily show the equivalent relation, that the exterior product of a vector with
itself is zero.

x ∧ x = 0        1.2

This is as expected because x is linearly dependent on itself.

The exterior product is associative, distributive, and behaves linearly as expected with scalars.

Exterior products of vectors in a three-dimensional space


By way of example, suppose we are working in a three-dimensional space, with basis e1 , e2 , and
e3 . Then we can express vectors x and y as linear combinations of these basis vectors:

x = a1 e1 + a2 e2 + a3 e3

y = b1 e1 + b2 e2 + b3 e3

Here, the ai and bi are of course scalars. Taking the exterior product of x and y and multiplying
out the product allows us to express the bivector x ∧ y as a linear combination of basis bivectors.

x ∧ y = (a1 e1 + a2 e2 + a3 e3) ∧ (b1 e1 + b2 e2 + b3 e3)

x ∧ y = (a1 b1) e1 ∧ e1 + (a1 b2) e1 ∧ e2 + (a1 b3) e1 ∧ e3 +
        (a2 b1) e2 ∧ e1 + (a2 b2) e2 ∧ e2 + (a2 b3) e2 ∧ e3 +
        (a3 b1) e3 ∧ e1 + (a3 b2) e3 ∧ e2 + (a3 b3) e3 ∧ e3

The first simplification we can make is to set all basis bivectors of the form ei ∧ ei to zero. The
second simplification is to use the anti-symmetry of the product and collect the terms of the
bivectors which are not essentially different (that is, those that may differ only in the order of their
factors, and hence differ only by a sign). The product x ∧ y can then be written:

x ∧ y = (a1 b2 - a2 b1) e1 ∧ e2 + (a2 b3 - a3 b2) e2 ∧ e3 + (a3 b1 - a1 b3) e3 ∧ e1

The scalar factors appearing here are just those which would have appeared in the usual vector
cross product of x and y. However, there is an important difference. The exterior product
expression does not require the vector space to have a metric, while the cross product, because it
generates a vector orthogonal to x and y, necessarily assumes a metric. Furthermore, the exterior
product is valid for any number of vectors in spaces of arbitrary dimension, while the cross product
is necessarily confined to products of two vectors in a space of three dimensions.

For example, we may continue the product by multiplying x ∧ y by a third vector z.

z = c1 e1 + c2 e2 + c3 e3


x ∧ y ∧ z =
((a1 b2 - a2 b1) e1 ∧ e2 + (a2 b3 - a3 b2) e2 ∧ e3 + (a3 b1 - a1 b3) e3 ∧ e1) ∧
(c1 e1 + c2 e2 + c3 e3)

Adopting the same simplification procedures as before we obtain the trivector x ∧ y ∧ z expressed
in basis form.

x ∧ y ∧ z =
(a1 b2 c3 - a3 b2 c1 + a2 b3 c1 + a3 b1 c2 - a1 b3 c2 - a2 b1 c3) e1 ∧ e2 ∧ e3

A trivector is the exterior product of three vectors.

A trivector in a space of three dimensions has just one component. Its coefficient is the
determinant of the coefficients of the original three vectors. Clearly, if these three vectors had been
linearly dependent, this determinant would have been zero. In a metric space, this coefficient
would be proportional to the volume of the parallelepiped formed by the vectors x, y, and z. Hence
the geometric interpretation of the algebraic result: if x, y, and z are lying in a planar direction,
that is, they are dependent, then the volume of the parallelepiped defined is zero.
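These 3-space expansions are easy to verify numerically. The following Python sketch (illustrative only, not the GrassmannAlgebra package) computes the three bivector components from the expansion above and then the single trivector coefficient, which should equal the determinant of the coefficients:

```python
def bivector_3d(a, b):
    """Coefficients of e1∧e2, e2∧e3, e3∧e1 in the expansion of a ∧ b."""
    return (a[0] * b[1] - a[1] * b[0],
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2])

def trivector_3d(a, b, c):
    """Single e1∧e2∧e3 coefficient of a ∧ b ∧ c: the 3×3 determinant."""
    e12, e23, e31 = bivector_3d(a, b)
    # (e12 e1∧e2 + e23 e2∧e3 + e31 e3∧e1) ∧ (c1 e1 + c2 e2 + c3 e3)
    return e12 * c[2] + e23 * c[0] + e31 * c[1]

x, y, z = (1, 2, 3), (4, 5, 6), (7, 8, 10)
print(trivector_3d(x, y, z))           # -3: the determinant of the coefficients
print(trivector_3d(x, y, (5, 7, 9)))   # 0: (5,7,9) = x + y is dependent on x and y
```

The second call illustrates the geometric statement in the text: three dependent vectors span zero volume.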

We see here also that the exterior product begins to give geometric meaning to the often
inscrutable operations of the algebra of determinants. In fact we shall see that all the operations of
determinants are straightforward consequences of the properties of the exterior product.

In three-dimensional metric vector algebra, the vanishing of the scalar triple product of three
vectors is often used as a criterion of their linear dependence, whereas in fact the vanishing of their
exterior product (valid also in a non-metric space) would suffice. It is interesting to note that the
notation for the scalar triple product, or 'box' product, is Grassmann's original notation, viz [x y z].

Finally, we can see that the exterior product of more than three vectors in a three-dimensional
space will always be zero, since they must be dependent.


Terminology: elements and entities


To this point we have been referring to the elements of the space under discussion as vectors, and
their higher order constructs in three dimensions as bivectors and trivectors. In the general case we
will refer to the exterior product of an unspecified number of vectors as a multivector, and the
exterior product of m vectors as an m-vector.

The word 'vector' however, is in current practice used in two distinct ways. The first and traditional
use endows the vector with its well-known geometric properties of direction, sense, and (possibly)
magnitude. In the second and more recent, the term vector may refer to any element of a linear
space, even though the space is devoid of geometric context.

In this book, we adopt the traditional practice and use the term vector only when we intend it to
have its traditional geometric interpretation of a (free) arrow-like entity. When referring to an
element of a linear space which we are not specifically interpreting geometrically, we simply use
the term element. The exterior product of m 1-elements of a linear space will thus be referred to as
an m-element. (In the GrassmannAlgebra package however, we have had to depart somewhat from
this convention in the interests of common usage: symbols representing 1-elements are called
VectorSymbols).

In Grassmann algebra however, a further dimension arises. By introducing a new element called
the Origin into the basis of a vector space which has the interpretation of a point, we are able to
distinguish two types of 1-element: vectors and points. This simple distinction leads to a
sophisticated and powerful tableau of free (involving only vectors) and bound (involving points)
entities with which to model geometric and physical systems.

Science and engineering make use of mathematics by endowing its constructs with geometric or
physical interpretations. We will use the term entity to refer to an element which we specifically
wish to endow with a geometric or physical interpretation. For example we would say that
(geometric) points and vectors and (physical) positions and directions are 1-entities because they
are represented by 1-elements, while (geometric) bound vectors and bivectors and (physical) forces
and angular momenta are 2-entities because they are represented by 2-elements. Points, vectors,
bound vectors and bivectors are examples of geometric entities. Positions, directions, forces and
momenta are examples of physical entities.

Points, lines, planes and hyperplanes may also be conveniently considered for computational
purposes as geometric entities, or we may also define them in the more common way as a set of
points: the set of all points in the entity. (A 1-element is in an m-element if their exterior product is
zero.) We call such a set of points a geometric object. We interpret elements of a linear space
geometrically or physically, while we represent geometric or physical entities by elements of a
linear space.

A more complete summary of terminology is given at the end of this chapter.


The grade of an element


The exterior product of m 1-elements is called an m-element. The value m is called the grade of the
m-element. For example the element x ∧ y ∧ u ∧ v is of grade 4.

An m-element may be denoted by a symbol underscripted with the value m. For example:

α_4 = x ∧ y ∧ u ∧ v

For simplicity, however, we do not generally denote 1-elements with an underscripted '1'.

The grade of a scalar is 0. We shall see that this is a natural consequence of the exterior product
axioms formulated for elements of general grade.

The dimension of the underlying linear space of 1-elements is denoted by n. Elements of grade
greater than n are zero.

The complementary grade of an m-element in an n-space is n-m.

Interchanging the order of the factors in an exterior product


The exterior product is defined to be associative. Hence interchanging the order of any two
adjacent 1-element factors will change the sign of the product:

… ∧ x ∧ y ∧ … = -(… ∧ y ∧ x ∧ …)

In fact, interchanging the order of any two non-adjacent 1-element factors will also change the sign
of the product.

… ∧ x ∧ … ∧ y ∧ … = -(… ∧ y ∧ … ∧ x ∧ …)

To see why this is so, suppose the number of factors between x and y is m. First move y to the
immediate left of x. This will cause m+1 changes of sign. Then move x to the position that y
vacated. This will cause m changes of sign. In all there will be 2m+1 changes of sign, equivalent to
just one sign change.
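The counting argument can be checked mechanically by actually performing the adjacent swaps. A small hypothetical Python sketch (names are mine):

```python
def interchange(factors, i, j):
    """Interchange factors at positions i < j using only adjacent swaps,
    tracking the accumulated sign."""
    factors = list(factors)
    sign = 1
    for k in range(j, i, -1):        # move factor j left to position i: m+1 swaps
        factors[k - 1], factors[k] = factors[k], factors[k - 1]
        sign = -sign
    for k in range(i + 1, j):        # move the displaced factor right to position j: m swaps
        factors[k], factors[k + 1] = factors[k + 1], factors[k]
        sign = -sign
    return factors, sign

# x ∧ a ∧ b ∧ y  ->  y ∧ a ∧ b ∧ x : m = 2, so 2m+1 = 5 swaps, net sign -1
f, s = interchange(["x", "a", "b", "y"], 0, 3)
print(f, s)   # ['y', 'a', 'b', 'x'] -1
```

Whatever the separation m, the total 2m+1 is odd, so the net sign is always -1, in agreement with the text.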

Note that it is only elements of odd grade that anti-commute. If, in a product of two elements, at
least one of them is of even grade, then the elements commute. For example, 2-elements commute
with all other elements.

(x ∧ y) ∧ z = z ∧ (x ∧ y)

A brief summary of the properties of the exterior product


In this section we summarize a few of the more important properties of the exterior product that we
have already introduced informally. In Chapter 2: The Exterior Product, the complete set of axioms
is discussed.

• The exterior product of an m-element and a k-element is an (m+k)-element.


• The exterior product is associative.

(α_m ∧ β_k) ∧ γ_r = α_m ∧ (β_k ∧ γ_r)        1.3

• The unit scalar acts as an identity under the exterior product.

α_m = 1 ∧ α_m = α_m ∧ 1        1.4

• Scalars factor out of products.

(a α_m) ∧ β_k = α_m ∧ (a β_k) = a (α_m ∧ β_k)        1.5

• An exterior product is anti-commutative whenever the grades of the factors are both odd.

α_m ∧ β_k = (-1)^(m k) β_k ∧ α_m        1.6

• The exterior product is both left and right distributive under addition.

(α_m + β_m) ∧ γ_r = α_m ∧ γ_r + β_m ∧ γ_r        α_m ∧ (β_r + γ_r) = α_m ∧ β_r + α_m ∧ γ_r        1.7
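These axioms are straightforward to exercise in code. The following minimal Python sketch (my own toy representation, not the GrassmannAlgebra package) stores a multivector as a dictionary from sorted basis-index tuples to coefficients, and implements the exterior product by merging index tuples with the sign of the sorting permutation:

```python
def wedge(u, v):
    """Exterior product of multivectors stored as {sorted index tuple: coefficient}."""
    out = {}
    for bu, cu in u.items():
        for bv, cv in v.items():
            if set(bu) & set(bv):
                continue                     # repeated basis factor: e_i ∧ e_i = 0
            merged = bu + bv
            inversions = sum(1 for i in range(len(merged))
                             for j in range(i + 1, len(merged))
                             if merged[i] > merged[j])
            sign = -1 if inversions % 2 else 1   # sign of the sorting permutation
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0) + sign * cu * cv
    return {k: c for k, c in out.items() if c != 0}

e1, e2, e3 = {(1,): 1}, {(2,): 1}, {(3,): 1}
print(wedge(e1, e2))      # {(1, 2): 1}
print(wedge(e2, e1))      # {(1, 2): -1}   anti-commutativity, axiom 1.6
print(wedge(wedge(e1, e2), e3) == wedge(e1, wedge(e2, e3)))   # True: associativity, axiom 1.3
```

A scalar is stored as {(): c}, so axiom 1.4 (the unit scalar as identity) also falls out of the same merging rule.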


1.3 The Regressive Product

The regressive product as a dual product to the exterior product


One of Grassmann's major contributions, which appears to be all but lost to current mathematics, is
the regressive product. The regressive product is the real foundation for the theory of the inner and
scalar products (and their generalization, the interior product). Yet the regressive product is often
ignored and the inner product defined as a new construct independent of the regressive product.
This approach not only has potential for inconsistencies, but also fails to capitalize on the wealth of
results available from the natural duality between the exterior and regressive products. The
approach adopted in this book follows Grassmann's original concept. The regressive product is a
simple dual operation to the exterior product and an enticing and powerful symmetry is lost by
ignoring it, particularly in the development of metric results involving complements and interior
products.

The underlying beauty of the Ausdehnungslehre is due to this symmetry, which in turn is due to the
fact that linear spaces of m-elements and linear spaces of (n-m)-elements have the same dimension.
This too is the key to the duality of the exterior and regressive products. For example, the exterior
product of m 1-elements is an m-element. The dual to this is that the regressive product of m (n-1)-
elements is an (n-m)-element. This duality has the same form as that in a Boolean algebra: if the
exterior product corresponds to a type of 'union' then the regressive product corresponds to a type
of 'intersection'.

It is this duality that permits the definition of complement, and hence interior, inner and scalar
products defined in later chapters. To underscore this duality it is proposed to adopt here the ∨
('vee') for the regressive product operation.

Unions and intersections of spaces


Consider a (non-zero) 2-element x ∧ y. We can test to see if any given 1-element z is in the
subspace spanned by x and y by taking the exterior product of x ∧ y with z and seeing if the result
is zero. From this point of view, x ∧ y is an element which can be used to define the subspace
instead of the individual 1-elements x and y.

Thus we can define the space of x ∧ y as the space of all 1-elements z such that x ∧ y ∧ z = 0. We
extend this to more general elements by defining the space of a simple element α_m as the space of
all 1-elements β such that α_m ∧ β = 0.

We will also need the notion of congruence. Two elements are congruent if one is a scalar factor
times the other. For example x and 2 x are congruent; x ∧ y and -x ∧ y are congruent. Congruent
elements define the same subspace. We denote congruence by the symbol ≡. For two elements,
congruence is equivalent to linear dependence. For more than two elements however, although the
elements may be linearly dependent, they are not necessarily congruent. The following concepts of
union and intersection only make sense up to congruence.

A union of elements is an element defining the subspace they together span.


The dual concept to union of elements is intersection of elements. An intersection of elements is an
element defining the subspace they span in common.

Suppose we have three independent 1-elements: x, y, and z. A union of x ∧ y and y ∧ z is any
element congruent to x ∧ y ∧ z. An intersection of x ∧ y and y ∧ z is any element congruent to y.

The regressive product enables us to calculate intersections of elements. Thus it is the product
operation correctly dual to the exterior product. In fact, in Chapter 3: The Regressive Product, we
will develop the axiom set for the regressive product from that of the exterior product, and also
show that we can reverse the procedure.

A brief summary of the properties of the regressive product


In this section we summarize a few of the more important properties of the regressive product. In
Chapter 3: The Regressive Product, we develop the complete set of axioms from those of the
exterior product. By comparing the axioms below with those for the exterior product in the
previous section, we see that they are effectively generated by replacing ∧ with ∨, and m by n-m.
The unit element 1 (in its form 1₀) becomes the unit n-element 1ₙ.

• The regressive product of an m-element and a k-element in an n-space is an (m+k-n)-element.

• The regressive product is associative.

(αₘ ∨ βₖ) ∨ γᵣ = αₘ ∨ (βₖ ∨ γᵣ)    1.8

• The unit n-element 1ₙ acts as an identity under the regressive product.

αₘ = 1ₙ ∨ αₘ = αₘ ∨ 1ₙ    1.9

• Scalars factor out of products.

(a αₘ) ∨ βₖ = αₘ ∨ (a βₖ) = a (αₘ ∨ βₖ)    1.10

• A regressive product is anti-commutative whenever the complementary grades of the factors are
both odd.

αₘ ∨ βₖ = (-1)^((n-m)(n-k)) βₖ ∨ αₘ    1.11

• The regressive product is both left and right distributive under addition.

(αₘ + βₘ) ∨ γᵣ = αₘ ∨ γᵣ + βₘ ∨ γᵣ        αₘ ∨ (βᵣ + γᵣ) = αₘ ∨ βᵣ + αₘ ∨ γᵣ    1.12

Note: In GrassmannAlgebra the dimension of a space is denoted by the special symbol ¯n, so that
the unit n-element is written with ¯n beneath it, ensuring a correct interpretation of the symbol.

The Common Factor Axiom


Up to this point we have no way of connecting the dual axiom structures of the exterior and
regressive products. That is, given a regressive product of an m-element and a k-element, how do
we find the (m+k-n)-element to which it is equivalent, expressed only in terms of exterior products?

To make this connection we need to introduce a further axiom which we call the Common Factor
Axiom. The form of the Common Factor Axiom may seem somewhat arbitrary, but it is in fact one
of the simplest forms which enable intersections to be calculated. This can be seen in the following
application of the axiom to a vector 3-space.

Suppose x, y, and z are three independent vectors in a vector 3-space. The Common Factor Axiom
says that the regressive product of the two bivectors x ∧ z and y ∧ z may also be expressed as the
regressive product of a 3-element x ∧ y ∧ z with their common factor z.

(x ∧ z) ∨ (y ∧ z) = (x ∧ y ∧ z) ∨ z

Since the space is 3-dimensional, we can write any 3-element such as x ∧ y ∧ z as a scalar factor
(a, say) times the unit 3-element (introduced in axiom 1.9).

(x ∧ y ∧ z) ∨ z = (a 1₃) ∨ z = a z

This then gives us the axiomatic structure to say that the regressive product of two elements
possessing an element in common is congruent to that element. Another way of saying this is: the
regressive product of two elements defines their intersection.


(x ∧ z) ∨ (y ∧ z) ≅ z

The intersection of two bivectors in a three-dimensional space

Of course this is just a simple case. More generally, let αₘ, βₖ, and γₚ be simple elements with
m+k+p = n, where n is the dimension of the space. Then the Common Factor Axiom states that

(αₘ ∧ γₚ) ∨ (βₖ ∧ γₚ) = (αₘ ∧ βₖ ∧ γₚ) ∨ γₚ ,    m+k+p = n    1.13

There are many rearrangements and special cases of this formula which we will encounter in later
chapters. For example, when p is zero, the Common Factor Axiom shows that the regressive
product of an m-element with an (n-m)-element is a scalar which can be expressed in the
alternative form of a regressive product with the unit 1.

αₘ ∨ βₙ₋ₘ = (αₘ ∧ βₙ₋ₘ) ∨ 1

The Common Factor Axiom allows us to prove a particularly useful result: the Common Factor
Theorem. The Common Factor Theorem expresses any regressive product in terms of exterior
products alone. This of course enables us to calculate intersections of arbitrary elements.

Most importantly, however, we will see later that the Common Factor Theorem has a counterpart
expressed in terms of exterior and interior products, called the Interior Common Factor Theorem.
This is the principal expansion theorem for interior products, from which we can derive all
the important theorems relating exterior and interior products.

The Interior Common Factor Theorem, and the Common Factor Theorem upon which it is based,
are possibly the most important theorems in the Grassmann algebra.


In the next section we informally apply the Common Factor Theorem to obtain the intersection of
two bivectors in a three-dimensional space.

The intersection of two bivectors in a three-dimensional space


Suppose that x ∧ y and u ∧ v are non-congruent bivectors in a three-dimensional space. Since the
space has only three dimensions, the bivectors must have an intersection. We denote the regressive
product of x ∧ y and u ∧ v by z:

z = (x ∧ y) ∨ (u ∧ v)

We will see in Chapter 3: The Regressive Product that this can be expanded by the Common
Factor Theorem to give

(x ∧ y) ∨ (u ∧ v) = (x ∧ y ∧ v) ∨ u - (x ∧ y ∧ u) ∨ v    1.14

But we have already seen in Section 1.2 that in a 3-space, the exterior product of three vectors will,
in any given basis, give the basis trivector, multiplied by the determinant of the components of the
vectors making up the trivector.

Additionally, we note that the regressive product (intersection) of a vector with an element which
completely contains it, such as the basis trivector, just gives an element congruent to the vector.
Thus the regressive product leads us to an explicit expression for an intersection of the two bivectors.

z ≅ Det[x, y, v] u - Det[x, y, u] v

Here Det[x,y,v] is the determinant of the components of x, y, and v. We could also have
obtained an equivalent formula expressing z in terms of x and y instead of u and v by simply
interchanging the order of the bivector factors in the original regressive product.

Note carefully however, that this formula only finds the common factor up to congruence, because
until we determine an explicit expression for the unit n-element 1ₙ in terms of basis elements
(which we do by introducing the complement operation in Chapter 5), we cannot usefully use
axiom 1.9 above. Nevertheless, this is not to be seen as a restriction. Rather, as we shall see in the
next section, it leads to interesting insights as to what can be accomplished when we work in spaces
without a metric (projective spaces).
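As a concrete numeric check of the intersection formula above, here is a small sketch (our own, not the book's Mathematica code; the vectors and the helper `det3` are illustrative assumptions) computing the common factor of two bivectors in 3-space:

```python
import numpy as np

def det3(a, b, c):
    # Determinant of the components of three vectors in a given basis
    return np.linalg.det(np.column_stack([a, b, c]))

x, y = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])  # bivector x∧y: the xy-plane
u, v = np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])  # bivector u∧v: the yz-plane

# z ≅ Det[x,y,v] u - Det[x,y,u] v
z = det3(x, y, v) * u - det3(x, y, u) * v

print(z)  # a multiple of [0, 1, 0]: the common line of the two planes
# z lies in both bivectors, so both triple products vanish
print(det3(z, x, y), det3(z, u, v))
```

Vanishing of the two determinants is exactly the statement that z ∧ (x ∧ y) = 0 and z ∧ (u ∧ v) = 0, i.e. z is in the space of each bivector.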


1.4 Geometric Interpretations

Points and vectors


In this section we introduce two different types of geometrically interpreted elements which can be
represented by the elements of a linear space: vectors and points. Then we look at the
interpretations of the various higher order elements that we can generate from them by the exterior
product. Finally we see how the regressive product can be used to calculate intersections of these
higher order elements.

As discussed in Section 1.2, the term 'vector' is often used to refer to an element of a linear space
with no intention of implying an interpretation. In this book however, we reserve the term for a
particular type of geometric interpretation: that associated with representing direction.

But an element of a linear space may also be interpreted as a point. Of course vectors may also be
used to represent points, but only relative to another given point (the information for which is not
in the vector). These vectors are properly called position vectors. Common practice often omits
explicit reference to this other given point, or perhaps may refer to it verbally. Points can be
represented satisfactorily in many cases by position vectors alone, but when both position and
direction are required in the same element we must distinguish mathematically between the two.

To describe true position in a three-dimensional physical space, a linear space of four dimensions
is required, one for an origin point, and the other three for the three spatial directions. Since the
exterior product is independent of the dimension of the underlying space, it can deal satisfactorily
with points and vectors together. The usual three-dimensional vector algebra however cannot.

Suppose x, y, and z are elements of a linear space interpreted as vectors. Vectors always have a
direction and sense. But only when the linear space has a metric do they also have a magnitude.
Since to this stage we have not yet introduced the notion of metric, we will only be discussing
interpretations and applications which do not require elements (other than congruent elements) to
be commensurable.

Of course, vectors may be summed in a space with no metric: the geometric interpretation of this
operation being the standard 'triangle rule'.

As we have seen in the previous sections, multiplying vectors together with the exterior product
leads to higher order entities like bivectors and trivectors. These entities are interpreted as
representing directions and their higher dimensional analogues.

Sums and differences of points


A point is defined as the sum of the origin point and a vector. If 𝒪 is the origin, and x is a vector,
then 𝒪 + x is a point.

P = 𝒪 + x    1.15

The vector x is called the position vector of the point P.


The sum of the origin and a vector is a point.

The sum of a point and a vector is another point.

Q = P + y = (𝒪 + x) + y = 𝒪 + (x + y)

The sum of a point and a vector is another point.

The difference of two points is a vector, since the origins cancel.

Q - P = (𝒪 + (x + y)) - (𝒪 + x) = y

The difference of two points is a vector.

A scalar multiple of a point is called a weighted point. For example, if m is a scalar, m P is a
weighted point with weight m.

The sum of two points gives the point halfway between them with a weight of 2.


P₁ + P₂ = (𝒪 + x₁) + (𝒪 + x₂) = 2 (𝒪 + (x₁ + x₂)/2) = 2 P_c

The sum of two points is a point of weight 2 located mid-way between them.

Historical Note

The point was originally considered the fundamental geometric entity of interest. However the difference of
points was clearly no longer a point, since reference to the origin had been lost. Sir William Rowan Hamilton
coined the term 'vector' for this new entity since adding a vector to a point 'carried' the point to a new point.

Determining a mass-centre
A classic application of a sum of weighted points is in the determination of a centre of mass.

Consider a collection of points Pᵢ weighted with masses mᵢ. The sum of the weighted points gives
the point P_G at the mass-centre (centre of gravity) weighted with the total mass M.

To show this, first add the weighted points and collect the terms involving the origin.

M P_G = m₁ (𝒪 + x₁) + m₂ (𝒪 + x₂) + m₃ (𝒪 + x₃) + …
      = (m₁ + m₂ + m₃ + …) 𝒪 + (m₁ x₁ + m₂ x₂ + m₃ x₃ + …)

Dividing through by the total mass M gives the centre of mass.

P_G = 𝒪 + (m₁ x₁ + m₂ x₂ + m₃ x₃ + …) / (m₁ + m₂ + m₃ + …)

To fix ideas, we take a simple example demonstrating that centres of mass can be accumulated
in any order. Suppose we have three points P, Q, and R with masses p, q, and r. The centres of
mass taken two at a time are given by

(p + q) G_PQ = p P + q Q
(q + r) G_QR = q Q + r R
(p + r) G_PR = p P + r R

Now take the centre of mass of each of these with the other weighted point. Clearly, the three sums
will be equal.


(p + q) G_PQ + r R = (q + r) G_QR + p P = (p + r) G_PR + q Q = p P + q Q + r R = (p + q + r) G_PQR

Centres of mass can be accumulated in any order.

Lines and planes


The exterior product of a point and a vector gives a bound vector. Bound vectors are the entities we
need for mathematically representing lines. A line is the set of points in the bound vector. That is,
it consists of all the points whose exterior product with the bound vector is zero.

In practice, we usually compute with lines by computing with their bound vectors. For example, to
get the intersection of two lines in the plane, we take the regressive product of their bound vectors.
By abuse of terminology we may therefore often refer to a bound vector as a line.

A bound vector can be defined by the exterior product of a point and a vector, or of two points. In
the first case we represent the line L through the point P in the direction of x by any entity
congruent to the exterior product of P and x. In the second case we can introduce Q as P + x to get
the same result.

L ≅ P ∧ x ≅ P ∧ (Q - P) ≅ P ∧ Q

Two different depictions of a bound vector in its line.

A line is independent of the specific points used to define it. To see this, consider any other point R
on the line. Since R is on the line it can be represented by the sum of P with an arbitrary scalar
multiple of the vector x:

L ≅ R ∧ x = (P + a x) ∧ x = P ∧ x

A line may also be represented by the exterior product of any two points on it.

L ≅ P ∧ R = P ∧ (P + a x) = a P ∧ x

Note that the bound vectors P ∧ x and P ∧ R are different, but congruent. They therefore define the
same line.

L ≅ P ∧ x ≅ R ∧ x ≅ P ∧ R    1.16

Two congruent bound vectors defining the same line.
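This congruence can be verified numerically. In the sketch below (our own minimal encoding, not the book's code; the basis (𝒪, e₁, e₂) and the helper `wedge` are illustrative assumptions), a 1-element of the bound plane is a 3-component array (weight, x, y), and the wedge of two such elements is computed componentwise on the basis bivectors:

```python
import numpy as np

def wedge(a, b):
    # Components of a∧b on the basis (𝒪∧e1, 𝒪∧e2, e1∧e2) for 1-elements a, b
    return np.array([a[0]*b[1] - a[1]*b[0],
                     a[0]*b[2] - a[2]*b[0],
                     a[1]*b[2] - a[2]*b[1]])

P = np.array([1.0, 1.0, 2.0])      # the point 𝒪 + e1 + 2 e2
x = np.array([0.0, 3.0, 1.0])      # a direction vector
R = P + 2.0 * x                    # another point on the line through P

b1, b2 = wedge(P, x), wedge(P, R)
print(b1, b2)                      # b2 is a scalar multiple of b1: congruent
print(np.cross(b1, b2))            # [0 0 0], confirming proportionality
```

Here P ∧ R = P ∧ (P + 2x) = 2 P ∧ x, so the two bound vectors differ only by the scalar factor 2 and define the same line.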

These concepts extend naturally to higher dimensional constructs. For example a plane Π may be
represented by the exterior product of a single point on it together with a bivector in the direction of
the plane, any two points on it together with a vector in it (not parallel to the line joining the
points), or any three points on it (not in the same line).

Π ≅ P ∧ x ∧ y ≅ P ∧ Q ∧ y ≅ P ∧ Q ∧ R    1.17

Three different depictions of a bound bivector in its plane.

To build higher dimensional geometric entities from lower dimensional ones, we simply take their
exterior product. For example we can build a line by taking the exterior product of a point with any
point or vector exterior to it. Or we can build a plane by taking the exterior product of a line with
any point or vector exterior to it.

The intersection of two lines



To find intersections of geometric entities, we use the regressive product. For example, suppose we
have two lines in a plane and we want to find their point of intersection P. As we have seen, we can
represent the lines in a number of ways. For example:

L₁ ≅ P₁ ∧ x₁ = (𝒪 + ν₁) ∧ x₁ = 𝒪 ∧ x₁ + ν₁ ∧ x₁

L₂ ≅ P₂ ∧ x₂ = (𝒪 + ν₂) ∧ x₂ = 𝒪 ∧ x₂ + ν₂ ∧ x₂

The intersection of lines as the common factor of bound vectors.

The point of intersection of L₁ and L₂ is the point P given by (congruent to) the regressive product
of the lines L₁ and L₂.

P ≅ L₁ ∨ L₂ = (𝒪 ∧ x₁ + ν₁ ∧ x₁) ∨ (𝒪 ∧ x₂ + ν₂ ∧ x₂)

Expanding the formula for P gives four terms:

P ≅ (𝒪 ∧ x₁) ∨ (𝒪 ∧ x₂) + (ν₁ ∧ x₁) ∨ (𝒪 ∧ x₂) + (𝒪 ∧ x₁) ∨ (ν₂ ∧ x₂) + (ν₁ ∧ x₁) ∨ (ν₂ ∧ x₂)

The Common Factor Theorem for the regressive product of elements of the form
(x ∧ y) ∨ (u ∧ v) in a linear space of three dimensions was introduced as formula 1.14 in Section
1.3 as:

(x ∧ y) ∨ (u ∧ v) = (x ∧ y ∧ v) ∨ u - (x ∧ y ∧ u) ∨ v

Since a bound 2-space is three-dimensional (its basis contains three elements: the origin and two
vectors), we can use this formula to expand each of the terms in P:

(𝒪 ∧ x₁) ∨ (𝒪 ∧ x₂) = (𝒪 ∧ x₁ ∧ x₂) ∨ 𝒪 - (𝒪 ∧ x₁ ∧ 𝒪) ∨ x₂

(ν₁ ∧ x₁) ∨ (𝒪 ∧ x₂) = (ν₁ ∧ x₁ ∧ x₂) ∨ 𝒪 - (ν₁ ∧ x₁ ∧ 𝒪) ∨ x₂

(𝒪 ∧ x₁) ∨ (ν₂ ∧ x₂) = -(ν₂ ∧ x₂) ∨ (𝒪 ∧ x₁) = -(ν₂ ∧ x₂ ∧ x₁) ∨ 𝒪 + (ν₂ ∧ x₂ ∧ 𝒪) ∨ x₁

(ν₁ ∧ x₁) ∨ (ν₂ ∧ x₂) = (ν₁ ∧ x₁ ∧ x₂) ∨ ν₂ - (ν₁ ∧ x₁ ∧ ν₂) ∨ x₂

The term 𝒪 ∧ x₁ ∧ 𝒪 is zero because of the exterior product of repeated factors. The four terms
involving the exterior product of three vectors, for example ν₁ ∧ x₁ ∧ x₂, are also zero, since any
three vectors in a two-dimensional vector space must be dependent (the vector space is 2-
dimensional since it is the vector sub-space of a bound 2-space). Hence we can express the point of
intersection as congruent to the weighted point P:

P ≅ (𝒪 ∧ x₁ ∧ x₂) ∨ 𝒪 + (𝒪 ∧ ν₂ ∧ x₂) ∨ x₁ - (𝒪 ∧ ν₁ ∧ x₁) ∨ x₂

If we express the vectors in terms of a basis, e₁ and e₂ say, we can reduce this formula (after some
manipulation) to:

P ≅ 𝒪 + (Det[ν₂, x₂] / Det[x₁, x₂]) x₁ - (Det[ν₁, x₁] / Det[x₁, x₂]) x₂
Here, the determinants are the determinants of the coefficients of the vectors in the given basis.

To verify that P does indeed lie on both the lines L₁ and L₂, we only need to carry out the
straightforward verification that the products P ∧ L₁ and P ∧ L₂ are both zero.

Although this approach in this simple case is certainly more complex than the standard algebraic
approach in the plane, its interest lies in the fact that it is immediately generalizable to
intersections of any geometric objects in spaces of any number of dimensions, and that it leads to
easily computable solutions.
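For a numeric check of the determinant formula for P (a sketch of ours, not the book's code), take the lines y = 1 and x = 2 in the plane, each given by a point 𝒪 + νᵢ and a direction xᵢ:

```python
import numpy as np

def det2(a, b):
    # Determinant of the components of two vectors in the basis e1, e2
    return a[0] * b[1] - a[1] * b[0]

n1, x1 = np.array([0.0, 1.0]), np.array([1.0, 0.0])   # line through (0,1), direction e1: y = 1
n2, x2 = np.array([2.0, 0.0]), np.array([0.0, 1.0])   # line through (2,0), direction e2: x = 2

# position vector of P relative to the origin:
p = (det2(n2, x2) / det2(x1, x2)) * x1 - (det2(n1, x1) / det2(x1, x2)) * x2
print(p)   # [2. 1.], the intersection of y = 1 and x = 2
```

The resulting position vector is the expected intersection point (2, 1).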

1.5 The Complement

The complement as a correspondence between spaces


The Grassmann algebra has a duality in its structure which not only gives it a certain elegance, but
is also the basis of its power. We have already introduced the regressive product as the dual
product operation to the exterior product. In this section we extend the notion of duality to define
the complement of an element. The notions of metric, orthogonality, and interior, inner and scalar
products are all based on the complement.

Consider a linear space of dimension n with basis e₁, e₂, …, eₙ. The set of all the essentially
different m-element products of these basis elements forms the basis of another linear space, but
this time of dimension given by the binomial coefficient (n choose m). For example, when n is 3,
the linear space of 2-elements has three elements in its basis: e₁∧e₂, e₁∧e₃, e₂∧e₃.

The anti-symmetric nature of the exterior product means that there are just as many basis elements
in the linear space of (n-m)-elements as there are in the linear space of m-elements. Because these
linear spaces have the same dimension, we can set up a correspondence between m-elements and
(n-m)-elements. That is, given any m-element, we can define its corresponding (n-m)-element. The
(n-m)-element is called the complement of the m-element. Normally this correspondence is set up
between basis elements and extended to all other elements by linearity.

The Euclidean complement



Suppose we have a three-dimensional linear space with basis e₁, e₂, e₃. We define the Euclidean
complement of each of the basis elements as the basis 2-element whose exterior product with the
basis element gives the basis 3-element e₁ ∧ e₂ ∧ e₃. We denote the complement of an element by
placing a 'bar' over it. Thus:

ē₁ = e₂ ∧ e₃  ⇒  e₁ ∧ ē₁ = e₁ ∧ e₂ ∧ e₃

ē₂ = e₃ ∧ e₁  ⇒  e₂ ∧ ē₂ = e₁ ∧ e₂ ∧ e₃

ē₃ = e₁ ∧ e₂  ⇒  e₃ ∧ ē₃ = e₁ ∧ e₂ ∧ e₃

The Euclidean complement is the simplest type of complement and defines a Euclidean metric, that
is, where the basis elements are mutually orthonormal. This was the only type of complement
considered by Grassmann. In Chapter 5: The Complement, we will show however, that
Grassmann's concept of complement is easily extended to more general metrics. Note carefully that
we will be using the notion of complement to define the notions of orthogonality and metric, and
until we do this, we will not be relying on their existence in Chapters 2, 3, and 4.

With the definitions above, we can now proceed to define the Euclidean complement of a general 1-
element x = a e₁ + b e₂ + c e₃. To do this we need to endow the complement operation with the
property of linearity so that it has meaning for linear combinations of basis elements.

x̄ = a ē₁ + b ē₂ + c ē₃ = a e₂ ∧ e₃ + b e₃ ∧ e₁ + c e₁ ∧ e₂

A vector and its bivector complement in a three-dimensional vector space.

In Section 1.6 we will see that the complement of an element is orthogonal to the element, because
we will define the interior product (and hence inner and scalar products) using the complement.
We can start to see how the scalar product of a 1-element with itself might arise by expanding the
product x ∧ x̄:

x ∧ x̄ = (a e₁ + b e₂ + c e₃) ∧ (a e₂ ∧ e₃ + b e₃ ∧ e₁ + c e₁ ∧ e₂)
      = (a² + b² + c²) e₁ ∧ e₂ ∧ e₃
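This expansion can be sketched in code (a minimal encoding of our own, not the book's GrassmannAlgebra package; `complement1` and `wedge_1_2` are names introduced for illustration). A 1-element (a, b, c) maps to the 2-element a e₂∧e₃ + b e₃∧e₁ + c e₁∧e₂, and the coefficient of e₁∧e₂∧e₃ in x ∧ ȳ is then the ordinary dot product of x and y:

```python
def complement1(x):
    # Euclidean complement of a 1-element, as coefficients on (e2∧e3, e3∧e1, e1∧e2).
    # In this basis the coefficients carry over unchanged.
    return list(x)

def wedge_1_2(x, B):
    # Coefficient of e1∧e2∧e3 in the product of a 1-element x with a
    # 2-element B given on the basis (e2∧e3, e3∧e1, e1∧e2).
    # e1∧(e2∧e3) = e2∧(e3∧e1) = e3∧(e1∧e2) = e1∧e2∧e3; all cross terms vanish.
    return x[0]*B[0] + x[1]*B[1] + x[2]*B[2]

x = [2, 3, 6]
print(wedge_1_2(x, complement1(x)))   # 49 = 2² + 3² + 6²
```

For two different 1-elements x and y, the same computation gives the coefficient x·y, anticipating the scalar product of Section 1.6.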

The complement of the basis 2-elements can be defined in a manner analogous to that for 1-
elements, that is, such that the exterior product of a basis 2-element with its complement is equal to
the basis 3-element. The complement of a 2-element in 3-space is therefore a 1-element.

(e₂ ∧ e₃)‾ = e₁  ⇒  (e₂ ∧ e₃) ∧ (e₂ ∧ e₃)‾ = e₂ ∧ e₃ ∧ e₁ = e₁ ∧ e₂ ∧ e₃

(e₃ ∧ e₁)‾ = e₂  ⇒  (e₃ ∧ e₁) ∧ (e₃ ∧ e₁)‾ = e₃ ∧ e₁ ∧ e₂ = e₁ ∧ e₂ ∧ e₃

(e₁ ∧ e₂)‾ = e₃  ⇒  (e₁ ∧ e₂) ∧ (e₁ ∧ e₂)‾ = e₁ ∧ e₂ ∧ e₃

To complete the definition of Euclidean complement in a 3-space we note that:

1̄ = e₁ ∧ e₂ ∧ e₃

(e₁ ∧ e₂ ∧ e₃)‾ = 1

Summarizing these results for the Euclidean complement of the basis elements of a Grassmann
algebra in three dimensions shows the essential symmetry of the complement operation.

Complement Palette
  Basis            Complement
  1                e₁ ∧ e₂ ∧ e₃
  e₁               e₂ ∧ e₃
  e₂               -(e₁ ∧ e₃)
  e₃               e₁ ∧ e₂
  e₁ ∧ e₂          e₃
  e₁ ∧ e₃          -e₂
  e₂ ∧ e₃          e₁
  e₁ ∧ e₂ ∧ e₃     1


The complement of a complement


Applying the complement operation twice to a 1-element x, we can see that the complement of the
complement of x in a 3-space is just x itself.

x = a e₁ + b e₂ + c e₃

x̄ = a ē₁ + b ē₂ + c ē₃ = a e₂ ∧ e₃ + b e₃ ∧ e₁ + c e₁ ∧ e₂

(x̄)‾ = a (e₂ ∧ e₃)‾ + b (e₃ ∧ e₁)‾ + c (e₁ ∧ e₂)‾ = a e₁ + b e₂ + c e₃

⇒  (x̄)‾ = x
More generally, as we shall see in Chapter 5: The Complement, we can show that the complement
of the complement of any element is the element itself, apart from a possible sign.

(ᾱₘ)‾ = (-1)^(m(n-m)) αₘ    1.18

This result is independent of the correspondence that we set up between the m-elements and (n-m)-
elements of the space, except that the correspondence must be symmetric. This is equivalent to the
requirement that the metric tensor (and inner product) be symmetric.

Whereas in a 3-space the complement of the complement of a 1-element is the element itself, in a
2-space it turns out to be the negative of the element.

Complement Palette
  Basis      Complement
  1          e₁ ∧ e₂
  e₁         e₂
  e₂         -e₁
  e₁ ∧ e₂    1

Although this sign dependence on the dimension of the space and the grade of the element might
appear arbitrary, it turns out to capture some essential properties of the elements in their spaces to
which we have become accustomed. For example, in 2-space the complement x̄ of a vector x
rotates it anticlockwise by π/2. Taking the complement (x̄)‾ of the vector x̄ rotates it anticlockwise
by a further π/2 into -x.

x = a e₁ + b e₂

x̄ = a ē₁ + b ē₂ = a e₂ - b e₁

(x̄)‾ = a ē₂ - b ē₁ = -a e₁ - b e₂

(x̄)‾ = -x

The complement operation rotating vectors in a vector 2-space.
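The rotation reading of the 2-space complement is easy to verify (a sketch in our own encoding, not the book's code): the complement sends e₁ ↦ e₂ and e₂ ↦ -e₁, i.e. (a, b) ↦ (-b, a), which is an anticlockwise rotation by π/2, and applying it twice negates the vector.

```python
def complement2(x):
    # Euclidean complement in 2-space: a e1 + b e2  ↦  a e2 - b e1
    a, b = x
    return (-b, a)

x = (3, 1)
print(complement2(x))                 # (-1, 3): x rotated anticlockwise by π/2
print(complement2(complement2(x)))    # (-3, -1) == -x
```

This is the (-1)^(m(n-m)) sign of formula 1.18 with m = 1 and n = 2.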

The Complement Axiom


From the Common Factor Axiom we can derive a powerful relationship between the Euclidean
complements of elements and their exterior and regressive products. The Euclidean complement of
the exterior product of two elements is equal to the regressive product of their complements.

(αₘ ∧ βₖ)‾ = ᾱₘ ∨ β̄ₖ    1.19

However, although this may be derived in this simple case, to develop the Grassmann algebra for
general metrics, we will assume this relationship holds independent of the metric. It thus takes on
the mantle of an axiom.

This axiom, which we call the Complement Axiom, is the quintessential formula of the Grassmann
algebra. It expresses the duality of its two fundamental operations on elements and their
complements. We note the formal similarity to de Morgan's law in Boolean algebra.

We will also see that adopting this formula for general complements will enable us to compute the
complement of any element of a space once we have defined the complements of its basis 1-
elements.

As an example, consider two vectors x and y in 3-space, and their exterior product. The
Complement Axiom becomes

(x ∧ y)‾ = x̄ ∨ ȳ

The complement of the bivector x ∧ y is a vector. The complements of x and y are bivectors. The
regressive product of these two bivectors is a vector. The following graphic depicts this
relationship, and the orthogonality of the elements. We discuss orthogonality in the next section.


Visualizing the complement axiom in vector three-space

1.6 The Interior Product

The definition of the interior product


The interior product is a generalization of the inner or scalar product to elements of arbitrary grade.
First we will define the interior product and then show how the inner and scalar products are
special cases.

The interior product of an element αₘ with an element βₖ is denoted αₘ ∘ βₖ and defined to be the
regressive product of αₘ with the complement of βₖ.

αₘ ∘ βₖ = αₘ ∨ β̄ₖ    1.20

The grade of an interior product αₘ ∘ βₖ may be seen from the definition to be m+(n-k)-n = m-k.

Note that while the grade of a regressive product depends on the dimension of the underlying linear
space, the grade of an interior product is independent of the dimension of the underlying space.
This independence underpins the important role the interior product plays in the Grassmann
algebra: the exterior product sums grades while the interior product differences them. However,
grades may be arbitrarily summed, but not arbitrarily differenced, since there are no elements of
negative grade.

Thus the order of factors in an interior product is important. When the grade of the first element is
less than that of the second element, the result is necessarily zero.

Inner products and scalar products


The interior product of two elements αₘ and βₘ of the same grade is (also) called their inner product.
Since the grade of an interior product is the difference of the grades of its factors, an inner product
is always of grade zero, hence scalar.

A special case of the interior or inner product in which the two factors of the product are of grade 1
is called a scalar product to conform to the common usage.

In Chapter 6 we will show that the inner product is symmetric, that is, the order of the factors is
immaterial.

αₘ ∘ βₘ = βₘ ∘ αₘ    1.21

When the inner product is between simple elements it can be expressed as the determinant of the
array of scalar products according to the following formula:

(α₁ ∧ α₂ ∧ ⋯ ∧ αₘ) ∘ (β₁ ∧ β₂ ∧ ⋯ ∧ βₘ) = Det[αᵢ ∘ βⱼ]    1.22

For example, the inner product of two 2-elements α₁∧α₂ and β₁∧β₂ may be written:

(α₁ ∧ α₂) ∘ (β₁ ∧ β₂) = (α₁ ∘ β₁)(α₂ ∘ β₂) - (α₁ ∘ β₂)(α₂ ∘ β₁)    1.23
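The determinant formula is easy to check numerically. The following sketch (ours, not the book's code; the vectors and the helper `inner_bivector` are illustrative) computes the inner product of two simple bivectors from the Gram determinant of scalar products:

```python
import numpy as np

def inner_bivector(a1, a2, b1, b2):
    # (a1∧a2)∘(b1∧b2) as the determinant of the array of scalar products, formula 1.22
    g = np.array([[a1 @ b1, a1 @ b2],
                  [a2 @ b1, a2 @ b2]])
    return np.linalg.det(g)

a1, a2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
b1, b2 = np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 0.0])

# Expanded form, formula 1.23: (a1∘b1)(a2∘b2) - (a1∘b2)(a2∘b1)
expanded = (a1 @ b1) * (a2 @ b2) - (a1 @ b2) * (a2 @ b1)
print(inner_bivector(a1, a2, b1, b2), expanded)   # both 1.0
```

The 2×2 determinant and its expansion (formula 1.23) necessarily agree.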

Sequential interior products


Definition 1.20 for the interior product leads to an immediate and powerful formula relating
exterior and interior products, by grace of the associativity of the regressive product and the
Complement Axiom 1.19.

αₘ ∘ (β₁ ∧ β₂ ∧ … ∧ βₖ) = αₘ ∨ (β₁ ∧ β₂ ∧ … ∧ βₖ)‾ =
αₘ ∨ β̄₁ ∨ β̄₂ ∨ … ∨ β̄ₖ = ((αₘ ∘ β₁) ∘ β₂) ∘ … ∘ βₖ

αₘ ∘ (β₁ ∧ β₂ ∧ … ∧ βₖ) = (αₘ ∘ β₁) ∘ (β₂ ∧ … ∧ βₖ) = ((αₘ ∘ β₁) ∘ β₂) ∘ … ∘ βₖ    1.24

For example, the inner product of two bivectors can be rewritten to display the interior product of
the first bivector with either of the vectors:

(x ∧ y) ∘ (u ∧ v) = ((x ∧ y) ∘ u) ∘ v = -((x ∧ y) ∘ v) ∘ u

It is the straightforward and consistent derivation of formulae like 1.24 from definition 1.20 using
only the fundamental exterior, regressive and complement operations, that shows how powerful
Grassmann's approach is. The alternative approach of simply introducing an inner product onto a
space cannot bring such power to bear.

Orthogonality
As is well known, two 1-elements are said to be orthogonal if their scalar product is zero.

More generally, a 1-element x is orthogonal to a simple element αₘ if and only if their interior
product αₘ ∘ x is zero.

However, for αₘ ∘ (x₁ ∧ x₂ ∧ ⋯ ∧ xₖ) to be zero it is only necessary that one of the xᵢ be
orthogonal to αₘ. To show this, suppose it to be (without loss of generality) x₁. Then by formula
1.24 we can write

αₘ ∘ (x₁ ∧ x₂ ∧ ⋯ ∧ xₖ) = (αₘ ∘ x₁) ∘ (x₂ ∧ ⋯ ∧ xₖ)

Hence it becomes immediately clear that if αₘ ∘ x₁ is zero then so is the whole product
αₘ ∘ (x₁ ∧ x₂ ∧ ⋯ ∧ xₖ).

Measure and magnitude


Formula 1.22 is the basis for computing the measure of a simple m-element. The measure of a
simple element A is denoted |A|, and is defined to be the positive square root of the interior
product of the element with itself. For A = α₁ ∧ α₂ ∧ ⋯ ∧ αₘ we have

|A|² = A ∘ A = (α₁ ∧ α₂ ∧ ⋯ ∧ αₘ) ∘ (α₁ ∧ α₂ ∧ ⋯ ∧ αₘ) = Det[αᵢ ∘ αⱼ]    1.25

Under a geometric interpretation of the space in which vectors are interpreted as representing
displacements, the concept of measure corresponds to the concept of magnitude. The magnitude of
a vector is its length, the magnitude of a bivector is the area of the parallelogram formed by its two
vectors, and the magnitude of a trivector is the volume of the parallelepiped formed by its three
vectors. The magnitude of a scalar is the scalar itself.

The magnitude of a vector x is, as expected, given by the standard formula.

$$|x| = \sqrt{x \circ x} \tag{1.26}$$

The magnitude of a bivector x ∧ y is given by formula 1.24 as

$$|x \wedge y| = \sqrt{(x \wedge y) \circ (x \wedge y)} = \sqrt{\mathrm{Det}\begin{bmatrix} x \circ x & x \circ y \\ x \circ y & y \circ y \end{bmatrix}} \tag{1.27}$$

Of course, a bivector may be expressed in an infinity of ways as the exterior product of two vectors, for example

$$B = x \wedge y = x \wedge (y + a\,x)$$

From this, the square of its area may be written in either of two ways

$$B \circ B = (x \wedge y) \circ (x \wedge y)$$
$$B \circ B = (x \wedge (y + a\,x)) \circ (x \wedge (y + a\,x))$$

However, multiplying out these expressions using formula 1.23 shows that terms cancel in the second expression, thus reducing them both to the same expression.

$$B \circ B = (x \circ x)(y \circ y) - (x \circ y)^2$$

Thus the measure of a bivector is independent of the actual vectors used to express it. Geometrically interpreted, this means that the area of the corresponding parallelogram (with sides corresponding to the displacements represented by the vectors) is independent of its shape. These results extend straightforwardly to simple elements of any grade.
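This invariance is easy to check numerically. The sketch below is plain Python rather than the book's Mathematica/GrassmannAlgebra notation: it computes the squared measure of a bivector as the Gram-determinant expression B ∘ B = (x ∘ x)(y ∘ y) − (x ∘ y)², for two different factorizations of the same bivector.

```python
# Numerical check that the measure of a bivector is independent of its
# factorization: x^y and x^(y + a*x) have the same squared measure.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def bivector_measure_squared(x, y):
    # |x^y|^2 = (x.x)(y.y) - (x.y)^2, the 2x2 Gram determinant of formula 1.27
    return dot(x, x) * dot(y, y) - dot(x, y) ** 2

x = [1.0, 2.0, 0.0]
y = [0.0, 1.0, 3.0]
a = 5.0
y2 = [yi + a * xi for xi, yi in zip(x, y)]  # y + a*x, an alternative cofactor

m1 = bivector_measure_squared(x, y)
m2 = bivector_measure_squared(x, y2)
print(m1, m2)  # both print 46.0: the parallelogram's area ignores the shear
```

The vectors are arbitrary illustrative values; any choice of x, y and a gives the same agreement.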

The measure of an element is equal to the measure of its complement. By the definition of the interior product, and formulae 1.18 and 1.19, we have

$$a_m \circ a_m = a_m \vee \overline{a_m} = (-1)^{m(n-m)}\, \overline{a_m} \vee a_m = \overline{a_m} \vee \overline{\overline{a_m}} = \overline{a_m} \circ \overline{a_m}$$

$$a_m \circ a_m = \overline{a_m} \circ \overline{a_m} \tag{1.28}$$

A unit element $\hat{a}_m$ can be defined by the ratio of the element to its measure.


$$\hat{a}_m = \frac{a_m}{|a_m|} \tag{1.29}$$

Calculating interior products

We can use definition 1.20 to calculate interior products directly from their definition. In what follows we calculate the interior products of representative basis elements of a 3-space with a Euclidean metric. As expected, the scalar products e1 ∘ e1 and e1 ∘ e2 turn out to be 1 and 0 respectively.

$$e_1 \circ e_1 = e_1 \vee \overline{e_1} = e_1 \vee (e_2 \wedge e_3) = (e_1 \wedge e_2 \wedge e_3) \vee 1 = 1 \vee 1 = 1$$

$$e_1 \circ e_2 = e_1 \vee \overline{e_2} = e_1 \vee (e_3 \wedge e_1) = (e_1 \wedge e_3 \wedge e_1) \vee 1 = 0 \vee 1 = 0$$

Inner products of identical basis 2-elements are unity:

$$(e_1 \wedge e_2) \circ (e_1 \wedge e_2) = (e_1 \wedge e_2) \vee \overline{e_1 \wedge e_2} = (e_1 \wedge e_2) \vee e_3 = (e_1 \wedge e_2 \wedge e_3) \vee 1 = 1 \vee 1 = 1$$

Inner products of non-identical basis 2-elements are zero:

$$(e_1 \wedge e_2) \circ (e_2 \wedge e_3) = (e_1 \wedge e_2) \vee \overline{e_2 \wedge e_3} = (e_1 \wedge e_2) \vee e_1 = (e_1 \wedge e_2 \wedge e_1) \vee 1 = 0 \vee 1 = 0$$

If a basis 2-element contains a given basis 1-element, then their interior product is not zero:

$$(e_1 \wedge e_2) \circ e_1 = (e_1 \wedge e_2) \vee \overline{e_1} = (e_1 \wedge e_2) \vee (e_2 \wedge e_3) = (e_1 \wedge e_2 \wedge e_3) \vee e_2 = 1 \vee e_2 = e_2$$

If a basis 2-element does not contain a given basis 1-element, then their interior product is zero:

$$(e_1 \wedge e_2) \circ e_3 = (e_1 \wedge e_2) \vee \overline{e_3} = (e_1 \wedge e_2) \vee (e_1 \wedge e_2) = 0$$
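These basis computations can also be cross-checked against the determinant form of the interior product of two simple 2-elements, (a ∧ b) ∘ (c ∧ d) = (a ∘ c)(b ∘ d) − (a ∘ d)(b ∘ c), which follows from formula 1.24. A small Python illustration (not the GrassmannAlgebra package), assuming the Euclidean metric used above:

```python
# Interior products of simple 2-elements in a Euclidean 3-space, computed
# as 2x2 Gram determinants: (a^b).(c^d) = (a.c)(b.d) - (a.d)(b.c).

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def inner_2elements(a, b, c, d):
    return dot(a, c) * dot(b, d) - dot(a, d) * dot(b, c)

e1, e2, e3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]

print(inner_2elements(e1, e2, e1, e2))  # identical basis 2-elements give 1
print(inner_2elements(e1, e2, e2, e3))  # non-identical basis 2-elements give 0
```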

Expanding interior products

To expand interior products, we use the Interior Common Factor Theorem. This theorem shows how an interior product of a simple element a_m with another, not necessarily simple, element b_k of equal or lower grade may be expressed as a linear combination of the ν (= $\binom{m}{k}$) essentially different factors a_i (of grade m-k) of the simple element of higher degree.

$$a_m \circ b_k = \sum_{i=1}^{\nu} \left( a^i_k \circ b_k \right) a^i_{m-k} \qquad a_m = a^1_k \wedge a^1_{m-k} = a^2_k \wedge a^2_{m-k} = \dots = a^\nu_k \wedge a^\nu_{m-k} \tag{1.30}$$

For example, the Interior Common Factor Theorem may be used to prove a relationship involving the interior product of a 1-element x with the exterior product of two factors, each of which may not be simple. This relationship and the special cases that derive from it find application throughout the explorations in the rest of this book.

$$(a_m \wedge b_k) \circ x = (a_m \circ x) \wedge b_k + (-1)^m\, a_m \wedge (b_k \circ x) \tag{1.31}$$

The interior product of a bivector and a vector

Suppose x is a vector and B = x1 ∧ x2 is a simple bivector. The interior product of the bivector B with the vector x is the vector B ∘ x. This can be expanded by the Interior Common Factor Theorem, or the formula above derived from it, to give:

$$B = x_1 \wedge x_2$$

$$B \circ x = (x_1 \wedge x_2) \circ x = (x \circ x_1)\, x_2 - (x \circ x_2)\, x_1$$

Since B ∘ x is expressed as a linear combination of x1 and x2, it is clearly contained in the bivector B.

$$B \wedge (B \circ x) = 0$$

The resulting vector B ∘ x is also orthogonal to x. We can show this by taking its scalar product with x, or by using formula 1.24.

$$(B \circ x) \circ x = B \circ (x \wedge x) = 0$$

If $\hat{B}$ is the unit bivector of B, the projection $x_{\parallel}$ of x onto B is given by

$$x_{\parallel} = -\hat{B} \circ (\hat{B} \circ x)$$

The component $x_{\perp}$ of x orthogonal to B is given by

$$x_{\perp} = (\hat{B} \wedge x) \circ \hat{B}$$

[Figure: The interior product of a bivector with a vector]

These concepts may easily be extended to geometric entities of higher grade. We will explore this further in Chapter 6.
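The expansion B ∘ x = (x ∘ x1) x2 − (x ∘ x2) x1 can be verified directly in coordinates. The Python sketch below (a plain coordinate illustration under a Euclidean metric, not the book's Mathematica code) checks both claimed properties: the result lies in the bivector, and it is orthogonal to x.

```python
# Interior product of a simple bivector B = x1^x2 with a vector x,
# via the expansion B.x = (x.x1) x2 - (x.x2) x1 (Euclidean metric assumed).

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def bivector_dot_vector(x1, x2, x):
    return [dot(x, x1) * b - dot(x, x2) * a for a, b in zip(x1, x2)]

x1 = [1.0, 0.0, 0.0]   # B = x1^x2 spans the e1-e2 plane
x2 = [0.0, 2.0, 0.0]
x  = [3.0, 4.0, 5.0]

v = bivector_dot_vector(x1, x2, x)
print(v)          # [-8.0, 6.0, 0.0]: zero e3 component, so v lies in B
print(dot(v, x))  # 0.0: B.x is orthogonal to x
```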

The cross product

The cross or vector product of the three-dimensional vector calculus of Gibbs et al. [Gibbs 1928] corresponds to two operations in Grassmann's more general calculus. Taking the cross product of two vectors in three dimensions corresponds to taking the complement of their exterior product. However, whilst the usual cross product formulation is valid only for vectors in three dimensions, the exterior product formulation is valid for elements of any grade in any number of dimensions. Therefore the opportunity exists to generalize the concept.

Because our generalization reduces to the usual definition under the usual circumstances, we take the liberty of continuing to refer to the generalized cross product as, simply, the cross product.

The cross product of a_m and b_k is denoted a_m × b_k and is defined as the complement of their exterior product. The cross product of an m-element and a k-element is thus an (n-(m+k))-element.

$$a_m \times b_k = \overline{a_m \wedge b_k} \tag{1.32}$$

This definition preserves the basic property of the cross product: that the cross product of two elements is an element orthogonal to both, and reduces to the usual notion for vectors in a three-dimensional metric vector space. For 1-elements xi the definition has the following consequences, independent of the dimension of the space.

† The triple cross product is a 1-element in any number of dimensions.

$$(x_1 \times x_2) \times x_3 = (x_1 \wedge x_2) \circ x_3 = (x_3 \circ x_1)\, x_2 - (x_3 \circ x_2)\, x_1 \tag{1.33}$$

† The box product, or triple scalar product, is an (n-3)-element, and therefore a scalar only in three dimensions.

$$(x_1 \times x_2) \circ x_3 = \overline{x_1 \wedge x_2 \wedge x_3} \tag{1.34}$$

† The scalar product of two cross products is a scalar in any number of dimensions.

$$(x_1 \times x_2) \circ (x_3 \times x_4) = (x_1 \wedge x_2) \circ (x_3 \wedge x_4) \tag{1.35}$$

† The cross product of two cross products is a (4-n)-element, and therefore a 1-element only in three dimensions. It corresponds to the regressive product of two exterior products.

$$(x_1 \times x_2) \times (x_3 \times x_4) = (x_1 \wedge x_2) \vee (x_3 \wedge x_4) \tag{1.36}$$

The cross product of the three-dimensional vector calculus requires the space to have a metric, since it is defined to be orthogonal to its factors. Equation 1.36, however, shows that in this particular case the result does not require a metric.
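In three dimensions these identities reduce to familiar vector-algebra facts, which can be checked numerically. The Python sketch below (arbitrary integer vectors, standard 3-D cross and dot products) verifies the triple cross product identity 1.33 and the scalar product identity 1.35 (the latter is the classical Binet-Cauchy identity).

```python
# Check two cross-product identities in three dimensions:
#   (x1 x x2) x x3 = (x3.x1) x2 - (x3.x2) x1                    (1.33)
#   (x1 x x2).(x3 x x4) = (x1.x3)(x2.x4) - (x1.x4)(x2.x3)       (1.35)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

x1, x2, x3, x4 = [1, 2, 3], [4, 5, 6], [7, 8, 10], [1, 0, 2]

lhs = cross(cross(x1, x2), x3)
rhs = [dot(x3, x1) * x2[i] - dot(x3, x2) * x1[i] for i in range(3)]
print(lhs == rhs)  # True

lhs2 = dot(cross(x1, x2), cross(x3, x4))
rhs2 = dot(x1, x3) * dot(x2, x4) - dot(x1, x4) * dot(x2, x3)
print(lhs2 == rhs2)  # True
```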

1.7 Exploring Screw Algebra


To be completed

1.8 Exploring Mechanics


To be completed

1.9 Exploring Grassmann Algebra


To be completed

1.10 Exploring The Generalized Product



To be completed

1.11 Exploring Hypercomplex Algebra


To be completed

1.12 Exploring Clifford Algebra


To be completed

1.13 Exploring Grassmann Matrix Algebra


To be completed

1.14 The Various Types of Linear Product

Introduction
In this section we explore the different possible types of linear product axiom that can exist
between two elements of a linear space. Our exploration will take a different approach to that used
by Grassmann in his Ausdehnungslehre of 1862 (Chapter 2, Section 3: The various types of
product structure), but we will arrive at essentially the same conclusions: that there are just two
types of linear product axioms: one asserting the symmetry of the product operation, and the other
asserting the antisymmetry of the product operation.

Of course, this does not mean to imply that all linear products must be either symmetric or
antisymmetric! Only that, if there is an axiom relating the products of two arbitrary elements of a
linear space, then it must either define the product operation to be symmetric, or must define it to
be antisymmetric. Intuitively, this makes sense, corresponding to there being just two parities: even
and odd.

Suppose a and b are (any) two distinct non-zero elements of a given linear space. To help make the
development more concrete, we can conceive of them as 1-elements or vectors, but this is not
necessary. A linear product axiom involving a and b is defined as one that remains valid under the
substitutions

a → p a + q b        b → r a + s b

where p, q, r, and s are scalars, perhaps zero. We also assume distributivity and commutativity
with scalars.


We denote the product with a dot: a.b. Note that, while a and b must both belong to the same
linear space, we make no assertions as to where the product belongs.

There are two essentially different forms of product we can form from the two distinct elements a
and b: those involving an element with itself (a.a and b.b); and those involving distinct elements
(a.b and b.a).

For a product of an element with itself, there are just two distinct possibilities: either it is zero, or it
is non-zero. In what follows, we shall take each of these two cases and explore for any relations it
might imply on products of distinct elements.

Case 1 The product of an element with itself is zero

Suppose a·a = 0 for all a (thus including b) of the linear space. The assumption of linearity then implies that the product of any linear combination of a and b with itself is also zero. In particular

(a + b)·(a + b) = 0

Since we have assumed the product is distributive, we can expand it to give

(a + b)·(a + b) = a·a + a·b + b·a + b·b = 0

But since a·a = 0 and b·b = 0, this reduces to

a·b + b·a = 0

Conversely, if we suppose that a·b + b·a = 0 for all a and b, we can replace b with a to obtain 2 a·a = 0. Whence a·a = 0. Similarly b·b = 0. Thus for Case 1 we have

a·a = 0, b·b = 0  ⟺  a·b + b·a = 0        (1.37)

This means that, if a linear product is defined to be nilpotent (that is, a·a = 0), then necessarily it is antisymmetric. Conversely, if it is antisymmetric, it is also nilpotent.
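A concrete instance of a nilpotent linear product is the scalar "wedge" of two elements of a 2-dimensional linear space, p(a, b) = a1 b2 − a2 b1. The Python sketch below (an illustration of Case 1, not taken from the book) checks both nilpotency and the antisymmetry it forces, including the (a + b)·(a + b) expansion used in the argument above.

```python
# A nilpotent linear product: p(a, b) = a1*b2 - a2*b1 on pairs of scalars.
# Nilpotency p(a, a) = 0 forces antisymmetry p(a, b) + p(b, a) = 0.

def p(a, b):
    return a[0] * b[1] - a[1] * b[0]

a, b = (3, 5), (2, 7)
s = (a[0] + b[0], a[1] + b[1])       # the element a + b

print(p(a, a), p(b, b))              # 0 0: the product is nilpotent
print(p(s, s))                       # 0: (a+b).(a+b) = 0, as in the proof
print(p(a, b) + p(b, a))             # 0: hence a.b + b.a = 0 (antisymmetry)
```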

Case 2 The product of an element with itself is non-zero

The second case is a little more complex to explore. Suppose a·a is not equal to zero for all a (thus including b) of the linear space. We would like to see if this constraint implies any linear relationship between the products a·a, a·b, b·a and b·b.

The most general linear relationship between these products may be written (with scalars α, β, γ and δ):

A:    α a·a + β a·b + γ b·a + δ b·b = 0

First, replace a by −a to obtain

B:    α a·a − β a·b − γ b·a + δ b·b = 0

then add equations A and B to give

C:    2 α a·a + 2 δ b·b = 0

Since, for the case we are considering, the product of no element with itself is zero, we conclude that both α and δ must be zero. Thus, equation A becomes

D:    β a·b + γ b·a = 0

Now, in this equation, replace b by a to give

E:    (β + γ) a·a = 0

Again, since for this case a·a is not zero, we must have that β + γ is zero, that is, γ is equal to −β. Replace γ by −β in equation D to give

F:    β (a·b − b·a) = 0

Now, either β is zero, or a·b − b·a is zero. In the case β is zero we also have that γ is zero. Additionally, from equation C we have that both α and δ are zero. This condition leads to no linear axiom of the form A being possible.

The remaining possibility is that a·b − b·a = 0. Thus we can summarize the results as leading to either of the two possibilities

a·a ≠ 0, b·b ≠ 0  ⟹  either a·b − b·a = 0, or no linear product axiom exists        (1.38)

This means that if a linear product is defined such that the product of any non-zero element with itself is never zero, then necessarily either no linear relation is implied between products, or the product operation is symmetric, that is, a·b = b·a.

Ï Summary

In sum, there exist just two types of linear product axiom: one asserting the symmetry of the product operation, and the other asserting the antisymmetry of the product operation. This does not preclude linear products which have no axioms relating products of two elements.

Examples

Ï Symmetric products

Inner products These are symmetric for factors of (equal) arbitrary grade.


Scalar products These are a special case of inner product for factors of grade 1.

Exterior products These are symmetric for factors of (equal) even grade.

Algebraic products The ordinary algebraic product is symmetric. It is equivalent to the special
cases of the inner or exterior products of elements of zero grade.

Ï Antisymmetric products

Exterior products These are antisymmetric for factors of (equal) odd grade.

Regressive products These are antisymmetric for factors of equal grade whose parity differs from that of the dimension of the space.

Generalized products These are antisymmetric for factors of equal grade whose parity differs from that of the order of the product.

Ï Products defined as sums of symmetric and antisymmetric products


Two types of important product which have not yet been mentioned are the Clifford product, and
the hypercomplex product. These products both involve sums of symmetric and antisymmetric
products. Since the products are neither symmetric, nor antisymmetric, we might expect no
associated linear product axioms to exist for the Clifford and hypercomplex products.

To confirm this, we take the simplest case of the Clifford product of two 1-elements, and posit that there does exist a linear product axiom. Our expectation is that this will lead to a contradiction.

Consider the linear product axiom (with scalars α, β, γ and δ)

A:    α a·a + β a·b + γ b·a + δ b·b = 0

The Clifford product of two 1-elements is defined as the sum of their exterior product and their inner (scalar) product, so now let a·b = a∧b + a∘b.

B:    α (a∧a + a∘a) + β (a∧b + a∘b) + γ (b∧a + b∘a) + δ (b∧b + b∘b) = 0

Since the exterior and scalar product operations are distinct, leading to elements of different grade, we can write B as two separate linear product axioms:

C:    α (a∧a) + β (a∧b) + γ (b∧a) + δ (b∧b) = 0

D:    α (a∘a) + β (a∘b) + γ (b∘a) + δ (b∘b) = 0

From the symmetry and antisymmetry of these products, C and D simplify to

E:    (β − γ) (a∧b) = 0

F:    α (a∘a) + (β + γ) (a∘b) + δ (b∘b) = 0

From E we find that β must equal γ. Hence F becomes

G:    α (a∘a) + 2 β (a∘b) + δ (b∘b) = 0

Put a equal to −a in G and subtract the result from G to give 4 β (a∘b) = 0. Hence β must be zero, giving

H:    α (a∘a) + δ (b∘b) = 0

Now put b equal to a to get (α + δ) (a∘a) = 0, whence α must be equal to −δ, yielding finally

I:    a∘a − b∘b = 0

This is clearly a contradiction (as we suspected might be the case), hence there is no linear product axiom relating Clifford products of 1-elements.
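The mixed character of the Clifford product can be seen concretely. In the Python sketch below (an illustration, not the book's notation) the Clifford product of two plane vectors is stored as a pair: the symmetric scalar part a∘b and the antisymmetric bivector coefficient of a∧b. For generic vectors the product is neither symmetric nor antisymmetric, consistent with the conclusion above.

```python
# Clifford product of two 1-elements in the Euclidean plane, stored as
# (scalar part, bivector coefficient): ab = a.b + a^b.

def clifford(a, b):
    inner = a[0] * b[0] + a[1] * b[1]   # a.b, the symmetric part
    wedge = a[0] * b[1] - a[1] * b[0]   # coefficient of a^b, antisymmetric
    return (inner, wedge)

a, b = (1.0, 2.0), (3.0, 4.0)
ab, ba = clifford(a, b), clifford(b, a)

print(ab, ba)          # (11.0, -2.0) (11.0, 2.0): neither equal nor opposite
print(clifford(a, a))  # (5.0, 0.0): the square of a vector is the scalar a.a
```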

1.15 Terminology

Terminology
Grassmann algebra is really quite a simple structure, straightforwardly generated by introducing a
product operation onto the elements of a linear space to generate a suite of new linear spaces. If we
only wish to describe this algebra and its elements, the terminology can be quite compact and
consistent with accepted practice.

However in applications of the algebra, for example to geometry and physics, we may want to
work in an interpreted version of the algebra. This new interpretation may be as simple as viewing
one of the basis elements of the underlying linear space as an origin point, and the rest as vectors,
but the interpreted algebraic, and therefore terminological, distinctions are now significantly
increased. Whereas the exterior product on an algebraically uninterpreted basis multiplied the basis
elements into a single suite of higher grade elements, the exterior product on such an interpreted
basis multiplies the basis elements into two suites of interpreted higher grade elements - those
containing the origin as a factor, and those which do not.

This is of course more complex, and requires more terminology to make all the distinctions.
Deciding on the terminology to use however poses a challenge, because many of the distinctions
while new to some, are known to others by different names. For example, E. A. Milne in his
Vectorial Mechanics, although not mentioning Grassmann or the exterior product uses the term
'line vector' for the entity we model by the exterior product of a point with a vector. Robert Stawell
Ball in his A Treatise on the Theory of Screws, only briefly mentioning Grassmann, uses the word
'screw' for the entity we model by the sum of a 'line vector' and what is currently known as a
bivector.

It is not within my capabilities to extract complete consistency from historically precedent terminology in this highly applicable yet largely unexplored area of mathematics. I have had to adopt some compromises. These compromises have been directed by the desire that the terminology of the book be consistent, historically cognizant, modern, and intuitive. It is certainly not perfect.


One compromise, adopted to avoid tedious extra-locution, is to extend the term space in this book
to also mean a Grassmann algebra. I have done this because it seems to me that when we visualize
space, it is not only inhabited by points and vectors, but also by lines, planes, bivectors, trivectors,
and other multi-dimensional entities.

Ï Linear space

In this book a linear space is defined in the usual way as consisting of an abelian group under
addition, a field, and a (scalar) multiplication operation between their elements. The dimension of a
linear space is the number of elements in a basis of the space.

Ï Underlying linear space

The underlying linear space L_1 of a Grassmann algebra is the linear space which, together with the exterior product operation, generates the algebra. The field of L_1 is denoted L_0, and the dimension of L_1 by n (or, when using GrassmannAlgebra, ¯n or ¯D).

Ï Exterior linear space

An exterior linear space L_m is the linear space whose basis consists of all the essentially different exterior products of basis elements of L_1, m at a time. The dimension of L_m is therefore $\binom{n}{m}$. Its elements are called m-elements. The integer m is called the grade of L_m and its elements.

Exterior linear spaces and their elements

    Space     Grade   Basis                                         Element
    L_0       0       1                                             0-element
    L_1       1       e1, e2, e3, …, en                             1-element
    L_2       2       e1∧e2, e1∧e3, …, e(n-1)∧en                    2-element
    L_m       m       e1∧e2∧…∧em, …, e(n-m+1)∧…∧e(n-1)∧en           m-element
    L_n       n       e1∧e2∧…∧en                                    n-element


Ï Grassmann algebra

A Grassmann algebra L is the direct sum L_0 ⊕ L_1 ⊕ L_2 ⊕ … ⊕ L_m ⊕ … ⊕ L_n of an underlying linear space L_1, its field L_0, and the exterior linear spaces L_m (2 ≤ m ≤ n). As well as being an algebra, L is a linear space of dimension 2ⁿ, whose basis consists of all the basis elements of its exterior linear spaces. Its elements of grade m are called m-elements, and its multigraded elements are called L-elements. An m-element is called simple if it can be expressed as the exterior product of 1-element factors.
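The dimension count is worth making explicit: summing the dimensions $\binom{n}{m}$ of the exterior linear spaces over all grades m gives 2ⁿ. A one-line Python check:

```python
# The Grassmann algebra on an n-dimensional underlying linear space has
# dimension C(n,0) + C(n,1) + ... + C(n,n) = 2^n.
from math import comb

for n in range(1, 8):
    assert sum(comb(n, m) for m in range(n + 1)) == 2 ** n
print("dimension check passed for n = 1..7")
```

For example, a 3-dimensional underlying space yields an algebra of dimension 1 + 3 + 3 + 1 = 8.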

Ï Space
In this book, the term space is another term for a Grassmann algebra. The term n-space is another
term for a Grassmann algebra whose underlying linear space is of dimension n. By abuse of
terminology we will refer to the dimension and basis of an n-space as that of its underlying linear
space. In GrassmannAlgebra you can declare that you want to work in an n-space by entering ¯!n .

The space of a simple m-element is the m-space whose underlying linear space consists of all the 1-elements in the m-element. A 1-element is said to be in a simple m-element if and only if their exterior product is zero. A basis of the space of a simple m-element may be taken as any set of m independent factors of the m-element. An n-space and the space of its n-element are identical.

Ï Vector space

A vector space is a space in which the elements of the underlying linear space have been
interpreted as vectors. This interpretation views a vector as an (unlocated) direction and
graphically depicts it as an arrow. An m-element in a vector space is called an m-vector or
multivector. A 2-vector is also called a bivector. A 3-vector is also called a trivector. A simple m-
vector is viewed as a multi-dimensional direction, or m-direction.

An element of a vector space may also be called a geometric entity (or simply, entity) to emphasize
that it has a geometric interpretation. An m-vector is an entity of grade m. Vectors are 1-entities,
bivectors are 2-entities.

A vector n-space is a vector space whose underlying linear space is of dimension n. A vector 3-
space is thus richer than the usual 3-dimensional vector algebra, since it also contains bivectors and
trivectors, as well as the exterior product operation.


A vector n-space and its entities

    Space     Grade   Basis                                         Entity
    L_0       0       1                                             scalar
    L_1       1       e1, e2, e3, …, en                             vector
    L_2       2       e1∧e2, e1∧e3, …, e(n-1)∧en                    bivector
    L_m       m       e1∧e2∧…∧em, …, e(n-m+1)∧…∧e(n-1)∧en           m-vector
    L_n       n       e1∧e2∧…∧en                                    n-vector

The space of a simple m-vector is the m-space whose underlying linear space consists of all the
vectors in the m-vector.

Ï Bound space

A bound space is a vector space to whose basis has been added an origin. The origin is an element
with the geometric interpretation of a point. A bound n-space is a vector n-space to whose basis
has been added an origin. The symbol n refers to the number of vectors in the basis of the bound
space, thus making the dimension of a bound n-space n+1. In GrassmannAlgebra you can declare
that you want to work in a bound n-space by entering ¯"n . You can determine the dimension of a
space by entering ¯D, which in this case would return n+1. Thus the underlying linear space of a
bound n-space is of dimension n+1.

An element of a bound space may also be called a geometric entity (or simply, entity) to emphasize
it has a geometric interpretation. An m-entity is an entity of grade m. Vectors and points are 1-
entities. Bivectors and bound vectors are 2-entities.

A bound n-space is thus richer than a vector n-space, since as well as containing vectorial
(unlocated) entities (vectors, bivectors, …), it also contains bound (located) entities (points, bound
vectors, …) and sums of these. In contradistinction to the term bound space, a vector space may
sometimes be called a free space. A bound 3-space is a closer algebraic model to physical 3-space
than a vector 3-space, even though its underlying linear space is of four dimensions.


A bound n-space and its entities

    Space     Grade   Basis                                          Free Entity   Bound Entity
    L_0       0       1                                              scalar
    L_1       1       𝒪, e1, e2, e3, …, en                           vector        weighted point
    L_2       2       𝒪∧e1, 𝒪∧e2, …, e(n-1)∧en                       bivector      bound vector
    L_m       m       𝒪∧e1∧e2∧…∧e(m-1), …, e(n-m+1)∧…∧e(n-1)∧en      m-vector      bound (m-1)-vector
    L_n       n       𝒪∧e1∧e2∧…∧e(n-1), …, e1∧e2∧…∧en                n-vector      bound (n-1)-vector
    L_(n+1)   n+1     𝒪∧e1∧e2∧…∧en                                   bound n-vector

(Here 𝒪 denotes the origin element of the bound space.)

The space of a bound simple m-vector is the (m+1)-space whose underlying linear space consists of
all the points and vectors in the bound simple m-vector.

Ï Geometric objects

The geometric object of a bound simple entity is the set of all points in the entity, that is, all points
whose exterior product with the bound simple entity is zero.

The geometric object of a bound scalar or weighted point is the point itself. The geometric object
of a bound vector is a line (an infinite set of points). The geometric object of a bound simple
bivector is a plane (a doubly infinite set of points).

Since the properties of geometric objects are well modelled algebraically by their corresponding
entities, we find it convenient to compute with geometric objects by computing with their
corresponding bound simple entities; thus for example computing the intersection of two lines by
computing the regressive product of two bound vectors.


Geometric objects and their corresponding entities

    Space     Grade   Geometric Object   Entity
    L_1       1       point              bound scalar
    L_2       2       line               bound vector
    L_3       3       plane              bound simple bivector
    L_m       m       (m-1)-plane        bound simple (m-1)-vector
    L_n       n       hyperplane         bound (n-1)-vector
    L_(n+1)   n+1     n-plane            bound n-vector

Ï Congruence and orientation

Two elements are said to be congruent if one is a scalar multiple (not zero) of the other.

The scalar multiple associated with two congruent elements is called the congruence factor.

Two congruent elements are said to be of opposite orientation if their congruence factor is negative.

Ï Points, vectors and carriers

The origin of a bound space is the basis element designated as the origin.

A vector in a bound space is a 1-element that does not involve the origin.

A point is the sum of the origin with a vector. This vector is called the position vector of the point.

A weighted point is a scalar multiple of a point. The scalar multiple is called the weight of the point.

The sum of a point and a vector is another point. The vector may be said to carry the first point
into the second.

The sum of a bound simple m-vector and a simple (m+1)-vector containing the m-vector is another
bound simple m-vector. The simple (m+1)-vector may be said to carry the first bound simple m-
vector into the second.

Ï Position, direction and sense

Congruent weighted points are said to define the same position.

Congruent vectors are said to define the same direction.


Congruent vectors of opposite orientation are also said to have opposite sense.

Congruent simple m-vectors are said to define the same m-direction.

Congruent bound simple elements are said to define the same position and direction.

A bound simple m-vector that has been carried to a new bound simple m-vector by the addition of
an (m+1)-vector may be said to have been carried to a new position. The m-direction of the bound
simple m-vector is unchanged.

The geometric object of a bound simple entity may be said to have the position and direction of its
entity.

The notions of position, direction, and sense are geometric interpretations on the algebraic
elements.

1.16 Summary
To be completed


2 The Exterior Product



2.1 Introduction
The exterior product is the natural fundamental product operation for elements of a linear space.
Although it is not a closed operation (that is, the product of two elements is not itself an element of
the same linear space), the products it generates form a series of new linear spaces, the totality of
which can be used to define a true algebra, which is closed.

The exterior product is naturally and fundamentally connected with the notion of linear
dependence. Several 1-elements are linearly dependent if and only if their exterior product is zero.
All the properties of determinants flow naturally from the simple axioms of the exterior product.
The notion of 'exterior' is equivalent to the notion of linear independence, since elements which are
truly exterior to one another (that is, not lying in the same space) will have a non-zero product. An
exterior product of 1-elements has a straightforward geometric interpretation as the multi-
dimensional equivalent to the 1-elements from which it is constructed. And if the space possesses a
metric, its measure or magnitude may be interpreted as a length, area, volume, or hyper-volume
according to the grade of the product.

However, the exterior product does not require the linear space to possess a metric. This is in direct
contrast to the three-dimensional vector calculus in which the vector (or cross) product does
require a metric, since it is defined as the vector orthogonal to its two factors, and orthogonality is
a metric concept. Some of the results which use the cross product can equally well be cast in terms
of the exterior product, thus avoiding unnecessary assumptions.

We start the chapter with an informal discussion of the exterior product, and then collect the
axioms for exterior linear spaces together more formally into a set of 12 axioms which combine
those of the underlying field and underlying linear space with the specific properties of the exterior
product. In later chapters we will derive equivalent sets for the regressive and interior products, to
which we will often refer.

Next, we pin down the linear space notions that we have introduced axiomatically by introducing a
basis onto the underlying (primary) linear space and showing how this can induce a basis onto each
of the other linear spaces generated by the exterior product. A constantly useful partner to a basis
element of any of these exterior linear spaces is its cobasis element. The exterior product of a basis
element with its cobasis element always gives the basis n-element of the algebra. We develop the
notion of cobasis following that of basis.

The next three sections look at some standard topics in linear algebra from a Grassmannian
viewpoint: determinants, cofactors, and the solution of systems of linear equations. We show that
all the well-known properties of determinants, cofactors, and linear equations proceed directly
from the properties of the exterior product.

The next two sections discuss two concepts dependent on the exterior product that have no direct
counterpart in standard linear algebra: simplicity and exterior division. If an element is simple it
can be factorized into 1-elements. If a simple element is divided by another simple element
contained in it, the result is not unique. Both these properties will find application later in our
explorations of geometry.

At the end of the chapter, we take the opportunity to introduce the concepts of span and cospan of a
simple element, and the concept that we can define multilinear forms involving the spans and
cospans of an element which are invariant to any refactorization of the element. Following this we
explore their application to calculating the union and intersection of two simple elements, and the
factorization of a simple element. These applications involve only the exterior product. But we will
continue to use the concepts of span, cospan and multilinear forms throughout the book, where
they will generally involve other product operations.

2.2 The Exterior Product

Basic properties of the exterior product


Let L₀ denote the field of real numbers. Its elements, called 0-elements (or scalars), will be denoted by a, b, c, … . Let L₁ denote a linear space of dimension n over the field L₀. Its elements, called 1-elements, will be denoted by x, y, z, … .

We call L₁ a linear space rather than a vector space, and its elements 1-elements rather than vectors, because we will be interpreting its elements as points as well as vectors.

A second linear space, denoted L₂, may be constructed as the space of sums of exterior products of 1-elements taken two at a time. The exterior product operation is denoted by ∧, and has the following properties:

a (x ∧ y) = (a x) ∧ y    2.1

x ∧ (y + z) = x ∧ y + x ∧ z    2.2

(y + z) ∧ x = y ∧ x + z ∧ x    2.3

x ∧ x = 0    2.4

An important property of the exterior product (which is sometimes taken as an axiom) is the anti-
symmetry of the product of two 1-elements.

x ∧ y = -y ∧ x    2.5

This may be proved from the distributivity and nilpotency axioms since:


(x + y) ∧ (x + y) = 0
⟹ x ∧ x + x ∧ y + y ∧ x + y ∧ y = 0
⟹ x ∧ y + y ∧ x = 0
⟹ x ∧ y = -y ∧ x
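For readers who like to check such derivations computationally, the axioms above are easy to mechanize. The following Python sketch is an illustration only: the `wedge` function and the dictionary representation are our own devices, not part of the GrassmannAlgebra package. A multivector is stored as a dictionary from sorted tuples of basis indices to coefficients, and the exterior product is implemented directly from the distributivity and nilpotency properties.

```python
def wedge(a, b):
    # Exterior product of two multivectors, each stored as a dict
    # {sorted tuple of basis indices: coefficient}; () keys the scalar part.
    out = {}
    for u, x in a.items():
        for v, y in b.items():
            idx = u + v
            if len(set(idx)) < len(idx):
                continue  # repeated 1-element factor: nilpotency gives zero
            sign, perm, srt = 1, list(idx), []
            while perm:  # sort the indices, tracking the permutation's sign
                i = perm.index(min(perm))
                sign *= (-1) ** i
                srt.append(perm.pop(i))
            out[tuple(srt)] = out.get(tuple(srt), 0) + sign * x * y
    return {k: c for k, c in out.items() if c}

# x and y as the basis 1-elements e1 and e2
x, y = {(1,): 1}, {(2,): 1}

assert wedge(x, x) == {}                  # nilpotency: x∧x = 0
assert wedge(x, y) == {(1, 2): 1}         # x∧y
assert wedge(y, x) == {(1, 2): -1}        # y∧x = -(x∧y): anti-symmetry

# the derivation's starting point: (x+y)∧(x+y) = 0
s = {(1,): 1, (2,): 1}
assert wedge(s, s) == {}
```

The anti-symmetry thus emerges from nilpotency and distributivity alone, exactly as in the derivation above.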

An element of L₂ will be called a 2-element (of grade 2) and is denoted by a kernel letter with a '2' written below. For example:

α₂ = x ∧ y + z ∧ w + …

A simple 2-element is the exterior product of two 1-elements.

It is important to note the distinction between a 2-element and a simple 2-element (a distinction of no consequence in L₁, where all elements are simple). A 2-element is in general a sum of simple 2-elements, and is not generally expressible as the product of two 1-elements (except where L₁ is of dimension n ≤ 3). The structure of L₂, whilst still that of a linear space, is thus richer than that of L₁; that is, it has more properties.

Ï Example: 2-elements in a three-dimensional space are simple

By way of example, the following shows that when L₁ is of dimension 3, every element of L₂ is simple.

Suppose that L₁ has basis e₁, e₂, e₃. Then a basis of L₂ is the set of all essentially different (linearly independent) products of basis elements of L₁ taken two at a time: e₁∧e₂, e₂∧e₃, e₃∧e₁. (The product e₁∧e₂ is not considered essentially different from e₂∧e₁ in view of the anti-symmetry property.)

Let a general 2-element α₂ be expressed in terms of this basis as:

α₂ = a e₁∧e₂ + b e₂∧e₃ + c e₃∧e₁

Without loss of generality, suppose a ≠ 0. Then α₂ can be recast in the form below, thus proving the proposition.

α₂ = a (e₁ - (b/a) e₃) ∧ (e₂ - (c/a) e₃)
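The factorization just given can be verified numerically. We assume the same toy dictionary representation sketched earlier (again, `wedge` is an illustrative helper of our own, not a GrassmannAlgebra function), check it for the sample coefficients a = 2, b = 3, c = 5, and store e₃∧e₁ as -(e₁∧e₃):

```python
from fractions import Fraction as F

def wedge(a, b):
    # Exterior product of two multivectors, each stored as a dict
    # {sorted tuple of basis indices: coefficient}; () keys the scalar part.
    out = {}
    for u, x in a.items():
        for v, y in b.items():
            idx = u + v
            if len(set(idx)) < len(idx):
                continue  # repeated 1-element factor: nilpotency gives zero
            sign, perm, srt = 1, list(idx), []
            while perm:  # sort the indices, tracking the permutation's sign
                i = perm.index(min(perm))
                sign *= (-1) ** i
                srt.append(perm.pop(i))
            out[tuple(srt)] = out.get(tuple(srt), 0) + sign * x * y
    return {k: c for k, c in out.items() if c}

a, b, c = F(2), F(3), F(5)
# alpha = a e1∧e2 + b e2∧e3 + c e3∧e1, storing e3∧e1 as -(e1∧e3)
alpha = {(1, 2): a, (2, 3): b, (1, 3): -c}
# the claimed factors: a(e1 - (b/a)e3) = a e1 - b e3, and e2 - (c/a)e3
f1 = {(1,): a, (3,): -b}
f2 = {(2,): F(1), (3,): -c / a}
assert wedge(f1, f2) == alpha             # alpha is indeed simple
```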


Ï Historical Note

In the Ausdehnungslehre of 1844 Grassmann denoted the exterior product of two symbols α and β by simple concatenation, viz. α β; whilst in the Ausdehnungslehre of 1862 he enclosed them in square brackets, viz. [α β]. This notation has survived in the three-dimensional vector calculus as the 'box' product [α β γ] used for the triple scalar product. Modern usage denotes the exterior product operation by the wedge ∧, thus α∧β. Amongst other writers, Whitehead [1898] used the 1844 version whilst Forder [1941] and Cartan [1922] followed the 1862 version.

Declaring scalar and vector symbols in GrassmannAlgebra


The GrassmannAlgebra software needs to know which symbols it will treat as scalar (0-element)
symbols, and which symbols it will treat as vector (1-element) symbols. These lists must be
distinct (disjoint), as they have different interpretations in the algebra. It loads with default values
for lists of these symbols.

The default settings are:


ScalarSymbols {a,b,c,d,e,f,g,h}
VectorSymbols {p,q,r,s,t,u,v,w,x,y,z}

If you have just loaded the GrassmannAlgebra package, you will see these symbols listed in the
status panes at the top of the palette. (You can load the package by clicking the red title button on
the GrassmannAlgebra Palette, and get help on any of the topics or commands by clicking the
associated ? button.)

To declare your own list of scalar symbols, enter DeclareScalarSymbols[list].

DeclareScalarSymbols[{α, β, γ}]
{α, β, γ}

You will now see these new symbols reflected in the Current ScalarSymbols pane.

Now, if you enter ScalarSymbols you see the new list:

ScalarSymbols
{α, β, γ}

You can always return to the default declaration by entering the command DeclareDefaultScalarSymbols or its alias ¯S. (You can enter this with the keystroke combination Esc *5 Esc S, or by using the 5-star symbol on the palette.)

¯S
{a, b, c, d, e, f, g, h}

The same procedures apply mutatis mutandis for vector symbols, with the word "Scalar" replaced by the word "Vector" and ¯S by ¯V.

† As well as symbols in the lists ScalarSymbols and VectorSymbols, you can also include
patterns. Any expression which conforms to such a pattern is considered to be a symbol of the
respective type. For further information, check the Preferences section in the palette.

† Note that the five-pointed star symbol ¯ (entered as Esc *5 Esc) is used in GrassmannAlgebra as the first character in each of its aliases to commonly used commands, objects or functions.

† You can reset all the default declarations (ScalarSymbols, VectorSymbols, Basis and
Metric) by entering ¯A, which you can access from the palette. We will do this often
throughout the book to ensure we are starting our computations from the default environment.

Entering exterior products


The exterior product is symbolized by the wedge '∧'. This may be entered either by typing \[Wedge], by typing Esc ^ Esc, or by clicking on the ∧ button on the GrassmannAlgebra palette.

As an infix operator, x∧y∧z∧… is interpreted as Wedge[x,y,z,…].

A quick way to enter x∧y is to type x, click on ∧ in the palette, then type y.

Other ways are to type x Esc ^ Esc y, or click the □∧□ button on the palette. The first placeholder will be selected automatically for immediate typing. To move to the next placeholder, press the tab key. To produce a multifactored exterior product, click the palette button as many times as required. To take the exterior product with an existing expression, select the expression and click the □∧□ button.

For example, to type (x + y)∧(x + z), first type x + y and select it, click the □∧□ button, then type (x + z).

† For further information, check the Expression Composition section in the palette.

2.3 Exterior Linear Spaces

Composing m-elements
In the previous section the exterior product was introduced as an operation on the elements of the linear space L₁ to produce elements belonging to a new space L₂. Just as L₂ was composed from sums of exterior products of elements of L₁ two at a time, so Lₘ may be composed from sums of exterior products of elements of L₁ m at a time.

A simple m-element is a product of the form:

x₁ ∧ x₂ ∧ … ∧ xₘ    xᵢ ∈ L₁

An m-element is a linear combination of simple m-elements. It is denoted by a kernel letter with an 'm' written below:


αₘ = a₁ (x₁ ∧ x₂ ∧ … ∧ xₘ) + a₂ (y₁ ∧ y₂ ∧ … ∧ yₘ) + …    aᵢ ∈ L₀

The number m is called the grade of the m-element.

The exterior linear space L₀ has essentially only one element in its basis, which we take as the unit 1. All other elements of L₀ are scalar multiples of this basis element.

The exterior linear space Lₙ (where n is the dimension of L₁) also has essentially only one element in its basis: e₁ ∧ e₂ ∧ … ∧ eₙ. All other elements of Lₙ are scalar multiples of this basis element.

Any m-element, where m > n, is zero, since there are no more than n independent elements
available from which to compose it, and the nilpotency property causes it to vanish.

Composing elements automatically


You can automatically compose an element by using ComposeSimpleForm. For example a
general element in the algebra of a 3-space may be composed by entering

ComposeSimpleForm[a x + b x + c x + d x] (where the four x's carry the grades 0, 1, 2 and 3 written below)

a x₀ + b x₁ + c x₂∧x₃ + d x₄∧x₅∧x₆

GrassmannAlgebra understands that the new symbols generated are vector symbols. You can
verify this by checking the Current VectorSymbols status pane in the palette, or by entering
VectorSymbols.

VectorSymbols
{p, q, r, s, t, u, v, w, x, y, z, x₁, x₂, x₃, x₄, x₅, x₆}

† For further information, check the Expression Composition section in the palette.

Spaces and congruence


In this book, the term space is another term for a Grassmann algebra. The term n-space is another
term for a Grassmann algebra whose underlying linear space is of dimension n. By abuse of
terminology we will refer to the dimension and basis of an n-space as that of its underlying linear
space. In GrassmannAlgebra you can declare that you want to work in an n-space by entering ¯!n .

The space of a simple m-element is the m-space whose underlying linear space consists of all the 1-
elements in the m-element. A basis of the space of a simple m-element may be taken as any set of
m independent factors of the m-element. An n-space and the space of its n-element are identical.

A 1-element a is said to belong to, to be contained in, or simply, to be in a simple m-element αₘ if and only if their exterior product is zero, that is a ∧ αₘ = 0.


We may also say that a is in the space of αₘ, or that αₘ defines its own space.

A simple element βₖ is said to be contained in another simple element αₘ if and only if the space of βₖ is a subset of that of αₘ.

We say that two elements are congruent if one is a scalar multiple of the other, and denote the relationship with the symbol ≡. For example we may write:

αₘ ≡ a αₘ

We can therefore say that if two elements are congruent, then their spaces are the same.

Note that whereas several congruent elements are linearly dependent, several linearly dependent
elements are not necessarily congruent.

When we have one element equal to a scalar multiple of another, αₘ = a βₘ say, we may sometimes take the liberty of writing the scalar multiple as a quotient of the two elements:

a = αₘ / βₘ

These notions will be encountered many times in the rest of the book.

† An overview of the terminology used in this book can found at the end of Chapter 1.

The associativity of the exterior product


The exterior product is associative in all groups of (adjacent) factors. For example:

(x₁∧x₂) ∧ x₃ ∧ x₄ = x₁ ∧ (x₂∧x₃) ∧ x₄ = (x₁∧x₂∧x₃) ∧ x₄ = …

Hence

(αₘ ∧ βₖ) ∧ γₚ = αₘ ∧ (βₖ ∧ γₚ)    2.6

Thus the brackets may be omitted altogether.

From this associativity together with the anti-symmetric property of 1-elements it may be shown
that the exterior product is anti-symmetric in all (1-element) factors. That is, a transposition of any
two 1-element factors changes the sign of the product. For example:

x₁∧x₂∧x₃∧x₄ = -x₃∧x₂∧x₁∧x₄ = x₃∧x₄∧x₁∧x₂ = …

Furthermore, from the nilpotency axiom, a product with two identical 1-element factors is zero.
For example:


x₁∧x₂∧x₃∧x₂ = 0

Ï Example: Non-simple elements are not generally nilpotent

It should be noted that for simple elements αₘ, αₘ ∧ αₘ = 0, but that non-simple elements do not necessarily possess this property, as the following example shows.

Suppose L₁ is of dimension 4 with basis e₁, e₂, e₃, e₄. Then the following exterior product of identical elements is not zero:

(e₁∧e₂ + e₃∧e₄) ∧ (e₁∧e₂ + e₃∧e₄) = 2 e₁∧e₂∧e₃∧e₄
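This computation is easily confirmed with the same toy Python representation used earlier (the `wedge` helper is illustrative only, not part of the GrassmannAlgebra package):

```python
def wedge(a, b):
    # Exterior product of two multivectors, each stored as a dict
    # {sorted tuple of basis indices: coefficient}; () keys the scalar part.
    out = {}
    for u, x in a.items():
        for v, y in b.items():
            idx = u + v
            if len(set(idx)) < len(idx):
                continue  # repeated 1-element factor: nilpotency gives zero
            sign, perm, srt = 1, list(idx), []
            while perm:  # sort the indices, tracking the permutation's sign
                i = perm.index(min(perm))
                sign *= (-1) ** i
                srt.append(perm.pop(i))
            out[tuple(srt)] = out.get(tuple(srt), 0) + sign * x * y
    return {k: c for k, c in out.items() if c}

B = {(1, 2): 1, (3, 4): 1}                # e1∧e2 + e3∧e4, not simple
assert wedge(B, B) == {(1, 2, 3, 4): 2}   # B∧B = 2 e1∧e2∧e3∧e4, not zero
S = {(1, 2): 1}                           # a simple 2-element
assert wedge(S, S) == {}                  # simple elements do square to zero
```

The two cross terms e₁∧e₂∧e₃∧e₄ and e₃∧e₄∧e₁∧e₂ are equal (an even permutation relates them), which is why the square survives with coefficient 2.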

Transforming exterior products

Ï Expanding exterior products

To expand an exterior product, you can use the function GrassmannExpand. For example, to
expand the product in the previous example:

X = GrassmannExpand[(e₁∧e₂ + e₃∧e₄) ∧ (e₁∧e₂ + e₃∧e₄)]
e₁∧e₂∧e₁∧e₂ + e₁∧e₂∧e₃∧e₄ + e₃∧e₄∧e₁∧e₂ + e₃∧e₄∧e₃∧e₄

An alias for GrassmannExpand is ¯#, which you can find on the GrassmannAlgebra palette.

Note that GrassmannExpand does not simplify the expression.

Ï Simplifying exterior products

To simplify the expression you would use the GrassmannSimplify function. An alias for
GrassmannSimplify is ¯$, which you can also find on the palette.

However in this case, you must first tell GrassmannAlgebra that you have chosen a 4-space with basis {e₁, e₂, e₃, e₄}, otherwise it will not know that the eᵢ are basis 1-elements. You can do that by entering the alias ¯!4. (We will discuss the detail of declaring bases in a later section.)

¯!4
{e₁, e₂, e₃, e₄}

Now GrassmannSimplify will simplify the expanded product.

¯$[X]
2 e₁∧e₂∧e₃∧e₄

Ï Simplifying and expanding exterior products


To expand and then simplify the expression in one operation, you can use the single function
GrassmannExpandAndSimplify (which has the alias ¯%).

¯%[(e₁∧e₂ + e₃∧e₄) ∧ (e₁∧e₂ + e₃∧e₄)]
2 e₁∧e₂∧e₃∧e₄

† Note that you can also use these functions to expand and/or simplify any Grassmann expression.

† For further information, check the Expression Transformation section in the palette.

2.4 Axioms for Exterior Linear Spaces

Summary of axioms
The axioms for exterior linear spaces are summarized here for future reference. They are a
composite of the requisite field, linear space, and exterior product axioms, and may be used to
derive the properties discussed informally in the preceding sections. Composite statements are
given in list form.

Ï ∧1: The sum of m-elements is itself an m-element.

{αₘ ∈ Lₘ, βₘ ∈ Lₘ} ⟹ {αₘ + βₘ ∈ Lₘ}    2.7

Ï ∧2: Addition of m-elements is associative.

(αₘ + βₘ) + γₘ = αₘ + (βₘ + γₘ)    2.8

Ï ∧3: m-elements have an additive identity (zero element).

{∃ 0ₘ, 0ₘ ∈ Lₘ : αₘ = 0ₘ + αₘ}    2.9


Ï ∧4: m-elements have an additive inverse.

{∃ -αₘ, -αₘ ∈ Lₘ : 0ₘ = αₘ + (-αₘ)}    2.10

Ï ∧5: Addition of m-elements is commutative.

αₘ + βₘ = βₘ + αₘ    2.11

Ï ∧6: The exterior product of an m-element and a k-element is an (m+k)-element.

{αₘ ∈ Lₘ, βₖ ∈ Lₖ} ⟹ {αₘ ∧ βₖ ∈ Lₘ₊ₖ}    2.12

Ï ∧7: The exterior product is associative.

(αₘ ∧ βₖ) ∧ γⱼ = αₘ ∧ (βₖ ∧ γⱼ)    2.13

Ï ∧8: There is a unit scalar which acts as an identity under the exterior product.

{∃ 1, 1 ∈ L₀ : αₘ = 1 ∧ αₘ}    2.14

Ï ∧9: Non-zero scalars have inverses with respect to the exterior product.

{∃ a⁻¹, a⁻¹ ∈ L₀ : 1 = a ∧ a⁻¹, for all a ∈ L₀ with a ≠ 0}    2.15


Ï ∧10: The exterior product of elements of odd grade is anti-commutative.

αₘ ∧ βₖ = (-1)^(m k) βₖ ∧ αₘ    2.16

Ï ∧11: Additive identities act as multiplicative zero elements under the exterior product.

{∃ 0ₖ, 0ₖ ∈ Lₖ : 0ₖ ∧ αₘ = 0ₖ₊ₘ}    2.17

Ï ∧12: The exterior product is both left and right distributive under addition.

(αₘ + βₘ) ∧ γₖ = αₘ ∧ γₖ + βₘ ∧ γₖ
αₘ ∧ (βₖ + γₖ) = αₘ ∧ βₖ + αₘ ∧ γₖ    2.18

Ï ∧13: Scalar multiplication commutes with the exterior product.

αₘ ∧ (a βₖ) = (a αₘ) ∧ βₖ = a (αₘ ∧ βₖ)    2.19

Ï A convention for the zeros

It can be seen from the above axioms that each of the linear spaces has its own zero. Apart from
grade, these zeros all have the same properties, so that for simplicity in computations we will
denote them all by the one symbol 0.

However, this also means that we cannot determine the grade of this symbol 0 from the symbol
alone. In practice, we overcome this problem by defining the grade of 0 to be the (undefined)
symbol ¯0.

Grassmann algebras
Under the foregoing axioms it may be directly shown that:

1. L₀ is a field.
2. The Lₘ are linear spaces over L₀.
3. The direct sum of the Lₘ is an algebra.

This algebra is called a Grassmann algebra. Its elements are sums of elements from the Lₘ, thus allowing closure over both addition and exterior multiplication.

For example, we can expand and simplify the product of two elements of the algebra to give
another element.

¯%[(1 + 2 x + 3 x∧y + 4 x∧y∧z) ∧ (1 + 2 x + 3 x∧y + 4 x∧y∧z)]
1 + 4 x + 6 x∧y + 8 x∧y∧z
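The same expansion can be checked independently. In the toy Python representation used earlier (an illustration only, not the package's own machinery), x, y, z become basis indices 1, 2, 3 and the scalar part is keyed by the empty tuple:

```python
def wedge(a, b):
    # Exterior product of two multivectors, each stored as a dict
    # {sorted tuple of basis indices: coefficient}; () keys the scalar part.
    out = {}
    for u, x in a.items():
        for v, y in b.items():
            idx = u + v
            if len(set(idx)) < len(idx):
                continue  # repeated 1-element factor: nilpotency gives zero
            sign, perm, srt = 1, list(idx), []
            while perm:  # sort the indices, tracking the permutation's sign
                i = perm.index(min(perm))
                sign *= (-1) ** i
                srt.append(perm.pop(i))
            out[tuple(srt)] = out.get(tuple(srt), 0) + sign * x * y
    return {k: c for k, c in out.items() if c}

# X = 1 + 2x + 3 x∧y + 4 x∧y∧z, with x, y, z as indices 1, 2, 3
X = {(): 1, (1,): 2, (1, 2): 3, (1, 2, 3): 4}
# every product of two non-scalar terms shares the factor x and so vanishes
assert wedge(X, X) == {(): 1, (1,): 4, (1, 2): 6, (1, 2, 3): 8}
```

This reproduces 1 + 4x + 6 x∧y + 8 x∧y∧z, as computed by ¯% above.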

A Grassmann algebra is also a linear space of dimension 2ⁿ, where n is the dimension of the underlying linear space L₁. We will sometimes refer to the Grassmann algebra whose underlying linear space is of dimension n as the Grassmann algebra of n-space.

Grassmann algebras will be discussed further in Chapter 9: Exploring Grassmann Algebra.

On the nature of scalar multiplication

The anti-commutativity axiom ∧10 for general elements states that:


αₘ ∧ βₖ = (-1)^(m k) βₖ ∧ αₘ

If one of these factors is a scalar (βₖ = a, say; k = 0), the axiom reduces to:

αₘ ∧ a = a ∧ αₘ

Since by axiom ∧6, each of these terms is an m-element, we may permit the exterior product to subsume the normal field multiplication. Thus, if a is a scalar, a ∧ αₘ is equivalent to a αₘ. The latter (conventional) notation will usually be adopted.

In the usual definitions of linear spaces no discussion is given to the nature of the product of a
scalar and an element of the space. A notation is usually adopted (that is, the omission of the
product sign) that leads one to suppose this product to be of the same nature as that between two
scalars. From the axioms above it may be seen that both the product of two scalars and the product
of a scalar and an element of the linear space may be interpreted as exterior products.

a ∧ αₘ = αₘ ∧ a = a αₘ    a ∈ L₀    2.20

2009 9 3
Grassmann Algebra Book.nb 82

Factoring scalars
In GrassmannAlgebra scalars can be factored out of a product by GrassmannSimplify (alias
¯$) so that they are collected at the beginning. For example:

¯$[5 (2 x) ∧ (3 y) ∧ (b z)]
30 b x∧y∧z

If any of the factors in the product is scalar, it will also be collected at the beginning. Here a is a
scalar since it has been declared as one (by default).

¯$[5 (2 x) ∧ (3 a) ∧ (b z)]
30 a b x∧z

GrassmannSimplify works with any Grassmann expression, or lists (to any level) of
Grassmann expressions:

¯$[{{1 + a∧b, a∧x}, {z∧b, 1 - z∧x}}] // MatrixForm

( 1 + a b    a x
  b z        1 + x∧z )

Grassmann expressions
A Grassmann expression is any well-formed expression recognized by GrassmannAlgebra as a
valid element of a Grassmann algebra. To check whether an expression is considered to be a
Grassmann expression, you can use the function GrassmannExpressionQ.

GrassmannExpressionQ[1 + x∧y]
True

GrassmannExpressionQ also works on lists of expressions. Below we determine that whereas the product of a scalar a and a 1-element x is a valid element of the Grassmann algebra, the ordinary multiplication of two 1-elements x y is not.

GrassmannExpressionQ[{x∧y, a x, x y}]
{True, True, False}

† For further information, check the Expression Analysis section in the palette.

Calculating the grade of a Grassmann expression


The grade of an m-element is m.

At this stage we have not yet begun to use expressions for which the grade is other than obvious by inspection. However, as will be seen later, the grade will not always be obvious, especially when general Grassmann expressions, for example those involving Clifford or hypercomplex numbers, are being considered. A Clifford number may, for example, have components of different grade.

In GrassmannAlgebra you can use the function Grade (alias ¯G) to calculate the grade of a
Grassmann expression. Grade works with single Grassmann expressions or with lists (to any
level) of expressions.

Grade[x∧y∧z]
3

Grade[{1, x, x∧y, x∧y∧z}]
{0, 1, 2, 3}

It will also calculate the grades of the elements in a more general expression, returning them in a
list.

Grade[1 + x + x∧y + x∧y∧z]
{0, 1, 2, 3}

Ï The grade of zero

† As mentioned in the summary of axioms above the zero symbol 0 will be used indiscriminately
for the zero element of any of the exterior linear spaces. The grade of the symbol 0 is therefore
ambiguous. In GrassmannAlgebra, whenever Grade encounters a 0 it will return the symbol ¯0.
This symbol may be read as "the grade of zero". Grade also returns ¯0 when it encounters an
element whose notional grade is greater than the dimension of the currently declared space (and
hence is zero).

As an example, we first declare a 3-space, then compute the grades of three elements. The first
element 0 could be the zero of any exterior linear space, hence its grade is returned as ¯0. The
second element, the unit scalar 1, is of grade 0 (this 0 is a true scalar!). The third element has a
notional grade of 4, and hence reduces to 0 in the currently declared 3-space.

¯!3; Grade[{0, 1, x∧y∧z∧w}]
{¯0, 0, ¯0}

2.5 Bases

Bases for exterior linear spaces


Suppose e₁, e₂, …, eₙ is a basis for L₁. Then, as is well known, any element of L₁ may be expressed as a linear combination of these basis elements.

A basis for L₂ may be constructed from the eᵢ by assembling all the essentially different non-zero products eᵢ₁ ∧ eᵢ₂ (i₁, i₂ : 1, …, n). Two products are essentially different if they do not involve the same 1-elements. There are obviously (n choose 2) such products, making (n choose 2) also the dimension of L₂.

In general, a basis for Lₘ may be constructed from the eᵢ by taking all the essentially different products eᵢ₁ ∧ eᵢ₂ ∧ … ∧ eᵢₘ (i₁, i₂, …, iₘ : 1, …, n). Lₘ is thus (n choose m)-dimensional.

Essentially different products are of course linearly independent.
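The counting argument can be illustrated directly: the essentially different products correspond to the m-element subsets of the n basis indices. A short Python sketch (the helper name `basis_L` is our own, chosen to echo the package function introduced below; it is not a GrassmannAlgebra command) generates them in standard order and confirms the binomial dimension, as well as the 2ⁿ dimension of the whole algebra:

```python
from itertools import combinations
from math import comb

def basis_L(m, n):
    # the essentially different products of m distinct basis 1-elements,
    # listed in standard (lexicographic) order as index tuples
    return list(combinations(range(1, n + 1), m))

n = 4
assert basis_L(2, n) == [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
for m in range(n + 1):
    assert len(basis_L(m, n)) == comb(n, m)      # dim of L_m is C(n, m)
# summing over all grades gives the dimension 2^n of the whole algebra
assert sum(comb(n, m) for m in range(n + 1)) == 2 ** n
```

Note that `itertools.combinations` emits tuples in lexicographic order, which matches the standard ordering discussed later in this chapter.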

Declaring a basis in GrassmannAlgebra


GrassmannAlgebra allows the setting up of an environment in which given symbols are declared basis elements of L₁.

When first loaded GrassmannAlgebra sets up a default basis of {e₁, e₂, e₃}. You can see this from the Current Basis pane on the palette.

This is the basis of a 3-dimensional linear space which may be interpreted as a 3-dimensional
vector space if you wish. (We will be exploring interpretations other than the vectorial later.)

To declare your own basis, enter DeclareBasis[list]. For example:

DeclareBasis[{i, j, k}]
{i, j, k}

Notice that the Current Basis pane of the palette now reads {i, j, k}.

DeclareBasis either takes a list (of your own basis elements) or a positive integer as its
argument. By using the positive integer form you can simply declare a subscripted basis of any
dimension you please.

DeclareBasis[8]
{e₁, e₂, e₃, e₄, e₅, e₆, e₇, e₈}

An optional argument gives you control over the kernel symbol used for the basis elements.

DeclareBasis[8, e]
{e₁, e₂, e₃, e₄, e₅, e₆, e₇, e₈}

A convenient way to change your space to one of dimension n is by entering the symbol ¯!n with n a positive integer. The simplest way to enter this is from the palette using the ¯!□ symbol, and entering the integer into the placeholder □.

2009 9 3
Grassmann Algebra Book.nb 85

¯!4
{e₁, e₂, e₃, e₄}

You can return to the default basis by entering DeclareBasis[3] or ¯!3. You can set all preferences back to their default values by entering DeclareAllDefaultPreferences (or its alias ¯A).

† For further information, check the Preferences section of the palette.

Composing bases of exterior linear spaces


The function BasisL[m] generates a list of basis elements of Lₘ arranged in standard order from the declared basis of L₁. For example, we declare the basis to be that of a 3-dimensional vector space, and then compute the bases of L₀, L₁, L₂, and L₃.

¯!3
{e₁, e₂, e₃}

BasisL[0]
{1}

BasisL[1]
{e₁, e₂, e₃}

BasisL[2]
{e₁∧e₂, e₁∧e₃, e₂∧e₃}

BasisL[3]
{e₁∧e₂∧e₃}

BasisL[] generates a list of the bases of each of the Lₘ.

BasisL[]
{{1}, {e₁, e₂, e₃}, {e₁∧e₂, e₁∧e₃, e₂∧e₃}, {e₁∧e₂∧e₃}}

You can combine these into a single list by using Mathematica's inbuilt Flatten function.

Flatten[BasisL[]]
{1, e₁, e₂, e₃, e₁∧e₂, e₁∧e₃, e₂∧e₃, e₁∧e₂∧e₃}

This is a basis of the Grassmann algebra whose underlying linear space is 3-dimensional. As a linear space, the algebra has 2³ = 8 dimensions corresponding to its 8 basis elements.

Composing palettes of basis elements




If you would like to compose a palette of the basis elements of all the exterior linear spaces
induced by the declared basis, you can use the GrassmannAlgebra command BasisPalette. For
example suppose you are working in a four-dimensional space.

¯!4
{e₁, e₂, e₃, e₄}

Entering BasisPalette would then give you the 16 basis elements of the corresponding
Grassmann algebra:

BasisPalette

Basis Palette
L₀: 1
L₁: e₁, e₂, e₃, e₄
L₂: e₁∧e₂, e₁∧e₃, e₁∧e₄, e₂∧e₃, e₂∧e₄, e₃∧e₄
L₃: e₁∧e₂∧e₃, e₁∧e₂∧e₄, e₁∧e₃∧e₄, e₂∧e₃∧e₄
L₄: e₁∧e₂∧e₃∧e₄

You can click on any of the buttons to get its contents pasted into your notebook. For example you
can compose Grassmann numbers of your choice by clicking on the relevant basis elements.

X = a + b e₁ + c e₁∧e₃ + d e₁∧e₃∧e₄

Standard ordering
The standard ordering of the basis of L₁ is defined as the ordering of the elements in the declared list of basis elements. This ordering induces a natural standard ordering on the basis elements of Lₘ: if the basis elements of L₁ were letters of the alphabet arranged alphabetically, then the basis elements of Lₘ would be words arranged alphabetically. Equivalently, if the basis elements were digits, the ordering would be numeric.

For example, if we take {A, B, C, D} as basis, we can see from its BasisPalette that the basis elements for each of the bases are arranged alphabetically.


DeclareBasis[{A, B, C, D}]; BasisPalette

Basis Palette
L₀: 1
L₁: A, B, C, D
L₂: A∧B, A∧C, A∧D, B∧C, B∧D, C∧D
L₃: A∧B∧C, A∧B∧D, A∧C∧D, B∧C∧D
L₄: A∧B∧C∧D

Indexing basis elements of exterior linear spaces


Just as we can denote the basis elements of L₁ by indexed symbols eᵢ (i : 1, …, n), so too can we denote the basis elements of Lₘ by indexed symbols eᵢ carrying the grade m written below (i : 1, …, (n choose m)). This would enable us to denote more succinctly the basis elements of an exterior linear space of large or arbitrary dimension.

For example, suppose we have a basis {e₁, e₂, e₃, e₄} for L₁; then the standard basis for L₃ is given by

¯!4; BasisL[3]
{e₁∧e₂∧e₃, e₁∧e₂∧e₄, e₁∧e₃∧e₄, e₂∧e₃∧e₄}

We could, if we wished, set up rules to denote these basis elements more compactly. For example:

{e₁∧e₂∧e₃ → e₁, e₁∧e₂∧e₄ → e₂, e₁∧e₃∧e₄ → e₃, e₂∧e₃∧e₄ → e₄}, each right-hand kernel symbol carrying the grade 3 written below

We will find this more compact notation of some use in later theoretical derivations. Note however that GrassmannAlgebra is set up to accept only declarations for the basis of L₁, from which it generates the bases of the other exterior linear spaces of the algebra.


2.6 Cobases

Definition of a cobasis
The notion of cobasis of a basis will be conceptually and notationally useful in our development of later concepts. The cobasis of a basis of Lₘ is simply the basis of Lₙ₋ₘ which has been ordered in a special way relative to the basis of Lₘ.

Let {e₁, e₂, …, eₙ} be a basis for L₁. The cobasis element associated with a basis element eᵢ is denoted e̲ᵢ and is defined as the product of the remaining basis elements such that the exterior product of a basis element with its cobasis element is equal to the basis element of Lₙ.

eᵢ ∧ e̲ᵢ = e₁ ∧ e₂ ∧ … ∧ eₙ

That is:

e̲ᵢ = (-1)^(i-1) e₁ ∧ … ∧ ěᵢ ∧ … ∧ eₙ    2.21

where ěᵢ means that eᵢ is missing from the product.

The choice of the underbar notation to denote the cobasis may be viewed as a mnemonic to indicate that the element e̲ᵢ is the basis element of Lₙ with eᵢ 'struck out' from it.

Cobasis elements have similar properties to Euclidean complements, which are denoted with overbars (see Chapter 5: The Complement). However, it should be noted that the underbar denotes an element, and is not an operation. For example, (a eᵢ) with an underbar is not defined.

More generally, the cobasis element of the basis m-element eᵢ = eᵢ₁ ∧ … ∧ eᵢₘ of Lₘ is denoted e̲ᵢ and is defined as the product of the remaining basis elements such that:

eᵢ ∧ e̲ᵢ = e₁ ∧ e₂ ∧ … ∧ eₙ    2.22

That is:

e̲ᵢ = (-1)^Kₘ e₁ ∧ … ∧ ěᵢ₁ ∧ … ∧ ěᵢₘ ∧ … ∧ eₙ    2.23

where Kₘ = ∑_{g=1}^{m} i_g + ½ m (m + 1).

From the above definition it can be seen that the exterior product of a basis element with the cobasis element of another basis element is zero. Hence we can write:

eᵢ ∧ e̲ⱼ = δᵢⱼ e₁ ∧ e₂ ∧ … ∧ eₙ    2.24

Here, δᵢⱼ is the Kronecker delta.
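These defining properties are easy to verify mechanically. The Python sketch below is illustrative only: `wedge` and `cobasis` are our own helpers, not GrassmannAlgebra functions. For a 3-space it checks that each basis 1-element multiplied by its cobasis element gives the basis n-element, and that the cobasis of e₂ carries the sign (-1)^(i-1) seen in the palette below:

```python
def wedge(a, b):
    # Exterior product of two multivectors, each stored as a dict
    # {sorted tuple of basis indices: coefficient}; () keys the scalar part.
    out = {}
    for u, x in a.items():
        for v, y in b.items():
            idx = u + v
            if len(set(idx)) < len(idx):
                continue  # repeated 1-element factor: nilpotency gives zero
            sign, perm, srt = 1, list(idx), []
            while perm:  # sort the indices, tracking the permutation's sign
                i = perm.index(min(perm))
                sign *= (-1) ** i
                srt.append(perm.pop(i))
            out[tuple(srt)] = out.get(tuple(srt), 0) + sign * x * y
    return {k: c for k, c in out.items() if c}

def cobasis(i, n):
    # cobasis of e_i: (-1)^(i-1) times the product of the remaining e_j
    rest = tuple(j for j in range(1, n + 1) if j != i)
    return {rest: (-1) ** (i - 1)}

n = 3
top = {(1, 2, 3): 1}                      # the basis n-element e1∧e2∧e3
for i in range(1, n + 1):
    assert wedge({(i,): 1}, cobasis(i, n)) == top
assert cobasis(2, 3) == {(1, 3): -1}      # cobasis of e2 is -(e1∧e3)
```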

The cobasis of unity


The natural basis of L₀ is 1. The cobasis 1̲ of this basis element is defined by formula 2.24 as the product of the remaining basis elements such that:

1 ∧ 1̲ = e₁ ∧ e₂ ∧ … ∧ eₙ

Thus:

1̲ = e₁ ∧ e₂ ∧ … ∧ eₙ, written compactly as the kernel symbol e with n below    2.25

Any of these three representations will be called the basis n-element of the space. 1̲ may also be described as the cobasis of unity. Note carefully that 1̲ may be different in different bases.

Composing palettes of cobasis elements


You can compose a palette of basis elements of the currently declared basis together with their corresponding cobasis elements by the command CobasisPalette. For the default basis {e₁, e₂, e₃} this would give


¯!3; CobasisPalette

Cobasis Palette
Basis          Cobasis
1              e₁∧e₂∧e₃
e₁             e₂∧e₃
e₂             -(e₁∧e₃)
e₃             e₁∧e₂
e₁∧e₂          e₃
e₁∧e₃          -e₂
e₂∧e₃          e₁
e₁∧e₂∧e₃       1

If you want to compose a palette for another basis, first declare the basis.

DeclareBasis[{u, v}]; CobasisPalette

Cobasis Palette
Basis     Cobasis
1         u∧v
u         v
v         -u
u∧v       1

The cobasis of a cobasis


Let eᵢ be a basis m-element and e̲ᵢ be its cobasis element. Then the cobasis element of this cobasis element is denoted e̲̲ⱼ and is defined, as expected, as the product of the remaining basis elements such that:

e̲ᵢ ∧ e̲̲ⱼ = δᵢⱼ e₁ ∧ e₂ ∧ … ∧ eₙ    2.26

The left-hand side may be rearranged to give:

e̲ᵢ ∧ e̲̲ⱼ = (-1)^(m(n-m)) e̲̲ⱼ ∧ e̲ᵢ

which, by comparison with the definition for the cobasis, shows that:

e̲̲ᵢ = (-1)^(m(n-m)) eᵢ    2.27

That is, the cobasis element of a cobasis element of an element is (apart from a possible sign) equal
to the element itself. We shall find this formula useful as we develop general formulae for the
complement and interior products in later chapters.
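The sign (-1)^(m(n-m)) in formula 2.27 can be confirmed computationally. In the sketch below (illustrative helpers of our own, not package functions), a cobasis of a basis m-element is represented as a signed complementary index set, and the double-cobasis sign is checked for every basis 2-element of a 5-space:

```python
from itertools import combinations

def perm_sign(seq):
    # sign of the permutation that sorts seq (entries assumed distinct)
    seq, s = list(seq), 1
    for i in range(len(seq)):
        j = seq.index(min(seq[i:]), i)
        if j != i:
            seq[i], seq[j] = seq[j], seq[i]
            s = -s
    return s

def cobasis(I, n):
    # cobasis of the basis m-element e_I, returned as (sign, J) with
    # cobasis(e_I) = sign * e_J, so that e_I ∧ (sign e_J) = e_1∧…∧e_n
    J = tuple(j for j in range(1, n + 1) if j not in I)
    return perm_sign(I + J), J

n, m = 5, 2
for I in combinations(range(1, n + 1), m):
    s1, J = cobasis(I, n)                 # first cobasis
    s2, I2 = cobasis(J, n)                # cobasis of the cobasis
    assert I2 == I                        # same element recovered ...
    assert s1 * s2 == (-1) ** (m * (n - m))   # ... up to the 2.27 sign
```

The identity holds because interchanging the two blocks of sizes m and n-m in a permutation costs exactly m(n-m) transpositions.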

2.7 Determinants

Determinants from exterior products


All the properties of determinants follow naturally from the properties of the exterior product.
Indeed, it may be reasonably posited that the theory of determinants was a system for answering
questions of linear dependence, developed only because Grassmann's work was ignored. Had
Grassmann's work been more widely known, his simpler and more understandable approach would
have rendered determinant theory of little interest.

The determinant:

$$\begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}$$

may be calculated by considering each of its rows (or columns) as a 1-element. Here, we consider
rows. Development by columns may be obtained mutatis mutandis. Introduce an arbitrary basis ei
in order to encode the position of the entry in the row, and let

$$a_i \;=\; a_{i1}\, e_1 + a_{i2}\, e_2 + \cdots + a_{in}\, e_n$$

Then form the exterior product of all the ai . We can arrange the ai in rows to portray the effect of
an array:


$$a_1 \wedge a_2 \wedge \cdots \wedge a_n \;=\; (a_{11}\, e_1 + a_{12}\, e_2 + \cdots + a_{1n}\, e_n) \wedge (a_{21}\, e_1 + a_{22}\, e_2 + \cdots + a_{2n}\, e_n) \wedge \cdots \wedge (a_{n1}\, e_1 + a_{n2}\, e_2 + \cdots + a_{nn}\, e_n)$$

The determinant D is then the coefficient of the resulting n-element:

$$a_1 \wedge a_2 \wedge \cdots \wedge a_n \;=\; D\; e_1 \wedge e_2 \wedge \cdots \wedge e_n$$

Because $\underset{n}{L}$ is of dimension 1, we can express this uniquely as:

$$D \;=\; \frac{a_1 \wedge a_2 \wedge \cdots \wedge a_n}{e_1 \wedge e_2 \wedge \cdots \wedge e_n} \tag{2.28}$$
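The coefficient in equation 2.28 can be computed mechanically by wedging the rows together. A minimal Python sketch (`det_via_wedge` is our own illustrative helper, not part of the GrassmannAlgebra package): it represents a sum of basis m-elements as a map from sorted index tuples to coefficients, and wedges in one row at a time.

```python
def det_via_wedge(rows):
    """Compute a determinant as the coefficient of e1^e2^...^en in the
    exterior product of the rows, each viewed as a 1-element (eq. 2.28)."""
    state = {(): 1.0}              # sorted basis-index tuples -> coefficients
    for row in rows:
        new = {}
        for idxs, c in state.items():
            for j, a in enumerate(row):
                if a == 0 or j in idxs:
                    continue       # e_j ^ e_j == 0 (nilpotency)
                t = list(idxs) + [j]
                sign = 1
                # bubble-sort the indices, flipping sign per swap (anti-symmetry)
                for p in range(len(t)):
                    for q in range(len(t) - 1 - p):
                        if t[q] > t[q + 1]:
                            t[q], t[q + 1] = t[q + 1], t[q]
                            sign = -sign
                key = tuple(t)
                new[key] = new.get(key, 0.0) + sign * c * a
        state = new
    # the determinant is the coefficient of e_1 ^ e_2 ^ ... ^ e_n
    return state.get(tuple(range(len(rows))), 0.0)

print(det_via_wedge([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3.0
```

The nilpotency and anti-symmetry of the exterior product do all the work here: repeated indices are dropped, and each swap needed to sort the indices flips the sign.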

Properties of determinants
All the well-known properties of determinants proceed directly from the properties of the exterior
product.

Ï 1 The determinant changes sign on the interchange of any two rows.

The exterior product is anti-symmetric in any two factors.

! Ô ai Ô ! Ô aj Ô ! ã - ! Ô aj Ô ! Ô ai Ô !

Ï 2 The determinant is zero if two of its rows are equal.

The exterior product is nilpotent.

! Ô ai Ô ! Ô ai Ô ! ã 0

Ï 3 The determinant is multiplied by a factor k if any row is multiplied by k.

The exterior product is commutative with respect to scalars.

a1 Ô ! Ô Hk ai L Ô ! ã k H a1 Ô ! Ô ai Ô !L

Ï 4 The determinant is equal to the sum of p determinants if each element of a


given row is the sum of p terms.

The exterior product is distributive with respect to addition.

$$a_1 \wedge \cdots \wedge \Bigl(\sum_i a_i\Bigr) \wedge \cdots \;=\; \sum_i \bigl(a_1 \wedge \cdots \wedge a_i \wedge \cdots\bigr)$$


Ï 5 The determinant is unchanged if to any row is added scalar multiples of


other rows.

The exterior product is unchanged if to any factor is added multiples of other factors.

$$a_1 \wedge \cdots \wedge \Bigl(a_i + \sum_{j \neq i} k_j\, a_j\Bigr) \wedge \cdots \;=\; a_1 \wedge \cdots \wedge a_i \wedge \cdots$$

The Laplace expansion technique


The Laplace expansion technique is equivalent to the calculation of the exterior product in four
stages:
1. Take the exterior product of any p of the ai .
2. Take the exterior product of the remaining n-p of the ai .
3. Take the exterior product of the results of the first two operations.
4. Adjust the sign to ensure the parity of the original ordering of the ai is preserved.

Each of the first two operations produces an element with $\binom{n}{p} = \binom{n}{n-p}$ terms.

A generalization of the Laplace expansion technique is evident from the fact that the exterior
product of the ai may be effected in any grouping and sequence which facilitate the computation.

Calculating determinants
As an example take the following matrix. We can calculate its determinant with Mathematica's
inbuilt Det function:

a 0 0 0 0 0 0 0
0 0 0 c 0 0 0 a
0 b 0 0 0 f 0 0
0 0 0 0 e 0 0 0
DetB F
0 0 e 0 0 0 g 0
0 f 0 d 0 c 0 0
g 0 0 0 0 0 0 h
0 0 0 0 d 0 b 0

a b2 c2 e2 h - a b c e2 f2 h

Alternatively, we can set L to be 8-dimensional.


1

DeclareBasis@8D
8e1 , e2 , e3 , e4 , e5 , e6 , e7 , e8 <

and then expand and simplify the exterior product of the 8 rows or columns expressed as 1-
elements. Here we use ExpandAndSimplifyExteriorProducts (as it is somewhat faster in
this specialized case than the more general GrassmannExpandAndSimplify) to expand by
columns.


ExpandAndSimplifyExteriorProducts@
HHa e1 + g e7 L Ô Hb e3 + f e6 L Ô He e5 L Ô Hc e2 + d e6 L Ô
He e4 + d e8 L Ô Hf e3 + c e6 L Ô Hg e5 + b e8 L Ô Ha e2 + h e7 LLD

Ia b2 c2 e2 h - a b c e2 f2 hM e1 Ô e2 Ô e3 Ô e4 Ô e5 Ô e6 Ô e7 Ô e8

The determinant is then the coefficient of the basis 8-element.

Ï Speeding up calculations

For maximum speed it is clearly better to use Mathematica's inbuilt Det wherever possible. If
however, you are using GrassmannExpandAndSimplify or
ExpandAndSimplifyExteriorProducts it is usually faster to apply the generalized Laplace
expansion technique. That is, the order in which the products are calculated is arranged with the
objective of minimizing the number of resulting terms produced at each stage. You can do this by
dividing the product up into groupings of factors where each grouping contains as few basis
elements as possible in common with the others. You then calculate the result from each grouping
before combining them into the final result. Of course, you must ensure that the sign of the overall
regrouping is the same as the original expression.

In the example below, we have rearranged the determinant calculation above into three groupings,
which we calculate successively before combining the results. Our determinant calculation then
becomes

G = ExpandAndSimplifyExteriorProducts;

-G@G@Ha e1 + g e7 L Ô Hc e2 + d e6 L Ô Ha e2 + h e7 LD Ô G@
Hb e3 + f e6 L Ô Hf e3 + c e6 LD Ô G@He e5 L Ô He e4 + d e8 L Ô Hg e5 + b e8 LDD

a b c e2 Ib c - f2 M h e1 Ô e2 Ô e3 Ô e4 Ô e5 Ô e6 Ô e7 Ô e8

This grouping took a third of the time of the direct approach above.

Ï Historical Note
Grassmann applied his Ausdehnungslehre to the theory of determinants and linear
equations quite early in his work. Later, Cauchy published his technique of 'algebraic keys'
which essentially duplicated Grassmann's results. To claim priority, Grassmann was led to
publish his only paper in French, obviously directed at Cauchy: 'Sur les différents genres de
multiplication' ('On different types of multiplication') [1855]. For a complete treatise on the
theory of determinants from a Grassmannian viewpoint see R. F. Scott [1880].


2.8 Cofactors

Cofactors from exterior products


The cofactor is an important concept in the usual approach to determinants. One often calculates a
determinant by summing the products of the elements of a row by their corresponding cofactors.
Cofactors divided by the determinant form the elements of an inverse matrix. In the subsequent
development of the Grassmann algebra, particularly the development of the complement operation,
we will find it useful to see how cofactors arise from exterior products.

Consider the product of n 1-elements introduced in the previous section:

a1 Ô a2 Ô ! Ô an ã Ha11 e1 + a12 e2 + ! + a1 n en L
Ô
Ha21 e1 + a22 e2 + ! + a2 n en L
ª ª ! ª
Ô
Han1 e1 + an2 e2 + ! + an n en L

Omitting the first factor a1 :

a2 Ô ! Ô an ã Ha21 e1 + a22 e2 + ! + a2 n en L
ª ª ! ª
ÔHan1 e1 + an2 e2 + ! + an n en L
and multiplying out the remaining n-1 factors results in an expression of the form:

$$a_2 \wedge \cdots \wedge a_n \;=\; \underline{a}_{11}\; e_2 \wedge e_3 \wedge \cdots \wedge e_n \;+\; \underline{a}_{12}\, \bigl(-\, e_1 \wedge e_3 \wedge \cdots \wedge e_n\bigr) \;+\; \cdots \;+\; \underline{a}_{1n}\, (-1)^{n-1}\, e_1 \wedge e_2 \wedge \cdots \wedge e_{n-1}$$

Here, the signs attached to the basis (n-1)-elements have been specifically chosen so that together they correspond to the cobasis elements $\overline{e_1}$ to $\overline{e_n}$. The underscored scalar coefficients $\underline{a}_{1j}$ have yet to be interpreted. Thus we can write:

$$a_2 \wedge \cdots \wedge a_n \;=\; \underline{a}_{11}\, \overline{e_1} + \underline{a}_{12}\, \overline{e_2} + \cdots + \underline{a}_{1n}\, \overline{e_n}$$

If we now premultiply by the first factor a1 we get a particularly symmetric form.

$$a_1 \wedge a_2 \wedge \cdots \wedge a_n \;=\; (a_{11}\, e_1 + a_{12}\, e_2 + \cdots + a_{1n}\, e_n) \wedge \bigl(\underline{a}_{11}\, \overline{e_1} + \underline{a}_{12}\, \overline{e_2} + \cdots + \underline{a}_{1n}\, \overline{e_n}\bigr)$$

Multiplying this out and remembering that the exterior product of a basis element with the cobasis
element of another basis element is zero, we get the sum:

$$a_1 \wedge a_2 \wedge \cdots \wedge a_n \;=\; \bigl(a_{11}\, \underline{a}_{11} + a_{12}\, \underline{a}_{12} + \cdots + a_{1n}\, \underline{a}_{1n}\bigr)\; e_1 \wedge e_2 \wedge \cdots \wedge e_n$$

But we have already seen that:

a1 Ô a2 Ô ! Ô an ã D e1 Ô e2 Ô ! Ô en

Hence the determinant D can be expressed as the sum of products:


$$D \;=\; a_{11}\, \underline{a}_{11} + a_{12}\, \underline{a}_{12} + \cdots + a_{1n}\, \underline{a}_{1n}$$

showing that the $\underline{a}_{ij}$ are the cofactors of the $a_{ij}$.

Mnemonically, we can visualize $\underline{a}_{ij}$ as the determinant with the row and column containing $a_{ij}$ 'struck out' by the underscore.

Of course, there is nothing special in the choice of the element a1 about which to expand the
determinant. We could have written the expansion in terms of any factor (row or column):

$$D \;=\; a_{i1}\, \underline{a}_{i1} + a_{i2}\, \underline{a}_{i2} + \cdots + a_{in}\, \underline{a}_{in}$$

It can be seen immediately that the product of the elements of a row with the cofactors of any other
row is zero, since in the exterior product formulation the row must have been included in the
calculation of the cofactors.

Summarizing these results for both row and column expansions we have:

$$\sum_{j=1}^{n} a_{ij}\, \underline{a}_{kj} \;=\; \sum_{j=1}^{n} a_{ji}\, \underline{a}_{jk} \;=\; D\, \delta_{ik} \tag{2.29}$$

In matrix terms of course this is equivalent to the standard results

$$A\, A_c{}^{\mathsf{T}} \;=\; A^{\mathsf{T}} A_c \;=\; D\, I$$

where Ac is the matrix of cofactors of the elements of A.
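As a quick numerical illustration of equation 2.29 (the matrix here is arbitrary; numpy recovers the cofactor matrix from the inverse):

```python
import numpy as np

# an arbitrary invertible example matrix
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
D = np.linalg.det(A)
Ac = D * np.linalg.inv(A).T        # matrix of cofactors of A
# row form and column form of equation 2.29
print(np.allclose(A @ Ac.T, D * np.eye(3)))   # True
print(np.allclose(A.T @ Ac, D * np.eye(3)))   # True
```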

The Laplace expansion in cofactor form


We have already introduced the Laplace expansion technique in our discussion of determinants:
1. Take the exterior product of any m of the ai
2. Take the exterior product of the remaining n-m of the ai
3. Take the exterior product of the results of the first two operations (in the correct order).

In this section we revisit it more specifically in the context of the cofactor. The results will be
important in deriving later results.

Let $\underset{m}{e}_i$ be basis m-elements and $\overline{\underset{m}{e}_i}$ be their corresponding cobasis (n-m)-elements ($i$: 1, …, $\nu$, where $\nu = \binom{n}{m}$). To fix ideas, suppose we expand a determinant about the first m rows:

$$(a_1 \wedge \cdots \wedge a_m) \wedge (a_{m+1} \wedge \cdots \wedge a_n) \;=\; \Bigl(a_1\, \underset{m}{e}_1 + a_2\, \underset{m}{e}_2 + \cdots + a_\nu\, \underset{m}{e}_\nu\Bigr) \wedge \Bigl(\underline{a}_1\, \overline{\underset{m}{e}_1} + \underline{a}_2\, \overline{\underset{m}{e}_2} + \cdots + \underline{a}_\nu\, \overline{\underset{m}{e}_\nu}\Bigr)$$

Here, the $a_i$ and $\underline{a}_i$ are simply the coefficients resulting from the partial expansions. As with basis 1-elements, the exterior product of a basis m-element with the cobasis element of another basis m-element is zero. That is:

$$\underset{m}{e}_i \wedge \overline{\underset{m}{e}_j} \;=\; \delta_{ij}\; e_1 \wedge e_2 \wedge \cdots \wedge e_n$$

Expanding the product and applying this simplification then yields:

$$D \;=\; a_1\, \underline{a}_1 + a_2\, \underline{a}_2 + \cdots + a_\nu\, \underline{a}_\nu \qquad \nu = \binom{n}{m}$$

This is the Laplace expansion. The $a_i$ are minors and the $\underline{a}_i$ are their cofactors.
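The same expansion can be checked numerically. In the sketch below (Python; `laplace_expand` is our own illustrative function, with 0-based column indices), the determinant is expanded about the first m rows by summing signed products of complementary minors over all column m-subsets.

```python
import numpy as np
from itertools import combinations

def laplace_expand(A, m):
    """det(A) expanded about the first m rows: the sum over all column
    m-subsets of (signed minor) * (complementary minor)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    total = 0.0
    for cols in combinations(range(n), m):
        rest = tuple(j for j in range(n) if j not in cols)
        minor = np.linalg.det(A[np.ix_(range(m), cols)])
        cof   = np.linalg.det(A[np.ix_(range(m, n), rest)])
        # parity of moving the chosen columns to the front
        sign = (-1) ** (sum(cols) - m * (m - 1) // 2)
        total += sign * minor * cof
    return total

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 12, 13], [14, 15, 16, 18]]
print(np.isclose(laplace_expand(A, 2), np.linalg.det(np.array(A, float))))  # True
```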

Exploring the calculation of determinants using minors and cofactors


In this section we explore the relationship between a determinant calculation in terms of minors
and cofactors, and its exterior product formulation. This is intended for purely theoretical interest
only, because of course, if the simple calculation of an actual determinant is required,
Mathematica's inbuilt Det function will certainly be the more effective.

Our example involves a fourth order determinant. To begin then, we declare a 4-space, and then
compose a list of 4 vectors, give each a name, and call the list M. The new scalar coefficients
generated by ComposeVector are automatically added to the list of declared scalars.

¯!4 ; M = 8x, y, z, w< = ComposeVector@8a, b, c, d<D; MatrixForm@MD


a1 e1 + a2 e2 + a3 e3 + a4 e4
b1 e1 + b2 e2 + b3 e3 + b4 e4
c1 e1 + c2 e2 + c3 e3 + c4 e4
d1 e1 + d2 e2 + d3 e3 + d4 e4

We can extract the scalar coefficients from M using the GrassmannAlgebra function
ExtractCoefficients. The result is now a 4!4 matrix Mc whose determinant we proceed to
calculate.

Mc = ExtractCoefficients@BasisD êü M; MatrixForm@McD
a1 a2 a3 a4
b1 b2 b3 b4
c1 c2 c3 c4
d1 d2 d3 d4

Ï Case 1: Calculation by expansion about the first row

Expanding and simplifying the exterior product of rows two, three and four gives the minors of the
first row as coefficients of the resulting 3-element.


A = ¯%@y Ô z Ô wD
H-b3 c2 d1 + b2 c3 d1 + b3 c1 d2 - b1 c3 d2 - b2 c1 d3 + b1 c2 d3 L e1 Ô e2 Ô e3 +
H-b4 c2 d1 + b2 c4 d1 + b4 c1 d2 - b1 c4 d2 - b2 c1 d4 + b1 c2 d4 L e1 Ô e2 Ô e4 +
H-b4 c3 d1 + b3 c4 d1 + b4 c1 d3 - b1 c4 d3 - b3 c1 d4 + b1 c3 d4 L e1 Ô e3 Ô e4 +
H-b4 c3 d2 + b3 c4 d2 + b4 c2 d3 - b2 c4 d3 - b3 c2 d4 + b2 c3 d4 L e2 Ô e3 Ô e4

The cobasis of the current basis 8e1 , e2 , e3 , e4 < can be displayed by entering Cobasis.

Cobasis
8e2 Ô e3 Ô e4 , -He1 Ô e3 Ô e4 L, e1 Ô e2 Ô e4 , -He1 Ô e2 Ô e3 L<

By comparing the basis elements in the expansion of A with the cobasis elements of the current
basis above, we see that both e1 Ôe2 Ôe3 and e1 Ôe3 Ôe4 require a change of sign to bring them into
the required cobasis form. The crux of the calculation is to garner this required sign from the
coefficients of the basis elements (and hence also changing the sign of the coefficient).

We denote the cobasis elements of 8e1 , e2 , e3 , e4 < by 9e1 , e2 , e3 , e4 = and use a rule to
replace the basis elements of A with the correctly signed values.

A1 = A ê. 9e2 Ô e3 Ô e4 -> e1 ,
e1 Ô e3 Ô e4 -> -e2 , e1 Ô e2 Ô e4 -> e3 , e1 Ô e2 Ô e3 -> -e4 =

H-b4 c3 d2 + b3 c4 d2 + b4 c2 d3 - b2 c4 d3 - b3 c2 d4 + b2 c3 d4 L e1 -
H-b4 c3 d1 + b3 c4 d1 + b4 c1 d3 - b1 c4 d3 - b3 c1 d4 + b1 c3 d4 L e2 +
H-b4 c2 d1 + b2 c4 d1 + b4 c1 d2 - b1 c4 d2 - b2 c1 d4 + b1 c2 d4 L e3 -
H-b3 c2 d1 + b2 c3 d1 + b3 c1 d2 - b1 c3 d2 - b2 c1 d3 + b1 c2 d3 L e4

The coefficients of 9e1 , e2 , e3 , e4 = are the cofactors of 8a1 , a2 , a3 , a4 <, which we


denote 9a1 , a2 , a3 , a4 =. We extract them again using ExtractCoefficients.

9a1 , a2 , a3 , a4 = = ExtractCoefficientsA9e1 , e2 , e3 , e4 =E@A1 D

8-b4 c3 d2 + b3 c4 d2 + b4 c2 d3 - b2 c4 d3 - b3 c2 d4 + b2 c3 d4 ,
b4 c3 d1 - b3 c4 d1 - b4 c1 d3 + b1 c4 d3 + b3 c1 d4 - b1 c3 d4 ,
-b4 c2 d1 + b2 c4 d1 + b4 c1 d2 - b1 c4 d2 - b2 c1 d4 + b1 c2 d4 ,
b3 c2 d1 - b2 c3 d1 - b3 c1 d2 + b1 c3 d2 + b2 c1 d3 - b1 c2 d3 <

We can then calculate the determinant from:

D1 = a1 a1 + a2 a2 + a3 a3 + a4 a4

a4 Hb3 c2 d1 - b2 c3 d1 - b3 c1 d2 + b1 c3 d2 + b2 c1 d3 - b1 c2 d3 L +
a3 H-b4 c2 d1 + b2 c4 d1 + b4 c1 d2 - b1 c4 d2 - b2 c1 d4 + b1 c2 d4 L +
a2 Hb4 c3 d1 - b3 c4 d1 - b4 c1 d3 + b1 c4 d3 + b3 c1 d4 - b1 c3 d4 L +
a1 H-b4 c3 d2 + b3 c4 d2 + b4 c2 d3 - b2 c4 d3 - b3 c2 d4 + b2 c3 d4 L

This can be easily verified as equal to the determinant of the original array computed using
Mathematica's Det function.


Expand@D1 D == Det@McD
True

Ï Case 2: Calculation by expansion about the first two rows

Expand and simplify the product of the first and second rows.

B1 = ¯%@x Ô yD
H-a2 b1 + a1 b2 L e1 Ô e2 + H-a3 b1 + a1 b3 L e1 Ô e3 + H-a4 b1 + a1 b4 L e1 Ô e4 +
H-a3 b2 + a2 b3 L e2 Ô e3 + H-a4 b2 + a2 b4 L e2 Ô e4 + H-a4 b3 + a3 b4 L e3 Ô e4

Extract the coefficients of these basis 2-elements and denote them by ai .

8a1 , a2 , a3 , a4 , a5 , a6 < = ExtractCoefficients@LBasisD@B1 D


8-a2 b1 + a1 b2 , -a3 b1 + a1 b3 , -a4 b1 + a1 b4 ,
-a3 b2 + a2 b3 , -a4 b2 + a2 b4 , -a4 b3 + a3 b4 <

Expand and simplify the product of the third and fourth rows.

B2 = ¯%@z Ô wD
H-c2 d1 + c1 d2 L e1 Ô e2 + H-c3 d1 + c1 d3 L e1 Ô e3 + H-c4 d1 + c1 d4 L e1 Ô e4 +
H-c3 d2 + c2 d3 L e2 Ô e3 + H-c4 d2 + c2 d4 L e2 Ô e4 + H-c4 d3 + c3 d4 L e3 Ô e4

Extract the coefficients of these basis 2-elements and denote them by bi . (For a more direct
correspondence, we denote them in reverse order).

8b6 , b5 , b4 , b3 , b2 , b1 < = ExtractCoefficients@LBasisD@B2 D


8-c2 d1 + c1 d2 , -c3 d1 + c1 d3 , -c4 d1 + c1 d4 ,
-c3 d2 + c2 d3 , -c4 d2 + c2 d4 , -c4 d3 + c3 d4 <

Now, noting the required change in sign from the basis of $\underset{2}{L}$ and its cobasis:

8BasisL@2D, CobasisL@2D<
88e1 Ô e2 , e1 Ô e3 , e1 Ô e4 , e2 Ô e3 , e2 Ô e4 , e3 Ô e4 <,
8e3 Ô e4 , -He2 Ô e4 L, e2 Ô e3 , e1 Ô e4 , -He1 Ô e3 L, e1 Ô e2 <<

we can define the cofactors 9a6 , a5 , a4 , a3 , a2 , a1 =.

9a6 , a5 , a4 , a3 , a2 , a1 = = 8b6 , -b5 , b4 , b3 , -b2 , b1 <

8-c2 d1 + c1 d2 , c3 d1 - c1 d3 , -c4 d1 + c1 d4 ,
-c3 d2 + c2 d3 , c4 d2 - c2 d4 , -c4 d3 + c3 d4 <

We can then calculate the determinant from:


D2 = a1 a1 + a2 a2 + a3 a3 + a4 a4 + a5 a5 + a6 a6

H-a4 b3 + a3 b4 L H-c2 d1 + c1 d2 L + H-a4 b2 + a2 b4 L Hc3 d1 - c1 d3 L +


H-a4 b1 + a1 b4 L H-c3 d2 + c2 d3 L + H-a3 b2 + a2 b3 L H-c4 d1 + c1 d4 L +
H-a3 b1 + a1 b3 L Hc4 d2 - c2 d4 L + H-a2 b1 + a1 b2 L H-c4 d3 + c3 d4 L

This can be easily verified as equal to the determinant of the original array computed using
Mathematica's Det function.

Expand@D2 D == Det@McD
True

Transformations of cobases
In this section we show that if a basis of the underlying linear space is transformed by a
transformation whose components are aij , then its cobasis is transformed by the induced
transformation whose components are the cofactors of the aij . For simplicity in what follows, we
use Einstein's summation convention in which a summation over repeated indices is understood.

Let $a_{ij}$ be a transformation on the basis $e_j$, giving the new basis $\varepsilon_i$. That is, $\varepsilon_i = a_{ij}\, e_j$.

Let $\mathring{a}_{ij}$ be the corresponding (as yet unidentified) transformation on the cobasis $\overline{e_j}$, giving the new cobasis $\overline{\varepsilon_i}$. That is, $\overline{\varepsilon_i} = \mathring{a}_{ij}\, \overline{e_j}$.

Now take the exterior product of these two equations.

$$\varepsilon_i \wedge \overline{\varepsilon_k} \;=\; (a_{ip}\, e_p) \wedge \bigl(\mathring{a}_{kj}\, \overline{e_j}\bigr) \;=\; a_{ip}\, \mathring{a}_{kj}\; e_p \wedge \overline{e_j}$$

But the product of a basis element and its cobasis element is equal to the n-element of that basis. That is, $\varepsilon_i \wedge \overline{\varepsilon_k} = \delta_{ik}\, \underset{n}{\varepsilon}$ and $e_p \wedge \overline{e_j} = \delta_{pj}\, \underset{n}{e}$. Substituting in the previous equation gives:

$$\delta_{ik}\, \underset{n}{\varepsilon} \;=\; a_{ip}\, \mathring{a}_{kj}\, \delta_{pj}\; \underset{n}{e}$$

Using the properties of the Kronecker delta we can simplify the right side to give:

$$\delta_{ik}\, \underset{n}{\varepsilon} \;=\; a_{ij}\, \mathring{a}_{kj}\; \underset{n}{e}$$

We can now substitute $\underset{n}{\varepsilon} = D\, \underset{n}{e}$, where $D = \mathrm{Det}[a_{ij}]$ is the determinant of the transformation.

$$\delta_{ik}\, D \;=\; a_{ij}\, \mathring{a}_{kj}$$

This is precisely the relationship 2.29 derived in the section above for the expansion of a determinant in terms of cofactors. Hence we have shown that $\mathring{a}_{kj} = \underline{a}_{kj}$, the cofactors.


$$\varepsilon_i = a_{ij}\, e_j \;\Longleftrightarrow\; \overline{\varepsilon_i} = \underline{a}_{ij}\, \overline{e_j} \tag{2.30}$$

In sum: If a basis of the underlying linear space is transformed by a transformation whose components are $a_{ij}$, then its cobasis is transformed by the induced transformation whose components are the cofactors $\underline{a}_{ij}$ of the $a_{ij}$.

Exploring transformations of cobases


In order to verify the equation above we first recast it into a pseudo-code equation and then
proceed to translate it into Mathematica commands:

$$\overline{\varepsilon_i} \;/.\; \bigl(\varepsilon_i \to a_{ij}\, e_j\bigr) \;==\; \underline{a}_{ij}\, \overline{e_j}$$

The commands will be valid in an arbitrary-dimensional space, but we exhibit the 3-dimensional
case as we proceed.

1 Reset all the defaults and declare the elements of the transformation as scalar symbols.

¯A; DeclareExtraScalarSymbols@a__ D

8a, b, c, d, e, f, g, h, a__ <

2 Construct a formula for an n-dimensional transformation An .

An_ := Table@ai,j , 8i, 1, n<, 8j, 1, n<D; MatrixForm@A3 D


a1,1 a1,2 a1,3
a2,1 a2,2 a2,3
a3,1 a3,2 a3,3

3 Construct a formula for the adjoint Cn of An (the matrix of cofactors) using Mathematica's
inbuilt functions.

Cn_ := Transpose@Det@An D Inverse@An DD; MatrixForm@C3 D


-a2,3 a3,2 + a2,2 a3,3 a2,3 a3,1 - a2,1 a3,3 -a2,2 a3,1 + a2,1 a3,2
a1,3 a3,2 - a1,2 a3,3 -a1,3 a3,1 + a1,1 a3,3 a1,2 a3,1 - a1,1 a3,2
-a1,3 a2,2 + a1,2 a2,3 a1,3 a2,1 - a1,1 a2,3 -a1,2 a2,1 + a1,1 a2,2

4 Declare an e-basis, and compute its cobasis.

eBasisn_ := DeclareBasis@n, eD; MatrixForm@eBasis3 D


e1
e2
e3


eCobasisn_ := HeBasisn ; CobasisL; MatrixForm@eCobasis3 D


e2 Ô e3
-He1 Ô e3 L
e1 Ô e2

5 Declare an e-basis, and compute its cobasis.

eBasisn_ := DeclareBasis@nD; MatrixForm@eBasis3 D


e1
e2
e3

eCobasisn_ := HeBasisn ; CobasisL; MatrixForm@eCobasis3 D


e2 Ô e3
-He1 Ô e3 L
e1 Ô e2

6 Compute the left hand side of the pseudo-code equation.

Ln_ := eCobasisn ê. Thread@eBasisn Ø An .eBasisn D; MatrixForm@L3 D


He1 a2,1 + e2 a2,2 + e3 a2,3 L Ô He1 a3,1 + e2 a3,2 + e3 a3,3 L
-HHe1 a1,1 + e2 a1,2 + e3 a1,3 L Ô He1 a3,1 + e2 a3,2 + e3 a3,3 LL
He1 a1,1 + e2 a1,2 + e3 a1,3 L Ô He1 a2,1 + e2 a2,2 + e3 a2,3 L

7 Compute the right hand side of the pseudo-code equation.

Rn_ := Cn .eCobasisn ; MatrixForm@R3 D


H-a2,2 a3,1 + a2,1 a3,2 L e1 Ô e2 - Ha2,3 a3,1 - a2,1 a3,3 L e1 Ô e3 + H-a2,3 a3,2 +
Ha1,2 a3,1 - a1,1 a3,2 L e1 Ô e2 - H-a1,3 a3,1 + a1,1 a3,3 L e1 Ô e3 + Ha1,3 a3,2 - a
H-a1,2 a2,1 + a1,1 a2,2 L e1 Ô e2 - Ha1,3 a2,1 - a1,1 a2,3 L e1 Ô e3 + H-a1,3 a2,2 +

8 Expand and simplify the exterior products on both sides of the equation.

Fn_ :=
Expand@ExpandAndSimplifyExteriorProducts@Ln DD == Expand@Rn D; F3
True

9 Verify the equation for a few other dimensions.

Table@Fi , 8i, 2, 5<D


8True, True, True, True<
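As an independent spot-check in 3 dimensions: the cobasis of ε1 is ε2 ∧ ε3, whose components on the cobasis elements of {e1, e2, e3} are the cofactors of the first row of the transformation; for a 3×3 matrix these are just cross products of the other two rows. A numpy sketch with an arbitrary matrix:

```python
import numpy as np

a = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])        # rows are the new basis vectors eps_i
C = np.linalg.det(a) * np.linalg.inv(a).T   # cofactor matrix of a
# cofactor rows are cross products of the other two rows of a, i.e. the
# components of eps_2 ^ eps_3, eps_3 ^ eps_1, eps_1 ^ eps_2 respectively
print(np.allclose(C[0], np.cross(a[1], a[2])))  # True
print(np.allclose(C[1], np.cross(a[2], a[0])))  # True
print(np.allclose(C[2], np.cross(a[0], a[1])))  # True
```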


2.9 Solution of Linear Equations

Grassmann's approach to solving linear equations


Because of its encapsulation of the properties of linear independence, Grassmann was able to use
the exterior product to present a theory and formulae for the solution of linear equations well
before anyone else.

Suppose m independent equations in n (m ≤ n) unknowns xi :

a11 x1 + a12 x2 + ! + a1 n xn ã a1

a21 x1 + a22 x2 + ! + a2 n xn ã a2

ª ª ! ª ª

am1 x1 + am2 x2 + ! + am n xn ã am

Multiply these equations by e1 , e2 , !, em respectively and define

Ci ã a1 i e1 + a2 i e2 + ! + ami em

C0 ã a1 e1 + a2 e2 + ! + am em

The Ci and C0 are therefore 1-elements in a linear space of dimension m.

Adding the resulting equations then gives the system in the form:

x1 C1 + x2 C2 + ! + xn Cn ã C0 2.31

To obtain an equation from which the unknowns xi have been eliminated, it is only necessary to
multiply the linear system through by the corresponding Ci .

If m = n and a solution for xi exists, it is obtained by eliminating x1 , !, xi-1 , xi+1 , !, xn ; that is,
by multiplying the linear system through by C1 Ô ! Ô Ci-1 Ô Ci+1 Ô ! Ô Cn :

xi Ci Ô C1 Ô ! Ô Ci-1 Ô Ci+1 Ô ! Ô Cn ã C0 Ô C1 Ô ! Ô Ci-1 Ô Ci+1 Ô ! Ô Cn


Solving for xi gives:

$$x_i \;=\; \frac{C_0 \wedge (C_1 \wedge \cdots \wedge C_{i-1} \wedge C_{i+1} \wedge \cdots \wedge C_n)}{C_i \wedge (C_1 \wedge \cdots \wedge C_{i-1} \wedge C_{i+1} \wedge \cdots \wedge C_n)} \tag{2.32}$$

In this form we only have to calculate the (n|1)-element C1 Ô ! Ô Ci-1 Ô Ci+1 Ô ! Ô Cn once.

An alternative form more reminiscent of Cramer's Rule is:


$$x_i \;=\; \frac{C_1 \wedge \cdots \wedge C_{i-1} \wedge C_0 \wedge C_{i+1} \wedge \cdots \wedge C_n}{C_1 \wedge C_2 \wedge \cdots \wedge C_n} \tag{2.33}$$

All the well-known properties of solutions to systems of linear equations proceed directly from the
properties of the exterior products of the Ci .
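Equation 2.33 is Cramer's rule in exterior-product form: numerator and denominator are both coefficients of the basis n-element, i.e. determinants. A hedged numpy sketch (`solve_by_wedge` is our own name; the columns of the coefficient matrix play the role of the Ci):

```python
import numpy as np

def solve_by_wedge(A, a0):
    """Solve A x = a0 via equation 2.33: x_i is the n-element coefficient
    of C1^...^C0(in place i)^...^Cn over that of C1^...^Cn."""
    A = np.asarray(A, dtype=float)       # column i is the 1-element C_i
    a0 = np.asarray(a0, dtype=float)     # the 1-element C_0
    denom = np.linalg.det(A)             # coefficient of C1 ^ C2 ^ ... ^ Cn
    x = np.empty(len(a0))
    for i in range(len(a0)):
        Ai = A.copy()
        Ai[:, i] = a0                    # replace C_i by C_0
        x[i] = np.linalg.det(Ai) / denom
    return x

A = [[1, -2, 3], [2, 0, 7], [1, 1, 1]]
print(solve_by_wedge(A, [2, 9, 8]))
print(np.linalg.solve(np.array(A, float), [2, 9, 8]))  # same result
```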

Example solution: 3 equations in 4 unknowns


Consider the following system of 3 equations in 4 unknowns.

a - 2b + 3c + 4d = 2

2a + 7c - 5d = 9

a + b + c + d = 8
To solve this system, first declare a basis of dimension at least equal to the number of equations. In
this example, we declare a 4-space so that we can add another equation later.

DeclareBasis@4D
8e1 , e2 , e3 , e4 <

Next define a 1-element for each unknown which encodes its coefficients, and one which encodes
the constants

Ca = e1 + 2 e2 + e3 ;
Cb = -2 e1 + e3 ;
Cc = 3 e1 + 7 e2 + e3 ;
Cd = 4 e1 - 5 e2 + e3 ;
C0 = 2 e1 + 9 e2 + 8 e3 ;

The system equation then becomes:

a Ca + b Cb + c Cc + d Cd ã C0

Suppose we wish to eliminate a and d thus giving a relationship between b and c. To accomplish
this we multiply the system equation through by Ca Ô Cd .

Ha Ca + b Cb + c Cc + d Cd L Ô Ca Ô Cd ã C0 Ô Ca Ô Cd

Or, since the terms involving a and d will obviously be eliminated by their product with Ca Ô Cd ,
we have more simply:

Hb Cb + c Cc L Ô Ca Ô Cd ã C0 Ô Ca Ô Cd

This can be put in a form similar to that of equations 2.32 and 2.33 by dividing through by the right
side.


$$\frac{(b\, C_b + c\, C_c) \wedge C_a \wedge C_d}{C_0 \wedge C_a \wedge C_d} \;=\; 1$$

Expanding and simplifying the numerator and denominator then gives the required relationship
between b and c.

¯%@Hb Cb + c Cc L Ô Ca Ô Cd D ê ¯%@C0 Ô Ca Ô Cd D ã 1

H27 b - 29 cL ê 63 ã 1

Example solution: 4 equations in 4 unknowns


Had we had a fourth equation, say b - 3c + d = 7, we could have simply added the new information for each coefficient.

Ca = e1 + 2 e2 + e3 ;
Cb = -2 e1 + e3 + e4 ;
Cc = 3 e1 + 7 e2 + e3 - 3 e4 ;
Cd = 4 e1 - 5 e2 + e3 + e4 ;
C0 = 2 e1 + 9 e2 + 8 e3 + 7 e4 ;

Now we can use equation 2.33 directly

$$\frac{C_a \wedge C_0 \wedge C_c \wedge C_d}{C_a \wedge C_b \wedge C_c \wedge C_d}$$

and calculate the result directly by simplifying:

¯%@Ca Ô C0 Ô Cc Ô Cd D ê ¯%@Ca Ô Cb Ô Cc Ô Cd D

b ã -30 ê 41

2.10 Simplicity

The concept of simplicity


An important concept in the Grassmann algebra is that of simplicity. Earlier in the chapter we
introduced the concept informally. Now we will discuss it in a little more detail.

An element is simple if it is the exterior product of 1-elements. We extend this definition to scalars by defining all scalars to be simple. Clearly also, since any n-element always reduces to a product of 1-elements, all n-elements are simple. Thus we see immediately that all 0-elements, 1-elements, and n-elements are simple. In the next section we show that all (n-1)-elements are also simple.


In 2-space, therefore, all elements of $\underset{0}{L}$, $\underset{1}{L}$, and $\underset{2}{L}$ are simple. In 3-space all elements of $\underset{0}{L}$, $\underset{1}{L}$, $\underset{2}{L}$ and $\underset{3}{L}$ are simple. In higher-dimensional spaces, elements of grade 2 ≤ m ≤ n-2 are therefore the only ones that may not be simple. In 4-space, the only elements that may not be simple are those of grade 2.

There is a straightforward way of testing the simplicity of 2-elements not shared by elements of higher grade. A 2-element is simple if and only if its exterior square is zero. (The exterior square of $\underset{m}{\alpha}$ is $\underset{m}{\alpha} \wedge \underset{m}{\alpha}$.) Since odd elements (elements of odd grade) anti-commute, the exterior square of odd elements will be zero, even if they are not simple. An even element of grade 4 or higher may be of the form of the exterior product of a 1-element with a non-simple 3-element, whence its exterior square is zero without its being simple.

We will return to a further discussion of simplicity from the point of view of factorization in
Chapter 3: The Regressive Product.

All (n-1)-elements are simple


In general, (n-1)-elements are simple. We can show this as follows.

Consider first two simple (n-1)-elements. Since they can differ by at most one 1-element factor (otherwise they would together contain more than n independent factors), we can express them as $\underset{n-2}{\alpha} \wedge \beta_1$ and $\underset{n-2}{\alpha} \wedge \beta_2$. Summing these, and factoring out the common (n-2)-element, gives a simple (n-1)-element.

$$\underset{n-2}{\alpha} \wedge \beta_1 + \underset{n-2}{\alpha} \wedge \beta_2 \;=\; \underset{n-2}{\alpha} \wedge (\beta_1 + \beta_2)$$

Any (n-1)-element can be expressed as the sum of simple (n-1)-elements. We can therefore prove the general case by supposing pairs of simple elements to be combined to form another simple element, until just one simple element remains.

Conditions for simplicity of a 2-element in a 4-space


Consider a simple 2-element in a 4-dimensional space. First we declare a 4-dimensional basis for
the space, and then compose a general bivector B. We can shortcut the entry of the bivector by
using the GrassmannAlgebra function ComposeBivector. ComposeBivector automatically
declares the coefficients to be scalar symbols by including them in the list of declared scalar
symbols.

¯!4 ; B = ComposeBivector@aD
a1 e1 Ô e2 + a2 e1 Ô e3 + a3 e1 Ô e4 + a4 e2 Ô e3 + a5 e2 Ô e4 + a6 e3 Ô e4


Since we are supposing B to be simple, B Ô B ã 0. To see how this constrains the coefficients ai
of the terms of B we expand and simplify the product:

¯%@B Ô BD
H2 a3 a4 - 2 a2 a5 + 2 a1 a6 L e1 Ô e2 Ô e3 Ô e4

We thus see that the condition for a 2-element in a 4-dimensional space to be simple may be
written:

a3 a4 - a2 a5 + a1 a6 ã 0 2.34
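Condition 2.34 is the classical Plücker relation for bivectors. A small Python check (helper names are our own; the coefficients a1, …, a6 are taken on e1∧e2, e1∧e3, e1∧e4, e2∧e3, e2∧e4, e3∧e4 as above): any wedge of two vectors passes, while e1∧e2 + e3∧e4 fails.

```python
from itertools import combinations

def wedge2(u, v):
    """Coefficients of u^v on e_i^e_j (i<j), in lexicographic order."""
    return [u[i] * v[j] - u[j] * v[i] for i, j in combinations(range(4), 2)]

def is_simple_4(a):
    a1, a2, a3, a4, a5, a6 = a
    return abs(a3 * a4 - a2 * a5 + a1 * a6) < 1e-9   # condition 2.34

print(is_simple_4(wedge2([1., 2., 0., -1.], [0., 3., 1., 2.])))  # True
print(is_simple_4([1., 0., 0., 0., 0., 1.]))  # False: e1^e2 + e3^e4
```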

Conditions for simplicity of a 2-element in a 5-space


The situation in 5-space is a little more complex. First we declare a 5-space, and then compose a
general bivector.

¯!5 ; B = ComposeBivector@aD
a1 e1 Ô e2 + a2 e1 Ô e3 + a3 e1 Ô e4 + a4 e1 Ô e5 + a5 e2 Ô e3 +
a6 e2 Ô e4 + a7 e2 Ô e5 + a8 e3 Ô e4 + a9 e3 Ô e5 + a10 e4 Ô e5

Next, we expand and simplify its exterior square.

B2 = ¯%@B Ô BD
H2 a3 a5 - 2 a2 a6 + 2 a1 a8 L e1 Ô e2 Ô e3 Ô e4 +
H2 a4 a5 - 2 a2 a7 + 2 a1 a9 L e1 Ô e2 Ô e3 Ô e5 +
H2 a4 a6 - 2 a3 a7 + 2 a1 a10 L e1 Ô e2 Ô e4 Ô e5 +
H2 a4 a8 - 2 a3 a9 + 2 a2 a10 L e1 Ô e3 Ô e4 Ô e5 +
H2 a7 a8 - 2 a6 a9 + 2 a5 a10 L e2 Ô e3 Ô e4 Ô e5

For the bivector B to be simple we require this product B2 to be zero, hence each of its five
coefficients must be zero simultaneously. We can extract the coefficients by using
GrassmannAlgebra's ExtractCoefficients function, and then thread them into a list of five
equations.

A = Thread@ExtractCoefficients@LBasisD@B2 D == 80, 0, 0, 0, 0<D


82 a3 a5 - 2 a2 a6 + 2 a1 a8 ã 0,
2 a4 a5 - 2 a2 a7 + 2 a1 a9 ã 0, 2 a4 a6 - 2 a3 a7 + 2 a1 a10 ã 0,
2 a4 a8 - 2 a3 a9 + 2 a2 a10 ã 0, 2 a7 a8 - 2 a6 a9 + 2 a5 a10 ã 0<

Finally, we can solve these five equations with Mathematica's inbuilt Solve function.

Solve@A, 8a8 , a9 , a10 <D


a3 a5 - a2 a6 a4 a5 - a2 a7 a4 a6 - a3 a7
::a8 Ø - , a9 Ø - , a10 Ø - >>
a1 a1 a1

By inspection we see that these three solutions correspond to the first three equations.
Consequently, from the generality of the expression for the bivector we expect any three equations
to suffice.

In summary we can say that a 2-element in a 5-space is simple if any three of the following
equations is satisfied:

a3 a5 - a2 a6 + a1 a8 ã 0
a4 a5 - a2 a7 + a1 a9 ã 0
a4 a6 - a3 a7 + a1 a10 ã 0 2.35
a4 a8 - a3 a9 + a2 a10 ã 0
a7 a8 - a6 a9 + a5 a10 ã 0
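The five conditions of 2.35 are the Plücker relations of a 5-space, one for each 4-element subset {i, j, k, l} of the basis indices: p_ij p_kl - p_ik p_jl + p_il p_jk = 0, where p_ij is the coefficient of ei∧ej. A Python sketch (helper names our own) verifying that the coefficients of any wedge of two vectors satisfy all five:

```python
from itertools import combinations

def wedge_coeffs(u, v, n=5):
    """p[(i,j)]: coefficient of e_i^e_j (i<j) in u^v."""
    return {(i, j): u[i] * v[j] - u[j] * v[i]
            for i, j in combinations(range(n), 2)}

p = wedge_coeffs([1., 0., 2., -1., 3.], [2., 1., 0., 1., -1.])
for i, j, k, l in combinations(range(5), 4):
    # one relation of 2.35 per 4-subset of the basis
    assert abs(p[i, j] * p[k, l] - p[i, k] * p[j, l] + p[i, l] * p[j, k]) < 1e-9
print("all five conditions 2.35 hold for a wedge of two vectors")
```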

† The concept of simplicity will be explored in more detail in Chapter 3: The Regressive Product.

Factorizing simple elements


In Chapter 3; The Regressive Product, we will develop an algorithm for obtaining a factorization of
a simple element. In this section we discuss factorization from first principles using only the
exterior product. To make the discussion concrete, we take an example.

Suppose we have a 4-element X which we would like to factorize. X is expressed in terms of six 1-
elements, and is known to be simple. For reference purposes we begin with the element in factored
form X0 to assure ourselves that it is simple, but the example will take its expanded form X as
starting point. The factorization we will obtain is of course unlikely to be of the same form as X0 .

X0 = H2 x + 3 yL Ô Hu - v - xL Ô H2 w - 5 z + y + uL Ô Hz - xL;

X = ¯%@X0 D
3 uÔvÔxÔy + 2 uÔvÔxÔz + 3 uÔvÔyÔz + 6 uÔwÔxÔy +
4 u Ô w Ô x Ô z + 6 u Ô w Ô y Ô z - 14 u Ô x Ô y Ô z - 6 v Ô w Ô x Ô y -
4 v Ô w Ô x Ô z - 6 v Ô w Ô y Ô z + 17 v Ô x Ô y Ô z + 6 w Ô x Ô y Ô z

Clearly, a factor of X must be a linear combination of u, v, w, x, y and z. Let us call it f1 . The si


are scalar coefficients which we declare as scalar symbols by entering ¯¯s@s_ D.

¯¯s@s_ D;
f1 = s1 u + s2 v + s3 w + s4 x + s5 y + s6 z;

Now if f1 is to be a factor of X, its exterior product with X must be zero. Hence


X1 = ¯%@f1 Ô XD ã 0
H-6 s1 - 6 s2 + 3 s3 L u Ô v Ô w Ô x Ô y +
H-4 s1 - 4 s2 + 2 s3 L u Ô v Ô w Ô x Ô z + H-6 s1 - 6 s2 + 3 s3 L u Ô v Ô w Ô y Ô z +
H17 s1 + 14 s2 + 3 s4 - 2 s5 + 3 s6 L u Ô v Ô x Ô y Ô z +
H6 s1 + 14 s3 + 6 s4 - 4 s5 + 6 s6 L u Ô w Ô x Ô y Ô z +
H6 s2 - 17 s3 - 6 s4 + 4 s5 - 6 s6 L v Ô w Ô x Ô y Ô z ã 0

Since the coefficients of each of these terms must be zero, we now have some equations of
constraint between the coefficients of f1 . In this case it is easy to see that the first 3 are identical,
so we can satisfy them by substituting 2 s1 + 2 s2 for s3 .

X2 = HX1 ê. s3 Ø 2 s1 + 2 s2 L êê Simplify
H17 s1 + 14 s2 + 3 s4 - 2 s5 + 3 s6 L
Hu Ô v Ô x Ô y Ô z + 2 u Ô w Ô x Ô y Ô z - 2 v Ô w Ô x Ô y Ô zL ã 0

Similarly we can eliminate s5 to satisfy the remaining constraint.

X3 = X2 /. s5 -> (17 s1 + 14 s2 + 3 s4 + 3 s6)/2 // Simplify
True

Finally, we can substitute back into the expression for f1 to get an expression for a generic factor
of X, which we call f2 .

f2 = f1 /. s3 -> 2 s1 + 2 s2 /. s5 -> (17 s1 + 14 s2 + 3 s4 + 3 s6)/2
u s1 + v s2 + w (2 s1 + 2 s2) + x s4 + z s6 + (y/2) (17 s1 + 14 s2 + 3 s4 + 3 s6)

X can now be constructed as the exterior product of any four linearly independent versions of f2 .
As an example, we create 1-elements xi by letting si equal 1, and the others equal 0.

x1 = f2 /. {s1 -> 1, s2 -> 0, s4 -> 0, s6 -> 0}
u + 2 w + (17 y)/2

x2 = f2 ê. 8s1 Ø 0, s2 Ø 1, s4 Ø 0, s6 Ø 0<
v+2w+7y

x3 = f2 /. {s1 -> 0, s2 -> 0, s4 -> 1, s6 -> 0}
x + (3 y)/2


x4 = f2 /. {s1 -> 0, s2 -> 0, s4 -> 0, s6 -> 1}
(3 y)/2 + z

The exterior product of these xi is

X4 = ¯%[x1 ∧ x2 ∧ x3 ∧ x4]
(3/2) u∧v∧x∧y + u∧v∧x∧z + (3/2) u∧v∧y∧z + 3 u∧w∧x∧y +
2 u∧w∧x∧z + 3 u∧w∧y∧z - 7 u∧x∧y∧z - 3 v∧w∧x∧y -
2 v∧w∧x∧z - 3 v∧w∧y∧z + (17/2) v∧x∧y∧z + 3 w∧x∧y∧z

This turns out to be half of X. So by doubling one of the xi , say x4 , we have the final result which
we can verify.

¯%[X == (u + 2 w + (17 y)/2) ∧ (v + 2 w + 7 y) ∧ (x + (3 y)/2) ∧ (3 y + 2 z)]
True
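The factorization just verified can be replayed independently of the GrassmannAlgebra package. The following Python sketch (an illustration with a minimal multivector representation, not the book's code) wedges the original factors of X0 together and confirms both that the derived factor u + 2w + (17/2)y has zero exterior product with X, and that the four factors found above reproduce X exactly.

```python
from fractions import Fraction

def wedge(a, b):
    # Exterior product of multivectors stored as {sorted index tuple: coefficient}.
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            if set(ia) & set(ib):
                continue                      # repeated 1-element factor -> zero
            idx, sign = list(ia + ib), 1
            for i in range(len(idx)):         # bubble sort, tracking permutation parity
                for j in range(len(idx) - 1):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: c for k, c in out.items() if c != 0}

def wedge_all(*factors):
    result = {(): Fraction(1)}                # the scalar 1
    for f in factors:
        result = wedge(result, f)
    return result

def lin(*terms):
    # Linear combination: lin((2, x), (3, y)) represents 2x + 3y.
    out = {}
    for c, m in terms:
        for k, cv in m.items():
            out[k] = out.get(k, 0) + Fraction(c) * cv
    return {k: cv for k, cv in out.items() if cv != 0}

# Basis 1-elements u, v, w, x, y, z as indices 1..6.
u, v, w, x, y, z = ({(i,): Fraction(1)} for i in range(1, 7))

# X0 = (2x + 3y) ^ (u - v - x) ^ (2w - 5z + y + u) ^ (z - x)
X = wedge_all(lin((2, x), (3, y)),
              lin((1, u), (-1, v), (-1, x)),
              lin((2, w), (-5, z), (1, y), (1, u)),
              lin((1, z), (-1, x)))

# The derived factor x1 = u + 2w + (17/2)y annihilates X under the exterior product.
x1 = lin((1, u), (2, w), (Fraction(17, 2), y))
print(wedge(x1, X) == {})                     # True: x1 is a factor of X

# The complete refactorization reproduces X exactly.
x2 = lin((1, v), (2, w), (7, y))
x3 = lin((1, x), (Fraction(3, 2), y))
x4 = lin((3, y), (2, z))
print(wedge_all(x1, x2, x3, x4) == X)         # True
```

The dictionary representation makes the "exterior product with X must be zero" criterion a simple equality test against the empty multivector.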

Ï Applying the process to a non-simple element

Consider the case where X is not simple. For example

X = x Ô y + u Ô v;

If X had a 1-element factor, it would have to be some linear combination of x, y, u, and v.

f = s1 x + s2 y + s3 u + s4 v;

If f is a factor of X, its exterior product with X must be zero.

¯%@f Ô XD
s1 u Ô v Ô x + s2 u Ô v Ô y + s3 u Ô x Ô y + s4 v Ô x Ô y

Clearly this expression can only be zero if all the coefficients si are zero, which in turn implies f
must be zero. Hence in this case, X has no 1-element factors.
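This conclusion can also be checked mechanically. For X = x∧y + u∧v, wedging each of the four basis 1-elements with X yields four nonzero 3-elements supported on pairwise distinct basis trivectors, so no nonzero linear combination of them can vanish, and f must be zero. A Python sketch (an independent illustration using a minimal multivector representation, not GrassmannAlgebra code):

```python
def wedge(a, b):
    # Exterior product of multivectors stored as {sorted index tuple: coefficient}.
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            if set(ia) & set(ib):
                continue
            idx, sign = list(ia + ib), 1
            for i in range(len(idx)):
                for j in range(len(idx) - 1):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: c for k, c in out.items() if c != 0}

# x, y, u, v as basis indices 1..4;  X = x^y + u^v.
X = {(1, 2): 1, (3, 4): 1}

# Wedge each basis 1-element with X.
products = [wedge({(i,): 1}, X) for i in (1, 2, 3, 4)]
supports = [set(p) for p in products]

# Each product is nonzero, and no two share a basis trivector, so the
# coefficients of f = s1 x + s2 y + s3 u + s4 v must vanish independently.
print(all(p != {} for p in products))                    # True
print(all(supports[i].isdisjoint(supports[j])
          for i in range(4) for j in range(i + 1, 4)))   # True
```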

More generally, an element being non-simple does not imply that it has no 1-element factors. Here
is a non-simple element with two 1-element factors.

X = Hx Ô y + u Ô vL Ô w Ô z;

f = s1 x + s2 y + s3 u + s4 v + s5 w + s6 z;

¯%@f Ô XD
-s1 u Ô v Ô w Ô x Ô z - s2 u Ô v Ô w Ô y Ô z + s3 u Ô w Ô x Ô y Ô z + s4 v Ô w Ô x Ô y Ô z

Hence s1 , s2 , s3 , s4 must be zero, leaving f = s5 w + s6 z as a generic factor of X.


2.11 Exterior Division

The definition of an exterior quotient


It will be seen in Chapter 4: Geometric Interpretations that one way of defining geometric entities
like lines and planes is by the exterior quotient of two interpreted elements. Such a quotient does
not yield a unique element. Indeed this is why it is useful for defining geometric entities, for they
are composed of sets of elements.

The exterior quotient of a simple (m+k)-element α by another simple k-element β which is
contained in α is defined to be the most general m-element (γ, say) contained in α such that the
exterior product of the quotient γ with the denominator β yields the numerator α.

    α / β == γ   ⟹   γ ∧ β == α        2.36

(Here α has grade m+k, β has grade k, and γ has grade m.)

Note the convention adopted for the order of the factors.

In Chapter 9: Exploring Grassmann Algebra we shall generalize this definition of quotient to
define the general division of Grassmann numbers.

Division by a 1-element
Consider the quotient of a simple (m+1)-element α by a 1-element β. Since α contains β we can
write it as α1∧α2∧…∧αm∧β. However, we could also have written this numerator in the more
general form:

    (α1 + t1 β) ∧ (α2 + t2 β) ∧ … ∧ (αm + tm β) ∧ β

where the ti are arbitrary scalars. It is in this more general form that the numerator must be written
before β can be 'divided out'. Thus the quotient may be written:

    (α1 ∧ α2 ∧ … ∧ αm ∧ β) / β == (α1 + t1 β) ∧ (α2 + t2 β) ∧ … ∧ (αm + tm β)        2.37


Division by a k-element
Suppose now we have a quotient with a simple k-element denominator. Since the denominator is
contained in the numerator, we can write the quotient as:

    ((α1 ∧ α2 ∧ … ∧ αm) ∧ (β1 ∧ β2 ∧ … ∧ βk)) / (β1 ∧ β2 ∧ … ∧ βk)

To prepare for the 'dividing out' of the β factors we rewrite the numerator in the more general form:

    (α1 + t11 β1 + … + t1k βk) ∧ (α2 + t21 β1 + … + t2k βk) ∧
    … ∧ (αm + tm1 β1 + … + tmk βk) ∧ (β1 ∧ β2 ∧ … ∧ βk)

where the tij are arbitrary scalars. Hence:

    ((α1 ∧ α2 ∧ … ∧ αm) ∧ (β1 ∧ β2 ∧ … ∧ βk)) / (β1 ∧ β2 ∧ … ∧ βk) ==
    (α1 + t11 β1 + … + t1k βk) ∧ (α2 + t21 β1 + … + t2k βk) ∧
    … ∧ (αm + tm1 β1 + … + tmk βk)        2.38

In the special case of m equal to 1, this reduces to:

    (α ∧ β1 ∧ β2 ∧ … ∧ βk) / (β1 ∧ β2 ∧ … ∧ βk) == α + t1 β1 + … + tk βk        2.39

We will later see that this formula neatly defines a hyperplane.
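Formula 2.39 can be spot-checked numerically: whichever values the ti take, wedging the quotient back with the denominator recovers the numerator, since each ti βi term is annihilated by the denominator. The following Python sketch (an illustrative check with arbitrarily chosen 1-elements and parameter values, not GrassmannAlgebra code) confirms this in a 4-space with k = 2.

```python
def wedge(a, b):
    # Exterior product of multivectors stored as {sorted index tuple: coefficient}.
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            if set(ia) & set(ib):
                continue
            idx, sign = list(ia + ib), 1
            for i in range(len(idx)):
                for j in range(len(idx) - 1):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: c for k, c in out.items() if c != 0}

def lin(*terms):
    # Linear combination: lin((1, a), (t, b)) represents a + t b.
    out = {}
    for c, m in terms:
        for k, cv in m.items():
            out[k] = out.get(k, 0) + c * cv
    return {k: cv for k, cv in out.items() if cv != 0}

# Arbitrary 1-elements in a 4-space (indices 1..4).
alpha = {(1,): 2, (3,): 1}
b1 = {(2,): 1, (4,): -3}
b2 = {(1,): 1, (2,): 1}

denom = wedge(b1, b2)
numer = wedge(alpha, denom)                  # alpha ^ b1 ^ b2

# Every member of the family alpha + t1 b1 + t2 b2 is a valid exterior quotient:
for t1, t2 in [(0, 0), (5, -2), (7, 11)]:
    quotient = lin((1, alpha), (t1, b1), (t2, b2))
    print(wedge(quotient, denom) == numer)   # True for every choice of t1, t2
```

This is exactly the non-uniqueness the text describes: the quotient is a parametric family of 1-elements, which is what makes it suitable for defining geometric entities like hyperplanes.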

Ï Special cases

In the special cases where the numerator factor a or the denominator b is a scalar, the results are
unique.

    (a β1 ∧ β2 ∧ … ∧ βk) / (β1 ∧ β2 ∧ … ∧ βk) == a        2.40

    (b α1 ∧ α2 ∧ … ∧ αm) / b == α1 ∧ α2 ∧ … ∧ αm        2.41


Automating the division process


By using the GrassmannAlgebra function ExteriorQuotient you can automatically generate
the expression resulting from the exterior quotient of two exterior products. For example, in a 3-
space, the exterior quotient of the basis trivector e1 ∧ e2 ∧ e3 by the basis bivector e1 ∧ e2 results
in the variable vector e3 + e1 ¯t1 + e2 ¯t2, where the coefficients ¯t1 and ¯t2 are
automatically generated arbitrary scalar symbols (Mathematica automatically puts these scalar
symbols after the basis elements because of its internal ordering routine).

¯!3; ExteriorQuotient[(e1 ∧ e2 ∧ e3) / (e1 ∧ e2)]
e3 + e1 ¯t1 + e2 ¯t2

Whereas the exterior quotient of the basis trivector e1 ∧ e2 ∧ e3 by the basis vector e1 results in
the variable bivector (e2 + e1 ¯t1) ∧ (e3 + e1 ¯t2).

ExteriorQuotient[(e1 ∧ e2 ∧ e3) / e1]
(e2 + e1 ¯t1) ∧ (e3 + e1 ¯t2)

ExteriorQuotient requires that the factors in the denominator are also specifically displayed
in the numerator.

ExteriorQuotient[((x - y) ∧ (y - z) ∧ (z - x)) / ((z - x) ∧ (x - y))]
y - z + (x - y) ¯t1 + (-x + z) ¯t2

More generally, you can use ExteriorQuotient on any Grassmann expression involving
exterior quotients.

ExteriorQuotient[a (e1 ∧ e2 ∧ e3)/e1 + b (e1 ∧ e2 ∧ e3)/(e1 ∧ e2) + c (e1 ∧ e2 ∧ e3)/(e1 ∧ e2 ∧ e3)]
c + b (e3 + e1 ¯t1 + e2 ¯t2) + a (e2 + e1 ¯t3) ∧ (e3 + e1 ¯t4)

2.12 Multilinear Forms

The span of a simple element


We have already introduced the notion of the space of a simple m-element in Section 2.3. The
space of a simple m-element is the m-space whose underlying linear space consists of all the 1-
elements in the m-element. A 1-element is said to be in a simple m-element if and only if their
exterior product is zero. The space of a simple m-element is a subspace of the n-space in which it
resides. An n-space and the space of its n-element are thus identical.

In order to explore the space of a simple m-element we can of course choose as basis for its
underlying linear space, any set of m independent 1-elements whose exterior product is congruent
to the given m-element. Such sets span the space of the simple m-element, and by extension, we
call each of them a span of the m-element.

For example, if A ã xÔyÔz, then we can say that a span of A is {x, y, z}. But of course, {x +
y, y, z} and {a x, y, z} are also spans of A.

Practically however, the simple m-elements we are working with will often be available to us in
some known factored form, that is, as a given exterior product of m 1-elements. In these cases it is
convenient for computational purposes to choose these factors to span the element, and, by abuse
of terminology, call the list of these factors the span of the element. (We can read "the span" to
mean "the current working span" of the element).

For example, if A ã xÔyÔz, then we say that the span of A is {x, y, z}.

More generally, we define the k-span of a simple m-element as the list of all the essentially
different exterior products of grade k formed from the span (or 1-span) of the element.

Given this provisory preamble, we can now say that the most straightforward way to conceive of
the span of a simple m-element is as the basis of an m-space, where the basis is the list of factors of
the m-element in order. Similarly the k-span of a simple m-element may be conceived of as the
basis of the concomitant exterior product space of grade k, and the k-cospan of the element may be
conceived of as its cobasis.

The k-span of a simple m-element has no natural notion of invariance attached to it, because the
same m-element, factorized differently, will generate different k-spans. On the other hand, the pair
comprising the k-span of an m-element and its concomitant (m-k)-span can, if present together
appropriately in a multilinear form, generate results invariant to different factorizations. Many
expressions in the algebra are multilinear forms, and an understanding of their properties is
instructive. We will explore these forms in the sections to follow. But first, we solidify what these
definitions mean by seeing how to compose a span.

Composing spans
In this section we discuss how to compose the span (1-span), k-span or k-cospan of a simple m-
element. As a concrete case, let us take a 4-element A. The compositions are independent of the
dimension of the currently declared space, but any computations with them will require the
dimension of the space to be at least that of the m-element. First we compose A. (By using
ComposeSimpleForm, the ai are automatically added to the list of declared vector symbols.)

A = ComposeSimpleFormBa, 1F
4

a1 Ô a2 Ô a3 Ô a4

To compose the span of A, you can use the GrassmannAlgebra function ComposeSpan[A][1]
or its alternative representation ¯#1 @AD. To compose the 2-span of A you can use
ComposeSpan[A][2] or its alternative ¯#2 @AD.

2009 9 3
Grassmann Algebra Book.nb 115

8ComposeSpan@AD@1D, ¯#2 @AD<


88a1 , a2 , a3 , a4 <, 8a1 Ô a2 , a1 Ô a3 , a1 Ô a4 , a2 Ô a3 , a2 Ô a4 , a3 Ô a4 <<

To compose the cospan of A, you can use ComposeCospan[A][1] or ¯#^1[A]. To compose the
k-cospan of A you can use ComposeCospan[A][2] or ¯#^2[A]. (You can enter the 2 in ¯#^2 as
either a superscript or a power.) Note that while a k-span is a list of k-elements, a k-cospan is a list
of (m-k)-elements.

9ComposeCospan@AD@1D, ¯#2 @AD=

88a2 Ô a3 Ô a4 , -Ha1 Ô a3 Ô a4 L, a1 Ô a2 Ô a4 , -Ha1 Ô a2 Ô a3 L<,


8a3 Ô a4 , -Ha2 Ô a4 L, a2 Ô a3 , a1 Ô a4 , -Ha1 Ô a3 L, a1 Ô a2 <<

Since the exterior product is Listable (like all Grassmann products in GrassmannAlgebra), the
exterior product of a k-span and its k-cospan gives a list of 4-elements which simplify to 4 copies
of A.

¯#2 @AD Ô ¯#2 @AD


8a1 Ô a2 Ô a3 Ô a4 , a1 Ô a3 Ô -Ha2 Ô a4 L, a1 Ô a4 Ô a2 Ô a3 ,
a2 Ô a3 Ô a1 Ô a4 , a2 Ô a4 Ô -Ha1 Ô a3 L, a3 Ô a4 Ô a1 Ô a2 <
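The sign convention built into the cospan can be verified directly: if each cospan entry carries the sign of the permutation taking the factors (S, complement) back to natural order, then span〚i〛 ∧ cospan〚i〛 always reproduces A, while the reversed product cospan〚i〛 ∧ span〚i〛 picks up the factor (-1)^(k(m-k)). The following Python sketch (an independent illustration for m = 4, not GrassmannAlgebra code) checks both claims for every k and every subset of factors.

```python
from itertools import combinations

def perm_sign(seq):
    # Parity of the permutation sorting seq into increasing order.
    seq, sign = list(seq), 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def wedge(a, b):
    # Exterior product of multivectors stored as {sorted index tuple: coefficient}.
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            if set(ia) & set(ib):
                continue
            key = tuple(sorted(ia + ib))
            out[key] = out.get(key, 0) + perm_sign(ia + ib) * ca * cb
    return {k: c for k, c in out.items() if c != 0}

m = 4
A = tuple(range(1, m + 1))        # stand-ins for the factors a1, a2, a3, a4

for k in range(m + 1):
    for S in combinations(A, k):
        comp = tuple(i for i in A if i not in S)
        span_el = {S: 1}
        cospan_el = {comp: perm_sign(S + comp)}   # cobasis sign convention
        assert wedge(span_el, cospan_el) == {A: 1}
        assert wedge(cospan_el, span_el) == {A: (-1) ** (k * (m - k))}

print("span/cospan sign conventions verified for m =", m)
```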

In the discussion of multilinear forms to follow we will usually use the alternative representation
¯# since its compact nature makes the composition of the forms clearer. Further syntax for using
¯# follows.

† To compose the k-spans or k-cospans for several values of k you can enter the values of k
enclosed in ¯[ ]. For example

¯#¯@1,3D @AD

88a1 , a2 , a3 , a4 <, 8a1 Ô a2 Ô a3 , a1 Ô a2 Ô a4 , a1 Ô a3 Ô a4 , a2 Ô a3 Ô a4 <<

¯#¯@1,3D @AD
88a2 Ô a3 Ô a4 , -Ha1 Ô a3 Ô a4 L, a1 Ô a2 Ô a4 , -Ha1 Ô a2 Ô a3 L<,
8a4 , -a3 , a2 , -a1 <<

† If you want to multiply the elements of a span by scalar coefficients you can use Mathematica's
inbuilt listability attribute of Times, and simply multiply the span by the list of coefficients
(making sure that the two lists are the same length). The alias ¯¯S stands for
DeclareExtraScalarSymbols, and is here used to declare any subscripted s as a scalar.

¯¯S@s_ D;
8s1 , s2 , s3 , s4 , s5 , s6 < ¯#2 @AD
8s1 a1 Ô a2 , s2 a1 Ô a3 , s3 a1 Ô a4 , s4 a2 Ô a3 , s5 a2 Ô a4 , s6 a3 Ô a4 <

† If you want to multiply the elements of a span by scalar coefficients and then sum them, you can
use Mathematica's Dot (.) function.


8s1 , s2 , s3 , s4 , s5 , s6 <.¯#2 @AD


s1 a1 Ô a2 + s2 a1 Ô a3 + s3 a1 Ô a4 + s4 a2 Ô a3 + s5 a2 Ô a4 + s6 a3 Ô a4

† If you want to use some other type of listable product between your elements (other than
Times) and then sum them, you can use the GrassmannAlgebra alias for Mathematica's Total
function ¯S which simply sums the elements of any list.

¯S@8s1 , s2 , s3 , s4 , s5 , s6 < Ô ¯#2 @ADD


s1 Ô a1 Ô a2 + s2 Ô a1 Ô a3 + s3 Ô a1 Ô a4 + s4 Ô a2 Ô a3 + s5 Ô a2 Ô a4 + s6 Ô a3 Ô a4

† To pick out the ith element in any of these lists you can use Mathematica's inbuilt Part
function by giving a subscript to the list in the form 〚i〛. For example

¯#2[A]〚3〛
a1 ∧ a4

Thus we can write A as the exterior product of any two corresponding terms from a k-span and a k-
cospan.

¯%AA ã ¯#2 @ADP3T Ô ¯#2 @ADP3T E

True

Example: Refactorizations
Let us suppose we have a simple element A for which we have a factorization, and that we wish to
obtain a factorization that has a given 1-element X (which we know belongs to A) as one of the
factors. Take again the example of the previous section.

A = He4 - 2 e7 L Ô H3 e5 + e2 L Ô He1 + 2 e2 L; X = e1 - 6 e5 ;

By taking the 2-span of A and multiplying each of the elements by X we get 3 candidates.

¯#2 @AD Ô X
8He4 - 2 e7 L Ô He2 + 3 e5 L Ô He1 - 6 e5 L,
He4 - 2 e7 L Ô He1 + 2 e2 L Ô He1 - 6 e5 L,
He2 + 3 e5 L Ô He1 + 2 e2 L Ô He1 - 6 e5 L<

Expanding and simplifying these eliminates the third candidate.

¯%@¯#2 @AD Ô XD
8-He1 Ô e2 Ô e4 L + 2 e1 Ô e2 Ô e7 + 3 e1 Ô e4 Ô e5 + 6 e1 Ô e5 Ô e7 +
6 e2 Ô e4 Ô e5 + 12 e2 Ô e5 Ô e7 , -2 e1 Ô e2 Ô e4 + 4 e1 Ô e2 Ô e7 +
6 e1 Ô e4 Ô e5 + 12 e1 Ô e5 Ô e7 + 12 e2 Ô e4 Ô e5 + 24 e2 Ô e5 Ô e7 , 0<

In expanded form, A was equal to


-He1 Ô e2 Ô e4 L + 2 e1 Ô e2 Ô e7 + 3 e1 Ô e4 Ô e5 +
6 e1 Ô e5 Ô e7 + 6 e2 Ô e4 Ô e5 + 12 e2 Ô e5 Ô e7

So the first two candidates are thus valid factorizations congruent to A.

Multilinear forms
In the chapters to follow we introduce various new Grassmann products in addition to the exterior
product: the regressive, interior, inner, generalized, hypercomplex and Clifford products. In the
computation of expressions involving these products we will often come across formulae which
express the computation as a sum of terms where each term itself is a product. These sums must
have the same natural invariance to transformations possessed by the original expression, and it
turns out they are all examples of multilinear forms.

For our purposes, a typical multilinear form is a function M[a1, a2, …, am] of 1-elements ai
which is linear in each of the ai. That is,

M@a1 , a2 , …, a1 b1 + a2 b2 , …, am D ã
a1 M@a1 , a2 , …, b1 , …, am D + a2 M@a1 , a2 , …, b2 , …, am D

where the ai are scalars and the bi are also 1-elements. In this book however, because the M are
usually viewed as products, we write them in their infix form, just as we have seen that
Mathematica allows us to do with the exterior product: entering an expression in the argument
form above displays it in its infix form.

Wedge@a1 , a2 , …, a1 b1 + a2 b2 , …, am D ã
a1 Wedge@a1 , a2 , …, b1 , …, am D + a2 Wedge@a1 , a2 , …, b2 , …, am D
a1 Ô a2 Ô … Ô Ha1 b1 + a2 b2 L Ô … Ô am ã
a1 a1 Ô a2 Ô … Ô b1 Ô … Ô am + a2 a1 Ô a2 Ô … Ô b2 Ô … Ô am

More specifically, we say that a product operation ⊗ is a linear product if it is both left and right
distributive under addition, and permits scalars to be factored out.

    (α + β) ⊗ γ == α ⊗ γ + β ⊗ γ

    α ⊗ (β + γ) == α ⊗ β + α ⊗ γ        2.42

    α ⊗ (a β) == (a α) ⊗ β == a (α ⊗ β)

(Here the elements being added share the same grade, and a is a scalar.)
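The exterior product itself is the prototypical linear product. Using a minimal dictionary representation of multivectors (an independent Python sketch with arbitrarily chosen elements, not GrassmannAlgebra code), the three defining properties can be checked on concrete elements:

```python
def wedge(a, b):
    # Exterior product of multivectors stored as {sorted index tuple: coefficient}.
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            if set(ia) & set(ib):
                continue
            idx, sign = list(ia + ib), 1
            for i in range(len(idx)):
                for j in range(len(idx) - 1):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: c for k, c in out.items() if c != 0}

def add(a, b):
    out = dict(a)
    for k, c in b.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c != 0}

def scale(c, a):
    return {k: c * v for k, v in a.items()} if c != 0 else {}

# Arbitrary 2-elements alpha, beta and a 1-element gamma in a 5-space.
alpha = {(1, 2): 3, (2, 4): -1}
beta = {(1, 3): 2, (4, 5): 4}
gamma = {(2,): 7, (5,): 1}

# Left and right distributivity, and factoring out of scalars:
print(wedge(add(alpha, beta), gamma) == add(wedge(alpha, gamma), wedge(beta, gamma)))  # True
print(wedge(gamma, add(alpha, beta)) == add(wedge(gamma, alpha), wedge(gamma, beta)))  # True
print(wedge(alpha, scale(5, gamma)) == scale(5, wedge(alpha, gamma)))                  # True
```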

In the chapters to follow, the multilinear forms of interest may involve two, or sometimes three,
different linear product operations. To represent these we choose three initially undefined infix
operators from Mathematica's library of symbols, ", #, and ü (CircleTimes, CirclePlus,
CircleDot). We make them Listable (as all the Grassmann products are), and make them
behave linearly within any functions we define intended to operate on expressions involving them.

As a beginning example, consider a simple 3-element A ã a1 Ô a2 Ô a3 in a space of at least 3


dimensions, the three operations ", #, and ü, and two arbitrary Grassmann expressions G1 , and G2 .
From these we can form the expression

F ã HG1 " a1 L ü HG2 # Ha2 Ô a3 LL

Now if we replace a2 , say, by a1 b1 + a2 b2 , we can expand the expression to get a sum of two
expressions.

F ã HG1 " a1 L ü HG2 # HHa1 b1 + a2 b2 L Ô a3 LL ã


HG1 " a1 L ü HG2 # Ha1 b1 Ô a3 + a2 b2 Ô a3 LL ã
HG1 " a1 L ü Ha1 G2 # Hb1 Ô a3 L + a2 G2 # H b2 Ô a3 LL ã
HG1 " a1 L ü Ha1 G2 # Hb1 Ô a3 LL + HG1 " a1 L ü Ha2 Q2 # H b2 Ô a3 LL ã
a1 HG1 " a1 L ü H G2 # Hb1 Ô a3 LL + a2 HG1 " a1 L ü HG2 # H b2 Ô a3 LL

Since the individual product operations are linear, the whole expression is multilinear: substituting
for any of the Gi or ai with a sum of terms will lead to a sum of expressions.

In later applications these linear products will of course be Grassmann products (exterior,
regressive, interior, inner, generalized, hypercomplex or Clifford), so the multilinear form F will
simply be a Grassmann expression (an element of the algebra under consideration).

Defining m:k-forms
In this section we define in more detail the particular types of multilinear forms we will find useful
later in the book. We will find it convenient to call them m:k-forms because, for any given simple
m-element, there are forms of grade k, where k runs from 0 to m. To begin consider a simple m-
element A.

A ã a1 Ô a2 Ô … Ô ak Ô ak+1 Ô … Ô am

The k-span and k-cospan of A are denoted ¯#k[A] and ¯#^k[A]. Each of these lists contains
n = Binomial[m, k] elements, of grade k and grade m-k respectively. The ith element of any
list is denoted with a subscripted 〚i〛.

Hence A can always be written as

A == ¯#k[A]〚i〛 ∧ ¯#^k[A]〚i〛        2.43

where A is a simple m-element, k can range from 0 to m, and i can range from 1 to
n = Binomial[m, k].

A generic m:k-form for A is denoted Fm:k and can now be defined as

2.44


    Fm:k == ∑ (i = 1 to n)  (G1 # ¯#k[A]〚i〛) ü (G2 " ¯#^k[A]〚i〛)

    A == ¯#k[A]〚i〛 ∧ ¯#^k[A]〚i〛

Here, A is a simple m-element, 0 ≤ k ≤ m, G1 and G2 are as yet undefined Grassmann expressions,
and #, ü, and " are as yet undefined linear products. The form is intended to be generic so that it
includes all its special cases, for example

    ∑ (i = 1 to n)  (G1 # ¯#k[A]〚i〛) ü (G2 " ¯#^k[A]〚i〛)

    ∑ (i = 1 to n)  (¯#k[A]〚i〛) ü (¯#^k[A]〚i〛)        2.45

    ∑ (i = 1 to n)  (G1 # ¯#k[A]〚i〛) ü (¯#^k[A]〚i〛)

    ∑ (i = 1 to n)  (¯#k[A]〚i〛) ü (G2 " ¯#^k[A]〚i〛)

Composing m:k-forms
All the product operations we will be dealing with are defined with the Mathematica attribute
Listable. This means that to build up expressions, you only need to enter a product of lists (with
the same number of elements in each), or the product of a single element and a list, to get a list of
the products. To see how this works in building up forms, we can take the simple case of
A = a1 Ô a2 Ô a3 and show the steps as follows.

9¯#1 @AD, ¯#1 @AD=

88a1 , a2 , a3 <, 8a2 Ô a3 , -Ha1 Ô a3 L, a1 Ô a2 <<

9G1 # ¯#1 @AD, G2 " ¯#1 @AD=

88G1 ! a1 , G1 ! a2 , G1 ! a3 <, 8G2 " a2 Ô a3 , G2 " -Ha1 Ô a3 L, G2 " a1 Ô a2 <<

HG1 # ¯#1 @ADL ü IG2 " ¯#1 @ADM

8HG1 ! a1 L ü HG2 " a2 Ô a3 L,


HG1 ! a2 L ü HG2 " -Ha1 Ô a3 LL, HG1 ! a3 L ü HG2 " a1 Ô a2 L<

If at any stage you want to sum the terms of a list, you can apply Mathematica's Total function,
or its GrassmannAlgebra alias ¯S.

¯SAHG1 # ¯#1 @ADL ü IG2 " ¯#1 @ADME


HG1 ! a1 L ü HG2 " a2 Ô a3 L +
HG1 ! a2 L ü HG2 " -Ha1 Ô a3 LL + HG1 ! a3 L ü HG2 " a1 Ô a2 L

Because all our product operations are defined to be listable you can compose the forms in the
previous section without explicitly indexing the terms (i) and specifying their number (n). All you
need to do is eliminate the index and replace the summation sign with ¯S.

¯S@HG1 # ¯"k @ADL ü IG2 " ¯"k @ADME


¯S@H¯"k @ADL ü I¯"k @ADME
2.46
¯S@HG1 # ¯"k @ADL ü I¯"k @ADM E
¯S@H¯"k @ADL ü IG2 " ¯"k @ADME

For example if A = a1 Ô a2 Ô a3 , the following two expressions give the same result.

A = a1 Ô a2 Ô a3 ;
∑ (i = 1 to 3)  (G1 # ¯#1[A]〚i〛) ü (G2 " ¯#^1[A]〚i〛)

HG1 ! a1 L ü HG2 " a2 Ô a3 L +


HG1 ! a2 L ü HG2 " -Ha1 Ô a3 LL + HG1 ! a3 L ü HG2 " a1 Ô a2 L

¯S@HG1 # ¯#1 @ADL ü IG2 " ¯#1 @ADME


HG1 ! a1 L ü HG2 " a2 Ô a3 L +
HG1 ! a2 L ü HG2 " -Ha1 Ô a3 LL + HG1 ! a3 L ü HG2 " a1 Ô a2 L

Expanding and simplifying m:k-forms


If you want to expand and simplify an m:k form, you can use the GrassmannAlgebra function
ExpandAndSimplifyForm. This function distributes products over sums, and factors scalars
out of terms. For example

F = HG1 # Ha x + b yLL ü HG2 " Hc zLL


HG1 ! Ha x + b yLL ü HG2 " Hc zLL

Note that, because there are a number of product operators, each with its own precedence, it is
advisable to ensure the expected behaviour by liberal use of parentheses in the input expression.

ExpandAndSimplifyForm@FD
a c HG1 ! xL ü HG2 " zL + b c HG1 ! yL ü HG2 " zL

As another example let A be a 3-element and F3,1 be a form based on it.


A = a1 Ô a2 Ô a3 ;

F3,1 = ¯S@HG1 # ¯#1 @ADL ü IG2 " ¯#1 @ADME

HG1 ! a1 L ü HG2 " a2 Ô a3 L +


HG1 ! a2 L ü HG2 " -Ha1 Ô a3 LL + HG1 ! a3 L ü HG2 " a1 Ô a2 L

Now if a1 is expressed as a linear combination of other 1-elements say

A = Ha x + b yL Ô a2 Ô a3 ;

the form is now composed as

F3,1 = ¯S@HG1 # ¯#1 @ADL ü IG2 " ¯#1 @ADME

HG1 ! Ha x + b yLL ü HG2 " a2 Ô a3 L +


HG1 ! a2 L ü HG2 " -HHa x + b yL Ô a3 LL + HG1 ! a3 L ü HG2 " Ha x + b yL Ô a2 L

ExpandAndSimplifyForm applied to this new form now gives

ExpandAndSimplifyForm@F3,1 D

a HG1 ! xL ü HG2 " a2 Ô a3 L + b HG1 ! yL ü HG2 " a2 Ô a3 L -


a HG1 ! a2 L ü HG2 " x Ô a3 L - b HG1 ! a2 L ü HG2 " y Ô a3 L +
a HG1 ! a3 L ü HG2 " x Ô a2 L + b HG1 ! a3 L ü HG2 " y Ô a2 L

Developing invariant forms


Our objective in this section is to understand the construction of forms which are invariant to
precisely the same types of substitutions that leave an m-element invariant. For example, a 3-
element A ã a1 Ô a2 Ô a3 is left invariant if to any of its 1-element factors is added a linear
combination of its other 1-element factors: A is unchanged if we add a a2 to a1 .

A = a1 Ô a2 Ô a3 ;

Aa = a1 Ô a2 Ô a3 ê. a1 Ø a1 + a a2
Ha1 + a a2 L Ô a2 Ô a3

¯%@A ã Aa D
True

Now consider the expression F involving the factors of A and make the same substitution.

F = HG1 " a1 L ü HG2 # Ha2 Ô a3 LL;

Fa = HG1 " a1 L ü HG2 # Ha2 Ô a3 LL ê. a1 Ø a1 + a a2


HG1 " Ha1 + a a2 LL ü HG2 ! a2 Ô a3 L

Applying ExpandAndSimplifyForm to Fa effects the expansion and extracts the scalar
coefficient.


ExpandAndSimplifyForm@Fa D
HG1 " a1 L ü HG2 ! a2 Ô a3 L + a HG1 " a2 L ü HG2 ! a2 Ô a3 L

If the form F had been invariant to this transformation as the 3-element A was, Fa would have
simplified back again to F. But Fa now has an extra term which we notice displays a2 twice, but in
different parts of the product. This isolates the two occurrences of a2 so that the antisymmetry of
the exterior product cannot put their product to zero.

This suggests that we should antisymmetrize the original form F to create a new form Fb by adding
a term featuring the same factors ai of A but reordered to bring a2 to first place. If we do this we
find Fb is invariant to the substitution a1 Ø a1 + a a2 .

Fb = HG1 " a1 L ü HG2 # Ha2 Ô a3 LL + HG1 " a2 L ü HG2 # H-Ha1 Ô a3 LLL;

Fc = Fb ê. a1 Ø a1 + a a2
HG1 " a2 L ü HG2 ! -HHa1 + a a2 L Ô a3 LL + HG1 " Ha1 + a a2 LL ü HG2 ! a2 Ô a3 L

ExpandAndSimplifyForm@Fb ã Fc D
True

Of course, if we want a form which is antisymmetric with respect to substitution of all of its factors
ai , we will need to include all the essentially different decompositions of the 3-element into a 1-
element and a 2-element.

Fd = HG1 " a1 L ü HG2 # Ha2 Ô a3 LL +


HG1 " a2 L ü HG2 # H-Ha1 Ô a3 LLL + HG1 " a3 L ü HG2 # Ha1 Ô a2 LL;

Now, replacing a1 by a1 + a a2 + b a3 in this form leaves the form invariant to the substitution.

Fe = Fd ê. a1 Ø a1 + a a2 + b a3
HG1 " a2 L ü HG2 ! -HHa1 + a a2 + b a3 L Ô a3 LL +
HG1 " a3 L ü HG2 ! Ha1 + a a2 + b a3 L Ô a2 L +
HG1 " Ha1 + a a2 + b a3 LL ü HG2 ! a2 Ô a3 L

ExpandAndSimplifyForm@Fd ã Fe D
True

The form Fd may be recognized as the m:k-form F3,1 introduced in the previous section.

F3,1 = ¯S@HG1 # ¯#1 @ADL ü IG2 " ¯#1 @ADME

HG1 ! a1 L ü HG2 " a2 Ô a3 L +


HG1 ! a2 L ü HG2 " -Ha1 Ô a3 LL + HG1 ! a3 L ü HG2 " a1 Ô a2 L


Properties of m:k-forms
To this point we have discussed simple m-elements A and the m:k-forms Fm,k based on them.

A ã a1 Ô a2 Ô … Ô ak Ô ak+1 Ô … Ô am

Fm,k ã ¯SAHG1 # ¯#k @ADL ü IG2 " ¯#k @ADME

Our interest is to show that any transformation which leaves A invariant, also leaves any of its
associated m:k-forms Fm,k invariant. These transformations are equivalent to refactorizations of
the m-element: we want to show that an m:k-form is unchanged no matter what factorization of A
we use.

For concreteness initially, let us take A to be a 3-element as we have in previous sections.

A = a1 Ô a2 Ô a3 ;

A is of course invariant to any of its factorizations. Any factorization Aa of A is congruent to the


exterior product of three independent linear combinations of the ai .

Aa = Ha1,1 a1 + a1,2 a2 + a1,3 a3 L Ô


Ha2,1 a1 + a2,2 a2 + a2,3 a3 L Ô Ha3,1 a1 + a3,2 a2 + a3,3 a3 L;

Expanding and simplifying Aa gives, as expected, the original factorization A multiplied by the
determinant of the coefficients. (But first we need to declare the coefficients as scalars.)

¯¯S@a__ D;
Ab = ¯%@Aa D
H-a1,3 a2,2 a3,1 + a1,2 a2,3 a3,1 + a1,3 a2,1 a3,2 -
a1,1 a2,3 a3,2 - a1,2 a2,1 a3,3 + a1,1 a2,2 a3,3 L a1 Ô a2 Ô a3

det =
Det@88a1,1 , a1,2 , a1,3 <, 8a2,1 , a2,2 , a2,3 <, 8a3,1 , a3,2 , a3,3 <<D

-a1,3 a2,2 a3,1 + a1,2 a2,3 a3,1 + a1,3 a2,1 a3,2 -


a1,1 a2,3 a3,2 - a1,2 a2,1 a3,3 + a1,1 a2,2 a3,3

Any set of coefficients whose determinant is unity will provide a transformation to which A is
invariant. Now let us compose the range of 3:k-forms for A, and for Aa .

F3,k = TableA¯SAHG1 # ¯#i @ADL ü IG2 " ¯#i @ADME , 8i, 0, 3<E êê
ExpandAndSimplifyForm
8HG1 ! 1L ü HG2 " a1 Ô a2 Ô a3 L, HG1 ! a1 L ü HG2 " a2 Ô a3 L -
HG1 ! a2 L ü HG2 " a1 Ô a3 L + HG1 ! a3 L ü HG2 " a1 Ô a2 L,
HG1 ! a1 Ô a2 L ü HG2 " a3 L - HG1 ! a1 Ô a3 L ü HG2 " a2 L +
HG1 ! a2 Ô a3 L ü HG2 " a1 L, HG1 ! a1 Ô a2 Ô a3 L ü HG2 " 1L<


Fa,3,k = ITableA¯SAHG1 # ¯#i @Aa DL ü IG2 " ¯#i @Aa DME , 8i, 0, 3<E êê
ExpandAndSimplifyForm êê SimplifyM ê.
8Simplify@detD Ø $, Simplify@-detD Ø -$<
8! HG1 ! 1L ü HG2 " a1 Ô a2 Ô a3 L,
! HHG1 ! a1 L ü HG2 " a2 Ô a3 L - HG1 ! a2 L ü HG2 " a1 Ô a3 L +
HG1 ! a3 L ü HG2 " a1 Ô a2 LL,
! HHG1 ! a1 Ô a2 L ü HG2 " a3 L - HG1 ! a1 Ô a3 L ü HG2 " a2 L +
HG1 ! a2 Ô a3 L ü HG2 " a1 LL, ! HG1 ! a1 Ô a2 Ô a3 L ü HG2 " 1L<

Here we have used Mathematica's inbuilt Simplify to extract the determinant of the
transformation and denote it symbolically by $. If we divide by $, we can easily see that this list
of forms is equal to the original list of forms.

Fa,3,k
F3,k ã
$
True

Thus transformations (factorizations) which leave the 3-element A invariant, also leave all its 3:k-
forms invariant.

The complete span of a simple element


In this section we discuss the notion of complete span and cospan and a form based on it. We will
use the result we derive for the form to develop alternative formulations of the General Product
Formula in the next chapter.

The complete span of a simple m-element A is denoted ¯#¯[A] and defined to be the ordered list
of all the k-spans of A, with k ranging from 0 to m.

Similarly the complete cospan of a simple m-element A is denoted ¯#^¯[A] and defined to be the
ordered list of all the k-cospans of A, with k ranging from 0 to m.

For example, if A = u Ô x Ô y Ô z the complete span and complete cospan of A are

A = u Ô x Ô y Ô z;

¯#¯ @AD
881<, 8u, x, y, z<, 8u Ô x, u Ô y, u Ô z, x Ô y, x Ô z, y Ô z<,
8u Ô x Ô y, u Ô x Ô z, u Ô y Ô z, x Ô y Ô z<, 8u Ô x Ô y Ô z<<

¯#¯ @AD
88u Ô x Ô y Ô z<, 8x Ô y Ô z, -Hu Ô y Ô zL, u Ô x Ô z, -Hu Ô x Ô yL<,
8y Ô z, -Hx Ô zL, x Ô y, u Ô z, -Hu Ô yL, u Ô x<, 8z, -y, x, -u<, 81<<


Since the exterior product is Listable, simplifying the exterior product of a complete span with
its complete cospan gives lists of the original m-element.

¯$A¯#¯ @AD Ô ¯#¯ @ADE

88u Ô x Ô y Ô z<, 8u Ô x Ô y Ô z, u Ô x Ô y Ô z, u Ô x Ô y Ô z, u Ô x Ô y Ô z<,


8u Ô x Ô y Ô z, u Ô x Ô y Ô z, u Ô x Ô y Ô z,
u Ô x Ô y Ô z, u Ô x Ô y Ô z, u Ô x Ô y Ô z<,
8u Ô x Ô y Ô z, u Ô x Ô y Ô z, u Ô x Ô y Ô z, u Ô x Ô y Ô z<, 8u Ô x Ô y Ô z<<

Taking the exterior product of a complete cospan with a complete span give a similar result, but
this time the exterior products of the k-cospan elements with the k-span elements include a sign
H-1Lk Hm-kL .

¯$A¯#¯ @AD Ô ¯#¯ @ADE

88u Ô x Ô y Ô z<,
8-Hu Ô x Ô y Ô zL, -Hu Ô x Ô y Ô zL, -Hu Ô x Ô y Ô zL, -Hu Ô x Ô y Ô zL<,
8u Ô x Ô y Ô z, u Ô x Ô y Ô z, u Ô x Ô y Ô z, u Ô x Ô y Ô z,
u Ô x Ô y Ô z, u Ô x Ô y Ô z<, 8-Hu Ô x Ô y Ô zL, -Hu Ô x Ô y Ô zL,
-Hu Ô x Ô y Ô zL, -Hu Ô x Ô y Ô zL<, 8u Ô x Ô y Ô z<<

If m is odd then k(m-k) will be even for all k, and hence there will be no change in sign. If m is
even, then k(m-k) will alternate between even and odd as k ranges from 0 to m. Thus we can
construct a list of signs dependent on the parity of m.

¯r@m_D := Mod@Range@0, mD, 2D;


¯rr@m_D := If@EvenQ@mD, ¯r@mD, 0D

For example

9H-1L¯rr@1D , H-1L¯rr@2D , H-1L¯rr@3D , H-1L¯rr@4D =

81, 81, -1, 1<, 1, 81, -1, 1, -1, 1<<
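The claimed sign pattern is quick to confirm: (-1)^(k(m-k)) is +1 for every k when m is odd, and alternates starting and ending with +1 when m is even, matching the ¯rr construction above. A minimal Python sketch:

```python
def signs(m):
    # The sign (-1)^(k(m-k)) for k = 0..m.
    return [(-1) ** (k * (m - k)) for k in range(m + 1)]

for m in range(1, 6):
    print(m, signs(m))
# 1 [1, 1]
# 2 [1, -1, 1]
# 3 [1, 1, 1, 1]
# 4 [1, -1, 1, -1, 1]
# 5 [1, 1, 1, 1, 1, 1]
```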

With this sign construction, we can now determine a relationship between products of complete
spans and cospans. Below we verify the identity for m ranging from 1 to 5.

¯!6 ;
TableB¯$B¯#¯ BAF Ô ¯#¯ BAF ã H-1L¯rr@mD J¯#¯ BAF Ô ¯#¯ BAFNF, 8m, 1, 5<F
m m m m

8True, True, True, True, True<

In the more general case of a form based on the complete span and cospan of a simple m-element
A, we can obtain a similar identity providing we reverse all the lists, that is, reverse each list of
elements and the list of these lists. To simplify this two-level reversing process we define a
function ReverseAll with alias ¯R.


ReverseAll@s_ListD := Reverse@Reverse êü sD; ¯R = ReverseAll;

    (G1 # ¯#¯[A]) ü (G2 " ¯#^¯[A]) ==
    ¯R[(-1)^¯rr[m] (G1 # ¯#^¯[A]) ü (G2 " ¯#¯[A])]        2.47

(where A is a simple m-element, ¯#¯ denotes the complete span, and ¯#^¯ the complete cospan)

Again we verify the identity for m ranging from 1 to 5.

TableB¯(BJG1 # ¯#¯ BAFN ü JG2 " ¯#¯ BAFNF ã


m m
¯rr@mD ¯
¯(B¯RBH-1L JG1 # ¯# BAFN ü JG2 " ¯#¯ BAFNFF, 8m, 1, 5<F
m m

8True, True, True, True, True<

As might be expected, the identity remains valid no matter to which form the sign change
H-1L¯rr@mD or the ReverseAll operation is applied. Let

F1 @m_D := JG1 # ¯#¯ BAFN ü JG2 " ¯#¯ BAFN


m m

F2 @m_D := JG1 # ¯#¯ BAFN ü JG2 " ¯#¯ BAFN


m m

then the following identities hold

F1 @mD ã ¯RAH-1L¯rr@mD F2 @mDE ã H-1L¯rr@mD ¯R@F2 @mDD 2.48

F2 @mD ã ¯RAH-1L¯rr@mD F1 @mDE ã H-1L¯rr@mD ¯R@F1 @mDD 2.49

¯R@F1 @mDD ã H-1L¯rr@mD F2 @mD 2.50

¯R@F2 @mDD ã H-1L¯rr@mD F1 @mD 2.51

To verify these identities we first define some functions Ji @mD representing them which simplify
the terms, and then tabulate the combined results for m ranging from 1 to 5.

J1 @m_D :=
¯(@F1 @mDD ã ¯(A¯RAH-1L¯rr@mD F2 @mDEE ã ¯(AH-1L¯rr@mD ¯R@F2 @mDDE;

J2 @m_D :=
¯(@F2 @mDD ã ¯(A¯RAH-1L¯rr@mD F1 @mDEE ã ¯(AH-1L¯rr@mD ¯R@F1 @mDDE;


J3 @m_D := ¯(@¯R@F1 @mDDD ã ¯(AH-1L¯rr@mD F2 @mDE;

J4 @m_D := ¯(@¯R@F2 @mDDD ã ¯(AH-1L¯rr@mD F1 @mDE;

Table@And@J1 @mD, J2 @mD, J3 @mD, J4 @mDD, 8m, 1, 5<D


8True, True, True, True, True<

2.13 Unions and Intersections

Union and intersection as a multilinear form


One of the more immediate applications of multilinear forms that we can make at this stage is to
the computing of unions and intersections of simple elements.

The intersection of two simple elements A and B is the element C of highest grade which is
common to both A and B.

A ã A* Ô C; B ã B* Ô C; A* Ô C Ô B* ! 0

The element A* Ô C Ô B* is called the union of A and B.

Two elements A and B are said to be disjoint if C is of grade 0, that is, a scalar: thus AÔB ! 0. The
formulae that we are going to develop still remain valid in this case, so that we will not need to
consider it further.
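These definitions can be made concrete with a toy model of the exterior product. The sketch below (pure Python, our own minimal representation: a multivector is a dict from sorted basis-index tuples to coefficients, nothing from GrassmannAlgebra) confirms that a common factor C of positive grade forces A Ô B to vanish, while disjoint elements give a non-zero product:

```python
def sort_sign(seq):
    """Sign of the permutation sorting seq ascending; 0 if an index repeats."""
    if len(set(seq)) < len(seq):
        return 0, ()
    sign, items = 1, list(seq)
    for i in range(len(items)):               # bubble sort: each adjacent
        for j in range(len(items) - 1 - i):   # swap flips the sign
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                sign = -sign
    return sign, tuple(items)

def wedge(a, b):
    """Exterior product of multivectors stored as {sorted index tuple: coeff}."""
    out = {}
    for t1, c1 in a.items():
        for t2, c2 in b.items():
            sign, t = sort_sign(t1 + t2)
            if sign:
                out[t] = out.get(t, 0) + sign * c1 * c2
    return {t: c for t, c in out.items() if c}

e1, e2, e3, e4 = ({(i,): 1} for i in (1, 2, 3, 4))
C = e2                                  # a common factor of grade 1
A, B = wedge(e1, C), wedge(e3, C)       # A = A* wedge C, B = B* wedge C
print(wedge(A, B))                      # {} : the shared factor kills the product
print(wedge(A, wedge(e3, e4)))          # disjoint elements: non-zero product
```

The empty result in the first print is the dictionary form of zero, illustrating that A and B here are not disjoint.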

The first thing to note is that these definitions determine the union and intersection only up to
congruence, because they remain valid if we multiply and divide the factors in the formulae by
an arbitrary scalar.

A* B* A* B*
Aã Ô Hc CL; B ã Ô Hc CL; Ô Hc CL Ô !0
c c c c

Thus now, if c C is the intersection, I 1c M A* Ô C Ô B* is the union.

The observation that the union and intersection are only defined up to congruence leads us to
consider how we might develop a formula for the union and intersection which is not dependent on
an arbitrary congruence factor. Suppose ü is a linear product as discussed in the previous section,
not equal to the exterior product, but otherwise arbitrary. Then the ü product of A and B can be
written as the ü product of their union and intersection, and the congruence factors cancel.

A ü B ã HA* Ô CL ü HB* Ô CL ã HA* Ô C Ô B* L ü C 2.52

This formula can be written also as

A ü HB* Ô CL ã HA Ô B* L ü C


Suppose B is of grade m, and B* is of grade k, then C is of grade m-k. To compute the union A Ô B*
we need to 'partition' B into such k and m-k elements, and then find the maximal value of k (equal
to k* , say) for which A Ô B* is not zero. To do this partitioning, B has to be simple, and we need to
know some factorization of it. If we partition B using its span and cospan for various k and for the
given factorization, then our formula becomes a special case of an m:k-form, and invariance of the
result to the actual factorization of B is guaranteed.

Hence we write the formula for the ü product of the possibly intersecting simple elements A and B
in terms of their union and intersection as

A ü B ã ¯SAHA Ô ¯"k* @BDL ü ¯"k* @BDE 2.53

where k* is the maximum grade for which the right hand side is not zero.

In the sections to follow, we will make these notions more concrete with some examples.
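Formula 2.53 presupposes that we can enumerate the k-span and k-cospan of a factored simple element. A hypothetical Python sketch of that enumeration (factors are plain strings; the sign is attached so that each span element wedged with its signed cospan partner reproduces B, the convention visible in the notebook's span/cospan tables):

```python
from itertools import combinations

def span_cospan(factors, k):
    """For a simple element B given by its ordered factor list, return
    triples (span, sign, cospan): span is a k-subset of the factors (in
    order), cospan the complementary factors, and sign is chosen so that
    span wedged with sign*cospan reproduces B."""
    m = len(factors)
    triples = []
    for idx in combinations(range(m), k):
        rest = [i for i in range(m) if i not in idx]
        perm = list(idx) + rest
        sign, p = 1, perm[:]            # parity of perm by cycle-sorting
        for i in range(m):
            while p[i] != i:
                j = p[i]
                p[i], p[j] = p[j], p[i]
                sign = -sign
        triples.append(([factors[i] for i in idx], sign,
                        [factors[i] for i in rest]))
    return triples

# 1-span and signed 1-cospan of b1 wedge g1, as in the 2:k-form example
for span, sign, cospan in span_cospan(["b1", "g1"], 1):
    print(span, sign, cospan)
```

The second triple comes out as (g1, -1, b1), matching a term of the shape (A Ô g1) ü (-b1) in a 2:k-form table.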

Where the intersection is evident


To see how this formula works, let us first take some simple examples where the two elements are
expressed in such a way that their union and intersection are evident by inspection.

To enable the computations we first declare any subscripted a, b, or g as vector symbols.

DeclareExtraVectorSymbols@a_ , b_ , g_ D;

Ï Example: 2:k-forms

A = a1 Ô a2 Ô g1 ; B = b1 Ô g1 ;
Here it is obvious that our formula should give us

Ha1 Ô a2 Ô g1 L ü Hb1 Ô g1 L ã Ha1 Ô a2 Ô g1 Ô b1 L ü g1


If we simply compose a table of all the 2:k-forms (k equal to 0, 1, and 2 in this case) without
simplifying the forms, we get the three forms

T = TableA¯SAHA Ô ¯#k @BDL ü ¯#k @BDE, 8k, 0, 2<E; TableForm@TD

Ha1 Ô a2 Ô g1 Ô 1L ü Hb1 Ô g1 L
Ha1 Ô a2 Ô g1 Ô b1 L ü g1 + Ha1 Ô a2 Ô g1 Ô g1 L ü H-b1 L
Ha1 Ô a2 Ô g1 Ô b1 Ô g1 L ü 1

Clearly, when simplified, the first form (k = 0) is equal to A ü B, and the third form (k = 2) is zero.
The second form (k = 1) reduces to a single term, giving us the expected expression for the union
and intersection. To simplify a form, or a list of forms, we use ExpandAndSimplifyForm, or its
alias ¯(.


¯(@TD êê TableForm
Ha1 Ô a2 Ô g1 L ü Hb1 Ô g1 L
-Ha1 Ô a2 Ô b1 Ô g1 L ü g1
0

(Note that ¯( has automatically put the factors in the second form into canonical order, based in
this case on the alphabetical order of the symbols used.)

Ï Example: 3:k-forms

Had B also been a 3-element, we would have generated four forms.

A = a1 Ô a2 Ô g1 ; B = b1 Ô b2 Ô g1 ;

T = TableA¯SAHA Ô ¯#k @BDL ü ¯#k @BDE, 8k, 0, 3<E; TableForm@TD

Ha1 Ô a2 Ô g1 Ô 1L ü Hb1 Ô b2 Ô g1 L
Ha1 Ô a2 Ô g1 Ô b1 L ü Hb2 Ô g1 L + Ha1 Ô a2 Ô g1 Ô b2 L ü H-Hb1 Ô g1 LL + Ha1 Ô a2 Ô g1 Ô
Ha1 Ô a2 Ô g1 Ô b1 Ô b2 L ü g1 + Ha1 Ô a2 Ô g1 Ô b1 Ô g1 L ü H-b2 L + Ha1 Ô a2 Ô g1 Ô b2 Ô g1
Ha1 Ô a2 Ô g1 Ô b1 Ô b2 Ô g1 L ü 1

¯(@TD êê TableForm
Ha1 Ô a2 Ô g1 L ü Hb1 Ô b2 Ô g1 L
-Ha1 Ô a2 Ô b1 Ô g1 L ü Hb2 Ô g1 L + Ha1 Ô a2 Ô b2 Ô g1 L ü Hb1 Ô g1 L
Ha1 Ô a2 Ô b1 Ô b2 Ô g1 L ü g1
0

The union and intersection are clearly given by the third form.

Ï Example: Interchanging the order of A and B

We would not expect that interchanging the order of A and B in the form would make any
difference to the results, except perhaps for a change in sign.

A = a1 Ô a2 Ô g1 ; B = b1 Ô b2 Ô g1 ;

¯(ATableA¯SAHB Ô ¯#k @ADL ü ¯#k @ADE, 8k, 0, 3<EE êê TableForm

Hb1 Ô b2 Ô g1 L ü Ha1 Ô a2 Ô g1 L
-Ha1 Ô b1 Ô b2 Ô g1 L ü Ha2 Ô g1 L + Ha2 Ô b1 Ô b2 Ô g1 L ü Ha1 Ô g1 L
Ha1 Ô a2 Ô b1 Ô b2 Ô g1 L ü g1
0


Ï Example: Interchanging the order of the span and cospan

Neither would we expect that interchanging the order of the span ¯#k and cospan ¯#k functions in
the form would make any difference to the results, except again perhaps for a change in sign.
However this interchange does reverse the order of the forms, making the form displaying the
union and intersection come after the zero form, rather than before it.

A = a1 Ô a2 Ô g1 ; B = b1 Ô b2 Ô g1 ;

¯(ATableA¯SAIA Ô ¯#k @BDM ü ¯#k @BDE, 8k, 0, 3<EE êê TableForm

0
Ha1 Ô a2 Ô b1 Ô b2 Ô g1 L ü g1
-Ha1 Ô a2 Ô b1 Ô g1 L ü Hb2 Ô g1 L + Ha1 Ô a2 Ô b2 Ô g1 L ü Hb1 Ô g1 L
Ha1 Ô a2 Ô g1 L ü Hb1 Ô b2 Ô g1 L

Where the intersection is not evident


In the previous section we explored the application of the union and intersection formula in trivial
cases where we could have more easily obtained the results by inspection. The usefulness of the
formula becomes evident however when the intersection of A and B is not displayed explicitly.

Ï Example: 2:k-forms

We begin with the same elements as in the first example above. But now, we express all the 1-
element factors in terms of the current basis. The process is independent of the actual basis, so we
can choose it arbitrarily for the example, say an 8-space.

¯!8 ;

We compose A as the product of three factors to explicitly display the intersection as the last factor.
But the algorithm does not need to have A in its factored form, so we expand and simplify before
applying the formula.

A = ¯%@He4 - 2 e7 L Ô H3 e5 + e2 L Ô He1 + 2 e2 LD
-He1 Ô e2 Ô e4 L + 2 e1 Ô e2 Ô e7 + 3 e1 Ô e4 Ô e5 +
6 e1 Ô e5 Ô e7 + 6 e2 Ô e4 Ô e5 + 12 e2 Ô e5 Ô e7

The second element B must be supplied to the algorithm in factored form, otherwise its spans and
cospans cannot be computed. However the factorization can be arbitrary, so we expand it and
refactor it differently so that it does not explicitly display the intersection we have chosen for the
example.

B = ExteriorFactorize@¯%@H2 e3 - e2 L Ô He1 + 2 e2 LDD


He1 + 4 e3 L Ô He2 - 2 e3 L

In this case we know that k equal to 1 will give us the result we are looking for.


F1 = ¯(A¯SAHA Ô ¯#1 @BDL ü ¯#1 @BDEE

2 He1 Ô e2 Ô e3 Ô e4 L ü e1 + 4 He1 Ô e2 Ô e3 Ô e4 L ü e2 - 4 He1 Ô e2 Ô e3 Ô e7 L ü e1 -


8 He1 Ô e2 Ô e3 Ô e7 L ü e2 - 3 He1 Ô e2 Ô e4 Ô e5 L ü e1 - 6 He1 Ô e2 Ô e4 Ô e5 L ü e2 -
6 He1 Ô e2 Ô e5 Ô e7 L ü e1 - 12 He1 Ô e2 Ô e5 Ô e7 L ü e2 +
6 He1 Ô e3 Ô e4 Ô e5 L ü e1 + 12 He1 Ô e3 Ô e4 Ô e5 L ü e2 +
12 He1 Ô e3 Ô e5 Ô e7 L ü e1 + 24 He1 Ô e3 Ô e5 Ô e7 L ü e2 +
12 He2 Ô e3 Ô e4 Ô e5 L ü e1 + 24 He2 Ô e3 Ô e4 Ô e5 L ü e2 +
24 He2 Ô e3 Ô e5 Ô e7 L ü e1 + 48 He2 Ô e3 Ô e5 Ô e7 L ü e2

Of course, this is in expanded form. We would like to factorize it to display the union and
intersection explicitly. We can do this with GrassmannAlgebra's FactorizeForm.

F2 = FactorizeForm@F1 D
3 e5
He1 - 6 e5 L Ô He2 + 3 e5 L Ô e3 + Ô He4 - 2 e7 L ü H2 e1 + 4 e2 L
2
The intersection displayed here is congruent to the factor that we introduced as our 'hidden'
intersection in the A and B supplied to the form, and is therefore a correct result as any congruent
factor would be.

Although the union does not explicitly display the intersection, we can verify that the intersection
is indeed a factor of it, and that it is also a factor of A and B as supplied to the formula. Let U be the
union and J be the intersection.

3 e5
U = He1 - 6 e5 L Ô He2 + 3 e5 L Ô e3 + Ô He4 - 2 e7 L;
2
J = 2 e1 + 4 e2 ;
then the following are zero as expected.

¯%@8U Ô J, A Ô J, B Ô J<D
80, 0, 0<

All the other factors of the original A and B are also easily verified to belong to U.

¯%@U Ô 8He4 - 2 e7 L, H3 e5 + e2 L, H2 e3 - e2 L, He1 + 4 e3 L, He2 - 2 e3 L<D


80, 0, 0, 0, 0<

Intersection with a non-simple element


To this point we have assumed A is simple. But, provided the intersection with B is simple, the
algorithm will still yield results. The following examples demonstrate that one may need to pay
special attention to their interpretation.


Ï Example 1

We take the simplest example with A non-simple, and a 1-element intersection g1 , which does not
occur in the non-simple 2-element factor of A.

A = Ha1 Ô a2 + a3 Ô a4 L Ô a5 Ô g1 ; B = b1 Ô g1 ;

Since the intersection is a 1-element, and B is a 2-element, we can get the union and intersection
from the 1-span and 1-cospan of B.

¯SAHA Ô ¯#1 @BDL ü ¯#1 @BDE


HHa1 Ô a2 + a3 Ô a4 L Ô a5 Ô g1 Ô b1 L ü g1 +
HHa1 Ô a2 + a3 Ô a4 L Ô a5 Ô g1 Ô g1 L ü H-b1 L

Upon simplification, the second term is clearly zero, leaving the expected result as the first term
and showing, in this case, that the union is not simple.

Ï Example 2

Now let B contain one of the 1-element factors which occurs in the non-simple 2-element factor of
A, say a1 .

A = Ha1 Ô a2 + a3 Ô a4 L Ô a5 Ô g1 ; B = a1 Ô g1 ;

¯SAHA Ô ¯#1 @BDL ü ¯#1 @BDE


HHa1 Ô a2 + a3 Ô a4 L Ô a5 Ô g1 Ô a1 L ü g1 +
HHa1 Ô a2 + a3 Ô a4 L Ô a5 Ô g1 Ô g1 L ü H-a1 L

Upon simplification, the second term is again zero, but the first term, the union, is now simple.

† Thus we conclude that under some circumstances (Example 1) the union and intersection
formula will give straightforwardly interpretable results. But in others (Example 2) one would
need to pay special attention to their interpretation.

Factorizing simple elements


In the previous section we have seen how one may obtain the union and intersection of two simple
elements. We can now use these results to obtain a factorization of a simple element. For
concreteness we take the example from the previous section where A is a simple 3-element in an 8-
space. We will work with the expanded form of A, and pretend that we do not know any
factorization of it.

A = ¯%@He4 - 2 e7 L Ô H3 e5 + e2 L Ô He1 + 2 e2 LD
-He1 Ô e2 Ô e4 L + 2 e1 Ô e2 Ô e7 + 3 e1 Ô e4 Ô e5 +
6 e1 Ô e5 Ô e7 + 6 e2 Ô e4 Ô e5 + 12 e2 Ô e5 Ô e7


Since A is a 3-element in an 8-space, we can always obtain a 1-element belonging to it by obtaining


the union and intersection of A with a simple 6-element B using the formula involving the 5-span
and 5-cospan of B.

But as we have seen in the sections above, it is really only the number of distinct symbols (hence
independent 1-elements) that are important in computing unions and intersections. Thus we can
simplify our computations by considering A as a 3-element in a 5-space 8e1 , e2 , e4 , e5 , e7 <
and B as a simple 3-element using the formula involving the 2-span and 2-cospan of B.

To obtain our first factor of A, suppose we take B as e1 Ô e2 Ô e4 . Then

B1 = e1 Ô e2 Ô e4 ;

¯(A¯SAHA Ô ¯#2 @B1 DL ü ¯#2 @B1 DEE êê FactorizeForm

He1 Ô e2 Ô e4 Ô e5 Ô e7 L ü H6 e1 + 12 e2 L

The factor of interest, our first factor of A, is the intersection 6 e1 + 12 e2 . We can ignore the
union as it is not of any significance in this procedure. Repeating the procedure for the remaining
possible independent 3-elements we can construct for B gives

BB = ¯#3 @e1 Ô e2 Ô e4 Ô e5 Ô e7 D
8e1 Ô e2 Ô e4 , e1 Ô e2 Ô e5 , e1 Ô e2 Ô e7 , e1 Ô e4 Ô e5 , e1 Ô e4 Ô e7 ,
e1 Ô e5 Ô e7 , e2 Ô e4 Ô e5 , e2 Ô e4 Ô e7 , e2 Ô e5 Ô e7 , e4 Ô e5 Ô e7 <

T = IFactorizeFormA¯(A¯SAHA Ô ¯#2 @ÒDL ü ¯#2 @ÒDEEE &M êü BB

8He1 Ô e2 Ô e4 Ô e5 Ô e7 L ü H6 e1 + 12 e2 L,
0, He1 Ô e2 Ô e4 Ô e5 Ô e7 L ü H3 e1 + 6 e2 L,
He1 Ô e2 Ô e4 Ô e5 Ô e7 L ü H2 e1 - 12 e5 L,
He1 Ô e2 Ô e4 Ô e5 Ô e7 L ü H6 e4 - 12 e7 L,
He1 Ô e2 Ô e4 Ô e5 Ô e7 L ü H-e1 + 6 e5 L, He1 Ô e2 Ô e4 Ô e5 Ô e7 L ü H2 e2 + 6 e5 L,
He1 Ô e2 Ô e4 Ô e5 Ô e7 L ü H-3 e4 + 6 e7 L,
He1 Ô e2 Ô e4 Ô e5 Ô e7 L ü H-e2 - 3 e5 L, He1 Ô e2 Ô e4 Ô e5 Ô e7 L ü H-e4 + 2 e7 L<

By inspection we can see that there are 4 different factors. We can take the simplest form
congruent to each. Clearly they are not independent

AA = He1 + 2 e2 L Ô He1 - 6 e5 L Ô He4 - 2 e7 L Ô He2 + 3 e5 L; ¯%@AAD


0

But it is an easy matter to compute all the products of 3 different factors.

¯#3 @AAD
8He1 + 2 e2 L Ô He1 - 6 e5 L Ô He4 - 2 e7 L,
He1 + 2 e2 L Ô He1 - 6 e5 L Ô He2 + 3 e5 L,
He1 + 2 e2 L Ô He4 - 2 e7 L Ô He2 + 3 e5 L,
He1 - 6 e5 L Ô He4 - 2 e7 L Ô He2 + 3 e5 L<


By inspection the second factorization in the list, because it does not display all the symbols of A,
will be zero. The others are all candidates for a factorization of A, and all can be seen to be
congruent to A when expanded and simplified.

¯%@¯#3 @AADD
8-2 e1 Ô e2 Ô e4 + 4 e1 Ô e2 Ô e7 + 6 e1 Ô e4 Ô e5 +
12 e1 Ô e5 Ô e7 + 12 e2 Ô e4 Ô e5 + 24 e2 Ô e5 Ô e7 , 0,
-He1 Ô e2 Ô e4 L + 2 e1 Ô e2 Ô e7 + 3 e1 Ô e4 Ô e5 + 6 e1 Ô e5 Ô e7 +
6 e2 Ô e4 Ô e5 + 12 e2 Ô e5 Ô e7 , -He1 Ô e2 Ô e4 L + 2 e1 Ô e2 Ô e7 +
3 e1 Ô e4 Ô e5 + 6 e1 Ô e5 Ô e7 + 6 e2 Ô e4 Ô e5 + 12 e2 Ô e5 Ô e7 <

† In Chapter 3, we will explore an equivalent algorithm for factorizing simple elements, based on
the regressive product as the multilinear product.

2.14 Summary
This chapter has introduced the exterior product. Its codification of linear dependence is the
foundation of Grassmann's contribution to the mathematical and physical sciences. From it he was
able to build an algebraic system widely applicable to much of the physical world.

We began the chapter by formally extending the axioms of a linear space to include the exterior
product. We then showed how the exterior product leads naturally to an understanding of
determinants and linear systems of equations. Although Grassmann was the first to develop an
algebraic structure which naturally handled these now accepted denizens of "Linear Algebra", we
showed, in discussing the concepts of simplicity and exterior division, that his exterior product
spaces (although themselves linear spaces) had the potential to be much richer than the simple
linear space of Linear Algebra.

At the end of the chapter we undertook an analysis inspired by Grassmann of the various types of
possible product structures. This analysis is really quite fundamental to his contribution, and
confirms the central role played by the exterior and interior products in their own right, and as we
shall see later in the book, the construction of higher level products (the Clifford product,
Generalized Grassmann product and hypercomplex product) as sums of these two.

The chapter has not yet introduced any geometric or physical interpretations for the elements of the
algebra, as, being a mathematical system, its power lies in its potentially multiple interpretations.
Nevertheless, I am acutely aware that comprehension is enhanced by context, particularly the
geometric. I beg your indulgence that the next chapter is still devoid of any depiction of the entities
discussed, but hope that Chapter 4: Geometric Interpretations and subsequent examples will
redress this balance.

Chapter 3 will discover that the symmetry in the suite of linear spaces created by the exterior
product leads to a beautiful "dual" product: the regressive product. Equipped with these dual
products, Chapter 4 will then be able finally to provide a geometric interpretation for all the
different types of elements of the spaces. Chapter 5 shows how the symmetry further enables the
definition of a new operation: the complement. Then Chapter 6 shows how to combine all of these
to define the interior product, a much more general product than the scalar product. These chapters
form the critical foundation to Grassmann's algebra. The rest is exploration!


3 The Regressive Product



3.1 Introduction
n n
Since the linear spaces L an L are of the same dimension K O= K O and hence are
m n-m m n-m
isomorphic, the opportunity exists to define a product operation (called the regressive product) dual
to the exterior product such that theorems involving exterior products of m-elements have duals
involving regressive products of (n|m)-elements.

Very roughly speaking, if the exterior product is associated with the notion of union, then the
regressive product is associated with the notion of intersection.

The regressive product appears to be little used in the recent literature. Grassmann's original
development did not distinguish notationally between the exterior and regressive product
operations. Instead he capitalized on the inherent duality and used a notation which, depending on
the grade of the elements in the product, could only sensibly be interpreted by one or the other
operation. This was a very elegant idea, but its subtlety may have been one of the reasons the
notion has become lost. (See "Grassmann's notation for the regressive product" in Section 3.3.)

However, since the regressive product is a simple dual operation to the exterior product, an
enticing and powerful symmetry is lost by ignoring it. We will find that its 'intersection' properties
are a useful conceptual and algorithmic addition to non-metric geometry, and that its algebraic
properties enable a firm foundation for the development of metric geometry. Some of the results
which are usually proven in metric spaces via the inner and cross products can also be proven
using the exterior and regressive products, thus showing that the result is independent of whether
or not the space has a metric.

The approach adopted in this book is to distinguish between the two product operations by using
different notations, just as Boolean algebra has its dual operations of union and intersection. We
will find that this approach does not detract from the elegance of the results. We will also find that
differentiating the two operations explicitly enhances the simplicity and power of the derivation of
results.

Since the commonly accepted modern notation for the exterior product operation is the 'wedge'
symbol Ô, we will denote the regressive product operation by a 'vee' symbol Ó. Note however that
this (unfortunately) does not correspond to the Boolean algebra usage of the 'vee' for union and the
'wedge' for intersection.


3.2 Duality

The notion of duality


In order to ensure that the regressive product is defined as an operation correctly dual to the
exterior product, we give the defining axiom set for the regressive product the same formal
symbolic structure as the axiom set for the exterior product. This may be accomplished by
replacing Ô by Ó, and replacing the grades of elements and spaces by their complementary grades.
The complementary grade of a grade m is defined in a linear space of n dimensions to be n|m.

Ôö Ó a ö a
m n-m
L ö L
m n-m 3.1

Note that here we are undertaking the construction of a mathematical structure and thus there is no
specific mapping implied between individual elements at this stage. In Chapter 5: The
Complement, we will introduce a mapping between the elements of L and L which will lead to
m n-m
the definition of complement and interior product.

For concreteness, we take some examples.

Examples: Obtaining the dual of an axiom

Ï The dual of axiom Ô6

We begin with the exterior product axiom Ô6:


The exterior product of an m-element and a k-element is an (m+k)-element.

:a œ L, b œ L> ï : a Ô b œ L >
m m k k m k m+k

To form the dual of this axiom, replace Ô with Ó, and the grades of elements and spaces by their
complementary grades.

:a œ L, b œ L> ï : a Ó b œ L >
n-m n-m n-k n-k n-m n-k n-Hm+kL

Although this is indeed the dual of axiom Ô6, it is not necessary to display the grades of what are
arbitrary elements in the more complex form specifically involving the dimension n of the space. It
will be more convenient to display them as grades denoted by simple symbols like m and k as were
the grades of the elements of the original axiom. To effect this transformation most expeditiously
we first let m' = n|m, k' = n|k to get


:a œ L, b œ L> ï : a Ó b œ L >
m' m' k' k' m' k' Im'+k'M-n

The grade of the space to which the regressive product belongs is n|(m+k) = n|((n|m')+(n|k')) =
(m'+k')|n.

Finally, since the primes are no longer necessary we drop them. Then the final form of the axiom
dual to Ô6, which we label Ó6, becomes:
:a œ L, b œ L> ï : a Ó b œ L >
m m k k m k m+k-n

In words, this says that the regressive product of an m-element and a k-element is an (m+k|n)-
element.

Ï The dual of axiom Ô8

Axiom Ô8 says:
There is a unit scalar which acts as an identity under the exterior product.

:∃B1, 1 œ LF, a ã 1 Ô a>


0 m m

For simplicity we do not normally display designated scalars with an underscripted zero. However,
the duality transformation will be clearer if we rewrite the axiom with 1 in place of 1.
0

:∃B1, 1 œ LF, a ã 1 Ô a>


0 0 0 m 0 m

Replace Ô with Ó, and the grades of elements and spaces by their complementary grades.

:∃B1, 1 œ LF, a ã 1Ó a >


n n n n-m n n-m

Replace arbitrary grades m with n|m' (n is not an arbitrary grade).

:∃B1, 1 œ LF, a ã 1 Ó a >


n n n m' n m'

Drop the primes.

:∃B1, 1 œ LF, a ã 1 Ó a>


n n n m n m

In words, this says that there is a unit n-element which acts as an identity under the regressive
product.

Ï The dual of axiom Ô10

Axiom Ô10 says:


The exterior product of elements of odd grade is anti-commutative.



a Ô b ã H-1Lm k b Ô a
m k k m

Replace Ô with Ó, and the grades of elements by their complementary grades. Note that it is only
the grades as shown in the underscripts which should be replaced, not those figuring elsewhere in
the formula.

a Ó b ã H-1Lm k b Ó a
n-m n-k n-k n-m

Replace arbitrary grades m (wherever they occur in the formula) with n|m', k with n|k'. An
arbitrary grade is one without a specific value, like 0, 1, or n.

a Ó b ã H-1LIn-m'M In- k'M b Ó a


m' k' k' m'

Drop the primes.

a Ó b ã H-1LHn-mL Hn- kL b Ó a
m k k m

In words this says that the regressive product of elements of odd complementary grade is anti-
commutative.

Summary: The duality transformation algorithm


The algorithm for the duality transformation may be summarized as follows:

1. Replace Ô with Ó, and the grades of elements and spaces by their complementary grades.
2. Replace arbitrary grades m with n|m', k with n|k'. Drop the primes.

An arbitrary grade is one without a specific value, like 0, 1, or n.
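The grade bookkeeping in this algorithm can be mechanized. In the Python sketch below (a toy encoding of ours, not part of GrassmannAlgebra), a grade expression a m + b k + c n + d is stored as the tuple (a, b, c, d):

```python
def dual_grade(g):
    """One pass of the duality algorithm on a grade expression
    a*m + b*k + c*n + d, encoded as the tuple (a, b, c, d)."""
    a, b, c, d = g
    a, b, c, d = -a, -b, 1 - c, -d      # step 1: complement the grade, n - g
    return (-a, -b, c + a + b, d)       # step 2: m -> n-m', k -> n-k'; drop primes

print(dual_grade((1, 1, 0, 0)))   # m+k      -> (1, 1, -1, 0), i.e. m+k-n  (axiom dual to Ô6)
print(dual_grade((0, 0, 0, 0)))   # scalar 0 -> (0, 0, 1, 0),  i.e. n     (axiom dual to Ô8)
```

The two printed results reproduce the grade statements derived by hand above: the exterior product grade m+k dualizes to m+k-n, and the unit scalar dualizes to an n-element.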

3.3 Properties of the Regressive Product

Axioms for the regressive product


In this section we collect the results of applying the duality algorithm above to the exterior product
axioms Ô6 to Ô12. Axioms Ô1 to Ô5 transform unchanged since there are no products involved.
We use the symbol ¯n for the dimension of the underlying linear space wherever we want to
ensure the formulas can be understood by GrassmannAlgebra functions (for example, the Dual
function to be discussed later). However in general textual comment we will usually retain the
simpler symbol n to represent this dimension.

Ï Ó6: The regressive product of an m-element and a k-element is an (m+k|n)-


element.


:a œ L, b œ L>ï:a Ó b œ L > 3.2


m m k k m k k+m-¯n

Ï Ó7: The regressive product is associative.

aÓb Óg ã aÓ bÓg 3.3


m k j m k j

Ï Ó8: There is a unit n-element which acts as an identity under the regressive
product.

:∃B 1 , 1 œ L F, a ã 1 Ó a> 3.4


¯n ¯n ¯n m ¯n m

Ï Ó9: Non-zero n-elements have inverses with respect to the regressive product.

1 1 1
:∃B , œ L F, 1 ã a Ó , ∀Ba, a ! 0 œ L F> 3.5
a a ¯n ¯n a ¯n ¯n

Ï Ó10:The regressive product of elements of odd complementary grade is anti-


commutative.

a Ó b ã H-1LH¯n-kL H¯n-mL b Ó a 3.6


m k k m

Ï Ó11: Additive identities act as multiplicative zero elements under the


regressive product.

:∃B0, 0 œ LF, 0 Ó a ã 0 > 3.7


k k k k m k+m-¯n

Ï Ó12: The regressive product is both left and right distributive under addition.


Ka + b O Ó g ã a Ó g + b Ó g
m m k m k m k
3.8
aÓ b + g ã aÓb + aÓg
m k k m k m k

Ï Ó13: Scalar multiplication commutes with the regressive product.

a Ó a b ã Ka aO Ó b ã a a Ó b 3.9
m k m k m k

The unit n-element

Ï The unit n-element is congruent to any basis n-element.

The duality algorithm has generated a unit n-element 1 which acts as the multiplicative identity for
n
the regressive product, just as the unit scalar 1 (or 1) acts as the multiplicative identity for the
0

exterior product (axioms Ô8 and Ó8).


We have already seen that any basis of L contains only one element. If a basis of L is
n 1
8e1 ,e2 ,!,en <, then the single basis element of L is congruent to e1 Ô e2 Ô ! Ô en . (Two
n
elements are congruent if one is a scalar multiple of the other.) If the basis of L is changed by an
1
arbitrary (non-singular) linear transformation, then the basis of L changes by a scalar factor which
n
is the determinant of the transformation. Any basis of L may therefore be expressed as a scalar
n
multiple of some given basis, say e1 Ô e2 Ô ! Ô en . Hence we can therefore also express
e1 Ô e2 Ô ! Ô en as some scalar multiple ¯c of 1. (We denote the scalar in this form so that
n
GrassmannAlgebra can understand its special role in later computations.)

e1 Ô e2 Ô ! Ô en ã ¯c 1 3.10
n

Defining 1 any more specifically than this is normally done by imposing a metric onto the space.
n
This we do in Chapter 5: The Complement, and Chapter 6: The Interior Product. It turns out then
that 1 is the n-element whose measure (magnitude, volume) is unity.
n


On the other hand, for geometric application in spaces without a metric, for example the
calculation of intersections of lines, planes, and hyperplanes, it is inconsequential that we only
know 1 up to congruence, because we will see that if an element defines a geometric entity then
n
any element congruent to it will define the same geometric entity.

Ï The unit n-element is idempotent under the regressive product.

By putting a equal to 1 in the regressive product axiom


m ¯n
Ó8 we have immediately that:

1 ã 1Ó1 3.11
¯n ¯n ¯n

Ï n-elements allow a sort of associativity.

By letting a ã a 1 we can show that regressive products with an n-element allow a sort of
¯n ¯n
associativity with the exterior product.

a Ób Ôg ã a Ó bÔg 3.12
¯n k j ¯n k j

Ï Division by n-elements.

An equation may be 'divided' through by an n-element under the regressive product.

b Óa ã b Óg ï aãg 3.13
¯n m ¯n m m m

We can easily see this by putting b_¯n ã b 1_¯n (writing the n-element as a scalar multiple of the unit n-element), and using axiom Ó8.
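Spelt out, the argument is a one-line cancellation. Writing the n-element as a scalar multiple b of the unit n-element (with b ≠ 0) and using axioms Ó8 and Ó13:

```latex
\underset{n}{\beta}\vee\underset{m}{\alpha}
   \;=\; \underset{n}{\beta}\vee\underset{m}{\gamma},
\qquad \underset{n}{\beta} = b\,\underset{n}{1},\; b \neq 0
\;\Longrightarrow\;
b\left(\underset{n}{1}\vee\underset{m}{\alpha}\right)
   = b\left(\underset{n}{1}\vee\underset{m}{\gamma}\right)
\;\Longrightarrow\;
b\,\underset{m}{\alpha} = b\,\underset{m}{\gamma}
\;\Longrightarrow\;
\underset{m}{\alpha} = \underset{m}{\gamma}.
```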

The inverse of an n-element

Axiom Ó9 says that every (non-zero) n-element has an inverse with respect to the regressive
product, and the unit n-element.

1 1 1
:#)*+,+B , œ L F, 1 ã a Ó , (-./00Ba, a ! 0 œ L F>
a a ¯n ¯n a ¯n ¯n

Suppose we have an n-element a expressed in terms of some basis. Then, according to 3.10 we can
n
express it as a scalar multiple (a, say) of the unit n-element 1.
n


We may then write the inverse a-1 of a with respect to the regressive product as the scalar multiple
n n
1
a
of 1.
n

1
aãa1 ó a-1 ã 1
n n n a n

1
We can see this by taking the regressive product of a 1 with 1:
a n
n

1
a Ó a-1 ã Ja 1N Ó 1 ã 1Ó1 ã 1
n n n a n n n n

If the n-element a is now expressed in terms of a basis we have:

a ã b e1 Ô e2 Ô ! Ô en ã b ¯c 1
n n

Hence the inverse n-element a-1 can be written as:

1 1 1 1
a-1 ã 1ã e1 Ô e2 Ô ! Ô en
n b ¯c n b ¯c2

Summarizing these results: If a-1 is defined by a Ó a-1 ã 1, then


n n n n

1
aãa1 ó a-1 ã 1 3.14
n n n a n

1 1
a ã b e1 Ô e2 Ô ! Ô en ó a-1 ã e1 Ô e2 Ô ! Ô en 3.15
n n b ¯c2
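A quick numeric check of 3.14 and 3.15 in Python, tracking only the scalar coefficients of 1_n (legitimate because 1_n Ó 1_n ã 1_n by 3.11, so under the regressive product coefficients of 1_n simply multiply; the values of b and ¯c are hypothetical):

```python
from fractions import Fraction

# Hypothetical values: b is the basis coefficient of the n-element a,
# and c stands for ¯c, the scalar with e1 ^ e2 ^ ... ^ en == c * 1_n (3.10).
b, c = Fraction(3), Fraction(5)

a = b * c                 # a_n = b*(e1^...^en) = (b*c)*1_n, so a = b*c in 3.14
a_inv = 1 / a             # 3.14: the inverse is (1/a)*1_n
assert a * a_inv == 1     # a_n regressive a_n^-1 == 1_n
assert a_inv / c == 1 / (b * c ** 2)   # 3.15: basis coefficient of the inverse
print(a_inv / c)          # 1/75
```

The factor 1/(b ¯c²) in 3.15, rather than 1/(b ¯c), arises because converting the inverse back from the 1_n form to the basis form costs a second factor of ¯c.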

Grassmann's notation for the regressive product


In Grassmann's Ausdehnungslehre of 1862 Section 95 (translated by Lloyd C. Kannenberg) he
states:

If q and r are the orders of two magnitudes A and B, and n that of the principal
domain, then the order of the product [A B] is first equal to q+r if q+r is smaller than
n, and second equal to q+r|n if q+r is greater than or equal to n.
Translating this into the terminology used in this book we have:

If q and r are the grades of two elements A and B, and n that of the underlying linear
space, then the grade of the product [A B] is first equal to q+r if q+r is smaller than n,
and second equal to q+r|n if q+r is greater than or equal to n.
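Grassmann's single product symbol thus assigns grades by reduction against the dimension, a rule small enough to restate as a Python function (the name is ours):

```python
def grassmann_grade(q, r, n):
    """Grade of Grassmann's undifferentiated product [A B] of a q-element
    and an r-element in an n-dimensional space (Ausdehnungslehre 1862,
    Section 95): progressive if q + r < n, regressive otherwise."""
    return q + r if q + r < n else q + r - n

assert grassmann_grade(1, 1, 3) == 2   # two vectors: progressive, a bivector
assert grassmann_grade(2, 2, 3) == 1   # two bivectors in 3-space: regressive, a vector
assert grassmann_grade(1, 2, 3) == 0   # grades summing to n: regressive, a scalar
```

The last case is exactly the one discussed below: under this rule a product of grades summing to n is forced to be a scalar.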


Translating this further into the notation used in this book we have:

@A BD ó A Ô B p+q<n
p q

@A BD ó A Ó B p+q¥n
p q

Grassmann called the product [A B] in the first case a progressive (exterior) product, and in the
second case a regressive (exterior) product. (In current terminology, which we adopt in this book,
the progressive exterior product is called simply the exterior product, and the regressive exterior
product is called simply the regressive product).
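Grassmann's grade rule translates directly into a small function. The following Python sketch (the function name is ours, not the book's) returns the grade of the product [A B]:

```python
def grassmann_product_grade(q, r, n):
    """Grade of Grassmann's 1862 product [A B]: q + r when that is
    smaller than the dimension n (the progressive case), otherwise
    q + r - n (the regressive case)."""
    return q + r if q + r < n else q + r - n

# Two vectors in a 3-space: progressive product, a bivector.
assert grassmann_product_grade(1, 1, 3) == 2
# Two bivectors in a 3-space: regressive product, a vector.
assert grassmann_product_grade(2, 2, 3) == 1
# Grades summing to n give grade 0 under Grassmann's convention.
assert grassmann_product_grade(1, 2, 3) == 0
```

The last line exhibits exactly the case discussed below: Grassmann's rule assigns a scalar where the explicit exterior product would give an n-element.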

In the equivalence above, Grassmann has opted to define the product of two elements whose
grades sum to the dimension n of the space as regressive, and thus a scalar. However, the more
explicit notation that we have adopted identifies that some definition is still required for the
progressive (exterior) product of two such elements.

The advantage of denoting the two products differently enables us to correctly define the exterior
product of two elements whose grades sum to n, as an n-element. Grassmann by his choice of
notation has decided to define it as a scalar. In modern terminology this is equivalent to confusing
scalars with pseudo-scalars. A separate notation for the two products thus avoids this tensorially
invalid confusion.

We can see, then, how the regressive product, in not being explicitly denoted, may have become lost.

3.4 The Grassmann Duality Principle

The dual of a dual


The duality of the axiom sets for the exterior and regressive products is completed by requiring that
the dual of a dual of an axiom be the axiom itself. The dual of a regressive product axiom may be
obtained by applying the following algorithm:

1. Replace Ó with Ô, and the grades of elements and spaces by their complementary grades.
2. Replace arbitrary grades m with n-m', k with n-k'. Drop the primes.

This differs from the algorithm for obtaining the dual of an exterior product axiom only in the
replacement of Ó with Ô instead of vice versa.

It is easy to see that applying this algorithm to the regressive product axioms generates the original
exterior product axiom set.

We can combine both transformation algorithms by restating them as:

1. Replace each product operation by its dual operation, and the grades of elements and spaces by
their complementary grades.
2. Replace arbitrary grades m with n-m', k with n-k'. Drop the primes.

Since this algorithm applies to both sets of axioms, it also applies to any theorem. Thus to each
theorem involving exterior or regressive products corresponds a dual theorem obtained by applying
the algorithm. We call this the Grassmann Duality Principle (or, where the context is clear, simply
the Duality Principle).

The Grassmann Duality Principle

To every theorem involving exterior and regressive products, a dual theorem may be obtained
by:
1. Replacing each product operation by its dual operation, and the grades of elements and
spaces by their complementary grades.
2. Replacing arbitrary grades m with n-m', k with n-k', then dropping the primes.
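The two replacement steps can be sketched as a purely mechanical transformation. The following Python fragment is a toy encoding of product expressions as nested tuples (it is not the GrassmannAlgebra Dual function); it illustrates that applying the algorithm twice recovers the original expression.

```python
def dual(expr, n):
    """Dual of a product expression in an n-space.

    An expression is an integer (the grade of an element) or a tuple
    (op, lhs, rhs) with op 'wedge' or 'vee'.  The dual swaps the two
    products and replaces each grade m by its complement n - m."""
    if isinstance(expr, int):
        return n - expr
    op, lhs, rhs = expr
    return ({'wedge': 'vee', 'vee': 'wedge'}[op],
            dual(lhs, n), dual(rhs, n))

# In a 3-space the dual of (1-element wedge 1-element) is
# (2-element vee 2-element), and the dual of the dual is the original.
e = ('wedge', 1, 1)
assert dual(e, 3) == ('vee', 2, 2)
assert dual(dual(e, 3), 3) == e
```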

Ï Example 1
The following theorem says that the exterior product of two elements is zero if the sum of their
grades is greater than the dimension of the linear space.

:a Ô b ã 0, m + k - ¯n > 0>
m k

The dual theorem states that the regressive product of two elements is zero if the sum of their
grades is less than the dimension of the linear space.

:a Ó b ã 0, ¯n - Hk + mL > 0>
m k

We can recover the original theorem as the dual of this one.

Ï Example 2
This theorem below says that the exterior product of an element with itself is zero if it is of odd
grade.

:a Ô a ã 0, m œ 8OddPositiveIntegers<>
m m

The dual theorem states that the regressive product of an element with itself is zero if its
complementary grade is odd.

:a Ó a ã 0, H¯n - mL œ 8OddPositiveIntegers<>
m m

Again, we can recover the original theorem by calculating the dual of this one.

Ï Example 3

The following theorem states that the exterior product of unity with itself any number of times
remains unity.


1Ô1Ô!Ô1Ôa ã a
m m

The dual theorem states the corresponding fact for the regressive product of unit n-elements 1 .
¯n

1 Ó 1 Ó!Ó 1 Óa ã a
¯n ¯n ¯n m m

Using the GrassmannAlgebra function Dual


The algorithm of the Duality Principle has been encapsulated in the function Dual in
GrassmannAlgebra. Dual takes an expression or list of expressions which comprise an axiom or
theorem and generates the list of dual expressions by transforming them according to the
replacement rules of the Duality Principle.

Ï Example 1: The Dual of axiom Ô10

Our first examples are to show how Dual may be used to develop dual axioms. For example, to
take the dual of axiom Ô10 simply enter:
Dual[a Ô b ã (-1)^(m k) b Ô a]
     m   k              k   m

a Ó b ã (-1)^((¯n-k) (¯n-m)) b Ó a
m   k                        k   m

Ï Example 2: The Dual of axiom Ô8

Again, to take the Dual of axiom Ô8 enter:


Dual[{Exists[1, 1 œ L ], a ã 1 Ô a}]
                     0   m        m

{Exists[1 , 1 œ L ], a ã 1 Ó a}
        ¯n  ¯n   ¯n  m   ¯n   m

Ï Example 3: The Dual of axiom Ô9

To apply Dual to an axiom involving more than one statement, collect the statements in a list. For
example, the dual of axiom Ô9 is obtained as:
Dual[{Exists[a-1 , a-1 œ L ], 1 ã a Ô a-1 , ForAll[a, a ≠ 0 œ L ]}]
                          0                                    0

{Exists[1/a, 1/a œ L ], 1 ã a Ó 1/a, ForAll[a, a ≠ 0 œ L ]}
                    ¯n  ¯n                               ¯n

Note here that we are purposely frustrating Mathematica's inbuilt definition of Exists and
ForAll by writing them in script.

Ï Example 4: The Dual of the Common Factor Axiom

Our fourth example is to generate the dual of the Common Factor Axiom. This axiom will be
introduced in the next section. Note that the dimension of the space must be entered as ¯n in order
for Dual to recognize it as such.

CommonFactorAxiom =

: a Ô g Ó b Ô g ã a Ô b Ô g Ó g , m + k + j - ¯n ã 0>;
m j k j m k j j

Dual[CommonFactorAxiom]

: a Ó g Ô b Ó g ã a Ó b Ó g Ô g, -j - k - m + 2 ¯n ã 0>
m j k j m k j j

We can verify that the dual of the dual of the axiom is the axiom itself.

CommonFactorAxiom ã Dual[Dual[CommonFactorAxiom]]
True

Ï Example 5: The Dual of a formula

Our fifth example is to obtain the dual of one of the product formulae derived in a later section of
this chapter. Again, the dimension of the space must be entered as ¯n.

F = a Ô b Ó x ã (a Ó x) Ô b + (-1)^m a Ô (b Ó x);
    m   k ¯n-1   m ¯n-1     k             k ¯n-1

Dual[F]

a Ó b Ô x ã (-1)^(¯n-m) a Ó (b Ô x) + (a Ô x) Ó b
m   k                   m    k             m    k

Again, we can verify that the dual of the dual of the formula is the formula itself.

F ã Dual[Dual[F]]
True

Ï Setting up your own input to the Dual function

• You can use the GrassmannAlgebra function Dual with any expression involving exterior and
regressive products of graded, vector or scalar symbols.
• You can include multiple statements by combining them in a list.
• The statements should not include any expressions which will evaluate when entered.
• The dimension of the underlying linear space should be denoted as ¯n.
• Avoid using scalar or vector symbols as (underscripted) grades if they appear other than as
underscripts elsewhere in the expression.

3.5 The Common Factor Axiom



Motivation
Although the axiom sets for the exterior and regressive products have been posited, it still remains
to propose an axiom explicitly relating the two types of products.

The axiom set for the regressive product, which we have created above simply as a dual axiom set
to that of the exterior product, asserts that in an n-space the regressive product of an m-element and
a k-element is an (m+k-n)-element. Our first criterion for the new axiom then, is that it can be used
to compute this new element.

Earlier we have also remarked on some enticing correspondences between the dual exterior and
regressive products and the dual union and intersection operations of a Boolean algebra. We can
already see that the (non-zero) exterior product of simple elements is an element whose space is the
'union' of the spaces of its factors. Our second criterion then, is that the new axiom enable the 'dual'
of this property: the (non-zero) regressive product of simple elements is an element whose space is
the 'intersection' of the spaces of its factors.

This criterion leads us first to return to Chapter 2: Unions and Intersections, where, with an
arbitrary multilinear product operation (which we denoted ü), we were able to compute the union
and intersection of simple elements. In the cases explored the simple elements were of arbitrary
grade. We saw there that given simple elements A* Ô C and C Ô B* , we could obtain their union and
intersection as (congruent to) A* Ô C Ô B* and C respectively.

A ü B ã (A* Ô C) ü (C Ô B* ) ã (A* Ô C Ô B* ) ü C

Since ü is an arbitrary multilinear product (other than the exterior product), we can satisfy the
second criterion by positing the same form for the regressive product.

A Ó B ã (A* Ô C) Ó (C Ô B* ) ã (A* Ô C Ô B* ) Ó C                             3.16

What additional requirements on this would one need in order to satisfy the first criterion, that is,
that the axiom enables a result which does not involve the regressive product? Axiom Ó8 above
gives us the clue.

{Exists[1 , 1 œ L ], a ã 1 Ó a}
        ¯n  ¯n   ¯n  m   ¯n   m

If the grade of A* Ô C Ô B* is equal to the dimension of the space n (or in computations, ¯n), then
since all n-elements are congruent we can write A* Ô C Ô B* ã c 1 , where c is some scalar,
¯n
showing that the right hand side is congruent to the common factor (intersection) C.

(A* Ô C) Ó (C Ô B* ) ã (A* Ô C Ô B* ) Ó C ã (c 1) Ó C ã c C ª C


An axiom which satisfies both of our original criteria is then

{(A* Ô C) Ó (C Ô B* ) ã (A* Ô C Ô B* ) Ó C, Grade[A* Ô C Ô B* ] ã ¯n}


Using graded symbols, we could also write this in either of the following forms

: a Ô g Ó g Ô b ã a Ô g Ô b Ó g , m + k + j - ¯n ã 0>
m j j k m j k j

: a Ô g Ó b Ô g ã a Ô b Ô g Ó g , m + k + j - ¯n ã 0>
m j k j m k j j

As we will see, an axiom of this form works very well. Indeed, it turns out to be one of the
fundamental underpinnings of the algebra. We call it the Common Factor Axiom and explore it
further below.

Ï Historical Note
The approach we have adopted in this chapter of treating the common factor relation as an
axiom is effectively the same as Grassmann used in his first Ausdehnungslehre (1844) but
differs from the approach that he used in his second Ausdehnungslehre (1862). (See
Chapter 3, Section 5 in Kannenberg.) In the 1862 version Grassmann proves this relation
from another which is (almost) the same as the Complement Axiom that we introduce in
Chapter 5: The Complement. Whitehead [1898], and other writers in the Grassmannian
tradition follow his 1862 approach.
The relation which Grassmann used in the 1862 Ausdehnungslehre is in effect equivalent
to assuming the space has a Euclidean metric (his Ergänzung or supplement). However the
Common Factor Axiom does not depend on the space having a metric; that is, it is
completely independent of any correspondence we set up between L and L . Hence we
m n-m
would rather not adopt an approach which introduces an unnecessary constraint, especially
since we want to show later that the Ausdehnungslehre is easily extended to more general
metrics than the Euclidean.

The Common Factor Axiom


We begin completely afresh in positing the axiom. Let a, b, and g be simple elements with m+k+j
m k j
= n, where n is the dimension of the space. Then the Common Factor Axiom states that:

: a Ô g Ó b Ô g ã a Ô b Ô g Ó g , m + k + j - ¯n ã 0> 3.17
m j k j m k j j

Thus, the regressive product of two elements a Ô g and b Ô g with a common factor g is equal to
m j k j j
the regressive product of the 'union' of the elements a Ô b Ô g with the common factor g (their
m k j j
'intersection').

If a and b still contain some simple elements in common, then the product a Ô b Ô g is zero;
hence, by the definition above, a Ô g Ó b Ô g is also zero. In what follows, we suppose that
a Ô b Ô g is not zero.

Since the union a Ô b Ô g is an n-element, we can write it as some scalar factor c, say, of the
unit n-element: a Ô b Ô g ã c 1. Hence by axiom Ó8 we derive immediately that the regressive
product of two elements a Ô g and b Ô g with a common factor g is congruent to that factor.

: a Ô g Ó b Ô g ª g , m + k + j - ¯n ã 0> 3.18
m j k j j

It is easy to see, by using the anti-commutativity axiom Ô10, that the axiom may be arranged
in any of a number of alternative forms, the most useful of which are:

: a Ô g Ó g Ô b ã a Ô g Ô b Ó g , m + k + j - ¯n ã 0> 3.19
m j j k m j k j

: g Ô a Ó g Ô b ã g Ô a Ô b Ó g , m + k + j - ¯n ã 0> 3.20
j m j k j m k j

And since for the regressive product, any n-element commutes with any other element, we can
always rewrite the right hand side with the common factor first.

: a Ô g Ó g Ô b ã g Ó a Ô g Ô b , m + k + j - ¯n ã 0> 3.21
m j j k j m j k
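For basis elements the Common Factor Axiom determines the regressive product completely, sign included. The following Python sketch (our own encoding of basis blades as sets of indices, with the congruence factor ¯c taken as 1 so that 1 = e1 Ô ! Ô en) computes a Ó b for basis blades directly from the axiom:

```python
def sort_sign(seq):
    """Sign of the permutation that sorts seq (distinct integers)."""
    sign = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

def regressive(a, b, n):
    """Regressive product of two basis blades in an n-space, computed
    from the Common Factor Axiom with the congruence factor taken as 1.
    Blades are sets of basis indices; returns (sign, common_factor), or
    (0, None) when the union of the factors does not span the space."""
    if a | b != set(range(1, n + 1)):
        return (0, None)
    c = sorted(a & b)                       # the common factor g
    a_star, b_star = sorted(a - b), sorted(b - a)
    s = (sort_sign(a_star + c)              # e_A* ^ e_C        = s1 e_a
         * sort_sign(c + b_star)            # e_C  ^ e_B*       = s2 e_b
         * sort_sign(a_star + c + b_star))  # e_A* ^ e_C ^ e_B* = s3 e_1..n
    return (s, tuple(c))

# (e1^e2) v (e1^e3) = e1, and reversing the factors changes the sign,
# as (-1)^((n-m)(n-k)) = -1 for two 2-elements in a 3-space.
assert regressive({1, 2}, {1, 3}, 3) == (1, (1,))
assert regressive({1, 3}, {1, 2}, 3) == (-1, (1,))
assert regressive({1, 2}, {1, 2}, 3) == (0, None)
```

The three sign factors are exactly the reorderings needed to bring a and b into the form (A* Ô C) Ó (C Ô B*) to which the axiom applies.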

Extension of the Common Factor Axiom to general elements


The axiom has been stated for simple elements. In this section we show that it remains valid for
general (possibly non-simple) elements, provided that the common factor remains simple.

Consider two simple elements a1 and a2 . Then the Common Factor Axiom can be written for each
m m
as:


a1 Ô g Ó b Ô g ã a1 Ô b Ô g Ó g
m j k j m k j j

a2 Ô g Ó b Ô g ã a2 Ô b Ô g Ó g
m j k j m k j j

Adding these two equations and using the distributivity of Ô and Ó gives:

Ka1 + a2 O Ô g Ó b Ô g ã Ka1 + a2 O Ô b Ô g Ó g
m m j k j m m k j j

Extending this process, we see that the formula remains true for arbitrary a and b, providing g is
m k j
simple.

: a Ô g Ó b Ô g ã a Ô b Ô g Ó g ª g, m + k + j - ¯n ã 0> 3.22
m j k j m k j j j

This is an extended version of the Common Factor Axiom. It states that: the regressive product of
two arbitrary elements containing a simple common factor is congruent to that factor.

For applications involving computations in a non-metric space, particularly those with a geometric
interpretation, we will see that the congruence form is not restrictive. Indeed, it will be quite
elucidating. For more general applications in metric spaces we will see that the associated scalar
factor is no longer arbitrary but is determined by the metric imposed.

Special cases of the Common Factor Axiom


In this section we list some special cases of the Common Factor Axiom. We assume, without
explicitly stating, that the common factor is simple.

When the grades do not conform to the requirement shown, the product is zero.

: a Ô g Ó b Ô g ã 0, m + k + j - ¯n ! 0> 3.23
m j k j

If there is no common factor (other than the scalar 1), then the axiom reduces to:

:a Ó b ã a Ô b Ó 1 œ L , m + k - ¯n ã 0> 3.24
m k m k 0

The following version yields a sort of associativity for products which are scalar.


{(a Ô b) Ó g ã a Ó (b Ô g) œ L , m + k + j - ¯n ã 0}                          3.25
  m   k    j   m    k   j    0

To prove this, we can use the previous formula to show that each side of the equation is equal
to (a Ô b Ô g) Ó 1.

Dual versions of the Common Factor Axiom


As discussed earlier in the chapter, dual versions of formulae can be obtained automatically by
using the GrassmannAlgebra function Dual.

The dual Common Factor Axiom is:

: a Ó g Ô b Ó g ã a Ó b Ó g Ô g œ L, m + k + j - 2 ¯n ã 0> 3.26
m j k j m k j j j

The duals of the three special cases in the previous section are:

: a Ó g Ô b Ó g ã 0, m + k + j - 2 ¯n ! 0> 3.27
m j k j

:a Ô b ã a Ó b Ô 1 œ L , m + k - ¯n ã 0> 3.28
m k m k ¯n ¯n

{(a Ó b) Ô g ã a Ô (b Ó g) œ L , m + k + j - 2 ¯n ã 0}                        3.29
  m   k    j   m    k   j    ¯n

Application of the Common Factor Axiom


We now work through an example to illustrate how the Common Factor Axiom might be applied.
In most cases however, results will be obtained more effectively using the Common Factor
Theorem. This is discussed in the section to follow.

Suppose we have two general 2-elements X and Y in a 3-space and we wish to find a formula for
their 1-element intersection Z. Because X and Y are in a 3-space, we are assured that they are
simple.


X = x1 e1 Ô e2 + x2 e1 Ô e3 + x3 e2 Ô e3

Y = y1 e1 Ô e2 + y2 e1 Ô e3 + y3 e2 Ô e3

We calculate Z as the regressive product of X and Y:

Z ã X Ó Y
  ã (x1 e1 Ô e2 + x2 e1 Ô e3 + x3 e2 Ô e3 ) Ó (y1 e1 Ô e2 + y2 e1 Ô e3 + y3 e2 Ô e3 )

Expanding this product, and remembering that the regressive product of identical basis 2-elements
is zero, we obtain:

Z ã (x1 e1 Ô e2 ) Ó (y2 e1 Ô e3 ) + (x1 e1 Ô e2 ) Ó (y3 e2 Ô e3 )
  + (x2 e1 Ô e3 ) Ó (y1 e1 Ô e2 ) + (x2 e1 Ô e3 ) Ó (y3 e2 Ô e3 )
  + (x3 e2 Ô e3 ) Ó (y1 e1 Ô e2 ) + (x3 e2 Ô e3 ) Ó (y2 e1 Ô e3 )

In a 3-space, regressive products of 2-elements are anti-commutative since
(-1)^((n-m) (n-k)) = (-1)^((3-2) (3-2)) = -1. Hence we can collect pairs of terms with the same
factors by making the corresponding sign change:

Z ã (x1 y2 - x2 y1 ) (e1 Ô e2 ) Ó (e1 Ô e3 )
  + (x1 y3 - x3 y1 ) (e1 Ô e2 ) Ó (e2 Ô e3 )
  + (x2 y3 - x3 y2 ) (e1 Ô e3 ) Ó (e2 Ô e3 )

We can now apply the Common Factor Axiom to each of these regressive products:

Z ã (x1 y2 - x2 y1 ) (e1 Ô e2 Ô e3 ) Ó e1
  + (x1 y3 - x3 y1 ) (e1 Ô e2 Ô e3 ) Ó e2
  + (x2 y3 - x3 y2 ) (e1 Ô e2 Ô e3 ) Ó e3

Finally, by putting e1 Ô e2 Ô e3 ã ¯c 1, we have Z expressed as a 1-element:

Z ã ¯c ((x1 y2 - x2 y1 ) e1 + (x1 y3 - x3 y1 ) e2 + (x2 y3 - x3 y2 ) e3 )

Thus, in sum, we have the general congruence relation for the intersection of two 2-elements in a 3-
space.

(x1 e1 Ô e2 + x2 e1 Ô e3 + x3 e2 Ô e3 ) Ó
(y1 e1 Ô e2 + y2 e1 Ô e3 + y3 e2 Ô e3 )                                       3.30
ª (x1 y2 - x2 y1 ) e1 + (x1 y3 - x3 y1 ) e2 + (x2 y3 - x3 y2 ) e3

The result on the right hand side almost looks as if it could have been obtained from a cross-
product operation. However the cross-product requires the space to have a metric, while this does
not. We will see in Chapter 6 how, once we have introduced a metric, we can transform this into
the formula for the cross product. This is an example of how the regressive product can generate
results, independent of whether the space has a metric; whereas the usual vector algebra, in using
the cross-product, must assume that it does.
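The congruence relation 3.30 is easy to verify numerically. The following Python sketch (the helper names are ours) computes the intersection components and confirms that the result has zero exterior product with each bivector:

```python
def intersect_bivectors(x, y):
    """1-element congruent to the intersection of two 2-elements
    X = x1 e1^e2 + x2 e1^e3 + x3 e2^e3 (and similarly Y) in a
    3-space, following 3.30 with the congruence factor omitted."""
    x1, x2, x3 = x
    y1, y2, y3 = y
    return (x1 * y2 - x2 * y1, x1 * y3 - x3 * y1, x2 * y3 - x3 * y2)

def wedge_1_2(z, b):
    """Coefficient of e1^e2^e3 in z ^ B, for a 1-element z and a
    2-element B, both in (e1^e2, e1^e3, e2^e3) component order.
    Uses e1^(e2^e3) = +, e2^(e1^e3) = -, e3^(e1^e2) = + e1^e2^e3."""
    return z[0] * b[2] - z[1] * b[1] + z[2] * b[0]

# The intersection lies in both bivectors, so each wedge vanishes.
X, Y = (2, -1, 5), (3, 4, 7)
Z = intersect_bivectors(X, Y)
assert Z == (11, -1, -27)
assert wedge_1_2(Z, X) == 0 and wedge_1_2(Z, Y) == 0
```

This is the same check carried out with GrassmannAlgebra below, but stripped to the bare component arithmetic.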


Ï Check by calculating the exterior products

We can check that Z is indeed a common element to X and Y by determining if the exterior product
of Z with each of X and Y is zero. We compose the expressions for X, Y, and Z, and then use ¯%
(the palette alias of GrassmannExpandAndSimplify) to do the check.

GrassmannAlgebra already knows that the special congruence symbol ¯c is scalar, and
ComposeBivector will automatically declare the scalar coefficients as ScalarSymbols.

¯!3 ; X = ComposeBivector[x]; Y = ComposeBivector[y];

Z = ¯c ((x1 y2 - x2 y1 ) e1 + (x1 y3 - x3 y1 ) e2 + (x2 y3 - x3 y2 ) e3 );

Expand[¯%[{Z Ô X, Z Ô Y}]]
{0, 0}

When the common factor is not simple


Let us take a precautionary example to show what happens when we have a factor that looks like a
common factor but it is not simple. When an m-element is simple, it only needs m independent 1-
elements to express it. When an m-element is not simple, it requires at least m+2 independent 1-
elements to express it. The simplest example of a non-simple element is the 2-element C, say, equal
to uÔv + xÔy, where u, v, x and y are independent 1-elements.

Ï 4-space

In a 4-space we only have 4 independent 1-elements available. Let A and B be 3-elements with C
apparently a common factor. First, suppose we simplify them before applying the Common Factor
Axiom

A ã u Ô C ã u Ô (u Ô v + x Ô y) ã u Ô x Ô y
B ã C Ô x ã (u Ô v + x Ô y) Ô x ã u Ô v Ô x

A Ó B ã (u Ô x Ô y) Ó (u Ô v Ô x) ã (y Ô u Ô x) Ó (-u Ô x Ô v)

The Common Factor Axiom gives us the correct result for the simplified factors because the
simplified factors had a simple common factor. But of course it is not C.

A Ó B ã (y Ô u Ô x Ô v) Ó (x Ô u)

Now if we formally (that is, only looking at the form), (and thus incorrectly) apply the Common
Factor Axiom without simplifying we get

A Ó B ã (u Ô C) Ó (C Ô x) ã (u Ô C Ô x) Ó C

Substituting back for C on the right hand side gives zero, which is of course an incorrect result.
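The need to simplify before looking for a common factor can be checked with a small exterior-product routine. This Python sketch (our own bitmask encoding of basis elements, not GrassmannAlgebra code) confirms the simplifications u Ô C ã u Ô x Ô y and C Ô x ã u Ô v Ô x used above:

```python
def wedge(p, q):
    """Exterior product of multivectors stored as {bitmask: coeff},
    where bit i of a mask stands for the basis 1-element e_(i+1)."""
    out = {}
    for a, ca in p.items():
        for b, cb in q.items():
            if a & b:          # a repeated 1-element factor gives zero
                continue
            sign, t = 1, b
            while t:           # move each factor of b past those of a
                low = t & -t
                if bin(a >> low.bit_length()).count('1') % 2:
                    sign = -sign
                t ^= low
            out[a | b] = out.get(a | b, 0) + sign * ca * cb
    return {m: c for m, c in out.items() if c != 0}

# u, v, x, y are e1..e4 in a 4-space; C = u^v + x^y is not simple.
u, v, x, y = ({1 << i: 1} for i in range(4))
C = {0b0011: 1, 0b1100: 1}
assert wedge(u, C) == wedge(wedge(u, x), y)   # u ^ C = u ^ x ^ y
assert wedge(C, x) == wedge(wedge(u, v), x)   # C ^ x = u ^ v ^ x
```

Expanding first, as here, is what makes each resulting term either zero or possessed of a simple common factor.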

† In sum: The Common Factor Axiom gives the correct result (as expected) when applied to terms
resulting from the complete expansion of a regressive product, since these terms will either
have simple common factors, or be zero. On the other hand it will not necessarily give the
correct result when the symbol for the common factor takes on a non-simple value.

3.6 The Common Factor Theorem

Development of the Common Factor Theorem


The Common Factor Theorem which we are about to develop is one of the most important in the
Grassmann algebra as it is the source of many formulae. We will also see later that it has a
counterpart which forms the principal expansion theorem for interior products.

The example in the previous section applied the Common Factor Axiom to two elements by
expanding all the terms in their regressive product, applying the Common Factor Axiom to each of
the terms, and then factoring the result. This is always possible. But in situations where there is a
large number of terms, it may not be very efficient. The Common Factor Theorem will enable
more efficient solutions.

Let us begin by returning to the motivation of the Common Factor Axiom where we recast the
union and intersection formula from Chapter 2 to become the axiom.

A Ó B ã (A* Ô C) Ó (C Ô B* ) ã (A* Ô C Ô B* ) Ó C

Here, the grade of A* Ô C Ô B* is equal to the dimension of the space. Since, under the
regressive product, an n-element will commute with an element of any grade, the term on the
right hand side can always be rewritten as C Ó (A* Ô C Ô B* ). This latter form is often
convenient from a mnemonic point of view, and we may make such exchanges without further
comment. Thus

A Ó B ã (A* Ô C) Ó (C Ô B* ) ã (A* Ô C Ô B* ) Ó C ã C Ó (A* Ô C Ô B* )        3.31

There are two different forms of the Common Factor Theorem which we will be developing: The
A form and the B form. It is useful to have both forms at hand, since depending on the case, one of
them is likely to be more computationally efficient than the other; and formulae resulting from
each, while ultimately yielding the same result, give distinctly different forms to their results. The
A form requires A to be simple and obtains the common factor C from the factors of A by spanning
A (that is, composing an appropriate span and cospan of A). The B form requires B to be simple and
obtains the common factor C from the factors of B by spanning B.

Ï The A form

The basic formula for the A form begins with the Common Factor Axiom written as

A Ó B ã (A* Ô C) Ó B ã (A* Ô B) Ó C                                           3.32

Let us now display the grades explicitly. Let A be of grade m, B be of grade k, and C be of
grade j. A* is then of grade m - j = n - k.


A Ó B ã A* Ô C Ó B ã A* Ô B Ó C
m k m-j j k m-j k j

The crux of the development comes from the intrinsic relationship between A, its span, and its
cospan: that is, for any i and s, A is equal to the exterior product of the ith s-span element and the
corresponding ith s-cospan element.

A = (¯#s [A][[i]]) Ô (¯#s [A][[i]])

In our case, s is m-j. So we can write

A = A* Ô C ã (¯#m-j [A][[i]]) Ô (¯#m-j [A][[i]])

We now focus on the right hand side of the Common Factor Axiom. If, for a given value of i, we
were to replace A* by ¯#m-j [A][[i]] and C by ¯#m-j [A][[i]], we would get the term

(¯#m-j [A][[i]] Ô B) Ó ¯#m-j [A][[i]]

Clearly, this term may be different for different values of i, since, for example, wherever the
span element of A (¯#m-j [A][[i]]) contains a factor of the common factor it will be zero, since
B will contain the same factor. In other cases it may be non-zero.

As it turns out, the common factor needs the sum of these terms. We will see why in the next
section when we discuss the proof of the theorem.

A Ó B ã ‚ (¯#m-j [A][[i]] Ô B) Ó ¯#m-j [A][[i]],   i = 1, …, n                3.33

The index i ranges from 1 to n, where n is equal to the binomial coefficient (m choose j), or
equivalently (m choose m-j).

Because the exterior and regressive products have Mathematica's Listable attribute, we do not
actually need to index the list elements and then sum them. If we write ¯S as a shorthand for
Mathematica's inbuilt Total function, which sums the elements of a list, we can write

A Ó B ã ¯S[(¯#m-j [A] Ô B) Ó ¯#m-j [A]]                                       3.34

To make these ideas more concrete we now apply the formula to some simple examples before
proceeding to a proof in the next section.

Ï Generalizations

† It is evident that if both elements are simple, expansion in factors of the element of lower grade
will be the more efficient.

† When neither a nor b is simple, the Common Factor Theorem can still be applied to the simple
component terms of the product.

† Multiple regressive products may be treated by successive applications of the formulae above.

Ï Example 1

Suppose we are working in a 5-space and A and B are expressed in such a way as to display their
common factor C. We declare the extra vector symbols with the alias ¯¯V.

¯!5 ; A = a1 Ô g1 Ô g2 ; B = g1 Ô g2 Ô b1 Ô b2 ; ¯¯V[a_ , b_ , g_ ];

Thus, n is 5, m (the grade of A) is 3, k (the grade of B) is 4, and j (the grade of C) is 2.

For reference, the 1-span and 1-cospan of A are

{¯#1 [A], ¯#1 [A]}

{{a1 , g1 , g2 }, {g1 Ô g2 , -(a1 Ô g2 ), a1 Ô g1 }}

Now if we simply evaluate the formula, we notice that two of the three terms are zero due to
repeated factors in the exterior product.

F = ¯S[(¯#1 [A] Ô B) Ó ¯#1 [A]]

a1 Ô g1 Ô g2 Ô b1 Ô b2 Ó g1 Ô g2 +
g1 Ô g1 Ô g2 Ô b1 Ô b2 Ó -(a1 Ô g2 ) + g2 Ô g1 Ô g2 Ô b1 Ô b2 Ó a1 Ô g1

Simplifying then gives us the result.

¯%@FD
g1 Ô g2 Ó a1 Ô b1 Ô b2 Ô g1 Ô g2

Because the second factor in the regressive product is an n-element, we can immediately write this
as congruent to the common factor:

(g1 Ô g2 ) Ó (a1 Ô b1 Ô b2 Ô g1 Ô g2 ) ª g1 Ô g2

Ï Example 2

We now take the same example as we did earlier when we multiplied out the regressive product
term by term. X and Y are both 2-elements in a 3-space, and we wish to compute a 1-element Z
common to them (remember Z can only be computed up to congruence).

¯!3 ; {X, Y} = {¯%x , ¯%y }

{x1 e1 Ô e2 + x2 e1 Ô e3 + x3 e2 Ô e3 , y1 e1 Ô e2 + y2 e1 Ô e3 + y3 e2 Ô e3 }

To apply the Common Factor Theorem in the form above, we need X in factored form.


Xf = ExteriorFactorize[X]

x1 (e1 - (e3 x3 )/x1 ) Ô (e2 + (e3 x2 )/x1 )

To compute the span of an element, the element needs to be a simple product of 1-elements - not a
scalar multiple of such a product. The GrassmannAlgebra function ¯# ignores any such scalar
factors. In applications where only a result up to congruence is required, this is entirely
satisfactory. Otherwise if we wish to include the scalar factor, we will need to multiply it into one
of the other factors, say the first.

Xf = Xf /. a_ (b_ Ô c__ ) ß (a b) Ô c
e3 x3 e3 x2
x1 e1 - Ô e2 +
x1 x1

The span and cospan of Xf are both of grade 1. We display them for reference.

{¯#1 [Xf ], ¯#1 [Xf ]}


e3 x3 e3 x2 e3 x2 e3 x3
::x1 e1 - , e2 + >, :e2 + , -x1 e1 - >>
x1 x1 x1 x1

Application of the formula gives the unsimplified sum.

F = ¯S[(¯#1 [Xf ] Ô Y) Ó ¯#1 [Xf ]]


e3 x2 e3 x3
e2 + Ô Hy1 e1 Ô e2 + y2 e1 Ô e3 + y3 e2 Ô e3 L Ó -x1 e1 - +
x1 x1
e3 x3 e3 x2
x1 e1 - Ô Hy1 e1 Ô e2 + y2 e1 Ô e3 + y3 e2 Ô e3 L Ó e2 +
x1 x1

Expansion and simplification gives

F = ¯%[F]
(-x2 y1 + x1 y2 ) e1 Ó e1 Ô e2 Ô e3 +
(-x3 y1 + x1 y3 ) e2 Ó e1 Ô e2 Ô e3 + (-x3 y2 + x2 y3 ) e3 Ó e1 Ô e2 Ô e3

The n-element in this case is e1 Ô e2 Ô e3 . Factoring it out gives

F = ((-x2 y1 + x1 y2 ) e1 + (-x3 y1 + x1 y3 ) e2 + (-x3 y2 + x2 y3 ) e3 ) Ó
    (e1 Ô e2 Ô e3 )

Writing this in congruence form gives us the common factor required, and corroborates the result
originally obtained by applying the Common Factor Axiom to each term of the full expansion.

Z ª ((-x2 y1 + x1 y2 ) e1 + (-x3 y1 + x1 y3 ) e2 + (-x3 y2 + x2 y3 ) e3 )

¯c (e1 (-x2 y1 + x1 y2 ) + e2 (-x3 y1 + x1 y3 ) + e3 (-x3 y2 + x2 y3 ))

Proof of the Common Factor Theorem



Consider a regressive product A Ó B where A is given as a simple product of 1-element factors and
m k m
m+k = n+j, j > 0. Then A Ó B is either zero, or, by the Common Factor Axiom, has a common factor
m k
C. We assume it is not zero. We then express A and B in terms of C as we have done previously.
j m k j

A ã A* Ô C B ã C Ô B*
m m-j j k j k-j

Let the (m-j)-span and (m-j)-cospan of A be written as


m

¯#m-j BAF ã : A1 , A2 , !, An > ¯#m-j BAF ã :A1 , A2 , !, An >


m m-j m-j m-j m j j j

then A can be written in any of the forms


m

A ã A1 Ô A1 ã A2 Ô A2 ã ! ã An Ô An
m m-j j m-j j m-j j

m m
Here n is equal to , or equivalently .
j m- j

Since the common factor C is a j-element belonging to A it can be expressed as a linear


j m

combination of the cospan elements of A.


m

C ã a1 A1 + a2 A2 + … + an An
j j j j

The exterior product of the ith (m-j)-span element Ai with C can be expanded to give
m-j j

Ai Ô C ã Ai Ô a1 A1 + a2 A2 + … + an An ã
m-j j m-j j j j

a1 Ai Ô A1 + a2 Ai Ô A2 + … + ai Ai Ô Ai + … + an Ai Ô An
m-j j m-j j m-j j m-j j

And indeed, because by definition, the exterior product of an (m-j)-span element (say, A2 ) and any
m-j
of the non-corresponding cospan elements (say, A3 ) is zero, we can write for any i
j

Ai Ô C ã ai Ai Ô Ai ã ai A
m-j j m-j j m

Now return to the Common Factor Axiom in the form


Z ã AÓB ã A * Ô C Ó C Ô B* ã A * Ô C Ô B* Ó C ã A Ô B* Ó C
m k m-j j j k-j m-j j k-j j m k-j j

Substituting for the common factor in the right hand side gives

Z ã A Ô B* Ó C ã A Ô B* Ó a1 A1 + a2 A2 + … + an An
m k-j j m k-j j j j

By the distributivity of the exterior and regressive products, and their behaviour with scalars, the
scalar factors can be transferred and attached to A.
m

Z ã Ja1 AN Ô B* Ó A1 + Ja2 AN Ô B* Ó A2 + … + Jan AN Ô B* Ó An


m k-j j m k-j j m k-j j

But we have shown earlier that

ai A ã Ai Ô C
m m-j j

Hence substituting gives

Z ã A1 Ô C Ô B* Ó A1 + A2 Ô C Ô B* Ó A2 + … + An Ô C Ô B* Ó An
m-j j k-j j m-j j k-j j m-j j k-j j

Finally, a further substitution of B for C Ô B* gives the final result.


k j k-j

Z ã A1 Ô B Ó A1 + A2 Ô B Ó A2 + … + An Ô B Ó An ã
m-j k j m-j k j m-j k j
n 3.35
‚ Ai Ô B Ó Ai
m-j k j
i=1

The A and B forms of the Common Factor Theorem


We have just proved what we will now call the A form of the Common Factor Theorem (because it works by spanning the first factor in the regressive product). An analogous formula may be obtained mutatis mutandis by factoring $\underset{k}{B}$ rather than $\underset{m}{A}$; hence $\underset{k}{B}$ must now be simple. This formula we will call the B form of the Common Factor Theorem.

In the case where both A and B are simple but not of the same grade, the form which decomposes
the element of lower grade will generate the least number of terms and be more computationally
efficient. Multiple regressive products may be treated by successive applications of the theorem in
either of its forms.


Ï The A form of the Common Factor Theorem

$$\underset{m}{A}\vee \underset{k}{B} = \sum_{i=1}^{n}\left(\underset{m-j}{A_i}\wedge \underset{k}{B}\right)\vee \underset{j}{A_i},\qquad m - j + k - \bar n = 0,$$

$$\underset{m}{A} = \underset{m-j}{A_1}\wedge \underset{j}{A_1} = \underset{m-j}{A_2}\wedge \underset{j}{A_2} = \cdots = \underset{m-j}{A_n}\wedge \underset{j}{A_n},\qquad n = \binom{m}{j} \qquad (3.36)$$

As we have already seen, this is equivalent to the computationally oriented form

A ∨ B = Σi ¯#m-j[A]⟦i⟧ ∧ B ∨ ¯#m-j[A]⟦i⟧

in which the span function ¯#m-j[A] returns the list of (m-j)-span elements {A1, A2, …, An}, and its cospan counterpart returns the corresponding list of j-cospan elements {A1, A2, …, An}.

Or, using Mathematica's inbuilt Listability

A ∨ B = ¯S[(¯#m-j[A] ∧ B) ∨ ¯#m-j[A]]

Ï The B form of the Common Factor Theorem

$$\underset{m}{A}\vee \underset{k}{B} = \sum_{i=1}^{n}\left(\underset{m}{A}\wedge \underset{k-j}{B_i}\right)\vee \underset{j}{B_i},\qquad m - j + k - \bar n = 0,$$

$$\underset{k}{B} = \underset{j}{B_1}\wedge \underset{k-j}{B_1} = \underset{j}{B_2}\wedge \underset{k-j}{B_2} = \cdots = \underset{j}{B_n}\wedge \underset{k-j}{B_n},\qquad n = \binom{k}{j} \qquad (3.37)$$

The computationally oriented form is

A ∨ B = Σi (A ∧ ¯#j[B]⟦i⟧) ∨ ¯#j[B]⟦i⟧

in which the span function ¯#j[B] returns the list of j-span elements {B1, B2, …, Bn}, and its cospan counterpart the corresponding list of (k-j)-cospan elements.

Or, using Mathematica's inbuilt Listability

A ∨ B = ¯S[(A ∧ ¯#j[B]) ∨ ¯#j[B]]


Example: The decomposition of a 1-element


The special case of the regressive product of a 1-element b with an n-element $\underset{n}{a}$ enables us to decompose b directly in terms of the factors of $\underset{n}{a}$. The Common Factor Theorem gives:

$$\underset{n}{a}\vee b = \sum_{i=1}^{n}\left(\underset{n-1}{a_i}\wedge b\right)\vee a_i$$

where:

$$\underset{n}{a} = a_1\wedge a_2\wedge\cdots\wedge a_n = (-1)^{n-i}\,(a_1\wedge a_2\wedge\cdots\wedge \square_i\wedge\cdots\wedge a_n)\wedge a_i$$

The symbol $\square_i$ means that the ith factor is missing from the product. Substituting in the Common Factor Theorem gives:

$$\underset{n}{a}\vee b = \sum_{i=1}^{n}(-1)^{n-i}\bigl((a_1\wedge a_2\wedge\cdots\wedge \square_i\wedge\cdots\wedge a_n)\wedge b\bigr)\vee a_i = \sum_{i=1}^{n}(a_1\wedge a_2\wedge\cdots\wedge a_{i-1}\wedge b\wedge a_{i+1}\wedge\cdots\wedge a_n)\vee a_i$$

Hence the decomposition formula becomes:

$$(a_1\wedge a_2\wedge\cdots\wedge a_n)\vee b = \sum_{i=1}^{n}(a_1\wedge a_2\wedge\cdots\wedge a_{i-1}\wedge b\wedge a_{i+1}\wedge\cdots\wedge a_n)\vee a_i \qquad (3.38)$$

Writing this out in full shows that we can expand the expression simply by interchanging b successively with each of the factors of $a_1\wedge a_2\wedge\cdots\wedge a_n$, and summing the results.

$$(a_1\wedge a_2\wedge\cdots\wedge a_n)\vee b = (b\wedge a_2\wedge\cdots\wedge a_n)\vee a_1 + (a_1\wedge b\wedge\cdots\wedge a_n)\vee a_2 + \cdots + (a_1\wedge a_2\wedge\cdots\wedge b)\vee a_n \qquad (3.39)$$

We can make the result express more explicitly the decomposition of b in terms of the $a_i$ by 'dividing through' by $\underset{n}{a}$. (The quotient of two n-elements has been defined previously as the quotient of their scalar coefficients.)

$$b = \sum_{i=1}^{n}\frac{a_1\wedge a_2\wedge\cdots\wedge a_{i-1}\wedge b\wedge a_{i+1}\wedge\cdots\wedge a_n}{a_1\wedge a_2\wedge\cdots\wedge a_n}\;a_i \qquad (3.40)$$

This expression is, of course, equivalent to that which would have been obtained from Grassmann's approach to solving linear equations (see Chapter 2: The Exterior Product).
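In coordinates, formula 3.40 is Cramer's rule: the quotient of two n-elements is a ratio of determinants. The following sketch verifies the decomposition numerically for illustrative vectors of our own choosing (this is not GrassmannAlgebra package code).

```python
import numpy as np

# Decompose b in terms of the factors a1, a2, a3 of a simple 3-element
# a1 ^ a2 ^ a3 (formula 3.40). Each coefficient is the quotient of two
# 3-elements, i.e. a ratio of 3x3 determinants.
a = [np.array([1.0, 0.0, 2.0]),
     np.array([0.0, 1.0, 1.0]),
     np.array([1.0, 1.0, 0.0])]
b = np.array([3.0, -1.0, 4.0])

denom = np.linalg.det(np.array(a))        # the n-element a1 ^ a2 ^ a3
coeffs = [np.linalg.det(np.array([b if j == i else a[j] for j in range(3)])) / denom
          for i in range(3)]              # numerator: ai replaced by b

reconstructed = sum(c * v for c, v in zip(coeffs, a))
print(np.allclose(reconstructed, b))      # True
```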


Example: Applying the Common Factor Theorem


In this section we show how the Common Factor Theorem generally leads to a more efficient
computation of results than repeated application of the Common Factor Axiom, particularly when
done manually, since there are fewer terms in the Common Factor Theorem expansion, and many
are evidently zero by inspection.

Again, we take the problem of finding the 1-element common to two 2-elements in a 3-space. This
time, however, we take a numerical example and suppose that we know the factors of at least one
of the 2-elements. Let:

x1 = 3 e1 ∧ e2 + 2 e1 ∧ e3 + 3 e2 ∧ e3
x2 = (5 e2 + 7 e3) ∧ e1

We keep x1 fixed initially and apply the Common Factor Theorem to rewrite the regressive product as a sum of the two products, each due to one of the essentially different rearrangements of x2:

x1 ∨ x2 = (x1 ∧ e1) ∨ (5 e2 + 7 e3) - (x1 ∧ (5 e2 + 7 e3)) ∨ e1

The next step is to expand out the exterior products with x1. Often this step can be done by inspection, since all products with a repeated basis element factor will be zero.

x1 ∨ x2 = (3 e2 ∧ e3 ∧ e1) ∨ (5 e2 + 7 e3) - (10 e1 ∧ e3 ∧ e2 + 21 e1 ∧ e2 ∧ e3) ∨ e1

Factorizing out the 3-element e1 ∧ e2 ∧ e3 then gives:

x1 ∨ x2 = (e1 ∧ e2 ∧ e3) ∨ (-11 e1 + 15 e2 + 21 e3)
= ¯c (-11 e1 + 15 e2 + 21 e3) ≡ -11 e1 + 15 e2 + 21 e3

Thus all factors common to both x1 and x2 are congruent to -11 e1 + 15 e2 + 21 e3.

Ï Check by calculating the exterior products

We can check that this result is correct by taking its exterior product with each of x1 and x2 . We
should get zero in both cases, indicating that the factor determined is indeed common to both
original 2-elements. Here we use GrassmannExpandAndSimplify in its alias form ¯%.

¯!3; ¯%[(3 e1 ∧ e2 + 2 e1 ∧ e3 + 3 e2 ∧ e3) ∧ (-11 e1 + 15 e2 + 21 e3)]

0

¯%[(5 e2 + 7 e3) ∧ e1 ∧ (-11 e1 + 15 e2 + 21 e3)]
0
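In a 3-space this computation also has a familiar vector-algebra counterpart: identifying each 2-element with a normal vector (the Euclidean complement, congruence factor taken as 1), the common 1-element of two 2-elements is, up to congruence, the cross product of their normals. A minimal cross-check under that assumed identification:

```python
import numpy as np

# x1 = 3 e12 + 2 e13 + 3 e23  <->  normal n1 = (3, -2, 3)
# x2 = (5 e2 + 7 e3) ^ e1 = -5 e12 - 7 e13  <->  normal n2 = (0, 7, -5)
n1 = np.array([3, -2, 3])
n2 = np.array([0, 7, -5])

common = np.cross(n1, n2)
print(common)   # -11 e1 + 15 e2 + 21 e3, as found above
```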

Automating the application of the Common Factor Theorem



The Common Factor Theorem can be computed in GrassmannAlgebra by using either the A or B
form in its computable form using the appropriate span and cospan. However GrassmannAlgebra
also has a more direct function ToCommonFactor which will attempt to handle multiple
regressive products in an optimum way.

Ï Using ToCommonFactor

ToCommonFactor reduces any regressive products in a Grassmann expression to their common factor form.

For example in the case explored above we have:

x1 = 3 e1 ∧ e2 + 2 e1 ∧ e3 + 3 e2 ∧ e3;
x2 = (5 e2 + 7 e3) ∧ e1;

¯A; ToCommonFactor[x1 ∨ x2]
¯c (-11 e1 + 15 e2 + 21 e3)

The scalar ¯c is called the congruence factor. The congruence factor has already been defined in Section 3.3 above as the connection between the current basis n-element and the unit n-element $\underset{\bar n}{1}$ (in GrassmannAlgebra input we use ¯n instead of n, in case n is being used elsewhere in your computations).

$$e_1\wedge e_2\wedge\cdots\wedge e_n = \bar c\;\underset{\bar n}{1}$$

Since this connection cannot be defined unless a metric is introduced, the congruence factor
remains arbitrary in non-metric spaces. Although such an arbitrariness may appear at first sight to
be disadvantageous, it is on the contrary, highly elucidating. In application to the computing of
unions and intersections of spaces, perhaps under a geometric interpretation where they represent
lines, planes and hyperplanes, the notion of congruence becomes central, since spaces can only be
determined up to congruence.

In the example above, x1 and x2 were expressed in terms of basis elements. ToCommonFactor
can also apply the Common Factor Theorem to more general elements. For example, if x1 and x2
were expressed as symbolic bivectors, the common factor expansion could still be performed.

x1 = x Ô y;
x2 = u Ô v;

ToCommonFactor[x1 ∨ x2]
¯c (y |u ∧ v ∧ x| - x |u ∧ v ∧ y|)

The expression |u ∧ v ∧ x| represents the coefficient of the current basis n-element (in this case the 3-element e1 ∧ e2 ∧ e3) when u ∧ v ∧ x is expressed in terms of it.

As an example, we use the GrassmannAlgebra function ComposeBasisForm to compose an expression for u ∧ v ∧ x in terms of basis elements, and then expand and simplify the result using the alias ¯% for GrassmannExpandAndSimplify.


X = ComposeBasisForm[u ∧ v ∧ x]
(e1 u1 + e2 u2 + e3 u3) ∧ (e1 v1 + e2 v2 + e3 v3) ∧ (e1 x1 + e2 x2 + e3 x3)

¯%[X]
(-u3 v2 x1 + u2 v3 x1 + u3 v1 x2 - u1 v3 x2 - u2 v1 x3 + u1 v2 x3) e1 ∧ e2 ∧ e3

Hence in this case |u ∧ v ∧ x| is given by

|u ∧ v ∧ x| = -u3 v2 x1 + u2 v3 x1 + u3 v1 x2 - u1 v3 x2 - u2 v1 x3 + u1 v2 x3
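This coefficient is just the 3×3 determinant of the coordinate rows of u, v and x. A quick numerical check of the expansion above (illustrative random values):

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, x = rng.random(3), rng.random(3), rng.random(3)

det = np.linalg.det(np.array([u, v, x]))
# The expansion above, with u[0] = u1, u[1] = u2, etc.
expansion = (-u[2]*v[1]*x[0] + u[1]*v[2]*x[0] + u[2]*v[0]*x[1]
             - u[0]*v[2]*x[1] - u[1]*v[0]*x[2] + u[0]*v[1]*x[2])
print(np.isclose(det, expansion))   # True
```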

Ï Using ToCommonFactorA and ToCommonFactorB

If you wish to explicitly apply the A form of the Common Factor Theorem, you can use ToCommonFactorA; for the B form, use ToCommonFactorB. The results will be equal, but they may look different.

We illustrate by taking the example in the section above and expanding it both ways. Again define

x1 = x ∧ y;
x2 = u ∧ v;

ToCommonFactorA[x1 ∨ x2]
¯c (y |u ∧ v ∧ x| - x |u ∧ v ∧ y|)

ToCommonFactorB[x1 ∨ x2]
¯c (-v |u ∧ x ∧ y| + u |v ∧ x ∧ y|)

To see most directly that these are equal, we express the vectors in terms of basis elements.

Z = ComposeBasisForm[x1 ∨ x2]
(e1 x1 + e2 x2 + e3 x3) ∧ (e1 y1 + e2 y2 + e3 y3) ∨
(e1 u1 + e2 u2 + e3 u3) ∧ (e1 v1 + e2 v2 + e3 v3)

Applying the A form of the Common Factor Theorem gives

ToCommonFactorA[Z]
¯c (e1 (u3 v1 x2 y1 - u1 v3 x2 y1 - u2 v1 x3 y1 + u1 v2 x3 y1 -
u3 v1 x1 y2 + u1 v3 x1 y2 + u2 v1 x1 y3 - u1 v2 x1 y3) +
e2 (u3 v2 x2 y1 - u2 v3 x2 y1 - u3 v2 x1 y2 + u2 v3 x1 y2 -
u2 v1 x3 y2 + u1 v2 x3 y2 + u2 v1 x2 y3 - u1 v2 x2 y3) +
e3 (u3 v2 x3 y1 - u2 v3 x3 y1 - u3 v1 x3 y2 + u1 v3 x3 y2 -
u3 v2 x1 y3 + u2 v3 x1 y3 + u3 v1 x2 y3 - u1 v3 x2 y3))

The resulting expression is identical to that calculated by the B form.


ToCommonFactorB[Z] == ToCommonFactorA[Z]
True
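In coordinates, the agreement of the A and B forms is a determinant identity. Replacing each bracketed coefficient |·| by the corresponding 3×3 determinant of coordinate rows gives a quick numerical check (illustrative values; Euclidean coordinates assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
u, v, x, y = (rng.random(3) for _ in range(4))

def det(*rows):
    # |p ^ q ^ r| as the determinant of the coordinate rows
    return np.linalg.det(np.array(rows))

a_form = y * det(u, v, x) - x * det(u, v, y)    # shape of the A-form result
b_form = -v * det(u, x, y) + u * det(v, x, y)   # shape of the B-form result
print(np.allclose(a_form, b_form))              # True
```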

Ï Expressions involving non-decomposable elements

The GrassmannAlgebra function ToCommonFactor uses the A or B form heuristically. When a common factor can be calculated by using either the A or B form, the A form is used by default. If an expansion is not possible by using the A form, ToCommonFactor tries the B form.

For example, consider the regressive product of two symbolic 2-elements in a 3-space. Attempting
to compute the common factor fails, since neither factor can be decomposed.

¯A; ToCommonFactor[a ∨ b]   (a and b both of grade 2)

a ∨ b

However, if either one of the factors is decomposable, then a formula for the common factor can be generated. In the case below, the results are the same; remember that ¯c is an arbitrary scalar factor.

{ToCommonFactor[a ∨ (x ∧ y)], ToCommonFactor[(x ∧ y) ∨ a]}

{¯c (-y |x ∧ a| + x |y ∧ a|), ¯c (y |x ∧ a| - x |y ∧ a|)}

Ï Example: The regressive product of three 3-elements in a 4-space.

ToCommonFactor can find the common factor of any number of elements. As an example, we take the regressive product of three 3-elements in a 4-space. This is a 1-element, since 3 + 3 + 3 - 2 × 4 = 1. Remember also that a 3-element in a 4-space is necessarily simple (see Chapter 2: The Exterior Product).

To demonstrate this we first declare a 4-space, and then use the GrassmannAlgebra function ComposeBasisForm to compose the required product in basis form.

¯!4; X = ComposeBasisForm[a ∨ b ∨ g]   (a, b, g each of grade 3)

(a3,1 e1 ∧ e2 ∧ e3 + a3,2 e1 ∧ e2 ∧ e4 + a3,3 e1 ∧ e3 ∧ e4 + a3,4 e2 ∧ e3 ∧ e4) ∨
(b3,1 e1 ∧ e2 ∧ e3 + b3,2 e1 ∧ e2 ∧ e4 + b3,3 e1 ∧ e3 ∧ e4 + b3,4 e2 ∧ e3 ∧ e4) ∨
(g3,1 e1 ∧ e2 ∧ e3 + g3,2 e1 ∧ e2 ∧ e4 + g3,3 e1 ∧ e3 ∧ e4 + g3,4 e2 ∧ e3 ∧ e4)

The common factor is then determined up to congruence by ToCommonFactor.


Xc = ToCommonFactor[X]
¯c^2 (e1 (-a3,3 b3,2 g3,1 + a3,2 b3,3 g3,1 +
a3,3 b3,1 g3,2 - a3,1 b3,3 g3,2 - a3,2 b3,1 g3,3 + a3,1 b3,2 g3,3) +
e2 (-a3,4 b3,2 g3,1 + a3,2 b3,4 g3,1 + a3,4 b3,1 g3,2 -
a3,1 b3,4 g3,2 - a3,2 b3,1 g3,4 + a3,1 b3,2 g3,4) +
e3 (-a3,4 b3,3 g3,1 + a3,3 b3,4 g3,1 + a3,4 b3,1 g3,3 -
a3,1 b3,4 g3,3 - a3,3 b3,1 g3,4 + a3,1 b3,3 g3,4) +
e4 (-a3,4 b3,3 g3,2 + a3,3 b3,4 g3,2 + a3,4 b3,2 g3,3 -
a3,2 b3,4 g3,3 - a3,3 b3,2 g3,4 + a3,2 b3,3 g3,4))

Note that the congruence factor ¯c is to the second power. This is because the original expression
contained the regressive product operator twice: one ¯c effectively stands in for each basis 4-
element that is produced during the calculation (3+3+3 = 4+4+1).

It is easy to check that this result is indeed a common factor by taking its exterior product with
each of the 3-elements. For example:

¯%[Xc ∧ (a3,1 e1 ∧ e2 ∧ e3 + a3,2 e1 ∧ e2 ∧ e4 +
a3,3 e1 ∧ e3 ∧ e4 + a3,4 e2 ∧ e3 ∧ e4)] // Expand

0

3.7 The Regressive Product of Simple Elements

The regressive product of simple elements


The regressive product of simple elements is simple.

To show this consider the (non-zero) regressive product $\underset{m}{a}\vee\underset{k}{b}$, where $\underset{m}{a}$ and $\underset{k}{b}$ are simple, $m+k\ge n$. The 1-element factors of $\underset{m}{a}$ and $\underset{k}{b}$ must then have a common subspace of dimension $m+k-n = j$. Let $\underset{j}{g}$ be a simple j-element which spans this common subspace. We can then write:

$$\underset{m}{a}\vee\underset{k}{b} = \left(\underset{m-j}{a}\wedge\underset{j}{g}\right)\vee\left(\underset{k-j}{b}\wedge\underset{j}{g}\right) = \left(\underset{m-j}{a}\wedge\underset{k-j}{b}\wedge\underset{j}{g}\right)\vee\underset{j}{g} \equiv \underset{j}{g}$$

The Common Factor Axiom then shows us that since $\underset{j}{g}$ is simple, then so is the original product of simple elements $\underset{m}{a}\vee\underset{k}{b}$.


The regressive product of (n-1)-elements


Since we have shown in Chapter 2: The Exterior Product that all (n-1)-elements are simple, and in the previous section that the regressive product of simple elements is simple, it follows immediately that the regressive product of any number of (n-1)-elements is simple.

Ï Example: The regressive product of two 3-elements in a 4-space is simple

As an example of the foregoing result we calculate the regressive product of two 3-elements in a 4-
space. We begin by declaring a 4-space and creating two general 3-elements.

¯!4; Z = ComposeBasisForm[x ∨ y]   (x and y each of grade 3)

(x3,1 e1 ∧ e2 ∧ e3 + x3,2 e1 ∧ e2 ∧ e4 + x3,3 e1 ∧ e3 ∧ e4 + x3,4 e2 ∧ e3 ∧ e4) ∨
(y3,1 e1 ∧ e2 ∧ e3 + y3,2 e1 ∧ e2 ∧ e4 + y3,3 e1 ∧ e3 ∧ e4 + y3,4 e2 ∧ e3 ∧ e4)

The common 2-element is:

Zc = ToCommonFactor[Z]
¯c ((-x3,2 y3,1 + x3,1 y3,2) e1 ∧ e2 + (-x3,3 y3,1 + x3,1 y3,3) e1 ∧ e3 +
(-x3,3 y3,2 + x3,2 y3,3) e1 ∧ e4 + (-x3,4 y3,1 + x3,1 y3,4) e2 ∧ e3 +
(-x3,4 y3,2 + x3,2 y3,4) e2 ∧ e4 + (-x3,4 y3,3 + x3,3 y3,4) e3 ∧ e4)

We can show that this 2-element is simple by confirming that its exterior product with itself is
zero. (This technique was discussed in Chapter 2: The Exterior Product.)

¯%[Zc ∧ Zc] // Expand

0
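The vanishing of Zc ∧ Zc is the Plücker relation among the six components of Zc, and it holds identically whatever the x3,i and y3,i. A quick numerical check, with the e12, …, e34 components copied from the expression for Zc above and the congruence factor taken as 1:

```python
import numpy as np

rng = np.random.default_rng(4)
x, y = rng.random(4), rng.random(4)   # x[0] stands for x3,1, etc.

# Components of Zc on e12, e13, e14, e23, e24, e34
c12 = -x[1]*y[0] + x[0]*y[1]
c13 = -x[2]*y[0] + x[0]*y[2]
c14 = -x[2]*y[1] + x[1]*y[2]
c23 = -x[3]*y[0] + x[0]*y[3]
c24 = -x[3]*y[1] + x[1]*y[3]
c34 = -x[3]*y[2] + x[2]*y[3]

# Zc ^ Zc = 2 (c12 c34 - c13 c24 + c14 c23) e1 ^ e2 ^ e3 ^ e4
print(np.isclose(c12*c34 - c13*c24 + c14*c23, 0))   # True
```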

Regressive products leading to scalar results


A formula particularly useful in its interior product form (to be derived later in Chapter 6) is obtained by application of the Common Factor Theorem to the product $(a_1\wedge\cdots\wedge a_m)\vee\underset{n-1}{b_1}\vee\cdots\vee\underset{n-1}{b_m}$, where the $a_i$ are 1-elements and the $\underset{n-1}{b_i}$ are (n-1)-elements.

$$(a_1\wedge\cdots\wedge a_m)\vee\underset{n-1}{b_1}\vee\cdots\vee\underset{n-1}{b_m} = (a_1\wedge\cdots\wedge a_m)\vee\underset{n-1}{b_j}\vee(-1)^{j-1}\,\underset{n-1}{b_1}\vee\cdots\vee\square_j\vee\cdots\vee\underset{n-1}{b_m}$$

$$= \sum_i (-1)^{i-1}\left(a_i\vee\underset{n-1}{b_j}\right)(a_1\wedge\cdots\wedge\square_i\wedge\cdots\wedge a_m)\vee(-1)^{j-1}\,\underset{n-1}{b_1}\vee\cdots\vee\square_j\vee\cdots\vee\underset{n-1}{b_m}$$

$$= \sum_i (-1)^{i+j}\left(a_i\vee\underset{n-1}{b_j}\right)(a_1\wedge\cdots\wedge\square_i\wedge\cdots\wedge a_m)\vee\underset{n-1}{b_1}\vee\cdots\vee\square_j\vee\cdots\vee\underset{n-1}{b_m}$$

By repeating this process one obtains finally that:

$$(a_1\wedge\cdots\wedge a_m)\vee\underset{n-1}{b_1}\vee\cdots\vee\underset{n-1}{b_m} = \operatorname{Det}\!\left[a_i\vee\underset{n-1}{b_j}\right] \qquad (3.41)$$

where $\operatorname{Det}\!\left[a_i\vee\underset{n-1}{b_j}\right]$ is the determinant of the matrix whose elements are the scalars $a_i\vee\underset{n-1}{b_j}$.

For the case m = 2 we have:

$$(a_1\wedge a_2)\vee\underset{n-1}{b_1}\vee\underset{n-1}{b_2} = \left(a_1\vee\underset{n-1}{b_1}\right)\left(a_2\vee\underset{n-1}{b_2}\right) - \left(a_1\vee\underset{n-1}{b_2}\right)\left(a_2\vee\underset{n-1}{b_1}\right)$$

This determinant formula is of central importance in the computation of inner products to be discussed in Chapter 6.
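Formula 3.41 can be spot-checked numerically in a 3-space. If each (n-1)-element $b_i$ is identified with the Euclidean complement of a vector $v_i$ (congruence factor taken as 1), the scalar $a_i \vee b_j$ becomes the dot product $a_i\cdot v_j$, and the m = 2 case reduces to the classical Lagrange identity $(a_1\times a_2)\cdot(v_1\times v_2) = \det[a_i\cdot v_j]$. A hedged sketch under those assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
a1, a2, v1, v2 = (rng.random(3) for _ in range(4))

# Det[ai v bj], with ai v bj identified with ai . vj
det_form = np.linalg.det(np.array([[a1 @ v1, a1 @ v2],
                                   [a2 @ v1, a2 @ v2]]))
# (a1 ^ a2) v b1 v b2, identified with <a1 ^ a2, v1 ^ v2>
product_form = np.cross(a1, a2) @ np.cross(v1, v2)
print(np.isclose(det_form, product_form))   # True
```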

Expressing an element in terms of another basis


The Common Factor Theorem may also be used to express an m-element in terms of another basis with basis n-element $\underset{n}{b}$ by expanding the product $\underset{n}{b}\vee\underset{m}{a}$.

Let the new basis be $\{b_1,\ldots,b_n\}$ and let $\underset{n}{b} = b_1\wedge\cdots\wedge b_n$; then the Common Factor Theorem permits us to write:

$$\underset{n}{b}\vee\underset{m}{a} = \sum_{i=1}^{\nu}\left(\underset{n-m}{b_i}\wedge\underset{m}{a}\right)\vee\underset{m}{b_i}$$

where $\nu = \binom{n}{m}$ and

$$\underset{n}{b} = \underset{n-m}{b_1}\wedge\underset{m}{b_1} = \underset{n-m}{b_2}\wedge\underset{m}{b_2} = \cdots = \underset{n-m}{b_\nu}\wedge\underset{m}{b_\nu}$$

We can visualize how the formula operates by writing $\underset{m}{a}$ and $\underset{n}{b}$ as simple products and then exchanging the $b_i$ with the $a_i$ in all the essentially different ways possible whilst always retaining the original ordering. To make this more concrete, suppose n is 5 and m is 2:

(b1 ∧ b2 ∧ b3 ∧ b4 ∧ b5) ∨ (a1 ∧ a2) =
(a1 ∧ a2 ∧ b3 ∧ b4 ∧ b5) ∨ (b1 ∧ b2)
+ (a1 ∧ b2 ∧ a2 ∧ b4 ∧ b5) ∨ (b1 ∧ b3)
+ (a1 ∧ b2 ∧ b3 ∧ a2 ∧ b5) ∨ (b1 ∧ b4)
+ (a1 ∧ b2 ∧ b3 ∧ b4 ∧ a2) ∨ (b1 ∧ b5)
+ (b1 ∧ a1 ∧ a2 ∧ b4 ∧ b5) ∨ (b2 ∧ b3)
+ (b1 ∧ a1 ∧ b3 ∧ a2 ∧ b5) ∨ (b2 ∧ b4)
+ (b1 ∧ a1 ∧ b3 ∧ b4 ∧ a2) ∨ (b2 ∧ b5)
+ (b1 ∧ b2 ∧ a1 ∧ a2 ∧ b5) ∨ (b3 ∧ b4)
+ (b1 ∧ b2 ∧ a1 ∧ b4 ∧ a2) ∨ (b3 ∧ b5)
+ (b1 ∧ b2 ∧ b3 ∧ a1 ∧ a2) ∨ (b4 ∧ b5)

Now let the new n-elements on the right hand side be written as scalar factors $b_i$ times the new basis n-element $\underset{n}{b}$.

$$\underset{n-m}{b_i}\wedge\underset{m}{a} = b_i\,\underset{n}{b}$$

Substituting gives

$$\underset{n}{b}\vee\underset{m}{a} = \sum_{i=1}^{\nu} b_i\,\underset{n}{b}\vee\underset{m}{b_i} = \underset{n}{b}\vee\sum_{i=1}^{\nu} b_i\,\underset{m}{b_i}$$

Since $\underset{n}{b}$ is an n-element, we can 'divide' through by it to give finally

$$\underset{m}{a} = \sum_{i=1}^{\nu} b_i\,\underset{m}{b_i},\qquad b_i = \frac{\underset{n-m}{b_i}\wedge\underset{m}{a}}{\underset{n}{b}} \qquad (3.42)$$

Ï Example: Expressing a 3-element in terms of another basis

Consider a 4-space with basis {e1, e2, e3, e4} and let $\underset{3}{a}$ be a 3-element expressed in terms of this basis.

¯!4; a = 2 e1 ∧ e2 ∧ e3 - 3 e1 ∧ e3 ∧ e4 - 2 e2 ∧ e3 ∧ e4;

Suppose now that we have another basis related to the e basis by:

b1 = 2 e1 + 3 e3;
b2 = 5 e3 - e4;
b3 = e1 - e3;
b4 = e1 + e2 + e3 + e4;

We wish to express $\underset{3}{a}$ in terms of the new basis. Instead of inverting the transformation to find the ei in terms of the bi, substituting for the ei in $\underset{3}{a}$, and simplifying, we can almost write the result required by inspection.

(b1 ∧ b2 ∧ b3 ∧ b4) ∨ a = (b1 ∧ a) ∨ (b2 ∧ b3 ∧ b4) - (b2 ∧ a) ∨ (b1 ∧ b3 ∧ b4) +
(b3 ∧ a) ∨ (b1 ∧ b2 ∧ b4) - (b4 ∧ a) ∨ (b1 ∧ b2 ∧ b3)

= (-4 e1 ∧ e2 ∧ e3 ∧ e4) ∨ (b2 ∧ b3 ∧ b4) - (2 e1 ∧ e2 ∧ e3 ∧ e4) ∨ (b1 ∧ b3 ∧ b4) +
(-2 e1 ∧ e2 ∧ e3 ∧ e4) ∨ (b1 ∧ b2 ∧ b4) - (-e1 ∧ e2 ∧ e3 ∧ e4) ∨ (b1 ∧ b2 ∧ b3)

= (e1 ∧ e2 ∧ e3 ∧ e4) ∨
((-4 b2 ∧ b3 ∧ b4) + (-2 b1 ∧ b3 ∧ b4) + (-2 b1 ∧ b2 ∧ b4) + (b1 ∧ b2 ∧ b3))

But from the transformation we find that

¯%[b1 ∧ b2 ∧ b3 ∧ b4]
-5 e1 ∧ e2 ∧ e3 ∧ e4

so that the equation becomes

(b1 ∧ b2 ∧ b3 ∧ b4) ∨ a = -(1/5) (b1 ∧ b2 ∧ b3 ∧ b4) ∨
((-4 b2 ∧ b3 ∧ b4) + (-2 b1 ∧ b3 ∧ b4) + (-2 b1 ∧ b2 ∧ b4) + (b1 ∧ b2 ∧ b3))

We can simplify this by 'dividing' through by b1 ∧ b2 ∧ b3 ∧ b4, to give finally

a = (4/5) b2 ∧ b3 ∧ b4 + (2/5) b1 ∧ b3 ∧ b4 + (2/5) b1 ∧ b2 ∧ b4 - (1/5) b1 ∧ b2 ∧ b3

We can easily check this by expanding and simplifying the right hand side to give both sides in terms of the original e basis.

a == ¯%[(4/5) b2 ∧ b3 ∧ b4 + (2/5) b1 ∧ b3 ∧ b4 + (2/5) b1 ∧ b2 ∧ b4 - (1/5) b1 ∧ b2 ∧ b3]
True
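The same check can be made outside GrassmannAlgebra by computing the components of each 3-element in the 4-space directly as 3×3 minors of the coordinate rows. A sketch (the helper three_element is ours, not a package function):

```python
import numpy as np
from itertools import combinations

def three_element(x, y, z):
    """Components of x ^ y ^ z on e_i^e_j^e_k (i<j<k) as 3x3 minors."""
    M = np.array([x, y, z])
    return np.array([np.linalg.det(M[:, cols])
                     for cols in combinations(range(4), 3)])

e = np.eye(4)
b1, b2 = 2*e[0] + 3*e[2], 5*e[2] - e[3]
b3, b4 = e[0] - e[2], e[0] + e[1] + e[2] + e[3]

# a = 2 e123 - 3 e134 - 2 e234, components ordered (123, 124, 134, 234)
a = np.array([2.0, 0.0, -3.0, -2.0])

rhs = (4/5)*three_element(b2, b3, b4) + (2/5)*three_element(b1, b3, b4) \
    + (2/5)*three_element(b1, b2, b4) - (1/5)*three_element(b1, b2, b3)
print(np.allclose(rhs, a))   # True
```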


Ï Expressing a 1-element in terms of another basis

For the special case of a 1-element, the scalar coefficients $b_i$ in formula 3.42 can be written more explicitly, and without undue complexity, resulting in a decomposition of a in terms of n independent elements $b_i$, equivalent to expressing a in the basis $b_i$.

$$a = \sum_{i=1}^{n}\frac{b_1\wedge b_2\wedge\cdots\wedge a\wedge\cdots\wedge b_n}{b_1\wedge b_2\wedge\cdots\wedge b_i\wedge\cdots\wedge b_n}\;b_i \qquad (3.43)$$

Here the denominators are the same in each term, but are expressed this way to show the positioning of the factor a in the numerator: in the ith term, a replaces $b_i$.

This result has already been introduced in the section above, Example: The decomposition of a 1-element.
Ï The symmetric expansion of a 1-element in terms of another basis

For the case m = 1, $\underset{n}{b}\vee a$ may be expanded by the Common Factor Theorem to give:

$$(b_1\wedge b_2\wedge\cdots\wedge b_n)\vee a = \sum_{i=1}^{n}(b_1\wedge b_2\wedge\cdots\wedge b_{i-1}\wedge a\wedge b_{i+1}\wedge\cdots\wedge b_n)\vee b_i$$

or, in terms of the mnemonic expansion in the previous section:

$$(b_1\wedge b_2\wedge b_3\wedge\cdots\wedge b_n)\vee a = (a\wedge b_2\wedge b_3\wedge\cdots\wedge b_n)\vee b_1 + (b_1\wedge a\wedge b_3\wedge\cdots\wedge b_n)\vee b_2 + (b_1\wedge b_2\wedge a\wedge\cdots\wedge b_n)\vee b_3 + \cdots + (b_1\wedge b_2\wedge b_3\wedge\cdots\wedge a)\vee b_n$$

Putting a equal to $b_0$, this may be written more symmetrically as:

$$\sum_{i=0}^{n}(-1)^i\,(b_0\wedge b_1\wedge\cdots\wedge\square_i\wedge\cdots\wedge b_n)\vee b_i = 0 \qquad (3.44)$$

For example, suppose $b_0, b_1, b_2, b_3$ are four dependent 1-elements which span a 3-space; then the formula reduces to the identity:

$$(b_1\wedge b_2\wedge b_3)\vee b_0 - (b_0\wedge b_2\wedge b_3)\vee b_1 + (b_0\wedge b_1\wedge b_3)\vee b_2 - (b_0\wedge b_1\wedge b_2)\vee b_3 = 0 \qquad (3.45)$$

We can get GrassmannAlgebra to check this formula by composing each of the bi in basis form,
and finding the common factor of each term. (To use ComposeBasisForm, we will first have to
give the bi new names which do not already involve subscripts.)

¯A; B = (b1 ∧ b2 ∧ b3) ∨ b0 -
(b0 ∧ b2 ∧ b3) ∨ b1 + (b0 ∧ b1 ∧ b3) ∨ b2 - (b0 ∧ b1 ∧ b2) ∨ b3;

B = B /. {b0 → w, b1 → x, b2 → y, b3 → z}
-(w ∧ x ∧ y ∨ z) + w ∧ x ∧ z ∨ y - w ∧ y ∧ z ∨ x + x ∧ y ∧ z ∨ w

Bn = ComposeBasisForm[B]
-((e1 w1 + e2 w2 + e3 w3) ∧ (e1 x1 + e2 x2 + e3 x3) ∧ (e1 y1 + e2 y2 + e3 y3) ∨
(e1 z1 + e2 z2 + e3 z3)) +
(e1 w1 + e2 w2 + e3 w3) ∧ (e1 x1 + e2 x2 + e3 x3) ∧ (e1 z1 + e2 z2 + e3 z3) ∨
(e1 y1 + e2 y2 + e3 y3) -
(e1 w1 + e2 w2 + e3 w3) ∧ (e1 y1 + e2 y2 + e3 y3) ∧ (e1 z1 + e2 z2 + e3 z3) ∨
(e1 x1 + e2 x2 + e3 x3) +
(e1 x1 + e2 x2 + e3 x3) ∧ (e1 y1 + e2 y2 + e3 y3) ∧ (e1 z1 + e2 z2 + e3 z3) ∨
(e1 w1 + e2 w2 + e3 w3)

ToCommonFactor[Bn] == 0
True
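In coordinates, identity 3.45 states that any four vectors of a 3-space are linearly dependent, with Cramer-rule coefficients. A direct numerical check (random illustrative vectors, regressive products of n-elements with 1-elements replaced by determinants):

```python
import numpy as np

rng = np.random.default_rng(3)
b0, b1, b2, b3 = (rng.random(3) for _ in range(4))

def det(*rows):
    return np.linalg.det(np.array(rows))

total = (det(b1, b2, b3) * b0 - det(b0, b2, b3) * b1
         + det(b0, b1, b3) * b2 - det(b0, b1, b2) * b3)
print(np.allclose(total, 0))   # True
```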

Exploration: The cobasis form of the Common Factor Axiom


The Common Factor Axiom has a significantly suggestive form when written in terms of cobasis
elements. This form will later help us extend the definition of the interior product to arbitrary
elements.

We start with three basis elements whose exterior product is equal to the basis n-element. The basis n-element may also be expressed as the cobasis element $\underline{1}$ of the unit element 1 of $\Lambda_0$.

$$\underset{m}{e_i}\wedge\underset{k}{e_j}\wedge\underset{p}{e_s} = e_1\wedge e_2\wedge\cdots\wedge e_n = \underline{1} = \bar c\;\underset{\bar n}{1}$$

The Common Factor Axiom can be written for these basis elements as:

$$\left(\underset{m}{e_i}\wedge\underset{p}{e_s}\right)\vee\left(\underset{k}{e_j}\wedge\underset{p}{e_s}\right) = \left(\underset{m}{e_i}\wedge\underset{k}{e_j}\wedge\underset{p}{e_s}\right)\vee\underset{p}{e_s},\qquad m+k+p = \bar n \qquad (3.46)$$

From the definition of cobasis elements (written here with an underline) we have that:

$$\underset{m}{e_i}\wedge\underset{p}{e_s} = (-1)^{mk}\,\underline{\underset{k}{e_j}}$$

$$\underset{k}{e_j}\wedge\underset{p}{e_s} = \underline{\underset{m}{e_i}}$$

$$\underset{p}{e_s} = \underline{\underset{m}{e_i}\wedge\underset{k}{e_j}}$$

$$\underset{m}{e_i}\wedge\underset{k}{e_j}\wedge\underset{p}{e_s} = \underline{1}$$

Substituting these four elements into the Common Factor Axiom above gives:

$$(-1)^{mk}\,\underline{\underset{k}{e_j}}\vee\underline{\underset{m}{e_i}} = \underline{1}\vee\underline{\left(\underset{m}{e_i}\wedge\underset{k}{e_j}\right)}$$

Or, more symmetrically, by interchanging the first two factors:

$$\underline{\underset{m}{e_i}}\vee\underline{\underset{k}{e_j}} = \underline{1}\vee\underline{\left(\underset{m}{e_i}\wedge\underset{k}{e_j}\right)} = \bar c\;\underline{\left(\underset{m}{e_i}\wedge\underset{k}{e_j}\right)} \qquad (3.47)$$

It can be seen that in this form the Common Factor Axiom does not specifically display the common factor, and indeed remains valid for all basis elements, independent of their grades.

In sum: Given any two basis elements, the cobasis element of their exterior product is congruent to the regressive product of their cobasis elements.

Exploration: The regressive product of cobasis elements


In Chapter 5: The Complement, we will have cause to calculate the regressive product of cobasis elements. From the formula derived below we will have an instance of the fact that the regressive product of (n-1)-elements is simple, and we will determine that simple element.

First, consider basis elements $e_1$ and $e_2$ of an n-space and their cobasis elements $\underline{e_1}$ and $\underline{e_2}$. The regressive product of $\underline{e_1}$ and $\underline{e_2}$ is given by:

$$\underline{e_1}\vee\underline{e_2} = (e_2\wedge e_3\wedge\cdots\wedge e_n)\vee(-e_1\wedge e_3\wedge\cdots\wedge e_n)$$

Applying the Common Factor Axiom enables us to write:

$$\underline{e_1}\vee\underline{e_2} = (e_1\wedge e_2\wedge e_3\wedge\cdots\wedge e_n)\vee(e_3\wedge\cdots\wedge e_n)$$

We can write this either in the form already derived in the section above:

$$\underline{e_1}\vee\underline{e_2} = \underline{1}\vee\underline{(e_1\wedge e_2)}$$

or, by writing $\underline{1} = \bar c\;\underset{n}{1}$, as:

$$\underline{e_1}\vee\underline{e_2} = \bar c\;\underline{(e_1\wedge e_2)}$$

Taking the regressive product of this equation with $\underline{e_3}$ gives:

$$\underline{e_1}\vee\underline{e_2}\vee\underline{e_3} = \bar c\;\underline{(e_1\wedge e_2)}\vee\underline{e_3} = \bar c\;\underline{1}\vee\underline{(e_1\wedge e_2\wedge e_3)} = \bar c^{\,2}\;\underline{(e_1\wedge e_2\wedge e_3)}$$

Continuing this process, we arrive finally at the result:

$$\underline{e_1}\vee\underline{e_2}\vee\cdots\vee\underline{e_m} = \bar c^{\,m-1}\;\underline{(e_1\wedge e_2\wedge\cdots\wedge e_m)} \qquad (3.48)$$

A special case which we will have occasion to use in Chapter 5 is where the result reduces to a 1-element.

$$(-1)^{j-1}\,\underline{e_1}\vee\underline{e_2}\vee\cdots\vee\square_j\vee\cdots\vee\underline{e_n} = \bar c^{\,n-2}\,(-1)^{n-1}\,e_j \qquad (3.49)$$

In sum: The regressive product of cobasis elements of basis 1-elements is congruent to the cobasis element of their exterior product.

In fact, this formula is just an instance of a more general result which says that: The regressive product of cobasis elements of any grade is congruent to the cobasis element of their exterior product. We will discuss a result very similar to this in more detail after we have defined the complement of an element in Chapter 5.

3.8 Factorization of Simple Elements

Factorization using the regressive product


The Common Factor Axiom asserts that in an n-space, the regressive product of a simple m-element with a simple (n-m+1)-element will give either zero, or a 1-element belonging to them both, and hence a factor of the m-element. If m such factors can be obtained which are independent, their product will therefore constitute (apart from a scalar factor easily determined) a factorization of the m-element.

Let $\underset{m}{a}$ be the simple element to be factorized. Choose first an (n-m)-element $\underset{n-m}{b}$ whose product with $\underset{m}{a}$ is non-zero. Next, choose a set of m independent 1-elements $b_j$ whose products with $\underset{n-m}{b}$ are also non-zero.

A factorization $a_j$ of $\underset{m}{a}$ is then obtained from:

$$\underset{m}{a} = a\,(a_1\wedge a_2\wedge\cdots\wedge a_m),\qquad a_j = \underset{m}{a}\vee\left(b_j\wedge\underset{n-m}{b}\right) \qquad (3.50)$$

The scalar factor a may be determined simply by equating any two corresponding terms of the original element and the factorized version.

Note that no factorization is unique. Had different $b_j$ been chosen, a different factorization would have been obtained. Nevertheless, any one factorization may be obtained from any other by adding multiples of the factors to each factor.

If an element is simple, then the exterior product of the element with itself is zero. The converse,
however, is not true in general for elements of grade higher than 2, for it only requires the element
to have just one simple factor to make the product with itself zero.

If the method is applied to the factorization of a non-simple element, the result will still be a simple
element. Thus an element may be tested to see if it is simple by applying the method of this
section: if the factorization is not equivalent to the original element, the hypothesis of simplicity
has been violated.

Ï Example: Factorization of a simple 2-element

Suppose we have a 2-element $\underset{2}{a}$ which we wish to show is simple and which we wish to factorize.

$$\underset{2}{a} = v\wedge w + v\wedge x + v\wedge y + v\wedge z + w\wedge z + x\wedge z + y\wedge z$$

There are 5 independent 1-elements in the expression for $\underset{2}{a}$: v, w, x, y, z, hence we can choose n to be 5. Next, we choose $\underset{n-m}{b}$ (= $\underset{3}{b}$) arbitrarily as $x\wedge y\wedge z$, $b_1$ as v, and $b_2$ as w. Our two factors then become:

$$a_1 \equiv \underset{2}{a}\vee\left(b_1\wedge\underset{3}{b}\right) \equiv \underset{2}{a}\vee(v\wedge x\wedge y\wedge z)$$

$$a_2 \equiv \underset{2}{a}\vee\left(b_2\wedge\underset{3}{b}\right) \equiv \underset{2}{a}\vee(w\wedge x\wedge y\wedge z)$$

In determining $a_1$ the Common Factor Theorem permits us to write for arbitrary 1-elements x and y:

$$(x\wedge y)\vee(v\wedge x\wedge y\wedge z) = (x\wedge v\wedge x\wedge y\wedge z)\vee y - (y\wedge v\wedge x\wedge y\wedge z)\vee x$$

Applying this to each of the terms of $\underset{2}{a}$ gives:

$$a_1 \equiv -(w\wedge v\wedge x\wedge y\wedge z)\vee v + (w\wedge v\wedge x\wedge y\wedge z)\vee z \equiv v - z$$

Similarly for $a_2$ we have the same formula, except that w replaces v.

$$(x\wedge y)\vee(w\wedge x\wedge y\wedge z) = (x\wedge w\wedge x\wedge y\wedge z)\vee y - (y\wedge w\wedge x\wedge y\wedge z)\vee x$$

Again applying this to each of the terms of $\underset{2}{a}$ gives:

$$a_2 \equiv (v\wedge w\wedge x\wedge y\wedge z)\vee w + (v\wedge w\wedge x\wedge y\wedge z)\vee x + (v\wedge w\wedge x\wedge y\wedge z)\vee y + (v\wedge w\wedge x\wedge y\wedge z)\vee z \equiv w + x + y + z$$

Hence the factorization is congruent to:

$$a_1\wedge a_2 \equiv (v - z)\wedge(w + x + y + z)$$

Verification by expansion of this product shows that this is indeed a factorization of the original
element.
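The verification is easy to automate: representing 2-elements of the 5-space by antisymmetric component matrices, both sides can be compared entry by entry. A minimal sketch (the wedge helper is ours):

```python
import numpy as np

def wedge(p, q):
    """2-element p ^ q as an antisymmetric matrix of e_i ^ e_j components."""
    return np.outer(p, q) - np.outer(q, p)

v, w, x, y, z = np.eye(5)

a = (wedge(v, w) + wedge(v, x) + wedge(v, y) + wedge(v, z)
     + wedge(w, z) + wedge(x, z) + wedge(y, z))

print(np.allclose(a, wedge(v - z, w + x + y + z)))   # True
```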

Factorizing elements expressed in terms of basis elements


We now take the special case where the element to be factorized is expressed in terms of basis
elements. This will enable us to develop formulae from which we can write down the factorization
of an element (almost) by inspection.

The development is most clearly apparent from a specific example, but one general enough to convey the concept. Consider a 3-element in a 5-space, where we suppose the coefficients to be such as to ensure the simplicity of the element.

$$\underset{3}{a} = a_1\,e_1\wedge e_2\wedge e_3 + a_2\,e_1\wedge e_2\wedge e_4 + a_3\,e_1\wedge e_2\wedge e_5 + a_4\,e_1\wedge e_3\wedge e_4 + a_5\,e_1\wedge e_3\wedge e_5 + a_6\,e_1\wedge e_4\wedge e_5 + a_7\,e_2\wedge e_3\wedge e_4 + a_8\,e_2\wedge e_3\wedge e_5 + a_9\,e_2\wedge e_4\wedge e_5 + a_{10}\,e_3\wedge e_4\wedge e_5$$

Choose $b_1 = e_1$, $b_2 = e_2$, $b_3 = e_3$, $\underset{2}{b} = e_4\wedge e_5$; then we can write each of the factors $b_i\wedge\underset{2}{b}$ as a cobasis element (written here with an underline).

$$b_1\wedge\underset{2}{b} = e_1\wedge e_4\wedge e_5 = \underline{e_2\wedge e_3}$$

$$b_2\wedge\underset{2}{b} = e_2\wedge e_4\wedge e_5 = -\underline{e_1\wedge e_3}$$

$$b_3\wedge\underset{2}{b} = e_3\wedge e_4\wedge e_5 = \underline{e_1\wedge e_2}$$
2

Consider the cobasis element $\underline{e_2\wedge e_3}$, and a typical term of $\underset{3}{a}\vee\underline{e_2\wedge e_3}$, which we write as $(a\,e_i\wedge e_j\wedge e_k)\vee\underline{e_2\wedge e_3}$. The Common Factor Theorem tells us that this product is zero if $e_i\wedge e_j\wedge e_k$ does not contain $e_2\wedge e_3$. We can thus simplify the product $\underset{3}{a}\vee\underline{e_2\wedge e_3}$ by dropping out the terms of $\underset{3}{a}$ which do not contain $e_2\wedge e_3$. Thus:

$$\underset{3}{a}\vee\underline{e_2\wedge e_3} = (a_1\,e_1\wedge e_2\wedge e_3 + a_7\,e_2\wedge e_3\wedge e_4 + a_8\,e_2\wedge e_3\wedge e_5)\vee\underline{e_2\wedge e_3}$$

Furthermore, the Common Factor Theorem applied to a typical term $(e_2\wedge e_3\wedge e_i)\vee\underline{e_2\wedge e_3}$ of the expansion, in which both $e_2\wedge e_3$ and $\underline{e_2\wedge e_3}$ occur, yields a 1-element congruent to the remaining basis 1-element $e_i$ in the product. This effectively cancels out (up to congruence) the product $e_2\wedge e_3$ from the original term.

$$(e_2\wedge e_3\wedge e_i)\vee\underline{e_2\wedge e_3} = \left((e_2\wedge e_3)\wedge\underline{(e_2\wedge e_3)}\right)\vee e_i \equiv e_i$$

Thus we can further reduce $\underset{3}{a}\vee\underline{e_2\wedge e_3}$ to give:


$$a_1 \equiv \underset{3}{a}\vee\underline{e_2\wedge e_3} \equiv a_1\,e_1 + a_7\,e_4 + a_8\,e_5$$

Similarly we can determine the other factors:

$$a_2 \equiv \underset{3}{a}\vee\left(-\underline{e_1\wedge e_3}\right) \equiv a_1\,e_2 - a_4\,e_4 - a_5\,e_5$$

$$a_3 \equiv \underset{3}{a}\vee\underline{e_1\wedge e_2} \equiv a_1\,e_3 + a_2\,e_4 + a_3\,e_5$$

It is clear from inspecting the product of the first terms in each 1-element that the product requires a scalar divisor of $a_1^{2}$. The final result is then:

$$\underset{3}{a} = \frac{1}{a_1^{2}}\,a_1\wedge a_2\wedge a_3 = \frac{1}{a_1^{2}}\,(a_1 e_1 + a_7 e_4 + a_8 e_5)\wedge(a_1 e_2 - a_4 e_4 - a_5 e_5)\wedge(a_1 e_3 + a_2 e_4 + a_3 e_5)$$

Ï Verification and derivation of conditions for simplicity

We verify the factorization by multiplying out the factors and comparing the result with the
original expression. When we do this we obtain a result which still requires some conditions to be
met: those ensuring the original element is simple.

First we declare a 5-dimensional basis and compose $\underset{3}{a}$ (this automatically declares the coefficients $a_i$ to be scalar).

¯A; ¯!5; ComposeMElement[3, a]

a1 e1 ∧ e2 ∧ e3 + a2 e1 ∧ e2 ∧ e4 + a3 e1 ∧ e2 ∧ e5 +
a4 e1 ∧ e3 ∧ e4 + a5 e1 ∧ e3 ∧ e5 + a6 e1 ∧ e4 ∧ e5 +
a7 e2 ∧ e3 ∧ e4 + a8 e2 ∧ e3 ∧ e5 + a9 e2 ∧ e4 ∧ e5 + a10 e3 ∧ e4 ∧ e5

To effect the multiplication of the factored form we use GrassmannExpandAndSimplify (in its alias form).


A = ¯%[(1/a1^2) (a1 e1 + a7 e4 + a8 e5) ∧ (a1 e2 - a4 e4 - a5 e5) ∧ (a1 e3 + a2 e4 + a3 e5)]

a1 e1 ∧ e2 ∧ e3 + a2 e1 ∧ e2 ∧ e4 + a3 e1 ∧ e2 ∧ e5 + a4 e1 ∧ e3 ∧ e4 +
a5 e1 ∧ e3 ∧ e5 + ((-a3 a4 + a2 a5)/a1) e1 ∧ e4 ∧ e5 + a7 e2 ∧ e3 ∧ e4 +
a8 e2 ∧ e3 ∧ e5 + ((-a3 a7 + a2 a8)/a1) e2 ∧ e4 ∧ e5 + ((-a5 a7 + a4 a8)/a1) e3 ∧ e4 ∧ e5

For this expression to be congruent to the original expression we clearly need to apply the simplicity conditions for a general 3-element in a 5-space. Once this is done we retrieve the original 3-element $\underset{3}{a}$ with which we began. The simplicity conditions can be expressed by the following rules, which we apply to the expression A. We will discuss them in the next section.

A /. {(-a3 a4 + a2 a5)/a1 → a6, (-a3 a7 + a2 a8)/a1 → a9, (-a5 a7 + a4 a8)/a1 → a10}

a1 e1 ∧ e2 ∧ e3 + a2 e1 ∧ e2 ∧ e4 + a3 e1 ∧ e2 ∧ e5 +
a4 e1 ∧ e3 ∧ e4 + a5 e1 ∧ e3 ∧ e5 + a6 e1 ∧ e4 ∧ e5 +
a7 e2 ∧ e3 ∧ e4 + a8 e2 ∧ e3 ∧ e5 + a9 e2 ∧ e4 ∧ e5 + a10 e3 ∧ e4 ∧ e5

In sum: A 3-element in a 5-space of the form:

a1 e1 Ô e2 Ô e3 + a2 e1 Ô e2 Ô e4 + a3 e1 Ô e2 Ô e5 +
a4 e1 Ô e3 Ô e4 + a5 e1 Ô e3 Ô e5 + a6 e1 Ô e4 Ô e5 +
a7 e2 Ô e3 Ô e4 + a8 e2 Ô e3 Ô e5 + a9 e2 Ô e4 Ô e5 + a10 e3 Ô e4 Ô e5

whose coefficients are constrained by the relations:

a2 a5 - a3 a4 - a1 a6 ã 0
a2 a8 - a3 a7 - a1 a9 ã 0
a4 a8 - a5 a7 - a1 a10 ã 0

is simple, and has a factorization of:

1
Ha1 e1 + a7 e4 + a8 e5 L Ô H a1 e2 - a4 e4 - a5 e5 L Ô Ha1 e3 + a2 e4 + a3 e5 L
a1 2
This factorization is of course not unique.
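The summary above can be spot-checked numerically. The Python sketch below is an illustrative stand-in for the Mathematica computations (a hand-rolled wedge product on dictionaries mapping sorted basis-index tuples to coefficients, not the GrassmannAlgebra package): it picks arbitrary values for a1, …, a5, a7, a8, forces a6, a9, a10 to satisfy the three conditions, and confirms that the stated factorization reproduces the 3-element.

```python
from fractions import Fraction

def sort_sign(idx):
    # Sign of the permutation that sorts idx (0 if an index repeats).
    idx = list(idx)
    if len(set(idx)) != len(idx):
        return 0, ()
    sign = 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    # Exterior product of elements given as {sorted index tuple: coefficient}.
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            s, key = sort_sign(ia + ib)
            if s:
                out[key] = out.get(key, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v}

# Free coefficient choices, then a6, a9, a10 forced by the simplicity conditions.
a = {i: Fraction(v) for i, v in {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 7: 7, 8: 8}.items()}
a[6] = (a[2] * a[5] - a[3] * a[4]) / a[1]
a[9] = (a[2] * a[8] - a[3] * a[7]) / a[1]
a[10] = (a[4] * a[8] - a[5] * a[7]) / a[1]

# The 3-element a1 e1^e2^e3 + a2 e1^e2^e4 + ... + a10 e3^e4^e5.
basis3 = [(1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 3, 4), (1, 3, 5),
          (1, 4, 5), (2, 3, 4), (2, 3, 5), (2, 4, 5), (3, 4, 5)]
elem = {t: a[i] for i, t in enumerate(basis3, start=1)}

# The stated factorization, divided by a1^2.
f1 = {(1,): a[1], (4,): a[7], (5,): a[8]}    # a1 e1 + a7 e4 + a8 e5
f2 = {(2,): a[1], (4,): -a[4], (5,): -a[5]}  # a1 e2 - a4 e4 - a5 e5
f3 = {(3,): a[1], (4,): a[2], (5,): a[3]}    # a1 e3 + a2 e4 + a3 e5
scaled = {k: v / a[1] ** 2 for k, v in wedge(wedge(f1, f2), f3).items()}
print(scaled == elem)  # True
```

Any other choice of the free coefficients (with a1 nonzero) passes the same check, since a6, a9, a10 are computed from the constraints rather than chosen freely.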


The factorization algorithm


In this section we take the development of the previous sections and extract an algorithm for
writing down a factorization by inspection.

Suppose we have a simple m-element expressed as a sum of terms, each of which is of the form
a x Ô y Ô … Ô z, where a denotes a scalar (which may be unity) and x, y, …, z are symbols
denoting 1-element factors.

Ï To obtain a 1-element factor of the m-element

• Select an (m|1)-element belonging to at least one of the terms, for example y Ô ! Ô z.


• Drop any terms not containing the selected (m|1)-element.
• Factor this (m|1)-element from the resulting expression and eliminate it.
• The 1-element remaining is a 1-element factor of the m-element.

Ï To factorize the m-element

• Select an m-element belonging to at least one of the terms, for example x Ô y Ô ! Ô z.


• Create m different (m|1)-elements by dropping a different 1-element factor each time. The sign
of the result is not important, since a scalar factor will be determined in the last step.
• Obtain m independent 1-element factors corresponding to each of these (m|1)-elements.
• The original m-element is congruent to the exterior product of these 1-element factors.
• Compare this product to the original m-element to obtain the correct scalar factor and hence the
final factorization.
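The first procedure above translates directly into code. The Python sketch below (a minimal illustrative stand-in, not the book's GrassmannAlgebra implementation) represents an m-element as a dictionary mapping sorted basis-index tuples to coefficients, and extracts the 1-element factor associated with a chosen (m|1)-element.

```python
def sort_sign(idx):
    # Sign of the permutation that sorts idx (0 if an index repeats).
    idx = list(idx)
    if len(set(idx)) != len(idx):
        return 0, ()
    sign = 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def one_factor(elem, sub):
    # 1-element factor obtained from the (m-1)-element e_sub: drop the terms
    # not containing e_sub, factor e_sub out (tracking signs), and eliminate it.
    out = {}
    for idx, c in elem.items():
        rest = set(idx) - set(sub)
        if set(sub) <= set(idx) and len(rest) == 1:
            (i,) = rest
            s, _ = sort_sign(tuple(sub) + (i,))  # e_sub ^ e_i = s * e_idx
            out[i] = out.get(i, 0) + s * c
    return out

# Example: the 2-element a1 e1^e2 + a2 e1^e3 + a3 e1^e4 + a4 e2^e3 + a5 e2^e4 + a6 e3^e4
# with numeric coefficients a1..a6 = 1..6.
elem = {(1, 2): 1, (1, 3): 2, (1, 4): 3, (2, 3): 4, (2, 4): 5, (3, 4): 6}
print(one_factor(elem, (1,)))  # {2: 1, 3: 2, 4: 3}   i.e.  a1 e2 + a2 e3 + a3 e4
print(one_factor(elem, (2,)))  # {1: -1, 3: 4, 4: 5}  i.e. -a1 e1 + a4 e3 + a5 e4
```

With 1, …, 6 standing in for a1, …, a6, these are exactly the two factors obtained in Example 1 below.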

Ï Example 1: Factorizing a 2-element in a 4-space

Suppose we have a 2-element in a 4-space, and we wish to apply this algorithm to obtain a
factorization. We have already seen that such an element is in general not simple. We may
however use the preceding algorithm to obtain the simplicity conditions on the coefficients.

$$\underset{2}{a} = a_1 e_1\wedge e_2 + a_2 e_1\wedge e_3 + a_3 e_1\wedge e_4 + a_4 e_2\wedge e_3 + a_5 e_2\wedge e_4 + a_6 e_3\wedge e_4$$

• Select a 2-element belonging to at least one of the terms, say e1 Ô e2 .


• Drop e2 to create e1 . Then drop e1 to create e2 .

• Select e1 .
• Drop the terms a4 e2 Ô e3 + a5 e2 Ô e4 + a6 e3 Ô e4 since they do not contain e1 .
• Factor e1 from a1 e1 Ô e2 + a2 e1 Ô e3 + a3 e1 Ô e4 and eliminate it to give factor a1 .

a1 ã a1 e2 + a2 e3 + a3 e4

• Select e2 .
• Drop the terms a2 e1 Ô e3 + a3 e1 Ô e4 + a6 e3 Ô e4 since they do not contain e2 .
• Factor e2 from a1 e1 Ô e2 + a4 e2 Ô e3 + a5 e2 Ô e4 and eliminate it to give factor a2 .


a2 ã - a1 e1 + a4 e3 + a5 e4

• The exterior product of these 1-element factors is

$$a_1\wedge a_2 \;=\; (a_1 e_2 + a_2 e_3 + a_3 e_4)\wedge(-a_1 e_1 + a_4 e_3 + a_5 e_4)$$
$$=\; a_1\Big(a_1 e_1\wedge e_2 + a_2 e_1\wedge e_3 + a_3 e_1\wedge e_4 + a_4 e_2\wedge e_3 + a_5 e_2\wedge e_4 + \frac{a_2 a_5 - a_3 a_4}{a_1}\, e_3\wedge e_4\Big)$$

• Comparing this product to the original 2-element a gives the final factorization as

$$\underset{2}{a} \;=\; \frac{1}{a_1}\,(a_1 e_2 + a_2 e_3 + a_3 e_4)\wedge(-a_1 e_1 + a_4 e_3 + a_5 e_4)$$
provided that the simplicity conditions on the coefficients are satisfied:

a1 a6 + a3 a4 - a2 a5 ã 0

In sum: A 2-element in a 4-space may be factorized if and only if a condition on the coefficients is
satisfied.

$$a_1 e_1\wedge e_2 + a_2 e_1\wedge e_3 + a_3 e_1\wedge e_4 + a_4 e_2\wedge e_3 + a_5 e_2\wedge e_4 + a_6 e_3\wedge e_4$$
$$=\; \frac{1}{a_1}\,(a_1 e_1 - a_4 e_3 - a_5 e_4)\wedge(a_1 e_2 + a_2 e_3 + a_3 e_4) \;\Longleftrightarrow\; a_3 a_4 - a_2 a_5 + a_1 a_6 = 0 \qquad (3.51)$$

Again, this factorization is not unique.
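Formula 3.51 can also be checked numerically. In the Python sketch below (an illustrative hand-rolled wedge product, not the GrassmannAlgebra package), a1, …, a5 are chosen arbitrarily, a6 is set from the simplicity condition, and the two sides are compared.

```python
from fractions import Fraction

def sort_sign(idx):
    # Sign of the permutation that sorts idx (0 if an index repeats).
    idx = list(idx)
    if len(set(idx)) != len(idx):
        return 0, ()
    sign = 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    # Exterior product of elements given as {sorted index tuple: coefficient}.
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            s, key = sort_sign(ia + ib)
            if s:
                out[key] = out.get(key, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v}

a1, a2, a3, a4, a5 = map(Fraction, (2, 3, 5, 7, 11))
a6 = (a2 * a5 - a3 * a4) / a1   # simplicity: a3 a4 - a2 a5 + a1 a6 = 0

elem = {(1, 2): a1, (1, 3): a2, (1, 4): a3, (2, 3): a4, (2, 4): a5, (3, 4): a6}
f1 = {(1,): a1, (3,): -a4, (4,): -a5}   # a1 e1 - a4 e3 - a5 e4
f2 = {(2,): a1, (3,): a2, (4,): a3}     # a1 e2 + a2 e3 + a3 e4
prod = {k: v / a1 for k, v in wedge(f1, f2).items()}
print(prod == elem)  # True
```

Exact rational arithmetic (fractions.Fraction) is used so that the division by a1 introduces no floating-point noise into the comparison.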

Ï Example 2: Factorizing a 3-element in a 5-space

This time we have a numerical example of a 3-element in 5-space. We wish to determine if the
element is simple, and if so, to obtain a factorization of it. To achieve this, we first assume that it is
simple, and apply the factorization algorithm. We will verify its simplicity (or its non-simplicity)
by comparing the results of this process to the original element.

$$\underset{3}{a} = -3\, e_1\wedge e_2\wedge e_3 - 4\, e_1\wedge e_2\wedge e_4 + 12\, e_1\wedge e_2\wedge e_5 + 3\, e_1\wedge e_3\wedge e_4 - 3\, e_1\wedge e_3\wedge e_5 + 8\, e_1\wedge e_4\wedge e_5 + 6\, e_2\wedge e_3\wedge e_4 - 18\, e_2\wedge e_3\wedge e_5 + 12\, e_3\wedge e_4\wedge e_5$$

• Select a 3-element belonging to at least one of the terms, say e1 Ô e2 Ô e3 .


• Drop e1 to create e2 Ô e3 . Drop e2 to create e1 Ô e3 . Drop e3 to create e1 Ô e2 .

• Select e2 Ô e3 .
• Drop the terms not containing it, factor it from the remainder, and eliminate it to give a1 .

a1 ª -3 e1 + 6 e4 - 18 e5 ª - e1 + 2 e4 - 6 e5

• Select e1 Ô e3 .
• Drop the terms not containing it, factor it from the remainder, and eliminate it to give a2 .

a2 ª 3 e2 + 3 e4 - 3 e5 ª e2 + e4 - e5

• Select e1 Ô e2 .
• Drop the terms not containing it, factor it from the remainder, and eliminate it to give a3 .

a3 ª -3 e3 - 4 e4 + 12 e5

• The exterior product of these 1-element factors is:

a1 Ô a2 Ô a3 ã H- e1 + 2 e4 - 6 e5 L Ô He2 + e4 - e5 L Ô H-3 e3 - 4 e4 + 12 e5 L

Multiplying this out (here using GrassmannExpandAndSimplify in its alias form) gives:

¯A; ¯!5 ; ¯%@H- e1 + 2 e4 - 6 e5 L Ô He2 + e4 - e5 L Ô H-3 e3 - 4 e4 + 12 e5 LD


3 e1 Ô e2 Ô e3 + 4 e1 Ô e2 Ô e4 - 12 e1 Ô e2 Ô e5 - 3 e1 Ô e3 Ô e4 + 3 e1 Ô e3 Ô e5 -
8 e1 Ô e4 Ô e5 - 6 e2 Ô e3 Ô e4 + 18 e2 Ô e3 Ô e5 - 12 e3 Ô e4 Ô e5

• Comparing this product to the original 3-element a verifies a final factorization as:

$$\underset{3}{a} \;=\; (e_1 - 2 e_4 + 6 e_5)\wedge(e_2 + e_4 - e_5)\wedge(-3 e_3 - 4 e_4 + 12 e_5)$$

This factorization is, of course, not unique. For example, a slightly simpler factorization could be
obtained by subtracting twice the first factor from the third factor to obtain:

$$\underset{3}{a} \;=\; (e_1 - 2 e_4 + 6 e_5)\wedge(e_2 + e_4 - e_5)\wedge(-3 e_3 - 2 e_1)$$
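Both factorizations in this example can be verified independently of GrassmannAlgebra. The Python sketch below (a small stand-alone wedge product on coefficient dictionaries, offered as an illustration) multiplies out each set of factors and compares with the original 3-element.

```python
def sort_sign(idx):
    # Sign of the permutation that sorts idx (0 if an index repeats).
    idx = list(idx)
    if len(set(idx)) != len(idx):
        return 0, ()
    sign = 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    # Exterior product of elements given as {sorted index tuple: coefficient}.
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            s, key = sort_sign(ia + ib)
            if s:
                out[key] = out.get(key, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v}

# The original numerical 3-element in 5-space.
orig = {(1, 2, 3): -3, (1, 2, 4): -4, (1, 2, 5): 12, (1, 3, 4): 3, (1, 3, 5): -3,
        (1, 4, 5): 8, (2, 3, 4): 6, (2, 3, 5): -18, (3, 4, 5): 12}

f1 = {(1,): 1, (4,): -2, (5,): 6}    # e1 - 2 e4 + 6 e5
f2 = {(2,): 1, (4,): 1, (5,): -1}    # e2 + e4 - e5
f3 = {(3,): -3, (4,): -4, (5,): 12}  # -3 e3 - 4 e4 + 12 e5
print(wedge(wedge(f1, f2), f3) == orig)  # True

# The simpler alternative: third factor minus twice the first.
f3b = {(1,): -2, (3,): -3}           # -3 e3 - 2 e1
print(wedge(wedge(f1, f2), f3b) == orig)  # True
```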

Factorization of (n|1)-elements
The foregoing method may be used to prove constructively that any (n|1)-element is simple by
obtaining a factorization and subsequently verifying its validity. Let the general (n|1)-element be:

$$\underset{n-1}{a} = \sum_{i=1}^{n} a_i\,\overline{e}_i, \qquad a_1 \neq 0$$

where the $\overline{e}_i$ are the cobasis elements of the $e_i$.

Choose b ã e1 and bj ã ej and apply the Common Factor Theorem to obtain (for j ≠ 1):


$$\underset{n-1}{a}\vee(e_1\wedge e_j) \;=\; \Big(\underset{n-1}{a}\wedge e_j\Big)\vee e_1 \;-\; \Big(\underset{n-1}{a}\wedge e_1\Big)\vee e_j$$
$$=\; \Big(\sum_{i=1}^{n} a_i\,\overline{e}_i\wedge e_j\Big)\vee e_1 \;-\; \Big(\sum_{i=1}^{n} a_i\,\overline{e}_i\wedge e_1\Big)\vee e_j$$
$$=\; (-1)^{n-1}\Big(a_j\,\underset{n}{e}\vee e_1 - a_1\,\underset{n}{e}\vee e_j\Big) \;=\; (-1)^{n-1}\,\underset{n}{e}\vee(a_j e_1 - a_1 e_j)$$

where $\underset{n}{e}$ denotes the unit n-element $e_1\wedge e_2\wedge\cdots\wedge e_n$.

Factors of $\underset{n-1}{a}$ are therefore of the form $a_j e_1 - a_1 e_j$, $j\neq 1$, so that $\underset{n-1}{a}$ is congruent to:

$$\underset{n-1}{a} \;\cong\; (a_2 e_1 - a_1 e_2)\wedge(a_3 e_1 - a_1 e_3)\wedge\cdots\wedge(a_n e_1 - a_1 e_n)$$

The result may be summarized as follows:

$$a_1\overline{e}_1 + a_2\overline{e}_2 + \cdots + a_n\overline{e}_n \;\cong\; (a_2 e_1 - a_1 e_2)\wedge(a_3 e_1 - a_1 e_3)\wedge\cdots\wedge(a_n e_1 - a_1 e_n), \quad a_1\neq 0 \qquad (3.52)$$

$$\sum_{i=1}^{n} a_i\,\overline{e}_i \;\cong\; \bigwedge_{i=2}^{n}\,(a_i e_1 - a_1 e_i), \quad a_1\neq 0 \qquad (3.53)$$

By putting equation 3.53 in its equality form rather than its congruence form, we have

$$a_1^{\,n-2}\sum_{i=1}^{n} a_i\,\overline{e}_i \;=\; (-1)^{n-1}\bigwedge_{i=2}^{n}\,(a_i e_1 - a_1 e_i), \quad a_1\neq 0 \qquad (3.54)$$
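As an independent cross-check of formula 3.54 outside GrassmannAlgebra, the Python sketch below verifies the n = 4 case numerically. It assumes the cobasis sign convention ēi = (-1)^(i-1) e1 ∧ ⋯ (omitting ei) ⋯ ∧ en, which makes ei ∧ ēi the unit n-element; the wedge helper is a hand-rolled illustration.

```python
from fractions import Fraction

def sort_sign(idx):
    # Sign of the permutation that sorts idx (0 if an index repeats).
    idx = list(idx)
    if len(set(idx)) != len(idx):
        return 0, ()
    sign = 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    # Exterior product of elements given as {sorted index tuple: coefficient}.
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            s, key = sort_sign(ia + ib)
            if s:
                out[key] = out.get(key, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v}

n = 4
a = {i: Fraction(v) for i, v in enumerate((2, 3, 5, 7), start=1)}

def cobasis(i):
    # ebar_i = (-1)^(i-1) e_{1..n omitting i}, so that e_i ^ ebar_i = e_1^...^e_n.
    rest = tuple(j for j in range(1, n + 1) if j != i)
    return rest, Fraction((-1) ** (i - 1))

# Left side: a1^(n-2) (a1 ebar_1 + a2 ebar_2 + ... + an ebar_n)
lhs = {}
for i in range(1, n + 1):
    key, sgn = cobasis(i)
    lhs[key] = a[1] ** (n - 2) * a[i] * sgn

# Right side: (-1)^(n-1) (a2 e1 - a1 e2) ^ (a3 e1 - a1 e3) ^ (a4 e1 - a1 e4)
rhs = {(): Fraction((-1) ** (n - 1))}
for i in range(2, n + 1):
    rhs = wedge(rhs, {(1,): a[i], (i,): -a[1]})
print(lhs == rhs)  # True
```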

Ï Example: Verifying the formula

The left and right sides of the formula 3.54 are easily coded in GrassmannAlgebra as functions of
the dimension n of the space.

L@n_D := H¯!n ; a1 ^ Hn - 2L Sum@ai CobasisElement@ei D, 8i, 1, n<DL

R@n_D := H-1Ln-1 Wedge üü Table@ai e1 - a1 ei , 8i, 2, n<D

This enables us to tabulate the formula for different dimensions. For example


Table@L@nD ã R@nD, 8n, 3, 5<D


9a1 Ha3 e1 Ô e2 - a2 e1 Ô e3 + a1 e2 Ô e3 L ã Ha2 e1 - a1 e2 L Ô Ha3 e1 - a1 e3 L,
a21 H-a4 e1 Ô e2 Ô e3 + a3 e1 Ô e2 Ô e4 - a2 e1 Ô e3 Ô e4 + a1 e2 Ô e3 Ô e4 L ã
-HHa2 e1 - a1 e2 L Ô Ha3 e1 - a1 e3 L Ô Ha4 e1 - a1 e4 LL,
3
a1 Ha5 e1 Ô e2 Ô e3 Ô e4 - a4 e1 Ô e2 Ô e3 Ô e5 + a3 e1 Ô e2 Ô e4 Ô e5 -
a2 e1 Ô e3 Ô e4 Ô e5 + a1 e2 Ô e3 Ô e4 Ô e5 L ã
Ha2 e1 - a1 e2 L Ô Ha3 e1 - a1 e3 L Ô Ha4 e1 - a1 e4 L Ô Ha5 e1 - a1 e5 L=

or to verify it for different dimensions by expanding and simplifying both sides.

Table@¯%@L@nDD ã ¯%@R@nDD, 8n, 3, 8<D


8True, True, True, True, True, True<

Factorizing simple m-elements


For m-elements known to be simple we can use the GrassmannAlgebra function
ExteriorFactorize to obtain a factorization. Note that the result obtained from
ExteriorFactorize will be only one factorization out of the generally infinitely many
possibilities. However, every factorization may be obtained from any other by adding scalar
multiples of each factor to the other factors.

We have shown above that every (n|1)-element in an n-space is simple. Applying


ExteriorFactorize to a series of general (n|1)-elements in spaces of 4, 5, and 6 dimensions
corroborates this result (although the form it generates is slightly different).

In each case we first declare a basis of the requisite dimension by entering ¯!n ;, and then create a
general (n|1)-element X with scalars ai using the GrassmannAlgebra function
ComposeMElement.

Ï Factorizing a 3-element in a 4-space

¯A; ¯!4 ; X = ComposeMElement@3, aD


a1 e1 Ô e2 Ô e3 + a2 e1 Ô e2 Ô e4 + a3 e1 Ô e3 Ô e4 + a4 e2 Ô e3 Ô e4

ExteriorFactorize@XD
a4 e4 a3 e4 a2 e4
a1 e1 + Ô e2 - Ô e3 +
a1 a1 a1

Ï Factorizing a 4-element in a 5-space

¯!5 ; X = ComposeMElement@4, aD
a1 e1 Ô e2 Ô e3 Ô e4 + a2 e1 Ô e2 Ô e3 Ô e5 +
a3 e1 Ô e2 Ô e4 Ô e5 + a4 e1 Ô e3 Ô e4 Ô e5 + a5 e2 Ô e3 Ô e4 Ô e5


ExteriorFactorize@XD
a5 e5 a4 e5 a3 e5 a2 e5
a1 e1 - Ô e2 + Ô e3 - Ô e4 +
a1 a1 a1 a1

Ï Factorizing a 5-element in a 6-space

¯!6 ; X = ComposeMElement@5, aD
a1 e1 Ô e2 Ô e3 Ô e4 Ô e5 + a2 e1 Ô e2 Ô e3 Ô e4 Ô e6 + a3 e1 Ô e2 Ô e3 Ô e5 Ô e6 +
a4 e1 Ô e2 Ô e4 Ô e5 Ô e6 + a5 e1 Ô e3 Ô e4 Ô e5 Ô e6 + a6 e2 Ô e3 Ô e4 Ô e5 Ô e6

ExteriorFactorize@XD
a6 e6 a5 e6 a4 e6 a3 e6 a2 e6
a1 e1 + Ô e2 - Ô e3 + Ô e4 - Ô e5 +
a1 a1 a1 a1 a1

Ï The regressive product of two 3-elements in a 4-space

Let X and Y be 3-elements in a 4-space.

¯A; ¯!4 ;

X = ComposeMElement@3, xD
x1 e1 Ô e2 Ô e3 + x2 e1 Ô e2 Ô e4 + x3 e1 Ô e3 Ô e4 + x4 e2 Ô e3 Ô e4

Y = ComposeMElement@3, yD
y1 e1 Ô e2 Ô e3 + y2 e1 Ô e2 Ô e4 + y3 e1 Ô e3 Ô e4 + y4 e2 Ô e3 Ô e4

We can obtain the 2-element common factor of these two 3-elements by applying
ToCommonFactor to their regressive product.

Z = ToCommonFactor@X Ó YD
¯c HH-x2 y1 + x1 y2 L e1 Ô e2 + H-x3 y1 + x1 y3 L e1 Ô e3 +
H-x3 y2 + x2 y3 L e1 Ô e4 + H-x4 y1 + x1 y4 L e2 Ô e3 +
H-x4 y2 + x2 y4 L e2 Ô e4 + H-x4 y3 + x3 y4 L e3 Ô e4 L

Applying ExteriorFactorize to this common factor shows that it is simple and can be
factorized.

ExteriorFactorize@ZD
e3 Hx4 y1 - x1 y4 L e4 Hx4 y2 - x2 y4 L
¯c H-x2 y1 + x1 y2 L e1 + + Ô
-x2 y1 + x1 y2 -x2 y1 + x1 y2
e3 Hx3 y1 - x1 y3 L e4 Hx3 y2 - x2 y3 L
e2 + +
x2 y1 - x1 y2 x2 y1 - x1 y2


Because this is a 2-element in a 4-space we can of course also prove its simplicity by expanding
and simplifying its exterior square to show that it is zero.

Expand@¯%@Z Ô ZDD
0
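The same exterior-square test is easy to reproduce outside Mathematica: a 2-element is simple if and only if its exterior square vanishes. The Python sketch below (an illustrative stand-alone wedge product, not the GrassmannAlgebra package) applies the test to a simple and to a non-simple 2-element.

```python
def sort_sign(idx):
    # Sign of the permutation that sorts idx (0 if an index repeats).
    idx = list(idx)
    if len(set(idx)) != len(idx):
        return 0, ()
    sign = 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    # Exterior product of elements given as {sorted index tuple: coefficient}.
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            s, key = sort_sign(ia + ib)
            if s:
                out[key] = out.get(key, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v}

def is_simple_2(elem):
    # A 2-element is simple iff its exterior square is zero.
    return wedge(elem, elem) == {}

z = wedge({(1,): 1, (2,): 2}, {(3,): 1, (4,): -1})  # (e1 + 2 e2)^(e3 - e4): simple by construction
print(is_simple_2(z))                               # True
print(is_simple_2({(1, 2): 1, (3, 4): 1}))          # False: e1^e2 + e3^e4 is not simple
```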

Factorizing contingently simple m-elements


If ExteriorFactorize is applied to an element which is not simple, the element will simply be
returned. For example, in a 4-space, the 2-element xÔy + uÔv is not simple.

¯!4 ; ExteriorFactorize@x Ô y + u Ô vD
uÔv + xÔy

If ExteriorFactorize is applied to an element which may be simple conditional on the values


taken by some of its scalar symbol coefficients, the conditional result will be returned.

Ï Contingently factorizing a 2-element in a 4-space

¯!4 ; X = ComposeMElement@2, aD
a1 e1 Ô e2 + a2 e1 Ô e3 + a3 e1 Ô e4 + a4 e2 Ô e3 + a5 e2 Ô e4 + a6 e3 Ô e4

X1 = ExteriorFactorize@XD

IfB8a3 a4 - a2 a5 + a1 a6 ã 0<,
a4 e3 a5 e4 a2 e3 a3 e4
a1 e1 - - Ô e2 + + F
a1 a1 a1 a1

This Mathematica If function has syntax If[C,F], where C is a list of constraints on the scalar
symbols of the expression, and F is the factorization if the constraints are satisfied. Hence the
above may be read: if the simplicity condition is satisfied, then the factorization is as given, else
the element is not factorizable.

Instead of a list of constraints, we can also recast the list of constraints in predicate form by
applying And to the list. (In this example, where there is only one constraint, the effect is simply to
remove the braces from the constraint.)

X1 = X1 ê. c_List ß And üü c
a4 e3 a5 e4 a2 e3 a3 e4
IfBa3 a4 - a2 a5 + a1 a6 ã 0, a1 e1 - - Ô e2 + + F
a1 a1 a1 a1

If we are able to assert the condition required, then the 2-element is indeed simple and the
factorization is valid. Substituting this condition into the predicate form of the If statement, yields
true for the predicate, hence the factorization is returned.


X1 ê. a6 Ø Ha2 a5 - a3 a4 L ê a1
a4 e3 a5 e4 a2 e3 a3 e4
a1 e1 - - Ô e2 + +
a1 a1 a1 a1

Ï Contingently factorizing a 2-element in a 5-space

¯!5 ; X = ComposeMElement@2, aD
a1 e1 Ô e2 + a2 e1 Ô e3 + a3 e1 Ô e4 + a4 e1 Ô e5 + a5 e2 Ô e3 +
a6 e2 Ô e4 + a7 e2 Ô e5 + a8 e3 Ô e4 + a9 e3 Ô e5 + a10 e4 Ô e5

X1 = ExteriorFactorize@XD

IfB8a3 a5 - a2 a6 + a1 a8 ã 0,
a4 a5 - a2 a7 + a1 a9 ã 0, a4 a6 - a3 a7 + a1 a10 ã 0<,
a5 e3 a6 e4 a7 e5 a2 e3 a3 e4 a4 e5
a1 e1 - - - Ô e2 + + + F
a1 a1 a1 a1 a1 a1

In this case of a 2-element in a 5-space, we have a list of three constraints, which we can turn into a
predicate by applying And to the list.

X1 = X1 ê. c_List ß And üü c

IfBa3 a5 - a2 a6 + a1 a8 ã 0 &&
a4 a5 - a2 a7 + a1 a9 ã 0 && a4 a6 - a3 a7 + a1 a10 ã 0,
a5 e3 a6 e4 a7 e5 a2 e3 a3 e4 a4 e5
a1 e1 - - - Ô e2 + + + F
a1 a1 a1 a1 a1 a1

Now, if we assert the constraints, we get the simple factored expression.

X1 ê. :a8 Ø -HHa3 a5 - a2 a6 L ê a1 L, a9 Ø -HHa4 a5 - a2 a7 L ê a1 L, a10 Ø -HHa4 a6 - a3 a7 L ê a1 L>
a5 e3 a6 e4 a7 e5 a2 e3 a3 e4 a4 e5
a1 e1 - - - Ô e2 + + +
a1 a1 a1 a1 a1 a1

Ï Elements expressed in terms of independent vector symbols

ExteriorFactorize will also contingently factorize m-elements expressed in terms of


(independent) vector symbols. For example, here is a 2-element in a 4-space expressed in terms of
independent vector symbols x, y, z, w, and scalars a and b.

¯!4 ; X = 2 Hx Ô yL + b Hy Ô zL - 5 Hx Ô wL + 7 a Hy Ô wL - 4 Hz Ô wL;


ExteriorFactorize@XD
2y 7ay 4z
IfB88 + 5 b ã 0<, 5 w - Ô x- + F
5 5 5

Determining if an element is simple


To determine if an element is simple, we can use the GrassmannAlgebra function SimpleQ. If an
element is simple, SimpleQ returns True. If it is not simple, it returns False. If it may be simple
conditional on the values taken by some of its scalar coefficients, SimpleQ returns the conditions
required.

We take some of the examples discussed in the previous section on factorization.

Ï For simple m-elements

Ì A 4-element in a 5-space

¯!5 ; X = ComposeMElement@4, aD
a1 e1 Ô e2 Ô e3 Ô e4 + a2 e1 Ô e2 Ô e3 Ô e5 +
a3 e1 Ô e2 Ô e4 Ô e5 + a4 e1 Ô e3 Ô e4 Ô e5 + a5 e2 Ô e3 Ô e4 Ô e5

SimpleQ@XD
True

Ï For non-simple m-elements

If SimpleQ is applied to an element which is not simple, it returns False.

SimpleQ@x Ô y + u Ô vD
False

Ï For contingently simple m-elements

If SimpleQ is applied to an element which may be simple conditional on the values taken by
some of its scalar coefficients, the conditional result will be returned.

Ì A 2-element in a 4-space

¯!4 ; X = ComposeMElement@2, aD
a1 e1 Ô e2 + a2 e1 Ô e3 + a3 e1 Ô e4 + a4 e2 Ô e3 + a5 e2 Ô e4 + a6 e3 Ô e4


SimpleQ@XD
If@8a3 a4 - a2 a5 + a1 a6 ã 0<, TrueD

If we are able to assert the condition required, that a3 a4 - a2 a5 + a1 a6 ã 0, then the 2-element
is indeed simple.
SimpleQBX ê. a6 Ø -HHa3 a4 - a2 a5 L ê a1 LF
True

Ì A 2-element in a 5-space

¯!5 ; X = ComposeMElement@2, aD
a1 e1 Ô e2 + a2 e1 Ô e3 + a3 e1 Ô e4 + a4 e1 Ô e5 + a5 e2 Ô e3 +
a6 e2 Ô e4 + a7 e2 Ô e5 + a8 e3 Ô e4 + a9 e3 Ô e5 + a10 e4 Ô e5

SimpleQ@XD
If@8a3 a5 - a2 a6 + a1 a8 ã 0,
a4 a5 - a2 a7 + a1 a9 ã 0, a4 a6 - a3 a7 + a1 a10 ã 0<, TrueD

In this case of a 2-element in a 5-space, we have three conditions on the coefficients to satisfy
before being able to assert that the element is simple.

Ï For general m-elements

SimpleQ will also check m-elements expressed in terms of (independent) vector symbols.

¯!4 ; X = 2 Hx Ô yL + b Hy Ô zL - 5 Hx Ô wL + 7 a Hy Ô wL - 4 Hz Ô wL;

SimpleQ@XD
If@88 + 5 b ã 0<, TrueD

3.9 Product Formulas for Regressive Products

The Product Formula


The Common Factor Theorem forms the basis for many of the important formulas relating the
various products of the Grassmann algebra. In this section, some of the more basic formulas which
involve just the exterior and regressive products will be developed. These in turn will be shown in
Chapter 6: The Interior Product to have their counterparts in terms of exterior and interior products.
The formulas are usually directed at obtaining alternative expressions (or expansions) for an
element, or for products of elements.

The first formula to be developed (which forms a basis for much of the rest) is an expansion for the
regressive product of an (n|1)-element with the exterior product of two arbitrary elements. We call
this (and its dual) the Product Formula.


Let x be an (n|1)-element, then:

$$\Big(\underset{m}{a}\wedge\underset{k}{b}\Big)\vee\underset{n-1}{x} \;=\; \Big(\underset{m}{a}\vee\underset{n-1}{x}\Big)\wedge\underset{k}{b} \;+\; (-1)^m\,\underset{m}{a}\wedge\Big(\underset{k}{b}\vee\underset{n-1}{x}\Big) \qquad (3.55)$$

To prove this, suppose initially that a and b are simple and can be expressed as a ã a1 Ô ! Ô am
m k m
and b ã b1 Ô ! Ô bk . Applying the Common Factor Theorem gives
k

a Ô b Ó x ã HHa1 Ô ! Ô am L Ô Hb1 Ô ! Ô bk LL Ó x ã
m k n-1 n-1
m
‚ H-1Li-1 Jai Ô x N Ó HHa1 Ô ! Ô ·i Ô ! Ô am L Ô Hb1 Ô ! Ô bk LL
n-1
i=1

k
+ ‚ H-1Lm+j-1 Jbj Ô x N Ó HHa1 Ô ! Ô am L Ô Hb1 Ô ! Ô ·j Ô ! Ô bk LL
n-1
j=1

Since the ai Ô x and bj Ô x are n-elements, we can rearrange the parentheses in the terms of the
n-1 n-1
sums by using the property of n-elements that they behave with regressive products just like scalars
behave with exterior products: A Ó B Ô C Ø KA Ó BO Ô C. So the right hand side becomes
n b c n b c

m
‚ H-1Li-1 JJai Ô x N Ó Ha1 Ô ! Ô ·i Ô ! Ô am LN Ô Hb1 Ô ! Ô bk L
n-1
i=1

k
+ ‚ H-1Lm +j-1 H-1Lm Hk-1L JJbj Ô x N Ó Hb1 Ô ! Ô ·j Ô ! Ô bk LN Ô Ha1 Ô ! Ô am L
n-1
j=1

Reapplying the Common Factor Theorem in reverse enables us to condense the sums back to:

Ka Ó x O Ô Hb1 Ô ! Ô bk L + H-1Lm k b Ó x Ô Ha1 Ô ! Ô am L


m n-1 k n-1

A reordering of the second term gives the final result.

Ka Ó x O Ô Hb1 Ô ! Ô bk L + H-1Lm Ha1 Ô ! Ô am L Ô b Ó x


m n-1 k n-1

This result may be extended in a straightforward manner to the case where a and b are not simple:
m k
since a non-simple element may be expressed as the sum of simple terms, and the formula is valid
for each term, then by addition it can be shown to be valid for the sum.

We can calculate the dual of this formula by applying the GrassmannAlgebra function Dual.
(Note that we use ¯n instead of n in order to let the Dual function know of its special meaning as
the dimension of the space.)


DualB a Ô b Ó x ã Ka Ó x O Ô b + H-1Lm a Ô b Ó x F
m k ¯n-1 m ¯n-1 k m k ¯n-1

a Ó b Ô x ã H-1L-m+¯n a Ó b Ô x + a Ô x Ó b
m k m k m k

which we rearrange slightly to make it more readable as:

$$\Big(\underset{m}{a}\vee\underset{k}{b}\Big)\wedge x \;=\; \Big(\underset{m}{a}\wedge x\Big)\vee\underset{k}{b} \;+\; (-1)^{n-m}\,\underset{m}{a}\vee\Big(\underset{k}{b}\wedge x\Big) \qquad (3.56)$$

Note that here x is a 1-element.
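Formula 3.56 lends itself to a numerical spot-check. The Python sketch below is an illustrative stand-in for GrassmannAlgebra (not the package itself): it computes the regressive product through the Euclidean complement, using the fact that the complement of a regressive product is the exterior product of the complements, and verifies the formula for particular elements of a 3-space.

```python
N = 3  # dimension of the space

def sort_sign(idx):
    # Sign of the permutation that sorts idx (0 if an index repeats).
    idx = list(idx)
    if len(set(idx)) != len(idx):
        return 0, ()
    sign = 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    # Exterior product; elements are {sorted index tuple: coefficient}.
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            s, key = sort_sign(ia + ib)
            if s:
                out[key] = out.get(key, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v}

def comp(elem, inverse=False):
    # Euclidean complement e_I -> sign(I, I^c) e_{I^c}; the inverse complement
    # carries the extra factor (-1)^(g(N-g)) that undoes a double complement.
    out = {}
    for idx, c in elem.items():
        rest = tuple(i for i in range(1, N + 1) if i not in idx)
        s, _ = sort_sign(idx + rest)
        if inverse:
            g = len(idx)
            s *= (-1) ** (g * (N - g))
        out[rest] = out.get(rest, 0) + s * c
    return out

def vee(a, b):
    # Regressive product: complement of (complement a ^ complement b).
    return comp(wedge(comp(a), comp(b)), inverse=True)

def add(p, q):
    out = dict(p)
    for k, v in q.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

# Check 3.56: (a v b)^x == ((a^x) v b) + (-1)^(N-m) (a v (b^x))
a = {(1, 2): 2, (1, 3): -1, (2, 3): 3}  # a 2-element, m = 2
b = {(1, 2): 1, (2, 3): 5}              # a 2-element, k = 2
x = {(1,): 1, (2,): -2, (3,): 4}        # a 1-element
m = 2
lhs = wedge(vee(a, b), x)
sgn = (-1) ** (N - m)
rhs = add(vee(wedge(a, x), b), {k: sgn * v for k, v in vee(a, wedge(b, x)).items()})
print(lhs == rhs)  # True
```

The complement-based construction used here is one standard way to realize the regressive product; other sign conventions exist, but any consistent choice satisfies the Product Formula.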

Deriving Product Formulas


If x is of a grade higher than 1, then similar relations hold, but with extra terms on the right-hand
side. For example, if we replace x by x1 Ô x2 and note that:

$$\Big(\underset{m}{a}\vee\underset{k}{b}\Big)\wedge(x_1\wedge x_2) \;=\; \Big(\Big(\underset{m}{a}\vee\underset{k}{b}\Big)\wedge x_1\Big)\wedge x_2$$

then the right-hand side may be expanded by applying the above Product Formula twice to obtain:

$$\Big(\underset{m}{a}\vee\underset{k}{b}\Big)\wedge(x_1\wedge x_2) \;=\; \Big(\underset{m}{a}\wedge(x_1\wedge x_2)\Big)\vee\underset{k}{b} \;+\; \underset{m}{a}\vee\Big(\underset{k}{b}\wedge(x_1\wedge x_2)\Big)$$
$$+\;(-1)^{n-m}\Big[\Big(\underset{m}{a}\wedge x_2\Big)\vee\Big(\underset{k}{b}\wedge x_1\Big) \;-\; \Big(\underset{m}{a}\wedge x_1\Big)\vee\Big(\underset{k}{b}\wedge x_2\Big)\Big] \qquad (3.57)$$

Each successive application doubles the number of terms. We started with two terms on the right
hand side of the basic Product Formula. By applying it again we obtain a Product Formula with
four terms on the right hand side. The next application would give us eight terms as shown in the
Product Formula below.

$$\Big(\underset{m}{a}\vee\underset{k}{b}\Big)\wedge(x_1\wedge x_2\wedge x_3) \;=\; \Big(\underset{m}{a}\wedge x_1\Big)\vee\Big(\underset{k}{b}\wedge x_2\wedge x_3\Big) \;-\; \Big(\underset{m}{a}\wedge x_2\Big)\vee\Big(\underset{k}{b}\wedge x_1\wedge x_3\Big) \;+\; \Big(\underset{m}{a}\wedge x_3\Big)\vee\Big(\underset{k}{b}\wedge x_1\wedge x_2\Big) \;+\; \Big(\underset{m}{a}\wedge x_1\wedge x_2\wedge x_3\Big)\vee\underset{k}{b}$$
$$+\;(-1)^{n-m}\Big[\,\underset{m}{a}\vee\Big(\underset{k}{b}\wedge x_1\wedge x_2\wedge x_3\Big) \;+\; \Big(\underset{m}{a}\wedge x_1\wedge x_2\Big)\vee\Big(\underset{k}{b}\wedge x_3\Big) \;-\; \Big(\underset{m}{a}\wedge x_1\wedge x_3\Big)\vee\Big(\underset{k}{b}\wedge x_2\Big) \;+\; \Big(\underset{m}{a}\wedge x_2\wedge x_3\Big)\vee\Big(\underset{k}{b}\wedge x_1\Big)\Big] \qquad (3.58)$$

Thus the Product Formula for $\big(\underset{m}{a}\vee\underset{k}{b}\big)\wedge(x_1\wedge x_2\wedge\cdots\wedge x_j)$ would lead to $2^j$ terms.

Because this derivation process is simply the repeated application of a formula, we can get
Mathematica to do it for us automatically.

Deriving Product Formulas automatically


By doing the previous derivations by hand, we identify three basic steps:
1. Devise the rule to be applied at each step
2. Apply the rule as many times as required
3. Simplify the signs of the terms.

We discuss the coding of each of these steps in turn.

Ï 1. Devise the rule to be applied at each step

DeriveProductFormulaOnce@F_, x_D :=
GrassmannExpand@F Ô xD ê. HHa_. * Hu_ Ó v_LL Ô z_ ß
a * HHu Ô zL Ó vL + H-1L^H¯n - RawGrade@uDL * a * Hu Ó Hv Ô zLLL;

DeriveProductFormulaOnce takes an existing Product Formula F, together with a new 1-


element x, and expands out their exterior product using GrassmannExpand. Each of the
resulting terms will be of the form shown by the pattern (a_.*(u_Óv_))Ôz_, where u_, v_, and
z_ may be exterior products, and a_ may be a scalar coefficient.

Each term is now transformed into two terms with the forms a*((uÔz)Óv) and (-1)^(¯n-
RawGrade[u])*a*(uÓ(vÔz))). Here ¯n is the (symbolic) dimension of the underlying linear
space, and the function RawGrade computes the grade of an element without consideration to the
dimension of the currently declared space. Both these features enable the formula derivation to
apply to an element of any grade.


Ï 2. Apply the rule as many times as required

DeriveProductFormula@HA_ Ó B_L Ô X_D :=


HA Ó BL Ô ComposeSimpleForm@X, 1D ã
DPFSimplify@Fold@DeriveProductFormulaOnce, A Ó B,
8ComposeSimpleForm@X, 1D< ê. Wedge Ø SequenceDD;

DeriveProductFormula takes the left hand side of the original Product Formula (AÓB)ÔX,
and uses Mathematica's Fold function to repeatedly apply DeriveProductFormulaOnce to
the previous result, each time including a new 1-element factor of X (or the form resulting from
applying ComposeSimpleForm to X).

Ï 3. Simplify the signs of the terms.

DPFSimplify is a set of rules which simplify any products of the form H-1L^Ha + b m + c ¯nL which arise
in the derivation. Here, a, b, and c may be odd or even integers, while m and ¯n may be symbolic.

† This code is explanatory only. You do not need to enter it to have DeriveProductFormula
work.

Ï Examples: Testing the code

We can test out the code by seeing if we get the same results as the formulas derived by hand
above.

¯A; DeriveProductFormulaB a Ó b Ô xF
m k

a Ó b Ô x ã H-1L-m+¯n a Ó b Ô x + a Ô x Ó b
m k m k m k

DeriveProductFormulaB a Ó b Ô xF
m k 2

a Ó b Ô x1 Ô x2 ã
m k

a Ó b Ô x1 Ô x2 + H-1L-m+¯n - a Ô x1 Ó b Ô x2 + a Ô x2 Ó b Ô x1 + a Ô x1 Ô x2 Ó b
m k m k m k m k


DeriveProductFormulaB a Ó b Ô xF
m k 3

a Ó b Ô x1 Ô x2 Ô x3 ã
m k

a Ô x1 Ó b Ô x2 Ô x3 - a Ô x2 Ó b Ô x1 Ô x3 + a Ô x3 Ó b Ô x1 Ô x2 +
m k m k m k

H-1L-m+¯n a Ó b Ô x1 Ô x2 Ô x3 + a Ô x1 Ô x2 Ó b Ô x3 - a Ô x1 Ô x3 Ó b Ô x2 +
m k m k m k

a Ô x2 Ô x3 Ó b Ô x1 + a Ô x1 Ô x2 Ô x3 Ó b
m k m k

DeriveProductFormulaB a Ó b Ô xF
m k 4

a Ó b Ô x1 Ô x2 Ô x3 Ô x4 ã
m k

a Ó b Ô x1 Ô x2 Ô x3 Ô x4 + a Ô x1 Ô x2 Ó b Ô x3 Ô x4 - a Ô x1 Ô x3 Ó b Ô x2 Ô x4 +
m k m k m k
a Ô x1 Ô x4 Ó b Ô x2 Ô x3 + a Ô x2 Ô x3 Ó b Ô x1 Ô x4 - a Ô x2 Ô x4 Ó b Ô x1 Ô x3 +
m k m k m k

a Ô x3 Ô x4 Ó b Ô x1 Ô x2 + H-1L-m+¯n - a Ô x1 Ó b Ô x2 Ô x3 Ô x4 +
m k m k

a Ô x2 Ó b Ô x1 Ô x3 Ô x4 - a Ô x3 Ó b Ô x1 Ô x2 Ô x4 +
m k m k
a Ô x4 Ó b Ô x1 Ô x2 Ô x3 - a Ô x1 Ô x2 Ô x3 Ó b Ô x4 + a Ô x1 Ô x2 Ô x4 Ó b Ô x3 -
m k m k m k

a Ô x1 Ô x3 Ô x4 Ó b Ô x2 + a Ô x2 Ô x3 Ô x4 Ó b Ô x1 + a Ô x1 Ô x2 Ô x3 Ô x4 Ó b
m k m k m k

Although these outputs are not quite as easy to read as the manually derived formulas in the
previous section (because the outputs omit any brackets deemed unnecessary with the inbuilt
precedence of the operators), one can at least see in these cases that the formulas are equivalent.

Computing the General Product Formula


If we look carefully at the form of the product formulae for different grades j of x, we can see that
each of them is a specific case of a more general explicit formula. We will call this more general
explicit formula the General Product Formula. We derive it below by deconstructing the result for
a 4-element x, and then confirm its identity to the original iteratively derived form in the first few cases.


F1 = DeriveProductFormulaB a Ó b Ô xF
m k 4

a Ó b Ô x1 Ô x2 Ô x3 Ô x4 ã
m k

a Ó b Ô x1 Ô x2 Ô x3 Ô x4 + a Ô x1 Ô x2 Ó b Ô x3 Ô x4 - a Ô x1 Ô x3 Ó b Ô x2 Ô x4 +
m k m k m k
a Ô x1 Ô x4 Ó b Ô x2 Ô x3 + a Ô x2 Ô x3 Ó b Ô x1 Ô x4 - a Ô x2 Ô x4 Ó b Ô x1 Ô x3 +
m k m k m k

a Ô x3 Ô x4 Ó b Ô x1 Ô x2 + H-1L-m+¯n - a Ô x1 Ó b Ô x2 Ô x3 Ô x4 +
m k m k

a Ô x2 Ó b Ô x1 Ô x3 Ô x4 - a Ô x3 Ó b Ô x1 Ô x2 Ô x4 +
m k m k
a Ô x4 Ó b Ô x1 Ô x2 Ô x3 - a Ô x1 Ô x2 Ô x3 Ó b Ô x4 + a Ô x1 Ô x2 Ô x4 Ó b Ô x3 -
m k m k m k

a Ô x1 Ô x3 Ô x4 Ó b Ô x2 + a Ô x2 Ô x3 Ô x4 Ó b Ô x1 + a Ô x1 Ô x2 Ô x3 Ô x4 Ó b
m k m k m k

The first thing we note is that apart possibly from the sign H-1L-m+¯n , the terms on the right hand
side are all products of the form

$$\Big(\underset{m}{a}\wedge \underset{4-r}{x_i}\Big)\vee\Big(\underset{k}{b}\wedge \underset{r}{x_i}\Big), \qquad \underset{j}{x} = \underset{r}{x_i}\wedge \underset{4-r}{x_i}$$

This means we can compute them using the r-span and r-cospan of x. For j equal to 4, r will range
from 0 to 4. Thus we have

Ka Ô ¯#0 BxFO Ó b Ô ¯#0 BxF


m 4 k 4

:a Ô x1 Ô x2 Ô x3 Ô x4 Ó b Ô 1>
m k

Ka Ô ¯#1 BxFO Ó b Ô ¯#1 BxF


m 4 k 4

:a Ô x2 Ô x3 Ô x4 Ó b Ô x1 , a Ô -Hx1 Ô x3 Ô x4 L Ó b Ô x2 ,
m k m k

a Ô x1 Ô x2 Ô x4 Ó b Ô x3 , a Ô -Hx1 Ô x2 Ô x3 L Ó b Ô x4 >
m k m k

Ka Ô ¯#2 BxFO Ó b Ô ¯#2 BxF


m 4 k 4

:a Ô x3 Ô x4 Ó b Ô x1 Ô x2 , a Ô -Hx2 Ô x4 L Ó b Ô x1 Ô x3 , a Ô x2 Ô x3 Ó b Ô x1 Ô x4 ,
m k m k m k

a Ô x1 Ô x4 Ó b Ô x2 Ô x3 , a Ô -Hx1 Ô x3 L Ó b Ô x2 Ô x4 , a Ô x1 Ô x2 Ó b Ô x3 Ô x4 >
m k m k m k


Ka Ô ¯#3 BxFO Ó b Ô ¯#3 BxF


m 4 k 4

:a Ô x4 Ó b Ô x1 Ô x2 Ô x3 , a Ô -x3 Ó b Ô x1 Ô x2 Ô x4 ,
m k m k

a Ô x2 Ó b Ô x1 Ô x3 Ô x4 , a Ô -x1 Ó b Ô x2 Ô x3 Ô x4 >
m k m k

Ka Ô ¯#4 BxFO Ó b Ô ¯#4 BxF


m 4 k 4

:a Ô 1 Ó b Ô x1 Ô x2 Ô x3 Ô x4 >
m k

But all these terms can be computed at once using the complete span ¯#¯ and cospan ¯#¯ .

Ka Ô ¯#¯ BxFO Ó b Ô ¯#¯ BxF


m 4 k 4

::a Ô x1 Ô x2 Ô x3 Ô x4 Ó b Ô 1>,
m k

:a Ô x2 Ô x3 Ô x4 Ó b Ô x1 , a Ô -Hx1 Ô x3 Ô x4 L Ó b Ô x2 ,
m k m k

a Ô x1 Ô x2 Ô x4 Ó b Ô x3 , a Ô -Hx1 Ô x2 Ô x3 L Ó b Ô x4 >,
m k m k

:a Ô x3 Ô x4 Ó b Ô x1 Ô x2 , a Ô -Hx2 Ô x4 L Ó b Ô x1 Ô x3 , a Ô x2 Ô x3 Ó b Ô x1 Ô x4 ,
m k m k m k

a Ô x1 Ô x4 Ó b Ô x2 Ô x3 , a Ô -Hx1 Ô x3 L Ó b Ô x2 Ô x4 , a Ô x1 Ô x2 Ó b Ô x3 Ô x4 >,
m k m k m k

:a Ô x4 Ó b Ô x1 Ô x2 Ô x3 , a Ô -x3 Ó b Ô x1 Ô x2 Ô x4 , a Ô x2 Ó b Ô x1 Ô x3 Ô x4 ,
m k m k m k

a Ô -x1 Ó b Ô x2 Ô x3 Ô x4 >, :a Ô 1 Ó b Ô x1 Ô x2 Ô x3 Ô x4 >>


m k m k

Now we need to multiply the terms by a scalar factor which is 1 when the grade of the span is
even, and H-1L¯n-m when it is odd. Since the lists of terms in the collection above are arranged
according to the grades of their spans, we then need a list whose elements alternate between 1 and
H-1L¯n-m . To construct this list, we first define a function ¯r which returns an alternating list of
1s and 0s of the correct length j + 1, multiply this list by ¯n - m, and then use it as a power.
Mathematica's inbuilt Listable attributes of Times and Power will again return the result we
want.

¯r@j_D := Mod@Range@0, jD, 2D

H-1L¯r@4D H¯n-mL
91, H-1L-m+¯n , 1, H-1L-m+¯n , 1=

Multiplying by the scalar factor now gives us the terms we want.


H-1L¯r@4D H¯n-mL Ka Ô ¯#¯ BxFO Ó b Ô ¯#¯ BxF


m 4 k 4

::a Ô x1 Ô x2 Ô x3 Ô x4 Ó b Ô 1>,
m k

:H-1L-m+¯n a Ô x2 Ô x3 Ô x4 Ó b Ô x1 , H-1L-m+¯n a Ô -Hx1 Ô x3 Ô x4 L Ó b Ô x2 ,


m k m k

H-1L-m+¯n a Ô x1 Ô x2 Ô x4 Ó b Ô x3 , H-1L-m+¯n a Ô -Hx1 Ô x2 Ô x3 L Ó b Ô x4 >,


m k m k

:a Ô x3 Ô x4 Ó b Ô x1 Ô x2 , a Ô -Hx2 Ô x4 L Ó b Ô x1 Ô x3 , a Ô x2 Ô x3 Ó b Ô x1 Ô x4 ,
m k m k m k

a Ô x1 Ô x4 Ó b Ô x2 Ô x3 , a Ô -Hx1 Ô x3 L Ó b Ô x2 Ô x4 , a Ô x1 Ô x2 Ó b Ô x3 Ô x4 >,
m k m k m k
-m+¯n -m+¯n
:H-1L a Ô x4 Ó b Ô x1 Ô x2 Ô x3 , H-1L a Ô -x3 Ó b Ô x1 Ô x2 Ô x4 ,
m k m k

H-1L-m+¯n a Ô x2 Ó b Ô x1 Ô x3 Ô x4 , H-1L-m+¯n a Ô -x1 Ó b Ô x2 Ô x3 Ô x4 >,


m k m k

:a Ô 1 Ó b Ô x1 Ô x2 Ô x3 Ô x4 >>
m k

Flattening this list and then summing the terms gives the final result.

F2 = a Ó b Ô Hx1 Ô x2 Ô x3 Ô x4 L ã
m k

¯SBFlattenBH-1L¯r@4D H¯n-mL Ka Ô ¯#¯ BxFO Ó b Ô ¯#¯ BxF FF


m 4 k 4

a Ó b Ô x1 Ô x2 Ô x3 Ô x4 ã
m k

a Ô 1 Ó b Ô x1 Ô x2 Ô x3 Ô x4 + H-1L-m+¯n a Ô -x1 Ó b Ô x2 Ô x3 Ô x4 +
m k m k
-m+¯n -m+¯n
H-1L a Ô x2 Ó b Ô x1 Ô x3 Ô x4 + H-1L a Ô -x3 Ó b Ô x1 Ô x2 Ô x4 +
m k m k
-m+¯n
H-1L a Ô x4 Ó b Ô x1 Ô x2 Ô x3 + a Ô -Hx1 Ô x3 L Ó b Ô x2 Ô x4 +
m k m k
-m+¯n
a Ô -Hx2 Ô x4 L Ó b Ô x1 Ô x3 + H-1L a Ô -Hx1 Ô x2 Ô x3 L Ó b Ô x4 +
m k m k
-m+¯n
H-1L a Ô -Hx1 Ô x3 Ô x4 L Ó b Ô x2 + a Ô x1 Ô x2 Ó b Ô x3 Ô x4 +
m k m k
a Ô x1 Ô x4 Ó b Ô x2 Ô x3 + a Ô x2 Ô x3 Ó b Ô x1 Ô x4 +
m k m k

a Ô x3 Ô x4 Ó b Ô x1 Ô x2 + H-1L-m+¯n a Ô x1 Ô x2 Ô x4 Ó b Ô x3 +
m k m k
-m+¯n
H-1L a Ô x2 Ô x3 Ô x4 Ó b Ô x1 + a Ô x1 Ô x2 Ô x3 Ô x4 Ó b Ô 1
m k m k

Applying GrassmannExpandAndSimplify will simplify the signs and allow us to confirm the
result is identical to that produced by DeriveProductFormula.


¯%@F1 D === ¯%@F2 D


True

Ï The General Product Formula

To summarize the results above: we can take the Product Formula derived in the previous section
and rewrite it in its computational form using the notions of complete span and complete cospan.
The computational form relies on the fact that in GrassmannAlgebra the exterior and regressive
products have an inbuilt Listable attribute. (This attribute means that a product of lists is
automatically converted to a list of products.)

a Ó b Ô x ã ¯SBFlattenBH-1L¯r@jD H¯n-mL Ka Ô ¯#¯ BxFO Ó Kb Ô ¯#¯ BxFOFF    3.59

In this formula, ¯#¯ BxF and ¯#¯ BxF are the complete span and complete cospan of x, and a, b, and x
are of grades m, k, and j respectively. The sign function ¯r just creates a list of elements
alternating between 1 and H-1L¯n-m . For example, for j
equal to 8:

H-1L¯r@8D H¯n-mL
91, H-1L-m+¯n , 1, H-1L-m+¯n , 1, H-1L-m+¯n , 1, H-1L-m+¯n , 1=

Comparing the two forms of the Product Formula

Ï Encapsulating the computable formula

We can easily encapsulate this computable formula in a function GeneralProductFormula for


easy comparison with the results from our initial iterative derivation using
DeriveProductFormula.

GeneralProductFormula@HA_ Ó B_L Ô X_D :=


HA Ó BL Ô ComposeSimpleForm@X, 1D ã
¯SAFlattenAH-1L¯r@¯G@XDD H¯n-¯G@ADL IA Ô ¯"¯ @XDM Ó HB Ô ¯"¯ @XDLEE;

¯r@m_D := Mod@Range@0, mD, 2D;


Ï Comparing results from DeriveProductFormula with


GeneralProductFormula.

We can compare the results obtained by our two formulations. DeriveProductFormula


invoked the successive application of a basic formula. GeneralProductFormula invoked an
explicit formula for the general case. Below we show the two formulations give the same result for
all simple elements from grades 1 to 10.

¯!10; Table[¯%[DeriveProductFormula[α_m ∨ β_k ∧ x_j]] ===
  ¯%[GeneralProductFormula[α_m ∨ β_k ∧ x_j]], {j, 1, 10}]

{True, True, True, True, True, True, True, True, True, True}

The invariance of the General Product Formula


In Chapter 2 we saw that the m:k-forms based on a simple factorized m-element X were, like the
m-element, independent of the factorization.

¯S[(G1 ∧ ¯span_k[X]) ∨ (G2 ∧ ¯cospan_k[X])]


As we have seen in its development, the General Product Formula is composed of a number of
m:k-forms, and, as expected, shows the same invariance. For example, suppose we generate the
formula F1 for a 3-element X equal to x ∧ y ∧ z:

F1 = GeneralProductFormula[α_m ∨ β_k ∧ (x ∧ y ∧ z)]

α_m ∨ β_k ∧ x ∧ y ∧ z ==
  (-1)^(¯n-m) (α_m ∧ 1) ∨ (β_k ∧ x ∧ y ∧ z) + (α_m ∧ x) ∨ (β_k ∧ y ∧ z) +
  (α_m ∧ (-y)) ∨ (β_k ∧ x ∧ z) + (α_m ∧ z) ∨ (β_k ∧ x ∧ y) + (-1)^(¯n-m) (α_m ∧ (-(x ∧ z))) ∨ (β_k ∧ y) +
  (-1)^(¯n-m) (α_m ∧ x ∧ y) ∨ (β_k ∧ z) + (-1)^(¯n-m) (α_m ∧ y ∧ z) ∨ (β_k ∧ x) + (α_m ∧ x ∧ y ∧ z) ∨ (β_k ∧ 1)

We can express the 3-element as a product of different 1-element factors by adding to any given
factor, scalar multiples of the other factors. For example


F2 = GeneralProductFormula[α_m ∨ β_k ∧ (x ∧ y ∧ (z + a x + b y))]

α_m ∨ β_k ∧ x ∧ y ∧ (a x + b y + z) ==
  (-1)^(¯n-m) (α_m ∧ 1) ∨ (β_k ∧ x ∧ y ∧ (a x + b y + z)) + (α_m ∧ x) ∨ (β_k ∧ y ∧ (a x + b y + z)) +
  (α_m ∧ (-y)) ∨ (β_k ∧ x ∧ (a x + b y + z)) + (α_m ∧ (a x + b y + z)) ∨ (β_k ∧ x ∧ y) +
  (-1)^(¯n-m) (α_m ∧ (-(x ∧ (a x + b y + z)))) ∨ (β_k ∧ y) +
  (-1)^(¯n-m) (α_m ∧ x ∧ y) ∨ (β_k ∧ (a x + b y + z)) +
  (-1)^(¯n-m) (α_m ∧ y ∧ (a x + b y + z)) ∨ (β_k ∧ x) + (α_m ∧ x ∧ y ∧ (a x + b y + z)) ∨ (β_k ∧ 1)

Applying GrassmannExpandAndSimplify to these expressions shows that they are equal.

¯%[F1 == F2]
True

Alternative forms for the General Product Formula


If we start from the General Product Formula we can rearrange it to interchange the span and
cospan, so that the span elements become associated with α_m and the cospan elements become
associated with β_k. Of course, there will need to be some associated changes in the signs of the
terms as well. We start with the General Product Formula in the form developed above.

α_m ∨ β_k ∧ x_j == ¯S[Flatten[(-1)^(¯r[j] (¯n-m)) (α_m ∧ ¯span[x_j]) ∨ (β_k ∧ ¯cospan[x_j])]]

From Chapter 2: Multilinear Forms, we have shown that a form such as

(α_m ∧ ¯span[x_j]) ∨ (β_k ∧ ¯cospan[x_j])

can be rewritten in the form

(-1)^(¯rr[j]) ¯R[(α_m ∧ ¯cospan[x_j]) ∨ (β_k ∧ ¯span[x_j])]

Thus we can write the General Product formula in the alternative form

α_m ∨ β_k ∧ x_j == ¯S[Flatten[(-1)^(¯r[j] (¯n-m)) (-1)^(¯rr[j]) ¯R[(α_m ∧ ¯cospan[x_j]) ∨ (β_k ∧ ¯span[x_j])]]]    3.60

Here, the definitions of ¯r, ¯rr and ¯R are


¯r[m_] := Mod[Range[0, m], 2];
¯rr[m_] := If[EvenQ[m], ¯r[m], 0];
ReverseAll[s_List] := Reverse[Reverse /@ s]; ¯R = ReverseAll;

Ï Encapsulating the second computable formula

Just as we encapsulated the first computable formula, we can similarly encapsulate this alternative
one.

GeneralProductFormulaB[(A_ ∨ B_) ∧ X_] :=
  (A ∨ B) ∧ ComposeSimpleForm[X, 1] ==
    ¯S[Flatten[(-1)^(¯r[¯G[X]] (¯n-¯G[A])) (-1)^(¯rr[¯G[X]]) ¯R[(A ∧ ¯cospan[X]) ∨ (B ∧ ¯span[X])]]];

Ï Comparing results from DeriveProductFormula with GeneralProductFormulaB.

Again comparing the results of DeriveProductFormula with this alternative formula verifies
their equivalence for values of j from 1 to 10.

¯!10; Table[¯%[DeriveProductFormula[α_m ∨ β_k ∧ x_j]] ===
  ¯%[GeneralProductFormulaB[α_m ∨ β_k ∧ x_j]], {j, 1, 10}]

{True, True, True, True, True, True, True, True, True, True}

The Decomposition Formula


In the General Product Formula, putting k equal to ¯n-m permits the left-hand side to be expressed
as a scalar multiple of x_j and hence expresses a type of 'decomposition' of x_j into components.


x_j == ¯S[Flatten[(-1)^(¯r[j] (¯n-m)) ((α_m ∧ ¯span[x_j]) ∨ (β_(¯n-m) ∧ ¯cospan[x_j])) / (α_m ∨ β_(¯n-m))]]    3.61

If the element to be decomposed is a 1-element, that is j = 1, there are just two terms, reducing the
decomposition formula to:

x == ¯%[¯S[Flatten[(-1)^(¯r[1] (¯n-m)) ((α_m ∧ ¯span[x]) ∨ (β_(¯n-m) ∧ ¯cospan[x])) / (α_m ∨ β_(¯n-m))]]]

((-1)^(m+¯n) α_m ∨ (β_(¯n-m) ∧ x) + (α_m ∧ x) ∨ β_(¯n-m)) / (α_m ∨ β_(¯n-m))

To make this a little more readable we add some parentheses and absorb the sign by interchanging
the order of the factors x and β_(¯n-m).

x == (α_m ∨ (x ∧ β_(¯n-m))) / (α_m ∨ β_(¯n-m)) + ((α_m ∧ x) ∨ β_(¯n-m)) / (α_m ∨ β_(¯n-m))    3.62
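For the simplest concrete case, 1-elements in a 2-space (m == 1, ¯n == 2), the regressive products in the decomposition formula reduce to 2×2 determinants, and 3.62 becomes the classical Cramer's-rule decomposition of a vector x onto a basis {a, b}. The following Python sketch (an illustration of this special case only, with our own function names; it does not use the GrassmannAlgebra package) verifies that decomposition numerically:

```python
def det2(u, v):
    # The single coordinate of the bivector u ∧ v in the plane: a 2x2 determinant
    return u[0] * v[1] - u[1] * v[0]

def decompose(x, a, b):
    # x == (det(x, b)/det(a, b)) a + (det(a, x)/det(a, b)) b  -- Cramer's rule,
    # the m == 1, n == 2 instance of the decomposition formula
    d = det2(a, b)
    return det2(x, b) / d, det2(a, x) / d

a, b, x = (2.0, 1.0), (1.0, 3.0), (4.0, 7.0)
c1, c2 = decompose(x, a, b)
# Reassembling c1*a + c2*b recovers x:
assert all(abs(c1 * ai + c2 * bi - xi) < 1e-12 for ai, bi, xi in zip(a, b, x))
print(c1, c2)  # 1.0 2.0, i.e. x == a + 2 b
```

Each coefficient is a ratio of a regressive product to the 'volume' α ∨ β, exactly as in the two terms of 3.62.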

In Chapter 4: Geometric Interpretations we will explore some of the geometric significance of
these formulas. They will also find application in Chapter 6: The Interior Product.

Exploration: Dual forms of the General Product Formulas


You can obtain the dual forms of the various specific product formulas by applying the Dual
function to them. Below we simply display the duals of the first three specific cases.

Ï First Product Formula

F1 = DeriveProductFormula[α_m ∨ β_k ∧ x]

α_m ∨ β_k ∧ x == (-1)^(¯n-m) α_m ∨ (β_k ∧ x) + (α_m ∧ x) ∨ β_k


Dual[F1]

α_m ∧ β_k ∨ x_(¯n-1) == (-1)^m α_m ∧ (β_k ∨ x_(¯n-1)) + (α_m ∨ x_(¯n-1)) ∧ β_k

Ï Second Product Formula

F2 = DeriveProductFormula[α_m ∨ β_k ∧ x_2]

α_m ∨ β_k ∧ x1 ∧ x2 ==
  α_m ∨ (β_k ∧ x1 ∧ x2) + (-1)^(¯n-m) (-(α_m ∧ x1) ∨ (β_k ∧ x2) + (α_m ∧ x2) ∨ (β_k ∧ x1)) + (α_m ∧ x1 ∧ x2) ∨ β_k

Dual[F2]

α_m ∧ β_k ∨ x1_(¯n-1) ∨ x2_(¯n-1) ==
  α_m ∧ (β_k ∨ x1_(¯n-1) ∨ x2_(¯n-1)) +
  (-1)^m (-(α_m ∨ x1_(¯n-1)) ∧ (β_k ∨ x2_(¯n-1)) + (α_m ∨ x2_(¯n-1)) ∧ (β_k ∨ x1_(¯n-1))) +
  (α_m ∨ x1_(¯n-1) ∨ x2_(¯n-1)) ∧ β_k

Ï Third Product Formula

F3 = DeriveProductFormula[α_m ∨ β_k ∧ x_3]

α_m ∨ β_k ∧ x1 ∧ x2 ∧ x3 ==
  (α_m ∧ x1) ∨ (β_k ∧ x2 ∧ x3) - (α_m ∧ x2) ∨ (β_k ∧ x1 ∧ x3) + (α_m ∧ x3) ∨ (β_k ∧ x1 ∧ x2) +
  (-1)^(¯n-m) (α_m ∨ (β_k ∧ x1 ∧ x2 ∧ x3) + (α_m ∧ x1 ∧ x2) ∨ (β_k ∧ x3) - (α_m ∧ x1 ∧ x3) ∨ (β_k ∧ x2) +
    (α_m ∧ x2 ∧ x3) ∨ (β_k ∧ x1)) + (α_m ∧ x1 ∧ x2 ∧ x3) ∨ β_k


Dual[F3]

α_m ∧ β_k ∨ x1_(¯n-1) ∨ x2_(¯n-1) ∨ x3_(¯n-1) ==
  (α_m ∨ x1_(¯n-1)) ∧ (β_k ∨ x2_(¯n-1) ∨ x3_(¯n-1)) - (α_m ∨ x2_(¯n-1)) ∧ (β_k ∨ x1_(¯n-1) ∨ x3_(¯n-1)) +
  (α_m ∨ x3_(¯n-1)) ∧ (β_k ∨ x1_(¯n-1) ∨ x2_(¯n-1)) +
  (-1)^m (α_m ∧ (β_k ∨ x1_(¯n-1) ∨ x2_(¯n-1) ∨ x3_(¯n-1)) + (α_m ∨ x1_(¯n-1) ∨ x2_(¯n-1)) ∧ (β_k ∨ x3_(¯n-1)) -
    (α_m ∨ x1_(¯n-1) ∨ x3_(¯n-1)) ∧ (β_k ∨ x2_(¯n-1)) + (α_m ∨ x2_(¯n-1) ∨ x3_(¯n-1)) ∧ (β_k ∨ x1_(¯n-1))) +
  (α_m ∨ x1_(¯n-1) ∨ x2_(¯n-1) ∨ x3_(¯n-1)) ∧ β_k

The double sum form of the General Product Formula


Although not directly computable, the following double sum form of the General Product Formula
may be shown to be equivalent to the computable form. It will find particular application in
Chapter 6 where we convert it to its interior product forms.

α_m ∨ β_k ∧ x_p == Σ_{r=0}^{p} (-1)^(r (¯n-m)) Σ_{i=1}^{ν} (α_m ∧ xi_(p-r)) ∨ (β_k ∧ xi_r)    3.63

x_p == x1_(p-r) ∧ x1_r == x2_(p-r) ∧ x2_r == … == xν_(p-r) ∧ xν_r,    ν == Binomial[p, r]

3.10 Summary
This chapter has introduced the regressive product as a true dual to the exterior product. This
means that to every theorem T involving exterior and regressive products there corresponds a dual
theorem Dual[T] such that T == Dual[Dual[T]].

But although the regressive product axioms assert that the regressive product of an m-element with
a k-element is an (m+k-n)-element (where n is the dimension of the underlying linear space), the
axiom sets for the exterior and regressive products alone do not provide a mechanism for deriving
such an element explicitly. A further explicit axiom involving both exterior and regressive products
is necessary. We called this axiom the Common Factor Axiom and motivated it by a combination
of algebraic and geometric argument. If the exterior product is viewed as a sort of 'union' of
independent elements, the regressive product may be viewed as a sort of 'intersection'. Because
Grassmann considered only Euclidean spaces, used the same notation for both exterior and
regressive products, and equated scalars and pseudo-scalars, the Common Factor Axiom was
effectively hidden in his notation.

The Common Factor Axiom was then extended to prove one of the most important formulae in the
Grassmann algebra, the Common Factor Theorem. This theorem enables the regressive product of
any two arbitrary elements of the algebra to be computed in an effective manner. It will be shown
in Chapter 5: The Complement that if the underlying linear space is endowed with a metric, then
the result is specific, and depends on the metric for its precise value. Otherwise, if there is no
metric, the element is specific only up to congruence (that is, up to an arbitrary scalar factor). It
was then shown how the Common Factor Theorem could be used in the factorization of simple
elements.

Finally, it was shown how the Common Factor Theorem leads to a suite of formulas called Product
Formulas. These formulas expand expressions involving an exterior product of an element with a
regressive product of elements, or, involving a regressive product of an element with an exterior
product of elements. In Chapter 6 we will show how these lead naturally to formulas where the
regressive products are replaced by interior products. These interior product forms of the product
formulae will find application throughout the rest of the book.

The next chapter is an interlude in the development of the algebraic fundamentals. In it we begin to
explore one of Grassmann algebra's most enticing interpretations: geometry. But at this stage we
will only be discussing non-metric geometry. Chapters 5 and 6 to follow will develop the algebra's
metric concepts, and thus complete the fundamentals required for subsequent applications and
geometric interpretations in metric space.


4 Geometric Interpretations

4.1 Introduction
In Chapter 2, the exterior product operation was introduced onto the elements of a linear space L_1,
enabling the construction of a series of new linear spaces L_m possessing new types of elements. In
this chapter, the elements of L_1 will be interpreted, some as vectors, some as points. This will be
done by singling out one particular element of L_1 and conferring upon it the interpretation of origin
point. All the other elements of the linear space then divide into two categories. Those that involve
the origin point will be called (weighted) points, and those that do not will be called vectors. As
this distinction is developed in succeeding sections it will be seen to be both illuminating and
consistent with accepted notions. Vectors and points will be called geometric interpretations of the
elements of L_1.

Some of the more important consequences, however, of the distinction between vectors and points
arise from the distinctions thereby generated in the higher grade spaces L_m. It will be shown that a
simple element of L_m takes on two interpretations. The first, that of a multivector (or m-vector), is
when the m-element can be expressed in terms of vectors alone. The second, that of a bound
multivector (or bound m-vector), is when the m-element requires both points and vectors to express
it. These simple interpreted elements will be found useful for defining geometric entities such as
lines and planes and their higher dimensional analogues known as multiplanes (or m-planes).
Unions and intersections of multiplanes may then be calculated straightforwardly by using the
bound multivectors which define them. A multivector may be visualized as a 'free' entity with no
location. A bound multivector may be visualized as 'bound' through a location in space.

It is not only simple interpreted elements which will be found useful in applications however. In
Chapter 7: Exploring Screw Algebra and Chapter 8: Exploring Mechanics, a basis for a theory of
mechanics is developed whose principal quantities (for example, systems of forces, momentum of
a system of particles, velocity of a rigid body) may be represented by a general interpreted
2-element, that is, by the sum of a bound vector and a bivector.

In the literature of the nineteenth century, wherever vectors and points were considered together,
vectors were introduced as point differences. When it is required to designate physical quantities it
is not satisfactory that all vectors should arise as the differences of points. In later literature, this
problem appeared to be overcome by designating points by their position vectors alone, making
vectors the fundamental entities [Gibbs 1886]. This approach is not satisfactory either, since by
excluding points much of the power of the calculus for dealing with free and located entities
together is excluded. In this book we do not require that vectors be defined in terms of points, but
rather, propose a difference of interpretation between the origin element and those elements not
involving the origin. This approach permits the existence of points and vectors together without the
vectors necessarily arising as point differences.


In this chapter, as in the preceding chapters, L_1 and the spaces L_m do not yet have a metric. That is,
there is no way of calculating a measure or magnitude associated with an element. The
interpretation discussed therefore may also be supposed non-metric. In the next chapter a metric
will be introduced onto the uninterpreted spaces and the consequences of this for the interpreted
elements developed.

In summary then, it is the aim of this chapter to set out the distinct non-metric geometric
interpretations of m-elements brought about by the interpretation of one specific element of L_1 as an
origin point.

4.2 Geometrically Interpreted 1-Elements

Vectors

Ï Depicting vectors

The most common current geometric interpretation for an element of a linear space is that of a
vector. We suppose in this chapter that the linear space does not have a metric (that is, we cannot
calculate magnitudes). Such a vector has the geometric properties of direction and sense (but no
location and no magnitude), and will be graphically depicted (in the usual way) by a directed and
sensed line segment thus:

The usual depiction of a vector. But this vector is located nowhere!

This depiction is unsatisfactory in that the line segment has a definite location and length whilst the
vector it is depicting is supposed to possess neither of these properties. One way perhaps to depict
an unlocated entity is to show it in many locations.

One way to depict an unlocated entity is to show it in many locations.


Another way to emphasize that a vector has no location is to show it on a dynamic graphic. If you are
viewing this notebook live with the GrassmannAlgebra package loaded, you can use your mouse
to manipulate the vector below to reinforce that any of the vector's locations that retain its direction
and sense are equally satisfactory depictions for it. This dynamic depiction is a somewhat more
faithful way of depicting something without locating it. But it is still not fully satisfactory, since we
are depicting the property of no-location by using any location.

Another way is to allow it to be moveable to any location.


(You can drag the arrow by its name with your mouse.)

A linear space, all of whose elements are interpreted as vectors, will be called a vector space.

Ï Depicting the addition of vectors

The geometric interpretation of the addition operation of a vector space is the triangle (or
parallelogram) rule. To effect a sum of two vectors using the triangle rule in this 'unlocated' view
of vectors, you would slide them (always in a direction parallel to themselves) from where you had
each of them conveniently docked, until their tails touched, make the sum in the standard textbook
manner, then park the result anywhere you please.

The triangle rule for the addition of two vectors.

† From this point onwards we will always show vectors docked in a visually convenient location,
often one that is suggestive of the operation we wish to perform. Of course as discussed above,
this does not mean it is actually located there.

Ï Comparing vectors

In a metric space we can define the magnitude of a vector and hence compare vectors by
comparing their magnitudes even if they are not in the same direction. In spaces where no metric
has been imposed, as are the spaces we are discussing in this chapter, we cannot define the
magnitude of a vector, and hence we cannot compare it with another vector in a different direction.

However, we can compare vectors in the same direction. Vectors in the same direction are
congruent. That is, one is a scalar multiple (not zero) of the other. This scalar multiple is called the
congruence factor.

Thus vectors a x and b x are congruent with congruence factor a/b or b/a. Most often it is not the
. Most often it is not the
magnitude of the congruence factor that is important, but its sign. If it is positive, the vectors may
be said to have the same orientation. If it is negative the vectors may be said to have an opposite
orientation. Orientation is thus a relative concept. It applies in the same way to elements of any
grade. For the special case of vectors however, the term sense is often used synonymously with
orientation.

Below we depict three vectors with congruence factors (relative to x) of 1, -1 and 2.

x -x 2x

The effect of multiplying the vector x by factors 1, -1, and 2.

Points

Ï The origin

In order to describe position in space it is necessary to have a reference point. This point is usually
called the origin.

Rather than the standard technique of implicitly assuming the origin and working only with vectors
to describe position, we find it important for later applications to augment the vector space with the
origin as a new element to create a new linear space with one more element in its basis. For
reasons which will appear later, such a linear space will be called a bound vector space.

The only difference between the origin element and the vector elements of the linear space is their
interpretation. The origin element is interpreted as a point. We will denote it in GrassmannAlgebra
by ¯", which you can access from the palette or type as Â*5ÂÂdsOÂ (a five-star followed by
a double-struck capital O).

¯!

In order to describe position it is necessary to have a reference point.


Ï The sum of the origin and a vector is a point

The bound vector space in addition to its vectors and its origin now possesses a new set of
elements requiring interpretation: those formed from the sum of the origin and a vector.

P ã ¯" + x

It is these elements that will be used to describe position and that we will call points. The vector x
is called the position vector of the point P. A position vector is just like any other vector and is
therefore, of course, located nowhere (even though we show it in a conveniently suggestive
position!).

We depict points (other than the origin) by blue points.

The sum of the origin and a vector is a point.

Ï The difference of two points is a vector

It follows immediately from this definition of a point that the difference of two points is a vector:

P - Q ã H¯" + pL - H¯" + qL ã p - q ã x

The difference of two points is a vector.

Remember that the bound vector space we are discussing does not yet have a metric. That is, the
distance between two points (the magnitude of the vector equal to the point difference) is not
meaningful. However, the relative distances between points on the same line can be measured by
means of their intensities.
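The bookkeeping behind these identities can be sketched in a few lines of Python (an illustration only, not the GrassmannAlgebra package; all names are our own): represent an element of the bound vector space by a pair (w, v), where w is the coefficient of the origin and v is the vector part. A vector then has w == 0, a unit-weight point has w == 1, and the difference of two points automatically has a zero origin coefficient, that is, it is a vector:

```python
ORIGIN = (1, (0, 0, 0))

def add(a, b):
    # Componentwise addition of (origin-weight, vector-part) pairs
    (wa, va), (wb, vb) = a, b
    return (wa + wb, tuple(p + q for p, q in zip(va, vb)))

def sub(a, b):
    (wa, va), (wb, vb) = a, b
    return (wa - wb, tuple(p - q for p, q in zip(va, vb)))

def point(v):
    # P = origin + position vector
    return add(ORIGIN, (0, v))

P = point((1, 3, -4))
Q = point((2, -1, -2))
print(sub(P, Q))  # (0, (-1, 4, -2)): the origins cancel, leaving a vector
```

Adding that vector back to Q reproduces a pair with weight 1, that is, a point, matching the interpretations above.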

Ï The sum of a point and a vector is a point

The sum of a point and a vector is another point.


Q == P + y == (¯" + x) + y == ¯" + (x + y)


The sum of a point and a vector is another point

and so a vector may be viewed as a carrier of points. That is, the addition of a vector to a point
carries or transforms it to another point. (See the historical note below).

Ï The sum of two points is a weighted point

The sum of two points is not quite another point. Rather, it is a point with weight 2, situated mid-
way between the two points.

P1 + P2 ã H¯" + x1 L + H¯" + x2 L
1
ã 2 ¯" + Hx1 - x2 L ã 2 ¯" + Hx1 - x2 L ã 2 Pc
2
Wherever pictorially feasible we will show weighted points with their weights attached to their
names, and/or a size or colour change to distinguish them from the 'pure' points on the same
graphic.

The sum of two points is a point of weight 2 located mid-way between them

Similarly, the sum of n points is a point with weight n, situated at the centre of mass (centre of
gravity) of the points.

Ï The sum of weighted points

A scalar multiple of a point (of the form m P or m ∧ P) will be called a weighted point. Weighted
points are summed just like a set of point masses is deemed equivalent to their total mass located at
their centre of mass. For example the sum of two weighted points with weights (masses) m1 and
m2 is equal to the weighted point with weight Hm1 + m2 Llocated on the line between them. The
location is the point Pc about which the 'mass moments' m1 l1 and m2 l2 are equal.



The sum of two weighted points is a weighted point between them

In the more general case we have

(Σ mi) Pc == Σ mi Pi

We can also easily express this equation in terms of the position vectors of the weighted points.

Σ mi Pi == Σ mi (¯" + xi) == (Σ mi) (¯" + (Σ mi xi)/(Σ mi))

Thus the sum of a number of mass-weighted points Σ mi Pi is equivalent to the centre of gravity
¯" + (Σ mi xi)/(Σ mi), weighted by the total mass Σ mi. A numerical example is included later in this section.
i

As will be seen in Section 4.4 below, a weighted point may also be viewed as a bound scalar.
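This summation rule is easy to check in coordinates. In the Python sketch below (an illustration only, not the GrassmannAlgebra package; the function name is our own), a weighted point m P is stored as the pair of its weight and position vector; summing a list of such pairs returns the total weight and the centre-of-mass position, reproducing the rule that the sum is the total mass located at the centre of gravity:

```python
from fractions import Fraction

def sum_weighted_points(wps):
    # wps: list of (mass, position-vector) pairs; returns (total mass, centroid position)
    total = sum(m for m, _ in wps)
    dim = len(wps[0][1])
    moment = [sum(Fraction(m * x[i]) for m, x in wps) for i in range(dim)]
    return total, tuple(mi / total for mi in moment)

# Two unit-weight points sum to a point of weight 2 mid-way between them:
total, centre = sum_weighted_points([(1, (0, 0)), (1, (4, 2))])
print(total, centre)  # 2 (Fraction(2, 1), Fraction(1, 1))
```

With masses 2 and 1 at positions 1 and 4 on a line, the centroid lands at 2, where the mass moments 2·(2-1) and 1·(4-2) balance, as in the figure above.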

Ï Historical Note
Sir William Rowan Hamilton in his Lectures on Quaternions [Hamilton 1853] was the first
to introduce the notion of vector as 'carrier' of points.
... I regard the symbol B-A as denoting "the step from B to A": namely, that step by making which,
from the given point A, we should reach or arrive at the sought point B; and so determine, generate,
mark or construct that point. This step, (which we always suppose to be a straight line) may also in
my opinion be properly called a vector; or more fully, it may be called "the vector of the point B
from the point A": because it may be considered as having for its office, function, work, task or
business, to transport or carry (in Latin vehere) a moveable point, from the given or initial position
A, to the sought or final position B.

Declaring a basis for a bound vector space


Any geometry that we do with points in GrassmannAlgebra will require us to declare the origin ¯"
as one of the elements of the basis of the space. We have already seen that a shorthand way of
declaring a basis {e1, e2, …, en} is by entering ¯!n. Declaring the augmented basis
{¯", e1, e2, …, en} can be accomplished by entering the alias ¯"n (most easily from the
palette). These are 5-star-script-capital letters subscripted with the integer n denoting the desired
'vectorial' dimension of the space. For example, entering ¯"2 gives a basis for the plane.


¯"2
{¯#, e1, e2}

¯ "2
e2

e1
¯!
Declaring a basis for the plane

We do not depict the basis vectors at right angles because the space does not yet have a metric, and
orthogonality is a metric concept.

As always, you can confirm your currently declared basis by checking the status pane at the top of
the palette.

We may often precede a calculation with one of these DeclareBasis aliases followed by a semi-
colon. This accomplishes the declaration of the basis but for brevity suppresses the confirming
output. For example, below we declare a bound 3-space, and then compose a palette of the basis of
the algebra constructed on it.

¯"3 ; BasisPalette

Basis Palette

L_0:  1
L_1:  ¯!, e1, e2, e3
L_2:  ¯! ∧ e1, ¯! ∧ e2, ¯! ∧ e3, e1 ∧ e2, e1 ∧ e3, e2 ∧ e3
L_3:  ¯! ∧ e1 ∧ e2, ¯! ∧ e1 ∧ e3, ¯! ∧ e2 ∧ e3, e1 ∧ e2 ∧ e3
L_4:  ¯! ∧ e1 ∧ e2 ∧ e3

Of course, you can declare your own bound vector space basis if you wish. For example if you
want the vector basis elements to be i, j, and k, you could enter

DeclareBasis@8i, j, k, ¯"<D
8¯#, i, j, k<

(DeclareBasis will always rearrange the ordering to make the origin ¯" come first).

Composing vectors and points


Ï Composing vectors

Once you have declared a basis for your bound vector space, say ¯"3 ,

¯"3
{¯#, e1, e2, e3}

you can quickly compose any vectors in this basis with ComposeVector or its alias ¯!Ñ . The
placeholder Ñ is used for entering the symbol upon which you want the coefficients to be based.
Here we choose a.

Va = ¯&a
a1 e1 + a2 e2 + a3 e3

ComposeVector automatically declares the ai to be scalar symbols.

If you enter ¯!Ñ , you get an expression with placeholders in which you can enter your own
coefficients just by clicking on any placeholder, and tabbing through them.

¯&Ñ
Ñ e1 + Ñ e2 + Ñ e3

Note however that in this case if you want any symbolic coefficients you enter to be recognized as
scalar symbols, you would need to ensure this yourself.

Ï Composing points

Similarly, to compose a point you can use ¯"Ñ .

Pb = ¯'b
¯# + b1 e1 + b2 e2 + b3 e3

¯'Ñ
¯# + Ñ e1 + Ñ e2 + Ñ e3

Now you can use the points and vectors you have composed in expressions. For example, here we
add Pb and 2 Va , then use GrassmannSimplify (alias ¯$) to collect their coefficients.

Pb + 2 Va // ¯$

¯# + (2 a1 + b1) e1 + (2 a2 + b2) e2 + (2 a3 + b3) e3


Example: Calculation of the centre of mass


Suppose a space with basis {¯", e1, e2, e3} and a set of masses Mi situated at points Pi. It is
required to find their centre of mass. First declare the basis, then enter the mass points.

¯"3
{¯#, e1, e2, e3}

M1 = 2 P1 ; P1 = ¯" + e1 + 3 e2 - 4 e3 ;
M2 = 4 P2 ; P2 = ¯" + 2 e1 - e2 - 2 e3 ;
M3 = 7 P3 ; P3 = ¯" - 5 e1 + 3 e2 - 6 e3 ;
M4 = 5 P4 ; P4 = ¯" + 4 e1 + 2 e2 - 9 e3 ;

We simply add the mass-weighted points.

M = Σ_{i=1}^{4} Mi

5 (¯# + 4 e1 + 2 e2 - 9 e3) + 7 (¯# - 5 e1 + 3 e2 - 6 e3) +
2 (¯# + e1 + 3 e2 - 4 e3) + 4 (¯# + 2 e1 - e2 - 2 e3)

Simplifying this gives a weighted point with weight 18, the scalar attached to the origin.

M = ¯%[M]
18 ¯# - 5 e1 + 33 e2 - 103 e3

To take the weight out as a factor, that is, expressing the result in the form mass ∧ point, we can
use ToWeightedPointForm.

M = ToWeightedPointForm[M]

18 (¯# - (5 e1)/18 + (11 e2)/6 - (103 e3)/18)

The centre of mass of four weighted points

Thus the total mass is 18, situated at the point Pc == ¯" - (5 e1)/18 + (11 e2)/6 - (103 e3)/18. If you
are reading this notebook in Mathematica, you can rotate this graphic, and check the positions of
the points by hovering your mouse over their symbols.
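The same numbers can be checked with ordinary arithmetic, independently of the package. This Python fragment (an illustration only) recomputes the total mass, the weighted sum of coordinates, and the centre of mass for the four mass points above:

```python
from fractions import Fraction

masses = [2, 4, 7, 5]
positions = [(1, 3, -4), (2, -1, -2), (-5, 3, -6), (4, 2, -9)]

total = sum(masses)
# Weighted coordinate sums, one per basis direction e1, e2, e3
moment = [sum(m * p[i] for m, p in zip(masses, positions)) for i in range(3)]
centroid = tuple(Fraction(c, total) for c in moment)

print(total, moment)  # 18 [-5, 33, -103]
print(centroid)       # the point coordinates -5/18, 11/6, -103/18 as Fractions
```

The totals 18 and (-5, 33, -103) reproduce the weighted point 18 ¯# - 5 e1 + 33 e2 - 103 e3 obtained above.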

4.3 Geometrically Interpreted 2-Elements

Simple geometrically interpreted 2-elements


It has been seen in Chapter 2 that the linear space L_2 may be generated from L_1 by the exterior
product operation. In the preceding section the elements of L_1 have been given two geometric
interpretations: that of a vector and that of a point. These interpretations in turn generate various
other interpretations for the elements of L_2.

In L_2 there are at first sight three possibilities for simple elements:
1. x ∧ y (vector by vector).
2. P ∧ x (point by vector).
3. P1 ∧ P2 (point by point).

However, P1 ∧ P2 may be expressed as P1 ∧ (P2 - P1) which is the product of a point and a
vector, and thus reduces to the second case.

There are thus two simple interpreted elements in L_2:
1. x ∧ y (the simple bivector).
2. P ∧ x (the bound vector).

Ï A note on terminology

The term bound as in the bound vector P ∧ x indicates that the vector x is conceived of as bound
through the point P, rather than to the point P, since the latter conception would give the incorrect
impression that the vector was located at the point P. By adhering to the terminology bound
through, we get a slightly more correct impression of the 'freedom' that the vector enjoys.

The term bound vector space will be used to denote a vector space to whose basis has been added
an origin point. This should be read as bound vector-space, rather than bound-vector space.

The term bound m-space will be used to denote an m-dimensional vector space to whose basis has
been added an origin point.


Bivectors

Ï Depicting bivectors

Earlier in this chapter, a vector was depicted graphically by a directed and sensed line segment
supposed to be located nowhere in particular.

In like manner we depict a simple bivector by depicting its two component vectors supposed
located nowhere in particular. But to indicate that they are part of the same exterior product, the
vectors are depicted conveniently docked head to tail in the order of the product, and 'joined' by a
somewhat transparent parallelogram in the same colour, named by default with the exterior product
symbol.

Depicting a simple bivector. This bivector is located nowhere.

This bivector defines a planar 2-direction, the precise 2-dimensional analog of the unlocated
vector. Its vectors are located nowhere, and it is located nowhere. The parallelogram should be
viewed as an artifact which is used to visually tie the two vectors together in the exterior product.
We motivate this choice by noting that if this bivector were in a metric space, the area of the
parallelogram would correspond to the magnitude of the bivector. We will discuss this further in
Chapter 6.

If we change the sign of both vectors in the product, we do not change the bivector, but we get a
different depiction.

Changing the signs of both vectors gives the same bivector.

Any bivector congruent to this bivector (that is, a scalar multiple of it) will represent the same 2-
direction. Reversing the order of the factors in a bivector is equivalent to multiplying it by -1,
producing an 'opposite orientation'.

2009 9 3
Grassmann Algebra Book.nb 217

Congruent bivectors with different orientations.

This parallelogram depiction of the simple bivector is still misleading in a further major respect
that did not arise for vectors. It incorrectly suggests a specific shape of the parallelogram. Indeed,
since xÔy ã xÔ(x + y), another valid depiction of this simple bivector would be a parallelogram
with sides constructed from vectors x and x + y.

x Ô y ã x Ô Hx + yL

Or, more generally, for any scalar multiplier k:

x Ô y ã x Ô Hk x + yL

However in what follows, just as we will usually depict a bivector docked in a location convenient
for the discussion, so too we will usually depict it in the shape most convenient for the discussion
at hand, usually one with the simplest factors.
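These identities are easy to verify numerically. The following sketch (in Python with NumPy, my own convention rather than the book's Mathematica package) represents a bivector xÔy by the antisymmetric matrix of its components; the vectors and the scalar k are arbitrarily chosen.

```python
# Numeric sketch (not the book's package): the bivector x^y is represented by
# the antisymmetric matrix with entries x[i]*y[j] - x[j]*y[i].
import numpy as np

def wedge(a, b):
    return np.outer(a, b) - np.outer(b, a)

x = np.array([1, 0, 0])
y = np.array([1, 2, 0])
k = 3

print(np.array_equal(wedge(x, y), wedge(x, k * x + y)))  # True: x^y == x^(kx+y)
print(np.array_equal(wedge(y, x), -wedge(x, y)))         # True: reversal flips the sign
```

The first check confirms that adding any multiple of x to the second factor leaves the bivector unchanged; the second confirms the anti-symmetry of the exterior product.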

Ï Bivectors in a metric space

In the following chapter a metric will be introduced onto L_1, from which a metric is induced onto L_2. This will permit the definition of the measure of a vector (its length) and the measure of the simple bivector (its area).

The measure of a simple bivector is geometrically interpreted as the area of the parallelogram
formed by its two vectors. However, as demonstrated above, the bivector can be expressed in terms
of an infinite number of pairs of vectors. Despite this, the simple geometric fact that they have the
same base and height shows that parallelograms formed from all of them have the same area. For
example, the areas of the parallelograms in the previous figure are the same. Thus the area
definition of the measure of the bivector is truly an invariant measure. From this point of view the
parallelogram depiction in a metric space is correctly suggestive, although the parallelogram is not
of fixed shape.

In Chapter 6 it will be shown that this parallelogram-area notion of measure is simply the 2-
dimensional case of a very much more general notion of measure applicable to entities of any
grade.

A sum of simple bivectors is called a bivector. In two and three dimensions all bivectors are simple.
This will have important consequences for our exploration of screw algebra in Chapter 7 and its
application to mechanics in Chapter 8.

Earlier in the chapter it was remarked that a vector may be viewed as a 'carrier' of points.
Analogously, a simple bivector may be viewed as a carrier of bound vectors. This view will be
more fully explored in the next section.

Bound vectors
In mechanics the concept of force is paramount. In Chapter 8: Exploring Mechanics we will show
that a force may be represented by a bound vector, and that a system of forces may be represented
by a sum of bound vectors.

It has already been shown that a bound vector may be expressed either as the product of a point
with a vector or as the product of two points.

P1 Ô x ã P1 Ô HP2 - P1 L ã P1 Ô P2     HP2 ã P1 + xL

The bound vector in the form P1 Ô x defines a line through the point P1 in the direction of x.
Similarly, in the form P1 Ô P2 it defines the line through P1 and P2 .

Here we use the word 'define' in the context discussed in Chapter 2 in the section Spaces and
congruence. To say that a bound vector B defines a line means that the line may be defined as the
set of all points P which belong to the space of B, that is, such that B Ô P ã 0. A consequence of
this definition is that any other bound vector congruent to B defines the same line.

We depict a bound vector as located in its line in either of two ways, by:
1. Two points and their vector difference.
2. A single point and a vector.
In both cases the vector indicates the order of the factors in the exterior product.

PointÔvector depiction PointÔpoint depiction


Two different depictions of a bound vector in its line.

These graphical depictions of the bound vector are each misleading in their own way.

The first (point-vector) depiction suggests that the vector lies in the line. Since the vector is located
nowhere, it is certainly not bound to the point. However as a component of the bound vector, we
will often find it convenient to imagine it docked in the line, and to speak of it as bound through
the point. And the point, again as a component of the bound vector, can be anywhere in the line
since if P and P* are any two points in the line, P - P* is a vector of the same direction as x and
the bound vector can be expressed as either P Ô x or P* Ô x .

HP - P* L Ô x ã 0 ï P Ô x ã P* Ô x
Hence, although a lone vector is located nowhere, and a lone point is immoveably fixed, the bound
vector formed by the exterior product of the two is bound to a line through the point in the
direction of the vector. And to add to this representational conundrum, the bound vector has no
specific location in the line. The following diagram depicts a bound vector docked in two different
locations in the line.



A bound vector displaying its ability to slide along its line.
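This sliding property can be checked numerically. The sketch below uses an assumed coordinate convention (mine, not the book's): a 1-element gets coordinates (w, x1, x2, x3), with w = 1 for a point and w = 0 for a vector, and a bound vector is represented by the antisymmetric matrix of its 2-element components.

```python
# Numeric sketch (assumed (w, x1, x2, x3) coordinates, not the book's package).
import numpy as np

def wedge(a, b):
    return np.outer(a, b) - np.outer(b, a)

x = np.array([0, 2, 1, 0])       # a vector (weight 0)
P = np.array([1, 1, 3, -4])      # a point  (weight 1)
Pstar = P + 3 * x                # another point on the line of P^x

# (P - Pstar) ^ x == 0, so both points express the same bound vector
print(np.array_equal(wedge(P, x), wedge(Pstar, x)))  # True
```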

The second (point-point) depiction suggests that the depicted points are of specific importance over
other points in the line. However, any pair of points in the line with the same vector difference may
be used to express the bound vector. Let x ã Q - P ã Q* - P* , then

P Ô x ã P* Ô x ï P Ô HQ - PL ã P* Ô HQ* - P* L ï P Ô Q ã P* Ô Q*

A bound vector displaying its ability to look like a sliding pair of points.

It has been mentioned in the previous section that a simple bivector may be viewed as a 'carrier' of
bound vectors. To see this, take any bound vector PÔx and a bivector whose space contains x. The
bivector may be expressed in terms of x and some other vector, y say, yielding yÔx. Thus:

P Ô x + y Ô x ã HP + yL Ô x ã P* Ô x

The bivector as a carrier of bound vectors.

The geometric interpretation of the addition of such a simple bivector to a bound vector is then
similar to that for the addition of a vector to a point, that is, a shift in position of the bound element.

Composing bivectors and bound vectors

Ï Composing bivectors

Once you have declared a basis for your bound vector space, say ¯"4 ,


¯"4
8¯#, e1 , e2 , e3 , e4 <

you can quickly compose any bivectors in this basis with ComposeBivector or its alias ¯#Ñ . The placeholder Ñ is used for entering the symbol upon which you want the coefficients to be based. Here we choose a.

Ba = ¯%a
a1 e1 Ô e2 + a2 e1 Ô e3 + a3 e1 Ô e4 + a4 e2 Ô e3 + a5 e2 Ô e4 + a6 e3 Ô e4

Just as with vectors and points, if you enter ¯#Ñ , you get an expression with placeholders in which
you can enter your own coefficients. (But if you want to do further manipulation, make sure they
are declared as scalar symbols).

¯%Ñ
Ñ e1 Ô e2 + Ñ e1 Ô e3 + Ñ e1 Ô e4 + Ñ e2 Ô e3 + Ñ e2 Ô e4 + Ñ e3 Ô e4

Ï Composing simple bivectors

In a 4-space, bivectors are not necessarily simple. If you want to compose a simple bivector, you
can use ComposeSimpleBivector or its alias ¯$Ñ .

Bb = ¯(b

He1 b1 + e2 b2 + e3 b3 + e4 b4 L Ô He1 b5 + e2 b6 + e3 b7 + e4 b8 L

Ï Composing bound vectors

Similarly, to compose a bound vector you can use ComposeBoundVector, or its alias ¯)Ñ . The symbol ¯) has been chosen because, as we have seen, bound vectors are often used to define lines.

L1 = ¯)q

H¯# + e1 q1 + e2 q2 + e3 q3 + e4 q4 L Ô He1 q5 + e2 q6 + e3 q7 + e4 q8 L

The sum of two parallel bound vectors


Suppose we have two bound vectors %1 and %2 , and we wish to explore how their sum might be expressed. In this section we explore the special case where they are parallel. The results are valid in a space of any dimension.

Ï The sum of two parallel bound vectors whose vectors are equal but of opposite sense

Consider a space of any dimension. Let %1 ã P Ô x and %2 ã Q Ô y be two bound vectors. Then their sum % is


% ã %1 + %2 ã P Ô x + Q Ô y

Since the bound vectors are parallel we can write y equal to a x (where a is a scalar) so that

% ã P Ô x + Q Ô y ã P Ô x + Q Ô Ha xL ã HP + a QL Ô x

In the special case that y is equal to -x (that is a is -1), we have that % reduces to a bivector

% ã HP - QL Ô x

The sum of two bound vectors, whose vectors are equal but of opposite sense, is a bivector.

In mechanics (see Chapters 7 and 8) this is equivalent to two oppositely directed forces of equal
magnitude reducing to a couple.

Ï The sum of two parallel bound vectors is in general a bound vector parallel to them

Supposing now that a is not -1, then P + a Q is a weighted point of weight (1+a) situated on the
line joining P and Q. We can shift this weight to the vectorial term (so that the bound vector again
becomes the product of a (pure) point and a vector) by writing % as

% ã R Ô HH1 + aL xL;     R ã HP + a QL ê H1 + aL

The sum of two parallel bound vectors is a bound vector parallel to them.

In mechanics, this is equivalent to two parallel forces reducing to a resultant force parallel to them.
If the two parallel forces are of equal magnitude (a is equal to 1), the resultant force passes through
a point mid-way between them.
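Both cases can be verified with a small numeric sketch (assumed (w, x1, x2, x3) coordinates, with w = 1 for points and w = 0 for vectors; this is my convention, not the book's package).

```python
# Sketch of the two parallel cases: a couple (a = -1) and a mid-way resultant (a = 1).
import numpy as np

def wedge(a, b):
    return np.outer(a, b) - np.outer(b, a)

x = np.array([0, 1, 0, 0])
P = np.array([1, 0, 0, 0])
Q = np.array([1, 0, 2, 0])

# a = -1: the sum P^x + Q^(-x) is the bivector (P - Q)^x, a couple
print(np.array_equal(wedge(P, x) + wedge(Q, -x), wedge(P - Q, x)))   # True

# a = 1: the resultant passes through the mid-point R = (P + Q)/2
R = (P + Q) // 2                    # integer coordinates here, so exact
print(np.array_equal(wedge(P, x) + wedge(Q, x), wedge(R, 2 * x)))    # True
```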

The sum of two non-parallel bound vectors




We now turn to the more general case where we assume the bound vectors are not parallel. In a
space of any dimension we can always express a sum of bound vectors in terms of an arbitrary
point, P say, by writing:

% ã P Ô x + Q Ô y ã R Ô Hx + yL + HP - RL Ô x + HQ - RL Ô y ã R Ô Hx + yL + B

Because our two bound vectors are not parallel, the sum of their vectors cannot be zero. Hence the first term is a new bound vector which is not zero. The remaining terms are simple bivectors. In the general dimensional case, this sum of bivectors B may not be reducible to a simple bivector. In 3-dimensional space, the sum of bivectors B can always be reduced to a single simple bivector. In the plane, a point R can always be found which makes the sum zero, leaving us with the result that the sum of two bound vectors in the plane (whose sum of vectors is not zero) can always be reduced to a single bound vector.
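The decomposition holds for any choice of the point R, as the following numeric sketch illustrates (assumed (w, e1, e2, e3) coordinates, mine rather than the book's package).

```python
# For any point R: P^x + Q^y == R^(x+y) + (P-R)^x + (Q-R)^y
import numpy as np

def wedge(a, b):
    return np.outer(a, b) - np.outer(b, a)

P = np.array([1, 1, 0, 2]); x = np.array([0, 1, 1, 0])
Q = np.array([1, 0, 3, 1]); y = np.array([0, 0, 2, 1])
R = np.array([1, 5, 5, 5])          # an arbitrarily chosen point

lhs = wedge(P, x) + wedge(Q, y)
rhs = wedge(R, x + y) + wedge(P - R, x) + wedge(Q - R, y)
print(np.array_equal(lhs, rhs))     # True, whatever R is chosen
```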

Ï The sum of two non-parallel bound vectors in the plane

This point R is in fact the point of intersection of the lines of the bound vectors. We have seen
earlier in the chapter that a bound vector is independent of the point used to express it, provided
that the point lies on its line. We can thus imagine 'sliding' each bound vector along its line until its
point reaches the point of intersection with the line of the other bound vector. We can then take the
common point out of the sum as a factor, and we are left with the vector of the resultant being the
sum of the vectors of the bound vectors.

The sum of two non-parallel bound vectors in the plane is a bound vector through their point of intersection.

This result applies, of course, not only to bound vectors in the plane, but to any pair of intersecting bound vectors in a space of any dimension, since together they form a planar subspace.

As we might expect, given any two bound vectors in the plane, we can check if they intersect or
are parallel by extracting a common factor from their regressive product. If they intersect, their
regressive product will give their point of intersection. If they are parallel, it will give their
common vector.

† For example, if they intersect we have

¯"2 ; B1 = H¯" + a x + b yL Ô x; B2 = H¯" + c x + d yL Ô y;


ToCommonFactor@B1 Ó B2 D
¯c Hc x †¯# Ô x Ô y§ + b y †¯# Ô x Ô y§ + ¯# †¯# Ô x Ô y§L

Whence their point of intersection can be extracted as

¯" + c x + b y
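The same intersection can be cross-checked numerically. In the sketch below (mine, not the GrassmannAlgebra package), a bound vector in the plane has three 2-element components in assumed coordinates (w, x, y), and the regressive product of two bound vectors reduces to a cross product of their dualized component triples.

```python
# Intersection of two bound vectors in the plane via their 2-element components.
import numpy as np

def line(P, v):
    """2-element components (m01, m02, m12) of the bound vector P ^ v."""
    M = np.outer(P, v) - np.outer(v, P)
    return np.array([M[0, 1], M[0, 2], M[1, 2]])

def meet(l1, l2):
    """Common point of two bound vectors (regressive product, via duals)."""
    dual = lambda m: np.array([m[2], -m[1], m[0]])
    return np.cross(dual(l1), dual(l2))

a, b, c, d = 2, 3, 5, 7
ex, ey = np.array([0, 1, 0]), np.array([0, 0, 1])

B1 = line(np.array([1, a, b]), ex)   # (O + a x + b y) ^ x
B2 = line(np.array([1, c, d]), ey)   # (O + c x + d y) ^ y

print(meet(B1, B2))                  # [1 5 3] : the point O + c x + b y
```

The dual/cross-product step is the 3-component regressive product in disguise; it is the familiar homogeneous-coordinates computation for intersecting two lines, and it reproduces the point ¯" + c x + b y obtained above.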

Ï The sum of two non-parallel bound vectors in 3-space

If the lines of the bound vectors in 3-space intersect, then the bound vectors together belong to a plane and the results of the previous section apply. We are thus left to explore the remaining case where the bound vectors are neither parallel nor intersecting. In this case, the exterior product of the bound vectors will not be zero.

As we have seen above, the sum of two bound vectors in a space of any number of dimensions,
may always be reduced to the sum of a bound vector and a bivector. The resultant bound vector
can be chosen through an arbitrary point, but its vector must be the sum of the vectors of the
original two bound vectors. The bivector will depend on the choice of the point defining the
resultant bound vector. And in 3-space this bivector is guaranteed to be simple.

% ã P Ô x + Q Ô y ã R Ô Hx + yL + B

To visualize how this works, we take a simple example. Define the two bound vectors as:

¯"3 ; B1 = ¯" Ô e1 ; B2 = H¯" + 2 e2 L Ô e3

Choose a convenient point through which to define the resultant bound vector, say half way between the points used to define the component bound vectors.

R = H¯" + e2 L Ô He1 + e3 L

Then the bivector of the sum will be

B ã B1 + B2 - R ã
¯" Ô e1 + H¯" + 2 e2 L Ô e3 - H¯" + e2 L Ô He1 + e3 L ã e2 Ô He3 - e1 L

The following depictions give two different views of the same summation. The original component
bound vectors are shown with a somewhat ghostly opacity, while the resultant bound vector and
bivector are shown in full-blooded colours.
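The example is easy to verify numerically with the antisymmetric-matrix sketch used earlier (assumed (w, e1, e2, e3) coordinates, not the book's package).

```python
# Verifying: (O^e1) + ((O+2e2)^e3) - ((O+e2)^(e1+e3)) == e2^(e3 - e1)
import numpy as np

def wedge(a, b):
    return np.outer(a, b) - np.outer(b, a)

O, e1, e2, e3 = np.eye(4, dtype=int)

B1 = wedge(O, e1)                   # the first bound vector
B2 = wedge(O + 2 * e2, e3)          # the second bound vector
R  = wedge(O + e2, e1 + e3)         # the chosen resultant bound vector

print(np.array_equal(B1 + B2 - R, wedge(e2, e3 - e1)))  # True
```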


The sum of two bound vectors. View 1. The sum of two bound vectors. View 2.

We shall see in the next section that the sum of any number of bound vectors in 3-space may be reduced to the sum of a single bound vector and a single bivector.

† In Chapter 8: Exploring Mechanics, we will see that bound vectors correspond to forces and
bivectors to moments. If the resultant bound vector force is chosen through a point which makes
it and its bivector moment orthogonal, such a system of forces is called a wrench.

Sums of bound vectors

Ï Sums of bound vectors in the plane

We have seen in the sections above that the sum of two bound vectors in the plane is either a bound
vector or a bivector or zero. We have also seen that the sum of a bound vector and a bivector is
another bound vector.

A sum of any number of bound vectors in the plane, therefore, is clearly also reducible to a bound
vector, a bivector, or zero; for we can simply add them two at a time.

Ï Sums of bound vectors in 3-space

In 3-space we can present the same sort of argument. The sum of two bound vectors in 3-space is
either a bound vector, a simple bivector, the sum of a bound vector and a simple bivector, or zero.
In addition, the sum of two simple bivectors in 3-space is itself a simple bivector.

Thus again, we can add our bound vectors two at a time, and will end up with a bound vector, a
simple bivector, the sum of a bound vector and a simple bivector, or zero.

In 3-space, the sum of a bound vector and a bivector can also always be recast back into the sum of
two bound vectors by adding and subtracting a bound vector.

P Ô x + y Ô z ã HP Ô x - P Ô zL + HP Ô z + y Ô zL ã P Ô Hx - zL + HP + yL Ô z
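This recasting identity can be checked numerically with the component sketch used earlier (assumed (w, e1, e2, e3) coordinates).

```python
# P^x + y^z == P^(x-z) + (P+y)^z : a bound vector plus a bivector
# recast as the sum of two bound vectors.
import numpy as np

def wedge(a, b):
    return np.outer(a, b) - np.outer(b, a)

P = np.array([1, 2, 0, 1])
x = np.array([0, 1, 0, 0]); y = np.array([0, 0, 1, 0]); z = np.array([0, 0, 0, 1])

lhs = wedge(P, x) + wedge(y, z)
rhs = wedge(P, x - z) + wedge(P + y, z)
print(np.array_equal(lhs, rhs))     # True
```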

Ï Sums of bound vectors in n-space


Sums of bound vectors in n-space (n > 3) differ from sums in 3-space only in one aspect. The
bivector may not be simple.

If the bivector is simple, a sum of bound vectors in n-space has essentially the same properties as
one in 3-space.

Ï A criterion for a sum of bound vectors to be reducible to a single bound vector

A non-zero sum of bound vectors S can be reduced to a single bound vector only in the case that
the vector of the bound vector is a factor of the bivector. That is, S can be expressed in the form

S ã P Ô x + y Ô x ã HP + yL Ô x

In this case it is clear that the exterior square of S is zero.

S Ô S ã HHP + yL Ô xL Ô HHP + yL Ô xL ã 0

Suppose now that the vector of the bound vector is not a factor of the bivector. Then we have S =
PÔx + yÔz, where x, y, and z are independent. The exterior square of S is then

S Ô S ã HP Ô x + y Ô zL Ô HP Ô x + y Ô zL ã 2 P Ô x Ô y Ô z

This forms a straightforward test to see if the reduction of the sum to a single bound vector can be
effected. If the exterior square is zero, the sum can be reduced to a single bound vector. If the
exterior square is not zero, then the sum cannot be reduced to a single bound vector. We have
already seen this property of exterior squares of 2-elements in our discussion of simplicity in
Chapter 3.
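In the (w, e1, e2, e3) component sketch used earlier, the exterior square of a 2-element S works out to 2 (s01 s23 - s02 s13 + s03 s12) times the basis 4-element (my derivation for this sketch, not the book's code), so the test reduces to a Pfaffian-like sum:

```python
# Exterior-square test for reducibility of a 2-element to a single bound vector.
import numpy as np

def wedge(a, b):
    return np.outer(a, b) - np.outer(b, a)

def exterior_square(S):
    """Coefficient of O^e1^e2^e3 in S^S, for a 2-element component matrix S."""
    return 2 * (S[0, 1] * S[2, 3] - S[0, 2] * S[1, 3] + S[0, 3] * S[1, 2])

O, e1, e2, e3 = np.eye(4, dtype=int)

reducible = wedge(O, e1) + wedge(e2, e1)     # equals (O + e2) ^ e1
print(exterior_square(reducible))            # 0 : reducible to a bound vector

irreducible = wedge(O, e1) + wedge(e2, e3)   # here x, y, z are independent
print(exterior_square(irreducible))          # 2 : not reducible
```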

Ï Sums of bound vectors: Summary

A non-zero sum of bound vectors S Pi Ô xi (except in the case S xi ã 0) may always be reduced
to the sum of a bound vector and a bivector (or zero), since, by choosing an arbitrary point P,
S Pi Ô xi may always be written in the form:

S Pi Ô xi ã P Ô HS xi L + S HPi - PL Ô xi 4.1

The first term on the right hand side is a bound vector. The second term is a bivector. This
decomposition is clearly valid in a space of any dimension.

If S xi ã 0 then the sum is a bivector. In spaces of vector dimension greater than 3, the bivector
may not be simple.

If S HPi - PL Ô xi ã 0 for some P, then the sum is a bound vector.

Alternatively, if HS Pi Ô xi L Ô HS Pi Ô xi L ã 0, then the sum is a bound vector.

This transformation is of fundamental importance in our exploration of mechanics in Chapter 8.


Ï Example: Reducing a sum of bound vectors


Here is an example of a sum of bound vectors and its reduction to the sum of a bound vector and a
bivector. We begin by declaring a bound vector space of three vector dimensions, so in this case
the bivector is guaranteed to be simple.

¯A; ¯"3 ;

Next, we define and enter the four bound vectors.

b1 = P1 Ô x1 ; P1 = ¯" + e1 + 3 e2 - 4 e3 ; x1 = e1 - e3 ;
b2 = P2 Ô x2 ; P2 = ¯" + 2 e1 - e2 - 2 e3 ; x2 = e1 - e2 + e3 ;
b3 = P3 Ô x3 ; P3 = ¯" - 5 e1 + 3 e2 - 6 e3 ; x3 = 2 e1 + 3 e2 ;
b4 = P4 Ô x4 ; P4 = ¯" + 4 e1 + 2 e2 - 9 e3 ; x4 = 5 e3 ;

The sum of the four bound vectors is:

B = b1 + b2 + b3 + b4

H¯# + 4 e1 + 2 e2 - 9 e3 L Ô H5 e3 L + H¯# - 5 e1 + 3 e2 - 6 e3 L Ô H2 e1 + 3 e2 L +
H¯# + e1 + 3 e2 - 4 e3 L Ô He1 - e3 L + H¯# + 2 e1 - e2 - 2 e3 L Ô He1 - e2 + e3 L

By expanding these products, simplifying and collecting terms, we obtain the sum of a bound
vector (through the origin) ¯" Ô H4 e1 + 2 e2 + 5 e3 L and a bivector
-25 e1 Ô e2 + 39 e1 Ô e3 + 22 e2 Ô e3 . We can use GrassmannExpandAndSimplify to do
the computations for us.

B = ¯%@BD
¯# Ô H4 e1 + 2 e2 + 5 e3 L - 25 e1 Ô e2 + 39 e1 Ô e3 + 22 e2 Ô e3

We could just as well have expressed this 2-element as bound through (for example) the point
¯" + e1 . To do this, we simply add e1 Ô H4 e1 + 2 e2 + 5 e3 L to the bound vector and subtract it
from the bivector to get:

H¯" + e1 L Ô H4 e1 + 2 e2 + 5 e3 L - 27 e1 Ô e2 + 34 e1 Ô e3 + 22 e2 Ô e3

We can of course also express the bivector in factored form in many ways. Using
ExteriorFactorize (discussed in Chapter 2) gives a default result

ExteriorFactorize@-27 e1 Ô e2 + 34 e1 Ô e3 + 22 e2 Ô e3 D
-27 He1 + H22 e3 L ê 27L Ô He2 - H34 e3 L ê 27L

Finally, we can check to see if the sum can be reduced to a bound vector by calculating its exterior
square.


¯%@B Ô BD
-230 ¯# Ô e1 Ô e2 Ô e3

Since this is not zero, the reduction of the sum to a single bound vector is not possible in this case.
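The whole example can be reproduced numerically with the component sketch used earlier (assumed (w, e1, e2, e3) coordinates, independent of the GrassmannAlgebra package):

```python
# Reducing the four bound vectors of the example to bound vector + bivector.
import numpy as np

def wedge(a, b):
    return np.outer(a, b) - np.outer(b, a)

O, e1, e2, e3 = np.eye(4, dtype=int)

B = (wedge(O + e1 + 3*e2 - 4*e3, e1 - e3)
     + wedge(O + 2*e1 - e2 - 2*e3, e1 - e2 + e3)
     + wedge(O - 5*e1 + 3*e2 - 6*e3, 2*e1 + 3*e2)
     + wedge(O + 4*e1 + 2*e2 - 9*e3, 5*e3))

print(B[0, 1], B[0, 2], B[0, 3])  # 4 2 5 : the bound vector O ^ (4 e1 + 2 e2 + 5 e3)
print(B[1, 2], B[1, 3], B[2, 3])  # -25 39 22 : the bivector components
print(2 * (B[0, 1] * B[2, 3] - B[0, 2] * B[1, 3] + B[0, 3] * B[1, 2]))  # -230
```

The three printed results match the bound vector, the bivector, and the exterior square computed above, confirming that the sum is not reducible to a single bound vector.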

4.4 Geometrically Interpreted m-Elements

Types of geometrically interpreted m-elements


In L_m the situation is analogous to that in L_2. A simple product of m points and vectors may always be reduced to the product of a point and a simple (m-1)-vector by subtracting one of the points from all of the others. For example, take the exterior product of three points and two vectors. By subtracting the first point, say, from the other two, we can cast the product into the form of a bound simple 4-vector.

P Ô P1 Ô P2 Ô x Ô y ã P Ô HP1 - PL Ô HP2 - PL Ô x Ô y

There are thus only two simple interpreted elements in L_m:
1. x1 Ô x2 Ô ! Ô xm (the simple m-vector).
2. P Ô x2 Ô ! Ô xm (the bound simple (m-1)-vector).

A sum of simple m-vectors is called an m-vector.

If a_m is a (not necessarily simple) m-vector, then P Ô a_m is called a bound m-vector.

A sum of bound m-vectors may always be reduced to the sum of a bound m-vector and an (m+1)-
vector (with the proviso that either or both may be zero).

These interpreted elements and their relationships will be discussed further in the following
sections.

The m-vector
The simple m-vector, or multivector, is the multidimensional equivalent of the vector. As with a
vector, it does not have the property of location. The m-dimensional vector space of a simple m-
vector may be used to define the multidimensional direction of the m-vector.

If we had access to m-dimensional paper, we would depict the simple m-vector, just as we did for
the bivector, by depicting its vectors conveniently docked head to tail in the order of the product,
and 'joined' by a somewhat transparent m-dimensional parallelepiped in the same colour.

In practice of course, we will only be depicting vectors, bivectors and trivectors. A trivector may
be depicted by its three vectors in order joined by a parallelepiped.


Depicting a simple trivector. A trivector is located nowhere.

The orientation is given by the order of the factors in the simple m-vector. An interchange of any
two factors produces an m-vector of opposite orientation. By the anti-symmetry of the exterior
product, there are just two distinct orientations.

In Chapter 6: The Interior Product, it will be shown that the measure of a trivector may be
geometrically interpreted as the volume of this parallelepiped. However, the depiction of the
simple trivector in the manner above suffers from similar defects to those already described for the
bivector: namely, it incorrectly suggests a specific location and shape of the parallelepiped.

A simple m-vector may also be viewed as a carrier of bound simple (m-1)-vectors in a manner analogous to that already described for the simple bivector as a carrier of bound vectors. Thus, a simple trivector may be viewed as a carrier for bound simple bivectors. (See below for a depiction of this.)

A sum of simple m-vectors (that is, an m-vector) is not necessarily reducible to a simple m-vector, except in L_0, L_1, L_(n-1), and L_n.

The bound m-vector


The exterior product of a point and an m-vector is called a bound m-vector. Note that it belongs to L_(m+1). A sum of bound m-vectors is not necessarily a bound m-vector. However, it may in general be reduced to the sum of a bound m-vector and an (m+1)-vector as follows:

S HPi Ô ai L ã S HHP + bi L Ô ai L ã P Ô HS ai L + S Hbi Ô ai L    4.2

Here each ai is an m-vector and Pi ã P + bi . The first term P Ô HS ai L is a bound m-vector provided that S ai is not zero, and the second term is an (m+1)-vector.

When m = 0, a bound 0-vector or bound scalar PÔa (= a P) is seen to be equivalent to a weighted point.

When m = n (n being the dimension of the underlying vector space), any bound n-vector is but a
scalar multiple of the basis (n+1)-element. (The default basis (n+1)-element is denoted
¯" Ô e1 Ô e2 Ô ! Ô en .)

The graphical depiction of bound simple m-vectors presents even greater difficulties than those
already discussed for bound vectors. As in the case of the bound vector, the point used to express
the bound simple m-vector is not unique.

In Section 4.6 we will see how bound simple m-vectors may be used to define multiplanes.

Bound simple m-vectors expressed by points


A bound simple m-vector may always be expressed as a product of m+1 points.
Let Pi = P0 + xi , then:

P0 Ô x1 Ô x2 Ô ! Ô xm
ã P0 Ô HP1 - P0 L Ô HP2 - P0 L Ô ! Ô HPm - P0 L
ã P0 Ô P1 Ô P2 Ô ! Ô Pm

Conversely, as we have already seen, a product of m+1 points may always be expressed as the
product of a point and a simple m-vector by subtracting one of the points from all of the others.

The m-vector of a bound simple m-vector P0 ÔP1 ÔP2 Ô!ÔPm may thus be expressed in terms of
these points as:

HP1 - P0 L Ô HP2 - P0 L Ô ! Ô HPm - P0 L

A particularly symmetrical formula results from the expansion of this product reducing it to a form
no longer showing preference for P0 .

HP1 - P0 L Ô HP2 - P0 L Ô ! Ô HPm - P0 L ã S H-1Li P0 Ô P1 Ô ! Ô ·i Ô ! Ô Pm     Hi = 0, 1, !, mL    4.3

Here ·i denotes deletion of the factor Pi from the product.
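Formula 4.3 can be spot-checked in the bivector case (m = 2), where it reads HP1 - P0 L Ô HP2 - P0 L ã P1 Ô P2 - P0 Ô P2 + P0 Ô P1 . The sketch below uses the assumed (w, e1, e2, e3) coordinates of the earlier examples.

```python
# Checking the m = 2 case of the symmetric point formula.
import numpy as np

def wedge(a, b):
    return np.outer(a, b) - np.outer(b, a)

P0 = np.array([1, 1, 2, 3])
P1 = np.array([1, 0, 1, -1])
P2 = np.array([1, 2, 2, 0])

lhs = wedge(P1 - P0, P2 - P0)
rhs = wedge(P1, P2) - wedge(P0, P2) + wedge(P0, P1)
print(np.array_equal(lhs, rhs))     # True
```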

Bound simple bivectors


In this section, by way of example, we discuss the special case of the bound simple 2-vector, or
bound simple bivector.

A bound bivector is a 3-element. Any exterior product of three 1-elements, in which at least one 1-
element is a point, is a bound simple bivector. (For brevity in this section, we will suppose all the
bound bivectors discussed are bound simple bivectors.)

Consider three points in a space of any number of dimensions:


P
QãP+x
RãP+y

We can construct the same bound bivector % from these points in a number of ways

% = PÔQÔR ã QÔRÔP ã RÔPÔQ

By substituting for R and expanding, we see that % can also be expressed as

% = PÔQÔy ã QÔyÔP ã yÔPÔQ

Substituting again for Q and expanding gives

% = PÔxÔy ã xÔyÔP ã yÔPÔx

Thus we have nine (inessentially different) ways of constructing or denoting the same bound
bivector. From here on we will usually only discuss bound bivectors in any of the three
conceptually distinct forms in which points are denoted before vectors:

% = PÔQÔR ã PÔQÔy ã PÔxÔy

We depict each of these forms in a slightly different way: in each case emphasizing the entities
involved in the form, but in all cases showing the order of factors in the product by the chain of
arrows. (These graphical conventions will also apply to our discussion of bound trivectors later.)

PointÔvectorÔvector PointÔpointÔvector PointÔpointÔpoint


Three different depictions of a bound bivector.

The bound bivector % defines a plane through the points P, Q and R in the directions of x and y.

Here we use the word 'define' in the context discussed in Chapter 2 in the section Spaces and
congruence. To say that a bound bivector % defines a plane means that the plane may be defined as
the set of all points P which belong to the space of %, that is, such that % Ô P ã 0. A consequence
of this definition is that any other bound bivector congruent to % defines the same plane.

These graphical depictions of the bound bivector are each misleading in their own way. Since the
bound bivector defines a plane, we can conceive of it as lying in the plane. However, just as a
bound vector can lie anywhere in its line, a bound bivector can lie anywhere in its plane.

The following diagram depicts a bound bivector docked in three different locations in its plane.
The depictions each have the same vectors, but differ in the point of the plane they use.


A bound bivector displaying its ability to slide around in its plane.

Hence, although a lone bivector is located nowhere, and a lone point is immoveably fixed, the
bound bivector formed by the exterior product of the two is bound to a plane through the point in
the direction of the bivector.

It has been shown earlier that a simple bivector may be viewed as a 'carrier' of bound vectors. In a
similar manner, a simple trivector may be viewed as a 'carrier' of bound bivectors. To see this, take
any bound bivector PÔyÔz and a trivector xÔyÔz. Thus:

P Ô y Ô z + x Ô y Ô z ã HP + xL Ô y Ô z

The trivector as a carrier of bound bivectors.

The geometric interpretation of the addition of such a simple trivector to a bound bivector is then
similar to that for the addition of a vector to a point, that is, a shift in position of the bound element.

Composing m-vectors and bound m-vectors

Ï Composing m-vectors

Suppose that we are interested in a bound vector space of 3 dimensions.

¯"3
8¯#, e1 , e2 , e3 <

Even though we are in a bound vector space we can still compose m-vectors with ComposeMVector. In a bound 3-space, we have three possible grades of m-vector. Here we base the coefficients on the symbol a.


Table@8m, ComposeMVector@m, aD<, 8m, 1, 3<D êê TableForm


1 a1 e1 + a2 e2 + a3 e3
2 a1 e1 Ô e2 + a2 e1 Ô e3 + a3 e2 Ô e3
3 a e1 Ô e2 Ô e3

Ï Composing simple m-vectors

Similarly we can compose simple m-vectors with ComposeSimpleMVector. In this case we use
a placeholder to enable later input of specific coefficients.

Table@8m, ComposeSimpleMVector@m, ÑD<, 8m, 1, 3<D êê TableForm


1 Ñ e1 + Ñ e2 + Ñ e3
2 HÑ e1 + Ñ e2 + Ñ e3 L Ô HÑ e1 + Ñ e2 + Ñ e3 L
3 HÑ e1 + Ñ e2 + Ñ e3 L Ô HÑ e1 + Ñ e2 + Ñ e3 L Ô HÑ e1 + Ñ e2 + Ñ e3 L

Ï Composing bound simple m-vectors

Table@8m, ComposeMPlaneElement@m, ÑD<, 8m, 1, 3<D êê TableForm


1 H¯# + Ñ e1 + Ñ e2 + Ñ e3 L Ô HÑ e1 + Ñ e2 + Ñ e3 L
2 H¯# + Ñ e1 + Ñ e2 + Ñ e3 L Ô HÑ e1 + Ñ e2 + Ñ e3 L Ô HÑ e1 + Ñ e2 + Ñ e3 L
3 H¯# + Ñ e1 + Ñ e2 + Ñ e3 L Ô HÑ e1 + Ñ e2 + Ñ e3 L Ô HÑ e1 + Ñ e2 + Ñ e3 L Ô HÑ e1 +

Ï Composing bound m-vectors

Of course you can also combine composition functions.

Table@8m, ¯'Ñ Ô ComposeMVector@m, ÑD<, 8m, 1, 3<D êê TableForm


1 H¯# + Ñ e1 + Ñ e2 + Ñ e3 L Ô HÑ e1 + Ñ e2 + Ñ e3 L
2 H¯# + Ñ e1 + Ñ e2 + Ñ e3 L Ô HÑ e1 Ô e2 + Ñ e1 Ô e3 + Ñ e2 Ô e3 L
3 H¯# + Ñ e1 + Ñ e2 + Ñ e3 L Ô HÑ e1 Ô e2 Ô e3 L

4.5 Geometrically Interpreted Spaces

Vector and point spaces


The space of a simple non-zero m-element a_m has been defined in Section 2.3 as the set of 1-elements x whose exterior product with a_m is zero: 8x : a_m Ô x ã 0<.

If x is interpreted as a vector v, then the vector space of a_m is defined as 8v : a_m Ô v ã 0<.

If x is interpreted as a point P, then the point space of a_m is defined as 8P : a_m Ô P ã 0<.

The point space of a simple m-vector is empty.

The vector space of a simple m-vector is an m-dimensional vector space. Conversely, the m-
dimensional vector space may be said to be defined by the m-vector.

The point space of a bound simple m-vector is called an m-plane (sometimes multiplane). Thus the
point space of a bound vector is a 1-plane (or line) and the point space of a bound simple bivector
is a 2-plane (or, simply, a plane). The m-plane will be said to be defined by the bound simple m-
vector.

The vector space of a bound simple m-vector is an m-dimensional vector space.

The geometric interpretation for the notion of set inclusion is taken as 'to lie in'. Thus for example,
a point may be said to lie in an m-plane.

The point and vector spaces for a bound simple m-element are tabulated below.

m Bound Simple m-Vector Point Space Vector Space


0 bound scalar point
1 bound vector line 1|dimensional vector space
2 bound simple bivector plane 2|dimensional vector space
m bound simple m-vector m-plane m|dimensional vector space
n-1 bound Hn-1L-vector hyperplane Hn-1L|dimensional vector space
n bound n|vector n|plane n|dimensional vector space

Two congruent bound simple m-vectors P Ô a_m and a HP Ô a_m L define the same m-plane. Thus, for example, the point ¯" + x and the weighted point 2 H¯" + xL define the same point.

Ï Bound simple m-vectors as m-planes

We have defined an m-plane as a set of points defined by a bound simple m-vector. It will often
turn out however to be more convenient and conceptually fruitful to work with m-planes as if they
were the bound simple m-vectors which define them. This is in fact the approach taken by
Grassmann and the early workers in the Grassmannian tradition (for example, [Whitehead 1898],
[Hyde 1884-1906] and [Forder 1941]). This will be satisfactory provided that the equality
relationship we define for m-planes is that of congruence rather than the more specific equals.

Thus we may, when speaking in a geometric context, refer to a bound simple m-vector as an m-
plane and vice versa. Hence in saying P is an m-plane, we are also saying that any a P (where a is a
non-zero scalar factor) is the same m-plane.


Coordinate spaces
The coordinate spaces of a Grassmann algebra are the spaces defined by the basis elements.

The coordinate m-spaces are the spaces defined by the basis elements of L_m. For example, if L_1 has
basis 8¯", e1 , e2 , e3 <, that is, we are working in a bound vector 3-space, then the Grassmann
algebra it generates has basis:

¯"3 ; LBasis
881<, 8¯#, e1 , e2 , e3 <,
8¯# Ô e1 , ¯# Ô e2 , ¯# Ô e3 , e1 Ô e2 , e1 Ô e3 , e2 Ô e3 <,
8¯# Ô e1 Ô e2 , ¯# Ô e1 Ô e3 , ¯# Ô e2 Ô e3 , e1 Ô e2 Ô e3 <, 8¯# Ô e1 Ô e2 Ô e3 <<

Each one of these basis elements defines a coordinate space. Most familiar are the coordinate m-
planes. The coordinate 1-planes ¯"Ôe1 , ¯"Ôe2 , ¯"Ôe3 define the coordinate axes, while the
coordinate 2-planes ¯"Ôe1 Ôe2 , ¯"Ôe1 Ôe3 , ¯"Ôe2 Ôe3 define the coordinate planes. Additionally
however there are the coordinate vectors e1 , e2 , e3 and the coordinate bivectors e1 Ôe2 , e1 Ôe3 ,
e2 Ôe3 .

Perhaps less familiar is the fact that there are no coordinate m-planes in a vector space, but rather
simply coordinate m-vectors.

Geometric dependence
In Chapter 2 the notion of dependence was discussed for elements of a linear space. Non-zero 1-
elements are said to be dependent if and only if their exterior product is zero.

If the elements concerned have been endowed with a geometric interpretation, the notion of
dependence takes on an additional geometric interpretation, as the following table shows.

x1 Ô x2 ã 0 x1 , x2 are parallel Hco|directionalL


P1 Ô P2 ã 0 P1 , P2 are coincident
x1 Ô x2 Ô x3 ã 0 x1 , x2 , x3 are co|2|directional Hor parallelL
P1 Ô P2 Ô P3 ã 0 P1 , P2 , P3 are collinear Hor coincidentL
x1 Ô ! Ô xm ã 0 x1 , !, xm are co| k |directional, k < m
P1 Ô ! Ô Pm ã 0 P1 , !, Pm are co| k |planar, k < m - 1

Thus, co-directionality and coincidence, while being distinct interpretive geometric notions, are
equivalent algebraically.
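This algebraic equivalence is easy to confirm numerically. The sketch below is a minimal check in Python (it is independent of the book's GrassmannAlgebra package; the coordinate conventions and helper names are ours). It represents 1-elements by their coordinates in the basis 8¯", e1 , e2 < and computes exterior products as determinants of coordinate columns: parallel vectors and coincident weighted points both give a zero product.

```python
from itertools import combinations

def det(m):
    # Laplace expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([r[:j] + r[j+1:] for r in m[1:]])
               for j in range(len(m)))

def wedge(*vs):
    # exterior product of 1-elements given by coordinates in a fixed basis;
    # the coefficient on each basis k-element is a k x k determinant
    k, n = len(vs), len(vs[0])
    return {ix: det([[v[i] for i in ix] for v in vs])
            for ix in combinations(range(n), k)}

def is_zero(mv):
    return all(c == 0 for c in mv.values())

# coordinates (w, x1, x2): weight w = 0 for vectors, w = 1 for points
x1, x2 = (0, 1, 2), (0, 2, 4)        # parallel vectors: x2 = 2 x1
P1, P2 = (1, 3, 5), (2, 6, 10)       # coincident (weighted) points: P2 = 2 P1
print(is_zero(wedge(x1, x2)), is_zero(wedge(P1, P2)))   # True True
```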


Geometric duality
The concept of duality introduced in Chapter 3 is most striking when interpreted geometrically.
Suppose:

P defines a point
L defines a line
p defines a plane
V defines a 3-plane

In what follows we tabulate the dual relationships of these entities to each other.

Ï Duality in a plane

In a plane there are just three types of geometric entity: points, lines and planes. In the table below
we can see that in the plane, points and lines are 'dual' entities, and planes and scalars are 'dual'
entities, because their definitions convert under the application of the Duality Principle.

L ª P1 Ô P2 P ª L1 Ó L2
p ª P1 Ô P2 Ô P3 1 ª L1 Ó L2 Ó L3
p ª LÔP 1 ª PÓL

Ï Duality in a 3-plane

In the 3-plane there are just four types of geometric entity: points, lines, planes and 3-planes. In the
table below we can see that in the 3-plane, lines are self-dual, points and planes are now dual, and
scalars are now dual to 3-planes.

L ª P1 Ô P2 L ª p1 Ó p2
p ª P1 Ô P2 Ô P3 P ª p1 Ó p2 Ó p3
V ª P1 Ô P2 Ô P3 Ô P4 1 ª p1 Ó p2 Ó p3 Ó p4
p ª LÔP P ª LÓp
V ª L1 Ô L2 1 ª L1 Ó L2
V ª pÔP 1 ª PÓp

Ï Duality in an n-plane

From these cases the types of relationships in higher dimensions may be composed
straightforwardly. For example, if P defines a point and H defines a hyperplane ((n|1)-plane), then
we have the dual formulations:

H ª P1 Ô P2 Ô ! Ô Pn-1 P ª H1 Ó H2 Ó ! Ó Hn-1


4.6 m-Planes
In an earlier section m-planes were defined as point spaces of bound simple m-vectors. In this
section m-planes will be considered from three other aspects: the first in terms of a simple exterior
product of points, the second as an m-vector and the third as an exterior quotient.

m-planes defined by points


Grassmann and those who wrote in the style of the Ausdehnungslehre considered the point more
fundamental than the vector for exploring geometry. This approach indeed has its merits. An m-
plane is quite straightforwardly defined and expressed as the (space of the) exterior product of m+1
points.

P ã P0 Ô P1 Ô P2 Ô ! Ô Pm

m-planes defined by m-vectors


Consider a bound simple m-vector P0 Ô x1 Ô x2 Ô ! Ô xm . Its m-plane is the set of points P such
that:

P Ô P0 Ô x1 Ô x2 Ô ! Ô xm ã 0

This equation is equivalent to the statement: there exist scalars a, a0 , ai , not all zero, such that:

a P + a0 P0 + S ai xi ã 0

And since this is only possible if a ã -a0 (for the sum to be zero the point weights must cancel,
leaving a sum of vectors), then we can rewrite the condition as:

a HP - P0 L + S ai xi ã 0

or equivalently:

HP - P0 L Ô x1 Ô x2 Ô ! Ô xm ã 0

We are thus led to the following alternative definition of an m-plane: an m-plane defined by the
bound simple m-vector P0 Ô x1 Ô x2 Ô ! Ô xm is the set of points:

8P : HP -P0 L Ô x1 Ô x2 Ô ! Ô xm ã 0<

This is of course equivalent to the usual definition of an m-plane. That is, since the vectors
HP - P0 L, x1 , x2 , !, xm are dependent, then for scalar parameters ti :

HP - P0 L ã t1 x1 + t2 x2 + ! + tm xm 4.4
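The equivalence between the product condition and the parametric form 4.4 can be checked numerically. The sketch below (Python, not the GrassmannAlgebra package; the sample coordinates are ours) works in a bound vector 3-space with basis 8¯", e1 , e2 , e3 <: a point P built from the parametric form satisfies P Ô P0 Ô x1 Ô x2 ã 0, while a point nudged off the plane does not.

```python
from itertools import combinations

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([r[:j] + r[j+1:] for r in m[1:]])
               for j in range(len(m)))

def wedge(*vs):
    # exterior product via determinants of coordinate columns
    k, n = len(vs), len(vs[0])
    return {ix: det([[v[i] for i in ix] for v in vs])
            for ix in combinations(range(n), k)}

P0 = (1, 4, 5, 6)            # a point: coordinates (w, e1, e2, e3), w = 1
x1 = (0, 1, 2, 0)            # two independent vectors, w = 0
x2 = (0, 0, 1, 3)
t1, t2 = 2, -3
P = tuple(p + t1*u + t2*v for p, u, v in zip(P0, x1, x2))   # parametric form
on_plane = all(c == 0 for c in wedge(P, P0, x1, x2).values())
off = tuple(c + e for c, e in zip(P, (0, 0, 0, 1)))         # nudge off the plane
print(on_plane, any(c != 0 for c in wedge(off, P0, x1, x2).values()))  # True True
```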

m-planes as exterior quotients


The alternative definition of an m-plane developed above shows that an m-plane may be defined as
the set of points P such that:

P Ô x1 Ô x2 Ô ! Ô xm ã P0 Ô x1 Ô x2 Ô ! Ô xm

'Solving' for P and noting from Section 2.11 that the quotient of an (m+1)-element by an m-element
contained in it is a 1-element with m arbitrary scalar parameters, we can write:

P ã HP0 Ô x1 Ô x2 Ô ! Ô xm L ê Hx1 Ô x2 Ô ! Ô xm L ã P0 + t1 x1 + t2 x2 + ! + tm xm

P ã HP0 Ô x1 Ô x2 Ô ! Ô xm L ê Hx1 Ô x2 Ô ! Ô xm L      4.5
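The defining property of the quotient points can be verified numerically. Below is a minimal sketch (Python, independent of the GrassmannAlgebra package; the sample coordinates are ours): every point of the form P0 + t1 x1 + t2 x2 reproduces the same product P Ô x1 Ô x2, which is exactly what the exterior quotient expresses.

```python
from itertools import combinations

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([r[:j] + r[j+1:] for r in m[1:]])
               for j in range(len(m)))

def wedge(*vs):
    # exterior product via determinants of coordinate columns
    k, n = len(vs), len(vs[0])
    return {ix: det([[v[i] for i in ix] for v in vs])
            for ix in combinations(range(n), k)}

P0 = (1, 4, 5, 6)               # a point (w-coordinate 1), basis (w, e1, e2, e3)
x1 = (0, 1, 2, 0)               # two independent vectors
x2 = (0, 0, 1, 3)
target = wedge(P0, x1, x2)      # the bound bivector P0 ^ x1 ^ x2
for t1, t2 in [(0, 0), (1, -2), (7, 3)]:
    P = tuple(p + t1*u + t2*v for p, u, v in zip(P0, x1, x2))
    assert wedge(P, x1, x2) == target   # every quotient point reproduces it
print("all quotient points give the same product")
```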

Computing exterior quotients


In Chapter 2: The Exterior Product, we introduced the GrassmannAlgebra function
ExteriorQuotient. In what follows we use ExteriorQuotient to compute the expressions
for various m-planes.

Note that all the 1-elements in the numerator must be declared vector symbols, so we need to
declare any points we use.

¯A; ¯"3 ; DeclareExtraVectorSymbols@PD;

Ï Points

The exterior quotient of a weighted point by its weight gives (as would be expected), the (pure)
point.

ExteriorQuotientBHP Ô aL ê aF
P

Ï Points in a line

The exterior quotient of a bound vector with its vector gives all the points in the line defined by the
bound vector, parametrized by the scalar ¯t1 .

ExteriorQuotientBHP Ô xL ê xF
P + x ¯t1

We can see that any other point used to define the bound vector gives effectively the same
parametrization.


ExteriorQuotientBHHP + a xL Ô xL ê xF
P + a x + x ¯t1

Ï Points in a plane

The exterior quotient of a bound simple bivector with its simple bivector gives all the points in the
plane defined by the bound simple bivector, parametrized by the scalars ¯t1 and ¯t2 .

ExteriorQuotientBHP Ô x Ô yL ê Hx Ô yLF
P + x ¯t1 + y ¯t2

Ï Points in a 3-plane

The exterior quotient of a bound simple trivector with its simple trivector gives all the points in the
3-plane defined by the bound simple trivector, parametrized by the scalars ¯t1 , ¯t2 and ¯t3 .

ExteriorQuotientBHP Ô x Ô y Ô zL ê Hx Ô y Ô zLF
P + x ¯t1 + y ¯t2 + z ¯t3

Ï Lines in a plane

The exterior quotient of a bound simple bivector with a vector in it gives a set of bound vectors.
These bound vectors define lines in the plane of the bound simple bivector, parametrized by the
scalars ¯t1 and ¯t2 . They are characterized by the fact that their exterior product with the vector
gives the original bound simple bivector.

ExteriorQuotientBHP Ô x Ô yL ê yF
HP + y ¯t1 L Ô Hx + y ¯t2 L

Ï Lines in a 3-plane

The exterior quotient of a bound simple trivector with a bivector in it gives a set of bound vectors.
These bound vectors define lines in the 3-plane of the bound simple trivector, parametrized by the
scalars ¯t1 , ¯t2 , ¯t3 and ¯t4 . They are characterized by the fact that their exterior product with
the bivector gives the original bound simple trivector.


ExteriorQuotientBHP Ô x Ô y Ô zL ê Hy Ô zLF
HP + y ¯t1 + z ¯t2 L Ô Hx + y ¯t3 + z ¯t4 L

Ï Planes in a 3-plane

The exterior quotient of a bound simple trivector with a vector in it gives a set of bound bivectors.
These bound bivectors define planes in the 3-plane of the bound simple trivector, parametrized by
the scalars ¯t1 , ¯t2 and ¯t3 . They are characterized by the fact that their exterior product with
the vector gives the original bound simple trivector.

ExteriorQuotientBHP Ô x Ô y Ô zL ê zF
HP + z ¯t1 L Ô Hx + z ¯t2 L Ô Hy + z ¯t3 L

The m-vector of a bound m-vector


† We can define an operator $ which takes a simple (m+1)-element of the form
P0 Ô P1 Ô P2 Ô ! Ô Pm and converts it to an m-element of the form
HP1 -P0 LÔHP2 -P0 LÔ!ÔHPm -P0 L.

The interesting property of this operation is that when it is applied twice, the result is zero.
Operationally, $2 ã0. For example:

$HPL ã 1

$HP0 Ô P1 L ã P1 - P0
$HP1 - P0 L ã 1 - 1 ã 0

$HP0 Ô P1 Ô P2 L ã P1 Ô P2 + P2 Ô P0 + P0 Ô P1
$HP1 Ô P2 + P2 Ô P0 + P0 Ô P1 L ã P2 - P1 + P0 - P2 + P1 - P0 ã 0

Remember that P1 Ô P2 + P2 Ô P0 + P0 Ô P1 is simple since it may be expressed as

HP1 - P0 L Ô HP2 - P0 L ã HP2 - P1 L Ô HP0 - P1 L ã HP0 - P2 L Ô HP1 - P2 L

This property of nilpotence is shared by the boundary operator of algebraic topology and the
exterior derivative. Furthermore, if a product with a given 1-element is considered an operation,
then the exterior, regressive and interior products are all likewise nilpotent.
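The nilpotence of $ can be modelled with a small formal computation. The sketch below (Python; the representation of products of points as ordered tuples of labels, and the helper names, are our own device, not the book's package) implements $ on sums of exterior products of points via the alternating "drop one factor" rule. It reproduces the expansions above, and confirms that applying the operator twice gives zero.

```python
def normalize(t):
    # sort point labels, tracking the sign of the permutation (antisymmetry)
    t, sign = list(t), 1
    for i in range(len(t)):
        for j in range(len(t) - 1 - i):
            if t[j] > t[j + 1]:
                t[j], t[j + 1] = t[j + 1], t[j]
                sign = -sign
    return tuple(t), sign

def boundary(chain):
    # chain: dict mapping tuples of point labels (exterior products) to coefficients
    out = {}
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):        # drop the i-th point, alternating sign
            face, sign = normalize(simplex[:i] + simplex[i + 1:])
            out[face] = out.get(face, 0) + (-1)**i * sign * coeff
    return {k: v for k, v in out.items() if v != 0}

w = {("P0", "P1", "P2"): 1}
print(boundary(w))            # {('P1','P2'): 1, ('P0','P2'): -1, ('P0','P1'): 1}
print(boundary(boundary(w)))  # {} -- applying the operator twice gives zero
```

Note that the first result equals the book's expansion P1 Ô P2 + P2 Ô P0 + P0 Ô P1, since P2 Ô P0 ã -P0 Ô P2.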

† In Chapter 6 we will show that under the hybrid metric we will adopt for bound vector spaces
(in which the origin is orthogonal to all vectors), taking the interior product of any bound
element with the origin ¯" has the same effect as the $ operator.

Applied to a bound m-vector, this operator generates the m-vector, thus creating a 'free' entity from
a bound one. Applied a second time to the result gives zero, since the m-vector is already free.


† Another way of generating the m-vector from a bound m-vector is to compose the first co-span
of the bound m-vector, and then sum the elements.

Suppose we have a bound trivector W in a bound m-space (m ¥ 3)

W = P1 Ô P2 Ô P3 Ô P0 ;

Declare the Pi as 'vector' symbols (that is, as 1-elements), and sum the elements of its first co-span.

¯"5 ; ¯¯V@P_ D; V1 = ¯SA¯#1 @WDE

P0 Ô P1 Ô P2 - P0 Ô P1 Ô P3 + P0 Ô P2 Ô P3 - P1 Ô P2 Ô P3

We can see this has the simple form we expected by factorizing it.

V2 = ExteriorFactorize@V1 D
HP0 - P3 L Ô HP1 - P3 L Ô HP2 - P3 L

Applying the summed first cospan operation to each of these terms again gives zero as do the other
operators.

¯SA¯#1 @P0 Ô P1 Ô P2 DE - ¯SA¯#1 @P0 Ô P1 Ô P3 DE +
¯SA¯#1 @P0 Ô P2 Ô P3 DE - ¯SA¯#1 @P1 Ô P2 Ô P3 DE
0

$Ñ ó Ñ û ¯! ó ¯SA¯"1 @ÑDE 4.6

4.7 Line Coordinates


We have already seen that lines are defined by bound vectors independent of the dimension of the
space. We now look at the types of coordinate descriptions we can use to define lines in bound
spaces (multiplanes) of various dimensions.

For simplicity of exposition we refer to a bound vector as 'a line', rather than as 'defining a line',
and to a bound bivector as 'a plane', rather than as 'defining a plane'.

Lines in a plane
To explore lines in a plane, we first declare the basis of the plane: ¯"2 .

¯"2
8¯#, e1 , e2 <

A line in a plane can be written in several forms. The most intuitive form perhaps is as a product of
two points ¯" + x and ¯" + y where x and y are position vectors.


L ª H¯" + xL Ô H¯" + yL


A line defined by the product of two points

We can automatically generate a basis form for each of the points by using the GrassmannAlgebra
ComposePoint function, or its alias form ¯"Ñ .

L ª ¯'x Ô ¯'y

L ª H¯# + e1 x1 + e2 x2 L Ô H¯# + e1 y1 + e2 y2 L

Or, we can express the line as the product of any point in it and a vector parallel to it. For example:

L ª H¯" + xL Ô Hy - xL ª H¯" + yL Ô Hy - xL


A line defined by the product of a point and a vector

If we want to compose a line element in this form, we can also use ComposeLineElement or its
alias ¯%Ñ . By leaving the placeholder unfilled we get a template for entering our own scalar
coefficients.

¯)Ñ
H¯# + Ñ e1 + Ñ e2 L Ô HÑ e1 + Ñ e2 L

Alternatively, we can express L without specific reference to points in it. For example:

L ª ¯" Ô Ha e1 + b e2 L + c e1 Ô e2

The first term ¯" Ô Ha e1 + b e2 L is a vector bound through the origin, and hence defines a line
through the origin. The second term c e1 Ô e2 is a bivector whose addition represents a shift in the
line parallel to itself, away from the origin.

Components of a line in the plane expressed in basis form.

We know that this can indeed represent a line since we can factorize it into any of the forms:

† A line of gradient bêa through the point with coordinate cêb on the ¯"Ôe1 axis.

L ª H¯" + Hc ê bL e1 L Ô Ha e1 + b e2 L

A line in the plane expressed through a point on the ¯!Ôe1 axis.

† A line of gradient bêa through the point with coordinate -cêa on the ¯"Ôe2 axis.

L ª H¯" - Hc ê aL e2 L Ô Ha e1 + b e2 L

A line in the plane expressed through a point on the ¯!Ôe2 axis.

† Or, a line through both points.

L ª Hab ê cL H¯" - Hc ê aL e2 L Ô H¯" + Hc ê bL e1 L

Of course the scalar factor abêc is inessential so we can just as well say:

L ª H¯" - Hc ê aL e2 L Ô H¯" + Hc ê bL e1 L

A line in the plane expressed through points on both coordinate axes.


Ï Information required to express a line in a plane

The expression of the line above in terms of a pair of points requires the four coordinates of the
points. Expressed without specific reference to points, we seem to need three parameters.
However, the last expression shows, as expected, that it is really only two parameters that are
necessary (viz y = m x + c).

Lines in a 3-plane
Our normal notion of three-dimensional space corresponds to a 3-plane. Lines in a 3-plane have the
same form when expressed in coordinate-free notation as they do in a plane (2-plane). Remember
that a 3-plane is a bound vector 3-space whose basis may be chosen as 3 independent vectors and a
point, or equivalently as 4 independent points. For example, we can still express a line in a 3-plane
in any of the following equivalent forms.

L ª H¯" + xL Ô H¯" + yL
L ª H¯" + xL Ô Hy - xL
L ª H¯" + yL Ô Hy - xL
L ª ¯" Ô Hy - xL + x Ô y

Here, x and y are independent vectors in the 3-plane.

The coordinate form however will appear somewhat different to that in the 2-plane case. To
explore this, we redeclare the basis with ¯"3 , and use ComposePoint in its alias form to define a
line as the product of 2 points.

¯"3
8¯#, e1 , e2 , e3 <

L = ¯'x Ô ¯'y

H¯# + e1 x1 + e2 x2 + e3 x3 L Ô H¯# + e1 y1 + e2 y2 + e3 y3 L

We can expand and simplify this product with GrassmannExpandAndSimplify or its alias ¯%.

L = ¯%@LD
¯# Ô He1 H-x1 + y1 L + e2 H-x2 + y2 L + e3 H-x3 + y3 LL +
H-x2 y1 + x1 y2 L e1 Ô e2 + H-x3 y1 + x1 y3 L e1 Ô e3 + H-x3 y2 + x2 y3 L e2 Ô e3

The scalar coefficients in this expression are sometimes called the Plücker coordinates of the line.

Alternatively, we can express L in terms of basis elements, but without specific reference to points
or vectors in it. For example:

L = ¯" Ô Ha e1 + b e2 + c e3 L + d e1 Ô e2 + e e2 Ô e3 + f e1 Ô e3 ;

The first term ¯" Ô Ha e1 + b e2 + c e3 L is a vector bound through the origin, and hence defines
a line through the origin. The second term d e1 Ô e2 + e e2 Ô e3 + f e1 Ô e3 is a bivector whose
addition represents a shift in the line parallel to itself, away from the origin. In order to effect this
shift, however, it is necessary that the bivector contain the vector Ha e1 + b e2 + c e3 L. Hence
there will be some constraint on the coefficients d, e, and f. To determine this we only need to
determine the condition that the exterior product of the vector and the bivector is zero.

¯%@Ha e1 + b e2 + c e3 L Ô Hd e1 Ô e2 + e e2 Ô e3 + f e1 Ô e3 LD ã 0
Hc d + a e - b fL e1 Ô e2 Ô e3 ã 0

Alternatively, this constraint amongst the coefficients could have been obtained by noting that in
order to be a line, L must be simple, hence the exterior product with itself must be zero.

¯%@L Ô LD ã 0
H2 c d + 2 a e - 2 b fL ¯# Ô e1 Ô e2 Ô e3 ã 0

Thus the constraint that the coefficients must obey in order for a general bound vector of the form
L to be a line in a 3-plane is that:

cd+ae-bf ã0
This constraint is sometimes referred to as the Plücker identity.
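The Plücker identity can be confirmed numerically for any pair of points. The sketch below (Python, independent of the GrassmannAlgebra package; coordinates and helper names are ours) forms the exterior product of two random points in basis 8¯", e1 , e2 , e3 <, reads off the six Plücker coordinates, and checks that c d + a e - b f vanishes.

```python
import random
from itertools import combinations

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([r[:j] + r[j+1:] for r in m[1:]])
               for j in range(len(m)))

def wedge(*vs):
    # exterior product via determinants of coordinate columns
    k, n = len(vs), len(vs[0])
    return {ix: det([[v[i] for i in ix] for v in vs])
            for ix in combinations(range(n), k)}

# two random points of the 3-plane, coordinates (w, e1, e2, e3)
P = (1,) + tuple(random.randint(-9, 9) for _ in range(3))
Q = (1,) + tuple(random.randint(-9, 9) for _ in range(3))
L = wedge(P, Q)
a, b, c = L[(0, 1)], L[(0, 2)], L[(0, 3)]   # coefficients of w^e1, w^e2, w^e3
d, f, e = L[(1, 2)], L[(1, 3)], L[(2, 3)]   # coefficients of e1^e2, e1^e3, e2^e3
print(c*d + a*e - b*f)   # 0 -- the Pluecker identity holds for any two points
```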

Given this constraint, and supposing neither a, b nor c to be zero, we can factorize the line into
any of the following forms:

L ã H¯" + Hf ê cL e1 + He ê cL e2 L Ô Ha e1 + b e2 + c e3 L
L ã H¯" - Hd ê aL e2 - Hf ê aL e3 L Ô Ha e1 + b e2 + c e3 L
L ã H¯" + Hd ê bL e1 - He ê bL e3 L Ô Ha e1 + b e2 + c e3 L
Each of these forms represents a line in the direction of a e1 + b e2 + c e3 and intersecting a
coordinate plane. For example, the first form intersects the ¯" Ô e1 Ô e2 coordinate plane in the
point ¯" + Hf ê cL e1 + He ê cL e2 with coordinates Hf ê c, e ê c, 0L.

The most compact form, in terms of the number of scalar parameters used, is when L is expressed
as the product of two points, each of which lies in a coordinate plane.

L = Hac ê fL H¯" - Hd ê aL e2 - Hf ê aL e3 L Ô H¯" + Hf ê cL e1 + He ê cL e2 L;
We can verify that this formulation gives us the original form of the line by expanding and
simplifying the product and substituting the constraint relation previously obtained.

¯%@Expand@¯#@¯%@LD ê. Hc d + a e Ø f bLDDD
¯# Ô Ha e1 + b e2 + c e3 L + d e1 Ô e2 + f e1 Ô e3 + e e2 Ô e3

Similar expressions may be obtained for L in terms of points lying in the other coordinate planes.
To summarize, there are three possibilities in a 3-plane, corresponding to the number of different
pairs of coordinate 2-planes in the 3-plane.


L ª H¯! + x1 e1 + x2 e2 L Ô H¯! + y2 e2 + y3 e3 L ª P Ô Q
L ª H¯! + x1 e1 + x2 e2 L Ô H¯! + z1 e1 + z3 e3 L ª P Ô R 4.7
L ª H¯! + y2 e2 + y3 e3 L Ô H¯! + z1 e1 + z3 e3 L ª Q Ô R

A line in a 3-plane through points in the coordinate planes.

Ï Information required to express a line in a 3-plane

As with a line in a 2-plane, we find that a line in a 3-plane is expressed with the minimum number
of parameters by expressing it as the product of two points, each in one of the coordinate planes. In
this form, there are just 4 independent scalar parameters (coordinates) required to express the line.

Ï Determining the point of intersection with the third coordinate plane

Suppose we have a line expressed as the exterior product of points in two coordinate planes, say
¯" Ô e1 Ô e2 and ¯" Ô e2 Ô e3 .

L = H¯" + x1 e1 + x2 e2 L Ô H¯" + y2 e2 + y3 e3 L;

We can always find the point on the line in the third coordinate plane ¯" Ô e3 Ô e1 by computing
the intersection of the line and the third plane.


To use GrassmannAlgebra for this, we first need to declare the space, and the scalar symbols.

¯"3 ; DeclareExtraScalarSymbols@8x1 , x2 , y2 , y3 <D;

The intersection of the line and the third coordinate plane is given by finding their common factor.

F = ToCommonFactor@L Ó H¯" Ô e1 Ô e3 LD
¯c H¯# Hx2 - y2 L - e1 x1 y2 + e3 x2 y3 L

The point we need is then

R = ToPointForm@FD
¯# - He1 x1 y2 L ê Hx2 - y2 L + He3 x2 y3 L ê Hx2 - y2 L

We can confirm that this point is both on the line and on the coordinate plane by simplifying their
exterior products.

8¯$@¯%@R Ô LDD, ¯%@R Ô ¯" Ô e1 Ô e3 D<
80, 0<
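The intersection point can also be verified with a small independent computation. The sketch below (Python, not the GrassmannAlgebra package; the sample values and helper names are ours) builds the claimed point R from the formula above using exact rational arithmetic, and confirms that R lies on the line P Ô Q (their product vanishes) and in the ¯" Ô e1 Ô e3 plane (its e2 coordinate is zero).

```python
from fractions import Fraction as F
from itertools import combinations

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([r[:j] + r[j+1:] for r in m[1:]])
               for j in range(len(m)))

def wedge(*vs):
    # exterior product via determinants of coordinate columns
    k, n = len(vs), len(vs[0])
    return {ix: det([[v[i] for i in ix] for v in vs])
            for ix in combinations(range(n), k)}

x1, x2, y2, y3 = F(2), F(3), F(5), F(7)      # sample coordinates
P = (F(1), x1, x2, F(0))     # point in the w-e1-e2 coordinate plane
Q = (F(1), F(0), y2, y3)     # point in the w-e2-e3 coordinate plane
# the claimed intersection with the w-e1-e3 coordinate plane:
R = (F(1), -x1*y2/(x2 - y2), F(0), x2*y3/(x2 - y2))
# R is on the line (R ^ P ^ Q == 0) and its e2 coordinate is zero
print(all(c == 0 for c in wedge(R, P, Q).values()), R[2] == 0)   # True True
```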

Lines in a 4-plane
Lines in a 4-plane have the same form when expressed in coordinate-free notation as they do in
any multiplane.

To obtain the Plücker coordinates of a line in a 4-plane, express the line as the exterior product of
two points and multiply it out. The resulting coefficients of the basis elements are the Plücker
coordinates of the line.

Additionally, from the results above, we can expect that a line in a 4-plane may be expressed with
the least number of scalar parameters as the exterior product of two points, each point lying in one
of the coordinate 3-planes. For example, the expression for the line as the product of the points in
the coordinate 3-planes ¯"Ôe1 Ôe2 Ôe3 and ¯"Ôe2 Ôe3 Ôe4 is

L = H¯" + x1 e1 + x2 e2 + x3 e3 L Ô H¯" + y2 e2 + y3 e3 + y4 e4 L

Lines in an m-plane
The formulae below summarize some of the expressions for defining a line, valid in a multiplane
of any dimension.

Coordinate-free expressions may take any of a number of forms. For example:

L ª H¯! + xL Ô H¯! + yL
L ª H¯! + xL Ô Hy - xL
L ª H¯! + yL Ô Hy - xL
L ª ¯! Ô Hy - xL + x Ô y
4.8

A line can be expressed in terms of the 2m coordinates of any two points on it.

L ª H¯! + x1 e1 + x2 e2 + ! + xm em L Ô
4.9
H¯! + y1 e1 + y2 e2 + ! + ym em L

When multiplied out, the expression for the line takes a form explicitly displaying the Plücker
coordinates of the line.

L ª ¯! Ô HHy1 - x1 L e1 + Hy2 - x2 L e2 + ! + Hym - xm L em L
+ Hx1 y2 - x2 y1 L e1 Ô e2 + Hx1 y3 - x3 y1 L e1 Ô e3 +
Hx1 y4 - x4 y1 L e1 Ô e4 + ! + Hxm-1 ym - xm ym-1 L em-1 Ô em      4.10

Alternatively, a line in an m-plane ¯"Ôe1 Ôe2 Ô!Ôem can be expressed in terms of its intersections
with two of its coordinate (m|1)-planes, ¯" Ô e1 Ô ! Ô ·i Ô ! Ô em and ¯"Ôe1 Ô!Ô·j Ô!Ôem
say. The notation ·i means that the ith element or term is missing.

L ª H¯! + x1 e1 + ! + ·i + ! + xm em L Ô
4.11
H¯! + y1 e1 + ! + ·j + ! + ym em L

This formulation indicates that a line in m-space has at most 2(m|1) independent parameters
required to describe it.

It also implies that in the special case when the line lies in one of the coordinate (m|1)-spaces, it
can be even more economically expressed as the product of two points, each lying in one of the
coordinate (m|2)-spaces contained in the (m|1)-space. And so on.

4.8 Plane Coordinates


We have already seen that planes are defined by simple bound bivectors independent of the
dimension of the space. We now look at the types of coordinate descriptions we can use to define
planes in bound spaces (multiplanes) of various dimensions.

Planes in a 3-plane
A plane P in a 3-plane can be written in several forms. The most intuitive form perhaps is as a
product of three non-collinear points P, Q, and R.


P ã ¯" + x
Q ã ¯" + y
R ã ¯" + z

P ª P Ô Q Ô R = H¯" + xL Ô H¯" + yL Ô H¯" + zL

Here, x, y and z are the position vectors of P, Q, and R.

A 2-plane in a 3-plane through three points.

Or, we can express it as the product of any two different points in it and a vector parallel to it (but
not in the direction of the line joining the two points). For example:


P ª H¯" + xL Ô H¯" + yL Ô Hz - yL

A 2-plane in a 3-plane through two points and a vector.

Or, we can express it as the product of any point in it and any two independent vectors parallel to
it. For example:

P ª H¯" + xL Ô Hy - xL Ô Hz - yL

A 2-plane in a 3-plane through one point and two vectors.

Or, we can express it as the product of any point in it and any line in it not through the point. For
example:

P ª H¯" + xL Ô L

A 2-plane in a 3-plane through a point and a line.

Or, we can express it as the product of any line in it and any vector parallel to it (but not parallel to
the line). For example:

P ª Hy - xL Ô L

A 2-plane in a 3-plane through a vector and a line.

Given a basis, we can always express the plane in terms of the coordinates of the points or vectors
in the expressions above. However the form which requires the least number of coordinates is that
which expresses the plane as the exterior product of its three points of intersection with the
coordinate axes.

P ª H¯" + a e1 L Ô H¯" + b e2 L Ô H¯" + c e3 L

A 2-plane in a 3-plane through three points on the coordinate axes.

If the plane is parallel to one of the coordinate axes, say ¯" Ô e3 , it may be expressed as:

P ª H¯" + a e1 L Ô H¯" + b e2 L Ô e3

Whereas, if it is parallel to two of the coordinate axes, say ¯" Ô e2 and ¯" Ô e3 , it may be
expressed as:

P ª H¯" + a e1 L Ô e2 Ô e3

If we wish to express a plane as the exterior product of its intersection points with the coordinate
axes, we first determine its points of intersection with the axes and then take the exterior product of
the resulting points. This leads to the following identity:

P ª HP Ó H¯" Ô e1 LL Ô HP Ó H¯" Ô e2 LL Ô HP Ó H¯" Ô e3 LL

Ï Ÿ Example: To express a plane in terms of its intersections with the coordinate axes

Suppose we have a plane in a 3-plane defined by three points.

¯"3 ; P = H¯" + e1 + 2 e2 + 5 e3 L Ô H¯" - e1 + 9 e2 L Ô H¯" - 7 e1 + 6 e2 + 4 e3 L;

To express this plane in terms of its intersections with the coordinate axes we calculate the
intersection points with the axes.


ToCommonFactor@P Ó H¯" Ô e1 LD
¯c H13 ¯# + 329 e1 L

ToCommonFactor@P Ó H¯" Ô e2 LD
¯c H38 ¯# + 329 e2 L

ToCommonFactor@P Ó H¯" Ô e3 LD
¯c H48 ¯# + 329 e3 L

We then take the product of these points (ignoring the weights) to form the plane.

P ª H¯" + H329 ê 13L e1 L Ô H¯" + H329 ê 38L e2 L Ô H¯" + H329 ê 48L e3 L
To verify that this is indeed the same plane, we can check to see if these points are in the original
plane. For example:

¯%BP Ô H¯" + H329 ê 13L e1 LF
0
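This example can be reproduced independently. The sketch below (Python, not the GrassmannAlgebra package; the point coordinates are taken from the example above) finds where the plane through the three points meets each coordinate axis: the point ¯" + t ei lies in the plane exactly when the 4 µ 4 determinant of its coordinates with those of the three points vanishes, and since that determinant is linear in t, two samples determine the intersection. The results agree with the common-factor computations (329ê13, 329ê38, 329ê48).

```python
from fractions import Fraction as F

def det(m):
    # Laplace expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([r[:j] + r[j+1:] for r in m[1:]])
               for j in range(len(m)))

# the three points defining the plane, coordinates (w, e1, e2, e3)
P1 = [1, 1, 2, 5]; P2 = [1, -1, 9, 0]; P3 = [1, -7, 6, 4]

def axis_intersection(axis):
    # the point w + t e_axis lies in the plane iff det[pt, P1, P2, P3] == 0;
    # the determinant is linear in t, so two samples determine the root
    def d(t):
        pt = [1, 0, 0, 0]; pt[axis] = t
        return det([pt, P1, P2, P3])
    d0, d1 = F(d(0)), F(d(1))
    return -d0 / (d1 - d0)

print([axis_intersection(i) for i in (1, 2, 3)])
# [Fraction(329, 13), Fraction(329, 38), Fraction(329, 48)]
```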

Planes in a 4-plane
From the results above, we can expect that a plane in a 4-plane is most economically expressed as
the product of three points, each point lying in one of the coordinate 2-planes. For example:

P ª H¯" + x1 e1 + x2 e2 L Ô H¯" + y2 e2 + y3 e3 L Ô H¯" + z3 e3 + z4 e4 L

(This is analogous to expressing a line in a 3-plane as the product of two points, each point lying in
one of the coordinate 2-planes.)

If a plane is expressed in any other form, we can express it in the form above by first determining
its points of intersection with the coordinate planes and then taking the exterior product of the
resulting points. This leads to the following identity:

P ª HP Ó H¯" Ô e1 Ô e2 LL Ô HP Ó H¯" Ô e2 Ô e3 LL Ô HP Ó H¯" Ô e3 Ô e4 LL

Planes in an m-plane
A plane in an m-plane is most economically expressed as the product of three points, each point
lying in one of the coordinate (m|2)-planes.


P ª J¯" + x1 e1 + ! + xi1 ei1 + ! + xi2 ei2 + ! + xm em N Ô

J¯" + y1 e1 + ! + yi3 ei3 + ! + yi4 ei4 + ! + ym em N Ô

J¯" + z1 e1 + ! + zi5 ei5 + ! + zi6 ei6 + ! + zm em N

Here the notation ™ xi ei means that the term is missing from the sum.

This formulation indicates that a plane in an m-plane has at most 3(m|2) independent scalar
parameters required to describe it.

The coordinates of geometric entities


In the sections above we have discussed some common geometric entities, and various ways in
which they can be represented by coordinates in a given basis system. In this section we make
some general observations about the coordinates of geometric entities from a Grassmannian point
of view.

We can define a geometric entity as an element of an exterior product space which has a specific
geometric interpretation. Examples we have already met include points, lines, planes, m-planes,
vectors, bivectors, and m-vectors.

These entities are characterized by the fact that they are simple. That is, they may be factorized into
a product of 1-elements. The 1-element factors are, of course, not unique. However, this is not a
shortcoming. Indeed, the power of such a representation for computations owes much to this non-
uniqueness. For example, if we know two lines intersect in a point P, we can represent them
immediately and expressly as products involving P (say, PÔQ and PÔR).

If an element is not simple, then it is not a candidate for geometric entities of this sort. (We will
look at geometric entities represented by non-simple elements in Chapter 7: Exploring Screw
Algebra.) Apart from 0, 1, n-1 and n-elements, an m-element of an n-space is not necessarily
simple. Thus in the plane (a three-dimensional linear space), n is equal to 3, and all entities
(scalars, points, lines and planes) are simple. In space, n is equal to 4, and hence 2-elements are the
only ones that are potentially not simple. This means that only a subset of the 2-elements of space
(the simple 2-elements) can represent lines.

These comments extend to a space of any larger number of dimensions. In particular, (n-1)-
elements in a space of n dimensions are simple. Hence any (n-1)-element represents a hyperplane.
For other elements (apart from the trivial 0 and n-elements), only a subset of them can represent a
geometric entity. This subset may be defined by a set of constraints between the coefficients of the
element which ensures that it be simple.

If we are constructing a geometric entity as the exterior product of known independent points, or as
the regressive product of hyperplanes, then we are assured that the result will be simple. In the
examples of the previous section, we have seen several cases of this approach. In what follows, we
will explore the results that a consideration of simplicity produces.


Ï Lines in space

Consider a general 2-element in a point 3-space.

¯"3 ; A = ComposeMElement@2, aD
a1 ¯# Ô e1 + a2 ¯# Ô e2 + a3 ¯# Ô e3 + a4 e1 Ô e2 + a5 e1 Ô e3 + a6 e2 Ô e3

We can attempt to factorize this using ExteriorFactorize.

A1 = ExteriorFactorize@AD

If[{a3 a4 - a2 a5 + a1 a6 ã 0},
  a1 (¯# - (a4 e2)/a1 - (a5 e3)/a1) Ô (e1 + (a2 e2)/a1 + (a3 e3)/a1)]

This says that the given factorization is possible if and only if the condition
a3 a4 - a2 a5 + a1 a6 ã 0 on the coefficients of A is met.

There is a straightforward way of testing the simplicity of 2-elements not shared by elements of
higher grade. A 2-element is simple if and only if its exterior square is zero. (The exterior square of
A is A Ô A.) In our case, we can write A as A ã ¯"Ôb + B, where b is a vector, and B is a bivector.

[Since elements of odd grade anti-commute, the exterior square of odd elements will be zero, even
if they are not simple. An even element of grade 4 or higher may be of the form of the exterior
product of a 1-element with a non-simple 3-element: whence its exterior square is zero without its
being simple.]

In the case of a 2-element A ã ¯"Ôb + B in a point space then, we require

A Ô A ã H¯" Ô b + BL Ô H¯" Ô b + BL ã 2 ¯" Ô b Ô B ã 0

Thus the requirement reduces simply to bÔB ã 0. In the case of a point 3-space

b = a1 e1 + a2 e2 + a3 e3 ;
B = a4 e1 Ô e2 + a5 e1 Ô e3 + a6 e2 Ô e3 ;

Simplifying bÔB gives

¯%@b Ô BD
Ha3 a4 - a2 a5 + a1 a6 L e1 Ô e2 Ô e3

Hence by these means we arrive at the same condition we obtained through


ExteriorFactorize.

At this point we should note that these results have not involved any metric concepts. The simple
requirement that bÔB be zero, has led to the required constraint.
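As a numeric cross-check, independent of the GrassmannAlgebra package, a plain Python sketch (the zero-based coordinate tuples and helper names below are illustrative assumptions, not the book's code) verifies that any 2-element composed as the product of two points satisfies this condition, while a generic 2-element does not:

```python
def line_coeffs(p, q):
    """Coefficients of A = (origin + p) ^ (origin + q) in a point 3-space.

    Returns a1..a3 (the origin^ei terms, i.e. the vector q - p) followed
    by a4, a5, a6 (the e1^e2, e1^e3, e2^e3 terms, i.e. the bivector p ^ q).
    """
    a1, a2, a3 = (q[i] - p[i] for i in range(3))
    a4 = p[0]*q[1] - p[1]*q[0]   # e1 ^ e2
    a5 = p[0]*q[2] - p[2]*q[0]   # e1 ^ e3
    a6 = p[1]*q[2] - p[2]*q[1]   # e2 ^ e3
    return a1, a2, a3, a4, a5, a6

def simplicity_condition(a1, a2, a3, a4, a5, a6):
    # coefficient of e1^e2^e3 in b ^ B; it must vanish for A to be simple
    return a3*a4 - a2*a5 + a1*a6

# A product of two points is always simple, so the condition vanishes:
assert simplicity_condition(*line_coeffs((2, -1, 5), (7, 3, 4))) == 0

# whereas a generic 2-element need not be simple:
assert simplicity_condition(1, 0, 0, 0, 0, 1) != 0
```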

On the other hand, if we require from the start that bÔB ã 0, that is, that B contains the factor b, then
we can write B ã aÔb, and so write A immediately as the product of a point and a vector.


A ã ¯" Ô b + a Ô b ã H¯" + aL Ô b

The vector a may be visualized as the position vector of a point on the line, and the vector b as a
vector in the direction of the line. The coefficients of aÔb are the same as the coefficients of the
cross product a!b. The cross product is the complement of the exterior product, and is orthogonal
to it. (The cross product and orthogonality are metric concepts which will be introduced in the next
chapter.)

The (Plücker) coordinates of a line in space are often given by the coefficients of the two vectors a
and a!b. Composing a and b in basis form, and then calculating their exterior product displays the
coefficients of a!b.

8a = ¯&a , b = ¯&b , ¯%@a Ô bD<


8a1 e1 + a2 e2 + a3 e3 , b1 e1 + b2 e2 + b3 e3 ,
H-a2 b1 + a1 b2 L e1 Ô e2 + H-a3 b1 + a1 b3 L e1 Ô e3 + H-a3 b2 + a2 b3 L e2 Ô e3 <

Then with the mapping

8e1 Ô e2 , e1 Ô e3 , e2 Ô e3 < Ø 8e3 , -e2 , e1 <

the Plücker line coordinates are given by

8a1 , a2 , a3 , Ha2 b3 - a3 b2 L, Ha3 b1 - a1 b3 L, Ha1 b2 - a2 b1 L<

And as we would expect, the first three components as a vector are orthogonal to the second three.

8a1 , a2 , a3 <.8Ha2 b3 - a3 b2 L, Ha3 b1 - a1 b3 L, Ha1 b2 - a2 b1 L< êê Expand


0
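A quick numeric confirmation of this orthogonality in plain Python (an illustrative sketch by way of cross-check, not part of the book's code):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

a = (3, -2, 6)
b = (1, 4, -5)
# The first three Pluecker coordinates (the direction a) are orthogonal
# to the last three (the moment a x b):
assert dot(a, cross(a, b)) == 0
```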

In sum: Despite the common practice of developing the Plücker coordinates in terms of dot and
cross products, we can see from the Grassmannian approach with which we began that it is neither
necessary nor particularly illuminating. The coordinates of a line do not need to rely on metric
notions. And if the geometry is projective, they perhaps should not.

Ï Lines in 4-space

Now consider a general 2-element in a point 4-space.

¯"4 ; A = ComposeMElement@2, aD
a1 ¯# Ô e1 + a2 ¯# Ô e2 + a3 ¯# Ô e3 + a4 ¯# Ô e4 + a5 e1 Ô e2 +
a6 e1 Ô e3 + a7 e1 Ô e4 + a8 e2 Ô e3 + a9 e2 Ô e4 + a10 e3 Ô e4

Factorizing gives


A1 = ExteriorFactorize@AD

If[{a3 a5 - a2 a6 + a1 a8 ã 0, a4 a5 - a2 a7 + a1 a9 ã 0, a4 a6 - a3 a7 + a1 a10 ã 0},
  a1 (¯# - (a5 e2)/a1 - (a6 e3)/a1 - (a7 e4)/a1) Ô (e1 + (a2 e2)/a1 + (a3 e3)/a1 + (a4 e4)/a1)]

On the other hand, adopting the method of the previous section, we can write

b = a1 e1 + a2 e2 + a3 e3 + a4 e4 ;
B = a5 e1 Ô e2 + a6 e1 Ô e3 + a7 e1 Ô e4 + a8 e2 Ô e3 + a9 e2 Ô e4 + a10 e3 Ô e4 ;

Simplifying bÔB gives

¯%@b Ô BD
Ha3 a5 - a2 a6 + a1 a8 L e1 Ô e2 Ô e3 + Ha4 a5 - a2 a7 + a1 a9 L e1 Ô e2 Ô e4 +
Ha4 a6 - a3 a7 + a1 a10 L e1 Ô e3 Ô e4 + Ha4 a8 - a3 a9 + a2 a10 L e2 Ô e3 Ô e4

Again, we arrive at the same conditions, except that with this approach we have one extra
condition. However, it is easy to show that the number of independent conditions is still only three.
Because this product will always be a trivector, in a space of n vector dimensions it will always
have c = n(n-1)(n-2)/6 components, and hence spawn c conditions.
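We can check this numerically in plain Python for the 4-space case: when the bivector B is composed as the product of two vectors, all four trivector coefficients of bÔB vanish identically. (The zero-based indexing for e1…e4 and the helper function below are illustrative assumptions.)

```python
from itertools import combinations

def wedge_1_2(b, B, n):
    """Exterior product of a 1-element b (length-n coefficient tuple, index k
    standing for e(k+1)) with a 2-element B ({(i, j): coeff}, i < j).
    Returns the nonzero trivector coefficients as a dict."""
    T = {}
    for (i, j), c in B.items():
        for k in range(n):
            if k in (i, j):
                continue
            idx = tuple(sorted((k, i, j)))
            # sign of the permutation sorting (k, i, j) into increasing order
            sign = 1 if k < i else (-1 if k < j else 1)
            T[idx] = T.get(idx, 0) + sign * b[k] * c
    return {t: v for t, v in T.items() if v != 0}

# Compose B as a ^ bb for two vectors in the 4-dimensional vector space:
a = (2, -1, 3, 5)
bb = (4, 0, -2, 1)
B = {(i, j): a[i]*bb[j] - a[j]*bb[i] for i, j in combinations(range(4), 2)}
# bb ^ (a ^ bb) vanishes identically, so all four trivector coefficients
# (the simplicity conditions) are zero:
assert wedge_1_2(bb, B, 4) == {}
```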

On the other hand, if we require from the start that bÔB ã 0, that is, that B contains the factor b, then
we can write B ã aÔb, and so write A immediately as the product of a point and a vector.

A ã ¯" Ô b + a Ô b ã H¯" + aL Ô b

The vector a may be visualized as the position vector of a point on the line, and the vector b as a
vector in the direction of the line. The coefficients of aÔb cannot be expressed as the coefficients
of the cross product as before since the vector space is now 4-dimensional.

The (Plücker) coordinates of a line in 4-space must now be given by the coefficients of the vector
a and the bivector aÔb. If

b = ¯&b ; Cp = 8a = ¯&a , ¯%@a Ô bD<


8a1 e1 + a2 e2 + a3 e3 + a4 e4 ,
H-a2 b1 + a1 b2 L e1 Ô e2 + H-a3 b1 + a1 b3 L e1 Ô e3 +
H-a4 b1 + a1 b4 L e1 Ô e4 + H-a3 b2 + a2 b3 L e2 Ô e3 +
H-a4 b2 + a2 b4 L e2 Ô e4 + H-a4 b3 + a3 b4 L e3 Ô e4 <

Then the Plücker line coordinates are given by

ExtractCoefficients@LBasisD@CpD
8a1 , a2 , a3 , a4 , -a2 b1 + a1 b2 , -a3 b1 + a1 b3 ,
-a4 b1 + a1 b4 , -a3 b2 + a2 b3 , -a4 b2 + a2 b4 , -a4 b3 + a3 b4 <


In sum: In a point 4-space, the Grassmannian approach generalizes, whereas the approach
developed through the dot and cross products does not.

4.9 Calculation of Intersections

The intersection of two lines in a plane


Suppose we wish to find the point P of intersection of two lines L1 and L2 in a plane. We have seen
in the previous section how we could express a line in a plane as the exterior product of two points,
and that these points could be taken as the points of intersection of the line with the coordinate
axes.

First declare the basis of the plane, and then define the lines.

¯"2
8¯#, e1 , e2 <

L1 = H¯" + a e1 L Ô H¯" + b e2 L;
L2 = H¯" + c e1 L Ô H¯" + d e2 L;
To find the point of intersection of these two lines, we first find their common factor. The common
factor of two elements can be found by applying ToCommonFactor to their regressive product.
Remember that common factors are only determinable up to congruence (that is, to within an
arbitrary scalar multiple). Hence the common factor we determine will, in general, be a weighted
point.

wP = ToCommonFactor@L1 Ó L2 D
¯c HHb c - a dL ¯# + Ha b c - a c dL e1 + H-a b d + b c dL e2 L

We can convert this weighted point to a form which separates the weight from the point by
applying ToWeightedPointForm.

ToWeightedPointForm@wPD

(b c - a d) ¯c ¯# + (a c (b - d) ¯c e1)/(b c ¯c - a d ¯c) + (b (-a + c) d ¯c e2)/(b c ¯c - a d ¯c)

Or, if we are only interested in the point, we can apply ToPointForm.

P = ToPointForm@wPD

¯# + (a c (b - d) e1)/(b c - a d) + (b (-a + c) d e2)/(b c - a d)

To verify that this point lies in both lines, we can take its exterior product with each of the lines
and show the results to be zero.


¯%@8L1 Ô P, L2 Ô P<D êê Simplify


80, 0<

In the special case in which the lines are parallel, that is b c - a d = 0, their intersection is no longer
a point, but a vector defining their common direction.
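A numeric check of the intersection point in plain Python, using exact rational arithmetic (the intercept values are arbitrary test data; the helper function is an illustrative assumption):

```python
from fractions import Fraction as F

def collinear(p, q, r):
    # 2-D cross product of (q - p) and (r - p); zero iff the points are collinear
    return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0]) == 0

a, b, c, d = F(3), F(5), F(7), F(2)       # axis intercepts of the two lines
den = b*c - a*d
P = (a*c*(b - d)/den, b*(c - a)*d/den)    # the point of intersection found above
# P lies on the line through (a, 0) and (0, b), and on the line
# through (c, 0) and (0, d):
assert collinear(P, (a, F(0)), (F(0), b))
assert collinear(P, (c, F(0)), (F(0), d))
```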

The intersection of a line and a plane in a 3-plane


Suppose we wish to find the point P of intersection of a line L and a plane P in a 3-plane. We
express the line as the exterior product of two points in two coordinate planes, and the plane as the
exterior product of the points of intersection of the plane with the coordinate axes.

First declare the basis of the 3-plane, and then define the line and plane.

¯"3
8¯#, e1 , e2 , e3 <

L = H¯" + a e1 + b e2 L Ô H¯" + c e2 + d e3 L;
P = H¯" + e e1 L Ô H¯" + f e2 L Ô H¯" + g e3 L;

To find the point of intersection of the line with the plane, we first determine their common factor
(a weighted point) by applying ToCommonFactor to their regressive product, and then drop the
weight by applying ToPointForm.

P = ToPointForm@ToCommonFactor@L Ó PDD

¯# + (a e (d f + (c - f) g) e1)/(d e f - (b e - c e + a f) g) +
 (f (b e (d - g) + c (-a + e) g) e2)/(d e f - (b e - c e + a f) g) -
 (d (b e + (a - e) f) g e3)/(d e f - (b e - c e + a f) g)

To verify that this point lies in both the line and the plane, we can take its exterior product with
each of the line and the plane and show the results to be zero.

¯$@¯%@8L Ô P, P Ô P<DD
80, 0<

In the special case in which the line is parallel to the plane, their intersection is no longer a point,
but a vector defining their common direction. When the line lies in the plane, the result from the
calculation will be zero.
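We can cross-check the formula for the intersection point numerically in plain Python by comparing it with a direct parametric solution of the line against the intercept form of the plane (the specific values are arbitrary test data):

```python
from fractions import Fraction as F

a, b, c, d = F(1), F(2), F(3), F(4)   # line through the points (a, b, 0) and (0, c, d)
e, f, g = F(5), F(6), F(7)            # plane with axis intercepts e, f, g

# The point of intersection as given by the common factor computation:
D = d*e*f - (b*e - c*e + a*f)*g
P = (a*e*(d*f + (c - f)*g)/D,
     f*(b*e*(d - g) + c*(e - a)*g)/D,
     -d*(b*e + (a - e)*f)*g/D)

# Independent check: parametrize the line as q1 + t*(q2 - q1) and solve
# the intercept form x/e + y/f + z/g == 1 of the plane for t.
q1, q2 = (a, b, F(0)), (F(0), c, d)
plane = lambda p: p[0]/e + p[1]/f + p[2]/g
t = (1 - plane(q1)) / (plane(q2) - plane(q1))
assert tuple(q1[i] + t*(q2[i] - q1[i]) for i in range(3)) == P
assert plane(P) == 1
```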

The intersection of two planes in a 3-plane


Suppose we wish to find the line L of intersection of two planes P1 and P2 in a 3-plane.

First declare the basis of the 3-plane, and then define the planes.


¯"3
8¯#, e1 , e2 , e3 <

P1 = H¯" + a e1 L Ô H¯" + b e2 L Ô H¯" + c e3 L;


P2 = H¯" + e e1 L Ô H¯" + f e2 L Ô H¯" + g e3 L;

To find the line of intersection of the two planes, we first determine their common factor (a line)
by applying ToCommonFactor to their regressive product, and then (if we wish) drop the
(arbitrary) congruence factor ¯c by setting it equal to unity. Finally, applying
GrassmannSimplify collects the terms in a convenient form with the origin ¯# factored out.

L = ¯$@ToCommonFactor@P1 Ó P2 D ê. ¯c Ø 1D
¯# Ô Ha e Hc f - b gL e1 + b f H-c e + a gL e2 + c Hb e - a fL g e3 L +
a b e f H-c + gL e1 Ô e2 + a c e Hb - fL g e1 Ô e3 + b c H-a + eL f g e2 Ô e3

This is, of course, a 2-element in a 4-space, and hence is not necessarily simple. However we can
verify in a number of ways that this result is indeed simple, and therefore does define a line. The
simplest way is to apply the GrassmannAlgebra function SimpleQ.

SimpleQ@LD
True

Perhaps more convincing is to factorize L so as to display it as the product of a point and a vector.

Lp = ExteriorFactorize@LD

a e (c f - b g) (¯# + (b f (-c + g) e2)/(-c f + b g) + (c (b - f) g e3)/(-c f + b g)) Ô
 (e1 - ((b c e f - a b f g) e2)/(a c e f - a b e g) + ((b c e g - a c f g) e3)/(a c e f - a b e g))

We can extract the point and vector as specific expressions by applying the GrassmannAlgebra
function ExtractArguments.

8P, V< = ExtractArguments@Lp D

{¯# + (b f (-c + g) e2)/(-c f + b g) + (c (b - f) g e3)/(-c f + b g),
 e1 - ((b c e f - a b f g) e2)/(a c e f - a b e g) + ((b c e g - a c f g) e3)/(a c e f - a b e g)}

To verify that both the point and the vector lie in both planes, we can take their exterior product
with each of the planes and show the results to be zero.

Simplify@ ¯%@8P1 Ô P, P1 Ô V, P2 Ô P, P2 Ô V<DD


80, 0, 0, 0<

In the special case in which the planes are parallel, their intersection is no longer a line, but a
bivector defining their common 2-direction.


Example: The osculating plane to a curve

Ï The problem

Show that the osculating planes at any three points to the curve defined by:

P ã ¯" + n e1 + n^2 e2 + n^3 e3

intersect at a point coplanar with these three points. Here, n is a scalar parametrizing the curve.

Ï The solution
The osculating plane P to the curve at the point P is given by P ã P Ô P' Ô P''. Here P' and P'' are
the first and second derivatives of P with respect to n.

P = ¯" + n e1 + n^2 e2 + n^3 e3 ;
P' = e1 + 2 n e2 + 3 n^2 e3 ;
P'' = 2 e2 + 6 n e3 ;
P = P Ô P' Ô P'' ;
We can declare the space to be a 3-plane and subscripts of the parameter n to be scalar, and then
use GrassmannSimplify to derive the expression for the osculating plane as a function of n.
(We divide out a common multiple of 2 to make the resulting expressions a little simpler.)

¯"3 ; DeclareExtraScalarSymbols@8n, n_ <D; P = ¯%@P ê 2D

¯# Ô Ie1 Ô e2 + 3 n e1 Ô e3 + 3 n2 e2 Ô e3 M + n3 e1 Ô e2 Ô e3

Now select any three points on the curve P1 , P2 , and P3 .

P1 = ¯" + n1 e1 + n1 2 e2 + n1 3 e3 ;
P2 = ¯" + n2 e1 + n2 2 e2 + n2 3 e3 ;
P3 = ¯" + n3 e1 + n3 2 e2 + n3 3 e3 ;

The osculating planes at these three points are:

8P1 , P2 , P3 < = 8P ê. n Ø n1 , P ê. n Ø n2 , P ê. n Ø n3 <

9¯# Ô Ie1 Ô e2 + 3 n1 e1 Ô e3 + 3 n21 e2 Ô e3 M + n31 e1 Ô e2 Ô e3 ,


¯# Ô Ie1 Ô e2 + 3 n2 e1 Ô e3 + 3 n22 e2 Ô e3 M + n32 e1 Ô e2 Ô e3 ,
¯# Ô Ie1 Ô e2 + 3 n3 e1 Ô e3 + 3 n23 e2 Ô e3 M + n33 e1 Ô e2 Ô e3 =

The (weighted) point of intersection of these three planes may be obtained by calculating their
common factor. The congruence factor ¯c appears squared because there are two regressive
product operations.


wP = ToCommonFactor@P1 Ó P2 Ó P3 D

¯c2 I¯# I-9 n21 n2 + 9 n1 n22 + 9 n21 n3 - 9 n22 n3 - 9 n1 n23 + 9 n2 n23 M +


e1 I-3 n31 n2 + 3 n1 n32 + 3 n31 n3 - 3 n32 n3 - 3 n1 n33 + 3 n2 n33 M +
e2 I-3 n31 n22 + 3 n21 n32 + 3 n31 n23 - 3 n32 n23 - 3 n21 n33 + 3 n22 n33 M +
e3 I-9 n31 n22 n3 + 9 n21 n32 n3 + 9 n31 n2 n23 -
9 n1 n32 n23 - 9 n21 n2 n33 + 9 n1 n22 n33 MM

The scalar coefficient attached to the origin factors out of the expression so that we can extract the
point from the resulting weighted point to give the point of intersection Q as

Q = ToPointForm@wPD

¯# + e3 n1 n2 n3 + (1/3) e1 (n1 + n2 + n3) + (1/3) e2 (n2 n3 + n1 (n2 + n3))

Finally, to show that this point of intersection Q is coplanar with the points P1 , P2 , and P3 , we
compute their exterior product.

Simplify@¯%@P1 Ô P2 Ô P3 Ô QDD
0

This proves the original assertion.
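The same verification can be run numerically in plain Python with exact rational arithmetic (the parameter values 1, 2, 5 are arbitrary test data): the point Q read off from the result above lies in each osculating plane and is coplanar with the three points of contact.

```python
from fractions import Fraction as F

def det3(u, v, w):
    # the coefficient of e1 ^ e2 ^ e3 in u ^ v ^ w
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - u[1]*(v[0]*w[2] - v[2]*w[0])
          + u[2]*(v[0]*w[1] - v[1]*w[0]))

n = [F(1), F(2), F(5)]                    # three parameter values
pts = [(t, t**2, t**3) for t in n]        # the points P1, P2, P3 on the curve
s1 = n[0] + n[1] + n[2]
s2 = n[0]*n[1] + n[0]*n[2] + n[1]*n[2]
s3 = n[0]*n[1]*n[2]
Q = (s1/3, s2/3, s3)                      # the point of intersection found above

# Q lies in the osculating plane P ^ P' ^ P'' at each point of contact:
for t, p in zip(n, pts):
    d = tuple(Q[i] - p[i] for i in range(3))
    assert det3(d, (1, 2*t, 3*t**2), (0, 2, 6*t)) == 0

# and Q is coplanar with the three points of contact:
diffs = [tuple(p[i] - Q[i] for i in range(3)) for p in pts]
assert det3(*diffs) == 0
```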

4.10 Decomposition into Components

The shadow
In Chapter 3 we developed a formula which expressed a j-element as a sum of components, the
element being decomposed with respect to a pair of elements a and b (of grades m and n-m
respectively) which together span the whole space.

X ã ‚i=1..2^j (-1)^(¯G[Xci] (j+1)) ((a Ô Xbi) Ó (Xci Ô b)) / (a Ó b)

The first and last terms of this decomposition are:


X ã (a Ó (X Ô b)) / (a Ó b) + ! + ((a Ô X) Ó b) / (a Ó b)

Grassmann called the first term the shadow of X on a excluding b.

It can be seen that the last term can be rearranged as the shadow of X on b excluding a.

((a Ô X) Ó b) / (a Ó b) ã (b Ó (X Ô a)) / (b Ó a)

If j = 1, the decomposition formula reduces to the sum of just two components, xa and xb , where
xa lies in a and xb lies in b .

x ã xa + xb ã (a Ó (x Ô b)) / (a Ó b) + (b Ó (x Ô a)) / (b Ó a)    4.12

Another variant on this decomposition formula is formula 3.36 derived in Chapter 3:

x ã ‚i=1..n ((b1 Ô b2 Ô ! Ô x Ô ! Ô bn) / (b1 Ô b2 Ô ! Ô bi Ô ! Ô bn)) bi

where x replaces bi in the numerator of the i-th term.

We now explore these formulae with a number of geometric examples, beginning with the simplest
case of decomposition in a 2-space.

Decomposition in a 2-space

Ï Decomposition of a vector in a bivector

Suppose we have a vector 2-space defined by the bivector aÔb. We wish to decompose a vector x
in the space to give one component in a and the other in b. Applying the decomposition formula
above, and noting that xÔb and xÔa are bivectors while xÓb and xÓa are scalars, we can use
formula 3.26 and axiom Ó8 to give:
xa ã (a Ó (x Ô b)) / (a Ó b) ã (a Ó ((x Ó b) 1[n])) / (a Ó b) ã ((x Ó b)/(a Ó b)) (a Ó 1[n])

ã ((x Ó b)/(a Ó b)) a ã ((x Ô b)/(a Ô b)) a

(here 1[n] denotes the unit n-element)


xb ã (b Ó (x Ô a)) / (b Ó a) ã (b Ó ((x Ó a) 1[n])) / (b Ó a) ã ((x Ó a)/(b Ó a)) (b Ó 1[n])

ã ((x Ó a)/(b Ó a)) b ã ((a Ô x)/(a Ô b)) b
(We are able to replace the coefficient (x Ó b)/(a Ó b) on the right hand side by (x Ô b)/(a Ô b)
because we are in a 2-space. See the section on Exterior Division in Chapter 2.)

Decomposition of a vector in a bivector into two components.

The coefficients of a and b are scalars showing that xa is congruent to a and xb is congruent to b.
If each of the three vectors is expressed in basis form, we can determine these scalars more
specifically. For example:

x = x1 e1 + x2 e2 ; a = a1 e1 + a2 e2 ; b = b1 e1 + b2 e2 ;
(x Ó b)/(a Ó b) ã ((x1 e1 + x2 e2) Ó (b1 e1 + b2 e2)) / ((a1 e1 + a2 e2) Ó (b1 e1 + b2 e2))
ã ((x1 b2 - x2 b1) e1 Ó e2) / ((a1 b2 - a2 b1) e1 Ó e2) ã (x1 b2 - x2 b1)/(a1 b2 - a2 b1)

Finally then we can express the original vector x as the required sum of two components, one in a
and one in b.

x ã ((x1 b2 - x2 b1)/(a1 b2 - a2 b1)) a + ((x1 a2 - x2 a1)/(b1 a2 - b2 a1)) b

We can easily check that the right hand side does indeed reduce to x by composing x, a, and b,
and then simplifying:

¯!2 ; ¯&x ; a = ¯&a ; b = ¯&b ;

((x1 b2 - x2 b1)/(a1 b2 - a2 b1)) a + ((x1 a2 - x2 a1)/(b1 a2 - b2 a1)) b êê Simplify
e1 x1 + e2 x2
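The same check can be made in plain Python with exact rationals — this is Cramer's rule in Grassmann dress (the vectors chosen are arbitrary test data):

```python
from fractions import Fraction as F

x = (F(7), F(-3))
a = (F(2), F(1))
b = (F(-1), F(4))

det = lambda u, v: u[0]*v[1] - u[1]*v[0]   # coefficient of e1 ^ e2 in u ^ v

ca = det(x, b) / det(a, b)                 # (x1 b2 - x2 b1)/(a1 b2 - a2 b1)
cb = det(a, x) / det(a, b)                 # equals (x1 a2 - x2 a1)/(b1 a2 - b2 a1)
# The two components recombine to give x exactly:
assert tuple(ca*ai + cb*bi for ai, bi in zip(a, b)) == x
```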

Ï Decomposition of a point in a line


These same calculations apply mutatis mutandis to decomposing a point P into two component
weighted points congruent to two given points Q and R in a line. For variety here, we take a more
general and less coordinate-based approach.

In the first formulae of the previous section above let:

xØP
aØQ
bØR
then the decomposition becomes

P ã ((P Ô R)/(Q Ô R)) Q + ((Q Ô P)/(Q Ô R)) R

Decomposition of a point in a line into two weighted points.

Because P is a point, we would expect the sum of its two component weights to be unity.

(P Ô R)/(Q Ô R) + (Q Ô P)/(Q Ô R) ã (P Ô (R - Q))/(Q Ô R)
We can see immediately that this is indeed so because adding them gives the same bound vector in
the numerator and the denominator. We can either note this equivalence from our previous
discussion of bound vectors, or do a substitution. For a substitution, let r and q be scalars, and v be
a vector in (parallel to) the line.

Q=P-qv
R=P+rv
(P Ô R)/(Q Ô R) ã (P Ô (P + r v))/((P - q v) Ô (P + r v)) ã (r P Ô v)/((r + q) P Ô v) ã r/(r + q)

(Q Ô P)/(Q Ô R) ã ((P - q v) Ô P)/((P - q v) Ô (P + r v)) ã (q P Ô v)/((r + q) P Ô v) ã q/(r + q)

Hence, in these terms, the final decomposition reduces to



P ã (r/(r + q)) Q + (q/(r + q)) R
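A numeric sketch in plain Python (the scalars q and r, the point P, and the direction v are arbitrary test data) confirming that the two weights sum to unity and recover P:

```python
from fractions import Fraction as F

q, r = F(3), F(5)
P = (F(2), F(1))                  # an arbitrary point on the line
v = (F(4), F(-1))                 # a direction vector for the line
Q = tuple(P[i] - q*v[i] for i in range(2))
R = tuple(P[i] + r*v[i] for i in range(2))

# P is recovered as a weighted combination of Q and R:
assert tuple(r/(r+q)*Q[i] + q/(r+q)*R[i] for i in range(2)) == P
# and the two weights sum to unity, as expected for a point:
assert r/(r+q) + q/(r+q) == 1
```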

Ï Decomposition of a vector in a line

Again, these same calculations apply mutatis mutandis to the situation above. Although the result
turns out to be trivial in terms of our understanding that a vector may be expressed as the
difference of two points, we include the example to emphasize that the decomposition formula
applies easily to either bound or free entities. Let

PØx
then the decomposition becomes

x ã ((x Ô R)/(Q Ô R)) Q + ((Q Ô x)/(Q Ô R)) R


Decomposition of a vector in a line into two points of opposite weight.

Because x is a vector, we would expect the sum of its two component weights to be zero.

(x Ô R)/(Q Ô R) + (Q Ô x)/(Q Ô R) ã (x Ô (R - Q))/(Q Ô R)
Again, we can see immediately that this is so because adding them gives a zero bivector in the
numerator. For a substitution, let a be a scalar; then R - Q is a vector in (parallel to) the line.

x = a HR - QL
(x Ô R)/(Q Ô R) ã (a (R - Q) Ô R)/(Q Ô R) ã -a (Q Ô R)/(Q Ô R) ã -a

(Q Ô x)/(Q Ô R) ã (Q Ô a (R - Q))/(Q Ô R) ã a (Q Ô R)/(Q Ô R) ã a
Hence, in these terms, the final decomposition reduces to

x ã -a Q + a R

which is of course our original definition for x!

Decomposition in a 3-space

Ï Decomposition of a vector in a trivector

Suppose we have a vector 3-space represented by the trivector aÔbÔg. We wish to decompose a
vector x in this 3-space to give one component in aÔb and the other in g. Applying the
decomposition formula gives:

xaÔb ã ((a Ô b) Ó (x Ô g)) / ((a Ô b) Ó g)

xg ã (g Ó (x Ô a Ô b)) / (g Ó (a Ô b))

Because xÔaÔb is a 3-element it can be seen immediately that the component xg can be written as
a scalar multiple of g where the scalar is expressed either as a ratio of regressive products (scalars)
or exterior products (n-elements).

xg ã ((x Ó (a Ô b)) / (g Ó (a Ô b))) g ã ((a Ô b Ô x) / (a Ô b Ô g)) g

The component xaÔb will be a linear combination of a and b. To show this we can expand the
expression above for xaÔb using the Common Factor Axiom.

xaÔb ã ((a Ô b) Ó (x Ô g)) / ((a Ô b) Ó g) ã ((a Ô x Ô g) Ó b) / ((a Ô b) Ó g) - ((b Ô x Ô g) Ó a) / ((a Ô b) Ó g)

Rearranging these two terms into a similar form as that derived for xg gives:

xaÔb ã ((x Ô b Ô g)/(a Ô b Ô g)) a + ((a Ô x Ô g)/(a Ô b Ô g)) b
Of course we could have obtained this decomposition directly by using the results of the
decomposition formula 3.36 for decomposing a 1-element into a linear combination of the factors
of an n-element.


x ã ((x Ô b Ô g)/(a Ô b Ô g)) a + ((a Ô x Ô g)/(a Ô b Ô g)) b + ((a Ô b Ô x)/(a Ô b Ô g)) g

Decomposition of a vector in a trivector into three components.
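Numerically, this decomposition is Cramer's rule: each coefficient is a ratio of 3x3 determinants, the coefficients of exterior products of three vectors. A plain Python check with arbitrary test vectors (helper names are illustrative):

```python
from fractions import Fraction as F

def det3(u, v, w):
    # the coefficient of e1 ^ e2 ^ e3 in u ^ v ^ w
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - u[1]*(v[0]*w[2] - v[2]*w[0])
          + u[2]*(v[0]*w[1] - v[1]*w[0]))

a = (F(1), F(2), F(0))
b = (F(0), F(1), F(3))
g = (F(2), F(0), F(1))
x = (F(5), F(-1), F(4))

d = det3(a, b, g)
coeffs = (det3(x, b, g)/d, det3(a, x, g)/d, det3(a, b, x)/d)
recon = tuple(coeffs[0]*a[i] + coeffs[1]*b[i] + coeffs[2]*g[i] for i in range(3))
assert recon == x
```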

Ï Decomposition of a point in a plane

Suppose we have a plane represented by the bound bivector QÔRÔS, where Q, R and S are points.
We wish to decompose a point P in this plane in two ways. The first, as weighted points at Q, R,
and S. The second as a weighted point at S and a weighted point in the line represented by QÔR.
Applying the decomposition formula 3.36 gives:

P ã ((P Ô R Ô S)/(Q Ô R Ô S)) Q + ((Q Ô P Ô S)/(Q Ô R Ô S)) R + ((Q Ô R Ô P)/(Q Ô R Ô S)) S

This formula immediately displays P as a sum of weighted points situated at Q, R and S.



A point P in a bound bivector. P decomposed into 3 weighted points.


Decomposition of a point in a bound bivector into three weighted points.

Alternatively, we can group these terms two at a time to obtain a weighted point in the
corresponding line. For example, adding the first two terms together gives us the point which we
denote PQÔR .

PQÔR ã ((P Ô R Ô S)/(Q Ô R Ô S)) Q + ((Q Ô P Ô S)/(Q Ô R Ô S)) R


A point P in a bound bivector. P decomposed into 2 weighted points.


Decomposition of a point in a bound bivector into two weighted points.

Decomposition in a 4-space

Ï Decomposition of a vector in a 4-vector

Suppose we have a vector 4-space represented by the trivector aÔbÔgÔd and we wish to
decompose a vector x in this 4-space into two vectors, one in aÔb and one in gÔd.

x ã xaÔb + xgÔd

Applying the decomposition formula 3.36 gives:

x ã ((x Ô b Ô g Ô d)/(a Ô b Ô g Ô d)) a + ((a Ô x Ô g Ô d)/(a Ô b Ô g Ô d)) b + ((a Ô b Ô x Ô d)/(a Ô b Ô g Ô d)) g + ((a Ô b Ô g Ô x)/(a Ô b Ô g Ô d)) d
Hence we can write

xaÔb ã ((x Ô b Ô g Ô d)/(a Ô b Ô g Ô d)) a + ((a Ô x Ô g Ô d)/(a Ô b Ô g Ô d)) b

xgÔd ã ((a Ô b Ô x Ô d)/(a Ô b Ô g Ô d)) g + ((a Ô b Ô g Ô x)/(a Ô b Ô g Ô d)) d
As we have previously remarked, we can rearrange these components in whatever combinations or
interpretations (as points or vectors) we require.

Decomposition of a point or vector in an n-space


Decomposition of a point or vector in a space of n dimensions is generally most directly
accomplished by using the decomposition formula 3.36 derived in Chapter 3: The Regressive
Product. For example, providing at least one of the ai is a point, the decomposition of a point P
can be written:

P ã ‚i=1..n ((a1 Ô a2 Ô ! Ô P Ô ! Ô an) / (a1 Ô a2 Ô ! Ô ai Ô ! Ô an)) ai    4.13

As we have already seen in the examples above, the components are simply arranged in whatever
combinations are required to achieve the decomposition.

† In summary, it is worth reemphasizing that, although the decomposition of elements appears
geometrically quite different depending on whether the elements are interpreted as points or
vectors, the formulae are identical. As with all the formulae of the Grassmann algebra, the only
difference is the interpretation.

4.11 Projective Space


In perusing this chapter, you may well have remarked on the similarities between the Grassmann
geometry described above and the geometry of projective space. In this section, we will briefly
explore these parallels. We will find that there are no essential differences, and that the inessential
differences are ones simply of definition or interpretation.

The apparent differences stem in the main from the way in which the intersection of two geometric
elements is treated in projective geometry on the one hand, and Grassmann geometry on the
other. For example, in the plane, both geometries will say that two lines always intersect. In
projective geometry the lines intersect in either a point or a point-at-infinity. In Grassmann
geometry the lines intersect in either a point or a vector. We see here that there is no essential
difference, but an inessential difference of definition or interpretation: a vector and a point-at-
infinity are essentially the same object.


The advantage of the Grassmann geometry interpretation however, is that one can add a metric to it
if one wishes to use metric concepts (for example the inner product).

The intersection of two lines in a plane


In the Grassmann approach to the geometry of the plane, two (distinct) points define a line, and
two (non-parallel) lines intersect in a common point. If the lines are parallel, they intersect in a
common vector. Thus, as we intimated in the introduction to this section, we need only rename this
vector as a point-at-infinity to recover the projective geometry of 2-space.

But this correspondence is not as arbitrary as it might first appear. It turns out that the result of
computing the intersection of two lines approaching parallelism in the Grassmann algebra gives us
a result which can be viewed on the one hand as a point approaching infinity, and on the other hand
as a weighted point approaching a vector. That is, the point of intersection of two lines moves off
towards infinity as the lines approach parallelism, and in the limit becomes a finite vector.

To see this we need only apply the methods previously developed for the calculation of
intersections to two such lines. For simplicity, and without loss of generality, we can represent the
two lines as follows:
• L1 by a bound vector through a point P, in the direction of the vector x; and
• L2 by a bound vector through the point P + y, with the vector of the line differing in direction
from x by a scalar multiple e of y.


The intersection of two lines as they approach parallelism.

¯"2 ; DeclareExtraScalarSymbols@eD;
DeclareExtraVectorSymbols@PD;
L1 = P Ô x;
L2 = HP + yL Ô Hx - e yL;

We obtain the point of intersection up to congruence by applying ToCommonFactor to the


regressive product of the lines.

ToCommonFactor@L1 Ó L2 D
¯c H-x †P Ô x Ô y§ - P e †P Ô x Ô y§L

Since the congruence factor ¯c, and †PÔxÔy§ (the coefficient of the exterior product in any basis)
are all scalars, we can say that the intersection is congruent to the weighted point wQ given by

wQ ª e P + x

By factoring out the scalar factor e, we can also say that the intersection is congruent to the point Q
given by

Q ª P + x/e
As e approaches zero, the lines approach parallelism, the point of intersection Q has an ever-
increasing vector x/e, causing its position to approach infinity, and the weighted point wQ has an
ever-decreasing influence from the point P, causing it to approach the (ultimately) common vector
x.

It may be remarked that in this calculation, the point of intersection does not involve the vector y
which specifies the position of L2 . In general then, we may conclude that the resulting vector and
hence the resulting point-at-infinity is the same for all lines parallel to L1 .
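This limiting behaviour is easy to watch numerically. In the plain Python sketch below (which uses the standard cross-product join and meet of homogeneous coordinates in the plane as a stand-in for the exterior and regressive products of the text), the intersection of L1 and L2 is congruent to e P + x for every e, and its normalized position runs off to infinity as e shrinks:

```python
from fractions import Fraction as F

def cross(p, q):
    # join of two points, or meet of two lines, in homogeneous coordinates
    return (p[1]*q[2] - p[2]*q[1],
            p[2]*q[0] - p[0]*q[2],
            p[0]*q[1] - p[1]*q[0])

# Homogeneous coordinates (w, x, y): w != 0 gives a point, w == 0 a vector.
P        = (F(1), F(0), F(0))        # the point P
P_plus_y = (F(1), F(0), F(1))        # the point P + y
x_dir    = (F(0), F(1), F(0))        # the direction x

for eps in (F(1, 10), F(1, 100), F(1, 1000)):
    L1 = cross(P, x_dir)                         # P ^ x
    L2 = cross(P_plus_y, (F(0), F(1), -eps))     # (P + y) ^ (x - eps*y)
    Q = cross(L1, L2)                            # their intersection
    # Q is congruent to the weighted point eps*P + x ...
    assert cross(Q, (eps, F(1), F(0))) == (0, 0, 0)
    # ... so its normalized position runs off to infinity as eps -> 0:
    assert Q[1] / Q[0] == 1 / eps
```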

The line at infinity in a plane


A basis for the plane involves the origin ¯" and two basis vectors, which we denote, say, by e1
and e2 . We can now repeat the previous computation for each of the basis vectors to obtain the
weighted points

wQ1 ª e ¯" + e1 wQ2 ª e ¯" + e2

We can now see how the plane may be said to contain a line approaching a line-at-infinity if we
define it by the product of these two weighted points.

L¶ ª wQ1 Ô wQ2 ª He ¯" + e1 L Ô He ¯" + e2 L

This line has the property that in the limit, it contains all the points-at-infinity of the plane. We can
show this by taking its exterior product with a third arbitrary point approaching a point-at-infinity,
and obtain an element that becomes zero in the limit as e approaches zero.

¯%@He ¯" + e1 L Ô He ¯" + e2 L Ô He ¯" + a e1 + b e2 LD


He - a e - b eL ¯# Ô e1 Ô e2

In fact, in the limit, the line-at-infinity is the bivector of the plane. We can see this by letting e
approach zero in the expression for L¶ or its expansion:

¯%@He ¯" + e1 L Ô He ¯" + e2 LD


¯# Ô H-e e1 + e e2 L + e1 Ô e2

Projective 3-space
In projective 3-space, all the above concepts of the projective plane carry over naturally. We can
see the parallels between Projective Geometry and Grassmann Geometry as follows:


Projective Geometry: Non-parallel planes intersect in lines. Parallel planes intersect in a line-at-
infinity. All the lines-at-infinity belong to a single plane-at-infinity.

Grassmann Geometry: Non-parallel planes intersect in lines. Parallel planes intersect in a bivector.
All the bivectors belong to a single trivector.

Homogeneous coordinates

Ï The plane

As we have seen, a point P in the plane may be represented up to congruence by

P = a0 ¯" + a1 e1 + a2 e2 ;

The elements of the triplet 8ao , a1 , a2 < are called the homogeneous coordinates of the point P.
In projective geometry, the triplet 8ao , a1 , a2 < and the triplet 8k ao , k a1 , k a2 <, where k is
a scalar, are defined to represent the same point. In the Grassmann approach, this corresponds to
weighted points with different weights also defining the same point.

The cobasis to the basis of the plane is

¯"2 ; Cobasis
8e1 Ô e2 , -H¯# Ô e2 L, ¯# Ô e1 <

A line in the plane can be represented up to congruence by a linear combination of these cobasis
elements

L = b0 e1 Ô e2 + b1 H-¯" Ô e2 L + b2 ¯" Ô e1 ;

The elements of the triplet 8b0 , b1 , b2 < are called the homogeneous coordinates of the line L.

The equation to the line may then be obtained from the condition that a point lies in the line, that is,
their exterior product is zero.

PÔL ã 0
On simplifying the product, we recover the equation of the line in homogeneous coordinates
(a0 b0 + a1 b1 + a2 b2 ã 0) by viewing the line as fixed and the point as variable.

DeclareExtraScalarSymbols@a_ , b_ D;
¯%@P Ô LD ã 0
Ha0 b0 + a1 b1 + a2 b2 L ¯# Ô e1 Ô e2 ã 0

However, we can also view the point as fixed, and the line as variable, wherein the same form of
equation describes a variable line through a fixed point.
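This incidence condition is easy to exercise numerically. In the plain Python sketch below, the line joining two points is computed with the cross product of their homogeneous coordinate triples — which plays the role of the cobasis expansion above — and both points then satisfy a0 b0 + a1 b1 + a2 b2 ã 0:

```python
def cross(p, q):
    return (p[1]*q[2] - p[2]*q[1],
            p[2]*q[0] - p[0]*q[2],
            p[0]*q[1] - p[1]*q[0])

def incident(point, line):
    # the incidence condition a0*b0 + a1*b1 + a2*b2 == 0
    return sum(a*b for a, b in zip(point, line)) == 0

P = (1, 2, 3)        # homogeneous coordinates of a point
Q = (1, -1, 4)       # homogeneous coordinates of a second point
L = cross(P, Q)      # homogeneous coordinates of the line through P and Q
assert incident(P, L) and incident(Q, L)
```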

Ï Points and planes in 3-space

A point P in 3-space may be represented up to congruence by


¯"3 ; P = a0 ¯" + a1 e1 + a2 e2 + a3 e3 ;

The elements of the quartet 8a0 , a1 , a2 , a3 < are called the homogeneous coordinates of the point P.

The cobasis to the basis of the 3-space is

Cobasis
8e1 Ô e2 Ô e3 , -H¯# Ô e2 Ô e3 L, ¯# Ô e1 Ô e3 , -H¯# Ô e1 Ô e2 L<

In 3-space it is now planes that can be represented up to congruence by a linear combination of these cobasis elements

P = b0 e1 Ô e2 Ô e3 + b1 H-¯" Ô e2 Ô e3 L + b2 ¯" Ô e1 Ô e3 + b3 H-¯" Ô e1 Ô e2 L;

The elements of the quartet 8b0 , b1 , b2 , b3 < are called the homogeneous coordinates of the
plane P.

The equation to the plane may then be obtained from the condition that a point lies in the plane,
that is, their exterior product is zero.

PÔP ã 0
On simplifying the product, we recover the equation of the plane in homogeneous coordinates by
viewing the plane as fixed and the point as variable.

¯%@P Ô PD ã 0
Ha0 b0 + a1 b1 + a2 b2 + a3 b3 L ¯# Ô e1 Ô e2 Ô e3 ã 0

And as in the case of the plane, we can also view the point as fixed, and the plane as variable,
wherein the same form of equation describes a variable plane through a fixed point.

Ï Points and hyperplanes in n-space

The cases of points and lines in 2-space and points and planes in 3-space generalize in an obvious
manner to points and hyperplanes ((n-1)-planes) in n-space.

The relationships here are so straightforward because 1-elements and (n-1)-elements are always simple. Hence they can immediately represent geometric objects like points and hyperplanes as linear sums of the basis elements.

This is not, in general, the case for elements of intermediate grade. For example, in a 3-space, 2-
elements are not necessarily simple. Hence lines in space have no such simple form as a linear sum
of the basis 2-elements. We can easily compose such a form and check for its simplicity.

¯"3 ; A = ComposeMElement@2, aD
a1 ¯# Ô e1 + a2 ¯# Ô e2 + a3 ¯# Ô e3 + a4 e1 Ô e2 + a5 e1 Ô e3 + a6 e2 Ô e3

SimpleQ@AD
If@8a3 a4 - a2 a5 + a1 a6 ã 0<, TrueD

Thus, such a form will be simple, and thus able to represent a line, only if the coefficients are constrained by the relation

a3 a4 - a2 a5 + a1 a6 ã 0
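This constraint can be verified numerically. The Python sketch below is our own illustration, with coordinates ordered to match the basis 8¯#Ôe1 , ¯#Ôe2 , ¯#Ôe3 , e1 Ôe2 , e1 Ôe3 , e2 Ôe3 <: the exterior product of any two 1-elements always satisfies the relation, while a hand-picked non-simple 2-element does not.

```python
import random

def wedge(u, v):
    """Coefficients of the 2-element u ^ v in the basis order:
       O^e1, O^e2, O^e3, e1^e2, e1^e3, e2^e3,
       for 1-elements u, v with coordinates (O, e1, e2, e3)."""
    return (u[0]*v[1] - u[1]*v[0],   # a1
            u[0]*v[2] - u[2]*v[0],   # a2
            u[0]*v[3] - u[3]*v[0],   # a3
            u[1]*v[2] - u[2]*v[1],   # a4
            u[1]*v[3] - u[3]*v[1],   # a5
            u[2]*v[3] - u[3]*v[2])   # a6

def simplicity(a):
    """The constraint a3 a4 - a2 a5 + a1 a6; zero for simple 2-elements."""
    a1, a2, a3, a4, a5, a6 = a
    return a3*a4 - a2*a5 + a1*a6

random.seed(0)
for _ in range(100):
    u = [random.randint(-9, 9) for _ in range(4)]
    v = [random.randint(-9, 9) for _ in range(4)]
    assert simplicity(wedge(u, v)) == 0     # products of 1-elements are simple

assert simplicity((1, 0, 0, 0, 0, 1)) != 0  # O^e1 + e2^e3 is not simple
```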

Duality
The concept of duality in Grassmann algebra corresponds closely to the concept of duality in
projective geometry. For example, consider the expression for a line as the exterior product of two
points.

L ã PÔQ
This may be read variously as:
The points P and Q lie on the line L.
The line L passes through the points P and Q.
The points P and Q join to form the line L.
The line L is the join of the points P and Q.

In a plane, the dual expression to "a line is the exterior product of two points", is "a point is the
regressive product of two lines".

P ã LÓM
This may be read variously as:
The lines L and M pass through the point P.
The point P lies on the lines L and M.
The lines L and M intersect to form the point P.
The point P is the intersection of the lines L and M.

The collinearity of three points is expressed by

PÔQÔR ã 0
The concurrency of three lines is expressed by

LÓMÓN ã 0
Duality extends naturally to a space of any dimension. Remember that, just as in projective
geometry, these entities and expressions are only defined up to congruence.
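In homogeneous coordinates for the plane, both the join and the meet reduce to the same computation: the cross product of triples, a correspondence with the exterior and regressive products that holds up to congruence, which is all projective geometry requires. The Python sketch below is illustrative (our own coordinates); note how the meet of two parallel lines comes out with zero weight, that is, as a vector (a point-at-infinity).

```python
def cross(a, b):
    """Join of two points, or meet of two lines, up to congruence."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# Join: the line y = 0 through two points, and the parallel line y = 1
L = cross((1, 0, 0), (1, 1, 0))
M = cross((1, 0, 1), (1, 1, 1))

# Meet of parallel lines: weight 0, i.e. a point-at-infinity (a vector)
X = cross(L, M)
assert X[0] == 0 and X != (0, 0, 0)

# Meet of non-parallel lines: an ordinary (weighted) point
N = cross((1, 0, 0), (1, 0, 1))      # the line x = 0
Y = cross(L, N)
assert Y[0] != 0                      # here Y is congruent to the origin
```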

Desargues' theorem
As an example of how Grassmann algebra can be used in Projective Geometry, we prove
Desargues' theorem. Note that the methods used are simply applications of the
exterior and regressive products and, like all the applications in this chapter, are entirely
non-metric. This is in contradistinction to "proofs" of the theorem using the dot and cross
products of 3-dimensional vector algebra, which are metric concepts.

If two triangles have corresponding vertices joined by concurrent lines, then the
intersections of corresponding sides are collinear. That is, if PU, QV, and RW all pass
through one point O, then the intersections X = PQ.UV, Y = PR.UW and Z = QR.VW all
lie on one line.


Desargues' theorem.

We begin by declaring that we wish to work in the plane.

¯"2 ;

For simplicity, and without loss of generality, we may take the fixed point O as the origin ¯". We
can then define the points P, Q, and R simply by

P = ¯" + p; Q = ¯" + q; R = ¯" + r;

And since U, V, and W are on the same lines (respectively) through O as are P, Q, and R, we can
define U, V, and W using scalar multiples of the position vectors of P, Q, and R.

U = ¯" + a p; V = ¯" + b q; W = ¯" + c r;

To confirm that these triplets of points are collinear, we can compute their exterior product.

8¯%@¯" Ô P Ô UD, ¯%@¯" Ô Q Ô VD, ¯%@¯" Ô R Ô WD<


80, 0, 0<

The next stage is to compute the intersections X, Y, and Z. We do this by using ToCommonFactor.


X = ToCommonFactor@HP Ô QL Ó HU Ô VLD
¯c Ha p †¯# Ô p Ô q§ - a b p †¯# Ô p Ô q§ - b q †¯# Ô p Ô q§ +
a b q †¯# Ô p Ô q§ + ¯# Ha †¯# Ô p Ô q§ - b †¯# Ô p Ô q§LL

Here X is a weighted point. We can extract the associated point by using ToPointForm.
(Remember that †¯"ÔpÔq§ represents the scalar coefficient of ¯"ÔpÔq in any basis that
might be chosen, and ¯c is the scalar congruence factor.)

X = ToPointForm@XD
HHa - a bL ê Ha - bLL p + HH-1 + aL b ê Ha - bLL q + ¯#

Similarly, we can proceed with the other two intersections.

Y = ToCommonFactor@HP Ô RL Ó HU Ô WLD êê ToPointForm


HHa - a cL ê Ha - cLL p + HH-1 + aL c ê Ha - cLL r + ¯#

Z = ToCommonFactor@HQ Ô RL Ó HV Ô WLD êê ToPointForm


HHb - b cL ê Hb - cLL q + HH-1 + bL c ê Hb - cLL r + ¯#

We can now check whether these points are collinear by taking their exterior product. The first
level of simplification gives

F = ¯%@X Ô Y Ô ZD
¯# Ô HHH-1 + aL a b H-1 + cL ê HHa - bL Ha - cLL - a H-1 + bL b H-1 + cL ê HHa - bL Hb - cLL + a b H-1 + cL2 ê HHa - cL Hb - cLLL pÔq +
HH-1 + aL Ha - a bL c ê HHa - bL Ha - cLL + a H-1 + bL2 c ê HHa - bL Hb - cLL + H-1 + bL c Ha - a cL ê HHa - cL Hb - cLLL pÔr +
HH-1 + aL2 b c ê HHa - bL Ha - cLL + H1 - aL H-1 + bL b c ê HHa - bL Hb - cLL + H-1 + aL b H-1 + cL c ê HHa - cL Hb - cLLL qÔrL

On simplifying the scalar coefficients we get zero as expected, thus proving the theorem.

¯$@Simplify@FDD
0
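The same proof can also be checked numerically without computer algebra. The Python sketch below is our own construction, with integer coordinates chosen so the final determinant is exactly zero: it builds the configuration used above, with U, V, W on the lines through the origin and P, Q, R, and verifies that X, Y, Z are collinear.

```python
def cross(a, b):
    """Join of two points, or meet of two lines, in homogeneous triples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def det3(a, b, c):
    """Triple product; zero exactly when the three points are collinear."""
    return sum(ai * ci for ai, ci in zip(a, cross(b, c)))

O = (1, 0, 0)
P, Q, R = (1, 2, 1), (1, 1, 3), (1, -1, 2)
U = (1, 4, 2)     # O + 2p
V = (1, 3, 9)     # O + 3q
W = (1, 2, -4)    # O - 2r

# The joining lines PU, QV, RW are concurrent in O
assert all(det3(O, A, B) == 0 for A, B in [(P, U), (Q, V), (R, W)])

# Intersections of corresponding sides
X = cross(cross(P, Q), cross(U, V))
Y = cross(cross(P, R), cross(U, W))
Z = cross(cross(Q, R), cross(V, W))

assert det3(X, Y, Z) == 0   # Desargues: X, Y, Z are collinear
```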

Pappus' theorem
As a second example, we prove Pappus' theorem. The methods are essentially the same as those used for
Desargues' theorem in the previous section.

Given two sets of collinear points P, Q, R, and U, V, W, the intersection points X, Y,
Z of the line pairs PV and QU, PW and RU, and QW and RV, are collinear.


Pappus' theorem.

We begin as before, by declaring that we wish to work in the plane.

¯"2 ;
As our example, we take the more general case in which the lines through the two sets of collinear
points are not parallel. For simplicity, and without loss of generality, we take their intersection
point as the origin. We can then define the points P, Q, R, and U, V, W by means of two vectors p
and u; and four scalars a, b, c, d.

P = ¯" + p;
Q = ¯" + a p;
R = ¯" + b p;
U = ¯" + u;
V = ¯" + c u;
W = ¯" + d u;
By definition these triplets of points are collinear.

The next stage is to compute the intersections X, Y, and Z. We do this by using ToCommonFactor and ToPointForm as in the previous section.


X = ToCommonFactor@HP Ô VL Ó HQ Ô ULD êê ToPointForm


Ha H-1 + cL ê H-1 + a cLL p + HH-1 + aL c ê H-1 + a cLL u + ¯#

Y = ToCommonFactor@HP Ô WL Ó HR Ô ULD êê ToPointForm


Hb H-1 + dL ê H-1 + b dLL p + HH-1 + bL d ê H-1 + b dLL u + ¯#

Z = ToCommonFactor@HQ Ô WL Ó HR Ô VLD êê ToPointForm


Ha b Hc - dL ê Ha c - b dLL p + HHa - bL c d ê Ha c - b dLL u + ¯#

We can now check whether these points are collinear by taking their exterior product; a value of zero proves the theorem.

F = ¯%@X Ô Y Ô ZD êê Simplify
0
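As with Desargues' theorem, a purely numeric check is straightforward. The Python sketch below (again an illustrative integer example of our own) puts P, Q, R and U, V, W on two lines through the origin and verifies that the three cross-intersections are collinear.

```python
def cross(a, b):
    """Join of two points, or meet of two lines, in homogeneous triples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def det3(a, b, c):
    """Zero exactly when the three points are collinear."""
    return sum(ai * ci for ai, ci in zip(a, cross(b, c)))

# P, Q, R on the line y = 0; U, V, W on the line x = 0
P, Q, R = (1, 1, 0), (1, 2, 0), (1, 3, 0)
U, V, W = (1, 0, 1), (1, 0, 2), (1, 0, 3)

X = cross(cross(P, V), cross(Q, U))
Y = cross(cross(P, W), cross(R, U))
Z = cross(cross(Q, W), cross(R, V))

assert det3(X, Y, Z) == 0   # Pappus: X, Y, Z are collinear
```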

Projective n-space
A projective n-space is essentially a bound n-space, that is, a linear space of n+1 dimensions, in
which one of the basis elements is interpreted as a point, say the origin, and the other n basis
elements are interpreted as vectors. In projective n-space (n ¥ 2), the concepts of the projective
plane carry over naturally.

Non-parallel (n-1)-planes intersect in (n-2)-planes. Parallel (n-1)-planes may be said to intersect in an (n-2)-plane-at-infinity. All the (n-2)-planes-at-infinity belong to a single (n-1)-plane-at-infinity (which is in fact the n-vector of the space).

Here, a 0-plane is a point, a 1-plane is a line, a 2-plane is a plane, and so on. Two (n-1)-planes may
be said to be parallel if their (n-1)-directions are the same (that is, their (n-1)-vectors are
congruent).

We can see how this works more concretely by thinking in terms of the actual bound elements
representing these geometric objects.

We can define two (n-1)-planes P and Q by

P = P Ô a ;  Q = Q Ô b

where P and Q are points and a and b are (n-1)-vectors.


The Common Factor Theorem shows that P and Q will have a common (n-1)-element, C say. If a is congruent to b, then C will be congruent to them both, and will be an (n-1)-vector. If a is not congruent to b, then C will be of the form C = R Ô g, where g is an (n-2)-vector, thus representing an (n-2)-plane.

Projective space terminology simply amounts to defining C as an (n-2)-plane-at-infinity.

All the C from all the possible intersections, being (n-1)-vectors, belong to the n-vector of the space. In projective space terminology: an (n-1)-plane-at-infinity.

In sum
Thus we can see that Projective Geometry differs in no essential way from the geometry which is a
natural consequence of Grassmann's algebra. The superficial differences, as is the case with many
mathematical systems covering the same applications, reside in definition or interpretation.

4.12 Regions of Space

Regions of space
Some geometric applications require an assessment of the region in which a point is located. Once
we have a test which determines if a point is inside or outside a region, we can then define the
region as the set of points which satisfy the test.

In this section we begin with the very simplest of regions, and build up to more complex ones of
higher dimension. A closed region of dimension r is bounded by regions of dimension r-1. A
closed region is often called a polytope, a generalization of the notion of polygon and polyhedron.

Regions of a plane
Consider a line L in a plane. The line will divide the plane into two regions (half-planes). A half-
plane region is an open region if it does not contain any points of the line.

Any point P will be located in either one of the half-planes or in the line itself.

We can form the exterior product PÔL. This exterior product will vary in weight as P varies over
the regions, but will only change sign when P crosses the line. The product (and hence the weight)
is of course zero when P is on the line itself.

Suppose now we have a reference point Q, not in the line, and we want to find if P and Q are on the
same side of the line. All we need to do is to take the ratio of PÔL to QÔL. If the ratio is positive, P
and Q are on the same side. If the ratio is negative, they are on opposite sides. If the ratio is zero, P
is in the line.


Regions of a plane divided by a line L

As can be seen from the figure above, this occurs because the ordering of the points in
the product PÔL is clockwise on one side of the line, and anti-clockwise on the other. (Remember
that the line L itself can be expressed as the product of any two points on it.) Because the line L
appears in both the numerator and denominator of the ratio, the ratio is invariant with respect to how the
line is expressed (for example, which two points are chosen to express it).

If we know in which region the point Q is located, we can therefore determine the region in which
the point P is located. In the sections below we will see how this simple result can be extended in a
boolean manner to regions of higher dimension, and regions with more complex boundaries.

Ï Example

As shown in the graphic below, let L be a line through the origin ¯" in the direction of the basis
vector e2 , the reference point Q be at ¯" + e1 , and the points P1 and P2 be at
¯" + a e1 + e2 and ¯" - b e1 + e2 respectively. The scalar coefficients a and b are strictly
positive.

Example: Regions of a plane divided by a line

The products of each of the three points with the line are:

Q Ô L ã H¯" + e1 L Ô ¯" Ô e2 ã - ¯" Ô e1 Ô e2


P1 Ô L ã H¯" + a e1 + e2 L Ô ¯" Ô e2 ã -a ¯" Ô e1 Ô e2

P2 Ô L ã H¯" - b e1 + e2 L Ô ¯" Ô e2 ã b ¯" Ô e1 Ô e2

The ratios of P1 Ô L and P2 Ô L with the reference product QÔL are:

P1 Ô L P2 Ô L
ãa >0 ã -b < 0
QÔL QÔL

Thus we conclude that P1 is on the same side of the line L as Q, while P2 is on the opposite side.
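Since ¯%@P Ô LD is just a 3-by-3 determinant of homogeneous coordinates when L is given by two of its points, the whole test can be sketched in a few lines of Python. The coordinates below reproduce the example above with the illustrative values a = 3 and b = 2 (our own choice).

```python
def det3(p, a, b):
    """Coefficient of P^A^B: its sign changes as P crosses the line AB,
       and it is zero when P lies on the line."""
    return (p[0]*(a[1]*b[2] - a[2]*b[1])
          - p[1]*(a[0]*b[2] - a[2]*b[0])
          + p[2]*(a[0]*b[1] - a[1]*b[0]))

O  = (1, 0, 0)    # the origin
E2 = (0, 0, 1)    # the direction e2 (a point of weight zero)
Q  = (1, 1, 0)    # reference point: origin + e1
P1 = (1, 3, 1)    # origin + 3 e1 + e2   (a = 3)
P2 = (1, -2, 1)   # origin - 2 e1 + e2   (b = 2)

ref = det3(Q, O, E2)
assert det3(P1, O, E2) / ref == 3    # ratio a > 0: same side as Q
assert det3(P2, O, E2) / ref == -2   # ratio -b < 0: opposite side
assert det3(O, O, E2) == 0           # a point of L gives a zero product
```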

Regions of a line
Before we explore more complex regions and higher dimensions, let us take a step back to consider
a line as the complete space (a 1-space), and a point R in the line as dividing it into two regions.
The point R now takes the role of the line L in the previous example. Suppose the 1-space has basis
{¯",e1 }, and for simplicity we take R to be the origin ¯".

Following the pattern of the example above, we let the reference point Q be the point ¯" + e1 , and
points P1 and P2 be the points ¯" + a e1 and ¯" - b e1 respectively. Here, as above, the scalar
coefficients a and b are strictly positive.

Example: Regions of a line divided by a point R

The products of each of these three points with the dividing point R are:

Q Ô R ã H¯" + e1 L Ô ¯" ã - ¯" Ô e1

P1 Ô R ã H¯" + a e1 L Ô ¯" ã -a ¯" Ô e1

P2 Ô R ã H¯" - b e1 L Ô ¯" ã b ¯" Ô e1

The quotients of P1 Ô R and P2 Ô R with the reference product QÔR are:

P1 Ô R P2 Ô R
ãa >0 ã -b < 0
QÔR QÔR

Thus we conclude that P1 is on the same side of the dividing point R as Q, while P2 is on the
opposite side.


Planar regions defined by two lines


Suppose now we have two intersecting lines L1 and L2 in the plane, and a known reference point
Q not on either line. We wish to find in which of the quadrants defined by the intersecting lines a
given point lies. In order to explore this we define the regions by four nominal test points P1 , P2 ,
P3 , P4 , one in each quadrant, making a pair of ratios for each point, one for each of the lines L1 and L2 .
For a given region, represented by the point Pi , we can form the ratio as we did above for each of
the lines Lj and portray them as follows:

Planar regions defined by two lines

Although, for easier comprehension, the graphic shows the two lines as if they were at right angles,
and the points as if they were equidistant from the intersection, remember that all the explorations in
this chapter are non-metric, and so angles and distances are at this stage undefined.

We can summarize the region definitions more succinctly in a table.


Region    Pi ÔL1 ê HQÔL1 L    Pi ÔL2 ê HQÔL2 L
P1        >0                  >0
P2        <0                  >0
P3        <0                  <0
P4        >0                  <0

Ï Example

To see how this works, suppose we know that some point Q is in some quadrant, and we would like
to know which quadrant another given point P is in. We simply calculate the two ratios and
compare their results to the table. We do this for a numerical example below.

First define the lines and the point Q:

L1 = H¯" + e2 L Ô H¯" + 2 e1 - e2 L;
L2 = H¯" - e2 L Ô H¯" + 2 e1 + e2 L;
Q = ¯" - 4 e1 + 3 e2 ;

To see if a point P is in the same quadrant as the reference point Q, we simply check that the signs
of both ratios PÔL1 ê HQÔL1 L and PÔL2 ê HQÔL2 L are positive. Below we write a function CheckP to take a random
point P and calculate the value of the combined predicate for it.

CheckP := ModuleB8P<,
P = ¯" + RandomReal@8-5, 5<D e1 + RandomReal@8-5, 5<D e2 ;
:P, ¯%@P Ô L1 D ê ¯%@Q Ô L1 D > 0 && ¯%@P Ô L2 D ê ¯%@Q Ô L2 D > 0>F

Below we check 10 random points and find that three of them return True for CheckP and are
thus in the same quadrant as Q.


Table@CheckP, 810<D êê TableForm


¯# + 2.50516 e1 - 0.477195 e2 False
¯# - 4.78253 e1 + 4.11593 e2 True
¯# - 0.511969 e1 + 2.70047 e2 False
¯# - 2.45479 e1 + 1.84245 e2 True
¯# + 2.7643 e1 - 2.33078 e2 False
¯# + 3.40323 e1 - 4.45599 e2 False
¯# + 3.68059 e1 - 4.49096 e2 False
¯# - 1.29608 e1 + 1.51291 e2 True
¯# + 3.90768 e1 - 1.83592 e2 False
¯# + 2.52696 e1 - 0.82049 e2 False
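The CheckP computation can be mirrored directly outside Mathematica: each product ¯%@P Ô Li D is a determinant, and P is in the same quadrant as Q when both determinants carry the same signs as Q's. A Python sketch using the same lines and reference point, with two test points of our own choosing:

```python
def det3(p, a, b):
    """Coefficient of P^A^B in homogeneous coordinates."""
    return (p[0]*(a[1]*b[2] - a[2]*b[1])
          - p[1]*(a[0]*b[2] - a[2]*b[0])
          + p[2]*(a[0]*b[1] - a[1]*b[0]))

def same_side(p, q, line):
    """True when p and q give like-signed products with the line."""
    a, b = line
    return det3(p, a, b) * det3(q, a, b) > 0

L1 = ((1, 0, 1), (1, 2, -1))    # through origin+e2 and origin+2e1-e2
L2 = ((1, 0, -1), (1, 2, 1))    # through origin-e2 and origin+2e1+e2
Q  = (1, -4, 3)

P_same  = (1, -2, 2)            # in the same quadrant as Q
P_other = (1, 3, 0)             # on the other side of L1

assert same_side(P_same, Q, L1) and same_side(P_same, Q, L2)
assert not same_side(P_other, Q, L1)
```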

Planar regions defined by three lines


Suppose now we have three intersecting lines L1 , L2 and L3 in the plane, and a known reference
point Q not on any line. We wish to find in which of the regions defined by the intersecting lines a
given point lies. We will assume the lines are not parallel and do not all intersect in a single point.
We define the regions by seven points P1 , P2 , P3 , P4 , P5 , P6 , P7 , one in each region, making a
triplet of ratios for each point, one for each of the three lines L1 , L2 and L3 .

Planar regions defined by three lines

For a given region, represented by the point Pi we can form the ratio as we did above for each of
the lines Lj and tabulate the regions as follows:


Region    Pi ÔL1 ê HQÔL1 L    Pi ÔL2 ê HQÔL2 L    Pi ÔL3 ê HQÔL3 L
P1        >0                  >0                  >0
P2        <0                  >0                  >0
P3        <0                  <0                  >0
P4        >0                  <0                  >0
P5        >0                  >0                  <0
P6        >0                  <0                  <0
P7        <0                  <0                  <0

Ï Example

Here are some actual lines and a reference point Q:

L1 = H¯" + e2 L Ô H¯" + 2 e1 - e2 L;
L2 = H¯" - e2 L Ô H¯" + 2 e1 + e2 L;
L3 = H¯" - e1 - 2 e2 L Ô H¯" + 3 e1 - 2 e2 L;
Q = ¯" - 3 e1 + e2 ;

To see if a point P is in the centre triangle (the region containing P4 ), we check that the signs of
the ratios are +, -, +, according to the table above, and hence write the predicate CheckP4 for
this triangular region as

CheckP4 := ModuleB8P<,
P = ¯" + RandomReal@8-3, 3<D e1 + RandomReal@8-3, 3<D e2 ;
:P, ¯%@P Ô L1 D ê ¯%@Q Ô L1 D > 0 && ¯%@P Ô L2 D ê ¯%@Q Ô L2 D < 0 && ¯%@P Ô L3 D ê ¯%@Q Ô L3 D > 0>F

Below we check 10 random points and find that only one of them returns True for CheckP4 and
is thus in the centre triangle.


Table@CheckP4, 810<D êê TableForm


¯# - 0.830005 e1 - 0.848417 e2 False
¯# - 2.94884 e1 - 2.22242 e2 False
¯# + 2.77172 e1 + 0.3699 e2 False
¯# + 0.13623 e1 - 0.109629 e2 False
¯# - 2.48096 e1 - 1.96646 e2 False
¯# - 2.17117 e1 + 0.0203384 e2 False
¯# + 1.02258 e1 + 1.51379 e2 False
¯# + 0.506026 e1 - 0.54325 e2 True
¯# - 1.15566 e1 + 0.842319 e2 False
¯# + 2.25353 e1 - 1.15175 e2 False

Creating a pentagonal region


In this section we explore a simple example of a pentagon in the plane defined by 5 (non-parallel)
lines. Each line is defined by the exterior product of two points. If we know the vertices of the
pentagon, then the most obvious way to construct these lines is as products of pairs of adjacent
vertices. However, it is just as easy if all we know are the lines forming the sides of the pentagon.
And it is more general since a line can be constructed as the product of any two points on a side or
its extension.

In the section which follows this one, we will extend this example to look at the construction of a 5-
star. To make use of common geometry in the two examples, we will define the sides of the
pentagon in this example by the vertices of the 5-star (as shown below). We locate the reference
point Q in the middle of the pentagon.

Each half-plane in the diagram, on the side of its line which includes the reference point Q, is
coloured an opaque grey. Regions which overlap therefore show in a darker grey.
There are four shades of grey shown. The pentagonal region is the overlap of all five half-planes,
and hence shows in the darkest grey. The lightest shade is the overlap of two half-planes.


Creating a pentagonal region by intersecting the half-planes which include the point Q.

To be specific, we define the points and lines involved.

Q = ¯";
P1 = ¯" + 2 e2 ;
P2 = ¯" + 2 e1 + e2 ;
P3 = ¯" + 2 e1 - 2 e2 ;
P4 = ¯" - 2 e1 - 2 e2 ;
P5 = ¯" - 2 e1 + e2 ;
L1 = P1 Ô P3 ;
L2 = P2 Ô P4 ;
L3 = P3 Ô P5 ;
L4 = P4 Ô P1 ;
L5 = P5 Ô P2 ;

Ï The pentagon in the star

The pentagon internal to the 5-star is the region (including the boundary) defined by the set of
points P such that the predicate FiveStarPentagon below is True.

FiveStarPentagon@P_D := AndB¯%@P Ô L1 D ê ¯%@Q Ô L1 D ¥ 0, ¯%@P Ô L2 D ê ¯%@Q Ô L2 D ¥ 0, ¯%@P Ô L3 D ê ¯%@Q Ô L3 D ¥ 0, ¯%@P Ô L4 D ê ¯%@Q Ô L4 D ¥ 0, ¯%@P Ô L5 D ê ¯%@Q Ô L5 D ¥ 0F;

For example, the reference point Q belongs to the pentagon.


FiveStarPentagon@QD
True

Ï The inside of the pentagon

The inside of the 5-star pentagon (excluding the boundary) is defined by replacing the sign ¥ by
the sign > in the formula above.

FiveStarPentagonInside@P_D := AndB¯%@P Ô L1 D ê ¯%@Q Ô L1 D > 0, ¯%@P Ô L2 D ê ¯%@Q Ô L2 D > 0, ¯%@P Ô L3 D ê ¯%@Q Ô L3 D > 0, ¯%@P Ô L4 D ê ¯%@Q Ô L4 D > 0, ¯%@P Ô L5 D ê ¯%@Q Ô L5 D > 0F;

Ï The boundary of the pentagon


The boundary of the 5-star pentagon is defined as the pentagon without its inside.

FiveStarPentagonBoundary@P_D :=
FiveStarPentagon@PD && Not@FiveStarPentagonInside@PDD

To check, we choose three points: one inside, one on, and one outside the pentagon boundary. For
the point on the boundary we choose the point M, say, mid-way between P2 and P5 . For the other
two we perturb the position of M by a small vertical displacement e. As expected, only the point M
on the boundary returns True.

M = HP2 + P5 L ê 2; e = .000001 e2 ;

FiveStarPentagonBoundary êü 8M - e, M, M + e<
8False, True, False<

Note that the pentagon boundary cannot be defined by a Boolean expression of predicates in which
the ratios are zero, because the lines from which the boundary is formed extend beyond the
boundary.

Ï The outside of the pentagon

The plane outside the 5-star pentagon is defined simply by the negation of the
FiveStarPentagon predicate. Since FiveStarPentagon includes the boundary,
FiveStarPentagonOutside excludes it.

FiveStarPentagonOutside@P_D := Not@FiveStarPentagon@PDD

For example, the vertices of the 5-star are outside the pentagon.


FiveStarPentagonOutside@ÒD & êü 8P1 , P2 , P3 , P4 , P5 <


8True, True, True, True, True<

Ï The vertices of the pentagon

Although in this example it has been unnecessary to know the vertices of the pentagon, we can of
course obtain these points by applying ToCommonFactor to each of the pairs of intersecting lines.

8A5 , A1 , A2 , A3 , A4 < = ToPointForm@ToCommonFactor@ÒDD & êü 8L1 Ó L2 , L2 Ó L3 , L3 Ó L4 , L4 Ó L5 , L5 Ó L1 <
:¯# + H10 ê 11L e1 + H2 ê 11L e2 , ¯# - H1 ê 2L e2 , ¯# - H10 ê 11L e1 + H2 ê 11L e2 , ¯# - H1 ê 2L e1 + e2 , ¯# + H1 ê 2L e1 + e2 >
We confirm that these are on the boundary of the pentagon:

FiveStarPentagonBoundary êü 8A5 , A1 , A2 , A3 , A4 <


8True, True, True, True, True<
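The same predicate is easy to mirror outside Mathematica. The Python sketch below (an illustration of our own) encodes the five lines by their defining point pairs and tests sign agreement with Q determinant by determinant; weighted integer coordinates, for example (2, 0, -1) for the vertex A1 = ¯" - e2 ê 2, keep the arithmetic exact.

```python
def det3(p, a, b):
    """Coefficient of P^A^B in homogeneous coordinates."""
    return (p[0]*(a[1]*b[2] - a[2]*b[1])
          - p[1]*(a[0]*b[2] - a[2]*b[0])
          + p[2]*(a[0]*b[1] - a[1]*b[0]))

# The 5-star vertices and the five lines of the pentagon example
P1, P2, P3 = (1, 0, 2), (1, 2, 1), (1, 2, -2)
P4, P5     = (1, -2, -2), (1, -2, 1)
lines = [(P1, P3), (P2, P4), (P3, P5), (P4, P1), (P5, P2)]
Q = (1, 0, 0)                       # the reference point: the origin

def in_pentagon(p):
    """True when p is in the pentagon or on its boundary."""
    return all(det3(p, a, b) * det3(Q, a, b) >= 0 for a, b in lines)

assert in_pentagon(Q)
assert in_pentagon((2, 0, -1))      # vertex A1 (weighted), on the boundary
assert not in_pentagon(P1)          # the star vertices lie outside
```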

Creating a 5-star region


We now define the 5-star region itself. That is, a Boolean function of the point P which will yield
True if P is inside the 5-star or on its boundary, and False otherwise. The Boolean function will
of course also contain, as parameters, the reference point Q and the lines defining the 5-star.

The vertices of the 5-star are the same as the points we chose in the pentagon example in the
previous section, and the edges of the 5-star are segments of the lines we chose.

There are many ways of creating a Boolean expression to define the 5-star. The most conceptually
straightforward way is to use either a disjunction or conjunction of sub-regions - each of which is
itself defined by a Boolean expression. A disjunction (Boolean Or) would be used when the region
being defined is composed of one or other of the sub-regions. A conjunction (Boolean And) would
be used when the region being defined is the overlap of the sub-regions.

We avoid discussing solutions requiring knowledge of the vertices of the inner pentagon, as these
may not be known (although of course, as shown in the previous section, they can be easily
calculated using the regressive product). However, in the diagram below we name them as points
Ai (opposite to Pi ) in order to explain our construction.


Ï Building the formula

Ì The triangle

We view the 5-star as two polygons: the triangle P2 P5 A1 , and the quadrilateral P1 P3 A1 P4 . The
triangular region can be defined as the intersection of the three half-planes defined by the lines L2 ,
L3 , L5 , and which include the point Q. This region is shown in the darkest grey in the figure below.

Triangle@P_D := AndB¯%@P Ô L2 D ê ¯%@Q Ô L2 D ¥ 0, ¯%@P Ô L3 D ê ¯%@Q Ô L3 D ¥ 0, ¯%@P Ô L5 D ê ¯%@Q Ô L5 D ¥ 0F

Creating a 5-star region, Stage I: creating the triangular region P2 P5 A1 .

Ì The quadrilateral

The region defined by the quadrilateral can be viewed as the intersection of the (infinite) region
below the lines L1 and L4 , and the (infinite) region above the lines L2 and L3 . The region below
the lines L1 and L4 can be defined by the conjunction of the regions on the positive side (relative to
Q) of the lines L1 and L4 .


BelowL1L4@P_D := AndB¯%@P Ô L1 D ê ¯%@Q Ô L1 D ¥ 0, ¯%@P Ô L4 D ê ¯%@Q Ô L4 D ¥ 0F

Creating a 5-star region, Stage IIa: creating the region under the lines L1 and L4 .

The region above the lines L2 and L3 can be defined by the disjunction of the regions on the
positive side (relative to Q) of the lines L2 and L3 . This region is shown in the figure below with
various opacities of red.


AboveL2L3@P_D := OrB¯%@P Ô L2 D ê ¯%@Q Ô L2 D ¥ 0, ¯%@P Ô L3 D ê ¯%@Q Ô L3 D ¥ 0F

Creating a 5-star region, Stage IIb: creating the region above the lines L2 and L3 .

The quadrilateral is then the conjunction of these two regions.

Quadrilateral@P_D := And@BelowL1L4@PD, AboveL2L3@PDD

Ì The 5-star

And the 5-star is the disjunction of the triangle and the quadrilateral.

FiveStar@P_D := Or@Triangle@PD, Quadrilateral@PDD

Ï Testing the formula

To test this formula, we can choose test points on, inside and outside the 5-star. The vertices are on
the 5-star, so should all yield True.

FiveStar êü 8P1 , P2 , P3 , P4 , P5 <


8True, True, True, True, True<

And so also are the vertices of the interior pentagon explored in the previous section.


FiveStar êü 8A5 , A1 , A2 , A3 , A4 <


8True, True, True, True, True<

We can add small perturbations to these vertices to create points just outside the 5-star. These
should all yield False.

d = .000001 e1 ; e = .000001 e2 ;
8FiveStar êü 8P1 + e, P2 + e, P3 + e, P4 + e, P5 + e<,
FiveStar êü 8A1 - e, A2 - d, A3 + e, A4 + e, A5 + d<<
88False, False, False, False, False<,
8False, False, False, False, False<<

Perturbations to create points just inside the 5-star should all yield True.

8FiveStar êü 8P1 - e, P2 - 2 d - e, P3 - d + e, P4 + d + e, P5 + 2 d - e<,


FiveStar êü 8A1 + e, A2 + d, A3 - e, A4 - e, A5 - d<<
88True, True, True, True, True<, 8True, True, True, True, True<<

Ï Defining a general 5-star region formula

To define a general explicit formula for a 5-star region, all we need to do is to collect the regions
into one expression.

FiveStar@P_, Q_, L_ListD :=
OrBAndB¯%@P Ô LP2TD ê ¯%@Q Ô LP2TD ¥ 0, ¯%@P Ô LP3TD ê ¯%@Q Ô LP3TD ¥ 0, ¯%@P Ô LP5TD ê ¯%@Q Ô LP5TD ¥ 0F,
AndBAndB¯%@P Ô LP1TD ê ¯%@Q Ô LP1TD ¥ 0, ¯%@P Ô LP4TD ê ¯%@Q Ô LP4TD ¥ 0F,
OrB¯%@P Ô LP2TD ê ¯%@Q Ô LP2TD ¥ 0, ¯%@P Ô LP3TD ê ¯%@Q Ô LP3TD ¥ 0FFF

This formula is valid for any set of five lines intersecting to form the 5-star, and any reference
point Q inside the 5-star region. We test the formula with the points just inside the vertices of the
pentagon as above.

FiveStar@Ò, Q, 8L1 , L2 , L3 , L4 , L5 <D & êü


8A1 + e, A2 + d, A3 - e, A4 - e, A5 - d<
8True, True, True, True, True<

Creating a 5-star pyramid


Having defined a region in the plane, we can easily extend it into three dimensions by choosing
points in the third dimension, and converting all the lines to planes by multiplying them by one or
another of these points.


Creating a 5-star pyramid.

To create a three-dimensionalized pyramidal-type 5-star we only need to take a single point S above the origin, and extend each of the lines through S to form five intersecting planes. The formula (predicate) for this is given by

FiveStar@P, Q, L Ô SD

Here, L is the list of lines forming the original 5-star. Since the exterior product operation Wedge
is Listable, taking the exterior product of this list of lines with the point S gives the
corresponding list of planes through S.

We then truncate the (infinite) pyramidal cone thus formed by a sixth plane equivalent to the plane
of the original planar 5-star by adding an extra predicate term. We define the plane P as the
exterior product of any three of the original 5-star vertices.

P = P1 Ô P2 Ô P3

We must also redefine the location of the reference point Q. Since the original point Q is now on
the boundary of the new three dimensional region, we will need to relocate it to an interior point.

The required predicate for the truncating plane is then


¯%@P Ô PD ê ¯%@Q Ô PD ¥ 0

We can now define the solid region FiveStar3D as the conjunction of these two regions.

FiveStar3D@P_D := AndBFiveStar@P, Q, L Ô SD, ¯%@P Ô PD ê ¯%@Q Ô PD ¥ 0F

Here is our original data for the planar 5-star:

P1 = ¯" + 2 e2 ;
P2 = ¯" + 2 e1 + e2 ;
P3 = ¯" + 2 e1 - 2 e2 ;
P4 = ¯" - 2 e1 - 2 e2 ;
P5 = ¯" - 2 e1 + e2 ;
L1 = P1 Ô P3 ;
L2 = P2 Ô P4 ;
L3 = P3 Ô P5 ;
L4 = P4 Ô P1 ;
L5 = P5 Ô P2 ;
L = 8L1 , L2 , L3 , L4 , L5 <;

We now change to a 3-space, and define the two new points, and the new plane.

¯"3 ;
Q = ¯" + e3 ;
S = ¯" + 2 e3 ;
P = P1 Ô P2 Ô P3 ;

To verify the formula, we can first check that the 5-star pyramid contains all its defining points.

FiveStar3D êü 8P1 , P2 , P3 , P4 , P5 , S<


8True, True, True, True, True, True<

We can also check that points clustered sufficiently close to the origin are inside the region.

Table@FiveStar3D@¯" + RandomReal@0.5D e1 +
RandomReal@0.5D e2 + RandomReal@0.5D e3 D, 810<D
8True, True, True, True, True, True, True, True, True, True<

We can add small perturbations to these vertices to create points just outside the 5-star. These
should all yield False.

e = .000001 e3 ; FiveStar3D êü H8P1 , P2 , P3 , P4 , P5 , S< + eL


8False, False, False, False, False, False<
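In 3-space the half-space test is the same construction one dimension up: ¯%@P Ô PD becomes a 4-by-4 determinant of homogeneous quadruples. A minimal Python sketch (our own coordinates), using the truncating plane z = 0 through three of the star vertices and the interior reference point Q = ¯" + e3 :

```python
def det3(a, b, c):
    """3x3 determinant of three coordinate triples."""
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
          - a[1]*(b[0]*c[2] - b[2]*c[0])
          + a[2]*(b[0]*c[1] - b[1]*c[0]))

def det4(p, a, b, c):
    """Coefficient of P^A^B^C: expand along the row of the test point p."""
    rows = (a, b, c)
    def minor(j):
        return det3(*[[r[k] for k in range(4) if k != j] for r in rows])
    return sum((-1) ** j * p[j] * minor(j) for j in range(4))

# Three star vertices spanning the truncating plane z = 0
A, B, C = (1, 0, 2, 0), (1, 2, 1, 0), (1, 2, -2, 0)
Q = (1, 0, 0, 1)                     # reference point above the plane

ref = det4(Q, A, B, C)
assert det4((1, 0, 0, 2), A, B, C) * ref > 0    # apex side: same as Q
assert det4((1, 0, 0, -1), A, B, C) * ref < 0   # below the plane
assert det4((1, 1, 1, 0), A, B, C) == 0         # in the plane itself
```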


Summary
† In 1-space, a hyperplane is a point.
In 2-space, a hyperplane is a line.
In 3-space, a hyperplane is a plane.
In n-space, a hyperplane is an (n-1)-plane.

† Hyperplanes divide their space into regions.

† If two points P and Q lie on the same side of a hyperplane ,, then the ratio R of the exterior
products of the two points with the hyperplane is positive:

R = HP Ô ,L ê HQ Ô ,L > 0

† If P and Q lie on opposite sides of ,, then R is negative:

R = HP Ô ,L ê HQ Ô ,L < 0

† If P lies on ,, then R is zero:

R = HP Ô ,L ê HQ Ô ,L ã 0

† The point Q may be considered as a reference point, and the region in which it is located as the
reference region.

† The point P may be considered as a variable point, and the region in which it is located as the
region of interest.

† Regions of space may be defined by Boolean functions of the predicates R > 0, R ã 0, R < 0, R
¥ 0, and R § 0.
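The predicates summarized above have a simple coordinate model: each exterior product of a point with a hyperplane is a determinant, so a region test reduces to comparing determinant signs against the reference point. A minimal planar sketch in Python (the lines and points are illustrative):

```python
def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def point(x, y):
    # a point in the plane: origin weight 1, then the coordinates
    return [1, x, y]

def ratio(p, q, a, b):
    # R = (P ^ H) / (Q ^ H), where H is the line through points a and b;
    # each exterior product reduces to a 3x3 determinant of coordinates
    return det3([p, a, b]) / det3([q, a, b])

def inside(p, q, lines):
    # Boolean conjunction of the predicates R > 0: the region of points
    # lying on the same side as the reference point q of every line
    return all(ratio(p, q, a, b) > 0 for a, b in lines)

# illustrative wedge region: the quadrant containing q = (1, 1)
lines = [(point(0, 0), point(0, 1)), (point(0, 0), point(1, 0))]
q = point(1, 1)
```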

4.13 Geometric Constructions

Geometric expressions
In this chapter, we have been exploring non-metric geometry involving geometric entities: points,
lines, planes, ..., hyperplanes. In this section we will explore expressions, called geometric
expressions, involving exterior and regressive products of these entities - the exterior product to
extend them, and the regressive product to intersect them. When geometric expressions are
equated to zero they become geometric equations, leading to corresponding algebraic equations
for curves and surfaces of various degrees.

One of the most enticing features of Grassmann algebra applied to geometry, which we will also
explore in this section, is the idea that some geometric expressions may not only be geometrically
depicted, but may also double as prescriptions for their geometric construction.

The types of geometric expressions we will explore are exemplified by that leading to the equation
to a conic section in the plane (discussed in the next section).

P Ô P1 Ô HL1 Ó HP0 Ô HL2 Ó HP Ô P2 LLLL ã 0

Here, the factors P0 , P1 , P2 , L1 and L2 are fixed (constant) entities defining the parameters of the
conic section. P is considered the independent variable, and since it occurs twice, the expression
represents an equation of the second degree. (If P had occurred m times in the expression, then the
equation would have been of degree m.)

The reduction of a geometric expression to an algebraic equation for a curve or surface requires
that the expression be either a scalar, or an n-element. The expression above is in fact the exterior
product of three points, which reduces to a 3-element in the plane (a linear space of 3 dimensions).
Each time a regressive product is simplified to its common factor, for example when two lines
intersect, the result is in general a weighted point multiplied by an arbitrary scalar congruence
factor (symbolized ¯c in GrassmannAlgebra). The zero on the right hand side of the equation
means that we can effectively ignore the weights and congruence factors, and visualize each
regressive product as resulting simply in a point. In computations however, it is most effective to
retain these weights to avoid coordinates with denominators.

For a geometric expression to result in an n-element it will usually be the exterior product of n
points (3 points for the plane, 4 points for space). Geometrically this may be interpreted as the
points all belonging to a hyperplane in the space, that is, collinear in the plane, coplanar in 3-space.

For a geometric expression to result in a scalar it will usually be the regressive product of n
hyperplanes (3 lines for the plane, 4 planes for 3-space). Geometrically this may be interpreted as
the hyperplanes all belonging to a point in the space, that is, three lines intersecting in one point in
the plane, four planes intersecting in one point in 3-space.

Ì Historical Note

Grassmann discusses geometric constructions in Chapter 5 (Applications to Geometry) in his


Ausdehnungslehre of 1862, and also in Crelle's Journal volumes XXXI, XXXVI and LII. Probably the most
comprehensive additional treatment is given by Alfred North Whitehead in Book IV of his Treatise on
Universal Algebra (Chapter IV: Descriptive Geometry and Chapter V: Descriptive Geometry of Conics and
Cubics) 1898.

Geometric equations for lines and planes


Before embarking on exploring the more general geometric expressions, we take this opportunity
to first review the simplest cases of geometric equations, those defining lines in the plane, and
planes in space.


Ï The equation of a line in the plane

Suppose we have a fixed line L through two points in the plane, and we want to derive its normal
algebraic equation. Let the line L be defined by the points Pa and Pb .

¯"2 ; Pa = ¯'a ; Pb = ¯'b ; L = Pa Ô Pb


H¯# + a1 e1 + a2 e2 L Ô H¯# + b1 e1 + b2 e2 L

The geometric equation to this line is PÔL ã 0.

P = ¯'x ; P Ô L ã 0
H¯# + e1 x1 + e2 x2 L Ô H¯# + a1 e1 + a2 e2 L Ô H¯# + b1 e1 + b2 e2 L ã 0

The algebraic equation to this line is obtained by simplifying this 3-element and extracting its
scalar coefficient by dividing through by the basis 3-element of the space

¯%@P Ô LD ê H¯" Ô e1 Ô e2 L ã 0
-a2 b1 + a1 b2 + a2 x1 - b2 x1 - a1 x2 + b1 x2 ã 0

Clearly this equation can be rearranged in the more usual forms

Ha2 - b2 L x1 + Hb1 - a1 L x2 + Ha1 b2 - a2 b1 L ã 0


x2 ã HHb2 - a2 L ê Hb1 - a1 LL x1 + Ha2 b1 - a1 b2 L ê Hb1 - a1 L
The first of these is often cast in the "hyperplane" form.

A x1 + B x2 + C ã 0
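This expansion is easy to confirm outside Mathematica: the coefficient of the basis 3-element of P Ô Pa Ô Pb is the 3×3 determinant of the coordinate rows (1, x1, x2), (1, a1, a2), (1, b1, b2). A quick Python check, with illustrative numeric values for the coordinates of Pa and Pb:

```python
def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

a1, a2, b1, b2 = 2.0, 5.0, -3.0, 1.0   # illustrative coordinates

def f(x1, x2):
    # coefficient of the basis 3-element of P ^ Pa ^ Pb
    return det3([[1, x1, x2], [1, a1, a2], [1, b1, b2]])

def g(x1, x2):
    # the collected form (a2 - b2) x1 + (b1 - a1) x2 + (a1 b2 - a2 b1)
    return (a2 - b2)*x1 + (b1 - a1)*x2 + (a1*b2 - a2*b1)
```

The two agree everywhere, and both vanish at Pa and Pb, confirming that the line passes through its defining points.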

Ï The equation of a plane in space

To determine the equation of a plane in 3-space we follow the same process.

¯"3 ; Pa = ¯'a ; Pb = ¯'b ; Pc = ¯'c ; P = ¯'x ; P = Pa Ô Pb Ô Pc ; P Ô P ã 0


H¯# + e1 x1 + e2 x2 + e3 x3 L Ô H¯# + a1 e1 + a2 e2 + a3 e3 L Ô
H¯# + b1 e1 + b2 e2 + b3 e3 L Ô H¯# + c1 e1 + c2 e2 + c3 e3 L ã 0

Simplifying this expression gives us the equation to a plane in 3-space in the more recognizable
form A + B x1 + C x2 + D x3 ã 0. Here we introduce the GrassmannAlgebra function
ExpandAndCollect for collecting the coefficients of the algebraic equation.


ExpandAndCollectB¯%@P Ô PD ê H¯" Ô e1 Ô e2 Ô e3 L, 8x1 , x2 , x3 <, 3F ã 0
-a3 b2 c1 + a2 b3 c1 + a3 b1 c2 - a1 b3 c2 - a2 b1 c3 +
a1 b2 c3 + Ha3 b2 - a2 b3 - a3 c2 + b3 c2 + a2 c3 - b2 c3 L x1 +
H-a3 b1 + a1 b3 + a3 c1 - b3 c1 - a1 c3 + b1 c3 L x2 +
Ha2 b1 - a1 b2 - a2 c1 + b2 c1 + a1 c2 - b1 c2 L x3 ã 0
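The same check works in 3-space: the coefficient of the basis 4-element is a 4×4 determinant, and collecting it in x1, x2, x3 reproduces the coefficients printed above. A hedged Python sketch with random numeric coordinates standing in for the symbolic a, b, c:

```python
import random

def det4(m):
    # Laplace expansion along the first row of a 4x4 matrix
    def det3(a):
        return (a[0][0]*(a[1][1]*a[2][2] - a[1][2]*a[2][1])
              - a[0][1]*(a[1][0]*a[2][2] - a[1][2]*a[2][0])
              + a[0][2]*(a[1][0]*a[2][1] - a[1][1]*a[2][0]))
    sign, total = 1, 0
    for j in range(4):
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += sign * m[0][j] * det3(minor)
        sign = -sign
    return total

random.seed(1)
a1, a2, a3 = [random.random() for _ in range(3)]
b1, b2, b3 = [random.random() for _ in range(3)]
c1, c2, c3 = [random.random() for _ in range(3)]

def direct(x1, x2, x3):
    # coefficient of the basis 4-element of P ^ Pa ^ Pb ^ Pc
    return det4([[1, x1, x2, x3], [1, a1, a2, a3],
                 [1, b1, b2, b3], [1, c1, c2, c3]])

def collected(x1, x2, x3):
    # the collected coefficients, as printed in the output above
    const = (-a3*b2*c1 + a2*b3*c1 + a3*b1*c2 - a1*b3*c2 - a2*b1*c3 + a1*b2*c3)
    k1 = (a3*b2 - a2*b3 - a3*c2 + b3*c2 + a2*c3 - b2*c3)
    k2 = (-a3*b1 + a1*b3 + a3*c1 - b3*c1 - a1*c3 + b1*c3)
    k3 = (a2*b1 - a1*b2 - a2*c1 + b2*c1 + a1*c2 - b1*c2)
    return const + k1*x1 + k2*x2 + k3*x3
```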

The same process applied to a hyperplane in n-space yields, as expected:

C + C1 x1 + C2 x2 + C3 x3 + … + Cn xn ã 0

The geometric equation of a conic section in the plane


The previous section introduced the simplest cases of geometric equations: those of linear
elements. We now turn our attention to quadratic elements, exploring how their equations may be
generated by constraining the position of a point in a more general manner than we used for the
linear elements.

The idea is that an equation of the general form PÔH ã 0 still holds, where P is a variable point,
and H is a hyperplane in the space. But for higher degree curves, H will itself need to be dependent
on the point P. Thus for a conic section, P will occur twice in the product, leading ultimately to an
equation of the second degree in its coordinates.

The hyperplane in a 2-space is a line, so in this section we are looking for ways in which this line
()a , say) can be nontrivially dependent on P.

In what follows, we will use double struck symbols )i for line parameters which will not appear in
the final expression. We will also use the congruence relation when defining intersections of lines,
since these normally lead to weighted points (although, as we have previously mentioned, the
weights are inconsequential in the final result). Thus we begin with

P Ô )a ã 0

If P is a simple exterior factor of )a , the equation is true for all P, leading us nowhere.

How can )a depend linearly on P, without P being an (exterior) factor of )a ? The answer lies in
the regressive product. Geometrically, a line is most meaningfully constructed as an exterior
product of points (P1 and Q1 , say). Hence

)a ã P1 Ô Q1

So one of these points must depend on P. Suppose it is Q1 . Thus in turn, Q1 must be a regressive
product, since in the plane the regressive product of two lines yields a point. Thus

Q1 ª L1 Ó )b

Again, allowing P to be a simple exterior factor of one of these lines leads to a trivial dependence,
so we leave one of them constant (L1 , say) and replace the other, )b by an exterior product of
points, neither of which is P.


)b ã P0 Ô Q2

As in the previous steps, we leave one factor constant, and express the other as a product (in this
case regressive).

Q2 ª L2 Ó )c

Finally, we are linearly far enough away from P on our chain of lines through points to come back
and intersect it non-trivially.

)c ã P Ô P2

We can now collect all these steps together

0 ã P Ô )a
)a ã P1 Ô Q1
Q1 ª L1 Ó )b
)b ã P0 Ô Q2
Q2 ª L2 Ó )c
)c ã P Ô P2

and eliminate )a , )b , )c , Q1 and Q2 to give the final equation for the conic section.

P Ô P1 Ô HL1 Ó HP0 Ô HL2 Ó HP Ô P2 LLLL ã 0 4.14

Note that since interchanging the order of factors in an exterior or regressive product changes at
most its sign, and that this change is inconsequential to the final result, we can if we wish, rewrite
the geometric equation in any of a number of ways, in particular, in reverse order.

HHHHP2 Ô PL Ó L2 L Ô P0 L Ó L1 L Ô P1 Ô P ã 0 4.15
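These geometric equations can be tested numerically using homogeneous coordinates for the plane: a point is a triple, the join of two points and the meet of two lines are both cross products, and the exterior product of three points is a 3×3 determinant (all only up to weights, which is enough for testing whether the equation vanishes). The hedged sketch below, with illustrative fixed entities rather than the book's symbolic ones, confirms that the conic equation vanishes when P coincides with P1 or P2, but not at a generic point:

```python
def cross(u, v):
    # join of two points, or meet of two lines, in homogeneous
    # coordinates [w, x, y] (up to an inconsequential weight)
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def conic(P, P0, P1, P2, L1, L2):
    # P ^ P1 ^ (L1 v (P0 ^ (L2 v (P ^ P2)))), up to weight factors
    Q2 = cross(L2, cross(P, P2))    # L2 v (P ^ P2)
    Q1 = cross(L1, cross(P0, Q2))   # L1 v (P0 ^ Q2)
    return det3([P, P1, Q1])

# illustrative fixed entities
P0 = [1, 6, 7]
P1 = [1, 2, 3]
P2 = [1, 4, 5]
L1 = cross([1, 8, 0], [1, 0, 9])    # line through (8,0) and (0,9)
L2 = cross([1, -1, 0], [1, 0, 3])   # line through (-1,0) and (0,3)
```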

The geometric equation as a prescription to construct


This geometric equation, either as a series of steps, or in its final form can be used to guide the
construction of points on a conic section. Thus, the equation can be viewed as a prescription to
construct a conic section. We begin with the second form of the equation above, and work from
left to right.


Constructing a conic section through 5 points

1. Construct the fixed points and lines: P0 , P1 , P2 , L1 , and L2 .


2. Construct a line )c through P2 to intersect line L2 in point Q2 .
3. Construct a line )b through Q2 and P0 to intersect line L1 in point Q1 .
4. Construct a line )a through Q1 and P1 to intersect line )c in point P.
5. Repeat the construction by drawing a new line )c through P2 .

We do of course still need to verify that this formula and construction leads to a scalar second
degree equation in two variables. This we do in the next section.

The algebraic equation of a conic section in the plane


To construct the algebraic equation for the conic section we essentially follow the same process as
we did with the geometric construction. In simplifying the regressive products, it will be
convenient to use the GrassmannAlgebra function CongruenceSimplify (through its
shorthand ¯1). CongruenceSimplify takes a nested expression involving exterior and
regressive products, and simplifies it, ignoring any scalar congruence factors ¯c.

First, define the variable point P.

¯"2 ; DeclareExtraScalarSymbols@x, yD;


P = ¯" + x e1 + y e2 ;

1. Define the fixed points and lines: P0 , P1 , P2 , L1 , and L2 .

DeclareExtraScalarSymbols@82, 3, 4, 5, 6, 7, 8, 9, ., +<D;


P0 = ¯" + 6 e1 + 7 e2 ;
P1 = ¯" + 2 e1 + 3 e2 ;
P2 = ¯" + 4 e1 + 5 e2 ;
L1 = H¯" + 8 e1 L Ô H¯" + 9 e2 L;
L2 = H¯" + . e1 L Ô H¯" + + e2 L;

2. Construct a line )c through P and P2 to intersect line L2 in (weighted) point Q2 .

Q2 = ¯1@L2 Ó HP Ô P2 LD
(The output is a weighted point whose weight and e1 , e2 coordinates are linear in x and y.)

3. Construct a line )b through Q2 and P0 to intersect line L1 in (weighted) point Q1 .

Q1 = ¯1@L1 Ó HP0 Ô Q2 LD
(The output is again a weighted point, its weight and coordinates linear in x and y.)

4. Construct a line )a through Q1 and P1 to intersect line )c in point P. Q1 , P1 and P are thus on
the same line, and the left hand side of the equation (which we call #1 ) becomes

#1 = ¯1@P Ô P1 Ô Q1 D
(The output is the basis 3-element ¯" Ô e1 Ô e2 multiplied by a polynomial of the second degree in x and y, with coefficients formed from the symbolic coordinates of the fixed points and lines.)

Dividing out the basis n-element and collecting the coefficients of x and y.


#2 = ExpandAndCollectB#1 ê H¯" Ô e1 Ô e2 L, 8x, y<, 2F ã 0

(The output is a general second-degree equation in x and y, its six coefficients formed from the symbolic coordinates of the fixed points and lines; it is too unwieldy to reproduce legibly here.)

To check if this gives the correct form, we can substitute in some values for the coordinates of the
points and lines.

#2 ê. 82 Ø 2, 3 Ø 1, 4 Ø 1, 5 Ø 2, 6 Ø 1, 7 Ø 1, 8 Ø 1, 9 Ø 0, . Ø 1, + Ø 3<

-3 + 6 x - 3 x2 + 2 x y - y2 ã 0

This is an ellipse.
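The classification is easy to confirm: for a curve whose second-degree part is A x^2 + B x y + C y^2, the curve is an ellipse precisely when B^2 - 4 A C < 0, and here A = -3, B = 2, C = -1. Setting x = 1 in the equation gives y = 0 or y = 2, so (1, 0) and (1, 2) are two points on the curve. In Python:

```python
def f(x, y):
    # the curve found above: -3 + 6x - 3x^2 + 2xy - y^2
    return -3 + 6*x - 3*x*x + 2*x*y - y*y

# discriminant of the second-degree part -3x^2 + 2xy - y^2
A, B, C = -3, 2, -1
disc = B*B - 4*A*C   # negative for an ellipse
```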

ContourPlotA-3 + 6 x - 3 x2 + 2 x y - y2 ã 0, 8x, 0, 3<, 8y, 0, 3<, GridLines Ø Automatic, ImageSize Ø 83 µ 72, 3 µ 72<E

[Plot: the ellipse, shown within the region 0 § x § 3, 0 § y § 3.]

An alternative geometric equation of a conic section in the plane




We can derive an alternative formula which is a little more symmetric. As in the previous
derivation, we still take the fundamental form of the equation to be the collinearity of three points.

P0 Ô Q1 Ô Q2 ã 0

But this time, P0 is a fixed point while Q1 and Q2 are functions of the variable point P. Because we
are looking for a more symmetric formula, let us suppose the form for Q1 is the same as that for Q2
in the previous formulation:

Qi ª Li Ó HP Ô Pi L

Here, the Li and Pi are fixed. The Qi lie on their lines Li .

The equation becomes

P0 Ô HL1 Ó HP Ô P1 LL Ô HL2 Ó HP Ô P2 LL ã 0 4.16

Thus we can imagine two fixed lines L1 and L2 and three fixed points P0 , P1 , and P2 . The
prescription to construct the conic section is as follows:

1. Construct the fixed points and lines: P0 , P1 , P2 , L1 , and L2 .


2. Construct a line through the point P0 to intersect L1 and L2 in Q1 and Q2 respectively.
3. Construct a line through Q1 and P1 .
4. Construct a line through Q2 and P2 .

The first line through P0 , Q1 and Q2 satisfies the condition that P0 Ô Q1 Ô Q2 ã 0.

The second and third lines through Qi and Pi satisfy the conditions that Qi Ô P Ô Pi ã 0. Thus
their intersection gives the point P.

We now have two formulas purporting to describe a conic section: the formula above, and the
formula we first derived.

$1 ã P Ô P1 Ô HL1 Ó HP0 Ô HL2 Ó HP Ô P2 LLLL

$2 ã P0 Ô HL1 Ó HP Ô P1 LL Ô HL2 Ó HP Ô P2 LL

To see if they are the same, we could choose a basis, substitute symbolic coordinates, and check
that the same second degree equation results. An alternative method is to prove the identity of the
formulas from the Common Factor Theorem by applying CongruenceSimplify (¯1) to the
symbolic expressions in the formulas. To indicate that the symbols for the lines represent elements
of grade 2 we underscript them with their grade. To indicate that the symbols for the points
represent elements of grade 1 we declare them as vector symbols.

¯"2 ; DeclareExtraVectorSymbols@8P, P0 , P1 , P2 <D;

A typical computation might then be to find a weighted intersection point:


¯1@L1 Ó HP Ô P1 LD

†P1 Ô L1 § P - †P Ô L1 § P1

Remember that the bracketing bar expressions are scalar quantities. †Z§ is to be interpreted as the
coefficient of the n-element Z when expressed in the current basis. See ToCongruenceForm.

We can expand a complete formula in a similar manner to get an alternative expression for the
conic section. For example, the first formula is

¯1@P Ô P1 Ô HL1 Ó HP0 Ô HL2 Ó HP Ô P2 LLLLD ã 0

H†P Ô L2 § †P2 Ô L1 § - †P Ô L1 § †P2 Ô L2 §L P Ô P0 Ô P1 + †P Ô L2 § †P0 Ô L1 § P Ô P1 Ô P2 ã 0    4.17

This is just one of a number of alternative forms. Expanding the second formula gives a different
expression. In order to show that they are in fact equal we need to note that in the formulae there
are four points involved, of which only three are independent in the plane. Hence we can arbitrarily
express one of them (say P) in terms of the other three. (We use the symbol wP to emphasize that
the expression we substitute is actually a weighted point).

wP = a P0 + b P1 + c P2 ;

In this form, the Pi can be visualized as representing a weighted point basis for the plane, and a, b
and c as variable coefficients of the weighted point wP in this basis. The formulae for comparison
then become

$1 = wP Ô P1 Ô HL1 Ó HP0 Ô HL2 Ó HwP Ô P2 LLLL;
$1 = GrassmannCollect@Expand@¯1@$1 DDD

Ha2 †P0 Ô L1 § †P0 Ô L2 § + a b †P0 Ô L1 § †P1 Ô L2 § + a c †P0 Ô L2 § †P2 Ô L1 § +
b c †P1 Ô L2 § †P2 Ô L1 § - b c †P1 Ô L1 § †P2 Ô L2 §L P0 Ô P1 Ô P2


$2 = P0 Ô HL1 Ó HwP Ô P1 LL Ô HL2 Ó HwP Ô P2 LL;
$2 = GrassmannCollect@Expand@¯1@$2 DDD

Ha2 †P0 Ô L1 § †P0 Ô L2 § + a b †P0 Ô L1 § †P1 Ô L2 § + a c †P0 Ô L2 § †P2 Ô L1 § +
b c †P1 Ô L2 § †P2 Ô L1 § - b c †P1 Ô L1 § †P2 Ô L2 §L P0 Ô P1 Ô P2

Clearly these two expressions are equal.

$1 ã $2
True

Conic sections through five points


Since it is well known that a unique conic section may be constructed through any five points in
the plane, we ought to be able to convert the formula above to one involving only these points,
rather than involving any lines. Already we can see directly from the formula that the conic must
pass through both P1 and P2 . Since we have supposed the lines to be non-parallel, they will intersect in
some point, O say. We can then pick two more points, one on each line, R1 and R2 say, with which
to define the lines.

Li ã O Ô Ri

But since any points on the lines (except O) will be satisfactory, we can choose them
advantageously as the intersections with the lines P0 Ô Pi .

R1 ª L1 Ó HP0 Ô P2 L
R2 ª L2 Ó HP0 Ô P1 L

The point P0 is then the intersection of the lines P1 Ô R2 and P2 Ô R1 .

P0 ª HP1 Ô R2 L Ó HP2 Ô R1 L

On making these substitutions the equation becomes

P0 Ô HL1 Ó HP Ô P1 LL Ô HL2 Ó HP Ô P2 LL ã 0

HHP1 Ô R2 L Ó HP2 Ô R1 LL Ô
4.18
HHO Ô R1 L Ó HP Ô P1 LL Ô HHO Ô R2 L Ó HP Ô P2 LL ã 0

Note the symmetry. There are five fixed points, O, P1 , P2 , R1 , R2 , and the variable point P. Each
point occurs twice. Swapping P1 and P2 while at the same time swapping R1 and R2 leaves the
equation unchanged.

We can show that these five points satisfy the equation, and thus all lie on the conic section. But
since a conic section constructed through five points is unique, this equation represents the general
conic section, or general quadric curve in the plane.

We show this as follows. First, we can immediately read off from the equation that it is satisfied
when P coincides with either P1 or P2 . To show it for O, R1 and R2 we can expand the regressive
products by the Common Factor Theorem, as we did in the previous section, to obtain a different
view of the equation.

DeclareExtraVectorSymbols@8P, O, P1 , P2 , R1 , R2 <D;
11 =
¯1@HHP1 Ô R2 L Ó HP2 Ô R1 LL Ô HHO Ô R1 L Ó HP Ô P1 LL Ô HHO Ô R2 L Ó HP Ô P2 LLD ã 0
†O Ô P Ô P1 § †P Ô P2 Ô R2 § †P2 Ô R1 Ô R2 § O Ô P1 Ô R1 -
†O Ô P Ô P2 § †P Ô P1 Ô R1 § †P2 Ô R1 Ô R2 § O Ô P1 Ô R2 +
†O Ô P Ô P1 § †P Ô P2 Ô R2 § †P1 Ô P2 Ô R1 § O Ô R1 Ô R2 -
†O Ô P Ô P1 § †O Ô P Ô P2 § †P2 Ô R1 Ô R2 § P1 Ô R1 Ô R2 ã 0

It is clear from the first factor in each of the terms that the equation is satisfied when P is equal to
O. But it is not yet clear that the equation is satisfied when P is equal to R1 or R2 . To show this we
try applying CongruenceSimplify to a rearranged form where the factors of the regressive
products are reversed.

12 =
¯1@HHP2 Ô R1 L Ó HP1 Ô R2 LL Ô HHP Ô P1 L Ó HO Ô R1 LL Ô HHP Ô P2 L Ó HO Ô R2 LLD ã 0
†O Ô P Ô R1 § †O Ô P2 Ô R2 § †P1 Ô R1 Ô R2 § P Ô P1 Ô P2 -
†O Ô P Ô R1 § †O Ô P2 Ô R2 § †P1 Ô P2 Ô R2 § P Ô P1 Ô R1 +
†O Ô P Ô R2 § †O Ô P1 Ô R1 § †P1 Ô P2 Ô R2 § P Ô P2 Ô R1 -
†O Ô P Ô R1 § †O Ô P Ô R2 § †P1 Ô P2 Ô R2 § P1 Ô P2 Ô R1 ã 0

Now we see that not only is the equation satisfied when P is equal to O, but also when P is equal to
R1 . We try again with the factors of only the second and third regressive products reversed.

13 =
¯1@HHP1 Ô R2 L Ó HP2 Ô R1 LL Ô HHP Ô P1 L Ó HO Ô R1 LL Ô HHP Ô P2 L Ó HO Ô R2 LLD ã 0
-†O Ô P Ô R2 § †O Ô P1 Ô R1 § †P2 Ô R1 Ô R2 § P Ô P1 Ô P2 +
†O Ô P Ô R1 § †O Ô P2 Ô R2 § †P1 Ô P2 Ô R1 § P Ô P1 Ô R2 -
†O Ô P Ô R2 § †O Ô P1 Ô R1 § †P1 Ô P2 Ô R1 § P Ô P2 Ô R2 +
†O Ô P Ô R1 § †O Ô P Ô R2 § †P1 Ô P2 Ô R1 § P1 Ô P2 Ô R2 ã 0

This time we see immediately that the equation is satisfied when P is equal to R2 .

Of course it would have been more direct to show these sorts of results by replacing the point by P
in the equation, and then simplifying. For example, replacing R2 in the above equation, then
simplifying, gives zero as expected.

¯1@HHP1 Ô PL Ó HP2 Ô R1 LL Ô HHP Ô P1 L Ó HO Ô R1 LL Ô HHP Ô P2 L Ó HO Ô PLLD


0

We have thus shown that the five fixed points, O, P1 , P2 , R1 , R2 , all lie on the conic section.
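Equation 4.18 can also be verified numerically, using homogeneous coordinates for the plane: points are triples, joins of points and meets of lines are cross products, and the exterior product of three points is a 3×3 determinant, all up to inconsequential weights. With five illustrative points in general position, the sketch below confirms that each of the five fixed points satisfies the equation while a generic point does not:

```python
def cross(u, v):
    # join of two points, or meet of two lines, in homogeneous
    # coordinates [w, x, y] (up to an inconsequential weight)
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def conic5(P, O, P1, P2, R1, R2):
    # ((P1 ^ R2) v (P2 ^ R1)) ^ ((O ^ R1) v (P ^ P1)) ^ ((O ^ R2) v (P ^ P2))
    A = cross(cross(P1, R2), cross(P2, R1))
    B = cross(cross(O, R1), cross(P, P1))
    C = cross(cross(O, R2), cross(P, P2))
    return det3([A, B, C])

# five illustrative points in general position
O  = [1, 0, 0]
P1 = [1, 2, 0]
P2 = [1, 0, 2]
R1 = [1, 3, 1]
R2 = [1, 1, 3]
```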

Here is an hyperbola as a graphic example, plotted from the original formula above.

[Figure: the fixed points O, P1 , P2 , R1 , R2 , the lines L1 and L2 , and a constructed point P with its auxiliary points P0 , Q1 and Q2 .]

Constructing an hyperbola through 5 points

And here is a circle, the only difference being the relative location of the five points.


[Figure: the same construction with the five points relocated.]

Constructing a circle through 5 points

Dual constructions
The Principle of Duality discussed in Chapter 3 says that to every formula such as

P0 Ô HL1 Ó HP Ô P1 LL Ô HL2 Ó HP Ô P2 LL ã 0

there corresponds a dual formula in which exterior and regressive products are interchanged, and
m-elements are interchanged with (n-m)-elements: in this case, points are interchanged with lines.
Thus we can write another formula for a conic section as

:1 = L0 Ó HP1 Ô H; Ó L1 LL Ó HP2 Ô H; Ó L2 LL;

where the Pi are fixed points, the Li are fixed lines, and ; is a variable line.

This formula says that three lines intersect at one point. One line L0 is fixed. The other two are of
the form

)i ã Pi Ô H; Ó Li L

To see most directly how such a formula translates into an algebraic equation, we take a numerical
example with fixed points and lines defined:


P1 = ¯" + 2 e1 + e2 ;
P2 = ¯" + e1 + 2 e2 ;
L0 = H¯" + 5 e1 L Ô H¯" + 4 e2 L;
L1 = ¯" Ô H¯" + e1 L;
L2 = H¯" - e1 L Ô H¯" + 3 e2 L;
We can define the variable line as the product of its intersections with the coordinate axes:

¯"2 ; DeclareExtraScalarSymbols@x, yD;


; = H¯" + x e1 L Ô H¯" + y e2 L;
Substituting these values into the geometric equation and simplifying gives

Expand@¯1@:1 DD ã 0

198 x y - 90 x2 y - 21 y2 - 80 x y2 + 37 x2 y2 ã 0

We can solve for y in terms of x, but remember that these are line coordinates, not point
coordinates.

y = 18 H-11 x + 5 x2 L ê H-21 - 80 x + 37 x2 L;
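As an independent numeric check (outside GrassmannAlgebra), the solved branch y(x) does satisfy the tangential equation:

```python
def F(x, y):
    # the tangential (line-coordinate) equation of the conic
    return 198*x*y - 90*x*x*y - 21*y*y - 80*x*y*y + 37*x*x*y*y

def y_of_x(x):
    # the branch solved for y above
    return 18*(-11*x + 5*x*x) / (-21 - 80*x + 37*x*x)
```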
By plotting a line for each pair of line coordinates, we see that they form the envelope of a conic
section as expected.

Graphics@Table@Line@882 x, -y<, 8-x, 2 y<<D, 8x, -20, 20, .05<D, PlotRange Ø 88-1, 3<, 8-.5, 3<<D

Constructing conics in space

To construct a conic in 3-space we need only take a 2-space formula such as

P Ô P1 Ô HL1 Ó HP0 Ô HL2 Ó HP Ô P2 LLLL ã 0

and replace the 2-space hyperplanes (the lines) with 3-space hyperplanes (planes). Thus we can
rewrite the formula as

P Ô P1 Ô HP1 Ó HP0 Ô HP2 Ó HP Ô LLLLL ã 0 4.19

Note that the point P2 becomes the line L in order to make the innermost term into a plane.

1. Construct the fixed points, lines and planes: P0 , P1 , L, P1 , and P2 .

2. Construct a plane Pc through L to intersect plane P2 in line L2 .

L2 ª P2 Ó Pc
Pc ã P Ô L

3. Construct a plane Pb through line L2 and point P0 to intersect plane P1 in line L1 .

L1 ª P1 Ó Pb
Pb ã P0 Ô L2

4. Construct a plane Pa through line L1 and point P1 to intersect line Pc in point P.

0 ã P Ô Pa
Pa ã P1 Ô L1

5. Repeat the construction by drawing a new plane Pc through L.

We now derive the algebraic equation in the same manner as we have previously for the planar
cases.

First, define the variable point P and the fixed entities.

¯A; ¯"3 ; DeclareExtraScalarSymbols@x, y, zD;


P = ¯" + x e1 + y e2 + z e3 ;

DeclareExtraScalarSymbols@
82, 3, 4, *, 6, 7, 8, 9, ., +, <, =, >, ), ?, @<D;
P0 = ¯" + 2 e1 + 3 e2 + 4 e3 ;
P1 = ¯" + * e1 + 6 e2 + 7 e3 ;
L = H¯" + 8 e1 + 9 e2 L Ô H¯" + . e2 + + e3 L;
P1 = H¯" + < e1 L Ô H¯" + = e2 L Ô H¯" + > e3 L;
P2 = H¯" + ) e1 L Ô H¯" + ? e2 L Ô H¯" + @ e3 L;


Next, compute the algebraic formula.

A1 = ExpandAndCollectB¯1@P Ô P1 Ô HP1 Ó HP0 Ô HP2 Ó HP Ô LLLLLD ê H¯" Ô e1 Ô e2 Ô e3 L, 8x, y, z<, 3F

(The output is a second-degree polynomial in x, y and z, its ten coefficients formed from the symbolic coordinates of the fixed points, line and planes; it is far too lengthy to reproduce legibly here.)

Substituting some random values for the coordinates of the fixed points, line and planes gives us
the equation:

4.753 - 0.900375 z + 2.06412 z2 - 12.6175 x + 2.27084 z x + 3.12987 x2 + 2.82363 y + 2.80525 z y + 2.39794 x y - 0.486937 y2 ã 0

The graphic below depicts this conic together with its defining points, line and planes. The
construction lines are not shown as they are difficult to portray unambiguously in one view.

Constructing a conic from two points, a line and two planes

Note that the line L and the point P1 are on the surface. This is clear from the geometric equation.


A geometric equation for a cubic in the plane


To this point we have explored just second degree equations in the plane and space. In this section
we will explore a geometric formula for a third degree equation (cubic) in the plane. As for the
quadratic equation, we will assemble the formula from the condition that three points are collinear,
but this time, all three points will depend non-trivially on the variable point P. The point P itself
will, instead of being at the intersection of two lines, be at the intersection of three lines. We will
write down the proposed formula, and then show that it leads to a cubic equation for the
coordinates of the point P. Note carefully that this cubic formula is not a prescription for
constructing P, since P is not determined here as the simple intersection of two elements.

:1 = HL1 Ó HP Ô P1 LL Ô HL2 Ó HP Ô P2 LL Ô HL3 Ó HP Ô P3 LL;

As before we define the points and lines in terms of coordinates.

¯"2 ; DeclareExtraScalarSymbols@x, yD;


DeclareExtraScalarSymbols@82, 3, 4, 5, 6, 7, 8, 9, ., +, <, =<D;
P = ¯" + x e1 + y e2 ;
P1 = ¯" + 2 e1 + 3 e2 ;
P2 = ¯" + 4 e1 + 5 e2 ;
P3 = ¯" + 6 e1 + 7 e2 ;
K1 = ¯" + 8 e1 + 9 e2 ;
K2 = ¯" + . e1 + + e2 ;
K3 = ¯" + < e1 + = e2 ;
L1 = K2 Ô K3 ;
L2 = K1 Ô K3 ;
L3 = K2 Ô K1 ;

A typical factor of the formula, for example the first, simplifies to the weighted point:

Q1 = ¯1@L1 Ó HP Ô P1 LD
HH$ - 4L H2 - xL + H-( + 3L H1 - yLL ¯# +
HH$ 3 - ( 4L H2 - xL + H-( + 3L H1 x - 2 yLL e1 +
HH$ 3 - ( 4L H1 - yL + H-$ + 4L H1 x - 2 yLL e2

Remember that ¯1 is a shorthand for CongruenceSimplify. It automatically eliminates the
congruence factor ¯c arising from the regressive product in the current basis.

The complete formula simplifies to


:2 = ¯1@:1 D ê H¯" Ô e1 Ô e2 L ã 0
HH$ 3 - ( 4L H1 - yL + H-$ + 4L H1 x - 2 yLL
HHH/ - $L H0 - xL - H- - (L H. - yLL HH/ 3 - - 4L H, - xL +
H-- + 3L H+ x - , yLL + HH/ - 4L H, - xL + H-- + 3L H+ - yLL
HH/ ( - - $L H-0 + xL + H- - (L H. x - 0 yLLL +
H-H$ 3 - ( 4L H2 - xL - H-( + 3L H1 x - 2 yLL
HHH/ - $L H0 - xL - H- - (L H. - yLL
HH/ 3 - - 4L H+ - yL + H-/ + 4L H+ x - , yLL +
HH/ - 4L H, - xL + H-- + 3L H+ - yLL
HH-/ ( + - $L H. - yL + H/ - $L H. x - 0 yLLL +
HH$ - 4L H2 - xL + H-( + 3L H1 - yLL
HH-H/ 3 - - 4L H+ - yL + H/ - 4L H+ x - , yLL
HH/ ( - - $L H-0 + xL + H- - (L H. x - 0 yLL +
HH/ 3 - - 4L H, - xL + H-- + 3L H+ x - , yLL
HH-/ ( + - $L H. - yL + H/ - $L H. x - 0 yLLL ã 0

It is easy to check that the expression is indeed a cubic, either by inspection, or by application of
ExpandAndCollect, as in the previous section.
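It is perhaps also instructive to see why the degree is three: each factor Qi = Li meet (P join Pi) is linear in the coordinates of P, and the exterior product of three points in the plane is a 3 x 3 determinant of their coordinates. The following is a minimal numeric sketch in Python, using homogeneous coordinates (where join of points and meet of lines are both cross products) and arbitrary rational stand-in data, not the symbolic scalars used above:

```python
import math
from fractions import Fraction as Fr

def cross(a, b):
    # Join of two points, or meet of two lines, in homogeneous coordinates.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def det3(u, v, w):
    # Exterior product of three points: zero exactly when they are collinear.
    c = cross(v, w)
    return u[0] * c[0] + u[1] * c[1] + u[2] * c[2]

def point(x, y):
    return (Fr(x), Fr(y), Fr(1))

# Arbitrary rational stand-ins for the fixed points and lines (assumed data).
P1, P2, P3 = point(2, 3), point(4, 5), point(6, 7)
K1, K2, K3 = point(8, 9), point(1, -2), point(-3, 5)
L1, L2, L3 = cross(K2, K3), cross(K1, K3), cross(K2, K1)

def F(P):
    # Determinant of the three points Qi = Li meet (P join Pi).
    Qs = [cross(L, cross(P, Pi)) for L, Pi in ((L1, P1), (L2, P2), (L3, P3))]
    return det3(*Qs)

# The curve passes through P1, P2, P3: each choice makes one factor Qi vanish.
through_fixed_points = [F(P1), F(P2), F(P3)]

# Restricted to any affine line P(t), F has degree at most 3 in t, so its
# fourth finite difference vanishes identically (exact rational arithmetic).
f = [F(point(t, 2 * t + 1)) for t in range(5)]
d4 = sum((-1) ** k * math.comb(4, k) * f[4 - k] for k in range(5))
```

Since each Qi vanishes when P coincides with the corresponding Pi, the fixed points automatically lie on the curve; the vanishing fourth difference confirms the degree bound.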

It is instructive perhaps to see a graphical depiction of how the formula works. In the graphic
below we plot a typical cubic in the plane defined by the three points P1 , P2 , P3 , and the three lines
L1 , L2 , and L3 . The point P is located by the condition that the three points Q1 , Q2 , and Q3 are
concurrent, where each of the points Qi is defined as the intersection of the line P Ô Pi with the line
Li .

Qi = Li Ó HP Ô Pi L
As can be seen from the graphic, the cubic curve passes through nine special points: the three
points P1 , P2 , P3 , the three intersections of the three lines L1 , L2 , and L3 , and the intersections Ri
of these three lines with the lines through the points Pi taken two at a time.

:1 = HL1 Ó HP Ô P1 LL Ô HL2 Ó HP Ô P2 LL Ô HL3 Ó HP Ô P3 LL

R1 = L1 Ó HP2 Ô P3 L
R2 = L2 Ó HP3 Ô P1 L
R3 = L3 Ó HP1 Ô P2 L

These special points are displayed as large semi-opaque red points (perhaps behind a blue point).


A cubic equation defined by three points and three lines

Pascal's Theorem
Our discussion up to this point leads us to Pascal's Theorem, which may be stated: If a hexagon is
inscribed in a conic, then its three pairs of opposite sides intersect in three collinear points.

In the previous sections we have shown that the geometric equation below, involving five fixed
points and a variable point P, leads to the algebraic equation of a conic section:

HHP1 Ô R2 L Ó HP2 Ô R1 LL Ô HHO Ô R1 L Ó HP Ô P1 LL Ô HHO Ô R2 L Ó HP Ô P2 LL ã 0


This is therefore the converse to Pascal's Theorem: If opposite sides of a hexagon intersect in three
collinear points, then the hexagon may be inscribed in a conic.

To explore Pascal's Theorem further we find it conceptually advantageous to start with a fresh
notation. We denote the six points by P1 , P2 , P3 , P4 , P5 , and P6 , and suppose that they are placed
on the conic in arbitrary (but of course separate) positions. Let us consider the hexagon formed by
these points in order, so that the sides S1 to S6 may be written, starting with P1 as

S1 = P1 Ô P2 ; S2 = P2 Ô P3 ; S3 = P3 Ô P4 ; S4 = P4 Ô P5 ; S5 = P5 Ô P6 ; S6 = P6 Ô P1 ;

The pairs of opposite sides can then be intersected to give the collinear points Z1 , Z2 and Z3 .

Z1 = S1 Ó S4 ; Z2 = S2 Ó S5 ; Z3 = S3 Ó S6 ;

We will call these points Pascal points. The Pascal line ) on which these points lie can be written
alternatively as


) ª Z1 Ô Z2 ª Z1 Ô Z3 ª Z2 Ô Z3

The formula is then constructed from the condition that the points Z1 , Z2 , and Z3 all lie on ), that
is, their exterior product is zero.

Z1 Ô Z2 Ô Z3 ã 0
HP1 Ô P2 Ó P4 Ô P5 L Ô HP2 Ô P3 Ó P5 Ô P6 L Ô HP3 Ô P4 Ó P6 Ô P1 L ã 0

HHP1 Ô P2 L Ó HP4 Ô P5 LL Ô HHP2 Ô P3 L Ó HP5 Ô P6 LL Ô HHP3 Ô P4 L Ó HP6 Ô P1 LL ã 0        4.20

The graphic below depicts these entities for six points on a circle. The sides are shown in dark
green and their extensions in light green. The Pascal line for this hexagon is shown in red.

Z2
S5
P5
P6
S6

S1
P2 Z1 P1

S2 S4

S3
P3 P4 Z3

Pascal's theorem

As may be observed, this is essentially the same graphic as Constructing a circle through 5 points
in the section above on Conic sections through five points, except that the points have been
renamed as follows:

P1 öP1 , P2 öP5 , R1 öP4 , R2 öP2 ,


OöP3 , PöP6 , Q1 öZ3 , Q2 öZ2 , P0 öZ1


Ï Demonstration of Pascal's Theorem in an ellipse

The diagram above demonstrates Pascal's Theorem in a conic in which two opposite sides cross
within the conic, hence making the Pascal line intersect it. The following example demonstrates the
Theorem for an ellipse in which all the opposite sides intersect outside the conic. We take the
ellipse

(x/4)^2 + (y/3)^2 == 1
and choose some symmetrically placed points around it:

EllipsePoints =
:P1 Ø ¯" + 3 e2 , P2 Ø ¯" + 2 e1 - (3 Sqrt[3]/2) e2 , P3 Ø ¯" + 4 e1 ,
P4 Ø ¯" - 3 e2 , P5 Ø ¯" - 4 e1 , P6 Ø ¯" - 2 e1 - (3 Sqrt[3]/2) e2 >;
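As a quick sanity check, the six chosen points do satisfy the ellipse equation. A Python sketch, with the coordinates read off the definition above:

```python
import math

# The six chosen points (assumed coordinates) on (x/4)^2 + (y/3)^2 == 1.
s = 3 * math.sqrt(3) / 2
ellipse_points = [(0, 3), (2, -s), (4, 0), (0, -3), (-4, 0), (-2, -s)]

residuals = [(x / 4) ** 2 + (y / 3) ** 2 - 1 for x, y in ellipse_points]
on_ellipse = all(abs(r) < 1e-12 for r in residuals)
```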
We defined the sides Si above by

S1 = P1 Ô P2 ; S2 = P2 Ô P3 ; S3 = P3 Ô P4 ; S4 = P4 Ô P5 ; S5 = P5 Ô P6 ; S6 = P6 Ô P1 ;

To compute the three Pascal points we take the regressive product of opposite sides, substitute in
the values EllipsePoints, and use CongruenceSimplify (in its short form ¯1) and
ToPointForm to do the simplifications.

¯"2 ; 8Z1 , Z2 , Z3 < =


ToPointForm@¯1@8S1 Ó S4 , S2 Ó S5 , S3 Ó S6 < ê. EllipsePointsDD

9¯# + 4 I-1 + 3 M e1 - 3 3 e2 ,
¯# - 3 3 e2 , ¯# + I4 - 4 3 M e1 - 3 3 e2 =

Pascal's Theorem says that these three points are collinear, hence their exterior product should be
zero.

¯%@Z1 Ô Z2 Ô Z3 D
0
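The same check can be sketched numerically outside Mathematica. In homogeneous coordinates the join of two points and the meet of two lines are both given by cross products, and three points are collinear exactly when their 3 x 3 determinant vanishes. The coordinates below are read off the EllipsePoints definition above:

```python
import math

def cross(a, b):
    # Join of two points, or meet of two lines, in homogeneous coordinates.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

s = 3 * math.sqrt(3) / 2
P = {1: (0.0, 3.0, 1.0), 2: (2.0, -s, 1.0), 3: (4.0, 0.0, 1.0),
     4: (0.0, -3.0, 1.0), 5: (-4.0, 0.0, 1.0), 6: (-2.0, -s, 1.0)}

S = {i: cross(P[i], P[i % 6 + 1]) for i in range(1, 7)}   # sides S1..S6
Z1, Z2, Z3 = cross(S[1], S[4]), cross(S[2], S[5]), cross(S[3], S[6])

# Collinearity of the three Pascal points: their 3x3 determinant vanishes.
c = cross(Z2, Z3)
pascal_det = Z1[0] * c[0] + Z1[1] * c[1] + Z1[2] * c[2]
```

Up to floating-point error the determinant is zero, and the affine coordinates of Z2, for example, recover the point at (0, -3 Sqrt[3]) computed above.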

The Pascal line may be obtained as the exterior product of any pair of the points.

8¯%@Z1 Ô Z2 D, ¯%@Z1 Ô Z3 D, ¯%@Z2 Ô Z3 D<

9I4 - 4 3 M ¯# Ô e1 + 12 I-3 + 3 M e1 Ô e2 ,
I8 - 8 3 M ¯# Ô e1 + 24 I-3 + 3 M e1 Ô e2 ,
I4 - 4 3 M ¯# Ô e1 + 12 I-3 + 3 M e1 Ô e2 =


Since the weight is not important, we can divide through by it and simplify to obtain the Pascal line
in the form

) ª J¯" - 3 3 e2 N Ô e1

P1

S6 S1
P5 P3

S5 S2

S4 S3

P6 P2
P4

Z3 Z2 Z1

Pascal's theorem in an ellipse

Ï Demonstration of Pascal's Theorem for a regular hexagon


An interesting case occurs when we choose the conic to be a circle and the hexagon to be a regular
hexagon, since the pairs of opposite sides are then parallel, and hence do not intersect in ordinary
points. We begin by generating a list of rules for replacing the Pi by the points on a regular hexagon.


RegularHexagonPoints =
Table@Pi Ø ¯" + Cos@i p ê 3D e1 + Sin@i p ê 3D e2 , 8i, 1, 6<D

:P1 Ø ¯# + e1/2 + (Sqrt[3]/2) e2 , P2 Ø ¯# - e1/2 + (Sqrt[3]/2) e2 , P3 Ø ¯# - e1 ,
P4 Ø ¯# - e1/2 - (Sqrt[3]/2) e2 , P5 Ø ¯# + e1/2 - (Sqrt[3]/2) e2 , P6 Ø ¯# + e1 >
The intersections of the opposite sides of the regular hexagon give

8Z1 , Z2 , Z3 < = ¯1@8S1 Ó S4 , S2 Ó S5 , S3 Ó S6 < ê. RegularHexagonPointsD

:-Sqrt[3] e1 , -(Sqrt[3]/2) e1 - (3/2) e2 , (Sqrt[3]/2) e1 - (3/2) e2 >
Note that these are all vectors rather than points, as is to be expected, because the opposite sides are all parallel.

Pascal's theorem for a regular hexagon


Opposite sides intersect at points at infinity lying in the same bivector "

These vectors are dependent (as of course they must be in the plane); and consequently, their
exterior product is zero.


¯%@Z1 Ô Z2 Ô Z3 D
0

Thus Pascal's Theorem is demonstrated in this example to hold for pairs of sides intersecting in
points at infinity (vectors). The three points at infinity lie on the same line at infinity. That is, the
three vectors lie in the same bivector. (Of course, this must be the case for any three vectors in the
plane.)
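In homogeneous coordinates this is easy to verify numerically: the meet of two parallel lines has a zero weight component, i.e. it is a vector (a point at infinity). A minimal Python sketch, using the same regular hexagon vertices (cos(i pi/3), sin(i pi/3)):

```python
import math

def cross(a, b):
    # Join of two points, or meet of two lines, in homogeneous coordinates.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Regular hexagon vertices P1..P6 as homogeneous points (x, y, weight).
P = [(math.cos(i * math.pi / 3), math.sin(i * math.pi / 3), 1.0)
     for i in range(1, 7)]
S = [cross(P[i], P[(i + 1) % 6]) for i in range(6)]       # the six sides

# Meet of each pair of opposite (parallel) sides: the weight component is
# (numerically) zero, so each meet is a vector, i.e. a point at infinity.
Z = [cross(S[i], S[i + 3]) for i in range(3)]
weights = [z[2] for z in Z]

# Any three vectors in the plane are dependent: the determinant vanishes.
c = cross(Z[1], Z[2])
hex_det = Z[0][0] * c[0] + Z[0][1] * c[1] + Z[0][2] * c[2]
```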

Pascal lines

Ï Hexagons in a conic

We have seen how Pascal's Theorem establishes a line associated with any hexagon in a conic.
Suppose we have fixed the positions of the six points anywhere we wish on the conic, and named
them in any order we wish. How many essentially different hexagons (whose sides may of course
cross each other) can we form by joining these six points in some order?

As before, we name these points P1 , P2 , P3 , P4 , P5 and P6 . Without loss of generality we can start
with P1 and join it in any of five ways to the other five points. From that point we then have four
ways of forming the next side; then three ways; then two ways. This makes 5×4×3×2 = 120
hexagons. However, for each hexagon P1 Pa Pb Pc Pd Pe there corresponds the same hexagon
traversed in the reverse order P1 Pe Pd Pc Pb Pa . Thus there are just 60 essentially different
hexagons. We use Mathematica's Permutations function to list them and Union to eliminate
the ones traversed in reverse order.
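The count can be cross-checked with a few lines of Python (a sketch; the labels 1..6 stand in for the points P1..P6): fix the starting vertex, enumerate the orderings of the remaining five, and identify each ordering with its reversal.

```python
from itertools import permutations

def essentially_different_hexagons(labels=(1, 2, 3, 4, 5, 6)):
    first, rest = labels[0], labels[1:]
    # Identify P1 Pa Pb Pc Pd Pe with its reversal P1 Pe Pd Pc Pb Pa.
    seen = {min(p, p[::-1]) for p in permutations(rest)}
    return sorted((first,) + p for p in seen)

hexagons = essentially_different_hexagons()
count = len(hexagons)   # 60
```

Since a 5-tuple of distinct labels can never equal its own reversal, the 120 orderings pair off exactly, giving 60.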


hexagons =
Join@8P1 <, ÒD & êü Union@Permutations@8P2 , P3 , P4 , P5 , P6 <D,
SameTest Ø HÒ1 === Reverse@Ò2D &LD
88P1 , P2 , P3 , P4 , P5 , P6 <, 8P1 , P2 , P3 , P4 , P6 , P5 <,
8P1 , P2 , P3 , P5 , P4 , P6 <, 8P1 , P2 , P3 , P5 , P6 , P4 <,
8P1 , P2 , P3 , P6 , P4 , P5 <, 8P1 , P2 , P3 , P6 , P5 , P4 <,
8P1 , P2 , P4 , P3 , P5 , P6 <, 8P1 , P2 , P4 , P3 , P6 , P5 <,
8P1 , P2 , P4 , P5 , P3 , P6 <, 8P1 , P2 , P4 , P5 , P6 , P3 <,
8P1 , P2 , P4 , P6 , P3 , P5 <, 8P1 , P2 , P4 , P6 , P5 , P3 <,
8P1 , P2 , P5 , P3 , P4 , P6 <, 8P1 , P2 , P5 , P3 , P6 , P4 <,
8P1 , P2 , P5 , P4 , P3 , P6 <, 8P1 , P2 , P5 , P4 , P6 , P3 <,
8P1 , P2 , P5 , P6 , P3 , P4 <, 8P1 , P2 , P5 , P6 , P4 , P3 <,
8P1 , P2 , P6 , P3 , P4 , P5 <, 8P1 , P2 , P6 , P3 , P5 , P4 <,
8P1 , P2 , P6 , P4 , P3 , P5 <, 8P1 , P2 , P6 , P4 , P5 , P3 <,
8P1 , P2 , P6 , P5 , P3 , P4 <, 8P1 , P2 , P6 , P5 , P4 , P3 <,
8P1 , P3 , P2 , P4 , P5 , P6 <, 8P1 , P3 , P2 , P4 , P6 , P5 <,
8P1 , P3 , P2 , P5 , P4 , P6 <, 8P1 , P3 , P2 , P5 , P6 , P4 <,
8P1 , P3 , P2 , P6 , P4 , P5 <, 8P1 , P3 , P2 , P6 , P5 , P4 <,
8P1 , P3 , P4 , P2 , P5 , P6 <, 8P1 , P3 , P4 , P2 , P6 , P5 <,
8P1 , P3 , P4 , P5 , P2 , P6 <, 8P1 , P3 , P4 , P6 , P2 , P5 <,
8P1 , P3 , P5 , P2 , P4 , P6 <, 8P1 , P3 , P5 , P2 , P6 , P4 <,
8P1 , P3 , P5 , P4 , P2 , P6 <, 8P1 , P3 , P5 , P6 , P2 , P4 <,
8P1 , P3 , P6 , P2 , P4 , P5 <, 8P1 , P3 , P6 , P2 , P5 , P4 <,
8P1 , P3 , P6 , P4 , P2 , P5 <, 8P1 , P3 , P6 , P5 , P2 , P4 <,
8P1 , P4 , P2 , P3 , P5 , P6 <, 8P1 , P4 , P2 , P3 , P6 , P5 <,
8P1 , P4 , P2 , P5 , P3 , P6 <, 8P1 , P4 , P2 , P6 , P3 , P5 <,
8P1 , P4 , P3 , P2 , P5 , P6 <, 8P1 , P4 , P3 , P2 , P6 , P5 <,
8P1 , P4 , P3 , P5 , P2 , P6 <, 8P1 , P4 , P3 , P6 , P2 , P5 <,
8P1 , P4 , P5 , P2 , P3 , P6 <, 8P1 , P4 , P5 , P3 , P2 , P6 <,
8P1 , P4 , P6 , P2 , P3 , P5 <, 8P1 , P4 , P6 , P3 , P2 , P5 <,
8P1 , P5 , P2 , P3 , P4 , P6 <, 8P1 , P5 , P2 , P4 , P3 , P6 <,
8P1 , P5 , P3 , P2 , P4 , P6 <, 8P1 , P5 , P3 , P4 , P2 , P6 <,
8P1 , P5 , P4 , P2 , P3 , P6 <, 8P1 , P5 , P4 , P3 , P2 , P6 <<

We take the example of the ellipse and its points from the previous section and display their
hexagons.


The 60 hexagons formed from six points on an ellipse

Ï Pascal points

When two opposite sides of a hexagon are extended they intersect in a point which we have called
a Pascal point. Pascal's Theorem says that the three Pascal points lie on the one line. We can
convert each hexagon in the list of hexagons above to a list of three formulae, one formula for each
point.

allPascalPointFormulae =
hexagons ê. 8Pa_, Pb_, Pc_, Pd_, Pe_, Pf_< ß
8HPa Ô PbL Ó HPd Ô PeL, HPb Ô PcL Ó HPe Ô PfL, HPc Ô PdL Ó HPf Ô PaL<
88P1 Ô P2 Ó P4 Ô P5 , P2 Ô P3 Ó P5 Ô P6 , P3 Ô P4 Ó P6 Ô P1 <,
8P1 Ô P2 Ó P4 Ô P6 , P2 Ô P3 Ó P6 Ô P5 , P3 Ô P4 Ó P5 Ô P1 <,
8P1 Ô P2 Ó P5 Ô P4 , P2 Ô P3 Ó P4 Ô P6 , P3 Ô P5 Ó P6 Ô P1 <,
8P1 Ô P2 Ó P5 Ô P6 , P2 Ô P3 Ó P6 Ô P4 , P3 Ô P5 Ó P4 Ô P1 <,
8P1 Ô P2 Ó P6 Ô P4 , P2 Ô P3 Ó P4 Ô P5 , P3 Ô P6 Ó P5 Ô P1 <,
8P1 Ô P2 Ó P6 Ô P5 , P2 Ô P3 Ó P5 Ô P4 , P3 Ô P6 Ó P4 Ô P1 <,
8P1 Ô P2 Ó P3 Ô P5 , P2 Ô P4 Ó P5 Ô P6 , P4 Ô P3 Ó P6 Ô P1 <,
8P1 Ô P2 Ó P3 Ô P6 , P2 Ô P4 Ó P6 Ô P5 , P4 Ô P3 Ó P5 Ô P1 <,
8P1 Ô P2 Ó P5 Ô P3 , P2 Ô P4 Ó P3 Ô P6 , P4 Ô P5 Ó P6 Ô P1 <,
8P1 Ô P2 Ó P5 Ô P6 , P2 Ô P4 Ó P6 Ô P3 , P4 Ô P5 Ó P3 Ô P1 <,
8P1 Ô P2 Ó P6 Ô P3 , P2 Ô P4 Ó P3 Ô P5 , P4 Ô P6 Ó P5 Ô P1 <,
8P1 Ô P2 Ó P6 Ô P5 , P2 Ô P4 Ó P5 Ô P3 , P4 Ô P6 Ó P3 Ô P1 <,
8P1 Ô P2 Ó P3 Ô P4 , P2 Ô P5 Ó P4 Ô P6 , P5 Ô P3 Ó P6 Ô P1 <,
8P1 Ô P2 Ó P3 Ô P6 , P2 Ô P5 Ó P6 Ô P4 , P5 Ô P3 Ó P4 Ô P1 <,
8P1 Ô P2 Ó P4 Ô P3 , P2 Ô P5 Ó P3 Ô P6 , P5 Ô P4 Ó P6 Ô P1 <,
8P1 Ô P2 Ó P4 Ô P6 , P2 Ô P5 Ó P6 Ô P3 , P5 Ô P4 Ó P3 Ô P1 <,
8P1 Ô P2 Ó P6 Ô P3 , P2 Ô P5 Ó P3 Ô P4 , P5 Ô P6 Ó P4 Ô P1 <,
8P1 Ô P2 Ó P6 Ô P4 , P2 Ô P5 Ó P4 Ô P3 , P5 Ô P6 Ó P3 Ô P1 <,
8P1 Ô P2 Ó P3 Ô P4 , P2 Ô P6 Ó P4 Ô P5 , P6 Ô P3 Ó P5 Ô P1 <,
8P1 Ô P2 Ó P3 Ô P5 , P2 Ô P6 Ó P5 Ô P4 , P6 Ô P3 Ó P4 Ô P1 <,
8P1 Ô P2 Ó P4 Ô P3 , P2 Ô P6 Ó P3 Ô P5 , P6 Ô P4 Ó P5 Ô P1 <,
8P1 Ô P2 Ó P4 Ô P5 , P2 Ô P6 Ó P5 Ô P3 , P6 Ô P4 Ó P3 Ô P1 <,
8P1 Ô P2 Ó P5 Ô P3 , P2 Ô P6 Ó P3 Ô P4 , P6 Ô P5 Ó P4 Ô P1 <,
8P1 Ô P2 Ó P5 Ô P4 , P2 Ô P6 Ó P4 Ô P3 , P6 Ô P5 Ó P3 Ô P1 <,
8P1 Ô P3 Ó P4 Ô P5 , P3 Ô P2 Ó P5 Ô P6 , P2 Ô P4 Ó P6 Ô P1 <,
8P1 Ô P3 Ó P4 Ô P6 , P3 Ô P2 Ó P6 Ô P5 , P2 Ô P4 Ó P5 Ô P1 <,
8P1 Ô P3 Ó P5 Ô P4 , P3 Ô P2 Ó P4 Ô P6 , P2 Ô P5 Ó P6 Ô P1 <,
8P1 Ô P3 Ó P5 Ô P6 , P3 Ô P2 Ó P6 Ô P4 , P2 Ô P5 Ó P4 Ô P1 <,
8P1 Ô P3 Ó P6 Ô P4 , P3 Ô P2 Ó P4 Ô P5 , P2 Ô P6 Ó P5 Ô P1 <,
8P1 Ô P3 Ó P6 Ô P5 , P3 Ô P2 Ó P5 Ô P4 , P2 Ô P6 Ó P4 Ô P1 <,
8P1 Ô P3 Ó P2 Ô P5 , P3 Ô P4 Ó P5 Ô P6 , P4 Ô P2 Ó P6 Ô P1 <,
8P1 Ô P3 Ó P2 Ô P6 , P3 Ô P4 Ó P6 Ô P5 , P4 Ô P2 Ó P5 Ô P1 <,
8P1 Ô P3 Ó P5 Ô P2 , P3 Ô P4 Ó P2 Ô P6 , P4 Ô P5 Ó P6 Ô P1 <,
8P1 Ô P3 Ó P6 Ô P2 , P3 Ô P4 Ó P2 Ô P5 , P4 Ô P6 Ó P5 Ô P1 <,
8P1 Ô P3 Ó P2 Ô P4 , P3 Ô P5 Ó P4 Ô P6 , P5 Ô P2 Ó P6 Ô P1 <,
8P1 Ô P3 Ó P2 Ô P6 , P3 Ô P5 Ó P6 Ô P4 , P5 Ô P2 Ó P4 Ô P1 <,
8P1 Ô P3 Ó P4 Ô P2 , P3 Ô P5 Ó P2 Ô P6 , P5 Ô P4 Ó P6 Ô P1 <,
8P1 Ô P3 Ó P6 Ô P2 , P3 Ô P5 Ó P2 Ô P4 , P5 Ô P6 Ó P4 Ô P1 <,
8P1 Ô P3 Ó P2 Ô P4 , P3 Ô P6 Ó P4 Ô P5 , P6 Ô P2 Ó P5 Ô P1 <,
8P1 Ô P3 Ó P2 Ô P5 , P3 Ô P6 Ó P5 Ô P4 , P6 Ô P2 Ó P4 Ô P1 <,
8P1 Ô P3 Ó P4 Ô P2 , P3 Ô P6 Ó P2 Ô P5 , P6 Ô P4 Ó P5 Ô P1 <,
8P1 Ô P3 Ó P5 Ô P2 , P3 Ô P6 Ó P2 Ô P4 , P6 Ô P5 Ó P4 Ô P1 <,
8P1 Ô P4 Ó P3 Ô P5 , P4 Ô P2 Ó P5 Ô P6 , P2 Ô P3 Ó P6 Ô P1 <,
8P1 Ô P4 Ó P3 Ô P6 , P4 Ô P2 Ó P6 Ô P5 , P2 Ô P3 Ó P5 Ô P1 <,
8P1 Ô P4 Ó P5 Ô P3 , P4 Ô P2 Ó P3 Ô P6 , P2 Ô P5 Ó P6 Ô P1 <,
8P1 Ô P4 Ó P6 Ô P3 , P4 Ô P2 Ó P3 Ô P5 , P2 Ô P6 Ó P5 Ô P1 <,
8P1 Ô P4 Ó P2 Ô P5 , P4 Ô P3 Ó P5 Ô P6 , P3 Ô P2 Ó P6 Ô P1 <,
8P1 Ô P4 Ó P2 Ô P6 , P4 Ô P3 Ó P6 Ô P5 , P3 Ô P2 Ó P5 Ô P1 <,
8P1 Ô P4 Ó P5 Ô P2 , P4 Ô P3 Ó P2 Ô P6 , P3 Ô P5 Ó P6 Ô P1 <,
8P1 Ô P4 Ó P6 Ô P2 , P4 Ô P3 Ó P2 Ô P5 , P3 Ô P6 Ó P5 Ô P1 <,
8P1 Ô P4 Ó P2 Ô P3 , P4 Ô P5 Ó P3 Ô P6 , P5 Ô P2 Ó P6 Ô P1 <,
8P1 Ô P4 Ó P3 Ô P2 , P4 Ô P5 Ó P2 Ô P6 , P5 Ô P3 Ó P6 Ô P1 <,
8P1 Ô P4 Ó P2 Ô P3 , P4 Ô P6 Ó P3 Ô P5 , P6 Ô P2 Ó P5 Ô P1 <,
8P1 Ô P4 Ó P3 Ô P2 , P4 Ô P6 Ó P2 Ô P5 , P6 Ô P3 Ó P5 Ô P1 <,
8P1 Ô P5 Ó P3 Ô P4 , P5 Ô P2 Ó P4 Ô P6 , P2 Ô P3 Ó P6 Ô P1 <,
8P1 Ô P5 Ó P4 Ô P3 , P5 Ô P2 Ó P3 Ô P6 , P2 Ô P4 Ó P6 Ô P1 <,
8P1 Ô P5 Ó P2 Ô P4 , P5 Ô P3 Ó P4 Ô P6 , P3 Ô P2 Ó P6 Ô P1 <,
8P1 Ô P5 Ó P4 Ô P2 , P5 Ô P3 Ó P2 Ô P6 , P3 Ô P4 Ó P6 Ô P1 <,
8P1 Ô P5 Ó P2 Ô P3 , P5 Ô P4 Ó P3 Ô P6 , P4 Ô P2 Ó P6 Ô P1 <,
8P1 Ô P5 Ó P3 Ô P2 , P5 Ô P4 Ó P2 Ô P6 , P4 Ô P3 Ó P6 Ô P1 <<

This list comprises 3×60 = 180 formulae. But some of them define the same point, because we can
interchange the factors in the regressive product, or interchange the factors in each exterior
product, without affecting the result. To reduce these formulae to a list of unique formulae, we can
apply GrassmannSimplify to put the products into canonical order, resulting in the duplicates
only differing by a sign (which is unimportant).

¯"2 ; pascalPointFormulae =
Union@¯$@Flatten@allPascalPointFormulaeDD ê. -p_ ß pD
8P1 Ô P2 Ó P3 Ô P4 , P1 Ô P2 Ó P3 Ô P5 , P1 Ô P2 Ó P3 Ô P6 ,
P1 Ô P2 Ó P4 Ô P5 , P1 Ô P2 Ó P4 Ô P6 , P1 Ô P2 Ó P5 Ô P6 ,
P1 Ô P3 Ó P2 Ô P4 , P1 Ô P3 Ó P2 Ô P5 , P1 Ô P3 Ó P2 Ô P6 ,
P1 Ô P3 Ó P4 Ô P5 , P1 Ô P3 Ó P4 Ô P6 , P1 Ô P3 Ó P5 Ô P6 , P1 Ô P4 Ó P2 Ô P3 ,
P1 Ô P4 Ó P2 Ô P5 , P1 Ô P4 Ó P2 Ô P6 , P1 Ô P4 Ó P3 Ô P5 , P1 Ô P4 Ó P3 Ô P6 ,
P1 Ô P4 Ó P5 Ô P6 , P1 Ô P5 Ó P2 Ô P3 , P1 Ô P5 Ó P2 Ô P4 , P1 Ô P5 Ó P2 Ô P6 ,
P1 Ô P5 Ó P3 Ô P4 , P1 Ô P5 Ó P3 Ô P6 , P1 Ô P5 Ó P4 Ô P6 , P1 Ô P6 Ó P2 Ô P3 ,
P1 Ô P6 Ó P2 Ô P4 , P1 Ô P6 Ó P2 Ô P5 , P1 Ô P6 Ó P3 Ô P4 , P1 Ô P6 Ó P3 Ô P5 ,
P1 Ô P6 Ó P4 Ô P5 , P2 Ô P3 Ó P4 Ô P5 , P2 Ô P3 Ó P4 Ô P6 , P2 Ô P3 Ó P5 Ô P6 ,
P2 Ô P4 Ó P3 Ô P5 , P2 Ô P4 Ó P3 Ô P6 , P2 Ô P4 Ó P5 Ô P6 , P2 Ô P5 Ó P3 Ô P4 ,
P2 Ô P5 Ó P3 Ô P6 , P2 Ô P5 Ó P4 Ô P6 , P2 Ô P6 Ó P3 Ô P4 , P2 Ô P6 Ó P3 Ô P5 ,
P2 Ô P6 Ó P4 Ô P5 , P3 Ô P4 Ó P5 Ô P6 , P3 Ô P5 Ó P4 Ô P6 , P3 Ô P6 Ó P4 Ô P5 <

Length@pascalPointFormulaeD
45

There are thus in general 45 distinct Pascal point formulae for the hexagons of a conic. Of course,
since the construction of these points makes no use of the conic itself, the formulae are equally
valid for the 60 hexagons formed from any six points, not necessarily those constrained to lie on a conic.

It should be noted that these are distinct formulae. The number of actual distinct points resulting
from their application to any six given points may be fewer. Some may be duplicates, for example,
when the hexagon has some symmetry, and some may reduce to vectors when the hexagon sides
are parallel.
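The count of 45 can also be obtained combinatorially: a Pascal point formula is the meet of two joins of disjoint point pairs, and it is unchanged by swapping the two factors within either join or by swapping the two joins themselves. A Python sketch:

```python
from itertools import combinations

labels = range(1, 7)
# A formula is an unordered pair of disjoint unordered pairs of point labels.
formulae = {frozenset({frozenset(p), frozenset(q)})
            for p in combinations(labels, 2)
            for q in combinations(labels, 2)
            if not set(p) & set(q)}
formula_count = len(formulae)   # 45
```

Equivalently: choose 4 of the 6 points (15 ways), then split them into two unordered pairs (3 ways), giving 15 × 3 = 45.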

For the ellipse and hexagon points discussed above, we can apply the formulae to get the actual
points.

pascalPoints =
ToPointForm@¯1@pascalPointFormulae êê. EllipsePointsDD
4
:¯# + 4 - e1 - 3 e2 , ¯# + I8 - 4 3 M e1 ,
3
3
¯# + 2 I-1 + 3 M e1 - I-1 + 3 M e2 , ¯# + 4 I-1 + 3 M e1 - 3 3 e2 ,
2
4 e1 9
¯# + -2 3 e2 , ¯# + 2 I1 + 3 M e1 - I1 + 3 M e2 ,
3 2


4
¯# + 4 + e1 - 3 e2 , ¯# + 4 I2 + 3 M e1 - 3 I1 + 3 M e2 ,
3
3 3 e2
¯# + 2 I2 + 3 M e1 - , -96 e1 + 72 e2 ,
2
¯# + 4 I1 + 3 M e1 - 3 3 e2 , ¯# - 4 I2 + 3 M e1 + 3 I3 + 3 M e2 ,
3 3 e2
¯# - 3 3 e2 , ¯# - 3 e2 , ¯# - , ¯#, ¯# - 3 e2 ,
2
¯# - 3 3 e2 , ¯# + 4 I2 + 3 M e1 + 3 I3 + 3 M e2 ,
3 3 e2
¯# - 4 I1 + 3 M e1 - 3 3 e2 , ¯# - 2 I2 + 3 M e1 - ,
2
96 e1 + 72 e2 , ¯# - 4 I2 + 3 M e1 - 3 I1 + 3 M e2 ,
4 9
¯# - I3 + 3 M e1 - 3 e2 , ¯# - 2 I1 + 3 M e1 - I1 + 3 M e2 ,
3 2
4 e1 3
¯# - -2 3 e2 , ¯# + I2 - 2 3 M e1 - I-1 + 3 M e2 ,
3 2
¯# + I4 - 4 3 M e1 - 3 3 e2 , ¯# + 4 I-2 + 3 M e1 ,
4
¯# + -4 + e1 - 3 e2 , ¯# + I8 - 4 3 M e1 + 3 I-3 + 3 M e2 ,
3
9
¯# + 2 I-1 + 3 M e1 - I-1 + 3 M e2 , ¯# - 3 3 e2 ,
2
3
¯# + 4 I2 + 3 M e1 , ¯# - 2 I1 + 3 M e1 - I1 + 3 M e2 ,
2
9
¯# + I2 - 2 3 M e1 - I-1 + 3 M e2 ,
2
¯# + I8 - 4 3 M e1 + I3 - 3 3 M e2 , ¯# - 3 e2 ,
3 3 3 e2
¯# + 2 I1 + 3 M e1 - I1 + 3 M e2 , ¯# + I4 - 2 3 M e1 - ,
2 2
3 3 e2
-48 3 e1 , ¯# + 2 I-2 + 3 M e1 - ,
2
¯# + 4 I-2 + 3 M e1 + 3 I-3 + 3 M e2 ,
¯# - 4 I2 + 3 M e1 , ¯# + 4 I-2 + 3 M e1 + I3 - 3 3 M e2 >

Two of these points occur in triplicate ¯& - 3 e2 and ¯& - 3 3 e2 ; and three of the entities
are vectors (points at infinity).

vectors = Select@pascalPoints, VectorFormQD

9-96 e1 + 72 e2 , 96 e1 + 72 e2 , -48 3 e1 =

Thus, when we plot these points we should be able to count 38 points in blue, and 3 vectors
(without their arrowheads) in green.


Red: 6 hexagon vertices. Blue: 38 distinct Pascal points. Green: 3 vectors

Ï Pascal lines

Since there are 60 essentially different hexagons one can form from 6 given points, there are thus
60 corresponding Pascal lines. We can display the equations for these lines.

PascalLineEquations =
allPascalPointFormulae ê. 8Q1_, Q2_, Q3_< ß Q1 Ô Q2 Ô Q3 ã 0
8HP1 Ô P2 Ó P4 Ô P5 L Ô HP2 Ô P3 Ó P5 Ô P6 L Ô HP3 Ô P4 Ó P6 Ô P1 L ã 0,
HP1 Ô P2 Ó P4 Ô P6 L Ô HP2 Ô P3 Ó P6 Ô P5 L Ô HP3 Ô P4 Ó P5 Ô P1 L ã 0,
HP1 Ô P2 Ó P5 Ô P4 L Ô HP2 Ô P3 Ó P4 Ô P6 L Ô HP3 Ô P5 Ó P6 Ô P1 L ã 0,
HP1 Ô P2 Ó P5 Ô P6 L Ô HP2 Ô P3 Ó P6 Ô P4 L Ô HP3 Ô P5 Ó P4 Ô P1 L ã 0,
HP1 Ô P2 Ó P6 Ô P4 L Ô HP2 Ô P3 Ó P4 Ô P5 L Ô HP3 Ô P6 Ó P5 Ô P1 L ã 0,
HP1 Ô P2 Ó P6 Ô P5 L Ô HP2 Ô P3 Ó P5 Ô P4 L Ô HP3 Ô P6 Ó P4 Ô P1 L ã 0,
HP1 Ô P2 Ó P3 Ô P5 L Ô HP2 Ô P4 Ó P5 Ô P6 L Ô HP4 Ô P3 Ó P6 Ô P1 L ã 0,
HP1 Ô P2 Ó P3 Ô P6 L Ô HP2 Ô P4 Ó P6 Ô P5 L Ô HP4 Ô P3 Ó P5 Ô P1 L ã 0,
HP1 Ô P2 Ó P5 Ô P3 L Ô HP2 Ô P4 Ó P3 Ô P6 L Ô HP4 Ô P5 Ó P6 Ô P1 L ã 0,
HP1 Ô P2 Ó P5 Ô P6 L Ô HP2 Ô P4 Ó P6 Ô P3 L Ô HP4 Ô P5 Ó P3 Ô P1 L ã 0,
HP1 Ô P2 Ó P6 Ô P3 L Ô HP2 Ô P4 Ó P3 Ô P5 L Ô HP4 Ô P6 Ó P5 Ô P1 L ã 0,
HP1 Ô P2 Ó P6 Ô P5 L Ô HP2 Ô P4 Ó P5 Ô P3 L Ô HP4 Ô P6 Ó P3 Ô P1 L ã 0,
HP1 Ô P2 Ó P3 Ô P4 L Ô HP2 Ô P5 Ó P4 Ô P6 L Ô HP5 Ô P3 Ó P6 Ô P1 L ã 0,
HP1 Ô P2 Ó P3 Ô P6 L Ô HP2 Ô P5 Ó P6 Ô P4 L Ô HP5 Ô P3 Ó P4 Ô P1 L ã 0,
HP1 Ô P2 Ó P4 Ô P3 L Ô HP2 Ô P5 Ó P3 Ô P6 L Ô HP5 Ô P4 Ó P6 Ô P1 L ã 0,
HP1 Ô P2 Ó P4 Ô P6 L Ô HP2 Ô P5 Ó P6 Ô P3 L Ô HP5 Ô P4 Ó P3 Ô P1 L ã 0,
HP1 Ô P2 Ó P6 Ô P3 L Ô HP2 Ô P5 Ó P3 Ô P4 L Ô HP5 Ô P6 Ó P4 Ô P1 L ã 0,
HP1 Ô P2 Ó P6 Ô P4 L Ô HP2 Ô P5 Ó P4 Ô P3 L Ô HP5 Ô P6 Ó P3 Ô P1 L ã 0,
HP1 Ô P2 Ó P3 Ô P4 L Ô HP2 Ô P6 Ó P4 Ô P5 L Ô HP6 Ô P3 Ó P5 Ô P1 L ã 0,
HP1 Ô P2 Ó P3 Ô P5 L Ô HP2 Ô P6 Ó P5 Ô P4 L Ô HP6 Ô P3 Ó P4 Ô P1 L ã 0,
HP1 Ô P2 Ó P4 Ô P3 L Ô HP2 Ô P6 Ó P3 Ô P5 L Ô HP6 Ô P4 Ó P5 Ô P1 L ã 0,
HP1 Ô P2 Ó P4 Ô P5 L Ô HP2 Ô P6 Ó P5 Ô P3 L Ô HP6 Ô P4 Ó P3 Ô P1 L ã 0,
HP1 Ô P2 Ó P5 Ô P3 L Ô HP2 Ô P6 Ó P3 Ô P4 L Ô HP6 Ô P5 Ó P4 Ô P1 L ã 0,
HP1 Ô P2 Ó P5 Ô P4 L Ô HP2 Ô P6 Ó P4 Ô P3 L Ô HP6 Ô P5 Ó P3 Ô P1 L ã 0,
HP1 Ô P3 Ó P4 Ô P5 L Ô HP3 Ô P2 Ó P5 Ô P6 L Ô HP2 Ô P4 Ó P6 Ô P1 L ã 0,
HP1 Ô P3 Ó P4 Ô P6 L Ô HP3 Ô P2 Ó P6 Ô P5 L Ô HP2 Ô P4 Ó P5 Ô P1 L ã 0,
HP1 Ô P3 Ó P5 Ô P4 L Ô HP3 Ô P2 Ó P4 Ô P6 L Ô HP2 Ô P5 Ó P6 Ô P1 L ã 0,
HP1 Ô P3 Ó P5 Ô P6 L Ô HP3 Ô P2 Ó P6 Ô P4 L Ô HP2 Ô P5 Ó P4 Ô P1 L ã 0,
HP1 Ô P3 Ó P6 Ô P4 L Ô HP3 Ô P2 Ó P4 Ô P5 L Ô HP2 Ô P6 Ó P5 Ô P1 L ã 0,
HP1 Ô P3 Ó P6 Ô P5 L Ô HP3 Ô P2 Ó P5 Ô P4 L Ô HP2 Ô P6 Ó P4 Ô P1 L ã 0,
HP1 Ô P3 Ó P2 Ô P5 L Ô HP3 Ô P4 Ó P5 Ô P6 L Ô HP4 Ô P2 Ó P6 Ô P1 L ã 0,
HP1 Ô P3 Ó P2 Ô P6 L Ô HP3 Ô P4 Ó P6 Ô P5 L Ô HP4 Ô P2 Ó P5 Ô P1 L ã 0,
HP1 Ô P3 Ó P5 Ô P2 L Ô HP3 Ô P4 Ó P2 Ô P6 L Ô HP4 Ô P5 Ó P6 Ô P1 L ã 0,
HP1 Ô P3 Ó P6 Ô P2 L Ô HP3 Ô P4 Ó P2 Ô P5 L Ô HP4 Ô P6 Ó P5 Ô P1 L ã 0,
HP1 Ô P3 Ó P2 Ô P4 L Ô HP3 Ô P5 Ó P4 Ô P6 L Ô HP5 Ô P2 Ó P6 Ô P1 L ã 0,
HP1 Ô P3 Ó P2 Ô P6 L Ô HP3 Ô P5 Ó P6 Ô P4 L Ô HP5 Ô P2 Ó P4 Ô P1 L ã 0,
HP1 Ô P3 Ó P4 Ô P2 L Ô HP3 Ô P5 Ó P2 Ô P6 L Ô HP5 Ô P4 Ó P6 Ô P1 L ã 0,
HP1 Ô P3 Ó P6 Ô P2 L Ô HP3 Ô P5 Ó P2 Ô P4 L Ô HP5 Ô P6 Ó P4 Ô P1 L ã 0,
HP1 Ô P3 Ó P2 Ô P4 L Ô HP3 Ô P6 Ó P4 Ô P5 L Ô HP6 Ô P2 Ó P5 Ô P1 L ã 0,
HP1 Ô P3 Ó P2 Ô P5 L Ô HP3 Ô P6 Ó P5 Ô P4 L Ô HP6 Ô P2 Ó P4 Ô P1 L ã 0,
HP1 Ô P3 Ó P4 Ô P2 L Ô HP3 Ô P6 Ó P2 Ô P5 L Ô HP6 Ô P4 Ó P5 Ô P1 L ã 0,
HP1 Ô P3 Ó P5 Ô P2 L Ô HP3 Ô P6 Ó P2 Ô P4 L Ô HP6 Ô P5 Ó P4 Ô P1 L ã 0,
HP1 Ô P4 Ó P3 Ô P5 L Ô HP4 Ô P2 Ó P5 Ô P6 L Ô HP2 Ô P3 Ó P6 Ô P1 L ã 0,
HP1 Ô P4 Ó P3 Ô P6 L Ô HP4 Ô P2 Ó P6 Ô P5 L Ô HP2 Ô P3 Ó P5 Ô P1 L ã 0,
HP1 Ô P4 Ó P5 Ô P3 L Ô HP4 Ô P2 Ó P3 Ô P6 L Ô HP2 Ô P5 Ó P6 Ô P1 L ã 0,
HP1 Ô P4 Ó P6 Ô P3 L Ô HP4 Ô P2 Ó P3 Ô P5 L Ô HP2 Ô P6 Ó P5 Ô P1 L ã 0,
HP1 Ô P4 Ó P2 Ô P5 L Ô HP4 Ô P3 Ó P5 Ô P6 L Ô HP3 Ô P2 Ó P6 Ô P1 L ã 0,
HP1 Ô P4 Ó P2 Ô P6 L Ô HP4 Ô P3 Ó P6 Ô P5 L Ô HP3 Ô P2 Ó P5 Ô P1 L ã 0,
HP1 Ô P4 Ó P5 Ô P2 L Ô HP4 Ô P3 Ó P2 Ô P6 L Ô HP3 Ô P5 Ó P6 Ô P1 L ã 0,
HP1 Ô P4 Ó P6 Ô P2 L Ô HP4 Ô P3 Ó P2 Ô P5 L Ô HP3 Ô P6 Ó P5 Ô P1 L ã 0,
HP1 Ô P4 Ó P2 Ô P3 L Ô HP4 Ô P5 Ó P3 Ô P6 L Ô HP5 Ô P2 Ó P6 Ô P1 L ã 0,
HP1 Ô P4 Ó P3 Ô P2 L Ô HP4 Ô P5 Ó P2 Ô P6 L Ô HP5 Ô P3 Ó P6 Ô P1 L ã 0,
HP1 Ô P4 Ó P2 Ô P3 L Ô HP4 Ô P6 Ó P3 Ô P5 L Ô HP6 Ô P2 Ó P5 Ô P1 L ã 0,
HP1 Ô P4 Ó P3 Ô P2 L Ô HP4 Ô P6 Ó P2 Ô P5 L Ô HP6 Ô P3 Ó P5 Ô P1 L ã 0,
HP1 Ô P5 Ó P3 Ô P4 L Ô HP5 Ô P2 Ó P4 Ô P6 L Ô HP2 Ô P3 Ó P6 Ô P1 L ã 0,
HP1 Ô P5 Ó P4 Ô P3 L Ô HP5 Ô P2 Ó P3 Ô P6 L Ô HP2 Ô P4 Ó P6 Ô P1 L ã 0,
HP1 Ô P5 Ó P2 Ô P4 L Ô HP5 Ô P3 Ó P4 Ô P6 L Ô HP3 Ô P2 Ó P6 Ô P1 L ã 0,
HP1 Ô P5 Ó P4 Ô P2 L Ô HP5 Ô P3 Ó P2 Ô P6 L Ô HP3 Ô P4 Ó P6 Ô P1 L ã 0,
HP1 Ô P5 Ó P2 Ô P3 L Ô HP5 Ô P4 Ó P3 Ô P6 L Ô HP4 Ô P2 Ó P6 Ô P1 L ã 0,
HP1 Ô P5 Ó P3 Ô P2 L Ô HP5 Ô P4 Ó P2 Ô P6 L Ô HP4 Ô P3 Ó P6 Ô P1 L ã 0<

We can verify these equations for the points we have chosen on the ellipse.

Union@Simplify@¯1@PascalLineEquations ê. EllipsePointsDDD
8True<

To generate formulae for the Pascal lines themselves we only have to choose any two of its Pascal
points. Here we choose the first two.

PascalLineFormulae =
allPascalPointFormulae ê. 8Q1_, Q2_, Q3_< ß Q1 Ô Q2
8HP1 Ô P2 Ó P4 Ô P5 L Ô HP2 Ô P3 Ó P5 Ô P6 L, HP1 Ô P2 Ó P4 Ô P6 L Ô HP2 Ô P3 Ó P6 Ô P5 L,
HP1 Ô P2 Ó P5 Ô P4 L Ô HP2 Ô P3 Ó P4 Ô P6 L, HP1 Ô P2 Ó P5 Ô P6 L Ô HP2 Ô P3 Ó P6 Ô P4 L,
HP1 Ô P2 Ó P6 Ô P4 L Ô HP2 Ô P3 Ó P4 Ô P5 L, HP1 Ô P2 Ó P6 Ô P5 L Ô HP2 Ô P3 Ó P5 Ô P4 L,
HP1 Ô P2 Ó P3 Ô P5 L Ô HP2 Ô P4 Ó P5 Ô P6 L, HP1 Ô P2 Ó P3 Ô P6 L Ô HP2 Ô P4 Ó P6 Ô P5 L,
HP1 Ô P2 Ó P5 Ô P3 L Ô HP2 Ô P4 Ó P3 Ô P6 L, HP1 Ô P2 Ó P5 Ô P6 L Ô HP2 Ô P4 Ó P6 Ô P3 L,
HP1 Ô P2 Ó P6 Ô P3 L Ô HP2 Ô P4 Ó P3 Ô P5 L, HP1 Ô P2 Ó P6 Ô P5 L Ô HP2 Ô P4 Ó P5 Ô P3 L,
HP1 Ô P2 Ó P3 Ô P4 L Ô HP2 Ô P5 Ó P4 Ô P6 L, HP1 Ô P2 Ó P3 Ô P6 L Ô HP2 Ô P5 Ó P6 Ô P4 L,
HP1 Ô P2 Ó P4 Ô P3 L Ô HP2 Ô P5 Ó P3 Ô P6 L, HP1 Ô P2 Ó P4 Ô P6 L Ô HP2 Ô P5 Ó P6 Ô P3 L,
HP1 Ô P2 Ó P6 Ô P3 L Ô HP2 Ô P5 Ó P3 Ô P4 L, HP1 Ô P2 Ó P6 Ô P4 L Ô HP2 Ô P5 Ó P4 Ô P3 L,
HP1 Ô P2 Ó P3 Ô P4 L Ô HP2 Ô P6 Ó P4 Ô P5 L, HP1 Ô P2 Ó P3 Ô P5 L Ô HP2 Ô P6 Ó P5 Ô P4 L,
HP1 Ô P2 Ó P4 Ô P3 L Ô HP2 Ô P6 Ó P3 Ô P5 L, HP1 Ô P2 Ó P4 Ô P5 L Ô HP2 Ô P6 Ó P5 Ô P3 L,
HP1 Ô P2 Ó P5 Ô P3 L Ô HP2 Ô P6 Ó P3 Ô P4 L, HP1 Ô P2 Ó P5 Ô P4 L Ô HP2 Ô P6 Ó P4 Ô P3 L,
HP1 Ô P3 Ó P4 Ô P5 L Ô HP3 Ô P2 Ó P5 Ô P6 L, HP1 Ô P3 Ó P4 Ô P6 L Ô HP3 Ô P2 Ó P6 Ô P5 L,
HP1 Ô P3 Ó P5 Ô P4 L Ô HP3 Ô P2 Ó P4 Ô P6 L, HP1 Ô P3 Ó P5 Ô P6 L Ô HP3 Ô P2 Ó P6 Ô P4 L,
HP1 Ô P3 Ó P6 Ô P4 L Ô HP3 Ô P2 Ó P4 Ô P5 L, HP1 Ô P3 Ó P6 Ô P5 L Ô HP3 Ô P2 Ó P5 Ô P4 L,
HP1 Ô P3 Ó P2 Ô P5 L Ô HP3 Ô P4 Ó P5 Ô P6 L, HP1 Ô P3 Ó P2 Ô P6 L Ô HP3 Ô P4 Ó P6 Ô P5 L,
HP1 Ô P3 Ó P5 Ô P2 L Ô HP3 Ô P4 Ó P2 Ô P6 L, HP1 Ô P3 Ó P6 Ô P2 L Ô HP3 Ô P4 Ó P2 Ô P5 L,
HP1 Ô P3 Ó P2 Ô P4 L Ô HP3 Ô P5 Ó P4 Ô P6 L, HP1 Ô P3 Ó P2 Ô P6 L Ô HP3 Ô P5 Ó P6 Ô P4 L,
HP1 Ô P3 Ó P4 Ô P2 L Ô HP3 Ô P5 Ó P2 Ô P6 L, HP1 Ô P3 Ó P6 Ô P2 L Ô HP3 Ô P5 Ó P2 Ô P4 L,
HP1 Ô P3 Ó P2 Ô P4 L Ô HP3 Ô P6 Ó P4 Ô P5 L, HP1 Ô P3 Ó P2 Ô P5 L Ô HP3 Ô P6 Ó P5 Ô P4 L,
HP1 Ô P3 Ó P4 Ô P2 L Ô HP3 Ô P6 Ó P2 Ô P5 L, HP1 Ô P3 Ó P5 Ô P2 L Ô HP3 Ô P6 Ó P2 Ô P4 L,
HP1 Ô P4 Ó P3 Ô P5 L Ô HP4 Ô P2 Ó P5 Ô P6 L, HP1 Ô P4 Ó P3 Ô P6 L Ô HP4 Ô P2 Ó P6 Ô P5 L,
HP1 Ô P4 Ó P5 Ô P3 L Ô HP4 Ô P2 Ó P3 Ô P6 L, HP1 Ô P4 Ó P6 Ô P3 L Ô HP4 Ô P2 Ó P3 Ô P5 L,
HP1 Ô P4 Ó P2 Ô P5 L Ô HP4 Ô P3 Ó P5 Ô P6 L, HP1 Ô P4 Ó P2 Ô P6 L Ô HP4 Ô P3 Ó P6 Ô P5 L,
HP1 Ô P4 Ó P5 Ô P2 L Ô HP4 Ô P3 Ó P2 Ô P6 L, HP1 Ô P4 Ó P6 Ô P2 L Ô HP4 Ô P3 Ó P2 Ô P5 L,
HP1 Ô P4 Ó P2 Ô P3 L Ô HP4 Ô P5 Ó P3 Ô P6 L, HP1 Ô P4 Ó P3 Ô P2 L Ô HP4 Ô P5 Ó P2 Ô P6 L,
HP1 Ô P4 Ó P2 Ô P3 L Ô HP4 Ô P6 Ó P3 Ô P5 L, HP1 Ô P4 Ó P3 Ô P2 L Ô HP4 Ô P6 Ó P2 Ô P5 L,
HP1 Ô P5 Ó P3 Ô P4 L Ô HP5 Ô P2 Ó P4 Ô P6 L, HP1 Ô P5 Ó P4 Ô P3 L Ô HP5 Ô P2 Ó P3 Ô P6 L,
HP1 Ô P5 Ó P2 Ô P4 L Ô HP5 Ô P3 Ó P4 Ô P6 L, HP1 Ô P5 Ó P4 Ô P2 L Ô HP5 Ô P3 Ó P2 Ô P6 L,
HP1 Ô P5 Ó P2 Ô P3 L Ô HP5 Ô P4 Ó P3 Ô P6 L, HP1 Ô P5 Ó P3 Ô P2 L Ô HP5 Ô P4 Ó P2 Ô P6 L<

We can explicitly compute the Pascal lines (bound vectors) for the ellipse by substituting in our
chosen points.


PascalLines = allPascalPointFormulae ê. 8Q1_, Q2_, Q3_< ß
Simplify@ToPointForm@¯1@Q1 ê. EllipsePointsDD Ô ToPointForm@¯1@Q2 ê. EllipsePointsDDD

:I¯# + 4 I-1 + 3 M e1 - 3 3 e2 M Ô I¯# - 3 3 e2 M,


4 e1
¯# + -2 3 e2 Ô I¯# - 3 3 e2 M,
3
I¯# + 4 I-1 + 3 M e1 - 3 3 e2 M Ô
9
¯# + 2 I-1 + 3 M e1 - I-1 + 3 M e2 ,
2
9
¯# + 2 I1 + 3 M e1 - I1 + 3 M e2 Ô
2
9
¯# + 2 I-1 + 3 M e1 - I-1 + 3 M e2 ,
2
4 e1
¯# + -2 3 e2 Ô I¯# - 4 I-2 + 3 M e1 + 3 I-3 + 3 M e2 M,
3
9
¯# + 2 I1 + 3 M e1 - I1 + 3 M e2 Ô
2
I¯# - 4 I-2 + 3 M e1 + 3 I-3 + 3 M e2 M,
9
I¯# - 4 I-2 + 3 M e1 M Ô ¯# - 2 I-1 + 3 M e1 - I-1 + 3 M e2 ,
2
3
¯# + 2 I-1 + 3 M e1 - I-1 + 3 M e2 Ô
2
9
¯# - 2 I-1 + 3 M e1 - I-1 + 3 M e2 ,
2
3
I¯# - 4 I-2 + 3 M e1 M Ô ¯# - 2 I1 + 3 M e1 - I1 + 3 M e2 ,
2
9
¯# + 2 I1 + 3 M e1 - I1 + 3 M e2 Ô
2
3
¯# - 2 I1 + 3 M e1 - I1 + 3 M e2 ,
2
3
¯# + 2 I-1 + 3 M e1 - I-1 + 3 M e2 Ô I¯# + 4 I2 + 3 M e1 M,
2
9
¯# + 2 I1 + 3 M e1 - I1 + 3 M e2 Ô I¯# + 4 I2 + 3 M e1 M,
2
4 3
¯# + 4 - e1 - 3 e2 Ô ¯# + 2 I1 + 3 M e1 - I1 + 3 M e2 ,
3 2
3
¯# + 2 I-1 + 3 M e1 - I-1 + 3 M e2 Ô
2


3
¯# + 2 I1 + 3 M e1 - I1 + 3 M e2 ,
2
4
¯# + 4 - e1 - 3 e2 Ô I¯# - 3 e2 M,
3
4 e1
¯# + -2 3 e2 Ô I¯# - 3 e2 M,
3
3
¯# + 2 I-1 + 3 M e1 - I-1 + 3 M e2 Ô
2
I¯# - 4 I-2 + 3 M e1 - 3 I-1 + 3 M e2 M,
4 e1
¯# + -2 3 e2 Ô I¯# - 4 I-2 + 3 M e1 - 3 I-1 + 3 M e2 M,
3
4 3 3 e2
¯# + 4 - e1 - 3 e2 Ô ¯# + 2 I-2 + 3 M e1 - ,
3 2

3 3 e2
I¯# - 4 I-2 + 3 M e1 M Ô ¯# + 2 I-2 + 3 M e1 - ,
2
4
¯# + 4 - e1 - 3 e2 Ô I-48 3 e1 M,
3
I¯# + 4 I-1 + 3 M e1 - 3 3 e2 M Ô I48 3 e1 M,

3 3 e2
I¯# - 4 I-2 + 3 M e1 M Ô ¯# - 2 I-2 + 3 M e1 - ,
2

3 3 e2
I¯# + 4 I-1 + 3 M e1 - 3 3 e2 M Ô ¯# - 2 I-2 + 3 M e1 - ,
2

H-96 e1 + 72 e2 L Ô I¯# - 3 3 e2 M,
I¯# + 4 I1 + 3 M e1 - 3 3 e2 M Ô I¯# - 3 3 e2 M,
9
H96 e1 - 72 e2 L Ô ¯# + 2 I-1 + 3 M e1 - I-1 + 3 M e2 ,
2
I¯# - 4 I2 + 3 M e1 + 3 I3 + 3 M e2 M Ô
9
¯# + 2 I-1 + 3 M e1 - I-1 + 3 M e2 ,
2
I¯# + 4 I1 + 3 M e1 - 3 3 e2 M Ô
I¯# - 4 I-2 + 3 M e1 + 3 I-3 + 3 M e2 M,
I¯# - 4 I2 + 3 M e1 + 3 I3 + 3 M e2 M Ô
I¯# - 4 I-2 + 3 M e1 + 3 I-3 + 3 M e2 M,
I¯# + 4 I2 + 3 M e1 - 3 I1 + 3 M e2 M Ô
I¯# + 4 I-2 + 3 M e1 + 3 I-3 + 3 M e2 M,


3 3 e2
¯# + 2 I2 + 3 M e1 - Ô
2

I¯# + 4 I-2 + 3 M e1 + 3 I-3 + 3 M e2 M,


I¯# + 4 I2 + 3 M e1 - 3 I1 + 3 M e2 M Ô

3 3 e2 3 3 e2
¯# - 2 I-2 + 3 M e1 - , ¯# + 2 I2 + 3 M e1 - Ô
2 2

I¯# - 4 I-2 + 3 M e1 - 3 I-1 + 3 M e2 M,


4
¯# + I3 + 3 M e1 - 3 e2 Ô I¯# - 4 I2 + 3 M e1 M,
3
3 3 e2
¯# + 2 I2 + 3 M e1 - Ô I¯# - 4 I2 + 3 M e1 M,
2
4
¯# + I3 + 3 M e1 - 3 e2 Ô I48 3 e1 M,
3
3 3 e2
¯# + 2 I2 + 3 M e1 - Ô I¯# + 4 I2 + 3 M e1 M,
2
4
¯# + I3 + 3 M e1 - 3 e2 Ô I¯# + 4 I-2 + 3 M e1 - 3 I-1 + 3 M e2 M,
3
I¯# + 4 I2 + 3 M e1 - 3 I1 + 3 M e2 M Ô
I¯# + 4 I-2 + 3 M e1 - 3 I-1 + 3 M e2 M,
4
¯# + I3 + 3 M e1 - 3 e2 Ô I¯# - 3 e2 M,
3
I¯# + 4 I2 + 3 M e1 - 3 I1 + 3 M e2 M Ô
3
¯# - 2 I1 + 3 M e1 - I1 + 3 M e2 ,
2
9
¯# Ô ¯# - 2 I-1 + 3 M e1 - I-1 + 3 M e2 ,
2
9
I¯# - 3 e2 M Ô ¯# - 2 I-1 + 3 M e1 - I-1 + 3 M e2 ,
2
3
¯# Ô ¯# - 2 I1 + 3 M e1 - I1 + 3 M e2 ,
2
I¯# - 3 e2 M Ô I¯# + 4 I2 + 3 M e1 M,
I¯# - 3 e2 M Ô I¯# + 4 I-2 + 3 M e1 + 3 I-3 + 3 M e2 M,

3 3 e2
¯# - Ô I¯# + 4 I-2 + 3 M e1 + 3 I-3 + 3 M e2 M,
2

3 3 e2
I¯# - 3 e2 M Ô ¯# - 2 I-2 + 3 M e1 - ,
2

,
2009 9 3
Grassmann Algebra Book.nb 336

3 3 e2
¯# - Ô I¯# - 4 I-2 + 3 M e1 - 3 I-1 + 3 M e2 M,
2

I¯# - 3 3 e2 M Ô I¯# + 4 I-2 + 3 M e1 - 3 I-1 + 3 M e2 M,

3 3 e2
I¯# - 3 3 e2 M Ô ¯# + 2 I-2 + 3 M e1 - ,
2

I¯# - 3 3 e2 M Ô I¯# - 4 I2 + 3 M e1 M,
3
I¯# - 3 3 e2 M Ô ¯# + 2 I1 + 3 M e1 - I1 + 3 M e2 ,
2
3
H96 e1 + 72 e2 L Ô ¯# + 2 I1 + 3 M e1 - I1 + 3 M e2 ,
2
H-96 e1 - 72 e2 L Ô I¯# - 3 e2 M,
I¯# - 4 I1 + 3 M e1 - 3 3 e2 M Ô I¯# - 4 I2 + 3 M e1 M,
I¯# - 4 I1 + 3 M e1 - 3 3 e2 M Ô I-48 3 e1 M,
I¯# + 4 I2 + 3 M e1 + 3 I3 + 3 M e2 M Ô
I¯# + 4 I-2 + 3 M e1 - 3 I-1 + 3 M e2 M,
I¯# + 4 I2 + 3 M e1 + 3 I3 + 3 M e2 M Ô

3 3 e2
¯# + 2 I-2 + 3 M e1 - >
2

We can plot these 60 Pascal lines through their Pascal points. Lines normally have an infinite extent, but
to make it clearer which lines correspond to which points, we have plotted them spanning just their
three points, where of course one of the points may be at infinity. Thus if the three Pascal points
(or vectors) of a Pascal line were Pa , Pb , and Pc we have plotted the line segments HPa , Pb L,
HPb , Pc L, and HPc , Pa L. If this is being read as a Mathematica notebook, actual line expressions
are discernible as Tooltips. However, care should be taken when attempting to correlate the
expressions shown with lines, as some lines may overlap.

2009 9 3
Grassmann Algebra Book.nb 337

Sixty Pascal lines each joining its three points or points at infinity

Finally, we plot the sixty Pascal lines and bivectors of a regular hexagon. In this case we have
shown the arrowheads on the line-bound vectors and on the bivectors. The lack of apparent
symmetry is an artifact caused by the vectors and bivectors showing more information (their signs,
weights and orientation) than the geometric lines and 2-directions that they are being used to
represent.

2009 9 3
Grassmann Algebra Book.nb 338

The sixty Pascal lines and bivectors of a regular hexagon

4.14 Summary
In this chapter we have explored the implications of interpreting one of the elements of the
underlying linear space as an (origin) point (and the rest as vectors). This is a slightly different
approach to that adopted by Grassmann and workers in the earlier Grassmannian tradition, of
interpreting all the basis elements of the underlying linear space as points, and then treating any
consequent point differences as vectors.

Although the modern context is to interpret elements of a linear space most commonly as vectors,
workers in the early nineteenth century treated the point as the paramount entity of a linear space.
Indeed it was Hamilton who coined the term vector in 1853 as noted in the Historical Note at the
beginning of this chapter.

As we hope to have demonstrated in this chapter, the modern context is significantly poorer for
ignoring the possibility of interpreting the elements of a linear space in a way that includes
points. We can probably trace this development back to Gibbs' early work, before he became
familiar with Grassmann's work, in which he unwrapped Hamilton's quaternion operation into the dot
and cross products.

Of course, this chapter involves no metric concepts other than the comparison of weights of points
(their scalar factors) or the relative lengths of vectors in the same direction. This has enabled us to
see what types of geometric theorems and constructions can be done without the concept of a
metric. Or, what is equivalent, with an arbitrary metric. The notion of congruence has been central.

The next two chapters will introduce the two seminal concepts with which we will explore metric
geometry: the complement, and the interior product. Armed with these, we will also be able to
define the notions of length and angle, and more specifically, orthogonality. Because the
Grassmann algebra extends the vector algebra to higher grade entities, we will find these notions
also extend naturally to higher grade entities.


5 The Complement
Ï

5.1 Introduction
Up to this point, various linear spaces and the dual exterior and regressive product operations have
been introduced. The elements of these spaces were incommensurable unless they were congruent;
that is, nowhere was the concept of the measure or magnitude of an element involved, by which it
could be compared with any other element. Thus the subject of the last chapter on Geometric
Interpretations was explicitly non-metric geometry; or, to put it another way, it was what geometry
is before the ability to compare or measure is added.

The question then arises, how do we associate a measure with the elements of a Grassmann algebra
in a consistent way? Of course, we already know the approach that has developed over the last
century, that of defining a metric tensor. In this book we will indeed define a metric tensor, but we
will take an approach which develops from the concepts of the exterior product and the duality
operations which are the foundations of the Grassmann algebra. This will enable us to see how the
metric tensor on the underlying linear space generates metric tensors on the exterior linear spaces
of higher grade; the implications of the symmetry of the metric tensor; and how consistently to
generalize the notion of inner product to elements of arbitrary grade.

One of the consequences of the anti-symmetry of the exterior product of 1-elements is that the
exterior linear space of m-elements has the same dimension as the exterior linear space of
(n-m)-elements. We have already seen this property evidenced in the notions of duality and cobasis
element. And one of the consequences of the notion of duality is that the regressive product of an
m-element and an (n-m)-element is a scalar. Thus there is the opportunity of defining for each
m-element a corresponding 'co-m-element' of grade n-m such that the regressive product of these two
elements gives a scalar. We will see that this scalar measures the square of the 'magnitude' of the
m-element or the (n-m)-element, and corresponds to the inner product of either of them with itself.
We will also see that the notion of orthogonality is defined by the correspondence between m-
elements and their 'co-m-elements'. But most importantly, the definition of this inner product as a
regressive product of an m-element with a 'co-m-element' is immediately generalizable to elements
of arbitrary grade, thus permitting a theory of interior products to be developed which is consistent
with the exterior and regressive product axioms and which, via the notion of 'co-m-element', leads
to explicit and easily derived formulae between elements of arbitrary (and possibly different) grade.
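The dimensional symmetry just described can be checked directly. The following sketch (in Python, independent of the GrassmannAlgebra package used elsewhere in this book) verifies that the binomial coefficient C(n,m), the dimension of the exterior linear space of m-elements, equals C(n,n-m):

```python
from math import comb

# The exterior linear space of m-elements in an n-space has dimension C(n, m);
# the symmetry C(n, m) == C(n, n-m) is what lets each m-element correspond
# to an (n-m)-element complement.
for n in range(1, 9):
    for m in range(n + 1):
        assert comb(n, m) == comb(n, n - m)

dims = [comb(4, m) for m in range(5)]   # dimensions in a 4-space
print(dims)                             # [1, 4, 6, 4, 1]
```

The palindromic list of dimensions (here for a 4-space) is exactly the pairing that the complement operation exploits.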

The foundation of the notion of measure or metric then is the notion of 'co-m-element'. In this book
we use the term complement rather than 'co-m-element'. In this chapter we will develop the notion
of complement in preparation for the development of the notions of interior product and
orthogonality in the next chapter.

The complement of an element is denoted with a horizontal bar over the element. For example, the
complements of a, x + y and xÔy are denoted a, x+y and x Ô y.
m m

Finally, it should be noted that the term 'complement' may be used either to refer to an operation
(the operation of taking the complement of an element), or to the element itself (which is the result
of the operation).


Ï Historical Note

Grassmann introduced the notion of complement (Ergänzung) into the Ausdehnungslehre of 1862
[Grassmann 1862]. He denoted the complement of an element x by preceding it with a vertical bar, viz |x.
For mnemonic reasons, particularly in the derivation of formulas using the Complement Axiom, the notation
for the complement used in this book is rather the horizontal bar: x.

In discussing the complement, Grassmann defines the product of the n basis elements (the basis n-element)
to be unity. That is @e1 e2 ! en D ã 1 or, in the present notation, e1 Ô e2 Ô ! Ô en ã 1. Since
Grassmann discussed only the Euclidean complement (equivalent to imposing a Euclidean metric
gij ãdij ), this statement in the present notation is equivalent to 1 ã 1. The introduction of such an

identity, however, destroys the essential duality between L and L which requires rather the identity
m n-m

1 ã 1. In current terminology, equating e1 Ô e2 Ô ! Ô en to 1 is equivalent to equating n-elements (or


pseudo-scalars) and scalars. All other writers in the Grassmannian tradition (for example, Hyde, Whitehead
and Forder) followed Grassmann's approach. This enabled them to use the same notation for both the
progressive and regressive products. While being an attractive approach in a Euclidean system, it is not
tenable for general metric spaces. The tenets upon which the Ausdehnungslehre is based are so
geometrically fundamental, however, that it is readily extended to more general metrics.

Ï On the use of the term 'complement'

Grassmann's term Ergänzung may be translated as either complement or supplement (as well as other
meanings, for example completion). Among the earlier workers in the Grassmannian tradition, Whitehead
used supplement, while Hyde used complement. Kannenberg, in his excellent translation of the
Ausdehnungslehre of 1862, has used supplement.

Whitehead also gives an interesting historical note on extensions of the Ausdehnungslehre to general metric
concepts. [Book VI (Theory of Metrics) in A Treatise on Universal Algebra (p 369)]

In more modern works the Ergänzung for general metrics has become known as the Hodge Star operator.
But we do not use the term here since our aim is to develop the concept by showing how it can be built
straightforwardly on the foundations laid by Grassmann.

In this text we have chosen to use the word complement for its more recently added evocative meanings in set
theory and linear algebra (viz orthogonal complement) and its mnemonic evocation of the concept of co-m-
element. But most especially, to slightly differentiate it from Grassmann's original use as may be found in
Kannenberg's translation; because our aim is to extend its meaning to include a more general metric (the
Riemannian metric) than Grassmann initially envisaged.


5.2 Axioms for the Complement

The grade of a complement

Ï 1 : The complement of an m-element is an (n|m)-element.

aœL ï a œ L 5.1
m m m n-m

The grade of the complement of an element is the complementary grade of the element.
(The complementary grade of an m-element in an algebra with underlying linear space of
dimension n, is n-m.)

The linearity of the complement operation

Ï 2 : The complement operation is linear.

aa+bb ã aa+bb ã aa+bb 5.2


m k m k m k

For scalars a and b, the complement of a sum of elements (perhaps of different grades) is the sum
of the complements of the elements. The complement of a scalar multiple of an element is the
scalar multiple of the complement of the element.

The complement axiom

Ï 3 : The complement of a product is the dual product of the complements.

aÔb ã aÓb 5.3


m k m k

aÓb ã aÔb 5.4


m k m k

Note that for the terms on each side of the expression 5.3 to be non-zero we require m+k § n, while
in expression 5.4 we require m+k ¥ n.

Expressions 5.3 and 5.4 are duals of each other. We call these dual expressions the complement
axiom. Note its enticing similarity to De Morgan's law in Boolean algebra. If we wish, we can
confirm this duality by applying the GrassmannAlgebra function Dual.

DualBa Ô b ã a Ó bF
m k m k

aÓb ã aÔb
m k m k

This axiom is of central importance in the development of the properties of the complement and
interior, inner and scalar products, and formulae relating these with exterior and regressive
products. In particular, it permits us consistently to generate the complements of basis
m-elements from the complements of basis 1-elements, and hence, via the linearity axiom, the
complements of arbitrary elements.

The forms 5.3 and 5.4 may be written for any number of elements. To see this, let bãgÔd and
k p q

substitute for b in expression 5.3:


k

aÔgÔd ã aÓ gÔd ã aÓgÓd


m p q m p q m p q

In general then, expressions 5.3 and 5.4 may be stated in the equivalent forms:

aÔbÔ!Ôg ã aÓbÓ!Óg 5.5


m k p m k p

aÓbÓ!Óg ã aÔbÔ!Ôg 5.6


m k p m k p

In Grassmann's work, this axiom was hidden in his notation. However, since modern notation
explicitly distinguishes the progressive and regressive products, this axiom needs to be explicitly
stated.

The complement of a complement axiom

Ï 4 : The complement of the complement of an element is equal (apart from a


possible sign) to the element itself.

a ã H-1Lf Hm,nL a 5.7


m m


Axiom 1 says that the complement of an m-element is an (n|m)-element. Clearly then the
complement of an (n|m)-element is an m-element. Thus the complement of the complement of an
m-element is itself an m-element.

In the interests of symmetry and simplicity we will require that the complement of the complement
of an element is equal (apart from a possible sign) to the element itself. Although consistent
algebras could no doubt be developed by rejecting this axiom, it will turn out to be an essential
underpinning to the development of the standard Riemannian metric concepts to which we are
accustomed. For example, its satisfaction will require the metric tensor to be symmetric.

It will turn out, again in compliance with standard results, that the index f(m,n) is m(n-m). But
this result is more in the nature of a theorem than an axiom. We discuss this further in section 5.6.

Although this text does not consider pseudo-Riemannian metrics, the equivalent value for the index
f would in this case turn out to be m(n-m)+s, where s is the signature of the metric.
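For the Euclidean complement, the value f(m,n) = m(n-m) can be verified by brute force. The sketch below (in Python, not the GrassmannAlgebra package) represents a basis m-element by its sorted tuple of indices, and takes its complement to be its cobasis element, signed so that the exterior product of the element with its complement is the basis n-element:

```python
from itertools import combinations

def perm_sign(seq):
    """Sign of the permutation that sorts seq (distinct integers)."""
    sign, seq = 1, list(seq)
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

def euclidean_complement(blade, n):
    """Cobasis element of a basis m-element: the remaining indices,
    signed so that blade ^ complement = +e1^...^en."""
    rest = tuple(i for i in range(1, n + 1) if i not in blade)
    return perm_sign(blade + rest), rest

# Complement of the complement reproduces the element up to (-1)^(m(n-m)).
for n in range(1, 7):
    for m in range(n + 1):
        for blade in combinations(range(1, n + 1), m):
            s1, c = euclidean_complement(blade, n)
            s2, cc = euclidean_complement(c, n)
            assert cc == blade and s1 * s2 == (-1) ** (m * (n - m))
```

The double sign is the sign of interchanging a block of m indices with a block of n-m indices, which requires m(n-m) transpositions.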

The complement of unity

Ï 5 : The complement of unity is equal to the unit n-element.

In Chapter 3: The Regressive Product, we showed in Section 3.3 that the unit n-element is
congruent to any basis n-element, and wrote this result in a form involving an (as yet
undetermined) scalar congruence factor ¯c.

e1 Ô e2 Ô ! Ô en ã ¯c 1
n

In the discussions and non-Mathematica derivations that follow in this chapter, it turns out to be
more notationally convenient to use the inverse of this factor, which we denote !, and rewrite this
requirement as

1 ã 1 ã ! e1 Ô e2 Ô ! Ô en 5.8
n

This is the axiom which finally enables us to define the unit n-element. This axiom requires that in
a metric space the unit n-element be identical to the complement of unity. The hitherto unspecified
scalar constant ! may now be determined from the specific complement mapping or metric under
consideration.

The dual of this axiom is 1 ã 1, since 1 and 1 are duals. We can confirm by applying the Dual
n n
function. (Note that Dual requires us to unambiguously specify that n is the dimension of the
space concerned by writing it as ¯n)

DualB1 ã 1 F
¯n

1 ã1
¯n

Hence taking the complement of expression 5.8 and using this dual axiom tells us that the
complement of the complement of unity is unity. This result clearly complies with axiom 4 .


1ã1ã1 5.9
n

We can now write the unit n-element as 1 instead of 1 in any Grassmann algebras that have a
n
metric. For example in a metric space, 1 becomes the unit for the regressive product.

a ã 1Óa 5.10
m m

5.3 Defining the Complement

The complement of an m-element


To define the complement of a general m-element, we need only define the complement of basis m-
elements, since by the linearity axiom 2, we have that for a a general m-element expressed as a
m
linear combination of basis m-elements, the complement of a is the corresponding linear
m
combination of the complements of the basis m-elements.

a ã ‚ ai ei ó a ã ‚ ai ei 5.11
m m m m

The complement of a basis m-element


To define the complement of an m-element we need to define the complements of the basis m-
elements. The complement of a basis m-element however, cannot be defined independently of the
complements of basis elements in exterior linear spaces of other grades, since they are related by
the complement axiom. For example, the complement of a basis 4-element may also be expressed
as the regressive product of the complements of two basis 2-elements, or as the regressive product
of the complement of a basis 3-element and the complement of a basis 1-element.

e1 Ô e2 Ô e3 Ô e4 ã e1 Ô e2 Ó e3 Ô e4 ã e1 Ô e2 Ô e3 Ó e4

The complement axiom enables us to define the complement of a basis m-element in terms only of
the complements of basis 1-elements.


e1 Ô e2 Ô ! Ô e m ã e1 Ó e2 Ó ! Ó e m 5.12

Thus in order to define the complement of any element in a Grassmann algebra, we only need to
define the complements of the basis 1-elements, that is, the correspondence between basis
1-elements and basis (n-1)-elements.

Defining the complement of a basis 1-element


The most general definition we could devise for the complement of a basis 1-element is a linear
combination of basis (n-1)-elements. For example, in a 3-space we could define the complement of
e1 as a linear combination of the three basis 2-elements.

e1 ã a1 e1 Ô e2 + a2 e1 Ô e3 + a3 e2 Ô e3

However, it will be much more notationally convenient to define the complements of basis 1-
elements as a linear combination of their cobasis elements. (For a definition of cobasis elements,
see Chapter 2: The Exterior Product.)

e1 ã a11 e1 + a12 e2 + a13 e3

Substituting for the cobasis symbols we see that we get a somewhat notationally different, though
equivalent, form to the definition with which we began.

e1 ã a11 e2 Ô e3 + a12 H- e1 Ô e3 L + a13 e1 Ô e2

Hence in a space of any number of dimensions we can write:

n
ei ã ‚ aij ej
j=1

For reasons which the reader will perhaps discern from the choice of notation involving gij , we
extract the scalar factor ! as an explicit factor from the coefficients of the complement mapping.
We assume, for simplicity, that ! is positive. It will turn out to be the reciprocal of the 'volume'
(measure) of the basis n-element.

n
ei ã # ‚ gij ej 5.13
j=1

This then is the form in which we will define the complement of basis 1-elements.

The scalar ! and the scalars gij are at this point otherwise entirely arbitrary. However, we must
still ensure that our definition satisfies the complement of a complement axiom 4. We will see that
in order to satisfy 4, some constraints will need to be imposed on ! and gij . Therefore, our task
now in the ensuing sections is to determine these constraints. We begin by determining ! in terms
of the gij .

Constraints on the value of !

A consideration of axiom 5 in the form 1ã1 enables us to determine a relationship between the
scalar ! and the gij . First we apply some axioms (in order): the complement of axiom 5, axiom
2, and axiom 3.

1 ã 1 ã ! e1 Ô e2 Ô ! Ô en
ã ! e1 Ô e2 Ô ! Ô en

ã ! e1 Ó e2 Ó ! Ó en
Then we use the definition of the complement of a 1-element in terms of cobasis elements (formula
5.13)

n n n
ã ! !n ‚ g1 j ej Ó ‚ g2 j ej Ó ! Ó ‚ gn j ej
j=1 j=1 j=1

Now because ei Ó ej is equal to - ej Ó ei (see the axioms for the regressive product), just as
ei Ô ej is equal to - ej Ô ei , we can rewrite this regressive product (see the section on
determinants in Chapter 2: The Exterior Product) as

ã ! !n †gij § e1 Ó e2 Ó ! Ó en

The symbol †gij § denotes the determinant of the gij .

In Chapter 3: The Regressive Product, we explore the regressive product of cobasis elements.
Formula 3.41 is useful here:

e1 Ó e2 Ó ! Ó em ã ¯cm-1 H e1 Ô e2 Ô ! Ô em L

Put m equal to n, and ¯c equal to 1 / ! (as per our original definition of ! at the beginning of this
chapter) to then give

1 ã ! !n †gij § H1 / !n-1 L H e1 Ô e2 Ô ! Ô en L
ã ! !n †gij § H1 / !n-1 L 1
ã !2 †gij §

Ï Example in 2 dimensions

1 ã 1 ã ! e1 Ô e2
ã ! e1 Ô e2


ã ! e1 Ó e2

ã ! !2 Ig11 e1 + g12 e2 M Ó Ig21 e1 + g22 e2 M

ã ! !2 Hg11 g22 - g12 g21 L e1 Ó e2

ã ! !2 †gij § e1 Ó e2

ã ! !2 †gij § H1 / !L e1 Ô e2
ã ! !2 †gij § H1 / !L 1
ã !2 †gij §

Ï Example in 3 dimensions

1 ã 1 ã ! e1 Ô e2 Ô e3

ã ! e1 Ô e2 Ô e3

ã ! e1 Ó e2 Ó e3

ã ! !3 Ig11 e1 + g12 e2 + g13 e3 M Ó


Ig21 e1 + g22 e2 + g23 e3 M Ó Ig31 e1 + g32 e2 + g33 e3 M

ã ! !3 †gij § e1 Ó e2 Ó e3

ã ! !3 †gij § H1 / !2 L H e1 Ô e2 Ô e3 L
ã ! !3 †gij § H1 / !2 L 1
ã !2 †gij §

Choosing the value of !


Formula 5.13 defined the complement of a basis 1-element as a linear combination of cobasis
elements.
n
ei ã ! ‚ gij ej
j=1

The previous section showed that in order for this definition to conform to axiom 5 (1 ã 1), the
scalar ! must be related to the determinant of the scalar coefficients gij by


!2 †gij § ã 1

Although we have not yet identified the gij with the metric tensor, our assumption in this text of
only considering Riemannian metrics (rather than pseudo-Riemannian metrics, for example),
permits us now to further suppose that the determinant of the gij is positive. Combining this with
our assumption above that ! is positive, enables us to write

! ã 1 / √†gij §

Hence from this point on then, we take the value of ! to be the reciprocal of the positive square
root of the determinant of the coefficients of the complement mapping gij .

# ã 1 / √†gij §     5.14
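A minimal numeric sketch of this constraint (in Python, with a hypothetical symmetric metric whose values are not taken from the text):

```python
import math

# A hypothetical symmetric metric on a 2-space (sample values only).
g = [[2.0, 1.0],
     [1.0, 3.0]]

det_g = g[0][0] * g[1][1] - g[0][1] * g[1][0]   # the determinant |gij| = 5
scale = 1.0 / math.sqrt(det_g)                  # the scalar called ! in the text

# The derived constraint: !^2 |gij| == 1
assert abs(scale ** 2 * det_g - 1.0) < 1e-12
```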

Defining the complements in matrix form


We can define all the complements of the basis 1-elements in one matrix equation. Let B be the
matrix of basis elements, and G be the matrix of the gij .

e1 g11 g12 … g1 n
e2 g21 g22 … g2 n
Bã Gã
… … … … …
en gn1 gn2 … gn n

Then the formulas

ei ã H1 / √†gij §L ‚j=1,…,n gij ej

can be collected in the form

B ã H1 / √†G§L G B

e1 g11 g12 … g1 n e1
e2 1 g21 g22 … g2 n e2
ã
… … … … … …
†G§
en gn1 gn2 … gnn en
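The matrix form can be exercised numerically. The Python sketch below (independent of the GrassmannAlgebra package, and with hypothetical metric values) implements the complement of a 1-element in a 2-space via the cobasis, and confirms that the complement of the complement is (-1)^(1 (2-1)) = -1 times the element, an outcome which depends on the symmetry of the gij:

```python
import math

# Hypothetical symmetric metric on a 2-space; cobasis of e1 is e2, of e2 is -e1.
g = [[2.0, 1.0],
     [1.0, 3.0]]
scale = 1.0 / math.sqrt(g[0][0] * g[1][1] - g[0][1] * g[1][0])  # the scalar !

def complement(v):
    """Complement of v = v1 e1 + v2 e2, returned as (coeff of e1, coeff of e2).
    ei bar = ! (gi1 (cobasis of e1) + gi2 (cobasis of e2)), extended by linearity."""
    v1, v2 = v
    return (-scale * (v1 * g[0][1] + v2 * g[1][1]),   # from cobasis of e2 = -e1
            scale * (v1 * g[0][0] + v2 * g[1][0]))    # from cobasis of e1 = e2

# Complement of the complement of a 1-element in a 2-space is (-1)^(1(2-1)) = -1
# times the element; the stray e2 coefficient cancels only because g is symmetric.
x1, x2 = complement(complement((1.0, 0.0)))
assert abs(x1 + 1.0) < 1e-12 and abs(x2) < 1e-12
```

Repeating the check with an asymmetric g breaks the final assertion, which is the numerical face of the symmetry requirement noted under axiom 4.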


5.4 The Euclidean Complement

Tabulating Euclidean complements of basis elements


The Euclidean complement of a basis m-element may be defined as its cobasis element.
Conceptually this is the simplest correspondence we can define between basis m-elements and
basis (n-m)-elements. In this case the matrix of the metric tensor is the identity matrix, with the
result that the 'volume' 1/! of the basis n-element is unity. (However, we cannot yet talk formally
about 'volumes' because we have not yet introduced the notion of inner product.) We tabulate the
basis-complement pairs for spaces of two, three and four dimensions by using the
GrassmannAlgebra function ComplementPalette.

Ï Basis elements and their Euclidean complements in 2-space

¯!2 ; ComplementPalette

Complement Palette
Basis Complement
1 e1 Ô e2
e1 e2
e2 -e1
e1 Ô e2 1


Ï Basis elements and their Euclidean complements in 3-space

¯!3 ; ComplementPalette

Complement Palette
Basis Complement
1 e1 Ô e2 Ô e3
e1 e2 Ô e3
e2 -He1 Ô e3 L
e3 e1 Ô e2
e1 Ô e2 e3
e1 Ô e3 -e2
e2 Ô e3 e1
e1 Ô e2 Ô e3 1


Ï Basis elements and their Euclidean complements in 4-space

¯!4 ; ComplementPalette

Complement Palette
Basis Complement
1 e1 Ô e2 Ô e3 Ô e4
e1 e2 Ô e3 Ô e4
e2 -He1 Ô e3 Ô e4 L
e3 e1 Ô e2 Ô e4
e4 -He1 Ô e2 Ô e3 L
e1 Ô e2 e3 Ô e4
e1 Ô e3 -He2 Ô e4 L
e1 Ô e4 e2 Ô e3
e2 Ô e3 e1 Ô e4
e2 Ô e4 -He1 Ô e3 L
e3 Ô e4 e1 Ô e2
e1 Ô e2 Ô e3 e4
e1 Ô e2 Ô e4 -e3
e1 Ô e3 Ô e4 e2
e2 Ô e3 Ô e4 -e1
e1 Ô e2 Ô e3 Ô e4 1

Formulae for the Euclidean complement of basis elements


In this section we summarize some of the formulae involving Euclidean complements of basis
elements. Later, we will generalize these formulas for the general Riemannian metric case.

The Euclidean complement of a basis 1-element is its cobasis (n-1)-element. For a basis 1-element
ei then we have:


ei ã ei ã H-1Li-1 e1 Ô ! Ô ·i Ô ! Ô en 5.15

This simple relationship extends naturally to complements of basis elements of any grade.

ei ã ei 5.16
m m

This may also be written:

ei1 Ô ! Ô eim ã H-1LKm e1 Ô ! Ô ·i1 Ô ! Ô ·im Ô ! Ô en

Km ã Hi1 + i2 + ! + im L + m Hm + 1L / 2     5.17

where the symbol · means the corresponding element is missing from the product.
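The sign (-1)^Km can be cross-checked against the cobasis definition. In the Python sketch below (not the GrassmannAlgebra package), the cobasis sign of a basis m-element is computed as the sign of the permutation that sorts the element's indices followed by the remaining indices:

```python
from itertools import combinations

def perm_sign(seq):
    """Sign of the permutation that sorts seq (distinct integers)."""
    sign, seq = 1, list(seq)
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

def cobasis_sign(blade, n):
    """Sign s such that blade ^ (remaining indices) = s * e1^...^en."""
    rest = tuple(i for i in range(1, n + 1) if i not in blade)
    return perm_sign(blade + rest)

# Formula 5.17: the sign is (-1)^Km with Km = (i1 + ... + im) + m(m+1)/2.
for n in range(1, 8):
    for m in range(n + 1):
        for blade in combinations(range(1, n + 1), m):
            k_m = sum(blade) + m * (m + 1) // 2
            assert cobasis_sign(blade, n) == (-1) ** k_m

# e.g. in 4-space the complement of e1^e3 is -(e2^e4), as tabulated above.
assert cobasis_sign((1, 3), 4) == -1
```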

In particular, the unit n-element is now the basis n-element:

1 ã e1 Ô e2 Ô ! Ô en 5.18

and the complement of the basis n-element is just unity.

1 ã e1 Ô e2 Ô ! Ô en 5.19

This simple correspondence is that of a Euclidean complement, defined by a Euclidean metric,


simply given by gij ãdij , with consequence that ! ã 1.

gij ã dij ! ã 1 5.20

Finally, we can see that exterior and regressive products of basis elements with complements take
on particularly simple forms.

ei Ô ej ã dij 1 5.21
m m

ei Ó ej ã dij 5.22
m m

These forms will be the basis for the definition of the inner product in the next chapter.

We note that the concept of cobasis which we introduced in Chapter 2: The Exterior Product,
despite its formal similarity to the Euclidean complement, is only a notational convenience. We do
not define it for linear combinations of elements as the definition of a complement requires.

Products leading to a scalar or n-element


In this section we collect together some simple Euclidean results which we will find useful in the
rest of the chapter.

From the basis element formulae above we can immediately derive formulae for the exterior and
regressive products of general elements a and b.
m m

a ã ‚ ai ei b ã ‚ bi ei a ã ‚ ai ei b ã ‚ bi ei
m m m m m m m m

a Ô b ã K‚ ai ei O Ô K‚ bk ek O ã I‚ ai bi M 1
m m m m

a Ô b ã I‚ ai bi M 1 5.23
m m

Taking the complement of this formula, or else doing a derivation mutatis mutandis, leads to the
regressive product form.

a Ó b ã ‚ ai bi 5.24
m m

a Ô b ã Ka Ó bO 1 5.25
m m m m

In the case a is equal to b, and writing ⁄ ai 2 as £aß2 we obtain
m m m

a Ô a ã I‚ ai 2 M 1 ã £aß2 1     5.26
m m m

a Ó a ã ‚ ai 2 ã £aß2     5.27
m m m

A common case is for the 1-element.

x ã ‚ ai ei y ã ‚ bi ei


x Ô y ã I‚ ai bi M 1 x Ó y ã ‚ ai bi 5.28
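A numeric check of these formulae in a Euclidean 3-space (a Python sketch with arbitrarily chosen sample components, outside the GrassmannAlgebra package):

```python
def perm_sign(seq):
    """Sign of the permutation that sorts seq (distinct integers)."""
    sign, seq = 1, list(seq)
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

n = 3
a = {1: 2.0, 2: -1.0, 3: 4.0}   # x = 2 e1 - e2 + 4 e3   (sample components)
b = {1: 0.5, 2: 3.0, 3: 1.0}    # y = 0.5 e1 + 3 e2 + e3

# Accumulate x ^ ybar, taking the Euclidean complement of e_j as its cobasis element.
coeff = 0.0                     # coefficient of e1^e2^e3 in the result
for i, ai in a.items():
    for j, bj in b.items():
        rest = tuple(k for k in range(1, n + 1) if k != j)
        if i in rest:
            continue            # e_i ^ (complement of e_j) = 0 unless i == j
        s = perm_sign((j,) + rest)              # complement of e_j = s * e_rest
        coeff += ai * bj * s * perm_sign((i,) + rest)

dot = sum(a[k] * b[k] for k in a)               # sum of a_i b_i
assert abs(coeff - dot) < 1e-12                 # x ^ ybar = (sum a_i b_i) 1bar
```

Only the i == j terms survive, which is exactly why the coefficient of the n-element collapses to the familiar dot product.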

5.5 Complementary Interlude

Alternative forms for complements


As a consequence of the complement axiom and the fact that multiplication by a scalar is
equivalent to exterior multiplication (see Chapter 2: Section 2.4) we can write the complement of a
scalar, an m-element, and a scalar multiple of an m-element in several alternative forms.

Ï Alternative forms for the complement of a scalar

From the linearity axiom 2 we have a a ã a a, hence:


m m

a ã a 1 ã a 1 ã a Ô 1
The complement of a scalar can then be expressed in any of the following forms:

a ã 1Ôa ã 1Óa ã 1Ôa ã 1 a ã aÔ1 ã a 1 5.29

Ï Alternative forms for the complement of an m-element

a ã 1Ôa ã 1Óa ã 1Ôa ã 1 a 5.30


m m m m m

Ï Alternative forms for the complement of a scalar multiple of an m-element

a a ã aÔa ã aÓa ã aÔa ã a a 5.31


m m m m m

Orthogonality
The specific complement mapping that we impose on a Grassmann algebra will define the notion
of orthogonality for that algebra. A simple element and its complement will be referred to as being
orthogonal to each other. In standard linear space terminology, the space of a simple element and
the space of its complement are said to be orthogonal complements of each other.

This orthogonality is total. That is, every 1-element in a given simple m-element is orthogonal to
every 1-element in the complement of the m-element.


In the special case of a 2-dimensional vector space, the complement of a vector is itself a vector.

A vector and its complement in a two-dimensional vector space

In a vector three-space, the complement of a vector is a bivector. And of course the complement of
a bivector is a vector.

A vector and its bivector complement in a three-dimensional vector space

More generally we can say that two simple elements of the same grade a and b, are orthogonal if
and only if the exterior product of one with the complement of the other is zero.

aÔb ã 0 aÔb ã 0 5.32

In the case that a is x and b is x, both forms are clearly verified.

xÔx ã 0 xÔx ã 0
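A small numeric illustration of 5.32 in a Euclidean 3-space (Python, not the GrassmannAlgebra package): e1 and e2 are orthogonal, since the exterior product of e1 with the complement of e2 vanishes, while the product of e1 with its own complement does not:

```python
def perm_sign(seq):
    """Sign of the permutation that sorts seq (distinct integers)."""
    sign, seq = 1, list(seq)
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

def cobasis(j, n):
    """Euclidean complement of e_j: a signed (n-1)-element (sign, indices)."""
    rest = tuple(k for k in range(1, n + 1) if k != j)
    return perm_sign((j,) + rest), rest

def wedge_vector_with(blade_sign, blade, i):
    """e_i ^ (signed blade); zero if e_i already occurs in the blade."""
    if i in blade:
        return 0, ()
    merged = (i,) + blade
    return blade_sign * perm_sign(merged), tuple(sorted(merged))

# e1 ^ (complement of e2) vanishes: e1 and e2 are orthogonal.
assert wedge_vector_with(*cobasis(2, 3), 1) == (0, ())
# e1 ^ (complement of e1) is the full basis 3-element: e1 is not orthogonal to itself.
assert wedge_vector_with(*cobasis(1, 3), 1) == (1, (1, 2, 3))
```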
By taking the complement of these equations, and applying both the complement axiom, and the
complement of a complement axiom, we see that we can also say that two simple elements of the
same grade a and b, are orthogonal if and only if the regressive product of one with the
complement of the other is zero.

aÓb ã 0 aÓb ã 0 5.33

We will develop these results in more depth in the next chapter.

Visualizing the complement axiom


We can use this notion of orthogonality to visualize the complement axiom geometrically.

Consider the bivector xÔy. Then x Ô y is orthogonal to xÔy. But since x Ô y ã x Ó y this also
means that the intersection of the two (n-1)-spaces defined by x and y is orthogonal to xÔy. We
can depict this in 3-space as follows:

Visualizing the complement axiom in vector three-space

The regressive product in terms of complements


All the operations in the Grassmann algebra can be expressed in terms only of the exterior product
and the complement operations. It is this fact that makes the complement so important for an
understanding of the algebra.

In particular the regressive product (discussed in Chapter 3) and the interior product (to be
discussed in the next chapter) have simple representations in terms of the exterior and complement
operations.

We have already introduced the complement axiom 3 (equation 5.3) as part of the definition of the
complement.

aÓb ã aÔb
m k m k

Taking the complement of both sides of this axiom, noting that the grade of a Ó b is m+k-n, and
m k
using the complement of a complement axiom 4 gives:

a Ó b ã a Ô b ã H-1Lf Hm+k-n,nL a Ó b
m k m k m k

Hence, any regressive product can be written as the complement of an exterior product of
complements.

a Ó b ã H-1Lf Hm+k-n,nL a Ô b 5.34


m k m k

In Section 5.6 below, we will show that f(m,n) is equal to m(n-m), so that f (m+k-n, n) becomes
(m+k-n) (n-(m+k-n)), which has the same parity as (m+k) (m+k-n). For reference here, we include
this version of the formula below, although it has not yet been derived.

a Ó b ã H-1LHm+kL Hm+k-nL a Ô b 5.35


m k m k

The expression of a regressive product in terms of an exterior product and a double application of
the complement operation poses an interesting conundrum. On the left hand side, there is a product
which is defined in a completely non-metric way, and is thus independent of any metric imposed
on the space. On the right hand side we have a double application of the complement operation,
each of which requires a metric. This leads us to the notion that the complement operation applied
twice in this way is independent of any metric used to define it. A second application of the
complement operation effectively 'cancels out' the metric elements introduced during the first
application.
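Since only the parity of the exponent matters, the sign in formula 5.35 can be tabulated with a small helper. This is a plain Python sketch with our own function names; it also confirms that the exponent f(m+k-n, n) = (m+k-n)(n-(m+k-n)) of formula 5.34 always has the same parity as the simplified exponent (m+k)(m+k-n).

```python
# Sign relating a regressive product of grades m and k in n-space to the
# complement of the exterior product of the complements (formula 5.35).
# Only the parity of the exponent matters, so reduce it mod 2 (the raw
# exponent can be negative when m + k < n).
def regressive_sign(m, k, n):
    return (-1) ** (((m + k) * (m + k - n)) % 2)

# Parity agreement between f(m+k-n, n) = (m+k-n)(n-(m+k-n)), taking
# f(m, n) = m(n-m), and the simplified exponent (m+k)(m+k-n); the two
# differ by 2(m+k-n)^2, which is always even.
def same_parity(m, k, n):
    g = m + k - n
    return (g * (n - g)) % 2 == ((m + k) * g) % 2
```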


Glimpses of the inner product


The exterior product of an m-element with the complement of another m-element is either zero, or
an n-element. If it is zero, we say that the elements are orthogonal. Orthogonality was discussed in
a previous section.

If on the other hand it is an n-element, it must be a scalar multiple (a, say) of the unit n-element 1.
Hence:

aÔb ã a 1 ó aÔb ã a 1 ã a
m m m m

ó H-1Lm Hn-mL bÔa ã a ó H-1Lm Hn-mL b Ó a ã a
m m m m

ó H-1Lm Hn-mL+f Hm,nL bÓa ã a
m m

We will see in the following chapter how this formula is the foundation of the definition of the
inner product. It shows how a scalar may be obtained from the regressive product of an element
with the complement of an element of the same grade. It also shows the 'equivalent' definition in
terms of the exterior product.

If f(m,n) is equal to m(n-m), as we will soon show, then the last formula simplifies, giving

aÔb ã a 1 ó bÓa ã a 5.36


m m m m
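In a Euclidean vector 3-space this scalar can be computed concretely. The sketch below is plain Python with our own names (Euclidean metric assumed, not the package's implementation): it forms the complement of a vector as a bivector and reads off the coefficient of e1Ôe2Ôe3 in aÔb̄; the scalar a of formula 5.36 comes out as the ordinary dot product.

```python
# Euclidean complement of a vector in 3-space, as coefficients on the
# bivector basis (e1^e2, e1^e3, e2^e3):
# e1 -> e2^e3, e2 -> -(e1^e3), e3 -> e1^e2.
def complement3(b):
    b1, b2, b3 = b
    return (b3, -b2, b1)

# Coefficient of e1^e2^e3 in a ^ B, for a vector a and a bivector B
# given on the basis (e1^e2, e1^e3, e2^e3).
def wedge_vec_bivec(a, B):
    B12, B13, B23 = B
    return a[0] * B23 - a[1] * B13 + a[2] * B12

# The scalar 'a' in  a ^ b-bar == a * (unit 3-element)  of formula 5.36;
# in the Euclidean case it is the dot product of the two vectors.
def inner_scalar(a, b):
    return wedge_vec_bivec(a, complement3(b))
```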

5.6 The Complement of a Complement

The complement of a complement axiom

In the sections below we determine the function f (m, n) in axiom 4 (Formula 5.7) above:

a ã H-1Lf Hm,nL a
m m

To find the function f (m, n) we proceed to calculate the complement of the complement of an m-
element. What we also discover along the way, is that for a formula like this to be valid, the
coefficients of our metric mapping gij must also be symmetric. And, as we have already previewed
in Section 5.5 above, f (m, n) must be equal to m(n-m).


The complement of a cobasis element


The complement of the complement of a basis element is obtained by taking the complement of
Formula 5.13
n
ei ã ! ‚ gij ej
j=1

to give

n
ei ã ! ‚ gij ej
j=1

In order to determine ei , we need to obtain an expression for the complement of a cobasis element
of a 1-element ej . Note that whereas the complement of a cobasis element is defined, the cobasis
element of a complement is not. To simplify the development we consider, without loss of
generality, a specific basis element e1 .

First we express the cobasis element as a basis (n-1)-element, and then use the complement axiom
to express the right-hand side as a regressive product of complements of 1-elements.

e1 ã e2 Ô e3 Ô ! Ô en ã e2 Ó e3 Ó ! Ó en

Substituting for the ei from the definition 5.13:

e1 ã ! Ig21 e1 + g22 e2 + ! + g2 n en M Ó
! Ig31 e1 + g32 e2 + ! + g3 n en M Ó
! Ó
! Ign1 e1 + gn2 e2 + ! + gnn en M

Expanding this expression and collecting terms gives:

n
e1 ã !n-1 ‚ g1 j H-1Lj-1 e1 Ó e2 Ó ! Ó ·j Ó ! Ó en
j=1

In this expression, the notation ·j means that the jth factor has been omitted. The scalars g1 j
denote the cofactors of the corresponding elements g1 j in the array gij . Further discussion of
cofactors, and how they result from such products of (n-1)-elements may be found in Chapter 2. The
results for regressive products are identical mutatis mutandis to those for the exterior product.

By equation 3.42, regressive products of the form above simplify to a scalar multiple of the
missing element. (Remember, ! is now equal to the inverse of ¯c for the purposes of this Chapter).

1
H-1Lj-1 e1 Ó e2 Ó ! Ó ·j Ó ! Ó en ã H-1Ln-1 ej
!n-2
The expression for e1 then simplifies to:


n
e1 ã H-1Ln-1 ! ‚ g1 j ej
j=1

The final step is to see that the actual basis element chosen (e1 ) was, as expected, of no
significance in determining the final form of the formula. We thus have the more general result:

n
ei ã H-1Ln-1 # ‚ gij ej 5.37
j=1

Thus we have determined the complement of the cobasis element of a basis 1-element as a specific
1-element. The coefficients of this 1-element are the products of the scalar ! with cofactors of the
gij , and a possible sign depending on the dimension of the space.

In matrix formulation, this can be written as

B ã H-1Ln-1 H1 ê †G§L G B

where B is a column matrix of basis vectors, and G is the matrix of cofactors of elements of G.

Our major use of this formula is to derive an expression for the complement of a complement of a
basis element. This we do below.

The complement of the complement of a basis 1-element


Consider again the basis element ei . The complement of ei is, by definition:
n
ei ã ! ‚ gij ej
j=1

Taking the complement a second time gives:

n
ei ã ! ‚ gij ej
j=1

We can now substitute for the ej from the formula derived in the previous section to get:

n n
ei ã H-1Ln-1 !2 ‚ gij ‚ gjk ek
j=1 k=1

In the matrix formulation of the previous section, this can be written as


B ã H-1Ln-1 H1 ê †G§L G G B

In Chapter 2 the sum over j of the terms gij gkj (the second factor being the cofactor of gkj)
was shown to be equal to the determinant †gij § whenever i equals k, and zero otherwise. That is:
n
‚ gij gkj ã †gij § dik
j=1

Note carefully that in this sum, the order of the subscripts is reversed in the cofactor term compared
to that in the expression for ei .
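The cofactor identity just quoted can be checked directly. Here is a plain Python sketch with our own helper functions, using the example values a=2, b=3, g=2, n=1 in the symmetric metric {{a,0,n},{0,b,0},{n,0,g}} used later in this chapter: it builds the 3×3 matrix of cofactors and confirms that the sum over j of gij times the (k,j) cofactor equals the determinant when i equals k and zero otherwise.

```python
# Determinant of a 3x3 matrix.
def det3(M):
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# Matrix of cofactors: C[i][j] = (-1)^(i+j) times the (i,j) minor of M.
def cofactor3(M):
    C = [[0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            m = [[M[r][c] for c in range(3) if c != j]
                 for r in range(3) if r != i]
            C[i][j] = (-1) ** (i + j) * (m[0][0] * m[1][1] - m[0][1] * m[1][0])
    return C

# A symmetric example metric (a=2, b=3, g=2, n=1).
G = [[2, 0, 1], [0, 3, 0], [1, 0, 2]]
C = cofactor3(G)
```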

Thus we conclude that if and only if the array gij is symmetric, that is, gij ãgji , can we express
the complement of a complement of a basis 1-element in terms only of itself and no other basis
element.

ei ã H-1Ln-1 !2 †gij § dik ek == H-1Ln-1 !2 †gij § ei

Furthermore, since we have already shown in Section 5.3 that !2 †gij §ã1, we also have that:

ei ã H-1Ln-1 ei
In sum: In order to satisfy the complement of a complement axiom for m-elements it is necessary
that gij ãgji . For 1-elements it is also sufficient.

Below we shall show that the symmetry of the gij is also sufficient for m-elements.

The complement of the complement of a basis m-element


Consider the basis m-element e1 Ôe2 Ô!Ôem . By taking the complement of the complement of this
element and applying the complement axiom twice, we obtain:

e1 Ô e2 Ô ! Ô em ã e1 Ó e2 Ó ! Ó em ã e1 Ô e2 Ô ! Ô em

But since ei ãH-1Ln-1 ei we obtain immediately that:

e1 Ô e2 Ô ! Ô em ã H-1Lm Hn-1L e1 Ô e2 Ô ! Ô em

But of course the form of this result is valid for any basis element ei . Writing H-1Lm Hn-1L in the
m
equivalent form H-1Lm Hn-mL (since the parity of m(n-1) is equal to the parity of m(n-m)), we
obtain that for any basis element of any grade:

ei ã H-1Lm Hn-mL ei 5.38


m m

The complement of the complement of an m-element



Consider the general m-element a ã ⁄i ai ei . By taking the complement of the complement of
m m
this element and substituting from equation 5.18, we obtain:

a ã ‚ ai ei ã ‚ ai H-1Lm Hn-mL ei ã H-1Lm Hn-mL a


m m m m
i i

Finally then we have shown that, provided the complement mapping gij is symmetric and the
constant ! is such that !2 †gij §ã1, the complement of a complement axiom is satisfied by an
otherwise arbitrary mapping, with sign H-1Lm Hn-mL .

a ã H-1Lm Hn-mL a 5.39


m m

Ï Special cases

The complement of the complement of a scalar is the scalar itself.

aãa 5.40

The complement of the complement of an n-element is the n-element itself.

aãa 5.41
n n

The complement of the complement of any element in a 3-space is the element itself, since
H-1Lm H3-mL is positive for m equal to 0, 1, 2, or 3.

aãa
m m

Alternatively, we can say that aãa except when a is of odd degree in an even-dimensional space.
m m m
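The sign pattern of formula 5.39 is worth tabulating. A plain Python one-liner (the name is ours) makes the special cases above easy to confirm: the sign is +1 whenever the dimension n is odd or the grade m is even, and -1 only for elements of odd grade in an even-dimensional space.

```python
# Sign picked up by a double complement of an m-element in n-space
# (formula 5.39): taking the complement twice multiplies by (-1)^(m(n-m)).
def double_complement_sign(m, n):
    return (-1) ** (m * (n - m))
```

For instance, every grade in a 3-space gives +1, while a vector (m = 1) in a 2-space or a 4-space gives -1, matching the example that follows.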

The simplest case of an element of odd degree in an even-dimensional space is a vector in a vector
2-space.


x̿ = -x

Vectors x, x̄, and x̿ in a two-dimensional vector space

Idempotent complements
No simple element (except zero) can be equal to its complement. This is because the exterior
product of a simple element with its complement is congruent to the unit n-element (which is not
zero) - a property that follows directly from the definition of complement.

However, this is not necessarily true for non-simple elements. Let S be the sum of a simple m-
element (a, say) and its complement.
m

Sãa+a
m m

Then the complement of S is equal to

S ã a + a ã a + H-1Lm Hn-mL a
m m m m

Since H-1Lm Hn-mL is equal to H-1Lm Hn-1L , we can see that S ã S if and only if the dimension n
of the space is odd, or the grade m is even.

For example, in a vector 3-space, the complement of a sum of a vector and its bivector complement
is identical to itself.

¯!3 ; S = x + x; 9S, ¯$@SD=

8x + x, x + x<

And in a point 3-space, the sum of a 2-element and its 2-element complement is identical to itself.

¯"3 ; S = B + B; 9S, ¯$@SD=


2 2

:B + B, B + B>
2 2 2 2

In Chapter 7: Exploring Screw Algebra, we will explore this property further.


5.7 Working with Metrics

Working with metrics


In GrassmannAlgebra, there are three basic ways in which you might apply a metric.

First, you can transform expressions involving Clifford, hypercomplex, generalized Grassmann,
interior, inner, or scalar products to expressions specifically involving scalar products of basis
elements. Your results involving these scalar products of basis elements are valid for any metric. If
you wish to make a specific metric apply to your results, you can declare that metric, and use
ToMetricElements to perform the substitution of the currently declared metric elements for the
corresponding scalar products of basis elements. We will discuss these transformations after we
have introduced the interior product in the next chapter.

Second, if you have elements expressed in terms of basis elements, you can compute their
complements by first taking their complement (applying GrassmannComplement, or using the
OverBar in the GrassmannAlgebra Palette), and then applying ConvertComplements. This
will use the currently declared metric to convert the complements of basis elements to
complementary basis elements. We will discuss these transformations in the section to follow this
one.

Third, if you want to display the metrics induced by the various exterior product spaces, you can
use MetricL for an individual space or MetricPalette to display all the induced metrics. We
discuss these in the sections below.

The default metric


In order to simplify complements and interior products, and any products defined in terms of them
(for example, Clifford and hypercomplex products), GrassmannAlgebra needs to know what
metric has been imposed on the underlying linear space L. Grassmann and all those writing in the
1
early tradition of the Ausdehnungslehre tacitly assumed a Euclidean metric; that is, one in which
gij ã dij . This metric is also the one tacitly assumed in beginning presentations of the three-
dimensional vector calculus, and is most evident in the definition of the cross product as a vector
normal to the factors of the product.

The default metric assumed by GrassmannAlgebra is the Euclidean metric. In the case that the
GrassmannAlgebra package has just been loaded, entering Metric will show the components of
the default Euclidean metric tensor.

Metric
881, 0, 0<, 80, 1, 0<, 80, 0, 1<<

These components are arranged as the elements of a 3×3 matrix because the default basis is
3-dimensional.

Basis
8e1 , e2 , e3 <

However, if the dimension of the basis is changed, the default metric changes accordingly.

DeclareBasis@8¯", i, j, k<D
8¯#, i, j, k<

Metric
881, 0, 0, 0<, 80, 1, 0, 0<, 80, 0, 1, 0<, 80, 0, 0, 1<<

We will take the liberty of referring to such a matrix as the metric.

Declaring a metric
GrassmannAlgebra permits us to declare a metric as a matrix with any numeric or symbolic
components. There are just three conditions to which a valid matrix must conform in order to be a
metric:
1) It must be symmetric (and hence square).
2) Its order must be the same as the dimension of the declared linear space.
3) Its components must be scalars.

It is up to the user to ensure the first two conditions. The third is handled by GrassmannAlgebra
automatically assuming all symbols in the matrix are scalars.

Ï Example: Declaring a metric


We return to a vector 3-space (by entering ¯!3 from the palette), create a 3×3 matrix M, and then
declare it as the metric.

¯!3 ; M = 88a, 0, n<, 80, b, 0<, 8n, 0, g<<; MatrixForm@MD


a 0 n
0 b 0
n 0 g

DeclareMetric@MD
88a, 0, n<, 80, b, 0<, 8n, 0, g<<

We can verify that this is indeed the metric by entering Metric.

Metric
88a, 0, n<, 80, b, 0<, 8n, 0, g<<

We can verify that a, b, and n are scalars by using ScalarQ.


ScalarQ@8a, b, n<D
8True, True, True<

Declaring a general metric


For theoretical calculations it is sometimes useful to be able to declare quickly a metric of
general symbolic elements. We can do this as described in the previous section, or we can use the
GrassmannAlgebra function DeclareMetric[g] where g is a symbol. We will often use the
'double-struck' symbol - for the kernel symbol of the metric components.

¯!3 ; G = DeclareMetric@-D; MatrixForm@GD


$1,1 $1,2 $1,3
$1,2 $2,2 $2,3
$1,3 $2,3 $3,3

You can check that the components of G are now considered scalars.

ScalarQ@GD
88True, True, True<, 8True, True, True<, 8True, True, True<<

You can test to see if a symbol is a component of the currently declared metric by using MetricQ.

MetricQ@8-2,3 , -23 , -3,4 <D

8True, False, False<

Calculating induced metrics


The GrassmannAlgebra function for calculating the metric induced on L by the metric in L is
m 1
MetricL[m].

Ï Induced general metrics

Suppose we are working in 3-space with the general metric defined above. Then the metrics on L,
0
L, L, L can be calculated by entering:
1 2 3

¯!3 ; DeclareMetric@-D;

M0 = MetricL@0D; MatrixForm@M0 D
H1L


M1 = MetricL@1D; MatrixForm@M1 D
$1,1 $1,2 $1,3
$1,2 $2,2 $2,3
$1,3 $2,3 $3,3

M2 = MetricL@2D; MatrixForm@M2 D
-$21,2 + $1,1 $2,2 -$1,2 $1,3 + $1,1 $2,3 -$1,3 $2,2 + $1,2 $2,3
-$1,2 $1,3 + $1,1 $2,3 -$21,3 + $1,1 $3,3 -$1,3 $2,3 + $1,2 $3,3
-$1,3 $2,2 + $1,2 $2,3 -$1,3 $2,3 + $1,2 $3,3 -$22,3 + $2,2 $3,3

M3 = MetricL@3D; MatrixForm@M3 D

I -$21,3 $2,2 + 2 $1,2 $1,3 $2,3 - $1,1 $22,3 - $21,2 $3,3 + $1,1 $2,2 $3,3 M

The metric in L is of course just the determinant of the metric in L.
3 1

Ï Example: Induced specific metrics

We return to the metric discussed in a previous example.

¯!3 ; M = 88a, 0, n<, 80, b, 0<, 8n, 0, g<<;


DeclareMetric@MD; MatrixForm@MD
a 0 n
0 b 0
n 0 g

The induced metrics are

MatrixForm@MetricL@ÒDD & êü 80, 1, 2, 3<


:H 1 L,

a 0 n
0 b 0 ,
n 0 g

a b    0         -b n
0      a g - n2  0    ,
-b n   0         b g

H a b g - b n2 L>
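Behind MetricL is a simple construction: the metric induced on L2 is the matrix of 2×2 minors of the metric on L1, indexed by basis 2-elements in lexicographic order. The sketch below is plain Python with our own names (not the package's implementation), run on the example metric above with the values a=2, b=3, g=2, n=1 substituted.

```python
from itertools import combinations

# 2x2 minor of M taken from the given row pair and column pair.
def minor2(M, rows, cols):
    (r1, r2), (c1, c2) = rows, cols
    return M[r1][c1] * M[r2][c2] - M[r1][c2] * M[r2][c1]

# Metric induced on L2 by a metric M on L1 in a 3-space: the matrix of
# 2x2 minors, indexed by (e1^e2, e1^e3, e2^e3) in lexicographic order.
def induced_metric2(M):
    pairs = list(combinations(range(3), 2))
    return [[minor2(M, p, q) for q in pairs] for p in pairs]

# Example values a=2, b=3, g=2, n=1 in the metric {{a,0,n},{0,b,0},{n,0,g}}.
M = [[2, 0, 1], [0, 3, 0], [1, 0, 2]]
M2 = induced_metric2(M)
```

With these values the displayed L2 metric {{a b, 0, -b n}, {0, a g - n2, 0}, {-b n, 0, b g}} becomes {{6, 0, -3}, {0, 3, 0}, {-3, 0, 6}}, and the L3 metric is the full determinant a b g - b n2 = 9.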

Ï Verifying the symmetry of the induced metrics

It is easy to verify the symmetry of the induced metrics in any particular case by inspection.
However to automate this we need only ask Mathematica to compare the metric to its transpose.
For example we can verify the symmetry of the metric induced on L by a general metric in 4-space.
3

¯!4 ; DeclareMetric@-D; M3 = MetricL@3D; M3 ã Transpose@M3 D


True


The metric for a cobasis


In the sections above we have shown how to calculate the metric induced on L by the metric
m
defined on L. Because of the way in which the cobasis elements of L are naturally ordered
1 m
alphanumerically, their ordering and signs will not generally correspond to that of the basis
elements of L . The arrangement of the elements of the metric tensor for the cobasis elements of L
n-m m
will therefore differ from the arrangement of the elements of the metric tensor for the basis
elements of L . The metric tensor of a cobasis is the tensor of cofactors of the metric tensor of the
n-m
basis. Hence we can obtain the tensor of cofactors of the elements of the metric tensor of L by
m
reordering and resigning the elements of the metric tensor induced on L .
n-m

As an example, take the general metric in 3-space discussed in the previous section. We will show
that we can obtain the tensor of the cofactors of the elements of the metric tensor of L by
1
reordering and resigning the elements of the metric tensor induced on L.
2

The metric on L is given as a correspondence G1 between the basis elements and cobasis elements
1
of L.
1

e1 e2 Ô e3 -1,1 -1,2 -1,3


e2 ã ! G1 -He1 Ô e3 L ; G1 = -1,2 -2,2 -2,3 ;
e3 e1 Ô e2 -1,3 -2,3 -3,3

The metric on L is given as a correspondence G2 between the basis elements and cobasis elements
2
of L.
2

e1 Ô e2 e3
e1 Ô e3 ã ! G2 -e2 ;
e2 Ô e3 e1

--21,2 + -1,1 -2,2 --1,2 -1,3 + -1,1 -2,3 --1,3 -2,2 + -1,2 -2,3
G2 = --1,2 -1,3 + -1,1 -2,3 --21,3 + -1,1 -3,3 --1,3 -2,3 + -1,2 -3,3 ;
--1,3 -2,2 + -1,2 -2,3 --1,3 -2,3 + -1,2 -3,3 --22,3 + -2,2 -3,3

We can transform the columns of basis elements in this equation into the form of the preceding one
with the transformation T which is simply determined as:

0 0 1
T= 0 -1 0 ;
1 0 0

e1 Ô e2 e2 Ô e3 e3 e1
T e1 Ô e3 ã -He1 Ô e3 L ; T -e2 ã e2 ;
e2 Ô e3 e1 Ô e2 e1 e3

And since this transformation is its own inverse, we can also transform G2 to T.G2 .T. We now
expect this transformed array to be the array of cofactors of the metric tensor G1 . We can easily
check this in Mathematica by entering the predicate

Simplify@G1 .HT.G2 .TL ã Det@G1 D IdentityMatrix@3DD


True
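The same check can be run numerically outside Mathematica. Below is a plain Python sketch (our own helper, with the example values a=2, b=3, g=2, n=1 substituted into the metric of the earlier example); G2 is then the matrix of 2×2 minors of G1, and G1.(T.G2.T) comes out as Det[G1] times the identity matrix.

```python
# Plain matrix product for lists of lists.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

T = [[0, 0, 1], [0, -1, 0], [1, 0, 0]]      # the reordering/resigning transformation
G1 = [[2, 0, 1], [0, 3, 0], [1, 0, 2]]      # example metric on L1 (determinant 9)
G2 = [[6, 0, -3], [0, 3, 0], [-3, 0, 6]]    # metric induced on L2 (2x2 minors of G1)

# Expect Det[G1] times the identity matrix.
P = matmul(G1, matmul(T, matmul(G2, T)))
```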

Creating palettes of induced metrics


You can create a palette of all the induced metrics by declaring a metric and then entering
MetricPalette.

The induced metrics of a Euclidean vector 4-space are all Euclidean.

¯!4 ; MetricPalette

Metric Palette
L H1L
0

1 0 0 0
L 0 1 0 0
1 0 0 1 0
0 0 0 1
1 0 0 0 0 0
0 1 0 0 0 0
L 0 0 1 0 0 0
2 0 0 0 1 0 0
0 0 0 0 1 0
0 0 0 0 0 1
1 0 0 0
L 0 1 0 0
3 0 0 1 0
0 0 0 1
L H1L
4

Here are the metrics for a non-Euclidean vector 3-space.


¯!3 ; M = 88a, 0, n<, 80, b, 0<, 8n, 0, g<<;


DeclareMetric@MD; MetricPalette

Metric Palette
L H1L
0

a 0 n
L 0 b 0
1
n 0 g
ab 0 -b n
L 0 a g - n2 0
2
-b n 0 bg
L H a b g - b n2 L
3

Here are general metrics for a vector 3-space.

¯!3 ; DeclareMetric@.D; MetricPalette

Metric Palette
L H1L
0

"1,1 "1,2 "1,3


L "1,2 "2,2 "2,3
1
"1,3 "2,3 "3,3

-"21,2 + "1,1 "2,2 -"1,2 "1,3 + "1,1 "2,3 -"1,3 "2,2 + "1
L -"1,2 "1,3 + "1,1 "2,3 -"21,3 + "1,1 "3,3 -"1,3 "2,3 + "1
2
-"1,3 "2,2 + "1,2 "2,3 -"1,3 "2,3 + "1,2 "3,3 -"22,3 + "2,2

L I -"21,3 "2,2 + 2 "1,2 "1,3 "2,3 - "1,1 "22,3 - "21,2 "3,3 + "1,1 "2
3

You can paste any of these metric matrices into your notebook by clicking on the palette.

-.21,2 + .1,1 .2,2 -.1,2 .1,3 + .1,1 .2,3 -.1,3 .2,2 + .1,2 .2,3
-.1,2 .1,3 + .1,1 .2,3 -.21,3 + .1,1 .3,3 -.1,3 .2,3 + .1,2 .3,3
-.1,3 .2,2 + .1,2 .2,3 -.1,3 .2,3 + .1,2 .3,3 -.22,3 + .2,2 .3,3

5.8 Calculating Complements


Entering a complement
To enter the expression for the complement of a Grassmann expression X in GrassmannAlgebra,
enter GrassmannComplement[X]. For example to enter the expression for the complement of
xÔy, enter:

GrassmannComplement@x Ô yD
xÔy

Or, you can simply select the expression xÔy and click the É button on the GrassmannAlgebra
palette.

Note that an expression with a bar over it symbolizes another expression (the complement of the
original expression), not a GrassmannAlgebra operation on the expression. Hence no conversions
will occur by entering an overbarred expression.

GrassmannComplement will also work for lists, matrices, or tensors of elements. For example,
here is a matrix:

M = 88a e1 Ô e2 , b e2 <, 8-b e2 , c e3 Ô e1 <<; MatrixForm@MD


a e1 Ô e2 b e2
K O
-b e2 c e3 Ô e1

And here is its complement:

M êê MatrixForm
a e1 Ô e2 b e2
-b e2 c e3 Ô e1

Creating palettes of complements of basis elements


GrassmannAlgebra has a function ComplementPalette for tabulating basis elements and their
complements in the currently declared space and the currently declared metric. Once the palette is
generated you can paste from it by clicking on any of the buttons.

Ï Euclidean metric

Here we repeat the construction of the complement palette of Section 5.4 in a 2-space.


¯!2 ; ComplementPalette

Complement Palette
Basis Complement
1 e1 Ô e2
e1 e2
e2 -e1
e1 Ô e2 1

Palettes of complements for other bases are obtained by declaring the bases, then entering the
command ComplementPalette.

DeclareBasis@86, 7<D; ComplementPalette

Complement Palette
Basis Complement
1 #Ô$
# $
$ -#
#Ô$ 1

Ï Non-Euclidean metric

Ì 2-space

Here is the complement palette in a 2-space with general metric.


¯!2 ; DeclareMetric@gD; ComplementPalette

Complement Palette
Basis Complement
e1 Ôe2
1 ¯g

e2 g1,1 e1 g1,2
e1 -
¯g ¯g

e2 g1,2 e1 g2,2
e2 -
¯g ¯g

e1 Ô e2 ¯g

† The symbol ¯g represents the determinant of the metric tensor. It (or expressions involving it)
can be evaluated in any space with any declared metric using ToMetricElements.

ToMetricElements@¯gD

-g21,2 + g1,1 g2,2

The palette is arranged to 'collapse' the full determinant expression under the symbol ¯g since it is
often large. However if you wish, you can include the full form of ¯g in the palette by substituting
for it.

ComplementPalette ê. ¯g Ø ToMetricElements@¯gD

Complement Palette
Basis Complement
e1 Ôe2
1 -g21,2 +g1,1 g2,2

e2 g1,1 e1 g1,2
e1 -
-g21,2 +g1,1 g2,2 -g21,2 +g1,1 g2,2

e2 g1,2 e1 g2,2
e2 -
-g21,2 +g1,1 g2,2 -g21,2 +g1,1 g2,2

e1 Ô e2 -g21,2 + g1,1 g2,2
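The palette entries above amount to the rule that the complement of a vector v is ((g.v)1 e2 - (g.v)2 e1)/√(det g). The sketch below is plain Python with our own names (any symmetric positive-definite 2×2 metric assumed); it implements this rule and shows that the coefficient of e1Ôe2 in uÔv̄ is the metric inner product uT.g.v divided by √(det g), that is, uT.g.v times the unit 2-element e1Ôe2/√(det g).

```python
import math

# Complement of a vector v in a 2-space with symmetric metric g, read off the
# complement palette: e1-bar = (g11 e2 - g12 e1)/sqrt(det g), and similarly e2-bar.
def complement(v, g):
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    gv1 = g[0][0] * v[0] + g[0][1] * v[1]   # first component of g.v
    gv2 = g[1][0] * v[0] + g[1][1] * v[1]   # second component of g.v
    s = math.sqrt(det)
    return (-gv2 / s, gv1 / s)

# Coefficient of e1^e2 in the exterior product of two vectors.
def wedge2(a, b):
    return a[0] * b[1] - a[1] * b[0]

# An example non-Euclidean metric with determinant 5.
g = [[2, 1], [1, 3]]
```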

Ì 3-space

Here is the corresponding palette in 3-space.


¯A; DeclareMetric@gD; ComplementPalette

Complement Palette
Basis Complement
e1 Ôe2 Ôe3
1 ¯g

g1,3 e1 Ôe2 g1,2 e1 Ôe3 g1,1 e2 Ôe3


e1 - +
¯g ¯g ¯g

g2,3 e1 Ôe2 g2,2 e1 Ôe3 g1,2 e2 Ôe3


e2 - +
¯g ¯g ¯g

g3,3 e1 Ôe2 g2,3 e1 Ôe3 g1,3 e2 Ôe3


e3 - +
¯g ¯g ¯g

e3 I-g21,2 +g1,1 g2,2 M e2 Hg1,2 g1,3 -g1,1 g2,3 L e1 H-g1,3 g2,2 +g1,2 g2,3 L
e1 Ô e2 + +
¯g ¯g ¯g

e3 H-g1,2 g1,3 +g1,1 g2,3 L e2 Ig21,3 -g1,1 g3,3 M e1 H-g1,3 g2,3 +g1,2 g3,3 L
e1 Ô e3 + +
¯g ¯g ¯g

e3 H-g1,3 g2,2 +g1,2 g2,3 L e2 Hg1,3 g2,3 -g1,2 g3,3 L e1 I-g22,3 +g2,2 g3,3 M
e2 Ô e3 + +
¯g ¯g ¯g

e1 Ô e2 Ô e3 ¯g

The determinant of this metric tensor is

ToMetricElements@¯gD

-g21,3 g2,2 + 2 g1,2 g1,3 g2,3 - g1,1 g22,3 - g21,2 g3,3 + g1,1 g2,2 g3,3

Converting complements of basis elements


The GrassmannAlgebra function for converting complements of basis elements to other basis
elements is ConvertComplements. If the declared metric is non-Euclidean, the result will
include elements of the metric tensor.

Ï Example: Converting complements of individual basis elements in a Euclidean space

Here is the palette of complements for a Euclidean 3-space.


¯A; ComplementPalette

Complement Palette
Basis Complement
1 e1 Ô e2 Ô e3
e1 e2 Ô e3
e2 -He1 Ô e3 L
e3 e1 Ô e2
e1 Ô e2 e3
e1 Ô e3 -e2
e2 Ô e3 e1
e1 Ô e2 Ô e3 1

We could have obtained these complements in three steps:


1. Generate the basis elements (using LBasis)
2. Take their complements (using GrassmannComplement)
3. Convert the complements to other basis elements (using ConvertComplements).

LBasis
881<, 8e1 , e2 , e3 <, 8e1 Ô e2 , e1 Ô e3 , e2 Ô e3 <, 8e1 Ô e2 Ô e3 <<

GrassmannComplement@LBasisD

981<, 8e1 , e2 , e3 <, 8e1 Ô e2 , e1 Ô e3 , e2 Ô e3 <, 8e1 Ô e2 Ô e3 <=

ConvertComplements@GrassmannComplement@LBasisDD
88e1 Ô e2 Ô e3 <, 8e2 Ô e3 , -He1 Ô e3 L, e1 Ô e2 <, 8e3 , -e2 , e1 <, 81<<

Ï Example: Converting complements of individual basis elements in a non-Euclidean space

To convert complements of basis elements in a non-Euclidean space, we follow the same three steps
as in the previous section.

¯!3 ; DeclareMetric@88a, 0, n<, 80, b, 0<, 8n, 0, g<<D êê MatrixForm


a 0 n
0 b 0
n 0 g


Here is the palette of complements for this metric.

ComplementPalette

Complement Palette
Basis Complement
e1 Ôe2 Ôe3
1 ¯g

n e1 Ôe2 a e2 Ôe3
e1 +
¯g ¯g

b e1 Ôe3
e2 -
¯g

g e1 Ôe2 n e2 Ôe3
e3 +
¯g ¯g

b n e1 a b e3
e1 Ô e2 - +
¯g ¯g

I-a g+n2 M e2
e1 Ô e3
¯g

b g e1 b n e3
e2 Ô e3 -
¯g ¯g

e1 Ô e2 Ô e3 ¯g

We could have obtained these complements in three steps:


1. Generate the basis elements (using LBasis)
2. Take their complements (using GrassmannComplement)
3. Convert the complements to other basis elements (using ConvertComplements).

LBasis
881<, 8e1 , e2 , e3 <, 8e1 Ô e2 , e1 Ô e3 , e2 Ô e3 <, 8e1 Ô e2 Ô e3 <<

GrassmannComplement@LBasisD

981<, 8e1 , e2 , e3 <, 8e1 Ô e2 , e1 Ô e3 , e2 Ô e3 <, 8e1 Ô e2 Ô e3 <=

ConvertComplements@GrassmannComplement@LBasisDD
e1 Ô e2 Ô e3 n e1 Ô e2 a e2 Ô e3 b e1 Ô e3 g e1 Ô e2 n e2 Ô e3
:: >, : + ,- , + >,
¯g ¯g ¯g ¯g ¯g ¯g

b n e1 a b e3 I-a g + n2 M e2 b g e1 b n e3
:- + , , - >, : ¯g >>
¯g ¯g ¯g ¯g ¯g

The determinant of the metric tensor is


ToMetricElements@¯gD

a b g - b n2

† The palette is arranged to 'collapse' the full determinant expression under the symbol ¯g since it
is often large. However if you wish, you can include the full form of ¯g in the palette by
substituting for it.

ComplementPalette ê. ¯g Ø ToMetricElements@¯gD

Complement Palette
Basis Complement
e1 Ôe2 Ôe3
1 a b g-b n2

n e1 Ôe2 a e2 Ôe3
e1 +
a b g-b n2 a b g-b n2

b e1 Ôe3
e2 -
a b g-b n2

g e1 Ôe2 n e2 Ôe3
e3 +
a b g-b n2 a b g-b n2

b n e1 a b e3
e1 Ô e2 - +
a b g-b n2 a b g-b n2

I-a g+n2 M e2
e1 Ô e3
a b g-b n2

b g e1 b n e3
e2 Ô e3 -
a b g-b n2 a b g-b n2

e1 Ô e2 Ô e3 a b g - b n2

Ï Example: Verifying the complement of a complement axiom for basis elements

The complement of a complement of any element in 3-space is the element itself. We can verify
this for the basis elements by applying the complement conversion procedure twice. This example
verifies the result for a general metric in 3-space.

DeclareMetric@gD
88g1,1 , g1,2 , g1,3 <, 8g1,2 , g2,2 , g2,3 <, 8g1,3 , g2,3 , g3,3 <<


G = ConvertComplements@GrassmannComplement@LBasisDD
e1 Ô e2 Ô e3 g1,3 e1 Ô e2 g1,2 e1 Ô e3 g1,1 e2 Ô e3
:: >, : - + ,
¯g ¯g ¯g ¯g
g2,3 e1 Ô e2 g2,2 e1 Ô e3 g1,2 e2 Ô e3
- + ,
¯g ¯g ¯g
g3,3 e1 Ô e2 g2,3 e1 Ô e3 g1,3 e2 Ô e3
- + >,
¯g ¯g ¯g

e3 I-g21,2 + g1,1 g2,2 M e2 Hg1,2 g1,3 - g1,1 g2,3 L


: + +
¯g ¯g

e1 H-g1,3 g2,2 + g1,2 g2,3 L e3 H-g1,2 g1,3 + g1,1 g2,3 L


, +
¯g ¯g

e2 Ig21,3 - g1,1 g3,3 M e1 H-g1,3 g2,3 + g1,2 g3,3 L


+ ,
¯g ¯g

e3 H-g1,3 g2,2 + g1,2 g2,3 L e2 Hg1,3 g2,3 - g1,2 g3,3 L


+ +
¯g ¯g

e1 I-g22,3 + g2,2 g3,3 M


>, : ¯g >>
¯g

Taking the complement of these and converting them again returns us to the original list of basis
elements.

ConvertComplements@GrassmannComplement@GDD
881<, 8e1 , e2 , e3 <, 8e1 Ô e2 , e1 Ô e3 , e2 Ô e3 <, 8e1 Ô e2 Ô e3 <<
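The round trip just verified can also be seen as a signed permutation of the eight basis elements. Here is a plain Python sketch for the Euclidean metric only (the table transcribes the Euclidean complement palette of this chapter; the dictionary encoding is ours): applying the complement twice returns every basis element with sign +1, as it must in a 3-space.

```python
# Euclidean complement in 3-space as a signed permutation of basis elements,
# transcribed from the complement palette: each entry maps a basis element
# to (sign, complement).
complement = {
    '1':    (1, 'e123'),
    'e1':   (1, 'e23'),
    'e2':   (-1, 'e13'),
    'e3':   (1, 'e12'),
    'e12':  (1, 'e3'),
    'e13':  (-1, 'e2'),
    'e23':  (1, 'e1'),
    'e123': (1, '1'),
}

# Complement of a complement: compose the table with itself, tracking signs.
def double_complement(elem):
    s1, b1 = complement[elem]
    s2, b2 = complement[b1]
    return (s1 * s2, b2)
```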

Ï Example: Converting complements of basis elements in general expressions

† ConvertComplements will take any Grassmann expression and convert any complements of
basis elements in it. Suppose we are in a 3-space with a Euclidean metric.

¯A; ConvertComplementsB2 + a b e2 Ô e2 Ô c e3 F

2 + a b c e1 Ô e2

If the metric is more general we get the more general result.


DeclareMetric@gD; F = ConvertComplementsB2 + a b e2 Ô e2 Ô c e3 F

a b c g2,2 g3,3 e1 Ô e2 a b c g2,2 g2,3 e1 Ô e3 a b c g1,3 g2,2 e2 Ô e3


2+ - +
¯g ¯g ¯g

Simplify@FD

2 ¯g + a b c g2,2 Hg3,3 e1 Ô e2 - g2,3 e1 Ô e3 + g1,3 e2 Ô e3 L

¯g

† ConvertComplements will also apply some simplification rules to expressions involving


non-basis elements.

¯A; ConvertComplementsB2 + a b e2 Ô e2 Ô c x F

2+abcx

DeclareMetric@gD; ConvertComplementsB2 + a b e2 Ô e2 Ô c x F

2 + a b c x g2,2

Simplifying expressions involving complements


ConvertComplements has been specifically designed to convert expressions involving
complements of basis elements according to the currently declared metric of the space. Generally
the results will differ when the metrics differ.

GrassmannSimplify, on the other hand, only includes simplification rules which are
independent of the metric. Generally, these rules apply when an expression involving complements
can be expressed without the complements, or can be expressed by another form deemed simpler,
often involving the interior product (which will be discussed in the next chapter).

† In the example discussed in the previous section we see that using GrassmannSimplify
gives a result involving the interior product e2 û e2 (in this case a scalar product) which is
valid independent of the metric.

¯%B2 + a b e2 Ô e2 Ô c x F

2 + a b c He2 û e2 L x

In the case of a Euclidean metric, the scalar product e2 û e2 in the result is equal to 1. In the case
of a general metric it is equal to g2,2 .

† Here are some more examples in 3-space.


¯A; ¯%B:x, x, x Ó y, x Ô y, Hu Ô vL Ó y, u Ô v Ô y,

Hu Ô vL Ó y Ó z, u Ô v Ô y Ô z, Hx Ô yL û z, Hx Ô yL Ó z>F

8x, x, x û y, x û y, u Ô v û y, u Ô v û y,
u Ô v û y Ô z, u Ô v û y Ô z, Hx û yL z, x Ô y Ô z<

The dimension of the space may change the sign of some expressions, or indeed make them zero.
Here are the same examples in 4-space, giving two sign-changes, and one zero.

¯!4 ; ¯%B:x, x, x Ó y, x Ô y, Hu Ô vL Ó y, u Ô v Ô y,

Hu Ô vL Ó y Ó z, u Ô v Ô y Ô z, Hx Ô yL û z, Hx Ô yL Ó z>F

8-x, x, x û y, -Hx û yL, u Ô v û y,


u Ô v û y, u Ô v û y Ô z, u Ô v û y Ô z, Hx û yL z, 0<

† GrassmannSimplify will also handle expressions with symbolic grades.

TableB¯!n ; :n, ¯%B:a, a Ô b, a Ó b>F>, 8n, 2, 4<F


m m k m k

::2, :H-1Lm a, H-1Lm a û b , H-1Lk+m a Ó b>>,


m m k m k

:3, :a, a û b, a Ó b>>, :4, :H-1Lm a, H-1Lm a û b , H-1Lk+m a Ó b>>>


m m k m k m m k m k

Converting expressions involving complements to specified forms


In the usual way of developing the algebra, the most basic product is the exterior product. The
regressive product is then developed as a "dual" product to the exterior product (that is, it has the
same axiom structure, but different interpretation on its symbols). The introduction of a
complement operation (mapping) then enables the definition of the interior product (and its special
cases the inner product and the scalar product). The interior, inner and scalar products are
discussed in the next chapter.

The exterior and regressive products are taken as basic. If a complement operation (or equivalently,
a metric) is introduced (defined) on the space, then the exterior and regressive products can be
expressed in terms of each other and the complement. The interior product operation can also be
expressed in terms either of the exterior product and the complement, or else the regressive product
and the complement. To complete the suite of conversion possibilities, the exterior and regressive
products can be expressed in terms of the interior product and the complement, and the
complement of an element can be expressed as its interior product with the unit n-element.

In this chapter, we discuss only the conversions between the exterior product, the regressive
product, and the complement. The examples below show the effects of these conversions on a
single expression in both 3 and 4 dimensional spaces. In a 3-space, the complement of the
complement of an element of any grade is the element itself, hence these conversions in a 3-space
show no extra signs. In spaces of other dimension however, the results may show signs dependent
on the grades of the elements. This is exemplified by the 4-dimensional cases.


Ï Example: Conversions in 3 and 4 dimensional spaces

As an example expression we take the regressive product of two exterior products.

¯A; G = a_m Ô b_k Ó g_p Ô d_q ;

Converting the exterior products to regressive products and complements yields a possible sign
change in 4-space depending on the grades of the elements.

8¯!3 ; ExteriorToRegressive@GD, ¯!4 ; ExteriorToRegressive@GD<

:a_m Ó b_k Ó g_p Ó d_q, H-1Lk+m a_m Ó b_k Ó H-1Lp+q g_p Ó d_q >

Converting the regressive products to exterior products also yields a possible sign change.

8¯!3 ; RegressiveToExterior@GD, ¯!4 ; RegressiveToExterior@GD<

:a_m Ô b_k Ô g_p Ô d_q, H-1Lk+m+p+q a_m Ô b_k Ô g_p Ô d_q>

Converting regressive products of basis elements in a metric space


In a metric space we can compute the regressive product of basis elements in terms of exterior
products of basis elements.

Ï Example: The regressive product of two bivectors in Euclidean space.

Here is the regressive product of two bivectors

¯A; Z = ¯%a Ó ¯%b


Ha1 e1 Ô e2 + a2 e1 Ô e3 + a3 e2 Ô e3 L Ó Hb1 e1 Ô e2 + b2 e1 Ô e3 + b3 e2 Ô e3 L

Expanding and simplifying the product gives

Zs = ¯%@ZD
H-a2 b1 + a1 b2 L e1 Ô e2 Ó e1 Ô e3 +
H-a3 b1 + a1 b3 L e1 Ô e2 Ó e2 Ô e3 + H-a3 b2 + a2 b3 L e1 Ô e3 Ó e2 Ô e3

Now convert the regressive products to exterior products and complements.


Ze = RegressiveToExterior@ZD

a1 e1 Ô e2 Ô b1 e1 Ô e2 + a1 e1 Ô e2 Ô b2 e1 Ô e3 + a1 e1 Ô e2 Ô b3 e2 Ô e3 +
a2 e1 Ô e3 Ô b1 e1 Ô e2 + a2 e1 Ô e3 Ô b2 e1 Ô e3 + a2 e1 Ô e3 Ô b3 e2 Ô e3 +
a3 e2 Ô e3 Ô b1 e1 Ô e2 + a3 e2 Ô e3 Ô b2 e1 Ô e3 + a3 e2 Ô e3 Ô b3 e2 Ô e3

Finally, convert the complements of the basis elements.

ConvertComplements@Ze D
H-a2 b1 + a1 b2 L e1 + H-a3 b1 + a1 b3 L e2 + H-a3 b2 + a2 b3 L e3
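This final result can be checked numerically. A sketch in plain Python (Euclidean metric assumed; a bivector is identified with its complement vector, so that by the complement axiom the regressive product of two bivectors becomes a cross product of vectors; the function names are illustrative):

```python
def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def bivector_complement(a):
    # Euclidean complement of a1 e1^e2 + a2 e1^e3 + a3 e2^e3,
    # using e1^e2 -> e3, e1^e3 -> -e2, e2^e3 -> e1.
    a1, a2, a3 = a
    return [a3, -a2, a1]

def regressive_bivectors(a, b):
    # Complement axiom: complement(A v B) = complement(A) ^ complement(B);
    # in Euclidean 3-space the wedge of two vectors corresponds to their
    # cross product, and the double complement is the identity.
    return cross(bivector_complement(a), bivector_complement(b))

a = [2, 5, 7]   # a1, a2, a3
b = [3, 1, 4]   # b1, b2, b3
print(regressive_bivectors(a, b))
```

The result agrees component by component with the expression above: H-a2 b1 + a1 b2, -a3 b1 + a1 b3, -a3 b2 + a2 b3L.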

Ï Example: The regressive product of two bivectors in non-Euclidean space.


Take the same regressive product of bivectors as in the previous example. Declare a non-Euclidean
metric.

DeclareMetric@gD
88g1,1 , g1,2 , g1,3 <, 8g1,2 , g2,2 , g2,3 <, 8g1,3 , g2,3 , g3,3 <<

Then convert the complements of the basis elements.

ConvertComplements@Ze D

¯g H-a2 b1 + a1 b2 L e1 +

¯g H-a3 b1 + a1 b3 L e2 + ¯g H-a3 b2 + a2 b3 L e3

Ï Example: The regressive product of three elements in Euclidean 3-space

Here is a pair of regressive products of three basis elements in a 3-space.

¯A; X = 8He1 Ô e2 L Ó He2 Ô e3 L Ó He3 Ô e1 L, He1 Ô e2 Ô e3 L Ó He1 Ô e2 Ô e3 L Ó e1 <;

Convert the regressive products to exterior products and complements.

Xe = RegressiveToExterior@XD

9e1 Ô e2 Ô e2 Ô e3 Ô e3 Ô e1 , e1 Ô e2 Ô e3 Ô e1 Ô e2 Ô e3 Ô e1 =

Convert the complements. ConvertComplements assumes the currently declared metric, in
this case the default Euclidean metric.

ConvertComplements@Xe D
81, e1 <


Ï Example: The regressive product of three elements in non-Euclidean 3-space

We now change the metric to a general metric.

DeclareMetric@gD
88g1,1 , g1,2 , g1,3 <, 8g1,2 , g2,2 , g2,3 <, 8g1,3 , g2,3 , g3,3 <<

Applying ConvertComplements assumes this metric, and shows us that both results now have
the determinant of the metric tensor as multipliers.

ConvertComplements@Xe D
8¯g, ¯g e1 <

† Note that because there were two regressive product operations, the metric factor ¯g was
generated twice. In the previous example (the regressive product of two bivectors) there was
one regressive product operation, so it was generated only once.

5.9 Complements in a vector space

The Euclidean complement in a vector 2-space


Consider a vector x in a 2-dimensional Euclidean vector space expressed in terms of basis vectors
e1 and e2 .

x ã a e1 + b e2
Since this is a Euclidean vector space, we can depict the basis vectors at right angles to each other.
But note that since it is a vector space, we do not depict an origin.


Complementing a two-dimensional vector

The complement of x is given by:

x ã a e1 + b e2 ã a e2 - b e1
Remember, the Euclidean complement of a basis element is (formally identical to) its cobasis
element, and a basis element and its cobasis element are defined by their exterior product being the
basis n-element, in this case e1 Ôe2 .

It is clear from the depiction above that x and x are at right angles to each other, thus verifying our
geometric interpretation of the algebraic notion of orthogonality: a simple element and its
complement are orthogonal.

Taking the complement of x gives -x:

x ã a e2 - b e1 ã -a e1 - b e2 ã -x
Or, we could have used the complement of a complement axiom:

x ã H-1L1 H2-1L x ã -x
Continuing to take complements we find that we eventually return to the original element.

x ã -x

xãx
In a vector 2-space, taking the complement of a vector is thus equivalent to rotating the vector by
one right angle counterclockwise.
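This quarter-turn behaviour is easy to check numerically. A minimal sketch in Python (the rotation claim is specific to the Euclidean complement):

```python
def complement2(v):
    # Euclidean complement in a vector 2-space: a e1 + b e2 -> a e2 - b e1,
    # i.e. the component form (a, b) -> (-b, a), a quarter turn counterclockwise.
    a, b = v
    return [-b, a]

x = [3, 4]
print(complement2(x))                   # one right angle counterclockwise
print(complement2(complement2(x)))      # -x, the complement of the complement
v = x
for _ in range(4):                      # four complements return the element
    v = complement2(v)
print(v)
```

The map (a, b) -> (-b, a) is exactly multiplication by the rotation matrix for a positive right angle, which is the geometric statement above.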


The non-Euclidean complement in a vector 2-space


Suppose now that x is a vector in a 2-dimensional non-Euclidean vector space, expressed
similarly in terms of basis vectors e1 and e2 .

x ã a e1 + b e2
Since this is a non-Euclidean vector space, the basis vectors are generally not at right angles to
each other. We declare a general metric, and display the Complement and Metric palettes.

¯!2 ; DeclareMetric@gD;
Grid@88ComplementPalette, MetricPalette<<D

Complement Palette                           Metric Palette
Basis      Complement                        L0 : H1L
1          He1 Ô e2 L ê ¯g
e1         Hg1,1 e2 - g1,2 e1 L ê ¯g         L1 : K g1,1 g1,2 ; g1,2 g2,2 O
e2         Hg1,2 e2 - g2,2 e1 L ê ¯g
e1 Ô e2    ¯g                                L2 : I-g1,2 ^2 + g1,1 g2,2 M

The complement of x is given by:

x ã a e1 + b e2 ã ConvertComplements@a e1 + b e2 D
x ã a e1 + b e2 ã HHa g1,1 + b g1,2 L e2 + H-a g1,2 - b g2,2 L e1 L ê ¯g

Taking the complement of x still gives -x:

x ã a e1 + b e2 ã ConvertComplementsAa e1 + b e2 E

x ã a e1 + b e2 ã -a e1 - b e2

And taking the complement of x still gives -x:


x ã a e1 + b e2 ã ConvertComplementsBa e1 + b e2 F

x ã a e1 + b e2 ã HH-a g1,1 - b g1,2 L e2 + Ha g1,2 + b g2,2 L e1 L ê ¯g

Ï Example

Suppose we take a simple metric and calculate x ã He1 + e2 L ê 2 and its complements.

¯!2 ; M = DeclareMetric@882, 1<, 81, 1<<D;


Grid@88ComplementPalette, MetricPalette<<D

Complement Palette              Metric Palette
Basis      Complement           L0 : H1L
1          e1 Ô e2
e1         -e1 + 2 e2           L1 : K 2 1 ; 1 1 O
e2         -e1 + e2
e1 Ô e2    1                    L2 : H1L

x ã He1 + e2 L ê 2 ;

x ã ConvertComplements@He1 + e2 L ê 2D

x ã -e1 + H3 ê 2L e2

x ã ConvertComplements@He1 + e2 L ê 2D

x ã -He1 ê 2L - He2 ê 2L

2009 9 3
Grassmann Algebra Book.nb 388

x ã ConvertComplements@He1 + e2 L ê 2D

x ã e1 - H3 ê 2L e2

Complementing a two-dimensional vector in a non-Euclidean space

We can see that although the basis vectors are neither unit vectors nor orthogonal, the
complement behaves just as in the Euclidean case: taking it twice returns -x, and taking it
twice more returns x.
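The palette's complements can be reproduced directly from the metric: the complement of ei is H1 ê ¯gLHgi,1 e2 - gi,2 e1 L. A Python sketch under that assumption (function names illustrative, not part of GrassmannAlgebra):

```python
import math

def complement_matrix(g):
    # Columns are the complements of e1 and e2:
    # complement(e_i) = (1/sqrt(det g)) * (g_i1 * e2 - g_i2 * e1),
    # i.e. the metric applied to the cobasis elements e2 and -e1.
    det = g[0][0]*g[1][1] - g[0][1]*g[1][0]
    s = 1.0 / math.sqrt(det)
    c1 = [-s*g[0][1], s*g[0][0]]      # complement of e1
    c2 = [-s*g[1][1], s*g[1][0]]      # complement of e2
    return [[c1[0], c2[0]], [c1[1], c2[1]]]

def apply(m, v):
    return [m[0][0]*v[0] + m[0][1]*v[1],
            m[1][0]*v[0] + m[1][1]*v[1]]

g = [[2, 1], [1, 1]]            # the metric of the example; det g = 1
C = complement_matrix(g)

x = [0.5, 0.5]                  # (e1 + e2)/2
print(apply(C, x))              # [-1.0, 1.5], i.e. -e1 + (3/2) e2
print(apply(C, apply(C, x)))    # [-0.5, -0.5], i.e. -x
```

Applying the matrix twice gives -x for any x, which is the 2-space double-complement sign seen above.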

The standard conceptual vehicle for exploring the notion of orthogonality is the interior product
and its special cases, the inner and scalar products. These we will develop in the next chapter
where we will revisit examples of this type, and connect them more closely with familiar concepts.

The Euclidean complement in a vector 3-space


We now explore the Euclidean complement of a vector x in a vector 3-space. This differs
somewhat from the vector 2-space case, since the complement of a vector is a bivector, and the
complement of a bivector is a vector.

x ã a e1 + b e2 + c e3

For reference we display the complement palette.


¯!3 ; ComplementPalette

Complement Palette
Basis Complement
1 e1 Ô e2 Ô e3
e1 e2 Ô e3
e2 -He1 Ô e3 L
e3 e1 Ô e2
e1 Ô e2 e3
e1 Ô e3 -e2
e2 Ô e3 e1
e1 Ô e2 Ô e3 1

The complement of x is therefore given by:

x ã a e1 + b e2 + c e3 ã a e2 Ô e3 - b e1 Ô e3 + c e1 Ô e2

Remember, the Euclidean complement of a basis element is (formally identical to) its cobasis
element, and a basis element and its cobasis element are defined by their exterior product being the
basis n-element, in this case e1 Ô e2 Ô e3 .

Taking the complement of x gives x. Note that this result differs in sign from the two-dimensional
case.

x ã a e2 Ô e3 - b e1 Ô e3 + c e1 Ô e2 ã a e1 + b e2 + c e3 ã x

Or, we could have used the complement of a complement axiom:

x ã H-1L1 H3-1L x ã x
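The sign H-1Lm Hn-mL produced by a double complement can be verified combinatorially for any small dimension. A sketch in Python, assuming only the Euclidean rule that the complement of a basis m-element is its signed cobasis element:

```python
from itertools import combinations

def perm_sign(seq):
    # Sign of the permutation taking sorted(seq) to seq (inversion count).
    sign = 1
    s = list(seq)
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            if s[i] > s[j]:
                sign = -sign
    return sign

def complement(subset, n):
    # Euclidean complement of the basis m-element e_subset in an n-space:
    # the cobasis element, signed so that e_subset ^ complement = e_1...e_n.
    rest = tuple(i for i in range(1, n + 1) if i not in subset)
    return perm_sign(subset + rest), rest

# Verify complement(complement(e_S)) = (-1)^(m(n-m)) e_S for n = 1..6.
for n in range(1, 7):
    for m in range(0, n + 1):
        for subset in combinations(range(1, n + 1), m):
            s1, rest = complement(subset, n)
            s2, back = complement(rest, n)
            assert back == subset and s1 * s2 == (-1) ** (m * (n - m))
print("double-complement sign verified for n = 1..6")
```

For n = 3 the exponent m(3 - m) is always even, which is why no sign appears in this section, while for n = 2 and m = 1 it is odd, giving the -x of the previous section.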
Since in a vector 3-space the complement of the complement of a vector, or of a bivector,
returns us to the original element, continuing to take complements as we did in the
two-dimensional case leads to no further elements.

xãxãx xãx
The bivector x is a sum of components, each in one of the coordinate bivectors. We have already
shown in Chapter 2: The Exterior Product that a bivector in a 3-space is simple, and hence can be
factored into the exterior product of two vectors. It is in this form that the bivector will be most
easily interpreted as a geometric entity. There is an infinity of vectors that are orthogonal to the
vector x, but they are all contained in the bivector x. You can express the bivector x as the
exterior product of two vectors, again in an infinity of ways. The GrassmannAlgebra function
ExteriorFactorize finds one of them for you. Others may be obtained by adding vectors
from the first factor to the second factor, or vice versa.


ExteriorFactorize@a e2 Ô e3 - b e1 Ô e3 + c e1 Ô e2 D

c He1 - Ha e3 L ê cL Ô He2 - Hb e3 L ê cL

The vector x is orthogonal to each of these vectors since the exterior product of x with each of
them is zero.
¯%B:He1 - Ha e3 L ê cL Ô Ha e2 Ô e3 - b e1 Ô e3 + c e1 Ô e2 L,
He2 - Hb e3 L ê cL Ô Ha e2 Ô e3 - b e1 Ô e3 + c e1 Ô e2 L>F
80, 0<

A vector and its bivector as complements in a vector 3-space

The complements of each of these vectors will be bivectors which contain the original vector x.
We can verify this easily with ConvertComplements and GrassmannSimplify.

¯%AConvertComplementsA
9Hc e1 - a e3 L Ô Ha e1 + b e2 + c e3 L, Hc e2 - b e3 L Ô Ha e1 + b e2 + c e3 L=EE

80, 0<
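These orthogonality and factorization claims can be checked numerically. A Python sketch (Euclidean metric assumed; the complement bivector of x is identified with the vector x itself, and the wedge of two vectors with their cross product):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

a, b, c = 2.0, 3.0, 6.0
x = [a, b, c]

# Two factors of the complement bivector of x
# (one of the infinity of possible factorizations).
f1 = [c, 0.0, -a]          # c e1 - a e3
f2 = [0.0, 1.0, -b / c]    # e2 - (b/c) e3

# Every vector in the complement bivector is orthogonal to x ...
print(dot(x, f1), dot(x, f2))      # 0.0 0.0
# ... and f1 ^ f2, read as a cross product under the Euclidean
# complement, recovers the original vector x.
print(cross(f1, f2))               # [2.0, 3.0, 6.0]
```

Adding any multiple of f1 to f2 (or vice versa) leaves the cross product, and hence the bivector, unchanged, which is the "infinity of ways" remark above.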


The non-Euclidean complement in a vector 3-space


Suppose now that x is a vector in a 3-dimensional non-Euclidean vector space.

x ã a e1 + b e2 + c e3

¯!3 ; DeclareMetric@gD;

The complement of x is given by:

x ã a e1 + b e2 + c e3 ã ConvertComplements@a e1 + b e2 + c e3 D
x ã a e1 + b e2 + c e3 ã
HHa g1,3 + b g2,3 + c g3,3 L e1 Ô e2 + H-a g1,2 - b g2,2 - c g2,3 L e1 Ô e3 +
Ha g1,1 + b g1,2 + c g1,3 L e2 Ô e3 L ê ¯g

Taking the complement of x gives x:

x ã a e1 + b e2 + c e3 ã ConvertComplementsAa e1 + b e2 + c e3 E

x ã a e1 + b e2 + c e3 ã a e1 + b e2 + c e3

Ï Example

Suppose we take the metric used in the example above on the non-Euclidean complement in a
vector 2-space and extend it orthogonally to the third dimension.


¯!3 ; M = DeclareMetric@882, 1, 0<, 81, 1, 0<, 80, 0, 1<<D;


Grid@88ComplementPalette, MetricPalette<<D

Complement Palette                        Metric Palette
Basis           Complement                L0 : H1L
1               e1 Ô e2 Ô e3
e1              -He1 Ô e3 L + 2 e2 Ô e3   L1 : K 2 1 0 ; 1 1 0 ; 0 0 1 O
e2              -He1 Ô e3 L + e2 Ô e3
e3              e1 Ô e2                   L2 : K 1 0 0 ; 0 2 1 ; 0 1 1 O
e1 Ô e2         e3
e1 Ô e3         e1 - 2 e2                 L3 : H1L
e2 Ô e3         e1 - e2
e1 Ô e2 Ô e3    1

We now calculate the complement of the same vector x ã He1 + e2 L ê 2 and verify that the
complement of its complement is equal to x.
x ã He1 + e2 L ê 2 ;

x ã ConvertComplements@He1 + e2 L ê 2D

x ã -He1 Ô e3 L + H3 ê 2L e2 Ô e3

x ã ConvertComplements@He1 + e2 L ê 2D

x ã He1 ê 2L + He2 ê 2L

If we factorize the bivector x using ExteriorFactorize, we obtain

ExteriorFactorizeB-He1 Ô e3 L + H3 ê 2L e2 Ô e3 F

-He1 - H3 ê 2L e2 L Ô e3


and observe again that the vector x is orthogonal to each of these factors, since clearly the exterior
product of the complement of x with each of these factors is zero.

¯%B:He1 - H3 ê 2L e2 L Ô H-He1 Ô e3 L + H3 ê 2L e2 Ô e3 L,
e3 Ô H-He1 Ô e3 L + H3 ê 2L e2 Ô e3 L>F
80, 0<

5.10 Complements in a bound space

Metrics in a bound space


If we want to interpret one element of a linear space as an origin point, we need to consider which
forms of metric make sense, or are useful in some degree. We have seen above that a Euclidean
metric makes sense both in vector spaces (as we expected) and in a bound space where one
basis element is interpreted as the origin point and the others as basis vectors.

More general metrics make sense in vector spaces, because the entities all have the same
interpretation. This leads us to consider hybrid metrics in bound spaces in which the vector
subspace has a general metric but the origin is orthogonal to all vectors. We can therefore adopt a
Euclidean metric for the origin, and a more general metric for the vector subspace.

1 0 ! 0
0 g11 ! g1 n
Gij ã 5.42
ª ª ! ª
0 gn1 ! gn n

These hybrid metrics will be useful later when we discuss screw algebra in Chapter 7 and
mechanics in Chapter 8. Of course the permitted transformations on a space with such a metric
must be restricted to those which maintain the orthogonality of the origin point to the vector
subspace.
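Because the hybrid metric is block diagonal, its determinant is simply the determinant of the vector-subspace metric, so the factor built from the determinant serves both spaces. A small Python check (the metric values are arbitrary; the determinant routine is a plain Laplace expansion):

```python
def det(m):
    # Laplace expansion along the first row (fine for small matrices).
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

g = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]   # a metric on the vector subspace

# Hybrid metric: the origin orthogonal to all vectors, with unit measure.
G = [[1] + [0] * len(g)] + [[0] + row for row in g]

print(det(G), det(g))   # equal: the origin row contributes a factor of 1
```

This is why, in what follows, the same constant can be used for the n-plane and for its vector subspace.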

It will be convenient in what follows to refer to a bound space with underlying linear space of
dimension n+1 as an n-plane.

All the formulae for complements in the vector subspace still hold. In an n-plane, the vector
subspace has dimension n, and hence the complement operation for the n-plane and its vector
subspace will not be the same.

We will denote the complement operation in the vector subspace by using an overvector · instead
of an overbar ·, and call it a vector space complement. The vector space complement (overvector)
operation can be entered from the Basic Operations palette by using the button under the
complement (overbar) button.

The constant ! will be the same in both spaces and still equate to the inverse of the square root of
the determinant g of the metric tensor. Hence it is easy to show that, in a bound space with the
metric tensor above:


1 ã H1 ê ¯gL ¯! Ô e1 Ô e2 Ô … Ô en        5.43

1 ã H1 ê ¯gL e1 Ô e2 Ô … Ô en        5.44

1 ã ¯! Ô 1        5.45

¯! ã 1        5.46


In this interpretation 1 will be called the unit n-plane, while 1 will be called the unit n-vector. The
unit n-plane is the unit n-vector bound through the origin.

The complement of an m-vector


If we define Åei as the cobasis element of ei in the vector basis of the vector subspace (note that
this is denoted by an 'underbracket' rather than an 'underbar'), then the formula for the complement
of a basis vector in the vector subspace is:

ei ã H1 ê ¯gL ‚ j=1…n gij Åej        5.47

In the n-plane, the complement ei of a basis vector ei is given by the metric of the n-plane as:

ei ã H1 ê ¯gL ‚ j=1…n gij J-¯" Ô Åej N

Hence by using the formula above we have that:

ei ã -¯! Ô ei 5.48

More generally we have that for a basis m-vector, its complement in the n-plane is related to its
complement in the vector subspace by:


ei ã H-1Lm ¯! Ô ei        5.49

And for a general m-vector:


a ã H-1Lm ¯! Ô a        5.50

† The vector space complement of any exterior product involving the origin ¯" is undefined.

Products of vectorial elements in a bound space


In the previous section we showed that, in a bound space, the complement of an m-vector will
contain the origin as a factor.

a ã H-1Lm ¯" Ô a

Hence the exterior product of two such complements will be zero.

a Ô b ã KH-1Lm ¯" Ô aO Ô KH-1Lk ¯" Ô bO ã 0

The complement of this equation is also zero, showing that the regressive product of vectorial
elements in a bound space is zero.

a Ô b ã a Ó b ã 0 ã 0

For a regressive product to be non-zero, it needs to be able to assemble a copy of the unit n-
element out of its factors. If it cannot, the regressive product is zero. In this case, the space is a
bound space whose unit n-element contains the origin ¯" but the origin is missing from both
factors of the product, hence it is zero.

It is important to note that the complement axiom is only defined for complements in the full
space. Thus, we cannot say that

a Ô b ã a Ó b

A simple example using basis elements in a Euclidean point 3-space will suffice. Substituting basis
elements and using the definitions for the vector-space complement shows that the left side is non-
zero, while the right side is zero.

e1 Ô e2 ã e3

e1 Ó e2 ã He2 Ô e3 L Ó He3 Ô e1 L ã 0


The complement of an element bound through the origin


To express the complement of a bound element in the n-plane in terms of the complement in the
vector subspace we can use the complement axiom.

¯" Ô ei ã ¯" Ó ei ã 1 Ó I-¯" Ô ei M

1 1 n
ã e1 Ô e2 Ô ! Ô en Ó -¯" Ô ‚ gij ej
¯g ¯g j=1 Å

1 n
ã- ‚ gij Jej Ô ej N Ó J¯" Ô ej N
¯g Å Å
j=1

1 n
ã- ‚ gij Jej Ô ¯" Ô ej N Ó ej
¯g j=1 Å Å

1 n
ã ‚ gij 1 Ó ej
¯g j=1 Å

1 n
ã 1Ó ‚ gij ej ã 1 Ó ei ã ei
¯g j=1 Å

¯! Ô ei ã ei 5.51

More generally, we can show that for a general m-vector, the complement of the m-vector bound
through the origin in an n-plane is simply the complement of the m-vector in the vector subspace of
the n-plane.


¯! Ô a ã a        5.52

The complement of the complement of an m-vector


Let n be the dimension of a bound space. Then n-1 is the dimension of the vector subspace.
Let a be an m-vector in the bound space. Then the complement of the complement of a is given by

a ã H-1Lm Hn-mL a

The vector space complement of the vector space complement of a in the vector subspace is

a ã H-1Lm Hn-1-mL a

Hence

a ã H-1Lm a        5.53

We can verify this using the formulae derived in the previous section.

a ã H-1Lm ¯" Ô a

a ã H-1Lm ¯" Ô a ã H-1Lm a

Calculating with vector space complements

Ï Entering a vector space complement


To enter a vector space complement of a Grassmann expression X in GrassmannAlgebra you can
either use the GrassmannAlgebra palette by selecting the expression X and clicking the button

É , or simply enter OverVector[X] directly.

OverVector@e1 Ô e2 D
e1 Ô e2

Ï Simplifying a vector space complement


If the basis of your currently declared space does not contain the origin ¯", then the vector space
complement (OverVector) operation is equivalent to the normal (OverBar) operation, and
ConvertComplements will treat them as the same.

¯!3 ; ConvertComplementsA9e1 , x, e1 , x =E

8e2 Ô e3 , x, e2 Ô e3 , x<

If, on the other hand, the basis of your currently declared space does contain the origin ¯", then
ConvertComplements will convert any expressions containing OverVector complements to
their equivalent OverBar forms.

¯"3 ; ConvertComplementsA9e1 , x, e1 , x=E

9e2 Ô e3 , ¯# Ô x, -H¯# Ô e2 Ô e3 L, x=

Ï The vector space complement of the origin

2009 9 3
Grassmann Algebra Book.nb 398

The vector space complement of the origin


Note that the vector space complement of the origin, or any exterior product of elements involving
the origin is undefined, and will be left unevaluated by ConvertComplements.

¯"3 ; ConvertComplementsB:¯", ¯" Ô e1 , ¯" + x, ¯" + e1 >F

9¯#, ¯# Ô e1 , ¯# Ô x + ¯#, ¯# + e2 Ô e3 =

5.11 Complements of bound elements

The Euclidean complement of a point in the plane


Now suppose we are working in the Euclidean plane with a basis of one origin point ¯" and two
basis vectors e1 and e2 . Let us declare this basis then call for a palette of basis elements and their
complements.

¯A; ¯"2 ; ComplementPalette

Complement Palette
Basis Complement
1 ¯! Ô e1 Ô e2
¯! e1 Ô e2
e1 -H¯! Ô e2 L
e2 ¯! Ô e1
¯! Ô e1 e2
¯! Ô e2 -e1
e1 Ô e2 ¯!
¯! Ô e1 Ô e2 1

The complement table tells us that each basis vector is orthogonal to the axis involving the other
basis vector. For example, e2 is orthogonal to the axis ¯" Ô e1 .

Suppose now we take a general vector x ã a e1 + b e2 . The complement of this vector is:

x ã a e1 + b e2 ã - a ¯" Ô e2 + b ¯" Ô e1 ã ¯" Ô Hb e1 - a e2 L


Again, this is an axis (bound vector) through the origin orthogonal to the vector x.

Now let us take a general point P ã ¯" + x and explore what element is orthogonal to this point.

P ã ¯" + x ã e1 Ô e2 + ¯" Ô Hb e1 - a e2 L
The effect of adding the bivector e1 Ôe2 to x is to shift the bound vector ¯" Ô Hb e1 - a e2 L
parallel to itself. We can factor e1 Ôe2 into the exterior product of two vectors, one parallel to
b e1 - a e2 .

e1 Ô e2 ã z Ô Hb e1 - a e2 L

Now we can write P as:

P ã H¯" + zL Ô Hb e1 - a e2 L

The vector z is the position vector of any point on the line defined by the bound vector P. A
particular point of interest is the point on the line closest to the point P or the origin. The position
vector z would then be a vector orthogonal to the direction of the line. Thus we can write e1 Ôe2 as:

e1 Ô e2 ã -HHa e1 + b e2 L Ô Hb e1 - a e2 LL ê Ia2 + b2 M
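Both facts used here, this factorization of e1 Ô e2 and (below) the unit product of distances for inverse points, can be checked numerically. A Python sketch with the 2-space wedge coefficient written out directly:

```python
import math

def wedge2(u, v):
    # Coefficient of e1 ^ e2 in the exterior product u ^ v of 2-space vectors.
    return u[0]*v[1] - u[1]*v[0]

a, b = 3.0, 4.0
x = [a, b]
y = [b, -a]        # the direction b e1 - a e2, orthogonal to x

# e1 ^ e2 = -(x ^ y) / (a^2 + b^2): the coefficient below is exactly 1.
print(-wedge2(x, y) / (a*a + b*b))

# The inverse point P* = origin + x*, where x* = -x / |x|^2.
norm2 = a*a + b*b
x_star = [-a / norm2, -b / norm2]

# The product of the distances of P and P* from the origin is unity.
print(math.hypot(*x) * math.hypot(*x_star))
```

The second print confirms the inverse-point property stated below: P and P* lie on the same line through the origin, with distances whose product is 1.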
The final expression for the complement of the point P ã ¯" + x ã ¯" + a e1 + b e2 can then be
written as a bound vector in a direction orthogonal to the position vector of P.

P ã J¯! - Ha e1 + b e2 L ê Ia2 + b2 MN Ô Hb e1 - a e2 L ã P* Ô Hb e1 - a e2 L        5.54

This formula turns out to be a special case of a more general formula which we will derive later in
the section, and for which we now propose some convenient notation.

P ã ¯! + x        P* ã ¯! + x*        x* ã -x ê †x§2        5.55

We call the point P* the inverse point to P. Inverse points are situated on the same line through the
origin, and on opposite sides of it. The product of their distances from the origin is unity.

Remembering that the vector space complement of the position vector x is denoted x, we can now
rewrite our derived formula in a more condensed form.


P ã P* Ô I-xM 5.56


P ã ¯! + x        P* ã ¯! + x*        P ã P* Ô I-xM

The Euclidean complement of a point in the plane

Of course, since the bound space of a plane is a three-dimensional linear space, the complement of
the complement of a point is the point itself, just like the complement of a complement of a vector
in a vector 3-space is the vector itself.

To verify this in GrassmannAlgebra we can use ConvertComplements on the formula derived


in terms of basis elements above (but because the complement operation is a Protected
function, we need to use the extra symbol ^ before the equals sign).

P ^= J¯" - Ha e1 + b e2 L ê Ia2 + b2 MN Ô Hb e1 - a e2 L; ConvertComplements@PD
¯# + a e1 + b e2

The Euclidean complement of a point in a point 3-space


To indicate the generality of the method, we undertake the same derivation in a Euclidean point
3-space.


¯"3 ; ComplementPalette

Complement Palette
Basis Complement
1 ¯! Ô e1 Ô e2 Ô e3
¯! e1 Ô e2 Ô e3
e1 -H¯! Ô e2 Ô e3 L
e2 ¯! Ô e1 Ô e3
e3 -H¯! Ô e1 Ô e2 L
¯! Ô e1 e2 Ô e3
¯! Ô e2 -He1 Ô e3 L
¯! Ô e3 e1 Ô e2
e1 Ô e2 ¯! Ô e3
e1 Ô e3 -H¯! Ô e2 L
e2 Ô e3 ¯! Ô e1
¯! Ô e1 Ô e2 e3
¯! Ô e1 Ô e3 -e2
¯! Ô e2 Ô e3 e1
e1 Ô e2 Ô e3 -¯!
¯! Ô e1 Ô e2 Ô e3 1

Each basis vector is orthogonal to the coordinate plane involving the other basis vectors. For
example, e2 is orthogonal to the coordinate plane ¯" Ô e1 Ô e3 .

Suppose now we take a general vector x ã a e1 + b e2 + c e3 . The complement of this vector is:

x ã a e1 + b e2 + c e3
ã - a ¯" Ô e2 Ô e3 + b ¯" Ô e1 Ô e3 - c ¯" Ô e1 Ô e2
ã ¯" Ô H- a e2 Ô e3 + b e1 Ô e3 - c e1 Ô e2 L

Again, this is a plane (bound bivector) through the origin orthogonal to the vector x.

The complement of the general point P ã ¯" + x is

P ã ¯" + x ã e1 Ô e2 Ô e3 + ¯" Ô H- a e2 Ô e3 + b e1 Ô e3 - c e1 Ô e2 L

The effect of adding the trivector e1 Ôe2 Ôe3 to x is to shift the bound bivector
¯" Ô H- a e2 Ô e3 + b e1 Ô e3 - c e1 Ô e2 L parallel to itself. We can factor e1 Ôe2 Ôe3 into the
exterior product of a vector and a bivector congruent to - a e2 Ô e3 + b e1 Ô e3 - c e1 Ô e2 .

e1 Ô e2 Ô e3 ã z Ô H- a e2 Ô e3 + b e1 Ô e3 - c e1 Ô e2 L

Now we can write P as:

P ã H¯" + zL Ô H- a e2 Ô e3 + b e1 Ô e3 - c e1 Ô e2 L

The vector z is the position vector of any point on the plane defined by the bound bivector P. A
particular point of interest is the point on the plane closest to the point P or the origin. The position
vector z would then be a vector orthogonal to the plane. Thus we can write e1 Ôe2 Ôe3 as:

e1 Ô e2 Ô e3 ã -HHa e1 + b e2 + c e3 L Ô H- a e2 Ô e3 + b e1 Ô e3 - c e1 Ô e2 LL ê Ia2 + b2 + c2 M
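This 3-space factorization can be checked in the same numerical style as the planar one. A Python sketch, with the single e1 Ô e2 Ô e3 coefficient of a vector-bivector wedge written out directly:

```python
def wedge_vec_bivec(x, B):
    # Coefficient of e1^e2^e3 in x ^ B, where x = (x1, x2, x3) and the
    # bivector B = B12 e1^e2 + B13 e1^e3 + B23 e2^e3.
    # (e1^(e2^e3) = +e123, e2^(e1^e3) = -e123, e3^(e1^e2) = +e123.)
    B12, B13, B23 = B
    return x[0]*B23 - x[1]*B13 + x[2]*B12

a, b, c = 2.0, 3.0, 5.0
x = [a, b, c]
B = [-c, b, -a]     # -a e2^e3 + b e1^e3 - c e1^e2, as in the text

coeff = wedge_vec_bivec(x, B)
print(-coeff / (a*a + b*b + c*c))   # 1.0: the quotient recovers e1^e2^e3
```

So dividing x ^ B by -Ia2 + b2 + c2 M does reproduce the unit coefficient of the basis trivector, as claimed.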
The final expression for the complement of the point P ã ¯" + x ã ¯" + a e1 + b e2 + c e3 can
then be written as a bound bivector in a direction orthogonal to the position vector of P.

P ã J¯! - Ha e1 + b e2 + c e3 L ê Ia2 + b2 + c2 MN Ô
H- a e2 Ô e3 + b e1 Ô e3 - c e1 Ô e2 L        5.57

From this result we see that our condensed formula is also valid in a point 3-space.


P ã P* Ô I-xM 5.58


The Euclidean complement of a point in a point 3-space

In a point 3-space, the complement of the complement of a point is the negative of the point itself,
or equivalently, the point with a weight of -1.

P = ¯" + a e1 + b e2 + c e3 ; ConvertComplements@PD
-¯# - a e1 - b e2 - c e3

The complement of a bound element


We now come to the point where we can determine a formula which expresses the complement of
a general bound element in an n-plane in terms of a special point and the complement in the vector
subspace of the n-plane.

Consider a bound simple m-vector P Ô a in an n-plane, where the position vector x of the point P
is orthogonal to a, that is, to each of the 1-element factors of a.

P Ô a ã H¯" + xL Ô a ã ¯" Ô a + x Ô a

From the Common Factor Theorem we have that

Kx Ô aO Ó x ã Hx Ô a1 Ô a2 Ô a3 Ô … Ô am L Ó x
ã Hx Ô xL Ó Ha1 Ô a2 Ô … Ô am L - Ha1 Ô xL Ó Hx Ô a2 Ô a3 Ô … Ô am L +
Ha2 Ô xL Ó Hx Ô a1 Ô a3 Ô … Ô am L - … H-1Lm Ham Ô xL Ó Hx Ô a1 Ô a2 Ô … Ô am-1 L

But since we have chosen x to be orthogonal to each of the ai , the terms ai Ô x are zero so that

2009 9 3
Grassmann Algebra Book.nb 404

Kx Ô aO Ó x ã Hx Ô xL Ó a

By taking the complement of this expression, we can manipulate it into the form we want.

Kx Ô aO Ó x ã Hx Ô xL Ó a

First apply the complement axiom (once on the left and twice on the right).

Hx Ô aL Ô x ã Ix Ó xM Ô a

Then the complement of a complement axiom on x.

Hx Ô aL Ô x ã Hx Ó xL Ô a

Now, the regressive product of an element with its complement is a scalar which, as we will see in
the next chapter is the scalar product of the element with itself. This scalar product may also be
written as the square of the magnitude (measure) of the element †x§2 . Hence on the right side we
have

x Ó x ã H-1Ln-1 x Ó x ã H-1Ln-1 x û x ã H-1Ln-1 †x§2


Substituting gives

Hx Ô aL Ô x ã H-1Ln-1 †x§2 a

On the left side we can change the order of the factors

Hx Ô aL Ô x ã H-1Ln-1+m x Ô Hx Ô aL

Collecting these results gives

H-1Lm x Ô Hx Ô aL ã †x§2 a

We now need to convert these complements to vector space complements.



x Ô a ã H-1Lm+1 ¯" Ô Jx Ô aN        a ã H-1Lm ¯" Ô a

Substituting and factoring out the Origin ¯" we get


¯" Ô Kx Ô x Ô aO ã ¯" Ô KH-1Lm †x§2 aO

Extracting the vectorial factors yields



x Ô x Ô a ã H-1Lm †x§2 a

We can now return to the formula for the bound element under consideration and take its
complement.


P Ô a ã H¯" + xL Ô a ã ¯" Ô a + x Ô a

P Ô a ã ¯" Ô a + x Ô a ã a + H-1Lm+1 ¯" Ô Jx Ô aN

Substituting for a from our derivation above gives

P Ô a ã H-1Lm Jx ê †x§2 N Ô Jx Ô aN + H-1Lm+1 ¯" Ô Jx Ô aN

      ã J¯" - x ê †x§2 N Ô K- a Ô xO

P Ô a ã J¯! - x ê †x§2 N Ô K- a Ô xO        P ã ¯! + x        a û x ã 0        5.59

This formula holds only under the condition that the position vector x of the point P is orthogonal
to a. Here we have preempted the notation of the next chapter by using a û x ã 0 to express this
condition.

Ï The complement of a point

As a check, we see that putting a equal to unity in this formula gives us the formula we derived
earlier in the previous section.

P ã J¯" - x ê †x§2 N Ô I-xM        P ã ¯" + x

Euclidean complements of bound elements

Ï The complement of a bound vector

P Ô a ã J¯! - x ê †x§2 N Ô Hx Ô aL        P ã ¯! + x        a û x ã 0        5.60

As a specific example, let x be e1 , and a be e2 .

† In the plane, the complement of this bound vector becomes a point.


P Ô e2 ã H¯" - e1 L Ô He1 Ô e2 L ã H¯" - e1 L Ô H1L ã ¯" - e1

The Euclidean complement of a bound vector in the plane

† In a point 3-space, the complement of this bound vector becomes a second bound vector
(orthogonal to the first one).

P Ô e2 ã H¯" - e1 L Ô He1 Ô e2 L ã H¯" - e1 L Ô e3

The Euclidean complement of a bound vector in space


Ï The complement of a bound bivector

P Ô a Ô b ã K¯! - x ê †x§2 O Ô Hx Ô a Ô bL    P ã ¯! + x    Ha Ô bL û x ã 0 5.61

Again, let x be e1 and a be e2 ; and put b to be e3 .

† In a point 3-space, the complement of this bound bivector becomes a point.

P Ô e2 Ô e3 ã H¯" - e1 L Ô He1 Ô e2 Ô e3 L ã H¯" - e1 L Ô H1L ã ¯" - e1

The Euclidean complement of a bound bivector in space

The regressive product of point complements


In the sections above, we have explored the simplest cases of complements of bound elements.
There is another way to compose these results if we express the bound element as an exterior
product of bound elements and then apply the complement axiom to obtain a regressive product of
complements of bound elements. We look at the simplest case first: that of the complement of a
bound vector expressed as the product of two points. Let

L ã PÔQ L ã PÔQ ã PÓQ


In geometric terms, this can be interpreted as saying that the complement of a line defined by two
points is the intersection of the complements of the points.


† The simplest case is in the plane, where the complements of the points are themselves lines,
whose intersection is a point.

[Figure: in the plane, the complements P and Q of the points P and Q are lines; their regressive product (intersection) is the point R ã P Ó Q, whose complement is the line R ã P Ô Q through P and Q.]
The regressive product of point complements

† In a point 3-space, the same relations hold, but their interpretations are different. The
complement P of a point is a bound bivector as we have depicted earlier. The complements of
two different points P and Q yield two distinct bound bivectors which intersect in a line R, say.
And the complement of this line R is the line passing through the points P and Q.

† The complement of a bound bivector can be composed as the regressive product of the
complements of three points. In a point 3-space, this results in the intersection of three bound
bivectors yielding a point. The complement of this point is a bound bivector passing through the
original three points.

† The simple examples explored in the sections above are straightforwardly extended mutatis
mutandis to higher dimensional spaces, but of course they are more challenging to depict!


5.12 Reciprocal Bases

Reciprocal bases
Up to this point we have only considered one basis for L, the space of 1-elements, which we have
denoted with subscripted indices, for example ei . We will refer to this basis as the standard
basis. Introducing a second
basis reciprocal to this standard basis enables us to write the formulae for complements in a more
symmetric way.

This section will summarize formulae relating basis elements and their cobases and complements
in terms of reciprocal bases. For simplicity, we adopt the Einstein summation convention.

In L, the space of 1-elements, the metric tensor gij forms the relationship between the reciprocal bases.

ei ã gij ej 5.62

m
This relationship induces a metric tensor gij on L, the space of m-elements.

m
ei ã gij ej 5.63
m m
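As a numeric sketch of relation 5.62 (a minimal illustration only; the metric values below are assumptions, not taken from the text), the reciprocal basis in coordinates is obtained from the inverse of the metric matrix, since the defining property is that the scalar product of ei with the reciprocal element ej is the Kronecker delta:

```python
import numpy as np

# An assumed symmetric, invertible metric tensor g_ij on a 3-dimensional space.
g = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

# Take the standard basis vectors e_i as the rows of the identity matrix,
# with the scalar product of vectors u, v given by u . g . v.
e = np.eye(3)

# The reciprocal basis satisfies e_i . g . e^j = delta_i^j; with e the
# identity, its vectors are the rows of the inverse of g.
e_recip = np.linalg.inv(g)

# Check the defining property of the reciprocal basis.
gram = e @ g @ e_recip.T          # entries are e_i . g . e^j
print(np.allclose(gram, np.eye(3)))   # True
```

In a general (non-identity) coordinate basis, the same computation applies with e the matrix of basis vectors and g the Gram matrix of their scalar products.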

The complement of a basis element


The complement of a basis 1-element of the standard basis has already been defined by:

ei ã H1 ê ¯g L gij ej    5.64

Here g ã †gij § is the determinant of the metric tensor, and ¯g denotes its positive square root.
Taking the complement of formula 5.64 and substituting for ei in formula 5.66 gives:

ei ã H1 ê ¯g L ei    5.65

The reciprocal relation to this is:


ei ã ¯g ei 5.66

These formulae can be extended to basis elements of L.


m

ei ã ¯g ei 5.67
m m

ei ã H1 ê ¯g L ei    5.68
m m

Particular cases of these formulae are:

H1 ê ¯g L e1 Ô e2 Ô ! Ô en ã 1 ã ¯g e1 Ô e2 Ô ! Ô en    5.69

e1 Ô e2 Ô ! Ô en ã ¯g 5.70

e1 Ô e2 Ô ! Ô en ã 1 ê ¯g    5.71

ei ã ¯g H-1Li-1 e1 Ô ! Ô ·i Ô ! Ô en 5.72

ei ã H1 ê ¯g L H-1Li-1 e1 Ô ! Ô ·i Ô ! Ô en    5.73

ei1 Ô ! Ô e i m ã ¯g H-1LKm e1 Ô ! Ô ·i1 Ô ! Ô ·im Ô ! Ô en 5.74

ei1 Ô ! Ô eim ã H1 ê ¯g L H-1LKm e1 Ô ! Ô ·i1 Ô ! Ô ·im Ô ! Ô en    5.75

where Km ã i1 + i2 + ! + im + m Hm + 1L ê 2 and the symbol · means the corresponding element is
missing from the product.


The complement of a cobasis element


The cobasis of a general basis element was developed in Section 2.6 as:

ei ã H-1LKm e1 Ô ! Ô ·i1 Ô ! Ô ·im Ô ! Ô en


m

where Km ã i1 + i2 + ! + im + m Hm + 1L ê 2.

Taking the complement of this formula and applying formula 5.69 gives the required result.

ei ã H-1LKm e1 Ô ! Ô ·i1 Ô ! Ô ·im Ô ! Ô en


m

ã ¯g H-1LKm e1 Ô ! Ô ·i1 Ô ! Ô ·im Ô ! Ô en

ã ¯g H-1Lm Hn-mL ei1 Ô ! Ô eim

ã ¯g H-1Lm Hn-mL ei
m

ei ã ¯g H-1Lm Hn-mL ei 5.76


m m

Similarly:

ei ã H1 ê ¯g L H-1Lm Hn-mL ei    5.77
m m

The complement of a complement of a basis element


To verify the complement of a complement axiom, we begin with the equation for the complement
of a standard basis m-element, take the complement of the equation, and then substitute for ei
m
from equation 5.79.

ei ã ¯g ei
m m

ã ¯g H1 ê ¯g L H-1Lm Hn-mL ei
m

ã H-1Lm Hn-mL ei
m

A similar result is obtained for the complement of the complement of a reciprocal basis element.

The exterior product of basis elements


The exterior product of a basis element with the complement of the corresponding basis element in
the reciprocal basis is equal to the unit n-element. The exterior product with the complement of any
other basis element in the reciprocal basis is zero.

ei Ô ej ã ei Ô H1 ê ¯g L ej ã dij 1
m m m m

ei Ô ej ã ei Ô ¯g ej ã dij 1
m m m m

ei Ô ej ã dij 1    ei Ô ej ã dij 1    5.78
m m m m

The exterior product of a basis element with the complement of any other basis element in the
same basis is equal to the corresponding component of the metric tensor times the unit n-element.

ei Ô ej ã ei Ô Kgjk ek O ã ei Ô gjk H1 ê ¯g L ek ã gij 1
m m m m m m

ei Ô ej ã ei Ô Kgjk ek O ã ei Ô gjk ¯g ek ã gij 1
m m m m m m

ei Ô ej ã gij 1    ei Ô ej ã gij 1    5.79
m m m m

In L, the space of 1-elements, these reduce to:

j
ei Ô ej ã di 1 ei Ô ej ã dij 1 5.80

ei Ô ej ã gij 1 ei Ô ej ã gij 1 5.81

The regressive product of basis elements



In the series of formulae below we have repeated the formulae derived for the exterior product and
shown its equivalent for the regressive product in the same box for comparison. We have used the
fact that dij and gij are symmetric to interchange the indices.

j j
ei Ô ej ã di 1 ei Ó ej ã di 5.82
m m m m

ei Ô ej ã dij 1 ei Ó ej ã dij 5.83


m m m m

m m
ei Ô ej ã gij 1 ei Ó ej ã gij 5.84
m m m m

m ij m ij
ei Ô ej ã g 1 ei Ó ej ã g 5.85
m m m m

The complement of a simple element is simple


The expression for the complement of a basis element in terms of its cobasis element in the
reciprocal basis allows us a straightforward proof that the complement of a simple element is
simple.

To show this, suppose a simple element a is expressed in terms of its factors ai .


m

a ã a1 Ô a2 Ô ! Ô am
m

Take any other n|m 1-elements am+1 , am+2 , !, an such that a1 , a2 , !, am , am+1 , !, an form an
independent set, and thus a basis of L. Then by equation 5.76 we have:

e i1 Ô ! Ô e i m ã ¯g H-1LKm e1 Ô ! Ô ·i1 Ô ! Ô ·im Ô ! Ô en

a ã a1 Ô a2 Ô ! Ô am ã ga am+1 Ô am+2 Ô ! Ô an
m

where ga is the determinant of the metric tensor in the a basis.

The complement of a1 Ô a2 Ô ! Ô am is thus a scalar multiple of am+1 Ô am+2 Ô ! Ô an . Since this
is evidently simple, the assertion is proven.

5.13 Summary
In this chapter we have shown that by defining the complement operation on a basis of L, the
space of 1-elements, as

ei ã ! ‚ gij ej    Hsum over j ã 1, !, nL

and accepting the complement axiom

aÔb ã aÓb
m k m k

as the mechanism for extending the complement to higher grade elements, the requirement that

a ã %a
m m

constrains gij to be symmetric and ! to have the value ± 1 ê ¯g , where g ã †gij § is the
determinant of the metric tensor gij and ¯g its positive square root.

The metric tensor gij was introduced as a mapping between the two linear spaces L and L of grades 1 and n|1.

To this point none of these concepts has yet been related to notions of interior, inner or scalar
products. This will be addressed in the next chapter, where we will also see that the constraint
! ã ± 1 ê ¯g is equivalent to requiring that the magnitude of the unit n-element is unity.

Once we have constrained gij to be symmetric, we are able to introduce the standard notion of a
reciprocal basis. The formulae for complements of basis elements in any of the spaces L of grade m
then become much simpler to express.

This chapter also discusses how the complement operation can be interpreted geometrically both in
vector spaces and in bound spaces. In both spaces the complement axiom holds, but its geometric
interpretation in the two spaces is quite different.


6 The Interior Product


Ï

6.1 Introduction
To this point we have defined three important operations in the Grassmann algebra: the exterior
product, the regressive product, and the complement.

In this chapter we introduce the interior product, the fourth operation of fundamental importance
to the algebra. The interior product of two elements is defined as the regressive product of one
element with the complement of the other. Whilst the exterior product of an m-element and a k-
element generates an (m+k)-element, the interior product of an m-element and a k-element (m ¥ k)
generates an (m|k)-element. This means that the interior product of two elements of equal grade is
a 0-element (or scalar). The interior product of an element with itself is a scalar, and it is this scalar
that is used to define the measure of the element.

The interior product of two 1-elements corresponds to the usual notion of inner, scalar, or dot
product. But we will see that the notion of measure is not restricted to 1-elements. Just as one may
associate the measure of a vector with a length, the measure of a bivector may be associated with
an area, and the measure of a trivector with a volume.

If the exterior product of an m-element and a 1-element is zero, then it is known that the 1-element
is contained in the m-element. If the interior product of an m-element and a 1-element is zero, then
this means that the 1-element is contained in the complement of the m-element. In this case it may
be said that the 1-element is orthogonal to the m-element.

The basing of the notion of interior product on the notions of regressive product and complement
follows here the Grassmannian tradition rather than that of the current literature which introduces
the inner product onto a linear space as an arbitrary extra definition. We do this in the belief that it
is the most straightforward way to obtain consistency within the algebra and to see and exploit the
relationships between the notions of exterior product, regressive product, complement and interior
product, and to discover and prove formulae relating them.

We use the term 'interior' in addition to 'inner' to signal that the products are not quite the same. In
traditional usage the inner product has resulted in a scalar. The interior product is however more
general, being able to operate on two elements of any and perhaps different grades. We reserve the
term inner product for the interior product of two elements of the same grade. An inner product of
two elements of grade 1 is called a scalar product. In sum: inner products are scalar whilst, in
general, interior products are not.

We denote the interior product with a small circle with a 'bar' through it, thus û. This is to signify
that it has a more extended meaning than the inner product. Thus the interior product of a with b
m k

becomes a û b. This product is zero if m < k. Thus the order is important: for a non-zero product
m k

the element of higher grade should be on the left. The interior product has the same left
associativity as the negation operator, or minus sign. It is possible to define both left and right
interior products, but in practice the added complexity is not rewarded by an increase in utility.


We will see in Chapter 10: The Generalized Product that the interior product of two elements can
be expressed as a certain generalized product, which is independent of the order of its factors.

6.2 Defining the Interior Product

Definition of the inner product


Grassmann defined the complement a of an element a such that the regressive product of a by a in
m m m m
that order, was always a non-negative scalar. That is a Ó a ¥ 0 for any element of any grade m in
m m
a space of any dimension. As is well known, this non-negativeness is a naturally accepted property
of Euclidean inner products.

This immediately suggests a definition for the inner product of an element with itself as a Ó a, and,
m m
by extension, a formula for the inner product of any two m-elements as a Ó b.
m m

The inner product of a and b is denoted a û b and is defined by:


m m m m

aûb ã aÓb œL 6.1


m m m m 0

The result of taking the regressive product of a and a in the reverse order introduces a potential
m m
change of sign.

a Ó a ã H-1Lm Hn-mL a Ó a
m m m m

Because of the form of Grassmann's definition of the complement, the element a Ó a is always a
m m
non-negative scalar. However, it is clear from the above that the element a Ó a may well not be.
m m
And furthermore, it may depend on the dimension of the space. Hence in the definition of the inner
product it is important that the complemented element be the second factor.

A scalar product is the inner product of two 1-elements.
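As a hedged numeric sketch (the vectors below are illustrative assumptions, not from the text): in the Euclidean case the inner product of two simple bivectors works out to the Gram determinant of the scalar products of their factors, so the measure of a bivector is the area of its parallelogram. In a vector 3-space this can be cross-checked against the cross product:

```python
import numpy as np

# Two factors of a simple bivector a1 ^ a2 (illustrative values).
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([1.0, 2.0, 0.0])

# Inner product of the bivector with itself, computed as the Gram
# determinant det[ai . aj]; this is the squared area of the parallelogram.
gram = np.array([[a1 @ a1, a1 @ a2],
                 [a2 @ a1, a2 @ a2]])
area_sq = np.linalg.det(gram)

# In 3-space the measure of a1 ^ a2 equals the length of the cross product.
c = np.cross(a1, a2)
print(np.isclose(area_sq, c @ c))   # True
```

The same Gram-determinant pattern extends to trivectors (volumes) and higher grades.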

Definition of the interior product


The fundamentals of Grassmann's algebra (based on the two products and the complement
operation) are however strong enough for this expression to be meaningful for elements of any
grade, leading to the definition of the interior product.

The interior product of a and b is denoted a û b and is defined by:


m k m k


aûb ã aÓb œ L m¥k 6.2


m k m k m-k

Thus, the interior product does not depend on the dimension of the space, since the dependence on
the dimension of the regressive product and the complement operations 'cancel' each other out.
This makes it, like the exterior product, of special importance in the algebra.

If m < k, then a Ó b is necessarily zero (otherwise the grade of the product would be negative),
m k
hence:

aûb ã 0 m<k 6.3


m k

Ï An important convention

In order to avoid unnecessarily distracting caveats on every formula involving interior products, in
the rest of this book we will suppose that the grade of the first factor is always greater than or
equal to the grade of the second factor. The formulae will remain true even if this is not the case,
but they will be trivially so by virtue of their terms reducing to zero.

Ï Left and right interior products

To deal with the apparent asymmetry of the interior product, some authors (for example E. Tonti
On the Formal Structure of Physical Theories, page 68, 1975) have introduced the notion of left
and right interior products equivalent to the definitions:

a ¢ b ã aÓb œ L m¥k
m k m k m-k
6.4
a § b ã aÓb œ L k¥m
m k m k k-m

Here, a ¢ b is the right interior product, and a § b is the left interior product. By interchanging
m k m k
the order of the factors in one of the regressive products, and then interchanging the m-element
with the k-element, we can see that they are related by the formula:

a ¢ b ã H-1Lk Hn-mL b § a 6.5


m k k m

In this book, we do not follow this dual-definition approach since we have found it to be an
unnecessary complication, especially with its dependence on grade and the dimension of the space.
In Chapter 10 we will discover a new product (the generalized Grassmann product) which leads to
the same interior product independent of the order of its factors, thus retrieving, in this wider
system, the sought-after symmetry.


Ï Historical note

Grassmann and workers in the Grassmannian tradition define the interior product of two elements as the
product of one with the complement of the other, the product being either exterior or regressive depending on
which interpretation produces a non-zero result. Furthermore, when the grades of the elements are equal, it is
defined either way. This definition involves the confusion between scalars and n-elements discussed in
Chapter 5, Section 5.1 (equivalent to assuming a Euclidean metric and identifying scalars with pseudo-
scalars). It is to obviate this inconsistency and restriction on generality that the approach adopted here bases
its definition of the interior product explicitly on the regressive exterior product.

Implications of the regressive product axioms


By expressing one or more elements as a complement, the relations of the regressive product
axiom set may be rewritten in terms of the interior product, thus yielding some of its more
fundamental properties.

Ï û 6: The interior product of an m-element and a k-element is an (m|k)-element.

The grade of the interior product of two elements is the difference of their grades.

a œ L, b œ L ï aûb œ L 6.6
m m k k m k m-k

Thus, in contradistinction to the regressive product, the grade of an interior product does not
depend on the dimension of the underlying linear space.

If the grade of the first factor is less than that of the second, the interior product is zero.

aûb ã 0 m<k 6.7


m k

Ï û 7: The interior product is not associative.

The following formulae are derived directly from the associativity of the regressive product, and
show that the interior product is not associative.

aûb ûg ã aû bÔg 6.8


m k r m k r


aÓb ûg ã aÓ bûg 6.9


m k r m k r

Ï û 8: The unit scalar 1 is the identity for the interior product.

The interior product of an element with the unit scalar 1 does not change it. The interior product of
scalars is equivalent to ordinary scalar multiplication. The complement operation may be viewed as
the interior product with the unit n-element 1.

aû1 ã a 6.10
m m

aûa ã aÔa ã a a 6.11


m m m

1û1 ã 1 6.12

a ã 1ûa 6.13
m m

Ï û 9: The inverse of a scalar with respect to the interior product is its


complement.

The interior product of the complement of a scalar and the reciprocal of the scalar is the unit n-
element.

1 ã a û H1 ê aL    6.14

Ï û 10: The interior product of two elements is congruent to the interior


product of their complements in reverse order.

The interior product of two elements is equal (apart from a possible sign) to the interior product of
their complements in reverse order.

a û b ã H-1LHn-mL Hm-kL b û a 6.15


m k k m

If the elements are of the same grade, the interior product of two elements is equal to the interior
product of their complements. (It will be shown later that, because of the symmetry of the metric
tensor, the interior product of two elements of the same grade is symmetric, hence the order of the
factors on either side of the equation may be reversed.)


aûb ã bûa 6.16


m m m m

Ï û 11: An interior product with zero is zero.

Every exterior linear space has a zero element whose interior product with any other element is
zero.

aû0 ã 0 ã 0ûa 6.17


m m

Ï û 12: The interior product is distributive over addition.


The interior product is both left and right distributive over addition.

Ka + b O û g ã a û g + b û g 6.18
m m r m r m r

a û Kb + gO ã a û b + a û g 6.19
m r r m r m r

Orthogonality
Orthogonality is a concept generated by the complement operation.

If x is a 1-element and a is a simple m-element, then x and a may be said to be linearly dependent
m m
if and only if a Ô x ã 0, that is, x is contained in (the space of) a.
m m

Similarly, x and a are said to be orthogonal if and only if a Ô x ã 0, that is, x is contained in a.
m m m

By taking the complement of a Ô x ã 0, we can see that a Ô x ã 0 if and only if a Ó x ã 0. Thus it


m m m
may also be said that a 1-element x is orthogonal to a simple element a if and only if their interior
m
product is zero, that is, a û x ã 0.
m

If x ã x1 Ô x2 Ô ! Ô xk then a and x are said to be totally orthogonal if and only if a û xi ã 0 for


k m k m
all xi contained in x.
k

However, for a û Hx1 Ô x2 Ô ! Ô xk L to be zero it is only necessary that one of the xi be


m
orthogonal to a. To show this, suppose it to be (without loss of generality) x1 . Then by formula û7
m
we can write a û Hx1 Ô x2 Ô ! Ô xk L as Ka û x1 O û Hx2 Ô ! Ô xk L, whence it becomes
m m
immediately clear that if a û x1 is zero then so is the whole product a û x.
m m k
Grassmann Algebra Book.nb 421

However, for a û Hx1 Ô x2 Ô ! Ô xk L to be zero it is only necessary that one of the xi be


m
orthogonal to a. To show this, suppose it to be (without loss of generality) x1 . Then by formula û7
m

we can write a û Hx1 Ô x2 Ô ! Ô xk L as Ka û x1 O û Hx2 Ô ! Ô xk L, whence it becomes


m m
immediately clear that if a û x1 is zero then so is the whole product a û x.
m m k

Just as the vanishing of the exterior product of two simple elements a and x implies only that they
m k
have some 1-element in common, so the vanishing of their interior product implies only that a and
m
x (m ¥ k) have some 1-element in common, and conversely. That is, there is a 1-element x such
k
that the following implications hold.

aÔx ã 0 ó :a Ô x ã 0, x Ô x ã 0>
m k m k

aûx ã 0 ó :a Ô x ã 0, x Ô x ã 0>
m k m k

aûx ã 0 ó :a û x ã 0, x Ô x ã 0>
m k m k

a û x ã 0 ó a û x ã 0 ó a Ô x ã 0    x Ô x ã 0    6.20
m k m m k

Example: The interior product of a simple bivector and a vector


In this section we preview the simplest case of an interior product which is not an inner product -
that of a bivector with a vector. At this stage we just sketch the computation sufficiently to be able
to depict it geometrically. We will revisit the general case in more detail later in the chapter.

Let x be a vector and B ã x1 Ô x2 be a simple bivector in a space of any number of dimensions.


The interior product of the bivector B with the vector x is the vector B û x, which is orthogonal to
both x and its orthogonal projection x¨ onto B.

The vector B û x can be expanded by the Interior Common Factor Theorem (which we will discuss
later) to give:

B û x ã Hx1 Ô x2 L û x ã Hx û x1 L x2 - Hx û x2 L x1

Since B û x is expressed as a linear combination of x1 and x2 it is clearly contained in the bivector


B. Thus

B Ô HB û xL ã 0

The resulting vector B û x is also orthogonal to x. We can show this by taking its scalar product
with x.


HB û xL û x ã B û Hx Ô xL ã 0
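The expansion and orthogonality property above can be checked numerically. This is a minimal sketch with assumed coordinate vectors (not from the text), using the Euclidean dot product for the scalar products in the expansion of the bivector-vector interior product:

```python
import numpy as np

# A simple bivector B = x1 ^ x2 and a vector x (illustrative values).
x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0])
x  = np.array([1.0, 2.0, 3.0])

# Interior product via the expansion  B o x = (x . x1) x2 - (x . x2) x1.
# Being a combination of x1 and x2, the result lies in the bivector B.
B_dot_x = (x @ x1) * x2 - (x @ x2) * x1

# B o x is orthogonal to x, since (B o x) o x = B o (x ^ x) = 0.
print(np.isclose(B_dot_x @ x, 0.0))   # True
```

The orthogonality holds identically, whatever values are chosen, since the two cross terms cancel.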
The unit bivector B` is defined by

B` ã B ê ¯HB û BL

The orthogonal projection x˛ of x onto B is given by

x˛ ã -B` û IB` û xM

And the component x¨ of x orthogonal to B is given by

x¨ ã IB` Ô xM û B`

Ï The interior product of a bivector and a vector in a vector 3-space

The analysis above has shown us that in a vector n-space, B û x lies in B and is orthogonal to x. It
is also orthogonal to the components of x in B (x˛ ) and orthogonal to B (x¨ ). We can depict this in
a vector 3-space as follows:

The interior product of a bivector with a vector in a vector 3-space


Ï The interior product of a bivector and a vector in a vector 2-space

In a vector 2-space, the same analysis is valid. However, x¨ is zero because it involves the exterior
product of three vectors in a 2-space; and x˛ is identical to x because the unit bivector B` must be
equal to the unit 2-element 1 of the space. (We discussed unit n-elements in Chapter 5.)

x¨ ã IB` Ô xM û B` ã 0

x˛ ã -B` û IB` û xM ã -1 û I1 û xM ã -1 û x ã -HxL ã x

The interior product of a bivector with a vector in a vector 2-space

6.3 Properties of the Interior Product

Implications of the complement axioms


Our aim in this section is to show how the interior product of two elements, or complements of
elements, can be expressed in terms of either exterior or regressive products together (perhaps)
with the complement operation.

Ï Product and complement axioms

In addition to the definitions and formulae we have already derived for the interior product there
are many more that can be derived by successive application of just three groups of axioms - the
anti-commutativity, complement, and complement of a complement axioms. We list them here
again for reference.


Anticommutativity axioms

a Ô b ã H-1Lm k b Ô a a Ó b ã H-1LHn-kL Hn-mL b Ó a 6.21


m k k m m k k m

Complement axioms

aÔb ã aÓb aÓb ã aÔb 6.22


m k m k m k m k

Complement of a complement axiom

a ã H-1Lm Hn-mL a 6.23


m m

Ì Formula 1

a û b ã a Ó b ã H-1Lk Hn-mL b Ó a
m k m k k m

a û b ã a Ó b ã H-1Lk Hn-mL b Ó a 6.24


m k m k k m

Ì Formula 2

a û b ã a Ó b ã H-1Lm Hn-mL a Ó b ã H-1Lm Hn-mL a Ô b


m k m k m k m k

a û b ã H-1Lm Hn-mL a Ô b ã H-1LHm+kL Hn-mL b Ô a 6.25


m k m k k m

Ì Formula 3

a û b ã a Ó b ã a Ô b ã H-1Lm k b û a
m k m k m k k m

a û b ã a Ó b ã a Ô b ã H-1Lm k b û a 6.26
m k m k m k k m


Ì Formula 4

a û b ã a Ó b ã H-1Lk Hn-kL a Ó b
m k m k m k

a û b ã H-1Lk Hn-kL a Ó b 6.27


m k m k

Ì Formula 5

a û b ã H-1Lk Hn-kL a Ó b ã H-1Lk Hn-kL+m Hn-kL b Ó a


m k m k k m

a û b ã H-1LHm+kL Hn-kL b û a 6.28


m k k m

Ì Formula 6

a û b ã a Ó b ã H-1Lk Hn-kL a Ô b ã H-1Lk Hn-kL+k Hn-mL b Ô a


m k m k m k k m

a û b ã H-1Lk Hm+kL b Ô a 6.29


m k k m

Ì Formula 7

a û b ã a Ó b ã a Ô b ã H-1Lm Hn-mL+k Hn-kL a Ô b


m k m k m k m k

a û b ã a Ô b ã H-1LHm+kL Hn-Hm+kLL a Ô b
m k m k m k

a û b ã H-1LHm+kL Hn-Hm+kLL a Ô b ã H-1Lm k b û a 6.30


m k m k k m

Ì Formula 8

a û b ã H-1Lk Hn-kL a Ó b ã H-1Lk Hn-kL a Ô b


m k m k m k

a û b ã H-1Lk Hn-kL a Ô b 6.31


m k m k

Ì Formula 9


a û b ã H-1Lk Hn-kL a Ó b ã H-1Lk Hn-kL+m Hn-mL a Ô b


m k m k m k

a û b ã H-1Lk Hn-kL+m Hn-mL a Ô b 6.32


m k m k

Extended interior products


The interior product of an element with the exterior product of several elements may be shown
directly from its definition to be a type of 'extended' interior product.

aû b Ô b Ô!Ô b ã a Ó Hb Ô b Ô ! Ô bL ã a Ó b Ó b Ó ! Ó b
m k1 k2 kp m k1 k2 kp m k1 k2 kp

ã aÓ b Ó b Ó! Ó b ã aû b û b û! û b
m k1 k2 kp m k1 k2 kp

aû b Ô b Ô!Ô b ã aû b û b û! û b 6.33
m k1 k2 kp m k1 k2 kp

Thus, the interior product is left-associative.

If the interior product of any of the b with a is zero, then the complete interior product is zero. We
ki m
can see this by rearranging the factors in the exterior product to bring that factor into first
position.

By interchanging the order of the factors in the exterior product we can derive alternative
expressions. For the case of two factors we have

a û Kb Ô gO ã H-1Lk r a û Kg Ô bO ï
m k r m r k

Ka û bO û g ã H-1Lk r Ka û gO û b    6.34
m k r m r k


Converting interior products


GrassmannAlgebra has a number of functions for converting between exterior, regressive, and
interior products, and (perhaps) the complement operation. We give some examples below. Note
carefully however, that the conversion is carried out in the currently declared space, that is,
GrassmannAlgebra will use the current dimension of the space for its conversion so that the
formula for any attached sign may be simplified. All of the functions are effectively listable.

Ï Precedence of the exterior, regressive, and interior products

GrassmannAlgebra has Mathematica's natural precedence ranking for its operators which it uses to
simplify the appearance of output by removing brackets if the input conforms to the precedence.

† The interior product operator has the same precedence as the minus operator.

8HHHa û bL û gL û d L û …, a û Hb û Hg û Hd û … LLL<
8a û b û g û d û …, a û Hb û Hg û Hd û …LLL<

† The exterior product operator has a higher precedence than the interior product operator.

8a û b Ô g, a û Hb Ô gL, Ha û bL Ô g<
8a û b Ô g, a û b Ô g, Ha û bL Ô g<

† The regressive product operator has a higher precedence than the interior product operator.

8a û b Ó g, a û Hb Ó gL, Ha û bL Ó g<
8a û b Ó g, a û b Ó g, Ha û bL Ó g<

† The exterior product operator has a higher precedence than the regressive product operator.

8a Ó b Ô g, a Ó Hb Ô gL, Ha Ó bL Ô g<
8a Ó b Ô g, a Ó b Ô g, Ha Ó bL Ô g<

Ï Converting interior products to regressive products

A = : a û b û g, a û b û g >;
m k j m k j

¯!3 ; InteriorToRegressive@AD

:a Ó b Ó g, a Ó b Ó g>
m k j m k j

† Converting interior products to regressive products is independent of the dimension of the space
since the definitional formula is used.


Ï Converting interior products to exterior products

† In a 3-space, the complement of a complement of an element is the element itself, hence there
are no extra signs resulting from the conversion.

¯!3 ; InteriorToExterior@AD

:a Ô b Ô g, a Ô b Ô g>
m k j m k j

† In a 4-space however, this is not so, so there may be extra signs resulting from the conversion.

¯!4 ; InteriorToExterior@AD

:H-1Lk+m H-1Lm a Ô b Ô g, H-1Lm a Ô H-1Lk b Ô g >


m k j m k j

† You can aggregate and simplify the signs by applying AggregateSigns.

InteriorToExterior@AD êê AggregateSigns

:H-1Lk a Ô b Ô g, H-1Lk+m a Ô b Ô g>


m k j m k j

Ï Converting exterior and regressive products to interior products

† Converting exterior and regressive products to interior products may utilize the conversion in
which the complement of an element is replaced by its interior product with the unit n-element
1.

B = :a Ô b Ô g, a Ô b Ó g>;
m k j m k j

In a 3-space, conversion to interior products gives

¯!3 ; ToInteriorForm@BD

:H-1Lj m+k m 1 û 1 û b û g Ô a , H-1L1+j+k+j k+m+j m g û 1 û a û b >


k j m j m k

But in a 4-space, the signs may be different.

¯!4 ; ToInteriorForm@BD

:H-1Lj+k+m+j m+k m 1 û 1 û b û g Ô a , H-1Lk+j k+m+j m g û 1 û a û b >


k j m j m k

And in a 5-space, the signs return to those of the (also odd) 3-space.


¯!5 ; ToInteriorForm@BD

:H-1Lj m+k m 1 û 1 û b û g Ô a , H-1L1+j+k+j k+m+j m g û 1 û a û b >


k j m j m k

Example: Orthogonalizing a set of 1-elements


Suppose we have a set of independent 1-elements a1 , a2 , a3 , !, and we wish to create an
orthogonal set e1 , e2 , e3 , ! spanning the same space.

We begin by choosing one of the elements, a1 say, arbitrarily and setting this to be the first
element e1 in the orthogonal set to be created.

e1 ã a1

To create a second element e2 orthogonal to e1 within the space concerned, we choose a second
element of the space, a2 say, and form the interior product.

e2 ã He1 Ô a2 L û e1
From our discussion of the interior product of a simple bivector and a vector above, we can see that
e2 is orthogonal to e1 . Alternatively we can take their interior product to see that it is zero:

e2 û e1 ã HHe1 Ô a2 L û e1 L û e1 ã He1 Ô a2 L û He1 Ô e1 L ã 0

We can also see that a2 lies in the 2-element e1 Ôe2 (that is, a2 is a linear combination of e1 and
e2 ) by taking their exterior product and expanding it.

e1 Ô HHe1 Ô a2 L û e1 L Ô a2 ã e1 Ô HHe1 û e1 L a2 - He1 û a2 L e1 L Ô a2 ã 0


We will develop the formula used for this expansion in Section 6.4: The Interior Common Factor
Theorem. For the moment, however, all we need observe is that e2 must be a linear
combination of e1 and a2 , because it is expressed in terms of no other elements.

We create the rest of the orthogonal elements in a similar manner.

e3 ã He1 Ô e2 Ô a3 L û He1 Ô e2 L

e4 ã He1 Ô e2 Ô e3 Ô a4 L û He1 Ô e2 Ô e3 L
!

In general then, the (i+1)th element ei+1 of the orthogonal set e1 , e2 , e3 , ! is obtained from the
previous i elements and the (i+1)th element ai+1 of the original set.

ei+1 ã He1 Ô e2 Ô ! Ô ei Ô ai+1 L û He1 Ô e2 Ô ! Ô ei L 6.35

Following a similar procedure to the one used for the second element, we can easily show that ei+1
is orthogonal to e1 Ôe2 Ô!Ôei and hence to each of the e1 ,e2 ,!,ei , and that ai lies in
e1 Ôe2 Ô!Ôei and hence is a linear combination of e1 ,e2 ,!,ei .

This procedure is, as might be expected, equivalent to the Gram-Schmidt orthogonalization process.
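As a numerical cross-check, here is a minimal Python sketch of the first orthogonalization step, assuming a Euclidean scalar product and illustrative vectors a1 and a2. The element e2 ã (e1 Ô a2) û e1 expands (by the Interior Common Factor Theorem) to (e1 û e1) a2 - (e1 û a2) e1, which is a scaled version of the classical Gram-Schmidt vector.

```python
def dot(u, v):
    # Euclidean scalar product (illustrative metric)
    return sum(a * b for a, b in zip(u, v))

def scale(c, v):
    return [c * x for x in v]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

# illustrative independent 1-elements
a1, a2 = [1.0, 1.0, 0.0], [1.0, 0.0, 1.0]

# e1 = a1;  e2 = (e1 ^ a2) . e1 expands to (e1.e1) a2 - (e1.a2) e1
e1 = a1
e2 = sub(scale(dot(e1, e1), a2), scale(dot(e1, a2), e1))

print(dot(e1, e2))   # 0.0 : e2 is orthogonal to e1

# e2 is a (scaled) classical Gram-Schmidt vector
g2 = sub(a2, scale(dot(e1, a2) / dot(e1, e1), e1))
```

Here e2 comes out as twice the classical Gram-Schmidt vector g2, the scale factor being e1 û e1.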


6.4 The Interior Common Factor Theorem

The Interior Common Factor Formula


The dual Common Factor Axiom introduced in Chapter 3 is:

: a Ó g Ô b Ó g ã a Ó b Ó g Ô g œ L, m + k + j ã 2 ¯n>
m j k j m k j j j

Interchanging the order of the factors and replacing a and b by their complements gives
m k

: g Ó a Ô g Ó b ã g Ó a Ó b Ô g œ L, j ã m + k>
j m j k j m k j j

Finally, by replacing the regressive product of the complements of a_m and b_k with the complement of a_m Ô b_k, we get the Interior Common Factor Formula.

(g_j û a_m) Ô (g_j û b_k) ã (g_j û (a_m Ô b_k)) g_j        j ã m + k        6.36

In this case g_j is simple, the expression is a j-element, and j = m+k.

The Interior Common Factor Theorem


The Common Factor Theorem introduced in Chapter 3 enabled an explicit expression for a
regressive product to be derived. We now derive the interior product version called the Interior
Common Factor Theorem. This theorem is a source of many useful relations in Grassmann algebra.

We start with the Common Factor Theorem in the form:

n
a Ó b ã ‚ ai Ô b Ó ai
m s m-p s p
i=1

m
where a ã a1 Ô a1 ã a2 Ô a2 ã ! ã an Ô an , a is simple, p = m+s|n, and n ! K O.
m m-p p m-p p m-p p m k

Suppose now that b ã b, k = n|s = m|p. Substituting for b and using the formula
s k s

a Ô b ã a û b 1 (which is derived in the section below: The Inner Product) allows us to write
m m m m
the term in brackets as


ai Ô b ã ai Ô b ã ai û b 1
m-p s k k k k

so that the Common Factor Theorem becomes:

a_m û b_k ã Σ_(i=1)^n (a^i_k û b_k) a^i_(m-k)        6.37

a_m ã a^1_k Ô a^1_(m-k) ã a^2_k Ô a^2_(m-k) ã … ã a^n_k Ô a^n_(m-k)

where k § m, n is the binomial coefficient (m choose k), and a_m is simple.

The formula indicates that an interior product of a simple element with another, not necessarily
simple, element of equal or lower grade, may be expressed in terms of the factors of the simple
element of higher grade.

When a_m is not simple, it may always be expressed as the sum of simple components:
a_m ã a^1_m + a^2_m + a^3_m + … (the i in the a^i are superscripts, not powers). From the linearity of the
interior product, the Interior Common Factor Theorem may then be applied to each of the simple
terms.

a û b ã Ka1 + a2 + a3 + !O û b ã a1 û b + a2 û b + a3 û b + !
m k m m m k m k m k m k

Thus, the Interior Common Factor Theorem may be applied to the interior product of any two
elements, simple or non-simple. Indeed, the application to components in the preceding expansion
may be applied to the components of a even if it is simple.
m

To make the Interior Common Factor Theorem more explicit, suppose a ã a1 Ô a2 Ô ! Ô am , then:
m

Ha1 Ô a2 Ô ! Ô am L û b ã ‚ Hai1 Ô ! Ô aik L û b


k k
i1 ! ik

H-1LKk a1 Ô ! Ô ·i1 Ô ! Ô ·ik Ô ! Ô am 6.38


k 1
Kk ã ‚ ig + k Hk + 1L
g=1
2

where ·j means aj is missing from the product.


Examples of the Interior Common Factor Theorem


In this section we take the formula just developed and give examples for specific values of m and
k. We do this using the GrassmannAlgebra function InteriorCommonFactorTheorem which
generates the specific case of the theorem when given an argument of the form a û b. The grades m
m k
and k must be given as positive integers.

Ï b is a scalar

Ha1 Ô a2 Ô ! Ô am L û b ã b Ha1 Ô a2 Ô ! Ô am L 6.39


0 0

Ï b is a 1-element

m
Ha1 Ô a2 Ô ! Ô am L û b ã ‚ Hai û b L H-1Li-1 a1 Ô ! Ô ·i Ô ! Ô am 6.40
i=1

InteriorCommonFactorTheoremBa û bF
2 1

a û b ã a1 Ô a2 û b ã - a2 û b a1 + a1 û b a2
2 1 1 1 1

InteriorCommonFactorTheoremBa û bF
3 1

a û b ã a1 Ô a2 Ô a3 û b ã a3 û b a1 Ô a2 - a2 û b a1 Ô a3 + a1 û b a2 Ô a3
3 1 1 1 1 1

InteriorCommonFactorTheoremBa û bF
4 1

a û b ã a1 Ô a2 Ô a3 Ô a4 û b ã - a4 û b a1 Ô a2 Ô a3 +
4 1 1 1

a3 û b a1 Ô a2 Ô a4 - a2 û b a1 Ô a3 Ô a4 + a1 û b a2 Ô a3 Ô a4
1 1 1
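A small numeric sanity check of the m = 2, k = 1 case (pure Python, illustrative vectors): the 1-element (a1 Ô a2) û b given by the theorem has the property that its scalar product with any vector d equals the 2!2 determinant of the scalar products of the a's with b and d.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# illustrative vectors
a1, a2 = [1, 2, 0], [0, 1, 3]
b, d   = [2, 1, 1], [1, 1, 1]

# (a1 ^ a2) . b  =  (a1 . b) a2 - (a2 . b) a1   (the m = 2 case above)
c = [dot(a1, b) * x2 - dot(a2, b) * x1 for x1, x2 in zip(a1, a2)]

# its scalar product with any d equals the determinant of scalar products
lhs = dot(c, d)
rhs = dot(a1, b) * dot(a2, d) - dot(a1, d) * dot(a2, b)
print(lhs == rhs)   # True
```

The right-hand side is just the inner product of the bivectors a1 Ô a2 and b Ô d, anticipating the determinant formula of Section 6.5.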


Ï b is a 2-element

Ha1 Ô a2 Ô ! Ô am L û b ã
2
6.41
‚ Hai Ô aj L û b H-1Li+j-1 a1 Ô ! Ô ·i Ô ! Ô ·j Ô ! Ô am
2
i,j

InteriorCommonFactorTheoremBa û bF
3 2

a û b ã a1 Ô a2 Ô a3 û b ã b û a2 Ô a3 a1 - b û a1 Ô a3 a2 + b û a1 Ô a2 a3
3 2 2 2 2 2

InteriorCommonFactorTheoremBa û bF
4 2

a û b ã a1 Ô a2 Ô a3 Ô a4 û b ã
4 2 2

b û a3 Ô a4 a1 Ô a2 - b û a2 Ô a4 a1 Ô a3 + b û a2 Ô a3 a1 Ô a4 +
2 2 2

b û a1 Ô a4 a2 Ô a3 - b û a1 Ô a3 a2 Ô a4 + b û a1 Ô a2 a3 Ô a4
2 2 2

Ï b is a 3-element

InteriorCommonFactorTheoremBa û bF
4 3

a û b ã a1 Ô a2 Ô a3 Ô a4 û b ã - b û a2 Ô a3 Ô a4 a1 +
4 3 3 3

b û a1 Ô a3 Ô a4 a2 - b û a1 Ô a2 Ô a4 a3 + b û a1 Ô a2 Ô a3 a4
3 3 3

The computational form of the Interior Common Factor Theorem


The Interior Common Factor Theorem rewritten in terms of the symbols we have been using for
multilinear forms is

n
A û B ã ‚ Ai û B Ai
m k k k m-k
i=1 6.42
A ã A1 Ô A1 ã A2 Ô A2 ã ! ã An Ô An
m k m-k k m-k k m-k

m
where k § m, and n ! K O, and A is simple.
k m


Just as we did for the Common Factor Theorem in Chapter 3, we can express this in a
computational form. If

¯#k BAF ã :A1 , A2 , !, An > ¯#k BAF ã : A1 , A2 , !, An > 6.43


m k k k m m-k m-k m-k

then

n
A û B ã ‚ K¯"k BAF û BO ¯"k BAF 6.44
m k m PiT k m PiT
i=1

m
where k § m, and n ! K O, and A is simple.
k m

Or, using Mathematica's inbuilt Listability

A û B ã ¯SBJ¯"k BAF û BN ¯"k BAFF 6.45


m k m k m

Or again using Mathematica's Dot function to do the summation.

A û B ã J¯"k BAF û BN.¯"k BAF 6.46


m k m k m

(The Dot function is an operation on lists, and should not be confused with the interior product.)

In the next section we will discuss the inner product (where the grades of the factors are equal),
and show that the inner product is symmetric. Thus we can write the above formulas in the
alternative forms:

n
A û B ã ‚ KB û ¯"k BAF O ¯"k BAF 6.47
m k k m PiT m PiT
i=1

A û B ã ¯SBJB û ¯"k BAFN ¯"k BAFF 6.48


m k k m m


A û B ã JB û ¯"k BAFN.¯"k BAF 6.49


m k k m m

Ï Examples

Here are the three computational forms compared to the result from the
InteriorCommonFactorTheorem function which implements the initially derived theorem.

m = 3; k = 2; n = 3;
n
A û B ã ‚ B û ¯#k BAF ¯#k BAF
m k k m PiT m PiT
i=1

A û B ã JB û A2 Ô A3 N A1 - JB û A1 Ô A3 N A2 + JB û A1 Ô A2 N A3
3 2 2 2 2

A û B ã ¯SBJB û ¯#k BAFN ¯#k BAFF


m k k m m

A û B ã JB û A2 Ô A3 N A1 - JB û A1 Ô A3 N A2 + JB û A1 Ô A2 N A3
3 2 2 2 2

A û B ã JB û ¯#k BAFN.¯#k BAF


m k k m m

A û B ã JB û A2 Ô A3 N A1 - JB û A1 Ô A3 N A2 + JB û A1 Ô A2 N A3
3 2 2 2 2

InteriorCommonFactorTheoremBA û BF
3 2

A û B ã A1 Ô A2 Ô A3 û B ã JB û A2 Ô A3 N A1 - JB û A1 Ô A3 N A2 + JB û A1 Ô A2 N A3
3 2 2 2 2 2

6.5 The Inner Product

Implications of the Common Factor Axiom


One of the special cases of the Common Factor Axiom is

:a Ó b ã a Ô b Ó 1 œ L , m + k - ¯n ã 0>
m k m k 0

Putting b equal to b then gives immediately


k m

aûb ã aÓb ã aÔb Ó1


m m m m m m


a û b ã Ka Ô bO Ó 1 6.50
m m m m

The dual to this special case of the Common Factor Axiom says that an exterior product of two
elements whose grades sum to the dimension of the space is equal to the (scalar) regressive product
of the two elements multiplied by the unit n-element.

:a Ô b ã a Ó b 1 œ L , m + k - ¯n ã 0>
m k m k ¯n ¯n

Now that we have defined the complement operation, the unit n-element 1 becomes 1. We can
¯n
then rewrite this dual axiom as

aÔb ã aÓb 1 ã aûb 1


m m m m m m

a Ô b ã Ka û bO 1 6.51
m m m m

Taking the complement of this equation gives another form for the inner product.

aûb ã aÔb 6.52


m m m m

The symmetry of the inner product


It is a normally accepted feature of the inner product that it be symmetric - that is, independent of
the order of its factors. To show that our definition does indeed imply this symmetry we only need
to do some simple transformations on the right hand side of the last equation.

a û b ã a Ô b ã a Ó b ã H-1Lm Hn-mL a Ó b ã b Ó a ã b û a
m m m m m m m m m m m m

aûb ã bûa 6.53


m m m m

This symmetry is, of course, a property that we would expect of an inner product. It is worth
remembering, however, that the key result permitting this symmetry is that the complement
operation has been required to satisfy the complement of a complement axiom: the
complement of the complement of an element must be, apart from a possible sign, equal to the
element itself.


The inner product of complements


We have already shown earlier (in the section Implications of the regressive product axioms) that the
inner product of two elements is equal to the inner product of their complements in reverse order.
This followed from the more general axiom for interior products. However, since we have now
shown that the inner product is symmetric, we can immediately see that the inner product of two
elements is equal to the inner product of their complements.

aûb ã bûa ã aûb


m m m m m m

aûb ã aûb 6.54


m m m m

The inner product of simple elements


The inner product of simple elements can be expressed as a determinant. We begin with the form

a û b ã Ha1 Ô a2 Ô ! Ô am L û Hb1 Ô b2 Ô ! Ô bm L
m m

The formula for extended interior products is

aû b Ô b Ô!Ô b ã aû b û b û! û b
m k1 k2 kp m k1 k2 kp

So we can write Ha1 Ô a2 Ô ! Ô am L û Hb1 Ô b2 Ô ! Ô bm L as:

HHa1 Ô a2 Ô ! Ô am L û b1 L û Hb2 Ô ! Ô bm L

or, by rearranging the b factors to bring bj to the beginning of the product as:

HHa1 Ô a2 Ô ! Ô am L û bj L û IH-1Lj-1 Hb1 Ô b2 Ô ! Ô ·j Ô ! Ô bm LM

The Interior Common Factor Theorem gives the first factor of this expression as:

m
Ha1 Ô a2 Ô ! Ô am L û bj ã ‚ Hai û bj L H-1Li-1 a1 Ô a2 Ô ! Ô ·i Ô ! Ô am
i=1

Thus, Ha1 Ô a2 Ô ! Ô am L û Hb1 Ô b2 Ô ! Ô bm L becomes:


Ha1 Ô a2 Ô ! Ô am L û Hb1 Ô b2 Ô ! Ô bm L ã
m
‚ Hai û bj L H-1Li+j Ha1 Ô ! Ô ·i Ô ! Ô am L û 6.55
i=1
Hb1 Ô ! Ô ·j Ô ! Ô bm L

If this process is repeated, we find the strikingly simple formula:

Ha1 Ô a2 Ô ! Ô am L û Hb1 Ô b2 Ô ! Ô bm L ã Det@ai û bj D 6.56

Ha1 Ô a2 Ô … Ô am L û Hb1 Ô b2 Ô … Ô bm L ã
| a1 û b1   a1 û b2   …   a1 û bm |
| a2 û b1   a2 û b2   …   a2 û bm |        6.57
|    ⋮         ⋮      ⋱      ⋮    |
| am û b1   am û b2   …   am û bm |
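This determinant form is easy to verify numerically. The sketch below (pure Python, illustrative Euclidean vectors) computes Det[ai û bj] by signed permutation expansion; with m equal to the dimension of the space, the Gram matrix is A.Bᵀ, so the result must equal det(A) det(B) (the Cauchy-Binet identity), giving an independent check.

```python
from itertools import permutations

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def det(m):
    # determinant by signed permutation expansion
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        prod = 1
        for i in range(n):
            prod *= m[i][p[i]]
        total += (-1) ** inv * prod
    return total

# illustrative vectors in a Euclidean 3-space
a = [[1, 2, 0], [0, 1, 3], [1, 0, 1]]
b = [[2, 1, 1], [1, 1, 1], [0, 2, 1]]

# the inner product (a1^a2^a3).(b1^b2^b3) as Det[ai . bj]
gram = [[dot(ai, bj) for bj in b] for ai in a]
print(det(gram))         # -7
print(det(a) * det(b))   # -7 as well, since here gram = A.Bᵀ
```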

Calculating inner products


The expression for the inner product of two simple elements can be developed in
GrassmannAlgebra by using either the operation ComposeScalarProductMatrix or the
operation ToScalarProducts.

Ï ComposeScalarProductMatrix

† ComposeScalarProductMatrix@A û BD develops the inner product of A and B into the


matrix of the scalar products of their factors. You can use Mathematica's inbuilt Det operation
on this matrix to get its determinant, and hence the inner product.

M1 = ComposeScalarProductMatrixBa û bF; MatrixForm@M1 D


3 3

a1 û b1 a1 û b2 a1 û b3
a2 û b1 a2 û b2 a2 û b3
a3 û b1 a3 û b2 a3 û b3

Det@M1 D
-Ha1 û b3 L Ha2 û b2 L Ha3 û b1 L + Ha1 û b2 L Ha2 û b3 L Ha3 û b1 L +
Ha1 û b3 L Ha2 û b1 L Ha3 û b2 L - Ha1 û b1 L Ha2 û b3 L Ha3 û b2 L -
Ha1 û b2 L Ha2 û b1 L Ha3 û b3 L + Ha1 û b1 L Ha2 û b2 L Ha3 û b3 L

† If you just want to display M1 to look like a determinant, you can use Mathematica's Grid and
BracketingBar.


BracketingBar@Grid@M1 DD
a1 û b1 a1 û b2 a1 û b3
a2 û b1 a2 û b2 a2 û b3
a3 û b1 a3 û b2 a3 û b3

† If we reverse the order of the two elements and then take the transpose of the resulting scalar
product matrix we get a matrix M2 which differs from M1 only by the ordering of its scalar
products.

M2 = ComposeScalarProductMatrixBb û aF; MatrixForm@Transpose@M2 DD


3 3

b1 û a1 b2 û a1 b3 û a1
b1 û a2 b2 û a2 b3 û a2
b1 û a3 b2 û a3 b3 û a3

This shows clearly how the symmetry of the inner product depends on the symmetry of the scalar
product (which in turn depends on the symmetry of the metric tensor).

Ï ToScalarProducts

† To obtain the expansion of the inner product more directly, you can use ToScalarProducts.

ToScalarProductsBa û bF
3 3

-Ha1 û b3 L Ha2 û b2 L Ha3 û b1 L + Ha1 û b2 L Ha2 û b3 L Ha3 û b1 L +


Ha1 û b3 L Ha2 û b1 L Ha3 û b2 L - Ha1 û b1 L Ha2 û b3 L Ha3 û b2 L -
Ha1 û b2 L Ha2 û b1 L Ha3 û b3 L + Ha1 û b1 L Ha2 û b2 L Ha3 û b3 L

† In fact, ToScalarProducts works to convert any interior products in a Grassmann


expression to scalar products. For example, here is an expression involving three interior
products.

¯!5 ; X1 = x û a + b He1 Ô e2 Ô e3 L û e1 + c He1 Ô e2 L û He2 Ô e3 L;

Applying ToScalarProducts expands any interior products until the expression only contains
scalar products.

X2 = ToScalarProducts@X1 D
a x + c H-He1 û e3 L He2 û e2 L + He1 û e2 L He2 û e3 LL +
b He1 û e3 L e1 Ô e2 - b He1 û e2 L e1 Ô e3 + b He1 û e1 L e2 Ô e3

You can convert these scalar products to metric elements with ToMetricElements. For the
default Euclidean metric you would get:

X3 = ToMetricElements@X2 D
a x + b e2 Ô e3

For a general metric you get the original expression with the general metric elements substituted
for the scalar products.

for the scalar products.

DeclareMetric@gD; X4 = ToMetricElements@X2 D
a x - c g1,3 g2,2 + c g1,2 g2,3 + b g1,3 e1 Ô e2 - b g1,2 e1 Ô e3 + b g1,1 e2 Ô e3

Inner products of basis elements


The following formulas collect together various results for the inner product of two basis m-
elements.

Here, a and b refer to any basis elements. The reader is referred further to Chapter 2 for a
m m
discussion of cobases, and to Chapter 5 for a discussion of the complement and the metric tensor.

aûb ã bûa ã a ûb ã b û a 6.58


m m m m m m m m

j
ei û ej ã di ei û ej ã dij 6.59
m m m m

m
ei û ej ã ei û ej ã g ei û ej ã gij 6.60
m m m m m m

1 m ij
ei û ej ã ei û ej ã ei û ej ã g 6.61
m m m m g m m

6.6 The Measure of an m-element

The definition of measure

The measure of an m-element a_m is denoted £a_mß and is defined as the positive square root of the inner
product of a_m with itself.

£a_mß ã √(a_m û a_m)        6.62

The measure of a scalar is its absolute value.


†a§ ã √(a2)        6.63

The measure of unity (or its negative) is, as would be expected, equal to unity.

†1§ ã 1 †-1§ ã 1 6.64

The measure of an n-element a_n ã a e1 Ô … Ô en is the absolute value of its single scalar coefficient
multiplied by the square root of the determinant g of the metric tensor (which, in this book, we assume to be positive).

a_n ã a e1 Ô … Ô en ï £a_nß ã √(a2 g) ã †a§ √g        6.65

The measure of the unit n-element (or its negative) is, as would be expected, equal to unity.

†1§ ã 1 °-1• ã 1 6.66

The measure of an element is equal to the measure of its complement, since, as we have shown
above, a û a ã a û a.
m m m m

£aß ã †a§ 6.67


m m

(The different heights of the bracketing bars in this formula are an artifact of Mathematica's
typesetting, and are inconsequential to the meaning of the notation.)

The measure of a simple element a is equal to the determinant of the scalar products of its factors.
m

2
£aß ã Ha1 Ô a2 Ô ! Ô am L û Ha1 Ô a2 Ô ! Ô am L ã Det@ai û aj D 6.68
m

In a Euclidean metric, the measure of an element is given by the familiar 'root sum of squares'
formula yielding a measure that is always real and positive.

a_m ã Σ ai e^i_m ï £a_mß ã √(Σ ai2)        6.69

Unit elements
`
The unit m-element associated with the element a is denoted a and may now be defined as:
m m

a
` m
aã 6.70
m
£aß
m

`
The unit scalar a corresponding to the scalar a, is equal to +1 if a is positive and is equal to -1 if
a is negative.

` a 1 a>0
a ã ã + 6.71
†a§ -1 a < 0

` `
1ã1 -1 ã -1 6.72

`
The unit n-element a corresponding to the n-element a ã a e1 Ô ! Ô en is equal to the unit n-
n n
element 1 if a is positive, and equal to -1 if a is negative.

a
` n a e1 Ô ! Ô en 1 a>0
a ã ã ã 6.73
n
£aß †a§ g -1 a < 0
n

` `
1 ã 1 -1 ã -1 6.74

`
The measure of a unit m-element a is therefore always unity.
m

`
†a§ ã 1 6.75
m

Calculating measures
To calculate the measure of any simple element you can use the GrassmannAlgebra function
Measure.


Measure@x Ô yD

-Hx û yL2 + Hx û xL Hy û yL

Measure will compute the measure of a simple element using the currently declared metric.

For example, with the default Euclidean metric, the measure of this 3-element is 5.

Measure@H2 e1 - 3 e2 L Ô He1 + e2 + e3 L Ô H5 e2 + e3 LD
5
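In a 3-space with a Euclidean metric, the measure of a 3-element is just the absolute value of its coefficient determinant, so the value 5 above can be cross-checked in a few lines of pure Python:

```python
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# coefficient rows of (2e1 - 3e2), (e1 + e2 + e3), (5e2 + e3)
rows = [[2, -3, 0], [1, 1, 1], [0, 5, 1]]
print(abs(det3(rows)))   # 5
```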

With a general metric in 3-space

DeclareMetric@gD
88g1,1 , g1,2 , g1,3 <, 8g1,2 , g2,2 , g2,3 <, 8g1,3 , g2,3 , g3,3 <<

Measure@H2 e1 - 3 e2 L Ô He1 + e2 + e3 L Ô H5 e2 + e3 LD
, I50 g1,2 g1,3 g2,3 + 25 g1,1 g2,2 g3,3 - 25 Ig2 g2,2 + g1,1 g2 + g2 g3,3 MM
1,3 2,3 1,2

Measure will automatically apply any simplifications due to the symmetry of the scalar product.

The measure of free elements


The measure of an m-element has a well-defined geometric significance.

Ï Scalars: m = 0

†a§2 ã a2
The measure of a scalar is its absolute value.

Measure@aD

a2

Ï Vectors: m = 1

†x§2 ã x û x
If x is interpreted as a vector, then †x§ is called the length of x.

Measure@xD

xûx


Ï Bivectors: m = 2

xûx xûy
†x Ô y§2 ã Hx Ô yL û Hx Ô yL ã £ ß
yûx yûy

If x Ô y is interpreted as a bivector, then †x Ô y§ is called the area of x Ô y. The scalar °x Ô y• is


in fact the area of the parallelogram formed by the vectors x and y.

Measure@x Ô yD

-Hx û yL2 + Hx û xL Hy û yL

Graphic showing a parallelogram formed by two vectors x and y, and its area.

Because of the nilpotent properties of the exterior product, the measure of the bivector is
independent of the way in which it is expressed in terms of its vector factors.

Measure@Hx + yL Ô yD

-Hx û yL2 + Hx û xL Hy û yL

Graphic showing a parallelogram formed by vectors Hx + yL and y, and its area.
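A quick numeric sketch of both points (pure Python, illustrative Euclidean vectors): the squared area is the 2!2 Gram determinant, and it is unchanged when x is replaced by x + y.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def area_sq(x, y):
    # |x ^ y|^2 = (x.x)(y.y) - (x.y)^2, the 2x2 Gram determinant
    return dot(x, x) * dot(y, y) - dot(x, y) ** 2

x, y = [2, 0, 0], [1, 3, 0]
xy = [a + b for a, b in zip(x, y)]

print(area_sq(x, y))    # 36, i.e. area 6
print(area_sq(xy, y))   # 36 as well: (x + y) ^ y == x ^ y
```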

Ï Trivectors: m = 3

xûx xûy xûz


†x Ô y Ô z§2 ã Hx Ô y Ô zL û Hx Ô y Ô zL ã yûx yûy yûz
zûx zûy zûz

If x Ô y Ô z is interpreted as a trivector, then †x Ô y Ô z§ is called the volume of x Ô y Ô z. The


scalar †x Ô y Ô z§ is in fact the volume of the parallelepiped formed by the vectors x, y and z.

Measure@x Ô y Ô zD
, I-Hx û zL2 Hy û yL + 2 Hx û yL Hx û zL Hy û zL -
Hx û xL Hy û zL2 - Hx û yL2 Hz û zL + Hx û xL Hy û yL Hz û zLM

Graphic showing a parallelepiped formed by vectors x, y and z, and its volume.
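Numerically, the 3!3 Gram determinant above equals the squared scalar triple product of the three vectors, the familiar expression for the squared volume of the parallelepiped. A pure-Python sketch with illustrative vectors:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

x, y, z = [1, 1, 0], [0, 1, 1], [1, 0, 1]
gram = [[dot(u, v) for v in (x, y, z)] for u in (x, y, z)]

# Gram determinant = squared volume = squared scalar triple product
print(det3(gram))             # 4
print(det3([x, y, z]) ** 2)   # 4
```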


The measure of bound elements


In this section we explore some relationships for measures of bound elements in a bound n-space
under the hybrid metric introduced in Chapter 5. Essentially this metric has the properties that:
1. The scalar product of the origin with itself (and hence its measure) is unity.
2. The scalar product of the origin with any vector or multivector is zero.
3. The scalar product of any two vectors is given by the metric tensor on the n-dimensional vector
subspace of the bound n-space.

We summarize these properties as follows:

¯! û ¯! ã 1 a û ¯! ã 0 ei û ej ã gij 6.76
m

Ï The measure of a point


Since a point P is defined as the sum of the origin ¯" and the point's position vector x relative to
the origin, we can write:

P û P ã H¯" + xL û H¯" + xL
ã ¯" û ¯" + 2 ¯" û x + x û x
ã 1 + xûx

P ã ¯! + x ï †P§2 ã 1 + †x§2 6.77

Although this is at first sight a strange result, it is due entirely to the hybrid metric referred to
above. Unlike a vector difference of two points whose measure starts off from zero and increases
continuously as the two points move further apart, the measure of a single point starts off from
unity as it coincides with the origin and increases continuously as it moves further away from it.

The measure of the difference of two points is, as expected, equal to the measure of the associated
vector.

P1 ã ¯" + x1
P2 ã ¯" + x2

HP1 - P2 L û HP1 - P2 L
ã P1 û P1 - 2 P1 û P2 + P2 û P2
ã H1 + x1 û x1 L - 2 H1 + x1 û x2 L + H1 + x2 û x2 L
ã x1 û x1 - 2 x1 û x2 + x2 û x2
ã Hx1 - x2 L û Hx1 - x2 L
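The hybrid metric is easy to model numerically: represent a point as a (weight, position vector) pair with the origin carrying weight 1, and define the scalar product accordingly. The sketch below (pure Python, illustrative vectors) checks both results: the measure of a single point and the measure of a point difference.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def hdot(p, q):
    # hybrid metric: origin.origin = 1, origin.vector = 0,
    # vector.vector = the (here Euclidean) metric on the vector subspace
    (w1, x1), (w2, x2) = p, q
    return w1 * w2 + dot(x1, x2)

x1, x2 = [3, 4, 0], [1, 1, 1]
P1 = (1, x1)   # P1 = origin + x1
P2 = (1, x2)

print(hdot(P1, P1))   # 1 + |x1|^2 = 26

d = [a - b for a, b in zip(x1, x2)]   # x1 - x2
D = (0, d)                            # P1 - P2 is a vector (weight 0)
print(hdot(D, D) == dot(d, d))        # True: |P1 - P2| = |x1 - x2|
```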

Ï The measure of a bound multivector


The second Triangle Formula (from the section on triangle formulas later in this chapter), is


a û b Hx û zL ã Ka Ô xO û b Ô z + Ka û zO û b û x
m m m m m m

If we put x ã z ã P and b ã a in this formula, and then rearrange the order we get:
m m

KP Ô aO û KP Ô aO ã Ka û aO HP û PL - Ka û PO û Ka û PO
m m m m m m

But a û P ã a û H¯" + xL ã a û x and P û P ã 1 + x û x, so that:


m m m

KP Ô aO û KP Ô aO ã Ka û aO H1 + x û xL - Ka û xO û Ka û xO
m m m m m m

By applying the second Triangle Formula again we can also write

Ka Ô xO û Ka Ô xO ã Ka û aO Hx û xL - Ka û xO û Ka û xO
m m m m m m

Subtracting these gives the final result.

KP Ô aO û KP Ô aO ã a û a + Ka Ô xO û Ka Ô xO 6.78
m m m m m m

2 2 2
£P Ô aß ã £aß + £a Ô xß 6.79
m m m

` 2 ` 2
6.80
£P Ô aß ã 1 + £a Ô xß
m m

Note that the measure of a multivector bound through the origin (x ã 0) is just the measure of the
multivector alone.

£¯! Ô aß ã £aß 6.81


m m

`
£¯! Ô aß ã 1 6.82
m

If x¶ is the component of x orthogonal to a, then we can once again apply the triangle formula
m
above to give

Ka Ô x¶ O û Ka Ô x¶ O ã Ka û aO Hx¶ û x¶ L - Ka û x¶ O û Ka û x¶ O
m m m m m m

But a Ô x¶ ã a Ô x and a û x¶ ã 0 so that this simplifies to


m m m


Ka Ô xO û Ka Ô xO ã Ka û aO Hx¶ û x¶ L
m m m m

We can thus write formulas for the measure of a bound element in the alternative forms:

2 2
£P Ô aß ã £aß I1 + †x¶ §2 M 6.83
m m

` 2
£P Ô aß ã 1 + †x¶ §2 6.84
m

Thus the measure of a bound unit multivector indicates its minimum distance from the origin.

Determining the multivector of a bound multivector


The hybrid metric that we have chosen to explore has the property that the origin is orthogonal to
any multivector. Hence if we take the interior product of any bound multivector with the origin the
result will be to 'unbind' or 'free' it.

To see this, begin with a simple bound m-vector, take its interior product with the origin, and then
apply the Common Factor Theorem.

P Ô a ã P Ô a1 Ô a2 Ô …
m

KP Ô aO û ¯" ã H¯" û PL a - H¯" û a1 L HP Ô a2 Ô …L + …


m m

All terms except the first are clearly zero, and since ¯" û P reduces to 1, the first term reduces to a.
m

KP Ô aO û ¯! ã a 6.85
m m

Ï Example

Suppose we are in a bound 3-space, and we consider the bound bivector expressed as the product
of three points: P Ô Q Ô R. Such a 2-element could, for example, define a plane. We want to find the
bivector.

P = P Ô Q Ô R;

¯"3 ; P = ¯" + e1 ; Q = ¯" + e2 ; R = ¯" + e3 ;


Expanding and simplifying gives four terms.

B1 = ¯%@P û ¯"D
¯# Ô e1 Ô e2 û ¯# - ¯# Ô e1 Ô e3 û ¯# + ¯# Ô e2 Ô e3 û ¯# + e1 Ô e2 Ô e3 û ¯#

We can convert these to scalar products using ToScalarProducts (which is based on the
Interior Common Factor Theorem).

B2 = ToScalarProducts@B1 D
¯# Ô HH¯# û e2 - ¯# û e3 L e1 +
H-H¯# û e1 L + ¯# û e3 L e2 + H¯# û e1 - ¯# û e2 L e3 L +
H¯# û ¯# + ¯# û e3 L e1 Ô e2 + H-H¯# û ¯#L - ¯# û e2 L e1 Ô e3 +
H¯# û ¯# + ¯# û e1 L e2 Ô e3

Now, when we apply ToMetricElements, all the scalar products of basis vectors with the
origin are put to zero, and we get the required bivector.

B3 = ToMetricElements@B2 D
e1 Ô e2 - e1 Ô e3 + e2 Ô e3

This bivector is simple, and if we wish, we can factorize it.

B4 = ExteriorFactorize@B3 D
He1 - e3 L Ô He2 - e3 L

In terms of the original points we can rewrite this bivector just as we would if we were deriving it
geometrically from the plane of P Ô Q Ô R.

B5 ã HP - RL Ô HQ - RL

Expanding this bivector gives an important form related to the original simple exterior product
P Ô Q Ô R.

B6 ã P Ô Q - P Ô R + Q Ô R

This form was discussed in Chapter 4: m-Planes: The m-vector of a bound m-vector.
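The whole example above can be reproduced numerically in pure Python by working with components over the basis (origin, e1, e2, e3): the trivector P Ô Q Ô R is the dictionary of its 3!3 coordinate minors, and contracting with the origin under the hybrid metric just keeps the components containing the origin index and strips it (a sign convention is assumed here).

```python
from itertools import combinations

def det(m):
    # Laplace expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def wedge(vectors):
    # components of v1 ^ ... ^ vk over increasing index k-tuples
    n, k = len(vectors[0]), len(vectors)
    return {idx: det([[v[i] for i in idx] for v in vectors])
            for idx in combinations(range(n), k)}

# coordinates over the basis (origin, e1, e2, e3); index 0 is the origin
P = [1, 1, 0, 0]   # origin + e1
Q = [1, 0, 1, 0]   # origin + e2
R = [1, 0, 0, 1]   # origin + e3

T = wedge([P, Q, R])   # the bound bivector P ^ Q ^ R

# contracting with the origin keeps components containing index 0
B = {(j, k): T[(0, j, k)] for (j, k) in combinations(range(1, 4), 2)}
print(B)   # {(1, 2): 1, (1, 3): -1, (2, 3): 1} = e1^e2 - e1^e3 + e2^e3

# which agrees with the factorization (e1 - e3) ^ (e2 - e3)
B2 = wedge([[0, 1, 0, -1], [0, 0, 1, -1]])
print(all(B[k] == B2[k] for k in B))   # True
```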

6.7 The Induced Metric Tensor

Calculating induced metric tensors


In the last chapter we saw how we could use the GrassmannAlgebra function MetricL[m] to
calculate the matrix of metric tensor elements induced on L by the metric tensor declared on L .
m 1
For example, you can declare a general metric in a 3-space:


¯!3 ; DeclareMetric@-D
88$1,1 , $1,2 , $1,3 <, 8$1,2 , $2,2 , $2,3 <, 8$1,3 , $2,3 , $3,3 <<

and then ask for the metric induced on L.


2

MetricL@2D

99-$21,2 + $1,1 $2,2 , -$1,2 $1,3 + $1,1 $2,3 , -$1,3 $2,2 + $1,2 $2,3 =,
9-$1,2 $1,3 + $1,1 $2,3 , -$21,3 + $1,1 $3,3 , -$1,3 $2,3 + $1,2 $3,3 =,
9-$1,3 $2,2 + $1,2 $2,3 , -$1,3 $2,3 + $1,2 $3,3 , -$22,3 + $2,2 $3,3 ==

You can also get a palette of induced metrics by entering MetricPalette.

MetricPalette

Metric Palette
L H1L
0

#1,1 #1,2 #1,3


L #1,2 #2,2 #2,3
1
#1,3 #2,3 #3,3

-#21,2 + #1,1 #2,2 -#1,2 #1,3 + #1,1 #2,3 -#1,3 #2,2 + #1,2 #2
L -#1,2 #1,3 + #1,1 #2,3 -#21,3 + #1,1 #3,3 -#1,3 #2,3 + #1,2 #3
2
-#1,3 #2,2 + #1,2 #2,3 -#1,3 #2,3 + #1,2 #3,3 -#22,3 + #2,2 #3,3

L I -#21,3 #2,2 + 2 #1,2 #1,3 #2,3 - #1,1 #22,3 - #21,2 #3,3 + #1,1 #2,2 #3,
3

Using scalar products to construct induced metric tensors


An alternative way of viewing and calculating the metric tensor on L is to construct the matrix of
m
inner products of the basis elements of L with themselves, reduce these inner products to scalar
m
products, and then substitute for each scalar product the corresponding metric tensor element from
L. In what follows we will show an example of how to reproduce the result in the previous section.
1

First, we calculate the basis of L.


2

¯!3 ; B = BasisL@2D
8e1 Ô e2 , e1 Ô e3 , e2 Ô e3 <

Second, we use the Mathematica function Outer to construct the matrix of all combinations of
interior products of these basis elements.


M1 = Outer@InteriorProduct, B, BD; MatrixForm@M1 D


e1 Ô e2 û e1 Ô e2 e1 Ô e2 û e1 Ô e3 e1 Ô e2 û e2 Ô e3
e1 Ô e3 û e1 Ô e2 e1 Ô e3 û e1 Ô e3 e1 Ô e3 û e2 Ô e3
e2 Ô e3 û e1 Ô e2 e2 Ô e3 û e1 Ô e3 e2 Ô e3 û e2 Ô e3

Third, we use the GrassmannAlgebra function ComposeScalarProductMatrix to convert


each of these inner products into matrices of scalar products.

M2 = ComposeScalarProductMatrix@M1 D; MatrixForm@M2 D
e1 û e1 e1 û e2 e1 û e1 e1 û e3 e1 û e2 e1 û e3
K O K O K O
e2 û e1 e2 û e2 e2 û e1 e2 û e3 e2 û e2 e2 û e3
e1 û e1 e1 û e2 e1 û e1 e1 û e3 e1 û e2 e1 û e3
K O K O K O
e3 û e1 e3 û e2 e3 û e1 e3 û e3 e3 û e2 e3 û e3
e2 û e1 e2 û e2 e2 û e1 e2 û e3 e2 û e2 e2 û e3
K O K O K O
e3 û e1 e3 û e2 e3 û e1 e3 û e3 e3 û e2 e3 û e3

Fourth, we substitute the metric element -i,j for ei û ej in the matrix.

DeclareMetric@-D; M3 = ToMetricElements@M2 D; MatrixForm@M3 D


$1,1 $1,2 $1,1 $1,3 $1,2 $1,3
$1,2 $2,2 $1,2 $2,3 $2,2 $2,3
$1,1 $1,2 $1,1 $1,3 $1,2 $1,3
$1,3 $2,3 $1,3 $3,3 $2,3 $3,3
$1,2 $2,2 $1,2 $2,3 $2,2 $2,3
$1,3 $2,3 $1,3 $3,3 $2,3 $3,3

Fifth, we compute the determinants of each of the elemental matrices.

M4 = Map@Det, M3 , 82<D; MatrixForm@M4 D

-$21,2 + $1,1 $2,2 -$1,2 $1,3 + $1,1 $2,3 -$1,3 $2,2 + $1,2 $2,3
-$1,2 $1,3 + $1,1 $2,3 -$21,3 + $1,1 $3,3 -$1,3 $2,3 + $1,2 $3,3
-$1,3 $2,2 + $1,2 $2,3 -$1,3 $2,3 + $1,2 $3,3 -$22,3 + $2,2 $3,3

We can have Mathematica check that this is the same matrix as obtained from the MetricL
function.

M4 ã MetricL@2D
True


Displaying induced metric tensors as a matrix of matrices


GrassmannAlgebra has an inbuilt function MetricMatrix for displaying an induced metric in
matrix form. For example, we can display the induced metric on L in a 4-space.
3

¯!4 ; DeclareMetric@-D
88$1,1 , $1,2 , $1,3 , $1,4 <, 8$1,2 , $2,2 , $2,3 , $2,4 <,
8$1,3 , $2,3 , $3,3 , $3,4 <, 8$1,4 , $2,4 , $3,4 , $4,4 <<

MetricMatrix@3D êê MatrixForm
!1,1 !1,2 !1,3 !1,1 !1,2 !1,4 !1,1 !1,3 !1,4 !1,2 !1,3 !1,4
!1,2 !2,2 !2,3 !1,2 !2,2 !2,4 !1,2 !2,3 !2,4 !2,2 !2,3 !2,4
!1,3 !2,3 !3,3 !1,3 !2,3 !3,4 !1,3 !3,3 !3,4 !2,3 !3,3 !3,4
!1,1 !1,2 !1,3 !1,1 !1,2 !1,4 !1,1 !1,3 !1,4 !1,2 !1,3 !1,4
!1,2 !2,2 !2,3 !1,2 !2,2 !2,4 !1,2 !2,3 !2,4 !2,2 !2,3 !2,4
!1,4 !2,4 !3,4 !1,4 !2,4 !4,4 !1,4 !3,4 !4,4 !2,4 !3,4 !4,4
!1,1 !1,2 !1,3 !1,1 !1,2 !1,4 !1,1 !1,3 !1,4 !1,2 !1,3 !1,4
!1,3 !2,3 !3,3 !1,3 !2,3 !3,4 !1,3 !3,3 !3,4 !2,3 !3,3 !3,4
!1,4 !2,4 !3,4 !1,4 !2,4 !4,4 !1,4 !3,4 !4,4 !2,4 !3,4 !4,4
!1,2 !2,2 !2,3 !1,2 !2,2 !2,4 !1,2 !2,3 !2,4 !2,2 !2,3 !2,4
!1,3 !2,3 !3,3 !1,3 !2,3 !3,4 !1,3 !3,3 !3,4 !2,3 !3,3 !3,4
!1,4 !2,4 !3,4 !1,4 !2,4 !4,4 !1,4 !3,4 !4,4 !2,4 !3,4 !4,4

Of course, we are only displaying the metric tensor in this form. Its elements are correctly the
determinants of the matrices displayed, not the matrices themselves.

6.8 Product Formulas for Interior Products

The basic interior Product Formula


In discussing product formulas for the interior product we refer to the section on Product Formulas
for Regressive Products in Chapter 3. In transforming the results of that section to results involving
interior products, we may often simply transform expressions or their duals involving regressive
products with (n-m)-elements Ñ Ó x into expressions involving interior products with m-elements
n-m
Ñ û x.
m

ÑÓ x ï ÑÓx ï Ñûx
n-m m m

The Interior Common Factor Theorem plays the same role in this development that the
Common Factor Theorem did in Chapter 3.

Ï The basic interior Product Formula


The basic interior Product Formula is a transformation of the first regressive Product Formula in
Chapter 3. Beginning with

a Ô b Ó x ã Ka Ó x O Ô b + H-1Lm a Ô b Ó x
m k n-1 m n-1 k m k n-1

replace Ñ Ó x with Ñ û x to get


n-1

a Ô b û x ã Ka û xO Ô b + H-1Lm a Ô b û x 6.86
m k m k m k

Ï Derivation using the Interior Common Factor Theorem

The basic interior Product Formula may also be shown using the Interior Common Factor
Theorem. To do this, suppose initially that a and b are simple. Then they can be expressed as
m k
a ã a1 Ô ! Ô am and b ã b1 Ô ! Ô bk . Thus:
m k

a Ô b û x ã HHa1 Ô ! Ô am L Ô Hb1 Ô ! Ô bk LL û x
m k
m
ã ‚ H-1Li-1 Hx û ai L Ha1 Ô ! Ô ·i Ô ! Ô am L Ô Hb1 Ô ! Ô bk L
i=1

k
+ ‚ H-1Lm+j-1 Hx û bj L Ha1 Ô ! Ô am L Ô Hb1 Ô ! Ô ·j Ô ! Ô bk L
j=1

ã Ka û xO Ô Hb1 Ô ! Ô bk L + H-1Lm Ha1 Ô ! Ô am L Ô b û x


m k
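Because the formula holds on any inner product space, it can be spot-checked numerically. The following sketch is in Python rather than the book's Mathematica/GrassmannAlgebra setting; the helper names (wedge, contract, vec, and so on) are my own, and an orthonormal basis is assumed so that the interior product with a 1-element reduces to a signed contraction of coordinates.

```python
def sort_sign(seq):
    # Sign of the permutation that sorts seq, plus the sorted tuple.
    s, idx = 1, list(seq)
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                s = -s
    return s, tuple(idx)

def wedge(A, B):
    # Exterior product; multivectors are dicts {sorted basis-index tuple: coeff}.
    out = {}
    for ta, ca in A.items():
        for tb, cb in B.items():
            if set(ta) & set(tb):
                continue  # a repeated 1-element factor gives zero
            s, key = sort_sign(ta + tb)
            out[key] = out.get(key, 0.0) + s * ca * cb
    return out

def contract(A, x):
    # Interior product A . x with a 1-element x (a coordinate list), on an
    # orthonormal basis: delete one factor at a time with alternating sign.
    out = {}
    for t, c in A.items():
        for k, i in enumerate(t):
            key = t[:k] + t[k + 1:]
            out[key] = out.get(key, 0.0) + (-1) ** k * c * x[i]
    return out

def vec(v):
    return {(i,): float(c) for i, c in enumerate(v)}

def add(A, B):
    out = dict(A)
    for k, c in B.items():
        out[k] = out.get(k, 0.0) + c
    return out

def scale(s, A):
    return {k: s * c for k, c in A.items()}

def approx_eq(A, B, tol=1e-9):
    return all(abs(A.get(k, 0.0) - B.get(k, 0.0)) < tol for k in set(A) | set(B))

# a is a simple 2-element, b a simple 2-element, x a 1-element, in a 4-space.
m = 2
a = wedge(vec([1, 2, 0, -1]), vec([0, 1, 3, 2]))
b = wedge(vec([2, 0, 1, 1]), vec([-1, 1, 0, 4]))
x = [3.0, -2.0, 1.0, 5.0]

# (a ^ b) . x  ==  (a . x) ^ b  +  (-1)^m  a ^ (b . x)
lhs = contract(wedge(a, b), x)
rhs = add(wedge(contract(a, x), b), scale((-1) ** m, wedge(a, contract(b, x))))
assert approx_eq(lhs, rhs)
```

The check exercises the formula on a nonzero 4-element, so both sides are nontrivial 3-elements.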

Deriving interior Product Formulas

Ï Deriving interior Product Formulas manually

If x is of a grade higher than 1, then similar relations hold, but with extra terms on the right-hand
side. For example, if we replace x by x1 Ô x2 and note that:

a Ô b û Hx1 Ô x2 L ã a Ô b û x1 û x2
m k m k

then the right-hand side may be expanded by applying the basic Product Formula twice. The first
application is


a Ô b û Hx1 Ô x2 L ã Ka û x1 O Ô b + H-1Lm a Ô b û x1 û x2
m k m k m k

The second application is to each term on the right hand side.

Ka û x1 O Ô b û x2 ã KKa û x1 O û x2 O Ô b + H-1Lm-1 Ka û x1 O Ô b û x2
m k m k m k

H-1Lm a Ô b û x1 û x2 ã H-1Lm Ka û x2 O Ô b û x1 + a Ô b û x1 û x2
m k m k m k

Adding these equations and remembering that Ñ û Hx1 Ô x2 L ã HÑ û x1 L û x2 gives

a Ô b û Hx1 Ô x2 L ã Ka û Hx1 Ô x2 LO Ô b + a Ô b û Hx1 Ô x2 L


m k m k m k
3.64
-H-1Lm Ka û x1 O Ô b û x2 - Ka û x2 O Ô b û x1
m k m k

As with the product formula for regressive products, successive application doubles the number of
terms. We started with two terms on the right hand side of the basic interior Product Formula. By
applying it again we obtain a Product Formula with four terms on the right hand side. The next
application would give us eight terms.

Thus the Product Formula for a Ô b û Hx1 Ô x2 Ô … Ô xj L would lead to 2^j terms.


m k
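The four-term expansion can also be spot-checked numerically. As before, this is a Python sketch with my own helper names (wedge, contract, vec), assuming an orthonormal basis; it verifies the expansion of aÔb û (x1Ôx2) for m = 2, k = 2 in a 5-space, using the fact that a û (x1Ôx2) = (a û x1) û x2.

```python
def sort_sign(seq):
    # Sign of the permutation that sorts seq, plus the sorted tuple.
    s, idx = 1, list(seq)
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                s = -s
    return s, tuple(idx)

def wedge(A, B):
    # Exterior product; multivectors are dicts {sorted basis-index tuple: coeff}.
    out = {}
    for ta, ca in A.items():
        for tb, cb in B.items():
            if set(ta) & set(tb):
                continue
            s, key = sort_sign(ta + tb)
            out[key] = out.get(key, 0.0) + s * ca * cb
    return out

def contract(A, x):
    # Interior product A . x with a 1-element x, orthonormal basis assumed.
    out = {}
    for t, c in A.items():
        for k, i in enumerate(t):
            key = t[:k] + t[k + 1:]
            out[key] = out.get(key, 0.0) + (-1) ** k * c * x[i]
    return out

def vec(v):
    return {(i,): float(c) for i, c in enumerate(v)}

def add(A, B):
    out = dict(A)
    for k, c in B.items():
        out[k] = out.get(k, 0.0) + c
    return out

def scale(s, A):
    return {k: s * c for k, c in A.items()}

def approx_eq(A, B, tol=1e-9):
    return all(abs(A.get(k, 0.0) - B.get(k, 0.0)) < tol for k in set(A) | set(B))

m = 2
a = wedge(vec([1, 0, 2, 0, 1]), vec([0, 1, -1, 3, 0]))
b = wedge(vec([2, 1, 0, 1, -1]), vec([1, 0, 0, 2, 1]))
x1 = [1.0, 2.0, 0.0, -1.0, 3.0]
x2 = [0.0, 1.0, 4.0, 2.0, -2.0]

lhs = contract(contract(wedge(a, b), x1), x2)     # (a ^ b) . (x1 ^ x2)
a12 = contract(contract(a, x1), x2)               # a . (x1 ^ x2)
b12 = contract(contract(b, x1), x2)               # b . (x1 ^ x2)
rhs = add(add(scale((-1) ** (1 + m), wedge(contract(a, x1), contract(b, x2))),
              scale((-1) ** m, wedge(contract(a, x2), contract(b, x1)))),
          add(wedge(a12, b), wedge(a, b12)))
assert approx_eq(lhs, rhs)
```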

Ï Deriving interior Product Formulas from the dual regressive Product Formulas

A second derivation may be obtained directly by first computing the Dual of a regressive Product
Formula, and then directly using the substitution

ÑÓ x ï ÑÓx ï Ñûx
n-m m m

For example, here is a regressive Product Formula.

F1 = DeriveProductFormulaB a Ó b Ô xF
m k 2

a Ó b Ô x1 Ô x2 ã
m k

a Ó b Ô x1 Ô x2 + H-1L-m+¯n - a Ô x1 Ó b Ô x2 + a Ô x2 Ó b Ô x1 + a Ô x1 Ô x2 Ó b
m k m k m k m k


F2 = Dual@F1 D

a Ô b Ó x1 Ó x2 ã a Ô b Ó x1 Ó x2 +
m k -1+¯n -1+¯n m k -1+¯n -1+¯n

H-1Lm - a Ó x1 Ô b Ó x2 + a Ó x2 Ô b Ó x1 +
m -1+¯n k -1+¯n m -1+¯n k -1+¯n

a Ó x1 Ó x2 Ôb
m -1+¯n -1+¯n k

F3 = F2 êê. A_ Ó Underscript@x_, ¯n - 1D ß A û x

Ï Deriving interior Product Formulas sequentially


We can automate this process by encoding the two rules used in the derivation above and having
Mathematica apply them.

The first rule is the basic Interior Product Formula.

a Ô b û x ã Ka û xO Ô b + H-1Lm a Ô b û x
m k m k m k

which we encode as R1 .

R1 := Is_. HA_ Ô B_L û x_ ß s HA û xL Ô B + s H-1L¯G@AD A Ô HB û xLM

The second rule is the repeated interior product formula

HZ û xL û y ã Z û Hx Ô yL

which we encode as R2 .

R2 := HHZ_ û x_L û y_ ß Z û Hx Ô yLL

We will also need to declare the xi as extra vector symbols.

¯¯V@x_ D;

To apply these rules, start with an exterior product. Call it F0 .

F0 = a Ô b ;
m k

Take the interior product of F0 with x1 , and apply both rules as many times as they are applicable
to get F1 .

F1 = F0 û x1 êê. 8R1 , R2 <

Ka û x1 O Ô b + H-1Lm a Ô b û x1
m k m k

Repeat by taking the interior product of F1 with x2 , expanding, and applying the rules to get F2 .


F2 = ¯#@F1 û x2 D êê. 8R1 , R2 <

H-1L-1+m Ka û x1 O Ô b û x2 + H-1Lm Ka û x2 O Ô b û x1 +
m k m k

Ka û x1 Ô x2 O Ô b + H-1L2 m a Ô b û x1 Ô x2
m k m k

We can simplify the signs by using GrassmannAlgebra's SimplifyPowerSigns.

F2 = ¯#@F1 û x2 D êê. 8R1 , R2 < êê SimplifyPowerSigns

H-1L1+m Ka û x1 O Ô b û x2 +
m k

H-1Lm Ka û x2 O Ô b û x1 + Ka û x1 Ô x2 O Ô b + a Ô b û x1 Ô x2
m k m k m k

To get succeeding formulas, we simply repeat this process.

F3 = ¯#@F2 û x3 D êê. 8R1 , R2 < êê SimplifyPowerSigns

Ka û x1 O Ô b û x2 Ô x3 - Ka û x2 O Ô b û x1 Ô x3 +
m k m k

Ka û x3 O Ô b û x1 Ô x2 + H-1Lm Ka û x1 Ô x2 O Ô b û x3 +
m k m k

H-1L1+m Ka û x1 Ô x3 O Ô b û x2 + H-1Lm Ka û x2 Ô x3 O Ô b û x1 +
m k m k

Ka û x1 Ô x2 Ô x3 O Ô b + H-1Lm a Ô b û x1 Ô x2 Ô x3
m k m k

Collecting the terms with the same signs gives

a Ô b û Hx1 Ô x2 Ô x3 L ã
m k

Ka û x1 O Ô b û x2 Ô x3 -
m k

Ka û x2 O Ô b û x1 Ô x3 + Ka û x3 O Ô b û x1 Ô x2 +
m k m k
3.65
m
H-1L Ka û x1 Ô x2 O Ô b û x3 - Ka û x1 Ô x3 O Ô b û x2 +
m k m k

-Ka û x2 Ô x3 O Ô b û x1 + a Ô b û x1 Ô x2 Ô x3 +
m k m k

Ka û x1 Ô x2 Ô x3 O Ô b
m k

Deriving interior Product Formulas automatically



Just as we did in Chapter 3, it is straightforward to encapsulate this sequential derivation in a
function for automatic application.

Ï 1. Define the rules to be applied at each step

DeriveInteriorProductFormulaOnce@F_, x_D :=
GrassmannExpand@F û xD êê.
9s_. A_ Ô B_ û z_ ß s HA û zL Ô B + s H-1LRawGrade@AD A Ô HB û zL,
Z_ û z_ û y_ ß Z û z Ô y=;

Ï 2. Apply the rules as many times as required

DeriveInteriorProductFormula@HA_ Ô B_L û X_D :=


HA Ô BL û ComposeSimpleForm@X, 1D ã
SimplifyPowerSigns@Fold@DeriveInteriorProductFormulaOnce,
A Ô B, 8ComposeSimpleForm@X, 1D< ê. Wedge Ø SequenceDD;

Ï Check the results

H1 = DeriveInteriorProductFormulaB a Ô b û x1 F
m k

a Ô b û x1 ã Ka û x1 O Ô b + H-1Lm a Ô b û x1
m k m k m k

H2 = DeriveInteriorProductFormulaB a Ô b û xF
m k 2

a Ô b û x1 Ô x2 ã H-1L1+m Ka û x1 O Ô b û x2 +
m k m k

H-1Lm Ka û x2 O Ô b û x1 + Ka û x1 Ô x2 O Ô b + a Ô b û x1 Ô x2
m k m k m k


H3 = DeriveInteriorProductFormulaB a Ô b û xF
m k 3

a Ô b û x1 Ô x2 Ô x3 ã Ka û x1 O Ô b û x2 Ô x3 - Ka û x2 O Ô b û x1 Ô x3 +
m k m k m k

Ka û x3 O Ô b û x1 Ô x2 + H-1Lm Ka û x1 Ô x2 O Ô b û x3 +
m k m k

H-1L1+m Ka û x1 Ô x3 O Ô b û x2 + H-1Lm Ka û x2 Ô x3 O Ô b û x1 +
m k m k

Ka û x1 Ô x2 Ô x3 O Ô b + H-1Lm a Ô b û x1 Ô x2 Ô x3
m k m k

These give the same results as the manually derived formulas

: a Ô b û x1 ã F1 === H1 ,
m k

a Ô b û x1 Ô x2 ã F2 === H2 ,
m k

a Ô b û x1 Ô x2 Ô x3 ã F3 === H3 >
m k

8True, True, True<

Computable forms of interior Product Formulas

Ï The computable form of the interior Product Formula

As in Chapter 3, we can take the interior Product Formula derived in the previous sections and
rewrite it in its computational form using the notions of complete span and complete cospan. The
computational form relies on the fact that in GrassmannAlgebra the exterior and interior products
have an inbuilt Listable attribute.

aÔb ûx ã
m k j
3.66
¯r@jD m ¯
¯SBFlattenBH-1L a û ¯" BxF Ô b û ¯"¯ BxF FF
m j k j

In this formula, ¯#¯ BxF and ¯#¯ BxF are the complete span and complete cospan of X. The sign
j j j
function ¯r creates a list of elements alternating between 1 and -1.

For example, for j equal to 2, and simplifying the right hand side, we have


a Ô b û Hx1 Ô x2 L ã
m k

¯$B¯SBFlattenBH-1L¯r@2D m Ka û ¯#¯ @x1 Ô x2 DO Ô b û ¯#¯ @x1 Ô x2 D FFF


m k

a Ô b û x1 Ô x2 ã -H-1Lm Ka û x1 O Ô b û x2 +
m k m k

H-1Lm Ka û x2 O Ô b û x1 + Ka û x1 Ô x2 O Ô b + a Ô b û x1 Ô x2
m k m k m k

Ï Encapsulating the computable formula

We can easily encapsulate this computable formula in a function


ComputeInteriorProductFormula for easy comparison with the results from our automatic
derivation above using DeriveInteriorProductFormula.

ComputeInteriorProductFormula@HA_ Ô B_L û X_D :=


HA Ô BL û ComposeSimpleForm@X, 1D ã ¯8A¯SAFlattenA
H-1L¯r@RawGrade@XDD RawGrade@AD IA û ¯"¯ @XDM Ô HB û ¯"¯ @XDLEEE;

Ï Comparing results

We can compare the results obtained by our two formulations.

¯!5 ; TableBComputeInteriorProductFormulaB a Ô b û xF ===


m k j

DeriveInteriorProductFormulaB a Ô b û xF ê.
m k j

IH-1Lm+1 Ø -H-1Lm M , 8j, 1, 5<F

8True, True, True, True, True<

The invariance of interior Product Formulas


In Chapter 2 we saw that the m:k-forms based on a simple factorized m-element X were, like the m-
element, independent of the factorization.

¯S@HG1 # ¯#k @XDL ü IG2 " ¯#k @XDME


Just as in Chapter 3, we can see that the interior Product Formula is composed of a number of m:k-
forms, and, as expected, shows the same invariance. For example, suppose we generate the formula
F1 for a 3-element X equal to xÔyÔz:


F1 = ComputeInteriorProductFormulaB a Ô b û Hx Ô y Ô zLF
m k

aÔbûxÔyÔz ã
m k

Ka û xO Ô b û y Ô z - Ka û yO Ô b û x Ô z + Ka û zO Ô b û x Ô y +
m k m k m k

H-1Lm Ka û x Ô yO Ô b û z - H-1Lm Ka û x Ô zO Ô b û y +
m k m k

H-1Lm Ka û y Ô zO Ô b û x + Ka û x Ô y Ô zO Ô b + H-1Lm a Ô b û x Ô y Ô z
m k m k m k

We can express the 3-element as a product of different 1-element factors by adding to any given
factor, scalar multiples of the other factors. For example

F2 = ComputeInteriorProductFormulaB a Ô b û Hx Ô y Ô Hz + a x + b yLLF
m k

a Ô b û x Ô y Ô Ha x + b y + zL ã Ka û xO Ô b û y Ô Ha x + b y + zL -
m k m k

Ka û yO Ô b û x Ô Ha x + b y + zL + Ka û Ha x + b y + zLO Ô b û x Ô y +
m k m k

H-1Lm Ka û x Ô yO Ô b û Ha x + b y + zL -
m k

H-1Lm Ka û x Ô Ha x + b y + zLO Ô b û y +
m k

H-1Lm Ka û y Ô Ha x + b y + zLO Ô b û x +
m k

Ka û x Ô y Ô Ha x + b y + zLO Ô b + H-1Lm a Ô b û x Ô y Ô Ha x + b y + zL
m k m k

Applying GrassmannExpandAndSimplify to these expressions shows that they are equal.

¯%@F1 ã F2 D
True

An alternative form for the interior Product Formula


Again, as in Chapter 3, we can rearrange the interior Product Formula by interchanging the span
and cospan so that the span elements become associated with a and the cospan elements become
m
associated with b (and changing some signs).
k

Thus we can write the interior Product Formula in the alternative form

3.67


a Ô b û x ã ¯SBFlattenB
m k j

H-1L¯r@jD m H-1L¯rr@jD ¯RB a û ¯"¯ BxF Ô b û ¯"¯ BxF FFF


m j k j

Ï Encapsulating the alternative computable formula

Just as we encapsulated the first computable formula, we can similarly encapsulate this alternative
one. We call it ComputeInteriorProductFormulaB.

ComputeInteriorProductFormulaB@HA_ Ô B_L û X_D :=


HA Ô BL û ComposeSimpleForm@X, 1D ã
¯SBFlattenBH-1L¯r@RawGrade@XDD RawGrade@AD

H-1L¯rr@RawGrade@XDD ¯RB a û ¯"¯ BxF Ô b û ¯"¯ BxF FFF;


m j k j

Ï Comparing results

Comparing the results of ComputeInteriorProductFormula with


ComputeInteriorProductFormulaB verifies their equivalence for values of j from 1 to 5.

¯!10 ; TableB¯%BComputeInteriorProductFormulaB a Ô b û xFF ===


m k j

¯%BComputeInteriorProductFormulaBB a Ô b û xFF, 8j, 1, 5<F


m k j

8True, True, True, True, True<

The interior decomposition formula


The basic interior Product Formula is

a Ô b û x ã Ka û xO Ô b + H-1Lm a Ô b û x
m k m k m k

By putting b equal to x we get


k

Ka Ô xO û x ã Ka û xO Ô x + H-1Lm a Ô Hx û xL
m m m

Rearranging terms expresses a 'decomposition' of the element a in terms of the 1-element x:


m


` ` ` `
a ã H-1Lm KKa Ô xO û x - Ka û xO Ô xO 6.87
m m m

`
Here, x is a unit element.

` x

xûx

` ` ` `
Note that Ka Ô xO û x and Ka û xO Ô x are orthogonal.
m m
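This decomposition, too, can be spot-checked numerically. The sketch below (Python, my own helper names, orthonormal basis assumed; not the book's GrassmannAlgebra package) reconstructs a simple 2-element a from the two interior product terms of formula 6.87 and confirms that those terms are orthogonal.

```python
import math

def sort_sign(seq):
    s, idx = 1, list(seq)
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                s = -s
    return s, tuple(idx)

def wedge(A, B):
    out = {}
    for ta, ca in A.items():
        for tb, cb in B.items():
            if set(ta) & set(tb):
                continue
            s, key = sort_sign(ta + tb)
            out[key] = out.get(key, 0.0) + s * ca * cb
    return out

def contract(A, x):
    # Interior product A . x with a 1-element x, orthonormal basis assumed.
    out = {}
    for t, c in A.items():
        for k, i in enumerate(t):
            key = t[:k] + t[k + 1:]
            out[key] = out.get(key, 0.0) + (-1) ** k * c * x[i]
    return out

def interior(A, B, n):
    # A . B for grade(A) >= grade(B): contract A by B's factors in order.
    out = {}
    for tb, cb in B.items():
        R = {t: c * cb for t, c in A.items()}
        for i in tb:
            e = [0.0] * n
            e[i] = 1.0
            R = contract(R, e)
        for t, c in R.items():
            out[t] = out.get(t, 0.0) + c
    return out

def vec(v):
    return {(i,): float(c) for i, c in enumerate(v)}

def add(A, B):
    out = dict(A)
    for k, c in B.items():
        out[k] = out.get(k, 0.0) + c
    return out

def scale(s, A):
    return {k: s * c for k, c in A.items()}

def approx_eq(A, B, tol=1e-9):
    return all(abs(A.get(k, 0.0) - B.get(k, 0.0)) < tol for k in set(A) | set(B))

n, m = 4, 2
a = wedge(vec([1, 2, 0, -1]), vec([0, 1, 3, 2]))
x = [3.0, -2.0, 1.0, 5.0]
norm = math.sqrt(sum(c * c for c in x))
xh = [c / norm for c in x]                     # the unit 1-element

t1 = contract(wedge(a, vec(xh)), xh)           # (a ^ x) . x
t2 = wedge(contract(a, xh), vec(xh))           # (a . x) ^ x
recon = scale((-1) ** m, add(t1, scale(-1.0, t2)))
assert approx_eq(recon, a)                     # formula 6.87

inner = interior(t1, t2, n).get((), 0.0)       # the two parts are orthogonal
assert abs(inner) < 1e-9
```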

Interior Product Formulas for 1-elements

Ï Interior Product Formula 1

For completeness we repeat the basic interior Product Formula introduced at the beginning of the
section.

a Ô b û x ã Ka û xO Ô b + H-1Lm a Ô b û x 6.88
m k m k m k

Ï Interior Product Formula 2

Now take the dual of the first regressive Product Formula

a Ó b Ô x ã Ka Ô xO Ó b + H-1L¯n-m a Ó b Ô x
m k m k m k

Here, x is a 1-element. Put b equal to b


k k

a Ó b Ô x ã Ka Ô xO Ó b + H-1L¯n-m a Ó b Ô x
m k m k m k

and note that:

a Ó b Ô x ã H-1LH¯n-k+1L Hk-1L a Ó Hb Ô xL ã H-1L¯n-1 a Ó Hb Ó xL


m k m k m k

a û b Ô x ã Ka Ô xO û b + H-1Lm-1 a û b û x 6.89
m k m k m k

This product formula is the basis for the derivation of a set of formulas of geometric interest later
in the chapter: the Triangle Formulas.

Ï Interior Product Formula 3


An interior product of elements can always be expressed as the interior product of their
complements in reverse order.

a û b ã H-1LHm-kL H¯n-mL b û a
m k k m

When applied to interior Product Formula 2, the terms on the right-hand side interchange forms.

Ka Ô xO û b ã H-1LHm-kL H¯n-mL+¯n+k+1 b û Ka û xO
m k k m

a û b û x ã H-1LHm-kL H¯n-mL+m+1 b Ô x û a
m k k m

Hence interior Product Formula 2 may be expressed in a variety of ways, for example:

a û b Ô x ã Ka Ô xO û b + H-1LHm-kL H¯n-mL b Ô x û a 6.90


m k m k k m

Ï Interior Product Formula 4

Taking the interior product of interior Product Formula 2 with a second 1-element x2 gives:

a û b Ô x1 û x2 ã
m k
6.91
m+k
Ka Ô x1 O û b Ô x2 + H-1L K a û x2 O û b û x1
m k m k

Interior Product Formulas in terms of double sums


In this section we summarize the three major interior Product Formulas expressed in terms of
double sums. These are equivalent to the computable forms already discussed.

Ï Interior Product Formula 5


Consider the double sum regressive General Product Formula from Chapter 3.
p n
a Ó b Ô x ã ‚ H-1Lr H¯n-mL ‚ a Ô xi Ó b Ô xi
m k p m p-r k r
r=0 i=1

p
x ã x1 Ô x1 ã x2 Ô x2 ã ! ã xn Ô xn nãK O
p r p-r r p-r r p-r r

Now replace b with b , b Ô xi with H-1Lr Hn-rL b Ó xi , and n|k with k to give:
Ï k n-k n-k r n-k r


p n
a û b Ô x ã ‚ H-1Lr Hm-rL ‚ a Ô xi û b û xi
m k p m p-r k r
r=0 i=1
6.92
p
x ã x1 Ô x1 ã x2 Ô x2 ã ! ã xn Ô xn , nãK O
p r p-r r p-r r p-r r

Ï Interior Product Formula 6


By a similar process we obtain:

p n
a Ô b û x ã ‚ H-1Lr m ‚ a û xi Ô b û xi
m k p m p-r k r
r=0 i=1
6.93
p
x ã x1 Ô x1 ã x2 Ô x2 ã ! ã xn Ô xn , nãK O
p r p-r r p-r r p-r r

Ï Interior Product Formula 7


In Formula 5 putting b equal to a expresses a simple p-element in terms of products with a unit m-
k m
element.

p n
r Hm-rL ` `
x ã ‚ H-1L ‚ a Ô xi û Ka û xi O
p m p-r m r
r=0 i=1
6.94
p
x ã x1 Ô x1 ã x2 Ô x2 ã ! ã xn Ô xn , nãK O
p r p-r r p-r r p-r r

6.9 The Zero Interior Sum Theorem

The zero interior sum


There is an interesting theorem that Grassmann proves in Section 183 of his Ausdehnungslehre of
1862. Roughly translated into the terminology and notation of this book, this theorem states:

If, from a series of m 1-elements, one forms the exterior products of any given grade, and
conjoins each of them in an interior product with its supplementary combination, then the
sum of these products is zero, that is


S1 û C1 + S2 û C2 + … ã 0

if S1 , S2 , … are the exterior products of the m 1-elements a1 , a2 , …, am of any given (say


the kth) grade, and C1 , C2 , … are the supplementary combinations.

By the supplementary combination to an exterior product of k 1-elements out of a series of m 1-


elements, he is referring to the exterior product of the remaining m-k 1-elements ordered
consistently with the original series. In our terminology the Si and Ci are respectively the k-span
and k-cospan of the m-element A ã a1 Ô a2 Ô … Ô am .

This theorem is a useful source of identities involving sums of interior products. We shall also see
in Chapter 10: Exploring the Generalized Product, that it has an important role to play in proving
the Generalized Product Theorem which shows the equivalence of two different forms for the
expansion of the generalized product in terms of interior and exterior products. Chapter 10 will
also discuss the generalization of the Zero Interior Sum Theorem leading to a Zero Generalized
Sum Theorem.

As we will see in the following sections, the Zero Interior Sum Theorem turns out to be another
result associated with the invariance of an m:k-form to a factorization of the m-element A.

Composing interior sums


An interior sum is a particular case of an m:k-form. In Chapter 2 we considered various m:k-forms.
In this case we are interested in the simplest one in which the product is the interior product.

¯S@H¯#k @ADL ü I¯#k @ADME ï ¯S@H¯#k @ADL û I¯#k @ADME


To generate an interior m:k-sum using this form, all we need to do is to specify m and k.

Fm_,k_ := ¯SBK¯#k BaFO û K¯#k BaFOF


m m

Here are the simplest cases. It will be immediately clear from these cases that m:k-sums in which k
is less than m are zero, since each interior product term is zero.

Ì Interior sums for a 2-element

F2,1

a1 û a2 + a2 û -a1

Ì Interior sums for a 3-element

F3,1

a1 û a2 Ô a3 + a2 û -Ha1 Ô a3 L + a3 û a1 Ô a2


F3,2

a1 Ô a2 û a3 + a1 Ô a3 û -a2 + a2 Ô a3 û a1

Ì Interior sums for a 4-element

F4,1

a1 û a2 Ô a3 Ô a4 + a2 û -Ha1 Ô a3 Ô a4 L + a3 û a1 Ô a2 Ô a4 + a4 û -Ha1 Ô a2 Ô a3 L

F4,2

a1 Ô a2 û a3 Ô a4 + a1 Ô a3 û -Ha2 Ô a4 L + a1 Ô a4 û a2 Ô a3 +
a2 Ô a3 û a1 Ô a4 + a2 Ô a4 û -Ha1 Ô a3 L + a3 Ô a4 û a1 Ô a2

F4,3

a1 Ô a2 Ô a3 û a4 + a1 Ô a2 Ô a4 û -a3 + a1 Ô a3 Ô a4 û a2 + a2 Ô a3 Ô a4 û -a1

Ï Verifying the theorem

To verify the theorem, we only need to convert the interior products to scalar products and simplify.

¯%@ToScalarProducts@8F2,1 , F3,2 , F4,2 , F4,3 <DD

80, 0, 0, 0<
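An independent numerical check of the theorem is possible outside Mathematica. The Python sketch below (my own helper names; orthonormal basis assumed, so the interior product of equal-grade blades is a Gram determinant) forms the k-span and signed k-cospan of a 4-element and confirms that the interior sum vanishes.

```python
from itertools import combinations
from functools import reduce

def sort_sign(seq):
    s, idx = 1, list(seq)
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                s = -s
    return s, tuple(idx)

def wedge(A, B):
    out = {}
    for ta, ca in A.items():
        for tb, cb in B.items():
            if set(ta) & set(tb):
                continue
            s, key = sort_sign(ta + tb)
            out[key] = out.get(key, 0.0) + s * ca * cb
    return out

def contract(A, x):
    out = {}
    for t, c in A.items():
        for k, i in enumerate(t):
            key = t[:k] + t[k + 1:]
            out[key] = out.get(key, 0.0) + (-1) ** k * c * x[i]
    return out

def interior(A, B, n):
    # A . B for grade(A) >= grade(B): contract A by B's factors in order.
    out = {}
    for tb, cb in B.items():
        R = {t: c * cb for t, c in A.items()}
        for i in tb:
            e = [0.0] * n
            e[i] = 1.0
            R = contract(R, e)
        for t, c in R.items():
            out[t] = out.get(t, 0.0) + c
    return out

def vec(v):
    return {(i,): float(c) for i, c in enumerate(v)}

def blade(vs):
    return reduce(wedge, (vec(v) for v in vs), {(): 1.0})

def interior_sum(vectors, k, n):
    # Sum over k-combinations S of (span of S) . (signed cospan of S),
    # the sign making span ^ signed-cospan equal to the whole product.
    m, total = len(vectors), {}
    for S in combinations(range(m), k):
        C = tuple(i for i in range(m) if i not in S)
        s, _ = sort_sign(S + C)
        term = interior(blade([vectors[i] for i in S]),
                        blade([vectors[i] for i in C]), n)
        for t, c in term.items():
            total[t] = total.get(t, 0.0) + s * c
    return total

vectors = [[1.0, 2.0, 0.0, -1.0], [0.0, 1.0, 3.0, 2.0],
           [2.0, 0.0, 1.0, 1.0], [-1.0, 1.0, 0.0, 4.0]]
for k in (2, 3):   # k must be at least m - k for the interior products to exist
    total = interior_sum(vectors, k, 4)
    assert all(abs(c) < 1e-9 for c in total.values())
```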

The Gram-Schmidt process


To prove the Zero Interior Sum Theorem, we will need the Gram-Schmidt process.

Suppose we are given any simple m-element A ã a1 Ô a2 Ô a3 Ô … Ô am . The Gram-Schmidt


process tells us that we can always refactor A into a product of orthogonal 1-element factors of the
form:

A ã a1 Ô Ha2 + a a1 L Ô Ha3 + b a1 + c a2 L Ô …

Beginning with the first two factors a1 and a2 + a a1 , we can choose the scalar a to make the
factors orthogonal.

a1 û a2
a1 û Ha2 + a a1 L ã 0 ï aã-
a1 û a1
For the first three factors, we can choose the scalars to make all three factors mutually orthogonal.
Let

f13 = a1 û Ha3 + b a1 + c a2 L;

f23 = Ha2 + a a1 L û Ha3 + b a1 + c a2 L;

We can follow through the standard Gram-Schmidt process, or we can use Mathematica to solve
for the coefficients.


S = Solve@¯%@8f13 ã 0, f23 ã 0<D, 8b, c<D


Ha1 û a3 L Ha2 û a2 L - Ha1 û a2 L Ha2 û a3 L
::b Ø - ,
-Ha1 û a2 L2 + Ha1 û a1 L Ha2 û a2 L
Ha1 û a2 L Ha1 û a3 L - Ha1 û a1 L Ha2 û a3 L
cØ- >>
Ha1 û a2 L2 - Ha1 û a1 L Ha2 û a2 L

It is of interest to note that these scalars may be expressed in terms of interior products of 2-
elements.

Ha1 Ô a2 L û Ha2 Ô a3 L Ha1 Ô a2 L û Ha1 Ô a3 L


:b Ø , cØ- >
Ha1 Ô a2 L û Ha1 Ô a2 L Ha1 Ô a2 L û Ha1 Ô a2 L

It is easy to confirm that with these definitions for a, b, and c, the first three factors of A are
mutually orthogonal. First we make lists of the factors and values for the scalars we have derived.

F = 8f1 Ø a1 , f2 Ø a2 + a a1 , f3 Ø a3 + b a1 + c a2 <;

a1 û a2 Ha1 Ô a2 L û Ha2 Ô a3 L Ha1 Ô a2 L û Ha1 Ô a3 L


H = :a Ø - , bØ , cØ - >;
a1 û a1 Ha1 Ô a2 L û Ha1 Ô a2 L Ha1 Ô a2 L û Ha1 Ô a2 L
Then we substitute them into the three orthogonality conditions, and simplify to get zero in each
case.

Together@¯%@8f1 û f2 , f1 û f3 , f2 û f3 < ê. F ê. ToScalarProducts@HDDD


80, 0, 0<

Finally we confirm that this refactorization does not change the original product.

¯%@f1 Ô f2 Ô f3 ê. F ê. ToScalarProducts@HDD
a1 Ô a2 Ô a3
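The same two facts — orthogonal factors, unchanged product — are easy to confirm numerically. The Python sketch below (my own helper names, orthonormal basis assumed) runs the classical Gram-Schmidt subtraction, which adds scalar multiples of earlier factors exactly as in the refactorization above, and checks that the exterior product is preserved.

```python
def sort_sign(seq):
    s, idx = 1, list(seq)
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                s = -s
    return s, tuple(idx)

def wedge(A, B):
    out = {}
    for ta, ca in A.items():
        for tb, cb in B.items():
            if set(ta) & set(tb):
                continue
            s, key = sort_sign(ta + tb)
            out[key] = out.get(key, 0.0) + s * ca * cb
    return out

def vec(v):
    return {(i,): float(c) for i, c in enumerate(v)}

def approx_eq(A, B, tol=1e-9):
    return all(abs(A.get(k, 0.0) - B.get(k, 0.0)) < tol for k in set(A) | set(B))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def axpy(s, u, v):
    # v + s*u, componentwise: adds a scalar multiple of an earlier factor.
    return [b + s * a for a, b in zip(u, v)]

a1 = [1.0, 2.0, 0.0, -1.0]
a2 = [0.0, 1.0, 3.0, 2.0]
a3 = [2.0, 0.0, 1.0, 1.0]

f1 = a1
f2 = axpy(-dot(a2, f1) / dot(f1, f1), f1, a2)          # a2 + alpha a1
f3 = axpy(-dot(a3, f2) / dot(f2, f2), f2,
          axpy(-dot(a3, f1) / dot(f1, f1), f1, a3))    # a3 + beta a1 + gamma a2

# The refactored elements are mutually orthogonal...
assert abs(dot(f1, f2)) < 1e-9
assert abs(dot(f1, f3)) < 1e-9
assert abs(dot(f2, f3)) < 1e-9

# ...and the exterior product is unchanged.
A = wedge(wedge(vec(a1), vec(a2)), vec(a3))
Af = wedge(wedge(vec(f1), vec(f2)), vec(f3))
assert approx_eq(A, Af)
```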

Proving the Zero Interior Sum Theorem


Proving the Interior Sum Theorem is now straightforward and relies on two results:

1. That an m:k-form is invariant to factorizations of its simple m-element.


2. That a simple m-element may be refactorized into mutually orthogonal factors.

For the first result we rely on the discussion of multilinear forms in Chapter 2. For the second
result we rely on the well-known Gram-Schmidt process discussed in the previous section.

Consider an arbitrary simple m-element A.

A ã a1 Ô a2 Ô a3 Ô … Ô am


Any m:k-form which is the sum of interior products of k-span and k-cospan elements is given by
the expression:

Fm:k ã ¯SAH¯#k @ADL û I¯#k @ADME


This expression is invariant to factorizations of A.

By the Gram-Schmidt process A may be expressed as the exterior product of mutually orthogonal
factors.

The form based on this refactorization is clearly zero since it is composed of sums of terms, each
of which involves the interior product of mutually orthogonal factors.

Hence the theorem is proved.

Ï Example

To demonstrate this proof in a simple case, take A to be a simple 3-element.

A = a1 Ô a2 Ô a3 ;

The 3:2-form based on A is

F32 = ¯S@H¯#2 @ADL û I¯#2 @ADME

a1 Ô a2 û a3 + a1 Ô a3 û -a2 + a2 Ô a3 û a1

Orthogonalize A by the Gram-Schmidt process to rewrite it as the exterior product of mutually


orthogonal factors.

A = a1 Ô Ha2 + a a1 L Ô Ha3 + b a1 + c a2 L;

The form F32 may be rewritten as

H32 = ¯S@H¯#2 @ADL û I¯#2 @ADME

a1 Ô Ha a1 + a2 L û Hb a1 + c a2 + a3 L +
a1 Ô Hb a1 + c a2 + a3 L û H-a a1 - a2 L + Ha a1 + a2 L Ô Hb a1 + c a2 + a3 L û a1

Since each one of these factors is mutually orthogonal, each term is individually zero. Hence
H32 ã 0. Since H32 is equal to the original interior sum F32 the theorem is confirmed.

¯%@H32 ã F32 D
True


6.10 The Cross Product

Defining a generalized cross product


The cross or vector product of the three-dimensional vector algebra of Gibbs et al. [Gibbs 1928]
corresponds to two operations in Grassmann's more general algebra. Taking the cross-product of
two vectors in three dimensions corresponds to taking the complement of their exterior product.
However, whilst the usual cross product formulation is valid only for vectors in three dimensions,
the exterior product formulation is valid for elements of any grade in any number of dimensions.
Therefore the opportunity exists to generalize the concept.

Because our generalization reduces to the usual definition under the usual circumstances, we take
the liberty of continuing to refer to the generalized cross product as, simply, the cross product.

The cross product of a and b is denoted aµb and is defined as the complement of their exterior
m k m k
product. The cross product of an m-element and a k-element is thus an (n|(m+k))-element.

aµb ã aÔb 6.95


m k m k

This definition preserves the basic property of the cross product: that the cross product of two
elements is an element orthogonal to both, and reduces to the usual notion for vectors in a three
dimensional metric vector space.
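A small numerical sketch makes the definition concrete. In Python (my own helper names; the complement is taken on an orthonormal basis, where it sends a basis element to its signed complementary basis element), the generalized cross product is just the complement of the exterior product; in three dimensions it reproduces the familiar component formula, and in four dimensions the cross product of two 1-elements comes out as a 2-element.

```python
def sort_sign(seq):
    s, idx = 1, list(seq)
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                s = -s
    return s, tuple(idx)

def wedge(A, B):
    out = {}
    for ta, ca in A.items():
        for tb, cb in B.items():
            if set(ta) & set(tb):
                continue
            s, key = sort_sign(ta + tb)
            out[key] = out.get(key, 0.0) + s * ca * cb
    return out

def vec(v):
    return {(i,): float(c) for i, c in enumerate(v)}

def approx_eq(A, B, tol=1e-9):
    return all(abs(A.get(k, 0.0) - B.get(k, 0.0)) < tol for k in set(A) | set(B))

def complement(A, n):
    # Euclidean complement on an orthonormal basis: e_S -> sign * e_{S^c},
    # the sign chosen so that e_S ^ complement(e_S) is the unit n-element.
    out = {}
    for t, c in A.items():
        tc = tuple(i for i in range(n) if i not in t)
        s, _ = sort_sign(t + tc)
        out[tc] = out.get(tc, 0.0) + s * c
    return out

def cross(A, B, n):
    return complement(wedge(A, B), n)

def add(A, B):
    out = dict(A)
    for k, c in B.items():
        out[k] = out.get(k, 0.0) + c
    return out

def scale(s, A):
    return {k: s * c for k, c in A.items()}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# In three dimensions this is the classical cross product.
x, y = [2.0, -1.0, 3.0], [1.0, 4.0, -2.0]
c = cross(vec(x), vec(y), 3)
classic = [x[1] * y[2] - x[2] * y[1],
           x[2] * y[0] - x[0] * y[2],
           x[0] * y[1] - x[1] * y[0]]
assert approx_eq(c, vec(classic))

# The triple cross product satisfies (x1 x x2) x x3 = (x3.x1) x2 - (x3.x2) x1.
x3 = [0.0, 1.0, 5.0]
lhs = cross(c, vec(x3), 3)
rhs = add(scale(dot(x3, x), vec(y)), scale(-dot(x3, y), vec(x)))
assert approx_eq(lhs, rhs)

# In a 4-space the cross product of two 1-elements is a 2-element.
c4 = cross(vec([1, 2, 0, 1]), vec([0, 1, 1, -1]), 4)
assert all(len(t) == 2 for t in c4)
assert any(abs(v) > 1e-9 for v in c4.values())
```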

Cross products involving 1-elements


For 1-elements xi the definition above has the following consequences, independent of the
dimension of the space.

Ï The triple cross product


The triple cross product is a 1-element in any number of dimensions.

Hx1 µ x2 L µ x3 ã Hx1 Ô x2 L Ô x3 ã Hx1 Ô x2 L û x3

Hx1 µ x2 L µ x3 ã Hx1 Ô x2 L û x3 ã Hx3 û x1 L x2 - Hx3 û x2 L x1 6.96

x1 µ Hx2 µ x3 L ã x1 Ô Hx2 Ô x3 L ã x1 Ó Hx2 Ô x3 L ã H-1LH¯n-2L Hx2 Ô x3 L û x1

x1 µ Hx2 µ x3 L ã H-1L¯n Hx2 Ô x3 L û x1 6.97

Ï The box product or triple vector product



The box product, or triple vector product, is an (n|3)-element, and therefore a scalar only in three
dimensions.

Hx1 µ x2 L û x3 ã Hx1 Ô x2 L û x3 ã x1 Ô x2 Ô x3

Hx1 µ x2 L û x3 ã x1 Ô x2 Ô x3 6.98

Because Grassmann identified n-elements with their scalar Euclidean complements (see the
historical note in the introduction to Chapter 5), he considered x1 Ô x2 Ô x3 in a 3-space to be a
scalar. His notation for the exterior product of elements was to use square brackets @x1 x2 x3 D,
thus originating the 'box' product notation for Hx1 µ x2 L û x3 used in the three-dimensional vector
algebra in the early decades of the twentieth century.

Ï The scalar product of two cross products


The scalar product of two cross products is a scalar in any number of dimensions.

Hx1 µ x2 L û Hx3 µ x4 L ã Hx1 Ô x2 L û Hx3 Ô x4 L ã Hx1 Ô x2 L û Hx3 Ô x4 L

Hx1 µ x2 L û Hx3 µ x4 L ã Hx1 Ô x2 L û Hx3 Ô x4 L 6.99

Ï The cross product of two cross products


The cross product of two cross products is a (4|n)-element, and therefore a 1-element only in three
dimensions. It corresponds to the regressive product of two exterior products.

Hx1 µ x2 L µ Hx3 µ x4 L ã Hx1 Ô x2 L Ô Hx3 Ô x4 L ã Hx1 Ô x2 L Ó Hx3 Ô x4 L

Hx1 µ x2 L µ Hx3 µ x4 L ã Hx1 Ô x2 L Ó Hx3 Ô x4 L 6.100

† Note that this product, unlike the other products above, does not rely on a metric, because the
expression involves only exterior and regressive products.

Implications of the axioms for the cross product


By expressing one or more elements as a complement, axioms for the exterior and regressive
products may be rewritten in terms of the cross product, thus yielding some of its more
fundamental properties.


Ï µ 6: The cross product of an m-element and a k-element is an (n|(m+k))-


element.

The grade of the cross product of an m-element and a k-element is n|(m+k).

a œ L, b œ L ï aµb œ L 6.101
m m k k m k ¯n-Hm+kL

Ï µ 7: The cross product is not associative.

The cross product is not associative. However, it can be expressed in terms of exterior and interior
products.

aµb µ g ã H-1LH¯n-1L Hm+kL a Ô b ûg 6.102


m k r m k r

aµ bµg ã H-1LH¯n-Hk+rLL Hm+k+rL b Ô g ûa 6.103


m k r k r m

Ï µ 8: The cross product with unity yields the complement.

The cross product of an element with the unit scalar 1 yields its complement. Thus the complement
operation may be viewed as the cross product with unity.

1µ a ã aµ1 ã a 6.104
m m m

The cross product of an element with a scalar yields that scalar multiple of its complement.

aµ a ã aµa ã a a 6.105
m m m

Ï µ 9: The cross product of unity with itself is the unit n-element.

The cross product of 1 with itself is the unit n-element. The cross product of a scalar and its
reciprocal is the unit n-element.

1
1µ1 ã aµ ã 1 6.106
a

Ï µ 10: The cross product of two 1-elements anti-commutes.



The cross product of two elements is equal (apart from a possible sign) to their cross product in
reverse order. The cross product of two 1-elements is anti-commutative, just as is the exterior
product.

a µ b ã H-1Lm k b µ a 6.107
m k k m

Ï µ 11: The cross product with zero is zero.

0µa ã 0 ã aµ0 6.108


m m

Ï µ 12: The cross product is both left and right distributive under addition.

Ka + b O µ g ã a µ g + b µ g 6.109
m m r m r m r

a µ Kb + gO ã a µ b + a µ g 6.110
m r r m r m r

The cross product as a universal product


We have already shown that all products can be expressed in terms of the exterior product and the
µ
complement operation. Additionally, 8 above shows that the complement operation may be
written as the cross product with unity.

1µ a ã aµ1 ã a
m m m

We can therefore write the exterior, regressive, and interior products in terms only of the cross
product.

a Ô b ã H-1LHm+kL H¯n-Hm+kLL 1 µ a µ b 6.111


m k m k


a Ó b ã H-1Lm H¯n-mL+k H¯n-kL K1 µ a O µ 1 µ b 6.112


m k m k

a û b ã H-1Lm H¯n-mL K1 µ a O µ b 6.113


m k m k

These formulae show that any result in the Grassmann algebra involving exterior, regressive or
interior products, or complements, could be expressed in terms of the generalized cross product
alone. This is somewhat reminiscent of the role played by the Sheffer stroke (or Peirce arrow)
symbol of Boolean algebra.

Cross product formulae

Ï The complement of a cross product


The complement of a cross product of two elements is, but for a possible sign, the exterior product
of the elements.

a µ b ã H-1LHm+kL Hn-Hm+kLL a Ô b 6.114


m k m k

Ï The Common Factor Axiom for cross products


The Common Factor Axiom can be written for m+k+p = n.

aµg µ bµg ã gµ aÔb g ã H-1Lp Hn-pL g û a µ b g 6.115


m p k p p m k p p m k p

Ï Product formulae for cross products


Of course, many of the product formulae derived previously have counterparts involving cross
products. For example the complement of formula 3.41 may be written:

a µ b Ô x ã H-1Ln-1 Ka û xO µ b + H-1Lm a µ b û x 6.116


m k m k m k

Ï The measure of a cross product


The measure of the cross product of two elements is equal to the measure of their exterior product.


aµb ã aÔb 6.117


m k m k

6.11 The Triangle Formulae

Triangle components
In this section several formulae will be developed which are generalizations of the fundamental
Pythagorean relationship for the right-angled triangle and the trigonometric identity
cos2 HqL + sin2 HqL = 1. These formulae will be found useful in determining projections, shortest
distances between multiplanes, and a variety of other geometric results. The starting point is the
product formula 6.89 with k = m.

a û b x ã Ka Ô xO û b + H-1Lm-1 a û b û x
m m m m m m

By dividing through by the inner product a û b, we obtain a decomposition of the 1-element x into
m m
two components.

Ka Ô xO û b a û Kb û xO
m m m-1 m m
xã + H-1L 6.118
aûb aûb
m m m m

Now take formula 6.91:

a û b Ô x1 û x2 ã Ka Ô x1 O û b Ô x2 + H-1Lm+k K a û x2 O û b û x1
m k m k m k

and put k = m, x1 equal to x and x2 equal to z.

Ka û bO Hx û zL ã Ka Ô xO û Kb Ô zO + Ka û zO û Kb û xO 6.119
m m m m m m

When a is equal to b and x is equal to z, this reduces to a relationship similar to the fundamental
m m
trigonometric identity.

` ` ` `
x û x ã Ka Ô xO û Ka Ô xO + Ka û xO û Ka û xO 6.120
m m m m

Or, more succinctly:


` ` 2 ` ` 2 6.121
£a Ô xß + £a û xß ã 1
m m

Similarly, formula 6.118 reduces to a more obvious decomposition for x in terms of a unit m-
element.

` ` ` `
x ã Ka Ô xO û a + H-1Lm-1 a û Ka û xO 6.122
m m m m

Because of its geometric significance when a is simple, the properties of this equation are worth
m
investigating further. It will be shown that the square (scalar product with itself) of each of its
terms is the corresponding term of formula 6.120. That is:

` ` ` ` ` `
KKa Ô xO û aO û KKa Ô xO û aO ã Ka Ô xO û Ka Ô xO 6.123
m m m m m m

` ` ` ` ` `
Ka û Ka û xOO û Ka û Ka û xOO ã Ka û xO û Ka û xO 6.124
m m m m m m

Further, by taking the scalar product of formula 6.122 with itself, and using formulae 6.120, 6.123,
and 6.124, yields the result that the terms on the right-hand side of formula 6.122 are orthogonal.

` ` ` `
KKa Ô xO û aO û Ka û Ka û xOO ã 0 6.125
m m m m

It is these facts that suggest the name triangle formulae for formulae 6.121 and 6.122.
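Both triangle formulae lend themselves to a numerical check. The Python sketch below (my own helper names, orthonormal basis assumed, a simple unit 2-element) verifies the Pythagorean identity 6.121, the decomposition 6.122 of a unit 1-element, and the orthogonality 6.125 of its two components.

```python
import math

def sort_sign(seq):
    s, idx = 1, list(seq)
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                s = -s
    return s, tuple(idx)

def wedge(A, B):
    out = {}
    for ta, ca in A.items():
        for tb, cb in B.items():
            if set(ta) & set(tb):
                continue
            s, key = sort_sign(ta + tb)
            out[key] = out.get(key, 0.0) + s * ca * cb
    return out

def contract(A, x):
    out = {}
    for t, c in A.items():
        for k, i in enumerate(t):
            key = t[:k] + t[k + 1:]
            out[key] = out.get(key, 0.0) + (-1) ** k * c * x[i]
    return out

def interior(A, B, n):
    # A . B for grade(A) >= grade(B): contract A by B's factors in order.
    out = {}
    for tb, cb in B.items():
        R = {t: c * cb for t, c in A.items()}
        for i in tb:
            e = [0.0] * n
            e[i] = 1.0
            R = contract(R, e)
        for t, c in R.items():
            out[t] = out.get(t, 0.0) + c
    return out

def vec(v):
    return {(i,): float(c) for i, c in enumerate(v)}

def add(A, B):
    out = dict(A)
    for k, c in B.items():
        out[k] = out.get(k, 0.0) + c
    return out

def scale(s, A):
    return {k: s * c for k, c in A.items()}

def approx_eq(A, B, tol=1e-9):
    return all(abs(A.get(k, 0.0) - B.get(k, 0.0)) < tol for k in set(A) | set(B))

n, m = 4, 2
a = wedge(vec([1, 2, 0, -1]), vec([0, 1, 3, 2]))
ah = scale(1.0 / math.sqrt(interior(a, a, n)[()]), a)     # unit 2-element
x = [3.0, -2.0, 1.0, 5.0]
norm = math.sqrt(sum(c * c for c in x))
xh = [c / norm for c in x]                                # unit 1-element

msq = lambda A: interior(A, A, n).get((), 0.0)            # squared measure
# Formula 6.121: |a ^ x|^2 + |a . x|^2 = 1 for unit a and x.
assert abs(msq(wedge(ah, vec(xh))) + msq(contract(ah, xh)) - 1.0) < 1e-9

# Formula 6.122: x = (a ^ x) . a + (-1)^(m-1) a . (a . x), its two
# components being orthogonal (formula 6.125).
c1 = interior(wedge(ah, vec(xh)), ah, n)
c2 = interior(ah, contract(ah, xh), n)
recon = add(c1, scale((-1) ** (m - 1), c2))
assert approx_eq(recon, vec(xh))
assert abs(interior(c1, c2, n).get((), 0.0)) < 1e-9
```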

Diagram of a vector x decomposed into components in and orthogonal to a.


m

The measure of the triangle components


`
Let a ã a1 Ô ! Ô am , then:
m

` ` ` ` ` ` ` `
KKa Ô xO û aO û KKa Ô xO û aO ã H-1Lm KKa Ô xO û KKa Ô xO û aOO û a
m m m m m m m m

Focusing now on the first factor of the inner product on the right-hand side we get:


` ` ` ` `
Ka Ô xO û KKa Ô xO û aO ã Ha1 Ô ! Ô am Ô xL û KKa Ô xO û aO
m m m m m

Note that the second factor in the interior product of the right-hand side is of grade 1. We apply the
Interior Common Factor Theorem to give:

` ` ` `
Ha1 Ô ! Ô am Ô xL û KKa Ô xO û aO ã H-1Lm KKKa Ô xO û aO û xO a1 Ô ! Ô am +
m m m m
` `
‚ H-1Li-1 KKKa Ô xO û aO û ai O a1 Ô ! Ô ·i Ô ! Ô am Ô x
m m
i

But the terms in this last sum are zero since:

` ` ` `
KKa Ô xO û aO û ai ã Ka Ô xO û Ka Ô ai O ã 0
m m m m

This leaves only the first term in the expansion. Thus:

$(\hat{a}_m\wedge x)\cdot((\hat{a}_m\wedge x)\cdot\hat{a}_m) = (-1)^m\,(((\hat{a}_m\wedge x)\cdot\hat{a}_m)\cdot x)\;\hat{a}_m = (-1)^m\,((\hat{a}_m\wedge x)\cdot(\hat{a}_m\wedge x))\;\hat{a}_m$

Since $\hat{a}_m\cdot\hat{a}_m = 1$, we have finally that:

$((\hat{a}_m\wedge x)\cdot\hat{a}_m)\cdot((\hat{a}_m\wedge x)\cdot\hat{a}_m) = (\hat{a}_m\wedge x)\cdot(\hat{a}_m\wedge x)$

The measure of $(\hat{a}_m\wedge x)\cdot\hat{a}_m$ is therefore the same as the measure of $\hat{a}_m\wedge x$.

$\|(\hat{a}_m\wedge x)\cdot\hat{a}_m\| = \|\hat{a}_m\wedge x\| \qquad (6.126)$
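Formula 6.126 can be spot-checked numerically for the simplest case m = 1. The sketch below uses Python with numpy as a cross-check independent of the GrassmannAlgebra package used in this book (the use of numpy here is an assumption of convenience, not part of the book); the interior product $(\hat{a}\wedge x)\cdot\hat{a}$ is expanded as $(\hat{a}\cdot\hat{a})x - (x\cdot\hat{a})\hat{a}$, the m = 1 case of the Interior Common Factor expansion, and the measure of $\hat{a}\wedge x$ is taken from the 2x2 Gram determinant.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5  # any dimension works

a = rng.standard_normal(n)
a /= np.linalg.norm(a)          # unit 1-element a-hat
x = rng.standard_normal(n)

# For m = 1 the interior product expands as
# (a ^ x) . a = (a.a) x - (x.a) a, i.e. the part of x orthogonal to a
comp = (a @ a) * x - (x @ a) * a

# The measure |a ^ x| from the 2x2 Gram determinant of scalar products
wedge_measure = np.sqrt((a @ a) * (x @ x) - (a @ x) ** 2)

print(np.linalg.norm(comp), wedge_measure)   # equal up to rounding (6.126)
```

The same Gram-determinant measure is used again in the angle and volume computations later in this chapter.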

Equivalent forms for the triangle components

There is an interesting relationship between the two terms $(a_m\wedge x)\cdot a_m$ and $a_m\cdot(a_m\cdot x)$ which
enables formula 6.124 to be proved in an identical manner to formula 6.123. Using formula 6.16
we find that each form may be expressed (apart from a possible sign) in the other form provided $a_m$
is replaced by its complement $\overline{a}_m$.

$(a_m\wedge x)\cdot a_m = (-1)^{n-m-1}\,\overline{a}_m\cdot\overline{(a_m\wedge x)} = (-1)^{n-m-1}\,\overline{a}_m\cdot(\overline{a}_m\cdot x)$


$(a_m\wedge x)\cdot a_m = (-1)^{n-m-1}\,\overline{a}_m\cdot(\overline{a}_m\cdot x) \qquad (6.127)$

In a similar manner it may be shown that:

$a_m\cdot(a_m\cdot x) = (-1)^{m-1}\,(\overline{a}_m\wedge x)\cdot\overline{a}_m \qquad (6.128)$

Since the proof of formula 6.103 is valid for any simple $\hat{a}_m$, it is also valid for $\overline{\hat{a}}_m$ (which is equal to $\hat{\overline{a}}_m$),
since this is also simple. The proof of formula 6.124 is then completed by applying formula 6.127.

The next four sections will look at some applications of these formulae.

6.12 Angle

Defining the angle between elements

Ï The angle between 1-elements


The interior product enables us to define, as is standard practice, the angle between two 1-elements

$\cos^2\theta = \|\hat{x}_1\cdot\hat{x}_2\|^2 \qquad (6.129)$

Diagram of two vectors showing the angle between them.

However, putting $a_m$ equal to $x_1$ and $x$ equal to $x_2$ in formula 6.121 yields:

$\|\hat{x}_1\wedge\hat{x}_2\|^2 + \|\hat{x}_1\cdot\hat{x}_2\|^2 = 1 \qquad (6.130)$

This, together with the definition for angle above implies that:

$\sin^2\theta = \|\hat{x}_1\wedge\hat{x}_2\|^2 \qquad (6.131)$

This is a formula equivalent to the three-dimensional cross product formulation:


$\sin^2\theta = \|\hat{x}_1\times\hat{x}_2\|^2$

but one which is not restricted to three-dimensional space.

Thus formula 6.130 is an identity equivalent to $\sin^2\theta + \cos^2\theta = 1$.
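The identity is easy to spot-check numerically in a space of any dimension. The sketch below uses Python with numpy rather than the GrassmannAlgebra package used in this book (an assumption of convenience); the measure of the bivector $\hat{x}_1\wedge\hat{x}_2$ is obtained from the 2x2 Gram determinant of scalar products.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_wedge(x1, x2):
    """Measure of the bivector x1 ^ x2: the square root of the
    2x2 Gram determinant (valid in any number of dimensions)."""
    g = np.array([[x1 @ x1, x1 @ x2],
                  [x2 @ x1, x2 @ x2]])
    return np.sqrt(np.linalg.det(g))

# Two random unit vectors in a 7-dimensional space
x1 = rng.standard_normal(7); x1 /= np.linalg.norm(x1)
x2 = rng.standard_normal(7); x2 /= np.linalg.norm(x2)

sin2 = measure_wedge(x1, x2) ** 2   # |x1 ^ x2|^2  (formula 6.131)
cos2 = (x1 @ x2) ** 2               # |x1 . x2|^2  (formula 6.129)

print(sin2 + cos2)                  # 1 up to rounding (formula 6.130)
```

Because the Gram-determinant form never references a third dimension, the same check works unchanged in 2, 3 or 100 dimensions, unlike the cross-product formulation.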

Ï The angle between a 1-element and a simple m-element


One may however carry this concept further, and show that it is meaningful to talk about the angle
between a 1-element $x$ and a simple m-element $a_m$, for which the general formula 6.121 holds.

$\|\hat{a}_m\wedge\hat{x}\|^2 + \|\hat{a}_m\cdot\hat{x}\|^2 = 1$

$\sin^2\theta = \|\hat{a}_m\wedge\hat{x}\|^2 \qquad (6.132)$

$\cos^2\theta = \|\hat{a}_m\cdot\hat{x}\|^2 \qquad (6.133)$

We will explore this more fully in what follows.

The angle between a vector and a bivector


As a simple example, take the case where $x$ is rewritten as $x_1$ and $a_m$ is interpreted as the bivector
$x_2\wedge x_3$.

Diagram of three vectors showing the angles between them, and the angle between the bivector
and the vector.

The angle between any two of the vectors is given by formula 6.129.

$\cos^2\theta_{ij} = \|\hat{x}_i\cdot\hat{x}_j\|^2$

The cosine of the angle between the vector $x_1$ and the bivector $x_2\wedge x_3$ may be obtained from
formula 6.124.

$\|\hat{a}_m\cdot\hat{x}\|^2 = \dfrac{((x_2\wedge x_3)\cdot x_1)\cdot((x_2\wedge x_3)\cdot x_1)}{((x_2\wedge x_3)\cdot(x_2\wedge x_3))\,(x_1\cdot x_1)}$

To express the right-hand side in terms of angles, let:


$A = \dfrac{((x_2\wedge x_3)\cdot x_1)\cdot((x_2\wedge x_3)\cdot x_1)}{((x_2\wedge x_3)\cdot(x_2\wedge x_3))\,(x_1\cdot x_1)}\,;$

First, convert the interior products to scalar products and then convert the scalar products to angle
form given by formula 6.129. GrassmannAlgebra provides the function ToAngleForm for doing
this in one operation.

¯¯V[x_]; ToAngleForm[A, q]

(Cos[q1,2]² + Cos[q1,3]² - 2 Cos[q1,2] Cos[q1,3] Cos[q2,3]) Csc[q2,3]²

This result may be verified by elementary geometry to be $\cos^2\theta$, where $\theta$ is defined on the
diagram above. Thus we see a verification that formula 6.124 permits the calculation of the angle
between a vector and an m-vector in terms of the angles between the given vector and any m vector
factors of the m-vector.
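The angle-form result above can be verified numerically as well as by elementary geometry. The sketch below uses Python with numpy instead of the book's GrassmannAlgebra functions (an assumption of convenience): it compares the expression returned by ToAngleForm with a direct computation of the angle between $x_1$ and the plane of $x_2$ and $x_3$ via orthogonal projection.

```python
import numpy as np

rng = np.random.default_rng(1)
x1, x2, x3 = (rng.standard_normal(5) for _ in range(3))  # works in any dimension

def cosang(u, v):
    """Cosine of the angle between two 1-elements (formula 6.129)."""
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

c12, c13, c23 = cosang(x1, x2), cosang(x1, x3), cosang(x2, x3)

# The ToAngleForm result: Csc[q23]^2 = 1/sin^2(q23) = 1/(1 - cos^2(q23))
rhs = (c12**2 + c13**2 - 2*c12*c13*c23) / (1 - c23**2)

# Direct computation: cos^2 of the angle between x1 and the plane of
# x2 and x3, via orthogonal projection onto that plane
Q = np.linalg.qr(np.column_stack([x2, x3]))[0]  # orthonormal basis of plane
proj = Q @ (Q.T @ x1)
lhs = (np.linalg.norm(proj) / np.linalg.norm(x1)) ** 2

print(lhs, rhs)   # agree up to rounding
```

Note that the check is run in a 5-dimensional space: nothing in the formula is specific to three dimensions.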

The angle between two bivectors


Suppose we have two bivectors $x_1\wedge x_2$ and $x_3\wedge x_4$.

Diagram of three vectors showing the angles between them, and the angle between the bivector
and the vector.

In a 3-space we can find the vector complements of the bivectors, and then find the angle between
these vectors. This is equivalent to the cross product formulation. Let q be the angle
between the vectors; then:

$\cos\theta = \dfrac{\overline{(x_1\wedge x_2)}\cdot\overline{(x_3\wedge x_4)}}{|x_1\wedge x_2|\;|x_3\wedge x_4|} = \dfrac{(x_1\times x_2)\cdot(x_3\times x_4)}{|x_1\times x_2|\;|x_3\times x_4|}$

But from formulas 6.28 and 6.67 we can remove the complement operations to get the simpler
expression:

$\cos\theta = \dfrac{(x_1\wedge x_2)\cdot(x_3\wedge x_4)}{|x_1\wedge x_2|\;|x_3\wedge x_4|} \qquad (6.134)$

Note that, in contradistinction to the 3-space formulation using cross products, this formulation is
valid in a space of any number of dimensions.

An actual calculation is most readably expressed by expanding each of the terms separately. We
can either represent the xi in terms of basis elements or deal with them directly. A direct
expansion, valid for any metric is:

ToScalarProducts[(x1∧x2)⋅(x3∧x4)]

-(x1⋅x4) (x2⋅x3) + (x1⋅x3) (x2⋅x4)


Measure[x1∧x2] Measure[x3∧x4]

√(-(x1⋅x2)² + (x1⋅x1) (x2⋅x2)) √(-(x3⋅x4)² + (x3⋅x3) (x4⋅x4))
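Formula 6.134, with the two expansions above, can be cross-checked numerically against the 3-space cross-product formulation. The sketch below is Python with numpy (not the book's GrassmannAlgebra package, which is an assumption of convenience here); the bivector measures come from the 2x2 Gram determinants just printed.

```python
import numpy as np

rng = np.random.default_rng(2)
x1, x2, x3, x4 = (rng.standard_normal(3) for _ in range(4))

def wedge_measure(u, v):
    """|u ^ v| as the square root of the 2x2 Gram determinant."""
    return np.sqrt((u @ u) * (v @ v) - (u @ v) ** 2)

# Formula 6.134, using the scalar-product expansion above
num = (x1 @ x3) * (x2 @ x4) - (x1 @ x4) * (x2 @ x3)
cos_bivectors = num / (wedge_measure(x1, x2) * wedge_measure(x3, x4))

# In 3-space this agrees with the cross-product formulation
n1, n2 = np.cross(x1, x2), np.cross(x3, x4)
cos_cross = (n1 @ n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))

print(cos_bivectors, cos_cross)   # agree up to rounding
```

The scalar-product form of the numerator is the Binet-Cauchy identity; only the cross-product comparison at the end is restricted to three dimensions.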

The volume of a parallelepiped


We can calculate the volume of a parallelepiped as the measure of the trivector whose vectors
make up the sides of the parallelepiped. GrassmannAlgebra provides the function Measure for
expressing the volume in terms of scalar products:

V = Measure[x1∧x2∧x3]

√(-(x1⋅x3)² (x2⋅x2) + 2 (x1⋅x2) (x1⋅x3) (x2⋅x3) - (x1⋅x1) (x2⋅x3)² - (x1⋅x2)² (x3⋅x3) + (x1⋅x1) (x2⋅x2) (x3⋅x3))

Note that this has been simplified somewhat as permitted by the symmetry of the scalar product.

By putting this in its angle form we get the usual expression for the volume of a parallelepiped:

ToAngleForm[V, q]

√(-|x1|² |x2|² |x3|² (-1 + Cos[q1,2]² + Cos[q1,3]² - 2 Cos[q1,2] Cos[q1,3] Cos[q2,3] + Cos[q2,3]²))

A slight rearrangement gives the volume of the parallelepiped as:

$V = |x_1|\,|x_2|\,|x_3|\,\sqrt{1 + 2\cos\theta_{1,2}\cos\theta_{1,3}\cos\theta_{2,3} - \cos^2\theta_{1,2} - \cos^2\theta_{1,3} - \cos^2\theta_{2,3}} \qquad (6.135)$

We can of course use the same approach in any number of dimensions. For example, the 'volume'
of a 4-dimensional parallelepiped in terms of the lengths of its sides and the angles between them
is:

¯!4 ; ToAngleForm[Measure[x1∧x2∧x3∧x4], q]

√(|x1|² |x2|² |x3|² |x4|² (1 - Cos[q2,3]² - Cos[q2,4]² + 2 Cos[q1,3] Cos[q1,4] Cos[q3,4] - Cos[q3,4]² +
  2 Cos[q2,3] Cos[q2,4] (-Cos[q1,3] Cos[q1,4] + Cos[q3,4]) +
  2 Cos[q1,2] (Cos[q1,4] (Cos[q2,4] - Cos[q2,3] Cos[q3,4]) + Cos[q1,3] (Cos[q2,3] - Cos[q2,4] Cos[q3,4])) -
  Cos[q1,4]² Sin[q2,3]² - Cos[q1,3]² Sin[q2,4]² - Cos[q1,2]² Sin[q3,4]²))
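The Measure computations above reduce, in scalar-product form, to the square root of a Gram determinant, and that observation gives a quick numerical cross-check. The sketch below is Python with numpy (an assumption of convenience; the book's own tool is the GrassmannAlgebra package): in 3 dimensions the Gram form is compared with the ordinary determinant volume, and the same function then gives the 4-volume of a 4-dimensional parallelepiped.

```python
import numpy as np

rng = np.random.default_rng(3)

def measure(vectors):
    """Measure of x1 ^ ... ^ xm as the square root of the
    m x m Gram determinant of scalar products."""
    X = np.column_stack(vectors)
    return np.sqrt(np.linalg.det(X.T @ X))

# Volume of a 3-dimensional parallelepiped: Gram form vs determinant
x1, x2, x3 = (rng.standard_normal(3) for _ in range(3))
v_gram = measure([x1, x2, x3])
v_det = abs(np.linalg.det(np.column_stack([x1, x2, x3])))
print(v_gram, v_det)          # agree up to rounding

# The same function gives the 4-volume of a 4-dimensional parallelepiped
xs = [rng.standard_normal(4) for _ in range(4)]
v4 = measure(xs)
print(v4)
```

The Gram-determinant form is exactly what the angle expressions 6.135 and its 4-dimensional analogue rewrite in terms of lengths and cosines.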

6.13 Projection

To be completed.

6.14 Interior Products of Interpreted Elements


To be completed.

6.15 The Closest Approach of Multiplanes


To be completed.


7 Exploring Screw Algebra



7.1 Introduction
In Chapter 8: Exploring Mechanics, we will see that systems of forces and momenta, and the
velocity and infinitesimal displacement of a rigid body may be represented by the sum of a bound
vector and a bivector. We have already noted in Chapter 1 that a single force is better represented
by a bound vector than by a (free) vector. Systems of forces are then better represented by sums of
bound vectors; and a sum of bound vectors may always be reduced to the sum of a single bound
vector and a single bivector.

We call the sum of a bound vector and a bivector a 2-entity. These geometric entities are therefore
worth exploring for their ability to represent the principal physical entities of mechanics.

In this chapter we begin by establishing some properties of 2-entities in an n-plane, and then show
how, in a 3-plane, they take on a particularly symmetrical and potent form. This form, a 2-entity in
a 3-plane, is called a screw. Since it is in the 3-plane that we wish to explore 3-dimensional
mechanics we then explore the properties of screws in more detail.

The objective of this chapter then, is to lay the algebraic and geometric foundations for the chapter
on mechanics to follow.

Historical Note

The classic text on screws and their application to mechanics is by Sir Robert Stawell Ball: A
Treatise on the Theory of Screws (1900). Ball was aware of Grassmann's work as he explains in the
Biographical Notes to this text in a comment on the Ausdehnungslehre of 1862.

This remarkable work, a development of an earlier volume (1844), by the same author,
contains much that is of instruction and interest in connection with the present theory.
... Here we have a very general theory, which includes screw coordinates as a particular
case.
The principal proponent of screw theory from a Grassmannian viewpoint was Edward Wyllys
Hyde. In 1888 he wrote a paper entitled 'The Directional Theory of Screws' on which Ball
comments, again in the Biographical Notes to A Treatise on the Theory of Screws.

The author writes: "I shall define a screw to be the sum of a point-vector and a plane-
vector perpendicular to it, the former being a directed and posited line, the latter the
product of two vectors, hence a directed but not posited plane." Prof. Hyde proves by his
[sic] calculus many of the fundamental theorems in the present theory in a very concise
manner.


7.2 A Canonical Form for a 2-Entity

The canonical form


The most general 2-entity in a bound vector space may always be written as the sum of a bound
vector and a bivector.

$S = P\wedge a + b$

Here, P is a point, a a vector and b a bivector. Remember that only in vector 1, 2 and 3-spaces are
bivectors necessarily simple. In what follows we will show that in a metric space the point P may
always be chosen in such a way that the bivector b is orthogonal to the vector a. This property,
when specialised to three-dimensional space, is important for the theory of screws to be developed
in the rest of the chapter. We show it as follows:

To the above equation, add and subtract the bound vector $P^*\wedge a$, where $P^*$ is such that $(P - P^*)\cdot a = 0$,
giving:

$S = P^*\wedge a + b^*$

$b^* = b + (P - P^*)\wedge a$

We want to choose $P^*$ such that $b^*$ is orthogonal to $a$, that is:

$(b + (P - P^*)\wedge a)\cdot a = 0$

Expanding the left-hand side of this equation gives:

$b\cdot a + (a\cdot(P - P^*))\,a - (a\cdot a)\,(P - P^*) = 0$

But from our first condition on the choice of $P^*$ (that is, $(P - P^*)\cdot a = 0$) we see that the second
term is zero, giving:

$P^* = P - \dfrac{b\cdot a}{a\cdot a}$

whence $b^*$ may be expressed as:

$b^* = b + \dfrac{b\cdot a}{a\cdot a}\wedge a$
Finally, substituting for P* and b* in the expression for S above gives the canonical form for the 2-
entity S, in which the new bivector component is now orthogonal to the vector a. The bound vector
component defines a line called the central axis of S.


$S = \left(P - \dfrac{b\cdot a}{a\cdot a}\right)\wedge a + \left(b + \dfrac{b\cdot a}{a\cdot a}\wedge a\right) \qquad (7.1)$
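The construction behind formula 7.1 can be spot-checked numerically. The sketch below uses Python with numpy rather than the GrassmannAlgebra package (an assumption of convenience). It represents a simple bivector $u\wedge v$ by the antisymmetric matrix $v u^{\mathrm T} - u v^{\mathrm T}$, chosen so that the interior product $(u\wedge v)\cdot a = (u\cdot a)v - (v\cdot a)u$ used in the derivation becomes an ordinary matrix-vector product; this matrix representation is a device of the sketch, not the book's notation.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4                      # works in a space of any dimension

def wedge(u, v):
    """Matrix of the simple bivector u ^ v, chosen so that the
    interior product with a vector a is the matrix product:
    (u ^ v) . a = (u.a) v - (v.a) u."""
    return np.outer(v, u) - np.outer(u, v)

a = rng.standard_normal(n)
beta = wedge(*rng.standard_normal((2, n)))   # a simple bivector b

# Shift the bivector as in the canonical form:
# b* = b + ((b . a)/(a . a)) ^ a
w = (beta @ a) / (a @ a)
beta_star = beta + wedge(w, a)

print(np.linalg.norm(beta_star @ a))   # b* . a vanishes: b* is orthogonal to a
```

The key step, as in the text, is that $(b\cdot a)\cdot a = 0$ (here: $a^{\mathrm T} M a = 0$ for an antisymmetric matrix $M$), so the second term of the expansion drops out.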

Canonical forms in an n-plane

Ï Canonical forms in !1
In a bound vector space of one dimension, that is a 1-plane, there are points, vectors, and bound
vectors. There are no bivectors. Hence every 2-entity is of the form $S = P\wedge a$ and is therefore in
some sense already in its canonical form.

Ï Canonical forms in !2
In a bound vector space of two dimensions (the plane), every bound element can be expressed as a
bound vector.

To see this we note that any bivector is simple and can be expressed using the vector of the bound
vector as one of its factors. For example, if P is a point and a, b1 , b2 , are vectors, any bound
element in the plane can be expressed in the form:

$P\wedge a + b_1\wedge b_2$

But because any bivector in the plane can be expressed as a scalar factor times any other bivector,
we can also write, for some vector $k$:

$b_1\wedge b_2 = k\wedge a$

so that the bound element may now be written as the bound vector:

$P\wedge a + b_1\wedge b_2 = P\wedge a + k\wedge a = (P + k)\wedge a = P^*\wedge a$
We can use GrassmannAlgebra to verify that formula 7.1 gives this result in a 2-space. First we
declare the 2-space, and write a and b in terms of basis elements:

'2 ; a = a e1 + b e2 ; b = c e1∧e2 ;
For the canonical expression [7.1] above we wish to show that the bivector term is zero. We do this
by converting the interior products to scalar products using ToScalarProducts.

Simplify[ToScalarProducts[b + ((b⋅a)/(a⋅a))∧a]]

0

In sum: A bound 2-element in the plane, $P\wedge a + b$, may always be expressed as a bound vector:


$S = P\wedge a + b = \left(P - \dfrac{b\cdot a}{a\cdot a}\right)\wedge a \qquad (7.2)$

Ï Canonical forms in !3
In a bound vector space of three dimensions, that is a 3-plane, every 2-element can be expressed as
the sum of a bound vector and a bivector orthogonal to the vector of the bound vector. (The
bivector is necessarily simple, because all bivectors in a 3-space are simple.)

Such canonical forms are called screws, and will be discussed in more detail in the sections to
follow.

Creating 2-entities
A 2-entity may be created by applying the GrassmannAlgebra function CreateElement. For
example in 3-dimensional space we create a 2-entity based on the symbol s by entering:

'3 ; S = CreateElement[s₂]

s1 𝒪∧e1 + s2 𝒪∧e2 + s3 𝒪∧e3 + s4 e1∧e2 + s5 e1∧e3 + s6 e2∧e3

Note that CreateElement automatically declares the generated symbols $s_i$ as scalars by adding
the pattern s_. We can confirm this by entering Scalars.

Scalars

{a, b, c, d, e, f, g, h, %, (_ ⋅ _)?InnerProductQ, s_ , _}

To explicitly factor out the origin and express the 2-element in the form $S = P\wedge a + b$, we can use
the GrassmannAlgebra function GrassmannSimplify.

%[S]

𝒪∧(e1 s1 + e2 s2 + e3 s3) + s4 e1∧e2 + s5 e1∧e3 + s6 e2∧e3


7.3 The Complement of a 2-Entity

Complements in an n-plane
In this section an expression will be developed for the complement of a 2-entity in an n-plane with
a metric. In an n-plane, the complement of the sum of a bound vector and a bivector is the sum of a
bound (n-2)-vector and an (n-1)-vector. It is of pivotal consequence for the theory of mechanics
that such a complement in a 3-plane (the usual three-dimensional space) is itself the sum of a
bound vector and a bivector. Geometrically, a quantity and its complement have the same measure
but are orthogonal. In addition, for a 2-element (such as the sum of a bound vector and a bivector)
in a 3-plane, the complement of the complement of an entity is the entity itself. These results will
find application throughout Chapter 8: Exploring Mechanics.

The metric we choose to explore is the hybrid metric Gij defined in [5.33], in which the origin is
orthogonal to all vectors, but otherwise the vector space metric is arbitrary.

The complement referred to the origin


Consider the general sum of a bound vector and a bivector expressed in its simplest form referred
to the origin. "Referring" an element to the origin means expressing its bound component as bound
through the origin, rather than through some other more general point.

$X = \mathcal{O}\wedge a + b$

The complement of X is then:

$\overline{X} = \overline{\mathcal{O}\wedge a + b}$

which, by formulae 5.41 and 5.43 derived in Chapter 5, gives:

$\overline{X} = \mathcal{O}\wedge\breve{b} + \breve{a}$

$X = \mathcal{O}\wedge a + b \;\Longleftrightarrow\; \overline{X} = \mathcal{O}\wedge\breve{b} + \breve{a} \qquad (7.3)$

(here $\mathcal{O}$ denotes the origin point, and $\breve{a}$, $\breve{b}$ the vector-space complements of $a$ and $b$)

It may be seen that the effect of taking the complement of X when it is in a form referred to the
origin is equivalent to interchanging the vector a and the bivector b whilst taking their vector space
complements. This means that the vector that was bound through the origin becomes a free (n-1)-vector,
whilst the bivector that was free becomes an (n-2)-vector bound through the origin.

The complement referred to a general point


Let $X = \mathcal{O}\wedge a + b$ and $P = \mathcal{O} + \nu$. We can refer X to the point P by adding and subtracting $\nu\wedge a$ to
and from the above expression for X.

$X = \mathcal{O}\wedge a + \nu\wedge a + b - \nu\wedge a = (\mathcal{O} + \nu)\wedge a + (b - \nu\wedge a)$

Or, equivalently:

$X = P\wedge a + b_P \qquad b_P = b - \nu\wedge a$

By manipulating the complement $\overline{P\wedge a}$, we can write it in the alternative form $-\overline{a}\cdot P$:

$\overline{P\wedge a} = -\overline{a\wedge P} = -\overline{a}\vee\overline{P} = -\overline{a}\cdot P$

Further, from formula 5.41, we have that:

$-\overline{a}\cdot P = (\mathcal{O}\wedge\breve{a})\cdot P$

$\overline{b_P} = \mathcal{O}\wedge\breve{b}_P$

So that the relationship between X and its complement $\overline{X}$ can finally be written:

$X = P\wedge a + b_P \;\Longleftrightarrow\; \overline{X} = \mathcal{O}\wedge\breve{b}_P + (\mathcal{O}\wedge\breve{a})\cdot P \qquad (7.4)$

Remember, this formula is valid for the hybrid metric [5.33] in an n-plane of arbitrary dimension.
We explore its application to 3-planes in the section below.

7.4 The Screw

The definition of a screw


A screw is the canonical form of a 2-entity in a three-plane, and may always be written in the form:

$S = P\wedge a + s\,\breve{a} \qquad (7.5)$

where:
$a$ is the vector of the screw
$P\wedge a$ is the central axis of the screw
$s$ is the pitch of the screw
$\breve{a}$ is the bivector of the screw

Remember that $\breve{a}$ is the (free) complement of the vector $a$ in the three-plane, and hence is a simple
bivector.


The unit screw


Let the scalar $a$ denote the magnitude of the vector $a$ of the screw, so that $a = a\,\hat{a}$. A unit screw $\hat{S}$
may then be defined by $S = a\,\hat{S}$, and written as:

$\hat{S} = P\wedge\hat{a} + s\,\breve{\hat{a}} \qquad (7.6)$

Note that the unit screw does not have unit magnitude. The magnitude of a screw will be discussed
in Section 7.5.

The pitch of a screw


An explicit formula for the pitch $s$ of a screw is obtained by taking the exterior product of the
screw expression with its vector $a$:

$S\wedge a = (P\wedge a + s\,\breve{a})\wedge a$

The first term in the expansion of the right-hand side is zero, leaving:

$S\wedge a = s\,\breve{a}\wedge a$

Further, by taking the free complement of this expression and invoking the definition of the interior
product we get:

$\breve{S\wedge a} = s\,\breve{\breve{a}\wedge a} = s\,(a\vee\overline{a}) = s\,(a\cdot a)$

Dividing through by the square of the magnitude of $a$ gives:

$\breve{\hat{S}\wedge\hat{a}} = s\,(\hat{a}\cdot\hat{a}) = s$

In sum: We can obtain the pitch of a screw from any of the following formulae:

$s = \dfrac{S\wedge a}{\breve{a}\wedge a} = \dfrac{\breve{S\wedge a}}{a\cdot a} = \breve{\hat{S}\wedge\hat{a}} \qquad (7.7)$

The central axis of a screw


An explicit formula for the central axis $P\wedge a$ of a screw is obtained by taking the interior product of
the screw expression with its vector $a$:

$S\cdot a = (P\wedge a + s\,\breve{a})\cdot a$

The second term in the expansion of the right-hand side is zero (since an element is always
orthogonal to its complement), leaving:

$S\cdot a = (P\wedge a)\cdot a$

The right-hand side of this equation may be expanded using the Interior Common Factor Theorem
to give:

$S\cdot a = (a\cdot P)\,a - (a\cdot a)\,P$

By taking the exterior product of this expression with $a$, we eliminate the first term on the right-hand
side to get:

$(S\cdot a)\wedge a = -(a\cdot a)\,(P\wedge a)$

By dividing through by the square of the magnitude of $a$, we can express this in terms of unit
elements:

$(S\cdot\hat{a})\wedge\hat{a} = -(P\wedge a)$

In sum: We can obtain the central axis $P\wedge a$ of a screw from any of the following formulae:

$P\wedge a = -\dfrac{(S\cdot a)\wedge a}{a\cdot a} = -(S\cdot\hat{a})\wedge\hat{a} = \hat{a}\wedge(S\cdot\hat{a}) \qquad (7.8)$

Orthogonal decomposition of a screw


By taking the expression for a screw and substituting the expressions derived in [7.7] for the pitch
and [7.8] for the central axis we obtain:

$S = -(S\cdot\hat{a})\wedge\hat{a} + \dfrac{\breve{S\wedge a}}{a\cdot a}\,\breve{a}$

In order to transform the second term into the form we want, we note first that $\breve{S\wedge a}$ is a scalar, so
that we can write:

$\breve{S\wedge a}\;\breve{a} = \breve{S\wedge a}\wedge\breve{a} = (S\wedge a)\cdot a$

Hence S can be written as:

$S = -(S\cdot\hat{a})\wedge\hat{a} + (S\wedge\hat{a})\cdot\hat{a} \qquad (7.9)$

This type of decomposition is in fact valid for any m-element S and 1-element a, as we have shown
in Chapter 6, formula 6.68. The central axis is the term $-(S\cdot\hat{a})\wedge\hat{a}$, which is the component of S
parallel to $\hat{a}$, and the term $(S\wedge\hat{a})\cdot\hat{a}$ is $s\,\breve{a}$, the component of S orthogonal to $\hat{a}$.

7.5 The Algebra of Screws




To be completed

7.6 Computing with Screws


To be completed


8 Exploring Mechanics

8.1 Introduction
Grassmann algebra applied to the field of mechanics performs an interesting synthesis between
what now seem to be regarded as disparate concepts. In particular we will explore the synthesis it
can effect between the concepts of force and moment; velocity and angular velocity; and linear
momentum and angular momentum.

This synthesis has an important concomitant, namely that the form in which the mechanical entities
are represented is, for many results of interest, independent of the dimension of the space involved.
It may be argued therefore, that such a representation is more fundamental than one specifically
requiring a three-dimensional context (as indeed does that which uses the Gibbs-Heaviside vector
algebra).

This is a more concrete result than may be apparent at first sight since the form, as well as being
valid for spaces of dimension three or greater, is also valid for spaces of dimension zero, one or
two. Of most interest, however, is the fact that the complementary form of a mechanical entity
takes on a different form depending on the number of dimensions concerned. For example, the
velocity of a rigid body is the sum of a bound vector and a bivector in a space of any dimension.
Its complement is dependent on the dimension of the space, but in each case it may be viewed as
representing the points of the body (if they exist) which have zero velocity. In three dimensions the
complementary velocity can specify the axis of rotation of the rigid body, while in two dimensions
it can specify the centre (point) of rotation.

Furthermore, some results in mechanics (for example, Newton's second law or the conditions of
equilibrium of a rigid body) will be shown not to require the space to be a metric space. On the
other hand, use of the Gibbs-Heaviside 'cross product' to express angular momentum or the
moment condition immediately supposes the space to possess a metric.

Mechanics as it is known today is in the strange situation of being a field of mathematical physics
in which location is very important, and of which the calculus traditionally used (being vectorial)
can take no proper account. As already discussed in Chapter 1, one may take the example of the
concept of force. A (physical) force is not satisfactorily represented by a vector, yet contemporary
practice is still to use a vector calculus for this task. To patch up this inadequacy the concept of
moment is introduced and the conditions of equilibrium of a rigid body augmented by a condition
on the sum of the moments. The justification for this condition is often not well treated in
contemporary texts.

Many of these texts will of course rightly state that forces are not (represented by) free vectors and
yet will proceed to use the Gibbs-Heaviside calculus to denote and manipulate them. Although
confusing, this inaptitude is usually offset by various comments attached to the symbolic
descriptions and calculations. For example, a position is represented by a 'position vector'. A
position vector is described as a (free?) vector with its tail (fixed?) to the origin (point?) of the
coordinate system. The coordinate system itself is supposed to consist of an origin point and a
number of basis vectors. But whereas the vector calculus can cope with the vectors, it cannot cope
with the origin. This confusion between vectors, free vectors, bound vectors, sliding vectors and
position vectors would not occur if the calculus used to describe them were a calculus of position
(points) as well as direction (vectors).

In order to describe the phenomena of mechanics in purely vectorial terms it has been necessary
therefore to devise the notions of couple and moment as notions almost distinct from that of force,
thus effectively splitting in two all the results of mechanics. In traditional mechanics, to every
'linear' quantity: force, linear momentum, translational velocity, etc., corresponds an 'angular'
quantity: moment, angular momentum, angular velocity.

In this chapter we will show that by representing mechanical quantities correctly in terms of
elements of a Grassmann algebra this dichotomy disappears and mechanics takes on a more unified
form. In particular we will show that there exists a screw F representing (including moments) a
system of forces (remember that a system of forces in three dimensions is not necessarily
replaceable by a single force); a screw M representing the momentum of a system of bodies (linear
and angular); and a screw V representing the velocity of a rigid body (linear and angular): all
invariant combinations of the linear and angular components. For example, the velocity V is a
complete characterization of the kinematics of the rigid body independent of any particular point
on the body used to specify its motion. Expressions for work, power and kinetic energy of a system
of forces and a rigid body will be shown to be determinable by an interior product between the
relevant screws. For example, the interior product of F with V will give the power of the system of
forces acting on the rigid body, that is, the sum of the translational and rotational powers.

Historical Note

The application of the Ausdehnungslehre to mechanics has obtained far fewer proponents than
might be expected. This may be due to the fact that Grassmann himself was late going into the
subject. His 'Die Mechanik nach den principien der Ausdehnungslehre' (1877) was written just a
few weeks before his death. Furthermore, in the period before his ideas became completely
submerged by the popularity of the Gibbs-Heaviside system (around the turn of the century) there
were few people with sufficient understanding of the Ausdehnungslehre to break new ground using
its methods.

There are only three people who have written substantially in English using the original concepts
of the Ausdehnungslehre: Edward Wyllys Hyde (1888), Alfred North Whitehead (1898), and
Henry George Forder (1941). Each of them has discussed the theory of screws in more or less detail,
but none has addressed the complete field of mechanics. The principal works in other languages
are in German, and apart from Grassmann's paper in 1877 mentioned above, are the book by
Jahnke (1905) which includes applications to mechanics, and the short monograph by Lotze (1922)
which lays particular emphasis on rigid body mechanics.


8.2 Force

Representing force
The notion of force as used in mechanics involves the concepts of magnitude, sense, and line of
action. It will be readily seen in what follows that such a physical entity may be faithfully
represented by a bound vector. Similarly, all the usual properties of systems of forces are faithfully
represented by the analogous properties of sums of bound vectors. For the moment it is not
important whether the space has a metric, or what its dimension is.

Let F denote a force. Then:

$F = P\wedge f \qquad (8.1)$

The vector f is called the force vector and expresses the sense and direction of the force. In a
metric space it would also express its magnitude.

The point P is any point on the line of action of the force. It can be expressed as the sum of the
origin point $\mathcal{O}$ and the position vector $\nu$ of the point.

This simple representation delineates clearly the role of the (free) vector f. The vector f represents
all the properties of the force except the position of its line of action. The operation $P\wedge$ has the
effect of 'binding' the force vector to the line through P.

On the other hand the expression of a bound vector as a product of points is also useful when
expressing certain types of forces, for example gravitational forces. Newton's law of gravitation for
the force exerted on a point mass m1 (weighted point) by a point mass m2 may be written:

$F_{12} = \dfrac{G\; m_1\wedge m_2}{R^2} \qquad (8.2)$

Note that this expression correctly changes sign if the masses are interchanged.

Systems of forces
A system of forces may be represented by a sum of bound vectors.

$F = \sum F_i = \sum P_i\wedge f_i \qquad (8.3)$

A sum of bound vectors is not necessarily reducible to a bound vector: a system of forces is not
necessarily reducible to a single force. However, a sum of bound vectors is in general reducible to
the sum of a bound vector and a bivector. This is done simply by adding and subtracting the term
$P\wedge(\sum f_i)$ to and from the sum [8.3].


$F = P\wedge\left(\sum f_i\right) + \sum (P_i - P)\wedge f_i \qquad (8.4)$

This 'adding and subtracting' operation is a common one in our treatment of mechanics. We call it
referring the sum to the point P.

Note that although the expression for F now involves the point P, it is completely independent of P
since the terms involving P cancel. The sum may be said to be invariant with respect to any point
used to express it.

Examining the terms in the expression 8.4 above we can see that since $\sum f_i$ is a vector ($f$, say) the
first term reduces to the bound vector $P\wedge f$ representing a force through the chosen point P. The
vector f is called the resultant force vector.

$f = \sum f_i$

The second term is a sum of bivectors of the form $(P_i - P)\wedge f_i$. Such a bivector may be seen to
faithfully represent a moment: specifically, the moment of the force represented by $P_i\wedge f_i$ about
the point P. Let this moment be denoted $M_{iP}$:

$M_{iP} = (P_i - P)\wedge f_i$
To see that it is more reasonable that a moment be represented as a bivector rather than as a vector,
one has only to consider that the physical dimensions of a moment are the product of a length and
a force unit.

The expression for moment above clarifies distinctly how it can arise from two located entities: the
bound vector $P_i\wedge f_i$ and the point P, and yet itself be a 'free' entity. The bivector, it will be
remembered, has no concept of location associated with it. This has certainly been a source of
confusion among students of mechanics using the usual Gibbs-Heaviside three-dimensional vector
calculus. It is well known that the moment of a force about a point does not possess the property of
location, and yet it still depends on that point. While notions of 'free' and 'bound' are not properly
mathematically characterized, this type of confusion is likely to persist.

The second term in [8.4], representing the sum of the moments of the forces about the point P, will
be denoted:

$M_P = \sum (P_i - P)\wedge f_i$

Then, any system of forces may be represented by the sum of a bound vector and a bivector.

$F = P\wedge f + M_P \qquad (8.5)$

The bound vector $P\wedge f$ represents a force through an arbitrary point P with force vector equal to the
sum of the force vectors of the system.

The bivector $M_P$ represents the sum of the moments of the forces about the same point P.


Suppose the system of forces be referred to some other point $P^*$. The system of forces may then be
written in either of the two forms:

$P\wedge f + M_P = P^*\wedge f + M_{P^*}$

Solving for $M_{P^*}$ gives us a formula relating the moment sum about different points:

$M_{P^*} = M_P + (P - P^*)\wedge f$

If the bound vector term PÔf in formula 8.5 is zero then the system of forces is called a couple. If
the bivector term 0P is zero then the system of forces reduces to a single force.

Equilibrium
If a sum of forces is zero, that is:

/ ã P Ô f + 0P ã 0

then it is straightforward to show that each of PÔf and 0P must be zero. Indeed if such were not
the case then the bound vector would be equal to the negative of the bivector, a possibility
excluded by the implicit presence of the origin in a bound vector, and its absence in a bivector.
Furthermore PÔf being zero implies f is zero.

These considerations lead us directly to the basic theorem of statics. For a body to be in
equilibrium, the sum of the forces must be zero.

$ ã ‚ $i ã 0 8.6

This one expression encapsulates both the usual conditions for equilibrium of a body, that is: that
the sum of the force vectors be zero; and that the sum of the moments of the forces about an
arbitrary point P be zero.

Force in a metric 3-plane


In a 3-plane the bivector 0P is necessarily simple. In a metric 3-plane, the vector-space
complement of the simple bivector 0P is just the usual moment vector of the three-dimensional
Gibbs-Heaviside vector calculus.

In three dimensions, the complement of an exterior product is equivalent to the usual cross product.

0P ã ‚ HPi - PL µ fi

Thus, a system of forces in a metric 3-plane may be reduced to a single force through an arbitrarily
chosen point P plus the vector-space complement of the usual moment vector about P.


$ ã P Ô f + ‚ HPi - PL µ fi 8.7
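In the cross-product form [8.7] this reduction is easy to verify numerically. The sketch below is plain Python with invented points and forces (illustrative only, and quite separate from the GrassmannAlgebra package): it computes the resultant force vector and the moment sums about two different points, and checks the moment transfer formula derived above.

```python
# Reduce a system of forces {(Pi, fi)} to a single force through a chosen
# point P plus the moment sum about P, here in the cross-product form valid
# in a metric 3-plane. All data are illustrative.

def v_add(a, b): return tuple(x + y for x, y in zip(a, b))
def v_sub(a, b): return tuple(x - y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# Forces fi applied at points Pi (invented values)
system = [((1.0, 0.0, 0.0), (0.0, 2.0, 0.0)),
          ((0.0, 1.0, 2.0), (3.0, 0.0, 1.0))]

def reduce_about(P, system):
    """Resultant force vector f and moment sum M_P about P."""
    f = (0.0, 0.0, 0.0)
    M = (0.0, 0.0, 0.0)
    for Pi, fi in system:
        f = v_add(f, fi)
        M = v_add(M, cross(v_sub(Pi, P), fi))
    return f, M

P  = (0.0, 0.0, 0.0)
Ps = (1.0, 1.0, 1.0)
f, M_P  = reduce_about(P,  system)
_, M_Ps = reduce_about(Ps, system)

# Moment transfer formula: M_P* = M_P + (P - P*) x f
M_transfer = v_add(M_P, cross(v_sub(P, Ps), f))
print(f, M_P, M_Ps, M_transfer)
```

The resultant f is the same for both points, while the moment sums differ exactly by the transfer term.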

8.3 Momentum

The velocity of a particle


Suppose a point P has position vector n, that is P ã " + n.
È
Since the origin is fixed, the velocity P of the point P is clearly a vector given by the time-
derivative of the position vector of P.

È dP dn È
Pã ã ãn 8.8
dt dt

Representing momentum
As may be suspected from Newton's second law, momentum is of the same tensorial nature as
force. The momentum of a particle is represented by a bound vector as is a force. The momentum
of a system of particles (or of a rigid body, or of a system of rigid bodies) is representable by a sum
of bound vectors as is a system of forces.

The momentum of a particle comprises three factors: the mass, the position, and the velocity of the
particle. The mass is represented by a scalar, the position by a point, and the velocity by a vector.

È
& ã P Ô Km PO 8.9

Here ) is the particle momentum.


m is the particle mass.
P is the particle position point.
°
P is the particle velocity.
mP is the particle point mass.
È
mP is the particle momentum vector.

The momentum of a particle may be viewed either as a momentum vector bound to a line through
the position of the particle, or as a velocity vector bound to a line through the point mass. The
È
simple description [8.9] delineates clearly the role of the (free) vector m P. It represents all the
properties of the momentum of a particle except its position.

The momentum of a system of particles



The momentum of a system of particles is denoted by ), and may be represented by the sum of
bound vectors:

È
& ã ‚ &i ã ‚ Pi Ô Kmi Pi O 8.10

A sum of bound vectors is not necessarily reducible to a bound vector: the momentum of a system
of particles is not necessarily reducible to the momentum of a single particle. However, a sum of
bound vectors is in general reducible to the sum of a bound vector and a bivector. This is done
È
simply by adding and subtracting the terms PÔ„ mi Pi to and from the sum [8.10].

È È
& ã P Ô ‚ mi Pi + ‚ HPi - PL Ô Kmi Pi O 8.11

It cannot be too strongly emphasized that although the momentum of the system has now been
referred to the point P, it is completely independent of P.
È
Examining the terms in formula 8.11 we see that since „ mi Pi is a vector (l, say) the first term
reduces to the bound vector PÔl representing the (linear) momentum of a 'particle' situated at the
point P. The vector l is called the linear momentum of the system. The 'particle' may be viewed as
a particle with mass equal to the total mass of the system and with velocity equal to the velocity of
the centre of mass.
È
l ã ‚ mi Pi

È
The second term is a sum of bivectors of the form „ HPi -PLÔKmi Pi O. Such a bivector may be
seen to faithfully represent the moment of momentum or angular momentum of the system of
particles about the point P. Let this moment of momentum for particle i be denoted HiP .
È
HiP ã HPi - PL Ô Kmi Pi O

The expression for the angular momentum HiP above shows clearly how it can arise from two
È
located entities: the bound vector Pi ÔKmi Pi O and the point P, and yet itself be a 'free' entity.
Similarly to the notion of moment, the notion of angular momentum treated using the three-
dimensional Gibbs-Heaviside vector calculus has caused some confusion amongst students of
mechanics: the angular momentum of a particle about a point depends on the positions of the
particle and of the point and yet itself has no property of location.

The second term in formula 8.11 representing the sum of the moments of momenta about the point
P will be denoted ,P .

È
,P ã ‚ HPi - PL Ô Kmi Pi O

Then the momentum ) of any system of particles may be represented by the sum of a bound vector
and a bivector.

& ã P Ô l + 'P 8.12

The bound vector PÔl represents the linear momentum of the system referred to an arbitrary point
P.

The bivector ,P represents the angular momentum of the system about the point P.

The momentum of a system of bodies


The momentum of a system of bodies (rigid or non-rigid) is of the same form as that for a system
of particles [8.12], since to calculate momentum it is not necessary to consider any constraints
between the particles.

Suppose a number of bodies have momenta )i ; the total momentum ⁄)i may then be written:

‚ )i ã ‚ Pi Ô li + ‚ ,Pi

Referring this momentum sum to a point P by adding and subtracting the term PÔ⁄li we obtain:

‚ )i ã P Ô ‚ li + ‚ HPi - PL Ô li + ‚ ,Pi

It is thus natural to represent the linear momentum vector of the system of bodies l by:

l ã ‚ li

and the angular momentum of the system of bodies ,P by

,P ã ‚ HPi - PL Ô li + ‚ ,Pi

That is, the momentum of a system of bodies is of the same form as the momentum of a system of
particles. The linear momentum vector of the system is the sum of the linear momentum vectors of
the bodies. The angular momentum of the system is made up of two parts: the sum of the angular
momenta of the bodies about their respective chosen points; and the sum of the moments of the
linear momenta of the bodies (referred to their respective chosen points) about the point P.

Again it may be worth emphasizing that the total momentum ) is independent of the point P,
whilst its component terms are dependent on it. However, we can easily convert the formulae to
refer to some other point say P* .

) ã P Ô l + ,P ã P* Ô l + ,P*
Hence the angular momentum referred to the new point is given by:


,P* ã ,P + HP - P* L Ô l
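This transfer formula can be checked numerically in its cross-product form. The Python sketch below (invented masses, positions and velocities; not part of the GrassmannAlgebra package) computes the linear and angular momentum of a small system about two points and verifies the relation.

```python
# Check the transfer formula H_P* = H_P + (P - P*) x l for the angular
# momentum of a system of particles (cross-product form, metric 3-plane).
# All data are illustrative.

def v_add(a, b): return tuple(x + y for x, y in zip(a, b))
def v_sub(a, b): return tuple(x - y for x, y in zip(a, b))
def scale(s, a): return tuple(s * x for x in a)
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# (mass, position Pi, velocity Pi-dot) -- invented values
particles = [(2.0, (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)),
             (1.0, (0.0, 2.0, 1.0), (1.0, 0.0, 1.0))]

def momentum_about(P):
    """Linear momentum vector l and angular momentum H_P about P."""
    l = (0.0, 0.0, 0.0)
    H = (0.0, 0.0, 0.0)
    for m, Pi, Vi in particles:
        mv = scale(m, Vi)
        l = v_add(l, mv)
        H = v_add(H, cross(v_sub(Pi, P), mv))
    return l, H

P, Ps = (0.0, 0.0, 0.0), (1.0, 1.0, 0.0)
l, H_P = momentum_about(P)
_, H_Ps = momentum_about(Ps)
print(l, H_P, H_Ps)
```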

Linear momentum and the mass centre


There is a simple relationship between the linear momentum of the system, and the momentum of a
'particle' at the mass centre. The centre of mass PG of a system may be defined by:

M PG ã ‚ mi Pi

Here M is the total mass of the system.


Pi is the position of the ith particle or centre of mass of the ith body.
PG is the position of the centre of mass of the system.

Differentiating this equation yields:


È È
M PG ã ‚ mi Pi ã l

Formula 8.12 may then be written:

È
) ã P Ô KM PG O + ,P

If now we refer the momentum to the centre of mass, that is, we choose P to be PG , we can write
the momentum of the system as:

È
& ã PG Ô KM PG O + 'PG 8.13

Thus the momentum ) of a system is equivalent to the momentum of a particle of mass equal to
the total mass M of the system situated at the centre of mass PG plus the angular momentum about
the centre of mass.
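The relationship between the linear momentum and the centre of mass is likewise easy to check with a small Python sketch (illustrative values and helper names only; this is not the book's Mathematica): the total momentum equals the total mass times the velocity of the centre of mass.

```python
# Sketch of M * (PG-dot) == sum(mi * Pi-dot) == l : the linear momentum of
# a system equals that of a single 'particle' of total mass M moving with
# the centre of mass. All data are illustrative.

def v_add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(s, a): return tuple(s * x for x in a)

particles = [(2.0, (0.0, 1.0, 0.0)),   # (mass, velocity Pi-dot)
             (3.0, (1.0, 0.0, 2.0))]

M = sum(m for m, _ in particles)
l = (0.0, 0.0, 0.0)
for m, v in particles:
    l = v_add(l, scale(m, v))

# Velocity of the centre of mass: PG-dot = (1/M) * sum(mi * Pi-dot)
vG = scale(1.0 / M, l)
print(l, vG)
```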

Momentum in a metric 3-plane


In a 3-plane the bivector ,P is necessarily simple. In a metric 3-plane, the vector-space
complement of the simple bivector ,P is just the usual angular momentum vector of the three-
dimensional Gibbs-Heaviside vector calculus.

In three dimensions, the complement of the exterior product of two vectors is equivalent to the
usual cross product.

,P ã ‚ HPi - PL µ li

Thus, the momentum of a system in a metric 3-plane may be reduced to the momentum of a single
particle through an arbitrarily chosen point P plus the vector-space complement of the usual
moment of momentum vector about P.


& ã P Ô l + ‚ HPi - PL µ li 8.14

8.4 Newton's Law

Rate of change of momentum


The momentum of a system of particles has been given by [8.10] as:

È
) ã ‚ Pi Ô Kmi Pi O

The rate of change of this momentum with respect to time is:

È È È ÈÈ È È
) ã ‚ Pi Ô Kmi Pi O + ‚ Pi Ô Kmi Pi O + ‚ Pi Ô Kmi Pi O

In what follows, it will be supposed for simplicity that the masses are not time varying. Since the
first term is zero by nilpotency, this expression then becomes:

È ÈÈ
& ã ‚ Pi Ô Kmi Pi O 8.15

Consider now the system with momentum:

) ã P Ô l + ,P
The rate of change of momentum with respect to time is:

È È È È
& ã P Ô l + P Ô l + 'P 8.16

È È È
The term PÔl, or what is equivalent, the term PÔKM PG O, is zero under the following conditions:

• P is a fixed point.
• l is zero.
È È
• P is parallel to PG .
• P is equal to PG .


In particular then, when the point to which the momentum is referred is the centre of mass, the rate
of change of momentum [8.16] becomes:

È È È
& ã P G Ô l + 'P G 8.17

Newton's second law


For a system of bodies of momentum ) acted upon by a system of forces /, Newton's second law
may be expressed as:

È
$ã& 8.18

This equation captures Newton's law in its most complete form, encapsulating both linear and
È
angular terms. Substituting for / and ) from equations 8.5 and 8.16 we have:

È È È
P Ô f + %P ã P Ô l + P Ô l + 'P 8.19

By equating the bound vector terms of [8.19] we obtain the vector equation:

È
fãl
By equating the bivector terms of [8.19] we obtain the bivector equation:

È È
0P ã P Ô l + ,P
In a metric 3-plane, it is the vector complement of this bivector equation which is usually given as
the moment condition.

È È
0P ã P µ l + ,P
È
If P is a fixed point so that P is zero, Newton's law [8.18] is equivalent to the pair of equations:

È
fãl
È
8.20
%P ã 'P
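This pair of equations can be checked numerically for a simple case. The Python sketch below (an illustration with an invented trajectory, quite separate from the GrassmannAlgebra package) takes a single particle about the fixed origin and compares the force vector with the central-difference derivative of the linear momentum, and the moment with the derivative of the angular momentum.

```python
# Numerical check of the pair f = l-dot and M_P = H_P-dot ([8.20]) for a
# single particle about the fixed point P = origin. The trajectory
# P(t) = (t^2, t, 0) with mass m = 2 is an invented illustrative choice;
# time derivatives are approximated by central differences.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

m = 2.0
pos = lambda t: (t * t, t, 0.0)
vel = lambda t: (2.0 * t, 1.0, 0.0)      # exact P-dot
acc = (2.0, 0.0, 0.0)                    # exact P-double-dot

def l(t):   # linear momentum vector
    return tuple(m * v for v in vel(t))

def H(t):   # angular momentum about the origin (cross-product form)
    return cross(pos(t), l(t))

t, h = 1.5, 1e-5
f    = tuple(m * a for a in acc)                      # force vector
M    = cross(pos(t), f)                               # moment about origin
ldot = tuple((a - b) / (2 * h) for a, b in zip(l(t + h), l(t - h)))
Hdot = tuple((a - b) / (2 * h) for a, b in zip(H(t + h), H(t - h)))

ok_f = all(abs(a - b) < 1e-6 for a, b in zip(f, ldot))
ok_M = all(abs(a - b) < 1e-6 for a, b in zip(M, Hdot))
print(ok_f, ok_M)
```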


8.5 The Angular Velocity of a Rigid Body


To be completed.

8.6 The Momentum of a Rigid Body


To be completed.

8.7 The Velocity of a Rigid Body


To be completed.

8.8 The Complementary Velocity of a Rigid Body


To be completed.

8.9 The Infinitesimal Displacement of a Rigid Body


To be completed.

8.10 Work, Power and Kinetic Energy


To be completed.


9 Grassmann Algebra
Ï

9.1 Introduction
The Grassmann algebra is an algebra of "numbers" composed of linear sums of elements from any
of the exterior linear spaces generated from a given linear space. These are numbers in the same
sense that, for example, a complex number or a matrix is a number. The essential property that an
algebra has, in addition to being a linear space, is, broadly speaking, closure under a product
operation. That is, the algebra has a product operation such that the product of any two elements of
the algebra is also an element of the algebra.

Thus the exterior product spaces L discussed up to this point are not algebras, since products
m
(exterior, regressive or interior) of their elements are not generally elements of the same space.
However, there are two important exceptions. L is not only a linear space, but an algebra (and a
0
field) as well under the exterior and interior products. L is an algebra and a field under the
n
regressive product.

Many of our examples will be using GrassmannAlgebra functions, and because we will often be
changing the dimension of the space depending on the example under consideration, we will
indicate a change without comment simply by entering the appropriate DeclareBasis symbol
from the palette. For example the following entry:

&3 ;

effects the change to a 3-dimensional linear or vector space.

9.2 Grassmann Numbers

Ÿ Creating Grassmann numbers


A Grassmann number is a sum of elements from any of the exterior product spaces L. They thus
m
form a linear space in their own right, which we call L.

A basis for L is obtained from the current declared basis of L by collecting together all the basis
1
elements of the various L. This may be generated by entering BasisL[].
m


&3 ; BasisL@D
81, e1 , e2 , e3 , e1 Ô e2 , e1 Ô e3 , e2 Ô e3 , e1 Ô e2 Ô e3 <
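The basis of the whole algebra thus has 2^n elements for an n-dimensional underlying space: one basis m-element for each subset of the basis 1-elements. As an aside, this enumeration is easy to sketch outside Mathematica; the Python fragment below (illustrative only, not part of the GrassmannAlgebra package) generates the index tuples of the basis in the same grade-by-grade order as BasisL[].

```python
# Generate the 2^n basis elements of the Grassmann algebra on an
# n-dimensional space as index tuples: () stands for the scalar 1,
# (1, 2) stands for e1 ^ e2, and so on.

from itertools import combinations

def basis(n):
    """All basis elements, grade by grade, lexicographically within grade."""
    out = []
    for m in range(n + 1):
        out.extend(combinations(range(1, n + 1), m))
    return out

b3 = basis(3)
print(len(b3))   # 2^3 = 8
print(b3)
```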

If we wish to generate a general symbolic Grassmann number in the currently declared basis, we
can use the GrassmannAlgebra function CreateGrassmannNumber.
CreateGrassmannNumber[symbol] creates a Grassmann number with scalar coefficients
formed by subscripting the symbol given. For example, if we enter
CreateGrassmannNumber[x], we obtain:

CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3

The positional ordering of the xi with respect to the basis elements in any given term is due to
Mathematica's internal ordering routines, and does not affect the meaning of the result.

The CreateGrassmannNumber operation automatically adds the pattern for the generated
coefficients xi to the list of declared scalars, as we see by entering Scalars.

Scalars

:a, b, c, d, e, f, g, h, %, H_ û _L?InnerProductQ, x_ , _>


0

If we wish to enter our own coefficients, we can use CreateGrassmannNumber with a


placeholder (Ñ, ÂplÂ) as argument, and then tab through the placeholders generated, replacing
them with the desired coefficients:

CreateGrassmannNumber@ÑD
Ñ + Ñ e1 + Ñ e2 + Ñ e3 + Ñ e1 Ô e2 + Ñ e1 Ô e3 + Ñ e2 Ô e3 + Ñ e1 Ô e2 Ô e3

Note that the coefficients entered here will not be automatically put on the list of declared scalars.

If several Grassmann numbers are required, we can apply the operation to a list of symbols. That
is, CreateGrassmannNumber is Listable. Below we generate three numbers in 2-space.

&2 ; CreateGrassmannNumber@8x, y, z<D êê MatrixForm


x1 + e1 x3 + e2 x4 + x2 e1 Ô e2
y1 + e1 y3 + e2 y4 + y2 e1 Ô e2
z1 + e1 z3 + e2 z4 + z2 e1 Ô e2

Grassmann numbers can of course be entered in general symbolic form, not just using basis
elements. For example a Grassmann number might take the form:

2 + x - 3 y + 5 xÔy + xÔyÔz

In some cases it might be faster to change basis temporarily in order to generate a template with all
but the required scalars formed.


DeclareBasis@8x, y, z<D;
T = Table@CreateGrassmannNumber@ÑD, 83<D; &3 ; MatrixForm@TD
Ñ + x Ñ + y Ñ + z Ñ + Ñ xÔy + Ñ xÔz + Ñ yÔz + Ñ xÔyÔz
Ñ + x Ñ + y Ñ + z Ñ + Ñ xÔy + Ñ xÔz + Ñ yÔz + Ñ xÔyÔz
Ñ + x Ñ + y Ñ + z Ñ + Ñ xÔy + Ñ xÔz + Ñ yÔz + Ñ xÔyÔz

Ÿ Body and soul


For certain operations (particularly the inverse) it is critically important whether or not a
Grassmann number has a (non-zero) scalar component.

The scalar component of a Grassmann number is called its body. The body may be obtained from a
Grassmann number by using the Body function. Suppose we have a general Grassmann number X
in a 3-space.

&3 ; X = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3

The body of X is then:

Body@XD
x0

The rest of a Grassmann number is called its soul. The soul may be obtained from a Grassmann
number by using the Soul function. The soul of the number X is:

Soul@XD
e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3

Body and Soul apply to Grassmann numbers whose components are given in any form.

Z = Hx û yL + HHx Ô Hy + 2LL û Hz - 2LL + Hz Ó Hy Ô xLL;

8Body@ZD, Soul@ZD<
8x û y + 2 Hx û zL + z Ó y Ô x, -4 x + x Ô y û z - 2 x Ô y<

The first two terms of the body are scalar because they are interior products of 1-elements (scalar
products). The third term of the body is a scalar in a space of 3 dimensions.

Body and Soul are both Listable. Taking the Soul of the list of Grassmann numbers
generated in 2-space above gives


x0 + e1 x1 + e2 x2 + x3 e1 Ô e2
SoulB y0 + e1 y1 + e2 y2 + y3 e1 Ô e2 F êê MatrixForm
z0 + e1 z1 + e2 z2 + z3 e1 Ô e2
e1 x1 + e2 x2 + x3 e1 Ô e2
e1 y1 + e2 y2 + y3 e1 Ô e2
e1 z1 + e2 z2 + z3 e1 Ô e2

Ÿ Even and odd components


Changing the order of the factors in an exterior product changes its sign according to the axiom:

a Ô b = H-1Lm k b Ô a
m k k m

Thus factors of odd grade anti-commute; whereas, if one of the factors is of even grade, they
commute. This means that, if X is a Grassmann number whose components are all of odd grade,
then it is nilpotent.
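This behaviour is easy to model outside Mathematica. In the Python sketch below (the dict-of-index-tuples representation and all names are our own illustrative choices, not the package's internals), a Grassmann number maps sorted index tuples to coefficients; multiplying out confirms that an odd number wedged with itself vanishes, and that an even number commutes with any number.

```python
# Sketch: a Grassmann number whose components are all of odd grade is
# nilpotent, and components of even grade commute with everything.
# A number is a dict {sorted index tuple: coefficient}.

def sort_sign(idx):
    """Sign from sorting concatenated indices; 0 if an index repeats."""
    if len(set(idx)) != len(idx):
        return 0, ()            # repeated 1-element factor: term is zero
    sign, idx = 1, list(idx)
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(X, Y):
    Z = {}
    for s, xs in X.items():
        for t, yt in Y.items():
            sign, u = sort_sign(s + t)
            if sign:
                Z[u] = Z.get(u, 0) + sign * xs * yt
    return {k: v for k, v in Z.items() if v != 0}

odd  = {(1,): 3, (2,): 5, (3,): 7, (1, 2, 3): 11}   # grades 1 and 3 only
even = {(): 2, (1, 2): 13}                          # grades 0 and 2 only

print(wedge(odd, odd))                        # {} : odd numbers are nilpotent
print(wedge(even, odd) == wedge(odd, even))   # True : even parts commute
```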

The even components of a Grassmann number can be extracted with the function EvenGrade and
the odd components with OddGrade.

X
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3

EvenGrade@XD
x0 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3

OddGrade@XD
e1 x1 + e2 x2 + e3 x3 + x7 e1 Ô e2 Ô e3

It should be noted that the result returned by EvenGrade or OddGrade is dependent on the
dimension of the current space. For example, the calculations above for the even and odd
components of the number X were carried out in a 3-space. If now we change to a 2-space and
repeat the calculations, we will get a different result for OddGrade[X] because the term of grade
three is necessarily zero in this 2-space.

&2 ; OddGrade@XD
e1 x1 + e2 x2 + e3 x3

For EvenGrade and OddGrade to apply, it is not necessary that the Grassmann number be in
terms of basis elements or simple exterior products. Applying EvenGrade and OddGrade to the
number Z below still separates the number into its even and odd components:

Z = Hx û yL + HHx Ô Hy + 2LL û Hz - 2LL + Hz Ó Hy Ô xLL;


&3 ; 8EvenGrade@ZD, OddGrade@ZD<


8x û y + 2 Hx û zL + z Ó y Ô x - 2 x Ô y, -4 x + x Ô y û z<

EvenGrade and OddGrade are Listable.

To test whether a number is even or odd we use EvenGradeQ and OddGradeQ. Consider again
the general Grassmann number X in 3-space.

8EvenGradeQ@XD, OddGradeQ@XD,
EvenGradeQ@EvenGrade@XDD, OddGradeQ@EvenGrade@XDD<
8False, False, True, False<

EvenGradeQ and OddGradeQ are not Listable. When applied to a list of elements, they
require all the elements in the list to conform in order to return True. They can of course still be
mapped over a list of elements to question the evenness or oddness of the individual components.

EvenGradeQ êü 81, x, x Ô y, x Ô y Ô z<


8True, False, True, False<

Finally, there is the question as to whether the Grassmann number 0 is even or odd. It may be
recalled from Chapter 2 that the single symbol 0 is actually a shorthand for the zero element of any
of the exterior linear spaces, and hence is of indeterminate grade. Entering Grade[0] will return a
flag Grade0 which can be manipulated as appropriate. Therefore, both EvenGradeQ[0] and
OddGradeQ[0] will return False.

8Grade@0D, EvenGradeQ@0D, OddGradeQ@0D<


8Grade0, False, False<

Ÿ The grades of a Grassmann number


The GrassmannAlgebra function Grade will determine the grades of each of the components in a
Grassmann number, and return a list of the grades. For example, applying Grade to a general
number X in 3-space shows that it contains components of all grades up to 3.

X
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3

Grade@XD
80, 1, 2, 3<

Grade will also work with more general Grassmann number expressions. For example we can
apply it to the number Z which we previously defined above.

Z
x û y + x Ô H2 + yL û H-2 + zL + z Ó y Ô x


Grade@ZD
80, 1, 2<

Because it may be necessary to expand out a Grassmann number into a sum of terms (each of
which has a definite grade) before the grades can be calculated, and because Mathematica will
reorganize the ordering of those terms in the sum according to its own internal algorithms, direct
correspondence with the original form is often lost. However, if a list of the grades of each term in
an expansion of a Grassmann number is required, we should:

• Expand and simplify the number into a sum of terms.


• Create a list of the terms.
• Map Grade over the list.

For example:

A = %@ZD
-4 x + x û y + 2 Hx û zL + x Ô y û z - x Ô y Ô z Ó 1 - 2 x Ô y

A = List üü A
8-4 x, x û y, 2 Hx û zL, x Ô y û z, -Hx Ô y Ô z Ó 1L, -2 x Ô y<

Grade êü A
81, 0, 0, 1, 0, 2<

To extract the components of a Grassmann number of a given grade or grades, we can use the
GrassmannAlgebra ExtractGrade[m][X] function which takes a Grassmann number and
extracts the components of grade m.

Extracting the components of grade 1 from the number Z defined above gives:

ExtractGrade@1D@ZD
-4 x + x Ô y û z
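The grade bookkeeping behind Grade and ExtractGrade is simple to model outside Mathematica. In the illustrative Python sketch below (the dict representation and function names are our own, not the package's), the grade of a term is just the length of its index tuple.

```python
# A Grassmann number modelled as {sorted index tuple: coefficient}:
# the grade of each term is the length of its index tuple.

def grades(X):
    """Sorted list of grades present (an analogue of Grade)."""
    return sorted({len(k) for k in X})

def extract_grade(m, X):
    """Components of grade m (an analogue of ExtractGrade[m])."""
    return {k: v for k, v in X.items() if len(k) == m}

X = {(): 4, (1,): 2, (2,): -1, (1, 2): 5, (1, 2, 3): 7}
print(grades(X))            # [0, 1, 2, 3]
print(extract_grade(1, X))
```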

ExtractGrade also works on lists or tensors of GrassmannNumbers.

DeclareExtraScalars@8y_ , z_ <D

:a, b, c, d, e, f, g, h, %, H_ û _L?InnerProductQ, z_ , x_ , y_ , _>


0

x0 + e1 x1 + e2 x2 + x3 e1 Ô e2
ExtractGrade@1DB y0 + e1 y1 + e2 y2 + y3 e1 Ô e2 F êê MatrixForm
z0 + e1 z1 + e2 z2 + z3 e1 Ô e2
e1 x1 + e2 x2
e1 y1 + e2 y2
e1 z1 + e2 z2

Ÿ Working with complex scalars



The imaginary unit  is treated by Mathematica and the GrassmannAlgebra package just as if it
were a numeric quantity. This means that just like other numeric quantities, it will not appear in the
list of declared scalars, even if explicitly entered.

DeclareScalars@8a, b, c, d, 2, Â, ‰<D
8a, b, c, d<

However, Â or any other numeric quantity is treated as a scalar.

ScalarQ@8a, b, c, 2, Â, ‰, 2 - 3 Â<D
8True, True, True, True, True, True, True<

This feature allows the GrassmannAlgebra package to deal with complex numbers just as it would
any other scalars.

%@HHa + Â bL xL Ô HÂ yLD
HÂ a - bL x Ô y

9.3 Operations with Grassmann Numbers

Ÿ The exterior product of Grassmann numbers


We have already shown how to create a general Grassmann number in 3-space:

X = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3

We now create a second Grassmann number Y so that we can look at various operations applied to
two Grassmann numbers in 3-space.

Y = CreateGrassmannNumber@yD
y0 + e1 y1 + e2 y2 + e3 y3 + y4 e1 Ô e2 + y5 e1 Ô e3 + y6 e2 Ô e3 + y7 e1 Ô e2 Ô e3

The exterior product of two general Grassmann numbers in 3-space is obtained by multiplying out
the numbers termwise and simplifying the result.


%@X Ô YD
x0 y0 + e1 Hx1 y0 + x0 y1 L + e2 Hx2 y0 + x0 y2 L +
e3 Hx3 y0 + x0 y3 L + Hx4 y0 - x2 y1 + x1 y2 + x0 y4 L e1 Ô e2 +
Hx5 y0 - x3 y1 + x1 y3 + x0 y5 L e1 Ô e3 +
Hx6 y0 - x3 y2 + x2 y3 + x0 y6 L e2 Ô e3 +
Hx7 y0 + x6 y1 - x5 y2 + x4 y3 + x3 y4 - x2 y5 + x1 y6 + x0 y7 L e1 Ô e2 Ô e3

When the bodies of the numbers are zero we get only components of grades two and three.

%@Soul@XD Ô Soul@YDD
H-x2 y1 + x1 y2 L e1 Ô e2 + H-x3 y1 + x1 y3 L e1 Ô e3 + H-x3 y2 + x2 y3 L e2 Ô e3 +
Hx6 y1 - x5 y2 + x4 y3 + x3 y4 - x2 y5 + x1 y6 L e1 Ô e2 Ô e3
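The termwise expansion above amounts to concatenating basis indices and sorting them with a sign. A minimal Python model (illustrative only; the dict-of-index-tuples representation is our own, not the package's) reproduces this, including the antisymmetric coefficient of the form x1 y2 - x2 y1 on e1^e2.

```python
# Exterior product of Grassmann numbers modelled as dicts mapping sorted
# index tuples (basis elements) to scalar coefficients.

def sort_sign(idx):
    """Sign from sorting concatenated indices; 0 if an index repeats."""
    if len(set(idx)) != len(idx):
        return 0, ()            # nilpotency: repeated factor gives zero
    sign, idx = 1, list(idx)
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(X, Y):
    Z = {}
    for s, xs in X.items():
        for t, yt in Y.items():
            sign, u = sort_sign(s + t)
            if sign:
                Z[u] = Z.get(u, 0) + sign * xs * yt
    return {k: v for k, v in Z.items() if v != 0}

# (2 + 3 e1 + 5 e2) ^ (1 + 7 e1 + 11 e2), with numeric stand-ins for the
# symbolic coefficients; the e1^e2 coefficient is 3*11 - 5*7 = -2.
X = {(): 2, (1,): 3, (2,): 5}
Y = {(): 1, (1,): 7, (2,): 11}
Z = wedge(X, Y)
print(Z)
```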

Ÿ The regressive product of Grassmann numbers


The regressive product of X and Y is again obtained by multiplying out the numbers termwise and
simplifying.

Z = %@X Ó YD
e1 Ô e2 Ô e3 Ó Hx7 y0 + x6 y1 - x5 y2 + x4 y3 + x3 y4 - x2 y5 + x1 y6 + x0 y7 +
e1 Hx7 y1 - x5 y4 + x4 y5 + x1 y7 L + e2 Hx7 y2 - x6 y4 + x4 y6 + x2 y7 L +
e3 Hx7 y3 - x6 y5 + x5 y6 + x3 y7 L + Hx7 y4 + x4 y7 L e1 Ô e2 +
Hx7 y5 + x5 y7 L e1 Ô e3 + Hx7 y6 + x6 y7 L e2 Ô e3 + x7 y7 e1 Ô e2 Ô e3 L

We cannot obtain the result in the form of a pure exterior product without relating the unit n-
element to the basis n-element. In general these two elements may be related by a scalar multiple
which we have denoted !. In 3-space this relationship is given by 1ã! e1 Ôe2 Ôe3 . To obtain the
3
result as an exterior product, but one that will also involve the scalar !, we use the
GrassmannAlgebra function ToCongruenceForm.

Z1 = ToCongruenceForm@ZD
1
Hx7 y0 + x6 y1 - x5 y2 + x4 y3 + x3 y4 - x2 y5 + x1 y6 + x0 y7 +
%
e1 Hx7 y1 - x5 y4 + x4 y5 + x1 y7 L + e2 Hx7 y2 - x6 y4 + x4 y6 + x2 y7 L +
e3 Hx7 y3 - x6 y5 + x5 y6 + x3 y7 L + Hx7 y4 + x4 y7 L e1 Ô e2 +
Hx7 y5 + x5 y7 L e1 Ô e3 + Hx7 y6 + x6 y7 L e2 Ô e3 + x7 y7 e1 Ô e2 Ô e3 L

In a space with a Euclidean metric ! is unity. GrassmannSimplify looks at the currently


declared metric and substitutes the value of !. Since the currently declared metric is by default
Euclidean, applying GrassmannSimplify puts ! to unity.


%@Z1D
x7 y0 + x6 y1 - x5 y2 + x4 y3 + x3 y4 - x2 y5 + x1 y6 + x0 y7 +
e1 Hx7 y1 - x5 y4 + x4 y5 + x1 y7 L + e2 Hx7 y2 - x6 y4 + x4 y6 + x2 y7 L +
e3 Hx7 y3 - x6 y5 + x5 y6 + x3 y7 L + Hx7 y4 + x4 y7 L e1 Ô e2 +
Hx7 y5 + x5 y7 L e1 Ô e3 + Hx7 y6 + x6 y7 L e2 Ô e3 + x7 y7 e1 Ô e2 Ô e3

Ÿ The complement of a Grassmann number


Consider again a general Grassmann number X in 3-space.

X
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3

The complement of X is denoted X . Entering X applies the complement operation, but does
not simplify it in any way.

X
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3

Further simplification to an explicit Grassmann number depends on the metric of the space
concerned and may be accomplished by applying GrassmannSimplify.
GrassmannSimplify will look at the metric and make the necessary transformations. In the
Euclidean metric assumed by GrassmannAlgebra as the default we have:

%@ X D
e3 x4 - e2 x5 + e1 x6 + x7 + x3 e1 Ô e2 - x2 e1 Ô e3 + x1 e2 Ô e3 + x0 e1 Ô e2 Ô e3
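For an orthonormal basis this complement is pure bookkeeping: each basis element e_S maps to plus or minus the product of the remaining basis 1-elements, the sign being that of the permutation putting S before its complement. The Python sketch below (illustrative only, for 3-space with the default Euclidean metric; it is not the package's algorithm) reproduces the signs in the output above.

```python
# Euclidean complement in 3-space for the dict model of a Grassmann
# number: the complement of e_S is sign * e_{S^c}, where the sign is that
# of the permutation (S, S^c) relative to (1, 2, 3).

def perm_sign(seq):
    """Sign of the permutation taking seq to sorted order."""
    sign, seq = 1, list(seq)
    for i in range(len(seq)):
        for j in range(len(seq) - 1):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def complement(X, n=3):
    Z = {}
    for S, c in X.items():
        Sc = tuple(i for i in range(1, n + 1) if i not in S)
        Z[Sc] = Z.get(Sc, 0) + perm_sign(S + Sc) * c
    return Z

# Matches the Euclidean 3-space result above: e1 -> e2^e3, e1^e3 -> -e2, ...
print(complement({(1,): 1}))        # {(2, 3): 1}
print(complement({(1, 3): 1}))      # {(2,): -1}
print(complement({(): 1}))          # {(1, 2, 3): 1}
```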

For metrics other than Euclidean, the expression for the complement will be more complex. We
can explore the complement of a general Grassmann number in a general metric by entering
DeclareMetric[$] and then applying GrassmannSimplify to X . As an example, we take
the complement of a general Grassmann number in a 2-space with a general metric. We choose a 2-
space rather than a 3-space, because the result is less complex to display.

&2 ; DeclareMetric@-D
88$1,1 , $1,2 <, 8$1,2 , $2,2 <<

U = CreateGrassmannNumber@uD
u0 + e1 u1 + e2 u2 + u3 e1 Ô e2

%@ U D êê Simplify

I-u3 $21,2 + e2 Hu1 $1,1 + u2 $1,2 L + u3 $1,1 $2,2 -

e1 Hu1 $1,2 + u2 $2,2 L + u0 e1 Ô e2 M í J -$21,2 + $1,1 $2,2 N

Ÿ The interior product of Grassmann numbers



The interior product of two elements a and b where m is less than k, is always zero. Thus the
m k
interior product X û Y is zero if the components of X are of lesser grade than those of Y. For two
general Grassmann numbers we will therefore have some of the products of the components being
zero.

By way of example we take two general Grassmann numbers U and V in 2-space:

&2 ; 8U = CreateGrassmannNumber@uD, V = CreateGrassmannNumber@wD<


8u0 + e1 u1 + e2 u2 + u3 e1 Ô e2 , w0 + e1 w1 + e2 w2 + w3 e1 Ô e2 <

The interior product of U with V is:

W1 = %@U û VD
u0 w0 + e1 u1 w0 + e2 u2 w0 + HHe1 û e1 L u1 + He1 û e2 L u2 L w1 +
He1 Ô e2 û e1 L u3 w1 + HHe1 û e2 L u1 + He2 û e2 L u2 L w2 +
He1 Ô e2 û e2 L u3 w2 + He1 Ô e2 û e1 Ô e2 L u3 w3 + u3 w0 e1 Ô e2

Note that there are no products of the form ei ûHej Ôek L as these have been put to zero by
GrassmannSimplify. If we wish, we can convert the remaining interior products to scalar
products using the GrassmannAlgebra function ToScalarProducts.

W2 = ToScalarProducts@W1 D
u0 w0 + e1 u1 w0 + e2 u2 w0 + He1 û e1 L u1 w1 + He1 û e2 L u2 w1 -
He1 û e2 L e1 u3 w1 + He1 û e1 L e2 u3 w1 + He1 û e2 L u1 w2 +
He2 û e2 L u2 w2 - He2 û e2 L e1 u3 w2 + He1 û e2 L e2 u3 w2 -
He1 û e2 L2 u3 w3 + He1 û e1 L He2 û e2 L u3 w3 + u3 w0 e1 Ô e2

Finally, we can substitute the values from the currently declared metric tensor for the scalar
products using the GrassmannAlgebra function ToMetricForm. For example, if we first declare
a general metric:

DeclareMetric@-D
88$1,1 , $1,2 <, 8$1,2 , $2,2 <<

ToMetricForm@W2 D
u0 w0 + e1 u1 w0 + e2 u2 w0 + u1 w1 $1,1 + e2 u3 w1 $1,1 +
u2 w1 $1,2 - e1 u3 w1 $1,2 + u1 w2 $1,2 + e2 u3 w2 $1,2 - u3 w3 $21,2 +
u2 w2 $2,2 - e1 u3 w2 $2,2 + u3 w3 $1,1 $2,2 + u3 w0 e1 Ô e2

The interior product of two general Grassmann numbers in 3-space is


x0 y0 + x1 y1 + x2 y2 + x3 y3 + x4 y4 + e3 Hx3 y0 + x5 y1 + x6 y2 + x7 y4 L +
x5 y5 + e2 Hx2 y0 + x4 y1 - x6 y3 - x7 y5 L + x6 y6 +
e1 Hx1 y0 - x4 y2 - x5 y3 + x7 y6 L + x7 y7 + Hx4 y0 + x7 y3 L e1 Ô e2 +
Hx5 y0 - x7 y2 L e1 Ô e3 + Hx6 y0 + x7 y1 L e2 Ô e3 + x7 y0 e1 Ô e2 Ô e3

Now consider the case of two general Grassmann numbers in 3-space.

&3 ; X = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3

Y = CreateGrassmannNumber@yD
y0 + e1 y1 + e2 y2 + e3 y3 + y4 e1 Ô e2 + y5 e1 Ô e3 + y6 e2 Ô e3 + y7 e1 Ô e2 Ô e3

The interior product of X with Y is:

Z = %@X û YD
x0 y0 + e1 x1 y0 + e2 x2 y0 + e3 x3 y0 +
HHe1 û e1 L x1 + He1 û e2 L x2 + He1 û e3 L x3 L y1 + He1 Ô e2 û e1 L x4 y1 +
He1 Ô e3 û e1 L x5 y1 + He2 Ô e3 û e1 L x6 y1 + He1 Ô e2 Ô e3 û e1 L x7 y1 +
HHe1 û e2 L x1 + He2 û e2 L x2 + He2 û e3 L x3 L y2 + He1 Ô e2 û e2 L x4 y2 +
He1 Ô e3 û e2 L x5 y2 + He2 Ô e3 û e2 L x6 y2 + He1 Ô e2 Ô e3 û e2 L x7 y2 +
HHe1 û e3 L x1 + He2 û e3 L x2 + He3 û e3 L x3 L y3 + He1 Ô e2 û e3 L x4 y3 +
He1 Ô e3 û e3 L x5 y3 + He2 Ô e3 û e3 L x6 y3 + He1 Ô e2 Ô e3 û e3 L x7 y3 +
HHe1 Ô e2 û e1 Ô e2 L x4 + He1 Ô e2 û e1 Ô e3 L x5 + He1 Ô e2 û e2 Ô e3 L x6 L y4 +
He1 Ô e2 Ô e3 û e1 Ô e2 L x7 y4 +
HHe1 Ô e2 û e1 Ô e3 L x4 + He1 Ô e3 û e1 Ô e3 L x5 + He1 Ô e3 û e2 Ô e3 L x6 L y5 +
He1 Ô e2 Ô e3 û e1 Ô e3 L x7 y5 +
HHe1 Ô e2 û e2 Ô e3 L x4 + He1 Ô e3 û e2 Ô e3 L x5 + He2 Ô e3 û e2 Ô e3 L x6 L y6 +
He1 Ô e2 Ô e3 û e2 Ô e3 L x7 y6 + He1 Ô e2 Ô e3 û e1 Ô e2 Ô e3 L x7 y7 +
x4 y0 e1 Ô e2 + x5 y0 e1 Ô e3 + x6 y0 e2 Ô e3 + x7 y0 e1 Ô e2 Ô e3

We can expand these interior products to inner products and replace them by elements of the
metric tensor by using the GrassmannAlgebra function ToMetricForm. For example in the case
of a Euclidean metric we have:

1; ToMetricForm@ZD
x0 y0 + e1 x1 y0 + e2 x2 y0 + e3 x3 y0 + x1 y1 + e2 x4 y1 + e3 x5 y1 + x2 y2 -
e1 x4 y2 + e3 x6 y2 + x3 y3 - e1 x5 y3 - e2 x6 y3 + x4 y4 + e3 x7 y4 +
x5 y5 - e2 x7 y5 + x6 y6 + e1 x7 y6 + x7 y7 + x4 y0 e1 Ô e2 + x7 y3 e1 Ô e2 +
x5 y0 e1 Ô e3 - x7 y2 e1 Ô e3 + x6 y0 e2 Ô e3 + x7 y1 e2 Ô e3 + x7 y0 e1 Ô e2 Ô e3
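In the Euclidean (orthonormal) case the interior product of basis elements also reduces to index bookkeeping: e_S û e_T vanishes unless the indices of T all occur among those of S, and otherwise equals plus or minus the basis element on the remaining indices. The sign convention in the Python sketch below is inferred from the expansion above; the sketch is illustrative only and is not the package's algorithm.

```python
# Interior product of Euclidean basis elements: e_S interior e_T is zero
# unless T is a subset of S, and otherwise is sign * e_{S \ T}, the sign
# being that of the permutation (T, S \ T) of the sorted indices of S.

def perm_sign(seq):
    """Sign of the permutation taking seq to sorted order."""
    sign, seq = 1, list(seq)
    for i in range(len(seq)):
        for j in range(len(seq) - 1):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def interior(S, T):
    """Interior product of basis elements e_S and e_T, as (sign, indices)."""
    if not set(T) <= set(S):
        return 0, ()            # includes the grade(S) < grade(T) case
    rest = tuple(i for i in S if i not in T)
    return perm_sign(T + rest), rest

# Checks against the Euclidean 3-space expansion above:
print(interior((1, 2, 3), (2,)))   # (-1, (1, 3)) : the -x7 y2 e1^e3 term
print(interior((1, 2), (1,)))      # (1, (2,))
print(interior((1,), (1, 2)))      # (0, ()) : lower grade on the left
```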


9.4 Simplifying Grassmann Numbers

Ÿ Elementary simplifying operations


All of the elementary simplifying operations that work with elements of a single exterior linear
space also work with Grassmann numbers. In this section we take a Grassmann number A in 2-
space involving both exterior and interior products, and explore the application of those component
operations of GrassmannSimplify which apply to exterior and interior products.

A = HH1 + xL Ô H2 + y + y Ô xLL û H3 + zL
H1 + xL Ô H2 + y + y Ô xL û H3 + zL

Ÿ Expanding products
ExpandProducts performs the first level of the simplification operation by simply expanding
out any products. It treats scalars as elements of the Grassmann algebra, just like those of higher
grade. Taking the expression A and expanding out both the exterior and interior products gives:

A1 = ExpandProducts@AD
1Ô2û3 + 1Ô2ûz + 1Ôyû3 + 1Ôyûz + xÔ2û3 + xÔ2ûz + xÔyû3 +
xÔyûz + 1ÔyÔxû3 + 1ÔyÔxûz + xÔyÔxû3 + xÔyÔxûz

Ÿ Factoring scalars
FactorScalars collects all the scalars in each term, which figure either by way of ordinary
multiplication or by way of exterior multiplication, and multiplies them as a scalar factor at the
beginning of each term. Remember that both the exterior and interior product of scalars is a scalar.
Mathematica will also automatically rearrange the terms in this sum according to its internal
ordering algorithm.

A2 = FactorScalars@A1 D
6 + 6 x + 3 y + 2 H1 û zL + 2 Hx û zL + y û z + x Ô y û z +
yÔxûz + xÔyÔxûz + 3 xÔy + 3 yÔx + 3 xÔyÔx


Ÿ Checking for zero terms


ZeroSimplify looks at each term and decides if it is zero, either from nilpotency or from
expotency. A term is nilpotent if it contains repeated 1-element factors. A term is expotent if the
number of 1-element factors in it is greater than the dimension of the space. (We have coined this
term 'expotent' for convenience.) An interior product is zero if the grade of the first factor is less
than the grade of the second factor. ZeroSimplify then puts those terms it finds to be zero,
equal to zero.

A3 = ZeroSimplify@A2 D
6 + 6 x + 3 y + 2 Hx û zL + y û z + x Ô y û z + y Ô x û z + 3 x Ô y + 3 y Ô x

Ÿ Reordering factors
In order to simplify further it is necessary to put the factors of each term into a canonical order so
that terms of opposite sign in an exterior product will cancel and any inner product will be
transformed into just one of its two symmetric forms. This reordering may be achieved by using
the GrassmannAlgebra ToStandardOrdering function. (The rules for GrassmannAlgebra
standard ordering are given in the package documentation.)

A4 = ToStandardOrdering@A3 D
6 + 6 x + 3 y + 2 Hx û zL + y û z

Ÿ Simplifying expressions
We can perform all the simplifying operations above with GrassmannSimplify. Applying
GrassmannSimplify to our original expression A we get the expression A4 .

%@A4 D
6 + 6 x + 3 y + 2 Hx û zL + y û z
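The exterior-product part of this pipeline (expand products, discard nilpotent terms, reorder factors into canonical order with signs) can be sketched outside Mathematica. The following Python sketch is an illustration only, not the GrassmannAlgebra package: a Grassmann number is represented as a dictionary mapping tuples of ascending basis indices to coefficients, x and y are taken to be the basis 1-elements e1 and e2, and the interior product with (3 + z) is omitted.

```python
def wedge(a, b):
    """Exterior product of Grassmann numbers stored as
    {tuple of ascending basis indices: coefficient}; () keys the scalar term."""
    out = {}
    for ka, va in a.items():
        for kb, vb in b.items():
            if set(ka) & set(kb):      # nilpotency: a repeated 1-element factor gives 0
                continue
            idx = list(ka + kb)
            sign = 1                   # parity of the sort into canonical ascending order
            for i in range(len(idx)):
                for j in range(i + 1, len(idx)):
                    if idx[i] > idx[j]:
                        sign = -sign
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + sign * va * vb
    return {k: v for k, v in out.items() if v}

def add(*terms):
    """Sum of Grassmann numbers in the same dictionary representation."""
    out = {}
    for t in terms:
        for k, v in t.items():
            out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

x, y = {(1,): 1}, {(2,): 1}
# (1 + x) wedge (2 + y + y wedge x): on reordering, the y^x and x^y terms cancel
A = wedge(add({(): 1}, x), add({(): 2}, y, wedge(y, x)))
assert A == {(): 2, (1,): 2, (2,): 1}    # i.e. 2 + 2 x + y
```

The nilpotency test and the sign-counting loop play roughly the roles of ZeroSimplify and ToStandardOrdering respectively.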


9.5 Powers of Grassmann Numbers

Ÿ Direct computation of powers


The zeroth power of a Grassmann number is defined to be the unit scalar 1. The first power is
defined to be the Grassmann number itself. Higher powers can be obtained by simply taking the
exterior product of the number with itself the requisite number of times, and then applying
GrassmannSimplify to simplify the result.

To refer to the nth exterior power of a Grassmann number X (but not to compute it), we will use the
usual notation X^n .

As an example we take the general element of 3-space, X, introduced at the beginning of this
chapter, and calculate some powers:

X = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3

%@X Ô XD

x0^2 + 2 e1 x0 x1 + 2 e2 x0 x2 + 2 e3 x0 x3 + 2 x0 x4 e1 Ô e2 + 2 x0 x5 e1 Ô e3 +
2 x0 x6 e2 Ô e3 + H-2 x2 x5 + 2 Hx3 x4 + x1 x6 + x0 x7 LL e1 Ô e2 Ô e3

%@X Ô X Ô XD

x0^3 + 3 e1 x0^2 x1 + 3 e2 x0^2 x2 + 3 e3 x0^2 x3 + 3 x0^2 x4 e1 Ô e2 + 3 x0^2 x5 e1 Ô e3 +
3 x0^2 x6 e2 Ô e3 + H6 x0 x3 x4 - 6 x0 x2 x5 + 6 x0 x1 x6 + 3 x0^2 x7 L e1 Ô e2 Ô e3

Direct computation is generally an inefficient way to calculate a power, as the terms are all
expanded out before checking to see which are zero. It will be seen below that we can generally
calculate powers more efficiently by using knowledge of which terms in the product would turn
out to be zero.

Ÿ Powers of even Grassmann numbers


We take the even component of X, and construct a table of its first, second, third and fourth powers.

Xe = EvenGrade@XD
x0 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3


%@8Xe , Xe Ô Xe , Xe Ô Xe Ô Xe , Xe Ô Xe Ô Xe Ô Xe <D êê Simplify êê TableForm


x0 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3
x0 Hx0 + 2 Hx4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 LL
x0^2 Hx0 + 3 Hx4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 LL
x0^3 Hx0 + 4 Hx4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 LL

The soul of the even part of a general Grassmann number in 3-space just happens to be nilpotent
due to its being simple. (See Section 2.10.)

Xes = EvenSoul@XD
x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3

%@Xes Ô Xes D
0
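Both observations, the x0^(p-1)(x0 + p soul) pattern for even powers and the nilpotency of a simple even soul, are easy to check numerically. A minimal Python sketch follows (illustrative only, with made-up integer coefficients; a number in 3-space is a dictionary keyed by tuples of basis indices):

```python
def wedge(a, b):
    """Exterior product of Grassmann numbers stored as
    {tuple of ascending basis indices: coefficient}; () keys the scalar term."""
    out = {}
    for ka, va in a.items():
        for kb, vb in b.items():
            if set(ka) & set(kb):      # nilpotency: repeated 1-element factor
                continue
            idx = list(ka + kb)
            sign = 1
            for i in range(len(idx)):
                for j in range(i + 1, len(idx)):
                    if idx[i] > idx[j]:
                        sign = -sign
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + sign * va * vb
    return {k: v for k, v in out.items() if v}

x0 = 2
soul = {(1, 2): 3, (1, 3): 5, (2, 3): 7}   # arbitrary grade-2 soul in 3-space
Xe = {(): x0, **soul}

# the even soul of 3-space is simple, hence nilpotent
assert wedge(soul, soul) == {}

# Xe^3 == x0^2 (x0 + 3 soul), matching the cubic row of the table above
cube = wedge(wedge(Xe, Xe), Xe)
assert cube == {(): x0**3, **{k: 3 * x0**2 * v for k, v in soul.items()}}
```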

However, in the general case an even soul will not be nilpotent, but its powers will become zero
when the grades of the terms in the power expansion exceed the dimension of the space. We can
show this by calculating the powers of an even soul in spaces of successively higher dimensions.
Let

Zes = m Ô p + q Ô r Ô s Ô t + u Ô v Ô w Ô x Ô y Ô z;

Table@8&n ; Dimension, %@Zes Ô Zes D<, 8n, 3, 8<D êê TableForm


3 0
4 0
5 0
6 2 mÔpÔqÔrÔsÔt
7 2 mÔpÔqÔrÔsÔt
8 2 mÔpÔqÔrÔsÔt + 2 mÔpÔuÔvÔwÔxÔyÔz

Ÿ Powers of odd Grassmann numbers


Odd Grassmann numbers are nilpotent. Hence all positive integer powers except the first are zero.

Xo = OddGrade@XD
e1 x1 + e2 x2 + e3 x3 + x7 e1 Ô e2 Ô e3

%@Xo Ô Xo D
0
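The same fact can be checked in code: the grade-1 cross terms of an odd element cancel in anticommuting pairs, and the grade-1 by grade-3 cross terms either share a repeated factor or cancel likewise. A Python sketch with illustrative coefficients:

```python
def wedge(a, b):
    """Exterior product of Grassmann numbers stored as
    {tuple of ascending basis indices: coefficient}."""
    out = {}
    for ka, va in a.items():
        for kb, vb in b.items():
            if set(ka) & set(kb):      # nilpotency: repeated 1-element factor
                continue
            idx = list(ka + kb)
            sign = 1
            for i in range(len(idx)):
                for j in range(i + 1, len(idx)):
                    if idx[i] > idx[j]:
                        sign = -sign
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + sign * va * vb
    return {k: v for k, v in out.items() if v}

# odd part of a number in 3-space: grade-1 and grade-3 components
Xo = {(1,): 2, (2,): 3, (3,): 5, (1, 2, 3): 7}
assert wedge(Xo, Xo) == {}     # odd Grassmann numbers are nilpotent
```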


Ÿ Computing positive powers of Grassmann numbers


We can make use of the nilpotence of the odd component to compute powers of Grassmann
numbers more efficiently than the direct approach described at the beginning of this section by
noting that it will cause a binomial expansion of the sum of the even and odd components to
terminate after the first two terms. Let X ã Xe + Xo , then:

X^p ã HXe + Xo L^p ã Xe^p + p Xe^Hp-1L Ô Xo

X^p ã Xe^Hp-1L Ô HXe + p Xo L    9.1

where the powers p and p-1 refer to the exterior powers of the Grassmann numbers. For
example, the cube of a general Grassmann number X in 3-space may then be calculated as

&3 ; Xe = EvenGrade@XD; Xo = OddGrade@XD; %@Xe Ô Xe Ô HXe + 3 Xo LD

x0^3 + 3 e1 x0^2 x1 + 3 e2 x0^2 x2 + 3 e3 x0^2 x3 + 3 x0^2 x4 e1 Ô e2 + 3 x0^2 x5 e1 Ô e3 +
3 x0^2 x6 e2 Ô e3 + Hx0 H-6 x2 x5 + 6 Hx3 x4 + x1 x6 LL + 3 x0^2 x7 L e1 Ô e2 Ô e3

This algorithm is used in the general power function GrassmannPower described below.
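As a cross-check of formula 9.1, the following Python sketch (illustrative integer coefficients, not the GrassmannAlgebra package) compares the direct cube X⋀X⋀X with Xe⋀Xe⋀(Xe + 3 Xo) for a general number in 3-space:

```python
def wedge(a, b):
    """Exterior product of Grassmann numbers stored as
    {tuple of ascending basis indices: coefficient}; () keys the scalar term."""
    out = {}
    for ka, va in a.items():
        for kb, vb in b.items():
            if set(ka) & set(kb):      # nilpotency: repeated 1-element factor
                continue
            idx = list(ka + kb)
            sign = 1
            for i in range(len(idx)):
                for j in range(i + 1, len(idx)):
                    if idx[i] > idx[j]:
                        sign = -sign
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + sign * va * vb
    return {k: v for k, v in out.items() if v}

def add(*terms):
    out = {}
    for t in terms:
        for k, v in t.items():
            out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

X = {(): 2, (1,): 3, (2,): -1, (3,): 4, (1, 2): 5, (1, 3): -2, (2, 3): 1, (1, 2, 3): 6}
Xe = {k: v for k, v in X.items() if len(k) % 2 == 0}   # even component
Xo = {k: v for k, v in X.items() if len(k) % 2 == 1}   # odd component

direct = wedge(wedge(X, X), X)
shortcut = wedge(wedge(Xe, Xe), add(Xe, {k: 3 * v for k, v in Xo.items()}))
assert direct == shortcut      # formula 9.1 with p = 3
```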

Ÿ Powers of Grassmann numbers with no body


A further simplification in the computation of powers of Grassmann numbers arises when they
have no body. Powers of numbers with no body will eventually become zero. In this section we
determine formulae for the highest non-zero power of such numbers.

The formula developed above for positive powers of a Grassmann number still applies to the specific
case of numbers without a body. If we can determine an expression for the highest non-zero power
of an even number with no body, we can substitute it into the formula to obtain the required result.

Let S be an even Grassmann number with no body. Express S as a sum of components sᵢ, where i is
the grade of the component. (Components may be a sum of several terms. Components with no
underscript are 1-elements by default.)

As a first example we take a general element in 3-space.

S = s + s₂ + s₃ ;

The (exterior) square of S may be obtained by expanding and simplifying SÔS.

%@S Ô SD
s Ô s₂ + s₂ Ô s

which simplifies to:


S Ô S ã 2 s Ô s₂

It is clear from this that because this expression is of grade three, further multiplication by S would
give zero. It thus represents an expression for the highest non-zero power (the square) of a general
bodyless element in 3-space.

To generalize this result, we proceed as follows:


1) Determine an expression for the highest non-zero power of an even Grassmann number with no
body in an even-dimensional space.
2) Substitute this expression into the general expression for the positive power of a Grassmann
number in the last section.
3) Repeat these steps for the case of an odd-dimensional space.

The highest non-zero power pmax of an even Grassmann number (with no body) can be seen to be
that which enables the smallest term (of grade 2 and hence commutative) to be multiplied by itself
the largest number of times, without the result being zero. For a space of even dimension n this is
obviously pmax = nê2.

For example, let X₂ be a 2-element in a 4-space.

&4 ; X₂ = CreateElementBx₂F

x1 e1 Ô e2 + x2 e1 Ô e3 + x3 e1 Ô e4 + x4 e2 Ô e3 + x5 e2 Ô e4 + x6 e3 Ô e4

The square of this number is a 4-element. Any further multiplication by a bodyless Grassmann
number will give zero.

%BX₂ Ô X₂F

H-2 x2 x5 + 2 Hx3 x4 + x1 x6 LL e1 Ô e2 Ô e3 Ô e4
Substituting s₂ for Xe and nê2 for p into formula 9.1 for the pth power gives:

X^p ã s₂^Hnê2 - 1L Ô Hs₂ + Hnê2L Xo L
The only odd component Xo capable of generating a non-zero term is the one of grade 1. Hence Xo
is equal to s and we have finally that:

The maximum non-zero power of a bodyless Grassmann number in a space of even dimension n is
equal to nê2 and may be computed from the formula

X^pmax ã X^Hnê2L ã s₂^Hnê2 - 1L Ô Hs₂ + Hnê2L sL    n even    9.2

A similar argument may be offered for odd-dimensional spaces. The highest non-zero power of an
even Grassmann number (with no body) can be seen to be that which enables the smallest term (of
grade 2 and hence commutative) to be multiplied by itself the largest number of times, without the
result being zero. For a space of odd dimension n this is obviously Hn-1Lê2. However the highest
power of a general bodyless number will be one greater than this due to the possibility of
multiplying this even product by the element of grade 1. Hence pmax = Hn+1Lê2.

Substituting s₂ for Xe , s for Xo , and Hn+1Lê2 for p into the formula for the pth power, and noting
that the term involving the power of s₂ only is zero, leads to the following:

The maximum non-zero power of a bodyless Grassmann number in a space of odd dimension n is
equal to Hn+1Lê2 and may be computed from the formula

X^pmax ã X^HHn+1Lê2L ã HHn+1Lê2L s₂^HHn-1Lê2L Ô s    n odd    9.3

A formula which gives the highest power for both even and odd spaces is

pmax ã H1ê4L H2 n + 1 - H-1L^nL    9.4
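Formula 9.4 can be confirmed by brute force in low dimensions. The Python sketch below (illustrative only) builds a generic bodiless number, here every non-empty basis subset with coefficient 1, which happens to give no accidental cancellations for n up to 4, and wedges it with itself until the power vanishes:

```python
from itertools import combinations

def wedge(a, b):
    """Exterior product of Grassmann numbers stored as
    {tuple of ascending basis indices: coefficient}."""
    out = {}
    for ka, va in a.items():
        for kb, vb in b.items():
            if set(ka) & set(kb):      # nilpotency: repeated 1-element factor
                continue
            idx = list(ka + kb)
            sign = 1
            for i in range(len(idx)):
                for j in range(i + 1, len(idx)):
                    if idx[i] > idx[j]:
                        sign = -sign
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + sign * va * vb
    return {k: v for k, v in out.items() if v}

def pmax(n):
    # formula 9.4 for the highest non-zero power of a bodiless number
    return (2 * n + 1 - (-1) ** n) // 4

for n in range(1, 5):
    # generic bodiless number in n-space: all non-empty basis subsets
    X = {c: 1 for r in range(1, n + 1)
         for c in combinations(range(1, n + 1), r)}
    power, k = dict(X), 1
    nxt = wedge(power, X)
    while nxt:                 # keep wedging until the power vanishes
        power, k = nxt, k + 1
        nxt = wedge(power, X)
    assert k == pmax(n)        # highest non-zero power matches 9.4
```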

The following table gives the maximum non-zero power of a bodiless Grassmann number and the
formula for it for spaces up to dimension 8.

n   pmax   X^pmax
1   1      s
2   1      s + s₂
3   2      2 s₂ Ô s
4   2      H2 s + s₂L Ô s₂
5   3      3 s₂ Ô s₂ Ô s
6   3      H3 s + s₂L Ô s₂ Ô s₂
7   4      4 s₂ Ô s₂ Ô s₂ Ô s
8   4      H4 s + s₂L Ô s₂ Ô s₂ Ô s₂

Ï Verifying the formulae in a 3-space


Declare a 3-space and create a Grassmann number X3 with no body.

&3 ; X3 = Soul@CreateGrassmannNumber@xDD
e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3


:s = ExtractGrade@1D@X3 D, s₂ = ExtractGrade@2D@X3 D>
8e1 x1 + e2 x2 + e3 x3 , x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 <

%BX3 Ô X3 ã 2 s Ô s₂F

True

Ï Verifying the formulae in a 4-space


Declare a 4-space and create a Grassmann number X4 with no body.

&4 ; X4 = Soul@CreateGrassmannNumber@xDD
e1 x1 + e2 x2 + e3 x3 + e4 x4 + x5 e1 Ô e2 + x6 e1 Ô e3 +
x7 e1 Ô e4 + x8 e2 Ô e3 + x9 e2 Ô e4 + x10 e3 Ô e4 + x11 e1 Ô e2 Ô e3 +
x12 e1 Ô e2 Ô e4 + x13 e1 Ô e3 Ô e4 + x14 e2 Ô e3 Ô e4 + x15 e1 Ô e2 Ô e3 Ô e4

:s = ExtractGrade@1D@X4 D, s₂ = ExtractGrade@2D@X4 D>
8e1 x1 + e2 x2 + e3 x3 + e4 x4 ,
x5 e1 Ô e2 + x6 e1 Ô e3 + x7 e1 Ô e4 + x8 e2 Ô e3 + x9 e2 Ô e4 + x10 e3 Ô e4 <

%BX4 Ô X4 ã H2 s + s₂L Ô s₂F

True

Ÿ The inverse of a Grassmann number


Let X be a general Grassmann number in 3-space.

&3 ; X = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3

Finding an inverse of X is equivalent to finding the Grassmann number A such that XÔAã1 or AÔ
Xã1. In what follows we shall show that A commutes with X and therefore is a unique inverse
which we may call the inverse of X. To find A we need to solve the equation XÔA-1ã0. First we
create a general Grassmann number for A.

A = CreateGrassmannNumber@aD
a0 + e1 a1 + e2 a2 + e3 a3 + a4 e1 Ô e2 + a5 e1 Ô e3 + a6 e2 Ô e3 + a7 e1 Ô e2 Ô e3

Then calculate and simplify the expression XÔA-1.


%@X Ô A - 1D
-1 + a0 x0 + e1 Ha1 x0 + a0 x1 L + e2 Ha2 x0 + a0 x2 L +
e3 Ha3 x0 + a0 x3 L + Ha4 x0 + a2 x1 - a1 x2 + a0 x4 L e1 Ô e2 +
Ha5 x0 + a3 x1 - a1 x3 + a0 x5 L e1 Ô e3 +
Ha6 x0 + a3 x2 - a2 x3 + a0 x6 L e2 Ô e3 +
Ha7 x0 + a6 x1 - a5 x2 + a4 x3 + a3 x4 - a2 x5 + a1 x6 + a0 x7 L e1 Ô e2 Ô e3

To make this expression zero we need to solve for the ai in terms of the xi which make the
coefficients zero. This is most easily done with the GrassmannSolve function to be introduced
later. Here, to see more clearly the process, we do it directly with Mathematica's Solve function.

S = Solve@8-1 + a0 x0 ã 0, Ha1 x0 + a0 x1 L ã 0, Ha2 x0 + a0 x2 L ã 0,


Ha3 x0 + a0 x3 L ã 0, Ha4 x0 + a2 x1 - a1 x2 + a0 x4 L ã 0,
Ha5 x0 + a3 x1 - a1 x3 + a0 x5 L ã 0, Ha6 x0 + a3 x2 - a2 x3 + a0 x6 L ã 0,
Ha7 x0 + a6 x1 - a5 x2 + a4 x3 + a3 x4 - a2 x5 + a1 x6 + a0 x7 L ã
0<D êê Flatten
::a7 Ø H2 x3 x4 - 2 x2 x5 + 2 x1 x6 - x0 x7 Lêx0^3, a4 Ø -x4êx0^2, a5 Ø -x5êx0^2,
a6 Ø -x6êx0^2, a1 Ø -x1êx0^2, a2 Ø -x2êx0^2, a3 Ø -x3êx0^2, a0 Ø 1êx0>>

We denote the inverse of X by Xr . We obtain an explicit expression for Xr by substituting the


values obtained above in the formula for A.

Xr = A ê. S
1êx0 - e1 x1êx0^2 - e2 x2êx0^2 - e3 x3êx0^2 - x4 e1 Ô e2êx0^2 - x5 e1 Ô e3êx0^2 -
x6 e2 Ô e3êx0^2 + HH2 x3 x4 - 2 x2 x5 + 2 x1 x6 - x0 x7 Lêx0^3L e1 Ô e2 Ô e3

To verify that this is indeed the inverse and that the inverse commutes, we calculate the products of
X and Xr

%@8X Ô Xr , Xr Ô X<D êê Simplify


81, 1<

To avoid having to rewrite the coefficients for the Solve equations, we can use the
GrassmannAlgebra function GrassmannSolve (discussed in Section 9.6) to get the same results.
To use GrassmannSolve we only need to enter a single undefined symbol (here we have used
Y) for the Grassmann number we are looking to solve for.


GrassmannSolve@X Ô Y ã 1, YD
::Y Ø 1êx0 - e1 x1êx0^2 - e2 x2êx0^2 - e3 x3êx0^2 - x4 e1 Ô e2êx0^2 - x5 e1 Ô e3êx0^2 -
x6 e2 Ô e3êx0^2 - HH-2 x3 x4 + 2 x2 x5 - 2 x1 x6 + x0 x7 Lêx0^3L e1 Ô e2 Ô e3>>

It is evident from this example that (at least in 3-space), for a Grassmann number to have an
inverse, it must have a non-zero body.

To calculate an inverse directly, we can use the function GrassmannInverse. To see the pattern
for inverses in general, we calculate the inverses of a general Grassmann number in 1, 2, 3, and 4-
spaces:

Ï Inverse in a 1-space
&1 ; X1 = CreateGrassmannNumber@xD
x0 + e1 x1

X1 r = GrassmannInverse@X1 D
1êx0 - e1 x1êx0^2

Ï Inverse in a 2-space
&2 ; X2 = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + x3 e1 Ô e2

X2 r = GrassmannInverse@X2 D
1êx0 - e1 x1êx0^2 - e2 x2êx0^2 - x3 e1 Ô e2êx0^2

Ï Inverse in a 3-space
&3 ; X3 = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3

X3 r = GrassmannInverse@X3 D
1êx0 - e1 x1êx0^2 - e2 x2êx0^2 - e3 x3êx0^2 - x4 e1 Ô e2êx0^2 - x5 e1 Ô e3êx0^2 -
x6 e2 Ô e3êx0^2 + HH-2 x2 x5 + 2 Hx3 x4 + x1 x6 LLêx0^3 - x7êx0^2L e1 Ô e2 Ô e3

Ï Inverse in a 4-space

&4 ; X4 = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + e3 x3 + e4 x4 + x5 e1 Ô e2 + x6 e1 Ô e3 +
x7 e1 Ô e4 + x8 e2 Ô e3 + x9 e2 Ô e4 + x10 e3 Ô e4 + x11 e1 Ô e2 Ô e3 +
x12 e1 Ô e2 Ô e4 + x13 e1 Ô e3 Ô e4 + x14 e2 Ô e3 Ô e4 + x15 e1 Ô e2 Ô e3 Ô e4

X4 r = GrassmannInverse@X4 D
1êx0 - e1 x1êx0^2 - e2 x2êx0^2 - e3 x3êx0^2 - e4 x4êx0^2 - x5 e1 Ô e2êx0^2 -
x6 e1 Ô e3êx0^2 - x7 e1 Ô e4êx0^2 - x8 e2 Ô e3êx0^2 - x9 e2 Ô e4êx0^2 - x10 e3 Ô e4êx0^2 +
HH-2 x2 x6 + 2 Hx3 x5 + x1 x8 LLêx0^3 - x11êx0^2L e1 Ô e2 Ô e3 +
HH-2 x2 x7 + 2 Hx4 x5 + x1 x9 LLêx0^3 - x12êx0^2L e1 Ô e2 Ô e4 +
HH-2 x3 x7 + 2 Hx4 x6 + x1 x10 LLêx0^3 - x13êx0^2L e1 Ô e3 Ô e4 +
HH-2 x3 x9 + 2 Hx4 x8 + x2 x10 LLêx0^3 - x14êx0^2L e2 Ô e3 Ô e4 +
HH-2 x6 x9 + 2 Hx7 x8 + x5 x10 LLêx0^3 - x15êx0^2L e1 Ô e2 Ô e3 Ô e4

Ÿ The form of the inverse of a Grassmann number


To understand the form of the inverse of a Grassmann number it is instructive to generate it by yet
another method. Let b be a bodiless Grassmann number and bq its qth exterior power. Then the
following identity will hold.

H1 + bL Ô H1 - b + b^2 - b^3 + b^4 - ... ∓ b^qL ã 1 ∓ b^Hq+1L

We can see this most easily by writing the expansion in the form below and noting that all the
intermediate terms cancel.

1 - b + b^2 - b^3 + b^4 - ... ∓ b^q
    + b - b^2 + b^3 - b^4 + ... ± b^q ∓ b^Hq+1L

Alternatively, consider the product in the reverse order H1 - b + b^2 - b^3 + b^4 - ... ∓ b^qL Ô H1 + bL.
Expanding this gives precisely the same result. Hence a Grassmann number and its inverse
commute.


Furthermore, it has been shown in the previous section that for a bodiless Grassmann number in a
space of n dimensions, the greatest non-zero power is pmax = H1ê4L H2 n + 1 - H-1L^nL. Thus if q is
equal to pmax , b^Hq+1L is equal to zero, and the identity becomes:

H1 + bL Ô H1 - b + b^2 - b^3 + b^4 - ... ∓ b^pmaxL ã 1    9.5

We have thus shown that 1 - b + b^2 - b^3 + b^4 - ... ∓ b^pmax is the inverse of 1 + b.

If now we have a general Grassmann number X, say, we can write X as Xãx0 H1+bL, so that if Xs
is the soul of X, then

X ã x0 H1 + bL ã x0 + Xs ,    b ã Xs êx0

The inverse of X then becomes:

X^-1 = H1êx0L H1 - Xsêx0 + HXsêx0L^2 - HXsêx0L^3 + ... ∓ HXsêx0L^pmaxL    9.6
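Formula 9.6 is straightforward to verify numerically. The Python sketch below (illustrative coefficients, exact rational arithmetic, not the GrassmannInverse implementation) builds the series inverse (1/x0)(1 - b + b^2) for a number in 3-space, where pmax = 2, and checks that it is a two-sided inverse:

```python
from fractions import Fraction

def wedge(a, b):
    """Exterior product of Grassmann numbers stored as
    {tuple of ascending basis indices: coefficient}; () keys the scalar term."""
    out = {}
    for ka, va in a.items():
        for kb, vb in b.items():
            if set(ka) & set(kb):      # nilpotency: repeated 1-element factor
                continue
            idx = list(ka + kb)
            sign = 1
            for i in range(len(idx)):
                for j in range(i + 1, len(idx)):
                    if idx[i] > idx[j]:
                        sign = -sign
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + sign * va * vb
    return {k: v for k, v in out.items() if v}

x0 = Fraction(2)
Xs = {(1,): 3, (2,): -1, (3,): 4, (1, 2): 5, (1, 3): -2, (2, 3): 1, (1, 2, 3): 6}
b = {k: v / x0 for k, v in Xs.items()}     # b = Xs / x0
b2 = wedge(b, b)                           # b^3 = 0 in 3-space (pmax = 2)

series = {(): Fraction(1)}                 # 1 - b + b^2
for k, v in b.items():
    series[k] = series.get(k, Fraction(0)) - v
for k, v in b2.items():
    series[k] = series.get(k, Fraction(0)) + v
inv = {k: v / x0 for k, v in series.items() if v}

X = {(): x0, **Xs}
assert wedge(X, inv) == {(): 1}      # right inverse
assert wedge(inv, X) == {(): 1}      # and the inverse commutes
```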

We tabulate some examples for Xãx0 +Xs , expressing the inverse X-1 both in terms of Xs and x0
alone, and in terms of the even and odd components s and s₂ of Xs .

n   pmax   X^-1 Hin terms of Xs and x0L          X^-1 Hin terms of s and s₂L
1   1      H1êx0L H1 - Xsêx0L                     H1êx0L H1 - Xsêx0L
2   1      H1êx0L H1 - Xsêx0L                     H1êx0L H1 - Xsêx0L
3   2      H1êx0L H1 - Xsêx0 + HXsêx0L^2L         H1êx0L H1 - Xsêx0 + H2êx0^2L s Ô s₂L
4   2      H1êx0L H1 - Xsêx0 + HXsêx0L^2L         H1êx0L H1 - Xsêx0 + H1êx0^2L H2 s + s₂L Ô s₂L

It is easy to verify the formulae of the table for dimensions 1 and 2 from the results above. The
formulae for spaces of dimension 3 and 4 are given below. But for a slight rearrangement of terms
they are identical to the results above.

Ï Verifying the formula in a 3-space


&3 ; Xs = Soul@CreateGrassmannNumber@xDD
e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3


:s = ExtractGrade@1D@Xs D, s₂ = ExtractGrade@2D@Xs D>
8e1 x1 + e2 x2 + e3 x3 , x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 <

%BH1êx0L H1 - Xsêx0 + H2êx0^2L s Ô s₂L Ô Hx0 + Xs LF

Ï Verifying the formula in a 4-space


&4 ; Xs = Soul@CreateGrassmannNumber@xDD
e1 x1 + e2 x2 + e3 x3 + e4 x4 + x5 e1 Ô e2 + x6 e1 Ô e3 +
x7 e1 Ô e4 + x8 e2 Ô e3 + x9 e2 Ô e4 + x10 e3 Ô e4 + x11 e1 Ô e2 Ô e3 +
x12 e1 Ô e2 Ô e4 + x13 e1 Ô e3 Ô e4 + x14 e2 Ô e3 Ô e4 + x15 e1 Ô e2 Ô e3 Ô e4

:s = ExtractGrade@1D@Xs D, s₂ = ExtractGrade@2D@Xs D>
8e1 x1 + e2 x2 + e3 x3 + e4 x4 ,
x5 e1 Ô e2 + x6 e1 Ô e3 + x7 e1 Ô e4 + x8 e2 Ô e3 + x9 e2 Ô e4 + x10 e3 Ô e4 <

%BH1êx0L H1 - Xsêx0 + H1êx0^2L H2 s + s₂L Ô s₂L Ô Hx0 + Xs LF

Ÿ Integer powers of a Grassmann number


In the GrassmannAlgebra function GrassmannPower we collect together the algorithm
introduced above for positive integer powers of Grassmann numbers with that for calculating
inverses, so that GrassmannPower works with any integer power, positive or negative.

? GrassmannPower
GrassmannPower@X,nD computes the nth power of a Grassmann
number X in a space of the currently declared number of
dimensions. n may be any numeric or symbolic quantity.

Ï Powers in terms of basis elements


&2 ; X = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + x3 e1 Ô e2

Y = GrassmannPower@X, 3D

x0^3 + 3 e1 x0^2 x1 + 3 e2 x0^2 x2 + 3 x0^2 x3 e1 Ô e2


Z = GrassmannPower@X, -3D
1êx0^3 - 3 e1 x1êx0^4 - 3 e2 x2êx0^4 - 3 x3 e1 Ô e2êx0^4

As usual, we verify with GrassmannSimplify.

%@8Y Ô Z, Z Ô Y<D
81, 1<

Ï General powers
GrassmannPower will also work with general elements that are not expressed in terms of basis
elements.

X = 1 + x + x Ô y + x Ô y Ô z;

8Y = GrassmannPower@X, 3D, Z = GrassmannPower@X, -3D<


81 + 3 x + 3 x Ô y, 1 - 3 x - 3 x Ô y<

%@8Y Ô Z, Z Ô Y<D
81, 1<

Ï Symbolic powers
We take a general Grassmann number in 2-space.

&2 ; X = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + x3 e1 Ô e2

Y = GrassmannPower@X, nD

x0^n + n e1 x0^Hn-1L x1 + n e2 x0^Hn-1L x2 + n x0^Hn-1L x3 e1 Ô e2

Z = GrassmannPower@X, -nD

x0^-n - n e1 x0^H-1-nL x1 - n e2 x0^H-1-nL x2 - n x0^H-1-nL x3 e1 Ô e2

Before we verify that these are indeed inverses of each other, we need to declare the power n to be
a scalar.

DeclareExtraScalars@nD;

%@8Y Ô Z, Z Ô Y<D
81, 1<

The nth power of a general Grassmann number in 3-space is:


&3 ; X = CreateGrassmannNumber@xD; Y = GrassmannPower@X, nD

x0^n + n e1 x0^Hn-1L x1 + n e2 x0^Hn-1L x2 + n e3 x0^Hn-1L x3 +
n x0^Hn-1L x4 e1 Ô e2 + n x0^Hn-1L x5 e1 Ô e3 + n x0^Hn-1L x6 e2 Ô e3 +
Hx0^Hn-2L HHn - n^2L x2 x5 + H-n + n^2L Hx3 x4 + x1 x6 LL + n x0^Hn-1L x7 L e1 Ô e2 Ô e3

Z = GrassmannPower@X, -nD

x0^-n - n e1 x0^H-1-nL x1 - n e2 x0^H-1-nL x2 - n e3 x0^H-1-nL x3 -
n x0^H-1-nL x4 e1 Ô e2 - n x0^H-1-nL x5 e1 Ô e3 - n x0^H-1-nL x6 e2 Ô e3 +
Hx0^H-2-nL Hx3 Hn x4 + n^2 x4 L - n x2 x5 - n^2 x2 x5 + n x1 x6 + n^2 x1 x6 L -
n x0^H-1-nL x7 L e1 Ô e2 Ô e3

%@8Y Ô Z, Z Ô Y<D
81, 1<

Ÿ General powers of a Grassmann number


As well as integer powers, GrassmannPower is able to compute non-integer powers.

Ï The square root of a Grassmann number


The square root of a general Grassmann number X in 3-space is given by:

&3 ; X = CreateGrassmannNumber@xD; Y = GrassmannPowerBX, 1ê2F

√x0 + e1 x1êH2 √x0L + e2 x2êH2 √x0L + e3 x3êH2 √x0L + x4 e1 Ô e2êH2 √x0L +
x5 e1 Ô e3êH2 √x0L + x6 e2 Ô e3êH2 √x0L +
HH-x3 x4 + x2 x5 - x1 x6 LêH4 x0^H3ê2LL + x7êH2 √x0LL e1 Ô e2 Ô e3

We verify that this is indeed the square root.

Simplify@%@X ã Y Ô YDD
True
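The same square-root pattern can be checked numerically: with b = Xs/x0 and pmax = 2, the truncated binomial series √x0 (1 + b/2 - b²/8) squares back to X exactly. Below is a Python sketch with x0 = 4, so that √x0 is an exact integer, and illustrative coefficients:

```python
from fractions import Fraction

def wedge(a, b):
    """Exterior product of Grassmann numbers stored as
    {tuple of ascending basis indices: coefficient}; () keys the scalar term."""
    out = {}
    for ka, va in a.items():
        for kb, vb in b.items():
            if set(ka) & set(kb):      # nilpotency: repeated 1-element factor
                continue
            idx = list(ka + kb)
            sign = 1
            for i in range(len(idx)):
                for j in range(i + 1, len(idx)):
                    if idx[i] > idx[j]:
                        sign = -sign
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + sign * va * vb
    return {k: v for k, v in out.items() if v}

x0 = 4
Xs = {(1,): 1, (2,): 2, (3,): 3, (1, 2): 4, (1, 3): 5, (2, 3): 6, (1, 2, 3): 7}
b = {k: Fraction(v, x0) for k, v in Xs.items()}
b2 = wedge(b, b)                          # b^3 = 0 in 3-space

series = {(): Fraction(1)}                # 1 + b/2 - b^2/8
for k, v in b.items():
    series[k] = series.get(k, Fraction(0)) + v / 2
for k, v in b2.items():
    series[k] = series.get(k, Fraction(0)) - v / 8
Y = {k: 2 * v for k, v in series.items()}   # sqrt(x0) = 2

X = {(): Fraction(x0), **{k: Fraction(v) for k, v in Xs.items()}}
assert wedge(Y, Y) == X                   # Y is indeed the square root of X
```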

Ï The pth power of a Grassmann number


Consider a general Grassmann number X in 2-space raised to the power of p and -p.

&2 ; X = CreateGrassmannNumber@xD; Y = GrassmannPower@X, pD

x0^p + p e1 x0^Hp-1L x1 + p e2 x0^Hp-1L x2 + p x0^Hp-1L x3 e1 Ô e2


Z = GrassmannPower@X, -pD

x0^-p - p e1 x0^H-1-pL x1 - p e2 x0^H-1-pL x2 - p x0^H-1-pL x3 e1 Ô e2

We verify that these are indeed inverses.

%@8Y Ô Z, Z Ô Y<D
81, 1<

9.6 Solving Equations

Ÿ Solving for unknown coefficients


Suppose we have a Grassmann number whose coefficients are functions of a number of
parameters, and we wish to find the values of the parameters which will make the number zero. For
example:

Z = a - b - 1 + Hb + 3 - 5 aL e1 + Hd - 2 c + 6L e3 + He + 4 aL e1 Ô e2 +
Hc + 2 + 2 gL e2 Ô e3 + H-1 + e - 7 g + bL e1 Ô e2 Ô e3 ;

We can calculate the values of the parameters by using the GrassmannAlgebra function
GrassmannScalarSolve.

GrassmannScalarSolve@Z ã 0D
::a Ø 1ê2, b Ø -1ê2, c Ø -1, d Ø -8, e Ø -2, g Ø -1ê2>>

GrassmannScalarSolve will also work for several equations and for general symbols (other
than basis elements).

Z1 = 8a H-xL + b y Ô H-2 xL + f == c x + d x Ô y,
H7 - aL y == H13 - dL y Ô x - f<;

GrassmannScalarSolve@Z1 D
::a Ø 7, c Ø -7, d Ø 13, b Ø 13ê2, f Ø 0>>
2

GrassmannScalarSolve can be used in several other ways. As with other Mathematica


functions its usage statement can be obtained by entering ?GrassmannScalarSolve.


? GrassmannScalarSolve
GrassmannScalarSolve@eqnsD attempts to find the values of those
HdeclaredL scalar variables which make the equations True.
GrassmannScalarSolve@eqns,scalarsD attempts to find the
values of the scalars which make the equations True. If
not already in the DeclaredScalars list the scalars will
be added to it. GrassmannScalarSolve@eqns,scalars,elimsD
attempts to find the values of the scalars which make the
equations True while eliminating the scalars elims. If
the equations are not fully solvable, GrassmannScalarSolve
will still find the values of the scalar variables which
reduce the number of terms in the equations as much as
possible, and will additionally return the reduced equations.

Ÿ Solving for an unknown Grassmann number


The function for solving for an unknown Grassmann number, or several unknown Grassmann
numbers is GrassmannSolve.

? GrassmannSolve
GrassmannSolve@eqns,varsD attempts to solve an equation or set
of equations for the Grassmann number variables vars. If
the equations are not fully solvable, GrassmannSolve will
still find the values of the Grassmann number variables
which reduce the number of terms in the equations as much as
possible, and will additionally return the reduced equations.

Suppose first that we have an equation involving an unknown Grassmann number, A, say. For
example if X is a general number in 3-space, and we want to find its inverse A as we did in the
Section 9.5 above, we can solve the following equation for A:

&3 ; X = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3

R = GrassmannSolve@X Ô A ã 1, AD
::A Ø 1êx0 - e1 x1êx0^2 - e2 x2êx0^2 - e3 x3êx0^2 - x4 e1 Ô e2êx0^2 - x5 e1 Ô e3êx0^2 -
x6 e2 Ô e3êx0^2 - HH-2 x3 x4 + 2 x2 x5 - 2 x1 x6 + x0 x7 Lêx0^3L e1 Ô e2 Ô e3>>

As usual, we verify the result.


%@X Ô A ã 1 ê. RD
8True<

In general, GrassmannSolve can solve the same sorts of equations that Mathematica's inbuilt
Solve routines can deal with because it uses Solve as its main calculation engine. In particular, it
can handle powers of the unknowns. Further, it does not require the equations necessarily to be
expressed in terms of basis elements. For example, we can take a quadratic equation in a:

Q = a Ô a + 2 H1 + x + x Ô yL Ô a + H1 + 2 x + 2 x Ô yL ã 0;

S = GrassmannSolve@Q, aD
88a Ø -1 + x C1 + C2 x Ô y<<

Here there are two arbitrary parameters C1 and C2 introduced by the GrassmannSolve function.
(C is a symbol reserved by Mathematica for the generation of unknown coefficients in the solution
of equations.)
Now test whether a is indeed a solution to the equation:

%@Q ê. SD
8True<

Note that here we have a double infinity of solutions, parametrized by the pair {C1 ,C2 }. This
means that for any values of C1 and C2 , S is a solution. For example for various C1 and C2 :

8%@Q ê. a Ø -1 - xD, %@Q ê. a Ø -1D, %@Q ê. a Ø -1 - 9 x Ô yD<


8True, True, True<
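The two-parameter family of solutions can be mirrored outside Mathematica: with x and y taken as basis 1-elements, a = -1 + C1 x + C2 x⋀y satisfies the quadratic for any scalars C1 and C2. A Python sketch (illustrative only, not the GrassmannSolve implementation):

```python
def wedge(a, b):
    """Exterior product of Grassmann numbers stored as
    {tuple of ascending basis indices: coefficient}; () keys the scalar term."""
    out = {}
    for ka, va in a.items():
        for kb, vb in b.items():
            if set(ka) & set(kb):      # nilpotency: repeated 1-element factor
                continue
            idx = list(ka + kb)
            sign = 1
            for i in range(len(idx)):
                for j in range(i + 1, len(idx)):
                    if idx[i] > idx[j]:
                        sign = -sign
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + sign * va * vb
    return {k: v for k, v in out.items() if v}

def add(*terms):
    out = {}
    for t in terms:
        for k, v in t.items():
            out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

def scal(c, t):
    return {k: c * v for k, v in t.items()}

x, y = {(1,): 1}, {(2,): 1}
xy = wedge(x, y)
u = add({(): 1}, x, xy)                  # 1 + x + x^y

for C1, C2 in [(0, 0), (3, 5), (-2, 7)]:
    a = add({(): -1}, scal(C1, x), scal(C2, xy))
    # a^a + 2 u^a + (1 + 2 x + 2 x^y) should vanish for every C1, C2
    Q = add(wedge(a, a), scal(2, wedge(u, a)),
            {(): 1}, scal(2, x), scal(2, xy))
    assert Q == {}
```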

GrassmannSolve can also take several equations in several unknowns. As an example we take
the following pair of equations:

Q = 8H1 + xL Ô b + x Ô y Ô a ã 3 + 2 x Ô y,
H1 - yL Ô a Ô a + H2 + y + x + 5 x Ô yL ã 7 - 5 x<;

First, we solve for the two unknown Grassmann numbers a and b.

S = GrassmannSolve@Q, 8a, b<D


::a Ø -√5 + 3 xê√5 - 2 yê√5 - x Ô yêH2 √5L, b Ø 3 - 3 x + H2 + √5L x Ô y>,
:a Ø √5 - 3 xê√5 + 2 yê√5 + x Ô yêH2 √5L, b Ø 3 - 3 x + H2 - √5L x Ô y>>

Here we get two solutions, which we verify with GrassmannSimplify.

%@Q ê. SD
88True, True<, 8True, True<<

Note that GrassmannAlgebra assumes that all other symbols (here, x and y) are 1-elements. We
can add the symbol x to the list of Grassmann numbers to be solved for.


S = GrassmannSolve@Q, 8a, b, x<D


::a Ø C2 + y H-31 - 36 C1 + 11 C2^2LêH12 C2 L,
b Ø -18êH-11 + C2^2L + y H2 - C2 - 108 C1êH-11 + C2^2L^2 + 12êH-11 + C2^2L - 6 C2êH-11 + C2^2LL,
x Ø y C1 + H1ê6L H5 - C2^2L>, :a Ø y C3 , b Ø 18ê11 + 203 yê121, x Ø 5ê6 - 31 yê36>>

Again we get two solutions, but this time involving some arbitrary scalar constants.

%@Q ê. SD êê Simplify
88True, True<, 8True, True<<

9.7 Exterior Division

Ÿ Defining exterior quotients


An exterior quotient of two Grassmann numbers X and Y is a solution Z to the equation:

X ã YÔZ
or to the equation:

X ã ZÔY

To distinguish these two possibilities, we call the solution Z to the first equation XãYÔZ the left
exterior quotient of X by Y because the denominator Y multiplies the quotient Z from the left.

Correspondingly, we call the solution Z to the second equation XãZÔY the right exterior quotient
of X by Y because the denominator Y multiplies the quotient Z from the right.

To solve for Z in either case, we can use the GrassmannSolve function. For example if X and Y
are general elements in a 2-space, the left exterior quotient is obtained as:

&2 ; X = CreateGrassmannNumber@xD; Y = CreateGrassmannNumber@yD;

GrassmannSolve@X ã Y Ô Z, 8Z<D
::Z Ø x0êy0 - e1 H-x1 y0 + x0 y1 Lêy0^2 - e2 H-x2 y0 + x0 y2 Lêy0^2 -
HH-x3 y0 + x2 y1 - x1 y2 + x0 y3 Lêy0^2L e1 Ô e2>>

Whereas the right exterior quotient is obtained as:


GrassmannSolve@X ã Z Ô Y, 8Z<D
::Z Ø x0êy0 - e1 H-x1 y0 + x0 y1 Lêy0^2 - e2 H-x2 y0 + x0 y2 Lêy0^2 -
HH-x3 y0 - x2 y1 + x1 y2 + x0 y3 Lêy0^2L e1 Ô e2>>

Note the differences in signs in the e1 Ô e2 term.

To obtain these quotients more directly, GrassmannAlgebra provides the functions


LeftGrassmannDivide and RightGrassmannDivide.

? LeftGrassmannDivide
LeftGrassmannDivide@X,YD calculates
the Grassmann number Z such that X == YÔZ.

? RightGrassmannDivide
RightGrassmannDivide@X,YD calculates
the Grassmann number Z such that X == ZÔY.
A shorthand infix notation for the division operation is provided by the down-vector symbol
LeftDownVector B (LeftGrassmannDivide) and RightDownArrow E
(RightGrassmannDivide). The previous examples would then take the form

X Y
x0êy0 - e1 H-x1 y0 + x0 y1 Lêy0^2 - e2 H-x2 y0 + x0 y2 Lêy0^2 -
HH-x3 y0 + x2 y1 - x1 y2 + x0 y3 Lêy0^2L e1 Ô e2

X Y
x0êy0 - e1 H-x1 y0 + x0 y1 Lêy0^2 - e2 H-x2 y0 + x0 y2 Lêy0^2 -
HH-x3 y0 - x2 y1 + x1 y2 + x0 y3 Lêy0^2L e1 Ô e2

Ÿ Special cases of exterior division

Ï The inverse of a Grassmann number


The inverse of a Grassmann number may be obtained as either the left or right form quotient.


81 X, 1 X<
:1êx0 - e1 x1êx0^2 - e2 x2êx0^2 - x3 e1 Ô e2êx0^2,
1êx0 - e1 x1êx0^2 - e2 x2êx0^2 - x3 e1 Ô e2êx0^2>

Ï Division by a scalar
Division by a scalar is just the Grassmann number divided by the scalar as expected.

8X a, X a<
:x0êa + e1 x1êa + e2 x2êa + x3 e1 Ô e2êa, x0êa + e1 x1êa + e2 x2êa + x3 e1 Ô e2êa>

Ï Division of a Grassmann number by itself


Division of a Grassmann number by itself give unity.

8X X, X X<
81, 1<

Ï Division by scalar multiples


Division involving scalar multiples also gives the results expected.

8Ha XL X, X Ha XL<
1
:a, >
a

Ï Division by an even Grassmann number


Division by an even Grassmann number will give the same result for both division operations,
since exterior multiplication by an even Grassmann number is commutative.

8X H1 + e1 Ô e2 L, X H1 + e1 Ô e2 L<
8x0 + e1 x1 + e2 x2 + H-x0 + x3 L e1 Ô e2 ,
x0 + e1 x1 + e2 x2 + H-x0 + x3 L e1 Ô e2 <

Ÿ The non-uniqueness of exterior division


Because of the nilpotency of the exterior product, the division of Grassmann numbers does not
necessarily lead to a unique result. To take a simple case consider the division of a simple 2-
element xÔy by one of its 1-element factors x.


Hx Ô yL x
y + x C1 + C2 x Ô y

The quotient is returned as a linear combination of elements containing arbitrary scalar constants
C1 and C2 . To see that this is indeed a left quotient, we can multiply it by the denominator from the
left and see that we get the numerator.

%@x Ô Hy + x C1 + C2 x Ô yLD
xÔy

We get a slightly different result for the right quotient which we verify by multiplying from the
right.

(x Ô y) ⇂ x
-y + x C1 + C2 x Ô y

%@H-y + x C1 + C2 x Ô yL Ô xD
xÔy
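The arbitrariness of C1 and C2 can also be confirmed numerically. The Python sketch below (a generic re-implementation of the exterior product on bitmask-encoded basis blades; none of it is GrassmannAlgebra code) checks that x Ô (y + C1 x + C2 x Ô y) recovers x Ô y for several arbitrary choices of the constants, and likewise for the right quotient.

```python
from itertools import product

def blade_product(a, b):
    # Merge basis blades given as bitmasks (bit i <-> the i-th basis 1-element).
    # Returns (sign, blade); sign 0 if the blades share a factor (nilpotency).
    if a & b:
        return 0, 0
    sign = 1
    for i in range(b.bit_length()):
        if b >> i & 1:
            # each factor of a above position i must hop over e_i
            if bin(a >> (i + 1)).count("1") % 2:
                sign = -sign
    return sign, a | b

def wedge(x, y):
    # Multivectors represented as {bitmask: coefficient} dictionaries.
    out = {}
    for (ba, ca), (bb, cb) in product(x.items(), y.items()):
        s, blade = blade_product(ba, bb)
        if s:
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c}

x, y = {0b001: 1}, {0b010: 1}          # x = e1, y = e2
xy = wedge(x, y)                        # x^y = e1^e2
for C1, C2 in [(17, -3), (0, 5), (-2, 9)]:
    left = {0b010: 1, 0b001: C1, 0b011: C2}    # y + C1 x + C2 x^y
    right = {0b010: -1, 0b001: C1, 0b011: C2}  # -y + C1 x + C2 x^y
    assert wedge(x, left) == xy
    assert wedge(right, x) == xy
```

The extra terms vanish under the wedge precisely because they share a factor with x, which is the nilpotency at the root of the non-uniqueness.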

Here is a slightly more complex example.

&3 ; (x Ô y Ô z) ⇃ (x - 2 y + 3 z + x Ô z)
z (3/2 - (9 C1)/2 - 3 C2 - (3 C3)/2) + x (-(1/2) + (3 C1)/2 - C2 - C3/2) + y (-1 + 3 C1 + 2 C2 + C3) + C1 x Ô y + C2 x Ô z + C3 y Ô z + C4 x Ô y Ô z

In some circumstances we may not want the most general result. Say, for example, we knew that
the result we wanted had to be a 1-element. We can use the GrassmannAlgebra function
ExtractGrade.

ExtractGrade[1][(x Ô y Ô z) ⇃ (x - 2 y + 3 z + x Ô z)]
z (3/2 - (9 C1)/2 - 3 C2 - (3 C3)/2) + x (-(1/2) + (3 C1)/2 - C2 - C3/2) + y (-1 + 3 C1 + 2 C2 + C3)
Should there be insufficient information to determine a result, the quotient will be returned
unchanged. For example.

(x Ô y) ⇃ z
(x Ô y) ⇃ z


9.8 Factorization of Grassmann Numbers

The non-uniqueness of factorization


Factorization of a Grassmann number will rarely be unique due to the nilpotency property of the
exterior product. However, using the capability of GrassmannScalarSolve to generate general
solutions with arbitrary constants we may be able to find a factorization in the form we require.
The GrassmannAlgebra function which implements the attempt to factorize is
GrassmannFactor.

? GrassmannFactor
GrassmannFactor@X,FD attempts to factorize the Grassmann number X
into the form given by F. F must be an expression involving the
symbols of X, together with sufficient scalar coefficients whose
values are to be determined. GrassmannFactor@X,F,SD attempts to
factorize the Grassmann number X by determining the scalars S.
If the scalars S are not already declared as scalars, they will
be automatically declared. If no factorization can be effected
in the form given, the original expression will be returned.

Ÿ Example: Factorizing a Grassmann number in 2-space


First, consider a general Grassmann number in 2-space.

&2 ; X = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + x3 e1 Ô e2

By using GrassmannFactor we find a general factorization into two Grassmann numbers of lower maximum grade. We want to see if there is a factorization in the form (a + b e1 + c e2) Ô (f + g e1 + h e2), so we enter this as the second argument. The scalars that we want to find are {a,b,c,f,g,h}, so we enter this list as the third argument.

Xf = GrassmannFactor[X, (a + b e1 + c e2) Ô (f + g e1 + h e2), {a, b, c, f, g, h}]
(x0/f + ((-h x0 + f x2)/f^2) e2 + ((-h x0 x1 + f x1 x2 + f x0 x3)/(f^2 x2)) e1) Ô (f + h e2 + ((h x1 - f x3)/x2) e1)

This factorization is valid for any values of the scalars {a,b,c,f,g,h} which appear in the
result, as we can verify by applying GrassmannSimplify to the result.


%@XfD
x0 + e1 x1 + e2 x2 + x3 e1 Ô e2

Ÿ Example: Factorizing a 2-element in 3-space


Next, consider a general 2-element in 3-space. We know that 2-elements in 3-space are simple,
hence we would expect to be able to obtain a factorization.

&3 ; Y = CreateElement[y_2]

y1 e1 Ô e2 + y2 e1 Ô e3 + y3 e2 Ô e3

By using GrassmannFactor we can determine a factorization. In this example we obtain two


classes of solution, each with different parameters.

Xf = GrassmannFactor[Y, (a e1 + b e2 + c e3) Ô (d e1 + e e2 + f e3), {a, b, c, d, e, f}]
{(c e3 + (e1 (y2 + (c (-f y1 + e y2))/y3))/f + (e2 (c e + y3))/f) Ô (e e2 + f e3 + (e1 (-f y1 + e y2))/y3),
 (b e2 + (e1 (y1 + (b e y2)/y3))/e - (e3 y3)/e) Ô (e e2 + (e e1 y2)/y3)}

Both may be verified as valid factorizations.

%@XfD
8y1 e1 Ô e2 + y2 e1 Ô e3 + y3 e2 Ô e3 , y1 e1 Ô e2 + y2 e1 Ô e3 + y3 e2 Ô e3 <

Ÿ Example: Factorizing a 3-element in 4-space


Finally, consider a specific 3-element in 4-space.

&4 ; X = -14 w Ô x Ô y + 26 w Ô x Ô z + 11 w Ô y Ô z + 13 x Ô y Ô z;

Since every (n-1)-element in an n-space is simple, we know the 3-element can be factorized.
Suppose also that we would like the factorization in the following form:

J = (x a1 + y a2) Ô (y b2 + z b3) Ô (z g3 + w g4)
(x a1 + y a2) Ô (y b2 + z b3) Ô (z g3 + w g4)


By using GrassmannFactor we can explore if a factorization into such a form exists.

XJ = GrassmannFactor[X, J]
(x a1 + (11 y a1)/26) Ô (-((14 y)/(a1 g4)) + (26 z)/(a1 g4)) Ô (w g4 - (13 z g4)/14)

We check that this is indeed a factorization by applying GrassmannSimplify.

%@XJD
-14 x Ô y Ô w + 13 x Ô y Ô z + 26 x Ô z Ô w + 11 y Ô z Ô w

9.9 Functions of Grassmann Numbers

The Taylor series formula


Functions of Grassmann numbers can be defined by power series as with ordinary numbers. Let X be a Grassmann number composed of a body x0 and a soul Xs such that X == x0 + Xs. Then the Taylor series expanded about the body x0 (truncated after the third-order term) is:

Normal[Series[f[X], {X, x0, 3}]]

f[x0] + (X - x0) f'[x0] + (1/2) (X - x0)^2 f''[x0] + (1/6) (X - x0)^3 f^(3)[x0]

It has been shown in Section 9.5 that the degree pmax of the highest non-zero power of a
Grassmann number in a space of n dimensions is given by pmax = (1/4) (2 n + 1 - (-1)^n).
If we expand the series up to the term in this power, it then becomes a formula for a function f of a
Grassmann number in that space. The tables below give explicit expressions for a function f[X]
of a Grassmann variable in spaces of dimensions 1 to 4.

n  pmax  f[X]
1  1     f[x0] + f'[x0] Xs
2  1     f[x0] + f'[x0] Xs
3  2     f[x0] + f'[x0] Xs + (1/2) f''[x0] Xs^2
4  2     f[x0] + f'[x0] Xs + (1/2) f''[x0] Xs^2

If we replace the square of the soul of X by its simpler formulation in terms of the components of X of the first and second grade, we get a computationally more convenient expression in the case of dimensions 3 and 4.


n  pmax  f[X]
1  1     f[x0] + f'[x0] Xs
2  1     f[x0] + f'[x0] Xs
3  2     f[x0] + f'[x0] Xs + f''[x0] X_1 Ô X_2
4  2     f[x0] + f'[x0] Xs + (1/2) f''[x0] (2 X_1 + X_2) Ô X_2

Here X_1 and X_2 denote the components of X of the first and second grade.
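The first two rows of these tables say that in spaces of one or two dimensions a function of a Grassmann number is completely determined by the body and a single derivative, because the soul squares to zero. As a quick sanity check outside Mathematica, here is a small Python sketch (illustrative only, not package code) that applies the n = 2 row to the square root and confirms that the result squares back to the original number.

```python
import math

def wedge2(x, y):
    # Exterior product in 2-space on the basis (1, e1, e2, e1^e2).
    return (x[0]*y[0],
            x[0]*y[1] + x[1]*y[0],
            x[0]*y[2] + x[2]*y[0],
            x[0]*y[3] + x[1]*y[2] - x[2]*y[1] + x[3]*y[0])

def f_of(X, f, fprime):
    # n = 2 row of the table: f[X] = f[x0] + f'[x0] Xs  (the soul squares to zero).
    return (f(X[0]), fprime(X[0])*X[1], fprime(X[0])*X[2], fprime(X[0])*X[3])

X = (4, 8, 12, -20)
sqrtX = f_of(X, math.sqrt, lambda t: 1 / (2 * math.sqrt(t)))
print(sqrtX)                  # (2.0, 2.0, 3.0, -5.0)
print(wedge2(sqrtX, sqrtX))   # (4.0, 8.0, 12.0, -20.0), recovering X
```

The cross terms x1 y2 - x2 y1 in the e1 Ô e2 slot cancel when the two factors are equal, which is why the linear truncation is exact here.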

Ÿ The form of a function of a Grassmann number


To automatically generate the form of formula for a function of a Grassmann number, we can use
the GrassmannAlgebra function GrassmannFunctionForm.

? GrassmannFunctionForm
GrassmannFunctionForm@8f@xD,x<,Xb,XsD expresses the function f@xD
of a Grassmann number in a space of the currently declared
number of dimensions in terms of the symbols Xb and Xs, where Xb
represents the body, and Xs represents the soul of the number.
GrassmannFunctionForm@8f@x,y,z,...D,8x,y,z,...<<,8Xb,Yb,Zb,...<,8
Xs,Ys,Zs,...<D expresses the function f@x,y,z,...D
of Grassmann numbers x,y,z,... in terms of the symbols
Xb,Yb,Zb,... and Xs,Ys,Zs..., where Xb,Yb,Zb,... represent the
bodies, and Xs,Ys,Zs... represent the souls of the numbers.
GrassmannFunctionForm@f@Xb,XsDD expresses the function f@XD of
a Grassmann number X=Xb+Xs in a space of the currently declared
number of dimensions in terms of body and soul symbols Xb and
Xs. GrassmannFunctionForm@f@8Xb,Yb,Zb,...<,8Xs,Ys,Zs,...<DD
is equivalent to the second form above.

Ï GrassmannFunctionForm for a single variable


The form of a function will be different, depending on the dimension of the space it is in: the higher the dimension, the more terms there can be in the series. We can generate a table similar to the first of the pair above by using GrassmannFunctionForm.


Table[{i, (&i ; GrassmannFunctionForm[f[Xb, Xs]])}, {i, 1, 8}] // TableForm

1  f[Xb] + Xs f'[Xb]
2  f[Xb] + Xs f'[Xb]
3  f[Xb] + Xs f'[Xb] + (1/2) Xs^2 f''[Xb]
4  f[Xb] + Xs f'[Xb] + (1/2) Xs^2 f''[Xb]
5  f[Xb] + Xs f'[Xb] + (1/2) Xs^2 f''[Xb] + (1/6) Xs^3 f^(3)[Xb]
6  f[Xb] + Xs f'[Xb] + (1/2) Xs^2 f''[Xb] + (1/6) Xs^3 f^(3)[Xb]
7  f[Xb] + Xs f'[Xb] + (1/2) Xs^2 f''[Xb] + (1/6) Xs^3 f^(3)[Xb] + (1/24) Xs^4 f^(4)[Xb]
8  f[Xb] + Xs f'[Xb] + (1/2) Xs^2 f''[Xb] + (1/6) Xs^3 f^(3)[Xb] + (1/24) Xs^4 f^(4)[Xb]

Ï GrassmannFunctionForm for several variables


GrassmannFunctionForm will also give the form of the function for functions of several
Grassmann variables. Due to their length, we will have to display them individually rather than in
table form. We display only the odd dimensional results, since the following even dimensional
ones are the same.

&1 ; GrassmannFunctionForm@f@8Xb , Yb <, 8Xs , Ys <DD

f@Xb , Yb D + Ys fH0,1L @Xb , Yb D + Xs fH1,0L @Xb , Yb D + Xs Ô Ys fH1,1L @Xb , Yb D

&3 ; GrassmannFunctionForm[f[{Xb, Yb}, {Xs, Ys}]]

f[Xb, Yb] + Ys f^(0,1)[Xb, Yb] + (1/2) Ys^2 f^(0,2)[Xb, Yb] + Xs f^(1,0)[Xb, Yb] + Xs Ô Ys f^(1,1)[Xb, Yb] + (1/2) Xs Ô Ys^2 f^(1,2)[Xb, Yb] + (1/2) Xs^2 f^(2,0)[Xb, Yb] + (1/2) Xs^2 Ô Ys f^(2,1)[Xb, Yb] + (1/4) Xs^2 Ô Ys^2 f^(2,2)[Xb, Yb]


&5 ; GrassmannFunctionForm[f[{Xb, Yb}, {Xs, Ys}]]

f[Xb, Yb] + Ys f^(0,1)[Xb, Yb] + (1/2) Ys^2 f^(0,2)[Xb, Yb] + (1/6) Ys^3 f^(0,3)[Xb, Yb] + Xs f^(1,0)[Xb, Yb] + Xs Ô Ys f^(1,1)[Xb, Yb] + (1/2) Xs Ô Ys^2 f^(1,2)[Xb, Yb] + (1/6) Xs Ô Ys^3 f^(1,3)[Xb, Yb] + (1/2) Xs^2 f^(2,0)[Xb, Yb] + (1/2) Xs^2 Ô Ys f^(2,1)[Xb, Yb] + (1/4) Xs^2 Ô Ys^2 f^(2,2)[Xb, Yb] + (1/12) Xs^2 Ô Ys^3 f^(2,3)[Xb, Yb] + (1/6) Xs^3 f^(3,0)[Xb, Yb] + (1/6) Xs^3 Ô Ys f^(3,1)[Xb, Yb] + (1/12) Xs^3 Ô Ys^2 f^(3,2)[Xb, Yb] + (1/36) Xs^3 Ô Ys^3 f^(3,3)[Xb, Yb]

Ÿ Calculating functions of Grassmann numbers


To calculate functions of Grassmann numbers, GrassmannAlgebra uses the
GrassmannFunction operation based on GrassmannFunctionForm.

? GrassmannFunction
GrassmannFunction@8f@xD,x<,XD or GrassmannFunction@f@XDD computes
the function f of a single Grassmann number X in a space of
the currently declared number of dimensions. In the first
form f@xD may be any formula; in the second, f must be a pure
function. GrassmannFunction@8f@x,y,z,..D,8x,y,z,..<<,8X,Y,Z,..<D
or GrassmannFunction@f@X,Y,Z,..DD computes the function
f of Grassmann numbers X,Y,Z,... In the first form
f@x,y,z,..D may be any formula Hwith xi corresponding
to XiL; in the second, f must be a pure function. In
both these forms the result of the function may be
dependent on the ordering of the arguments X,Y,Z,...

In the sections below we explore the GrassmannFunction operation and give some examples.
We will work in a 3-space with the general Grassmann number X.

&3 ; X = CreateGrassmannNumber@xD
x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3


Ÿ Powers of Grassmann numbers


The computation of powers of Grassmann numbers has already been discussed in Section 9.5. We
can also calculate powers of Grassmann numbers using the GrassmannFunction operation. The
syntax we will usually use is:

GrassmannFunction[X^a]

x0^a + a x0^(-1+a) x1 e1 + a x0^(-1+a) x2 e2 + a x0^(-1+a) x3 e3 + a x0^(-1+a) x4 e1 Ô e2 + a x0^(-1+a) x5 e1 Ô e3 + a x0^(-1+a) x6 e2 Ô e3 + (x0^(-2+a) ((a - a^2) x2 x5 + (-a + a^2) (x3 x4 + x1 x6)) + a x0^(-1+a) x7) e1 Ô e2 Ô e3

GrassmannFunction[X^-1]

1/x0 - (x1/x0^2) e1 - (x2/x0^2) e2 - (x3/x0^2) e3 - (x4/x0^2) e1 Ô e2 - (x5/x0^2) e1 Ô e3 - (x6/x0^2) e2 Ô e3 + ((-2 x2 x5 + 2 (x3 x4 + x1 x6))/x0^3 - x7/x0^2) e1 Ô e2 Ô e3

But we can also use GrassmannFunction[{x^a, x}, X], (#^a &)[X] // GrassmannFunction, or GrassmannPower[X, a] to get the same result.

Ÿ Exponential and logarithmic functions of Grassmann numbers

Ï Exponential functions
Here is the exponential of a general Grassmann number in 3-space.

expX = GrassmannFunction@Exp@XDD

‰ x0 + ‰ x0 e 1 x 1 + ‰ x0 e 2 x 2 + ‰ x0 e 3 x 3 + ‰ x0 x 4 e 1 Ô e 2 + ‰ x0 x 5 e 1 Ô e 3 +
‰x0 x6 e2 Ô e3 + ‰x0 Hx3 x4 - x2 x5 + x1 x6 + x7 L e1 Ô e2 Ô e3

Here is the exponential of its negative.

expXn = GrassmannFunction@Exp@-XDD

‰-x0 - ‰-x0 e1 x1 - ‰-x0 e2 x2 - ‰-x0 e3 x3 - ‰-x0 x4 e1 Ô e2 - ‰-x0 x5 e1 Ô e3 -


‰-x0 x6 e2 Ô e3 + ‰-x0 Hx3 x4 - x2 x5 + x1 x6 - x7 L e1 Ô e2 Ô e3

We verify that they are indeed inverses of one another.

%@expX Ô expXnD êê Simplify


1
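The same cancellation can be checked independently of Mathematica. In 2-space the series for the exponential terminates after the linear term, Exp[X] = Exp[x0] (1 + Xs), and the following Python sketch (an illustrative re-implementation, not package code) confirms numerically that Exp[X] and Exp[-X] are exterior inverses.

```python
import math

def wedge2(x, y):
    # Exterior product in 2-space on the basis (1, e1, e2, e1^e2).
    return (x[0]*y[0],
            x[0]*y[1] + x[1]*y[0],
            x[0]*y[2] + x[2]*y[0],
            x[0]*y[3] + x[1]*y[2] - x[2]*y[1] + x[3]*y[0])

def gexp(X):
    # The soul is nilpotent (Xs^2 = 0 in 2-space), so the series stops:
    # Exp[X] = Exp[x0] (1 + Xs).
    b = math.exp(X[0])
    return (b, b*X[1], b*X[2], b*X[3])

X = (0.5, 2.0, -3.0, 5.0)
Xn = tuple(-c for c in X)
p = wedge2(gexp(X), gexp(Xn))
print(p)    # approximately (1, 0, 0, 0)
```

The soul terms cancel in pairs; only the scalar bodies contribute, and exp(x0) exp(-x0) = 1 up to floating-point rounding.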

Ï Logarithmic functions

The logarithm of X is:

GrassmannFunction[Log[X]]

Log[x0] + (x1/x0) e1 + (x2/x0) e2 + (x3/x0) e3 + (x4/x0) e1 Ô e2 + (x5/x0) e1 Ô e3 + (x6/x0) e2 Ô e3 + ((-x3 x4 + x2 x5 - x1 x6)/x0^2 + x7/x0) e1 Ô e2 Ô e3

We expect the logarithm of the previously calculated exponential to be the original number X.

Log@expXD êê GrassmannFunction êê Simplify êê PowerExpand


x0 + e1 x1 + e2 x2 + e3 x3 + x4 e1 Ô e2 + x5 e1 Ô e3 + x6 e2 Ô e3 + x7 e1 Ô e2 Ô e3

Ÿ Trigonometric functions of Grassmann numbers

Ï Sines and cosines of a Grassmann number

We explore the classic trigonometric relation Sin[X]^2 + Cos[X]^2 == 1 and show that even for
Grassmann numbers this remains true.

s2X = GrassmannFunctionA9Sin@xD2 , x=, XE

Sin@x0 D2 + 2 Cos@x0 D Sin@x0 D e1 x1 + 2 Cos@x0 D Sin@x0 D e2 x2 +


2 Cos@x0 D Sin@x0 D e3 x3 + 2 Cos@x0 D Sin@x0 D x4 e1 Ô e2 +
2 Cos@x0 D Sin@x0 D x5 e1 Ô e3 + 2 Cos@x0 D Sin@x0 D x6 e2 Ô e3 +
II-2 Cos@x0 D2 + 2 Sin@x0 D2 M x2 x5 + I2 Cos@x0 D2 - 2 Sin@x0 D2 M
Hx3 x4 + x1 x6 L + 2 Cos@x0 D Sin@x0 D x7 M e1 Ô e2 Ô e3

c2X = GrassmannFunctionA9Cos@xD2 , x=, XE

Cos@x0 D2 - 2 Cos@x0 D Sin@x0 D e1 x1 - 2 Cos@x0 D Sin@x0 D e2 x2 -


2 Cos@x0 D Sin@x0 D e3 x3 - 2 Cos@x0 D Sin@x0 D x4 e1 Ô e2 -
2 Cos@x0 D Sin@x0 D x5 e1 Ô e3 - 2 Cos@x0 D Sin@x0 D x6 e2 Ô e3 +
II2 Cos@x0 D2 - 2 Sin@x0 D2 M x2 x5 + I-2 Cos@x0 D2 + 2 Sin@x0 D2 M
Hx3 x4 + x1 x6 L - 2 Cos@x0 D Sin@x0 D x7 M e1 Ô e2 Ô e3

%@s2X + c2X ã 1D êê Simplify


True
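A reader without the package can reproduce this identity numerically. The Python sketch below (illustrative only, not GrassmannAlgebra code) uses the 2-space body-soul forms Sin[X] = Sin[x0] + Cos[x0] Xs and Cos[X] = Cos[x0] - Sin[x0] Xs and checks that the squares sum to unity up to floating-point rounding.

```python
import math

def wedge2(x, y):
    # Exterior product in 2-space on the basis (1, e1, e2, e1^e2).
    return (x[0]*y[0],
            x[0]*y[1] + x[1]*y[0],
            x[0]*y[2] + x[2]*y[0],
            x[0]*y[3] + x[1]*y[2] - x[2]*y[1] + x[3]*y[0])

def gsin(X):
    s, c = math.sin(X[0]), math.cos(X[0])
    return (s, c*X[1], c*X[2], c*X[3])      # Sin[x0] + Cos[x0] Xs

def gcos(X):
    s, c = math.sin(X[0]), math.cos(X[0])
    return (c, -s*X[1], -s*X[2], -s*X[3])   # Cos[x0] - Sin[x0] Xs

X = (0.7, 1.0, -2.0, 3.0)
total = tuple(a + b for a, b in
              zip(wedge2(gsin(X), gsin(X)), wedge2(gcos(X), gcos(X))))
print(total)    # approximately (1, 0, 0, 0)
```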


Ï The tangent of a Grassmann number


Finally we show that the tangent of a Grassmann number is the exterior product of its sine and its
secant.

tanX = GrassmannFunction@Tan@XDD

Sec@x0 D2 e1 x1 + Sec@x0 D2 e2 x2 + Sec@x0 D2 e3 x3 + Tan@x0 D +


Sec@x0 D2 x4 e1 Ô e2 + Sec@x0 D2 x5 e1 Ô e3 + Sec@x0 D2 x6 e2 Ô e3 +
Sec@x0 D2 Hx7 + H-2 x2 x5 + 2 Hx3 x4 + x1 x6 LL Tan@x0 DL e1 Ô e2 Ô e3

sinX = GrassmannFunction@Sin@XDD
Sin@x0 D + Cos@x0 D e1 x1 + Cos@x0 D e2 x2 + Cos@x0 D e3 x3 +
Cos@x0 D x4 e1 Ô e2 + Cos@x0 D x5 e1 Ô e3 + Cos@x0 D x6 e2 Ô e3 +
HSin@x0 D H-x3 x4 + x2 x5 - x1 x6 L + Cos@x0 D x7 L e1 Ô e2 Ô e3

secX = GrassmannFunction@Sec@XDD
Sec@x0 D + Sec@x0 D e1 x1 Tan@x0 D +
Sec@x0 D e2 x2 Tan@x0 D + Sec@x0 D e3 x3 Tan@x0 D +
Sec@x0 D x4 Tan@x0 D e1 Ô e2 + Sec@x0 D x5 Tan@x0 D e1 Ô e3 +
Sec@x0 D x6 Tan@x0 D e2 Ô e3 + ISec@x0 D3 Hx3 x4 - x2 x5 + x1 x6 L +
Sec@x0 D Ix7 Tan@x0 D + Hx3 x4 - x2 x5 + x1 x6 L Tan@x0 D2 MM e1 Ô e2 Ô e3

tanX ã %@sinX Ô secXD êê Simplify


True

Ÿ Functions of several Grassmann numbers


GrassmannFunction can calculate functions of several Grassmann variables. It is important to realize, however, that there may be several different results depending on the order in which the exterior products occur in the series expansion for the function, a factor that does not arise with functions of scalar variables. This means, then, that the usual function notation for several variables is not adequate for specifying precisely which form is required. For simplicity in what follows we work in 2-space.

&2 ; 8X = CreateGrassmannNumber@xD, Y = CreateGrassmannNumber@yD<


8x0 + e1 x1 + e2 x2 + x3 e1 Ô e2 , y0 + e1 y1 + e2 y2 + y3 e1 Ô e2 <

Ï Products of two Grassmann numbers


As a first example, we look at the product. Calculating the exterior product of two Grassmann
numbers in 2-space gives:


%@X Ô YD
x0 y0 + e1 Hx1 y0 + x0 y1 L +
e2 Hx2 y0 + x0 y2 L + Hx3 y0 - x2 y1 + x1 y2 + x0 y3 L e1 Ô e2

Using GrassmannFunction in the form below, we get the same result.

GrassmannFunction@8x y, 8x, y<<, 8X, Y<D


x0 y0 + e1 Hx1 y0 + x0 y1 L +
e2 Hx2 y0 + x0 y2 L + Hx3 y0 - x2 y1 + x1 y2 + x0 y3 L e1 Ô e2

However, to allow for the two different exterior products that X and Y can have together, the
variables in the second argument {X,Y} may be interchanged. The parameters x and y of the first
argument {x y,{x,y}} are simply dummy variables and can stay the same. Thus YÔX may be
obtained by evaluating:

%@Y Ô XD
x0 y0 + e1 Hx1 y0 + x0 y1 L +
e2 Hx2 y0 + x0 y2 L + Hx3 y0 + x2 y1 - x1 y2 + x0 y3 L e1 Ô e2

or using GrassmannFunction with {Y,X} as its second argument.

GrassmannFunction@8x y, 8x, y<<, 8Y, X<D


x0 y0 + e1 Hx1 y0 + x0 y1 L +
e2 Hx2 y0 + x0 y2 L + Hx3 y0 + x2 y1 - x1 y2 + x0 y3 L e1 Ô e2

Ï The exponential of a sum


As a final example we show how GrassmannFunction manages the two different exterior
product equivalents of the scalar identity Exp[x+y]ã Exp[x] Exp[y].

First calculate Exp[X] and Exp[Y].

expX = GrassmannFunction@Exp@XDD

‰ x0 + ‰ x0 e 1 x 1 + ‰ x0 e 2 x 2 + ‰ x0 x 3 e 1 Ô e 2

expY = GrassmannFunction@Exp@YDD

‰ y0 + ‰ y0 e 1 y 1 + ‰ y0 e 2 y 2 + ‰ y0 y 3 e 1 Ô e 2

If we compute their product using the exterior product operation we observe that the order is important, and that a different result is obtained when the order is reversed.

%@expX Ô expYD

‰x0 +y0 + ‰x0 +y0 e1 Hx1 + y1 L +


‰x0 +y0 e2 Hx2 + y2 L + ‰x0 +y0 Hx3 - x2 y1 + x1 y2 + y3 L e1 Ô e2


%@expY Ô expXD

‰x0 +y0 + ‰x0 +y0 e1 Hx1 + y1 L +


‰x0 +y0 e2 Hx2 + y2 L + ‰x0 +y0 Hx3 + x2 y1 - x1 y2 + y3 L e1 Ô e2

We can compute these two results also by using GrassmannFunction. Note that the second
argument in the second computation has its components in reverse order.

GrassmannFunction@8Exp@xD Exp@yD, 8x, y<<, 8X, Y<D

‰x0 +y0 + ‰x0 +y0 e1 Hx1 + y1 L +


‰x0 +y0 e2 Hx2 + y2 L + ‰x0 +y0 Hx3 - x2 y1 + x1 y2 + y3 L e1 Ô e2

GrassmannFunction@8Exp@xD Exp@yD, 8x, y<<, 8Y, X<D

‰x0 +y0 + ‰x0 +y0 e1 Hx1 + y1 L +


‰x0 +y0 e2 Hx2 + y2 L + ‰x0 +y0 Hx3 + x2 y1 - x1 y2 + y3 L e1 Ô e2

On the other hand, if we wish to compute the exponential of the sum X + Y we need to use
GrassmannFunction, because there can be two possibilities for its definition. That is, although
Exp[x+y] appears to be independent of the order of its arguments, an interchange in the 'ordering'
argument from {X,Y} to {Y,X} will change the signs in the result.

GrassmannFunction@8Exp@x + yD, 8x, y<<, 8X, Y<D

‰x0 +y0 + ‰x0 +y0 e1 Hx1 + y1 L +


‰x0 +y0 e2 Hx2 + y2 L + ‰x0 +y0 Hx3 - x2 y1 + x1 y2 + y3 L e1 Ô e2

GrassmannFunction@8Exp@x + yD, 8x, y<<, 8Y, X<D

‰x0 +y0 + ‰x0 +y0 e1 Hx1 + y1 L +


‰x0 +y0 e2 Hx2 + y2 L + ‰x0 +y0 Hx3 + x2 y1 - x1 y2 + y3 L e1 Ô e2

In sum: A function of several Grassmann variables may have different results depending on the
ordering of the variables in the function, even if their usual (scalar) form is commutative.
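The order dependence is easy to exhibit concretely. The following Python sketch (a hand-rolled 2-space exterior product, not GrassmannAlgebra code) evaluates the two products of the exponentials above with zero bodies, so that the results are exact integers; only the e1 Ô e2 coefficient differs between the two orderings, exactly as in the displayed outputs.

```python
def wedge2(x, y):
    # Exterior product in 2-space on the basis (1, e1, e2, e1^e2).
    return (x[0]*y[0],
            x[0]*y[1] + x[1]*y[0],
            x[0]*y[2] + x[2]*y[0],
            x[0]*y[3] + x[1]*y[2] - x[2]*y[1] + x[3]*y[0])

# With zero bodies, Exp[X] = 1 + Xs exactly.
expX = (1, 1, 2, 3)     # Exp[0 + 1 e1 + 2 e2 + 3 e1^e2]
expY = (1, 4, 5, 6)     # Exp[0 + 4 e1 + 5 e2 + 6 e1^e2]

print(wedge2(expX, expY))   # (1, 5, 7, 6):  x3 - x2 y1 + x1 y2 + y3 = 6
print(wedge2(expY, expX))   # (1, 5, 7, 12): x3 + x2 y1 - x1 y2 + y3 = 12
```

The scalar, e1, and e2 components agree in both orderings; the sign flip of the x2 y1 and x1 y2 cross terms is the whole of the order dependence.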

9.10 Summary
To be completed


10 Exploring the Generalized Grassmann Product

10.1 Introduction
In this chapter we define and explore a new product which we call the generalized Grassmann
product. The generalized Grassmann product was originally developed by the author in order to
treat the hypercomplex and Clifford products of general elements in a succinct manner. The
hypercomplex and Clifford products of general elements can lead to quite complex expressions,
however, we will show that such expressions always reduce to simple sums of generalized
Grassmann products. We discuss the hypercomplex product in the next chapter, and the Clifford
product in Chapter 12.

The generalized Grassmann product is really a sequence of products indexed by a parameter, called
the order of the product. When the order is zero, the product reduces to the exterior product. When
the order is its maximum value, the product reduces to the interior product. In between, the
sequence of generalized Grassmann products makes a transition between the two.

For example, consider an exterior product of simple elements. It only takes one 1-element to be
common to both factors of the exterior product for the product to be zero. On the other end of the
sequence, even if one of the simple elements has all its 1-element factors in common with the other
simple element, their interior product is not zero. The elements of the sequence of generalized
products require progressively more common 1-elements in their factors for them to reduce to zero.
Specifically, if two simple elements contain a common factor of grade g, then their generalized
Grassmann products of order less than g are zero.

The generalized Grassmann product has another enticing property. Although the interior product of
elements of different grade is not commutative, the generalized Grassmann product form of the
interior product is commutative.

For brevity in the rest of this book, where no confusion will arise, we may call the generalized
Grassmann product simply the generalized product.


10.2 Defining the Generalized Product

Definition of the generalized product


The generalized Grassmann product of order l of an m-element a_m and a simple k-element b_k is denoted a_m Û_l b_k and defined by

    a_m Û_l b_k == Sum[(a_m û b_j,l) Ô b_j,k-l, {j, 1, Binomial[k, l]}]        10.1

    b_k == b_1,l Ô b_1,k-l == b_2,l Ô b_2,k-l == …

Here, b_k is expressed in all of the Binomial[k, l] essentially different arrangements of its 1-element factors into an l-element and a (k-l)-element, the j-th such pair being denoted b_j,l and b_j,k-l. This pairwise decomposition of a simple exterior product is the same as that used in the Common Factor Theorem in chapters 3 and 6.

The grade of a generalized Grassmann product a_m Û_l b_k is therefore m + k - 2l, and like the grades of the exterior and interior products in terms of which it is defined, is independent of the dimension of the underlying space.

Note the similarity to the form of the Interior Common Factor Theorem. However, in the definition
of the generalized product there is no requirement for the interior product on the right hand side to
be an inner product, since an exterior product has replaced the ordinary multiplication operator.

For brevity in the discussion that follows, we will assume, without explicitly drawing attention to the fact, that an element undergoing a factorization is necessarily simple.

Case l = 0: Reduction to the exterior product


When the order l of a generalized product is zero, b_j,l reduces to a scalar, and the product reduces to the exterior product.

    a_m Û_0 b_k == a_m Ô b_k        (l == 0)        10.2


Case 0 < l < Min[m, k]: Reduction to exterior and interior products
When the order l of the generalized product is greater than zero but less than the smaller of the grades of the factors in the product, the product may be expanded in terms of either factor (provided it is simple). The formula for expansion in terms of the first factor is similar to the definition [10.1] which expands in terms of the second factor.

    a_m Û_l b_k == Sum[a_i,m-l Ô (b_k û a_i,l), {i, 1, Binomial[m, l]}]        (0 < l < Min[m, k])        10.3

    a_m == a_1,l Ô a_1,m-l == a_2,l Ô a_2,m-l == …

In Section 10.3 we will prove that the product can actually be expanded in terms of both factors, and that this formula is an almost immediate consequence.

For completeness it should be remarked that [10.3] is also valid for l equal to 0 or Min[m,k]. We
shall include these special cases in the ranges for l in subsequent formulas.

Case l = Min[m, k]: Reduction to the interior product


When the order of the product l is equal to the smaller of the grades of the factors of the product,
there is only one term in the sum and the generalized product reduces to the interior product.

    a_m Û_l b_k == a_m û b_k        (l == k ≤ m)
    a_m Û_l b_k == b_k û a_m        (l == m ≤ k)        10.4

This is a particularly enticing property of the generalized product. If the interior product of two
elements is non-zero, but they are of unequal grade, an interchange in the order of their factors will
give an interior product which is zero. If an interior product of two factors is to be non-zero, it
must have the larger factor first. The interior product is non-commutative. By contrast, the
generalized product form of the interior product of two elements of unequal grade is commutative
and equal to that interior product which is non-zero, whichever one it may be.

    a_m Û_k b_k == b_k Û_k a_m == a_m û b_k        (k ≤ m)

We discuss the (quasi)-commutativity of the generalized product in the next section.
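For the simplest case m = k = 1 the whole sequence is short: order 0 is the exterior product [10.2] and order 1 is the interior product [10.4], which for vectors in a Euclidean space is the familiar dot product. The Python sketch below (an illustrative model assuming an orthonormal basis; not package code) shows the sign behaviour anticipated here: antisymmetric at order 0, symmetric at order 1.

```python
def gp(u, v, lam):
    # Generalized product of two 1-elements (vectors) in Euclidean R^3.
    if lam == 0:
        # order 0: exterior product; components on e1^e2, e1^e3, e2^e3
        return (u[0]*v[1] - u[1]*v[0],
                u[0]*v[2] - u[2]*v[0],
                u[1]*v[2] - u[2]*v[1])
    if lam == 1:
        # order 1 = Min[m, k]: interior (dot) product
        return sum(a*b for a, b in zip(u, v))
    raise ValueError("order exceeds Max[m, k] = 1: undefined")

u, v = (1, 2, 3), (4, 5, 6)
# sign factor (-1)^((m-lam)(k-lam)) with m = k = 1
assert gp(u, v, 0) == tuple(-c for c in gp(v, u, 0))   # lam = 0: antisymmetric
assert gp(u, v, 1) == gp(v, u, 1)                      # lam = 1: symmetric
print(gp(u, v, 0), gp(u, v, 1))    # (-3, -6, -3) 32
```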


Case Min[m, k] < l < Max[m, k]: Reduction to zero


When the order of the product l is greater than the smaller of the grades of the factors of the
product, but less than the larger of the grades, the generalized product may still be expanded in
terms of the factors of the element of larger grade, leading however to a sum of terms which is zero.

    a_m Û_l b_k == 0        (Min[m, k] < l < Max[m, k])        10.5

This result, in which a sum of terms is identically zero, is a source of many useful identities
between exterior and interior products. It will be explored in Section 10.4 below.

Case l = Max[m, k]: Reduction to zero


When l is equal to the larger of the grades of the factors, the generalized product reduces to a
single interior product which is zero by virtue of its left factor being of lesser grade than its right
factor.

    a_m Û_l b_k == 0        (l == Max[m, k])        10.6

Case l > Max[m, k]: Undefined


When the order of the product l is greater than the larger of the grades of the factors of the
product, the generalized product cannot be expanded in terms of either of its factors and is
therefore undefined.

    a_m Û_l b_k == Undefined        (l > Max[m, k])        10.7

10.3 The Symmetric Form of the Generalized Product

Expansion of the generalized product in terms of both factors


If a_m and b_k are both simple we can express the generalized Grassmann product more symmetrically by converting the interior products with the Interior Common Factor Theorem (see Chapter 6).

To show this, we start with the definition [10.1] of the generalized Grassmann product.


    a_m Û_l b_k == Sum[(a_m û b_j,l) Ô b_j,k-l, {j, 1, Binomial[k, l]}]

From the Interior Common Factor Theorem we can write the interior product a_m û b_j,l as:

    a_m û b_j,l == Sum[(a_i,l û b_j,l) a_i,m-l, {i, 1, Binomial[m, l]}]

Substituting this in the above expression for the generalized product gives:

    a_m Û_l b_k == Sum[(a_i,l û b_j,l) a_i,m-l Ô b_j,k-l, {j, 1, Binomial[k, l]}, {i, 1, Binomial[m, l]}]

The expansion of the generalized product in terms of both factors a_m and b_k is thus given by:

    a_m Û_l b_k == Sum[(a_i,l û b_j,l) a_i,m-l Ô b_j,k-l, {i, 1, Binomial[m, l]}, {j, 1, Binomial[k, l]}]        (0 ≤ l ≤ Min[m, k])        10.8

    a_m == a_1,l Ô a_1,m-l == a_2,l Ô a_2,m-l == …,   b_k == b_1,l Ô b_1,k-l == b_2,l Ô b_2,k-l == …

The quasi-commutativity of the generalized product


From the symmetric form of the generalized product [10.8] we can show directly that the ordering of the factors in a product is immaterial except perhaps for a sign.

To show this, rewrite the symmetric form for the product of b_k with a_m, that is, b_k Û_l a_m. We then reverse the order of the exterior product to obtain the terms of a_m Û_l b_k except for possibly a sign. The inner product is of course symmetric, and so can be written in either ordering.

    b_k Û_l a_m == Sum[(b_j,l û a_i,l) b_j,k-l Ô a_i,m-l, {j, 1, Binomial[k, l]}, {i, 1, Binomial[m, l]}]

             == Sum[(a_i,l û b_j,l) (-1)^((m-l)(k-l)) a_i,m-l Ô b_j,k-l, {j, 1, Binomial[k, l]}, {i, 1, Binomial[m, l]}]

Comparison with [10.8] then gives:


    a_m Û_l b_k == (-1)^((m-l)(k-l)) b_k Û_l a_m        10.9

It is easy to see, by using the distributivity of the generalized product, that this relationship holds also for non-simple a_m and b_k.

Expansion in terms of the other factor


We can now prove the alternative expression [10.3] for a_m Û_l b_k in terms of the factors of a simple a_m by expanding the right hand side of the quasi-commutativity relationship [10.9].

    a_m Û_l b_k == (-1)^((m-l)(k-l)) b_k Û_l a_m == (-1)^((m-l)(k-l)) Sum[(b_k û a_i,l) Ô a_i,m-l, {i, 1, Binomial[m, l]}]

Interchanging the order of the terms of the exterior product then gives the required alternative expression:

    a_m Û_l b_k == Sum[a_i,m-l Ô (b_k û a_i,l), {i, 1, Binomial[m, l]}]        (0 ≤ l ≤ Min[m, k])

    a_m == a_1,l Ô a_1,m-l == a_2,l Ô a_2,m-l == …

10.4 Calculating with Generalized Products

Ÿ Entering a generalized product


In GrassmannAlgebra you can enter a generalized product in the form:

GeneralizedProduct@lD@X, YD

where l is the order of the product and X and Y are the factors. On entering this expression into
Mathematica it will display it in the form:

XÛY
l

Alternatively, you can click on the generalized product button Ñ Û Ñ on the GrassmannAlgebra
Ñ
palette, and then tab through the placeholders, entering the elements of the product as you go.

To enter multiple products, click repeatedly on the button. For example to enter a concatenated
product of four elements, click three times:

ÑÛÉÛÉÛÑ
É É Ñ

If you select this expression and enter it, you will see how the inherently binary product is grouped.

JJÑ Û ÉN Û ÉN Û Ñ
É É Ñ

Tabbing through the placeholders and entering factors and product orders, we might obtain
something like:

((X Û_4 Y) Û_3 Z) Û_2 W

Here, X, Y, Z, and W represent any Grassmann expressions. Using Mathematica's FullForm


operation shows how this is internally represented.

FullForm[((X Û_4 Y) Û_1 Z) Û_0 W]

GeneralizedProduct[0][GeneralizedProduct[1][GeneralizedProduct[4][X, Y], Z], W]

You can edit the expression at any stage to group the products in a different order.

FullForm[X Û_4 ((Y Û_3 Z) Û_2 W)]

GeneralizedProduct[4][X, GeneralizedProduct[2][GeneralizedProduct[3][Y, Z], W]]

Finally, you can also enter a generalized product sequentially by typing the first factor, then clicking on the product operator Û from the first column of the GrassmannAlgebra palette, then typing the second factor.

Ÿ Reduction to interior products


If the generalized product a_m Û_l b_k of simple elements is expanded in terms of the second factor b_k it will result in a sum of Binomial[k, l] terms (0 ≤ l ≤ k) involving just exterior and interior products. If it is expanded in terms of the first factor a_m it will result in a sum of Binomial[m, l] terms (0 ≤ l ≤ m).

These sums, although appearing different because expressed in terms of interior products, are of
course equal, as has been shown in the previous section.

The GrassmannAlgebra function ToInteriorProducts will take any generalized product and
convert it to a sum involving exterior and interior products by expanding whichever factor leads to
the simplest expression. If the elements are given in symbolic grade-underscripted form,
ToInteriorProducts will first create a corresponding simple exterior product before
expanding.


For example, suppose the generalized product is given in symbolic form as a_m Û_2 b_3. Applying ToInteriorProducts expands with respect to b_3 and gives:

A = ToInteriorProducts[a_m Û_2 b_3]

(a_m û b1 Ô b2) Ô b3 - (a_m û b1 Ô b3) Ô b2 + (a_m û b2 Ô b3) Ô b1

To expand with respect to a_m we must of course give m as an integer.

B = ToInteriorProducts[a_4 Û_2 b_k]

(b_k û a1 Ô a2) Ô a3 Ô a4 - (b_k û a1 Ô a3) Ô a2 Ô a4 + (b_k û a1 Ô a4) Ô a2 Ô a3 + (b_k û a2 Ô a3) Ô a1 Ô a4 - (b_k û a2 Ô a4) Ô a1 Ô a3 + (b_k û a3 Ô a4) Ô a1 Ô a2
k k k

Note that in expansion A there are Binomial[3, 2] (= 3) terms while in expansion B there are Binomial[4, 2] (= 6) terms.

If now we let m equal 4 and k equal 3, these two expressions become equal.

A1 = ToInteriorProducts[a_m Û_2 b_3] /. a_m -> ComposeSimpleForm[a_4]

(a1 Ô a2 Ô a3 Ô a4 û b1 Ô b2) Ô b3 - (a1 Ô a2 Ô a3 Ô a4 û b1 Ô b3) Ô b2 + (a1 Ô a2 Ô a3 Ô a4 û b2 Ô b3) Ô b1

B1 = ToInteriorProducts[a_4 Û_2 b_k] /. b_k -> ComposeSimpleForm[b_3]

(b1 Ô b2 Ô b3 û a1 Ô a2) Ô a3 Ô a4 - (b1 Ô b2 Ô b3 û a1 Ô a3) Ô a2 Ô a4 + (b1 Ô b2 Ô b3 û a1 Ô a4) Ô a2 Ô a3 + (b1 Ô b2 Ô b3 û a2 Ô a3) Ô a1 Ô a4 - (b1 Ô b2 Ô b3 û a2 Ô a4) Ô a1 Ô a3 + (b1 Ô b2 Ô b3 û a3 Ô a4) Ô a1 Ô a2

Although these expressions are equal, they appear different because of their expansion in terms of
interior products. To display them in the same form, we need to reduce the interior products to
inner (and exterior) products.

Ÿ Reduction to inner products


Reducing the interior products in a generalized product expansion to inner products is equivalent to
applying the symmetric form of the expansion.

Consider the product of the previous section with m equal to 4 and k equal to 3. To explore this
product we must first increase the dimension of the space to at least 4 (since it is zero in a space of
lesser dimension). Then, we convert any interior products to inner products by applying
ToInnerProducts, and simplify the result with GrassmannExpandAndSimplify.

¯!4 ; A2 = ¯%[ToInnerProducts[A1]]
Ha3 Ô a4 û b2 Ô b3 L a1 Ô a2 Ô b1 - Ha3 Ô a4 û b1 Ô b3 L a1 Ô a2 Ô b2 +
Ha3 Ô a4 û b1 Ô b2 L a1 Ô a2 Ô b3 - Ha2 Ô a4 û b2 Ô b3 L a1 Ô a3 Ô b1 +
Ha2 Ô a4 û b1 Ô b3 L a1 Ô a3 Ô b2 - Ha2 Ô a4 û b1 Ô b2 L a1 Ô a3 Ô b3 +
Ha2 Ô a3 û b2 Ô b3 L a1 Ô a4 Ô b1 - Ha2 Ô a3 û b1 Ô b3 L a1 Ô a4 Ô b2 +
Ha2 Ô a3 û b1 Ô b2 L a1 Ô a4 Ô b3 + Ha1 Ô a4 û b2 Ô b3 L a2 Ô a3 Ô b1 -
Ha1 Ô a4 û b1 Ô b3 L a2 Ô a3 Ô b2 + Ha1 Ô a4 û b1 Ô b2 L a2 Ô a3 Ô b3 -
Ha1 Ô a3 û b2 Ô b3 L a2 Ô a4 Ô b1 + Ha1 Ô a3 û b1 Ô b3 L a2 Ô a4 Ô b2 -
Ha1 Ô a3 û b1 Ô b2 L a2 Ô a4 Ô b3 + Ha1 Ô a2 û b2 Ô b3 L a3 Ô a4 Ô b1 -
Ha1 Ô a2 û b1 Ô b3 L a3 Ô a4 Ô b2 + Ha1 Ô a2 û b1 Ô b2 L a3 Ô a4 Ô b3

B2 = ¯%[ToInnerProducts[B1]]
Ha3 Ô a4 û b2 Ô b3 L a1 Ô a2 Ô b1 - Ha3 Ô a4 û b1 Ô b3 L a1 Ô a2 Ô b2 +
Ha3 Ô a4 û b1 Ô b2 L a1 Ô a2 Ô b3 - Ha2 Ô a4 û b2 Ô b3 L a1 Ô a3 Ô b1 +
Ha2 Ô a4 û b1 Ô b3 L a1 Ô a3 Ô b2 - Ha2 Ô a4 û b1 Ô b2 L a1 Ô a3 Ô b3 +
Ha2 Ô a3 û b2 Ô b3 L a1 Ô a4 Ô b1 - Ha2 Ô a3 û b1 Ô b3 L a1 Ô a4 Ô b2 +
Ha2 Ô a3 û b1 Ô b2 L a1 Ô a4 Ô b3 + Ha1 Ô a4 û b2 Ô b3 L a2 Ô a3 Ô b1 -
Ha1 Ô a4 û b1 Ô b3 L a2 Ô a3 Ô b2 + Ha1 Ô a4 û b1 Ô b2 L a2 Ô a3 Ô b3 -
Ha1 Ô a3 û b2 Ô b3 L a2 Ô a4 Ô b1 + Ha1 Ô a3 û b1 Ô b3 L a2 Ô a4 Ô b2 -
Ha1 Ô a3 û b1 Ô b2 L a2 Ô a4 Ô b3 + Ha1 Ô a2 û b2 Ô b3 L a3 Ô a4 Ô b1 -
Ha1 Ô a2 û b1 Ô b3 L a3 Ô a4 Ô b2 + Ha1 Ô a2 û b1 Ô b2 L a3 Ô a4 Ô b3

By inspection we see that these two expressions are the same. Checking with Mathematica verifies
this.

A2 ã B2
True

The identity of the forms A2 and B2 is an example of the expansion of a generalized product in
terms of either factor yielding the same result.

Ÿ Tabulating generalized products


In order to get a clearer concept of how the sequence of generalized products for two elements is
related, we can look at building up this sequence from the Spine and Cospine of one of the
elements. (See Chapter 3: The Regressive Product, for a discussion of Spine and Cospine.)

Consider a generalized product a_m Û_l b_k of an arbitrary m-element a_m and a simple k-element b_k
expressed as a simple exterior product. We need to be specific about the grade k, so for this
example we choose k to be 3. We can allow m to be symbolic, but should assume it to be greater
than or equal to k in order to have the resulting formulas remain valid should we later substitute
specific values.


The spine and cospine of b_3 are given by

S = Spine[b_3]

{1, b1, b2, b3, b1 Ô b2, b1 Ô b3, b2 Ô b3, b1 Ô b2 Ô b3}

Sc = Cospine[b_3]

{b1 Ô b2 Ô b3, b2 Ô b3, -(b1 Ô b3), b1 Ô b2, b3, -b2, b1, 1}
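The spine/cospine construction is purely combinatorial, so it can be reproduced outside Mathematica. In the stand-alone sketch below (plain Python; representing each factor of b_3 by its index, with the function names my own), the spine is the list of all ordered subsets of {1, 2, 3}, and each cospine entry is the complementary subset signed by the parity of the permutation that merges the two back into (1, 2, 3); this reproduces the signs in Sc above.

```python
from itertools import combinations

def merge_sign(left, right):
    # Parity of sorting the concatenation left + right: count inversions
    # between the two (already sorted) index tuples.
    inversions = sum(1 for i in left for j in right if i > j)
    return -1 if inversions % 2 else 1

def spine_cospine(k):
    spine, cospine = [], []
    for size in range(k + 1):
        for subset in combinations(range(1, k + 1), size):
            rest = tuple(i for i in range(1, k + 1) if i not in subset)
            spine.append(subset)
            # sign such that b_subset Ô b_rest == sign * b_{1..k}
            cospine.append((merge_sign(subset, rest), rest))
    return spine, cospine

spine, cospine = spine_cospine(3)
print(cospine[2])  # (-1, (1, 3)): pairs with spine element b2, as in -(b1 Ô b3)
```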

We can now generate the list of products of the form (a_m û s) Ô c, where s is an element of S, and
c is the corresponding element of Sc, by threading these operations across the two lists.

T1 = Thread[Thread[a_m û S] Ô Sc]

{(a_m û 1) Ô b1 Ô b2 Ô b3, (a_m û b1) Ô b2 Ô b3,
 (a_m û b2) Ô -(b1 Ô b3), (a_m û b3) Ô b1 Ô b2, (a_m û b1 Ô b2) Ô b3,
 (a_m û b1 Ô b3) Ô -b2, (a_m û b2 Ô b3) Ô b1, (a_m û b1 Ô b2 Ô b3) Ô 1}

The collection of elements of this list which are of the same grade (g, say) will form the terms in
the sum of the generalized product of the same grade g. We can see the distribution of grades by
applying Grade to the list.

Grade[T1]

{3 + m, 1 + m, 1 + m, 1 + m, -1 + m, -1 + m, -1 + m, -3 + m}
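The grades in this list follow a simple pattern: a term built from an l-element of the spine has grade (m - l) + (k - l) = m + k - 2l. A quick stand-alone check of the offsets from m for k = 3 (plain Python, independent of the package):

```python
from math import comb

k = 3
# Offset from m of each term's grade, in spine order (subset sizes 0, 1, ..., k):
# a term built from a size-l spine element has grade m + (k - 2*l).
offsets = [k - 2 * size for size in range(k + 1) for _ in range(comb(k, size))]
print(offsets)  # [3, 1, 1, 1, -1, -1, -1, -3]
```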

We can select the terms of a given grade (using Select), add them together (using Plus), and
tabulate these sums over the different grades g (using Table). We also reverse the list to get
the grades in decreasing order.

T2 = Reverse[Table[Plus @@ Select[T1, GradeQ[g]], {g, m - 3, m + 3, 2}]]

{(a_m û 1) Ô b1 Ô b2 Ô b3,
 (a_m û b2) Ô -(b1 Ô b3) + (a_m û b1) Ô b2 Ô b3 + (a_m û b3) Ô b1 Ô b2,
 (a_m û b1 Ô b2) Ô b3 + (a_m û b1 Ô b3) Ô -b2 + (a_m û b2 Ô b3) Ô b1,
 (a_m û b1 Ô b2 Ô b3) Ô 1}

The elements of this list are now the generalized products a_m Û_l b_3.

T3 = Table[a_m Û_l b_3, {l, 0, 3}]

{a_m Û_0 b_3, a_m Û_1 b_3, a_m Û_2 b_3, a_m Û_3 b_3}

We can display the individual formulas by again using Thread.

TableForm[Thread[T3 ã T2]]

a_m Û_0 b_3 ã (a_m û 1) Ô b1 Ô b2 Ô b3
a_m Û_1 b_3 ã (a_m û b2) Ô -(b1 Ô b3) + (a_m û b1) Ô b2 Ô b3 + (a_m û b3) Ô b1 Ô b2
a_m Û_2 b_3 ã (a_m û b1 Ô b2) Ô b3 + (a_m û b1 Ô b3) Ô -b2 + (a_m û b2 Ô b3) Ô b1
a_m Û_3 b_3 ã (a_m û b1 Ô b2 Ô b3) Ô 1

Finally, we can collect these steps into a single function, generalize it for arbitrary simple b_k, and
restrict it to apply only to the grades of factors we have assumed in this development. That is, the
grade of a should be symbolic, but constrained to be greater than or equal to the grade of b, which
should itself be greater than 1.

GeneralizedProductList[GeneralizedProduct[l_][a_, b_]] /;
   (Head[RawGrade[a]] === Symbol || RawGrade[a] >= RawGrade[b]) && RawGrade[b] > 1 :=
  Module[{m, k, B, Bc, T1, T2, T3, g, μ}, m = RawGrade[a];
   k = RawGrade[b]; B = Spine[b]; Bc = Cospine[b];
   T1 = Thread[Thread[a û B] Ô Bc];
   T2 = Reverse[Table[Plus @@ Select[T1, RawGrade[#] ã g &],
      {g, m - k, m + k, 2}]];
   T3 = Table[a Û_μ b, {μ, 0, k}]; TableForm[Thread[T3 ã T2]]];

Ï Example

Here are the formulas for a 2-element and a 4-element as second factors.


GeneralizedProductList[a_m Û_l b_2]

a_m Û_0 b_2 ã (a_m û 1) Ô b1 Ô b2
a_m Û_1 b_2 ã (a_m û b1) Ô b2 + (a_m û b2) Ô -b1
a_m Û_2 b_2 ã (a_m û b1 Ô b2) Ô 1

GeneralizedProductList[a_m Û_l b_4]

a_m Û_0 b_4 ã (a_m û 1) Ô b1 Ô b2 Ô b3 Ô b4
a_m Û_1 b_4 ã (a_m û b2) Ô -(b1 Ô b3 Ô b4) + (a_m û b4) Ô -(b1 Ô b2 Ô b3) + (a_m û b1) Ô b2 Ô b3 Ô b4 + (a_m û b3) Ô b1 Ô b2 Ô b4
a_m Û_2 b_4 ã (a_m û b1 Ô b3) Ô -(b2 Ô b4) + (a_m û b2 Ô b4) Ô -(b1 Ô b3) + (a_m û b1 Ô b2) Ô b3 Ô b4 + (a_m û b1 Ô b4) Ô b2 Ô b3 + (a_m û b2 Ô b3) Ô b1 Ô b4 + (a_m û b3 Ô b4) Ô b1 Ô b2
a_m Û_3 b_4 ã (a_m û b1 Ô b2 Ô b3) Ô b4 + (a_m û b1 Ô b2 Ô b4) Ô -b3 + (a_m û b1 Ô b3 Ô b4) Ô b2 + (a_m û b2 Ô b3 Ô b4) Ô -b1
a_m Û_4 b_4 ã (a_m û b1 Ô b2 Ô b3 Ô b4) Ô 1

10.5 The Generalized Product Theorem

The A and B forms of a generalized product


There is a surprising alternative form for the expansion of the generalized product in terms of
exterior and interior products. We distinguish the two forms by calling the first (definitional) form
the A form and the new second form the B form.

A:   a_m Û_l b_k ã Σ_j (a_m û bj_l) Ô bj_(k-l),   j = 1, …, (k choose l)

B:   a_m Û_l b_k ã Σ_j (a_m Ô bj_(k-l)) û bj_l,   j = 1, …, (k choose l)        10.10

b_k ã b1_l Ô b1_(k-l) ã b2_l Ô b2_(k-l) ã …

The identity between the A form and the B form is the source of many useful relations in the
Grassmann and Clifford algebras. We call it the Generalized Product Theorem.


In the previous sections, the identity between the expansions in terms of either of the two factors
was shown to be at the inner product level. The identity between the A form and the B form is at a
further level of complexity - that of the scalar product. That is, in order to show the identity
between the two forms, a generalized product may need to be reduced to an expression involving
exterior and scalar products.
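The theorem can also be cross-checked numerically, outside Mathematica. The following is a minimal stand-alone sketch (plain Python; the data representation, the function names, and the contraction convention for the interior product in an orthonormal Euclidean basis are my own assumptions, not the GrassmannAlgebra package's). It expands both the A form and the B form for a non-simple 2-element and a simple 3-element in a 4-space, and checks that they agree.

```python
from itertools import combinations

def merge_sign(s, t):
    # Sign of sorting the concatenation s + t (0 if the index tuples overlap).
    if set(s) & set(t):
        return 0
    inversions = sum(1 for i in s for j in t if i > j)
    return -1 if inversions % 2 else 1

def wedge(x, y):
    # Exterior product of multivectors (dict: sorted basis-index tuple -> coefficient).
    out = {}
    for s, cs in x.items():
        for t, ct in y.items():
            sign = merge_sign(s, t)
            if sign:
                key = tuple(sorted(s + t))
                out[key] = out.get(key, 0) + sign * cs * ct
    return {k: v for k, v in out.items() if v}

def interior(x, t):
    # Interior product x û e_t in an orthonormal basis: e_s û e_t = sign * e_(s-t),
    # with the sign fixed by e_t Ô e_(s-t) == sign * e_s; zero unless t lies in s.
    out = {}
    for s, cs in x.items():
        if set(t) <= set(s):
            rest = tuple(i for i in s if i not in t)
            out[rest] = out.get(rest, 0) + merge_sign(t, rest) * cs
    return {k: v for k, v in out.items() if v}

def add(x, y):
    out = dict(x)
    for k, v in y.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

def generalized(a, b_factors, l, form):
    # A and B expansions of the order-l generalized product of multivector a with
    # the simple element whose 1-factors are the basis indices in b_factors.
    total = {}
    for subset in combinations(b_factors, l):
        rest = tuple(i for i in b_factors if i not in subset)
        split_sign = merge_sign(subset, rest)   # e_subset Ô e_rest == split_sign * b
        if form == "A":
            term = wedge(interior(a, subset), {rest: 1})
        else:                                   # B form
            term = interior(wedge(a, {rest: 1}), subset)
        total = add(total, {s: split_sign * c for s, c in term.items()})
    return total

# a: a non-simple 2-element in a 4-space; b = e1 Ô e3 Ô e4, a simple 3-element.
a = {(1, 2): 3, (3, 4): -2, (1, 4): 5}
for l in (0, 1, 2, 3):
    assert generalized(a, (1, 3, 4), l, "A") == generalized(a, (1, 3, 4), l, "B")
```

For l = 3 both forms reduce to the interior product a û b, which vanishes here since the grade of a is less than 3; the agreement at l = 0, 1, 2 is the content of the theorem, pushed down to scalar arithmetic.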

Simple proofs for l equal to 0, 1, k-1 and k


The proofs for l equal to 0 and k are trivial. Substitution into the formulas shows that they reduce
respectively to the exterior and interior products.

We can also show straightforwardly that the A and B forms are equal for l equal to 1 and k-1,
hence proving the theorem for all generalized products of the form a_m Û_l b_k in which k is less than or
equal to 3. I have not yet found such a straightforward proof for the general case.

The example of the next section will extend this result to show that the theorem is true for m and k
equal to 4 and l equal to 2. By using the quasi-commutativity of the generalized product we can
thus show that the theorem is valid for all generalized products in a 4-space. The example will also
make clearer the detail by which the equality of the two forms is achieved.

Ï l=1

We begin with the B form for l equal to 1, and show that it reduces to the A form for l equal to 1.

a_m Û_1 b_k ã Σ_j (a_m Ô bj_(k-1)) û bj_1        b_k ã b1_1 Ô b1_(k-1) ã b2_1 Ô b2_(k-1) ã …

We next retrieve an interior product theorem from Chapter 6

(a_m Ô b_k) û x ã (a_m û x) Ô b_k + (-1)^m a_m Ô (b_k û x)

which enables us to expand the summand to give two types of terms

a_m Û_1 b_k ã Σ_j (a_m û bj_1) Ô bj_(k-1) + (-1)^m Σ_j a_m Ô (bj_(k-1) û bj_1)

with j running from 1 to k in each sum.

The first sum is of the A form, while the second sum is zero on account of the Zero Interior Sum
theorem. Hence we have proved the theorem for l equal to 1.

Ï l = k-1

This time we begin with the A form for l equal to k-1, and show that it reduces to the B form for l
equal to k-1.


a_m Û_(k-1) b_k ã Σ_j (a_m û bj_(k-1)) Ô bj_1        b_k ã b1_(k-1) Ô b1_1 ã b2_(k-1) Ô b2_1 ã …

We then retrieve a second interior product theorem from Chapter 6

(a_m û b_k) Ô x ã (a_m Ô x) û b_k + (-1)^(m+1) a_m û (b_k û x)

which enables us to expand the summand again to give two types of terms

a_m Û_(k-1) b_k ã Σ_j (a_m Ô bj_1) û bj_(k-1) + (-1)^(m+1) Σ_j a_m û (bj_(k-1) û bj_1)

with j running from 1 to k in each sum.

The first sum is of the B form, while the second sum is zero on account of the Zero Interior Sum
theorem. Hence we have proved the theorem for l equal to k-1.

Ÿ Example: Verification of the Generalized Product Theorem


As an example we take the generalized product a_m Û_2 b_4, and generate the interior product form for
both the A and B forms. (Note the 'B' at the end of the function name ToInteriorProductsB in
the second case, indicating that it will generate the B form.)

A = ToInteriorProducts[a_m Û_2 b_4]

(a_m û b1 Ô b2) Ô b3 Ô b4 - (a_m û b1 Ô b3) Ô b2 Ô b4 + (a_m û b1 Ô b4) Ô b2 Ô b3 +
(a_m û b2 Ô b3) Ô b1 Ô b4 - (a_m û b2 Ô b4) Ô b1 Ô b3 + (a_m û b3 Ô b4) Ô b1 Ô b2

B = ToInteriorProductsB[a_m Û_2 b_4]

a_m Ô b1 Ô b2 û b3 Ô b4 - a_m Ô b1 Ô b3 û b2 Ô b4 + a_m Ô b1 Ô b4 û b2 Ô b3 +
a_m Ô b2 Ô b3 û b1 Ô b4 - a_m Ô b2 Ô b4 û b1 Ô b3 + a_m Ô b3 Ô b4 û b1 Ô b2

Note that the A form and the B form at this first (interior product) level of their expansion both
have the same number of terms (in this case (4 choose 2) = 6). Although the expressions also both have the
same concatenation of factors, their grouping is different. Due to Mathematica's inbuilt operator
precedences, parentheses are not needed in the second case: the exterior product binds more
tightly than the interior product, so parentheses effectively surround the exterior products.


The next step is to convert these two expressions to their inner product forms. But to do this we
will need to specify the grade of a_m. For this example, we take m equal to 4, and replace a_m in the
expressions above by 𝒜 (and consequently we also need to increase the dimension of the space
above the default value of 3 to prevent 𝒜 being treated as zero).

¯!6 ; 𝒜 = ComposeSimpleForm[a_4]

a1 Ô a2 Ô a3 Ô a4

A1 = ¯%[ToInnerProducts[A /. a_m → 𝒜]]

Ha3 Ô a4 û b3 Ô b4 L a1 Ô a2 Ô b1 Ô b2 - Ha3 Ô a4 û b2 Ô b4 L a1 Ô a2 Ô b1 Ô b3 +
Ha3 Ô a4 û b2 Ô b3 L a1 Ô a2 Ô b1 Ô b4 + Ha3 Ô a4 û b1 Ô b4 L a1 Ô a2 Ô b2 Ô b3 -
Ha3 Ô a4 û b1 Ô b3 L a1 Ô a2 Ô b2 Ô b4 + Ha3 Ô a4 û b1 Ô b2 L a1 Ô a2 Ô b3 Ô b4 -
Ha2 Ô a4 û b3 Ô b4 L a1 Ô a3 Ô b1 Ô b2 + Ha2 Ô a4 û b2 Ô b4 L a1 Ô a3 Ô b1 Ô b3 -
Ha2 Ô a4 û b2 Ô b3 L a1 Ô a3 Ô b1 Ô b4 - Ha2 Ô a4 û b1 Ô b4 L a1 Ô a3 Ô b2 Ô b3 +
Ha2 Ô a4 û b1 Ô b3 L a1 Ô a3 Ô b2 Ô b4 - Ha2 Ô a4 û b1 Ô b2 L a1 Ô a3 Ô b3 Ô b4 +
Ha2 Ô a3 û b3 Ô b4 L a1 Ô a4 Ô b1 Ô b2 - Ha2 Ô a3 û b2 Ô b4 L a1 Ô a4 Ô b1 Ô b3 +
Ha2 Ô a3 û b2 Ô b3 L a1 Ô a4 Ô b1 Ô b4 + Ha2 Ô a3 û b1 Ô b4 L a1 Ô a4 Ô b2 Ô b3 -
Ha2 Ô a3 û b1 Ô b3 L a1 Ô a4 Ô b2 Ô b4 + Ha2 Ô a3 û b1 Ô b2 L a1 Ô a4 Ô b3 Ô b4 +
Ha1 Ô a4 û b3 Ô b4 L a2 Ô a3 Ô b1 Ô b2 - Ha1 Ô a4 û b2 Ô b4 L a2 Ô a3 Ô b1 Ô b3 +
Ha1 Ô a4 û b2 Ô b3 L a2 Ô a3 Ô b1 Ô b4 + Ha1 Ô a4 û b1 Ô b4 L a2 Ô a3 Ô b2 Ô b3 -
Ha1 Ô a4 û b1 Ô b3 L a2 Ô a3 Ô b2 Ô b4 + Ha1 Ô a4 û b1 Ô b2 L a2 Ô a3 Ô b3 Ô b4 -
Ha1 Ô a3 û b3 Ô b4 L a2 Ô a4 Ô b1 Ô b2 + Ha1 Ô a3 û b2 Ô b4 L a2 Ô a4 Ô b1 Ô b3 -
Ha1 Ô a3 û b2 Ô b3 L a2 Ô a4 Ô b1 Ô b4 - Ha1 Ô a3 û b1 Ô b4 L a2 Ô a4 Ô b2 Ô b3 +
Ha1 Ô a3 û b1 Ô b3 L a2 Ô a4 Ô b2 Ô b4 - Ha1 Ô a3 û b1 Ô b2 L a2 Ô a4 Ô b3 Ô b4 +
Ha1 Ô a2 û b3 Ô b4 L a3 Ô a4 Ô b1 Ô b2 - Ha1 Ô a2 û b2 Ô b4 L a3 Ô a4 Ô b1 Ô b3 +
Ha1 Ô a2 û b2 Ô b3 L a3 Ô a4 Ô b1 Ô b4 + Ha1 Ô a2 û b1 Ô b4 L a3 Ô a4 Ô b2 Ô b3 -
Ha1 Ô a2 û b1 Ô b3 L a3 Ô a4 Ô b2 Ô b4 + Ha1 Ô a2 û b1 Ô b2 L a3 Ô a4 Ô b3 Ô b4


B1 = ¯%[ToInnerProducts[B /. a_m → 𝒜]]

H2 Hb1 Ô b2 û b3 Ô b4 L - 2 Hb1 Ô b3 û b2 Ô b4 L + 2 Hb1 Ô b4 û b2 Ô b3 LL a1 Ô a2 Ô a3 Ô a4 +
H-Ha4 Ô b2 û b3 Ô b4 L + a4 Ô b3 û b2 Ô b4 - a4 Ô b4 û b2 Ô b3 L a1 Ô a2 Ô a3 Ô b1 +
Ha4 Ô b1 û b3 Ô b4 - a4 Ô b3 û b1 Ô b4 + a4 Ô b4 û b1 Ô b3 L a1 Ô a2 Ô a3 Ô b2 +
H-Ha4 Ô b1 û b2 Ô b4 L + a4 Ô b2 û b1 Ô b4 - a4 Ô b4 û b1 Ô b2 L a1 Ô a2 Ô a3 Ô b3 +
Ha4 Ô b1 û b2 Ô b3 - a4 Ô b2 û b1 Ô b3 + a4 Ô b3 û b1 Ô b2 L a1 Ô a2 Ô a3 Ô b4 +
Ha3 Ô b2 û b3 Ô b4 - a3 Ô b3 û b2 Ô b4 + a3 Ô b4 û b2 Ô b3 L a1 Ô a2 Ô a4 Ô b1 +
H-Ha3 Ô b1 û b3 Ô b4 L + a3 Ô b3 û b1 Ô b4 - a3 Ô b4 û b1 Ô b3 L a1 Ô a2 Ô a4 Ô b2 +
Ha3 Ô b1 û b2 Ô b4 - a3 Ô b2 û b1 Ô b4 + a3 Ô b4 û b1 Ô b2 L a1 Ô a2 Ô a4 Ô b3 +
H-Ha3 Ô b1 û b2 Ô b3 L + a3 Ô b2 û b1 Ô b3 - a3 Ô b3 û b1 Ô b2 L a1 Ô a2 Ô a4 Ô b4 +
Ha3 Ô a4 û b3 Ô b4 L a1 Ô a2 Ô b1 Ô b2 - Ha3 Ô a4 û b2 Ô b4 L a1 Ô a2 Ô b1 Ô b3 +
Ha3 Ô a4 û b2 Ô b3 L a1 Ô a2 Ô b1 Ô b4 + Ha3 Ô a4 û b1 Ô b4 L a1 Ô a2 Ô b2 Ô b3 -
Ha3 Ô a4 û b1 Ô b3 L a1 Ô a2 Ô b2 Ô b4 + Ha3 Ô a4 û b1 Ô b2 L a1 Ô a2 Ô b3 Ô b4 +
H-Ha2 Ô b2 û b3 Ô b4 L + a2 Ô b3 û b2 Ô b4 - a2 Ô b4 û b2 Ô b3 L a1 Ô a3 Ô a4 Ô b1 +
Ha2 Ô b1 û b3 Ô b4 - a2 Ô b3 û b1 Ô b4 + a2 Ô b4 û b1 Ô b3 L a1 Ô a3 Ô a4 Ô b2 +
H-Ha2 Ô b1 û b2 Ô b4 L + a2 Ô b2 û b1 Ô b4 - a2 Ô b4 û b1 Ô b2 L a1 Ô a3 Ô a4 Ô b3 +
Ha2 Ô b1 û b2 Ô b3 - a2 Ô b2 û b1 Ô b3 + a2 Ô b3 û b1 Ô b2 L a1 Ô a3 Ô a4 Ô b4 -
Ha2 Ô a4 û b3 Ô b4 L a1 Ô a3 Ô b1 Ô b2 + Ha2 Ô a4 û b2 Ô b4 L a1 Ô a3 Ô b1 Ô b3 -
Ha2 Ô a4 û b2 Ô b3 L a1 Ô a3 Ô b1 Ô b4 - Ha2 Ô a4 û b1 Ô b4 L a1 Ô a3 Ô b2 Ô b3 +
Ha2 Ô a4 û b1 Ô b3 L a1 Ô a3 Ô b2 Ô b4 - Ha2 Ô a4 û b1 Ô b2 L a1 Ô a3 Ô b3 Ô b4 +
Ha2 Ô a3 û b3 Ô b4 L a1 Ô a4 Ô b1 Ô b2 - Ha2 Ô a3 û b2 Ô b4 L a1 Ô a4 Ô b1 Ô b3 +
Ha2 Ô a3 û b2 Ô b3 L a1 Ô a4 Ô b1 Ô b4 + Ha2 Ô a3 û b1 Ô b4 L a1 Ô a4 Ô b2 Ô b3 -
Ha2 Ô a3 û b1 Ô b3 L a1 Ô a4 Ô b2 Ô b4 + Ha2 Ô a3 û b1 Ô b2 L a1 Ô a4 Ô b3 Ô b4 +
Ha1 Ô b2 û b3 Ô b4 - a1 Ô b3 û b2 Ô b4 + a1 Ô b4 û b2 Ô b3 L a2 Ô a3 Ô a4 Ô b1 +
H-Ha1 Ô b1 û b3 Ô b4 L + a1 Ô b3 û b1 Ô b4 - a1 Ô b4 û b1 Ô b3 L a2 Ô a3 Ô a4 Ô b2 +
Ha1 Ô b1 û b2 Ô b4 - a1 Ô b2 û b1 Ô b4 + a1 Ô b4 û b1 Ô b2 L a2 Ô a3 Ô a4 Ô b3 +
H-Ha1 Ô b1 û b2 Ô b3 L + a1 Ô b2 û b1 Ô b3 - a1 Ô b3 û b1 Ô b2 L a2 Ô a3 Ô a4 Ô b4 +
Ha1 Ô a4 û b3 Ô b4 L a2 Ô a3 Ô b1 Ô b2 - Ha1 Ô a4 û b2 Ô b4 L a2 Ô a3 Ô b1 Ô b3 +
Ha1 Ô a4 û b2 Ô b3 L a2 Ô a3 Ô b1 Ô b4 + Ha1 Ô a4 û b1 Ô b4 L a2 Ô a3 Ô b2 Ô b3 -
Ha1 Ô a4 û b1 Ô b3 L a2 Ô a3 Ô b2 Ô b4 + Ha1 Ô a4 û b1 Ô b2 L a2 Ô a3 Ô b3 Ô b4 -
Ha1 Ô a3 û b3 Ô b4 L a2 Ô a4 Ô b1 Ô b2 + Ha1 Ô a3 û b2 Ô b4 L a2 Ô a4 Ô b1 Ô b3 -
Ha1 Ô a3 û b2 Ô b3 L a2 Ô a4 Ô b1 Ô b4 - Ha1 Ô a3 û b1 Ô b4 L a2 Ô a4 Ô b2 Ô b3 +
Ha1 Ô a3 û b1 Ô b3 L a2 Ô a4 Ô b2 Ô b4 - Ha1 Ô a3 û b1 Ô b2 L a2 Ô a4 Ô b3 Ô b4 +
Ha1 Ô a2 û b3 Ô b4 L a3 Ô a4 Ô b1 Ô b2 - Ha1 Ô a2 û b2 Ô b4 L a3 Ô a4 Ô b1 Ô b3 +
Ha1 Ô a2 û b2 Ô b3 L a3 Ô a4 Ô b1 Ô b4 + Ha1 Ô a2 û b1 Ô b4 L a3 Ô a4 Ô b2 Ô b3 -
Ha1 Ô a2 û b1 Ô b3 L a3 Ô a4 Ô b2 Ô b4 + Ha1 Ô a2 û b1 Ô b2 L a3 Ô a4 Ô b3 Ô b4

It can be seen that at this inner product level, these two expressions are not of the same form: B1
contains additional terms beyond those of A1. The interior products of A1 are only of the form
Hai Ô aj û br Ô bs L, whereas the extra terms of B1 are of the forms Ibi Ô bj û br Ô bs M
and Hai Ô bj û br Ô bs L. Calculating their difference gives:


AB = B1 - A1
H2 Hb1 Ô b2 û b3 Ô b4 L - 2 Hb1 Ô b3 û b2 Ô b4 L + 2 Hb1 Ô b4 û b2 Ô b3 LL
a1 Ô a2 Ô a3 Ô a4 +
H-Ha4 Ô b2 û b3 Ô b4 L + a4 Ô b3 û b2 Ô b4 - a4 Ô b4 û b2 Ô b3 L a1 Ô a2 Ô a3 Ô b1 +
Ha4 Ô b1 û b3 Ô b4 - a4 Ô b3 û b1 Ô b4 + a4 Ô b4 û b1 Ô b3 L a1 Ô a2 Ô a3 Ô b2 +
H-Ha4 Ô b1 û b2 Ô b4 L + a4 Ô b2 û b1 Ô b4 - a4 Ô b4 û b1 Ô b2 L a1 Ô a2 Ô a3 Ô b3 +
Ha4 Ô b1 û b2 Ô b3 - a4 Ô b2 û b1 Ô b3 + a4 Ô b3 û b1 Ô b2 L a1 Ô a2 Ô a3 Ô b4 +
Ha3 Ô b2 û b3 Ô b4 - a3 Ô b3 û b2 Ô b4 + a3 Ô b4 û b2 Ô b3 L a1 Ô a2 Ô a4 Ô b1 +
H-Ha3 Ô b1 û b3 Ô b4 L + a3 Ô b3 û b1 Ô b4 - a3 Ô b4 û b1 Ô b3 L a1 Ô a2 Ô a4 Ô b2 +
Ha3 Ô b1 û b2 Ô b4 - a3 Ô b2 û b1 Ô b4 + a3 Ô b4 û b1 Ô b2 L a1 Ô a2 Ô a4 Ô b3 +
H-Ha3 Ô b1 û b2 Ô b3 L + a3 Ô b2 û b1 Ô b3 - a3 Ô b3 û b1 Ô b2 L a1 Ô a2 Ô a4 Ô b4 +
H-Ha2 Ô b2 û b3 Ô b4 L + a2 Ô b3 û b2 Ô b4 - a2 Ô b4 û b2 Ô b3 L a1 Ô a3 Ô a4 Ô b1 +
Ha2 Ô b1 û b3 Ô b4 - a2 Ô b3 û b1 Ô b4 + a2 Ô b4 û b1 Ô b3 L a1 Ô a3 Ô a4 Ô b2 +
H-Ha2 Ô b1 û b2 Ô b4 L + a2 Ô b2 û b1 Ô b4 - a2 Ô b4 û b1 Ô b2 L a1 Ô a3 Ô a4 Ô b3 +
Ha2 Ô b1 û b2 Ô b3 - a2 Ô b2 û b1 Ô b3 + a2 Ô b3 û b1 Ô b2 L a1 Ô a3 Ô a4 Ô b4 +
Ha1 Ô b2 û b3 Ô b4 - a1 Ô b3 û b2 Ô b4 + a1 Ô b4 û b2 Ô b3 L a2 Ô a3 Ô a4 Ô b1 +
H-Ha1 Ô b1 û b3 Ô b4 L + a1 Ô b3 û b1 Ô b4 - a1 Ô b4 û b1 Ô b3 L a2 Ô a3 Ô a4 Ô b2 +
Ha1 Ô b1 û b2 Ô b4 - a1 Ô b2 û b1 Ô b4 + a1 Ô b4 û b1 Ô b2 L a2 Ô a3 Ô a4 Ô b3 +
H-Ha1 Ô b1 û b2 Ô b3 L + a1 Ô b2 û b1 Ô b3 - a1 Ô b3 û b1 Ô b2 L a2 Ô a3 Ô a4 Ô b4

By inspection, we can see that each scalar coefficient is a zero interior sum, thus making AB equal
to zero and proving the theorem for this particular case. We can also verify this directly by
reducing the inner products of AB to scalar products.

¯%[ToScalarProducts[AB]]
0

Ÿ Verification that the B form may be expanded in terms of either factor

In this section we verify that, just as for the A form, the B form may be expanded in terms of either
of its factors.

The principal difference between an expansion using the A form, and one using the B form is that
the A form needs only to be expanded to the inner product level to show the identity between two
equal expressions. On the other hand, expansion of the B form requires expansion to the scalar
product level, often involving significantly more computation.

We take the same generalized product a_4 Û_2 b_3 that we discussed in the previous section, and show
that we need to expand the product to the scalar product level before the identity of the two
expansions becomes evident.


{𝒜, ℬ} = {ComposeSimpleForm[a_4], ComposeSimpleForm[b_3]}

{a1 Ô a2 Ô a3 Ô a4, b1 Ô b2 Ô b3}

Ï Reduction to interior products

A = ToInteriorProductsB[𝒜 Û_2 ℬ]

a1 Ô a2 Ô a3 Ô a4 Ô b1 û b2 Ô b3 -
a1 Ô a2 Ô a3 Ô a4 Ô b2 û b1 Ô b3 + a1 Ô a2 Ô a3 Ô a4 Ô b3 û b1 Ô b2

To expand with respect to a_4 we reverse the order of the generalized product to give b_3 Û_2 a_4 and
multiply by (-1)^((m-l)(k-l)) (which we note for this case to be 1) to give

B = ToInteriorProductsB[ℬ Û_2 𝒜]

a1 Ô a2 Ô b1 Ô b2 Ô b3 û a3 Ô a4 - a1 Ô a3 Ô b1 Ô b2 Ô b3 û a2 Ô a4 +
a1 Ô a4 Ô b1 Ô b2 Ô b3 û a2 Ô a3 + a2 Ô a3 Ô b1 Ô b2 Ô b3 û a1 Ô a4 -
a2 Ô a4 Ô b1 Ô b2 Ô b3 û a1 Ô a3 + a3 Ô a4 Ô b1 Ô b2 Ô b3 û a1 Ô a2

Ï Reduction to inner products

A1 = ¯%[ToInnerProducts[A]]
Ha4 Ô b1 û b2 Ô b3 - a4 Ô b2 û b1 Ô b3 + a4 Ô b3 û b1 Ô b2 L a1 Ô a2 Ô a3 +
H-Ha3 Ô b1 û b2 Ô b3 L + a3 Ô b2 û b1 Ô b3 - a3 Ô b3 û b1 Ô b2 L a1 Ô a2 Ô a4 +
Ha3 Ô a4 û b2 Ô b3 L a1 Ô a2 Ô b1 -
Ha3 Ô a4 û b1 Ô b3 L a1 Ô a2 Ô b2 + Ha3 Ô a4 û b1 Ô b2 L a1 Ô a2 Ô b3 +
Ha2 Ô b1 û b2 Ô b3 - a2 Ô b2 û b1 Ô b3 + a2 Ô b3 û b1 Ô b2 L a1 Ô a3 Ô a4 -
Ha2 Ô a4 û b2 Ô b3 L a1 Ô a3 Ô b1 + Ha2 Ô a4 û b1 Ô b3 L a1 Ô a3 Ô b2 -
Ha2 Ô a4 û b1 Ô b2 L a1 Ô a3 Ô b3 + Ha2 Ô a3 û b2 Ô b3 L a1 Ô a4 Ô b1 -
Ha2 Ô a3 û b1 Ô b3 L a1 Ô a4 Ô b2 + Ha2 Ô a3 û b1 Ô b2 L a1 Ô a4 Ô b3 +
H-Ha1 Ô b1 û b2 Ô b3 L + a1 Ô b2 û b1 Ô b3 - a1 Ô b3 û b1 Ô b2 L a2 Ô a3 Ô a4 +
Ha1 Ô a4 û b2 Ô b3 L a2 Ô a3 Ô b1 -
Ha1 Ô a4 û b1 Ô b3 L a2 Ô a3 Ô b2 + Ha1 Ô a4 û b1 Ô b2 L a2 Ô a3 Ô b3 -
Ha1 Ô a3 û b2 Ô b3 L a2 Ô a4 Ô b1 + Ha1 Ô a3 û b1 Ô b3 L a2 Ô a4 Ô b2 -
Ha1 Ô a3 û b1 Ô b2 L a2 Ô a4 Ô b3 + Ha1 Ô a2 û b2 Ô b3 L a3 Ô a4 Ô b1 -
Ha1 Ô a2 û b1 Ô b3 L a3 Ô a4 Ô b2 + Ha1 Ô a2 û b1 Ô b2 L a3 Ô a4 Ô b3


B1 = ¯%[ToInnerProducts[B]]
Ha3 Ô a4 û b2 Ô b3 L a1 Ô a2 Ô b1 -
Ha3 Ô a4 û b1 Ô b3 L a1 Ô a2 Ô b2 + Ha3 Ô a4 û b1 Ô b2 L a1 Ô a2 Ô b3 -
Ha2 Ô a4 û b2 Ô b3 L a1 Ô a3 Ô b1 + Ha2 Ô a4 û b1 Ô b3 L a1 Ô a3 Ô b2 -
Ha2 Ô a4 û b1 Ô b2 L a1 Ô a3 Ô b3 + Ha2 Ô a3 û b2 Ô b3 L a1 Ô a4 Ô b1 -
Ha2 Ô a3 û b1 Ô b3 L a1 Ô a4 Ô b2 + Ha2 Ô a3 û b1 Ô b2 L a1 Ô a4 Ô b3 +
Ha2 Ô a3 û a4 Ô b3 - a2 Ô a4 û a3 Ô b3 + a2 Ô b3 û a3 Ô a4 L a1 Ô b1 Ô b2 +
H-Ha2 Ô a3 û a4 Ô b2 L + a2 Ô a4 û a3 Ô b2 - a2 Ô b2 û a3 Ô a4 L a1 Ô b1 Ô b3 +
Ha2 Ô a3 û a4 Ô b1 - a2 Ô a4 û a3 Ô b1 + a2 Ô b1 û a3 Ô a4 L a1 Ô b2 Ô b3 +
Ha1 Ô a4 û b2 Ô b3 L a2 Ô a3 Ô b1 - Ha1 Ô a4 û b1 Ô b3 L a2 Ô a3 Ô b2 +
Ha1 Ô a4 û b1 Ô b2 L a2 Ô a3 Ô b3 - Ha1 Ô a3 û b2 Ô b3 L a2 Ô a4 Ô b1 +
Ha1 Ô a3 û b1 Ô b3 L a2 Ô a4 Ô b2 - Ha1 Ô a3 û b1 Ô b2 L a2 Ô a4 Ô b3 +
H-Ha1 Ô a3 û a4 Ô b3 L + a1 Ô a4 û a3 Ô b3 - a1 Ô b3 û a3 Ô a4 L a2 Ô b1 Ô b2 +
Ha1 Ô a3 û a4 Ô b2 - a1 Ô a4 û a3 Ô b2 + a1 Ô b2 û a3 Ô a4 L a2 Ô b1 Ô b3 +
H-Ha1 Ô a3 û a4 Ô b1 L + a1 Ô a4 û a3 Ô b1 - a1 Ô b1 û a3 Ô a4 L a2 Ô b2 Ô b3 +
Ha1 Ô a2 û b2 Ô b3 L a3 Ô a4 Ô b1 -
Ha1 Ô a2 û b1 Ô b3 L a3 Ô a4 Ô b2 + Ha1 Ô a2 û b1 Ô b2 L a3 Ô a4 Ô b3 +
Ha1 Ô a2 û a4 Ô b3 - a1 Ô a4 û a2 Ô b3 + a1 Ô b3 û a2 Ô a4 L a3 Ô b1 Ô b2 +
H-Ha1 Ô a2 û a4 Ô b2 L + a1 Ô a4 û a2 Ô b2 - a1 Ô b2 û a2 Ô a4 L a3 Ô b1 Ô b3 +
Ha1 Ô a2 û a4 Ô b1 - a1 Ô a4 û a2 Ô b1 + a1 Ô b1 û a2 Ô a4 L a3 Ô b2 Ô b3 +
H-Ha1 Ô a2 û a3 Ô b3 L + a1 Ô a3 û a2 Ô b3 - a1 Ô b3 û a2 Ô a3 L a4 Ô b1 Ô b2 +
Ha1 Ô a2 û a3 Ô b2 - a1 Ô a3 û a2 Ô b2 + a1 Ô b2 û a2 Ô a3 L a4 Ô b1 Ô b3 +
H-Ha1 Ô a2 û a3 Ô b1 L + a1 Ô a3 û a2 Ô b1 - a1 Ô b1 û a2 Ô a3 L a4 Ô b2 Ô b3 +
H2 Ha1 Ô a2 û a3 Ô a4 L - 2 Ha1 Ô a3 û a2 Ô a4 L + 2 Ha1 Ô a4 û a2 Ô a3 LL b1 Ô b2 Ô b3

By observation we can see that A1 and B1 contain some common terms. We compute the
difference of the two inner product forms and collect terms with the same exterior product factor:


AB = B1 - A1
-Ha4 Ô b1 û b2 Ô b3 - a4 Ô b2 û b1 Ô b3 + a4 Ô b3 û b1 Ô b2 L a1 Ô a2 Ô a3 -
H-Ha3 Ô b1 û b2 Ô b3 L + a3 Ô b2 û b1 Ô b3 - a3 Ô b3 û b1 Ô b2 L a1 Ô a2 Ô a4 -
Ha2 Ô b1 û b2 Ô b3 - a2 Ô b2 û b1 Ô b3 + a2 Ô b3 û b1 Ô b2 L a1 Ô a3 Ô a4 +
Ha2 Ô a3 û a4 Ô b3 - a2 Ô a4 û a3 Ô b3 + a2 Ô b3 û a3 Ô a4 L a1 Ô b1 Ô b2 +
H-Ha2 Ô a3 û a4 Ô b2 L + a2 Ô a4 û a3 Ô b2 - a2 Ô b2 û a3 Ô a4 L a1 Ô b1 Ô b3 +
Ha2 Ô a3 û a4 Ô b1 - a2 Ô a4 û a3 Ô b1 + a2 Ô b1 û a3 Ô a4 L a1 Ô b2 Ô b3 -
H-Ha1 Ô b1 û b2 Ô b3 L + a1 Ô b2 û b1 Ô b3 - a1 Ô b3 û b1 Ô b2 L a2 Ô a3 Ô a4 +
H-Ha1 Ô a3 û a4 Ô b3 L + a1 Ô a4 û a3 Ô b3 - a1 Ô b3 û a3 Ô a4 L a2 Ô b1 Ô b2 +
Ha1 Ô a3 û a4 Ô b2 - a1 Ô a4 û a3 Ô b2 + a1 Ô b2 û a3 Ô a4 L a2 Ô b1 Ô b3 +
H-Ha1 Ô a3 û a4 Ô b1 L + a1 Ô a4 û a3 Ô b1 - a1 Ô b1 û a3 Ô a4 L a2 Ô b2 Ô b3 +
Ha1 Ô a2 û a4 Ô b3 - a1 Ô a4 û a2 Ô b3 + a1 Ô b3 û a2 Ô a4 L a3 Ô b1 Ô b2 +
H-Ha1 Ô a2 û a4 Ô b2 L + a1 Ô a4 û a2 Ô b2 - a1 Ô b2 û a2 Ô a4 L a3 Ô b1 Ô b3 +
Ha1 Ô a2 û a4 Ô b1 - a1 Ô a4 û a2 Ô b1 + a1 Ô b1 û a2 Ô a4 L a3 Ô b2 Ô b3 +
H-Ha1 Ô a2 û a3 Ô b3 L + a1 Ô a3 û a2 Ô b3 - a1 Ô b3 û a2 Ô a3 L a4 Ô b1 Ô b2 +
Ha1 Ô a2 û a3 Ô b2 - a1 Ô a3 û a2 Ô b2 + a1 Ô b2 û a2 Ô a3 L a4 Ô b1 Ô b3 +
H-Ha1 Ô a2 û a3 Ô b1 L + a1 Ô a3 û a2 Ô b1 - a1 Ô b1 û a2 Ô a3 L a4 Ô b2 Ô b3 +
H2 Ha1 Ô a2 û a3 Ô a4 L - 2 Ha1 Ô a3 û a2 Ô a4 L + 2 Ha1 Ô a4 û a2 Ô a3 LL b1 Ô b2 Ô b3

Ï Reduction to scalar products

In order to show the equality of the two forms, we need to show that the above difference is zero.
As in the previous section we may do this by converting the inner products to scalar products.
Alternatively we may observe that the coefficients of the terms are zero by virtue of the Zero
Interior Sum Theorem discussed later in this chapter.

¯%[ToScalarProducts[AB]]
0

This is an example of how the B form of the generalized product may be expanded in terms of
either factor.

Ÿ Example: Case Min[m, k] < l < Max[m, k]: Reduction to zero


When the order of the product l is greater than the smaller of the grades of the factors of the
product, but less than the larger of the grades, the generalized product may still be expanded in
terms of the factors of the element of larger grade. This leads however to a sum of terms which is
zero.

a_m Û_l b_k ã 0        Min[m, k] < l < Max[m, k]

When l is equal to the larger of the grades of the factors, the generalized product reduces to a
single interior product which is zero by virtue of its left factor being of lesser grade than its right
factor. Suppose l = k > m. Then:


a_m Û_k b_k ã a_m û b_k ã 0        l = k > m

These relationships are the source of an interesting suite of identities relating exterior and interior
products. We take some examples; in each case we verify that the result is zero by converting the
expression to its scalar product form.

But we will need to circumvent the default behaviour of ToInteriorProducts by not initially
telling it the grade of one of the factors (since it would normally return 0 for these types of products).
We also need to declare some extra vector symbols and use the B variant of ToInteriorProducts
(ToInteriorProductsB) to force the desired expansion.

DeclareExtraVectorSymbols[a_];

Ï Example 1

A123 = ToInteriorProductsB[a Û_2 b_3] /. a → a1

a1 Ô b1 û b2 Ô b3 - a1 Ô b2 û b1 Ô b3 + a1 Ô b3 û b1 Ô b2

¯$[ToScalarProducts[A123]]
0

Ï Example 2

A124 = ToInteriorProductsB[a Û_2 b_4] /. a → a1

a1 Ô b1 Ô b2 û b3 Ô b4 - a1 Ô b1 Ô b3 û b2 Ô b4 + a1 Ô b1 Ô b4 û b2 Ô b3 +
a1 Ô b2 Ô b3 û b1 Ô b4 - a1 Ô b2 Ô b4 û b1 Ô b3 + a1 Ô b3 Ô b4 û b1 Ô b2

¯$[ToScalarProducts[A124]]
0

Ï Example 3

A234 = ToInteriorProductsB[a Û_3 b_4] /. a → a1 Ô a2

-Ha1 Ô a2 Ô b1 û b2 Ô b3 Ô b4 L + a1 Ô a2 Ô b2 û b1 Ô b3 Ô b4 -
a1 Ô a2 Ô b3 û b1 Ô b2 Ô b4 + a1 Ô a2 Ô b4 û b1 Ô b2 Ô b3

¯$[ToScalarProducts[A234]]
0


10.6 Products with Common Factors

Products with common factors


One of the more interesting properties of the generalized Grassmann product of simple elements is
its behaviour when these elements contain a common factor.

If two simple elements contain a common factor of grade g, then their generalized Grassmann
product of order g reduces to a simple interior product, and their generalized products of order
less than g are zero.

We can show this most directly from the B form of the generalized product formula 10.10. The B
form is
a_m Û_l b_k ã Σ_j (a_m Ô bj_(k-l)) û bj_l,   j = 1, …, (k choose l)

b_k ã b1_l Ô b1_(k-l) ã b2_l Ô b2_(k-l) ã …

Let a_m and b_k have a common factor g_l and residual factors A_a and B_b.

a_m ã g_l Ô A_a        b_k ã g_l Ô B_b

Substituting these products into the formula above enables us to begin writing the generalized
product sum:

(g_l Ô A_a) Û_l (g_l Ô B_b) ã (g_l Ô A_a Ô B_b) û g_l + …

But we quickly see that, because each of the subsequent terms must now contain a repeated
1-element factor in its exterior product, they must all reduce to zero. Hence, for simple elements,
if the order of the generalized product is equal to the grade of the common factor, the generalized
product reduces to a simple interior product.

(g_l Ô A_a) Û_l (g_l Ô B_b) ã (g_l Ô A_a Ô B_b) û g_l        10.11
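As a concrete instance of formula 10.11 (my own worked example, in an orthonormal Euclidean basis e1, e2, e3; I write the generalized product of order 1 as \Diamond_1 and the interior product as \circ), take g = e1, A = e2, B = e3, so that both factors share the single common factor e1:

```latex
(e_1 \wedge e_2) \mathbin{\Diamond_1} (e_1 \wedge e_3)
  = (e_1 \wedge e_2 \wedge e_3) \circ e_1
    - \underbrace{(e_1 \wedge e_2 \wedge e_1)}_{=\,0} \circ\, e_3
  = (e_1 \wedge e_2 \wedge e_3) \circ e_1
  = e_2 \wedge e_3 .
```

The first equality is the B-form expansion of the product; the surviving term is exactly the right hand side of 10.11, since here the common factor has unit measure and (g Ô A Ô B) û g = (e1 Ô e2 Ô e3) û e1.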

By interchanging g_l, A_a, and B_b we also have two simple alternate expressions

(A_a Ô g_l) Û_l (g_l Ô B_b) ã (A_a Ô g_l Ô B_b) û g_l

(A_a Ô g_l) Û_l (B_b Ô g_l) ã (A_a Ô B_b Ô g_l) û g_l        10.12

In a similar way, we can see that if the order of the generalized product is less than the grade of the
common factor, all terms must contain a repeated factor in their exterior product, and the
generalized product reduces to zero.

(g_m Ô A_a) Û_l (g_m Ô B_b) ã 0        l < m        10.13

Of course, this result is also valid in the case that either or both of A_a and B_b are scalars.

g_m Û_l (g_m Ô B_b) ã 0        (g_m Ô A_a) Û_l g_m ã 0        g_m Û_l g_m ã 0        l < m        10.14

Congruent factors

If both A_a and B_b are scalars, that is, a and b are equal to 0, they may be factored out of the formulas.

g_m Û_l g_m ã g_m û g_m ,  l ã m        g_m Û_l g_m ã 0,  l ≠ m        10.15

If a_n and b_n are both n-elements, then they are congruent (that is, they differ by at most a scalar
factor), and

a_n Û_l b_n ã a_n û b_n ,  l ã n        a_n Û_l b_n ã 0,  l ≠ n        10.16

These results give rise to a series of relationships among the exterior and interior products of
factors of a simple element. These relationships are independent of the dimension of the space
concerned. For example, applying the result to simple 2, 3 and 4-elements gives the following
relationships at the interior product level:


Ï Example: 2-elements

ToInteriorProducts[a_2 Û_1 a_2] ã 0

(a_2 û a1) Ô a2 - (a_2 û a2) Ô a1 ã 0

Ï Example: 3-elements

ToInteriorProducts[a_3 Û_1 a_3] ã 0

a1 Ô a2 Ô a_3 û a3 - a1 Ô a3 Ô a_3 û a2 + a2 Ô a3 Ô a_3 û a1 ã 0

ToInteriorProducts[a_3 Û_2 a_3] ã 0

(a_3 û a1 Ô a2) Ô a3 - (a_3 û a1 Ô a3) Ô a2 + (a_3 û a2 Ô a3) Ô a1 ã 0

Ï Example: 4-elements

ToInteriorProducts[a_4 Û_1 a_4] ã 0

(a_4 û a1) Ô a2 Ô a3 Ô a4 - (a_4 û a2) Ô a1 Ô a3 Ô a4 +
(a_4 û a3) Ô a1 Ô a2 Ô a4 - (a_4 û a4) Ô a1 Ô a2 Ô a3 ã 0

ToInteriorProducts[a_4 Û_2 a_4] ã 0

a1 Ô a2 Ô (a_4 û a3 Ô a4) - a1 Ô a3 Ô (a_4 û a2 Ô a4) + a1 Ô a4 Ô (a_4 û a2 Ô a3) +
a2 Ô a3 Ô (a_4 û a1 Ô a4) - a2 Ô a4 Ô (a_4 û a1 Ô a3) + a3 Ô a4 Ô (a_4 û a1 Ô a2) ã 0

ToInteriorProducts[a_4 Û_3 a_4] ã 0

(a_4 û a1 Ô a2 Ô a3) Ô a4 - (a_4 û a1 Ô a2 Ô a4) Ô a3 +
(a_4 û a1 Ô a3 Ô a4) Ô a2 - (a_4 û a2 Ô a3 Ô a4) Ô a1 ã 0

At the inner product level, some of the generalized products yield further relationships. For
example, a_4 Û_2 a_4 confirms the Zero Interior Sum theorem.


¯%[ToInnerProducts[a_4 Û_2 a_4]] ã 0

(2 (a1 Ô a2 û a3 Ô a4) - 2 (a1 Ô a3 û a2 Ô a4) + 2 (a1 Ô a4 û a2 Ô a3)) a1 Ô a2 Ô a3 Ô a4 ã 0

Products of non-simple elements


Suppose X_m is a general, not necessarily simple, m-element. We can then write it as a sum of
simple terms:

X_m ã a1_m + a2_m + a3_m + …

From the axioms for the exterior product it is straightforward to show that X_m Ô X_m ã (-1)^m X_m Ô X_m. This
means, of course, that the exterior product of a general m-element with itself is zero if m is odd.
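For instance (my own example, in a space of dimension at least 6), the 3-element X = e1 Ô e2 Ô e3 + e4 Ô e5 Ô e6 is not simple, and

```latex
X \wedge X
  = e_1\wedge e_2\wedge e_3 \wedge e_4\wedge e_5\wedge e_6
    + e_4\wedge e_5\wedge e_6 \wedge e_1\wedge e_2\wedge e_3
  = \bigl(1 + (-1)^{3\cdot 3}\bigr)\, e_1\wedge\cdots\wedge e_6
  = 0 ,
```

whereas the even-grade element Y = e1 Ô e2 + e3 Ô e4 gives Y Ô Y = 2 e1 Ô e2 Ô e3 Ô e4 ≠ 0.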

In a similar manner, the formula for the quasi-commutativity of the generalized product is:

a_m Û_l b_k ã (-1)^((m-l)(k-l)) b_k Û_l a_m

This shows that the generalized product of a general m-element with itself satisfies:

X_m Û_l X_m ã (-1)^(m-l) X_m Û_l X_m

Thus the generalization of the above result for the exterior product is that the generalized product
of a general (not necessarily simple) m-element with itself is zero if m-l is odd, or alternatively, if
just one of m and l is odd.

a1 + a2 + a3 + … Û a1 + a2 + a3 + … ã0 m-l œ
m m m l m m m 10.17
OddIntegers
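This vanishing can also be checked numerically. The sketch below — an illustrative re-implementation, not the GrassmannAlgebra package — computes the order-l generalized product over an orthonormal basis by summing over all l-splits of each factor, taking e_S û e_U to be 1 when the sorted index subsets S and U coincide and 0 otherwise; the names gp, splits, sign_of and the blade-dict representation are our own assumptions:

```python
from itertools import combinations

def sign_of(seq):
    # (-1)^(number of out-of-order index pairs)
    inv = sum(1 for i in range(len(seq))
                for j in range(i + 1, len(seq)) if seq[i] > seq[j])
    return (-1) ** inv

def wedge_blades(a, b):
    # exterior product of sorted basis-index tuples; None if an index repeats
    if set(a) & set(b):
        return None
    return sign_of(a + b), tuple(sorted(a + b))

def splits(blade, l):
    # every way of writing blade as +/- (l-subset) ^ (complement)
    n = range(len(blade))
    for p in combinations(n, l):
        S = tuple(blade[i] for i in p)
        T = tuple(blade[i] for i in n if i not in p)
        yield sign_of(S + T), S, T

def gp(A, B, l):
    # order-l generalized product, orthonormal (Euclidean) metric
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            if l > len(ba) or l > len(bb):
                continue
            for sa, S, Sb in splits(ba, l):
                for sb, U, Ub in splits(bb, l):
                    if S != U:
                        continue  # e_S o e_U = 0 for distinct sorted subsets
                    r = wedge_blades(Sb, Ub)
                    if r is not None:
                        out[r[1]] = out.get(r[1], 0) + sa * sb * r[0] * ca * cb
    return {b: c for b, c in out.items() if c != 0}

X = {(1, 2): 1, (1, 3): 1}  # m = 2
print(gp(X, X, 1))  # {}        m - l = 1, odd: cross terms e2^e3 and -e2^e3 cancel
print(gp(X, X, 2))  # {(): 2}   m - l = 0, even: the inner square survives
```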

Orthogonal factors
The right hand side of formula 10.11 can be expanded using the Interior Common Factor Theorem
(see Chapter 6, The Interior Product) below:
n
a û b ã ‚ ai û b ai
m k k k m-k
i=1

a ã a1 Ô a1 ã a2 Ô a2 ã ! ã an Ô an
m k m-k k m-k k m-k

To do this we put

a ã g Ô A Ô B, bãg
m l a b k l

in this formula, allowing the expansion to begin to be written


gÔAÔB ûg ã gûg AÔB + …


l a b l l l a b

Since g, A, and B are simple, they can be written as:


l a B

g ã g1 Ô g2 Ô … Ô gi Ô … Ô gl
l

A ã A1 Ô A2 Ô … Ô Ai Ô … Ô Aa
a

B ã B1 Ô B2 Ô … Ô Bi Ô … Ô Ba
B

Each of the subsequent terms will, in the sum on the right hand side of this expansion, contain at
least one of the Ai or Bj in the inner product. Thus, if g is totally orthogonal to both A and B, then
l a b
these terms will be zero and the formula becomes simply:

g Ô A Û g Ô B ã g Ô A Ô B û g ã g û g A Ô B,
l a l l b l a b l l l a b 10.18
gi û Aj ã 0, gi û Bj ã 0

(A simple element is totally orthogonal to another simple element if each of its 1-element factors is
orthogonal to each of the 1-element factors of the other element.)
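Formula 10.18 can be spot-checked over an orthonormal basis, where for example g = e1 is orthogonal to A = e2 and B = e3, so the product should collapse to (e1 û e1) e2 Ô e3. The sketch below is our own re-implementation for an orthonormal (Euclidean) metric — the names gp, splits, sign_of and the blade-dict representation are assumptions, not the GrassmannAlgebra code:

```python
from itertools import combinations

def sign_of(seq):
    # (-1)^(number of out-of-order index pairs)
    inv = sum(1 for i in range(len(seq))
                for j in range(i + 1, len(seq)) if seq[i] > seq[j])
    return (-1) ** inv

def wedge_blades(a, b):
    # exterior product of sorted basis-index tuples; None if an index repeats
    if set(a) & set(b):
        return None
    return sign_of(a + b), tuple(sorted(a + b))

def splits(blade, l):
    # every way of writing blade as +/- (l-subset) ^ (complement)
    n = range(len(blade))
    for p in combinations(n, l):
        S = tuple(blade[i] for i in p)
        T = tuple(blade[i] for i in n if i not in p)
        yield sign_of(S + T), S, T

def gp(A, B, l):
    # order-l generalized product, orthonormal (Euclidean) metric
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            if l > len(ba) or l > len(bb):
                continue
            for sa, S, Sb in splits(ba, l):
                for sb, U, Ub in splits(bb, l):
                    if S != U:
                        continue  # e_S o e_U = 0 for distinct sorted subsets
                    r = wedge_blades(Sb, Ub)
                    if r is not None:
                        out[r[1]] = out.get(r[1], 0) + sa * sb * r[0] * ca * cb
    return {b: c for b, c in out.items() if c != 0}

# g = e1 (grade 1), A = e2, B = e3: result is (e1 o e1) e2 ^ e3
print(gp({(1, 2): 1}, {(1, 3): 1}, 1))        # {(2, 3): 1}
# g = e1 ^ e2 (grade 2), A = e3, B = e4: result is (e12 o e12) e3 ^ e4
print(gp({(1, 2, 3): 1}, {(1, 2, 4): 1}, 2))  # {(3, 4): 1}
```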

Ÿ Generalized products of basis elements


All of the basis elements of a Grassmann algebra have 1-elements in common with some other
basis elements (except for the algebra of a 1-space). The formulas discussed above are part of the
rule-base of GrassmannSimplify and so we can see how they work by applying
GrassmannSimplify to various combinations of basis elements.

It is also often the case that we work with a Euclidean metric. The examples below tabulate the
effect of the generalized product on the basis elements of a Grassmann algebra in both the general
metric and the Euclidean metric cases.

These results will be important when we discuss hypercomplex and Clifford algebras in later
chapters.

Ï The code for the examples

Here is the piece of code used to generate the examples in the next section. If you want to generate
your own examples, you will need to click anywhere in the box, and then press the Enter (or Shift-
Enter) key.


TabulateBasisProducts@n_DBx_ Û y_F :=
l

ModuleB8R, T<, ¯J; DeclareBasis@nD; T =

ComposeTableB8"Generalized Products of Basis Elements"<,

TableB:R = ¯8Bx Û yF, ToMetricElements@


l

ToScalarProducts@RDD>, 8l, 0, n<F, Range@0, nD,

8"General Metric ", "Euclidean Metric"<F; ¯K; TF

Ï Examples of basis products in 3-space

Here are tabulations of the generalized products of all the basis elements of a Grassmann algebra of
3-space.

TabulateBasisProducts@3DBe1 Ô e2 Ô e3 Û e1 Ô e2 Ô e3 F
l

Generalized Products of Basis Elements


General Metric Euclidean Metric
0 0 0
1 0 0
2 0 0
3 e1 Ô e2 Ô e3 û e1 Ô e2 Ô e3 1

TabulateBasisProducts@3DBe1 Ô e2 Ô e3 Û e1 Ô e2 F
l

Generalized Products of Basis Elements


General Metric Euclidean Metric
0 0 0
1 0 0
2 e1 Ô e2 Ô e3 û e1 Ô e2 e3
3 0 0


TabulateBasisProducts@3DBe1 Ô e2 Ô e3 Û e1 F
l

Generalized Products of Basis Elements


General Metric Euclidean Metric
0 0 0
1 e1 Ô e2 Ô e3 û e1 e2 Ô e3
2 0 0
3 0 0

TabulateBasisProducts@3DBe1 Ô e2 Û e1 Ô e2 F
l

Generalized Products of Basis Elements


General Metric Euclidean Metric
0 0 0
1 0 0
2 e1 Ô e2 û e1 Ô e2 1
3 0 0

TabulateBasisProducts@3DBe1 Ô e2 Û e1 Ô e3 F
l

Generalized Products of Basis Elements


General Metric Euclidean Metric
0 0 0
1 e1 Ô e2 Ô e3 û e1 e2 Ô e3
2 e1 Ô e2 û e1 Ô e3 0
3 0 0

TabulateBasisProducts@3DBe1 Ô e2 Û e1 F
l

Generalized Products of Basis Elements


General Metric Euclidean Metric
0 0 0
1 e1 Ô e2 û e1 e2
2 0 0
3 0 0


TabulateBasisProducts@3DBe1 Ô e2 Û e3 F
l

Generalized Products of Basis Elements


General Metric Euclidean Metric
0 e1 Ô e2 Ô e3 e1 Ô e2 Ô e3
1 e1 Ô e2 û e3 0
2 0 0
3 0 0

TabulateBasisProducts@3DBe1 Û e1 F
l

Generalized Products of Basis Elements


General Metric Euclidean Metric
0 0 0
1 e1 û e1 1
2 0 0
3 0 0

TabulateBasisProducts@3DBe1 Û e2 F
l

Generalized Products of Basis Elements


General Metric Euclidean Metric
0 e1 Ô e2 e1 Ô e2
1 e1 û e2 0
2 0 0
3 0 0
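The Euclidean-metric columns of these tables can be reproduced with a short re-implementation of the generalized product for an orthonormal basis. This is our own sketch (names gp, splits, sign_of and the blade-dict representation assumed), not the GrassmannAlgebra code that generated the tables:

```python
from itertools import combinations

def sign_of(seq):
    # (-1)^(number of out-of-order index pairs)
    inv = sum(1 for i in range(len(seq))
                for j in range(i + 1, len(seq)) if seq[i] > seq[j])
    return (-1) ** inv

def wedge_blades(a, b):
    # exterior product of sorted basis-index tuples; None if an index repeats
    if set(a) & set(b):
        return None
    return sign_of(a + b), tuple(sorted(a + b))

def splits(blade, l):
    # every way of writing blade as +/- (l-subset) ^ (complement)
    n = range(len(blade))
    for p in combinations(n, l):
        S = tuple(blade[i] for i in p)
        T = tuple(blade[i] for i in n if i not in p)
        yield sign_of(S + T), S, T

def gp(A, B, l):
    # order-l generalized product, orthonormal (Euclidean) metric
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            if l > len(ba) or l > len(bb):
                continue
            for sa, S, Sb in splits(ba, l):
                for sb, U, Ub in splits(bb, l):
                    if S != U:
                        continue  # e_S o e_U = 0 for distinct sorted subsets
                    r = wedge_blades(Sb, Ub)
                    if r is not None:
                        out[r[1]] = out.get(r[1], 0) + sa * sb * r[0] * ca * cb
    return {b: c for b, c in out.items() if c != 0}

e123, e12, e13, e1, e2, e3 = [{t: 1} for t in
    [(1, 2, 3), (1, 2), (1, 3), (1,), (2,), (3,)]]
assert gp(e123, e123, 3) == {(): 1}      # e123 o e123 -> 1
assert gp(e123, e12, 2) == {(3,): 1}     # -> e3
assert gp(e123, e1, 1) == {(2, 3): 1}    # -> e2 ^ e3
assert gp(e12, e13, 1) == {(2, 3): 1}    # -> e2 ^ e3
assert gp(e12, e13, 2) == {}             # e12 o e13 -> 0
assert gp(e12, e3, 0) == {(1, 2, 3): 1}  # exterior product
assert gp(e12, e3, 1) == {}
assert gp(e1, e1, 1) == {(): 1}
assert gp(e1, e2, 0) == {(1, 2): 1}
assert gp(e1, e2, 1) == {}
print("all table entries reproduced")
```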

Ÿ Finding common factors


Formula 10.11 tells us that if we have two elements with a common factor, their generalized
product of an order equal to the grade of the common factor should take the form of an interior
product with the common factor appearing as the second factor.

gÔA Û gÔB ã gÔAÔB ûg


l a l l b l a b l

This suggests a way in which we might be able to find the common factor of two elements. In
Chapter 3: The Regressive Product, we developed the Common Factor Theorem based on the
regressive product. Based on this theorem, we were able to derive an Interior Common Factor
Theorem in Chapter 6. The formula defining the generalized product is a generalization of this
theorem. Thus, it is not so surprising that the generalized product might have application to finding
common factors.

In Chapter 3 we explored finding common factors without yet having introduced the notion of
metric. The generalized product formula above however, supposes that a metric has been
introduced onto the underlying linear space. But although the interior product g Ô A Ô B û g
l a b l
does depend on the metric, the factors g Ô A Ô B and g do not.
l a b l

The formulas for finding common factors using the regressive product developed in Chapter 3
required that the sum of the grades of the elements which contained the common factors be greater
than the dimension of the underlying linear space. The formula above has the advantage of being
independent of the dimension of the space. A concomitant disadvantage, however, is that the
product g Ô A Ô B may not be displayed in simple form, making it harder to extract the common
l a b
factor g.
l

We note also that even though the formula seems to explicitly display the common factor g, this is
l
only because the arguments g Ô A and g Ô B, to the generalized product are displayed explicitly in
l a l b
an already factorized form. In fact we are faced with the same situation that occurs with the
common factor calculation based on the regressive product: a common factor is determined only up
to congruence. We can see this by noting that the formula above shows that for any scalar s, s g is
l
also a common factor.

A B
a b 1
sg Ô Û sg Ô ã gÔAÔB û sg
l s l l s s l a b l

Ï Ÿ Example

To begin, suppose we are working in a space of large enough dimension to require none of the
products to be zero because their grade exceeded the dimension of the space. For the purpose of
this example we set it (arbitrarily) to be 8.

¯!8 ;

Suppose also that we have two elements of interest.

X = 3 e1 Ô e2 Ô e3 + 2 e1 Ô e2 Ô e5 -
14 e1 Ô e3 Ô e5 - 3 e2 Ô e3 Ô e4 + 2 e2 Ô e4 Ô e5 - 14 e3 Ô e4 Ô e5 ;

Y = 6 e1 Ô e2 Ô e3 Ô e4 - 30 e1 Ô e2 Ô e3 Ô e5 -
4 e1 Ô e2 Ô e4 Ô e5 - 15 e1 Ô e3 Ô e4 Ô e5 + 42 e2 Ô e3 Ô e4 Ô e5 ;

We check first to see if they are simple.


8SimpleQ@XD, SimpleQ@YD<
8True, True<

Next, we check to see if they have a common factor.

¯%@X Ô YD
0

With the knowledge that they are simple, and (because their exterior product is zero) have a
(simple) common factor (of still unknown grade) we begin computing the progressively higher
orders of generalized products. The first non-zero generalized product will be an expression from
which we can extract the common factor. The order of this non-zero generalized product will give
the grade of the common factor.

The zero-order generalized product is just the exterior product that we computed above. Its
returning 0 means that the common factor is of grade at least 1.

¯%BX Û YF
0

The first-order generalized product also returns 0. This means that the common factor is of grade at
least 2.

¯%BX Û YF
1

The second-order generalized product does not return 0. This means that the common factor is of
grade 2.

Z = ¯%BX Û YF
2

-129 He1 Ô e2 Ô e3 Ô e4 Ô e5 û e1 Ô e3 L -
86 He1 Ô e2 Ô e3 Ô e4 Ô e5 û e1 Ô e5 L + 36 He1 Ô e2 Ô e3 Ô e4 Ô e5 û e2 Ô e3 L +
24 He1 Ô e2 Ô e3 Ô e4 Ô e5 û e2 Ô e5 L - 129 He1 Ô e2 Ô e3 Ô e4 Ô e5 û e3 Ô e4 L -
168 He1 Ô e2 Ô e3 Ô e4 Ô e5 û e3 Ô e5 L + 86 He1 Ô e2 Ô e3 Ô e4 Ô e5 û e4 Ô e5 L

Since we can only extract the common factor up to congruence, we can simply replace each term
of the form e1 Ô e2 Ô e3 Ô e4 Ô e5 û ei Ô ej with ei Ô ej .

Z1 = Z ê. e1 Ô e2 Ô e3 Ô e4 Ô e5 û e_ Ø e
-129 e1 Ô e3 - 86 e1 Ô e5 + 36 e2 Ô e3 +
24 e2 Ô e5 - 129 e3 Ô e4 - 168 e3 Ô e5 + 86 e4 Ô e5

We can verify that Z1 is a common factor of each of X and Y by taking their generalized products
of order 1 and seeing if they do indeed result in zero.


¯%BToInnerProductsB¯%B:Z1 Û X, Z1 Û Y>FFF
1 1

80, 0<

Since Z1 is simple, we can also factorize it.

Z2 = ExteriorFactorize@Z1 D
12 e2 56 e5 2 e5
-129 e1 - - e4 - Ô e3 +
43 43 3

Remember that an exterior factorization is never unique, reflecting the fact that any linear
combination of these factors is a factor of both X and Y.

12 e2 56 e5 2 e5
Z3 = a e1 - - e4 - + b e3 + ;
43 43 3
¯%@8Z3 Ô X, Z3 Ô Y<D
80, 0<
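The metric-independent checks in this example — that X Ô Y vanishes, and that the extracted 2-element Z1 is a factor of both X and Y (for a simple 2-element Z1 dividing X we have Z1 Ô X = Z1 Ô Z1 Ô u = 0) — can be replayed with a small exterior-product sketch. This is our own re-implementation (blade-dict representation and names assumed), not the GrassmannAlgebra package:

```python
def sign_of(seq):
    # (-1)^(number of out-of-order index pairs)
    inv = sum(1 for i in range(len(seq))
                for j in range(i + 1, len(seq)) if seq[i] > seq[j])
    return (-1) ** inv

def wedge_blades(a, b):
    # exterior product of sorted basis-index tuples; None if an index repeats
    if set(a) & set(b):
        return None
    return sign_of(a + b), tuple(sorted(a + b))

def wedge(A, B):
    # exterior product of multivectors stored as {blade: coefficient}
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            r = wedge_blades(ba, bb)
            if r is not None:
                out[r[1]] = out.get(r[1], 0) + r[0] * ca * cb
    return {b: c for b, c in out.items() if c != 0}

# X, Y and the extracted common factor Z1, transcribed from the text
X = {(1, 2, 3): 3, (1, 2, 5): 2, (1, 3, 5): -14,
     (2, 3, 4): -3, (2, 4, 5): 2, (3, 4, 5): -14}
Y = {(1, 2, 3, 4): 6, (1, 2, 3, 5): -30, (1, 2, 4, 5): -4,
     (1, 3, 4, 5): -15, (2, 3, 4, 5): 42}
Z1 = {(1, 3): -129, (1, 5): -86, (2, 3): 36, (2, 5): 24,
      (3, 4): -129, (3, 5): -168, (4, 5): 86}
print(wedge(X, Y))   # {}
print(wedge(Z1, X))  # {}  Z1 divides X
print(wedge(Z1, Y))  # {}  Z1 divides Y
```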

10.7 Properties of the Generalized Product

Summary of properties
The following simple results may be obtained from the definition and results derived above.

Ï The generalized product is distributive with respect to addition

a+b Ûg ã aÛg + bÛg 10.19


m k l p m l p k l p

gÛ a+b ã gÛa + gÛb 10.20


p l m k p l m p l k

Ï The generalized product with a zero element is zero

0Ûaã aÛ0ã 0 10.21


l m m l


Ï The generalized product with a scalar reduces to the field product

The only non-zero generalized product with a scalar is of order zero, leading to the ordinary field
product.

a Û a ã a Û a ã aÔa ã a ûa ã a a 10.22
0 m m 0 m m m

a Û b ã aÔb ã aûb ã a b 10.23


0

Ï The generalized products of a scalar with an m-element are zero

For l greater than zero

aÛaãaÛaã 0 l>0 10.24


l m m l

Ï Scalars may be factored out of generalized products

Ka aO Û b ã a Û a b ã a aÛb 10.25
m l k m l k m l k

Ï The generalized product is quasi-commutative

a Û b ã H-1LHm-lL Hk-lL b Û a 10.26


m l k k l m

This relationship was proven in Section 10.3.

Ï The generalized product of congruent elements is their interior product, or zero

The generalized product of congruent elements of grade m is equal to their interior product
whenever the order l of the generalized product is equal to the grade of the elements, and is zero
otherwise.

a Û b ã aûb , l ã m a Û b ã 0, l ! m 10.27
m l m m m m l m


Ï The generalized product containing a common factor is zero when l < m

The generalized product of elements containing a common factor is zero whenever the order of the
generalized product l is less than the grade of the common factor m.

gÔA Û gÔB ã 0 l<m 10.28


m a l m b

Ï The generalized product containing a common factor reduces to an interior


product when l = m

The generalized product of elements containing a common factor reduces to an interior product
whenever the order of the generalized product l is equal to the grade of the common factor.

gÔA Û gÔB ã gÔAÔB ûg 10.29


l a l l b l a b l

Ï The generalized square of a general (not necessarily simple) element is zero if


m-l is odd.

The generalized product of a general (not necessarily simple) m-element with itself is zero if m-l is
odd.

a1 + a2 + a3 + … Û a1 + a2 + a3 + … ã0 m-l œ
m m m l m m m 10.30
OddIntegers

Ï The generalized product reduces to an exterior product when the common


factor is totally orthogonal to the remaining factors.

The generalized product of elements containing a common factor reduces to an exterior product
whenever the common factor is totally orthogonal to the other elements.

g Ô A Û g Ô B ã g Ô A Ô B û g ã g û g A Ô B,
l a l l b l a b l l l a b 10.31
gi û Aj ã 0, gi û Bj ã 0


10.8 The Meaning of the Generalized Product

What does a generalized product represent?


In Chapters 1 and 2 we saw that the exterior product was intimately bound up with the notion of
"linear dependence". In Chapter 6, we saw that the interior product generalized the notion of
"orthogonality" to elements of arbitrary grade. The generalized products of varying orders are
defined in terms of both the exterior and interior products, forming a suite of products (roughly
speaking) which transition from the exterior to the interior. The question then arises: how might we
interpret these generalized products in terms of the linear dependence and orthogonality of their
factors?

To explore this we start with the symmetric form of the generalized product:

m k
K O K O
l l
a Û b ã ‚ ‚ ai û bj ai Ô bj 0 < l < Min@m, kD
m l k l l m-l k-l
i=1 j=1

a ã a1 Ô a1 ã a2 Ô a2 ã !, b ã b1 Ô b1 ã b2 Ô b2 ã !
m l m-l l m-l k l k-l l k-l

For simplicity, and without loss of generality (given the quasi-commutativity of the generalized
product), we can assume k is less than or equal to m, so that l is then less than k.

We begin by looking at the simplest case, that in which the order l is equal to 1. The formula
above then becomes:

m k
a Û b ã ‚‚ ai û bj ai Ô bj
m 1 k 1 1 m-1 k-1
i=1 j=1

a ã a1 Ô a1 ã a2 Ô a2 ã !, b ã b1 Ô b1 ã b2 Ô b2 ã !
m 1 m-1 1 m-1 k 1 k-1 1 k-1

To get a clearer picture of the form of this sum, we can take a specific example, say for m equal to
3 and k equal to 2:

3 2
a Û b ã ‚ ‚ Hai û bj L ai Ô bj
3 1 2 2 1
i=1 j=1

a ã Ha1 L Ô Ha2 Ô a3 L ã Ha2 L Ô H- a1 Ô a3 L ã Ha3 L Ô Ha1 Ô a2 L


m
b ã Hb1 L Ô Hb2 L ã Hb2 L Ô H- b2 L
k

G = ToScalarProductsBa Û bF
3 1 2

-Ha3 û b2 L a1 Ô a2 Ô b1 + Ha3 û b1 L a1 Ô a2 Ô b2 + Ha2 û b2 L a1 Ô a3 Ô b1 -


Ha2 û b1 L a1 Ô a3 Ô b2 - Ha1 û b2 L a2 Ô a3 Ô b1 + Ha1 û b1 L a2 Ô a3 Ô b2

Thus we can see that the generalized product is a linear combination of terms, each of which is an
exterior product with a scalar product as coefficient.


The distinguishing feature of this linear combination is the way in which it combines all the
essentially different combinations of 1-element factors of the original elements. Before we proceed
further, we will explore ways in which we can visualize the essential components that make up this
type of expression.

Ÿ Visualizing generalized products

Key to visualizing the generalized product is to show how it may be built up from four lists: the

ai , bj , ai , and bj . In GrassmannAlgebra we can compose these lists by using the functions


l l m-l k-l
Spine and Cospine.

As a first example, we will take m equal to 3, k equal to 2, and l equal to 1, as above. Composing
the lists and displaying them in MatrixForm for easier visualization gives

A = List êü SpineBa, 1F êê MatrixForm;


3

Ac = List êü CospineBa, 1F êê MatrixForm;


3

8A, Ac<
a1 a2 Ô a3
: a2 , -Ha1 Ô a3 L >
a3 a1 Ô a2

B = :SpineBb, 1F> êê MatrixForm; Bc = :CospineBb, 1F> êê MatrixForm;


2 2

8B, Bc<
8H b1 b2 L, H b2 -b1 L<

We can compose a matrix of the interior products

AûB
a1
a2 û H b1 b2 L
a3

and multiply it out (using the interior product as the product operation) by applying the
GrassmannAlgebra function MatrixProduct (or its alias ¯B). (Note that MatrixProduct
removes any MatrixForm wrappers.)


Mi = ¯B@A û BD êê MatrixForm
a1 û b1 a1 û b2
a2 û b1 a2 û b2
a3 û b1 a3 û b2

Similarly we can compose a matrix of the exterior products

Ac Ô Bc
a2 Ô a3
-Ha1 Ô a3 L Ô H b2 -b1 L
a1 Ô a2

and multiply it out (using the exterior product as the product operation). We won't simplify the
double negative signs in order to retain the correspondence with the previous expressions.

Me = ¯B@Ac Ô BcD êê MatrixForm


a2 Ô a3 Ô b2 a2 Ô a3 Ô -b1
-Ha1 Ô a3 L Ô b2 -Ha1 Ô a3 L Ô -b1
a1 Ô a2 Ô b2 a1 Ô a2 Ô -b1

Finally, we can put these two matrices side by side to show the interior and exterior components of
the final expression.

Mi Me
a1 û b1 a1 û b2 a2 Ô a3 Ô b2 a2 Ô a3 Ô -b1
a2 û b1 a2 û b2 -Ha1 Ô a3 L Ô b2 -Ha1 Ô a3 L Ô -b1
a3 û b1 a3 û b2 a1 Ô a2 Ô b2 a1 Ô a2 Ô -b1

The generalized product is now the sum of the element-by-element products (not the matrix
product) of these matrices.

¯$@Plus üü Flatten@Mi Me ê. MatrixForm Ø IdentityDD


-Ha3 û b2 L a1 Ô a2 Ô b1 + Ha3 û b1 L a1 Ô a2 Ô b2 + Ha2 û b2 L a1 Ô a3 Ô b1 -
Ha2 û b1 L a1 Ô a3 Ô b2 - Ha1 û b2 L a2 Ô a3 Ô b1 + Ha1 û b1 L a2 Ô a3 Ô b2

Ÿ Visualization examples
We can collect the operations of the previous section into a simple function to easily visualize
any generalized product.


VizualizeGeneralizedProductBX_ Û Y_F :=
l_

Module@8A, Ac, B, Bc, Mi, Me<, A = List êü Spine@X, lD;


Ac = List êü Cospine@X, lD; B = 8Spine@Y, lD<;
Bc = 8Cospine@Y, lD<; Mi = ¯9@A û BD;
Me = ¯9@Ac Ô BcD; MatrixForm@MiD * MatrixForm@MeDD;

Ï Example 1

In this first example we visualize all the non-zero generalized products of a 3-element and 2-
element.

VizualizeGeneralizedProductBa Û bF
3 0 2

H 1 û 1 L H a1 Ô a2 Ô a3 Ô b1 Ô b2 L

VizualizeGeneralizedProductBa Û bF
3 1 2

a1 û b1 a1 û b2 a2 Ô a3 Ô b2 a2 Ô a3 Ô -b1
a2 û b1 a2 û b2 -Ha1 Ô a3 L Ô b2 -Ha1 Ô a3 L Ô -b1
a3 û b1 a3 û b2 a1 Ô a2 Ô b2 a1 Ô a2 Ô -b1

VizualizeGeneralizedProductBa Û bF
3 2 2

a1 Ô a2 û b1 Ô b2 a3 Ô 1
a1 Ô a3 û b1 Ô b2 -a2 Ô 1
a2 Ô a3 û b1 Ô b2 a1 Ô 1

Ï Example 2

In this example we visualize some non-zero generalized products of a 4-element and 3-element.

VizualizeGeneralizedProductBa Û bF
4 1 3

a1 û b1 a1 û b2 a1 û b3 a2 Ô a3 Ô a4 Ô b2 Ô b3 a2 Ô a3 Ô a4 Ô -Hb1 Ô b3 L
a2 û b1 a2 û b2 a2 û b3 -Ha1 Ô a3 Ô a4 L Ô b2 Ô b3 -Ha1 Ô a3 Ô a4 L Ô -Hb1 Ô b3
a3 û b1 a3 û b2 a3 û b3 a1 Ô a2 Ô a4 Ô b2 Ô b3 a1 Ô a2 Ô a4 Ô -Hb1 Ô b3 L
a4 û b1 a4 û b2 a4 û b3 -Ha1 Ô a2 Ô a3 L Ô b2 Ô b3 -Ha1 Ô a2 Ô a3 L Ô -Hb1 Ô b3


VizualizeGeneralizedProductBa Û bF
4 2 3

a1 Ô a2 û b1 Ô b2 a1 Ô a2 û b1 Ô b3 a1 Ô a2 û b2 Ô b3
a1 Ô a3 û b1 Ô b2 a1 Ô a3 û b1 Ô b3 a1 Ô a3 û b2 Ô b3
a1 Ô a4 û b1 Ô b2 a1 Ô a4 û b1 Ô b3 a1 Ô a4 û b2 Ô b3
a2 Ô a3 û b1 Ô b2 a2 Ô a3 û b1 Ô b3 a2 Ô a3 û b2 Ô b3
a2 Ô a4 û b1 Ô b2 a2 Ô a4 û b1 Ô b3 a2 Ô a4 û b2 Ô b3
a3 Ô a4 û b1 Ô b2 a3 Ô a4 û b1 Ô b3 a3 Ô a4 û b2 Ô b3

a3 Ô a4 Ô b3 a3 Ô a4 Ô -b2 a3 Ô a4 Ô b1
-Ha2 Ô a4 L Ô b3 -Ha2 Ô a4 L Ô -b2 -Ha2 Ô a4 L Ô b1
a2 Ô a3 Ô b3 a2 Ô a3 Ô -b2 a2 Ô a3 Ô b1
a1 Ô a4 Ô b3 a1 Ô a4 Ô -b2 a1 Ô a4 Ô b1
-Ha1 Ô a3 L Ô b3 -Ha1 Ô a3 L Ô -b2 -Ha1 Ô a3 L Ô b1
a1 Ô a2 Ô b3 a1 Ô a2 Ô -b2 a1 Ô a2 Ô b1

VizualizeGeneralizedProductBa Û bF
4 3 3

a1 Ô a2 Ô a3 û b1 Ô b2 Ô b3 a4 Ô 1
a1 Ô a2 Ô a4 û b1 Ô b2 Ô b3 -a3 Ô 1
a1 Ô a3 Ô a4 û b1 Ô b2 Ô b3 a2 Ô 1
a2 Ô a3 Ô a4 û b1 Ô b2 Ô b3 -a1 Ô 1

The components
Now that we are able to visualize the generalized product in terms of its exterior and interior
product components, it becomes easier to see the influence on the product result of its factors. In
the examples above we expressed the generalized product as the sum of the element-by-element
(ordinary) products of an interior product matrix, and an exterior product matrix.

VizualizeGeneralizedProductBa Û bF
3 1 2

a1 û b1 a1 û b2 a2 Ô a3 Ô b2 a2 Ô a3 Ô -b1
a2 û b1 a2 û b2 -Ha1 Ô a3 L Ô b2 -Ha1 Ô a3 L Ô -b1
a3 û b1 a3 û b2 a1 Ô a2 Ô b2 a1 Ô a2 Ô -b1

Ï The interior products

The interior products of this generalized product of order 1 comprise all the essentially different
inner products between a 1-element of a and a 1-element of b.
3 2

a1 û b1 a1 û b2
a2 û b1 a2 û b2
a3 û b1 a3 û b2

Consider now the inner (scalar, in this case) product of a general element belonging to a with a
3
general element belonging to b. Expanding and simplifying the product gives
2

¯%@Ha a1 + b a2 + c a3 L û Hd b1 + e b2 LD
a d Ha1 û b1 L + a e Ha1 û b2 L + b d Ha2 û b1 L +
b e Ha2 û b2 L + c d Ha3 û b1 L + c e Ha3 û b2 L

We can see from this that b is totally orthogonal to a if and only if every one of the 6 scalar
2 3
products is zero.

Ï The exterior products

Similarly we can consider the exterior products

a2 Ô a3 Ô b2 a2 Ô a3 Ô -b1
-Ha1 Ô a3 L Ô b2 -Ha1 Ô a3 L Ô -b1
a1 Ô a2 Ô b2 a1 Ô a2 Ô -b1

The exterior product of a general element belonging to a with a general element belonging to b is:
3 2

¯%@Ha a1 + b a2 + c a3 L Ô Hd b1 + e b2 LD
a d a1 Ô b1 + a e a1 Ô b2 + b d a2 Ô b1 + b e a2 Ô b2 + c d a3 Ô b1 + c e a3 Ô b2

Thus, such an exterior product is zero precisely when the two general 1-elements are parallel; zero products of this type therefore signal 1-elements common to the spaces of a and b.

Ï Common elements

The generalized product is

G = ToScalarProductsBa Û bF
3 1 2

-Ha3 û b2 L a1 Ô a2 Ô b1 + Ha3 û b1 L a1 Ô a2 Ô b2 + Ha2 û b2 L a1 Ô a3 Ô b1 -


Ha2 û b1 L a1 Ô a3 Ô b2 - Ha1 û b2 L a2 Ô a3 Ô b1 + Ha1 û b1 L a2 Ô a3 Ô b2

G1 = ¯$@G ê. b1 Ø a1 D
-Ha1 û b2 L a1 Ô a2 Ô a3 + Ha1 û a3 L a1 Ô a2 Ô b2 -
Ha1 û a2 L a1 Ô a3 Ô b2 + Ha1 û a1 L a2 Ô a3 Ô b2

G2 = ¯$@G1 ê. b2 Ø a2 D
0


¯!8 ; H = ¯%BToInnerProductsBa Û bFF


4 1 3

-Ha4 û b3 L a1 Ô a2 Ô a3 Ô b1 Ô b2 + Ha4 û b2 L a1 Ô a2 Ô a3 Ô b1 Ô b3 -
Ha4 û b1 L a1 Ô a2 Ô a3 Ô b2 Ô b3 + Ha3 û b3 L a1 Ô a2 Ô a4 Ô b1 Ô b2 -
Ha3 û b2 L a1 Ô a2 Ô a4 Ô b1 Ô b3 + Ha3 û b1 L a1 Ô a2 Ô a4 Ô b2 Ô b3 -
Ha2 û b3 L a1 Ô a3 Ô a4 Ô b1 Ô b2 + Ha2 û b2 L a1 Ô a3 Ô a4 Ô b1 Ô b3 -
Ha2 û b1 L a1 Ô a3 Ô a4 Ô b2 Ô b3 + Ha1 û b3 L a2 Ô a3 Ô a4 Ô b1 Ô b2 -
Ha1 û b2 L a2 Ô a3 Ô a4 Ô b1 Ô b3 + Ha1 û b1 L a2 Ô a3 Ô a4 Ô b2 Ô b3

H1 = ¯$@H ê. b1 Ø a1 D
-Ha1 û b3 L a1 Ô a2 Ô a3 Ô a4 Ô b2 + Ha1 û b2 L a1 Ô a2 Ô a3 Ô a4 Ô b3 -
Ha1 û a4 L a1 Ô a2 Ô a3 Ô b2 Ô b3 + Ha1 û a3 L a1 Ô a2 Ô a4 Ô b2 Ô b3 -
Ha1 û a2 L a1 Ô a3 Ô a4 Ô b2 Ô b3 + Ha1 û a1 L a2 Ô a3 Ô a4 Ô b2 Ô b3

H2 = ¯$@H1 ê. b2 Ø a2 D
0

Ï General common elements

The generalized product is

G = ToScalarProductsBa Û bF
3 1 2

-Ha3 û b2 L a1 Ô a2 Ô b1 + Ha3 û b1 L a1 Ô a2 Ô b2 + Ha2 û b2 L a1 Ô a3 Ô b1 -


Ha2 û b1 L a1 Ô a3 Ô b2 - Ha1 û b2 L a2 Ô a3 Ô b1 + Ha1 û b1 L a2 Ô a3 Ô b2

G1 = ¯%@G ê. b1 Ø Ha a1 + b a2 + c a3 LD
H-a Ha1 û b2 L - b Ha2 û b2 L - c Ha3 û b2 LL a1 Ô a2 Ô a3 +
Ha Ha1 û a3 L + b Ha2 û a3 L + c Ha3 û a3 LL a1 Ô a2 Ô b2 +
H-a Ha1 û a2 L - b Ha2 û a2 L - c Ha2 û a3 LL a1 Ô a3 Ô b2 +
Ha Ha1 û a1 L + b Ha1 û a2 L + c Ha1 û a3 LL a2 Ô a3 Ô b2

G2 = ¯%@G1 ê. b2 Ø Hd a1 + e a2 + f a3 LD
0


Ï A larger expression

¯!8 ; H = ¯%BToInnerProductsBa Û bFF


4 1 3

-Ha4 û b3 L a1 Ô a2 Ô a3 Ô b1 Ô b2 + Ha4 û b2 L a1 Ô a2 Ô a3 Ô b1 Ô b3 -
Ha4 û b1 L a1 Ô a2 Ô a3 Ô b2 Ô b3 + Ha3 û b3 L a1 Ô a2 Ô a4 Ô b1 Ô b2 -
Ha3 û b2 L a1 Ô a2 Ô a4 Ô b1 Ô b3 + Ha3 û b1 L a1 Ô a2 Ô a4 Ô b2 Ô b3 -
Ha2 û b3 L a1 Ô a3 Ô a4 Ô b1 Ô b2 + Ha2 û b2 L a1 Ô a3 Ô a4 Ô b1 Ô b3 -
Ha2 û b1 L a1 Ô a3 Ô a4 Ô b2 Ô b3 + Ha1 û b3 L a2 Ô a3 Ô a4 Ô b1 Ô b2 -
Ha1 û b2 L a2 Ô a3 Ô a4 Ô b1 Ô b3 + Ha1 û b1 L a2 Ô a3 Ô a4 Ô b2 Ô b3

H1 = ¯$@H ê. b3 Ø z ê. a4 Ø zD
Hz û b2 L z Ô a1 Ô a2 Ô a3 Ô b1 - Hz û b1 L z Ô a1 Ô a2 Ô a3 Ô b2 +
Hz û a3 L z Ô a1 Ô a2 Ô b1 Ô b2 - Hz û a2 L z Ô a1 Ô a3 Ô b1 Ô b2 +
Hz û a1 L z Ô a2 Ô a3 Ô b1 Ô b2 - Hz û zL a1 Ô a2 Ô a3 Ô b1 Ô b2

If there is one element in common, orthogonal to the others, then

Ha1 Ô a2 Ô a3 Ô zL Û Hb1 Ô b2 Ô zL == -Hz û zL a1 Ô a2 Ô a3 Ô b1 Ô b2


1

H2 = ¯$@H1 ê. b2 Ø y ê. a2 Ø yD
0

More completely

H3 = ¯%@H ê. b1 Ø Ha a1 + b a2 + c a3 + d a4 LD
H-a Ha1 û b3 L - b Ha2 û b3 L - c Ha3 û b3 L - d Ha4 û b3 LL a1 Ô a2 Ô a3 Ô a4 Ô b2 +
Ha Ha1 û b2 L + b Ha2 û b2 L + c Ha3 û b2 L + d Ha4 û b2 LL a1 Ô a2 Ô a3 Ô a4 Ô b3 +
H-a Ha1 û a4 L - b Ha2 û a4 L - c Ha3 û a4 L - d Ha4 û a4 LL a1 Ô a2 Ô a3 Ô b2 Ô b3 +
Ha Ha1 û a3 L + b Ha2 û a3 L + c Ha3 û a3 L + d Ha3 û a4 LL a1 Ô a2 Ô a4 Ô b2 Ô b3 +
H-a Ha1 û a2 L - b Ha2 û a2 L - c Ha2 û a3 L - d Ha2 û a4 LL a1 Ô a3 Ô a4 Ô b2 Ô b3 +
Ha Ha1 û a1 L + b Ha1 û a2 L + c Ha1 û a3 L + d Ha1 û a4 LL a2 Ô a3 Ô a4 Ô b2 Ô b3

H4 = ¯%@H3 ê. b2 Ø He a1 + f a2 + g a3 + h a4 LD
0

Ï General common elements in the matrix pair expression

a2 Ô a3 Ô b2 a2 Ô a3 Ô -b1
-Ha1 Ô a3 L Ô b2 -Ha1 Ô a3 L Ô -b1
a1 Ô a2 Ô b2 a1 Ô a2 Ô -b1
88a2 Ô a3 Ô b2 , a2 Ô a3 Ô -b1 <,
8-Ha1 Ô a3 L Ô b2 , -Ha1 Ô a3 L Ô -b1 <, 8a1 Ô a2 Ô b2 , a1 Ô a2 Ô -b1 <<


Mi1 =
MatrixForm@88a1 û b1 , a1 û b2 <, 8a2 û b1 , a2 û b2 <, 8a3 û b1 , a3 û b2 <<D ê.
b1 Ø Ha a1 + b a2 + c a3 L ê. b2 Ø Hd a1 + e a2 + f a3 L
a1 û Ha a1 + b a2 + c a3 L a1 û Hd a1 + e a2 + f a3 L
a2 û Ha a1 + b a2 + c a3 L a2 û Hd a1 + e a2 + f a3 L
a3 û Ha a1 + b a2 + c a3 L a3 û Hd a1 + e a2 + f a3 L

Me1 = MatrixForm@88a2 Ô a3 Ô b2 , a2 Ô a3 Ô -b1 <,


8-Ha1 Ô a3 L Ô b2 , -Ha1 Ô a3 L Ô -b1 <, 8a1 Ô a2 Ô b2 , a1 Ô a2 Ô -b1 <<D ê.
b1 Ø Ha a1 + b a2 + c a3 L ê. b2 Ø Hd a1 + e a2 + f a3 L
a2 Ô a3 Ô Hd a1 + e a2 + f a3 L a2 Ô a3 Ô H-a a1 - b a2 - c a3 L
-Ha1 Ô a3 L Ô Hd a1 + e a2 + f a3 L -Ha1 Ô a3 L Ô H-a a1 - b a2 - c a3 L
a1 Ô a2 Ô Hd a1 + e a2 + f a3 L a1 Ô a2 Ô H-a a1 - b a2 - c a3 L

Me2 = ¯% êü Me1
d a1 Ô a2 Ô a3 -a a1 Ô a2 Ô a3
e a1 Ô a2 Ô a3 -b a1 Ô a2 Ô a3
f a1 Ô a2 Ô a3 -c a1 Ô a2 Ô a3

This reduces to

a1 û d Ha a1 + b a2 + c a3 L a1 û H-aL Hd a1 + e a2 + f a3 L
a2 û e Ha a1 + b a2 + c a3 L a2 û H-bL Hd a1 + e a2 + f a3 L
a3 û f Ha a1 + b a2 + c a3 L a3 û H-cL Hd a1 + e a2 + f a3 L

It can be seen that for every scalar product there is a corresponding negative version which
cancels it when the terms are summed.

Hence, if a generalized product of order 1 has either

1. factors with at least two 1-elements in common, or
2. factors which are totally orthogonal,
then it is zero.

Conversely, if a generalized product of order 1 is zero, then its factors may have

1. at least two 1-elements in common, or
2. total orthogonality.

These two cases are disjoint: factors cannot both be totally orthogonal and have 1-elements in
common, since a common 1-element gives rise to scalar products of the form

a1 û a1
which are non-zero for a non-zero a1. We explore the simplest mixed case below.
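Both zero cases, together with the contrasting non-zero case of exactly one common 1-element, can be spot-checked over an orthonormal basis with our own sketch of the generalized product (the names gp, splits, sign_of and the blade-dict representation are assumptions, not the GrassmannAlgebra code):

```python
from itertools import combinations

def sign_of(seq):
    # (-1)^(number of out-of-order index pairs)
    inv = sum(1 for i in range(len(seq))
                for j in range(i + 1, len(seq)) if seq[i] > seq[j])
    return (-1) ** inv

def wedge_blades(a, b):
    # exterior product of sorted basis-index tuples; None if an index repeats
    if set(a) & set(b):
        return None
    return sign_of(a + b), tuple(sorted(a + b))

def splits(blade, l):
    # every way of writing blade as +/- (l-subset) ^ (complement)
    n = range(len(blade))
    for p in combinations(n, l):
        S = tuple(blade[i] for i in p)
        T = tuple(blade[i] for i in n if i not in p)
        yield sign_of(S + T), S, T

def gp(A, B, l):
    # order-l generalized product, orthonormal (Euclidean) metric
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            if l > len(ba) or l > len(bb):
                continue
            for sa, S, Sb in splits(ba, l):
                for sb, U, Ub in splits(bb, l):
                    if S != U:
                        continue  # e_S o e_U = 0 for distinct sorted subsets
                    r = wedge_blades(Sb, Ub)
                    if r is not None:
                        out[r[1]] = out.get(r[1], 0) + sa * sb * r[0] * ca * cb
    return {b: c for b, c in out.items() if c != 0}

print(gp({(1, 2, 3): 1}, {(1, 2, 4): 1}, 1))  # {}: two 1-elements (e1, e2) in common
print(gp({(1, 2): 1}, {(3, 4): 1}, 1))        # {}: totally orthogonal factors
print(gp({(1, 2): 1}, {(1, 3): 1}, 1))        # {(2, 3): 1}: exactly one common 1-element
```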


Ï For 1 element orthogonal, and 1 common

G = ToScalarProductsBa Û bF
3 1 2

-Ha3 û b2 L a1 Ô a2 Ô b1 + Ha3 û b1 L a1 Ô a2 Ô b2 + Ha2 û b2 L a1 Ô a3 Ô b1 -


Ha2 û b1 L a1 Ô a3 Ô b2 - Ha1 û b2 L a2 Ô a3 Ô b1 + Ha1 û b1 L a2 Ô a3 Ô b2

If b2 is orthogonal to each of the ai

Go = G ê. 8a1 û b2 Ø 0, a2 û b2 Ø 0, a3 û b2 Ø 0<
Ha3 û b1 L a1 Ô a2 Ô b2 - Ha2 û b1 L a1 Ô a3 Ô b2 + Ha1 û b1 L a2 Ô a3 Ô b2

ã HHa1 Ô a2 Ô a3 L û b1 L Ô b2

A larger expression

¯!8 ; J = ¯%BToInnerProductsBa Û bFF


4 2 3

Ha3 Ô a4 û b2 Ô b3 L a1 Ô a2 Ô b1 - Ha3 Ô a4 û b1 Ô b3 L a1 Ô a2 Ô b2 +
Ha3 Ô a4 û b1 Ô b2 L a1 Ô a2 Ô b3 - Ha2 Ô a4 û b2 Ô b3 L a1 Ô a3 Ô b1 +
Ha2 Ô a4 û b1 Ô b3 L a1 Ô a3 Ô b2 - Ha2 Ô a4 û b1 Ô b2 L a1 Ô a3 Ô b3 +
Ha2 Ô a3 û b2 Ô b3 L a1 Ô a4 Ô b1 - Ha2 Ô a3 û b1 Ô b3 L a1 Ô a4 Ô b2 +
Ha2 Ô a3 û b1 Ô b2 L a1 Ô a4 Ô b3 + Ha1 Ô a4 û b2 Ô b3 L a2 Ô a3 Ô b1 -
Ha1 Ô a4 û b1 Ô b3 L a2 Ô a3 Ô b2 + Ha1 Ô a4 û b1 Ô b2 L a2 Ô a3 Ô b3 -
Ha1 Ô a3 û b2 Ô b3 L a2 Ô a4 Ô b1 + Ha1 Ô a3 û b1 Ô b3 L a2 Ô a4 Ô b2 -
Ha1 Ô a3 û b1 Ô b2 L a2 Ô a4 Ô b3 + Ha1 Ô a2 û b2 Ô b3 L a3 Ô a4 Ô b1 -
Ha1 Ô a2 û b1 Ô b3 L a3 Ô a4 Ô b2 + Ha1 Ô a2 û b1 Ô b2 L a3 Ô a4 Ô b3

J1 = ¯$@J ê. b3 Ø z ê. a4 Ø zD
-Hz Ô a3 û b1 Ô b2 L z Ô a1 Ô a2 +
Hz Ô a2 û b1 Ô b2 L z Ô a1 Ô a3 + Hz Ô b2 û a2 Ô a3 L z Ô a1 Ô b1 -
Hz Ô b1 û a2 Ô a3 L z Ô a1 Ô b2 - Hz Ô a1 û b1 Ô b2 L z Ô a2 Ô a3 -
Hz Ô b2 û a1 Ô a3 L z Ô a2 Ô b1 + Hz Ô b1 û a1 Ô a3 L z Ô a2 Ô b2 +
Hz Ô b2 û a1 Ô a2 L z Ô a3 Ô b1 - Hz Ô b1 û a1 Ô a2 L z Ô a3 Ô b2 +
Hz Ô a3 û z Ô b2 L a1 Ô a2 Ô b1 - Hz Ô a3 û z Ô b1 L a1 Ô a2 Ô b2 -
Hz Ô a2 û z Ô b2 L a1 Ô a3 Ô b1 + Hz Ô a2 û z Ô b1 L a1 Ô a3 Ô b2 +
Hz Ô a1 û z Ô b2 L a2 Ô a3 Ô b1 - Hz Ô a1 û z Ô b1 L a2 Ô a3 Ô b2


J2 = ¯$@J1 ê. b2 Ø x ê. a3 Ø xD
Hx Ô a2 û z Ô b1 - x Ô b1 û z Ô a2 L x Ô z Ô a1 +
H-Hx Ô a1 û z Ô b1 L + x Ô b1 û z Ô a1 L x Ô z Ô a2 +
Hx Ô z û a1 Ô a2 L x Ô z Ô b1 + Hx Ô z û z Ô b1 L x Ô a1 Ô a2 -
Hx Ô z û z Ô a2 L x Ô a1 Ô b1 + Hx Ô z û z Ô a1 L x Ô a2 Ô b1 -
Hx Ô z û x Ô b1 L z Ô a1 Ô a2 + Hx Ô z û x Ô a2 L z Ô a1 Ô b1 -
Hx Ô z û x Ô a1 L z Ô a2 Ô b1 + Hx Ô z û x Ô zL a1 Ô a2 Ô b1

¯%@ToScalarProducts@J2DD
H-Hx û b1 L Hz û a2 L + Hx û a2 L Hz û b1 LL x Ô z Ô a1 +
HHx û b1 L Hz û a1 L - Hx û a1 L Hz û b1 LL x Ô z Ô a2 +
H-Hx û a2 L Hz û a1 L + Hx û a1 L Hz û a2 LL x Ô z Ô b1 +
H-Hx û b1 L Hz û zL + Hx û zL Hz û b1 LL x Ô a1 Ô a2 +
HHx û a2 L Hz û zL - Hx û zL Hz û a2 LL x Ô a1 Ô b1 +
H-Hx û a1 L Hz û zL + Hx û zL Hz û a1 LL x Ô a2 Ô b1 +
HHx û zL Hx û b1 L - Hx û xL Hz û b1 LL z Ô a1 Ô a2 +
H-Hx û zL Hx û a2 L + Hx û xL Hz û a2 LL z Ô a1 Ô b1 +
HHx û zL Hx û a1 L - Hx û xL Hz û a1 LL z Ô a2 Ô b1 +
I-Hx û zL2 + Hx û xL Hz û zLM a1 Ô a2 Ô b1

J3 = ¯$@J2 ê. b1 Ø y ê. a2 Ø yD
H-Hx Ô y û z Ô a1 L + x Ô z û y Ô a1 - x Ô a1 û y Ô zL x Ô y Ô z

¯%@ToScalarProducts@J3DD
0

Ha1 Ô y Ô x Ô zL Û Hy Ô x Ô zL ã 0
2

Bringing the formulas together


Common elements 2
For common elements, the generalized products cease being zero when the order of the product
equals or exceeds the grade of the common factor.


aÔg Û bÔg ã 0 l<p


m p l k p

aÔg Û bÔg ã aÔbÔg ûg lã p


m p l k p m k p p

aÔg Û bÔg ! 0 l> p


m p l k p

g Ô A Û g Ô B ã g Ô A Ô B û g ã g û g A Ô B,
l a l l b l a b l l l a b 10.32
gi û Aj ã 0, gi û Bj ã 0

gÔa Û gÔb ã gûg a Û b


p m l p k p p m l-p k
10.33
g = g1 Ô g2 Ô ! Ô gp a û gi ã b û gi ã 0 l¥p
p m k

gûg a Û b ã H-1Lp l gÔa Û b ûg ã


p p m l k p m l k p

H-1Lp m a Û g Ô b ûg 10.34
m l p k p

g = g1 Ô g2 Ô ! Ô gp a û gi ã b û gi ã 0
p m k

Ï Hypothesis

gÔA Û gÔB ã gÔ A Û B û g ã g û g A Û B,
l m m l k l m m-l k l l l m m-l k

gi û Aj ã 0, gi û Bj ã 0

A = Ha1 Ô a2 Ô g1 L; B = Hb1 Ô b2 Ô b3 Ô g1 L;

¯$BToInnerProductsBA Û BFF
0


AB1 = ¯$BToScalarProductsBA Û BFF


1

-Hg1 û g1 L a1 Ô a2 Ô b1 Ô b2 Ô b3 + Hb3 û g1 L a1 Ô a2 Ô b1 Ô b2 Ô g1 -
Hb2 û g1 L a1 Ô a2 Ô b1 Ô b3 Ô g1 + Hb1 û g1 L a1 Ô a2 Ô b2 Ô b3 Ô g1 -
Ha2 û g1 L a1 Ô b1 Ô b2 Ô b3 Ô g1 + Ha1 û g1 L a2 Ô b1 Ô b2 Ô b3 Ô g1

AB2 = ¯$BToScalarProductsBHa1 Ô a2 Ô b1 Ô b2 Ô b3 Ô g1 L Û g1 FF
1

-Hg1 û g1 L a1 Ô a2 Ô b1 Ô b2 Ô b3 + Hb3 û g1 L a1 Ô a2 Ô b1 Ô b2 Ô g1 -
Hb2 û g1 L a1 Ô a2 Ô b1 Ô b3 Ô g1 + Hb1 û g1 L a1 Ô a2 Ô b2 Ô b3 Ô g1 -
Ha2 û g1 L a1 Ô b1 Ô b2 Ô b3 Ô g1 + Ha1 û g1 L a2 Ô b1 Ô b2 Ô b3 Ô g1

AB3 = ¯$BToScalarProductsBA Û BFF


2

H-Ha2 û g1 L Hb3 û g1 L + Ha2 û b3 L Hg1 û g1 LL a1 Ô b1 Ô b2 +


HHa2 û g1 L Hb2 û g1 L - Ha2 û b2 L Hg1 û g1 LL a1 Ô b1 Ô b3 +
H-Ha2 û b3 L Hb2 û g1 L + Ha2 û b2 L Hb3 û g1 LL a1 Ô b1 Ô g1 +
H-Ha2 û g1 L Hb1 û g1 L + Ha2 û b1 L Hg1 û g1 LL a1 Ô b2 Ô b3 +
HHa2 û b3 L Hb1 û g1 L - Ha2 û b1 L Hb3 û g1 LL a1 Ô b2 Ô g1 +
H-Ha2 û b2 L Hb1 û g1 L + Ha2 û b1 L Hb2 û g1 LL a1 Ô b3 Ô g1 +
HHa1 û g1 L Hb3 û g1 L - Ha1 û b3 L Hg1 û g1 LL a2 Ô b1 Ô b2 +
H-Ha1 û g1 L Hb2 û g1 L + Ha1 û b2 L Hg1 û g1 LL a2 Ô b1 Ô b3 +
HHa1 û b3 L Hb2 û g1 L - Ha1 û b2 L Hb3 û g1 LL a2 Ô b1 Ô g1 +
HHa1 û g1 L Hb1 û g1 L - Ha1 û b1 L Hg1 û g1 LL a2 Ô b2 Ô b3 +
H-Ha1 û b3 L Hb1 û g1 L + Ha1 û b1 L Hb3 û g1 LL a2 Ô b2 Ô g1 +
HHa1 û b2 L Hb1 û g1 L - Ha1 û b1 L Hb2 û g1 LL a2 Ô b3 Ô g1 +
H-Ha1 û g1 L Ha2 û b3 L + Ha1 û b3 L Ha2 û g1 LL b1 Ô b2 Ô g1 +
HHa1 û g1 L Ha2 û b2 L - Ha1 û b2 L Ha2 û g1 LL b1 Ô b3 Ô g1 +
H-Ha1 û g1 L Ha2 û b1 L + Ha1 û b1 L Ha2 û g1 LL b2 Ô b3 Ô g1

AB4 =
¯$@AB3 êê OrthogonalSimplify@88a1 Ô a2 , g1 <, 8b1 Ô b2 Ô b3 , g1 <<DD
Ha2 û b3 L Hg1 û g1 L a1 Ô b1 Ô b2 - Ha2 û b2 L Hg1 û g1 L a1 Ô b1 Ô b3 +
Ha2 û b1 L Hg1 û g1 L a1 Ô b2 Ô b3 - Ha1 û b3 L Hg1 û g1 L a2 Ô b1 Ô b2 +
Ha1 û b2 L Hg1 û g1 L a2 Ô b1 Ô b3 - Ha1 û b1 L Hg1 û g1 L a2 Ô b2 Ô b3

AB5 = ¯$BToScalarProductsBa1 Ô a2 Û b1 Ô b2 Ô b3 FF
1

-Ha2 û b3 L a1 Ô b1 Ô b2 + Ha2 û b2 L a1 Ô b1 Ô b3 - Ha2 û b1 L a1 Ô b2 Ô b3 +


Ha1 û b3 L a2 Ô b1 Ô b2 - Ha1 û b2 L a2 Ô b1 Ô b3 + Ha1 û b1 L a2 Ô b2 Ô b3

AB4x = AB4 ê. Hg1 û g1 L Ø 1


Ha2 û b3 L a1 Ô b1 Ô b2 - Ha2 û b2 L a1 Ô b1 Ô b3 + Ha2 û b1 L a1 Ô b2 Ô b3 -
Ha1 û b3 L a2 Ô b1 Ô b2 + Ha1 û b2 L a2 Ô b1 Ô b3 - Ha1 û b1 L a2 Ô b2 Ô b3


Orthogonal elements 2
Taking the generalized product of an element with an exterior product is equivalent to taking the generalized product only with those factors which are not orthogonal to it.

Consider an element made up of two factors, B and G, and a third element A totally orthogonal to
B. Then the generalized product of A with GÔ B is equal to the exterior product of B with the
generalized product of A and G. That is, there is a sort of quasi-associativity between the factors.

$$\alpha_{m}\circledcirc_{\lambda}(\gamma_{p}\wedge\beta_{k})\;=\;\bigl(\alpha_{m}\circledcirc_{\lambda}\gamma_{p}\bigr)\wedge\beta_{k},\qquad \alpha_{i}\circ\beta_{j}=0,\quad \lambda\le\mathrm{Min}[m,p]\tag{10.35}$$

$$\alpha_{m}\circledcirc_{\lambda}(\gamma_{p}\wedge\beta_{k})\;=\;0,\qquad \alpha_{i}\circ\beta_{j}=0,\quad \lambda>\mathrm{Min}[m,p]\tag{10.36}$$

For orthogonal elements, the generalized products cease being zero when the order of the product equals or exceeds the grade of the common factor. We assume in the following that $\delta_{i}\circ\varepsilon_{j}=0$.

$$\bigl(\alpha_{m}\wedge\delta_{r}\bigr)\circledcirc_{\lambda}\bigl(\beta_{k}\wedge\varepsilon_{p}\bigr)\;\neq\;0,\qquad \lambda<m+k$$
$$\bigl(\alpha_{m}\wedge\delta_{r}\bigr)\circledcirc_{\lambda}\bigl(\beta_{k}\wedge\varepsilon_{p}\bigr)\;=\;\cdots,\qquad \lambda=m+k$$
$$\bigl(\alpha_{m}\wedge\delta_{r}\bigr)\circledcirc_{\lambda}\bigl(\beta_{k}\wedge\varepsilon_{p}\bigr)\;=\;0,\qquad \lambda>m+k$$

¯A; ¯!12 ; DeclareExtraVectorSymbols@


8Subscript@a, _D, Subscript@b, _D,
Subscript@g, _D, Subscript@d, _D, Subscript@e, _D<D
8p, q, r, s, t, u, v, w, x, y, z, a_ , b_ , g_ , d_ , e_ <


Ï $(\alpha_{m}\wedge\delta_{r})\circledcirc_{\lambda}\varepsilon_{p}$, with $\delta_{i}\circ\varepsilon_{j}=0$ and $\lambda$ compared with $\mathrm{Min}[m,p]$

$$(\alpha_{m}\wedge\delta_{r})\circledcirc_{\lambda}\varepsilon_{p}\;\neq\;0,\qquad \lambda<m$$
$$(\alpha_{m}\wedge\delta_{r})\circledcirc_{\lambda}\varepsilon_{p}\;=\;\delta_{r}\wedge\bigl(\alpha_{m}\circledcirc_{\lambda}\varepsilon_{p}\bigr),\qquad \lambda\le\mathrm{Min}[m,p]$$
$$(\alpha_{m}\wedge\delta_{r})\circledcirc_{\lambda}\varepsilon_{p}\;=\;0,\qquad \lambda>m$$

$$\alpha_{m}\circledcirc_{\lambda}(\gamma_{p}\wedge\beta_{k})\;=\;\bigl(\alpha_{m}\circledcirc_{\lambda}\gamma_{p}\bigr)\wedge\beta_{k},\qquad \alpha_{i}\circ\beta_{j}=0,\quad \lambda\le\mathrm{Min}[m,p]$$

A = Ha1 Ô a2 Ô d1 L; B = He1 Ô e2 Ô e3 L;

VizualizeGeneralizedProductBA Û BF
0

H 1 û 1 L H a1 Ô a2 Ô d1 Ô e1 Ô e2 Ô e3 L

VizualizeGeneralizedProductBA Û BF êê
1
OrthogonalSimplify@88d1 Ô d2 Ô d3 , e1 Ô e2 Ô e3 Ô e4 <<D
a1 û e1 a1 û e2 a1 û e3
a2 û e1 a2 û e2 a2 û e3
0 0 0
a2 Ô d1 Ô e2 Ô e3 a2 Ô d1 Ô -He1 Ô e3 L a2 Ô d1 Ô e1 Ô e2
-Ha1 Ô d1 L Ô e2 Ô e3 -Ha1 Ô d1 L Ô -He1 Ô e3 L -Ha1 Ô d1 L Ô e1 Ô e2
a1 Ô a2 Ô e2 Ô e3 a1 Ô a2 Ô -He1 Ô e3 L a1 Ô a2 Ô e1 Ô e2

ToScalarProductsBHa1 Ô a2 L Û He1 Ô e2 Ô e3 LF
1

-Ha2 û e3 L a1 Ô e1 Ô e2 + Ha2 û e2 L a1 Ô e1 Ô e3 - Ha2 û e1 L a1 Ô e2 Ô e3 +


Ha1 û e3 L a2 Ô e1 Ô e2 - Ha1 û e2 L a2 Ô e1 Ô e3 + Ha1 û e1 L a2 Ô e2 Ô e3

VizualizeGeneralizedProductBA Û BF êê
2
OrthogonalSimplify@88d1 Ô d2 Ô d3 , e1 Ô e2 Ô e3 Ô e4 <<D
H a1 Ô a2 û e1 Ô e2 a1 Ô a2 û e1 Ô e3 a1 Ô a2 û e2 Ô e3 L H 1 Ô e3 1 Ô -e2 1 Ô e1 L


AB2a = ¯$BToInnerProductsBA Û BF êê
2

OrthogonalSimplify@88d1 Ô d2 Ô d3 , e1 Ô e2 Ô e3 Ô e4 <<DF

Ha1 Ô a2 û e3 Ô e4 L d1 Ô d2 Ô d3 Ô e1 Ô e2 -
Ha1 Ô a2 û e2 Ô e4 L d1 Ô d2 Ô d3 Ô e1 Ô e3 +
Ha1 Ô a2 û e2 Ô e3 L d1 Ô d2 Ô d3 Ô e1 Ô e4 +
Ha1 Ô a2 û e1 Ô e4 L d1 Ô d2 Ô d3 Ô e2 Ô e3 -
Ha1 Ô a2 û e1 Ô e3 L d1 Ô d2 Ô d3 Ô e2 Ô e4 + Ha1 Ô a2 û e1 Ô e2 L d1 Ô d2 Ô d3 Ô e3 Ô e4

AB2b = ¯%BToInnerProductsBJHa1 Ô a2 L Û He1 Ô e2 Ô e3 Ô e4 LNF Ô d1 Ô d2 Ô d3 F


2

Ha1 Ô a2 û e3 Ô e4 L d1 Ô d2 Ô d3 Ô e1 Ô e2 -
Ha1 Ô a2 û e2 Ô e4 L d1 Ô d2 Ô d3 Ô e1 Ô e3 +
Ha1 Ô a2 û e2 Ô e3 L d1 Ô d2 Ô d3 Ô e1 Ô e4 +
Ha1 Ô a2 û e1 Ô e4 L d1 Ô d2 Ô d3 Ô e2 Ô e3 -
Ha1 Ô a2 û e1 Ô e3 L d1 Ô d2 Ô d3 Ô e2 Ô e4 + Ha1 Ô a2 û e1 Ô e2 L d1 Ô d2 Ô d3 Ô e3 Ô e4

AB2a ã AB2b
True

VizualizeGeneralizedProductBA Û BF êê
3

OrthogonalSimplify@88d1 Ô d2 Ô d3 , e1 Ô e2 Ô e3 Ô e4 <<D
Inner::incom :
Length 0 of dimension 1 in 8< is incommensurate with
length 1 of dimension 1 in 88e1 Ô e2 Ô e3 <<. More…
Inner::incom :
Length 0 of dimension 1 in 8< is incommensurate
with length 1 of dimension 1 in 881<<. More…
Inner@CircleMinus, 8<, 88e1 Ô e2 Ô e3 <<, PlusD
Inner@Wedge, 8<, 881<<, PlusD

VizualizeGeneralizedProductBA Û BF êê
4
OrthogonalSimplify@88d1 Ô d2 Ô d3 , e1 Ô e2 Ô e3 Ô e4 <<D
0 d3 Ô 1
0 -d2 Ô 1
0 d1 Ô 1
0 -a2 Ô 1
0 a1 Ô 1


10.9 The Zero Generalized Sum Theorem

The Zero Interior Sum theorem


Suppose in the Generalized Product Theorem that $\alpha_{m}$ is unity. Then we can expand the product $1\circledcirc_{m}\beta_{k}$ in both the A and B forms to give:

$$1\;\circledcirc_{m}\;\beta_{k}\;=\;\sum_{j=1}^{\binom{k}{m}}\bigl(1\circ\beta^{j}_{m}\bigr)\wedge\bar{\beta}^{j}_{k-m}\;=\;\sum_{j=1}^{\binom{k}{m}}\bigl(1\wedge\bar{\beta}^{j}_{k-m}\bigr)\circ\beta^{j}_{m}$$

When m is equal to zero we have the trivial identity that:

$$1\circledcirc_{0}\beta_{k}\;=\;\beta_{k}$$

When m is greater than zero, the first A form is clearly zero, because the grade of the left factor of the interior product (1) is less than that of the right factor. Hence the B form is zero also. Thus we have the immediate result that

$$\sum_{j=1}^{\binom{k}{m}}\bar{\beta}^{j}_{k-m}\circ\beta^{j}_{m}\;=\;0\tag{10.37}$$
$$\beta_{k}\;=\;\beta^{1}_{m}\wedge\bar{\beta}^{1}_{k-m}\;=\;\beta^{2}_{m}\wedge\bar{\beta}^{2}_{k-m}\;=\;\cdots$$

We might express this more mnemonically as

$$\beta_{k}=\beta^{1}_{m}\wedge\bar{\beta}^{1}_{k-m}=\beta^{2}_{m}\wedge\bar{\beta}^{2}_{k-m}=\beta^{3}_{m}\wedge\bar{\beta}^{3}_{k-m}=\cdots$$
$$\Longrightarrow\quad \bar{\beta}^{1}\circ\beta^{1}+\bar{\beta}^{2}\circ\beta^{2}+\bar{\beta}^{3}\circ\beta^{3}+\cdots\;=\;0$$

If m is less than or equal to k−m, this sum is zero, as shown by 10.16. However, the sum created by interchanging the order of the factors in the interior products is also zero, since each term in the sum now becomes zero by virtue of the second factor being of higher grade than the first.

$$\beta^{1}_{m}\circ\bar{\beta}^{1}_{k-m}+\beta^{2}_{m}\circ\bar{\beta}^{2}_{k-m}+\beta^{3}_{m}\circ\bar{\beta}^{3}_{k-m}+\cdots\;=\;0$$

These two results are collected together and referred to as the Zero Interior Sum Theorem. It is valid for 0 < m < k.


$$\beta_{k}\;=\;\beta^{1}_{m}\wedge\bar{\beta}^{1}_{k-m}\;=\;\beta^{2}_{m}\wedge\bar{\beta}^{2}_{k-m}\;=\;\beta^{3}_{m}\wedge\bar{\beta}^{3}_{k-m}\;=\;\cdots$$
$$\Longrightarrow\quad \beta^{1}\circ\bar{\beta}^{1}+\beta^{2}\circ\bar{\beta}^{2}+\beta^{3}\circ\bar{\beta}^{3}+\cdots\;=\;0\tag{10.38}$$
$$\Longrightarrow\quad \bar{\beta}^{1}\circ\beta^{1}+\bar{\beta}^{2}\circ\beta^{2}+\bar{\beta}^{3}\circ\beta^{3}+\cdots\;=\;0$$
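The theorem can also be checked numerically outside Mathematica. The sketch below (plain Python, independent of the GrassmannAlgebra package; the representation and the function names are our own, not the book's) models a simple k-element as a fully antisymmetrized tensor over an orthonormal basis, in which the interior product of a (k−m)-element with an m-element becomes contraction over the last m tensor slots. Under these assumptions, the signed sum over all C(k,m) splits of a random simple 4-element vanishes to rounding error.

```python
import itertools
import random
from math import prod

random.seed(7)
n, k = 5, 4   # dimension of the space; grade of the simple element b = b1^b2^b3^b4
b = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(k)]

def perm_sign(p):
    """Sign of a permutation given as a sequence of distinct comparables."""
    s, p = 1, list(p)
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def wedge(vs):
    """v1 ^ ... ^ vm as an (unnormalized) antisymmetric m-tensor: index tuple -> value."""
    m = len(vs)
    return {idx: sum(perm_sign(p) * prod(vs[a][idx[p[a]]] for a in range(m))
                     for p in itertools.permutations(range(m)))
            for idx in itertools.product(range(n), repeat=m)}

def interior(A, B, gA, gB):
    """Interior product A o B (grade gA >= gB): contract B into the last gB slots of A."""
    return {idx: sum(A[idx + j] * B[j] for j in itertools.product(range(n), repeat=gB))
            for idx in itertools.product(range(n), repeat=gA - gB)}

def zero_interior_sum(m):
    """Max |component| of the sum over all splits b = b_T ^ b_Tc of (b_Tc o b_T).

    Valid here for k - m >= m; the swapped-order sum of the theorem is zero
    term by term and needs no numeric check."""
    total = {idx: 0.0 for idx in itertools.product(range(n), repeat=k - 2 * m)}
    for T in itertools.combinations(range(k), m):
        Tc = tuple(i for i in range(k) if i not in T)
        eps = perm_sign(T + Tc)   # sign making (b_T) ^ (b_Tc) rebuild b exactly
        term = interior(wedge([b[i] for i in Tc]), wedge([b[i] for i in T]), k - m, m)
        for idx, v in term.items():
            total[idx] += eps * v
    return max(abs(v) for v in total.values())

max_err = max(zero_interior_sum(m) for m in (1, 2))
assert max_err < 1e-9
```

Because the check is numeric, equality is tested against a small tolerance rather than exactly.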

Ÿ Composing a zero interior sum


We can easily compose a zero interior sum by noting its relationship to the base and cobase of an
element. (See Chapter 3: The Regressive Product, for a discussion of Base and Cobase). For
example, consider a simple element b. Its base and cobase are given by:
3

BaseBbF
3

81, b1 , b2 , b3 , b1 Ô b2 , b1 Ô b3 , b2 Ô b3 , b1 Ô b2 Ô b3 <

CobaseBbF
3

8b1 Ô b2 Ô b3 , b2 Ô b3 , -Hb1 Ô b3 L, b1 Ô b2 , b3 , -b2 , b1 , 1<

If we take the interior products of the corresponding elements of this list (using Thread), and then
add (Plus) together the cases (Cases) in which the grade of the second factor is equal to the
chosen value of m (here we choose m equal to 1), we get the terms of the zero interior sum.

Plus üü
CasesBThreadBBaseBbF û CobaseBbFF, a_ û b_ ê; RawGrade@bD ã 1F
3 3

b1 Ô b2 û b3 + b1 Ô b3 û -b2 + b2 Ô b3 û b1
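The base/cobase pairing is just signed subset enumeration, and is easy to reproduce outside Mathematica. A minimal sketch (plain Python; base_cobase and perm_sign are illustrative names of our own, not GrassmannAlgebra functions):

```python
from itertools import combinations

def perm_sign(p):
    """Sign of a permutation of distinct comparables, by counting inversions."""
    s, p = 1, list(p)
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def base_cobase(factors):
    """For a simple element b = f1^...^fk, return parallel lists of (sign, factors)
    pairs such that base[i] wedged with cobase[i] (sign included) rebuilds b."""
    k = len(factors)
    base, cobase = [], []
    for m in range(k + 1):
        for T in combinations(range(k), m):
            Tc = tuple(i for i in range(k) if i not in T)
            base.append((1, tuple(factors[i] for i in T)))
            cobase.append((perm_sign(T + Tc), tuple(factors[i] for i in Tc)))
    return base, cobase

base, cobase = base_cobase(("b1", "b2", "b3"))
signs = [s for s, _ in cobase]
```

For (b1, b2, b3) the cobase signs come out as {+, +, −, +, +, −, +, +}, matching the pattern of the Cobase list above.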

We then give our function a name and add a condition 0 < m < RawGrade[b] to restrict the
formulas to the valid range of m.

ComposeZeroInteriorSum@m_D@b_D ê; 0 < m < RawGrade@bD :=


Plus üü Cases@Thread@Base@bD û Cobase@bDD,
a_ û b_ ê; RawGrade@bD ã mD;

Ï Examples

For a 2-element, the zero interior sum is obviously zero.


ComposeZeroInteriorSum@1DBbF
2

b1 û b2 + b2 û -b1

For a 3-element, with m equal to 1 (one 1-element factor to the right of the interior product sign),
we get

ComposeZeroInteriorSum@1DBbF
3

b1 Ô b2 û b3 + b1 Ô b3 û -b2 + b2 Ô b3 û b1

For m equal to 2 (two 1-element factors to the right of the interior product sign, and hence one to the left), we get

ComposeZeroInteriorSum@2DBbF
3

b1 û b2 Ô b3 + b2 û -Hb1 Ô b3 L + b3 û b1 Ô b2

In this case, each of the interior products is zero in its own right. However, remember that ComposeZeroInteriorSum is a GrassmannAlgebra "composer", and hence, by design, does not effect any simplifications.

For an element of grade 4, we get non-trivial results for m equal to 1, and for m equal to 2.

ComposeZeroInteriorSum@1DBbF
4

b1 Ô b2 Ô b3 û b4 + b1 Ô b2 Ô b4 û -b3 + b1 Ô b3 Ô b4 û b2 + b2 Ô b3 Ô b4 û -b1

Z1 = ComposeZeroInteriorSum@2DBbF
4

b1 Ô b2 û b3 Ô b4 + b1 Ô b3 û -Hb2 Ô b4 L + b1 Ô b4 û b2 Ô b3 +
b2 Ô b3 û b1 Ô b4 + b2 Ô b4 û -Hb1 Ô b3 L + b3 Ô b4 û b1 Ô b2

Some zero interior sums can be simplified. For example the previous result can be written

Z2 = ¯$@Z1 D
2 Hb1 Ô b2 û b3 Ô b4 L - 2 Hb1 Ô b3 û b2 Ô b4 L + 2 Hb1 Ô b4 û b2 Ô b3 L

We can also verify that these expressions are indeed zero by converting them to their scalar
product form. For example:

¯$@ToScalarProducts@Z2 DD
0
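The simplified sum Z2 can also be checked with nothing more than scalar products, using the standard expansion of the inner product of two bivectors, (a∧b)∘(c∧d) = (a∘c)(b∘d) − (a∘d)(b∘c). A small numeric sketch (plain Python; the vector values are arbitrary):

```python
import random

random.seed(0)
n = 4
b1, b2, b3, b4 = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(4)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def biv_inner(a, b, c, d):
    # Inner product of two bivectors: (a^b) o (c^d) = (a.c)(b.d) - (a.d)(b.c)
    return dot(a, c) * dot(b, d) - dot(a, d) * dot(b, c)

# Z2 = 2[(b1^b2 o b3^b4) - (b1^b3 o b2^b4) + (b1^b4 o b2^b3)]
Z2 = 2 * (biv_inner(b1, b2, b3, b4)
          - biv_inner(b1, b3, b2, b4)
          + biv_inner(b1, b4, b2, b3))
assert abs(Z2) < 1e-12
```

Expanding the three Gram determinants shows the six monomials cancelling in pairs, so only floating-point rounding survives.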


Ÿ The Zero Generalized Sum theorem


The Zero Generalized Sum theorem is of the same form as the Zero Interior Sum theorem, but with the interior product replaced by the generalized product. The formulas are valid for $\beta_{k}$ simple, 0 < m < k, and $\lambda\neq 0$. The Zero Interior Sum theorem is the special case of the Zero Generalized Sum theorem in which $\lambda=\mathrm{Min}[m,k-m]$.

$$\beta_{k}\;=\;\beta^{1}_{m}\wedge\bar{\beta}^{1}_{k-m}\;=\;\beta^{2}_{m}\wedge\bar{\beta}^{2}_{k-m}\;=\;\beta^{3}_{m}\wedge\bar{\beta}^{3}_{k-m}\;=\;\cdots$$
$$\Longrightarrow\quad \beta^{1}\circledcirc_{\lambda}\bar{\beta}^{1}+\beta^{2}\circledcirc_{\lambda}\bar{\beta}^{2}+\beta^{3}\circledcirc_{\lambda}\bar{\beta}^{3}+\cdots\;=\;0,\qquad \lambda\neq0\tag{10.39}$$
$$\Longrightarrow\quad \bar{\beta}^{1}\circledcirc_{\lambda}\beta^{1}+\bar{\beta}^{2}\circledcirc_{\lambda}\beta^{2}+\bar{\beta}^{3}\circledcirc_{\lambda}\beta^{3}+\cdots\;=\;0,\qquad \lambda\neq0$$

To compose a zero generalized sum, we simply modify the code for the zero interior sum to
replace the interior product by the generalized product, and add another argument and condition for
l.

ComposeZeroGeneralizedSum@m_, l_D@b_D ê;
0 < m < RawGrade@bD && l > 0 := Plus üü CasesB

ThreadBBase@bD Û Cobase@bDF, a_ Û b_ ê; RawGrade@bD ã mF;


l l

Ï Ÿ Examples

Here is the only valid non-trivial zero generalized sum for a 4-element that does not reduce to a
zero interior sum.

ComposeZeroGeneralizedSum@2, 1DBbF
4

b1 Ô b2 Û b3 Ô b4 + b1 Ô b3 Û -Hb2 Ô b4 L + b1 Ô b4 Û b2 Ô b3 +
1 1 1
b2 Ô b3 Û b1 Ô b4 + b2 Ô b4 Û -Hb1 Ô b3 L + b3 Ô b4 Û b1 Ô b2
1 1 1

Here we tabulate the first 50 cases to confirm that they reduce to zero.


FlattenBTableB

¯%BToScalarProductsBComposeZeroGeneralizedSum@m, lDBbFFF,
k

8k, 2, 8<, 8m, 1, k - 1<, 8l, 1, Min@m, k - mD<FF

80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<

Ÿ The mechanism behind zero generalized sums


To see the mechanism behind why these sums are zero, we take a specific case and follow through the steps. A general proof of the theorem need only abstract these few simple steps. The proof will rely on the Zero Interior Sum theorem.

Suppose we have a simple 5-element $\beta_{5}$:

$$\beta_{5}\;=\;\beta_{1}\wedge\beta_{2}\wedge\beta_{3}\wedge\beta_{4}\wedge\beta_{5}$$

and we wish to show that

$$\beta^{1}_{m}\circledcirc_{\lambda}\bar{\beta}^{1}_{5-m}+\beta^{2}_{m}\circledcirc_{\lambda}\bar{\beta}^{2}_{5-m}+\beta^{3}_{m}\circledcirc_{\lambda}\bar{\beta}^{3}_{5-m}+\cdots\;=\;0$$

where $0<m<5$, $\lambda>0$, and

$$\beta_{5}\;=\;\beta^{1}_{m}\wedge\bar{\beta}^{1}_{5-m}\;=\;\beta^{2}_{m}\wedge\bar{\beta}^{2}_{5-m}\;=\;\beta^{3}_{m}\wedge\bar{\beta}^{3}_{5-m}\;=\;\cdots$$

The generalized product is zero when $\lambda$ is greater than the minimum of m and 5−m. It is also quasi-commutative, only changing by a (possible) sign when its factors are interchanged:

$$\beta^{i}_{m}\circledcirc_{\lambda}\bar{\beta}^{i}_{k-m}\;=\;(-1)^{(m-\lambda)(k-m-\lambda)}\;\bar{\beta}^{i}_{k-m}\circledcirc_{\lambda}\beta^{i}_{m}$$

Thus the only essentially different nontrivial cases remaining are

$$F_{1}\;=\;\beta^{1}_{1}\circledcirc_{1}\bar{\beta}^{1}_{4}+\beta^{2}_{1}\circledcirc_{1}\bar{\beta}^{2}_{4}+\beta^{3}_{1}\circledcirc_{1}\bar{\beta}^{3}_{4}+\cdots\;=\;0$$
$$F_{2}\;=\;\beta^{1}_{2}\circledcirc_{1}\bar{\beta}^{1}_{3}+\beta^{2}_{2}\circledcirc_{1}\bar{\beta}^{2}_{3}+\beta^{3}_{2}\circledcirc_{1}\bar{\beta}^{3}_{3}+\cdots\;=\;0$$
$$F_{3}\;=\;\beta^{1}_{2}\circledcirc_{2}\bar{\beta}^{1}_{3}+\beta^{2}_{2}\circledcirc_{2}\bar{\beta}^{2}_{3}+\beta^{3}_{2}\circledcirc_{2}\bar{\beta}^{3}_{3}+\cdots\;=\;0$$

Since the terms in F1 and F3 are already interior products, they are zero by the Zero Interior Sum Theorem. It remains to show that F2 is zero.

To display the terms of F2 for $\beta_{5}$, we use the ComposeZeroGeneralizedSum function defined in the previous section. Specifically, the terms of F2 can be written


Z1 = ComposeZeroGeneralizedSum@2, 1DBbF
5

b1 Ô b2 Ô b3 Û b4 Ô b5 + b1 Ô b2 Ô b4 Û -Hb3 Ô b5 L +
1 1
b1 Ô b2 Ô b5 Û b3 Ô b4 + b1 Ô b3 Ô b4 Û b2 Ô b5 + b1 Ô b3 Ô b5 Û -Hb2 Ô b4 L +
1 1 1
b1 Ô b4 Ô b5 Û b2 Ô b3 + b2 Ô b3 Ô b4 Û -Hb1 Ô b5 L +
1 1
b2 Ô b3 Ô b5 Û b1 Ô b4 + b2 Ô b4 Ô b5 Û -Hb1 Ô b3 L + b3 Ô b4 Ô b5 Û b1 Ô b2
1 1 1

Before we proceed with the proof for this specific case, we verify first that Z1 is indeed zero.

¯%@ToScalarProducts@Z1 DD
0

Ï 1. Convert the generalized products into their interior product form

Z2 = ToInteriorProducts@Z1 D
b1 Ô Hb2 Ô b3 Ô b4 û b5 L - b1 Ô Hb2 Ô b3 Ô b5 û b4 L +
b1 Ô Hb2 Ô b4 Ô b5 û b3 L - b1 Ô Hb3 Ô b4 Ô b5 û b2 L - b2 Ô Hb1 Ô b3 Ô b4 û b5 L +
b2 Ô Hb1 Ô b3 Ô b5 û b4 L - b2 Ô Hb1 Ô b4 Ô b5 û b3 L + b2 Ô Hb3 Ô b4 Ô b5 û b1 L +
b3 Ô Hb1 Ô b2 Ô b4 û b5 L - b3 Ô Hb1 Ô b2 Ô b5 û b4 L + b3 Ô Hb1 Ô b4 Ô b5 û b2 L -
b3 Ô Hb2 Ô b4 Ô b5 û b1 L - b4 Ô Hb1 Ô b2 Ô b3 û b5 L + b4 Ô Hb1 Ô b2 Ô b5 û b3 L -
b4 Ô Hb1 Ô b3 Ô b5 û b2 L + b4 Ô Hb2 Ô b3 Ô b5 û b1 L + b5 Ô Hb1 Ô b2 Ô b3 û b4 L -
b5 Ô Hb1 Ô b2 Ô b4 û b3 L + b5 Ô Hb1 Ô b3 Ô b4 û b2 L - b5 Ô Hb2 Ô b3 Ô b4 û b1 L

Ï 2. Collect together the terms having the same exterior product factor
A simple way to do this is to replace the exterior product with an arbitrary scalar multiple. In this
form it becomes clear that each sum of the terms associated with each scalar multiple is a zero
interior sum, and hence zero, making the complete expression zero, and thus proving the theorem
for this case.

Z3 = Z2 ê. bi_ Ô B_ ß bi B

Hb2 Ô b3 Ô b4 û b5 L b1 - Hb2 Ô b3 Ô b5 û b4 L b1 +
Hb2 Ô b4 Ô b5 û b3 L b1 - Hb3 Ô b4 Ô b5 û b2 L b1 - Hb1 Ô b3 Ô b4 û b5 L b2 +
Hb1 Ô b3 Ô b5 û b4 L b2 - Hb1 Ô b4 Ô b5 û b3 L b2 + Hb3 Ô b4 Ô b5 û b1 L b2 +
Hb1 Ô b2 Ô b4 û b5 L b3 - Hb1 Ô b2 Ô b5 û b4 L b3 + Hb1 Ô b4 Ô b5 û b2 L b3 -
Hb2 Ô b4 Ô b5 û b1 L b3 - Hb1 Ô b2 Ô b3 û b5 L b4 + Hb1 Ô b2 Ô b5 û b3 L b4 -
Hb1 Ô b3 Ô b5 û b2 L b4 + Hb2 Ô b3 Ô b5 û b1 L b4 + Hb1 Ô b2 Ô b3 û b4 L b5 -
Hb1 Ô b2 Ô b4 û b3 L b5 + Hb1 Ô b3 Ô b4 û b2 L b5 - Hb2 Ô b3 Ô b4 û b1 L b5


Ï 3. Confirm the result by converting the interior products to scalar products.

To confirm that the complete expression is zero, we need first to declare the bi as scalar symbols
so that GrassmannSimplify can effect the collection of terms resulting from expanding the
interior products to scalar products.

DeclareExtraScalarSymbols@b_ D; ¯%@ToScalarProducts@Z3 DD

10.10 The Triple Generalized Sum Conjecture


In this section we discuss an interesting conjecture associated with our definition of the Clifford
product in Chapter 12. As already mentioned, we have defined the generalized Grassmann product
to facilitate the definition of the Clifford product of general Grassmann expressions. As is well
known, the Clifford product is associative. But in general, a Clifford product will involve both
exterior and interior products. And the interior product is not associative. In this section we look at
a conjecture, which, if established, will prove the validity of our definition of the Clifford product
in terms of generalized Grassmann products.

Ÿ The generalized product is not associative


We take a simple example to show that the generalized product is not associative. First we set up the assertion:

$$H=\bigl(x\circledcirc_{0}y\bigr)\circledcirc_{1}z\;\neq\;x\circledcirc_{0}\bigl(y\circledcirc_{1}z\bigr);$$

Then we convert the generalized products to scalar products.

ToScalarProducts[H]

$$y\,(x\circ z)-x\,(y\circ z)\;\neq\;x\,(y\circ z)$$

Clearly the two sides of the relation are not in general equal. Hence the generalized product is not
in general associative.


Ÿ The triple generalized sum

Ï Triple generalized products

We define a triple generalized product to be a product either of the form $\bigl(x_{m}\circledcirc_{\lambda}y_{k}\bigr)\circledcirc_{\mu}z_{p}$ or of the form $x_{m}\circledcirc_{\lambda}\bigl(y_{k}\circledcirc_{\mu}z_{p}\bigr)$. Since we have shown above that these forms will in general be different, we will refer to the first one as the A form, and to the second as the B form.

Ï Signed triple generalized products


We define the signed triple generalized product of form A to be the triple generalized product of form A multiplied by the factor $(-1)^{s_{A}}$, where

$$s_{A}\;=\;m\lambda+\tfrac{1}{2}\lambda(1+\lambda)+(k+m)\mu+\tfrac{1}{2}\mu(1+\mu).$$

We define the signed triple generalized product of form B to be the triple generalized product of form B multiplied by the factor $(-1)^{s_{B}}$, where

$$s_{B}\;=\;m\lambda+\tfrac{1}{2}\lambda(1+\lambda)+k\mu+\tfrac{1}{2}\mu(1+\mu).$$

Ï Triple generalized sums


We define a triple generalized sum of form A to be the sum of all the signed triple generalized products of form A of the same elements and which have the same grade n.

$$\sum_{\lambda=0}^{n}(-1)^{s_{A}}\,\bigl(x_{m}\circledcirc_{\lambda}y_{k}\bigr)\circledcirc_{n-\lambda}z_{p}$$

where $s_{A}=m\lambda+\tfrac{1}{2}\lambda(\lambda+1)+(m+k)(n-\lambda)+\tfrac{1}{2}(n-\lambda)(n-\lambda+1)$.

We define a triple generalized sum of form B to be the sum of all the signed triple generalized products of form B of the same elements and which have the same grade n.

$$\sum_{\lambda=0}^{n}(-1)^{s_{B}}\,x_{m}\circledcirc_{\lambda}\bigl(y_{k}\circledcirc_{n-\lambda}z_{p}\bigr)$$

where $s_{B}=m\lambda+\tfrac{1}{2}\lambda(\lambda+1)+k(n-\lambda)+\tfrac{1}{2}(n-\lambda)(n-\lambda+1)$.

For example, we tabulate below the triple generalized sums of form A for grades of 0, 1 and 2:


Table[{i, TripleGeneralizedSumA[i][x, y, z]}, {i, 0, 2}] // TableForm   (x, y, z of grades m, k, p)

$$\begin{array}{ll}
0 & \bigl(x_{m}\circledcirc_{0}y_{k}\bigr)\circledcirc_{0}z_{p}\\[2pt]
1 & (-1)^{1+m}\bigl(x_{m}\circledcirc_{1}y_{k}\bigr)\circledcirc_{0}z_{p}+(-1)^{1+k+m}\bigl(x_{m}\circledcirc_{0}y_{k}\bigr)\circledcirc_{1}z_{p}\\[2pt]
2 & -\bigl(x_{m}\circledcirc_{2}y_{k}\bigr)\circledcirc_{0}z_{p}+(-1)^{k}\bigl(x_{m}\circledcirc_{1}y_{k}\bigr)\circledcirc_{1}z_{p}-\bigl(x_{m}\circledcirc_{0}y_{k}\bigr)\circledcirc_{2}z_{p}
\end{array}$$
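As a quick hand-check of the sign bookkeeping (assuming only the definition of $s_{A}$ above), the two signs in the grade-1 row of the form A table follow directly:

$$\lambda=1:\quad s_{A}=m\cdot 1+\tfrac{1}{2}\cdot 1\cdot 2+(m+k)\cdot 0+0\;=\;m+1$$
$$\lambda=0:\quad s_{A}=0+0+(m+k)\cdot 1+\tfrac{1}{2}\cdot 1\cdot 2\;=\;m+k+1$$

so the two terms carry the signs $(-1)^{1+m}$ and $(-1)^{1+k+m}$ respectively.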

Similarly the triple generalized sums of form B for grades of 0, 1 and 2 are

Table[{i, TripleGeneralizedSumB[i][x, y, z]}, {i, 0, 2}] // TableForm   (x, y, z of grades m, k, p)

$$\begin{array}{ll}
0 & x_{m}\circledcirc_{0}\bigl(y_{k}\circledcirc_{0}z_{p}\bigr)\\[2pt]
1 & (-1)^{1+k}\,x_{m}\circledcirc_{0}\bigl(y_{k}\circledcirc_{1}z_{p}\bigr)+(-1)^{1+m}\,x_{m}\circledcirc_{1}\bigl(y_{k}\circledcirc_{0}z_{p}\bigr)\\[2pt]
2 & -\,x_{m}\circledcirc_{0}\bigl(y_{k}\circledcirc_{2}z_{p}\bigr)+(-1)^{2+k+m}\,x_{m}\circledcirc_{1}\bigl(y_{k}\circledcirc_{1}z_{p}\bigr)-\,x_{m}\circledcirc_{2}\bigl(y_{k}\circledcirc_{0}z_{p}\bigr)
\end{array}$$

The triple generalized sum conjecture


As we have shown above, a single triple generalized product is not in general associative. That is,
the A form and the B form are not in general equal. However we conjecture that the triple
generalized sum of form A is equal to the triple generalized sum of form B.

$$\sum_{\lambda=0}^{n}(-1)^{s_{A}}\,\bigl(x_{m}\circledcirc_{\lambda}y_{k}\bigr)\circledcirc_{n-\lambda}z_{p}\;=\;\sum_{\lambda=0}^{n}(-1)^{s_{B}}\,x_{m}\circledcirc_{\lambda}\bigl(y_{k}\circledcirc_{n-\lambda}z_{p}\bigr)\tag{10.40}$$

where $s_{A}=m\lambda+\tfrac{1}{2}\lambda(\lambda+1)+(m+k)(n-\lambda)+\tfrac{1}{2}(n-\lambda)(n-\lambda+1)$
and $s_{B}=m\lambda+\tfrac{1}{2}\lambda(\lambda+1)+k(n-\lambda)+\tfrac{1}{2}(n-\lambda)(n-\lambda+1)$.

If this conjecture can be shown to be true, then the associativity of the Clifford product of a general
Grassmann expression can be straightforwardly proven using the definition of the Clifford product
in terms of exterior and interior products (or, what is equivalent, in terms of generalized products).
That is, the rules of Clifford algebra may be entirely determined by the axioms and theorems of the
Grassmann algebra, making it unnecessary, and indeed potentially inconsistent, to introduce any
special Clifford algebra axioms.


Ÿ Exploring the triple generalized sum conjecture


We can explore the triple generalized sum conjecture by converting the generalized products into
scalar products.

For example first we generate the triple generalized sums.

A = TripleGeneralizedSumA[2][x, y, z]   (x, y, z of grades 3, 2, 2)

$$-\bigl(x_{3}\circledcirc_{2}y_{2}\bigr)\circledcirc_{0}z_{2}+\bigl(x_{3}\circledcirc_{1}y_{2}\bigr)\circledcirc_{1}z_{2}-\bigl(x_{3}\circledcirc_{0}y_{2}\bigr)\circledcirc_{2}z_{2}$$

B = TripleGeneralizedSumB[2][x, y, z]

$$-\,x_{3}\circledcirc_{0}\bigl(y_{2}\circledcirc_{2}z_{2}\bigr)-x_{3}\circledcirc_{1}\bigl(y_{2}\circledcirc_{1}z_{2}\bigr)-x_{3}\circledcirc_{2}\bigl(y_{2}\circledcirc_{0}z_{2}\bigr)$$

We could demonstrate the equality of these two sums directly by converting their difference into
scalar product form, thus allowing terms to cancel.

Expand@ToScalarProducts@A - BDD
0

However it is more instructive to convert each of the terms in the difference separately to scalar
products.

X = List üü HA - BL

:x Û y Û z , - xÛy Ûz , xÛ yÛz ,
3 0 2 2 3 2 2 0 3 1 2 1

x Û y Û z, x Û y Û z , - xÛy Ûz >
3 1 2 1 3 2 2 0 3 0 2 2


X1 = Expand@ToScalarProducts@XDD
80, -Hx2 û y2 L Hx3 û y1 L z Ô x1 + Hx2 û y1 L Hx3 û y2 L z Ô x1 +
Hx1 û y2 L Hx3 û y1 L z Ô x2 - Hx1 û y1 L Hx3 û y2 L z Ô x2 -
Hx1 û y2 L Hx2 û y1 L z Ô x3 + Hx1 û y1 L Hx2 û y2 L z Ô x3 ,
-Hz û y2 L Hx3 û y1 L x1 Ô x2 + Hz û y1 L Hx3 û y2 L x1 Ô x2 +
Hz û y2 L Hx2 û y1 L x1 Ô x3 - Hz û y1 L Hx2 û y2 L x1 Ô x3 -
Hz û y2 L Hx1 û y1 L x2 Ô x3 + Hz û y1 L Hx1 û y2 L x2 Ô x3 ,
Hz û y2 L Hx3 û y1 L x1 Ô x2 - Hz û y1 L Hx3 û y2 L x1 Ô x2 -
Hz û y2 L Hx2 û y1 L x1 Ô x3 + Hz û y1 L Hx2 û y2 L x1 Ô x3 -
Hz û x3 L Hx2 û y2 L x1 Ô y1 + Hz û x2 L Hx3 û y2 L x1 Ô y1 +
Hz û x3 L Hx2 û y1 L x1 Ô y2 - Hz û x2 L Hx3 û y1 L x1 Ô y2 +
Hz û y2 L Hx1 û y1 L x2 Ô x3 - Hz û y1 L Hx1 û y2 L x2 Ô x3 +
Hz û x3 L Hx1 û y2 L x2 Ô y1 - Hz û x1 L Hx3 û y2 L x2 Ô y1 -
Hz û x3 L Hx1 û y1 L x2 Ô y2 + Hz û x1 L Hx3 û y1 L x2 Ô y2 -
Hz û x2 L Hx1 û y2 L x3 Ô y1 + Hz û x1 L Hx2 û y2 L x3 Ô y1 +
Hz û x2 L Hx1 û y1 L x3 Ô y2 - Hz û x1 L Hx2 û y1 L x3 Ô y2 ,
Hx2 û y2 L Hx3 û y1 L z Ô x1 - Hx2 û y1 L Hx3 û y2 L z Ô x1 -
Hx1 û y2 L Hx3 û y1 L z Ô x2 + Hx1 û y1 L Hx3 û y2 L z Ô x2 +
Hx1 û y2 L Hx2 û y1 L z Ô x3 - Hx1 û y1 L Hx2 û y2 L z Ô x3 +
Hz û x3 L Hx2 û y2 L x1 Ô y1 - Hz û x2 L Hx3 û y2 L x1 Ô y1 -
Hz û x3 L Hx2 û y1 L x1 Ô y2 + Hz û x2 L Hx3 û y1 L x1 Ô y2 -
Hz û x3 L Hx1 û y2 L x2 Ô y1 + Hz û x1 L Hx3 û y2 L x2 Ô y1 +
Hz û x3 L Hx1 û y1 L x2 Ô y2 - Hz û x1 L Hx3 û y1 L x2 Ô y2 +
Hz û x2 L Hx1 û y2 L x3 Ô y1 - Hz û x1 L Hx2 û y2 L x3 Ô y1 -
Hz û x2 L Hx1 û y1 L x3 Ô y2 + Hz û x1 L Hx2 û y1 L x3 Ô y2 , 0<

Converting back to a sum shows that the terms add to zero.

X2 = Plus üü X1
0

Ÿ An algorithm to test the conjecture


It is simple to automate testing the conjecture by collecting the steps above in a function:

TestTripleGeneralizedSumConjecture[n_][x_,y_,z_]:=
Expand[ToScalarProducts[TripleGeneralizedSumA[n][x,y,z]-
TripleGeneralizedSumB[n][x,y,z]]]

As an example of this procedure we run through the first 320 cases. A value of zero will validate
the conjecture for that case.


TableBTestTripleGeneralizedSumConjecture@nDBx, y, zF,
m k p

8n, 0, 4<, 8m, 0, 3<, 8k, 0, 3<, 8p, 0, 3<F êê Flatten

80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<

10.11 Exploring Conjectures


As we have already shown, it is easy to explore conjectures using Mathematica to compute
individual cases. By 'explore' of course, we mean the generation of cases which either disprove the
conjecture or increase our confidence that the conjecture is correct.

A conjecture
As an example we suggest a conjecture for a relationship amongst generalized products of order 1
of the following form, valid for any m, k, and p.

$$\alpha_{m}\circledcirc_{1}\beta_{k}\circledcirc_{1}\gamma_{p}\;+\;(-1)^{x}\,\beta_{k}\circledcirc_{1}\gamma_{p}\circledcirc_{1}\alpha_{m}\;+\;(-1)^{y}\,\gamma_{p}\circledcirc_{1}\alpha_{m}\circledcirc_{1}\beta_{k}\;=\;0\tag{10.41}$$

The signs $(-1)^{x}$ and $(-1)^{y}$ are at this point unknown, but if the conjecture is true we expect them to be simple functions of m, k, p and their products. We conjecture therefore that in any particular case they will be either +1 or −1.


Ÿ Exploring the conjecture


The first step therefore is to test some cases to see if the formula fails for some combination of
signs. We do that by setting up a function that calculates the formula for each of the possible sign
combinations, returning True if any one of them satisfies the formula, and False otherwise. Then
we create a table of cases. Below we let the grades m, k, and p range from 0 to 3 (that is, two
instances of odd parity and two instances of even parity each, in all combinations). Although we
have written the function as printing the intermediate results as they are calculated, we do not show
this output as the results are summarized in the final list.

TestPostulate@8m_, k_, p_<D :=


ModuleB8X, Y, Z, a, b, g<, X = ToScalarProductsB a Û b Û gF;
m 1 k 1 p

Y = ToScalarProductsB b Û g Û aF;
k 1 p 1 m

Z = ToScalarProductsB g Û a Û bF; TrueQ@


p 1 m 1 k

X + Y + Z == 0 »» X + Y - Z == 0 »» X - Y + Z == 0 »» X - Y - Z == 0DF

Table@T = TestPostulate@8m, k, p<D; Print@88m, k, p<, T<D;


T, 8m, 0, 3<, 8k, 0, 3<, 8p, 0, 3<D êê Flatten
8True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True<

It can be seen that the formula is valid for some values of x and y in the first 64 cases. This gives us some confidence that our efforts may not be wasted if we wished to prove the formula and/or find the values of x and y in terms of m, k, and p.


10.12 The Generalized Product of Intersecting Elements

Ÿ The case l < p


Consider three simple elements $\alpha_{m}$, $\beta_{k}$ and $\gamma_{p}$. The elements $\gamma_{p}\wedge\alpha_{m}$ and $\gamma_{p}\wedge\beta_{k}$ may then be considered elements with an intersection $\gamma_{p}$. Generalized products of such intersecting elements are zero whenever the order $\lambda$ of the product is less than the grade of the intersection.

$$(\gamma_{p}\wedge\alpha_{m})\circledcirc_{\lambda}(\gamma_{p}\wedge\beta_{k})\;=\;0,\qquad \lambda<p\tag{10.42}$$

We can test this by reducing the expression to scalar products and tabulating cases.

FlattenBTableBA = ToScalarProductsB g Ô a Û g Ô b F;
p m l p k

Print@88m, k, p, l<, A<D; A, 8m, 0, 3<,


8k, 0, 3<, 8p, 1, 3<, 8l, 0, p - 1<FF

80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<

Of course, this result is also valid in the case that either or both of the grades of $\alpha_{m}$ and $\beta_{k}$ are zero.

$$\gamma_{p}\circledcirc_{\lambda}(\gamma_{p}\wedge\beta_{k})\;=\;(\gamma_{p}\wedge\alpha_{m})\circledcirc_{\lambda}\gamma_{p}\;=\;\gamma_{p}\circledcirc_{\lambda}\gamma_{p}\;=\;0,\qquad \lambda<p\tag{10.43}$$

Ÿ The case λ ≥ p

For the case where λ is equal to, or greater than, the grade p of the intersection, the generalized product of such intersecting elements may be expressed as the interior product of the common factor $\gamma_{p}$ with a generalized product of the remaining factors. This generalized product is of order lower by the grade of the common factor. Hence the formula is only valid for λ ≥ p (since the generalized product has only been defined for non-negative orders).

$$(\gamma_{p}\wedge\alpha_{m})\circledcirc_{\lambda}(\gamma_{p}\wedge\beta_{k})\;=\;(-1)^{p(\lambda-p)}\,\bigl((\gamma_{p}\wedge\alpha_{m})\circledcirc_{\lambda-p}\beta_{k}\bigr)\circ\gamma_{p}\tag{10.44}$$
$$(\gamma_{p}\wedge\alpha_{m})\circledcirc_{\lambda}(\gamma_{p}\wedge\beta_{k})\;=\;(-1)^{p m}\,\bigl(\alpha_{m}\circledcirc_{\lambda-p}(\gamma_{p}\wedge\beta_{k})\bigr)\circ\gamma_{p}$$

We can test these formulae by reducing the expressions on either side to scalar products and
tabulating cases. For l > p, the first formula gives

FlattenBTableB

A = ToScalarProductsB g Ô a Û g Ô b - H-1Lp Hl-pL gÔa Û b û gF;


p m l p k p m l-p k p

Print@88m, k, p, l<, A<D; A, 8m, 0, 3<,


8k, 0, 3<, 8p, 0, 3<, 8l, p + 1, Min@m, kD<FF

80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<

The case of l = p is tabulated in the next section.

The second formula can be derived from the first by using the quasi-commutativity of the
generalized product. To check, we tabulate the first few cases:

FlattenB

TableBToScalarProductsB g Ô a Û g Ô b - H-1Lp m a Û gÔb û gF,


p m l p k m l-p p k p

8m, 1, 2<, 8k, 1, m<, 8p, 0, m<, 8l, p + 1, Min@p + m, p + kD<FF

80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<

Finally, we can put $\mu=\lambda-p$ and rearrange the above to give:

$$\bigl(\alpha_{m}\circledcirc_{\mu}(\gamma_{p}\wedge\beta_{k})\bigr)\circ\gamma_{p}\;=\;(-1)^{p m}\,\bigl((\alpha_{m}\wedge\gamma_{p})\circledcirc_{\mu}\beta_{k}\bigr)\circ\gamma_{p}\tag{10.45}$$

FlattenB

TableBToScalarProductsB a Û g Ô b û g - H-1Lp m a Ô g Û b û gF,


m m p k p m p m k p

8m, 0, 2<, 8k, 0, m<, 8p, 0, m<, 8m, 0, Min@m, kD<FF

80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<


Ÿ The special case of l = p


By putting λ = p into the first formula of [10.31] we get:

$$(\gamma_{p}\wedge\alpha_{m})\circledcirc_{p}(\gamma_{p}\wedge\beta_{k})\;=\;(\gamma_{p}\wedge\alpha_{m}\wedge\beta_{k})\circ\gamma_{p}$$

By interchanging $\gamma_{p}$, $\alpha_{m}$, and $\beta_{k}$ we also have two simple alternate expressions:

$$(\gamma_{p}\wedge\alpha_{m})\circledcirc_{p}(\gamma_{p}\wedge\beta_{k})\;=\;(\gamma_{p}\wedge\alpha_{m}\wedge\beta_{k})\circ\gamma_{p}$$
$$(\alpha_{m}\wedge\gamma_{p})\circledcirc_{p}(\gamma_{p}\wedge\beta_{k})\;=\;(\alpha_{m}\wedge\gamma_{p}\wedge\beta_{k})\circ\gamma_{p}\tag{10.46}$$
$$(\alpha_{m}\wedge\gamma_{p})\circledcirc_{p}(\beta_{k}\wedge\gamma_{p})\;=\;(\alpha_{m}\wedge\beta_{k}\wedge\gamma_{p})\circ\gamma_{p}$$

Again we can demonstrate this identity by converting to scalar products.

FlattenBTableBA = ToScalarProductsB g Ô a Û g Ô b - g Ô a Ô b û gF;


p m p p k p m k p

Print@88m, k, p<, A<D; A, 8m, 0, 3<, 8k, 0, 3<, 8p, 0, 3<FF

80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<

10.13 The Generalized Product of Orthogonal Elements

Ÿ The generalized product of totally orthogonal elements


It is simple to see from the definition of the generalized product that if $\alpha_{m}$ and $\beta_{k}$ are totally orthogonal, and λ is not zero, then their generalized product is zero.

$$\alpha_{m}\circledcirc_{\lambda}\beta_{k}\;=\;0,\qquad \alpha_{i}\circ\beta_{j}=0,\quad \lambda>0\tag{10.47}$$

We can verify this easily:


FlattenBTableBToScalarProductsBa Û bF ê.
m l k

OrthogonalSimplificationRulesB::a, b>>F,
m k

8m, 0, 4<, 8k, 0, m<, 8l, 1, Min@m, kD<FF

80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<

The GrassmannAlgebra function OrthogonalSimplificationRules generates a list of rules which put to zero all the scalar products of a 1-element from $\alpha_{m}$ with a 1-element from $\beta_{k}$.

Ÿ The generalized product of partially orthogonal elements

Consider the generalized product $\alpha_{m}\circledcirc_{\lambda}(\gamma_{p}\wedge\beta_{k})$ of an element $\alpha_{m}$ and another element $\gamma_{p}\wedge\beta_{k}$, in which $\alpha_{m}$ and $\beta_{k}$ are totally orthogonal, but $\gamma_{p}$ is arbitrary.

Since $\alpha_{i}\circ\beta_{j}=0$, the interior products of $\alpha_{m}$ with any factors of $\gamma_{p}\wedge\beta_{k}$ in the expansion of the generalized product will be zero whenever they contain any factors of $\beta_{k}$. That is, the only non-zero terms in the expansion of the generalized product are of the form $\bigl(\alpha_{m}\circ\gamma^{i}_{\lambda}\bigr)\wedge\bar{\gamma}^{i}_{p-\lambda}\wedge\beta_{k}$. Hence:

$$\alpha_{m}\circledcirc_{\lambda}(\gamma_{p}\wedge\beta_{k})\;=\;\bigl(\alpha_{m}\circledcirc_{\lambda}\gamma_{p}\bigr)\wedge\beta_{k},\qquad \alpha_{i}\circ\beta_{j}=0,\quad \lambda\le\mathrm{Min}[m,p]\tag{10.48}$$
$$\alpha_{m}\circledcirc_{\lambda}(\gamma_{p}\wedge\beta_{k})\;=\;0,\qquad \alpha_{i}\circ\beta_{j}=0,\quad \lambda>\mathrm{Min}[m,p]\tag{10.49}$$

FlattenBTableBToScalarProductsBa Û g Ô b - a Û g Ô b F ê.
m l p k m l p k

OrthogonalSimplificationRulesB::a, b>>F,
m k

8m, 0, 3<, 8k, 0, m<, 8p, 0, m<, 8l, 0, Min@m, pD<FF

80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<

By using the quasi-commutativity of the generalized and exterior products, we can transform the above results to:

$$(\alpha_{m}\wedge\gamma_{p})\circledcirc_{\lambda}\beta_{k}\;=\;(-1)^{m\lambda}\,\alpha_{m}\wedge\bigl(\gamma_{p}\circledcirc_{\lambda}\beta_{k}\bigr),\qquad \alpha_{i}\circ\beta_{j}=0,\quad \lambda\le\mathrm{Min}[k,p]\tag{10.50}$$
$$(\gamma_{p}\wedge\alpha_{m})\circledcirc_{\lambda}\beta_{k}\;=\;0,\qquad \alpha_{i}\circ\beta_{j}=0,\quad \lambda>\mathrm{Min}[k,p]\tag{10.51}$$

FlattenBTableBToScalarProductsB a Ô g Û b - H-1Lm l a Ô g Û b F ê.
m p l k m p l k

OrthogonalSimplificationRulesB::a, b>>F,
m k

8m, 0, 2<, 8k, 0, m<, 8p, 0, m<, 8l, 0, Min@k, pD<FF

80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<

10.14 The Generalized Product of Intersecting Orthogonal Elements

The case λ < p

Consider three simple elements $\alpha_{m}$, $\beta_{k}$ and $\gamma_{p}$. The elements $\gamma_{p}\wedge\alpha_{m}$ and $\gamma_{p}\wedge\beta_{k}$ may then be considered elements with an intersection $\gamma_{p}$. As has been shown in Section 10.12, generalized products of such intersecting elements are zero whenever the order λ of the product is less than the grade of the intersection. Hence the product will be zero for λ < p, independent of the orthogonality relationships of its factors.

Ÿ The case λ ≥ p

Consider now the case of three simple elements $\alpha_{m}$, $\beta_{k}$ and $\gamma_{p}$, where $\gamma_{p}$ is totally orthogonal to both $\alpha_{m}$ and $\beta_{k}$ (and hence to $\alpha_{m}\wedge\beta_{k}$). A simple element $\gamma_{p}$ is totally orthogonal to an element $\alpha_{m}$ if and only if $\alpha_{m}\circ\gamma_{i}=0$ for all $\gamma_{i}$ belonging to $\gamma_{p}$.

The generalized product of the elements $\gamma_{p}\wedge\alpha_{m}$ and $\gamma_{p}\wedge\beta_{k}$ of order λ ≥ p is a scalar factor $\gamma_{p}\circ\gamma_{p}$ times the generalized product of the factors $\alpha_{m}$ and $\beta_{k}$, but of an order lower by the grade of the common factor; that is, of order λ−p.


(γ_p ∧ α_m) ∘_λ (γ_p ∧ β_k) = (γ_p ⋅ γ_p) (α_m ∘_(λ−p) β_k)

γ_p = γ1 ∧ γ2 ∧ … ∧ γp        α_m ⋅ γ_i = β_k ⋅ γ_i = 0    λ ≥ p    [10.52]

Note here that the factors of γ_p are not necessarily orthogonal to each other. Neither are the factors of α_m, nor those of β_k.

We can test this out by making a table of cases. Here we look at the first 25 cases.

Flatten[Table[ToScalarProducts[(γ_p ∧ α_m) ∘_λ (γ_p ∧ β_k) − (γ_p ⋅ γ_p) (α_m ∘_(λ−p) β_k)] /.
    OrthogonalSimplificationRules[{{α_m, γ_p}, {β_k, γ_p}}],
  {m, 0, 2}, {k, 0, m}, {p, 1, m + 1}, {λ, p, p + Min[m, k]}]]

{0, 0, 0, …, 0}    (a list of 25 zeros)

By combining the results of the previous section with those of this section, we see that we can also write:

(γ_p ⋅ γ_p) (α_m ∘_λ β_k) = (−1)^(pλ) ((γ_p ∧ α_m) ∘_λ β_k) ⋅ γ_p = (−1)^(pm) ((α_m ∘_λ (γ_p ∧ β_k)) ⋅ γ_p)

γ_p = γ1 ∧ γ2 ∧ … ∧ γp        α_m ⋅ γ_i = β_k ⋅ γ_i = 0    [10.53]

We check the second rule by taking the first 25 cases.

Flatten[Table[ToScalarProducts[(α_m ∘_λ (γ_p ∧ β_k)) ⋅ γ_p − (−1)^(pm) (γ_p ⋅ γ_p) (α_m ∘_λ β_k)] /.
    OrthogonalSimplificationRules[{{α_m, γ_p}, {β_k, γ_p}}],
  {m, 0, 2}, {k, 0, m}, {p, 1, m + 1}, {λ, 0, Min[m, k]}]]

{0, 0, 0, …, 0}    (a list of 25 zeros)

10.15 Generalized Products in Lower Dimensional Spaces

Generalized products in 0, 1, and 2-spaces


In this section we summarize the properties of generalized products for spaces of dimension 0, 1
and 2.

In spaces of all dimensions, generalized products in which one of the factors is a scalar will reduce
to either the usual (field or linear space) product by the scalar quantity (when the order of the
product is zero) or else to zero (when the order of the product is greater than zero). Except for a
brief discussion of the implications of this for 0-space, further discussions will assume that the
factors of the generalized products are not scalar.

In spaces of dimension 0, 1, or 2, all generalized products reduce either to the ordinary field
product, the exterior product, the interior product, or zero. Hence they bring little new to a study of
these spaces.

These results will be clear upon reference to the simple properties of generalized products discussed in previous sections. The only result that may not be immediately obvious is that of the generalized product of order one of two 2-elements in a 2-space, for example α_2 ∘_1 β_2. In a 2-space, α_2 and β_2 must be congruent (differing at most by a scalar factor), and hence by formula 10.17 the product is zero.

If one of the factors is a scalar, then the only non-zero generalized product is that of order zero,
equivalent to the ordinary field product (and, incidentally, also to the exterior and interior products).

0-space

In a space of zero dimensions (the underlying field of the Grassmann algebra), the generalized product reduces to the exterior product and hence to the usual scalar (field) product.

ToScalarProducts[a ∘_0 b]
a b

Higher order products (for example a ∘_1 b) are of course zero, since the order of the product is greater than the minimum of the grades of the factors (which in this case is zero).

In sum: The only non-zero generalized product in a space of zero dimensions is the product of
order zero, equivalent to the underlying field product.

1-space

Products of zero order

In a space of one dimension there is only one basis element, so the product of zero order (that is, the exterior product) of any two elements is zero:

(a e1) ∘_0 (b e1) = a b (e1 ∧ e1) = 0

ToScalarProducts[(a e1) ∘_0 (b e1)]
0

Products of the first order

The product of the first order reduces to the scalar product.

(a e1) ∘_1 (b e1) = a b (e1 ⋅ e1)

ToScalarProducts[(a e1) ∘_1 (b e1)]
a b (e1 ⋅ e1)

In sum: The only non-zero generalized product of elements (other than where one is a scalar) in a
space of one dimension is the product of order one, equivalent to the scalar product.

2-space

Products of zero order

The product of zero order of two elements reduces to their exterior product. Hence in a 2-space, the only non-zero products of this order (apart from those in which one factor is a scalar) are the exterior products of two 1-elements.

(a e1 + b e2) ∘_0 (c e1 + d e2) = (a e1 + b e2) ∧ (c e1 + d e2)

Products of the first order

The product of the first order of two 1-elements is commutative and reduces to their scalar product.

(a e1 + b e2) ∘_1 (c e1 + d e2) = (a e1 + b e2) ⋅ (c e1 + d e2)

ToScalarProducts[(a e1 + b e2) ∘_1 (c e1 + d e2)]
a c (e1 ⋅ e1) + b c (e1 ⋅ e2) + a d (e1 ⋅ e2) + b d (e2 ⋅ e2)

The product of the first order of a 1-element and a 2-element is also commutative and reduces to their interior product.

(a e1 + b e2) ∘_1 (c e1 ∧ e2) = (c e1 ∧ e2) ∘_1 (a e1 + b e2) = (c e1 ∧ e2) ⋅ (a e1 + b e2)

ToScalarProducts[(a e1 + b e2) ∘_1 (c e1 ∧ e2)]
−a c (e1 ⋅ e2) e1 − b c (e2 ⋅ e2) e1 + a c (e1 ⋅ e1) e2 + b c (e1 ⋅ e2) e2
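As a cross-check outside Mathematica, the same expansion can be evaluated numerically. The following Python sketch (an illustrative helper, not part of the GrassmannAlgebra package; the function name `interior_2_1` is hypothetical) computes the coefficients of this first-order product of c e1 ∧ e2 with a e1 + b e2 for an arbitrary symmetric metric g[i][j] = e_i ⋅ e_j, following the expansion above:

```python
# Interior product of a 2-element with a 1-element in a 2-space,
# expanded over an arbitrary symmetric metric g[i][j] = e_i . e_j.
# Mirrors the expansion printed above:
#   (c e1 ^ e2) . (a e1 + b e2)
#     = -a c (e1.e2) e1 - b c (e2.e2) e1 + a c (e1.e1) e2 + b c (e1.e2) e2

def interior_2_1(c, a, b, g):
    """Coefficients (on e1, e2) of (c e1^e2) . (a e1 + b e2)."""
    coeff_e1 = -a * c * g[0][1] - b * c * g[1][1]
    coeff_e2 = a * c * g[0][0] + b * c * g[0][1]
    return (coeff_e1, coeff_e2)

euclidean = [[1, 0], [0, 1]]
print(interior_2_1(1, 3, 4, euclidean))   # (-4, 3)
```

With a Euclidean metric the result is the familiar quarter-turn of the vector, scaled by c.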

The product of the first order of two 2-elements is zero. This can be determined directly from [10.17]:

α_m ∘_λ α_m = 0,    λ ≠ m

ToScalarProducts[(a e1 ∧ e2) ∘_1 (b e1 ∧ e2)]
0

Products of the second order

Products of the second order of two 1-elements, or of a 1-element and a 2-element, are necessarily zero, since the order of the product is greater than the minimum of the grades of the factors.

ToScalarProducts[(a e1 + b e2) ∘_2 (c e1 + d e2)]
0

Products of the second order of two 2-elements reduce to their inner product:

(a e1 ∧ e2) ∘_2 (b e1 ∧ e2) = (a e1 ∧ e2) ⋅ (b e1 ∧ e2)

ToInnerProducts[(a e1 ∧ e2) ∘_2 (b e1 ∧ e2)]
a b ((e1 ∧ e2) ⋅ (e1 ∧ e2))


In sum: The only non-zero generalized products of elements (other than where one is a scalar) in a
space of two dimensions reduce to exterior, interior, inner or scalar products.

10.16 Generalized Products in 3-Space


To be completed

10.17 Summary
To be completed


11 Exploring Hypercomplex Algebra

11.1 Introduction
In this chapter we explore the relationship of Grassmann algebra to the hypercomplex algebras. Typical examples of hypercomplex algebras are the algebras of the real numbers ℝ, complex numbers ℂ, quaternions ℍ, octonions (or Cayley numbers) 𝕆, sedenions 𝕊, and the Clifford algebras. We will show that each of these algebras can be generated as an algebra of Grassmann numbers under a new product which we take the liberty of calling the hypercomplex product.

This hypercomplex product of an m-element α_m and a k-element β_k is denoted α_m ⊙ β_k and defined as a sum of signed generalized Grassmann products of α_m with β_k.

α_m ⊙ β_k = Σ_{λ=0}^{Min[m,k]} σ_{m,λ,k} (α_m ∘_λ β_k)    [11.1]

Here, the sign σ_{m,λ,k} is a function of the grades m and k of the factors and of the order λ of the generalized product, and takes the values +1 or −1 depending on the type of hypercomplex product being defined.

Note particularly that this approach does not require the algebra to be defined in terms of
generators or basis elements other than those of the Grassmann algebra. We will see in what
follows that this leads to considerably more insight on the nature of the hypercomplex algebras
than afforded by the traditional algebraic approach. The generalized Grassmann products on the
right-hand side subsume and extend the principal dimension-independent products of geometric
linear algebra: the exterior and interior products. It is enticing to postulate that many algebras with
a geometric interpretation may be defined by sums of generalized products.

Our approach then, is not to define a hypercomplex algebra by defining new (hypercomplex)
elements, but rather to define a new product called the hypercomplex product which acts on
ordinary Grassmann numbers. This approach has the conceptual advantage that the hypercomplex
algebra can now sit comfortably inside the Grassmann algebra and be used in geometric
constructions just like the exterior and interior products.

Strictly speaking, this means that there are no hypercomplex numbers per se, only Grassmann numbers acting under the hypercomplex product. The hypercomplex product defined above, in which the hypercomplex signs σ_{m,λ,k} are not yet specified, may be viewed as a definition of the most unconstrained hypercomplex algebra. Different hypercomplex algebras may be defined by constraining these signs to have given values. The traditional hypercomplex algebras will have one set of constraints, while the Clifford algebras will have another. And just as with the Grassmann algebras, they will have significantly different properties depending on the dimension of their underlying space. We will also see that one of the major sources of difference is due to the fact that spaces of dimension higher than three can have non-simple elements. For example, this is the reason why the algebra of octonions, which lives in a Grassmann algebra with an underlying space of three dimensions, is the largest hypercomplex algebra with a scalar norm.

There is some care needed with terminology. In our approach to hypercomplex algebras, there are
no hypercomplex numbers per se, but only different types of hypercomplex product (that is, with
different sets of constraints on the hypercomplex signs), and spaces of different dimension. These
two factors together give the traditional hypercomplex algebras their individual flavours. So, what
then is a quaternion? From our approach, it is not an object, but rather a Grassmann number living
in a two-dimensional subspace (the Grassmann algebra on an underlying linear space of two
dimensions has 4 basis elements) responding to a specific type of hypercomplex product. With this
meaning, and in order to concord with traditional usage, we shall continue to speak of
hypercomplex numbers, such as quaternions, as if they were objects in their own right.
"Hypercomplex number" should be read as "Grassmann number under the hypercomplex product".
"Quaternion" should be read as "Hypercomplex number in a Grassmann algebra of two
dimensions". More carefully, by "Grassmann algebra of two dimensions", we mean a Grassmann
algebra constructed on a two-dimensional subspace of the underlying linear space.
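To make the quaternion identification concrete, here is an illustrative Python sketch (not the package's code; it assumes a Euclidean metric and the sign constraints developed later in this chapter). Identifying the four basis elements {1, e1, e2, e1∧e2} of the Grassmann algebra on two dimensions with the quaternion units {1, i, j, k}, the hypercomplex product recovers Hamilton's multiplication table:

```python
# Quaternion product on coefficient 4-tuples (w, x, y, z) over the basis
# {1, e1, e2, e1^e2} ~ {1, i, j, k}.  This is the product table the
# hypercomplex construction recovers when the metric is Euclidean.

def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j))   # (0, 0, 0, 1): e1 o e2 gives e1^e2, i.e. i j = k
print(qmul(i, i))   # (-1, 0, 0, 0): i^2 = -1
```

The four coefficients correspond to the scalar, the two vector, and the single bivector coordinates of the Grassmann number; the sums of squares multiply, reflecting the scalar norm discussed below.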

It turns out that the real numbers are hypercomplex numbers in a subspace of zero dimensions, the
complex numbers are hypercomplex numbers in a subspace of one dimension, the quaternions are
hypercomplex numbers in a subspace of two dimensions, the octonions are hypercomplex numbers
in a subspace of three dimensions, and the sedenions are hypercomplex numbers in a subspace of
four dimensions. The number of base elements in the hypercomplex algebra is equal to the number
of basis elements in the corresponding Grassmann algebra.

We can see from this approach that different hypercomplex numbers can now live and work in the
same space. For example, complex numbers can operate on vectors, or quaternions on octonions.

In Chapter 12: Exploring Clifford Algebra we will also show that the Clifford product may be defined in a space of any number of dimensions as a hypercomplex product with signs:

σ_{m,λ,k} = (−1)^(λ(m−λ) + λ(λ−1)/2)

We could also explore the algebras generated by products of the type defined by definition 11.1, but which have some of the σ_{m,λ,k} defined to be zero. For example the relations

σ_{m,λ,k} = 1, λ = 0        σ_{m,λ,k} = 0, λ > 0

define the algebra in which the hypercomplex product reduces to the exterior product, and

σ_{m,λ,k} = 1, λ = Min[m, k]        σ_{m,λ,k} = 0, λ ≠ Min[m, k]

define the algebra in which the hypercomplex product reduces to the interior product.

Both of these definitions however lead to products having zero divisors; that is, some products can be zero even though neither of their factors is zero. Because one of the principally useful characteristics of the scalar-normed hypercomplex numbers (that is, hypercomplex numbers up to and including the octonions) is that they have no zero divisors, we shall limit ourselves in this chapter to exploring just those algebras.

That is, we shall always assume that σ_{m,λ,k} is not zero.

We begin by exploring the constraints we would require on the hypercomplex signs σ_{m,λ,k} in definition 11.1 which will lead to algebras with a scalar norm. It turns out that we can ensure a scalar norm (up to the octonions) by constraining only a subset of the signs. The specification of the rest of the signs will then be explored by considering spaces of increasing dimension, starting with a space of zero dimensions generating the real numbers. In order to embed the algebras defined on lower dimensional spaces within those defined on higher dimensional spaces, we will maintain the lower dimensional relations we determine for the hypercomplex signs when we explore the higher dimensional spaces.

Finally, we note that this approach to hypercomplex numbers is invariant in a tensorial sense. The traditional product tables for the hypercomplex numbers are recovered in our approach by assuming the underlying space has a Euclidean metric. However, we will show that the hypercomplex product definition in 11.1 above is still meaningful under different metrics. Indeed, a simple change of metric can lead to a whole range of new hypercomplex algebras.

11.2 Some Initial Definitions and Properties

Distributivity
Distributivity of hypercomplex products of sums of elements of the same grade can be shown from definition 11.1 using the distributivity of the generalized Grassmann products.

(α1_m + α2_m + …) ⊙ β_k = α1_m ⊙ β_k + α2_m ⊙ β_k + …
α_m ⊙ (β1_k + β2_k + …) = α_m ⊙ β1_k + α_m ⊙ β2_k + …    [11.2]

We extend this by definition to sums of elements of different grades.

(α1_m1 + α2_m2 + …) ⊙ β_k = α1_m1 ⊙ β_k + α2_m2 ⊙ β_k + …
α_m ⊙ (β1_k1 + β2_k2 + …) = α_m ⊙ β1_k1 + α_m ⊙ β2_k2 + …    [11.3]


Factorization of scalars
By using the definition 11.1 of the hypercomplex product and the properties of the generalized Grassmann product we can easily show that scalars may be factorized out of any hypercomplex product:

(a α_m) ⊙ β_k = α_m ⊙ (a β_k) = a (α_m ⊙ β_k)    [11.4]

Multiplication by scalars
The next property we will require of the hypercomplex product is that it behaves as expected when one of the elements is a scalar. That is:

a ⊙ β_m = β_m ⊙ a = a β_m    [11.5]

From the relations 11.5 and the definition 11.1 we can determine the constraints on the σ_{m,λ,k} which will accomplish this.

α_m ⊙ b = Σ_{λ=0}^{Min[m,0]} σ_{m,λ,0} (α_m ∘_λ b) = σ_{m,0,0} (α_m ∧ b) = σ_{m,0,0} b α_m

b ⊙ α_m = Σ_{λ=0}^{Min[0,m]} σ_{0,λ,m} (b ∘_λ α_m) = σ_{0,0,m} (b ∧ α_m) = σ_{0,0,m} b α_m

Hence the first constraints we impose on the σ_{m,λ,k} to ensure the properties we require for multiplication by scalars are that:

σ_{m,0,0} = σ_{0,0,m} = 1    [11.6]

Hypercomplex scalar constraints

We call this pair of constraints HypercomplexScalarConstraints (alias ¯Hsc) and encode them as a pair of rules.

HypercomplexScalarConstraints := {σ_{m_,0,0} :> 1, σ_{0,0,k_} :> 1}    [11.7]

These rules are the first in a collection which we will eventually use to define the hypercomplex algebras.

The real numbers ℝ

The real numbers are a simple consequence of the relations determined above for scalar multiplication. When the grades of both the factors are zero we have that σ_{0,0,0} = 1. Hence:

a ⊙ b = b ⊙ a = a b    [11.8]

The hypercomplex product in a Grassmann algebra of zero dimensions is therefore equivalent to the (usual) real field product of the underlying linear space. Hypercomplex numbers in a Grassmann algebra of zero dimensions are therefore (isomorphic to) the real numbers.

The conjugate
The conjugate of a Grassmann number X is denoted Xc and is defined as the body Xb (scalar part) of X minus the soul Xs (non-scalar part) of X.

X = Xb + Xs        Xc = Xb − Xs    [11.9]

Example

Let X be a Grassmann number in 3 dimensions.

¯!3 ; X = ¯0a
a0 + a1 e1 + a2 e2 + a3 e3 + a4 (e1∧e2) + a5 (e1∧e3) + a6 (e2∧e3) + a7 (e1∧e2∧e3)

The body and soul of X can be computed as

{Body[X], Soul[X]}
{a0, a1 e1 + a2 e2 + a3 e3 + a4 (e1∧e2) + a5 (e1∧e3) + a6 (e2∧e3) + a7 (e1∧e2∧e3)}

The conjugate of X can be computed as

GrassmannConjugate[X]
a0 − a1 e1 − a2 e2 − a3 e3 − a4 (e1∧e2) − a5 (e1∧e3) − a6 (e2∧e3) − a7 (e1∧e2∧e3)

11.3 The Norm of a Grassmann Number

The norm
One of the important properties that ℝ, ℂ, ℍ, and 𝕆 possess in common is that of having a positive scalar norm.

The norm of a Grassmann number X is denoted 𝒩[X] and is defined as the hypercomplex product of X with its conjugate.

𝒩[X] = X ⊙ Xc = (Xb + Xs) ⊙ (Xb − Xs) = Xb² − Xs ⊙ Xs    [11.10]

The norm of an m-element α_m is then denoted 𝒩[α_m] and defined as the hypercomplex product of α_m with its conjugate (α_m)c. If α_m is not scalar, then (α_m)c is simply −α_m. Hence

𝒩[α_m] = α_m ⊙ (α_m)c = −α_m ⊙ α_m    m ≠ 0    [11.11]

The norm of a simple m-element

If α_m is simple, then the generalized Grassmann product α_m ∘_λ α_m is zero for all λ except λ equal to m, in which case the generalized product (and hence the hypercomplex product) becomes an inner product. Thus for m not zero we have:

α_m ⊙ α_m = σ_{m,m,m} (α_m ⋅ α_m)    m ≠ 0

Equation 11.11 then implies that for the norm to be a positive scalar quantity we must have σ_{m,m,m} = −1.

𝒩[α_m] = −α_m ⊙ α_m = α_m ⋅ α_m    α_m simple    [11.12]

σ_{m,m,m} = −1    m ≠ 0    [11.13]

The norm of a scalar a is just a² = a ⋅ a, so formula 11.12 applies also for m equal to zero.
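As a first concrete instance of these constraints, here is an illustrative Python sketch (not the package's code) under the assumption of a Euclidean metric, so e1 ⋅ e1 = 1. In a space of one dimension, the constraints σ_{m,0,0} = σ_{0,0,m} = 1 and σ_{1,1,1} = −1 make the hypercomplex product of numbers a + b e1 behave exactly like complex multiplication, and the norm reduces to the familiar modulus squared:

```python
# Hypercomplex product in one dimension (Euclidean metric, e1.e1 = 1).
# A Grassmann number a + b e1 is stored as the pair (a, b); the signs
# sigma(m,0,0) = sigma(0,0,m) = 1 and sigma(1,1,1) = -1 give exactly
# complex multiplication (a + b i)(c + d i).

def hprod(x, y):
    a, b = x
    c, d = y
    # scalar part: a*c from the exterior (order-0) term, minus b*d from
    # the interior (order-1) term carrying sigma(1,1,1) = -1
    return (a*c - b*d, a*d + b*c)

def conj(x):
    a, b = x
    return (a, -b)                 # body minus soul

def norm(x):
    return hprod(x, conj(x))[0]    # X o Xc is purely scalar here

print(norm((3, 4)))   # 25 = 3^2 + 4^2
```

This is the one-dimensional case of the general norm construction developed in the remainder of this section.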

Hypercomplex square constraints

We call these constraints the HypercomplexSquareConstraints (alias ¯Hsq) and encode them as the rules.

HypercomplexSquareConstraints := {σ_{m_,m_,m_} /; m ≠ 0 :> −1}    [11.14]

These rules are the second in a collection which we will eventually use to define the hypercomplex algebras.

We remark that, although the constraints 11.14 have been determined from a consideration of the hypercomplex product of a simple element with itself, they must also now apply to the interior product term of any hypercomplex product of two elements of the same grade. From formula 11.1, the hypercomplex product of two elements of the same grade can be written

α_m ⊙ β_m = Σ_{λ=0}^{m} σ_{m,λ,m} (α_m ∘_λ β_m) = σ_{m,0,m} (α_m ∘_0 β_m) + σ_{m,1,m} (α_m ∘_1 β_m) + … + σ_{m,m,m} (α_m ∘_m β_m)

The last term involves σ_{m,m,m}, and we must take it to be −1 here also.

This expansion also reminds us that while α_m ⊙ α_m is necessarily scalar, α_m ⊙ β_m is not.
m m m m

The antisymmetry of products of elements of different grade


We consider a general Grassmann number X = a + A, where a is its body (scalar component) and A is its soul (non-scalar component). The norm 𝒩[X] of X is then:

𝒩[X] = X ⊙ Xc = (a + A) ⊙ (a − A) = a² − A ⊙ A

To investigate the norm further we look at the product A ⊙ A. Suppose A is the sum of two (not necessarily simple) elements of different grade: an m-element α_m and a k-element β_k. Then A ⊙ A becomes:

A ⊙ A = (α_m + β_k) ⊙ (α_m + β_k) = α_m ⊙ α_m + α_m ⊙ β_k + β_k ⊙ α_m + β_k ⊙ β_k

We would like the norm of a general Grassmann number to be a scalar quantity. But none of the generalized product components of α_m ⊙ β_k or β_k ⊙ α_m can be scalar if m and k are different, so we must choose to make β_k ⊙ α_m equal to −α_m ⊙ β_k as a fundamental defining axiom of the hypercomplex product, thus eliminating both products from the expression for the norm. This requirement of antisymmetry can always be satisfied because, since the grades are different, the two products are always distinguishable.

α_m ⊙ β_k = −β_k ⊙ α_m    [11.15]

Hypercomplex antisymmetry constraints

It has been shown in the previous section that the hypercomplex product of two elements α_m and β_k of different grade must be antisymmetric in order to enable the norm of a general sum of simple elements to be scalar. We now explore what constraints are required on the hypercomplex signs in order to effect this antisymmetry.

We begin with the definition of the hypercomplex product.

α_m ⊙ β_k = Σ_{λ=0}^{Min[m,k]} σ_{m,λ,k} (α_m ∘_λ β_k)

Now write the definition for the factors in the reverse order, and reverse the order of the generalized Grassmann products on the right-hand side back to their initial order. (The formula for doing this is in the previous chapter.)

β_k ⊙ α_m = Σ_{λ=0}^{Min[m,k]} σ_{k,λ,m} (β_k ∘_λ α_m) = Σ_{λ=0}^{Min[m,k]} σ_{k,λ,m} (−1)^((m−λ)(k−λ)) (α_m ∘_λ β_k)

Hence if we are to require that α_m ⊙ β_k = −β_k ⊙ α_m then we must have

σ_{m,λ,k} = −(−1)^((m−λ)(k−λ)) σ_{k,λ,m}

This in turn implies that

if (m−λ)(k−λ) is odd, then σ_{m,λ,k} = σ_{k,λ,m}
if (m−λ)(k−λ) is even, then σ_{m,λ,k} = −σ_{k,λ,m}

Below is a table of parities of (m−λ)(k−λ) as a function of the parities of m, k, and λ.

m     k     λ     m−λ   k−λ   (m−λ)(k−λ)
even  even  even  even  even  even
even  even  odd   odd   odd   odd
even  odd   even  even  odd   even
even  odd   odd   odd   even  even
odd   even  even  odd   even  even
odd   even  odd   even  odd   even
odd   odd   even  odd   odd   odd
odd   odd   odd   even  even  even

This table can be summarized by the rule: if m and k are of the same parity, but λ is of the opposite parity, then the parity of (m−λ)(k−λ) is odd; otherwise it is even.

We encode these rules as the HypercomplexAntisymmetryConstraints (alias ¯Has).

HypercomplexAntisymmetryConstraints :=
  {σ_{m_?EvenQ, λ_?OddQ, k_?EvenQ} /; m < k :> σ_{k,λ,m},
   σ_{m_?OddQ, λ_?EvenQ, k_?OddQ} /; m < k :> σ_{k,λ,m},
   σ_{m_, λ_, k_} /; m < k :> −σ_{k,λ,m}}    [11.16]

Note that this constraint does not constrain the values of the remaining σ_{m,λ,k} for m > k.
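The parity rule above can be confirmed by brute force. The following small Python check (illustrative only; the helper name `rule` is ours) compares the parity of (m−λ)(k−λ) with the stated rule for all small grades:

```python
# Verify: (m - l)(k - l) is odd exactly when m and k have the same
# parity and l has the opposite parity.

def rule(m, k, l):
    return m % 2 == k % 2 and l % 2 != m % 2

ok = all((((m - l) * (k - l)) % 2 == 1) == rule(m, k, l)
         for m in range(10) for k in range(10) for l in range(10))
print(ok)   # True
```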

The norm of a Grassmann number in terms of hypercomplex products


More generally, suppose A were the sum of several elements of different grade.

A = α_m + β_k + γ_p + …

The antisymmetry axiom then allows us to write A ⊙ A as the sum of hypercomplex squares of the components of different grade of A.

A ⊙ A = (α_m + β_k + γ_p + …) ⊙ (α_m + β_k + γ_p + …) = α_m ⊙ α_m + β_k ⊙ β_k + γ_p ⊙ γ_p + …

And hence the norm is expressed simply as

𝒩[X] = a² − α_m ⊙ α_m − β_k ⊙ β_k − γ_p ⊙ γ_p − …

𝒩[a + α_m + β_k + γ_p + …] = a² − α_m ⊙ α_m − β_k ⊙ β_k − γ_p ⊙ γ_p − …    [11.17]

The norm of a Grassmann number of simple components


Consider a Grassmann number A in which the elements α_m, β_k, … of each grade are simple.

A = α_m + β_k + γ_p + …

Then by 11.12 and 11.15 we have that the norm of A is positive and may be written as the sum of inner products of the individual simple elements.

𝒩[a + α_m + β_k + γ_p + …] = a² + α_m ⋅ α_m + β_k ⋅ β_k + γ_p ⋅ γ_p + …    [11.18]

The norm of a non-simple element


We now turn our attention to the case in which one of the component elements of X may not be simple. This is the general case for Grassmann numbers in spaces of dimension greater than 3, and indeed is the reason why the suite of hypercomplex numbers with positive scalar norm ends with the octonions. An element of any grade in a space of dimension 3 or less must be simple. So in spaces of dimension 0, 1, 2, and 3 the norm of an element will always be a scalar.

We now focus our attention on the product α_m ⊙ α_m (where α_m may or may not be simple). The following analysis will be critical to our understanding of the properties of hypercomplex numbers in spaces of dimension greater than 3.

Suppose α_m is the sum of two simple elements α1_m and α2_m. The product α_m ⊙ α_m then becomes:

α_m ⊙ α_m = (α1_m + α2_m) ⊙ (α1_m + α2_m) = α1_m ⊙ α1_m + α1_m ⊙ α2_m + α2_m ⊙ α1_m + α2_m ⊙ α2_m

Since α1_m and α2_m are simple, α1_m ⊙ α1_m and α2_m ⊙ α2_m are scalar, and can be written:

α1_m ⊙ α1_m + α2_m ⊙ α2_m = −α1_m ⋅ α1_m − α2_m ⋅ α2_m

The expression of the remaining terms α1_m ⊙ α2_m and α2_m ⊙ α1_m in terms of exterior and interior products is more interesting and complex. We discuss this in Section 11.4.


Summary of the hypercomplex constraints


Because of their importance in what follows, we collect here a summary of the constraints we will need to apply to the hypercomplex signs in order to get the hypercomplex product to operate as expected with scalars, and generate a positive scalar norm.

Scalar constraints

HypercomplexScalarConstraints
{σ_{m_,0,0} :> 1, σ_{0,0,k_} :> 1}

Square constraints

HypercomplexSquareConstraints
{σ_{m_,m_,m_} /; m ≠ 0 :> −1}

Antisymmetry constraints

HypercomplexAntisymmetryConstraints
{σ_{m_?EvenQ, λ_?OddQ, k_?EvenQ} /; m < k :> σ_{k,λ,m},
 σ_{m_?OddQ, λ_?EvenQ, k_?OddQ} /; m < k :> σ_{k,λ,m},
 σ_{m_, λ_, k_} /; m < k :> −σ_{k,λ,m}}

Note that whereas the scalar and square constraints gave specific values, these antisymmetry constraints still leave half of each pair unspecified.

Hypercomplex constraints

We collect these as our hypercomplex constraints, which we designate ¯H.

¯H := Flatten[{HypercomplexScalarConstraints,
    HypercomplexSquareConstraints,
    HypercomplexAntisymmetryConstraints}]

Residual constraints

Although these constraints cover a significant range of values of the hypercomplex signs, there are two residual sets of signs that must be specified by other considerations.

σ_{m,λ,k}    m > k
σ_{m,λ,m}    m > λ

We shall explore these further in the sections below.


11.4 Products of Two Different Elements of the Same Grade

The symmetrized product of two m-elements


We now investigate the remaining types of hypercomplex products which can show up as terms in the computation of a norm: products of two different elements of the same grade. These types of term introduce the most complexity into the nature of the hypercomplex product. It is their potential to introduce non-scalar terms into the (usually defined) norms of hypercomplex numbers of order higher than the octonions that makes them of most interest.

Returning to our definition of the hypercomplex product 11.1, and specializing it to the case where both elements are of the same grade, we get:

α1_m ⊙ α2_m = Σ_{λ=0}^{m} σ_{m,λ,m} (α1_m ∘_λ α2_m)

Reversing the order of the factors in the hypercomplex product gives:

α2_m ⊙ α1_m = Σ_{λ=0}^{m} σ_{m,λ,m} (α2_m ∘_λ α1_m)

The generalized products on the right may now be reversed, together with the change of sign (−1)^((m−λ)(m−λ)) = (−1)^(m−λ).

α2_m ⊙ α1_m = Σ_{λ=0}^{m} (−1)^(m−λ) σ_{m,λ,m} (α1_m ∘_λ α2_m)

The sum α1_m ⊙ α2_m + α2_m ⊙ α1_m may now be written as:

α1_m ⊙ α2_m + α2_m ⊙ α1_m = Σ_{λ=0}^{m} (1 + (−1)^(m−λ)) σ_{m,λ,m} (α1_m ∘_λ α2_m)    [11.19]

Because we will need to refer to this sum of products subsequently, we will call it the symmetrized product of two m-elements (because the expression does not change if the order of the elements is reversed).


Symmetrized products for elements of different grades


For two (in general, different) simple elements of the same grade α1_m and α2_m we have shown above that:

α1_m ⊙ α2_m + α2_m ⊙ α1_m = Σ_{λ=0}^{m} (1 + (−1)^(m−λ)) σ_{m,λ,m} (α1_m ∘_λ α2_m)

The term 1 + (−1)^(m−λ) will be 2 for m and λ both even or both odd, and zero otherwise. To get a clearer picture of what this formula entails, we write it out explicitly for the lower values of m. Note that we use the fact already established that σ_{m,m,m} = −1.

m = 0

a ⊙ b + b ⊙ a = 2 a b    [11.20]

m = 1

α1 ⊙ α2 + α2 ⊙ α1 = −2 (α1 ⋅ α2)    [11.21]

m = 2

α1_2 ⊙ α2_2 + α2_2 ⊙ α1_2 = 2 (σ_{2,0,2} (α1_2 ∧ α2_2) − α1_2 ⋅ α2_2)    [11.22]

m = 3

α1_3 ⊙ α2_3 + α2_3 ⊙ α1_3 = 2 (σ_{3,1,3} (α1_3 ∘_1 α2_3) − α1_3 ⋅ α2_3)    [11.23]

m = 4

α1_4 ⊙ α2_4 + α2_4 ⊙ α1_4 = 2 (σ_{4,0,4} (α1_4 ∧ α2_4) + σ_{4,2,4} (α1_4 ∘_2 α2_4) − α1_4 ⋅ α2_4)    [11.24]
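The pattern in the cases above can be checked mechanically: the coefficient 1 + (−1)^(m−λ) retains exactly the orders λ with the same parity as m. A small Python sketch (illustrative only):

```python
# Orders lambda that survive in the symmetrized product of two m-elements:
# the coefficient 1 + (-1)^(m - l) is 2 when m and l have the same parity
# and 0 otherwise.

def surviving_orders(m):
    return [l for l in range(m + 1) if 1 + (-1) ** (m - l) != 0]

for m in range(5):
    print(m, surviving_orders(m))
# 0 [0]
# 1 [1]
# 2 [0, 2]
# 3 [1, 3]
# 4 [0, 2, 4]
```

For m = 2 this reproduces the exterior (λ = 0) and interior (λ = 2) terms of 11.22; for m = 4, the three terms of 11.24.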


The body of a hypercomplex square


We now break the symmetrized product up into its body (scalar component) and soul (non-scalar components).

It is apparent from formula 11.19 and the examples given above that the body of the symmetrized product α1_m ⊙ α2_m + α2_m ⊙ α1_m is the inner product term in which λ becomes equal to m, that is: 2 σ_{m,m,m} (α1_m ⋅ α2_m).

And since the hypercomplex square constraint requires that σ_{m,m,m} = −1, we can thus write:

Body[α1_m ⊙ α2_m + α2_m ⊙ α1_m] = −2 α1_m ⋅ α2_m    [11.25]

Suppose now that we have an m-element written as the sum of two m-elements.

α_m = α1_m + α2_m

Calculating the body of α_m ⊙ α_m gives:

Body[α_m ⊙ α_m] = Body[α1_m ⊙ α1_m] + Body[α1_m ⊙ α2_m + α2_m ⊙ α1_m] + Body[α2_m ⊙ α2_m]

Body[α_m ⊙ α_m] = −α1_m ⋅ α1_m − 2 α1_m ⋅ α2_m − α2_m ⋅ α2_m = −α_m ⋅ α_m

Hence, even if a component element like α_m is not simple, the body of its hypercomplex square is given by the same expression as if it were simple, that is, the negative of its inner product square.

Body[α_m ⊙ α_m] = −α_m ⋅ α_m    [11.26]

It is straightforward to see that this remains valid when α_m is a sum of any number of m-elements.

Thus we can say that the body of the hypercomplex square of an arbitrary m-element is the negative of its inner product square.

The soul of a hypercomplex square


The soul of α_m ⊙ α_m is then the non-scalar residue given by the formula:

Soul[α_m ⊙ α_m] = Σ_{λ=0}^{m−2} (1 + (−1)^(m−λ)) σ_{m,λ,m} (α1_m ∘_λ α2_m)    [11.27]

(Here we have used the fact that, because of the coefficient (1 + (−1)^(m−λ)), the term with λ equal to m−1 is always zero, so we only need to sum to m−2.)

For convenience in generating examples of this expression, we temporarily define the soul of α_m ⊙ α_m as a function of m, and denote it h[m]. To make it easier to read, we convert the generalized products where possible to exterior and interior products.

h[m_] := Σ_{λ=0}^{m−2} (1 + (−1)^(m−λ)) σ_{m,λ,m} (α1_m ∘_λ α2_m) // SimplifyGeneralizedProducts

With this function we can make a palette of souls for various values of m. We do this below for m from 1 to 6.

ComposePalette[{"Soul of a Hypercomplex Square"}, Table[{m, h[m]}, {m, 1, 6}]]

Soul of a Hypercomplex Square
1    0
2    2 σ_{2,0,2} (α1_2 ∧ α2_2)
3    2 σ_{3,1,3} (α1_3 ∘_1 α2_3)
4    2 σ_{4,0,4} (α1_4 ∧ α2_4) + 2 σ_{4,2,4} (α1_4 ∘_2 α2_4)
5    2 σ_{5,1,5} (α1_5 ∘_1 α2_5) + 2 σ_{5,3,5} (α1_5 ∘_3 α2_5)
6    2 σ_{6,0,6} (α1_6 ∧ α2_6) + 2 σ_{6,2,6} (α1_6 ∘_2 α2_6) + 2 σ_{6,4,6} (α1_6 ∘_4 α2_6)

Of particular note is the soul of the symmetrized product of two different simple 2-elements.

2 s[2,0,2] a1_2 Ô a2_2

This 4-element is the simplest critical potentially non-scalar residue in a norm calculation. It is
always zero in spaces of dimension 2 or 3. Hence the norm of a Grassmann number in these
spaces under the hypercomplex product is guaranteed to be scalar.


In a space of 4 dimensions, on the other hand, this element may not be zero. Hence the space of
dimension 3 is the highest-dimensional space in which the norm of a Grassmann number is
guaranteed to be scalar.

Summary of results of this section


Consider a general Grassmann number X.

X == a + a_m + b_k + g_p + …

Here, we suppose a to be scalar, and the rest of the terms to be nonscalar simple or non-simple
elements.

The generalized hypercomplex norm C[X] of X may be written as:

C[X] == a^2 - a_m Î a_m - b_k Î b_k - g_p Î g_p - …

The hypercomplex square a_m Î a_m of an element has, in general, both a body and a soul.

The body of a_m Î a_m is the negative of its inner product square.

Body[a_m Î a_m] == -a_m û a_m

The soul of a_m Î a_m depends on the terms of which it is composed. If a_m == a1_m + a2_m + a3_m + … then

Soul[a_m Î a_m] == Σ(l=0 to m-2) (1 + (-1)^(m-l)) s[m,l,m] (a1_m Û_l a2_m + a1_m Û_l a3_m + a2_m Û_l a3_m + …)

If the component elements of different grade of X are simple, then its soul is zero, and its norm
becomes the scalar:

C[X] == a^2 + a_m û a_m + b_k û b_k + g_p û g_p + …

It is only in 3-space that we are guaranteed that all the components of a Grassmann number are
simple. Therefore it is only in 3-space that we are guaranteed that the norm of a Grassmann
number under the hypercomplex product is scalar.


11.5 The Complex Numbers ℂ

Constraints on the hypercomplex signs


Complex numbers may be viewed as hypercomplex numbers in a space of one dimension.

In one dimension all 1-elements are of the form ae, where a is a scalar and e is the basis element.

Let a be a 1-element. From the definition of the hypercomplex product we see that the
hypercomplex product of a 1-element with itself is the (possibly signed) scalar product.

a Î a == Σ(l=0 to Min[1,1]) s[1,l,1] a Û_l a == s[1,0,1] a Ô a + s[1,1,1] a û a == s[1,1,1] a û a

In the usual notation, the product of two complex numbers would be written:

(a + ⅈ b) (c + ⅈ d) == (a c - b d) + ⅈ (b c + a d)
In the general hypercomplex notation we have:

(a + b e) Î (c + d e) == a Î c + a Î (d e) + (b e) Î c + (b e) Î (d e)


Simplifying this using the relations 11.6 and 11.7 above gives:

(a c + b d e Î e) + (b c + a d) e == (a c + b d s[1,1,1] e û e) + (b c + a d) e

Isomorphism with the complex numbers is then obtained by constraining s[1,1,1] e û e to be -1.

One immediate interpretation that we can explore to satisfy this is that e is a unit 1-element (with e û e == 1), and s[1,1,1] == -1.

This then is the constraint we will impose to allow the incorporation of complex numbers in the
hypercomplex structure.

s[1,1,1] == -1    11.28

Complex numbers as Grassmann numbers under the hypercomplex product

The imaginary unit ⅈ is then equivalent to a unit basis element (e, say) of the 1-space under the hypercomplex product.


ⅈ == e    e û e == 1    11.29

ⅈ Î ⅈ == -ⅈ û ⅈ == -1    11.30

In this interpretation, instead of ⅈ being a new entity with the special property ⅈ^2 == -1, the focus is shifted to interpret it as a unit 1-element acting under a new product operation Î. Complex numbers are then interpreted as Grassmann numbers in a space of one dimension under the hypercomplex product operation.

For example, the norm of a complex number a + b e == a + b ⅈ can be calculated as:

C[a + b e] == (a + b e) Î (a - b e) == a^2 - (b e) Î (b e) == a^2 + (b e) û (b e) == a^2 + b^2

C[a + b e] == a^2 + (b e) û (b e) == a^2 + b^2    11.31
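This interpretation is easy to check numerically. The sketch below (plain Python, independent of the GrassmannAlgebra package; the helper names hprod, conjugate and norm are introduced only for this illustration) represents a + b e as a coefficient pair and implements the hypercomplex product under the constraint s[1,1,1] e û e == -1; the product then coincides with complex multiplication, and the norm is the scalar a^2 + b^2.

```python
# A sketch of the 1-space hypercomplex product, with s[1,1,1] e.e == -1.
# An element a + b e is represented as the pair (a, b).
def hprod(x, y):
    a, b = x
    c, d = y
    # (a + b e) o (c + d e) == (a c - b d) + (b c + a d) e
    return (a * c - b * d, b * c + a * d)

def conjugate(x):
    a, b = x
    return (a, -b)

def norm(x):
    # C[x] == x o conjugate(x); in one dimension the result is scalar
    return hprod(x, conjugate(x))[0]

z = (3, 4)              # 3 + 4 e, i.e. 3 + 4 i
print(hprod(z, z))      # (-7, 24), matching (3 + 4 i)^2
print(norm(z))          # 25, i.e. 3^2 + 4^2
```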

11.6 The Hypercomplex Product in a 2-Space

Ÿ Tabulating products in 2-space


From our previous interpretation of complex numbers as Grassmann numbers in a space of one
dimension under the hypercomplex product operation, we shall require any one dimensional
subspace of the 2-space under consideration to have the same properties.

The product table for the types of hypercomplex products that can be encountered in a 2-space can
be entered by using the GrassmannAlgebra function HypercomplexProductTable.

T = HypercomplexProductTable[{1, a, b, a Ô b}]
{{1Î1, 1Îa, 1Îb, 1Î(a Ô b)},
 {aÎ1, aÎa, aÎb, aÎ(a Ô b)}, {bÎ1, bÎa, bÎb, bÎ(a Ô b)},
 {(a Ô b)Î1, (a Ô b)Îa, (a Ô b)Îb, (a Ô b)Î(a Ô b)}}

To make this easier to read we format this with the GrassmannAlgebra function TableTemplate.

TableTemplate[T]

1Î1          1Îa          1Îb          1Î(a Ô b)
aÎ1          aÎa          aÎb          aÎ(a Ô b)
bÎ1          bÎa          bÎb          bÎ(a Ô b)
(a Ô b)Î1    (a Ô b)Îa    (a Ô b)Îb    (a Ô b)Î(a Ô b)

Products where one of the factors is a scalar have already been discussed. Products of a 1-element
with itself have been discussed in the section on complex numbers. Applying these results to the
table above enables us to simplify it to:


1        a            b            a Ô b
a        -(a û a)     aÎb          aÎ(a Ô b)
b        bÎa          -(b û b)     bÎ(a Ô b)
a Ô b    (a Ô b)Îa    (a Ô b)Îb    (a Ô b)Î(a Ô b)

In this table there are four essentially different new products which we have not yet discussed.

aÎb    aÎ(a Ô b)    (a Ô b)Îa    (a Ô b)Î(a Ô b)


In the next three subsections we will take each of these products and, with a view to developing the
quaternion algebra, show how they may be expressed in terms of exterior and interior products.

The hypercomplex product of two 1-elements


From the definition, and the constraint s[1,1,1] == -1 derived above from the discussion on complex numbers, we can write the hypercomplex product of two (possibly) distinct 1-elements as

aÎb == s[1,0,1] a Ô b + s[1,1,1] (a û b) == s[1,0,1] a Ô b - (a û b)
The hypercomplex product bÎa can be obtained by reversing the sign of the exterior product, since
the scalar product is symmetric.

aÎb == s[1,0,1] a Ô b - (a û b)
bÎa == -s[1,0,1] a Ô b - (a û b)
11.32

The hypercomplex product of a 1-element and a 2-element


From the definition of the hypercomplex product 11.1 we can obtain expressions for the
hypercomplex products of a 1-element and a 2-element in a 2-space. Since the space is only of two
dimensions, the 2-element may be represented without loss of generality as a product which
incorporates the 1-element as one of its factors.

aÎ(a Ô b) == Σ(l=0 to Min[1,2]) s[1,l,2] a Û_l (a Ô b) == s[1,0,2] a Ô (a Ô b) + s[1,1,2] (a Ô b) û a

(a Ô b)Îa == Σ(l=0 to Min[2,1]) s[2,l,1] (a Ô b) Û_l a == s[2,0,1] (a Ô b) Ô a + s[2,1,1] (a Ô b) û a


Since the first term involving only exterior products is zero, we obtain:

aÎ(a Ô b) == s[1,1,2] (a Ô b) û a
(a Ô b)Îa == s[2,1,1] (a Ô b) û a
11.33

The hypercomplex square of a 2-element


All 2-elements in a space of 2 dimensions differ by only a scalar factor. We need only therefore
consider the hypercomplex product of a 2-element with itself.

The generalized product of two identical elements is zero in all cases except for that which reduces
to the inner product. From this fact and the definition of the hypercomplex product we obtain

(a Ô b)Î(a Ô b) == Σ(l=0 to Min[2,2]) s[2,l,2] (a Ô b) Û_l (a Ô b) == s[2,2,2] (a Ô b) û (a Ô b)

(a Ô b)Î(a Ô b) == s[2,2,2] (a Ô b) û (a Ô b)    11.34

The product table in terms of exterior and interior products


Collecting together the results above, we can rewrite the hypercomplex product table solely in
terms of exterior and interior products and some (as yet) undetermined signs.

1        a                            b                            a Ô b
a        -(a û a)                     s[1,0,1] a Ô b - (a û b)     s[1,1,2] (a Ô b) û a
b        -s[1,0,1] a Ô b - (a û b)    -(b û b)                     s[1,1,2] (a Ô b) û b
a Ô b    s[2,1,1] (a Ô b) û a         s[2,1,1] (a Ô b) û b         s[2,2,2] (a Ô b) û (a Ô b)
11.35


11.7 The Quaternions ℍ

The product table for orthonormal elements


Suppose now that a and b are orthonormal. Then:

a û a == b û b == 1    a û b == 0

(a Ô b) û a == (a û a) b    (a Ô b) û (a Ô b) == (a û a) (b û b)
The hypercomplex product table then becomes:

1        a                  b                  a Ô b
a        -1                 s[1,0,1] a Ô b     s[1,1,2] b
b        -s[1,0,1] a Ô b    -1                 -s[1,1,2] a
a Ô b    s[2,1,1] b         -s[2,1,1] a        s[2,2,2]

Generating the quaternions


Exploring a possible isomorphism with the quaternions leads us to the correspondence:

a → i    b → j    a Ô b → k    11.36

In terms of the quaternion units, the product table becomes:

1    i              j              k
i    -1             s[1,0,1] k     s[1,1,2] j
j    -s[1,0,1] k    -1             -s[1,1,2] i
k    s[2,1,1] j     -s[2,1,1] i    s[2,2,2]

To obtain the product table for the quaternions we therefore require:

s[1,0,1] == 1    s[1,1,2] == -1    s[2,1,1] == 1    s[2,2,2] == -1    11.37

These values give the quaternion table as expected.


1    i     j     k
i    -1    k     -j
j    -k    -1    i
k    j     -i    -1
11.38
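Table 11.38 can also be checked mechanically. In the sketch below (plain Python; the names mul, table and units are assumptions of this illustration) each signed unit product is stored explicitly, and the defining quaternion relations, including associativity of the table, are verified.

```python
# Quaternion unit products from table 11.38, stored as (sign, unit).
table = {
    ('1','1'): (1,'1'), ('1','i'): (1,'i'), ('1','j'): (1,'j'), ('1','k'): (1,'k'),
    ('i','1'): (1,'i'), ('i','i'): (-1,'1'), ('i','j'): (1,'k'), ('i','k'): (-1,'j'),
    ('j','1'): (1,'j'), ('j','i'): (-1,'k'), ('j','j'): (-1,'1'), ('j','k'): (1,'i'),
    ('k','1'): (1,'k'), ('k','i'): (1,'j'), ('k','j'): (-1,'i'), ('k','k'): (-1,'1'),
}

def mul(x, y):
    # x and y are signed units of the form (sign, unit)
    s, u = table[(x[1], y[1])]
    return (x[0] * y[0] * s, u)

units = [(1,'1'), (1,'i'), (1,'j'), (1,'k')]
assert mul((1,'i'), (1,'j')) == (1,'k')            # i j == k
assert mul((1,'j'), (1,'i')) == (-1,'k')           # j i == -k
assert mul(mul((1,'i'), (1,'j')), (1,'k')) == (-1,'1')   # i j k == -1
# the table is associative on all unit triples
assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x in units for y in units for z in units)
print("table 11.38 verified")
```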

Substituting these values back into the original table 11.17 gives a hypercomplex product table in
terms only of exterior and interior products. This table defines the real, complex, and quaternion
product operations.

1        a                   b                   a Ô b
a        -(a û a)            a Ô b - (a û b)     -(a Ô b) û a
b        -a Ô b - (a û b)    -(b û b)            -(a Ô b) û b
a Ô b    (a Ô b) û a         (a Ô b) û b         -(a Ô b) û (a Ô b)
11.39

The norm of a quaternion


Let Q be a quaternion given in basis-free form as an element of a 2-space under the hypercomplex product.

Q == a + a + a Ô b    11.40

Here, a is a scalar, a is a 1-element and a Ô b is congruent to any 2-element of the space.

The norm of Q is denoted C[Q] and given as the hypercomplex product of Q with its conjugate Qc .
expanding using formula 11.5 gives:

C@QD ã Ha + a + a Ô bLÎHa -a -a Ô bL ã a2 - Ha + a Ô bLÎHa + a Ô bL


Expanding the last term gives:

C[Q] == a^2 - aÎa - aÎ(a Ô b) - (a Ô b)Îa - (a Ô b)Î(a Ô b)


From table 11.21 we see that:

aÎ(a Ô b) == -(a Ô b) û a    (a Ô b)Îa == (a Ô b) û a


Whence:

aÎ(a Ô b) == -(a Ô b)Îa


Using table 11.21 again then allows us to write the norm of a quaternion either in terms of the
hypercomplex product or the inner product.

C[Q] == a^2 - aÎa - (a Ô b)Î(a Ô b)    11.41

C[Q] == a^2 + a û a + (a Ô b) û (a Ô b)    11.42

In terms of i, j, and k

Q = a + b i + c j + d k

C[Q] == a^2 - (b i + c j)Î(b i + c j) - (d k)Î(d k)
     == a^2 - b^2 (iÎi) - c^2 (jÎj) - d^2 (kÎk)
     == a^2 + b^2 (i û i) + c^2 (j û j) + d^2 (k û k)
     == a^2 + b^2 + c^2 + d^2
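The same computation can be sketched numerically: representing Q = a + b i + c j + d k as a coefficient 4-tuple and multiplying with the usual quaternion structure constants (consistent with table 11.38), the product of Q with its conjugate is the scalar a^2 + b^2 + c^2 + d^2. The helper names qmul and qconj are assumptions of this sketch.

```python
# Quaternion product on coefficient 4-tuples (a, b, c, d) = a + b i + c j + d k.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qconj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)

Q = (1, 2, 3, 4)
print(qmul(Q, qconj(Q)))   # (30, 0, 0, 0): the norm 1 + 4 + 9 + 16 is scalar
```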

The Cayley-Dickson algebra


If we set a and b to be orthogonal in the general hypercomplex product table 11.21, we retrieve the
multiplication table for the Cayley-Dickson algebra with 4 generators and 2 parameters.

1        a             b             a Ô b
a        -(a û a)      a Ô b         -(a û a) b
b        -a Ô b        -(b û b)      (b û b) a
a Ô b    (a û a) b     -(b û b) a    -(a û a) (b û b)
11.43

In the notation we have used, the generators are 1, a, b, and aÔb. The parameters are aûa and bûb.
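As a sketch, table 11.43 can be encoded on coefficient 4-tuples over the generators (1, a, b, aÔb), with the two parameters p = aûa and q = bûb left free; choosing p == q == 1 then reproduces the quaternion relations above. The function name cd_mul and the tuple encoding are assumptions of this illustration.

```python
# Product from table 11.43 on coefficients over the generators (1, a, b, a^b),
# with orthogonal a, b and free parameters p = a.a, q = b.b (a sketch).
def cd_mul(u, v, p, q):
    w1, x1, y1, z1 = u   # w + x a + y b + z (a^b)
    w2, x2, y2, z2 = v
    return (w1*w2 - p*x1*x2 - q*y1*y2 - p*q*z1*z2,
            w1*x2 + x1*w2 + q*(y1*z2 - z1*y2),
            w1*y2 + y1*w2 + p*(z1*x2 - x1*z2),
            w1*z2 + z1*w2 + (x1*y2 - y1*x2))

# With p == q == 1 the generators satisfy the quaternion relations:
a  = (0, 1, 0, 0)
b  = (0, 0, 1, 0)
ab = (0, 0, 0, 1)
assert cd_mul(a, a, 1, 1) == (-1, 0, 0, 0)     # a∘a == -(a.a)
assert cd_mul(a, b, 1, 1) == ab                 # a∘b == a^b
assert cd_mul(b, a, 1, 1) == (0, 0, 0, -1)      # b∘a == -(a^b)
assert cd_mul(ab, ab, 1, 1) == (-1, 0, 0, 0)    # (a^b)∘(a^b) == -(a.a)(b.b)
print("table 11.43 with p == q == 1 gives the quaternions")
```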


12 Exploring Clifford Algebra



12.1 Introduction
In this chapter we define a new type of algebra: the Clifford algebra. Clifford algebra was
originally proposed by William Kingdon Clifford (Clifford ) based on Grassmann's ideas.
Clifford algebras have now found an important place as mathematical systems for describing some
of the more fundamental theories of mathematical physics. Clifford algebra is based on a new kind
of product called the Clifford product.

In this chapter we will show how the Clifford product can be defined straightforwardly in terms of
the exterior and interior products that we have already developed, without introducing any new
axioms. This approach has the dual advantage of ensuring consistency and enabling all the results
we have developed previously to be applied to Clifford numbers.

In the previous chapter we have gone to some lengths to define and explore the generalized
Grassmann product. In this chapter we will use the generalized Grassmann product to define the
Clifford product in its most general form as a simple sum of generalized products. Computations in
general Clifford algebras can be complicated, and this approach at least permits us to render the
complexities in a straightforward manner.

From the general case, we then show how Clifford products of intersecting and orthogonal
elements simplify. This is the normal case treated in introductory discussions of Clifford algebra.

Clifford algebras occur throughout mathematical physics, the most well known being the real
numbers, complex numbers, quaternions, and complex quaternions. In this book we show how
Clifford algebras can be firmly based on the Grassmann algebra as a sum of generalized
Grassmann products.

Historical Note

The seminal work on Clifford algebra is by Clifford in his paper Applications of Grassmann's
Extensive Algebra that he published in the American Journal of Mathematics Pure and Applied in
1878 (Clifford ). Clifford became a great admirer of Grassmann and one of those rare
contemporaries who appears to have understood his work. The first paragraph of this paper
contains the following passage.

Until recently I was unacquainted with the Ausdehnungslehre, and knew only so much of it as is
contained in the author's geometrical papers in Crelle's Journal and in Hankel's Lectures on Complex
Numbers. I may, perhaps, therefore be permitted to express my profound admiration of that
extraordinary work, and my conviction that its principles will exercise a vast influence on the future
of mathematical science.


12.2 The Clifford Product

Definition of the Clifford product


The Clifford product of elements a_m and b_k is denoted by a_m ù b_k and defined to be

a_m ù b_k == Σ(l=0 to Min[m,k]) (-1)^(l (m-l) + l (l-1)/2) a_m Û_l b_k    12.1

Here, a_m Û_l b_k is the generalized Grassmann product of order l.

The most surprising property of the Clifford product is its associativity, even though it is defined in terms of products which are non-associative. The associativity of the Clifford product is not directly evident from the definition.
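The associativity can at least be confirmed numerically. The following self-contained sketch (plain Python, not the GrassmannAlgebra package) implements the Clifford product for an orthonormal basis with ei û ei == 1, encoding basis blades as bitmasks, and checks (X ù Y) ù Z == X ù (Y ù Z) on random multivectors in a 3-space.

```python
import random

# Basis blades of an orthonormal n-space are encoded as bitmasks;
# a multivector is a dict {bitmask: coefficient}.
def blade_mul(a, b, n):
    # sign from moving each generator of b past the higher generators of a
    sign = 1
    for i in range(n):
        if b & (1 << i):
            if bin(a >> (i + 1)).count('1') % 2:
                sign = -sign
    return sign, a ^ b   # e_i e_i == +1 contracts repeated generators

def mv_mul(x, y, n):
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s, blade = blade_mul(a, b, n)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def random_mv(n):
    return {blade: random.randint(-3, 3) for blade in range(1 << n)}

n = 3
random.seed(0)
X, Y, Z = (random_mv(n) for _ in range(3))
assert mv_mul(mv_mul(X, Y, n), Z, n) == mv_mul(X, mv_mul(Y, Z, n), n)
print("Clifford product is associative on random multivectors in a 3-space")
```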

Ÿ Tabulating Clifford products


In order to see what this formula gives we can tabulate the Clifford products in terms of
generalized products. Here we maintain the grade of the first factor general, and vary the grade of
the second.

Table[{a_m ù b_k, Sum[CreateSignedGeneralizedProducts[l][a_m, b_k], {l, 0, k}]}, {k, 0, 4}] // SimplifySigns // TableForm

a_m ù b_0    a_m Û_0 b_0
a_m ù b_1    a_m Û_0 b_1 - (-1)^m (a_m Û_1 b_1)
a_m ù b_2    a_m Û_0 b_2 - (-1)^m a_m Û_1 b_2 - a_m Û_2 b_2
a_m ù b_3    a_m Û_0 b_3 - (-1)^m a_m Û_1 b_3 - a_m Û_2 b_3 + (-1)^m a_m Û_3 b_3
a_m ù b_4    a_m Û_0 b_4 - (-1)^m a_m Û_1 b_4 - a_m Û_2 b_4 + (-1)^m a_m Û_3 b_4 + a_m Û_4 b_4


Ÿ The grade of a Clifford product


It can be seen that, from its definition 12.1 in terms of generalized products, the Clifford product of a_m and b_k is a sum of elements with grades ranging from m + k to |m - k| in steps of 2. All the elements of a given Clifford product are either of even grade (if m + k is even), or of odd grade (if m + k is odd). To calculate the full range of grades in a space of unspecified dimension we can use the GrassmannAlgebra function RawGrade. For example:

RawGrade[a_3 ù b_2]
{1, 3, 5}

Given a space of a certain dimension, some of these elements may necessarily be zero (and thus
result in a grade of Grade0), because their grade is larger than the dimension of the space. For
example, in a 3-space:

&3 ; Grade[a_3 ù b_2]
{1, 3, Grade0}

In the general discussion of the Clifford product that follows, we will assume that the dimension of
the space is high enough to avoid any terms of the product becoming zero because their grade
exceeds the dimension of the space. In later more specific examples, however, the dimension of the
space becomes an important factor in determining the structure of the particular Clifford algebra
under consideration.
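The grade bookkeeping itself is elementary, and can be sketched directly: the generalized product of order l contributes grade m + k - 2l, and in an n-space any grade exceeding n vanishes (reported as Grade0 above; the sketch below, with the hypothetical name clifford_grades, simply drops it).

```python
def clifford_grades(m, k, dim=None):
    # grades m + k - 2 l for l = 0 .. Min[m, k], optionally cut by the
    # dimension of the space (a sketch of RawGrade / Grade behaviour)
    grades = sorted(m + k - 2 * l for l in range(min(m, k) + 1))
    if dim is not None:
        grades = [g for g in grades if g <= dim]
    return grades

print(clifford_grades(3, 2))         # [1, 3, 5]
print(clifford_grades(3, 2, dim=3))  # [1, 3]  (the 5-element part vanishes)
```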

Ÿ Clifford products in terms of generalized products


As can be seen from the definition, the Clifford product of a simple m-element and a simple k-
element can be expressed as the sum of signed generalized products.

For example, a_3 ù b_2 may be expressed as the sum of three signed generalized products of grades 1, 3 and 5. In GrassmannAlgebra we can effect this conversion by applying the ToGeneralizedProducts function.

ToGeneralizedProducts[a_3 ù b_2]
a_3 Û_0 b_2 + a_3 Û_1 b_2 - a_3 Û_2 b_2

Multiple Clifford products can be expanded in the same way. For example:


ToGeneralizedProducts[a_3 ù b_2 ù g]
(a_3 Û_0 b_2) Û_0 g + (a_3 Û_1 b_2) Û_0 g - (a_3 Û_2 b_2) Û_0 g +
(a_3 Û_0 b_2) Û_1 g + (a_3 Û_1 b_2) Û_1 g - (a_3 Û_2 b_2) Û_1 g

ToGeneralizedProducts can also be used to expand Clifford products in any Grassmann


expression, or list of Grassmann expressions. For example we can express the Clifford product of
two general Grassmann numbers in 2-space in terms of generalized products.

&2 ; X = CreateGrassmannNumber[x] ù CreateGrassmannNumber[y]

(x0 + e1 x1 + e2 x2 + x3 e1 Ô e2) ù (y0 + e1 y1 + e2 y2 + y3 e1 Ô e2)

X1 = ToGeneralizedProducts[X]
x0 Û_0 y0 + x0 Û_0 (e1 y1) + x0 Û_0 (e2 y2) + x0 Û_0 (y3 e1 Ô e2) +
(e1 x1) Û_0 y0 + (e1 x1) Û_0 (e1 y1) + (e1 x1) Û_0 (e2 y2) + (e1 x1) Û_0 (y3 e1 Ô e2) +
(e2 x2) Û_0 y0 + (e2 x2) Û_0 (e1 y1) + (e2 x2) Û_0 (e2 y2) + (e2 x2) Û_0 (y3 e1 Ô e2) +
(x3 e1 Ô e2) Û_0 y0 + (x3 e1 Ô e2) Û_0 (e1 y1) + (x3 e1 Ô e2) Û_0 (e2 y2) +
(x3 e1 Ô e2) Û_0 (y3 e1 Ô e2) + (e1 x1) Û_1 (e1 y1) + (e1 x1) Û_1 (e2 y2) +
(e1 x1) Û_1 (y3 e1 Ô e2) + (e2 x2) Û_1 (e1 y1) + (e2 x2) Û_1 (e2 y2) +
(e2 x2) Û_1 (y3 e1 Ô e2) - (x3 e1 Ô e2) Û_1 (e1 y1) - (x3 e1 Ô e2) Û_1 (e2 y2) -
(x3 e1 Ô e2) Û_1 (y3 e1 Ô e2) - (x3 e1 Ô e2) Û_2 (y3 e1 Ô e2)

Ÿ Clifford products in terms of interior products


The primary definitions of Clifford and generalized products are in terms of exterior and interior
products. To transform a Clifford product to exterior and interior products, we can either apply the
GrassmannAlgebra function ToInteriorProducts to the results obtained above, or directly to
the expression involving the Clifford product.

ToInteriorProducts[a_3 ù b_2]
-(a1 Ô a2 Ô a3 û b1 Ô b2) + (a1 Ô a2 Ô a3 û b1) Ô b2 -
(a1 Ô a2 Ô a3 û b2) Ô b1 + a1 Ô a2 Ô a3 Ô b1 Ô b2

Note that ToInteriorProducts works with explicit elements, but also as above, automatically
creates a simple exterior product from an underscripted element.

ToInteriorProducts expands the Clifford product by using the A form of the generalized
product. A second function ToInteriorProductsB expands the Clifford product by using the
B form of the expansion to give a different but equivalent expression.

ToInteriorProductsB[a_3 ù b_2]
-(a1 Ô a2 Ô a3 û b1 Ô b2) - a1 Ô a2 Ô a3 Ô b1 û b2 +
a1 Ô a2 Ô a3 Ô b2 û b1 + a1 Ô a2 Ô a3 Ô b1 Ô b2

In this example, the difference is evidenced in the second and third terms.

Ÿ Clifford products in terms of inner products


Clifford products may also be expressed in terms of inner products, some of which may be scalar
products. From this form it is easy to read the grade of the terms.

ToInnerProducts[a_3 ù b_2]
-(a2 Ô a3 û b1 Ô b2) a1 + (a1 Ô a3 û b1 Ô b2) a2 -
(a1 Ô a2 û b1 Ô b2) a3 - (a3 û b2) a1 Ô a2 Ô b1 +
(a3 û b1) a1 Ô a2 Ô b2 + (a2 û b2) a1 Ô a3 Ô b1 - (a2 û b1) a1 Ô a3 Ô b2 -
(a1 û b2) a2 Ô a3 Ô b1 + (a1 û b1) a2 Ô a3 Ô b2 + a1 Ô a2 Ô a3 Ô b1 Ô b2

Ÿ Clifford products in terms of scalar products


Finally, Clifford products may be expressed in terms of scalar and exterior products only.

ToScalarProducts[a_3 ù b_2]
(a2 û b2) (a3 û b1) a1 - (a2 û b1) (a3 û b2) a1 -
(a1 û b2) (a3 û b1) a2 + (a1 û b1) (a3 û b2) a2 +
(a1 û b2) (a2 û b1) a3 - (a1 û b1) (a2 û b2) a3 - (a3 û b2) a1 Ô a2 Ô b1 +
(a3 û b1) a1 Ô a2 Ô b2 + (a2 û b2) a1 Ô a3 Ô b1 - (a2 û b1) a1 Ô a3 Ô b2 -
(a1 û b2) a2 Ô a3 Ô b1 + (a1 û b1) a2 Ô a3 Ô b2 + a1 Ô a2 Ô a3 Ô b1 Ô b2

12.3 The Reverse of an Exterior Product

Defining the reverse


We will find in our discussion of Clifford products that many operations and formulae are
simplified by expressing some of the exterior products in a form which reverses the order of the 1-
element factors in the products.

We denote the reverse of a simple m-element a_m by a_m†, and define it to be:


(a1 Ô a2 Ô … Ô am)† == am Ô am-1 Ô … Ô a1    12.2

We can easily work out the number of permutations to achieve this rearrangement as (1/2) m (m-1). Mathematica automatically simplifies (-1)^(m (m-1)/2) to ⅈ^(m (m-1)).

a_m† == (-1)^(m (m-1)/2) a_m == ⅈ^(m (m-1)) a_m    12.3

The operation of taking the reverse of an element is called reversion.

In Mathematica, the superscript dagger is represented by SuperDagger. Thus SuperDagger[a_m] returns a_m†.

The pattern of signs, as m increases from zero, alternates in pairs:

1, 1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1, …
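This pattern follows directly from the exponent m (m-1)/2, as a quick computation confirms:

```python
# Reversion sign (-1)^(m (m-1)/2) for m = 0 .. 11: signs alternate in pairs.
signs = [(-1) ** (m * (m - 1) // 2) for m in range(12)]
print(signs)   # [1, 1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1]
```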

Ÿ Computing the reverse


In GrassmannAlgebra you can take the reverse of the exterior products in a Grassmann expression
or list of Grassmann expressions by using the function GrassmannReverse.
GrassmannReverse expands products and factors scalars before reversing the factors. It will
operate with graded variables of either numeric or symbolic grade. For example:

GrassmannReverse[{1, x, x Ô y, 3 a_3, b_k, a_m û b_k - g_p}]
{1, x, y Ô x, 3 a3 Ô a2 Ô a1, bk Ô … Ô b2 Ô b1,
 am Ô … Ô a2 Ô a1 û bk Ô … Ô b2 Ô b1 - am Ô … Ô a2 Ô a1 û gp Ô … Ô g2 Ô g1}

On the other hand, if we wish just to put exterior products into reverse form (that is, reversing the
factors, and changing the sign of the product so that the result remains equal to the original), we
can use the GrassmannAlgebra function ReverseForm.

ReverseForm[{1, x, x Ô y, a_3, b_k, a_m û b_k - g_p}]
{1, x, -(y Ô x), -(a3 Ô a2 Ô a1), ⅈ^((-1+k) k) bk Ô … Ô b2 Ô b1,
 ⅈ^((-1+k) k + (-1+m) m) (am Ô … Ô a2 Ô a1 û bk Ô … Ô b2 Ô b1) -
 ⅈ^((-1+m) m + (-1+p) p) (am Ô … Ô a2 Ô a1 û gp Ô … Ô g2 Ô g1)}

12.4 Special Cases of Clifford Products



Ÿ The Clifford product with scalars


The Clifford product of a scalar with any element simplifies to the usual (field) product.

ToScalarProducts[a ù a_m]
a a1 Ô a2 Ô … Ô am

a ù a_m == a a_m    12.4

Hence the Clifford product of any number of scalars is just their underlying field product.

ToScalarProducts[a ù b ù c]
a b c

Ÿ The Clifford product of 1-elements


The Clifford product of two 1-elements is just the sum of their interior (here scalar) and exterior
products. Hence it is of grade {0, 2}.

ToScalarProducts[x ù y]
x û y + x Ô y

x ù y == x Ô y + x û y    12.5

The Clifford product of any number of 1-elements can be computed.

ToScalarProducts[x ù y ù z]
z (x û y) - y (x û z) + x (y û z) + x Ô y Ô z

ToScalarProducts[w ù x ù y ù z]
(w û z) (x û y) - (w û y) (x û z) + (w û x) (y û z) +
(y û z) w Ô x - (x û z) w Ô y + (x û y) w Ô z +
(w û z) x Ô y - (w û y) x Ô z + (w û x) y Ô z + w Ô x Ô y Ô z
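The scalar terms of this last expansion can be verified numerically. The sketch below (plain Python, orthonormal basis with ei û ei == 1, blades encoded as bitmasks; not the GrassmannAlgebra package) computes the fourfold Clifford product of random vectors in a 3-space and compares its scalar part with (w û z)(x û y) - (w û y)(x û z) + (w û x)(y û z).

```python
import random

# Clifford product in an orthonormal 3-space, blades as bitmasks (a sketch).
def blade_mul(a, b):
    sign = 1
    for i in range(3):
        if b & (1 << i):
            if bin(a >> (i + 1)).count('1') % 2:
                sign = -sign
    return sign, a ^ b

def mv_mul(x, y):
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s, blade = blade_mul(a, b)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return out

def vec(c1, c2, c3):
    return {1: c1, 2: c2, 4: c3}

def dot(u, v):
    return sum(u[b] * v[b] for b in (1, 2, 4))

random.seed(1)
w, x, y, z = (vec(*[random.randint(-4, 4) for _ in range(3)]) for _ in range(4))
product = mv_mul(mv_mul(mv_mul(w, x), y), z)
# scalar part of w ù x ù y ù z equals (w.z)(x.y) - (w.y)(x.z) + (w.x)(y.z)
assert product.get(0, 0) == dot(w, z) * dot(x, y) - dot(w, y) * dot(x, z) + dot(w, x) * dot(y, z)
print("scalar part of the fourfold Clifford product matches the expansion")
```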


The Clifford product of an m-element and a 1-element


The Clifford product of an arbitrary simple m-element and a 1-element results in just two terms:
their exterior and interior products.

x ù a_m == x Ô a_m + a_m û x    12.6

a_m ù x == a_m Ô x - (-1)^m a_m û x    12.7

By rewriting equation 12.7 as:

(-1)^m a_m ù x == x Ô a_m - a_m û x

we can add and subtract equations 12.6 and 12.7 to express the exterior and interior products of a 1-element and an m-element in terms of Clifford products

x Ô a_m == (1/2) (x ù a_m + (-1)^m a_m ù x)    12.8

a_m û x == (1/2) (x ù a_m - (-1)^m a_m ù x)    12.9

The Clifford product of an m-element and a 2-element


The Clifford product of an arbitrary m-element and a 2-element is given by just three terms, an
exterior product, a generalized product of order 1, and an interior product.

a_m ù b_2 == a_m Ô b_2 - (-1)^m a_m Û_1 b_2 - a_m û b_2    12.10

Ÿ The Clifford product of two 2-elements


By way of example we explore the various forms into which we can cast a Clifford product of two
2-elements. The highest level is into a sum of generalized products. From this we can expand the
terms into interior, inner or scalar products where appropriate.

We expect the grade of the Clifford product to be a composite one.


RawGrade[(x Ô y) ù (u Ô v)]
{0, 2, 4}

ToGeneralizedProducts[(x Ô y) ù (u Ô v)]
(x Ô y) Û_0 (u Ô v) - (x Ô y) Û_1 (u Ô v) - (x Ô y) Û_2 (u Ô v)

The first of these generalized products is equivalent to an exterior product of grade 4. The second
is a generalized product of grade 2. The third reduces to an interior product of grade 0. We can see
this more explicitly by converting to interior products.

ToInteriorProducts[(x Ô y) ù (u Ô v)]
-(u Ô v û x Ô y) - (x Ô y û u) Ô v + (x Ô y û v) Ô u + u Ô v Ô x Ô y

We can convert the middle terms to inner (in this case scalar) products.

ToInnerProducts[(x Ô y) ù (u Ô v)]
-(u Ô v û x Ô y) + (v û y) u Ô x -
(v û x) u Ô y - (u û y) v Ô x + (u û x) v Ô y + u Ô v Ô x Ô y

Finally we can express the Clifford number in terms only of exterior and scalar products.

ToScalarProducts[(x Ô y) ù (u Ô v)]
(u û y) (v û x) - (u û x) (v û y) + (v û y) u Ô x -
(v û x) u Ô y - (u û y) v Ô x + (u û x) v Ô y + u Ô v Ô x Ô y

The Clifford product of two identical elements


The Clifford product of two identical elements g_p is, by definition

g_p ù g_p == Σ(l=0 to p) (-1)^(l (p-l) + l (l-1)/2) g_p Û_l g_p

Since the only non-zero generalized product of the form g_p Û_l g_p is that for which l = p, that is g_p û g_p, we have immediately that:

g_p ù g_p == (-1)^(p (p-1)/2) g_p û g_p

Or, alternatively:

g_p û g_p == g_p† ù g_p == g_p ù g_p†    12.11

12.5 Alternate Forms for the Clifford Product



Alternate expansions of the Clifford product


The Clifford product has been defined in Section 12.2 as:

a_m ù b_k == Σ(l=0 to Min[m,k]) (-1)^(l (m-l) + l (l-1)/2) a_m Û_l b_k

Alternate forms for the generalized product have been discussed in Chapter 10. The generalized product, and hence the Clifford product, may be expanded by decomposition of a_m, by decomposition of b_k, or by decomposition of both a_m and b_k.

Ÿ The Clifford product expressed by decomposition of the first factor


The generalized product, expressed by decomposition of a_m, is:

a_m Û_l b_k == Σ(i=1 to (m choose l)) ai_(m-l) Ô (b_k û ai_l)

a_m == a1_l Ô a1_(m-l) == a2_l Ô a2_(m-l) == …

Substituting this into the expression for the Clifford product gives:

a_m ù b_k == Σ(l=0 to Min[m,k]) Σ(i=1 to (m choose l)) (-1)^(l (m-l) + l (l-1)/2) ai_(m-l) Ô (b_k û ai_l)

a_m == a1_l Ô a1_(m-l) == a2_l Ô a2_(m-l) == …

Our objective is to rearrange the formula for the Clifford product so that the signs are absorbed into the formula, thus making the form of the formula independent of the values of m and l. We can do this by writing (-1)^(l (l-1)/2) ai_l == ai_l† (where ai_l† is the reverse of ai_l) and interchanging the order of the decomposition of a_m into an l-element and an (m-l)-element to absorb the (-1)^(l (m-l)) factor: (-1)^(l (m-l)) a_m == a1_(m-l) Ô a1_l == a2_(m-l) Ô a2_l == ….


a_m ù b_k == Σ(l=0 to Min[m,k]) Σ(i=1 to (m choose l)) ai_(m-l) Ô (b_k û ai_l†)

a_m == a1_(m-l) Ô a1_l == a2_(m-l) Ô a2_l == …

Because of the symmetry of the expression with respect to l and (m-l), we can write μ equal to (m-l), and then l becomes (m-μ). This enables us to write the formula a little more simply by arranging for the factors of grade μ to come before those of grade (m-μ). Finally, because of the inherent arbitrariness of the symbol μ, we can change it back to l to get the formula in a more accustomed form.

a_m ù b_k == Σ(l=0 to Min[m,k]) Σ(i=1 to (m choose l)) ai_l Ô (b_k û ai_(m-l)†)    12.12

a_m == a1_l Ô a1_(m-l) == a2_l Ô a2_(m-l) == …

The right hand side of this expression is a sum of interior products. In GrassmannAlgebra we can develop the Clifford product a_m ù b_k as a sum of interior products in this particular form by using ToInteriorProductsD (note the final 'D'). Since a_m is to be decomposed, it must have an explicit numerical grade.

Ï Example 1
We can expand any explicit elements.

ToInteriorProductsD[(x Ô y) ù (u Ô v)]
x Ô y û v Ô u + x Ô (u Ô v û y) - y Ô (u Ô v û x) + u Ô v Ô x Ô y

Note that this is a different (although equivalent) result from that obtained using
ToInteriorProducts.

ToInteriorProducts[(x Ô y) ù (u Ô v)]
-(u Ô v û x Ô y) - (x Ô y û u) Ô v + (x Ô y û v) Ô u + u Ô v Ô x Ô y

Ï Example 2
We can also enter the elements as graded variables and have GrassmannAlgebra create the
requisite products.

ToInteriorProductsD[a_2 ù b_2]
b1 Ô b2 û a2 Ô a1 + a1 Ô (b1 Ô b2 û a2) - a2 Ô (b1 Ô b2 û a1) + a1 Ô a2 Ô b1 Ô b2

Ï Example 3
The formula shows that it does not depend on the grade of b_k for its form. Thus we can still obtain an expansion for general b_k. For example we can take:

A = ToInteriorProductsD[a_2 ù b_k]
b_k û a2 Ô a1 + a1 Ô (b_k û a2) - a2 Ô (b_k û a1) + a1 Ô a2 Ô b_k

Ï Example 4
Note that if k is less than the grade of the first factor, some of the interior product terms may be
zero, thus simplifying the expression.

B = ToInteriorProductsD[a_2 ù b]
-(a1 Ô a2 û b) + b Ô a1 Ô a2

If we put k equal to 1 in the expression derived for general k in Example 3 we get:

A1 = A /. k → 1
b û a2 Ô a1 + a1 Ô (b û a2) - a2 Ô (b û a1) + a1 Ô a2 Ô b

Although this does not immediately look like the expression B above, we can see that it is the same
by noting that the first term of A1 is zero, and expanding their difference to scalar products.

ToScalarProducts[B - A1]
0

Alternative expression by decomposition of the first factor


An alternative expression for the Clifford product expressed by a decomposition of a_m is obtained by reversing the order of the factors in the generalized products, and then expanding the generalized products in their B form expansion.

a_m ù b_k == Σ(l=0 to Min[m,k]) (-1)^(l (m-l) + l (l-1)/2 + (m-l) (k-l)) b_k Û_l a_m

        == Σ(l=0 to Min[m,k]) Σ(i=1 to (m choose l)) (-1)^(k (m-l) + l (l-1)/2) b_k Ô ai_(m-l) û ai_l
l=0 i=1

As before we write (-1)^(l (l-1)/2) ai_l == ai_l† and interchange the order of b_k and ai_(m-l) to absorb the sign (-1)^(k (m-l)) to get:

a_m ù b_k == Σ(l=0 to Min[m,k]) Σ(i=1 to (m choose l)) ai_(m-l) Ô b_k û ai_l†

a_m == a1_l Ô a1_(m-l) == a2_l Ô a2_(m-l) == …

In order to get this into a form for direct comparison with our previously derived result in formula 12.12, we write (-1)^(l (m-l)) a_m == a1_(m-l) Ô a1_l == a2_(m-l) Ô a2_l == …, then interchange l and (m-l) as before to get finally:

a_m ù b_k == Σ(l=0 to Min[m,k]) Σ(i=1 to (m choose l)) (-1)^(l (m-l)) ai_l Ô b_k û ai_(m-l)†    12.13

a_m == a1_l Ô a1_(m-l) == a2_l Ô a2_(m-l) == …

As before, the right hand side of this expression is a sum of interior products. In GrassmannAlgebra we can develop the Clifford product a_m ù b_k in this form by using ToInteriorProductsC. This is done in Section 12.6 below.

The Clifford product expressed by decomposition of the second factor


If we wish to expand a Clifford product in terms of the second factor b_k, we can use formulas A and B of the generalized product theorem (Section 10.5) and substitute directly to get either of:

k
K O
Min@m,kD l

aùb ã ‚ ‚ H-1Ll Hm-lL a û bj Ô bj
m k
l=0 j=1
m l k-l 12.14

b ã b1 Ô b1 ã b2 Ô b2 ã !
k l k-l l k-l


k
K O
Min@m,kD l

aùb ã ‚ ‚ H-1Ll Hm-lL a Ô bj û bj
m k
l=0 j=1
m k-l l 12.15

b ã b1 Ô b1 ã b2 Ô b2 ã !
k l k-l l k-l

The Clifford product expressed by decomposition of both factors

We can similarly decompose the Clifford product in terms of both a and b. The sign
m k
1
l Hl-1L
H-1L 2 can be absorbed by taking the reverse of either ai or bj .
l l

m k
K OK O
Min@m,kD l l

aùb ã ‚ ‚ ‚ H-1Ll Hm-lL ai û bj ai Ô bj
m k l l m-l k-l
l=0 i=1 j=1

m k
Min@m,kD
K OK O
l l
12.16

aùb ã ‚ ‚ ‚ H-1Ll Hm-lL ai û bj ai Ô bj
m k l l m-l k-l
l=0 i=1 j=1

a ã a1 Ô a1 ã a2 Ô a2 ã ! b ã b1 Ô b1 ã b2 Ô b2 ã !
m l m-l l m-l k l k-l l k-l

12.6 Writing Down a General Clifford Product

Ÿ The form of a Clifford product expansion


We take as example the Clifford product aùb and expand it in terms of the factors of its first factor
4 k
in both the D form and the C form. We see that all terms except the interior and exterior products
at the ends of the expression appear to be different. There are of course equal numbers of terms in
either form. Note that if the parentheses were not there, the expressions would be identical except
for (possibly) the signs of the terms. Although these central terms differ between the two forms, we
can show by reducing them to scalar products that their sums are the same.


Ï The D form

X = ToInteriorProductsDBa ù bF
4 k

b û a4 Ô a3 Ô a2 Ô a1 + a1 Ô b û a4 Ô a3 Ô a2 -
k k

a2 Ô b û a4 Ô a3 Ô a1 + a3 Ô b û a4 Ô a2 Ô a1 -
k k

a4 Ô b û a3 Ô a2 Ô a1 + a1 Ô a2 Ô b û a4 Ô a3 - a1 Ô a3 Ô b û a4 Ô a2 +
k k k

a1 Ô a4 Ô b û a3 Ô a2 + a2 Ô a3 Ô b û a4 Ô a1 - a2 Ô a4 Ô b û a3 Ô a1 +
k k k

a3 Ô a4 Ô b û a2 Ô a1 + a1 Ô a2 Ô a3 Ô b û a4 - a1 Ô a2 Ô a4 Ô b û a3 +
k k k

a1 Ô a3 Ô a4 Ô b û a2 - a2 Ô a3 Ô a4 Ô b û a1 + a1 Ô a2 Ô a3 Ô a4 Ô b
k k k

Ï The C form
Note that since the exterior product has higher precedence in Mathematica than the interior
product, a1 Ôbûa2 is equivalent to a1 Ôb ûa2 , and no parentheses are necessary in the terms of
k k
the C form.

ToInteriorProductsCBa ù bF
4 k

b û a4 Ô a3 Ô a2 Ô a1 - a1 Ô b û a4 Ô a3 Ô a2 +
k k
a2 Ô b û a4 Ô a3 Ô a1 - a3 Ô b û a4 Ô a2 Ô a1 +
k k
a4 Ô b û a3 Ô a2 Ô a1 + a1 Ô a2 Ô b û a4 Ô a3 - a1 Ô a3 Ô b û a4 Ô a2 +
k k k
a1 Ô a4 Ô b û a3 Ô a2 + a2 Ô a3 Ô b û a4 Ô a1 - a2 Ô a4 Ô b û a3 Ô a1 +
k k k
a3 Ô a4 Ô b û a2 Ô a1 - a1 Ô a2 Ô a3 Ô b û a4 + a1 Ô a2 Ô a4 Ô b û a3 -
k k k
a1 Ô a3 Ô a4 Ô b û a2 + a2 Ô a3 Ô a4 Ô b û a1 + a1 Ô a2 Ô a3 Ô a4 Ô b
k k k

Either form is easy to write down directly by observing simple mnemonic rules.

Ÿ A mnemonic way to write down a general Clifford product


We can begin developing the D form mnemonically by listing the ai . This list has the same form
l
as the basis of the Grassmann algebra of a 4-space.

Ï 1. Calculate the ai
l


These are all the essentially different combinations of factors of all grades. The list has the same
form as the basis of the Grassmann algebra whose n-element is a.
m

DeclareBasis@4, aD; B = BasisL@D


81, a1 , a2 , a3 , a4 , a1 Ô a2 , a1 Ô a3 , a1 Ô a4 , a2 Ô a3 , a2 Ô a4 , a3 Ô a4 ,
a1 Ô a2 Ô a3 , a1 Ô a2 Ô a4 , a1 Ô a3 Ô a4 , a2 Ô a3 Ô a4 , a1 Ô a2 Ô a3 Ô a4 <

Ï 2. Calculate the cobasis with respect to a


m

A = CobasisL@D
8a1 Ô a2 Ô a3 Ô a4 , a2 Ô a3 Ô a4 , -Ha1 Ô a3 Ô a4 L,
a1 Ô a2 Ô a4 , -Ha1 Ô a2 Ô a3 L, a3 Ô a4 , -Ha2 Ô a4 L, a2 Ô a3 ,
a1 Ô a4 , -Ha1 Ô a3 L, a1 Ô a2 , a4 , -a3 , a2 , -a1 , 1<

Ï 3. Take the reverse of the cobasis

R = GrassmannReverse@AD
8a4 Ô a3 Ô a2 Ô a1 , a4 Ô a3 Ô a2 , -Ha4 Ô a3 Ô a1 L,
a4 Ô a2 Ô a1 , -Ha3 Ô a2 Ô a1 L, a4 Ô a3 , -Ha4 Ô a2 L, a3 Ô a2 ,
a4 Ô a1 , -Ha3 Ô a1 L, a2 Ô a1 , a4 , -a3 , a2 , -a1 , 1<

Ï 4. Take the interior product of each of these elements with b


k

S = ThreadBb û RF
k

:b û a4 Ô a3 Ô a2 Ô a1 , b û a4 Ô a3 Ô a2 , b û -Ha4 Ô a3 Ô a1 L, b û a4 Ô a2 Ô a1 ,
k k k k

b û -Ha3 Ô a2 Ô a1 L, b û a4 Ô a3 , b û -Ha4 Ô a2 L, b û a3 Ô a2 , b û a4 Ô a1 ,
k k k k k

b û -Ha3 Ô a1 L, b û a2 Ô a1 , b û a4 , b û -a3 , b û a2 , b û -a1 , b û 1>


k k k k k k k


Ï 5. Take the exterior product of the elements of the two lists B and S

T = Thread@B Ô SD

:1 Ô b û a4 Ô a3 Ô a2 Ô a1 , a1 Ô b û a4 Ô a3 Ô a2 ,
k k

a2 Ô b û -Ha4 Ô a3 Ô a1 L , a3 Ô b û a4 Ô a2 Ô a1 , a4 Ô b û -Ha3 Ô a2 Ô a1 L ,
k k k

a1 Ô a2 Ô b û a4 Ô a3 , a1 Ô a3 Ô b û -Ha4 Ô a2 L , a1 Ô a4 Ô b û a3 Ô a2 ,
k k k

a2 Ô a3 Ô b û a4 Ô a1 , a2 Ô a4 Ô b û -Ha3 Ô a1 L , a3 Ô a4 Ô b û a2 Ô a1 ,
k k k

a1 Ô a2 Ô a3 Ô b û a4 , a1 Ô a2 Ô a4 Ô b û -a3 , a1 Ô a3 Ô a4 Ô b û a2 ,
k k k

a2 Ô a3 Ô a4 Ô b û -a1 , a1 Ô a2 Ô a3 Ô a4 Ô b û 1 >
k k

Ï 6. Add the terms and simplify the result by factoring out the scalars

U = FactorScalars@Plus üü TD

b û a4 Ô a3 Ô a2 Ô a1 + a1 Ô b û a4 Ô a3 Ô a2 -
k k

a2 Ô b û a4 Ô a3 Ô a1 + a3 Ô b û a4 Ô a2 Ô a1 -
k k

a4 Ô b û a3 Ô a2 Ô a1 + a1 Ô a2 Ô b û a4 Ô a3 - a1 Ô a3 Ô b û a4 Ô a2 +
k k k

a1 Ô a4 Ô b û a3 Ô a2 + a2 Ô a3 Ô b û a4 Ô a1 - a2 Ô a4 Ô b û a3 Ô a1 +
k k k

a3 Ô a4 Ô b û a2 Ô a1 + a1 Ô a2 Ô a3 Ô b û a4 - a1 Ô a2 Ô a4 Ô b û a3 +
k k k

a1 Ô a3 Ô a4 Ô b û a2 - a2 Ô a3 Ô a4 Ô b û a1 + a1 Ô a2 Ô a3 Ô a4 Ô b
k k k

Ï 7. Compare to the original expression

XãU
True


12.7 The Clifford Product of Intersecting Elements

General formulae for intersecting elements


Suppose two simple elements gÔa and gÔb have a simple element g in common. Then by
p m p k p
definition their Clifford product may be written as a sum of generalized products.

Min@m,kD+p
1
gÔa ù gÔb ã ‚ H-1Ll Hp+m-lL + 2 l Hl-1L
gÔa Û gÔb
p m p k p m l p k
l=0

But it has been shown in Section 10.12 that, for l ¥ p:

g Ô a Û g Ô b ã H-1Lp Hl-pL gÔa Û b ûg


p m l p k p m -p+l k p

Substituting in the formula above gives:

Min@m,kD+p
1
gÔa ù gÔb ã ‚ H-1Lp+l Hm-lL + 2 l Hl-1L
gÔa Û b ûg
p m p k p m l-p k p
l=0

Since the terms on the right hand side are zero for l < p, we can define m = l - p and rewrite the
right hand side as:

Min@m,kD
1
gÔa ù gÔb ã H-1Lw ‚ H-1Lm Hp+m-mL + 2 m Hm-1L
gÔa Û b ûg
p m p k p m m k p
m=0

1
where w = m p + 2
p Hp - 1L. Hence we can finally cast the right hand side as a Clifford product
by taking out the second g factor.
p

1
p Hp-1L
gÔa ù gÔb ã H-1Lm p+ 2 gÔa ùb ûg 12.17
p m p k p m k p

In a similar fashion we can show that the right hand side can be written in the alternative form:

1
p Hp-1L
gÔa ù gÔb ã H-1L 2 aù gÔb ûg 12.18
p m p k m p k p

By reversing the order of a and g in the first factor, formula 12.17 can be written as:
m p


1
p Hp-1L
aÔg ù gÔb ã H-1L 2 gÔa ùb ûg 12.19
m p p k p m k p

1
p Hp-1L
And by noting that the reverse, g† of g is H-1L 2 g, we can absorb the sign into the
p p p

formulae by changing one of the g to g . For example:
p p

a Ô g† ù g Ô b ã gÔa ùb ûg 12.20
m p p k p m k p

In sum: we can write:

a Ô g† ù g Ô b ã H-1Lm p aÔg ùb ûg 12.21


m p p k m p k p

a Ô g† ù g Ô b ã H-1Lm p a ù g Ô b ûg 12.22
m p p k m p k p

aÔg ùb ûg ã aù gÔb ûg 12.23


m p k p m p k p

The relations derived up to this point are completely general and apply to Clifford products in an
arbitrary space with an arbitrary metric. For example they do not require any of the elements to be
orthogonal.

Special cases of intersecting elements


We can obtain special cases of the formulae derived above by putting a and b equal to unity. First,
m k
put b equal to unity. Remembering that the Clifford product with a scalar reduces to the ordinary
k
product we obtain:

a Ô g† ù g ã H-1Lm p a ù g û g ã H-1Lm p a Ô g û g
m p p m p p m p p

Next, put a equal to unity, and then, to enable comparison with the previous formulae, replace b by
m k
a.
m


g† ù g Ô a ã gùa ûg ã gÔa ûg
p p m p m p p m p

Finally we note that, since the far right hand sides of these two equations are equal, all the
expressions are equal.

a Ô g† ù g ã g† ù g Ô a
m p p p p m
12.24
mp
ã g ù a û g ã g Ô a û g ã H-1L aùg ûg
p m p p m p m p p

By putting a to unity in these equations we can recover the relation 12.11 between the Clifford
m
product of identical elements and their interior product.

12.8 The Clifford Product of Orthogonal Elements

The Clifford product of totally orthogonal elements


In Chapter 10 on generalized products we showed that if a and b are totally orthogonal (that is,
m k
ai ûbj ã0 for each ai belonging to a and bj belonging to b), then a Û b ã 0, except when l = 0.
m k m l k

Thus we see immediately from the definition of the Clifford product that the Clifford product of
two totally orthogonal elements is equal to their exterior product.

aùb ã aÔb ai û bj ã 0 12.25


m k m k
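Formula 12.25 is easy to verify numerically outside Mathematica. The Python sketch below is an illustrative bitmask implementation, not part of the GrassmannAlgebra package: it assumes an orthogonal basis (so the metric is diagonal), represents a basis blade as a bitmask over the basis vectors, and represents a multivector as a dictionary from blades to coefficients.

```python
def reorder_sign(a, b):
    # Sign from moving blade b's factors past blade a's
    # (blades are bitmasks over basis vectors, written in ascending index order).
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s % 2 else 1

def clifford(A, B, metric):
    # Clifford product of multivectors {blade: coeff}; diagonal metric only.
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            s = reorder_sign(a, b) * ca * cb
            for i, m in enumerate(metric):
                if (a & b) >> i & 1:
                    s *= m          # each common factor contributes e_i.e_i
            out[a ^ b] = out.get(a ^ b, 0) + s
    return {bl: c for bl, c in out.items() if c}

def exterior(A, B):
    # Exterior product: any term with a repeated factor vanishes.
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & b == 0:
                out[a ^ b] = out.get(a ^ b, 0) + reorder_sign(a, b) * ca * cb
    return {bl: c for bl, c in out.items() if c}

# In a 4-space, a = 2 e1^e2 is built from {e1, e2} and b = 3 e3^e4 from {e3, e4};
# with a diagonal metric these two elements are totally orthogonal.
metric = [2, 3, 5, 7]      # arbitrary diagonal entries e_i.e_i
a = {0b0011: 2}            # 2 e1^e2
b = {0b1100: 3}            # 3 e3^e4
assert clifford(a, b, metric) == exterior(a, b) == {0b1111: 6}
```

Because a and b share no basis vectors, no term of the Clifford product acquires a scalar-product factor, and the product reduces to the exterior product exactly as 12.25 states.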

The Clifford product of partially orthogonal elements


Suppose now we introduce an arbitrary element g into aùb, and expand the expression in terms of
p m k
generalized products.

Min@m,k+pD
1
aù gÔb ã ‚ H-1Ll Hm- lL+ 2 l Hl-1L
a Û gÔb
m p k m l p k
l=0

But from formula 10.35 we have that:


a Û gÔb ã a Û g Ôb ai û bj ã 0 l § Min@m, pD
m l p k m l p k

Hence:

aù gÔb ã aùg Ôb ai û bj ã 0 12.26


m p k m p k

Similarly, expressing aÔg ùb in terms of generalized products and substituting from equation
m p k
10.37 gives:

Min@p,kD
1
aÔg ùb ã ‚ H-1Ll Hm+p-lL + 2 l Hl-1L+m l
aÔ g Û b
m p k m p l k
l=0

1 1
But H-1Ll Hm+p-lL + 2 l Hl-1L+m l
ã H-1Ll Hp-lL + 2 l Hl-1L
, hence we can write:

aÔg ùb ã aÔ gùb ai û bj ã 0 12.27


m p k m p k

Ÿ Testing the formulae


To test any of these formulae we may always tabulate specific cases. Here we convert the
difference between the sides of equation 12.27 to scalar products and then put to zero any products
whose factors are orthogonal. To do this we use the GrassmannAlgebra function
OrthogonalSimplificationRules. We verify the formula for the first 50 cases.

? OrthogonalSimplificationRules
OrthogonalSimplificationRules@88X1,Y1<,8X2,Y2<,!<D
develops a list of rules which put to zero all the
scalar products of a 1-element from Xi and a 1-element
from Yi. Xi and Yi may be either variables, basis
elements, exterior products of these, or graded variables.


FlattenBTableBToScalarProductsB a Ô g ù b - a Ô g ù b F ê.
m p k m p k

OrthogonalSimplificationRulesB::a, b>>F,
m k

8m, 0, 3<, 8k, 0, m<, 8p, 0, m + 2<FF

80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<

12.9 The Clifford Product of Intersecting Orthogonal Elements

Orthogonal union
We have shown in Section 12.7 above that the Clifford product of arbitrary intersecting elements
may be expressed by:

a Ô g† ù g Ô b ã H-1Lm p aÔg ùb ûg
m p p k m p k p

a Ô g† ù g Ô b ã H-1Lm p a ù g Ô b ûg
m p p k m p k p

But we have also shown in Section 12.8 that if a and b are totally orthogonal, then:
m k

aÔg ùb ã aÔ gùb
m p k m p k

aù gÔb ã aùg Ôb
m p k m p k

Hence the right hand sides of the first equations may be written, in the orthogonal case, as:

a Ô g† ù g Ô b ã H-1Lm p a Ô g ù b ûg ai û bj ã 0 12.28
m p p k m p k p

a Ô g† ù g Ô b ã H-1Lm p aùg Ôb ûg ai û bj ã 0 12.29


m p p k m p k p

Orthogonal intersection

Consider the case of three simple elements a, b and g where g is totally orthogonal to both a and
m k p p m

b (and hence to aÔb). A simple element g is totally orthogonal to an element a if and only if
k m k p m

aûgi ã0 for all gi belonging to g. Here, we do not assume that a and b are orthogonal.
m p m k

As before we consider the Clifford product of intersecting elements, but because of the
orthogonality relationships, we can replace the generalized products on the right hand side by the
right hand side of equation 10.39.

Min@m,kD+p
1
gÔa ù gÔb ã ‚ H-1Ll Hp+m-lL + 2 l Hl-1L
gÔa Û gÔb
p m p k p m l p k
l=0

Min@m,kD+p
1
ã ‚ H-1Ll Hp+m-lL + 2 l Hl-1L
gûg a Û b
p p m l-p k
l=p

Let m = l - p, then this equals:

Min@m,kD
1
gÔa ù gÔb ã gûg ‚ H-1LHm+pL Hm-mL + 2 Hm+pL Hm+p-1L
aÛb
p m p k p p m m k
m=0

Min@m,kD
1 1
p Hp-1L
ã H-1Lm p+ 2 gûg ‚ H-1Lm Hm-mL + 2 m Hm-1L
aÛb
p p m m k
m=0

Hence we can cast the right hand side as a Clifford product.

1
p Hp-1L
gÔa ù gÔb ã H-1Lm p+ 2 gûg aùb
p m p k p p m k

Or finally, by absorbing the signs into the left-hand side as:

a Ô g† ù g Ô b ã gûg aùb a û gi ã b û gi ã 0 12.30


m p p k p p m k m k

Note carefully that, for this formula to hold, a and b need not be orthogonal. However, if
m k
they are orthogonal, we can express the Clifford product on the right hand side as an exterior
product.


a Ô g† ù g Ô b ã
m p p k
12.31
gûg aÔb ai û bj ã ai û gs ã bj û gs ã 0
p p m k
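Formula 12.31 can also be spot-checked numerically. The sketch below is an illustrative bitmask implementation assuming an orthogonal basis (diagonal metric), independent of the GrassmannAlgebra package; with a = e1, g = e2^e3 and b = e4, the three elements are mutually totally orthogonal.

```python
def reorder_sign(a, b):
    # Sign from moving blade b's factors past blade a's (bitmask blades).
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s % 2 else 1

def clifford(A, B, metric):
    # Clifford product of multivectors {blade: coeff}; diagonal metric only.
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            s = reorder_sign(a, b) * ca * cb
            for i, m in enumerate(metric):
                if (a & b) >> i & 1:
                    s *= m       # each common factor contributes e_i.e_i
            out[a ^ b] = out.get(a ^ b, 0) + s
    return {bl: c for bl, c in out.items() if c}

def exterior(A, B):
    # Exterior product: any term with a repeated factor vanishes.
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & b == 0:
                out[a ^ b] = out.get(a ^ b, 0) + reorder_sign(a, b) * ca * cb
    return {bl: c for bl, c in out.items() if c}

def rev(A):
    # Reverse: a grade-g blade picks up the sign (-1)^(g(g-1)/2).
    return {b: c * (-1) ** (bin(b).count("1") * (bin(b).count("1") - 1) // 2)
            for b, c in A.items()}

metric = [2, 3, 5, 7]    # arbitrary diagonal entries e_i.e_i
a = {0b0001: 1}          # a = e1
g = {0b0110: 1}          # g = e2^e3, totally orthogonal to both a and b
b = {0b1000: 1}          # b = e4

g_dot_g = clifford(rev(g), g, metric)[0]    # scalar g.g, via formula 12.32
assert g_dot_g == 3 * 5                     # = (e2.e2)(e3.e3) for a diagonal metric

# (a ^ g~) o (g ^ b) == (g.g)(a ^ b), as in formula 12.31
lhs = clifford(exterior(a, rev(g)), exterior(g, b), metric)
rhs = {bl: g_dot_g * c for bl, c in exterior(a, b).items()}
assert lhs == rhs == {0b1001: 15}
```

The repeated factor g contributes only the scalar g.g, while the totally orthogonal remainders a and b combine by the exterior product alone.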

12.10 Summary of Special Cases of Clifford Products

Arbitrary elements
The main results of the preceding sections concerning Clifford products of intersecting and
orthogonal elements are summarized below.

Ï g is an arbitrary repeated factor


p

g† ù g ã g ù g† ã g û g 12.32
p p p p p p

g ù g ã g† û g ã g û g† 12.33
p p p p p p

g ù g ù g ã g† û g g ã g û g† g 12.34
p p p p p p p p p

2
gùg ùgùg ã gûg 12.35
p p p p p p


Ï g is an arbitrary repeated factor, a is arbitrary


p m

a Ô g† ù g ã g† ù g Ô a
m p p p p m
12.36
mp
ã g ù a û g ã g Ô a û g ã H-1L aùg ûg
p m p p m p m p p

Ï g is an arbitrary repeated factor, a and b are arbitrary


p m k

a Ô g† ù g Ô b ã H-1Lm p aÔg ùb ûg 12.37


m p p k m p k p

a Ô g† ù g Ô b ã H-1Lm p a ù g Ô b ûg 12.38
m p p k m p k p

Arbitrary and orthogonal elements

Ï g is arbitrary, a and b are totally orthogonal


p m k

aù gÔb ã aùg Ôb ai û bj ã 0 12.39


m p k m p k

aÔg ùb ã aÔ gùb ai û bj ã 0 12.40


m p k m p k

Ï g is an arbitrary repeated factor, a and b are totally orthogonal


p m k

a Ô g† ù g Ô b ã H-1Lm p a Ô g ù b ûg ai û bj ã 0 12.41
m p p k m p k p


a Ô g† ù g Ô b ã H-1Lm p aùg Ôb ûg ai û bj ã 0 12.42


m p p k m p k p

Ï a and b are arbitrary, g is a repeated factor totally orthogonal to both a and b


m k p m k

a Ô g† ù g Ô b ã
m p p k
12.43
gûg aùb ai û gs ã bj û gs ã 0
p p m k

Orthogonal elements

Ï a and b are totally orthogonal


m k

aùb ã aÔb ai û bj ã 0 12.44


m k m k

Ï a, b, and g are totally orthogonal


m k p

aÔg ù gÔb ã
m p p k
12.45

g ûg aÔb ai û bj ã ai û gs ã bj û gs ã 0
p p m k

Calculating with Clifford products


The previous section summarizes some alternative formulations for a Clifford product when its
factors contain a common element, and when they have an orthogonality relationship between
them. In this section we discuss how these relations can make it easy to calculate with these types of
Clifford products.


Ï The Clifford product of totally orthogonal elements reduces to the exterior product

If we know that all the factors of a Clifford product are totally orthogonal, then we can interchange
the Clifford product and the exterior product at will. Hence, for totally orthogonal elements, the
Clifford and exterior products are associative, and we do not need to include parentheses.

aùbùg ã aùbÔg ã aÔbùg ã aÔbÔg


m k p m k p m k p m k p 12.46
ai û bj ã ai û gs ã bj û gs ã 0

Note carefully, however, that this associativity does not extend to the factors of the m-, k-, or p-
elements unless the factors of the element concerned are themselves mutually orthogonal, in which
case we could, for example, write:

a1 ù a2 ù ! ù a m ã a1 Ô a2 Ô ! Ô a m ai û aj ã 0 i!j 12.47

Ï Example
For example, if xÔy and z are totally orthogonal, that is, xûz ã 0 and yûz ã 0, then we can write

Hx Ô yL ù z ã x Ô y Ô z ã x Ô Hy ù zL ã -y Ô Hx ù zL

But since x is not necessarily orthogonal to y, these are not the same as

x ù y ù z ã Hx ù yL Ô z ã x ù Hy Ô zL

We can see the difference by reducing the Clifford products to scalar products.

ToScalarProducts@8Hx Ô yL ù z, x Ô y Ô z, x Ô Hy ù zL, -y Ô Hx ù zL<D ê.


OrthogonalSimplificationRules@88x Ô y, z<<D
8x Ô y Ô z, x Ô y Ô z, x Ô y Ô z, x Ô y Ô z<

ToScalarProducts@8x ù y ù z, Hx ù yL Ô z, x ù Hy Ô zL<D ê.
OrthogonalSimplificationRules@88x Ô y, z<<D
8z Hx û yL + x Ô y Ô z, z Hx û yL + x Ô y Ô z, z Hx û yL + x Ô y Ô z<
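The distinction drawn in this example can be confirmed numerically. In the sketch below (an illustrative bitmask implementation assuming an orthogonal basis, independent of the GrassmannAlgebra package) we take x = e1, y = e1 + e2 and z = e3, so that xûz = yûz = 0 but xûy is non-zero.

```python
def reorder_sign(a, b):
    # Sign from moving blade b's factors past blade a's (bitmask blades).
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s % 2 else 1

def clifford(A, B, metric):
    # Clifford product of multivectors {blade: coeff}; diagonal metric only.
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            s = reorder_sign(a, b) * ca * cb
            for i, m in enumerate(metric):
                if (a & b) >> i & 1:
                    s *= m       # each common factor contributes e_i.e_i
            out[a ^ b] = out.get(a ^ b, 0) + s
    return {bl: c for bl, c in out.items() if c}

def exterior(A, B):
    # Exterior product: any term with a repeated factor vanishes.
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & b == 0:
                out[a ^ b] = out.get(a ^ b, 0) + reorder_sign(a, b) * ca * cb
    return {bl: c for bl, c in out.items() if c}

metric = [2, 3, 5]
x = {0b001: 1}              # x = e1
y = {0b001: 1, 0b010: 1}    # y = e1 + e2, not orthogonal to x (x.y = 2)
z = {0b100: 1}              # z = e3, orthogonal to both x and y

# (x^y) o z == x^y^z, since x^y and z are totally orthogonal
xy = exterior(x, y)
assert clifford(xy, z, metric) == exterior(xy, z) == {0b111: 1}

# x o y o z == (x.y) z + x^y^z, because x and y are not orthogonal
xyz = clifford(clifford(x, y, metric), z, metric)
assert xyz == {0b100: 2, 0b111: 1}
```

The extra term (x.y) z appears only in the fully Clifford version of the product, matching the scalar-product reduction shown above.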


12.11 Associativity of the Clifford Product

Associativity of orthogonal elements


In what follows we use formula 12.45 to show that the Clifford product as defined at the beginning
of this chapter is associative for totally orthogonal elements. This result will then enable us to show
that by re-expressing the elements in terms of an orthogonal basis, the Clifford product is, in
general, associative. Note that the approach we have adopted in this book has not required us to
adopt ab initio the associativity of the Clifford product as an axiom, but rather to show that
associativity is a natural consequence of its definition.

Formula 12.45 tells us that the Clifford product of (possibly) intersecting and totally orthogonal
elements is given by:

aÔg ù gÔb ã g† û g aÔb


m p p k p p m k

ai û bj ã ai û gs ã bj û gs ã 0

Note that it is not necessary that the factors of a be orthogonal to each other. This is true also for
m
the factors of b or of g.
k p

We begin by writing a Clifford product of three elements, and associating the first pair. The
elements contain factors which are specific to the element (a, b, w), pairwise common (g, d, e),
m k r p q s
and common to all three (q). This product will therefore represent the most general case since other
z
cases may be obtained by letting one or more factors reduce to unity.

aÔeÔqÔg ù gÔqÔbÔd ù dÔeÔwÔq


m s z p p z k q q s r z

ã q† Ô g † û q Ô g aÔeÔbÔd ù dÔeÔwÔq
z p z p m s k q q s r z

ã q† Ô g † û q Ô g H-1Lsk e† Ô d† û e Ô d a Ô b Ô Kw Ô qO
z p z p s q s q m k r z

ã H-1Lsk q† Ô g † û q Ô g e† Ô d† û e Ô d aÔbÔwÔq
z p z p s q s q m k r z

On the other hand we can associate the second pair to obtain the same result.


aÔeÔqÔg ù gÔqÔbÔd ù dÔeÔwÔq


m s z p p z k q q s r z

ã a Ô e Ô q Ô g ù H-1Lzk+z Hs+rL q† Ô d† û q Ô d gÔb Ô eÔw


m s z p z q z q p k s r

ã H-1Lzk+z Hs+rL q† Ô d† û q Ô d aÔeÔqÔg ù gÔbÔeÔw


z q z q m s z p p k s r

ã H-1Lzk+z Hs+rL+sz+ks q† Ô d† û q Ô d aÔqÔeÔg ù gÔeÔbÔw


z q z q m z s p p s k r

ã H-1Lzk+zr+ks q† Ô d† û q Ô d e† Ô g † û e Ô g Ka Ô qO Ô b Ô w
z q z q s p s p m z k r

ã H-1Lsk q† Ô g † û q Ô g e† Ô d† û e Ô d aÔbÔwÔq
z p z p s q s q m k r z

In sum: We have shown that the Clifford product of possibly intersecting but otherwise totally
orthogonal elements is associative. Any factors which make up a given individual element are not
specifically involved, and hence it is of no consequence to the associativity of the element with
another whether or not these factors are mutually orthogonal.

A mnemonic formula for products of orthogonal elements


Because q and g are orthogonal to each other, and e and d are orthogonal to each other we can
z p s q
rewrite the inner products in the alternative forms:

q† Ô g† û q Ô g ã Kq† û qO g† û g
z p z p z z p p

e† Ô d† û e Ô d ã e† û e d† û d
s q s q s s q q

The results of the previous section may then summarized as:

aÔeÔqÔg ù gÔqÔbÔd ù dÔeÔwÔq


m s z p p z k q q s r z
12.48
ã H-1Lsk g† û g Kq† û qO Ke† û eO d† û d aÔbÔwÔq
p p z z s s q q m k r z

A mnemonic for making this transformation is then


1. Rearrange the factors in a Clifford product to get common factors adjacent to the Clifford
product symbol, taking care to include any change of sign due to the quasi-commutativity of the
exterior product.
2. Replace the common factors by their inner product, but with one copy being reversed.
3. If there are no common factors in a Clifford product, the Clifford product can be replaced by the
exterior product.


Remember that for these relations to hold all the elements must be totally orthogonal to each other.

Note that if, in addition, the 1-element factors of any of these elements, g say, are orthogonal to
p
each other, then:

g û g ã Hg1 û g1 L Hg2 û g2 L ! Hgp û gp L


p p

Associativity of non-orthogonal elements

Consider the Clifford product aùb ùg where there are no restrictions on the factors a, b and g. It
m k p m k p
has been shown in Section 6.3 that an arbitrary simple m-element may be expressed in terms of m
orthogonal 1-element factors (the Gram-Schmidt orthogonalization process). Suppose that n such
orthogonal 1-elements e1 ,e2 ,!,en have been found in terms of which a, b, and g can be
m k p
expressed. Writing the m-elements, k-elements and p-elements formed from the ei as ej , er , and
m k
es respectively, we can write:
p

a ã S aj ej b ã S br er g ã S cs es
m m k k p p

Thus we can write the Clifford product as:

a ù b ù g ã S aj ej ù S br er ù S cs es ã S S S aj br cs ej ù er ù es
m k p m k p m k p

But we have already shown in the previous section that the Clifford product of orthogonal elements
is associative. That is:

ej ù er ù es ã ej ù er ù es ã ej ù er ù es
m k p m k p m k p

Hence we can write:

a ù b ù g ã S S S aj br cs ej ù er ù es ã a ù b ù g
m k p m k p m k p

We thus see that the Clifford product of general elements is associative.

aùb ùg ã aù bùg ã aùbùg 12.49


m k p m k p m k p

The associativity of the Clifford product is usually taken as an axiom. However, in this book we
have chosen to show that associativity is a consequence of our definition of the Clifford product in
terms of exterior and interior products. In this way we can ensure that the Grassmann and Clifford
algebras are consistent.

Ÿ Testing the general associativity of the Clifford product


We can easily create a function in Mathematica to test the associativity of Clifford products of
elements of different grades in spaces of different dimensions. Below we define a function called
CliffordAssociativityTest which takes the dimension of the space and the grades of three
elements as arguments. The steps are as follows:

• Declare a space of the given dimension. It does not matter what the metric is since we do not use
it.
• Create general elements of the given grades in a space of the given dimension.
• To make the code more readable, define a function which converts a product to scalar product
form.
• Compute the products associated in the two different ways, and subtract them.
• The associativity test is successful if a result of 0 is returned.

CliffordAssociativityTest@n_D@m_, k_, p_D :=


ModuleB8X, Y, Z, S<, &n ; X = CreateElementBxF;
m

Y = CreateElementByF; Z = CreateElementBzF;
k p

S@x_D := ToScalarProducts@xD; S@S@X ù YD ù ZD - S@X ù S@Y ù ZDDF

We can either test individual cases, for example 2-elements in a 4-space:

CliffordAssociativityTest@4D@2, 2, 2D
0

Or, we can tabulate a number of results together. For example, elements of all grades in all spaces
of dimension 0, 1, 2, and 3.

Table@CliffordAssociativityTest@nD@m, k, pD,
8n, 0, 3<, 8m, 0, n<, 8k, 0, n<, 8p, 0, m<D
80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0<


12.12 Decomposing a Clifford Product

Even and odd generalized products


A Clifford product aùb can be decomposed into two sums, those whose terms are generalized
m k

products of even order aùb , and those whose terms are generalized products of odd order
m k e

aùb . That is:


m k o

aùb ã aùb + aùb 12.50


m k m k e m k o

where, from the definition of the Clifford product:

Min@m,kD
1
aùb ã ‚ H-1Ll Hm-lL+ 2 l Hl-1L
aÛb
m k m l k
l=0

we can take just the even generalized products to get:

Min@m,kD
l
aùb ã ‚ H-1L 2 aÛb 12.51
m k e m l k
l = 0,2,4,!

(Here, the limit of the sum is understood to mean the greatest even integer less than or equal to
Min[m,k].)

The odd generalized products are:

Min@m,kD
l+1
m
aùb ã H-1L ‚ H-1L 2 aÛb 12.52
m k o m l k
l = 1,3,5,!

(In this case, the limit of the sum is understood to mean the greatest odd integer less than or equal
to Min[m,k].)

Note carefully that the evenness and oddness to which we refer apply to the order of the generalized
product, not to the grade of the terms in the Clifford product.


The Clifford product in reverse order


The expressions for the even and odd components of the Clifford product taken in the reverse order
are simply obtained from the formulae above by using the quasi-commutativity of the generalized
product.

Min@m,kD Min@m,kD
l l
bùa ã ‚ H-1L 2 bÛa ã ‚ H-1L 2 +Hm-lL Hk-lL a Û b
k m e k l m m l k
l = 0,2,4,! l = 0,2,4,!

But, since l is even, this simplifies to:

Min@m,kD
l
mk
bùa ã H-1L ‚ H-1L 2 aÛb 12.53
k m e m l k
l = 0,2,4,!

Hence, the even terms of both products are equal, except when both factors of the product are of
odd grade.

bùa ã H-1Lm k a ù b 12.54


k m e m k e

In a like manner we can show that for l odd:

Min@m,kD
l+1
k +Hm-lL Hk-lL
bùa ã H-1L ‚ H-1L 2 aÛb
k m o m l k
l = 1,3,5,!

Min@m,kD
l+1
mk m
bùa ã -H-1L H-1L ‚ H-1L 2 aÛb 12.55
k m o m l k
l = 1,3,5,!

bùa ã -H-1Lm k a ù b 12.56


k m o m k o

The decomposition of a Clifford product


Finally, therefore, we can write:


aùb ã aùb + aùb 12.57


m k m k e m k o

H-1Lm k b ù a ã a ù b - aùb 12.58


k m m k e m k o

which we can add and subtract to give:

1
aùb ã a ù b + H-1Lm k b ù a 12.59
m k e 2 m k k m

1
aùb ã a ù b - H-1Lm k b ù a 12.60
m k o 2 m k k m

Ï Example: An m-element and a 1-element

Putting k equal to 1 gives:

1
aÔb ã Ka ù b + H-1Lm b ù a O 12.61
m 2 m m

1
Ka ù b - H-1Lm b ù a O ã -H-1Lm a û b 12.62
2 m m m
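The decomposition formulae 12.59 to 12.62 can be spot-checked numerically. The sketch below (an illustrative bitmask implementation over an orthogonal basis, not the GrassmannAlgebra package) takes m = 2 and k = 1, so that (-1)^(mk) = 1, and confirms that the symmetrized product recovers the exterior product as in 12.61, while the antisymmetrized product yields the interior-product term of grade m - 1.

```python
def reorder_sign(a, b):
    # Sign from moving blade b's factors past blade a's (bitmask blades).
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s % 2 else 1

def clifford(A, B, metric):
    # Clifford product of multivectors {blade: coeff}; diagonal metric only.
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            s = reorder_sign(a, b) * ca * cb
            for i, m in enumerate(metric):
                if (a & b) >> i & 1:
                    s *= m       # each common factor contributes e_i.e_i
            out[a ^ b] = out.get(a ^ b, 0) + s
    return {bl: c for bl, c in out.items() if c}

def exterior(A, B):
    # Exterior product: any term with a repeated factor vanishes.
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & b == 0:
                out[a ^ b] = out.get(a ^ b, 0) + reorder_sign(a, b) * ca * cb
    return {bl: c for bl, c in out.items() if c}

def half(P, Q, s):
    # (P + s Q)/2 on multivectors, dropping zero terms.
    keys = set(P) | set(Q)
    out = {bl: (P.get(bl, 0) + s * Q.get(bl, 0)) / 2 for bl in keys}
    return {bl: c for bl, c in out.items() if c}

metric = [2, 3, 5]
A = {0b011: 1}              # a = e1^e2  (m = 2)
B = {0b001: 1, 0b100: 1}    # b = e1 + e3  (k = 1)
m, k = 2, 1
sign = (-1) ** (m * k)

AB = clifford(A, B, metric)
BA = clifford(B, A, metric)
even = half(AB, BA, sign)     # 12.59
odd  = half(AB, BA, -sign)    # 12.60

assert even == exterior(A, B)     # 12.61 with k = 1: the even part is a^b
assert odd == {0b010: -2.0}       # the grade m-1 (interior product) part, here -(e1.e1) e2
```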

Ï Example: An m-element and a 2-element

Putting k equal to 2 gives:

1
aùb + bùa ã aÔb - aûb 12.63
2 m 2 2 m m 2 m 2

1
aùb - bùa ã -H-1Lm a Û b 12.64
2 m 2 2 m m 1 2

12.13 Clifford Algebra


Generating Clifford algebras


Up to this point we have concentrated on the definition and properties of the Clifford product. We
are now able to turn our attention to the algebras that such a product is capable of generating in
different spaces.

Broadly speaking, an algebra can be constructed from a set of elements of a linear space which has
a product operation, provided all products of the elements are again elements of the set.

In what follows we shall discuss Clifford algebras. The generating elements will be selected
subsets of the basis elements of the full Grassmann algebra on the space. The product operation
will be the Clifford product which we have defined in the first part of this chapter in terms of the
generalized Grassmann product. Thus, Clifford algebras may be viewed as living very much within
the Grassmann algebra, relying on it for both its elements and its operations.

In this development therefore, a particular Clifford algebra is ultimately defined by the values of
the scalar products of basis vectors of the underlying Grassmann algebra, and thus by the metric on
the space.

In many cases we will be defining the specific Clifford algebras in terms of orthogonal (not
necessarily orthonormal) basis elements.

Clifford algebras include the real numbers, complex numbers, quaternions, biquaternions, and the
Pauli and Dirac algebras.

Real algebra
All the products that we have developed up to this stage (the exterior, interior, generalized
Grassmann, and Clifford products) possess the valuable and consistent property that, when applied to
scalars, they yield results equivalent to the usual (underlying field) product. The field has been identified
with a space of zero dimensions, and the scalars identified with its elements.

Thus, if a and b are scalars, then:

aùb ã aÔb ã aûb ã a Û b ã a b 12.65


0

The real algebra is therefore isomorphic with the Clifford algebra of a space of zero dimensions.


Ÿ Clifford algebras of a 1-space


We begin our discussion of Clifford algebras with the simplest case: the Clifford algebras of 1-
space. If the basis for the 1-space is e1 , then the basis for the associated Grassmann algebra
is 81,e1 <.

&1 ; BasisL@D
81, e1 <

There are only four possible Clifford products of these basis elements. We can construct a table of
these products by using the GrassmannAlgebra function CliffordProductTable.

CliffordProductTable@D
881 ù 1, 1 ù e1 <, 8e1 ù 1, e1 ù e1 <<

Usually, however, to make the products easier to read and use, we will display them in the form of
a palette using the GrassmannAlgebra function PaletteForm. (We can click on the palette to
enter any of its expressions into the notebook).

C1 = CliffordProductTable@D; PaletteForm@C1 D

1ù1 1 ù e1
e1 ù 1 e1 ù e1

In the general case any Clifford product may be expressed in terms of exterior and interior
products. We can see this by applying ToInteriorProducts to the table (although only
interior (here scalar) products result from this simple case),

C2 = ToInteriorProducts@C1 D; PaletteForm@C2 D

1 e1
e1 e1 û e1

Different Clifford algebras may be generated depending on the metric chosen for the space. In this
example we can see that the types of Clifford algebra which we can generate in a 1-space are
dependent only on the choice of a single scalar value for the scalar product e1 ûe1 . The Clifford
product of two general elements of the algebra is

Ha + b e1 L ù Hc + d e1 L êê ToScalarProducts
a c + b d He1 û e1 L + b c e1 + a d e1

It is easy to see that if we choose e1 ûe1 ã-1, we have an algebra isomorphic to the complex
algebra. The basis 1-element e1 then plays the role of the imaginary unit Â. We can generate this
particular algebra immediately by declaring the metric, and then generating the product table.


DeclareBasis@8Â<D; DeclareMetric@88-1<<D;
CliffordProductTable@D êê ToScalarProducts êê ToMetricForm êê
PaletteForm

1 Â
 -1

However, our main purpose in discussing this very simple example in so much detail is to
emphasize that even in this case, there are an infinite number of Clifford algebras on a 1-space
depending on the choice of the scalar value for e1 ûe1 . The complex algebra, although it has surely
proven itself to be the most useful, is just one among many.
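The isomorphism with the complex algebra can be demonstrated numerically. The sketch below (an illustrative bitmask implementation of the Clifford product for a diagonal metric, independent of the GrassmannAlgebra package) multiplies random pairs of elements of the algebra generated by e1 with e1 ûe1 = -1 and compares the result with ordinary complex multiplication.

```python
import random

def reorder_sign(a, b):
    # Sign from moving blade b's factors past blade a's (bitmask blades).
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s % 2 else 1

def clifford(A, B, metric):
    # Clifford product of multivectors {blade: coeff}; diagonal metric only.
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            s = reorder_sign(a, b) * ca * cb
            for i, m in enumerate(metric):
                if (a & b) >> i & 1:
                    s *= m       # each common factor contributes e_i.e_i
            out[a ^ b] = out.get(a ^ b, 0) + s
    return {bl: c for bl, c in out.items() if c}

def as_complex(A):
    # Identify a + b e1 with the complex number a + b i.
    return complex(A.get(0, 0), A.get(1, 0))

random.seed(0)
for _ in range(20):
    a, b, c, d = (random.randint(-9, 9) for _ in range(4))
    z = clifford({0: a, 1: b}, {0: c, 1: d}, [-1])   # e1.e1 = -1
    assert as_complex(z) == complex(a, b) * complex(c, d)
```

The basis 1-element e1 squares to -1 under the Clifford product and so behaves exactly as the imaginary unit.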

Finally, we note that all Clifford algebras possess the real algebra as their simplest even subalgebra.

12.14 Clifford Algebras of a 2-Space

Ÿ The Clifford product table in 2-space


In this section we explore the Clifford algebras of 2-space. As might be expected, the Clifford
algebras of 2-space are significantly richer than those of 1-space. First we declare a (not
necessarily orthogonal) basis for the 2-space, and generate the associated Clifford product table.

&2 ; C1 = CliffordProductTable@D; PaletteForm@C1 D

1ù1 1 ù e1 1 ù e2 1 ù He1 Ô e2 L
e1 ù 1 e1 ù e1 e1 ù e2 e1 ù He1 Ô e2 L
e2 ù 1 e2 ù e1 e2 ù e2 e2 ù He1 Ô e2 L
He1 Ô e2 L ù 1 He1 Ô e2 L ù e1 He1 Ô e2 L ù e2 He1 Ô e2 L ù He1 Ô e2 L

To see the way in which these Clifford products reduce to generalized Grassmann products we can
apply the GrassmannAlgebra function ToGeneralizedProducts.

C2 = ToGeneralizedProducts@C1 D; PaletteForm@C2 D

1Û1 1 Û e1 1 Û e2
0 0 0

e1 Û 1 e1 Û e1 + e1 Û e1 e1 Û e2 + e1 Û e2
0 0 1 0 1

e2 Û 1 e2 Û e1 + e2 Û e1 e2 Û e2 + e2 Û e2
0 0 1 0 1

He1 Ô e2 L Û 1 He1 Ô e2 L Û e1 - He1 Ô e2 L Û e1 He1 Ô e2 L Û e2 - He1 Ô e2 L


0 0 1 0

The next level of expression is obtained by applying ToInteriorProducts to reduce the


generalized products to exterior and interior products.


C3 = ToInteriorProducts@C2 D; PaletteForm@C3 D

1 e1 e2 e1 Ô
e1 e1 û e1 e1 û e2 + e1 Ô e2 e1 Ô e2
e2 e1 û e2 - e1 Ô e2 e2 û e2 e1 Ô e2
e1 Ô e2 -He1 Ô e2 û e1 L -He1 Ô e2 û e2 L -He1 Ô e2 û e1 Ô e2 L - He1 Ô e2 û

Finally, we can expand the interior products to scalar products.

C4 = ToScalarProducts@C3 D; PaletteForm@C4 D

1 e1 e2
e1 e1 û e1 e1 û e2 + e1 Ô e2 -He1 û
e2 e1 û e2 - e1 Ô e2 e2 û e2 -He2 û
e1 Ô e2 He1 û e2 L e1 - He1 û e1 L e2 He2 û e2 L e1 - He1 û e2 L e2 He1 û e2 L2

It should be noted that the GrassmannAlgebra operation ToScalarProducts also applies the
function ToStandardOrdering. Hence scalar products are reordered to present the basis
element with lower index first. For example the scalar product e2 ûe1 does not appear in the table
above.

Product tables in a 2-space with an orthogonal basis


This is the Clifford product table for a general basis. If, however, we choose an orthogonal basis in
which e1 ûe2 is zero, the table simplifies to:

C5 = C4 ê. e1 û e2 Ø 0; PaletteForm@C5 D

1 e1 e2 e1 Ô e2
e1 e1 û e1 e1 Ô e2 He1 û e1 L e2
e2 -He1 Ô e2 L e2 û e2 -He2 û e2 L e1
e1 Ô e2 -He1 û e1 L e2 He2 û e2 L e1 -He1 û e1 L He2 û e2 L

This table defines all the Clifford algebras on the 2-space. Different Clifford algebras may be
generated by choosing different metrics for the space, that is, by choosing the two scalar values
e1 ûe1 and e2 ûe2 .
Note however that allocation of scalar values a to e1 ûe1 and b to e2 ûe2 would lead to essentially
the same structure as allocating b to e1 ûe1 and a to e2 û e2 .
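This claim can be checked numerically. The following Python sketch (not part of the GrassmannAlgebra package) implements the Clifford product of the table above on coefficient 4-tuples (a, b, c, d) standing for a + b e1 + c e2 + d e1Ôe2, and confirms that swapping the two metric values is compensated by relabelling e1 and e2 (under which e1Ôe2 changes sign).

```python
import random

# Clifford product in 2-space with an orthogonal basis and metric values s1, s2
# (the scalar products e1 e1 and e2 e2), read off the product table above.
def cmul(x, y, s1, s2):
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 + s1*b1*b2 + s2*c1*c2 - s1*s2*d1*d2,
            a1*b2 + b1*a2 - s2*(c1*d2 - d1*c2),
            a1*c2 + c1*a2 + s1*(b1*d2 - d1*b2),
            a1*d2 + d1*a2 + b1*c2 - c1*b2)

# Relabelling e1 <-> e2 swaps the vector coefficients and negates the bivector one.
def swap(x):
    a, b, c, d = x
    return (a, c, b, -d)

random.seed(1)
for _ in range(100):
    x = tuple(random.randint(-3, 3) for _ in range(4))
    y = tuple(random.randint(-3, 3) for _ in range(4))
    s1, s2 = random.choice([-1, 1]), random.choice([-1, 1])
    # swap is an isomorphism between the (s1, s2) and (s2, s1) algebras:
    assert swap(cmul(x, y, s1, s2)) == cmul(swap(x), swap(y), s2, s1)
```

The identity holds for arbitrary scalar values, not just ±1; the random loop merely samples the cases of interest below.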

In the rest of what follows, however, we will restrict ourselves to metrics in which e1 ûe1 ã ±1 and
e2 ûe2 ã ±1, whence there are three cases of interest.


8e1 û e1 Ø +1, e2 û e2 Ø +1<


8e1 û e1 Ø +1, e2 û e2 Ø -1<
8e1 û e1 Ø -1, e2 û e2 Ø -1<

As observed previously the case 8e1 ûe1 Ø-1,e2 ûe2 Ø+1< is isomorphic to the case
8e1 ûe1 Ø+1,e2 ûe2 Ø-1<, so we do not need to consider it.

Ÿ Case 1: 8e1 û e1 Ø +1, e2 û e2 Ø +1<


This is the standard case of a 2-space with an orthonormal basis. Making the replacements in the
table gives:

C6 = C5 ê. 8e1 û e1 Ø +1, e2 û e2 Ø +1<; PaletteForm@C6 D

1 e1 e2 e1 Ô e2
e1 1 e1 Ô e2 e2
e2 -He1 Ô e2 L 1 -e1
e1 Ô e2 -e2 e1 -1

Inspecting this table for interesting structures or substructures, we note first that the even
subalgebra (that is, the algebra based on products of the even basis elements) is isomorphic to the
complex algebra. For our own explorations we can use the palette to construct a product table for
the subalgebra, or we can create a table using the GrassmannAlgebra function TableTemplate,
and edit it by deleting the middle rows and columns.

TableTemplate@C6 D

1 e1 e2 e1 Ô e2
e1 1 e1 Ô e2 e2
e2 -He1 Ô e2 L 1 -e1
e1 Ô e2 -e2 e1 -1

1 e1 Ô e2
e1 Ô e2 -1

If we want to create a palette for the subalgebra, we have to edit the normal list (matrix) form and
then apply PaletteForm. For even subalgebras we can also apply the GrassmannAlgebra
function EvenCliffordProductTable which creates a Clifford product table from just the
basis elements of even grade. We then set the metric we want, convert the Clifford product to
scalar products, evaluate the scalar products according to the chosen metric, and display the
resulting table as a palette.


C7 = EvenCliffordProductTable@D; DeclareMetric@881, 0<, 80, 1<<D;


PaletteForm@ToMetricForm@ToScalarProducts@C7 DDD

1 e1 Ô e2
e1 Ô e2 -1

Here we see that the basis element e1 Ôe2 has the property that its (Clifford) square is -1. We can
see how this arises by carrying out the elementary operations on the product. Note that
e1 ûe1 ãe2 ûe2 ã1 since we have assumed the 2-space under consideration is Euclidean.

He1 Ô e2 L ù He1 Ô e2 L ã -He2 Ô e1 L ù He1 Ô e2 L ã -He1 û e1 L He2 û e2 L ã -1

Thus the pair 81,e1 Ôe2 < generates an algebra under the Clifford product operation, isomorphic to
the complex algebra. It is also the basis of the even Grassmann algebra of L2.
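The derivation above can be mirrored with ordinary matrices. In the following Python sketch (an illustrative model, not part of the GrassmannAlgebra package) e1 and e2 are represented by two anticommuting 2×2 matrices that square to the identity, so their matrix product models e1Ôe2 and exhibits the complex structure directly.

```python
# Two anticommuting 2x2 matrices squaring to the identity serve as an assumed
# model of an orthonormal basis {e1, e2} of the Euclidean plane.
def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

I2 = [[1, 0], [0, 1]]
e1 = [[1, 0], [0, -1]]
e2 = [[0, 1], [1, 0]]
assert mm(e1, e1) == I2 and mm(e2, e2) == I2   # e1 e1 = e2 e2 = 1

# The even basis element e1 e2 (modelling e1 ^ e2) has Clifford square -1:
e12 = mm(e1, e2)
assert mm(e12, e12) == [[-1, 0], [0, -1]]

# Hence a + b (e1 ^ e2) multiplies exactly like the complex number a + b i:
def even(a, b):
    return [[a * I2[i][j] + b * e12[i][j] for j in range(2)] for i in range(2)]

assert mm(even(2, 3), even(5, -4)) == even(22, 7)   # (2+3i)(5-4i) = 22+7i
```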

Ÿ Case 2: 8e1 û e1 Ø +1, e2 û e2 Ø -1<


Here is an example of a Clifford algebra which does not have any popular applications of which
the author is aware.

C7 = C5 ê. 8e1 û e1 Ø +1, e2 û e2 Ø -1<; PaletteForm@C7 D

1 e1 e2 e1 Ô e2
e1 1 e1 Ô e2 e2
e2 -He1 Ô e2 L -1 e1
e1 Ô e2 -e2 -e1 1

Ÿ Case 3: 8e1 û e1 Ø -1, e2 û e2 Ø -1<


Our third case, in which the metric is {{-1,0},{0,-1}}, is isomorphic to the quaternions.

C8 = C5 ê. 8e1 û e1 Ø -1, e2 û e2 Ø -1<; PaletteForm@C8 D

1 e1 e2 e1 Ô e2
e1 -1 e1 Ô e2 -e2
e2 -He1 Ô e2 L -1 e1
e1 Ô e2 e2 -e1 -1

We can see this isomorphism more clearly by substituting the usual quaternion symbols (here we
use Mathematica's double-struck symbols and choose the correspondence 8e1 Ø3,e2 Ø4,e1 Ôe2 Ø!<).


C9 = C8 ê. 8e1 Ø 3, e2 Ø 4, e1 Ô e2 Ø !<; PaletteForm@C9 D

1 $ % &
$ -1 & -%
% -& -1 $
& % -$ -1

Having verified that the structure is indeed quaternionic, let us return to the original specification
in terms of the basis of the Grassmann algebra. A quaternion can be written in terms of these basis
elements as:

Q ã a + b e1 + c e2 + d e1 Ô e2 ã Ha + b e1 L + Hc + d e1 L Ô e2

Now, because e1 and e2 are orthogonal, e1 Ôe2 is equal to e1 ùe2 . But for any further calculations
we will need to use the Clifford product form. Hence we write

Q ã a + b e1 + c e2 + d e1 ù e2 ã Ha + b e1 L + Hc + d e1 L ù e2 12.66

Hence under one interpretation, each of e1 and e2 and their Clifford product e1 ùe2 behaves as a
different imaginary unit. Under the second interpretation, a quaternion is a complex number with
imaginary unit e2 , whose components are complex numbers based on e1 as the imaginary unit.
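As a numerical cross-check, here is a Python sketch using an assumed 2×2 complex-matrix model of this algebra (e1 and e2 taken as i times the first two Pauli matrices; not part of the GrassmannAlgebra package). It verifies the defining quaternion relations and two sample entries of the product table.

```python
def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# Assumed representation: e1 = i*sigma1, e2 = i*sigma2, so e1 e1 = e2 e2 = -1
# as required by the metric {{-1,0},{0,-1}}.
e1 = [[0, 1j], [1j, 0]]
e2 = [[0, 1], [-1, 0]]
e12 = mm(e1, e2)                  # plays the role of e1 ^ e2
mI = [[-1, 0], [0, -1]]

# Quaternion relations i^2 = j^2 = k^2 = ijk = -1 with i = e1, j = e2, k = e1 e2:
assert mm(e1, e1) == mI and mm(e2, e2) == mI and mm(e12, e12) == mI
assert mm(e1, mm(e2, e12)) == mI

# Cyclic products match the table: e2 e12 = e1 and e12 e1 = e2.
assert mm(e2, e12) == e1
assert mm(e12, e1) == e2
```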

12.15 Clifford Algebras of a 3-Space

Ÿ The Clifford product table in 3-space


In this section we explore the Clifford algebras of 3-space. First we declare a (not necessarily
orthogonal) basis for the 3-space, and generate the associated Clifford product table. Because of
the size of the table, only the first few columns are shown in the print version.


&3 ; C1 = CliffordProductTable@D; PaletteForm@C1 D

1ù1 1 ù e1 1 ù e2 1 ù e3
e1 ù 1 e1 ù e1 e1 ù e2 e1 ù e3
e2 ù 1 e2 ù e1 e2 ù e2 e2 ù e3
e3 ù 1 e3 ù e1 e3 ù e2 e3 ù e3
He1 Ô e2 L ù 1 He1 Ô e2 L ù e1 He1 Ô e2 L ù e2 He1 Ô e2 L ù e3
He1 Ô e3 L ù 1 He1 Ô e3 L ù e1 He1 Ô e3 L ù e2 He1 Ô e3 L ù e3
He2 Ô e3 L ù 1 He2 Ô e3 L ù e1 He2 Ô e3 L ù e2 He2 Ô e3 L ù e3
He1 Ô e2 Ô e3 L ù 1 He1 Ô e2 Ô e3 L ù e1 He1 Ô e2 Ô e3 L ù e2 He1 Ô e2 Ô e3 L ù

Ÿ !"3 : The Pauli algebra


Our first exploration is of Clifford algebras in Euclidean space, hence we accept the default metric
which was automatically declared when we declared the basis.

MatrixForm@MetricD
1 0 0
0 1 0
0 0 1

Since the basis elements are orthogonal, we can use the GrassmannAlgebra function
CliffordToOrthogonalScalarProducts for computing the Clifford products. This
function is faster for larger calculations than the more general ToScalarProducts used in the
previous examples.

C2 = ToMetricForm@CliffordToOrthogonalScalarProducts@C1 DD;
PaletteForm@C2 D

1 e1 e2 e3 e1 Ô e2 e1
e1 1 e1 Ô e2 e1 Ô e3 e2 e3
e2 -He1 Ô e2 L 1 e2 Ô e3 -e1 -He1 Ô
e3 -He1 Ô e3 L -He2 Ô e3 L 1 e1 Ô e2 Ô e3 -
e1 Ô e2 -e2 e1 e1 Ô e2 Ô e3 -1 -He2
e1 Ô e3 -e3 -He1 Ô e2 Ô e3 L e1 e2 Ô e3 -
e2 Ô e3 e1 Ô e2 Ô e3 -e3 e2 -He1 Ô e3 L e1
e1 Ô e2 Ô e3 e2 Ô e3 -He1 Ô e3 L e1 Ô e2 -e3 e2

We can show that this Clifford algebra of Euclidean 3-space is isomorphic to the Pauli algebra.
Pauli's representation of the algebra was by means of 2×2 matrices over the complex field:


0 1 0 -Â 1 0
s1 ã K O s2 ã K O s3 ã K O
1 0 Â 0 0 -1

The isomorphism is direct and straightforward:

8I ¨ 1, s1 ¨ e1 , s2 ¨ e2 , s3 ¨ e3 , s1 s2 ¨ e1 Ô e2 ,
s1 s3 ¨ e1 Ô e3 , s2 s3 ¨ e2 Ô e3 , s1 s2 s3 ¨ e1 Ô e2 Ô e3 <

To show this we can:


1) Replace the symbols for the basis elements of the Grassmann algebra in the table above by
symbols for the Pauli matrices. Replace the exterior product operation by the matrix product
operation.
2) Replace the symbols by the actual matrices, allowing Mathematica to perform the matrix
products.
3) Replace the resulting Pauli matrices with the corresponding basis elements of the Grassmann
algebra.
4) Verify that the resulting table is the same as the original table.

We perform these steps in sequence. Because of the size of the output, only the electronic version
will show the complete tables.

Ï Step 1: Replace symbols for entities and operations

C3 = HC2 êê ReplaceNegativeUnitL ê. 81 Ø I, e1 Ø s1 , e2 Ø s2 , e3 Ø s3 < ê.


Wedge Ø Dot; PaletteForm@C3 D

I s1 s2 s3 s1 .s2 s1 .s3 s2
s1 I s1 .s2 s1 .s3 s2 s3 s1 .
s2 -s1 .s2 I s2 .s3 -s1 -s1 .s2 .s3 s3
s3 -s1 .s3 -s2 .s3 I s1 .s2 .s3 -s1 -
s1 .s2 -s2 s1 s1 .s2 .s3 -I -s2 .s3 s1
s1 .s3 -s3 -s1 .s2 .s3 s1 s2 .s3 -I -s1
s2 .s3 s1 .s2 .s3 -s3 s2 -s1 .s3 s1 .s2
s1 .s2 .s3 s2 .s3 -s1 .s3 s1 .s2 -s3 s2 -


Ï Step 2: Substitute matrices and calculate

1 0 0 1 0 -Â 1 0
C4 = C3 ê. :I Ø K O , s1 Ø K O , s2 Ø K O, s3 Ø K O>;
0 1 1 0 Â 0 0 -1
MatrixForm@C4 D
1 0 0 1 0 -Â 1 0 Â 0 0 -1 0 Â Â 0
K O K O K O K O K O K O K O K O
0 1 1 0 Â 0 0 -1 0 -Â 1 0 Â 0 0 Â
0 1 1 0 Â 0 0 -1 0 -Â 1 0 Â 0 0 Â
K O K O K O K O K O K O K O K O
1 0 0 1 0 -Â 1 0 Â 0 0 -1 0 Â Â 0
0 -Â -Â 0 1 0 0 Â 0 -1 -Â 0 1 0 0 1
K O K O K O K O K O K O K O K O
 0 0  0 1  0 -1 0 0 - 0 -1 -1 0
1 0 0 1 0 -Â 1 0 Â 0 0 -1 0 Â Â 0
K O K O K O K O K O K O K O K O
0 -1 -1 0 -Â 0 0 1 0 Â -1 0 -Â 0 0 -Â
 0 0  0 1  0 -1 0 0 - 0 -1 -1 0
K O K O K O K O K O K O K O K O
0 -Â -Â 0 1 0 0 Â 0 -1 -Â 0 1 0 0 1
0 -1 -1 0 -Â 0 0 1 0 Â -1 0 -Â 0 0 -Â
K O K O K O K O K O K O K O K O
1 0 0 1 0 -Â 1 0 Â 0 0 -1 0 Â Â 0
0 Â Â 0 -1 0 0 -Â 0 1 Â 0 -1 0 0 -1
K O K O K O K O K O K O K O K O
 0 0  0 1  0 -1 0 0 - 0 -1 -1 0
 0 0  0 1  0 -1 0 0 - 0 -1 -1 0
K O K O K O K O K O K O K O K O
0 Â Â 0 -1 0 0 -Â 0 1 Â 0 -1 0 0 -1

Ï Step 3:

Here we let the first row (column) of the product table correspond back to the basis elements of the
Grassmann representation, and make the substitution throughout the whole table.

C5 = C4 ê. Thread@First@C4 D Ø BasisL@DD ê.
Thread@-First@C4D Ø -BasisL@DD
881, e1 , e2 , e3 , e1 Ô e2 , e1 Ô e3 , e2 Ô e3 , e1 Ô e2 Ô e3 <,
8e1 , 1, e1 Ô e2 , e1 Ô e3 , e2 , e3 , e1 Ô e2 Ô e3 , e2 Ô e3 <,
8e2 , -He1 Ô e2 L, 1, e2 Ô e3 , -e1 , -He1 Ô e2 Ô e3 L, e3 , -He1 Ô e3 L<,
8e3 , -He1 Ô e3 L, -He2 Ô e3 L, 1, e1 Ô e2 Ô e3 , -e1 , -e2 , e1 Ô e2 <,
8e1 Ô e2 , -e2 , e1 , e1 Ô e2 Ô e3 , -1, -He2 Ô e3 L, e1 Ô e3 , -e3 <,
8e1 Ô e3 , -e3 , -He1 Ô e2 Ô e3 L, e1 , e2 Ô e3 , -1, -He1 Ô e2 L, e2 <,
8e2 Ô e3 , e1 Ô e2 Ô e3 , -e3 , e2 , -He1 Ô e3 L, e1 Ô e2 , -1, -e1 <,
8e1 Ô e2 Ô e3 , e2 Ô e3 , -He1 Ô e3 L, e1 Ô e2 , -e3 , e2 , -e1 , -1<<

Ï Step 4: Verification

Verify that this is the table with which we began.

C5 ã C2
True
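The same verification can be scripted outside Mathematica. This Python sketch (independent of the GrassmannAlgebra package) checks the defining properties of the Pauli matrices and one sample entry of the 8×8 table.

```python
def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def neg(A):
    return [[-x for x in row] for row in A]

s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]

# Unit squares and anticommutation, matching ei ei = 1 and ei ej = -ej ei:
for s in (s1, s2, s3):
    assert mm(s, s) == I2
for a, b in ((s1, s2), (s1, s3), (s2, s3)):
    assert mm(a, b) == neg(mm(b, a))

# A sample entry of the table: (s1 s2)(s1 s3) = -(s2 s3),
# just as (e1 ^ e2)(e1 ^ e3) = -(e2 ^ e3) above.
assert mm(mm(s1, s2), mm(s1, s3)) == neg(mm(s2, s3))
```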


Ÿ !"3 + : The Quaternions


Multiplication of two even elements always generates an even element, hence the even elements
form a subalgebra. In this case the basis for the subalgebra is composed of the unit 1 and the
bivectors ei Ôej .

C4 = EvenCliffordProductTable@D êê
CliffordToOrthogonalScalarProducts êê
ToMetricForm; PaletteForm@C4 D

1 e1 Ô e2 e1 Ô e3 e2 Ô e3
e1 Ô e2 -1 -He2 Ô e3 L e1 Ô e3
e1 Ô e3 e2 Ô e3 -1 -He1 Ô e2 L
e2 Ô e3 -He1 Ô e3 L e1 Ô e2 -1

From this multiplication table we can see that the even subalgebra of the Clifford algebra of 3-
space is isomorphic to the quaternions. To see the isomorphism more clearly, replace the bivectors
by 3, 4, and !.

C5 = C4 ê. 8e2 Ô e3 Ø 3, e1 Ô e3 Ø 4, e1 Ô e2 Ø !<; PaletteForm@C5 D

1 & % $
& -1 -$ %
% $ -1 -&
$ -% & -1

The Complex subalgebra


The subalgebra generated by the pair of basis elements 81,e1 Ôe2 Ôe3 < is isomorphic to the
complex algebra. Although, under the Clifford product, each of the 2-elements behaves like an
imaginary unit, it is only the 3-element e1 Ôe2 Ôe3 that also commutes with each of the other basis
elements.
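A short Python sketch (again using the Pauli matrix model, not the GrassmannAlgebra package) makes the distinction concrete: the 3-element corresponds to s1 s2 s3 = i I, which both squares to -1 and commutes with every basis element, whereas a 2-element squares to -1 but does not commute with all of them.

```python
def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

# e1 ^ e2 ^ e3 corresponds to s1 s2 s3 = i I: a central imaginary unit.
p = mm(mm(s1, s2), s3)
assert p == [[1j, 0], [0, 1j]]
assert mm(p, p) == [[-1, 0], [0, -1]]
for s in (s1, s2, s3):
    assert mm(p, s) == mm(s, p)

# A 2-element such as e1 ^ e2 also squares to -1 but fails to commute with e1:
b = mm(s1, s2)
assert mm(b, s1) != mm(s1, b)
```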

Ÿ Biquaternions
We now explore the metric in which 8e1 û e1 Ø -1, e2 û e2 Ø -1, e3 û e3 Ø -1<.

To generate the Clifford product table for this metric we enter:


DeclareMetric@DiagonalMatrix@8-1, -1, -1<DD;


C6 = CliffordProductTable@D êê ToScalarProducts êê ToMetricForm;
PaletteForm@C6 D

1 e1 e2 e3 e1 Ô e2 e1
e1 -1 e1 Ô e2 e1 Ô e3 -e2 -
e2 -He1 Ô e2 L -1 e2 Ô e3 e1 -He1 Ô
e3 -He1 Ô e3 L -He2 Ô e3 L -1 e1 Ô e2 Ô e3 e1
e1 Ô e2 e2 -e1 e1 Ô e2 Ô e3 -1 e2
e1 Ô e3 e3 -He1 Ô e2 Ô e3 L -e1 -He2 Ô e3 L -
e2 Ô e3 e1 Ô e2 Ô e3 e3 -e2 e1 Ô e3 -He1
e1 Ô e2 Ô e3 -He2 Ô e3 L e1 Ô e3 -He1 Ô e2 L -e3 e2

Note that every one of the basis elements of the Grassmann algebra (except 1 and e1 Ôe2 Ôe3 ) acts
as an imaginary unit under the Clifford product. This enables us to build up a general element of
the algebra as a sum of nested complex numbers, or of nested quaternions. To show this, we begin
by writing a general element in terms of the basis elements of the Grassmann algebra:

QQ ã a + b e1 + c e2 + d e3 + e e1 Ô e2 + f e1 Ô e3 + g e2 Ô e3 + h e1 Ô e2 Ô e3

Or, as previously argued, since the basis 1-elements are orthogonal, we can replace the exterior
product by the Clifford product and rearrange the terms in the sum to give

QQ ã Ha + b e1 + c e2 + e e1 ù e2 L + Hd + f e1 + g e2 + h e1 ù e2 L ù e3

This sum may be viewed as the complex sum of two quaternions

Q1 ã a + b e1 + c e2 + e e1 ù e2 ã Ha + b e1 L + Hc + e e1 L ù e2
Q2 ã d + f e1 + g e2 + h e1 ù e2 ã Hd + f e1 L + Hg + h e1 L ù e2

QQ ã Q1 + Q2 ù e3

Historical Note

This complex sum of two quaternions was called a biquaternion by Hamilton [Hamilton, Elements
of Quaternions, p133] but Clifford in a footnote to his Preliminary Sketch of Biquaternions
[Clifford, Mathematical Papers, Chelsea] says "Hamilton's biquaternion is a quaternion with
complex coefficients; but it is convenient (as Prof. Pierce remarks) to suppose from the beginning
that all scalars may be complex. As the word is thus no longer wanted in its old meaning, I have
made bold to use it in a new one."

Hamilton uses the word biscalar for a complex number and bivector [p 225 Elements of
Quaternions] for a complex vector, that is, for a vector x + Â y, where x and y are vectors; and the
word biquaternion for a complex quaternion q0 + Â q1 , where q0 and q1 are quaternions. He
emphasizes here that "… Â is the (scalar) imaginary of algebra, and not a symbol for a
geometrically real right versor …"


Hamilton introduces his biquaternion as the quotient of a bivector (his usage) by a (real) vector.

12.16 Clifford Algebras of a 4-Space

Ÿ The Clifford product table in 4-space


In this section we explore the Clifford algebras of 4-space. First we declare a (not necessarily
orthogonal) basis for the 4-space, and generate the associated Clifford product table. Because of
the size of the table, only the first few columns are shown in the print version.

&4 ; C1 = CliffordProductTable@D; PaletteForm@C1 D

1ù1 1 ù e1 1 ù e2
e1 ù 1 e1 ù e1 e1 ù e2
e2 ù 1 e2 ù e1 e2 ù e2
e3 ù 1 e3 ù e1 e3 ù e2
e4 ù 1 e4 ù e1 e4 ù e2
He1 Ô e2 L ù 1 He1 Ô e2 L ù e1 He1 Ô e2 L ù e2
He1 Ô e3 L ù 1 He1 Ô e3 L ù e1 He1 Ô e3 L ù e2
He1 Ô e4 L ù 1 He1 Ô e4 L ù e1 He1 Ô e4 L ù e2
He2 Ô e3 L ù 1 He2 Ô e3 L ù e1 He2 Ô e3 L ù e2
He2 Ô e4 L ù 1 He2 Ô e4 L ù e1 He2 Ô e4 L ù e2
He3 Ô e4 L ù 1 He3 Ô e4 L ù e1 He3 Ô e4 L ù e2
He1 Ô e2 Ô e3 L ù 1 He1 Ô e2 Ô e3 L ù e1 He1 Ô e2 Ô e3 L ù e2 H
He1 Ô e2 Ô e4 L ù 1 He1 Ô e2 Ô e4 L ù e1 He1 Ô e2 Ô e4 L ù e2 H
He1 Ô e3 Ô e4 L ù 1 He1 Ô e3 Ô e4 L ù e1 He1 Ô e3 Ô e4 L ù e2 H
He2 Ô e3 Ô e4 L ù 1 He2 Ô e3 Ô e4 L ù e1 He2 Ô e3 Ô e4 L ù e2 H
He1 Ô e2 Ô e3 Ô e4 L ù 1 He1 Ô e2 Ô e3 Ô e4 L ù e1 He1 Ô e2 Ô e3 Ô e4 L ù e2 He1

Ÿ !"4 : The Clifford algebra of Euclidean 4-space



Our first exploration is of the Clifford algebra of Euclidean space, hence we accept the default
metric which was automatically declared when we declared the basis.

Metric
881, 0, 0, 0<, 80, 1, 0, 0<, 80, 0, 1, 0<, 80, 0, 0, 1<<

Since the basis elements are orthogonal, we can use the faster GrassmannAlgebra function
CliffordToOrthogonalScalarProducts for computing the Clifford products.

C2 = CliffordToOrthogonalScalarProducts@C1 D êê ToMetricForm;
PaletteForm@C2 D

1 e1 e2 e3
e1 1 e1 Ô e2 e1 Ô e3
e2 -He1 Ô e2 L 1 e2 Ô e3
e3 -He1 Ô e3 L -He2 Ô e3 L 1
e4 -He1 Ô e4 L -He2 Ô e4 L -He3 Ô e4 L
e1 Ô e2 -e2 e1 e1 Ô e2 Ô e3
e1 Ô e3 -e3 -He1 Ô e2 Ô e3 L e1
e1 Ô e4 -e4 -He1 Ô e2 Ô e4 L -He1 Ô e3 Ô e4 L
e2 Ô e3 e1 Ô e2 Ô e3 -e3 e2
e2 Ô e4 e1 Ô e2 Ô e4 -e4 -He2 Ô e3 Ô e4 L
e3 Ô e4 e1 Ô e3 Ô e4 e2 Ô e3 Ô e4 -e4
e1 Ô e2 Ô e3 e2 Ô e3 -He1 Ô e3 L e1 Ô e2
e1 Ô e2 Ô e4 e2 Ô e4 -He1 Ô e4 L -He1 Ô e2 Ô e3 Ô e4
e1 Ô e3 Ô e4 e3 Ô e4 e1 Ô e2 Ô e3 Ô e4 -He1 Ô e4 L
e2 Ô e3 Ô e4 -He1 Ô e2 Ô e3 Ô e4 L e3 Ô e4 -He2 Ô e4 L
e1 Ô e2 Ô e3 Ô e4 -He2 Ô e3 Ô e4 L e1 Ô e3 Ô e4 -He1 Ô e2 Ô e4 L

This table is well off the page in the printed version, but we can condense the notation for the basis
elements of L by replacing ei Ô!Ôej by ei,!,j . To do this we can use the GrassmannAlgebra
function ToBasisIndexedForm. For example


1 - He1 Ô e2 Ô e3 L êê ToBasisIndexedForm
1 - e1,2,3

To display the table in condensed notation we make up a rule for each of the basis elements.

C3 =
C2 ê. Reverse@Thread@BasisL@D Ø ToBasisIndexedForm@BasisL@DDDD;
PaletteForm@C3 D

1 e1 e2 e3 e4 e1,2 e1,3 e
e1 1 e1,2 e1,3 e1,4 e2 e3 e4
e2 -e1,2 1 e2,3 e2,4 -e1 -e1,2,3 -e
e3 -e1,3 -e2,3 1 e3,4 e1,2,3 -e1 -e
e4 -e1,4 -e2,4 -e3,4 1 e1,2,4 e1,3,4 -
e1,2 -e2 e1 e1,2,3 e1,2,4 -1 -e2,3 -
e1,3 -e3 -e1,2,3 e1 e1,3,4 e2,3 -1 -
e1,4 -e4 -e1,2,4 -e1,3,4 e1 e2,4 e3,4 -
e2,3 e1,2,3 -e3 e2 e2,3,4 -e1,3 e1,2 e1,
e2,4 e1,2,4 -e4 -e2,3,4 e2 -e1,4 -e1,2,3,4 e
e3,4 e1,3,4 e2,3,4 -e4 e3 e1,2,3,4 -e1,4 e
e1,2,3 e2,3 -e1,3 e1,2 e1,2,3,4 -e3 e2 e2
e1,2,4 e2,4 -e1,4 -e1,2,3,4 e1,2 -e4 -e2,3,4 e2
e1,3,4 e3,4 e1,2,3,4 -e1,4 e1,3 e2,3,4 -e4 e3
e2,3,4 -e1,2,3,4 e3,4 -e2,4 e2,3 -e1,3,4 e1,2,4 -e
e1,2,3,4 -e2,3,4 e1,3,4 -e1,2,4 e1,2,3 -e3,4 e2,4 -

Ÿ !"4 + : The even Clifford algebra of Euclidean 4-space


The even subalgebra is composed of the unit 1, the bivectors ei Ôej , and the single basis 4-element
e1 Ôe2 Ôe3 Ôe4 .


C3 = EvenCliffordProductTable@D êê
CliffordToOrthogonalScalarProducts êê
ToMetricForm; PaletteForm@C3 D

1 e1 Ô e2 e1 Ô e3 e1 Ô e4
e1 Ô e2 -1 -He2 Ô e3 L -He2 Ô e4 L
e1 Ô e3 e2 Ô e3 -1 -He3 Ô e4 L
e1 Ô e4 e2 Ô e4 e3 Ô e4 -1 e1
e2 Ô e3 -He1 Ô e3 L e1 Ô e2 e1 Ô e2 Ô e3 Ô e4
e2 Ô e4 -He1 Ô e4 L -He1 Ô e2 Ô e3 Ô e4 L e1 Ô e2
e3 Ô e4 e1 Ô e2 Ô e3 Ô e4 -He1 Ô e4 L e1 Ô e3
e1 Ô e2 Ô e3 Ô e4 -He3 Ô e4 L e2 Ô e4 -He2 Ô e3 L

Ÿ !"1,3 : The Dirac algebra

To generate the Dirac algebra we need the Minkowski metric, in which there is one time-like basis
element with e1 ûe1 ã+1, and three space-like basis elements with ei ûei ã-1.

8e1 û e1 ã 1, e2 û e2 ã -1, e3 û e3 ã -1, e4 û e4 ã -1<

To generate the Clifford product table for this metric we enter:

&4 ; DeclareMetric@DiagonalMatrix@81, -1, -1, -1<DD


881, 0, 0, 0<, 80, -1, 0, 0<, 80, 0, -1, 0<, 80, 0, 0, -1<<


C6 = CliffordToOrthogonalScalarProducts@C1 D êê ToMetricForm;
PaletteForm@C6 D

1 e1 e2 e3
e1 1 e1 Ô e2 e1 Ô e3
e2 -He1 Ô e2 L -1 e2 Ô e3
e3 -He1 Ô e3 L -He2 Ô e3 L -1
e4 -He1 Ô e4 L -He2 Ô e4 L -He3 Ô e4 L
e1 Ô e2 -e2 -e1 e1 Ô e2 Ô e3
e1 Ô e3 -e3 -He1 Ô e2 Ô e3 L -e1
e1 Ô e4 -e4 -He1 Ô e2 Ô e4 L -He1 Ô e3 Ô e4 L
e2 Ô e3 e1 Ô e2 Ô e3 e3 -e2
e2 Ô e4 e1 Ô e2 Ô e4 e4 -He2 Ô e3 Ô e4 L
e3 Ô e4 e1 Ô e3 Ô e4 e2 Ô e3 Ô e4 e4
e1 Ô e2 Ô e3 e2 Ô e3 e1 Ô e3 -He1 Ô e2 L
e1 Ô e2 Ô e4 e2 Ô e4 e1 Ô e4 -He1 Ô e2 Ô e3 Ô e4
e1 Ô e3 Ô e4 e3 Ô e4 e1 Ô e2 Ô e3 Ô e4 e1 Ô e4
e2 Ô e3 Ô e4 -He1 Ô e2 Ô e3 Ô e4 L -He3 Ô e4 L e2 Ô e4
e1 Ô e2 Ô e3 Ô e4 -He2 Ô e3 Ô e4 L -He1 Ô e3 Ô e4 L e1 Ô e2 Ô e4

To confirm that this structure is isomorphic to the Dirac algebra we can go through the same
procedure that we followed in the case of the Pauli algebra.


Ï Step 1: Replace symbols for entities and operations

C7 = HC6 êê ReplaceNegativeUnitL ê.
81 Ø I, e1 Ø g0 , e2 Ø g1 , e3 Ø g2 , e4 Ø g3 < ê.
Wedge Ø Dot; PaletteForm@C7 D

I g0 g1 g2 g3
g0 I g0 .g1 g0 .g2 g0 .g3
g1 -g0 .g1 -I g1 .g2 g1 .g3
g2 -g0 .g2 -g1 .g2 -I g2 .g3
g3 -g0 .g3 -g1 .g3 -g2 .g3 -I
g0 .g1 -g1 -g0 g0 .g1 .g2 g0 .g1 .g3
g0 .g2 -g2 -g0 .g1 .g2 -g0 g0 .g2 .g3
g0 .g3 -g3 -g0 .g1 .g3 -g0 .g2 .g3 -g0
g1 .g2 g0 .g1 .g2 g2 -g1 g1 .g2 .g3
g1 .g3 g0 .g1 .g3 g3 -g1 .g2 .g3 -g1
g2 .g3 g0 .g2 .g3 g1 .g2 .g3 g3 -g2
g0 .g1 .g2 g1 .g2 g0 .g2 -g0 .g1 g0 .g1 .g2 .
g0 .g1 .g3 g1 .g3 g0 .g3 -g0 .g1 .g2 .g3 -g0 .g1
g0 .g2 .g3 g2 .g3 g0 .g1 .g2 .g3 g0 .g3 -g0 .g2
g1 .g2 .g3 -g0 .g1 .g2 .g3 -g2 .g3 g1 .g3 -g1 .g2
g0 .g1 .g2 .g3 -g1 .g2 .g3 -g0 .g2 .g3 g0 .g1 .g3 -g0 .g1 .g2

Ï Step 2: Substitute matrices and calculate


1 0 0 0 1 0 0 0 0 0 0 -1
0 1 0 0 0 1 0 0 0 0 -1 0
C8 = C7 ê. :I Ø , g0 Ø , g1 Ø ,
0 0 1 0 0 0 -1 0 0 1 0 0
0 0 0 1 0 0 0 -1 1 0 0 0

0 0 0 Â 0 0 -1 0
0 0 -Â 0 0 0 0 1
g2 Ø , g3 Ø >; MatrixForm@C8 D
0 -Â 0 0 1 0 0 0
 0 0 0 0 -1 0 0


(The output, a 16×16 table of 4×4 matrices, is too large to display legibly in the print version; only the electronic version shows the complete table.)

Ï Step 3:


C9 = C8 ê. Thread@First@C8 D Ø BasisL@DD ê.
Thread@-First@C8 D Ø -BasisL@DD;

Ï Step 4: Verification

C9 ã C6
True
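The anticommutation relations underlying this verification can be confirmed directly. This Python sketch (independent of the GrassmannAlgebra package) enters the Dirac matrices of Step 2 as plain lists and checks that their anticommutators reproduce the Minkowski metric.

```python
def mm(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# The Dirac matrices from Step 2 (1j is Python's imaginary unit).
g0 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
g1 = [[0, 0, 0, -1], [0, 0, -1, 0], [0, 1, 0, 0], [1, 0, 0, 0]]
g2 = [[0, 0, 0, 1j], [0, 0, -1j, 0], [0, -1j, 0, 0], [1j, 0, 0, 0]]
g3 = [[0, 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]]

# The Minkowski metric diag(1,-1,-1,-1) reappears as the anticommutator
# g_mu g_nu + g_nu g_mu = 2 eta_mu_nu I, the matrix form of the scalar products.
eta = [1, -1, -1, -1]
gs = [g0, g1, g2, g3]
for mu in range(4):
    for nu in range(4):
        s, t = mm(gs[mu], gs[nu]), mm(gs[nu], gs[mu])
        expected = 2 * eta[mu] if mu == nu else 0
        assert all(s[i][j] + t[i][j] == (expected if i == j else 0)
                   for i in range(4) for j in range(4))
```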

Ÿ !"0,4 : The Clifford algebra of complex quaternions

To generate the Clifford algebra of complex quaternions we need a metric in which the scalar
products ei ûei of the orthogonal basis elements are all equal to -1 (with ei ûej ã 0 for i not equal
to j). That is

8e1 û e1 ã -1, e2 û e2 ã -1, e3 û e3 ã -1, e4 û e4 ã -1<

To generate the Clifford product table for this metric we enter:

&4 ; DeclareMetric@DiagonalMatrix@8-1, -1, -1, -1<DD


88-1, 0, 0, 0<, 80, -1, 0, 0<, 80, 0, -1, 0<, 80, 0, 0, -1<<


C6 = CliffordToOrthogonalScalarProducts@C1 D êê ToMetricForm;
PaletteForm@C6 D

1 e1 e2 e3
e1 -1 e1 Ô e2 e1 Ô e3
e2 -He1 Ô e2 L -1 e2 Ô e3
e3 -He1 Ô e3 L -He2 Ô e3 L -1
e4 -He1 Ô e4 L -He2 Ô e4 L -He3 Ô e4 L
e1 Ô e2 e2 -e1 e1 Ô e2 Ô e3
e1 Ô e3 e3 -He1 Ô e2 Ô e3 L -e1
e1 Ô e4 e4 -He1 Ô e2 Ô e4 L -He1 Ô e3 Ô e4 L
e2 Ô e3 e1 Ô e2 Ô e3 e3 -e2
e2 Ô e4 e1 Ô e2 Ô e4 e4 -He2 Ô e3 Ô e4 L
e3 Ô e4 e1 Ô e3 Ô e4 e2 Ô e3 Ô e4 e4
e1 Ô e2 Ô e3 -He2 Ô e3 L e1 Ô e3 -He1 Ô e2 L
e1 Ô e2 Ô e4 -He2 Ô e4 L e1 Ô e4 -He1 Ô e2 Ô e3 Ô e4
e1 Ô e3 Ô e4 -He3 Ô e4 L e1 Ô e2 Ô e3 Ô e4 e1 Ô e4
e2 Ô e3 Ô e4 -He1 Ô e2 Ô e3 Ô e4 L -He3 Ô e4 L e2 Ô e4
e1 Ô e2 Ô e3 Ô e4 e2 Ô e3 Ô e4 -He1 Ô e3 Ô e4 L e1 Ô e2 Ô e4

Note that both the vectors and the bivectors act as imaginary units under the Clifford product. This
enables us to build up a general element of the algebra as a sum of nested complex numbers,
quaternions, or complex quaternions. To show this, we begin by writing a general element in terms
of the basis elements of the Grassmann algebra:

X = CreateGrassmannNumber@aD

a0 + e1 a1 + e2 a2 + e3 a3 + e4 a4 + a5 e1 Ô e2 + a6 e1 Ô e3 +
a7 e1 Ô e4 + a8 e2 Ô e3 + a9 e2 Ô e4 + a10 e3 Ô e4 + a11 e1 Ô e2 Ô e3 +
a12 e1 Ô e2 Ô e4 + a13 e1 Ô e3 Ô e4 + a14 e2 Ô e3 Ô e4 + a15 e1 Ô e2 Ô e3 Ô e4

We can then factor this expression to display it as a nested sum of numbers of the form
Ck ãai +aj e1 . (Of course, any basis element could serve as the basis for the factorization.)

X ã Ia0 + a1 e1 M + Ia2 + a5 e1 M Ô e2 +
IIa3 + a6 e1 M + Ia8 + a11 e1 M Ô e2 M Ô e3 + IIa4 + a7 e1 M +
Ia9 + a12 e1 M Ô e2 + IIa10 + a13 e1 M + Ia14 + a15 e1 M Ô e2 M Ô e3 M Ô e4

Which can be rewritten in terms of the Ci as


X ã C1 + C2 Ô e2 + HC3 + C4 Ô e2 L Ô e3 + HC5 + C6 Ô e2 + HC7 + C8 Ô e2 L Ô e3 L Ô e4

Again, we can rewrite each of these elements as Qk ãCi +Cj Ôe2 to get

X ã Q1 + Q2 Ô e3 + HQ3 + Q4 Ô e3 L Ô e4

Finally, we write QQk ãQi +Qj Ôe3 to get

X ã QQ1 + QQ2 Ô e4

Since the basis 1-elements are orthogonal, we can replace the exterior product by the Clifford
product to give

X ã Ia0 + a1 e1 M + Ia2 + a5 e1 M ù e2 +
IIa3 + a6 e1 M + Ia8 + a11 e1 M ù e2 M ù e3 + IIa4 + a7 e1 M +
Ia9 + a12 e1 M ù e2 + IIa10 + a13 e1 M + Ia14 + a15 e1 M ù e2 M ù e3 M ù e4

ã C1 + C2 ù e2 + HC3 + C4 ù e2 L ù e3 + HC5 + C6 ù e2 + HC7 + C8 ù e2 L ù e3 L ù e4

ã Q1 + Q2 ù e3 + HQ3 + Q4 ù e3 L ù e4

ã QQ1 + QQ2 ù e4
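The nested regrouping of the sixteen coefficients can be verified with a minimal blade-arithmetic sketch in Python (our own encoding of basis blades as sorted index tuples, independent of the GrassmannAlgebra package). The final assertion confirms that the factorization reproduces the flat expansion of X.

```python
from itertools import combinations

# Basis blades are sorted tuples of indices; the exterior product sign is the parity
# of the permutation that sorts the concatenation (zero if an index repeats).
def wedge(x, y):
    out = {}
    for bx, cx in x.items():
        for by, cy in y.items():
            if set(bx) & set(by):
                continue
            m = list(bx + by)
            sign = 1
            for i in range(len(m)):              # selection sort, tracking parity
                j = m.index(min(m[i:]), i)
                if j != i:
                    m[i:j + 1] = [m[j]] + m[i:j]
                    sign *= (-1) ** (j - i)
            key = tuple(m)
            out[key] = out.get(key, 0) + sign * cx * cy
    return out

def add(*xs):
    out = {}
    for x in xs:
        for b, c in x.items():
            out[b] = out.get(b, 0) + c
    return out

a = list(range(16))                              # test coefficients a0 .. a15
blades = [()] + [t for k in (1, 2, 3, 4) for t in combinations((1, 2, 3, 4), k)]
X = dict(zip(blades, a))                         # the general element above

def C(i, j):                                     # Ck = ai + aj e1
    return {(): a[i], (1,): a[j]}

e2, e3, e4 = {(2,): 1}, {(3,): 1}, {(4,): 1}

nested = add(C(0, 1), wedge(C(2, 5), e2),
             wedge(add(C(3, 6), wedge(C(8, 11), e2)), e3),
             wedge(add(C(4, 7), wedge(C(9, 12), e2),
                       wedge(add(C(10, 13), wedge(C(14, 15), e2)), e3)), e4))
assert {b: c for b, c in nested.items() if c} == {b: c for b, c in X.items() if c}
```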

12.17 Rotations
To be completed


13 Exploring Grassmann Matrix


Algebra
Ï

13.1 Introduction
This chapter introduces the concept of a matrix of Grassmann numbers, which we will call a
Grassmann matrix. Wherever it makes sense, the operations discussed will work also for listed
collections of the components of tensors of any order as per the Mathematica representation: a set
of lists nested to a certain number of levels. Thus, for example, an operation may also work for
vectors or a list containing only one element.

We begin by discussing some quick methods for generating matrices of Grassmann numbers,
particularly matrices of symbolic Grassmann numbers, where it can become tedious to define all
the scalar factors involved. We then discuss the common algebraic operations like multiplication
by scalars, addition, multiplication, taking the complement, finding the grade of elements,
simplification, taking components or determining the type of elements involved.

Separate sections are devoted to discussing the notions of transpose, determinant and adjoint of a
Grassmann matrix. These notions do not carry over directly into the algebra of Grassmann matrices
due to the non-commutative nature of Grassmann numbers.

Matrix powers are then discussed, and the inverse of a Grassmann matrix defined in terms of its
positive integer powers. Non-integer powers are defined for matrices whose bodies have distinct
eigenvalues. The determination of the eigensystem of a Grassmann matrix is discussed for this
class of matrix.

Finally, functions of Grassmann matrices are discussed based on the determination of their
eigensystems. We show that relationships that we expect of scalars, for example Log[Exp[A]] = A
or Sin@AD2 + Cos@AD2 = 1, can still apply to Grassmann matrices.
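As a small illustration of this non-commutativity (a Python sketch, independent of the GrassmannAlgebra package), here is the exterior product of general Grassmann numbers in 2-space, encoded as coefficient 4-tuples.

```python
# A Grassmann number in 2-space as a tuple (a, b, c, d) standing for
# a + b e1 + c e2 + d e1^e2; the exterior product uses e1^e1 = e2^e2 = 0.
def gmul(x, y):
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1 * a2,
            a1 * b2 + b1 * a2,
            a1 * c2 + c1 * a2,
            a1 * d2 + d1 * a2 + b1 * c2 - c1 * b2)

e1 = (0, 1, 0, 0)
e2 = (0, 0, 1, 0)
assert gmul(e1, e2) == (0, 0, 0, 1)     # e1 ^ e2
assert gmul(e2, e1) == (0, 0, 0, -1)    # e2 ^ e1 = -(e1 ^ e2)
```

Matrices whose entries multiply like this cannot rely on the usual determinant and inverse formulas, which assume commuting entries.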

13.2 Creating Matrices

Ÿ Creating general symbolic matrices


Unless a Grassmann matrix is particularly simple, and particularly when a large symbolic matrix is
required, it is useful to be able to generate the initial form of the matrix automatically. We do this
with the GrassmannAlgebra CreateMatrixForm function.

2009 9 3
Grassmann Algebra Book.nb 699

? CreateMatrixForm
CreateMatrixForm[D][X,S] constructs an array of the specified
dimensions with copies of the expression X formed by indexing
its scalars and variables with subscripts. S is an optional
list of excluded symbols. D is a list of dimensions of
the array (which may be symbolic for lists and matrices).
Note that an indexed scalar is not recognised as a scalar
unless it has an underscripted 0, or is declared as a scalar.

Suppose we require a 2×2 matrix of general Grassmann numbers of the form X (given below) in a
2-space.

X = a + b e1 + c e2 + d e1 Ô e2
a + b e1 + c e2 + d e1 Ô e2

We declare the 2-space, and then create a 2×2 matrix of the form X.

&2 ; M = CreateMatrixForm@82, 2<D@XD; MatrixForm@MD


a1,1 + e1 b1,1 + e2 c1,1 + d1,1 e1 Ô e2 a1,2 + e1 b1,2 + e2 c1,2 + d1,2 e1 Ô e2
a2,1 + e1 b2,1 + e2 c2,1 + d2,1 e1 Ô e2 a2,2 + e1 b2,2 + e2 c2,2 + d2,2 e1 Ô e2

We need to declare the extra scalars that we have formed. This can be done with just one pattern
(provided we wish all symbols with this pattern to be scalar).

DeclareExtraScalars@8Subscript@_, _, _D<D

:a, b, c, d, e, f, g, h, %, H_ û _L?InnerProductQ, __,_ , _>


0
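The idea behind CreateMatrixForm — filling an array with copies of a template expression whose scalars are indexed by the row and column of the entry — can be sketched outside Mathematica. The following Python fragment is an illustrative stand-in only (it is not part of GrassmannAlgebra; the function name and the plain-string representation are assumptions of this sketch):

```python
# Illustrative Python stand-in (not GrassmannAlgebra code) for the idea behind
# CreateMatrixForm: fill an array with copies of a template expression, indexing
# each named scalar with the row and column of its entry.

def create_matrix_form(dims, scalars, template):
    rows, cols = dims
    def entry(i, j):
        expr = template
        for s in scalars:               # naive textual substitution; adequate here
            expr = expr.replace(s, f"{s}[{i},{j}]")
        return expr
    return [[entry(i, j) for j in range(1, cols + 1)] for i in range(1, rows + 1)]

# Template X = a + b e1 + c e2 + d e1^e2, with scalars a, b, c, d
M = create_matrix_form((2, 2), ["a", "b", "c", "d"], "a + b e1 + c e2 + d e1^e2")
for row in M:
    print(row)
```

Each entry of the resulting 2×2 array carries its own subscripted copies of the scalars a, b, c and d, mirroring the structure of the matrix produced above.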

Ÿ Creating matrices with specific coefficients


To enter a Grassmann matrix with specific coefficients, enter a placeholder as the kernel symbol.
An output will be returned, which will be converted to an input form as soon as you attempt to edit
it. You can select a placeholder and enter the value you want; the tab key can be used to select the
next placeholder. The steps are:

• Generate the form you want.

Y = CreateGrassmannNumber@ÑD
Ñ + Ñ e1 + Ñ e2 + Ñ e1 Ô e2

CreateMatrixForm@82, 2<D@YD
88Ñ + Ñ e1 + Ñ e2 + Ñ e1 Ô e2 , Ñ + Ñ e1 + Ñ e2 + Ñ e1 Ô e2 <,
8Ñ + Ñ e1 + Ñ e2 + Ñ e1 Ô e2 , Ñ + Ñ e1 + Ñ e2 + Ñ e1 Ô e2 <<

• Start editing the form above.


882 + Ñ e1 + Ñ e2 + Ñ e1 Ô e2 , Ñ + Ñ e1 + Ñ e2 + Ñ e1 Ô e2 <,
8Ñ + Ñ e1 + Ñ e2 + Ñ e1 Ô e2 , Ñ + Ñ e1 + Ñ e2 + Ñ e1 Ô e2 <<
• Finish off your editing and give the matrix a name.

A = 882 + 3 e1 , e1 + 6 e2 <, 85 - 9 e1 Ô e2 , e2 + e1 Ô e2 <<; MatrixForm@AD


2 + 3 e1 e1 + 6 e2
K O
5 - 9 e1 Ô e2 e2 + e1 Ô e2

Ÿ Creating matrices with variable elements


To create a matrix in which the Grassmann numbers are expressed in terms of x, y, and z, say, you
can temporarily declare the basis as {x,y,z} before generating the form with
CreateGrassmannNumber. For example:

B = Basis; DeclareBasis@8x, y, z<D


8x, y, z<

Z = CreateGrassmannNumber@ÑD
Ñ + x Ñ + y Ñ + z Ñ + Ñ xÔy + Ñ xÔz + Ñ yÔz + Ñ xÔyÔz

Here we create a vector of elements of the form Z.

CreateMatrixForm@83<D@ZD êê MatrixForm
Ñ + x Ñ + y Ñ + z Ñ + Ñ xÔy + Ñ xÔz + Ñ yÔz + Ñ xÔyÔz
Ñ + x Ñ + y Ñ + z Ñ + Ñ xÔy + Ñ xÔz + Ñ yÔz + Ñ xÔyÔz
Ñ + x Ñ + y Ñ + z Ñ + Ñ xÔy + Ñ xÔz + Ñ yÔz + Ñ xÔyÔz

Remember to return to your original basis.

DeclareBasis@BD
8e1 , e2 , e3 <

13.3 Manipulating Matrices

Ÿ Matrix simplification
Simplification of the elements of a matrix may be effected by using GrassmannSimplify. For
example, suppose we are working in a Euclidean 3-space and have a matrix B:


&3 ; B = 881 + x Ô x, 2 + e1 û e2 <, 8e1 Ô e2 , 3 + x Ô y Ô z Ô w<<;


MatrixForm@BD
1 + xÔx 2 + e1 û e2
e1 Ô e2 3 + x Ô y Ô z Ô w

To simplify the elements of the matrix we enter:

%@BD
881, 2 + e1 û e2 <, 8e3 , 3<<

If we wish to simplify any orthogonal elements, we can use ToMetricForm. Since we are in a
Euclidean space the scalar product term is zero.

ToMetricForm@%@BDD
881, 2<, 8e3 , 3<<

Ÿ The grades of the elements of a matrix


The GrassmannAlgebra function Grade returns a matrix with each element replaced by a list
of the different grades of its components. Thus, if an element is composed of several components
of the same grade, only that one grade will be returned for that element. For example, take the
matrix A created in the previous section:

A êê MatrixForm
2 + 3 e1 e1 + 6 e2
K O
5 - 9 e1 Ô e2 e2 + e1 Ô e2

Grade@AD êê MatrixForm
80, 1< 1
K O
80, 2< 81, 2<
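The notion of grade is easy to make concrete in a toy representation. In the following Python sketch (our own illustration, not GrassmannAlgebra code) a Grassmann number is a dictionary mapping a basis-blade bitmask to its coefficient; the grade of a blade is then its bit count, and the analogue of Grade collects the distinct grades of the non-zero terms:

```python
# Toy illustration (not GrassmannAlgebra code): represent a Grassmann number as a
# dict {basis-blade bitmask: coefficient}, bit i set meaning e_{i+1} is a factor.
# The grade of a blade is its bit count; the analogue of Grade collects the
# distinct grades of the non-zero terms.

def grade(x):
    return sorted({bin(mask).count("1") for mask, c in x.items() if c != 0})

assert grade({0b00: 2, 0b01: 3}) == [0, 1]    # 2 + 3 e1     -> grades {0, 1}
assert grade({0b00: 5, 0b11: -9}) == [0, 2]   # 5 - 9 e1^e2  -> grades {0, 2}
assert grade({0b10: 1, 0b11: 1}) == [1, 2]    # e2 + e1^e2   -> grades {1, 2}
```

The three assertions reproduce the entries of Grade[A] shown above.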

Ÿ Taking components of a matrix


We take the matrix M created in Section 13.2.

M êê MatrixForm
a1,1 + e1 b1,1 + e2 c1,1 + d1,1 e1 Ô e2 a1,2 + e1 b1,2 + e2 c1,2 + d1,2 e1 Ô e2
a2,1 + e1 b2,1 + e2 c2,1 + d2,1 e1 Ô e2 a2,2 + e1 b2,2 + e2 c2,2 + d2,2 e1 Ô e2

The body of M is given by:


Body@MD êê MatrixForm
a1,1 a1,2
a2,1 a2,2

and its soul by:

Soul@MD êê MatrixForm
e1 b1,1 + e2 c1,1 + d1,1 e1 Ô e2 e1 b1,2 + e2 c1,2 + d1,2 e1 Ô e2
e1 b2,1 + e2 c2,1 + d2,1 e1 Ô e2 e1 b2,2 + e2 c2,2 + d2,2 e1 Ô e2

The even components are extracted by using EvenGrade.

EvenGrade@MD êê MatrixForm
a1,1 + d1,1 e1 Ô e2 a1,2 + d1,2 e1 Ô e2
a2,1 + d2,1 e1 Ô e2 a2,2 + d2,2 e1 Ô e2

The odd components are extracted by using OddGrade.

OddGrade@MD êê MatrixForm
e1 b1,1 + e2 c1,1 e1 b1,2 + e2 c1,2
e1 b2,1 + e2 c2,1 e1 b2,2 + e2 c2,2

Finally, we can extract the elements of any specific grade using ExtractGrade.

ExtractGrade@2D@MD êê MatrixForm
d1,1 e1 Ô e2 d1,2 e1 Ô e2
d2,1 e1 Ô e2 d2,2 e1 Ô e2

Ÿ Determining the type of elements


We take the matrix A created in Section 13.2.

A êê MatrixForm
2 + 3 e1 e1 + 6 e2
K O
5 - 9 e1 Ô e2 e2 + e1 Ô e2

EvenGradeQ, OddGradeQ, EvenSoulQ and GradeQ all interrogate the individual elements of a
matrix.

EvenGradeQ@AD êê MatrixForm
False False
K O
True False


OddGradeQ@AD êê MatrixForm
False True
K O
False False

EvenSoulQ@AD êê MatrixForm
False False
K O
False False

GradeQ@1D@AD êê MatrixForm
False True
K O
False False

To check the type of a complete matrix we can ask whether it is free of either True or False
values.

FreeQ@GradeQ@1D@AD, FalseD
False

FreeQ@EvenSoulQ@AD, TrueD
True

13.4 Matrix Algebra

Ÿ Multiplication by scalars
Due to the Listable attribute of the Times operation in Mathematica, multiplication by scalars
behaves as expected. For example, multiplying the Grassmann matrix A created in Section 13.2 by
the scalar a gives:

A êê MatrixForm
2 + 3 e1 e1 + 6 e2
K O
5 - 9 e1 Ô e2 e2 + e1 Ô e2

a A êê MatrixForm
a H2 + 3 e1 L a He1 + 6 e2 L
K O
a H5 - 9 e1 Ô e2 L a He2 + e1 Ô e2 L

Ÿ Matrix addition
Due to the Listable attribute of the Plus operation in Mathematica, matrices of the same size
add automatically.


A+M
882 + 3 e1 + a1,1 + e1 b1,1 + e2 c1,1 + d1,1 e1 Ô e2 ,
e1 + 6 e2 + a1,2 + e1 b1,2 + e2 c1,2 + d1,2 e1 Ô e2 <,
85 + a2,1 + e1 b2,1 + e2 c2,1 - 9 e1 Ô e2 + d2,1 e1 Ô e2 ,
e2 + a2,2 + e1 b2,2 + e2 c2,2 + e1 Ô e2 + d2,2 e1 Ô e2 <<

We can collect like terms by using GrassmannSimplify.

%@A + MD
882 + a1,1 + e1 H3 + b1,1 L + e2 c1,1 + d1,1 e1 Ô e2 ,
a1,2 + e1 H1 + b1,2 L + e2 H6 + c1,2 L + d1,2 e1 Ô e2 <,
85 + a2,1 + e1 b2,1 + e2 c2,1 + H-9 + d2,1 L e1 Ô e2 ,
a2,2 + e1 b2,2 + e2 H1 + c2,2 L + H1 + d2,2 L e1 Ô e2 <<

Ÿ The complement of a matrix


Since the complement is a linear operation, the complement of a matrix is just the matrix of the
complements of the elements.

M êê MatrixForm
a1,1 + e1 b1,1 + e2 c1,1 + d1,1 e1 Ô e2 a1,2 + e1 b1,2 + e2 c1,2 + d1,2 e1 Ô e2
a2,1 + e1 b2,1 + e2 c2,1 + d2,1 e1 Ô e2 a2,2 + e1 b2,2 + e2 c2,2 + d2,2 e1 Ô e2

This can be simplified by using GrassmannSimplify. In a Euclidean space this gives:

%@ M D êê MatrixForm
e2 b1,1 - e1 c1,1 + d1,1 + a1,1 e1 Ô e2 e2 b1,2 - e1 c1,2 + d1,2 + a1,2 e1 Ô e2
e2 b2,1 - e1 c2,1 + d2,1 + a2,1 e1 Ô e2 e2 b2,2 - e1 c2,2 + d2,2 + a2,2 e1 Ô e2

Ÿ Matrix products
As with the product of two Grassmann numbers, entering the product of two matrices of
Grassmann numbers does not effect any multiplication. For example, take the matrix A in a 2-
space created in the previous section.

MatrixForm@AD Ô MatrixForm@AD
2 + 3 e1 e1 + 6 e2 2 + 3 e1 e1 + 6 e2
K OÔK O
5 - 9 e1 Ô e2 e2 + e1 Ô e2 5 - 9 e1 Ô e2 e2 + e1 Ô e2

To multiply out the product A∧A, we use the GrassmannAlgebra MatrixProduct function
which performs the multiplication. The short-hand for MatrixProduct is a script capital M,
obtained by typing [ESC]scM[ESC].


B@A Ô AD
88H2 + 3 e1 L Ô H2 + 3 e1 L + He1 + 6 e2 L Ô H5 - 9 e1 Ô e2 L,
H2 + 3 e1 L Ô He1 + 6 e2 L + He1 + 6 e2 L Ô He2 + e1 Ô e2 L<,
8H5 - 9 e1 Ô e2 L Ô H2 + 3 e1 L + He2 + e1 Ô e2 L Ô H5 - 9 e1 Ô e2 L,
H5 - 9 e1 Ô e2 L Ô He1 + 6 e2 L + He2 + e1 Ô e2 L Ô He2 + e1 Ô e2 L<<

To take the product and simplify at the same time, apply the GrassmannSimplify function.

%@B@A Ô ADD êê MatrixForm


4 + 17 e1 + 30 e2 2 e1 + 12 e2 + 19 e1 Ô e2
K O
10 + 15 e1 + 5 e2 - 13 e1 Ô e2 5 e1 + 30 e2

Ï Regressive products
We can also take the regressive product.

%@B@A Ó ADD êê MatrixForm


e1 Ô e2 Ó H-9 e1 - 54 e2 L e1 Ô e2 Ó H19 + e1 + 6 e2 L
K O
e1 Ô e2 Ó H-13 - 27 e1 - 9 e2 - 9 e1 Ô e2 L e1 Ô e2 Ó H-9 e1 - 52 e2 + e1 Ô e2 L

Ï Interior products
We calculate an interior product of two matrices in the same way.

P = %@B@A û ADD
884 + 9 He1 û e1 L + 11 e1 + 30 e2 , 3 He1 û e1 L + 19 He1 û e2 L + 6 He2 û e2 L<,
810 - 27 He1 Ô e2 û e1 L - 9 He1 Ô e2 û e1 Ô e2 L + 5 e2 - 13 e1 Ô e2 ,
e2 û e2 - 9 He1 Ô e2 û e1 L - 53 He1 Ô e2 û e2 L + e1 Ô e2 û e1 Ô e2 <<

To convert the interior products to scalar products, use the GrassmannAlgebra function
ToScalarProducts on the matrix.

P1 = ToScalarProducts@PD
984 + 9 He1 û e1 L + 11 e1 + 30 e2 , 3 He1 û e1 L + 19 He1 û e2 L + 6 He2 û e2 L<,
910 + 9 He1 û e2 L2 - 9 He1 û e1 L He2 û e2 L + 27 He1 û e2 L e1 + 5 e2 -
27 He1 û e1 L e2 - 13 e1 Ô e2 , -He1 û e2 L2 + e2 û e2 + He1 û e1 L He2 û e2 L +
9 He1 û e2 L e1 + 53 He2 û e2 L e1 - 9 He1 û e1 L e2 - 53 He1 û e2 L e2 ==

This product can be simplified further if the values of the scalar products of the basis elements are
known. In a Euclidean space, we obtain:

ToMetricForm@P1D êê MatrixForm
13 + 11 e1 + 30 e2 9
K O
1 - 22 e2 - 13 e1 Ô e2 2 + 53 e1 - 9 e2

Ï Clifford products

The Clifford product of two matrices can be calculated in stages. First we multiply the elements
and expand and simplify the terms with GrassmannSimplify.

C1 = %@B@A ù ADD
882 ù 2 + 3 µ 2 ù e1 + 3 e1 ù 2 + e1 ù 5 + 9 e1 ù e1 - 9 e1 ù He1 Ô e2 L +
6 e2 ù 5 - 54 e2 ù He1 Ô e2 L, 2 ù e1 + 6 µ 2 ù e2 + 3 e1 ù e1 +
19 e1 ù e2 + e1 ù He1 Ô e2 L + 6 e2 ù e2 + 6 e2 ù He1 Ô e2 L<,
85 ù 2 + 3 µ 5 ù e1 + e2 ù 5 - 9 e2 ù He1 Ô e2 L - 9 He1 Ô e2 L ù 2 +
He1 Ô e2 L ù 5 - 27 He1 Ô e2 L ù e1 - 9 He1 Ô e2 L ù He1 Ô e2 L,
5 ù e1 + 6 µ 5 ù e2 + e2 ù e2 + e2 ù He1 Ô e2 L - 9 He1 Ô e2 L ù e1 -
53 He1 Ô e2 L ù e2 + He1 Ô e2 L ù He1 Ô e2 L<<

Next we can convert the Clifford products into exterior and scalar products with
ToScalarProducts.

C2 = ToScalarProducts@C1D
984 + 9 He1 û e1 L + 17 e1 + 9 He1 û e2 L e1 +
54 He2 û e2 L e1 + 30 e2 - 9 He1 û e1 L e2 - 54 He1 û e2 L e2 ,
3 He1 û e1 L + 19 He1 û e2 L + 6 He2 û e2 L + 2 e1 - He1 û e2 L e1 -
6 He2 û e2 L e1 + 12 e2 + He1 û e1 L e2 + 6 He1 û e2 L e2 + 19 e1 Ô e2 <,
910 - 9 He1 û e2 L2 + 9 He1 û e1 L He2 û e2 L + 15 e1 - 27 He1 û e2 L e1 +
9 He2 û e2 L e1 + 5 e2 + 27 He1 û e1 L e2 - 9 He1 û e2 L e2 - 13 e1 Ô e2 ,
He1 û e2 L2 + e2 û e2 - He1 û e1 L He2 û e2 L + 5 e1 - 9 He1 û e2 L e1 -
54 He2 û e2 L e1 + 30 e2 + 9 He1 û e1 L e2 + 54 He1 û e2 L e2 ==

Finally, if we are in a Euclidean space, this simplifies significantly.

ToMetricForm@C2D êê MatrixForm
13 + 71 e1 + 21 e2 9 - 4 e1 + 13 e2 + 19 e1 Ô e2
K O
19 + 24 e1 + 32 e2 - 13 e1 Ô e2 -49 e1 + 39 e2

13.5 The Transpose

Conditions on the transpose of an exterior product


If we adopt the usual definition of the transpose of a matrix as the matrix in which rows
become columns and columns become rows, we find that, in general, we lose the relation for the
transpose of a product being the product of the transposes in reverse order. To see under what
conditions this does remain true, we break the matrices up into their even and odd
components. Let A = Ae + Ao and B = Be + Bo, where the subscript e refers to the even component
and o to the odd component. Then

(A∧B)^T == ((Ae + Ao)∧(Be + Bo))^T == (Ae∧Be + Ae∧Bo + Ao∧Be + Ao∧Bo)^T ==
    (Ae∧Be)^T + (Ae∧Bo)^T + (Ao∧Be)^T + (Ao∧Bo)^T

B^T∧A^T == (Be + Bo)^T∧(Ae + Ao)^T ==
    (Be^T + Bo^T)∧(Ae^T + Ao^T) == Be^T∧Ae^T + Be^T∧Ao^T + Bo^T∧Ae^T + Bo^T∧Ao^T

If the elements of two Grassmann matrices commute, then just as for the usual case, the transpose
of a product is the product of the transposes in reverse order. If they anti-commute, then the
transpose of a product is the negative of the product of the transposes in reverse order. Thus, the
corresponding terms in the two expansions above which involve an even matrix will be equal, and
the last terms involving two odd matrices will differ by a sign:

(Ae∧Be)^T == Be^T∧Ae^T

(Ae∧Bo)^T == Bo^T∧Ae^T

(Ao∧Be)^T == Be^T∧Ao^T

(Ao∧Bo)^T == -(Bo^T∧Ao^T)

Thus

(A∧B)^T == B^T∧A^T - 2 (Bo^T∧Ao^T) == B^T∧A^T + 2 (Ao∧Bo)^T    13.1

(A∧B)^T == B^T∧A^T  ⟺  Ao∧Bo == 0    13.2
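Relation 13.1 can also be checked independently of Mathematica. The sketch below is our own minimal exterior-algebra implementation on bitmasks — the dictionary representation and all helper names are assumptions of this illustration, not GrassmannAlgebra functions. It builds two Grassmann matrices with mixed even and odd entries and verifies the identity:

```python
from itertools import product

# Minimal exterior algebra: a Grassmann number is a dict {blade bitmask: coefficient},
# where bit i set means the 1-element e_{i+1} is a factor of the blade.

def wedge(a, b):
    out = {}
    for (m, xc), (n, yc) in product(a.items(), b.items()):
        if m & n:                          # repeated 1-element: the term vanishes
            continue
        sign = 1                           # transpositions needed to merge the blades
        for i in range(n.bit_length()):
            if n >> i & 1 and bin(m >> i + 1).count("1") % 2:
                sign = -sign
        out[m | n] = out.get(m | n, 0) + sign * xc * yc
    return {k: v for k, v in out.items() if v}

def add(*xs):
    out = {}
    for x in xs:
        for k, v in x.items():
            out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

def mat_wedge(A, B):                       # (A ^ B)[i][j] = sum_k A[i][k] ^ B[k][j]
    return [[add(*(wedge(A[i][k], B[k][j]) for k in range(len(B))))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_T(A):
    return [list(r) for r in zip(*A)]

def mat_add(A, B):
    return [[add(p, q) for p, q in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(c, A):
    return [[{k: c * v for k, v in x.items()} for x in r] for r in A]

def odd_part(A):                           # odd-grade components of every entry
    return [[{k: v for k, v in x.items() if bin(k).count("1") % 2} for x in r]
            for r in A]

# Two 2x2 Grassmann matrices with mixed even and odd entries in a 4-space
# (generators e1..e4 have bitmasks 1, 2, 4, 8).
A = [[{0: 2, 1: 3}, {1: 1, 2: 6}], [{0: 5, 3: -9}, {2: 1, 3: 1}]]
B = [[{0: 1, 4: 1}, {0: 2, 2: -1}], [{8: 1, 12: 5}, {0: 3, 1: 7}]]

lhs = mat_T(mat_wedge(A, B))                               # (A ^ B)^T
rhs = mat_add(mat_wedge(mat_T(B), mat_T(A)),               # B^T ^ A^T ...
              scale(2, mat_T(mat_wedge(odd_part(A), odd_part(B)))))  # + 2 (Ao^Bo)^T
assert lhs == rhs                                          # relation 13.1
```

Because the entry-level identity x∧y = y∧x + 2 xo∧yo holds for any pair of Grassmann numbers, the assertion passes for any choice of A and B, not just the sample above.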

Ÿ Checking the transpose of a product


It is useful to be able to run a check on a theoretically derived relation such as this. To reduce
duplication of effort in checking several cases, we devise a test function called
TestTransposeRelation.

TestTransposeRelation[A_, B_] :=
  Module[{T}, T = Transpose;
    GrassmannSimplify[T[MatrixProduct[A ∧ B]]] ==
      GrassmannSimplify[MatrixProduct[T[B] ∧ T[A]] -
        2 MatrixProduct[T[OddGrade[B]] ∧ T[OddGrade[A]]]]]

As an example, we test it for two 2×2 matrices in 2-space:

A êê MatrixForm
2 + 3 e1 e1 + 6 e2
K O
5 - 9 e1 Ô e2 e2 + e1 Ô e2


M êê MatrixForm
a1,1 + e1 b1,1 + e2 c1,1 + d1,1 e1 Ô e2 a1,2 + e1 b1,2 + e2 c1,2 + d1,2 e1 Ô e2
a2,1 + e1 b2,1 + e2 c2,1 + d2,1 e1 Ô e2 a2,2 + e1 b2,2 + e2 c2,2 + d2,2 e1 Ô e2

TestTransposeRelation@M, AD
True

It should be remarked that for the transpose of a product to be equal to the product of the
transposes in reverse order, the evenness of the matrices is a sufficient condition, but not a
necessary one. All that is required is that the exterior product of the odd components of the
matrices be zero. This may be achieved without the odd components themselves necessarily being
zero.

Conditions on the determinant of a general matrix


Due to the non-commutativity of Grassmann numbers, the notion of determinant does not carry
over directly to matrices of Grassmann numbers. To see why, consider the diagonal Grassmann
matrix composed of 1-elements x and y:

x 0
K O
0 y

The natural definition of the determinant of this matrix would be the product of the diagonal
elements. However, which product should it be, x∧y or y∧x? The possible non-commutativity of
the Grassmann numbers on the diagonal may give rise to different products of the diagonal
elements, depending on the ordering of the elements.

On the other hand, if the product of the diagonal elements of a diagonal Grassmann matrix is
unique, as would be the case for even elements, the determinant may be defined uniquely as that
product.

Ÿ Determinants of even Grassmann matrices


If a Grassmann matrix is even, its elements commute and its determinant may be defined in the
same way as for matrices of scalars.

To calculate the determinant of an even Grassmann matrix, we can use the same basic algorithm as
for scalar matrices. In GrassmannAlgebra this is implemented with the function
GrassmannDeterminant. For example:

B = EvenGrade@MD; MatrixForm@BD
a1,1 + d1,1 e1 Ô e2 a1,2 + d1,2 e1 Ô e2
a2,1 + d2,1 e1 Ô e2 a2,2 + d2,2 e1 Ô e2


B1 = GrassmannDeterminant@BD
Ha1,1 + d1,1 e1 Ô e2 L Ô Ha2,2 + d2,2 e1 Ô e2 L -
Ha1,2 + d1,2 e1 Ô e2 L Ô Ha2,1 + d2,1 e1 Ô e2 L

Note that GrassmannDeterminant does not carry out any simplification, but instead leaves the
products in raw form to facilitate the interpretation of the process occurring. To simplify the result
one needs to use GrassmannSimplify.

%@B1D
-a1,2 a2,1 + a1,1 a2,2 + Ha2,2 d1,1 - a2,1 d1,2 - a1,2 d2,1 + a1,1 d2,2 L e1 Ô e2

13.6 Matrix Powers

Ÿ Positive integer powers

Ï Powers of matrices with no body


Positive integer powers of a matrix of Grassmann numbers are simply calculated by taking the
exterior product of the matrix with itself the requisite number of times - and then using
GrassmannSimplify to compute and simplify the result.

We begin in a 4-space. For example, suppose we let

&4 ; A = 88x, y<, 8u, v<<; MatrixForm@AD


x y
K O
u v

The second power is:

B@A Ô AD
88x Ô x + y Ô u, x Ô y + y Ô v<, 8u Ô x + v Ô u, u Ô y + v Ô v<<

When simplified this becomes:

A2 = %@B@A Ô ADD; MatrixForm@A2 D


-Hu Ô yL -Hv Ô yL + x Ô y
K O
-Hu Ô vL + u Ô x uÔy

We can extend this process to form higher powers:

A3 = %@B@A Ô A2 DD; MatrixForm@A3 D


-Hu Ô v Ô yL + 2 u Ô x Ô y vÔxÔy
K O
-Hu Ô v Ô xL -2 u Ô v Ô y + u Ô x Ô y


Because this matrix has no body, there will eventually be a power which is zero, dependent on the
dimension of the space, which in this case is 4. Here we see that it is the fourth power.

A4 = %@B@A Ô A3 DD; MatrixForm@A4 D


0 0
K O
0 0
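That powers of a bodiless matrix eventually vanish can be confirmed with a toy implementation. The Python sketch below is an independent illustration (the bitmask representation and helper names are our own, not GrassmannAlgebra code); it reproduces the example above: in a 4-space, the fourth power of {{x, y}, {u, v}} is zero while the third is not.

```python
from itertools import product

# Minimal exterior algebra on bitmasks (bit i set <=> e_{i+1} is a factor).
def wedge(a, b):
    out = {}
    for (m, xc), (n, yc) in product(a.items(), b.items()):
        if m & n:                          # repeated 1-element: term vanishes
            continue
        sign = 1
        for i in range(n.bit_length()):
            if n >> i & 1 and bin(m >> i + 1).count("1") % 2:
                sign = -sign
        out[m | n] = out.get(m | n, 0) + sign * xc * yc
    return {k: v for k, v in out.items() if v}

def add(*xs):
    out = {}
    for x in xs:
        for k, v in x.items():
            out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

def mat_wedge(A, B):
    return [[add(*(wedge(A[i][k], B[k][j]) for k in range(len(B))))
             for j in range(len(B[0]))] for i in range(len(A))]

# Bodiless 2x2 matrix in a 4-space: entries x = e1, y = e2, u = e3, v = e4.
A = [[{1: 1}, {2: 1}], [{4: 1}, {8: 1}]]
powers = [A]
for _ in range(3):
    powers.append(mat_wedge(powers[-1], A))
A3, A4 = powers[2], powers[3]

assert any(x for row in A3 for x in row)         # the third power is non-zero
assert all(x == {} for row in A4 for x in row)   # the fourth power vanishes
```

The cancellation is not just a matter of grade: grade-4 elements do exist in a 4-space, but the surviving terms of the fourth power cancel in pairs.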

Ï Powers of matrices with a body


By giving the matrix a body, we can see that higher powers may no longer be zero.

B = A + IdentityMatrix@2D; MatrixForm@BD
1+x y
K O
u 1+v

B4 = %@B@B Ô B@B Ô B@B Ô BDDDD; MatrixForm@B4 D


1 + 4 x - 6 uÔy - 4 uÔvÔy + 8 uÔxÔy 4 y - 6 vÔy + 6 xÔy + 4 vÔxÔy
K
4 u - 6 uÔv + 6 uÔx - 4 uÔvÔx 1 + 4 v + 6 uÔy - 8 uÔvÔy + 4 uÔxÔ

Ï GrassmannIntegerPower
Although we will see later that the GrassmannAlgebra function GrassmannMatrixPower will
actually deal with more general cases than the positive integer powers discussed here, we can write
a simple function GrassmannIntegerPower to calculate any positive integer power. Like
GrassmannMatrixPower, it can easily store the result for further use in the same
Mathematica session. This makes subsequent calculations involving the powers already calculated
much faster.

GrassmannIntegerPower[M_, n_?PositiveIntegerQ] :=
  Module[{P},
    P = NestList[GrassmannSimplify[MatrixProduct[M ∧ #]] &, M, n - 1];
    GrassmannIntegerPower[M, m_, Dimension] := First[Take[P, {m}]];
    Last[P]]

As an example we take the matrix B created above.

B
881 + x, y<, 8u, 1 + v<<

Now we apply GrassmannIntegerPower to obtain the fourth power of B.


GrassmannIntegerPower@B, 4D êê MatrixForm
1 + 4 x - 6 uÔy - 4 uÔvÔy + 8 uÔxÔy 4 y - 6 vÔy + 6 xÔy + 4 vÔxÔy
K
4 u - 6 uÔv + 6 uÔx - 4 uÔvÔx 1 + 4 v + 6 uÔy - 8 uÔvÔy + 4 uÔxÔ

However, on the way to calculating the fourth power, the function GrassmannIntegerPower
has remembered the values of all the integer powers up to the fourth: in this case the second,
third and fourth. We can get immediate access to these by adding the Dimension of the space as a
third argument. Since the power of a matrix may take on a different form in spaces of different
dimensions (due to some terms being zero because their degree exceeds the dimension of the
space), the power is recalled only by including the Dimension as the third argument.

Dimension
4

GrassmannIntegerPower@B, 2, 4D êê MatrixForm êê Timing


1 + 2 x - uÔy 2 y - vÔy + xÔy
:0. Second, K O>
2 u - uÔv + uÔx 1 + 2 v + uÔy

GrassmannIntegerPower@B, 3, 4D êê MatrixForm êê Timing


1 + 3 x - 3 uÔy - uÔvÔy + 2 uÔxÔy 3 y - 3 vÔy + 3 xÔy +
:0. Second, K
3 u - 3 uÔv + 3 uÔx - uÔvÔx 1 + 3 v + 3 uÔy - 2 uÔvÔ

GrassmannIntegerPower@B, 4, 4D êê MatrixForm êê Timing


1 + 4 x - 6 uÔy - 4 uÔvÔy + 8 uÔxÔy 4 y - 6 vÔy + 6 xÔy
:0. Second, K
4 u - 6 uÔv + 6 uÔx - 4 uÔvÔx 1 + 4 v + 6 uÔy - 8 uÔ
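The remembering behaviour of GrassmannIntegerPower is ordinary memoization. The Python sketch below shows the same pattern, with plain integer matrices standing in for Grassmann matrices (the multiplication used is irrelevant to the caching idea); every intermediate power is stored under a key that includes the dimension, just as above.

```python
# Memoization pattern behind GrassmannIntegerPower (illustrative sketch only):
# cache every power computed on the way, keyed by (matrix, exponent, dimension),
# so later requests for any intermediate power are immediate.

_cache = {}

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(len(B)))
                       for j in range(len(B[0]))) for i in range(len(A)))

def integer_power(M, n, dimension):
    key = (M, n, dimension)
    if key not in _cache:
        P = M
        _cache[(M, 1, dimension)] = P
        for m in range(2, n + 1):          # remember every intermediate power
            P = _cache.setdefault((M, m, dimension), mat_mul(P, M))
    return _cache[key]

B = ((1, 1), (0, 1))
assert integer_power(B, 4, 4) == ((1, 4), (0, 1))
# the second and third powers were remembered on the way
assert (B, 2, 4) in _cache and (B, 3, 4) in _cache
```

In Mathematica the same effect is achieved by the definition GrassmannIntegerPower[M, m_, Dimension] := ... made inside the Module, which attaches the computed values to the function itself.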

Ï GrassmannMatrixPower
The principal and more general function for calculating matrix powers provided by
GrassmannAlgebra is GrassmannMatrixPower. Here we verify that it gives the same result as
our simple GrassmannIntegerPower function.

GrassmannMatrixPower@B, 4D
881 + 4 x - 6 u Ô y - 4 u Ô v Ô y + 8 u Ô x Ô y,
4 y - 6 v Ô y + 6 x Ô y + 4 v Ô x Ô y<, 84 u - 6 u Ô v + 6 u Ô x - 4 u Ô v Ô x,
1 + 4 v + 6 u Ô y - 8 u Ô v Ô y + 4 u Ô x Ô y<<

Ÿ Negative integer powers


From this point onwards we will be discussing the application of the general GrassmannAlgebra
function GrassmannMatrixPower.

We take as an example the following simple 2×2 matrix.


A = 881, x<, 8y, 1<<; MatrixForm@AD


1 x
K O
y 1

The matrix A⁻⁴ is calculated by GrassmannMatrixPower by taking the fourth power of the
inverse.

iA4 = GrassmannMatrixPower@A, -4D


881 + 10 x Ô y, -4 x<, 8-4 y, 1 - 10 x Ô y<<

We could of course get the same result by taking the inverse of the fourth power.

GrassmannMatrixInverse@GrassmannMatrixPower@A, 4DD
881 + 10 x Ô y, -4 x<, 8-4 y, 1 - 10 x Ô y<<

We check that this is indeed the inverse.

%@B@iA4 Ô GrassmannMatrixPower@A, 4DDD


881, 0<, 80, 1<<

Ÿ Non-integer powers of matrices with distinct eigenvalues


If the matrix has distinct eigenvalues, GrassmannMatrixPower will use
GrassmannMatrixFunction to compute any non-integer power of a matrix of Grassmann
numbers.

Let A be the 2×2 matrix which differs from the matrix discussed above by having distinct
eigenvalues 1 and 2. We will discuss eigenvalues in Section 13.9 below.

A = 881, x<, 8y, 2<<; MatrixForm@AD


1 x
K O
y 2

We compute the square root of A.

sqrtA = GrassmannMatrixPower[A, 1/2]
{{1 + (-(3/2) + √2) x∧y, (-1 + √2) x},
 {(-1 + √2) y, √2 + (1/4) (-4 + 3 √2) x∧y}}

To verify that this is indeed the square root of A we 'square' it.

%@B@sqrtA Ô sqrtADD

More generally, we can extract a symbolic pth root. But to have the simplest result we need to
ensure that p has been declared a scalar first:


DeclareExtraScalars@8p<D

:a, b, c, d, e, f, g, h, p, %, H_ û _L?InnerProductQ, _>


0

pthRootA = GrassmannMatrixPower[A, 1/p]
{{1 + (-1 + 2^(1/p) - 1/p) x∧y, (-1 + 2^(1/p)) x},
 {(-1 + 2^(1/p)) y, 2^(1/p) + (-1 + 2^(1/p) - 2^(-1 + 1/p)/p) x∧y}}

We can verify that this gives us the previous result when p = 2.

sqrtA == (pthRootA /. p -> 2) // Simplify


True

If the power required is not integral and the eigenvalues are not distinct,
GrassmannMatrixPower will return a message and the unevaluated input. Let B be a simple
2×2 matrix with equal eigenvalues 1 and 1.

B = 881, x<, 8y, 1<<; MatrixForm@BD


1 x
K O
y 1

GrassmannMatrixPower[B, 1/2]
Eigenvalues::notDistinct :
The matrix 881, x<, 8y, 1<< does not have distinct
scalar eigenvalues. The operation applies only
to matrices with distinct scalar eigenvalues.
GrassmannMatrixPower[{{1, x}, {y, 1}}, 1/2]

Ÿ Integer powers of matrices with distinct eigenvalues


In some circumstances, if a matrix is known to have distinct eigenvalues, it will be more efficient
to compute powers using GrassmannMatrixFunction as the basic calculation engine. The
function of GrassmannAlgebra which enables this is
GrassmannDistinctEigenvaluesMatrixPower.


? GrassmannDistinctEigenvaluesMatrixPower
GrassmannDistinctEigenvaluesMatrixPower[A,p]
calculates the power p of a Grassmann matrix A with
distinct eigenvalues. p may be either numeric or symbolic.
We can check whether a matrix has distinct eigenvalues with DistinctEigenvaluesQ.

For example, suppose we wish to calculate the 100th power of the matrix A below.

A êê MatrixForm
1 x
K O
y 2

DistinctEigenvaluesQ@AD
True

A
881, x<, 8y, 2<<

GrassmannDistinctEigenvaluesMatrixPower@A, 100D
881 + 1 267 650 600 228 229 401 496 703 205 275 x Ô y,
1 267 650 600 228 229 401 496 703 205 375 x<,
81 267 650 600 228 229 401 496 703 205 375 y,
1 267 650 600 228 229 401 496 703 205 376 -
62 114 879 411 183 240 673 338 457 063 425 x Ô y<<

13.7 Matrix Inverses

A formula for the matrix inverse


We can develop a formula for the inverse of a matrix of Grassmann numbers following the same
approach that we used for calculating the inverse of a Grassmann number. Suppose I is the identity
matrix, Xk is a bodiless matrix of Grassmann numbers and Xk^i is its ith exterior power. Then we
can write:

(I + Xk) ∧ (I - Xk + Xk^2 - Xk^3 + Xk^4 - ... ± Xk^q) == I ± Xk^(q+1)

Now, since Xk has no body, its highest non-zero power will be at most equal to the dimension n of
the space. That is, Xk^(n+1) == 0. Thus

(I + Xk) ∧ (I - Xk + Xk^2 - Xk^3 + Xk^4 - ... ± Xk^n) == I    13.3

We have thus shown that for a bodiless Xk, (I - Xk + Xk^2 - Xk^3 + Xk^4 - ... ± Xk^n) is the
inverse of (I + Xk).


If now we have a general Grassmann matrix X, say, we can write X as X == Xb + Xs, where Xb is the
body of X and Xs is the soul of X, and then take Xb out as a factor to get:

X == Xb + Xs == Xb (I + Xb⁻¹ Xs) == Xb (I + Xk)

Pre- and post-multiplying the equation above for the inverse of (I + Xk) by Xb and
Xb⁻¹ respectively gives:

Xb (I + Xk) ∧ (I - Xk + Xk^2 - Xk^3 + Xk^4 - ... ± Xk^n) Xb⁻¹ == I

Since the first factor is simply X, we finally obtain the inverse of X in terms of Xk == Xb⁻¹ Xs as:

X⁻¹ == (I - Xk + Xk^2 - Xk^3 + Xk^4 - ... ± Xk^n) Xb⁻¹    13.4

Or, specifically in terms of the body and soul of X:

X⁻¹ == (I - (Xb⁻¹ Xs) + (Xb⁻¹ Xs)^2 - ... ± (Xb⁻¹ Xs)^n) Xb⁻¹    13.5
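Formula 13.4 can be checked independently with a toy exterior algebra. In the Python sketch below (our own illustration, not GrassmannAlgebra code) we invert the matrix {{1, x}, {y, 1}} in a 2-space by summing the alternating powers of its bodiless soul, and verify the result against the matrix itself:

```python
from itertools import product

# Minimal exterior algebra on bitmasks (bit i set <=> e_{i+1} is a factor).
def wedge(a, b):
    out = {}
    for (m, xc), (n, yc) in product(a.items(), b.items()):
        if m & n:
            continue
        sign = 1
        for i in range(n.bit_length()):
            if n >> i & 1 and bin(m >> i + 1).count("1") % 2:
                sign = -sign
        out[m | n] = out.get(m | n, 0) + sign * xc * yc
    return {k: v for k, v in out.items() if v}

def add(*xs):
    out = {}
    for x in xs:
        for k, v in x.items():
            out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

def mat_wedge(A, B):
    return [[add(*(wedge(A[i][k], B[k][j]) for k in range(len(B))))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[add(p, q) for p, q in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_neg(A):
    return [[{k: -v for k, v in x.items()} for x in r] for r in A]

one, zero = {0: 1}, {}
x, y = {1: 1}, {2: 1}                 # the 1-elements x = e1, y = e2
I = [[one, zero], [zero, one]]
Xk = [[zero, x], [y, zero]]           # bodiless soul of X = {{1, x}, {y, 1}}

# Formula 13.4 with Xb = I: inverse = I - Xk + Xk^2 - ... up to the dimension (2).
inv, term, sign = I, Xk, -1
for _ in range(2):
    inv = mat_add(inv, term if sign > 0 else mat_neg(term))
    term, sign = mat_wedge(term, Xk), -sign

X = mat_add(I, Xk)
assert mat_wedge(X, inv) == I and mat_wedge(inv, X) == I
# inv is {{1 + x^y, -x}, {-y, 1 - x^y}}
```

The computed inverse, {{1 + x∧y, -x}, {-y, 1 - x∧y}}, agrees with the result GrassmannMatrixInverse gives for this matrix.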

Ÿ GrassmannMatrixInverse
This formula is straightforward to implement as a function. In GrassmannAlgebra we can calculate
the inverse of a Grassmann matrix X by using GrassmannMatrixInverse.

? GrassmannMatrixInverse
GrassmannMatrixInverse[X] computes the
inverse of the matrix X of Grassmann numbers in a
space of the currently declared number of dimensions.

A definition for GrassmannMatrixInverse may be quite straightforwardly developed from
formula 13.4.

GrassmannMatrixInverse[X_] :=
  Module[{B, S, iB, iBS, K}, B = Body[X]; S = Soul[X];
    iB = Inverse[B]; iBS = GrassmannSimplify[MatrixProduct[iB ∧ S]];
    K = Sum[(-1)^i GrassmannMatrixPower[iBS, i], {i, 0, Dimension}];
    GrassmannSimplify[MatrixProduct[K ∧ iB]]]

As a first simple example we take the matrix:


A = 881, x<, 8y, 1<<; MatrixForm@AD


1 x
K O
y 1

The inverse of A is:

iA = GrassmannMatrixInverse@AD; MatrixForm@iAD
1 + xÔy -x
K O
-y 1 - xÔy

We can check that this is indeed the inverse.

%@8B@A Ô iAD, B@iA Ô AD<D


8881, 0<, 80, 1<<, 881, 0<, 80, 1<<<

As a second example we take a somewhat more complex matrix.

A = 881 + e1 , 2, e3 Ô e2 <, 83, e2 Ô e4 , e4 <, 8e2 , e1 , 1 + e4 Ô e1 <<;


MatrixForm@AD
1 + e1 2 e3 Ô e2
3 e2 Ô e4 e4
e2 e1 1 + e4 Ô e1

iA = GrassmannMatrixInverse[A] // Simplify
{{(1/6) (-(e1∧e4) - e2∧e4), (1/18) (6 + e1∧e4 - e2∧e4 + e1∧e2∧e4), -(e4/3)},
 {(1/12) (6 + e1∧e4 + e2∧e4 - 3 e1∧e2∧e3 + e1∧e2∧e4),
  (1/36) (-6 - 6 e1 - e1∧e4 + e2∧e4 + 3 e1∧e2∧e3),
  (1/6) (e4 + e1∧e4 + 3 e2∧e3)},
 {(1/4) (-2 e1 - e1∧e2∧e4), (1/36) (6 e1 - 12 e2 + 13 e1∧e2∧e4),
  (1/6) (6 + 5 e1∧e4 + 2 e2∧e4 - 3 e1∧e2∧e3)}}

We check that this is indeed the inverse

%@8B@A Ô iAD, B@iA Ô AD<D


8881, 0, 0<, 80, 1, 0<, 80, 0, 1<<,
881, 0, 0<, 80, 1, 0<, 80, 0, 1<<<

Finally, if we try to find the inverse of a matrix with a singular body we get the expected type of
error message.


A = 881 + x, 2 + y<, 81 + z, 2 + w<<; MatrixForm@AD


1+x 2+y
K O
1+z 2+w

GrassmannMatrixInverse@AD
Inverse::sing : Matrix 881, 2<, 81, 2<< is singular.

13.8 Matrix Equations

Ÿ Two types of matrix equations


If we can recast a system of equations into a matrix form which maintains the correct ordering of
the factors, we can use the matrix inverse to obtain the solution. Suppose we have an equation
M∧X == Y, where M is an invertible matrix of Grassmann numbers, X is a vector of unknown Grassmann
numbers, and Y is a vector of known Grassmann numbers. Then the solution X is given by:

GrassmannSimplify[MatrixProduct[GrassmannMatrixInverse[M] ∧ Y]]

Alternatively, the equations may need to be expressed as X∧M == Y, in which case the solution X is
given by:

GrassmannSimplify[MatrixProduct[Y ∧ GrassmannMatrixInverse[M]]]

GrassmannAlgebra implements these as GrassmannLeftLinearSolve and
GrassmannRightLinearSolve respectively.

? GrassmannLeftLinearSolve
GrassmannLeftLinearSolve[M,Y] calculates a
matrix or vector X which solves the equation M∧X==Y.
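The need for separate left and right solvers is simply non-commutativity: M⁻¹∧Y and Y∧M⁻¹ are generally different. Ordinary numeric matrices already exhibit this distinction, so the following Python sketch uses them as a stand-in (an analogy only, not Grassmann arithmetic) to show that the solutions of M.X == Y and X.M == Y differ:

```python
from fractions import Fraction as F

# With non-commuting multiplication, M^-1 . Y and Y . M^-1 generally differ,
# which is why separate left and right solvers are needed. Ordinary 2x2 matrices
# already show the distinction and stand in for Grassmann matrices here.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

M = [[F(1), F(2)], [F(3), F(5)]]
Y = [[F(1), F(0)], [F(2), F(1)]]

X_left = mul(inv2(M), Y)      # solves M . X == Y  (cf. GrassmannLeftLinearSolve)
X_right = mul(Y, inv2(M))     # solves X . M == Y  (cf. GrassmannRightLinearSolve)

assert mul(M, X_left) == Y and mul(X_right, M) == Y
assert X_left != X_right      # the two solutions differ
```

Exact rational arithmetic (Fraction) keeps the equality checks exact.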

Ÿ Solving matrix equations


Here we take an example and show that GrassmannLeftLinearSolve and
GrassmannRightLinearSolve both give solutions to a matrix exterior equation.

M = 881 + x, x Ô y<, 81 - y, 2 + y + x + 5 x Ô y<<; MatrixForm@MD


1+x xÔy
K O
1 - y 2 + x + y + 5 xÔy

Y = 83 + 2 x Ô y, 7 - 5 x<; MatrixForm@YD
3 + 2 xÔy
K O
7-5x


GrassmannLeftLinearSolve gives:

X = GrassmannLeftLinearSolve[M, Y]; MatrixForm[X]

3 - 3 x
2 - 2 x + y/2 - (19/4) x∧y

%@B@M Ô XDD ã Y
True

GrassmannRightLinearSolve gives:

X = GrassmannRightLinearSolve[M, Y]; MatrixForm[X]

-(1/2) + (19/4) x + (21/4) y + (41/4) x∧y
7/2 - (17/4) x - (7/4) y - (29/4) x∧y

%@B@X Ô MDD ã Y
True

13.9 Matrix Eigensystems

Exterior eigensystems of Grassmann matrices


Grassmann matrices have eigensystems just as do real or complex matrices. Eigensystems are
useful because they allow us to calculate functions of matrices. One major difference in approach
with Grassmann matrices is that we may no longer be able to use the determinant to obtain the
characteristic equation, and hence the eigenvalues. We can, however, return to the basic definition
of an eigenvalue and eigenvector to obtain our results. We treat only those matrices with distinct
eigenvalues. Suppose the matrix A is n×n and has distinct eigenvalues; then we are looking for an
n×n diagonal matrix Λ (the matrix of eigenvalues) and an n×n matrix X (the matrix of eigenvectors)
such that:

A∧X == X∧Λ    13.6

The pair {X,L} is called the eigensystem of A. If X is invertible we can post-multiply by X-1 to get:

A ã X Ô L Ô X-1 13.7

and pre-multiply by X-1 to get:


X⁻¹∧A∧X == Λ    13.8

It is this decomposition which allows us to define functions of Grassmann matrices. This will be
discussed further in Section 13.10.

The number of scalar equations resulting from the matrix equation A∧X == X∧Λ is n². The number
of unknowns that we are likely to be able to determine is therefore n². There are n unknown
eigenvalues in Λ, leaving n² - n unknowns to be determined in X. Since X occurs on both sides of
the equation, we need only determine its columns (or rows) to within a scalar multiple. Hence there
are really only n² - n unknowns to be determined in X, which leads us to hope that a decomposition
of this form is indeed possible.

The process we adopt is to assume unknown general Grassmann numbers for the diagonal
components of Λ and for the components of X. If the dimension of the space is N, there will be
2^N scalar coefficients to be determined for each of the n² unknowns. We then use Mathematica's
powerful Solve function to obtain the values of these unknowns. In practice, during the
calculations, we assume that the basis of the space is composed of just the non-scalar symbols
existing in the matrix A. This enables a reduction in computation should the currently declared
basis be of higher dimension than the number of different 1-elements in the matrix, and also allows
the eigensystem to be obtained for matrices which are not expressed in terms of basis elements.

For simplicity we also use Mathematica's JordanDecomposition function to determine the


scalar eigensystem of the matrix A before proceeding on to find the non-scalar components. To see
the relationship of the scalar eigensystem to the complete Grassmann eigensystem, rewrite the
eigensystem equation AÔX ã XÔL in terms of body and soul, and extract that part of the equation
that is pure body.

HAb + As L Ô HXb + Xs L == HXb + Xs L Ô HLb + Ls L

Expanding this equation gives

Ab Ô Xb + HAb Ô Xs + As Ô Xb + As Ô Xs L == Xb Ô Lb + HXb Ô Ls + Xs Ô Lb + Xs Ô Ls L

The first term on each side is the body of the equation, and the second terms (in brackets) its soul.
The body of the equation is simply the eigensystem equation for the body of A and shows that the
scalar components of the unknown X and L matrices are simply the eigensystem of the body of A.

Ab Ô Xb ã Xb Ô Lb 13.9
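The body equation 13.9 is an ordinary scalar eigensystem, so it can be checked with any numerical linear algebra package. The following sketch uses Python with NumPy rather than Mathematica, and the body matrix Ab is illustrative, not one from the text:

```python
import numpy as np

# Illustrative body (scalar part) of a Grassmann matrix
Ab = np.array([[2.0, 3.0],
               [2.0, 3.0]])

# Scalar eigensystem of the body: the columns of Xb are eigenvectors,
# Lb is the diagonal matrix of eigenvalues
eigenvalues, Xb = np.linalg.eig(Ab)
Lb = np.diag(eigenvalues)

# The body of the Grassmann eigensystem equation: Ab Xb == Xb Lb
print(np.allclose(Ab @ Xb, Xb @ Lb))  # True
```

The soul components are then found grade by grade on top of this scalar solution, as described below.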

By decomposing the soul of the equation into a sequence of equations relating matrices of elements
of different grades, the solution for the unknown elements of X and L may be obtained sequentially
by using the solutions from the equations of lower grade. For example, the equation for the
elements of grade one is

Ab Ô Xs1 + As1 Ô Xb ã Xb Ô Ls1 + Xs1 Ô Lb 13.10

where As1, Xs1, and Ls1 are the components of As, Xs, and Ls of grade 1. Here we know Ab and
As1 and have already solved for Xb and Lb, leaving just Xs1 and Ls1 to be determined. However,
although this process would be helpful if trying to solve by hand, it is not necessary when using
the Solve function in Mathematica.
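To experiment with such grade-by-grade equations outside GrassmannAlgebra, only the exterior product on basis monomials is needed. The following minimal Python model is an illustration of the algebra's rules, not of the package's internals: a Grassmann number is represented as a dictionary mapping tuples of increasing basis indices to scalar coefficients, with the empty tuple keying the body.

```python
def wedge(a, b):
    """Exterior product of two Grassmann numbers represented as
    {tuple_of_increasing_basis_indices: coefficient}; () keys the body."""
    out = {}
    for s1, c1 in a.items():
        for s2, c2 in b.items():
            if set(s1) & set(s2):
                continue  # a repeated 1-element factor gives zero
            arr = list(s1) + list(s2)
            sign = 1
            # bubble-sort the factors, flipping the sign on each transposition
            for i in range(len(arr)):
                for j in range(len(arr) - 1):
                    if arr[j] > arr[j + 1]:
                        arr[j], arr[j + 1] = arr[j + 1], arr[j]
                        sign = -sign
            key = tuple(arr)
            out[key] = out.get(key, 0) + sign * c1 * c2
    return {k: v for k, v in out.items() if v != 0}

x = {(1,): 1}  # a basis 1-element
y = {(2,): 1}  # another basis 1-element

print(wedge(x, y))  # {(1, 2): 1}
print(wedge(y, x))  # {(1, 2): -1}  (anti-symmetry)
print(wedge(x, x))  # {}            (the product of repeated factors is zero)
```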

Ÿ GrassmannMatrixEigensystem
The function implemented in GrassmannAlgebra for calculating the eigensystem of a matrix of
Grassmann numbers is GrassmannMatrixEigensystem. It is capable of calculating the
eigensystem only of matrices whose body has distinct eigenvalues.

? GrassmannMatrixEigensystem
GrassmannMatrixEigensystem@AD calculates a list comprising the
matrix of eigenvectors and the diagonal matrix of eigenvalues
for a Grassmann matrix A whose body has distinct eigenvalues.

If the matrix does not have distinct eigenvalues, GrassmannMatrixEigensystem will return a
message telling us. For example:

A = 882, x<, 8y, 2<<; MatrixForm@AD


2 x
y 2

GrassmannMatrixEigensystem@AD
Eigenvalues::notDistinct :
The matrix 882, x<, 8y, 2<< does not have distinct
scalar eigenvalues. The operation applies only
to matrices with distinct scalar eigenvalues.
GrassmannMatrixEigensystem@882, x<, 8y, 2<<D

If the matrix has distinct eigenvalues, a list of two matrices is returned. The first is the matrix of
eigenvectors (whose columns have been normalized with an algorithm which tries to produce as
simple a form as possible), and the second is the (diagonal) matrix of eigenvalues.

A = 882, 3 + x<, 82 + y, 3<<; MatrixForm@AD


2     3 + x
2 + y 3

8X, L< = GrassmannMatrixEigensystem@AD; MatrixForm êü 8X, L<

: -3 - H2 xLê5 + H9 yLê10    1
  2                          1 - xê5 + yê5 ,

  -H2 xLê5 - H3 yLê5 + Hx Ô yLê5    0
  0                                 5 + H2 xLê5 + H3 yLê5 + Hx Ô yLê5 >

To verify that this is indeed the eigensystem for A we can evaluate the equation AÔX ã XÔL.


%@B@A Ô XDD ã %@B@X Ô LDD


True

Next we take a slightly more complex example.

A = 882 + e1 + 2 e1 Ô e2 , 0, 0<, 80, 2 + 2 e1 , 5 + e1 Ô e2 <,
80, -1 + e1 + e2 , -2 - e2 <<; MatrixForm@AD
2 + e1 + 2 e1 Ô e2 0 0
0 2 + 2 e1 5 + e1 Ô e2
0 -1 + e1 + e2 -2 - e2

8X, L< = GrassmannMatrixEigensystem@AD;

X

:80, 0, 1<,
:H-2 + ÂL + H-3 + H7 ÂLê2L e1 + H-5ê2 + H5 ÂLê2L e2 + H17ê4 - H7 ÂLê2L e1 Ô e2,
H-2 - ÂL + H-3 - H7 ÂLê2L e1 + H-5ê2 - H5 ÂLê2L e2 + H17ê4 + H7 ÂLê2L e1 Ô e2, 0>,
81, 1, 0<>

L

::-Â + H1 + H9 ÂLê2L e1 + H-1ê2 - H7 ÂLê2L e2 + H-15ê4 - H9 ÂLê2L e1 Ô e2, 0, 0>,
:0, Â + H1 - H9 ÂLê2L e1 + H-1ê2 + H7 ÂLê2L e2 + H-15ê4 + H9 ÂLê2L e1 Ô e2, 0>,
80, 0, 2 + e1 + 2 e1 Ô e2 <>

%@B@A Ô XDD ã %@B@X Ô LDD


True

13.10 Matrix Functions

Distinct eigenvalue matrix functions


As shown above in the section on matrix eigensystems, we can express a matrix with distinct
eigenvalues in the form:

A ã X Ô L Ô X^-1

Hence we can write its exterior square as:

A^2 ã IX Ô L Ô X^-1M Ô IX Ô L Ô X^-1M ã X Ô L Ô IX^-1 Ô XM Ô L Ô X^-1 ã X Ô L^2 Ô X^-1

This result is easily generalized to give an expression for any power of a matrix.

A^p ã X Ô L^p Ô X^-1 13.11
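The cancellation of X^-1 Ô X between neighbouring factors that produces formula 13.11 works just as in the scalar case, so the formula is easy to check numerically for an ordinary matrix. A sketch in Python with NumPy, using an illustrative matrix:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [2.0, 3.0]])   # distinct eigenvalues 1 and 3

lam, X = np.linalg.eig(A)
L = np.diag(lam)
Xinv = np.linalg.inv(X)

p = 5
direct = np.linalg.matrix_power(A, p)                       # A.A. ... .A
via_eigensystem = X @ np.linalg.matrix_power(L, p) @ Xinv   # X.L^p.X^-1
print(np.allclose(direct, via_eigensystem))  # True
```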

Indeed, it is also valid for any linear combination of powers,

Σ ap A^p ã Σ ap X Ô L^p Ô X^-1 ã X Ô IΣ ap L^pM Ô X^-1

and hence any function f[A] defined by a linear combination of powers (series):

f@AD ã Σ ap A^p

f@AD ã X Ô f@LD Ô X-1 13.12

Now, since L is a diagonal matrix:

L ã DiagonalMatrix@8l1 , l2 , …, lm <D

its pth power is just the diagonal matrix of the pth powers of the elements.

L^p ã DiagonalMatrix@8l1^p, l2^p, …, lm^p<D

Hence:

Σ ap L^p ã DiagonalMatrix@8Σ ap l1^p, Σ ap l2^p, …, Σ ap lm^p<D

f@LD ã DiagonalMatrix@8f@l1 D, f@l2 D, …, f@lm D<D

Hence, finally we have an expression for the function f of the matrix A in terms of the eigenvector
matrix and a diagonal matrix in which each diagonal element is the function f of the corresponding
eigenvalue.

f@AD ã X Ô DiagonalMatrix@8f@l1 D, f@l2 D, …, f@lm D<D Ô X^-1 13.13

Ÿ GrassmannMatrixFunction
We have already seen in Chapter 9 how to calculate a function of a Grassmann number by using
GrassmannFunction. We use GrassmannFunction and the formula above to construct
GrassmannMatrixFunction for calculating functions of matrices.


? GrassmannMatrixFunction
GrassmannMatrixFunction@f@ADD calculates a function f of a
Grassmann matrix A with distinct eigenvalues, where f is a
pure function. GrassmannMatrixFunction@8fx,x<,AD calculates
a function fx of a Grassmann matrix A with distinct
eigenvalues, where fx is a formula with x as variable.

It should be remarked that, just as functions of a Grassmann number will have different forms in
spaces of different dimension, so too will functions of a Grassmann matrix.

Ÿ Exponentials and Logarithms


In what follows we shall explore some examples of functions of Grassmann matrices in 3-space.

¯!3 ; A = 881 + x, x Ô z<, 82, 3 - z<<; MatrixForm@AD

1 + x x Ô z
2     3 - z

We first calculate the exponential of A.

expA = GrassmannMatrixFunction@Exp@ADD

::‰ + ‰ x + H1ê2L ‰ I-3 + ‰^2M x Ô z, H1ê2L ‰ I-1 + ‰^2M x Ô z>,
:-‰ + ‰^3 + H1ê2L ‰ I-3 + ‰^2M x - H1ê2L ‰ z - H1ê2L ‰^3 z + 3 ‰ x Ô z,
‰^3 - ‰^3 z + H1ê2L I‰ + ‰^3M x Ô z>>

We then calculate the logarithm of this exponential and see that it is A as we expect.

GrassmannMatrixFunction@Log@expADD
881 + x, x Ô z<, 82, 3 - z<<
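The same round trip can be reproduced numerically for a scalar matrix by applying functions through the eigensystem, as in formula 13.13. A Python/NumPy sketch; the helper matrix_function is hypothetical and not part of GrassmannAlgebra:

```python
import numpy as np

def matrix_function(f, M):
    # Apply f through the eigensystem: M = Y.D.Y^-1  =>  f(M) = Y.f(D).Y^-1
    d, Y = np.linalg.eig(M)
    return Y @ np.diag(f(d)) @ np.linalg.inv(Y)

A = np.array([[1.0, 0.0],
              [2.0, 3.0]])

expA = matrix_function(np.exp, A)
logexpA = matrix_function(np.log, expA)  # eigenvalues of expA are positive here
print(np.allclose(logexpA, A))  # True
```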

Ÿ Trigonometric functions
We compute the square of the sine and the square of the cosine of A:


s2A = GrassmannMatrixFunction@8Sin@xD^2, x<, AD

::2 x Cos@1D Sin@1D + Sin@1D^2 + H1ê2L Sin@1D H-4 Cos@1D + Sin@3D + Sin@5DL x Ô z,
Cos@2D Sin@2D^2 x Ô z>,
:-z Sin@2D - 2 z Cos@4D Sin@2D + H1ê2L x Sin@2D H-2 + Sin@4DL + Sin@2D Sin@4D +
H1ê2L z Sin@2D Sin@4D + H1ê2L Sin@2D H6 + 6 Cos@4D - 3 Sin@4DL x Ô z,
Sin@3D^2 - z Sin@6D + H1ê4L H-Cos@2D + Cos@6D + 4 Sin@6DL x Ô z>>

c2A = GrassmannMatrixFunction@8Cos@xD^2, x<, AD

::Cos@1D^2 - 2 x Cos@1D Sin@1D + H1ê2L Cos@1D H-Cos@3D + Cos@5D + 4 Sin@1DL x Ô z,
-Cos@2D Sin@2D^2 x Ô z>,
:z Sin@2D + 2 z Cos@4D Sin@2D - H1ê2L x Sin@2D H-2 + Sin@4DL - Sin@2D Sin@4D -
H1ê2L z Sin@2D Sin@4D - H1ê2L Sin@2D H6 + 6 Cos@4D - 3 Sin@4DL x Ô z,
Cos@3D^2 + z Sin@6D - H1ê4L H-Cos@2D + Cos@6D + 4 Sin@6DL x Ô z>>

and show that sin^2 x + cos^2 x = 1, even for Grassmann matrices:

s2A + c2A êê Simplify êê MatrixForm


1 0
0 1
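The scalar analogue of this identity follows immediately from formula 13.13, since the diagonal entries obey sin^2 l + cos^2 l = 1 individually. A NumPy sketch with an illustrative matrix:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [2.0, 3.0]])

lam, X = np.linalg.eig(A)
Xinv = np.linalg.inv(X)

sinA = X @ np.diag(np.sin(lam)) @ Xinv
cosA = X @ np.diag(np.cos(lam)) @ Xinv

# sin(A)^2 + cos(A)^2 reproduces the identity matrix
print(np.allclose(sinA @ sinA + cosA @ cosA, np.eye(2)))  # True
```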

Ÿ Symbolic matrices
B = 88a + x, x Ô z<, 8b, c - z<<; MatrixForm@BD
a + x x Ô z
b     c - z


GrassmannMatrixFunction@Exp@BDD

::‰^a H1 + xL + b H-‰^a + a ‰^a - c ‰^a + ‰^cL x Ô z ê Ha - cL^2, H‰^a - ‰^cL x Ô z ê Ha - cL>,
:H1 ê Ha - cL^3L Hb HHa - cL H-c ‰^a + c ‰^c - ‰^a x - c ‰^a x + ‰^c x - ‰^a z + ‰^c z -
c ‰^c z + a H‰^a - ‰^c + ‰^a x + ‰^c zLL +
H1 + bL H-2 ‰^a - c ‰^a + 2 ‰^c - c ‰^c + a H‰^a + ‰^cLL x Ô zLL,
-‰^c H-1 + zL + b H‰^a - ‰^c - a ‰^c + c ‰^cL x Ô z ê Ha - cL^2>>

Ÿ Symbolic functions
Suppose we wish to compute the symbolic function f of the matrix A.

A êê MatrixForm
1 + x x Ô z
2     3 - z

If we try to compute this we get an incorrect result because the derivatives evaluated at various
scalar values are not recognized by GrassmannSimplify as scalars. Here we print out just three
lines of the result.

Short@GrassmannMatrixFunction@f@ADD, 3D

:8á1à<, :-f@1D + f@3D + H1ê2L x Ô f@1D - H1ê2L x Ô f@3D + á22à +
x Ô z H2 f£@1D + H5ê2L x f£@1D + z f£@1D + f£@3D + H1ê2L x f£@3D + 2 z f£@3DL,
f@3D + á9à + x Ô z HH1ê2L x f£@1D + H1ê2L f£@3D + z f£@3DL>>

We see from this that the derivatives in question are the zeroth and first: f@_D and f£@_D. We
therefore declare these patterns to be scalars so that the derivatives evaluated at any scalar
arguments will be considered by GrassmannSimplify as scalars.

DeclareExtraScalars@8f@_D, f£@_D<D

:a, b, c, d, e, f, g, h, %, f@_D, H_ û _L?InnerProductQ, _0, f£@_D>


fA = GrassmannMatrixFunction@f@ADD

::f@1D + x f£@1D - H1ê2L x Ô z Hf@1D - f@3D + 2 f£@1DL, -H1ê2L Hf@1D - f@3DL x Ô z>,
:-f@1D - H1ê2L x f@1D - H1ê2L z f@1D + f@3D + H1ê2L x f@3D + H1ê2L z f@3D -
x f£@1D - z f£@3D + H3ê2L x Ô z Hf@1D - f@3D + f£@1D + f£@3DL,
f@3D - z f£@3D + H1ê2L x Ô z Hf@1D - f@3D + 2 f£@3DL>>
2

The function f may be replaced by any specific function. We replace it here by Exp and check that
the result is the same as that given above.

expA ã HfA ê. f Ø ExpL êê Simplify


True

13.11 Supermatrices
To be completed.


A1 Guide to GrassmannAlgebra
Ï To be completed


A2 A Brief Biography of
Grassmann
Hermann Günther Grassmann was born in 1809 in Stettin, a town in Pomerania a short distance
inland from the Baltic. His father Justus Günther Grassmann taught mathematics and physical
science at the Stettin Gymnasium. Hermann was no child prodigy. His father used to say that he
would be happy if Hermann became a craftsman or a gardener.

In 1827 Grassmann entered the University of Berlin with the intention of studying theology. As his
studies progressed he became more and more interested in studying philosophy. At no time whilst
a student in Berlin was he known to attend a mathematics lecture.

Grassmann was however only 23 when he made his first important geometric discovery: a method
of adding and multiplying lines. This method was to become the foundation of his
Ausdehnungslehre (extension theory). His own account of this discovery is given below.

Grassmann was interested ultimately in a university post. In order to improve his academic
standing in science and mathematics he composed in 1839 a work (over 200 pages) on the study of
tides entitled Theorie der Ebbe und Flut. This work contained the first presentation of a system of
spatial analysis based on vectors, including vector addition and subtraction, vector differentiation,
and the elements of the linear vector function, all developed for the first time. His examiners failed
to see its importance.

Around Easter of 1842 Grassmann began to turn his full energies to the composition of his first
'Ausdehnungslehre', and by the autumn of 1843 he had finished writing it. The following is an
excerpt from the foreword in which he describes how he made his seminal discovery. The
translation is by Lloyd Kannenberg (Grassmann 1844).

The initial incentive was provided by the consideration of negatives in geometry; I was used to regarding the
displacements AB and BA as opposite magnitudes. From this it follows that if A, B, C are points of a straight
line, then AB + BC = AC is always true, whether AB and BC are directed similarly or oppositely, that is even
if C lies between A and B. In the latter case AB and BC are not interpreted merely as lengths, but rather their
directions are simultaneously retained as well, according to which they are precisely oppositely oriented.
Thus the distinction was drawn between the sum of lengths and the sum of such displacements in which the
directions were taken into account. From this there followed the demand to establish this latter concept of a
sum, not only for the case that the displacements were similarly or oppositely directed, but also for all other
cases. This can most easily be accomplished if the law AB + BC = AC is imposed even when A, B, C do not
lie on a single straight line.
Thus the first step was taken toward an analysis that subsequently led to the new branch of mathematics
presented here. However, I did not then recognize the rich and fruitful domain I had reached; rather, that
result seemed scarcely worthy of note until it was combined with a related idea.
While I was pursuing the concept of product in geometry as it had been established by my father, I concluded
that not only rectangles but also parallelograms in general may be regarded as products of an adjacent pair of
their sides, provided one again interprets the product, not as the product of their lengths, but as that of the
two displacements with their directions taken into account. When I combined this concept of the product
with that previously established for the sum, the most striking harmony resulted; thus whether I multiplied
the sum (in the sense just given) of two displacements by a third displacement lying in the same plane, or the
individual terms by the same displacement and added the products with due regard for their positive and
negative values, the same result obtained, and must always obtain.
This harmony did indeed enable me to perceive that a completely new domain had thus been disclosed, one
that could lead to important results. Yet this idea remained dormant for some time since the demands of my
job led me to other tasks; also, I was initially perplexed by the remarkable result that, although the laws of
ordinary multiplication, including the relation of multiplication to addition, remained valid for this new type
of product, one could only interchange factors if one simultaneously changed the sign (i.e. changed
+ into - and vice versa).

As with his earlier work on tides, the importance of this work was ignored. Since few copies were
sold, most ended by being used as waste paper by the publisher. The failure to find acceptance for
Grassmann's ideas was probably due to two main reasons. The first was that Grassmann was just a
simple schoolteacher, and had none of the academic charisma that other contemporaries, like
Hamilton for example, had. History seems to suggest that the acceptance of radical discoveries
often depends more on the discoverer than the discovery.

The second reason is that Grassmann adopted the format and the approach of the modern
mathematician. He introduced and developed his mathematical structure axiomatically and
abstractly. The abstract nature of the work, initially devoid of geometric or physical significance,
was just too new and formal for the mathematicians of the day and they all seemed to find it too
difficult. More fully than any earlier mathematician, Grassmann seems to have understood the
associative, commutative and distributive laws; yet still, great mathematicians like Möbius found it
unreadable, and Hamilton was led to write to De Morgan that to be able to read Grassmann he
'would have to learn to smoke'.

In the year of publication of the Ausdehnungslehre (1844) the Jablonowski Society of Leipzig
offered a prize for the creation of a mathematical system fulfilling the idea that Leibniz had
sketched in 1679. Grassmann entered with 'Die Geometrische Analyse geknüpft an die von Leibniz
erfundene geometrische Charakteristik', and was awarded the prize. Yet as with the Ausdehnungslehre it was
subsequently received with almost total silence.

However, in the few years following, three of Grassmann's contemporaries were forced to take
notice of his work because of priority questions. In 1845 Saint-Venant published a paper in which
he developed vector sums and products essentially identical to those already occurring in
Grassmann's earlier works (Barré 1845). In 1853 Cauchy published his method of 'algebraic keys'
for solving sets of linear equations (Cauchy 1853). Algebraic keys behaved identically to
Grassmann's units under the exterior product. In the same year Saint-Venant published an
interpretation of the algebraic keys geometrically and in terms of determinants (Barré 1853). Since
such were fundamental to Grassmann's already published work, he wrote a reply for Crelle's
Journal in 1855 entitled 'Sur les différents genres de multiplication', in which he claimed priority
over Cauchy and Saint-Venant and published some new results (Grassmann 1855).

It was not until 1853 that Hamilton heard of the Ausdehnungslehre. He set to reading it and soon
after wrote to De Morgan.

I have recently been reading … more than a hundred pages of Grassmann's Ausdehnungslehre, with great
admiration and interest …. If I could hope to be put in rivalship with Des Cartes on the one hand and with
Grassmann on the other, my scientific ambition would be fulfilled.

During the period 1844 to 1862 Grassmann published seventeen scientific papers, including
important papers in physics, and a number of mathematics and language textbooks. He edited a
political paper for a time and published materials on the evangelization of China. This, on top of a
heavy teaching load and the raising of a large family. However, this same period saw only a few
mathematicians - Hamilton, Cauchy, Möbius, Saint-Venant, Bellavitis and Cremona - having
any acquaintance with, or appreciation of, his work.

In 1862 Grassmann published a completely rewritten Ausdehnungslehre: Die Ausdehnungslehre:
Vollständig und in strenger Form. In the foreword Grassmann discussed the poor reception
accorded his earlier work and stated that the content of the new book was presented in 'the
strongest mathematical form that is actually known to us; that is Euclidean …'. It was a book of
theorems and proofs largely unsupported by physical example.

This apparently was a mistake, for the reception accorded this new work was as quiet as that
accorded the first, although it contained many new results including a solution to Pfaff's problem.
Friedrich Engel (the editor of Grassmann's collected works) comments: 'As in the first
Ausdehnungslehre so in the second: matters which Grassmann had published in it were later
independently rediscovered by others, and only much later was it realized that Grassmann had
published them earlier' (Engel 1896).

Thus Grassmann's works were almost totally neglected for forty-five years after his first discovery.
In the second half of the 1860s recognition slowly started to dawn on his contemporaries, among
them Hankel, Clebsch, Schlegel, Klein, Noth, Sylvester, Clifford and Gibbs. Gibbs discovered
Grassmann's works in 1877 (the year of Grassmann's death), and Clifford discovered them in depth
about the same time. Both became quite enthusiastic about Grassmann's new mathematics.

Grassmann's activities after 1862 continued to be many and diverse. His contribution to philology
rivals his contribution to mathematics. In 1849 he had begun a study of Sanskrit and in 1870
published his Wörterbuch zum Rig-Veda, a work of 1784 pages, and his translation of the Rig-Veda,
a work of 1123 pages, both still in use today. In addition he published on mathematics, languages,
botany, music and religion. In 1876 he was made a member of the American Oriental Society, and
received an honorary doctorate from the University of Tübingen.

On 26 September 1877 Hermann Grassmann died, departing from a world only just beginning to
recognize the brilliance of the mathematical creations of one of its most outstanding eclectics.


A3 Notation
To be completed.

Operations
Ô exterior product operation

Ó regressive product operation

û interior product operation

Û generalized product operation (of order l, written as an underscript)

ù Clifford product operation

Î hypercomplex product operation

Ñ complement operation

Ñ vector subspace complement operation

× cross product operation

†Ñ§ measure

= defined equal to

ã equal to

ª congruence

Elements
a, b, c, … scalars

x, y, z, … 1-elements or vectors

a, b, g, … m-elements (the grade m is written as an underscript)

X, Y, Z, … Grassmann numbers

n1 , n2 , … position vectors

P1 , P2 , … points

L1 , L2 , … lines


P1 , P2 , … planes

Declarations
¯!n declare a linear or vector space of n dimensions

¯"n declare an n-space comprising an origin point and a vector space of n dimensions

¯¯S declare extra scalar symbols

¯¯V declare extra vector symbols

Special objects
1 unit scalar

1 unit n-element

¯" origin point

¯D dimension of the currently declared space

¯n symbolic dimension of a space

¯c symbolic congruence factor

¯g determinant of the metric tensor

Spaces
L0 linear space of scalars or field of scalars

L1 linear space of 1-elements, underlying linear space

Lm linear space of m-elements

Ln linear space of n-elements

Basis elements
ei basis 1-element or covariant basis element

ei contravariant basis 1-element

ei basis m-element or covariant basis m-element (with underscript m)

ei contravariant basis m-element (with underscript m)

ei cobasis element of ei (each with underscript m)


A4 Glossary
To be completed.

Ï Ausdehnungslehre
The term Ausdehnungslehre is variously translated as 'extension theory', 'theory of
extension', or 'calculus of extension'. Refers to Grassmann's original work and other early
work in the same notational and conceptual tradition.

Ï Bivector
A bivector is a sum of simple bivectors.

Ï Bound vector
A bound vector B is the exterior product of a point and a vector. B ã PÔx. It may also
always be expressed as the exterior product of two points.

Ï Bound vector space


A bound vector space is a linear space whose basis elements are interpreted as an origin
point ¯" and vectors.

Ï Bound bivector
A bound bivector B is the exterior product of a point P and a bivector W: B ã PÔW.

Ï Bound simple bivector


A bound simple bivector B is the exterior product of a point P and a simple bivector xÔy:
B ã PÔxÔy. It may also always be expressed as the exterior product of two points and a
vector, or the exterior product of three points.

Ï Cofactor
The cofactor of a minor M of a matrix A is the signed determinant formed from the rows
and columns of A which are not in M.
The sign may be determined from H-1L^ΣHri + ci L, where ri and ci are the row and
column numbers of the minor.

Ï Dimension of a linear space


The dimension of a linear space is the maximum number of independent elements in it.


Ï Dimension of an exterior linear space


The dimension of an exterior linear space Lm is Binomial@n, mD, where n is the dimension of the
underlying linear space L1.
The dimension Binomial@n, mD is equal to the number of combinations of n elements taken m at a
time.

Ï Dimension of a Grassmann algebra


The dimension of a Grassmann algebra is the sum of the dimensions of its component
exterior linear spaces.
The dimension of a Grassmann algebra is then given by 2^n, where n is the dimension of
the underlying linear space.

Ï Direction
A direction is the space of a vector and is therefore the set of all vectors parallel to a
given vector.

Ï Displacement
A displacement is a physical interpretation of a vector. It may also be viewed as the
difference of two points.

Ï Exterior linear space


An exterior linear space of grade m, denoted Lm, is the linear space generated by m-elements.

Ï Force
A force is a physical entity represented by a bound vector. This differs from common
usage in which a force is represented by a vector. For reasons discussed in the text,
common use does not provide a satisfactory model.

Ï Force vector
A force vector is the vector of the bound vector representing the force.

Ï General geometrically interpreted 2-element


A general geometrically interpreted 2-element U is the sum of a bound vector PÔx and a
bivector W. That is, U ã PÔx + W.


Ï Geometric entities
Points, lines, planes, … are geometric entities. Each is defined as the space of a
geometrically interpreted element.
A point is a geometric 1-entity.
A line is a geometric 2-entity.
A plane is a geometric 3-entity.

Ï Geometric interpretations
Points, weighted points, vectors, bound vectors, bivectors, … are geometric
interpretations of m-elements.

Ï Geometrically interpreted algebra


A geometrically interpreted algebra is a Grassmann algebra with a geometrically
interpreted underlying linear space.

Ï Grade
The grade of an m-element is m.
The grade of a simple m-element is the number of 1-element factors in it.
The grade of the exterior product of an m-element and a k-element is m+k.
The grade of the regressive product of an m-element and a k-element is m+k-n.
The grade of the complement of an m-element is n-m.
The grade of the interior product of an m-element and a k-element is m-k.
The grade of a scalar is zero.
(The dimension n is the dimension of the underlying linear space.)

Ï GrassmannAlgebra
The concatenated italicized term GrassmannAlgebra refers to the Mathematica software
package which accompanies this book.

Ï A Grassmann algebra
A Grassmann algebra is the direct sum of an underlying linear space L1, its field L0, and
the exterior linear spaces Lm (2 ≤ m ≤ n).

L0 ⊕ L1 ⊕ L2 ⊕ … ⊕ Lm ⊕ … ⊕ Ln

Ï The Grassmann algebra


The Grassmann algebra is used to describe that body of algebraic theory and results
based on the Ausdehnungslehre, but extended to include more recent results and
viewpoints.

Ï Intersection

An intersection of two simple elements is any of the congruent elements defined by the
intersection of their spaces.

Ï Laplace expansion theorem


The Laplace expansion theorem states: If any r rows are fixed in a determinant, then the
value of the determinant may be obtained as the sum of the products of the minors of rth
order (corresponding to the fixed rows) by their cofactors.

Ï Line
A line is the space of a bound vector. Thus a line consists of all the (perhaps weighted)
points on it and all the vectors parallel to it.

Ï Linear space
A linear space is a mathematical structure defined by a standard set of axioms. It is often
referred to simply as a 'space'.

Ï Minor
A minor of a matrix A is the determinant (or sometimes matrix) of degree (or order) k
formed from A by selecting the elements at the intersection of k distinct columns and k
distinct rows.

Ï m-direction
An m-direction is the space of a simple m-vector. It is also therefore the set of all vectors
parallel to a given simple m-vector.

Ï m-element
An m-element is a sum of simple m-elements.

Ï m-plane
An m-plane is the space of a bound simple m-vector. Thus an m-plane consists of all the
(perhaps weighted) points on it and all the vectors parallel to it.

Ï m-vector
An m-vector is a sum of simple m-vectors.

Ï n-algebra
The term n-algebra is an alias for the phrase Grassmann algebra with an underlying
linear space of n dimensions.
Ï Origin

The origin ¯" is the geometric interpretation of a specific 1-element as a reference point.

Ï Plane
A plane is the space of a bound simple bivector. Thus a plane consists of all the (perhaps
weighted) points on it and all the vectors parallel to it.

Ï Point
A point P is the sum of the origin ¯" and a vector x: P ã ¯" + x.

Ï Point mass
A point mass is a physical interpretation of a weighted point.

Ï Physical entities
Point masses, displacements, velocities, forces, moments, angular velocities, … are
physical entities. Each is represented by a geometrically interpreted element.
A point mass, displacement or velocity is a physical 1-entity.
A force, moment or angular velocity is a physical 2-entity.

Ï Physical representations
Points, weighted points, vectors, bound vectors, bivectors, … are geometric
representations of physical entities such as point masses, displacements, velocities,
forces, moments.
Physical entities are represented by geometrically interpreted elements.

Ï Scalar
A scalar is an element of the field L0 of the underlying linear space L1.
A scalar is of grade zero.

Ï Screw
A screw is a geometrically interpreted 2-element in a three-dimensional (physical) space
(four-dimensional linear space) in which the bivector is orthogonal to the vector of the
bound vector.
The bivector is necessarily simple since the vector subspace is three-dimensional.

Ï Simple element
A simple element is an element which may be expressed as the exterior product of 1-
elements.

Ï Simple bivector
A simple bivector V is the exterior product of two vectors. V ã xÔy.

Ï Simple m-element
A simple m-element is the exterior product of m 1-elements.

Ï Simple m-vector
A simple m-vector is the exterior product of m vectors.

Ï Space of a simple m-element


The space of a simple m-element a is the set of all 1-elements x such that aÔxã0.
The space of a simple m-element is a linear space of dimension m.

Ï Space of a non-simple m-element


The space of a non-simple m-element is the union of the spaces of its component simple
m-elements.

Ï 2-direction
A 2-direction is the space of a simple bivector. It is therefore the set of all vectors parallel
to a given simple bivector.

Ï Underlying linear space


The underlying linear space of a Grassmann algebra is the linear space L1 of 1-elements,
which together with the exterior product operation, generates the algebra.
The dimension of an underlying linear space is denoted by the symbol n.

Ï Underlying bound vector space


An underlying bound vector space is an underlying linear space whose basis elements are
interpreted as an origin point ¯" and vectors. It can be shown that from this basis a
second basis can be constructed, all of whose basis elements are points.

Ï Union
A union of two simple elements is any of the congruent elements which define the union
of their spaces.

Ï Vector
A vector is a geometric interpretation of a 1-element.
Ï Vector space

A vector space is a linear space in which the elements are interpreted as vectors.

Ï Vector subspace of a geometrically interpreted underlying linear space


The vector subspace of a geometrically interpreted underlying linear space is the
subspace of elements which do not involve the origin.

Ï Weighted point
A weighted point is a scalar multiple a of a point P: a P.


A5 Bibliography
To be completed.

Note
This bibliography is specifically directed at covering the algebraic implications of
Grassmann's work. It is outside the scope of this book to cover applications of
Grassmann's work to calculus (for example, the theory of exterior differential forms), or
to discuss other algebraic systems (for example, Clifford algebra) except where they
clearly intersect with Grassmann's algebraic concepts. Of course, a result which may be
expressed by an author in one system may well find expression in others.

Ì Armstrong H L 1959
'On an alternative definition of the vector product in n-dimensional vector analysis'
Matrix and Tensor Quarterly, IX no 4, pp 107-110.
The author proposes a definition equivalent to the complement of the exterior product of
n|1 vectors.

Ì Ball R S 1876
The Theory of Screws: A Study in the Dynamics of a Rigid Body
Dublin
See the note on Hyde (1888).

Ì Ball R S 1900
A Treatise on the Theory of Screws
Cambridge University Press (1900). Reprinted 1998.
A classical work on the theory of screws, containing an annotated bibliography in which
Ball refers to the Ausdehnungslehre of 1862: 'This remarkable work … contains much
that is of instruction and interest in connection with the present theory … . Here we have
a very general theory which includes screw coordinates as a special case.' Ball does not
use Grassmann's methods in his treatise.

Ì Barton H 1927
'A Modern Presentation of Grassmann's Tensor Analysis'
American Journal of Mathematics, XLIX, pp 598-614.
This paper covers similar ground to that of Moore (1926).

Ì Bourbaki N 1948
Algèbre
Actualités Scientifiques et Industrielles No.1044, Paris
Chapter III treats multilinear algebra.

Ì Bowen R M and Wang C-C 1976


Introduction to Vectors and Tensors
Plenum Press. Two volumes.
This is the only contemporary text on vectors and tensors I have sighted which relates
points and vectors via the explicit introduction of the origin into the calculus (p 254).

Ì Brand L 1947
Vector and Tensor Analysis
Wiley, New York.
Contains a chapter on motor algebra which, according to the author in his preface '… is
apparently destined to play an important role in mechanics as well as in line geometry'.
There is also a chapter on quaternions.

Ì Buchheim A 1884-1886
'On the Theory of Screws in Elliptic Space'
Proceedings of the London Mathematical Society, xiv (1884) pp 83-98, xvi (1885) pp
15-27, xvii (1886) pp 240-254, xvii p 88.
The author writes 'My special object is to show that the Ausdehnungslehre supplies all
the necessary materials for a calculus of screws in elliptic space. Clifford was apparently
led to construct his theory of biquaternions by the want of such a calculus, but
Grassmann's method seems to afford a simpler and more natural means of expression
than biquaternions.' (xiv, p 90) Later he extends this theory to '… all kinds of space.' (xvi,
p 15)

Ì Burali-Forti C 1897
Introduction à la Géométrie Différentielle suivant la Méthode de H. Grassmann
Gauthier-Villars, Paris
This work covers both algebra and differential geometry in the tradition of the Peano
approach to the Ausdehnungslehre.

Ì Burali-Forti C and Marcolongo R 1910

Éléments de Calcul Vectoriel


Hermann, Paris
Mostly a treatise on standard vector analysis, but it does contain an appendix (pp
176-198) on the methods of the Ausdehnungslehre and some interesting historical notes
on the vector calculi and their notations. The authors use the wedge Ô to denote Gibbs'
cross product ! and use the !, initially introduced by Grassmann to denote the scalar or
inner product.

Ì Cartan É 1922
Leçons sur les Invariants Intégraux
Hermann, Paris
In this work Cartan develops the theory of exterior differential forms, remarking that he
has called them 'extérieures' since they obey Grassmann's rules of 'multiplication
extérieures'.

Ì Cartan É 1938
Leçons sur la Théorie des Spineurs
Hermann, Paris
In the introduction Cartan writes "One of the principal aims of this work is to develop the
theory of spinors systematically by giving a purely geometrical definition of these
mathematical entities: because of this geometrical origin, the matrices used by physicists
in quantum mechanics appear of their own accord, and we can grasp the profound origin
of the property, possessed by Clifford algebras, of representing rotations in space having
any number of dimensions". Contains a short section on multivectors and complements
(the French term is supplément).

Ì Cartan É 1946
Leçons sur la Géométrie des Espaces de Riemann
Gauthier-Villars, Paris
This is a classic text (originating in 1925) on the application of the exterior calculus to
differential geometry. Cartan begins with a chapter on exterior algebra and its expression
in tensor index notation. He uses square brackets to denote the exterior product (as did
Grassmann), and a wedge to denote the cross product.

Ì Carvallo M E 1892
'La Méthode de Grassmann'
Nouvelles Annales de Mathématiques, serie 3 XI, pp 8-37.
An exposition of some of Grassmann's methods applied to three dimensional geometry
following the approach of Peano. It does not treat the interior product.

Ì Chevalley C 1955
The Construction and Study of Certain Important Algebras
Mathematical Society of Japan, Tokyo.
Lectures given at the University of Tokyo on graded, tensor, Clifford and exterior
algebras.

Ì Clifford W K 1873
'Preliminary Sketch of Biquaternions'
Proceedings of the London Mathematical Society, IV, nos 64 and 65, pp 381-395
This paper includes an interesting discussion of the geometric nature of mechanical
quantities. Clifford adopts the term 'rotor' for the bound vector, and 'motor' for the general
sum of rotors. By analogy with the quaternion as a quotient of vectors he defines the
biquaternion as a quotient of motors.

Ì Clifford W K 1878
'Applications of Grassmann's Extensive Algebra'
American Journal of Mathematics Pure and Applied, I, pp 350-358
In this paper Clifford lays the foundations for general Clifford algebras.

Ì Clifford W K 1882
Mathematical Papers
Reprinted by Chelsea Publishing Co, New York (1968).
Of particular interest in addition to his two published papers above are the otherwise
unpublished notes:
'Notes on Biquaternions' (~1873)
'Further Note on Biquaternions' (1876)
'On the Classification of Geometric Algebras' (1876)
'On the Theory of Screws in a Space of Constant Positive Curvature' (1876).

Ì Coffin J G 1909
Vector Analysis
Wiley, New York.
This is the second English text in the Gibbs-Heaviside tradition. It contains an appendix
comparing the various notations in use at the time, including his view of the
Grassmannian notation.

Ì Collins J V 1899-1900
'An Elementary Exposition of Grassmann's Ausdehnungslehre or Theory of Extension'
American Mathematical Monthly, 6 (1899) pp 193-198, 261-266, 297-301; 7 (1900) pp
31-35, 163-166, 181-187, 207-214, 253-258.
This work follows in summary form the Ausdehnungslehre of 1862 as regards general
theory but differs in its discussion of applications. It includes applications to geometry
and brief applications to linear equations, mechanics and logic.


Ì Coolidge J L 1940
'Grassmann's Calculus of Extension' in
A History of Geometrical Methods
Oxford University Press, pp 252-257
This brief treatment of Grassmann's work is characterized by its lack of clarity. The
author variously describes an exterior product as 'essentially a matrix' and as 'a vector
perpendicular to the factors' (p 254). And confusion arises between Grassmann's matrix
and the division of two exterior products (p 256).

Ì Cox H 1882
'On the application of Quaternions and Grassmann's Ausdehnungslehre to different kinds
of Uniform Space'
Cambridge Philosophical Transactions, XIII part II, pp 69-143.
The author shows that the exterior product is the multiplication required to describe non-
metric geometry, for 'it involves no ideas of distance' (p 115). He then discusses exterior,
regressive and interior products, applying them to geometry, systems of forces, and linear
complexes | using the notation of 1844. In other papers Cox applies the
Ausdehnungslehre to non-Euclidean geometry (1873) and to the properties of circles
(1890).

Ì Crowe M J 1967, 1985


A History of Vector Analysis
Notre Dame 1967. Republished by Dover 1985.
This is the most informative work available on the history of vector analysis from the
discovery of the geometric representation of complex numbers to the development of the
Gibbs-Heaviside system. Crowe's thesis is that the Gibbs-Heaviside system grew mostly
out of quaternions rather than from the Ausdehnungslehre. His explanation of
Grassmannian concepts is particularly accurate in contradistinction to many who supply a
more casual reference.

Ì Dibag I 1974
'Factorization in Exterior Algebras'
Journal of Algebra, 30, pp 259-262
The author develops necessary and sufficient conditions for an m-element to have a
certain number of 1-element factors. He also shows that an (n|2)-element in an odd
dimensional space always has a 1-element factor.

Ì Drew T B 1961
Handbook of Vector and Polyadic Analysis
Reinhold, New York


Tensor analysis in invariant notation. Of particular interest here is a section (p 57) on
'polycross products' | an extension of the (three-dimensional) cross product to polyads.

Ì Efimov N V and Rozendorn E R 1975


Linear Algebra and Multi-Dimensional Geometry
MIR, Moscow
Contains a chapter on multivectors and exterior forms.

Ì Fehr H 1899
Application de la Méthode Vectorielle de Grassmann à la Géométrie Infinitésimale
Georges Carré, Paris
Comprises an initial chapter on exterior algebra as well as standard differential geometry.

Ì Fleming W H 1965
Functions of Several Variables
Addison-Wesley
Contains a chapter on exterior algebra.

Ì Forder H G 1941
The Calculus of Extension
Cambridge
This text is one of the most recent devoted to an exposition of Grassmann's methods.
It is an extensive work (490 pages) largely using Grassmann's notations and relying
primarily on the Ausdehnungslehre of 1862. Its application is particularly to geometry
including many examples well illustrating the power of the methods, a chapter on forces,
screws and linear complexes, and a treatment of the algebra of circles.

Ì Gibbs J W 1886
'On multiple algebra'
Address to the American Association for the Advancement of Science
In Collected Works, Gibbs 1928, vol 2.
This paper is probably the most authoritative historical comparison of the different
'vectorial' algebras of the time. Gibbs was obviously very enthusiastic about the
Ausdehnungslehre, and shows himself here to be one of Grassmann's greatest proponents.

Ì Gibbs J W 1891
'Quaternions and the Ausdehnungslehre'
Nature, 44, pp 79|82. Also in Collected Works, Gibbs 1928.
Gibbs compares Hamilton's Quaternions with Grassmann's Ausdehnungslehre and
concludes that '... Grassmann's system is of indefinitely greater extension ...'. Here he also
concludes that to Grassmann must be attributed the discovery of matrices. Gibbs
published a further three papers in Nature (also in Collected Works, Gibbs 1928) on the
relationship between quaternions and vector analysis, providing an enlightening insight
into the quaternion|vector analysis controversy of the time.

Ì Gibbs J W 1928
The Collected Works of J. Willard Gibbs Ph.D. LL.D.
Two volumes. Longmans, New York.
In part 2 of Volume 2 is reprinted Gibbs' only personal work on vector analysis: Elements
of Vector Analysis, Arranged for the Use of Students of Physics (1881|1884). This was
not published elsewhere.

Ì Grassmann H G 1844
Die lineale Ausdehnungslehre: ein neuer Zweig der Mathematik
Leipzig.
The full title is Die lineale Ausdehnungslehre: ein neuer Zweig der Mathematik
dargestellt und durch Anwendungen auf die übrigen Zweige der Mathematik, wie auch
auf die Statik, Mechanik, die Lehre vom Magnetismus und die Krystallonomie erläutert.
This first book on Grassmann's new mathematics is known shortly as Die
Ausdehnungslehre von 1844. It develops the theory of exterior multiplication and division
and regressive exterior multiplication. It does not treat complements or interior products
in the way of the Ausdehnungslehre of 1862. The original Die Ausdehnungslehre von
1844 was republished in 1878. The best source to this work is Volume 1 of Grassmann's
collected works (1896), of which an English translation has been made by Lloyd C.
Kannenberg (1995).

Ì Grassmann H G 1855
'Sur les différents genres de multiplication'
Crelle's Journal, 49, pp 123-141
This paper was written to claim priority over Cauchy for a method of solving linear
equations.

Ì Grassmann H G 1862
Die Ausdehnungslehre. Vollständig und in strenger Form
Berlin.
This is Grassmann's second attempt to publish his new discoveries in book form. It
adopts a substantially different approach to the Ausdehnungslehre of 1844, relying more
on the theorem|proof approach. The work comprises two main parts: the first on the
exterior algebra and the second on the theory of functions. The first part includes chapters
on addition and subtraction, products in general, combinatorial products (exterior and
regressive), the interior product, and applications to geometry. This is probably
Grassmann's most important work. The best source is Volume 1 of Grassmann's collected
works (1896), of which an English translation has been made by Lloyd C. Kannenberg
(2000). In the collected works edition, the editor Friedrich Engel has appended extensive
notes and comments, which Kannenberg has also translated.


Ì Grassmann H G 1878
'Verwendung der Ausdehnungslehre für die allgemeine Theorie der Polaren und den
Zusammenhang algebraischer Gebilde'
Crelle's Journal, 84, pp 273|283.
This is Grassmann's last paper. It contains, among other material, his most complete
discussion on the notion of 'simplicity'.

Ì Grassmann H G 1896|1911
Hermann Grassmanns Gesammelte Mathematische und Physikalische Werke
Teubner, Leipzig. Volume 1 (1896), Volume 2 (1902, 1904), Volume 3 (1911).
Grassmann's complete collected works appeared between 1896 and 1911 under the
editorship of Friedrich Engel and with the collaboration of Jakob Lüroth, Eduard Study,
Justus Grassmann, Hermann Grassmann jr. and Georg Scheffers. The following is a
summary of their contents.
Volume 1
Die lineale Ausdehnungslehre: ein neuer Zweig der Mathematik (1844)
Geometrische Analyse: geknüpft an die von Leibniz erfundene geometrische
Charakteristik (1847)
Die Ausdehnungslehre. Vollständig und in strenger Form (1862)
Volume 2
Papers on geometry, analysis, mechanics and physics
Volume 3
Theorie der Ebbe und Flut (1840)
Further papers on mathematical physics.
Parts of Volume 1 (Die lineale Ausdehnungslehre (1844) and Geometrische Analyse
(1847)) together with selected papers on mathematics and physics have been translated
into English by Lloyd C Kannenberg and published as A New Branch of Mathematics
(1995). Geometrische Analyse is Grassmann's prize-winning essay fulfilling Leibniz'
search for an algebra of geometry. The remainder of Volume 1 (Die Ausdehnungslehre
(1862)) has been translated into English by Lloyd C Kannenberg and published as
Extension Theory (2000).
Volume 2 comprises papers on geometry, analysis, analytical mechanics and
mathematical physics plus two texts, one on arithmetic and the other on trigonometry.
Volume 3 comprises Grassmann's earliest major work (Theorie der Ebbe und Flut
(1840)) and further papers, particularly on wave theory. The Theory of Tides begins to
apply Grassmann's new approach to vector analysis.

Ì Grassmann H der Jüngere 1909, 1913, 1927


Projektive Geometrie der Ebene
Teubner, Leipzig.
A comprehensive treatment in three books using the methods of the elder H. Grassmann.


Ì Greenberg M J 1976
'Element of area via exterior algebra'
American Mathematical Monthly, 83, pp 274-275
The author suggests that the treatment of elements of area by using the exterior product
would be a more satisfactory treatment than that normally given in calculus texts.

Ì Greub W H 1967
Multilinear Algebra
Springer-Verlag, Berlin.
Contains chapters on exterior algebra.

Ì Gurevich G B 1964
Foundations of the Theory of Algebraic Invariants
Noordhoff, Groningen
Contains a chapter on m-vectors (called here polyvectors) with an extensive treatment of
the conditions for divisibility of an m-vector by one or more vectors (pp 354-395).

Ì Hamilton W R 1853
Lectures on Quaternions
Dublin
The first English text on quaternions. The introduction briefly mentions Grassmann.

Ì Hamilton W R 1866
Elements of Quaternions
Dublin
The editor Charles Joly in his preface remarks in relation to the quaternion's associativity
that "For example, Grassmann's multiplication is sometimes associative, but sometimes it
is not." The exterior product is of course associative. However, here it seems Joly may
be suffering from a confusion caused by Grassmann's notation for expresssions in which
both exterior and regressive products appear.
The second edition of 1899 has been reprinted in 1969 by Chelsea Publishing Company.

Ì Hardy A S 1895
Elements of Quaternions
Ginn and Company, Boston
The author claims this to be an introduction to quaternions at an elementary level.


Ì Heath A E 1917
'Hermann Grassmann 1809-1877'
The Monist, 27, pp 1-21
'The Neglect of the Work of H. Grassmann'
The Monist, 27, pp 22-35
'The Geometric Analysis of Hermann Grassmann and its connection with Leibniz's
characteristic'
The Monist, 27, pp 36-56

Ì Hestenes D 1966
Space|Time Algebra
Gordon and Breach
This work is a seminal exposition of Clifford algebra emphasising the geometric nature
of the quantities and operations involved. The author writes "… ideas of Grassmann are
used to motivate the construction of Clifford algebra and to provide a geometric
interpretation of Clifford numbers. This is to be contrasted with other treatments of
Clifford algebra which are for the most part formal algebra. By insisting on Grassmann's
geometric viewpoint, we are led to look upon the Dirac algebra with new eyes." (p 2)

Ì Hestenes D 1968
'Multivector Calculus'
Journal of Mathematical Analysis and Applications, 24, pp 313-325
In the words of the author: "The object of this paper is to show how differential and
integral calculus in many dimensions can be greatly simplified using Clifford algebra."

Ì Hodge W V D 1952
Theory and Applications of Harmonic Integrals
Cambridge
The "star" operator defined by Hodge is of similar nature to Grassmann's complement
operator.

Ì Hunt K H 1970
Screw systems in Spatial Kinematics (Screw Systems Surveyed and Applied to Jointed
Rigid Bodies)
Report MMERS 3, Department of Mechanical Engineering, Monash University, Clayton,
Australia
Although in this report the author has intentionally confined himself to well known
methods of pure and analytical geometry, he is a strong proponent of the screw as being
the natural language for investigating spatial mechanisms.


Ì Hyde E W 1884
'Calculus of Direction and Position'
American Journal of Mathematics, VI, pp 1-13
In this paper the author compares quaternions to the methods of the Ausdehnungslehre
and concludes that Grassmann's system is far preferable as a system of directed quantities.

Ì Hyde E W 1888
'Geometric Division of Non-congruent Quantities'
Annals of Mathematics, 4, pp 9-18
This paper deals with the concept of exterior division more extensively than did
Grassmann.

Ì Hyde E W 1888
'The Directional Theory of Screws'
Annals of Mathematics, 4, pp 137-155
This paper is an account of the theory of screws using the Ausdehnungslehre. Hyde
claims that "A screw evidently belongs thoroughly to the realm of the Directional
Calculus and will not be easily or naturally treated by Cartesian methods; and Ball's
treatment is throughout essentially Cartesian in its nature." Here he is referring to The
Theory of Screws: A Study in the Dynamics of a Rigid Body (1876). In A Treatise on the
Theory of Screws (1900) Ball comments: "Prof. Hyde proves by his [sic] calculus many
of the fundamental theorems in the present theory in a very concise manner" (p 531).

Ì Hyde E W 1890
The Directional Calculus based upon the Methods of Hermann Grassmann
Ginn and Company, Boston
The author discusses geometric applications in 2 and 3 dimensions including screws and
complements of bound elements (for example, points, lines and planes).

Ì Hyde E W 1906
Grassmann's Space Analysis
Wiley, New York
In the words of the author: "This little book is an attempt to present simply and concisely
the principles of the "Extensive Analysis" as fully developed in the comprehensive
treatises of Hermann Grassmann, restricting the treatment however to the geometry of
two and three dimensional space."


Ì Jahnke E 1905
Vorlesungen über die Vektorenrechnung mit Anwendungen auf Geometrie, Mechanik und
Mathematische Physik.
Teubner, Leipzig
This work is full of examples of application of the Ausdehnungslehre (notation of 1862)
to geometry, mechanics and physics. Only two and three dimensional problems are
considered.

Ì Lasker E 1896
'An Essay on the Geometrical Calculus'
Proceedings of the London Mathematical Society, XXVIII, pp 217-260
This work differs from most of the other papers of this era on the geometrical
applications of the Ausdehnungslehre by concentrating on a space of arbitrary dimension
rather than two or three.

Ì Lewis G N 1910
'On four-dimensional vector calculus and its application in electrical theory'
Proceedings of the American Academy of Arts and Sciences, XLVI, pp 165-181
A specialization of the Ausdehnungslehre to four dimensions and its applications to
electromagnetism in Minkowskian terms. The author introduces the new concepts with
the minimum of explanation: for example, the anti-symmetric properties of bivectors and
trivectors are justified as conventions! (p 167).

Ì Lotze A 1922
Die Grundgleichungen der Mechanik insbesondere Starrer Körper
Teubner, Leipzig
This short monograph is one of the rare works addressing mechanics using the methods
of the Ausdehnungslehre. It treats the dynamics of systems of particles and the kinematics
and dynamics of rigid bodies.

Ì Macfarlane A 1904
Bibliography of Quaternions and Allied Systems of Mathematics
Dublin
Published for the International Association for Promoting the Study of Quaternions and
Allied Systems of Mathematics, this bibliography together with supplements to 1913
contains about 2500 articles including many on the Ausdehnungslehre and vector analysis.

Ì Marcus M 1966
'The Cauchy-Schwarz inequality in the exterior algebra'
Quarterly Journal of Mathematics, 17, pp 61-63
The author shows that a classical inequality for positive definite hermitian matrices is a
special case of the Cauchy-Schwarz inequality in the appropriate exterior algebra.


Ì Marcus M 1975
Finite Dimensional Multilinear Algebra
Marcel Dekker, New York
Part II contains chapters on Grassmann and Clifford algebras.

Ì Mehmke R 1913
Vorlesungen über Punkt- und Vektorenrechnung (2 volumes)
Teubner, Leipzig
Volume 1 (394 pages) deals with the analysis of bound elements (points, lines and
planes) and projective geometry.

Ì Milne E A 1948
Vectorial Mechanics
Methuen, London
An exposition of three-dimensional vectorial (and tensorial) mechanics using 'invariant'
notation. Milne is one of the rare authors who realises that physical forces and the linear
momenta of particles are better modelled by line vectors (bound vectors) rather than by
the usual vector algebra. He treats systems of line vectors (bound vectors) as vector pairs.

Ì Moore C L E 1926
'Grassmannian Geometry in Riemannian Space'
Journal of Mathematics and Physics, 5, pp 191-200
This paper treats the complement, and the exterior, regressive and interior products in a
Riemannian space in tensor index notation using the alternating tensors and the
generalised Kronecker symbol. This classic use of tensor notation does not enhance the
readability of the exposition.

Ì Murnaghan F D 1925
'The Generalised Kronecker Symbol and its Application to the Theory of Determinants'
American Mathematical Monthly, 32, pp 233-241
The generalised Kronecker symbol is essentially the generalisation to an exterior product
space of the usual Kronecker symbol.

Ì Murnaghan F D 1925
'The Tensor Character of the Generalised Kronecker Symbol'
Bulletin of the American Mathematical Society, 31, pp 323-329
The author states "It will be readily recognised that there is an intimate connection here
with Grassmann's Ausdehnungslehre, and we believe, in fact, that a systematic exposition
of this theory with the aid of the generalised Kronecker symbol would help to make it
more widely understood".

Ì Peano G 1895
'Essay on Geometrical Calculus'
in Selected Works of Giuseppe Peano, Chapter XV, pp 169-188
Allen and Unwin, London
The essay is translated from 'Saggio di calcolo geometrico' Atti, Accad. Sci. Torino, 31,
(1895-6) pp 952-975. Peano claims to have understood Grassmann's ideas by
reconstructing them himself. His geometric ideas come through with clarity,
substantiating his claim. Peano's principal exposition of Grassmann's work was in
Calcolo geometrico secondo l'Ausdehnungslehre di H. Grassmann, Turin (1888) of which
a translation of selected passages appears on pp 90-100 of the above Selected Works.

Ì Peano G 1901
Formulaire de Mathematiques
Gauthier-Villars, Paris
A compendium of axioms and results. The last part (pp 192-209) is devoted to point and
vector spaces and includes some interesting historical comments.

Ì Pedoe D 1967
'On a geometrical theorem in exterior algebra'
Canadian Journal of Mathematics, 19, pp 1187-1191
The author remarks 'This paper owes its inspiration to the remarkable book by H. G.
Forder The Calculus of Extension … Forder introduces many concepts which I find
difficult to bring down to earth. But the methods developed in his book are powerful
ones, and it is evident that much work can usefully be done in simplifying and
interpreting some of the concepts he uses'. He does not mention Grassmann.

Ì Saddler W 1927
'Apolar triads on a cubic curve'
Proceedings of the London Mathematical Society, Series 2, 26, pp 249-256
'Apolar tetrads on the Grassmann quartic surface'
Journal of the London Mathematical Society, 2, pp 185-189
Exterior algebra applied to geometric construction.

Ì Sain M 1976
'The Growing Algebraic Presence in Systems Engineering: An Introduction'
Proceedings of the IEEE, 64, p 96
A modern algebraic discussion culminating in the definition of the exterior algebra and
its relationship to the theory of determinants and some systems theoretical applications.


Ì Schlegel V 1872, 1875


System der Raumlehre (2 volumes)
Leipzig
Geometry using Grassmann's methods.

Ì Schouten J A 1951
Tensor Analysis for Physicists
Oxford
Contains a chapter on m-vectors from a tensor-analytic viewpoint.

Ì Schweitzer A R 1950
Bulletin of the American Mathematical Society, 56
'Grassmann's extensive algebra and modern number theory' (Part I, p 355; Part II, p 458)
'On the place of the algebraic equation in Grassmann's extensive algebra' (p 459)
'On the derivation of the regressive product in Grassmann's geometrical calculus' (p 463)
'A metric generalisation of Grassmann's geometric calculus' (p 464)
Resumés only of these papers are printed.

Ì Scott R F 1880
A Treatise on the Theory of Determinants
Cambridge, London
The author states in the preface that 'The principal novelty in the treatise lies in its
systematic use of Grassmann's alternate units, by means of which the study of
determinants is, I believe, much simplified.'

Ì Shepard G C 1966
Vector Spaces of Finite Dimension
Oliver and Boyd, London
Chapter IV contains an introduction to exterior products via tensors and multilinear
algebra.

Ì Thomas J M 1962
Systems and Roots
W Byrd Press
The author uses exterior algebraic concepts in some of his network analysis.


Ì Tonti E 1972
Accademia Nazionale dei Lincei, Serie VIII, Volume LII
'On the Mathematical Structure of a Large Class of Physical Theories' (Fasc.1, p 48)
'A Mathematical Model for Physical Theories' (Fasc.2-3, p 176, 351)
These papers begin the author's investigations into the structure of physical theories, in
which he considers the geometrical calculus to play an important part.

Ì Tonti E 1975
On the Formal Structure of Physical Theories
Report of the Instituto di Matematica del Politecnico di Milano
In this report the author constructs a classification scheme for physical quantities and the
equations of physical theories. The mathematical structures needed for this, and which
are reviewed in this report are algebraic topology, exterior algebra, exterior differential
forms, and Clifford algebra. The author shows that the underlying structure of physical
theories is basically capable of a geometric interpretation.

Ì Whitehead A N 1898
A Treatise on Universal Algebra (Volume 1)
Cambridge
No further volumes appeared. This is probably the best and most complete exposition of
Grassmann's works in English (586 pages). The author recreates many of Grassmann's
results, in many cases extending and clarifying them with original contributions.
Whitehead considers non-Euclidean metrics and spaces of arbitrary dimension. However,
like Grassmann, he does not distinguish between n-elements and scalars.

Ì Willmore T J 1959
Introduction to Differential Geometry
Oxford
Includes a brief discussion of exterior algebra and its application to differential geometry
(p 189).

Ì Wilson E B 1901
Vector Analysis
New York
The first formally published book entirely devoted to presenting the Gibbs-Heaviside
system of vector analysis based on Gibbs' lectures and Heaviside's papers in the
Electrician in 1893. (Wilson uses the term 'bivector', but by it means a vector with real
and imaginary parts.)


Ì Wilson E B and Lewis G N 1912
'The space-time manifold of relativity. The non-Euclidean geometry of mechanics and
electromagnetics'
Proceedings of the American Academy of Arts and Sciences, XLVIII, pp 387-507
This treatise uses a four-dimensional vector calculus developed by Lewis (see Lewis G
N) by specializing the exterior calculus to four dimensions (with the scalar products of
time-like vectors negative). This is a good example of the power of the exterior calculus.

Ì Ziwet A 1885-6
'A Brief Account of H. Grassmann's Geometrical Theories'
Annals of Mathematics, 2, (1885 pp 1-11; 1886 pp 25-34)
In the words of the author: 'It is the object of the present paper to give in the simplest
form possible, a succinct account of Grassmann's mathematical theories and methods in
their application to plane geometry'. (Follows in the main Schlegel's System der
Raumlehre.)

A note on sources for Grassmann's work


The best source for Grassmann's contributions to science is his Collected Works (Grassmann 1896)
which contain in volume 1 both Die Ausdehnungslehre von 1844 and Die Ausdehnungslehre von
1862, as well as Geometrische Analyse, his prizewinning essay fulfilling Leibniz's search for an
algebra of geometry. Volume 2 contains papers on geometry, analysis, mechanics and physics,
while volume 3 contains Theorie der Ebbe und Flut.

Die Ausdehnungslehre von 1862, fully titled: Die Ausdehnungslehre. Vollständig und in strenger
Form is perhaps Grassmann's most important mathematical work. It comprises two main parts: the
first devoted basically to the Ausdehnungslehre (212 pages) and the second to the theory of
functions (155 pages). The Collected Works edition contains 98 pages of notes and comments. The
discussion on the Ausdehnungslehre includes chapters on addition and subtraction, products in
general, progressive and regressive products, interior products, and applications to geometry. A
Cartesian metric is assumed.

Both versions of Grassmann's Ausdehnungslehre have been translated into English by Lloyd C Kannenberg.
The 1844 version is published as A New Branch of Mathematics: The Ausdehnungslehre of 1844
and Other Works, Open Court 1995. The translation contains Die Ausdehnungslehre von 1844,
Geometrische Analyse, selected papers on mathematics and physics, a bibliography of Grassmann's
principal works, and extensive editorial notes. The 1862 version is published as Extension Theory.
It contains work on both the theory of extension and the theory of functions. Particularly useful are
the editorial and supplementary notes.

Apart from these translations, probably the best and most complete exposition on the
Ausdehnungslehre in English is in Alfred North Whitehead's A Treatise on Universal Algebra
(Whitehead 1898). Whitehead saw Grassmann's work as one of the foundation stones on which he
hoped to build an algebraic theory which united the several important and new mathematical
systems which emerged during the nineteenth century: the algebra of symbolic logic,
Grassmann's theory of extension, quaternions, matrices and the general theory of linear algebras.

The second most complete exposition of the Ausdehnungslehre is Henry George Forder's The
Theory of Extension (Forder 1941). Forder's interest is mainly in the geometric applications of the
theory of extension.

The only other books on Grassmann in English are those by Edward Wyllys Hyde, The Directional
Calculus (Hyde 1890) and Grassmann's Space Analysis (Hyde 1906). They treat the theory of
extension in two and three-dimensional geometric contexts and include some applications to
statics. Several topics such as Hyde's treatment of screws are original contributions.

The seminal papers on Clifford algebra are by William Kingdon Clifford and can be found in his
collected works Mathematical Papers (Clifford 1882), republished in a facsimile edition by
Chelsea.

Fortunately for those interested in the evolution of the emerging 'geometric algebras', The
International Association for Promoting the Study of Quaternions and Allied Systems of
Mathematics published a bibliography (Macfarlane 1913) which, together with supplements to
1913, contains about 2500 articles. This therefore most likely contains all the works on the
Ausdehnungslehre and related subjects up to 1913.

The only other recent text devoted specifically to Grassmann algebra (to the author's knowledge as
of 2001) is Arno Zaddach's Grassmanns Algebra in der Geometrie, BI-Wissenschaftsverlag,
(Zaddach 1994).


Index
To be completed.

