
Dorst Chapter 11 The homogeneous model

The homogeneous model is designed to represent Rn (the subspace of Rn+1 orthogonal to e0), interpreted as a point space, by means of Rn+1, just as in projective geometry points in a plane are represented by vectors in space, and lines in a plane are represented by bivectors in space. Using Rn+1, we can distinguish between points and vectors in Rn. The adjective "homogeneous" refers to the fact that all scalar multiples of a given blade in Rn+1 represent the same point object in the base space.
11.1 Homogeneous Representation Space
We shall assume a Euclidean metric, so an orthonormal basis for Rn+1 is {e0, e1, ⋯, en}, where e0 represents the point of origin of Rn. So for any vector x ∈ Rn, e0·x = 0. Dorst uses boldface for vectors in Rn to distinguish them from vectors in Rn+1. So any vector x ∈ Rn+1 has the form x = αe0 + x for some scalar α and some x ∈ Rn. Dorst defines e0·e0 = ±1: since he doesn't want to choose between the two signs, he only wants to make sure that e0⁻¹ exists. Given any vector x in Rn+1, either it has a nonzero e0-coefficient, in which case it represents a proper point, or it has no e0-coefficient, in which case it represents an improper point. Now consider a k-flat X represented by a (k+1)-blade in Gn+1, x0∧x1∧⋯∧xk, where x0, x1, ⋯, xk are linearly independent. X is a geometric object of rank k, as well as a collection of points which are linearly dependent on x0, x1, ⋯, xk, namely {x | x∧x0∧x1∧⋯∧xk = 0}; and if x, y ∈ X then so are x + y and αx. So the collection of vectors in Rn+1 which represent points in X is a subspace of dimension k+1. As an example, a line is represented by a 2-blade x0∧x1 and all scalar multiples of that 2-blade. A line is also the collection of points x such that x∧x0∧x1 = 0. The collection of points on a line is a two-dimensional subspace. This is true even when the line does not include the origin, in which case, in the vector space model, this line is not a subspace of Rn. But this is wrong: the zero vector of Rn+1 is neither a proper point nor an improper point, since it is not a direction. So it is not true that a flat is represented by a subspace, even a flat which includes the origin, since the zero vector is not in any flat. So the set of vectors making up the blade which represents the flat is not a subspace, since it is missing the zero vector.
11.2 All Points Are Vectors
11.2.1 Just as we use the Rn+1 vector e0 to represent the point of origin of the point space Rn, we shall use x = e0 + x to represent a point in Rn: we imagine x's tail at the origin and its head at the point x. We say that x is the location of the point x, indicating the displacement of the point from the origin. Each vector in Rn+1 whose e0-coefficient is nonzero represents a point in Rn. A point has two attributes: a weight and a location. The weight is defined to be the coefficient of e0. To define the location, given a vector x ∈ Rn+1, put it in the form x = αe0 + x = α(e0 + x/α). The location is defined to be x/α.
So the weight is α = x·e0⁻¹, and the location x/α (the Rn part of x divided by α) is

x/α − e0 = x/(x·e0⁻¹) − e0 = [(e0⁻¹·e0)x − (x·e0⁻¹)e0]/(x·e0⁻¹) = [e0⁻¹⏌(e0∧x)]/(x·e0⁻¹)
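These two extractions are easy to sanity-check numerically. A minimal Mathematica sketch (my own illustration, not from Dorst), encoding a vector of Rn+1 as a list whose first entry is the e0-coefficient, and assuming the Euclidean choice e0·e0 = 1 so that e0⁻¹ = e0:

weight[p_List] := First[p]                (* α = x . e0^-1 *)
location[p_List] := Rest[p]/First[p]      (* x/α, defined only for proper points *)
p = {2, 4, 6};                            (* the point 2(e0 + 2 e1 + 3 e2) *)
weight[p]                                 (* 2 *)
location[p]                               (* {2, 3}: same location as the unit point {1, 2, 3} *)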

Note the distinction between e0 + x, e0 + αx, αe0 + x, α(e0 + x). The location of the first and fourth is x. The location of the second is αx. The location of the third is x/α. The weight of the first and second is one, while the weight of the third and fourth is α. As vectors in Rn+1, each two of the first, second and third vectors are in general linearly independent, so in Rn all three are distinct points. But the first and fourth are linearly dependent, so as points in Rn they have the same location, see Figure 11.1b. However,

(e0 + x) − (1/(α+1))(e0 + αx) − (1/(α+1))(αe0 + x) = 0,

so the first, second and third vectors are linearly dependent. A point in Rn located at x has infinitely many representations in Rn+1, one for each α in α(e0 + x); they differ by weight but not by location. So the weight of a point does not affect its location. But it affects the way the point is added to another point, which is why addition in projective geometry is not possible, see 11.2.3.
Summary: distinct vectors in Rn+1 with nonzero e0-coefficient are distinct points; they differ by weight, by location, or by both. One cannot say that a point has only a location and that two points with the same location are the same: a point also has a weight. On page 276, Dorst writes that the intersection of a plane and a line produces a point whose weight is one when the angle of intersection is π/2, and gets closer to zero as the angle of intersection approaches zero. However, when it comes to projective geometry, the weight attribute is irrelevant, and a point truly has only a location. So every point has infinitely many representations, since all weights are considered equivalent. Then Rn is thought of as a projective space (better denoted Pn), whose points are the equivalence classes of vectors in Rn+1, where all vectors linearly dependent on a given vector are considered identical. One can then talk about the homogeneous coordinates of a point by taking the coefficient of e0, namely the weight, to be one.
11.2.2 A point is represented by a vector in Rn+1 whose weight, namely the coefficient of e0, is nonzero. What about the vectors whose weight is zero, which are effectively vectors in Rn? They represent only a direction, not a location, so they get interpreted as points at infinity. So points which have a direction but not a location are points at infinity. The statement "two parallel lines in Rn meet at a point at infinity" can thus be restated as "two parallel lines in Rn have the same direction". If x ∈ Rn is a point at infinity, then αx is another point at infinity, even though they have the same direction. Dorst says that nonetheless a non-unit improper point can be interpreted as a velocity or a density, and I don't know which density he refers to. Actually Dorst's language "non-unit weight" for an improper point is confusing; he should instead write "non-unit magnitude". But perhaps he just means that there is a weight at a location and a weight of a direction. So x and 2x are different improper points, even though they have the same direction. Just as infinitely many points differing by their weights are associated with every location, so infinitely many arrows differing by their lengths are associated with every direction. So improper points are simply arrows, which have a direction and a magnitude but no location, while proper points have a location and a weight, but no direction. In projective geometry there are no weights: two points which are scalar multiples of each other are the same point.
11.2.3 Since points, finite or infinite, are nothing but vectors in Rn+1, they can be added. Adding a point to itself doesn't change its direction, if it is a point at infinity, and doesn't change its location, if it is a finite point — only its weight. Adding finite points makes sense when we think of the point as a point mass, with the mass being the weight. Then adding the mass points gives the center of mass of the mass points:

m1(e0 + x1) + m2(e0 + x2) = (m1 + m2)(e0 + (m1x1 + m2x2)/(m1 + m2)),

so the addition gives a point of weight m1 + m2 at the location (m1x1 + m2x2)/(m1 + m2), displaced from the origin. So proper point addition is not done using the parallelogram rule. For example, (e0 + e1) + (e0 + e2) ≠ e0 + (e1 + e2). But adding improper points is done using the parallelogram rule.
Note that adding points in projective geometry is not possible, since in projective geometry points have only locations and no weights, so addition cannot be defined consistently. For example, in projective geometry x = e0 + x = 2(e0 + x) and y = e0 + y = 3(e0 + y). So

x + y = (e0 + x) + (e0 + y) = 2(e0 + (x+y)/2) = e0 + (x+y)/2,

and also

x + y = 2(e0 + x) + 3(e0 + y) = 5(e0 + (2x+3y)/5) = e0 + (2x+3y)/5.

But (x+y)/2 ≠ (2x+3y)/5.
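Both facts — that adding mass points gives the center of mass, and that the result of "adding" depends on the chosen representatives — can be confirmed numerically. A small Mathematica sketch in the same list encoding as above (my own illustration):

location[p_] := Rest[p]/First[p];
p1 = {1, 1, 0}; p2 = {1, 0, 1};            (* unit points at x1 = {1,0}, x2 = {0,1} *)
s = 2 p1 + 3 p2;                            (* masses m1 = 2, m2 = 3 *)
First[s]                                    (* total weight 5 *)
location[s]                                 (* {2/5, 3/5}: the center of mass *)
location[p1 + p2] == location[2 p1 + 3 p2]  (* False: projective "addition" is inconsistent *)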

Equation (11.2). Let xi = e0 + xi, 1 ≤ i ≤ k, be a set of k points with weight one (called unit points; "unit" refers not to the length of a vector but to the weight of a point). Let αi, 1 ≤ i ≤ k, be scalars such that ∑_{i=1}^{k} αi = 1. Then

∑_{i=1}^{k} αi xi = ∑_{i=1}^{k} αi (e0 + xi) = e0 + ∑_{i=1}^{k} αi xi

is also a weight-one point. This way of adding unit points to get a unit point by attaching weights αi is called an affine combination. For example, with α1 = ⋯ = αk = 1/k, the affine combination gives

x1 +ac ⋯ +ac xk = (1/k)(e0 + x1) + ⋯ + (1/k)(e0 + xk) = e0 + (x1 + ⋯ + xk)/k.

Had we started with k points, each with weight 1/k, and added them in the regular way, we would have got exactly the same thing. So the αi can be looked at as weights, and then with regular addition we get the same unit-weight point as the affine combination gives. Wikipedia "Affine combination" defines an affine combination simply as a linear combination whose coefficients add to one, and that is how Dorst uses it, namely an addition of points whose weights add to one: only the sum is a unit point; the weights of the points added need not be one. If all n weights are equal and add to one, then each point is pi = (1/n)(e0 + pi), and adding them gives ∑_{i=1}^{n} (1/n)(e0 + pi) = e0 + (1/n)∑_{i=1}^{n} pi, so the location of the sum is (1/n)∑_{i=1}^{n} pi.
Centroid. Given n unit points pi, their centroid is the unit point (1/n)∑_{i=1}^{n} pi. I interpret that as follows. Let p′i be a set of n points, each with weight 1/n. Then there exist n unit points pi such that p′i = (1/n)pi. Then ∑_{i=1}^{n} p′i = ∑_{i=1}^{n} (1/n)pi = (1/n)∑_{i=1}^{n} pi. To find the weight and location of ∑ p′i, we have

∑_{i=1}^{n} p′i = ∑_{i=1}^{n} (1/n)pi = ∑_{i=1}^{n} (1/n)(e0 + pi) = e0 + (1/n)∑_{i=1}^{n} pi,

where pi is the location of pi and of p′i. So the weight of ∑ p′i is one and the location of ∑ p′i is (1/n)∑_{i=1}^{n} pi. So by definition, the centroid of a set of n unit points is computed by attaching to each unit point a weight of 1/n and then adding them, with the result being a unit point. But unless we attach weights to the unit points, their addition will not be a unit point. So the centroid of two unit points p, q is the unit point (p+q)/2, and the centroid of three unit points p, q, r is the unit point (p+q+r)/3.
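A quick numerical check of the centroid (same list encoding; my own sketch):

location[p_] := Rest[p]/First[p];
pts = {{1, 0, 0}, {1, 4, 0}, {1, 0, 6}};   (* three unit points in the base space R2 *)
c = Total[pts/Length[pts]];                 (* attach weight 1/n to each, then add *)
First[c]                                    (* 1: the centroid is a unit point *)
location[c]                                 (* {4/3, 2} = (p1 + p2 + p3)/3 *)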
In "הערות מתימטיות ד" ("Mathematical Notes D") I discussed the notion of an affine space. It is basically a space whose elements are points, which in general can't be added or multiplied by a scalar, so it is not a vector space. Just as in projective geometry, points don't have weights. Also, an affine space has no origin, namely we don't choose a special point and designate it as the origin. Attached to an affine space is a vector space, because even though points can't be added, they can be subtracted to produce a vector in the attached vector space; equivalently, one can add a vector to a point to produce another point. If v = b − a, where v is a vector and a, b are points, we can picture v as a directed line segment from a to b. This suggests a way to add points after all. Pick a fixed point p. Then take any two points a, b. Then a − p and b − p are vectors, so they can be added to produce a vector c. In the affine space itself, since the notion of parallelism exists, we can use the parallelogram rule to add these vectors. This results in a new point, p + c. So we can try to say that a + b = p + c. Of course the problem is that this way of adding points depends on the choice of p, and in an affine space no point is singled out, as mentioned. However, there is a special case in which points can be added and multiplied by scalars (from the field of the vector space), provided the scalar coefficients of the points add to one. This is what we are about to describe.
Wikipedia "Affine space", "Informal description". We have a set of points. Let p be a point and make it temporarily the origin. Then given a point a, the position vector associated with a is a − p; similarly, given a point b, the position vector associated with b is b − p. If we add the position vectors (a−p) + (b−p) by the parallelogram rule, we arrive at a third point c with (a−p) + (b−p) = c − p. Now suppose the origin is the point q. Then the position vector associated with a is a − q and the position vector associated with b is b − q. Adding the vectors (a−q) + (b−q) by the parallelogram rule, we get a vector c′ − q, which is not the same as the vector c − p, and the point c′ is not the same as the point c. Note, however, that the triangle aqp is congruent to the triangle bcc′, since by construction the line segments aq and bc′ are equal in length and parallel, and similarly the line segments ap and bc are equal in length and parallel, so the angle between aq and ap equals the angle between bc′ and bc. Therefore the line segments pq and cc′ are equal in length and parallel. Also note that (p−q) + (c−p) + (c′−c) = c′ − q, so

(c′−q) − (c−p) = (p−q) + (c′−c) = 2(p−q).
Now suppose that instead of adding the vectors (a−p) + (b−p) we add λ(a−p) + (1−λ)(b−p) ≡ d − p, and similarly add the vectors λ(a−q) + (1−λ)(b−q) ≡ d′ − q. In the figure, λ = 1/4. Then

(p−q) + (d−p) = (p−q) + λ(a−p) + (1−λ)(b−p) = λ(p−q) + (1−λ)(p−q) + λ(a−p) + (1−λ)(b−p) = λ(a−q) + (1−λ)(b−q) = d′ − q.

So

p − q = (d′−q) − (d−p) = (d′−d) + (p−q),

so d′ = d, as can be seen in the figure. Using that d = d′ when the coefficients add to one, we can define addition of points in the following special form: λa + (1−λ)b = d. To see the consistency of this definition, we have

λ(a−p) + (1−λ)(b−p) = d − p.

Formally expanding that,

λa − λp + b − p − λb + λp = λa + (1−λ)b − p ≡ d − p.

Similarly,

λ(a−q) + (1−λ)(b−q) = d′ − q = d − q,

and formally expanding that, we get again λa + (1−λ)b − q ≡ d − q. In summary, given points a, b, we can define λa + (1−λ)b to be the point p + λ(a−p) + (1−λ)(b−p), where p can be any point at all, so the definition is independent of the point chosen.
In general,

∑_{i=1}^{n} λi(ai − p) = ∑_{i=1}^{n} λi ai − p ∑_{i=1}^{n} λi = ∑_{i=1}^{n} λi ai − p,

so p + ∑_{i=1}^{n} λi(ai − p) = ∑_{i=1}^{n} λi ai ≡ d. Likewise,

∑_{i=1}^{n} λi(ai − q) = ∑_{i=1}^{n} λi ai − q ∑_{i=1}^{n} λi = ∑_{i=1}^{n} λi ai − q,

so q + ∑_{i=1}^{n} λi(ai − q) = ∑_{i=1}^{n} λi ai = d. So again we can define ∑_{i=1}^{n} λi ai to be the point p + ∑_{i=1}^{n} λi(ai − p), provided ∑_{i=1}^{n} λi = 1, and this definition is independent of the point p.
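This independence of the base point is easy to confirm numerically; a minimal Mathematica sketch with made-up points and coefficients summing to one (my own illustration):

pts = {{0, 0}, {4, 0}, {1, 3}};                   (* affine points as coordinate lists *)
lams = {1/2, 1/3, 1/6};                            (* Total[lams] == 1 *)
comb[p_] := p + Total[lams (# - p & /@ pts)];      (* p + Σ λi (ai - p) *)
comb[{0, 0}] == comb[{7, -2}]                      (* True: the same point for any base p *)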

Recap. We want to add points to get a new point. To do that, we turn points into position vectors by choosing an origin, because we know how to add position vectors. All one needs for that is the parallelogram rule: there is no need for a metric, no distances or angles, only the ability to recognize parallel lines. Then the head of the resultant position vector defines a new point. This entails that, in general, the new point depends on the origin chosen. However, when we attach weights to the points such that the sum of the weights is one, then — turning the weighted points into position vectors, adding the position vectors to get a resultant position vector and therefore a new point — this new point depends only on the points added and not on the origin, even though the resultant position vector does depend on the origin chosen. So addition of points in an affine space, where there is no origin, is possible when the weights add to one. In the homogeneous space, which is not an affine space, addition of points is always possible.
Then I saw in Wikipedia "Affine space", section "Affine combinations and barycenter", that if the coefficients add to zero, then the addition of points is likewise independent of the origin chosen — but the result is a vector. In the case of two points with coefficients λ and −λ, certainly

λ(a−p) − λ(b−p) = λ(a−b),

and λ(a−b) is independent of the choice of origin. In general, let {λ1, ⋯, λn} be coefficients whose sum is zero, and split them as {λi1, ⋯, λim}, which are all positive, and {λj1, ⋯, λj(n−m)}, which are all negative. Then

∑_{k=1}^{m} λik(aik − p) − ∑_{l=1}^{n−m} |λjl|(ajl − p) = ∑_{k=1}^{m} λik(aik − q) − ∑_{l=1}^{n−m} |λjl|(ajl − q) − (∑_{k=1}^{m} λik − ∑_{l=1}^{n−m} |λjl|)(p − q),

and the last term vanishes, because the positive coefficients and the absolute values of the negative ones have the same total. So when ∑_{i=1}^{n} λi = 0, we can define the sum ∑_{i=1}^{n} λi ai to be the vector

∑_{k=1}^{m} λik(aik − p) − ∑_{l=1}^{n−m} |λjl|(ajl − p),

because this vector is independent of any origin, just like the vector which is the difference of two points is independent of any origin.
Recap. In the figure there are three points a, b, c, whose weights are 1, 1, −2 respectively. When the sum of the coefficients is zero, the sum of the weighted points is a vector, not a point. Namely, pick a point p as origin. Add all the position vectors relative to that origin whose weights are positive, which results in a new point, say r. Then add all the position vectors whose weights are negative, which results in another new point, say s. Then r − s is a vector. Do the same using a point q as origin: the same procedure results in two points r′ and s′, and r′ − s′ is a vector. We proved that r − s = r′ − s′, even though in general r ≠ r′ and s ≠ s′, provided the sum of the coefficients is zero.
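The sum-to-zero case can be checked the same way (continuing the sketch above): the weighted sum is now an origin-independent vector:

pts = {{0, 0}, {4, 0}, {1, 3}}; lams = {1, 1, -2};  (* Total[lams] == 0 *)
res[p_] := Total[lams (# - p & /@ pts)];             (* Σ λi (ai - p): a vector *)
res[{0, 0}] == res[{7, -2}]                          (* True: independent of the origin *)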
Back to regular addition. Let x = αe0 + x ∈ Rn+1, and y ∈ Rn. Recall that by definition x represents a finite point with weight α at location x/α. Then x + y = αe0 + x + y is a finite point with the same weight as x, but with the location translated by y/α. In this addition it is hard to look at y as an infinite point. The more mass a point has, the "harder" it is to move it, namely, to translate it. To translate x's location by y, one would need to add αy = (x·e0⁻¹)y to x.
Scalar multiplication. Given a finite point x = αe0 + x, scalar multiplication by β, βx = αβe0 + βx, multiplies the weight by β but leaves the location unchanged: βx/(αβ) = x/α. Given an infinite point y, scalar multiplication will not change its direction. But we have already seen that when adding an infinite point to a finite point, the infinite-point perspective is not clear, for then the magnitude of the infinite point (don't use "weight" with respect to improper points) makes a difference.
11.2.4 A precise language would say that a point P in Rn is represented by a vector x = m e0 + x in Rn+1, whose location is the vector x/m in Rn. Note the seemingly absurd conclusion that the point of origin of Rn resides in Rn+1 − Rn. But this is not absurd: one must distinguish between a point and its representation. The base space Rn does contain a point of origin; only its representation is a vector in Rn+1 − Rn. One must remember that a point in the base space Rn is only represented by a vector in Rn+1; as a point it does reside in the base space Rn. Vectors in the base space Rn are also points at infinity, or improper points. So one can say that the base space Rn contains only points, proper and improper, but in the homogeneous model one can't say that the base space Rn contains only vectors. To illustrate, take a plane with an origin. It is easy to distinguish mentally between a point in the plane, which is a dot, and a vector in the plane, which is a directed arrow from the origin to that dot. To make a mathematical distinction, we represent a point in the plane as a vector in space, and surely there is a difference between a vector in space and a vector in the plane.
11.3 All Lines Are 2-Blades
11.3.1 A line L in Rn is represented by a simple bivector, or 2-blade, in Rn+1. So given points P, Q represented by x, y, the infinite line L which goes through P and Q is represented by the bivector x∧y. Recall that a change of weight does not change the location, so the set of points on the line x∧y is identical to the set of points on the line αx∧βy. So we might as well take the points x, y to have unit weights. There are many ways to express the line. For example, x∧y = e0∧(y−x) + x∧y = (e0+x)∧(y−x). Note that the point at infinity x associated with x is not on the line: x∧x∧y = −e0∧x∧y ≠ 0. Now suppose that a point z = e0 + z is on that line, so z∧x∧y = 0. Then z = x + λa and z = x + λa for some scalar λ, with a = y − x. Then x∧z = λ x∧a. So the line represented by x∧y = x∧a can also be represented by x∧z. Note that x∧y ≠ x∧z, but z∧a = (x + λa)∧a = x∧a. That just means that the two triangles in the figure, one with sides x, y, a and the other with sides z, w, a, have the same area, since they have the same base and the same height. Note that in the figure we drew the vector parts of the points; for example we drew x but not x, see Figure 11.1(a). To summarize: there are infinitely many representations of a line L, one for each pair of points on L, and one for each pair of a point on L (a finite point) and a vector parallel to L (a point at infinity). The relation on the set of simple bivectors in which B ≡ B′ iff both represent the same line is an equivalence relation, or a congruence. A line is a collection of points: finite points plus a point at infinity, which depends on the direction (orientation) of the line.
If x = e0 + x and y = e0 + y, then x∧y = (e0+x)∧(e0+y) = e0∧(y−x) + x∧y. So this is the form of a unit-weight proper line, while a line at infinity, or improper line, has the form x∧y. From the term x∧y in a proper line one can get the displacement of the line from the origin, see below. Note that while the addition of points is a point, the subtraction of equal-weight points is a point at infinity: x − y = (e0+x) − (e0+y) = x − y. Now since x∧y = x∧(y−x) = x∧(y−x), in general we can define a line as x∧a = (e0+x)∧a = e0∧a + x∧a, namely as the wedge product of a finite point with a point at infinity, which is a direction. A point y = e0 + y is on the line x∧a iff

0 = y∧(e0∧a + x∧a) = e0∧(x−y)∧a + y∧x∧a,

which holds iff (y−x)∧a = 0, i.e. iff y = x + λa for some scalar λ.

Dorst also points out that the line represented by x∧y can also be represented by (x + λa)∧(y + λa), where a = y−x = y−x, as is easily checked:

x∧y = (x + λa)∧(y + λa) = x∧a.

One can also write

x∧y = ½(y + x)∧(y − x).

Note that this is true for unit-weight points. Note that the point ½(y + x) is the centroid of the unit points x and y, and it is a unit point.
Given a line x∧y and a point x, find an expression for y. We have x∧y = x∧a. So

e0⁻¹⏌(x∧y) = e0⁻¹⏌(x∧a) = (e0⁻¹·x)a = a,

since e0⁻¹·x = 1 (x is a unit-weight point) and e0⁻¹·a = 0. So y − x = e0⁻¹⏌(x∧y) and y = x + e0⁻¹⏌(x∧y).

Find the shortest distance from the origin to the line. What we need is a point d = e0 + d on the line x∧y whose location d is orthogonal to a = y − x. So d·a = 0, i.e. da = d∧a. Also x∧a = d∧a = da, where here x denotes the location of the point x (see the figure above). So

d = (x∧a)/a = (x∧a) a/‖a‖².

To understand it, write it like this:

(x∧a) a/‖a‖² = (x∧a)⎿(a/‖a‖²) = x − (x·(a/‖a‖))(a/‖a‖).

Here (x·(a/‖a‖))(a/‖a‖) is the projection of x along a, and x less the projection is the rejection of x from a, which is d.
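A numerical check of this rejection formula, with made-up x and a (my own sketch):

x = {2, 5}; a = {3, 1};
d = x - (x.a/a.a) a;                   (* rejection of x from a *)
d.a                                     (* 0: d is orthogonal to the direction *)
(x (a.a) - (x.a) a)/(a.a) == d          (* True: this is (x∧a)⎿a / ||a||^2 *)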


In more detail, suppose we have a line L in two-space, represented by a bivector in three-space. Let the line be represented by the bivector

B = b1e01 + b2e02 + b3e12.

The coefficients b1, b2, b3 are independent. Express B as x∧a and as x∧y. Suppose the line is parallel to the vector a = a1e1 + a2e2 and suppose that x = e0 + x1e1 + x2e2 is on that line. Then the equation of the line is x∧a = a1e01 + a2e02 + (x1a2 − x2a1)e12. So the coefficients of e01 and e02 in B are the coefficients of the direction vector, namely a1 = b1 and a2 = b2. To find a point on the line, one can choose x1 = b3/b2 and x2 = 0, or x2 = −b3/b1 and x1 = 0. So we get two points on the line, x′ = e0 + (b3/b2)e1 and x′′ = e0 − (b3/b1)e2. So the line x′∧x′′ must be congruent to B. Indeed,

(−b1b2/b3)(x′∧x′′) = (−b1b2/b3)(−(b3/b2)e01 − (b3/b1)e02 − (b3²/(b1b2))e12) = B.

So the coefficients of B are compatible with only one direction vector, but are compatible with infinitely many points, one for each point on the line. Using the direction vector and any point on the line, we can use the formula above to compute d.
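These coefficient manipulations can be packaged as a small Mathematica sketch (my own encoding of B as the list {b1, b2, b3}):

lineB[x_, a_] := {a[[1]], a[[2]], x[[1]] a[[2]] - x[[2]] a[[1]]};   (* x∧a for x = e0 + x *)
B = lineB[{2, 5}, {3, 1}]       (* {3, 1, -13} *)
a = B[[1 ;; 2]];                (* the direction, read off e01 and e02 *)
xp = {B[[3]]/B[[2]], 0};        (* the point on the e1 axis, as in the text *)
lineB[xp, a] == B               (* True: the same line *)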
In the case of a line in three-space represented by a bivector in four-space, we have

B = b1e01 + b2e02 + b3e03 + b4e12 + b5e13 + b6e23.

Express B as x∧a and as x∧y. Suppose the line is parallel to the vector a = a1e1 + a2e2 + a3e3 and suppose that x = e0 + x1e1 + x2e2 + x3e3 is on that line. Then the equation of the line is

x∧a = a1e01 + a2e02 + a3e03 + (x1a2 − x2a1)e12 + (x1a3 − x3a1)e13 + (x2a3 − x3a2)e23.

So a1 = b1, a2 = b2 and a3 = b3. In general, the set of equations

b4 = x1b2 − x2b1, b5 = x1b3 − x3b1, b6 = x2b3 − x3b2

has no solution for x1, x2, x3. However, let x1 = c, where c is any scalar. Then x2 = (cb2 − b4)/b1. And from the second equation x3 = (cb3 − b5)/b1, while from the third

x3 = (x2b3 − b6)/b2 = ((cb2 − b4)b3/b1 − b6)/b2 = (cb2b3 − b4b3 − b1b6)/(b1b2).

Equating the two expressions for x3 and cancelling b1,

cb2b3 − b2b5 = cb2b3 − b4b3 − b1b6.

So we get the constraint b1b6 + b4b3 − b2b5 = 0 among the coefficients of B (Browne on page 194 calls this constraint the Plücker identity; see there for other ways to derive it). If this constraint is satisfied, then x = e0 + c e1 + ((cb2 − b4)/b1)e2 + ((cb3 − b5)/b1)e3 is a point on the line for any scalar c.
In Mathematica, if we know the constraint, we can write

Reduce[{b4 - x1 b2 + x2 b1 == 0 && b5 - x1 b3 + x3 b1 == 0 && b6 - x2 b3 + x3 b2 == 0 && b1 b6 + b4 b3 - b2 b5 == 0}, {x1, x2, x3}]

And the first solution is

b6 != 0 && b1 == (b2 b5 - b4 b3)/b6 && b1 != 0 && x2 == (b2 x1 - b4)/b1 && x3 == (b3 x1 - b5)/b1

So once we have the constraint, which implies that the bivector is actually an equation of a line, we get infinitely many solutions for x1, x2, x3, points on the line.
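Complementing the Reduce computation, one can verify the constraint on a bivector honestly built from a point and a direction (my own sketch, same coefficient ordering):

pluck[x_, a_] := Join[a, {x[[1]] a[[2]] - x[[2]] a[[1]], x[[1]] a[[3]] - x[[3]] a[[1]], x[[2]] a[[3]] - x[[3]] a[[2]]}];
{b1, b2, b3, b4, b5, b6} = pluck[{1, 2, 3}, {4, 5, 6}];
b1 b6 + b4 b3 - b2 b5           (* 0: the Plücker identity holds *)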
Note that x + λa for any λ is another point on the line, and every point on the line is x + λa for some λ. Suppose c = 1, so x = e0 + e1 + ((b2 − b4)/b1)e2 + ((b3 − b5)/b1)e3. Show that x + λa is on the line:

x + λa = e0 + e1 + ((b2 − b4)/b1)e2 + ((b3 − b5)/b1)e3 + λ(b1e1 + b2e2 + b3e3) = e0 + (1 + λb1)e1 + ((b2 − b4)/b1 + λb2)e2 + ((b3 − b5)/b1 + λb3)e3 = e0 + c e1 + ((cb2 − b4)/b1)e2 + ((cb3 − b5)/b1)e3,

where c = 1 + λb1. So x + λa is on the line. Conversely, show that there exists a λ such that

e0 + c e1 + ((cb2 − b4)/b1)e2 + ((cb3 − b5)/b1)e3 = e0 + e1 + ((b2 − b4)/b1)e2 + ((b3 − b5)/b1)e3 + λ(b1e1 + b2e2 + b3e3).

Obviously, λ = (c − 1)/b1.
The distance to the origin can be computed from x∧a .
Some notes on Figures 11.1 & 11.2. They show R3, which is a regular vector space. They also show a subspace of dimension two in R3, which is isomorphic to R2. There is a unit vector e0 ∈ R3 − R2 such that e0·x = 0 for all x ∈ R2. So Dorst draws R2 perpendicular to e0. Further, Dorst draws a copy of R2, called the flat S. The original he draws perpendicular to e0 at the tail of e0, and the flat is perpendicular to e0 at the tip of e0, one unit away. Now e0 has no location, because vectors in a vector space, in this case R3, have no locations. However, the whole point of the model is to introduce locations (of points) in the flat S. For that we need to specify the origin of the flat S. So we draw e0 anywhere in space, and then the tip of e0 becomes the point of origin of the flat S. Then any vector x ∈ R3 which can be written as e0 + x with x ∈ R2 is interpreted as a point on the flat S. The vector x ∈ R2 is also visualized to live in the flat S, as a point at infinity. Note that x and 2x are different points at infinity. Dorst draws a vector x ∈ R2 also in the flat S, but this is confusing, since one can only draw proper points on the flat, not improper points, which are at infinity. But Dorst brings another interpretation of an improper point, namely a direction; directions can be drawn in the flat. So we visualize e0 + x as a point in the flat S, but mathematically it is a vector in R3. And just as vectors in R3 of the form e0 + x are interpreted as points in the flat S, so a bivector of the form (e0 + x)∧a = e0∧a + x∧a, where x, a ∈ R2, is interpreted as a line L in the flat S, where a point y in the flat S is on L iff (e0 + x)∧a∧y = 0. Now, among all the points on L there is exactly one, d = e0 + d, whose location satisfies d·a = 0 (indeed (e0 + d)·a = d·a). In Figure 11.2(c), d is not drawn on the flat S; a is not drawn on the flat S either. Both are drawn in R2, where of course they live. Since d is a point on L, the equation of L can be written as d∧a = e0∧a + d∧a (the second d being the support vector); since all factors are mutually orthogonal, this equals e0a + da = (e0 + d)a = da, the geometric product of the support point with a. The bivector da formed from the support vector is called the moment of L. Now if we are given L as a bivector, then e0⁻¹⏌L = e0⁻¹⏌(e0∧a) + e0⁻¹⏌(d∧a) = a. So e0⁻¹⏌L is the direction of the line. Also e0⁻¹⏌(e0∧L) = e0⁻¹⏌(e0∧e0∧a) + e0⁻¹⏌(e0∧d∧a) = d∧a = da. So e0⁻¹⏌(e0∧L) is the moment of L. Finally,

d = da/a = M/a = [e0⁻¹⏌(e0∧L)] / [e0⁻¹⏌L].

Recall that da/a means (da)a⁻¹ and not a⁻¹(da).
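Numerically, with L encoded as the list {a1, a2, m} for a1e01 + a2e02 + m e12 (my own sketch): the direction is the e01/e02 part, the moment is the e12 coefficient, and d = M a⁻¹ works out to m{a2, −a1}/(a·a):

x = {2, 5}; a = {3, 1};
L = {a[[1]], a[[2]], x[[1]] a[[2]] - x[[2]] a[[1]]};  (* (e0 + x)∧a *)
m = L[[3]];                                            (* the moment: one e12 coefficient *)
d = m {a[[2]], -a[[1]]}/a.a                            (* M a^-1 = {-13/10, 39/10} *)
d.a                                                    (* 0, as a support vector must satisfy *)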


11.3.2 How do we interpret a bivector of the form x∧y as a line, when both x and y are points at infinity? Given any finite point z = e0 + z, we have z∧x∧y = e0∧x∧y + z∧x∧y ≠ 0, since the component e0∧x∧y is nonzero whatever z is. So the line contains no finite points, and we can interpret it as a line at infinity, or improper line: the set of all directions in Rn which can be expressed as linear combinations of x and y, a circle of infinite radius in the plane of x and y.
11.3.3 Adding lines. Every simple bivector can be interpreted as a line, either proper or at infinity, but a non-simple one cannot. In Rn with n ≥ 4, a bivector need not be simple. In R3 all bivectors are simple (see my notes "הערות מתימטיות ג" on Browne page 39), so in principle adding R2 lines gives an R2 line. Suppose we have two proper lines L1 = (e0 + x)∧a and L2 = (e0 + y)∧b. In R3 we then have

(e0 + ae1 + be2)∧(ce1 + de2) + (e0 + a′e1 + b′e2)∧(c′e1 + d′e2) = (c + c′)e01 + (d + d′)e02 + (ad + a′d′ − bc − b′c′)e12 ≡ αe01 + βe02 + γe12.

This child line is not obviously related to its parents, but see below.

How do we find the point common to both lines? Using Hestenes' projective paper, equation (3.14), we have

p = ((e0+x)∧a) ∨ ((e0+y)∧b) = [(e0+x)∧a∧(e0+y)] b − [(e0+x)∧a∧b] (e0+y) = [e0∧a∧(y−x) + x∧a∧y] b − [e0∧a∧b + x∧a∧b] (e0+y),

where [e0∧a∧(y−x) + x∧a∧y] = (e0∧a∧(y−x) + x∧a∧y)·(e2∧e1∧e0), which is a scalar, and similarly [e0∧a∧b + x∧a∧b] is a scalar.

The same calculation when the lines are parallel, a = b, gives

p = ((e0+x)∧a) ∨ ((e0+y)∧a) = [(e0+x)∧a∧(e0+y)] a − [(e0+x)∧a∧a] (e0+y) = [e0∧a∧(y−x) + x∧a∧y] a,

which is a point at infinity proportional to the common direction a.
Another way is to express all vectors composing the lines relative to an orthonormal basis:

(e0 + x)∧a = (e0 + x1e1 + x2e2)∧(a1e1 + a2e2) = a1e01 + a2e02 + (x1a2 − x2a1)e12
(e0 + y)∧b = (e0 + y1e1 + y2e2)∧(b1e1 + b2e2) = b1e01 + b2e02 + (y1b2 − y2b1)e12

So we can express each line using only three parameters, which we get from the four parameters x1, x2, a1, a2. So

(e0 + x)∧a = αe01 + βe02 + γe12
(e0 + y)∧b = α′e01 + β′e02 + γ′e12

We can do better than that. Find the point of intersection of the e1 axis with the line, namely compute

e01 ∨ (αe01 + βe02 + γe12) = βe0 + γe1 = β(e0 + (γ/β)e1).

So the point e0 + (γ/β)e1, which is on the e1 axis, is on the line. Similarly,

e02 ∨ (αe01 + βe02 + γe12) = αe0 − γe2 = α(e0 − (γ/α)e2).

So the point e0 − (γ/α)e2, which is on the e2 axis, is on the line. Note that we used [e0∧e2∧e1] = −1. So the equation of the line can be written as

(e0 + (γ/β)e1) ∧ (e0 − (γ/α)e2).

So we started with four parameters, but we can express the line using only two parameters.
Now find the intersection of the two lines:

(a1e01 + a2e02 + (x1a2 − x2a1)e12) ∨ (b1e01 + b2e02 + (y1b2 − y2b1)e12),

expanding bilinearly over the meets of the basis bivectors (e01∨e01 = e02∨e02 = e12∨e12 = 0, while each mixed meet gives a basis vector up to sign, as computed above). When the lines are parallel, b1 = a1 and b2 = a2, the e0 terms cancel and the result is a point at infinity proportional to the common direction a1e1 + a2e2.
If the two lines we are adding, intersect at a finite point p, L1= p ∧a and L2= p ∧b , then any
linear combination of the lines is another line which goes through p, α L1 + β L2 =p ∧ ( α a+ β b ) .
Similarly, if the two lines are parallel, L1= p ∧a and L2=q ∧ a, then any linear combination of
the lines is another line which is parallel to them, α L1 + β L2 =( αp+ βq ) ∧a .
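In this R3 representation space the meet of two lines can also be computed directly from the coefficient triples: a point p0e0 + p1e1 + p2e2 lies on the line αe01 + βe02 + γe12 iff γp0 − βp1 + αp2 = 0, so the intersection is the cross product of the triples {γ, −β, α} — a shortcut equivalent, up to weight, to the expansions above (my own sketch):

meet2[{a1_, b1_, c1_}, {a2_, b2_, c2_}] := Cross[{c1, -b1, a1}, {c2, -b2, a2}];
l1 = {1, 0, -1};     (* the line e01 - e12: through e0 + e2 with direction e1 *)
l2 = {0, 1, 1};      (* the line e02 + e12: through e0 + e1 with direction e2 *)
meet2[l1, l2]        (* {1, 1, 1} = e0 + e1 + e2, the point at location (1,1) *)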
11.4 All Planes Are 3-Blades
A plane Π is determined by three points, or two points and a direction, or a point and two directions. Each plane has infinitely many representations, as many as there are sets of three points, and so on. All representations of a line have the same direction, but they differ by the point on the line. So one can tell at a glance that two 2-blades represent the same line: rewriting each as a wedge product of a proper point and a point at infinity, they differ only by the proper point while the point at infinity is the same.

Now suppose two 3-blades represent the same plane, one x∧a∧b = (e0 + x)∧a∧b and the other

(x + αa + βb)∧(μa + ρb)∧(σa + τb) = (e0 + x + αa + βb)∧(μa + ρb)∧(σa + τb).

Then

(e0 + x + αa + βb)∧(μa + ρb)∧(σa + τb) = (μe0∧a + ρe0∧b + μx∧a + ρx∧b + (αρ − βμ)a∧b)∧(σa + τb) = (μτ − ρσ)(e0 + x)∧a∧b.

So the two representations are just scalar multiples of each other. So the representation can be a proper point wedged with a line at infinity.
The equation of the plane can be written as

x∧y∧z = x∧(y−x)∧(z−x) = x∧(y−x)∧(z−x) = (e0 + x)∧(y−x)∧(z−x) = e0∧(x∧y + y∧z + z∧x) + x∧y∧z.

Note that x∧y∧z is a plane at infinity and x∧y + y∧z + z∧x is a line at infinity. In the third form, a plane is pictured as a proper point wedged with a line at infinity, suggesting that just as a line "ends" at infinity with a point, so a plane "ends" at infinity with a line.

A plane at infinity is the wedge of three points at infinity, and in a four-dimensional space it can be identified with a sphere of infinite radius, just as in a three-dimensional space a line at infinity can be identified with a circle of infinite radius.

It is easy to verify the symmetric formula for the plane:

(1/3)(x + y + z)∧(x∧y + y∧z + z∧x) = (1/3)(x∧y∧z + y∧z∧x + z∧x∧y) = x∧y∧z.

Note that the point (1/3)(x + y + z) is the centroid of the unit points x, y, z, and itself is a unit point. We can use any other affine combination, like 1, 1, −1 instead of 1/3, 1/3, 1/3, and still get the same plane: (x + y − z)∧(x∧y + y∧z + z∧x) = x∧y∧z.
Support vector. In the case of a line x∧a, there was a unique vector d, called the support vector, such that d·a = 0, so x∧a = d∧a = da, and d = e0 + d is called the support point. We want to find the corresponding support point d and support vector d in the case of a plane on which we know there are points x, y, z, whose equation is

x∧y∧z = x∧(y−x)∧(z−x) = e0∧A + x∧y∧z, where A = x∧y + y∧z + z∧x.

There is a unique point on the plane which is closest to the origin, call it d = e0 + d. So d·(y−x) = d·(z−x) = d·(z−y) = 0. Therefore an equation of the plane is d∧(y−d)∧(z−d) = dB, where B = (y−d)∧(z−d) = d∧y + y∧z + z∧d.

Let d = x + α(y−x) + β(z−x). So e0 + d = e0 + x + α(y−x) + β(z−x), or

d = (1−α−β)x + αy + βz.

Now

y − d = y − (1−α−β)x − αy − βz = (1−α)(y−x) − β(z−x)
z − d = z − (1−α−β)x − αy − βz = −α(y−x) + (1−β)(z−x)

Using the expression above, we have μ = 1−α, ρ = −β, σ = −α, τ = 1−β. So

d∧y∧z = d∧(y−d)∧(z−d) = [(1−α)(1−β) − αβ] x∧y∧z = (1−α−β) x∧y∧z = (1−α−β) x∧(y−x)∧(z−x).

Also

dB = d∧B = d∧(y−d)∧(z−d) = d∧y∧z = [(1−α−β)x + αy + βz]∧y∧z = (1−α−β) x∧y∧z.

Also

B = (y−d)∧(z−d) = d∧y + y∧z + z∧d = [(1−α−β)x + αy + βz]∧y + y∧z + z∧[(1−α−β)x + αy + βz] = (1−α−β)(x∧y + y∧z + z∧x) = (1−α−β)A.

As we saw in the case of a line, we have e0⁻¹⏌Π = e0⁻¹⏌(e0∧B) + e0⁻¹⏌(d∧B) = B and e0⁻¹⏌(e0∧Π) = e0⁻¹⏌(e0∧d∧B) = d∧B = dB. So

d = (dB)B⁻¹ = dB/B = (1−α−β)x∧y∧z / ((1−α−β)A) = x∧y∧z / (x∧y + y∧z + z∧x),

which is equation (11.4). Note that

(x∧y + y∧z + z∧x)⁻¹ = [(y−x)∧(z−x)]⁻¹ = (z−x)∧(y−x) / ‖(y−x)∧(z−x)‖².
The support vector of a plane is the normal to the plane. I want to derive it in a different form. Suppose the plane is represented as x∧a∧b. The support point is d = e0 + d, where d satisfies d·a = d·b = 0. Now d = x + αa + βb for some scalars α, β to be determined. From 0 = d·a = x·a + αa² + βa·b and 0 = d·b = x·b + αa·b + βb² we get two equations for α, β. Solving them, we get

α = [(x·b)(a·b) − (x·a)b²] / [a²b² − (a·b)²] = (b∧x)·(a∧b) / ((a∧b)·(a∧b))
β = [(x·a)(a·b) − (x·b)a²] / [a²b² − (a·b)²] = (x∧a)·(a∧b) / ((a∧b)·(a∧b))
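A direct check of these formulas with made-up x, a, b (my own sketch; s, t stand for α, β):

x = {1, 2, 3}; a = {1, 0, 0}; b = {1, 1, 0};
sol = First@Solve[{(x + s a + t b).a == 0, (x + s a + t b).b == 0}, {s, t}];
d = x + s a + t b /. sol        (* {0, 0, 3}: the support vector *)
{d.a, d.b}                       (* {0, 0}: orthogonal to both directions *)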
Adding planes. Suppose the homogeneous space is R4, so all trivectors are simple. Let two planes Π1, Π2 have a common line x∧(y−x), so Π1 = x∧(y−x)∧(z−x) and Π2 = x∧(y−x)∧(w−x). So

αΠ1 + βΠ2 = α x∧(y−x)∧(z−x) + β x∧(y−x)∧(w−x) = x∧(y−x)∧[α(z−x) + β(w−x)] = x∧(y−x)∧[(1−α−β)x + αz + βw],

which is another plane containing the line x∧(y−x) as well as the point (1−α−β)x + αz + βw = e0 + (1−α−β)x + αz + βw (an affine combination of the unit points x, z, w).

If Π1, Π2 are parallel, so Π1 = x∧(y−x)∧(z−x) and Π2 = w∧(r−w)∧(s−w) where r − w = γ(y−x) and s − w = δ(z−x), so Π2 = γδ w∧(y−x)∧(z−x), then

αΠ1 + βΠ2 = (αx + βγδw)∧(y−x)∧(z−x),

which is another plane parallel to the first two.
Section 11.5 k-Flats as (k+1)-Blades
11.5.1 To explain Dorst's language: the wedge product of k+1 points gives a (k+1)-blade in Rn+1 which represents a k-dimensional offset subspace in the n-dimensional base space Rn. It is crucial to note that the base space Rn contains only points, proper and improper, and those points are represented by vectors in Rn+1: the vectors representing the proper ones have nonzero coefficients for e0, and the vectors representing the improper ones have zero coefficients for e0. So every vector in Rn+1 can also be interpreted as a point in the base space. So it is ambiguous what Rn means: sometimes it means the base space, the space of points; sometimes it means the subspace of Rn+1 orthogonal to e0. So a flat is a collection of points. It is in general offset, namely it need not contain the point of origin e0. Points can be added, and any linear combination of points in a flat is a point in the flat, because of the properties of the wedge product. So one can say that a k-flat, formed by the wedge product of k+1 vectors, or k+1 points, is k-dimensional as a geometric object; but as a linear space whose elements are the points, it has k+1 basis elements, so it is "a (k+1)-dimensional subspace" of the representation space Rn+1 — except that it is not actually a subspace, since it doesn't contain the zero vector. Flats which are made entirely of points represented by vectors in the subspace orthogonal to e0 are infinite flats. Dorst writes that a k-flat is a k-dimensional offset subspace in the n-dimensional base space Rn; k-dimensional here refers to the geometric dimension of the flat (a point zero, a line one), and n-dimensional refers to the maximum dimensionality of a flat. I would say that a k-flat has dimension k as a geometric object and dimension k+1 as a linear subspace of Rn+1 — though, strictly, as noted, it is not a subspace.
Example. Let n+1 = k+1 = 3. Then the wedge product of three points of the base space R2, x = e0+x, y = e0+y, z = e0+z, is a trivector in R3:

(e0+x)∧(e0+y)∧(e0+z) = e0∧(y−x)∧(z−x) + x∧y∧z = e0∧(y−x)∧(z−x),

because x∧y∧z = 0 (three vectors in R2). This is a plane through the origin — the whole base space R2 as a 2-flat — so it contains the origin: indeed e0∧[e0∧(y−x)∧(z−x)] = 0. Also x, y, z are points in the plane, and certainly any linear combination of them is in the plane. So every proper point is in the plane. Also x, y, z, and therefore every linear combination of them, are in the plane, since

x∧(e0+x)∧(e0+y)∧(e0+z) = (−e0∧x)∧(e0∧(z−y) + y∧z) = −e0∧x∧y∧z = 0.

So every point, proper and improper, is in the plane.
Example n+1 = 3, k+1 = 2. The wedge product of two points in the base space R2 gives a bivector in R3 which represents a one-dimensional offset subspace (a line) in the two-dimensional base space R2. The wedge product of two points of the base space R2, represented by vectors in R3, can be expressed in many ways:

x∧y = (e0+x)∧(e0+y) = e0∧(y−x) + x∧y = x∧(y−x).

If x, 0, y are all different, then e0 is not on the line, since e0∧x∧y = e0∧x∧y ≠ 0 (points may be replaced by their vector parts when wedged with e0). Also x and y are not on the line, since x∧x∧y = x∧e0∧y ≠ 0. But any vector parallel to y−x is on the line, since

α(y−x)∧(e0∧y − e0∧x + x∧y) = α[−x∧e0∧y − y∧e0∧x] = 0.

So the line contains any linear combination of x, y and y−x, for example x + y = (e0+x) + (e0+y) = 2(e0 + (x+y)/2), but not necessarily e0 + x + y. Note that x, y, y−x are not linearly independent, since y − x = y − x.


Example n+1 = k+1 = 2. We have a wedge product of two points, which is a bivector in R2:

(e0+x)∧(e0+y) = e0∧(y−x) + x∧y = e0∧(y−x),

since x∧y = 0, both x and y being multiples of e1. So this is a one-dimensional flat, a line through the origin, proportional to e0∧e1. Clearly, the points in the base space and the vectors in the representation space are in bijection.

Example n+1 = 2, k+1 = 1. A point e0 + βe1 is a vector in R2 which represents a zero-dimensional offset subspace in the one-dimensional base space R. This is a degenerate case, since the offset subspace contains only one point; the difference between two points of the offset subspace, being the difference between a point and itself, is the zero vector of the one-dimensional subspace of R2 which is the e1-axis.

Example n+1 = 4, k+1 = 3. The wedge product of three points gives a trivector in R4 which represents a two-dimensional offset subspace in the three-dimensional base space R3. If x, y, z are three points, then x∧(y−x)∧(z−x) = (e0+x)∧(y−x)∧(z−x) is a plane through the origin if x = 0, and a plane off the origin if x ≠ 0. Either way, it is a two-dimensional offset subspace in the three-dimensional base space R3.
General case. We have a vector space Rn+1 and a (k+1)-blade (k ≤ n), K = (e0 + x0)∧(x1−x0)∧⋯∧(xk−x0). Actually Dorst writes it as K = x0∧(x1−x0)∧⋯∧(xk−x0), where x0 is a proper point whose weight need not be one. The base space is Rn, in which
the vectors of Rn +1 are interpreted as points, some proper (when the coefficient of e 0 is nonzero)
and some improper (when the coefficient of e 0 is zero). If k =0, we have a point, which is a zero
flat, or zero dimensional offset space. When k =1, we have a line, which is a rank one flat, a one
dimensional offset space. When k =2, we have a plane, which is a two flat, a two dimensional
offset space. The rank of the flat is the number of terms in ( x 1−x 0 ) ∧⋯ ∧ ( x k −x 0 ). So a k -flat is
represented by a k +1 -blade. A point is a zero dimensional object, and it is represented by a
vector which is a 1-blade. Similarly, the wedge product of two points, which is a one
dimensional object, is represented by a bivector which is a 2-blade.
11.5.2 The same thing applies to improper objects. They are just made up from improper points.
An improper point is a zero dimensional object which is represented by a vector in Rn +1 whose e 0
-coefficient is zero, but which is nonetheless a 1-blade. An improper line is a one dimensional
object which is represented by a 2-blade, a bivector whose vector constituents’ e 0-coefficient is
zero. There seems to be an error in Dorst (page 295), where he writes that “the vector ( 1-blade) u
is a point at infinity (0 -blade)”, and it should be 0 -dimensional and not 0 -blade.
11.5.3 If K is a proper k-dimensional object represented by a (k+1)-blade, then K = (e0 + x0)∧(x1−x0)∧⋯∧(xk−x0) = (e0 + x0)∧Ak = e0∧Ak + x0∧Ak. Then

e0⁻¹⏌K = e0⁻¹⏌(e0∧Ak) + e0⁻¹⏌(x0∧Ak) = e0⁻¹⏌(e0∧Ak) = Ak.

Ak is called the direction of the object. The support point d is defined by K = dAk = d∧Ak, so d = K/Ak. The moment is defined by M = dAk = d∧Ak, so the support vector is d = M/Ak. Now

e0⁻¹⏌(e0∧K) = e0⁻¹⏌[e0∧(e0∧Ak + x0∧Ak)] = e0⁻¹⏌[e0∧x0∧Ak] = x0∧Ak.

So when x0 = d, we have e0⁻¹⏌(e0∧K) = d∧Ak = dAk = M.
11.5.4 The number of parameters of an offset flat
How many parameters are needed to fix a k-flat? Take the case of n+1 = 4, so the basis of R4 is {e0, e1, e2, e3}. A point is zero-dimensional, represented by a vector in R4, P = αe0 + βe1 + γe2 + δe3. This corresponds to Table 11.2a at n = 3, k = 0, and the entry is four, as it should be. A line is one-dimensional, a wedge product of two points, represented by a bivector in R4:

L = (α0e0 + α1e1 + α2e2 + α3e3)∧(β0e0 + β1e1 + β2e2 + β3e3) = γ0e01 + γ1e02 + γ2e03 + γ3e12 + γ4e13 + γ5e23,

so it seems we need six parameters, but we saw above that there is a constraint on the γi's, so we need only five parameters to define a line, which is the table's entry at n = 3, k = 1. A plane is two-dimensional, a wedge product of three points, represented by a trivector in R4, and trivectors in R4 are pseudovectors, hence simple, so any expression of the form P = αe012 + βe013 + γe023 + δe123 can be factored as the wedge product of three points; so four parameters are needed, which is the table's entry at n = 3, k = 2. Finally, a three-dimensional object is the wedge product of four points, represented by a four-vector in R4, a pseudoscalar, so only one parameter is needed, which is the table's entry at n = 3, k = 3.
Now let's look at Table 11.2b, when n+1 = 4. An improper point, which is zero-dimensional, is represented in R4 by a vector βe1 + γe2 + δe3, so three parameters are needed. An improper line is the wedge product of two improper points in R4, which is really a bivector in R3, so it has the form γ3e12 + γ4e13 + γ5e23; bivectors in R3 are pseudovectors, so this form can be factored as the wedge product of two vectors in R3, and three parameters are needed. An improper plane is the wedge product of three improper points in R4, which is really a trivector in R3, a pseudoscalar, so one parameter is needed.

Now a k-flat in an Rn+1 vector space (k < n+1) has the form x∧A, a k-dimensional offset subspace represented by a (k+1)-blade in Rn+1, where x is a proper point and A is the wedge product of k improper points, i.e. a k-blade in Rn. Congruent blades represent the same subspace, and the number of independent basis k-blades of Rn is the number of ways to choose k basis vectors from n basis vectors, which is C(n,k). So Table 11.2b essentially contains the binomial coefficients, except the one at the beginning, which is missing.

In the case where x is also an improper point, we get a (k+1)-blade of vectors in Rn, which is only possible if k < n, and then we need C(n, k+1) parameters to determine it.
If, however, x is a proper point, Dorst claims we need an additional n−k parameters to determine x∧A beyond the C(n,k) needed to determine A, as above. If we assume that x is the support point d = e0 + d, then d ∈ Rn is perpendicular to the k vectors d−x1, ⋯, d−xk, where A = (d−x1)∧⋯∧(d−xk), and there are n−k degrees of freedom for choosing such a vector d.

Recall that given two vectors x, y,

P∥y(x) = (x·y)/y = ((x·y)/(y·y)) y

is the projection of x along the direction of y, while

P⊥y(x) = (x∧y)/y = (x∧y)y/(y·y) = x − ((x·y)/(y·y)) y = x − P∥y(x)

is the rejection of x from y. So P∥y(x) and P⊥y(x) are in the plane of x and y, and P∥y(x) + P⊥y(x) = x. If θ is the angle between x and y, then |P∥y(x)| = |x| cos θ and |P⊥y(x)| = |x| sin θ.


Suppose the vector space is R4 with basis {e0, e1, e2, e3}, the base space is R3, and we have a plane of points in the base space, P = x∧B, where B is a bivector in the vector space R3. Now B = αe12 + βe13 + γe23, so it requires three parameters, C(3,2) = 3. Now n−k = 3−2 = 1, so we need one more parameter to characterize the plane, namely its distance from the origin. First we need a vector d which is perpendicular to B, namely

0 = d⏌B = (d1e1 + d2e2 + d3e3)⏌(αe12 + βe13 + γe23) = e1(−αd2 − βd3) + e2(αd1 − γd3) + e3(βd1 + γd2).

So d1 can be chosen arbitrarily, and then d2 = −βd1/γ and d3 = αd1/γ. As d1 varies, the distance δ from the origin varies:

δ = d1 √(1 + β²/γ² + α²/γ²) = (d1/γ)√(α² + β² + γ²).

So given δ, d1 can be chosen accordingly. Similarly, suppose we want a plane with direction vectors a = a1e1 + a2e2 + a3e3 and b = b1e1 + b2e2 + b3e3. Then

a∧b = (a1b2 − a2b1)e12 + (a1b3 − a3b1)e13 + (a2b3 − a3b2)e23.

Let α = a1b2 − a2b1, β = a1b3 − a3b1 and γ = a2b3 − a3b2. There are infinitely many pairs of direction vectors which determine a plane with the same direction αe12 + βe13 + γe23, for example a′ = e1 + (γ/β)e2 and b′ = αe2 + βe3. So we don't need six parameters a1, a2, a3, b1, b2, b3; three are enough to determine the direction of the plane.
Now suppose we want to determine the parameters of a line, whose dimension is one and whose form is the wedge product of a proper point with an improper point, the improper point being a direction, which requires three parameters, C(3,1) = 3: a = αe1 + βe2 + γe3. Now n−k = 3−1 = 2, so we need two more parameters. There is a point d = e0 + d on the line which is closest to the origin, so

0 = d·a = (d1e1 + d2e2 + d3e3)·(αe1 + βe2 + γe3) = d1α + d2β + d3γ.

This is one equation in three unknowns, so d1 and d2 can be chosen arbitrarily, and then d3 = (−d1α − d2β)/γ. If the distance is required to be δ, then

δ = (1/γ)√(d1²(α² + γ²) + d2²(β² + γ²) + 2d1d2αβ).

There are infinitely many ways to choose d1 and d2, for example d1 = 0 and d2 = δγ/√(β² + γ²).
11.6.1 Direct Representations
Suppose the base space is two-dimensional and the homogeneous space is the vector space R3. A plane through the origin (0,0,0) of R3 is represented by a bivector p∧q, where p = e0 + p1e1 + p2e2 and q = e0 + q1e1 + q2e2 are thought of as vectors whose tails are at the origin (with a view to the following, we bolden e1 and e2 but not e0). A third vector x is in that plane through the origin provided x∧p∧q = 0, i.e. x = αp + βq. Now we can reinterpret all that as an offset line, a 1-flat, in a two-dimensional base space which is not a vector space but a point space: every R3 vector p = e0 + p1e1 + p2e2 gets interpreted as a unit point in that point space, with e0 interpreted as the point of origin, and p = e0 + p, q = e0 + q, where p and q are vectors in the vector space R2 and improper points in the 2D point space, or base space. The equation of the offset line, the 1-flat, is p∧q = p∧(q−p), where q−p is parallel to the line and p is a point on the line. So then x = e0 + x is also a point on the line provided x∧p∧q = 0, which is true iff (x−p)∧(q−p) = 0. Consider now a two-dimensional vector space R2 and a line through the origin parallel to q−p. Then (x−p)∧(q−p) = 0 just means that x−p is also parallel to that line through the origin. But this is not how we interpreted this equation in the two-dimensional point space. So far we have discussed Figure 11.3a. In Figure 11.3b, the homogeneous space is the vector space R4. A three-dimensional vector subspace through the origin is represented by a trivector p∧q∧r, and a vector x ∈ R4 is in that subspace provided x∧p∧q∧r = 0, i.e. x = αp + βq + γr. The base space is a three-dimensional point space, and the 3D subspace p∧q∧r gets interpreted as a 2D offset plane, a 2-flat, in the three-dimensional point space, with p, q, r being points on that plane (three points determine a plane). The equation of that plane can be written p∧(q−p)∧(r−p), and the condition for a point x to be on that plane can be expressed as (x−p)∧(q−p)∧(r−p) = 0. This equation can also be interpreted as the condition for x−p to be in the plane through the origin (0,0,0) of the vector space R3 determined by the two vectors q−p and r−p whose tails are at that origin.
11.6.2 Dual Representation
Equation 11.6. Let X = p∧A be a line (a 1-flat) in the base space if A is a vector, a plane (a 2-flat) if A is a bivector, and so on. Dorst here denotes the dual in the homogeneous space Rn+1 by X* and the dual in the subspace Rn by X☆. The improper points of the base space are the vectors of Rn, so when we take the dual of blades of Rn, which are improper flats, we denote the dual by ☆. The unit pseudoscalar of the geometric algebra built on Rn+1 is In+1 = e0In. Now In+1⁻¹ = (e0In)⁻¹ = In⁻¹e0⁻¹.

Recall that X* is whatever is left of In+1 after X is excluded, namely X* = X⏌In+1⁻¹. At first I thought that the definition of X☆ is X☆ = X⏌In⁻¹. But the correct definition is X☆ = XIn⁻¹. Only when X is improper is X☆ = XIn⁻¹ = X⏌In⁻¹. So the chain of Dorst on page 269 should be as follows:

X* = X⏌In+1⁻¹ = XIn+1⁻¹ = X(In⁻¹e0⁻¹) = (XIn⁻¹)e0⁻¹ = X☆e0⁻¹.

In practice, Dorst uses the ☆ dual only for improper blades, so the geometric product reduces to a contraction. It should be noted that the definition of the dual involves a contraction, not a geometric product, so when X is proper, calling XIn⁻¹ a dual is stretching the definition. Now let's work out equation (11.6). Step 1:

(p∧A)⏌In+1⁻¹ = p⏌(A⏌In+1⁻¹) = p⏌(A⏌(In⁻¹e0⁻¹)) = p⏌((A⏌In⁻¹)e0⁻¹) ≡ p⏌(A☆e0⁻¹).

Step 2. Use equation (3.10), page 73: a⏌(B∧C) = (a⏌B)∧C + (−1)^grade(B) B∧(a⏌C). If the grade of A is k, the grade of A☆ is n−k, and (−1)^(n−k)A☆ ≡ Â☆. Recall that b∧B = (−1)^grade(B) B∧b = B̂∧b. Recall that the dot product of a vector with its inverse is one. So, since A☆e0⁻¹ = A☆∧e0⁻¹,

p⏌(A☆∧e0⁻¹) = (p⏌A☆)∧e0⁻¹ + (−1)^(n−k) A☆∧(p⏌e0⁻¹) = (p⏌A☆)∧e0⁻¹ + (−1)^(n−k) A☆∧(e0⏌e0⁻¹) = (−1)^(n−k−1) e0⁻¹∧(p⏌A☆) + (−1)^(n−k)A☆ = −e0⁻¹∧(p⏌Â☆) + Â☆.

Step 3. Recall from page 286 that M = e0⁻¹⏌(e0∧X). Also recall that M = dA = d∧A = p∧A (here d, A and p are Rn quantities). The grade of M is k+1 and the grade of M☆ is n−k−1.

M☆ = (p∧A)⏌In⁻¹ = p⏌A☆
M̂☆ = (−1)^(n−k−1)M☆ = −(−1)^(n−k) p⏌A☆ = −p⏌Â☆
−e0⁻¹(p⏌Â☆) = e0⁻¹M̂☆
X* = Â☆ − e0⁻¹(p⏌Â☆) = Â☆ + e0⁻¹M̂☆.

Step 4. Now

M = dA
M☆ = (dA)⏌In⁻¹ = d⏌A☆
M̂☆ = −d⏌Â☆
d⁻¹M̂☆ = −d⁻¹(d⏌Â☆) = −d⁻¹∧(d⏌Â☆) − d⁻¹⏌(d⏌Â☆) = −d⁻¹∧(d⏌Â☆) − (d⁻¹∧d)⏌Â☆ = −d⁻¹∧(d⏌Â☆) = −Â☆.

To see the last step, take an example with c orthogonal to a, b, d:

c⁻¹∧[c⏌(a∧b∧c∧d)] = c⁻¹∧[(c⏌a)(b∧c∧d) − (c⏌b)(a∧c∧d) + (c⏌c)(a∧b∧d) − (c⏌d)(a∧b∧c)] = (c⏌c) c⁻¹∧(a∧b∧d) = a∧b∧c∧d.

So finally,

X* = Â☆ + e0⁻¹M̂☆ = −d⁻¹M̂☆ + e0⁻¹M̂☆ = (e0⁻¹ − d⁻¹)M̂☆.

Note that the grade involution and the ☆ dual interact as (A☆)^ = (−1)^(n−k)A☆ = (−1)^n(−1)^k A☆ = (−1)^n(Â)☆. Here we used equation (3.14), page 73.


Suppose that d = δn, where n is a unit vector normal to A, which means that dA = d∧A = δnA = M. So A☆ = A⏌In⁻¹. Suppose further that A is a hyperplane direction; so if the dimension of the space is n+1 (n here is not bold: it is a number, not a vector), then the base space has objects of dimensions 0 (points), 1 (lines), 2 (planes), up to n−1, which are hyperplanes. The whole point space is n-dimensional, but it is not a linear space (since it doesn't contain the zero vector). So then A⏌In⁻¹ is one-dimensional and orthogonal to A, so it is n. But this seems to assume that A = In−1 and not some scalar multiple of it, so that nA = In. Also M = δnA is n-dimensional, so M☆ is a scalar, namely δ. Now the flat is

Π = (e0 + δn)∧A = e0∧A + δnA = e0∧A + M,

so

π = Π* = (e0⁻¹ − d⁻¹)M̂☆ = (e0⁻¹ − (δn)⁻¹)δ = (e0⁻¹ − n/δ)δ = δe0⁻¹ − n.
Dorst writes (top of page 290) “In the dual representation, testing whether a point x lies on the
dual flat X ¿ , is now done by demanding the contraction x ⏌ X ¿ to be zero”. I think this is wrong.
What should be is “In the dual representation, testing whether a point x lies on the flat X , is now
done by demanding the contraction x ⏌ X ¿ to be zero”. I’ll assume that in what follows. See
section 3.5.5 page 83 where Dorst has it right. Then using the additive form line of (11.6), we
have

0=x ⏌ X ¿ =x ⏌ ^ [
A ⋆ −e−1 (
0 p⏌A
^⋆ ) =x ⏌ ^ ]
A ⋆−x ⏌ e−1
0
(p ⏌^ [
A ⋆ ) =e 0 ⏌ ^
A⋆+ x ⏌ ^ ]
A ⋆−e0 ⏌ e−1 ( ^⋆ ) −x ⏌
0 ∧ p⏌A [ ]
Now the first term is zero. Using equation (3.10) page 73, the third term is
[
e 0 ⏌ e−1 (
0 ∧ p⏌A ]
^⋆ ) =( e ⏌ e−1 ) ∧ ( p ⏌ ^
0 0 A ⋆ )= p ⏌ ^
A⋆
and forth term is

[
x ⏌ e0 ∧( p ⏌ ^
−1 ⋆
]
−1
A )=−e0 ∧ [ ( x ∧ p ) ⏌ ^
A ) =−e 0 ∧ x ⏌ ( p ⏌ ^
⋆ −1
A ]=−e0 [ ( x ∧ p ) ⏌ ^
⋆ −1
A ]

So

0=x ⏌ ^ A +e 0 [ ( x ∧ p ) ⏌ ^
A −p ⏌ ^ A ] =( x−p ) ⏌ ^
A + e0 [ ( x ∧ p ) ⏌ ^
A ]
⋆ ⋆ −1 ⋆ ⋆ −1 ⋆

Since the terms on the right have different grades, each must be zero. The first term being zero
means that x− p, being orthogonal to ^
A ⋆, is in A , so ( x− p ) ∧ A=0, which is the way to check
that x is in the flat using the direct representation, see section (11.6.1).

0 [ ( x ∧ p ) ⏌ A ]=0
Note that ( x− p ) ∧ A=0 implies that e−1 ^⋆ , because

x ⏌ ( ( x− p ) ⏌ ^
A ) =x ⏌ ( x ⏌ ^
A ) −x ⏌ ( p ⏌ ^
A )=−x ⏌ ( p ⏌ ^
A ) =−( x ∧ p ) ⏌ ^
⋆ ⋆ ⋆ ⋆ ⋆
A
so

( x− p ) ∧ A=0 ⇒ x ⏌ ( ( x− p ) ⏌ ^ A =0 ⇒ e0 [ ( x ∧ p ) ⏌ ^
A )=0⇒ ( x ∧ p ) ⏌ ^ A ]=0
⋆ ⋆ −1 ⋆

Using the equation we derived above π=δ e−1


0 −n , we deduce that x lies on the hyperplane Π if

0=x ⏌ π =( e 0 + x ) ⏌ ( δ e−1
0 −n ) =δ−0+ 0−x ⏌ n=δ −x ∙ n

or x ∙ n=δ . For example, suppose we have a plane in Euclidean space whose shortest distance to
the origin is δ , and n is a unit vector orthogonal to the plane, with its tail at the origin. Then the
head of a vector x whose tail is at the origin, is on the plane iff x ∙ n=δ . Similarly for a line in a
plane.
Dorst writes the equation x ∙ n=δ in matrix form as

[]
x1

[ n1 ⋯ nm −δ ]
xm
=0

1
where m is the dimension of the base space.
Table 11.3, third column. Derive the formula for X ¿ in 1st row. Above we’ve seen that M =dA ,
and M ⋆ =¿ and ^ ¿ ^⋆ −1 ^⋆ ^⋆ −1
M ⋆ =−¿ and X = A +e 0 M = A −e 0 ¿.
Let A be a k -blade, so A ⋆ is an n−k -blade, and M =dA is an k +1 -blade and M ⋆ is an n−k −1-
blade.
2nd row. Check that ¿.
¿
Skip 3rd row and prove first 4th row, ^
M =¿.

¿
Back to 3rd row.
M =dA

M =¿
^⋆
M =−¿

d=− ^
M (^
⋆ −1
A ) =−¿¿

We shall come back to the 5th and 6th rows after studying section 11.8.
Section 11.7 Incidence Relationships
Dorst on page 129 equation (5.6) defines A ∩ B=¿ , while Hestenes in CAGC page 25 equation
(2.30) defines it the opposite way, A ∨ B=¿ where J is the least subspace which contains both
subspaces. The difference is at most a sign,
A ∨ B= (−1 )( grade (J )−grade ( A )) ( grade (J )−grade (B )) B ∨ A
The meet ∩ and the regressive ∨ are essentially the same, though one can define the regressive
product with the dual being with respect to the whole space and not with respect to the least
subspace which contains both. But then if the sum of the degrees is less than the degree of the
space, the regressive product is zero. To write the meet using the wedge product we have
A ∨ B=( A J −1 ∧ B J −1 ) J
( A ∨ B ) J −1= A J −1 ∧ B J −1
Browne on page 291 equation (5.3) has an axiom that the complement of the wedge product of
two blades is the regressive product of the complements of the blades. Which is the “dual” of the
axiom that the complement of the regressive product of two blades is the wedge product of the
complements of the two blades. Dorst has the “dual” axiom but doesn’t mention the first.
Because if one takes the case when A∧B intersect, their wedge product is zero, and the dual of
zero with respect to any space is zero, and also their duals with respect to their union are disjoint,
so their intersection is empty.
But in fact the first axiom doesn’t refer to the case of intersecting blades, but rather disjoint, non-
intersecting blades. So suppose Ar ∧ B s ≠ 0. Suppose that the dimension of the space is n and
¿ ¿ ¿ ¿
n> r+ s . Then the rank of Ar is n−r and the rank of Bs is n−s . Then the rank of Ar ∨ B s is
n−r +n−s−n=n−r−s> 0. So since Ar ∧Bs are disjoint, but are properly included in the n -
space, their duals with respect to the whole n -space intersect and that intersection is the dual with
¿ ¿ ¿
respect to the whole n -space of Ar ∧ B s, so the intersection is Ar ∨ B s=M =( A r ∧ Bs ) . Now
¿ ¿ ¿ ¿
Ar =M ∧ B and Bs =A ∧ M . So the union J= A ∧ M ∧ B of Ar ∧Bs is the entire n -space. So the
dual with respect to J is the dual with respect to the whole n -space. So the formula
( A ∧ B )¿ =A ¿ ∨ B¿ applies when the join is the whole space and A∧B are disjoint and their wedge
product is nonzero and the dual is with respect to the whole space. While the formula
( A ∨ B )¿ =A ¿ ∧ B¿ applies when A∧B are not disjoint and their wedge product is zero, and the
join is their union but not necessarily the whole space and the dual is taken with respect to the
union and not necessarily with respect to the whole space.
¿
Note that ( A ∨ B ) =A ¿ ∧ B¿ is zero when the dual is with respect to the whole space and the join
is not the whole space. Because then A ∨ B is zero, since the sum of the ranks of A∧B doesn’t
exceed the rank of the whole space. And also A¿ ∧ B¿ is zero since A¿ ∧B¿ intersect. So the dual
and the meet or the regressive product must be taken with respect to the union of A∧B and not
with respect to the whole space when the union is strictly included in the whole space. Similarly,
( A ∧ B )¿ =A ¿ ∨ B¿ is degenerate when the dual is taken with respect to the union of A∧B .
Because if A∧B are disjoint, then both sides are empty, and if A∧B intersect, the left side is
zero and the right side is empty. See further my notes on 11.7.2 page 296.
11.7.1 We saw above that an equation of a line is p ∧a=( e 0 + p ) ∧a=e 0 ∧ a+ p ∧a=e 0 a+ B,
where B is a bivector with a a factor of B. Dorst brings here two lines, L=e0 u+U ≡ ( e0 + p ) ∧u
and M =e0 v +V ≡ ( e0 +q ) ∧ v . See in figure (11.5) that U =p ∧u and V =q ∧ v . We assume the
two lines are in a plane I , which goes through the origin, so the lines must intersect at the infinite
point u when u ∥v and at a finite point if u ⋕ v . So by assumption, the vectors p ,u ,q , v are
vectors in the two dimensional plane through the origin, and any two of them (if u ⋕ v∧ p , q ≠ 0)
form a basis for that plane. Calculate the finite intersection point. Now the lines L∧M are one-
dimensional objects in the base space, so they are 2-blades in the homogeneous space, which are
simple bivectors ( e 0 + p ) ∧u and ( e 0 +q ) ∧ v . The intersection point is a zero dimensional object in
the base space, so it is a vector in the homogeneous space, L ∨ M . But this regressive product is
with respect to the join, not necessarily with respect to the whole space.
First we deal with the case u ⋕ v . Then the join is the plane, the two dimensional object in base
space, which is represented as a trivector in the homogeneous space, so if the homogeneous
space is three dimensional, then the join is the whole space, and the regressive product is with
respect to the whole space. We have the equation (5.8) page 129 (note the swap of L∧M , unlike
Hestenes or Browne), ( L∨ M ) J −1=( M J −1 ) ∧ ( L J −1) or L ∨ M =( M J −1 ) ∧ ( L J −1 ) J . So what is J ?
Dorst writes that J=e0 I =e 0 ∧ I . And what is I ? We choose it for now to be u ∧ v , but later we’ll
discuss that. So J=e0 ∧u ∧ v=e 0 ( u∧ v ) . But note that p∧q are also in that plane, namely
p ∧u ∧ v=q ∧u ∧ v=0. So p ∧u=φ ( u∧ v ) and q ∧ v=ψ ( u ∧ v ) for some scalars φ , ψ . And
therefore ( p ∧u ) ( u∧ v ) ≡ ( p ∧u )−⋆ and ( p ∧u ) ( u∧ v )−1 ≡ ( p ∧u ) ⋆ and ( q ∧ v ) ( u∧ v )= ( q ∧v )−⋆ and
( q ∧ v ) ( u∧ v )−1 ≡ ( q ∧ v )⋆ are all scalars. Now
−1 −1 −1 −1 −1 −1 −1 −1 −1 ⋆ −1 ⋆ −1
L J =L ( u∧ v ) e 0 =( e 0 u+ p ∧u ) ( u ∧v ) e 0 =e0 u ( u∧ v ) e 0 + ( p ∧u )( u ∧ v ) e0 ≡ e0 u e 0 + ( p∧ u ) e 0

where u ⋆ is a vector in the plane u ∧ v and ( p ∧u )⋆ ≡U ⋆ is a scalar. Now

0 =e 0 ( u ⋅ e 0 +u ∧ e0 ) =e 0 ( u ∧ e0 ) =¿
e 0 u ⋆ e−1 ⋆ −1 ⋆ −1 ⋆ −1

−1 ⋆ ⋆ −1 −1 ⋆ ⋆ −1
So altogether, L J =−u +U e 0 . Similarly, M J =−v +V e 0 where
⋆ ⋆ −1
V =( q ∧ v ) =( q ∧ v )( u ∧ v ) is a scalar. So

[
L ∨ M =( M J −1 ) ∧ ( L J −1 ) J = ( −v ⋆ + V ⋆ e−1 ⋆ ⋆ −1
]
0 ) ∧ (−u +U e 0 ) e0 ( u ∧v ) = v ∧u e 0 ( u ∧ v )−U ( v ∧ e0 ) e0 ( u ∧v ) −V
( ⋆ ⋆) ⋆ ⋆ −1

Now ( u ∧ v )⋆=( u ∧ v )( u ∧ v )−1=1. Using the result that ( a ∧ b ) ( b∧ a ) =( a ⋅ a ) ( b ⋅b )−( a ⋅b )( a ⋅ b ) and


b ∧a
( a ∧ b )−1= one can show that ( v ⋆ ∧u⋆ ) ( u ∧ v ) ≡ [ v ( u ∧v )−1 ∧u ( u ∧ v )−1 ] ( u ∧ v )=1.
( a ∧b )( b ∧ a )
−⋆
Namely ( v ⋆ ∧u⋆ ) =( u ∧v ) ⋆. So altogether
⋆ ⋆ q∧v p ∧u
L ∨ M =e 0−U v +V u=e 0+ u− v
u∧v u∧v
We see that choosing I to be u ∧ v gave an intersection point of weight one. Now Dorst doesn’t
define I to be u ∧ v and he leaves it unspecified, but it must be some bivector which is a scalar
multiple of u ∧ v . We’ll work out I below.
Let’s do the calculation without assuming that I =u ∧ v . So
−1 ⋆
[⋆ −1 ⋆ ⋆ −1 ⋆
]
L ∨ M =( M J ) ∧ ( L J ) J = ( −v + V e 0 ) ∧ (−u +U e 0 ) e0 I =( v ∧ u ) e 0 I −U [ v ∧ e 0 ] e 0 I −V [ e 0 ∧ u ] e 0 I =
−1 ⋆ ⋆ ⋆ −1 ⋆ −1 ⋆

where we used in the 7th equality equation (3.21) page 79, ¿ if A ⊆C and D ∧1=D (Browne
page 45 equation (2.14)). If C is a pseudoscalar, then ¿
So
⋆ ⋆ ⋆ −1 ⋆
L ∨ M =e 0 (u ∧ v ) −U v+V u=e 0 ( u∧ v ) I −U v +V u=( u ∧ v ) I
⋆ −1
( e 0−U I−1 I (u ∧ v )−1 v +V I −1 I ( u ∧ v )−1 u )= (u
So the weight depends on I .
In section 2.7 Dorst finds the intersection point without using the meet or regressive product.
Write the intersection point x as x=e 0 + x=e 0 +α u+ β v and we need to find α ∧β . Then
x ∧ u=β v ∧u and x ∧ v=α u ∧ v . But since x∧ p are on L, so x= p+ γ u, so x ∧ u= p ∧u , and
x∧v q∧v
similarly, as x∧q are on M , so x=q +δ v , so x ∧ v=q ∧ v , so α = = and
u ∧ v u∧ v
x ∧u p ∧u − p ∧u
β= = = .
v ∧u v ∧u u ∧v
Now suppose we fix the weight of the intersection point to be α . Then we can determine J∧I
exactly. Given two intersecting subspaces A∧B , with the intersection being N , then A=A ' ∧ N
and B=N ∧ B ' and A' = A ⌊ N−1 ¿ ¿ and B' =¿ and J= A ' ∧ N ∧ B' = A ' ∧ B= A ∧ B ' . Now A here
is the bivector L=( e0 + p ) ∧u and B here is the bivector M =( e0 +q ) ∧ v and the intersection here

is the vector N=α e0 + ( q∧v


u∧v
u+
p ∧u
v ∧u )
v ≡ α ( e 0+ β u+ γ v ) . And (see my notes on NFCM page 260

exercise (1.8))
N α ( e 0 + β u+ γ v ) e 0+ β u+ γ v e 0 + β u+ γ v
N −1 = = 2 = = =δ ( e 0+ β u+ γ v )

| |
α β 2 γ 2 [ ( u ∙ v ) −u2 v 2 ]
2 2
N α ( e 0 + β u+ γ v )( e0 + β u+γ v ) 0 0 1
2 2
α βγ (u ∙ v ) β u 0
γ 2 v2 βγ ( u∙ v ) 0

L' =L ⌊ N −1 ¿ ¿=δ [ ( e0 ∧u ) ⌊ ( e 0+ β u+ γ v ) + ( p ∧u ) ⌊ ( e 0+ β u+ γ v ) ¿ ¿=δ [ ( e0 ∧u ) ⌊ e 0 + β ( e 0 ∧ u ) ⌊u ¿ ¿+ ( p ∧u ) ⌊ e 0 ¿ ¿+ γ

J=L ⌊ N−1 ¿¿
So I =( ζ q−ζ p−ηu ) ∧ v
Now let’s discuss the parallel case, so the intersection is at infinity. Assuming p ≠ q, so the lines
are distinct, then the smallest subspace which contains them both is the plane, just like the case
when the lines intersect at a finite point. So I ∧J are the same in the finite or infinite intersection.
Now when we did above the intersection calculation, using an unspecified I , we got the result
N=L ∨ M =e0 ( u ∧ v )⋆−U ⋆ v +V ⋆ u
This is correct even when u ∥ v . However, u ∧ v=0, so v=−σ u, so N= ( V ⋆ +σ U ⋆ ) u. So the
intersection is any scalar multiple of u. Now V ⋆ ∧U ⋆ ∧σ are scalars. Since u∧v are given, σ can
be calculated. Also, p∧q are given, so U ∧V can be calculated. To calculate V ⋆ ∧U ⋆ one needs
to know I . But I is the plane, and a plane can be represented as the wedge product of any two
independent vectors. In the finite intersection case, the location of the point of intersection can be
calculated exactly, but the weight of that point, depends on I , and also J depends on I , and it
seems that I can be determined only so up to a scalar multiple. Alternatively, we can choose the
−1 u
infinite intersection point, say N=u , and N = . Then we can determine I exactly.
u2
' −1
L =L ⌊ N ¿ ¿=
1
u
1 2
e ∧u ) ⌊ u + ( p ∧u ) ⌊u ¿ ¿= 2 [ u e0 +u p− ( p ∙ u ) u ]= p− p ∙
2 [( 0
u
2 u u
|u| |u| ( )
J=L ⌊ N−1 ¿¿

[
So I = q− p+ p ∙ ( |uu|)|uu|] ∧ v.
Conversely, we can choose I= p ∧u assuming that p ≠ 0, namely, that L doesn’t go through the
origin, and e 0 is not on L. Then we can calculate the exact magnitude of the infinite intersection
point,
−1 u∧ p u∧ p
I = = 2 2
( u∧ p )( p ∧u ) u p −( p ⋅u )2
⋆ ( p ∧u )( u ∧ p )
U = 2 2 2
=1
u p −( p ⋅ u )

⋆( q ∧σ u )( u ∧ p )
V = 2 2 = 2 2
σ
|
q ⋅ p q⋅u
u ⋅ p u2

|
( q ⋅ p ) u2 − ( q ⋅ u ) ( p ⋅ u )
2 2 2 2 2
u p −( p ⋅u ) u p −( p ⋅u ) u p − ( p ⋅ u)

N= ( V +σ U ) u=σ
⋆ ⋆
( ( q ⋅ p ) u2−( q ⋅u )( p ⋅ u )
2 2
u p −( p⋅ u )
2
+1 u=σ
) (
q ⋅ p−( q ⋅ u^ ) ( p ⋅ u^ )
2
p −( p ⋅ u^ )
2
+1 u
)
If I is fixed, then the intersection point can be determined exactly, and if the weight or the
magnitude of the intersection point is fixed, then I can be determined exactly.
p ∧u p ∧u
Now suppose that we choose I to be a unit bivector, | p ∧u|= , and suppose that u=v
√ p −( p ⋅ u )
2 2

−1 ~ − p ∧u
are unit vectors. Then I = I = , where ~
❑ denote the reverse in Dorst book. Then
√ p −( p ⋅ u )
2 2
given any bivector B in the plane, B I −1 is the scalar which is the magnitude of B, as was shown
in my notes on Hestenes projective geometry paper equation (2.9). So
− ( p ∧u )( p ∧u )
=√ p − ( p ⋅ u ) ,
⋆ −1 2 2
U =( p ∧u ) I = is the magnitude of U, and
√ p − ( p ⋅u )
2 2

−( q ∧ u )( p ∧u )
V ⋆ =( q ∧u ) I −1= is the magnitude of V . Now let δ L ∧δ M be the magnitude of the
√ p −( p ⋅u )
2 2

distance vectors d L∧d M of L∧M from the origin. So U =p ∧u=d L u, so the magnitude of U ,
which is U ⋆ , is the magnitude δ L of d L, as u is a unit vector. Similarly, V =q ∧u=δ M u, so the
magnitude of V =q ∧u , which is V ⋆, is the magnitude δM of d M. So
⋆ ⋆
L ∨ M =−U v +V u= (−δ L + δ M ) u. So the magnitude of the infinite intersection point is related
to the distance between the lines.
The last case is when the two lines coincide. So every point on the line is an intersection point.
So ( q− p ) ∥u∥ v . So M =( e0 +q ) ∧ v=( e 0 + p+α u ) ∧u=( e0 + p ) ∧u=L. In that case the join and the
meet here are the line itself, J=1 ∧ L ∧1=L , and L ∨ L=¿ . So the join and meet coincide, being
represented by the subspace spanned by the bivector ( e 0 + p ) ∧u . But the equation e 0 I=J has no
solution for I unless the line goes through the origin, namely p=0 or p ∥u and L=e0 ∧u , in
which case I can be taken to be any scalar multiple of u. When the lines are distinct, then I is a
plane which goes through the origin, so it is spanned in general by any two of p , q , u , v . But
when the lines coincide and do not go through the origin, no one dimensional subspace includes
⋆ ⋆ ⋆
the lines. The above formula N=L ∨ M =e0 ( u ∧ v ) −U v +V u doesn’t work, because once I
can’t be defined, the ❑⋆ dual is not defined. But even when the lines go through the origin, so
I =u, U =p ∧u=0=V , so U ⋆ =V ⋆ =0. The formula for the intersection works only when the
intersection is one point and not the lines themselves.
Two skew lines in space page 295. Skew lines are nonintersecting lines in space. So I would
expect the meet to be zero. But in fact it is a nonzero scalar.
We have two lines as before, L= p ∧u=e 0 u+ p ∧u=e 0 u+U . M =q ∧ v =e 0 v+ q ∧ v=e 0 v +V .
The join is the whole four space. The three subspace which contains p , q , u , v can be spanned by
~
~ −1 I3 I3
say p ∧u ∧ v which we’ll take to be I 3, so J=e0 I 3. Now I 3=−I 3 so I 3 = ~ = . Now
I I I I 3 3 3 3

¿ −1 −1 −1 −1 −1 −1 −1 ⋆ −1 ⋆ −1
L =L J =( e 0 u+ U ) I 3 e 0 =e 0 u I 3 e 0 + U I 3 e 0 =e 0 u e 0 + U e 0
⋆ ⋆ ⋆
Note that u is a bivector and U is a vector. Let u =a ∧ b, then
−1 −1 ⋆
e 0 ( a ∧b ) e 0 =( e0 ∧a ∧ b ) e 0 =a ∧ b=u . So L¿ =u⋆ +U ⋆ ∧ e−1 ¿ ⋆ ⋆ −1
0 and M =v +V ∧e 0 . So

−¿
[
L ∨ M =( M ¿ ∧ L¿ ) = ( v ⋆ +V ⋆ ∧ e−1 ⋆ ⋆ −1 ⋆ ⋆
]
0 ) ∧ ( u + U ∧e 0 ) e 0 I 3 = ( v ∧ U ∧e 0 +V ∧ e0 ∧u ) e 0 I 3= ( v ∧U + V ∧u ) I
−1 ⋆ −1 ⋆ ⋆ ⋆ ⋆ ⋆

| || |
a b c 0
a b c
d e f 0
In the fourth equality we used the fact that = d e f as we have there the
g h i 0
g h i
0 0 0 1
contraction of two pseudoscalars in four space.
−⋆ ⋆ −1 −1
Now show that ( v ⋆ ∧U ⋆ ) =( v ∧U )⋆ . Now ( v ∧U ) =( v ∧ p ∧u ) I 3 =I 3 I 3 =1 . And

−⋆
( v ⋆ ∧U ⋆ ) = v ( I3
I3 I3
I
∧ ( p ∧u ) 3 I 3 =
I3 I3 ) 1
I 3 I 3 I 3 I3
¿

Note that ( v ∧U )⋆=( U ∧ v )⋆ since v ∧U =U ∧ v .


−⋆
Next show ( V ⋆ ∧ u⋆ ) = (V ∧u )⋆.
I3 1 1

( V ∧u ) =( V ∧u ) = ( q ∧v ∧u )( p ∧u ∧ v )= [ ( α p+ β u+ γ v ) ∧ v ∧u ] ( p ∧u ∧ v )= α [ p ∧ v ∧u ] ( p ∧u ∧
I 3 I 3 I 3 I3 I3 I3 I3 I3
−⋆

[
( V ⋆ ∧ u⋆ ) = ( q ∧ v )
I3
I3 I 3
∧u
I3
I3 I3
I 3=
] 1
I3 I3I3 I3
¿

So altogether we have
⋆ −⋆ ⋆ −⋆
L ∨ M =( v ∧U ) + ( V ∧u ) =( v ∧U ) + (V ∧u ) = (U ∧ v ) + ( u ∧V )
⋆ ⋆ ⋆ ⋆ ⋆ ⋆

So the regressive product is a scalar in four space, which is not a point. So,

L ∨ M =( p ∧u ∧ v+u ∧ q ∧ v ) =[ ( p−q ) ∧u ∧ v ] = {[ ( p−q ) ∧ u∧ v ] ( u∧ v ) ( u ∧v ) } ≡ [ d ( u ∧v ) ]


⋆ ⋆ −1 ⋆ ⋆

Since d ( u ∧v ) is a trivector, d ( u ∧v ) =d ∧ ( u∧ v ) , so d is orthogonal to u∧v and therefore its


magnitude is the shortest distance between the lines. But in general ( p−q ) ∧ ( u ∧ v ) ≠ ( p−q )( u ∧ v )
, because in general p−q is not orthogonal to u∧v . So d is the vector from L to M which is
orthogonal to u ∧ v , while p−q is the vector from L to M which in general is not orthogonal to
u ∧ v . In Figure 11.6 it is not clear that d is orthogonal to both u∧v . In general, I don’t see any
reason to believe that the shortest distance between the lines is the difference between the
shortest distances from the origin to the lines.

The formula L ∨ M =[ d ( u∧ v ) ] is a nonzero scalar when the lines do not intersect. It increases
when the distance between the lines increases and when the angle between the directions of the
lines increases from parallel to orthogonal. If the lines intersect at a finite point, so d=0, then the
join is no longer the four space, but rather a three space, and we are back to previous case. Also
if the two lines do not intersect, but are parallel, then u ∧ v=0, so we are back to the previous
case where the intersection is at infinity and the join is a three space.
To find the points on the lines which are closest, use the old way.
L=a+t b
M =c+ s d
δ=a+t b−c−s d
δ is a vector from the line M to the line L.
δ ∙ δ=a2+ c 2−2 ( a ∙c ) +t 2 b 2+ s 2 d 2 +2 t [ ( a−c ) ∙ b ]−2 s [ ( a−c ) ∙d ]−2ts ( b ∙ d )
∂ (δ ∙ δ)
=2 t b +2 [ ( a−c ) ∙ b ] −2 s ( b ∙ d )=0
2
∂t
∂ (δ ∙ δ)
=2 s d 2−2 [ ( a−c ) ∙ b ] −2 t ( b ∙ d )=0
∂s
s ( b ∙ d )−[ ( a−c ) ∙ b ]
t́= 2
b
b [ ( a−c ) ∙ d ]− ( b ∙d ) [ ( a−c ) ∙ b ]
2
s= 2 2 2
b d −( b ∙ d )
( b ∙ d ) [ ( a−c ) ∙ d ] −d 2 [ ( a−c ) ∙b ]
t= 2 2 2
b d − ( b ∙d )
Using these values of t∧s in the formulas of L∧M , respectively, we get two points, one on each
line, and their difference δ , which is a vector from M to L, satisfies
δ ∙ b=δ ∙ d=0
And the magnitude of δ is the (shortest) distance between the lines. Another way to find δ , if we
δ
don’t care to know the begin and end points of δ , is as follows. is a unit vector orthogonal to
|δ|
both lines. Let M , L be respectively the start and end points of δ , M on M and L on L. Let C , A
be the start and end points of the vector a−c , C on M and A on L. Translate a−c so it would
start at M instead of C and end at A ' instead of A . So we have a parallelogram whose four
vertices are C , A , A ' , M , where ⃗ MA ' and ⃗
CA ≡ a−c is parallel to ⃗ CM is parallel to ⃗
AA ' . Now
consider the triangle whose vertices are M , L , A ' . The side ML is just δ . The side MA ' is
essentially a−c . The angle at L between the sides A' L∧ML is claimed to be straight, that is,
δ
( a−c−δ ) ∙ δ=0. So then the projection of MA ' along ML is ML. So ( a−c ) ∙ =|δ |.
|δ |
Let check that A' L⊥ ML.
( b ∙ d ) [ ( a−c ) ∙ d ] −d 2 [ ( a−c ) ∙ b ] b2 [ ( a−c ) ∙ d ]−( b ∙ d ) [ ( a−c ) ∙ b ] 1
2{
δ=a+t b−c−s d =a−c+ 2 2 2
b− 2 2 2
d= 2 2
[ b2 d 2−
b d −( b ∙ d ) b d −( b ∙ d ) b d −( b∙ d )

( a−c−δ ) ∙ δ= 2
1
2
b d −( b ∙ d ) 2 {[
b [ ( a−c ) ∙ d ]−( b ∙ d ) [ ( a−c ) ∙ b ] ] d−[ ( b∙ d ) [ ( a−c ) ∙ d ]−d [ ( a−c ) ∙ b ]] b } ∙ 2 2
2 2 1
{
b d −( b ∙ d )2
{[ b
I did this calculation on Mathematica and the result is zero.
But why is it true? I saw in https://www.youtube.com/watch?v=w1wKPLy_vRk that there exist
parallel planes which contain both lines. Example. Consider the skew lines ( 0,0,0 ) +t (1,0,0 ) and
( 0,1,0 ) + s ( 0,0,1 ) . The set S of all planes which contain the first line (the x axis) does not include
the y , z plane or any plane x=c for any fixed scalar c . And the normal to all planes in S has zero
x component, ( 0 , y , z ) . The normal to all planes T which include the second line has zero z
component ( x , y , 0 ). Consider the set of parallel planes U whose normal is ( 0 , y ,0 ). There is a
plane in S ∩U , namely, t ( 1,0,0 ) +t ' ( 0,0,1 ), and a plane in T ∩U , namely
( 0,1,0 ) + s ( 0,0,1 ) + s ' ( 1,0,0 ). These two planes are parallel and distinct, and contain the given lines.
In general. We have two skew lines L=a+t b and M =c+ s d . Then the normal to any plane
which contains L has to have a normal n such that n ∙ b=0. Similarly, the normal to any plane
which contains M has to have a normal n such that n ∙ d=0 . Consider a plane P whose normal is
b × d . This normal is orthogonal to both lines. There is a plane which contains L whose normal is
parallel to b × d , namely a+ t b+t ' ( b × ( b × d ) ), and there is a plane which contains M whose
normal is parallel to b × d , namely c + s d +s ' ( d × ( b × d )) , and these two planes are of course
parallel. The distance between these two planes is the shortest distance between the lines L∧M .
Based on that, let L , M be the parallel planes which contains the lines L∧M respectively. Then
C∧M are points in M and A , A ' , L are points in L. Then ML is normal to both L∧M . So A' L,
which is a line segment in L, is orthogonal to ML.
See “‫ ”הערות פיצפטריק ב‬on page 37 close to page 38.
11.7.2 Relative Orientation
Recall from my notes on 11.7 the case of two blades A , B with grades a , b such that A ∧ B ≠0 ,
and a+ b<n. So grade ( A¿ ) + grade ( B¿ ) =n−a+n−b=2n−a−b>2 n−n=n. So A¿ ∧ B¿ =0 and
¿ ¿ ¿ ¿∗¿=B∧ A ¿
−¿
A ∨ B is a blade of nonzero glade. So ( A¿ ∨ B¿ ) =B ¿∗¿ ∧ A ¿
and A¿ ∨ B¿ =( B ∧ A ) . Thus
¿ −¿
when A∧B are disjoint, A ∧ B= ( B¿ ∨ A ¿ ) and when A∧B intersect, A ∨ B= ( B¿ ∧ A ¿ ) . Or when
−¿ ¿
A∧B are disjoint, ( A ∧ B ) =B¿ ∨ A¿ and when A∧B intersect, ( A ∨ B ) =B ¿ ∧ A¿.
Now consider the case when A ∧ B ≠0 and a+b=n, so then also
grade ( A¿ ) + grade ( B¿ ) =n−a+n−b=2n−a−b=2 n−n=n, so A¿ ∧ B¿ ≠ 0 , and
grade ( A ) =grade ( B ¿ ) and
¿
grade ( B )=grade ( A ). The dual is taken with respect to I n or with respect to any pseudoscalar of
grade n , like A ∧ B or A¿ ∧ B¿ , denote it as J n. So A ∨ B is a scalar and so is A¿ ∨ B¿ . So in that
¿ −¿
case we have the formula ( A ∨ B ) =B ¿ ∧ A¿ where both sides have grade n , or A ∨ B= ( B¿ ∧ A ¿ ) .
But also using equation 5.6 page 129, C ∨ D=¿, we have,
A ∨ B=¿
So we have two expressions for A ∧ B ≠0 and A ∨ B a scalar,
¿
( A ∨ B )−¿ = A ∧ B=( B¿ ∨ A ¿ )
−¿
( A ∧ B )¿ =A ∨ B=( B¿ ∧ A ¿ )
The second equalities in each line are also true when n> a+b, but the first equalities are only true
when n=a+b .
Two points on a line. We have two points x∧ y , and the line through them is represented by a
bivector B=( e0 + x ) ∧ ( e0 + y ) in the three dimensional representational space spanned by e 0 , x , y .
And any point p on the line is represented by a vector p=e0 + p in the representational space,
such that p ∧ B=0 . Or
( e 0 + p ) ∧ ( e0 + x ) ∧ ( e 0 + y ) =0
Expanding we get
e 0 ∧ ( x ∧ y − p∧ y + p ∧ x ) + p ∧ x ∧ y=0
So x ∧ y− p ∧ y + p ∧ x=0 and p ∧ x ∧ y=0 .
But
x ∧ y− p ∧ y + p ∧ x=( x− y ) ∧ ( x− p ) =−( y−x ) ∧ ( y− p )=0
So x− p=α ( x− y ) and y− p=β ( y −x ), or p=x −α x +α y= (1−α ) x +α y , or p is an affine
combination of x∧ y . So automatically p ∧ x ∧ y=0 .
Note that ( e 0 + x ) ∧ ( e 0 + y )=( e 0+ x ) ∧ ( y −x ) but e 0 + y−x is not a point on the line. Certainly the
y−x
equation of the line can be also written as ( e 0 + x ) ∧ .
| y−x|
¿
Now by the above result, x ∨ y=( x ∧ y ) , where the dual is taken with respect to the join, which
x∧y x∧y
is the line, which is a bivector in the representational space, say |x ∧ y|= 2 2 . This
√ x y −( x ⋅ y ) 2
dual is not the ❑⋆ dual. So
¿
x ∨ y=( x ∧ y ) = ( x ∧ y )
√x 2
y∧x
2
y −(x ⋅ y )
2
2 2 2

=√ x y −( x ⋅ y ) = [ ( e0 + x ) ⋅ ( e 0 + x ) ][ ( e 0 + y ) ⋅ ( e 0+ y ) ]−[ ( e 0+ x ) ⋅ ( e0 + y ) ] =√ (
2

This becomes zero when x= y , and increases as y−x increases. If we take x to be fixed, say the
point on the line closest to the origin, then as y increases, the regressive scalar increases.
If the line goes through the origin, then the representational space can be taken to be two
dimensional, so the line is one dimensional subspace. Then only one point x=e 0 + x on the line is
needed, and every other point on the line is linearly dependent on x .
¿
The equation x ∨ y=( x ∧ y ) is true when the join is two dimensional J 2 and the dual is
−1
contraction with J 2 , and e 0 is not in J 2. Note that x + y is not on the line unless the line goes
through the origin, in which case the representational space is two dimensional, spanned by
e 0∧x=e0 + x , and then any other point on the line has a location e 0 +t x . Then

x e0 + x e0∧ x
J 2=e0 ∧ =e 0 ∧ =
|x| √ 1+ x √ 1+ x2
2

The equation of the line is e 0 ∧ x and


−e 0 ∧ x −e 0 ∧ x x2
e 0 ∨ x=( e 0 ∧ x ) = (e0 ∧ x ) =
√ 1+ x 2
√ 1+ x 2 √ 1+ x 2
A point and a line in their plane. We have three points, p=e0 + p , q=e 0 +q , r=e0 +r , and p ∧q is
a line and r is assumed to lie off the line. Here the representational plane is four dimensional, and
the subspace which doesn’t include e 0 is three dimensional, and is spanned by p , q , r . If r is off
the line p ∧q , then it is not possible for r to be linearly dependent on p , q, because r is not on the
plane which contains the three points e 0 , p , q . If r is on the line of p ∧q , then r −p=α ( q− p ), or
r =p+ α ( q− p )= (1−α ) p+α q, so r is an affine combination of p∧q, and the subspace which
doesn’t include e 0 is not spanned by p , q , r .
The join J is the plane which the three points form, which is represented by the trivector
J= p ∧ q ∧r . The unit trivector is
p ∧q ∧ r p ∧q ∧ r p ∧q ∧ r p ∧q ∧ r p∧
= = = =

√| |
| p ∧q ∧ r| √ ( r ∧ q ∧ p ) ( p ∧ q ∧r ) √ ( r ∧ q ∧ p ) ( p∧ q ∧r ) r
2
r⋅ q r⋅ p √ p q r − p ( q ⋅ r ) −q ( p ⋅r )
2 2 2 2 2 2 2

2
q⋅r q q⋅ p
2
p⋅r p ⋅q p
Its inverse is its reverse, which is its negative.
So
( p ∧ q ∧r ) ( r ∧ q ∧ p )
( p ∧q ) ∨ r= =√ p2 q2 r 2− p2 ( q ⋅r )2−q 2 ( p ⋅r )2−r 2 ( p ⋅ q
√ p q r − p ( q ⋅ r ) −q ( p ⋅ r ) −r
2 2 2 2 2 2 2 2 2
( p ⋅q ) + 2 ( p ⋅ q )( p ⋅r ) ( q ⋅ r )
So | p ∧q ∧ r|= ( p ∧ q ) ∨r . Note that we have just shown that the volume of a parallelepiped
made of three vectors p , q , r is

√ p q r − p ( q ⋅r ) −q ( p ⋅r ) −r
2 2 2 2 2 2 2 2 2
( p ⋅q ) +2 ( p ⋅ q ) ( p ⋅ r )( q ⋅r )
This will be shown to be zero if as points p , q , r lie on one line. As points, the three points form
a triangle, and Dorst claim that the area of that triangle is proportional to the volume of the
parallelepiped made of p , q , r . The sides of the triangle are q− p=q−p , p−r= p−r , r −q=r−q
. Note that the vector sum of the three sides is zero. Also note that if the three points lie on one
line, then the triangle is degenerate. The area of that triangle is
1 1 1
A=
2
|( r−q ) ∧ ( p−r )|= |( r −q ) ∧ ( q− p )|= |( p−r ) ∧ ( q− p )|
2 2
In general |a∧ b|=√ a2 b2−( a ⋅b )2, where a , b are vectors. So
1 1
A=
2
| 2 √
( r−q ) ∧ ( p−r )|= ( r−q )2 ( p−r )2−[ ( r −q ) ⋅ ( p−r ) ]
2

Expanding, we get
2
4 A =( r −q ) ( p−r ) −[ ( r−q ) ⋅ ( p−r ) ] = p q −( p ⋅ q ) + p r −( p ⋅ r ) + q r −( q ⋅r ) −2 p ( q ⋅r ) −2 q ( p ⋅r ) −2r ( p ⋅q
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2

Now calculate the volume of the parallelepiped made of p , q , r in terms of the parallelepiped
made of p , q , r . After some calculations, assuming e 20=1,
2
p q r − p ( q ⋅r ) −q ( p ⋅ r ) −r ( p ⋅ q ) +2 ( p⋅ q ) ( p ⋅ r )( q ⋅ r )=( e0 + p ) ( e 0 +q ) ( e0 +r ) −( e 0 + p ) ( ( e 0 +q ) ⋅ ( e0 +r ) ) −( e0 +
2 2 2 2 2 2 2 2 2 2 2 2 2

Next we relate the volume of the parallelepiped made of p , q , r, which is


p q r − p ( q ⋅r ) −q ( p ⋅ r ) −r ( p ⋅ q ) +2 ( p⋅ q ) ( p ⋅ r )( q ⋅ r ), to the shortest distance d from e 0 to
2 2 2 2 2 2 2 2 2

the plane of the triangle. So let p=e0 + p=e 0+ d + p , or p=d+ p . Similarly, q=d +q and r =d +r .
So p , q , r are vectors in the plane and d ∙ p=d ∙q=d ∙ r=0 . So the volume squared of the
parallelepiped made from p , q , r becomes,
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
p q r − p ( q ⋅r ) −q ( p ⋅ r ) −r ( p ⋅ q ) +2 ( p⋅ q ) ( p ⋅ r )( q ⋅ r )=( d + p ) ( d +q ) ( d +r ) − ( d+ p ) ( ( d +q ) ⋅ ( d+ r ) ) −( d + q ) (
1 1
Now A=
2
|( r−q ) ∧ ( p−r )|= 2 |( r −q ) ∧ ( p−r )|, so

d [ p q + p r + q r −( q ∙ r ) −2 p ( q ∙ r )−( p ∙r ) −2 q ( p ∙ r )− ( p ∙ q ) −2r ( p ∙q ) +2 ( p ∙ q )( p ∙r )+ 2 ( p ∙ q ) ( q ∙ r ) +2 ( p ∙ r )
2 2 2 2 2 2 2 2 2 2 2 2 2

Next show that the first five terms in bracket make zero. Let without loss of generality,
p ⋅ q=| p||q|cos α , q ⋅ r=|q||r|cos β , p ⋅ r=| p||r|cos ( α + β ). Then

p q r − p ( q ∙ r ) −q ( p ∙ r ) −r ( p ∙ q ) +2 ( p ∙ q ) ( p ∙ r ) ( q ∙ r )= p q r ( 1−cos β−cos ( α + β )−cos α +2 cos α cos β co


2 2 2 2 2 2 2 2 2 2 2 2 2 2 2

This result is obvious, since p,q,r are vectors in a plane, and


2
p2 q2 r 2− p2 ( q ∙ r )2−q 2 ( p ∙ r )2−r 2 ( p ∙ q )2+2 ( p ∙ q ) ( p ∙ r ) ( q ∙ r ) is | p ∧q ∧ r| , which is the volume
squared of a degenerate parallelepiped.
In summary, ( p ∧q ) ∨ r equals the volume of the parallelepiped whose sides are p , q , r , which is
2 A √ 1+d
2

And the volume of the parallelepiped whose sides are p , q , r is

√ p q r − p ( q ⋅r ) −q ( p ⋅r ) −r
2 2 2 2 2 2 2 2 2
( p ⋅q ) +2 ( p ⋅ q ) ( p ⋅ r )( q ⋅r )=2 A d
And four times the area squared of the triangle whose vertices are the points p , q , r is
2 2 2
4 A 2=| p ∧ q| +| p ∧r| +|q ∧r| −2 p 2 ( q ⋅ r )−2 q2 ( p ⋅ r )−2 r 2 ( p ⋅ q )+2 ( p ⋅ q ) ( p ⋅ r ) +2 ( p ⋅ q )( q ⋅r ) +2 ( p ⋅ r ) ( q ⋅r )
Let’s check that for a simpler case. Let p=e0 + p , q=e 0 +q be vectors in the representation space,
which are points in the base space. Let d=e 0+ d be the point on the line closest to e 0. The length
of the line segment is | p−q|=√ ( p−q ) ⋅ ( p−q )=√ p 2+ q2−2 p ⋅q. The area of the parallelogram
whose sides are p , q is
2

| p ∧q|= √ p2 q2− ( p ⋅q ) = ( e 0 + p )2 ( e 0+ q )2 −( ( e 0 + p ) ⋅ ( e 0 +q ) ) = √( 1+ p2 ) ( 1+q 2) −( 1+ p⋅ q ) = √ 1+ p2 +q 2+ p2 q 2−1−
2 2

So | p ∧q|2=( p−q )2 +| p ∧q|2. Now p=d+ p and q=d +q where | p|+|q|=| p−q| and
p ⋅ q=−| p||q|. Then
2
| p ∧q| = p2 q2 −( p ⋅ q )2=( d + p )2 ( d +q )2 −( ( d + p ) ⋅ ( d+ q ) ) =( d 2 + p2 ) ( d 2+ q2 ) −( d 2−| p||q|) =d 4 + d 2 p2 +d 2 q2 + p2 q2−d
2 2

This formula | p ∧q|=d (| p|+|q|) just says that the area of the parallelogram whose sides are p , q
is base times height, namely, the length of the side p−q times the distance from that side to e 0
where p , q meet. So
| p ∧q| =( p−q )2 +d 2 (| p|+|q|) =( 1+d 2 ) ( p−q )2
2 2

So the area of the parallelogram whose sides are the representation-space vectors p , q is
proportional to the distance between the base-space points p , q, the proportionality factor is
related to the shortest distance from the origin to the line.
A point and a plane in space. This case consists of four finite points, three of them,
q=e 0 +q , r =e 0+ r , s=e 0+ s are on a plane, forming a nondegenerate triangle, and one off the
plane, which could be p=e0 if the plane is off the origin, or could be p=e0 + p, where p ≠ 0, if
the plane goes through the origin, in which case one of q ,r , s could be zero. So at any rate we
have four linearly independent vectors in the four dimensional representational space,
p=e0 + p , q=e 0 +q , r=e0 +r , s=e 0+ s, where one of p , q , r , s could be zero, and p , q , r , s are
linearly dependent, as the subspace whose base does not include e 0, is three dimensional, say
p=α q+ β r + γ s . Now p is off (on) the plane, iff the three vectors ( p−q ) , ( r−q ) , ( s−q ) are
linearly independent (dependent). If p is on the plane, then
0=( p−q ) ∧ ( r−q ) ∧ ( s−q )=( α q+ β r +γ s−q ) ∧ ( r−q ) ∧ ( s−q )=( ( α−1+ β+ γ ) ) q ∧ r ∧ s
So α + β+ γ =1, and p is an affine combination of q ,r , s .
Now the join of the four points are the whole space, p ∧q ∧ r ∧ s=s ∧r ∧q ∧ p . The unit
pseudoscalar of the four space is
p ∧ q ∧r ∧ s p ∧q ∧ r ∧ s p ∧ q ∧r ∧ s p ∧ q ∧r ∧ s

√|
J4= = = ≡

|
|p ∧ q ∧r ∧ s| √ ( p ∧q ∧r ∧ s ) ⋅ ( p ∧q ∧ r ∧ s ) δ
p ⋅ s p ⋅r p⋅ q p2
2
q⋅s q⋅r q q∙ p
r ⋅ s r2 r ⋅q r ∙ p
2
s s⋅r s ⋅q s ∙ p
~
Note that J −1
4 = J 4=J 4 . So

¿ p ∧q ∧r ∧s
p ∨ ( q ∧ r ∧ s )=( p ∧q ∧ r ∧ s ) = p ∧ q ∧r ∧ s =δ
δ
To be more concrete, suppose that , p=e0 and the plane q ∧ r ∧ s is off the origin. Now
e 0 ∧ q ∧r ∧ s=e 0 ∧ ( e 0 +q ) ∧ ( e 0+ r ) ∧ ( e 0 +s )=e0 ∧q ∧ r ∧s
The unit pseudoscalar of the four space is
e 0 ∧ q ∧r ∧ s e 0 ∧ q ∧r ∧ s e 0 ∧ q ∧r ∧ s e 0 ∧ q ∧r ∧ s

√|
J4 = = = =

|
|e 0 ∧ q ∧r ∧ s| √ ( e 0 ∧ q ∧r ∧ s ) ⋅ ( e0 ∧q ∧ r ∧s ) 0 1 √q r
2 2 2 2 2 2 2 2
s −q ( r ⋅ s ) −r ( q ⋅s ) −s ( q
0 0
q ⋅ s q ⋅r q 2 0
r ⋅ s r2 r ⋅ q 0
2
s s ⋅r s ⋅q 0
~
Note that J −1
4 = J 4=J 4 . So

p ∨ ( q ∧ r ∧ s )=( p ∧q ∧ r ∧ s ) = √ q r s −q ( r ⋅ s ) −r ( q ⋅s ) −s ( q ⋅r ) +2 ( q ⋅ r ) ( q ⋅ s ) ( r ⋅ s )=2 A √ 1+d


¿ 2 2 2 2 2 2 2 2 2 2

The last equality we got in the previous section, three points on a plane which form a
nondegenerate triangle, and A is the area of the triangle and d is the shortest vector from the
origin to the plane.
Two skew lines in space. This is, as in previous case of a plane and a point off the plane, a case
of four distinct points, which are not on one plane, since the skew lines do not intersect. The
lines are
L=( e0 + p ) ∧ ( e 0+ x ) and M =( e0 +q ) ∧ ( e 0 + y ). Let q=α x + β y+ γ p. Then
L ∧ M =( e 0 + p ) ∧ ( e0 + x ) ∧ ( e 0 +q ) ∧ ( e 0 + y )=( 1−α −β−γ ) e 0 ∧ x ∧ y ∧ p
J=e0 ∧ x ∧ y ∧ p

−1 e0 ∧ x ∧ y ∧ p e0 ∧ x ∧ y ∧ p e0∧ x ∧
J = = =
√ (e 0 ∧ x ∧ y ∧ p ) ⋅ ( e0 ∧ x ∧ y ∧ p ) √x 2 2 2 2 2 2 2 2 2
y p −x ( y ⋅ p ) − y ( x ⋅ p ) − p ( x ⋅ y ) +2 ( x ⋅ y ) ( x ⋅ p ) ( y ⋅ p ) 2A

Where A is the triangle formed by the three points x , y , p and d is the shortest vector from the
origin to the plane of the triangle.
¿ −1 e0∧ x ∧ y ∧ p ( e 0 ∧ x ∧ y ∧ p ) ( e0 ∧ x ∧
L ∨ M =( L ∧ M ) =( L ∧ M ) J =( 1−α− β−γ ) e0 ∧ x ∧ y ∧ p =( 1−α −β−γ )
2 A|d| 2 A|d|
Now the distance between the two lines is
( x− p ) × ( y−q ) ( x− p ) ∧ ( y −q )⋆
( p−q ) ∙ =( p−q ) ∙
|( x− p ) × ( y−q )| |( x− p ) ∧ ( y −q )⋆|
Let u=x− p and v= y−q. Now the unit pseudoscalar of subspace spanned by u , v , p is
u∧v∧ p u ∧v ∧ p u∧v ∧ p u ∧v ∧ p
= = 2 2 2 2 =
|u ∧ v ∧ p| √ ( p ∧ v ∧u ) ⋅ ( u ∧ v ∧ p ) √ u v p −u ( v ⋅ p ) −v ( u⋅ p ) −p ( u ⋅ v ) +2 ( u ⋅ v ) ( u⋅ p )( v ⋅ p ) 2 A|d|
2 2 2 2 2

Where A is the area of the triangle formed by the three points p , e0 +u , e 0 +v and d is the shortest
distance to the plane of that triangle. So
( u ∧v ) ( u ∧v ∧ p )

(u ∧ v ) 2 A |d| (u ∧ v ) ( u∧ v ∧ p )
( p−q ) ∙ = ( p−q ) ∙ = ( p−q ) ⋅
|( u ∧ v )⋆|
|
( u ∧v ) ( u ∧v ∧ p )
2 A |d| | |(u ∧ v ) ( u∧ v ∧ p )|

Now
( u ∧ v )( u ∧ v ∧ p )=¿
and |( u ∧ v )( u ∧ v ∧ p )|= √ [ ( u ∧ v )( u ∧ v ∧ p ) ] . [ ( u∧ v ) ( u∧ v ∧ p ) ] .
The result of the dot product is
2 2 2 2 2 3 2 2 2 4 2 2 4 2 2 2 4
2 u v ( u ⋅ v )( u ⋅ p ) ( v ⋅ p )+ v (u ⋅ v ) ( u ⋅ p ) −2 ( u ⋅v ) ( u⋅ p )( v ⋅ p ) +u ( u⋅ v ) ( v ⋅ p ) −u v ( p ⋅ v ) −v u ( p ⋅ u ) + p ( u ⋅ v )
Recall that for two vectors a , b , |a∧ b|=√ a2 b2−( a ⋅b )2. Now this expression for
2
|( u ∧ v )( u ∧ v ∧ p )| , can be rearranged as follows,
[ 2 u2 v 2 ( u ⋅ v )( u ⋅ p ) ( v ⋅ p )−u 4 v 2 ( p ⋅v )2−v 4 u2 ( p ⋅u )2 −u2 v 2 p2 ( u⋅ v )2 +u4 v 4 p2 ]−[ 2 (u ⋅ v )3 ( u ⋅ p ) ( v ⋅ p )−u2 ( u ⋅ v )2 ( v ⋅ p )2
Where A is the area of the triangle made of the three points u , v , p , as was shown above, and d is
the shortest vector from the origin to the plane of the triangle.
In Mathematica I did
$ Assumptions=(u∨v ∨p)∈Vectors [3 , Reals];
Then
TensorExpand [(((v . v )(u . p)−(v . p)(v . u))u+((u . u)(v . p)−(u . p)(v .u))v+(( u . v)(u . v )−(v . v )(u .u)) p). (((v . v
So finally,
|( u ∧ v )( u ∧ v ∧ p )|=2 A|u ∧ v||d|
Next
( p−q ) ⋅ [ ( u∧ v ) ( u∧ v ∧ p ) ] = ( p−q ) ⋅ { [ v 2 ( u⋅ p )−( v ⋅ p ) ( v ⋅u ) ] u+ [ u2 ( v ⋅ p ) −( u⋅ v ) ( u
So altogether
u×v ( u × v )⋆ (u ∧ v ) ( u∧ v ∧ p ) 2 ( γ −1 ) A|d|
( p−q ) ∙ =( p−q ) ∙ = ( p−q ) ⋅ =
|u × v| |( u ∧ v )⋆| |(u ∧ v ) ( u∧ v ∧ p )| |u∧ v|2
11.7.3 Relative lengths and Cross ratio.
Let p= p 0 ( e0 + p ) and q=q 0 ( e 0+ q ). Then the line determined by these two points is

p ∧q= p0 ( e 0+ p ) ∧ q 0 ( e0 +q )= p 0 q 0 ( e0 ∧q−e0 ∧ p+ p ∧q )= p0 q 0 [ e0 ∧ ( q− p ) + ( p ∧q )( q− p )−1 ( q− p ) ] =p 0 q0 e0 + ( p∧


q−
Now let d the shortest vector from the origin to the line. As was discussed on section 11.3.1 page
p ∧q
281, p ∧q= p ∧ ( q− p )=d ( q− p ) , so d= . Therefore,
q− p
p ∧q= p0 q0 ( e 0 +d ) ( q− p )= p0 q 0 d ( q− p )

Dorst has p−q, don’t know why. If we have a third point, r =r 0 ( e 0+ r ), then
q ∧ r=q 0 r 0 d ( r −q )
1
( q ∧ r )−1= ( r−q )−1 d−1
q0 r0
So
−1 1 −1 p −1 p −1
( q ∧ r ) ( p ∧ q )= ( r−q ) d−1 p 0 q 0 d ( q− p )= 0 ( r −q ) ( q− p ) = 0 ( q−r ) ( p−q )
q0r 0 r0 r0
−1 q−r α ( p−q )
Since p , q , r are collinear, α ( q−r )=( p−q ) , and ( q−r ) = = , and
( q−r ) ∙ ( q−r ) | p−q|2

p ∧ q p 0 p−q p0 α ( p−q ) p
= = ( p−q )=α 0
q ∧ r r 0 q−r r 0 | p−q|2
r0
which is a scalar called by Dorst affine distance ratio. If we take unit weight points, then
p ∧ q p−q
= . So the ratio of finite points represented by bivectors in the representation space,
q ∧ r q−r
reduces to a ratio of infinite points represented by vectors in the subspace orthogonal to e 0. Any
transformation which takes a line to a line, and takes a point to a point with the same weight, and
also preserves the ratio of distances, is called an affine transformations. Namely, let
f ( p ) =p 0 ( e0 + p ' ), f ( q )=q0 ( e 0+ q ' ) , f ( r )=r 0 ( e 0 +r ' ), then

f ( p ) ∧ f ( q ) p0 p '−q '
=
f ( q ) ∧f ( r ) r 0 q '−r '
p ' −q' p−q
f is called affine if = .
q ' −r ' q−r
Now let s=s 0 ( e0 + s ) be a fourth point on the line. Then
r ∧ s r 0 r−s
=
s ∧ p p0 s−p
Using q instead of a fourth point s, gives
r ∧q r 0 r −q
=
q ∧ p p 0 q−p
Then
p ∧ q r ∧ q p0 p−q r 0 r −q p−q r−q
= = =(−1 )(−1 )=1
q ∧ r q ∧ p r 0 q−r p0 q−p q−r q− p
So this is trivial. But
p ∧ q r ∧ s p 0 p−q r 0 r−s p−q r−s
= =
q ∧ r s ∧ p r 0 q−r p0 s−p q−r s−p
and this ratio is not trivial, and is not one unless s=q. This ratio is called projective cross ratio
and is independent of the weights of the four points. In Figure 11.8 it appears that | p−q|=|r −s|
but I don’t see the necessity for that.
Dorst claim that any linear transformation on the representation space preserves that ratio. Let’s
see if we can prove that. Since p , q , r , s are collinear, there exists scalars α , β , γ , δ and vectors
d , a such that
p= p 0 ( e0 +d + α a ) =p 0 ( d+ α a ) ; p=d+ α a
q=q 0 ( e 0+ d+ β a )=q 0 ( d+ β a ) ; q=d + β a
r =r 0 ( e 0+ d+ γ a )=r 0 ( d + γ a ) ;r =d +γ a
s=s 0 ( e0 + d+ δ a ) =s0 ( d +δ a ) ; s=d +δ a
p ∧q= p0 q0 ( β−α ) d ∧ a∧ p−q=( α −β ) a
q ∧ r=q 0 r 0 ( γ −β ) d ∧a∧q−r =( β−γ ) a
r ∧ s=r 0 s 0 ( δ−γ ) d ∧a∧r−s=( γ−δ ) a
s ∧ p=s 0 p0 ( α −δ ) d ∧ a∧s−p=( δ−α ) a
p ∧ q p 0 β−α p0 p−q
= =
q ∧ r r 0 γ −β r 0 q−r
r ∧ s r 0 δ−γ r 0 r−s
= =
s ∧ p p0 α −δ p 0 s− p
p ∧ q r ∧ s p 0 β −α r 0 δ−γ β−α δ−γ p−q r −s
= = =
q ∧ r s ∧ p r 0 γ −β p0 α −δ γ −β α−δ q−r s− p
Now let f be a linear transformation on the representation space. We shall also assume that f is
outermorphism, namely, f ( A ∧ B ) =f ( A ) ∧ f ( B ) .
f ( p ) =p 0 ( f ( e 0 ) + f ( d )+ αf ( a )) = p0 ( f ( d ) + αf ( a ) )

f ( q )=q0 ( f ( e0 ) + f ( d ) + βf ( a ) ) =q 0 ( f ( d )+ βf ( a ) )
f ( r )=r 0 ( f ( e 0) + f ( d ) +γf ( a ) ) =r 0 ( f ( d ) +γf ( a ) )

f ( s )=s 0 ( f ( e0 ) + f ( d ) +δf ( a ) )=s 0 ( f ( d )+ δf ( a ) )


f ( p ) ∧ f ( q )= p 0 q 0 ( β −α ) f ( d ) ∧ f ( a )∧f ( p )−f ( q )=( α−β ) f ( a )
f ( q ) ∧f ( r ) =q0 r 0 ( γ −β ) f ( d ) ∧f ( a )∧f ( q )−f ( r )=( β−γ ) f ( a )
f ( r ) ∧ f ( s )=r 0 s0 ( δ−γ ) f ( d ) ∧ f ( a )∧f ( r ) −f ( s )= ( γ −δ ) f ( a )
f ( s ) ∧ f ( p )=s0 p 0 ( α−δ ) f ( d ) ∧ f ( a )∧f ( s )−f ( p ) =( δ −α ) f ( a )
f ( p ) ∧ f ( q ) p0 β−α p 0 p−q
= =
f ( q ) ∧f ( r ) r 0 γ −β r 0 q−r
f ( r ) ∧ f ( s ) r 0 δ −γ r 0 r −s
= =
f ( s ) ∧f ( p ) p 0 α −δ p0 s− p
f ( p ) ∧ f ( q ) f ( r ) ∧ f ( s ) p 0 β −α r 0 δ−γ β−α δ −γ p−q r −s
= = =
f ( q ) ∧f ( r ) f ( s ) ∧ f ( p ) r 0 γ −β p0 α−δ γ−β α−δ q−r s− p
Now let us prove equation (11.11) page 300,
( p ∧ q ) ( r ∧s ) [ p 0 q 0 ( β−α ) d a ][ r 0 s 0 ( δ−γ ) d a ] ( β−α ) ( δ−γ ) d a d a ( β−α ) ( δ−γ ) p ∧ q r ∧ s
= = = =
( q ∧r ) ( s ∧ p ) [ q0 r 0 ( γ −β ) d a ][ s 0 p0 ( α −δ ) d a ] ( γ− β ) ( α −δ ) d a d a ( γ −β )( α −δ ) q ∧ r s ∧ p
( p−q )( r −s ) ( α −β ) a ( γ−δ ) a ( α −β ) ( γ −δ ) a2 ( α −β ) ( γ −δ ) p−q r−s
= = = =
( q−r ) ( s− p ) ( β−γ ) a ( δ−α ) a ( β−γ ) ( δ −α ) a2 ( β−γ ) ( δ −α ) q−r s− p
I don’t understand Dorst claim that the commutation properties of the elements give equation
(11.11). He seems to say that (11.11) follows from p ∧q=−q ∧ p and p−q=−( q−p ), but I
don’t see how is that relevant here.
Now suppose we have four lines in a plane, all intersecting at one point x as in Figure 11.9 page
301. Let p , q , r , s be points on the four lines, which are represented by the bivectors x ∧ p , x ∧ q,

x ∧ r , x ∧ s . The area of the triangle whose vertices are x , p , q is |12 ( p−x ) ∧ ( q−x )|. Now
x ∧ p ∧ q is a trivector representing the plane of the four lines, and so are x ∧ q ∧r , x ∧ r ∧ s . All
those trivectors are scalar multiplies of each other. Now the parallelepiped whose sides are the
representation-space vectors x , p , q , encloses the triangle whose vertices are the base-space
points x , p , q . We’ve shown above that the volume (Dorst calls it weight which I find
misleading) |x ∧ p∧ q|=|( p−x ) ∧ ( q−x )|√ 1+ d 2 where |d| is the shortest distance from e 0 to the
plane of the three triangles. So
( p−x ) ∧ ( q−x )
∧x ∧ p ∧q
x ∧ p ∧q ( q−x ) ∧ ( r −x ) ( p−x ) ∧ ( q−x )
= =
x ∧ q ∧r x ∧r ∧ s ( r−x ) ∧ ( s−x )
and so on. In words, the ratio of volumes equals the ratio of areas.
'
With reference to the figure, let p' ,q ' , r ' , s ' be collinear points on a line L. Let p −x= p0 ( p−x ) ,
and similarly for q ' , r ' , s ' . Then
1
x ∧ p ∧ q= x ∧ p' ∧q '
p0 q 0
1 ' '
x ∧ q ∧r = x ∧q ∧r
q0 r 0
1
x ∧ r ∧ s= x ∧ r ' ∧s '
r0 s0
1
x ∧ r ∧ s= x∧s' ∧ p'
s0 p0
' '
q−p
Let l be the point on L closest to the point x . Then L=l ∧ a^ =l a^ where a^ = ' is a unit vector
|q − p '|
along L. So p' =l+α a^ , q ' =l+ β a^ , r ' =l+ γ a^ , s' =l+δ a^ . Now
p' ∧ q' =( d+ α a^ ) ∧ ( d + β a^ ) =( β−α ) d a^ , so on. Therefore
' ' ' ' ' ' ' '
p ∧ q =( β−α ) d a^ ∧q ∧ r =( γ−β ) d a^ ∧r ∧ s = ( δ−γ ) d a^ ∧s ∧ p = ( α −δ ) d a^
So
x ∧ p ∧q q 0 r 0 x ∧ p' ∧ q' r 0 ( β−α ) x ∧d a^ r 0 ( β−α ) r 0 p' ∧ q'
= = = =
x ∧ q ∧r p 0 q 0 x ∧ q' ∧ r ' p 0 ( γ−β ) x ∧d a^ p 0 ( γ −β ) p0 q ' ∧ r '
x ∧ r ∧ s s 0 p 0 x ∧ r ∧ s p0 ( δ−γ ) x ∧d a^ p0 ( δ−γ ) p0 r ∧ s
' ' ' '
= = = =
x ∧ s ∧ p r 0 s0 x ∧ s' ∧ p ' r 0 ( α −δ ) x ∧d a^ r 0 ( α−δ ) r 0 s' ∧ p'
x ∧ p ∧q x ∧ r ∧ s ( β−α ) ( δ−γ ) p' ∧ q' r ' ∧s '
= =
x ∧ q ∧r x ∧ s ∧ p ( γ −β ) ( α −δ ) q' ∧ r ' s ' ∧ p'
The RHS of the last line is invariant under linear transformations on the plane, and therefore so is
the LHS. The LHS shows ratios of volumes which we’ve shown to be equal to ratios of areas of
triangles. So we used the cross ratio of four points on a line, to derive a cross ratio of four areas
of triangles.
11.8 Linear Transformations
We need to get some results from chapter 4. In section 4.2.1 page 102 Dorst extends linear
transformation to define outermorphism, where the linear transformation acts on a blade and not
just on a vector, f [ a1 ∧ ⋯ ∧a k ]=f [ a1 ] ∧ ⋯ ∧ f [ ak ] , where a 1 , ⋯ , a k are vectors (Dorst uses square
brackets around the argument). If k =0, so the argument is a scalar, a zero blade, then f ( α )=α , so
on scalars the extended linear transformations is the identity function. This is just like NFCM
page 255-256. In Section 4.2.3 page 106, Dorst define the determinant of the linear
transformation f on an n -dimensional linear space with a unit pseudovector I n, by
f [ I n] f [ I n] f [ I n ] f [ P]
−1

det ( f ) =f [ I n ] I ≡
−1
n , same as in NFCM page 255. Note that = −1 = where P is a
In In In P

pseudoscalar, since P=α I n. Let g be another linear transformation on the same space, so
( g ∘ f ) [ I n ] g [ f [ I n ] ] g ( det [ f ] I n ) g ( I n) In
det ( g ∘ f ) = = = =det [ f ] =det [ f ] det [ g ] =det [ f ] det [ g ]
In In In In In
In section 4.3.1 page 108 Dorst asks a confusing question. As the outermorphism extension of a
linear transformation is an identity function on scalars, then if A , B are blades of the same rank,
so ¿ is a scalar, then f [ A∗B ] = A∗B . In particular, when a , b are unit vectors, then
f [ a∗b ] =f [ a ∙ b ] =a∙ b=|a||b|cosθ . Now in general a ∙ b=f [ a ∙ b ] ≠ f [ a ] ∙ f [ b ] . Indeed, orthogonal
transformations are defined to satisfy a ∙ b=f [ a ∙ b ] =f [ a ] ∙ f [ b ]. In such a case,
a ∙ a=f [ a ∙ a ] =f [ a ] ∙ f [ a ], so |a|2=f [|a|2 ]=|f [ a ]| , so |f [ a ]|=±|a| , so the angle φ between f [ a ] ∧f [ b ]
2

f [ a] ∙ f [b ] f [ a ∙b ] |a||b|cos θ
is θ or π−θ , since cos φ= = = =±cos θ . But if for some linear
|f [ a ]||f [ b ]| ±|a|±|b| ±|a||b|
transformation f [ a ∙ b ] ≠ f [ a ] ∙ f [ b ], then even if |f [ a ]|=|a|, it doesn’t mean that φ=θ. So a linear
transformation in general can change the norm of a vector and change the angle between two
vectors.
In section 4.3.2 page 108, Dorst define the adjoint f of a linear transformation f . Given two
vectors, a , b , then f [ a ] ∙ b=f [ b ] ∙ a , same as NFCM page 254. Generalizing, we can define the
adjoint on blades using outermorphism, so let A=a1 ∧⋯ ∧ ak , B=b1 ∧⋯ ∧ bk are blades of equal
grade, then

| ||
f [ a 1 ] ∙ bk ⋯ f [ a 1 ] ∙ b1 a1 ∙ f [ bk ] ⋯ a1 ∙
f [ A ]∗B=f [ a1 ∧ ⋯ ∧a k ]∗( b 1 ∧ ⋯ ∧b k )=( f [ a1 ] ∧⋯ ∧ f [ a k ])∗( b1 ∧ ⋯ ∧b k )= ⋮ ⋱ ⋮ = ⋮ ⋱
f [ a k ] ∙ bk ⋯ f [ a k ] ∙ b1 a k ∙ f [ b k ] ⋯ a k ∙
Recall, that in terms of orthonormal base vectors e i, if f [ ei ] =α ji e j, for some n2 scalars α ji , then
f [ ei ] =α ij e j, so the coefficients are transposed. From that we can prove the implicit definition.

Indeed let a=ρi ei and b=τ i e i, then


f [ a ] ∙ b=f [ ρi ei ] ∙ ( τ j e j ) =ρi τ j f [ ei ] ∙e j =ρi τ j α ik e k ∙ e j= ρi τ j α ij
f [ b ] ∙ a=f [ τ j e j ] ∙ ( ρi ei ) =ρi τ j f [ e j ] ∙ ei =ρi τ j α kj e k ∙ ei =ρi τ j α ij
so from the coefficients being transposed we got the implicit definition. The other way, starting
with the implicit definition, we have
f [ ei ] ∙e j=f [ e j ] ∙ ei

which means that the j th coefficient of f [ ei ] , is the i th coefficient of f [ e j ]. Now when the n2
coefficients of f are arranged in an n × n square matrix ⟦ f ⟧ , the coefficients of f [ e j ] are put in the

j th column, so ⟦ f ⟧ij , the element in the i th row and j th column, is f [ e j ] ∙ e i. Similarly, ⟦ f ⟧ij contains
the i th coefficient of f [ e j ], namely f [ e j ] ∙ e i, which is the j th coefficient of f [ ei ] , namely f [ ei ] ∙e j ,
which is ⟦ f ⟧ ji. We can also express it as follows. a=( a ∙ e i ) ei , and
f [ a ] =( f [ a ] ∙ ei ) e i=( a ∙e i ) f [ e i ] =( a∙ e i ) ⟦ f ⟧ ji e j

f [ a ] =( f [ a ] ∙ e j ) e j=( a ∙ f [ e j ] ) e j= ( a ∙ ⟦ f ⟧ij e i ) e j=( a ∙e i ) ⟦ f ⟧ij e j

On page 109, Dorst claims that f −1=f −1. I don’t see that in NFCM. To prove it we need to show
that f −1 ∘ f is the identity transformation. Let’s do it first using basis. Let ⟦ f −1⟧ ij be the matrix of

f , so the matrix of f is its transpose, ⟦ f ⟧ ij =⟦ f ⟧ ji. Also ⟦ f ⟧ij = ⟦ f ⟧ ji . We have ⟦ f ⟧ ik ⟦ f ⟧ kj=δ ij.
−1 −1 −1 −1 −1

So
δ ij =⟦ f ⟧ik ⟦ f −1 ⟧kj =⟦ f −1⟧ kj ⟦ f ⟧ik =⟦ f −1 ⟧ jk ⟦ f ⟧ ki
So we’ve shown that ⟦ f −1⟧ is the inverse matrix of ⟦ f ⟧ , so ⟦ f −1⟧ = ⟦ f −1 ⟧. So the transpose of the
inverse is the inverse of the transpose. This is true for matrices in general. To prove it using the
implicit definition,
δ ij =e i ∙ e j=ei ∙ ( f ∘ f −1 ) [ e j ]=e i ∙ f [ f −1 [ e j ] ] =f [ e i ] ∙ f −1 [ e j ]=f −1 [ f [ e i ] ] ∙ e j=( f −1 ∘ f ) [ e i ] ⋅e j

So it must be that f −1 ∘ f is the identity function on all base vectors, so it is the identity function
in general.
In section 4.3.3 page 109-110 equation (4.12), Dorst proves that f ¿ , where A , B are blades. In
the proof Dorst uses a result in his paper “The inner products of Geometric algebra” section
2.2.3, that given multivectors A , B such that for all multivectors X , X∗A=X∗B , then A=B , see
my notes there.
Suppose the grades of A , B are k , l respectively. Then the grade of f [ B ] is l , and the grade of ¿ is
l−k , and so is the grade of X . Then
X∗¿
Now change A in the formula to f −1 [ C ]. Then
f¿
f¿
This is formula (4.13) and it gives the result of a linear transformation acting on a contraction of
blades. If B is a pseudoscalar, then f [ C B ] =f −1 [ C ] f [ B ].
In section 4.3.4 page 110, Dorst introduces orthogonal transformations, a ∙ b=f [ a ] ∙ f [ b ]. In such a
case, f =f −1. To see that
a ∙ b=f [ a ] ∙ f [ b ] =f [ f [ a ] ] ∙ b=( f ∘ f ) [ a ] ∙ b
In particular,
δ ij =e i ∙ e j=( f ∘ f ) [ ei ] ⋅ e j ≡ g [ ei ] ⋅ e j=( ei ∙ e k ) ⟦ g ⟧lk e l ⋅e j=⟦ g ⟧lk δ ik δ lj =⟦ g ⟧ ji

So ⟦ g ⟧ ji is the identity matrix, and g ≡ f ∘ f is the identity linear transformation. Similarly,


f = f́ =f . So in such a case, the formula f ¿ simplifies to f ¿ .
−1

In section 4.3.5 Dorst deals with the transformation of a dual representation. If X is a k -blade,
then X ¿ =¿ is the dual of X , and its grade is n−k . Given a linear transformation f , define the dual
¿
f by f ¿ [ X ¿ ] ≡ ( f [ X ] ) . Let D= X , so D I n =X . Then,
¿ ¿

¿
f [ X ] =( f [ X ] ) =f [ X ] I n =f [ D I n ] I n =f [ D ] f [ I n ] I n =det ( f ) f [ D ]
¿ ¿ −1 −1 −1 −1 −1

So altogether we have f ¿ [ X ¿ ] =det ( f ) f −1 [ X ¿ ], or f ¿=det ( f ) f −1. Note that the grades of


f [ X ] ∧f [ X ] are n−k , but in general f [ X ] ≠ f [ X ].
¿ ¿ ¿ ¿ ¿ ¿

However, when f is an orthogonal transformation, then f −1=f , and det ( f ) =±1, so


f ¿ [ X ¿ ] =det ( f ) f −1 [ X ¿ ] =± f [ X ¿ ]. If f is a rotation then f [ X ] =f [ X ] and if a reflection then
¿ ¿ ¿

f ¿ [ X ¿ ] =−f [ X ¿ ].
In section 4.4 page 113, Dorst derives the inverse of an outermorphism, which is NFCM page
256 equation (1.22). We have two results, f ¿ and f [ P ] =det ( f ) P , where P is a pseudoscalar. So
f [ A ¿ ] =f ¿
Next show that det ( f ) =det ( f ). We have,
det ( f ) =det ( f ) I n I n =f [ I n ] I n =I n f [ I n ] =det ( f )
−1 −1 −1

So
f [ A ] =det ( f ) f [ A ] I n
¿ −1 −1

f [ A In ]
−1
−1
f [ A ] I −1
n =
det ( f )

−1 f [ A I −1
n ] In f [ A I n ] I −1
n
f
−1
[ A ] =f [ A ] I I = −1
n n =
det ( f ) det ( f )
The last term is Hestenes formula. Use equation (4.13) page 110 to check,
f [ f [ A ] In ] In [ f [ A ] ] f [ I −1
n ] In
−1 −1 −1
f A det ( f ) I n I n
( f −1 ∘ f ) [ A ] =f −1 [ f [ A ] ] = = = =A
det ( f ) det ( f ) det ( f )

[ f [ A In ]In
]
−1
1 1 1
( f ∘ f ) [ A ] =f [ f [ A ] ]=f
−1 −1
det ( f )
=
det ( f )
[
f f [ A In ] In =
−1
det ( f )
−1
]
f f [ A ] f [ In ] I n =
−1
[
det ( f )
−1
]
f [ f [ A ] det ( f ) I n
−1

f [ In In ]I
−1 −1
In In
Example 1. Let A=I n . Then f −1
[ I n ]= n=
. Similarly, f −1 [ I −1
n ]= .
det ( f ) det ( f ) det ( f )

f [In ] I =α det ( f ) I −1
−1
Example 2. Let A=α . Then f −1 [ α ] =α n
n
I =α . n
det ( f ) det ( f )

f [a I n ] I n
−1
−1
Example 3. Let A=a, a vector. Then f [a ]= . No simplification.
det ( f )
Now back to homogeneous representation space. Recall from section 4.3.5 that given a blade X
¿
and a an outermorphism f that sends blades to blades of the same grade, then f is another
outermorphism defined by f ¿ [ X I −1
n+1 ]=f [ X ] I n+1 . But f [ X ] I n+1=f [ X I n +1 I n+1 ] I n+1. Now using
−1 −1 −1 −1

equation (4.13) page 110, f¿ where A,B are blades in G n+ 1, we have


f [ X I n +1 I n+1 ] I n +1=f [ X I −1
n +1 ] f [ I n+ 1 ] I n+1=det ( f ) f [ X I −1
n +1 ],
−1 −1 −1 −1 −1
or f ¿=det ( f ) f −1. This is equation
(11.12) page 302, and also equation (4.14) page 111.
In section 4.3.4 page 110, Dorst introduces orthogonal transformations, a ∙ b=f [ a ] ∙ f [ b ]. In such a
case, f =f −1. To see that
a ∙ b=f [ a ] ∙ f [ b ] =f [ f [ a ] ] ∙ b=( f ∘ f ) [ a ] ∙ b
Similarly, f −1= f́ =f . So the formula f ¿ simplifies to f ¿ . And the formula for f ¿ simplifies to
¿
f =det ( f ) f . Further,
| || |
f [ e0 ] ∙ f [ e n ] ⋯ f [ e 0 ] ∙ f [ e0 ] e 0 ∙ en ⋯ e 0 ∙ e0
f [ I n+1 ] f [ I n+1 ] = ⋮ ⋱ ⋮ = ⋮ ⋱ ⋮ =I n +1 I n+1
f [ en ] ∙ f [ e n ] ⋯ f [ e n ] ∙ f [ e0 ] e n ∙ en ⋯ e n ∙ e0

So det 2 ( f ) I n +1 I n+1=I n+ 1 I n+1. So det 2 ( f )=1 , or det ( f ) =±1. So finally f ¿=± f .


There is a general observation concerning transformations of points. A point is represented by a
x
vector. A proper point has the form x=α e0 + x , with weight α and location . Given another
α
y
point y=β e 0 + y , with weight β and location , their sum is x + y=( α + β ) e0 + x + y whose weight
β
x+ y
is α + β and whose location is . Now let f be a linear transformation on Rn +1. By the very
α+β
definition of a linear transformation, f [ ( α + β ) e0 + x + y ]=f [ α e 0 + x ] +f [ β e0 + y ]. So the
transformation of a sum of points is the sum of the transformed points.
11.8.2 Translations
A proper k -flat X in the base space, is represented by a k +1 blade in the representational space,
X = p∧ p 1 ∧ ⋯ ∧ pk
So X contains the proper points p , p 1 , ⋯ , pk . Assume they are unit weight points. A translation of
the flat is defined to be an addition of an improper point t to each proper point of the flat. So if x
is a proper point on the flat, x ∧ X=0, then x +t is a point on the translated flat,
( x +t ) ∧ ( p+t ) ∧ ( p1+ t ) ∧ ⋯ ∧ ( pk +t )=0. It seems to make no sense to translate an improper point
by another improper point t . But given a proper point, x=e 0 + x , it does make sense to translate it
to the proper point x +t=e 0+ x+t . Now
X = p∧ p 1 ∧ ⋯ ∧ pk =p ∧ ( p 1− p ) ∧⋯ ∧ ( p k − p ) = p∧ ( p1− p ) ∧ ⋯ ∧ ( pk − p )= p ∧ A=( e 0 + p ) ∧ A
Where A is a k -blade of improper points, namely the vectors from the n+1-dimensional
representation space used to form it are all from the n -dimensional subspace orthogonal to e 0.
Next express the translated X
( p+t ) ∧ ( p 1+ t ) ∧ ⋯ ∧ ( pk +t )=( p+t ) ∧ ( p1 +t−p−t ) ∧⋯ ∧ ( p k +t− p−t )= ( p+ t ) ∧ ( p1− p ) ∧ ⋯ ∧ ( pk −p )=( p+t ) ∧ A= (
So it’s almost the same as X .
Denote the mapping which is a translation by t as ~t . On vectors it is defined to be
~t α e + x =α e + x+ α t=α e +t + x
[ 0 ] 0 ( 0 )
and it extends to blades as outermorphism. It is the identity map on improper points α =0 , and on
x
proper points, α ≠ 0 , it adds t to the location of the point without affecting its weight. It can
α
also be interpreted as changing the point of origin from e 0 to e 0 +t . Why is this a linear
transformation? Let x=α e0 + x and y=β e 0 + y . Then
~t [ x ] + ~t [ y ] =α e + x +α t + β e + y + β t= ( α + β ) e + x + y + ( α + β ) t . On the other hand,
0 0 0

x + y=α e 0+ x + β e 0+ y =( α+ β ) e 0 + x+ y and ~t [ x + y ] =~t [ ( α + β ) e0 + x + y ]= ( α + β ) e 0+ x + y + ( α + β ) t .


On the flat X = p∧ A , ~t [ X ] =~t [ p ∧ A ] =~t [ p ] ∧ ~t [ A ] =( p+t ) ∧ A= p ∧ A+ t ∧ A=X +t ∧ A . See my
notes on section 11.5.3 page 286 that A=¿. So finally we get equation (11.13)
~t [ X ] =X +t ∧ ¿

This is the action of a translation on a blade X and it is in the fifth row second column of table
11.3 page 291.
Now let's compute the (n+1)×(n+1)-dimensional matrix of ~t, ⟦~t⟧ij = ei ∙ ~t[ej], where 0 ≤ i, j ≤ n. Let t = Σ tk ek, 1 ≤ k ≤ n. First take j = 0: ~t[e0] = e0 + t = e0 + Σ tk ek, so the zeroth column of the matrix is (1, t1, ⋯, tn)ᵀ. Take j = 1: ~t[e1] = e1, and so on. So the matrix is

⟦~t⟧ = [ 1   0  ⋯  0
         t1  1  ⋯  0
         ⋮   ⋮  ⋱  ⋮
         tn  0  ⋯  1 ]

where ⟦~t⟧ij = δij for 1 ≤ i, j ≤ n. The determinant of ⟦~t⟧ is obviously one. It is easily verified that

⟦~t⟧ (x0, x1, ⋯, xn)ᵀ = (x0, x0 t1 + x1, ⋯, x0 tn + xn)ᵀ = x0 (1, t1, ⋯, tn)ᵀ + (0, x1, ⋯, xn)ᵀ
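A quick numpy check of this matrix, its action on a point, and its determinant (my own sketch):

    import numpy as np

    n = 3
    t = np.array([0.5, -1.0, 2.0])  # the improper translation vector (e1..e3 part)

    # Matrix of ~t in the basis (e0, e1, ..., en): zeroth column (1, t), rest identity.
    T = np.eye(n + 1)
    T[1:, 0] = t

    x = np.array([2.0, 1.0, 0.0, 4.0])  # alpha = 2
    print(T @ x)             # [2. 2. -2. 8.] = alpha e0 + x + alpha t
    print(np.linalg.det(T))  # 1.0, as claimed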

Dorst writes that taking X to ~t[X] is a linear transformation of X. I don't know what a linear transformation of a blade is; perhaps he only means that ~t is a linear transformation. Now

~t[I_{n+1}] = ~t[e0∧e1∧⋯∧en] = ~t[e0]∧~t[e1]∧⋯∧~t[en] = (e0+t)∧e1∧⋯∧en = I_{n+1} + t∧e1∧⋯∧en = I_{n+1},

since t is a combination of e1, ⋯, en. That also proves that det(~t) = 1.


Now suppose we have a k-flat X through the origin, so X = e0∧A = e0A, where A is an improper k-blade. To translate the flat to be a flat through a point e0 + p = p, simply let ~p act on X. So

Y = ~p[X] = ~p[e0A] = e0A + p∧A = e0∧A + p∧A = p∧A.
Check that the formula ~t[X] = X + t∧(e0⁻¹⌋X) applies when X is a zero flat, namely a vector in the representation space. Indeed, if X = x = αe0 + x, then e0⁻¹⌋x = α, so ~t[x] = x + t∧α = x + αt.
0

Check that the formula ~t[X] = X + t∧(e0⁻¹⌋X) applies when X is an improper flat, namely a blade made entirely from vectors orthogonal to e0, X = x1∧⋯∧x_{k+1}. Then e0⁻¹⌋X = 0, so ~t[X] = X + t∧0 = X.

Next find the action of ~t on dual representations. By definition, given a blade X,

~t*[X*] = ~t[X] I_{n+1}⁻¹ = X I_{n+1}⁻¹ + (t∧(e0⁻¹⌋X)) I_{n+1}⁻¹ = X* + t⌋(e0⁻¹∧X*) = X* − e0⁻¹∧(t⌋X*)

So we got equation (11.14). In the derivation, I used equation (3.10) page 73, equation (3.21) page 79, and t ∙ e0⁻¹ = 0. This result is in the fifth row third column of table 11.3 page 291.

Using equation (4.14) page 111 and equation (11.12) page 302, f* = det(f) f̄⁻¹ (with f̄ the adjoint), we have in our case ~t* = ~t̄⁻¹, since det(~t) = 1.
Note that a translation does not preserve the orthogonality of the point of origin to the improper points. Namely, even though e0 ∙ x = 0, (e0 + t) ∙ x = t ∙ x ≠ 0 in general.
11.8.3 Rotation around the origin
I shall first explain reflection, which is not Dorst's topic here.
Go back to Dorst section 6.4.3 page 158. Let x, a be vectors. The projection of x along a is given by

x∥ = (x ∙ â)â = (x ∙ a) a/a² = (x ∙ a) a⁻¹

Now

(x∧a) a⁻¹ = (x∧a) ∙ a⁻¹ + (x∧a)∧a⁻¹ = (a⁻¹ ∙ a)x − (a⁻¹ ∙ x)a = x − (x ∙ a⁻¹)a

Also

(x ∙ a)a⁻¹ ∙ [(x∧a)a⁻¹] = (x ∙ a)a⁻¹ ∙ [x − (x ∙ a)a⁻¹] = (x ∙ a)(a⁻¹ ∙ x) − (x ∙ a)²(a⁻¹ ∙ a⁻¹) = (x ∙ a)²/a² − (x ∙ a)² a²/a⁴ = 0

It comes out that (x ∙ a)a⁻¹ is x∥, the projection of x along the direction of a, while (x∧a)a⁻¹ is x⊥, the rejection of x from the direction of a. The projection and rejection are both in the plane of x∧a, and are orthogonal to each other. This is neatly summarized as

x = (xa)a⁻¹ = (x ∙ a)a⁻¹ + (x∧a)a⁻¹ = x∥ + x⊥.
I find Figure 6.3 of Dorst page 156 very confusing. It gives an illusion of 3D, but one must resist that illusion and take all arrows to be on the page of the book, a truly 2D figure.
Dorst discusses reflection in a line in pages 158 and 168. The figure here shows Dorst's definition of reflection in a line. We say that the vector x is reflected at the line a in the plane of x∧a, and the reflected x is denoted as a x a⁻¹. To see why: clearly the reflected x is the vector sum (x ∙ a)a⁻¹ − (x∧a)a⁻¹ = x∥ − x⊥. And a x a⁻¹ = (a ∙ x)a⁻¹ + (a∧x)a⁻¹ = (x ∙ a)a⁻¹ − (x∧a)a⁻¹. Note that a x a⁻¹ = (αa) x (αa)⁻¹ for any scalar α, so a x a⁻¹ = x∥ x x∥⁻¹.
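A short numpy sanity check of the projection, the rejection, and the reflection a x a⁻¹ = x∥ − x⊥ = 2(x ∙ â)â − x (my own sketch, using coordinates rather than geometric products):

    import numpy as np

    rng = np.random.default_rng(1)
    x, a = rng.normal(size=3), rng.normal(size=3)

    a_hat  = a / np.linalg.norm(a)
    x_par  = (x @ a_hat) * a_hat   # (x . a) a^{-1}, the projection along a
    x_perp = x - x_par             # (x ^ a) a^{-1}, the rejection from a

    print(np.isclose(x_par @ x_perp, 0.0))   # True: they are orthogonal
    print(np.allclose(x, x_par + x_perp))    # True: they decompose x
    # Reflection in the line a:
    print(np.allclose(x_par - x_perp, 2 * (x @ a_hat) * a_hat - x))  # True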
Now suppose we want to reflect x at the line x⊥, which Dorst discusses in page 168. So the reflection is x⊥ x x⊥⁻¹. From the geometry it is evident that x⊥ x x⊥⁻¹ = −x∥ x x∥⁻¹ = x⊥ − x∥. Let's check it algebraically. The derivation uses x = x∥ + x⊥ and x² = x∥² + x⊥², because working directly with x⊥, with its bivector part, is hard.

x⊥ x x⊥⁻¹ = (x − x∥) x (x − x∥)/x⊥² = (1/x⊥²)(x² − x∥x)(x − x∥) = (1/x⊥²)(x²x − 2x²x∥ + x∥ x x∥) = (1/x⊥²)[x²(x − x∥ − x∥) + x∥ x x∥] = (1/x⊥²)[x²(x⊥ − x∥) + x∥²(x∥ − x⊥)] = (1/x⊥²)(x² − x∥²)(x⊥ − x∥) = x⊥ − x∥

where x∥ x x∥ = x∥(x∥ + x⊥)x∥ = x∥²x∥ − x∥²x⊥ = x∥²(x∥ − x⊥).

So a x a⁻¹ is the reflection of x in a, or equivalently in x∥, while −a x a⁻¹ is the reflection of x in x⊥. Hestenes in NFCM page 279 defines reflection to be the second one.
Now go to Dorst section 6.4.2 pages 156-7. Suppose instead of a vector a we have a k-blade A. It still makes sense to talk about the projection of x in the subspace A and the rejection of x from A:

x = xA A⁻¹ = (x ∙ A)A⁻¹ + (x∧A)A⁻¹

Both the projection and the rejection are in the (k+1)-blade x∧A, the rejection being orthogonal to A.
Now go to Dorst section 7.1 page 168 equation (7.1). Generalize the case of reflection of a vector x in a line a to a blade X = x1∧⋯∧xk. We start with a linear transformation ~a defined by ~a[x] = a x a⁻¹, and extend it as an outermorphism to blades,

~a[X] = ~a[x1]∧⋯∧~a[xk] = (a x1 a⁻¹)∧⋯∧(a xk a⁻¹),

so the reflection of the blade in a is achieved by wedging the reflection of each of the vectors of the blade in a. We want to prove that ~a[X] = a X a⁻¹, by induction on k. If k = 1 it's true. Assume it's true for k − 1, where k > 1. So X = x1∧⋯∧x_{k−1}∧xk = X_{k−1}∧xk. By the induction hypothesis, (a x1 a⁻¹)∧⋯∧(a x_{k−1} a⁻¹) = a(x1∧⋯∧x_{k−1})a⁻¹. So

(a x1 a⁻¹)∧⋯∧(a xk a⁻¹) = (a(x1∧⋯∧x_{k−1})a⁻¹)∧(a xk a⁻¹)

Now use the formula Ar∧a = (Ar a + (−1)^r a Ar)/2, with Ar = a(x1∧⋯∧x_{k−1})a⁻¹ of grade r = k − 1 and with a xk a⁻¹ in place of a:

(a X_{k−1} a⁻¹)∧(a xk a⁻¹) = [a X_{k−1} a⁻¹ a xk a⁻¹ + (−1)^{k−1} a xk a⁻¹ a X_{k−1} a⁻¹]/2 = a [X_{k−1} xk + (−1)^{k−1} xk X_{k−1}]/2 a⁻¹ = a (X_{k−1}∧xk) a⁻¹ = a X a⁻¹
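A numerical spot check of ~a[X] = a X a⁻¹ on a 2-blade, using the Python clifford package (my assumption that it is installed; any GA library would do):

    import numpy as np
    import clifford

    layout, blades = clifford.Cl(3)   # Euclidean R^3
    e1, e2, e3 = blades['e1'], blades['e2'], blades['e3']

    a  = 2*e1 + e2 - e3
    x1 = e1 + 3*e3
    x2 = e2 - e3
    ainv = a.inv()

    lhs = (a*x1*ainv) ^ (a*x2*ainv)   # wedge of the reflected vectors
    rhs = a * (x1 ^ x2) * ainv        # the reflected wedge
    print(np.allclose(lhs.value, rhs.value))  # True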
Dorst here explains the notion of a reflection in a plane, which is what I called above a reflection in x⊥. So suppose we have a vector x and a plane A, which is a bivector with a normal a. We are not using the homogeneous model here, where a plane is represented by a trivector. The reflection of x in A can be defined as the reflection of x in the projection of x on the plane, or as the reflection of x in the rejection of x from a, the normal to the plane. Now the projection of x on the plane is (x ∙ A)A⁻¹, while the rejection of x from a is (x∧a)a⁻¹. So the two must be equal. Let A = b∧c and assume b ∙ c = 0, so A = bc. Now a = b×c = (b∧c)I3⁻¹, where I3 is the unit pseudoscalar of the space x∧A. Now

(x ∙ A)A⁻¹ = [x ∙ (b∧c)] (c∧b)/(b²c²) = (1/(b²c²))[(x ∙ b)c − (x ∙ c)b](c∧b) = (1/(b²c²))[(x ∙ b) c ∙ (c∧b) − (x ∙ c) b ∙ (c∧b)] = (1/(b²c²))[(x ∙ b)c²b + (x ∙ c)b²c] = (x ∙ b)b/b² + (x ∙ c)c/c²

So we see that (x ∙ A)A⁻¹ is the projection of x on A, as Hestenes claims in NFCM page 65 equation (4.5b). On the other hand, we've seen above that (x∧a)a⁻¹ can also be interpreted as the projection of x on the plane A to which a is normal. Now

x = xA A⁻¹ = (x ∙ A)A⁻¹ + (x∧A)A⁻¹ = xa a⁻¹ = (x ∙ a)a⁻¹ + (x∧a)a⁻¹,

so it must be that (x ∙ A)A⁻¹ = (x∧a)a⁻¹ and (x∧A)A⁻¹ = (x ∙ a)a⁻¹.
All that is not really relevant here. What is relevant is that the reflection of x in A = b∧c is the negative of the reflection of x in the normal to the plane, b×c. Given a hyperplane, its dual is a vector, which can be thought of as the normal to the hyperplane. In our case the hyperplane is A = b∧c and the dual or normal is b×c = A*. So −A* x A*⁻¹ is the reflection of x in A and is the negative of the reflection of x in the dual A*. The language of Dorst, "reflection of x in dual hyperplane a: x ↦ −a x a⁻¹", is problematic, and it should be "reflection of x in a hyperplane A whose dual is a: x ↦ −a x a⁻¹". But the reflection of x in a itself is a x a⁻¹.
Next reflect a k-blade X = x1∧⋯∧xk in A, using the outermorphism. Dorst in equation (7.2) claims it is a X̂ a⁻¹, where X̂ is the grade involution of X. Use induction. When k = 1, X = x1, and the reflection is −a x1 a⁻¹ = a x̂1 a⁻¹. Assume it's true for k − 1; prove it for k:

(−a x1 a⁻¹)∧⋯∧(−a x_{k−1} a⁻¹)∧(−a xk a⁻¹) = (−1)^{k−1} (a(x1∧⋯∧x_{k−1})a⁻¹)∧(−a xk a⁻¹) = (−1)^k a(x1∧⋯∧xk)a⁻¹ = a X̂ a⁻¹

using the previous result that the wedge of the reflected vectors is the reflection of the wedge.
After I wrote all the above, I realized that A must be a hyperplane for a to be a vector. So X is a
subspace of the space of a ∧ A .
Now we'll start the rotation discussion. Recall that we start with a Euclidean vector space R^{n+1}. A k-flat is built from a (k+1)-dimensional subspace of R^{n+1} spanned by k+1 linearly independent vectors x0, x1, ⋯, xk, which are interpreted as points. The flat itself is the wedge product of those k+1 points, represented by a (k+1)-blade and all its scalar multiples, which give different weights to the flat. Since we don't add k-flats unless k = 0, the weight of a k-flat is not important. As a collection of points, a point x is in the flat iff it is in the subspace, namely x∧x0∧x1∧⋯∧xk = 0. Now we can replace each of the xi vectors by a scalar multiple of that vector, which changes the weight of the point but not its location, without changing the flat. So we can take all the vectors to have the same weight, for simplicity one. Then x0∧x1∧⋯∧xk = x0∧(x1 − x0)∧⋯∧(xk − x0), and the xi − x0 are all improper points because all the points have the same weight. So (x1 − x0)∧⋯∧(xk − x0) is an improper (k−1)-flat built from the k linearly independent vectors (x1 − x0), ⋯, (xk − x0), which span a k-dimensional subspace of the n-dimensional subspace R^n = R^{n+1} − {e0}, where {e0} is the one-dimensional subspace spanned by e0. So we can express the representation of the k-flat as the (k+1)-blade X = x0∧A = (e0 + x0)∧A.
We learnt in equation (7.3) page 171 how to rotate a blade, given a rotor R = e^{−Iθ/2}, where the angle of rotation is θ and the axis of rotation is the normal to the plane whose unit pseudoscalar is the unit 2-blade I. To rotate the flat, the formula is

R X R⁻¹ = R(x0∧A)R⁻¹

Dorst specifies that the rotation axis goes through the origin. Suppose the axis of rotation is the line e0∧n = e0∧(n1e1 + n2e2 + n3e3), where n1² + n2² + n3² = 1. So

I = n I3⁻¹ = (n1e1 + n2e2 + n3e3) e3e2e1 = −(n1e23 + n2e31 + n3e12) = (e2 − (n2/n1)e1)∧(n3e1 − n1e3)

Another way, longer, to derive I is to note that n×e1 = n3e2 − n2e3 is perpendicular to n, and (n×e1)×n = (1 − n1²)e1 − n1(n2e2 + n3e3) is perpendicular to n and to n×e1. The unit vector parallel to n×e1 is (n3e2 − n2e3)/√(n2² + n3²) = (n3e2 − n2e3)/√(1 − n1²), and the unit vector parallel to (1 − n1²)e1 − n1n2e2 − n1n3e3 is ((1 − n1²)e1 − n1n2e2 − n1n3e3)/√(1 − n1²). So

I = [(n3e2 − n2e3)/√(1 − n1²)] ∧ [((1 − n1²)e1 − n1n2e2 − n1n3e3)/√(1 − n1²)] = −(n1e23 + n2e31 + n3e12)
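Both derivations of I can be checked numerically, again with the clifford package (my own check; note clifford's basis has e13, so e31 = −e13):

    import numpy as np
    import clifford

    layout, blades = clifford.Cl(3)
    e1, e2, e3 = blades['e1'], blades['e2'], blades['e3']
    e12, e13, e23 = blades['e12'], blades['e13'], blades['e23']
    I3 = e1 * e2 * e3

    n1, n2, n3 = np.array([1.0, 2.0, 2.0]) / 3.0   # a unit normal
    n = n1*e1 + n2*e2 + n3*e3

    I = n * I3.inv()
    expected = -(n1*e23 - n2*e13 + n3*e12)         # -(n1 e23 + n2 e31 + n3 e12)
    print(np.allclose(I.value, expected.value))    # True
    print(np.allclose((n ^ I).value, (n * I).value))  # n.I = 0, so nI = n^I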

It is easily checked that n⌋I = 0, and therefore nI = n∧I is a 3-blade in R^{n+1} − {e0}. Also

I e0 = I∧e0 = e0∧I = e0 I

because e0 is orthogonal to every vector in the plane of I, so I⌊e0 = 0 = e0⌋I, and because I is a 2-blade, e0∧I is a 3-blade in R^{n+1}. Therefore R e0 = e^{−Iθ/2} e0 = e0 e^{−Iθ/2} = e0 R. So

R(x0∧A)R⁻¹ = [R e0 R⁻¹ + R x0 R⁻¹]∧R A R⁻¹ = [e0 + R x0 R⁻¹]∧R A R⁻¹


Dorst goes in the opposite direction. He assumes that e 0 is not affected by the rotation, and only
improper blades get rotated. But using equation (7.3) we can derive all that.
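In matrix language, the fact just derived says a rotation about an axis through the origin is block-diagonal on (e0, e1, ⋯, en): e0 is fixed and only the improper part rotates. A numpy sketch, where I choose Rodrigues' formula as the tool (not Dorst's rotor, but the same rotation):

    import numpy as np

    def rodrigues(n, theta):
        # 3x3 rotation by theta about the unit axis n.
        K = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]])
        return np.eye(3) + np.sin(theta)*K + (1 - np.cos(theta))*(K @ K)

    R3 = rodrigues(np.array([1.0, 2.0, 2.0]) / 3.0, 0.7)
    R = np.block([[np.eye(1), np.zeros((1, 3))],
                  [np.zeros((3, 1)), R3]])     # diag(1, R3) on (e0, e1, e2, e3)

    x = np.array([2.0, 1.0, 0.0, 4.0])         # weight 2
    y = R @ x
    print(y[0])                                # 2.0: the weight is unchanged
    print(np.allclose(y[1:], R3 @ x[1:]))      # True: the location is rotated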
If the dual X* = X I_{n+1}⁻¹ of X is rotated, again equation (7.3) gives R X* R⁻¹. Now

R I_{n+1} = I_{n+1} R,

because from equation (3.19) we get R I_{n+1} = R⌋I_{n+1} = I_{n+1}⌊R = I_{n+1} R, which is independent of the parity of n because of the evenness of the grades of R. So

R X* R⁻¹ = R X* R⁻¹ I_{n+1} I_{n+1}⁻¹ = (R (X* I_{n+1}) R⁻¹) I_{n+1}⁻¹ = (R X R⁻¹) I_{n+1}⁻¹ = (R X R⁻¹)*

Therefore the rotation of the dual of X is the dual of the rotation of X. Again Dorst goes from the end to the beginning, I don't know why.
11.8.4 General rotation
If the axis of rotation is not through the origin, but say through the point t = e0 + t, it is represented by the 2-blade t∧n. Translate the line and the flat by −t. That makes the axis of rotation go through the origin, parallel to itself, and the flat becomes, using formula (11.13),

X = x0∧A = (e0 + x0)∧A ↦ (e0 + x0 − t)∧A = X′

Next rotate X′ using R as above, to get

X′ ↦ R X′ R⁻¹ = R[(e0 + x0 − t)∧A]R⁻¹ = (e0 + R(x0 − t)R⁻¹)∧R A R⁻¹ = X″

Finally translate X″ by t, to get

X″ ↦ X″ + t∧R A R⁻¹ = (e0 + R x0 R⁻¹)∧R A R⁻¹ − R t R⁻¹∧R A R⁻¹ + t∧R A R⁻¹ = R X R⁻¹ + (t − R t R⁻¹)∧R A R⁻¹
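The translate-rotate-translate sandwich is transparent as matrices. A numpy sketch (my own example): a 90° rotation about the vertical axis through the point with location (1, 0, 0):

    import numpy as np

    def trans(t):
        T = np.eye(4)     # matrix of ~t on (e0, e1, e2, e3)
        T[1:, 0] = t
        return T

    # 90-degree rotation about e3, block-embedded so that e0 is fixed.
    R3 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    R = np.block([[np.eye(1), np.zeros((1, 3))], [np.zeros((3, 1)), R3]])

    t = np.array([1.0, 0.0, 0.0])       # the axis passes through e0 + t
    G = trans(t) @ R @ trans(-t)        # translate by -t, rotate, translate back

    p = np.array([1.0, 2.0, 0.0, 0.0])  # unit point at location (2, 0, 0)
    print(G @ p)                        # [1. 1. 1. 0.]: rotated about the shifted axis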
The question which bothers me is why we need the axis to go through the origin. In chapter 7 we discussed reflection in a plane, say A = a∧b. Given any vector x, we can decompose it into a component x∥ parallel to the plane and a component x⊥ perpendicular to the plane, such that x = x∥ + x⊥, and its reflection in A is x∥ − x⊥. If we want to rotate x about an axis perpendicular to A by twice the angle θ between a and b, reflect x in the plane to which a is orthogonal, and then reflect the result in the plane to which b is orthogonal. When we visualize the reflection or rotation, we picture the vectors x, a, b and the normal to the plane A as attached at their tails, even though all blades are free objects in the sense that they have no location. In the homogeneous model, all vectors (improper points) are pictured with their tail at the point of origin and their head at another point, and the subtraction of the two points is exactly the vector.
Now let's do that for the dual flat X*. First translate X* by −t. Using equation (11.14) we get

X* ↦ X* + e0⁻¹∧(t⌋X*)

Rotating it using the equation in section 11.8.3 gives

R X* R⁻¹ + e0⁻¹∧(R(t⌋X*)R⁻¹)

In my notes on the versor product, section 7.6.1, I noted that the rotation operator R X R⁻¹ is a versor product, and it was proven on page 194 that R(t⌋X*)R⁻¹ = (R t R⁻¹)⌋(R X* R⁻¹), so we have

R X* R⁻¹ + e0⁻¹∧((R t R⁻¹)⌋(R X* R⁻¹))

Finally, translate that by t, using again equation (11.14),

R X* R⁻¹ − e0⁻¹∧((t − R t R⁻¹)⌋(R X* R⁻¹))

where we used equation (3.10) page 73, and the facts that t ∙ e0⁻¹ = 0 and e0⁻¹∧e0⁻¹ = 0.

11.8.5 Rigid body motion


Rotate and then translate by t a flat X:

X ↦ R X R⁻¹ ↦ R X R⁻¹ + t∧(e0⁻¹⌋(R X R⁻¹))

Rotate and then translate by t its dual flat X* (it seems that Dorst should have "If X* is a dual representative" instead of "If X is a dual representative"):

X* ↦ R X* R⁻¹ ↦ R X* R⁻¹ − e0⁻¹∧(t⌋(R X* R⁻¹))

If X is an improper blade A, then

A ↦ R A R⁻¹ ↦ R A R⁻¹ + t∧(e0⁻¹⌋(R A R⁻¹)) = R A R⁻¹,

because e0⁻¹⌋B = 0 for any improper blade B. We already discussed this in section 11.8.2.
11.8.6 Constructing elements through motions
Suppose we want to construct the equation of a line which goes through the point p = e0 + p and points in the direction x. Start with the e1 axis. We've seen above in section 11.8.3 how to construct a rotor R so that the rotated e1 axis, with direction R e1 R⁻¹, becomes a line through the origin parallel to x. Then translate the rotated e1 axis by p so the line goes through p.
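A numpy sketch of the construction (mine; I again use Rodrigues' formula in place of the rotor R, which gives the same rotation, and I assume x is not parallel to e1):

    import numpy as np

    def rodrigues(n, theta):
        K = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]])
        return np.eye(3) + np.sin(theta)*K + (1 - np.cos(theta))*(K @ K)

    e1 = np.array([1.0, 0.0, 0.0])
    x  = np.array([1.0, 1.0, 0.0])
    x_hat = x / np.linalg.norm(x)

    # Axis perpendicular to e1 and x, angle between them: rotate e1 onto x_hat.
    axis = np.cross(e1, x_hat); axis /= np.linalg.norm(axis)
    theta = np.arccos(np.clip(e1 @ x_hat, -1.0, 1.0))
    print(np.allclose(rodrigues(axis, theta) @ e1, x_hat))  # True

    # Translating by p, the points of the line are e0 + p + s*x_hat.
    p = np.array([3.0, -1.0, 2.0])
    line_point = lambda s: np.array([1.0, *(p + s * x_hat)])  # unit-weight points
    print(line_point(0.0))  # the point p itself, [1, 3, -1, 2]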
11.8.8 Affine Transformations
As we noted in section 11.7.3, any transformation on the point space which takes a line to a line, takes a point to a point with the same weight, and preserves the ratio of distances, is called an affine transformation. Namely, let f be an affine transformation. So f[p] = f[p0(e0 + p)] = p0(e0 + p′), f[q] = f[q0(e0 + q)] = q0(e0 + q′), f[r] = f[r0(e0 + r)] = r0(e0 + r′). And the ratio must satisfy

f[p]∧f[q] / (f[q]∧f[r]) = (p0/r0) (p′ − q′)/(q′ − r′) = (p0/r0) (p − q)/(q − r)
Now suppose x = αe0 + x, so the weight is α. Then e0⁻¹ ∙ x = e0⁻¹ ∙ (αe0 + x) = e0⁻¹ ∙ (αe0) + e0⁻¹ ∙ x = α. So if f is an affine transformation, then e0⁻¹ ∙ f[x] = e0⁻¹ ∙ x. But e0⁻¹ ∙ f[x] = f̄[e0⁻¹] ∙ x, where f̄ is the adjoint of f. Now e0⁻¹ ∙ x = f̄[e0⁻¹] ∙ x for all x implies that f̄[e0⁻¹] = e0⁻¹. And since e0⁻¹ = βe0 (with β = ±1),

e0 = e0⁻¹/β = f̄[e0⁻¹]/β = f̄[e0⁻¹/β] = f̄[e0].
Next we show that an affine transformation f takes improper points to improper points. Since f preserves weights, an improper point x, with weight e0⁻¹ ∙ x = 0, is sent to f[x] with weight e0⁻¹ ∙ f[x] = e0⁻¹ ∙ x = 0, so f[x] is improper. In blade language: let In be the unit pseudoscalar of the subspace of R^{n+1} orthogonal to e0. All the improper points of the point space are represented by vectors in that subspace, and vice versa, so by definition x is improper iff x∧In = 0. Since f sends zero to zero, using equation (4.13) page 110, 0 = f[x∧In] = f[x]∧f[In], and the same holds for any improper blade X: f[X]∧f[In] = 0. So an affine transformation takes a proper point to a proper point and takes an improper point to an improper point. But it will never take a proper point to an improper point nor vice versa.
Next suppose that f is any linear transformation on the subspace of R^{n+1} orthogonal to e0. Dorst calls it R^n. It doesn't contain any vector with a nonzero coefficient of e0. Extend f to R^{n+1} by defining f[αe0] = αe0. So f[x] = f[αe0 + x] = αe0 + f[x]. Next add to this extension of f a translation by a vector t, using the form we used in section 11.8.2 page 303, so we have f_t[x] = f_t[αe0 + x] = αe0 + f[x] + αt. Note that f_t[x] = f[x] for improper x. Since the weight α equals e0⁻¹ ∙ x, we have f_t[x] = f[x] + (e0⁻¹ ∙ x)t. Just like f doesn't change the weight of a point, neither does f_t, namely e0⁻¹ ∙ x = e0⁻¹ ∙ f[x] = e0⁻¹ ∙ f_t[x]. Let X be a (k+1)-blade of R^{n+1}, so X = p∧A where p is a proper vector, and A is an improper k-blade, a wedge product of improper vectors, namely vectors whose e0-coefficient is zero. We've seen in section 11.5.3 page 286 that A = e0⁻¹⌋X. Note that f_t[A] = f[A]. Now let's calculate

f_t[X] = f_t[p∧A] = f_t[p]∧f_t[A] = (f[p] + (e0⁻¹ ∙ p)t)∧f[A] = f[p]∧f[A] + ((e0⁻¹ ∙ p)t)∧f[A] = f[X] + (e0⁻¹ ∙ p) t∧f[A]

Next we show that (e0⁻¹ ∙ p) t∧f[A] = t∧(e0⁻¹⌋f[X]):

t∧(e0⁻¹⌋f[X]) = t∧(e0⁻¹⌋(f[p]∧f[A])) = t∧[(e0⁻¹ ∙ f[p])f[A] − f[p]∧(e0⁻¹⌋f[A])] = t∧((e0⁻¹ ∙ p)f[A]) = (e0⁻¹ ∙ p) t∧f[A]

since e0⁻¹⌋f[A] = 0 (f[A] is improper) and e0⁻¹ ∙ f[p] = e0⁻¹ ∙ p. So altogether we have

f_t[X] = f[X] + t∧(e0⁻¹⌋f[X])
Dorst claims that this is the most general form of an affine transformation, where f is any linear
transformation on the subspace orthogonal to e 0, extended as f [ α e 0+ x ] =α e 0 +f [ x ]. But not every
linear transformation on the representation space is affine, because not every linear
transformation preserves the weight of a point.
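In matrix terms (my own sketch): an affine f_t has first row (1, 0, ⋯, 0), which is exactly weight preservation, while a generic matrix on the representation space does not:

    import numpy as np

    rng = np.random.default_rng(2)
    F = rng.normal(size=(3, 3))   # any linear map on the subspace orthogonal to e0
    t = rng.normal(size=3)        # the translation part

    A = np.eye(4)                 # affine map f_t on (e0, e1, e2, e3)
    A[1:, 1:] = F
    A[1:, 0]  = t

    x = np.array([2.0, 1.0, -1.0, 0.5])
    print((A @ x)[0] == x[0])     # True: the weight 2 is preserved

    G = rng.normal(size=(4, 4))   # generic linear map: projective, not affine
    print((G @ x)[0], x[0])       # generally different weights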
Recall the notion of affine combination from page 277, where we add a bunch of points whose weights add up to exactly one. So suppose p = Σ_{i=1}^{k} αi pi, where Σ_{i=1}^{k} αi = 1. If f is an affine transformation, so it preserves weights of points, then f[p] = f[Σ_{i=1}^{k} αi pi] = Σ_{i=1}^{k} αi f[pi] is also an affine combination; namely, the transform of a point which is an affine combination of points is an affine combination of the transformed points.
11.8.9 Projective Transformations
These are the general linear transformations on the representation space. They can take any vector to any vector: a vector with a zero e0-coefficient to a vector with a nonzero e0-coefficient, and vice versa, and they can certainly change the weight of a proper point.
11.9 Coordinate-free
Suppose we are given a line L = x∧u. Then u = e0⁻¹⌋L. Note that even if L is given as x∧y, e0⁻¹⌋L still yields y − x, which is the direction of L. Suppose that L = d u = (e0 + d)u = e0u + du ≡ e0u + M. Then

M = e0⁻¹⌋(e0∧L)

And then

d = du u⁻¹ = M u⁻¹ = (e0⁻¹⌋(e0∧L)) u⁻¹
Next suppose we are given a point p off L. Find a line M (not M!!!) through p, perpendicular to L and intersecting L. Now any vector v in the plane to which u is normal is perpendicular to u. But most lines p∧v, though perpendicular to L, do not intersect L. Now starting from e0 we know how to find the shortest vector d = e0 + d to L, such that d ∙ u = 0. How do we do that from p = e0 + p instead of from e0? Suppose we consider p to be the new origin, e0′. Then relative to e0′, e0 = e0′ − p, and any proper point x = e0 + x becomes relative to e0′ the point x′ = e0′ + x − p. Note that x′ represents the same point in space as x; only the origin changed, so the representation of the point changed. Also note that the change of origin doesn't affect the representation of improper points. Also the line L′ = x′∧u is the same set of points as L = x∧u, and only the representation of the points changed. So L′ = x′∧u = d′u, where d′ = e0′ + d′ and d′ ∙ u = 0. The formula for d′ is

d′ = (e0′⁻¹⌋(e0′∧L′)) u⁻¹
So this vector d′ is perpendicular to u (which can easily be checked) and is the shortest distance from p to L′. But note that L and L′ are the same line in space, and it's not the case that one is translated relative to the other. So the equation of the line which goes through p, intersects L, and is perpendicular to L is just

p∧d′ = p∧((x∧u − p∧u) u⁻¹)

We can understand d′ as follows:

(x∧u − p∧u) u⁻¹ = (1/u²)[(x∧u)u − (p∧u)u] = (1/u²)[(x∧u) ∙ u − (p∧u) ∙ u] = (1/u²)[u²x − (x ∙ u)u − u²p + (p ∙ u)u] = (x − p) − [(x − p) ∙ û]û

Interpretation: take the plane which contains the vectors x − p and u. In that plane, x − p has a projection along the direction u and a rejection from the direction u. The projection is [(x − p) ∙ û]û and the rejection is d′.
It seems that the result is correct, but the derivation is wrong. Namely the formula d′ = (e0′⁻¹⌋(e0′∧L′)) u⁻¹ is wrong, since, for example, e0′ = e0 + p is not orthogonal to the improper vectors (e0′ ∙ u ≠ 0 in general). So the idea of getting a new point of origin is wrong. But Dorst has a different track. Translate all points by −p. This takes p to e0, and it takes x = e0 + x to x′ = e0 + x − p. So the line L gets moved to a translated line L′ = (e0 + x − p)∧u. Find the support vector d′ of L′, namely d′ = (e0⁻¹⌋(e0∧L′)) u⁻¹:

e0∧L′ = e0∧(x − p)∧u, so e0⁻¹⌋(e0∧L′) = (x − p)∧u, and

d′ = ((x − p)∧u) u⁻¹
It is clear that when we translate the points back, the support vector of L ' is just the required
vector from p to L.
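A numpy check of this recipe (my own sketch): d′ is the rejection of x − p from u, it is perpendicular to u, and p + d′ lands on L:

    import numpy as np

    x = np.array([1.0, 2.0, 0.0])   # location of a point on L
    u = np.array([0.0, 1.0, 1.0])   # direction of L
    p = np.array([4.0, 0.0, 1.0])   # location of the point off L

    u_hat = u / np.linalg.norm(u)
    d = (x - p) - ((x - p) @ u_hat) * u_hat  # ((x-p)^u)/u, the rejection

    print(np.isclose(d @ u, 0.0))            # True: perpendicular to L
    r = p + d - x                            # from x to the foot point
    print(np.allclose(np.cross(r, u), 0.0))  # True: p + d lies on L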
