
QUINTA ESSENTIA

A Practical Guide to Space-Time Engineering

PART 1
Alpha to Omega
For Mike
2nd Edition
Project Initiated: December 4, 2007
Project Completed: June 6, 2008
Revised: Thursday, 24 November 2011

GEOFFREY S. DIEMER

Edited by Riccardo C. Storti

www.deltagroupengineering.com

rstorti@gmail.com

Copyright 2011: Delta Group Engineering (dgE). All rights reserved.

Preface
One does not find gold prospecting in a field filled with miners. One
must break new ground, not perpetually overturn familiar soil.
Riccardo C. Storti
We experience gravity every moment of our lives, yet most
people rarely, if ever, pause to consider what the force of gravity
actually is. To others, this question borders on obsession. Gravity is a
mystery that has plagued scientists for hundreds of years. Although
Newton and Einstein formulated ingenious tools for depicting
precisely how objects will behave due to the effects of gravity on
Earth and in the heavens, it may surprise many people to learn that
their work does not actually reveal the root cause of gravity. In other
words, we know that all material objects both generate and respond to
gravitational fields, but science has absolutely no idea how objects
cause gravity -- until now, that is.
The answer that has recently been uncovered, as described in
the Quinta Essentia series, extends the work of Newton and Einstein
using a mathematical framework commonly employed in the field of
thermodynamics. The physics is exactly the same; the only difference
is in the way we choose to depict the gravitational model.
In Sir Isaac Newton's time, some three hundred years ago,
people depicted gravity as being a "pulling" force which attracted
objects to one another in the heavens, and invariably caused objects to
fall to earth. It was also believed that gravity was transmitted across
great distances of space and that when it reached a distant object a
pulling force would be imparted upon it; thus this transmission of
gravitational force was referred to as "action-at-a-distance." Newton
and his contemporaries surmised that a fluid-like substance of some
kind must fill all of space, acting as the medium via which the force of
gravity was transmitted. This mysterious substance was referred to as
the "aether." Even though Newton implicated the aether as the
medium which transmitted the force of gravity, he could not logically
reconcile how a fluid-like description of the aether could allow
objects in the heavens to move as they do, simply because fluids act to
impede the motion of objects. If the aether was in fact fluid in nature,
its viscosity should cause the stars and planets to slow and fall out of
their regular, seemingly perpetual orbits.
The formulas Newton derived in his monumental work
entitled The Principia have been used not only to predict the orbital
motions of the planets; they are still used to this day to plan our
spacecraft missions to the Moon, Mars and other planets. However,
the triumph of Newtonian Mechanics overshadowed another more
speculative theory of gravity posited by Newton, which has gone
virtually ignored ever since. In his treatise entitled Opticks, Newton
develops the mathematics describing optical principles of refraction,
reflection and light spectra. This profound body of work is still
fundamental to science today, and has brought about phenomenal
technological advancements since the time of its development. In this
work Newton briefly speculates on the notion that gravity is caused by
optical characteristics of the aether. Newton surmised that just like a
lens, gradual changes in the density of the aether (whatever it may be)
in the presence of matter should cause light and the movements of
objects passing through it to follow curved trajectories characteristic
of gravitational attraction.
A full two hundred years later, Einstein introduced his theory
of gravitation called General Relativity (GR). Very much like
Newton's optical model for gravity, GR is essentially a geometric
interpretation of gravity, derived from the way in which light
propagates through space in curved trajectories in the presence of
gravitational fields. The curved path of light in a gravitational field, in
turn, defines whether the space an object moves through appears
flat or curved. This curvature of space-time defines how objects
behave gravitationally.
Not only was Einstein's GR found to be highly accurate and
useful, it also removed the problem of action-at-a-distance. According
to GR, objects aren't being "pulled" by some mysterious force
towards each other; rather, it is the curvature of the space-time fabric,
if you will, which guides the gravitational motions of objects, hence
no force is required. Since no force is necessary to keep a planet in
orbit, the action-at-a-distance problem which plagued Newton
vanished along with the aether.
However, Einstein's GR theory didn't vanquish the aether
completely; it merely replaced it with something even more abstract
called "curved space-time." And the problem of action-at-a-distance
was only supplanted by a much thornier question of what, exactly, is
being "curved"? In other words, how does a beam of light or an object
"know" whether the empty vacuum of space it moves through
happens to be curved or flat, and respond accordingly? Is matter
actually curving space, and in so doing causing rays of light to bend
as they propagate along a curved manifold? In this context, "space-time
curvature" is a completely ambiguous term because space is not
considered to be a physical thing. In other words, how can nothing
possess a curved shape? It is crucial that physicists take to heart the
fact that "curvature" is merely a mathematical contrivance acting to
describe (not explain) the physical phenomenon we call gravity. Thus
we are still left to wonder what physical process might be responsible
for conveying information about the gravitational field to the beam of
light or the object passing through it.
In 2002, physicist Hal Puthoff introduced an alternative
optical interpretation of GR, referred to as the Polarizable Vacuum
Representation of GR, which sought to answer such questions. Here,
Puthoff substitutes the concept of space-time curvature with a
variable index of refraction in space surrounding matter, which
yields a congruent yet mathematically simplified interpretation of
gravity to that of GR.
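To make the comparison concrete, it is worth quoting the central relation of the PV approach (taken from Puthoff's published papers rather than from this book): the "curvature" of GR is replaced by a vacuum refractive index K which, for a body of mass M at radial distance r, takes the exponential form

$$K = e^{2GM/(rc^2)} \approx 1 + \frac{2GM}{rc^2},$$

where G is Newton's gravitational constant and c is the speed of light in the unpolarized vacuum. Far from matter, K approaches 1 and space-time is optically "flat"; the weak-field expansion reproduces the classical tests of GR.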
According to GR, the space-time geometry of a gravitational
field surrounding a massive object is depicted as a depression in the
fabric of space-time. As an object passes through curved depressions
in space enveloping a planet or star, its path is caused to bend as it
follows the natural slope of the curve, ultimately resulting in a
gravitational effect. The key distinction between GR and the
polarizable vacuum (PV) interpretation is that the PV model explicitly
describes a physical manner in which space-time may, in effect, be
curved and how an object might be able to sense the gravitational
field it passes through.
The PV model asserts that matter polarizes the vacuum
surrounding it, generating gradients in the refractive index of space so
that as a beam of light passes through, its trajectory will be refracted
(i.e., bent) towards the object. However, if we follow Puthoff's model
and assume that a beam of light is bent due to refractive properties of
the vacuum of space, rather than curvature, we are still left to wonder
how matter causes the refractive index to change, and why polarized
space (a refractive index within the fabric of space-time) should
cause a gravitational force. These questions are answerable by first
understanding how the vacuum of space becomes refractive, and how
material objects act on, and react to a refractive space-time
environment.
Quantum Mechanics (QM) tells us that the "vacuum" of
empty space is, in fact, anything but empty. If you switch your
television to an unutilized channel you'll see thousands of dots of
static buzzing about like bees in a hive. This imagery is physically
reminiscent of what is occurring at the quantum level in the vacuum
of space; a chaotic jumble of quantum energy fluctuations at all points
in the Universe, whether in the inter-galactic voids of deep space or in
the impossibly small spaces between sub-atomic particles! In this
way, the Universe may be thought of as a container replete with
energy which may never be emptied. When we consider the vacuum
to be "something" rather than "nothing," it suggests that the vacuum
itself may provide the matrix supporting the indefinable "curvature"
of space-time and the more physical refractive index of the PV
model. But we are still left pondering the question of why objects
produce and respond to gravitational forces.
To solve this problem, we must look to other examples of
such forces in Nature. The only other similar force in Nature is
inertia. If you wish to change from one velocity to another, you
need a push to overcome the acceleration reaction force of inertia. The
force of inertia is only experienced upon acceleration, which simply
refers to a change in motion. When we move at a constant rate of
speed we don't feel any force (other than gravity) even if we are
moving incredibly fast. Yet once we change our rate or direction of
motion, we suddenly feel a force pushing on us in the opposite
direction of our acceleration. Nature tells us that uniform motion is
relative but acceleration, in a manner of speaking, is absolute because
you can feel it. But where does this strange force come from? Strange
and mysterious as it may seem, this powerful force arises (like
gravity) instantaneously out of the vacuum of space, as if by magic, to
inhibit changes in motion. But how is it that objects feel the force of
inertia and gravity even while separated from other objects by vast
expanses of nothingness?
This question hints at a deep connection between the forces
we have labeled gravity and inertia, and their indissoluble
connection to the quantum vacuum of space. In making this link, we
take the first steps towards profound discoveries and unparalleled
technological advancements; all based upon a fresh understanding of
the quantum origins of inertia and gravity.
This book presents an alternative model to that of GR, which
presumes that matter must do work on the quantum vacuum
manifold of space in order to generate so-called "curvature" within it,
and that the energy expended on the space-time manifold is
electromagnetic (EM) in nature. The EM energy exerted on the
manifold changes its configuration such that instead of being
"curved," space-time becomes "refractive" in the presence of matter.
In this way, gravity may be definitively shown to be the byproduct of
EM exchange2 between matter and the vacuum of space surrounding
it. Objects and light passing through such regions of space behave in
precisely the same manner as predicted by GR, except that they are
guided according to optical principles of action (i.e., refraction)
rather than by metaphysical geometric imperatives. This new
approach also eliminates the terms "gravity" and "inertia" from our
vernacular, in that they are both shown to be byproducts of
electromagnetism and not actual forces in their own right.
2 In accordance with the principles of QM.
Quinta Essentia: A Practical Guide to Space-Time
Engineering (the series) describes the development of a mathematical
method termed Electro-Gravi-Magnetics (EGM); so named because
it facilitates the representation of gravitational fields solely in
electromagnetic terms. One of the most valuable aspects of EGM is
that it demonstrates how GR and QM are interrelated. In this regard,
EGM is a unique method which reveals a single universal principle
applicable from the subatomic scale to the cosmological. For example,
the EGM method, originally designed to calculate the energy
distribution of gravitational fields, has uncovered not only the
framework underpinning the stability, order and coherent inner
structure of the atom; it also reveals how this order and stability arises
in Nature.
EGM is an engineering tool, and as such it may seem
somewhat unorthodox to many physicists. However, no new
physics has been conjured in order to develop the EGM method. It is
simply a novel application of time-tested engineering principles,
physics and mathematics. The principal reason it may seem
unorthodox has to do with the way in which problems are approached
in physics and engineering. The physicist's natural ally is reduction,
where a generalized phenomenon is deconstructed into its basic
constituents. An engineer's ally, on the other hand, is an inductive
approach: integrating basic principles and formulating a systemic
solution congruent with experimental observation such that a
phenomenon may be reverse-engineered.
Based upon engineering methodology, EGM represents a
new way of looking at an age-old problem. It employs conventional,
well-founded engineering principles which have never been
previously applied to the problem of gravity. Again, EGM treats
gravity in terms of thermodynamic principles; i.e., as being the result
of matter (mass-energy) establishing energetic equilibrium within the
space-time manifold (as defined by QM) surrounding it. Modeling the
dynamics of gravitation in this manner yields profoundly accurate and
comprehensive results because EGM successfully reveals the common
ground underlying GR and QM.
Part One of the Quinta Essentia series provides the layman
with a summarized presentation of the key results and findings in
Quinta Essentia parts Two, Three and Four; furnishing the reader with
the derivational details required to test one's own theories, to make
predictions, and to scrutinize EGM. It is the author's sincere hope and
intention that the material presented in the Quinta Essentia series will
convey the scope and utility of the EGM method and inspire new
ideas and experiments dealing directly with space-time manifold
modification, by either applying EGM methods or through the
development of one's own approach.


Table of Contents
Preface .......... 3
1  Nothing is Everything .......... 13
   1.1  The void .......... 13
   1.2  The Platonic solids .......... 15
   1.3  The laws of motion .......... 18
   1.4  The luminiferous aether .......... 21
   1.5  Michelson and Morley .......... 24
   1.6  Space-Time .......... 26
   1.7  The Casimir Effect .......... 30
2  All Things Being Equal .......... 35
   2.1  The cosmic counter-balance .......... 35
   2.2  Expansion and compression .......... 38
   2.3  The principle of equivalence .......... 43
   2.4  Mass-Energy equivalence .......... 49
3  The Glass That is Always Full .......... 53
   3.1  Symmetry and unity .......... 53
   3.2  Exploring the microcosmos .......... 54
   3.3  The Quinta Essentia .......... 58
   3.4  Quantum uncertainty .......... 69
   3.5  The substantive Universe .......... 71
4  Making Something of Nothing .......... 79
   4.1  Virtual reality .......... 79
   4.2  Mutually assured construction .......... 83
5  Mass Illusion .......... 87
   5.1  A matter of terms .......... 87
   5.2  Intrinsic inertia .......... 88
   5.3  Extrinsic inertia .......... 94
   5.4  Bridging the gaps .......... 98
6  The Polarizable Vacuum .......... 107
   6.1  Blind-sighted .......... 107
   6.2  Optical gravity .......... 109
   6.3  Shaping the lens .......... 111
   6.4  Conflux .......... 113
7  The Harmony of Nature .......... 121
   7.1  Ancient wisdom .......... 121
   7.2  Music of the spheres .......... 123
   7.3  The quantum-harmonic axiom .......... 126
   7.4  Fourier's legacy .......... 128
8  Electro-Gravi-Magnetics (EGM) .......... 135
   8.1  Introduction .......... 135
   8.2  Similitude .......... 136
   8.3  Precepts and principles .......... 140
   8.4  Space-time engineering .......... 142
   8.5  Gravity .......... 145
   8.6  Elementary particles .......... 157
   8.7  Cosmology .......... 164
9  EGM Technical Summary .......... 179
   9.1  Overview .......... 179
   9.2  The QV spectrum .......... 185
   9.3  The EGM spectrum .......... 185
   9.4  The ZPF spectrum .......... 186
   9.5  The PV spectrum .......... 186
   9.6  The EGM, PV and ZPF spectra .......... 189
   9.7  The Casimir Effect .......... 189
   9.8  Comparative spectra .......... 190
   9.9  Characterization of the gravitational spectrum .......... 193
   9.10  Planck-Particle characteristics .......... 193
   9.11  Cosmology .......... 194
      9.11.1  Fundamental .......... 194
      9.11.2  Advanced .......... 195
      9.11.3  Gravitational .......... 195
      9.11.4  Particle .......... 196
   9.12  Key point summary .......... 196
10  EGM Results Summary .......... 201
   10.1  Harmonic representation of fundamental particles .......... 201
   10.2  Periodic table of fundamental particles .......... 202
   10.3  EGM vs. SMoC .......... 203
   10.4  Cosmological evolution process .......... 203
Periodic Table of the Elements .......... 211
Image: Spiral Galaxy .......... 212
Bibliography .......... 213

Nothing is Everything
"Among the great things which are found among us, the existence
of Nothing is the greatest."
Leonardo da Vinci [i]

1.1 The void

The Sun, the Earth and all the planets of our solar system
float in the vast expanse of space, effortlessly, almost magically
suspended in a mystical, indefinable void. It may easily be assumed
that few people today ever give a moment of thought to the question
of what space actually is. To others, this question is an obsession.
The nature of space has been a source of philosophical and
scientific debate for thousands of years, beginning as a rational
argument to substantiate the existence of nothing. Before humanity
had any experiential knowledge of space, the debate raged over
whether a three-dimensional volume could be completely devoid of
all substance. If there was in fact a true void, could it even be thought
to exist? Over the centuries, the void eventually gained acceptance
as a truism, shifting the debate to questions concerning the physical
nature of "nothingness." Was the void truly nothing, or was it composed
of an ethereal substance of some kind?
The question posed by philosophers throughout the ages is:
how can "nothing" exist as part of our reality, given that "nothing"
represents a state of non-existence? This is a paradox and a
contradiction in terms. Some ancient Greek philosophers expressly
opposed the existence of the void for this reason. But the precise
definition of the void at that time was considered to be a true and
complete nothingness. One interpretation of the vacuum was related
to the idea of zero, which is in many ways just as unfathomable as
the concept of infinity.
The Roman poet Lucretius is well known for the phrase "ex
nihilo nihil fit," meaning "nothing comes from nothing"; an idea
originally expressed by the Greek philosopher Empedocles (495-435
BC). Empedocles' view was that everything in our material Universe
had to be born of something else, something tangible. Something
cannot be created from nothing, nor could anything simply disappear
into nothingness. To the Greek philosophers in this particular camp,
everything that is, is and forever will be, so there was no rational way
to include the idea of nothing or the state of non-existence into
arguments regarding the nature of matter.
This overlying concept marks the birth of conservation of
energy in contemporary physics; meaning that energy can neither be
created nor destroyed, but only transformed or exchanged. It's like
accounting, or balancing your bank account. Although we all may
wish that money could magically appear in our bank account, or that
we could just tack on an extra zero to the end of our balance, we can't.
The money has to come from somewhere. The same is true of energy,
the currency of the Universe.
Leucippus (5th century BC) and his student Democritus (460-370
BC) are referred to as "Atomists" because they introduced
the notion that matter is composed of eternal, indivisible, fundamental
units. A pure substance, the Atomists would say, could be divided and
subdivided again and again until at some point it could be divided no
further. The end-point of matter was called "atomos," meaning
"without parts." But the philosophical and logical invention of the
atom required something special, namely, a void. All of those unseen
atoms which make up matter would need some free space to move
around in to rearrange themselves and form structures within. If
there were no space, then there would be no movement and no
transformation of matter witnessed in our commonplace experience.
There would likewise be no cause-and-effect and the ever-dynamic
motions of the Universe would cease. The Cosmos would be frozen
solid without time. The Pythagoreans, as Aristotle wrote, believed
that: "It is the void which keeps things distinct, being a separation and
division of things." [ii]
Aristotle (384-322 BC), however, didn't completely agree
with the Atomists. In fact, it was Aristotle himself who maintained
that "Nature abhors a vacuum." However, he didn't necessarily
disagree with them either, because his argument wasn't actually
rooted in a denial of the void. Hence, the argument became an issue of
defining terms.
When we speak of a vacuum, what do we mean? Aristotle
would contend that if one tried to create new space where there wasn't
space before, something would always immediately rush in to fill that
space. To use an example based in our own time, if one were to take a
zip-sealed plastic sandwich bag, flatten it completely to remove all the
air and then zip it shut, one will find that it isn't possible to pull the
sides apart in any way that could create a new space inside. Indeed, if
one were to construct a similar experiment utilizing something more
rigid like glass or metal, we know that it is possible to create a
vacuum largely free of air and matter, but the creation of that vacuum
doesn't encapsulate a zone of non-existence which has been
substituted in its place.

This point is indicative of the direction the void would take
in philosophical terms. The void was a necessity in the Atomists'
view; however, its existence remained impossible because a true void
(as nothingness) could never be created by any natural process.
Something, whatever it may be, must occupy newly created spaces.
But what was it, exactly, that was rushing in to fill the space if it
wasnt some form of matter? The spaces that permeate objects and
separate them from one another must be composed of something for
this line of reasoning to be compatible with experience and
observation. When a vacuum is created, that vacuum may be devoid
of all matter, but according to Aristotle, it must still be something.
It was the Hellenistic philosopher, Zeno of Citium (333-264
BC), whose teachings mark the beginnings of Stoicism3, so named
because of the Painted Porch from which he taught. Like Aristotle, the
Stoics also believed in a continuum of matter, or at least an absence of
a true void in the presence of matter. They believed that there must be
some kind of substance occupying the spaces surrounding objects
and completely permeating them, as if to say that all matter was
imbued with a spirit imparting purpose of being. They called this
substance "pneuma," which was thought to be a mixture of fire and air:
an energizing fluid.
3 "Stoa" is Greek for "porch."
But unlike Aristotle, whose void-substance was somewhat
static and eternal, the Stoics' pneuma was dynamic and protected
matter from dissolving into nothingness. It is this concept of "nothing"
which has its roots in what Empedocles termed the "aether": a
mysterious and ubiquitous medium which surrounded and permeated
matter. This so-called aether, supremely rarefied and quintessential,
became the substance giving form to the void.
The debate over nothingness (i.e., non-existence), became a
futile endeavor beyond the realm of empirical study or solution.
However, the nature and composition of the vacuum as a real
substance termed the aether would be the focus of debate evermore.

1.2 The Platonic solids

Plato (427-347 BC) derived a mathematical interpretation of
the aether in a similar manner to the Atomists by reducing matter into
its quintessential, elemental constituents. Study of Pythagorean and
Euclidean mathematics quite possibly provided the inspiration for his
development of a rather poetic model of the Universe based on
geometric symmetry. In his treatise called "Timaeus," Plato describes
a complete theory of matter based on what he called the five perfect
solids. These solids represent the only perfectly symmetrical
polyhedrons4 whose outer surfaces are entirely composed of a single
type of regular polygon such as an equilateral triangle, a square or a
pentagon. Other shapes, such as the hexagon for example, cannot
form a polyhedron with a surface comprised only of hexagons.
The first of these perfect polyhedrons is known as the
tetrahedron; a three-dimensional shape consisting of four (4)
equilateral triangles connected along their edges to form a three-legged
pyramid structure. The next order of polyhedron is the
octahedron; composed of two standard four-sided pyramids
sandwiched together by sharing the square base of each pyramid,
forming a diamond shape from eight (8) triangles. Even though the
center of the diamond shape is a square on the inside, the surface is
entirely composed of triangles. The third solid is the hexahedron (i.e.,
a simple cube). The fourth perfect solid, composed of twenty (20)
equilateral triangles, is the icosahedron. The fifth and most unique
solid, the dodecahedron, is composed of twelve (12) identical
pentagons, forming a shape approximating a soccer ball.
4 Three-dimensional shapes.

Each of these five solids formed Plato's version of a
periodic table of elements. These elemental shapes were thought to
form all material objects in the Universe5, forming the basis of
alchemy practiced over the next few thousand years. Empedocles,
before Plato, held the belief that only four elements existed, not
including the aether, which formed the basic atomic constituents of all
matter. Various concoctions of these four elements were thought to
create all substances. The four elements themselves were Earth, Air,
Fire and Water.
5 Each solid represented one of the five elements.
The existence of five Platonic solids implied that an
additional fifth element of matter must exist, called "Quinta
Essentia," which represented the aether. Plato's Quinta Essentia was
eternal, immutable and the source of all things. In fact, ancient


philosophers considered the fifth element, symbolized by the
dodecahedron, to be so important that its existence was kept secret
from the general population6. The belief that the aether was a
substance made from the fifth perfect solid marked a different manner
of viewing the aether, transforming the formerly featureless void into
the quintessential origin of all things.
6 Carl Sagan, Cosmos television series.
The Quinta Essentia didn't infuse matter with spirit in the
way that the pneuma was believed to; rather, it was considered to be
the fabric of the void and the basis of matter. Plato surmised that these
five elements, unlike Empedocles four elements, could split and
merge into entirely new and larger atoms and thus form different
substances, whereas the four fundamental elements of Empedocles
were combined in various recipes to form substances with unique
characteristics.
In Plato's model, the five elements correspond to each of the
five perfect solids:

Element    Geometry
Earth      Hexahedron
Air        Octahedron
Fire       Tetrahedron
Water      Icosahedron
Aether     Dodecahedron
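As a numerical aside (not part of the original text), the "perfection" of these five shapes can be checked with Euler's polyhedron formula, V - E + F = 2, which every convex polyhedron must satisfy; the vertex, edge and face counts below are the standard ones, and the face counts match those quoted above. A minimal Python sketch:

```python
# The five Platonic solids as (vertices, edges, faces).
# Face counts match the text: 4, 8, 6, 20 and 12 faces.
solids = {
    "tetrahedron":  (4, 6, 4),
    "hexahedron":   (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "icosahedron":  (12, 30, 20),
    "dodecahedron": (20, 30, 12),
}

for name, (v, e, f) in solids.items():
    # Euler's formula V - E + F = 2 holds for every convex polyhedron.
    assert v - e + f == 2
    print(f"{name}: {f} faces, V - E + F = {v - e + f}")
```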

Plato describes how the first four elements could recombine
to form new elements; however, the dodecahedron was unique. The
aether could not be broken up into more fundamental subunits or
recombined with other elements like the others could. This is due to
the fact that the surfaces of the other four solids may be further
subdivided into two types of right triangles. One of these is formed by
slicing a square diagonally through its center. The other is produced
by dividing an equilateral triangle by drawing a line from one tip
through to the center of the base, thus dividing it in half. What makes
the dodecahedron unique in this case is that it is not possible to build a
pentagon from just these two types of right triangles, as it is for the
other shapes. The other elements were malleable, whereas the aether
was eternal. The Quinta Essentia thus became the fabric of the
Cosmos upon which all matter was thought to be embroidered.
Oddly enough, triangular symmetry is mirrored in the
subatomic particles and quarks comprising the atoms as we have
come to understand them today. Our contemporary atomic model is
composed of three subatomic units: protons, neutrons and electrons.
Moreover, the proton is composed of two "Up" quarks and one
"Down" quark; whereas the neutron is composed of one "Up" quark
and two "Down" quarks.
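Assuming the standard quark charges of +2/3 and -1/3 of the elementary charge for the "Up" and "Down" quarks (values supplied here, not stated in the text), the familiar proton and neutron charges fall out directly. A minimal Python sketch:

```python
from fractions import Fraction

# Standard quark electric charges, in units of the elementary charge e.
CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

def net_charge(quarks):
    """Sum the constituent quark charges of a baryon."""
    return sum(CHARGE[q] for q in quarks)

print(net_charge(["up", "up", "down"]))    # proton:  1
print(net_charge(["up", "down", "down"]))  # neutron: 0
```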
These quark triplicates (and the triplicate subatomic components of
the atom) can be likened to Plato's sub-elemental triangles! Even
though we now know that Plato's conjectures were nothing more
than philosophical representations of reality, it is quite surprising
that the basic tenets of his theory display such prescience. One is
forced to consider the possibility of a deeper order in Nature which
Plato was able to illuminate through his careful study of
mathematical symmetry.

1.3 The laws of motion

Fast forward to Sir Isaac Newton, a full two thousand years
later, and the aether remains. From Roman times, to the Dark Ages,
through the Middle Ages and the Renaissance, the Platonic solids
formed the basis of physics and alchemy. The Quinta Essentia
remained a key ingredient in the many concoctions of alchemical
practice. Furthermore, it was the practice of alchemy, in the western
world at least, that would contribute greatly to the development of the
Scientific Method and the field of chemistry; in effect, generating the
disciplines of science we know today. For the sake of brevity,
however, it will suffice to mention that the wealth of information
available from over two thousand years of world history relating to
alchemy shall be left to the reader for further investigation.
However, what cannot go unmentioned, in at least some detail, is
Newtons philosophical and scientific stance regarding the aether.
In his writings, Newton's stance concerning the aether was
rather two-sided. On one hand, in his treatise called Opticks written in
1704, he employs the aether as the basis for most of his observations
related to the nature of light and optical phenomena. In his Principia,
in which he develops the laws of motion and gravitation, the aether is
also present as the medium by which force is transmitted to objects
separated by some distance in space. On the other hand, in his
writings7 Newton was very careful to remind his readers that he would
"feign no hypotheses" for what the aether could physically be, even
though the aether remained the basis for his reasoning throughout.
Newton felt that the aether should remain in the realm of the occult
and metaphysics, even though he consistently relied upon the aether
as a physical justification for his mathematical theories.
7 Principia (1687) and Opticks (1704).
In Newton's laws of motion, gravity was thought to be a
force which attracted bodies to each other in the heavens, just as it
seemed to do here on Earth. Objects invariably fall to the Earth and it
was thought that an actual, physical force pulled everything to the
surface. It was the force of gravity that pulled the legendary apple
from the tree onto Newton's head, inspiring him to eventually
decipher the laws of gravitation. Even to this day, this colloquial
notion of what gravity is persists in our language. We still call gravity
a force, and we still, erroneously, talk about it as though it has the
ability to reach out and pull in objects from afar. Gravity in Newton's
time was thought to be transmitted instantaneously through space via
the aether, imparting a pulling force on other objects.
Even though Newton implicated the aether as the medium
transmitting the force of gravity, he could not logically reconcile how
a fluid-like description of the aether could allow objects in the
heavens to move as they do, simply because fluids act to impede the
movements of objects. The planets moved eternally and without
resistance through the aether, so how could objects move in a fluid
without any resistance to slow their motion? If the aether was some
kind of substance, it should induce resistance to the orbital motion of
planets and cause them to spiral into the Sun. Newton writes in his
work, Opticks:
p. 528, Qu. 28:
A dense fluid can be of no use for explaining the
phenomena of Nature, the motions of the planets
and comets being better explained without it. It
serves only to disturb and retard the motions of
those great bodies, and make the frame of Nature
languish; . . . so there is no evidence for its
existence; and, therefore, it ought to be rejected. . .
. the main business of natural philosophy is to
argue from phenomena without feigning
hypotheses, and to deduce causes from effects, till
we come to the very first cause, which certainly is
not mechanical; and not only to unfold the
mechanism of the world, but chiefly to resolve these
and such like questions. What is there in places
almost empty of matter, and whence is it that the
Sun and planets gravitate towards one another,
without dense matter between them? Whence is it
that Nature doth nothing in vain; and whence arises
all that order and beauty which we see in the
world?
Thus, Newton recognized that something occupying the
spaces between objects accounted for transmission of gravitational
force (it just couldn't be fluid-like). Otherwise, how could one object
like the Earth affect the motion of the Moon so far away with just
empty space between them?
In Newton's time, this strange, disconnected cause-and-effect
relationship was referred to as "action-at-a-distance," sparking
another great debate in physics that raged until Albert Einstein's
development of General Relativity (GR) approximately two hundred
years later. Newton's work, however, was immune from this
argument even though it remained a point of great contention that
pestered and haunted him ceaselessly.
Newton was immune from the action-at-a-distance debate
because he eloquently demonstrated how simply understanding the
regular, predictable behavior of Nature can often suffice, i.e., it is
sometimes adequate to formulate a mathematical description of
Nature's laws without "feigning hypotheses" for why they occur. He
formulated a mathematical structure describing the motions of the
planets without espousing a mechanical, physical manifestation of its
behavior. If it works, so be it; thus, it became possible to discuss the
aether in purely philosophical terms without invoking it as a necessity
for the laws of gravitation and mechanics.
Newton's equations have allowed us to design rockets and
enabled planetary exploration in our solar system. It is also Newton's
principles of optics which enabled the invention of the photographic
equipment used to document these great adventures. All of this
technology has been made possible without ever having to understand
the mechanics of the aether. Thus the need for the aether evaporated,
even though its existence could still be debated; its precise nature
remaining as mysterious and indefinable as ever.

1.4 The luminiferous aether

One of the most triumphant and influential discoveries in
human and scientific history was James Clerk Maxwell's
development of the four equations for electromagnetism in 1864.
Based upon earlier work by Michael Faraday, the introduction of the
laws of electromagnetism would provide the spark that would
transform the world forever. In much the same way that Newton
derived the laws of motion and gravitation from first principles, by
feigning no hypotheses, and through uncorrupted observation of
Nature, Maxwell was also able to successfully merge the forces of
electricity (E) and magnetism (B) into a system of interactions he
called electromagnetism. His set of equations describes the behavior
of electric and magnetic fields and how they interact with matter. He
was also the first to show that light itself was simply an oscillating
wave composed of intertwined electric and magnetic fields.

The development of electromagnetic theory heralded the
development of Relativity and Quantum Mechanics (QM). Maxwell's
equations were a monumental achievement, not only because of their
elegance, or because of their immense usefulness for technological
purposes, but because they proved that a deep connection between
electricity and magnetism existed. Electricity and magnetism were
once thought of as entirely disparate phenomena. This was one of the
first so-called unification theories, illustrating that forces once
thought to be unique were, in actuality, one-and-the-same.
During the latter part of the 19th century, British physicists
(particularly those following Maxwell's lead in the search to explain
the inner workings of electromagnetism) tended to continually turn
back to the notion of the luminiferous aether in order to help prove or
disprove emergent theories. This embodiment of the aether was so
named because it was believed to be the medium that carried light,
i.e., electromagnetic waves, and was thus called "luminiferous."
During the Victorian period, technological advancements
that spawned the industrial revolution contributed to the rapid
development of a mechanistic world-view. British society was bearing
witness to the triumph of the machine, and the machine rapidly and
drastically transformed the social and cultural landscape. The new
technological developments of the age would shape the spectacles
through which British scientists would view the Universe as well.
These new lenses skewed the vision of theorists at the time, causing
them to view the fabric of space in terms of cogs and wheels: an
invisible yet intricately and seamlessly connected clockwork of
interactions via which objects and light travelled through space. This
emerging industrial revolution gave physicists cause to try and explain
electromagnetism in terms of this new mechanical language of the
day.
Light itself was discovered by Maxwell to simply be an
electromagnetic wave propagating through space. But what did this
actually mean? What were these waves of light propagating in? What
were they made of? We can easily imagine waves of light propagating
through space like waves on the surface of the ocean, which emanate
from a source and roll in towards the shore. But the very idea of a
wave implies movement through a fluid, or some kind of medium. So
the question was: what kind of substance carried these waves of light?
The aether was an attempt to provide a physical explanation
for a rather abstract mathematical representation of Maxwell's
equations for electromagnetism. The form of this physical substance
was modeled after the most prevalent mechanistic imagery of the
time. Even today in our own era, some theorists argue for an
information philosophy as the basis for conceptualizing elementary
particle phenomena, so that matter itself could be thought of in terms
of binary bits of information. The fundamental constituent of our
Universe, according to information theory, is not really bits of "stuff"
but rather bytes of information [iii]. In our contemporary society, this
being the information and computing age, we are naturally tempted
to create philosophical models reflecting our own zeitgeist in
precisely the same way that the Maxwellian physicists did in their
quest for a mechanical interpretation of physical reality.
If nothing else, the aether provided scientists at the time with
a convenient pedagogical tool for describing the way in which electric
and magnetic forces interacted. However, there was a driving desire to
actually prove the physical existence of the aether, and in doing so
provide a physical description of the seemingly magical effects of
electromagnetism. The Maxwellian cohort was envisioning a poetic
and monumental concept that could unify physical phenomena as
purely mechanical movements of the aether, and thus provide a new
paradigm that was fully in-line with the late 19th century's mechanical
view of the Universe. If the problem of the aether could eventually be
solved it would certainly be a scientific triumph to rival all others, and
provide instant fame and glory for those who developed it. Any
mechanical proof of the aethers structure would also be the perfect
way to eliminate the nagging problem of action-at-a-distance, which
was a source of debate since Newtons time.
The Maxwellians wanted a complete theory that would
finally quiet metaphysical questions regarding exactly how not only
electromagnetic signals were propagated, but also how planets
separated by vast distances in space could interact with one another
gravitationally. Following the lead of Descartes, theorists even
developed models to try and show how matter itself might be
understood as a manifestation of the aether in the form of vortex
rings. In this model, atoms could be thought of as tiny stable vortices
within the aether fluid.
But why hold on to these purely hypothetical models of
space if one could successfully predict and harness the phenomenon
of electromagnetism through pure mathematical reasoning alone? The
fact of the matter was that for some time, the aether models simply
supported the theories being put forth. They held up mathematically
as a framework for theories that were often too abstract to make any
real sense to the average person, or to the average physicist for that
matter. The classical physicist could present a working hypothesis to
explain electromagnetic waves and demonstrate how they might be
mechanically propagated through the aether.
Physicists at the time wrestled with this conundrum between
the concepts of "theory" and "method." If the method works
beautifully, is there any real need to provide a concrete theory to
explain why the method works? Is it not enough to simply provide an
elegant set of formulae which can be used to describe how Nature
works even if we still don't understand why it works that way?
The fervor with which the Maxwellians sought to solve the
structure of the aether was largely motivated by a desire to unify
physics in its entirety. This desire remains just as strong today as it
was in Maxwell's time. Born of a desire to provide an all-encompassing
theory of electromagnetism, the aether was invoked
because it was convenient, manageable and contemporary. It seemed
Unlocking the inner-workings of the aether became the Holy Grail for
the British Maxwellians, not only to make their work on
electromagnetism credible, but to also render it immune to doubt and
criticism. Even more persuasive was the tantalizing hint that by
defining the aether, they would also finally unveil the mysterious
inner structure of the Cosmos.
This idea of the aether actually had great philosophical value
overall, but only in its use as a tool. Eventually, with a greater reliance
on purely mathematical approaches to the problems of
electromagnetism, and due to the results of the Michelson-Morley
experiment, the aether was eventually abandoned. As the 19th century
began to ebb away into the 20th, Einstein later explained that:
"mechanics as the basis of physics was being abandoned, almost
unnoticeably, because its adaptability to the facts presented itself
finally as hopeless" [iv].

1.5 Michelson and Morley

The final nail in the coffin for the luminiferous aether came
in the form of a famed experiment performed by Albert Michelson
and Edward Morley in 1887 [v]. The experiment itself was formulated
on the premise that if the Earth was actually moving through a fluid-like
medium, we should be able to detect our movement through it.
Imagine you are traveling on a train. Let's say you decide to
walk to the diner located two cars ahead of you to have lunch. As you
walk along the aisle in the direction the train is traveling, and you
walk at a rate of 4 kilometers per hour (k.p.h.), your speed relative to
the ground outside will be 4 k.p.h. plus the train's velocity, which is
100 k.p.h., yielding a total combined velocity of 104 k.p.h. When
you walk back to your seat after lunch, your velocity relative to the
ground would be the train's speed minus your walking speed.
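In symbols this is just Galilean velocity addition; with the numbers used above,

$$v_{\text{ground}} = v_{\text{train}} \pm v_{\text{walk}} = 100 \pm 4 \ \text{k.p.h.},$$

giving 104 k.p.h. walking forward and 96 k.p.h. walking back.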
If the aether existed in the form envisioned by the
Maxwellians, then the speed of light through the aether should be
shown to have a velocity relative to some "ground speed" of the
aether. Michelson and Morley tested for the presence of the aether by
sending out two perpendicular beams of light from a single point
source, which were reflected by mirrors back to a single detector. The
design of their experiment relied on the wave-like nature of light.

In 1803, Thomas Young demonstrated that when light was


directed at an opaque screen with two slits cut in it, the two beams of
light that came through each slit would interfere with one another to
form a pattern on the wall behind the screen. In what became known as the
"two-slit experiment," Young discovered that light was wave-like; waves of
light could interfere with each other just like waves on the surface of a
pond, creating peaks and troughs of interference. But these results
also spawned great debate and fascination about the mysterious nature
of light which continues to this day. Paradoxes raised by variations of
the two-slit experiment continue to baffle scientists and have brought
to light some of the most perplexing and bizarre behavior ever to be
found in Nature.

Because of Young's pioneering two-slit experiment,
Michelson and Morley knew that if they directed two perpendicular
beams outwards and then reflected them back to a detector, the two
beams would generate an interference pattern indicating whether the
beams of light travelled at different rates due to variance in ground
speed relative to the aether. Taking into account the rotation and
relative motions of the Earth around the Sun, they demonstrated that
no matter what relative direction the beams of light were traveling in,
no interference pattern was generated indicating a preferred direction,
or "flow" of the aether.
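To give a sense of scale (a standard textbook estimate, not a figure taken from this book): for an interferometer of effective arm length L using light of wavelength \(\lambda\), an aether drift v of roughly the Earth's orbital speed (about 30 km/s) should, on classical wave theory, shift the pattern upon rotating the apparatus by about

$$\Delta n \approx \frac{2 L v^2}{\lambda c^2} \approx 0.4 \ \text{fringe}$$

for the 1887 apparatus (L on the order of 11 m, \(\lambda\) of about 500 nm). Michelson and Morley could resolve shifts far smaller than this, yet the measured shift was consistent with zero.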
This experiment silenced the debate over the notion that
there could be a mechanical, fluid-like aether filling space, or one that
acted as the medium through which light propagated. But Michelson
and Morley more accurately demonstrated that there is no preferred
reference frame from which to measure the propagation of light
signals. It is this idea that became the springboard for Einstein's
Relativity theory, where light speed is constant and everything else,
including time, is observed relative to the speed of light.

1.6 Space-Time

Michelson and Morley may have disproved the existence of
the mechanical, luminiferous aether, in such form as it was thought to
exist in Maxwell's era, but this didn't stop another more
contemporary version of the aether from emerging with a vengeance.

Einstein is at least partially responsible for both destroying the rusty,
mechanical aether of old and replacing it with a brand new aether all
his own. Although this time, unlike the Maxwellians, Einstein feigned
no hypotheses for what physical manifestation the aether might take.
Einstein's development of Relativity and the notion of a new aether
termed "curved space-time" removed the idea that gravity was a force
mediated by the ill-defined aether of Newton's time, and thrust a
revamped version of the aether into the limelight.
Einstein's equations show how an object's motion in a
gravitational field will be determined by its "geodesic" path, or
shortest path in units of time between two points in a curved
space-time manifold. This means that as an asteroid, for example, enters the
gravitational field of the Earth, it will be bent into a path which makes
it appear as though some kind of attraction towards the Earth is taking
place. The asteroid may even enter into an elliptical orbit due to this
changing trajectory. But this is not to say, as Newton implied, that
some mysterious force is at work, acting from a distance on the
asteroid and pulling it closer to the Earth.
Einstein introduced the concept of curved space-time to remove the
notion that a force pulls the asteroid into a new trajectory. The most
common example of how a planet or a star produces this curved
space-time effect is often depicted by the analogy of setting a cannon
ball on a taut rubber sheet. The cannon ball will sink down into a
depression produced in the flexible sheet. If a marble is rolled in from
the edge (here representing the asteroid) the marble's path will be
changed due to the curved topology of the rubber surface it rolls
upon. It will accelerate as it rolls downward in a direction towards
the cannon ball and then curve around it. If we weren't able to see the
rubber sheet, we might conclude that the marble was somehow being
attracted to the cannon ball.
This analogy of curved space-time works brilliantly to
describe precisely how objects behave in gravitational fields! There is
no real force required to change the trajectory of the asteroid; it
merely follows a path of least (or zero) resistance through a curved
space-time manifold induced by the presence of the Earth.
The geodesic motions of objects in curved space-time may
be likened to the flight path an aircraft takes when it travels between
cities. If one were to take a direct flight from San Francisco to Paris,
for example, it might come as a surprise to some passengers that the
flight doesn't travel directly from west to east, over Denver, then New
York, across the Atlantic and on to Paris. It flies north in the direction
of Seattle, over Canada and Hudson Bay, then past the tip of
Greenland and finally southward towards Paris. At first glance, this
route seems very odd and inefficient, but if you were to use a piece of
string to try and find the shortest length that will connect San
Francisco and Paris on the globe, you will notice that the polar route
is indeed the shortest and thus most efficient path.
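As a quick check of the string-on-a-globe claim, here is a minimal Python sketch (not from the book; the city coordinates are approximate) computing the great-circle distance between San Francisco and Paris and the latitude of the route's midpoint:

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine formula: shortest surface distance between two points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

def midpoint_lat(lat1, lon1, lat2, lon2):
    """Latitude of the midpoint of the great-circle arc between two points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    bx = math.cos(p2) * math.cos(dl)
    by = math.cos(p2) * math.sin(dl)
    return math.degrees(math.atan2(
        math.sin(p1) + math.sin(p2),
        math.sqrt((math.cos(p1) + bx) ** 2 + by ** 2)))

sf, paris = (37.77, -122.42), (48.86, 2.35)  # approximate coordinates
print(round(great_circle_km(*sf, *paris)), "km")      # ~8950 km
print(round(midpoint_lat(*sf, *paris)), "deg north")  # ~63 deg: a polar route
```

The midpoint of the shortest arc lies far north of either city, which is why the actual flight tracks over Hudson Bay and toward Greenland.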
This path is considered to be the most direct "straight line"
route on any curved three-dimensional surface. In the case of
Relativity, the curvature of space-time is four-dimensional: including
the three dimensions of volume and the fourth dimension of time.
Thus if we consider the asteroid again, it isn't being pulled by any
magical force towards the Earth; it is simply following a straight-line
path of zero resistance, characterized by its shortest time-interval
distance between two points in the curved manifold.
This notion of curved space-time provides the basis for
Einstein's GR. The equations describing gravitational fields work by
modeling this four-dimensional curvature mathematically. This
geometric contrivance has led to the theoretical predictions for black
holes and other strange phenomena in the Universe. But the
problem is this: just what, exactly, is being curved? And if the
vacuum of space is indeed a formless void, then how can "nothing"
have a shape? GR not only invokes, it requires the existence of some
kind of medium or manifold to form the basis of this curvature, and
this medium must be capable of conveying information indicating
whether the space-time an object travels through is curved or not.
On May 5th, 1920 at the University of Leiden in the
Netherlands, Einstein gave an address on the issue of the aether and
said:
According to the general theory of relativity space
without aether is unthinkable; for in such space there
not only would be no propagation of light, but also no
possibility of existence for standards of space and time
(measuring-rods and clocks), nor therefore any space-time
intervals in the physical sense.

The physicality of the space-time fabric was as undeniable as
it was indefinable. GR still works beautifully to this day and has
extended the classical framework of Newton's laws of gravitation and
motion into the modern age. However, it is extremely important to
remember that GR is simply another method we have at our disposal
for making calculations predicting the behavior of objects in the
Universe, particularly in extreme gravitational fields or when
traveling at velocities near the speed of light. "Curvature" should not
be mistaken for an actual, physical description of space itself. It is
commonplace today to substitute the GR model for the real thing,
instead of the other way around. In this regard, Relativity should be
regarded as a word which may be utilized to express an idea or
describe an object; in language, we would rarely confuse the word for
the real thing.
Scientists of this era wish to formulate a true and complete explanation of gravity, in much the same way that the Maxwellians needed to interpret the physical meaning of electromagnetism through an understanding of the luminiferous aether. Newton, like Maxwell, feigned no hypotheses about why his equations were true; he only demonstrated that they were. Einstein did the same with Relativity. However, we must not take this notion of curvature too literally, and we must keep our minds open to other, potentially more complete interpretations of Nature. But alas, even with Relativity, we are still left wrestling with the imponderable demon that is the aether.

1.7 The Casimir Effect
If we look to water as a source of inspiration, in all of its whorls and spirals, waves and currents, one can see a microcosm of the Universe. In each spiral whirlpool one sees the same form as a typhoon viewed from space, and looking out into the Cosmos, one sees spiral formations in the many billions of galaxies inhabiting our visible Universe.
Waves and ripples moving across the surface of water may be likened to sound waves in the air, or waves of light traversing the vast distances of space. The sea is indeed a mirror of the Universe. Simply by observing the elemental forms the ocean creates, and by studying its movements and behaviors, one may observe the fundamental shapes of Nature: the language of Nature itself. The writing is on the wall, so to speak. All we need to do is learn how to read.
One way to read the language of Nature is through the simple
act of observation. Our understanding of Nature is instinctive and
innate, but sometimes we lose our ability to observe objectively. Our
innate reflex to analyze counters our equally intrinsic ability to
understand. We may become so clouded by our preconceived beliefs
about the world that we begin to see only what we expect to see, as if
our minds are lenses that have been warped by the weight of rules and
expectations.
We rush to make sense of an observation within the context
of our own consensual reality, culture and value system. Because of
this, it is important that we remember to hone our objective
observational skills. Great personal and scientific discoveries are
made when we clear away the flotsam and jetsam of preconceived
beliefs and simply observe a process in Nature.
We may also draw analogies from what we observe in the
natural world to foster a better understanding of complex theoretical
predictions that seem to defy common logic. With the advent of QM
and QED our understanding of the inner sanctum of matter has
increased exponentially, and as our understanding of such highly complex and unseen phenomena expands, it becomes increasingly difficult to find commonplace examples enabling us to make sense of these strange new concepts. Instead of inventing culturally subjective mechanisms to explain the physical world, as the British Maxwellians did in their attempt to explain electromagnetism, we should cite examples from our direct and unfettered experience of Nature. Max Born, one of the fathers of QM, said: "My advice to those who wish to learn the art of scientific prophecy is not to rely on abstract reason, but to decipher the secret language of Nature from Nature's documents: the facts of experience."
As we come to learn more about QM, we also begin to
understand more about space. We assume that the vacuum of deep
space is a true and complete void, and that material objects occupy a
three-dimensional volume within it. Space, in this view, is really
nothing more than a dimensional matrix containing matter, and when
the matter is removed, we are left with a volume of empty space equal
to the volume of matter that was removed.
Quantum Field Theory (QFT) models the vacuum of space as
being something quite different than what most of us imagine it to be.
Quantum theory tells us that if we were to take a volume of space here at the surface of the Earth, for example, pump out every last molecule of air, and shield it from all thermal radiation so that the vacuum sat at absolute zero temperature, we would still be left with a vacuum of space filled with energy fluctuations. Energy, therefore, can never be completely pumped out of a given volume the way air can. Energy will always be propagating throughout the volume of the vacuum because the vacuum is, in a sense, composed of energy.
This is due to the fact that energy, as it propagates as an
undulating sine wave, can never fully come to rest. Even at its lowest
energy state the energy must cycle about its ground state; in other
words, energy never flat-lines, it must always cycle. When all the
quantum states of lowest energy are summed across a given volume of space, they add up to form what may be considered a sea of quantum energy, with waves propagating and fluctuating in random directions and intensities. This model suggests that the vacuum is actually composed of electromagnetic waves (i.e. photons of light) that together form an ocean of energy termed the Quantum Vacuum (QV).
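This "never flat-lines" behavior can be stated quantitatively. In quantum theory, each electromagnetic mode of frequency ν behaves like an oscillator whose lowest possible energy is not zero but half a quantum, E0 = hν/2. A minimal sketch (the example frequency is an arbitrary illustrative choice):

```python
H_PLANCK = 6.62607015e-34  # Planck constant (J*s)

def zero_point_energy_joules(frequency_hz):
    """Ground-state ("half-quantum") energy of a single field mode: h*nu/2."""
    return 0.5 * H_PLANCK * frequency_hz

nu = 5.6e14  # roughly the frequency of green visible light (Hz); illustrative
print(f"E0 for a {nu:.1e} Hz mode: {zero_point_energy_joules(nu):.2e} J")
# Tiny for any single mode, but the vacuum hosts modes at every frequency and
# direction, which is why their sum is pictured as a "sea" of quantum energy.
```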
A ball floating on the surface of a roiling sea will be jostled about by the waves; it wouldn't just sit there motionless. Likewise,
when we think of the vacuum as being an undulating sea of quantum
energy fluctuations, it is no longer possible to disregard the effect that
the vacuum has on matter, and likewise, the effect matter has on the
vacuum.
Building upon this conceptual framework, the Dutch physicist Hendrik Casimir predicted that the vacuum should have a rather strange effect on matter.vi The phenomenon he predicted has since been dubbed the "Casimir Effect".
The QV is composed of a near-infinite spectrum of electromagnetic waves of different frequencies8 (cycles per second), amplitudes (wave heights) and directions. The QV is somewhat like quantum "white noise", analogous to the random static seen on a dead television channel9. In free space, far from the presence of any matter, the quantum static of the QV is uniformly random.
The Casimir Effect emerges when matter is placed within
this finely rippled terrain of the QV. The effect is observed when two
flat metal plates are placed parallel to one another in a vacuum. Here,
a boundary condition is established in this otherwise uniform space,
changing the nature of the wave conditions existing in the QV. Each
plate establishes a boundary, physically separating the region between
the plates from everything else outside them. This simple point may seem ludicrously obvious, but the importance of boundary conditions cannot be overstated. They are the essential hallmark of all dynamic systems.
8 Termed "modes".
9 Entirely random and incoherent.
Surrounding the plates, and in the space between them,
energy fluctuations exist within the QV as waves, each moving in a
random direction and impacting the surfaces of each plate from all
sides. However, the waves between the plates will begin to calm
(figuratively speaking) as the plates are drawn closer together. The
calming effect the plates have on the QV between them is due to the
fundamental quantum nature of photons.
Each photon comprising the QV is a wave, and QM states that it is impossible for a half-photon, or any fraction of a photon, to exist. Each photon can only exist as a whole wave represented by a complete 360° cycle10. Thus, no QV modes11 with wavelengths wider
than the gap can exist between the plates! There cannot be one-third
of a wave between the plates, for example. If the distance between the
plates is one micrometer, for instance, then only modes with a
complete wavelength less than one micrometer may physically exist
within that space.
As the plates are drawn very close together, more and more
energy modes become excluded from existence between the plates,
and conversely, more energy modes exist outside the plates than in
between them. Thus, a net difference in energy density between the
outside and inside the boundary is established by the plates. Because there is, in effect, more energy on the outside than the inside, pressure builds on the outside of the plates, pushing the plates together with a force that grows steeply as the separation distance shrinks; in fact, it scales as the inverse fourth power of the gap. That is to say, as the gap between the plates gets smaller and smaller, the force pushing on them becomes dramatically greater. However, this "Casimir Force" is only observed when the distance between the plates is exceedingly small. Likewise, the magnitude of the force pushing them together is equally minute.
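For ideal, perfectly conducting plates, the textbook result for this attraction is a pressure P = π²ħc/(240d⁴), where d is the plate separation. A minimal sketch of that formula, with illustrative gap sizes:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant (J*s)
C = 2.99792458e8        # speed of light (m/s)

def casimir_pressure_pa(gap_m):
    """Ideal-plate Casimir pressure: pi^2 * hbar * c / (240 * d^4)."""
    return math.pi ** 2 * HBAR * C / (240.0 * gap_m ** 4)

for gap_nm in (1000, 100, 10):  # one micrometer down to ten nanometers
    print(f"gap = {gap_nm:5d} nm -> pressure ~ {casimir_pressure_pa(gap_nm * 1e-9):.3e} Pa")
# Halving the gap multiplies the pressure sixteen-fold (inverse fourth power),
# which is why the effect only becomes measurable at sub-micron separations.
```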

10 A quantum bit of energy.
11 Electromagnetic waves.
It wasn't until 1997 that the attractive force Casimir predicted was actually confirmed by two independent experiments.
The first measurement was made by Steve Lamoreaux, then at the
University of Washington in Seattlevii. The measurement was taken
utilizing a slightly curved gold-plated lens mounted on the arm of a
torsion balance. The lens was gradually moved towards a flat plate,
and as the lens face attached to a balance arm was brought within
fractions of a millimeter of the plate, the torque produced by the
attractive Casimir Force between the lens and plate was measured by
the change in electrical force required to compensate for the torque
being produced. Umar Mohideen and Anushree Royviii independently
confirmed the Casimir Force measurement just a year after
Lamoreaux published his results. In an experiment similar to that of
Lamoreaux, Mohideen and Roy used an atomic force microscope to
measure the Casimir Force.
The Casimir Effect marked a key turning point in the philosophical debate on the true nature of the vacuum. Casimir's discovery provided strong evidence, in the form of a physical, measurable force, that the so-called "empty" vacuum of space is, in fact, something more resembling a plenum of energy. Just as a boat is moved by the waves of the ocean, matter suspended in the QV sea affects, and is affected by, the vacuum surrounding it. From this sea of energy, an ocean of possibilities emerges. For when we consider the vacuum to be something rather than nothing, it suggests that the vacuum itself holds the key to understanding the concrete physics behind the abstract interpretation of GR.

All Things Being Equal
Give me a firm place to stand and I will move the earth.
Archimedes (287-212 BC)
2.1 The cosmic counter-balance
For thousands of years the aether has been invoked as a means of explaining various physical phenomena. But all attempts to
explain the aether have been relegated to the realm of speculation and
philosophical exercise. There is nothing shameful about conjuring up
the aether to make sense of the world, however. Our understanding of
Nature seems to require some kind of medium acting as a
background for the exertion of force and movement. Birds in the sky
fly by manipulating the air, fish swim by manipulating the water
surrounding them and human beings walk by pushing off the solid
ground beneath our feet. All these natural actions are brought about
through the action and reaction of forces, and we observe this
balancing of forces in every moment of our existence, whether we are
consciously aware of it or not.
Sir Isaac Newton learned from Galileo that the nature of gravity was to cause objects to always fall at the same rate of acceleration regardless of their mass. He applied this observation not only to objects that fell, but to objects that were thrown as well. Newton found that it was indeed possible to mathematically predict how far an object could go and where it would land if thrown or ejected with a given amount of force. For example, if one fires a cannon, the distance the cannon ball travels before it falls to the ground is dependent upon the angle at which it is shot, the mass of the ball itself, the force with which it is shot, and the constant acceleration of gravity acting on it (not taking into account friction from air, etc.). The same is true for a bullet, or any object one might wish to launch, throw or shoot.
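As a worked example of the kind of prediction Newton's mechanics permits, the idealized range of a projectile launched at speed v and angle θ over flat ground (neglecting air friction, as noted above) is R = v²sin(2θ)/g. A minimal sketch, with an invented muzzle speed chosen purely for illustration:

```python
import math

G = 9.8  # acceleration of gravity at the Earth's surface (m/s^2)

def ideal_range_m(speed_ms, angle_deg):
    """Projectile range over flat ground, no air resistance: v^2*sin(2*theta)/g."""
    return speed_ms ** 2 * math.sin(math.radians(2 * angle_deg)) / G

muzzle_speed = 300.0  # illustrative muzzle speed (m/s)
for angle in (15, 30, 45, 60):
    print(f"launch angle {angle:2d} deg -> range {ideal_range_m(muzzle_speed, angle) / 1000:.1f} km")
# In this idealized (vacuum) case, 45 degrees gives the maximum range.
```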
The revelation enabling Newton to demonstrate that it was gravity keeping the planets in orbit, and not some mysterious or divine structure of the heavens, was born of his ability to predict the precise behavior of falling objects. Newton wondered what would happen if a cannon atop an enormous mountain, reaching high into the sky, fired a cannon ball with any amount of force he wished. He demonstrated mathematically that, fired straight ahead with sufficient force, the ball wouldn't land until it had travelled halfway around the globe, or, with even more force, fully around the globe. If the cannon ball could be shot with sufficient force, Newton imagined, the ball might never land! Instead it would enter into orbit around the Earth, perpetually falling around the globe.
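Newton's cannon ball stops "landing" once its sideways speed is so great that the Earth's surface curves away beneath it as fast as the ball falls. For a circular orbit of radius r this occurs at v = sqrt(GM/r); a minimal sketch for an idealized orbit skimming the Earth's surface (ignoring the atmosphere):

```python
import math

G_NEWTON = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)
M_EARTH = 5.972e24    # mass of the Earth (kg)
R_EARTH = 6.371e6     # mean radius of the Earth (m)

def circular_orbit_speed(radius_m):
    """Speed of a circular orbit: gravity supplies exactly the centripetal force."""
    return math.sqrt(G_NEWTON * M_EARTH / radius_m)

v = circular_orbit_speed(R_EARTH)
period = 2 * math.pi * R_EARTH / v
print(f"Orbital speed at the surface: {v / 1000:.1f} km/s")
print(f"Time to fall once around the globe: {period / 60:.0f} minutes")
```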

In a flash of brilliant insight, he likened this to the motions of the planets as they orbit the Sun, and to the motions of the Moon about the Earth! We take this knowledge for granted, but in Newton's
time this was an absolutely monumental discovery. The planets and
the objects in the heavens were all behaving according to a single,
fundamental law of gravity pertaining to cannon balls and planets just
the same. Gravity was found to be just as ubiquitous and ever-present
across the vast distances of space as it is here on Earth.
In Newton's time, gravity was regarded as a literal force. We continue to use the expression "the force of gravity" colloquially, but we now know that this is not an accurate description. The argument raised by Newton's gravitational model of planetary motion supposed that a constant force existed, always pulling on the planets in order to keep them in motion. The force of gravity had to act on a planet even at great distance from the Sun, the source of that force, and had to do so with nothing but empty space in between. This was a real conundrum for many physicists at the time, because it was unclear how the force of gravity was transmitted across such vast distances of emptiness. However, the concept of force remained the central focus of Newton's laws of motion, and an inescapable conclusion of Newton's thinking, despite this thorny point of contention.
In order for an object to change from one velocity to another, it needs a push to overcome the acceleration reaction force termed "inertia". Objects in uniform motion, or at rest, will remain in uniform motion unless otherwise acted upon by an outside force; this is Newton's first law. Of course, if one considers common earth-bound examples such as the acceleration of a boat or car, there are other factors at play, like friction between the object and the surface it travels upon, adding to the complexity of this relationship. However, in simplest terms, Newton's first law states that it is fundamentally necessary to supply energy to an object to alter its uniform motion.
Let's suppose you are riding inside a rocket, traveling at constant velocity through interstellar space, and you suddenly spot an asteroid in your path. You would immediately power the thrusters in an attempt to avoid the collision. If your seatbelt did not happen to be properly fastened at the time you fire the thrusters, you would be abruptly slammed up against the opposing side of the ship. By firing the thrusters you are using chemical energy to impart a force on the exterior surface of your rocket, pushing the rocket into a new trajectory and allowing you to successfully dodge the oncoming asteroid. But when traveling inside the ship, without a seatbelt, you are really only moving with the same relative uniform motion as the ship. So for a moment, when the ship suddenly veers to one side, your inertial mass wants to maintain its straight-line path of motion. The G-force you feel as you are slammed to one side of the cockpit is a result of your being squeezed between the side of the cockpit pushing you in a new direction, and the inertial reaction force countering your change in motion, impinging on you from the opposite direction. The energy from the thrusters is really only necessary to counter the inertial reaction force experienced when changing direction; however, this isn't only true in free space.
We feel inertial forces all the time. In a car, you feel inertial force when you accelerate, make a sharp turn or slam on the brakes. It's the same inertial force which makes you sink into your seat in an airplane when you are about to take off. Imagine for a moment that you are riding in a car and feeling the force of inertia pushing you into your seat when you accelerate. Now, imagine that everything disappears aside from you in your seat: no car, no road, no landscape outside the window, nothing; just you being pushed into your seat as you speed up. Moreover, let's say you take your foot off the accelerator and you resume a constant rate of speed. Suddenly, the force you had felt pushing you into the seat subsides and you feel no force, other than gravity holding you in your seat. Since nothing else exists that you know of in the Universe (no road, no other cars or trees going by as you look to your left or right), how do you know if you are moving? The only way you can really tell if you're moving is if you accelerate or change your motion and thus, feel the force of inertia.
Let us also assume that even though you can't actually see the steering wheel, you can still feel it in your hands. If you were to suddenly rotate the steering wheel to make a sharp turn, an invisible force would abruptly push you in the opposite direction of your turn. The turn itself merely marks a change in the otherwise constant motion you were in before you turned the wheel. Whether you are traveling in empty space or on Earth, inertial force is felt whenever acceleration, or a change in motion, occurs. The force felt is immediate and local to you, wherever you might happen to be in the Universe. Trying to escape the force of inertia would be like trying to out-run your own shadow. It can't be done. It is the inescapable nature of matter itself.
But where does this strange force come from? Strange and
mysterious as it may seem, this powerful force arises instantaneously
out of the vacuum of space, as if by magic, to physically inhibit
changes in motion.

2.2 Expansion and compression
How is the Universe arranged so that we feel no force when we are stationary or in a uniform state of motion, but we suddenly
experience a force when changing from one state to another, no matter
where we may happen to be? Why does matter resist acceleration if
there is nothing in the way to impede it? And how is it that objects
feel the force of gravity in space, separated from other objects by vast
expanses of nothingness? These questions hint at a connection between the forces we have labeled "gravity" and "inertia". This connection is, in many respects, responsible for the development of modern physics.
The Czech-Austrian physicist Ernst Mach proposed a possible mechanism for inertial force and its connection to gravitation in the late 1800s, while Einstein was only in his teens, beginning to explore the frame of thought that would lead to the development of Relativity a decade or so later. In fact, it was Einstein himself who, in describing Mach's ideas on the subject of inertia, coined the term "Mach's Principle".
Mach's Principle is based on the notion that all matter in the Universe is connected by an invisible bond. Mach surmised that objects felt forces countering their acceleration because all objects were linked together by a web of gravitational interactions. If one imagines an infinite Universe with matter, in the form of stars and planets, peppered throughout space, one may assume a fairly uniform average distribution of matter throughout the Cosmos.
All of these planets and stars, Mach reasoned, would radiate
gravitational fields into the space surrounding them. Mach figured
that if all the gravitational fields from all the matter in the Universe
were averaged across space, then at every place in the Universe one
should feel the effects of this unified field. No matter where one might
find oneself in the Universe, one would always feel a gravitational
resistance opposing any change in motion. It is as though matter is
locked in a gravitational web and when an object attempts to change
its position within it, the web compresses in the direction of
acceleration and is stretched out behind it.
Mach is historically noted primarily for his development of the Mach Numbers (i.e. the "Mach Scale"12), and he predicted what we understand today as the "sonic boom". Mach also studied the work of Christian Andreas Doppler in great detail. Doppler, another Austrian physicist, is noted for the "Doppler Effect".
As an ambulance races along, its siren emits waves of
compression in the air. These waves rush at high speed to our ears,
and we detect and interpret these compression waves as sound. The
rate at which the sound waves travel through the air is generally
constant, so the additive motion of the ambulance causes the waves to
be compressed in its direction of motion. As the ambulance
approaches, the siren sounds higher in pitch13 than it does as it passes
by. As the ambulance rushes past, the sound of the siren will quickly
bend down to a lower pitch. Because the sound waves are compressed
in the direction of motion, the waves are squeezed together, raising
their frequency14. The more sound waves heard per second, the higher
the pitch will be. As the ambulance drives past, the sound waves left
behind are stretched out and reduced in frequency, thus we hear the
pitch bend down to a lower register.
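The pitch bend described above follows from a simple relation: a stationary listener hears a source of frequency f, moving at speed v, shifted to f·c/(c - v) on approach and f·c/(c + v) on recession, where c is the speed of sound. A minimal sketch with illustrative numbers:

```python
SPEED_OF_SOUND = 343.0  # in air at roughly 20 C (m/s)

def heard_frequency(f_source_hz, source_speed_ms, approaching):
    """Doppler shift for a stationary observer and a moving source."""
    sign = -1.0 if approaching else 1.0
    return f_source_hz * SPEED_OF_SOUND / (SPEED_OF_SOUND + sign * source_speed_ms)

siren_hz = 700.0     # illustrative siren tone
ambulance_ms = 30.0  # about 108 km/h; illustrative
print(f"Approaching: {heard_frequency(siren_hz, ambulance_ms, True):.0f} Hz")
print(f"Receding:    {heard_frequency(siren_hz, ambulance_ms, False):.0f} Hz")
# The tone sounds sharp on approach and flat on recession, while the pitch
# actually emitted by the siren never changes.
```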
12 The ratio of an object's speed to the speed of sound in the fluid the object is traveling in.
13 An indication of sound wave frequency.
14 The number of sound waves heard per second.
Mach's Principle for the origin of inertia may be likened to a gravitational Doppler Effect. According to the principle, objects are
pulled uniformly in all directions by the average gravitational field in
space, acting like the air through which sound travels. When an object
accelerates, it feels an immediate opposing force as if the field was
being compressed in the direction of acceleration. Energy input is
required to counter inertial resistance, or in the case of Machs
Principle, to compress or decompress the static gravitational energy in
the direction of acceleration or deceleration.
The Doppler Effect doesn't only apply to sound waves; it also applies to light waves. This is most commonly known in astronomy and cosmology as "red-shift". Blue-shifts and other frequency shifts exist as well; it all depends on a light-emitting object's motion relative to an observer.

Our modern-day cosmological creation story begins with the "Big Bang", as it has come to be known. This paradigm states that the
Universe was born of a single, unfathomably powerful explosion
which gave rise to not only all the matter and energy in the Universe,
but the Universe itself! According to the Big Bang theory, everything
was packed into a single, infinitesimal speck before the Universe was
born. This singularity was the seed of our Universe from which all
things grew. Even time emerged as a result of the Big Bang; but how
is it that we have come to this profound, albeit bizarre-sounding, conclusion? All one has to do is listen for the answer in the changing
pitch of the ambulance siren as it passes by, for our Big Bang creation
story owes its origin to the Doppler Effect.
High in the mountains east of Los Angeles, the famed astronomer Edwin Hubble spent many a night throughout the 1920s observing nebulous smudges in the night sky from the Mount Wilson Observatory. These fuzzy points of light, on much closer observation, were found to be whole other galaxies much like our own. This discovery expanded the scale of the known Universe by several orders of magnitude. Before Hubble's time, the Universe was thought to be only slightly larger than our15 galaxy. At that time, the Milky Way was the Universe, and when Hubble demonstrated that these fuzzy nebulae, once thought to be part of our own galaxy, were in fact distant galaxies themselves, the range of the Cosmos expanded beyond all comprehension.
Along with his colleague, Milton Humason, Hubble set out to measure the distances of these galaxies by studying the Cepheid variable stars within them. Cepheid stars fluctuate in brightness and possess a narrow range of intrinsic luminosity. Because these stars possess such similar luminosity, their relative brightness may be applied as a standard by which to measure the distance of the galaxies containing them: the dimmer the Cepheid, the more distant the galaxy. However, this isn't what Hubble is historically noted for discovering.
Hubble is famous for having combined his Cepheid data with measurements taken by Keeler, Campbell and Slipher, which measured the red-shifts associated with the same galaxies Hubble was observing. What he discovered, as a result of this marriage of observations, would come to be known as "Hubble's Law", which brought about the Big Bang history of the Universe we are so familiar with today.
Here's how he did it: light waves, like sound waves, can Doppler shift. As a light-emitting object moves through space, the light waves emanating in the direction of movement are compressed, raising their frequency. Hence, light waves compressed to a higher frequency are shifted towards the blue end of the visible spectrum. Similarly, light emitted in the trailing direction of the object's motion is decompressed and shifted towards the red end of the visible spectrum.
Hubble noticed that the light coming from the most distant galaxies, based on his Cepheid data, was more red-shifted than that of galaxies close by, and that the magnitude of red-shift was directly related to a galaxy's distance from us, implying that all the galaxies in the heavens were moving away from us. However, the implication was not that the galaxies are moving away from us per se, but that space itself, in which we reside, is expanding in all directions.
Strangely, this requires that every point in the Universe represents the point of origin of the Big Bang! Thus, all matter in the Universe was not ejected from a central point into a pre-existing expanse of space such that it moves away from an origin; instead, the fabric of space itself is expanding, carrying along the matter forming stars, planets and us. Distant galaxies appear the most red-shifted because the space between them and us has expanded more than the space between us and nearer galaxies.
15 i.e. the Milky Way.
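Hubble's Law itself is a one-line relation: recession velocity equals the Hubble constant multiplied by distance, v = H0·d. A minimal sketch, assuming the commonly quoted modern value H0 ≈ 70 kilometers per second per megaparsec purely for illustration:

```python
H0 = 70.0  # Hubble constant in (km/s) per megaparsec; illustrative modern value

def recession_speed_kms(distance_mpc):
    """Hubble's Law: apparent recession velocity grows linearly with distance."""
    return H0 * distance_mpc

for d_mpc in (10, 100, 1000):
    print(f"galaxy at {d_mpc:4d} Mpc -> receding at ~{recession_speed_kms(d_mpc):,.0f} km/s")
# Doubling the distance doubles the red-shift velocity: the signature of
# expanding space, rather than of galaxies flying away from one central point.
```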
Mach's view of inertia implies that an object's motion relative to the fabric of space (i.e. a pan-universal gravitational matrix) is the root cause of inertial forces. However, this view continues to sound, walk and talk a lot like the aether model of old, as if to say that an object's movement through space induces a wake, and would thus require energy input to counter the opposing force as it moves along. This presents a problem, because we know that objects traveling with uniform motion do not experience inertial forces. If Mach's Principle were true, then objects should experience an inertial force at all times, whether they are moving uniformly or accelerating.
The force of inertia, however, is only experienced upon
acceleration, which simply refers to a constant rate of change in
motion. But how is this so? What strange property of the vacuum
could cause this peculiar physical phenomenon? When we move at
constant speed we don't feel anything, even if we are moving
incredibly fast. Yet once we change our rate or direction of motion,
we suddenly feel a force.

When a ship cruises through water, the engine must be constantly running, providing the force that pushes the water out of the way and keeps the ship moving; but adding energy to keep an object in uniform motion simply isn't necessary in space. Once you give an object a push in space, it will maintain that rate of speed (in open, flat space) indefinitely, without needing a constant force behind it to keep it moving. However, if you want to speed up, slow down, or change direction, energy will be required to counter the physical, powerful force of inertia we feel, arising as if out of nowhere.
Uniform motion is defined based upon one's motion relative to external points of reference, such as the position of nearby stars. However, acceleration is fundamentally distinct: it wouldn't matter if you were the sole object in the Universe; if you accelerate, you will know it immediately because you will feel the force of inertia.
The physical nature of inertia has remained a mystery for
eons. Historically, inertia is considered to be an intrinsic property of
matter: full-stop, no explanation required. However, inertia, with its uncanny nature, holds the key to understanding the physicality of space itself! Elucidating the inner workings of inertia will lead directly to the most complete generalized understanding of the Universe ever to be gained by humanity.
Although Mach's Principle of inertia was never formally developed into a quantitative, physical theory, there is a strongly compelling aspect to it which cannot be disregarded. Despite its inadequacies, Mach's conceptualization was at least partially correct in its premise that inertia is a manifestation of the same force we experience as gravity.

2.3 The principle of equivalence
The primary basis for this idea of self-consistency between inertial and gravitational force has a long-standing history and a
very solid foundation; so solid, in fact, that this connection provided the basis for Einstein's theory of General Relativity (GR). The concept is termed the "Equivalence Principle", and states that the inertial force of acceleration in free space is the same as the gravitational force experienced on the surface of the Earth.
The origins of this idea go all the way back to Galileo, who
showed that if you drop a very heavy object like a large stone, and a
light object like a pebble at the same instant, they will hit the ground
simultaneously. This observation seems counter-intuitive because one
might naturally expect that the heavier object would fall faster and hit
the ground earlier than the pebble. On Earth, of course, the air provides resistance to falling objects, and we observe that a piece of paper or a feather falls more slowly than a boulder. However, this isn't due to gravity or the relative mass of the objects; it only has to do with the air having to be pushed out of the way as objects fall through it. On the Moon, however, no atmosphere exists to impede the acceleration of falling objects.
During the Apollo 15 Moon landing, the astronaut David Scott tested Galileo's conclusion by dropping a falcon feather and a geology hammer at the same time, and was able to reassure any skeptical viewers that light and heavy objects fall at the same rate due to gravity; the hammer and the feather landed at precisely the same time.
Gravity's effect on mass is what we call weight, but weight is not synonymous with mass. Although David Scott was the same size and composed of the same quantity of matter on the Moon as he was on Earth, he weighed less on the Moon. This is due to the fact that on the Moon, gravity is weaker than it is on Earth; "weaker" meaning that the acceleration of gravity on the Moon is less than on Earth.
Weight is a measure of the force required to counter the apparent acceleration of gravity. This is Newton's Second Law, expressed by the equation16 F = ma, where the force may be caused either by inertial acceleration in free space, or by the acceleration of gravity. For
example, as the Apollo 15 rocket was launched, all the astronauts
experienced intense G-forces and were much heavier than when the
rocket sat on the launch pad. The rocket's acceleration during launch
is added to the acceleration of gravity. Therefore, the total combined
acceleration impinging on the astronauts means that the force
required to counter the total acceleration is greater. This larger
counter-force is felt as weight; so, the greater the rate of acceleration,
the heavier a given mass appears.
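Newton's Second Law makes the launch example easy to quantify: the supporting force felt as weight is F = m(g + a), where a is any additional acceleration being countered. A minimal sketch, with the astronaut's mass and the launch acceleration invented purely for illustration:

```python
G_EARTH = 9.8  # acceleration of gravity at the Earth's surface (m/s^2)
G_MOON = 1.62  # acceleration of gravity at the Moon's surface (m/s^2)

def weight_newtons(mass_kg, acceleration_ms2):
    """Newton's Second Law, F = m*a: the force needed to counter an acceleration."""
    return mass_kg * acceleration_ms2

astronaut_kg = 80.0         # illustrative mass; identical on Earth and Moon
launch_boost = 2 * G_EARTH  # illustrative rocket acceleration during launch

print(f"Weight on Earth:            {weight_newtons(astronaut_kg, G_EARTH):7.0f} N")
print(f"Weight on the Moon:         {weight_newtons(astronaut_kg, G_MOON):7.0f} N")
print(f"Apparent weight at launch:  {weight_newtons(astronaut_kg, G_EARTH + launch_boost):7.0f} N")
# The mass never changes; only the total acceleration being countered does.
```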
When we talk about gravity, we're talking about acceleration. When you drop a ball, it doesn't fall at 9.8 meters per second; it falls at 9.8 meters per second, per second. Let's say you are standing next to a long, straight stretch of road lined with reflector posts spaced 10 meters apart. A car driving along at uniform velocity might be covering a distance of 10 meters between each reflector post, per second, based on your stopwatch. The car's speed would then be measured to be 10 meters per second, or 36 kilometers per hour (k.p.h.). Then you watch another car start from a stationary position and accelerate at a constant rate. You might measure that in the first second the car travels only 5 meters, 15 meters in the next second, 25 meters in the third second, 35 meters in the fourth second and so on. The car would therefore be accelerating at a rate of 10 meters per second, per second, or 10 meters per second squared [i.e. 10(m/s²)].
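The distances quoted above follow from the constant-acceleration relation s = at²/2; the distance covered during each successive second is simply the difference between positions. A minimal sketch confirming the 5, 15, 25, 35 meter sequence:

```python
A = 10.0  # constant acceleration (m/s^2)

def position_m(t_s):
    """Distance travelled from rest under constant acceleration: s = a*t^2/2."""
    return 0.5 * A * t_s ** 2

for second in range(1, 5):
    step = position_m(second) - position_m(second - 1)
    print(f"distance covered during second {second}: {step:.0f} m")
# Prints 5, 15, 25 and 35 m: each second adds another 10 m/s of speed.
```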
Acceleration can be measured in reverse as well. This occurs
when one applies the brakes in a car to slow down. A constant rate of
change applies to both cases, and is called acceleration whether the
car is speeding up or slowing down. The same magnitude of inertial
force will be experienced when the driver accelerates to speed up, or
puts on the brakes to slow down. The only difference is the direction
with which the force pushes on the driver. However, no forces will be felt in the direction of, or against, motion when the car is traveling at a uniform speed.
16 i.e. force (F) is equal to an object's mass (m) multiplied by its acceleration (a).
Gravity is synonymous with inertia because it is a sensation
of being accelerated even though you may be sitting stationary in a
chair; and because gravity is acceleration, the feeling of force one
experiences while being held stationary at the surface of the Earth is
the same force one would feel due to inertia if one were to accelerate
at the same rate in free space! Einstein imagined a similar thought
experiment to illustrate the Equivalence Principle, and in so doing,
developed the framework for what would come to be known as GR.
Imagine you are in a large box, like an elevator, without windows, and you can't look outside to determine anything about your movement within the environment. All you know about your movement is based on what you can feel from inside the box. While standing inside a stationary box at the surface of the Earth, you feel the acceleration of gravity pushing you to the floor at 9.8 meters per second, per second [i.e. 9.8(m/s²)].
Now imagine that you are floating around weightless inside
the box, which itself is floating in empty space. Then suddenly the
box begins to accelerate in one direction so that the opposing side of
the box moves up to touch your feet. Let's say the box begins
accelerating through space at a rate of 9.8 meters per second, per
second. Without knowing anything about your environment, you are
quickly able to stand up on the floor. What you would feel in that
instance would be indistinguishable from what you would feel
standing stationary on the surface of the Earth! The inertial force of
acceleration is the same, whether you are accelerating through free
space, or sitting stationary in a gravitational field.
When Einstein applied the Equivalence Principle to his geometric interpretation of curved space-time, he was able to demonstrate in a very elegant manner why gravity is, in effect, the same force as inertia. Let's go back to the explanation of curved space-time and geodesic motion mentioned earlier. As an asteroid enters the Earth's gravitational well, its path will be bent around the Earth according to the most direct geodesic path possible within the curved space-time manifold. The asteroid, however, doesn't actually experience a force as its trajectory is altered. To the asteroid, it is simply following the path of zero resistance in a curved topology. Let's also imagine that the asteroid enters into orbit around the Earth. Even though it travels in an ellipse circling the Earth, it still feels no force keeping it in that path. If the asteroid approached the Earth on an impact trajectory, heading directly towards the Earth, even though it is being accelerated by gravity it would still not feel any force!
The asteroid is simply falling along with the acceleration of gravity, like a feather floating on water, pulled along by the swift current in a river. If you held the feather stationary so that the current rushed along underneath it, the feather would then feel the resistance of the current, and a force would be required17 to resist the feather's natural tendency to move with the current. Similarly, the asteroid would require some counter-force to push it away from an impact trajectory. An impact trajectory is a geodesic path, just like an orbital path, and a force would be necessary in order to push the asteroid into another orbit, or out of orbit altogether.
Einstein demonstrated that any diversion from an object's geodesic path of motion in a curved space-time manifold results in an inertial reaction force. If you are traveling through free space18 and you decide to change your direction, you will feel an inertial reaction force. Simply put, if you don't go with the flow of the space-time topology surrounding you, you're going to feel an inertial reaction force; and any time you want to change your path, you will need to expend some energy to do so.
Think of space-time as a landscape of hills and valleys. An object's geodesic path through that landscape is like the path of a river following the lowest possible elevation within that landscape. The river, or a feather floating on the river, will naturally want to flow from high to low with the current. If you decide that you want to push the water up and over a hill, it's going to require some effort to do so. When gravity is considered to be a "well" in space-time, and not a force in its own right, an object floating in a gravitational current of least resistance will only require energy input in order to move itself out of its natural path of least resistance. Einstein tells us that gravity is nothing more than space-time curvature. Inertia is felt when changing the path of motion against the natural path of least resistance within the landscape of space-time.
It is even possible to describe the strange predictions made
by GR using this kind of imagery. No doubt, one of the strangest
predictions made by GR is the existence of the black hole. Black
holes are the result of the gravitational collapse of the most
enormously massive stars. As a massive star pulls in more and more
matter from its surroundings, it can eventually reach a point when it
can no longer hold itself up against the gravitational field it generates.
At this point, the star will collapse under its own gravity to form a
warp within the fabric of space-time so deep that nothing entering this gravitational well can escape, not even light. As matter or light falls towards the black hole, it is forced to follow such a steeply curved, inwardly-bent topology in space that it could never acquire sufficient energy to escape the well.
17 In this case, the force of your fingers holding the feather in place.
18 i.e. in a straight line in flat space-time.
Perhaps an easier, more intuitive way to describe what is going on here is by way of fluid dynamics19. Sound waves travel
much more quickly in water than they do in air because the molecular
density of water is higher than air. This simply means that molecules
of water are much more closely packed together per unit volume than
air molecules. The speed of sound in water is approximately 1,500 meters per secondix, roughly four and a half times faster than in air.

Let's assume for the sake of argument that we may substitute the idea of space-time curvature for an accelerated flow of water, from left to right, towards the central drain pipe of a fluid-model "black hole". Let's also substitute the light waves traveling through space for sound waves in the fluid. Now let's imagine that we are traveling in a submarine suspended in the water and that we are sending out sonic pings which allow us to echolocate objects in our vicinity. Each of
these sonic pings (sound waves) will radiate outwards in concentric
rings away from our ship. But what will happen to the sound waves we emit as we pass by a black hole and travel through its gravitational current?
19 William Unruh of the University of British Columbia in Vancouver is credited with the original concept described by this thought experiment.
As we move closer and closer to the drain (the black hole),
we reach a point where the flow rate of the water around us starts to
surpass the speed of sound in water. If an observer in a boat on the
surface of the water happens to be listening to our ping signals, far
from any gravitational current generated by the black hole, the
observer will hear the pings get progressively lower and lower in pitch
(i.e. lower in frequency), until they eventually come to a stop. Even
though our submarine is still sending out pings, the flow rate of the
fluid we are immersed in has surpassed the speed of sound and the
signal can no longer escape to the surface. The boundary at which this occurs is called the event horizon of the black hole, and refers to the point of no return where the black hole becomes "black": the point beyond which light can no longer be detected by outside observers. In the case of a real black hole, the event horizon marks the boundary at which the escape velocity surpasses the speed of light.
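Unruh's analogy can even be given toy numbers. If we suppose the inflow speed toward the drain grows as v(r) = k/r², the "acoustic horizon" sits at the radius where v(r) reaches the speed of sound in water; inside it, no ping can climb back out. A minimal sketch (the flow constant k is invented purely for this toy model):

```python
SOUND_IN_WATER = 1500.0  # approximate speed of sound in water (m/s)
K_FLOW = 6.0e4           # invented flow constant for this toy drain model (m^3/s)

def inflow_speed(r_m):
    """Toy radial drain flow: speed rises as 1/r^2 approaching the drain."""
    return K_FLOW / r_m ** 2

def acoustic_horizon_radius():
    """Radius where the inflow reaches the speed of sound: k/r^2 = c_s."""
    return (K_FLOW / SOUND_IN_WATER) ** 0.5

print(f"Acoustic horizon at r = {acoustic_horizon_radius():.1f} m")
for r in (20.0, 10.0, 5.0, 2.0):
    tag = "trapped" if inflow_speed(r) > SOUND_IN_WATER else "can escape"
    print(f"r = {r:4.1f} m: flow {inflow_speed(r):7.1f} m/s -> pings {tag}")
```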
In this case, using fluid dynamics to describe the more bizarre predictions of GR yields the same results as the curved space-time analogy. Einstein's GR describes the link between gravity and inertia, and how objects behave in inertial reference frames according to gravity. However, Einstein's GR is nothing more than a geometric interpretation of gravity, just like our sound wave analogy. It is important to remember that GR is just that: an analogy.
In the GR analogy, space-time is represented by four-dimensional geometry, yielding a topological map of space in the presence of matter. Even though Einstein admitted the necessity for some manifestation of the aether as the basis for his space-time manifold, many physicists today insist that space is indeed a complete vacuum. If this is in fact the case, then the obvious question remains: what mysterious property of the vacuum is capable of being curved, and how does an object know whether the space it travels through is curved or flat?
Alas, we are left with the mystery of the force we call inertia: a physical force we feel, arising as if by magic out of the vacuum of space. And we are still left with the force we call gravity, which somehow causes objects to directly affect one another from afar, with nothing between them aside from nothing itself. In this regard, it is important to remember that GR is merely a highly effective descriptive tool, not a literal, physical explanation of Nature.

2.4 Mass-Energy equivalence
Shedding light on the fact that GR is merely a descriptive tool is not meant as a criticism or denial of either Special or General
Relativity. Indeed, Einstein's theories have proven themselves to be some of the most magnificent predictive tools ever constructed, and are directly responsible for revealing some of the most mysterious and compelling aspects of the Universe ever to be imagined. The most far-reaching of these, and arguably the most famous in the history of physics, is the notion of mass-energy equivalence described by the equation E = mc².
The expression E = mc² is misunderstood by many people to mean mass-energy conversion; that is to say, when one form of "stuff" we call matter is converted into another form of "stuff" we call energy. We have become well acquainted with this erroneous idea because of the atomic bomb. We have witnessed first-hand that a staggering amount of energy may be unleashed from a mere handful of matter in an intensely violent explosion. However, the expression E = mc² literally, and more properly, refers to mass-energy equivalence. In much the same way that the Equivalence Principle implies that inertial and gravitational forces are one and the same, so are mass and energy. It is important to begin on this semantic point, with this conceptualization firmly in mind: that matter may be described, expressed and calculated in terms of its energy alone.
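Taken literally, the equivalence is easy to put into numbers: a single gram of matter is equivalent to E = mc², about 9 × 10¹³ joules. A minimal sketch:

```python
C = 2.99792458e8  # speed of light (m/s)

def rest_energy_joules(mass_kg):
    """Mass-energy equivalence: E = m*c^2."""
    return mass_kg * C ** 2

one_gram = 1.0e-3  # kg
e = rest_energy_joules(one_gram)
print(f"Rest energy of one gram: {e:.2e} J")
print(f"Equivalent to roughly {e / 4.184e12:.0f} kilotons of TNT")  # 1 kt = 4.184e12 J
# The c^2 "exchange rate" is what makes mass such a concentrated currency.
```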
One of the most difficult concepts to grasp in relativistic physics, however, is the notion of mass-energy equivalence as a literal expression. E = mc² is without a doubt the most famous equation in history, but very few people actually know what it truly means. And even now, after having seen with our own eyes the horrifying truth of mass-energy equivalence through the development of nuclear weapons, it seems that very few people fully, intellectually accept the notion of mass-energy equivalence as a literal statement.
Even though we have lived with and utilized the E = mc² equation, we seem unable to accept that mass is energy and energy is mass. We demand that the explanation for this statement be consistent
with our everyday experience and intuition. We inhabit a material
world and live out a material existence. We observe with our senses that we and all objects around us have substance. Objects have weight and form, are solids, liquids, gases, and material things are ascribed
three-dimensional volume in space. It is no surprise, therefore, that a
literal understanding of mass-energy equivalence is not an easy thing
for us to grasp or accept. Our minds are constructed around and have
adapted to the immutability of our material Universe.
Einstein realized the connection between mass and energy while considering inertia, in that energy is required to accelerate an
object or to alter its geodesic path through curved space-time. For
example, if we want to accelerate a spaceship through empty space,
we find that the faster we accelerate it, the more inertia the ship will
feel. This means that the energy required to change the motion of
mass is directly proportional to the mass itself. So, by that logic, mass
and energy are equivalent! In a sense, the only literal modeling of Nature arising via the interpretation of space-time geometry is that mass is a measure of energy. Relativity states that if you were able to accelerate an object to light-speed (which, according to Relativity, can never be reached), an infinite amount of energy, in the form of thrust, would have to be applied to the object. Mass, or weight as we commonly think of it, is defined by an object's resistance to acceleration: its inertia. This is why mass is relative, and one of the reasons why GR is termed "relativity".
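The claim that light-speed would demand infinite energy can be read directly from the relativistic kinetic-energy formula, KE = (γ - 1)mc² with γ = 1/sqrt(1 - v²/c²). A minimal sketch, with the ship's mass invented purely for illustration:

```python
import math

C = 2.99792458e8  # speed of light (m/s)

def kinetic_energy_joules(mass_kg, v_ms):
    """Relativistic kinetic energy: (gamma - 1) * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v_ms / C) ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

ship_kg = 1.0e4  # illustrative 10-tonne spaceship
for frac in (0.1, 0.9, 0.99, 0.999):
    print(f"v = {frac:5.3f} c -> KE = {kinetic_energy_joules(ship_kg, frac * C):.2e} J")
# Each step closer to c costs vastly more than the last; at v = c the square
# root vanishes and the required energy diverges to infinity.
```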

The mass of an object also appears to change based on your motion relative to it. If all of our measurements are relative to our own apparent speed or position, then what, exactly, can we measure with any certainty? In this relativistic reality, the only parameter physicists and engineers can actually measure is force. So when we speak of mass, we are actually referring to an object's inertia, because mass scales according to the force required to accelerate it.
For example, if you were to travel in interstellar space at
uniform velocity there is no way to discern, other than by observing
external objects, whether you were stationary or moving. In fact, you
wouldn't even be able to determine whether you were stationary and
objects were moving past you, or if you were moving past stationary
objects. The only way you could know for certain whether you are
moving or not is if you accelerate; acceleration is absolute (in the
sense that you can feel its effect on your body) and is typified by a
constant rate of change in motion.

Mass is like a bank account for the currency of energy. When we push an object like a car or a rocket, we are adding energy to the
system. However, because energy and mass are equivalent, the energy
we add causes an increase in the mass of the object. This may initially
seem very odd, but it is a completely natural consequence of the way
our Universe works. The Universe is a closed system. Energy and
matter are neither created nor destroyed in physical processes, but
merely transformed. In this respect, mass is an energy input-output
system. The energy being added to the system to accelerate it gets
banked as mass.
Energy is associated with all processes in the Universe and is
inextricably linked to the fabric of space itself. We are all made aware
of this connection and indivisibility of mass and energy every time we
accelerate. We feel it hundreds of times per day. We experience the
sensation so often that we have almost become unaware of its
omnipresence. However, in becoming so desensitized to it, we subject
ourselves to the risk that we might not learn from its subtle innuendo
and never uncover the most fundamental and important truths about
our Universe.
Understanding the nature of inertia allows us to understand
the true nature of space and matter. If we come to understand the
cause of inertia, we might understand how to manipulate it as well,
and in turn, manipulate the fabric of space. What would happen if we
could somehow block or negate inertial forces? We could perhaps
accelerate freely without being subject to relativistic constraints, and
avoid being crushed by intense G-forces when accelerating at
incredible rates. The Equivalence Principle necessitates that if we can
affect, modify or negate inertial force, then we would also be able to
manipulate gravity! Imagine for a moment that we could somehow harness the powerful force of inertia, and give ourselves that firm place to stand in Archimedes' challenge. Could we, in fact, move the Earth?

The Glass That is Always Full

3.1 Symmetry and unity
As if by some serendipitous logic, the deeper we penetrate into the mystery of matter, the more we learn about space. This
connection seems almost mystical. Many popular science books
written in the latter decades of the 20th century have sought to link the
discoveries made by Quantum Mechanics (QM) to Eastern
philosophies20. Whether a basis in fact exists for these conclusions or
not, it comes as no surprise that it has proven useful to draw upon
forms of human understanding which are based upon philosophical
ideas. QM has uncovered truths about the inner sanctum of matter that
are so strange, they sound more like magic than reality. No matter
how we may choose to interpret the data, QM reveals deep
connections between matter and space.
Plato developed his group of five perfect solids as a way to
explain a much deeper and intrinsic symmetry in Nature. We now
realize that his model has no real basis in fact, but we still respond to
it nevertheless through an innate human faculty appreciative of the
aesthetics of symmetry. Even today, an uncanny semblance of truth remains in Plato's model of matter. This raises the question of
whether we respond to symmetry because it is the nature of our
Universe, and thus in our nature as well, or perhaps that our
appreciation of symmetry is a synthetic product of the human mind
which we consistently attempt to impose upon Nature. Whether we
are making objective observations or are just seeing what we want to
see, much has been discovered and predicted through analysis of
symmetry and unity in Nature.
Plato's perfect solids, based upon geometric symmetry alone, hint at the true nature of the subatomic world which we have come to better understand in the last century. Likewise, the holy grail many physicists seek today is a single formula describing the elemental symmetry of the Universe: a single Grand Unification Theory (GUT) or Theory of Everything (ToE) explaining all the physical phenomena we observe, and one that will allow us to predict what we have not yet observed. Natural symmetry has spawned the legend of the GUT; it is not known whether it will ever be possible to formulate such a theory, but there is quite a lot of justification for thinking that it will be.
20 e.g. Fritjof Capra's The Tao of Physics (Boston: Shambhala, 1975).
We understand that mass and energy are equivalent, just as gravitational mass and inertial mass are equivalent. Faraday coined the term "electromagnetism" as a result of having unified electricity and magnetism, formerly thought to be distinct forces of Nature. It is this discovery that marked the birth of our modern civilization. The modern world is literally built upon the foundation of electromagnetism, and it is manifest in virtually every aspect of our modern lives. It was Faraday's successor, James Clerk Maxwell, who discovered that light is an exquisitely intertwined pair of electric and magnetic waves. Richard Feynman and Murray Gell-Mann further demonstrated that an interaction involving the weak field, which helps hold atoms together, was simply another aspect of electromagnetism, now referred to as the electro-weak interaction.
Feynman and Gell-Mann's discovery was born of the belief that, because the mathematics of their theory was so elegant and beautiful, and based upon symmetry, it should be correct, even though they lacked key experimental evidence at the time the theory was being developed to prove it. Feynman's theory, as it turns out, has since proven to be one of the most precise and accurate theories ever developed! And all this is due to what was originally a "faith-based" approach to physics, reliant solely on principles of symmetry.
Science will often yield, without much resistance, as
theoretical physicists make new claims which are initially, or at least
partially, based on mere aesthetic appeal. The favorable compatibility
of prediction and observation, based upon symmetry, has proven itself
to be an arrangement worthy of trust and is continually reinforced as
we grow closer to achieving unity.

3.2 Exploring the microcosmos
If mass is so inextricably linked to the fabric of space, then similarly, shouldn't space lend itself somehow to the nature of mass
as well? Our bias as material beings in a material Universe has given
us the somewhat erroneous impression that our investigation into the
nature of matter is a one-way street ending in the foggy cul-de-sac of
QM. But what have our deep investigations into matter revealed about
space? Let's start with the atom and work our way down.
Let's pretend that we are in a spacecraft that can change in size from the scale we live in, down to infinitely small dimensions. As we shrink ourselves down to millionths, then billionths of a meter in size, and zoom in on a tiny fragment of matter, we begin to see the vague outlines of individual atoms. The atoms themselves would likely appear to have a fuzzy or hazy surface, because the surface of an atom is nothing more than a cloud of electrons buzzing around a nucleus centered deep within the interior of each atom. We may imagine that the electrons themselves are tiny pinpoints of negative charge encircling the massive nucleus.
If we measured the electron fog we just travelled through at the surface of the atom to be one kilometer (km) thick, we would have to travel another 50,000(km) before reaching the nucleus! One of the
most startling aspects of the atom, and matter itself, is that it is largely
composed of empty space! The nucleus is comprised of positively
charged protons, clumped together with generally equal numbers of
neutrons, which carry no charge. The proton and neutron are roughly
equal in mass and are each far more massive than the electron. But
unlike the electron, which is a fundamental subatomic particle,
protons and neutrons may be further deconstructed into more
fundamental particles called quarks. Protons are composed of two
Up quarks and one Down quark, and neutrons are composed of
one Up quark and two Down quarks.
A remarkable symmetry emerges allowing physicists to
predict the existence of many other particles. Firstly, simple symmetry
exists in the arrangement of charge within the atom21. A balanced
symmetry also exists in the configuration of quarks within the protons
and neutrons22.
Many subatomic particles like quarks, for example, cannot
exist in the standard energy conditions of our everyday environment.
In order to detect or measure subatomic particles like quarks, protons
must be smashed together at extremely high energies. This is rather
like crashing two cars together at great speed. Crash them together at
a slow speed and they may just bounce off one another with minor
damage, but crash them at enormous speeds and they will explode
into bits. Higher-order particles like quarks are generated as a result of
such collisions, existing for fractions of a second, and only in the
high-energy conditions created by colliding particles at velocities
approaching the speed of light. As the energy is turned up on these
collisions, the array of particles produced becomes more varied and
bewildering.

21 A negatively charged electron cloud encases the positively charged protons of the nucleus.
22 Quarks are arranged in balanced triplets.

We have learned that force-carrying particles exist as well. These particles, called gauge bosons, in effect help to hold the atom together. The boson carrying the electromagnetic force, keeping the electron in place around the nucleus, is termed the photon. This is the very same photon we commonly describe as light. The force-carrying bosons mediating the strong nuclear force, which holds the positive charges of protons densely packed together with neutrons in the nucleus, are called gluons. Although it has never been experimentally detected, quantum theories of gravity predict the existence of the graviton as well, which is much like a photon except that, instead of mediating the EM force, the graviton is thought to mediate gravitational attraction.
Particles are categorized based upon certain characteristics like mass, charge and spin, as well as other traits like type (termed "flavor") and handedness (termed "chirality"), etc. These characteristics are based upon various forms of symmetry. For instance, in our everyday experience we know that if an "up" exists, then a "down" should exist; if a "left" exists, a "right" should exist, and so forth. According to theory, symmetry suggests that particles possess equally yet oppositely charged counterparts termed antiparticles. For example, the electron's antiparticle is the positron. The Standard Model also maintains that theoretical objects composed entirely of antiparticles may exist, behaving as normal matter. However, if pieces of matter and anti-matter collide, the two will annihilate each other in a burst of energy. In fact, all the particles in the subatomic "particle zoo", as it is called, are expected to have corresponding antiparticles based upon this principle of symmetry. But most importantly, symmetry allows particle physicists to predict, or presume, the existence of particles before they are directly observed.
As elementary particle physicists probe ever deeper into the
atom, and smash particles together at higher and higher energies into
smaller and smaller bits, the details become increasingly coarse and
ill-defined. The characteristics of matter which we can easily describe
in our macro-reality begin to lose all meaning in the abstract
landscapes of the microcosmos. We seem to be approaching the
terminal limit of how far we can travel inwardly into matter, and
strangely, what we find at the end of the line tells us more about the
structure of space than it does about matter.
This new world of quantum-space is a very strange and alien
place indeed. As we venture into the scale of the subatomic particle,
we may no longer count on the predictable, mechanical clockwork
rules governing our reality. This is a realm of probability and
indeterminate outcome, where even consciousness sometimes appears
to affect quantum events. It is a reality standing in stark contrast to the
cold, indifferent, cause-and-effect nature of the Universe we know.
The notion that the fundamental nature of the Cosmos is
probabilistic and random was initially quite unsettling to physicists
like Einstein, who [could not] believe that God would choose to play
dice with the Universe23. An ordered, elegant Universe seems to be
the one our sense of aesthetics, symmetry and beauty favors above the
chaotic, topsy-turvy game of chance proposed by QM. Like it or not, the rules of QM, however strange, reflect the truth of things. But this
quantum world of the microcosmos is nothing to be afraid of. In fact,
the more we dispassionately embrace what we are shown in the
quantum realm, the more we stand to discover about the Universe we
inhabit.
By delving ever deeper into the depths of matter, further subdividing and slicing it into thinner sections, a point is reached where matter and space finally converge. It has only been through such investigations into matter that the fine threads and fibers weaving the proverbial fabric of space-time have begun to be revealed. The true, quantum nature of space is uniquely strange and wonderful; so strange, in fact, that it is doubtful whether we could have deduced its curious characteristics based on symmetry alone.

23 Often paraphrased as "God doesn't play dice with the Universe." In a letter to Max Born dated the 12th of December, 1926; quoted in Ronald W. Clark, Einstein: The Life and Times (Wings Books, 1995).

3.3 The Quinta Essentia

Coming to know the inner workings of the atom spurred scientists to make a critical and drastic shift in perspective. This shift in perspective brought us out of the purely classical, mechanistic view of the Cosmos and into the quantum realm. This, in turn, allowed us to view the Universe from an elevated perspective, from where we could observe space and matter coexisting in a reciprocal relationship, contrary to the prior notion that matter floated inertly within an unknowable void.
The most accepted model of the atom prior to the development of QM was an object resembling our solar system, in which electrons were depicted as tiny planets orbiting the massive Sun-like nucleus at the center. Every atom had a massive nucleus at the center, composed of protons and neutrons, orbited by much less massive electrons. But if we were able to travel in our subatomic spacecraft, and fly amongst individual atoms, we would not see individual, spherical electrons in orbit around the nucleus. Instead, we might find something akin to fog, vaguely defining the outer surface of the atom; if we were able to see anything at all, that is.
An imaginary dividing line exists separating our macro reality from the subatomic realm. We might not be able to see electrons buzzing around an atom because at this scale, matter no longer exists in the solid, objective form we are familiar with in our commonplace experience. When we describe how matter behaves, it is convenient to use analogies pertaining to solid objects, like billiard balls bouncing off one another and such. And when we use the term particle to describe elemental structures of the atom, our minds immediately draw upon imagery of equally solid and objective bits of stuff. At the subatomic level, however, matter doesn't actually exist in this form. It is really only convenient and practical to talk about subatomic particles in terms of their energy alone.
We know that light possesses wave-like characteristics which
may be likened to a wave propagating through a fluid. Waves of light
interact and interfere with one another, forming interference patterns
like ripples on the surface of a pond. Yet light also carries momentum,
behaving as though it were composed of individual objects, like tiny
grains of sand. These so-called particles of light are termed
photons. It is this particle-wave duality conundrum which spawned
the development of QM, and fostered an entirely new understanding
of matter.
If one shines light on metal, and the light is of just the right frequency, an electric current may be produced in the metal. A simple, yet somewhat dangerous proof of this is to put a crumpled piece of aluminum foil in a microwave oven. The microwave radiation induces an electric current in the metal, which will arc and spark between the creases and folds in the foil. In 1887, Heinrich Hertz first observed this effect as he shone a beam of UV light on a metallic coil separated from a conducting electrode by a small spark gap, in a configuration very much like a common spark plug. The UV light caused sparks to jump between the electrode and the coil. Hertz also found that if he placed a pane of glass between the spark gap and the UV source, the glass blocked much of the UV light and the sparks decreased in intensity. When he replaced the glass with quartz, which doesn't block UV radiation, the sparks resumed with their standard intensity.
However, Hertz never developed a working theory which could adequately explain this observation. It wasn't until Einstein published a paper in 1905, titled "On a Heuristic Viewpoint Concerning the Production and Transformation of Light"x, that a description was finally offered explaining this strange effect. In this paper, Einstein referred to the phenomenon as the Photoelectric Effect. When light energy impacts electrons in the atoms of metal, some of those electrons are knocked off the atom and begin to flow through the metal, producing an electric current. But this only happens if the light has enough energy (i.e. momentum) to knock them out of place.
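This threshold behavior is captured by Einstein's photoelectric equation; the compact statement below is standard textbook physics added here for reference (the work-function symbol φ does not appear in the text):

    \[
      K_{\max} = h\nu - \phi
    \]

The maximum kinetic energy of an ejected electron is the photon energy minus the metal's work function φ. If hν < φ, no electrons are ejected no matter how intense the light; raising the intensity only supplies more photons, each still individually too weak.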

Einstein's Photoelectric Effect is based on the supposition that light is particulate in nature, as if the light was composed of tiny grains. These grains of light could be ejected from their source with great force, as if from a sand-blaster, to etch away the electrons from the surface of metals. It was this brilliant insight which earned Einstein the Nobel Prize in 1921. Brilliant as his explanation was, it still invoked a very classical way of thinking that flew in the face of convention.
Light was previously experimentally demonstrated to be
wave-like. Maxwell's equations of electromagnetism were rooted in
the notion that light propagated as waves. In the context of
electromagnetism, light itself was understood to be nothing more than
a braided pair of electric and magnetic waves propagating through
space. The famous two-slit experiment demonstrated unequivocally
that light could interact to produce interference patterns, just like
waves on the surface of water. The other concern was that if photons
had no mass, how could they have momentum?
Momentum is a measure of an object's mass multiplied by its velocity. Light is known to have a velocity, of course, but if a photon's mass is zero, then where does the momentum come from that allows a photon to blast the more massive electrons away from their respective atoms? The reason that mass-less photons may be considered to possess momentum is due to the fact that they have inherent energy; E = mc2 states that energy is equivalent to mass.
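As a concrete illustration (a minimal sketch with my own example numbers, which do not appear in the text), combining E = hv with the mass-less relation E = pc gives the photon momentum p = hv/c:

    # Photon momentum from E = h*nu and E = p*c  =>  p = h*nu/c
    h = 6.626e-34    # Planck's constant (J*s)
    c = 2.998e8      # speed of light (m/s)
    nu = 6.0e14      # frequency of green visible light (Hz), illustrative choice

    E = h * nu       # photon energy: ~4.0e-19 J
    p = E / c        # photon momentum: ~1.3e-27 kg*m/s
    print(E, p)

Tiny as it is, this energy is comparable to the few electron-volts binding an electron to a metal surface, which is why a single photon can dislodge one.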

Einstein's explanation for the Photoelectric Effect was profound because it adequately predicted experimental observations utilizing a particulate basis for light. However, the key discovery in this instance was that it is not the intensity of light which produced a stronger electric current in metals; it was the frequency of the light that mattered.
Spanning the EM spectrum are radio waves at the low-frequency end; as the spectrum increases to higher frequencies we find microwaves and infrared radiation, then visible light, then ultraviolet (UV) light. Higher in frequency still are X-rays and Gamma rays. As the frequency increases along the spectrum, so does the energy associated with each of these kinds of EM radiation. The higher in energy (i.e. frequency) the photons are, the more particle-like they begin to behave.

X-ray light is composed of photons of very high frequency; much higher than visible light, for example. We know this empirically because X-ray light can pass right through materials visible light cannot, like soft tissues of the human body. The wavelengths of X-ray light are much smaller than visible light, or microwaves, or even radio waves for that matter, which may be meters in length. The photoelectric effect demonstrates that if higher frequency radiation, such as X-ray light, is directed at a metal, the current produced will be proportionally greater than if UV light or any other lower-frequency radiation is used. This means that unlike water or sound waves, for which energy is measured based on the amplitude (the height of the wave), the strength of an EM wave of light is based on its frequency. X-rays can pass through materials which visible light cannot, not only because the wave is physically smaller, but also because X-rays are much higher in energy than visible light, and thus carry greater momentum.
It's like the difference between firing a football and a bullet at the same speed. The football would likely bounce off an object or explode on contact, but a bullet has a much better chance of penetrating most materials if shot with sufficient force. X-ray photons may also cause physical damage to the cells they pass through, specifically to the DNA, in very much the same way that a bullet causes damage when it strikes an object. X-rays actually tear right through DNA and potentially have the ability to cause harmful mutations in genes, which may result in cancer. This is why your doctor or dentist wants to know the last time you had an X-ray taken. It's important not to let your average X-ray radiation dose exceed a given damage tolerance threshold of the cell, in order to minimize the risk of diseases caused by genetic mutations.
Max Planck derived a mathematical relationship for light momentum and energy in the form of his equation E = hv, which states that the energy (E) of light is equal to its frequency (v), sometimes denoted (f), multiplied by Planck's constant (h). Planck's constant is a measure of light energy as a function of frequency but, more importantly, it describes how the energy is packaged.
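A minimal numerical sketch of that packaging (the frequency chosen is my illustrative value, not the author's): for light of a given frequency, energy arrives only in whole multiples of hv.

    # Light energy comes only in whole quanta: E_n = n * h * nu
    h = 6.626e-34              # Planck's constant (J*s)
    nu = 5.0e14                # an arbitrary visible-light frequency (Hz)
    for n in (1, 2, 3):        # one, two, three photons...
        print(n, n * h * nu)   # ...but never 2.75 photons' worth

A state carrying, say, 2.75 x hv of this light simply does not occur, which is the point of the water-molecule analogy that follows.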
For example, a volume of water may be divided and divided again until one is left with a single water molecule composed of one oxygen and two hydrogen atoms. However, it is not possible to further sub-divide that molecule and still have water. You can have one molecule, or two, or however many you like, but it isn't possible to have two and three-fourths molecules of water. Planck's constant effectively describes the notion that light energy, as related to frequency, comes in whole increments (i.e. quanta) per whole cycle of the wave. Wavelength is measured from crest to crest, or trough to trough, so the cyclic quality of a wave must be factored into the equation, and this is served by including Planck's constant. The most important point to remember is that the frequency of light is synonymous with its energy, just as mass is synonymous with energy in Einstein's mass-energy relationship.


Armed with this understanding it is now possible to
appreciate how Nature has constructed the atom. Materials, composed
of atoms, absorb and emit light-energy. Heat up a piece of iron and it
glows orange and red. A key factor leading to the development of QM
was the observation that light was absorbed and emitted by various
substances at discrete and specific wavelengths, and that these
wavelengths were, in turn, always characteristic of the kind of matter
they were absorbed by or emitted from. The spectral characteristic of
light emitted by matter can thus be used as a signature, allowing us to
identify the composition of distant objects in space, and the
composition of matter in the laboratory just the same24. But how and
why is this true?
This observation was in direct conflict with the pre-quantum solar-system model of the atom. When we look at our solar system, the planets are in orbit at specific distances around the Sun, but this doesn't mean that each planet has to maintain any specific orbital distance. An object can orbit the Sun at any distance chance might allow. It's not as if the Earth, because of its mass or some other physical factor, has to inhabit a certain orbital distance from the Sun; it just happens to be so. Also, if an object in the solar system decays from one orbit to another nearer to the Sun, it may change position in a gradual manner as it spirals inward from point A to point B.
This, however, isn't the case for electrons surrounding the atomic nucleus. Electrons only orbit at discrete, quantized levels; they cannot exist at any in-between distance from the nucleus. If the orbit of an electron decays and changes from one orbit to another, no intermediate position exists that an electron may occupy during that transition. It has one position at one moment, and then instantaneously shifts to a different position. It is as if at one moment you might be sitting at home reading the newspaper, and then suddenly find yourself at the café down the street! All this sounds truly bizarre, that is, unless the notion of particle-wave duality is considered.

24 This method is termed spectroscopy.

When Louis de Broglie was a student at the University of Paris in the early 1920s, Relativity and the Photoelectric Effect were new concepts beginning to take root in science. Based upon Relativity he learned that E = mc2, which states that mass-less photons possess momentum because they possess intrinsic energy. He also learned that E = hv from the Photoelectric Effect, and that a photon's frequency was a measure of its energy. De Broglie astutely realized that E was equal to these two apparently different things; namely frequency and mass. So if photons of light possess frequency, could things with mass, like electrons, also have characteristics of waves? Or greater still, could perhaps all forms of matter, from particles to pebbles to planets, have wave-like characteristics as well? The answer, de Broglie discovered, was yes!
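De Broglie's relation can be stated in one line (the standard textbook form, quoted here for reference; it is not written out in the text): a particle of momentum p = mv is associated with a wavelength

    \[
      \lambda = \frac{h}{p} = \frac{h}{mv}
    \]

which is, in effect, the photon relations E = hv and E = pc turned around and applied to matter.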
Three years after de Broglie derived his hypothesis25, it was experimentally verified by Clinton Davisson and Lester Germer at Bell Labs. Experimental confirmation of de Broglie's hypothesis earned him the Nobel Prize in 1929. Here's how they did it. X-rays cause current to flow in metals, and the X-ray light may also be reflected and refracted as it bounces off the atomic lattice forming the regular structure of metals and other crystals. The diffraction patterns reflected may be utilized to deduce the molecular lattice structure26 of the atoms comprising metals and crystals.

Davisson and Germer decided to turn this idea on its head by directing a beam of electrons at a piece of nickel. They found that the electron beam was refracted in the same way one expects to find with X-rays. The electrons forming the beam were behaving much like photons of light traveling as waves, and were being reflected and refracted off the metal's surface, forming diffraction patterns on a detector. From these interference patterns, the wavelength of the electrons was precisely calculated in the same manner as one calculates the wavelength of X-ray light! Based on what was learned in this experiment, electrons surrounding the atomic nucleus could no longer be considered as little particles in orbit around the nucleus.

25 That matter possesses wave-like attributes.
26 The three-dimensional arrangement of atoms.

Electrons not only appeared to move in waves, they could be considered to be waves.
One might ask how a pebble, or a planet for that matter, may be represented as a wave. We observe objects to be solid, and they don't appear to quiver with wave-like ripples, or move along in a serpentine manner. This is because as an object increases in mass, its wavelength becomes exceedingly and undetectably small. It is only when matter reaches the subatomic scale that its matter-wavelength becomes physically important or detectable.
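To see just how quickly the wavelength shrinks, here is a minimal numerical sketch (the masses and speeds are my illustrative choices, not the author's):

    # de Broglie wavelength: lambda = h / (m * v)
    h = 6.626e-34                  # Planck's constant (J*s)
    m_e, v_e = 9.109e-31, 2.0e6    # an electron at roughly 1% of light speed
    m_p, v_p = 1.0, 1.0            # a 1 kg pebble tossed at 1 m/s

    print(h / (m_e * v_e))         # ~3.6e-10 m: atomic dimensions
    print(h / (m_p * v_p))         # ~6.6e-34 m: hopelessly undetectable

The electron's wavelength is comparable to the size of an atom, while the pebble's is roughly eighteen orders of magnitude smaller than an atomic nucleus.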
The de Broglie wavelength denotes the basis by which the macro and quantum realms are divided; matter becomes subject to its wave nature below a certain scale, and the fine topological details of quantum-space become apparent. To us, space appears completely smooth and featureless, but to an electron, or any other subatomic particle, space is a roiling, rough landscape. By analogy, it's similar to viewing something smooth under a microscope. To the naked eye a substance may appear to be quite smooth, but place it under a microscope and it might look pitted and rough. As the magnification gets finer and finer, matter also loses its tangible, objective characteristics and enters a state of duality, being simultaneously particle and wave-like.
As we transform our perspective and view the electron as a
wave rather than a single particle orbiting the nucleus of the atom,
suddenly things start making sense. If the atom was really like a
miniature solar-system, we would expect the electrons to crash into
the nucleus almost instantaneously due to the mutual attraction
between the negative electron charge and the positive proton charge
of the nucleus. But the electrons never crash into the nucleus. Why is
this so?
The Danish physicist Niels Bohr was the first to apply the wave nature of the electron to the orbital model of the atom. Bohr reasoned that the wave nature of matter explains how and why atomic electrons maintain stable orbits. If we consider Planck's constant, we recall that energy comes in bits (i.e. quanta) based upon the cyclic nature of the wave. Bohr found that electrons occupied discrete atomic orbitals directly corresponding to complete cycles of the electron wavelength. Moreover, the electron wavelength of each orbital was associated with a specific amount of energy.
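Bohr's requirement can be written as a one-line worked equation (standard Bohr-model notation, supplied here for reference): an orbit of radius r_n is stable only when a whole number n of electron wavelengths fits its circumference,

    \[
      n\lambda = 2\pi r_n, \qquad n = 1, 2, 3, \ldots
    \]

Substituting de Broglie's λ = h/(mv) turns this into the familiar quantization of the electron's angular momentum, mvr_n = nh/2π.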


Imagine the classical model of the atom once again, with its electrons orbiting in circular paths around the nucleus. If one cuts a circular path at a specific location, unwinding it into a straight line of precise length, the orbital path becomes analogous to a guitar string held fixed at both ends. When a string of fixed length is plucked, it vibrates at specific frequencies which represent harmonic divisions of its length (as depicted below).

Between the fixed end-points of the string, fractions of waves cannot exist. It's like our collection of water molecules. Three-and-a-half molecules of water cannot exist; they may only exist in wholes. In this case, only whole harmonic multiples (e.g. 1, 2, 3, 4, 5 etc.) of the fundamental frequency27, defined by the orbital circumference, can exist. In this way, the electron is treated as a standing wave wrapped around the nucleus; the only maintainable orbits are the harmonic frequencies physically fitting the circumference.

27 The condition depicted in the diagram; also termed the 1st harmonic.
This quantum, harmonic model explains the stability of electrons in the atom, and why in-between states of electrons don't exist. With this new perspective, the orbital model of electrons was replaced by energy levels, a harmonic model, as a result of QM. Bohr also realized that the energy levels associated with the electrons in atoms describe the absorption and emission spectrum of the hydrogen atom. The hydrogen atom was used to model this effect because of its simple configuration, having one electron circling a single proton. As the electron jumps from a lower harmonic state to a higher one, the frequency and energy of the electron increases. However, in order to alter the energy level of the electron, energy needs to be added or subtracted.
The law of conservation of energy ensures that energy cannot
magically appear out of, or disappear into nothing; it must come
from somewhere. When an electron jumps from a higher to lower
energy level, it is also jumping from a higher to lower frequency.
When this occurs, energy is released as light.
The phenomenon of photon emission from atoms was actually quite well understood even before Bohr fully developed his theory describing why it occurs. Atomic photon emission wavelengths obey a precise harmonic pattern, which was determined by the Swedish physicist Johannes Rydberg. In the Rydberg formula, the wavelengths of the photons radiated from atoms as electrons jump between energy levels can be accurately predicted by simply substituting whole-number harmonic intervals into an equation, along with the Rydberg constant for a given atomic element.

Rydberg's formula, used to predict the frequencies of light emitted from atoms, was subsequently explained by Bohr's model of the atom. Bohr demonstrated that the harmonic pattern derived by Rydberg and others28 works because the frequency of the photons released from the atom directly corresponds to the frequency difference between electron energy levels! It is as if the atom is a musical instrument which may be strummed with light to produce a musical scale of colors.
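For hydrogen, the pattern is compact enough to check in a few lines (a minimal sketch; the particular transition is my example, not the author's):

    # Rydberg formula for hydrogen: 1/lambda = R * (1/n1^2 - 1/n2^2)
    R = 1.097e7                          # Rydberg constant (1/m)
    n1, n2 = 2, 3                        # electron dropping from level 3 to level 2
    inv_lam = R * (1/n1**2 - 1/n2**2)
    print(1 / inv_lam * 1e9, "nm")       # ~656 nm: the red H-alpha line

Plugging in other whole-number pairs reproduces the rest of hydrogen's emission lines, which is precisely the harmonic pattern described above.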
On a musical instrument, each sound-wave is produced at a
particular frequency, differing from others based upon the harmonic
interval between the notes. Instead of sound waves, an atom emits specific frequencies of light based upon harmonic intervals. The frequency of light is, of course, what defines the attribute of color.

28 Ritz, Lyman, Balmer, Paschen, Brackett, Pfund and Humphreys all contributed to and further developed the original formula to apply to other atoms.
The atoms of any element absorb photons at exact frequencies because a "perfect fit" is required to boost the electron to a new energy level. Similarly, when the electron drops to a lower energy level, a photon is released with a frequency precisely equal to the difference between the electron's initial and final energy levels. These differences in energy levels are governed by the permissible harmonic states of the electron in the atomic system.
When a blacksmith heats a piece of iron in the furnace, heat energy is absorbed, boosting the energy levels of electrons in the metal. Then, when the metal is removed from the fire, it glows orange-red in color. As the metal cools, electrons in the iron atoms fall to lower energy levels, releasing a cognate spectrum of photons in the transition, observed as orange and red light; color is the hallmark of this energy exchange. In fact, all the colors in the material world are born of this indissoluble interaction between energy and matter29.

29 Based upon the theory of Quantum-Electro-Dynamics (QED).
This model of the atom is truly beautiful in its elegance; all matter is locked in a ceaseless dance with light. Thus, Einstein's mass-energy equivalence relationship becomes easier to comprehend, because matter is not separated from its energy environment; it is uniquely dependent upon it. Attributes such as wavelength and frequency are the characteristics by which we describe subatomic matter and energy. Although macro-scale matter may be described in mechanical language, the subatomic world can only be adequately described in the language of energy.
The concept of wave-particle duality exists in the quantum realm, providing the basis for mass-energy equivalence. This matter-energy relationship is revealed at the subatomic level because subatomic matter may only be adequately described in terms of energy relationships. Energy is the currency of the Cosmos, and energy is constantly being shared and exchanged in a perpetual dynamic interaction with matter. So if we consider the fabric of space as being filled with energy, it becomes clear how space may be perceived as being the basis for all matter, and the foundation upon which the ever-changing dynamic reality we experience in the material world plays out.
This is why Plato had it right, even if the particular model he used was incomplete. According to Plato, the fifth element (the Quinta Essentia) was reasoned to be the ethereal substance of the void, but it was also the substance upon which matter was constructed. If we think of the Quinta Essentia as being energy itself, then this purest substance of the Universe is indeed the basis for all matter.

3.4 Quantum uncertainty

Erwin Schrödinger was instrumental in sailing this new harmonic model of the atom into uncharted territory and, in so doing, established the field of Quantum Mechanics. In 1925, Schrödinger defined the many configurations wave-like electrons may take in different atoms. Atoms come in much more complex and flavorful varieties than just the hydrogen atom, and these atoms have not just one, but many electrons, existing at various energy levels. Schrödinger was able to unravel the complexities of the atomic animal by demonstrating that electrons didn't always have to occupy the same orbital distance from the nucleus, i.e., based upon the outdated solar-system model. He determined, rather, that the electron could exist in a variety of shapes and configurations based upon its quantum state.
A quantum state defines the characteristics of an electron's energy level, which includes its placement hierarchy (energy), magnetic torque and orbital configuration. The quantum state may also be defined by what Schrödinger termed its wave-function. Schrödinger's wave-function interpretation implies that the electron itself is a continuous wave-form and that, for all practical purposes, it is in all places at once (within its quantum state) around the nucleus! Schrödinger's wave is not so much a physical wave, like those found on the ocean, but is instead a representation of a statistical probability for where an electron is likely to be detected around the nucleus at any given moment.
In the laboratory, certain properties of the electron and other elementary particles may be readily detected and measured. The particulate nature of subatomic matter remains, and is just as valid as our notions of wave-functions and so forth. However, if particles may be accurately represented by their wave nature, then why is it that we are still able to measure them as distinct particles? The particle attribute that we detect, as it turns out, may be explained through Schrödinger's wave-function and the Heisenberg Uncertainty Principle.
When a particle is physically measured, its wave-function is said to have collapsed; the act of measurement reduces the state of the potential into a single point of being30. It's a bit like popping a balloon. You may prick a balloon with a pin anywhere on the surface you wish, but the effect will be the same: no more balloon. But unlike a balloon which has a surface, the "surface" of a quantum state is a transitory illusion.

30 This concept originates from the Copenhagen Interpretation of Quantum Mechanics: http://plato.stanford.edu/entries/qm-copenhagen/
Consider a propeller-driven aircraft or a helicopter. As the propeller blades spin faster and faster, the detail blurs and we end up seeing a ghostly, translucent shape representing the full range of motion the propeller blade traverses as it spins. Now, if you tried to throw a dart in hopes of hitting the propeller blade as it spins around, just by throwing the dart within the range of the propeller's movement, there are going to be instances when the dart passes right through and times when the dart hits the blade dead-on. It's really just a statistical game of chance.
A similar effect is at play when attempting to detect electrons in an atom. Areas exist where we might expect an electron to be, but sometimes it isn't detected. For example, although it might be possible to accurately measure the propeller's rate of rotation, it would also be very difficult to guess the exact position of a single propeller blade at any specific moment.
Werner Heisenberg discovered that mutually exclusive bits of information may only be determined independently in a quantum system, as with the spinning propeller. When dealing with quantum systems, one may only accurately determine either position or momentum; information about both of these characteristics cannot be determined simultaneously. Because of this, an inherent uncertainty exists in quantum measurements. This is referred to as Heisenberg's Uncertainty Principle; it is not possible to know how a particle is moving and precisely where it is simultaneously, because the act of measurement changes the system.
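The principle has a compact quantitative form (the standard statement, included here for reference rather than derived in the text): the uncertainties in position and momentum obey

    \[
      \Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}, \qquad \hbar = \frac{h}{2\pi}
    \]

so the more sharply one pins down where the particle is, the less one can say about how it is moving, and vice versa.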
So when a wave-function is measured, its characteristics are
particle-like; a specific point contains well-defined information about
some aspect of the system. Once the system is measured, the
information gained is only representative of a single state of being.
Consider a balloon again, only this time we are trying to
prick an oddly shaped, asymmetric balloon in the dark, having not
seen it before. If we randomly stab at the air until we happen to hit
any surface of the balloon it will pop, and we may state with certainty
that one part of the surface was located where we jabbed it with the
pin. However, we will not have complete information about the shape
of the entire balloon before it was popped just by touching one point
on its surface. Information about other surfaces once existing on the
balloon vanishes at the moment we pop it. Popping the balloon in this
case is analogous to collapsing the wave-function.
The singular identity of quantum information results in the measurement of particle-like qualities. It's a yes-or-no answer. The balloon either popped or it didn't. Prior to the measurement being performed (before the balloon is popped), the system is defined as a set of probabilities. Upon measurement, one coordinate from many is selected, and thus the information retrieved is one-dimensional, distinct and singular, i.e., particle-like.

3.5 The substantive Universe

At the commencement of this chapter, it was stated that much may be learned about the nature of space by delving deeply into the depths of matter. In so doing, we may travel full-circle to find that matter and energy are inextricably locked in a dynamic interplay which, in turn, defines the physical reality of our Universe. The wonders of the quantum world are breathtaking and bewildering. However, the most astonishing achievement of QM is not what it teaches us about matter; it's what it teaches us about the vacuum of space. It tells a tale so strange that we would likely never have imagined it otherwise. It speaks of a mystery so deep and shadowy that science is only now beginning to grasp the full significance of what a quantum interpretation of space implies.
The chronicle of a quantum interpretation of space begins with Max Planck in the year 1900, who discovered a deep connection between matter and light. In order to better elaborate on the importance of his contribution, a more detailed explanation of the thermodynamic property of matter we call temperature is necessary.
All material objects are subject to the attribute of temperature, which
is a measure of the average kinetic energy (motion) of all the
molecules contained in a substance. All the molecules comprising any
material object are jostling about and banging into one another. The
intensity with which the molecules impact one another is a measure of
the object's temperature. The more energetic the collisions, the higher
the temperature will be. Similarly, the less energetic the motion, the
lower the temperature will be.
At any given temperature however, not all molecules
comprising a given material possess identical kinetic energy. It is the
average kinetic energy of all the molecules in the system which
defines the temperature of the substance. If one were able to plot the
energies of all molecules in the system individually, they would form
a kinetic energy distribution fitting a statistical bell-shaped curve. The
ends of the curve would be representative of the small number of the
least and most energetic molecules, but the greatest proportion of
molecules will have energies clustered around the average value at the
center of the bell curve.
Max Planck discovered that radiated energy may be modeled
in a similar manner to temperature. The thermal energy radiated into
space by any material object, like the Sun for example, is distributed
throughout the space surrounding it. The Sun emits infrared radiation
(part of the EM spectrum) comprised of frequencies below the color
red that the human eye cannot see, but that we can feel as heat. The
Sun, of course, also emits frequencies we can see like red, yellow and
orange, and much higher frequencies like ultra-violet and X-rays that
we cannot see. However, the peak energy emitted by the Sun spans
the visible and infrared bands of the EM spectrum31. Planck
discovered that the distribution of energy in the EM spectrum
surrounding any material object is solely dependent upon the object's
temperature. This collection of energy is termed blackbody
radiation.
If we took an empty metal box out into space and somehow
trapped the Suns radiation inside, we would be, in effect, taking a
survey of the photons emitted by the Sun. In this regard, the
blackbody radiation spectrum is analogous to a telephone survey. In a
standard survey, the interviewer attempts to achieve a statistical
representation of the whole population by measuring the opinions of a
smaller, randomly selected group of individuals. People have vastly varying opinions, but most people surveyed will tend towards a general consensus; few individuals adopt an extreme stance. When the surveyor plots the answers on a graph, the shape of the curve is typically bell-shaped, with most of the people sharing the same answer to a question, and fewer people strongly agreeing or disagreeing. The larger and more random the survey population, the better it represents the entire population.

31 The Sun, with a temperature of 5,780(K), yields a peak radiation spectrum of roughly 500(nm) [the visible light range].
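The footnote's figure can be checked with Wien's displacement law (standard physics; this quick check is mine, not the author's):

    # Wien's displacement law: lambda_peak = b / T
    b = 2.898e-3          # Wien's displacement constant (m*K)
    T = 5780.0            # surface temperature of the Sun (K)
    print(b / T * 1e9)    # ~501 nm, matching the footnote's ~500(nm)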
Planck determined that the radiant energy distribution from an object is dependent upon the object's temperature. In other words, the collection of photons constituting the blackbody energy distribution surrounding any object changes in a very particular way. The composition of the object is irrelevant; temperature is the sole factor defining the prevalence of frequencies that are present. Planck demonstrated how this occurs using an ingenious modeling system. He treated an energy-sampling box as being filled with millions of individual bits of energy termed harmonic oscillators. This is analogous to considering the box to be filled with billions of water molecules.
In this case, let's pretend that energy may be decomposed into imaginary fundamental energy molecules, analogous to a set of tiny strings of equal length32. The kinetic energy of water molecules causes them to bounce around and impact one another, and this is a measure of the water's temperature. However, the energy of each oscillator is measured by the frequency with which the string vibrates. Each imaginary energy string is the same length, so the frequencies with which the strings vibrate may only be quantized harmonics of the fundamental frequency.
In a volume of water, most of the molecules possess values of kinetic energy near the population average. However, some will have much higher and others will have lower kinetic energy. Similarly, in the case of blackbody radiation, most of the oscillators filling the volume of the sample box will be vibrating with a frequency representative of the average energy of the whole, but some strings vibrate at very high or low frequencies. If high and low frequency strings collide, the energy is shared between them. However, the energy exchange isn't a transfer of kinetic energy; it transfers energy in the form of frequency. For example, a collision between two oscillators might cause a high-frequency string to drop to a lower harmonic, and cause the low-frequency string to jump to a higher harmonic via the energy transaction.
32 All water molecules are the same size and configuration.

We may imagine that when a fast moving water molecule collides with a slower one, energy is transferred in the same manner as when a cue ball impacts a stationary billiard ball. The fast-moving molecule recoils with a reduced velocity because it loses kinetic energy as it transfers momentum to the other molecule. In the case of blackbody radiation, a similar situation arises. The energy of oscillators inside the box is based upon the frequency of oscillation (E = hv) and not the kinetic energy, as occurs for water molecules.
Blackbody radiation, like temperature, can dissipate or increase in intensity. For example, if boiling water is poured into a cool container, the water's temperature slowly decreases as the kinetic energy of the water molecules is transferred to the cooler molecules of the container. The water continues to cool until the container and the water reach thermal equilibrium. The same is true if you were to add an ice-cube to a hot container. The heat-energy from the container transfers to the ice, causing the ice to melt and heat up until the container and water reach the same temperature.
Blackbody radiation behaves in a similar manner. If we deposited a closed, empty metal box heated to 100°C into deep space, approaching absolute zero temperature, energy in the form of thermal photons would radiate away from the metal (some into the interior of the empty box) until energetic equilibrium is attained.
Energy emitted by matter into the space surrounding it
(outside and inside) is distributed in a regular and predictable manner.
The population of radiated photons appears as one might expect to see
in a temperature distribution curve if the momentum of each
individual molecule was sampled and plotted on a graph. A peak
forms around the average frequency in the spectrum of photons and
relatively fewer photons have very high or very low-frequencies. The
peak value in the distribution varies according to the temperature of
the object. Actual measurements of blackbody energy distributions are
shown to precisely fit the curve Planck's theory predicts.

[Figure: Blackbody radiation curves for stars of different temperature (T). The X-axis represents the wavelength of EM radiation and the Y-axis represents the relative density of photons according to frequency in the EM spectrum.]
The most innovative aspect of Planck's theory is that it works on the basis of quantum increments of energy. This is to say that, just like water molecules, fractional increments of energy cannot exist. Rather, the energy distribution is composed of harmonic integers (i.e. whole quanta of energy). The incorporation of Planck's constant (h) entails that the imaginary energy oscillators possess energies of hv, 2hv, 3hv etc.; a condition where a string has (1/5)hv energy, for example, cannot exist.
All this seems quite straightforward and easy to conceptualize until one considers the effects of a single oscillator in a metal box at absolute zero temperature. This is where things get a bit strange and the temperature analogy becomes inapplicable; however, amazing new conclusions emerge by way of this consideration. In a blackbody system the single oscillator spreads out to fill the entire box, and does not bounce around inside like a single water molecule would. Remember, photons are also waves, and a wave is, by its fundamental nature, required to cycle; otherwise it would cease to be. This point is of vital importance. The photon inside the box is not a particle bouncing around; it is a wave. It can never fully come to rest, because that would imply that the photon had been destroyed. Energy can neither be created nor destroyed; hence, the photon perpetually fluctuates around its lowest permissible energy state inside the box.
The inevitable emergent conclusion from what we know about QM and Heisenberg's Uncertainty Principle is that any physical system in the Universe must possess some minimum intrinsic energy which cannot be removed. This perpetual fluctuation of photonic energy in space is termed Quantum-Vacuum-Energy (QVE). QVE exists even when all thermal motion between atoms and molecules has completely ceased. For this reason, QVE is also termed Zero-Point-Energy (ZPE), emphasizing that the energy comprising the vacuum is present at absolute zero temperature. Throughout this book, we shall refer to the sub-thermal ZPE as Quantum-Vacuum-Energy (QVE) to emphasize its derivation from QM and its reference to the vacuum.
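This "minimum intrinsic energy" has a precise textbook form (quoted here for reference; it is not derived in the text): the allowed energies of a quantum harmonic oscillator of frequency v are

    \[
      E_n = \left(n + \tfrac{1}{2}\right) h\nu, \qquad n = 0, 1, 2, \ldots
    \]

so even in the ground state (n = 0) an irreducible residue of ½hν remains; this is the zero-point energy.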
Planck's blackbody radiation principle entails that vacuum energy is intimately tied to mass energy, and that the vacuum energy filling the space surrounding matter is just as important as the matter residing within it. All systems possess a ground state of energy attained by equilibration with their environment. For example, many possible sizes of energy-trapping "boxes" may exist in space, and many minimum energy states may exist within those boxes. A solitary photon inside each variation of box possesses different QVE parameters defined by the box it occupies. Thus, different vacuum states (i.e. vacua) must exist, associated with specific classifications of matter. A single atom, for example, interacts with the vacuum by establishing its own boundary condition by equilibration, analogous to the manner in which an empty box floating in space establishes an interactive boundary condition within the QVE33.

33 This partitioning of space becomes particularly important at the level of subatomic particles.
The prediction of QVE leads to a foamy description of space, saturated with frenetic, evanescent fluctuations. If you switch your television to an unutilized channel, you'll see thousands of dots of static buzzing about like bees in a hive. This imagery is physically reminiscent of what's occurring at the quantum level in the vacuum of space; a chaotic jumble of fluctuations at all points in the Universe, whether within inter-galactic voids or within the space between subatomic particles!
Stranger still, vacuum energy exhibits particle-like attributes, with virtual particles instantaneously crackling into existence and abruptly vanishing back into the vacuum. QM permits the creation of virtual particles from pure energy by briefly borrowing energy from vacuum fluctuations.
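The "borrowing" has a standard quantitative expression (the energy-time uncertainty relation, added here for reference):

    \[
      \Delta E \, \Delta t \;\gtrsim\; \frac{\hbar}{2}
    \]

A fluctuation of energy ΔE may persist only for a time of order Δt ~ ħ/(2ΔE) before the loan must be repaid, which is why virtual particles are so fleeting.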
Virtual particles34 are invoked to explain the conservation of energy and momentum occurring in particle lifetimes and decay processes. Moreover, they are also applied to explain the electro-weak and strong nuclear forces within atoms via a virtual particle "catch game" between subatomic particles. The mechanism of the electro-weak force is generally explained as the result of a subatomic transfer of virtual photons. Virtual particles are also utilized to plot the formation and annihilation of intermediate particles that are generated during collisions in accelerators. At research laboratories like CERN and SLAC, particles are smashed together at enormous speeds, and the resulting high-energy subatomic particles produced in these collisions are analyzed using a mapping process that often requires the use of accessory virtual particles. Virtual particles are utilized in such mapping processes to fill in the gaps where details are lacking in these exceedingly short-lived events.
One may then wonder why virtual particles are considered "virtual" instead of real. On one hand, since virtual particles cannot actually be seen or detected directly, we must consider them to be imaginary; but on the same note, they have real, measurable effects. This is somewhat analogous to the manner in which words convey ideas. Words themselves are the real, objective tools facilitating the conveyance of ideas. However, ideas themselves are incorporeal. This shouldn't imply that an idea doesn't exist or have observable effects. An idea may exist as a real force with the power to affect our objective reality as much as anything else. Many wars fought throughout history were wars of ideology, based upon ideas, beliefs, and emotional motivation. Human history has been shaped by our physical requirement to survive and propagate, but also by the forces of ideology and belief. Ideas and beliefs, like virtual particles, have measurable effects even though they, themselves, are not directly measurable.
We now understand that the "vacuum" of empty space is, in fact, quite the opposite of empty; it should more properly be regarded as a plenum. In this way, the Universe is a container which may never be emptied. Rather than a "void", space represents something far more substantive. By way of QM, we have discovered that energy is the quintessential substance filling the Universe.
34 Emerging from, and then dissipating into, the vacuum so rapidly that we may never be able to detect them directly.

4 Making Something of Nothing

4.1 Virtual reality

Although QVE is obliged to exist by the rules of Quantum Mechanics (QM), our psychological acceptance of the QV appears to be more a suspension of disbelief than a sincere conviction. The QV seems a bit too weird to be true. Yet the Casimir Effect has provided substantial physical proof that QVE is real, or at least virtual, but with real, measurable effects. The most dramatic insight to be gained through this level of understanding is that space affects matter just as matter affects space. A deep, mutual connection exists between matter, space and energy which cannot be severed. Matter and the QV are two aspects of a fundamental concept, as if two sides of the same coin.
The existence of Quantum-Vacuum-Energy (QVE) is
revealed by the application of spatial boundary conditions as
demonstrated by the Casimir Effect. From such spatial partitions,
forces are generated due to the formation of Quantum Vacuum (QV)
asymmetry, causing the two parallel metallic plates to be pushed
together. If only one plate were utilized, the QV would remain
symmetrical and appear identical from either side of the plate, and no
force would be generated on the plate. However, bringing two parallel
plates close together causes the QV between them to change. Fewer
QV photons exist between the plates than outside them due to the
boundary the plates establish. All but the smallest wavelengths of
energy are excluded from the space between the plates. The field
asymmetry between the inner and outer vacua generates a net pressure
on the outer surface of the plates, and as the inner and outer vacua
attempt to equilibrate to identical energy states, the plates are pushed
together in the process as if carried along by an increasingly swift
current.
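The magnitude of this push can be estimated from the standard ideal-plate Casimir formula (this numerical sketch is mine; the plate separations are arbitrary examples, not values from the text):

    import math

    # Casimir pressure between ideal parallel plates: P = pi^2 * hbar * c / (240 * d^4)
    hbar = 1.0546e-34        # reduced Planck constant (J*s)
    c = 2.998e8              # speed of light (m/s)
    for d in (1e-6, 1e-8):   # plate separations: 1 micrometer and 10 nanometers
        P = math.pi**2 * hbar * c / (240 * d**4)
        print(d, P)          # ~1.3e-3 Pa at 1 um; ~1.3e5 Pa (about 1 atm) at 10 nm

The steep 1/d^4 dependence is why the effect is negligible at everyday scales yet becomes a dominant force between surfaces brought within nanometers of one another.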
However, the Casimir Effect isn't the only development lending credence to the existence of QVE and its influence upon matter. Professor Stephen Hawking has become one of the most famous theoretical physicists of the late 20th century, and a legend in his own time, yet many people would not be able to say precisely why he has become a household name in our current day and age. If one were to ask the same question about Einstein, one would almost invariably get an answer pertaining to E = mc2, something about Relativity or the atomic bomb etc. So what brilliant insight into Nature has earned Hawking the privilege of sitting in Sir Isaac Newton's chair at Cambridge University? For one, Hawking came up with a theory describing how black holes radiate by kicking virtual particles out of the vacuum and into the real world.
Black holes are referred to as such because they produce a
gravitational field so intense that not even light can escape it. When
an extremely massive star pulls in matter, it generates an ever-larger
gravitational warp in space-time until a threshold is reached. The
threshold marks the point at which the gravitational strength of the
enormous mass causes the star to collapse under its own weight. The
gravitational acceleration becomes so great that not even light can
escape; at this point, the object becomes a black hole because the light
entering its gravity well can never escape to reach our eyes.
When electromagnetic (EM) radiation is emitted from an object, the object is said to radiate photons. Hence, one expects that if black holes attempted to emit radiation, it would get sucked right back in (analogous to the submarine example described earlier). The signal could be sent, but it would never reach the outside world. If we were traveling in intergalactic space, far from any star or material object, rogue black holes marauding through space might pose a serious threat to us should we happen to fly a bit too close. Since they are just as dark as the surrounding space, they would be invisible to us. When we attempt to hunt for black holes with a telescope, we run into the same dilemma. Because we cannot see them, we may only infer a black hole's presence through its gravitational effect on other nearby stars. What Hawking discovered was that there might be a way, theoretically, to detect a black hole in empty space via its subatomic particle emissions. But if light cannot escape a black hole, then how can black holes radiate particles?
Black holes are extremely dense, creating a near-infinite
depression in space-time. At a certain distance from the center of the
black hole, a point exists at which gravitational attraction is low
enough so that light may escape. If light crosses this invisible border it
will be drawn into the black hole. This dividing line encircling the black hole is termed the event horizon, marking the boundary between our Universe and the mystery inside.
But again, if a black hole can't actually emit radiation, then how does it release real particles, as Hawking maintains? The answer lies within the vacuum. Based upon mathematical prediction, the QV seethes with virtual particles, flashing imperceptibly into and out of existence. Virtual particles form in pairs comprised of a particle and its corresponding anti-particle, as required by charge symmetry, only to dissolve back into the QV just as quickly.

Virtual photons are ever-present in the vacuum, and are responsible for the Casimir Effect. But they are also responsible for virtual particle-antiparticle pair formation, as virtual particles are formed by borrowing energy from vacuum fluctuations. Hawking's revelation came as he wondered what might happen if a pair of particles popped into existence on the razor's edge of the event horizon. One of the virtual particles would be sliced away and disappear into the black hole, and the other particle would be cut off from its partner and thrust into reality. Thus, the event horizon is thought to be brimming with orphaned particles that were created as part of a virtual particle pair, and then torn apart by the intense gravity of the black hole. These orphaned virtual particles, in turn, should be detectable as radiation being emitted from the black hole.
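The temperature of this radiation has a famous closed form, T = ħc³/(8πGMk_B); the quick evaluation below is my own check, using standard constants and a solar mass as an arbitrary example:

    import math

    # Hawking temperature: T = hbar * c^3 / (8 * pi * G * M * k_B)
    hbar, c = 1.0546e-34, 2.998e8    # reduced Planck constant, speed of light (SI)
    G, kB = 6.674e-11, 1.381e-23     # gravitational and Boltzmann constants (SI)
    M = 1.989e30                     # one solar mass (kg)
    T = hbar * c**3 / (8 * math.pi * G * M * kB)
    print(T)                         # ~6e-8 K: far colder than the cosmic background

This faintness is one reason Hawking radiation has never been observed directly.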
Even more interesting is the fact that when one particle of the
pair becomes real, the other member of that pair must account for
that addition of mass-energy to our Universe because of the First Law
of Thermodynamics35. The particle of the pair falling into the black
hole is assigned a negative mass-energy value, while the particle that
has been formed on the outside of the event horizon is assigned a
positive mass-energy value. As negative mass-energy rains onto the
positive mass of the black hole, an infinitesimal piece of it is
annihilated. Each of these mass annihilations eats away at the matter
contained inside the black hole, facilitating a net gain of mass outside the
black hole and a net loss inside. Thus the black hole not only appears
to radiate particles, it will eventually evaporate away!
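For readers who want rough numbers, the short sketch below evaluates the standard Hawking temperature and evaporation-time formulas; the figures are illustrative and are not derived in this book.

```python
# Hawking temperature and evaporation time of a black hole.
# Standard textbook formulas; values are illustrative only.
import math

hbar  = 1.0546e-34   # reduced Planck constant (J*s)
c     = 2.998e8      # speed of light (m/s)
G     = 6.674e-11    # gravitational constant (m^3 kg^-1 s^-2)
k_B   = 1.381e-23    # Boltzmann constant (J/K)
M_sun = 1.989e30     # solar mass (kg)

def hawking_temperature(M):
    """T = hbar*c^3 / (8*pi*G*M*k_B): lighter holes are hotter."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def evaporation_time(M):
    """t = 5120*pi*G^2*M^3 / (hbar*c^4), in seconds."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

T = hawking_temperature(M_sun)
t_years = evaporation_time(M_sun) / 3.156e7  # seconds -> years

print(f"T_Hawking for 1 solar mass: {T:.1e} K")   # ~6e-8 K, far below the CMB
print(f"Evaporation time: {t_years:.1e} years")   # ~2e67 years
```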
Hawking's principle is purely theoretical because we haven't
yet detected Hawking radiation, nor have we directly observed a
black hole evaporate or emerge back into our Universe as a neutron
star. However, it may be possible to substantiate Hawking's principle
by other means.
The Equivalence Principle demonstrates how physical laws
are maintained for an object accelerating through flat space-time or
held fixed in a gravitational field of identical apparent acceleration.
Thus, if we produced enough thrust to hold a spaceship fixed
just outside the event horizon of a black hole, this would produce the
equivalent physical condition of accelerating to nearly light speed in
free space. It's somewhat like rowing upstream in a swift current:
one may have to expend a lot of energy, rowing swiftly, simply to
keep pace with a stationary point on the shore.
One highly noteworthy theoretical prediction made by the
physicists Paul Davies and Bill Unruh in the 1970s lends credence to
35 Energy may change form, but cannot be created or destroyed.



Hawking's theory. Davies and Unruh independently derived
a free-space acceleration equivalent of Hawking radiation at
approximately the same time Hawking developed his black hole
evaporation hypothesis. Although different approaches were taken,
Davies and Unruh determined mathematically that if an observer was
accelerated at an extreme rate to nearly the speed of light, the observer
would perceive themselves as being immersed in a haze of thermal
energy, making it appear as though the space outside was heating up.
The effect, although exceedingly slight, may be likened to quantum
friction as the observer tears through the QV while accelerating.
This curious effect is known as the Davies-Unruh Effect.
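To convey the scale of "exceedingly slight", the sketch below evaluates the standard Davies-Unruh temperature formula; the example accelerations are my own illustrative choices.

```python
# Davies-Unruh temperature: T = hbar * a / (2 * pi * c * k_B).
# Standard formula; the sample accelerations are illustrative only.
import math

hbar, c, k_B = 1.0546e-34, 2.998e8, 1.381e-23

def unruh_temperature(a):
    """Thermal temperature perceived by an observer with proper acceleration a (m/s^2)."""
    return hbar * a / (2 * math.pi * c * k_B)

print(f"{unruh_temperature(9.81):.1e} K at 1 g")      # ~4e-20 K: utterly negligible
print(f"{unruh_temperature(2.5e20):.1f} K")           # ~1 K needs ~2.5e20 m/s^2
```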
The Equivalence Principle states that gravitational
acceleration is equivalent to mechanically induced acceleration. In
either case, a force is experienced when an object attempts to deviate
from its geodesic path through curved space-time. When we observe a
comet orbiting the Sun, its elliptical path is experienced as being a
straight line of least resistance through curved space. The only way
that the comet can move out of its geodesic path is if energy is
supplied to counteract inertial resistance as it deviates from that path.
An alternative way to change the comet's path would be to place
another massive object like a planet nearby, disrupting the space-time
curvature and redirecting the comet. In both cases, due to mass-energy
equivalence, energy is simply being added to the system either
through thrust-energy supplied to the object or by adding mass-energy
to a region of space in the vicinity of the object.
The Earth may be regarded, in relativistic terms, as residing in
a well of space-time curvature. As the space shuttle launches from the
Earth's surface, it gradually climbs uphill to get free of the Earth's
gravity well. In the case of a black hole, the space-time curvature at
the event horizon is analogous to the hill becoming vertical; which
means that an infinite amount of energy would be required in order to
escape. Similarly, if our spaceship is traveling in a straight line
through flat, interstellar space, and we decide we want to change
direction, we would need to fire our thrusters in order to change our
path. Changing paths always entails accelerating, and by accelerating
we are, in effect, curving our geodesic path, and curving our local
space-time by adding energy to the system.
Just as gravity is discussed in terms of space-time curvature,
so is acceleration. As an object is accelerated to near light speed, an
apparent near-vertical space-time curvature is experienced. To the
accelerated object, it would appear as though it were hovering near
(but not at) the event horizon of a black hole! At this acceleration-induced
event horizon, the object being accelerated perceives the space-


time around it to be warming up with thermal radiation as virtual
photons are compressed in its vicinity. Since the object cannot
actually be accelerated to light speed (as is represented by the event
horizon of a black hole), Hawking radiation and Davies-Unruh
radiation are not entirely congruous, although both occur by way of
the same mechanism.
High-energy particle physicists are laboring to find a method
of detecting Davies-Unruh radiation in a particle accelerator when
subatomic particles are accelerated to near light-speed.xi If this
radiation is observed, it will not only provide proof for the theories of
Davies, Unruh and Hawking, it will provide further vindication
for the importance and physicality of the QV.
Gravity and acceleration appear to have an effect on the
vacuum, based upon Hawking Radiation and the Davies-Unruh
Effect. In both cases, intense gravitational fields or extreme
accelerations physically stress and tear the fabric of space-time.
Both of these effects are expressly due to mass and its
interaction with the vacuum. Mass warps space-time to produce
gravitational fields and it experiences inertial resistance upon
acceleration. Both of these effects may also be described by invoking
Einstein's concept of space-time geodesics. Since the Hawking and
Davies-Unruh Effects deal with gravity and acceleration respectively,
one faces a revolutionary and inescapable conclusion: that mass is
completely and impartibly linked to the QVE of space!
We know from Einstein that mass and energy curve space-time.
However, Einstein did not appeal to QM as being the mediator
of this process. What Nature has written for us in boldface type is that
GR and QM, formerly considered to be disparate aspects of the
Universe, are indeed quite capable of being unified. In fact, Relativity
and QM must already be unified; the physics community just hasn't
been able to figure out exactly how quite yet . . . or have they?

4.2 Mutually assured construction

If mass directly affects the vacuum through mechanisms such
as the Hawking and Davies-Unruh Effects, and the vacuum can affect
matter through the Casimir Force, could it also be possible that the
properties of mass are attributable to the QV? We have discussed how
mass affects the vacuum and vice versa; but in these examples, mass36
remains independently defined. Recently, scientists have begun to

36 A descriptive attribute of matter; a measure of its energy.


investigate whether the specific attributes defining mass might
actually be physical manifestations of QVE.
Matter is comprised of atoms, themselves composed of
subatomic particles which may be classified in terms of their energy
and described as wave-functions. Particles also possess charge; for
example, a proton carries a positive charge while electrons carry a
negative charge. Charge is relativistically invariant, meaning that,
unlike mass, length or time, it doesn't appear to change with
relative velocity. Charge is more akin to light, in that it is a
standard by which other relativistic effects are measured.
But what is charge anyway? What does it mean when
someone speaks of positive or negative charge? One might consider it
to be analogous to the opposite poles of a magnet, where the northern
pole emits a field and the southern pole seems to re-absorb it, like a
one-way revolving door allowing passage either in or out.
However, this doesn't describe what charge actually is. The truth is
that no one really knows what charge is. Charge is certainly a
well-characterized attribute, but the question "what is charge?" is
presently unanswerable. Some physicists believe that every charge is
akin to a miniature black hole singularity, or a dimensionless
mathematical point generating or absorbing field energy. Although
much debate surrounds the fundamental nature of charge, an
electrons charge is characterized by a rather fascinating key attribute:
electrons continually radiate EM waves (photons) and generate
electrostatic fields.
EM force is transmitted via an exchange of photons. The net
charge of a single atom is typically zero, as there is a balance between
the number of electrons and protons it carries, causing charge effects
to be neutralized. The continual exchange of photons is what mediates
the attraction and balance between opposite charges. If a localized
accumulation of electrons builds up in a substance, an electrostatic
field is produced. When the repulsive force between electrons
becomes too great, the charges arc to a region of lower potential. A
bolt of lightning is an example of this release of electrostatic energy.
When an electron is in motion, it generates an EM field, measured as a
collection of photons. Whatever an electron may physically be, it is
characterized by an EM field propagating into space. Even though
photons forming an EM field are mass-less, their interaction with
charged matter imparts either repulsive or attractive forces.
Magnetic force is produced through the interaction of fields,
coupled to the field source (the magnet itself). For example, as one
brings two like poles of a pair of magnets together, one finds it
increasingly difficult to make the surfaces of each magnet touch. A


strong repulsive force is being transferred to each magnet via the
fields they extend into the space surrounding them. Similarly, the field
emanating into the surrounding space is coupled to the magnet, so the
magnet experiences the force imparted by the field.
Whether it is through the Hawking or Davies-Unruh Effects,
or the fundamental connections between electricity and magnetism, a
deep connection exists between space and matter, field and particle. A
synergistic relationship is at play, enabling the existence of all things.
An unceasing, dynamic exchange occurs, which provides structure to
the Cosmos. Were it not for the perpetual dance between space and
matter, the Cosmos would cease to be. The key to our continued
understanding of the Universe is our acknowledgement of the
connection between matter and the QV. If we are going to make
further progress in science, we need to change our collective
perspective and begin thinking in terms of systems and interactive
wholes rather than disconnected, singular entities.


5 Mass Illusion
5.1 A matter of terms

It is very easy for us to take for granted this truly strange and
mysterious attribute called mass. It is so fundamental to our everyday
experience that few people pause to consider it. Of course, scientific
progress has yielded a vast working knowledge of matter extending
into the furthest depths of scale. We have come to know the inner
structure of the atom, and that the atom is the basic building block of
matter. This knowledge not only permeates, but also creates the
foundation upon which our modern civilization is built. It has enabled
the development of the field of chemistry, and through application of
this knowledge we can create a seemingly infinite array of useful
compounds and materials. Indeed, our modern way of life on Earth is
rooted in this deep working knowledge of matter. However, when we
talk about matter we aren't necessarily talking about mass. Mass isn't
so much a thing, like matter is; it is an attribute of matter, in much the
same way that temperature is an attribute of matter. Temperature may
vary due to the amount of thermal energy a given material possesses,
but temperature doesnt define the atomic or molecular structure of
matter. Likewise, mass is a measure of the energy embodied by
matter, and represents a physical attribute associated with all matter at
all levels of scale.
Matter experiences inertial reaction forces upon acceleration,
gravitational attraction to other objects, and is subject to relativistic
effects. Not only does it warp space-time to generate a gravitational
field, but depending on an observer's motion relative to an object, the
mass of that object may appear to change when the observer alters
their motion relative to it. Moreover, Relativity states that mass is
energy and energy is mass.
If mass is nothing more than a synonym for energy and is
subject to relativistic effects, then how does it assume the physical
attributes we associate with matter? Why does matter resist
acceleration and why does it gravitate? Although Einstein and Newton
invented marvelous and ingenious methods for modeling and
predicting the behavior of mass, their models do little to explain why
mass behaves the way it does, or what causes matter to have the
particular set of attributes it does. If GR cannot explain the physical
origin of the collection of attributes we call mass, then what can? The
answer seems to be inscribed within the very fabric of space and time.
All we must do to uncover the answer is decipher Nature's language.

Matter and energy are locked in a ceaseless dynamic
exchange and define one another through this intimate collaboration.
In quantum reality, the barriers defining individuality are nebulous
and vague. Matter, and the space in which it resides, may no longer be
considered separate entities. Einstein states that energy and mass are
equivalent, and by way of Quantum Mechanics (QM) we are able to
glean literal meaning from this statement. Could the quantum
connections between matter and energy be utilized to explain the
properties of mass? The answer to this question is beginning to take
shape at the forefront of theoretical physics. A fresh understanding of
the quantum origins of mass will lead to new discoveries and
unparalleled technological advancements so profound, that the course
of human history will be radically altered.

5.2 Intrinsic inertia

The connection between the field and the field-source has
been explored as a means of describing one way in which an electron
might acquire the attribute of mass. This connection provides a
possible explanation for why the Equivalence Principle holds true,
hinting at the possible mechanism underlying General Relativity
(GR). Physicist Vesselin Petkov at Concordia University in Montreal,
Quebec describes the historical basis for what he terms Classical
Electromagnetic Mass Theoryxii.
Petkov hints at the possibility that physicists may have
uncovered the origin of inertia long ago, had they not been so dazzled
by the bright lights of Einstein's geometric space-time curvature
early in the 20th century. If one considers classical models of the
electron proposed by such physicists as Thomson, Maxwell and their
successors, a proposed mechanism for inertia and Relativity appears
to emerge quite readily through the nature of the electron. What this
model proposes is that the force of inertia is simply the result of the
charge source interacting with its own field while accelerating. This
describes a classical, physical and completely intrinsic model of
inertia, helping to explain the observation that inertial force is local to
matter, and immediate in action, i.e., it is not a force to be
transmitted to matter by mysterious means.
The model for intrinsic inertia posited by Petkov is based
upon the electron model from Quantum Electrodynamics (QED),
dealing with recoil forces on the electron charge as it absorbs and
radiates virtual photons. As the electron is accelerated, it senses that
the surrounding virtual photons it interacts with are asymmetrically
red or blue-shifted. However, to make the reasoning behind this


model amenable to principles described in previous chapters, it shall
be interpreted via the Doppler Effect.
If we were moving with uniform motion alongside an
electron and were able to view it, we would expect it to be perfectly
spherical. In this model we shall presume that the electron is a
point-like singularity (the source of the charge), radiating a uniformly
spherical electromagnetic (EM) field in all directions. Imagine that the
charge is analogous to a tiny ambulance with its siren on. The siren
emits sound waves of consistent frequency that we can hear. When we
travel alongside the ambulance at the same speed, or if the ambulance
is stationary next to us, we hear the siren as undistorted with
consistent pitch. If we could view the sound waves emanating from
the ambulance, we would see a series of perfectly uniform concentric
waves radiating spherically outwards like a rain-drop in a pond.
However, if we were able to observe the electron as it
accelerates past us, it would appear to be radiating an asymmetrical
field. As the charge accelerates, the EM field emitted at the speed of
light in all directions becomes compressed in wavelength in the
direction of its acceleration, and decompressed (i.e. stretched out) in
the trailing direction. The electron and its field now appear egg-shaped,
with the electron offset in the direction of acceleration. If the
electron were an ambulance siren, we would notice the pitch rapidly
dropping as it passes by.
If we consider the charge source to be coupled to its own
field, we begin to understand where the reaction force against
acceleration originates. As the EM field is continually compressed to
a higher frequency in the direction of acceleration, the energy of the
field in the direction of motion increases proportionally. Conversely,
the trailing waves are decompressed to a lower frequency and
energy37. E = hν states that the leading compressed waves possess
greater energy than the trailing decompressed waves.
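A rough numerical sketch of this asymmetry (my own illustration, not from the text): the relativistic Doppler factor raises the photon energy E = hν seen ahead of the motion and lowers it behind.

```python
# Relativistic Doppler shift of photon energy fore and aft of a moving charge.
# Illustrative sketch of the field-energy asymmetry described above.
import math

def doppler_energy_ratio(beta):
    """Return (leading, trailing) energy factors for speed beta = v/c."""
    leading  = math.sqrt((1 + beta) / (1 - beta))  # blue-shifted, ahead
    trailing = math.sqrt((1 - beta) / (1 + beta))  # red-shifted, behind
    return leading, trailing

for beta in (0.1, 0.5, 0.9, 0.99):
    lead, trail = doppler_energy_ratio(beta)
    print(f"v = {beta}c: leading x{lead:.2f}, trailing x{trail:.3f}")
# The fore-aft imbalance grows without bound as v approaches c.
```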
The electron charge source is analogous to a ball suspended
in a two dimensional (2D) box by two springs attached to opposite
sides. As the ball moves in one direction inside the box, one of the
attached springs is compressed in the direction of motion while the
other is decompressed (i.e. stretched out). The total energy between
the springs remains constant, but the energy in the springs is shifted

37 The source experiences a push backwards from the compressed
energy in front of it and a pull backwards from the decompressed
energy behind it (analogous to the pressure drag associated with the
motion of submerged bodies in fluids).

asymmetrically divided between them during acceleration. When one
spring increases in energy, the other decreases in energy.

When the electron is accelerating, it perceives itself to be
immersed in an asymmetric field which is more energetic in the
direction of motion. However, the electron seeks existence at its
equilibrium state, with neither of its springs deformed and sharing its
energy equally in all directions. Whenever compression occurs, the
electron experiences a counter-force acting to nudge it into a resting
state of equilibrium within its environment. The ball, attached by
springs inside the box, moves independently from its frame to a
certain extent, but the forces acting on the ball and box cause them to
co-move, adjusting to each other as they change position.
For example, if the box were to enter a region of curved
space-time, the box, being defined by the Universe in which it exists,
would deform asymmetrically. If the ball inside the box is held fixed,
the springs would be forced out of equilibrium and asymmetrically
deformed. However, to keep pace with the energy disequilibrium of
its surroundings, the ball naturally moves, centering
itself in the area of lowest energy within the box. The movement of
the ball inside the deforming box describes the inertial motion of
free-fall in a gravitational field.
When Einstein developed GR, one of the tools he utilized
was an elevator thought experiment. An elevator
compartment is very handy as a descriptive tool in this regard because
it provides a way of walling-off the Universe and considering the laws
of physics to exclusively exist in a small, local volume of space-time.
The elevator is analogous to the box, ball and spring model previously
described. Since there are no windows in the elevator, it is impossible
to determine any information about one's location, direction or


velocity, etc. Einstein imagined riding inside such an elevator
compartment in two locations. One involved being held fixed in the
Earth's gravitational field and the other in free space. Let's start by
riding inside Einstein's elevator in free space.
At first we find ourselves floating about freely inside the
elevator. We are weightless and do not experience up or down
orientation. However, we have decided that the elevator can only
travel in what we, inside the elevator, perceive to be an up or
down direction, so we shall refer to the tiled surface inside the
elevator as being the floor. Now let's imagine that we place our feet
against the tiled floor to mimic standing upright. To do our
experiment, we brought with us a rubber ball to bounce around inside
the elevator. In a zero-gravity environment, if we threw the ball in a
perfectly straight line to the wall on either side of us, the ball would
bounce back and forth several times, striking opposing walls at the
same height without ever falling to the floor. But what would
happen as the elevator started accelerating in what we regard as being
the up direction?
If the elevator began to accelerate upwards fast enough, as
the ball bounced back and forth between the walls, the ball would
suddenly appear to fall to the floor. The rate at which it would fall
would be precisely the same as the rate of the elevator's acceleration,
because the ball isn't really falling in this case. The ball remains in the
same place as the elevator begins accelerating past the ball's position.
However, according to our frame of reference, defined by the interior
parameters of the elevator we move along with, the ball appears to
accelerate towards the floor. If we traced the ball's path inside the
elevator, it appears to fall along a parabolic trajectory, just as it does
after being thrown horizontally at the surface of the Earth.
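The parabolic appearance is simple kinematics. Below is a minimal sketch of my own, with made-up numbers, of the ball's path as seen from inside the accelerating elevator.

```python
# Path of a horizontally-thrown ball as seen in an elevator accelerating at a.
# In the elevator frame: x = v*t, y = -0.5*a*t^2, i.e. a parabola.
v = 2.0    # horizontal throw speed (m/s); illustrative value
a = 9.81   # elevator acceleration (m/s^2), chosen to mimic Earth gravity

for step in range(5):
    t = 0.1 * step
    x = v * t              # uniform horizontal motion
    y = -0.5 * a * t**2    # apparent "fall": really the floor accelerating up
    print(f"t={t:.1f}s  x={x:.2f}m  y={y:.3f}m")
# Eliminating t gives y = -(a / (2*v**2)) * x**2: a parabola.
```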
If the elevator sat on the surface of the Earth and we threw a
ball at the wall, it would bounce off and fall to the floor along the
same parabolic trajectory as it would inside the elevator accelerating
in free space at the same rate as Earths gravity. The idea leading
directly to the development of GR and the concept of curved spacetime was that the ball in the elevator may be replaced with a beam of
light. The acceleration required would be much, much greater, but if
we could accelerate fast enough, a light beam (or photon) propagating
from one side of the elevator would appear to bend towards the floor
along a parabolic path. Paths of light in accelerated reference frames,
such as the frame defined by the elevator compartment, are geodesic
paths defining the topology (i.e. curvature) of space-time!
Inside an elevator floating in free space, the path of light
will be a straight line from one side of the elevator to the other.

This tells us that the observed space relative to our reference frame is
flat. When the elevator accelerates in free space, the path of the
light beam bends and our reference frame then tells us that the
observed space-time is curved.
Space-time in a gravitational field always appears to be
curved, and curvature defines the depth of the gravitational well
produced in space. The more massive the object, the greater the
curvature and the apparent acceleration experienced. Similarly, as
acceleration rates increase, the greater the apparent curvature of space
will be38. The whole of GR theory is based upon the pathways of light
within our perceived reference frame in space.
Now, let's re-visit our model of intrinsic inertia for the
electron. Whether the electron is moving in an accelerating elevator in
space, or falling to the Earth in a gravitational well, it is always
moving in accordance with our perceived view of the space-time
around it. To us, an electron may appear to fall to the ground due to
gravity, but the electron perceives itself as being in equilibrium with
its environment and not experiencing a force causing it to fall.
Human experience causes us to believe that the force of gravity is
pulling the electron to the floor. No pull, no force and no
gravity exists per se; we simply observe an electron moving in
equilibrium with the geodesic path of lowest energy encasing it,
which happens to be curved (i.e. asymmetrical) in this case.
If the electrons field is uniform and it enters a region of
space-time asymmetry (such as a gravitational field), it responds to
environmental conditions by falling in search of an equilibrium
state within that asymmetric space-time. Similarly, any imposed
perturbation of the electrons natural path produced by altering its
intrinsic energy to an asymmetrical state within a flat space-time
background (what we term acceleration), requires energy input.
When an electron is held stationary in a gravitational well
(e.g. the surface of the Earth), ambient space-time appears curved and
the object continually responds to it. When the electron falls freely in
curved space-time, it feels no force because it adjusts to the
background field asymmetry. Utilizing the box and spring example, if
the box moves then the ball attached by springs inside is compelled to
keep pace and co-move within its frame to equilibrate with the
asymmetric energy of the springs. However, when an object is held
fixed within a gravitational field, it senses that the immediate
space-time is always asymmetric, and this asymmetry of space-time results
in gravity.
38 Based upon curved paths of light in the local frame of reference.


Objects, no matter how massive, are equally affected by
space-time curvature (i.e. asymmetry) because they are immersed in
the same gravitational environment. However, the energy required to
move objects out of equilibrium within curved space-time depends
upon the object's mass, and this is termed weight.
The acceleration of gravity is constant and all masses
equilibrate to the local asymmetry at the same rate. The mass of an
object is only consequential when resistance to the acceleration of
gravity is taken into account; a force is required to counter the inertial
resistance to the change in an object's natural geodesic path. The
greater the mass an object possesses, the greater the force required to
counter inertia. So even though a hammer weighs more than a feather
when fixed in the same gravitational field (because it has greater
mass) all matter, regardless of how massive it is, responds to the
asymmetry of space-time by accelerating downwards at the same rate.
In this regard, mass is a measure of the force required to move an
object out of equilibrium with its immediate space-time environment.
The electron self-energy model not only describes inertial
and gravitational effects, it also hints at the deeper meaning behind E
= mc2. Since inertial resistance is a measure of an object's mass, the
more massive an object is the more inertial resistance it will
experience upon acceleration, and the more curvature it will generate
in space-time. Similarly, the more intensely an object is accelerated,
the more massive it becomes because acceleration generates apparent
curvature; this is why mass is relative under GR.
Consider the electron self-energy ball-and-spring analogy
once again. In order to completely compress one spring connected to
the ball to zero length, one requires an infinite amount of energy
input. This also implies that the leading EM field of an accelerating
electron approaching the speed of light approaches infinite energy.
The energy required to compress the leading EM field increases
because the EM field frequency in the direction of acceleration
increases. The inertial reaction force against acceleration becomes
greater, which in turn is a measure of mass, thus the mass increases
according to the relationship E = mc2 (or in this case, m = E/c2).
Simply put, matter cannot attain the speed of light because it would
become infinitely massive.
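A quick sketch of that runaway growth (illustrative numbers, standard formula): the relativistic factor γ multiplies the effective inertia and diverges as v approaches c.

```python
# Relativistic gamma factor: effective inertia grows without bound as v -> c.
import math

def gamma(beta):
    """Lorentz factor for speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

for beta in (0.5, 0.9, 0.99, 0.9999):
    print(f"v = {beta}c: inertia multiplied by {gamma(beta):.1f}")
# 0.5c -> 1.2, 0.9c -> 2.3, 0.99c -> 7.1, 0.9999c -> 70.7
```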


5.3 Extrinsic inertia

We now understand from QM that we must always consider
the notion that space is replete with QVE fluctuations, and that the
QV has an effect on matter. So one might wonder what kinds of
forces, if any, an electron experiences as it accelerates through the
QV. This is precisely what astrophysicist Bernard Haisch and
physicist Alfonso Rueda wondered, and when they looked into it in
greater detail, they began to find some truly remarkable results!
Having a background in astrophysics, Haisch was drawn to
the notion that the QV might contribute to inertia because he already
knew quite a bit about radiation pressure. Everyone has seen
pictures of a comet, with its tail streaming elegantly behind it like the
train of a bridal gown. However, what some people may fail to realize
is that the comets tail isnt necessarily streaming along behind it as
it moves through the solar system. The comets tail is formed by
debris blown off the comets surface by the Suns solar wind and by
radiation pressure, so that the comets tail always trails in the
direction of the solar wind, like a cosmic windsock.
The solar wind, of course, always blows radially outwards
from the Sun. However, the thing to remember in this case is that the
solar wind isn't really like the atmospheric wind we have on Earth.
Solar wind does have a material aspect to it, in that many energetic
particles radiate from the Sun. In fact, the solar wind is largely
comprised of electrons and atomic nuclei of hydrogen and helium
atoms that have been stripped of their electrons (i.e. ionized gas
plasma)xiii. But countless numbers of photons are also released from
the sun. Even though photons lack mass, they can still pack quite a
wallop because they have momentum.
All materials exposed to EM radiation experience radiation
pressure. The atoms comprising any substance may absorb or reflect
radiant photons, and when an atom absorbs a photon, it also absorbs
the energy associated with it. When a photon is reflected, energy is
transmitted as it ricochets off the recoiling atom. James Clerk
Maxwell realized this in the late 1800s, but it wasn't verified
experimentally until the year 1900, by the Russian physicist Pyotr
Nikolayevich Lebedevxiv.
Luminiferous momentum (i.e. radiation pressure) is at least
partially responsible for physically blowing material off the comet's
core, producing the tail we see streaming through the heavens. New
spacecraft propulsion technology has even been developed aiming to
harness solar wind and the force of radiation pressure. This particular
method of propulsion is referred to as a solar sail, and poetically, a


spacecraft could sail on the solar wind, and move through vast
distances of space without needing to carry fuel. In 2005, the
Planetary Society built and launched a privately funded solar sail
spacecraft named Cosmos-1. The spacecraft was designed in the
form of a giant reflective umbrella, to be unfurled in space and catch
the solar wind. The radiation pressure impacting the sails surface per
unit time is minuscule, but the cumulative force applied to the sail over
a long period will produce staggering velocities; perhaps even
enough to reach nearby stars. Unfortunately, Cosmos-1 was lost after
a faulty launch, so we must continue to dream of sailing amongst the
stars on the winds of light, at least for now anyway.
It is possible to estimate the force Cosmos-1 would
experience from the solar wind by calculating the power associated
with light39 as it propagates. This is made possible by utilizing the
Poynting vector40, developed by the English physicist John Henry
Poynting in 1884. When you switch on your flashlight, you are
generating a beam of light that propagates from the bulb towards
whatever object you wish to illuminate. The Poynting vector is a
quantitative measure of the power of flow (i.e. the flux) associated
with the combined electric and magnetic wave components of light as
it propagates from the flashlight to the object.
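As a rough illustration of such an estimate (the irradiance and sail area below are assumed values, not figures from the text), the radiation-pressure force on a perfectly reflective sail near Earth's orbit follows from the Poynting flux:

```python
# Radiation-pressure force on a reflective solar sail: F = 2 * S * A / c.
# S is the Poynting flux (solar irradiance); the factor 2 is for perfect
# reflection. The sail area is an assumed, illustrative value.
c = 2.998e8    # speed of light (m/s)
S = 1361.0     # solar irradiance near Earth's orbit (W/m^2)
A = 600.0      # assumed sail area (m^2)

F = 2 * S * A / c
print(f"Force on sail: {F*1000:.1f} mN")  # a few millinewtons
# Tiny, but applied continuously it accumulates into large velocity changes.
```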
The QV of flat space-time comprises a near infinite spectrum
of photons of various energies and random orientations, meaning
there is no cumulative or net direction to the QV. So in flat space-time
the QV photons can be disregarded from most calculations. This
doesnt mean that the spectrum of QV photons doesnt exist; it simply
means that the QVE may be considered virtual because a net force
does not arise from a random, baseline QV. Thus, in free space, the
QV is said to be isometric (i.e. equal in all directions).
In the early 1990s, Haisch and Rueda applied the concept of
radiation pressure to the QVE derived from QM. They wondered how
the QV might appear when viewed from an accelerated reference
frame, in much the same way that Einstein wondered how light paths
behaved inside an accelerating elevator in free space. What they found
was shocking. By applying textbook electrodynamic principles, they
determined, by transforming QVE from a stationary to an accelerated
reference frame, that it acquired asymmetry. The field was no longer
random and isometric; rather, the QVE in the accelerated frame
39 For a photon or a radiation field composed of many photons.
40 By chance, the sound of its name describes what it does. A vector
quantity possesses magnitude and orientation (i.e. an arrow of
varying size and direction).

appeared to have a net direction to it, and because it had a direction
they were able to calculate the Poynting vector associated with it!
They determined that the magnitude of the energy flux generated in
the local QV was proportional to the magnitude of the applied
acceleration. Thus, as the acceleration increased, the QVE flow
opposing it also increased. Apply these terms to mass and what do
you get? Inertia!
EM radiation generates forces on matter. Haisch and Rueda
surmised that upon acceleration, the particles and charges comprising
matter experience an EM drag-force against the local QV,
analogous to radiation pressure. The only instance in which an object
is affected by the local QV occurs when it appears to possess net
direction (i.e. when it is asymmetrical or anisotropic). The fact that
QV anisotropy appeared to be acceleration-dependent was the ace in
the hole: the key reason for believing that they may have discovered
the physical basis of inertia.
Haisch and Rueda consider the electron to be a classical
point-like particle, jostled about by QVE flow impinging upon it,
resulting in inertial resistance to acceleration. Here, we are shown a
model for inertia which is extrinsic. In this model, inertia arises due to
the influence of an external source, similar to that of Mach's
Principle, as opposed to the intrinsic electron self-energy model as
described by Petkov. This extrinsic model proposed by Haisch and
Rueda is termed the Quantum Vacuum Inertia Hypothesis (QVIH).
But why does asymmetry manifest in the QV only during
acceleration and not uniform motion? In other words, why do objects
experience inertial force only when they accelerate? This question
may be answered by the electron self-energy model and Haisch and
Ruedas QVIH.
Consider the Doppler Effect. The change in pitch we hear as
an ambulance siren moves past us is due to the ambulance's motion
relative to the sound waves propagating from the siren. But the
ambulance doesn't need to be accelerating to cause this auditory
effect; it just has to be moving past us. However, with inertia, the
situation is rather different. Inertial force is only experienced in such
cases where a change in velocity is occurring. If the nature of inertia
were rooted within Mach's Principle or a Doppler-like effect, one
might conclude that all motion should result in a resistance force. If
inertia operated by Mach's Principle, then a preferred reference frame
would exist within the Universe, acting as a backdrop to the motion of
all objects moving through it; an idea that is anathema to the tenets
of GR. The Doppler Effect may hold some value as an analogy, but
it cannot be directly applied to inertia because space is quite different


from a fluid like air or water. Whatever we wish to call it (space-time
geometry, the aether, or what have you), the nature of space must also
satisfy the rules of inertia, and all physical laws that have been
experimentally validated thus far. What then, should the vacuum of
space be like in order to satisfy the condition of inertia?
The answer has to do with the way in which energy is
distributed throughout the QV. Unlike the blackbody spectrum of the
Sun, for example, peaking in a specific region of the EM spectrum
and based entirely on temperature, QVE is predicted by QM to be
distributed throughout space in a fundamentally different manner.
When we plot the blackbody energy distribution41 for the Sun, we find
that it emits photons spanning a wide range of the EM spectrum,
from the ultraviolet through the visible into the infra-red, peaking in
the visible range42.
QVE has a rather different distribution along the EM
spectrum, however. The QV is predicted to possess a frequency
cubed energy distribution throughout free space; at low QV
frequencies, the spectral energy density of QV photons is minimal and
at high QV frequencies it is maximal. The spectral energy density of
QV photons follows the cube of the frequency along the EM
spectrum, and doesnt peak at any particular bandwidth as a
blackbody radiation spectrum does.
For example, let's say we want to calculate the QVE density
in the microwave region of the EM spectrum. Microwaves exist in the
10(GHz) range (approximately). To simplify matters, we may say
that the density of QVE at 10(GHz) along the EM spectrum is
proportional to 10 x 10 x 10. The frequency cubed distribution of
the QV means that moving up the EM spectrum to waves with a
frequency of 100(GHz), the proportional energy density of QV
photons is 100 x 100 x 100! Thus, the highest frequency ranges of
the EM spectrum contain the most QVE, implying that the energy
density of free-space is inconceivably energetic. It has been
estimated43 that the amount of QVE contained in a coffee cup sized

41 The density and arrangement of photons surrounding an object.
42 A Blackbody spectrum calculator may be found at The Wolfram
Demonstrations Project:
http://demonstrations.wolfram.com/BlackbodySpectrum/
43 This is the mainstream view, not the view of the EGM construct in
the Quinta Essentia series (i.e. QE3,4), where the opposite
conclusion is mathematically derived. That is, QE3,4 mathematically
demonstrate that free space does not contain a near infinite amount
of energy in a vanishing volume.


volume of empty space, if converted to heat energy, would be enough
to boil away the Earth's oceans!
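The arithmetic above can be made explicit. The sketch below uses the textbook zero-point spectral density, ρ(ν) = 4πhν³/c³, which the text quotes only as a cubic proportionality; the formula choice is mine.

```python
# Zero-point spectral energy density: rho(nu) = 4*pi*h*nu^3 / c^3.
# Standard QM expression, quoted here only to illustrate the cubic scaling.
import math

h = 6.626e-34   # Planck constant (J*s)
c = 2.998e8     # speed of light (m/s)

def rho(nu):
    """Spectral energy density of the QV at frequency nu (J per m^3 per Hz)."""
    return 4 * math.pi * h * nu**3 / c**3

print(f"ratio, 100 GHz vs 10 GHz: {rho(100e9) / rho(10e9):.0f}")  # (100/10)^3 = 1000
```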
The frequency-cubed distribution of QVE in a flat space-time
manifold explains why an object doesn't experience a reaction force
against uniform motion. Michelson and Morley experimentally
verified that an absolute reference frame by which to measure uniform
motion does not exist. Thus, for an object to avoid experiencing a
resistive force against uniform motion, the QV must appear identical
to all observers irrespective of relative velocities44. This necessitates
the cubic frequency distribution form of the QV spectrumxv, rendering
uniform motion Lorentz invariant such that it appears consistent
across reference frames. For example, wave amplitude may be small
or large, but its form remains unchanged regardless of magnitude. The
frequency-cubed QVE distribution ensures that space-time appears
flat and isometric for any object traveling in uniform motion.
However, during acceleration, the background QVE distribution
appears asymmetric.
Therefore, the QVIH model remains Lorentz invariant and
consistent with GR via the cubic frequency distribution of QVE. GR
states that an observer traveling in uniform motion through space-time
perceives the Universe as being flat and isometric; however, in an
accelerated reference frame space-time appears to be curved. Haisch
and Rueda's classical electrodynamics model of inertia asserts a
congruent position: during uniform motion the QV appears
symmetric, while in an accelerated reference frame an asymmetry45 manifests
in the QV that is proportional to the magnitude of the applied
acceleration. Thus, rather than relying upon the metaphysical,
non-intuitive terminology of GR, which describes space-time as being
flat or curved, these terms may now be substituted with the more
physically meaningful reference to symmetrical or asymmetrical
QVE densities.

5.4 Bridging the gaps

Einstein relied upon the Equivalence Principle to
demonstrate how the geometric space-time of an accelerated reference
frame can be equivalent to a gravitational field. The same is true for
Haisch and Ruedas QVIH, which utilizes the Equivalence Principle

44 Caveat: applicable to objects traveling in uniform motion, not
accelerating.
45 i.e. anisotropy.


to demonstrate how QVE asymmetry appears both in accelerated
reference frames and in reference frames held fixed in a gravitational
field.
The force an object experiences in a gravitational field is due
to local QVE asymmetry, producing a net energy flux that, in effect,
pushes downwards on the object. The question Einstein was never
able to address, and which remains in the QV interpretation is: how,
exactly, does matter curve space or generate QVE asymmetry?
Although the mechanistic particulars have not yet been formally
conjectured by Haisch and Rueda, they have been able to replace the
imaginary four-dimensional geometry of Einstein's space-time with
their own classical, physical modeling of QVE distributions. Their
description modestly insinuates one of the most astounding and
profound ideas ever suggested in the history of science!
QM was largely formulated several years after Einstein
developed GR, and he considered the entire field to be rather
unpalatable. Einstein modeled inertial and gravitational frames of
reference based upon the geodesic pathways light follows in the
presence of matter and during acceleration. However, Einstein lacked
the tools to offer any potential physical basis for the existence of
inertia, or why matter curved space-time. The QV had not yet been
conceived at the time he was developing GR, so he lacked a source
from which to derive a potential physical basis for gravity and inertia,
aside from the luminiferous aether which he believed did not exist.
The development of QM eventually produced a rift between
it and GR which remains solidly in place to this day. QM, through
its prediction of the QV, states that the Universe possesses a specific
value of energy density. However, when viewed through Relativity
theory, the energy density of the QV should cause a catastrophic
gravitational collapse of the Universe . . . but the Universe hasn't
collapsed. In fact, recent observations reveal that the Universe is
expanding. This observation has created a major dilemma for
physicists, because it means that either QM or Relativity is
fundamentally flawed or incomplete, yet QM and GR have both
proven to be highly accurate means of representing physical systems.
To avoid a seemingly insurmountable incongruity between
GR and QM, Einstein left his original interpretation of space-time
alone, as a mathematical tool to extend Newtons laws of motion into
the extreme limits of the Cosmos. His theory worked so beautifully
that there was little need to worry about precisely how it worked so
well, or why it didn't jibe with the emerging quantum view. The
beauty and elegance of GR has stood impervious to criticism for nearly
a century and gave birth to modern cosmology. It has directly

generated some of the most profound questions to be asked and has
predicted the existence of otherwise unfathomable objects in our
Universe such as black holes. Yet QM has proven to be equally
valuable for explaining observations at the subatomic scale which
Relativity cannot handle. Therefore, it is only fair to say that both
formalisms are equally correct, even though they don't appear to
corroborate one another.
The most important aspect of the QVIH is not that it provides
a physical mechanism for inertial and gravitational forces, which GR
merely describes, but that it represents the unification of GR, QM and
electromagnetism. It takes the first steps at bridging the gap between
these formerly incompatible, yet equally valid theories.
Since Sir Isaac Newton formulated his postulate F = ma, it
has remained just that: a postulate. F = ma is a tenet in physics; the
immutable law governing how objects move. However, the
remarkable mathematical feat leading to the QVIH was that Rueda
managed to derive Newton's postulate from QM! If Newton's second
law of motion, and thus GR, may be derived from QM, then in
principle these formalisms must already be unified. F = ma predicts
how objects move; Rueda offers an explanation for why objects move
in the manner they do by deriving Newton's postulate from QM.
In order to achieve this result, the QVIH model must assume
something quite radical: that the subatomic particles
comprising matter, such as quarks and electrons, are intrinsically
mass-less, and that it is only through their interaction with the external
QVE environment that the property of inertial mass is born. In this
model, mass merely arises as a by-product of an interaction between
the energy packaged in the form of matter and the QVE surrounding
it. The QVIH model likens this interaction to EM interference or
scattering that takes place as charges are perturbed by the QV fields
they move through, and the energy of this interaction is a measure of
the particles inertial mass. In this model, the mass of any object may
be considered to be a measure of the energy with which
fundamentally mass-less particles interact with their quantum
environment. Through this interpretation, a (classical) physical
explanation for Einsteins mass-energy equivalence relationship is
also revealed.
According to Haisch and his colleagues, mass arises as a
function of scale. For example, if a large46 vessel floats in fairly calm
waters with only small waves on the surface, it remains motionless
and unperturbed by the action of the waves. However, if a toy boat is
46 Much larger than the small waves surrounding it.


placed in the same water, it will be jostled about by the small waves it
encounters as if weathering a violent storm at sea. The size of the toy
boat is on the same scale as the waves it floats upon, causing it to feel
the undulating terrain of the rough and choppy sea surrounding it.
On the molecular scale an analogous effect exists, termed
Brownian motion. Robert Brown was a Scottish botanist who
collected and catalogued thousands of plant species in Australia in the
early 1800s. He was highly skilled at microscopy and utilized it as a
tool to study plant pollens. In studying pollen grains under the
microscope, he noticed that the individual pollen grains appeared to
jitter wildly in suspension, yet maintained their overall position within
the field of view. He noted that the movement seemed to be an
intrinsic quality of the grain itself, resembling a freely moving
life-formxvi. However, he also noticed that particles only slightly larger
than pollen grains do not move in this manner. Like the toy boat,
pollen grains are of a particular size, which allows them to experience
the kinetic motion of the water molecules they are suspended within.
As the water molecules impact the pollen grain from all sides, the
pollen grain jitters wildly, while larger objects are unperturbed. In
Brown's time, the cause of this effect was not entirely clear, but
thanks to his initial observation, this effect has since been dubbed
Brownian motion.
Again, the temperature of any substance is a measure of the
average kinetic energy of the molecules comprising that substance.
Within fluids like air and water, molecules dart about bouncing off
one another; the higher the temperature, the more frenetic the
molecular activity. Some molecules possess sufficient momentum
such that when they strike the surface of a pollen grain it recoils.
Since water molecules are too small to be viewed through a
microscope, we may infer their movement because we observe the
pollen grains rapidly quivering and jittering in place as they recoil
upon impact.
Over a period of time, if one tracked the motions of a pollen
grain under Brownian motion, one would notice generalized movement in
some direction; referred to as a random walk. This range of motion
represents the statistical average of the combined small-scale
movements of the pollen grain, resulting in travel from point A to
point B.
The QVIH was developed utilizing Stochastic
Electrodynamics (SED). The term stochastic refers to a
mathematical treatment incorporating random behavior over time.
Random stock market fluctuations or the random walk of Brownian motion
may be modeled utilizing this technique. Haisch and Rueda, in their

development of the QVIH, modeled the electrodynamics of a moving
electron stochastically, as though it were a pollen grain jostled about
by Brownian motion in the randomly fluctuating sea of QVE.
According to Haisch and Rueda, under conditions of uniform motion,
the classical, point-like electron is buffeted about by the chaotic
fluctuations of QVE.
When the electron is forced to accelerate, it appears to the
electron that the QVE fluctuations in the direction of acceleration
become increasingly energetic. It is this QV asymmetry that is thought
to result in the acceleration reaction force of inertia. Since mass is a
measure of inertial force, the interaction between the electron and
QVE is thought to establish the electrons inertial mass.
QV fluctuations, in this model, are thought to cause the
electron to be wildly shaken about while stationary or in uniform
motion. The stationary fluctuation of a charge was predicted
mathematically by Erwin Schrödinger in the early 1930s.
Schrödinger derived from Dirac's equation that the electron should
be expected to fluctuate at the speed of light in what he referred to as
zitterbewegung, which in German means "trembling motion". But if
the charge truly is fluctuating at the speed of light, it means that the
electron must be intrinsically mass-less. Otherwise, how could it
move at light-speed? It is this idea that establishes the basis upon
which the Haisch-Rueda interpretation operates. What they propose is
that if particles such as the electron are inherently mass-less, then it
must be through an interaction with the extrinsic QV field that the
property of mass emerges. In other words, the energy of the
fluctuation is a measure of the particles mass. Thus, in this view,
mass and energy are equivalent, and mass cannot exist without the
QV of space!
Louis de Broglie predicted, by combining the equations E =
mc2 and E = hν, that all matter has wave-like properties. Although
the de Broglie wavelength of a moving planet or person is
imperceptibly minute, the wavelength of a moving electron or
subatomic particle is quite large compared to its size. As a direct
mathematical consequence of combining the two equations, the
rest-mass of a stationary electron may be expressed in terms of
wavelength. In other words, the mass of the electron may be
expressed as a frequency of energy equivalent to the mass-energy of
the electron. This relationship is termed the Compton
wavelengthxvii. However, electrons aren't the only particles which
may be expressed in Compton wavelength form. Other subatomic
particles possess characteristic Compton wavelengths because, by


way of combining the equations, a particle's mass defines its
wavelength.
Like Einstein, who used the photoelectric effect to illustrate
that light could be thought of as having particle-like attributes, Arthur
Compton also contributed to this particulate (photon) model of light
by experimentally verifying that photons possess momentum (a
characteristic associated with mass). Compton demonstrated this by
scattering X-ray photons off atomic electrons. As an X-ray photon
bounces off an electron, the photon's momentum diminishes as it knocks
the electron out of place (i.e. changes its momentum). Compton found
that the photon loses momentum and energy in the form of a
frequency drop, and that the corresponding wavelength shift is limited to
double the electron's Compton wavelength. So not only did Compton help
establish the particle nature of light, he also confirmed the physicality
underlying the wave nature of matter, revealed by mathematically
combining E = mc2 and E = hν.
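A short numerical check of the quantities just described; standard constants, and the calculation itself is my own illustration.

```python
# Compton wavelength of the electron and the maximum Compton scattering shift.
h   = 6.626e-34   # Planck constant (J*s)
m_e = 9.109e-31   # electron rest mass (kg)
c   = 2.998e8     # speed of light (m/s)

lambda_C  = h / (m_e * c)   # Compton wavelength
max_shift = 2 * lambda_C    # wavelength shift at 180-degree backscatter

print(f"Compton wavelength: {lambda_C:.3e} m")   # ~2.43e-12 m
print(f"Maximum shift:      {max_shift:.3e} m")  # double the Compton wavelength
```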
By means of the extrinsic interpretation, Haisch and Rueda
propose that the Compton wavelength is established as the electron
charge physically interacts with the QV at the Compton frequency. In
this interpretation, the Compton frequency is associated with the
rest-mass due to its zitterbewegung energy. However, when the electron is
in motion, it gains the attribute of momentum (mass multiplied by
velocity). Since the de Broglie wavelength is a measure of a particles
momentum, Haisch and Rueda decided to investigate whether the
Compton and de Broglie wavelengths were interrelated phenomena.
In the year 2000, Haisch and Rueda published a manuscript
demonstrating how an electron fluctuating at the Compton frequency,
as it passes by a stationary observer, appears to be moving at its de
Broglie wavelengthxviii. They achieved this by mathematically
observing the electron's Compton frequency from a Doppler-shifted
reference frame in motion. When they superimposed the Doppler-shifted
frequency from the moving reference frame onto the Compton
frequency of the electron, they noticed that the resulting
beat-frequency47 corresponded precisely to the de Broglie wavelength of a
moving electron!
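One way to check this arithmetic numerically is sketched below, under the simplification that the beat is the half-difference (the envelope) of the blue- and red-shifted Compton frequencies; the published derivation is more involved, so treat this only as a consistency check.

```python
# Numerical check: the beat (envelope) of Doppler-shifted Compton frequencies
# reproduces the de Broglie wavelength. A simplified sketch, not the full derivation.
import math

h   = 6.626e-34   # Planck constant (J*s)
m_e = 9.109e-31   # electron rest mass (kg)
c   = 2.998e8     # speed of light (m/s)

beta  = 0.01                        # electron speed as a fraction of c
gamma = 1 / math.sqrt(1 - beta**2)
f_C   = m_e * c**2 / h              # Compton frequency of the electron

f_blue = gamma * f_C * (1 + beta)   # Doppler-shifted toward the observer
f_red  = gamma * f_C * (1 - beta)   # Doppler-shifted away
f_beat = (f_blue - f_red) / 2       # envelope (modulation) frequency

lambda_beat = c / f_beat
lambda_dB   = h / (gamma * m_e * beta * c)   # de Broglie wavelength

print(f"{lambda_beat:.6e} m vs {lambda_dB:.6e} m")  # the two agree
```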
Beat-frequencies arise when differing wave frequencies
overlap and are often readily identifiable in videos of cars in motion,
when the wheels (as viewed on video) appear to be slowly rolling
backwards as the car drives forward48. We perceive this visual effect
47 The mathematical difference between frequencies.
48 This phenomenon is referred to as temporal aliasing or the
wagon-wheel effect.


due to the difference between the frequency with which the video
frames are captured by the camera (in frames per second), and the
frequency at which the wheels of the car rotate (in revolutions per
second). Video cameras typically capture approximately thirty still
frames per second, which during play-back are blended together by
our visual cortex to re-produce the effect of motion.
If the wheels of the car were turning at a rate of 25
revolutions per second as the car is captured on video at 30 frames per
second, the wheels will not have revolved completely by the time the
next video frame is captured; the wheels would only have turned about
83% of the full cycle. If a marker were placed on the wheel to keep track
of its position with time, it would show that the wheel shifts about 60°
counter-clockwise with each captured frame of video. Thus when the
video is played back, the wheel will appear to be rotating backwards
at five revolutions per second. By changing either the
frequency of the video capture or the speed of the car, the effect may
be run faster or slower in the clockwise or counter clockwise
direction. If the revolutions per second and frames per second are
equally matched, the wheel will appear motionless as the car drives
along.
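The aliasing arithmetic is easy to reproduce; a minimal sketch of my own, using the rates from the example above:

```python
# Wagon-wheel (temporal aliasing) effect: apparent rotation rate on video.
f_wheel = 25.0   # true wheel rotation (rev/s)
fps     = 30.0   # video capture rate (frames/s)

turn_per_frame = f_wheel / fps                    # fraction of a cycle per frame
apparent = f_wheel - fps * round(f_wheel / fps)   # aliased rate (rev/s)

print(f"turn per frame: {turn_per_frame:.0%} of a cycle")  # ~83%
print(f"apparent rate:  {apparent:+.0f} rev/s")            # -5 (backwards)
```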
In this regard, Haisch and Rueda have filmed49 the
Compton frequency of an electron against the background frequency
of the QV in a moving reference frame. In doing so, the electron's
wavelength appears to an outside observer to be the de Broglie
wavelength. Here, Haisch and Rueda suggest that the de Broglie
wavelength is ultimately derived extrinsically from the QV. However,
this particular example represents just one of many suggestions for
how the QV might help us understand the deep mysteries of the
quantum Universe.
The QVIH suggests the possibility that the property of
inertial mass, and other quantum phenomena such as the de Broglie
wavelength, arise extrinsically through an interaction between the
subatomic particle and the QVE. This stochastic modeling system yields quite profound results, but not without stirring some controversy, because it represents a very literal way of approaching the problem. But if we are ever going to fully understand gravity and inertia, we must be open to innovative interpretations like Petkov's intrinsic model and the Haisch-Rueda extrinsic model.
49 Analogous to the wheels of the car in the preceding example.

Creative interpretations of reality are commonplace in contemporary physics, and we are quite accepting of them because we are cognizant of the fact that they are merely human rationalizations of purely mathematical concepts. However, these mental models sometimes forge bias in our thinking, and cause us to form preconceived notions about matter and space which obfuscate our intuition and our ability to observe the obvious. One such bias is the notion that space-time is literally curved, or that space should be interpreted as a purely geometric manifold. It is of vital importance that we not confuse the abstract mathematical description of space for space itself.
The other cataract clouding the lens of truth is the notion that
gravity is one of the primary forces in nature, or a force in its own
right. We know that the action of gravity on matter is equivalent to
inertial force, and arises by way of apparent curvature, or asymmetric
energy distribution in space-time. The root of our problem is that we
have defined separate terms (gravity and inertia) for a single
phenomenon, and find it necessary to cling to such misguided dogma.
Einstein's work explicitly states that gravity and inertia are equivalent, and E = mc² tells us that energy and matter are equivalent. Perhaps if it weren't for the atomic bomb, we would have just as much difficulty with the notion of mass-energy equivalence as we seem to have with the idea that gravity and inertial forces are, in fact, one-and-the-same phenomenon.
GR is as brilliant and elegant as it is effective, yet on the
same note, it is also one-sided and incomplete because it disregards
the demands of the quantum Universe. When we view mass only in
terms of GR, or only in terms of QM, we limit ourselves to a partial
and incomplete description of the Universe, and this inevitably leads
us in the wrong direction. However, by viewing the problem from
both perspectives simultaneously, a wondrous and limitless horizon
begins to unfold before us.

6 The Polarizable Vacuum

6.1 Blind-sighted
General Relativity (GR) is a geometric model of gravity such that space-time is represented as a four-dimensional manifold yielding
a topological map of space in the presence of matter. This space-time
landscape, in turn, guides the paths of light and the motion of objects
passing through it. We interpret this motion as gravity.
On one hand, GR is a marvelous achievement which has
profoundly enhanced our understanding of the Universe. On the other
hand, it is not at all amenable to technological applications, in that it
would require huge amounts of matter or energy to modify or
manipulate gravitational forces. GR yields highly accurate
predictions, yet makes no assumptions as to why the Universe is as it
is. It doesn't explain why matter produces a gravitational field, or the specific mechanism by which matter experiences inertial forces upon acceleration. Einstein's space-time manifold is a vacuum: a void. If this is indeed the case, then the obvious question remains: how can nothing possess a curved four-dimensional geometry? And how is it that objects "know" whether they are passing through a region of curved or flat space? It is of critical importance to remember that GR is merely a highly effective descriptive tool, a mathematical representation, not a literal explanation of Nature.
Bernard Haisch and Alfonso Rueda introduced a model
describing matter as being immersed within and wholly dependent
upon the quantum medium of space for its existence. Their model
does away with the notion that matter rests suspended in a vast, inert
nothingness while exerting gravitational force on other objects from
afar. Throughout the history of physics, it is almost incomprehensible
that we have virtually ignored the question of why an object
experiences a force upon acceleration, and have yielded without
protest to the notion that this force emerges from nowhere!
Perhaps this is due to the way in which our brains are wired.
Our ability to function as human beings relies upon our capacity to
selectively tune-out superfluous input like the sound of a ticking
clock, the buzz of fluorescent lighting, or the background murmur of
conversation in a crowded room. At every moment of every day
throughout the course of our lives we sense the weight of our own
body as we sit or walk, and the tug of inertia whenever we change
direction or velocity. It is a constant and consistent experience; so
consistent that we scarcely think of it. It is human nature to ignore the
obvious so that we may focus on things that seem important, aberrant
or threatening.
We seem content to nod vacantly in agreement to any
suggestion, no matter how bizarre or disconnected from our intuitive
knowledge and everyday experience, as long as it satisfies our need to
predict the patterns of Nature. We are apt to swallow contradictions such as the idea of "geometric nothingness" hook, line and sinker in lieu of other, perhaps more rational explanations. Twentieth century
physics has merely replaced the notion of action at a distance with
the equally abstract concept of curved space-time. Many physicists
are quite comfortable with this contradiction, insisting that the
relativistic tensor mathematics beautifully describing the space-time
manifold actually is space-time, substituting the abstraction for the phenomenon! This may be suitable for those who are highly adept and
well versed in the language of applied mathematics, but it leaves little
for the more pragmatic mind to chew on.
GR was developed by mapping the trajectory of light as it
passes alongside massive objects. The extent to which gravitational
fields bend light allows us to trace the contours of the otherwise
invisible space-time manifold. The physicist John Wheeler is noted
for one of the most succinct descriptions of GR in stating: "matter tells space how to curve, and curved space tells matter how to move." But is matter actually curving space, and in so doing causing rays of light to bend as they propagate along a curved manifold? Ask an engineer to bend light and they won't attempt it by curving space-time; if you want to bend light, shine it through a lens!

6.2 Optical gravity
If you dip a long stick into a swimming pool, you will notice
that it appears to bend as it enters the water. The same is true if you happen to be spear fishing: you would need to aim your spear at a slightly different place than where you see the fish if you wish to hit
your target. This phenomenon arises because in both cases you are
seeing light (an image) which has been refracted (i.e. bent) by the
water.
Light is refracted as it passes from one substance through to
another of different density. A substance like water possesses a
specific index of refraction based upon its density. When light transits
from air into water, it moves from a medium of lower to higher
density.
As photons of light move through substances such as air, glass, or water, they don't simply pass through unaffected: light interacts with the atoms and molecules comprising them; the light might be absorbed and re-radiated, or reflected. When light passes from air into water, it takes longer for the photons to interact with the water because it is more densely packed with molecules than air; as this occurs, the light slows down, causing it to bend. It's a bit like the difference between running on land and trying to run in a swimming pool. You can't run as fast in water as you can in air because water is denser.
The degree to which a beam of light is bent depends not only
on the density of the medium it passes through (its index of
refraction), but also the angle of approach (i.e. angle of incidence).
The science of optics is based upon the principles of refraction and
angle of incidence. Thanks to optics, we have eyeglasses, telescopes
and a whole host of other magnificent technologies which improve
our daily lives.
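The bending described above is quantified by Snell's law, n1·sin(θ1) = n2·sin(θ2). A minimal sketch, with commonly quoted refractive indices for air and water:

```python
import math

def refraction_angle(theta1_deg: float, n1: float, n2: float):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2).

    Returns the refracted angle in degrees, or None when the ray
    undergoes total internal reflection.
    """
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# A ray entering water from air at 45 degrees bends toward the normal:
print(refraction_angle(45.0, n1=1.000293, n2=1.333))  # ~32 degrees
```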
Sir Isaac Newton worked extensively to create the science of
optics. Newton studied the manner in which lenses of different shape
and density bend and refract light in various ways, and how curved
mirrors reflect and focus light. However, the foundations of optics are
based upon interactions occurring at the quantum level, which
Newton knew nothing of. The theory fully describing the fundamental
interaction between light and matter is termed Quantum
Electrodynamics (QED), and is one of the most accurate theories in
physics.
The application of optical principles to those of GR has led to
an alternative and more intuitively appealing interpretation of the
space-time manifold referred to as the Polarizable Vacuum (PV)
Approach to General Relativity (GR)xix. Harold "Hal" Puthoff introduced the PV model in 2002, having drawn upon earlier work by physicists Harold Wilson, Robert Dicke, and famed Nobel laureate, Andrei Sakharov.
The PV model utilizes optical principles to define the
topological features of space-time via the application of a variable
Refractive Index50 rather than curvature. As a beam of light
propagates through space, passing nearby a massive object, its path is
not bent due to the curvature of the nothingness it happens to be
transiting through; it is bent by passing through a region of variable
energy density which in turn, generates a variable Refractive Index
affecting the path of the beam.
According to the PV model, all matter establishes an energy
density gradient in the QV surrounding it, acting as a space-time lens.
This results in the formation of a changing Refractive Index within the
Quantum Vacuum (QV) surrounding matter. Consider the use of a
magnifying glass to focus light from the Sun. The magnifying glass
bends the parallel beams passing through it to a single focal point. It is
the tapered shape and varying lens thickness which causes the light to
bend to differing degrees as it passes through.
A similar effect also occurs in space. Einstein predicted that
a strong gravitational field should cause the trajectory of light to bend
in much the same way as it does when passing through a lens. This
effect is referred to as gravitational lensing, and astronomers have
obtained direct photographic evidence of this phenomenon with the
Hubble space telescope. Given the right set of circumstances, light
from a distant quasar may be bent around the intense gravitational
field of a galactic cluster positioned between the Earth and the quasar,
so that it may be seen even though it is located directly behind the
cluster. Hubble images of distant lensed objects appear warped and
stretched as though having been reflected off the back of a polished
spoon.
Within the context of GR, the space-time geometry of a
gravitational field surrounding a galactic cluster is depicted as a
depression in the fabric of space. As light passes by this curved region
of space-time it bends, resulting in a gravitational lensing effect.
Substituting the concept of variable index of refraction within the
QV in place of space-time curvature yields a congruent
interpretation of gravity to that of GR. The key distinction between
the PV and GR models is that the PV interpretation explicitly
describes a physical manner in which space-time may, in effect, be
curved. However, it doesn't completely address the precise mechanism by which this occurs. That is to say, the PV model doesn't explicitly describe how matter physically changes the Refractive Index of the space-time manifold surrounding it.

50 Denoted by the symbol KPV.
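To attach a number to the idea: the PV literature commonly quotes an exponential form for the index, KPV = exp(2GM/rc²). Assuming that form (it should be confirmed against the paper cited above), a short sketch shows just how slight the effect of an Earth-sized mass is:

```python
import math

G = 6.67430e-11   # gravitational constant [m^3 kg^-1 s^-2]
c = 2.99792458e8  # speed of light [m/s]

def K_PV(M: float, r: float) -> float:
    """Assumed PV refractive index outside a mass M at radius r:
    K = exp(2GM / (r c^2)); K approaches 1 far from all matter."""
    return math.exp(2.0 * G * M / (r * c**2))

M_earth, R_earth = 5.972e24, 6.371e6   # [kg], [m]
K = K_PV(M_earth, R_earth)
print(K - 1.0)  # ~1.4e-9: a minuscule departure from flat space
print(c / K)    # apparent (coordinate) light speed at Earth's surface
```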

6.3 Shaping the lens

The PV model asserts that matter polarizes the QV into regions of variable energy density which, in turn, generate regions of variable Refractive Index in the space surrounding an object. To
visualize this concept, one might consider a common magnet. If you
sprinkle iron filings onto a piece of paper and place a magnet
underneath, the filings rapidly align themselves with the magnetic
lines of force produced by the magnetic field. The magnet's influence polarizes the random scatter of filings (i.e. enforces direction and order upon it).
A precedent for the existence of vacuum polarization comes
from the generally accepted model of the electron. In previous
chapters, a highly simplistic model of the electron was utilized to
describe Classical Electromagnetic Mass Theory; the electron was
described as a point-like charge in a pure vacuum, radiating an
electromagnetic (EM) field. To further simplify this concept, the
electron was treated as a ball held fixed inside a frame by springs
which expand or compress as the ball moves within its frame.
The contemporary model of the bare electron stems from
QED51. The QV is effervescent with virtual particle pair formation
and annihilation; thus, we must consider the effect this has on the
dynamics of all elementary particles, including the electron.
The effect an electron has on the QV is termed vacuum
polarization. In a volume of space devoid of all matter, the QV is
comprised of a chaotic and equally distributed mix of virtual particle
pairs popping into and out of existence. However, drop an electron
into the mix and all that drastically changes. Its presence attracts the
virtual positrons present in the vacuum, forming a cloud of positive
charges surrounding the bare electron. The QV becomes biased as

virtual particle pairs are segregated into clusters of positive and negative charge. In this state, the QV is no longer neutral or uniform; it has been polarized52.

51 Modeling it as a negatively charged point-particle surrounded by a cloud of virtual particle pairs, constantly emerging from and disappearing into the QV; charge emerges as a highly localized change in QV energy distribution. Hawking and Davies-Unruh radiation are also derived from the principle of virtual particle pair formation.
One possible explanation accounting for the formation of gravitational fields is that all material objects are composed of atoms which are themselves composed of charges and elementary particles generating their own localized polarizations within the QV. The cumulative effect of these densely packed particles and charges generates a large-scale, synergistic polarization in the QV extending into space.
Taking vacuum polarization into account, the PV
interpretation combined with the Quantum Vacuum Inertia
Hypothesis (QVIH) provides a physical explanation for why matter
experiences inertial force and generates gravitational fields. GR offers little in this regard; it describes the manner in which energy moves in gravitational fields but doesn't offer a physical explanation for why objects gravitate, or experience weight and inertia.
According to the PV model, matter generates a polarized
gradient in the QV resulting in a change in the Refractive Index of
space-time. In such cases the QV appears asymmetrical, thus inertial
and gravitational forces are experienced.
When the optical effect of an asymmetrical energy gradient
is considered, the manner in which the polarized vacuum affects light
propagation becomes apparent53. As a ray of light enters an area of
vacuum polarization, it is affected by the change of Refractive Index
and bends towards the polarization source as if it were passing

through a lens. The bending of light in this context is congruent to the curved path of light predicted geometrically within GR.

52 http://physics.nist.gov/cuu/Constants/alpha.html: according to QED and relativistic Quantum Field Theory (QFT) describing the interaction of charged particles and photons, an electron is considered to emit virtual photons which, in turn, may become virtual electron-positron pairs. The virtual positrons are attracted to the "bare" electron while the virtual electrons are repelled from it. The "bare" electron is therefore shielded due to polarization within the vacuum.
53 The paths of light altered by the presence of matter define space-time curvature.
GR states that inertial force and gravitational weight are
defined by the geodesic paths of light in curved space-time. Any
deviation from the natural geodesic topology requires energy input.
When an object accelerates or is held fixed in a gravitational field,
space-time appears curved and the object experiences a force. For an
object moving with uniform motion in free space, or falling along the
natural geodesic in a curved manifold, space-time appears flat and no
force is experienced.
Likewise, to an observer held fixed on the surface of the
Earth, space-time appears asymmetrical, thus constant acceleration is
experienced. Similarly, from the perspective of a uniformly
accelerating observer in free space, the QV appears asymmetrical and
one experiences a force. In both cases, the Equivalence Principle is
preserved and we are equipped with a framework for understanding
its origin.

6.4 Conflux

What of the other strange predictions made by GR, such as
time dilation, mass scaling, length contraction and mass-energy
equivalence? In order for the PV model to robustly challenge GR, it
must satisfy all competing predictions which have thus far been
experimentally verified. The answer to the challenge pertains to the Refractive Index, KPV.
When light passes from air into water, it moves from media
of lower to higher density and slows down. In terms of an optical
model of gravity, relative to a distant observer, if KPV increases, the
speed of light c appears to slow down54. GR is based upon the
trajectories of light through curved space-time relative to an observer;
this implies that the propagation of energy within the QV is the basis
of gravity. Thus, descriptors such as mass, size and time are also
subject to KPV.
Energy is the currency of the Universe and can neither be
created nor destroyed, it may only be exchanged or transformed; all
mass-energy influences its environment and is itself affected by

environmental conditions. One such phenomenon illustrating the interaction between an object and its environment is buoyancy.

54 As light from a region of low energy density (e.g. free space) moves to a region of higher energy density (e.g. the Earth's gravity well).
Consider a helium balloon near the surface of the Earth. The
helium gas inside the balloon is less dense than the encapsulating atmosphere of heavier gases (nitrogen and oxygen). The balloon
possesses buoyancy because atmospheric density is greater than inside
the balloon; the density differential forces the balloon upwards as it
seeks environmental equilibrium55. Equilibrium marks a point of
neutral buoyancy such that the pressure inside and outside the balloon
are equalized. Because the balloon is elastic, it expands during
ascension. Higher atmospheric pressure near the surface of the Earth
acts uniformly on the balloon, confining the helium inside to a
particular volume. However, with increasing altitude, the atmospheric pressure decreases; less environmental energy is available to contain the gas inside, and the balloon expands.
A similar concept sustains the PV model of GR such that
mass-energy equilibrates to the local Quantum Vacuum Energy
(QVE) environment56. Hence, objects appear to shrink upon
acceleration to near light-speed while equilibrating, as perceived by a
distant observer; from the object's perspective the Universe appears to increase in energy, however its own size does not seem to change.
From the distant observer's perspective, the situation is analogous to watching a helium balloon being pulled to Earth from
high altitude. At high altitude, the balloon is stretched to a size
determined by environmental equilibrium. As the balloon is displaced
from its initial equilibrium state by moving into a region of higher
pressure, the increase in atmospheric density compresses the balloon.
In terms of the PV model, because E = mc², we must consider that
mass is a measure of energy and the energy density of the
encapsulating space directly affects mass as it equilibrates to the local
QVE environment.
An object accelerating away from us at near light speed57
appears red-shifted because the frequency of emitted light is subject to
the KPV value. Regardless of the energy density (i.e. KPV value) of
the local space-time encapsulating you in a vacuum, the light you emit
always propagates away at c. However, to a distant observer, the
light you emit appears refracted by the space-time you move through.
The tone of the light appears to shift because the observer views the
light source as moving into a region of KPV which is different from
the observer's local value. The spectral shift of the light emitted from an accelerating source is solely dependent on the relative difference between the KPV values of the observer and the source.

55 All physical systems seek stabilization, marked by the lowest permissible energy state.
56 Analogous to the manner in which a helium balloon is affected by the atmosphere.
57 The speed of light in a vacuum is a definition, not a measurement. This is why it is listed as "exact" by the National Institute of Standards and Technology (NIST); http://physics.nist.gov/cgi-bin/cuu/Value?c|search_for=Speed+of+light
Consider a light source moving naturally into a region of
variable KPV (a gravitational field). To the source object, nothing
appears to change in terms of its own size, the way light moves, or the
way time flows. However, to a distant observer, light from the source
appears to refract as it moves into the gravitational field. An object
moving naturally within a region of variable KPV experiences no
external forces because natural motion in such a case requires moving
along a geodesic path in curved space-time. Thus, the geodesic path
of GR may be expressed in terms of KPV, rather than explicitly in
terms of space-time curvature. The primary advantage of the PV
model over GR, in this regard, is its conceptual simplicity.
An observer might presume that a force acts on a naturally
moving object causing it to shift into a curved trajectory around a
planet or star, but the object itself doesn't experience any force. It is
only when the object is displaced from equilibrium within its local
environment that it experiences a force. All objects seek the lowest
energy equilibrium state within an environment; this is why
acceleration requires energy input and inertia is experienced during
acceleration. Thus, energy input is required to maintain disequilibrium
and once the energy input ceases, the object resumes a state of
uniform motion, in equilibrium.
No matter where uniform motion occurs or how fast an
object may appear to be moving with respect to its environment, a
uniformly moving object will be in equilibrium with the energy
configuration of its environment. It perceives the Universe as being
flat even though it may appear curved to a distant observer. Therefore,
we may summarize equilibration in terms of the PV model of gravity and GR as follows: uniform motion is synonymous with QV equilibrium, and acceleration is synonymous with QV disequilibrium.
From Relativity theory we know that an object can never
accelerate to the speed of light because it becomes infinitely massive;
in order to accelerate an object to the speed of light, an infinite
amount of energy is required to push it that fast. This is truly one of
the most bizarre predictions of Relativity and one of the most difficult
to understand intuitively. It also presents one of the most formidable
boundaries limiting our hopes of finding an efficient means of
interstellar space travel.
As mass accelerates, it encounters inertial resistance due to
disequilibrium, becoming a sink for the increasingly energetic
environment it perceives. To achieve greater acceleration (i.e. QV
disequilibrium), more energy must be dumped into the mass from the
environment in order for the system to equilibrate. Since energy is
equivalent to mass, the mass increases with the addition of thrust
energy. As an object is accelerated to near light speed, its mass
becomes nearly infinite. Consequently, infinite force would be
required to accelerate an infinite mass to the speed of light.
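The runaway energy cost follows directly from the standard relativistic kinetic energy relation, Ek = (γ - 1)·m·c² (a textbook result, independent of the PV or QVIH interpretations):

```python
import math

c = 2.99792458e8  # speed of light [m/s]

def kinetic_energy(m0: float, v: float) -> float:
    """Relativistic kinetic energy E_k = (gamma - 1) * m0 * c^2 [J]."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (gamma - 1.0) * m0 * c**2

# Energy required to push a 1 kg mass to various fractions of c;
# the figure grows without bound as v approaches c.
for fraction in (0.5, 0.9, 0.99, 0.999):
    print(fraction, kinetic_energy(1.0, fraction * c))
```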
Haisch and Rueda's QVIH demonstrates that as matter
accelerates through space, it perceives the QV as being asymmetrical.
The PV model advances this approach by assigning form in terms of a
KPV value. As an object moves into a region of space with an
apparent asymmetric vacuum energy density58, a force is required to
counteract the resistance induced by QV asymmetry. When an object
is held in a gravitational field it experiences constant acceleration; this
is what we have come to call the "force of gravity". In actuality, due to the Equivalence Principle, the object is experiencing an inertial force. No unique force of gravity exists per se, only a
disequilibrium stress between matter and the QV. All points in a
gravitational field exist in a constant state of asymmetrical energy
polarization; thus, all material objects in the field continually attempt
to equilibrate to the asymmetry. However, equilibration is impossible
while any object is held fixed and not permitted to fall along its
natural geodesic trajectory (i.e. path of energetic equilibrium). The only means to equilibrate with environmental asymmetry is to "go with the flow" and fall with gravity. This is why objects are weightless during free-fall.
An objects weight is a function of its mass and the
acceleration of gravity. If we travel to the Moon, we weigh less than
we do on Earth because our weight is a product of our mass (which
remains the same) and the acceleration of gravity, which is lower on
the Moon. If Jupiter had a solid surface we could stand upon, we
would expect to weigh more than we do on Earth. Held fixed in a
stronger gravitational field, the force required to counter the
acceleration of gravity is much greater.
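The arithmetic is simply W = m·g, with the surface acceleration g differing from world to world (the figures below are commonly quoted values; Jupiter's assumes the hypothetical solid surface mentioned above):

```python
# Surface gravitational acceleration [m/s^2], commonly quoted values:
g = {"Earth": 9.81, "Moon": 1.62, "Jupiter": 24.79}

mass = 70.0  # an astronaut's mass [kg]; the same everywhere

for world, accel in g.items():
    print(world, mass * accel)  # weight [N]: ~687, ~113, ~1735
```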

58 i.e. a region of space with a KPV value greater than unity.
In a weak gravitational field such as the surface of the Moon, the QV possesses a lower Refractive Index59 than on the surface of the Earth. One may relate Refractive Index (QV asymmetry) to the slope of a curve, such that greater asymmetry induces a steeper slope. The higher the KPV value, the steeper the slope and the greater the gravitational acceleration will be. Because mass is a measure of the force required to counter QV asymmetry, mass scales according to the local KPV value as it does under GR due to the intensity of space-time curvature. Thus, when optical principles from the PV model are applied to GR, physical processes are easier to comprehend.
One of the most fascinating aspects of GR is the notion that
time is enmeshed with space. GR not only states that mass and energy
must be considered "mass-energy", it also states that space and time are a singular phenomenon termed "space-time". In the four-dimensional matrix of space-time, an object has a precise position within 3-D space, and it has a coordinate or position in time as well.
When an object moves to different coordinates in 3-D space, it will
take a certain amount of time to do so.
We define time as the interval between linked events. In
Relativity the speed of light is constant, thus the most appropriate
means of representing the passage of time is to base it on the duration
of the interval required for light to traverse a particular distance in
space. However, the consequence of holding the speed of light
constant is that mass, length and time will all appear to change
relative to it.
Let's consider Einstein's elevator again. In this case, we decide to quantify time by measuring how long it takes for a photon to bounce between the walls of the elevator from side to side. Let's also
imagine that we have two identical elevators moving parallel to one
another through space. At first, they move at the same rate with no
difference in velocity between them. The photons bounce back and
forth inside each elevator in unison, hitting each wall at exactly the
same moment. However, when one of the elevators begins to move at
a faster rate than the other one, the vertical movements of the
elevators must be considered when measuring the total distance each
photon travels between each wall. Not only is the distance travelled
by each photon measured in terms of the horizontal distance inside the
elevator, the vertical distance the elevator travels within that period
must be added to that distance. This causes the photon's path to stretch into a zigzag rather than a horizontal line.

59 i.e. the KPV value is closer to unity.
As one elevator moves faster than the other, the photon has a
greater distance per unit time to travel between each wall. A photon
propagates in a vacuum at the speed of light; thus, the modified factor
between each elevator is the time required for the photon to bounce
between the walls60. In the slow-moving elevator, the photon has a
shorter overall distance to cover, but in the fast-moving elevator the
photon travels a greater distance overall, taking into account the larger
vertical distance it has to travel. If we are measuring time as the
interval between photons striking each wall, time in the fast-moving
elevator appears to an outside observer to slow down. However, if we
were inside either elevator, we would not notice any difference in the
passage of our own time; we would only be aware of photons
bouncing from side to side. Likewise, if we examine the behavior of
the slow-moving elevator from inside the fast one, it appears that time
has slowed down for the other elevator while ours remains the same;
from inside the slow elevator, time in the fast-moving elevator also
appears to have slowed down.
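The zigzag argument reduces to Pythagoras' theorem: the moving photon's path is longer by the Lorentz factor γ = 1/√(1 - v²/c²), so the moving clock appears to tick slower by exactly that factor. A minimal sketch:

```python
import math

c = 2.99792458e8  # speed of light [m/s]

def tick_dilation(v: float) -> float:
    """Factor by which a bouncing-photon clock moving at speed v
    appears to tick slower: gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for fraction in (0.1, 0.5, 0.9, 0.99):
    print(fraction, tick_dilation(fraction * c))
# at 0.1c the effect is tiny (~1.005); at 0.99c each tick takes ~7x longer
```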
Time is thus relative, not only because it is relative to an
observers motion, but because it is relative to the constant speed of
light in a vacuum. If we were riding in a space ship headed towards a
black hole, we would observe that we move towards it and fall in61.
However, to an outside observer watching the scene unfold, we first
appear to shrink62 and the light we emit begins to red-shift, but we
would never actually appear to fall into the black hole! This is
because, to the outside observer, our time first appears to slow down
and then stop at the moment we reach the event horizon!
Within GR, these strange effects are due to the propagation
of light in a curved space-time manifold. In the context of the PV
model, the same effects occur but they are a function of KPV, not
curvature. Imagine that a region of flat space-time is analogous to a large rug lying flat on the floor. Let's say an ant is walking from one
end of the rug to the other, moving at the maximum speed its legs
permit (the ant represents a photon traveling at light speed). If we
scrunch up the rug and measure the time required by the ant to
traverse a distance relative to the bare floor, the distance per unit time
the ant travels appears to be less than if the rug were lying flat.
When the KPV value increases, QVE density increases.
This is analogous to scrunching up the fabric of space-time. Even
though the speed of light does not appear to change locally within a
region of varying vacuum polarization63, to a distant observer in flat
space-time, light appears to slow down64 and the distance between
two points decreases65. By considering the difference between local
and observed vacuum polarization states of space-time, it becomes
obvious why an object appears to behave as though its time was
slowing down, its length was contracting or its emitted light was
being refracted and shifted in frequency. It's like spear fishing in a tide pool; the image of the fish we see is refracted, and if we aim for what we see we will miss the target. However, if we aim for where the fish should be after refraction is considered, we have a much better chance of hitting it.

60 The relativistic mass effects discussed previously do not apply because the photon is considered to be mass-less.
61 Notwithstanding the tidal forces ripping us apart beforehand.
62 Before being stretched apart by gravitational tidal forces.
The PV model demonstrates the manner in which the KPV
value of the QV is congruent with the concept of curved space-time
which Einstein invoked to explain acceleration and motion through
gravitational fields. Most importantly, the PV model leaves us better
equipped to understand the physical basis for the perplexing
conclusions of GR. Mathematically, the laws of motion elegantly
unfold through the implementation of GR but the basis for these laws
remains difficult to grasp from the perspective of relativistic
curvature. However, when we replace the concept of curvature
with the principles of optics, the consequences of relativity make
intuitive sense when viewed through the lens of the PV.

63 e.g. at the surface of the Earth.
64 Appearing to refract towards the region of higher KPV value.
65 As measured in accordance with the ambient energy density condition in the observer's local environment.
7 The Harmony of Nature

7.1 Ancient wisdom
The Universe is an ever-changing system in motion. All of its parts, the stars and planets, the galaxies and nebulae, are all linked
in a marvelously intricate dance. If a single attribute characterizes the
Cosmos, it is movement. The Universe flows and evolves forming
whorls and spirals like eddies in a flowing river; it appears at once
chaotic yet profoundly ordered. We see order in the regular orbits of
planets around the Sun and the quantum jumps of electron energy
levels in atoms. Science rests upon our ability to mathematically
predict these regularities in a seemingly chaotic Universe. But why is
there order in the first place? What is the organizing principle upon
which order arises in our Universe?
Our innate appreciation of music and harmony affords us a
singular distinction in the animal kingdom, and it is by way of this
principle that we may gain a more thorough understanding of order in
the Cosmos. The Universe is built upon the foundation of harmonic
relationships. Harmony is stability, and stability is order. The Cosmos
exists through a sympathetic balance of forces; a dynamic equilibrium
binding its inner workings in perpetual dance. Nothing exists
independently and all things find order and stability through harmonic
concordance. The supreme clockwork order of the Cosmos is
regulated and overseen by an almost musical harmony. Without this
harmonic imperative acting to govern the existence of matter and
motion, chaos would ensue and the Universe would cease to be. This
is not simply a philosophical exhortation; it is a physical fact.
The concept of harmony permeates virtually every culture
and philosophy, and has been a pervasive theme throughout recorded
history. The ancient Babylonians, whose civilization thrived some
four thousand years ago, are reported to have defined their cultural
and philosophic identity according to the principle of harmony. The
Hellenistic philosopher, Philo of Alexandria (20 BC to 50 AD), described the ancient Babylonians as having "set up a harmony between things on Earth and things on high, between heavenly things and earthly. Following as it were the laws of musical proportion, they have exhibited the Universe as a perfect concord or symphony produced by a sympathetic affinity between its parts"xx.
The "Father of Numbers", Pythagoras of Samos (582 to 507 BC)
was a scientist, a mystic, but foremost, a mathematician. Pythagoras is
known primarily for his Theorem, utilized to calculate the
dimensions of right triangles. However, he is also known for having
formulated the Laws of Cosmic Harmony. Similar to the ancient
Babylonians, Pythagoras adopted a world-view founded not only on
the order that harmony implies, but the physics of harmonic
relationships. His harmonic philosophy was not merely a poetic and
idealistic notion; it was an idea forged from mathematical logic.
The Pythagorean philosophy arose from the study of musical
and tonal relationships. Pythagoras was the first to formally describe
the manner in which our human appreciation of musical tone and
pitch rests upon a solid mathematical foundation of harmonic
proportions. The notes in a scale are not arbitrary. The spectrum of all
possible tones is delineated into distinct divisions, forming scales of
whole notes which we naturally discern as being evenly defined
increments of a larger whole. Most people can immediately identify
whether a note sounds flat or sharp. We sense upon hearing a note
sounding flat or sharp that it is mathematically discordant from other
notes in the scale.
Western music is based upon the diatonic scale, defined by
Pythagoras; thus, originally known as the Pythagorean scale.
Certain notes of a scale may be produced on a single string of a guitar, for example, if the string is held fixed at fractions of its full length. The octave marks a complete cycle of the tonal scale. Two tones separated by eight full tones, termed the octave, are actually the same tone, just higher or lower in pitch. The harmonic ratio describing the octave is 2:1; meaning, a string vibrating along half its length produces the same note as the whole length but at a higher pitch66. If a
string is held fixed at a point one-third of its whole length, a note five
tones above the fundamental67 will sound. The ratio for this tone is
3:2. The ratio producing a sound four tones above the fundamental
is 4:3. All of these even ratios produce sounds in tune with the
others. However, when two different notes not mathematically
concordant with this harmonic ratio are played simultaneously, the
tones sound dissonant. Musical notes are not arbitrarily chosen from a
spectrum of tones, they are increments in tone derived from geometric
ratios of a fundamental tone. The frequency of vibration is what we
hear as a tone and the physics describing the musical scale is defined
by the harmonic relationship between the tones.
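These ratios translate directly into frequencies. Taking an arbitrary fundamental of 220 Hz (our choice, purely for illustration):

```python
fundamental = 220.0  # an arbitrary fundamental tone [Hz]

# The Pythagorean ratios quoted above: octave 2:1, fifth 3:2, fourth 4:3
intervals = {"octave": (2, 1), "fifth": (3, 2), "fourth": (4, 3)}

for name, (num, den) in intervals.items():
    print(name, fundamental * num / den)  # 440.0, 330.0, ~293.3 Hz
```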
Harmony means to be in concordance with, or having parts
joined in sympathetic union; connoting congruence, compatibility and
stability. The Pythagoreans interpreted numeric harmonic relationships as being implicit in Nature, such that the Cosmos owed its existence to a mathematical imperative. Pythagoras believed that numbers were the only real things in the Universe and that divine numerical ratios caused all order to arise. Harmony, in the Pythagorean Universe, begets order and influences all forms in Nature.

66 The high-tone octave is double the frequency of the low-tone.
67 "Fundamental" refers to the lowest tone (i.e. frequency) in a harmonic series.
Pythagoras was also the first to apply the term "Cosmos" to characterize the Universe. To us, the Universe is abstract; something "out there". However, the term Cosmos encompasses everything that is the Universe, from galaxies to planets, from people to plant life; everything. The word Cosmos that we now colloquially apply interchangeably with Universe is an ancient Greek expression describing a state of perfect order: the antithesis of chaos. That
Pythagoras so carefully chose to assign this word to describe the
Universe is quite telling because in doing so, he implies that the
Cosmos is far more than a void in which we are suspended; it is a
supreme manifestation of perfect harmony, permeating all facets of
existence.

7.2 Music of the spheres

Harmony is not merely a poetic philosophical concept; it is
also physical and literal. Assuming harmonics represented the basic
nature of the Cosmos, Pythagoras developed a mystical model of
planetary motion known as the Music of the Spheres68. He surmised
that the planets, the Earth and Sun included, were organized and set in
place according to a divine rule of harmonic proportions. Although
this was a purely mystical philosophy, based upon a mathematical
concept, some literal truth remains in its core.
We have come to discover that the planets do not orbit the
sun in arbitrary paths of their own design; the gravitational fields of
the other bodies in the solar system directly affect their orbits. This
gravitational network is a key factor in the formation and evolution of
our entire solar system. Jupiter's moon Europa, for example, is tugged
by the gravitational influence of all other moons in the Jovian system,
and this acts to align and stabilize the orbital period of each moon.
This stable gravitational arrangement that evolves over time is known
as orbital resonance69.

68 Musica Universalis.
69 Orbital resonances may be stable or unstable.

Orbital resonance was the key driver in the formation, evolution and order of our current solar system. Early in its evolution, our own solar system is thought to have possessed many more planets
our own solar system is thought to have possessed many more planets
than exist today. However, resonant instabilities caused these early
planetoids to congeal into larger ones, or nudged them out of their
precarious orbits and into the Sun. Some planetoids ejected from their
solar orbits were captured by planets and became moons. The
substantial masses of gas giants such as Jupiter and Saturn provide a source of protection and stability for our solar system, acting as grand matriarchs of the solar family, reining in rebellious rogues and ejecting uncooperative dissidents.
Over the course of many millions of years, the solar system
eventually evolved more stable orbital resonances as it settled into the
configuration that exists today. These stable resonances act to hold
our solar system together. For example, the Earth's period of revolution around the Sun is one year, and Saturn takes nearly 30 years to complete its orbit. The closest planet to the Sun, Mercury, takes roughly three months to complete its orbit. Considering the orbital
periods in relation to one another, we notice a marked regularity
amongst them. For every orbit of the Earth, for example, Mercury
orbits the Sun approximately four times and for every five orbits of
Jupiter, Saturn orbits twice. Quasi-uniform ratios are found amongst
the orbital periods of the planets and moons, and these alignments
stabilize and balance the solar system. A rhythm and synchronicity
exists, as if the planets are engaged in a grand, cosmic waltz.
To illustrate this point, we shall examine the orbital resonance ratios of Jupiter's moons Ganymede, Europa and Io. The harmonic ratio of these three moons is 1:2:4 respectively; for every orbit of Ganymede, Europa orbits twice and Io orbits four times.
Harmonically ordered orbits evolve because all objects and systems
seek the condition of greatest stability, marked by the state of lowest
energy. For example, a ball rolling back and forth in a U-shaped well
eventually stabilizes as it comes to rest at the lowest point in the well,
achieving a state of least energy. Moving the ball up the side of the
well requires energy input and represents an unstable energy state.
Resonant orbital arrangements are self-reinforcing because they often
represent the lowest permissible energy configuration of the system.

The orbits of other planets in our solar system have self-organized and evolved over the eons into highly regular harmonic ratiosxxi, and though not all planets in the solar system possess highly regular orbital resonances, our solar system continues to evolve into a
state of increasing stability as time progresses. Harmonization in the
orbital periods of planets in our solar system has been recognized and
appreciated since ancient times. This clockwork syncopation
mesmerized early philosophers and physicists such as Johannes
Kepler.
In his 1619 treatise entitled Harmonice Mundi70, Kepler sought to explain the arrangement of the planets according to an organizing principle stipulated by the Pythagorean model and the geometric conventions of Plato's five perfect solids. Kepler believed that the circumference of each planetary orbit was prescribed by the ratio resulting from nesting the five perfect solids inside one another.
In Kepler's model, a cube with a sphere fitting snugly inside indicated the orbital circumference of an arbitrary planet. Within that sphere fits another solid such as a tetrahedron, and the sphere nested within the tetrahedron indicated the orbital circumference of another planet. Within the orbit defined by the tetrahedron another perfect solid would fit, thus defining another planetary orbit and so on, until all the planets were accounted for.

70 Harmony of the Worlds.
Needless to say, Kepler's speculative model ultimately proved to be incorrect. However, his deep conviction that a divine, harmonic order regulated Natural laws inspired him to develop the famous Three Laws of Planetary Motion. These laws not only maintain their rank as the gold-standard of celestial mechanics to this day, they laid the groundwork for Newtonian Mechanics and provided undeniable evidence for a heliocentric model of the solar system.

(Left): Kepler's planetary model of the solar system as nested Platonic solids. Mysterium Cosmographicum (1596).
7.3 The quantum-harmonic axiom

Harmony implies stability. This is not only true in
philosophical terms; it also applies to physical systems. Matter exists
in the Universe because elementary particles71 are products of an
inherent harmonic order. Atoms are composed of three particle
constituents; the proton, neutron and electron. The proton and neutron
are themselves composed of three quark subunits. The structural
composition of atoms is never arbitrary; symmetry and uniformity exist at every level of scale. Simple and consistent mathematical
rules of symmetry always apply, giving rise to the existence of matter.
A simple rule, such as the number of electrons in an atom,
dictates how an individual element reacts and combines with others to
form a varied array of molecules which, in turn, form all substances in
the Universe. The stability and homogeneity of matter and its
predictable chemical behavior is a manifestation of the underlying
mathematical order upon which it is constructed.

71 The building blocks from which atoms are constructed.

The energy levels of electrons surrounding the atomic nucleus are harmonically defined. Atomic electrons may be
considered to exist as three-dimensional probabilistic standing waves
surrounding the nucleus. Sometimes the probability of finding an
electron at a specific point in an atom is low, while in other
places the presence of an electron is very likely.
The electron probability distribution is proportional to the size of the atom itself. One may consider the atom as a box containing the electron-wave, analogous to the orbit of a planet in Kepler's model. The boundary imposed upon the electron-wave limits its existence to harmonic multiples of its ground-state72, not unlike a guitar string held fixed at both ends. The fixed string may only vibrate in harmonic increments of its fixed length. Similarly, atomic electrons exist as harmonic intervals of their fundamental frequency. In the electron's case however, instead of vibrating like a
string in two dimensions, the electron exists as a standing wave
encompassing the three-dimensional volume of the atom.
These harmonic frequency intervals are termed Eigen-frequencies, derived from the German word "Eigen", meaning "same". The electron Eigen-frequencies (i.e. Eigen-states) are quantum and discrete73; representing whole harmonic multiples of the
ground-state energy. It is this harmonic organization principle that led
to the development of Quantum Mechanics (QM), and it is also the
reason it is called quantum in the first place.
(Right): Eigen-states of the electron in a hydrogen atom74.
In order to better explain this concept, we shall turn to the model of blackbody radiation. Max Planck demonstrated that thermal radiation may be described by a spectral relationship. The distribution pattern of the blackbody radiation spectrum is characterized as a skewed
bell-shaped curve; the majority of photons surrounding a hot object possess roughly the same energy, and the prevalence of photons with higher or lower energy diminishes on either side of the spectrum.

72 The lowest permissible whole-integer frequency within the parameters set by the atomic boundary.
73 Intermediate energy states do not exist.
74 Image credited to Florian Marquardt, Theoretical Condensed Matter, department of physics, University of Munich (LMU), Germany.
Planck treated each photon as a harmonic oscillator,
analogous to a tiny string in space; each one vibrating at a specific
frequency. Planck concluded that the range of permissible oscillation frequencies was not continuous, but limited to integer multiples of h·ν, i.e., the frequency multiplied by Planck's Constantxxii. Thus a
blackbody radiation spectrum is not randomly defined; it represents a
distribution of discrete frequencies with variable prevalence along the
spectrum. In other words, photon energies surrounding a hot object
cannot oscillate at random frequencies; they may only exist at precise
sub-harmonics of the Planck Frequency, implying that the energy
associated with any material object is harmonically defined.
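In symbols, the permissible energies at a given frequency ν are simply E = n·h·ν, with n a whole number. A quick sketch for a single optical frequency (the frequency chosen is illustrative):

```python
h = 6.62607015e-34  # Planck's Constant [J s]
nu = 5.6e14         # an illustrative optical frequency [Hz]

# Permissible energies come only in whole quanta of h*nu;
# intermediate values are forbidden.
for n in (1, 2, 3):
    print(n, n * h * nu)  # [J]
```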
For example, the majority of radiant energy from the Sun
occurs in a relatively narrow bandwidth of wavelengths. When
graphed, the prevalence of photons in a given energy range forms a
skewed bell-shaped distribution termed a blackbody radiation
curve. Planck determined that the only manner in which to accurately
depict the distribution of energy in a blackbody curve was to
quantize the field. He partitioned the spectral energy into discrete
bits; each bit being harmonically related to others by Planck's
Constant, analogous to the manner in which various Eigen-states of an
electron are delimited by its ground-state energy.
Thus, the quantum model not only applies to energy, it also
applies to matter. In quantizing electron Eigen-states and dividing the
electromagnetic field into photons, the underlying structure and order
of the quantum Universe is revealed. QM states that matter and
energy are literally built upon the foundation of harmonic
relationships.

7.4 Fourier's legacy

If we delve deeply enough into any physical phenomenon, it
seems that nearly all physical processes are governed by some kind
of harmonic statute. But what is the reason behind all this order and
structure we find in the Cosmos, and how does it naturally arise?
Order governs the balance of energy and forces. An unstable,
disordered system is like a boulder precariously balanced on the tip of
a mountain peak. A boulder that has come to rest on the valley floor
after having tumbled down the mountainside denotes a stable system
at its lowest energy state.

Analyzing orbital resonance in planetary motion, we find that configurations evolve75 to represent the most energetically stable and efficient arrangement possible. Each planet contributes gravitational
efficient arrangement possible. Each planet contributes gravitational
influence to the system such that the resulting tidal forces provide a
positively or negatively reinforcing effect upon the elements76 of the
system.
A complete cycle of a planet around the sun is termed an
orbital period. Orbital periods and cyclical motions of any kind,
such as the swing of a pendulum, may be represented as sine waves
possessing frequency and amplitude. To increase the amplitude
(height) of the pendulum's swing, energy may be added in the form of a push in the pendulum's direction of motion. When pushing in
syncopation with the motion of the pendulum77, the amplitude of the
swing increases. Similarly, to decrease the amplitude of the swing, a
push may be applied which is out of phase78. Doing so saps the kinetic
energy of the motion and the pendulum will eventually come to a
stop.
The energy dynamics of this process may be represented
mathematically by the simple addition and subtraction of waves.
Adding79 two waves of equal length and amplitude peak-to-peak80
forms a resultant wave possessing the same length, but double the
amplitude of the initial waves; this is termed constructive interference. Conversely, if two identical waves are added peak-to-trough81, the waves cancel each other out. This is termed destructive interference. This scheme for adding and subtracting waves has been
greatly elaborated upon in the world of mathematics to encompass all
manner of wave interactions.
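Numerically, the bookkeeping really is simple addition. A brief sketch of both cases (our own illustration, using NumPy for the array arithmetic):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
f = 5.0  # wave frequency [Hz]

wave      = np.sin(2.0 * np.pi * f * t)
in_phase  = wave + wave                                 # peak-to-peak
out_phase = wave + np.sin(2.0 * np.pi * f * t + np.pi)  # peak-to-trough

print(in_phase.max())           # ~2.0: constructive interference
print(np.abs(out_phase).max())  # ~0.0: destructive interference
```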
Many different waves may be combined to produce a single
composite waveform. Conversely, a composite waveform may be
decomposed into a set or spectrum of individual waves. The
decomposition process is a bit like defining the number 100. Many
individual numbers may be added together to obtain 100;
alternatively, 100 may be decomposed into sets of lesser numbers
equaling 100 when summed. Waves are analogous to numbers in
this regard. Each wave is analogous to a number which may be added
or subtracted from the others to create a final number, denoting a constant function.

75 Over long periods of time.
76 i.e. celestial objects.
77 The addition of energy is in phase with its natural swing.
78 Opposing the swing of the pendulum.
79 Superimposing.
80 The two waves are said to be "in phase".
81 The two waves are said to be "out of phase".
This process of summing harmonic modes of a fundamental wave may be used to mathematically reconstruct any waveform or constant function by applying the rules of constructive and destructive
interference. The reverse is also true, as one may readily decompose
any constant function into a cognate spectrum of discrete harmonic
frequencies. The process of waveform dissolution is termed spectral
analysis. Spectral analysis has broad applications and is commonly
utilized in electronics, optics, acoustics, image processing, computer
data compression and more. It may even be used to model
gravitational acceleration82. In fact, the use of spectral analysis to
characterize gravity forms the basis of what we refer to as space-time
engineering.
The early 19th century French mathematician and physicist, Joseph Fourier, founded the field of spectral analysis through his development of a mathematical process for compiling harmonic waves. This method is nowadays termed Fourier series83, and the mathematical operation facilitating spectral decomposition is termed Fourier Transformation84.
Sir Isaac Newton learned that if he shined white light through a glass prism, it spread into a rainbow of colors. This experiment demonstrated that sunlight is composed of many distinct wavelengths of light, spanning all visible colors of the spectrum. In terms of spectral analysis, sunlight is analogous to a constant function compiled from the superposition of a spectrum of light waves. Spectral analysis may be utilized to characterize electromagnetic
(EM) fields85 such as sunlight, for example, by mathematically compiling the individual wavelengths that make up the visible spectrum.

82 The harmonic representation of gravitational acceleration is thoroughly demonstrated in Quinta Essentia Part III (QE3). A brief summary of this process is also available in the EGM Technical Summary.
83 http://mathworld.wolfram.com/FourierSeries.html
84 The Fourier Transform defines a relationship between a signal in the time domain and its representation in the frequency domain: http://www.see.ed.ac.uk/~mjj/dspDemos/EE4/tutFT.html

The Fourier summation of waves may be utilized to construct


a constant function by the mathematical superposition of the
harmonics associated with a fundamental wave. For example, a
periodic square wave86 may be represented as harmonic multiples of
the fundamental frequency utilizing Fourier series87. The following
illustration depicts a Fourier series summation of a small number of
harmonic modes, demonstrating that as the number of summed modes
approaches infinity, the Fourier representation utilizing sine waves
becomes (in this case) a perfect square wave of unit amplitude.
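The construction is easy to verify numerically. What follows is a brief illustrative sketch of our own (it assumes Python with the NumPy library and is not drawn from the Quinta Essentia texts); it sums the odd sine harmonics of the standard square-wave Fourier series and shows the partial sums approaching unit amplitude as modes are added:

    # Partial Fourier sums of a unit-amplitude square wave:
    #   f(t) = (4/pi) * [sin(2*pi*t) + sin(6*pi*t)/3 + sin(10*pi*t)/5 + ...]
    import numpy as np

    t = np.linspace(0.0, 1.0, 1000, endpoint=False)  # one period

    def square_wave_partial_sum(t, n_modes):
        """Sum the first n_modes odd harmonics of the square-wave series."""
        total = np.zeros_like(t)
        for k in range(1, 2 * n_modes, 2):           # k = 1, 3, 5, ...
            total += np.sin(2 * np.pi * k * t) / k
        return (4.0 / np.pi) * total

    for n in (1, 5, 50):
        approx = square_wave_partial_sum(t, n)
        print(n, "modes -> peak value", round(float(approx.max()), 3))
    # As n grows, the sum converges toward the +/-1 square wave, apart from
    # the familiar Gibbs overshoot that clings to each discontinuity.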

85 EM energy is typically represented as a span of radio, microwaves, visible light, x-rays or gamma rays. Gamma rays possess short wavelengths (i.e. high frequencies) and are highly energetic, whereas radio waves occur at long wavelengths (i.e. low frequencies) and are low in energy. All possible energy values of EM radiation fall along a continuum called the EM spectrum. EM waves are spectrally organized according to wavelength, representing a range of possibilities. Bandwidth refers to a range or sub-set of wavelengths within the spectrum such as visible light. The bandwidth of white light lies between the UV and infrared EM limits.
86 http://www.falstad.com/fourier/
87 Each harmonic, relative to its lowest permissible frequency value in the applied mathematical spectrum, is termed a mode.

The ground-state of any system represents its lowest permissible energy value, and is synonymous with stability. Whether it is the orbital period of Jupiter's moons, the quantum shifts of electrons in a hydrogen atom, or a drop of oil spreading upon the surface of water, stability (a constant energy state) is a direct consequence of environmental equilibrium. In terms of spectral analysis, any ground-state system we wish to consider represents a summation of inputs and outputs resulting in a constant function; the system's most stable configuration.
It is possible to mathematically visualize the difference between stability and instability, in terms of symmetry and asymmetry, utilizing Fourier series. Let's look back to the orbital resonances of Io, Europa and Ganymede. If the harmonic orbital periods of these moons are superimposed upon one another as periodic functions, the wave summation forms a regular and symmetric pattern. If Europa happened to complete 2.765 orbits per orbit of Ganymede rather than exactly 2, the resulting waveform would be asymmetric.
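The contrast is simple to demonstrate numerically. The following toy sketch is ours (assuming Python with NumPy; the 2.765 detuning figure is the hypothetical value quoted above): three sines standing in the resonant 1:2:4 frequency ratio of Ganymede, Europa and Io repeat exactly every Ganymede period, whereas the detuned combination never does:

    # Harmonic vs. non-harmonic superposition of three "orbital" sines.
    import numpy as np

    t = np.linspace(0.0, 4.0, 4000, endpoint=False)  # four Ganymede periods

    def composite(europa_ratio):
        # Ganymede at frequency 1; Europa and Io (2x Europa) scaled from it.
        return (np.sin(2 * np.pi * t)
                + np.sin(2 * np.pi * europa_ratio * t)
                + np.sin(2 * np.pi * 2 * europa_ratio * t))

    resonant = composite(2.0)    # 1 : 2 : 4 resonance
    detuned = composite(2.765)   # the hypothetical detuning

    n = 1000  # samples per Ganymede period
    for name, w in (("resonant", resonant), ("detuned", detuned)):
        mismatch = float(np.max(np.abs(w[:n] - w[n:2 * n])))
        print(name, "period-to-period mismatch:", round(mismatch, 3))
    # The resonant sum repeats exactly; the detuned sum does not.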


The same is true for musical tones as well. When we hear multiple notes played in unison which denote even harmonic intervals in a scale (termed a chord), it sounds pleasing. However, if one of the notes possesses a wavelength which is not an exact harmonic increment in that scale, the chord sounds dissonant. Mathematical and harmonic symmetry is often congruent with aesthetic value. Mathematically, the dissonant interaction of sound waves generates a composite asymmetrical waveform which sounds unpleasant. However, harmonic sound waves combine elegantly, yielding an ordered and symmetric waveform which sounds pleasant.
Our aesthetic appreciation of harmony, symmetry, music,
order and beauty defines our humanity. Pythagoras and the ancient
Babylonians reasoned that this was due to an underlying mathematical
symmetry in Nature; a harmonic sympathy amongst all things which
enables and establishes structure in the Cosmos. We intrinsically
appreciate Natural order because we innately understand that all
things are drawn to, or tend towards this end. We readily appreciate
the difference between free-flow and interference, harmony and
dissonance. The fact that we may apply Fourier's techniques to
analyze systemic symmetry affords us a unique opportunity. Through
this perspective, we may view the processes of Nature in a deeper and
more enlightened manner, and easily identify the common threads and
underlying principles of action driving natural processes in all their
forms.


Electro-Gravi-Magnetics (EGM)

Controversy is the first step towards reformation.
Riccardo C. Storti

8.1 Introduction

Gravity is an electromagnetic phenomenon. Many physicists in the scientific community today might deem this to be a rather bold, even heretical statement, because gravity has traditionally been treated as a unique and distinct force in and of itself. Electricity and magnetism, which were once thought to be entirely disparate entities, are now unified into a single set of interactions termed electromagnetism. The weak nuclear force, which governs certain decay processes within atoms, was shown to share a common origin with electromagnetism (its carrier particles and the photon emerge from one unified description), and thus we now call this combined interaction the electroweak interaction. Nature has placed many conspicuous clues pointing directly to the notion that gravity and inertia are electromagnetic in origin. We have chosen to use this argument as the starting point of our investigation into the nature of gravity, simply because it is the most logical and obvious place to begin.
The science of physics has long considered the development
of an all-encompassing Theory of Everything to be its greatest and
final purpose: to unify gravity and all the other forces of Nature into
a single, elegant equation. Despite the fact that by the turn of the 21st
century there was still no single theory which could successfully unite
gravity with electromagnetism, there is good reason to suggest that
gravity has always been unified with electromagnetism, in principle at
least.
If one rationally considers the tenets of General Relativity
(GR), one is forced to conclude that gravity must operate, at least in
part, through some component of electromagnetism. According to
GR, matter generates curvature in space-time, and this imaginary
curvature directly affects the propagation of electromagnetic (EM)
energy and the motions of material objects. As a beam of light enters
a gravitational field, its trajectory is curved in direct response to the
gravitational field it passes through. However, the dynamic behavior
of EM energy in proximity to matter defines space-time curvature,
which in turn defines how material objects interact gravitationally.
This means that one may remove the concepts of space-time
curvature and gravitational force entirely and substitute them with
the refraction of EM energy in the presence of matter.

We use the term electromagnetism because we now understand that the forces of electricity and magnetism go hand-in-hand. This connection is empirically proven because magnetic fields may be applied to induce electrical currents in conductors, and vice versa. Each force has the ability to directly affect the other. By this logic, we must also conclude that because gravity is the result of energy displacement due to the presence of matter (i.e. what Einstein termed curvature of space-time), gravity must act through some deep
and fundamental connection between matter and EM energy.
This fundamental connection may be described in exquisite
detail utilizing the Electro-Gravi-Magnetics (EGM) method. EGM has
been developed through a synthesis of observation and application of
time-tested principles of engineering, physics and mathematics. No ad
hoc theories are required and no new physics has been conjured up
in order to develop the EGM method. EGM doesn't require the
invocation of multiple dimensions or universes to yield highly
accurate results which are fully consistent with observation and
proven theory. EGM simply and elegantly reveals mathematical
patterns and relationships in Nature which form the basis of all
physical phenomena involving matter and energy including gravity.
More specifically, EGM models the manner in which mass-energy
behaves in the milieu of Quantum Vacuum Energy (QVE). Most
importantly, this interaction between matter and the Quantum
Vacuum (QV) constitutes a system which not only defines the
properties of mass, but also reveals the primary canonical rule
governing the existence of matter.
The first principle which must be acknowledged and
accepted to fully understand how the EGM method works is that
matter doesn't exist as an autonomous entity floating inertly in space.
EGM models mass-energy as a dynamic interaction process in which
energy in the form of mass establishes equilibrium with the energy of
the QV surrounding it. Matter and EM radiation follow geodesic paths
of least resistance through space-time as they seek equilibrium within
the ambient vacuum-energy environment.

8.2 Similitude

If we wish to investigate gravity as being a function of
electromagnetism, we must assume that gravity and electromagnetism
are already unified. Furthermore, if we wish to implicate the QV, and
therefore Quantum Mechanics (QM) as the binding link, we must
additionally assume that QM is also unified with gravity and
electromagnetism. However, prior to adopting this perspective, we must identify the common thread relating these outwardly divergent
concepts. This was the first step in the development of the EGM
method, and the reason why it is referred to as a method and not a
theory.
Buckingham Π Theory (BPT) is a well-established
fundamental engineering principle which has been widely utilized
since its formulation in the early 1900s. In fact, much of what we
know today about thermodynamics has been gained through the
implementation of this theory. Buckinghams technique is applied to
simplify the representation of complex physical systems. In doing so,
BPT determines which components are necessary (or unnecessary) in
order to adequately represent the dynamics of a system.
The Greek letter Π (Pi) in BPT doesn't refer to the ratio π, but instead denotes dimensionless variables arranged according to like terms in order to describe the components of a system. It is somewhat analogous to the manner in which words are arranged according to the rules of grammar and sentence structure. In this regard, the dimensionless Π groups are the words of the sentence,
whereas the grammatical structure and choice of words in the
sentence are analogous to the equation best describing the system
being analyzed. For example, a single event may be described in
many different ways, utilizing different words, different combinations
of words, placed in various order, and yet still yield an adequate
description of that event. No right or wrong sentences exist per se,
only ones adequately describing the event being observed. One may
choose to recount a single event quite differently from another person,
or rephrase the details of one event in various ways, yet the desired
result is the same: that the information is communicated adequately.
BPT formalisms afford an engineer the ability to phrase the
dynamics of an experimental prototype in multiple ways, with the end
result being an equation describing the system mathematically. Syntax
in language provides the structural framework upon which ideas are
communicated. The basic rules of syntax allow a limited number of
words to be arranged into an almost limitless number of expressions.
Syntax provides structure and meaning to language so that ideas are
conveyed. BPT, in a sense, provides the mathematical syntax upon
which an equation may be constructed. An engineer designs and
selects a mathematical expression, in accordance with syntactic
guidelines, yielding the most complete depiction of the prototype.
Variables may be added or removed from the equation until a model
is constructed which best predicts the outcome of a simulation.


BPT is utilized to model the behavior of a whole system88
without requiring precise knowledge of all components within the
system. For example, it is not necessary to calculate the movements of
every water molecule in the ocean to adequately model or predict the
movement of a wave passing through it. BPT operates within the
framework of Dimensional Analysis Techniques (DATs)89,
demonstrating the scaling relationship between similar systems90. For
example, in modeling a wave in water, DATs demonstrate that the
size of a wave may be irrelevant in many cases. A wave may be
several meters high, or a mere ripple on the surface; however, the
wave dynamics of the system are geometrically scalable. Likewise,
the dynamics of a vortex of water going down a drain may be
described in the same terms as a tornado in the atmosphere.
If we wish to design a computer simulation of a new
submarine prototype, BPT and DATs allow us to select the proper
physical parameters affecting the real submarine, such as the tensile
strength of the hull, the pressure of the water acting on the hull and so
forth, and it also allows the researcher to reduce or eliminate variables
and parameters which are unnecessary. This dramatically increases
the efficiency of the prototype design process by reducing the number
of experiments and simulations necessary in order to test it
adequately. These methods also provide a framework for
understanding and analyzing problems, and a means of assessing the
overall quality and usefulness of the model itself91.
Generating an equation utilizing BPT is quite
straightforward. All the factors involved in the system being analyzed
are considered, then the experimenter judges which variables92 are
expected to be physically important, such as energy, time, mass,
length, gravity, pressure, etc. The variables are then grouped as a set
of parameters influencing the system, in accordance with the standard
methodology developed by Edgar Buckingham.

88 Particularly when scaling physical relationships to the size of bench-top experimental prototypes.
89 Primarily enforcing dimensional homogeneity across mathematical and physical representations.
90 Indicating that they may be described in like terms.
91 Norwegian University of Science and Technology, http://www.math.ntnu.no/~hanche/kurs/matmod/1998h/
92 Each possessing units of physical measure; for example, mass is a fundamental unit (e.g. kg) which cannot be reduced nor expressed as a combination of units.

An important consideration involving DATs and BPT is the rule of similitude. In order to compare a mathematical model to a physical system, certain criteria must be satisfied. The model must have dynamic, kinematic or geometric similarity to the real-world system (any of, or all of these if applicable). Dynamic similarity relates forces, kinematic similarity relates motion93 and geometric similarity relates shape94. Once the design principles of similitude are satisfied, the mathematical model is considered applicable to the real-world system95.
The famed English physicist, Sir Geoffrey I. Taylor, masterfully demonstrated how dimensional analysis could be applied to predict the energy generated by the first atomic bomb, detonated outside Alamogordo, New Mexico in 1945, utilizing declassified high-speed camera images of the explosion. Taylor surmised that the five physical factors involved in the explosion were: the energy of the explosion, the radius of the shockwave, the atmospheric pressure and density acting to contain the shockwave, and the time interval of the shockwave's expansion. These five physical terms possess three fundamental units between them (i.e. length, mass and time). The number of dimensionless Π groups96 equals the number of physical factors involved in the system, minus the number of fundamental units. Five physical factors are involved in the system yielding three fundamental units, therefore two dimensionless Π groupings are required to solve for the energy released by the detonation.
Energy, exerted as atmospheric pressure, acts to partially
contain the explosion as it occurs. The dynamic interaction between
the energy released in the blast and the energy exerted by atmospheric
pressure generates the shock wave. This interaction is similar to the
manner in which the size of an air bubble in the ocean is defined by
the ambient pressure of the water. Thus, the shock wave is utilized as
a measure of the total energy of the system.
High-speed cameras were utilized to film the explosion and
each still image provided Taylor with precise time intervals to
measure the size97 and rate of the shockwave's expansion. This
information facilitated the determination of the energy released by the
blast, without foreknowledge of the amount of explosive used in the device itself. The rate and size of shockwave expansion is proportional to the energy released by the explosion and representative of the energy exerted by the surrounding atmosphere acting to contain the blast-sphere as the system moves towards a state of energetic equilibrium.

93 Synonymous with the time domain.
94 For instance, the topology of space-time curvature within the context of GR.
95 Refer to a standard Engineering text for worked examples of DATs and BPT.
96 i.e. Π groups.
97 i.e. the spherical dimensions of the expanding explosion.
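A back-of-the-envelope sketch of our own shows how far the two Π groups reach. They collapse Taylor's five factors into r(t) ≈ (E t² / ρ)^(1/5), with the dimensionless constant taken as roughly one and the comparatively small ambient-pressure term neglected; the radius and time below are approximate values for a single early Trinity film frame, quoted purely for illustration:

    # Taylor's blast-wave scaling: E ~ rho * r^5 / t^2.
    rho = 1.2     # air density, kg/m^3
    r = 140.0     # shockwave radius read from one film frame, m (approx.)
    t = 0.025     # time stamp of that frame, s (approx.)

    E = rho * r**5 / t**2                      # joules, constant taken as ~1
    print(f"E ~ {E:.2e} J ~ {E / 4.184e12:.0f} kt of TNT")
    # ~1e14 J, i.e. roughly 20-25 kilotons: the right order of magnitude
    # for the Trinity yield, extracted from a single photograph.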

8.3 Precepts and principles

The EGM method is so named because it was initially developed as a means of representing, based upon DATs and BPT, how a gravitational field might be described solely in the mathematical language of electromagnetism. EGM is not a theory; it is a modeling approach, a method of mathematically simulating a real-world system.
Standard engineering techniques such as DATs and BPT are
typically applied to simulate common engineering problems such as
those involving aerodynamics, thermodynamics or load stress;
however, in this case they have been applied to find solutions to
problems in GR and QM. EGM is not new physics. EGM is based
entirely upon tried-and-true mathematical and physical principles.
The original intent of EGM was to determine, via
mathematical modeling alone, whether it might be possible to modify
the gravitational force acting on a test object, or to potentially
generate artificial gravity by utilizing electromagnetic energy to alter
the state of the QV surrounding the test object. However, the result
proved to be of far greater scope and significance than its developers
originally anticipated, or could have possibly imagined. The EGM
method has unveiled a universal principle which may be applied to
virtually all physical systems involving matter and energy. To any
properly skeptical scientist, this may seem either too good to be true,
or even impossible to believe. Yet, when EGM is applied to physical
systems, it consistently yields highly accurate and astonishingly
precise results, whether one is investigating the microcosm of
subatomic particles or the largest astronomical objects in the
Universe.
EGM was formulated with the intent of providing a tool with
which engineers and physicists could not only understand, but
possibly even modify gravity and inertia. Theoretically, this can only
be achieved by somehow modifying the space-time manifold
surrounding a test object. Gravity is often erroneously referred to and
treated as a force that is said to pull on other objects. This is an
entirely inaccurate and misleading portrayal. Gravity is the result of an interaction between matter and the space-time manifold
surrounding it. GR interprets gravity as being due to curvature within
the space-time manifold98. The space-time manifold is thought to
control how objects and radiation move through it, in accordance with
the local index of curvature in a region of space. In effect, GR allows
us to study the physics of gravity through a geometric interpretation of
space-time. Hence, the original purpose of EGM was an attempt to
determine how one might go about physically modifying the space-time manifold in order to alter gravitational and inertial geodesics.
We also know from Einstein's famous equation E = mc² that matter and energy are equivalent. Simply put, this means that matter is energy: the energy is merely condensed in the form of matter. However, matter is an inadequate term to use in physics and is better represented by its physical attribute termed mass. Mass is a mathematical term describing the amount of energy embodied by matter, and is thus referred to as mass-energy to reflect Einstein's equivalence relationship.
GR states that mass-energy generates curvature in the fabric
of space-time, resulting in gravity. However, we must always
remember that space-time curvature is physically meaningless; that
is to say, it is only a mathematical construct allowing us to describe a
physical phenomenon. Believing the space-time manifold to be pure
vacuum leads to a logically defiant position99. Although GR describes
the motions of matter and EM radiation, it may only be regarded as a
mathematical interpretation of reality because Einstein was never able
to adequately demonstrate the exact physical mechanism by which
matter generates curvature in space-time, or the physical attribute of
space-time capable of being curved.
EGM models the matter-manifold interaction as a physical system such that energy, in the form of mass, does work on the space-time manifold in order to directly affect (i.e. curve) it. However, in order for this to be a physical interaction, the space-time manifold must be treated as though it is something rather than nothing; there must be something for mass-energy to exert its influence upon. EGM presumes physicality of the space-time manifold and that the currency of this exchange is electromagnetic (i.e., mediated by photons, or more specifically, by gravitons). Presuming gravity is the result of a matter-manifold interaction, the EGM construct may be
implemented via the application of control-systems engineering philosophies100 in accordance with the following precepts:

i. An object at rest polarizes the QV surrounding it.
ii. An object at rest is in equilibrium with the QV surrounding it.
iii. The QVE101 surrounding an object at rest is equivalent to E = mc².
iv. The frequency distribution of the spectral energy density of the QV surrounding an object at rest is cubic.

98 i.e. induced and affected by the presence of matter and/or energy.
99 i.e. nothingness cannot possess shape.
100 Commonly invoked to design cruise control devices in cars or tracking systems, and in general, any automated technology utilizing feedback to maintain a steady-state.
101 i.e. gravitational field energy.

In other words, EGM methodology commences by mathematically expressing the mass-energy value E, from E = mc², in terms of an EM spectrum by Fourier Transformation. This mass-energy spectrum is then superimposed upon the frequency-cubed QV spectrum of flat space-time, derived from QM. Expressing mass-energy in spectral terms facilitates the coalescence of these two spectra mathematically, creating a new spectrum. This new spectrum, in turn, depicts an EM energy equilibrium gradient formed between the center of mass and infinity. The energy gradient produced is entirely congruent to the attribute of space-time curvature derived from GR.
However, it is important to emphasize that EGM is a
mathematical construct only. EGM does not propose that mass is
literally comprised of spectral modes interacting with the QV; it is
merely a tool by which to distil and deconstruct the fundamental
energy dynamics of GR, EM and QM; combining like characteristics
in order to solve a problem.

8.4 Space-time engineering

In 1939, physicists Lise Meitner and Otto Frisch published an article in the journal Nature entitled "Disintegration of Uranium by Neutrons: a New Type of Nuclear Reaction"xxiii. Meitner and Frisch
studied the physics of nuclear fission occurring when the nuclei of
uranium atoms are bombarded with neutrons. This causes the nucleus of the atom to split in two, forming barium isotopes and releasing a
large amount of energy in the process. However, Meitner and Frisch
were initially stumped in their investigation of this phenomenon. The
mass of uranium and the absorbed neutrons yielded fission products
lower in total mass than that of the starting materials, and they could
not account for the mass lost in the reaction, until they took Einstein's
mass-energy relationship into consideration. Their breakthrough in
finally understanding fission came from the realization that the
missing mass-energy is equivalent to the amount of photon energy
released by the fission process.
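The bookkeeping reduces to a one-line application of the equivalence relationship. A minimal sketch of ours (the mass defect below is an illustrative round number, not a figure from the book):

    # "Missing mass" accounting via E = m * c^2.
    c = 2.998e8                  # speed of light, m/s
    delta_m = 3.56e-28           # illustrative mass defect per fission, kg
    E = delta_m * c**2           # energy released per fission event
    print(f"E = {E:.2e} J = {E / 1.602e-13:.0f} MeV")
    # ~200 MeV per nucleus, roughly 0.1% of the atom's mass-energy,
    # carried away as radiation and kinetic energy of the fragments.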
Similarly, when a massive star explodes as a supernova,
some of its mass is lost as energy102 while some is fused into
heavier elements such as carbon and iron. Any loss of the star's
original mass must be accounted for, and this missing mass takes
the form of an equivalent amount of photonic energy.
The combined energy of a collection of photons is often
expressed mathematically as a spectrum of EM frequencies. The
EGM method commences by mathematically representing any mass
as an equivalent localized density of photonic energy. Properties of
Fourier harmonics are subsequently utilized to mathematically
decompile the mass-energy into a spectrum of EM frequencies. The
total mass-energy of a celestial object is analogous to a white light
composite which may be separated by a prism into a spectrum of
frequencies103.
This mathematical conversion process is somewhat similar to
the manner in which a blackbody radiation spectrum is derived. An
object radiates thermal photons into its environment (or absorbs
them104), such that their spectral distribution may be expressed as a
Planck blackbody radiation curve. Similarly, EGM transforms a value
of mass into a value of energy, expressed in terms of a spectrum of
energy modes (i.e. photons).
Mathematically translating an expression of mass into energy
is not the only task required to adequately model the matter-manifold
interaction; additionally, we must treat our collection of mass-energy
photons as though they were being confined (i.e. contained) by an
external energy density.

102 i.e. thermal photons, X-rays and other high-energy radiation.
103 This is not to imply that the material properties of a celestial object may be separated by a prism; it is simply a method of conceptually and mathematically representing mass in standard units of energy.
104 Depending upon the ambient temperature.

Mass-energy105 may be represented through principles of similitude in much the same manner that Sir Geoffrey Taylor modeled the energy of an atmospheric atomic bomb detonation. Taylor treated the physical parameters of the atomic blast as an action-reaction system between the energy released by the blast and the surrounding pressure and density of the atmosphere acting to contain it. Although space-time curvature is representative of this energy relationship, a pure vacuum cannot be utilized as a physical means of establishing equilibrium because nothingness is non-physical. At this juncture, we must rely on Quantum Mechanics to provide us with the proper tools.
The existence of the QV is predicted and required by QM
and Quantum Electro-Dynamics (QED). QM and QED are arguably
the most precise and accurate theories ever developed in physics. QM
and QED dictate that virtual energy must be embedded within the
fabric of space-time. The origin and physical constitution of virtual
energy is too complex to describe in brevity, but we know for certain
that truly empty space does not exist. Space is teeming with energy
fluctuations. This energy, in a manner of speaking, is the thread from
which the fabric of space-time is woven. The energy of free space,
originally derived from QM, is thus termed QVE; also known as
Zero-Point Energy (ZPE).
The QV is represented as a sea of virtual photons which
may be expressed spectrally by mathematically divvying up the
Zero-Point Field (ZPF) into individual units (photons) of energy.
Any collection of photons is referred to as an EM field. This is
conceptually similar to the manner in which we may characterize the
surface of the ocean as a spectrum of waves. The ocean possesses
many waves of different size and direction, and all the individual
waves combined form the surface of the ocean. Each photon
comprising the QV is analogous to a single wave on the ocean. The
energy of any EM field may be represented as a spectrum of waves;
thus, the QV may also be represented as a spectrum of EM energy.
Moreover, since mass is equivalent to energy, it follows that we may
also represent matter as a precisely defined spectrum of EM energy.
Now that matter and the vacuum have been expressed in like
terms (as spectra), it becomes possible to model their interaction
utilizing Fourier techniques106. This approach enables the construction
of a matter-vacuum system engineering prototype, which may be analyzed and manipulated mathematically to simulate gravitational force acting on a test object, and to model potential ways of modifying the gravitational field. Since gravitation is engendered by curvature of the space-time manifold, we employ the term space-time engineering to describe the prototype.

105 Mass is contained within a volume of space; thus, it follows that the energy density of the object should be related to the energy density of the field surrounding it.
106 Derived in QE3.
The EGM construct treats the interaction of matter with the
QV as an equilibrated system. In this way, an object may no longer be
considered to exist as a discrete entity floating inertly in the empty
vacuum of space. The energy of the QV interacts with the energy
packaged as mass (i.e. matter), and these two forms of energy act in
concert as part of a dynamic system.
EGM works by mathematically superimposing the spectral
signatures of matter and the QV upon one another utilizing Fourier
harmonics. It is through this mathematical superposition of spectra
that we might better understand where the forces of gravity and
inertia come from. This is not to suggest that a precise physical
mechanism for gravity has been discovered, because it doesn't take
into consideration the behavior of every particle of matter as it
interacts with virtual particles of the QV. Nor does it imply that
gravity is the result of energy strings interacting in multi-dimensional
space-time as String theories suggest. It simply demonstrates how, by
mathematically combining the spectral energy forms of mass-energy
and the QV107, a change in the Poynting vector108 (P) results, producing
the effect we associate with gravity.

8.5 Gravity

The universal principle driving the EGM construct is equilibrium. All matter in the Universe109 seeks a state of greatest stability, which is synonymous with its point of lowest energy (i.e. its ground-state). The expression E = mc² depicts a relationship of
equivalence rather than one of transformation, and it is through this
principle that EGM operates. Since mass and the QV are
embodiments of energy, we may treat the energy condensed as matter
as existing in a state of dynamic equilibrium with the Universe
surrounding it. Consequently, EGM asserts that the properties of mass
are relativistic because mass adjusts to the ambient energy conditions in its local environment. It is by way of these principles that EGM may be considered congruent to GR.

107 The superposition of the QV and mass-energy spectra results in a new spectrum termed the Polarizable Vacuum (PV) spectrum.
108 In the displacement domain.
109 Including the fabric of the Universe (i.e. the space-time manifold).
The EGM construct yields a precise determination of the
mass-energy equilibrium point of an object with the QV110
surrounding it. EGM models the presence of matter immersed within
the QV as an interactive system, suggesting an alternative
interpretation of space-time curvature. Geodesic paths of matter and
energy define the topology of space-time curvature under GR.
However, EGM provides a rather more heuristic framework for
investigating how space-time manifests curvature in the presence of
matter through the principle of equilibrium.
As equilibrium is established between an object and the QV
surrounding it, a gradient in the energy-density is formed within the
vacuum. This gradient, in turn, is congruent to what Einstein termed
curvature. However, instead of interpreting gravity as functioning
through geometric imperatives, the EGM interpretation demonstrates
that gravity operates according to optical principles.
EM energy moves in accordance with any local gravitational
potential it may encounter. The way light (energy) moves through
energy-density gradients within the vacuum is analogous to the
manner in which light refracts when passed through a lens. This
optical interpretation of gravity was first suggested approximately
three hundred years ago by Sir Isaac Newton in his treatise entitled
Opticks. Newton describes how gravity may be regarded as a
manifestation of density variations in the aether, which he presumed
surrounded and permeated all objects. These density variations
should, as Newton thought, directly affect the motions of light and
matter passing through them. Newton theorized that the aether should
be most dense far away from an object like the Earth, and conversely,
more subtle and rarefied nearby or within an object. Two passages
from Newton's Opticks demonstrate the optical model of gravity
exceedingly well:


Qu. 20.
Doth not this aethereal medium in passing out of
water, glass, crystal, and other compact and dense bodies
into empty spaces, grow denser and denser, by degrees,
and by that means refract the rays of light not in a point,
but by bending them gradually in curved lines? And doth
not the gradual condensation of this medium extend to
some distance from the bodies, and thereby cause the
inflexions of the rays of light, which pass by the edges of
dense bodies, at some distance from the bodies?
Qu. 21.
Is not this medium much rarer within the dense bodies
of the Sun, stars, planets and comets, than in the empty
celestial spaces between them? And in passing from them
to great distances, doth it not grow denser and denser
perpetually, and thereby cause the gravity of those great
bodies towards one another, and of their parts towards the
bodies; every body endeavouring to go from the denser
parts of the medium towards the rarer? For if this medium
be rarer within the Sun's body than at its surface . . . and
rarer there than at the orb of Saturn, I see no reason why
the increase of density should stop anywhere, and not
rather be continued through all distances from the Sun to
Saturn, and beyond. And if the elastic force of this medium
be exceedingly great, it may suffice to impel bodies from
the denser parts of the medium towards the rarer, with all
that power which we call gravity.xxiv [sic]

110 Facilitating a reverse engineering approach to gravity if a region of space-time on a laboratory test bench is considered to be the Experimental Prototype (EP) for the mathematical model produced by the application of DATs and BPT. Subsequently, the mathematical model may be applied to the EP for scaling purposes, leading to gravity control experiments.
Newton's optical model of gravity has a modern counterpart known as the Polarizable Vacuum (PV) Representation of GR, a title originally coined by physicist Hal Puthoff in 1994, based upon an earlier body of work introduced by the physicists Harold Wilson and Robert Dicke in the 1950s. The PV model replaces the concept of
space-time curvature with a variable Refractive Index caused by
the polarization of the QV surrounding an object.
Newton wrote that a gradually changing density in the aether
results in gradually curving paths of light. A changing Refractive
Index induced by gradual changes in the polarized QV surrounding
matter also results in the refraction of light, as though it were passing
through a lens. This bent EM radiation follows a geodesic path
congruent to that predicted by GR according to the space-time
curvature interpretation. EGM similarly interprets the PV model's Refractive Index as a region of variable vacuum polarization surrounding a mass-object. However, EGM matures this concept by
demonstrating that the variable polarization is a product of the
mathematical superposition of the QV and mass-energy spectra
described earlier.
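To give a feel for the numbers, the PV literature commonly writes the vacuum's variable refractive index in the exponential form K = exp(2GM/(rc²)). The sketch below is ours and uses that published PV-model form; EGM's own K_PV derivation is developed separately in QE3:

    # PV-model variable refractive index of the vacuum around a mass.
    import math

    G, c = 6.674e-11, 2.998e8    # gravitational constant, light speed (SI)

    def K_PV(M, r):
        """Refractive index at radius r (m) from mass M (kg), weak field."""
        return math.exp(2 * G * M / (r * c**2))

    # At Earth's surface K exceeds unity by only ~1.4e-9: a very weak "lens",
    # yet sufficient to account for ordinary gravity in the PV picture.
    print(f"{K_PV(5.972e24, 6.371e6) - 1:.3e}")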
A key distinction between QVE and mass-energy is that the
energy contained within matter is highly localized, whereas the
energy of the QV is distributed homogeneously throughout the vast
regions of free-space. The differences between these energy
distributions may be expressed in terms of spectral characteristics.
Haisch, Rueda and Puthoff (HRP) were able to ascertain that
the QV spectrum possesses a cubic frequency distribution; i.e., the
spectral energy density increases proportionally to the cube of the
frequency. Therefore, the peak spectral energy density of the QV is
predicted to occur at maximum frequency. However, this presents a
rather formidable dilemma because it implies that the energy density
of empty space is nothing short of staggering.
Calculating the total energy of the QV in this form suggests
that every cubic centimeter of empty space is so energetic that it
should cause the Universe to collapse in on itself. According to GR,
energy and mass generate curvature in space-time. Thus the energy
distribution predicted by HRP should cause the space-time manifold
to curve acutely inwards, causing the Universe to implode111. In fact,
it has been estimated that the amount of QVE contained in a coffee
cup volume of empty space, if converted to heat-energy, would be
enough to boil away the Earth's oceansxxv. Because of these
theoretical results, many physicists discount the existence of the QV
in cubic frequency form, believing that something must be
fundamentally wrong with the derivation, despite the fact that this
form originates from standard QM.
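The scale of the dilemma is easy to reproduce. Integrating a cubic spectral energy density out to a Planck-frequency cut-off yields the notorious estimate; a sketch of ours using the standard zero-point form (half a quantum of energy per mode):

    # Total ZPF energy density for a cubic spectrum with a Planck cut-off:
    #   rho = integral of hbar*w^3/(2*pi^2*c^3) dw = hbar*w_max^4/(8*pi^2*c^3)
    import math

    hbar, c = 1.055e-34, 2.998e8
    w_max = 1.855e43             # Planck angular frequency, rad/s (approx.)

    rho = hbar * w_max**4 / (8 * math.pi**2 * c**3)
    print(f"~{rho:.1e} J per cubic metre of 'empty' space")
    # ~6e111 J/m^3, more than a hundred orders of magnitude beyond any
    # observed cosmological energy density; hence the "implosion" worry.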
However, the theoretical prediction of an imploding Universe
does not preclude the frequency-cubed spectral distribution of QVE.
In other words, the distribution of the QV spectrum may remain
frequency-cubed, yet not result in a catastrophic collapse of space-time, as long as the maximum frequency in the spectrum is low
enough. To better illustrate this point, all we must do is state the First
Law of Thermodynamics: energy can neither be created nor
destroyed. The total energy of the Universe is, and always has been,
constant. Neither more nor less energy existed in the early Universe,
during the first few trillionths of a second after the Big Bang, than exists today. The energy of the Universe hasn't gone anywhere; it has
only become more diffuse112 over time as the Universe expanded. As this occurs, energy isn't conjured from nowhere to fill the ever-widening gaps; the energy of the Universe merely becomes diluted with cosmological expansion.

111 This is the mainstream view, not the view of the EGM construct in the Quinta Essentia series (i.e. QE3,4) where the opposite conclusion is mathematically derived. That is, QE3,4 mathematically demonstrate that free space does not contain a near infinite amount of energy in a vanishing volume.
Precise measurements of the Hubble constant and Cosmic
Microwave Background Radiation (CMBR) temperature allow us to
quantify the expansion of the Universe since the instant of the Big
Bang. We calculate the Hubble constant by measuring the red-shift of
light from galaxies moving away from us as they are pulled apart by
the expanding fabric of space. Thus, the Hubble constant is a measure
of the rate of cosmic expansion and the CMBR temperature is a
measure of the EM energy left over from the Big Bang; only now,
the once high-frequency radiation filling the young Universe has been
stretched out into the microwave frequency range as a result of
cosmic expansion.
Utilizing EGM to analyze the energy dynamics of Hubble
expansion spectrally, we may model the primordial spectrum of the
seed-Universe113 as a single, high-frequency wavefunction
containing the total energy of the Universe. At the moment of the Big
Bang, this single wavefunction rapidly began to decompose114 into a
broad spectrum of lower-frequency waves, forming localized energy
gradients within the QV where matter condensed. This spectral
decomposition model is a mathematical representation of the energy
dynamic which occurs due to expansion, and is not intended to be a
literal interpretation. The many modes of lower-frequency waves in
the present-day QV spectrum115, when summed, must contain the total
energy present at the instant of the Big Bang (excluding the energy
condensed as matter).

112 i.e. red-shifted.
113 i.e. prior to the Big Bang.
114 i.e. bifurcate.
115 In flat space-time.

However, the total energy value of the present-day spectrum is spread out and divvied up amongst many modes, each with a lower
energy (frequency) per mode. The composition of the vacuum at
present constitutes a near-infinite number of modes, but the majority
of these are low-frequency because the sum of the spectrum must
equal the observed energy present in the vacuum. To suggest
otherwise would imply that the energy of the Universe had increased
since the time of the Big Bang.
For the purpose of conceptual demonstration, let us assume
that the size of the Universe is almost infinite, such that we may
assign low and high-frequency limits to the QV spectrum of flat
space-time incrementally above zero Hz and precisely one Hz
respectively. Under such conditions, a near infinite number of
harmonic modes, relative to a fundamental frequency value, may exist
within the QV spectrum, obeying a cubic frequency distribution
between 0 and 1 Hz. In this context, the QV contains many
modes of low energy and avoids the infinite energy in a vanishing
volume problem encountered by standard QM because the countless
numbers of low-energy modes sum to a finite QVE density value.
Moreover, the significant majority of energy contained within flat
space-time (in our example) occurs at the one Hz limit.
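The arithmetic of this example is easily checked. In the toy sketch below (ours, plain Python), N harmonic modes of a fundamental 1/N Hz carry energy weighted by the cube of their frequencies; no matter how many modes are admitted, the weighted sum converges to a finite value dominated by the modes nearest the 1 Hz limit:

    # Toy 0-to-1 Hz cubic QV spectrum: the mode sum stays finite.
    for N in (10, 1_000, 100_000):
        harmonics = (k / N for k in range(1, N + 1))  # mode frequencies, Hz
        total = sum(nu**3 for nu in harmonics) / N    # Riemann sum of nu^3
        print(f"{N} modes -> normalized spectral sum ~ {total:.4f}")
    # Converges to the integral of nu^3 from 0 to 1 Hz, i.e. 0.25: finite,
    # with most of the weight carried by the modes near the 1 Hz limit.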
Similarly, if we assume that the Universe is infinitely large,
the fundamental frequency of the QV spectrum would be exactly zero
Hz, with an infinite number of low frequency harmonic modes
existing within the range of zero Hz and a high-frequency spectral
limit, arranged in a frequency-cubed distribution. EGM demonstrates
that the high-frequency spectral limit approaches zero Hz because the
bulk of the total energy of the QV is comprised of a large number of very low-frequency modes, each containing a relatively small amount
of energy. Therefore, EGM116 produces a QV spectrum in cubic
frequency form, low enough in energy density to prevent collapse
under its own weight, and without violating GR or QM.
Within the EGM construct, the cubic frequency distribution
of the QV is probabilistic. This means that, although the upper QV
spectral limit is permitted to tend to infinity, the probability of
detecting a photon decreases as the spectral limit increases. In other
words, it is increasingly likely that a measurable QV photon exists at
low rather than high frequency because: (a) QV photons have been
stretched-out since the instant of the Big Bang and (b) Nature seeks
conditions of lowest energy. Although the probability of detecting a
high-frequency QV photon in a gravitational field of non-zero
strength117 is greater than a field of zero strength118, the probability of
detecting a low rather than high-frequency QV photon remains greater
in both cases. Detection probabilities are based upon photon
populations119 at specific harmonic modes, denoting an important
characteristic of the EGM spectrum.
In summary, the QV spectrum of flat120 space-time derived
by EGM is characterized as possessing a cubic-frequency distribution
with a cut-off frequency which is quite low. The EGM derivation does
not contradict the cubic-frequency distribution form of the spectrum;
it merely disputes the cut-off frequency value assigned by HRP.
Setting the QV spectrum temporarily aside, we shall now
define and describe the energy spectrum associated with matter;
termed the EGM spectrum. In contrast to the QV spectrum, the
EGM spectrum of a mass-object is comprised of a narrow bandwidth
of extremely high-frequency modes121. Here, the E from the
equation E = mc² is expressed in the same terms as the QV
spectrum; i.e. as a wavefunction representation of mass-energy
obeying a Fourier distribution such that the number of modes
decreases as energy density increases.122 (see: QE3).

116 Without contradicting any principle of QM or thermodynamics.
117 i.e. in curved space-time.
118 i.e. flat space-time.
119 See: QE2,3,4 for derivations.
120 Empty space, containing no matter.
121 This is a simplified reference to the EGM spectrum. Please consult the following chapter for further information.
122 i.e. the number of modes is inversely proportional to the energy density of the space-time manifold.

When the QV and EGM spectra are superimposed upon one another using spectral analysis, the resulting spectrum is termed the
PV spectrum. This hybrid spectrum represents the energy
equilibrium state between the QV and EGM spectra123. It is important
to note that the QV and EGM spectra are purely theoretical constructs,
because mass and the vacuum never exist in isolation. The QV
spectrum represents a theoretical Universe without mass-objects,
while the EGM spectrum represents only the mass-energy of an object
in isolation. However, mass-energy is invariably found within the
milieu of QVE, so the only relevant spectrum is that of the PV. The
PV spectrum upholds the HRP cubic-frequency distribution form,
with a spectrum extending into very high-frequency ranges, but only
in the immediate presence of matter.
The PV spectrum, formed by the superposition of the QV
and EGM spectra, resolves the HRP cosmic collapse problem
because the only instance in which the PV spectrum extends into very
high-frequency ranges is in the immediate presence of matter. Flat
space-time, on the other hand, far from any matter, is comprised of
very low-frequency modes and thus does not contain enough energy
to cause a catastrophic collapse of space-time. The high-frequency
contribution to the PV spectrum does not come from empty space, but
rather from energy that is locked-up in the form of mass, which is
distributed in highly localized points throughout the Universe.
For example, consider the action of adding a single star to an
empty Universe. Within the EGM construct, the entire mass of the star
is represented as a single point (a point mass), radiating the totality
of its mass-energy into the space surrounding it. This action
superimposes the EGM spectrum of the point mass onto the QV
spectrum of the empty Universe; doing so forms the PV spectrum.124
Surrounding the point mass, a mode population gradient is established
in space-time between the mass and the edge of the Universe. The
mode population gradient modifies the Refractive Index KPV value
of the vacuum such that it changes at the same rate as gravitational
acceleration g from the center of the point mass. Thus, the gradient
is congruent to the concept of space-time curvature within GR.

123 The EGM spectrum is mass-energy based. Since a maximum mass limit does not exist within contemporary physics, the EGM spectrum is infinitely broad. However, mass-density is theoretically limited to the Planck scale; thus, the EGM spectrum is bounded (in this regard) by the Planck Frequency.
124 i.e. a quantized representation of the gravitational field.

The obvious question arising from the formation of the PV spectrum is: what induces the modal population gradient?125 The nature of the Universe is to expand such that the energy within it becomes stretched out. The Universe continually strives to reach its lowest energy state and greatest stability. As this occurs, the high-frequency modes present shortly after the Big Bang are bifurcated into a larger number of low-frequency modes as the Universe expands.
Mass may be modeled as doing work126 on the surrounding
vacuum by curving it. The presence of a point mass pushes the
vacuum around it uphill, against its natural flux of expansion. The
nature of the Universe is to expand, and upon encountering resistance
to its normal flux from high to low energy, the Universe pushes
back as it strives to reach a state of equilibrium. The mass-associated
spectrum represents condensed energy, which causes the QV
spectrum surrounding matter to locally re-compress to fewer modes of
higher-frequency. Hence, it follows that the more massive the object,
the steeper the gradient (change) in mode number between its center
of mass and the edge of the Universe, resulting in gravitational
acceleration proportional to mass. Compression of the vacuum modes
requires energy input, and it is precisely this re-compression of QVE
which results in gravity. This model also provides an answer to the
question of how and why matter curves space.

125 i.e. why does the vacuum become polarized?
126 i.e. expending energy.

The tendency of the space-time manifold is to expand; however, the presence of matter interrupts this movement, polarizing the QV. Energy is required to alter its state to fewer modes of higher frequency, counteracting the thermodynamic tendency of any system to move towards a state of lowest energy and greatest stability. Subsequently, an observer held fixed within a QV gradient senses that the mode energy is asymmetrical127 and, based upon the Quantum Vacuum Inertia Hypothesis (QVIH), vacuum asymmetry results in an apparent acceleration force on the observer which is perceived as gravity.
Rather than a geometric curvature of nothingness, the
manifestation of g is better represented as back-pressure from the
vacuum as mass-energy exerts its influence upon it. Anything caught
in the inward flow of space-time, so to speak, is pulled along with the
current. EGM represents this process as the superposition of two
distinct spectra utilizing Fourier harmonics, resulting in a
mathematical description of g. Thus, it may be stated that the EGM
construct yields a quantized description of gravity.
EGM mathematically represents matter as radiating a
spectrum of conjugate EM frequencies. However, if we consider
matter to radiate a spectrum of gravitons128, the EGM construct may
be represented in quasi-physical form,129 such that gravitons emerge
as a vehicle for the feedback of information between the EGM
spectrum of matter and the QV spectrum of the local space-time
manifold.
127 i.e. higher in the direction of the center of mass of an object and lower out in space.
128 i.e. elementary particles presumed to mediate gravitational force.
129 Science has yet to detect or rigorously define gravitons; consequently, sufficient latitude exists to interpret the graviton in a manner suitable to the EGM construct.

EGM considers the spectral energy of a gravitational field to
be equivalent to the mass-energy of the object generating the field,
expressible in terms of a PV spectrum and analogous to space-time
curvature within GR. It models each of the conjugate EM frequencies
as two populations of conjugate photon pairs, i.e., each population
is 180° out of phase with its conjugate, consistent with a Fourier
harmonics representation of a constant function in complex form (see:
QE3). A conjugate photon pair constitutes the definition of a graviton
within the EGM construct.
The density of gravitons surrounding a mass-object is
maximal in close proximity to an object, and gradually decreases with
radial distance; thus, the greater the population density of gravitons,
the stronger the gravitational field will be. These factors are consistent
with the manner in which the PV spectrum is defined via Fourier
harmonics, resulting in a spectrum which increases in mode number
with radial distance from the mass-object130.
The EGM interpretation of gravity is analogous to Newton's
conceptualization of optical gravity as well. According to Newton, the
aether was presumed to be denser farther away from a mass-object
and less dense nearby. The change (i.e. gradient) in the density of
the aether causes light and the movements of objects through it to
follow trajectories characteristic of gravitational attraction. The
increasing density of Newton's aether may be substituted with the
analogous concept of increasing mode population in the QV,
proportional to the distance from a mass-object.

The physical basis for gravity, within Newton's optical framework, is similar to that of a long-range Casimir Effect. The
Casimir Effect demonstrates that when two neutrally charged parallel
metal plates are brought very close together, photons in the QV with
wavelengths too large to fit between the plates are excluded. The
reduced energy density between the plates biases the QV131, pushing
the plates together with increasing force as the separation distance
decreases. Gravity, in this regard, is akin to a long-range Casimir
Effect because EGM describes gravity as being the result of a change
in mode population across a region of the QV.

130 i.e. QV mode number decreases with graviton density.
In fact, EGM derives the Casimir Force from first principles,
demonstrating that it differs depending on the gravitational field
strength of the location in which it is measured. For example, EGM
asserts that the strength of the Casimir Force on Jupiter will be
smaller than on the surface of the Moon (see: QE3). The gravitational
effect on the Casimir Force is due to the population of modes
comprising the PV spectrum. The denser the mass, the fewer modes it
has in its PV spectrum because each mode within it possesses higher
energy (i.e. frequency). The modal bandwidth of the PV spectrum for
a very dense object is narrower than that of a less dense object. Thus,
at the surface of Jupiter, fewer low-frequency vacuum modes exist
than at the surface of the Moon, resulting in a smaller Casimir Force on
Jupiter than the Moon.
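For reference, the textbook Casimir result for ideal parallel plates in flat space-time is P = π²ħc/(240 d⁴). The sketch below is ours and implements that standard formula only; EGM's gravitationally dependent correction is derived separately in QE3:

    # Standard Casimir pressure between ideal parallel plates.
    import math

    hbar, c = 1.055e-34, 2.998e8

    def casimir_pressure(d):
        """Attractive pressure (Pa) for plate separation d (metres)."""
        return math.pi**2 * hbar * c / (240 * d**4)

    # The inverse fourth-power law makes the force climb steeply as the gap closes:
    for d in (1e-6, 1e-7):
        print(f"d = {d:.0e} m -> P = {casimir_pressure(d):.3g} Pa")
    # ~1.3e-3 Pa at a 1 micron gap; ~13 Pa at 100 nm.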
What Sir Isaac Newton originally envisioned over three
hundred years ago in his speculations regarding optical gravity is
mirrored in the PV model of GR. EGM doesn't merely elaborate on PV theory; it puts real numbers to it, allowing one to precisely quantify and define the PV. The variable Refractive Index of the PV model acts as a replacement for the metaphysical concept of space-time curvature under GR. EGM models the changing gradient of the
PV as a summation of harmonic modes via Fourier series to represent
a constant function (i.e. g) at any position in space surrounding a
mass-object. However, it is important to re-emphasize that EGM is a
mathematical construct only. EGM does not propose that mass is
literally comprised of spectral modes interacting with the QV; it is
merely a tool by which to distil and deconstruct the fundamental
energy dynamics of GR, EM and QM; combining like characteristics
in order to solve a problem.
The EGM method provides a unique framework for
understanding the physics of gravity. The vacuum of space, as we
now understand, is an embodiment of energy, and so is mass. The
problem with previous interpretations of gravity has to do with the
notion that matter is "something" and space is "nothing" when, in fact, matter and space are actually two mutually dependent forms of energy: one subtle and impalpable, the other objective and concrete.

131 Casimir experiments (to date) have only been performed in a gravitational field. Thus, it is more accurate to refer to the PV rather than the QV; however, "QV" has been applied for conceptual simplicity in order to assist the reader.

8.6 Elementary particles

Although the EGM method is ideally suited for modeling the
structure of gravitational fields surrounding large objects such as
planets and stars, it may also be utilized to model properties of atomic
and subatomic matter. Moreover, applying the EGM method to
elementary particles has led to some rather startling and profound
results! A deeper level of order has been uncovered at the subatomic
level that follows as a natural extension of the QM paradigm, which
governs the order and structure of the atom.
As we have already seen, the atomic system is reliant upon
the principle of harmonic symmetry. The discovery that electron
energy levels in atoms only exist as stable quantum frequency
intervals gave rise to the discipline of QM. EGM demonstrates that
the electron energy-level isn't the only instance where this quantum
paradigm applies.
The QM model doesn't state what the electron is, precisely,
but it does show that when it exists as part of an atomic system, it can
only exist in harmonic energy states defined by the parameters of the
atom-system. Each quantum change in an electrons energy level is
induced by the absorption or emission of a photon. EGM
demonstrates that the properties of subatomic particles are not defined
arbitrarily in Nature, and that this harmonic principle of action
extends into the furthest depths of the atom. EGM has revealed a QVE
equilibrium ratio relationship amongst all subatomic particles,
forming a quantum-harmonic canon governing the inner structure of
matter.
Particle physics research often involves the act of smashing
subatomic particles together at near light-speed velocities and
analyzing the bewildering array of debris formed in the collision. This
process is commonly described as being similar to smashing two cars
together and attempting to determine how they worked by analyzing
the shattered debris. The discipline of particle physics is also referred
to as High-Energy Physics (HEP). This term is applied because the
particles resulting from such collisions are only able to exist in
extremely high energy environments.
Subatomic particles often only exist as interlinked
components of another greater particle system and not as free entities
in and of themselves. They often exist only when we cause them to
exist. At the instant of a particle collision, for example, the subatomic
particle products generated in the collision may only be measured (or
even be generated in the first place) by having increased the energy of
the environment in which the parent particles are smashed together132.
For example, a proton is composed of three quark subunits;
however, quarks themselves are not known to exist as free quarks.
The configuration of the proton system acts as a boundary condition,
containing the quarks in a composite form called the proton.
Extremely high energies are thus required to smash protons into their
individual quark constituents and the quarks released in the collision
can only exist freely for an extremely brief period of time133. The
energy of any object, whether particle or otherwise, is equilibrated by
the ambient energy in its local environment. However, only when
equilibrium is artificially shifted, as occurs in a high energy collision,
is the energy balance destabilized sufficiently to allow high-energy
quarks to exist autonomously for a brief moment.
Quarks generated in a collision, as it turns out, are each more
massive (in energy terms) than the proton they originate from! How
can it be that the free subunit of a parent particle possesses greater
mass-energy than its source? This is analogous to a baby weighing
more than the mother at the time of birth! However, this happens to be
the case because we release quarks by increasing the mass-energy
density of the proton system at the time of the collision.
Shortly after the Big Bang, the Universe was a soup of free
quarks in a hot and dense environment. In the first moments after the
Big Bang, the total energy of the early Universe was much more
densely packed than it is today. Quarks could exist freely in the early
Universe because the ambient energy density allowed them to exist in
this more energetic form. When particles are accelerated to extremely
high energies in a collider we are, in effect, re-creating the dense,
high-energy conditions of the early Universe, and allowing free
particles to exist.
As the Universe rapidly expanded and cooled, its energy
density decreased, subsequently permitting the condensation of
composite particles such as the proton and neutron (termed Big Bang
Nucleosynthesisxxvi), followed by the formation of even more complex, low-energy composites (hydrogen and helium atoms) as the energy density decreased further.

132 Relativity describes why tremendous energy is required to accelerate even the tiniest particles like protons and neutrons to near light-speed. It also states that mass scales proportionally with the rate of change of velocity, and that accelerating objects (whether it's a person or a particle) to anything closely approaching the speed of light requires enormous amounts of energy.
133 i.e. until energy conditions return to normal.
This manner of energy-density dynamics is exquisitely
modeled by the EGM method, which has been specifically designed to
simulate the environmental interaction of such systems where mass-energy is affected by the energy density conditions surrounding it.
The existence of a particle is wholly and completely based upon the
mutual, indissoluble interaction between itself and the environmental
energy conditions it strives to equilibrate within. However, the
question remains as to why particles possess distinct energies, and why
they are not arbitrarily defined in Nature. In other words, photons may
possess a wide range of possible frequencies, while elementary and
subatomic particles in the atom have well defined and discrete
energies. What governs the formation of distinct subatomic particles
and their pattern of organization within the atomic system?
In elementary particle physics, a particle's mass is expressed as an energy equivalent via E = mc², which means that the more energy a particle possesses, the more massive it will be. The energy of a photon is frequency-based (via E = hν), meaning the higher in frequency (shorter in wavelength) a photon is, the more energy it
possesses. Thus, the EGM construct asserts that mass is inherently
frequency-based as well, because it is an expression of a particular
EM frequency bandwidth.
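To make the conversion concrete, here is a minimal Python sketch of the generic mass-to-frequency translation implied by combining E = mc² with E = hν; the constants are standard values, and this yields only the single-frequency equivalent, not the full EGM spectrum.

```python
# Minimal sketch: a particle's rest mass expressed as an equivalent EM
# frequency by combining E = mc^2 with E = h*nu, i.e. nu = m*c^2 / h.
# This is the generic conversion only, not the EGM harmonic decomposition.
m_proton = 1.67262e-27   # kg, proton rest mass (CODATA)
c = 2.99792458e8         # m/s, speed of light
h = 6.62607e-34          # J*s, Planck constant

nu = m_proton * c**2 / h
print(f"proton mass-energy as frequency: {nu:.3e} Hz")  # ~2.27e23 Hz
```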
EGM models the energy-density environmental equilibrium
dynamics of systems, where matter is affected by ambient conditions,
by mathematically decomposing a value of mass-energy into an
EGM spectrum of frequencies utilizing Fourier harmonics. This is a
mathematical construct only, utilized to model the system as a whole,
and must not be interpreted as being physically descriptive of reality.
Nevertheless, the process of mathematically translating units of mass
into spectral information elegantly articulates the equilibrium
established between matter and the QV. The resulting equilibrium
state, in turn, defines the physical properties and characteristics of any
mass-object, including subatomic particles. The physical
characteristics of all fundamental particles are born of equilibrium and
are a manifestation of Einsteins principle of mass-energy
equivalence. EGM models mass-energy equivalence as a condition of
energetic equilibrium within the QVE environment. This energy
relationship is expressed in terms of the PV spectrum, and as we shall
see, yields a natural harmonic relationship between all subatomic
particles.

In order to fully comprehend the manner in which EGM derives particle characteristics, it is first necessary to derive the EGM
spectrum of a particle. Even though a proton is a composite particle
composed of quarks, it is still possible to accurately model it with
EGM in spectral form as a singular entity. Here, the experimentally
measured mass of the proton may be utilized to derive its EGM
spectrum. The proton is the easiest place to begin in this analysis
because its mass is precisely known and experimentally validated to
high precision.
EGM methodology commences by mathematically representing the proton's mass-energy in spectral form. However, this process doesn't only convert the mass of the proton into a PV spectrum of EM energy; it develops the concept that the mass-energy of an object is contained within a finite volume of space-time, defined by environmental equilibrium. Thus, the free proton isn't solely described by its mass-energy value; its mass-energy is also associated with density because it occupies a limited and finite volume of space-time. The PV spectrum of an object is thus derived as a representation of an object's mass-energy density, not just of the quantity of mass-energy it carries.
For example, stars and planets take spherical form because
they are compressed by gravity into their lowest-energy configuration.
Under ideal conditions, a gas bubble in water is compressed to a
spherical shape by the balance in pressure between the air and the
water encapsulating it. The same principle applies to stars as well; the
expansion pressure of hydrogen fusion in a star is balanced by the
gravitational force acting to hold the hydrogen densely packed
together. The equilibrium point established between the two forces
confines the star to a particular size and density in space, and also
establishes the surface parameters of the object.
In Nature, the sphere is generally the most efficient shape for
packaging energy or matter. This is because the sphere has the lowest
surface to volume ratio of any shape. A cube requires more total
surface area than a sphere in order to contain an equal volume. The
same is true for a pyramid, or any other three-dimensional shape for
that matter. The spherical form is so commonly found in Nature
because systems seek the most stable and efficient configuration
possible within any given set of circumstances; efficiency is
synonymous with stability.

Note: the sphere represents the equilibrium boundary between the
mass-energy of an object and the QVE of the environment acting to
contain it.
This principle of spherical energy density configuration
directly affects the fundamental and harmonic cut-off frequency of the
proton's PV spectrum134. The PV spectrum is derived as a
representation of mass-energy density equilibration such that spectral
characteristics differ from object to object. For example, the proton
possesses fewer harmonic modes and a higher harmonic cut-off135
frequency than that of a star. We may utilize EGM to spectrally model
any particle's mass-energy in precisely the same manner as is done for
a star or planet; by assuming that Nature utilizes the most efficient
form of packaging and distributes the mass-energy of a free particle
spherically. It may or may not be physically true that a singular
particle is spherical. However, we may assume that the energy of a
free particle is spherically distributed in order to maintain geometric
similitude between all mass-energy systems, whether that system is a
particle or a planet.
At first glance, it may seem like a rather complicated
exercise to mathematically treat matter as though it were a spectrum
of frequencies. Translating a simple expression of mass into an
ostensibly more complicated spectral form is essential, however.
There is a way to simplify things a bit though, and it is by way of this
simplification process that the aforementioned harmonic relationship
amongst particles is established. For all practical purposes, we may
disregard all low-frequency modes of the PV spectrum and describe a particle explicitly in terms of its cut-off frequency, because the highest harmonic frequency in the spectrum is representative of the significant majority of the particle's total energy.

134 Possesses specific characteristics such as low and high-frequency end-points (i.e. cut-offs). Within the EGM construct, the low and high-frequency limits are termed the "fundamental" and the "cut-off" frequency respectively. Between these, a range of harmonic frequency modes exist comprising the spectrum.
135 Spectral limit.
Each particle type possesses a particular mass-energy value such that it may be characterized by a unique harmonic cut-off frequency value, derived from the free particle's rest mass. For example, the proton will have a different cut-off frequency than an electron or neutron because their mass-energies are different136. Thus, the harmonic cut-off frequency denotes the equilibration signature of a given particle type. Moreover, EGM demonstrates that the harmonic cut-off frequency signatures of all subatomic particles are uniquely related to one another.

136 The harmonic cut-off frequency of the neutron is extremely close to that of the proton. Thus, where appropriate, the ratio of the harmonic cut-off frequency of the proton to neutron is usefully approximated to unity.
The EGM construct reveals that particle mass-energies
(expressed spectrally) are naturally established according to a distinct
and highly precise harmonic pattern. For example, the relationship
between any pair of particle types, such as an electron and a proton,
may be represented as the ratio of their harmonic cut-off frequencies,
demonstrating that all fundamental particles exist as though they were
musical notes played on the same string. Each particle "note" is but one harmonic in a scale of notes, and each note is defined by the particle's harmonic cut-off frequency. Because the harmonic cut-off frequencies of particles are direct representations of the particles'
found in whole, quantum-harmonic increments, it means that the
mass-energies of all subatomic particles are strictly ordered according
to a quantum rule. Thus, we may consider all particles to be harmonic
multiples of another particle like an electron, for example. Based upon
this harmonic principle of order, a periodic table of subatomic
particles may be formulated mirroring the hierarchical basis upon
which the chemical elements are arranged.
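As a hedged numerical illustration of the ratio idea: since each frequency equivalent is proportional to rest mass, the ratio for any particle pair reduces to their mass ratio. Whether these ratios form the exact harmonic pattern claimed here is a matter established in QE3/QE4; the sketch below shows only the generic ratio calculation.

```python
# Illustrative only: frequency equivalents (nu = m*c^2/h) scale linearly
# with rest mass, so the electron/proton "note" ratio is the mass ratio.
# The exact EGM harmonic cut-off values are derived in QE3/QE4, not here.
c, h = 2.99792458e8, 6.62607e-34
masses = {"electron": 9.10938e-31, "proton": 1.67262e-27}   # kg

freqs = {name: m * c**2 / h for name, m in masses.items()}
print(freqs["proton"] / freqs["electron"])   # ~1836, the proton/electron ratio
```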
EGM decomposes the energy value of any object into a
spectrum of harmonic EM frequencies mathematically representing
its mass as a collection of photons. Each photon in the resulting
spectrum is dependent upon the value of the mass from which it is
derived. Since frequency relationships are best described in harmonic
form, it comes as no surprise that subatomic particles should exhibit
harmonic relationships as well.
Thus, the mass-energy of any particle may be defined as a
harmonic (or sub-harmonic) increment of a common EM frequency.
Within EGM, the fundamental particle masses (expressed in
frequency terms) exist in harmonic increments (i.e. quanta) in a
similar manner to the way in which electron energy levels are defined
harmonically in atoms. This is why the EGM model acts as an
extension of the QM paradigm. It is not currently possible to derive a
precise mathematical pattern or relationship amongst the masses of
fundamental particles by any other known method. More importantly,
we may utilize the EGM method to predict the mass and operative
size of any particle with unprecedented precision, and obtain values
orders of magnitude more accurate than may be achieved utilizing the
Standard Model of particle physics.
One of the most valuable features of EGM is that it
demonstrates how GR and QM are interrelated. In this regard, EGM
is a unique method, derived from a single paradigm demonstrating the
cross-fertilization of the central pillars of physics. It has uncovered
not only the framework underpinning the stability, order and coherent
inner structure of the atom; it also reveals how this order and stability
arises in Nature. Perhaps the most profound insight to be gained from
EGM is that the harmonic pattern of organization amongst subatomic
particles arises based upon a particle's relationship to all other
fundamental particles. Could this perhaps imply that the common
particle ancestor from which all atomic elements, all molecules and
all material forms are constructed is the photon energy itself?

8.7 Cosmology

EGM is universal. It is a single concept, a single paradigm, which may be applied to sub-atomic particles, planets, stars, galaxies; even the Universe as a whole may be evaluated utilizing
EGM methodology. In fact, EGM not only allows the researcher to
model all these things independently, it reveals how all of matter and
space is interconnected, because the same equation revealing the
harmonic relationship amongst particle types may also be applied to
precisely derive cosmological measurements such as the present
values of the Hubble constant and Cosmic Microwave Background
Radiation (CMBR) temperature (H0 and T0 respectively).
Moreover, EGM demonstrates that T0 may be derived from H0,
meaning that these two phenomena are interrelated. The EGM
particle equation even serves to validate and substantiate the
evolutionary epochs of our Universe, as science has come to
understand them, since the time of the Big Bang.
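For orientation, the Hubble constant has units of inverse time once converted from its conventional (km/s/Mpc) form; a nominal value of 70, assumed here purely for illustration (the EGM-derived value appears in QE4), implies an expansion timescale of roughly fourteen billion years:

```python
# Unit-conversion sketch only: a nominal H0 (assumed value, not the
# EGM-derived one) expressed in SI, and the corresponding timescale 1/H0.
H0_km_s_Mpc = 70.0            # km/s/Mpc, assumed nominal value
m_per_Mpc = 3.0857e22         # meters per megaparsec

H0_si = H0_km_s_Mpc * 1e3 / m_per_Mpc     # s^-1
hubble_time_yr = 1 / H0_si / 3.156e7      # seconds -> years
print(f"1/H0 ~ {hubble_time_yr:.2e} years")   # ~1.4e10 years
```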
A mass-object may be defined as any interacting collection
of material objects, such as an atom or galaxy, and it may also be
defined as a single, indivisible unit of matter such as a free elementary
particle. This is because EGM models any object as a feedback system
between the mass-object itself and the QVE surrounding it. For
example, we know that it is not necessary to calculate the movements
of every individual water molecule in the ocean in order to adequately
predict the dynamics of a wave passing through it. All we must do is
model the dynamics of the wave itself. In the same manner, EGM
treats any object or collection of objects as a whole entity, whether it

is a whole proton, a whole atom, a whole star, a whole galaxy, or even
the whole Universe. EGM is a method permitting one to
mathematically represent a mass-object in spectral form. Although
EGM doesn't purport this interpretation to be literal or physical; it is a
computationally accurate means of representing matter within the
context of the QV.
In place of standard (and often highly complex) differential
equation formalisms used to solve for the dynamics of a two-body
system, such as a binary star system, EGM models this dynamic by
way of the constructive and destructive interference resulting between
the PV spectra of each mass-object. EGM can model the gravitational
dynamics between galaxies, stars or even particles for that matter, in a
far simpler manner than may be achieved utilizing relativistic
differential equations.
The Planck blackbody radiation phenomenon demonstrates
that matter radiates a spectrum of EM radiation based upon its
temperature, and that the modes comprising that spectrum may be
described as harmonics of the Planck Frequency. This principle of
spectral distribution is mirrored by EGM because the PV spectra of
mass-objects are generated utilizing Fourier harmonics. That is to say,
each PV spectrum is a mathematical decomposition of the
gravitational energy of a mass-object into a cognate spectrum of
harmonic frequencies. Thus, the PV spectrum is analogous to a
gravitational blackbody spectrum.
Wien's displacement law describes the relationship between the temperature of an object and its blackbody radiation spectrum. Comparing hot and cold objects, we see that the blackbody spectrum for each object type possesses a similar shape, depicting peak photon prevalence in a specific frequency range and trailing off at the high and low spectral limits. Differences in peak emission frequencies obey a scaling factor relationship defined by Wien's displacement law. This principle demonstrates that the spectrum is an analogous representation of temperature.
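A minimal sketch of Wien's displacement law, lambda_peak = b/T with the standard constant b ≈ 2.898e-3 m·K, applied to the two temperatures this chapter quotes (the solar surface and the present CMBR):

```python
# Wien's displacement law: lambda_peak = b / T. The constant b is standard;
# the temperatures are the values quoted in the surrounding text.
b = 2.898e-3    # m*K, Wien displacement constant

for label, T in (("Sun, 5,700(K)", 5700.0), ("CMBR, 2.725(K)", 2.725)):
    print(label, f"-> peak wavelength ~ {b / T:.3e} m")
# Sun: ~5.1e-7 m (visible light); CMBR: ~1.06e-3 m, i.e. roughly 1(mm)
```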
When we directly measure the temperature of empty space,
we are in fact measuring the residual energy from the Big Bang as
red-shifted (i.e. stretched-out) photons present in the early Universe;
the EM waves we observe today are a snap-shot of the once extremely
high-frequency photons present when atoms first formed137. Billions
of years later, those photons have become stretched by cosmic
expansion to such a degree that now they are approximately 1(mm)
in wavelength, falling within the microwave frequency range of the
EM spectrum. This background radiation filling space is referred to as the Cosmic Microwave Background Radiation (CMBR). We find that this wavelength corresponds to a temperature of approximately 2.725(K): the present CMBR temperature, "T0".

137 As asserted by the Standard Model of Cosmology (SMoC).
At Bell Laboratories in 1964, while working with a large
horn antenna designed for Radio Astronomy and satellite
communications, Arno Penzias and Robert Woodrow Wilson
discovered a ubiquitous white-noise falling within the microwave
frequency range that could not be eliminated. It was audible day and
night and in all directions. What Penzias and Wilson detected with
their antenna was the CMBR left over from the birth of our
Universe! The discovery of CMBR earned Penzias and Wilson the
Nobel Prize in 1978.
The physical detection and measurement of CMBR, and thus
T0, was momentous because, at the time of its discovery, the Big
Bang model of cosmic history was merely conjecture. The Big Bang
theory emerged from Hubbles observation that the Universe was
apparently expanding in all directions. It was presumed that it should be possible to trace this expansion back to a time when all the matter
and energy in the Universe was packed together in a much denser
form. However, in the intervening decades between the time Hubble
expansion was discovered and T0 was actually measured, the Big
Bang model was by no means on solid ground. The favorable pairing
of prediction and observation meant that, as strange as it may seem,
our Universe must have suddenly burst into being as if from nowhere.
The Universe could no longer be considered an eternal, steady-state
Universe; it was instead finite, having a beginning and perhaps an
end in time.
When the Universe burst into existence, it didnt explode into
some pre-existing space. It is not as if matter erupted into a void that
was already there. This is a very common and equally grave
misconception of what the Big Bang theory actually asserts. The Big
Bang model instead suggests that space itself erupted into existence,
carrying matter and energy along with it. It is the space-time manifold
which expands, not matter expanding into pre-existing space. Thus,
the Big Bang happened everywhere. Many people assume that
because we measure the accelerated recession of galaxies, we can also
trace the motion of those galaxies back to some origin in space, and
that a particular point marks the location of the Big Bang. What we
actually find is that every point in the Universe was the original
location of the Big Bang, because all galaxies (except those whose
gravitational attraction has overcome cosmic expansion) are moving
away from each other. The fabric of space is expanding between

galaxies, in all directions, and all points within space are becoming
further separated from all other points, not moving away from a
common origin. Similarly, all points in space would converge to a
single point if traced backwards, but this point would not be located
at some particular coordinate in space because every point in the
Universe may be considered the center of the Universe.
The expansion of the space-time metric (not the ejection of matter into static space) provides the reason why the ultra high-energy photons of the early Universe are now detectable as low-energy microwaves filling space. All energy and matter are part and
parcel of the fabric of space-time, and as space-time expands, matter
and energy are also subject to that expansion.
In this regard, we may consider Hubble expansion to be
intimately tied to T0. The current temperature of space defines the
blackbody radiation spectrum of the Universe, and vice versa. The
radiation comprising the spectrum is composed of far red-shifted
photons that were present in the blackbody spectrum of the early
Universe shortly after the Big Bang. At that time, those photons were
much higher in frequency (energy) because the temperature of the
early Universe was extremely high. There was no more energy in the
early Universe than there is today, however. To state otherwise
contradicts the First Law of Thermodynamics. The total energy of the
Universe remains the same, but the energy is now spread out across a
much larger volume. Consequently, the energy density of the Universe
has changed, not the net amount of energy it contains.
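The dilution of that fixed energy budget is usually summarized by the standard-cosmology scaling T ∝ 1/(1 + z); taking textbook values for the epoch when atoms first formed (values not drawn from this book) recovers the present CMBR temperature:

```python
# Standard-cosmology sketch (textbook values, not EGM-specific):
# blackbody temperature dilutes with expansion as T_now = T_then / (1 + z).
T_then = 3000.0    # K, approximate temperature when atoms first formed
z = 1100.0         # approximate redshift of that epoch

print(f"T today ~ {T_then / (1 + z):.3f} K")   # ~2.73 K, the observed CMBR
```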
This relationship between energy-density and temperature is
a well characterized principle of fundamental astrophysics. As a star
forms, clouds of hydrogen condense into a massive sphere of gas,
similar to the planet Jupiter. A pressure threshold must be achieved
before this dense ball of hydrogen (the protostar) is hot enough to
initiate hydrogen fusion. Increasing gravitational pressure on the
hydrogen gas of the protostar causes an increase in temperature. When
the pressure and temperature of the protostar reach the threshold
required to fuse hydrogen into helium, the protostar ignites and the
star is born. The temperature of an active star, which is established as
a function of pressure, also determines the stars blackbody radiation
spectrum. All these factors are completely reliant upon a common
state of equilibrium.
Stars between ten and twenty times more massive than our
Sun meet their end in the form of a gargantuan explosion termed a
supernova. As any star fuses its store of hydrogen into helium, an
enormous amount of energy is released from the star, resulting in
explosive outward pressure. The outward force of the energy released
from hydrogen fusion is counteracted by the inward pressure of
gravity. The balance struck between outward and inward pressures
establishes the spherical dimensions of the star.
However, as the star's store of hydrogen becomes depleted and less energy is produced by fusion, the outward pressure begins to wane and gravitational collapse takes over. Gravity places added pressure on the remaining hydrogen, causing it to heat up further. As the star becomes crushed under its own gravitational weight, it heats up to such an extent that the helium begins to fuse into heavier
elements such as oxygen and carbon. The inward pressure continues
to build until the star implodes and the resulting shock-wave causes
the star to be ripped apart in a massive explosion, expelling the
heavier elements just forged in the stellar crucible out into space.
The remnants of supernova explosions form neutron stars: stellar cores that are roughly as massive as our Sun but only about 20 kilometers in diameter138. Normal atoms possess a nucleus composed
of protons and neutrons, with electrons buzzing around at a relatively
vast distance from the nucleus. In fact, atoms are really mostly made
up of empty space. However, in neutron stars the atoms are
compressed so tightly that they become crushed into a compact ball of
atomic nuclei. This form of matter is so dense that just one cubic
centimeter of it measures in the billions of kilograms! Similarly, the
neutron star's gravitational field is so strong that, in order to escape it, one must achieve an escape velocity of roughly forty percent of the speed of light139!
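Footnote 139's formula can be checked against the quoted figures; with a nominal one-solar-mass core and a 20 km diameter (both rough assumptions taken from the text), the result is of order forty to fifty percent of light-speed, and the mean density comes out at hundreds of billions of kilograms per cubic centimeter:

```python
import math

# Sketch using footnote 139, v_escape = sqrt(2GM/r), with nominal values
# assumed from the text (one solar mass, 10 km radius); order-of-magnitude only.
G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.99792458e8         # m/s
M = 1.989e30             # kg, ~1 solar mass
r = 1.0e4                # m, 10 km radius (20 km diameter)

v_esc = math.sqrt(2 * G * M / r)
density = M / ((4 / 3) * math.pi * r**3)          # mean density, kg/m^3
print(f"v_escape ~ {v_esc / c:.2f} c")            # ~0.5 c for these values
print(f"density ~ {density * 1e-6:.2e} kg/cm^3")  # ~5e11 kg per cubic cm
```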
If a star larger than twenty times the mass of the Sun begins
to burn out its hydrogen supply and collapse under gravity, its matter
compresses into a state so dense that it disappears entirely within the
fabric of space! This, of course, is a black hole. Instead of being
compressed into a ball of atomic nuclei, matter gets squeezed by
gravity into a point. The black hole is black because gravity is so
great that the escape velocity exceeds the speed of light. Light
attempting to flee the confines of the black hole can never reach
escape velocity; it will forever push against the current of gravity in
vain, like a fish trying to swim up a waterfall.
Whether a massive star explodes as a supernova to become a
neutron star, or a super massive star collapses to form a black hole,
the outcome ultimately depends upon the equilibrium established

between the expansive pressure of the energy of the star and the contractive gravitational pressure.

138 Encyclopedia Britannica online: http://www.britannica.com/eb/article-9055410/neutron-star
139 Vescape = √(2GM/r).
The life cycle and destiny of most stars may be determined
utilizing a straight-forward relationship born of the principle of
equilibrium. The Sun is a middle-aged star that has fused about half of
its hydrogen supply into helium, and still has about 4.5 billion years
left before its hydrogen is depleted. As hydrogen becomes depleted,
and the outward pressure from helium fusion overcomes the
compression of gravity, our Sun will swell into a red giant.
Eventually the outer layer of the red giant, composed of helium and
other freshly-formed elements, will slough away from the core
leaving a ring of gasses referred to as a planetary nebula. The core
of the Sun will remain as a white dwarf star at the center of the
nebula, continuing to burn the remaining carbon from helium fusion
until it is also depletedxxvii.
The active hydrogen-burning phase of a star's life cycle is termed its "main sequence". As a star like the Sun becomes a red giant, it moves from its main sequence phase into its red giant phase.
During a stars main sequence, its brightness (luminosity), mass, size
(radius) and temperature are established as a function of equilibrium.
For example, the Sun is a "G2V" class star140, which means that it is a main sequence star whose temperature is 5,700(K) at its surface. Wien's displacement law demonstrates that the peak of the Sun's blackbody spectrum occurs in the yellow-white photographic
light range. The star Rigel in the constellation Orion is seventeen
times more massive than the Sun and six times its radius, with a
temperature of 11,000(K). The mass-density equilibrium of Rigel
relates to its temperature, and the temperature relates to its apparent
color, which is in the blue range. The color blue is higher in frequency
than yellow, and this difference in frequency between Rigel and the
Sun is a function of Wien's displacement law. As temperature increases, the peak emission of the blackbody spectrum shifts upwards in energy. This is termed "color temperature". It's just like a
flame: the blue part of the flame is the hottest, whereas the yellow and
orange parts of the flame are relatively cooler.
This tangential foray into astrophysics has been for the
purpose of conveying a crucial point, which is that seemingly
independent physical parameters of the star are intimately connected,
and a change in one will affect the others. The mass-density and
radius of a star in its main sequence relate to the star's temperature. From the temperature we may predict the star's color which, in turn, is derived from the star's blackbody radiation spectrum. As mass-density increases, so does temperature. This causes the star's blackbody spectrum to shift to a higher-energy and narrower peak frequency bandwidth according to Wien's displacement law. This example from fundamental astrophysics reinforces the concept that objects are systems; they are neither static nor inert. This is the fundamental premise of EGM and the basis upon which all EGM calculations are performed.

140 http://en.wikipedia.org/wiki/Sun
As discussed earlier, any mass-object may be described by its
PV spectrum, which is a direct function of mass-energy density. The
equilibrium point of the star affects its radius, temperature and
blackbody spectrum. The PV spectrum of a star depicts a very similar
relationship. As the mass-energy density of any object increases, its
harmonic cut-off frequency increases and the modal bandwidth is
compressed. Thus, the PV spectrum of a neutron star, for example,
possesses a higher harmonic cut-off frequency and a narrower modal
bandwidth than our Sun, which is less massive.
This is why the EGM principle is universal. We may model
any object, whether it's a galaxy, a cluster of galaxies, a black hole, a
neutron star, the Sun, a planet or a subatomic particle using the same
fundamental equation. However, the real value of the EGM method
lies in its ability to relate mass equilibrium states to one another, as
demonstrated by the subatomic particle harmonic relationship. It is by
way of this harmonic relationship that we may extrapolate
cosmological parameters like the Hubble constant and the CMBR
temperature as well.
Because EGM models a mass-object as existing in
equilibrium with the QV, the local energy state of the vacuum may be
considered to be equivalent to the mass-energy of the object it
encapsulates. For instance, this equivalency relationship is mirrored
by the stable equilibrium state of a star, such that the outward energy
produced by fusion is equal to the inward gravitational energy acting
to contain it. One may also conceptualize this by considering a
seesaw or lever with a fulcrum placed at its center. The lever may
be balanced horizontally if objects of equal weight are placed on each
end. The weight of object A on one side must be exactly the same
weight as object B on the opposing side for the lever to remain
stable and horizontal. In this regard, the mass-energy of an object
must be equivalent to the vacuum energy encapsulating it in order for
it to rest in equilibrium.

EGM is based upon dimensional analysis, which permits the
researcher to model real-world systems by similitude. Employing
similitude, EGM derives H0 and T0 by relating the PV spectrum
of an imaginary particle possessing the energy density of the Universe
at the instant of the Big Bang to its present-day value; utilizing the
mass-energy density of the Milky Way galaxy as a basis for the
comparison. We may apply the EGM method to model the difference
in energy density between the early Universe immediately after the
Big Bang and the present moment by presuming that the seed of the
Big Bang was a particle of maximum permissible energy density, i.e.,
a Planck Particle representing a state in which all the energy in the
Universe is compacted into a single point141.
Like Sir Geoffrey Taylor, who calculated the energy of the
atomic bomb explosion knowing only the difference in blast sphere
radius at given intervals in time, we may extrapolate H0 and T0 by
comparing the analogous Planck Particle Universe at the instant of
creation with the present-day Universe because of the First Law of
Thermodynamics.
The derivation process is executed by utilizing the EGM
harmonic representation of fundamental particles equation to relate
the primordial-Universe Planck Particle to a present-day equivalent of
known mass. However, instead of a proton or electron, the arbitrary
particle we elect to apply as a base-line reference particle is
imaginary, possessing the mass of the Milky Way galaxy. If we were
to use a proton, it would reflect its local equilibrium boundary within
the atomic system, not the energy-density state of the Universe. The
PV spectrum of this Milky Way particle, referred to as the Galactic
Reference Particle (GRP), possesses a harmonic cut-off frequency
which is very high, but less than the Planck Frequency, and represents
the mass-energy density and vacuum equilibrium state of the present-day Universe.

141 EGM demonstrates that as mass-energy density increases, the PV modal bandwidth compresses. Thus, a Planck Particle representing the Universe at an instant prior to the Big Bang specifies a condition where its PV modal spectrum is compressed to a single value approaching the Planck Frequency (see: QE4 for derivation).
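For scale, the Planck Frequency that footnote 141 cites as the compressed spectral limit may be sketched from standard constants; conventions differ (energy over h versus angular frequency), so the exact figure should be treated as nominal:

```python
import math

# Minimal sketch of the Planck Frequency referenced in footnote 141,
# taken here as Planck energy / h; other conventions differ by factors of 2*pi.
G = 6.674e-11               # m^3 kg^-1 s^-2
c = 2.99792458e8            # m/s
h = 6.62607e-34             # J*s
hbar = h / (2 * math.pi)

E_planck = math.sqrt(hbar * c**5 / G)                # ~1.96e9 J
print(f"Planck frequency ~ {E_planck / h:.3e} Hz")   # ~2.95e42 Hz
```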
The Universe is quite isometric142; we observe that all
galaxies are, on average, evenly distributed throughout the Universe,
and that they are all roughly in the same stage of evolution. Thus, we
may assume that the evolution of all galaxies has been subject to the
same ground rules and has followed roughly the same time-line as our
own. Because of this, our own Milky Way galaxy acts as a reference
particle, yielding an average present-day value of the gravitational
intensity throughout space-time.
Astronomers have been able to produce a fairly good
estimate of the total mass of the Milky Way, and have been able to
calculate the distance of our sun to the center of the galaxy. We may
mathematically represent total galactic mass as being contained within
a single particle, placed at the galactic center. This reference
particle (the GRP) may be represented as radiating gravitational
energy equivalent to its total mass. The intensity of gravitational
energy at any radial position, such as the Sun's mean distance from
the galactic center, may be calculated from the PV spectrum of the
GRP. Thus, the GRP is proportionally representative of the total
mass-energy density and QV equilibrium state of the Universe at the
present time.
Pressure, as it has recently been described within this
chapter, is directly related to temperature. Temperature, as we also
know, is directly related to the blackbody spectrum. A mass-object of
any type may be represented by its PV spectrum, which may also be
physically interpreted as a spectrum of gravitons. The parameters of
the PV spectrum directly relate to the gravitational intensity of the
mass-object. In other words, the modes comprising the PV spectrum
indicate the gravitational intensity present at any point from the center
of the mass-object. As one moves away from the center of mass, the
gravitational intensity decreases, and the number of PV spectral
modes increases. This equilibrium gradient denotes a balance of field
pressures between the QV and the mass-object. H0, in a sense, is a
measure of the expansive pressure of the space-time manifold.
Thus, we may utilize the GRP to determine the average cosmological
matter-space-time equilibrium value and derive H0. In this regard,
H0 describes the observed energy condition of the vacuum in its
entirety, as does T0.
Deriving H0 in this manner provides the required input for
the derivation of T0. Once again, we shall commence by stating that
pressure is related to temperature.

142 i.e. the same, no matter where you may be measuring it.

Relating the current average pressure of space-time (i.e. "H0") to the pressure of the
primordial-Universe Planck Particle at the instant of the Big Bang (via
Wien's displacement law) yields a scale by which the Planck mode
decayed into its current spectrum. Gravitons143 radiated at the instant
of the Big Bang have red-shifted144 at the scale defined by the ratio
between the GRP and Planck Particle equilibrium end-points. This
red-shifted frequency may, in turn, be converted to cosmological
temperature, producing an estimate surpassing the precision of the
most accurately measured value of T0145. This level of accuracy is
due, in part, to the natural derivation of the cosmological inflationary
epoch under the EGM construct146.
It is also possible to utilize the primordial-Universe Planck
Particle and GRP end-points to thermodynamically model the change
in H and T since the instant of the Big Bang, forming a complete
historical record of the evolution of the Cosmos! This is accomplished
by relating the Planck Particle and GRP via the harmonic
representation of fundamental particles equation, yielding a
dimensional scaling factor which fills the gap between creation and
the present-day in terms of volumetric expansion.
Immediately after the Big Bang, as energy began condensing
to form matter, the gravitational energy radiating from matter formed
equilibrium gradients within the QV. Hence, the formation of matter
is a vital component for determining the average expansive pressure
of the Universe following the Big Bang and the manner in which H
has changed over time. The influence of matter upon the expansive
pressure of the space-time metric is automatically factored into the
model by incorporating the gravitational energy state at the Suns
relative position to the GRP. We may extrapolate the evolution of H
and T by assuming that the number of space-time modes has
bifurcated exponentially since the Big Bang, taking into account the
effect of matter condensation on the modal spectrum. This facilitates
the determination of scaling factors based upon the intensity of
gravitational flux between the instant of the Big Bang and the present-day; the scaling factors are then applied over the Hubble and temperature domains. The result of this calculation is truly astonishing!

143 i.e. conjugate photon pairs.
144 Into the microwave range.
145 EGM predicts 2.7248(K).
146 The SMoC does not naturally derive the inflationary epoch or consider gravity to be an EM phenomenon; thus, the residual radiation measured as "T0" is assumed by contemporary physicists to be the result of the formation of atoms.
Based upon the EGM method, the epochs of cosmic
evolution are mapped out in extraordinary detail. The resulting history
of H and T corroborates all epochs of cosmic evolution as
asserted by the Standard Model of cosmology. The theory of early
cosmic inflation is reinforced and the recently measured
accelerated expansion is derived.
Cosmic inflation is an epoch thought to have occurred within
the first fractions of a trillionth of a second after the Big Bang. This
burst of rapid acceleration was followed by a reduction in the
acceleration rate, continuing throughout the life-time of the Universe.
Of particular interest in this case is that the inflation epoch emerges
spontaneously as a result of the EGM calculation, and isn't presumed
or placed there a priori as part of the modeling process.
Alan Guth introduced the cosmic inflation hypothesis to the
Standard Model of cosmology as a requisite so that the Big Bang
theory fits observation. Without this inflationary epoch, the
Universe would not exist in its present observational form. It would be
flat and featureless, with no clumps of matter or galaxies, and would
be so small today that the Universe, even after billions of years, would
only fit on the head of a pinxxviii. The inflationary epoch has been
added to the Standard Model of cosmic history because it is required.
Without it, the Big Bang theory flounders. However, the EGM construct generates the inflationary epoch from first principles; it is ultimately derived from a particle physics equation.
The latest scientific measurements demonstrate that the
expansion of the Universe continues to accelerate. Previously,
scientists wondered whether there might be enough matter in the
Universe to halt cosmic expansion. In the fullness of time, it was
thought that perhaps there was enough matter present to suck space-time back in, causing the Universe to meet its end in a reverse of the Big Bang termed the "Big Crunch". However, when the data was
assembled, it vexed some astronomers to discover that the Universe is
actually accelerating at a rate exceeding predictions, based upon the
best estimate of the total amount of matter in the Universe.
The discrepancy between prediction and observation (within
the Standard Model of cosmology) is so vast, in fact, that
cosmologists were forced to invent the concepts of dark energy and
dark matter in order to make sense of the findings. Our best
measurements of expansion are so far from the predicted value that
theorists presently estimate that 72(%) of the Universe must be
composed of dark energy and 23(%) must be dark matter, meaning

that a whopping 95(%) of our Universe exists in an unknown,
unobservable form147!
According to observationxxix, it is thought that a substantial
portion of matter comprising galaxies is missing because of the
peculiar manner in which galaxies rotate. Instead of rotating fastest in
the center and slower on the periphery, as occurs in a vortex of water,
or a cyclone in the atmosphere, stars located in the spiral arms of
galaxies rotate around the central axis at the same rate as the stars near
the center. One might naturally expect that individual stars in the arms
of a galaxy would gradually spiral into the center, moving slowly at
the edge and then faster and faster as they spiral in towards the center
of the vortex. Surprisingly, however, the entire galaxy rotates
uniformly like a giant pin-wheel in space. In order for an entire galaxy
to rotate uniformly it would require much more mass, in the form of
stars, planets and gasses, than is actually found to be present.
Therefore, it is thought that matter must be present in some
undetectable form in great halos surrounding the visible part of a
galaxy. The concept of dark matter has been manufactured in order
to make up for the missing mass, and explain why entire galaxies
rotate uniformly like a wheel, rather than spiral inward like a vortex.
Similarly, dark energy is also a contrivance invoked to
explain why the Universe continues to expand at an accelerated rate,
despite the addition of dark matter. Notwithstanding dark and visible
matter, the remainder of the Universe is thought to be in the form of
an energy field which generates a negative pressure in space,
counteracting gravity on a cosmological scale, causing intergalactic
voids of space-time to expand like giant balloons.
Although the CMBR spectrum is not entirely smooth and
uniform, its overall smoothness necessitates that a certain critical
density of matter exists in the Universe. Unfortunately, the derived
value contradicts measurement when the expansion rate of the
Universe is applied to calculate the density value. In other words, the
CMBR and acceleration rate measurements are in direct conflict with
current theory, which means that either something is fundamentally
wrong with the Standard Model of cosmology, or we must come to
terms with the notion that a mere 4.6(%) of our Universe is
composed of matter and energy that we may observe and measure.
Even though the cosmic inflation epoch is also a contrivance
introduced to fit a theory, EGM substantiates its existence because the
inflation epoch emerges spontaneously as a natural consequence of
the calculation deriving H0 and T0.

147 NASA JPL PlanetQuest news: "SIM PlanetQuest to predict date of cosmic collision" by Bob Silberg.

However, EGM calls into question the existence of dark energy and dark matter. This is due to
the fact that the EGM method not only predicts H0 and T0 with
extraordinary precision, it predicts the inflationary epoch and current
measures of accelerated expansion without invoking dark matter or
energy. In fact, based upon the EGM method, the contribution of dark
matter and energy to the cosmological model is negligible. The EGM
method requires no contrivances or fudge-factors in order to produce
results which are substantially more precise than those provided by
the Standard Model of cosmology (and particle physics).
Astonishingly, EGM allows one to derive T from H,
demonstrating that they are intimately related phenomena. As a
consequence, the entire history of the Cosmos is revealed such that
key evolutionary epochs are clearly and precisely defined without the
need for dark energy and matter. After the Big Bang, an inflationary
epoch ensued, followed by phases leading to the condensation of
matter, the formation of stars, heavy elements and large-scale
structures such as galaxies. Cosmological epochs arise due to the
energy density conditions present in the Universe during each phase.
Just like the formation of subatomic particles in a collider, each
cosmic phase transition was induced by the epoch-specific energy
density parameters of the Universe which existed at that particular
time.
These epochs in the lifetime of the Universe are not unlike
the main sequence lifetimes of stars. The fate of a star is preordained
by consequence of its physical state of equilibrium. The
characteristics of the star (its temperature, color, size and even the duration of its life) hinge on a dynamic balance between the star's thermal energy and gravity. When a giant star transitions between its main sequence and its death as a supernova, the phase transitions brought about by shifts in equilibrium forge heavy elements and disperse them throughout the Universe. The formation of these
elements provided the starting material for planet formation, and
ultimately, the emergence of life. We owe our existence to the
principle of equilibrium and the harmonic paradigm that EGM
describes.
The fact that it is possible to utilize the EGM harmonic
representation of fundamental particles equation to solve for
cosmological problems such as H0 and T0, as well as to describe in fine detail the timeline of cosmic history, means that the Cosmos is beholden to the same harmonic imperative begetting the existence of matter. The EGM principle is more than universal; it is "cosmological".

We have come full circle, from alpha to omega, having
substantiated a mathematical philosophy once fervently espoused by
Pythagoras and the ancient Babylonians over two thousand years ago.
We now hold substantive evidence authenticating the philosophical
beliefs of our ancient scientific predecessors, who contemplated and
understood the Cosmos to be much more than a void in which
matter merely resides. Their depiction of the Cosmos encompassed all
forms in the Universe, from the miniscule to the immense, living and
inert. Their Cosmos was an expression of Musica Universalis the
harmonic affinity connecting all things and giving rise to all forms in
Nature.

EGM Technical Summary

Written by Riccardo C. Storti

"Brevity is the sister of talent."
Interpretation: revolutionary statements in science should be simple.
Russian proverb

"Brevity can be the enemy of comprehension."
Interpretation: recognition and comprehension of simple and revolutionary scientific statements depends upon the skill-set of the audience.
Riccardo C. Storti

9.1 Overview

The following section outlines the method developed within
QE2-4 to describe g in harmonized terms, yielding new predictions
and highly precise experimentally verified results beyond the
Standard Models (SMs) of particle physics and cosmology. The
EGM construct derives (see: QE3):
i. A harmonic representation of gravitational fields at a
mathematical point arising from geometrically spherical
objects of uniform mass-energy distribution using modified
Complex Fourier series.
ii. Characteristics of the amplitude spectrum based upon (i).
iii. Derivation of the fundamental harmonic frequency based
upon (i).
iv. Characteristics of the frequency spectrum of an implied ZPF
based upon (i) and the assumption that an EM relationship
exists over a change in displacement across a practical
bench-top test volume.
The derivational procedure obeys the following hierarchy:
v. A harmonic representation of g is developed.
vi. The frequency spectrum of (v) is derived by application of
Buckingham Π Theory (BPT) and dimensional similarity.
vii. The ZPF energy density is related to (vi) based upon the
assumption that engineered EM changes in g may be

produced across the dimensions of a practical bench-top test volume.
viii. Spectral characteristics of the PV are derived based upon
(vii).
ix. A description of physical modeling criteria is presented.
x. A set of sample calculations and illustrational plots are
presented.
Applicable definitions:

• Quantum Vacuum (QV): a quantum representation of the space-time manifold within GR.
• Quantum-Vacuum-Energy (QVE): the spectral energy associated with the QV.
• Zero-Point-Field (ZPF): the QV field associated with globally flat space-time geometry. However, such a configuration cannot physically exist; thus, the ZPF takes the form of a generalized reference to the QV field throughout the Quinta Essentia series (i.e. QE2-4).
• Zero-Point-Energy (ZPE): the spectral energy associated with the ZPF.
• Polarizable Vacuum (PV): a polarized representation of the ZPF.
• Electro-Gravi-Magnetics (EGM): a theoretical relationship between EM fields and g.
Fourier series148 may be applied to represent a periodic function as a trigonometric summation of sine and cosine terms. It
may also be applied to represent a constant function over an arbitrary
period by the same method. Since the PV model is (historically) a
weak field isomorphic approximation of GR and the frequency
spectrum is postulated to range from negative to positive infinity, it
follows that Fourier series represent a useful tool by which to describe
gravity.

148 A Fourier series representation of a constant function involves the hybridization of amplitude and frequency spectra (i.e. a Fourier distribution contains two embedded spectra).
Utilizing Fourier series in complex form149, the square wave is constructed by summing modes. The manner in which the function
to be approximated is articulated influences its harmonic
characteristics150.

[Figure 1.1]
A constant function is termed "even" due to symmetry about the Y axis; subsequently, its Fourier approximation need only
contain certain terms at odd harmonics151, presenting the added
advantage of mathematical and energetic efficiency152. Thus, the
preceding periodic square wave may be reconstructed utilizing the
symmetry characteristics of a constant function as depicted by the
proceeding graph153 such that g is physically measured as a constant
function at the surface of the Earth. A Fourier series approximation of
g may be obtained by computing the magnitude of the preceding /
proceeding periodic square waves154 as the number of harmonic
modes tends to infinity.

149 The preferred representation in the Quinta Essentia series, possessing Real (Re) and Imaginary (Im) parts; however, the Im contribution mathematically sums to zero.
150 i.e. it may contain exclusively cosine or sine terms; alternatively, it may contain both trigonometric forms.
151 i.e. 1st, 3rd, 5th etc.
152 i.e. the system is modeled as existing at its lowest energy state.
153 i.e. for demonstration purposes only, up to the 21st harmonic.
154 i.e. computing the magnitude acts to enforce full wave rectification; http://en.wikipedia.org/wiki/Rectifier
[Figure 1.2]
Therefore, g (i.e. a constant function) may be
mathematically characterized as a fully rectified periodic square wave
composed of odd Fourier harmonics. Due to symmetry (as illustrated
above / below), g may be constructed utilizing half the period of the
fully rectified square wave155.

[Figure 1.3]

155 i.e. the complete square wave cycle is not required to describe the system.
Time domain modeling may be applied over the displacement domain of a practical bench-top test volume by
considering the relevant changes over the dimensions of that volume.
Constant functions may be expressed as a summation of trigonometric
terms; subsequently, it is convenient to model a gravitational field
utilizing modified Complex Fourier series according to an odd
number harmonic distribution. Hence, g may be usefully
represented by the magnitude of a periodic square wave solution as the number of waves utilized to describe it approaches infinity.
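A minimal numerical sketch of this construction (unit amplitude and the mode counts are assumptions for demonstration): sum the odd harmonics of a square wave, take the magnitude to enforce full wave rectification, and observe the mean tending toward the constant value as modes are added.

```python
import numpy as np

# Sketch: approximate a constant function ("g", unit amplitude assumed) as
# the magnitude of an odd-harmonic Fourier square wave, per the text above.
def square_wave(t, n_modes):
    """Sum of the first n_modes odd sine harmonics of a unit square wave."""
    total = np.zeros_like(t)
    for n in range(1, 2 * n_modes, 2):            # odd harmonics: 1, 3, 5, ...
        total += (4.0 / (np.pi * n)) * np.sin(n * t)
    return total

t = np.linspace(0.01, np.pi - 0.01, 1000)         # half a period
for n_modes in (3, 11, 101):
    g_approx = np.abs(square_wave(t, n_modes))    # full wave rectification
    print(n_modes, "modes -> mean ~", round(g_approx.mean(), 4))  # tends to 1
```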
It is demonstrated in QE3 that dimensional similarity and the
equivalence principle may be applied to represent the magnitude of an
acceleration vector such that an expression for the frequency spectrum
is derived in terms of harmonic mode. This is achieved by assuming
that electromagnetically induced acceleration is dynamically,
kinematically and geometrically similar to g as constructed by
Fourier series wave summation.
The gravitational field surrounding a homogeneous solid
spherical mass may be characterized by its energy density. If the
magnitude of this field is directly proportional to the mass-energy
density of the object, then the field energy density of the PV may be
evaluated over the difference between successive odd frequency
modes. The reason for this is due to the mathematical properties of
Fourier series for constant functions. For such cases as appears in
standard texts, the summed contribution of all even modes equals
zero. Subsequently, only odd mode contributions need be considered
when modeling a constant function.
Utilizing the approximate rest mass-energy density of a solid
spherical object, an expression relating the terminating harmonic cut-off mode may be derived by assuming that the equivalent quantity of
mass-energy within an object is also stored in the gravitational field
surrounding it. Subsequently, the upper boundary of the frequency
spectrum, termed the harmonic cut-off frequency, may be calculated;
the derivation is based upon the compression of energy density of the
random ZPF form to one change in odd harmonic mode while
preserving dynamic, kinematic and geometric similarity in accordance
with BPT.
The compressed random ZPF form is subsequently decompressed over the Fourier domain (assigning structure), yielding a highly precise reciprocal harmonic representation of g, preserving dynamic, kinematic and geometric similarity to the Newtonian, PV and GR representations. The cross-fertilization of the amplitude and frequency characteristics of a constant function described by Fourier series with the ZPF spectral energy density distribution derived by Haisch and Rueda is a useful tool by which to determine the spectral characteristics of the PV representation of GR (proposed by Puthoff) at the surface of the Earth (for example) by assuming,
xi. The PV physically exists as a spectrum of frequencies and
wave vectors.
xii. The sum of all PV wave vectors at the surface of the Earth is
coplanar with the gravitational acceleration vector. This
represents the only vector of practical experimental
consequence.
xiii. A modified Complex Fourier series representation of g is
representative of the magnitude of the resultant PV wave
vector.
xiv. A physical relationship exists between electricity, magnetism
and gravity such that g may be investigated and modified.
Therefore, we may summarize the solution algorithm constituting the harmonically based EGM construct by five simple steps as follows:
xv. Apply Dimensional Analysis Techniques (DAT's), BPT and similarity principles to combine electricity, magnetism and resultant EM acceleration in the form of Π groupings.
xvi. Apply the equivalence principle to the Π groupings formed in (xv).
xvii. Apply Fourier Harmonics to the equivalence principle.
xviii. Apply ZPF Theory[156] to Fourier Harmonics.
xix. Apply the PV model of gravity to the ZPF.
Within the EGM construct, the Poynting Vector P represents the propagation of energy (i.e. conjugate photon pairs, see: QE3) radially outwards from the center of mass; however, g is the result of the change in P (i.e. ΔP) between two points in the displacement domain. This may appear counter-intuitive since P propagates away from the center of mass, but g is a consequence of ΔP, not P. A ΔP arises due to the superposition of the P field upon the ZPF. The ZPF acts to constrain the P field, yielding g as predicted by Newtonian mechanics and GR.
This principle may be demonstrated by a simple example: let the value of P at positive radial displacements r1 and r2 from a mass-object be given by the positive values P1 and P2 respectively. Hence, if r2 is greater than r1 then P2 is less than P1, because P2 tends to zero as r2 approaches infinity; the difference between P1 and P2 is thus negative, indicating that g acts towards the center of mass, opposite to the direction of propagation of P.

[156] See also: ZPF equilibrium as described in QE2 (i.e. the chapter titled The Natural Philosophy of Fundamental Particles).
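A toy numerical rendering of this sign argument (assuming, for illustration only, an inverse-square fall-off for P; the specific functional form is not taken from the text):

```python
# Toy illustration: P(r) is positive and decreasing, so Delta-P between two
# radial displacements r1 < r2 is negative, i.e. "g" points back toward the
# center of mass, opposite to P's direction of propagation.
def P(r):
    return 1.0 / r**2      # assumed inverse-square fall-off (illustrative)

r1, r2 = 2.0, 5.0          # r2 > r1
delta_P = P(r2) - P(r1)    # change in P over the displacement domain
print(delta_P < 0)         # True
```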
P represents the propagation of spectral mass-energy
equivalence in the form of populations of conjugate Photon pairs. An
equilibrium gradient in the displacement domain arises due to the
mathematical interaction between the mass-energy and ZPF spectra,
equivalent to space-time curvature under GR because the intensity of
P varies congruently with g. Hence, the radial gradient in P is
analogous to variations in the Refractive Index of the space-time
manifold in an optical model of gravity.

9.2 The QV spectrum

Historically, the QV has been considered to be composed of a near infinite spectrum of randomly orientated wave functions, each of specific frequency and amplitude, analogous to the static one observes on a dead television channel. However, the EGM construct disagrees with this historical conception as it implies the existence of a near infinite quantity of energy in a vanishing volume (i.e. free space contains a near infinite amount of energy).
EGM asserts that the QV is more appropriately described as
a finite spectrum whose wave function population is determined by
the quantity of mass-energy occupying a specific volume (i.e. free
space contains a near zero amount of energy). Subsequently, the QV
spectrum may be characterized by the following statements:
xx. It is a generalized reference to a quantum description of the
space-time manifold.
xxi. In flat space-time geometries, it transforms to the ZPF
spectrum.
xxii. In curved geometries (i.e. gravitational fields), it transforms
to the PV spectrum.

9.3 The EGM spectrum

The EGM spectrum is a harmonic description of mass-energy represented as conjugate EM wavefunction pairs; incrementally above 0(Hz), tending to the Planck Frequency and obeying a Fourier distribution. Key generalized spectral features are:
xxiii. It is discrete and harmonically continuous.
xxiv. The terminating frequency is a harmonic multiple of the fundamental (i.e. lowest freq.).
xxv. Each wavefunction represents a population of photons such that each conjugate photon pair constitutes a graviton.
xxvi. Where appropriate, due to the principle of mass-energy equivalence and the law of conservation of energy[157], it may also be referred to as the PV spectrum.

9.4 The ZPF spectrum

The ZPF spectrum may be partially described by its contrast to the EGM spectrum. The EGM spectrum relates the mass of an object to the gravitational field surrounding it utilizing Fourier harmonics; hence, it is somewhat localized. However, the energy of the ZPF is dispersed homogeneously throughout the Universe.
The historical conception of the ZPF implies the existence of
a near infinite quantity of energy in a vanishing volume (i.e. free
space contains a near infinite amount of energy). Fortunately, EGM
resolves this conflict such that a vanishingly small volume of flat
space-time does not contain an infinite amount of energy. This is
achieved by merging the continuous cubic frequency characteristic of
the ZPF with a discrete and finite Fourier distribution such that,
xxvii. The number of harmonic modes approaches infinity.
xxviii. The highest frequency tends to zero.
A determination of available ZPF energy throughout the
observable Universe is demonstrated in QE2,4 and the gradient of the
Hubble constant in the time domain is shown to be presently
positive[158].

9.5 The PV spectrum

The PV spectrum may be formulated by merging the EGM and ZPF spectral distributions. Energy condensed as mass is finite, representing a small fraction of the total energy in the Universe. The finite parameters of matter dictate the form that the mass-energy spectrum will take. The resulting harmonic description is termed the PV spectrum.
[157] i.e. the mass-energy within an object is energetically equivalent to the gravitational field surrounding the object.
[158] Facilitating an explanation of the accelerated cosmological expansion phenomenon.

PV spectral formation may be conceptualized by considering a Universe populated by a single spherical object of homogeneous mass-energy density. When such an object is added to an empty Universe, the EGM spectrum of the object is superimposed upon the background ZPF spectrum. Merging the EGM and ZPF spectra results in the cross-fertilization of characteristics; the complete mathematical derivation is contained in QE3. Descriptions of the specific mathematical events required are as follows,
xxix. Integrate the Haisch-Rueda-Puthoff (HRP) spectral energy density equation over the frequency domain ω (see the sketch following this list).
xxx. Recognize that, for any Fourier summation resulting in a constant function, only odd harmonic modes are required due to the null summation of even modes. This is a fundamental property of Fourier mathematics and should not be dismissed[159].
xxxi. Formulate an expression for the change in energy density with respect to odd harmonics, in terms of Δω, utilizing the integrated HRP spectral energy density equation.
xxxii. Substitute the harmonic frequency PV relationship into the integrated HRP spectral energy density equation.
xxxiii. Solving appropriately, one obtains the harmonic cut-off mode and frequency (i.e. nΩ and ωΩ respectively). nΩ denotes the highest harmonic mode contained in the merged spectra (i.e. the PV spectrum) and ωΩ represents the terminating spectral frequency relative to a fundamental value (i.e. its lowest permissible magnitude).
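A minimal numerical sketch of step (xxix), assuming the standard cubic zero-point spectral form ρ(ω) = ħω³/2π²c³ attributed above to HRP; the fundamental frequency and cut-off mode used below are illustrative placeholders, not EGM-derived values:

```python
import numpy as np

# Integrate the cubic zero-point spectral energy density from 0 to a
# cut-off, then evaluate the energy held in one change of odd harmonic
# mode near that cut-off (the band the text works with in step xxxi).
hbar = 1.054571817e-34    # J.s
c = 2.99792458e8          # m/s

def U(omega):
    """Closed-form integral of rho(w) = hbar*w^3/(2*pi^2*c^3), from 0 to omega."""
    return hbar * omega**4 / (8.0 * np.pi**2 * c**3)

omega_0 = 1.0e3           # illustrative fundamental frequency (rad/s)
n_cut = 1001              # illustrative odd cut-off mode
dU = U(n_cut * omega_0) - U((n_cut - 2) * omega_0)
print(dU)                 # energy density (J/m^3) in the final odd-mode band
```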
Hence, all required attributes have been derived to completely describe g in harmonic terms. The next step is to understand how the EGM method produces a PV spectrum such that the infinite energy dilemma of ZPF Theory (derived by contemporary QM methods) is averted. The deductive reasoning may be articulated as follows:
xxxiv. The HRP derivation implies that the majority of ZPE exists at the spectral limit[160].
xxxv. Assume that the ZPE at an arbitrary mathematical point in the space-time manifold is constant such that the associated spectrum may be described harmonically relative to the magnitude of some fundamental frequency at the point under consideration.
xxxvi. The Fourier characteristics of a constant function demonstrate that only odd harmonic modes are required for summation.
xxxvii. Principles of equivalence and similitude imply that the highest spectral transition of odd harmonic mode may be utilized in the representation of the total localized ZPE.
xxxviii. Assume that the mass-energy density of an object is equal to the spectral energy density of the gravitational field surrounding it.
xxxix. Integrating the HRP spectral energy density relationship yields the total ZPE, which may be expressed locally as a narrow high-frequency bandwidth of equivalent energy. Equating this result to the mass-energy density of an object yields the PV spectrum surrounding it, preserving similitude (see the energy balance sketched below).

[159] Refer to any standard text for further information regarding Fourier techniques.
[160] i.e. the low frequency energy contribution is comparatively trivial.
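The energy balance underpinning steps (xxxviii)-(xxxix) may be written compactly as follows; this is a sketch assuming the cubic HRP spectral form, odd-mode spacing of two fundamental units (writing ω0 for the fundamental frequency), and a homogeneous sphere of mass M and radius R, not the full QE3 derivation:

```latex
% Total ZPE below the cut-off, from the cubic HRP spectrum:
U(\omega_\Omega) \;=\; \int_0^{\omega_\Omega}
  \frac{\hbar\,\omega^{3}}{2\pi^{2}c^{3}}\,d\omega
  \;=\; \frac{\hbar\,\omega_\Omega^{4}}{8\pi^{2}c^{3}}

% Energy in the final change of odd harmonic mode, equated to the
% rest mass-energy density of the object:
\frac{\hbar}{8\pi^{2}c^{3}}
  \left[(n_\Omega\,\omega_0)^{4}-\bigl((n_\Omega-2)\,\omega_0\bigr)^{4}\right]
  \;=\; \rho\,c^{2},
\qquad
\rho \;=\; \frac{M}{\tfrac{4}{3}\pi R^{3}}
```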

Therefore, when the EGM and ZPF spectra are merged, the
continuous ZPF spectrum is compressed and equated to the Fourier
distribution of the EGM spectrum such that the resulting PV spectrum
is a decompressed form of the merged spectra and the properties of its
spectral limits may be determined. This process mathematically
transforms the continuous ZPF spectrum to a discrete and finite
Fourier distribution of equivalent energy. Thus, as radial displacement
r at a mathematical point from a mass-object increases;
xl. Gravitational field strength decreases.
xli. Spectral energy density decreases.
xlii. The number of harmonic modes increases (i.e. bifurcation).
xliii. Greater numbers of modes are required to be summed for
energetic equivalence.
The EGM interpretation of gravity is similar to Newton's thoughts on an optical model, in which the aether was presumed to be denser farther away. The gradient in aether density causes light and objects to follow trajectories characteristic of GR. EGM demonstrates that the increasing density of Newton's aether is analogous to increases in mode population in the PV spectrum. Hence, the PV is an EM frequency spectrum obeying a Fourier distribution at displacement r describing a mass M induced gravitational field such that:
xliv. It denotes a polarized form of the ZPF spectrum[161].
xlv. The population of spectral modes decreases as mass increases.
xlvi. Maximum spectral frequency increases as mass increases.
xlvii. The fundamental spectral frequency increases as mass increases.
xlviii. Spectral frequency bandwidth[162] increases as mass increases.

[161] Mass pushes the ZPF surrounding it uphill, against the natural flux of space-time manifold expansion.

9.6

The EGM, PV and ZPF spectra

The difference between the EGM, PV and ZPF spectra is that


the EGM spectrum commences incrementally above 0(Hz) and
approaches the Planck Frequency. The PV spectrum is mass specific
and represents a bandwidth of the EGM spectrum commencing at a
non-zero fundamental frequency. The EGM and PV spectra follow a
Fourier distribution, whereas the ZPF spectrum possesses the same
frequency bandwidth of the EGM spectrum, but does not follow a
Fourier distribution. Thus, the EGM spectrum is the polarized form of
the ZPF spectrum, while the PV spectrum is an object specific subset
of the EGM spectrum following a Fourier distribution.

9.7 The Casimir Effect

The Casimir Effect[163] demonstrates that when small distances separate two flat neutral metal plates, photons in the PV field with wavelengths larger than the plate separation distance are excluded from the spatial cavity, resulting in an attractive force between the plates due to the bias in vacuum energy across the system[164]. Gravity, in this regard, is analogous to a long-range Casimir Effect because EGM asserts that mass induced gravitational effects may be described by changes in mode population across a region of space.
The EGM construct was applied in QE3 to derive the Casimir Force from first principles, demonstrating that it differs depending upon ambient gravitational field strength! For example, the Casimir Force will be slightly different on Earth than on Jupiter or the Moon. QE3 states that,
xlix. "... an Earth based equivalent Casimir experiment conducted on Jupiter will exclude fewer low frequency modes, preserving higher frequency modes that simply pass through the plates, resulting in a smaller Casimir Force. By contrast, the same experiment conducted on the Moon will produce a larger Casimir Force."
l. "... a Casimir Experiment conducted in free space will produce an extremely small force (tending to zero) due to the lack of initial background field pressure. Since the Casimir Force arises from a pressure imbalance, the lack of significant ambient field pressure between the plates[165] prevents the formation of large Casimir Forces."

[162] i.e. the difference in magnitude between the highest and lowest frequencies.
[163] Presently, it is only experimentally confirmed to exist in gravitational fields (i.e. PV fields). The Effect has not been physically verified in flat space-time geometries (i.e. the free-space 0g condition).
[164] i.e. the vacuum energy density is lower between the plates.
[165] i.e. in and around the experimental zone.
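For scale, a minimal sketch of the idealized flat-space Casimir pressure between perfectly conducting plates, using the standard textbook result P = π²ħc/240d⁴; the EGM-predicted dependence on ambient gravitational field strength described above is a correction this sketch does not attempt to model:

```python
import numpy as np

# Idealized Casimir pressure between perfectly conducting parallel plates:
# P = pi^2 * hbar * c / (240 * d^4). Standard flat-space result only.
hbar = 1.054571817e-34    # J.s
c = 2.99792458e8          # m/s

def casimir_pressure(d):
    """Attractive pressure (Pa) for plate separation d (m)."""
    return np.pi**2 * hbar * c / (240.0 * d**4)

for d_nm in (100, 500, 1000):
    d = d_nm * 1e-9
    print(f"d = {d_nm:4d} nm -> P = {casimir_pressure(d):.3e} Pa")
```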

9.8 Comparative spectra

Note: labels of the form 2.xx, 3.xx, 4.xx refer to QE2,3,4 respectively.

EGM bandwidth comparisons of PV spectra associated with physical categories of objects may be formulated and represented graphically based upon ZPF equilibria. Determination of the ZPF equilibrium radius of subatomic particles is a sophisticated process, summarized in QE2. A complete and rigorous derivation is presented in QE3.
Utilizing the EGM construct, the HRP spectral energy density equation with cubic frequency distribution may be graphically categorized into four regions (i.e. zones); these are: Planck-scale energy densities, particle physics, astrophysics and cosmology, subject to the following generalized characteristics[166] [see: Fig. (2.1, 2.2)],
li. Planck scale energy densities[167] [see: QE4]
- Narrowband high-frequency spectrum.
- Narrowband modal spectrum.
lii. Particle physics
- Broadband high-frequency spectrum.
- Narrowband modal spectrum.
liii. Astrophysics
- Moderateband[168] high-frequency spectrum.
- Moderateband modal spectrum.
liv. Cosmology [see: Tab. (2.6, 2.7)]
- Narrowband low-frequency spectrum.
- Broadband modal spectrum.

[166] See: QE2 for precise numerical determinations.
[167] Refers to particulate representations of maximum permissible energy densities (i.e. Black Hole singularities).
Sample plots,

Figure[169] 2.1 (illustrational only - not to scale),

where,

Region / Zone          A      B       C     D
Applicable Category    Cosm.  Astro.  PP    PS
Gravitational Model    ZPF    PV      PV    PV
Space-Time Geometry    Flat   Curved  Flat  Curved

Table 2.6,

Note: Cosm. (Cosmology), Astro. (Astrophysics), PP (Particle Physics) and PS (Planck Scale).

[168] A generalized reference to spectral bandwidth relative to narrow and broad descriptors.
[169] Utilizing this proportional spectral frequency characteristic in the harmonic representation of gravitational fields by the EGM method, the bifurcation phenomenon may be mathematically articulated by the relationship ω0 ∝ 1/nPV [Eq. (2.7); see: QE2].

Figure 2.2 (illustrational only - not to scale),

where,

Region / Zone          E       F     G       H
Applicable Category    PS      PP    Astro.  Cosm.
Gravitational Model    PV      PV    PV      ZPF
Space-Time Geometry    Curved  Flat  Curved  Flat

Table 2.7,

On a Cosmological scale[170], the ZPF upper spectral limit is influenced by the average energy density of the present Universe. The spectral density of the ZPF remains cubic; however, the upper spectral frequency limit is lower than it was in the early Universe. Hence, the majority of ZPE is presently in the form of low-frequency modes, each containing a relatively small amount of energy.
The few high-frequency modes characterizing the early Universe have bifurcated into a very large bandwidth of lower-frequency modes as the Universe expanded to its present form. The total energy of the Universe remains constant, but is spread out over a much greater volume as cosmological expansion continues. It is demonstrated by derivation in QE3 and confirmed in QE4 that the majority proportion of the gravitational effect in a field occurs at the harmonic cut-off frequency ωΩ, such that all other frequencies may be usefully neglected[171].

[170] i.e. on average, with a flat space-time manifold as determined by the Wilkinson Microwave Anisotropy Probe (WMAP).
[171] The information in this paragraph should not be confused with the PV spectrum of a specific body such as a planet, in which case the bulk of the gravitational energy [i.e. >> 99.99(%)] occurs at the harmonic cut-off frequency. The low frequency modes do not contribute significantly and may be usefully neglected from most calculations. This phenomenon has been thoroughly and rigorously explored in QE3.

9.9 Characterization of the gravitational spectrum

The EGM equations, utilized to describe fundamental particles in harmonic terms, are simplified for values of Refractive Index KPV approaching unity. This facilitates the representation of g utilizing the PV harmonic cut-off frequency ωΩ, leading to the formulation of a generalized cubic frequency expression. It is demonstrated that the PV spectrum is dominated by ωΩ such that the magnitude of the associated gravitational Poynting Vector is usefully approximated by the total energy density, resulting in an expression for EGM Flux Intensity C_J. The derivation sequence proceeds as follows,
lv. Simplification of the EGM equations.
lvi. Derivation of g in terms of ωΩ.
lvii. Formulation of a generalized cubic frequency expression in terms of g.
lviii. Determination of the gravitationally dominant EGM frequency.
lix. Derivation of C_J.

9.10 Planck-Particle characteristics

The minimum physical dimensions of Schwarzschild-Planck-Particle mass and radius are derived, leading to the determination of the value of KPV at the event horizon of a Schwarzschild-Planck Black Hole (SPBH). Consequently, the magnitude of ωΩ at the event horizon RBH of a Schwarzschild Black Hole (SBH) is presented, yielding the singularity radius rS and harmonic cut-off profiles (nΩ and ωΩ from rS to RBH). The minimum gravitational lifetime of matter TL is also advanced such that the value of the generalized average emission frequency per graviton ωg may be calculated. These determinations assist in the supplemental EGM interpretation with respect to the visibility of Black Holes (BHs). The derivation sequence proceeds as follows,
lx. Derivation of the minimum physical Schwarzschild-Planck-Particle mass and radius.
lxi. Derivation of the value of KPV at the event horizon of a SPBH.
lxii. Derivation of ωΩ at the event horizon of a SPBH.
lxiii. Derivation of ωΩ at the event horizon of a SBH.
lxiv. Derivation of rS.
lxv. nΩ and ωΩ profiles (from rS to RBH) of SBHs.
lxvi. Derivation of TL.
lxvii. Derivation of ωg.
lxviii. Why can't we observe BHs?

9.11 Cosmology

9.11.1 Fundamental

The primordial and present values of the Hubble constant are derived (H and HU respectively), leading to the determination of the Cosmic Microwave Background Radiation (CMBR) temperature TU. This facilitates the determination of the impact of dark matter / energy on HU and TU such that a generalized expression for TU in terms of HU is formulated. An experimentally implicit derivation of the ZPF energy density threshold UZPF is also presented. The derivation sequence proceeds as follows,
lxix. Derivation of H and HU.
lxx. Derivation of TU.
lxxi. Numerical solutions for[172] H, AU, RU, ρU, MU, HU and TU.
lxxii. Determination of the impact of dark matter / energy on HU and TU.
lxxiii. TU as a function of a generalized Hubble constant.
lxxiv. Derivation of[173] Ro, MG, HU2 and ρU2 from TU2.
lxxv. Experimentally implicit derivation of UZPF.

[172] AU, RU, ρU, MU denote cosmological age, size, mass-density and total mass respectively.
[173] Ro and MG denote galactic radius and mass respectively; HU2, ρU2 and TU2 represent transformations of HU, ρU and TU.

9.11.2 Advanced

A time dependent derivation of TU is performed, including its rate of change and relationship to HU. This facilitates the articulation of the cosmological evolution process into four distinct periods dealing with the inflationary and early expansive phases. Subsequently, the history of the Universe[174] is developed and compared to the Standard Model (SM) of Cosmology (SMoC). This assists in determining the cosmological limitations of the EGM construct. The question of the practicality of utilizing conventional radio telescopes for gravitational astronomy is also addressed. The derivation sequence proceeds as follows,
lxxvi. Time dependent CMBR temperature.
lxxvii. Rates of change of CMBR temperature.
lxxviii. Rates of change of the Hubble constant.
lxxix. Cosmological evolution process.
lxxx. History of the Universe.
lxxxi. EGM cosmological construct limitations.
lxxxii. Are conventional radio telescopes practical tools for gravitational astronomy?

9.11.3 Gravitational

An engineering model is developed to explain how gravitational effects are transmitted through space-time in terms of EGM wavefunction propagation and interference. The derivation sequence proceeds as follows,
lxxxiii. Gravitational propagation: the mechanism for interaction.
lxxxiv. Gravitational interference: the mechanism of interaction.

[174] As defined by the EGM construct.
9.11.4 Particle

The following characteristics are derived utilizing EGM principles,
lxxxv. The photon and graviton mass-energy lower limits.
lxxxvi. The photon and graviton Root-Mean-Square (RMS) charge radii lower limits.
lxxxvii. The photon charge threshold.
lxxxviii. The photon charge upper limit.
lxxxix. The photon charge lower limit.

9.12 Key point summary

Under the EGM construct, the following assertions were derived,
xc. The EGM spectrum is a harmonic description of mass-energy represented as conjugate EM wavefunction pairs; incrementally above 0(Hz), tending to the Planck Frequency and obeying a Fourier distribution. Key generalized spectral features are,
- It is discrete and harmonically continuous.
- The highest frequency is a harmonic multiple of the fundamental (i.e. lowest freq.).
- Each wavefunction represents a population of photons such that each conjugate photon pair constitutes a graviton.
- Where appropriate, due to the principle of mass-energy equivalence and the law of conservation of energy, it may also be referred to as the PV spectrum.
xci. The ZPF is an EM frequency spectrum referring to the QV spectrum of globally flat space-time geometry. However, such a configuration cannot physically exist; thus, the ZPF takes the form of a generalized reference to the QV field throughout the Quinta Essentia series (i.e. QE2-4) such that,
- The number of harmonic modes approaches infinity.
- The highest frequency tends to zero.
xcii. The PV is an EM frequency spectrum obeying a Fourier distribution at displacement r describing a mass M induced gravitational field such that,
- It denotes a polarized form of the ZPF spectrum.
- The population of spectral modes decreases as mass increases.
- Maximum spectral frequency increases as mass increases.
- The fundamental spectral frequency increases as mass increases.
- Spectral frequency bandwidth increases as mass increases.
xciii. A vanishing volume containing infinite energy does not exist under the EGM construct.
xciv. Although on the human scale the quantity of ZPF energy is trivial, on the astronomical or cosmological scale it becomes extremely large when approaching the dimensions of the visible Universe.
xcv. The EGM spectrum is a simple, but extreme, extension of the EM spectrum.
xcvi. The ZPF equilibrium radius of astronomical bodies coincides with the mean radius (see: QE3), representing the mathematical boundary (within EGM) delineating mass composition and the gravitational field surrounding it.
xcvii. The EGM harmonic representation of fundamental particles[175] is derived by considering all matter to be radiators of populations of conjugate photon pairs[176], suggesting that the quintessential building-block of all atoms, chemical elements, molecules and material forms in the Cosmos is the photon.
xcviii. EGM is a method and not a theory because: (i) it is an engineering approximation and (ii) the mass and size of most subatomic particles are not precisely known. It harmonizes all fundamental particles relative to an arbitrarily chosen reference particle by parameterising ZPF equilibrium in terms of ωΩ.
xcix. The formulation of table[177] (4.5) is a robust approximation based upon PDG data. Other interpretations are possible, depending on the values utilized. For example, if one re-applies the method presented in QE3 based upon other data, the values of St in table (4.5) might differ. However, in the absence of exact experimentally measured mass and size information, there is little motivation to postulate alternative harmonic sequences, particularly since the current formulation fits the available experimental evidence extremely well.
c. If all mass and size values were exactly known by experimental measurement, the main sequence formulated in QE3 (or a suitable variation thereof) will produce a precise harmonic representation of fundamental particles, invariant to interpretation. Table (4.5) values cannot be dismissed due to potential multiplicity before reconciling how,
- EGM generates radii values substantially more accurate than any other contemporary method[178].
- EGM is capable of producing a Top quark mass value the SM of particle physics cannot.
- Extremely short-lived leptons[179] cannot exist, or do not exist for a plausible harmonic interpretation.
- Any other harmonic interpretation, in the absence of exact mass and size values determined experimentally, would denote a superior formulation.
ci. The cosmological inflation and accelerated expansion phenomena emerge naturally within the EGM construct and are not presumed a priori as part of the modeling process.
cii. Dark matter / energy are not required by the EGM construct to predict experimentally verified results. In fact, it is mathematically demonstrated that dark influence upon H0 and T0 is less than 1(%).
ciii. The present values of the deceleration parameter and cosmological constant (q0 and Λ0 respectively) are derived and precisely quantified under the EGM construct.
civ. The SMoC interpretation of the sign associated with ZPF energy is opposite to the EGM construct. That is, the SMoC interprets ZPF energy as a positive quantity; EGM interprets it as a negative quantity.

[175] i.e. the harmonic pattern expressed in terms of St.
[176] The majority of energy contained within a PV spectrum occurs at the spectral limit; hence, the spectrum may be usefully approximated by a single conjugate wavefunction pair at the harmonic cut-off frequency. See: QE2,3 for further information.
[177] Refer to the following chapter.

[178] It is a noteworthy result that EGM is capable of producing the Neutron Mean Square (MS) charge radius as a positive quantity. Conventional techniques favor the non-intuitive form of a negative squared quantity.
[179] i.e. with lifetimes of < 10^-29(s).

cv. Where appropriate, due to the principle of mass-energy equivalence and the law of conservation of energy[180], the EGM spectrum may also be referred to as the PV spectrum.

Note: numerical simulations substantiating all claims exist in QE2-4.

[180] In terms of equilibration.

10 EGM Results Summary

10.1 Harmonic representation of fundamental particles

Particles may be classified according to a precise harmonic relationship amongst harmonic cut-off frequencies. The EGM harmonic representation of fundamental particles equation yields harmonic values relative to a designated reference particle. Harmonics not matching known particles in the Standard Model are assigned theoretical designations (Th.).
Exis. and Th. Particles[181]        Proton Harm.  Electron Harm.  Quark Harm.
Proton (p), Neutron (n)             St = 1        St = 1/2        St = 1/14
Elec. (e), Elec. Neutrino (νe)      2             1               1/7
L2, ν2 (Th. Lepton, Neutrino)       4             2               2/7
L3, ν3 (Th. Lepton, Neutrino)       6             3               3/7
Muon (μ), Muon Neut. (νμ)           8             4               4/7
L5, ν5 (Th. Lepton, Neutrino)       10            5               5/7
Tau (τ), Tau Neutrino (ντ)          12            6               6/7
Up, Down quark: (uq), (dq)          14            7               1
Strange quark (sq)                  28            14              2
Charm quark (cq)                    42            21              3
Bottom quark (bq)                   56            28              4
QB5 (Th. quark or Boson)            70            35              5
QB6 (Th. quark or Boson)            84            42              6
W Boson                             98            49              7
Z Boson                             112           56              8
Higgs Boson (H) (Th.)               126           63              9
Top quark (tq)                      140           70              10

Table[182] 4.5

[181] Although the newly predicted Leptons are within the kinetic range and therefore should have been experimentally detected, there are substantial explanations discussed in QE2,3.
[182] Appears similarly as Particle Summary Matrix 3.3 in QE3 and table (4.5) in QE2,4.

Note: Exis. (Existing), Th. (Theoretical), Harm. (Harmonics), Elec. (Electron) and Neut. (Neutrino).
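Because the three reference columns of table (4.5) differ only by the reference particle's own proton-referenced harmonic (2 for the electron, 14 for the up/down quark), the whole table can be regenerated from one column. The following sketch does exactly that; the particle list and proton-referenced values are transcribed from table (4.5), while the Fraction arithmetic is illustrative and is not the QE3 derivation itself:

```python
from fractions import Fraction

# Proton-referenced harmonics transcribed from table (4.5).
proton_harm = {
    "Proton/Neutron": 1, "Electron/e-neutrino": 2, "L2": 4, "L3": 6,
    "Muon": 8, "L5": 10, "Tau": 12, "Up/Down quark": 14,
    "Strange quark": 28, "Charm quark": 42, "Bottom quark": 56,
    "QB5": 70, "QB6": 84, "W Boson": 98, "Z Boson": 112,
    "Higgs Boson": 126, "Top quark": 140,
}

# Re-referencing: divide by the new reference particle's proton harmonic.
for name, h in proton_harm.items():
    electron_ref = Fraction(h, 2)     # electron-referenced column
    quark_ref = Fraction(h, 14)       # up/down-quark-referenced column
    print(f"{name:22s} {h:>4} {str(electron_ref):>5} {str(quark_ref):>5}")
```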

10.2 Periodic table of fundamental particles

The harmonic relationship amongst fundamental particles allows for their hierarchical arrangement into a representation mimicking the periodic table of atomic elements. Assuming QB5,6 to be Intermediate Vector Bosons (IVBs), EGM conjectures that the periodic table of elementary particles may be constructed as follows:

Table 4.9,

(i) Where, SC denotes coupling strength at 1(GeV)[183].
(ii) The values of St in table (4.9) utilize the proton as the reference particle. This is due to its Root-Mean-Square (RMS) charge radius and mass-energy being precisely known by physical measurement.

[183] James William Rohlf, Modern Physics from α to Z0, John Wiley & Sons, Inc. 1994.

10.3 EGM vs. SMoC

The following table displays a summary of the key mathematical facts determined via the EGM method in comparison to those obtained via the Standard Model of Cosmology (SMoC).

Key Mathematical Fact                                         SMoC  EGM
Dark matter / energy required                                 Yes   No
Max Cosmological Temp ~10^31(K)                               Yes   Yes
Big Bang Temperature = 0(K)                                   No    Yes
Unification with particle physics                             No    Yes
Relationship between H0 and T0                                No    Yes
H0 and T0 are calculable to high precision                    No    Yes
H0 and T0 were derived from particle physics                  No    Yes
Precise determination of distinct cosmological
evolutionary phases                                           No    Yes
Sign of the deceleration parameter is in agreement
with expectation                                              No    Yes
Prediction of accelerated cosmological expansion              No    Yes

Table 2.17,

where H0, T0, q0, Λ0 denote the present values of the Hubble constant, CMBR[184] temperature, deceleration parameter and cosmological constant respectively.

10.4 Cosmological evolution process

Figure (4.23) depicts the change in the temperature of the early Universe with time following the Big Bang. The EGM calculation for peak temperature predicts a Big Bang temperature of 0(K) and a peak temperature of ~10^31(K) immediately after the Big Bang.

[184] Cosmic Microwave Background Radiation.
Figure (4.26) depicts a Planck-like curve relationship between cosmological temperature and the Hubble constant. Increasing volumetric expansion results in vacuum energy diffusion, leading to a decrease in CMBR temperature.
Figure (2.4) depicts the rate of change of the Hubble constant over time, resulting in a curve defining the inflationary and expansive epochs of cosmic history. The peak of the curve marks the point at which cosmic inflation ends and expansion begins. The maximum cosmological temperature[185] line marks the instant at which the rate of change of the Hubble constant switches from negative to positive. The section of the curve above 0 marks a period of positive Hubble gradient[186] and below 0 marks a period of negative Hubble gradient[187]. Thus, EGM calculations are congruent with the physical observation that the space-time manifold is currently undergoing accelerated expansion. It is important to note that this feature is presently beyond the abilities of the SMoC to produce.
Figure (2.5) depicts the following:
1. Primordial Inflation (prior to the Big Bang): the Universe may be described as inverted and non-physical such that the interior of the Cosmos existed outside the exterior boundary RBH, in accordance with the Primordial Universe model described in QE4, such that:
i. The cosmological temperature T increases from negative infinity to zero.
ii. The rate of change of the Hubble constant over time dH/dt increases from negative infinity to -H^2.
iii. The magnitude of the Hubble constant[188] |H| decreases from positive infinity to H.
2. Thermal Inflation: the period from the instant of the Big Bang to the instant of maximum cosmological temperature such that:
iv. T increases from zero to its maximum value.
v. dH/dt increases from -H^2 to zero.
vi. |H| decreases from H to zero.
3. Hubble Expansion: the period from the maximum post-primordial[189] |H| to the present day such that:
vii. T decreases to its present day value.
viii. dH/dt decreases from its maximum physical value to its present day value.
ix. |H| decreases from its maximum physical[190] value to its present day value.

[185] i.e. an average value.
[186] i.e. the Universe inflated and expanded at an accelerated rate; continuing to the present day.
[187] The rate of inflation was negative until the point of maximum cosmological temperature; it then began to inflate and expand at a positive rate.
[188] This terminology is an abbreviated reference to the square-root of the magnitude of the rate of change of the Hubble constant over time, as indicated by the graph.
[189] i.e. bounded by the Cosmological Expansion phase.
[190] In this context, physical refers to the Hubble Expansion phase because it is experimentally observed.

Figure 4.23,

Figure 4.26,

Figure 2.4,

Figure 2.5,

Periodic Table of the Elements

Image: Spiral Galaxy

Bibliography 1

i. Leonardo da Vinci, translated by Irma Anne Richter, The Notebooks of Leonardo da Vinci. (Oxford University Press, 1998), pp. 276.
ii. J. Robinson, An Introduction to Early Greek Philosophy. (Boston: Houghton Mifflin, 1968), pp. 75.
iii. George Johnson, Fire in the Mind: Science, Faith and the Search for Order. (New York: Alfred A. Knopf Publishers, 1995).
iv. Bruce J. Hunt, The Maxwellians. (Ithaca and London: Cornell University Press, 1991).
v. A. A. Michelson and E. W. Morley, On the Relative Motion of the Earth and the Luminiferous Aether. Philos. Mag. S.5, 24 (151), pp. 449-463 (1887).
vi. H. B. G. Casimir and D. Polder, The Influence of Retardation on the London-van der Waals Forces. Physical Review, Vol. 73, Issue 4, pp. 360-372 (1948).
vii. S. K. Lamoreaux, Demonstration of the Casimir Force in the 0.6 to 6 μm Range. Physical Review Letters 78, 5-8 (1997).
viii. U. Mohideen and Anushree Roy, A Precision Measurement of the Casimir Force between 0.1 to 0.9 μm. Physical Review Letters, vol. 81 (no. 21), APS (1998).
ix. Benedetto, G.; Gavioso, R.; Albo, P.; Lago, S.; Ripa, D.; Spagnolo, R., Speed of Sound in Pure Water at Temperatures between 274 and 394(K) and at Pressures up to 90(MPa). International Journal of Thermophysics, Volume 26, Number 6, November 2005, pp. 1667-1680(14).
x. Albert Einstein, On a Heuristic Viewpoint Concerning the Production and Transformation of Light. Annalen der Physik, 17 (1905), pp. 132-148.
xi. Pisin Chen, Stanford Linear Accelerator Center, June 06, 2000, Violent Acceleration and the Event Horizon - Press Release. http://home.slac.stanford.edu/pressreleases/2000/20000606.htm
xii. Vesselin Petkov, Did 20th century physics have the means to reveal the nature of inertia and gravitation? arXiv:physics/0012025 v3 17 (December, 2000).
xiii. Eugene Parker, "Dynamics of the Interplanetary Gas and Magnetic Fields". The Astrophysical Journal (1958) 128: 664.
xiv. Pyotr Lebedev, "Untersuchungen über die Druckkräfte des Lichtes" [The Experimental Study of the Pressure of Light]. Annalen der Physik, 1901.
xv. Boyer, Timothy H., The Classical Vacuum. Scientific American, pp. 70-78 (August 1985).
xvi. (i) Ford, Brian J., The Controversy of Robert Brown and Brownian Movement. Biologist, 39 (3): 82-83 (June 1992). (ii) Brownian Movement in Clarkia Pollen: A Reprise of the first Observations. The Microscope, 40 (4): 235-241 (1992).
xvii. Compton, Arthur H., A Quantum Theory of the Scattering of X-rays by Light Elements. Physical Review, 21, 483-502 (1923).
xviii. B. Haisch & A. Rueda, On the relation between a Zero-Point-Field-Induced inertial effect and the Einstein-de Broglie formula. Physics Letters A, 268, 224 (2000).
xix. H. E. Puthoff, Polarizable-Vacuum (PV) Approach to General Relativity. Foundations of Physics, Vol. 32, No. 6 (June 2002).
xx. Philo, translated by F. H. Colson, Vol. IV: On the Migration of Abraham. (Cambridge, Mass.: Loeb Classic Library, Harvard University Press, No. 261, 1932).
xxi. Malhotra, R., Holman, M., and Ito, T., Chaos and stability in the Solar system. PNAS, 98(22): 12342-12343 (2001).
xxii. Planck, Max, "On the Law of Distribution of Energy in the Normal Spectrum". Annalen der Physik, vol. 4, p. 553 ff (1901).
xxiii. Lise Meitner & Otto Robert Frisch, Disintegration of Uranium by Neutrons: a New Type of Nuclear Reaction. Nature 143: 239-240 (1939).
xxiv. Sir Isaac Newton, Opticks. (Chicago: Encyclopedia Britannica [1955, c1952] Book III, Part I), pp. 520-521.
xxv. Marc G. Millis, Emerging Possibilities for Space Propulsion Breakthroughs. Interstellar Propulsion Society Newsletter, Vol. I, No. 1 (July 1, 1995).
xxvi. Hetherington, N. S., Encyclopedia of Cosmology. (Garland Publishing Inc., New York, 1993), (articles on Big Bang Cosmology and Origins of Primordial Nucleosynthesis).
xxvii. Sackmann, I.-Juliana; Arnold I. Boothroyd; Kathleen E. Kraemer (11/1993). "Our Sun. III. Present and Future". Astrophysical Journal 418: 457.
xxviii. Peter Coles, Inside Inflation: After the Big Bang. New Scientist, issue 2593 (March, 2007).
xxix. V. Rubin, W. K. Ford, Jr., "Rotation of the Andromeda Nebula from a Spectroscopic Survey of Emission Regions". Astrophysical Journal 159: 379 (1970).

Quinta Essentia: A Practical Guide to Space-Time Engineering - Part 1

ISBN 978-1-4092-0534-0