
From: http://www.reciprocalsystem.com/nbm/index.htm

Nothing But Motion


DEWEY B. LARSON
Volume I of a revised and enlarged edition of
THE STRUCTURE OF THE PHYSICAL UNIVERSE

Preface
1. Background
2. A Universe of Motion
3. Reference Systems
4. Radiation
5. Gravitation
6. The Reciprocal Relation
7. High Speed Motion
8. Motion in Time
9. Rotational Combinations
10. Atoms
11. Sub-atomic Particles
12. Basic Mathematical Relations
13. Physical Constants
14. Cosmic Elements
15. Cosmic Ray Decay
16. Cosmic Atom Building
17. Some Speculations
18. Simple Compounds
19. Complex Compounds
20. Chain Compounds
21. Ring Compounds
References

Preface
Nearly twenty years have passed since the first edition of this work was published. As I
pointed out in the preface of that first edition, my findings indicate the necessity for a
drastic change in the accepted concept of the fundamental relationship that underlies the
whole structure of physical theory: the relation between space and time. The physical
universe, I find, is not a universe of matter existing in a framework provided by space and
time, as seen by conventional science, but a universe of motion, in which space and time
are simply the two reciprocal aspects of motion, and have no other significance. What I
have done, in brief, is to determine the properties that space and time must necessarily
possess in a universe composed entirely of motion, and to express them in the form of a
set of postulates. I have then shown that development of the consequences of these
postulates by logical and mathematical processes, without making any further
assumptions or introducing anything from experience, defines, in detail, a complete
theoretical universe that coincides in all respects with the observed physical universe.
Nothing of this nature has ever been developed before. No previous theory has come
anywhere near covering the full range of phenomena accessible to observation with
existing facilities, to say nothing of dealing with the currently inaccessible, and as yet
observationally unknown, phenomena that must also come within the scope of a complete
theory of the universe. Conventional scientific theories accept certain features of the
observed physical universe as given, and then make assumptions on which to base
conclusions as to the properties of these observed phenomena. The new theoretical
system, on the other hand, has no empirical content. It bases all of its conclusions solely
on the postulated properties of space and time. The theoretical deductions from these
postulates provide for the existence of the various physical entities and phenomena
(matter, radiation, electrical and magnetic phenomena, gravitation, etc.), as well as
establishing the relations between these entities. Since all conclusions are derived from
the same premises, the theoretical system is a completely integrated structure, contrasting
sharply with the currently accepted body of physical theory, which, as described by
Richard Feynman, "a multitude of different parts and pieces that do not fit together
very well."
The last twenty years have added a time dimension to this already unique situation. The
acid test of any theory is whether it is still tenable after the empirical knowledge of the
subject is enlarged by new discoveries. As Harlow Shapley once pointed out, facts are the
principal enemies of theories. Few theories that attempt to cover any more than a severely
limited field are able to survive the relentless march of discovery for very long without
major changes or complete reconstruction. But no substantive changes have been made in
the postulates of this new system of theory in the nearly twenty years since the original
publication, years in which tremendous strides have been made in the enlargement of
empirical knowledge in many physical areas. Because the postulates and whatever can be
derived from them by logical and mathematical processes, without introducing anything
from observation or other external sources, constitute the entire system of theory, this
absence of substantive change in the postulates means that there has been no change
anywhere in the theoretical structure.
It has been necessary, of course, to extend the theory by developing more of the details,
in order to account for some of the new discoveries, but in most cases the nature of the
required extension was practically obvious as soon as the new phenomena or
relationships were identified. Indeed, some of the new discoveries, such as the existence
of exploding galaxies and the general nature of the products thereof, were actually
anticipated in the first published description of the theory, along with many phenomena
and relations that are still awaiting empirical verification. Thus the new theoretical
system is ahead of observation and experiment in a number of significant respects.
The scientific community is naturally reluctant to change its views to the degree required
by my findings, or even to open its journals to discussion of such a departure from
orthodox thought. It has been a slow and difficult task to get a significant amount of
consideration of the new structure of theory. However, those who do examine this new
theoretical structure carefully can hardly avoid being impressed by the logical and
consistent nature of the theoretical development. As a consequence, many of the
individuals who have made an effort to understand and evaluate the new system have not
only recognized it as a major addition to scientific knowledge, but have developed an
active personal interest in helping to bring it to the attention of others. In order to
facilitate this task an organization was formed some years ago with the specific objective
of promoting understanding and eventual acceptance of the new theoretical system, the
Reciprocal System of physical theory, as we are calling it. Through the efforts of this
organization, the New Science Advocates, Inc., and its individual members, lectures on
the new theory have been given at colleges and universities throughout the United States
and Canada. The NSA also publishes a newsletter, and has been instrumental in making
publication of this present volume possible.
At the annual conference of this organization at the University of Mississippi in
August 1977 I gave an account of the origin and early development of the
Reciprocal System of theory. It has been suggested by some of those who heard
this presentation that certain parts of it ought to be included in this present
volume in order to bring out the fact that the central idea of the new system of
theory, the general reciprocal relation between space and time, is not a product of
a fertile imagination, but a conclusion reached as the result of an exhaustive and
detailed analysis of the available empirical data in a number of the most basic
physical fields. The validity of such a relation is determined by its consequences,
rather than by its antecedents, but many persons may be more inclined to take the
time to examine those consequences if they are assured that the relation in
question is the product of a systematic inductive process, rather than something
extracted out of thin air. The following paragraphs from my conference address
should serve this purpose.

Many of those who come in contact with this system of theory are surprised to
find us talking of "progress" in connection with it. Some evidently look upon the
theory as a construction, which should be complete before it is offered for
inspection. Others apparently believe that it originated as some kind of a
revelation, and that all I had to do was to write it down. Before I undertake to
discuss the progress that has been made in the past twenty years, it is therefore
appropriate to explain just what kind of a thing the theory actually is, and why
progress is essential. Perhaps the best way of doing this will be to tell you
something about how it originated.
I have always been very much interested in the theoretical aspect of scientific
research, and quite early in life I developed a habit of spending much of my spare
time on theoretical investigations of one kind or another. Eventually I concluded
that these efforts would be more likely to be productive if I directed most of them
toward some specific goal, and I decided to undertake the task of devising a
method whereby the magnitudes of certain physical properties could be calculated
from their chemical composition. Many investigators had tackled this problem
previously, but the most that had ever been accomplished was to devise some
mathematical expressions whereby the effect of temperature and pressure on these
properties can be evaluated if certain arbitrary "constants" are assigned to each of
the various substances. The goal of a purely theoretical derivation, one which
requires no arbitrary assignment of numerical constants, has eluded all of these
efforts.
It may have been somewhat presumptuous on my part to select such an objective,
but, after all, if anyone wants to try to accomplish something new, he must aim
at something that others have not done. Furthermore, I did have one significant
advantage over my predecessors, in that I was not a professional physicist or
chemist. Most people would probably consider this a serious disadvantage, if not
a definite disqualification. But those who have studied the subject in depth are
agreed that revolutionary new discoveries in science seldom come from the
professionals in the particular fields involved. They are almost always the work of
individuals who might be considered amateurs, although they are more accurately
described by Dr. James B. Conant as "uncommitted investigators." The
uncommitted investigator, says Dr. Conant, is one who does the investigation
entirely on his own initiative, without any direction by or responsibility to anyone
else, and free from any requirement that the work must produce results.
Research is, in some respects, like fishing. If you make your living as a fisherman,
you must fish where you know that there are fish, even though you also know that
those fish are only small ones. No one but the amateur can take the risk of going
into completely unknown areas in search of a big prize. Similarly, the professional
scientist cannot afford to spend twenty or thirty of the productive years of his life
in pursuit of some goal that involves a break with the accepted thought of his
profession. But we uncommitted investigators are primarily interested in the
fishing, and while we like to make a catch, this is merely an extra dividend. It is
not essential as it is for those who depend on the catch for their livelihood. We are
the only ones who can afford to take the risks of fishing in unknown waters. As
Dr. Conant puts it,
Few will deny that it is relatively easy in science to fill in the details of a
new area, once the frontier has been crossed. The crucial event is turning
the unexpected corner. This is not given to most of us to do... By
definition the unexpected corner cannot be turned by any operation that is
planned... If you want advances in the basic theories of physics and
chemistry in the future comparable to those of the last two centuries, then
it would seem essential that there continue to be people in a position to
turn unexpected corners. Such a man I have ventured to call the
uncommitted investigator.
As might be expected, the task that I had undertaken was a long and difficult one,
but after about twenty years I had arrived at some interesting mathematical
expressions in several areas, one of the most intriguing of which was an
expression for the inter-atomic distance in the solid state in terms of three
variables clearly related to the properties portrayed by the periodic table of the
elements. But a mathematical expression, however accurate it may be, has only a
limited value in itself. Before we can make full use of the relationship that it
expresses, we must know something as to its meaning. So my next objective was
to find out why the mathematics took this particular form. I studied these
expressions from all angles, analyzing the different terms, and investigating all of
the hypotheses as to their origin that I could think of. This was a rather
discouraging phase of the project, as for a long time I seemed to be merely
spinning my wheels and getting nowhere. On several occasions I decided to
abandon the entire project, but in each case, after several months of inactivity I
thought of some other possibility that seemed worth investigating, and I returned
to the task. Eventually it occurred to me that, when expressed in one particular
form, the mathematical relation that I had formulated for the inter-atomic distance
would have a simple and logical explanation if I merely assumed that there is a
general reciprocal relation between space and time.
My first reaction to this thought was the same as that of a great many others. The
idea of the reciprocal of space, I said to myself, is absurd. One might as well talk
of the reciprocal of a pail of water, or the reciprocal of a fencepost. But on further
consideration I could see that the idea is not so absurd after all. The only relation
between space and time of which we have any actual knowledge is motion, and in
motion space and time do have a reciprocal relation. If one airplane travels twice
as fast as another, it makes no difference whether we say that it travels twice as
far in the same time, or that it travels the same distance in half the time. This is
not necessarily a general reciprocal relation, but the fact that it is a reciprocal
relation gives the idea of a general relation a considerable degree of plausibility.
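To restate the airplane illustration in symbols (nothing is added here to the argument; this is only a compact form of the preceding sentences): writing speed as the ratio of space traversed to time elapsed, v = s/t, a doubling of the speed can be expressed in either of two equivalent ways,

2v = 2s/t = s/(t/2),

that is, twice the space in the same time, or the same space in half the time. In the one relation between space and time that we actually observe, motion, the two quantities thus enter as reciprocals of one another.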
So I took the next step, and started considering what the consequences of a
reciprocal relation of this nature might be. Much to my surprise, it was
immediately obvious that such a relation leads directly to simple and logical
answers to no less than a half dozen problems of long standing in widely
separated physical fields. Those of you who have never had occasion to study the
foundations of physical theory in depth probably do not realize what an
extraordinary result this actually is. Every theory of present-day physical science
has been formulated to apply specifically to some one physical field, and not a
single one of these theories can provide answers to major questions in any other
field. They may help to provide these answers but in no case can any of them
arrive at such an answer unassisted. Yet here in the reciprocal postulate we find a
theory of the relation between space and time that leads directly, without any
assistance from any other theoretical assumptions or from empirical facts, to
simple and logical answers to many different problems in many different fields.
This is something completely unprecedented. A theory based on the reciprocal
relation accomplishes on a wholesale scale what no other theory can do at all.
To illustrate what I am talking about, let us consider the recession of the distant
galaxies. As most of you know, astronomical observations indicate that the most
distant galaxies are receding from the earth at speeds which approach the speed of
light. No conventional physical theory can explain this recession. Indeed, even if
you put all of the theories of conventional physics together, you still have no
explanation of this phenomenon. In order to arrive at any such explanation the
astronomers have to make some assumption, or assumptions, specifically
applicable to the recession itself. The current favorite, the Big Bang theory,
assumes a gigantic explosion at some hypothetical singular point in the past in
which the entire contents of the universe were thrown out into space at their
present high speeds. The rival Steady State theory assumes the continual creation
of new matter, which in some unspecified way creates a pressure that pushes the
galaxies apart at the speeds now observed. But the reciprocal postulate, an
assumption that was made to account for the magnitudes of the inter-atomic
distances in the solid state, gives us an explanation of the galactic recession
without the necessity of making any assumptions about that recession or about the
galaxies that are receding. It is not even necessary to arrive at any conclusion as to what
a galaxy is. Obviously it must be something (or its existence could not be recognized), and as
long as it is something, the reciprocal relation tells us that it must be moving
outward away from our location at the speed of light, because the location which
it occupies is so moving. On the basis of this relation, the spatial separation
between any two physical locations, the "elapsed distance," as we may call it, is
increasing at the same rate as the elapsed time.
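Expressed as a formula (a restatement of the preceding sentence, using c for the speed of light, one unit of space per unit of time):

\Delta s = c \, \Delta t,

the separation between any two such locations grows by one unit of space for each unit of elapsed time, which is to say that a location, and anything that merely occupies it, moves outward from our own location at the speed of light.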
Of course, any new answer to a major question that is provided by a new theory
leaves some subsidiary questions that require further consideration, but the road to
the resolution of these subsidiary issues is clear once the primary problem is
overcome. The explanation of the recession, the reason why the most distant
galaxies recede with the speed of light, leaves us with the question as to why the
closer galaxies have lower recession speeds, but the answer to this question is
obvious, since we know that gravitation exerts a retarding effect which is greater
at the shorter distances.
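A schematic way to put this answer (the notation here is mine, introduced only for illustration): if the outward progression proceeds at the full speed c, and gravitation contributes an inward motion v_g(d) that is larger at the smaller distances d, then the observed recession speed is roughly

v_{obs}(d) = c - v_g(d),

small for the nearby galaxies, where v_g(d) is large, and approaching c for the most distant ones, where the gravitational effect becomes negligible.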
Another example of the many major issues of long standing that are resolved
almost automatically by the reciprocal postulate is the mechanism of the
propagation of electromagnetic radiation. Here, again, no conventional physical
theory is able to give us an explanation. As in the case of the galactic recession, it
is necessary to make some assumption about the radiation itself before any kind
of a theory can be formulated, and in this instance conventional thinking has not
even been able to produce an acceptable hypothesis. Newton’s assumption of light
corpuscles traveling in the manner of bullets from a gun, and the rival hypothesis
of waves in a hypothetical ether, were both eventually rejected. There is a rather
general impression that Einstein supplied an explanation, but Einstein himself
makes no such claim. In one of his books he points out what a difficult problem
this actually is, and he concludes with this statement:
Our only way out seems to be to take for granted the fact that space has
the physical property of transmitting electromagnetic waves, and not to
bother too much about the meaning of this statement.
So, as matters now stand, conventional science has no explanation at all for this
fundamental physical phenomenon. But here, too, the reciprocal postulate gives
us a simple and logical explanation. It is, in fact, the same explanation that
accounts for the recession of the distant galaxies. Here, again, there is no need to
make any assumption about the photon itself. It is not even necessary to know
what a photon is. As long as it is something, it is carried outward at the speed of
light by the motion of the spatial location which it occupies.
No more than a minimum amount of consideration was required in order to see
that the answers to a number of other physical problems of long standing similarly
emerged easily and naturally on application of the reciprocal postulate. This was
clearly something that had to be followed up. No investigator who arrived at this
point could stop without going on to see just how far the consequences of the
reciprocal relation would extend. The results of that further investigation
constitute what we now know as the Reciprocal System of theory. As I have
already said, it is not a construction, and not a revelation. Now you can see just
what it is. It is nothing more nor less than the total of the consequences that result
if there is a general reciprocal relation between space and time.
As matters now stand, the details of the new theoretical system, so far as they
have been developed, can be found only in my publications and those of my
associates, but the system of theory is not coextensive with what has thus far been
written about it. In reality, it consists of any and all of the consequences that
follow when we adopt the hypothesis of a general reciprocal relation between
space and time. A general recognition of this point would go a long way toward
meeting some of our communication problems. Certainly no one should have any
objection to an investigation of the consequences of such a hypothesis. Indeed,
anyone who is genuinely interested in the advancement of science, and who
realizes the unprecedented scope of these consequences, can hardly avoid wanting
to find out just how far they actually extend. As a German reviewer expressed it:

Only a careful investigation of all of the author’s deliberations can show
whether or not he is right. The official schools of natural philosophy
should not shun this (considerable, to be sure) effort. After all, we are
concerned here with questions of fundamental significance.
Yet, as all of you undoubtedly know, the scientific community, particularly that
segment of the community that we are accustomed to call the Establishment, is
very reluctant to permit general discussion of the theory in the journals and in
scientific meetings. They are not contending that the conclusions we have reached
are wrong; they are simply trying to ignore them, and hope that they will
eventually go away. This is, of course, a thoroughly unscientific attitude, but since
it exists we have to deal with it, and for this purpose it will be helpful to have
some idea of the thinking that underlies the opposition. There are some
individuals who simply do not want their thinking disturbed, and are not open to
any kind of an argument. William James, in one of his books, reports a
conversation that he had with a prominent scientist concerning what we now call
ESP. This man, says James, contended that even if ESP is a reality, scientists
should band together to keep that fact from becoming known, since the existence
of any such thing would cause havoc in the fundamental thought of science. Some
individuals no doubt feel the same way about the Reciprocal System, and so far as
these persons are concerned there is not much that we can do. There is no
argument that can counter an arbitrary refusal to consider what we have to offer.
In most cases, however, the opposition is based on a misunderstanding of our
position. The issue between the supporters of rival scientific theories normally is:
Which is the better theory? The basic question involved is which theory agrees
more closely with the observations and measurements in the physical areas to
which the theories apply, but since all such theories are specifically constructed to
fit the observations, the decision usually has to rest to a large degree on
preferences and prejudices of a philosophical or other non-scientific nature. Most
of those who encounter the Reciprocal System of theory for the first time take it
for granted that we are simply raising another issue, or several issues, of the same
kind. The astronomers, for instance, are under the impression that we are
contending that the outward progression of the natural reference system is a better
explanation of the recession of the distant galaxies than the Big Bang. But this is
not our contention at all. We have found that we need to postulate a general
reciprocal relation between space and time in order to explain certain fundamental
physical phenomena that cannot be explained by any conventional physical
theory. But once we have postulated this relationship it supplies simple and
logical answers for the major problems that arise in all physical areas. Thus our
contention is not that we have a better assortment of theories to replace the Big
Bang and other specialized theories of limited scope, but that we have a general
theory that applies to all physical fields. These theories of limited applicability are
therefore totally unnecessary.
While this present volume is described as the first unit of a "revised and enlarged"
edition, the revisions are actually few and far between. As stated earlier, there have been
no substantive changes in the postulates since they were originally formulated. Inasmuch
as the entire structure of theory has been derived from these postulates by deducing their
logical and mathematical consequences, the development of theory in this new edition is
essentially the same as in the original, the only significant
difference being in a few places where points that were originally somewhat vague have
been clarified, or where more direct lines of development have been substituted for the
earlier derivations. However, many problems are encountered in getting an
unconventional work of this kind into print, and in order to make the original publication
possible at all it was necessary to limit the scope of the work, both as to the number of
subjects covered and as to the extent to which the details of each subject were developed.
For this reason the purpose of this new edition is not only to bring the theoretical
structure up to date by incorporating all of the advances that have been made in the last
twenty years, but also to present the portions of the original results--approximately half of
the total--that had to be omitted from the first edition.
Because of this large increase in the size of the work, the new edition will be issued in
several volumes. This first volume is self-contained. It develops the basic laws and
principles applicable to physical phenomena in general, and defines the entire chain of
deductions leading from the fundamental postulates to each of the conclusions that are
reached in the various physical areas that are covered. The subsequent volumes will apply
the same basic laws and principles to a variety of other physical phenomena. It has
seemed advisable to change the order of presentation to some extent, and as a result a
substantial amount of the material omitted from the first edition has been included in this
volume, whereas some subjects, such as electric and magnetic phenomena, that were
discussed rather early in the first edition have been deferred to the later volumes.
For the benefit of those who do not have access to the first edition (which is out of print)
and wish to examine what the Reciprocal System of theory has to say about these
deferred items before the subsequent volumes are published, I will say that brief
discussions of some of these subjects are contained in my 1965 publication, New Light on
Space and Time, and some further astronomical information, with particular reference to
the recently discovered compact astronomical objects, can be found in Quasars and
Pulsars, published in 1971.
It will not be feasible to acknowledge all of the many individual contributions that have
been made toward developing the details of the theoretical system and bringing it to the
attention of the scientific community. However, I will say that I am particularly indebted
to the founders of the New Science Advocates, Dr. Douglas S. Cramer, Dr. Paul F. de
Lespinasse, and Dr. George W. Hancock; to Dr. Frank A. Anderson, the current President
of the NSA, who did the copy editing for this volume, along with his many other
contributions; and to the past and present members of the NSA Executive Board: Steven
Berline, Ronald F. Blackburn, Frances Boldereff, James N. Brown, Jr., Lawrence
Denslow, Donald T. Elkins, Rainer Huck, Todd Kelso, Richard L. Long, Frank H. Meyer,
William J. Mitchell, Harold Norris, Carla Rueckert, Ronald W. Satz, George Windolph,
and Hans F. Wuenscher.

D. B. Larson

CHAPTER 1

Background
To the man of the Stone Age the world in which he lived was a world of spirits. Powerful
gods hurled shafts of lightning, threw waves against the shore, and sent winter storms
howling down out of the north. Lesser beings held sway in the forests, among the rocks,
and in the flowing streams. Malevolent demons, often in league with the mighty rulers of
the elements, threatened the human race from all directions, and only the intervention of
an assortment of benevolent, but capricious, deities made man’s continued existence
possible at all.
This hypothesis that material phenomena are direct results of the actions of superhuman
beings was the first attempt to define the fundamental nature of the physical universe: the
first general physical concept. The scientific community currently regards it as a juvenile
and rather ridiculous attempt at an explanation of nature, but actually it was plausible
enough to remain essentially unchallenged for thousands of years. In fact, it is still
accepted, in whole or in part, by a very substantial proportion of the population of the
world. Such widespread acceptance is not as inexplicable as it may seem to the
scientifically trained mind; it has been achieved only because the "spirit" concept does
have some genuine strong points. Its structure is logical. If one accepts the premises he
cannot legitimately contest the conclusions. Of course, these premises are entirely ad hoc,
but so are many of the assumptions of modern science. The individual who accepts the
idea of a "nuclear force" without demur is hardly in a position to be very critical of those
who believe in the existence of "evil spirits."
A special merit of this physical theory based on the "spirit" concept is that it is a
comprehensive theory; it encounters no difficulties in assimilating new discoveries, since
all that is necessary is to postulate some new demon or deity. Indeed, it can even deal
with discoveries not yet made, simply by providing a "god of the unknown." But even
though a theory may have some good features, or may have led to some significant
accomplishments, this does not necessarily mean that it is correct, nor that it is adequate
to meet current requirements. Some three or four thousand years ago it began to be
realized by the more advanced thinkers that the "spirit" concept had some very serious
weaknesses. The nature of these weaknesses is now well understood, and no extended
discussion of them is necessary. The essential point to be recognized is that at a particular
stage in history the prevailing concept of the fundamental nature of the universe was
subjected to critical scrutiny, and found to be deficient. It was therefore replaced by a
new general physical concept.
This was no minor undertaking. The "spirit" concept was well entrenched in the current
pattern of thinking, and it had powerful support from the "Establishment," which is
always opposed to major innovations. In most of the world as it then existed such a break
with accepted thought would have been impossible, but for some reason an atmosphere
favorable to critical thinking prevailed for a time in Greece and neighboring areas, and
this profound alteration of the basic concept of the universe was accomplished there. The
revolution in thought came slowly and gradually. Anaxagoras, who is sometimes called
the first scientist, still attributed Mind to all objects, inanimate as well as animate. If a
rock fell from a cliff, his explanation was that this action was dictated by the Mind of the
rock. Even Aristotle retained the "spirit" concept to some degree. His view of the fall of
the rock was that this was merely one manifestation of a general tendency of objects to
seek their "natural place," and he explained the acceleration during the fall as a result of
the fact "that the falling body moved more jubilantly every moment because it found
itself nearer home."1 Ultimately, however, these vestiges of the "spirit" concept
disappeared, and a new general concept emerged, one that has been the basis of all
scientific work ever since.
According to this new concept, we live in a universe of matter: one that consists of
material "things" existing in a setting provided by space and time. With the benefit of this
conceptual foundation, three thousand years of effort by generation after generation of
scientists have produced an immense systematic body of knowledge about the physical
universe, an achievement which, it is safe to say, is unparalleled elsewhere in human life.
In view of this spectacular record of success, which has enabled the "matter" concept to
dominate the organized thinking of mankind ever since the days of the ancient Greeks, it
may seem inconsistent to suggest that this concept is not adequate to meet present-day
needs, but the ultimate fate of any scientific concept or theory is determined not by what
it has done but by what, if anything, it now fails to do. The graveyard of science is full of
theories that were highly successful in their day, and contributed materially to the
advance of scientific knowledge while they enjoyed general acceptance: the caloric
theory, the phlogiston theory, the Ptolemaic theory of astronomy, the "billiard ball"
theory of the atom, and so on. It is appropriate, therefore, that we should, from time to
time, subject all of our basic scientific ideas to a searching and critical examination for
the purpose of determining whether or not these ideas, which have served us well in the
past, are still adequate to meet the more exacting demands of the present.
Once we subject the concept of a universe of matter to a critical scrutiny of this kind it is
immediately obvious, not only that this concept is no longer adequate for its purpose, but
that modern discoveries have completely demolished its foundations. If we live in a
world of material "things" existing in a framework provided by space and time, as
envisioned in the concept of a universe of matter, then matter in some form is the
underlying feature of the universe: that which persists through the various physical
processes. This is the essence of the concept. For many centuries the atom was accepted
as the ultimate unit, but when particles smaller (or at least less complex) than atoms were
discovered, and it was found that under appropriate conditions atoms would disintegrate
and emit such particles in the process, the sub-atomic particles took over the role of the
ultimate building blocks. But we now find that these particles are not permanent building
blocks either.
For instance, the neutron, one of the constituents, from which the atom is currently
supposed to be constructed, spontaneously separates into a proton, an electron, and a
neutrino. Here, then, one of the "elementary particles," the supposedly basic and
unchangeable units of matter, transforms itself into other presumably basic and
unchangeable units. In order to save the concept of a universe of matter, strenuous efforts
are now being made to explain events of this kind by postulating still smaller "elementary
particles" from which the known sub-atomic particles could be constructed. At the
moment, the theorists are having a happy time constructing theoretical "quarks" or other
hypothetical sub-particles, and endowing these products of the imagination with an
assortment of properties such as "charm," "color," and so on, to enable them to fit the
experimental data.
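For reference, the neutron decay mentioned at the beginning of this paragraph is, in the standard notation of present-day physics, free-neutron beta decay,

n \rightarrow p + e^- + \bar{\nu}_e,

with a lifetime of roughly fifteen minutes; this is the observed transformation that any scheme of permanent "elementary particles" has to accommodate.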
But this descent to a lower stratum of physical structure could not be accomplished, even
in the realm of pure hypothesis, without taking another significant step away from
reality. At the time the atomic theory was originally proposed by Democritus and his
contemporaries, the atoms of which they conceived all physical structures to be
composed were entirely hypothetical, but subsequent observations and experiments have
revealed the existence of units of matter that have exactly the properties that are
attributed to the atoms by the atomic theory. As matters now stand, therefore, this theory
can legitimately claim to represent reality. But there are no observed particles that have
all of the properties that are required in order to qualify as constituents of the observed
atoms.
The theorists have therefore resorted to the highly questionable expedient of assuming,
entirely ad hoc, that the observed sub-atomic particles (that is, particles less complex than
atoms) are the atomic constituents, but have different properties when they are in the
atoms than those they are found to have wherever they can be observed independently.
This is a radical departure from the standard scientific practice of building theories on
solid factual foundations, and its legitimacy is doubtful, to say the least, but the architects
of the "quark" theories are going a great deal farther, as they are cutting loose from
objective reality altogether, and building entirely on assumptions. Unlike the hypothetical
"constituents" of the atoms, which are observed sub-atomic particles with hypothetical
sets of properties instead of the observed properties, the quarks are hypothetical particles
with hypothetical properties.
The unreliability of conclusions reached by means of such forced and artificial
constructions should be obvious, but it is not actually necessary to pass judgment on this
basis, because irrespective of how far the subdividing of matter into smaller and smaller
particles is carried, the theory of "elementary particles" of matter cannot account for the
observed existence of processes whereby matter is converted into non-matter, and vice
versa. This interconvertibility is positive and direct proof that the "matter" concept is
wrong; that the physical universe is not a universe of matter. There clearly must be some
entity more basic than matter, some common denominator underlying both matter and
non-material phenomena.
Such a finding, which makes conventional thinking about physical fundamentals
obsolete, is no more welcome today than the "matter" concept was in the world of
antiquity. Old habits of thought, like old shoes, are comfortable, and the automatic
reaction to any suggestion of a major change in basic ideas is resistance, if not outright
resentment. But if scientific progress is to continue, it is essential not only to generate
new ideas to meet new problems, but also to be equally diligent in discarding old ideas
that have outlived their usefulness.
There is no actual need for any additional evidence to confirm the conclusion that the
currently accepted concept of a universe of matter is erroneous. The observed
interconvertibility of matter and non-matter is in itself a complete and conclusive
refutation of the assertion that matter is basic. But when the inescapable finality of the
answer that we get from this interconvertibility forces recognition of the complete
collapse of the concept of a universe of matter, and we can no longer accept it as valid, it
is easy to see that this concept has many other shortcomings that should have alerted the
scientific community to question its validity long ago. The most obvious weakness of the
concept is that the theories that are based upon it have not been able to keep abreast of
progress in the experimental and observational fields. Major new physical discoveries
almost invariably come as surprises, "unexpected and even unimagined surprises,"2 in the
words of Richard Schlegel. They were not anticipated on theoretical grounds, and cannot
be accommodated to existing theory without some substantial modification of previous
ideas. Indeed, it is doubtful whether any modification of existing theory will be adequate
to deal with some of the more recalcitrant phenomena now under investigation.
The current situation in particle physics, for instance, is admittedly chaotic. The outlook
might be different if the new information that is rapidly accumulating in this field were
gradually clearing up the situation, but in fact it merely seems to deepen the existing
crisis. If anything in this area of confusion is clear by this time it is that the "elementary
particles" are not elementary. But the basic concept of a universe of matter requires the
existence of some kind of an elementary unit of matter. If the particles that are now
known are not elementary units, as is generally conceded, then, since no experimental
confirmation is available for the hypothesis of sub-particles, the whole theory of the
structure of matter, as it now stands, is left without visible support.
Another prime example of the inability of present-day theories based on the "matter"
concept to cope with new knowledge of the universe is provided by some of the recent
discoveries in astronomy. Here the problem is an almost total lack of any theoretical
framework to which the newly observed phenomena can be related. A book published a
few years ago that was designed to present all of the significant information then
available about the astronomical objects known as quasars contains the following
statement, which is still almost as appropriate as when it was written:
It will be seen from the discussion in the later chapters that there are so many conflicting
ideas concerning theory and interpretation of the observations that at least 95 percent of
them must indeed be wrong. But at present no one knows which 95 percent.3
After three thousand years of study and investigation on the basis of theories founded on
the "matter" concept we are entitled to something more than this. Nature has a habit of
confronting us with the unexpected, and it is not very reasonable to expect the currently
prevailing structure of theory to give us an immediate and full account of all details of a
new area, but we should at least be able to place the new phenomena in their proper
places with respect to the general framework, and to account for their major aspects
without difficulty.
The inability of present-day theories to keep up with experimental and observational
progress along the outer boundaries of science is the most obvious and easily visible sign
of their inadequacies, but it is equally significant that some of the most basic physical
phenomena are still without any plausible explanations. This embarrassing weakness of
the current theoretical structure is widely recognized, and is the subject of comment from
time to time. For instance, a press report of the annual meeting of the American Physical
Society in New York in February 1969 contains this statement:
A number of very distinguished physicists who spoke reminded us of long-standing
mysteries, some of them problems so old that they are becoming forgotten—pockets of
resistance left far behind the advancing frontiers of physics.4
Gravitation is a good example. It is unquestionably fundamental, but conventional theory
cannot explain it. As has been said, it "may well be the most fundamental and least
understood of the interactions."5 When a book or an article on this subject appears, we
almost invariably find the phenomenon characterized, either in the title or in the
introductory paragraphs, as a "mystery," an "enigma," or a "riddle."
But what is gravity, really? What causes it? Where does it come from? How did it get
started? The scientist has no answers . . . in a fundamental sense, it is still as mysterious
and inexplicable as it ever was, and it seems destined to remain so. (Dean E.
Wooldridge)6
Electromagnetic radiation, another of the fundamental physical phenomena, confronts us
with a different, but equally disturbing, problem. Here there are two conflicting
explanations of the phenomenon, each of which fits the observed facts in certain areas but
fails in others: a paradox which, as James B. Conant observed, "once seemed
intolerable," although scientists have now "learned to live with it."7 This, too, is a "deep
mystery,"8 as Richard Feynman calls it, at the very base of the theoretical structure.
There is a widespread impression that Einstein solved the problem of the mechanism of
the propagation of radiation and gave a definitive explanation of the phenomenon. It
may be helpful, therefore, to note just what Einstein did have to say on this subject, not
only as a matter of clarifying the present status of the radiation problem itself, but to
illustrate the point made by P. W. Bridgman when he observed that many of the ideas and
opinions to which the ordinary scientist subscribes "have not been thought through
carefully but are held in the comfortable belief . . . that some one must have examined
them at some time."9
In one of his books Einstein points out that the radiation problem is an extremely difficult
one, and he concludes that:
Our only way out seems to be to take for granted the fact that space has the physical
property of transmitting electromagnetic waves, and not to bother too much about the
meaning of this statement. 10
Here, in this statement, Einstein reveals (unintentionally) just what is wrong with the
prevailing basic physical theories, and why a revision of the fundamental concepts of
those theories is necessary. Far too many difficult problems have been evaded by simply
assuming an answer and ―taking it for granted.‖ This point is all the more significant
because the shortcomings of the "matter" concept and the theories that it has generated
are by no means confined to the instances where no plausible explanations of the
observed phenomena have been produced. In many other cases where explanations of one
kind or another have actually been formulated, the validity of these explanations is
completely dependent on ad hoc assumptions that conflict with observed facts.
The nuclear theory of the atom is typical. Inasmuch as it is now clear that the atom is not
an indivisible unit, the concept of a universe of matter demands that it be constructed of
"elementary" material units of some kind. Since the observed sub-atomic particles are the
only known candidates for this role it has been taken for granted, as mentioned earlier,
that the atom is a composite of sub-atomic particles. Consideration of the various possible
combinations has led to the hypothesis that is now generally accepted: an atom in which
there is a nucleus composed of protons and neutrons, surrounded by some kind of an
arrangement of electrons.
But if we undertake a critical examination of this hypothesis it is immediately apparent
that there are direct conflicts with known physical facts. Protons are positively charged,
and charges of the same sign repel each other. According to the established laws of
physics, therefore, a nucleus composed wholly or partly of protons would immediately
disintegrate. This is a cold, hard physical fact, and there is not the slightest evidence that
it is subject to abrogation or modification under any circumstances or conditions.
Furthermore, the neutron is observed to be unstable, with a lifetime of only about 15
minutes, and hence this particle fails to meet one of the most essential requirements of a
constituent of a stable atom: the requirement of stability. The status of the electron as an
atomic constituent is even more dubious. The properties, which it must have to play such
a role, are altogether different from the properties of the observed electron. Indeed, as
Herbert Dingle points out, we can deal with the electron as a constituent of the atom only
if we ascribe to it "properties not possessed by any imaginable objects at all."11
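The repulsion argument above can be given a sense of scale with a standard textbook estimate (the figures are not part of the author’s text): by Coulomb’s law, two protons separated by a typical nuclear distance of about 10^{-15} m repel one another with a force of approximately

F = k e^2 / r^2 = (8.99 \times 10^9 \ \mathrm{N\,m^2/C^2})(1.60 \times 10^{-19} \ \mathrm{C})^2 / (10^{-15} \ \mathrm{m})^2 \approx 230 \ \mathrm{N},

an enormous force to be acting on a single particle, and the reason conventional theory must postulate a compensating "nuclear force" of attraction within the nucleus.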
A fundamental tenet of science is that the facts of observation and experiment are the
scientific court of last resort; they pronounce the final verdict irrespective of whatever
weight may be given to other considerations. As expressed by Richard Feynman:
If it (a proposed new law or theory) disagrees with experiment it is wrong. In that simple
statement is the key to science.... That is all there is to it.12
The situation with respect to the nuclear theory is perfectly clear. The hypothesis of an
atomic nucleus composed of protons and neutrons is in direct conflict with the observed
properties of electric charges and the observed behavior of the neutron, while the
conflicts between the atomic version of the electron and physical reality are numerous
and very serious. According to the established principles of science, and following the
rule that Feynman laid down in the foregoing quotation, the nuclear theory should have
been discarded summarily years ago.
But here we see the power of the currently accepted fundamental physical concept. The
concept of a universe of matter demands a "building block" theory of the atom: a theory
in which the atom (since it is not an indivisible building block itself) is a "thing"
composed of "parts" which, in turn, are "things" of a lower order. In the absence of any
way of reconciling such a theory with existing physical knowledge, either the basic
physical concept or standard scientific procedures and tests of validity had to be
sacrificed. Since abandonment of the existing basic concept of the nature of the universe
is essentially unthinkable in the ordinary course of theory construction, sound scientific
procedure naturally lost the decision. The conflicts between the nuclear theory and
observation were arbitrarily eliminated by means of a set of ad hoc assumptions. In order
to prevent the break-up of the hypothetical nucleus by reason of the repulsion between
the positive charges of the individual protons it was simply assumed that there is a
"nuclear force" of attraction, which counterbalances the known force of repulsion. And in
order to build a stable atom out of unstable particles it was assumed (again purely ad hoc)
that the neutron, for some unknown reason, is stable within the nucleus. The more
difficult problem of inventing some way of justifying the electron as an atomic
constituent is currently being handled by assuming that the atomic electron is an entity
that transcends reality. It is unrelated to anything that has ever been observed, and is itself
not capable of being observed: an "abstract thing, no longer intuitable in terms of the
familiar aspects of everyday experience,"13 as Henry Margenau describes it.
What the theorists’ commitment to the "matter" concept has done in this instance is to
force them to invent the equivalent of the demons that their primitive ancestors called
upon when similarly faced with something that they were unable to explain. The
mysterious "nuclear force" might just as well be called the "god of the nucleus." Like an
ancient god, it was designed for one particular purpose; it has no other functions; and
there is no independent confirmation of its existence. In effect, the assumptions that have
been made in an effort to justify retention of the "matter" concept have involved a partial
return to the earlier "spirit" concept of the nature of the universe.
Since it is now clear that the concept of a universe of matter is not valid, one may well
ask: How has it been possible for physical science to make such a remarkable record of
achievement on the basis of an erroneous fundamental concept? The answer is that only a
relatively small part of current physical theory is actually derived from the general
physical principles based on that fundamental concept. "A scientific theory," explains R.
B. Braithwaite, "is a deductive system in which observable consequences logically follow
from the conjunction of observed facts with the set of the fundamental hypotheses of the
system."14 But modern physical theory is not one deductive system of the kind described
by Braithwaite; it is a composite made up of a great many such systems. As expressed by
Richard Feynman:
Today our theories of physics, the laws of physics, are a multitude of different parts and
pieces that do not fit together very well. We do not have one structure from which all is
deduced.15
One of the principal reasons for this lack of unity is that modern physical theory is a
hybrid structure, derived from two totally different sources. The small-scale theories
applicable to individual phenomena, which constitute the great majority of the "parts and
pieces," are empirical generalizations derived by inductive reasoning from factual
premises. At one time it was rather confidently believed that the accumulation of
empirically derived knowledge then existing, the inductive science commonly associated
with the name of Newton, would eventually be expanded to encompass the whole of the
universe. But when observation and experiment began to penetrate what we may call the
far-out regions, the realms of the very small, the very large, and the very fast, Newtonian
science was unable to keep pace. As a consequence, the construction of basic physical
theory fell into the hands of a school of scientists who contend that inductive methods are
incapable of arriving at general physical principles. "The axiomatic basis of theoretical
physics cannot be an inference from experience, but must be free invention,"16 was
Einstein’s dictum.
The result of the ascendancy of this ―inventive‖ school of science has been to split
physical science into two separate parts. As matters now stand, the subsidiary principles,
those that govern individual physical phenomena and the low-level interactions, are
products of induction from factual premises. The general principles, those that apply to
large scale phenomena or to the universe as a whole, are, as Einstein describes them,
"pure inventions of the human mind." Where the observations are accurate, and the
generalizations are justified, the inductively derived laws and theories are correct, at least
within certain limits. The fact that they constitute by far the greater part of the current
structure of physical thought therefore explains why physical science has been so
successful in practice. But where empirical data is inadequate or unavailable, present-day
science relies on deductions from the currently accepted general principles, the products
of pure invention, and this is where physical theory has gone astray. Nature does not
agree with these "free inventions of the human mind."
This disagreement with nature should not come as a surprise. Any careful consideration
of the situation will show that "free invention" is inherently incapable of arriving at the
correct answers to problems of long standing. Such problems do not continue to exist
because of a lack of competence on the part of those who are trying to solve them, or
because of a lack of adequate methods of dealing with them. They exist because some
essential piece or pieces of information are missing. Without this essential information
the correct answer cannot be obtained (except by an extremely unlikely accident). This
rules out inductive methods, which build upon empirical information. Invention is no
more capable of arriving at the correct result without the essential information than
induction, but it is not subject to the same limitations. It can, and does, arrive at some
result.
General acceptance of a theory that is almost certain to be wrong is, in itself, a serious
impediment to scientific progress, but the detrimental effect is compounded by the ability
of these inventive theories to evade contradictions and inconsistencies by further
invention. Because of the almost unlimited opportunity to escape from difficulties by
making further ad hoc assumptions, it is ordinarily very difficult to disprove an invented
theory. But the definite proof that the physical universe is not a universe of matter now
automatically invalidates all theories, such as the nuclear theory of the atom, that are
dependent on this "matter" concept. The essential piece of information that has been
missing, we now find, is the true nature of the basic entity of which the universe is
composed.
The issue as to the inadequacy of present-day basic physical theory does not normally
arise in the ordinary course of scientific activity because that activity is primarily directed
toward making the best possible use of the tools that are available. But when the question
is actually raised there is not much doubt as to how it has to be answered. The answer
that we get from P. A. M. Dirac is this:
The present stage of physical theory is merely a steppingstone toward the better stages we
shall have in the future. One can be quite sure that there will be better stages simply
because of the difficulties that occur in the physics of today.17
Dirac admits that he and his fellow physicists have no idea as to the direction from which
the change will come. As he says, "there will have to be some new development that is
quite unexpected, that we cannot even make a guess about." He recognizes that this new
development must be one of major significance. "It is fairly certain that there will have to
be drastic changes in our fundamental ideas before these problems can be solved,"17 he
concludes. The finding of this present work is that "drastic changes in our fundamental
ideas" will indeed be required. We must change our basic physical concept: our concept
of the nature of the universe in which we live.
Unfortunately, however, a new basic concept is never easy to grasp, regardless of how
simple it may be, and how clearly it is presented, because the human mind refuses to look
at such a concept in any simple and direct manner, and insists on placing it within the
context of previously existing patterns of thought, where anything that is new and
different is incongruous at best, and more often than not is definitely absurd. As
Butterfield states the case:
Of all forms of mental activity, the most difficult to induce even in the minds of the
young, who may be presumed not to have lost their flexibility, is the art of handling the
same bundle of data as before, but placing them in a new system of relations with one
another by giving them a different framework.18
In the process of education and development, each human individual has put together a
conceptual framework which represents the world as he sees it, and the normal method of
assimilating a new experience is to fit it into its proper place in this general conceptual
framework. If the fit is accomplished without difficulty we are ready to accept the
experience as valid. If a reported experience, or a sensory experience of our own, is
somewhat outside the limits of our complex of beliefs, but not definitely in conflict, we
are inclined to view it skeptically but tolerantly, pending further clarification. But if a
purported experience flatly contradicts a fundamental belief of long standing, the
immediate reaction is to dismiss it summarily.
Some such semi-automatic system for discriminating between genuine items of
information and the many false and misleading items that are included in the continuous
stream of messages coming in through the various senses is essential in our daily life,
even for mere survival. But this policy of using agreement with past experience as the
criterion of validity has the disadvantage of limiting the human race to a very narrow and
parochial view of the world, and one of the most difficult tasks of science has been, and
to some extent continues to be, overcoming the errors that are thus introduced into
thinking about physical matters. Only a few of those who give any serious consideration
to the subject still believe that the earth is flat, and the idea that this small planet of ours
is the center of all of the significant activities of the universe no longer commands any
strong support, but it took centuries of effort by the most advanced thinkers to gain
general acceptance of the present-day view that, in these instances, things are not what
our ordinary experience would lead us to believe.
Some very substantial advances in scientific methods and equipment in recent years have
enabled investigators to penetrate a number of far-out regions that were previously
inaccessible. Here again it has been demonstrated, as in the question with respect to the
shape of the earth, that experience within the relatively limited range of our day-to-day
activities is not a reliable guide to what exists or is taking place in distant regions. In
application to these far-out phenomena the scientific community therefore rejects the
―experience‖ criterion, and opens the door to a wide variety of hypotheses and concepts
that are in direct conflict with normal experience: such things as events occurring without
specific causes, magnitudes that are inherently incapable of measurement beyond a
certain limiting degree of precision, inapplicability of some of the established laws of
physics to certain unusual phenomena, events that defy the ordinary rules of logic,
quantities whose true magnitudes are dependent on the location and movement of the
observer, and so on. Many of these departures from ―common sense‖ thinking, including
almost all of those that are specifically mentioned in this paragraph, are rather ill-advised
in the light of the facts that have been disclosed by this present work, but this merely
emphasizes the extent to which scientists are now willing to go in postulating deviations
from every-day experience.
Strangely enough, this extreme flexibility in the experience area coexists with an equally
extreme rigidity in the realm of ideas. The general situation here is the same as in the
case of experience. Some kind of semi-automatic screening of the new ideas that are
brought to our attention is necessary if we are to have any chance to develop a coherent
and meaningful understanding of what is going on in the world about us, rather than
being overwhelmed by a mass of erroneous or irrelevant material. So, just as purported
new experiences are measured against past experience, the new concepts and theories that
are proposed are compared with the existing structure of scientific thought and judged
accordingly.
But just as the ―agreement with previous experience‖ criterion breaks down when
experiment or observation enters new fields, so the ―agreement with orthodox theory‖
criterion breaks down when it is applied to proposals for revision of the currently
accepted theoretical fundamentals. When agreement with the existing theoretical
structure is set up as the criterion by which the validity of new ideas is to be judged, any
new thought that involves a significant modification of previous theory is automatically
branded as unacceptable. Whatever merits it may actually have, it is, in effect, wrong by
definition.
Obviously, a strict and undeviating application of this ―agreement‖ criterion cannot be
justified, as it would bar all major new ideas. A new basic concept cannot be fitted into
the existing conceptual framework, as that framework is itself constructed of other basic
concepts, and a conflict is inevitable. As in the case of experience, it is necessary to
recognize that there is an area in which this criterion is not legitimately applicable. In
principle, therefore, practically everyone concedes that a new theory cannot be expected
to agree with the theory that it proposes to replace, or with anything derived directly or
indirectly from that previous theory.
In spite of the nearly unanimous agreement on this point as a matter of principle, a new
idea seldom gets the benefit of it in actual practice. In part this is due to the difficulties
that are experienced in trying to determine just what features of current thought are
actually affected by the theory replacement. This is not always clear on first
consideration, and the general tendency is to overestimate the effect that the proposed
change will have on prevailing ideas. In any event, the principal obstacle that stands in
the way of a proposal for changing a scientific theory or concept is that the human mind
is so constituted that it does not want to change its ideas, particularly if they are ideas of
long standing. This is not so serious in the realm of experience, because the innovation
that is required here generally takes the form of an assertion that ―things are different‖ in
the particular new area that is under consideration. Such an assertion does not involve a
flat repudiation of previous experience; it merely contends that there is a hitherto
unknown limit beyond which the usual experience is no longer applicable. This is the
explanation for the almost incredible latitude that the theorists are currently being
allowed in the ―experience‖ area. The scientist is prepared to accept the assertion that the
rules of the game are different in a new field that is being investigated, even where the
new rules involve such highly improbable features as events that happen without causes
and objects that change their locations discontinuously.
On the other hand, a proposal for modification of an accepted concept or theory calls for
an actual change in thinking, something that the human mind almost automatically
resists, and generally resents. Here the scientist usually reacts like any layman; he
promptly rejects any intimation that the rules which he has already set up, and which he
has been using with confidence, are wrong. He is horrified at the mere suggestion that the
many difficulties that he is experiencing in dealing with the ―parts‖ of the atom, and the
absurdities or near absurdities that he has had to introduce into his theory of atomic
structure are all due to the fact that the atom is not constructed of ―parts.‖
Inasmuch as the new theoretical system presented in this volume and those that are to
follow not only requires some drastic reconstruction of fundamental physical theory, but
goes still deeper and replaces the basic concept of the nature of the universe, upon which
all physical theory is constructed, the conflicts with previous ideas are numerous and
severe. If appraised in the customary manner by comparison with the existing body of
thought many of the conclusions that are reached herein must necessarily be judged as
little short of outrageous. But there is practically unanimous agreement among those who
are in the front rank of scientific investigators that some drastic change in theoretical
fundamentals is inevitable. As Dirac said in the statement previously quoted, ―There will
have to be some new development that is quite unexpected, that we cannot even make a
guess about.‖ The need to abandon a basic concept, the concept of a universe of matter
that has guided physical thinking for three thousand years, is an ―unexpected
development,‖ just the kind of thing that Dirac predicted. Such a basic change is a very
important step, and it should not be lightly taken, but nothing less drastic will suffice.
Sound theory cannot be built on an unsound foundation. Logical reasoning and skillful
mathematical manipulation cannot compensate for errors in the premises to which they
are applied. On the contrary, the better the reasoning the more certain it is to arrive at the
wrong results if it starts from the wrong premises.

CHAPTER 2

A Universe of Motion
The thesis of this present work is that the universe in which we live is not a universe of
matter, but a universe of motion, one in which the basic reality is motion, and all physical
entities and phenomena, including matter, are merely manifestations of motion. The
atom, on this basis, is simply a combination of motions. Radiation is motion, gravitation
is motion, an electric charge is motion, and so on.
The concept of a universe of motion is by no means a new idea. As a theoretical
proposition it has some very obvious merits that have commended it to thoughtful
investigators from the very beginning of systematic science. Descartes' idea that matter
might be merely a series of vortexes in the ether is probably the best-known speculation
of this nature, but other scientists and philosophers, including such prominent figures as
Eddington and Hobbes, have devoted much time to a study of similar possibilities, and
this activity is still continuing in a limited way.
But none of the previous attempts to use the concept of a universe of motion as the basis
for physical theory has advanced much, if any, beyond the speculative stage. The reason
why they failed to produce any significant results has now been disclosed by the findings
of the investigation upon which this present work is based. The inability of previous
investigators to achieve a successful application of the ―motion‖ concept, we find, was
due to the fact that they did not use this concept in its pure form. Instead, they invariably
employed a hybrid structure, which retained elements of the previously accepted ―matter‖
concept. ―All things have but one universal cause, which is motion,‖19 says Hobbes. But
the assertion that all things are caused by motion is something quite different from saying
that they are motions. The simple concept of a universe of motion, without additions or
modifications–the concept utilized in this present work–is that of a universe which is
composed entirely of motion.
The significant difference between these two viewpoints lies in the role that they assign
to space and time. In a universe of matter it is necessary to have a background or setting
in which the matter exists and undergoes physical processes, and it is assumed that space
and time provide the necessary setting for physical action. Many differences of opinion
have arisen with respect to the details, particularly with respect to space–whether or not
space is absolute and immovable, whether such a thing as empty space is possible,
whether or not space and time are interconnected, and so on–but throughout all of the
development of thought on the subject the basic concept of space as a setting for the
action of the universe has remained intact. As summarized by J. D. North:
Most people would accept the following: Space is that in which material objects are
situated and through which they move. It is a background for objects of which it is
independent. Any measure of the distances between objects within it may be regarded as
a measure of the distances between its corresponding parts.20
Einstein is generally credited with having accomplished a profound alteration of the
scientific viewpoint with respect to space, but what he actually did was merely to
introduce some new ideas as to the kind of a setting that exists. His ―space‖ is still a
setting, not only for matter but also for the various ―fields‖ that he envisions. A field, he
says, is ―something physically real in the space around it.‖21 Physical events still take
place in Einstein’s space just as they did in Newton's space or in Democritus' space.
Time has always been more elusive than space, and it has been extremely difficult to
formulate any clear-cut concept of its essential nature. It has been taken for granted,
however, that time, too, is part of the setting in which physical events take place; that is,
physical phenomena exist in space and in time. On this basis it has been hard to specify
just wherein time differs from space. In fact the distinction between the two has become
increasingly blurred and uncertain in recent years, and as matters now stand, time is
generally regarded as a sort of quasi-space, the boundary between space and time being
indefinite and dependent upon the circumstances under which it is observed. The modern
physicist has thus added another dimension to the spatial setting, and instead of
visualizing physical phenomena as being located in three-dimensional space, he places
them in a four-dimensional space-time setting.
In all of this ebb and flow of scientific thought the one unchanging element has been the
concept of the setting. Space and time, as currently conceived, are the stage on which the
drama of the universe unfolds–―a vast world-room, a perfection of emptiness, within
which all the world show plays itself away forever.‖22
This view of the nature of space and time, to which all have subscribed, scientist and
layman alike, is pure assumption. No one, so far as the history of science reveals, has
ever made any systematic examination of the available evidence to determine whether or
not the assumption is justified. Newton made no attempt to analyze the basic concepts.
He tells us specifically, ―I do not define time, space, place and motion, as being well
known to all.‖ Later generations of scientists have challenged some of Newton's
conclusions, but they have brushed this question aside in an equally casual and carefree
manner. Richard Tolman, for example, begins his discussion of relativity with this
statement: ―We shall assume without examination . . . the unidirectional, one-valued,
one-dimensional character of the time continuum.‖23
Such an uncritical acceptance of an unsubstantiated assumption ―without examination‖
is, of course, thoroughly unscientific, but it is quite understandable as a consequence of
the basic concept of a universe of matter to which science has been committed. Matter, in
such a universe, must have a setting in which to exist. Space and time are obviously the
most logical candidates for this assignment. They cannot be examined directly. We
cannot put time under a microscope, or subject space to a mathematical analysis by a
computer. Nor does the definition of matter itself give us any clue as to the nature of
space and time. The net effect of accepting the concept of a universe of matter has
therefore been to force science into the position of having to take the appearances which
space and time present to the casual observer as indications of the true nature of these
entities.
In a universe of motion, one in which everything physical is a manifestation of motion,
this uncertainty does not exist, as a specific definition of space and time is implicit in the
definition of motion. It should be understood in this connection that the term ―motion,‖ as
used herein, refers to motion as customarily defined for scientific and engineering
purposes; that is, motion is a relation between space and time, and is measured as speed
or velocity. In its simplest form, the ―equation of motion,‖ which expresses this definition
in mathematical symbols, is v = s/t.
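As a purely illustrative sketch (the numerical values and names below are arbitrary and are not part of the theoretical development), the standard definition just stated can be written out as a simple computation:

    # Minimal numerical sketch of the standard definition of motion, v = s/t.
    # The figures are arbitrary; they serve only to exhibit the two aspects of the ratio.
    def speed(space, time):
        """Speed as the ratio of a space magnitude to a time magnitude."""
        return space / time

    s = 10.0              # a space magnitude
    t = 2.0               # a time magnitude
    v = speed(s, t)       # 5.0 units of space per unit of time
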
The definition as stated, the standard scientific definition, we may call it, is not the only
way in which motion can be defined. But it is the only definition that has any relevance to
the development in this work. The basic postulate of the work is that the physical
universe is composed entirely of motion as thus defined. What we are undertaking to do
is to describe the consequences that necessarily follow in a universe composed of this
kind of motion. Whether or not one might prefer to define motion in some other way, and
what the consequences of such a definition might be, has no bearing on the present
undertaking.
Obviously, the equation of motion, which defines motion in terms of space and time,
likewise defines space and time in terms of motion. It tells us that in motion space and
time are the two reciprocal aspects of that motion, and nothing else. In a universe of
matter, the fact that space and time have this significance in motion would not preclude
them from having some other significance in a different connection, but when it is
specified that motion is the sole constituent of the physical universe, space and time
cannot have any significance anywhere in that universe other than that which they have
as aspects of motion. Under these circumstances, the equation of motion is a complete
definition of the role of space and time in the physical universe. We thus arrive at the
conclusion that space and time are simply the two reciprocal aspects of motion and have
no other significance.
On this basis, space is not the Euclidean container for physical phenomena that is most
commonly visualized by the layman; neither is it the modified version of this concept
which makes it subject to distortion by various forces and highly dependent on the
location and movement of the observer, as seen by the modern physicist. In fact, it is not
even a physical entity in its own right at all; it is simply and solely an aspect of motion.
Time is not an order of succession, or a dimension of quasi-space, neither is it a physical
entity in its own right. It, too, is simply and solely an aspect of motion, similar in all
respects to space, except that it is the reciprocal aspect.
The simplest way of defining the status of space and time in a universe of motion is to
say that space is the numerator in the expression s/t, which is the speed or velocity, the
measure of motion, and time is the denominator. If there is no fraction, there is no
numerator or denominator; if there is no motion, there is no space or time. Space does not
exist alone, nor does time exist alone; neither exists at all except in association with the
other as motion. We can, of course, focus our attention on the space aspect and deal with
it as if the time aspect, the denominator of the fraction, remains constant (or we can deal
with time as if space remains constant). This is the familiar process known as abstraction,
one of the useful tools of scientific inquiry. But any results obtained in this manner are
valid only where the time (or space) aspect does, in fact, remain constant, or where the
proper adjustment is made for whatever changes in this factor do take place.
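The process of abstraction just described can be sketched in the same illustrative fashion; the values here are assumed only for the example. The time aspect is held constant while the space aspect varies, and the result remains meaningful only while that assumption actually holds.

    # Sketch of the abstraction described above: the time aspect of the motion is
    # treated as constant while the space aspect varies.  Values are illustrative.
    t_fixed = 1.0                                   # time aspect held constant (an assumption)
    space_values = [1.0, 2.0, 3.0]                  # space aspect allowed to vary
    speeds = [s / t_fixed for s in space_values]    # [1.0, 2.0, 3.0]
    # If the time aspect does in fact change, the ratio must be recomputed with both
    # aspects, exactly as the text cautions.
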
The reason for the failure of previous efforts to construct a workable theory on the basis
of the ―motion‖ concept is now evident. Previous investigators have not realized that the
―setting‖ concept is a creature of the ―matter‖ concept; that it exists only because that
basic concept envisions material ―things‖ existing in a space-time setting. In attempting
to construct a theoretical system on the basis of the concept of a universe of motion while
still retaining the ―setting‖ concept of space and time, these theorists have tried to
combine two incompatible elements, and failure was inevitable. When the true situation
is recognized it becomes clear that what is needed is to discard the ―setting‖ concept of
space and time along with the general concept of a universe of matter, to which it is
intimately related, and to use the concept of space and time that is in harmony with the
idea of a universe of motion.
In the discussion that follows we will postulate that the physical universe is composed
entirely of discrete units of motion, and we will make certain assumptions as to the
characteristics of that motion. We will then proceed to show that the mere existence of
motions with properties as postulated, without the aid of any supplementary or auxiliary
assumptions, and without bringing in anything from experience, necessarily leads to a
vast number and variety of consequences which, in total, constitute a complete theoretical
universe.
Construction of a fully integrated theory of this nature, one which derives the existence
and the properties of the various physical entities from a single set of premises, has long
been recognized as the ultimate goal of theoretical science. The question now being
raised is whether that goal is actually attainable. Some scientists are still optimistic. ―Of
course, we all try to discover the universal law,‖ says Eugene P. Wigner, ―and some of us
believe that it will be discovered one day.‖24 But there is also an influential school of
thought which contends that a valid, generally applicable, physical theory is impossible,
and that the best we can hope for is a ―model‖ or series of models that will represent
physical reality approximately and incompletely. Sir James Jeans expresses this point of
view in the following words:
The most we can aspire to is a model or picture which shall explain and account for some
of the observed properties of matter; where this fails, we must supplement it with some
other model or picture, which will in its turn fail with other properties of matter, and so
on.25
When we inquire into the reasons for this surprisingly pessimistic view of the
potentialities of the theoretical approach to nature, in which so many present-day
theorists concur, we find that it has not resulted from any new discoveries concerning the
limitations of human knowledge, or any greater philosophical insight into the nature of
physical reality; it is purely a reaction to long years of frustration. The theorists have been
unable to find the kind of an accurate theory of general applicability for which they have
been searching, and so they have finally convinced themselves that their search was
meaningless; that there is no such theory. But they simply gave up too soon. Our findings
now show that when the basic errors of prevailing thought are corrected the road to a
complete and comprehensive theory is wide open.
It is essential to understand that this new theoretical development deals entirely with the
theoretical entities and phenomena, the consequences of the basic postulates, not with the
aspects of the physical universe revealed by observation. When we make certain
deductions with respect to the constituents of the universe on the basis of theoretical
assumptions as to the fundamental nature of that universe, the entities and phenomena
thus deduced are wholly theoretical; they are the constituents of a purely theoretical
universe. Later in the presentation we will show that the theoretical universe thus derived
from the postulates corresponds item by item with the observed physical universe,
justifying the assertion that each theoretical feature is a true and accurate representation
of the corresponding feature of the actual universe in which we live. In view of this one-
to-one correspondence, the names that we will attach to the theoretical features will be
those that apply to the corresponding physical features, but the development of theory
will be concerned exclusively with the theoretical entities and phenomena.
For example, the ―matter‖ that enters into the theoretical development is not physical
matter; it is theoretical matter. Of course, the exact correspondence between the
theoretical and observed universes that will be demonstrated in the course of this
development means that the theoretical matter is a correct representation of the actual
physical matter, but it is important to realize that what we are dealing with in the
development of theory is the theoretical entity, not the physical entity. The significance
of this point is that physical ―matter,‖ ―radiation‖ and other physical items cannot be
defined with precision and certainty, as there can be no assurance that our observations
give us the complete picture. The ―matter‖ that enters into Newton's law of gravitation,
for example, is not a theoretically defined entity; it is the matter that is actually
encountered in the physical world: an entity whose real nature is still a subject of
considerable controversy. But we do know exactly what we are dealing with when we
talk about theoretical matter. Here there is no uncertainty whatever. Theoretical matter is
just what the postulates require it to be–no more, no less. The same is true of all of the
other items that enter into the theoretical development.
Although physical observations have not yet given us a definitive answer to the question
as to the structure of the basic unit of physical matter, the physical atom–indeed, there is
an almost continuous revision of the prevailing ideas on the subject, as new facts are
revealed by experiment–we know exactly what the structure of the theoretical atom is,
because both the existence and the properties of that atom are consequences that we
derive by logical processes from our basic postulates.
Inasmuch as the theoretical premises are explicitly defined, and their consequences are
developed by sound logical and mathematical processes, the conclusions that are reached
with respect to matter, its structure and properties, and all other features of the theoretical
universe are unequivocal. Of course, there is always a possibility that some error may
have been made in the chain of deductions, particularly if the chain in question is a very
long one, but aside from this possibility, which is at a minimum in the early stages of the
development, there is no doubt as to the true nature and characteristics of any entity or
phenomenon that emerges from that development.
Such certainty is impossible in the case of any theory that contains empirical elements.
Theories of this kind, a category that includes all existing physical theories, are never
permanent; they are always subject to change by experimental discovery. The currently
popular theory of the structure of the atom, for example, has undergone a long series of
changes since Rutherford and Bohr first formulated it, and there is no assurance that the
modifications are at an end. On the contrary, a general recognition of the weakness of the
theory as it now stands has stimulated an intensive search for ways and means of bringing
it into a closer correspondence with reality, and the current literature is full of proposals
for revision.
When a theory includes an empirical component, as all current physical theories do, any
increase in observational or experimental knowledge about this component alters the
sense of the theory, even if the wording remains the same. For instance, as pointed out
earlier, some of the recently discovered phenomena in the sub-atomic region, in which
matter is converted to energy, and vice versa, have drastically altered the status of
conventional atomic theory. The basic concept of a universe of material ―things,‖ to
which physical science has subscribed for thousands of years, requires the atom to be
made up of elementary units of matter. The present theory of an atom constructed of
protons, neutrons, and electrons is based on the assumption that these are the ―elementary
particles‖; that is, the indivisible and unchangeable basic units of matter. The
experimental finding that these particles are not only interconvertible, but also subject to
creation from non-matter and transformation into non-matter, has changed what was
formerly a plausible (even if somewhat fanciful) theory into a theory that is internally
inconsistent. In the light of present knowledge, an atom simply cannot be constructed of
―elementary particles‖ of matter.
Some of the leading theorists have already recognized this fact, and are casting about for
something that can replace the elementary particle as the basic unit. Heisenberg suggests
energy:
Energy . . . is the fundamental substance of which the world is made. Matter originates
when the substance energy is converted into the form of an elementary particle.26
But he admits that he has no idea as to how energy can be thus converted into matter.
This ―must in some way be determined by a fundamental law,‖ he says. Heisenberg's
hypothesis is a step in the right direction, in that he abandons the fruitless search for the
―indivisible particle,‖ and recognizes that there must be something more basic than
matter. He is quite critical of the continuing attempt to invest the purely hypothetical
―quark‖ with a semblance of reality:
I am afraid that the quark hypothesis is not really taken seriously today by its proponents.
Questions dealing with the statistics of quarks, the forces that keep them together, the
reason why the quarks are never seen as free particles, the creation of pairs of quarks
inside an elementary particle, are all left more or less undefined.27
But the hypothesis that makes energy the fundamental entity cannot stand up under
critical scrutiny. Its fatal defect is that energy is a scalar quantity, and simply does not
have the flexibility that is required in order to explain the enormous variety of physical
phenomena. By going one step farther and identifying motion as the basic entity this
inadequacy is overcome, as motion can be vectorial, and the addition of directional
characteristics to the positive and negative magnitudes that are the sole properties of the
scalar quantities opens the door to the great proliferation of phenomena that characterizes
the physical universe.
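The contrast between a scalar quantity and a vectorial one can be made concrete with a small illustration; the numbers are arbitrary and carry no physical significance in the present development.

    # Illustrative contrast between a scalar and a vectorial quantity.
    # A scalar carries only a signed magnitude; a vector adds directional components.
    energy = 7.5                      # scalar: one signed magnitude, nothing more
    velocity = (3.0, -4.0, 0.0)       # vector: a component in each of three dimensions
    speed = sum(c * c for c in velocity) ** 0.5   # 5.0, the scalar magnitude of the vector
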
It should also be recognized that a theory of the composite type, one that has both
theoretical and empirical components, is always subject to revision or modification; it
may be altered essentially at will. The theory of atomic structure, for instance, is simply a
theory of the atom–nothing else–and when it is changed, as it was when the hypothetical
constituents of the hypothetical nucleus were changed from protons and electrons to
protons and neutrons, no other area of physical theory is significantly affected. Even
when it is found expedient to postulate that the atom or one of its hypothetical
constituents does not conform to the established laws of physical science, it is not usually
postulated that these laws are wrong; merely that they are not applicable in the particular
case. This fact that the revision affects only a very limited area gives the theory
constructors practically a free hand in making alterations, and they make full use of the
latitude thus allowed.
Susceptibility to both voluntary and involuntary changes is unavoidable as long as the
development of theory is still in the stage where complex concepts such as ―matter‖ must
be considered unanalyzable, and hence it has come to be regarded as a characteristic of
all theories. The first point to be emphasized, therefore, in beginning a description of the
new system of theory based on the concept of a universe of motion, the Reciprocal
System, as it is called, is that this is not a composite theory of the usual type; it is a purely
theoretical structure which includes nothing of an empirical nature.
Because all of the conclusions reached in the theoretical development are derived entirely
from the basic postulates by logical and mathematical processes the theoretical system is
completely inflexible, a point that should be clearly understood before any attempt is
made to follow the development of the details of the theory in the following pages. It is
not subject to any change or adjustment (other than correction of any errors that may
have been made, and extension of the theory into areas not previously covered). Once the
postulates have been set forth, the entire character of the resulting theoretical universe has
been implicitly defined, down to the minutest detail. Because the motion of which the
universe is constructed has, according to the postulates, the particular properties that have
been postulated, matter, radiation, gravitation, electrical and magnetic phenomena, and so
on, must exist, and their physical behavior must follow certain specific patterns.
In addition to being an inflexible, purely theoretical product that arrives at definite and
certain conclusions which are in full agreement with observation, or at least are not
inconsistent with any definitely established facts, the Reciprocal System of theory is one
of general applicability. It is the first thing of its kind ever formulated: the first that
derives the phenomena and relations of all subdivisions of physical activity from the
same basic premises. For the first time in scientific history there is available a theoretical
system that satisfies the criterion laid down by Richard Schlegel in this statement:
In a significant sense, the ideal of science is a single set of principles, or perhaps a set of
mathematical equations, from which all the vast process and structure of nature could be
deduced.28
No previous theory has covered more than a small fraction of the total field, and the
present-day structure of physical thought is made up of a host of separate theories,
loosely related, and at many points actually conflicting. Each of these separate theories
has its own set of basic assumptions, from which it seeks to derive relations specifically
applicable to certain kinds of phenomena. Relativity theory has one set of assumptions,
and is applicable to one kind of phenomena. The kinetic theory has an altogether different
set of assumptions, which it applies to a different set of phenomena. The nuclear theory
of the atom has still another set of assumptions, and has a field of applicability all its
own, and so on. Again quoting Richard Feynman:
Instead of having the ability to tell you what the law of physics is, I have to talk about the
things that are common to the various laws; we do not understand the connection between
them.15
Furthermore, each of these many theories not only requires the formulation of a special
set of basic assumptions tailored to fit the particular situation, but also finds it necessary
to introduce a number of observed entities and phenomena into the theoretical structure,
taking their existence for granted, and accepting them as ―given,‖ so far as the theory is
concerned.
The Reciprocal System now replaces this multitude of separate theories and subsidiary
assumptions with a fully integrated structure of theory derived in its entirety from a single
set of basic premises. The status of this system as a general physical theory is not a matter
of opinion; it is an objective fact that can easily be verified by an examination of the
theoretical development. Such an examination will disclose that the development leads to
detailed conclusions in all major physical fields, and that these conclusions are derived
deductively from the postulates of the system, without the aid of any supplementary or
subsidiary assumptions, and without introducing anything from experience. The new
theoretical structure not only covers the field to which the conventional physical theories
are applicable; it also gives us answers to the basic physical questions with which the
theories based on the ―matter‖ concept have been unable to cope, and it extends the scope
of physical theory to the point where it is capable of dealing with those recent
experimental and observational discoveries in the far-out regions of science that have
been so baffling to those who are trying to understand them in the context of previously
existing ideas.
Of course, the theoretical development has not yet been carried to the point where it
accounts for every detail of the physical universe. That point will not be reached for a
long time, if ever. But it has been carried far enough to make it clear that the probability
of being unable to deal with the remaining items is negligible, and that the Reciprocal
System is, in fact, a general physical theory.
The crucial importance of this status as a general physical theory lies in the further fact
that it is impossible to construct a wrong general physical theory. At first glance this
statement may seem absurd. It may seem almost self-evident that if validity is not
required there should be no serious obstacle to constructing some kind of a theory of any
subject. But even without any detailed consideration of the factors that are involved in the
case of a general physical theory, a review of experience will show that this offhand
opinion is incorrect. Construction of a general physical theory has been a prime goal of
science for three thousand years, and an immense amount of time and effort has been
devoted to the task, with no success whatever. The failure has not been a matter of
arriving at the wrong answers; the theorists have not been able to formulate any single
theory that would give them any answers, right or wrong, to more than a mere handful of
the millions of questions that a general physical theory must answer. A long period of
failure to find the correct theory is understandable, since the field that must be covered by
a general theory is so immense and so extremely complicated, but thousands of years of
inability to construct any general theory are explainable only on the basis that there is a
reason why a wrong theory cannot be constructed.
This reason is easily understood if the essential nature of the task is carefully examined.
Construction of a general physical theory is analogous to the task of deciphering a very
long message in code. If a coded message is short–a few words or a sentence–alternative
interpretations are possible, any or all of which may be wrong, but if the message is a
very long one–a whole book in code would be an appropriate analogy to the subject
matter of a general physical theory–there is only one way to make any kind of sense out
of every paragraph, and that is to find the key to the cipher. If, and when, the message is
finally decoded, and every paragraph is intelligible, it is evident that the key to the cipher
has been discovered. The possibility that there might be an alternative key, a different set
of meanings for the various symbols utilized, that would give every one of the thousands
of sentences in the message a different significance, intelligible but wrong, is
preposterous. It can therefore be definitely stated that a wrong key to the cipher is
impossible. The correct general theory of the universe is the key to the code of nature. As
in the case of the cipher, a wrong theory can provide plausible answers in a very limited
field, but only the correct theory can be a general theory; one that is capable of producing
explanations for the existence and characteristics of all of the immense number of
physical phenomena. Thus a wrong general theory, like a wrong key to a cipher, is
impossible.
The verification of the validity of the theoretical structure as a whole that is provided by
the demonstration that it is a general physical theory does not eliminate the need for
checking each of the conclusions of the theory individually. It is not unlikely that those
persons who carry out the process of development of the details of the theory will make
some mistakes. But the fact that the individual conclusions have been derived by
extension of a correct general structure of theory creates a strong presumption of their
validity, a presumption that cannot be overcome by anything other than definite and
conclusive contrary evidence. Hence, as conclusions are reached in the course of the
development, it is not necessary to supply positive proof that they are correct, or to argue
that the case in favor of their validity is superior to that of any competitor. All that is
required is to show that these conclusions are not inconsistent with any definitely
established facts.
Recognition of this point is essential for a full understanding of the presentation in the
pages that follow. Many persons will no doubt take the stand that they find the arguments
in favor of certain of the currently accepted ideas more persuasive than those in favor of
the conclusions derived from the Reciprocal System. Indeed, some such reactions are
inevitable, since there will be a strong tendency to view these conclusions in the context
of present-day thought, based on the no longer tenable concept of a universe of matter.
But these opinions are irrelevant. Where it can be shown that the conclusions are
legitimately derived from the postulates of the system, they participate in the proof of the
validity of the structure of theory as a whole, a proof that has been established by two
independent means: (1) by showing that this is a general physical theory, and that a
wrong general physical theory is impossible, and (2) by showing that none of the
authentic deductions from the postulates of the theory is inconsistent with any positively
established information from observation or experiment.
This second method of verification is analogous to the manner in which we would go
about verifying the accuracy of an aerial map. The traditional method of map making
involves first a series of explorations, then a critical evaluation of the reports submitted
by the explorers, and finally the construction of the map on the basis of those reports that
the geographers consider most reliable. Similarly, in the scientific field, explorations are
carried out by experiment and observation, reports of the findings and conclusions based
on these findings are submitted, these reports are evaluated by the scientific community,
and those that are judged to be authentic are added to the scientific map, the accepted
body of factual and theoretical knowledge.
But this traditional method of map making is not the only way in which a geographic map
can be prepared. We may, for instance, devise some photographic system whereby we
can secure a representation of an entire area in one operation by a single process. In either
case, whether we are offered a map of the traditional kind or a photographic map, we will
want to make some tests to satisfy ourselves that the map is accurate before we use it for
any important purposes, but because of the difference in the manner in which the maps
were produced, the nature of these tests will be altogether different in the two cases. In
checking a map of the traditional type we have no option but to verify each significant
feature of the map individually, because aside from a relatively small amount of
interrelation, each feature is independent. Verification of the position shown for a
mountain in one part of the map does not in any way guarantee the accuracy of the
position shown for a river in another part of the map. The only way in which the position
shown for the river can be verified is to compare what we see on the map with such other
information as may be available. Since these collateral data are often scanty, or even
entirely lacking, particularly along the frontiers of knowledge, the verification of a map
of this kind in either the geographic or the scientific field is primarily a matter of
judgment, and the final conclusion cannot be more than tentative at best.
In the case of a photographic map, on the other hand, each test that is made is a test of the
validity of the process, and any verification of an individual feature is merely incidental.
If there is even one place where an item that can definitely be seen on the map is in
conflict with something that is positively known to be a fact, this is enough to show that
the process is not accurate, and it provides sufficient justification for discarding the map
in its entirety. But if no such conflict is found, the fact that every test is a test of the
process means that each additional test that is made without finding a discrepancy
reduces the mathematical probability that any conflict exists anywhere on the map. By
making a suitably large number and variety of such tests the remaining uncertainty can be
reduced to the point where it is negligible, thereby definitely establishing the accuracy of
the map as a whole. The entire operation of verifying a map of this kind is a purely
objective process in which features that can definitely be seen on the map are compared
with facts that have been definitely established by other means.
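The way in which repeated successful tests reduce the remaining uncertainty can be illustrated with a deliberately simplified model; the particular probability used is an assumption introduced only for this sketch and forms no part of the argument itself.

    # Simplified model: assume that, if the map-making process were faulty, each
    # independent spot check would expose a conflict with probability p.  The chance
    # that a faulty process nevertheless survives n checks then shrinks geometrically.
    p = 0.05                          # assumed chance that one check exposes a fault
    for n in (10, 50, 100, 500):
        survival = (1 - p) ** n
        print(n, survival)            # 500 checks -> roughly 7e-12
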
One important precaution must be observed in the verification process: a great deal of
care must be exercised to make certain of the authenticity of the supposed facts that are
utilized for the comparisons. There is no justification for basing conclusions on anything
that falls short of positive knowledge. In testing the accuracy of an aerial map we realize
that we cannot justify rejecting the map because the location of a lake indicated on the
map conflicts with the location that we think the lake occupies. In this case it is clear that
unless we actually know just where the lake is, we have no legitimate basis on which to
dispute the location shown on the map. We also realize that there is no need to pay any
attention to items of this kind: those about which we are uncertain. There are hundreds,
perhaps thousands, of map features about which we do have positive knowledge, far
more than enough for purposes of comparison, so that we need not give any consideration
to features about which there is any degree of uncertainty.
Because the Reciprocal System of theory is a fully integrated structure derived entirely
by one process–deduction from a single set of premises–it is capable of verification in the
same manner as an aerial map. It has already passed such a test; that is, the theoretical
deductions have been compared with the observed facts in thousands of individual cases
distributed over all major fields of physical science without encountering a single definite
inconsistency. These deductions disagree with many currently accepted ideas, to be sure,
but in all of these cases it can be shown that the current views are not positive knowledge.
They are either conclusions based on inadequate data, or they are assumptions,
extrapolations, or interpretations. As in the analogous case of the aerial map, conflicts
with such items, with what scientists think, are meaningless. The only conflicts that are
relevant to the test of the validity of the theoretical system are conflicts with what
scientists know.
Thus, while recognition of human fallibility prevents asserting that every conclusion
purported to be reached by application of this theory is authentic and therefore correct, it
can be asserted that the Reciprocal System of theory is capable of producing the right
answers if it is properly applied, and to the extent that the development of the
consequences of the postulates of the theory has been correctly carried out, the theoretical
structure thus derived is a true and accurate representation of the actual physical universe.

CHAPTER 3

Reference Systems
As indicated in the preceding chapter, the concept of a universe of motion has to be
elaborated to some extent before it is possible to develop a theoretical structure that will
describe that universe in detail. The additions to the basic concept must take the form of
assumptions–or postulates, a term more commonly applied to the fundamental
assumptions of a theory–because even though the additional specifications (the physical
specifications, at least) obviously do apply to the particular universe of motion in which
we live, there does not appear to be adequate justification for contending that they
necessarily apply to any possible universe of motion.
It has already been mentioned that we are postulating a universe composed of discrete
units of motion. But this does not mean that the motion proceeds in a series of jumps.
This basic motion is a progression in which the familiar progression of time is
accompanied by a similar progression of space. Completion of one unit of the progression
is followed immediately by initiation of another, without interruption. As an analogy, we
may consider a chain. Although the chain exists only in discrete units, or links, it is a
continuous structure, not a mere juxtaposition of separate units.
Whether or not the continuity is a matter of logical necessity is a philosophical question
that does not need to concern us at this time. There are reasons to believe that it is, in fact,
a necessity, but if not, we will introduce it into our definition of motion. In any event, it is
part of the system. The extensive use of the term ―progression‖ in application to the basic
motions with which we are dealing in the initial portions of this work is intended to
emphasize this characteristic.
Another assumption that will be made is that the universe is three-dimensional. In this
connection, it should be realized that all of the supplementary assumptions that were
added to the basic concept of a universe of motion in order to define the essential
properties of that universe were no more than tentative at the start of the investigation
that ultimately led to the development of the Reciprocal System of theory. Some such
supplementary assumptions were clearly required, but neither the number of assumptions
that would have to be made, nor the nature of the individual assumptions, was clearly
indicated by existing knowledge of the physical universe. The only feasible course of
action was to initiate the investigation on the basis of those assumptions which seemed
to have the greatest probability of being correct. If any wrong assumptions were made, or
if some further assumptions were required, the theoretical development would, of course,
encounter insurmountable difficulties very quickly, and it would then be necessary to go
back and modify the postulates, and try again. Fortunately, the original postulates passed
this test, and the only change that has been made was to drop some of the original
assumptions that were found to be deducible from the others and therefore superfluous.
No further physical postulates are required, but it is necessary to make some assumptions
as to the mathematical behavior of the universe. Here our observations of the existing
universe do not give us guidance of as definite a character as was available in the case of
the physical properties, but there is a set of mathematical principles which, until very
recent times, was generally regarded as almost self-evident. The main body of scientific
opinion is now committed to the belief that the true mathematical structure of the
universe is much more complex, but the assumption that it conforms to the older set of
principles is the simplest assumption that can be made. Following the rule laid down by
William of Occam, this assumption was therefore made for the purpose of the initial
investigation. No modifications have since been found necessary. The complete set of
assumptions that constitutes the fundamental postulates of the theory of a universe of
motion can be expressed as follows:
First Fundamental Postulate: The physical universe is composed entirely of one
component, motion, existing in three dimensions, in discrete units, and with two
reciprocal aspects, space and time.
Second Fundamental Postulate: The physical universe conforms to the relations of
ordinary commutative mathematics, its primary magnitudes are absolute, and its
geometry is Euclidean.
Postulates are justified by their consequences, not by their antecedents, and as long as
they are rational and mutually consistent, there is not much that can be said about them,
either favorably or adversely. It should be of interest, however, to note that the concept of
a universe composed entirely of motion is the only new idea that is involved in the
postulates that define the Reciprocal System. There are other ideas, which, on the basis of
current thinking, could be considered unorthodox, but these are by no means new. For
example, the postulates include the assumption that the geometry of the universe is
Euclidean. This is in direct conflict with present-day physical theory, which assumes a
non-Euclidean geometry, but it certainly cannot be regarded as an innovation. On the
contrary, the physical validity of Euclidean geometry was accepted without question for
thousands of years, and there is little doubt but that non-Euclidean geometry would still
be nothing but a mathematical curiosity had it not been for the fact that the development
of physical theory encountered some serious difficulties which the theorists were unable
to surmount within the limitations established by Euclidean geometry, absolute
magnitudes, etc.
Motion is measured as speed (or velocity, in a context that we will consider later).
Inasmuch as the quantity of space involved in one unit of motion is the minimum
quantity that takes part in any physical activity, because less than one unit of motion does
not exist, this is the unit of space. Similarly, the quantity of time involved in the one unit
of motion is the unit of time. Each unit of motion, then, consists of one unit of space in
association with one unit of time; that is, the basic motion of the universe is motion at
unit speed.
Cosmologists often begin their analyses of large-scale physical processes with a
consideration of a hypothetical ―empty‖ universe, one in which no matter exists in the
postulated space-time setting. But an empty universe of motion is an impossibility.
Without motion there would be no universe. The most primitive condition, the situation
which prevails when the universe of motion exists, but nothing at all is happening in that
universe, is a condition in which units of motion exist independently, with no interaction.
In this condition all speed is unity, one unit of space per unit of time, and since all units
of motion are alike–they have no property but speed, and that is unity for all–the entire
universe is a featureless uniformity. In order that there may be physical phenomena that
can be observed or measured there must be some deviation from this one-to-one relation,
and since it is the deviation that is observable, the amount of the deviation is a measure of
the magnitude of the phenomenon. Thus all physical activity, all change that occurs in the
system of motions that constitutes the universe, extends from unity, not from zero.
The units of space, time, and motion (speed) that form the background for physical
activity are simply scalar magnitudes. As matters now stand, we have no geometric
means of representation that will express all three magnitudes coincidentally. But if we
assume that the time progression continues at a uniform rate, and we measure this
progression by some independent device (a clock), then we can represent the
corresponding spatial magnitude by a one-dimensional geometric figure: a line. The
length of this line represents the amount of space corresponding to a given time
magnitude. Where this time magnitude is unity, the length of the line also represents the
speed, the space per unit time.
In present-day scientific practice, the datum from which all speed measurements are
made, the point identified with the mathematical zero, is some stationary point in the
reference system. But, as has been explained, the reference datum for physical
magnitudes in a universe of motion is not zero speed but unit speed. The natural datum is
therefore continually moving outward (in the direction of greater magnitudes) from the
conventional zero datum, and the true speeds that are effective in the basic physical
interactions can be correctly measured only in terms of deviation upward or downward
from unity. From the natural standpoint a motion at unit speed is no effective motion at
all.
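The datum shift described above can be illustrated numerically. The following Python fragment is a minimal sketch only, assuming (as the development will indicate in Chapter 4) that unit speed can be identified with the speed of light; the particular speed values are illustrative, not quantities taken from the theory:

    UNIT_SPEED = 299_792_458.0  # metres per second, taken here as one natural unit of speed

    def deviation_from_unity(speed_m_per_s):
        # Express the speed in natural units and measure it from the natural
        # datum (unit speed) rather than from the conventional zero.
        return speed_m_per_s / UNIT_SPEED - 1.0

    # An everyday speed lies almost a full unit below the natural datum,
    # while motion at unit speed shows no deviation at all.
    print(deviation_from_unity(30.0))         # approximately -1.0
    print(deviation_from_unity(UNIT_SPEED))   # 0.0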
Expressing this in another way, we may say that the natural system of reference, the
reference system to which the physical universe actually conforms, is moving outward at
unit speed with respect to any stationary spatial reference system. Any identifiable
portion of such a stationary reference system is called a location in that system. While
less-than-unit quantities of space do not exist, points within the units can be identified. A
spatial location may therefore be of any size, from a point to the amount of space
occupied by a galaxy, depending on the context in which the term is used. To distinguish
locations in the natural moving system of reference from locations in the stationary
reference systems, we will use the term absolute location in application to the natural
system. In the context of a fixed reference system an absolute location appears as a point
(or some finite spatial magnitude) moving along a straight line.
We are so accustomed to referring motion to a stationary reference system that it seems
almost self-evident that an object that has no independent motion, and is not subject to
any external force, must remain stationary with respect to some spatial coordinate system.
Of course, it is recognized that what seems to be motionless in the context of our ordinary
experience is actually moving in terms of the solar system as a reference; what seems to
be stationary in the solar system is moving if we use the Galaxy as a reference datum, and
so on. Current scientific theory also contends that motion cannot be specified in any
absolute manner, and can only be stated in relative terms. However, all previous thought
on the subject, irrespective of how it views the details, has made the assumption that the
initial point of a motion is some fixed spatial location that can be identified as the spatial
zero.
But nature is not required to conform to human opinions and beliefs, and in this case does
not do so. As indicated in the preceding paragraphs, the natural system of reference in a
universe of motion is not a stationary system but a moving system. Inasmuch as each unit
of the basic motion involves one unit of space and one unit of time, it follows that
continuation of the motion through an interval during which time is progressing involves
a continued increase, or progression, of both space and time. If an absolute spatial
location X is in coincidence with spatial location x at time t, then at time t + n this
absolute location X will be found at spatial location x + n. As seen in the context of a
stationary spatial system of reference, each absolute location is moving outward from its
point of reference at a constant unit speed.
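As a minimal sketch of this relation (Python; the coordinate and time values are illustrative assumptions):

    def position_of_absolute_location(x, t, t_later):
        # An absolute location that coincides with stationary coordinate x at time t
        # is found at x + n at time t + n: it progresses outward at unit speed.
        n = t_later - t
        return x + n

    print(position_of_absolute_location(5, 0, 3))   # 8: three units later, three units farther out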
Because of this motion of the natural reference system with respect to the stationary
systems, an object that has no independent motion, and is not subject to any external
force, does not remain stationary in any system of fixed spatial coordinates. It remains at
the same absolute location, and therefore moves outward at unit speed from its initial
location, and from any object that occupies such a location.
Thus far we have been considering the progression of the natural moving reference
system in the context of a one-dimensional stationary reference system. Since we have
postulated that the universe is three-dimensional, we may also represent the progression
in a three-dimensional stationary reference system. Because the progression is scalar,
what this accomplishes is merely to place the one-dimensional system that has been
discussed in the preceding paragraphs into a certain position in the three-dimensional
coordinate system. The outward movement of the natural system with respect to the fixed
point continues in the same one-dimensional manner.
The scalar nature of the progression of the natural reference system is very significant. A
unit of the basic motion has no inherent direction; it is simply a unit of space in
association with a unit of time. In quantitative terms it is a unit scalar magnitude: a unit of
speed. Scalar motion plays only a very minor role in everyday life, and little attention is
ordinarily paid to it. But our finding that the basic motion of the physical universe is
inherently scalar changes this picture drastically. The properties of scalar motion now
become extremely important.
To illustrate the primary difference between scalar motion and the vectorial motion of our
ordinary experience let us consider two cases which involve a moving object X between
two points A and B on the surface of a balloon. In the first case, let us assume that the
size of the balloon is maintained constant, and that the object X is something capable of
independent motion, a crawling insect, perhaps. The motion of X is then vectorial. It has
a specific direction in the context of a stationary spatial reference system, and if that
direction is BA–that is, X is moving away from B–the distance XA decreases and the
distance XB increases. In the second case, we will assume that X is a fixed spot on the
balloon surface, and that its motion is due to expansion of the balloon. Here the motion of
X is scalar. It is simply outward away from all other points on the balloon surface, and
has no specific direction. In this case the motion away from B does not decrease the
distance XA. Both XB and XA increase. The motion of the natural reference system
relative to any fixed spatial system of reference is motion of this character. It has a
positive scalar magnitude, but no inherent direction.
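A short numerical sketch may help to fix this distinction. The following Python fragment uses three points on a circle as a two-dimensional stand-in for the balloon surface; the coordinates, the step length, and the expansion factor are illustrative assumptions only:

    import math

    A = (1.0, 0.0)
    B = (math.cos(2.5), math.sin(2.5))
    X = (math.cos(1.0), math.sin(1.0))

    # Vectorial case: X crawls a short step in the direction BA (away from B, toward A).
    dx, dy = A[0] - B[0], A[1] - B[1]
    norm = math.hypot(dx, dy)
    X_vectorial = (X[0] + 0.1 * dx / norm, X[1] + 0.1 * dy / norm)

    # Scalar case: the whole balloon expands by ten per cent; X is simply carried along.
    k = 1.1
    A_s, B_s, X_s = [(k * px, k * py) for px, py in (A, B, X)]

    print(math.dist(X, A), math.dist(X_vectorial, A))   # XA decreases (vectorial motion)
    print(math.dist(X, B), math.dist(X_vectorial, B))   # XB increases (vectorial motion)
    print(math.dist(X, A), math.dist(X_s, A_s))         # XA increases (scalar motion)
    print(math.dist(X, B), math.dist(X_s, B_s))         # XB increases (scalar motion)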
In order to place the one-dimensional progression of an absolute location in a three-
dimensional coordinate system it is necessary to define a reference point and a direction.
In the subsequent discussion we will be dealing largely with scalar motions that originate
at specific points in the fixed coordinate system. The reference point for each of these
motions is the point of origin. It follows that the motions can be represented in the
conventional fixed system of reference only by the use of multiple reference points. This
was brought out in the first edition of this work in the form of a statement that photons
(which, as will be shown later, are objects without independent motion, and therefore
remain in their absolute locations of origin) ―travel outward in all directions from various
points of emission.‖ However, experience has indicated that further elaboration of this
point is necessary in order to avoid misunderstandings. The principal stumbling block
seems to be a widespread impression that there must be some kind of a conceptually
identifiable universal reference system to which the motions of photons and other objects
that remain in the same absolute locations can be related. The expression ―natural
reference system‖ probably contributes to this impression, but the fact that a natural
reference system exists does not necessarily imply that it must be related in any direct
way to the conventional three-dimensional stationary frame of reference.
It is true that the expanding balloon analogy suggests something of this kind, but an
examination of this analogy will show that it is strictly applicable only to a situation in
which all existing objects are stationary in the natural system of reference, and are
therefore moving outward at unit speed. In this situation, any location can be taken as the
reference point, and all other locations move outward from that point; that is, all locations
move outward away from all other locations. But just as soon as moving objects (entities
that are stationary, or moving with low speeds, in the fixed reference system, and are
therefore moving with high speeds relative to the natural system of reference–emitters of
photons, for example) are introduced into the situation, this simple representation is no
longer possible, and multiple reference points become necessary.
In order to apply the balloon analogy to a gravitationally bound physical system it is
necessary to visualize a large number of expanding balloons, centered on the various
reference points and interpenetrating each other. Absolute locations are defined only in a
scalar sense (represented one-dimensionally). They move outward, each from its own
reference point, regardless of where those reference points may be located in the three-
dimensional spatial coordinate system. In the case of the photons, each emitting object
becomes a point of reference, and since the motions are scalar and have no inherent
direction, the direction of motion of each photon, as seen in the reference system, is
determined entirely by chance. Each of the emitting objects, wherever it may be in the
stationary reference system, and whatever its motions may be relative to that system,
becomes the reference point for the scalar photon motion; that is, it is the center of an
expanding sphere of radiation.
The finding that the natural system of reference in a universe of motion is a moving
system rather than a stationary system, our first deduction from the postulates that define
such a universe, is a very significant discovery. Heretofore only one so-called ―universal
force,‖ the force of gravitation, has been known. Later in the discussion it will be seen
that the customary term ―universal‖ is somewhat too broad in application to gravitation,
but this phenomenon (the nature of which will be examined later) affects all units and
aggregates of matter within the observational range under all circumstances. While not
actually universal, it can appropriately be called a ―general‖ force. In a universe of
motion a force is necessarily a motion, or an aspect of motion. Since we will be working
mainly in terms of motion for the present, it will be desirable at this point to establish the
relation between the force and motion concepts.
For this purpose, let us consider a situation in which an object is moving in one direction
with a certain velocity, and is simultaneously moving in the opposite direction with an
equal velocity. The net change of position of the object is zero. Instead of looking at the
situation in terms of two opposing motions, we may find it convenient to say that the
object is motionless, and that this condition has resulted from a conflict of two forces
tending to produce motion in opposite directions. On this basis we define force as that
which will produce motion if not prevented from so doing by other forces. The
quantitative aspects of this relation will be considered later. The limitations to which a derived concept of this kind is subject will also be considered in connection with subjects to be covered in the pages that follow. The essential point to be noted here is that
―force‖ is merely a special way of looking at motion.
It has long been realized that while gravitation has been the only known general force,
there are many physical phenomena that are not capable of satisfactory explanation on
the basis of only one such force.
For example, Gold and Hoyle make this comment:
Attempts to explain both the expansion of the universe and the condensation of galaxies
must be very largely contradictory so long as gravitation is the only force field under
consideration. For if the expansive kinetic energy of matter is adequate to give universal
expansion against the gravitational field it is adequate to prevent local condensation
under gravity, and vice versa. That is why, essentially, the formation of galaxies is passed
over with little comment in most systems of cosmology.29
Karl K. Darrow made the same point in a different connection, emphasizing that
gravitation alone is not sufficient in many applications. There must also be what he called
an ―antagonist,‖ an ―essential and powerful force,‖ as he described it.
May we now assume that the ultimate particles of the world act on each other by gravity
alone, with motion as the sole antagonist to keep the universe from gathering into a single
clump? The answer to this question is a forthright and irrevocable No!30
The globular star clusters provide an example illustrating Darrow's point. Like the
formation of galaxies, the problem of accounting for the existence of these clusters is
customarily ―passed over with little comment‖ by the astronomers, but a discussion of the
subject occasionally creeps into the astronomical literature. A rather candid article by E.
Finlay-Freundlich which appeared in a publication of the Royal Astronomical Society
some years ago admitted that ―the main problem presented by the globular clusters is
their very existence as finite systems.‖ Many efforts have been made to explain these
clusters on the basis of motions acting in opposition to gravitation, but as this author
concedes, there is no evidence of the existence of motions that would be adequate to
establish an equilibrium, and he asserts that ―their structure must be determined solely by
the gravitational field set up by the stars which constitute such a cluster.‖ This being the
case, the only answer he was able to visualize was that the clusters ―have not yet reached
the final state of equilibrium,‖ a conclusion that is clearly in conflict with the many
observational indications that these clusters are relatively stable long-lived objects. The
following judgment that Finlay-Freundlich expressed with respect to the results obtained
by his predecessors is equally applicable to the situation as it stands today:
All attempts to explain the existence of isolated globular clusters in the vicinity of the
galaxy have hitherto failed.31
But now we find that there is a second ―general force‖ that has not hitherto been
recognized, just the kind of an ―antagonist‖ to gravitation that is necessary to explain all
of these otherwise inexplicable phenomena. Just as gravitation moves all units and
aggregates of matter inward toward each other, so the progression of the natural reference
system with respect to the stationary reference systems in common use moves material
units and aggregates, as we see them in the context of a stationary reference system,
outward away from each other. The net movement of each object, as observed, is
determined by the relative magnitudes of the opposing general motions (forces), together
with whatever additional motions may be present.
In each of the three illustrative cases cited, the outward progression of the natural
reference system provides the missing piece in the physical puzzle. But these cases are
not unique; they are only especially dramatic highlights of a clarification of the entire
physical picture that is accomplished by the introduction of this new concept of a moving
natural reference system. We will find it in the forefront of almost every subject that is
discussed in the pages that follow.
It should be recognized, however, that the outward motions that are imparted to physical
objects by reason of the progression of the natural reference system are, in a sense,
fictitious. They appear to exist only because the physical objects are referred to a spatial
reference system that is assumed to be stationary, whereas it is, in fact, moving. But in
another sense, these motions are not entirely fictitious, inasmuch as the attribution of
motion to entities that are not actually moving takes place only at the expense of denying
motion to other entities that are, in fact, moving. These other entities that are stationary
relative to the fixed spatial coordinate system are participating in the motion of that
coordinate system relative to the natural system. The motion therefore exists, but it is
attributed to the wrong entities. One of the first essentials for an understanding of the
system of motions that constitutes the physical universe is to relate the basic motions to
the natural reference system, and thereby eliminate the confusion that has been
introduced by the use of a fixed reference system.
When this is done it can be seen that the units of motion involved in the progression of
the natural reference system have no actual physical significance. They are merely units
of a reference system in which the fictitious motion of the absolute locations can be
represented. Obviously, the spatial aspect of these fictitious units of motion is equally
fictitious, and this leads to an answer to the question as to the relation of the ―space‖
represented by a stationary three-dimensional reference system, extension space, as we
may call it, to the space of the universe of motion. On the basis of the explanation given
in the preceding pages, if a number of objects without independent motion (such as
photons) originate simultaneously from a source that is stationary with respect to a fixed
reference system, they are carried outward from the location of origin at unit speed by the
motion of the natural reference system relative to the stationary system. The direction of
motion of each of these objects, as seen in the context of the stationary system of
reference, is determined entirely by chance, and the motions are therefore distributed over
all directions. The location of origin is then the center of an expanding sphere, the surface
of which contains the locations that the moving objects occupy after a period of time
corresponding to the spatial progression represented by the radius of the sphere.
Any point within this sphere can be defined by the direction of motion and the duration of
the progression; that is, by polar coordinates. The sphere generated by the motion of the
natural reference system relative to the point of origin has no actual physical significance.
It is a fictitious result of relating the natural reference system to an arbitrary fixed system
of reference. It does, however, define a reference frame that is well adapted to
representing the motions of ordinary human experience. Any such sphere can be
expanded indefinitely, and the reference system thus defined is therefore coextensive
with all other stationary spatial reference systems. Position in any one such system can be
expressed in terms of any other merely by a change of coordinates.
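A minimal sketch of this construction in Python: photons leave a stationary point of origin in chance directions, and after an elapsed time r (at unit speed) every one of them lies on the sphere of radius r about that point. The sample size and elapsed time chosen here are illustrative assumptions:

    import math
    import random

    def chance_direction():
        # A direction determined entirely by chance, uniform over the sphere.
        phi = random.uniform(0.0, 2.0 * math.pi)
        cos_theta = random.uniform(-1.0, 1.0)
        sin_theta = math.sqrt(1.0 - cos_theta * cos_theta)
        return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)

    origin = (0.0, 0.0, 0.0)
    r = 5.0   # elapsed time in natural units; at unit speed this is also the radius

    photons = [tuple(o + r * d for o, d in zip(origin, chance_direction()))
               for _ in range(1000)]

    # Every photon lies on the surface of the expanding sphere.
    print(all(abs(math.dist(origin, p) - r) < 1e-9 for p in photons))   # True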
The volume generated in this manner is identical with the entity that is called ―space‖ in
previous physical theories. It is the spatial constituent of a universe of matter. As brought
out in the foregoing explanation, this entity, extension space, as we have called it, is
neither a void, as contended by one of the earlier schools of thought, nor an actual physical
entity, as seen by an opposing school. In terms of a universe of motion it is simply a
reference system.
An appropriate analogy is the coordinate system on a sheet of graph paper. The original
lines on this paper, generally lightly printed in color, have no significance so far as the
subject matter of the graph is concerned. But if we draw some lines on this sheet that are
relevant to the subject matter, then the printed coordinate system facilitates our
assessment of the interrelations between the quantities represented by those lines.
Similarly, extension space, per se, has no physical significance. It is merely a reference
system, like the colored lines on the graph paper, that facilitates cognition of the relations
between the significant entities and phenomena: the motions and their various aspects.
The true ―space‖ that enters into physical phenomena is the spatial aspect of motion. As
brought out earlier, it has no independent existence. Nor does time. Each exists only in
association with the other as motion.
We can, however, isolate the spatial aspect of a particular motion, or type of motion, and
deal with it on a theoretical basis as if it were independent, providing that the rate of
change of time remains constant, or the appropriate correction is applied for whatever
deviation from a constant rate actually does take place. This ability to abstract the spatial
aspect and treat it independently is the factor that enables us to relate the spatial aspect of
translational motion to the reference system that we recognize as extension space.
It may be of interest to note that this clarification of the nature of extension space gives
us a partial answer to the long-standing question as to whether this space, which in the
context of a universe of matter is ―space‖ in general, is finite or infinite. As a reference
system it is potentially infinite, just as ―number‖ is potentially infinite. But it does not
necessarily follow that the number of units of space participating in motions that actually
have physical significance is infinite. A complete answer to the question is therefore not
available at this stage of the development. The remaining issue will have further
consideration later.
The finding that extension space is merely a reference system also disposes of the issue
with respect to ―curvature,‖ or other kinds of distortion, of space, and it rules out any
participation of extension space in physical action. Such concepts as those involved in
Einstein's assertion that ―space has the physical property of transmitting electromagnetic
waves‖ are wholly incorrect. No reference system can have any physical effects, nor can
any physical action affect a reference system. Such a system is merely a construct: a
device whereby physical actions and their results can be represented in usable form.
Extension space, the ―container‖ visualized by most individuals when they think of space,
is capable of representing only translational motion, and its spatial aspect, not physical
space in general. But the spatial aspect of any motion has the same relation to the
physical phenomena in which it is involved as the spatial aspect of translational motion
that we can follow by means of its representation in the coordinate system. For example,
the space involved in rotation is physical space, but it can be defined in the conventional
reference system only with the aid of an auxiliary scalar quantity: the number of
revolutions. By itself, that reference system cannot distinguish between one revolution
and n revolutions. Nor is it able to represent vibrational motion. As will be found later in
the development, even its capability of representing translational motion is subject to
some significant limitations.
Regardless of whether motion is translatory, vibratory, or rotational, its spatial aspect is
―space,‖ from the physical standpoint. And whenever a physical process involves space
in general, rather than merely the spatial aspect of translational motion, all components of
the total space must be taken into account. The full implications of this statement will not
become apparent until we are ready to begin consideration of electrical phenomena, but it
obviously rules out the possibility of a universal reference system to which all spatial
magnitudes can be related. Furthermore, every motion, and therefore every physical
object (a manifestation of motion) has a location in three-dimensional time as well as in
three-dimensional space, and no spatial reference system is capable of representing both
locations.
It may be somewhat disconcerting to many readers to be told that we are dealing with a
universe that transcends the stationary three-dimensional spatial reference system in
which popular opinion places it: a universe that involves three-dimensional time, scalar
motion, a moving reference system, and so on. But it should be realized that this
complexity is not peculiar to the Reciprocal System. No physical theory that enjoys any
substantial degree of acceptance today portrays the universe as capable of being
accurately represented in its entirety within any kind of a spatial reference system.
Indeed, the present-day ―official‖ school of physical theory says that the basic entities of
the universe are not ―objectively real‖ at all; they are phantoms which can ―only be
symbolized by partial differential equations in an abstract multidimensional space.‖ 32
(Werner Heisenberg)
Prior to the latter part of the nineteenth century there was no problem in this area. It was
assumed, without question, that space and time were clearly recognizable entities, that all
spatial locations could be defined in terms of an absolute spatial reference system, and
that time could be defined in terms of a universal uniform flow. But the experimental
demonstration of the constant speed of light by Michelson and Morley threw this
situation into confusion, from which it has never fully emerged.
The prevailing scientific opinion at the moment is that time is not an independent entity,
but is a sort of quasi-space, existing in one dimension that is joined in some manner to the
three dimensions of space to form a four-dimensional continuum. Inasmuch as this
creates as many problems as it solves, it has been further assumed that this continuum is
distorted by the presence of matter. These assumptions, which are basic to relativity theory, the currently accepted doctrine, leave the conventional spatial reference system in
a very curious position. Einstein says that his theory requires us to free ourselves ―from
the idea that co-ordinates must have an immediate metrical meaning.‖33 He defines this
expression ―a metrical meaning‖ as the existence of a specific relationship between
differences of coordinates and measurable lengths and times. Just what kind of a meaning
the coordinates can have if they do not represent measurable magnitudes is rather
difficult to understand. The truth is that the differences in coordinates, which, according
to Einstein, have no metrical meaning, are the spatial magnitudes that enter into almost
all of our physical calculations. Even in astronomy, where it might be presumed that any
inaccuracy would be very serious, in view of the great magnitudes involved, we get this
report from Hannes Alfven:
The general theory (of relativity) has not been applied to celestial mechanics on an
appreciable scale. The simpler Newtonian theory is still employed almost exclusively to
calculate the motions of celestial bodies.34
Our theoretical development now demonstrates that the differences in coordinates do
have ―metrical meaning,‖ and that wherever we are dealing with vectorial motions, or
with scalar motions that can be referred to identifiable reference points, these coordinate
positions accurately represent the spatial aspects of the translational motions that are
involved. This explains why the hypothesis of an absolute spatial reference system for the
universe as a whole was so successful for such a long time. The exceptions are
exceptional in ordinary practice. The existence of multiple reference points has had no
significant impact except in the case of gravitation, and the use of the force concept has
sidestepped the gravitational issue. Only in recent years have the observations penetrated
into regions outside the boundaries of the conventional reference systems.
But we now have to deal with the consequences of this enlargement of the scope of our
observations. In the course of this present work it has been found that the problems
introduced into physical science by the extension of experimental and observational
knowledge are directly due to the fact that some of the newly discovered phenomena
transcend the reference systems into which current science is trying to place them. As we
will see later, this is particularly true where variations in time magnitudes are involved,
inasmuch as conventional spatial reference systems assume a fixed and unchanging
progression of time. In order to get the true picture it is necessary to realize that no single
reference system is capable of representing the whole of physical reality.
The universe, as seen in the context of the Reciprocal System of theory, is much more
complex than is generally realized, but the simple Newtonian universe was abandoned by
science long ago, and the modifications of the Newtonian view that we now find
necessary are actually less drastic than those required by the currently popular physical
theories. Of course, in the final analysis this makes no difference. Scientific thought will
have to conform to the way in which the universe actually behaves, irrespective of
personal preferences, but it is significant that all of the phenomena of a universe of
motion, as they emerge from the development of the Reciprocal System, are rational,
clearly defined, and ―objectively real.‖
CHAPTER 4
Radiation
The basic postulate of the Reciprocal System of theory asserts the existence of motion. In
itself, without qualification, this would permit the existence of any conceivable kind of
motion, but the additional assumptions included in the postulates act as limitations on the
types of motion that are possible. The net result of the basic postulates plus the limitations is
therefore to assert the existence of any kind of motion that is not excluded by the limiting
assumptions. We may express this point concisely by saying that in the theoretical universe
of motion anything that can exist does exist. The further fact that these permissible
theoretical phenomena coincide item by item with the observed phenomena of the actual
physical universe is something that will have to be demonstrated step by step as the
development proceeds.
Some objections have been raised to the foregoing conclusion that what can exist does exist,
on the ground that actuality does not necessarily follow from possibility. But no one is
contending that actual existence is a necessary consequence of possible existence, as a
general proposition. What is contended is that this is true, for special reasons, in the physical
universe. Philosophers explain this as being the result of a ―principle of nature.‖ David
Hawkins, for instance, tells us that ―the principle of plenitude . . . says that all things
possible in nature are actualized.‖35 What the present development has done is to explain
why nature follows such a principle. Our finding is that the basic physical entities are scalar
motions, and that the existence of different observable entities and phenomena is due to the
fact that these motions necessarily assume specific directions when they appear in the
context of a three-dimensional frame of reference. Inasmuch as the directions are determined
by chance, there is a finite probability corresponding to every possible direction, and thus
every possibility becomes an actuality. It should be noted that this is exactly the same
principle that was applied in Chapter 3 to explain why an expanding sphere of radiation
emanates from each radiation source (a conclusion that is not challenged by anyone). In this
case, too, scalar motions exist, each of which takes one of certain permissible directions
(limited by the translational character of the motions), and these motions are distributed over
all of the directions.
Inasmuch as it has been postulated that motion, as defined earlier, is the sole constituent of
the physical universe, we are committed to the proposition that every physical entity or
phenomenon is a manifestation of motion. The determination of what entities, phenomena,
and processes can exist in the theoretical universe therefore reduces to a matter of
ascertaining what kinds of motion and combinations of motions can exist in such a universe,
and what changes can take place in these motions. Similarly, in relating the theoretical
universe to the observed physical universe, the question as to what any observed entity or
phenomenon is never arises. We always know what it is. It is a motion, a combination of
motions, or a relation between motions. The only question that is ever at issue is what kinds
of motions are involved.
There has been a sharp difference of opinion among those interested in the philosophical
aspects of science as to whether the process of enlarging scientific understanding is
―discovery‖ or ―invention.‖ This is related to the question as to the origin of the fundamental
principles of science that was discussed in Chapter 1, but it is a broader issue that applies to
all scientific knowledge, and involves the inherent nature of that knowledge. The specific
point at issue is clearly stated by R. B. Lindsay in these words:
Application of the term ―discovery‖ implies that there is an external world ―out there‖
wholly independent of the observer and with built-in regularities and laws waiting to be
uncovered and revealed. They have always been there and presumably always will be; our
task is by diligent search to find out what they are. On the other hand, the term ―invention‖
implies that the physicist uses not only his observations but his imaginative powers to
construct points of view that identify with experience.36
The ―discovery‖ concept, says Lindsay, implies that the acquisition of scientific knowledge
is cumulative, and that ultimately our understanding of the physical world should be
essentially complete. On the contrary, the ―point of view of invention means that the process
of creating new experience and the construction of new ideas to cope with this experience go
hand in hand.‖ On this basis, ―the whole activity is open-ended‖; there is no place for the
idea of completeness.
The Reciprocal System now provides a definitive answer to this question. It not only
establishes scientific investigation as a process of discovery, but reduces that discovery to
deduction and verification of the deductions. All of the information that is necessary in order
to arrive at a full description of any theoretically possible entity or phenomenon is implicit
in the postulates. A full development of the consequences of the postulates therefore defines
a complete theoretical universe.
As will be seen in the pages that follow, the physical processes of the universe include a
continuing series of interchanges between vectorial motions and scalar motions. In all of
these interchanges causality is maintained; no motions of either type occur except as a result
of previously existing motions. The concept of events occurring without cause, which enters
into some of the interpretations of the theories included in the current structure of physics, is
therefore foreign to the Reciprocal System. But the universe of motion is not deterministic in
the strict Laplacian sense, because the directions of the motions are continually being
redetermined by chance processes. The description of the physical universe that emerges
from development of the consequences of the postulates of the Reciprocal System therefore
identifies the general classes of entities and phenomena that exist in the universe, and the
relations between them, rather than specifying the exact result of every interaction, as a
similarly complete theory would do if it were deterministic.
In beginning our examination of these physical entities and phenomena, the first point to be
noted is that the postulates require the existence of real units of motion, units that are similar
to the units of motion involved in the progression of the natural reference system, except
that they actually exist, rather than being fictitious results of relating motion to an arbitrary
reference system. These independent units of motion, as we will call them, are superimposed
on the moving reference background in much the same manner as that in which matter is
supposed to exist in the basic space of previous physical theory. Since they are units of the
same kind, however, these independent units are interrelated with the units of the
background motion, rather than being separate and distinct from it, in the manner in which
matter is presumed to be distinct from the space-time background in the theories based on
the ―matter‖ concept. As we will see shortly, some of the independent motions have
components that are coincident with the background motion, and these components are not
effective from the physical standpoint; that is, their effective physical magnitude is zero.
A point of considerable significance is that while the postulates permit the existence of these
independent motions, and, on the basis of the principle previously stated, they must therefore
exist in the universe of motion defined by the postulates, those postulates do not provide any
mechanism for originating independent motions. It follows that the independent motions
now existing either originated coincidentally with the universe itself, or else were originated
subsequently by some outside agency. Likewise, the postulates provide no mechanism for
terminating the existence of these independent motions. Consequently, the number of
effective units of such motion now existing can neither be increased nor diminished by any
process within the physical system.
This inability to alter the existing number of effective units of independent motion is the
basis for what we may call the general conservation law, and the various subsidiary
conservation laws applying to specific physical phenomena. It suggests, but does not
necessarily require, a limitation of the independent units of motion to a finite number. The
issue as to the finiteness of the universe does not enter into any of the phenomena that will
be examined in this present volume, but it will come up in connection with some of the
subjects that will be taken up later, and it will be given further consideration then.
The Reciprocal System of theory deals only with the physical universe as it now exists, and
reaches no conclusions as to how that universe came into being, nor as to its ultimate fate.
The theoretical system is therefore completely neutral on the question of creation. It is
compatible with either the hypothesis of creation by some agency, or the hypothesis that the
universe has always existed. Continuous creation of matter by action of the physical
mechanism itself, as postulated by the Steady State theory of cosmology, is ruled out, and
there is nothing in that mechanism that will allow the universe to arrive at any kind of
termination of its own accord. The question of creation or termination by action of an
outside agency is beyond the scope of the theoretical development.
Turning now to the question as to what kinds of motion are possible at the basic level, we
note that scalar magnitudes may be either positive (outward, as represented in a spatial
reference system), or negative (inward). But as we observe motion in the context of a fixed
reference system, the outward progression of the natural reference system is always present,
and thus every motion includes a one-unit outward component. The discrete unit postulate
prevents any addition to an effective unit, and independent outward motion is therefore
impossible. All independent motion must have net inward or negative magnitude.
Furthermore, it must be continuous and uniform at this stage of the development, because no
mechanism is yet available whereby discontinuity or variability can be produced.
Since the outward progression always exists, independent continuous negative motion is not
possible by itself, but it can exist in combination with the ever-present outward progression.
The result of such a combination of unit negative and unit positive motion is zero motion
relative to a stationary coordinate system. Another possibility is simple harmonic motion, in
which the scalar direction of movement reverses at each end of a unit of space, or time. In
such motion, each unit of space is associated with a unit of time, as in unidirectional
translational motion, but in the context of a stationary three-dimensional spatial reference
system the motion oscillates back and forth over a single unit of space (or time) for a certain
period of time (or space).
At first glance, it might appear that the reversals of scalar direction at each end of the basic
unit are inadmissible in view of the absence of any mechanism for accomplishing a reversal.
However, the changes of scalar direction in simple harmonic motion are actually continuous
and uniform, as can be seen from the fact that such motion is a projection of circular motion
on a diameter. The net effective speed varies continuously and uniformly from +1 at the
midpoint of the forward movement to zero at the positive end of the path of motion, and
then to -1 at the midpoint of the reverse movement and zero at the negative end of the path.
The continuity and uniformity requirements are met both by a continuous, uniform change
of direction, and by a continuous, uniform change of magnitude.
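The projection relation invoked here can be written out explicitly. Taking the angular position θ of the underlying circular motion as the variable, and the half-length of the path as the unit (an illustrative normalization, not a quantity from the postulates), the displacement and the net effective speed are

    s(\theta) = \sin\theta, \qquad \frac{ds}{d\theta} = \cos\theta,

which gives +1 at the midpoint of the forward movement (θ = 0), zero at the positive end of the path (θ = π/2), -1 at the midpoint of the reverse movement (θ = π), and zero again at the negative end (θ = 3π/2), with no break at any point.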
As pointed out earlier, the theoretical structure that we are developing from the fundamental
postulates is a description of what can exist in the theoretical universe of motion defined by
those postulates. The question as to whether a certain feature of this theoretical universe
does or does not correspond to anything in the actual physical universe is a separate issue
that is explored in a subsequent step in the project, to be started shortly, in which the
theoretical universe is compared item by item with the observed universe. At the moment,
therefore, we are not concerned with whether or not simple harmonic motion does exist in
the actual physical universe, why it exists, if it does, or how it manifests itself. All that we
need to know for present purposes is that inasmuch as this kind of motion is continuous, and
is not excluded by the postulates, it is one of the kinds of motion that exists in the theoretical
universe of motion, under the most basic conditions.
Under these conditions simple harmonic motion is confined to individual units. When the
motion has progressed for one full unit, the discrete unit postulate specifies that a boundary
exists. There is no discontinuity, but at this boundary one unit terminates and another begins.
Whatever processes may have been under way in the first unit cannot carry over into the
next. They cannot be divided between two totally independent units. Consequently, a
continuous change from positive to negative can occur only within one unit of either space
or time.
As explained in Chapter 3, motion, as herein defined, is a continuous process–a
progression–not a succession of jumps. There is progression even within the units, simply
because these are units of progression, or scalar motion. For this reason, specific points
within the unit–the midpoint, for example–can be identified, even though they do not exist
independently. The same is true of the chain used as an analogy in the preceding discussion.
Although the chain exists only in discrete units, or links, we can distinguish various portions
of a link. For instance, if we utilize the chain as a means of measurement, we can measure
10-1/2 links, even though half of a link would not qualify as part of the chain. Because of
this capability of identifying the different portions of the unit, we see the vibrating unit as
following a definite path.
In defining this path we will need to give some detailed consideration to the matter of
direction. In the first edition the word ―direction‖ was used in four different senses.
Exception was taken to this practice by a number of readers, who suggested that it would be
helpful if ―direction‖ were employed with only one significance, and different names were
attached to the other three concepts. When considered purely from a technical standpoint,
the earlier terminology is not open to legitimate criticism, as using words in more than one
sense is unavoidable in the English language. However, anything that can be done to
facilitate understanding of the presentation is worth serious consideration. Unfortunately,
there is no suitable substitute for ―direction‖ in most of these applications.
Some of the objections to the previous terminology were based on the ground that scalar
quantities, by definition, have no direction, and that using the term ―direction‖ in application
to these quantities, as well as to vectorial quantities, is contradictory and leads to confusion.
There is merit in this objection, to be sure, in any application where we deal with scalar
quantities merely as positive and negative magnitudes. But as soon as we view the scalar
motions in the context of a fixed spatial reference system, and begin talking about ―outward‖
and ―inward,‖ as we must do in this work, we are referring not to the scalar magnitudes
themselves, but to the representation of these magnitudes in a stationary spatial reference
system, a representation that is necessarily directional. The use of directional language in
this case therefore appears to be unavoidable.
There are likewise some compelling reasons for continuing to use the term ―direction in
time‖ in application to the temporal property analogous to the spatial property of direction.
We could, of course, coin a new word for this purpose, and there would no doubt be certain
advantages in so doing. But there are also some very definite advantages to be gained by
utilizing the word ―direction‖ with reference to time as well as with reference to space.
Because of the symmetry of space and time, the property of time that corresponds to the
familiar property of space that we call ―direction‖ has exactly the same characteristics as the
spatial property, and by using the term ―direction in time,‖ or ―temporal direction,‖ as a
name for this property we convey an immediate understanding of its nature and
characteristics that would otherwise require a great deal of discussion and explanation. All
that is then necessary is to keep in mind that although direction in time is like direction in
space, it is not direction in space.
Actually, it should not be difficult to get away from the habit of always interpreting
―direction‖ as meaning ―direction in space‖ when we are dealing with motion. We already
recognize that there is no spatial connotation attached to the term when it is used elsewhere.
We habitually use ―direction‖ and directional terms of one kind or another in speaking of
scalar quantities, or even in connection with items that cannot be expressed in physical
imagery at all. We speak of wages and prices as moving in the same direction, temperature
as going up or down, a change in the direction of our thinking, and so on. Here we realize
that we are using the word ―direction‖ without any spatial significance. There should be no
serious obstacle in the way of a similar conception of the meaning of ―direction in time.‖
In this edition the term ―direction‖ will not be used in referring to the deviations upward or
downward from unit speed. In the other senses in which the term was originally used it
seems essential to continue utilizing directional language, but as an alternative to the further
limitations on the use of the term ―direction‖ that have been suggested we will use
qualifying adjectives wherever the meaning of the term is not obvious from the context.
On this basis vectorial direction is a specific direction that can be fully represented in a
stationary coordinate system. Scalar direction is simply outward or inward, the spatial
representation of positive or negative scalar magnitudes respectively. Wherever the term
―direction‖ is used without qualification it will refer to vectorial direction. If there is any
question as to whether the direction (scalar or vectorial) under consideration is a direction in
space or a direction in time, this information will also be given.
Vectorial motion is motion with an inherent vectorial direction. Scalar motion is inward or
outward motion that has no inherent vectorial direction, but is given a direction by the
factors involved in its relation to the reference system. This imputed vectorial direction is
independent of the scalar direction except to the extent that the same factors may, in some
instances, affect both. As an analogy, we may consider a motor car. The motion of this car
has a direction in three-dimensional space, a vectorial direction, while at the same time it has
a scalar direction, in that it is moving either forward or backward. As a general proposition,
the vectorial direction of this vehicle is independent of its scalar direction. The car can run
forward in any vectorial direction, or backward in any direction.
If the car is symmetrically constructed so that the front and back are indistinguishable, we
cannot tell from direct observation whether it is moving forward or backward. The same is
true in the case of the simple scalar motions. For example, we will find in the pages that
follow that the scalar direction of a falling object is inward, whereas the scalar direction of a
beam of light is outward. If the two happen to traverse the same path in the same vectorial
direction, as they may very well do, there is nothing observable that will distinguish between
the inward and outward motion. In the usual situation the scalar direction of an observed
motion must be determined from collateral information independently of the observed
vectorial direction.
The magnitude of a simple harmonic motion, like that of any other motion, is a speed, the
relation between the number of units of space and the number of units of time participating
in the motion. The basic relation, one unit of space per unit of time, always remains the
same, but by reason of the directional reversals, which result in traversing the same unit of
one component repeatedly, the speed of a simple harmonic motion, as it appears in a fixed
reference system, is 1/x (or x/1). This means that each advance of one unit in space (or time)
is followed by a series of reversals of scalar direction that increase the number of units of
time (or space) to x, before another advance in space (or time) takes place. At this point the
scalar direction remains constant for one unit, after which another series of reversals takes
place.
Ordinarily the vectorial direction reverses in unison with the scalar direction, but each end of
a unit is the reference point for the position of the next unit in the reference system, and
conformity with the scalar reversals is therefore not mandatory. Consequently, in order to
maintain continuity in the relation of the vectorial motion to the fixed reference system the
vectorial direction continues the regular reversals at the points where the scalar motion
advances to a new unit of space (or time). The relation between the scalar and vectorial
directions is illustrated in the following tabulation, which represents two sections of a 1/3
simple harmonic motion. The vectorial directions are expressed in terms of the way the
movement would appear from some point not in the line of motion.
Unit Number    Scalar Direction    Vectorial Direction
     1              inward               right
     2              outward              left
     3              inward               right
     4              inward               left
     5              outward              right
     6              inward               left
The simple harmonic motion thus remains permanently in a fixed position in the dimension
of motion, as seen in the context of a stationary reference system; that is, it is an oscillatory,
or vibratory, motion. An alternative to this pattern of reversals will be discussed in Chapter
8.
Like all other absolute locations, the absolute location occupied by the vibrating unit, the
unit of simple harmonic motion, is carried outward by the progression of the natural
reference system, and since the linear motion of the vibrating unit has no component in the
dimensions perpendicular to the line of oscillation, the outward progression at unit speed
takes place in one of these free dimensions. Inasmuch as this outward progression is
continuous within the unit as well as from one unit of the reference frame to the next, the
combination of a vibratory motion and a linear motion perpendicular to the line of vibration
results in a path which has the form of a sine curve.
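A minimal sketch of this combination in Python: one coordinate oscillates back and forth over a single unit of space while the perpendicular coordinate progresses outward at unit speed, so that the sampled path traces a sine curve. The frequency value and the sampling step are illustrative assumptions:

    import math

    def sine_path(frequency, n_samples=9):
        # x: the outward progression at unit speed in a free dimension.
        # y: the oscillation, confined to a single unit of space centred on zero.
        points = []
        for i in range(n_samples):
            t = i / (n_samples - 1)
            x = t
            y = 0.5 * math.sin(2.0 * math.pi * frequency * t)
            points.append((x, y))
        return points

    for x, y in sine_path(frequency=2.0):
        print(f"{x:.3f}  {y:+.3f}")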
Because of the dimensional relationship between the oscillation and the linear progression,
there is a corresponding relation between the vectorial directions of these two components of
the total motion, as seen in the context of a stationary reference system, but this relation is
fixed only between these two components. The position of the plane of vibration in the
stationary spatial system of reference is determined by chance, or by the characteristics of
the originating object.
Although the basic one-to-one space-time ratio is maintained in the simple harmonic motion,
and the only change that takes place is from positive to negative and vice versa, the net
effect, from the standpoint of a fixed system of reference is to confine one component–either
space or time–to a single unit, while the other component extends to n units. The motion can
thus be measured in terms of the number of oscillations per unit of time, a frequency,
although it is apparent from the foregoing explanation that it is actually a speed. The
conventional measurement in terms of frequency is possible only because the magnitude of
the space (or time) term remains constant at unity.
This oscillating unit, the first manifestation of independent motion (that is, motion that is separate and distinct from the outward motion of the natural reference system) to emerge from the theoretical development, is the first physical object. In the motion of
this object we also have the first instance of ―something moving.‖ Up to this time we have
been considering only the basic motions, relations between space and time that do not
involve movement of any ―thing.‖ Experience in presenting the theory to college audiences
has indicated that many persons are unable to conceive of the existence of motion without
something moving, and are inclined to argue that this is impossible. It should be realized,
however, that we are definitely committed to this concept just as soon as we postulate a
universe composed entirely of motion. In such a universe, ―things‖ are combinations of
motions, and motion is thus logically prior to ―things.‖
The concept of a universe of motion is generally conceded to be reasonable and rational.
The long list of prominent and not-so-prominent scientists and philosophers who have
essayed to explore the implications of such a concept is sufficient confirmation of this point.
It follows that unless some definite and positive conflict with reason or experience is
encountered, the necessary consequences of that concept must also be presumed to be
reasonable and rational, even though some of them may conflict with long-standing beliefs
of some kind.
There is no mathematical obstacle to this unfamiliar type of motion. We have defined
motion, for purposes of a theory of a universe of motion, by means of the relation expressed
in the equation of motion: v = s/t. This equation does not require the existence of any
moving object. Even where the motion actually is motion of something, that ―something‖
does not enter into any of the terms of the equation, the mathematical representation of the
motion. The only purpose that it serves is to identify the particular motion under
consideration. But identification is also possible where there is nothing moving. If, for
example, we say that the motion we are talking about is the motion of atom A, we are
identifying a particular motion, and distinguishing it from all other motions, but if we refer
to the motion which constitutes atom A, we are identifying this motion (or combination of
motions) on an equally definite basis, even though this is not motion of anything.
A careful consideration of the points brought out in the foregoing discussion will make it
clear that the objections that have been raised to the concept of motion without anything
moving are not based on logical grounds. They stem from the fact that the idea of simple
motion of this kind, merely a relation between space and time, is new and unfamiliar. None
of us likes to discard familiar ideas of long standing and replace them with something new
and different, but this is part of the price that we pay for progress.
This will be an appropriate time to emphasize that combinations or other modifications of
existing motions can only be accomplished by adding or removing units of motion. As
stated in Chapter 2, neither space nor time exists independently. Each exists only in
association with the other as motion. Consequently, a speed 1/a cannot be changed to a
speed 1/b by adding b-a units of time. Such a change can only be accomplished by
superimposing a new motion on the motion that is to be altered.
In carrying out the two different operations that were involved in the investigation from
which the results reported herein were derived, it would have been possible to perform them
separately; first developing the theoretical structure as far as circumstances would permit,
and then comparing this structure with the observed features of the physical universe. In
practice, however, it was more convenient to identify the various theoretical features with
the corresponding physical features as the work progressed, so that the correlations would
serve as a running check on the accuracy of the theoretical conclusions. Furthermore, this
policy eliminated the need for the separate system of terminology that otherwise would have
been required for referring to the various features of the theoretical universe during the
process of the theoretical development.
Much the same considerations apply to the presentation of the results, and we will therefore
identify each theoretical feature as it emerges from the development, and will refer to it by
the name that is customarily applied to the corresponding physical feature. It should be
emphasized, however, that this hand in hand method of presentation is purely an aid to
understanding. It does not alter the fact that the theoretical universe is being developed
entirely by deduction from the postulates. No empirical information is being introduced into
the theoretical structure at any point. All of the theoretical features are purely theoretical,
with no empirical content whatever. The agreement between theory and observation that we
will find as we go along is not a result of basing the theoretical conclusions on appropriate
empirical premises; it comes about because the theoretical system is a true and accurate
representation of the actual physical situation.
Identification of the theoretical unit of simple harmonic motion that we have been
considering presents no problem. It is obvious that each of these units is a photon. The
process of emission and movement of the photons is radiation. The space-time ratio of the
vibration is the frequency of the radiation, and the unit speed of the outward progression is
the speed of radiation, more familiarly known as the speed of light.
When considered merely as vibrating units, there is no distinction between one photon and
another except in the speed of vibration, or frequency. The unit level, where speed 1/n
changes to n/1, cannot be identified in any directly observable way. We will find, however,
that there is a significant difference between the manner in which the photons of vibrational
speed 1/n enter into combinations of motions and the corresponding behavior of photons of
vibrational speed greater than unity. This difference will be examined in detail in the
chapters that follow.
One of the things that we can expect a correct theory of the structure of the universe to do is
to clear up the discrepancies and "paradoxes" of previously existing scientific thought. Here,
in the explanation of the nature of radiation that emerges from the development of theory,
we find this expectation dramatically fulfilled. In conventional thinking the concepts of
"wave" and "particle" are mutually exclusive, and the empirical discovery that radiation acts
in some respects as a wave phenomenon, and in other respects as an assembly of particles,
has confronted physical science with a very disturbing paradox. Almost at the outset of our
development of the consequences of the postulates that define a universe of motion we find
that in such a universe there is a very simple explanation. The photon acts as a particle in
emission and absorption because it has the distinctive feature of a particle: it is a discrete
unit. In transmission it behaves as a wave because the combination of its own inherent
vibratory motion with the translatory motion of the progression of the natural reference
system causes it to follow a wave-like path. In this case the problem that seemed impossible
to solve while radiation was looked upon as a single entity loses all of its difficult features as
soon as it is recognized as a combination of two different things.
Another difficult problem with respect to radiation has been to explain how it can be
propagated through space without some kind of a medium. This problem has never been
solved other than by what was described by R. H. Dicke as a "semantic trick"; that is,
assuming, entirely ad hoc, that space has the properties of a medium.
One suspects that, with empty space having so many properties, all that had been
accomplished in destroying the ether was a semantic trick. The ether had been renamed the
vacuum.5
Einstein did not challenge this conclusion expressed by Dicke. On the contrary, "he freely
admitted" not only that his theory still employed a medium, but also that this medium is
indistinguishable, other than semantically, from the "ether" of previous theories. The
following statements from his works are typical:
We may say that according to the general theory of relativity space is endowed with physical
qualities; in this sense, therefore, there exists an ether.37
We shall say: our space has the physical property of transmitting waves, and so omit the use
of a word (ether) we have decided to avoid.38
Thus the relativity theory does not resolve the problem. There is no evidence to support
Einstein's assumption that space has the properties of a medium, or that it has any physical
properties at all. The fact that no method of propagating radiation through space without a
medium has ever been conceived is therefore still unreconciled with the absence of any
evidence of the existence of a medium. In the theoretical universe of the Reciprocal System
the problem does not arise, since the photon remains in the same absolute location in which
it originates, as any object without independent motion must do. With respect to the natural
reference system it does not move at all, and the movement that is observed in the context of
a stationary reference system is a movement of the natural reference system relative to the
stationary system, not a movement of the photon itself.
In both the propagation question and the wave-particle issue the resolution of the problem is
accomplished in the same manner. Instead of explaining why the seemingly complicated
phenomena are complex and perplexing, the Reciprocal System of theory removes the
complexity and reduces the phenomena to simple terms. As other long-standing problems
are examined in the course of the subsequent development we will find that this conceptual
simplicity is a general characteristic of the new theoretical structure.

CHAPTER 5

Gravitation
Another type of motion that is permitted by the postulates, and therefore exists in the
theoretical universe, is rotation. Before rotational motion can take place, however, there
must exist some physical object (independent motion) that can rotate. This is purely a
matter of geometry. We are still in the stage of the development where we are dealing
only with scalar motions, and a single scalar motion cannot produce the directional
characteristics of rotation. Like the sine curve of the photon, they require a combination of
motions: a compound motion, we may say. Thus, while motion is possible without
anything moving, rotation is not possible unless some physical object is available to be
rotated. The photon of radiation is such an independent motion, or physical object, and it
is evident, from the limitations that apply to the kinds of motion that are possible at this
stage of the development, that it is the only primary unit that meets the requirement.
Simple rotation is therefore rotation of the photon.
In our ordinary experience rotation is a vectorial motion, and its direction (a vectorial
direction) is relative to a fixed spatial system of reference. In the absence of other motion,
an object rotating vectorially remains stationary in the fixed system. However, any
motion of a photon is a scalar motion, inasmuch as the mechanism required for the
production of vectorial motion is not yet available at this stage of the development. A
scalar motion has an inherent scalar direction (inward or outward), and it is given a
vectorial direction by the manner in which the scalar motion appears in the fixed
coordinate system.
As brought out in Chapter 4, the net scalar direction of independent motion is inward.
The significance of the term "net" in this statement is that a compound motion may
include an outward component providing that the magnitude of the inward component of
that motion is great enough to give the motion as a whole the inward direction. Since the
vectorial direction that this inward motion assumes in a fixed reference system is
independent of the scalar direction, the motion can take any vectorial direction that is
permitted by the geometry of three-dimensional space. One such possibility is rotation.
The special characteristic of rotation that distinguishes it from the simple harmonic
motion previously considered is that in rotation the changes in vectorial direction are
continuous and uniform, so that the motion is always forward, rather than oscillating back
and forth. Consequently, there is no reason for any change in scalar direction, and the
motion continues in the inward direction irrespective of the vectorial changes. Scalar
rotation thus differs from inherently vectorial rotation in that it involves a translational
inward movement as well as the purely rotational movement. A rolling motion is a good
analogy, although the mechanism is different. The rolling motion is one motion, not a
rotation and a translational motion. It is the rotation that carries the rolling object forward
translationally. Similarly, the scalar rotation is only one motion, even though it has a
translational effect that is absent in the case of vectorial rotation.
To illustrate the essential difference between rotation and simple harmonic motion, let us
return to the automobile analogy. If the car is on a very narrow road, analogous to the
one-dimensional path of vibration of the photon, and it runs forward in moving north,
then when it reverses its vectorial direction and moves south it also reverses its scalar
direction and runs backward. But if the car is on a circular track and starts moving
forward, it continues moving forward regardless of the changes in vectorial direction that
are taking place.
The vectorial direction of the inward translational movement of the rotating photon, like
the vectorial direction of the non-rotating photon, is a result of viewing the motion in the
context of an arbitrary reference system, rather than an inherent property of the motion
itself. It is therefore determined entirely by chance. However, the non-rotating photon
remains in the same absolute location permanently, unless acted upon by some outside
agency, and the direction determined at the time of emission is therefore also permanent.
The rotating photon, on the other hand is continually moving from one absolute location
to another as it travels back along the line of the progression of the natural reference
system, and each time it enters a new absolute location the vectorial direction is
redetermined by the chance process. Inasmuch as all directions are equally probable, the
motion is distributed uniformly among all of them in the long run. A rotating photon
therefore moves inward toward all space-time locations other than the one that it happens
to occupy momentarily. Coincidentally, it continues to move outward by reason of the
progression of the reference system, but the net motion of the observable aggregates of
rotating photons in our immediate environment is inward. The determination of the
vectorial direction corresponding to "outward" automatically determines the vectorial
direction of "inward" in each case, inasmuch as one is the reverse of the other.
Some of the readers of the first edition found the concept of "inward motion" rather
difficult. This was probably due to looking at the situation on the basis of a single
reference point. "Outward" from such a point is easily visualized, whereas "inward" has
no meaning under the circumstances. But the non-rotating photon does not merely move
outward from the point of emission; it moves outward from all locations in the manner of
a spot on the surface of an expanding balloon. Similarly, the rotating photon moves
inward toward all locations in the manner of a spot on the surface of a contracting
balloon. The outward motion is simply the spatial representation of an increasing scalar
magnitude, whereas the inward motion is the spatial representation of a decreasing scalar
magnitude. If that decreasing magnitude reaches zero, it continues as an increasing
negative magnitude; that is, if the object which was moving inward toward a certain
location eventually arrives at that location, it continues in motion beyond that point
(providing that nothing intervenes).
Since space and time locations cannot be identified by observation, neither inward nor
outward motion can be recognized as such. It is possible, however, to observe the
changes in relations between the moving objects and other physical structures. The
photons of radiation, for instance, are observed to be moving outward from the emitting
objects. Similarly, each of the rotating photons in the local environment is moving toward
all other rotating photons, by reason of the inward motion in space in which all
participate, and the change of relative position in space can be observed. This second
class of identifiable objects in the theoretical universe thus manifests itself to observation
as a number of individual units, which continually move inward toward each other.
Here, again, the identification of the physical counterparts of the theoretical phenomena
is a simple matter. The inward motion in all directions of space is gravitation, and the
rotating photons are the physical objects that gravitate; that is, atoms and particles.
Collectively, the atoms and particles constitute matter.
As in the case of radiation, the new theoretical development leads to a very simple
explanation of a hitherto unexplained phenomenon. Previous investigators in this area
have arrived at a reasonably good understanding of the physical effects of gravitation, but
they are completely at sea as to how it originated, and how the apparent gravitational
effect is propagated. Our finding is that these previous investigators have misunderstood
the nature of the gravitational phenomenon.
Except at extreme distances, each unit or aggregate of matter in the observed physical
universe continually moves toward all others, unless restrained in some way. It has
therefore been taken for granted that each particle of matter is exerting a force of
attraction on the others. However, when we examine the characteristics of that presumed
force we find that it is something of a very peculiar nature, totally unlike the forces of
ordinary experience. As nearly as we can determine from observation, the gravitational
"force" acts instantaneously, without an intervening medium, and in such a manner that it
cannot be screened off or modified in any way. These observed characteristics are so
difficult to explain theoretically that the theorists have given up the search for an
explanation, and are now taking the stand that the observations must, for some unknown
reason, be wrong.
Even though all practical gravitational calculations, including those at astronomical
distances, are carried out on the basis of instantaneous action, without introducing any
inconsistencies, and even though the concept of a force which is wholly dependent upon
position in space being propagated through space is self-contradictory, the theorists take
the stand that since they are unable to devise a theory to account for instantaneous action,
the gravitational force must be propagated at a finite velocity, all evidence to the contrary
notwithstanding. And even though there is not the slightest evidence of the existence of
any medium in space, or the existence of any medium-like properties of space, the
theorists also insist that since they are unable to devise a theory without a medium or
something that has the properties of a medium, such an entity must exist, in spite of the
negative evidence.
There are many places in accepted scientific thought where the necessity of facing up to
clear evidence from observation or experiment is avoided by the use of one or more of
the evasive devices that the modern theorists have invented for the purpose, but this
gravitational situation is probably the only major instance in which the empirical
evidence is openly and categorically defied. While the total lack of any explanation of the
gravitational phenomenon that is consistent with the observations has undoubtedly been
the primary cause of the flagrantly unscientific attitude that has been taken here, an
erroneous belief concerning the nature of electromagnetic radiation has been a significant
contributing factor.
The enormous extension of the known range of radiation frequencies in modern times has
been accomplished mainly through the generation of these additional frequencies by
electrical means, and it has come to be believed that there is a unique connection between
radiation and electrical processes, that radiation is the carrier by means of which
electrical and magnetic effects are propagated. From this it is only a short step to the
conclusion that there must also be gravitational waves, carriers of gravitational energy.
"Such (gravitational) waves resemble electromagnetic waves," says Joseph Weber, who
has been carrying on an extensive search for these hypothetical waves for many years.
The theoretical development in the preceding pages shows that this presumed analogy
does not represent the reality of the universe of motion.
In that universe radiation and gravitation are phenomena of a totally different order. But
it is worth noting that radical differences between these two types of phenomena are also
apparent in the information that is available from empirical sources. That information is
simply ignored in current practice because it conflicts with the popular theories of the
moment.
Radiation is a process whereby energy is transferred from a material aggregate at some
particular location in space (or time) to other spatial (or temporal) locations. Each photon
has a definite frequency of vibration and a corresponding energy content; hence these
photons are essentially traveling units of energy. The emitting agency loses a specific
amount of energy whenever a photon leaves. This energy travels through the intervening
space (or time) until the photon encounters a unit of matter with which it is able to
interact, whereupon the energy is transferred, wholly or in part, to this matter. At either
end of the path the energy is recognizable as such, and is readily interchangeable with
other types of energy. The radiant energy of the impinging photon may, for instance, be
converted into kinetic energy (heat), or into electrical energy (the photoelectric effect), or
into chemical energy (photochemical action). Similarly, any of these other types of
energy, which may exist at the point of emission of the radiation, can be converted into
radiation by appropriate processes.
The gravitational situation is entirely different. Gravitational energy is not
interchangeable with other forms of energy. At any specific location with respect to other
masses, a mass unit possesses a definite amount of gravitational (potential) energy, and it
is impossible to increase or decrease this energy content by conversion from or to other
forms of energy. It is true that a change in location results in a release or absorption of
energy, but the gravitational energy which the mass possesses at point A cannot be
converted to any other type of energy at point A, nor can the gravitational energy at A be
transferred unchanged to any other point B (except along equipotential lines). The only
energy that makes its appearance in any other form at point B is that portion of the
gravitational energy which the mass possessed at point A that it can no longer have at
point B: a fixed amount determined entirely by the difference in location.
Radiant energy remains constant while traveling in space, but can vary almost without
limit at any specific location. The behavior of gravitation is the exact opposite. The
gravitational effect remains constant at any specific location, but varies if the mass moves
from one location to another, unless the movement is along an equipotential line. Energy
is defined as capability of doing work. Kinetic energy, for example, qualifies under this
definition, and any kind of energy that can be freely converted to kinetic energy likewise
qualifies. But gravitational energy is not capable of doing work as a general proposition.
It will do one thing, and that thing only: it will move masses inward toward each other. If
this motion is permitted to take place, the gravitational energy decreases, and the
decrement makes its appearance as kinetic energy, which can then be utilized in the
normal manner. But unless gravitation is allowed to do this one thing which it is capable
of doing, the gravitational energy is completely unavailable. It cannot do anything itself,
nor can it be converted to any form of energy that can do something.
The mass itself can theoretically be converted to kinetic energy, but this internal energy
equivalent of the mass is something totally different from the gravitational energy. It is
entirely independent of position with reference to other masses. Gravitational, or
potential, energy, on the other hand, is purely energy of position; that is, for any two
specific masses, the mutual potential energy is determined solely by their spatial
separation. But energy of position in space cannot be propagated in space; the concept of
transmitting this energy from one spatial position to another is totally incompatible with
the fact that the magnitude of the energy is determined by the spatial location.
Propagation of gravitation is therefore inherently impossible. The gravitational action is
necessarily instantaneous as Newton's Law indicates, and as has always been assumed for
purposes of calculation.
It is particularly significant, therefore, that the theoretical characteristics of gravitation, as
derived from the postulates of the Reciprocal System, are in full agreement with the
empirical observations, strange as these observations may seem. In the theoretical
universe of motion gravitation is not an action of one aggregate of matter on another, as it
appears to be. It is simply an inward motion of the material units, an inherent property of
the atoms and particles of matter. The same motion that makes an atom an atom also
causes it to gravitate. Each atom and each aggregate is pursuing its own course
independently of all others, but because each unit is moving inward in space, it is moving
toward all other units, and this gives the appearance of a mutual interaction. These
theoretical inward motions, totally independent of each other, necessarily have just the
kind of characteristics that are observed in gravitation. The change in the relative position
of two objects due to the independent motions of each occurs instantaneously, and there
is nothing propagated from one to the other through a medium, or in any other way.
Whatever exists, or occurs, in the intervening space can have no effect on the results of
the independent motions.
One of the questions that is frequently asked is how this finding that the gravitational
motion of each aggregate is completely independent of all others can be reconciled with
the observed fact that the direction of the (apparent) mutual gravitational force between
two objects changes if either object moves. On the face of it, there appears to be a
necessity for some kind of an interaction. The explanation is that the gravitational motion
of an object never changes, either in amount or in direction. It is always directed from the
location of the gravitating unit toward all other space and time locations. But we cannot
observe the motion of an object inward in space; we can only observe its motion relative
to other objects whose presence we can detect. The motion of each object therefore
appears to be directed toward the other objects, but, in fact, it is directed toward all
locations in space and time irrespective of whether or not they happen to be occupied.
Whatever changes appear to take place in the gravitational phenomena by reason of
change of position of any of the gravitating masses are not changes in the gravitational
motions (or forces) but changes in our ability to detect those motions.
Let us assume a mass unit X occupying location a, and moving gravitationally toward
locations b and c. If these locations are not occupied, then we cannot detect this motion at
all. If location b is occupied by mass unit Y, then we see X moving toward Y; that is, we
can now observe the motion of X toward location b, but its motion toward location c is
still unobservable. The observable gravitational motion of Y is toward X and has the
direction ba.
Now if we assume that Y moves to location c, what happens? The essence of the theory is
that the motion of X is not changed at all; it is entirely independent of the position of
object Y. But we are now able to observe the motion of X toward c because there is a
physical object at that location, whereas we are no longer able to observe the motion of X
toward location b, even though that motion exists just as definitely as before. The
direction of the gravitational motion (or force) of X thus appears to have changed, but
what has actually happened is that some previously unobservable motion has become
observable, while some previously observable motion has become unobservable. The
same is true of the motion of object Y. It now appears to be moving in the direction ca
rather than in the direction ba, but here again there has been no actual change, other than
the change in the position of Y. Gravitationally, Y is moving in all directions at all times,
irrespective of whether or not that motion is observable.
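As an aid to visualization only, the bookkeeping of the preceding example can be sketched in the following form. The function name and the representation of the locations a, b, and c are introduced here merely for the illustration; the sketch models observability, not any physical mechanism.

    # X at location a moves gravitationally toward every location, but only the
    # components directed toward occupied locations can be detected.
    def observable_motions(x, all_locations, occupied):
        return [loc for loc in all_locations if loc != x and loc in occupied]

    locations = ["a", "b", "c"]

    # Y at b: the motion of X toward b is observable; its motion toward c is not.
    print(observable_motions("a", locations, {"a", "b"}))   # ['b']

    # Y moves to c: the motion of X itself is unchanged, but only the component
    # toward c is now observable, so the apparent direction of the force changes.
    print(observable_motions("a", locations, {"a", "c"}))   # ['c']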
The foregoing explanation has been presented in terms of individual mass units, rather
than aggregates, as the basic question with respect to the effect of variable mass on the
gravitational motion has not yet been considered. The discussion will be extended to the
multiple units in the next chapter.
As emphasized in Chapter 3, the identification of a second general force, or motion, to
which all matter is subject, provides the much needed "antagonist" to gravitation, and
enables us to explain many phenomena that have never been satisfactorily explained on the
basis of only one general force. It is the interaction of these two general forces that
determines the course of major physical events. The controlling factor is the distance
intervening between the objects that are involved. Inasmuch as the progression of space
and time is merely a manifestation of the movement of the natural reference system with
respect to the conventional stationary system of reference, the space progression
originates everywhere, and its magnitude is always the same, one unit of space per unit of
time. Gravitation, on the other hand, originates at the specific locations which the
gravitating objects happen to occupy. Its effects are therefore distributed over a volume
of extension space the size of which varies with the distance from the material object. In
three-dimensional space, the fraction of the inward motion directed toward a unit area at
distance d from the object is inversely proportional to the total area at that distance; that
is, to the surface of a sphere of radius d. The effective portion of the total inward motion
is thus inversely proportional to d². This is the inverse square law to which gravitation
conforms, according to empirical findings.
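The geometric step in this argument can be written out explicitly. The following minimal sketch (illustrative only) computes the fraction of the total inward motion intercepted by one unit of area at distance d, which is simply the reciprocal of the spherical surface area.

    import math

    # Share of the inward motion originating at the object that is directed toward
    # one unit of area at distance d: the reciprocal of the spherical surface 4*pi*d^2.
    def fraction_per_unit_area(d):
        return 1.0 / (4.0 * math.pi * d ** 2)

    for d in (1, 2, 4):
        print(d, fraction_per_unit_area(d))
    # Doubling the distance reduces the effective portion to one quarter: the
    # inverse square relation to which gravitation is observed to conform.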
The net resultant of these two general motions in each specific case depends on their
relative magnitudes. At the shorter distances gravitation predominates, and in the realm
of ordinary experience all aggregates of matter are subject to net gravitational motions (or
forces). But since the progression of the natural reference system is constant at unit
speed, while the opposing gravitational motion is attenuated by distance in accordance
with the inverse square law, it follows that at some specific distance, the gravitational
limit of the aggregate of matter under consideration, the motions reach equality. Beyond
this point the net movement is outward, increasing toward the speed of light as the
gravitational effect continues to decrease.
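The position of the gravitational limit follows directly from the two statements above. In the illustrative sketch below, speeds are expressed in natural units in which the outward progression is 1, and k, the strength of the gravitational motion at unit distance, is a hypothetical parameter chosen arbitrarily for the illustration.

    # Net outward speed in natural units: unit progression minus an inverse-square
    # gravitational motion. The constant k = 100 is an arbitrary illustrative choice.
    def net_outward_speed(d, k=100.0):
        return 1.0 - k / d ** 2

    gravitational_limit = 100.0 ** 0.5   # distance at which the two motions are equal
    print("gravitational limit:", gravitational_limit)

    for d in (5, 10, 20, 1000):
        print(d, net_outward_speed(d))
    # Inside d = 10 the net motion is inward (negative values); beyond d = 10 it is
    # outward, approaching unit speed (the speed of light) as the distance increases.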
As a rough analogy, we may visualize a moving belt traveling outward from a central
location and carrying an assortment of cubes and balls. The outward travel of the belt
represents the progression of the natural reference system. The cubes are analogous to the
photons of radiation. Having no independent mobility of their own, they must necessarily
remain permanently at whatever locations on the belt they occupy initially, and they
therefore move outward from the point of origin at the full speed of the belt. The balls,
however, can be caused to rotate, and if the rotation is in the direction opposite to the
travel of the belt and the rotational speed is high enough, the balls will move inward
instead of outward. These balls represent the atoms of matter, and the inward motion
opposite to the direction of the travel of the belt is analogous to gravitation.
We could include the distance factor in the analogy by introducing some means of
varying the speed of rotation of the balls with the distance from the central point. Under
this arrangement the closer balls would still move inward, but at some point farther out
there would be an equilibrium, and beyond this point the balls would move outward.
The analogy is incomplete, particularly in that the mechanism whereby the rotation of the
balls causes them to move inward translationally is not the same as that which causes the
inward motion of the atoms. Nevertheless, it does show quite clearly that under
appropriate conditions a rotational motion can cause a translational displacement, and it
gives us a good picture of the general relations between the progression of the natural
reference system, gravitational motion, and the travel of the photons of radiation.
All aggregates of matter smaller than the largest existing units are under the gravitational
control of larger aggregates; that is, they are within the gravitational limits of those larger
units. Consequently, they are not able to continue the outward movement that would take
place in the absence of the larger bodies. The largest existing aggregates are not limited
in this manner, and on the basis of the principles that have been stated, any two such
aggregates that are outside their mutual gravitational limits recede from each other at
speeds increasing with the distance.
In the observed physical universe, the largest aggregates of matter are galaxies.
According to the foregoing theoretical findings, the distant galaxies should be receding
from the earth at extremely high speeds increasing with distance up to the speed of light,
which will be reached where the gravitational effect is reduced to a negligible level. Until
quite recently, this theoretical conclusion would have been received with extreme
skepticism, as it conflicts with what was then the accepted thinking, and there was no
way in which it could be subjected to a test. But recent astronomical advances have
changed this situation. Present-day instruments are able to reach out to distances so great
that the effect of gravitation is minimal, and the observations with this improved
equipment show that the galaxies are behaving in exactly the manner predicted by the
new theory.
In the meantime, however, the astronomers have been trying to account for this galactic
recession in some manner consistent with present astronomical views, and they have
devised an explanation in which they assume, entirely ad hoc, that there was an enormous
explosion at some singular point in the past history of the universe which hurled the
galaxies out into space at their present fantastically high speeds. If one were to be called
upon to decide which is the better explanation–one which rests upon an ad hoc
assumption of an event far out of the range of known physical phenomena, or one which
finds the recession to be an immediate and direct consequence of the fundamental nature
of the universe–there can hardly be any question as to the decision. But, in reality, this
question does not arise, as the case in favor of the theory of a universe of motion is not
based on the contention that it provides better explanations of physical phenomena, a
contention that would have to depend, in most instances, on conformity with non-
scientific criteria, but on the objective and genuinely scientific contention that it is a fully
integrated system of theory which is not inconsistent with any established fact in any
physical area.
Another significant effect of the existence of a gravitational limit, within which there is a
net inward motion, and outside of which there is a net outward progression, is that it
reconciles the seemingly uniform distribution of matter in the universe with Newton's
Law of Gravitation and Euclidean geometry. One of the strong arguments that has been
advanced against the existence of a gravitational force of the inverse square type
operating in a Euclidean universe is that on such a basis, "The stellar universe ought to be
a finite island in the infinite ocean of space,"39 as Einstein stated the case. Observations
indicate that there is no such concentration. So far as we can tell, the galaxies are
distributed uniformly, or nearly uniformly, throughout the immense region now
accessible to observation, and this is currently taken as a definite indication that the
geometry of the universe is non-Euclidean.
From the points brought out in the preceding pages, it is now clear that the flaw in this
argument is that it rests on the assumption that there is a net gravitational force effective
throughout space. Our findings are that this assumption is incorrect, and that there is a net
gravitational force only within the gravitational limit of the particular mass under
consideration. On this basis it is only the matter within the gravitational limit that should
agglomerate into a single unit, and this is exactly what occurs. Each major galaxy is a
"finite island in the ocean of space," within its gravitational limit. The existing situation
is thus entirely consistent with inverse square gravitation operating in a Euclidean
universe, as the Reciprocal System requires.
The atoms, particles, and larger aggregates of matter within the gravitational limit of each
galaxy constitute a gravitationally bound system. Each of these constituent units is
subject to the same two general forces as the galaxies, but in addition is subject to the
(apparent) gravitational attraction of neighboring masses, and that of the entire mass
within the gravitational limits acting as a whole. Under the combined influence of all of
these forces, each aggregate assumes an equilibrium position in the three-dimensional
reference system that we are calling extension space, or a net motion capable of
representation in that system. So far as the bound system is concerned, the coordinate
reference system, extension space, is the equivalent of Newton's absolute space. It can be
generalized to include other gravitationally bound systems by taking into account the
relative motion of the systems.
Any or all of the aggregates or individual units that constitute a gravitationally bound
system may acquire motions relative to the fixed reference system. Since these motions
are relative to the defined spatial coordinate system, the direction of motion in each
instance is an inherent property of the motion, rather than being merely a matter of
chance, as in the case of the coordinate representation of the scalar motions. These
motions with inherent vectorial directions are vectorial motions: the motions of our
ordinary experience. They are so familiar that it is customary to generalize their
characteristics, and to assume that these are the characteristics of all motion. Inasmuch as
these familiar vectorial motions have inherent directions, and are always motions of
something, it is taken for granted that these are essential features of motions; that all
motions must necessarily have these same properties. But our investigation of the
fundamental properties of motion reveals that this assumption is in error. Motion, as it
exists in a universe composed entirely of motion, is merely a relation between space and
time, and in its simpler forms it is not motion of anything, nor does it have an inherent
direction. Vectorial motion is a special kind of motion; a phenomenon of the
gravitationally bound systems.
The net resultant of the scalar motions of any object–the progression of the reference
system and the various gravitational motions–has a vectorial direction when viewed in
the context of a stationary reference system, even though that direction is not an inherent
property of the motion. The observed motion of such an object, which is the net resultant
of all of its motions, both scalar and vectorial, thus appears to be simply a vectorial
motion, and is so interpreted in current practice. One of the prerequisites for a clear
understanding of basic physical phenomena is a recognition of the composite nature of
the observed motions. It is not possible to get a true picture of activity in a gravitationally
bound system unless it is realized that an object such as a photon or a neutrino which is
traveling at the speed of light with respect to the conventional frame of reference does so
because it has no capability of independent motion at all, and is at rest in its own natural
system of reference. Similarly, the behavior of atoms of matter can be clearly understood
only in the light of a realization that they are motionless, or moving at low speeds,
relative to the conventional reference system because they possess inherent motions at
high speed which counterbalance the motion of the natural reference system that would
otherwise carry them outward at the speed of the photon or the neutrino.
It is also essential to recognize that the scalar motion of the photons can be
accommodated within the spatial reference system only by the use of multiple reference
points. Photons are continually being emitted from matter by a process that we will not be
prepared to discuss until a later stage of the theoretical development. The motion of the
photons emitted from any material object is outward from that object, not from the
instantaneous position in some reference system which that object happens to occupy at
the moment of emission. As brought out in Chapter 3, the extension space of our ordinary
experience is "absolute space," for vectorial motion and for scalar motion viewed from
one point of reference. But every other reference point has its own "absolute space," and
there is no criterion by which one of these can be singled out as more basic than another.
Thus the location at which a photon originates cannot be placed in the context of any
general reference system for scalar motion. That location itself is the reference point for
the photon emission, and if we choose to view it in relation to some reference system
with respect to which it is moving, then that relative motion, whatever it may be, is a
component of the motion of the emitted photons.
Looking at the situation from the standpoint of the photon, we may say that at the
moment of emission this photon is participating in all of the motions of the emitting
object, the outward progression of the natural reference system, the inward motion of
gravitation, and all of the vectorial motions to which the material object is subject. No
mechanism exists whereby the photon can eliminate any of these motions, and the
outward motion of the absolute location of the emission, to which the photon becomes
subject on separation from the material unit, is superimposed on the previously existing
motions. This, again, means that the emitting object defines the reference point for the
motion of the photon. In a gravitationally bound system each aggregate and individual
unit of matter is the center of a sphere of radiation.
This point has been a source of difficulty for some readers of the first edition, and further
consideration by means of a specific example is probably in order. Let us take some
location A as a reference point. All photons originating from a physical object at A move
outward at unit speed in the manner portrayed by the balloon analogy. Gravitating objects
move inward in opposition to the progression, and can therefore be represented by
positions somewhere along the lines of the outward movement. Here, then, we have the
kind of a situation that most persons are looking for: something that they can visualize in
the context of the familiar fixed spatial coordinate system. But now let us take a look at
one of these gravitating objects, which we will call B. For convenience, let us assume that
B is moving gravitationally with respect to A at a rate which is just equal to the outward
progression of the natural reference system, so that B remains stationary with respect to
object A in the fixed reference system. This is the condition that prevails at the
gravitational limit. What happens to the photons emitted from B?
If the expanding system centered at A is conceived as a universal system of reference, as
so many readers have evidently taken it to be, then these photons must be detached from
B in a manner which will enable them to be carried along by progression in a direction
outward from A. But the natural reference system moves outward from all locations; it
moves outward from B in exactly the same manner as it does from A. There is no way in
which one can be assigned any status different from that of the other. The photons
originating at B therefore move outward from B, not from A. This would make no
difference if B were itself moving outward from A at unit speed, as in that case outward
from B would also be outward from A, but where B is stationary with respect to A in a
fixed coordinate system, the only way in which the motions of the photons can be
represented in that system is by means of two separate reference points. Thus there is a
sphere of radiation centered at A, and another sphere centered at B. Where the spheres
overlap, the photons may make contact, even though all are moving outward from their
respective points of origin.
It has been suggested that the theoretical conclusion that the unit outward motion of the
photon is added to the motion imparted to the photon by the emitting object conflicts with
the empirically established principle that the speed of radiation is independent of the
speed of the source, but this is not true. The explanation lies in some aspects of the
measurement of speed that have not been recognized. This matter will be discussed in
detail in Chapter 7.
CHAPTER 6

The Reciprocal Relation


Inasmuch as the fundamental postulates define a universe composed entirely of units of
motion, and define space and time in terms of that motion, these postulates preclude
space and time from having any significance other than that which they have in motion,
and at the same time require that they always have that significance; that is, throughout
the universe space and time are reciprocally related.
This general reciprocal relation that necessarily exists in a universe composed entirely of
motion has a far-reaching and decisive effect on physical structures and processes. In
recognition of its crucial role, the name "Reciprocal" has been applied to the system of
theory based on the "motion" concept of the nature of the universe. The reason for calling
it a "system of theory" rather than merely a "theory" is that its subdivisions are
coextensive with other physical theories. One of these subdivisions covers the same
ground as relativity, another parallels the nuclear theory of the atom, still another deals
with the same physical area as the kinetic theory, and so on. It is appropriate, therefore, to
call these subdivisions "theories," and to refer to the entire new theoretical structure as
the Reciprocal System of theory, even though it is actually a single fully integrated entity.
The reciprocal postulate provides a good example of the manner in which a change in the
basic concept of the nature of the universe alters the way in which we apprehend specific
physical phenomena. In the context of a universe of matter existing in a space-time
framework, the idea of space as the reciprocal of time is simply preposterous, too absurd
to be given serious consideration. Most of those who encounter the idea of "the reciprocal
of space" for the first time find it wholly inconceivable. But these persons are not taking
the postulates of the new theory at their face value, and recognizing that the assertion that
"space is an aspect of motion" means exactly what it says. They are accustomed to
regarding space as some kind of a container, and they are interpreting this assertion as if
it said that "container space is an aspect of motion," thus inserting their own concept of
space into a statement which rejects all such previous ideas and defines a new and
different concept. The result of mixing such incongruous and conflicting concepts cannot
be otherwise than meaningless.
When the new ideas are viewed in the proper context, the strangeness disappears. In a
universe in which everything that exists is a form of motion, and the magnitude of that
motion, measured as speed or velocity, is the only significant physical quantity, the
existence of the reciprocal relation is practically self-evident. Motion is defined as the
relation of space to time. Its mathematical expression is the quotient of the two quantities.
An increase in space therefore has exactly the same effect on the speed, the mathematical
measure of the motion, as a decrease in time, and vice versa. In comparing one airplane
with another, it makes no difference whether we say that plane A travels twice as far in
the same time, or that it travels a certain distance in half the time.
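The airplane comparison is nothing more than the arithmetic of the ratio s/t, as the following brief check (with arbitrary figures) illustrates.

    # Doubling the space covered in the same time, and covering the same space in
    # half the time, change the speed s/t identically.
    s, t = 300.0, 1.0            # arbitrary figures
    plane_a = (2 * s) / t        # twice as far in the same time
    plane_b = s / (t / 2)        # the same distance in half the time
    assert plane_a == plane_b == 2 * (s / t)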
Inasmuch as the postulates deal with space and time in precisely the same manner, aside
from the reciprocal relation between the two, the behavior characteristics of the two
entities, or properties, as they are called, are identical. This statement may seem
incredible on first sight, as space and time manifest themselves to our observation in very
different guises. We know time only as a progression, a continual moving forward,
whereas space appears to us as an entity that "stays put." But when we subject the
apparent differences to a critical examination, they fail to stand up under the scrutiny.
The most conspicuous property of space is that it is three-dimensional. On the other hand,
it is generally believed that the observational evidence shows time to be one-dimensional.
We have a subjective impression of a unidirectional "flow" of time from the past, to the
present, and on into the future. The mathematical representation of time in the equations
of motion seems to confirm this view, inasmuch as the quantity t in v = s/t and related
equations is scalar, not vectorial, as v and s are, or can be.
Notwithstanding its general and unquestioning acceptance, this conclusion as to the one-
dimensionality of time is entirely unjustified. The point that is being overlooked is that
"direction," in the context of the physical processes which are represented by vectorial
equations in present-day physics, always means "direction in space." In the equation v =
s/t, for example, the spatial displacement s is a vector quantity because it has a direction
in space. It follows that the velocity v also has a direction in space, and thus what we
have here is a space velocity equation. In this equation the term t is necessarily scalar
because it has no direction in space.
It is quite true that this result would automatically follow if time were one-dimensional,
but the one-dimensionality is by no means a necessary condition. Quite the contrary, time
is scalar in this space velocity equation (and in all of the other familiar vectorial
equations of modern physics; equations that are vectorial because they involve direction
in space) irrespective of its dimensions, because no matter how many dimensions it may
have, time has no direction in space. If time is multi-dimensional, as our theoretical
development finds it to be, then it has a property that corresponds to the spatial property
that we call "direction." But whatever we call this temporal property, whether we call it
"direction in time," as we are doing for reasons previously explained, or give it some
altogether different name, it is a temporal property, not a spatial property, and it does not
give time any direction in space. Regardless of its dimensions, time cannot be a vector
quantity in any equation such as those of present-day physics, in which the property,
which qualifies a quantity as vectorial, is that of having a direction in space.
The existing confusion in this area is no doubt due, at least in part, to the fact that the
terms "dimension" and "dimensional" are currently used with two different meanings.
We speak of space as three-dimensional, and we also speak of a cube as three-
dimensional. In the first of these expressions we mean that space has a certain property
that we designate as dimensionality, and that the magnitude applying to this property is
three. In other words, our statement means that there are three dimensions of space. But
when we say that a cube is three-dimensional, the significance of the statement is quite
different. Here we do not mean that there are three dimensions of "cubism," or whatever
we may call it. We mean that the cube exists in space and extends into three dimensions
of that space.
There is a rather general tendency to interpret any postulate of multi-dimensional time in
this latter significance; that is, to take it as meaning that time extends into n dimensions
of space, or some kind of a quasi-space. But this is a concept that makes little sense under
any conditions, and it certainly is not the meaning of the term "three-dimensional time"
as used in this work. When we here speak of time as three-dimensional we will be
employing the term in the same significance as when we speak of space as three-
dimensional; that is, we mean that time has a property, which we call dimensionality, and
the magnitude of that property is three. Here, again, we mean that there are three
dimensions of the property in question: three dimensions of time.
There is nothing in the role which time plays in the equations of motion in space to
indicate specifically that it has more than one dimension. But a careful consideration
along the lines indicated in the foregoing paragraphs does show that the present-day
assumption that we know time to be one-dimensional is completely unfounded. Thus
there is no empirical evidence that is inconsistent with the assertion of the Reciprocal
System that time is three-dimensional.
Perhaps it might be well to point out that the additional dimensions of time have no
metaphysical significance. The postulates of a universe of motion define a purely
physical universe, and all of the entities and phenomena of that universe, as determined
by a development of the necessary consequences of the postulates, are purely physical.
The three dimensions of time have the same physical significance as the three dimensions
of space.
As soon as we take into account the effect of gravitation on the motion of material
aggregates, the second of the observed differences, the progression of time, which
contrasts sharply with the apparent immobility of extension space, is likewise seen to be a
consequence of the conditions of observation, rather than an indication of any actual
dissimilarity. The behavior of those objects that are partially free from the gravitational
attraction of our galaxy, the very distant galaxies, shows conclusively that the immobility
of extension space, as we observe it, is not a reflection of an inherent property of space in
general, but is a result of the fact that in the region accessible to detailed observation
gravitation moves objects toward each other, offsetting the effects of the outward
progression. The pattern of the recession of the distant galaxies demonstrates that when
the gravitational effect is eliminated there is a progression of space similar to the
observed progression of time. Just as "now" continually moves forward relative to any
initial point in the temporal reference system, so "here," in the absence of gravitation,
continually moves forward relative to any initial point in the spatial reference system.
Little additional information about either space or time is available from empirical
sources. The only items on which there is general agreement are that space is
homogeneous and isotropic, and that time progresses uniformly. Other properties that are
sometimes attributed to either time or space are merely assumptions or hypotheses.
Infinite extent or infinite divisibility, for instance, are hypothetical, not the results of
observation. Likewise, the assertions as to spatial and temporal properties that are made
in the relativity theories are, as Einstein says, "free inventions of the human mind," not
items that have been derived from experience.
In testing the validity of the conclusion that all properties of either space or time are
properties of both space and time, such assumptions and hypotheses must be disregarded,
since it is only conflicts with definitely established facts that are conclusive. The
significance of a conflict with a questionable assertion cannot be other than questionable.
"Homogeneous" with respect to space is equivalent to "uniform" with respect to time,
and because the observations thus far available tell us nothing at all about the dimensions
of time, there is nothing in these observations that is inconsistent with the assertion that
time, like space, is isotropic. In spite of the general belief, among scientists and laymen
alike, that there is a great difference between space and time, any critical examination
along the foregoing lines shows that the apparent differences are not real, and that there is
actually no observational evidence that is inconsistent with the theoretical conclusion that
the properties of space and of time are identical.
As brought out in Chapter 4, deviations from unit speed, the basic one-to-one space-time
ratio, are accomplished by means of reversals of the direction of the progression of either
space or time. As a result of these reversals, one component traverses the same path in the
reference system repeatedly, while the other component continues progressing
unidirectionally in the normal manner. Thus the deviation from the normal rate of
progression may take place either in space or in time, but not in both coincidentally. The
space-time ratio, or speed, is either 1/n (less than unity, the speed of light) or n/1 (greater
than unity). Inasmuch as everything physical in a universe of motion is a motion–that is,
a relation between space and time, measured as speed–and, as we have just seen, the
properties of space and those of time are identical, aside from the reciprocal relationship,
it follows that every physical entity or phenomenon has a reciprocal. There exists another
entity or phenomenon that is an exact duplicate, except that space and time are
interchanged.
For example, let us consider an object rotating with speed 1/n and moving translationally
with speed 1/n. The reciprocal relation tells us that there must necessarily exist,
somewhere in the universe, an object identical in all respects, except that its rotational
and translational speeds are both n/1 instead of 1/n. In addition to the complete
inversions, there are also structures of an intermediate type in which one or more
components of a complex combination of motions are inverted, while the remaining
components are unchanged. In the example under consideration, the translational speed
may become n/1 while the rotational speed remains at 1/n, or vice versa. Once the normal
(1/n) combination has been identified, it follows that both the completely inverted (n/1)
combination and the various intermediate structures exist in the appropriate environment.
The general nature of that environment in each case is also indicated, inasmuch as change
of position in time cannot be represented in a spatial reference system, and each of these
speed combinations has some special characteristics when viewed in relation to the
conventional reference systems. The various physical entities and phenomena that
involve motion of these several inverse types will be examined at appropriate points in
the pages that follow. The essential point that needs to be recognized at this time, because
of its relevance to the subject matter now under consideration, is the existence of inverse
forms of all of the normal (1/n) motions and combinations of motions.
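As a purely illustrative aid, not part of the original exposition, the enumeration just described can be set out mechanically. The sketch below simply lists the direct, completely inverted, and intermediate combinations for a motion with one rotational and one translational component; the labels are hypothetical choices made for this example.

```python
# Illustrative sketch only: enumerate the speed combinations described above.
# "1/n" marks a component with speed below unity (motion in space);
# "n/1" marks the inverse component with speed above unity (motion in time).
from itertools import product

for rotation, translation in product(["1/n", "n/1"], repeat=2):
    if rotation == translation == "1/n":
        kind = "normal (direct) combination"
    elif rotation == translation == "n/1":
        kind = "complete inversion"
    else:
        kind = "intermediate structure (one component inverted)"
    print(f"rotation {rotation}, translation {translation}: {kind}")
```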
This is a far-reaching discovery of great significance. In fact the new and more accurate
picture of the physical universe that is derived from the ―motion‖ concept differs from
previous ideas mainly by reason of the widening of our horizons that results from
recognition of the inverse phenomena. Our direct physical contacts are limited to
phenomena of the same type as those that enter into our own physical makeup: the direct
phenomena, we may call them, although the distinction between direct and inverse is
merely a matter of the way in which we see them, not anything that is inherent in the
phenomena themselves. In recent years the development of powerful and sophisticated
instruments has enabled us to penetrate areas that are far beyond the range of our unaided
senses, and in these new areas the relatively simple and understandable relations that
govern events within our normal experience are no longer valid. Newton's laws of
motion, which are so dependable in everyday life, break down in application to motion at
speeds approaching that of light; events at the atomic level resist all attempts at
explanation by means of established physical principles, and so on.
The scientific reaction to this state of affairs has been to conclude that the relatively
simple and straightforward physical laws that have been found to apply to events within
our ordinary experience are not universally valid, but are merely approximations to some
more complex relations of general applicability. The simplicity of Newton's laws of
motion, for instance, is explained on the ground that some of the terms of the more
complicated general law are reduced to negligible values at low velocities, and may
therefore be disregarded in application to the phenomena of everyday life. Development
of the consequences of the postulates of the Reciprocal System arrives at a totally
different answer. We find that the inverse phenomena that necessarily exist in a universe
of motion play no significant role in the events of our everyday experience, but as we
extend our observations into the realms of the very large, the very small, and the very
fast, we move into the range in which these inverse phenomena replace or modify those
which we, from our particular position in the universe, regard as the direct phenomena.
On this basis, the difficulties that have been experienced in attempting to use the
established physical laws and relations of the world of ordinary experience in the far-out
regions are very simply explained. These laws and relations apply specifically to the
world of immediate sense perception, phenomena of the direct space-time orientation,
and they fail in application to any situation in which the events under consideration
involve phenomena of the inverse type in any significant degree. They do not fail because
they are wrong, or because they are incomplete; they fail because they are misapplied. No
law, physical or otherwise, can be expected to produce the correct results in an area to
which it has no relevance. The inverse phenomena are governed by laws distinct from,
although related to, those of the direct phenomena, and where those phenomena exist
they can be understood and successfully handled only by using the laws and relations of
the inverse sector.
This explains the ability of the Reciprocal System of theory to deal successfully with the
recently discovered phenomena of the far-out regions, which have been so resistant to
previous theoretical treatment. It is now apparent that the unfamiliar and surprising
aspects of these phenomena are not due to aspects of the normal physical relations that
come into play only under extreme conditions, as previous theorists have assumed; they
are due to the total or partial replacement of the phenomena of the direct type by the
related, but different phenomena of the inverse type. In order to obtain the correct
answers to problems in these remote areas, the unfamiliar phenomena that are involved
must be viewed in their true light as the inverse of the phenomena of the directly
observable region, not in the customary way as extensions of those direct phenomena into
the regions under consideration. By identifying and utilizing this correct treatment the
Reciprocal System is not only able to arrive at the right answers in the far-out areas, but
to accomplish this task without disturbing the previously established laws and principles
that apply to the phenomena of the direct type.
In order to keep the explanation of the basic elements of the theory as simple and
understandable as possible, the previous discussion has been limited to what we have
called the direct view of the universe, in which space is the more familiar of the two basic
entities, and plays the leading role. At this time it is necessary to recognize that because
of the general nature of the reciprocal relation between space and time every statement
that has been made with respect to space in the preceding chapters is equally applicable
to time in the appropriate context. As we have seen in the case of space and time
individually, however, the way in which the inverse phenomenon manifests itself to our
observation may be quite different from the way in which we see its direct counterpart.
Locations in time cannot be represented in a spatial reference system, but, with the same
limitations that apply to the representation of spatial locations, they can be represented in
a stationary three-dimensional temporal reference system analogous to the three-
dimensional spatial reference system that we call extension space. Since neither space nor
time exists independently, every physical entity (a motion or a combination of motions)
occupies both a space location and a time location. The location as a whole, the location
in the physical universe, we may say, can therefore be completely defined only in terms
of two reference systems.
In the context of a stationary spatial reference system the motion of an absolute location,
a location in the natural reference system, as indicated by observation of an object
without independent motion, such as a photon or a galaxy at the observational limit, is
linearly outward. Similarly, the motion of an absolute location with respect to a stationary
temporal reference system is linearly outward in time. Inasmuch as the gravitational
motion of ordinary matter is effective in space only, the atoms and particles of this
matter, which are stationary with respect to the spatial reference system, or moving only
at low velocities, remain in the same absolute locations in time indefinitely, unless
subjected to some external force. Their motion in three-dimensional time is therefore
linearly outward at unit speed, and the time location that we observe, the time registered
on a clock, is not the location in any temporal reference system, but simply the stage of
progression. Since the progression of the natural reference system proceeds at unit speed,
always and everywhere, clock time, if properly measured, is the same everywhere. As we
will see later in the development, the current hypotheses which require repudiation of the
existence of absolute time and the concept of simultaneity of distant events are erroneous
products of reasoning from premises in which clock time is incorrectly identified as time
in general.
The best way to get a clear picture of the relation of clock time to time in general is to
consider the analogous situation in space. Let us assume that a photon A is emitted from
some material object X in the direction Y. This photon then travels at unit speed in a
straight line XY which can be represented in the conventional fixed spatial reference
system. The line of progression of time has the same relation to time in general (three-
dimensional time) as the line XY has to space in general (three-dimensional space). It is a
one-dimensional line of travel in a three-dimensional continuum; not something separate
and distinct from that continuum, but a specific part of it.
Now let us further assume that we have a device whereby we can measure the rate of
increase of the spatial distance XA, and let us call this device a “space clock.” Inasmuch
as all photons travel at the same speed, this one space clock will suffice for the
measurement of the distance traversed by any photon, irrespective of its location or
direction of movement, as long as we are interested only in the scalar magnitude. But this
measurement is valid only for objects such as photons, which travel at unit speed. If we
introduce an object that travels at some speed other than unity, the measurement that
we get from the space clock will not correctly represent the space traversed by that
object. Nor will the space clock registration be valid for the relative separation of moving
objects, even if they are traveling at unit speed. In order to arrive at the true amount of
space entering into such motions we must either measure that space individually, or we
must apply an appropriate correction to the measurement by the space clock.
Because objects at rest in the stationary spatial reference system, or moving at low
velocities with respect to it, are moving at unit speed relative to any stationary temporal
reference system, a clock which measures the time progression in any one process
provides an accurate measurement of the time elapsed in any low-speed physical process,
just as the space clock in our analogy measured the space traversed by any photon. Here,
again, however, if an object moves at a speed, or a relative speed, differing from unity, so
that its movement in time is not the same as that of the progression of the natural
reference system, then the clock time does not correctly represent the actual time
involved in the motion under consideration. As in the analogy, the true quantity, the net
total time, must be obtained either by a separate measurement (which is usually
impractical) or by determining the magnitude of the adjustment that must be applied to
the clock time to convert it to total time.
In application to motion in space, the total time, like the clock registration, is a scalar
quantity. Some readers of the previous edition have found it difficult to accept the idea
that time can be three-dimensional because this makes time a vector quantity, and
presumably leads to situations in which we are called upon to divide one vector quantity
by another. But such situations are non-existent. If we are dealing with spatial relations,
time is scalar because it has no spatial direction. If we are dealing with temporal
relations, space is scalar because it has no temporal direction. Either space or time can be
vectorial in appropriate circumstances. However, as explained earlier in this chapter, the
deviation from the normal scalar progression at unit speed may take place either in space
or in time, but not in both coincidentally. Consequently, there is no physical situation in
which both space and time are vectorial.
Similarly, scalar rotation and its gravitational (translational) effect take place either in
space or in time, but not in both. If the speed of the rotation is less than unity, time
continues progressing at the normal unit rate, but because of the directional changes
during rotation space progresses only one unit while time is progressing n units. Thus the
change in position relative to the natural unit datum, both in the rotation and in its
gravitational effect, takes place in space. Conversely, if the speed of the rotation is
greater than unity, the rotation and its gravitational effect take place in time.
An important result of the fact that rotation at greater-than-unit speeds produces an
inward motion (gravitation) in time is that a rotational motion or combination of motions
with a net speed greater than unity cannot exist in a spatial reference system for more
than one (dimensionally variable) unit of time. As pointed out in Chapter 3, the spatial
systems of reference, to which the human race is limited because it is subject to
gravitation in space, are not capable of representing deviations from the normal rate of
time progression. In certain special situations, to be considered later, in which the normal
direction of vectorial motion is reversed, the change of position in time manifests itself as
a distortion of the spatial position. Otherwise, an object moving normally with a speed
greater than unity is coincident with the reference system for only one unit of time.
During the next unit, while the spatial reference system is moving outward in time at the
unit rate of the normal progression, gravitation is carrying the rotating unit inward in
time. It therefore moves away from the reference system and disappears. This point will
be very significant in our consideration of the high-speed rotational systems in Chapter
15.
Recognition of the fact that each effective unit of rotational motion (mass) occupies a
location in time as well as a location in space now enables us to determine the effect of
mass concentration on the gravitational motion. Because of the continuation of the
progression of time while gravitation is moving the atoms of matter inward in space, the
aggregates of matter that are eventually formed in space consist of a large number of
mass units that are contiguous in space, but widely dispersed in time. One of the results
of this situation is that the magnitude of the gravitational motion (or force) is a function
not only of the distance between objects, but also of the effective number of units of
rotational motion, measured as mass, that each object possesses. This motion is
distributed over all space-time directions, rather than merely over all space directions,
and since an aggregate of n mass units occupies n time locations, the total number of
space-time locations is also n, even though all mass units of each object are nearly
coincident spatially. The total gravitational motion of any mass unit toward that
aggregate is thus n times that toward a single mass unit at the same distance. It then
follows that the gravitational motion (or force) is proportional to the product of the
(apparently) interacting masses.
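The counting argument in the preceding paragraph can be illustrated numerically. In the sketch below, an illustration of the reasoning only, with hypothetical names, each of the n mass units of an aggregate occupies its own time location, so the gravitational motion toward the aggregate is the sum of the motions toward its individual units, and the mutual motion of two aggregates scales as the product of their unit counts.

```python
# Illustrative sketch: the gravitational motion between two aggregates, counted
# unit by unit as described in the text, is proportional to the product of the
# numbers of mass units (space-time locations) in the two aggregates.
def total_gravitational_motion(units_a: int, units_b: int, motion_per_pair: float = 1.0) -> float:
    total = 0.0
    for _ in range(units_a):          # each unit of the first aggregate...
        for _ in range(units_b):      # ...moves toward every unit of the second
            total += motion_per_pair
    return total

assert total_gravitational_motion(3, 5) == 3 * 5   # proportional to the product of the masses
```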
It can now be seen that the comments in Chapter 5, with respect to the apparent change of
direction of the gravitational motions (or forces) when the apparently interacting masses
change their relative positions are applicable to multi-unit aggregates as well as to the
individual mass units considered in the original discussion. The gravitational motion
always takes place toward all space-time locations whether or not those locations are
occupied by objects that enable us to detect the motion.
A point that should be noted in this connection is that two objects are in effective contact
if they occupy adjoining locations in either space or time, regardless of the extent of their
separation in the other aspect of motion. This statement may seem to conflict with the
empirical observation that contact can be made only if the two objects are in the same
place at the same clock time. However, the inability to make contact when the objects
reach a common spatial location in a fixed reference system at different clock times is not
due to the lack of coincidence in time, but to the progression of space that takes place in
connection with the progression of time which is registered by the clock. Because of this
space progression, the location that has the same spatial coordinates in the stationary
reference system is not the same spatial location that it was at an earlier time.
Scientific history shows that physical problems of long standing are usually the result of
errors in the prevailing basic concepts, and that significant conceptual modifications are a
prerequisite for their solution. We will find, as we proceed with the theoretical
development, that the reciprocal relation between space and time which necessarily exists
in a universe of motion is just the kind of a conceptual alteration that is needed to clear up
the existing physical situation: one which makes drastic changes where such changes are
required, but leaves the empirically determined relations of our everyday experience
essentially untouched.
CHAPTER 7

High Speed Motion

As brought out in Chapter 3, the ―space‖ of our ordinary experience, extension space, as
we have called it, is simply a reference system, and it has no real physical significance.
But the relationships that are represented in this reference system do have physical
meaning. For example, if the distance between object A and object B in extension space
is x, then if A moves a distance x in the direction AB while B remains stationary with
respect to the reference system, the two objects will come in contact. The contact has
observable physical results, and the fact that it occurs at the coordinate position reached
by object A after a movement defined in terms of the coordinates from a specific initial
position in the coordinate system demonstrates that the relation represented by the
difference between coordinates has a definite physical meaning.
Einstein calls this a ―metrical‖ meaning; that is, a connection between the coordinate
differences and ―measurable lengths and times.‖ To most of those who have not made
any critical study of the logical basis of so-called ―modern physics‖ it probably seems
obvious that this kind of a meaning exists, and it is safe to say that comparatively few of
those who now accept Einstein's relativity theory because it is the orthodox doctrine in its
field realize that his theory denies the existence of such a meaning. But any analysis of
the logical structure of the theory will show that this is true, and Einstein’s own statement
on the subject, previously quoted, leaves no doubt on this score.
This is a prime example of a strange feature of the present situation in science. The
members of the scientific community have accepted the basic theories of ―modern
physics,‖ as correct, and are quick to do battle on their behalf if they are challenged, yet
at the same time the majority are totally unwilling to accept some of the aspects of those
theories that the originators of the theories claim are essential features of the theoretical
structures. How many of the supporters of modern atomic theory, for example, are
willing to accept Heisenberg's assertion that atoms do not “exist objectively in the same
sense as stones or trees exist”?40 Probably about as many as are willing to
accept Einstein's assertion that coordinate differences have no metrical meaning.
At any rate, the present general acceptance of the relativity theory as a whole, regardless
of the widespread disagreement with some of its component parts, makes it advisable to
point out just where the conclusions reached in this area by development of the
consequences of the postulates of the Reciprocal System differ from the assertions of
relativity theory. This chapter will therefore be devoted to a consideration of the status of
the relativity concept, including the extent to which the new findings are in agreement
with it. Chapter 8 will then present the full explanation of motion at high speeds, as it is
derived from the new theoretical development. It is worth noting in this connection that
Einstein himself was aware of ―the eternally problematical character‖ of his concepts,
and in undertaking the critical examination of his theory in this chapter we are following
his own recommendation, expressed in these words:
In the interests of science it is necessary over and over again to engage in the critique of
these fundamental concepts, in order that they may not unconsciously rule us. This
becomes evident especially in those situations involving development of ideas in which
the consistent use of the traditional fundamental concepts leads us to paradoxes difficult
to resolve.41
In spite of all of the confusion and controversy that have surrounded the subject, the
factors that are involved are essentially simple, and they can be brought out clearly by
consideration of a correspondingly simple situation, which, for convenient reference, we
will call the ―two-photon case.‖ Let us assume that a photon X originates at location O in
a fixed reference system, and moves linearly in space at unit velocity, the velocity of light
(as all photons do). In one unit of time it will have reached point x in the coordinate
system, one spatial unit distant from O. This is a simple matter of fact that results entirely
from the behavior of photon X, and is totally independent of what may be done by or to
any other object. Similarly, if another photon Y leaves point O simultaneously with X,
and travels at the same velocity, but in the opposite direction, this photon will reach point
y, one unit of space distant from O, at the end of one unit of elapsed time. This, too, is
entirely a matter of the behavior of the moving photon Y, and is independent of what
happens to photon X or to any other physical object. At the end of one unit of time, as
currently measured, X and Y are thus separated by two units of space (distance) in the
coordinate system of reference.
In current practice, time is measured by some repetitive physical process. This process, or the
device in which it takes place, is called a clock. The progression of time thus measured is
the standard time magnitude which, on the basis of current understanding, enters into
physical relations. Speed, or velocity, the measure of motion, is defined as distance
(space) per unit time. In terms of the accepted reference systems, this means distance
between coordinate locations divided by clock registration. In the two-photon case, the
increase in coordinate separation during the one unit of elapsed time is two units of space.
The relative velocity of the two photons, determined in the standard manner, is then two
natural units; that is, twice the velocity of light, the velocity at which each of the two
objects is moving.
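The arithmetic of the standard procedure, as applied to this two-photon case, can be spelled out in a short sketch (illustrative only; the values are the unit magnitudes stated above).

```python
# Two-photon case, computed in the standard manner: coordinate separation
# divided by clock registration.
clock_time = 1.0        # one unit of elapsed clock time
x_position = +1.0       # photon X, one unit of space in one direction
y_position = -1.0       # photon Y, one unit of space in the opposite direction

separation = x_position - y_position           # 2 units of coordinate distance
relative_speed = separation / clock_time
print(relative_speed)                          # 2.0 natural units, i.e. twice the speed of light
```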
In 1887, an experiment by Michelson and Morley compared the velocity of light traveling
over round trip paths in different directions relative to the direction of the earth’s motion.
The investigators found no difference in the velocities, although the accuracy of the
experiment was far greater than would be required to reveal the expected difference had it
been present. This experiment, together with others, which have confirmed the original
findings, makes it necessary to conclude that the velocity of light in a vacuum is constant
irrespective of the reference system. The determination of velocity in the standard
manner, dividing distance traveled by elapsed time, therefore arrives at the wrong answer
at high velocities.
As expressed by Capek, the initial impact of this discovery was ―shattering.‖ It seemed to
undermine the whole structure of theoretical knowledge that had been erected by
centuries of effort. The following statement by Sir James Jeans, written only a few
decades after the event, shows what a blow it was to the physicists of that day:
For more than two centuries this system of laws (Newton's) was believed to give a
perfectly consistent and exact description of the processes of nature. Then, as the
nineteenth century was approaching its close, certain experiments, commencing with the
famous Michelson-Morley experiment, showed that the whole scheme was meaningless
and self-contradictory.42
After a quarter of a century of confusion, Albert Einstein published his special theory of
relativity, which proposed a theoretical explanation of the discrepancy. Contradictions
and uncertainties have surrounded this theory from its inception, and there has been
continued controversy over its interpretation in specific applications, and over the nature
and adequacy of the various explanations that have been offered in attempts to resolve the
―paradoxes‖ and other inconsistencies. But the mathematical successes of the theory have
been impressive, and even though the mathematics antedated the theory, and are not
uniquely connected with it, these mathematical successes, in conjunction with the
absence of any serious competitor, and the strong desire of the physicists to have
something to work with, have been sufficient to secure general acceptance.
Now that a new theory has appeared, however, the defects in the relativity theory acquire
a new significance, as the arguments which justify using a theory in spite of
contradictions and inconsistencies if it is the only thing that is available are no longer
valid when a new theory free from such defects makes its appearance. In making the
more rigorous appraisal of the theory that is now required, it should be recognized at the
outset that a theory is not valid unless it is correct both mathematically and conceptually.
Mathematical evidence alone is not sufficient, as mathematical agreement is no
guarantee of conceptual validity.
What this means is that if we devise a theoretical explanation of a certain physical
phenomenon, and then formulate a mathematical expression to represent the relations
pictured by the theory, or do the same thing in reverse manner, first formulating the
mathematical expression on an empirical basis, and then finding an explanation that fits
it, the mere fact that this mathematical expression yields results that agree with the
corresponding experimental values does not assure us that the theoretical explanation is
correct, even if the agreement is complete and exact. As a matter of principle, this
statement is not even open to question, yet in a surprisingly large number of instances in
current practice, including the relativity theory, mathematical agreement is accepted as
complete proof.
Most of the defects of the relativity theory as a conceptual scheme have been explored in
depth in the literature. A comprehensive review of the situation at this time is therefore
unnecessary, but it will be appropriate to examine one of the long-standing “paradoxes,”
which is sufficient in itself to prove that the theory is conceptually incorrect. Naturally,
the adherents of the theory have done their best to ―resolve‖ the paradox, and save the
theory, and in their desperate efforts they have managed to muddy the waters to such an
extent that the conclusive nature of the case against the theory is not generally
recognized.
The significance of this kind of a discrepancy lies in the fact that when a theory makes
certain assertions of a general nature, if any one case can be found where these assertions
are not valid, this invalidates the generality of the assertions, and thus invalidates the
theory as a whole. The inconsistency of this nature that we will consider here is what is
known as the ―clock paradox.‖ It is frequently confused with the ―twin paradox,‖ in
which one of a set of twins stays home while the other goes on a long journey at a very
high speed. According to the theory, time progresses more slowly for the traveling twin,
and he returns home still a young man, while his brother has reached old age. The clock
paradox, which replaces the twins with two identical clocks, is somewhat simpler, as it
evades the question as to the relation between clock registration and physical processes.
In the usual statement of the paradox, it is assumed that a clock B is accelerated relative
to another identical clock A, and that subsequently, after a period of time at a constant
relative velocity, the acceleration is reversed, and the clocks return to their original
locations. According to the principles of special relativity, clock B, the moving clock, has
been running more slowly than clock A, the stationary clock, and hence the time interval
registered by B is less than that registered by A. But the special theory also tells us that
we cannot distinguish between the motion of clock B relative to clock A and the motion
of clock A relative to clock B. Thus it is equally correct to say that A is the moving clock
and B is the stationary clock, in which case the interval registered by clock A is less than
that registered by clock B. Each clock therefore registers both more and less than the
other.
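The contradiction can be made concrete with the standard time-dilation relation of the special theory; the sketch below, with arbitrary numbers chosen only for illustration, applies that relation from each viewpoint in turn and exhibits the symmetry that produces the paradox.

```python
import math

def moving_clock_reading(stationary_reading: float, v: float) -> float:
    """Reading attributed to the 'moving' clock by the observer taken as
    stationary, using the dilation factor sqrt(1 - v^2) with v in units of c."""
    return stationary_reading * math.sqrt(1.0 - v**2)

v = 0.8      # arbitrary relative speed
t = 10.0     # arbitrary interval registered on whichever clock is taken as stationary

# Viewpoint 1: A stationary, B moving -> B registers less than A.
print("B reads", moving_clock_reading(t, v), "while A reads", t)
# Viewpoint 2: B stationary, A moving -> A registers less than B.
print("A reads", moving_clock_reading(t, v), "while B reads", t)
# Each clock is thus asserted to register both more and less than the other.
```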
Here we have a situation in which a straightforward application of the special relativity
theory leads to a conclusion that is manifestly absurd. This paradox, which stands
squarely in the way of any claim that relativity theory is conceptually valid, has never
been resolved except by means which contradict the basic assumptions of the relativity
theory itself. Richard Schlegel brings this fact out very clearly in a discussion of the
paradox in his book Time and the Physical World. ―Acceptance of a preferred coordinate
system‖ is necessary in order to resolve the contradiction, he points out, but ―such an
assumption brings a profound modification to special relativity theory; for the assumption
contradicts the principle that between any two relatively moving systems the effects of
motion are the same, from either system to the other.‖43 G. J. Whitrow summarizes the
situation in this way: ―The crucial argument of those who support Einstein (in the clock
paradox controversy) automatically undermines Einstein's own position.‖44 The theory
based primarily on the postulate that all motion is relative contains an internal
contradiction which cannot be removed except by some argument relying on the
assumption that some motion is not relative.
All of the efforts that have been made by the professional relativists to explain away this
paradox depend, directly or indirectly, on abandoning the general applicability of the
relativity principle, and identifying the acceleration of clock B as something more than an
acceleration relative to clock A. Moller, for example, tells us that the acceleration of
clock B is ―relative to the fixed stars.‖45 Authors such as Tolman, who speaks of the ―lack
of symmetry between the treatment given to the clock A, which was at no time subjected
to any force, and that given to clock B which was subjected to . . . forces . . . when the
relative motion of the clocks was changed,‖46 are simply saying the same thing in a more
roundabout way. But if motion is purely relative, as the special theory contends, then a
force applied to clock B cannot produce anything more than a relative motion–it cannot
produce a kind of motion that does not exist–and the effect on clock A must therefore be
the same as that on clock B. Introduction of a preferred coordinate system such as that
defined by the average positions of the fixed stars gets around this difficulty, but only at
the cost of destroying the foundations of the theory, as the special theory is built on the
postulate that no such preferred coordinate system exists.
The impossibility of resolving the contradiction inherent in the clock paradox by appeal
to acceleration can be demonstrated in yet another way, as the acceleration can be
eliminated without altering the contradiction that constitutes the paradox. No exhaustive
search has been made to ascertain whether this streamlined version, which we may call
the “simplified clock paradox,” has been given any consideration previously, but at any
rate it does not appear in the most accessible discussions of the subject. This is quite
surprising, as it is a rather obvious way of tightening the paradox to the point where there
is little, if any, room for an attempt at evasion. In this simplified clock paradox we will
merely assume that the two clocks are in uniform motion relative to each other. The
question as to how this motion originated does not enter into the situation. Perhaps they
have always been in relative motion. Or, if they were accelerated, they may have been
accelerated equally. At any rate, for purposes of the inquiry, we are dealing only with the
clocks in uniform relative motion. But here again, we encounter the same paradox.
According to the relativity theory, each clock can be regarded either as stationary, in
which case it is the faster, or as moving, in which case it is the slower. Again each clock
registers both more and less than the other.
There are those who claim that the paradox has been resolved experimentally. In the
published report of one recent experiment bearing on the subject the flat assertion is made
that ―These results provide an unambiguous empirical resolution of the famous clock
paradox.‖47 This claim is, in itself, a good illustration of the lack of precision in current
thinking in this area, as the clock paradox is a logical contradiction. It refers to a specific
situation in which a strict application of the special theory results in an absurdity.
Obviously, a logical inconsistency cannot be ―resolved‖ by empirical means. What the
investigators have accomplished in this instance is simply to provide a further verification
of some of the mathematical aspects of the theory, which play no part in the clock
paradox.
This one clearly established logical inconsistency is sufficient in itself, even without the
many items of evidence available for corroboration, to show that the special theory of
relativity is incorrect in at least some significant segment of its conceptual aspects. It may
be a useful theory; it may be a ―good‖ theory from some viewpoint; it may indeed have
been the best theory available prior to the development of the Reciprocal System, but this
inconsistency demonstrates conclusively that it is not the correct theory.
The question then arises: In the face of these facts, why are present-day scientists so
thoroughly convinced of the validity of the special theory? Why do front-rank scientists
make categorical assertions such as the following from Heisenberg?
The theory . . . has meanwhile become an axiomatic foundation of all modern physics,
confirmed by a large number of experiments. It has become a permanent property of
exact science just as has classical mechanics or the theory of heat.48
The answer to our question can be extracted from this quotation. ―The theory,‖ says
Heisenberg, has been ―confirmed by a large number of experiments.‖ But these
experiments have confirmed only the mathematical aspects of the theory. They tell us
only that special relativity is mathematically correct, and that it therefore could be valid.
The almost indecent haste to proclaim the validity of theories on the strength of
mathematical confirmation alone is one of the excesses of modern scientific practice
which, like the over-indulgence in ad hoc assumptions, has covered up the errors
introduced by the concept of a universe of matter, and has prevented recognition of the
need for a basic change.
Like any other theory, special relativity cannot be confirmed as a theory unless its
conceptual aspects are validated. Indeed, the conceptual aspects are the theory itself, as
the mathematics, which are embodied in the Lorentz equations, were in existence before
Einstein formulated the theory. However, establishment of conceptual validity is much
more difficult than confirmation of mathematical validity, and it is virtually impossible in
a limited field such as that covered by relativity because there is too much opportunity for
alternatives that are mathematically equivalent. It is attainable only where collateral
information is available from many sources so that the alternatives can be excluded.
Furthermore, consideration of the known alternatives is not conclusive. There is a general
tendency to assume that where no satisfactory alternatives have thus far been found, there
is no acceptable alternative. This gives rise to a great many erroneous assertions that are
given credence because they are modeled after valid mathematical statements, and have a
superficial air of authenticity. For example, let us consider the following two statements:
A. As a mathematical problem there is virtually only one possible solution (the Lorentz
transformation) if the velocity of light is to be the same for all. (Sir George Thomson) 49
B. There was and there is now no understanding of it (the Michelson Morley experiment)
except through giving up the idea of absolute time and of absolute length and making the
two interdependent concepts. (R. A. Millikan)50
The logical structure of both of these statements (including the implied assertions) is the
same, and can be expressed as follows:
1. A solution for the problem under consideration has been obtained.
2. Long and intensive study has failed to produce any alternative solution.
3. The original solution must therefore be correct.
In the case of statement A, this logic is irrefutable. It would, in fact, be valid even without
any such search for alternatives. Since the original solution yields the correct answers,
any other valid solution would necessarily have to be mathematically equivalent to the
first, and from a mathematical standpoint equivalent statements are merely different ways
of expressing the same thing. As soon as we obtain a mathematically correct answer to a
problem, we have the mathematically correct answer.
Statement B is an application of the same logic to a conceptual rather than a
mathematical solution, but here the logic is completely invalid, as in this case alternative
solutions are different solutions, not merely different ways of expressing the same
solution. Finding an explanation which fits the observed facts does not, in this case,
guarantee that we have the correct explanation. We must have additional confirmation
from other sources before conceptual validity can be established.
Furthermore, the need for this additional evidence still exists as strongly as ever even if
the theory in question is the best explanation that science has thus far been able to devise,
as it is, or at least should be, obvious that we can never be sure that we have exhausted
the possible alternatives. The theorists do not like to admit this. When they have devoted
long years to the study and investigation of a problem, and the situation still remains as
described by Millikan–that is, only one explanation judged to be reasonably acceptable
has been found–there is a strong temptation to assume that no other possible explanation
exists, and to regard the available theory as necessarily correct, even where, as in the case
of the special theory of relativity, there may be specific evidence to the contrary.
Otherwise, if they do not make such an assumption, they must admit, tacitly if not
explicitly, that their abilities have thus far been unequal to the task of finding the
alternatives. Few human beings, in or out of the scientific field, relish making this kind of
an admission.
Here, then, is the reason why the serious shortcomings of the special theory are currently
looked upon so charitably. Nothing more acceptable has been available (although there
are alternatives to Einstein's interpretation of the Lorentz equations that are equally
consistent with the available information), and the physicists are not willing to concede
that they could have overlooked the correct answer. But the facts are clear. No new valid
conceptual information has been added to the previously existing body of knowledge by
the special theory. It is nothing more than an erroneous hypothesis: a conspicuous
addition to the historical record cited by Jeans:
The history of theoretical physics is a record of the clothing of mathematical formulae,
which were right, or very nearly right, with physical interpretations, which were often
very badly wrong.51
“As an emergency measure,” say Toulmin and Goodfield, “physicists have resorted to
mathematical fudges of an arbitrary kind.”52 Here is the truth of the matter. The Lorentz
equations are simply fudge factors: mathematical devices for reconciling discordant
results. In the two-photon case that we are considering, if the speed of light is constant
irrespective of the reference system, as established empirically by the Michelson-Morley
experiment, then the speed of photon X relative to photon Y is unity. But when this speed
is measured in the standard way (assuming that this might be physically possible),
dividing the coordinate distance xy by the elapsed clock time, the relative speed is two
natural units (2c in the conventional system of units) rather than one unit. Here, then, is a
glaring discrepancy. Two different measurements of what is apparently the same thing,
the relative speed, give us altogether different results.
Both the nature of the problem and the nature of the mathematical answer provided by
the Lorentz equations can be brought out clearly by consideration of a simple analogy.
Let us assume a situation in which the property of direction exists, but is not recognized.
Then let us assume that two independent methods are available for measuring motion,
one of which measures the speed, and the other measures the rate at which the distance
from a specified reference point is changing. In the absence of any recognition of the
existence of direction, it will be presumed that both methods measure the same quantity,
and the difference between the results will constitute an unexpected and unexplained
discrepancy, similar to that brought to light by the Michelson-Morley experiment.
An analogy is not an accurate representation. If it were, it would not be an analogy. But
to the extent that the analogy parallels the phenomenon under consideration it provides an
insight into aspects of the phenomenon that cannot, in many cases, be directly
apprehended. In the circumstances of the analogy, it is evident that a fudge factor
applicable to the general situation is impossible, but that under some special conditions,
such as uniform linear motion following a course at a constant angle to the line of
reference, the mathematical relation between the two measurements is constant. A fudge
factor embodying this constant relation, the cosine of the angle of deviation, would
therefore bring the discordant measurements into mathematical coincidence.
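The analogy can be given numerical form. In the sketch below, illustrative only and with arbitrary values, the two measurements differ by the cosine of the unrecognized angle, and a single constant factor reconciles them only because the motion keeps a constant angle to the line of reference.

```python
import math

speed = 5.0                  # true speed along the path (measurement 1)
angle = math.radians(60)     # constant angle between the path and the line of reference

# Measurement 2: rate of change of distance from the reference point,
# which for this uniform motion is speed * cos(angle).
measurement_2 = speed * math.cos(angle)

fudge_factor = math.cos(angle)     # constant only because the angle is constant
print(math.isclose(speed * fudge_factor, measurement_2))   # True: discrepancy reconciled
```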
It is also evident that we can apply the fudge factor anywhere in the mathematical
relation. We can say that measurement 1 understates the true magnitude by this amount,
or that measurement 2 overstates it by the same amount, or we can divide the discrepancy
between the two in some proportion, or we can say that there is some unknown factor that
affects one and not the other. Any of these explanations is mathematically correct, and if
a theory based on any one of them is proposed, it will be ―confirmed‖ by experiment in
the same manner that special relativity and many other products of present-day physics
are currently being ―confirmed.‖ But only the last alternative listed is conceptually
correct. This is the only one that describes the situation as it actually exists.
When we compare these results of the assumptions made for purposes of the analogy
with the observed physical situation in high-speed motion we find a complete
correspondence. Here, too, mathematical coincidence can be attained by a set of fudge
factors, the Lorentz equations, in a special set of circumstances only. As in the analogy,
such fudge factors are applicable only where the motion is constant both in speed and in
direction. They apply only to uniform translational motion. This close parallel between
the observed physical situation and the analogy strongly suggests that the underlying
cause of the measurement discrepancy is the same in both cases; that in the physical
universe, as well as under the circumstances assumed for purposes of the analogy, one of
the factors that enters into the measurement of the magnitudes involved has not been
taken into consideration.
This is exactly the answer to the problem that emerges from the development of the
Reciprocal System of theory. We find from this theory that the conventional stationary
three-dimensional spatial frame of reference correctly represents locations in extension
space, and that, contrary to Einstein's assertion, the distance between coordinates in this
reference system correctly represents the spatial magnitudes entering into the equations
of motion. However, this theoretical development also reveals that time magnitudes in
general can only be represented by a similar three-dimensional frame of reference, and
that the time registered on a clock is merely the one-dimensional path of the time
progression in this three-dimensional reference frame.
Inasmuch as gravitation operates in space in our material sector of the universe, the
progression of time continues unchecked, and the change of position in time represented
by the clock registration is a component of the time magnitude of any motion. In
everyday life, no other component of any consequence is present, and for most purposes
the clock registration can be taken as a measurement of the total time involved in a
motion. But where another significant component is present, we are confronted with the
same kind of a situation that was portrayed by the analogy. In uniform translational
motion the mathematical relation between the clock time and the total time is a constant
function of the speed, and it is therefore possible to formulate a fudge factor that will take
care of the discrepancy. In the general situation where there is no such constant
relationship, this is not possible, and the Lorentz equations cannot be extended to motion
in general. Correct results in the general situation can be obtained only if the true scalar
magnitude of the time that is involved is substituted for clock time in the equations of
motion.
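For uniform translational motion the constant reconciling factor referred to above is the familiar Lorentz factor of conventional physics. The sketch below merely evaluates it, in conventional notation and as an illustration, to show that it depends on the speed alone and is therefore fixed for any one uniform motion.

```python
import math

def lorentz_factor(v: float) -> float:
    """gamma = 1 / sqrt(1 - v^2), with v expressed as a fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - v**2)

for v in (0.1, 0.5, 0.9):
    # A fixed factor for each given speed; no single value serves for motion in general.
    print(v, lorentz_factor(v))
```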
This explanation should enable a clear understanding of the position of the Reciprocal
System with respect to the validity of the Lorentz equations. Inasmuch as no method of
measuring total time is currently available, there is a substantial amount of convenience
in being able to arrive at the correct numerical results in certain applications by using a
mathematical fudge factor. In so doing, we are making use of an incorrect magnitude that
we are able to measure in lieu of the correct magnitude that we cannot measure. The
Reciprocal System agrees that when we need to use fudge factors in this manner, the
Lorentz equations are the correct fudge factors for the purpose. These equations simply
accomplish a mathematical reconciliation of the equations of motion with the constant
speed of light, and since this constant speed, which was accepted by Lorentz as an
empirically established fact, is deduced from the postulates of the Reciprocal System, the
mathematical treatment is based on the same premises in both cases, and necessarily
arrives at the same results. To this extent, therefore, the new system of theory is in accord
with current thinking.
As P. W. Bridgman once pointed out, many physicists regard ―the content of the special
theory of relativity as coextensive with the content of the Lorentz equations.‖53 K.
Feyerabend gives us a similar report:
It must be admitted, however, that contemporary physicists hardly ever use Einstein’s
original interpretation of the special theory of relativity. For them the theory of relativity
consists of two elements:
(1) The Lorentz transformations; and (2) mass-energy equivalence.54
For those who share this view, the results obtained from the Reciprocal System of theory
in this area make no change at all in the existing physical picture. These individuals
should find it easy to accommodate themselves to the new viewpoint. Those who still
take their stand with Einstein will have to face the fact that the new results show, just as
the clock paradox does, that Einstein's interpretation of the mathematics of high speed
motion is incorrect. Indeed, the mere appearance of a new and different explanation of a
rational character is a crushing blow to the relativity theory, as the case in its favor is
argued very largely on the basis that there is no such alternative. As Einstein says, “if the
velocity of light is the same in all C.S. (coordinate systems), then moving rods must
change their length, moving clocks must change their rhythm . . . there is no other way.”55
The statement by Millikan quoted earlier is equally positive on this score.
The status of an assertion of this kind, a contention that there is no alternative to a given
conclusion, is always precarious, because, unlike most propositions based on other
grounds, which can be supported even in the face of some adverse evidence, this
contention that there is no alternative is immediately and utterly demolished when an
alternative is produced. Furthermore, the use of the ―no alternative‖ argument constitutes
a tacit admission that there is something dubious about the explanation that is being
offered; something that would preclude its acceptance if there were any reasonable
alternative.
Einstein's contribution, in the form of the special theory, can be accurately evaluated only if it is
realized that this, too, is a fudge, a conceptual fudge, we might call it. As he explains in
the statement that has been our principal target in this chapter, what he has done is to
eliminate the “metrical meaning” of spatial coordinates; that is, he takes care of the
discrepancy between the two measurements by arbitrarily decreeing that one of them
shall be disregarded. This may have served a certain purpose in the past by enabling the
scientific community to avoid the embarrassment of having to admit inability to find any
explanation for the high speed discrepancy, but the time has now come to look at the
situation squarely and to recognize that the relativity concept is erroneous.
It is not always appreciated that the mathematical fudge accomplished by the use of the
Lorentz equations works in both directions. If the velocity is not directly determined by
the change in coordinate position during a given time interval, it follows that the change
in coordinate position is not directly determined by the velocity. Recognition of this point
will clear up any question as to a possible conflict between the conclusions of Chapter 5
and the constant speed of light.
In closing this discussion of the high speed problem, it is appropriate to point out that the
identification of the missing factor in the motion equations, the additional time
component that becomes significant at high speeds, does not merely provide a new and
better explanation of the existing discrepancy. It eliminates that discrepancy, restoring the
―metrical meaning‖ of the coordinate distances in a way that makes them entirely
consistent with the constant speed of light.
CHAPTER 8

Motion in Time
The starting point for an examination of the nature of motion in time is a recognition of
the status of unit speed as the natural datum, the zero level of physical activity. We are
able to deal with speeds measured from some arbitrary zero in our everyday life because
these are not primary quantities; they are merely speed differences. For example, where
the speed limit is 50 miles per hour, this does not mean that an automobile is prohibited
from moving at any faster rate. It merely means that the difference between the speed of
the vehicle and the speed of the portion of the earth's surface over which the vehicle is
traveling must not exceed 50 miles per hour. The car and the earth’s surface are jointly
moving at higher speeds in several different directions, but these are of no concern to us
for ordinary purposes. We deal only with the differences, and the datum from which
measurement is made has no special significance.
In current practice we regard a greater rate of change in vehicle location relative to the
local frame of reference as being the result of a greater speed, that quantity being
measured from zero. We could equally well measure from some arbitrary non-zero level,
as we do in the common systems of temperature measurement, or we could even measure
the inverse of speed from some selected datum level, and attribute the greater rate of
change of position to less “inverse speed.” In dealing with the basic phenomena of the
universe, however, we are dealing with absolute speeds, not merely speed differences,
and for this purpose it is necessary to recognize that the datum level of the natural system
of reference is unity, not zero.
Since motion exists only in units, according to the postulates that define a universe of
motion, and each unit of motion consists of one unit of space in association with one unit
of time, all motion takes place at unit speed, from the standpoint of the individual units.
This speed may, however, be either positive or negative, and by a sequence of reversals
of the progression of either time or space, while the other component continues
progressing unidirectionally, an effective scalar speed of 1/n, or n/1, is produced. In
Chapter 4 we considered the case in which the vectorial direction of the motion reversed
at each end of a one-unit path, the result being a vibrational motion. Alternatively, the
vectorial direction may reverse in unison with the scalar direction. In this case space (or
time) progresses one unit in the context of a fixed reference system while time (or space)
progresses n units. Here the result is a translatory motion at a speed of 1/n (or n/1) units.
The scalar situation is the same in both cases. A regular pattern of reversals results in a
space-time ratio of 1/n or n/1. In the example shown in the tabulation in Chapter 4, where
the space-time ratio is 1/3, there is a one-unit inward motion followed by an outward unit
and a second inward unit. The net inward motion in the three-unit sequence is one unit. A
continuous succession of similar 3-unit sequences then follows. As indicated in the
accompanying tabulation, the scalar direction

DIRECTION
Unit      Vibratory                Translation
Number    Scalar      Vectorial    Scalar      Vectorial
1         inward      right        inward      forward
2         outward     left         outward     backward
3         inward      right        inward      forward
4         inward      left         inward      forward
5         outward     right        outward     backward
6         inward      left         inward      forward
of the last unit of each sequence is inward. (A sequence involving an even number of n
alternates n - 1 and n + 1. For instance, instead of two four-unit sequences, in which the
last unit of each sequence would be outward, there is a three-unit sequence and a five-unit
sequence.) The scalar direction of the first unit of each new sequence is also inward. Thus
there is no reversal of scalar direction at the point where the new sequence begins. In the
vibrational situation the vectorial direction continues the regular succession of reversals
even at the points where the scalar direction does not reverse, but in the translational
situation the reversals of vectorial direction conform to those of the scalar direction.
Consequently, the path of vibration remains in a fixed location in the dimension of the
oscillation, whereas the path of translation moves forward at the scalar space-time ratio
1/n (or n/1). This is the pattern followed by certain scalar motions that will be discussed
later and by all vectorial motions: motions of material units and aggregates.
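The reversal pattern shown in the tabulation can be generated mechanically. The short sketch below, built from the description in the text and using a hypothetical function name, reproduces the scalar-direction sequence for a space-time ratio of 1/3, in which each three-unit block nets one inward unit.

```python
def scalar_directions_one_third(units: int) -> list[str]:
    """Scalar direction of each unit for a net speed of 1/3: every three-unit
    block runs inward, outward, inward, for a net of one inward unit per block."""
    block = ["inward", "outward", "inward"]
    return [block[i % 3] for i in range(units)]

sequence = scalar_directions_one_third(6)
print(sequence)          # ['inward', 'outward', 'inward', 'inward', 'outward', 'inward']
net_inward = sequence.count("inward") - sequence.count("outward")
print(net_inward, "net inward units in", len(sequence), "units")   # 2 in 6, i.e. the ratio 1/3
```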
When the progression within a unit of motion reaches the end of the unit it either reverses
or does not reverse. There is no intermediate possibility. It follows that what appears to
be a continuous unidirectional motion at speed 1/n is, in fact, an intermittent motion in
which space progresses at the normal rate of one unit of space per unit of time for a
fraction 1/n of the total number of space units involved, and has a net resultant of zero, in
the context of the fixed reference system, during the remainder of the motion.
If the speed is 1/n–one unit of space per n units of time–space progresses only one unit
instead of the n units it would progress unidirectionally. The result of motion at the 1/n
speed is therefore to cause a change of spatial position relative to the location that would
have been reached at the normal rate of progression. Motion at less than unit speed, then,
is motion in space. This is a well-known fact. But because of the uncritical acceptance of
Einstein’s dictum that speeds in excess of that of light are impossible, and a failure to
recognize the reciprocal relation between space and time, it has not heretofore been
realized that the inverse of this kind of motion is also a physical reality. Where the speed
is n/1, there is a reversal of the time component that results in a change of position in time
relative to that which would take place at the normal rate of time progression, the elapsed
time registered on a clock. Motion at speeds greater than unity is therefore motion in
time.
The existence of motion in time is one of the most significant consequences of the status
of the physical universe as a universe of motion. Conventional physical science, which
recognizes only motion in space, has been able to deal reasonably well with those
phenomena that involve spatial motion only. But it has not been able to clarify the
physical fundamentals, a task for which an understanding of the role of time is essential,
and it is encountering a growing number of problems as observation and experiment are
extended into the areas where motion in time is an important factor. Furthermore, the
number and scope of these problems have been greatly increased by the use of zero speed,
rather than unit speed, as the reference datum for measurement purposes. While motion at
speeds of 1/n (speeds less than unity) is motion in space only, when viewed relative to the
natural (moving) reference system, it is motion in both space and time relative to the
conventional systems that utilize the zero datum.
It should be understood that the motions we are now discussing are independent motions
(physical phenomena), not the fictitious motion introduced by the use of a stationary
reference system. The term "progression" is here being utilized merely to emphasize the
continuing nature of these motions, and their space and time aspects. During the one unit
of motion (progression) at the normal unit speed that occurs periodically when the
average speed is 1/n, the spatial component of this motion, which is an inherent property
of the motion independent of the progression of the natural reference system, is
accompanied by a similar progression of time that is likewise independent of the
progression of the reference system, the time aspect of which is measured by a clock.
Thus, during every unit of clock time, the independent motion at speed 1/n involves a
change of position in three-dimensional time amounting to 1/n units.
As brought out in the preliminary discussion of this subject in Chapter 6, the value of n at
the speeds of our ordinary experience is so large that the quantity 1/n is negligible, and
the clock time can be taken as equivalent to the total time involved in motion. At higher
speeds, however, the value of 1/n becomes significant, and the total time involved in
motion at these high speeds includes this additional component. It is this heretofore-
unrecognized time component that is responsible for the discrepancies that present-day
science tries to handle by means of fudge factors.
In the two-photon case considered in Chapter 7, the value of 1/n is 1/1 for both photons.
A unit of the motion of photon X involves one unit of space and one unit of time. The
time involved in this unit of motion (the time OX) can be measured by means of the
registration on a clock, which is merely the temporal equivalent of a yardstick. The same
clock can also be used to measure the time magnitude involved in the motion of photon Y
(the time OY), but this use of the same temporal ―yardstick‖ does not mean that the time
interval OY through which Y moves is the same interval through which X moves, the
interval OX, any more than using the same yardstick to measure the space traversed by Y
would make it the same space that is traversed by X. The truth is that at the end of one
unit of the time involved in the progression of the natural reference system (also
measured by a clock), X and Y are separated by two units of total time (the time OX and
the time OY), as well as by two units of space (distance). The relative speed is the
increase in spatial separation, two units, divided by the increase in temporal separation,
two units, or 2/2 = 1.
If an object with a lower speed v is substituted for one of the photons, so that the
separation in space at the end of one unit of clock time is 1 + v instead of 2, the
separation in time is also 1 + v and the relative speed is (1+ v)/(1 + v) = 1. Any process
that measures the true speed rather than the space traversed during a given interval of
standard clock time (the time of the progression of the natural reference system) thus
arrives at unity for the speed of light irrespective of the system of reference.
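The arithmetic of the two cases just described can be set out explicitly. The following
lines are only an illustrative restatement of the totals given in the text; the value
assigned to v is assumed for the example.

    # Two photons, X and Y, moving in opposite directions at unit speed:
    space_separation = 1 + 1     # two units of space (distance OX plus distance OY)
    time_separation = 1 + 1      # two units of total time (time OX plus time OY)
    print(space_separation / time_separation)     # 2/2 = 1

    # Photon X paired with an object moving at a lower speed v:
    v = 0.25                     # illustrative value only
    space_separation = 1 + v     # spatial separation after one unit of clock time
    time_separation = 1 + v      # temporal separation on the total-time basis
    print(space_separation / time_separation)     # (1 + v)/(1 + v) = 1

On the total-time basis the ratio is unity in both cases, which is the conclusion stated
above.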
When the correct time magnitudes are introduced into the equations of motion there is no
longer any need for fudge factors. The measured coordinate differences and the measured
constant speed of light are then fully compatible, and there is no need to deprive the
spatial coordinates of their ―metrical meaning.‖ Unfortunately, however, no means of
measuring total time, except in certain special applications, are available at present.
Perhaps some feasible method of measurement may be developed in the future, but in the
meantime it will be necessary to continue on the present basis of applying a correction to
the clock registration, in those areas where this is feasible. Under these circumstances we
can consider that we are using correction factors instead of fudge factors. There is no
longer an unexplained discrepancy that needs to be fudged out of existence. What we
now find is that our calculations involve a time component that we are unable to measure.
In lieu of the measurements that we are unable to make, we find it possible, in certain
special cases, to apply correction factors that compensate for the difference between
clock time and total time.
A full explanation of the derivation of these correction factors, the Lorentz equations, is
available in the scientific literature, and will not be repeated here. This conforms with a
general policy that will be followed throughout this work. As explained in Chapter 1,
most existing physical theories have been constructed by building up from empirical
foundations. The Reciprocal System of theory is constructed in the opposite manner.
While the empirically based theories start with the observed details and work toward the
general principles, the Reciprocal System starts with a set of general postulates and works
toward the details. At some point each of the branches of the theoretical development will
meet the corresponding element of empirical theory. Where this occurs in the course of
the present work, and there is agreement, as there is in the case of the Lorentz equations,
the task of this presentation is complete. No purpose would be served by duplicating
material that is already available in full detail.
Most of the other well-established relationships of physical science are similarly
incorporated into the new system of theory, with or without minor modifications, as the
development of the theoretical structure proceeds, not because of the weight of
observational evidence supporting these relations, or because anyone happens to approve
of them, or because they have previously been accepted by the scientific world, but
because the conclusions expressed by these relations are the same conclusions that are
reached by development of the new theoretical system. After such a relation has thus
been taken into the system, it is, of course, part of the system, and can be used in the
same manner as any other part of the theoretical structure.
The existence of speeds greater than unity (the speed of light), the speeds that result in
change of position in time, conflicts with current scientific opinion, which accepts
Einstein’s conclusion that the speed of light is an absolute limit that cannot be exceeded.
Our development shows, however, that at one point where Einstein had to make an
arbitrary choice between alternatives, he made the wrong choice, and the speed limitation
was introduced through this error. It does not exist in fact.
Like the special theory of relativity, the theory from which the speed limitation is derived
is an attempt to provide an explanation for an empirical observation. According to
Newton's second law of motion, which can be expressed as a = F/m, if a constant force is
applied to the acceleration of a constant mass it should produce an acceleration that is
also constant. But a series of experiments showed that where a presumably constant
electrical force is applied to a light particle, such as an electron, in such a manner that
very high speeds are produced, the acceleration does not remain constant, but decreases
at a rate which indicates that it would reach zero at the speed of light. The true relation,
according to the experimental results, is not Newton's law, a = F/m, but a = √(1 - (v/c)²) F/m.
In the system of notation used in this work, which utilizes natural, rather than
arbitrary, units of measurement, the speed of light, designated as c in current practice, is
unity, and the variable speed (or velocity), v, is expressed in terms of this natural unit. On
this basis the empirically derived equation becomes a = √(1 - v²) F/m.
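A brief numerical sketch of this relation, in natural units, is given below. It is purely
illustrative: the function and variable names are not part of the theory, and the
calculation is indifferent to whether the factor is interpreted as a decrease in the
effective force (the explanation developed in the following pages) or as an increase in the
mass (Einstein's interpretation).

    import math

    def acceleration(force, mass, v):
        # Empirical relation a = sqrt(1 - v**2) * F/m, with the speed v
        # expressed in natural units (v = 1 corresponds to the speed of light).
        return math.sqrt(1.0 - v ** 2) * force / mass

    for v in (0.0, 0.5, 0.9, 0.99, 1.0):
        print(v, acceleration(1.0, 1.0, v))
    # The acceleration falls from F/m at rest toward zero as v approaches
    # unity, which is the behavior reported in the experiments.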
There is nothing in the data derived from experiment to tell us the meaning of the term 1 -
v² in this expression; whether the force decreases at higher speeds, or the mass increases,
or whether the velocity term represents the effect of some factor not related to either
force or mass. Einstein apparently considered only the first two of these alternatives.
While it is difficult to reconstruct the pattern of his thinking, it appears that he assumed
that the effective force would decrease only if the electric charges that produced the force
decreased in magnitude. Since all electric charges are alike, so far as we know, whereas
the primary mass concentrations seem to be extremely variable, he chose the mass
alternative as being the most likely, and assumed for purposes of his theory that the mass
increases with the velocity at the rate indicated by the experiments. On this basis, the
mass becomes infinite at the speed of light.
The results obtained from development of the consequences of the postulates of the
Reciprocal System now show that Einstein guessed wrong. The new information
developed theoretically (which will be discussed in detail later) reveals that an electric
charge is inherently incapable of producing a speed in excess of unity, and that the
decrease in the acceleration at high speeds is actually due to a decrease in the force
exerted by the charges, not to any change in the magnitude of either the mass or the
charge.
As explained earlier, force is merely a concept by which we visualize the resultant of
oppositely directed motions as a conflict of tendencies to cause motion rather than as a
conflict of the motions themselves. This method of approach facilitates mathematical
treatment of the subject, and is unquestionably a convenience, but whenever a physical
situation is represented by some derived concept of this kind there is always a hazard that
the correspondence may not be complete, and that the conclusions reached through the
medium of the derived concept may therefore be in error. This is what has happened in
the case we are now considering.
If the assumption that a force applied to the acceleration of a mass remains constant in the
absence of any external influences is viewed only from the standpoint of the force
concept, it appears entirely logical. It seems quite reasonable that a tendency to cause
motion would remain constant unless subjected to some kind of a modifier. But when we
look at the situation in its true light as a combination of motions, rather than through the
medium of an artificial representation by means of the force concept, it is immediately
apparent that there is no such thing as a constant force. Any force must decrease as the
speed of the motion from which it originates is approached. The progression of the
natural reference system, for instance, is motion at unit speed. It therefore exerts unit
force. If the force–that is, the effect–of the progression is applied to overcoming a
resistance to motion (the inertia of a mass) it will ultimately bring the mass up to the
speed of the progression itself: unit speed. But a tendency to impart unit speed to an
object that is already moving at high speed is not equivalent to a tendency to impart unit
speed to a body at rest. In the limiting condition, where an object is already moving at
unit speed, the force due to the progression of the reference system has no effect at all,
and its magnitude is zero.
Thus, the full effect of any force is attained only when the force is exerted on a body at
rest, and the effective component in application to an object in motion is a function of the
difference between the speed of that object and the speed that manifests itself as a force.
The specific form of the mathematical function, √(1 - v²) rather than merely 1 - v, is related to
some of the properties of compound motions that will be discussed later. Ordinary
terrestrial speeds are so low that the corresponding reduction in the effective force is
negligible, and at these speeds forces can be considered constant. As the speed of the
moving object increases, the effective force decreases, approaching a limit of zero when
the object is moving at the speed corresponding to the applied force–unity in the case of
the progression of the natural reference system. As we will find in a later stage of the
development, an electric charge is inherently a motion at unit speed, like the gravitational
motion and the progression of the natural reference system, and it, too, exerts zero force
on an object moving at unit speed.
As an analogy, we may consider the case of a container full of water, which is started
spinning rapidly. The movement of the container walls exerts a force tending to give the
liquid a rotational motion, and under the influence of this force the water gradually
acquires a rotational speed. But as that speed approaches the speed of the container the
effect of the ―constant‖ force drops off, and the container speed constitutes a limit beyond
which the water speed cannot be raised by this means. The force vanishes, we may say.
But the fact that we cannot accelerate the liquid any farther by this means does not bar us
from giving it a higher speed in some other way. The limitation is on the capability of the
process, not on the speed at which the water can rotate.
The mathematics of the equation of motion applicable to the acceleration phenomenon
remain the same in the Reciprocal System as in Einstein's theory. It makes no difference
mathematically whether the mass is increased by a given amount, or the effective force is
decreased by the same amount. The effect on the observed quantity, the acceleration, is
identical. The wealth of experimental evidence that demonstrates the validity of these
mathematics therefore confirms the results derived from the Reciprocal System to exactly
the same degree that it confirms Einstein’s theory. All that this evidence does in either
case is to show that the theory is mathematically correct.
But mathematical validity is only one of the requirements that a theory must meet in
order to be a correct representation of the physical facts. It must also be conceptually
valid; that is, the meaning attached to the mathematical terms and relations must be
correct. One of the significant aspects of Einstein’s theory of acceleration at high speeds
is that it explains nothing; it merely makes assertions. Einstein gives us an ex cathedra
pronouncement to the effect that the velocity term represents an increase in the mass,
without any attempt at an explanation as to why the mass increases with the velocity, why
this hypothetical mass increment does not alter the structure of the moving atom or
particle, as any other mass increment does, why the velocity term has this particular
mathematical form, or why there should be a speed limitation of any kind.
Of course, this lack of a conceptual background is a general characteristic of the basic
theories of present-day physics, the ―free inventions of the human mind,‖ as Einstein
described them, and the theory of mass increase is not unusual in this respect. But the
arbitrary character of the theory contrasts sharply with the full explanation provided by
the Reciprocal System. This new system of theory produces simple and logical answers
for all questions, similar to those enumerated above, that arise in connection with the
explanation that it supplies. Furthermore, none of these is, in any respect, ad hoc. All are
derived entirely by deduction from the assumptions as to the nature of space and time that
constitute the basic premises of the new theoretical system.
Both the Reciprocal System and Einstein's theory recognize that there is a limit of some
kind at unit speed. Einstein says that this is a limit on the magnitude of speed, because on
the basis of his theory the mass reaches infinity at unit speed, and it is impossible to
accelerate an infinite mass. The Reciprocal System, on the other hand, says that the limit
is on the capability of the process. A speed in excess of unity cannot be produced by
electromagnetic means. This does not preclude acceleration to higher speeds by other
processes, such as the sudden release of large quantities of energy in explosive events,
and according to this new theoretical viewpoint there is no definite limit to speed
magnitudes. Indeed, the general reciprocal relation between space and time requires that
speeds in excess of unity be just as plentiful, and cover just as wide a range, in the
universe as a whole, as speeds less than unity. The apparent predominance of low-speed
phenomena is merely a result of observing the universe from a location far over on the
low-speed side of the neutral axis.
One of the reasons why Einstein's assertion as to the existence of a limiting speed was so
readily accepted is an alleged absence of any observational evidence of speeds in excess
of that of light. Our new theoretical development indicates, however, that there is actually
no lack of evidence. The difficulty is that the scientific community currently holds a
mistaken belief as to the nature of the change of position that is produced by such a
motion. We observe that a motion at a speed less than that of light causes a change of
location in space, the rate of change varying with the speed (or velocity, if the motion is
other than linear). It is currently taken for granted that a speed in excess of that of light
would result in a still greater rate of change of spatial location, and the absence of any
clearly authenticated evidence of such higher rates of location change is interpreted as
proof of the existence of a speed limitation. But in a universe of motion an increment of
speed above unity (the speed of light) does not cause a change of location in space. In
such a universe there is complete symmetry between space and time, and since unit speed
is the neutral level, the excess speed above unity causes a change of location in three-
dimensional time rather than in three-dimensional space.
From this it can be seen that the search for "tachyons," hypothetical particles that move
with a spatial velocity greater than unity, will continue to be fruitless. Speeds above unity
cannot be detected by measurements of the rate of change of coordinate positions in
space. We can detect them only by means of a direct speed measurement, or by some
collateral effects. There are many observable effects of the required nature, but their
status as evidence of speeds greater than that of light is denied by present-day physicists
on the ground that it conflicts with Einstein’s assumption of an increase in mass at high
speeds. In other words, the observations are required to conform to the theory, rather than
requiring the theory to meet the standard test of science: conformity with observation and
measurement.
The current treatment of the abnormal redshifts of the quasars is a glaring example of this
unscientific distortion of the observations to fit the theory. We have adequate grounds to
conclude that these are Doppler shifts, and are due to the speeds at which these objects
are receding from the earth. Until very recently there was no problem in this connection.
There was general agreement as to the nature of the redshifts, and as to the existence of a
linear relation between the redshift and the speed. This happy state of affairs was ended
when quasars were found with redshifts exceeding 1.00. On the basis of the previously
accepted theory, a 1.00 redshift indicates a recession speed equal to the speed of light.
The newly discovered redshifts in the range above 1.00 therefore constitute a direct
measurement of quasar motions at speeds greater than that of light.
But the present-day scientific community is unwilling to challenge Einstein, even on the
basis of direct evidence, so the mathematics of the special theory of relativity have been
invoked as a means of saving the speed limitation. No consideration seems to have been
given to the fact that the situation to which the mathematical relations of special relativity
apply does not exist in the case of the Doppler shift. As brought out in Chapter 7, and as
Einstein has explained very clearly in his works, the Lorentz equations, which express
those mathematics, are designed to reconcile the results of direct measurements of speed,
as in the Michelson-Morley experiment, with the measured changes of coordinate
position in a spatial reference system. As everyone, including Einstein, has recognized, it
is the direct speed measurement that arrives at the correct numerical magnitude. (Indeed,
Einstein postulated the validity of the speed measurement as a basic principle of nature.)
Like the result of the Michelson-Morley experiment, the Doppler shift is a direct
measurement, simply a counting operation, and it is not in any way connected with a
measurement of spatial coordinates. Thus there is no excuse for applying the relativity
mathematics to the redshift measurements.
Inasmuch as the ―time dilatation‖ aspect of the Lorentz equations is being applied to
some other phenomena that do not seem to have any connection with spatial coordinates,
it may be desirable to anticipate the subsequent development of theory to the extent of
stating that the discussion in Chapter 15 will show that those ―dilatation‖ phenomena that
appear to involve time only, such as the extended lifetime of fast-moving unstable
particles, are, in fact, consequences of the variation of the relation between coordinate
spatial location (location in the fixed reference system) and absolute spatial location
(location in the natural moving system) with the speed of the objects occupying these
locations. The Doppler effect, on the other hand, is independent of the spatial reference
system.
The way in which motion in time manifests itself to observation depends on the nature of
the phenomenon in which it is observed. Large redshifts are confined to high-speed
astronomical objects, and a detailed examination of the effect of motion in time on the
Doppler shift will be deferred to Volume II, where it will be relevant to the explanation
of the quasars. At this time we will take a look at another of the observable effects of
motion in time that is not currently recognized as such by the scientific community: its
effect in distorting the scale of the spatial reference system.
It was emphasized in Chapter 3 that the conventional spatial reference systems are not
capable of representing more than one variable–space–and that because there are two
basic variables–space and time–in the physical universe we are able to use the spatial
reference systems only on the basis of an assumption that the rate of change of time
remains constant. We further saw, earlier in the present chapter, that at all speeds of unity
or less time does, in fact, progress at a constant rate, and all variability is in space. It
follows that if the correct values of the total time are used in all applications, the
conventional spatial coordinate systems are capable of accurately representing all
motions at speeds of 1/n. But the scale of the spatial coordinate system is related to the
rate of change of time, and the accuracy of the coordinate representation depends on the
absence of any change in time other than the continuing progression at the normal rate of
registration on a clock. At speeds in excess of unity, space is the entity that progresses at
the fixed normal rate, and time is variable. Consequently, the excess speed above unity
distorts the spatial coordinate system.
In a spatial reference system the coordinate difference between two points A and B
represents the space traversed by any object moving from A to B at the reference speed.
If that reference speed is changed, the distance corresponding to the coordinate difference
AB is changed accordingly. This is true irrespective of the nature of the process utilized
for measurement of the distance. It might be assumed, for instance, that by using
something similar to a yardstick, which compares space directly with space, the
measurement of the coordinate distance would be independent of the reference speed. But
this is not correct, as the length of the yardstick, the distance between its two ends, is
related to the reference speed in the same manner as the distance between any other two
points. If the coordinate difference between A and B is x when the reference speed has
the normal unit value, then it becomes 2x if the reference speed is doubled. Thus, if we
want to represent motions at twice the speed of light in one of the standard spatial
coordinate systems that assume time to be progressing normally, all distances involved in
these motions must be reduced by one half. Any other speed greater than unity requires a
corresponding modification of the distance scale.
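The adjustment described in this paragraph can be stated as a one-line rule. The following
sketch is illustrative only; it reads the "corresponding modification" as a reduction of
the distances in proportion to the speed, which is the relation exemplified by the factor of
one half at twice the speed of light.

    def represented_distance(actual_distance, speed):
        # Distance to be used in a conventional spatial coordinate system
        # (which assumes the normal unit rate of time progression) for motion
        # at the given speed in natural units.  Above unit speed the scale of
        # the system is distorted, and the distances are reduced accordingly.
        if speed <= 1.0:
            return actual_distance          # no adjustment at or below unit speed
        return actual_distance / speed      # reduced by one half at speed 2, etc.

    print(represented_distance(10.0, 2.0))  # 5.0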
The existence of motion at greater-than-unit speeds has no direct relevance to the familiar
phenomena of everyday life, but it is important in all of the less accessible areas, those
that we have called the far-out regions. Most of the consequences that apply in the realm
of the very large, the astronomical domain, have no significance in relation to the subjects
being discussed at this early stage of the theoretical development, but the general nature
of the effects produced by greater-than-unit speeds is most clearly illustrated by those
astronomical phenomena in which such speeds can be observed on a major scale. A brief
examination of a typical high-speed astronomical object will therefore help to clarify the
factors involved in the high-speed situation.
In the preceding pages we deduced from theoretical premises that speeds in excess of the
speed of light can be produced by processes that involve large concentrations of energy,
such as explosions. Further theoretical development (in Volume II) will show that both
stars and galaxies do, in fact, undergo explosions at certain specific stages of their
existence. The explosion of a star is energetic enough to accelerate some portions of the
stellar mass to speeds above unity, while other portions acquire speeds below this level.
The low-speed material is thrown off into space in the form of an expanding cloud of
debris in which the particles of matter retain their normal dimensions but are separated by
an increasing amount of empty space. The high-speed material is similarly ejected in the
form of an expanding cloud, but because of the distortion of the scale of the reference
system by the greater-than-unit speeds, the distances between the particles decrease
rather than increase. To emphasize the analogy with the cloud of material expanding into
space, we may say that the particles expanding into time are separated by an increasing
amount of empty time.
The expansion in each case takes place from the situation that existed at the time of the
explosion, not from some arbitrary zero datum. The star was originally stationary, or
moving at low speed, in the conventional spatial reference system, and was stationary in
time in the moving system of reference defined by a clock. As a result of the explosion,
the matter ejected at low speeds moves outward in space and remains in the original
condition in time. The matter ejected at high speeds moves outward in time and remains
in its original condition in space. Since we see only the spatial result of all motions, we
see the low-speed material in its true form as an expanding cloud, whereas we see the
high-speed material as an object remaining stationary in the original spatial location.
Because of the empty space that is introduced between the particles of the outward-
moving explosion product, the diameter of the expanding cloud is considerably larger
than that of the original star. The empty time introduced between the particles of the
inward-moving explosion product conforms to the general reciprocal relation, and inverts
this result. The observed aggregate, a white dwarf star, is also an expanding object, but
its expansion into time is equivalent to a contraction in space, and as we see it in its
spatial aspect, its diameter is substantially less than that of the original star. It thus
appears to observation as an object of very high density.
The white dwarf is one member of a class of extremely compact astronomical objects
discovered in recent years that is today challenging the basic principles of conventional
physics. Some of these objects, such as the quasars, are still without any plausible
explanation. Others, including the white dwarfs, have been tied in to current physical
theory by means of ad hoc assumptions, but since the assumptions made to explain each
of these objects are not applicable to the others, the astronomers are supplied with a
whole assortment of theories to explain the same phenomenon: extremely high densities.
It is therefore significant that the explanation of the high density of the white dwarf stars
derived from the postulates of the Reciprocal System of theory is applicable to all of the
other compact objects. As will be shown in the detailed discussion, all of these extremely
compact astronomical objects are explosion products, and their high density is in all cases
due to the same cause: motion at speeds in excess of that of light.
This is only a very brief account of a complex phenomenon that will be examined in full
detail later, but it is a striking illustration of how the inverse phenomena predicted by the
reciprocal relation can always be found somewhere in the universe, even if they involve
such seemingly bizarre concepts as empty time, or high speed motion of objects
stationary in space.
Another place where the inability of the conventional spatial reference systems to
represent changes in temporal location, other than by distortion of the spatial
representation, prevents it from showing the physical situation in its true light is the
region inside unit distance. Here the motion in time is not due to a speed greater than
unity, but to the fact that, because of the discrete nature of the natural units, less than unit
space (or time) does not exist. To illustrate just what is involved here, let us consider an
atom A in motion toward another atom B. According to current ideas, atom A will
continue to move in the direction AB until the atoms, or the force fields surrounding
them, if such fields exist, are in contact. The postulates of the Reciprocal System specify,
however, that space exists only in units. It follows that when atom A reaches point X one
unit of space distant from B, it cannot move any closer to B in space. But it is free to
change its position in time relative to the time location occupied by atom B, and since
further movement in space is not possible, the momentum of the atom causes the motion
to continue in the only way that is open to it.
The spatial reference system is incapable of representing any deviation of time from the
normal rate of progression, and this added motion in time therefore distorts the spatial
position of the moving atom A in the same manner as the speeds in excess of unity that
we considered earlier. When the separation in time between the two atoms has increased
to n units, space remaining unchanged (by means of continued reversals of direction), the
equivalent spatial separation, the quantity that is determined by the conventional methods
of measurement, is 1/n units. Thus, while atom A cannot move to a position less than one
unit of space distant from atom B, it can move to the equivalent of a closer position by
moving outward in time. Because of this capability of motion in time in the region inside
unit distance it is possible for the measured length, area, or volume of a physical object to
be a fraction of a natural unit, even though the actual one, two, or three-dimensional
space cannot be less than one unit in any case.
It was brought out in Chapter 6 that the atoms of a material aggregate, which are
contiguous in space, are widely separated in time. Now we are examining a situation in
which a change of position in the spatial coordinate system results from a separation in
time, and we will want to know just how these two kinds of time separation differ. The explanation
is that the individual atoms of an aggregate such as a gas, in which the atoms are
separated by more than unit distance, are also separated by various distances in time, but
these atoms are all at the same stage of the time progression. The motion of these atoms
meets the requirement for accurate representation in the conventional spatial coordinate
systems; that is, it maintains the fixed time progression on which the reference system is
based. On the other hand, the motion in time that takes place inside unit distance involves
a deviation from the normal time progression.
A spatial analogy may be helpful in getting a clear view of this situation. Let us consider
the individual units (stars) of a galaxy. Regardless of how widely these stars are
separated, or how much they move around within the galaxy, they maintain their status as
constituents of the galaxy because they are all receding at the same speed (the internal
motions being negligible compared to the recession speed). They are at the same stage of
the galactic recession. But if one of these stars acquires a spatial motion that modifies its
recession speed significantly, it moves away from the galaxy, either temporarily or
permanently. Thereafter, the position of this star can no longer be represented in a map of
the galaxy, except by some special convention.
The separations in time discussed in Chapter 6 are analogous to the separations in space
within the galaxies. The material aggregates that we are now discussing retain their
identities just as the galaxies do, because their individual components are progressing in
time at the same rates. But just as individual stars may acquire spatial speeds which cause
them to move away from the galaxies, so the individual atoms of the material aggregates
may acquire motions in time which cause them to move away from the normal path of the
time progression. Inside unit distance this deviation is temporary and quite limited in
extent. In the white dwarf stars the deviations are more extensive, but still temporary. In
the astronomical discussions in Volume II we will consider phenomena in which the
magnitude of the deviation is sufficient to carry the aggregates that are involved
completely out of the range of the spatial coordinate systems.
So far as the inter-atomic distance is concerned, it is not material whether this is an actual
spatial separation or merely the equivalent of such a separation, but the fact that the
movement of the atoms changes from a motion in space to a motion in time at the unit
level has some important consequences from other standpoints. For instance, the spatial
direction AB in which atom A was originally moving no longer has any significance now
that the motion is taking place inside unit distance, inasmuch as the motion in time which
replaces the previous motion in space has no spatial direction. It does have what we
choose to call a direction in time, but this temporal direction has no relation at all to the
spatial direction of the previous motion. No matter what the spatial direction of the
motion of the atom may have been before unit distance was reached, the temporal
direction of the motion after it makes the transition to motion in time is determined
purely by chance.
Any kind of action originating in the region where all motion is in time is also subject to
significant modifications if it reaches the unit boundary and enters the region of space
motion. For example, the connection between motion in space and motion in time is
scalar, again because there is no relation between direction in space and direction in time.
Consequently, only one dimension of a two-dimensional or three-dimensional motion can
be transmitted across the boundary. This point has an important bearing on some of the
phenomena that will be discussed later.
Another significant fact is that the effective direction of the basic scalar motions,
gravitation and the progression of the natural reference system, reverses at the unit level.
Outside unit space the progression of the reference system carries all objects outward in
space away from each other. Inside unit space only time can progress unidirectionally,
and since an increase in time, with space remaining constant, is equivalent to a decrease
in space, the progression of the reference system in this region, the time region, as we
will call it, moves all objects to locations which, in effect, are closer together. The
gravitational motion necessarily opposes the progression, and hence the direction of this
motion also reverses at the unit boundary. As it is ordinarily observed in the region
outside unit distance, gravitation is an inward motion, moving objects closer together. In
the time region it acts in the outward direction, moving material objects farther apart.
On first consideration, it may seem illogical for the same force to act in opposite
directions in different regions, but from the natural standpoint these are not different
directions. As emphasized in Chapter 3, the natural datum is unity, not zero, and the
progression of the natural reference system therefore always acts in the same natural
direction: away from unity. In the region outside unit distance away from unity is also
away from zero, but in the time region away from unity is toward zero. Gravitation
likewise has the same natural direction in both regions: toward unity.
It is this reversal of coordinate direction at the unit level that enables the atoms to take up
equilibrium positions and form solid and liquid aggregates. No such equilibrium can be
established where the progression of the natural reference system is outward, because in
this case the effect of any change in the distance between atoms resulting from an
unbalance of forces is to accentuate the unbalance. If the inward-directed gravitational
motion exceeds the outward-directed progression, a net inward motion takes place,
making the gravitational motion still greater. Conversely, if the gravitational motion is
the smaller, the resulting net motion is outward, which still further reduces the already
inadequate gravitational motion. Under these conditions there can be no equilibrium.
In the time region, however, the effect of a change in relative position opposes the
unbalanced force, which caused the change. If the gravitational motion (outward in this
region) is the greater an outward net motion takes place, reducing the gravitational
motion and ultimately bringing it into equality with the constant inward progression of
the reference system. Similarly, if the progression is the greater, the net movement is
inward, and this increases the gravitational motion until equilibrium is reached.
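The contrast between the two regions can be made concrete with a small numerical sketch.
Nothing in it is part of the theoretical development: the inverse-square form assumed for
the gravitational motion and the unit value assumed for the progression are chosen only so
that the two motions balance at a definite separation, here d = 2. The iteration then
exhibits the feedback described in the last two paragraphs.

    def net_step(d, region, k=4.0, dt=0.01):
        # One increment of net motion at separation d.  Outside unit distance
        # gravitation acts inward (-) and the progression outward (+); in the
        # time region both coordinate directions are reversed.
        gravitation = k / d ** 2            # assumed form, for illustration only
        progression = 1.0
        if region == "outside":
            return d + (progression - gravitation) * dt
        return d + (gravitation - progression) * dt

    d_out = d_in = 2.1                      # start slightly off the balance point
    for _ in range(500):
        d_out = net_step(d_out, "outside")
        d_in = net_step(d_in, "time region")

    print(round(d_out, 3), round(d_in, 3))
    # d_out moves steadily away from the balance point: the imbalance is
    # accentuated and no equilibrium forms.  d_in returns to d = 2: in the
    # time region the net motion opposes the imbalance and an equilibrium
    # position is reached.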
The equilibrium that must necessarily be established between the atoms of matter inside
unit distance in a universe of motion obviously corresponds to the observed inter-atomic
equilibrium that prevails in solids and, with certain modifications, in liquids. Here, then,
is the explanation of solid and liquid cohesion that we derive from the Reciprocal System
of theory, the first comprehensive and completely self-consistent theory of this
phenomenon that has ever been formulated. The mere fact that it is far superior in all
respects to the currently accepted electrical theory of matter is not, in itself, very
significant, inasmuch as the electrical hypothesis is definitely one of the less successful
segments of present-day physical theory, but a comparison of the two theories should
nevertheless be of interest from the standpoint of demonstrating how great an advance the
new theoretical system actually accomplishes in this particular field. A detailed
comparison will therefore be presented later, after some further groundwork has been
laid.

CHAPTER 9

Rotational Combinations
One of the principal difficulties that is encountered in explaining the Reciprocal System
of theory, or portions thereof, is a general tendency on the part of readers or listeners to
assume that the author or speaker, whoever he may be, does not actually mean what he
says. No previous major theory is purely theoretical; every one takes certain empirical
information as a given element in the premises of the theory. The conventional theory of
matter, for example, takes the existence of matter as given. It then assumes that this
matter is composed of ―elementary particles,‖ which it attempts to identify with observed
material particles. On the basis of this assumption, together with the empirical
information introduced into the theory, it then attempts to explain the observed range of
structural characteristics. Inasmuch as all previous theories of major scope have been
constructed on this pattern, there is a general impression that physical theories must be so
constructed, and it is therefore assumed that when reference is made to the fact that the
Reciprocal System utilizes no empirical data of any kind, this statement must have some
meaning other than its literal significance.
The theoretical development in the preceding chapters should dispose of this
misapprehension so far as the qualitative aspect of the universe is concerned. While the
task is still only in the early stages, enough of the basic features of the physical universe–
radiation, matter, gravitation, etc.–have been derived by deduction from the postulates,
without the aid of further assumptions, or of empirical information, to demonstrate that a
purely theoretical qualitative development is, in fact, feasible. But a complete account of
a theoretical universe must necessarily include the quantitative aspects of physical
phenomena as well as the qualitative aspects.
Here is another place where the way in which the development of theory has taken place
is mistakenly regarded as the way in which this development must take place. The
theoretical products of the Newtonian era, the so-called ―classical‖ physics, were capable
of being expressed in simple mathematical terms. But some deviations from the classical
laws have been encountered in the far-out regions that have been reached by observation
and experiment in recent years, and the physicists have not been able to account for these
deviations without employing extremely complex mathematical processes, together with
conceptual artifices of a rather dubious character, such as Einstein’s "rubber yardstick,"
or fudge factor. In the light of the points brought out in the preceding chapter it is now
evident that the difficulties are due to a misunderstanding of the basic nature of the far-
out phenomena, but since the modern theorists have not realized this, they have
concluded that the true relationships of the universe are extremely complex, and that they
cannot be expressed by anything other than very complex mathematics.
The general acceptance of this view of the situation has led a large segment of the
scientific community, particularly the theoretical physicists, to the further conclusion that
any treatment of the subject matter by means of simple mathematics is necessarily wrong,
and can safely be dismissed without examination. Indeed, many of these individuals go a
step farther, and characterize such a treatment as ―non-mathematical.‖ This attitude is, of
course, preposterous, and it cannot be defended, but it is nevertheless so widespread that
it constitutes a serious obstacle in the way of a full appreciation of the merits of any
simple mathematical treatment.
In beginning the quantitative development of the Reciprocal System of theory it is
therefore necessary to emphasize that simplicity is a virtue, not a defect. It is so
recognized, in principle, by scientists in general, including those who are now contending
that the universe is fundamentally complex, or even, as expressed by P. W. Bridgman,
that it ―is not intrinsically reasonable or understandable.‖56 In its entirety, the universe is
indeed complex, extremely so, but as the initial steps in the development of the
Reciprocal System in the preceding pages have already begun to demonstrate from a
qualitative standpoint, it is actually a complex aggregate of interrelated simple elements.
The principal advantage of mathematical treatment of physical subject matter is the
precision with which knowledge of a mathematical character can be developed and
expressed. This is offset to a considerable degree, however, by the fact that mathematical
knowledge of physical phenomena is incomplete, and from the physical standpoint,
ambiguous. No mathematical statement of a physical relation is complete in itself. As
Bridgman frequently pointed out, it must be accompanied by a ―text‖ that tells us what
the mathematics mean, and how they are to be applied. There is no definite and fixed
relation between this text and the mathematics; that is, every mathematical statement of a
physical relation is capable of different interpretations.
The importance of this point in the present connection lies in the fact that the Reciprocal
System makes relatively few changes in the mathematical aspects of current physical
theory. The changes that it calls for are primarily conceptual. They require different
interpretations of the mathematics, changes in the text, as Bridgman would say. Such
changes, modifications of our ideas as to what the mathematics mean, obviously cannot
be represented by alterations in the mathematical expressions. These expressions will
have to stand as they are. Many readers of the first edition have asked that the new ideas
be ―put in mathematical form.‖ But what these individuals really mean is that they want
the theory put into some different mathematical form. They are, in effect, demanding that
we change the mathematics and leave the concepts alone. This, we cannot do. The errors
in current physical thought are primarily conceptual, not mathematical, and the
corrections have to be made where the errors are, not somewhere else.
There is nothing extraordinary about the close correlation between the mathematical
aspects of the Reciprocal System and those of current theory. The conventional
mathematical relations were, for the most part, derived empirically, and any correct
theory of a more general nature must necessarily arrive at these same mathematics. But
there is no guarantee that the prevailing interpretation of these mathematical results is
correct. On the contrary, as Jeans pointed out in the statement previously quoted, the
physical interpretations of correct mathematical formulae have often been ―badly wrong.‖
Correction of the errors that have been made in the interpretation of the mathematical
expressions often has very significant consequences, not so much in the particular area to
which such an expression is directly applicable, but in collateral areas. The interpretation
is usually tailored to fit the immediate physical situation reasonably well, but if it is not
correct it becomes an impediment to progress in related areas. If it does not actually lead
to erroneous conclusions such as the limitation on speed that Einstein derived from a
wrong interpretation of the mathematics of acceleration at high speeds, it at least misses
all of the significant collateral implications of the true explanation.
For example, the mathematical statement of the recession of the distant galaxies merely
tells us that these galaxies are receding at speeds directly proportional to their distances.
The currently popular interpretation of this mathematical relation assumes that the
recession is an ordinary vectorial motion. The problem in accounting for it then becomes
a matter of identifying (or inventing) a force of sufficient magnitude to produce the
extremely high speeds of the most distant objects. The accepted hypothesis is that they
were produced by a gigantic explosion of the entire contents of the universe at some
unique stage of its history. The Reciprocal System is in agreement with the mathematical
aspects of current theory. It arrives theoretically at the conclusion that the distant galaxies
must recede at speeds proportional to their respective distances: the same conclusion that
present-day astronomy derives empirically. But the new theoretical system says that this
recession is not a vectorial motion imparted to the galaxies by some powerful force. It is
a scalar outward motion that results from viewing the galaxies in the context of a
stationary spatial frame of reference rather than in the natural moving system of reference
to which all physical objects actually conform.
So far as the recession phenomenon itself is concerned, it makes little difference, aside
from the implications for cosmology, which interpretation of the mathematical relation
between speed and distance is accepted, but on the basis of the currently popular
hypothesis, this relation has no further significance, whereas on the basis of the
explanation derived from the postulates of the Reciprocal System, the same forces that
apply to the distant galaxies are applicable to all atoms and aggregates of matter,
producing effects which vary with the relative magnitudes of the different forces
involved. On the basis of this new information, the mathematical relation, which applies
to the galaxies, is one of far-reaching importance.
This present chapter will initiate a demonstration that the very complex mathematical
relations that are encountered in many physical areas are the result of permutations and
combinations of simple basic elements, rather than a reflection of a complex fundamental
reality. The process whereby the compound unit of motion that we call an atom is
produced by applying a rotational motion to a previously existing vibrational motion, the
photon, is typical of the manner in which the complex phenomena of the universe are
built up from simple foundations. We start with a uniform linear, or translational, motion
at unit speed. Then by directional reversals we produce a simple harmonic motion, or
vibration. Next the vibrating unit is caused to rotate. The addition of this motion of a
different type alters the behavior of the unit–gives it different properties, as we say–and
puts it into a new physical category. All of the more complex physical entities with which
we will deal in the subsequent pages are similarly built up by compounding the simpler
motions.
The first phase of this mathematical development is a striking example of the way in
which a few very simple mathematical premises quickly proliferate into a large number
and variety of mathematical consequences. The development will begin with nothing
more than the series of cardinal numbers and the geometry of three dimensions. By
subjecting these to simple mathematical processes, the applicability of which to the
physical universe of motion is specified in the fundamental postulates, the combinations
of rotational motions that can exist in the theoretical universe will be ascertained. It will
then be shown that these rotational combinations that theoretically can exist can be
individually identified with the atoms of the chemical elements and the sub-atomic
particles that are observed to exist in the physical universe.
A unique group of numbers representing the different rotational components will be
derived for each of these combinations. The set of numbers applying to each element or
type of particle theoretically determines the properties of that substance, inasmuch as
these properties, like all other quantitative features of a universe of motion, are functions
of the magnitudes of the motions that constitute the material substances. It will be shown
in this and the following chapter that this theoretical assertion is valid for some of the
simpler properties, including those which depend upon the position of the element in the
periodic table. The application of these numerical factors to other properties will be
discussed from time to time as consideration of these other properties is undertaken later
in the development.
One preliminary step that will have to be taken is to revise present measurement
procedures and units in order to accommodate them to the natural moving system of
reference. Because of the status of unity as the natural reference datum, a deviation of n-1
units downward from unity to a speed 1/n has the same natural magnitude as a deviation
of n-1 units upward to a speed n/1, even though, when measured from zero speed in the
conventional manner, the changes are wholly disproportionate. When n is 4, for example,
the upward change is from 1 to 4, an increase of 3 units, whereas the downward change is
from 1 to ¼, a decrease of only ¾ unit.
In order to reflect the fact that these deviations are actually equal in magnitude from the
natural standpoint, the basis on which the fundamental processes of the universe take
place, it is necessary to set up a new system of speed measurement, in which we express
the magnitude of the speed in terms of the deviation, upward or downward, from unit
speed, instead of measuring from some zero in the conventional manner. Inasmuch as the
units in which speeds are measured on this basis are not commensurable with those of
speed as measured from zero, it would lead to complete confusion if the units of the new
system were called units of speed. For this reason, when reference is made to speed in
terms of its natural magnitude in any of the publications dealing with the Reciprocal
System of theory, it is not called speed. Instead, the term ―speed displacement‖ is used,
the units of this displacement being natural units of deviation from unity.
In practice, the term ―speed displacement‖ is usually shortened to ―displacement,‖ and
this has led to some criticism of the terminology on the ground that ―displacement‖
already has other scientific meanings. But it is highly desirable, as an aid to
understanding, that the idea of a deviation from a norm should be clearly indicated in the
language that is used, and there are not many English words that meet the requirements.
Under the circumstances, "displacement" appears to be the best choice. The sense in
which this term is used will almost always be indicated by the context in which it
appears, and in the few cases where there might be some question, the possibility of
confusion can be avoided by employing the full name, ―speed displacement.‖
Another reason for the use of a distinctive term in designating natural speed magnitudes
is that this is necessary in order to make the addition of speeds meaningful. Conventional
physics claims that it recognizes speed as a scalar quantity, but in actual practice gives it
no more than a quasi-scalar status. True scalar quantities are additive. If we have five
gallons of gasoline in one container and ten gallons in another, the total, the quantity in
which we are most interested, is fifteen gallons. The corresponding sum of two speeds of
the same object–rotational and translational, for example–has no meaning at all in current
physical thought. In the universe of motion described by the Reciprocal System of theory,
however, the scalar total of all of the speeds of an object is one of the most important
properties of that object. Thus, even though speed has the same basic significance in the
Reciprocal System as in conventional theory–that is, it is a measure of the magnitude of
motion–the manner in which speed enters into physical phenomena is so different in the
two systems that it would be inappropriate to express it in the same units of measurement
in both cases, even if this were not ruled out for other reasons.
It would, of course, be somewhat simpler if we could say ―speed‖ whenever we mean
speed, and not have to use two different terms for the same thing. But the meaning of
whatever is said should be clear in all cases if it is kept in mind that whenever reference
is made to ―displacement,‖ this means ―speed,‖ but not speed as ordinarily measured. It is
speed measured in different quantities, and from a different reference datum.
A decrease in speed from 1/1 to 1/n involves a positive displacement of n-1 units; that is,
an addition of n-1 units of motion in which time is unidirectional while the space
direction alternates, thus, in effect, adding n-1 units of time to the original speed 1/1.
Similarly, an increase in speed from 1/1 to n/1 involves a negative displacement, an
addition of n-1 units of motion in which space is unidirectional while the time direction
alternates; thus, in effect, adding n-1 units of space to the original speed 1/1.
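As an aid in following the arithmetic, the conversion between conventional speed ratios and displacement values can be summarized in a short sketch. The following Python fragment is purely illustrative; the function name and the convention of writing negative displacement as a negative integer are choices made here, not part of the text.

    from fractions import Fraction

    def displacement_from_speed(space_units, time_units):
        """Displacement of a speed s/t in which either s or t is 1.

        Speed 1/n gives a positive displacement of n-1 units;
        speed n/1 gives a negative displacement of n-1 units;
        speed 1/1, the natural datum, gives zero displacement.
        """
        speed = Fraction(space_units, time_units)
        if speed <= 1:
            return speed.denominator - 1      # low-speed side: positive displacement
        return -(speed.numerator - 1)         # high-speed side: negative displacement

    # 1/4 -> +3, 4/1 -> -3, 1/1 -> 0
    assert displacement_from_speed(1, 4) == 3
    assert displacement_from_speed(4, 1) == -3
    assert displacement_from_speed(1, 1) == 0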
In the first edition of this work the displacements here designated positive and negative
were called "time displacement" and "space displacement" respectively, to emphasize the
fact that the positive displacement represents an increased amount of time in association
with one unit of space, while the reverse is true in negative displacement. Experience has
shown, however, that the original terminology tends to be confusing, particularly in that it
is frequently interpreted as indicating addition of independent quantities of time or space
to the phenomena under consideration, whereas, in fact, it is the speed that is being
increased or decreased. As pointed out in Chapter 2, in a universe of motion there is no
such thing as physical space or time independent of motion. We can abstract the space
aspect of motion mentally, and imagine it existing independently, as a reference system
(extension space) or otherwise, but we cannot add or subtract space or time in actual
practice except by superimposing a new motion on the motion we wish to alter.
If we were dealing with speed measured from the mathematical zero it would be logical
to apply the term positive to an addition to the speed, but where we measure from unity
the values increase in both directions, and there is no reason why one increase should be
considered any more "positive" than the other. The choice has therefore been made on a
convenience basis, and the "positive" designation has been applied to the displacements
on the low speed side of the unit speed datum because these are the displacements of the
material system of phenomena. We will find, as we proceed, that the displacements
toward higher speeds, where they occur at all in the material sector, do so mainly as
negative modifications of the predominantly low speed motion combinations.
Inasmuch as the units of positive displacement and of negative displacement are simply
units of deviation from the natural speed datum they are additive algebraically. Thus, if
there exists a motion in time with a negative speed displacement of n-1 units (equivalent
to n units of speed in conventional terms) we can reduce the speed to zero, relative to the
natural datum, by adding a motion with a positive speed displacement of n-1 units.
Addition of further positive displacement will result in a net speed below unity; that is, a
motion in space. But there is no way by which we can alter either the time aspect or the
space aspect of the motion independently. The variable in a universe of motion is speed,
and the variation occurs only in displacement units. The change in terminology has been
made in the hope that it will contribute toward a full realization that what we are dealing
with are units of speed, even though, for technical reasons, we cannot call it speed.
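Because displacement units are simple deviations from the unit speed datum, their addition reduces to ordinary algebra. The short sketch below is an illustration only, not part of the original text; the names are arbitrary.

    def net_displacement(*displacements):
        """Displacements, as deviations from unity, add algebraically."""
        return sum(displacements)

    n = 5
    negative = -(n - 1)      # motion in time: negative displacement of n-1 units
    positive = n - 1         # motion in space: positive displacement of n-1 units

    # Equal positive and negative displacements cancel, leaving zero net speed
    # relative to the natural datum; further positive units give motion in space.
    assert net_displacement(negative, positive) == 0
    assert net_displacement(negative, positive, 2) == 2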
In the case of radiation, there is no inherent upper limit to the speed displacement
(conventionally measured as frequency), but in actual practice a limit is imposed by the
capabilities of the processes that produce the radiation, examination of which will be
deferred until after further groundwork has been laid. The range of radiation frequencies
is so wide that, except near 1/1, where the steps from n to n + 1 are relatively large, the
frequency spectrum is practically continuous.
The rotational situation is very different. In contrast to the almost unlimited number of
possible vibrational frequencies, the maximum number of units of rotational
displacement that can participate in any one combination of rotations is relatively small,
for reasons which will appear in the course of the discussion. Furthermore, probability
considerations dictate the distribution of the total number of rotational displacement units
among the different rotations in each individual case, so that in general there is only one
stable combination among the various mathematically possible ways of distributing a
given total rotational displacement. This limits the possible rotational combinations that
we identify as material atoms and particles to a relatively small series, the successive
members of which differ initially by one displacement unit, and at a later stage by two of
the single displacement units.
With this understanding of the fundamentals, let us now proceed to an examination of the
general characteristics of the combinations of rotational motions. The existence of
different rotational patterns is clear from the start, as the rotation can not only take place
at different speeds (displacements), but, in a three-dimensional universe, can also take
place independently in the different dimensions. As we will see in our investigation,
however, some restrictions are imposed by geometry.
The photon cannot rotate around the line of vibration as an axis. Such a rotation would be
indistinguishable from no rotation at all. But it can rotate around either or both of the two
axes perpendicular to the line of vibration and to each other. One such rotation of the
one-dimensional photon generates a two-dimensional figure: a disk. Rotation of the disk
around the second available axis then generates a three-dimensional figure: a sphere. This
exhausts the available dimensions, and no further rotation of the same nature can take
place. The basic rotation of the atom or particle is therefore two-dimensional, and, as
brought out in Chapter 5, it is in the inward scalar direction. But after the two-
dimensional rotation is in existence it is possible to give the entire combination of
vibrational and rotational motions a rotation around the third axis, which is also inward
from the scalar standpoint, but is opposed to the two-dimensional rotation vectorially.
This reverse rotation is optional, as the basic rotation is distributed over all three
dimensions, and nothing further is required for stability. A rotating system therefore
consists of a photon rotating two-dimensionally, with or without a reverse rotation in the
third dimension.
Although the two dimensions of the basic rotation have been treated separately for
descriptive purposes, first generating a disk by one rotation, and then a sphere by the
second, it should be understood that there are not two one-dimensional rotations; there is
one two-dimensional rotation. This distinction has a significant bearing on the properties
of the rotational combinations. The combined magnitude of two one-dimensional
rotations of n displacement units each is 2n. The magnitude of a two-dimensional rotation
in which the displacement is n in each dimension is n². It is not essential that all of the
rotations be effective in the physical sense. Unless there is effective rotation in at least
one dimension it is meaningless to speak of rotation, as such motion cannot be
distinguished from translation. But if there is effective rotation–that is, rotation with a
speed differing from unity–in at least one dimension, there can be rotation at unit speed
(zero displacement) in the other dimension or dimensions.
The vibrational speed displacement of the basic photon may be either negative (greater
than unity) or positive (less than unity). Let us consider the case of a photon with a
negative displacement, to which we propose to add a unit of rotational displacement
(rotate the photon). Inasmuch as the individual units of vibrational displacement are
discrete (that is, they are not tied together in any way), the one applied unit of rotational
motion results in rotation of only one of the vibrational units. Because of the lack of any
connection between the vibrational units there is no force resisting separation. When the
one unit starts moving inward by reason of the rotation it therefore moves away from the
remainder of the photon, which continues to be carried outward by the progression of the
natural reference system. Irrespective of the number of vibrational units in the photon to
which the rotational displacement was added, the compound motion produced by this
addition thus contains only the vibrational units that are being rotated. The remaining
vibrational units of the original photon continue as a photon of lower displacement.
When a compound motion of this type, rotation of a vibration, is formed, the inward
motion due to the rotation replaces the outward motion of the progression of the reference
system. Thus the components of the compound motion are not subject to oppositely
directed motions in the manner of the multi-unit rotating photons, and these components
do not separate spontaneously. However, the vibrational displacement of the photon now
under consideration is negative. If the rotational displacement applied to this photon is
also negative, the displacement units, being units of the same scalar nature, are additive
in the same manner as the vibrational units of a photon. Like the photon units, they are
easily separated when even a relatively small force is applied, and the rotational
displacement is therefore readily transferred from the original photon to some other
object, under appropriate conditions. For this reason, combinations of negative
vibrational and negative rotational displacements are inherently unstable. On the other
hand, if the applied rotational displacement is positive, equal numbers of the positive and
negative displacement units neutralize each other. In this case the combination has no net
displacement. A motion that does have a net displacement cannot be extracted from such
a combination without the intervention of some outside agency. It is simple enough to
separate one negative unit from an aggregate of n negative units, but getting one negative
unit out of nothing at all is not so easily accomplished. A combination of a negative
vibration and a positive rotation (or vice versa) is therefore inherently stable.
All that has been said about additions to a photon with negative displacement applies
with equal force, but in the inverse manner, to the addition of rotation to a photon with
positive displacement. We therefore arrive at the conclusion that in order to produce
stable combinations photons oscillating in time (negative displacement) must be rotated
in space (positive displacement), whereas photons oscillating in space must be rotated in
time. This alternation of positive and negative displacements is a general requirement for
stability of compound motions, and it will play an important part in the theoretical
development in the subsequent pages. It should be understood, however, that stability is
dependent on the environment. Any combination will break up if the environmental
conditions are sufficiently unfavorable. Conversely, there are situations, to be examined
later, in which environmental influences create conditions that confer stability on
combinations that are normally unstable.
The combinations in which the net rotation is in space (positive displacement) can be
identified with the relatively stable atoms and particles of our local environment, and
constitute what we will call the material system. For the present we will confine the
discussion to the members of this material system, and will leave the inverse type of
combination, the cosmic system, as we will call it, for later consideration.
Inasmuch as the oscillating photon is being rotated in two dimensions (the basic positive
rotation), one unit of two-dimensional positive displacement is required to neutralize the
negative vibrational displacement of the photon, and reduce the net total displacement to
zero. Because of its lack of any effective deviation from unit speed (the reference datum)
this combination of motions has no observable physical properties, and for that reason it
was somewhat facetiously called "the rotational equivalent of nothing" in the first
edition. But this understates the significance of the combination. While it has no effective
net total magnitude, its rotational component does have a direction. The idea of a motion
that has direction but no magnitude sounds something like a physicist's version of the
Cheshire cat, but the zero effective magnitude is a property of the structure as a whole,
while the rotational direction of the two-dimensional motion, which makes the addition
of further positive rotational displacement possible, is a property of one component of the
total structure. Thus, even though this combination of motions can do nothing itself, it
does constitute a base from which something (a material particle) can be constructed that
cannot be formed directly from a linear type of motion. We will therefore call it the
rotational base.
There are actually two of the rotational bases. The one we have been discussing is the
base of the material system. The structures of the cosmic system are constructed from a
different base; one that is just the inverse of the material base. In this inverse combination
the photon is oscillating in space (positive displacement) and rotating in time (negative
displacement).
Successive additions of positive displacement to the rotational base produce the
combinations of motions that we identify as the sub-atomic particles and the atoms of the
chemical elements. The next two chapters will describe the structures of the individual
combinations. Before beginning this description, however, it will be in order to make
some general comments about the implications of the theoretical conclusion that the
atoms and particles of matter are systems of rotational motions.
One of the most significant results of the new concept of the structure of atoms and
particles that has been developed from the postulates of the Reciprocal System is that it is
no longer necessary to invoke the aid of spirits or demons or their modern equivalents–
mysterious hypothetical forces of a purely ad hoc nature–to explain how the parts of the
atom hold together. There is nothing to explain, because the atom has no separate parts. It
is one integral unit, and the special and distinctive characteristics of each kind of atom are
not due to the way in which separate ―parts‖ are put together, but are due to the nature
and magnitude of the several distinct motions of which each atom is composed.
At the same time, this explanation of the structure of the atom tells us why such a unit
can expel particles, or disintegrate into smaller units, even though it has no separate parts;
how it can act, in some respects, as if it were an aggregate of sub-atomic units, even
though it is actually a single integral entity. Such a structure can obviously part with
some of its motion, or absorb additional units of motion, without in any way altering the
fact that it is a single entity, not a collection of parts. When the pitcher throws a curve
ball, it is still a single unit–it is a baseball–even though it now has both a translational
motion and a rotational motion, which it did not have while still in his hand. We do not
have to worry about what kind of force holds the rotational "part," the translational
"part," and the horsehide-covered "nucleus" together.
There has been a general impression that if we can get particles out of an atom, then there
must be particles in atoms; that is, the atom must be constructed of particles. This
conclusion seems so natural and logical that it has survived what would ordinarily be a
fatal blow: the discovery that the particles which emanate from the atom in the process of
radioactivity and otherwise are not the constituents of the atom; that is, they do not have
the properties which are required of the constituents. Furthermore, it is now clear that a
great variety of particles that cannot be regarded as constituents of normal atoms can be
produced from these atoms by appropriate processes. The whole situation is now in a
state of confusion. As Heisenberg commented:
Wrong questions and wrong pictures creep automatically into particle physics and lead to
developments that do not fit the real situation in nature.27
It is now apparent that all of this confusion has resulted from the wholly gratuitous, but
rarely questioned, assumption that the sub-atomic particles have the characteristics of
"parts"; that is, they exist as particles in the structure of the atom, they require something
that has the nature of a "force" to keep them in position, and so on. When we substitute
motions for parts, in accordance with the findings of the Reciprocal System, the entire
situation automatically clears up. Atoms are compound motions, sub-atomic particles are
less complex motions of the same general nature, and photons are simple motions. An
atom, even though it is a single unitary structure without separate parts, can eject some of
its motion, or transfer it to some other structure. If the motion which separates from the
atom is translational it reappears as translational motion of some other unit; if it is simple
linear vibration it reappears as radiation; if it is a rotational motion of less than atomic
complexity it reappears as a sub-atomic particle; if it is a complex rotational motion it
reappears as a smaller atom. In any of these cases, the status of the original atom changes
according to the nature and magnitude of the motion that is lost.
The explanation of the observed interconvertibility of the various physical entities is now
obvious. All of these entities are forms of motion or combinations of different forms,
hence any of them can be changed into some other form or combination of motion by
appropriate means. Motion is the common denominator of the physical universe.

CHAPTER 10

Atoms
In some respects, the combinations of motions with greater rotational displacement, those
which constitute the atoms of the chemical elements, are less complicated than those with
the least displacement, the sub-atomic particles, and it will therefore be convenient to
discuss the structure of these larger units first.
Geometrical considerations indicate that two photons can rotate around the same central
point without interference if the rotational speeds are the same, thus forming a double unit.
The nature of this combination can be illustrated by two cardboard disks interpenetrated
along a common diameter C. The diameter A perpendicular to C in disk a represents one
linear oscillation, and the disk a is the figure generated by a one-dimensional rotation of this
oscillation around an axis B perpendicular to both A and C. Rotation of a second linear
oscillation, represented by the diameter B, around axis A generates the disk b. It is then
evident that disk a may be given a second rotation around axis A, and disk b may be given a
second rotation around axis B without interference at any point, as long as the rotational
speeds are equal.
The validity of the mathematical principles of probability is covered in the fundamental
postulates by specifically including them in the definition of "ordinary commutative
mathematics," as that term is used in the postulates. The most significant of these principles,
so far as the atomic structures are concerned, are that small numbers are more probable than
large numbers, and symmetrical combinations are more probable than asymmetrical
combinations of the same total magnitude. For a given number of units of net rotational
displacement the double rotating system results in lower individual displacement values, and
the probability principles give them precedence over single units in which the individual
displacements are higher. All rotating combinations with sufficient net total displacement to
form double units therefore do so.
To facilitate a description of these objects we will utilize a notation in the form a-b-c, where
c is the speed displacement of the one-dimensional reverse rotation, and a and b are the
displacements in the two dimensions of the basic two-dimensional rotation. Later in the
development we will find that the one-dimensional rotation is connected with electrical
phenomena, and the two-dimensional rotation is similarly connected with magnetic
phenomena. In dealing with the atomic and particle rotations it will be convenient to use the
terms "electric" and "magnetic" instead of "one-dimensional" and "two-dimensional"
respectively, except in those cases where it is desired to lay special emphasis on the number
of dimensions involved. It should be understood, however, that designation of these rotations
as electric and magnetic does not indicate the presence of any electric or magnetic forces in
the structures now being described. This terminology has been adopted because it not only
serves our present purposes, but also sets the stage for the introduction of electric and
magnetic phenomena in a later phase of the development.
Where the displacements in the two magnetic dimensions are unequal, the rotation is
distributed in the form of a spheroid. In such cases the rotation which is effective in two
dimensions of the spheroid will be called the principal magnetic rotation, and the other the
subordinate magnetic rotation. When it is desired to distinguish between the larger and the
smaller magnetic rotational displacements, the terms primary and secondary will be used.
Where motion in time occurs in the material structures now being discussed, the negative
displacement values of this motion will be distinguished by placing them in parentheses. All
values not so identified refer to positive displacement (motion in space).
Some questions now arise as to the units in which the displacements should be expressed. As
will quickly be seen when we start to identify the individual structures, the natural unit of
displacement is not a convenient unit in application to the double rotating systems. The
smallest change that can take place in these systems involves two natural units. As stated in
Chapter 9, probability considerations dictate the distribution of the total displacement of a
combination among the different dimensions of rotation. The possible rotating combinations
therefore constitute a series, successive members of which differ by two of the natural one-
dimensional units of displacement. Since we will not encounter single units in these atomic
structures, it will simplify our calculations if we work with double units rather than the
single natural units. We will therefore define the unit of electric displacement in the atomic
structures as the equivalent of two natural one-dimensional displacement units.
On this basis, the position of each element in the series of combinations, as determined by its
net total equivalent electric displacement, is its atomic number. For reasons that will be
brought out later, half of the unit of atomic number has been taken as the unit of atomic
weight.
At the unit level dimensional differences have no numerical effect; that is, 1³ = 1² = 1. But
where the rotation extends to greater displacement values a two-dimensional displacement n
is equivalent to n² one-dimensional units. If we let n represent the number of units of electric
displacement, as defined above, the corresponding number of natural (single) units is 2n, and
the natural unit equivalent of a magnetic (two-dimensional) displacement n is 4n². Inasmuch
as we have defined the electric displacement unit as two natural units, it then follows that a
magnetic displacement n is equivalent to 2n² electric displacement units.
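The unit conversions just stated can be collected into a brief sketch. The following Python fragment is offered only as a summary of the arithmetic in the preceding paragraph; the function names are supplied here for convenience and are not drawn from the text.

    def electric_to_natural(n):
        return 2 * n          # one electric unit is defined as two natural units

    def magnetic_to_natural(n):
        return 4 * n ** 2     # two natural units in each of two dimensions: (2n)²

    def magnetic_to_electric(n):
        return 2 * n ** 2     # the natural equivalent divided by the two-unit electric unit

    assert magnetic_to_electric(1) == 2
    assert magnetic_to_electric(2) == 8
    assert magnetic_to_electric(3) == 18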
This means that the unit of magnetic displacement, the increment between successive values
of the two-dimensional rotational displacement, is not a specific magnitude in terms of total
displacement. Where the total displacement is the significant factor, as in the position in the
series of elements, the magnetic displacement value must be converted to equivalent electric
displacement units by means of the 2n² relation. For some other purposes, however, the
displacement value in terms of magnetic units does have significance in its own right, as we
will see in the pages that follow.
In order to qualify as an atom–a double rotating system–a rotational combination must have
at least one effective magnetic displacement unit in each system, or, expressing the same
requirement in a different way, it must have at least one effective displacement unit in each
of the magnetic dimensions of the combination structure. One positive magnetic (double)
displacement unit is required to neutralize the two single negative displacement units of the
basic photons; that is, to bring the total scalar speed of the combination as a whole down to
zero (on the natural basis). This one positive unit is not part of the effective rotation. Thus,
where there is no rotation in the electric dimension, the smallest combination of motions that
can qualify as an atom is 2-1-0. This combination can be identified as the element helium,
atomic number 2.
Helium is a member of a family of elements known as the inert gases, a name that has been
applied because of their reluctance to enter into chemical combinations. The structural
feature that is responsible for this chemical behavior is the absence of any effective rotation
in the electric dimension. The next element of this type has one additional unit of magnetic
displacement. Since the probability factors operate to keep the eccentricity at a minimum,
the resulting combination is
2-2-0, rather than 3-1-0. The succeeding increments of displacement similarly go alternately
to the principal and subordinate rotations.
Helium, 2-1-0, already has one effective displacement unit in each magnetic dimension, and
the increase to 2-2-0 involves a second unit in one dimension. As previously indicated, the
electric equivalent of n magnetic units is 2n². Unlike the addition of another electric unit, the
addition of a magnetic unit is not a simple process of going from 1 to 2. In the case of the
electric displacement there is first a single unit, then another single unit for a total of two,
another bringing the total to three, and so on. But 2 x 1² = 2, and 2 x 2² = 8. In order to
increase the total electric equivalent of the magnetic displacement from 2 to 8 it would be
necessary to add the equivalent of 6 units of electric displacement, and there is no such thing
as a magnetic equivalent of 6 electric units. The same situation arises in the subsequent
additions, and the increase in magnetic displacement must therefore take place in full 2n²
equivalents. Thus the succession of inert gas elements is not 2, 10, 16, 26, 36, 50, 64, as it
would be if 2(n+1)² replaces 2n² in the same manner that n+1 replaces n in the electric
series, but 2, 10, 18, 36, 54, 86, 118. For reasons which will be developed later, element 118
is unstable, and disintegrates if formed. The preceding six members of this series constitute
the inert gas family of elements.
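The inert gas series quoted above follows directly from the 2n² rule, each increment being a full magnetic unit and each n value serving two successive groups. The short sketch below merely reproduces that arithmetic; the procedure and its names are illustrative, not taken from the text.

    def inert_gas_numbers(count=7):
        numbers = []
        total = 2                     # helium, the lowest magnetic combination
        n = 2                         # the next magnetic increment has n = 2
        groups_at_this_n = 0          # each n value serves two successive groups
        for _ in range(count):
            numbers.append(total)
            total += 2 * n ** 2       # a full magnetic unit: 2n² electric equivalents
            groups_at_this_n += 1
            if groups_at_this_n == 2:
                n, groups_at_this_n = n + 1, 0
        return numbers

    assert inert_gas_numbers() == [2, 10, 18, 36, 54, 86, 118]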
The number of mathematically possible combinations of rotations is greatly increased when
electric displacement is added to these magnetic combinations, but the combinations that can
actually exist as elements are limited by probability considerations, as noted in Chapter 9.
The magnetic displacement is numerically less than the equivalent electric displacement, and
is more probable for this reason. Its status as the essential basic rotation also gives it
precedence over the electric rotation. Any increment of displacement consequently adds to
the magnetic rotation if possible, rather than to the rotation in the electric dimension. This
means that the role of the electric displacement is confined to filling in the intervals between
the inert gas elements.
On this basis, if all rotational displacement in the material system were positive, the series of
elements would start at the lowest possible magnetic combination, helium, and the electric
displacement would increase step by step until it reached a total of 2n² units, at which point
the relative probabilities would result in a conversion of these 2n² electric units into one
additional unit of magnetic displacement, whereupon the building up of the electric
displacement would be resumed. This behavior is modified, however, by the fact that
electric displacement in ordinary matter, unlike magnetic displacement, may be negative
instead of positive.
The restrictions on the kinds of motions that can be combined do not apply to minor
components of a system of motions of the same type, such as rotations. The net effective
rotation of a material atom must be in space in order to give rise to those properties which
are characteristic of ordinary matter. It necessarily follows that the magnetic displacement,
which is the major component of the total, must be positive. But as long as the larger
component is positive, the system as a whole can meet the requirement that the net rotation
be in space (positive displacement) even if the smaller component, the electric displacement,
is negative. It is possible, therefore, to increase the net positive displacement a given amount
either by direct addition of the required number of positive electric units, or by adding a
magnetic unit and then adjusting to the desired intermediate level by adding the appropriate
number of negative units.
Which of these alternatives will actually prevail is affected to a considerable degree by the
conditions that exist in the atomic environment, but in the absence of any bias due to these
conditions, the determining factor is the size of the electric displacement, lower
displacement values being more probable than higher values. In the first half of each group
intermediate between two inert gas elements, the electric displacement is minimized if the
increase in atomic number (equivalent electric displacement) is accomplished by direct
addition of positive displacement. When n² units have been added, the probabilities are
nearly equal, and as the atomic number increases still further, the alternate arrangement
becomes more probable. In the latter half of each group, therefore, the increase in atomic
number is normally attained by adding one unit of magnetic displacement, and then reducing
to the required net total by adding negative electric displacement, eliminating successive
units of the latter to move up the atomic series.
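The first-half/second-half rule described in the preceding paragraph can be illustrated for one 8-element group, lithium through neon. The sketch below simply generates the displacement notations that will be listed in Table 1; it is an illustration, not a procedure given in the text, and it shows the normal (most probable) assignments only.

    def group_2a_displacements():
        """Displacements for the eight elements between helium (2) and neon (10)."""
        rows = []
        for step in range(1, 9):                      # 1 to 8 units above helium
            atomic_number = 2 + step
            if step <= 4:                             # first half: add positive electric units
                notation = f"2-1-{step}"
            elif step < 8:                            # second half: one more magnetic unit,
                notation = f"2-2-({8 - step})"        # then negative electric units
            else:
                notation = "2-2-0"                    # the next inert gas, neon
            rows.append((atomic_number, notation))
        return rows

    # [(3, '2-1-1'), (4, '2-1-2'), (5, '2-1-3'), (6, '2-1-4'),
    #  (7, '2-2-(3)'), (8, '2-2-(2)'), (9, '2-2-(1)'), (10, '2-2-0')]
    print(group_2a_displacements())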
By reason of the availability of negative electric displacement as a component of the atomic
rotation, an element with a net displacement less than that of helium becomes possible.
Adding one negative electric displacement unit to helium produces this element, 2-1-(1),
which we identify as hydrogen, thereby, in effect, subtracting one positive electric unit
from the equivalent of two units (above the rotational base) that helium possesses. Hydrogen
is the first in the ascending series of elements, and we may therefore give it the atomic
number 1. The atomic number of any other material element is its net equivalent electric
displacement.
Above helium, 2-1-0, we find lithium, 2-1-1, beryllium, 2-1-2, boron, 2-1-3, and carbon,
2-1-4. Since this is an 8-element group, the probabilities are nearly even at this point, and carbon
can also exist as 2-2-(4). The elements that follow move up the atomic series by reducing the
negative displacements: nitrogen, 2-2-(3), oxygen, 2-2-(2), fluorine, 2-2-(1), and finally the
next inert gas, neon, 2-2-0.
Another similar 8-element group follows, adding a second magnetic unit in the other
magnetic dimension. This carries the series up to another inert gas element, argon, 3-2-0.
Table 1 shows the normal displacements of the elements to, and including, argon.

TABLE 1
THE ELEMENTS OF THE LOWER GROUPS

Displacements     Element       Atomic Number    Displacements     Element       Atomic Number
2-1-(1)           Hydrogen       1
2-1-0             Helium         2
2-1-1             Lithium        3               2-2-1             Sodium         11
2-1-2             Beryllium      4               2-2-2             Magnesium      12
2-1-3             Boron          5               2-2-3             Aluminum       13
2-1-4, 2-2-(4)    Carbon         6               2-2-4, 3-2-(4)    Silicon        14
2-2-(3)           Nitrogen       7               3-2-(3)           Phosphorus     15
2-2-(2)           Oxygen         8               3-2-(2)           Sulfur         16
2-2-(1)           Fluorine       9               3-2-(1)           Chlorine       17
2-2-0             Neon           10              3-2-0             Argon          18
At element 18, argon, the magnetic displacement has reached a level of two units above the
rotational base in each of the magnetic dimensions. In order to increase the rotation in either
dimension by an additional unit a total of 2x3², or 18, units of electric displacement are
required. This results in a group of 18 elements which reaches the midpoint at cobalt, 3-2-9,
and terminates at krypton, 3-3-0. A second 18-element group follows, as indicated in Table
2.

TABLE 2
THE INTERMEDIATE ELEMENTS

Displacements     Element       Atomic Number    Displacements     Element       Atomic Number
3-2-1             Potassium      19              3-3-1             Rubidium       37
3-2-2             Calcium        20              3-3-2             Strontium      38
3-2-3             Scandium       21              3-3-3             Yttrium        39
3-2-4             Titanium       22              3-3-4             Zirconium      40
3-2-5             Vanadium       23              3-3-5             Niobium        41
3-2-6             Chromium       24              3-3-6             Molybdenum     42
3-2-7             Manganese      25              3-3-7             Technetium     43
3-2-8             Iron           26              3-3-8             Ruthenium      44
3-2-9, 3-3-(9)    Cobalt         27              3-3-9, 4-3-(9)    Rhodium        45
3-3-(8)           Nickel         28              4-3-(8)           Palladium      46
3-3-(7)           Copper         29              4-3-(7)           Silver         47
3-3-(6)           Zinc           30              4-3-(6)           Cadmium        48
3-3-(5)           Gallium        31              4-3-(5)           Indium         49
3-3-(4)           Germanium      32              4-3-(4)           Tin            50
3-3-(3)           Arsenic        33              4-3-(3)           Antimony       51
3-3-(2)           Selenium       34              4-3-(2)           Tellurium      52
3-3-(1)           Bromine        35              4-3-(1)           Iodine         53
3-3-0             Krypton        36              4-3-0             Xenon          54
The final two groups of elements, Table 3, contain 2x4², or 32, members each. The heaviest
elements of the last group have not yet been observed, as they are highly radioactive, and
consequently unstable in the terrestrial environment. In fact, uranium, element number 92, is
the heaviest that exists naturally on earth in any substantial quantities. As we will see later,
however, there are other conditions under which the elements are stable all the way up to
number 117.

TABLE 3
THE ELEMENTS OF THE HIGHER GROUPS

Displacements       Element         Atomic Number    Displacements       Element          Atomic Number
4-3-1               Cesium           55              4-4-1               Francium          87
4-3-2               Barium           56              4-4-2               Radium            88
4-3-3               Lanthanum        57              4-4-3               Actinium          89
4-3-4               Cerium           58              4-4-4               Thorium           90
4-3-5               Praseodymium     59              4-4-5               Protactinium      91
4-3-6               Neodymium        60              4-4-6               Uranium           92
4-3-7               Promethium       61              4-4-7               Neptunium         93
4-3-8               Samarium         62              4-4-8               Plutonium         94
4-3-9               Europium         63              4-4-9               Americium         95
4-3-10              Gadolinium       64              4-4-10              Curium            96
4-3-11              Terbium          65              4-4-11              Berkelium         97
4-3-12              Dysprosium       66              4-4-12              Californium       98
4-3-13              Holmium          67              4-4-13              Einsteinium       99
4-3-14              Erbium           68              4-4-14              Fermium           100
4-3-15              Thulium          69              4-4-15              Mendelevium       101
4-3-16, 4-4-(16)    Ytterbium        70              4-4-16, 5-4-(16)    Nobelium          102
4-4-(15)            Lutetium         71              5-4-(15)            Lawrencium        103
4-4-(14)            Hafnium          72              5-4-(14)            Rutherfordium     104
4-4-(13)            Tantalum         73              5-4-(13)            Hahnium           105
4-4-(12)            Tungsten         74              5-4-(12)                              106
4-4-(11)            Rhenium          75              5-4-(11)                              107
4-4-(10)            Osmium           76              5-4-(10)                              108
4-4-(9)             Iridium          77              5-4-(9)                               109
4-4-(8)             Platinum         78              5-4-(8)                               110
4-4-(7)             Gold             79              5-4-(7)                               111
4-4-(6)             Mercury          80              5-4-(6)                               112
4-4-(5)             Thallium         81              5-4-(5)                               113
4-4-(4)             Lead             82              5-4-(4)                               114
4-4-(3)             Bismuth          83              5-4-(3)                               115
4-4-(2)             Polonium         84              5-4-(2)                               116
4-4-(1)             Astatine         85              5-4-(1)                               117
4-4-0               Radon            86
For convenience in the subsequent discussion these groups of elements will be identified by
the magnetic n value, with the first and second groups in each pair being designated A and B
respectively. Thus the sodium group, which is the second of the 8-element groups (n = 2), will
be called Group 2B.
At this point it will be appropriate to refer back to this statement that was made in Chapter 9:
The (mathematical) development will begin with nothing more than the series of cardinal
numbers and the geometry of three dimensions. By subjecting these to simple mathematical
processes, the applicability of which to the physical universe of motion is specified in the
fundamental postulates, the combinations of rotational motions that can exist in the
theoretical universe will be ascertained, and it will be shown that these rotational
combinations that theoretically can exist can be individually identified with the atoms of the
chemical elements and the sub-atomic particles that are observed to exist in the physical
universe. A unique group of numbers representing the different rotational components will
be derived for each of these combinations.
A review of the manner in which the figures presented in Tables 1 to 3 were derived will
show that this commitment, so far as it applies to the elements, has been carried out in full.
This is a very significant accomplishment. Both the existence of a series of theoretical
elements identical with the observed series of chemical elements, and the numerical values
which theoretically characterize each individual element have been derived from the general
properties of mathematics and geometry, without making any supplementary assumptions or
introducing any numerical values specifically applicable to matter. The possibility that the
agreement between the series of elements thus derived and the known chemical elements
could be accidental is negligible, and this derivation is, in itself, a conclusive proof that the
atoms of matter are combinations of motions, as asserted by the Reciprocal System of
theory. But this is only the beginning of a vast process of mathematical development. The
numerical values at which we have arrived, the atomic numbers and the three displacement
values for each element, now provide us with the basis from which we can derive the
quantitative relations in the areas that we will examine.
The behavior characteristics, or properties, of the elements are functions of their respective
displacements. Some are related to the total net effective displacement (equal to the atomic
number in the combinations thus far discussed), some are related to the electric
displacement, others to the magnetic displacement, while still others follow a more complex
pattern. For instance, valence, or chemical combining power, is determined by either the
electric displacement or one of the magnetic displacements, while the inter-atomic distance
is affected by both the electric and magnetic displacements, but in different ways. The
manner in which the magnitudes of these properties for specific elements and compounds
can be calculated from the displacement values has been determined for many properties and
for many classes of substances. These subjects will be considered individually in the
chapters that follow.
One of the most significant advances toward an understanding of the relations between the
structures of the different chemical elements and their properties was the development of the
periodic table by Mendeleeff in 1869. In this diagram the elements are arranged horizontally
in periods and vertically in groups, the order within the period being that of the atomic
number (approximately defined in the original work by the atomic weights). When the
elements are correctly assigned in the periods, those in the vertical groups are quite similar
in their properties. On comparing the periodic table with the rotational characteristics of the
elements, as tabulated in this chapter, it is evident that the horizontal periods reflect the
magnetic rotational displacement, while the vertical groups represent the electric rotational
displacement. In revising the table to take advantage of the additional information derived
from the Reciprocal System of theory we may therefore replace the usual group and period
numbering by the more meaningful displacement values.
When this is done it is apparent that a further revision of the tabular arrangement is required
in order to put all of the elements into their proper positions. Mendeleeff's original table
included nine vertical groups, beginning with the inert gases, Group 0, and ending with a
group in which the three elements iron, cobalt, and nickel, and the corresponding elements in
the higher periods, were all assigned to a single vertical position. In the more modern
versions of the table the number of vertical groups has been expanded to avoid splitting each
of the longer periods into two sub-periods, as was done by Mendeleeff. One of the most
popular of these revised versions utilizes 18 vertical groups, and puts 15 elements of each of
the last two periods into one of these 18 positions in order to accommodate the full number
of elements.
In the light of the new information now available, it can be seen that Mendeleeff based his
arrangement on the relations existing in the 8-element rotational groups, 2A and 2B in the
notation used in this work, and forced the elements of the larger groups into conformity with
this 8-element pattern. The modern revisers have made a partial correction by setting up
their tables on the basis of the 18-element rotational groups, 3A and 3B, leaving blank spaces
where the 8-element groups have no counterparts of the 18-element values. But these tables
still retain a part of the original distortion, as they force the members of the 32-element
groups into the 18-element pattern. To construct a complete and accurate table, it is only
necessary to carry the revision procedure one step farther, and set up the table on the basis of
the largest of the magnetic groups, the 32-element groups 4A and 4B.
For some purposes a simple extension of the current versions of the table to the full
32-position width necessary to accommodate Groups 4A and 4B is probably all that is needed.
On the other hand, the useful chemical information displayed by the table is confined mainly
to the elements with electric displacements below 10, and separating the central elements of
the two upper groups from the main portion of the table, as in the conventional
arrangements, has considerable merit. The particular elements that are thus separated on the
basis of the electric displacement are not the same ones that are treated separately in the
conventional tables, but the general effect is much the same.
When the table is thus divided into two sections, it also appears that there are some
advantages to be gained by a vertical, rather than a horizontal, arrangement, and the revised
table, Fig. 1, has been set up on this basis. The new concept of "divisions," which is
emphasized in this table, will be explained in Chapter 18. Inasmuch as carbon and silicon
play both positive and negative roles rather freely, they have each been assigned to two
positions in the table, but hydrogen, which is usually shown in two positions in the
conventional tables, is necessarily negative on the basis of the principles that have been
developed in this work and is only shown in one position. The aspects of its chemical
behavior that have led to its classification with the electropositive elements will also be
explained in Chapter 18.

Figure 1
The Revised Periodic Table of the Elements

[The figure arranges the elements vertically by electric displacement and horizontally by
magnetic displacement, in columns headed 2-1, 2-2, 3-2, 3-3, 4-3, and 4-4. The main section
runs from electric displacement 1 through 9, then (8) through (1), ending with the inert gases
at 0; the central elements of the two largest groups, electric displacements 10 through 16 and
(15) through (9), are set off in a separate section. Division boundaries I through IV are
marked. The individual element assignments are those listed in Tables 1 to 3.]

In the original construction of the periodic table the known properties of certain elements
were combined with the atomic number sequence to establish the existence of the relations
between the elements of the various periods and groups, and thereby to predict previously
undetermined properties, and even the existence of some previously unknown elements. The
table thus added significantly to the chemical knowledge of the time. In this work, however,
the revised table is not being presented as an addition to the information contained in the
preceding pages, but merely as a convenient graphic method of expressing some portions of
that information. Everything that can be learned from the table has already been set forth in
more detailed form, verbally and mathematically, in this and the earlier chapters. Some of
the implications of this information, such as its application to the property of valence, will
have further consideration later.

CHAPTER 11

Sub-Atomic Particles
While the series of elements contains no combinations of motions with net positive
displacement less than that of hydrogen, 2-1-(1), this does not mean that such combinations
are non-existent. It merely means that they do not have sufficient speed displacement to
form two complete rotating systems, and consequently do not have the properties which
distinguish the rotational combinations that we call atoms. These less complex combinations
of motion can be identified as the sub-atomic particles. As is evident from the foregoing,
these particles are not constituents of atoms, as they are assumed to be in current scientific
thought. They are
structures of the same general nature as the atoms of the elements, but their net total
displacement is below the minimum necessary to form the complete atomic structure. They
may be characterized as incomplete atoms.
The term "sub-atomic" is currently applied to these particles with the implication that the
particles are, or can be, building blocks from which atoms are constructed. Our new findings
make this sense of the term obsolete, but the name is still appropriate in the sense of a
system of motions of a lower degree of complexity than atoms. It will therefore be retained
in this work, and applied in this modified sense. The term "elementary particle" must be
discarded. There are no "elementary" particles in the sense of basic units from which other
structures can be formed. A particle is smaller and less complex than an atom, but it is by no
means elementary. The elementary unit is the unit of motion.
The theoretical characteristics of the sub-atomic particles, as derived from the postulates of
the Reciprocal System, have been given considerable additional study since the date of the
last previous publication in which they were discussed, and there has been a significant
increase in the amount of information that is available with respect to these objects,
including the theoretical discovery of some particles that are more complex than those
described in the first edition. Furthermore, we are now in a position to examine the structure
and behavior of the cosmic sub-atomic particles in greater depth (in the later chapters). In
order to facilitate the presentation of this increased volume of information, a new system of
representing the dimensional distribution of the rotation has been adopted.
This means, of course, that we are now using one system of notation for the rotation of the
elements, and a different system to represent the rotations of the same nature when we are
dealing with the particles. At first glance, this may seem to be introducing an unnecessary
complication, but the truth is that as long as we want to take advantage of the convenience
of using the double displacement unit in dealing with the elements, while we must use the
single unit in dealing with the particles, we are necessarily employing two different systems,
whether they look alike or not. In fact, lack of recognition of this difference has led to some
of the confusion that we now wish to avoid. It appears, therefore, that as long as two
different systems of notation are necessary for convenient handling of the data, we might as
well set up a system for the particles in a manner that will best serve our purposes, including
being distinctive enough to avoid confusion.
The new notation used in this edition will indicate the displacements in the different
dimensions, as in the first edition, and will express them in single units, as before, but it will
show only effective displacements, and will include a letter symbol that will specifically
designate the rotational base of the particle. It is necessary to take the initial non-effective
rotational unit into consideration in dealing with the elements because of the characteristics
of the mathematical processes that we will utilize. This is not true in the case of the sub-
atomic particles, and as long as the atomic (double) notation cannot be used in any event, we
will show only the effective displacements, and will precede them with either M or C to
indicate whether the rotational base of the combination is material or cosmic. This will have
the added advantage of clearly indicating that the rotational values in any particular case are
being expressed in the new notation.
This change in the symbolic representation of the rotations, and the other modifications of
terminology that we are making in this edition, may introduce some difficulties for those
who have already become accustomed to the manner of presentation in the earlier works. It
seems advisable, however, to take advantage of any opportunities for improvement in this
respect that may be recognized in the present early stage of the theoretical development, as
improvements of this nature will become progressively less feasible as time goes on and
existing practices become resistant to change.
On the new basis, the material rotational base is M 0-0-0. To this base may be added a unit
of positive electric displacement, producing the positron, M 0-0-1, or a unit of negative
electric displacement, in which case the result is the electron, M 0-0-(1). The electron is a
unique particle. It is the only structure constructed on a material base, and therefore stable in
the local environment, that has an effective negative displacement. This is possible because
the total rotational displacement of the electron is the sum of the initial positive magnetic
unit required to neutralize the negative photon displacement (not shown in the structural
notation) and the negative electric unit. As a two-dimensional motion, the magnetic unit is
the major component of the total rotation, even though its numerical magnitude is no greater
than that of the one-dimensional electric rotation. The electron thus complies with the
requirement that the net total rotation of a material particle must be positive.
As brought out earlier, adding motion with negative displacement has the effect of adding
more space to the existing physical situation, whatever it may be, and the electron is
therefore, in effect, a rotating unit of space. We will see later that this fact plays an
important part in many physical phenomena. One immediate, and very noticeable, result is
that electrons are plentiful in the material environment, whereas positrons are extremely
rare. On the basis of the same considerations that apply to the electron, we can regard the
positron as essentially a rotating unit of time. As such, it is readily absorbed into the material
system of combinations, the constituents of which are predominantly time structures; that is,
rotational motions with net positive displacement (speed = 1/t). The opportunities for
utilizing the negative displacement of the electrons in these structures, on the contrary, are
very limited.
If the addition to the rotational base is a magnetic unit rather than an electric unit, the result
could be expressed as M 1-0-0. It now appears, however, that the notation M ½-½-0 is
preferable. Of course, half units do not exist, but a unit of two-dimensional rotation
obviously occupies both dimensions. To recognize this fact we will have to credit one half to
each. The ½-½ notation also ties in better with the way in which this system of motions
enters into further combinations. We will call this M ½-½-0 particle the massless neutron,
for reasons which will appear shortly.
At the unit level in a single rotating system, the magnetic and electric units are numerically
equal; that is, 1² = 1. Addition of a unit of negative electric displacement to the M ½-½-0
combination of motions, the massless neutron, therefore produces a combination with a net
total displacement of zero. This combination, M ½-½-(1), can be identified as the neutrino.
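The notation can be summarized in a few lines of code. The fragment below is only an illustrative restatement of the convention just described; the helper name is arbitrary, and 1/2 stands for the half unit credited to each magnetic dimension.

    from fractions import Fraction

    def notation(base, mag1, mag2, electric):
        """Base letter (M or C), two magnetic displacements, electric displacement; negatives in parentheses."""
        e = f"({abs(electric)})" if electric < 0 else str(electric)
        return f"{base} {mag1}-{mag2}-{e}"

    half = Fraction(1, 2)
    print(notation('M', 0, 0, 0))          # M 0-0-0        rotational base
    print(notation('M', 0, 0, 1))          # M 0-0-1        positron
    print(notation('M', 0, 0, -1))         # M 0-0-(1)      electron
    print(notation('M', half, half, 0))    # M 1/2-1/2-0    massless neutron
    print(notation('M', half, half, -1))   # M 1/2-1/2-(1)  neutrino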
In the preceding chapter, the property of the atoms of matter known as atomic weight, or
mass, was identified with the net positive three-dimensional rotational displacement (speed)
of the atoms. This property will be discussed in more detail in the next chapter, but at this
time we will note that the same relationship also applies to the sub-atomic particles; that is,
these particles have mass to the extent that they have net positive rotational displacement in
three dimensions. None of the particles thus far considered meets this requirement. The
electron and the positron have effective rotation in one dimension; the massless neutron in
two. The neutrino has no net displacement at all. The sub-atomic rotational combinations
thus far identified are therefore massless particles.
By combination with other motions, however, the displacement in one or two dimensions
can attain the status of a component of a three-dimensional displacement. For instance, a
particle may acquire a charge, which is a motion of a kind that will be examined later in the
development, and when this happens, the entire displacement, both of the charge and of the
original particle, will then manifest itself as mass. Or a particle may combine with other
motions in such a way that the displacement of the massless particle becomes a component
of the three-dimensional displacement of the combination structure.
Addition of a unit of positive, instead of negative, electric displacement to the massless
neutron would produce M ½-½-1, but the net total displacement of this combination is 2,
which is sufficient to form a complete double rotating system, an atom, and the greater
probability of the double structure precludes the existence of the M ½-½-1 combination,
other than momentarily.
The same probability considerations likewise exclude the two-unit magnetic structure
M 1-1-0, and its positive derivative M 1-1-1, which have net displacements of 2 and 3
respectively. However, the negative derivative, M 1-1-(1), formed in practice by the addition
of a neutrino, M ½-½-(1), to a massless neutron, M ½-½-0, can exist as a particle, as its net
total displacement is only one unit; not enough to make the double structure mandatory. This
particle can be identified as the proton.
Here we have an illustration of the way in which particles that are individually massless,
because they have no three-dimensional rotation, combine to produce a particle with an
effective mass. The massless neutron rotates only two-dimensionally, while the neutrino has
no net rotation. But by adding the two, a combination with effective rotation in all three
dimensions is produced. The resulting particle, the proton, M 1-1-(1), has one unit of mass.
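The addition that produces the proton can be written out explicitly. The following sketch is illustrative only: the tuple representation and the simple net-total rule are conventions adopted here to match the values quoted in the text, not definitions taken from it.

    from fractions import Fraction

    half = Fraction(1, 2)
    massless_neutron = (half, half, 0)     # M 1/2-1/2-0
    neutrino         = (half, half, -1)    # M 1/2-1/2-(1)
    positron         = (0, 0, 1)           # M 0-0-1

    def combine(x, y):
        """Displacements on the same rotational base add dimension by dimension."""
        return tuple(a + b for a, b in zip(x, y))

    def net_total(r):
        """Net total displacement: both magnetic values plus the electric value."""
        return sum(r)

    proton = combine(massless_neutron, neutrino)
    assert proton == (1, 1, -1)            # M 1-1-(1), effective rotation in three dimensions
    assert net_total(proton) == 1          # one unit: it can persist as a particle

    # A massless neutron plus one positive electric unit reaches a net total of 2,
    # enough for a double rotating system, so it does not persist as a particle.
    assert net_total(combine(massless_neutron, positron)) == 2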
At the present, rather early, stage of the theoretical development it is not possible to make a
precise evaluation of the probability factors and other influences that determine whether or
not a theoretically feasible rotational combination will actually be able to exist under a given
set of circumstances. The information now available indicates, however, that any material
type combination with a net displacement less than 2 should be capable of existing as a
particle in the local environment. In actual practice none of the single system combinations
identified in the preceding paragraphs has been observed, and there is considerable doubt as
to whether there is any way whereby they can be observed, other than through indirect
processes which enable us to infer their existence. The neutrino, for example, is "observed"
only by means of the products of certain events in which this particle is presumed to
participate. The electron, the positron, and the proton have been observed only in the
charged state, not in the uncharged condition, which constitutes the basic state of all of the
rotational combinations thus far discussed. Nevertheless, there is sufficient evidence to
indicate that all of these uncharged structures do, in fact, exist, and play significant roles in
physical processes. This evidence will be forthcoming as we continue the theoretical
development.
In the previous publications, the M ½-½-0 combination (1-1-0 in the notation utilized in
those works) was identified as the neutron, but it was noted that in some physical processes,
such as cosmic ray decay, the magnetic displacement that could be expected to be ejected in
the form of neutrons is actually transferred in massless form. Since the observed neutron is a
particle of unit atomic weight, it was at that time concluded that in these particular instances
the neutrons act as combinations of neutrinos and positrons, both massless particles. On this
basis, the neutron plays a dual role, massless under some conditions, but possessing unit
mass under other circumstances.
Further investigation, centering mainly on the secondary mass of the sub-atomic particles,
which will be discussed in Chapter 13, has now disclosed that the observed neutron is not
the single effective magnetic rotation with net displacements M ½-½-0, but a more complex
particle of the same net displacement, and that the single magnetic displacement is massless.
It is no longer necessary to conclude that the same particle acts in two different ways.
Instead, there are two different particles.
The explanation is that the new findings have revealed the existence of a type of structure
intermediate between the single rotating systems of the massless particles and the complete
double systems of the atoms. In these intermediate structures there are two rotating systems,
as in the atoms of the elements, but only one of these systems has a net effective
displacement. The rotation in this system is that of the proton, M 1-1-(1). In the second
system there is a neutrino type rotation.
The massless rotations of the second system can be either those of the material neutrino, M
½-½-(1), or those of the cosmic neutrino, C ½-½-1. With the material neutrino rotation the
combined displacements are M 1½-1½-(2). This combination is the mass one isotope of
hydrogen, a structure identical with that of the normal mass two atom (deuterium), M 2-2-
(2), or M 2-1-(1) in the atomic notation, except that it has one less unit of magnetic rotation,
and therefore one less unit of mass. When the cosmic neutrino rotation is added to the
proton, the combined displacements are M 2-2-0, the same net total as that of the single
magnetic rotation. This theoretical particle, the compound neutron, as we will call it, can be
identified as the observed neutron.
The identification of the separate rotations of these intermediate type structures with the
rotations of the neutrinos and protons should not be interpreted as meaning that neutrinos
and protons actually exist as such in the combination structures. What is meant is that one of
the component rotations that constitute the compound neutron, for instance, is the same
kind of rotation as that which constitutes a proton when it exists separately.
Inasmuch as the net total displacement of the compound neutron is identical with that of the
massless neutron, those aspects of the behavior of the particles–properties, as they are
called–which are dependent on the net total displacement are the same for both. Likewise,
those properties that are dependent on total magnetic displacement, or total electric
displacement, are identical. But there are other properties that are related to those features of
the particle structure in which the two neutrons differ. The compound neutron has an
effective unit of three-dimensional displacement in the rotating system with the proton type
rotation, and it therefore has one unit of mass. The massless neutron, on the other hand, has
no effective three-dimensional displacement, and therefore no mass.
The two neutrons also differ in that, although it is (or at least, as we will see in Chapter 17,
may be) a still unobserved particle, the massless neutron is theoretically stable in the
material environment, whereas the life of the compound neutron is short because of the
―foreign‖ nature of the rotation in the second system. After about 15 minutes, on the
average, the compound neutron ejects the second rotating system in the form of a cosmic
neutrino, and the particle reverts to the proton status.
The structures of the sub-atomic particles of the material system may now be summarized as
follows:
Massless particles
   M 0-0-0        rotational base
   M 0-0-1        positron
   M 0-0-(1)      electron
   M ½-½-0        massless neutron
   M ½-½-(1)      neutrino
Particles with mass
   M+ 0-0-1       charged positron
   M- 0-0-(1)     charged electron
   M 1-1-(1)      proton
   M+ 1-1-(1)     charged proton
   M 1-1-(1)
   C (½)-(½)-1    compound neutron
CHAPTER 12

Basic Mathematical Relations


It was pointed out in the introductory chapters that when we postulate a universe
composed entirely of motion, every entity or phenomenon that exists in this universe is
either a motion, a combination of motions, or a relation between motions. The discussion
thus far has been addressed mainly to an examination of the primary features of the
possible motions, and certain of the combinations of these motions. At this point it will
be advisable to consider some of the basic kinds of relations that exist between motions.
Inasmuch as motion in general is defined as a relation between space and time, expressed
symbolically by s/t, all of the different kinds of motions, and the relations between
motions, can be expressed in space-time terms. Such an analysis into space and time
components will be particularly helpful in putting the various physical relationships into
the proper perspective, and our first objective in the field we are now entering will
therefore be to establish the space-time equivalents of the various quantities that
constitute the so-called ―mechanical‖ system. Consideration of the analogous quantities
of the electrical system will be deferred until we are ready to begin an examination of
electrical phenomena.
One set of these mechanical quantities is customarily expressed in velocity terms, and it
presents no problems. One-dimensional velocity is, by definition, s/t. It follows that two-
dimensional and three-dimensional velocities are s²/t² and s³/t³ respectively. Acceleration,
the time rate of change of one-dimensional velocity, is s/t².
In addition to these quantities which express motion as velocities (or speeds), there is also
a set of quantities which are fundamentally based on resistance to movement, although in
some applications this basic significance is obscured by other factors. The objects which
resist movement are atoms and particles of matter: three-dimensional combinations of
motions. In a universe of motion, where nothing exists but motion, the only thing that can
resist change of motion is motion. The particular motion that resists any change in the
motion of an atom is the inherent motion of the atom itself, the motion that makes it an
atom. Furthermore, only a three-dimensional motion, or motion that is automatically
distributed over three dimensions, is able to offer effective resistance, as any vacant
dimension permits motion to take place without hindrance.
The magnitude of the resistance can be expressed in terms of the quantity required to
eliminate the effective existing motion; that is, to reduce this motion to unity in the
conventional reference system. This is the inverse of the motion of the atom, s³/t³, and the
resistance to motion, or inertia, is therefore t³/s³. In more general application, inertia is
known as mass.
Inasmuch as current physical theory recognizes gravitation and inertia as phenomena of
a quite different character, the equivalence of gravitational and inertial mass, which has
been experimentally demonstrated to the almost incredible accuracy of less than one part
in 10¹¹, is regarded as very significant, although there is considerable difference of
opinion as to what that significance actually is. As expressed by Clifford M. Will, ―the
theoretical interpretation of the Eötvös experiment (which demonstrates the equivalence)
has varied.‖57 Will asserts that it is now believed that the results of this experiment rule
out all non-metric theories of gravitation (he defines metric theories as those ―in which
gravitation can be treated as being synonymous with the curvature of space and time‖).
After the theorists have arrived at such a far-reaching conclusion on the basis of what
Will admits is no more than a ―conjecture,‖ it comes as something of an anticlimax when
the Reciprocal System reveals that nothing of an esoteric nature is involved. Gravitation
is a motion, but it can manifest itself either directly as motion or inversely as resistance to
another motion.
Multiplying mass, t³/s³, by velocity, s/t, we obtain momentum, t²/s², the reciprocal of two-
dimensional velocity. Another multiplication by velocity, s/t, gives us energy, t/s. Energy,
then, is the reciprocal of velocity. When one-dimensional motion is not restrained by
opposing motion (force) it manifests itself as velocity; when it is so restrained it
manifests itself as potential energy. Kinetic energy is merely ―energy in transit,‖ so to
speak. It is a measure of the energy that has been used to produce the velocity of a mass
(½mv² = ½t³/s³ x s²/t² = ½ t/s), and can be extracted for other use by terminating the
motion (velocity).
This explanation of the nature of energy should be of some assistance to those who are
still having some difficulty with the concept of scalar motion. Both speed and energy are
scalar measures of motion. But on our side of the unit speed boundary, the low-speed
side, where all motion is in space, speed can be represented in our conventional spatial
system of reference because it causes a change of position, inward or outward, in space,
whereas energy cannot be so represented. On the high-speed side of the boundary, the
relations are inverted. There all motion is in time, and the measure of that motion, the
energy, t/s, the inverse of speed, s/t, can be represented in a stationary temporal reference
system, whereas speed is neither inward nor outward from the time standpoint, and
cannot be represented in the temporal coordinate system.
Here is the reason for the purely scalar nature of any increment of speed beyond the unit
level, such as those discussed in Chapter 8. The added speed does have a direction, but it
is a direction in time, and it has no vectorial effect in a spatial system of reference. We
will find this very significant when we undertake an examination of some of the recently
discovered high-speed astronomical objects in Volume II.
Force, which is defined as the product of mass and acceleration, becomes t³/s³ x s/t² = t/s².
Acceleration and force are thus inverse quantities, in the sense in which that term is
generally used in this work; that is, they are identical except that space and time are
interchanged. They are not inverse in the mathematical sense, as their product is not equal
to unity.
One special type of force that is of particular interest is the gravitational force, that which
the aggregates of matter appear to exert on each other by reason of their motions inward
in space. In this case, the mathematical expression F = kmm'/d² by which the force is
ordinarily calculated is quite different from the general force equation F= ma. When
taken at their face value, these two expressions are clearly irreconcilable. If gravitational
force is actually a force, even a force of the ―as if‖ variety, it cannot be proportional to
the product of two masses (that is, to m²) when force in general is proportional to the first
power of the mass. There is an obvious contradiction here.
Most of the other common quantities of the mechanical system can be reduced to space-
time terms without any complications. For example:
Impulse, the product of force and time, has the same dimensions as momentum.
Ft = t/s² x t = t²/s²
Both work and torque are the products of force and distance, and have the same
dimensions as energy.
Fs = t/s² x s = t/s
Pressure is force per unit area.
F/s² = t/s² x 1/s² = t/s⁴
Density is mass per unit volume.
m/s³ = t³/s³ x 1/s³ = t³/s⁶
Viscosity is mass per unit length per unit time.
m x 1/s x 1/t = t³/s³ x 1/s x 1/t = t²/s⁴
Surface tension is force per unit length.
F/s = t/s² x 1/s = t/s³
Power is work per unit time.
W/t = t/s x 1/t = 1/s
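Since each of these space-time forms is obtained simply by multiplying and dividing powers of s and t, the tabulation can be verified mechanically. The following short Python sketch is our own illustration, not part of the original text; it represents each quantity as a pair of exponents (power of s, power of t) and checks the forms listed above:

# A rough consistency check: each quantity is a pair of exponents
# (space exponent, time exponent), and multiplying quantities adds the exponents.

def mul(a, b):
    # product of two quantities in (space exponent, time exponent) form
    return (a[0] + b[0], a[1] + b[1])

def div(a, b):
    # quotient of two quantities in (space exponent, time exponent) form
    return (a[0] - b[0], a[1] - b[1])

s = (1, 0)                        # space
t = (0, 1)                        # time
velocity = div(s, t)              # s/t
mass = (-3, 3)                    # t³/s³
momentum = mul(mass, velocity)    # expect t²/s²
energy = mul(momentum, velocity)  # expect t/s
force = div(energy, s)            # expect t/s²

assert momentum == (-2, 2)                      # t²/s²
assert energy == (-1, 1)                        # t/s
assert force == (-2, 1)                         # t/s²
assert mul(force, t) == momentum                # impulse Ft = t²/s²
assert mul(force, s) == energy                  # work Fs = t/s
assert div(force, mul(s, s)) == (-4, 1)         # pressure t/s⁴
assert div(mass, mul(s, mul(s, s))) == (-6, 3)  # density t³/s⁶
assert div(div(mass, s), t) == (-4, 2)          # viscosity t²/s⁴
assert div(force, s) == (-3, 1)                 # surface tension t/s³
assert div(energy, t) == (-1, 0)                # power 1/s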
All of the established relations in the field of mechanics have the same dimensional
consistency on the basis of these space-time dimensions as in the conventional forms,
since the mass terms in the equations are, in all cases, balanced by derivatives of mass on
the opposite side of the equation. The numerical values in these equations likewise retain
the same relationships, as all that we have done, from this standpoint, is to change the
size of the unit in which the quantity of mass is expressed. What has been accomplished,
then, is to express mass in terms of the components of motion. Since mechanics deals
only with space, time, and mass, it follows that, so far as mechanics is concerned, by
reducing mass to motion we have confirmed the validity of the basic postulate that the
physical universe is composed entirely of motion.
This is a very significant point. The concept of the nature of the physical universe on
which conventional physics is based, the concept of a universe of matter existing in a
framework provided by space and time, identifies matter as a fundamental quantity. The
results of this present work now show that, in the physical field that is the most
completely developed and understood, the fundamental entity is motion, not matter.
Furthermore, it is now possible to see why the common denominator of the universe has
to be motion; why it could not be anything else. It has to be something to which all of the
mechanical quantities can be reduced (and all other physical quantities as well, but for the
present we are examining the mechanical relations). The only entity that meets these
requirements is the simple relationship between space and time that we are defining as
motion. Motion is the common denominator of the field of mechanics.
It still remains to be established that motion is the common denominator of the entire
universe, but the demonstration that all of the quantities with which mechanics deals,
including mass, can be reduced to motion creates a strong presumption that when the
more complex phenomena in other fields are equally well understood they will also be
found to be reducible to motion. The development of theory in the subsequent pages of
this and the volumes to follow will show that this logical expectation is realized, and that
all physical phenomena and entities can, in fact, be reduced to motion.
The application of the Reciprocal System of theory to mechanics throws a significant
light on the relation of this theoretical system to conventional scientific thought. It was
asserted in Chapter 6 that the concept of a universe of motion, on which the new
theoretical system is based, is ―just the kind of a conceptual alteration that is needed to
clear up the existing physical situation: one which makes drastic changes where such
changes are required, but leaves the empirically determined relations of our everyday
experience essentially untouched.‖ Here, in application to a field in which the entire body
of knowledge is a network of ―empirically determined relations,‖ the validity of this
assertion is dramatically demonstrated. The only change that is found to be necessary in
mechanics is to recognize the fact that mass is reducible to motion. Otherwise, the entire
structure of mechanical theory is incorporated into the Reciprocal System just as it
stands. As will be shown in the pages that follow, the same is true in other fields to the
extent that the prevailing ideas in those fields are, like the principles of mechanics,
solidly based on empirically determined facts. But where the prevailing ideas are based
on assumptions–"free inventions of the human mind,‖ in Einstein’s words–the
development of the theory of a universe of motion now shows that most of these invented
ideas are erroneous, in part if not in their entirety. The Reciprocal System diverges from
current scientific thought only in those respects where current theory has been led astray
by erroneous assumptions. As indicated earlier, the phenomena involved are mainly those
not accessible to direct apprehension, primarily the phenomena of the very small, the
very large, and the very fast.
In all of the space-time expressions of physical quantities that were derived in the
preceding pages of this chapter, the dimensions of the denominator of the fraction are
either equal to or greater than the dimensions of the numerator. This is another result of
the discrete unit postulate, which prevents any interactions from being carried beyond the
unit level. Addition of speed displacement to motion in space reduces the speeds; the
atomic rotation can take place only in the negative scalar direction, and so on. The same
principle applies to the dimensions of physical quantities, and the dimensions of the
numerator of the space-time expression of any real physical quantity cannot be greater
than those of the denominator. Purely mathematical relations that violate this principle
can, of course, be constructed, but according to the theoretical findings they have no real
physical significance.
For example, the reciprocal of viscosity is known as fluidity, and in certain applications it
is more convenient for purposes of calculation to work with fluidity values rather than
viscosity values. But the space-time expression for fluidity is s⁴/t², and on the basis of the
principle just stated, we must conclude that viscosity is the quantity that has a real
physical existence.
The most notable of the quantities excluded by this dimensional principle is ―action.‖
This is the product of energy, t/s, and time t, and in space-time terms it is t²/s. Thus it is
not admissible as a real physical quantity. In view of the prominent place which it
occupies in some physical areas, this conclusion that it has no actual physical significance
may come as quite a surprise, but the explanation can be seen if we examine the most
familiar of the conventional applications of action: its use in the expression of Planck's
constant. The equation connecting the energy of radiation with the frequency is
E = hν
where h is Planck's constant. In order to be dimensionally consistent with the other
quantities in the equation this constant must be expressed in terms of action.
It is clear, however, from the explanation of the nature of the photon of radiation that was
developed in Chapter 4, that the so-called ―frequency‖ is actually a speed. It can be
expressed as a frequency only because the space that is involved is always a unit
magnitude. In reality, the space dimension belongs with the frequency, not with the
Planck constant. When it is thus transferred, the remaining dimensions of the constant are
t²/s², which are the dimensions of momentum, and are the reversing dimensions that are
required to convert speed s/t to energy t/s. In space-time terms, the equation for the
energy of radiation is
t/s = t²/s² x s/t
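This reassignment is easy to verify with the same exponent bookkeeping used earlier in the chapter. The few lines below are our own illustration, not part of the original text; they check that the momentum dimensions assigned here to Planck's constant, combined with the ―frequency‖ treated as a speed, give the dimensions of energy, and that the conventional ―action‖ assignment is one in which the time (numerator) dimensions exceed the space (denominator) dimensions:

# Dimensional check of the reassignment described above, in (space, time) exponent form.
energy   = (-1, 1)    # t/s
momentum = (-2, 2)    # t²/s², the dimensions assigned here to Planck's constant
speed    = (1, -1)    # s/t, the "frequency" treated as a speed

assert (momentum[0] + speed[0], momentum[1] + speed[1]) == energy  # t/s = t²/s² x s/t

# conventional assignment: action = energy x time = t²/s, numerator exceeds denominator
action = (energy[0], energy[1] + 1)
assert action == (-1, 2)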
Similar situations have developed in other cases where dimensions have been improperly
assigned in current practice. The energy of rotation, for instance, is commonly expressed
as ½Iω², where I is the moment of inertia, and ω is the angular velocity. The moment of
inertia is the product of the mass and the square of the distance: I = ms² = t³/s³ x s² = t³/s
This result shows that the moment of inertia is an artificial construct without physical
significance. The important part that it plays in the expression for rotational energy may
seem inconsistent with this conclusion, but again the explanation is that the space
magnitude has been improperly assigned. It belongs with the velocity term, not with the
mass term. When it is so transferred, the moment of inertia is eliminated, and the
rotational energy equation reverts to the normal kinetic form E = ½mv². The equation in
its usual form is merely a mathematical convenience, and does not reflect the actual
physical situation.
In addition to the kinds of relations that have been discussed so far in this chapter, where
the relations themselves are familiar, and only the analysis into space and time
components is new, there are other types of physical relations that are peculiar to the
universe of motion. At this time we will want to examine two of these: the limitations on
unidirectional motion, and the relations between motion in space and motion in time.
The translational and vibrational speeds with which we have been mainly concerned thus
far are speeds attained by means of directional reversals, and their magnitudes are not
subject to any limits other than those arising from the finite capabilities of the originating
processes. Rotation, however, is unidirectional from the scalar standpoint, and
unidirectional magnitudes are limited by the discrete unit postulate. On the basis of this
postulate, the maximum possible one-dimensional unidirectional speed is one net
displacement unit. However, the atom rotates in the inward scalar direction, and inward
motion necessarily takes place in opposition to the omnipresent outward motion of the
natural reference system. Two inward displacement units are therefore required in order
to reach the limit of one net unit. These two units extend from unity in the positive scalar
direction (the positive zero, in terms of the natural system) to unity in the negative scalar
direction (the negative zero), and they constitute the maximum for any one-dimensional
unidirectional motion. In three-dimensional space (or time) there can be two
displacement units in each of the three dimensions, and the maximum three-dimensional
unidirectional displacement is therefore 2³, or 8, units.
There have been some suggestions that the number of possible directions (and
consequently displacements) in three-dimensional space ought to be 3 x 2 = 6 rather than
2³ = 8. It should therefore be emphasized that we are not dealing with three individual
dimensions of motion, we are dealing with three-dimensional motion. The possible
directions in a three-dimensional continuum can be visualized by regarding a two-unit
cube as being an assemblage of eight one-unit cubes. The diagonals from the center of the
assemblage to the opposite corner of each of the cubes then define the eight possible
directions.
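The count can be made concrete with a few lines of code; this is our own illustration, not part of the original text. Taking the center of the two-unit cube as the origin, the diagonal to the outer corner of each one-unit cube is one of the sign combinations (±1, ±1, ±1), and there are 2³ = 8 of them, not 3 x 2 = 6:

# Enumerating the eight diagonal directions of the two-unit cube described above.
from itertools import product

directions = list(product((-1, 1), repeat=3))  # every sign combination of (±1, ±1, ±1)
assert len(directions) == 2 ** 3 == 8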
An important consequence of the fact that there are eight displacement units between the
zero point of the positive motion and the end of the second unit, which is the zero from
the negative standpoint, is that in any physical situation involving rotation, or other three-
dimensional motion, there are eight displacement units between positive and negative
magnitudes. A positive displacement x from the positive datum is physically equivalent
to a negative displacement 8 - x from the negative datum. This is a principle that will
have a wide field of application in the pages that follow.
The key factor in the relation between motion in space and motion in time is the
previously mentioned fact that in the context of a spatial reference system all motion in
time is scalar, and in the context of a temporal reference system all motion in space is
scalar. The regions of motion in time and motion in space therefore meet in what is
essentially no more than a point contact. It follows that of all of the possible directions
that a motion in time can take, only one of these time directions brings the motion in time
into contact with the region of motion in space. Only in this one direction can an effect be
transmitted across the regional boundary. Inasmuch as all possible directions are equally
probable, in the absence of any factors that would establish a preference, the ratio of the
transmitted effect to the total magnitude of the motion is numerically equal to the reciprocal
of the total number of possible directions.
As can be seen from the foregoing explanation, the transmission ratio depends on the
nature of the motion, particularly on the number of dimensions involved. However, the
value with which we will be most concerned is that applicable to the basic properties of
matter. This is the relation that was called the inter-regional ratio in the first edition, and
it appears advisable to retain this name, although the more extensive information now
available shows the relation is not as general as the name might indicate.
On the basis of the theoretical considerations discussed in the preceding paragraphs, there
are 4 possible orientations of each of the two two-dimensional rotations of the atoms, and
8 possible orientations of the one-dimensional rotations, making a total of 4 x 4 x 8 = 128
different positions that a unit displacement of the scalar translational motion of the atom
(the inward scalar effect of the rotation) can take in three-dimensional time. In addition,
each of the rotating systems of the atom has an initial unit of vibrational displacement
with three possible orientations, one in each dimension. For the two-dimensional basic
rotation this means nine possible positions, of which two are occupied. Thus, for each of
the 128 possible rotational positions there is an additional 2/9 vibrational position which
any given displacement unit may occupy. The inter-regional ratio is then 128 (1 + 2/9) =
156.44.
It is this inter-regional ratio that accounts for the small ―size‖ of atoms when the
dimensions of these objects are measured on the assumption that they are in contact in the
solid state. According to the theory developed in the foregoing pages, there can be no
physical distance less than one natural unit, which, as we will see in the next chapter, is
4.56 x 10⁻⁶ cm. But because the inter-atomic equilibrium is established in the region
inside this unit, the measured inter-atomic distance is reduced by the inter-regional ratio,
and this measured value is therefore in the neighborhood of 10⁻⁸ cm.
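The arithmetic of the last two paragraphs can be restated compactly. The sketch below is our own illustration, not part of the original text; the natural unit of space anticipates the value derived in Chapter 13:

# The inter-regional ratio and the resulting inter-atomic scale, as described above.
rotational_positions = 4 * 4 * 8                 # 128 positions in three-dimensional time
inter_regional_ratio = rotational_positions * (1 + 2 / 9)
print(round(inter_regional_ratio, 2))            # 156.44

natural_unit_of_space = 4.558816e-6              # cm (Chapter 13)
print(natural_unit_of_space / inter_regional_ratio)  # about 2.9e-8 cm, of the order of 10⁻⁸ cm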
The inversion of space and time at the unit level also has an important effect on the
dimensions of inter-regional relations. Inside unit space no changes in space magnitudes
can take place, since less than unit space does not exist. However, as pointed out earlier,
the motion in time, which can take place inside the space unit, is equivalent to a motion
in space because of the inverse relation between space and time. An increase in the time
aspect of a motion in this inside region (the time region, where space remains constant at
unity) from 1 to t is equivalent to a decrease in the space aspect from 1 to 1/t. Where the
time is t, the speed in this region is the equivalent space 1/t divided by the time t, or 1/t².
In the region outside unit space, the speed corresponding to one unit of space and time t is
1/t. Now we find that in the time region it is 1/t². The time region speed, and all quantities
derived therefrom, which means all of the physical phenomena of the inside region, as all
of these phenomena are manifestations of motion, are therefore second power expressions
of the corresponding quantities of the outside region. This is an important principle that
must be taken into account in any relation involving both regions. The intra-region
relations may be equivalent; that is, the expression a = bc is the mathematical equivalent
of the expression a² = b²c². But if we measure the quantity a in the outside region, it is
essential that the equation be expressed in the correct regional form: a = b²c².
Although the difficulties which the Reciprocal System of theory does not encounter do
not enter into the development of thought in these pages, and, strictly speaking, have no
real place in the discussion, it may be of interest, while we are considering some of the
factors that enter into the phenomena of very small dimensions, to point out that the
theory of a universe of motion is free from the problem of infinities that plagues all
conventional theories in this physical area. Richard Feynman gives us a candid
assessment of the existing theoretical situation:
We really do not know exactly what it is that we are assuming that gives us the difficulty
producing infinities. A nice problem!
However, it turns out that it is possible to sweep the infinities under the rug, by a certain
crude skill, and temporarily we are able to keep on calculating.... We have all these nice
principles and known facts, but we are in some kind of trouble: either we get the
infinities, or we do not get enough of a description–we are missing some parts.58
The Reciprocal System is free of these problems because it is a fully quantized system of
theory. Every physical phenomenon, this theory tells us, is a manifestation of motion, and
every motion involves at least one unit of space and one unit of time. For convenience,
we may identify a ―point‖ within a unit of space or a unit of time, but such a point has no
independent existence. Nothing less than one unit of either space or time exists in the
universe of motion.

CHAPTER 13

Physical Constants
Because motion and its components, space and time, exist only in units, the derivatives of
motion, dimensional variations of the basic relation between space and time, such as
acceleration, force, etc., also exist only in natural units. A natural unit of force, for
example, is a natural unit of time divided by a two-dimensional natural unit of space. It
then follows that where a relation of the kind discussed in Chapter 12 is correctly stated,
it is valid as a quantitative relation between units without any arbitrary ―constant.‖ The
expression F = ma, for example, tells us that one natural unit of force applied to one
natural unit of mass will produce an acceleration of one natural unit. When all quantities
are expressed in natural units there are no numerical constants in equations of this kind
aside from what we may call structural factors: geometrical factors such as the number of
effective dimensions, numerical factors such as the second and third powers of the
quantities entering into the relations, and so on.
There has been a great deal of speculation as to the nature and origin of the ―fundamental
constants‖ of present-day physics. An article in the Sept. 4, 1976 issue of Science News,
for example, contends that we are confronted with a dilemma, inasmuch as there are only
two ways of looking at these constants, neither of which is really acceptable. We must
either, the article says, ―swallow them ad hoc‖ without justification for ―their necessity,
their constancy, or their values,‖ or we must accept the Machian hypothesis that they are,
in some unknown way, determined by the contents of the universe as a whole. The
development of the Reciprocal System of theory has now resolved this dilemma in the
same way that it handled a number of the long-standing problems considered in the
earlier pages; that is, by exposing it as fictitious. When all quantities are expressed in the
proper units–the natural units of which the universe of motion is constructed–the
―fundamental constants‖ reduce to unity and vanish.
A preliminary step that has to be taken before we can compare the mathematical results
derived from the new theory with the numerical values obtained by measurement is to
ascertain the conversion ratios by which the values in the natural system can be converted
to the conventional system of units in which the measurements are reported. Inasmuch as
the conventional units are arbitrary, there is no way in which the conversion factors can
be calculated theoretically. It is necessary to utilize a measurement of some specific
physical quantity for each independent conventional unit. Any physical quantity, which
involves the item in question, and can be clearly identified, will theoretically serve the
purpose, but for maximum accuracy certain basic phenomena that are relatively simple,
and have been carefully studied observationally, are clearly preferable.
There is no question as to where we should obtain the value of the natural unit of speed,
or velocity. The speed of radiation, measured as the speed of light in a vacuum, 2.99793 x
10¹⁰ cm/sec, is an accurately measured quantity that is definitely identified as the natural
unit by the theoretical development. There are some uncertainties with respect to the
other conversion factors, both as to the accuracy of the experimental values from which
they have to be calculated, and as to whether all of the minor factors that enter into the
theoretical situation have been fully taken into account. Some improvement has been
made in both respects since the first edition was published, and the principal
discrepancies that existed in the original findings have been eliminated, or at least greatly
minimized. No significant changes were required in the values of the basic natural units,
but some of the details of the manner in which these units enter into the determination of
the ―constants‖ and other physical magnitudes have been clarified in the course of
extending the development of the theoretical structure.
One of the problems in this connection is that of arriving at a decision as to which of the
reported measured values should be used in the calculations. Ordinarily it would be
assumed that the more recent results are the more accurate, but an examination of these
recent values and the methods by which they have been obtained indicates that this is not
necessarily true. Apparently the ―consistent‖ values listed in the up-to-date tabulations
involve some adjustments of the raw data to conform with current theoretical ideas as to
the relations that should exist between the various individual values. For purposes of this
present work the unadjusted data are preferable.
The principal question at this point concerns the experimental values of Avogadro's
number, as only three conversion constants are required for present purposes, and there
are no significant differences in the measurements of the quantities that will be used in
calculating two of these constants. The more recent values reported for Avogadro’s
number are somewhat lower than those reported earlier, but the correlation with the
gravitational constant, which will be discussed shortly, favors some of the earlier results.
The value adopted for use in evaluating the conversion constant for mass, 6.02486 x 10²³
g-mol⁻¹, has therefore been taken from a 1957 tabulation by Cohen, Crowe and
DuMond.59
In any event, it should be understood that wherever the results obtained in this work are
expressed in the arbitrary units of a conventional system, they are accurate only to the
degree of accuracy of the experimental values of the quantities used in determining the
conversion constants. Any future change in these values resulting from improvement of
experimental techniques will involve a corresponding change in the values calculated
from theoretical premises. However, this degree of uncertainty does not apply to any
results that are stated in natural units, or in conventional terms such as units of atomic
number that are equivalent to natural units.
As in the first edition, the natural unit of time has been calculated from the Rydberg
fundamental frequency. A question has arisen here because this frequency varies with the
mass of the emitting atom. The original calculation was based on the value applicable to
hydrogen, but this has been questioned, as the prevailing opinion regards the value
applicable to infinite mass as the fundamental magnitude. A definitive answer to this
question will not be available until the theory of the variation in the frequency has been
worked out, but in the meantime a review of the situation indicates that we should stay
with the hydrogen value. From the theoretical viewpoint it would seem that
the unit value would come from an atom of unit magnitude, rather than from an infinite
number of atoms. Also, even though the difference is small, the value thus derived seems
to be more consistent with the general pattern of measured magnitudes than the
alternative.
From the manner in which the Rydberg frequency appears in the mathematics of
radiation, particularly in such simple relations as the Balmer series of spectral lines, it is
evident that this frequency is another physical manifestation of a natural unit, similar in
this respect to the speed of light. It is customarily expressed in cycles per second on the
assumption that it is a function of time only. From the explanation previously given, it is
apparent that the frequency of radiation is actually a velocity. The cycle is an oscillating
motion over a spatial or temporal path, and it is possible to use the cycle as a unit only
because that path is constant. The true unit is one unit of space per unit of time (or the
inverse of this quantity). This is the equivalent of one half-cycle per unit of time rather
than one full cycle, as a full cycle involves one unit of space in each direction. For
present purposes the measured value of the Rydberg frequency should therefore be
expressed as 6.576115 x 10¹⁵ half-cycles per second. The natural unit of time is the
reciprocal of this figure, or 1.520655 x 10⁻¹⁶ seconds. Multiplying the unit of time by the
natural unit of speed, we obtain the value of the natural unit of space, 4.558816 x 10⁻⁶
centimeters.
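The conversion just described reduces to two lines of arithmetic. The following sketch is our own illustration, not part of the original text, using the measured values quoted above:

# Natural units of time and space from the Rydberg frequency and the speed of light.
rydberg_half_cycles = 6.576115e15        # half-cycles per second
speed_of_light = 2.99793e10              # cm/sec, the natural unit of speed

unit_time = 1 / rydberg_half_cycles      # ~1.520655e-16 sec
unit_space = speed_of_light * unit_time  # ~4.558816e-6 cm
print(unit_time, unit_space)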
By combining these two natural units as required, the natural units of all of the quantities of
the velocity group can be calculated. Those of the inverse quantities, the energy group,
can also be calculated in the same centimeter-second terms, but this gives us expressions
such as 3.711381 x 10⁻³² sec³/cm³, which is the natural unit of mass. This value has no
practical use because the inverse relations between the quantities of the velocity group
and those of the energy group have not hitherto been recognized. In setting up the
conventional system of units it has been assumed that mass is another fundamental
quantity for which an additional arbitrary unit is necessary. The ratio of the velocity-
based unit of mass to this arbitrary unit, the gram, can be derived from any clearly
defined physical relation involving mass that has been accurately measured in
conventional units. As indicated earlier, the measurement selected for this purpose is that
of Avogadro's constant. This constant is the number of molecules per gram molecular
weight, or in application to atoms, the number of atoms per gram atomic weight. The
reported value is 6.02486 x 10²³. The reciprocal of this number, 1.65979 x 10⁻²⁴, in grams,
is therefore the mass equivalent of unit atomic weight, the unit of inertial mass, as we
will call it.
With the addition of the value of the natural unit of inertial mass to the values previously
derived for the natural units of space and time, we now have all of the information
required for calculation of the natural units of the other primary quantities of the
mechanical system. The mechanical units can be summarized as follows:
Natural Units of Primary Quantities

                          Space-time Units               Conventional Units
s      space              4.558816 x 10⁻⁶ cm             4.558816 x 10⁻⁶ cm
t      time               1.520655 x 10⁻¹⁶ sec           1.520655 x 10⁻¹⁶ sec
s/t    speed              2.997930 x 10¹⁰ cm/sec         2.997930 x 10¹⁰ cm/sec
s/t²   acceleration       1.971473 x 10²⁶ cm/sec²        1.971473 x 10²⁶ cm/sec²
t/s    energy             3.335635 x 10⁻¹¹ sec/cm        1.49175 x 10⁻³ ergs
t/s²   force              7.316889 x 10⁻⁶ sec/cm²        3.27223 x 10² dynes
t/s⁴   pressure           3.520646 x 10⁵ sec/cm⁴         1.57449 x 10¹³ dynes/cm²
t²/s²  momentum           1.112646 x 10⁻²¹ sec²/cm²      4.97593 x 10⁻¹⁴ g-cm/sec
t³/s³  inertial mass      3.711381 x 10⁻³² sec³/cm³      1.65979 x 10⁻²⁴ g
The values given in the first column of this tabulation are those derived by applying the
natural units of space and time to the space-time expressions for each physical quantity.
In the case of the quantities of the speed or velocity type, these are also the values
applicable in the conventional systems of measurement. However, mass is regarded as an
independent fundamental variable in the conventional systems, and a mass term is
introduced into each of the quantities of the energy type. Momentum, for example, is not
treated as t²/s², but as the product of mass and velocity, which, in space-time terms, is t³/s³
x s/t. The use of an arbitrary unit of mass then introduces a numerical factor. Thus, in
order to arrive at the values of the natural units in terms of the cgs system of
measurement, each of the values given for the energy group in the first column of the
tabulation must be divided by this factor: 2.236055 x 10⁻⁸.
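As an illustration of how the conventional-unit column of the tabulation is obtained, the mass conversion factor and the cgs values of some of the energy-group quantities can be computed as follows. This sketch is our own, not part of the original text, and uses the figures quoted in this chapter:

# Deriving the mass conversion factor and the cgs values of the energy group.
unit_time = 1.520655e-16          # sec
unit_space = 4.558816e-6          # cm
avogadro = 6.02486e23             # 1957 value, molecules per gram molecular weight

unit_mass_natural = (unit_time / unit_space) ** 3   # t³/s³, ~3.711381e-32 sec³/cm³
unit_mass_grams = 1 / avogadro                      # ~1.65979e-24 g
factor = unit_mass_natural / unit_mass_grams        # ~2.236055e-8

print((unit_time / unit_space) / factor)            # energy, ~1.49175e-3 ergs
print((unit_time / unit_space ** 2) / factor)       # force, ~3.27223e2 dynes
print((unit_time / unit_space) ** 2 / factor)       # momentum, ~4.97593e-14 g-cm/sec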
As we saw in Chapter 10, the masses of the atoms of matter can be expressed in terms of
units of equivalent electric displacement. The minimum quantity of displacement is one
atomic weight unit. It is therefore evident that this displacement unit is some kind of a
natural unit of mass. In the first edition it was identified as the natural unit of mass in
general. The continuing theoretical development has revealed, however, that this atomic
weight unit, the unit of inertial mass, is actually a composite that includes not only a unit
of what we will now call primary mass, the basic mass quantity, but also a unit of
secondary mass.
The concept of secondary mass was introduced in the first edition, without being
developed very far. A considerably more detailed treatment is now available. The inward
motion in space which gives rise to the primary mass does not take place from an initial
level occupying a fixed location in a stationary frame of reference. Instead, the initial
level itself is in motion in the region inside unit space. Since mass is an expression of the
inward motion that is effective in the context of a stationary reference system, the
primary mass is modified by the mass equivalent of the motion of the initial level.
While the previous deductions with respect to the essential features of the secondary
mass component have been confirmed in the subsequent studies, a few of the details take
on a somewhat different appearance when viewed in the light of the more complete
information now available. The recent findings indicate that although the primary mass is
a function of the net total effective positive rotational displacement, the movement of the
initial level that is responsible for the existence of the secondary mass depends on the
magnitudes of the displacements in the different dimensions separately.
The scalar directions of the motions inside unit distance play an important part in
determining these magnitudes. Outside unit distance, the scalar direction of the rotational
motion is inward because it must oppose the outward motion of the natural reference
system. However, as we saw in Chapter 10, the magnitude of that inward motion depends
to some extent on whether the displacement in the electric dimension is positive or
negative. Inside unit space there is still more variability, as the motion in this region is in
time, and there is no fixed relation between direction in time and direction in space. (The
rotational motion of which the material atom or particle is constructed is motion in space,
but inside one spatial unit the translational motion of the atom is in time.)
Because of this directional freedom in the time region, the secondary mass may be either
positive or negative. Furthermore, the directions of the individual displacement units are
independent of each other, and the net total secondary mass of a complex atom may be
relatively small because of the presence of nearly equal numbers of positive and negative
secondary mass components. This directional variability introduces a number of
complications into the secondary mass pattern of the elements. The complete pattern has
not yet been identified, but a substantial amount of information is now available with
respect to the values applying to sub-atomic particles and the elements of low atomic
number.
The magnitudes of the natural units applicable to physical quantities are independent of
the sector or region of the universe in which the phenomena to which they relate are
located. As explained in Chapter 12, however, only a fraction of any physical effect can
be transmitted across a regional boundary, and the measured value beyond that boundary
is substantially less than the original unit. This is the principal reason for the great
disparity between the magnitudes of the primary and secondary mass. A unit of mass in
the region inside unit distance is inherently just as large as a unit of mass in the region
outside unit distance. But when both are measured in terms of their effect in the outside
region, the inside, or secondary, mass is reduced by the interregional ratio.
In this chapter we are dealing with some very small quantities, and for greater accuracy
we will extend the previously calculated value of the inter-regional ratio to two more
decimal places, making it 156.4444. The reciprocal of this ratio, 0.00639205, is the
fraction of a time region unit that is effective outside unit distance. It is therefore the unit
of secondary mass applicable to the basic two-dimensional rotation of the atom or
particle. The unit of inertial mass is one such secondary unit plus one unit of primary
mass, or a total of 1.00639205.
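The figures in this paragraph follow directly from the inter-regional ratio; the check below is our own illustration, not part of the original text:

# Secondary (magnetic) mass and the unit of inertial mass, per the paragraph above.
inter_regional_ratio = 128 * (1 + 2 / 9)        # 156.4444
magnetic_unit = 1 / inter_regional_ratio        # ~0.00639205
inertial_unit = 1 + magnetic_unit               # ~1.00639205
print(round(magnetic_unit, 8), round(inertial_unit, 8))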
An analysis of the secondary mass relations enables us to compute the mass of each of
the sub-atomic particles, a magnitude that is of interest not only as one more item of
information about the physical universe, but also because of the light that it throws on the
structure of the individual particle. Here we must take into account not only the two-
dimensional component of the secondary mass, the magnetic component, as we will call
it, following our usual terminology, but also the other components that may be involved
in the secondary mass. One of these is the component due to the electric rotation, if any.
Inasmuch as this electric rotation, the rotation in the third dimension, is not an
independent motion, but a reverse rotation of the pre-existing two-dimensional rotating
system, or systems, it adds neither primary mass nor the magnetic unit that is the
principal component of the secondary mass. It contributes only the mass equivalent of a
unit of one-dimensional rotation. In this case, the 1/9 factor representing the possible
positions of the basic photon applies directly against the basic 1/128 relation. We then
have for the unit of electric mass:
1/9 x 1/128 = 0.00086806
This value applies specifically where the motion around the electric axis is a rotation of a
two-dimensional displacement distributed over all three dimensions, as in a double
rotating system. Where only one two-dimensional rotation is involved, the electric mass
is 2/3 of the full unit, or 0.00057870. When two of the two-dimensional rotations (four
dimensions in all) are consolidated to form a double rotating system (three dimensions),
the two 0.00057870 mass units become one 0.00086806 unit.
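Both electric mass values are simple fractions built on the 1/128 relation; as a check (our own sketch, not part of the original text):

# Electric mass units, per the two paragraphs above.
electric_mass_3d = (1 / 9) * (1 / 128)           # ~0.00086806, double rotating system
electric_mass_2d = (2 / 3) * electric_mass_3d    # ~0.00057870, single two-dimensional rotation
print(round(electric_mass_3d, 8), round(electric_mass_2d, 8))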
Another secondary mass component that may be present is the mass due to an electric
charge. Like all other phenomena in a universe of motion, a charge is a motion, an
additional motion of the atom or particle. We are not ready to discuss charges in detail at
this stage of the presentation, so for the present we will merely note that on the basis of
the restrictions on combinations of motions defined in Chapter 9, the charge, as a motion
of the rotating particle or atom, must have a displacement opposite to that of the rotation
in order to be stable. This means that the motion that constitutes the charge is on the far
side of another regional boundary–another unit level–and it is subject to two successive
inter-regional transmission factors.
The relation between the time region and the third region, in which the motion of the
charge takes place, is similar to that between the time region and the region outside unit
space. The inter-regional ratio is the same, except that because the electric charge is one-
dimensional the factor 1 + 1/9 has to be substituted for the factor 1+2/9 that appears in
the inter-regional ratio previously calculated. This makes the inter-regional ratio
applicable to the relation with the third region 128 x (1 + 1/9) = 142.2222. The mass of
unit charge is the reciprocal of the product of the two inter-regional ratios, 156.4444 and
142.2222, and amounts to 0.00004494.
The charge applicable to electrons and positrons deviates from this normal value because
these particles have effective rotations in only one dimension, leaving the other two
dimensions open. In some way, the exact nature of which is not yet clear, the motion of
the charge is able to take place in these two dimensions of the time region instead of in
the normal manner. Since this is on the opposite side of the unit boundary, the direction
of the effect is reversed, making the mass increment due to the charge negative, as well as
reducing its magnitude by one third. The effective mass of a charge applied to an electron
or positron is therefore -2/3 x 0.00004494 = -0.00002996.
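The charge masses follow the same pattern of successive inter-regional ratios; a short check (our own sketch, not part of the original text):

# Charge masses, per the two paragraphs above.
ratio_two_dim = 128 * (1 + 2 / 9)      # 156.4444, time region to outside region
ratio_one_dim = 128 * (1 + 1 / 9)      # 142.2222, time region to third region

normal_charge_mass = 1 / (ratio_two_dim * ratio_one_dim)   # ~0.00004494
electron_charge_mass = -2 / 3 * normal_charge_mass         # ~-0.00002996
print(round(normal_charge_mass, 8), round(electron_charge_mass, 8))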
We may now apply the calculated values of the several mass components, as given in the
foregoing paragraphs, to a determination of the masses of the sub-atomic particles
described in Chapter 11. For convenience, these values will be recapitulated as follows:
p primary mass 1.00000000
m magnetic mass 0.00639205
gravitational mass 1.00639205
E electric mass (3 dim.) 0.00086806
e electric mass (2 dim.) 0.00057870
C mass of normal charge 0.00004494
c mass of electron charge -0.00002996
These are the masses of the various components on the natural scale. The measured
values are reported in terms of a scale based on an arbitrarily assumed mass for some atom
or isotope that is taken as a standard. For a number of years there were two such scales in
common use: the chemical scale, based on the atomic weight of oxygen as 16, and the
physical scale, which assigned the 16 value to the O¹⁶ isotope. More recently, a scale
based on an atomic weight of 12 for the C¹² isotope has found favor, and most of the
values given in the current literature are expressed in terms of the C¹² scale. In the light of
the findings of this work the shift away from the O¹⁶ scale is unfortunate, as the theoretical
development indicates that the O¹⁶ isotope has a mass of exactly 16 on the natural scale,
and the physical scale (O¹⁶ = 16) is therefore coincident with the natural scale. It will, of
course, be necessary to use the natural scale for our purposes. The observed values quoted
for comparison with the theoretical masses will therefore be stated in terms of the
equivalent O¹⁶ physical scale.
Here again we face the same issue that was encountered early in this chapter in
connection with the selection of an empirical value of Avogadro's number as a basis for
calculating the unit of mass: the question as to whether we should regard the most recent
determination as the most accurate. It would appear that the arguments that led to the
acceptance of the 1957 value of Avogadro’s number are also applicable to the particle
masses, particularly since the agreement between the calculated and observed masses of
the electron and proton is quite satisfactory on this basis. The empirical values cited in the
paragraphs that follow have therefore been taken from the 1957 compilation by Cohen,
Crowe and DuMond.59
Since mass is three-dimensional, an independent one-dimensional or two-dimensional
rotation has no mass. Nevertheless, when such a rotation becomes a component of a
three-dimensional rotation, it contributes to the mass equivalent of that rotation. This
amount that a rotation which is massless when independent will add to the mass of a
particle or atom when it joins that combination of motions constitutes what we will call
potential mass.
In the case of the particles with no effective two-dimensional rotational displacement, the
electron and the positron, the appropriate unit of electric mass, 0.00057870, is the entire
mass of the particle, and even that mass is only potential, rather than actual, as long as the
particle is in the basic uncharged condition. When a charge is added, the effect of the
charge is distributed over all three dimensions by the chance process that governs the
directions of the motion of the charge in the time region. Thus the charged particle has
effective motion in all three dimensions, irrespective of the number of dimensions of
rotation. This not only makes the mass of the charge itself an effective quantity, but, as
indicated in Chapter 11, it also raises the potential mass of the rotation of the particles to
the effective status. The net effective mass of the electron or the positron is then the
rotational value 0.00057870 less the mass of the charge 0.00002996, or 0.00054874. The
observed value is 0.00054877.
The massless neutron, the M ½-½-0 combination, has no effective rotation in the third
dimension, but no rotation from the natural standpoint is rotation at unit speed from the
standpoint of a fixed reference system. This rotational combination therefore has an
initial unit of electric rotation, with a potential mass of 0.00057870, in addition to the
mass of the two-dimensional basic rotation 1.00639205, making the total potential mass
of this particle 1.00697075.
In this connection, it should be noted that the electron and positron also have rotation at
unit speed (no rotation, in terms of the natural system) in the two inactive dimensions, but
these rotations involve no mass, as they are independent, and are not rotating anything.
The initial unit of rotation in the third dimension of the massless neutron, on the other
hand, is a reverse rotation of the two-dimensional structure, and it therefore adds an
electric mass unit.
The neutrino, M ½-½-(1), has the same unit positive displacement in the magnetic
dimensions as the massless neutron, but it has neither primary nor magnetic mass because
these are functions of the net total displacement, and that quantity is zero for the neutrino.
But since the electric mass is independent of the basic rotation, and has its own initial
unit, the neutrino has the same potential mass as the uncharged electron or positron,
0.00057870.
The potential mass of both the massless neutron and the neutrino is actualized when the
rotations of these particles are joined to produce a three-dimensional rotation. The mass
of the resulting particle is then 1.00754945. As indicated in Chapter 11, this particle is the
proton. As it is observed, however, the proton is positively charged, and in this condition
the foregoing figure is increased by the mass of a unit charge, 0.00004494. The resulting
mass of the theoretical charged proton is 1.00759439. The mass of the observed proton
has been measured as 1.007600.
Consolidation of two protons results in the formation of a double rotating system. As
stated earlier, this substitutes one three-dimensional electric unit of mass for two of the
two-dimensional units, reducing the combined mass by 0.00028935. The mass of the
product, the deuterium atom (H²), is the sum of two (uncharged) proton masses less this
amount, or 2.014810. The corresponding observed value is 2.014735.
Inasmuch as the proton already has a three-dimensional status, addition of another
neutrino alters only the electric mass. The material neutrino adds the normal two-
dimensional electric unit, 0.00057870, making the total for the product, the mass one
isotope of hydrogen, 1.00812815. The measured value is reported as 1.008142.
The successive additions of neutrinos to the massless neutron that eventually produce the
mass one isotope of hydrogen should be given special attention, as the considerations
which will be discussed in Chapter 17 indicate that this addition process plays a very
significant part in the overall cyclic mechanism of the universe. The following tabulation
shows how the mass of the hydrogen isotope is built up step by step.
Step by Step Building Process for the Hydrogen Isotope

                primary mass        1.00000000
                magnetic mass       0.00639205
                electric mass       0.00057870
M ½-½-0         massless neutron    1.00697075*
M ½-½-(1)       neutrino            0.00057870*
M 1-1-(1)       proton              1.00754945
M ½-½-(1)       neutrino            0.00057870*
M 1½-1½-(2)     hydrogen (H¹)       1.00812815
* potential mass
Neutrinos are plentiful in the local environment. The requirement for production of new
matter in the form of hydrogen by the addition process is therefore a continuing supply of
massless neutrons. In Chapter 15 we will find that there is in operation a gigantic process
that furnishes just such a supply.
Addition of a cosmic neutrino, the rotational displacements of which are on the opposite
side of the unit boundary, to the proton, involves an additional initial electric unit, as both
the rotation in time and the rotation in space must start from unity. Also the spatial effect
of the cosmic neutrino rotation is three-dimensional, since the spatial direction of motion
in time is indeterminate. The total addition of mass to the proton in the production of the
compound neutron is then 0.00144676, and the resulting mass of the particle is
1.00899621. It has been measured as 1.008982.
The following is a summary of the particle masses and the mass components from which
these masses are built up. The empirical values from the 1957 compilation are given for
comparison. As noted earlier, the correlation is quite satisfactory for the electron and the
proton, as it is within the estimated range of experimental error. The divergence in the
case of the heavier particles is not large, but it exceeds the estimated error. Whether the
source of this discrepancy is in the theoretical development or in the experimental
determinations remains to be ascertained.
Composition       Particle              Mass Calculated   Mass Observed

e-c               charged electron      0.00054874        0.00054876
e-c               charged positron      0.00054874        0.00054876
e                 electron              0.00057870*       massless
e                 positron              0.00057870*       massless
e                 neutrino              0.00057870*       massless
p + m + e         massless neutron      1.00697075*       massless
p + m + 2e        proton                1.00754945        unobserved
p + m + 2e + C    charged proton        1.00759439        1.007593
p + m + 3e        hydrogen (H¹)         1.00812815        1.008142
p + m + 3e + E    compound neutron      1.00899621        1.008982

* potential mass
In the first edition the relation between the natural unit of mass and the arbitrary unit in
the cgs system was identified in terms of the gravitational constant. It has recently been
pointed out by Todd Kelso and Steven Berline that the relation thus established cannot be
converted to a different system of units such as the SI (mks) system. This made it evident
that the interpretation of the gravitational phenomenon on which the previous
determination was based was, in some way, erroneous. An analysis of the situation was
therefore carried out in order to locate the point of error.
The invalidation of the interpretation of the gravitational equation has no effect on any
other feature of the theoretical results that have been obtained from the Reciprocal
System, as described in this volume. Its sole result has been to leave this system of theory
without any connection between the gravitational equation and the theoretical structure.
Once the situation is viewed in this light, it is immediately apparent that the lack of
connection between the equation and physical theory is not peculiar to the Reciprocal
System. Conventional theory does not identify the connection either. The physics
textbooks find it necessary to admit this fact in statements such as the following: "It
should be noted that Newton's law of universal gravitation is not a defining equation like
Newton's second principle of mechanics and cannot be derived from defining equations.
It represents an observed relation." This is a theoretical discrepancy that conventional
physics has not been able to resolve. But it is an isolated discrepancy, and it has been
swept under the rug by assigning fictitious dimensions to the gravitational constant.
It follows from this that the error lies in some interpretation of that "observed relation"
that has been common to both conventional theory and the Reciprocal System. Evidently
the developers of both systems of theory have misunderstood the true nature of the
phenomenon. Here, again, recognition of the source of the difficulty points the way to the
resolution of the problem. As brought out in the earlier chapters, one mass does not
actually exert a force on another–each is pursuing its own course independently of all
others–but the results of the inward motions of two masses are similar to those that would
follow if the masses did attract each other. These results can therefore be represented in
terms of an attractive force, on an "as if" basis. But in order to do this we must put the "as
if" forces on the same footing as real forces.
A force can only be exerted against a resistance. Hence, when we attribute a force to the
motion of one mass we cannot also attribute a force to the motion of the other. We must
attribute a resistance to the second mass. Thus, an "as if" force, a gravitational force, is
exerted against an "as if" inertial resistance. In the previous discussion we identified
gravitation as three-dimensional motion, s³/t³, and inertia as three-dimensional resistance
to motion, t³/s³. The product of the gravitational motion and the inertial resistance
therefore does not have the dimensions of mass to the second power, as the conventional
expression of the gravitational equation indicates; it is dimensionless.
This is a situation in which the ability to reduce all physical quantities to space-time
terms is very helpful. It will also be convenient to examine the dimensional situation
independently before taking up the question of the numerical values. The gravitational
equation, as expressed in current practice, is assigned dimensions as follows:
(dynes cm² g⁻²) x g² x cm⁻² = dynes (13-1)
Reducing equation 13-1 to space-time terms in accordance with the relations established
in Chapter 12 (in which dynes, as g-cm/sec², are t³/s³ x s x 1/t² = t/s²), we have
(t/s² x s² x s⁶/t⁶) x t⁶/s⁶ x 1/s² = t/s² (13-2)
In the light of the new understanding of the mm' term as the dimensionless product of
gravitational and inertial mass, it is now evident that the s⁶/t⁶ dimensions belong with mm'
rather than with the gravitational constant. When they are so applied, the resulting
dimensions of mm' cancel out, as the true theoretical dimensions do. We can therefore
replace them with the correct dimensions. As pointed out in the first edition, there are
also two other errors in the customary assignment of dimensions to this equation. The
distance term is actually dimensionless. It is the ratio of 1/n² to 1/1². The dimensions that
are mistakenly assigned to this term belong to a term whose existence has not been
recognized because it has unit value, and therefore does not enter into the numerical
calculation. In order to put the "as if" gravitational interaction on the same basis as a real
interaction, we have to express it in terms of the action of a force on a resistance, not as
the action of a mass on a resistance. And since the dimensions of the mass term cancel, so
that the gravitational mass enters the equation only as a dimensionless number, the force
of gravitation has to be expressed in actual force terms; that is, as t/s². The correct
dimensional form of the equation is then
(s³/t³ x t³/s³) x t/s² = t/s² (13-3)
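The dimensional reductions in equations 13-1 to 13-3 can also be checked mechanically. The following Python sketch is illustrative only; it represents each quantity by its space-time exponents, a pair (a, b) standing for s^a t^b, and multiplies the factors together.

    # Each space-time expression s^a t^b is represented by the exponent pair (a, b).
    def combine(*dims):
        """Multiply dimensional factors by adding their exponents."""
        return (sum(d[0] for d in dims), sum(d[1] for d in dims))

    FORCE  = (-2, 1)                              # t/s², the space-time form of force
    G_CGS  = combine(FORCE, (2, 0), (6, -6))      # dyne cm² g⁻²: t/s² x s² x s⁶/t⁶
    MM     = (-6, 6)                              # g², conventionally assigned t⁶/s⁶
    INV_R2 = (-2, 0)                              # 1/s²

    print(combine(G_CGS, MM, INV_R2))             # (-2, 1), i.e. t/s² -- equation 13-2
    print(combine((3, -3), (-3, 3), FORCE))       # (-2, 1), i.e. t/s² -- equation 13-3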
Turning now to the numerical magnitudes, we note that while the dimensions of the mm'
term cancel out, the magnitudes do not. Every unit of mass is both a unit of s³/t³ and a
unit of t³/s³, each in its proper context. Since the units are independent, the effective
magnitude of the "as if" action of m units of gravitation against m' units of inertial
resistance is mm'. However, expressing both of the mass terms in conventional units
introduces a numerical error, as only the inertial mass term is counterbalanced by a
conventional mass magnitude on the other side of the equation. To compensate for this
error a corresponding inverse factor must be introduced into the gravitational constant.
There is no error if the gravitational mass is expressed in natural units, as the value 1 does
not require any counterbalancing term. The relation between the natural and conventional
units therefore determines the magnitude of the necessary correction factor.
One gram is 6.02486 x 10²³ units of inertial mass (t³/s³). The reciprocal of this number is
1.65979 x 10⁻²⁴. But only one sixth of the total number of mass units is effective in the
gravitational interaction because this "as if" interaction takes place in only one
dimension, and in only one of the two directions in this dimension. The total number of
s³/t³ units corresponding to an effective mass of one gram is therefore 9.95874 x 10⁻²⁴.
Expressing this mass as one unit overstates the numerical value, and a correction of this
magnitude must therefore be included as a component of the gravitational constant.
A small additional correction is required because of the effect of the secondary mass.
Gravitation and inertia are inversely related relative to the primary mass; that is, the
primary mass is p/(p + s) units of gravitational mass and also p/(p + s) units of inertial
mass, where p and s are the primary and secondary masses respectively. The product of a
unit of gravitational mass and a unit of inertial mass is therefore 1/(1 + s)² units of
primary mass. Where the result is expressed in terms of inertial mass, another 1 + s factor
is introduced. The total effect of the secondary mass is then the introduction of a factor of
1.019299. Applying this factor to the value 9.95874 x 10⁻²⁴, we obtain 1.015093 x 10⁻²³.
Replacing the 1/s² distance term by a t/s² force term has the effect of introducing a time
dimension, which must be expressed in natural units to avoid creating a numerical
unbalance. The numerical value of the natural unit of time, 1.520655 x 10⁻¹⁶, offsets in
part the errors in the mass term. The net correction to be made is 1.015093 x 10⁻²³ divided
by the natural unit of time, and amounts to 6.67537 x 10⁻⁸. This is the gravitational
constant in the cgs system of units.
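The numerical chain just described is short, and can be reproduced directly. The following Python sketch, an illustrative addition, uses only the values quoted above (the one-sixth factor, the secondary mass factor 1.019299, and the natural unit of time) and arrives at essentially the same figure; the small difference in the last digits reflects rounding of the intermediate values quoted in the text.

    UNITS_PER_GRAM = 6.02486e23     # t³/s³ units of inertial mass in one gram
    SECONDARY      = 1.019299       # correction factor for the secondary mass
    UNIT_TIME      = 1.520655e-16   # natural unit of time, seconds

    grams_per_unit = 1.0 / UNITS_PER_GRAM     # 1.65979e-24
    effective      = 6.0 * grams_per_unit     # one dimension, one direction: 9.95874e-24
    corrected      = effective * SECONDARY    # 1.015093e-23
    G              = corrected / UNIT_TIME    # about 6.6754e-8 dyn cm² g⁻²
    print(f"{G:.5e}")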
Looking now at the question of conversion to a different system of units, the issue that
initiated the restudy of the situation, we find that a change from cgs to mks units in the
conventional form of the equation (13-1) results in a change of 10⁻⁶ in the mass term, 10⁻⁴ in
the distance term, and 10⁻⁵ in the force term. A change of 10⁻³ in the gravitational constant
is then required for a balance. In the theoretical equation (13-3) the net effect of a change
in the system of units is confined to the relation of the natural and conventional units of
mass. As can be seen from the explanation that has been given, the gravitational constant
is proportional to the ratio of these units. Changing the conventional unit from grams to
kilograms alters this ratio by 10⁻³. The gravitational constant is then changed by the same
amount. This agrees with the result obtained from equation 13-1.
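As an illustrative cross-check on this conversion argument (an addition, not part of the original text), the three unit factors can be combined explicitly:

    # Changing units in G = F r² / (m m'):
    force_factor    = 1e-5    # dyne -> newton
    distance_factor = 1e-2    # cm -> m, so r² changes by 10⁻⁴
    mass_factor     = 1e-3    # g -> kg, so m m' changes by 10⁻⁶

    print(force_factor * distance_factor**2 / mass_factor**2)   # 10⁻³, as stated above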
Those who are familiar with the first edition will have noticed that the values of the
natural unit of inertial mass and related quantities, as given earlier in this chapter, are
larger than the values given in the original publication. At the time of the original
investigation it seemed clear that a factor of 1/3 entered into the mass situation in some
way, and there appeared to be sufficient justification for applying this factor to the size of
the basic unit. As brought out in the preceding paragraphs, we now find that the 1/3 factor
is a result of the one-dimensional nature of the "as if" gravitational interaction. This
factor has therefore been eliminated from the mass units. As a result, the natural unit of
inertial mass, as defined in this edition, is three times the value given in the first edition
(with a small adjustment to reflect the results of the continuing studies of the details of
the phenomena involved). The use of these larger units has no effect on the physical
relations involving inertial mass, as the expressions of these relations are balanced
equations in which the mass terms are in equilibrium with terms representing quantities
derived from mass.
CHAPTER 14
Cosmic Elements
As pointed out in Chapter 6, the inversion of space and time in physical phenomena that
is possible by reason of the reciprocal relation between the two entities may apply to only
one of the constituent motions of a complex physical entity or phenomenon, or it may
apply to the entire structure. We have already examined some of the effects of inversion
of single motion components, such as translational motion in time, negative displacement
in the electric dimension of the atomic rotation, etc. Now we are ready to take a look at
the consequences of complete inversions.
It has already been noted that the rotational combinations which constitute the atoms and
sub-atomic particles of the material system are photons vibrating in time and rotating in
space, and that they are paralleled by a similar system of combinations in which the
photons are vibrating in space and rotating in time. The point to be emphasized at this
juncture is that the inverse system, the cosmic system of atoms and sub-atomic particles,
is identical with the material system in every respect, except for the space-time inversion.
Corresponding to carbon, 2-1-4, there is cosmic carbon, (2)-(1)-(4). Corresponding to the
neutrino, M ½-½-(1), there is a cosmic neutrino, C (½)-(½)-1, and so on.
Furthermore, this identity applies with equal force to all of the entities and phenomena of
the physical universe. Since everything that exists in the material sector of the universe is
a manifestation of motion, every item is exactly duplicated in the cosmic sector with
space and time interchanged. The detailed description of the material sector of the
universe that we are deriving item by item through development of the consequences of
the basic postulates of the Reciprocal System of theory is therefore equally applicable to
the cosmic sector. Thus, even though the cosmic sector is almost entirely unobservable,
we have just as exact and just as detailed knowledge of that sector (aside from
information about specific individuals of the various classes of objects) as we do of the
material sector.
It should be noted, however, that our knowledge of the material sector is knowledge of
how the phenomena of that sector appear to observation from a point within that sector;
that is, a location in a gravitationally bound system. What we know about the cosmic
sector through application of the reciprocal relation is knowledge of the same kind,
information as to how the phenomena of the cosmic sector appear to observation from a
location within that sector; a location in a system that is gravitationally bound in time.
Such knowledge has no direct significance from our standpoint, as we cannot make
observations from such a base, but it does provide a basis from which we can determine
how the phenomena of the cosmic sector, and the phenomena originating in that sector,
theoretically should appear to our observation.
One of the most perplexing questions of present-day physics is: Where is the antimatter?
Considerations of symmetry applied to the current theories of the structure of matter
indicate that there should be "anti" forms of the elements of which ordinary matter is
constituted, and that the "antimatter" composed of those "antielements" ought to be
equally as abundant in the universe as a whole as ordinary matter. "Antistars" and
"antigalaxies" should theoretically be as plentiful as ordinary stars and ordinary galaxies.
But there is no hard evidence of the existence of any such objects. It has been suggested,
to be sure, that some of the observed galaxies may be composed of antimatter. Alfven, for
example, says that there is a "distinct possibility that antiworlds may actually be
neighbors of ours, astronomically speaking. It cannot be excluded that the Andromeda
nebula, the closest galaxy to ours, or even stars within our own galaxy, are composed of
antimatter."60 But this is pure speculation, in the absence of any demonstrated means of
distinguishing the radiation produced by a galaxy of the hypothetical antimatter from that
produced by a galaxy of ordinary matter. So the question remains, Where is the
antimatter?
The Reciprocal System now provides the answer. This new structure of theory agrees that
antimatter (actually reciprocal matter: cosmic matter, as we are calling it) exists, and that it
is equally as abundant in the physical universe as ordinary matter. But it tells us that the
galaxies of cosmic matter are not localized in space; they are localized in three-
dimensional time. The progression of time to which we are subject carries us through this
three-dimensional time in a manner analogous to a linear motion through three-
dimensional space. Only a very small fraction of the total number of objects occupying
positions in the spatial reference system would be encountered in the course of a one-
dimensional spatial motion of this kind, and the same is true of the number of cosmic
objects that are encountered in our progression through time, as compared with the total
number of such objects occupying positions in a three-dimensional temporal reference
system.
Furthermore, gravitation in the cosmic sector acts in time, rather than in space, and the
atoms of which a cosmic aggregate is composed are contiguous in time, but widely
dispersed in space. Thus, even the relatively small number of cosmic aggregates that we
do encounter in our movement through time are not encountered as spatial aggregates;
they are encountered as individual atoms widely dispersed in space. We cannot recognize
a cosmic star or galaxy because we observe it only one atom at a time. Radiation from the
cosmic aggregate is similarly dispersed. Such radiation is continually reaching us, but as
we observe it, this radiation originates from individual, widely scattered, atoms, rather
than from localized aggregates, and it is therefore isotropic from our viewpoint. This
radiation can no doubt be equated with the "blackbody radiation" currently attributed to
the remnants of the "Big Bang."
All of the somewhat sensational suggestions as to the existence of observable stars and
galaxies of antimatter, and the possible consequences of interaction between these
aggregates and bodies composed of ordinary matter are thus without foundation. The
antimatter-fueled generators, which supply the energy for space travel in science fiction,
will have to remain on the science fiction shelves.
The difference between a cosmic star and a white dwarf star should be noted particularly.
Both are on the time side of the dividing line so far as the translational speed is
concerned; that is, both are composed of matter that is moving faster than the speed of
light. But the white dwarf is otherwise no different from the ordinary star of the material
sector. The space-time relationship is inverted only in the translational motion of its
components. In the cosmic star, on the contrary, all of the space-time relations are the
inverse of those of the ordinary material star; not only the translational motion, but also
the vibrational and rotational motions of its constituent atoms, and, what is especially
significant in the present connection, the effect of gravitation. Consequently, the white
dwarf is an aggregate in space, and we see it as such, whereas the cosmic star is an
aggregate in time, and we cannot recognize it as an aggregate.
Even those contacts which do take place between matter and the individual particles of
cosmic matter (antimatter) that enter the local environment do not have the kind of results
that are anticipated on the basis of current theory. In present-day thought the essential
difference between matter and antimatter is conceived as a charge reversal. An atom is
thought to consist of a positively charged nucleus surrounded by negatively charged
electrons. It is then assumed that the antiatom has the reverse structure: a negatively
charged nucleus surrounded by positively charged electrons (positrons). The further
assumption then follows that an effective contact between any particle and its antiparticle
would result in cancellation of all charges and reduction of both particles to radiant
energy.
This is a typical example of the results of the compartmental nature of present-day
physical theory, which permits an assumption to be used in one field of application, and a
direct contradiction of that assumption to be applied in another field, both under the
banner of "modern physics." Where the accepted theory requires that opposite charges
neutralize each other on close approach, it is assumed that they do so. Where this does
not fit the theory, as in the electrical explanation of the structure of matter, it is cheerfully
assumed that the charges accommodate their behavior to the requirements of the theory,
and take up stable relative positions instead of destroying each other. In the present
instance, both of these contradictory assumptions are employed at the same time. The
stable charges that somehow have no effect on each other are "annihilated" by other
charges, presumably identical in nature. Our findings are that wherever electric charges
actually do exist, opposite charges destroy each other on contact.
It does not follow, however, that charge neutralization is equivalent to annihilation. In
actual practice, only one of the reactions between particles and what are presumed to be
antiparticles follows the theoretical scenario of annihilation. The electron and positron
do, in fact, annihilate each other on contact, with the production of oppositely directed
photons. The antiparticle of the proton, in the accepted sense of the term–a particle
equivalent to the proton in all observable respects except that it is negatively charged–has
been detected, but contact of this antiproton with a proton does not result in annihilation
of the particles into radiant energy. "Here the situation is not as straightforward as in the
annihilation of an electron-positron pair,"61 report Boorse and Motz. And indeed it is not.
The interaction of these particles produces an assortment of transient and stable particles
not essentially different from those which appear in other high-energy interactions. As
these authors say, "different kinds of mesons are released" in the process. In the light of
our new findings it is evident that these are not annihilation reactions; they are cosmic
atom building reactions. We will examine the nature and characteristics of such reactions
in Chapter 16.
Detection of the antineutron has also been reported, but the evidence for this is indirect,
and it is rather difficult to reconcile the various ideas as to just what an antineutron would
be with the concept of charge reversal as the essential difference between particle and
antiparticle. On the basis of the charge reversal hypothesis, the neutral particles should
have no "anti" forms. Indeed, those who contend that "every particle has its antiparticle"
justify this statement by asserting that each neutral particle is its own antiparticle. This
would rule out the existence of a distinct antineutron, in the currently accepted sense of
the term. In any event, this problem with respect to the neutral particles is another item
that, like the lack of annihilation in the "annihilation reactions," emphasizes the
inadequacy of the conventional theory of atomic structure in application to the
"antimatter" phenomena.
In a universe of motion the atom is not an electrical structure. As has been brought out in
detail in the earlier pages, it is a combination of rotational and vibrational motions. In the
structures of the material type the speed of the rotational motions is less than unity (the
speed of light) while the speed of the vibrational motion is greater than unity. In the
structures of the cosmic type these relations are reversed. Here the speed of the
vibrational motion is less than unity and the speed of the rotational motion is greater than
unity. The true "antiparticle" of a material particle or atom is a combination of motions in
which the positive rotational displacements and negative vibrational displacements of the
material structure are replaced by negative rotational displacements and positive
vibrational displacements of equal magnitude.
In one of the reactions currently attributed to mutual annihilation of antiparticles, the
neutralization of displacements is actually accomplished, and in this case, the
combination of electrons and positrons, the particles are actually annihilated; that is, they
are converted to radiant energy and their existence as particles of the rotational class is
terminated. But there are, in reality, two different processes involved in this reaction.
First, the oppositely directed charges cancel each other, leaving both particles in the
uncharged condition. Subsequently, their rotations, M 0-0-1 and M 0-0-(1) combine to 0-
0-0, which is no effective rotation at all. In the vernacular, we might describe this second
process as straightening out the rotational motion. There is a short interval between the
two processes, and the effects attributed to "positronium," a hypothetical short-lived
combination of an electron and a positron, probably originate during this interval.
The extent to which annihilation can actually take place in contacts between antiparticles
other than the electron and positron is still an open question. If the observed antiproton is
actually the true antiparticle of the proton–that is, a cosmic proton–the results of the
observed contacts of these particles indicate rather definitely that annihilation is confined
to the one-dimensional particles. If the observed antiproton is merely a material proton
with a negative charge, a possibility that cannot be ruled out at the present stage of the
investigation, the observed results of the interactions are not relevant to the question, but
the situation is still unfavorable for annihilation, as the obstacles in the way of securing
simultaneous contact between the corresponding motions obviously increase with the
complexity of the rotational combination, and it is very doubtful if the necessary
coincident contacts can be obtained in different dimensions. It therefore appears that the
intriguing possibility of energy production by contact between matter and antimatter is
not only ruled out as a large scale process by the impossibility of concentrating antimatter
in space, as previously indicated, but is also unlikely even as a single atom process.
Inasmuch as our present objective is to examine those phenomena of the cosmic sector of
the universe that are accessible to our observation, the observed antiparticles, which are
products of high-energy processes in the material sector, are pertinent only to the extent
that they throw some light on the kind of behavior that can be expected from the cosmic
objects that do enter our field of observation. As indicated earlier, some of these
incoming objects make themselves known as a result of chance encounters during our
progress through three-dimensional time. Additionally, there are processes, to be
described later, which result in the ejection of substantial quantities of matter from each
sector into the other. The portion of the material sector within our observational range is
therefore subject to a continual inflow of cosmic matter. The incoming particles of this
matter can be identified as the cosmic rays.
As they appear to observation, the cosmic rays are particles entering the local frame of
reference from all directions and at extremely high speeds, together with a variety of
secondary particles produced in events initiated by the primary particles. The secondaries
include some common sub-atomic particles of the material system, such as electrons and
neutrinos, and also a number of transient particles of extremely short lifetime, from 10⁻⁶
seconds downward, that were unknown prior to the discovery of the cosmic rays, but
have since been produced by high energy processes in the particle accelerators.
In current thought, the primaries are regarded as ordinary material atoms. The evidence in
favor of this conclusion may be summarized as follows:
1. Sub-atomic particles are excluded, as they are all incapable, for one reason or
another, of producing the observed effects. This means that, unless they belong to
an otherwise unknown class of particle, the primary cosmic rays must be atoms.
2. The masses of the atoms that constitute the primaries cannot be determined at the
present stage of instrumentation and techniques, but it is possible to determine the
charges on the individual particles, and on the assumption that they are fully
ionized, this indicates the atomic numbers. The distribution of the elements in the
incoming cosmic rays, on this basis, approximates the estimated distribution in
the observed universe as a whole.
In the absence of any known alternative, this amount of evidence has been sufficient to
secure general acceptance of the conclusion that the primaries are atoms of ordinary
material elements. When the issue as to its validity is raised, however, as it must be when
an alternative appears, it is clear that there are many counter indications in the empirical
data. The most serious items are the following:
1. The speeds and energies of the primaries are too high to be compatible with
production by ordinary physical processes. No known process, or even a plausible
speculative process, based on conventional physics, is capable of producing
energies that extend up to the vicinity of 10²⁰ eV. As expressed in the
Encyclopedia Britannica, "how to explain the acquisition of such energies is a
disturbing physical and cosmological problem."
2. With the exception of some of the relatively low energy rays that are thought to
originate in the sun, most of the primaries have energies in the range which
indicates speeds in the neighborhood of the speed of light. Inasmuch as some
decrease in speed has undoubtedly taken place before the observations, it is quite
probable on the basis of the observational evidence (that is, disregarding any
purely theoretical limitation) that the rays originally entering the local
environment were traveling at the full speed of light. This is another indication of
an extraordinary origin.
3. While the distribution of elements deduced from the cosmic ray charges
approximates the estimated distribution in the observed universe as a whole, there
are some very significant differences. For example, the proportion of iron atoms
in the cosmic rays is 50 times that in average matter. Lithium has been reported to
be as much as 1000 times as abundant (although some of the lithium may be a
decay product). The cosmic rays therefore cannot be merely ordinary matter
drawn from the common pool and accelerated to high speeds by some unknown
process. They must have originated from some unusual kind of source. These
anomalies in the "charge spectrum" of the cosmic rays are given little attention in
current physical thought, probably because they have no known explanation, but
the significance that such deviations from the normal abundance would have, if
confirmed, was clearly recognized at the time when the first indications of these
deviations were observed. For instance, Hooper and Scharff (1958) made this
comment: "An excess of heavy nuclei would suggest the necessity of
reconsidering our fundamental ideas on the origin of the primary radiation."62
4. All of the major products of the primary rays have extremely short lifetimes. If
they do not undergo collisions before this time has elapsed, they decay in flight to
particles of lower mass and equal or longer lifetime. There is much available
evidence to indicate that this is also true of the primaries. For example, in some of
the observed events a transient particle leaves the scene of the event in a
continuation of the line of travel of the primary, and carries the bulk of the
original energy. The straightforward interpretation of such events is that they
represent processes in which the primary decays to the transient particle and
continues on its way. The existence of a substantial number of high-energy pions
in the incoming stream of particles is another item of evidence pointing in the
same direction, as similar, but earlier, decays of primaries will produce pions with
very high energies. It has been estimated that as much as 15 percent of the
incoming high-energy particles are pions. The conclusion that can logically be
drawn from the observations is that the primaries are of the same general nature as
the known transient particles, and that the entire cosmic ray phenomenon is a
single process taking place in a succession of decay events–a process in which an
atom with some strange and unusual properties is converted first into other
similar, but less massive, particles, and then finally into products that are
compatible with the local environment.
The considerations summarized in the foregoing paragraphs indicate that the current
explanation of the nature of the primary cosmic rays is not correct. They point to the
conclusion that these primaries are not atoms of material elements, as now believed, but
atoms of a special kind which have characteristics similar to those of the transient
particles, and are produced under some unusual conditions that lead to entry into the local
frame of reference at the full speed of light. Since we now find from the theoretical
development that there is a continuing inflow of cosmic atoms, which are atoms of a
special kind that, according to the theory, enter our environment at the speed of light, and
are subject to rapid decay in the manner of the observed transient particles, the identity of
the theoretical and observed phenomena is almost self-evident.
An outstanding characteristic of the results obtained from development of the
consequences of the postulates of the Reciprocal System of theory–one that we have had
occasion to mention several times in the preceding pages–is the way in which they
resolve long-standing and seemingly extremely difficult questions in a surprisingly
simple manner. Nowhere is this more evident than in the case of the cosmic rays, where
the finding that these incoming particles are atoms from the high-energy sector of the
universe clears up the many previously intractable issues in this area with remarkable
ease.
The basic questions: What are the cosmic rays?, and Where do they come from?, are
answered automatically by the theoretical discovery of a sector of the universe in which
objects with the observed properties of the cosmic rays are indigenous. The particular
properties that characterize the constituents of the cosmic rays, and distinguish them from
the constituents of aggregates of ordinary matter, are naturally the ones that are the most
difficult to explain on the basis of current theories which try to fit them into the material
system of phenomena, but these explanations are practically obvious once the existence
of the cosmic (high energy) sector is recognized.
The energy questions are the central problems. As stated by W. F. G. Swann, "no piece of
matter can, under ordinary circumstances, contain, in any form, enough energy to provide
cosmic ray energies for its particles."63 But this is only one phase of the energy problem.
The total energy involved is also far too large.
If cosmic rays move in straight lines, as does starlight, and have the same energy density
as starlight, then the power supplies to each will have to be the same. There seems no
conceivable way to find this much energy for cosmic radiation. (Leverett Davis, Jr.)64
Here again we meet the "There is no other way" contention that is being used to justify so
many of the otherwise untenable theories and assertions of present-day science, and again
the development of the Reciprocal System demonstrates that there is a "conceivable
way." But because the cosmic ray physicists have been confined within the limited
horizons of conventional basic ideas, they have not been able to account for the observed
energies on any straightforward basis. They have therefore been forced to invent exotic
hypothetical mechanisms for acceleration of the cosmic rays from the relatively low
energies that are available in the material sector to the high levels that are actually
observed, and equally far-fetched "storage" processes to avoid the difficulty cited by
Davis.
The existence of another half of the universe, in which the prevailing speeds are greater
than the speed of light, and the energies of the mass units are correspondingly great,
disposes of both aspects of the energy issue. There are observable explosion processes in
the material sector (which will be examined in detail in Volume II) that result in the
acceleration of large quantities of matter to speeds in excess of the speed of light. The
most energetic portions of these high-speed explosion products are ejected into the
cosmic sector, the sector of motion in time. From the general reciprocal relation between
space and time we can deduce that these same processes are operative in the cosmic
sector, and that they result in the ejection of large quantities of cosmic matter into the
material sector. This is the matter that we observe in the form of the cosmic rays.
The characteristics of these interchange processes, as they will be developed in Volume
II, explain why the distribution of the elements in the cosmic rays differs from the
estimated average distribution in the observed physical universe. It will be shown that the
proportion of heavier elements in matter increases with the age of the matter, and it will
be further shown that the matter ejected from one sector of the universe into the other
consists principally of the oldest (or most advanced) matter in the originating sector. Thus
the cosmic rays are not representative of cosmic matter in general; they are representative
of the cosmic matter that corresponds to the oldest matter in the material sector. The
isotropic distribution of the incoming rays is likewise a necessary result of entry from the
region of motion in time. Both the spatial location of entry, and the direction of motion of
the particle after entry, are determined by chance, as the contact of the space and time
motions is purely scalar.
The identification of the cosmic rays as atoms of the cosmic elements was clear from the
beginning of the development of the Reciprocal System. As stated earlier, the available
evidence indicates that these so-called "rays" must be atoms. On the other hand, their
observed properties are quite different from those of the atoms of ordinary matter. The
natural conclusion from these facts would be that the atoms of the cosmic rays are atoms
of some different kind. Conventional science cannot accept this answer because it has no
place for the kind of an atom that is indicated. The physicists have therefore been forced
to conclude that the cosmic rays are ordinary atoms that, for some unknown reason, have
unusual properties. In contrast, the basic postulates of the Reciprocal System require the
existence of a type of atom, the inverse of the material atom, that has just the kind of
characteristics, when observed in the material sector, that are found in the cosmic rays.
It should be noted in this connection that the concept of antimatter, the conventional
alternative to the reciprocal matter required by the postulates of the Reciprocal System,
cannot be applied to the cosmic rays, because the interaction of matter and antimatter is
theoretically supposed to result in annihilation of both substances, rather than the particle
production and other phenomena that are actually observed in the cosmic ray interactions.
Although only a limited amount of time could be allotted to the cosmic rays in the early
stages of the development of the Reciprocal System, because of the large number of
physical areas that had to be given some study in order to confirm the status of the theory
as one of general application, the first edition did include an account of the nature and
origin of the primary rays, an explanation of the kind of modifications that these particles
must undergo in the material environment, and a general description of this modification,
or decay, process. In the meantime there has been substantial progress, both
experimentally and theoretically, and it is now possible to expand the previous
presentation very materially.
The extension of theory in the cosmic ray area that has taken place in the twenty years
since the publication of the first edition provides a good illustration of what is involved in
the development of the theoretical system from the fundamental postulates. The basic
facts–the identity of the cosmic rays, their place of origin, the reason for their enormous
energies, etc.–were almost self-evident once the reciprocal relation between space and
time was recognized. But it cannot be expected that such an understanding of the basic
facts will immediately clear up the entire multitude of questions that arise in the course of
developing the details of the theoretical structure. The answers to these questions are
available. They can be derived from the fundamentals of the system of theory. But they
do not emerge automatically.
Where a theory is developed entirely by deduction from a single set of premises, as is
true of the Reciprocal System, there should not be many cases in which wrong answers
are reached, if the theoretical foundations are solid, and due care is exercised in the
logical development. Only a very few of the conclusions stated in the first edition of this
work have been invalidated by the twenty years of additional study that have followed.
But it is altogether unrealistic to expect that the first exploration of a physical field by
means of a totally new method of approach will accurately identify all of the significant
features of the phenomena in that field. It is a virtual certainty that many of the original
conclusions will be incomplete. Here, again, the Reciprocal System is no exception.
The explanation of cosmic ray decay that will be given in the next chapter is, in all
essential respects, the same explanation that was presented in the first edition. However,
the development of the theoretical structure in the intervening years has brought to light
many necessary consequences of the postulates of the Reciprocal System that have a
significant bearing on the decay process and contribute to a more complete understanding
of the decay events. These new items of information include such things as the existence
of a transition zone, the two-dimensional nature of the motion in that zone, the existence
of the massless form of the neutron, and the nature of the limitation on the lifetimes of the
cosmic particles. With the benefit of all of this additional theoretical knowledge, and a
substantial increase in the amount of available empirical information, it will be possible
to define the decay sequence more accurately. Nevertheless, the presentation in Chapter
15 is not a new explanation of the phenomenon; it is the same explanation in more
complete form.
CHAPTER 15
Cosmic Ray Decay
On the basis of the information developed in Chapter 14 we may describe the cosmic rays
in general terms as cosmic atoms and particles which enter the material environment at
the speed of light, at random spatial locations, and with random directions. Here, then,
are the contents of the cosmic sector of the universe as they appear, very fleetingly, to our
observation. We will now examine what happens to these objects after they arrive.
In the earliest observed stages the cosmic particles are known as the primary cosmic rays.
As many observers have pointed out, there is no assurance that these are the original
rays, as the decay process may have already begun before the primary rays are observed.
The theoretical development indicates that this is, indeed, true, as the primaries contain a
considerable percentage of particles that are clearly decay products rather than normal
constituents of the original rays. In the subsequent discussion we will follow the general
practice, and will refer to the observed incoming particles as the primary rays, but it
should be understood that this does not imply that the observed primaries are identical
with the particles that originally crossed the boundary into the material sector.
Since the cosmic rays enter the material sector from a region in which the prevailing
speeds are greater than unity, these particles make their entry at the speed of light. It is
the decrease from a speed greater than unity to a speed less than unity which constitutes
entry into the material sector, but the dividing line between the cosmic sector and the
material sector is unit speed in all three scalar dimensions. The speed of the primaries
therefore remains at or near unity in the observable dimension even after the speed, in
total, has decreased to some extent. This accounts for the previously noted fact that the
observed speeds of the incoming particles are mainly close to the speed of light.
Inasmuch as these speeds, and the corresponding kinetic energies, are greatly in excess of
the normal speeds and energies of the material sector, transfer of the excess kinetic
energy to the environment begins immediately on entry. Gravitational and
electromagnetic forces, to which the cosmic atom is subject as soon as it crosses the
boundary, accomplish part of the energy reduction. Contact with material particles is also
an important factor, and a further loss occurs in connection with the reduction of the
internal energy that must also take place.
The cosmic atoms of maximum energy content (kinetic equivalent) are those of the most
abundant cosmic elements: c-hydrogen and c-helium. The principal constituents of the
cosmic rays, the cosmic elements of low atomic number, are therefore not only entering
the material frame of reference at speeds which are far too high to be compatible with the
material environment, but are also entering in the form of structures whose internal
energy (rotational displacement) content is also much too great. These elements must lose
rotational energy, as well as kinetic energy, before they can assume forms that will merge
with the material phenomena. The required loss of rotational energy from the atomic
structures is accomplished by ejection of particles of an appropriate nature. A
readjustment of some kind in the atomic motions is required at very short intervals, and
the probability principles insure that the direction of the rearrangements is toward greater
stability. In the material environment this means a reduction of the excess rotational
energy.
At the present stage of the theoretical development it appears that the limitation of the
lifetimes of the cosmic elements to extremely short intervals is due to the fact that the
rotation in the cosmic structure takes place at a speed greater than unity, and this structure
therefore moves inward in time, rather than in space. Consequently, it can exist in a
stationary spatial frame of reference for only one unit of time. If it is moving
translationally at a speed above unity in all scalar dimensions, as is true of most of the
cosmic atoms encountered by chance in our passage through time, it moves away from
the line of the time progression of the material sector, and disappears. But this option is
not available to cosmic atoms that have dropped below unit speed, and instead, they
separate into two or more particles, each of which then has its own appropriate lifetime.
The natural unit of time, in application to macroscopic physical phenomena, was
evaluated in Chapter 13 as 1.521 x 10⁻¹⁶ seconds. Some of the observed particles have
lifetimes in this neighborhood, but others range all the way from about 10⁻¹⁶ seconds to
about 10⁻²⁴ seconds. As will be brought out later, the magnitude of the deviation from
unit time has been correlated with the dimensions of the spatial motion of the particles,
but the exact nature of the modifying factor has not yet been identified, and for the
present we will treat it as a modifier of the unit of time, similar to the inter-regional ratio
that modifies the unit of space in application to the time region.
The limiting lifetime to which the foregoing comments apply is the limit at zero speed. At
higher speeds, the lifetime, as measured by a conventional clock, increases in accordance
with the relations expressed in the Lorentz equations, which, as noted earlier, are equally
as applicable in the Reciprocal System of theory as in conventional physics. The
explanation of this longer life that we deduce from theory is that the particle can remain
intact in the spatial reference system as long as it remains in the same unit of time. But an
object moving at the speed of light remains in the same unit of time (in the natural
system, which is controlling) permanently and such an object can exist indefinitely in any
system of reference. The decrease in life at the lower speeds follows the mathematical
pattern derived by Lorentz. From the foregoing it is evident that the primary cosmic rays,
moving at the speed of light, did not necessarily enter the material sector in our
immediate vicinity. The rays that we observe may have entered anywhere in interstellar,
or even in intergalactic, space.
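The lifetime-speed relation referred to above is the standard Lorentz time dilation formula. The brief Python sketch below is an illustrative addition; the rest lifetime is the limiting value from Chapter 13, and the speeds chosen are arbitrary examples.

    import math

    def observed_lifetime(rest_lifetime, v_over_c):
        """Lifetime registered by a stationary clock, per the Lorentz time dilation
        relation; it increases without limit as the speed approaches that of light."""
        return rest_lifetime / math.sqrt(1.0 - v_over_c ** 2)

    TAU_0 = 1.521e-16   # limiting lifetime at zero speed (seconds), from Chapter 13
    for v in (0.0, 0.9, 0.99, 0.9999):
        print(f"v/c = {v:<7} lifetime = {observed_lifetime(TAU_0, v):.3e} s")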
In general, as pointed out in the first edition, the successive steps of the decay process
which the cosmic atoms undergo after their entry consist of ejections of rotational
displacement in the form of massless particles, which continue until the residual cosmic
element reaches a status in which it can be transformed into a material structure. Of
course, nothing physical can be transformed into something different. Only in the world
of magic is that possible. Addition or removal of some constituent can alter a physical
entity, but it can be transformed only into some other form of the same thing, as the term
itself implies. In the case of the elements the transformation is made possible by the
specific relation between the space and time zero points.
As explained in Chapter 12, the difference between a positive speed displacement x and
the corresponding negative speed displacement 8-x (or 4-x in the case of two-
dimensional motion) is simply a matter of the orientation of the motion with respect to
these space and time zero points. The rotational motions of material atoms and particles
are all oriented on the basis of the spatial (positive) zero, because, as noted earlier, it is
this orientation that enables the rotational combination to remain in a fixed spatial
reference system. Similarly, the cosmic atoms and particles are oriented on the basis of
the temporal (negative) zero, and are therefore capable of remaining permanently within
a fixed temporal reference system, whereas they have only a transient existence in a
spatial system. The only difference between a motion with a positive speed displacement
x and one with a negative speed displacement 8-x (or 4-x) is in this orientation of the
scalar direction. Either can therefore be converted to the other by a directional inversion.
For example, if the negative magnetic displacements of the cosmic helium atoms, (2)-(1)-
0, are replaced by the 4-x positive values, this inverts the scalar directions of the
rotations without altering the nature or magnitude of either of the rotational components.
The product, an atom of the material element argon, 2-3-0 (or 3-2-0 in our usual notation)
is therefore the same physical object as the cosmic helium atom. It is merely moving in a
different scalar direction. Conversion of cosmic helium into argon is nothing more than a
change to another form of the same thing, and thus it is a physical possibility that can be
accomplished under the right conditions and by the appropriate processes.
Every atom of either the cosmic or the material type in which the speed displacements do
not exceed 3 in either of the magnetic dimensions or 7 in the electric dimension has an
equivalent oppositely directed structure. This is illustrated in the following table of
equivalents of cosmic and material elements of the inert gas series, the elements with no
effective displacement in the electric dimension.
Cosmic System Material System
c-helium (2)-(1)-0 2-3-0 argon
c-neon (2)-(2)-0 2-2-0 neon
c-argon (3)-(2)-0 1-2-0 helium
c-krypton (3)-(3)-0 1-1-0 2 neutrons
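The rule underlying this table can be stated compactly: in an inert-gas structure the electric displacement is zero, and each magnetic displacement x of the cosmic atom is replaced by its 4-x equivalent. A short illustrative Python sketch (an addition for the reader's convenience, following the displacement notation of the text):

    def material_equivalent(a, b):
        """Magnetic displacements (a, b) of a cosmic inert-gas element, converted to
        the equivalent material displacements by the 4 - x inversion."""
        return (4 - a, 4 - b)

    # Compare with the table above
    for name, dims in [("c-helium", (2, 1)), ("c-neon", (2, 2)),
                       ("c-argon", (3, 2)), ("c-krypton", (3, 3))]:
        a, b = material_equivalent(*dims)
        print(f"{name:10s} ({dims[0]})-({dims[1]})-0  ->  {a}-{b}-0")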
It does not follow that a direct conversion of an atom of such an element to the equivalent
inverse structure is always possible. On the contrary, it is seldom possible. For instance,
in order to convert the cosmic helium atom directly to argon the rotations in the two
magnetic dimensions would have to be inverted simultaneously, and at the same time the
approximately 40 mass units required by the argon atom would have to be obtained from
somewhere. The c-helium atom cannot meet these requirements, so at the end of the
appropriate unit of time when it must do something, it does what it can do; that is, it
ejects a massless particle. This carries away some positive rotational displacement, and
moves the residual cosmic atom up the series of elements toward a higher cosmic atomic
number, the equivalent of a lower material atomic number.
This process continues until the residual cosmic atom is c-krypton, each rotating system
of which is equivalent to a neutron. Here the transformation requirements can be met, as
the inversion of each rotation involves only a single effective unit, and no provision for
addition of mass is necessary, since the product of the conversion is a massless neutron.
The scalar directions of the c-krypton motions therefore invert, and two massless
neutrons take their places in the material system. The question as to what then happens to
these particles will be discussed in Chapter 17.
The general nature of the cosmic ray decay process, as described in the foregoing
paragraphs, was clear from the start of the investigation of the role of the cosmic rays in
the theoretical universe of the Reciprocal System. It was therefore evident that the
ejections during this decay process must consist of positive rotational displacement in
order that the cosmic atoms would be modified in the direction of greater stability in the
material environment and ultimately built up to the level where conversion is possible. In
the first edition these ejections were discussed in terms of neutrons and neutron
equivalents, although it was noted that, in the terrestrial environment at least, they must
be massless. Transfer of mass in these events is impossible, as the cosmic atoms have no
actual mass. The mass indicated by their behavior in the observed reactions is merely the
mass equivalent of the cosmic (inverse) mass that these atoms of the cosmic elements
actually do possess. What these atoms must eject is positive magnetic rotational
displacement, and this can only take place through the medium of massless particles. The
conclusion reached in the earlier study was that in these ejection events the carrier
particles must be pairs of neutrinos and positrons (jointly equivalent to neutrons
rotationally, but massless) rather than neutrons of the observed type. The more recent
finding that the neutron exists in a massless form now resolves this difficulty, as it is now
evident that the ejected particles are massless neutrons.
The progress that has been made in both the observational and the theoretical fields has
also enabled defining the decay path more accurately and in more detail than was
possible in the first edition. Inasmuch as all features of the cosmic sector of the universe
are identical with the corresponding features of the material sector, except that space and
time are interchanged, the matter accelerated to high speeds by cosmic explosions of
astronomical magnitude includes all of the components of cosmic matter: sub-atomic
particles and atoms of all of the elements. But in order to be accelerated all the way to
unity in three dimensions, a particle must offer a full unit of resistance in all three
dimensions. Consequently, the only particles that are able to accelerate up to the escape
speeds are the double rotating systems, the atoms. The unit particle in the interchange
between the cosmic and material sectors is the atom of unit atomic number, the mass two
isotope of hydrogen (deuterium). The mass one isotope of hydrogen does not qualify as a
full-sized unit, but it lacks only the equivalent of a cosmic massless neutron, and this can
be provided by ejection of a massless neutron of the material type. When subjected to a
powerful explosive acceleration the H1 atom therefore ejects such a particle and assumes
the H² status.
The sub-atomic particles are not capable of being accelerated to the escape speed. They
are all either inherently massless, or easily separated into massless components, and when
they reach their limiting speeds they take the massless forms and thereby terminate the
acceleration. The total absence of sub-atomic particles in the cosmic rays that results
from this inability to reach the escape speed is not currently recognized because the
singly charged particles are mistakenly identified as protons, and the cosmic atoms in the
decay sequence–mesons, in the conventional terminology–are accorded a somewhat
indefinite kind of a sub-atomic status. But the absence of electrons is a conspicuous and
puzzling feature of the cosmic ray phenomenon, and it imposes some severe constraints
on theories which try to account for the origin of the rays.
An effect so gross as to exclude completely high-energy electrons from the spectrum at
the earth should, it would seem, be accounted for unambiguously by any successful
theory for the origin of the cosmic radiation. (T. M. Donahue)65
The unambiguous explanation is now available. No sub-atomic particles are present in the
original cosmic rays because these particles are not capable of accelerating to the high
inverse speeds necessary for entry into the material sector.
The cosmic property of inverse mass is observed in the material sector as a mass of
inverse magnitude. Where a material atom has a mass of Z units on the atomic number
scale, the corresponding cosmic atom has an inverse mass of Z units, which is observed
in the material sector as if it were a mass of 1/Z units. The masses of the particles with
which we are now concerned are conventionally expressed in terms of million electron
volts (MeV). One atomic mass unit (amu) is equivalent to 931.152 MeV. The atomic
number equivalent is twice this amount, or 1862.30 MeV. The primary rotational mass of
an element of atomic number Z is then 1862.30 Z MeV, and that of a cosmic element of
atomic number Z is 1862.30/Z MeV. Where the atomic mass m is expressed in terms of
atomic weight, this becomes 3724.61/m MeV.
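These relations are simple enough to verify directly. The short sketch below (in Python, using only the constants quoted above) computes the primary rotational mass of a material element and the mass equivalent of the corresponding cosmic element; the function names are illustrative only.

```python
# Primary (rotational) mass relations quoted above, in MeV.
AMU_MEV = 931.152            # one atomic mass unit
UNIT_MEV = 2 * AMU_MEV       # one unit on the atomic number scale, ~1862.30 MeV

def material_primary_mass(Z):
    """Material element of atomic number Z: 1862.30 Z MeV."""
    return UNIT_MEV * Z

def cosmic_primary_mass(Z=None, m=None):
    """Cosmic element: 1862.30/Z MeV by atomic number, or 3724.61/m MeV by atomic weight."""
    return UNIT_MEV / Z if Z is not None else 2 * UNIT_MEV / m

print(material_primary_mass(1))      # ~1862.3
print(cosmic_primary_mass(m=20))     # c-Ne20: ~186.2
```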
As matters now stand, neither the theoretical calculations nor the observations of the
masses of the cosmic elements above hydrogen in the cosmic atomic series are
sufficiently accurate to justify taking the secondary mass into consideration. The
theoretical discussion of the masses of these elements will therefore be confined to the
primary mass only, disregarding the small modification due to the secondary mass effect.
For the same reasons, both the calculated and observed values in the comparisons that
follow will be stated in terms of the nearest whole number of MeV. An exception has
been made in the case of hydrogen, because the secondary mass of this element under
normal conditions is relatively large, and the probability that it will be altered by changes
in environmental conditions is relatively small. Since the mass of a material H² atom is
1.007405 on the atomic number scale, the mass of a cosmic H² atom is the reciprocal of
this figure, or 0.99265 units. This is equivalent to 1848.61 MeV.
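Stated as a calculation (a sketch only; the figures are those just quoted), the c-H² value follows directly from the reciprocal relation:

```python
# Mass equivalent of c-H2: reciprocal of the material H2 mass, times 1862.30 MeV.
UNIT_MEV = 1862.30
m_H2 = 1.007405                  # material H2 on the atomic number scale
print((1 / m_H2) * UNIT_MEV)     # ~1848.6 MeV
```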
At this point it will be necessary to recognize that the combinations of motions that
constitute the atoms of the elements, both material and cosmic, are capable of acquiring
additional motion components of a different kind, each unit of which alters the mass of
the atom by one atomic weight unit. It will be convenient to defer the detailed
consideration of this new type of motion, which we will call a gravitational charge, until
we are ready to discuss the entire class of motions to which it belongs, but for present
purposes we need to note that each material element of atomic number Z exists in a
number of different forms, or isotopes, each of which has atomic weight 2Z+G, where G
is the number of gravitational charges. The normal mass of the corresponding cosmic
isotopes is the reciprocal of 2Z+G, but when the cosmic atoms enter the material
environment they are able to add gravitational charges of the material (positive) type to
the cosmic combinations of motions (including the gravitational charges of the cosmic
(negative) type, if any). Each such material type charge adds one atomic weight unit, or
931.15 MeV, to the isotopic mass of the cosmic atom.
In the first edition it was recognized that the incoming cosmic rays would consist
primarily of c-hydrogen, but at that time there were no observational indications of any
cosmic ray particles in the hydrogen mass range, and the extension of the theoretical
development to the questions of scalar motion in two dimensions and the lifetimes of the
cosmic atoms had not yet been undertaken. The exact theoretical status of the incoming
c-hydrogen atoms was therefore still uncertain. Inasmuch as the “mesons” then known
were mainly cosmic elements of the inert gas series, it was concluded that the original c-
hydrogen atoms must be stripped of their one-dimensional rotation and reduced to the
two-dimensional (inert gas) condition almost immediately on crossing the speed
boundary. In the meantime, however, the investigators have been able to extend their
observations to earlier portions of the decay path, and they have recently discovered a
short-lived particle with a mass that is reported as 3695 MeV.
Identification of this 3695 “psi” particle as a “cosmic deuteron with two material isotopic
charges”66 by Ronald W. Satz was the crucial theoretical advance that opened the door to
a clarification of the status of cosmic hydrogen. This now enables us to close the gap, and
trace the progress of the cosmic atom from its entry into the material sector in the form of
cosmic hydrogen (c-H²) all the way to its final transformation into material particles.
For reasons which will be explained in Volume II, the cosmic atom has an effective
translational motion in two of the three scalar dimensions at the neutral point where it
enters the material half of the universe. The terrestrial environment, into which the
observable cosmic atoms enter, is favorable for the acquisition of gravitational charges of
the material type. Each of the two dimensions of motion therefore adds such a charge.
The two charges acquired by the c-H² atom add 1862.30 MeV to the 1848.61 MeV mass
equivalent of the cosmic mass, bringing the total mass of this, the first of the theoretical
cosmic ray particles, to 3710.91 MeV. The mass of the newly discovered psi particle is
reported as 3695 MeV. In view of the many uncertainties involved in the observations,
this can be regarded as consistent with the theoretical value.
As mentioned earlier, the particle lifetimes are correlated with the dimensions of the
spatial motions that the particles acquire, the translational motion and the gravitational
charges. While the theoretical situation has not yet been clarified, we find empirically
that the life of a particle with two dimensions of scalar motion in space and no
gravitational charge is about 10⁻¹⁶ seconds, approximately the natural unit of time. Each
dimension of motion modifies the unit of time applicable to the particle life by
approximately 10⁻⁸, while each gravitational charge modifies the unit by about 10⁻². On
this basis, the following approximate lifetimes are applicable:
Dimensions   Charges   Life (sec)
    3           0        10⁻²⁴
    2           2        10⁻²⁰
    2           0        10⁻¹⁶
    1           1        10⁻¹⁰
    1           0        10⁻⁸
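One way to express this empirical rule, assuming the factors of 10⁻⁸ per dimension and 10⁻² per gravitational charge are applied to the 10⁻¹⁶ second base, is sketched below; the function is an illustration of the stated rule, not a derivation.

```python
# Approximate particle lifetimes from the empirical rule stated above:
# a base of ~1e-16 s for two dimensions and no charge, modified by ~1e-8 per
# dimension (relative to two) and ~1e-2 per gravitational charge.
def approximate_life(dimensions, charges):
    return 1e-16 * (1e-8) ** (dimensions - 2) * (1e-2) ** charges

for dims, chg in [(3, 0), (2, 2), (2, 0), (1, 1), (1, 0)]:
    print(dims, chg, approximate_life(dims, chg))
# reproduces the tabulated values: 1e-24, 1e-20, 1e-16, 1e-10, 1e-8
```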
The reported lifetime of the 3695 psi particle is in the neighborhood of 10⁻²⁰ seconds,
which agrees with the theoretical determination of the dimensions of motion on which the
mass calculation is based.
The general decay pattern defined in the preceding pages indicates that c-H² should
undergo an ejection of positive rotational displacement, converting it to c-He³. From the
expression 3724.61/m, we obtain 1242 MeV as the rotational mass of c-He³, to which we
add the mass of two gravitational charges for a total of 3104 MeV. The observed 3695
particle decays to another psi particle with a reported mass of 3105 MeV, and a life of
about 10-20 seconds. This second particle can clearly be identified with the c-He³ atom.
Thus the observed masses, the lifetimes, and the decay pattern all confirm the basic
identification of the c-hydrogen particle by Satz.
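The two mass figures involved in this identification can be reproduced with a few lines of arithmetic (a sketch, using the constants quoted above):

```python
# Masses of the first two theoretical cosmic ray particles, in MeV.
GRAV_CHARGE = 931.15
M_cH2 = 1848.61                     # mass equivalent of c-H2 (from 1/1.007405)
M_cHe3 = 3724.61 / 3                # rotational mass of c-He3, ~1242

print(M_cH2 + 2 * GRAV_CHARGE)      # ~3710.9  (psi particle, observed ~3695)
print(M_cHe3 + 2 * GRAV_CHARGE)     # ~3104    (second psi particle, observed ~3105)
```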
Another decay of the same kind would produce c-He⁴, and it is probable that some
particles of this composition are occasionally formed. Indeed, any cosmic atom between
c-hydrogen and c-krypton may appear in the cosmic ray
products. But the probabilities favor certain specific cosmic elements, and these are the
products that constitute the normal decay sequence we are now examining. The speeds of
the cosmic rays and their decay products decrease rapidly in the material environment,
and by the time the decay of c-He³ is due the additional energy loss in the decay process
is usually sufficient to drop the cosmic residue into the speed range below unity. The
consequent elimination of the motion in the second scalar dimension results in a double
decay which adds two atomic weight units to the cosmic atom. The product is c-Li5.
Further increases in the inverse mass of the residual cosmic atom by successive additions
of single atomic weight units would be possible, but the probabilities favor larger steps as
the material equivalent of a cosmic unit increment continues decreasing. The one unit
increment in each of the two steps from c-He³ to c-Li5 is therefore followed by a series
of increments that are uniformly one atomic weight unit larger in each successive decay,
except for the step between c-N14 and c-Ne20, where the increase over the size of the
previous increment is two units.
On this basis, the two l-unit increments that produce c-Li5 are followed by a 2-unit
increment to c-Be7, a 3-unit increment to c-B10, a 4-unit increment to c-N14, and a 6-unit
increment to c-Ne20. These decay products are not capable of retaining both of the
gravitational charges of their precursors, but they keep one of the charges, and all of the
cosmic elements identified as members of this section of the decay sequence have masses
which include a 931.15 gravitational increment, as well as the basic mass equivalent of
the cosmic element, 1862.30/Z MeV. The indicated life of a cosmic atom with one
gravitational charge, after dropping into the range of one-dimensional motion, is about
10⁻¹⁰ seconds. These theoretical masses and lifetimes are in agreement with the observed
properties of the class of transient cosmic ray particles known as hyperons, as indicated
in the following tabulation:
Element   Particle   Calculated Mass (MeV)   Observed Mass (MeV)   Lifetime (sec)
c-Li5     omega            1676                   1673            1.30 × 10⁻¹⁰
c-B10     xi               1304                   1321            1.67 × 10⁻¹⁰
c-N14     sigma            1197                   1197            1.48 × 10⁻¹⁰
c-Ne20    lambda           1117                   1116            2.52 × 10⁻¹⁰
The masses given are those of the negatively charged particles. Positive electric charges
and other variable factors introduce a “fine structure” into the numerical values of the
properties of the particles that has not yet been studied in the context of the Reciprocal
System.
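The calculated column of the tabulation above follows a single relation, the rotational mass 1862.30/Z plus one gravitational charge of 931.15 MeV; a short sketch of that computation (the loop and names are illustrative only):

```python
# Singly charged hyperon masses: rotational mass 1862.30/Z plus one 931.15 MeV charge.
for name, Z in [("c-Li5", 2.5), ("c-B10", 5), ("c-N14", 7), ("c-Ne20", 10)]:
    print(name, round(1862.30 / Z + 931.15))
# c-Li5 1676, c-B10 1304, c-N14 1197, c-Ne20 1117
```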
The observed decay pattern is in agreement with the theory, so far as its general direction
is concerned; that is, all of the members of the series decay in such a manner that the
eventual result is c-neon. It is still uncertain, however, whether the decay always passes
through all of the stages identified with the normal sequence, or whether this sequence is
subject to modification, either by omission of one or more of the steps, or by a variation
in the size of the ejections of time displacement. The c-Be7 atom, mass 1463 MeV, for
instance, is not listed in the tabulation, as its identification with an observed particle of
mass 1470 MeV is rather uncertain. This does not preclude its definite identification as a
decay product eventually. It may be noted in this connection that the omega particle (c-
Li5) was found only as a result of an intensive search stimulated by a theoretical
prediction. However, the fact that the last three members of this hyperon series (which
were the first to be discovered and are still the best known) are separated by only one
decay step, suggests that there is little, if any, deviation from the normal sequence in
those cases where the full range of decay from c-He to c-Ne is involved.
When we examine the properties of gravitational charges at a later stage of the theoretical
development we will find that the stability of these charges is a function of the atomic
number. The mathematical expression of this relation which we will derive from theory
indicates that the stability limit for a double gravitational charge in the terrestrial
environment falls between the material equivalents of c-He³ and c-Li5. This accounts for
the previously mentioned fact that c-Li5 and the elements above it in the cosmic series are
incapable of retaining two gravitational charges. But the center of the zone of stability for
these elements is closer to the +1 isotope (one gravitational charge) than to the zero
isotope (the basic rotation), and for this reason they are all singly (gravitationally)
charged, as indicated in the preceding discussion. From c-Si27 upward in the cosmic
series, the center of the zone of stability is closer to the zero isotope, and these elements
carry no gravitational charges.
Without the gravitational charge, the mass of c-Si27, the decay product resulting from a 7-
unit addition to c-Ne20, is 137.95 MeV, and the low speed lifetime is about 10⁻⁸ seconds.
The corresponding observed particle is the pion, with measured mass 139.57 MeV, and
lifetime 2.602 × 10⁻⁸ seconds.
Pions are frequently reported as products of observed cosmic ray events initiated by
primaries. As we will see in the next chapter, such production is quite feasible where
there is a violent contact of some kind, with the release of a large amount of energy, but
direct production of pions in decay is not consistent with the decay pattern as derived
from theory. The apparent direct production is, however, understandable when the
relative lifetimes of the pion and the earlier decay products are taken into consideration.
There is no reason to believe that normal decay in flight will result in any change of
direction. Ejection of massless particles will take care of the conservation requirements
without the necessity of directional modification. Because the entire decay process up to
the production of the pion occupies only a very short time compared to the lifetime of the
pion itself, it is unlikely that the usual methods of observation will be able to distinguish
between a pion and a cosmic particle undergoing a complete decay to the pion status in
flight.
In the kind of a situation mentioned in Chapter 14, for instance, where a pion apparently
leaves the scene of an event in a continuation of the direction of motion of the primary,
and carries the bulk of the original energy, leading to the conclusion that the primary
decayed directly to the pion, there is nothing in the observations that is inconsistent with
the theoretical conclusion that during a short interval at the beginning of the motion
attributed to the pion, the cosmic particle was actually going through the preceding steps
in the decay sequence.
The next event in this decay sequence, the decay of the pion, involves an 8-unit
increment to c-Ar35. Again the zero isotope is the stable form. This leads to a mass of
106.42 MeV and a theoretical life equal to that of the pion. The observed particle is the
muon, with mass 105.66 MeV, formed by decay of the pion, as required by the theory.
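Both of these uncharged masses come directly from the 3724.61/m relation; as a quick check (a sketch only):

```python
# Gravitationally uncharged cosmic elements in the decay sequence: mass = 3724.61/m MeV.
for name, m in [("c-Si27 (pion)", 27), ("c-Ar35 (muon)", 35)]:
    print(name, round(3724.61 / m, 2))
# c-Si27 (pion) 137.95 (observed 139.57); c-Ar35 (muon) 106.42 (observed 105.66)
```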
Both the decay to c-Si27 (the pion) and the subsequent decay to c-Ar35 (the muon)
continue the same pattern of a uniform one unit increase in the cosmic mass increment in
each succeeding event that was followed in the earlier decay steps. But inasmuch as c-
argon is equivalent to helium, which, from the material standpoint, is only one step away
from the neutron that is the end product of the decay process, the following ejection of
positive displacement carries the cosmic atom to the final cosmic structure, c-krypton.
Each of the two rotating systems of the c-Kr atom is rotationally equivalent to a neutron,
and converts to that particle. Since c-Kr is massless (that is, its observed mass is merely
the mass equivalent of the inverse mass of the cosmic sector) the conversion products are
massless neutrons, or their equivalents, pairs of neutrinos and positrons. Some of the
aspects of this conversion process will be given further consideration in Chapter 17.
Unlike the decay events, which involve changes in the atomic structure, and therefore do
not take place until they must, the conversion of the c-krypton rotations to massless
neutrons is merely a change in scalar direction to conform with the new environment, and
it takes place as soon as it can do so. Consequently, the c-krypton atom, as such, has a
zero lifetime. As soon as the particle ejection from c-argon takes place, the conversion to
massless neutrons begins. In view of the non-appearance of c-krypton, the apparent
lifetime of c-argon, the muon, is the sum of its own proper lifetime and the conversion
time. The value reported from observation is 2.20 × 10⁻⁶ seconds. A theoretical
explanation of this value is not yet available, but it is probably significant that the
difference between this and the life of an uncharged particle moving in one dimension,
about 10⁻⁸ seconds, is approximately that associated with a gravitational charge.
The absence of the c-krypton atom from the decay process is not due to any abnormal
instability of this cosmic atom itself, but to the preference for the alternate scalar
direction that prevails in the material environment. In the reverse process, where the
directional preference favors the c-krypton atom over the neutron alternate, it plays a
prominent part, as we will see in Chapter 16.
In those cases where the incoming cosmic atom is not in the normal decay sequence it
ejects enough positive displacement in one or two decay events to reach one of the
positions in that sequence, after which it follows the normal path in the same manner as
the products of the decay of cosmic hydrogen. However, these heavier elements are
beyond the stability limit for two gravitational charges, in a low energy environment, and
consequently they do not form structures analogous to the psi particles. This has the
effect of increasing the probability that some of the decay products that normally carry
one gravitational charge will occasionally be found in the uncharged condition. The one
allowable charge would result in an asymmetrical structure during the time that the speed
of these particles is in the two-dimensional range, and if they are observed at this stage
they are likely to be uncharged (gravitationally). The uncharged lifetime for a particle
moving two-dimensionally is approximately one natural unit of time, or about 10⁻¹⁶
seconds. Such a life is the most definite indication that an observed particle is in this
early stage of the decay process.
For example, the eta particle, with observed mass 549 MeV and a life of 0.25 × 10⁻¹⁶
seconds, is probably a gravitationally uncharged c-Be7 atom, which theoretically has a
mass of 532 MeV. A more questionable identification equates the rho particle with c-Li5.
The theoretical mass in this case is 745 MeV, and the observed values range from 750 to
770, the more recent measurements being the higher. The rho lifetime has been reported
as about 10⁻²³ seconds, but this is too short to be a decay time. It is evidently a
fragmentation time, a concept which will be explained in connection with the discussion
of particle production in the accelerators. Both c-Li5 and c-Be7 are in the normal decay
sequence, a fact which lends some support to the foregoing identifications. The reported
observations of particles that are outside the normal decay sequence will be given some
further consideration in the next chapter.
If the incoming cosmic atom is above c-krypton in the cosmic atomic series, so that it
cannot enter the normal decay sequence in the manner of the elements of lower atomic
number, it must nevertheless separate into parts at the end of the appropriate unit of time,
and since it cannot eject massless neutrons as the lighter atoms do, it fragments into
smaller units, which then follow the normal decay path.

CHAPTER 16

Cosmic Atom Building
In essence, the cosmic ray decay is a process whereby high energy combinations of
motions that are unstable at speeds less than that of light are converted in a series of steps
to low energy structures that are stable at the lower speeds. A requirement that must be
met in order to make the process feasible is the existence of a low energy environment
that can serve as a sink for the energy that must be withdrawn from the cosmic structures.
Where a high energy environment is created, either fortuitously or deliberately, the decay
process is reversed, and cosmic elements of lower atomic number are produced from
cosmic elements of higher atomic number, or from material particles, kinetic energy
being absorbed from the environment to meet the additional energy requirements.
The first step in the reverse process is the inverse of the last step in the decay process: a
neutron equivalent is converted into one of the rotating systems of a cosmic krypton atom
by inversion of the orientation with respect to the space-time zero points. It is convenient,
from a practical standpoint, to work with electrically charged particles. The standard
technique in the production of transient particles therefore is to use protons, or hydrogen
atoms which fragment to protons, as the “raw material” for cosmic atom building. In the
high energy environment that is created in the production apparatus, the particle
accelerators, the proton, M 1-1-(1), ejects an electron,
M 0-0-(1), and then separates into two massless neutrons, M ½-½-0, each of which
converts to a half c-Kr atom (that is, one of the rotating systems of that atom) by
directional inversion. These half c-Kr atoms cannot add displacement and become muons
because they are unable to dispose of the proton mass, which persists as a gravitational
charge (half of the normal size, as the proton has only one rotating system). They remain
as particles of a distinct type, each with half of the
c-Kr mass (52 MeV), and half of the 931 MeV mass of a normal gravitational charge, the
total being 492 MeV. They can be identified as K mesons, or kaons, the observed mass of
which is 494 MeV.
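The kaon figure can be checked in the same way (a sketch; the half-system masses are those described above):

```python
# Kaon mass: half of a c-Kr atom plus half of a normal gravitational charge, in MeV.
c_Kr = 3724.61 / 72                       # ~51.73 (c-Kr, atomic weight 72)
print(round(c_Kr / 2 + 931.15 / 2, 1))    # ~491.4, quoted above as 492 (observed 494)
```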
As can be seen from the foregoing, the initial production of transient (cosmic) particles in
the accelerators is always accompanied by a copious production of kaons. Each of the
subsequent steps in the cosmic atom building process that requires the addition of mass, such
as the production of c-neon (the lambda particle) from c-silicon (the pion), and the production
of the psi-3105 particle from one of the heaviest of the hyperons, is similar to the initial
cosmic particle production, except that the proton mass is added to the product as a
gravitational charge instead of forming a kaon. Where kaons appear in connection with the
production of these particles, they are the result of secondary processes.
Furthermore, kaons are not produced in the decay processes, either in the cosmic rays or
in the accelerators, because the decay takes place on a massless basis. A few kaons
appear in the cosmic ray decay events, but they are not decay products. They are
produced in collisions of cosmic rays with material atoms under conditions such that a
temporary excess of energy is created—in miniature equivalents of the particle
accelerators, we may say.
If the reverse process, the atom building process, is carried beyond c-hydrogen the final
particle vanishes into the cosmic sector. Otherwise the cosmic atom building which takes
place in the material sector is eventually succeeded by a decay that follows the normal
path back to the point of reconversion into massless neutrons. Where the excess kinetic
energy in the environment is too great to permit the decay to proceed to completion, the
production and decay processes arrive at an equilibrium consistent with the existing
energy level.
In such a high-energy environment, the life of a particle may be terminated by a
fragmentation process before the unit time limitation takes effect. This is simply a
process of breaking the particle into two or more separate parts. The degree of
fragmentation depends on the energy of the disruptive forces, and at the lower energy
levels the products of fragmentation of any transient particle are mainly pions. At higher
energies kaons appear, and in the fragmentation of hyperons the mass of the gravitational
charges may come off in the form of neutrons or protons. Corresponding to fragmentation
is the inverse process of consolidation, in which particles of smaller mass join to form
particles of larger mass. Thus a particle with a mass measured as 1020 MeV has been
observed to fragment into two kaons. The 36 MeV excess mass goes into kinetic energy.
Under appropriate conditions, the two kaons may consolidate to form such a particle,
utilizing 36 MeV of kinetic energy to supply the necessary addition to the mass of the
two smaller particles.
The essential difference between the two pairs of processes—building and decay on the
one hand, and fragmentation and consolidation on the other—is that building and decay
proceed from higher to lower cosmic atomic number, and vice versa, whereas
fragmentation and consolidation proceed from greater to less equivalent mass per particle,
and vice versa. The decay process as a whole is a conversion from cosmic status to
material status, and the atom building in the particle accelerators is a partial and
temporary reversal of this process, but fragmentation and consolidation are merely
changes in the state of the atomic constituents, a process that is common in both sectors.
The change in cosmic atomic number due to fragmentation may be either upward or
downward, in contrast to the decay process, which always results in an increase in the
cosmic atomic number. This difference is a consequence of the manner in which the mass
of the gravitational charges enters into the respective processes. For example, the decay
of c-Si, the pion, is in the direction of c-Kr. On the other hand, the kaon, a gravitationally
charged c-Kr atom, cannot decay into any other cosmic particle, as it is at the end of the
line so far as decay is concerned, but it can fragment into any combination of particles
whose combined mass does not exceed the 492 MeV kaon mass. Fragmentation into
pions reverses the direction of the decay. If the maximum conversion to pions (mass 138
MeV each) takes place, three pions are produced. Frequently, a larger part of the total
energy goes into the kinetic energy of the products, and the production of pions decreases
to two.
The existence of both 2-pion and 3-pion events has been given a great deal of attention
because of the bearing that they have on various hypotheses as to the laws that govern
particle transformations. The present study indicates, however, that if the basic
requirement, an excess energy environment, is met, so that conversion of the kaon to the
material status is prevented, there are no restrictions on the fragmentation reactions, other
than those considerations that are applicable to matter and energy in general in the
material sector of the universe.
The study of the transient particles, which had its origin in the observation of the cosmic
rays, is now carried on mainly in the accelerators. It is assumed that the same particles
and the same processes are involved, and that the details thereof can be more
conveniently ascertained where the conditions are subject to control. This is true, to a
degree, of course, but the situation in the accelerators is much more complex than that to
which the incoming cosmic rays are subject. The atom building process does not merely
invert the decay process. The actual inverse of the cosmic ray decay is a situation in
which material elements enter a cosmic (high energy) environment and eject negative
displacement in order to build up into structures that can ultimately convert to the cosmic
status. The cosmic entities initially produced in this process are sub-atomic particles. The
accelerators, however, produce the cosmic elements that are closest to conversion to the
material status (c-Kr, etc.), and then drive them back up the decay path by creating
temporary energy concentrations in the material (low energy) environment. Because of
the uneven character of these concentrations of energy, cosmic atom building in the
accelerators is accompanied by numerous events of the inverse (decay) character, and by
various fragmentation and consolidation processes that involve neither building nor
decay. Many of the phenomena observed in the accelerator experiments are therefore
peculiar to the kind of environment existing in the accelerators, and are not encountered
in either the cosmic ray decay or in normal cosmic atom building.
It should also be kept in mind that the actual observations of these events, the ―raw‖ data,
have little meaning in themselves. In order to acquire any real significance they must be
interpreted in the light of some kind of a theory as to what is happening, and in such areas
as particle physics the final conclusion is often ten percent fact and ninety percent
interpretation. The theoretical findings of this work are in agreement with the
experimental results, and they also agree with the conclusions of the experimenters in
most cases, but it can hardly be expected that the agreement will be complete where there
are so many uncertainties in the interpretation of the experimental results.
The sequence of events in cosmic atom building in the accelerators has been observed
experimentally in the so-called “resonance” experiments. These involve accelerating
two streams of particles—stable or transient—to extremely high speeds and allowing
them to collide. The relation of the amount of interaction, the “cross-section,” to the
energy involved is not constant, but shows peaks or “resonances” at certain fairly well-
defined values. This result is interpreted as indicating the production of very short-lived
particles (indicated lifetime about 10⁻²³ seconds) at the energies of the resonance peaks.
This interpretation is confirmed in this work by the agreement of the sequences of
resonance particles with the theoretical results of the cosmic atom building process.
Because of the difference in the nature of the processes, the sequence of elements in
cosmic atom building is not the inverse of the decay sequence, although most of the
decay products above c-He are included. As brought out in Chapter 15, the decay process
is essentially a matter of ejecting positive rotational displacement. There is also a
decrease in equivalent mass, but the mass loss is a secondary effect. The primary
objective of the process is to get rid of the excess rotational energy. In the atom building
process in a high energy environment the necessary energy is readily available, and the
essential task is to provide the required mass. This is supplied in the form of c-Kr atoms,
mass 51.73 each. The full sequence of cosmic atoms in the building process therefore
consists of a series of elements, the successive members of which differ by 52 MeV.
Aside from the lower end of the series, where two of the 52 MeV units are required per
cosmic atomic weight unit, the only significant deviations from this pattern in the
experimental results are that c-B9 is absent, while c-Ne (a member of the decay sequence)
and c-O appear in lieu of, or in addition to, c-F. The complete atom building sequence is
shown in Table 4.

TABLE 4
COSMIC ATOM BUILDING SEQUENCE
Atomic Number   Element   Atomic Mass (MeV)   51.73 n (MeV)
    36          *c-Kr           52                 52
    18          *c-Ar          103                103
    12           c-Mg          155                155
   (10)         *c-Ne          186
     9           c-F           207                207
    (8)          c-O           232
     7          *c-N           266                259
     6           c-C           310                310
     5          *c-B10         372                362
    4-½          c-B9          414
     4           c-Be8         466                466
    3-½         *c-Be7         532                517
                                                  569
     3           c-Li6         621                621
                                                  672
    2-½         *c-Li5         745                724
* member of decay sequence
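The 51.73 n column of Table 4 is simply the sequence of multiples of the c-Kr mass; it can be regenerated as follows (a sketch):

```python
# Successive members of the building sequence differ by one c-Kr mass, ~51.73 MeV.
c_Kr = 3724.61 / 72
for n in range(1, 15):
    print(n, round(n * c_Kr))
# 52, 103, 155, 207, 259, 310, 362, 414, 466, 517, 569, 621, 672, 724
```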

Most of the reported experimental results omit many of the steps in the full sequence.
Whether this means that double or triple jumps are being made, or whether the
intermediate stages have been missed by the investigators is not yet clear. However, the
most complete set of results, the “sigma” series, is close enough to the theoretical
sequence to suggest that the build-up does, in fact, proceed step by step as indicated in
Table 4.
Regardless of any deviations from the normal sequence that may take place earlier, the
first phase of the atom building process always terminates at c-Li5 (the omega particle,
mass 1676 MeV) because, as is evident from the description of the steps in the cosmic
ray decay, the motion must enter a second dimension in order to accomplish any further
decrease in the cosmic atomic number. This requires a relatively large increase in energy,
from 1676 to 3104 MeV. In the decay process there is no alternative, and the big drop in
energy must take place, but in the reverse process the addition of energy in smaller
amounts is made possible by reason of the ability of the cosmic atom to retain additional
gravitational charges in an excess energy environment.
The doubly (gravitationally) charged cosmic element of lowest energy within the atom
building range is c-Kr, the first atom that can be formed from conversion of material
particles. The energy difference between doubly charged c-Kr and the last singly charged
product, c-Li5, is substantial (238 MeV), and all of the cosmic atom building series
theoretically include doubly charged c-Kr as well as singly charged c-Li5. There are, in
fact, some intermediate stages. All but the last small increment of the mass required for
the second charge is added in the form of c-Kr atoms (52 MeV each), as in building up
the rotational mass, and this addition is accomplished in four steps. Similar inter-stages
are possible between c-Be7 and c-Li6, also between c-Li6 and c-Li5, where two c-Kr mass
increments are required between the cosmic elements.
Beyond doubly charged c-Kr, the regular sequence is again followed, with some
omissions or deviations which, as mentioned earlier, may or may not represent the true
course of events. At doubly charged c-Li5, mass 2607 MeV, the atom building process
again reaches the one-dimensional limit, and a third charge is added in the same manner
as the second, inaugurating a new series of resonances which extends to the
neighborhood of the 3104 MeV required for the production of the first of the particles
that have scalar motion in two dimensions.
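The masses in these multiply charged series follow the same pattern as before: the rotational mass 1862.30/Z plus 931.15 MeV per material gravitational charge. A short sketch of the figures just cited (names illustrative):

```python
# Gravitationally charged cosmic atoms: 1862.30/Z + 931.15 MeV per charge.
def charged_mass(Z, charges):
    return 1862.30 / Z + 931.15 * charges

print(round(charged_mass(36, 2)))    # doubly charged c-Kr:  ~1914
print(round(charged_mass(2.5, 2)))   # doubly charged c-Li5: ~2607
print(round(charged_mass(10, 3)))    # triply charged c-Ne:  ~2980 (Table 5 gives 2979)
```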
Table 5 compares the theoretical and observed values of the masses of the particles
included in the several series of resonances that have been reported. The agreement is
probably about as close as can be expected in view of the difficulties involved in making
the measurements. In more than a third of the total number of cases the measured mass is
within 10 MeV of the theoretical value. It is also worth noting that in the only case where
enough measurements are available to provide a good average value for an individual
cosmic element, the 11 measurements on c-Li5, the agreement between this average and
the theoretical mass is exact.
All of the singly charged transient particles moving in only one dimension are stable
against decay for about 10⁻¹⁰ seconds. However, they are extremely vulnerable to
fragmentation under conditions such as those that prevail in the accelerators, and only the
particles of lowest mass escape fragmentation long enough to decay. The lifetime of the
heavier particles is limited by fragmentation to the absolute minimum, which appears to
be the unit of time corresponding to three scalar dimensions of motion, or about 10⁻²⁴
seconds.
In the tabulations of particle data in the current scientific literature,

TABLE 5
“BARYON RESONANCES”
c-Atomic Number   Element   Grav. Charge   Inter-stage   Mass Theor. (MeV)   Mass Obs.** (MeV)   Mass Obs.*** (MeV)
Sigma Series
7 *c-N 1 1197 1190
4 c-Be8 1 1397 1385
3-½ *c-Be7 1 1463 1480
3 c-Li6 1 1552
a 1604 1620
2-½ *c-Li5 1 1676 1670
a 1728 1750 1690
b 1779 1765
c 1831 1840
d 1882 1880
36 *c-Kr 2 1914 1915
18 *c-Ar 2 1965 1940
12 c-Mg 2 2017 2000
10 *c-Ne 2 2048 2030
9 c-F 2 2069 2070
8 c-O 2 2095 2080
7 *c-N 2 2128 2100
5 *c-B 2 2234 2250
3 c-Li6 2 2483 2455
2-½ *c-Li5 2 2607 2620
10 *c-Ne 3 2979 3000
Lambda Series
10 *c-Ne 1 1117 1115
4 c-Be8 1 1397 1405
3 c-Li6 1 1552 1520
2-½ *c-Li5 1 1676 1670 1690
a 1728 1750
b 1779 1815
c 1831 1830
d 1882 1870-1860
12 c-Mg 2 2017 2020-2010
8 c-O 2 2095 2100 2110
4 c-Be8 2 2328 2350
2-½ *c-Li5 2 2607 2585
Xi Series
5 *c-B 1 1303 1320
3 c-Li6 1 1552 1530
2-½ *c-Li5 1 1676 1630
c 1831 1820
36 *c-Kr 2 1914 1940
10 *c-Ne 2 2048 2030
5 *c-B 2 2234 2250
3 c-Li6 2 2483 2500
N Series
3-½ *c-Be7 1 1463 1470
3 c-Li6 1 1552 1535 1520
2-½ *c-Li5 1 1676 1670 1688
a 1728 1700
b 1779 1780
d 1882 1860
14 *c-Si 2 1995 1990
10 *c-Ne 2 2048 2040
8 c-O 2 2095 2100
6 c-C 2 2172 2190 2175
5 *c-B 2 2234 2220
2-½ *c-Li5 2 2607 2650
10 *c-Ne 3 2979 3030
Delta Series
6 c-C 1 1241 1236
2-½ *c-Li5 1 1676 1670 1690
d 1882 1890
36 *c-Kr 2 1914 1910
18 *c-Ar 2 1965 1950 1960
6 c-C 2 2172 2160
3-½ *c-Be7 2 2394 2420
36 *c-Kr 3 2845 2850
* Decay sequence
** Well-established resonances
*** Less certain resonances
the information with respect to the series of resonances thus far discussed is presented
under the heading of “Baryon Resonances.” A further classification of “Meson
Resonances” gives similar information concerning particles that were observed by a
variety of other techniques. These are, of course, entities of the same nature—cosmic
elements in the decay range—and largely the same elements, but because of the wide
variations in the conditions under which they were produced the meson list includes a
number of additional elements. Indeed, it includes all of the elements of the regular atom
building sequence (with c-Ne and c-O substituted for c-F, as previously noted), and one
additional isotope, c-C11. The masses derived from the experiments are compared with
the theoretical masses of the cosmic elements in Table 6. The names currently applied to
the observed particles have no significance, and have been omitted.
In preparing this table, the observed particles were first assigned to the corresponding
cosmic elements, an assignment that could be made without ambiguity, as the maximum
experimental deviations from the theoretical masses are, in all but a very few instances,
considerably less than the mass differences between the successive elements or isotopes.
On the assumption that the deviations of the reported values from the true masses of the
particles are due to causes whose effects are randomly related to the true masses, the
individual values were averaged for comparison with the theoretical masses. The close
correlation between the two sets of values not only confirms the status of these observed
particles as cosmic elements, but also validates the assumption of random deviations, on
which the averaging was based. Presumably these deviations are, in part, due to
inaccuracies in obtaining and processing the experimental data, but they may also include
a random distribution of differences of a real character: more of the “fine structure”
which, as previously noted, has not yet been studied in the context of the Reciprocal
System.
The averaged values are shown in parentheses. Where only single measurements are
available, the deviations from the theoretical values are naturally greater, but they are
generally within the same range as those of the individual values that enter into the
averages. Longer lived decay products such as c-Ne and c-N are not usually classified
with the resonances, but they have been included in the table to show the complete
picture. The gaps still remaining in the table will no doubt be filled as further
experimental work is done. Indeed, many of these gaps, particularly in the upper portion
of the mass range, can be filled immediately, simply by consolidating Tables 5 and 6. The
difference between these two sets of resonances is only in the experimental procedures by
which the reported values were derived. All of the transient

TABLE 6
“MESON RESONANCES”
c-Atomic Number   Element   Grav. Charge   Inter-stage   Mass Theor. (MeV)   Mass Obs.** (MeV)   Individual Values (MeV)
3 c-Li6 0 621
a 673 700
2-½ *c-Li5 0 745 (760) 750,770
a 797 784
d 952 (951) 940,953-958
36 *c-Kr 1 983 (986) 970,990,997
18 *c-Ar 1 1034 (1031) 1020,1033,1040
12 c-Mg 1 1086 (1090) 1080,1100
10 *c-Ne 1 1117 1116
8 c-O 1 1164 (1165) 1150,1170-1175
7 *c-N 1 1197 1197
6 c-C12 1 1241 (1240) 1237,1242
5-½ c-C11 1 1270 (1274) 1265,1270,1286
5 *c-B10 1 1303 1310
4-½ c-B9 1 1345
4 c-Be8 1 1397
3-½ *c-Be7 1 1463 (1455) 1440,1470
a 1515 1516
3 c-Li6 1 1552 1540
a 1604 (1623) 1600,1645
2-½ *c-Li5 1 1676 (1674) 1660,1664-1680,1690
b 1779 (1773) 1760,1765-1795
c 1831 (1840) 1830,1850
36 *c-Kr 2 1914 1930
8 c-O 2 2095 2100
5 *c-B10 2 2234 2200
4-½ c-B9 2 2276 2275
4 c-Be8 2 2328 2360
3-½ *c-Be7 2 2394 2375
36 *c-Kr 3 2845 2800
36 (kaon)½ c-Kr 1-½ 1423 (1427) 1416,1421,1430,1440
* Decay sequence
particles, irrespective of the category to which they are currently assigned, are cosmic
elements or isotopes, with or without gravitational charges of the material type.
The absence of singly (gravitationally) charged particles corresponding to c-B9 from the
list of observed resonances is rather conspicuous, particularly since the similar particle of
twice this atomic weight, c-F18, is also missing, as noted earlier. The reason for this
anomaly is still unknown.
The last particle listed in Table 6 is a kaon, one of the two rotating systems of a c-Kr
atom, with a full gravitational charge in addition to the half-sized charge that it normally
carries. This particle has the same relation to the normal kaon that the atoms of the
doubly charged series in Tables 5 and 6 bear to the corresponding singly charged atoms.
In the first edition it was suggested that some of the cosmic ray particles entering the
material sector might be cosmic chemical compounds rather than single atoms. In the
light of the more complete information now available with respect to the details of the
inter-regional transfer of matter, this possibility must now be excluded, but short-lived
associations between cosmic and material particles, and perhaps, in some cases, between
cosmic particles, are feasible, and evidence of some such associations has been obtained.
For example, the lambda meson (c-neon) is reported to participate in a number of
combinations with material elements, called hyperfragments, which disintegrate after a
brief existence. The current view is that the meson, which is assumed to be a sub-atomic
particle, replaces one of the “nucleons” in the material atom. However, we find (1) that
the material atom is not composed of particles, (2) that there are no nucleons, and (3) that
the mesons are full-sized atoms, not sub-atomic particles. The hyperfragment therefore
cannot be anything more than a temporary association between a material atom and a
cosmic atom.
The new findings as to the nature of the transient particles, and their production and
decay, do not negate the results of the vast amount of work that has been done toward
determining the behavior characteristics of these particles. As stated earlier in this
chapter, these theoretical findings are generally consistent not only with the actual
experimental results, but also with the experimenters' ideas as to what the raw data—the
various “tracks,” electrical measurements, counter readings, etc.—signify with respect to
the existence and behavior of the different transient particles. But what appears to be an
immense amount of experimental data actually contributes very little toward an
explanation of the nature of these particles, and their place in the physical universe; it
merely serves to define the problem. As expressed by V. F. Weisskopf, in a review of the
situation, “The present theoretical activities are attempts to get something from almost
nothing.”
Much of the information derived from observation is ambiguous, and some of it is
definitely misleading. The experimentally established facts obviously have a bearing on
the problem, but they are too limited in their scope to warn the investigators that they
cannot be fitted into the pattern to which scientists are accustomed. For instance, in the
world of ordinary matter, a particle mass less than that of the lightest isotope of hydrogen
indicates that the particle belongs to the sub-atomic class. But when the effective masses
of the transient particles, as determined by experiment, are interpreted according to this
familiar pattern, they give a totally false account of the nature of these entities. Thus,
while the determination of the particle masses adds to the total amount of factual
information available, its practical effect is to lead the investigators away from the truth
rather than toward it. The following statements by Weisskopf in his review indicate that
he suspected that some such misinterpretation of the empirical data is responsible for the
confusion that currently surrounds the subject.
We are exploring unknown modes of behavior of matter under completely novel
conditions.... It is questionable whether our present understanding of high-energy
phenomena is commensurate to the intellectual effort directed at their interpretation.67
Availability of a general physical theory which enables us to deduce the nature and
characteristics of the transient particles in full detail from theoretical premises, rather
than having to depend on physical observation of a very limited scope, now opens the
door to a complete understanding. The foregoing pages have provided an account of what
the transient particles are, where the particles of natural origin (the cosmic rays) come
from, what happens to them after they arrive, and how they are related to the transient
particles produced in the accelerators. The aspects of these particles that have been so
difficult to explain on the basis of conventional theory—their multiplicity, their
extremely short lifetimes, the high speed and great energies of the natural particles, and
so on—are automatically accounted for when their origin and general nature is
understood.
Another significant point is that, on the basis of the new theoretical explanation, the
cosmic rays have a definite and essential place in the mechanism of the universe. One of
the serious weaknesses of conventional physical theory is that it is unable to find roles for
a number of the recently discovered phenomena such as the cosmic rays, the quasars, the
galactic recession, etc., that are commensurate with the magnitude of the phenomena, and
is forced to treat them as products of exceptional or abnormal circumstances. In view of
the wide extent of the phenomena in question, and their far-reaching consequences, such
characterization is clearly inappropriate. The theoretical finding that these are stages of
the cosmic cycle through which all matter eventually passes now eliminates this
inconsistency, and identifies each of these phenomena with a significant phase of the
normal activity of the universe. The existence of a hitherto unknown second half of the
universe is the key to an understanding of all of these currently misinterpreted
phenomena, and the most interesting feature of the cosmic rays is that they give us a
fleeting glimpse of the entities of which the physical objects of that second half, the
cosmic sector, are constructed.

CHAPTER 17

Some Speculations
The Reciprocal System of theory consists of the fundamental postulates, together with
everything that is implicit in the postulates; that is, everything that can legitimately be
derived from those postulates by logical and mathematical processes without introducing
anything from any other source. It is the theory as thus defined that can claim to be a true
and accurate representation of the observed physical universe, on the grounds specified in
the earlier pages. The conclusions stated in this and related publications by the present
author and others are the results of the efforts that have thus far been made to develop the
consequences of the postulates in detail. However, the findings that have emerged from the
the early phases of this theoretical development call for some drastic modifications of the
prevailing conceptions of the nature of some of the basic physical entities and
phenomena. Such conceptual changes are not easily made, and the persistence of
previous habits of thought makes it difficult, not only for the readers of these works, but
also for the investigators themselves, to grasp the full implications of the new ideas when
they first make their appearance.
The existence of scalar motion in more than one dimension, which plays an important
part in the subject matter of the two preceding chapters, is a good example. It is now clear
that such motion is a necessary and unavoidable consequence of the basic postulates, and
there is no inherent obstacle that would stand in the way of a complete and detailed
understanding of its nature and effects if it could be considered in isolation, without
interference from previously existing ideas and beliefs. But this is not humanly possible.
The minds into which this idea enters are accustomed to thinking along very different
lines, and inertia of thought is similar to inertia of matter, in that it can be fully overcome
only over a period of time.
Even the simple concept of motion that is inherently scalar, and not merely a vectorial
motion whose directional aspects are being disregarded, involves a conceptual change of
no small magnitude, and the first edition of this work did not go beyond this point, except
in specifying that the increase in the speed of recession of the galaxies is linear beyond
the gravitational limit, a tacit assertion that the increment is scalar.
Subsequent studies of high energy astronomical phenomena carried the development of
thought on the subject a step farther, as they led to the conclusion that the quasars are
moving in two dimensions. However, it took additional time to achieve a recognition of
the fact that unit scalar speed in three dimensions constitutes the line of demarcation
between the region of motion in space and the region of motion in time, and the first
publication in which this point was brought out specifically was Quasars and Pulsars
(1971). Now we further find that the same considerations also apply to the incoming
cosmic particles. At the moment, it appears that the full scope of the subject has been
covered, but past experience does not encourage a positive statement to that effect.
This experience demonstrates how difficult it is to attain a comprehensive understanding
of the various aspects of any new item of information that is derived from the basic
postulates, and it explains why identification of the source from which the correct
answers can be obtained does not automatically give us all of those answers; why the
results obtained by application of the Reciprocal System of theory, like the products of all
other research into previously unknown physical areas, necessarily differ in the degree of
certainty that can be ascribed to them, particularly in the relatively early stages of an
investigation. Many are established beyond a reasonable doubt; others can best be
characterized as “work in progress”; still others are little, if any, more than speculations.
However, because of the extremely critical scrutiny to which a theory based on a new and
radically different fundamental concept is customarily (and properly) subjected,
publication of the results of the theoretical development described in this work has, in
general, been limited to those items which have been given long and careful examination,
and can be considered as having a very high degree of probability of being correct.
Almost thirty years of study and investigation went into the project before the first edition
of this work was published. The additions and modifications in this new edition are the
result of another twenty years of review and extension of the original findings by the
author and others.
Inasmuch as the results of this development are conclusions about one universe derived
in their entirety from one set of basic premises, every advance that is made in the
understanding of phenomena in one physical field throws some light on outstanding
questions in other fields. A review such as that required for the preparation of this new
edition has the benefit of all of the advances that have been made subsequent to the last
previous systematic study of each area, and a considerable amount of clarification of the
subject matter previously examined, and extension of the development into new subject
areas, was accomplished almost automatically during the revision of the text. Where it is
evident that the new theoretical conclusions thus derived are firm enough to meet the
criteria that were applied to the original publication they have been included in this new
edition. But in general, any new ideas of major consequence that have emerged from this
rather rapid review have been held over for further study in order to be sure that they
receive adequate consideration before publication.
In one particular case, however, there seems to be sufficient justification for making an
exception to this general policy. In the preceding pages, the discussion of the decay of the
cosmic elements after entry into the material environment was carried to the point where
the decay was complete, and it was noted that the ultimate result would necessarily be
conversion of the cosmic elements into forms that would be compatible with the new
environment. Since hydrogen is the predominant constituent of the material sector of the
universe, this element must ultimately be produced from the decay products, but just how
the transition is accomplished has not been clear theoretically, and empirical information
bearing on the subject is practically non-existent. It would be a significant advance
toward completion of the basic theoretical structure if this gap could be closed.
Consideration of the question during preparation of the text of the new edition has
uncovered some interesting possibilities in this connection, and a discussion of these
ideas in the present work seems to be warranted, even though it must be admitted that
they are still speculative, or at least no more than ―work in progress.‖
The first of these tentative new conclusions is that the muon neutrino is not a neutrino. As
the theoretical development now stands, there is no place for any neutrinos other than the
electron neutrino and its cosmic analog, the electron antineutrino, as it is currently
known. Of course, the door is not completely closed. Earlier in this volume it was
asserted that sufficient evidence is now available to demonstrate that the physical
universe is, in fact, a universe of motion, and that a correct development of the
consequences of the postulates that define such a universe will produce an accurate
representation of the existing physical universe. It is not contended, however, that the
present author and his associates are infallible, and that the conclusions which they have
reached by these means are always correct. It is conceivable that further theoretical
clarification may change some aspects of our existing view of the neutrino situation, but
the theory as it now stands has no place for muon neutrinos.
As brought out in the previous pages, however, the theory does require the production of
a different massless particle in the processes in which the “muon neutrino” now appears,
and the logical conclusion is that the particle now called the muon neutrino is the particle
required by the theory: the massless neutron. From the observational standpoint this
changes nothing but the name, as these two massless particles cannot be distinguished by
any currently known means. On the theoretical side, the observed particle fits in very well
with the theoretical deductions as to the behavior of the massless neutron. This particle
should theoretically be produced in every decay event, whereas the neutrino should
appear only in the last step, where separation of the residual cosmic atom into two
massless particles takes place. This is in accord with observation, as the “muon neutrino”
appears in both the pion decay and the muon decay, whereas the electron neutrino
appears only in the decay of the muon. Empirical confirmation of the theoretical production
of massless neutrons in the earlier decay events has not yet been observed, but this is
understandable.
The reported products of the decay of a positive muon are also in agreement with the
massless neutron hypothesis. These products are currently considered to be a positron,
which, according to our findings, is M 0-0-1, an electron neutrino, M ½-½-(1), and a
“muon antineutrino,” which we now identify as a massless neutron, M ½-½-0. The
positron and the electron neutrino are jointly equivalent to a second massless neutron.
Their appearance as two particles rather than one is probably due to the fact that they are
the products of the final conversion of the residual cosmic atom, in which the electric and
magnetic rotations are oppositely directed, rather than merely discrete particles ejected
from the cosmic atom.
It is claimed that muons also exist with negative charges, and that these decay into the
antiparticles of the decay products of the positive muon: an electron, an electron
antineutrino, and a “muon neutrino.” These asserted products are the equivalent of two
cosmic massless neutrons. The production of such particles, or of cosmic particles of any
kind, other than the members of the regular decay sequence, as the result of a decay
process, is rather difficult to reconcile with the theoretical principles that have been
developed. Theoretical considerations indicate that there is no such thing as an
"antimeson," and that the negatively charged muon is identical with the positively
charged muon, except for the difference in the charge. On this basis, the decay products
should differ only in that an electron replaces the positron. Inasmuch as two of the decay
particles in each case are unobservable, there appears to be a rather strong probability that
their identification in current physical thought comes from the ninety percent of
interpretation rather than from the ten percent of observation that enters into the reported
results. However, it is the existence of some unresolved questions of this kind that has
made it necessary to characterize the contents of this chapter as somewhat speculative.
On the basis of the theoretical decay pattern, the incoming cosmic atoms are eventually
converted into massless neutrons and their equivalents. The problem then becomes: What
happens to these particles? There are no experimental or observational guideposts along
this route; we will have to depend entirely on theoretical deductions.
The massless neutron already has a material type structure--that is, a negative vibration
and a positive rotation--and no conversion process is required. Likewise, no decay or
fragmentation process is possible because this particle has only one rotational
displacement unit. Progress toward the hydrogen goal must therefore take place by means
of addition processes. Addition of a massless neutron to a positron, a proton, a compound
neutron, or a second massless neutron, would produce a particle in which there is a single
rotating system of displacement 2 (on the particle scale). As indicated in Chapter 11, it
appears that such a particle, if it exists at all, is unstable, and in the absence of any means
of transferring one of the units of displacement to a second rotating system, the unstable
particle will decay back to particles of the original types. Such additions will therefore
accomplish nothing.
The additions that are actually possible constitute a regular series. The decay product, the
massless neutron, M ½-½-0, can combine with an electron, M 0-0-(1), to form a neutrino,
M ½-½-(1). Another massless neutron added to the neutrino produces a proton, M 1-1-(1).
As has been indicated, addition of a massless neutron to the proton is not feasible, but
a neutrino can be added, and this produces the mass one hydrogen isotope, M 1½-1½-(2).
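The arithmetic of this series can be set out explicitly. The following sketch is a checking aid only, not part of the theoretical development; the function and variable names are ours, and displacements shown in parentheses in the text (displacement in time) are entered as negative numbers:

from fractions import Fraction as F

# Rotational displacements (magnetic-magnetic-electric).
half = F(1, 2)
massless_neutron = (half, half, 0)          # M ½-½-0
electron         = (0, 0, -1)               # M 0-0-(1)
positron         = (0, 0, 1)                # M 0-0-1
neutrino         = (half, half, -1)         # M ½-½-(1)
proton           = (1, 1, -1)               # M 1-1-(1)
hydrogen_1       = (F(3, 2), F(3, 2), -2)   # M 1½-1½-(2)

def add(a, b):
    # Displacements add independently in each of the three dimensions.
    return tuple(x + y for x, y in zip(a, b))

assert add(massless_neutron, electron) == neutrino     # massless neutron + electron
assert add(neutrino, massless_neutron) == proton       # neutrino + massless neutron
assert add(proton, neutrino) == hydrogen_1             # proton + neutrino -> hydrogen
assert add(positron, neutrino) == massless_neutron     # jointly equivalent to a massless neutron

The last line restates, in this form, the earlier observation that the positron and the electron neutrino are jointly equivalent to a second massless neutron.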
So far as the rotational displacement is concerned, we now have a clear and consistent
picture. By addition of the supply of massless neutrons resulting from the decay of the
cosmic rays to electrons and neutrinos, particles that are plentiful in the material
environment, hydrogen, the basic element of the material system, is produced. But there
is still one important factor to be accounted for. There is no problem in the addition of the
massless neutron to the electron, but in adding one to the neutrino to produce the proton, a unit
of mass must be provided. The question that must be answered before this hypothetical
hydrogen building process can be considered a reality is: Where does the required mass
come from?
It appears, on the basis of the recent extensions of the theory, that the answer to this
question can be found in a hitherto unrecognized property of particles with two-
dimensional rotation. As explained in Chapter 12, mass is t³/s³, the reciprocal of three-
dimensional speed, whereas energy is t/s, the reciprocal of one-dimensional speed.
Obviously, there is an intermediate quantity, the reciprocal of two-dimensional speed,
t²/s². This has been recognized as momentum, or impulse, but it has been regarded as a
derivative of mass. Indeed, momentum is customarily defined as the product of mass and
velocity. What has not been recognized is that the reciprocal of two-dimensional speed
can exist in its own right, independent of mass, and that a two-dimensional massless
particle can have what we may call internal momentum, t²/s², just as a three-dimensional
atom has mass, t³/s³.
The internal energy of an atom, the energy equivalent of its mass, is equal to the product
of its mass and the square of unit speed, t³/s³ x s²/t² = t/s. This is the relation discovered
by Einstein, and expressed as E = mc². In order to provide the unit mass required in the
addition of a massless neutron to a neutrino to form a proton, a unit quantity of energy, t/s,
must be provided.
The kinetic energy of a particle with internal momentum M is the product of this
momentum and the speed: Mv = t²/s² x s/t = t/s. Inasmuch as the massless neutron has
unit magnetic displacement, and therefore unit momentum, and being massless it moves
with unit speed (the speed of light), its kinetic energy is unity. Thus the kinetic energy of
the massless neutron is equal to the energy requirement for the production of a unit of
mass, and by coming to rest in the stationary frame of reference the massless neutron can
provide the energy as well as the rotational displacement necessary to produce the proton
by combination with a neutrino.
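Since mass, momentum, and energy are all treated here as reciprocal speeds of different dimensionality, the dimensional statements of the last two paragraphs can be checked mechanically. The sketch below is purely illustrative; the representation of each quantity as a pair of exponents is our own device, and multiplication of quantities corresponds to addition of exponents:

# Space-time dimensions expressed as (power of t, power of s).
mass          = (3, -3)    # t³/s³
momentum      = (2, -2)    # t²/s²
energy        = (1, -1)    # t/s
speed         = (-1, 1)    # s/t
speed_squared = (-2, 2)    # s²/t²

def times(a, b):
    # Multiplying two quantities adds their exponents.
    return (a[0] + b[0], a[1] + b[1])

# E = mc²: mass times the square of unit speed has the dimensions of energy.
assert times(mass, speed_squared) == energy
# Kinetic energy of the massless neutron: internal momentum times unit speed.
assert times(momentum, speed) == energy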
Here, then, is what appears, on initial consideration at least, to be a complete and
consistent theoretical explanation of the transition from decay product to material atom.
There is, of course, no observational confirmation of the hypothetical processes, and such
confirmation may be hard to get. The conclusions that have been reached will therefore
have to rest entirely on their theoretical foundations for the time being.
It is worth noting that, on the basis of these conclusions, the hydrogen produced from the
decay products originates somewhat uniformly throughout the extension space of the
material sector, inasmuch as the neutrino population must be fairly uniformly distributed.
This is in agreement with other deductions that were discussed in the first edition, and
will be given further consideration in Volume II of this work. The standing of the
conclusions that have just been outlined is considerably strengthened by the fact that the
two lines of theoretical development meet at this point.
As stated earlier, the inflow of cosmic matter into the material sector is counterbalanced
by an ejection of matter from the material sector into the cosmic sector in the form of
high-speed explosion products. These are the two crucial phases of the great cycle which
constitutes the continuing activity of the universe. But the slow process of growth and
development that the arriving matter undergoes before it is ready to participate in the
events which will eject it back into the cosmic sector, and complete the cycle, is an
equally important, even though less spectacular, aspect of the cycle. Consequently, one of
the major tasks involved in developing a theoretical account of the physical universe
from the basic postulates of the Reciprocal System is to trace the evolutionary path of the
new matter, and of the aggregates into which that matter gathers. Our first concern,
however, must be to identify the participants in physical activity, and to define their
principal properties, as these are items of information that will be required before the
events in which these entities participate can be accurately evaluated. Now that we have
arrived, at least tentatively, at the hydrogen stage, we will defer further consideration of
the evolution of matter to Volume II, and will return to our examination of the individual
material units and their primary combinations.
CHAPTER 18
Simple Compounds
In the preceding chapters we have determined the specific combinations of simple rotations
that are stable in the material sector of the universe, and we have identified each of these
combinations, within the experimental range, with an observed sub-atomic particle or atom
of an element. We have then shown that an exact duplicate of this system of material
rotational combinations, with space and time interchanged, exists in the cosmic sector, and
we have identified all of the observed particles that do not belong to the material system as
atoms or particles of the cosmic system. To the extent that observational or experimental
data are available, therefore, we have established agreement between the theoretical and
observed structures. So far as these data extend, there are no loose ends; all of the observed
entities have been identified theoretically, and while not all of the theoretical entities have
been observed, there are adequate theoretical explanations for this.
The number of observed particles is increased substantially by a commonly accepted
convention which regards particles of the same kind, but with different electric charges, as
different particles. No consideration has been given to the effects of electric charges in this
present discussion, as the existence of such charges has no bearing on the basic structure of
the units. These charges may play a significant part in determining whether or not certain
kinds of reactions take place under certain circumstances, and may have a major influence
on the details of those reactions, just as the presence or absence of concentrations of kinetic
energy may have a material effect on the course of events. But the electric charge is not part
of the basic structure of the atom or sub-atomic particle. As will be brought out when we
take up consideration of electrical phenomena, it is a temporary appendage that can be
attached or removed with relative ease. The electrically charged atom or particle is therefore
a modified form of the original rotational combination rather than a distinctly different type
of structure.
Our examination of the basic structures is not yet complete, however, as there are some
associations of specific numbers of specific elements that are resistant to dissociation, and
therefore act in the manner of single units in processes of low or moderate energy. These
associations, or molecules, play a very important part in physical activity, and in order to
complete our survey of the units of which material aggregates are composed we will now
develop the theory of the structure of molecules, and will determine what kinds of molecules
are theoretically possible.
The concept of the molecule originated from a study of the behavior of gases, and as
originally formulated it was essentially empirical. The molecule, on this basis, is the
independent unit in a gas aggregate. But this definition cannot be applied to a solid, as the
independent unit in a solid is generally the individual atom, or a small group of atoms, and
in this case the molecule has no physical identity. In order to make the molecule concept
more generally applicable, therefore, it has been redefined on a theoretical rather than an
empirical basis, and as now conceived, a molecule is the smallest unit of a substance which
can (theoretically) exist independently and retain all of the properties of the substance.
The atoms of a molecule are held together by inter-atomic forces, the nature and magnitude
of which will be examined in detail later. The strength of these forces determines whether or
not the molecule will break up under whatever disruptive forces it may be subjected to, and
the manner in which certain atoms are joined in a molecule may have an effect on the
magnitude of the inter-atomic forces, but the determination of what atoms can combine with
what other atoms, and in what proportions, is governed by an entirely different set of factors.
In current theory, the factors responsible for the inter-atomic force, or "bond," are presumed
to have a double function, not only determining the strength of the cohesive force, but also
determining what combination can take place. The results of the present investigation
indicate, however, that the force which determines the equilibrium distance between any two
atoms is identical in origin and in general character regardless of the kind of atoms involved,
and regardless of whether or not those atoms can, or do, take part in the formation of a
molecule.
Experience has indicated that it is advisable to lay more emphasis on the independence of
these two aspects of the interrelations between atoms, and for this purpose the plan of
presentation employed in the first edition will be modified in some respects. As already
mentioned, the information that will be developed with respect to the molecular structure
will be presented before any discussion of inter-atomic forces is undertaken. Furthermore,
present indications are that whatever advantages there may be in using the familiar term
"bond" in describing the various molecular structures are outweighed by the fact that the
term "bond" almost inevitably implies the notion of a force of some kind. Inasmuch as the
different molecular "bonds" merely reflect different relative orientations of the rotations of
the interacting atoms, and have no force implications, we will abandon the use of the term
"bond" in this sense, and will substitute "orientation" for present purposes. The term "bond"
will be used in a different sense in a later chapter where it will actually relate to a force.
The existence of molecules, either combinations of specific numbers of like atoms, or
chemical compounds, which are combinations of unlike atoms, is due to the limitations on
the establishment of inter-atomic equilibrium that are imposed by the presence of motion in
time in the electric dimension of the atoms of certain elements. Those elements whose atoms
rotate entirely in space (positive displacement in all rotational dimensions), or which are
able to attain the all-positive status by reorientation on the 8-x basis, are not subject to any
such limitations. An atom of an element of this kind can establish an equilibrium with any
other such atoms in any proportions, except to the extent that the physical properties of the
elements involved (such as the melting points) or conditions in the environment (such as the
temperature) interfere. Material aggregates of this kind are called mixtures. In some cases,
where the mixture is homogeneous and the composition is uniform, the term alloy is applied.
There is a class of intermetallic compounds, in which these positive constituents are
combined in definite proportions. CuZn and Cu5Zn8 are compounds of this class. But the
combinations of copper and zinc are not limited to specific ratios of this kind in the way in
which the composition of true chemical compounds is restricted. The commercially
important alloys of these two metals extend through the entire range from a brass with 90
percent copper and 10 percent zinc to a solder with 50 percent of each constituent, and the
possible alloys extend over a still wider range. The intermetallic compounds are merely
those alloys whose proportions are especially favorable from a geometric standpoint. A
typical comment in a chemistry textbook is that "The theory of the bonding forces involved
in these intermetallic compounds is very complex and is not, as yet, very well understood."
The reason is that there are no "bonding forces" in these substances in the same sense in
which that term is ordinarily used in application to the true chemical compounds.
As has been stated, negative rotation in the electric dimension of an atom is admissible
because the requirement that the net total rotational displacement must be positive (in the
material sector) can be met as long as the magnetic rotation is positive. In the time region
inside unit distance, however, the electric and magnetic rotations act independently. Here the
presence of a randomly oriented electric rotation in time makes it impossible to maintain a
fixed inter-atomic equilibrium. Any relation of space to time is motion, and motion destroys
the equilibrium. But an equilibrium can be established in certain cases if both of the
interacting atoms are specifically oriented along the line of interactions in such a manner
that the negative displacement in the electric dimension of one atom is counterbalanced by
an equal positive displacement in one of the dimensions of the second atom, so that the
magnitude of the resulting relative motion is zero with respect to the natural datum. Or a
multi-atom group equilibrium may be established where the total negative displacements of
the atoms with electric rotation in time are exactly equal to the total effective positive
displacements of the atoms with which the interaction is taking place.
In these cases there is an equilibrium because the net total of the positive and negative
displacement involved is zero. Alternatively, the equilibrium may be based on a total of 8 or
16 units, since, as we have found, there are 8 displacement units between one zero point and
the next. A negative displacement x may be counterbalanced by a positive displacement 8-x,
the net total being 8, which is the next zero point, the equivalent of the original zero.
As an analogy, we may consider a circle, the circumference of which is marked off into 8
equal divisions. Any point on this circle can be described in either of two ways: as x units
clockwise from zero, or as 8-x units counter-clockwise from 8. A distance of 8 units
clockwise from zero is equivalent to zero. Thus a balance between x and 8-x, with the
midpoint at 8, is equivalent to a balance between x and -x, with the midpoint at zero. The
situation in the inter-atomic space-time equilibrium is similar. As long as the relative
displacement of the two interacting motions, the total of the individual values, amounts to
the equivalent of any one of the zero points, the system is in equilibrium.
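In effect, the zero-point arithmetic just described is counting modulo 8. A brief sketch of the counting rule (illustrative only; the function name is ours):

def is_equilibrium(relative_displacement):
    # An equilibrium requires a net displacement of zero or the equivalent of
    # zero, i.e. a multiple of the 8 units between successive zero points.
    return relative_displacement % 8 == 0

x = 3
assert is_equilibrium(x + (-x))        # x balanced against -x (midpoint at zero)
assert is_equilibrium(x + (8 - x))     # x balanced against 8-x (midpoint at 8)
# On the circle analogy, x units clockwise and 8-x units counter-clockwise
# mark the same point on the 8-division circle.
assert x % 8 == -(8 - x) % 8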
Because of the specific requirements for the establishment of equilibrium, the components
of combinations of this kind, molecules of chemical compounds, exist in definite
proportions, each n atoms of one component being associated with a specific number of
atoms of the other component or components. In addition to the constant proportions of their
components, compounds also differ from mixtures or alloys in that their properties are not
necessarily similar to those of the components, as is generally true in the all-positive
combinations, but may be of an altogether different nature, as the resultant of a space-time
equilibrium of the required character may differ widely from any of the effective rotational
values of the individual elements.
The rotational displacement in the dimension of interaction determines the combining
power, or valence, of an element. Since the negative displacement is the foreign component
of the material molecule that has to be counterbalanced by an appropriate positive
displacement to make the compound possible, the negative valence of an element is the
number of units of effective negative displacement that an atom of that element possesses. It
follows that, with some possible exceptions that will be considered later, there is only one
value of the negative valence for any element. The positive valence of an atom in any
particular orientation is the number of units of negative displacement which it is able to
neutralize when oriented in that manner. Each element therefore has a number of possible
positive valences, depending on its rotational displacements and the various ways in which
they can be oriented. The occurrence of these alternate orientations is largely dependent
upon the position of the element within the rotational group, and in preparation for the
ensuing discussion of this subject it will be advisable, for convenient reference, to set up a
classification according to position.
Within each of the rotational groups the minimum electric displacement for the elements in
the first half of the group is positive, whereas for those in the latter half of the group it is
negative. We will therefore apply the terms electropositive and electronegative to the
respective halves. It should be understood, however, that this distinction is based on the
principle that the most probable orientation in the electric dimension considered
independently is that which results in the minimum displacement. Because of the molecular
situation as a whole, an electronegative element often acts in an electropositive capacity–
indeed, nearly all of them take the positive role in chemical compounds under some
conditions, and many do so under all conditions–but this does not affect the classification
that has been defined.
There are also important differences between the behavior of the first four members of each
series of positive or negative elements and that of the elements with higher rotational
displacements. We will therefore divide each of these series into a lower division and an
upper division, so that those elements with similar general characteristics can be treated
together. This classification will be based on the magnitude of the displacement, the lower
division in each case including the elements with displacements from 1 to 4 inclusive, and
the upper division comprising those with displacements of 4 or more. The elements with
displacement 4 belong to both divisions, as they are capable of acting either as the highest
members of the lower divisions or as the lowest members of the upper divisions. It should
be recognized that in the electronegative series the members of the lower divisions have the
higher net total positive displacement (higher atomic number).
For convenience, these divisions within each rotational group will be numbered in the order
of increasing atomic number as follows:
Division I      Lower electropositive
Division II     Upper electropositive
Division III    Upper electronegative
Division IV     Lower electronegative
These are the divisions which were indicated in the revised periodic table in Chapter 10. As
will be seen from the points developed in the subsequent discussion, the division to which
an element belongs has an important bearing on its chemical behavior. Including this
divisional assignment in the table therefore adds substantially to the amount of information
that is represented.
Where the normal displacement x exceeds 4, the equivalent displacement 8-x is numerically
less than x, and therefore more probable, other things being equal. One effect of this
probability relation is to give the 8-x positive valence preference over the negative valence
in Division III, and thereby to limit the negative components of chemical compounds to the
elements of Division IV, except in one case where a Division III element acquires the
Division IV status for reasons that will be discussed later.
When the positive component of a compound is an element from Division I, the normal
positive displacement of this element is in equilibrium with the negative displacement of the
Division IV element. In this case both components are oriented in accordance with their
normal displacements. The same is true if either or both of the components is double or
multiple. We will therefore call this the normal orientation. The corresponding normal
valences are the positive valence (x) and the negative valence (-x).
It is theoretically possible for any Division I element to form a compound with any Division
IV element on the basis of the appropriate normal valences, and all such compounds should
be stable under favorable conditions, but whether or not any specific compound of this type
will be stable under the normal terrestrial conditions is determined by probability
considerations. An exact evaluation of these probabilities has not yet been attempted, but it
is apparent that one of the most important factors in the situation is the general principle that
a low displacement is more probable than a high displacement. If we check the theoretically
possible normal valence compounds against the compounds listed in a chemical handbook,
we will find nearly all of the low positive-low negative combinations in this list of common
compounds. The low positive-high negative, and the high positive-low negative
combinations are much less fully represented, while we will find the high positive-high
negative combinations rather scarce.
The geometrical symmetry of the resulting crystal structure is the other major determinant.
A binary compound of two valence four elements (RX), for example, is more probable than a
compound of a valence four and a valence three element (R3X4). The effect of both of these
probability factors is accentuated in Division II, where the displacements corresponding to
the normal valence have the relatively high values of 5 or more. Consequently, this valence
is utilized only to a limited extent in this division, and is generally replaced by one of the
alternative valences.
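The proportions quoted in these examples follow directly from the requirement that the positive and negative valences balance in the smallest whole-number ratio. A short sketch of this balancing rule (illustrative only; the function name is ours):

from math import gcd

def proportions(positive_valence, negative_valence):
    # m positive atoms and n negative atoms must satisfy
    # m * positive_valence = n * negative_valence, with m and n minimal.
    g = gcd(positive_valence, negative_valence)
    return negative_valence // g, positive_valence // g   # (m, n) in RmXn

assert proportions(4, 4) == (1, 1)   # two valence four elements: RX
assert proportions(4, 3) == (3, 4)   # valence four with valence three: R3X4
assert proportions(1, 2) == (2, 1)   # valence one with oxygen (-2): R2X, e.g. Na2O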
Inasmuch as the basic requirement for the formation of a chemical compound is the
neutralization of the negative electric displacement, the alternative positive valences are
simply the results of the various ways in which the atomic rotation can be oriented to attain
an effective positive displacement that will serve the purpose. Since each type of valence
corresponds to a particular orientation, the subsequent discussion will be carried on in terms
of valence, the existence of a corresponding orientation in each case being understood.
The predominant Division III valence is based on balancing the 8-x displacement (positive
because of the zero point reversal) against the displacement of the negative component. The
resulting relative displacement is 8, which, as explained earlier, is the equivalent of zero. We
will call this the neutral valence. This valence also plays a prominent part in the purely
Division IV compounds.
The higher Division III members of Groups 4A and 4B are unable to utilize the 8-x neutral
valence because for these elements the values of 8-x are less than zero, and therefore
meaningless. Instead, these elements form compounds on the basis of the next higher
equivalent of zero displacement. Between the 8-unit level and this next zero equivalent there
are two effective initial units of motion, as well as an 8-unit increment. The total effective
displacement at this point is therefore 18, and the secondary neutral valence is 18-x. A
typical series of compounds utilizing this valence, the oxides of the Division III elements of
Group 4A, consists of HfO2, Ta2O5, WO3, Re2O7, and OsO4.
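The arithmetic behind this series can be checked directly. In the sketch below (illustrative only), the value 18 is the 8-unit level plus the 8-unit increment plus the two initial units, and the secondary neutral valences of these elements are balanced against the oxygen valence 2 to reproduce the oxides just listed:

# The next zero equivalent above the 8-unit level: 8 + 8 + 2 = 18.
assert 8 + 8 + 2 == 18

# Secondary neutral valences (18-x) for the higher Division III members of
# Group 4A, balanced against oxygen (valence 2).
oxides = {                      # element: (valence, metal atoms, oxygen atoms)
    'Hf': (4, 1, 2),            # HfO2
    'Ta': (5, 2, 5),            # Ta2O5
    'W':  (6, 1, 3),            # WO3
    'Re': (7, 2, 7),            # Re2O7
    'Os': (8, 1, 4),            # OsO4
}
for element, (valence, n_metal, n_oxygen) in oxides.items():
    assert n_metal * valence == n_oxygen * 2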
Symmetry considerations favor balancing two electric displacements to arrive at the
necessary space-time equilibrium, where conditions permit, but where the all-electric
orientation encounters difficulties, it is possible for one of the magnetic rotations to take the
positive role in the inter-atomic equilibrium. The magnetic valences, which apply in these
magnetic-electric orientations, are the most common basis of combination in Division II,
where the positive valences are high, and the neutral valences are excluded because the 8–x
displacement is negative. They also make their appearance in the other three divisions where
probability considerations permit.
Each element has two magnetic rotations and therefore has two possible first order magnetic
valences. In alternate groups the two rotations are equal, where no environmental influences
are operative, and on this basis the number of magnetic valences should be reduced to one in
half of the groups. As we saw in our original consideration of the atomic rotation in Chapter
10, however, any element can rotate with an addition of positive electric rotational
displacement to the appropriate magnetic rotation, or with an addition of negative electric
rotational displacement to the next higher magnetic rotation. Because of this flexibility, the
limitation of the elements of alternate groups to a single magnetic valence actually applies
only to the elements of Division I. Here this restriction has no real significance, as the
elements of this division make little use of the magnetic valence in any event, because of the
high probability of the low positive valences.
To distinguish between the two magnetic valences, we will call the larger one the primary
magnetic valence, and the smaller one the secondary magnetic valence. Neither of these
valences has any inherent probability advantage over the other, but the geometrical
considerations previously mentioned do have a significant effect. For instance, where the
magnetic valence can be either two or three, a combination with a valence three negative
element takes the form R3X2 if the magnetic valence is two, and the form RX if the alternate
valence prevails. The latter results in the more symmetrical, and hence more probable,
structure. Conversely, if the negative element has valence four, the R2X structure developed
on the basis of a magnetic valence of two is more symmetrical than the R4X3 structure that
results if the magnetic valence is three, and it therefore takes precedence.
Many of the theoretically possible magnetic valence compounds that are on the borderline of
stability, and do not make their appearance as independent units, are stable when joined with
some other valence combination. For example, there are three theoretically possible first
order valence oxides of carbon: CO2 (positive electric valence), CO (primary magnetic
valence), and C2O (secondary magnetic valence). The first two are common compounds.
C2O is not. But there is another well-known compound, C3O2, which is obviously the
combination CO·C2O. As we will see later, this ability of the less stable combinations to
participate in complex structures plays an important role in compound formation.
The first order valences of the elements, the valences that have been discussed thus far, are
summarized in Table 7. The great majority of the true chemical compounds of all classes are
formed on the basis of these valences.
TABLE 7
FIRST ORDER VALENCES

                 Magnetic Valences            Electric Valences
Group  Division  Primary  Secondary  Element  Normal  Neutral  Negative

1B     IV        1        1          H                         1

1B     0         2        1          He

2A     I         2        1          Li       1
                                     Be       2
                                     B        3
                                     C        4

2A     IV        2        1          C                4        4
                                     N                5        3
                                     O                         2
                                     F                         1

2A     0         2        2          Ne

2B     I         2        2          Na       1
                                     Mg       2
                                     Al       3
                                     Si       4

2B     IV        3        2          Si               4        4
                                     P                5        3
                                     S                6        2
                                     Cl               7        1

2B     0         3        3          Ar

3A     I         3        2          K        1
                                     Ca       2
                                     Sc       3
                                     Ti       4

3A     II        3        2          V        5
                                     Cr       6
                                     Mn       7
                                     Fe       8
                                     Co

3A     III       3        2          Ni
                                     Cu               1
                                     Zn               2
                                     Ga               3
                                     Ge               4

3A     IV        3        2          As               5        3
                                     Se               6        2
                                     Br               7        1

3A     0         3        3          Kr

3B     I         3        3          Rb       1
                                     Sr       2
                                     Y        3
                                     Zr       4

3B     II        4        3          Nb       5
                                     Mo       6
                                     Tc       7
                                     Ru       8
                                     Rh

3B     III       4        3          Pd
                                     Ag               1
                                     Cd               2
                                     In               3
                                     Sn               4

3B     IV        4        3          Sb               5        3
                                     Te               6        2
                                     I                7        1

3B     0         4        3          Xe

4A     I         4        3          Cs       1
                                     Ba       2
                                     La       3
                                     Ce       4

4A     II        4        3          Pr       5
                                     Nd       6
                                     Pm       7
                                     Sm       8
                                     Eu
                                     Gd
                                     Tb
                                     Dy
                                     Ho
                                     Er
                                     Tm
                                     Yb

4A     III       4        3          Lu
                                     Hf               4*
                                     Ta               5*
                                     W                6*
                                     Re               7*
                                     Os               8*
                                     Ir
                                     Pt
                                     Au               1
                                     Hg               2
                                     Tl               3
                                     Pb               4

4A     IV        4        3          Bi               5        3
                                     Po               6        2
                                     At               7        1

4A     0         4        4          Rn

4B     I         4        4          Fr       1
                                     Ra       2
                                     Ac       3
                                     Th       4

4B     II        5        4          Pa       5
                                     U        6
                                     Np       7
                                     Pu       8
                                     Am
                                     Cm
                                     Bk
                                     Cf
                                     Es
                                     Fm
                                     Md
                                     No

4B     III       5        4          Lr
                                     Rf               4*
                                     Ha               5*

(* Sec.)
There is also an alternate type of inter-atomic orientation that gives rise to what we may call
second order valences. As has been emphasized in the previous discussion, an equilibrium
between positive and negative rotational displacements can take place only where the net
resultant is zero, or the equivalent of zero, because any value of the space-time ratio other
than unity (zero displacement) constitutes motion, and makes fixed equilibrium positions
impossible. In the most probable condition, the initial level from which each rotation
extends is the same zero point, or, where the nature of the orientation requires different zero
points, the closest combination that is possible under the circumstances. This arrangement,
the basis of the first order valences, is clearly the most probable, but it is not the only
possibility.
Inasmuch as the separation between natural zero points (unit speed levels) is two linear units
(or eight three-dimensional units) it is possible to establish an equilibrium in which the
initial level of the positive rotation (the positive zero) is separated from the initial level of
the negative rotation (the negative zero) by two linear units. The effect of this separation on
the valence is illustrated in Fig. 2.
Fig. 2: (a) (b) (c)
The basis of the first order valences is shown in (a). Here the normal positive valence V
balances an equal negative valence V at an equilibrium point represented by the double line.
In (b) the initial level of the positive rotation has been offset to the next zero point, two units
distant from the point of equilibrium. These two units, being on the positive side of the
equilibrium point, add to the effective positive displacement, and the positive valence
therefore increases to V+2; that is, V+2 negative valence units are counterbalanced. In (c) it
is the initial level of the negative rotation that has been offset from the point of equilibrium.
Here the two intervening units add to the effective negative displacement, and the positive
valence decreases to V-2, as the V units of positive displacement are now able to balance
only V-2 negative valence units.
By reason of the availability of the zero point modifications illustrated in Fig.2(b), each of
the positive first order valences corresponds to a second order valence, an enhanced valence,
as we will call it, that is two units greater in the case of the direct valences (x+2), and two
units less for the inverse valences: 8 - (x+2) = 6-x. Compounds based on enhanced normal
valences are relatively uncommon, as the normal valence itself has a high degree of
probability, and the enhanced valence is not only inherently less probable, but also has a
higher effective displacement in any specific application, which decreases the relative
probability still further. The probability factors are more favorable for the enhanced neutral
valence, as in this case the effective displacement is less than that of the corresponding first
order valences. The compounds of this type are therefore more numerous, and they include
such well-known substances as SO2 and PCl3. An interesting application of this valence is
found in ozone, which is an oxide of oxygen, analogous to SO2.
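The enhanced valences are thus simple offsets of the first order values, and the compounds just cited conform to them. A sketch (illustrative only; the function names are ours, and the negative displacements are the Division IV values of Table 7):

def enhanced_normal(x):
    # Positive zero point offset by two units: direct valence x + 2.
    return x + 2

def enhanced_neutral(x):
    # The neutral valence 8 - x with the same two-unit offset: 8 - (x + 2) = 6 - x.
    return 6 - x

assert enhanced_normal(1) == 3 and enhanced_normal(4) == 6   # direct valences offset by two

# Enhanced neutral valences balanced in the compounds cited above.
assert enhanced_neutral(2) == 4            # sulfur (displacement 2): valence 4
assert 1 * enhanced_neutral(2) == 2 * 2    # SO2: one S(+4) against two O(-2)
assert enhanced_neutral(3) == 3            # phosphorus (displacement 3): valence 3
assert 1 * enhanced_neutral(3) == 3 * 1    # PCl3: one P(+3) against three Cl(-1)
assert 1 * enhanced_neutral(2) == 2 * 2    # ozone: one O(+4) against two O(-2)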
It should theoretically be possible for valences to be diminished by orientation in the manner
shown in Fig.2 (c), but it is doubtful if any stable compounds are actually formed on the
basis of diminished electric valences. The reason for their absence is not yet understood. The
magnetic valences are both enhanced and diminished. Either the primary or the secondary
valence may be modified, but since enhancement is in the direction of lower probability
(higher numerical value) the number of common compounds based on the enhanced
magnetic valences is relatively small. Diminishing the valence improves the probability, and
the diminished valence compounds are therefore more plentiful in the rotational groups in
which they are possible (those with primary magnetic valences above two), although the list
is still very modest compared to the immense number of compounds based on the first order
valences.
As indicated earlier, one component of any true chemical compound must have a negative
displacement of four or less, as it is only through the establishment of an equilibrium
between such a negative displacement and an appropriate positive displacement that the
compound comes into existence. The elements with the required negative displacement are
those which comprise Division IV, and it follows that every compound must include at least
one Division IV element, or an element which has acquired Division IV status by valence
enhancement. If there is only one such component, the positive-negative orientation is fixed,
as the Division IV element is necessarily the negative component. Where both components
are from Division IV, however, one normally negative element must reorient itself to act in a
positive capacity, and a question arises as to which retains its negative status.
The answer to this question hinges on the relative negativity of the elements concerned.
Obviously a small displacement is more negative than a large one, since it is farther away
from the neutral point where positive and negative displacements of equal magnitude are
equivalent. Within any one group the order of negativity is therefore the same as the
displacement sequence. In Group 2B, for instance, the most negative element is chlorine,
followed by sulfur, phosphorus, and silicon, in that order. This means that the negative
component in any Division IV chlorine-sulfur combination is chlorine, and the product is a
compound such as SCl2, not ClS or Cl2S. On the other hand, the compound P2S3 is in order,
as phosphorus is normally positive to sulfur.
Where the electric displacements are equal, the element with the smaller magnetic
displacement is the more negative, as the effect of a greater magnetic displacement is to
dilute the negative electric rotation by distributing it over a larger total displacement. We
therefore find ClF3 and IBr3, but not FCl3 or BrI3. The magnitude of the variation in
negativity due to the difference in magnetic displacement is considerably less than that
resulting from inequality of electric displacement, and the latter is therefore the controlling
factor except where the electric displacements are the same in both components.
On the foregoing basis, all elements of Divisions I, II, and III are positive to Division IV
elements. The displacement 4 elements on the borderline between Divisions III and IV
belong to the higher division when combined with elements of lower displacement, and
when elements lower in the negative series acquire valences of 4 or more through
enhancement or reorientation they also assume Division III properties and become positive
to the other Division IV elements. Thus chlorine, which is negative to oxygen in the purely
Division IV compound OCl2, is the positive component in Cl2O7. Similarly, the normal
relations of phosphorus and sulfur, as they exist in P2S3, are reversed in S3P4, where sulfur
has the valence 4.
Hydrogen, like the displacement 4 members of the higher groups, is a borderline element,
and because of its position is able to assume either positive or negative characteristics. It is
therefore positive to all purely negative elements (Division IV below valence 4), but
negative to all strictly positive elements (Divisions I and II), and to the elements of Division
III. Because of its lower magnetic displacement, it is also negative to the higher borderline
elements: carbon, silicon, etc. The fact that hydrogen is negative to carbon is particularly
significant in view of the importance of the carbon-hydrogen combination in the organic
compounds.
Another point that should be noted here is that when hydrogen acts in a positive capacity, it
does so as a Division III element, not as a member of Division I. Its + 1 valence is therefore
magnetic. This is why hydrogen was assigned only to the negative position in the revised
periodic table, rather than giving it two positions, as has been customary.
The variation in negativity with the size of the magnetic displacement has the effect of
extending the Division III behavior into Division IV to a limited extent in the higher groups.
Lead, for example, has practically no Division IV characteristics, and bismuth has less than
its counterparts in the lower groups. At the lower end of the atomic series this situation is
reversed, and the Division IV characteristics extend into Division III, as an alternative to the
normal positive behavior of some of the elements of that division. Silicon, for instance, not
only forms combinations such as MnSi and CoSi3, which, on the basis of the information
currently available, appear to be intermetallic compounds similar to those of the higher
Division III elements, but also combinations such as Mg2Si and CaSi2, which are probably
true compounds analogous to Be2C and CaC2. Carbon carries this trend still farther and
forms carbides with a wide variety of positive components.
In the 2A group, the Division IV characteristics extend to the fifth element, boron. This is
the only case in which the fifth element of a series has Division IV properties, and the
behavior of boron in compound formation is correspondingly unique. In its Division I
capacity, as the positive component in compounds such as B2O3, boron is entirely normal.
But its first order negative valence would be -5. Formation of compounds based on this -5
valence conflicts with the previously stated limitation of the negative valence to a maximum
of four units. Boron therefore shifts to an enhanced negative valence, adding two positive
units to its first order value of -5, with a resultant of -3. The direct combinations of boron
with positive elements have such structures as FeB and Cu3B2. However, many of the
borides have complex structures in which the effective valences are not as clearly indicated.
This raises a question as to whether boron may be an exception to the rule limiting the
maximum negative valence to -4, and may utilize both the -5 and
-3 valences. This issue will be considered in the next chapter.
CHAPTER 19
Complex Compounds
The discussion in the preceding chapter had direct reference only to compounds of the
type RmXn, in which m positive atoms are combined with n negative atoms, but the
principles therein developed are applicable to all combinations of atoms. Our next
objective will be to apply these principles to an examination of some of the more
complex situations.
Any atom in one of the simple compounds may be replaced by another atom of the same
valence number and type. Thus any or all of the four chlorine atoms in CCl4 may be
replaced by equivalent negative atoms, producing a whole family of compounds such as
CCl3Br, CCl2F2, CClI3, CF4, etc. Or we may replace n of the valence one chlorine
atoms by one atom of negative valence n, obtaining such compounds as COCl2, COS,
CSTe, and so on. Replacements of the same kind can be made in the positive component,
producing compounds like SnCl4.
Simple replacement by an atom of a different valence type is not possible. Copper, for
instance, has the same numerical valence as sodium, but the sodium atoms in a compound
such as Na2O are not replaceable by copper atoms. There is a compound Cu2O, but the
neutral valence structure of this compound is very different from the normal valence
structure of Na2O. Similarly, if we exchange a positive hydrogen (magnetic valence)
atom for one of the sodium (normal valence) atoms in Na2O, the process is not one of
simple replacement. Instead of NaHO, we obtain NaOH, a compound of a totally
different character.
A factor which plays an important part in the building of complex molecular structures is
the existence of major differences in the magnitudes of the rotational forces in the various
inter-atomic combinations. Let us consider the compound KCN, for example. Nitrogen is
the negative element in this compound, and the positive-negative combinations are K-N
and C-N. When we compute the inter-atomic distances, by means of the relations that
will be developed later, we find that the values in natural units are .904 for K-N and .483
for C-N.
As stated in Chapter 18, the term "bond" is not being used in this work in any way
connected with the subject matter of that chapter: the combining power or valence. The
term "valence bond," or any derivative such as "covalent bond," has no place in the
theoretical structure of the Reciprocal System. However, use of the word "bond" is
convenient in referring to the cohesion between specific atoms, atomic groups, or
molecules, and in the subsequent discussion it will be employed in this restricted sense.
On this basis we may say that the force of cohesion, or "bond strength," is considerably
greater for the C-N bond than for the K-N bond, as is indicated by the difference in the
inter-atomic distances.
It has usually been assumed that this force of cohesion is an indication of the strength of
the inter-atomic forces, but in reality the relation is inverse. As explained in Chapter 8,
the gravitational forces exerted by the atoms, the forces due to the atomic rotation, are
forces of repulsion in the time region, and the cohesion is therefore greater when the
rotational forces are weaker. The short C-N distance, and the corresponding strength of
this bond, are the results of inactive force dimensions in this combination which reduce
the effective repulsive force, and require the atoms to move closer together to establish
equilibrium with the constant force of the progression of the reference system.
Because of its greater strength, the C-N bond remains intact through many processes
which disrupt or modify the K-N bond, and the general behavior of the compound KCN
is that of a K-CN combination rather than that of a group of independent atoms such as
we find in K2 O. Groups like CN which have relatively high bond strengths and are
therefore able to maintain their identity while changes are taking place elsewhere in the
compounds in which they exist are called radicals. Inasmuch as the special properties of
these radicals are due to the differences between their bond strengths and those of the
other bonds within the compounds, the extent to which any particular group acts as a
radical depends on the magnitude of these differences. Where the inter-atomic forces are
very weak, and the bond is correspondingly strong, as in the C-N combination, the radical
is very resistant to separation, and acts as a single atom in most respects. At the other
extreme, where the differences between the various inter-atomic forces in the molecule
are small, the boundary line between radicals and non-radical atomic groupings is rather
vague.
The stronger radicals are definite structural groups. NH4 is, to a large degree, structurally
interchangeable with the sodium atom, OH can substitute for I in the CdI2 crystal without
changing the structure, and so on. The weakest radicals, those with the smallest margins
of bond strength, crystallize in structures in which the radical, as such, plays no part, and
the structural units are the individual atoms. The perovskite (CaTiO3) structure is a
familiar example. Here each atom is structurally independent, and hence this type of
arrangement is available for a compound like KMgF3 in which there definitely are no
radicals, as well as for a compound such as KIO3 which contains a borderline group.
From a structural standpoint the IO3 group in KIO3 is not a radical, although it acts as a
radical in some other physical phenomena, and is commonly recognized as one.
From a thermal standpoint, for example, the IO3 group is definitely a radical at low
temperatures, the entire group acting as a unit. But unlike the strong radicals such as OH
and CN, which maintain this single unit status under all ordinary conditions, IO3
separates into two thermal units at higher temperatures. Other radical groups are still less
resistant to the thermal forces. The CrO3 group, for example, acts as a single thermal unit
at the lower temperatures, but in the upper part of the solid temperature range all four
atoms are thermally independent. The thermal behavior of chemical compounds,
including the examples mentioned, will be discussed in a subsequent volume.
In order to take the place of single atoms in the three-dimensional inorganic structures,
the radicals must have three-dimensional force distributions, and where some of the inter-
atomic forces are inherently two-dimensional, as is true in some of the lower group
elements, for reasons that will be explained later, the three-dimensional distribution must
be achieved by the geometrical arrangement. The typical inorganic radical therefore
consists of a group of satellite atoms clustered three-dimensionally around one or more
central atoms. Inasmuch as the satellite atoms are between the central atom and the
opposite component of the compound, the effective valence of the radical must have the
same sign as that of the satellite atoms. This limitation on the net valence means that the
great majority of these inorganic radicals are negative, as hydrogen is the only element
that has a two-dimensional force distribution when acting in a positive capacity. The most
important hydrogen radical of this class is ammonium, NH4, in which hydrogen has the
magnetic valence 1 and nitrogen the negative valence 3 for a net group valence of +1.
The phosphonium radical is similar, but less common. A variation of NH4 is the
tetramethylammonium radical N(CH3)4, in which the hydrogen atoms are replaced by
positive CH3 groups.
The theoretically possible number of negative radicals is very large, but the effect of
probability factors limits the number of those actually existing to a small fraction of the
number that could theoretically be constructed. Other things being equal, those groups
with the smallest net displacement are the most probable, so we find BO2 -1 commonly,
and BO3 -3 less frequently, but not BO4 -5, BO5 -7, or the other higher members of this
series. Geometrical considerations also enter into the situation, the most probable
combinations, where other features are equal, being those in which the forces can be
disposed most symmetrically.
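The displacement totals behind this probability ordering are easily verified: with boron contributing its normal valence +3 and each oxygen -2, the net valence of a BOn group is 3 - 2n. A sketch (illustrative only; the function name is ours):

def borate_net_valence(n_oxygen, boron_valence=3, oxygen_valence=-2):
    # Net valence of a BOn group: one boron atom plus n oxygen atoms.
    return boron_valence + n_oxygen * oxygen_valence

assert borate_net_valence(2) == -1   # BO2, the common radical
assert borate_net_valence(3) == -3   # BO3, found less frequently
assert borate_net_valence(4) == -5   # BO4, too high a displacement to occur
assert borate_net_valence(5) == -7   # BO5, likewise absent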
The status of the binary radicals such as OH, SH, and CN, is ambiguous on the basis of
the criteria developed thus far, since there is no distinction between central and satellite
atoms in their structures, but these groups can be included with the inorganic radicals
because they are able to enter into the three-dimensional inorganic geometric
arrangements.
Another special class of radicals combines positive and negative valences of the same
element. Thus there is the azide radical N3, in which one nitrogen atom with the neutral
valence +5 is combined with two negative nitrogen atoms, valence -3 each, for a group
total of -1. Similarly, a carbon atom with the primary magnetic valence +2 joins with a
negative carbon atom, valence -4, to form the carbide radical, C2, with a net valence of -2.
The common boride radicals, the combination boron structures mentioned in Chapter 18,
are B2, B4, and B6. The best known B4 compounds are all direct combinations with
valence 4 elements of Division I. It can therefore be concluded that the net valence of the
B4 combination is -4. Similarly, the role of B6 in such compounds as CaB6 and BaB6
indicates that the net valence of the B6 radical is -2. The status of B2 is not as clearly
indicated, but it also appears to have a net valence of -2; that is, it is simply half of the B4
combination. This net valence of -2 could be produced either by a combination of the -3
negative valence with the secondary magnetic valence, +1, or by a combination of the -5
negative valence with the positive valence +3. The same two alternatives are available for
B4. The combination of +1 and -3 valences is also feasible for the radical B6, and on the
basis of these values the valences of all of the boride radicals constitute a consistent
system, as shown by the following tabulation:
       Positive    Negative    Net
B2     B+1         B-3         -2
B4     2 B+1       2 B-3       -4
B6     4 B+1       2 B-3       -2
On the other hand, the B6 radical cannot be produced by a combination of +3 and -5
valences, and in order to utilize the -5 valence it would be necessary to substitute valence
+2 in the positive position. The -3 negative valence thus leads to a more consistent set of
combinations, as well as being consistent with the boron valence in the direct combinations
of boron with positive elements. At least for the present, therefore, it will have to be
concluded that the weight of the evidence favors a single negative valence (-3) for boron.
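Both the consistency of the tabulation above and the failure of the +3/-5 alternative for B6 reduce to simple addition. A sketch (illustrative only; the function name is ours):

def net_valence(members):
    # Sum the valences of the atoms making up the radical;
    # members is a list of (number of atoms, valence per atom) pairs.
    return sum(count * valence for count, valence in members)

# The tabulated combinations: secondary magnetic valence +1 with the -3 negative valence.
assert net_valence([(1, +1), (1, -3)]) == -2   # B2
assert net_valence([(2, +1), (2, -3)]) == -4   # B4
assert net_valence([(4, +1), (2, -3)]) == -2   # B6

# The +3/-5 alternative also reproduces B2 and B4 ...
assert net_valence([(1, +3), (1, -5)]) == -2
assert net_valence([(2, +3), (2, -5)]) == -4
# ... but no whole-number split of six boron atoms between +3 and -5 gives -2,
# so the B6 radical cannot be formed on that basis.
assert all(net_valence([(n, +3), (6 - n, -5)]) != -2 for n in range(7))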
The general principles of compound formation developed for the simpler combinations
apply with equal force to compounds containing radicals of the inorganic class. The
basic requirement is that the group valence of the radical be in equilibrium with an equal
and opposite valence. A negative radical such as SO4 therefore joins the necessary
number of positive atoms to form a compound on the order of K2SO4. The positive NH4
radical similarly joins with a negative atom to produce a compound like NH4Cl. Or both
components may be radicals, as in (NH4)2SO4.
One new factor introduced by the grouping is that the relative negativity of the atoms
within the group no longer has any significance. The azide group, N3, for instance, is
negative, and cannot be anything but negative. In the compound ClN3, then, the chlorine
atom is necessarily positive, even though chlorine is negative to nitrogen in direct
Division IV combinations such as NCl3.
In the magnetic valence compounds the negative electric displacement is in equilibrium
with one of the magnetic displacements of the positive component. This leaves the
positive electric displacement free to exert a directional influence on other molecules or
atoms. In its general aspects, this directional effect is similar to the orienting influence of
the space-time equilibrium that is required in order to enable atoms of negative elements
to join with other atoms in compounds. In both cases there are certain relative positions
of the interacting atoms or molecules that permit a closer approach, which results in a
greater cohesive force. Neither of these orienting agencies contributes anything to the
cohesive forces; they simply hold the participants in the positions in which the stronger
forces are generated. Without the directional restrictions imposed by these orienting
influences the relative positions would be random, and the greater cohesive forces would
not develop.
Since all magnetic valence compounds have free electric displacements, they all have
strong combining tendencies, forming what we may call molecular compounds; that is,
compounds in which the constituents are molecules instead of the individual atoms or
radicals of the atomic compounds. Inasmuch as the free electric displacements are all
positive, there is no valence equilibrium involved, and the molecular compounds can be
of almost any character, but geometrical and symmetry considerations favor associations
with units of the same kind, or with closely related units. Double molecules of a
compound are not readily recognized in the solid or liquid states, but in spite of the
obstacles to recognition there are many well-known combinations such as FeO, Fe2O3,
C2O, CO, etc. Water and ammonia, both magnetic valence compounds, are particularly
versatile in forming combinations of this type, and join with a great variety of substances
to form hydrates and ammoniates.
There is only one free electric displacement in any binary magnetic valence combination,
and the orienting effect is therefore exerted in only one direction. When the active
molecular orientation effects, as we will call them, of a pair of molecules such as FeO
and Fe2O3 are directed toward each other, the system is closed, and the resulting Fe3O4
association has no further combining tendencies. Even where several H2O molecules
combine with the same base molecule, as is very common, the association is between the
base molecule and each H2O molecule individually. A different situation develops where
a two-dimensional molecule is formed on the basis of a magnetic valence. Here the inter-
molecular distance may be reduced to the point where three molecules are within a single
natural unit of space, in which case each molecule exerts an orienting effect not only
upon its immediate neighbor in the active direction, but also upon the next molecule
beyond it.
Limitation of the effective inter-atomic forces to two dimensions in this class of
compounds contributes to the extension of the magnetic orientation effects in two
separate ways. First, it reduces the inter-atomic distance by one third, since there is no
effective rotational force in the third dimension. In the compound lithium chloride, for
example, the distance between lithium and chlorine atoms on a three-dimensional basis
would be 1.321 natural units. By reason of the two-dimensional orientation, this drops to
.881 units. Then, the distance between molecules 1 and 3 is further reduced by the
geometric effect illustrated in Fig. 3. In an aggregate in which the structural units are
arranged three-dimensionally, as in (a), molecule 2 interposes its full diameter between
molecules 1 and 3. Where the inter-atomic distance is x, the distance between the centers
of molecules 1 and 3 is then 4x. But if the structural units are arranged two-
dimensionally, as in (b), this distance is reduced to 2y, where y is the distance between
adjacent central atoms.
In the case of lithium chloride, this reduction is not sufficient to enable any interaction
between molecules 1 and 3, as the 2y distance is 1.398, and no effect is exerted where
this distance exceeds unity. But there are other compounds, particularly those of carbon
and nitrogen, in which the 2y distance is, or can be, less than unity. The C-C distance, for
example, ranges from .406 to .528.

Figure 3: (a) structural units arranged three-dimensionally; (b) structural units arranged two-dimensionally

With some aid from the geometric arrangement in the case of the greater distances, a
large number of carbon compounds
based on the magnetic orientation are within the range where the orienting effects of the
free electric displacement extend to the third molecule.
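The unit-distance criterion just described can be put in the form of a small illustrative calculation. The following Python sketch is only an aid to the reader, not part of the original development; the 1.398 value is the lithium chloride figure quoted above, and the 0.95 value is a hypothetical 2y distance below unity.

def secondary_effect_possible(two_y):
    # The orienting effect of the free electric displacement reaches the
    # third molecule only when the 2y distance is less than one natural
    # unit of space.
    return two_y < 1.0

print(secondary_effect_possible(1.398))  # lithium chloride value from the text: False
print(secondary_effect_possible(0.95))   # hypothetical 2y below unity: True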
These two-dimensional magnetic valence molecules with very short inter-atomic
distances are actually stable structures with their negative electric rotations fully
counterbalanced by appropriate positive magnetic rotations, and they are therefore
capable of independent existence in the manner of the other molecules that we have
considered. Because of their strong combining tendencies, however, most of them do not
actually lead an independent life more than momentarily if there are other molecules
present with which they can combine, and in recognition of the fact that they are
normally constituents of molecular compounds rather than molecules in their own right
we will hereafter refer to them as magnetic neutral groups.
While there are many atomic combinations with inter-atomic distances less than one half
natural unit, or so close to this figure that they can be brought within it by structural
modifications, the number of such combinations that can form magnetic neutral groups is
limited by various factors such as probability, valence, relative negativity, etc. Thus the
combinations CN and OH are excluded because they have active valences; that is, they
are negative radicals, not neutral groups. NH2 is excluded by a probability situation that
will be discussed later; OH2 is excluded because hydrogen is strongly positive to oxygen,
and so on. Furthermore, the binary valence two combinations are subject to an additional
restriction. Its exact nature is not yet clear, but its effect is to put CO at the limit of
stability, so that combinations such as NO and CS are excluded. The practical effect of
these several restrictions, together with the limitations on the inter-atomic distance, is to
confine the magnetic neutral groups, aside from CO, almost entirely to combinations of
carbon, nitrogen, and boron with valence one negative atoms or radicals.
In the subsequent discussion we will find it convenient to use a diagram which identifies
the orientation effects that are exerted by the various structural units, and thus shows how
the different types of molecular compounds are held in combining positions; that is,
positions in which the inter-group cohesive forces are maximized. In the diagram we will
represent valence effects by double lines, as in CH3 =OH, while the primary molecular
orientation effect will be represented by single lines, as in CH-CH. The secondary
molecular effects exerted on the third group in line will then be shown by connecting
lines, with arrows to indicate the direction of the orienting effect.
As this diagram indicates, there is a primary orientation effect between CH groups 1 and
2, and between groups 3 and 4. Because these effects are unidirectional, and paired, there
is no interaction between groups 2 and 3. If the CH groups were three-dimensional, like
the FeO and Fe2 O3 molecules previously mentioned, there would be no combination
between the 1-2 pair and the 3-4 pair, and the result would be two CH-CH molecules. But
because group 3 is within one unit of distance of group 1, the orienting effect of the free
electric displacement of group 1, which acts at short range against group 2, also acts
against group 3 at longer range, as shown in the diagram. Similarly, the 4-3 effect acts at
long range against group 2. Thus the 1-2 and 3-4 pairs are held in the combining position
by the secondary orientation effects in spite of the lack of any primary effect between
groups 2 and 3.
The relation of these orienting influences to the cohesion between the constituents of the
atomic or molecular compound can be compared to the effect of a reduced temperature
on a saturated liquid. The result of the lower temperature is solidification, and in the solid
there is an additional cohesive force between the atoms that did not exist in the liquid, but
this new force is not supplied by the temperature. What the change in the temperature
actually accomplished was to create the necessary conditions under which the atoms
could assume the relative positions in which the inter-atomic forces of cohesion are
operative. Similarly, the orienting effects of the valence equilibrium and the free
rotational displacement of the magnetic neutral groups do not provide the forces that hold
the molecules together; they merely create the conditions which allow the stronger
cohesive forces to operate.
When the atoms or neutral groups are subjected to the orienting effects that permit them
to establish equilibrium at one of the shorter inter-atomic or inter-group distances, it is
the point of equilibrium between the rotational forces and the oppositely directed force
due to the progression of the natural reference system that determines the magnitude of
the cohesive forces. An important consequence is that the cohesive force between any
two specific magnetic neutral groups is the same regardless of whether the orientation
results from the short range primary effect, or the long range secondary effect, of the free
electric displacements. In the preceding diagram, the magnitude of the cohesive force
between groups 2 and 3 is identical with that of the 1-2 and 3-4 forces. It is simply the
cohesive force between two CH groups. As we will see later, particularly in Chapter 21,
this point is quite significant in connection with the attempts that are being made to draw
conclusions concerning the molecular structure from the magnitudes of the inter-group
forces.
As the diagram indicates by the arrows at the two ends of the four-group combination, the
2-1 and 3-4 secondary orientation effects are not satisfied, and they are capable of
extension to any other atom or group that comes within range. Such a combination of
neutral groups is therefore open to further combination in both directions. The system is
not closed by the addition of more groups of the same character, since this still leaves
active secondary orientation effects at each end of the combined structure. The unique
combining power that results from this continuation of the secondary effects gives rise to
an extremely large and complex variety of chemical compounds. There is almost no limit
on the number of groups that can be joined. As long as each end of the molecule is a
magnetic neutral group with an active secondary effect, there are still two active ends no
matter how many groups are added.
The necessary closure to form a compound without further combining tendencies can be
attained in one of two ways. Enough of these magnetic neutral groups may combine to
permit the ends of the chain to swing around and join, satisfying the unbalanced
secondary effects, and creating a ring compound. Or, alternatively, the end groups may
attach themselves to atoms or radicals which do not have the orienting effects of the
magnetic groups. Such additions close the system and form a chain compound. Both the
chain and ring structures are known as organic compounds, a name surviving from the
early days of chemistry, when it was believed that natural products were composed of
substances of a nature totally different from that of the constituents of inorganic matter.
As used herein, the term ―organic‖ will refer to all compounds with the characteristic
two-dimensional magnetic valence structure, rather than being defined as usual to cover
only carbon compounds with certain exceptions. The excluded carbon compounds are
practically the same under both definitions, and the only significant difference is that in
this work a few additional compounds, such as the hydronitrogens, which have the same
type of structure as the organic carbon compounds are included in the organic
classification.
The valence equilibrium must be maintained in the chain compounds, and the addition of
a positive radical or atom at one end of the chain must be balanced by the addition of a
negative unit with the same net valence at the other end. This equilibrium question does
not arise in connection with the ring compounds as all of the structural units in the ring
are either magnetic neutral groups or neutral associations of atoms or groups with active
valences. Here the complete valence balance is achieved within the groups or
associations.
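The valence bookkeeping described in the last two paragraphs can be illustrated with a short Python sketch. It is merely a checking aid under the stated rules (magnetic neutral groups contribute nothing, and the end radicals must cancel); the group valences listed are those discussed in the text, with the labels (+) and (-) added here only to keep the two CH3 roles distinct.

# Example group valences: the positive CH3 and CH radicals are +1, the
# negative CH3 radical (CH2 • H) and the OH radical are -1, a lone negative
# hydrogen atom is -1, and the magnetic neutral groups are zero.
VALENCE = {
    "CH3(+)": +1, "CH(+)": +1,
    "CH3(-)": -1, "OH": -1, "H": -1,
    "CH2": 0, "CHOH": 0, "CO": 0,
}

def net_valence(groups):
    # A stable chain compound must total zero; a ring of neutral groups
    # totals zero automatically, since it has no end radicals.
    return sum(VALENCE[g] for g in groups)

print(net_valence(["CH3(+)", "CH2", "CH2", "CH3(-)"]))  # butane: 0
print(net_valence(["CH3(+)", "CH2", "OH"]))             # ethyl alcohol: 0
print(net_valence(["CH2"] * 6))                         # ring of neutral groups: 0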
In order to join the two-dimensional magnetic group structures any radicals which are to
occupy the end positions must also be two-dimensional. The inherently three-dimensional
inorganic radicals such as NO3, SO4, etc., do not qualify. The two-atom and three-atom
radicals like OH, CN, and NO2 are arranged three-dimensionally in the inorganic
compounds, but they are not necessarily limited to this kind of an arrangement, and they
can be disposed two-dimensionally. These radicals are therefore available for the two-
dimensional compounds.
The two-dimensional structure also reverses the requirement with respect to the net
valence of the radicals. The external contacts of the two-dimensional groups are made
primarily by the central atoms, and instead of having the same direction as that of the
satellite atoms, the net group valence conforms to the valence of the central atom. These
groups, the organic radicals, are therefore opposite in valence to their counterparts
among the inorganic radicals. Corresponding to the positive ammonium radical NH4 is
the negative amine radical NH2 , the negative radical CN- in which carbon has the
magnetic valence 2 has an organic analog in the positive radical CN+, in which carbon
has the normal valence 4, and so on. Furthermore, the combinations of carbon and the
valence one negative elements, including hydrogen, which are inherently two-
dimensional, and are therefore precluded from acting as inorganic radicals, are fully
compatible with the two-dimensional neutral groups. Since there are a large number of
such combinations, the great majority of the organic radicals are structures of this type.
From the foregoing it can be seen that the organic compounds are subject to exactly the
same valence considerations as the inorganic compounds. They are, in fact, atomic
associations of identically the same general nature. The only difference is that the very
short inter-atomic distances in the magnetic valence compounds of the lower group
elements permit the existence of secondary orientation effects that enable these
compounds to unite into complex structures. This unification of the whole realm of
chemical compounds is an example of the kind of simplification that results when the true
reason for a physical phenomenon is ascertained. As we saw in Chapter 18, the formation
of chemical compounds takes place because the atoms of the purely electronegative
elements (Division IV) cannot establish a stable relationship with atoms of other elements
except under certain special conditions in which their negative displacement (motion in
time) is counterbalanced by an appropriate positive displacement of the elements with
which they are interacting. These requirements are equally as applicable to carbon and
the other lower elements as to the constituents of the inorganic compounds. All chemical
compounds are governed by the same general principles.
The clarification of the nature of the organic compounds will, of course, require some
modification of existing chemical ideas. The concept of an electronic origin of the
cohesive forces must be abandoned. Electrons are independent physical entities. They are
not constituents of atoms, and they are not available to generate cohesive forces, even if
they were capable of so doing. (It should be noted that the foregoing statement does not
assert that there are no electrons in the atoms. That is an entirely different issue which
will be given consideration when we are ready to begin a discussion of electrical
phenomena.) The concepts of ―double bonds‖ and ―triple bonds‖ will also have to be
discarded, along with the curious idea of ―resonance,‖ in which a system alternating
between two possible states is supposed to acquire an additional energy component by
reason of the alternation.
Some of the theoretical concepts that are untenable in the light of the new findings, such
as the ―double bonds‖ , have been quite useful in practice, and for this reason many
chemists will no doubt find it difficult to believe that these ideas are actually wrong. As
explained in the introductory discussion, however, much of the progress that has been
made in the scientific field has been made with the help of theories that are now known to
be wrong, and have been discarded. The reason for this is that none of these theories was
entirely wrong. In order to gain any substantial degree of acceptance a theory must be
correct in at least some respects, and, as experience has demonstrated in many cases,
these valid features can contribute materially to an understanding of the phenomena to
which they relate, even though other portions of the theory are totally incorrect.
The necessity of parting with cherished ideas of long standing will be less distressing if it
is realized that the ―double bonds‖ and associated concepts that must now be abandoned
are not tangible physical entities; they are merely inventions by which certain empirical
relations of a mathematical nature are clothed in descriptive language for more
convenient manipulation. Linus Pauling brings this out clearly in the following
statements:
The structural elements that are used in classical structure theory, the carbon-carbon
single bond, the carbon-carbon double bond, the carbon-hydrogen bond, and so on, also
are idealizations, having no existence in reality.... It is true that chemists, after long
experience in the use of classical structure theory, have come to talk about, and probably
to think about, the carbon-carbon double bond and other structural units of the theory as
though they were real. Reflection leads us to recognize, however, that they are not real,
but are theoretical constructs in the same way as the individual Kekule structures for
benzene.68
When a correct theory appears it must include the valid features of the previous incorrect
theory. But the identity of these features as they appear in the context of the different
theories is often obscured by the fact that they are expressed in different language. In the
case we are now considering, current chemical theory says that the cohesion in organic
compounds is due to electronic forces. Development of the Reciprocal System of theory
now leads to the conclusion that there are no electrons in the atomic structures, and
consequently there are no electronic forces. At first glance, then, it would appear that the
new findings repudiate the entire previous structure of thought. On closer examination,
however, it can be seen that the electrons, as such, actually play no part in most of the
explanations of physical and chemical phenomena that are presumably derived from the
electronic theory. The theoretical development actually uses only the numerical values.
For example, the conclusions that are drawn from the positions of the elements in the
periodic table are currently expressed in terms of the number of electrons. Carbon has a
valence of four in its ―saturated‖ condition because it has four electrons in its atomic
structure, so the electronic theory says. It is clear from the empirical evidence that there
actually are four units of some kind in the carbon atom, whereas the sodium atom has
only one unit of this kind. But the empirical observations give us nothing but the numbers
4 and 1; they tell us nothing at all about the nature of the units to which the numerical
values apply. The conclusion that these units are electrons is pure assumption, and the
identification with electrons plays no part in the application of the theory. The maximum
valence of carbon is four, not four electrons.
Moseley's Law, which relates the frequencies of the characteristic x-rays of the elements
to their atomic numbers, is another example. It is currently accepted as ―definite proof‖ of
the existence of specific numbers of electrons in the atoms of these elements.
Conclusions of the same kind are drawn from the optical spectra. In a publication of the
National Bureau of Standards entitled Atomic Energy Levels we find this positive
statement: ―Each chemical element can emit as many atomic spectra as it has electrons.‖
But, in fact, the empirical evidence in both cases contributes nothing but numbers. Here,
again, the observations tell us that certain specific numbers of units are involved, but they
give us no indication as to the nature of these units. So far as we can tell from the
empirical information, they can be any kind of units, without restriction.
Thus, when we discard the electronic theory in application to these phenomena we are
not making any profound change; we are merely altering the language in which our
understanding of the phenomena is expressed. Instead of saying that there are 11
electrons in sodium, one of which is in a particular ―configuration,‖ we say, on the basis
of our theoretical findings, that the total number of effective speed displacement units in
the rotational motions of the sodium atom is 11, and that only one of these applies to the
electric (one-dimensional) rotation. Carbon has 6 total displacement units in its rotational
motions, with 4 in the electric dimension. It follows that in those properties which are
related to the total effective speed displacement (the net total quantity of motion in the
atom) the number applicable to sodium is 11, and that applicable to carbon is 6, while in
those properties which are determined by the displacement in the electric dimension
individually the respective numbers are 1 for sodium and 4 for carbon.
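For convenience, the two sets of numbers quoted in this paragraph can be tabulated; the short sketch below adds nothing beyond that tabulation.

# Total effective speed displacement and electric-dimension displacement,
# as stated in the text, for the two elements used as examples.
displacement = {
    "sodium": {"total": 11, "electric": 1},
    "carbon": {"total": 6, "electric": 4},
}

for element, values in displacement.items():
    print(element, "total:", values["total"], "electric:", values["electric"])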
It is an equally simple matter to translate the formation of ―ionic compounds‖ from the
language of the electronic theory to the language of the Reciprocal System. The
electronic theory says that stability is attained by conforming to the ―electronic
configuration‖ of one of the inert gas elements, and that potassium and chlorine, for
example, accomplish this by transferring one electron from potassium to chlorine, thus
bringing both to the status of the inert gas element argon. The Reciprocal System says
that chlorine has a negative rotational speed displacement of one unit (a unit motion in
time) in its electric dimension, and that it can enter into a chemical combination only by
means of a relative orientation in which that negative displacement is balanced at a zero
point by an appropriate positive displacement. Potassium has a positive displacement of
one unit, and the combination of this one positive unit and the negative unit of chlorine
produces the required net total of zero.
So far as the ―ionic compounds‖ are concerned, the Reciprocal System changes
practically nothing but the language, as the foregoing example shows. But when the
language change is made, it becomes evident that the same theory that applies to this one
restricted class of compounds applies to all of the true chemical compounds. On this
basis there is no need for the profusion of subsidiary theories that have been formulated
in order to deal with those classes of compounds to which the basic ―ionic‖ explanation is
not applicable. Instead of calling upon the multitude of different ―bonds‖ –the ionic bond,
the ion-dipole bond, the covalent bond, the hydrogen bond, the three-electron bond, and
the numerous ―hybrid‖ bonds–that are required in order to adapt the electronic theory to
the many types of compounds, the Reciprocal System applies the same theoretical
principles to all compounds.
In these cases that we have considered, the translation from electronic language to the
language of the Reciprocal System leads to a significant clarification of the mechanism of
the processes that are involved. Whatever value there may be in the electronic theory is
not lost when that theory is abandoned; it is carried over into the theoretical structure of
the Reciprocal System in different language.

CHAPTER 20

Chain Compounds
In undertaking a general survey of such an extended field as that of the structure of the
organic compounds it is obviously essential to use some kind of a classification system to
group the compounds of similar characteristics together, so that we may avoid the
necessity of dealing with so many individual substances. The distinction between chain
and ring compounds has already been mentioned. The chemical properties of the chain
compounds are determined primarily by the nature of the positive and negative radicals
or atoms, and it will therefore be convenient to set up two separate classifications for
these compounds, one on the basis of the positive component, and the other on the basis
of the negative component. In general, the classifications utilized in this work will
conform to the commonly recognized groupings, but the defining criteria will not
necessarily be the same, and this will result in some divergence in certain cases.
The first positive classification that we will consider comprises those compounds whose
positive components contain valence four carbon atoms. These are called paraffins. This
name originally referred only to hydrocarbons, but as used herein it will apply to all chain
compounds with valence four carbon at the positive end of the molecule. The term
―saturated compound‖ is commonly used with essentially the same significance so far as
the chain compounds are concerned, but its application is usually extended to the cyclic
compounds as well. To avoid confusion it will not be used in this work, since the cyclic
compounds cannot be considered saturated on the basis of the criteria that we are setting
up. The paraffin hydrocarbon, or alkane, chain is a linking of CH2 neutral groups with a
CH3 positive radical at one end of the chain, and a negative hydrogen atom at the other.
The cohesion between this hydrogen atom and the adjacent CH2 group is very strong, and
for most purposes it will be convenient to regard the CH2 • H combination as a negative
CH3 radical. On this basis, the paraffin hydrocarbon chain is CH3 • (CH2)n • CH3.
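As an illustration of this general formula, the short sketch below writes out the condensed paraffin chain for a given number of interior CH2 groups; it is only a formatting aid, not part of the text's argument.

def paraffin_chain(n):
    # CH3 • (CH2)n • CH3, with the negative CH3 understood to be CH2 • H
    return " • ".join(["CH3"] + ["CH2"] * n + ["CH3"])

print(paraffin_chain(0))  # CH3 • CH3, ethane
print(paraffin_chain(2))  # CH3 • CH2 • CH2 • CH3, butane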
If a valence two carbon atom is substituted for the valence four carbon atom of the
paraffins, the result is an olefin, a chain which is identical with that of the paraffins
except that it has the primary magnetic valence radical CH instead of the normal valence
radical CH3 in the positive position. The general formula for the olefin hydrocarbons, or
alkenes, is CH • (CH2)n• CH3.
In the usual version of this formula one of the CH2 groups is placed outside of the CH
group, but this is obviously incompatible with the structural principles developed in the
preceding pages. On first consideration it might appear that the chemical evidence is
favorable to the conventional
CH2 • CH sequence. When we remove all of the internal magnetic neutral groups we
come down to CH • CH3 as the theoretical structure of ethylene, the first of the olefins,
whereas it is generally agreed that the chemical behavior of this compound is more in
harmony with the structure
CH2 • CH2. This apparent contradiction is explained by the nature of the CH3 negative
radical. As has been pointed out, this radical is actually CH2 • H. For most purposes the
combination may be treated as a single unit, but if we express the ethylene formula in full
form as
CH • CH2 • H, it can be seen that the association between the CH and H structural units is
closer than that between CH2 and H. It is true that the CH2 group is between CH and H
when the ethylene molecule is intact, but CH and H are partners in a valence equilibrium,
whereas the intervening CH2 group is neutral. Consequently, if the molecule is
sufficiently disturbed by chemical or other means, the CH and H units join and the
compound enters the subsequent reaction as two methylene (CH2) molecules. This is not
an unusual situation. Many observers have commented that the reacting molecule under
such circumstances is not necessarily the same as the static molecule.
A valence one carbon atom in the positive position produces an acetylene. Both the olefin
and acetylene classifications, as herein defined, should be understood as including all
compounds with the specified positive components, not merely the hydrocarbons. In the
acetylenes, as in the olefins, the currently accepted molecular formulas must be revised to
put the positive valence component at the end of the chain. We also find that the valence
one orientation of a lone carbon atom is more stable if it is joined to a neutral group in
which carbon has the same valence, rather than to one in which the carbon valence is +2.
The independent carbon atom that constitutes the positive component of the acetylenes is
therefore followed by a CH neutral group. The remainder of the acetylene hydrocarbon,
or alkyne, molecule is identical with the corresponding portion of a molecule of either of
the other two hydrocarbon chains, and the general formula is
C • CH • (CH2)n• CH3.
Acetylene itself is similar to ethylene in that the true structure is
C • CH • H, with a valence equilibrium between the single C and H atoms which causes
them to combine if the molecule breaks up. The compound therefore acts chemically as
two CH units.
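The behavior described for ethylene and acetylene, in which the positive component at the head of the chain joins the negative hydrogen at the tail when the molecule is disturbed, can be illustrated by a small sketch. Merging the atomic compositions of the two end units is the only operation performed; the structures used are those given in the text.

import re
from collections import Counter

def composition(unit):
    # Atom counts for a simple unit such as "CH2" or "H".
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", unit):
        counts[element] += int(number) if number else 1
    return counts

def formula(counts):
    return "".join(e + (str(n) if n > 1 else "") for e, n in sorted(counts.items()))

def reacting_units(chain):
    # The head (positive component) and tail (negative hydrogen) join;
    # the neutral groups in between are unchanged.
    head, tail, middle = chain[0], chain[-1], chain[1:-1]
    return [formula(composition(head) + composition(tail))] + middle

print(reacting_units(["CH", "CH2", "H"]))  # ethylene reacts as ['CH2', 'CH2']
print(reacting_units(["C", "CH", "H"]))    # acetylene reacts as ['CH', 'CH']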
Addition of CH2 neutral groups to the straight chain hydrocarbons does not necessarily
take place in the existing chain. The incoming groups may instead be inserted between
the positive and negative components of any of the neutral groups, enlarging that group
from CH2 to
CH • CH2 • H, which we may write as CH • CH3, or CHCH3, as previously indicated.
Further additions may then be made in the same manner as they are made in the principal
chain, lengthening the neutral group indefinitely. Such a lengthened group is known as a
branch of the principal chain, and structures of this kind are called branched chain
compounds.
No branching of the CH3 radical is possible, since addition of a CH2 group results in
CH2 • CH2 • H, or CH2 • CH3, which merely extends the straight chain. A CH2 group may
be added to the CH olefin radical, however, as the product in this case is CCH3, which is
not equivalent to an extension of the chain. This CCH3 group may then be lengthened in
the usual manner to C • CH2 • CH3, and so on.
Under the accepted systems of nomenclature the branched chain compounds are named
as derivatives of the straight chain compounds, the chain position being indicated by
number, as in
2-methyl butane, 2,3-dimethyl hexane, etc. The added possibility of a modification of the
positive radical in the olefins introduces an extra variation into the system which is taken
into account by setting up several basic classifications: 1-olefins, 2-olefins, 3-olefins, and
so on. Branching is handled in the same manner as in the paraffins, and the compounds
have names such as
2-ethyl-1-hexene, 3,4-dimethyl-2-pentene, etc.
The names applied to the paraffins under this current system are equally applicable to
these compounds on the basis of the structural relations developed in this work. However,
the current ideas as to the structure of the olefins and acetylenes, and the system of
nomenclature that has been applied to them, are products of the electronic theory of
compound formation. The results of our theoretical development show that certain
modifications of the previously accepted structural arrangements are required, as has
been noted, and the nature of these modifications is such that changes in the names
applied to some of the compounds would also be appropriate. On this new basis no
special system of names is required for the olefins, as the paraffin system can be applied
to the olefins as well. The only difference between the two is in the branching of the
olefin radical, and this can be handled by utilizing the 1-alkyl term, available but not used
in the paraffin compounds. On this basis 1-pentene, CH • (CH2)3 • CH3, will become
simply pentene, while
2-pentene, CCH3 • (CH2)2 • CH3, becomes 1-methyl butene, and 3-pentene,
(C • CH2 • CH3) • CH2 • CH3, becomes 1-ethyl propene. The paraffin names are also
applicable to the acetylenes in the same manner. 1-pentyne, C • CH • (CH2)2 • CH3,
becomes pentyne; 2-pentyne, C • CCH3 • CH2 • CH3, becomes 2-methyl butyne, and so
on. Such a revision of the nomenclature is not only desirable from the standpoint of more
accurately reflecting the true structure of the molecules, and for the sake of uniformity,
but also accomplishes a substantial amount of simplification.
The information derived from theory will likewise require some modification of the
conventional methods of representing the molecular structure of the organic compounds.
The so-called ―extended‖ formulas, based on concepts such as electrons and double
bonds that have no place in the molecule as we find it, must be discarded. But for most
purposes the exact arrangement of the individual atoms is immaterial. The structural unit
is the group rather than the atom, and the positions of the groups determine the nature and
magnitude of the structure-dependent properties of the compound. The notation that has
been used thus far, the ―condensed‖ structural formula which shows only the composition
and sequence of the groups, is therefore adequate for most normal applications.
The usual arrangement of these condensed formulas is not entirely satisfactory, as it does
not recognize the existence of positive and negative valences, and therefore fails to
distinguish between groups of the same composition but opposite valence. The CH3 end
groups in the paraffin molecule, for example, are currently regarded as identical. Since
the opposing valences play a very important part in the molecular structure it is desirable
that the formula should definitely indicate the positive and negative components of the
compound. This can be accomplished without any serious dislocation of familiar patterns
by identifying the positive and negative components of the compound as a whole with the
left and right ends of the formula respectively, as is common practice in the inorganic
division.
It would be logical to extend this policy to the individual components of the molecules,
and that probably should be done some day as a matter of consistency, but some
compromise with logic and consistency seems advisable in this present work in order to
avoid creating further complications for the readers, who already have many unavoidable
departures from conventional practice to contend with. The familiar expressions for such
primary units as NH2 and OH will therefore be retained, together with expansions such
as NH • CH2 • CH3, O • CH2 • CH3, etc., even though this reverses the regular positive to
negative order in most of the negative radicals. Continued use of CH3 rather than CH2 • H
to represent the negative methyl radical is also a departure from consistent practice, but in
this case the condensed form is not only more familiar but also more convenient. The full
CH2 • H representation will therefore be used only where, as in the discussion of the
structure of the ethylene molecule, it is necessary to stress the true nature of the radical.
In the case of the analogous CH2 negative radical there is no significant advantage to be
gained by use of the condensed expression, and this radical, which is a combination of a
CH neutral group and a negative hydrogen atom will be shown in its true form as CH • H.
For a correct representation of the molecular structure it is essential that the neutral
groups be clearly identified. Where there are methyl substitutions, the identification can
be accomplished by omitting the dividing mark between the components of the neutral
group; e.g.,
CH3 • CHCH3 • CH2 • CHCH3 • CH3, 2,4-dimethyl pentane.
Longer neutral groups can be identified by parentheses, the positive-negative order being
preserved within the group. The formula of 3–propyl pentane on this basis is
CH3 • CH2 • (CH • CH2 • CH2 • CH3) • CH2 • CH3.
If further subdivision within the neutral groups is necessary, the distinction between main
and subgroupings can be indicated by brackets or other suitable symbols.
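The notation rules just stated (structural units separated by the raised dot, methyl branches written without the dividing mark, longer neutral groups enclosed in parentheses) can be illustrated with a small parsing sketch. It assumes nothing beyond those conventions and is offered only as an aid to reading the condensed formulas.

def split_units(condensed):
    # Split a condensed formula into structural units, keeping a
    # parenthesized neutral group together as a single unit.
    units, depth, current = [], 0, ""
    for ch in condensed:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        if ch == "•" and depth == 0:
            units.append(current.strip())
            current = ""
        else:
            current += ch
    units.append(current.strip())
    return units

# 2,4-dimethyl pentane and 3-propyl pentane, as written above:
print(split_units("CH3 • CHCH3 • CH2 • CHCH3 • CH3"))
print(split_units("CH3 • CH2 • (CH • CH2 • CH2 • CH3) • CH2 • CH3"))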
Where a valence two negative component is involved and the chain is double, the
customary expression such as (CH3 • CH2)2 • O is appropriate if the chains are equal.
Unequal chains can be represented by treating the valence two component and one of the
branches as a negative radical in this manner: CH3 • CH2 • CH2 • (O • CH2 • CH3), or the
two branches can be shown on separate lines.
In order to facilitate the presentation of the new principles of molecular structure that
have been developed from the postulates of the Reciprocal System the revised structural
formulas as described in the foregoing paragraphs will be used throughout this work. In
designating positions in the chain we will number from the positive end, rather than
following the Geneva system, which regards the two ends as interchangeable. The
different numbering is necessary for clarity, in view of the modifications that have been
made, not only in the order of the groups but also, in some cases, in the group
composition. However, this revised numbering will be used only for purposes of the
discussion, and the accepted names of the compounds will be retained, to avoid
unnecessary confusion. A complete overhaul of the organic nomenclature will be
advisable sooner or later.
The somewhat minor modifications of current structural ideas that are required in the
olefins and acetylenes become more significant in the diolefins, a class of compounds in
which a pair of CH neutral groups with the acetylene carbon valence (one) is inserted into
the olefin chain, a valence two structure. The C5 compounds of this class are known as
pentadienes. If the CH groups replace the CH2 groups in the third and fourth positions of
pentene the result is
CH • CH2 • CH • CH • CH3.
Instead of using the same numbering system that is applied to the other hydrocarbon
families, the diolefins are numbered according to the locations of the hypothetical
―double bonds,‖ and this compound is called 1,3-pentadiene. Since the CH3 group at the
negative end of the pentene molecule is actually CH2 • H, the CH2 portion is open to
replacement by CH. The incoming CH groups may therefore occupy the fourth and fifth
positions, producing CH • CH2 • CH2 • CH • CH • H, now called 1,4-pentadiene. Another
possible structure involves removing the hydrogen atom from the CH positive radical,
and splitting the molecule into two chains. If the chains are equal, we have C(CH • CH3)2,
which we may also represent with the two CH • CH3 chains written on separate lines.
This is 2,3-pentadiene. A variation of this structure removes the CH2 group from one of
the CH3 combinations. This reduces the compound to a C4 status, but it can be brought
back up to a pentadiene by inserting the CH2 group in the other branch, which produces
what is called 1,2-pentadiene, with one CH • CH2 • CH3 chain and one CH • H chain
attached to the lone carbon atom.
One of the most important of the diolefins, from the industrial standpoint, is isoprene,
another C5 compound, currently called 2-methyl-1,3-butadiene. The structure is the same
as that of 1,4-pentadiene, except that the CH2 group next to the first of the CH neutral
groups is moved out of the chain and attached to the CH group as a branch: CH • CH2 •
CCH3 • CH • H.
Nitrogen, which is next to carbon in the atomic series, is also the next most prolific in the
formation of compounds. Some of the ―carbon‖ compounds, such as urea, one of the first
organic compounds to be synthesized, actually contain more nitrogen than carbon, but the
positive component in these compounds is carbon, and the lengthening of the chain takes
place primarily by the addition of carbon groups. There are other compounds, however,
in which nitrogen takes the positive role both in the compound as a whole and in the
neutral groups.
Corresponding to the hydrocarbons are the hydronitrogens. The positive nitrogen radical
in these compounds is NH2+, in which nitrogen has the enhanced neutral valence three. A
combination of this radical with the negative amine group is hydrazine, NH2 • NH2.
Inserting one NH neutral group we obtain triazane, NH2 • NH • NH2. Another similar
addition produces tetrazane, NH2 • NH • NH • NH2. Just how far this addition process can
be carried is uncertain, as the theoretical limits have not been established, and the
hydronitrogens have not been given the same exhaustive study as the corresponding
carbon compounds. A nitrogen series corresponding to the acetylenes has a lone nitrogen
atom with the secondary magnetic valence one as the positive component. The parent
compound of this series is diimide, N • NH2. One added NH neutral group results in
triazene, N • NH • NH2, and by a second addition we obtain tetrazene, N • NH • NH •
NH2. Here again, the ultimate length of the chain is uncertain.
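The two nitrogen series described above can be generated mechanically; the following sketch simply writes out the formulas already given in the text and is not part of the theoretical development.

def nh2_series(n):
    # NH2 • (NH)n • NH2: hydrazine (n = 0), triazane (n = 1), tetrazane (n = 2)
    return " • ".join(["NH2"] + ["NH"] * n + ["NH2"])

def lone_n_series(n):
    # N • (NH)n • NH2: diimide (n = 0), triazene (n = 1), tetrazene (n = 2)
    return " • ".join(["N"] + ["NH"] * n + ["NH2"])

print(nh2_series(1))     # NH2 • NH • NH2, triazane
print(lone_n_series(2))  # N • NH • NH • NH2, tetrazene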
All of the neutral groups in these nitrogen compounds have the composition NH, in
which nitrogen has the secondary magnetic valence one. A neutral group NH2 based on
the primary magnetic valence is theoretically possible, but this group is identical with the
amine radical except for the rotational orientation, and the orientation is subject to change
in accordance with the relative probabilities. The amine radical is a more probable
structure, and it prevents the existence of the NH2 neutral group.
The NH2+ radical is also a much less probable structure than the amine radical, in which
nitrogen has its normal negative valence, but this positive radical is not in competition
with the amine group. Wherever a number of NH2 units exist in close proximity the inter-
atomic forces tend toward combination, and in order that such combination may take
place some groups must be reoriented so that they may act as the positive components of
the compounds. The NH2+ radical has the most probable of the positive orientations, and
it therefore takes over the positive role in NH2 • NH2 and similar combinations, a position
that is not open to the amine radical. The NH2 neutral group has no such protected status.
Beyond carbon and nitrogen the ability to form compounds of the molecular type drops
sharply, but the corresponding elements in the higher groups do participate in a few
compounds of this nature. Silicon forms a series of hydrides analogous to the paraffin
hydrocarbons, with the composition SiH3 • (SiH2)n • H, and also some compounds
intermediate between the silicon and carbon chains. Typical examples of the latter are SiH3
• CH2 • SiH2 • H, and Si(CH3)3 • CH2 • SiH2 • CH2 • SiH2 • H. Germanium forms a series
of hydrides, known as germanes, which are similar to the silicon hydrides, or silanes, and
have the composition GeH3 • (GeH2)n • H. Only a few members of this series are known.
An unstable tin hydride, SnH3 • SnH2 • H, has also been reported. It could be expected that
the higher valence three elements would form a limited number of compounds similar to
the hydronitrogens, but the known compounds of this type are still scarce. Among those
that have been reported are diphosphene, PH2 • PH2, and cacodyl, As(CH3)2 • As(CH3)2.
Since the minimum magnetic valence of phosphorus and arsenic is two, these compounds
cannot have the hydrazine structure NH2 • NH2, and are probably
PH • PH2 • H and AsCH3 • As(CH3)2 • CH3. As pointed out in connection with ethylene
and acetylene, the chemical behavior of such compounds is explained by the tendency of
the positive and negative components of the compound as a whole, such as PH and H in
diphosphene, to join when the compound is disturbed during a chemical reaction.
Another series of compounds of the molecular class, but not related to either carbon or
nitrogen, is based on boron. Because it acts as a Division IV element in these two-
dimensional compounds, boron takes the valence five, rather than the normal valence
three which it has in a compound such as B2O3, where it acts as an element of Division I.
The valence one radical on the valence five basis would be BH4, or an equivalent, but
such a radical would be three-dimensional, and not capable of joining a two-dimensional
chain. The positive radical in the boron chain is therefore the valence two combination
BH3. As in the hydrocarbons, the negative component of the molecule as a whole is
hydrogen, and because of the valence of the positive radical two negative hydrogen atoms
are required. Here again, the association between the hydrogen atoms and the adjacent
BH neutral group is close, as in the hydrocarbons, and the combination could be regarded
as a valence two negative BH3 radical. For present purposes, however, it appears advisable
to show it in its true form as BH • H2.
The magnetic neutral groups of the boron compounds can be formed on the basis of
either the primary or the secondary magnetic valence, which produce BH2 and BH
respectively. Because it minimizes the number of hydrogen atoms at the negative end of
the molecule, the negative radical BH • H2 takes precedence over BH2 • H2 even where
the interior groups are BH2 combinations. This presence of a BH neutral group at the
negative end of the compound, together with some other factors that apparently favor BH
over BH2, has the effect of making the BH structures more stable than those in which the
neutral groups are BH2.
The basic hydride of boron is diborane, BH3 • BH • H2. Addition of BH neutral groups
produces a series of compounds with the composition BH3 • (BH)n • H2, the best known of
which are hexaborane, in which n is 5, and decaborane, in which n is 9. Substitution of a
pair of BH2 groups for two of the BH groups results in a series which has the composition
BH3 • (BH2)2 • (BH)n • H2. Beyond tetraborane, the first member of this series (n=1), these
compounds, as indicated in the preceding paragraph, are less stable than the
corresponding compounds of the all-BH series. In all of these boron compounds
replacement of hydrogen atoms by other valence one atoms or radicals is possible in the
same manner as in the hydrocarbons, but to a much more limited extent.
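The compositions that follow from the boron series formula given above can be checked with a few lines of arithmetic; the sketch below merely recovers the familiar borane compositions from the BH3 • (BH)n • H2 chain.

def borane_composition(n):
    # BH3 • (BH)n • H2: one boron in the radical plus n in the neutral groups;
    # three hydrogens in the radical, one per BH group, and two at the negative end.
    boron = 1 + n
    hydrogen = 3 + n + 2
    return f"B{boron}H{hydrogen}"

print(borane_composition(1))  # B2H6, diborane
print(borane_composition(5))  # B6H10, hexaborane
print(borane_composition(9))  # B10H14, decaborane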
As noted earlier, the extension of Division IV characteristics into Division III, which
gives rise to the two-dimensional combining tendencies of boron, does not apply to the
corresponding elements of the higher groups to any substantial degree, and they do not
duplicate the boron series of compounds. There is an unstable hydride of aluminum,
Al2H6, and a compound Ga2H6 called digallane, both of which may be structurally similar
to diborane, but there is little, if any lengthening of these compounds by means of
magnetic neutral groups.
From the overall chemical standpoint, the molecular compounds formed by positive
elements other than carbon are not of much concern, and they are given little or no
attention in any but specialized textbooks. They are important in the present connection,
however, because they serve to confirm the theoretical conclusions that were reached
with respect to the structure of the carbon compounds. The nitrogen and boron
compounds are not only constructed in accordance with the general pattern deduced from
theory, and followed by the carbon compounds-that is, a chain of magnetic neutral groups
with a positive radical at one end and a negative radical at the other-but also support the
theoretical conclusions with respect to the structural details, inasmuch as they are like the
carbon compounds in those respects in which the theory finds these elements to be alike,
whereas they differ from the carbon compounds in those respects in which there are
theoretical differences. For example, all three of these elements form both valence two
(CH2 etc.) and valence one (CH etc.) magnetic neutral groups (with the exception of NH2,
the absence of which has been explained), because these magnetic valences are properties
of the group of elements (2A) to which all three belong. On the other hand, the radicals in
the end positions are unlike because the electric valences, which apply to these radicals,
are properties of each of the three elements individually, and they are all different.
The second system of classification of the organic chain compounds, that based on the
nature of the negative components, is not an alternate but a parallel system. A compound
classified as an alcohol because of the nature of its negative component also belongs to
one of the categories set up on the basis of the identity of the positive component. The
previous discussion was confined mainly to the hydrocarbons to simplify the
presentation, but all of the statements that were made with reference to compounds in
which the negative component is hydrogen, alone or in combination with CH2 as a
negative CH3 radical, are equally applicable to those in which the hydrogen has been
replaced by an equivalent negative atom or group. Thus we have paraffinic alcohols,
olefinic (unsaturated) alcohols, and so on.
The primary requirement for the one for one substitutions is that the valence of the
substituent must conform to the hydrogen valence both in magnitude and in sign. This
requirement has been obscured to a large extent by current structural theories which do
not recognize the existence of positive and negative valence in organic compounds, but
some of the hydrogen atoms in these compounds are positive and others are negative, and
this determines what substitutions can take place. Hydrogen in combination with carbon
is negative, and may be replaced by any of the halogens or by negative radicals.
Hydrogen combined with oxygen is positive, and can therefore be replaced only by
positive elements and radicals. Thus from acetic acid, CH3 • CO • OH, we obtain by
substitution CH2Cl • CO • OH, chloroacetic acid, but CH3 • CO • ONa, or Na • (O • CO •
CH3), sodium acetate.
A hydrogen atom acting alone may be either positive or negative, depending on its
environment. The hydrogen atom at the end of a hydrocarbon chain is negative, and may
be replaced by a halogen. CH3 • CH2 • H, ethane, becomes CH3 • CH2 • Cl, ethyl chloride.
The lone hydrogen atom in formic acid, H • CO • OH, is positive, and a halogen cannot
replace it. The normal valence alkali elements cannot replace this lone magnetic valence
hydrogen atom either, and an incoming positive atom goes to the OH radical. The
hydrogen in N-H combinations is also resistant to monatomic substitutions, but
replacement by radicals of the proper valence is readily accomplished.
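The substitution rule stated in this and the preceding paragraph reduces to a sign comparison, illustrated below; the valences used are those given in the text, and the function is only a checking aid.

def substitution_allowed(hydrogen_valence, substituent_valence):
    # A substituent can replace a hydrogen atom only if its valence matches
    # that hydrogen's valence in both magnitude and sign.
    return hydrogen_valence == substituent_valence

print(substitution_allowed(-1, -1))  # Cl (-1) for a carbon-bound hydrogen: allowed
print(substitution_allowed(+1, -1))  # Cl for a hydroxyl hydrogen: not allowed
print(substitution_allowed(+1, +1))  # Na (+1) for a hydroxyl hydrogen: allowed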
Elements with higher valences substitute quite freely for either carbon or hydrogen in the
positive and negative radicals, but enter into the magnetic neutral groups mainly as
constituents of the common valence one radicals: OH, NH2, etc. Except in the direct
carbon-oxygen combination CO, a single atom of valence two or three in a neutral group
is necessarily a constituent of an extended radical such as (O • CH2 • CH3).
In beginning a consideration of the principal families of substituted compounds, we will
look first at the alcohols. This alcohol classification is one of several which result from
the addition of oxygen to the hydrocarbons in different ways. Here an OH radical is
directly attached to a hydrocarbon group, replacing a negative hydrogen atom. It is not
essential, however, that this OH group replace the particular atom that constitutes the
negative component of the compound as a whole. The chemical behavior of the normal
alcohols, in which the OH radical is at the end of the chain, as in ethyl alcohol, CH3 • CH2
• OH, is closely paralleled if OH is substituted for a hydrogen atom in one of the neutral
groups, as in secondary butyl alcohol, CH3 • CH2 • CHOH • CH3. If the substitution takes
place in the positive radical the result is somewhat different. Such a substitution is more
readily made if oxygen is first introduced at the more favorable negative end of the
compound, and the product of a double OH substitution is a dibasic alcohol, or glycol, the
most familiar compound being ethylene glycol, CH2OH • CH2 • OH.
Earlier in this chapter it was noted that the paraffin hydrocarbons are not actually the
symmetrical structures that they appear to be. There is a combination of one carbon atom
and three hydrogen atoms at each end of the molecule, but one end of the chain is
necessarily positive, which means that the CH3 group at this end is a radical in which
carbon has the +4 valence, while the other end is necessarily negative, and this, as
previously explained, means that the CH3 group in this position is actually a close
association of a negative hydrogen atom with a CH2 neutral group in which carbon has its
+2 valence. Where the true molecular structure is important, as in understanding the
chemical behavior of ethylene, it is essential to recognize that CH3 in the negative
position is, in fact, CH2 • H. As indicated in the formula given for ethylene glycol, this
same asymmetry also exists in the other seemingly symmetrical compounds. The CH2OH
group in the positive position in the glycols has a +4 carbon valence and a group valence
of + 1. In the negative position, the carbon valence is +2, and the true structure is CH2 •
OH. The chemistry textbooks contain statements such as this: ―Theoretically the simplest
glycol should be dihydroxy methane,
CH2(OH)2.‖ The foregoing explanation of the glycol structure shows why this compound
would not be a glycol, and why no such compound has been found.
An oxygen atom added to a hydrocarbon may replace the two hydrogen atoms of a CH2
neutral group rather than forming an OH radical. The resulting group CO is very close to
the point of not being able to act as a magnetic neutral group at all, and it is greatly
restricted as to its position in the molecule. Straight chains of CO groups similar to the
CH2 chains are not possible. This explains why carbon monoxide occurs as a separate
compound, whereas methylene does not. In order to enable the CO group to join an
organic combination some assistance from the geometric arrangement is necessary (a
point which will be discussed further in connection with our examination of the ring
compounds), and in the chain compounds this can be accomplished most readily at the
negative end of the molecule. In the usual arrangement, therefore, a single CO neutral
group is joined directly to the negative atom or radical.
If the negative component is the radical OH, the resulting compound contains the
combination CO • OH, and is an acid. Acetic acid, CH3 • CO • OH, and acrylic acid, CH •
CH2 • CO • OH, are representative paraffinic and olefinic (unsaturated) acids respectively.
Here again, a shift of the carbon valence to +4 produces a positive radical of the same
composition, and enables formation of dibasic acids, such as oxalic acid, COOH • CO •
OH, maleic acid, COOH • CH • CH • CO • OH, etc.
Modification of the acid structure by substituting an alkyl group for the hydroxyl
hydrogen results in another prolific family of compounds, the esters. Ethyl acetate, CH3 •
CO • (O • CH2 • CH3), and diethyl oxalate, CO(O • CH2 • CH3) • CO • (O • CH2 • CH3),
are typical of the mono and di esters respectively. A similar substitution in an alcohol
produces an ether. This compound may be considered as a radical of the composition O •
(CH2)n • CH3 in combination with an alkyl group. If we now substitute a second radical of
the same kind for one of the hydrogen atoms in the adjacent hydrocarbon group we
obtain an acetal. Another such replacement results in an orthoester. By successive
substitutions in ethyl alcohol, CH3 • CH2 • OH, for instance, we produce methyl ethyl
ether, CH3 • CH2 • (O • CH3), dimethyl acetal, CH3 • CH • (O • CH3)2, and trimethyl
orthoacetate, CH3 • C • (O • CH3)3. Elimination of a water molecule from two acid
molecules produces an anhydride, such as acetic anhydride, (CH3 • CO)2 • O. No new
structural features are involved in these compounds.
If the CO neutral group is joined directly to the negative hydrogen atom at the end of the
hydrocarbon chain the compound is an aldehyde. Acetaldehyde, CH3 • CO • H, is the
most familiar member of this family. The aldehyde radical is usually expressed as CHO
(to avoid confusion with the OH radical, the textbooks say), but this does not reflect the
true status of the CO combination as a neutral group. It may be worth noting that the
CHO representation also does not explain, as the CO • H formula does, why one of the
most prominent features of the aldehydes is that they are good reducing agents. Like the
other organic families that have been discussed thus far, the aldehydes form dibasic, as
well as monobasic, compounds. The simplest dibasic aldehyde is glyoxal, COH • CO • H.
As in such structures as COOH • CO • OH, the conversion of the negative radical to a
positive radical involves a valence shift, but in the acids the change is in the carbon
valence, which goes from +2 in CO • OH to +4 in COOH, while in the aldehydes the
change is in the hydrogen valence, which goes from - 1 in CO • H to + 1 in COH.
These are the most basic valence changes in organic reactions, and their concurrent
accomplishment is an essential element in a wide variety of chemical reactions. For
instance, in the addition reactions that convert olefinic compounds to the paraffin status,
such as adding HBr to acrylic acid, the carbon valence in the positive radical increases
two units from +2 to +4. At the same time, the hydrogen atom that had a + 1 valence in
HBr decreases that valence by two units to the - 1 level in the addition product CH2Br •
CH2 • CO • OH. There are no obstacles in the way of a change of valence. This is merely
a matter of reorientation, a change of rotational direction, and each atom is free to
reorient itself to conform to its environment. But the positive-negative balance in the
compound must be maintained, and the change from positive to negative, or vice versa, in
the hydrogen valence is one of the most common ways of compensating for an increase
or decrease in the carbon valence.
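The concurrent valence changes described above can be checked arithmetically; the sketch below verifies only that the two shifts in the HBr addition cancel, leaving the compound in equilibrium.

# Carbon in the positive radical goes from +2 to +4, while the hydrogen
# supplied by HBr goes from +1 to -1; the net change must be zero.
carbon_change = (+4) - (+2)
hydrogen_change = (-1) - (+1)

print(carbon_change + hydrogen_change == 0)  # True: the balance is maintained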
Because of the close association between the negative hydrogen atom of the
hydrocarbons and the adjoining CH2 group, the CO neutral group is able to occupy a
position adjoining the
CH2 • H combination as an alternate to the aldehyde position next to the hydrogen atom.
In this more remote position it is near the limit of stability, and this makes association
with the positive radical more probable than participation in the negative combination CO
• CH2 • H. For this reason, the monobasic compounds in this family, the ketones, have
oxygen in the positive radical, COCH3, rather than in the negative radical as usual. The
first member of the family, dimethyl ketone, or acetone, has the structure COCH3 • CH2 •
H. The corresponding dibasic compound is dimethyl diketone, COCH3 • CO • CH2 • H.
The monobasic ketone structure can be verified by comparing the results of simple
addition reactions of the ketones with those of the aldehydes, the isomeric compounds in
which the CO group is neutral. The addition of hydrogen to the aldehydes proceeds in
this manner:
CH3 • CH2 • CO • H + H2 = CH3 • CH2 • CH2 • OH
The final product, propyl alcohol, is a normal chain compound with a CH3 radical in the
positive position, just as in the aldehyde itself. Only the negative end of the molecule has
been altered. If the CO group in the corresponding ketone, methyl ethyl ketone, or 2-
butanone, had the same status as in the aldehyde (that is, if the compound were CH3 •
CH2 • CO • CH3), we would expect essentially the same result. We would expect the CH3
positive radical to remain intact, and the product to be a primary, or perhaps a secondary,
alcohol. But since the CO group in the ketone is part of a radical in which the carbon
valence is four, and the compound is actually COCH3 • CH2 • CH3, both CH3 groups are
negative. Addition of a hydrogen atom to the neutral group CH2 produces a third negative
CH3 group. Inasmuch as no positive CH radical is present, hydrogenation results in a
tertiary alcohol, in which the CH3 groups are negative, as in the original ketone:
COCH3 • CH2 • CH3 + H2 = C(CH3)3 • OH
In the organic chain compounds thus far discussed, lengthening of the chain is
accomplished mainly by the addition of CH2 neutral groups and, in some cases, CH • CH
pairs. Introduction of oxygen produces a neutral group CHOH, and substitution of this
group for CH2 originates additional families of compounds. These include such important
substances as the hydroxy acids, the polyhydroxy alcohols, and the saccharides. The
hydroxy acids may be either monobasic, like lactic acid, CH3 • CHOH • CO • OH, or
dibasic, similar to tartaric acid, COOH • (CHOH)2 • CO • OH. In both cases the chains
can be extended by adding more CHOH groups, although addition of CH2 is also
possible, as in malic acid, COOH • CHOH • CH2 • CO • OH. The polyhydroxy alcohols
are extensions of the glycol chain with CHOH neutral groups. The general formula is
CH2OH • (CHOH)n• CH2 • OH. The saccharides result from conversion of the CH3
radicals in the aldehydes and ketones to CH2OH and addition of CHOH neutral groups.
The products derived from the aldehydes are aldoses, the general formula for which is
CH2OH • (CHOH)n • CO • H. Those derived from the ketones are ketoses, and have the
structure
(CO • CH2 • OH) • (CHOH)n • CH2 • OH.
When nitrogen is introduced into an aldehyde or ketone, replacing the carbon-oxygen
combination with a triple combination of nitrogen, hydrogen, and oxygen in the form of
the valence two oxime radical NH • O, the nature of the addition products again shows
the same relation to the structures of the two oxo derivatives that we noted in the case of
hydrogen addition. Adding NH to the aldehyde alters only the negative radical, which
expands from
CO • H to CH • NH • O. Propionaldehyde, CH3 • CH2 • CO • H, for example, becomes
propionaldehyde oxime, CH3 • CH2 • (CH • NH • O). On the other hand, addition of NH
to the ketones requires a molecular rearrangement to bring both CH3 groups, which are
negative, into combination with positive carbon in the positive radical. Adding NH to
acetone, COCH3 • CH3 produces dimethyl ketoxime, C(CH3)2 • NH • O. As indicated in
these formulas, it is necessary to change the expression for the oxime radical from the
conventional NOH to NH • O to show the true composition.
Another way in which nitrogen may be introduced into the hydrocarbons is by
substituting the NH2 amine group for negative hydrogen. Further substitutions are then
possible for the positive hydrogen atoms in NH2, giving rise to a great variety of
structures. The compounds in which the NH2 radical remains intact are primary amines,
those with NH and one positive substitution are secondary amines, and those in which
both hydrogen atoms have been replaced, leaving only the lone nitrogen atom from the
original amine group, are tertiary amines. Since the amine replacements are positive,
these compounds may have more than one olefinic branch, as in diallylamine, (CH • CH2 • CH2)2 • NH, a type of structure not found in the hydrocarbons, where all hydrogen atoms
are negative, and can be replaced only by negative substituents. Diamines have the usual
double structure, with CH2NH2 in the positive position and the normal amine combination
CH2 • NH2 at the negative end of the molecule.
Like the hydroxyl group OH which attaches to CH to form the neutral group CHOH, the
amine group joins with CH to form a neutral group CHNH2. This group is more restricted
as to its position in the chains than CHOH, which substitutes quite freely for CH2, but it
has a special importance in that it is an essential component of the amino acids, which, in
turn, are the principal building blocks of the proteins, the basic constituents of living
matter. In the monoacids the CHNH2 group in effect extends the acid radical from CO •
OH to CHNH2 • CO • OH. Further lengthening of the chain takes place by addition of
hydrocarbon neutral groups, or CHOH, rather than CHNH2. Thus d-alanine, CH3 •
CHNH2 • CO • OH, lengthens to l-leucine, CH3 • CHCH3 • CH2 • CHNH2 • CO • OH.
These two compounds are members of one sub-group of the amino acids in which the
positive radical is CH3. A second sub-group utilizes the carboxyl radical COOH in the
positive position. The simplest compound of this type is d-aspartic acid, COOH • CH2 •
CHNH2 • CO • OH. The third of the sub-groups, the diamino acids, has amine radicals in
both the positive and negative positions, as in d-lysine, CH2NH2 • (CH2)3 • CHNH2 • CO •
OH.
Another combination containing nitrogen is the cyanide, or nitrile, radical. In the normal
radical CN nitrogen has the negative valence three and carbon has the primary magnetic
valence two, the net group valence being -1. The positive and negative roles are reversed in the radical NC, in which nitrogen has the enhanced neutral valence three. In this
orientation nitrogen has Division III properties, and is positive to carbon rather than
negative as usual. Since the negative valence of carbon is four, the net valence of the
radical NC is -1, identical with the valence of CN. The NC compounds, the isocyanides,
therefore have the same composition as the cyanides, but different properties.
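Summarizing the two valence-one radicals in terms of the figures just given: in CN the net valence is (+2) + (-3) = -1, and in NC it is (+3) + (-4) = -1. The compositions are identical, but the distribution of the valences between the two atoms is reversed.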
The CN+ radical makes its appearance in such compounds as cyanoacetic acid, CN • CH2
• CO • OH. Here nitrogen is negative, as in the CN- radical, but carbon has the normal
positive valence four, and the net group valence is therefore + 1. Cyanogen, CN • CN, is
a combination of the + 1 and -1 radicals. Compounds with the CO • CN combination in
the negative position are not generally regarded as constituting a separate family, and are
named as members of the normal cyanides.
Introduction of the CO neutral group in conjunction with NH2 produces an amide, a
structure which is open to an unusually wide variety of additions and substitutions. If we
start with acetamide, CH3 • CO • NH2, we may add CH2 groups in the normal manner to
form propionamide, CH3 • CH2 • CO • NH2, and the higher homologs, or we may
substitute positive radicals for the amine hydrogen, obtaining compounds like N-ethyl
acetamide, CH3 • CO • (NH • CH2 • CH3). The NH combination, which has a net valence
of -2, can take the place of oxygen in the CO group of the amide, forming a CNH neutral
group which has similar properties. Such a replacement in acetamide gives us
acetamidine, CH3 • CNH • NH2. If the neutral CO group in acetamide is replaced by the
positive CO radical we obtain aminoacetone, COCH3 • CH2 • NH2. Further replacement
of carbon by nitrogen then changes the radical COCH3 to CONH2, and produces a whole
new series: urea, CONH2 • NH2, and its derivatives. Another CO group changes the
monobasic carbamide, urea, to a dibasic compound, oxamide, CONH2 • CO • NH2.
A negative combination of oxygen and nitrogen that can be substituted for hydrogen is
the nitro group, NO2. This results in a family known as the nitroparaffins. 1-nitropropane,
CH3 • (CH2)2 • NO2, is typical. The NO2 group in these nitroparaffins is a combination of
positive nitrogen (valence +3) with negative oxygen (-2 each). An isomeric family of
compounds, the alkyl nitrites, substitutes a group ONO, in which one oxygen atom with
the enhanced neutral valence +4 and a nitrogen atom with its normal -3 valence form a
valence one positive radical ON. A further combination with negative oxygen then
produces a valence one negative radical ONO. The CO • NO2 combination, like CO • CO,
is outside the magnetic neutral limits under ordinary conditions, and there is no CO • NO2
series of compounds corresponding to those based on CO•NH2.
In the quaternary ammonium compounds nitrogen has its neutral valence five, as in the
inorganic nitrates, and joins with the equivalent of five valence one negative atoms or
radicals to form compounds ranging from simple combinations such as
tetramethylammonium hydroxide,
N(CH3)4 • OH, to some very complex, and biologically important, compounds such as
lecithin. The quaternary ammonium portion of the lecithin molecule also exists separately
as choline,
N(CH3)3OH • CH2• CH2• OH.
Addition of oxygen to the cyanide and isocyanide radicals produces the radicals OCN
and ONC, which form the basis of the cyanates and isocyanates. A comparison of the
cyanides and cyanates provides a good illustration of the way in which the various
pertinent factors enter into the construction of chemical compounds. Each element has
several possible rotational orientations which it can assume to form chemical
combinations, and in each of these orientations it has an effective speed displacement, or
valence, which determines the status that the element can assume in a compound, and the
ratio in which it combines with the other components. Some orientations are inherently
more probable than others, but the type of combination that will be the most stable cannot
be determined solely on the basis of this probability, since other factors also enter into the
situation. The limitation imposed on direct combinations by the relative negativity of the
constituents is one such factor. The greater relative probability of low net group valences
in the radicals is another. Replacement capacity is likewise a significant factor. A valence
one radical is not only an inherently more probable structure than one of higher valence;
it also has an ability to replace hydrogen atoms quite freely, while radicals of higher
valence can accomplish such replacements only with some difficulty. In an environment
favorable to these replacements the valence one radical therefore takes precedence, if
such a radical can be formed.
In any particular instance where there are two or more possible ways of constructing a
valence one radical, the combined influence of all effective factors determines which of
the possible combinations has the greatest over-all probability, and consequently the
greatest stability. Where the margin of one structure over another is small, both may exist
under appropriate conditions; where it is large, only the more stable compound can exist.
In the cyanides the net total of all factors affecting the combination of carbon and
nitrogen favors carbon valence +2 and nitrogen valence -3. An alternate with carbon -4
and nitrogen +3 is close enough to be stable. When oxygen, with valence -2, is added to
either of these radicals the positive valence must increase by two units if the addition
product is to be a valence one substitute for negative hydrogen. This is possible in both
cases, as both carbon and nitrogen have the required higher valences. Carbon steps up
from the primary magnetic valence +2 in CN to the normal valence +4 in OCN. Nitrogen
goes from the enhanced neutral valence +3 in NC to the neutral valence +5 in ONC. The
negative valences are unchanged: nitrogen has -3 in both CN and OCN, carbon has -4 in
NC and ONC.
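For illustration, the additions can be followed numerically: OCN is (-2) + (+4) + (-3) = -1, and ONC is (-2) + (+5) + (-4) = -1. In each case the -2 oxygen atom is offset by a two-unit increase in the positive valence, and the radical remains a valence one negative substituent, like CN and NC.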
The participation of elements of the higher rotational groups in chemical compounds
involves no new structural features. Because of factors such as the higher magnetic
valences, the greater inter-atomic distances, and the prevalence of three-dimensional
force distributions, in the higher rotational groups, these elements are excluded from
many of the types of combinations and structures in which the elements of Group 2A
participate. But to the extent to which these elements can occupy positions in such
combinations and structures, they do so on the same basis as the analogous Group 2A
elements. The descriptions of the various types of combinations and structures in the
preceding pages therefore apply to the compounds of these higher group elements as well
as to those of the elements that were specifically mentioned.
Sulfur comes the nearest to duplicating the lower group structures. The corresponding
Group 2A element, oxygen, uses its negative valence almost exclusively, and to the
extent that its somewhat greater inter-atomic distances will permit, sulfur, which has the
same -2 valence, duplicates the oxygen compounds. Corresponding to the alcohols, acids,
ethers, amides, etc. which have been discussed in the preceding pages, there are
thioalcohols, thioacids, thioethers, thioamides, etc., that are identical except that sulfur
substitutes for oxygen.
The inter-atomic distance C-S is greater than the C-O distance, and this makes the sulfur
compounds somewhat less stable than their oxygen analogs, limiting the total number of
these compounds rather severely. One significant point is that the C-S distance will not
permit the formation of CS neutral groups, or the replacement of neutral CO by CS. This
eliminates the possibility of families of sulfur compounds similar to the oxygen families
whose negative radicals are CO • OH, CO • NH2, CO • OCH3, and so on. There are
thioacids, but the radical is not CS • OH, or CS • SH; it is CO • SH. Where the formula of
a compound, as written in accordance with current practice, appears to indicate the
presence of a CS group in a neutral position, this is actually a valence two combination
that forms part of the positive radical. Thus thioacetamide and thiourea, commonly
represented as CH3 • CS • NH2 and NH2 • CS • NH2, are actually CSCH3 • NH2 and
CSNH2 • NH2. Neither CSOH nor CSSH is barred from acting as a valence one positive
radical, a position in which the inter-atomic distance is not a controlling factor, but both
are limited in their stability. CSOH tends to rearrange to the more probable form COSH, while CSSH is vulnerable to loss of a CS2 molecule. For example, xanthic acid, CSSH • (O • CH2 • CH3), spontaneously separates into CS2 and ethyl alcohol.
Oxidation of the sulfides provides another example of the displacement of the valences
by addition of a strongly negative element. In methyl sulfide, (CH3)2S, sulfur has its
normal negative valence, -2. Because it is positive to oxygen, oxidation forces it into the
positive position in the compound, with a +4 valence, and the CH3 groups, which can take
either +1 or -1, shift to the negative. The product is methyl sulfoxide, SO(CH2 • H)2. An additional oxygen atom is accommodated by a further shift in the sulfur valence to its maximum value +6 (the neutral valence). The new compound that is formed is methyl sulfone, SO2(CH2 • H)2.
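The successive shifts can be followed numerically from the valences stated above: in (CH3)2S the balance is 2(+1) + (-2) = 0, with sulfur negative; in the sulfoxide SO(CH2 • H)2 it is (+4) + (-2) + 2(-1) = 0; and in the sulfone SO2(CH2 • H)2 it is (+6) + 2(-2) + 2(-1) = 0, with sulfur at its neutral valence.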
The single element radicals, such as N3 (N+5 • N-3 • N-3) and C2 (C+2 • C-4), conform to the
same pattern of behavior as the other radicals. These particular combinations form azides
and carbides respectively. The latter, since they contain no element other than carbon and
hydrogen, have been named as a hydrocarbon family, although from a structural
standpoint the introduction of the C2 radical into a normal hydrocarbon is the equivalent
of the substitution of any other radical, and the resulting compounds should logically be
called carbides. The carbide structure is quite evident in such compounds as (CH • CH2)2
• C2, which is divinylacetylene, or 1,5 hexadien-3-yne. The valence balance here is the
same as in the binary carbides: CaC2, etc. As indicated earlier, however, probability
considerations favor valence one radicals, where such radicals are possible, and in the
hydrocarbons the C2 combination generally joins with a positive hydrogen atom to form
the valence one radical C2H, structurally analogous to OH. The compounds utilizing this
radical may be either olefinic (example: vinylacetylene, CH • CH2 • C2H) or acetylenic
(example: butadiyne, C • CH • C2H). Magnetic neutral groups can be added in the usual
manner, forming compounds such as 1,5 hexadiyne, C • CH • CH2 • CH2 • C2H. This
compound, also known as dipropargyl, is isomeric with benzene, and attracted a great
deal of attention in the early days of structural chemistry when the "benzene problem"
was the center of attention.
A simple carbide, H • C2H, is the initial product of the action of water on calcium carbide,
but since hydrogen is negative to carbon a direct combination of this kind between carbon
and positive hydrogen is unstable, and the hydrogen carbide promptly changes to
acetylene, in which the hydrogen atoms are negative. The valence changes in this series
of reactions are interesting. In the original calcium carbide the valences are Ca +2, C+2, C-4.
The reaction with water substitutes two + 1 hydrogen atoms for the calcium. The relative
negativity of carbon and hydrogen then forces hydrogen into the negative position, and
since the total negative valence on this basis is only two units, carbon has to take its + 1
valence to reach an equilibrium.
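For illustration, each step in this sequence can be verified from the valences given: CaC2 is (+2) + (+2) + (-4) = 0; the hydrogen carbide H • C2H is (+1) + (+2) + (-4) + (+1) = 0; and acetylene, with two carbon atoms at +1 and two hydrogen atoms at -1, is 2(+1) + 2(-1) = 0. In the final step the two negative hydrogen atoms supply only two units of negative valence, which is why the carbon atoms must drop to the +1 valence.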
Although the three-dimensional inorganic radicals of the SO4 type are not able to
substitute freely for hydrogen in organic compounds in the manner of the organic
radicals, it is possible for organic chains to replace the atoms that are joined to these
three-dimensional radicals in the inorganic compounds. In other words, there is no room
for a three-dimensional component in a two-dimensional structure, but a two-dimensional
combination can occupy a position in a three-dimensional structure. Typical compounds
are ethyl sulfate, (CH3 • CH2)2 • SO4, and methyl phosphate, (CH3)3 • PO4.
Compounds of the metals with organic radicals are usually grouped in a separate category
as metal-organic, or organometallic, but they are classified as organic in this work,
inasmuch as they have the regular organic structure. A compound such as ethyl sodium,
Na • CH2 • CH3, has exactly the same structure as the corresponding paraffin
hydrocarbon, propane, CH3 • CH2 • CH3. A compound such as diphenyl tin has exactly
the same structure as diphenyl methane, one of the aromatic ring compounds that we will
examine in Chapter 21. No separate consideration needs to be given, therefore, to either
the organometallic compounds, or those compounds which have both organic and
inorganic components, in this discussion of molecular structure.
The number and diversity of the chain compounds can be increased enormously by
additional branching, by combinations of the various substituents that have been
discussed, and by the use of some less common substituents, but all such compounds
follow the same structural principles that have been outlined for the most common
organic chain families. There are some additional ways in which structural variations can
occur, and to complete the molecular picture a few comments on these items are
advisable, but since they are equally applicable to the ring compounds it will be
appropriate to defer this discussion until after we have examined the ring structures.
CHAPTER 21
Ring Compounds
The second major classification of the organic compounds is that of the ring compounds.
These ring structures are again divided into three sub-classes. In two of these, the positive
components of the magnetic neutral groups of the rings are carbon atoms: the cyclic, or
alicyclic, compounds in which the predominant carbon valence is two, and the aromatic
compounds in which this valence is one. In the third class, the heterocyclic compounds,
one or more of the carbon atoms in the ring is replaced by an atom of some other
element. All of these classes are further subdivided into mononuclear and polynuclear
divisions, the basic structure of the latter being formed by a condensation or fusion of two
or more rings. It should be understood that the classifications are not mutually exclusive.
A compound may consist of a ring joined to one or more chains; a chain compound may
have one paraffinic and one olefinic branch; a cyclic ring may be joined to an aromatic
ring; and so on.
As in the chain compounds, a parallel classification divides the ring compounds into
families characterized by the nature of the negative components: hydrocarbons, alcohols,
amines, etc. The normal cyclic hydrocarbon, a cyclane, or cycloparaffin, is a simple ring
of CH2 neutral groups. The general formula can be expressed as -(CH2)n-. Beginning with cyclopropane (n = 3), normal cyclanes have been prepared with all values of n up to more
than 30. The neutral groups in these rings are identical with the CH2 neutral groups in the
chain compounds, and they may be expanded in the same manner by CH2 additions.
Corresponding to the branched chain compounds we therefore have branched rings such
as ethylcyclohexane,
-(CH2)5 • (CH • CH2 • CH3)-, and 1-methyl-2-ethyl cyclopentane,
-CHCH3 • (CH • CH2 • CH3) • (CH2)3-.
In the notation used herein, the neutral groups will be clearly identified by parentheses or
other means, and the positive-negative order will be preserved within these groups as in
the neutral groups of the chain compounds. To identify the substance as a ring compound
and to show that the end positions in the straight line formula have no such special
significance as they do in the chain compounds, dashes will be used at each end of the
ring formula as in the examples given. If two or more rings are present, or if a portion of
the compound is outside the ring, the positions of the dashes will so indicate. While any
group could be taken as the starting point in expressing the formula of a single ring, the
order of the usual numbering system will be followed as far as possible, to minimize the
deviations from familiar practice. The branch names such as 1-methyl-2-ethyl are then
clearly indicated by the formula.
Replacement of all of the valence two groups in the cyclic ring by valence one groups,
where such replacement is possible, converts the cyclic compound into an aromatic. In
general, however, the distinctive aromatic characteristics do not appear unless the
replacement is complete, and the intermediate structures in which CH or its equivalent
has been substituted for CH2 in only part of the ring positions will be included in the
cyclic classification. Since the presence of the remaining CH2 groups is the principal
determinant of the molecular properties, the predominant carbon valence, in the sense in
which that term is used in defining the classes of ring compounds, is two, even where
there are more CH than CH2 groups in the molecule.
As mentioned earlier, the probabilities favor association of like forces in the molecular
compounds. The CH2 groups have sufficient latitude in their geometric arrangement to be
able to compensate for substantial variations, and single CH2 groups can therefore fit into
the molecular structure without difficulty, but the CH groups have very little geometric
leeway, and for that reason they nearly always exist in pairs. This does not mean that the
individual group is positively barred from existing separately, and in some of the more
complex structures single CH groups can be found, but in the simple rings the pairs are so
much more probable than the odd numbers of groups that the latter are excluded.
The first two-group substitution in the cyclanes produces the cyclenes, or cycloolefins. A
typical compound is cyclohexene, -(CH2)4 • (CH)2-. The designations cycloparaffin and
cycloolefin are not appropriate, in view of the findings of this work, as the cycloparaffins
contain no carbon atoms with the characteristic paraffin valence, and it is the substitution
of two acetylene valence groups into the CH2 rings that forms the cycloolefins. The
names cyclane and cyclene are therefore preferable.
Substitution of two more CH groups into the ring produces the cyclodienes. The
existence of two CH • CH pairs in these compounds introduces a new factor in that the
positions of the pairs within the ring may vary. No question of this kind arises in
connection with cyclopentadiene, -(CH)4 • CH2-, the first compound in this series, but in cyclohexadiene two different arrangements are possible: -(CH)4 • (CH2)2-, which is known as 1,3-cyclohexadiene, and -(CH)2 • CH2 • (CH)2 • CH2-, which is 1,4-cyclohexadiene.
Negative hydrogen atoms in the cyclic compounds may be replaced by equivalent atoms
or groups in the same manner as those in the magnetic neutral groups of the chain
compounds. The resulting products, such as cyclohexyl chloride, -(CH2)5 • CHCl-, cyclohexanol, -(CH2)5 • CHOH-, cyclohexylamine, -(CH2)5 • CHNH2-, etc., have properties quite similar
to those of the equivalent chain compounds: chlorides, alcohols, amines, and so on.
There are no atomic groups in the normal cyclic rings which have an amount of freedom
of geometric arrangement comparable to that of the radicals at the two ends of the
aliphatic chains, and the substituents which are limited to the radicals in the chains do not
appear at all in the cyclic compounds unless a branch becomes long enough to put the end
group beyond the range of the forces originating in the ring. In this case the structure is in
effect a combination chain and ring compound. Because of this geometric restriction the
range of substituents in the normal types of cyclic compounds is considerably narrower
than in the chains. In addition to those already mentioned, Cl, OH, and NH2, the primary
list includes the remaining halogens, oxygen, CN, and CO • OH.
The compounds formed by direct substitution of oxygen for the two hydrogen atoms of
the CH2 group are named as ketones, but they do not have the ketone structure, as the
resulting CO group is part of the ring and is a magnetic neutral group. One substitution
produces cyclohexanone,
-(CH2)5 • CO-. A second results in a compound such as 1,3-cyclohexanedione,
-CO • CH2 • CO • (CH2)3-. The CO substitution can extend all the way to cyclohexane
hexone, -(CO)6-, in which no hydrogen remains. It is also possible to make the oxygen
substitution by means of a valence one combination instead of the full valence two
replacement, in which case we obtain a compound such as cyclohexyl methyl ether, -
(CH2)5 • (CH • OCH3)-.
Additional families of compounds are produced both by secondary substitutions, which
result in structures on the order of cyclohexyl acetate, -(CH2)5 • CH(O • CO • CH3)-, and
by parallel substitutions in two or more neutral groups. An example of the type of
structure that is produced by the multiple substitutions is 1,2,3-cyclopropanetricarboxylic
acid, -(CH • CO • OH)3-. The naturally occurring compounds of this cyclic class are
highly branched rings beginning with such substances as menthol, -CHCH3 • CH2 •
CHOH • (CH • CHCH3 • CH3) • (CH2)2-, and extending to very complex structures, but
they follow the same general structural patterns as the simpler cyclic compounds, and
will not require additional discussion in the present connection.
As mentioned earlier, the CH2 groups have a considerable degree of structural latitude
because of their three-atom composition. The angle between the effective lines of force
varies from about 120 degrees in cyclopropane to less than 15 degrees in the largest
cyclic rings thus far studied. The two-atom groups such as CH do not have this structural
freedom, and are restricted to a narrow range in the vicinity of 60 degrees. The
theoretically exact limits have not yet been determined, but the difficulties involved in the
preparation of derivatives of cyclooctatetraene, -(CH)8-, indicate that this compound is at
the extreme limit of stability. This would suggest a maximum deviation of about 15
degrees from the 60 degree angle of the six-member ring. The atoms of which the
molecular compounds are composed have a limited range in which they can assume
positions above or below the central plane of the molecule. The actual angles between the
effective lines of force will therefore deviate slightly from the figures given above, which
are based on positions in the central plane, but this does not affect the point which is
being made, which is that the cyclic ring is very flexible, whereas the aromatic ring is
practically rigid.
As long as there is even one CH2 group in the ring it has the cyclic flexibility.
Cyclopentadiene can exist in spite of the rigidity of the portion of the ring occupied by
the four CH groups because the CH2 group that completes the structure is able to
accommodate itself to the position necessary for closing the ring. But when all of the
three-atom groups have been replaced by two-atom groups or single atoms the ring
assumes the aromatic rigidity. Cyclobutadiene, for example, would consist of four CH
groups only, and the maximum deviation of the CH lines of force, somewhere in the
neighborhood of 75 degrees, is far short of the 90 degrees that would be required for
closure of the cyclobutadiene ring. All attempts to produce such a compound have
therefore failed.
The properties of the various ring compounds are dependent to a considerable degree on
this question as to whether the members of the rings are restricted to certain definite
positions, or have a substantial range of variability within which they can adjust to the
requirements for combination. In view of this natural line of demarcation, the aromatic
classification, as used in this work, is limited to the rigid structures, specifically to those
compounds composed entirely of valence one CH groups or their monovalent substitution
products, except for such connecting carbon atoms as may be present.
Because of the limitations on the atomic positions, the aromatic compounds, with the
exception of cyclooctatetraene, are confined to the six-member rings, the valence one
equivalents of cyclohexane and its derivatives, and there are no aromatic analogs of
cyclobutane, cycloheptane, etc. The structural rigidity therefore limits the compound
forming versatility of the aromatic rings to a substantial degree, but this is more than
offset by other effects of the same factor. The locations in the chain compounds which
are open to the greatest variety of combinations are the ends of the chain and its longer
branches, if any.
In the aromatic rings every ring location has, to some degree, the properties of an end.
Also, because of the rigidity of the ring, the maximum intergroup distance 1-3 in the ring
is about ten percent less than the distance between the equivalent groups in the aliphatic
chain, after making an allowance for the small amount of flexibility that does exist. This
brings some additional combinations of elements within the limit of effectiveness of the
free electric displacements, and in these rings we find not only groups such as COH, CCl,
CNH2, etc., which are the valence one equivalents of the combinations that make up the
cyclic rings and the interior portions of the chain compounds, but also other combinations
such as CNO2 and CSH which are just beyond the magnetic neutral limits in the non-
aromatic structures. The number of available combinations in which the neutral group
CO accompanies the negative radical is similarly increased.
Secondary substitutions extend the length and diversity of the magnetic neutral groups of
the ring, and produce a wide variety of single branch compounds on the order of isobutyl
benzene,
-(CH)5 • (C • CH2 • CHCH3 • CH3)- and N-ethyl aniline,
-(CH)5 • (C • NH • CH2 • CH3)-, but the principal field for variability in the mononuclear
aromatics lies in their capability of multiple branching. The aromatic rings not only have
a greater variety of available substituents than any other type of molecular compound, but
also a larger number of locations where these substituents may be introduced. This
versatility is compounded by the fact that in the rings, as in the chains, the order of
sequence of the groups has a definite effect on the properties of the compound. The
behavior of 1,2-dichlorobenzene, -(CCl)2 • (CH)4-, for instance, is in many respects quite different from that of 1,4-dichlorobenzene, -CCl • (CH)2 • CCl • (CH)2-.
A significant feature of the aromatic rings is their ability to utilize larger numbers of the
less versatile substituents. For example, the limitation of such groups as NO2 to the
negative radical in the chains means that only one such group can exist in any chain
compound, unless a branch becomes so long that the compound is in effect a union of
two chains. In the aromatic ring this limitation is removed, and compounds with three or
four of the highly reactive nitro groups in the six-member ring are common. The list
includes such well-known substances as picric acid (2,4,6-trinitrophenol), -COH • CNO2
• CH • CNO2 • CH • CNO2-, and TNT (2,4,6-trinitrotoluene),
-CCH3 • CNO2 • CH • CNO2 • CH • CNO2-.
Since there is only one hydrogen atom in the CH group, the direct substitutions in the
aromatic rings are limited to valence one negative components. In order to establish a
valence equilibrium with a bivalent atom or radical two of the aromatic rings are
required. These bivalent atoms or groups therefore constitute a means whereby two rings
can be joined. Diphenyl ether, for example, has the structure -(CH)5 • C-O-C • (CH)5-, in
which the oxygen atom is not a member of either ring but participates in the valence
equilibrium. The bivalent negative radical NH similarly produces diphenylamine, -(CH)5
• C-NH-C • (CH)5-.
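The valence balance in these two-ring structures is easily verified. Since each benzene ring less one hydrogen atom acts as a valence one positive unit (the basis for this is developed below), the balance in diphenyl ether is (+1) + (-2) + (+1) = 0, and in diphenylamine likewise, the NH radical having the net valence (-3) + (+1) = -2.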
Each of these rings is a very stable structure with a minimum of eleven constituent atoms,
and a possibility of considerable enlargement by substitution. This method of joining
rings is therefore a readily available process whereby stable molecules of large size may
be constructed. Further additions and substitutions may be made not only in the rings and
their branches, but in the connecting link as well. Thus the addition of two CH2 groups to
diphenyl ether produces dibenzyl ether, -(CH)5 • C-CH2 • O • CH2-C • (CH)5-.
According to the definition of an aromatic compound, these multiple ring structures are
not purely aromatic, as the connecting links do not qualify. This is a situation which we
will encounter regardless of the manner in which the various organic classifications are
set up, as the more complex compounds are primarily combinations of the different basic
types of structure. Ordinarily a compound is classified as a ring structure if it contains a
ring of any kind, even though the ring may be only a minor appendage on a long chain,
and it is considered as an aromatic if there is at least one aromatic ring present.
In the multiple ring compounds the combination (CH)5 • C, which is a benzene ring less
one hydrogen atom, acts as a monovalent positive radical, the phenyl radical, and the
simple substituted compounds can be named either as derivatives of benzene or as phenyl
compounds; i.e., chlorobenzene or phenyl chloride. The net positive valence one is the
valence condition in which the ring is left when a hydrogen atom is removed, but this net
valence is due entirely to the + 1 valence of the lone carbon atom from which the
hydrogen atom was detached, all other groups being neutral, and it does not necessarily
follow that the carbon valence will remain at + 1. As emphasized earlier, valence is
simply a matter of rotational orientation, and when acting alone any atom can assume any
one of its possible valences, providing that there are no specific obstacles in the
environment. The lone carbon atom is therefore free to accommodate itself to different
environments by reorientation on the basis of any of its alternate valences: +2, +4, or -4.
If two phenyl radicals are brought together, the inter-atomic forces will tend to establish
an equilibrium. A valence balance is a prerequisite for a force equilibrium, and the carbon
atoms will therefore reorient themselves to balance the valences. There are two possible
ways of accomplishing this result. Since carbon has only one negative valence, -4, one
carbon atom takes this valence, and a second must assume the +4 valence in order to
arrive at an equilibrium. In a direct combination of two phenyl groups these valence
changes can be made in the two independent carbon atoms, without modifying the neutral
groups in any way, and this is therefore the most probable structure in such compounds as
biphenyl, -(CH)5 • C-C • (CH)5-. A similar balanced pair of positive and negative valence
3 nitrogen atoms may be introduced, in combination with the valence 4 carbon atoms, to
form azobenzene, -(CH)5 • CNNC • (CH)5-.
The alternative is to make both valence changes in the same phenyl group, giving the
lone carbon atom the -4 valence and increasing the valence of the carbon atom in an
adjacent neutral group from +1 to +4. The product is a ring in which there are four CH
neutral groups, a CH group with a net valence of +3, and a single carbon atom with the -4
valence. By this means the phenyl group is changed from a univalent positive radical, C •
(CH)5, to a univalent negative radical,
(CH)4 • CH • C. Like the methyl group, which can act either as a positive radical CH3
with valence + 1, or as a negative radical CH2 • H with valence - 1, the phenyl group is
able to combine with substances of either valence type, taking the negative valence in
combination with a positive component, and the positive valence when combining with a
negative atom or group. It is negative in all of the phenyl compounds of the metal-organic
class, and not only forms compounds such as phenyl copper, Cu-C • (CH)5-, and diphenyl
zinc, Zn(-C • (CH)5-)2, but also combination phenyl-halide structures like phenyl tin
trichloride, SnCl3-C • (CH)5-.
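The two valence conditions of the phenyl group can be summarized from the figures given above: as a positive radical, C • (CH)5, the total is (+1) + 5(0) = +1; as a negative radical, (CH)4 • CH • C, it is 4(0) + [(+4) + (-1)] + (-4) = -1.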
In combination with the CH3 radical the phenyl group is positive. Either radical can take
either valence, but the methyl group probabilities are nearly equal, while the positive
valence is more probable in the phenyl group, since it involves no change in the benzene
ring other than the removal of a hydrogen atom. The combination -(CH)5 • CCH3- is
therefore toluene, with positive phenyl and negative methyl (carbon valence two), rather
than phenyl methane, which would have negative phenyl and positive methyl (carbon
valence four).
This option is not available in combination with other hydrocarbon radicals, or with
carbon itself, and in such compounds the phenyl radical replaces hydrogen, and is
negative. An additional phenyl substitution in toluene, for example, reduces the CH3
radical to CH2. This group cannot have the -2 net valence that would be necessary for
combination with positive phenyl radicals, and both of the phenyl groups assume the
negative status in the resulting compound, diphenyl methane. The olefinic and acetylenic
benzenes likewise have this type of structure in which the phenyl radical is negative.
Styrene, for instance, is not vinyl benzene, -(CH)5 • C-CH2 • CH, as that combination
would contain two positive components and no negative. It is phenyl ethylene,
CH • CH2-C • (CH)5-, in which CH is positive and the phenyl group is negative.
An interesting phenyl compound is phenyl acetylene, the conventional formula for which
is C6H5 • C • CH. On the basis of our finding that hydrogen is negative to carbon, the
hydrogen atom in the acetylene CH would have to be negative. But this is not true, as it
can be replaced by sodium. It seems evident, then, that this is phenyl carbide, -(CH)5 •
CC2H, a compound similar to butadiyne, which we have already identified as a carbide, C
• CH • C2H. As noted previously, the relative negativity of carbon and hydrogen has no
meaning with reference to the carbide radical, which has a net negative valence, and
cannot be other than negative regardless of what element or group it combines with.
According to the textbooks, the phenyl compound is identified as an acetylene because "it undergoes the typical acetylene reactions." But so does any other carbide. The acetylene lamp was a "carbide" lamp to the cyclists of an earlier day.
Like the phenyl radical, the cyclic radicals can accommodate themselves to either the
positive or negative position in the molecule. These radicals, too, are positive in the
monosubstituted compounds. A methyl substitution produces hexahydro toluene, not
cyclohexyl methane. But if there are two cyclic substitutions in a methyl group they are
both negative, and dicyclohexyl methane is a reality.
At this point it will be desirable to examine the effects of the various modifications of the
ring structure on the cohesion of the molecule. We may take the benzene ring as the basic
aromatic structure. Textbooks and monographs on the aromatic compounds typically
contain a chapter, or at least a lengthy section, on the "benzene problem."69 The
problem, in essence, is that all of the evidence derived from observation and experiment
indicates that the interatomic forces and distances between any two of the six CH groups
in the ring are identical, but no theory of the chemical "bond" has been able to account
for the structure of the benzene molecule without utilizing two or more different kinds of
bonds. The currently favored ―solution‖ of the problem is to sweep it under the rug by
postulating that the structure alternates, or ―resonates,‖ between the different bond
arrangements.
The development of the Reciprocal System of theory now shows that the forces between
the groups in the benzene ring are, in fact, identical. As has been emphasized throughout
the preceding discussion, however, the existence and nature of chemical compounds is
not determined by the cohesive forces between the atoms of the different elements, but by
the directional relationships which the atomic rotations must assume in order to permit
elements with electric rotation in time to establish stable force equilibria in space. The
findings of this theoretical development agree that the orienting effects which enable CH
groups to combine into the benzene ring are of two different types, a short range effect
and a long range effect, but they also reveal that the nature of the orienting influences has
no bearing on the magnitude of the interatomic forces, and this explains why no
difference in these forces can be detected experimentally. The forces between any two of
the CH neutral groups in the ring are identical.
Inasmuch as the orienting factors cause the atoms to align their rotations in certain
specific relative directions, they are, in a sense, forces, but in order to distinguish them
from the actual cohesive forces that hold the atoms, groups, and molecules together in the
positions determined by these orienting factors we are using the term "effects" rather than "forces" in application to the orientation, even though this introduces an element of
awkwardness into the presentation. The nature of these effects, as they apply to the
benzene ring, can be illustrated by an orientation diagram of the kind previously
introduced.
[Orientation diagram: the benzene ring]
The pairs of CH groups, 1-2, 3-4, and 5-6, in the diagram, are held in the combining
positions by the orienting effects of a directional character that are exerted by all
magnetic groups or compounds. Alternate groups, 1-3, 2-4, etc., are within unit distance,
and therefore within the effective range of these orienting effects. The primary effect of
group 1, for instance, is directed toward group 2, but group 3 is also within unit distance,
and consequently there is a long range 1-3 secondary effect as well as a short range 1-2
primary effect. Because of the directional nature of these orienting effects there is no 2-3
primary effect, but the pairs 1-2 and 3-4 are held in position by the 1-3 and 4-2 secondary
effects.
If we replace one of the hydrogen atoms with some negative substituent, the orientation
situation is unchanged. The new neutral group, or that portion of it which is within the
range of the ring forces if the group is a long one, takes over the functions of the CH
group without alteration. However, removal of a hydrogen atom and conversion of the
benzene molecule into a positive phenyl radical changes the orientation pattern to
[Orientation diagram: the positive phenyl radical]
The secondary effect 3-5 has now been eliminated, as the lone carbon atom does not have
the free electric rotation characteristic of the magnetic groups or compounds, but the
remaining orientation effects are still adequate to hold the structure together. The further
valence change that is necessary if the phenyl radical is to assume a negative valence
similarly eliminates the 4-2 secondary effect, as group 4 is no longer magnetic. However,
the two carbon atoms and one hydrogen atom combine into a radical CCH, with a net
valence of - 1. This radical has no orienting effect on its neighbors, but the adjoining
magnetic neutral groups do exert their effect on it. The orientation pattern is
[Orientation diagram: the negative phenyl radical]
As previously explained, the carbon atoms in the CCH combination have valences +4 and
-4. If we remove the hydrogen atom from this group we obtain a ring in which four CH
neutral groups are combined with two individual carbon atoms. This structure is neutral
and is capable of existing as an independent compound, but, like the methylene molecule,
it does not actually do so, because it has a strong tendency to form a double ring. The
four CH groups which are attached to the C-C combination can be duplicated on the
opposite side of the C-C line of action, forming another similar ring which utilizes the
same pair of carbon atoms as part of its ring structure. The fact that the effects originating
from the free electric rotations are exerted on the carbon atoms by the CH groups on one
side does not in any way interfere with the existence of similar effects on the other side.
The orientation relations in the second ring are identical with those of the first. Neither
ring can now recapture a hydrogen atom and become a phenyl radical because the
presence of the other ring prevents the approach of the free hydrogen atoms. The double
ring compound therefore has a high degree of stability.
This compound is naphthalene, -(CH)4 • C=C • (CH)4-, a condensed ring aromatic
hydrocarbon. When used in the formula of a compound in this work, the double mark
between two carbon atoms is a symbol indicating the condensed ring type of structure in
which the rings are joined at two positions rather than at a single position as in
compounds such as biphenyl. It has no implications of the kind associated with the
"double bonds" of the electronic theory.
A third ring added in the same manner produces anthracene. Further similar additions in
line result in a series of compounds: naphthacene, pentacene, and so on. But it is not
necessary that the additions be made in line, and each of these compounds is
accompanied by others which have the same composition, but different structures. For
instance, the four ring compounds of the naphthacene composition, C18H12, include
chrysene, naphthanthracene, 3,4-benzophenanthrene, and triphenylene. Pyrene has the
same four rings, but a more compact structure, and a composition C16H10.
The structural behavior of the condensed rings is essentially the same as that of the single
benzene rings. They join to form compounds such as binaphthyl and bianthryl; they act as
radicals (naphthyl, anthryl, phenanthryl, etc.); they attach more rings by substitution for
hydrogen to produce compounds such as triphenyl anthracene; and they form a great
variety of compounds by utilizing the other negative substituents available to the
aromatic rings. Many interesting and important compounds are included in this category,
but no new structural features are involved, and they are therefore outside the scope of
the present discussion.
The two CH groups of the middle ring of the anthracene structure are not necessary for
stability, and they can be eliminated. The resulting compound is biphenylene, -(CH)4 •
CC=CC • (CH)4-. A structure with only one CH group in the middle ring, intermediate
between anthracene and biphenylene, is ruled out by the low probability of the continued
existence of a single CH group, but a similar compound can be formed by putting a CH2
group in the intermediate position, as the CH2 groups are not restricted to pairs. The new
compound is fluorene. Another CH2 group in the opposite position restores the
anthracene structure with a cyclic middle ring. This compound is dihydroanthracene.
As previously mentioned, a ring with even one CH2 group deviates substantially from the
typical aromatic behavior, and any such ring is classified with the cyclic structures, but
this effect is confined to the specific ring, and any adjacent aromatic rings retain their
aromatic character. Such compounds as fluorene and dihydroanthracene should therefore
be regarded as combination cyclic-aromatic structures. These compounds occur in large
numbers and in great variety, but the principles of combination are the same as in the
purely aromatic compounds, and do not need to be repeated. Since the cyclic compounds
are less stable than the corresponding aromatics, the combination structures do not cover
as large a field as the aromatic compounds, but a very stable structure such as that of
naphthalene does extend through the entire substitution range. Beginning with the purely
aromatic compound, successive pairs of hydrogen atoms can be added all the way to the
purely cyclic compound, decahydronaphthalene.
The reduction in the variety of combination structures due to the fact that the cohesive
force in the cyclic ring is weaker than that in the aromatic ring is offset to some extent by
the ability of the CH2 groups to form rings of various sizes. 1,2,3,4-
tetrahydronaphthalene, for instance, can drop one of its CH2 groups, forming indane, -
(CH)4 • C=C • (CH2)3-. Because of the CH2 flexibility, the cyclic ring in this compound is
still able to close even if two of the remaining CH2 groups are replaced by CH. This
produces indene, -(CH)4 • C=C • (CH)2 • CH2-.
Polynuclear cyclic compounds are formed in the same manner as the polynuclear
aromatic and combination structures, but not in as great a number or variety.
Corresponding to biphenyl and its substitution products are dicyclopentyl, dicyclohexyl,
etc., and their derivatives; triphenyl methane has a cyclic equivalent in tricyclohexyl
methane; the cyclic analog of naphthalene is bicyclodecane, and so on.
The last major division of the ring compounds is the heterocyclic class, in which are
placed all compounds in which any of the carbon atoms in the cyclic or aromatic rings are
replaced by other elements. The principal reason for setting up a special classification for
these compounds is that most of the substitutions of other elements for carbon require
valence changes of one kind or another, unlike the substitutions for hydrogen, which
normally involve no valence modifications, except in those cases where two valence one
hydrogen atoms are replaced by one valence two substituent.
Some of the heterocyclic substitutions are of this two for one character, and in those cases
the normal cyclic or aromatic structure is not altered. For example, if we begin with
quinone,
-(CH)2 • CO • (CH)2 • CO-, an aromatic carbon compound, and replace two of the CH
groups with NH neutral groups we obtain uracil, -NH • CO • NH • CH • CH • CO-. One
more similar pair replacement removes the last of the hydrocarbon groups and results in
urazine, -NH • CO • NH • NH • CO • NH-. In the compound cyclohexane hexone
previously mentioned all of the hydrogen has been replaced, and in borazole, -
BH • NH • BH • NH • BH • NH-, all carbon is eliminated. All of these heterocyclic
compounds are composed entirely of two-member magnetic neutral groups, and therefore
have the benzene structure: six groups arranged in a rigid aromatic ring.
More commonly, however, the heterocyclic substituent is a single atom or a radical, and
such a substitution requires a valence change in some other part of the ring to maintain
the valence equilibrium. Substitutions therefore often take place in balanced pairs. In
pyrone,
-(CH)2 • CO • (CH)2 • O-, for example, the CO combination is not a neutral group, but a
radical with valence +2 which balances the -2 valence of the oxygen atom. The CH2
radical, in which carbon also has its normal valence +4, has the same function in pyran,
-(CH)2 • CH2 • (CH)2 • O-. Substitution of two nitrogen atoms with the balanced valences
of +3 and -3 in the aromatic ring produces a diazine. If the nitrogen atoms are in the 1,2
positions the compound is pyridazine, -N • N • (CH)4-. The properties of the 1,3 and 1,4 compounds differ enough from those of pyridazine that they have been given
distinctive names, pyrimidine and pyrazine, respectively.
Since the positive and negative radicals in a ring have no fixed positions similar to the
two ends of the chains, it is not possible to indicate their status by their positions as we do
in the formulas we are using for the chain compounds. Some appropriate method of
identification probably should be devised in order to make the formula as representative
of the actual structure as possible, but this is not necessary for the purposes of the present
work, and can be left for later consideration.
The following orientation diagrams for pyrone and pyridazine are typical of those for
heterocyclic compounds with single atom or radical substitutions:
[Orientation diagrams: pyrone and pyridazine]
If the valence equilibrium is not achieved in this manner by means of a pair of
substitutions, a valence change in one of the neutral groups is necessary. A single
nitrogen atom substituted into the ring requires a +3 valence elsewhere in the structure to
counterbalance the negative nitrogen valence. This is readily accomplished by a shift of
one of the carbon valences to +4. The reconstructed ring then consists of a nitrogen atom,
valence -3, a CH radical, valence +3, and four CH neutral groups. This compound is
pyridine, -(CH)5 • N-. Hydrogenation can be carried out by steps through intermediate
compounds all the way to the corresponding cyclic structure, piperidine, -(CH2)5 • NH-.
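As a check on the pyridine structure, using the valences just stated, the ring total is (-3) + [(+4) + (-1)] + 4(0) = 0.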
When oxygen, or another valence two negative component, is introduced into the
aromatic ring the necessary valence balance may be attained by a simultaneous
replacement of one of the CH neutral groups by a CH2 radical, as already noted in the
case of pyran. Or the required balance can be achieved without introduction of additional
hydrogen if the carbon valences in two of the CH groups are stepped up to the +2 level
(the primary magnetic valence), forming two CH radicals, each with valence + 1. This
leaves an unstable odd number of CH neutral groups in the six-member ring, but there is
sufficient flexibility in the structure to enable a ring closure on a five-member basis, and
stability is restored by ejecting a neutral group. The resulting compound is furan, -(CH)4 •
O-, a five-member ring with one oxygen atom, two CH neutral groups, and two CH
valence one positive radicals. Substituting sulfur instead of oxygen produces thiophene, -
(CH)4 • S-, while inserting the negative radical NH into the same position produces
pyrrole, -(CH)4 • NH-. Each of these furan type compounds also exists in the cyclic
dihydro and tetrahydro forms. The furan orientation pattern is
[Orientation diagram: furan]
The essential feature of all of these five-member rings of the furan class is a valence
equilibrium in which three of the five components participate, the two remaining
components being the neutral groups that furnish the ring-forming capability. In furan the
equilibrium combination is
C+-O-2-C+. Formation of a similar combination with nitrogen in the negative position
requires that some element or radical positive to nitrogen take the positive position, and
in the heterocyclic division nitrogen itself commonly accepts this role. The most probable
valence under these conditions is +3, as in hydrazine. The two nitrogen valences, +3 and
-3, are then in equilibrium, and in this case the fifth component of the five-member ring
must be a neutral group. Since it is a single group, it is the cyclic group CH2, and the
neutral trio is N+3-N-3-CH2°.
The compound is isopyrazole, -N • CH • CH • CH2 • N-. An alternate group arrangement
produces isoimidazole, -N • CH2 • N • CH • CH-. A variation of this structure moves a
hydrogen atom from the CH2 group to the positive nitrogen, which changes the neutral
combination to NH+2-N-3-CH+1. The compounds formed on this basis are pyrazole,
-N • (CH)3 • NH-, and imidazole, -N • CH • NH • CH • CH-.
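In each of these five-member rings the equilibrium can be verified from the valences given above: furan, 2(+1) + (-2) + 2(0) = 0, with thiophene and pyrrole following the same pattern; isopyrazole, (+3) + (-3) + 0 + 2(0) = 0; and pyrazole, (+2) + (-3) + (+1) + 2(0) = 0.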
From these basic heterocyclic types a great variety of condensed systems such as
coumarone (benzofuran), indole (benzopyrrole), quinoline (benzopyridine), etc., can be
formed by combination with other rings. Both the single rings and the condensed systems
are then open to further enlargement by all of the processes of addition and substitution
previously discussed, and a very substantial proportion of the known organic compounds
belong to this class. From a structural standpoint, however, the basic principles involved
in the formation of all of these compounds are those that have been covered in the
preceding discussion.
In the foregoing pages we have encountered several kinds of isomerism, the existence of
different compounds with the same composition. Some, such as the cyanides and
isocyanides, differ only in valence; some, such as the straight chain and branched
paraffins, differ in the position of the neutral groups; and some, such as the aldehydes and
the ketones, differ in the assignment of the atoms of the constituent elements to the
structural groups. Most of these isomers that we have examined thus far are distinct
stable compounds. There are also some isomeric systems in which the two forms of a
substance convert so readily from one to the other that they establish an equilibrium
which varies in accordance with the conditions to which the compound is subject. This
form of isomerism is known as tautomerism.
One of the familiar examples of tautomerism is that between the "keto" and "enol" forms of certain substances. Ethyl acetoacetate, COCH3 • CH2 • CO • (O • CH2 • CH3), is the
keto form of a compound that also exists in the enol form as the ethyl ester of
hydroxycrotonic acid, COH • CHCH3 • CO • (O • CH2 • CH3). The compound freely
changes from one form to the other to meet changing physical and chemical conditions.
This is another example of counterbalancing carbon and hydrogen valence changes, and
it is an indication of the ease with which such changes can be made. In the radical
COCH3 the carbon valence is +4, and all hydrogen is negative. The transition to the enol
form involves a drop in the carbon valence to +2, and one hydrogen atom shifts from -1
to +1 to maintain the balance. The CH2 group in the radical is then superfluous, and it
moves to the adjacent neutral group. The remainder of the molecule is unchanged.
The development of the Reciprocal System of theory has not yet been extended to a study
of tautomerism. Nor has it been applied to those kinds of isomerism which depend on the
geometrical arrangement of the component parts of the molecules, such as optical
isomerism. These aspects of the general subject of molecular structure will therefore have
to be left for later treatment.
This chapter is the last of the four that have been devoted to an examination of the
structure of chemical compounds. In closing the discussion it will be appropriate to point
out just how the presentation in these chapters fits into the general plan of the work, as
defined in Chapter 2. The usual discussion of molecular structure, as we find it in the
textbooks, starts with the empirical observation that certain chemical compounds-sodium
chloride, benzene, water, ethyl alcohol, etc.-exist, and have certain properties, including
different molecular structures. The theoretical treatment then attempts to devise plausible
explanations for the existence of these observed compounds, their structures, and other
properties. This present work, on the other hand, is entirely deductive. By developing the
necessary consequences of the fundamental postulates of the Reciprocal System we find
that in a universe of motion matter must exist; it must exist in the form of a series of
elements; and those elements must have the capability of combining in certain specific
ways to form chemical compounds. In this and the preceding chapters, the most
important of the possible types of molecular structures have been derived from theory,
and specific compounds have been characterized by composition and structure.
The second objective of the work is to identify these theoretical combinations with the
observed chemical compounds. For example, we deduce purely from theory that there
must exist a compound in the form of a chain of three groups of atoms, in which the first
group contains three atoms of element number one and one atom of element number six,
and has a net group combining power, or valence, of +1. The second group has two
atoms of element number one and one of number six, and is neutral; that is, its net
valence is zero. The third group has one atom of element number one and one of element
number eight, and a valence of -1. This theoretical composition and structure are in full
agreement with the composition of the observed compound known as ethyl alcohol, and
with the structure of that compound as deduced from physical and chemical observation
and measurement. We are thus entitled to conclude that ethyl alcohol is the chemical
compound existing in the physical universe that corresponds to the compound which
must exist in the theoretical universe of the Reciprocal System. In other words, we have
identified the theoretical compound as ethyl alcohol.
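
As a purely illustrative aid, and not part of the development itself, the identification just described can be restated mechanically. The short Python sketch below simply encodes the three theoretical groups, with element numbers 1, 6, and 8 read as hydrogen, carbon, and oxygen, and checks that the group valences balance; the variable and function names are hypothetical and chosen only for this example.

symbols = {1: "H", 6: "C", 8: "O"}   # element number -> symbol (illustrative mapping)

# (atom counts by element number, net group valence) for the three theoretical groups
groups = [
    ({6: 1, 1: 3}, +1),   # one carbon, three hydrogens: CH3, valence +1
    ({6: 1, 1: 2},  0),   # one carbon, two hydrogens:   CH2, neutral
    ({8: 1, 1: 1}, -1),   # one oxygen, one hydrogen:    OH,  valence -1
]

formula = " . ".join(
    "".join(symbols[z] + (str(n) if n > 1 else "") for z, n in group.items())
    for group, _ in groups
)
net_valence = sum(valence for _, valence in groups)

print(formula)       # CH3 . CH2 . OH  -- the observed structure of ethyl alcohol
print(net_valence)   # 0 -- the groups balance to a neutral molecule
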
The great majority of the identifications cited in the preceding pages are unequivocal,
almost self-evident, we may say, and this agreement establishes the validity of both the
theoretical development and the empirical determination of the molecular structures.
Where there are discrepancies, some of them, such as the one involved in the structure of
ethylene, are quite easily explained. However, as the size and complexity of the
molecules increase, the number and variety of the possible modifications of the
theoretical structure also increase, in even greater proportion, and the observable
differences between the various modifications decrease. The validity of the
identifications is therefore less certain than in the case of the smaller and simpler
molecules, but this does not mean that there is any additional uncertainty with respect to
the existence of the more complex theoretical compounds. It merely means that the
available empirical information is not adequate to permit a definite decision as to which
of the observed compounds corresponds to a particular theoretical structure. It can be
expected, therefore, that further investigation will clear up most of these questions.
The discussion of chemical compounds in this and the preceding three chapters completes
the description of the primary physical entities, the actors in the drama of the physical
universe. In the next volume we will begin applying the theoretical findings to an
examination of the drama itself: the action in which these entities are involved.

References
1. Butterfield, Herbert, The Origins of Modern Science, Revised Edition, The Free
Press, New York, 1965, page 18.
2. Schlegel, Richard, Completeness in Science, Appleton-Century-Crofts, New
York, 1967, page 152.
3. Burbidge and Burbidge, Quasi-Stellar Objects, W. H. Freeman & Co., San
Francisco, 1967, page vii.
4. New Scientist, Feb. 13, 1969.
5. Dicke, R. H., American Scientist, March 1959.
6. Wooldridge, Dean E., The Machinery of Life, McGraw-Hill Book Co., New York,
1966, page 4.
7. Conant, James B., Modern Science and Modern Man, Columbia University Press,
1952, page 47.
8. Feynman, Richard, The Character of Physical Law, The M.I.T. Press, 1967, page
145.
9. Bridgman, P. W., The Nature of Physical Theory, Princeton University Press,
1936, page 134.
10. Einstein and Infeld, The Evolution of Physics, Simon & Schuster, New York,
1938, page 159.
11. Dingle, Herbert, A Century of Science, Hutchinson's Publications, London, 1951,
page 315.
12. Feynman, Richard, op. cit., page 156.
13. Margenau, Henry, Quantum Theory, Vol. 1, edited by D. R. Bates, Academic
Press, New York, 1961, page 6.
14. Braithwaite, R. B., Scientific Explanation, Cambridge University Press, 1953,
page 22.
15. Feynman, Richard, op. cit., page 30.
16. Einstein, Albert, The Structure of Scientific Thought, E. H. Madden, editor,
Houghton Mifflin Co., Boston, 1960, page 82.
17. Dirac, P. A. M., Scientific American, May 1963.
18. Butterfield, Herbert, op. cit., page 13.
19. Hobbes, Thomas, The Metaphysical System of Hobbes, M. W. Calkins, editor,
The Open Court Publishing Co., La Salle, Ill., 1948, page 22.
20. North, J. D., The Measure of the Universe, The Clarendon Press, Oxford, 1965,
page 367.
21. Einstein, Albert, Relativity, Henry Holt & Co., New York, 1921, page 74.
22. Hocking, William E., Preface to Philosophy, The Macmillan Co., New York,
1946, page 425.
23. Tolman, Richard C., The Theory of the Relativity of Motion, University of
California Press, 1917, page 27.
24. Wigner, Eugene P., Symmetries and Reflections, Indiana University Press, 1967,
page 30.
25. Jeans, Sir James, The Universe Around Us, Cambridge University Press, 1947,
page 113.
26. Heisenberg, Werner, On Modern Physics, Clarkson N. Potter, New York, 1961,
page 16.
27. Heisenberg, Werner, Physics Today, March 1976.
28. Schlegel, Richard, op. cit., page 18.
29. Gold and Hoyle, Paris Symposium on Radio Astronomy, paper 104, Stanford
University Press, 1959.
30. Darrow, Karl K., Scientific Monthly, March 1942.
31. Finlay-Freundlich, E., Monthly Notices of the Royal Astronomical Society, 105-
237.
32. Heisenberg, Werner, Philosophic Problems of Nuclear Science, Pantheon Books,
New York, 1952, page 38.
33. Einstein, Albert, Albert Einstein: Philosopher-Scientist, Paul Schilpp, editor, The
Library of Living Philosophers, Evanston, Ill., 1949, page 67.
34. Alfven, Hannes, Worlds-Antiworlds, W. H. Freeman & Co., San Francisco, 1966,
page 92.
35. Hawkins, David, The Language of Nature, W. H. Freeman & Co., San Francisco,
1964, page 183.
36. Lindsay, R. B., Physics Today, Dec. 1967.
37. Einstein, Albert, Sidelights on Relativity, E. P. Dutton & Co., New York, 1922
page 23.
38. Einstein and Infeld, op. cit., page 185.
39. Einstein, Albert, Relativity, op. cit., page 126.
40. Heisenberg, Werner, Physics and Philosophy, Harper & Bros., New York, 1958
page 129.
41. Einstein, Albert, Foreword to Concepts of Space, by Max Jammer, Harvard
University Press, 1954.
42. Jeans, Sir James, op. cit., page 78.
43. Schlegel, Richard, Time and the Physical World, Michigan State University Press
1961, page 160.
44. Whitrow, G. J., The Natural Philosophy of Time, Thomas Nelson & Sons, London
1961, page 218.
45. Moller, C., The Theory of Relativity, The Clarendon Press, Oxford, 1952, page 49.
46. Tolman, Richard C., Relativity, Thermodynamics and Cosmology, The Clarendon
Press Oxford, 1934, page 195.
47. Science, July 14, 1972.
48. Heisenberg, Werner, Philosophic Problems of Nuclear Science, op. cit., page 12.
49. Thomson, Sir George, The Inspiration of Science, Oxford University Press,
London 1961, page 66.
50. Millikan, Robert A., Time and Its Mysteries, Collier Books, New York, 1962,
page 24.
51. Jeans, Sir James, Physics and Philosophy, The Macmillan Co., New York, 1945
page 190.
52. Toulmin and Goodfield, The Architecture of Matter, Harper & Row, New York
1962, page 298.
53. Bridgman, P. W., A Sophisticate's Primer of Relativity, Wesleyan University
Press 1962, page 10.
54. Feyerabend, P. K., Philosophy of Science, The Delaware Seminar, Vol. 2 (1962-
1963) Bernard Baumrin, editor, Interscience Publishers, New York, 1963, page
17.
55. Einstein and Infeld, op. cit., page 195.
56. Bridgman, P. W., Reflections of a Physicist, Philosophical Library, New York,
1955 page 186.
57. Will, Clifford M., Scientific American, Nov. 1974.
58. Feynman, Richard, op. cit., pages 156, 166.
59. Cohen, Crowe and Du Mond, The Fundamental Constants of Physics, Interscience
Publishers, New York, 1957.
60. Alfven, Hannes, Scientific American, Apr. 1967.
61. Boorse and Motz, The World of the Atom, Vol. 2, Basic Books, New York, 1966
page 1457.
62. Hooper and Scharff, The Cosmic Radiation, John Wiley & Sons, New York, 1958
page 57.
63. Swann, W. F. G., Journal of the Franklin Institute, May 1962.
64. Davis, Leverett, Jr., Nuovo Cimento Suppl., 10th Ser., Vol. 13, No. 1, (1959).
65. Donahue, T. M., Physical Review, 2nd Ser., Vol. 84, No. 5, (1951), page 972.
66. Satz, Ronald W., Reciprocity, May 1975.
67. Weisskopf, V. F., Comments on Nuclear and Particle Physics, Jan.-Feb. 1969.
68. Pauling, Linus, The Nature of the Chemical Bond, Cornell University Press, 1960
page 217.
69. See, for instance, Badger, G. M., The Structures and Reactions of the Aromatic
Compounds, Cambridge University Press, 1954, Chapter 1.
70. Weisskopf, V. F., Lectures in Theoretical Physics, Vol. III, Britten, J. Downs,
and B. Downs, editors, Interscience Publishers, New York, 1961, page 80.

DEWEY B. LARSON: THE COLLECTED WORKS

Dewey B. Larson (1898-1990) was an American engineer and the originator of the
Reciprocal System of Theory, a comprehensive theoretical framework capable of
explaining all physical phenomena from subatomic particles to galactic clusters. In
this general physical theory space and time are simply the two reciprocal aspects of
the sole constituent of the universe: motion. For more background information on the
origin of Larson's discoveries, see Interview with D. B. Larson taped at Salt Lake
City in 1984. This site covers the entire scope of Larson's scientific writings,
including his exploration of economics and metaphysics.

Physical Science

The Structure of the Physical Universe
The original groundbreaking publication wherein the Reciprocal System of Physical
Theory was presented for the first time.

The Case Against the Nuclear Atom
"A rude and outspoken book."

Beyond Newton
"...Recommended to anyone who thinks the subject of gravitation and general
relativity was opened and closed by Einstein."

New Light on Space and Time
A bird's eye view of the theory and its ramifications.

The Neglected Facts of Science
Explores the implications for physical science of the observed existence of scalar
motion.

Quasars and Pulsars
Explains the most violent phenomena in the universe.

Nothing but Motion
The first volume of the revised edition of The Structure of the Physical Universe,
developing the basic principles and relations.

Basic Properties of Matter
The second volume of the revised edition of The Structure of the Physical Universe,
applying the theory to the structure and behavior of matter, electricity and
magnetism.

The Universe of Motion
The third volume of the revised edition of The Structure of the Physical Universe,
applying the theory to astronomy.

The Liquid State Papers
A series of privately circulated papers on the liquid state of matter.

The Dewey B. Larson Correspondence
Larson's scientific correspondence, providing many informative sidelights on the
development of the theory and the personality of its author.

The Dewey B. Larson Lectures
Transcripts and digitized recordings of Larson's lectures.

The Collected Essays of Dewey B. Larson
Larson's articles in Reciprocity and other publications, as well as unpublished
essays.

Metaphysics

Beyond Space and Time
A scientific excursion into the largely unexplored territory of metaphysics.

Economic Science

The Road to Full Employment
The scientific answer to the number one economic problem.

The Road to Permanent Prosperity
A theoretical explanation of the business cycle and the means to overcome it.
From: http://www.reciprocalsystem.com/bpm/index.htm

Basic Properties of Matter


DEWEY B. LARSON
Volume II of a revised and enlarged edition of
THE STRUCTURE OF THE PHYSICAL UNIVERSE

Preface

1. Solid Cohesion 15. Electrical Storage


2. Inter–atomic Distances 16. Induction of Charges
3. Distances in Compounds 17. Ionization
4. Compressibility 18. The Retreat From Reality
5. Heat 19. Magnetostatics
6. Specific Heat Patterns 20. Magnetic Quantities and Units
7. Temperature Relations 21. Electromagnetism
8. Thermal Expansion 22. Charges in Motion
9. Electric Currents 23. Magnetic Materials
10. Electrical Resistance 24. Isotopes
11. Thermoelectric Properties 25. Radioactivity
12. Scalar Motion 26. Atom Building
13. Electric Charges 27. Mass and Energy
14. The Basic Forces References

Preface
THIS volume is the second in a series in which I am undertaking to develop the
consequences that necessarily follow if it is postulated that the physical universe is
composed entirely of motion. The characteristics of the basic motion were defined in
Nothing But Motion, the first volume of the series, in the form of seven assumptions as to
the nature and interrelation of space and time. In the subsequent development, the
necessary consequences of these assumptions have been derived by logical and
mathematical processes without the introduction of any supplementary or subsidiary
assumptions, and without introducing anything from experience. Coincidentally with this
theoretical development, it has been shown that the conclusions thus reached are
consistent with the relevant data from observation and experiment, wherever a
comparison can be made. This justifies the assertion that, to the extent to which the
development has been carried, the theoretical results constitute a true and accurate picture
of the actual physical universe.
In a theoretical development of this nature, starting from a postulate as to the
fundamental nature of the universe, the first results of the deductive process necessarily
take the form of conclusions of a basic character: the structure of matter, the nature of
electromagnetic radiation, etc. Inasmuch as these are items that cannot be apprehended
directly, it has been possible for previous investigators to formulate theories of an ad hoc
nature in each individual field to fit the limited, and mainly indirect, information that is
available. The best that a correct theory can do in any one of these individual areas is to
arrive at results that also agree with the available empirical information. It is not
possible, therefore, to grasp the full significance of the new development unless it is
recognized that the new theoretical system, the Reciprocal System, as we call it, is one of
general application, one that reaches all of its conclusions in all physical fields by
deduction from the same set of basic premises.
Experience has indicated that it is difficult for most individuals to get a broad enough
view of the fundamentals of the many different branches of physical science for a full
appreciation of the unitary character of this new system. However, as the deductive
development is continued, it gradually extends down into the more familiar areas, where
the empirical information is more readily available, and less subject to arbitrary
adjustment or interpretation to fit the prevailing theories. Thus the farther the
development of this new general physical theory is carried, the more evident its validity
becomes. This is particularly true where, as in the subject matter treated in this present
volume, the theoretical deductions provide both explanations and numerical values in
areas where neither is available from conventional sources.
There has been an interval of eight years between the publication of Volume I and the
first complete edition of this second volume in the series. Inasmuch as the investigation
whose results are here being reported is an ongoing activity, a great deal of new
information has been accumulated in the meantime. Some of this extends or clarifies
portions of the subject matter of the first volume, and since the new findings have been
taken into account in dealing with the topics covered in this volume, it has been necessary
to discuss the relevant aspects of these findings in this volume, even though some of them
may seem out of place. If, and when, a revision of the first volume is undertaken, this
material will be transferred to Volume I.
The first 11 chapters of this volume were published in the form of reproductions of the
manuscript pages in 1980. Publication of the first complete edition has been made
possible through the efforts of a group of members of the International Society of Unified
Science, including Rainer Huck, who handled the financing, Phil Porter, who arranged
for the printing, Eden Muir, who prepared the illustrations, and Jan Sammer, who was in
charge of the project.

D. B. Larson
December 1987
CHAPTER 1

Solid Cohesion
The consequences of the reversal of direction (in the context of a fixed reference system)
that takes place at unit distance were explained in a general way in Chapter 8 of Volume
I. As brought out there, the most significant of these consequences is that establishment
of an equilibrium between gravitation and the progression of the natural reference system
becomes possible.
There is a location outside unit distance where the magnitudes of these two motions are
equal: the distance that we are calling the gravitational limit. But this point of equality is
not a point of equilibrium. On the contrary, it is a point of instability. If there is even a
slight unbalance of forces one way or the other, the resulting motion accentuates the
unbalance. A small inward movement, for instance, strengthens the inward force of
gravitation, and thereby causes still further movement in the same direction. Similarly, if
a small outward movement occurs, this weakens the gravitational force and causes further
outward movement. Thus, even though the inward and outward motions are equal at the
gravitational limit, this is actually nothing but a point of demarcation between inward and
outward motion. It is not a point of equilibrium.
In the region inside unit distance, on the contrary, the effect of any change in position
opposes the unbalanced forces that produced the change. If there is an excess
gravitational force, an outward motion occurs which weakens gravitation and eliminates
the unbalance. If the gravitational force is not adequate to maintain a balance, an inward
motion takes place. This increases the gravitational effect and restores the equilibrium.
Unless there is some intervention by external forces, atoms move gravitationally until
they eventually come within unit distance of other atoms. Equilibrium is then established
at positions within this inside region: the time region, as we have called it.
The condition in which a number of atoms occupy equilibrium positions of this kind in an
aggregate is known as the solid state of matter. The distance between such positions is
the inter-atomic distance, a distinctive feature of each particular material substance that
we will examine in detail in the following chapter. Displacement of the equilibrium in
either direction can be accomplished only by the application of a force of some kind, and
a solid structure resists either an inward force, a compression, or an outward force, a
tension. To the extent that resistance to tension operates to prevent separation of the
atoms of a solid it is commonly known as the force of cohesion.
The conclusions with respect to the nature and origin of atomic cohesion that have been
reached in this work replace a familiar theory, based on altogether different premises.
This previously accepted hypothesis, the electrical theory of matter, has already had
some consideration in the preceding volume, but since the new explanation of the nature
of the cohesive force is basic to the present development, some more extensive
comparisons of the two conflicting viewpoints will be in order before we proceed to
develop the new theoretical structure in greater detail.
The electrical, or electronic, theory postulates that the atoms of solid matter are
electrically charged, and that their cohesion is due to the attraction between unlike
charges. The principal support for the theory comes from the behavior of ionic
compounds in solution. A certain proportion of the molecules of such compounds split
up, or dissociate, into oppositely charged components which are then called ions. The
presence of the charges can be explained in either of two ways: (1) the charges were
present, but undetectable, in the undissolved material, or (2) they were created in the
solution process. The adherents of the electrical theory base it on explanation (1). At the
time this explanation was originally formulated, electric charges were thought to be
relatively permanent entities, and the conclusion with respect to their role in the solution
process was therefore quite in keeping with contemporary scientific thought. In the
meantime, however, it has been found that electric charges are easily created and easily
destroyed, and are no more than a transient feature of matter. This cuts the ground from
under the main support of the electrical theory, but the theory has persisted because of the
lack of any available alternative.
Obviously some kind of a force must hold the solid aggregate together. Outside of the
forces known to result directly from observable motion, there are only three kinds of
force of which there has heretofore been any definite observational knowledge:
gravitational, electric, and magnetic. The so-called "forces" which play various roles in
present-day atomic physics are purely hypothetical. Of the three known forces, the only
one that appears to be strong enough to account for the cohesion of solids is the electric
force. The general tendency in scientific circles has therefore been to take the stand that
cohesion must result from the operation of electrical forces, notwithstanding the lack of
any corroboration of the conclusions reached on the basis of the solution process, and the
existence of strong evidence against the validity of those conclusions.
One of the serious objections to this electrical theory of cohesion is that it is not actually
a theory, but a patchwork collection of theories. A number of different explanations are
advanced for what is, to all appearances, the same problem. In its basic form, the theory
is applicable only to a restricted class of substances, the so-called "ionic" compounds.
But the great majority of compounds are "non-ionic." Where the hypothetical ions are
clearly non-existent, an electrical force between ions cannot be called upon to explain the
cohesion, so, as one of the general chemistry texts on the author's shelves puts it, "A
different theory was required to account for the formation of these compounds." But this
"different theory," based on the weird concept of electrons "shared" by the interacting
atoms, is still not adequate to deal with all of the non-ionic compounds, and a variety of
additional explanations are called upon to fill the gaps.
In current chemical parlance the necessity of admitting that each of these different
explanations is actually another theory of cohesion is avoided by calling them different
types of "bonds" between the atoms. The hypothetical bonds are then described in terms
of interaction of electrons, so that the theories are united in language, even though widely
divergent in content. As noted in Chapter 19, Vol. I, a half dozen or so different types of
bonds have been postulated, together with "hybrid" bonds which combine features of the
general types.
Even with all of this latitude for additional assumptions and hypotheses, some substances,
notably the metals, cannot be accommodated within the theory by any expedient thus far
devised. The metals admittedly do not contain oppositely charged components, if they
contain any charged components at all, yet they are subject to cohesive forces that are
indistinguishable from those of the ionic compounds. As one prominent physicist, V. F.
Weisskopf, found it necessary to admit in the course of a lecture, "I must warn you I do
not understand why metals hold together." Weisskopf points out that scientists cannot
even agree as to the manner in which the theory should be applied. Physicists give us one
answer, he says, chemists another, but "neither of these answers is adequate to explain
what a chemical bond is."1

This is a significant point. The fact that the cohesion of metals is clearly due to something
other than the attraction between unlike charges logically leads to a rather strong
presumption that atomic cohesion in general is non-electrical. As long as some non-
electrical explanation of the cohesion of metals has to be found, it is reasonable to expect
that this explanation will be found applicable to other substances as well. Experience in
dealing with cohesion of metals thus definitely foreshadows the kind of conclusions that
have been reached in the development of the Reciprocal System of theory.
It should also be noted that the electrical theory is wholly ad hoc. Aside from what little
support it can derive from extrapolation to the solid state of the conditions existing in
solutions, there is no independent confirmation of any of the principal assumptions of the
theory. No observational indication of the existence of electrical charges in ordinary
matter can be detected, even in the most strongly ionic compounds. The existence of
electrons as constituents of atoms is purely hypothetical. The assumption that the
reluctance of the inert gases to enter into chemical compounds is an indication that their
structure is a particularly stable one is wholly gratuitous. And even the originators of the
idea of "sharing" electrons make no attempt to provide any meaningful explanation of
what this means, or how it could be accomplished, if there actually were any electrons in
the atomic structure. These are the assumptions on which the theory is based, and they
are entirely without empirical support. Nor is there any solid basis for what little
theoretical foundation the theory may claim, inasmuch as its theoretical ties are to the
nuclear theory of atomic structure, which is itself entirely ad hoc.
But these points, serious as they are, can only be regarded as supplementary evidence, as
there is one fatal weakness of the electrical theory that would demolish it even if nothing
else of an adverse nature were known. This is our knowledge of the behavior of positive
and negative electric charges when they are brought into close proximity. Such charges
do not establish an equilibrium of the kind postulated in the theory; they destroy each
other. There is no evidence which would indicate that the result of such contact is any
different in a solid aggregate, nor is there even any plausible theory as to why any
different outcome could be expected, or how it could be accomplished.
It is worth noting in this connection that while current physical theory portrays positive
and negative charges as existing in a state of congenial companionship in the nuclear
theory of the atom and in the electrical theory of matter, it turns around and gives us
explanations of the behavior of antimatter in which these charges display the same
violent antagonism that they demonstrate to actual observation. This is the kind of
inconsistency that inevitably results when recalcitrant problems are "solved" by ad hoc
assumptions that involve departures from established physical laws and principles.
In the context of the present situation in which the electrical theory is challenged by a
new development, all of these deficiencies and contradictions that are inherent in the
electrical theory become very significant. But the positive evidence in favor of the new
theory is even more conclusive than the negative evidence against its predecessor. First,
and probably the most important, is the fact that we are not replacing the electrical theory
of matter with another ―theory of matter.‖ The Reciprocal System is a complete general
theory of the physical universe. It contains no hypotheses other than those relating to the
nature of space and time, and it produces an explanation of the cohesion of solids in the
same way that it derives logical and consistent explanations of other physical phenomena:
simply by developing the consequences of the basic postulates. We therefore do not have
to call upon any additional force of a hypothetical nature to account for the cohesion. The
two forces that determine the course of events in the region outside unit distance also
account for the existence of the inter-atomic equilibrium inside this distance.
It is significant that the new theory identifies both of these forces. One of the major
defects of the electrical theory of cohesion is that it provides only one force, the
hypothetical electrical force of attraction, whereas two forces are required to explain the
observed situation. Originally it was assumed that the atoms are impenetrable, and that
the electrical forces merely hold them in contact. Present-day knowledge of
compressibility and other properties of solids has demolished this hypothesis, and it is
now evident that there must be what Karl Darrow called an "antagonist," in the statement
quoted in Volume I, to counter the attractive force, whatever it may be, and produce an
equilibrium. Physicists have heretofore been unable to find any such force, but the
development of the Reciprocal System has now revealed the existence of a powerful and
omnipresent force hitherto unknown to science. Here is the missing ingredient in the
physical situation, the force that not only explains the cohesion of solid matter, but, as we
saw in Volume I, supplies the answers to such seemingly far removed problems as the
structure of star clusters and the recession of the galaxies.
One point that should be specifically noted is that it is this hitherto unknown force, the
force due to the progression of the natural reference system, that holds the solid aggregate
together, not gravitation, which acts in the opposite direction in the time region. The
prevailing opinion that the force of gravitation is too weak to account for the cohesion is
therefore irrelevant, whether it is correct or not.
Inasmuch as the new theoretical system applies the same general principles to an
understanding of all of the inter-atomic and inter-molecular equilibria, it explains the
cohesion of all substances by the same physical mechanism. It is no longer necessary to
have one theory for ionic substances, several more for those that are non-ionic, and to
leave the metals out in the cold without any applicable theory. The theoretical findings
with respect to the nature of chemical combinations and the structure of molecules that
were outlined in the preceding volume have made a major contribution to this
simplification of the cohesion picture, as they have eliminated the need for different kinds
of cohesive forces, or "bonds." All that is now required of a theory of cohesion is that it
supply an explanation of the inter-atomic equilibrium, and this is provided, for all solid
substances under all conditions, by balancing the outward motion (force) of gravitation
against the inward motion (force) of the progression of the natural reference system.
Because of the asymmetry of the rotational patterns of the atoms of many elements, and
the consequent anisotropy of the force distributions, the equilibrium locations vary not
only between substances, but also between different orientations of the same substance.
Such variations, however, affect only the magnitudes of the various properties of the
atoms. The essential character of the inter-atomic equilibrium is always the same.
As indicated in the original discussion of gravitation, even though the various aggregates
of matter do not actually exert gravitational forces on each other, the observable results of
their gravitational motions are identical with those that would be produced if such forces
did exist. The same is true of the results of the progression of the natural reference
system. There is a considerable element of convenience in expressing these results in
terms of force, on an "as if" basis, and this practice has already been followed to some
extent in the previous volume. Now that we are ready to begin a quantitative evaluation
of the inter-atomic relations, however, it is desirable to make it clear that the force
concept is being used only for convenience. Although the quantitative discussion that
follows, like the earlier qualitative discussion, will be carried on in terms of forces, what
we will actually be dealing with are the inward and outward motions of each individual
atom.
While the items that have been mentioned add up to a very impressive case in favor of
the new theory of cohesion, the strongest confirmation of its validity comes from its
ability to locate the point of equilibrium; that is, to give us specific values of the inter-
atomic distances. As will be demonstrated in Chapter 2, we are already able, by means of
the newly established relations, to calculate the possible values of the inter-atomic
distance for most of the simpler substances, and there do not appear to be any serious
obstacles in the way of extending the calculations to more complex substances whenever
the necessary time and effort can be applied to the task. Furthermore, this ability to
determine the location of the point of equilibrium is not limited to the simple situation
where only the two basic forces are involved. Chapters 4 and 5 will show that the same
general principles can also be applied to an evaluation of the changes in the equilibrium
distance that result from the application of heat or pressure to the solid aggregate.
Although, as stated in Volume I, the true magnitude of a unit of space is the same
everywhere, the effective magnitude of a spatial unit in the time region is reduced by the
inter-regional ratio. It is convenient to regard this reduced value, 1/156.44 of the natural
unit, as the time region unit of space. The effective portion of a time region phenomenon
may extend into one or more additional units, in which case the measured distance will
exceed the time region unit, or the original single unit may not be fully effective, in
which case the measured distance will be less than the time region unit. Thus the inter-
atomic equilibrium may be reached either inside or outside the time region unit of
distance, depending on where the outward rotational forces reach equality with the
inward force of the progression of the natural reference system. Extension of the inter-
atomic distance beyond one time region unit does not take the equilibrium system out of
the time region, as the boundary of that region is at one full-sized natural unit of distance,
not at one time region unit. So far as the inter-atomic force equilibrium is concerned,
therefore, the time region unit of distance does not represent any kind of a critical
magnitude.
As we saw in our examination of the composition of the magnetic neutral groups,
however, the natural unit as it exists in the time region (the time region unit) is a critical
magnitude from the orientation standpoint. An explanation of this difference can be
derived from a consideration of the difference in the inherent nature of the two
phenomena. Where the inter-atomic distance is less than one time region unit, the
rotational forces are acting against the inward force of the progression of the reference
system during only a portion of the unit progression. Similarly, where the inter-atomic
distance is greater than one time region unit, the unit inward force is acting against only a
portion of the greater-than-unit outward rotational forces. The variations in distance thus
reflect differences in the magnitudes of the rotational forces. But the orientation effect
has no magnitude. It either exists, or does not exist. As we have noted in the previous
discussion, particularly in connection with the structure of the benzene molecule, this
effect, if it exists, is the same regardless of whether it acts at short range or at long range.
The essential requirement that it must meet is that it must be continuously effective.
Otherwise, the orientation is destroyed during the off period. Where the rotational forces
extend beyond one time region unit, so that the unit orientation effect is coincident with
only a portion of the total rotational forces, the orienting effect is not continuous, and no
orientation takes place.
In this chapter we are dealing mainly with what we are calling "rotational forces." These
are, of course, the same "as if" forces due to the scalar aspect of the atomic rotation that
were called "gravitational" in some other contexts, the choice of language depending on
whether it is the origin or the effect of the force that is being emphasized in the
discussion. For a quantitative evaluation of the rotational forces we may use the general
force equation, provided that we replace the usual terms of the equation with the
appropriate time region terms. As explained in introducing the concept of the time region
in Chapter 8 of Vol. I, equivalent space 1/t replaces space in the time region, and velocity
is therefore 1/t². Energy, the one-dimensional equivalent of mass, takes the place of
mass in the time region expression of the force equation, because the three rotations of
the atom act separately, rather than jointly, in this region; energy is the reciprocal of the
velocity expression, or t². Acceleration is velocity divided by time: 1/t³. The time region
equivalent of the equation F = ma is therefore F = Ea = t² × 1/t³ = 1/t in each dimension.
At this point we will need to take note of the nature of the increments of speed
displacement in the time region. In the outside region additions to the displacement
proceed by units: first one unit, then another similar unit, yet another, and so on, the total
up to any specific point being n units. There is no term with the value n. This value
appears only as a total. The additions in the time region follow a different mathematical
pattern, because in this case only one of the components of motion progresses, the other
remaining fixed at the unit value. Here the displacement is 1/x, and the sequence is 1/1,
1/2, 1/3...1/n. The quantity 1/n is the final term, not the total. To obtain the total that
corresponds to n in the outside region it is necessary to integrate the quantity 1/x from
x = 1 to x = n. The result is ln n, the natural logarithm of n.
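
To make the distinction concrete, the short sketch below (Python, with an arbitrary illustrative value of n) compares the discrete summation of the 1/x series with the integral, ln n; the two totals are clearly different quantities, and it is the integral that the development calls for.

import math

n = 10                                                # illustrative value only
discrete_sum = sum(1.0 / x for x in range(1, n + 1)) # 1/1 + 1/2 + ... + 1/n
integral = math.log(n)                                # integral of 1/x from 1 to n, i.e. ln n

print(round(discrete_sum, 3), round(integral, 3))     # 2.929 versus 2.303
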
Many readers of the first edition have asked why this total should be an integral rather
than a summation. The answer is that we are dealing with a continuous quantity. As
pointed out in the introductory chapters of the preceding volume, the motion of which the
universe is constructed does not proceed in a succession of jumps. Even though it exists
only in units, it is a continuous progression. A unit of this motion is a specific portion of
this continuity. A series of units is a more extended segment of that continuity, and its
magnitude is an integral. In dealing with the basic individual units of motion in the
outside region it is possible to use the summation process, but only because in this case
the sum is the same as the integral. To get the total of the 1/x series we must integrate.
To evaluate the rotational force we integrate the quantity 1/t from unity, the physical
datum or zero level, to t:
F = ln t (1-1)
If the quantity ln t is below unity in any dimension there is no effective outward force in
that dimension, but the natural logarithm exceeds unity for all values of t of 3 or more, and
the atoms of all elements have a rotational displacement of 2 (equivalent to t = 3) or more
in at least one dimension. Consequently, all have effective rotational forces.
The force computed from equation 1-1 is the inherent rotational force of the individual
atom; that is, the one-dimensional force which it exerts against a single unit of force. The
force between two (apparently) interacting atoms is
F = ln tA ln tB (1-2)

For a two-dimensional magnetic rotation this becomes

F = ln² tA ln² tB (1-3)

As we found in Chapter 12, Vol. I, the equivalent of distance s in the time region is s², and
the gravitational force in this region therefore varies inversely as the fourth power of the
distance rather than the square. Applying this factor to the expression for the force of the
two-dimensional rotation, together with the inter-regional ratio, the ratio of effective to
total force derived in the same chapter, we obtain the effective force of the magnetic
rotation of the atom:

Fm = (0.006392)⁴ s⁻⁴ ln² tA ln² tB (1-4)


The distance factor does not apply to the force due to the progression of the natural
reference system, as this force is omnipresent, and unlike the rotational force is not
altered as the objects to which it is applied change their relative positions. At the point of
equilibrium, therefore, the rotational force is equal to the unit force of the progression.
Substituting unity for Fm in equation
1-4, and solving for the equilibrium distance, we obtain
s0 = 0.006392 ln½ tA ln½ tB (1-5)
The inter-atomic distances for those elements which have no electric rotation, the inert
gas series, may be calculated directly from this equation. In the elements, however, tA = tB
in most cases, and it will be convenient to express the equation in the simplified form:
s0 = 0.006392 ln t (1-6)
The values thus calculated are in the neighborhood of 10⁻⁸ cm, and for convenience this
quantity has been taken as a unit in which to express the inter-atomic and inter-molecular
distances. When converted from natural units to this conventional unit, the Angstrom
unit, symbol Å, equation 1-6 becomes
s0 = 2.914 ln t Å (1-7)
In applying this equation we encounter another of the questions with respect to
terminology that inevitably arise in a basically new treatment of any subject. The
significance of the quantity t as used in the foregoing discussion and in the equations is
obvious from the context—it is the magnitude of the effective rotation—but the question
is: What shall we call it? The basic quantity with which we are dealing, the rotational
speed displacement, does not enter into the equations directly. The mathematical
structure of these equations requires us to enter them with values that include the initial
unit which constitutes the natural zero datum. Furthermore, each double vibrational unit
rotates independently, and when the rotation extends to a second such unit the increment
in the value of t is only one half unit per added unit of displacement. Under these
circumstances, where the relation of the term t to the displacement is variable, it seems
advisable to give this term a distinctive name, and we will therefore call it the specific
rotation.
As brought out in the discussion of the general characteristics of the atomic rotation in
Chapter 10, Vol. I, the two magnetic displacements may be unequal, and in this event the
speed distribution takes the form of a spheroid with the principal rotation effective in two
dimensions and the subordinate rotation in one. The average effective value of the
specific rotation under these conditions is (t1²t2)¹/3. In this case we are dealing with the
properties of a single entity, and the mathematical situation seems clear. But it is not so
evident how we should arrive at the effective specific rotation where there is an
interaction between two atoms whose individual rotations are different. As matters now
stand it appears that the geometric mean of the two specific rotations is the correct
quantity, and the values tabulated in Chapters 2 and 3 have been calculated on this basis.
It should be noted, however, that this conclusion as to the mathematics of the
combination is still somewhat tentative, and if further study shows that it must be
modified in some, or all, applications, the calculated values will be subject to
corresponding modifications. Any changes will be small in most cases, but they will be
substantial where there is a large difference between the two components. The absence of
major discrepancies between the calculated and observed distances in combinations of
atoms with much different dimensions therefore gives some significant support to the use
of the geometric mean pending further theoretical clarification.
The inter-atomic distances of four of the five inert gas elements for which experimental
data are available follow the regular pattern. The values calculated for these elements are
compared with the experimental distances in Table 1.
Table 1: Distances - Inert Gas Elements
Atomic Number   Element    Specific Rotation   Distance (Calculated)   Distance (Observed)
10              Neon       3-3                 3.20                    3.20
18              Argon      4-3                 3.76                    3.84
36              Krypton    4-4                 4.04                    4.02
54              Xenon      4½-4½               4.38                    4.41
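
As a numerical check on equation 1-7, the short sketch below (Python, offered only as an illustration; the function names are hypothetical) reproduces the calculated column of Table 1. Where the two magnetic rotations differ, the spheroidal mean (t1²t2)¹/3 described above is used, and the first figure of the tabulated specific rotation is read as the principal rotation; that reading is an assumption of this example, though it is the one that reproduces the tabulated values.

import math

def effective_rotation(t1, t2):
    # spheroidal mean for unequal magnetic rotations; reduces to t when t1 = t2
    return (t1 * t1 * t2) ** (1.0 / 3.0)

def inert_gas_distance(t1, t2):
    # equation 1-7: s0 = 2.914 ln t, in Angstrom units
    return 2.914 * math.log(effective_rotation(t1, t2))

for name, t1, t2 in [("Neon", 3, 3), ("Argon", 4, 3), ("Krypton", 4, 4), ("Xenon", 4.5, 4.5)]:
    print(name, round(inert_gas_distance(t1, t2), 2))
# Neon 3.2, Argon 3.76, Krypton 4.04, Xenon 4.38 -- matching the calculated values above
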
Helium, which also belongs to the inert gas series, has some special characteristics due to
its low rotational displacement, and will be discussed in connection with other elements
affected by the same factors. The reason for the appearance of the 4¹/2 value in the xenon
rotation will also be explained shortly. The calculated distances are those which would
prevail in the absence of compression and thermal expansion. A few of the experimental
data have been extrapolated to this zero base by the investigators, but most of them are
the actual observed values at atmospheric pressure and at temperatures which depend on
the properties of the substances under examination. These values are not exactly
comparable to the calculated distances. In general, however, the expansion and
compression up to the temperature and pressure of observation are small. A comparison
of the values in the last two columns of Table 1 and the similar tables in Chapters 2 and 3
therefore gives a good picture of the extent of agreement between the theoretical figures
and the experimental results.
Another point about the distance correlations that needs to be taken into account is that
there is a substantial amount of variation in the experimental results. If we were to take
the closest of these measured values as the basis for comparison, the correlation would be
very much better. One relatively recent determination of the xenon distance, for example,
arrives at a value of 4.34, almost identical with the calculated distance. There are also
reported values for the argon distance that agree more closely with the theoretical result.
However, a general policy of using the closest values would introduce a bias that would
tend to make the correlation look more favorable than the situation actually warrants. It
has therefore been considered advisable to use empirical data from a recognized selection
of preferred values. Except for those values identified by asterisks, all of the experimental
distances shown in the tables are taken from the extensive compilation by Wyckoff.2 Of
course, the use of these values selected on the basis of indirect criteria introduces a bias
in the unfavorable direction, since, if the theoretical results are correct, every
experimental error shows up as a discrepancy, but even with this negative bias the
agreement between theory and observation is close enough to show that the theoretical
determination of the inter-atomic distance is correct in principle, and to demonstrate that,
with the exception of a relatively small number of uncertain cases, it is also correct in the
detailed application.
Turning now to the elements which have electric as well as magnetic displacement, we
note again that the electric rotation is one-dimensional and opposes the magnetic rotation.
We may therefore obtain an expression for the effect of the electric rotational force on the
magnetically rotating photon by inverting the one-dimensional force term of equation 1-2.
Fe = 1/(ln t'A ln t'B) (1-8)
Inasmuch as the electric rotation is not an independent motion of the basic photon, but a
rotation of the magnetically rotating structure in the reverse direction, combining the
electric rotational force of equation 1-8 with the magnetic rotational force of equation 1-4
modifies the rotational terms (the functions of t) only, and leaves the remainder of
equation 1-4 unchanged.
F = (0.006392)⁴ (ln² tA ln² tB) / (s⁴ ln t'A ln t'B) (1-9)
Here again the effective rotational (outward) and natural reference system progression
(inward) forces are necessarily equal at the equilibrium point. Since the force of the
progression of the natural reference system is unity, we substitute this value for F in
equation 1-9 and solve for s0, the equilibrium distance, as before.
s0 = 0.006392 (ln½ tA ln½ tB) / (ln¼ t'A ln¼ t'B) (1-10)
Again simplifying for application to the elements, where A is generally equal to B,
s0 = 0.006392 ln t / ln½ t' (1-11)
In Angstrom units this becomes
s0 = 2.914 ln t / ln½ t' Å (1-12)
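
Stated as a small helper function (Python, illustrative only; the function name and the fallback for the no-electric-rotation case are choices made for this sketch, not part of the text), equation 1-12 takes the effective specific magnetic rotation t and the effective specific electric rotation t' and returns the equilibrium distance in Angstrom units; with no effective electric rotation the inert-gas form, equation 1-7, is used instead.

import math

def equilibrium_distance(t, t_prime=None):
    # equation 1-7 when there is no effective electric rotation (illustrative convention),
    # equation 1-12 otherwise: s0 = 2.914 ln t / ln^(1/2) t'
    if t_prime is None or t_prime <= 1:
        return 2.914 * math.log(t)
    return 2.914 * math.log(t) / math.sqrt(math.log(t_prime))

print(round(equilibrium_distance(4), 2))   # 4.04, the krypton value from Table 1
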
As already noted, when the rotation is extended to a second (double) vibrational unit, to
vibration two, we may say, each added displacement unit adds only one half unit to the
specific rotation. Inasmuch as 8 electric displacement units distributed three-
dimensionally bring the rotation to a new zero point, and cause the rotational motion to
revert to the translational status, the change to vibration two in the electric dimension
must take place before the displacement reaches 8. Specific rotation 8 (displacement 7) is
therefore followed by 8¹/2, 9, 9¹/2, etc. But the first effective rotational displacement unit
is necessarily one-dimensional, and the linear equivalent of the 8-unit limit is 2 units.
Thus this first unit has already reached the one-dimensional limit. The succeeding
displacement units have the option of continuing on the one-dimensional basis and
extending the rotation to vibration two rather than extending it into additional
dimensions. The change to vibration two therefore may take place immediately after the
first displacement unit. In this case specific rotation 2 (displacement 1) is followed by
2½, 3, 3½, etc. The lower value is commonly found where it first becomes possible; that
is, displacement 2 normally corresponds to rotation 2½ rather than 3. The next element
may take the intermediate value 3½, but beyond this point the higher vibration one value
normally prevails.
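
The two limiting sequences just described can be written out mechanically. The sketch below (Python, illustrative; it encodes only the two cases named in the text, and the intermediate choices noted above remain possible) maps electric displacement to specific rotation for the late and early changes to vibration two; the function names are hypothetical.

def late_switch(displacement):
    # change to vibration two after specific rotation 8 (displacement 7)
    return displacement + 1 if displacement <= 7 else 8 + 0.5 * (displacement - 7)

def early_switch(displacement):
    # change to vibration two immediately after the first displacement unit
    return 2 + 0.5 * (displacement - 1)

print([late_switch(d) for d in range(6, 11)])   # [7, 8, 8.5, 9.0, 9.5]
print([early_switch(d) for d in range(1, 5)])   # [2.0, 2.5, 3.0, 3.5]
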
In the first edition it was indicated that the one or two vibrational displacement units being
rotated did not necessarily constitute the entire vibrational component of the basic
photon, inasmuch as these one or two units are capable of being rotated independently of
the remaining vibrational units, if any. Further consideration now leads to the conclusion
that one or two units of a multi-unit photon frequency can, in fact, be set in rotation
independently, as previously indicated, and that the original photon may have had an
excess of vibrational units, but that in such an event the rotating portion of the photon
begins moving inward, whereas the non-rotating portion continues moving outward by
reason of the progression of the natural reference system. The two portions therefore
separate, and the rotating portion retains no non-rotating vibrational component.
The general pattern of the magnetic rotational values is the same as that of the electric
values. The tendency to substitute specific rotation 2½ for 3 applies to the magnetic as
well as to the electric rotation, and in the lower group combinations (both elements and
compounds) that follow the regular electropositive pattern the specific magnetic rotations
are usually 2½-2½ or 3-2½, rather than 3-3. But the upper limit for specific magnetic
rotation on a vibration one basis is 4 (three displacement units) instead of 8, as the two-
dimensional rotation reaches the upper zero level at 4 displacement units in each
dimension. Rotation 4½ therefore follows rotation 4 in the regular sequence, as we saw
in the values given for xenon in Table 1. It is possible to reach rotation 5 in one
dimension, however, without bringing the magnetic rotation as a whole up to the 5 level,
and 5-4 or 5-4½ rotation occurs in some elements either in lieu of, or in combination
with, the 4½-4 or 4½-4½ rotation.

Chapter 2

Inter-atomic Distances
As equation 1-10 indicates, the distance between any two atoms in a solid aggregate is a
function of the specific rotations of the atoms. Since each atom is capable of assuming any
one of several different relative orientations of its rotational motions, it follows that there are
a number of possible specific rotations for each combination of atoms. This number of
possible alternatives is still further increased by two additional factors that were discussed
earlier. The atom has the option, as we noted in Chapter 10, Vol. I, of rotating with the
normal magnetic displacement and a positive electric displacement, or with the next higher
magnetic displacement and a negative electric increment. And in either case, the effective
quantity, the specific rotation, may be modified by extension of the motion to a second
vibrating unit, as brought out in Chapter 1.
It is possible that each of these many variations of the magnitude of the specific rotation, and
the corresponding values of the inter-atomic distances, may actually be realized under
appropriate conditions, but in any particular set of circumstances certain combinations of
rotations are more probable than the others, and in ordinary practice the number of different
values of the distance between the same two atoms is relatively small, except in certain
special cases. As matters now stand, therefore, we are able to calculate from theoretical
premises a small set of possible inter-atomic distances for each element or compound.
Ultimately it will no doubt be advisable to evaluate the probability relations in detail so that
the results of the calculations will be as specific as possible, but it has not been feasible to
undertake this full treatment of the probability relationships in this present work. In an
investigation of so large a field as the structure of the physical universe there must not only
be some selection of the subjects that are to be covered, but also some decisions as to the
extent to which that coverage will be carried. A comprehensive treatment of the probability
relations wherever they enter into physical situations could be quite helpful, but the amount
of time and effort required to carry out such a project will undoubtedly be enormous, and its
contribution to the major objectives of this present undertaking is not sufficient to justify
allocating so much of the available resources to it. Similar decisions as to how far to carry
the investigation in certain areas have had to be made from time to time throughout the
course of the work in order to limit it to a finite size.
It might be well to point out in this connection that it will never be possible to calculate a
unique inter-atomic distance for every element or combination of elements, even when the
probability relations have been definitely established, as in many cases the choice from
among the alternatives is not only a matter of relative probability, but also of the history of
the particular specimen. Where two or more alternative forms are stable within the range of
physical conditions under which the empirical examination is being made, the treatment to
which the specimen has previously been subjected plays an important part in the
determination of the structure.
It does not follow, however, that we are totally precluded from arriving at definite values for
the inter-atomic distances. Even though no quantitative evaluation of the relative
probabilities of the various alternatives is yet available, the nature of the major factors
involved in their determination can be deduced theoretically, and this qualitative information
is sufficient in most cases to exclude all but a very few of the total number of possible
variations of the specific rotations. Furthermore, there are some series relations by means of
which the range of variability can be still further narrowed. These series patterns will be
more evident when we examine the distances in compounds in the next chapter, and they
will be given more detailed consideration at that point.
The first thing that needs to be emphasized as we begin our analysis of the factors that
determine the inter-atomic distance is that we are not dealing with the sizes of atoms; what
we are undertaking to do is to evaluate the distance between the equilibrium positions that
the atoms occupy under specified conditions. In Chapter 1 we examined the general nature
of the atomic equilibrium. In this and the following chapter we will see how the various
factors involved in the relations between the rotations of the (apparently) interacting atoms
affect the point of equilibrium, and we will arrive at values of the inter-atomic distances
under static conditions. Then in Chapters 5 and 6 we will develop the quantitative relations
that will enable us to determine just what changes take place in these equilibrium distances
when external forces in the form of pressure and temperature are applied.
As we have seen in the preceding volume, all atoms and aggregates of matter are subject to
two opposing forces of a general nature: gravitation and the progression of the natural
reference system. These are the primary forces (or motions) that determine the course of
physical events. Outside the gravitational limits of the largest aggregates, the outward
motion due to the progression of the natural reference system exceeds the inward motion of
gravitation, and these aggregates, the major galaxies, move outward from each other at
speeds increasing with distance. Inside the gravitational limits the gravitational motion is the
greater, and all atoms and aggregates move inward. Ultimately, if nothing intervenes, this
inward motion carries each atom within unit distance of another, and the directional reversal
that takes place at the unit boundary then results in the establishment of an equilibrium
between the motions of the two atoms. The inter-atomic distance is the distance between the
atomic centers in this equilibrium condition. It is not, as currently assumed, an indication of
the sizes of the atoms.
The current theory which regards the inter-atomic distance as a measure of "size" is, in many
respects, quite similar to the electronic "bond" theory of molecular structure. Like the
electronic theory, it is based on an erroneous assumption - in this case, the assumption that
the atoms are in contact in the solid state - and like the electronic theory, it fits only a
relatively small number of substances in its simple form, so that it is necessary to call upon a
profusion of supplementary and subsidiary hypotheses to explain the deviations of the
observed distances from what are presumed to be the primary values. As the textbooks point
out, even in the metals, which are the simplest structures from the standpoint of the theory,
there are many difficult problems, including the awkward fact that the presumed "size" is
variable, depending on the nature of the crystal structure. Some further aspects of this
situation will be considered in Chapter 3.
The resemblance between these two erroneous theories is not confined to the lack of
adequate foundations and to the nature of the difficulties that they encounter. It also extends
to the resolution of these difficulties, as the same principles that were derived from the
postulates of the Reciprocal System to account for the formation of molecules of chemical
compounds, when applied in a somewhat different way, are the general considerations that
govern the magnitude of the inter-atomic distance in both elements and compounds. Indeed,
all aggregates of electronegative elements are molecular in their composition, rather than
atomic, as the molecular requirement that the negative electric displacement of an atom of
such an element must be counterbalanced by an equivalent positive displacement in order to
arrive at a stable equilibrium in space applies with equal force to a combination with a like
atom. As we saw in our examination of the structural situation, electropositive elements are
not subject to this restriction, but in many cases the molecular (balanced orientation) type of
structure takes precedence over the electropositive structure by reason of collateral factors
that affect the relative probability. Because of this fact that the distances follow the
structural pattern, the various ways of orienting the atomic rotations that were discussed in
Chapter 18, Vol. I, with a few modifications due to the special conditions that exist in the
elemental aggregates, determine the manner in which the atoms of an element are able to
combine with each other, and the effective values of the specific rotations in these
combinations.
In the electropositive elements the specific rotations are based, in the first instance, on the
rotational displacements as listed in Chapter 10, Vol. I. Where the inter-atomic orientation is
the normal positive arrangement, the displacements as listed are translated directly into
specific rotations by addition of the initial unit and reduction of the incremental values
where the rotation extends to vibration two. Except for the elements of group 2A, which, as
already noted, are subject to some special considerations because of their low magnetic
displacements, the elements of Division I all follow the regular electropositive pattern of
specific rotations. The only irregularities are in the electric rotations of the second and third
elements of each group, where the point of transition to vibration two varies between
groups. The inter-atomic distances in this division are listed in Table 2.
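
As an illustration, the short calculation below (a sketch; the function name and the assignment of particular increments to vibration two are assumed for purposes of illustration, not taken from the text) reproduces the Division I electric values of Table 2 by adding the initial unit to the displacement and counting each vibration-two increment at half size.

```python
# Minimal sketch: electric displacement -> specific electric rotation, assuming each
# increment extended to vibration two contributes only half a unit.

def specific_electric_rotation(displacement, vib_two_increments=0):
    full_increments = displacement - vib_two_increments
    return 1 + full_increments + 0.5 * vib_two_increments

# Checks against the electric column of Table 2:
assert specific_electric_rotation(1) == 2        # potassium, rubidium, cesium
assert specific_electric_rotation(2, 1) == 2.5   # magnesium, calcium, strontium
assert specific_electric_rotation(3, 2) == 3     # aluminum
assert specific_electric_rotation(3, 1) == 3.5   # yttrium
assert specific_electric_rotation(3) == 4        # scandium, lanthanum
assert specific_electric_rotation(4) == 5        # titanium, zirconium, cerium
print("Division I electric rotations of Table 2 reproduced")
```
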
The regular electropositive pattern is also applicable in Division II, and a number of the
Division II elements of Group 3A crystallize on this basis, with inter-atomic distances
determined in the same manner as in Division I. As noted in Volume I, however, the
Division II elements generally favor the magnetic type of orientation in chemical
compounds because the normal positive orientation becomes less probable as the
displacement increases. The same probability considerations operate against the positive
orientation in the elements of this division, but instead of employing the magnetic
orientation as the alternate, these elements utilize a type of orientation that is available only
where all rotations of each participant in a combination are identical with those of the other.
This arrangement reverses the effective directions of the rotations of alternate atoms. The
resulting relative rotation is a combination of x and 8-x (or 4-x), as in the neutral orientation,
and the effective specific rotations are 10 for vibration one and 5 for vibration two. A
combination value 5-10 is also common.

Table 2: Distances - Division I


Group   Atomic   Element      Specific Rotation       Distance
        Number                Magnetic     Electric   Calc.   Obs.
2B      11       Sodium       3-2½, 3-3    2          3.70    3.71
        12       Magnesium    3-2½         2½         3.17    3.21
        13       Aluminum     3-2½         3          2.83    2.86
3A      19       Potassium    4-3          2          4.49    4.50
        20       Calcium      4-3          2½         4.00    3.98
        21       Scandium     4-3          4          3.18    3.20
        22       Titanium     4-3          5          2.95    2.92
3B      37       Rubidium     4-4          2          4.85    4.87
        38       Strontium    4-4          2½         4.32    4.28
        39       Yttrium      4-4          3½         3.64    3.63
        40       Zirconium    4-4          5          3.18    3.23
4A      55       Cesium       4½-4½        2          5.23    5.24
        56       Barium       5-4½         3          4.36    4.34
        57       Lanthanum    4½-4½        4          3.70    3.74
        58       Cerium       5-4½         5          3.61    3.63
4B      89       Actinium     4½-5         4          3.79    3.76*
        90       Thorium      4½-5         5          3.52    3.56

This reverse type of structure makes its appearance in body-centered cubic crystal forms of
Chromium and Iron, which coexist with the regular positive hexagonal or face-centered
cubic structures. Vanadium and Niobium, the first Division II elements of their respective
groups, combine the positive and reverse orientations. Beyond Niobium the positive
orientation does not appear in the common Division II forms of the elements, the structures
to which the present discussion is limited, and all elements take the reverse orientation,
except Europium and Ytterbium, which combine it with unit specific rotation; that is, no
electric rotational displacement at all, as in the inert gas elements.
On the basis of the considerations discussed in Chapter 1, the average effective specific
rotation for such rotational combinations has been taken as the geometric mean of the two
components. Where the orientations are the same, and the only difference is in the
magnitude, as in the 5-10 combination, and in the combinations of magnetic rotations that
we will encounter later, the equilibrium is reached in the normal manner. If two different
electric rotations are involved, the two-atom pairs cannot attain spatial equilibrium
individually, but they establish a group equilibrium similar to that which is achieved where n
atoms of valence one each combine with one atom of valence n.
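
The effective values implied by this rule can be illustrated with a short calculation (a sketch prepared for illustration; the combination values listed are those appearing in Tables 3 and 4, and the function shown is simply the geometric mean described above).

```python
# Minimal sketch: the effective specific rotation of a combination such as 5-10 is
# taken as the geometric mean of its two components.
from math import sqrt

def effective_rotation(t1, t2):
    return sqrt(t1 * t2)

# Combination values appearing in Tables 3 and 4:
for t1, t2 in [(1, 5), (5, 10), (6, 10), (8, 10)]:
    print(f"{t1}-{t2} combination -> effective rotation {effective_rotation(t1, t2):.2f}")
```
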
The Division II distances are shown in Table 3.

Table 3: Distances - Division II


Group   Atomic   Element        Specific Rotation       Distance
        Number                  Magnetic     Electric   Calc.   Obs.
3A      23       Vanadium       4-3          6-10       2.62    2.62
        24       Chromium       4-3          7          2.68    2.72
                                4-3          10         2.46    2.49
        25       Manganese      4-3          8          2.59    2.58
        26       Iron           4-3          8½         2.56    2.57
                                4-3          10         2.46    2.48
        27       Cobalt         4-3          9          2.52    2.51
        28       Nickel         4-3          9½         2.49    2.49
3B      41       Niobium        4-4          6-10       2.83    2.85
        42       Molybdenum     4-4½         10         2.73    2.72
        43       Technetium     4-4½         10         2.73    2.73*
        44       Ruthenium      4-4½         10         2.73    2.70
        45       Rhodium        4-4          10         2.66    2.69
                                4-4½         10         2.73    2.76
        46       Palladium      4-4½         10         2.73    2.74
4A      59       Praseodymium   5-4½         5          3.61    3.64
        60       Neodymium      5-4½         5          3.61    3.65
        62       Samarium       5-4½         5          3.61    3.62*
        63       Europium       4½-5         1-5        3.96    3.96
        64       Gadolinium     5-4½         5          3.61    3.62
        65       Terbium        5-4½         5          3.61    3.59
        66       Dysprosium     5-4½         5          3.61    3.58
        67       Holmium        4½-5         5          3.52    3.56
        68       Erbium         4½-5         5          3.52    3.53
        69       Thulium        4½-5         5          3.52    3.52
        70       Ytterbium      4½-4½        1-5        3.86    3.87
        71       Lutetium       4½-5         5          3.52    3.50*
4B      91       Protactinium   4½-5         5-10       3.22    3.24*
        92       Uranium        4½-4½        10         2.87    2.85
        93       Neptunium      4½-4½        5          3.43    3.46*
        94       Plutonium      4½-4½        5-10       3.14    3.15*
        95       Americium      4½-4½        5          3.43    3.46*
        96       Curium         4½-4½        5-10       3.14    3.10*
        97       Berkelium      4½-4½        5          3.43    3.40*

Because of the greater probability of the electropositive types of combinations, the
characteristics of Division II carry over into the first elements of Division III, and these
elements, Nickel, Palladium, and Lutetium, are included in the table. Some similar
modifications of the normal division boundaries have already been noted in connection with
other subjects.
The net total rotation of the material atom is a motion with positive displacement-that is, a
speed less than unity-and as such it normally results in a change of position in space. Inside
unit space, however, all motion is in time. The orientation of the atom for the purpose of the
space-time equilibrium therefore exists in the three dimensions of time. As we saw in our
examination of the inter-regional situation in Chapter 12, Volume I, each of these dimensions
contacts the space of the region outside unit distance individually. To the extent that the
motion in a dimension of time acts along the line of this contact it is a motion in equivalent
space. Otherwise it has no spatial effect beyond the unit boundary. Because of the
independence of the three dimensions of motion in time the relative orientation of the
electric rotation of any combination of atoms may be the same in all spatial dimensions, or
there may be two or three different orientations.
In most of the elements that have been discussed thus far the orientation is the same in all
spatial dimensions, and in the exceptions the alternate rotations are symmetrically
distributed in the solid structure. The force system of an aggregate of such elements is
isotropic. It follows that any aggregate of atoms of these elements has a structure in which
the constituents are arranged in one of the geometrical patterns possible for equal forces: an
isometric crystal. All of the electropositive elements (Divisions I and II) crystallize in
isometric forms, and, except for a few which apparently have quite complex structures, each
of the crystal forms of these elements belongs to one or another of three types: the face-
centered cube, the body-centered cube, or the hexagonal close-packed structure.
We now turn to the other major subdivision of the elements, the electronegative class, those
whose normal electric displacement is negative. Here the force system is not necessarily
isotropic, since the most probable arrangement in one or two dimensions may be the
negative orientation, a direct combination of two negative electric displacements, similar to
the all-positive combinations. It is not possible to have negative orientation in all three
dimensions, and wherever it does exist in one or two dimensions the rotational forces of the
atoms are necessarily anisotropic. The controlling factor is the requirement that the net total
rotational displacement of a material atom as a whole must be positive. Negative orientation
in all three dimensions is obviously incompatible with this requirement, but if the negative
displacement is restricted to one dimension the aggregate has fixed atomic positions in two
dimensions, with a fixed average position in the third because of the positive displacement
of the atom as a whole. This results in a crystal structure that is essentially equivalent to one
with fixed positions in all dimensions. Such crystals are not usually isometric, as the inter-
atomic distance in the odd dimension is generally different from that of the other two.
Where the distances in all dimensions do happen to coincide, we will find on further
investigation that the space symmetry is not an indication of force symmetry.
If the negative displacement is very small, as in the lower Division IV elements, it is possible
to have negative orientation in two dimensions if the positive displacement in the third
dimension exceeds the sum of these two negative components, so that the net result is still
positive. Here the relative positions of the atoms are fixed in one dimension only, but the
average positions in the other two dimensions are constant by reason of the net positive
displacement of the atoms. An aggregate of such atoms retains most of the external
characteristics of a crystal, but when the internal structure is examined the atoms appear to
be distributed at random, rather than in the orderly arrangement of the crystal. In reality
there is just as much order as in the crystalline structure, but part of the order is in time
rather than in space. This form of matter can be identified as the glassy, or vitreous, form, to
distinguish it from the crystalline form.
The term "state" is frequently used in this connection instead of "form", but the physical
state of matter has an altogether different meaning based on other criteria, and it seems
advisable to confine the use of this term to the one application. Both glasses and crystals are
in the solid state.
In beginning a consideration of the structures of the individual electronegative elements, we
will start with Division III. The general situation in this division is similar to that in Division
II, but the negativity of the normal electric displacement introduces a new factor into the
determination of the orientation pattern, as the most probable orientation of an
electronegative element may not be capable of existing in all three dimensions. As stated
earlier, where two or more different orientations are possible in a given set of circumstances
the relative probability is the deciding factor. Low displacements are more probable than
high displacements. Simple orientations are more probable than combinations. Positive
electric orientation is more probable than negative. In Division I all of these factors operate
in the same direction. The positive orientation is simple, and it also has the lowest
displacement value. All structures in this division are therefore formed on the basis of the
positive orientation. In Division II the margin of probability is narrow. Here the positive
displacement x is greater than the inverse displacement 8-x, and this operates against the
greater inherent probability of a simple positive structure. As a result, both the positive and
reverse types of structure are found in this division, together with a combination of the two.
In Division III the negative orientation has a status somewhat similar to that of the positive
orientation in Division II. As a simple orientation, it has a relatively high probability. But it
is limited to one dimension. The regular Division III structures of Groups 3A and 3B are
therefore anisotropic, with the reverse orientation in the other two dimensions. A
combination of these two types of orientation is also possible, and in copper and silver, the
first Division III elements of their respective groups, the crystals formed on the basis of this
combination orientation have cubic symmetry. As in Division II, the elements of Division III
in Groups 4A and 4B crystallize entirely on the basis of the reverse orientation. Table 4 lists
what may be considered as the regular inter-atomic distances of the elements of Division III.
Although the probability of the negative orientation is greater in Division IV than in
Division III, because of the smaller displacement values, this type of structure seldom
appears in the crystals of the lower division. The reason is that where this orientation exists
in the elements of the lower displacements, it exists in two dimensions, and this produces a
glassy or vitreous aggregate rather than a crystal. The reverse orientation is not subject to
any restrictive factor of this nature, but it is less probable at the lower displacements, and
except in Group 4A, where it continues to predominate, this orientation appears less
frequently as the displacement decreases. Where it does exist it is increasingly likely to
combine with some other type of orientation. As a result of these limitations that are
applicable to the inherently more probable types of orientation, many of the Division IV
structures are formed on the basis of the secondary positive orientation, a combination of
two 8-x displacements.

Table 4: Distances - Division III


Group   Atomic   Element    Specific Rotation       Distance
        Number              Magnetic     Electric   Calc.   Obs.
3A      29       Copper     4-3          8-10       2.53    2.55
        30       Zinc       4-4          7          2.90    2.91
                            4-4          10         2.66    2.66
        31       Gallium    4-3          6          2.79    2.80
                            4-3          10         2.46    2.44
3B      47       Silver     4-5          8-10       2.87    2.88
        48       Cadmium    5-4          7          3.20    3.26*
                            5-4          10         2.94    2.97
        49       Indium     5-4          6          3.33    3.37
                            5-4          6-10       3.21    3.24
4A      72       Hafnium    4-4½         5          3.26    3.32
        73       Tantalum   4½-4½        10         2.87    2.86
        74       Tungsten   4-4½         10         2.73    2.74
        75       Rhenium    4-4½         10         2.73    2.77*
        76       Osmium     4-4½         10         2.73    2.73
        77       Iridium    4-4½         10         2.73    2.71
        78       Platinum   4-4½         10         2.73    2.77
        79       Gold       4½-4½        10         2.87    2.88
        80       Mercury    4-4½         5-10       2.98    3.00
                            4½-4½        5          3.43    3.47
        81       Thallium   4½-4½        5          3.43    3.45
The secondary positive orientation is not possible in the electropositive divisions, as 8-x is
negative in these divisions, and like the negative orientation itself, an 8-x negative
combination would be confined to a subordinate role in one or two dimensions of an
asymmetric structure. Such a crystal structure cannot compete with the high probability of
the symmetrical electropositive crystals, and therefore does not exist. In the electronegative
divisions, however, the 8-x displacement is positive, and there are no limitations on it, aside
from those arising from the high displacement values.
The effective displacement of this secondary positive orientation is even greater than might
be expected from the magnitude of the quantity 8-x, as the change of zero points for the two
oppositely directed motions is also oppositely directed, and the new zero points are 16
displacement units apart. The resultant relative displacement is 16-2x, and the corresponding
specific rotation is 18-2x. In Division IV the numerical values of the latter expression range
from 10 to 16, and because of the low probability of such high rotations, the secondary
positive orientation is limited to one or one and one-half dimensions in spite of its positive
character. In Division III the 8-x displacements are lower, but in this case they are too low.
A two-unit separation of the zero points (16 displacement units) cannot be maintained unless
the effective displacement is at least 8 (one full three-dimensional unit). The secondary
positive orientation is therefore confined to Division IV.
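
The arithmetic of this relation can be checked directly; the short sketch below (an illustration, not part of the original text) lists the rotations obtained for the Division IV negative displacements.

```python
# Secondary positive orientation: relative displacement 16-2x and specific rotation
# 18-2x for the Division IV negative displacements x = 1 through 4.
for x in (1, 2, 3, 4):
    print(f"x = {x}: relative displacement {16 - 2*x}, specific rotation {18 - 2*x}")
# This gives the rotations 16, 14, 12, and 10 that appear in the electric column of
# Table 5 (e.g., chlorine, selenium, arsenic, and germanium respectively).
```
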
A special type of structure is possible only for those electronegative elements which have a
rotational displacement of four units in the electric dimension. These elements are on the
borderline between Divisions III and IV, where the secondary positive and reverse
orientations are about equally probable. Under similar conditions other elements crystallize
in hexagonal or tetragonal structures, utilizing the different orientations in the different
dimensions. For these displacement 4 elements, however, the two orientations produce the
same specific rotation: 10. The inter-atomic distance in these crystals is therefore the same
in all dimensions, and the crystals are isometric, even though the rotational forces in the
different dimensions are not of the same character. The molecular arrangement in this
crystal pattern, the diamond structure, shows the true nature of the rotational forces.
Outwardly this crystal cannot be distinguished from the isotropic cubic crystals, but the
analogous body-centered cubic structure has an atom at each corner of the cube as well as
one in the center, whereas the diamond structure leaves alternate corners open to
accommodate the abnormal projection of forces in the secondary positive dimension.
In those of the lower elements of Division IV that are beyond the range of the inverse type
of orientation, there is no available alternative for combination with the secondary positive
orientation. The crystals of these elements therefore have no effective electric rotation in the
remaining dimensions, and the relative specific rotation in these dimensions is unity, as in
all dimensions of the inert gas elements. The most common distances in the aggregates of
the Division IV elements are shown in Table 5.
Up to this point, no consideration has been given to the elements of atomic number below
10, as the rotational forces of these elements are subject to certain special influences which
make it desirable to discuss them separately. One cause of deviation from the normal
behavior is the small size of the rotational groups. In the larger groups the four divisions are
distinct, and, except for some overlapping, each has its own characteristic force
combinations, as we have seen in the preceding paragraphs. In an 8-element group, however,
the second series of four elements, which would normally constitute Division III, is actually
in the Division IV position. As a result, these four elements have, to a certain extent, the
properties of both divisions. Similarly, the Division I elements of these groups may, in some
cases, act as if they were members of Division III.
A second influence that affects the forces and the crystal structures of the lower group
elements is the inactivity of the rotational forces in certain dimensions that was mentioned
earlier. A specific rotation of two units produces no effect in the positive direction. The
reason for this is revealed by equation 1-1. By applying this equation we find that the
effective rotational force (ln t) for t = 2 is 0.693, which is less than the opposing space-time
force 1.00. The net effective force of specific rotation 2 is therefore below the minimum
value for action in the positive direction. In order to produce an active force the specific
rotation must be high enough to make ln t greater than unity. This is accomplished at
rotation 3.
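
The figures quoted here can be verified with a one-line computation; the following sketch (an illustration only) evaluates the rotational force term against the unit space-time force.

```python
# The rotational force term ln t must exceed the opposing unit space-time force
# before the net effect in the positive direction becomes active.
from math import log

for t in (2, 3, 4):
    ln_t = log(t)
    status = "active" if ln_t > 1.0 else "inactive"
    print(f"specific rotation {t}: ln t = {ln_t:.3f} ({status})")
# Specific rotation 2 gives 0.693, below unity; rotation 3 gives 1.099, above unity.
```
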

Table 5: Distances - Division IV


Group   Atomic   Element      Specific Rotation       Distance
        Number                Magnetic     Electric   Calc.   Obs.
2B      14       Silicon      3-3          5-10       2.31    2.35
        15       Phosphorus   3-3, 3-4     10         2.19    2.2
                              3-4          1          3.46    3.48*
        16       Sulfur       3-3          10         2.11    2.07
                              3-3          1          3.21    3.27*
        17       Chlorine     3-3          16         1.92    1.82
                              3-3          1-16       2.48    2.52
3A      32       Germanium    4-3          10         2.46    2.43
        33       Arsenic      4-3          12         2.37    2.44*
                              4-3          10         2.46    2.51
        34       Selenium     4-3          14         2.32    2.32
                              3-4          1          3.46    3.46
        35       Bromine      4-3          16         2.25    2.27
                              3-4          1          3.46    3.30
3B      50       Tin          4½-4         10         2.80    2.80
                              5-4          5-10       3.22    3.17
        51       Antimony     5-4          10         2.94    3.02
                              5-4          12         2.83    2.87
                              5-4          4-10       3.34    3.36*
        52       Tellurium    5-4½         14         2.82    2.86
                              5-4½         1-10       3.71    3.74
        53       Iodine       5-4          16         2.68    2.70
                              5-4          1-16       3.54    3.54
                              5-4          1          4.46    4.41*
4A      82       Lead         4½-4½        5          3.43    3.49
        83       Bismuth      4½-4½        5          3.43    3.47*
                              4½-4½        5-10       3.14    3.10
        84       Polonium     4½-4½        5          3.43    3.40*
The specific magnetic rotation of the 1B group, which includes only the two elements
hydrogen and helium, and the 2A group of eight elements beginning with lithium, combines
the values 3 and 2. Where the value 2 applies to the subordinate rotation (3-2), one
dimension is inactive; where it applies to the principal rotation (2-3), two dimensions are
inactive. This reduces the force exerted by each atom to 2/3 of the normal amount in the
case of one inactive dimension, and to 1/3 for two inactive dimensions. The inter-atomic
distance is proportional to the square root of the product of the two forces involved. Thus the
reduction in distance is also 1/3 per inactive dimension.
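
A brief numerical illustration (a sketch assuming a pair of like atoms; not part of the original text) shows how the force reduction carries over into the distance factor.

```python
# For a pair of like atoms, the force factor per atom is 2/3 with one inactive
# dimension and 1/3 with two, and the distance varies as the square root of the
# product of the two individual force factors.
from math import sqrt

cases = {
    "no inactive dimensions": 1.0,
    "one inactive dimension": 2 / 3,
    "two inactive dimensions": 1 / 3,
}
for label, f in cases.items():
    print(f"{label}: force factor {f:.3f}, pair distance factor {sqrt(f * f):.3f}")
```
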
Since the electric rotation is not a basic motion, but a reverse rotation of the magnetic
rotational system, the limitations to which the basic rotation is subject are not applicable.
The electric rotation merely modifies the magnetic rotation, and the low value of the force
integral for specific rotation 2 makes itself apparent by an inter-atomic distance which is
greater than that which would prevail if there were no electric displacement at all (unit
specific rotation).
Theoretical values of the inter-atomic distances of the lower group elements are compared
with measured values in Table 6.

Table 6: Distances - Lower Group Elements


Group   Atomic   Element        Specific Rotation       Distance
        Number                  Magnetic     Electric   Calc.   Obs.
1B      1        Hydrogen       3(1)         10         0.70    0.74*
        2        Helium         3(1)         1          1.07    1.09
2A      3        Lithium        2½-2½        2          3.05    3.03
        4        Beryllium      3(2)         2½         2.28    2.28
        5        Boron          3(2)         5          1.68    1.74*
                                3-3          10         2.11    2.03*
        6        C (diamond)    3(2)         5-10       1.54    1.54
                 C (graphite)   3(2)         1          1.41    1.42
                                3-3          1          3.21    3.40
        7        Nitrogen       3(1½)        10         1.06    1.06
                                3-3          1          3.21    3.44*
        8        Oxygen         3(1½)        10         1.06    1.15*
                                3-3          1          3.21    3.20*
        9        Fluorine       3(2)         10         1.41    1.44*
The figures in parentheses in column 4 of this table indicate the effective number of
dimensions. Thus the notation 3(1) shown for hydrogen means that this element has a
specific magnetic rotation of 3, effective in only one dimension.
Except where the crystals are isometric, there is still much uncertainty in the distance
measurements on these lower group elements, and many other values have been reported in
addition to those included in the table. This situation will be discussed at length in Chapter
3, where we will have the benefit of measurements of the distances between like atoms that
are constituents of chemical compounds.
As indicated in the introductory paragraphs of this chapter, we are not yet in a position
where we can determine specifically just what the inter-atomic distance will be for any
given element under a given set of conditions. The theoretical considerations that have been
discussed actually do lead to specific values in many cases, but in other instances there is an
uncertainty as to which of two or more theoretically possible rotational arrangements
corresponds to the observed crystal structure. Continuing progress is being made in both the
experimental and the theoretical fields, and it can be expected that these uncertainties will
gradually diminish toward the irreducible minimum that was mentioned earlier. In the
course of this process there will necessarily be some changes in the identifications of the
observed inter-atomic distances with the theoretically possible structures. A comparison of
Tables 1 to 6 with the corresponding tabulations of the first edition should therefore be of
interest as an indication of the nature and magnitude of the changes that have taken place in
our view of this interatomic distance situation in the last twenty years, and by extension, an
indication of the amount of change that can be expected in the future.
Such a comparison shows that the modifications of the original conclusions that now appear
to be required, in the light of the additional information that has been made available, are
confined almost entirely to those which have resulted from a better theoretical understanding
of the behavior of the specific magnetic rotation above an effective value of 4. Few changes
are required in either the magnetic or electric values in those rotational combinations where
the specific magnetic rotation is 4-4 or less.
One of the puzzling features of the rotational situation as it appeared at the time of the
original publication was the apparent retrograde progression of the specific magnetic
rotation in Groups 4A and 4B. It was recognized at that time that both the 4½ and 5 values
of the specific rotation correspond to the same displacement, 4, the difference being that in
the case of the 4½ value the rotation extends to two units of vibration, and the last increment
of specific rotation in this case is only half size. The next half unit increment, if such an
increment were possible, would bring the 4½ rotation back to the 5 value. It would therefore
appear that the sequence of specific rotations beyond 4½-4 should be 4½-4½, 5-4½, 5-5, and
so on. But the tendency is in the opposite direction. Instead of moving toward higher values
as the atomic number increases, there is actually a decreasing trend. This was already
evident at the time of publication of the first edition, as the low inter-atomic distances of the
series of elements from Tungsten to Platinum could not be accounted for unless the specific
magnetic rotation dropped back to 4-4½ from the higher levels of the preceding elements of
the 4A group. This decreasing trend has become even more prominent as distances have
become available for additional elements of Group 4B, as some of these values indicate
specific magnetic rotations of 4-4, or possibly even 4-3½.
As it happens, the continuation of the trend toward lower values in the more recent data has
had the effect of clarifying the situation. It is now evident that the 5-5 specific rotation is not
reached within the accessible portion of Groups 4A and 4B. (Considerations that will be
discussed later show that the specific rotation of 5-5 would be unstable.) The lower values in
the 4A and 4B groups do not result from a decrease in the magnetic displacement, but from
a shift of the existing displacement units from vibration one to vibration two, a process
which reduces the specific rotation of the units by one half. On a vibration one basis,
rotational displacements 4-3 correspond to specific rotations 5-4. Conversion of successive
units of displacement to vibration two, without change in the number of displacement units,
results in a series of specific rotations, 5-4, 4½-4, 4-4½, 4-4, and so on. A similar series with
one additional displacement unit goes through the values 5-4½, 4½-5, 4½-4½, 4½-4, and
then follows the same route as the series with the lower displacement.
The modifications that have been made in the theoretical rotational values applicable to the
elements of these two highest rotational groups since the publication of the first edition are
the result of a review of the situation in the light of this new understanding of the trend of
the specific rotation. The general pattern in group 4A is now seen to be that of the series
from 5-4½ to 4-4½, with a return to 4½-4½ in the lower electronegative elements. So far as
can be determined at this time, Group 4B follows the same pattern one step farther
advanced; that is, it begins with 4½-5 rather than 5-4½.
The difference in the inter-atomic distance corresponding to one of the steps in this
conversion process is relatively small, and in view of the substantial variation in the
experimental values it has not appeared advisable to take into account the possibility of
combinations such as 4½-5 specific rotation of one atom of a pair and 4½-4½ in the other. It
seems clear that such combinations do exist in some of the lower group elements, Sodium,
for example, and they probably play some part in the higher groups. Most of the reported
distances for Holmium and Erbium, for instance, agree more closely with a combination of
5-4½ and 4½-5 than with either individually. However, all of these values are theoretically
possible, and the only question at issue in this and many other similar cases is which
theoretical value corresponds to the observed distance. Definitive answers to identification
questions of this kind will have to wait until the theoretical probabilities are specifically
evaluated, or the experimental uncertainties are resolved.
Many questions concerning alternate crystal structures will also have to wait for more
information from theory or experiment, particularly where crystal forms that exist only at
high temperatures or pressures are involved. There is, however, a large body of information
already available in this area, and it can be tied into the theoretical picture as soon as
someone has the time and the inclination to undertake the task.

Chapter 3

Distances in Compounds
Thus far in the discussion of the inter-atomic distances we have been dealing with
aggregates composed of like atoms. The same general principles apply to aggregates of
unlike atoms, but the existence of differences between the components of such systems
introduces some new factors that we will now want to examine.
The matters to be considered in this chapter have no relevance to direct combinations of
electropositive elements (aggregates of which are mixtures or alloys, rather than chemical
compounds). As noted in Chapter 18, Vol. I, the proportions in which such elements can
combine may be determined, or limited, by geometrical considerations, but aside from such
effects, unlike atoms of this kind can combine on the same basis as like atoms. Here the
forces are identical in character and concurrent, the type of combination that we have called
the positive orientation. The resultant specific electric rotation, according to the principles
previously set forth, is (t1t2)½, the geometric mean of the two constituents. If the two
elements have different magnetic rotations, the resultant is also the geometric mean of the
individual rotations, as the magnetic rotations always have positive displacements, and these
combine in the same manner as the positive electric displacements. The effective electric
and magnetic specific rotations thus derived can then be entered in the applicable force and
distance equations from Chapter 1.
Combinations of unlike positive atoms may also take place on the basis of the reverse
orientation, the alternate type of structure that is available to the elemental aggregates.
Where the electric rotations of the components differ, the resultant specific rotation of the
two-atom combination will not be the required neutral 5 or 10, but a second pair of atoms
inversely oriented to the first results in a four-atom group that has the necessary rotational
balance. As brought out in Volume I, the simplest type of combination in chemical
compounds is based on the normal orientation, in which Division I electropositive elements
are joined with Division IV electronegative elements on the basis of numerically equal
displacements. The resultant effective specific magnetic rotation can be calculated in the
same manner as in the all-positive structures, but, as we saw in our consideration of the
inter-atomic distances of the elements, where an equilibrium is established between positive
and negative electric rotations, the resultant is the sum of the two individual values, rather
than the mean.
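
A brief arithmetic check (a sketch; the positive values are taken from Table 2, while the negative-element values shown are inferred for illustration rather than quoted from the text) shows how the summed electric rotations listed in Table 7 arise.

```python
# Normal orientation: the resultant specific electric rotation is the sum of the
# positive and negative components.
examples = {
    "NaCl": (2.0, 2.0),   # sodium 2 + chlorine 2
    "KBr":  (2.0, 2.0),   # potassium 2 + bromine 2
    "CaO":  (2.5, 3.0),   # calcium 2.5 + oxygen 3
    "ScN":  (4.0, 3.0),   # scandium 4 + nitrogen 3
}
table_7 = {"NaCl": 4.0, "KBr": 4.0, "CaO": 5.5, "ScN": 7.0}
for name, (pos, neg) in examples.items():
    assert pos + neg == table_7[name]
    print(f"{name}: {pos} + {neg} = {pos + neg} (Table 7 value {table_7[name]})")
```
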

Table 7: Distances - NaCl Type Compounds


Compound       Specific Rotation                Distance
               Magnetic     Magnetic    Elec.   Calc.   Obs.
LiH 3(2) 3(2) 3 2.04 2.04
LiF 3(2) 3(2) 3 2.04 2.01
LiCl 3(2) 3½-3½ 4 2.57 2.57
LiBr 3(2) 4-4 4 2.77 2.75
LiI 3(2) 5-4 4 2.96 3.00
NaF 3-2½ 3(2) 4 2.26 2.31
NaCl 3-2½ 3½-3½ 4 2.77 2.81
NaBr 3-2½ 4-4 4 2.94 2.98
NaI 3-3 5-4 4 3.21 3.23
MgO 3-3 3(2) 5½ 2.15 2.10
MgS 3-3 3½-3½ 5½ 2.60 2.59
MgSe 3-3 4-4 5½ 2.76 2.72
KF 4-3 3(2) 4 2.63 2.67
KCl 4-3 3½-3½ 4 3.11 3.14
KBr 4-3 4-4 4 3.30 3.29
KI 4-3 5-4 4 3.47 3.52
CaO 4-3 3(2) 5½ 2.38 2.40
CaS 4-3 3½-3½ 5½ 2.81 2.84
CaSe 4-3 4-4 5½ 2.98 2.95
CaTe 4-3 5-4 5½ 3.13 3.17
ScN 4-3 3(2) 7 2.22 2.22
TiC 4-3 3(2) 8½ 2.12 2.16
RbF 4-4 3(2) 4 2.77 2.82
RbCl 4-4 3½-3½ 4 3.24 3.27
RbBr 4-4 4-4 4 3.43 3.43
RbI 4-4 5-4 4 3.61 3.66
SrO 4-4 3(2) 5½ 2.51 2.57
SrS 4-4 3½-3½ 5½ 2.92 2.93
SrSe 4-4 4-4 5½ 3.10 3.11
SrTe 4-4 5-4 5½ 3.26 3.24
CsF 5-4 3(2) 4 2.96 3.00
CsCl 5-4 4-3 4 3.47 3.51
BaO 5-4½ 3(2) 5½ 2.72 2.76
BaS 5-4½ 4-3 5½ 3.17 3.17
BaSe 5-4½ 4-4 5½ 3.30 3.31
BaTe 5-4½ 5-4 5½ 3.47 3.49
LaN 5-4 3(2) 6 2.61 2.63
LaP 5-4 4-3 6½ 2.99 3.01
LaAs 5-4 4-4 7 3.04 3.06
LaSb 5-4 5-4 7 3.20 3.24
LaBi 5-4 5-4½ 7 3.24 3.28
When this arrangement unites one electropositive atom with each electronegative atom the
resulting structure is usually a simple cube with the atoms of each element occupying
alternate corners of the cube. This is called the Sodium Chloride structure, after the most
familiar member of the family of compounds crystallizing in this form. Table 7 gives the
interatomic distances of a number of common NaCl type crystals. From this tabulation it can
be seen that the special rotational characteristics which certain of the elements possess in the
elemental aggregates carry over into their compounds. The second element in each group
shows the same preference for rotation on the basis of vibration two that we encountered in
examining the structures of the elements. Here, again, this preference extends to some of the
following elements, and in such series of compounds as CaO, ScN, TiC, one component
keeps the vibration two status throughout the series, and the resulting effective rotations are
5½, 7, 8½, rather than 6, 8, 10. The elements of the lower groups have inactive force
dimensions in the compounds just as in the elemental structures previously examined. If the
active dimensions are not the same in both components, the full rotational force of the more
active component is effective in its excess dimensions, the effective rotation in an inactive
dimension being unity. For example, the value of ln t for magnetic rotation 3 is 1.099 in
three dimensions, or 0.7324 in two dimensions. If this two-dimensional rotation is combined
with a three-dimensional magnetic rotation x, the resultant value of ln t is (0.7324x)½, the
geometric mean of the individual values, in two dimensions, and x in the third. The average
value for all three dimensions is (0.7324x²)⅓.
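
The quoted figures can be reproduced directly; the sketch below (an illustration, with the example value of x chosen arbitrarily) evaluates the two-dimensional term and the three-dimensional average.

```python
# ln 3 = 1.099 in three dimensions; 2/3 of that (0.7324) in two dimensions; and the
# three-dimensional average (0.7324 * x**2) ** (1/3) when combined with a rotation
# whose ln t value is x.
from math import log

ln_3 = log(3)
two_dim = (2 / 3) * ln_3
print(f"ln 3 = {ln_3:.3f}; two-dimensional value = {two_dim:.4f}")

x = log(4)  # an arbitrary example value for the three-dimensional rotation
average = (two_dim * x * x) ** (1 / 3)
print(f"combined with x = ln 4 = {x:.3f}: three-dimensional average = {average:.3f}")
```
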

This dimensional inactivity in the lower groups plays only a minor role in the structures of
the elements, as can be seen from the fact that it did not need any attention until almost the
end of Table 8.

Table 8: Distances - CaF2 Type Compounds


Compound       Specific Rotation                Distance
               Magnetic     Magnetic    Elec.   Calc.   Obs.
Na2O 3-2½ 3(2) 3½ 2.39 2.40
Na2S 3-2½ 4-3 4 2.83 2.83
Na2Se 3-2½ 4-4 4 2.94 2.95
Na2Te 3-2½ 5-4½ 4 3.13 3.17
Mg2Si 3-3 4-3 5 2.73 2.77
Mg2Ge 3-3 4-4 5½ 2.76 2.76
Mg2Sn 3-3 5-4 5½ 2.90 2.93
Mg2Pb 3-3 5-4½ 5½ 2.94 2.96
K2O 4-3 3(2) 3½ 2.79 2.79
K2S 4-3 4-3 4 3.17 3.20
K2Se 4-3 4-4 4 3.30 3.33
K2Te 4-3 5-4½ 4 3.51 3.53
CaF2 4-3 3(2) 5½ 2.38 2.36
Rb2O 4-4 3(2) 3½ 2.94 2.92
Rb2S 4-4 4-3 4 3.30 3.31
SrF2 4-4 3(2) 5½ 2.50 2.50
SrCl2 4-4 4-3 5½ 2.98 3.03
BaF2 5-4 3(2) 5½ 2.68 2.68
BaCl2 5-4½ 4-3 5½ 3.17 3.18*
The compounds of lithium with valence one negative elements follow the regular pattern,
and were included in Table 7, but the compounds with valence two elements are irregular,
and they have therefore been omitted from Table 8. As we will see in Chapter 6, the
irregularity is due to the fact that the two lithium atoms in a molecule of the CaF2 type act as
a radical rather than as independent constituents of the molecule.
These two normal orientation tables, 7 and 8, provide an impressive confirmation of the
validity of the theoretical findings. One of the problems in dealing with the inter-atomic
distances of the elements is that because of the relatively small total number of elements, the
number to which any particular magnetic rotational combination is applicable is quite small,
and consequently it is rather difficult to establish a prima facie case for the authenticity of
the rotational values. But this is not true of the normal type compounds, as they are more
numerous and less variable. There are two elements in these tables, sulfur and chlorine, that
have different magnetic rotations under different conditions. These elements have 4-3
rotation in the CaF2 type crystals, and in the NaCl type combinations with elements of group
4A. In the other compounds of the NaCl type they take the 3½-3½ rotations. There are also
two more elements, each of which, according to the information now available, deviates
from its normal rotations in one of the listed compounds. Otherwise, all of the elements
entering into the 60 compounds in the two tables have the same specific magnetic rotations
in every compound in which they participate.
Furthermore, when the inherent differences between the elemental and compound
aggregates are taken into account, there is also agreement between these rotations in the
compounds and the specific rotations of the same elements in the elemental aggregates. The
most common difference of this kind is a result of the fact that the Division IV element in a
compound has a purely negative role. For this reason it takes the magnetic rotation of the
next higher group. In the elemental aggregates half of the atoms are reoriented to act in a
positive capacity. Consequently, they tend to retain the normal rotation of the group to
which they actually belong. For example, the Division IV elements of Group 3A,
germanium, arsenic, selenium, and bromine, have the normal specific rotation of their group,
4-3, in the crystals of the elements, but in the compounds they take the 4-4 specific rotation
of Group 3B, acting as negative members of that group.
Another difference between the two classes of structures is that those elements of the higher
groups that have the option of extending their rotation to a second vibrational unit are less
likely to do so if they are combining with an element which is rotating entirely on the basis
of vibration one. Aside from these deviations due to known causes, the values of the specific
magnetic rotation determined for the elements in Chapter 2 are also generally applicable to
the compounds. This equivalence does not apply to the specific electric rotations, as they are
determined by the way in which the rotations of the constituents of each aggregate are
oriented relative to each other, a relation that is different in the two classes of structures.
This applicability of the same equations and, in general, the same numerical values, to the
calculation of distances in both elements and compounds contrasts sharply with the
conventional theory that regards the inter-atomic distance as being determined by the "sizes"
of the atoms. The sodium atom, or "ion," in the NaCl crystal, for example, is asserted to
have a radius only about 60 percent as large as the radius of the atom in the elemental
aggregate. If this atom takes part in a compound which cannot be included in the "ionic"
class, current theory gives it still a different "size": what is called a "covalent" radius. The
need for assuming any extraordinary changeability in the size of what, so far as we can tell,
continues to be the same object, is now eliminated by the finding that the variations in the
inter-atomic distance have nothing to do with the sizes of the atoms, but merely indicate
differences in the location of the equilibrium between the inward and the outward forces to
which the atoms are subject.
Another type of orientation that forms a relatively simple binary compound is the rotational
combination that we found in the diamond structure. As in the elements, this is an
equilibrium between an atom of a Division IV element and one of Division III, the
requirement being that t1 + t2 = 8. Obviously, the only elements that can meet this requirement by themselves are
those whose negative rotational displacement (valence) is 4, but any Division IV element
can establish an equilibrium of this kind with an appropriate Division III element.
Closely associated with this cubic diamond-like Zinc Sulfide class of crystals is a hexagonal
structure based on the same orientation, and containing the same equal proportions of the
two constituents. Since these controlling factors are identical in the two forms, the crystals
of the hexagonal Zinc Oxide class have the same inter-atomic distances as the corresponding
Zinc Sulfide structures. In such instances, where the inter-atomic forces are the same, there
is little or no probability advantage of one type of crystal over the other, and either may be
formed under appropriate conditions. Table 9 lists the inter-atomic distances for some
common crystals of these two classes.

Table 9: Distances - Diamond Type Compounds


Compound       Specific Rotation                Distance
               Magnetic     Magnetic    Elec.   Calc.   Obs.
ZnS (Cubic) Class
AlP 3-4 3½-3½ 10 2.32 2.35
AlAs 3-4 4-4 10 2.46 2.43
AlSb 3-4 5-4½ 10 2.62 2.66
SiC 3-4 3(2) 10 1.94 1.93*
CuCl 3-4 3½-3½ 10 2.32 2.35
CuBr 3-4 4-4 10 2.46 2.46
CuI 3-4 5-4 10 2.59 2.62
ZnS 3-4 3½-3½ 10 2.32 2.36
ZnSe 3-4 4-4 10 2.46 2.45
ZnTe 3-4 5-4½ 10 2.62 2.63*
GaP 3-4 3½-3½ 10 2.32 2.36
GaAs 3-4 4-4 10 2.46 2.43
GaSb 3-4 5-4½ 10 2.62 2.65
AgI 4-4 5-4 10 2.80 2.81
CdS 4-4 3½-3½ 10 2.51 2.52
CdTe 4-4 5-4 10 2.80 2.78
InP 4-4 3½-3½ 10 2.51 2.54
InAs 4-4 4-4 10 2.66 2.62
InSb 4-4 5-4 10 2.80 2.80
ZnO (Hexagonal) Class
AlN 3-4 3(2) 10 1.94 1.90
ZnO 3-4 3(2) 10 1.94 1.95
ZnS 3-4 3½-3½ 10 2.32 2.33
GaN 3-4 3(2) 10 1.94 1.94
AgI 4-4 5-4 10 2.80 2.81
CdS 4-4 3½-3½ 10 2.51 2.51
CdSe 4-4 4-4 10 2.66 2.63
InN 4-4 3(2) 10 2.15 2.13
The comments that were made about the consistency of the specific rotation values in Tables
7 and 8 are applicable to the values in Table 9 as well. Most of the elements participating in
the compounds of this table have the same specific rotations as in the previous tabulations,
and where there are exceptions, the deviations are of a regular and predictable nature.
A feature of Table 9 is the appearance of one of the normally electropositive elements of
group 2B, Aluminum, in the role of a Division III element. Beryllium and magnesium also
form ZnS type compounds, but like the lithium compounds previously mentioned they are
irregular, probably for the same reason, and have not been included in the tabulation. The
Division III behavior of these normally Division I elements is a result of the small size of the
lower groups, which puts their Division I elements into the same positions with respect
to the electronegative zero point as the Division III elements of the larger groups. This
relationship is indicated in the following tabulation, where the asterisks identify those
elements that are normally in Division I.

Division III
Be* Mg* Zn
B* Al* Ga
C Si Ge
N P As
O S Se
F Cl Br
None of the orientations thus far considered is applicable to compounds of the Division II
elements. The normal orientation does not exist above a specific rotation of 5, as the higher
value would put the relative rotation above the limiting value 10. The Zinc Oxide and Zinc
Sulfide types of combination are electronegative structures, and the reverse orientation of
the Division II elemental structures is not available for compounds with negative elements.
The Division II elements therefore form their compounds on the basis of the magnetic
orientation. This type of structure is theoretically available for any element, but its use is
limited by probability considerations. It is utilized in many of the compounds of Divisions
III and IV, especially in the higher rotational groups, but rarely appears in Division I
combinations because of the very high probability of the normal orientation in this division.
Since the magnetic rotation is distributed over all three dimensions, its effective component
is not altered by a change in position, and has the same value in the magnetic orientations as
in the corresponding compounds based on the electric orientations. In order to establish the
magnetic type of equilibrium, however, the axis of the negative electric rotation has to be
parallel to that of one of the magnetic rotations, and it is therefore perpendicular to the axis
of the positive electric rotation. Consequently, the latter takes no part in the normal inter-
atomic force equilibrium, and it constitutes an additional orienting influence, the effects of
which were discussed in Volume I. In these compounds of the magnetic type the
displacement of the negative component (-x) is balanced by a numerically equal positive
displacement (x). Thus the magnetic orientation is somewhat similar to the normal
orientation. However, the magnetic rotation is opposite in vectorial direction to the electric
rotation, and the resultant relative rotation effective in the dimension of combination is
therefore one of the neutral values 10, 5, or a combination of these two, rather than the 2x of
the normal orientation.
Compounds based on the magnetic orientation occur in a variety of crystal forms, the nature
of which depends on the degree of force symmetry and the number of atoms of each kind in
the equilibrium system. In some cases there is enough symmetry to make isometric
structures of the NaCl, CaF2, and similar types possible. Other crystals are asymmetric. A
common arrangement for the binary compounds is the Nickel Arsenide structure, a
hexagonal crystal in which the positive atoms occupy the face positions and the negative
atoms are in the central positions, spaced alternately 1/4 and 3/4 along the c axis. Table 10
shows the inter-atomic distances calculated for some NiAs and NaCl type crystals of binary
magnetic orientation compounds of Group 3A.
Almost all of the NiAs type compounds that have been examined in the course of this
present work take the vibration one value of the specific electric rotation: 10. The magnetic
orientation compounds with the NaCl structure are quite evenly divided between the 10
rotation and the combination 5-10 in the 3A group, but utilize the 5-10 rotation almost
exclusively in the higher groups. In order to show as wide a variety of the features of these
magnetic type compounds as is possible in the limited amount of space that can be allotted
to them, Table 10 has been restricted to Group 3A compounds, and the following Table 11
gives the data for a representative sample of the compounds of the rare earth elements (from
Group 4A), together with a selection of compounds from Group 4B, in which the identical
values of the inter-atomic distance in the combinations of the elements of this group with the
Division IV elements of Group 2A are emphasized.

Table 10: Distances - Binary Magnetic Orientation Compounds


Compound       Specific Rotation                Distance
               Magnetic     Magnetic    Elec.   Calc.   Obs.
NiAs (Hexagonal) Class - Group 3A
VS 4-3 3½-3½ 10 2.42 2.42
VSe 4-3 4-4 10 2.56 2.55
CrS 4-3 3½-3½ 10 2.42 2.44
CrSe 4-3 4-4 10 2.56 2.54
CrSb 4-3 5-4½ 10 2.73 2.74
CrTe 4-3 5-4½ 10 2.73 2.77
MnAs 4-3 4-4 10 2.56 2.58
MnSb 4-3 5-4½ 10 2.73 2.78
FeS 4-3 3½-3½ 10 2.42 2.45
FeSe 4-3 4-4 10 2.56 2.55
FeSb 4-3 5-4 10 2.69 2.67
FeTe 3-4 5-4 10 2.59 2.61
CoS 3-4 3½-3½ 10 2.32 2.33
CoSe 3-4 4-4 10 2.46 2.46
CoSb 3-4 5-4 10 2.59 2.58
CoTe 3-4 5-4 10 2.59 2.62
NiS 3½-3½ 3½-3½ 10 2.37 2.38
NiAs 3½-3½ 4-3 10 2.42 2.43
NiTe 3½-3½ 5-4 10 2.64 2.64
NaCl (Cubic) Class - Group 3A
VN 4-3 3(2) 10 2.04 2.06
VO 4-3 3(2) 10 2.04 2.05
CrN 4-3 3(2) 10 2.04 2.07
MnO 3½-3½ 3(2) 5-10 2.18 2.22
MnS 3½-3½ 3½-3½ 5-10 2.59 2.61
MnSe 3½-3½ 4-4 5-10 2.75 2.72
FeO 3-4 3(2) 5-10 2.12 2.16
CoO 3-4 3(2) 5-10 2.12 2.12
Thus far the calculation of equilibrium distances has been carried out by crystal types as a
matter of convenience in identifying the effect of various atomic characteristics on the
crystal form and dimensions. It is apparent from the points brought out in the discussion,
however, that identification of the crystal type is not always essential to the determination of
the inter-atomic distance. For example, let us consider the series of compounds NaBr,
Na2Se, and Na3As. From the relations that have been established in the preceding pages we
may conclude that these Division I compounds are formed on the basis of the normal
orientation. We therefore apply the known value of the relative specific electric rotation of a
normal orientation Sodium compound, 4, and the known values of the normal specific
magnetic rotations of Sodium and the Group 3B elements, 3-3½ and 4-4 respectively, to
equation 1-10, from which we ascertain that the most probable inter-atomic distance in all
three compounds is 2.95, irrespective of the crystal structure. (Measured values are 2.97,
2.95, and 2.94 respectively.)

Table 11: Distances - Binary Magnetic Orientation Compounds


Compound       Specific Rotation                Distance
               Magnetic     Magnetic    Elec.   Calc.   Obs.
CeN 5-4 3(2) 5-10 2.52 2.50
CeP 5-4 4-3 5-10 2.94 2.95
CeS 5-4 3½-3½ 5-10 2.89 2.89*
CeAs 5-4 4-4 5-10 3.06 3.03
CeSb 5-4 5-4 5-10 3.22 3.20
CeBi 5-4 5-4 5-10 3.22 3.24
PrN 5-4 3(2) 5-10 2.52 2.58
PrP 5-4 4-3 5-10 2.94 2.93
PrAs 4½-4 4-4 5-10 2.98 3.00
PrSb 4½-4 5-4 5-10 3.14 3.17
NdN 5-4 3(2) 5-10 2.52 2.57
NdP 5-4 4-3 5-10 2.94 2.91
NdAs 4½-4 4-4 5-10 2.98 2.98
NdSb 4½-4 5-4 5-10 3.14 3.15
EuS 5-4 4-3 5-10 2.94 2.98
EuSe 5-4 4-4 5-10 3.06 3.08
EuTe 5-4 5-4½ 5-10 3.26 3.28
GdN 5-4 3(2) 5-10 2.52 2.50*
YbSe 4½-4 4-4 5-10 2.98 2.93
YbTe 4½-4 5-4 5-10 3.14 3.17
ThS 4½-4½ 3½-3½ 5-10 2.85 2.84
ThP 4½-4½ 4-3 5-10 2.91 2.91
UC 4½-4½ 3(2) 5-10 2.47 2.50*
UN 4½-4½ 3(2) 5-10 2.47 2.44*
UO 4½-4½ 3(2) 5-10 2.47 2.46*
NpN 4½-4½ 3(2) 5-10 2.47 2.45*
PuC 4½-4½ 3(2) 5-10 2.47 2.46*
PuN 4½-4½ 3(2) 5-10 2.47 2.45*
PuO 4½-4½ 3(2) 5-10 2.47 2.48*
AmO 4½-4½ 3(2) 5-10 2.47 2.48*
The possible inter-atomic distances in the more complex compounds can be calculated in a
similar manner, without the necessity of analyzing the great variety of geometrical structures
in which these compounds crystallize. The usefulness of this procedure in application to
compounds in general is limited, at the present stage of the theoretical development, because
we are not normally able to define the specific rotations from theoretical premises as
definitely as in the foregoing illustration. It is of considerable value, however, in dealing
with the lower electronegative elements, whose specific electric rotations are confined to the
neutral values, and whose variability in the magnetic dimensions is only in the number of
inactive dimensions (that is, dimensions in which the specific rotation is 2). The elements
involved are those of Groups 1B and 2A: hydrogen, carbon, nitrogen, oxygen, and fluorine,
together with Boron, one of the normally electropositive elements of Group 2A. The other
two positive elements of this group, lithium and beryllium, are also two-dimensional under
most conditions, but they take the positive orientation, and have much greater inter-atomic
distances.
Table 12 gives the theoretically possible inter-atomic distances of these lower group
elements, with some examples of the measured values corresponding to the calculated
distances.

Table 12: Distances - Lower Negative Elements


Specific Rotation                    Distance
Magnetic     Magnetic    Elec.       n.u.     Å
3(1) 3(1) 10 .241 .70
3(1) 3(1½) 10 .317 .92
3(1½) 3(1½) 10 .363 1.06
3(1) 3(2) 10 .406 1.18
3(1½) 3(2) 10 .445 1.30
3(2) 3(2) 10 .483 1.41
3(2) 3(2) 5-10 .528 1.54

Calc.   Comb.   Example         Obs.
0.70    H-H     H2              0.74
0.92    H-F     HF              0.92
        H-C     Benzene         0.94
        H-O     Formic acid     0.95
1.06    H-N     Hydrazine       1.04
        H-C     Ethylene        1.06
        C-N     NaCN            1.09
        N-N     N2              1.09
        C-O     COS             1.10
1.18    C-O     CO2             1.15
        C-N     Cyanogen        1.16
        H-B     B2H6            1.17
        N-N     CuN3            1.17
        N-O     N2O             1.19
        C-C     Acetylene       1.20
1.30    H-B     B2H6            1.27
        C-O     CaCO3           1.29
        B-F     BF3             1.30
        C-N     Oxamide         1.31
        C-F     CF3Cl           1.32
        C-C     Ethylene        1.34
1.41    C-C     Benzene         1.39
        N-O     HNO3            1.41
        C-C     Graphite        1.42
        C-N     dl-Alanine      1.42
        C-O     Methyl ether    1.42
        C-F     CH3F            1.42
1.54    C-C     Diamond         1.54
        C-C     Propane         1.54
        B-C     B(CH3)2         1.56
The experimental results are not all in agreement with the theory. On the contrary, they are
widely scattered. The measured C-C distances, for example, cover almost the entire range
from 1.18, the minimum for this combination, to the maximum 1.54. However, the basic
compounds of each class do agree with the theoretical values. The paraffin hydrocarbons,
benzene, ethylene, and acetylene, have C-C distances approximating the theoretical 1.54,
1.41, 1.30, and 1.18 respectively. All C-H distances are close to the theoretical 0.92 and
1.06, and so on. It can reasonably be concluded, therefore, that the significant deviations
from the theoretical values are due to special factors that apply to the less regular structures.
A detailed investigation of the reasons for these deviations is beyond the scope of this
present work. However, there are two rather obvious causes that are worth mentioning. One
is that forces exerted by adjacent atoms may modify the normal result of a two-atom
interaction. An interesting point in this connection is that the effect, where it occurs, is
inverse; that is, it increases the atomic separation, rather than decreasing it as might be
expected. The natural reference system always progresses at unit speed, irrespective of the
positions of the structures to which it applies, and consequently the inward force due to this
progression always remains the same. Any interaction with a third atom introduces an
additional rotational (outward) force, and therefore moves the point of equilibrium outward.
This is illustrated in the measured distances in the polynuclear derivatives of benzene. The
lowest C-C distances in these compounds, 1.38 and 1.39, are found along the outer edges of
the molecular structures, while the corresponding distances in the interiors of the
compounds, where the influence of adjoining atoms is at a maximum, characteristically
range from 1.41 to 1.43.
Another reason for discrepancies is that in many instances the measurement and the
theoretical calculation do not apply to the same quantity. The calculation gives us the
distance between structural units, whereas the measurements apply to the distances between
specific atoms. Where the atoms are the structural units, as in the compounds of the NaCl
class, or where the inter-group distance is the same as the inter-atomic distance, as in the
normal paraffins, there is no problem, but exact agreement cannot be expected otherwise.
Again we can use benzene as an example. The C-C distance in benzene is generally reported
as 1.39, whereas the corresponding theoretical distance, as indicated in Table 12, is 1.41.
But, according to the theory, benzene is not a ring of carbon atoms with hydrogen atoms
attached; it is a ring of CH neutral groups, and the 1.41 neutral value applies to the distance
between these neutral groups, the structural units of the molecule. Since the hydrogen atoms are
known to be outside the carbon atoms, if these atoms are coplanar it follows that the distance
between the effective centers of the CH groups must be somewhat greater than the distance
between the carbon atoms of these groups. The 1.39 measurement between the carbon atoms
is therefore entirely consistent with the theoretical distance calculations.
The same kind of deviation from the results of the (apparent) direct interaction between
two individual atoms occurs on a larger scale where there is a group of atoms that is acting
structurally as a radical. Many of the properties of molecules composed in part, or entirely,
of radicals or neutral groups are not determined directly by the characteristics of the atoms,
but by the characteristics of the groups. The NH4 radical, for example, has the same specific
rotations, when acting as a group, as the rubidium atom, and it can be substituted in the
NaCl type crystals of the rubidium halides without altering the volume. Consequently, the
inter-atomic distances have no direct significance in compounds containing these groups. It
is theoretically feasible to locate the effective centers of the various groups, and to measure
the inter-group distances that correspond to those calculated from theory, but this task has
not yet been undertaken, and it will not be possible at this time to present a comparison
between theoretical and experimental distances in compounds containing radicals
comparable to the comparisons in Tables 1 to 12.
Some preliminary results have been obtained, however, on the relation between the theoretical
distances and the density in complex compounds. There are a number of factors, not yet
investigated in detail, that have some influence on the density of solid matter, and for that
reason the conclusions thus far derived from theory are somewhat tentative, and the
correlations between theory and observation are only approximate. Nevertheless, certain
aspects of these tentative results are significant, and are of enough interest to justify giving
them some attention.
If we divide the molecular mass, in terms of atomic weight units, by the density, we arrive at
the molecular volume in terms of the units entering into the density measurement. For
present purposes it will be convenient to convert this quantity to natural units of volume.
The applicable conversion factor is the cube of the time region unit of distance divided by
the mass unit atomic weight. In the cgs system of units it has the numerical value 14.908.
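As a rough illustration of this conversion (a sketch added here, not part of the original calculations; the NaNO3 figures are those listed in Table 13 below):

# Volume per volumetric group in natural units, assuming the cgs conversion
# factor 14.908 quoted above.
def volume_per_group(molecular_mass, density, n_groups, factor=14.908):
    molecular_volume = molecular_mass / density       # mass divided by density
    return molecular_volume / factor / n_groups       # natural units, per group

print(round(volume_per_group(85.01, 2.261, 2), 3))    # NaNO3: 1.261, cf. Table 13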
In Table 13 the average volumes per volumetric group of a representative number of
inorganic compounds containing radicals (V), as calculated from the measured densities, are
compared with the cubes of the inter-group distances (S03), as calculated on the theoretical
basis previously described.

Table 13: Molecular Volume


m d n V S03 c ab1 ab2
NaNO3 85.01 2.261 2 1.261 1.241 4 3-3 4-5
KNO3 101.10 2.109 2 1.608 1.565 4 4-3 4-5
Ca(NO3)2 164.10 2.36 3 1.554 1.565 4 4-3 4-5
RbNO3 147.49 3.11 2 1.590 1.63 4 4-4 4-4
Sr(NO3)2 211.65 2.986 3 1.585 1.631 4 4-4 4-4
CsNO3 194.92 3.685 2 1.774 1.825 4 4½-4½ 4-4
Na2CO3 106.00 2.509 3 0.944 0.970 4 3-3 3½-3½
MgCO3 84.33 3.037 2 0.931 0.970 4 3-3 3½-3½
K2CO3 138.20 2.428 3 1.272 1.222 4 4-3 3½-3½
CaCO3 100.09 2.711 2 1.238 1.222 4 4-3 3½-3½
BaCO3 197.37 4.43 2 1.494 1.532 4 4½-4½ 3½-3½
FeCO3 115.86 3.8 2 1.022 0.976 5 4-3 3½-3½
CoCO3 118.95 4.13 2 0.966 0.976 5 4-3 3½-3½
Cu2CO3 187.09 4.40 3 0.950 0.976 5 4-3 3½-3½
ZnCO3 125.39 4.44 2 0.947 0.976 5 4-3 3½-3½
Ag2CO3 275.77 6.077 3 1.015 1.096 5 4-4 3½-3½
The specific electric rotation (c) for the compounds with the normal orientation is 4, as in
the valence one binary compounds. Those with the magnetic orientation take the neutral
value 5. The applicable specific magnetic rotations for the positive component and the
negative radical are shown in the columns headed ab1 and ab2 respectively. Columns 2, 3,
and 4 give the molecular mass (m), the density of the solid compound (d), and the number of
volumetric units in the molecule (n). Here, again, as in the earlier tables, the calculated and
empirical values are not exactly comparable, as the measured values of the densities have
been used directly, rather than being projected back to zero temperature, a refinement that
would be required for accuracy, but is not justified at this early stage of the investigation.
In this table there are five pairs of compounds, such as Ca(NO3)2 and KNO3, in which the
inter-group distances are the same, and the only difference between the pairs, so far as the
volumetric factors are concerned, is in the number of structural groups. Because of the
uncertainties involved in the measured densities, it is difficult to reach firm conclusions on
the basis of each pair considered individually, but the average volume per group, calculated
from the density, in the five two-group structures is 1.267, whereas in the five three-group
structures the average is 1.261. It is evident from this that the volumetric equality of the
group and the independent atom which we noted in the case of the NH4 radical is a general
proposition, in this class of compounds at least. This is a point that will have a special
significance when we take up consideration of the liquid volume relations.
In closing the discussion in this chapter it is appropriate to reiterate that the values of the
inter-atomic and inter-group distance derived from theory apply to the separations as they
would exist if the equilibrium were reached at zero temperature and zero pressure. In the
next two chapters we will consider how these distances are modified when the solid
structure is subjected to finite pressures and temperatures.

Chapter 4

Compressibility
One of the simplest physical phenomena is compression, the response of the time region
equilibrium to external forces impressed upon it. With the benefit of the information
developed earlier, we are now in a position to begin an examination of the compression
of solids, disregarding for the present the question of the origin of the external forces. For
this purpose we introduce the concept of pressure, which is defined as force per unit area.
P=F/s² (4-1)
In many cases it will be convenient to deal with pressure on a volume basis rather than on
an area basis. We therefore multiply both force and area by distance, s, which gives us
the alternative equation:
P=Fs/s³=E/V (4-2)
In the region outside unit distance, where the atoms or molecules of matter are
independent, the total energy of an aggregate can thus be expressed in terms of pressure
and volume as
E=PV (4-3)
As we will find in the next chapter when we begin consideration of thermal motion, a
condition of constant temperature is a condition of constant energy, other things being
equal. Equation 4-3 thus tells us that for an aggregate in which the cohesive forces
between the atoms or molecules are negligible, an ideal gas, the volume at constant
temperature is inversely proportional to the pressure. This is Boyle's law, one of the well-
established relations of physics.
For application to the time region in which the solid equilibrium is located, the second
power of the volume must be substituted for the first power, in accordance with the
general inter-regional relation previously established. The time region equivalent of
Boyle's Law is therefore
PV²=k (4-4)
In terms of volume this becomes
V=k/P½ (4-5)
This equation tells us that at constant temperature the volume of a solid is inversely
proportional to the square root of the pressure. The pressure represented by the symbol P
in this equation is, of course, the total effective pressure; that is, the pressure equivalent
of all of the forces acting in opposition to the rotational forces of the atom. The force due
to the progression of the natural reference system opposes the rotational forces, and acts
in parallel with the external compressive forces, but it has the same magnitude regardless
of whether or not any such external forces are present. It therefore exerts what we may
call an internal pressure, an already existing level of pressure to which an external
pressure becomes an addition. In order to conform to established usage and to avoid
confusion, the symbol P will hereafter refer to the external pressure only, the total
pressure being expressed as P0 + P. On this basis, equation 4-5 may be restated as
V=k/(P0+P)½ (4-6)
Compression is normally expressed in terms of relative rather than absolute volumes, the
reference volume being the volume at zero external pressure, where equation 4-6 has the
form
V=k/P0½ (4-7)
Dividing the equation 4-6 by equation 4-7, and rearranging, we obtain
V/V0 = P0½/(P0+P)½ (4-8)
As this equation brings out, the internal pressure, P0, is the key factor in the compression
of solids. Inasmuch as this pressure is a result of the progression of the natural reference
system which, in the time region, is carrying the atoms inward in opposition to their
rotational forces (gravitation), the inward force acts only on two dimensions (an area),
and the magnitude of the pressure therefore depends on the orientation of the atom with
respect to the line of the progression. As indicated in connection with the derivation of
the inter-regional ratio, there are 156.444 possible positions of a displacement unit in the
time region, of which a fraction az represents the area subjected to pressure, a and z being
the effective displacements in the active dimensions. The letter symbols a, b, and c, are
used as indicated in Chapter 10, Volume I. The displacement z is either the electric
displacement c or the second magnetic displacement b, depending on the orientation of
the atom.
From the principle of equivalence of natural units it follows that each natural unit of
pressure exerts one natural unit of force per unit of cross-sectional area per effective
rotational unit in the third dimension of the equivalent space. However, the pressure is
measured in the units applicable to the effect of external pressure. The forces involved in
this pressure are distributed to the three spatial dimensions and to the two directions in
each dimension. Only those in one direction of one dimension–one sixth of the total–are
effective against the time region structure. Applying this 1/6 factor to the ratio
az/156.444, we have for the internal pressure per rotational unit at unit volume,
P0 = az/938.67 (4-9)
This expression may now be generalized to apply to y rotational units and V units of
volume, as follows:
P0 = azy/(938.67V) (4-10)
The force exerted by the progression of the natural reference system is independent of the
geometrical arrangement of the atoms, and the volume term in equation 4-10 refers to
what we may call the three-dimensional atomic space, the cube of the inter-atomic
distance, rather than to the geometric volume. We will therefore replace V by S03. This
gives us the internal pressure equation in final form:
P0 = azy/(938.67 S03) (4-11)
The value derived from this equation is the magnitude of the internal pressure in terms of
natural units. To obtain the pressure in terms of any conventional system of units it is
only necessary to apply a numerical coefficient equal to the value of the natural unit of
pressure in that system. This natural unit was evaluated in Volume I as 5.282 × 10¹²
dynes/cm². The corresponding values in the systems of units used in the reports of the
experiments with which comparisons will be made in this chapter are:
1.554 × 10⁷ atm
1.606 × 10⁷ kg/cm²
1.575 × 10⁷ megabars
In terms of the units used by P.W. Bridgman, the pioneer investigator in the field, in most
of his work, equation 4-11 takes the form
P0 = 17109 azy/S03 kg/cm² (4-12)
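As a numerical check (an illustrative sketch; the compression factors and S03 values used are those listed in Table 14 below), equation 4-12 gives:

# Internal pressure P0 = 17109 azy / S03, expressed in M kg/cm2 (thousands of kg/cm2).
def internal_pressure(a, z, y, s0_cubed):
    return 17109 * a * z * y / s0_cubed / 1000.0      # M kg/cm2

print(round(internal_pressure(4, 1, 1, 2.048), 1))    # sodium: 33.4 M kg/cm2
print(round(internal_pressure(4, 8, 1, 0.603)))       # iron:   908 M kg/cm2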
The internal pressure thus calculated for any specific substance is not usually constant
through the entire external pressure range. At low total pressures, the orientation of the
atom with respect to the line of progression of the natural reference system is
determined by the thermal forces which, as we will see later, favor the minimum values
of the effective cross-sectional area. In the low range of total pressures, therefore, the
cross-section is as small as the rotational displacements of the atom will permit. In
accordance with Le Chatelier‘s Principle, a higher pressure, either internal or external,
applied against the equilibrium system causes the orientation to shift, in one or more
steps, toward higher displacement values. At extreme pressures the compressive force is
exerted against the maximum cross-section: 4 magnetic units in one dimension and 8
electric units in another. Similarly, only one of the magnetic rotational units in the atom
participates in the radial component y of the resistance to compression at the low
pressures, but further application of pressure extends the participation to additional
rotational units, and at extreme pressures all of the rotational units in the atom are
involved. The limiting value of y is therefore the total number of such units. The exact
sequence in which these two kinds of factors increase in the intermediate pressure range
has not yet been determined, but for present purposes a resolution of this issue is not
necessary, as the effect of any specific amount of increase is the same in both cases.
Helium and neon, the first two of the inert gases, the elements that have no effective
rotation in the electric dimension, take the absolute minimum compression factors: one
rotating unit with one effective unit of displacement in each of the two effective
dimensions. The azy factors for these elements can be expressed as 1-1-1. In this
notation, which we will use for convenience in the subsequent discussion, the numerical
values of the compression factors are given in the same order as in the equations. It
should be noted that the absolute minimum compression, that applicable to the elements
of least displacement, is explicitly defined by the factors 1-1-1. The value of the factor a
increases in the higher members of the inert gas series because of their greater magnetic
displacement.
Because of their negative displacement in the electric dimension, which, in this context,
is equivalent to the zero displacement of the inert gases, the electronegative elements
follow the inert gas pattern, taking the minimum 1-1-1 factors in the lowest members of
the lowest rotational groups, and values that are higher, but still generally well below
those of the corresponding electro-positive elements, as the displacement increases in
either or both of the atomic rotations. None of the elements of the electronegative
divisions below electric displacement 7 has the 4-8 az factors initially, although they are
capable of these high levels, and can eventually reach them under appropriate conditions.
All of the electropositive elements studied by Bridgman have the full 4 units in one
dimension; that is, a = 4. The value of z for the alkali metals is equal to the electric
displacement, one unit, and since y takes the minimum value under low pressure
conditions, the compression factors for these elements are 4-1-1. The displacement 2
elements (calcium, etc.) take the intermediate values 4-2-1 or 4-3-1. The greater
displacements of the elements that follow have a double effect. They increase the internal
pressure directly by enlarging the effective cross-section, and this higher internal pressure
then has the same effect as a greater external pressure in causing a further increase in the
compression factors. Most of these elements therefore utilize the full displacements of the
active cross-section dimensions from the start of compression; that is, 4-4-1 (az = ab, two
magnetic dimensions) in some of the lower group elements and the transition elements of
Group 4A, and 4-8-1, or 4-8-n (az = ac, one magnetic and one electric dimension) in the
others.
The factors that determine the internal pressures of the compounds that have been
investigated thus far fall mainly in the intermediate range, between 4-1-1 and 4-4-1.
NaCl, for instance, has 4-2-1 initially, and shifts to 4-3-1 in the pressure range between
30 and 50 M kg/cm2. AgCl has 4-3-1 initially, and carries these factors up to a transition
point near Bridgman's pressure limit of 100 M kg/cm2. CaF2 has the factors 4-4-1 from
the start of compression. The initial values of the internal pressure of most of the
inorganic compounds examined in this investigation are based on one or another of these
three patterns. Those of the organic compounds are mainly 4-1-1, 4-2-1, or an
intermediate value, 4-1½-1.
Compression is ordinarily measured in terms of relative volume, and most of the
discussion in this chapter will deal with the subject on this basis, but for some purposes
we will be interested in the compressibility, the rate of change of volume under pressure.
This rate is obtained by differentiating equation 4-8.
(1/V0) dV/dP = P0½/[2(P0+P)³/²] (4-13)
The compressibility at P0, the initial compressibility, is of particular interest. For all
practical purposes it is the same as the compressibility at one atmosphere, this pressure
being only a small fraction of the internal pressure P0. The initial compressibility may be
obtained from equation 4-13 by letting P equal zero. The result is
(1/V0) (dV/dP)P=0 = 1/(2P0) (4-14)
Since the initial compressibility is a quantity that can be measured, its simple and direct
relation to the internal pressure provides a significant confirmation of the physical reality
of that theoretical property of matter. Initial compressibility factors derived theoretically
for those elements on which consistent compressibility data are available for comparison,
the internal pressures calculated from these factors, and the initial compressibilities
corresponding to the calculated internal pressures are listed in Table 14, together with
measured values of the initial compressibility at room temperature. Two sets of
experimental values are given, one from Bridgman and one from a more recent
compilation. Values of S03, except those marked with asterisks, are computed from the
inter-atomic distances (S0) in the tables of Chapter 2. Where the structure is anisotropic
the S03 value shown is the product of one of the distances given in the earlier tabulations
by the square of the other. The reason for the occurrence of the indicated deviations from
the Chapter 2 values will be explained later.
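The calculated column of Table 14 can be reproduced with a short sketch of equation 4-14, using the internal pressures obtained from equation 4-12:

# Initial compressibility (1/V0)(dV/dP) at P = 0 is 1/(2 P0). With P0 in M kg/cm2,
# the result is reported, as in Table 14, in units of 10^-6 per kg/cm2.
def initial_compressibility(p0_m_kg_cm2):
    return 1.0e6 / (2.0 * p0_m_kg_cm2 * 1000.0)

print(round(initial_compressibility(33.4), 2))        # sodium: 14.97
print(round(initial_compressibility(908.0), 2))       # iron:    0.55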

Table 14: Initial Compressibility


       S03     Comp. Factors     P0              Initial Compressibility × 10⁶
               a   z   y         (M kg/cm2)      Calc.      Obs.3      Obs.4
Li 1.151 4 1 1 59.5 8.42 8.41 8.46
Be 0.482 4 4 1 568 0.88 0.87 0.98
C(dia.) 0.147 4 6 1 2793 0.18 0.18 0.18
Na 2.048 4 1 1 33.4 14.97 15.1 14.42
Mg 1.291 4 4 1 212 2.36 2.86 2.77
Al 0.915 4 5 1 374 1.34 1.30 1.36
Si 0.497 4 4 1 551 0.91 0.31 0.99
K 3.659 4 1 1 18.7 26.74 31.0 30.4
Ca 2.588 4 3 1 79.3 6.31 5.51 6.45
Ti 1.033 4 8 1 530 0.94 0.77 0.93
V 0.729 4 8 1 751 0.67 0.59 0.61
Cr 0.603 4 8 1 908 0.55 0.50 0.52
Mn 0.705 4 8 1 777 0.64 0.76 1.65
Fe 0.603 4 8 1 908 0.55 0.57 0.58
Co 0.603* 4 8 1 908 0.55 0.52 0.51
Ni 0.603* 4 8 1 908 0.55 0.50 0.53
Cu 0.652 4 6 1 630 0.79 0.70 0.72
Zn 0.903 4 4 1 303 1.65 1.64 1.64
Ge 0.603 4 4 1 454 1.10 1.33 1.27
Rb 4.616 4 1 1 14.8 33.78 38.7 31.4
Sr 3.268 4 3 1 62.8 7.96 7.9 8.46
Zr 1.306 4 6 1½ 472 1.06 1.06 1.18
Nb 0.921 4 8 1½ 892 0.56 0.55 0.58
Mo 0.764* 4 8 2 1433 0.35 0.34 0.36
Ru 0.764* 4 8 2 1433 0.35 0.34 0.31
Rh 0.764 4 8 2 1433 0.35 0.36 0.36
Pd 0.823 4 8 1½ 998 0.50 0.51 0.54
Ag 0.956 4 8 1 573 0.87 0.96 0.97
Cd 1.118 4 4 1 245 2.04 1.89 2.10
In 1.165* 4 4 1 235 2.13 2.38
Sn 0.913* 4 4 1 300 1.67 1.64 0.80
Sb 1.325* 4 4 1 207 2.42 2.32 2.56
Cs 5.774 4 1 1 11.9 42.0 59.0 49.1
Ba 2.686* 4 2 1 51.0 9.80 9.78
La 2.044 4 4 1 134 3.73 3.39 4.04
Ce 1.893 4 4 1 145 3.45 3.45 4.10
Pr 1.758* 4 4 1 156 3.21 3.21
Nd 1.758* 4 4 1 156 3.21 3.00
Sm 1.758* 4 4 1 156 3.21 3.34
Gd 1.346* 4 4 1 203 2.46 2.56
Dy 1.346 4 4 1 203 2.46 2.55
Ho 1.346* 4 4 1 203 2.46 2.47
Er 1.346* 4 4 1 203 2.46 2.38
Tm 1.346* 4 4 1 203 2.46 2.47
Yb 2.167* 4 2 1 63.2 7.92 7.38
Lu 1.346* 4 4 1 203 2.46 2.38
Ta 1.027* 4 8 2 1066 0.47 0.47 0.49
W 0.953* 4 8 3 1723 0.29 0.28 0.30
Ir 0.823 4 8 3 1996 0.25 0.28
Pt 0.823 4 8 2 1330 0.38 0.35 0.35
Au 0.953 4 8 1½ 862 0.58 0.56 0.57
Tl 1.631 4 4 1 168 2.98 3.31 2.74
Pb 1.249* 4 4 1 219 2.25 2.29 2.29
Bi 1.249 4 3 1 164 3.05 2.71 3.11
Th 1.758 4 8 1 311 1.61 1.81
U 0.984 4 8 1 556 0.90 0.94 0.99
In most cases the difference between the calculated and measured compressibilities is
within the probable experimental error. Substantial deviations from the calculated values
are to be expected in the case of low melting point elements such as the alkali metals,
unless corrections have been applied to the empirical data, as there is an additional
component in the initial volume of such substances. Elsewhere, the differences between
the calculated compressibilities and either of the two sets of experimental values are, on
the average, no greater than the differences between the experimental results. As noted
earlier, application of increasing pressure shifts the compression factors upward in steps;
this process is repeated at successively higher pressure levels until the maximum compression
factors for the element are reached.
Because of the nature of this compression pattern, a convenient method of analyzing the
experimental values of the volume of various substances under compression can be made
available by expressing equation 4-8 in the form
(V0 /V)² = 1+P/P0 (4-15)
According to this equation, if we plot the reciprocals of the squares of the relative
volumes against the corresponding total pressure ratios we should obtain a straight line
intersecting the zero pressure ordinate at the reference volume 1.00. The slope of the line
is determined by the magnitude of the internal pressure, P0. Fig.1(a) is a curve of this kind
for the element tin, based on Bridgman's experimental values.
Figure 1: Compression Patterns
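The linear test can be illustrated with a brief sketch (the internal pressure for tin, approximately 300 M kg/cm2 with factors 4-4-1, is taken from Table 14; small differences from the tabulated values reflect rounding):

# Equation 4-15: (V0/V)^2 = 1 + P/P0, so 1/V^2 plotted against P is a straight line
# as long as P0 is unchanged. The relative volumes below follow equation 4-8.
P0 = 300.0                                   # M kg/cm2, tin (Table 14)
for P in (0, 30, 50, 100):                   # external pressure, M kg/cm2
    v_rel = (P0 / (P0 + P)) ** 0.5           # equation 4-8
    print(P, round(v_rel, 3))                # 1.000, 0.953, 0.926, 0.866; cf. Sn in Table 15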

Where there is a transition to a higher set of compression factors within the experimental
range, and the magnitude of P0 changes, the volumes diverge from the original line and
follow a second straight line, the slope of which is determined by the new compression
factors. On preparing curves of this kind for the other elements investigated by
Bridgman, we find that about two-thirds of them actually do conform to a single straight
line up to the 30,000 kg/cm2 pressure limit of his earlier work. His studies of the less
compressible substances, such as the higher elements of the electropositive divisions,
were not carried beyond this level, but he measured the compression up to
100,000 kg/cm2 on many other elements, and most of them were found to undergo a
transition in which the effective internal pressure increases without any volume
discontinuity. The compression curve for such a substance consists of two straight line
segments connected by a smooth transition curve, as in Fig.1(b), which represents
Bridgman's values for silicon.
In addition to the changes of this type, commonly called second order transitions, some
solid substances undergo first order transitions in which there is a modification of the
crystal structure and a volume discontinuity at the transition point. The effective internal
pressure generally changes during a transition of this kind, and the resulting volumetric
pattern is similar to that of KCl, Fig.1(c). With the exception of some values which are
rather erratic and of questionable validity, all of Bridgman's results follow one of these
three patterns or some combination of them. The antimony curve, Fig.1(d), illustrates one
of the combination patterns. Here a second order transition between 30,000 and 40,000
kg/cm2 is followed by a first order transition at a higher pressure. The numerical values
corresponding to these curves are given in the tables that follow.
The experimental second order curves are smooth and regular, indicating that the
transition process takes place freely when the appropriate pressure is reached. The first
order transitions, on the other hand, show considerable irregularity, and the experimental
results suggest that in many substances the structural changes at the transition points are
subject to a variable amount of delay due to internal conditions in the solid aggregate. In
these substances the transition does not take place at a well-defined pressure, but
somewhere within a relatively broad transition zone, and the exact course of the transition
may vary considerably between one series of measurements and another. Furthermore,
there are many substances which appear to experience similar delays in achieving
volumetric equilibrium even where no transitions take place. The compression curves
suggest that a number of the reported transitions are actually volume adjustments which
merely reflect delayed response to the pressure applied earlier. For example, in the
barium curve based on Bridgman's results there are presumably two transitions, one
between 20,000 and 25,000 kg/cm2, and the other between 60,000 and 70,000 kg/cm2.
Yet the experimental volumes at 60,000 and 100,000 kg/cm2 are both very close to the
values calculated on the basis of a single straight line relation. It is quite probable,
therefore, that this element actually follows one linear relation at least up to the vicinity
of 100,000 kg/cm2.
The deviations from the theoretical curves that are found in the experimental volumes of
substances with relatively high melting points are generally within the experimental error
range, and those larger deviations that do make their appearance can, in most cases, be
explained on the foregoing basis. The compression curves for substances with low
melting points show systematic deviations from linearity at the lower pressures, but this
is a normal pattern of behavior resulting from the proximity of the change of state. As
will be brought out in detail in our examination of the liquid state, the physical state of
matter is basically a property of the individual atom or molecule. The state of the
aggregate merely reflects the state of the majority of its individual constituents.
Consequently, a solid aggregate at any temperature near the melting point contains a
specific proportion of liquid molecules. Since the volume of a liquid molecule differs
from that of a solid molecule, the volume of the aggregate is modified accordingly. The
amount of the volume deviation in each case can be calculated by methods that will be
described in the subsequent discussion of the liquid volume relations.
Table 15 compares the results of the application of equation 4-8 with Bridgman's
measurements on some of the elements that maintain the same internal pressure all the
way up to his pressure limit of 100,000 kg/cm2. In many cases he made several series of
measurements on the same element. Most of these results agree within about 0.003, and it
does not appear that listing all of the individual values in the tabulations would serve any
useful purpose. The values given in Table 15, and in the similar tables that follow, are
those obtained in experiments that were carried to the full 100,000 kg/cm2 pressure level.
Where the high pressure measurements were started at some elevated pressure, or where
the measurement interval was greater than usual, the gaps have been filled from the
results of other Bridgman experiments.

Table 15: Relative Volumes Under Compression


Pressure        Zn            Zr            In            Sn
(M kg/cm2)      4-4-1         4-6-1½        4-4-1         4-4-1
                Calc.  Obs.   Calc.  Obs.   Calc.  Obs.   Calc.  Obs.
0 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
5 .992 .992 .995 .995 .988 .988 .992 .991
10 .984 .984 .990 .989 .980 .980 .984 .982
15 .976 .977 .985 .983 .970 .967 .976 .975
20 .969 .969 .980 .978 .960 .955 .968 .966
25 .961 .964 .975 .973 .951 .948 .961 .960
30 .954 .954 .970 .969 .942 .936 .954 .951
35 .947 .965 .964 .933 .932 .947
40 .940 .939 .960 .960 .925 .919 .940 .936
50 .927 .925 .951 .946 .909 .903 .926 .923
60 .914 .912 .942 .937 .893 .888 .913 .909
70 .902 .900 .933 .929 .878 .874 .901 .897
80 .890 .889 .925 .922 .864 .860 .889 .886
90 .879 .878 .917 .916 .851 .847 .878 .875
100 .868 .868 .909 .910 .838 .835 .867 .864
Table 16 extends the volume comparisons to representative elements of the classes that
are subject to transitions within the experimental range of pressures. Transitions reported
by the investigator or indicated by the theoretical calculations are shown by horizontal
lines in the appropriate columns. In these tabulations the position of the upper branch of
each curve has been fixed by using the experimental volume at a selected pressure in the
straight line segment above the transition (identified by the symbol R) as a reference
point. Thus the slope of this upper branch of the curve is determined theoretically, but its
position relative to the 1/V² scale is empirical. Some work has been done toward
extension of the theoretical development to a determination of the exact position of the
upper section of each curve, but this project is not far enough advanced to justify any
discussion of it at this time.

Table 16: Relative Volumes Under Compression


Calc. Obs. Calc. Obs. Calc. Obs. Calc. Obs.
Pressure Al Si Ca Sb
(M kg/cm2) 4-5-1 4-4-1 4-3-1 4-4-1
4-8-1 4-8-1 4-4-1 4-4-1½
0 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
5 .993 .993 .996 .995 .970 .969 .988 .987
10 .987 .987 .991 .990 .943 .942 .977 .975
15 .981 .981 .987 .986 .917 .918 .966 .964
20 .974 .975 .982 .981 .895 .897 .955 .954
25 .968 .969 .978 .978 .878 .878 .945 .944
30 .964 .964 .974 .974 .862 .861 .935 .934
35 .847 .845 .925 .925
40 .957 .958 .966 .968 .832 .832 .916 .917
50 .949 .951 .960 .962 .805 R .805 .899 .899
60 .942 .944 .956 .957 .780 .780 .888 .886
70 .935 .937 .952 .952 .758 .748 .875 .875
80 .928 .929 .948 .948 .737 .732 .864 R .864
90 .922 .922 .944 .944 .718 .716 .815
100 .915 R .915 .940 R .940 .701 .702 .803
Ba La Pr U
4-2-1 4-4-1 4-4-1 4-8-1
4-3-1 4-8-1 4-4-1½ 4-8-2
0 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
5 .955 .955 .982 .981 .984 .983 .996 .955
10 .915 .914 .965 .963 .970 .967 .991 .990
15 .880 .879 .949 .947 .955 .953 .987 .986
20 .848 .841 .933 .933 .942 .940 .983 .981
25 .820 .814 .918 .917 .929 .927 .979 .978
30 .794 .789 .904 .905 .916 .915 .975 .973
35 .771 .770 .891 .893 .904 .904 .971 .971
40 .750 .747 .878 .881 .893 .893 .967 .966
50 .712 .712 .858 .863 .878 .878 .960 .960
60 .679 .682 .845 .846 .863 .863 .956 .955
70 .650 .639 .833 .832 .849 R .849 .952 .951
80 .625 .618 .821 .819 .835 .836 .949 .947
90 .603 .598 .809 .808 .822 .823 .945 .944
100 .582 .580 .798 R .798 .810 .811 .941 R .941
Compressibility patterns of compounds are theoretically identical with those of the
elements, and this theoretical conclusion is confirmed by compression data for a
representative group of inorganic compounds presented in Table 17.

Table 17: Relative Volumes Under Compression

Calc. Obs. Calc. Obs. Calc. Obs. Calc. Obs.


Pressure NaCl NaI KCl ZnS
(M kg/cm2) 4-2-1 4-2-1 4-2-1 4-4-1
4-2-1½ 4-2-1½ 4-2-1½ 4-4-1½
0 .994 1.000 .987 1.000 .994 1.000 .995 1.000
5 .979 .982 .964 .970 .973 .974 .991 .994
10 .964 .966 .942 .944 .953 .952 .986 .988
15 .950 .951 .922 .922 .934 .933 .982 .982
20 .937 R .937 .903R .902 .916 R .916 .977 R .977
.803 R .803 (KCl, following the first order transition)
25 .924 .924 .885 .886 .791 .789 .973 .972
30 .912 .912 .868 .871 .779 .778 .969 .967
35 .900 .901 .853 .858 .768 .768 .964 .963
40 .889 .892 .840 .840 .757 .758 .960 .961
50 .867 .865 .819 .816 .741 .742 .952 .954
60 .847 .848 .799 .795 .727 .723 .945 .947
70 .829 .832 .781 .777 .714 .710 .940 .940
80 .815 .817 .765 .761 .702 .698 .934 .934
90 .802 .803 .749 .747 .690 .688 .929 .929
100 .790 R .790 .734 R .734 .679 R .679 .924 R .924
            AgCl         CsBr         NH4Cl        KNO3
            4-3-1        4-3-1        4-2-1        4-3-1
                         4-4-1        4-4-1        4-3-2
0 1.000 1.000 .984 1.000 1.000 1.000 .894 1.000
5 .990 .989 .962 .971 .974 .973 .878 .882
10 .980 .979 .942 .947 .950 .951 .862 .862
15 .971 .969 .923 .925 .928 .933 .847 .846
20 .961 .960 .905 R .905 .910 .918 .833 .831
25 .952 .952 .888 .888 .900 .905 .820R .820
30 .944 .942 .871 .870 .889 .891 .807 .804
35 .935 .937 .856 .859 .879 .883
40 .927 .926 .842 .840 .869 .867 .783 .781
50 .911 .910 .815 .814 .851 .846 .761 .762
60 .895 .896 .790 .792 .833 .828 .744 .745
70 .881 .883 .777 .773 .817 .812 .733 .732
80 .867 .871 .760 .757 .801 .798 .723 .720
90 .854 .860 .743 .742 .787 .785 .712 .711
100 .841 .835 .728 R .728 .773 R .773 .703 R .703

As might be expected from their less uniform composition, transitions are somewhat more
common in the compounds, but otherwise there is no difference in the compression
curves. The curve for KCl, shown graphically in Fig. 1 and by numerical values in
Table17, is of special interest because it includes a sharp first order transition in which
there is a substantial decrease in the basic volume while the compression factors remain
unchanged. The magnitude of the volume reduction that takes place indicates that there is
a reorientation of the atomic rotations in which the neutral specific electric rotation 5 is
substituted for the normal rotation 4 as the effective relative value. The theoretical
volumes beyond the transition point, as shown in the table, are based on the small atomic
volume corresponding to the higher rotation. Up to 20,000 kg/cm2 the volume follows the
curve corresponding to compression factors 4-2-1 and S03 = 1.222, which produce an
internal pressure of 112.7 M kg/cm2. At the transition point the basic volume (S03) drops
to 0.976, increasing the internal pressure to 141.1 M kg/cm2. The compression then
continues on this basis up to the vicinity of 45,000 kg/cm2, where the compression factors
change from 4-2-1 to 4-3-1, and the internal pressure rises accordingly.
As in the compression of the elements, the theoretical calculations do not always confirm
the transitions reported by the experimenters. On the other hand, these calculations show
that a large proportion of the compounds, including six of the eight in Table 17, undergo
either a transition or some other process in which they eliminate a volume component in
the pressure range below 5000 kg/cm2. The effect on the compression curve is to cause the linear segment of the
curve to intersect the zero pressure ordinate at a volume below 1.000. The origin of these
volume adjustments is still uncertain. The occurrence of a number of observable first
order transitions at relatively low pressures suggests that some early second order
transitions may also be taking place. But it is also possible that voids in the structure may
be eliminated in the early stages of compression, or that there are geometrical
readjustments.
The structural characteristics of the organic compounds make them particularly
susceptible to such geometrical readjustments. Because of their low melting points, their
volumes under low pressure also include the additional component that exists near the
change of state. It appears, however, that in a wide range of compounds elimination of
these extra volume components is substantially complete at some pressure well below the
40,000 kg/cm2 level to which Bridgman's measurements on solid organic compounds
were carried. This means that there is a fairly wide pressure range in which these
compounds follow the normal compression pattern. The following comparison of
theoretical and observed volume ratios for benzene and some of its polynuclear
derivatives gives an indication of how the elimination of the excess volume progresses. A
measured ratio lower than the theoretical means that some of the excess volume is
eliminated in the pressure range for which the ratio is measured, and the amount of the
difference is an indication of the amount by which the normal loss of volume due to
compression is increased.
Benzene                                           Ratio 40/25
P (M kg/cm2)   Ratio Calc.   Ratio Obs.                          Calc.   Obs.
40/20          .938          .920                 Benzene        .954    .943
40/25          .954          .943                 Naphthalene    .954    .950
40/30          .970          .964                 Anthracene     .954    .953
40/35          .985          .984
As these figures indicate, benzene is just getting rid of the last of the excess volume at the
pressure limit of the experiments, and there is no linear section of the benzene
compression curve on which the slope can be measured for comparison with the
theoretical value. With increased molecular complexity, however, the linear section of the
curve lengthens, and for compounds with characteristics similar to those of anthracene
there is a 15,000 kg/cm2 interval in which the measured volumes should follow the
theoretical line.
Compounds of this nature have magnetic rotation 3-3 and electric rotation 4. The
effective value of S03 is therefore 0.812, and where the compression factors are 4-1½-1
the resulting internal pressure is 127.2 M kg/cm2. As shown in the values tabulated for
benzene, which were computed on the basis of this internal pressure, the ratio of the
volume at 40,000 kg/cm2 to that at 25,000 kg/cm2 should be 0.954 for all organic
compounds with characteristics (molecular complexity, melting point, compression
factors, etc.) similar to those of anthracene. Table 18 shows that this theoretical
conclusion is corroborated by Bridgman's measurements.
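The quoted figure follows directly from equation 4-8 (an illustrative sketch using the internal pressure of 127.2 M kg/cm2 derived above):

# Volume ratio between 40,000 and 25,000 kg/cm2 for the anthracene-like compounds.
P0 = 127.2                                   # M kg/cm2, from the text
ratio = ((P0 + 25.0) / (P0 + 40.0)) ** 0.5   # V(40)/V(25), equation 4-8
print(round(ratio, 3))                       # 0.954, the theoretical ratio of Table 18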

Table 18: Measured Volume Ratio - 40/25 M kg/cm²


(Theoretical ratio: .954)
Urea .954 p-Nitroiodobenzene .955
Nitrourea .956 o-Chlorobenzoic acid .954
Cyanamide .953 m-Chlorobenzoic acid .953
o-Xylene .956 p-Chlorobenzoic acid .954
p-Xylene .956 o-Bromobenzoic acid .954
Triphenyl methane .953 m-Bromobenzoic acid .954
o-Diphenyl benzene .954 p-Bromobenzoic acid .954
m-Diphenyl benzene .955 m-Iodobenzoic acid .955
p-Diphenyl benzene .955 p-Iodobenzoic acid .953
Chlorobenzene .954 p-Nitroaniline .954
o-Nitrochlorobenzene .956 o-Acetyl toluidine .954
o-Nitrobromobenzene .955 Tetrahydronaphthalene .953
p-Nitrobromobenzene .953 Anthracene .953
o-Nitroiodobenzene .953 Acenaphthene .955
At the time the theoretical values listed in the foregoing tables were originally calculated,
Bridgman's results constituted almost the whole of the experimental data then available in
the high pressure range, and his experimental limit at 100,000 kg/cm2 was the boundary
of the empirical knowledge of the effect of high pressure. In the meantime the
development of shock wave techniques by American and Russian investigators has
enabled measuring compressions at pressures up to several million atmospheres. With the
benefit of these new measurements we are now able to extend the correlation between
theory and experiment into the region of the maximum compression factors.
The nature of the response of the compression factors to the application of pressure has
already been explained, and the maximum factors for each group of elements have been
identified. However, the magnitude of the base volume (S03) also enters into the
determination of the internal pressure, and coincidentally with the increase in these
factors there is a trend toward a minimum base volume. In themselves, modifications of
the crystal structure play only a small part in the compressibility picture. Application of
sufficient pressure causes a solid to assume one of the crystal forms corresponding to the
closest packing of the atoms, the face-centered cubic or close-packed hexagonal for
isometric crystals, and the nearest equivalent structures if the crystals are anisometric. If
some different crystal form exists at zero pressure, the volume decrease due to the change
to one of the close-packed forms shows up as a percentage reduction in all subsequent
volumes, but the compressibility is not otherwise affected. However, a difference in
crystal structure often indicates a difference in the relative orientation of the atomic
rotations. Any such change in orientation alters the internal pressure, and consequently
has a significant effect on the compressibility.
Application of pressure tends to favor what may be called "regular" structures at the
expense of those structures that are able to exist only because of special conditions
applicable to the particular elements involved. This tendency is evident from the start of
the compression process, and is responsible for the large number of deviations from the
Chapter 2 values of the inter-atomic distances that are identified by asterisks in Table 14.
For example the five elements from chromium to nickel have a number of different inter-
atomic distances at low pressure, and are able to crystallize in alternate forms. In the
early stages of compression, however, all of these elements, except manganese, orient
themselves on the basis of the neutral relative rotation 10, and have an internal pressure
that reflects the corresponding value of S03, which is 0.603. At still higher pressures
vanadium shifts to the same relative rotation and joins the group. Manganese probably
does likewise, but empirical confirmation of this change is still lacking. Thus the range
of variation of the atomic arrangements is greatly reduced by external pressure. One of
the collateral effects is that the amount of uncertainty in the identification of the rotation
orientation, and the resulting base volume, is minimized.
Most of the elements that change to a lower base volume at the start of compression
maintain this new value of S03 throughout the remainder of the present range of the shock
wave experiments. Those that do not make this change in the early stages of compression
generally do so at some higher pressure. Only a few keep the same base volume up to the
shock wave pressure limit. Still fewer undergo a second transition to a lower base
volume. Thus the general pattern involves one reduction of the base volume in the
pressure range from zero external pressure up to the limit of the shock wave experiments.
This pattern is reflected in the twelve series of measurements that have been selected for
comparison with the theoretical values. Out of the twelve elements that are included, only
two, copper and chromium, have the same base volume in the shock wave range as at
zero pressure. Four continue with the values of S03 applicable to the early stages of
compression, the values listed in Table 14, and six change to a lower base volume
somewhere above Bridgman's pressure limit. The minimum base volumes, the
corresponding maximum compression factors, and the resulting internal pressures for
these elements are shown in Table19.

Table 19: Maximum Internal Pressures


c      a-b      S03     a-z-y    P0          c      a-b      S03     a-z-y    P0
V 10 4-3 0.603 4-8-2 1816 Ag 8-10 4-4 0.823 4-8-4 2661
Cr 10 4-3 0.603 4-8-3 2724 W 10 4-4½ 0.822 4-8-5 3330
Co 10 4-3 0.603 4-8-3 2724 Au 10 4-4½ 0.822 4-8-5 3330
Ni 10 4-3 0.603 4-8-3 2724 Tl 5-10 4-4½ 1.074 4-8-5 2549
Cu 8-10 4-3 0.652 4-8-3 2519 Pb 5-10 4-4½ 1.074 4-8-5 2549
Mo 10 4-4 0.764 4-8-4 2866 Th 5 4½-4½ 1.631 4-8-5 1678

Here again, as in the pressure range of the Bridgman experiments, the theoretical
development is not yet far enough advanced to enable specifying the exact locations of
the upper sections of the compression curves. Nor is it yet clear in all cases just how
many of the possible intermediate values of the compression factors are actually utilized
as the pressure increases. What we are able to do at the present rather early stage of the
development of the theory is to demonstrate that in this extreme high pressure range, as
well as at the lower pressures of the preceding tables, the volume varies inversely with
the square root of the total pressure, strictly in accordance with the theory. In this
connection it should be noted that the section of each compression curve that is based on
the maximum value of the internal pressure is long enough to make the square root
pattern clear and distinct.
Furthermore, we are able to show that the slope of the last section of the experimental
curve for each element is identical with the theoretical slope determined by the calculated
maximum values of the internal pressure, and that the slope of each of the intermediate
sections is in agreement with one of the possible intermediate values of that internal
pressure. An exact theoretical definition of the curves will have to wait for further
progress along the lines discussed earlier. In the meantime, the amount of theoretical
information already available will serve as a means of testing the validity of each set of
empirical results, and will also enable a reasonable amount of extrapolation of the
compression curves beyond the present limits of the shock wave technology.
Table 20 is a comparison of the theoretical volumes, based on an empirical reference
volume for each of the sections of the curves, as in the preceding tables, with the shock
wave results obtained at Los Alamos5 on the elements that were investigated over the
widest range of pressures. Unless there is an increase in the compression factors in the
vicinity of 100,000 atmospheres, the compression curves established on the basis of
Bridgman's measurements extend into the lower range of these shock wave experiments.
In these cases the theoretical volumes up to the first change in the compression factors
are calculated on the basis of the reference volume selected from the Bridgman data, and
no reference point is identified in this table.

Table 20: Shock Wave Compressions


P a-z-y Calc. Obs. a-z-y Calc. Obs a-z-y Calc. Obs.
W Au Mo
0.1 4-8-3 .972 .970 4-8-1½ .946 .953 4-8-2 .966 .966
.2 .946 .944 4-8-3 .911 .917 .936 .937
.3 .922 .921 .888R .888 .908 .912
.4 .900 .901 .867 .864 4-8-3 .885 .890
.5 4-8-4 .880 .882 .847 .843 .868 .870
.6 .865 .866 .828 .825 .851 .852
.7 .850 .851 .811 .810 .836 .836
.8 .836R .836 .794 .796 .822 .821
.9 .823 .824 4-8-5 .780 .783 .808 .807
1.0 .810 .812 .771 .772 .795R .795
1.1 .798 .800 .762R .762 .783 .783
1.2 4-8-5 .787 .790 .754 .752 .771 .772
1.3 .778 .780 .745 .743 4-8-4 .761 .762
1.4 .770 .771 .737 .735 .752 .752
1.5 .762R .762 .730 .728 .743R .743
1.6 .754 .754 .722 .720 .734 .734
1.7 .747 .746 .715 .714 .726 .726
1.8 .739 .738 .708 .708
1.9 .732 .731 .701 .702
2.0 .725 .725 .694 .696
2.1 .718 .718
Cr Pb V
0.1 4-8-1½ .955R .955 4-4-1½ .858 .865 4-8-1 .939 .945
.2 .924 .920 4-4-3 .796R .796 4-8-1½ .900 .902
.3 .895 .891 .753 .751 .867R .867
.4 .869 .867 .716 .718 .838 .838
.5 .845 .846 4-8-3 .691 .693 .811 .812
.6 .823 .827 .673R .673 .787 .790
.7 4-8-3 .805 .811 .656 .656 .765 .770
.8 .794 .797 .640 .642 4-8-2 .750 .753
.9 .783 .784 4-8-5 .628 .630 .736 .737
1.0 .772R .772 .619R .619 .723R .723
1.1 .762 .761 .610 .609 .710 .709
1.2 .752 .751 .602 .600 .698 .697
1.3 .742 .742 .594 .593 .687 .686
1.4 .733 .733 .586 .586
Co Ni Cu
0.1 4-8-1½ .953 .956 4-8-1½ .953 .954 4-8-1 .945 .940
.2 .921 .920 .921 .919 .898 .897
.3 .893 .890 .893 .889 4-8-1½ .865 .864
.4 .867 .865 .867 .865 .838 .836
.5 .843R .843 .843R .843 .814R .814
.6 .821 .823 .821 .825 .792 .794
.7 .801 .806 .801 .808 4-8-3 .772 .777
.8 .782 .791 4-8-3 .790 .794 .760 .762
.9 4-8-3 .769 .776 .779 .780 .749 .749
1.0 .759 .764 .768 .768 .738 .737
1.1 .749 .752 .758 .757 .728 .726
1.2 .739 .741 .748 .747 .718 .716
1.3 .730 .731 .739 .738 .708 .707
1.4 .721 .721 .730 .729 .699 .698
1.5 .712R .712 .721R .721 .690R .690
1.6 .704 .704
Ag Tl Th
0.1 4-8-1 .922 .929 4-4-3 .850 .853 4-8-1 .869 .870
.2 4-8-2 .879 .881 .787 .783 4-8-2 .792 .795
.3 .848 .845 .736R .736 .747 .744
.4 .820 .817 4-8-3 .702 .703 .710 .707
.5 .794R .794 .678R .678 .677R .677
.6 .771 .775 .656 .658 4-8-3 .652 .652
.7 4-8-4 .752 .759 .637 .642 .632R .632
.8 .741 .744 4-8-5 .623 .628 .613 .614
.9 .730 .731 .614 .616 .596 .599
1.0 .720R .720 .605R .605 4-8-5 .583 .585
1.1 .710 .710 .597 .596 .572 .573
1.2 .701 .700 .588 .587 .562 .562
1.3 .692 .692 .581 .580 .553 .553
1.4 .683 .684 .573 .573 .544 .544
1.5 .675 .677 .566 .567 .535 .535
1.6 .667 .670
A rather surprising feature of these comparisons is that the agreement between the shock
wave results and the theoretical volumes is as close as the agreement between Bridgman's
static values and the theory. It is true that this set of measurements was deliberately
selected for the comparison, and it represents the best results rather than the average, but
in any event the close correlation is a significant confirmation of the validity of both the
shock wave techniques and the theoretical relations.
The question that now arises is what course the compressibility follows beyond the
pressure range of this table. In some cases a transition to a smaller base volume appears
to be possible. Copper, for instance, may shift to the rotations of the preceding
electropositive elements at some pressure above that of the tabulation. Aside from such
special cases, the factors that determine the compressibility in the range below two
million atmospheres have reached their limits. At the present stage of the investigation,
however, the possibility that some new factor may enter into the picture at extreme
pressures cannot be excluded. A ―collapse‖ of the atomic structure of the kind envisioned
by the nuclear theory is, of course, impossible, but as matters now stand we are not in a
position to say that all aspects of the compressibility situation have been explored. It is
conceivable that there may be some, as yet unknown, capability of change in the atomic
motions that would increase the resistance to pressure beyond what now appears to be the
ultimate limit.
Some shock wave measurements have been made at still higher pressure levels, and these
should throw some light on the question. Unfortunately, however, the results are rather
ambiguous. Three of the elements included in these experiments, lead, tin, and bismuth,
follow the straight line established in Table 20 up to the maximum pressures of about
four million atmospheres. On the other hand, five elements on which measurements were
carried to maximums between three and five million atmospheres show substantially
lower compressions than a projection of the Table 20 curves would indicate. The
divergence in the case of gold, for example, is almost eight percent. But there are equally
great differences between the results of different experiments, notably in the case of iron.
Whether or not some new factor enters into the compression situation at pressures above
those of Table 20 will therefore have to be regarded as an open question.

Chapter 5
Heat
If an atom is subjected to an external force of a transient nature, such as that involved in
a violent contact, a motion is imparted to it. Where the magnitude of the force is great
enough the atom is ejected from the time region and the inter-atomic equilibrium is
destroyed. If the force is not sufficient to accomplish this ejection, the motion is turned
back at some intermediate point and it becomes a vibratory, or oscillating, motion.
Where two or more atoms are combined into a molecule, the molecule becomes the
thermal unit. The statements about atoms in the preceding paragraph are equally
applicable to these molecular units. In order to avoid continual repetition of the
expression "atoms and molecules," the references to thermal units in the discussion that
follows will be expressed in terms of molecules, except where we are dealing specifically
with substances such as aggregates of metallic elements, in which the thermal units are
definitely single atoms. Otherwise the individual atoms will be regarded, for purposes of
the discussion, as monatomic molecules.
The thermal motion is something quite different from the familiar vibratory motions of
our ordinary experience. In these vibrations that we encounter in everyday life, there is a
continuous shift from kinetic to potential energy, and vice versa, which results in a
periodic reversal of the direction of motion. In such a motion the point of equilibrium is
fixed, and is independent of the amplitude of the vibration. In the thermal situation,
however, any motion that is inward in the context of the fixed reference system is
coincident with the progression of the natural reference system, and it therefore has no
physical effect. Motion in the outward direction is physically effective. From the physical
standpoint, therefore, the thermal motion is a net outward motion that adds to the
gravitational motion (which is outward in the time region) and displaces the equilibrium
point in the outward direction.
In order to act in the manner described, coinciding with the progression of the natural
reference system during the inward phase of the thermal cycle and acting in conjunction
with gravitation in the outward phase, the thermal vibration must be a scalar motion. Here
again, as in the case of the vibratory motion of the photons, the only available motion
form is simple harmonic motion. The thermal oscillation is identical with the oscillation
of the photon except that its direction is collinear with the progression of the natural
reference system rather than perpendicular to it. However, the suppression of the physical
effects of the vibration during the half of the cycle in which the thermal motion is
coincident with the reference system progression gives this motion the physical
characteristics of an intermittent unidirectional motion, rather than those of an ordinary
vibration. Since the motion is outward during half of the total cycle, each natural unit of
thermal vibration has a net effective magnitude of one half unit.
Inasmuch as the thermal motion is a property of the individual molecule, not an aspect of
a relation between molecules, the factors that come into play at distances less than unity
do not apply here, and the direction of the thermal motion, in the context of a stationary
reference system is always outward. As indicated earlier, therefore, continued increase in
the magnitude of the thermal motion eventually results in destruction of the inter-atomic
force equilibrium and ejection of the molecule from the time region. It should be noted,
however, that the gravitational motion does not contribute to this result, as it changes
direction at the unit boundary. The escape cannot be accomplished until the magnitude of
the thermal motion is adequate to achieve this result unassisted.
When a molecule acquires a thermal motion it immediately begins transferring this
motion to its surroundings by means of one or more of several processes that will be
considered in detail at appropriate points later in this and the subsequent volumes.
Coincident with this outflow there is an inflow of thermal motion from the environment,
and, in the absence of an externally maintained unbalance, an equilibrium is ultimately
reached at a point where inflow and outflow are equal. Any two molecules or aggregates
that have established such an equilibrium with each other are said to be at the same
temperature.
In the universe of motion defined by the postulates of the Reciprocal System, speed and
energy have equal standing from the viewpoint of the universe as a whole. But on the low
speed side of the neutral axis, where all material phenomena are located, energy is the
quantity that exceeds unity. Equality of motion in the material sector is therefore
synonymous with equal energy. Thus a temperature equilibrium is a condition in which
inflow and outflow of energy are equal. Where the thermal energy of a molecule is fully
effective in transfer on contact with other units of matter, its temperature is directly
proportional to its total thermal energy content. Under these conditions,
E = kT (5-1)

In natural units the numerical coefficient k is eliminated, and the equation becomes

E = T (5-2)

Combining equation 5-2 with equation 4-3 we obtain the general gas equation, PV = T,
or, in conventional units, where R is the gas constant,

PV = RT (5-3)
These are the relations that prevail in the "ideal gas state." Elsewhere the relation
between temperature and energy depends on the characteristics of the transmission
process. Radiation originates three-dimensionally in the time region, and makes contact
one-dimensionally in the outside region. It is thus four-dimensional, while temperature is
only one-dimensional. We thus find that the energy of radiation is proportional to the
fourth power of the temperature.

Erad = kT⁴ (5-4)


This relation is confirmed observationally. The thermal motion originating inside unit
distance is likewise four-dimensional in the energy transmission process. However, this
motion is not transmitted directly into the outside region in the manner of radiation. The
transmission is a contact process, and is subject to the general inter-regional relation
previously explained. Instead of E = kT⁴, as in radiation, the thermal motion is E² = k'T⁴, or

E = kT² (5-5)
A modification of this relation results from the distribution of the thermal motion over
three dimensions of time, while the effective component in thermal interchange is only
one-dimensional. This is immaterial as long as the thermal motion is confined to a single
rotational unit, but the effective component of the thermal motion of magnetic rotational
displacement n is only 1/n³ of the total. We may therefore generalize equation 5-5 by
applying this factor. Substituting the usual term heat (symbol H) for the time region
thermal energy E, we then have

H = T²/n³ (5-6)
The general treatment of heat in conventional physical theory is empirically based, and is
not significantly affected by the new theoretical development. It will not be necessary,
therefore, to give this subject matter any attention in this present work, where we are
following a policy of not duplicating information that is available elsewhere, except to
the extent that reference to such information is required in order to avoid gaps in the
theoretical development. The thermal characteristics of individual substances, on the
other hand, have not been thoroughly investigated. Since they are of considerable
importance, both from the standpoint of practical application and because of the light that
they can shed on fundamental physical relationships, it is appropriate to include some
discussion of the status of these items in the universe of motion. One of the most
distinctive thermal properties of matter is the specific heat, the heat increment required to
produce a specific increase in temperature. This can be obtained by differentiating
equation 5-6.

dH/dT = 2T/n³ (5-7)
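The algebraic content of equations 5-6 and 5-7 can be restated in a few lines of code. This is only a numerical sketch in natural (dimensionless) units; the function names are ours and are not part of the development.

def heat(T, n):
    # Time region thermal energy, equation 5-6, in natural units
    return T**2 / n**3

def thermal_specific_heat(T, n):
    # Thermal specific heat dH/dT, equation 5-7, in natural units
    return 2 * T / n**3

# With a single rotational unit (n = 1) the thermal specific heat reaches
# 2 natural units at the unit temperature level; this is the quantity
# identified later in the chapter with 3R.
assert thermal_specific_heat(1, 1) == 2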


Inasmuch as heat is merely one form of energy it has the same natural unit as energy in
general, 1.4918 × 10⁻³ ergs. However, it is more commonly measured in terms of a special
heat energy unit, and for present purposes the natural unit of heat will be expressed as
3.5636 × 10⁻¹¹ gram-calories, the equivalent of the general energy unit.
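The conversion between the two units can be checked directly. The sketch below assumes the conventional mechanical equivalent of heat, 4.1868 × 10⁷ ergs per gram-calorie, a figure not stated in the text; it reproduces the quoted value to within rounding.

NATURAL_UNIT_ENERGY = 1.4918e-3      # ergs, natural unit of energy
ERGS_PER_GRAM_CALORIE = 4.1868e7     # assumed mechanical equivalent of heat

print(NATURAL_UNIT_ENERGY / ERGS_PER_GRAM_CALORIE)
# about 3.563e-11 gram-calories, compare 3.5636e-11 quoted above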
Strictly speaking, the quantity to which equation 5-7 applies is the specific heat at zero
pressure, but the pressures of ordinary experience are very low on a scale where unit
pressure is over fifteen million atmospheres, and the question as to whether the equation
holds good at all pressures, an issue that has not yet been investigated theoretically, is of
no immediate concern. We can take the equation as being applicable under any condition
of constant pressure that will be encountered in practice.
The natural unit of specific heat is one natural unit of heat per natural unit of temperature.
The magnitude of this unit can be computed in terms of previously established quantities,
but the result cannot be expressed in terms of conventional units because the
conventional temperature scales are based on the properties of water. The scales in
common use for scientific purposes are the Celsius or Centigrade, which takes the ice
point as zero, and the Kelvin, which employs the same units but measures from absolute
zero. All temperatures stated in this work are absolute temperatures, and they will
therefore be stated in terms of the Kelvin scale. For uniformity, the Kelvin notation (°K,
or simply K) will also be applied to temperature differences instead of the customary
Celsius notation (°C).
In order to establish the relation of the Kelvin scale to the natural system, it will be
necessary to use the actual measured value of some physical quantity, involving
temperature, just as we have previously used the Rydberg frequency, the speed of light,
and Avogadro's number to establish the relations between the natural and conventional
units of time, space, and mass. The most convenient empirical quantity for this purpose is
the gas constant. It will be apparent from the facts developed in the discussion of the
gaseous state in a subsequent volume of this series that the gas constant is the equivalent
of two-thirds of a natural unit of specific heat. We may therefore take the measured value
of this constant, 1.9869 calories, or 8.31696 × 10⁷ ergs, per gram mole per degree Kelvin,
as the basis for conversion from conventional to natural units. This quantity is commonly
represented by the symbol R, and this symbol will be employed in the conventional
manner in the following pages. It should be kept in mind that R = 2/3 natural unit. For
general purposes the specific heat will be expressed in terms of calories per gram mole
per degree Kelvin in order to enable making direct comparisons with empirical data
compiled on this basis, but it would be rather awkward to specify these units in every
instance, and for convenience only the numerical values will be given. The foregoing
units should be understood.
Dividing the gas constant by Avogadro's number, 6.02486 × 10²³ per g-mole, we obtain
the Boltzmann constant, the corresponding value on a single molecule basis: 1.38044 × 10⁻¹⁶
ergs/deg. As indicated earlier, this is two-thirds of the natural unit, and the natural unit of
specific heat is therefore 2.07066 × 10⁻¹⁶ ergs/deg. We then divide unit energy, 1.49175 ×
10⁻³ ergs, by this unit of specific heat, which gives us 7.20423 × 10¹² degrees Kelvin, the
natural unit of temperature in the region outside unit distance (that is, for the gaseous
state of matter).
We will also be interested in the unit temperature on the T³ basis, the temperature at
which the thermal motion reaches the time region boundary. The 3/4 power of 7.20423 ×
10¹² is 4.39735 × 10⁹. But the thermal motion is a motion of matter, and involves the 2/9
vibrational addition to the rotationally distributed linear motion of the atoms. This
reduces the effective temperature unit by the factor 1 + 2/9, the result being 3.5978 × 10⁹
degrees K.
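The chain of arithmetic in the last two paragraphs is easily verified; the following sketch simply repeats the calculation, using the figures quoted in the text.

R_ERG = 8.31696e7          # gas constant, ergs per g-mole per degree K
AVOGADRO = 6.02486e23      # molecules per g-mole
UNIT_ENERGY = 1.49175e-3   # natural unit of energy, ergs

boltzmann = R_ERG / AVOGADRO                 # 1.38044e-16 ergs/deg
unit_specific_heat = 1.5 * boltzmann         # R is 2/3 of the natural unit
gas_unit = UNIT_ENERGY / unit_specific_heat  # 7.20423e12 degrees K

t3_unit = gas_unit ** 0.75                   # 4.39735e9 degrees K
effective_unit = t3_unit / (1 + 2/9)         # 3.5978e9 degrees K
print(boltzmann, unit_specific_heat, gas_unit, effective_unit)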
On first consideration, this temperature unit may seem incredibly large, as it is far above
any observable temperature, and also much in excess of current estimates of the
temperatures in the interiors of the stars, which, according to our theoretical findings, can
be expected to approach the temperature unit. However, an indication of its validity can
be obtained by comparison with the unit of pressure, inasmuch as the temperature and
pressure are both relatively simple physical quantities with similar, but opposite, effects
on most physical properties, and should therefore have units of comparable magnitude.
The conventional units, the degree K and the gram per cubic centimeter, have been
derived from measurements of the properties of water, and are therefore approximately
the same size. Thus the ratio of natural to conventional units should be nearly the same in
temperature as in pressure. The value of the temperature unit just calculated, 3.5978 × 10⁹
degrees K, conforms to this theoretical requirement, as the natural unit of pressure
derived in Volume I is 5.386 × 10⁹ g/cm³.
Except insofar as it enters into the determination of the value of the gas constant, the
natural unit of temperature defined for the gaseous state plays no significant role in
terrestrial phenomena. Here the unit with which we are primarily concerned is that
applicable to the condensed states. Just as the gaseous unit is related to the maximum
temperature of the gaseous state, the lower unit is related to the maximum temperature of
the liquid state. This is the temperature level at which the unit molecule escapes from
the time region in one dimension of space. The motion in this low energy range takes
place in only one scalar dimension. We therefore reduce the three-dimensional unit,
3.5978 × 10⁹ K, to the one-dimensional basis, and divide it by 3 because of the restriction
to one dimension of space. The natural unit applicable to the condensed state is then
1/3 × (3.598 × 10⁹)¹/³ degrees K = 510.8° K.
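Numerically the step is simply a cube root and a division by 3; a one-line check (our arithmetic, using the rounded value from the preceding paragraphs) is:

effective_unit = 3.5978e9                    # degrees K, from above
print(effective_unit ** (1/3) / 3)           # approximately 510.8 degrees K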
The magnitude of this unit was evaluated empirically in the course of a study of liquid
volume carried out prior to the publication of The Structure of the Physical Universe in
1959. The value derived at that time was 510.2, and this value was used in a series of
articles on the liquid state that described the calculation of the numerical values of
various liquid properties, including volume, viscosity, surface tension, and the critical
constants. Both the 510.2 liquid unit and the gaseous unit were listed in the 1959
publication, but the value of the gaseous unit given there has subsequently been increased by a
factor of 2 as a result of a review of the original derivation.
Since the basic linear vibrations (photons) of the atom are rotated through all dimensions
they have active components in the dimensions of any thermal motion, whatever that
dimension may be, just as they have similar components parallel to the rotationally
distributed motions. As we found in our examination of the effect on the rotational
situation, this basic vibrational component amounts to 2/9 of the primary magnitude.
Because the thermal motion is in time (equivalent space) its scalar direction is not fixed
relative to that of the vibrational component. This vibrational component will therefore
either supplement or oppose the thermal specific heat. The net specific heat, the measured
value, is the algebraic sum of the two. This vibrational component does not change the
linear relation of the specific heat to the temperature, but it does alter the zero point, as
indicated in Fig.2.

Figure 2
In this diagram the line OB’ is the specific heat curve derived from equation 5-7,
assuming a constant value of n and a zero initial level. If the scalar direction of the
vibrational component is opposite to that of the thermal motion, the initial level is
positive; that is, a certain amount of heat must be supplied to neutralize the vibrational
energy before there is any rise in temperature. In this case the specific heat follows the
line AA’ parallel to OB’ above it. If the scalar direction of the vibrational component is
the same as that of the thermal motion, the initial level is negative, and the specific heat
follows the line CC’, likewise parallel to OB’ but below it. Here there is an effective
temperature due to the vibrational energy before any thermal motion takes place.
Although this initial component of the molecular motion is effective in determining the
temperature, its magnitude cannot be altered and it is therefore not transferable.
Consequently, even where the initial level is negative, there is no negative specific heat.
Where the sum of the negative initial level and the thermal component is negative, the
effective specific heat of the molecule is zero.
It should be noted in passing that the existence of this second, fixed, component of the
specific heat confirms the vibrational character of the basic constituent of the atomic
structure, the constituent that we have identified as a photon. The demonstration that
there is a negative initial level of the specific heat curve is a clear indication of the
validity of the theoretical identification of the basic unit in the atomic structure as a
vibratory motion.
Equation 5-7 can now be further generalized to include the specific heat contribution of
the basic vibration: the initial level, which we will represent by the symbol I. The net
specific heat, the value as measured, is then
dH/dT = 2T/n³ + I (5-8)
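Equation 5-8, together with the rule stated earlier that the net specific heat of a molecule cannot fall below zero, can be summarized in a short function. This is our own restatement in natural units; the default initial level, -4/9 natural unit, corresponds to the normal negative level of -2/3 R discussed below, 2 natural units of specific heat being equivalent to 3R.

def net_specific_heat(T, n, initial_level=-4.0/9.0):
    # Equation 5-8 in natural units: dH/dT = 2T/n**3 + I.
    # The result is clipped at zero, since there is no negative specific heat.
    return max(0.0, 2 * T / n**3 + initial_level)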
Where there is a choice between two possible states, as there is between the positive and
negative initial levels, the probability relations determine which of the alternatives will
prevail. Other things being equal, the condition of least net energy is the most probable,
and since the negative initial level requires less net energy for a given temperature than
the positive initial level, the thermal motion is based on the negative level at low
temperatures unless motion on this basis is inhibited by structural factors.
Addition of energy in the time region takes place by means of a decrease in the effective
time magnitude, and it involves eliminating successive time units from the vibration
period. The process is therefore discontinuous, but the number of effective time units
under ordinary conditions is so large that the relative effect of the elimination of one unit
is extremely small. Furthermore, observations of heat phenomena of the solid state do not
deal with single molecules but with aggregates of many molecules, and the measurements
are averages. For all practical purposes, therefore, we may consider that the specific heat
of a solid increases in continuous relation to the temperature, following the pattern
defined by equation 5-8.
As pointed out earlier in this chapter, the thermal motion cannot cross the time region
boundary until its magnitude is sufficient to overcome the progression of the natural
reference system without assistance from the gravitational motion; that is, it must attain
unit magnitude. The maximum thermal specific heat, the total increment above the initial
level, is the value that prevails at the point where the thermal motion reaches this unit
level. We can evaluate it by giving each of the terms T and n in equation 5-7 unit value,
and on this basis we find that it amounts to 2 natural units, or 3R. The normal initial level
is -2/9 of this 3R specific heat, or -2/3 R. The 3R total is then reached at a net
positive specific heat of 2¹/3 R.
Beyond this 3R thermal specific heat level, which corresponds to the regional boundary,
the thermal motion leaves the time region and undergoes a change which requires a
substantial input of thermal energy to maintain the same temperature, as will be explained
later. The condition of minimum energy, the most probable condition, is maintained by
avoiding this regional change by whatever means are available. One such expedient, the
only one available to molecules in which only one rotational unit is oscillating thermally,
is to change from a negative to a positive initial level. Where the initial level is +2/3 R
instead of -2/3 R, the net positive specific heat is 3²/3 R at the point where the thermal
specific heat reaches the 3R limit. The regional transition is not required until this
higher level is reached. The resulting specific heat curve is shown in Fig.3.
Inasmuch as the magnetic rotation is the basic rotation of the atom, the maximum number
of units that can vibrate thermally is ordinarily determined by the magnetic displacement.
Low melting points and certain structural factors impose some further restrictions, and
there are a few elements, and a large number of compounds that are confined to the
specific heat pattern of Fig.3, or some portion of it. Where the thermal motion extends to
the second magnetic rotational unit, to rotation two, we may say, using the same
terminology that was employed in the inter-atomic distance discussion, the Fig. 3 pattern
is followed up to the 2¹/3 R level. At that point the second rotational unit is activated. The
initial specific heat level for rotation two is subject to the same n³ factor as the thermal
specific heat, and it is therefore 1/n³ × 2/3 R = 1/12 R. This change in the negative initial
level raises the net positive specific heat corresponding to the thermal value 3R from
2.333 R to 2.917 R, and enables the thermal motion to continue on the basis of the
preferred negative initial level up to a considerably higher temperature.

Figure 3
When the rotation two curve reaches its end point at 2.917 R net positive specific heat, a
further reduction of the initial level by a transition to the rotation three basis, where the
higher rotation is available, raises the maximum to 2.975 R. Another similar transition
follows, if a fourth vibrating unit is possible. The following tabulation shows the specific
heat values corresponding to the initial and final levels of each curve. As indicated
earlier, the units applicable to the second column under each heading are calories per
gram mole per degree Kelvin.
Vibrating   Effective             Maximum Net Specific Heat
Units       Initial Level         (negative initial level)
1           -0.667 R    -1.3243   2.3333 R    4.6345
2           -0.0833 R   -0.1655   2.9167 R    5.7940
3           -0.0247 R   -0.0490   2.9753 R    5.9104
4           -0.0104 R   -0.0207   2.9896 R    5.9388
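These tabulated values follow directly from the -2/3 R initial level and the 1/n³ factor, as the brief check below shows. The arithmetic is ours; the calorie figures differ from the printed ones by a few thousandths, reflecting rounding in the source.

R = 1.9869   # gas constant, calories per gram mole per degree K
for n in (1, 2, 3, 4):
    initial = -(2.0/3.0) * R / n**3          # effective initial level
    maximum = 3 * R + initial                # maximum net specific heat
    print(n, round(initial, 4), round(maximum, 4))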
Ultimately the maximum net positive specific heat that is possible on the basis of a
negative initial level is attained. Here a transition to a positive initial level takes place,
and the curve continues on to the overall maximum. As a result of this mechanism of
successive transitions, each number of vibrating units has its own characteristic specific
heat curve. The curve for rotation one has already been presented in Fig.3. For
convenient reference we will call this a type two curve. The different type one curves,
those of two, three, and four vibrating units, are shown in Fig.4. As can be seen from
these diagrams, there is a gradual flattening and an increase in the ratio of temperature to
specific heat as the number of vibratory units increases. The actual temperature scale of
the curve applicable to any particular element or compound depends on the thermal
characteristics of the substance, but the relative temperature scale is determined by the
factors already considered, and the curves in Fig.4 have been drawn on this relative basis.
Figure 4
As indicated by equation 5-8, the slope of the rotation two segment of the specific heat
curve is only one-eighth of the slope of the rotation one segment. While this second
segment starts at a temperature corresponding to 2¹/3 R specific heat, rather than from
zero temperature, the fixed relation between the two slopes means that a projection of the
two-unit curve back to zero temperature always intersects the zero temperature ordinate
at the same point regardless of the actual temperature scale of the curve. The slopes of the
three-unit and four-unit curves are likewise specifically related to those of the earlier
curves, and each of these higher curves also has a fixed initial point. We will find this
feature very convenient in analyzing complex specific heat curves, as each experimental
curve can be broken down into a succession of straight lines intersecting the zero ordinate
at these fixed points, the numerical values of which are as follows:
Vibrating   Specific Heat at 0° K (projected)
Units
1           -0.6667 R   -1.3243
2            1.9583 R    3.8902
3            2.6327 R    5.2298
4            2.8308 R    5.6234
These values and the maximum net specific heats previously calculated for the successive
curves enable us to determine the relative temperatures of the various transition points. In
the rotation three curve, for example, the temperatures of the first and second transition
points are proportional to the differences between their respective specific heats and the
3.8902 initial level of the rotation two segment of the curve, as both of these points lie on
this line. The relative temperatures of any other pair of points located on the same straight
line section of any of the curves can be determined in a similar manner. By this means the
following relative temperatures have been calculated, based on the temperature of the
first transition point as unity.
Vibrating   Relative Temperature
Units       Transition Point   End Point
1           1.000              1.80
2           2.558              4.56
3           3.086              9.32
4           3.391              17.87
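Both of the tabulations above can be regenerated from the 1/n³ slope rule, as the following sketch shows. It is our own restatement of the procedure, working in units of R with the first transition temperature taken as unity; the end point column, which involves the positive initial levels, is not reproduced here, and the computed values agree with the printed ones to within the rounding of the published specific heats.

# maximum net specific heat (negative initial level) for n vibrating units
max_net = {n: 3 - 2.0 / (3 * n**3) for n in range(1, 5)}

t = {1: 1.0}               # relative temperature of the first transition point
intercept = {1: -2.0/3.0}  # one-unit segment starts at -2/3 R
for n in range(2, 5):
    slope = 3.0 / n**3                             # 1/n**3 of the one-unit slope
    intercept[n] = max_net[n-1] - slope * t[n-1]   # projected to zero temperature
    t[n] = (max_net[n] - intercept[n]) / slope     # next transition point

for n in range(1, 5):
    print(n, round(intercept[n], 4), round(t[n], 3))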
The curves of Figs.3 and 4 portray what may be called the "regular" specific heat patterns
of the elements. These are subject to modifications in certain cases. For instance, all of
the electronegative elements with displacements below 7 thus far studied substitute an
initial level of -0.66 for the normal -1.32. Another common deviation from the regular
pattern involves a change in the temperature scale of the curve at one of the transition
points, usually the first. For reasons that will be developed later, the change is normally
downward. Inasmuch as the initial level of each segment of the curve remains the same,
the change in the temperature scale results in an increase in the slope of the higher curve
segment. The actual intersection of the two curve segments involved then takes place at a
level above the normal transition point.
There are some deviations of a different nature in the upper portions of the curves where
the temperatures are approaching the melting points. These will not be given any
consideration at this time because they are connected with the transition to the liquid state
and can be more conveniently examined in connection with the discussion of liquid
properties.
As mentioned earlier, the quantity with which this and the next two chapters are primarily
concerned is the specific heat at zero external pressure. In Chapter 6 the calculated values
of this quantity will be compared with measured values of the specific heat at constant
pressure, as the difference between the specific heat at zero pressure and that at the
pressures of observation is negligible. Most conventional theory deals with the specific
heat at constant volume rather than at constant pressure, but our analysis indicates that
the measurement under constant pressure corresponds to the fundamental quantity.

CHAPTER 6

Specific Heat Patterns


Fig.5 is a specific heat curve derived from experimental data. The points shown in this
graph are the measured values of the specific heat of silver. The accompanying solid lines
are the segments of the theoretical four-unit curve of Fig.4 with the temperature scale
located empirically. While the curve defined by the plotted points has the same general
shape as the theoretical curve, it is quite different in appearance inasmuch as the sharp
angles of the theoretical curve have been replaced by smooth and gradual transitions.
The explanation of this difference lies in the manner in which the measurements are
made. As indicated by equation 5-8 and the curves in Figs.3 and 4, the specific heat of an
individual molecule can be represented by a succession of straight lines. Experimental
observations, however, are not made on single molecules, but on aggregates of
molecules, and the observed temperature of the aggregate is the average of many
different individual molecular temperatures, which are distributed about the average in
accordance with the probability relations. Midway between the transition points the
relation between temperature and specific heat for most of the individual molecules is
such that their specific heats lie on the same straight line in the diagram. The average
consequently lies on the same line, and coincides with the true molecular specific heat
corresponding to the average temperature. In the neighborhood of a transition point,
however, the molecules that are individually at the higher temperatures cannot continue
on the same line beyond the 3R limit, and must conform to a lower curve based on a
higher number of rotating units. This operates to reduce the specific heat of the aggregate
below the true molecular value for the prevailing average temperature.
In the silver curve, Fig.5, for example, the true atomic specific heat at 75º K is 4.69. This
would also be the average specific heat of the silver aggregate at that temperature if the
silver atoms were able to continue vibrating on the basis of one rotating unit up to the
point beyond which the probability distribution is negligible. But at a specific heat of 2¹/3
R (4.633) the vibration changes to the two-unit basis. Those atoms in the probability
distribution that have specific heats above this level cannot conform to the one-unit line
but must follow a line that rises at a much slower rate. The lower specific heat of these
atoms reduces the average specific heat of the aggregate, and causes the aggregate curve
to diverge more and more from the straight line relation as the proportion of atoms
reaching the transition point increases. The divergence reaches a maximum at the
transition temperature, after which the specific heat of the aggregate gradually
approaches the upper atomic curve. Because of this divergence of the measured
(aggregate) specific heats from the values applicable to the individual atoms the specific
heat of silver at 75º K is 4.10 instead of 4.69.

Figure 5: Specific Heat -Silver

A similar effect in the opposite direction can be seen at the lower end of the silver curve.
Here the specific heat of the aggregate (the average of the individual values) could stay
on the one-unit theoretical curve only if it were possible for the individual specific heats
to fall below zero. But there is no negative thermal energy, and the atoms which are
individually at temperatures below the point where the curve intersects the zero specific
heat level all have zero thermal energy and zero specific heat. Thus there is no negative
deviation from the average, and the positive deviation due to the presence of atoms with
individual temperatures above zero constitutes the specific heat of the aggregate. The
specific heat of a silver atom at 15º K is zero, but the measured specific heat of a silver
aggregate at an average temperature of 15º K is 0.163.
Evaluation of the deviations from the linear relationship in these transitional regions
involves the application of probability mathematics, the validity of which was assumed as
a part of the Second Fundamental Postulate of the Reciprocal System. For reasons
previously explained, a full treatment of the probability aspects of the phenomena now
under discussion is beyond the scope of this work, but a general consideration of the
situation will enable us to arrive at some qualitative conclusions which will be adequate
for present purposes.
At the present stage of development of probability theory there are a number of
probability functions in general use, each of which seems to have advantages for certain
applications. For the purpose of this work the appropriate function is one that expresses
the results of pure chance without modification by any other factor. Such a function is
strictly applicable only where the units involved are all exactly alike, the distribution is
perfectly random, the units are infinitely small, the variability is continuous, and the size
of the group is infinitely large. The ordinary classes of events around which most
present-day probability theory has been constructed, such as coin and dice experiments,
obviously fail to meet these requirements by a wide margin. Coins, for instance, are not
continuously variable with an infinite number of possible states. They have only two
states, heads and tails. This means that a major item of uncertainty has become almost a
certainty, and the shape of the probability distribution curve has been altered accordingly.
Strictly speaking, it is no longer a true probability curve, but a combination curve of
probability and knowledge.
The basic physical phenomena do conform closely to the requirements of a system in
which the laws of pure chance are valid. The units are nearly uniform, the distribution is
random, the variability is continuous, or nearly continuous, and the size of the group,
although not infinite, is extremely large. If any of the probability functions in general use
can qualify as representing pure chance the most likely prospect is the so-called “normal”
probability function, which can be expressed as
y = 1/√(2π) e^(-x²/2) (6-1)
Tables of this function and its integral to fifteen decimal places are available.6 It has been
found in the course of this work that sufficient accuracy for present purposes can be
attained by calculating probabilities on the basis of this expression, and it has therefore
been utilized in all of the probability applications discussed herein, without necessarily
assuming the absolute accuracy of this function in these applications, or denying the
existence of more accurate alternatives. For example, Maxwell’s asymmetric probability
distribution is presumably accurate in the applications for which it was devised (a point
that has not yet been examined in the context of the Reciprocal System), and it may also
apply to some of the phenomena discussed in this work. However, the results thus far
obtained, particularly in application to the liquid properties, favor the normal function. In
any event it is clear that if any error is introduced by utilizing the normal function it is not
large enough to be significant in this first general treatment of the subject matter.
On the foregoing basis, the distribution of molecules with different individual
temperatures takes the form of a probability function øt, where t is the deviation from the
average temperature. The contribution of the øt molecules at any specified temperature to
the deviation of the specific heat from the theoretical value corresponding to the average
temperature depends not only on the number of these molecules but also on the
magnitude of the specific heat deviation attributable to each molecule; that is, the
difference between the specific heat of the molecule and that of a molecule at the average
temperature of the aggregate. Since the specific heat segment from which the deviation
takes place is linear, this deviation is proportional to the temperature difference t, and
may be represented as kt. The total deviation due to the øt molecules at temperature t is
then ktøt, and the sum of all deviations in one direction (positive or negative) may be
obtained by integration.
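A schematic numerical version of this averaging process is sketched below. It is our own illustration, not a quantitative treatment from the text: the two line segments are those of the one-unit and two-unit curves expressed in units of R, and the assumed spread SIGMA of individual temperatures stands in for the probability unit, whose actual magnitude is discussed below as still uncertain.

import math

R = 1.9869          # gas constant, calories per gram mole per degree K
SIGMA = 0.15        # assumed spread of individual temperatures (illustrative only)

def molecular_cp(T):
    # Specific heat of a single molecule; T is relative to the first transition.
    one_unit = -2.0/3.0 * R + 3 * R * T
    if one_unit <= (7.0/3.0) * R:            # below the 2 1/3 R transition
        return max(0.0, one_unit)            # no negative specific heat
    return 1.9583 * R + (3.0/8.0) * R * T    # two-unit segment

def aggregate_cp(T_avg, steps=2001, width=6.0):
    # Average the molecular values over a normal distribution of temperatures.
    total = weight = 0.0
    for i in range(steps):
        x = -width + 2 * width * i / (steps - 1)
        w = math.exp(-x * x / 2)
        total += w * molecular_cp(T_avg + SIGMA * x)
        weight += w
    return total / weight

print(molecular_cp(1.0), aggregate_cp(1.0))    # aggregate falls below 2 1/3 R
print(molecular_cp(0.15), aggregate_cp(0.15))  # aggregate stays above zero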
It is quite evident that the deviations of the experimental specific heat curves from the
theoretical straight lines, both at the zero level and at the transition point, have the general
characteristics of the probability curves. However, the experimental values are not
accurate enough, particularly in the temperature range of the lower transition, to make it
worthwhile to attempt any quantitative correlations between the theoretical and
experimental results. Furthermore, there is still some theoretical uncertainty with respect
to the proper application of the probability function that prevents specifying the exact
location of the probability curve.
The uncertain element in the situation is the magnitude of the probability unit. Equation
6-1 is complete mathematically, but in order to apply it, or any of its derivatives, to any
physical situation it is necessary to ascertain the physical unit corresponding to the
mathematical unit. One pertinent question still lacking a definite answer is whether this
probability unit is the same for all substances. If so, the lower portion of the curve, when
reduced to a common temperature base, should be the same for all substances with the -
1.32 initial level. On this basis, the specific heat of the aggregate at the temperature T0,
where the theoretical curve intersects the zero axis, should be a constant. Actually, most
of the elements with the -1.32 initial level do have a measured specific heat in the
neighborhood of 0.20 at this point, but a few others show substantial deviations from this
value. It is not yet clear whether this is a result of variability in the probability unit, or
reflects inaccuracies in the experimental values.
Whether all of the curves with the same maximum deviation (0.20) are coincident below
T0 is likewise still somewhat uncertain. There is a greater spread in the observed specific
heats below 0.20 than can be ascribed to errors in measurement, but most of the scatter
can probably be explained as the result of lack of thermal equilibrium. At these low
temperatures it no doubt takes a long time to establish equilibrium, and even an accurate
measurement will not produce the correct result unless the aggregate is in thermal
equilibrium. It is significant that the specific heats of the common elements which have
been studied most extensively deviate only slightly from a smooth curve in this low
temperature region. Fig.6, which shows the measured values of the specific heats of six
of these elements on a temperature scale relative to T0, demonstrates this coincidence.
If the probability unit is the same for all, or most, of the elements, as these data suggest,
the deviation of the experimental curve from the theoretical curve for the single atom at
the first transition point, T1, should also have a constant value. Preliminary examination
of the curves of the elements that follow the regular pattern indicates that the values of
this deviation actually do lie within a range extending from about 0.55 to about 0.70.
Considerable additional work will be required before these curves can be defined
accurately enough to determine whether there is complete coincidence, but present
indications are that the deviation at T1 is, in fact, a constant for all of the regular elements,
and is in the neighborhood of three times the deviation at T0.

Figure 6: Specific Heat -Low Temperatures

With the benefit of the foregoing information as to the general nature of the deviations
from the theoretical curves of Chapter 5 due to the manner in which the measurements
are made, we are now prepared to examine the correlation between the theoretical curves
and the measured specific heats. In order to arrive at a complete definition of the specific
heat of a substance it is not only necessary to establish the shapes of the specific heat
curves, the objective at which most of the foregoing discussion is aimed, but also to
define the temperature scale of each curve. Although the theoretical conclusions with
respect to these two theoretical aspects of the specific heat situation, like all other
conclusions in this work, are derived by developing the consequences of the fundamental
postulates of the Reciprocal System of theory, they are necessarily reached by two
different lines of theoretical development. For this reason a more meaningful comparison
with the experimental data can be presented if we deal with these two aspects
independently. In this chapter, therefore, the experimental values will be compared
graphically with theoretical curves in which the temperature scales are empirical. Chapter
7 will complete the definition of the curves by deriving the relevant temperature
magnitudes.
The curves of Fig.7 are typical of those of most of the elements.7 As indicated in Fig.4,
the final straight line segment of each curve occupies the greater part of the temperature
range of the solid state in the case of the high melting point elements. The significant
features of the curves are therefore confined to the lower temperatures, and in order to
bring them out more clearly only the lower temperature range (up to 300º K) is shown in
the illustrations that follow. The remaining sections of the curves of Fig.7 are extensions
of the lines shown in the diagram, except in the case of tungsten, which undergoes a
transition to the four-unit status at about 325º K.
Figure 7: Specific Heat

Fig.8 is a similar group of specific heat curves for four of the electronegative elements
with the -0.66 initial level. Aside from this higher initial level these curves are identical
with those of Fig.7 when all are reduced to a common temperature scale. The transition to
the two-unit vibration takes place at 4.63 (2¹/3 R) regardless of the higher initial level.
This point will be given further consideration in Chapter 7. The upper portions of the lead
and antimony curves, which are not shown on the graph, are extensions of the lines in the
diagram. Arsenic and silicon have transitions at temperatures above 300º K.

Figure 8: Specific Heat


As noted in Chapter 5, there are a number of elements that undergo a modification of the
temperature scale at the first transition point. Two curves with the modified second
segment are shown in Fig.9.
These two curves actually apply to four elements, as the specific heat of lithium follows
the aluminum curve, while that of ruthenium coincides with the molybdenum curve.
Coincidence of the specific heat curves of different elements, as in the instances
mentioned, is not as uncommon as might be expected. The number of possible curve
patterns is quite limited, and, as we will see in the next chapter, where the nature of the
change in the temperature scale will be examined, the temperature factors are confined to
specific values mainly within a relatively narrow range.
Also included in Fig.9 is an example of a specific heat curve for an element which
undergoes an internal rearrangement that modifies the thermal pattern. The measurements
shown for samarium follow the regular pattern up to the vicinity of the first transition
point at 35º K. Some kind of a modification of the molecular structure is evidently
initiated at this point in lieu of, or in addition to, the normal transition to the two-unit
vibrational status. This absorbs a considerable quantity of heat, which manifests itself as
an addition to the measured specific heat over the next portion of the temperature range.
By about 175º K the readjustment is complete, and the specific heat returns to the normal
curve. Most of the other rare earth elements undergo similar readjustments at comparable
temperatures. Elsewhere, if changes of this kind take place at all, they almost always
occur at relatively high temperatures. The reason for this peculiarity of the rare earth
group is, as yet, unknown.

Figure 9: Specific Heat

All of the types of deviations from the regular pattern that have been discussed thus far
are found in the electronegative elements of the lower rotational groups. There is also an
additional source of variability in the specific heats of these elements, as their atoms can
combine with each other to form molecules. The result is a wide enough variety of
behavior to give almost every one of these elements a unique specific heat curve. Of
special interest are those cases in which the variation is accomplished by omitting
features of the regular pattern. The neon curve, for example, is a single straight line from
the -1.32 initial level to the melting point. The specific heat curve of a hydrogen molecule,
Fig.10, is likewise a single straight line, but hydrogen has no rotational specific heat
component at all, and this line therefore extends only from the negative initial level, -
1.32, to the specific heat of the positive initial level, +1.32, at which point melting takes
place.
The specific heats of binary compounds based on the normal orientation, simple
combinations of Division I and Division IV elements, follow the same pattern as those of
the electropositive elements. In these compounds each atom behaves as an individual
thermal unit just as it would in a homogeneous aggregate of like atoms. The molecular
specific heats of such compounds are twice as large as the values previously determined
for the elements, not because the specific heat per atom is any different, but because there
are two atoms in each formula molecule.

Figure 10: Specific Heat -Hydrogen

The curves for KCl and CaS, Fig.11, illustrate the specific heat pattern of this class of
compounds. Some binary compounds of other structural types conform to the same
regular pattern as in the curve for AgBr, also shown in Fig.11.

Figure 11
As in the elements, there is also a variation of this regular pattern in which certain
compounds of the electronegative elements have a higher initial level, but in compounds
such as ZnO and SnO this level is zero, rather than -0.66, as it is in the elements.
Some of the larger molecules similarly act thermally as associations of independent
atoms. CaF2 and FeS2 are typical. More often, however, two or more of the constituent
atoms of the molecule act as a single thermal unit. For example, both the KHF2 molecule,
which contains four atoms, and the CsClO4 molecule, which contains six, act thermally as
three units. In the subsequent discussion the term thermal group will be used to designate
any combination of atoms that acts as a single thermal unit. Where individual atoms
participate in thermal motion jointly with groups of atoms, the individual atoms will be
regarded as monatomic groups. On this basis we may say that there are three thermal
groups in each of the KHF2 and CsClO4 molecules.
The great majority of compounds not only form thermal groups but also alter the number
of groups in the molecule as the temperature varies. A common pattern is illustrated by
the chromium chlorides. CrCl2 acts as a single thermal group at very low temperatures;
CrCl3 as two. The initial specific heat levels are -1.32 and -2.64 respectively. There is a
gradual increase in the average number of thermal groups per molecule up to the first
transition point, at which temperature all atoms are acting independently. At the initial
point of the second segment of the curve this independent status is maintained, and above
the transition temperature the CrCl2 molecule acts as three thermal groups, while CrCl3
has four.
At the present stage of the investigation we can determine from theory the possible ways
in which a molecule can split up into thermal groups, but we are not yet able to specify
on theoretical grounds just which of these possibilities will prevail at any given
temperature, or where the transition from one to the other will take place. The theoretical
information thus far developed does, however, enable us to analyze the empirical data
and to establish the specific heat pattern of each substance; that is, to determine just how
it acts thermally. Aside from some cases, mainly involving very large molecules, where
the specific heat pattern is unusually complex, and in those instances where experimental
errors lead to erroneous interpretation, it is possible to identify the effective number of
thermal groups at the critical points of the curves. Once this information is available for
any substance, the definition of its specific heat curve is essentially complete, except for
the temperature scale, the determinants of which will be identified in Chapter 7. Where n
is the number of active thermal groups in a compound, the initial level is -1.32 n, the
initial point of the second segment of a Type 1 curve is 3.89 n, and the first transition
point is 4.63 n.
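Since this rule will be used repeatedly in the analyses that follow, it can be summarized as a small function. This is our own restatement; values are in calories per gram mole per degree Kelvin.

def type_1_levels(n_groups):
    # Critical specific heat levels for a compound acting as n thermal groups
    return {
        "initial level": -1.32 * n_groups,
        "second segment initial point": 3.89 * n_groups,
        "first transition point": 4.63 * n_groups,
    }

# Example: KHF2 and CsClO4 each act as three thermal groups.
print(type_1_levels(3))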
The tendency of the atoms of multi-atom molecules to form thermal groups is particularly
evident where the molecules contain radicals, because of the major differences in the
cohesive forces that are responsible for the existence of the radicals. The extent to which
the association into thermal groups is maintained naturally depends on the relative
strength of the cohesive and disruptive forces. Those radicals such as OH and CN in
which the bonds are very strong act as single thermal groups under all ordinary
conditions. Those with somewhat weaker bonding, such as CO3, SO4, and NO3, also act as single
units at the lower temperatures. Thus we find that at the initial points of both the first and
second segments of the specific heat curves there are two groups in MnCO3, three in
Na2CO3, four in KAl(SO4)2, five in Ca3(PO4)2, and so on. At higher temperatures,
however, radicals of this class split up into two or more thermal groups. Still weaker
radicals such as ClO4 constitute two thermal groups even at low temperatures.
It was mentioned in Volume I that the boundary line between radicals and groups of
independent atoms is rather indefinite. In general, the margin of bond strength required
for a structural radical is relatively large, and we find many groups commonly recognized
as radicals crystallizing in structures such as the CaTiO3 cube in which the radical, as
such, plays no part. The margin required in thermal motion is much smaller, particularly
at the lower temperatures, and there are many atomic groups that act thermally in the
same manner as the recognized radicals. In Li2CO3, for example, the two lithium atoms
act as a single thermal group, and the specific heat curve of this compound is similar to
that of MgCO3 rather than to that of Na2CO3.
Extension of the thermal motion by breaking some of the stronger bonds at the higher
temperatures gives rise to a variety of modifications of the specific heat curves. For
example, MoS2 has only two thermal groups in the lower range, but the S2 combination
breaks up as the temperature rises, and all atoms begin vibrating independently. VCl2
similarly goes from one group to three. Splitting of the radical accounts for a change from
two groups to three in SrCO3, from one to three in AgNO3, and from two to six in
(NH4)2SO4. All of these alterations take place at or prior to the first transitional point.
Other compounds make the first transition on the initial basis and break up into more
thermal groups later. In a common pattern, a radical that acts as one thermal group at the
low temperatures splits into two groups in the temperature range of the second segment
of the curve, just as the CO3 radical in SrCO3, PbCO3 and similar compounds does at a
lower level. There are a number of structures such as KMnO4 and KIO3, where this
increases the number of groups in the molecule from two to three. Pb3(PO4)2, in which
there are two radicals, goes from five to seven, and so on.
The effect of water of crystallization is variable, depending on the strength of the
cohesion. For example, BaCl2.2H2O acts as three thermal groups at the lower
temperatures, the water molecules being firmly bound to the atoms of the compound. As
the temperature increases these bonds give way, and the molecule begins vibrating on a
five-group basis. In Al2(SO4)3.6H2O and in
NH4Al(SO4)2.12H2O the bonds with the water molecules remain fixed through the entire
experimental range, up to about 300º K, and the thermal groups in these hydrates are five
and six respectively, just as in the corresponding anhydrous compounds.
An example of a drastic change in thermal behavior due to the disruption of inter-atomic
bonds by thermal forces is shown in Fig.12. The radical CrO3 in the compound AgCrO3 is
a single thermal group at very low temperatures. There is a gradual separation into two
groups in the temperature range up to the first transition point, and the change to the two-
unit vibration is made on the basis of a two-group radical. At about 150º K all four atoms
in the radical begin vibrating independently, and the molecule undergoes a transition
from the second segment of a three-group curve to the second segment of a five-group
curve. At about 250º K the compound makes the normal transition to three-unit vibration,
continuing as five thermal groups.
The compounds used as examples in the foregoing discussion were selected mainly on
the basis of the availability of experimental data within the significant temperature
ranges. For an accurate definition of the slope of each of the straight line segments of any
empirical curve it is necessary to have measurements in the temperature range where the
deviations due to the proximity of a transition point are negligible. The examples have
been taken from among those of the experimental results that satisfy this requirement.

Figure 12: Specific Heat -AgCrO3


In a theoretical treatment of specific heat such as that in this present work it is necessary
to deal with this quantity on a per molecule basis. For practical application, however, it is
more convenient to use the specific heat per unit of mass, and most of the collected data
are expressed in this manner. It should be noted that the effect of association into thermal
groups is to reduce the specific heat per unit of mass. For this reason, the specific heat of
most complex compounds is relatively low at low temperatures, and rises toward the
values applicable to individual atoms as increasing temperature breaks up the original
thermal groups.
The simplest organic compounds, those composed of only two or three structural units,
generally divide into no more than two thermal groups. Many of the somewhat larger
organic molecules, particularly among the ring structures and branched compounds,
follow the same rule. The specific heat relations of these compounds are similar to those
of the inorganic compounds, except that there are more organic compounds in which the
thermal motion is restricted to one rotational unit. These substances, the hydrocarbons
and some other compounds of the lower elements, undergo a transition to a positive
initial level on reaching their first (and only) transition point. The resulting specific heat
curve, the one illustrated in Fig.3, is not much more than a straight line with a bend in it.
A few compounds, including ethane and carbon monoxide, even omit the bend, and do
not make the transition to the positive initial level.
Further addition of structural units, such as CH2 groups, to the simple organic compounds
results in the activation of internal thermal groups, units that vibrate thermally within the
molecules. The general nature of the thermal motion of these internal groups is identical
with that of the thermal motion of the molecule as a whole. But the internal motion is
independent of the molecular thermal motion, and its scalar direction (inward or outward)
is independent of the scalar direction of the molecular motion. Outward internal motion is
thus coincident with outward molecular motion during only one quarter of the vibrational
cycle. Since the effective magnitude of the thermal motion, which determines the specific
heat, is the scalar sum of the internal and molecular components, each unit of internal
motion adds one-half unit of specific heat during half of the molecular cycle. It has no
thermal effect during the other half of the cycle when the molecule as a whole is moving
inward.
Because of the great diversity of the organic compounds the specific heat patterns occur
in a variety that is correspondingly large. The effect of internal motion in those of the
organic compounds in which it is present is well illustrated, however, by the specific
heats of the normal paraffins. The values of the initial levels and the specific heat at T1
for the compounds of this series in the range from C3(propane) to C16(hexadecane) are
listed in Table 21, together with the number of internal thermal units in the molecule of
each compound.

Table 21: Specific Heats -Paraffin Hydrocarbons


              Internal        Initial Levels        Specific Heat
              Thermal Units   1st        2nd        at T1
Propane       0               -2.64      2.64       9.27
Butane        0               -2.64      2.64       9.27
Pentane       2               -2.64      6.62       13.90
Hexane        3               -2.64      7.95       16.22
Heptane       4               -3.96      9.27       18.54
Octane        5               -3.96      10.59      20.86
Nonane        6               -5.30      11.92      23.18
Decane        7               -5.30      13.24      25.49
Hendecane     8               -5.30      14.57      27.81
Dodecane      9               -6.62      15.89      30.12
Tridecane     10              -6.62      17.22      32.44
Tetradecane   11              -6.62      18.54      34.76
Pentadecane   12              -6.62      19.86      37.08
Hexadecane    13              -7.95      21.19      39.39
Propane and butane have only the two molecular thermal groups corresponding to the
positive and negative ends of the molecules, and their specific heat at T1 is the normal
two-group value: 9.27. Beginning with two internal groups in pentane, each added CH2
structural group becomes an internal thermal unit, and adds 2.317 to the total specific
heat of the molecule at the transition point. The initial level of the first segment of the
specific heat curve is -2.64 (the two-group value) in the lower compounds, and changes
slowly, adding units of -1.32, as the length of the chain increases. The initial level of the
second segment is 2.64 in butane and propane. In the higher compounds, each of which
consists of n structural groups (CH2 and CH3), this second initial level is 1.324 n.
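The second-segment initial levels and the T1 specific heats of Table 21 can be regenerated from the rules just stated. The sketch below is our own restatement; it reproduces the tabulated figures to within about 0.01, the rounding of the table, and does not attempt the first-segment initial levels, which follow a less regular progression.

def paraffin(n_carbons):
    # Normal paraffin with n_carbons structural groups (CH3 and CH2), n_carbons >= 3.
    # Propane and butane have no internal thermal units; pentane has two,
    # and each further CH2 group adds one more.
    internal = n_carbons - 3 if n_carbons >= 5 else 0
    t1 = 2 * 4.6345 + internal * 2.317        # two molecular groups plus internal units
    second_initial = 1.3243 * n_carbons if n_carbons >= 5 else 2.64
    return internal, round(second_initial, 2), round(t1, 2)

for n_carbons in range(3, 17):
    print(n_carbons, paraffin(n_carbons))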
The values thus derived theoretically are all consistent with the experimental curves. In a
few cases the intersection of the two curve segments may not coincide with the calculated
specific heat of the transition point, but these deviations, if they are real, are small enough
to be explainable on the basis of changes in the temperature factors, the nature of which
will be one of the subjects of discussion in Chapter 7.
Branching of a hydrocarbon chain tightens the structure and tends to reduce the number
of internal thermal units. For example, octane has five internal thermal units, and a
specific heat of 20.86 at the transition point. But 2,2,4-trimethyl pentane, a branched
compound with the same composition, has no internal motion at all, and the T1 specific
heat of this compound is 9.27, identical with that of the C3 paraffin, propane. Ring
formation has a similar effect. Ethyl-benzene and the xylenes, which are also C8
compounds, have some internal motion, but their T1 specific heats are 11.59 (one internal
unit) and 13.90 (two internal units) respectively, well below the octane level. In Fig.13
the specific heat curves of hexane (straight chain) and benzene (ring), both C6
hydrocarbons, are contrasted.
The subject matter of this and the preceding five chapters consists of various aspects of
the volumetric and thermal relations of material substances. The study of these relations
was the principal avenue of approach to the clarification of basic physical processes that
ultimately led to the identification of the physical universe as a universe of motion, and
the determination of the nature of the fundamental features of that universe. These
relations were examined in great detail over a period of many years, during which
thousands of experimental results were analyzed and studied. Incorporation of the
accumulated mass of information into the theoretical structure was the first task
undertaken after the formulation of the postulates of the Reciprocal System of theory, and
it has therefore been possible to present a reasonably complete description of each of the
phenomena thus far discussed, including what we may call the small-scale effects.
Beginning with the next chapter, we will be dealing with subjects not covered in the
inductive phase of the theoretical development. In this second phase, the deductive
development, we are extending the application of the theory to all of the other major
fields of physical science, in order to demonstrate that it is, in fact, a general physical
theory. Obviously, where the area to be covered is so large, no individual investigator can
expect to carry the development into great detail. Consequently, some of the conclusions
expressed in the subsequent pages with respect to the small-scale features of the areas
covered are subject to a degree of uncertainty. In other cases it will be necessary to leave
the entire small-scale pattern for some future investigation.

Figure 13: Specific Heat

CHAPTER 7

Temperature Relations
AS explained in introducing the comparisons of the theoretical specific heats with
experimental results, the curves in Figs. 5 to 13 verify only the specific heat pattern, the
temperature scale of each curve being adjusted to the empirical results. In order to
complete the definition of the curves we will now turn our attention to the temperature
relations.
All of the distinctive properties of the different kinds of matter are determined by the
rotational displacements of the atoms of which these substances are composed, and by the
way in which the displacements enter into the various physical phenomena. As stated in
Volume I,
The behavior characteristics, or properties, of the elements are functions of their respective displacements.
Some are related to the total net effective displacement... some are related to the electric displacement,
others to the magnetic displacement, while still others follow a more complex pattern. For instance,
valence, or chemical combining power, is determined by either the electric displacement or one of the
magnetic displacements, while the inter-atomic distance is affected by both the electric and magnetic
displacement, but in different ways.

The great variety of physical phenomena, and the many different ways in which different
substances participate in these phenomena result from the extension of this “more
complex pattern” of behavior to a still greater degree of complexity. One of these more
complex patterns was examined in Chapter 4, where we found that the response of the
solid structure to compression is related to the cross–section against which the pressure is
exerted. The numerical magnitude involved in this relation is determined by the product
of the effective cross-sectional factors, together with the number of rotational units that
participate in the action, a magnitude that determines the force per unit of the cross–
section. Inasmuch as one of the dimensions of the cross–section may take either the
effective magnetic displacement, represented by the symbol b in the earlier discussion, or
the electric displacement, represented by the symbol c, two new symbols were introduced
for purposes of the compressibility chapter: the symbol z to represent the second
displacement entering into the cross–section (either b or c), and the symbol y to represent
the number of effective rotational units (related to the third of the displacements). The a–
b–c factors were thus represented in the form a–z–y.
The values of these factors relative to the positions of the elements in the periodic table
follow the same general pattern in application to specific heat as in compressibility, and
most of the individual values are either close to those applying to compressibility or
systematically related to those values. We will therefore retain the a–z–y symbols as a
means of emphasizing the similarity. But the nature of the thermal relations is quite
different from that of the relations that apply to compressibility. The temperature is not
related to a cross–section; it is determined by the total effective rotation. Consequently,
instead of the product, azy, of the effective rotational factors, the numerical magnitude
defining the temperature scale of the thermal relations is the scalar sum, a+z+y, of these
rotational values.
This kind of a quantity is quite foreign to conventional physics. The scalar aspect of
vectorial motion is recognized; that is, speed is distinguished from velocity. But orthodox
physical thought does not recognize the existence of motion that is inherently scalar. In
the universe of motion defined by the postulates of the Reciprocal System of theory, on
the other hand, all of the basic motions are inherently scalar. Vectorial motions can exist
only as additions to certain kinds of combinations of the basic scalar motions.
Scalar motion in one dimension, when seen in the context of a stationary spatial reference
system, has many properties in common with vectorial motion. This no doubt accounts
for the failure of previous investigators to recognize its existence. But when motion
extends into more than one dimension there are major differences in the way these two
types of motion present themselves (or do not present themselves) to observation. Any
number of separate vectorial motions of a point can be combined into a single resultant,
and the position of the point at any specified time can be represented in a spatial system
of reference. This is a necessary consequence of the fact that vectorial motion is motion
relative to that system of reference. But scalar motions cannot be combined vectorially.
The resultant of scalar motion in more than one dimension is a scalar sum, and it cannot
be identified with any one point in spatial coordinates. Such motion is therefore incapable
of representation in a spatial reference system of the conventional type. It does not
follow, however, that inability to represent this motion within the context of the severely
limited kind of reference system that we are accustomed to use means that such motion is
non–existent. To be sure, our direct perception of physical events is limited to those that
can be represented in this type of a reference system, but Nature is not under any
obligation to stay within the perceptive capabilities of the human race.
As pointed out in Chapter 3, Volume I, where the subject of reference systems was
discussed at length, there are many aspects of physical existence (that is, many motions,
combinations of motions, or relations between motions) that cannot be represented in any
single reference system. This is not, in itself, a new, or unorthodox conclusion. Most
modern physicists, including all of the leading theorists, have realized that they cannot
accommodate all of present–day physical knowledge within the limitations of fixed
spatial reference systems. But their response has been the drastic step of cutting loose
from physical reality, and building their fundamental theories in a shadow realm where
they are free from the constraints of the real world. Heisenberg states their position
explicitly. “The idea of an objective real world whose smallest parts exist objectively in
the same sense as stones and trees exist, independently of whether or not we observe
them ... is impossible,” 8 he says. In the strange half–world of modern physical theory the
only realities are mathematical symbols. Even the atom itself is “in a way only a
symbol,” 9 Heisenberg tells us. Nor is it required that symbols be logically related or
understandable. Nature, these front rank theorists contend, is inherently ambiguous and
subject to uncertainties of a fundamental and inescapable nature. “The world is not
intrinsically reasonable or understandable,” Bridgman explains, “It acquires these
properties in ever–increasing degree as we ascend from the realm of the very little to the
realm of everyday things.” 10
What the Reciprocal System of theory has done in this area is to show that once the true
status of the physical universe as a universe of motion is recognized, and the properties of
space and time are defined accordingly, there is no need for the retreat from reality, or for
the attempt to blame Nature for the prevailing inability to understand the basic relations.
The existence of phenomena not capable of representation in a spatial reference system is
a fact that we must come to terms with, but the contribution of the Reciprocal System has
been to show that the phenomena outside the scope of the conventional spatial reference
systems can be described and evaluated in terms of the same real entities that exist within
the reference system. The scalar sum of the magnitudes of motions in different
dimensions, the quantity that we will now use in analyzing the temperature relations, is
an item of this nature. It is just as real as any other physical quantity, and its components,
the motions in the individual dimensions, are motions of the same nature as those one–
dimensional scalar motions that are capable of representation in the spatial reference
systems, even though the scalar sum cannot be so represented in any manner accessible to
our direct perception.
In the theoretical minimum situation, where the effective thermal factors are 1–0–0, and
the scalar sum of these factors is one unit, the temperature of the initial negative level is
one unit out of the total of 128 that corresponds to the full 510.8 degrees temperature unit
of the condensed states. But since the thermal motion is effective in only one direction,
the ratio becomes 1/256, and the zero point temperature, T0, the temperature at which the
thermal motion counterbalances the negative initial level of vibration, is 1.995º K. For a
substance with thermal factors a, z, and y, and the normal 2/9 initial specific heat level,
we then have
T0 = 1.995 (a+z+y) degrees K (7-1)
This value completes the definition of the specific heat curves by defining the
temperature scales. It will be more convenient, however, to work with another of the
fixed points on the curves, the first transition point, T1. As this is the unit specific heat
level on the initial linear section of the curve, while T0 is 2/9 unit above the initial point,
the temperature of the first transition point is
T1 = 8.98 (a+z+y) degrees K (7-2)
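Equations 7-1 and 7-2 are simple enough to restate as a short computational sketch. The fragment below (an illustration added for this edition; the function names are ours) forms the scalar sum a+z+y and applies the two constants derived above, using the silver thermal factors 4-3-1 from Table 22 as an example.

    # Zero-point and first transition temperatures from the thermal factors.
    # 1.995 K is 1/256 of the full temperature unit; 8.98 = (9/2) * 1.995.
    def T0(a, z, y):
        return 1.995 * (a + z + y)   # degrees K, equation 7-1

    def T1(a, z, y):
        return 8.98 * (a + z + y)    # degrees K, equation 7-2

    print(T0(4, 3, 1))               # about 16 K for silver
    print(T1(4, 3, 1))               # about 71.8 K; Table 22 lists 72 (observed 72)
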
Thermal factors of the elements for which reliable specific heat patterns are available,
and the corresponding theoretical first transition temperatures (T1) are listed in Table 22,
together with the T1 values derived from curves of the type illustrated in Figs. 5 to 13, in
which the temperature scale is empirical. In effect, this is a comparison between
theoretical and experimental values of the temperature scales of the specific heat curves.
The experimental values are subject to some uncertainty, as they have been obtained by
inspection from graphs in which the linear portions of the curves were also drawn from
visual inspection. Greater accuracy could be attained by using more sophisticated
techniques, but the time and effort required for this refinement did not appear to be
justified for the purposes of this initial investigation of the subject.
The compressibility factors derived in Chapter 4, with a few values restated in different,
but equivalent, terms, are shown in the table for comparison with the corresponding
thermal factors. The principal determinants of the compressibility values, aside from the
effect of the pressure level itself (including the internal pressure), were found to be the
magnitude and sign (positive or negative) of the displacement in the electric dimension.
The rotational group to which the element belongs (determined by the magnetic
displacements) is much less significant. In the thermal situation the rotational group
becomes the dominant influence. The elements of Group 3B (magnetic displacements 3-
3), midway in the group order, generally have thermal factors close to the compression
values. In half of the 3B elements included in the table the deviation is no more than one
unit. But in each direction from this central group there is a systematic deviation from the
compressibility values, upward in the lower groups and downward in the higher groups.
Every element above number 42, molybdenum, that is included in the table, with one
exception, has thermal factors either equal to or less than the corresponding
compressibility factors. Every element below molybdenum, with three exceptions (two of
which are alkali metals), has thermal factors that are either equal to or greater than the
corresponding compressibility factors.
It was noted in Chapter 4 that in compression the lowest electropositive elements do not
take the minimum 1-1-1 factors of their electronegative counterparts, but have a = 4 in all
of the elements of this class investigated by Bridgman. The reason for this difference in
behavior is not yet known (although it is no doubt connected with the all–positive nature
of the rotational displacement of these elements), but it is even more pronounced in the
thermal factors. Except for the alkali metals above sodium, which, as noted above, have
thermal factors even lower than the compressibility values, the lower electropositive
elements not only maintain the 6-unit minimum
(4-1-1 or equivalent) but raise the effective magnitudes of their thermal factors still
farther by omitting the n = 1 section of the specific heat curve based on equation 5-6, and
going immediately to n = 2, which increases the temperature scale by a factor of 8. This
pattern is followed by boron and carbon, and in part, by beryllium. The corresponding
members of the next higher group, magnesium, aluminum, and silicon, also have n = 2
from the start of the thermal motion, but here the second unit is one–dimensional rather
than three–dimensional. Beryllium combines the two patterns. It has the same thermal
factors as lithium, but a dimensional multiplier halfway between those of lithium and
boron, the two adjoining elements.

Table 22: Effective Rotational Factors


        Factors              T1                      Factors              T1
Comp.   Therm.   n   Tot.   Calc.   Obs.      Comp.   Therm.   Tot.   Calc.   Obs.
Li 4-1-1 4-2-1 2 14 126 131 Y 4-2-1 4-3-1 8 72 71
4-1-1 2 12 108 110 Zr 4-8-1 4-4-1 9 81 84
Be 4-4-1 4-2-1 2 14 314 323 Mo 4-8-2 4-8-2 14 126 129
8 56 314 323 4-6-2 12 108 107
4-1-1 2 12 269 267 Ru 4-8-2 4-8-2 14 126 128
8 48 269 267 4-6-2 12 108 107
B 4-1-1 8 48 431 420 Rh 4-8-2 4-8-1 13 117 117
C-d 4-6-1 4-4-1 8 72 647 635 4-6-1 11 99 95
C-g 4-2-1 4-3-1 8 64 575 578 Pd 4-6-2 4-4-2 10 90 91
Na 4-1-1 4-1-1 6 54 52 4-4-1 9 81 78
Mg 4-4-1 4-1-1 2 12 108 109 Ag 4-4-2 4-3-1 8 72 72
3-1-1 2 10 90 91 Cd 4-4-1 2-2-1 5 45 46
Al 4-5-1 4-2-1 2 14 126 131 In 4-4-1 4-6-2 12 108 105
4-1-1 2 12 108 112 Sn 4-4-1 4-2-1 7 63 66
Si 4-4-1 4-6-2 2 24 216 220 4-1-1 6 54 57
P-r 4-6-2 2 24 216 207 Sb 4-4-1 4-3-1 8 72 68
P-w 4-2-1 7 63 66 Te 4-3-1 4-2-1 7 63 61
S 4-1-1 4-4-1 9 81 84 I 2-2-1 5 45 44
Cl 4-2-1 7 63 62 Xe 1-1-0 2 18 19
Ar 1-1-1 3 27 28 Cs 4-1-1 1-1-0 2 18 17
K 4-1-1 2-1-1 4 36 32 Ba 4-2-1 2-1-1 4 36 34
Ca 4-3-1 4-3-1 8 72 76 La 4-4-1 2-2-1 5 45 42
Sc 4-6-1 11 99 103 Pr 4-4-1 1-1-1 3 27 27
4-5-1 10 90 88 Nd 4-4-1 1-1-1 3 27 31
Ti 4-8-1 4-8-2 14 126 124 Sm 4-4-1 2-1-1 4 36 36
V 4-8-1 4-8-3 15 135 133 Eu 4-4-1 2-1-1 4 36 33
4-6-2 12 108 107 Gd 4-4-1 2-2-1 5 45 48
Cr 4-8-1 162 Tb 4-4-1 2-2-1 5 45 44
4-8-2 14 126 128 Dy 4-4-1 2-2-1 5 45 41
Mn 4-8-1 4-8-1 13 117 115 Ho 4-4-1 2-1-1 4 36 33
4-5-1 10 90 92 Er 4-4-1 1-1-1 3 27 28
Fe 4-8-1 4-8-4 16 144 142 Tm 4-4-1 1-1-1 3 27 29
4-6-2 12 108 108 Yb 4-2-1 2-1-1 4 36 37
Co 4-8-1 4-8-2 14 126 126 Hf 4-3-1 8 72 71
4-6-1 11 99 100 Ta 4-8-2 4-3-1 8 72 74
Ni 4-8-1 4-8-2 14 126 131 W 4-8-3 4-6-2 12 108 108
4-6-1 11 99 97 Re 4-4-2 10 90 93
Cu 4-6-1 4-6-2 12 108 108 4-4-1 9 81 78
Zn 4-4-1 4-3-1 8 72 73 Ir 4-8-3 4-6-1 11 99 98
Ga 2-1-1 4 36 36 4-5-1 10 90 88
Ge 4-4-1 4-8-1 13 117 119 Pt 4-8-2 4-3-1 8 72 76
As 4-4-1 4-6-2 12 108 106 Au 4-6-2 4-1-1 6 54 57
Se 4-1-1 4-3-1 8 72 75 Hg 2-1-1 4 36 32
Br 4-2-1 6 56 54 Tl 4-4-1 2-1-1 4 36 34
Kr 1-1-0 2 18 20 Pb 4-4-1 2-1-1 4 36 33
Rb 4-1-1 1-1-0 2 18 20 Bi 4-3-1 2-2-1 5 45 44
The option of one dimension or three dimensions is open whenever motion advances
from one unit to two units, but not under any other conditions. Three-dimensional motion
of one displacement unit is meaningless, as 1³ = 1. After two units there is no option, as
there cannot be more than two units in linear succession, for reasons that were discussed
in Volume I. But two-unit motion can be either one-dimensional or three-dimensional. At
the point where the advance from one to two units takes place, the motion is therefore
able to take the dimensions that are best suited to the existing situation. A one-
dimensional increase in the value of n results in increasing the temperature scale by a
factor of 2 rather than 8. The alkali metals, which diverge from the normal electropositive
behavior in a number of respects because of their low electric displacement, follow the
same pattern as the elements listed in the preceding paragraph, but one step lower, as
indicated in the following comparison:
Group    Alkalis    Other Positive
1B       n = 2      n = 8
2A       4-x-x      n = 2
2B       1-1-x      4-x-x
As we found in the specific heat investigation, the electronegative elements below
displacement 7 have a half-size initial negative specific heat level: 1/9 unit instead of the
normal 2/9. It might be expected that this would result in a net effective specific heat of
8/9 unit or 2 2/3 R, at the transition point instead of the 7/9 unit (2 1/3 R) that exists when
the initial negative level is 2/9 unit. But it is quite clear from the measured specific heat
values that this is not true. The first transition point in the specific heat curves of the
electronegative elements is 2 1/3 R just as it is in the curves with the 2/9 unit (2/3 R)
negative initial level. Apparently the restriction that prevents the existence of the more
negative initial level in the specific heat of these elements is gradually eliminated as the
temperature rises, so that at the transition point the effective negative component of the
specific heat is the normal 2/9 unit.
The thermal factors of the higher inert gases, krypton and xenon, which have no rotation
in the electric dimension, are 1-1-0 rather than 1-1-1, as in compressibility. This is a
peculiarity of the mathematics, and has no physical significance. In both cases the
meaning of the symbols is that the effective magnitude is determined entirely by the
factors a and z. In multiplication this requires a unit value in the y position, whereas in
addition a zero is required for the same purpose. But this equivalence of the 1-1-1
compressibility and 1-1-0 thermal factors does not mean that 1-1-1 thermal and 1-1-0
thermal are equivalent. The 1-1-1 thermal combination is the minimum for a substance
with effective rotational displacement in all three dimensions. Where the thermal factors
drop to 1-1-0, as indicated for rubidium and cesium, there is no effective displacement in
the electric dimension, and the thermal motion is following the inert gas pattern. Such
behavior is uncommon, but it is not without precedent in other properties. We found in
Chapter 1, for instance, that a number of elements, including the halogens, the elements
corresponding to the alkalis on the opposite side of the inert gases, have inter–atomic
distances in one or two dimensions that are similarly based on magnetic rotation only.
Since the empirical values listed in Table 22 are subject to a considerable degree of
uncertainty, small differences between them and the calculated values have no
significance. In some cases, however, the discrepancy is large enough to be real, and
further study of the thermal relations of these elements will be required. Only one of the
experimental values shown in the table, one of those applicable to chromium, is too far
from any theoretical temperature to be explainable on the basis of the theoretical
information now available.
As brought out in the discussion of the general pattern of the specific heat curves in
Chapter 5, in many substances there is a change in the temperature scale of the curve at
the first transition point (T1), as a result of which the first and second segments of the
curve do not intersect at the 2¹/3 R end point of the lower segment of the curve in the
normal manner. This change in scale is due to a transition to the second set of thermal
factors given, for the elements in which it occurs, in Table 22. With the benefit of the
information that we have developed regarding the factors that determine the temperature
scale we can now examine the quantitative aspects of these changes.
As an example, let us look at the specific heat curve of molybdenum, Fig.9, which, as
previously noted, also applies to ruthenium. The thermal factors applicable to these
elements at low temperatures are 4-8-2, identical with the compressibility factors. The
first transition point, specific heat 4.63, is reached at 126º K on the basis of these factors.
The corresponding empirical temperatures, determined by inspection of the trend of the
experimental values of the specific heats, are 129 for molybdenum and 128 for
ruthenium, well within the range of uncertainty of the techniques employed in estimating
the empirical values. If the thermal factors remained constant, as they do in the “regular”
pattern followed by such elements as silver, Fig.5, there should be a transition to n = 2 at
this 126º K temperature, and the specific heat above this point would follow the extension
of a line from the initial level of 3.89 to 4.63 at 126º K. But instead of continuing on the
4-8-2 basis, the thermal factors decrease to 4-6-2 at the transition point. These factors
correspond to a transition temperature of 108º K. The specific heat of the molecule
therefore undergoes an isothermal increase at 126º K to the extension of a line from the
initial level of 3.89 to 4.63 at 108º K, and follows this line at higher temperatures. The
effect of the isothermal increase in the specific heat of the individual molecules is, of
course, spread out over a substantial temperature range in application to a solid aggregate
by the distribution of molecular velocities.
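In terms of the illustrative sketch given earlier in this chapter, the two temperature scales involved in this transition come directly from equation 7-2; the numerical comments restate the figures quoted in the text.

    # Molybdenum/ruthenium: temperature scales below and above the transition.
    low_factors  = (4, 8, 2)               # factors governing the first segment
    high_factors = (4, 6, 2)               # factors governing the second segment

    t_low  = 8.98 * sum(low_factors)       # about 125.7 K; quoted as 126 K
    t_high = 8.98 * sum(high_factors)      # about 107.8 K; quoted as 108 K

    # At 126 K the individual molecule jumps isothermally to the extension of
    # the line from the 3.89 initial level to 4.63 at the rescaled 108 K point.
    print(t_low, t_high)
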
The temperature of the subsequent transition points and the end points of the various
segments of the specific heat curves can be calculated from the temperatures of the first
transition points by applying the relative values listed in Chapter 5 to the appropriate
values of T1. An approximate agreement between the empirical data and the higher
transition points thus calculated is indicated, but the angles at which the upper segments
of the curves intersect are too small to permit any close empirical definition of the
temperature of intersection. The only one of the end points that has any real significance
is the end point of the last segment of the curve applicable to the substance under
consideration. This is the temperature limit of the solid. Any further addition of heat
initiates the transition to the liquid state.
Inasmuch as it is the individual molecule that reaches its thermal limit at the solid end
point, it is the individual molecule that makes the transition to the liquid state. Physical
state is thus basically a property of the individual molecule rather than a property of the
aggregate, as seen in conventional physical theory. The state of the aggregate is merely a
reflection of the state of the majority of its constituents. Recognition of this fact some
forty years ago, in the early stages of the investigation that led to the results now being
reported, was a major step in the clarification of physical fundamentals that ultimately
opened the door to the formulation of a general physical theory.
The liquid state has long been an enigma to conventional physics. As expressed by V. F.
Weisskopf, “A liquid is a highly complex phenomenon in which the molecules stay
together yet move along each other. It is by no means obvious why such a strange object
should exist.” 11 Weisskopf goes on to speculate as to what the outcome would be if
physicists knew the fundamental principles on which atomic structure is based, as
present-day theory sees them, but “had never had occasion to see structures in nature.”
He doubts if these theorists would ever be able to predict the existence of liquids.
In the Reciprocal System of theory, on the other hand, the liquid state is a necessity, an
intermediate condition that must necessarily exist between the solid and gaseous states.
When the thermal motion of a molecule reaches equality with the inward progression of
the natural reference system in one dimension of the region outside unit distance, the
cohesive force in that dimension is eliminated. The molecule is then free to move in that
dimension, while it is held in a fixed position, or a fixed average position, in the other
dimensions by the cohesive forces that are still operative. The temperature at which the
freedom in one dimension is reached is the melting point of the aggregate, because any
additional thermal energy supplied to the aggregate is absorbed in changing the state of
additional molecules until the remaining content of solid molecules reaches the
percentage that can be accommodated within the liquid aggregate.
These remaining solid molecules are gradually converted to the liquid state in a
temperature range above the melting point. Thus the liquid aggregate in this range
contains a percentage of solid molecules, while the solid aggregate in a similar
temperature range below the melting point contains a percentage of liquid molecules. The
presence of these “foreign” molecules has a significant effect on the physical properties
of matter in both of these temperature ranges, an effect which, as we will see in the
subsequent discussion of the liquid state, can be evaluated accurately by using probability
relations to determine the exact proportions in which molecules of the two states exist at
each temperature.
While the end point of the solid state is the temperature at which the intermolecular
forces reach an equilibrium at the unit level, arrival at this end point does not mean
automatic entry into the liquid state. It merely means that the cohesive forces of the solid
are no longer operative in all three dimensions, and therefore do not prevent the free
movement in one dimension of space that is the distinguishing characteristic of the liquid
state. The significant point here is that a liquid molecule is limited to certain specific
temperatures. A liquid aggregate can take any temperature within the liquid range, but
only because the aggregate temperature is an average of a large number of the restricted
individual values.
This same restriction to one of a limited set of values also applies to the temperature of
the solid molecule, but in the vicinity of the melting point the solid is at a high time
region temperature level, where the proportionate change from one possible value, n
units, to the next, n + 1 units, is small. The motion of the liquid state, on the other hand,
is in the region outside unit space, and is equivalent to gas motion in the one dimension in
which the thermal energy exceeds the solid state limit. As we saw in Chapter 5,
temperatures in the vicinity of the melting point are very low on the scale applicable to
this outside region, and the proportionate change from n to n + 1 is large. The intervals
between the possible temperatures of liquid molecules are therefore large enough to be
significant.
Because of the limitation of the liquid temperatures to specific values, the temperature at
which a molecule qualifies as a liquid is not the end point temperature of the solid state,
but a higher value that includes the increment necessary to bring the end point
temperature up to the next available liquid level. This makes it impossible to calculate
melting points from solid state theory alone. Such calculations will have to wait until the
relevant liquid theory is developed in a subsequent volume in this series, or elsewhere.
But the temperature increment beyond the solid end point is small compared to the end
point temperature itself, and the end point is not much below the melting point. A few
comparisons of end point and melting point temperatures will therefore serve to confirm,
in a general way, the theoretical deductions as to the relation between these two
magnitudes.
There is a considerable degree of uncertainty in the experimental results at the high
temperatures reached by the melting points of many of the elements, and there are also
some theoretical aspects of the thermal situation in the vicinity of the melting point that
have not yet been fully explored. The examples for discussion in this initial approach to
the subject have been selected from among those in which these uncertain elements are at
a minimum. First, let us look at element number 19, potassium. This element has a
specific heat curve of the type identified by the notation n = 3 in Fig.4. Its thermal factors
are 2-1-1, and it maintains the same factors throughout the entire solid range. As
indicated in Chapter 5, the end point temperature of this type of curve is 9.32 times the
temperature of the first transition point. This leads to an end point temperature of 336º K.
The measured melting point is 337º K. In this case, then, the solid end point and the
melting point happen to coincide within the limits of accuracy of the investigation.
Chlorine, an element only two steps lower in the atomic series than potassium, but a
member of the next lower group, has the lower type of specific heat curve, with n = 2.
The end point temperature of this curve is 4.56 on the relative scale where the first
transition point is unity. The thermal factors that determine the transition point, and are
applicable to the first segment of the curve, are 4-2-1, but if these factors are applied to
the end point they lead to an impossibly high temperature. It is thus apparent that the
factors applicable to the second segment of the curve are lower than those applicable to
the first segment, in line with the previously noted tendency toward a decrease in the
thermal factors with increasing temperature. The indicated factors applicable to the end
point in this case are the same 2-1-1 combination that we found in potassium. They
correspond to an end point temperature of 164º K, just below the melting point at 170º K,
as the theory requires.
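The potassium and chlorine figures follow from equation 7-2 and the relative end-point values cited from Chapter 5 (9.32 for the n = 3 curve, 4.56 for the n = 2 curve). A continuation of the earlier illustrative sketch, with our own helper name, makes the arithmetic explicit; the small residual differences from the 336 K and 164 K figures quoted above are rounding effects.

    # Solid end point from the thermal factors and the relative end-point value.
    def end_point(factors, ratio):
        return 8.98 * sum(factors) * ratio

    print(end_point((2, 1, 1), 9.32))   # potassium: about 335 K; melting point 337 K
    print(end_point((2, 1, 1), 4.56))   # chlorine:  about 164 K; melting point 170 K
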
Next we look at two curves of the n = 4 type, the end point of which is at a relative
temperature of 17.87. On the basis of thermal factors 4-6-1, the absolute temperature of
the end point is
1765º K, which is consistent with the melting points of both cobalt (1768) and iron
(1808). Here, too, the indicated factors at the end point are lower than those applicable to
the first segment of the specific heat curve, but in this case there is independent evidence
of the decrease. Cobalt, which has the factors 4-8-2 in the first segment, is already down
to 4-6-1 at the second transition point, while iron, the initial factors of which are also 4-8-
2, has reached 4-6-2 at this point, with two more segments of the curve in which to make
the additional reduction.
Compounds of elements above group 1B, or having a significant content of such
elements, follow one or another of the Type 1 patterns that have been illustrated by
examples from the elements. The hydrocarbons and other compounds of the lower group
elements have specific heat curves of type 2 (Fig.3) in which the end point is at a relative
temperature of 1.80. As an example of this class we can take ethylene. The thermal factors
of these lower group compounds are limited to
1-1-1, 2-1-1, and the combination value 1¹/2-1-1. As we found in Volume I, however, the
two groups of atoms of which ethylene and similar compounds are composed are inside
one time region unit of distance. They therefore act jointly in thermal interchange rather
than acting independently in the manner of two inorganic radicals, such as those in
NH4NO3. Each group contributes to the thermal factors of the molecule, and the value
applicable to the molecule as a whole is the sum of the two components. Ethylene uses
the 1-1-1 and 1¹/2-1-1 combinations. A difference of this kind between the two halves of
an organic molecule is quite common, and no doubt reflects the lack of symmetry
between the positive and negative components that was the subject of comment in the
discussion of organic structure. The combined factors amount to a total of 6¹/2 units. This
corresponds to a transition point at 58º K, which agrees with the empirical curve, and an
end point at 104º K, coincident with the observed melting point.
The joint action of the two ends of an organic molecule that combines their thermal
factors in the temperature determination is maintained when additional structural units
are introduced between the end groups. As brought out in Chapter 6, such an extension of
the organic type of structure into chains or rings also results in the activation of additional
thermal motions of an independent nature within the molecules. The general nature of
this internal motion was explained in the previous discussion. The same considerations
apply to the transition point temperature, except that the internal motion is independent of
the molecular motion in vectorial direction as well as in scalar direction. It is therefore
distributed three–dimensionally, and the fraction in the direction of the molecular motion
is 1/8 rather than 1/2. Each unit of internal motion thus adds 1/8 of 8.98 degrees, or 1.12
degrees K to the transition point temperature. With the benefit of this information we are
now able to compute the temperatures corresponding to the specific heats of the paraffin
hydrocarbons of Table 21. These values are shown in Table 23.

Table 23: Temperatures of Critical Points – Paraffin Hydrocarbons


Thermal Factors

              Trans. Point         Total        End Point          Total


Propane 1-1-1 1-1-1 6 1-1-1 1-1-0 5
Butane 1-1-1 1-1-¹/2 5¹/2 2-1-1 1¹/2-1-1 7¹/2
Pentane 1¹/2-1-1 1¹/2-1-1 7 2-1-1 2-1-1 8
Hexane and above 2-1-1 1¹/2-1-1 7¹/2 2-1-1 2-1-1 8
Temperatures
             Internal            End Point Factors       End        Melting
             Units        T1     Internal      Total     Point      Point
Propane 0 54 0 5 81 85
Butane 0 50 1 8¹/2 137 138
Pentane 2 65 1 9 145 143
Hexane 3 71 3 11 178 179
Heptane 4 72 3 11 178 182
Octane 5 73 5 13 210 216
Nonane 6 74 5 13 210 220
Decane 7 75 7 15 242 243
Hendecane 8 76 7 15 242 247
Dodecane 9 77 8 16 259 263
Tridecane 10 79 8 16 259 268
Tetradecane 11 80 9 17 275 279
Pentadecane 12 81 9 17 275 283
Hexadecane 13 82 10 18 291 291

The first section of this table traces the gradual increase in the thermal factors as the
molecule makes the transition from a simple combination of two structural groups, with
properties that are similar to those of inorganic binary compounds, except for the joint
thermal action due to the short inter-group distance, to a long-chain organic structure.
The increase in the factors follows a fairly regular course in this range except in the case
of butane. If the experimental values of the specific heat of this compound are accurate,
its transition point factors drop back from the total of 6 that applies to propane to 5¹/2,
whereas they would be expected to advance to 6¹/2. The reason for this anomaly is
unknown. At the C6 compound, hexane, the transition to the long-chain status is
complete, and the thermal factors of the higher compounds as far as hexadecane (C16), the
limit of the present study, are the same as those of hexane.
In the second section of the table the transition point temperatures are calculated on the
basis of 8.98 degrees K per molecular thermal factor, as shown in the upper section of the
table, plus 1.12 degrees per effective unit of internal motion. The number of internal
motions shown in Column 1 for each compound is taken from Table 21.
Columns 3 and 4 are the values entering into the calculation of the solid end point,
Column 5. As the table indicates, some of the internal motions that exist in the molecule
at the transition temperature are inactive at the end point. However, the active internal
motion components are thermally equivalent to the molecular motions at this point, rather
than having only 1/8 of the molecular magnitude as they do at T1. This is a result of the
general principle that the state of least energy takes precedence (in a low energy
environment) in cases where alternatives exist. Below the transition point the internal
thermal motions are necessarily one-dimensional. Above T1 they are free to take either
the one-dimensional or three-dimensional status. The energy at any given temperature
above T1 is less on the three-dimensional basis. This transition therefore takes place as
soon as it can, which is at T1. At the melting point the energy requirement is greater after
the transition to the liquid state. Consequently, this transition does not take place until it
must because there is no alternative. A return to one-dimensional internal thermal motion
is an available alternative that will delay the transition. This motion therefore gradually
reverts back to the one-dimensional status, reducing the energy requirement, and the solid
end point is not reached until all effective thermal factors are at the 8.98 temperature
level. The end point temperature of Column 5 is then 8.98 x 1.80 = 16.164 times the total
number of thermal factors shown in Column 4.
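As a numerical check on this procedure, the hexane row of Table 23 can be recomputed from the factor totals in the upper section of the table, in the same illustrative style as the earlier sketches:

    # Hexane row of Table 23, recomputed from the stated factor totals.
    trans_factors = 7.5        # 2-1-1 plus 1 1/2-1-1 (upper section of Table 23)
    internal_at_T1 = 3         # internal units effective at T1 (from Table 21)
    end_factors = 8            # 2-1-1 plus 2-1-1
    internal_at_end = 3        # internal units still active at the solid end point

    T1 = 8.98 * trans_factors + 1.12 * internal_at_T1     # 70.7, listed as 71
    end_point = 16.164 * (end_factors + internal_at_end)  # 177.8, listed as 178
    print(round(T1), round(end_point))                    # melting point: 179 K
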
The calculated transition points are all in agreement with the empirical curves within the
margin of uncertainty in the location of these curves. As can be seen by comparing the
calculated solid end points with the melting points listed in the last column, the end point
values are also within the range of deviation that is theoretically explainable on the basis
of discrete values of the liquid temperatures. It is quite possible that there is some “fine
structure” involved in the thermal relations of solid matter that has not been covered in
this first systematic theoretical treatment of the subject. Aside from this possibility, it
should be clear from the contents of this and the two preceding chapters that the theory
derived by development of the consequences of the postulates of the Reciprocal System
is a correct representation of the general aspects of the thermal behavior of matter.

CHAPTER 8

Thermal Expansion
AS indicated earlier, addition of thermal motion displaces the inter-atomic equilibrium in the
outward direction. A direct effect of the motion is thus an expansion of the solid structure.
This direct and positive result is particularly interesting in view of the fact that previous
theories have always been rather vague as to why such an expansion occurs. These theories
visualize the thermal motion of a solid as an oscillation around an equilibrium position, but
they fail to shed much light on the question as to why that equilibrium position should be
displaced as the temperature rises. A typical “explanation” taken from a physics text says,
“Since the average amplitude of vibration of the molecules increases with temperature, it
seems reasonable that the average distance between the atoms should increase with
temperature.” But it is not at all obvious why this should be “reasonable.” As a general
proposition, an increase in the amplitude of a vibration does not, in itself, change the
position of equilibrium.
Many discussions of the subject purport to supply an explanation by stating that the thermal
motion is an anharmonic vibration. But this is not an explanation; it is merely a restatement
of the problem. What is needed is a reason why the addition of thermal energy produces
such an unusual result. This is what the Reciprocal System of theory supplies. According to
this theory, the thermal motion is not an oscillation around a fixed average position; it is a
simple harmonic motion in which the inward component is coincident with the progression
of the natural reference system, and therefore has no physical effect. The outward
component is physically effective, and displaces the atomic equilibrium in the outward
direction.
From the theoretical standpoint, thermal expansion is a relatively unexplored area of
physical science. Measurement of the expansion of different substances at various
temperatures is being pursued vigorously, and the volume of empirical data in this field is
increasing quite rapidly. However, the practical effect of the change in the coefficient of
expansion due to temperature variation is of little consequence, and for most purposes it can
be disregarded. As stated in the physics text from which the “explanation” of the expansion
was taken, “Accurate measurements do show a slight variation of the coefficient of
expansion with the temperature. We shall ignore such variations.” This lack of significant
practical application has limited the amount of theoretical attention that the subject has
heretofore received. But one of the principal objectives of this present work is to
demonstrate that the Reciprocal System is a general physical theory. However limited the
practical use of the thermal expansion information may be, we will want to show that this
expansion can be explained on the same basis as the other properties of matter, using the
same principles and relations that are applied to those other properties, with only such
modifications as are required by considerations peculiar to the expansion.
In general, the volumetric behavior of a solid in response to the application of heat is
analogous to that of a confined gas, the differences being limited to those items which
depend on whether the point of equilibrium between any two of the constituent atoms is
inside or outside unit distance. At constant pressure, the general gas equation (5-3), which
describes the relation between the principal properties of the ideal gas, reduces to
V = kT (8-1)

This is Charles’ Law. It tells us that at constant pressure the volume of an ideal gas (one that
is entirely free from time region forces) is directly proportional to the absolute temperature.
The relation E = PV (equation 4-3) is merely a restatement of the definition of pressure, in a
different form, and is therefore valid in the time region (inside unit distance) as well as in
the ideal gas state. Since E = kT² (equation 5-5) in the time region, it follows that in this
region
PV = kT² (8-2)
At constant pressure this reduces to
V = kT² (8-3)
In our consideration of volume changes in solid structures due to the addition of thermal
energy we will usually be interested mainly in the coefficient of thermal expansion, or
derivative of volume with respect to temperature. This is obtained by differentiating
equation 8-3.
dV/dT = 2kT (8-4)
Aside from the numerical constant k, this equation is identical with the specific heat
equation 5-7, where the value of n in that equation is unity. Thus there is a close association
between thermal expansion and specific heat up to the first transition temperature defined in
Chapter 5. For all of the elements on which sufficient data are available to enable locating
the transition point, this transition temperature is the same for thermal expansion as it is for
specific heat. Each element has a negative initial level of the expansion coefficient, the
magnitude of which has the same relation to the magnitude at the transition point as in
specific heat; that is, 2/9 in most cases, and 1/9 in some of the electronegative elements. It
follows that if the coefficient of expansion at the transition point is equated to 4.63 specific
heat, the first segment of the expansion curve is identical with the first segment of the
specific heat curve.
Beyond the transition point the thermal expansion curve follows a course quite different
from that of the specific heat, because of the difference in the nature of the two phenomena.
Since the term n³ is absent from the thermal expansion equation, the modification of the
expansion curve that takes place where motion of single units is succeeded by multi-unit
motion involves a change in the coefficient k. The expansion is related to the effective
energy (that is, to the temperature), irrespective of the relation between total energy and
effective energy that determines the specific heat above the first transition point. The
magnitude of the constant k that determines the slope of the upper segment of the expansion
curve is determined primarily by the temperature of the end point of the solid state.
For purposes of this present discussion, the solid end point will be regarded as coincident
with the melting point. As brought out in Chapter 7, this is, in fact, only an approximate
coincidence. But the present examination of thermal expansion is limited to its general
features. Evaluation of the exact quantitative relations will not be feasible until a more
intensive study of the situation is undertaken, and even then it will be difficult to verify the
theoretical results by comparison with empirical data because of the large uncertainties in
the experimental values. Even the most reliable measurements of thermal expansion are
subject to an estimated uncertainty of ±3 percent, and the best available values for some
elements are said to be good only to within ±20 percent. However, most of the
measurements on the more common elements are sufficiently accurate for present purposes,
as all that we are here undertaking to do is to show that the empirically determined
expansions agree, in general, with the theoretical pattern.
The total expansion from zero temperature to the solid end point is a fixed quantity, the
magnitude of which is determined by the limitation of the solid state thermal motion
(vibration) to the region within unit distance. At zero temperature the gravitational motion
(outward in the time region) is in equilibrium with the inward progression of the natural
reference system. The resulting volume is s0³, the initial molecular volume. At the solid end
point the thermal motion is also in equilibrium with the inward progression of the natural
reference system, as this is the point at which the thermal motion is able to cross the time
region boundary without assistance from gravitation. The thermal motion up to the end point
of the solid state thus adds a volume equal to the initial volume of the molecule. Because of
the dimensional situation, however, only a fraction of the added volume is effective in the
region in which it is measured; that is, outside unit space.
For an understanding of the dimensional relations that are involved it is necessary to realize
that all of the phenomena of the solid state take place inside unit space (distance), in what
we have called the time region. The properties of motion in this region were discussed in
detail at appropriate points in Volume I. This discussion will not be repeated here, but a
brief review of the general situation, with particular reference to the dimensions of motion
may be helpful. According to the fundamental postulates of the Reciprocal System, space
exists only in association with time as motion, and motion exists only in discrete units. From
this it follows that space and time likewise exist only in discrete units. Consequently, any
two atoms that are separated by one unit of space cannot move any closer together in space,
as this would require the existence of fractional units. These atoms may, however,
accomplish the equivalent of moving closer together in space by moving outward in time.
All motion in the time region, the region inside unit space, is motion of this kind: motion in
time (equivalent space) rather than motion in actual space.
The first unit of thermal motion is a one-dimensional motion in time. At the transition point,
T1, this motion has reached the full one-unit level. As already explained, only half of this
unit is physically effective. One fully effective unit is required for escape from the time
region, and the motion therefore enters a second time region unit. In this second unit a three-
dimensional distribution of the motion is possible. But the motion in time that takes place in
the time region has only a scalar connection with motion in the region outside unit space,
which is motion in space. This is equivalent to a one-dimensional contact. Thus only one
dimension of the three-dimensional time region motion is effective beyond the regional
boundary. The effective fraction of the motion is 1/8 of one unit, or 1/16 of the total two-unit
time region motion. The expansion is proportional to the effective component of the motion,
and this means that the volumetric expansion from zero temperature to the solid end point,
as measured in the region outside unit space, is also 1/16, or 0.0625 of the initial volume. On
a one-dimensional (linear) basis, this is 0.0205.
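The linear figure quoted here appears to be simply the cube-root equivalent of the 1/16 volumetric expansion; the short computation below, in the same illustrative style as before, gives 0.0204, which the text rounds to 0.0205.

    # Linear equivalent of the 1/16 volumetric expansion to the solid end point.
    volumetric = 1.0 / 16.0                               # 0.0625 of the initial volume
    linear = (1.0 + volumetric) ** (1.0 / 3.0) - 1.0
    print(volumetric, round(linear, 4))                   # 0.0625 and 0.0204
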
This is the relative expansion that would take place providing that no change in the
volumetric determinants of the substance occurs above the reference temperature (usually
room temperature). But such changes occur more often than not, and, as has been explained,
the volume changes accompanying an increase in temperature are normally in the direction
of increased volume. The total expansion is 0.0625 of the initial volume corresponding to
the volume at the solid end point. Where this theoretical initial volume is greater than the
reference volume projected to zero temperature, the expansion expressed relative to the
smaller volume is correspondingly increased. It follows that in most cases the linear
expansion, as measured, is somewhat above 0.0205, generally in the range from this value
up to about 0.028.
The increase in volume at the higher temperature, where it occurs, is generally due
to structural rearrangements. The changes take place either in the inter-atomic distance, by
reason of transitions from one of the types of orientation discussed in Chapter 1 to another,
or in the crystal structure, or both. The expansion is related to the inter-atomic distance (s0)
rather than to the geometrical volume, and it is independent of the geometrical arrangement,
but, as indicated in the preceding paragraph, a modification of the geometry does affect the
relation of the volume at the solid end point to the reference volume at zero temperature.
In the NaCl type of structure the edge of the unit cube is equal to the inter-atomic distance.
This cube contains one atom, and the ratio of the measured volume to what we may call the
three-dimensional space, the cube of the inter-atomic distance, is therefore unity. In the
body-centered cube the edge is 2/√3 times the inter-atomic distance. Since the unit cube of
this type contains two atoms, the ratio of volume to three-dimensional space is 0.770. The
one-dimensional space, the edge of a hypothetical cube containing one atom, is then 0.9165
for the body-centered cube and 1.00 for the NaCl type structure. Transitions from one type
of structure to the other modify the spatial relations accordingly. The values applicable to all
five of the principal isometric crystal structures of the elements are listed in the following
tabulation.
Face-centered cube 0.8909
Close-packed hexagonal 0.8909
Body-centered cube 0.9165
Simple (NaCl) cube 1.0000
Diamond (ZnS) cube 1.1547
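These one-dimensional factors follow from elementary crystal geometry. In the sketch below (our own illustration; the close-packed hexagonal arrangement is omitted because it is not cubic, but its packing, and hence its factor, is the same as that of the face-centered cube), each cell edge is expressed as a multiple of the inter-atomic distance, the volume per atom is compared with the cube of that distance, and the cube root of the ratio gives the tabulated value.

    # One-dimensional space factors for the principal isometric structures.
    structures = {
        "face-centered cube": (2 ** 0.5, 4),      # edge = sqrt(2)*s0, 4 atoms per cell
        "body-centered cube": (2 / 3 ** 0.5, 2),  # edge = 2/sqrt(3)*s0, 2 atoms per cell
        "simple (NaCl) cube": (1.0, 1),           # edge = s0, 1 atom per cell
        "diamond (ZnS) cube": (4 / 3 ** 0.5, 8),  # edge = 4/sqrt(3)*s0, 8 atoms per cell
    }
    for name, (edge, atoms) in structures.items():
        ratio = edge ** 3 / atoms                 # measured volume / s0^3
        print(name, round(ratio ** (1 / 3), 4))   # 0.8909, 0.9165, 1.0, 1.1547
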
The second segment of the thermal expansion curve has no negative initial level, because
there is a positive expansion (that of the first segment) into which the initial level can
extend. Like the transition from the liquid to the solid state, the transition from single units
of motion to multi-unit motion involves a change in the zero datum applicable to
temperature. The temperature T0, corresponding to the initial negative level, is eliminated,
and the temperature of the end point, T1, of the first segment of the curve, which is 9/2 T0 on
this segment, is reduced to 7/2 T0 on the second segment.
As brought out in Chapter 7, the minimum of the zero point temperature, T0, is equivalent to
one of the 128 dimensional units that correspond to one full temperature unit, 510.8 degrees
K. As the temperature rises, additional units of motion are activated, and the corresponding
value when all 128 units are fully effective is thus 7/2 x 510.8 = 1788 degrees K. Under the
same maximum conditions, the second unit of thermal motion, from T1 to the solid end
point, adds an equal magnitude. Thus the temperature of this theoretical full-scale solid end
point is 3576 degrees K. The total expansion coefficient at T1 on the first segment of the
expansion curve, and at the initial point of the second segment, is then 0.0205/3576.
However, this coefficient is subject to a 1/9 initial level. This makes the net effective
coefficient 8/9 x 0.0205/3576 = 5.2 x 10⁻⁶ per degree K.
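These figures can be restated numerically in the same illustrative style as the earlier sketches. The strict product 8/9 x 0.0205/3576 comes to about 5.1 x 10⁻⁶ per degree; the 5.2 x 10⁻⁶ figure used in the text presumably reflects rounding at some step of the original calculation.

    # Full-scale solid end point and the one-unit expansion coefficient.
    unit_temperature = 510.8                 # degrees K, one full temperature unit
    T1_full = 3.5 * unit_temperature         # about 1788 K with all 128 units active
    end_full = 2 * T1_full                   # about 3576 K, full-scale solid end point

    coefficient = (8 / 9) * 0.0205 / end_full
    print(round(end_full), coefficient)      # 3576 and about 5.1e-6 (text: 5.2e-6)
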
Where the end point temperature (which we are equating to the melting point, Tm, for
present purposes) is below 3576, the average coefficient of expansion is increased by the
ratio 3576/Tm, inasmuch as the total expansion up to the solid end point is a fixed
magnitude. If the first temperature unit, up to T1, were to take its full share of the expansion,
the coefficient at T1 on the first segment of the expansion curve, and at the initial point of the
second segment, would also be increased by the same ratio. But in the first unit range of
temperature the thermal motion takes place in one time region dimension only, and there is
no opportunity to increase the total expansion by extension into additional dimensions in the
manner that is possible when a second unit of motion is involved. (Additional dimensions do
not increase the effective magnitude of one unit, as
1ⁿ = 1.) The total expansion corresponding to the first unit of motion (speed) can be
increased by extension to additional rotational speed displacements, but this is possible only
in full units, and is limited to a total of four, the maximum magnetic displacement.
As an example, let us consider the element zirconium, which has a melting point of 2125º K.
The melting point ratio is 3576/2125 = 1.68. Inasmuch as this is less than two full units, the
expansion coefficient of zirconium remains at one unit (5.2 x 10⁻⁶) at the initial point of the
second segment of the curve, and the difference has to be made up by an increase in the rate
of expansion between this initial point and Tm; that is, by an increase in the slope of the
second section of the expansion curve. The expansion pattern of zirconium is shown
graphically in Fig.14.

Figure 14: Thermal Expansion


Now let us look at an element with a lower melting point. Titanium has a melting point of
1941º K. The ratio 3576/1941 is 1.84. This, again, is less than two full units. Titanium
therefore has the same one unit expansion coefficient at the initial level as the elements with
higher melting points. The melting point of palladium is only a little more than 100 degrees
below that of titanium, but this difference is just enough to put this element into the two unit
range. The ratio computed from the observed melting point, 1825º K, is 1.96, and is thus
slightly under the two unit level. But in this case the difference between the melting point
and the end point of the solid state, which we are disregarding for general application,
becomes important, as it is enough to raise the 1.96 ratio above 2.00. The expansion
coefficient of palladium at the initial point of the second segment of the curve is therefore
two units (10.3 x 10⁻⁶), and the expansion follows the pattern illustrated in the second curve
in Fig.14.
The effect of the difference between the solid end point and the melting point can also be
seen at the three unit level, as the melting point ratio of silver, 3576/1234 = 2.90, is raised
enough by this difference to put it over 3.00. Silver then has the three unit (15.5 x 10⁻⁶)
expansion coefficient at the upper initial point, as shown in the upper curve in Fig.14. At the
next unit level the element magnesium, with a ratio of 3.87, is similarly near the 4.00 mark,
but in this instance the end point increment is not sufficient to close the gap, and magnesium
stays on the three unit basis.
None of the elements for which sufficient data were available for comparison with the
theoretical curves has a melting point in the four unit range from 715 to 894º K. But since
the magnetic rotation is limited to four units, the four unit initial level also applies to the
elements with melting points below 715º K. This is illustrated in Fig.15 by the curve for
lead, melting point 601º K.
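The melting-point ratios discussed in the last few paragraphs can be collected in a few lines of the same illustrative kind of sketch. The 3576 K reference temperature is the one derived above; the unit counts simply restate the verbal argument, and the 923 K magnesium melting point is back-figured from the 3.87 ratio quoted in the text. The initial coefficient in each case is the unit count multiplied by the one-unit value (5.2 x 10⁻⁶, with 10.3, 15.5, and 20.7 x 10⁻⁶ quoted in the surrounding text for two, three, and four units).

    # Melting-point ratios and the unit counts assigned to the initial level of the
    # second segment of the expansion curve.
    FULL_SCALE = 3576              # theoretical full-scale solid end point, degrees K

    melting_points = {"Zr": 2125, "Ti": 1941, "Pd": 1825, "Ag": 1234, "Mg": 923, "Pb": 601}
    # Unit counts per the discussion above: Pd and Ag are carried past the next
    # whole number by the end point/melting point difference, and the low-melting
    # elements are capped at four units by the magnetic rotation limit.
    units = {"Zr": 1, "Ti": 1, "Pd": 2, "Ag": 3, "Mg": 3, "Pb": 4}

    for element, Tm in melting_points.items():
        print(element, round(FULL_SCALE / Tm, 2), units[element], "unit(s)")
    # Ratios: Zr 1.68, Ti 1.84, Pd 1.96, Ag 2.90, Mg 3.87, Pb 5.95.
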

Figure 15: Thermal Expansion


As can be seen in Fig.14, the expansion coefficient of silver, as measured experimentally,
deviates from the straight line relation in the vicinity of T1. This deviation is not due to
experimental error or to structural readjustments. It is a result of the nature of the transition
from the one unit expansion below T1 to the multi-unit expansion above this temperature.
Unlike the specific heat transition, where the increments represented by the second segment
of the specific heat curve add to the specific heat at T1, the expansion represented by the
second segment of the expansion curve replaces the expansion represented by the first
segment. The initial level of the second segment at zero temperature is the unit (or n-unit)
level reached at the end of the first segment.
This means that at T1 the molecule undergoes an isothermal expansion to the level of the
second segment at that temperature. In the aggregate the individual molecular expansions
are spread out over a temperature range by the distribution of molecular velocities, and they
appear as a bulge in the expansion curve. Coincidentally, there is a downward deviation in
the curve, similar to that in the experimental specific heat curves, due to the effect of the
transition to the more nearly horizontal second segment of the curve. The net effect of these
two types of deviation from the theoretical curve applying to the single molecule depends on
their relative magnitude, and on the temperature range over which the deviations are
distributed. The curves of Fig.14 have been selected from among those in which the net
deviation is at a minimum, in order to minimize uncertainties in the definition of the upper
sections of the curves, and to make it clear that these linear sections actually terminate at the
calculated initial levels. More commonly, the bulge is quite pronounced, as in the curves for
gold and lead, Fig.15.
When the effect of this systematic deviation from the linear relation in the vicinity of the
transition point is taken into consideration, all of the electropositive elements included in the
compilation of expansion data utilized in the investigation,12 except the rare earth elements,
have expansion curves that follow the theoretical pattern within the range of accuracy of the
experimental results. Most of the rare earths have the one unit expansion coefficient (5.2 ×
10⁻⁶) at the initial level of the second segment of the curve, although their melting points are
in the range where coefficients of two, or in some cases three, units would be normal. The
reason for this, the only deviation from the general pattern in the expansion curves of these
elements, is as yet unknown, but it is no doubt connected with the other peculiarities of the
rare earth elements that were noted earlier.
The electronegative elements of Division III follow the regular pattern. The lowest melting
point in this group is that of mercury, 234º K, well below the lowest value for any of the
electropositive elements investigated, but this descent to a lower melting point does not
introduce any new behavior. The upper segment of the expansion curve for mercury, defined
by the empirical data in Fig.15, definitely terminates at the four unit level (20.7 × 10⁻⁶), as
required by the theory. Thus the theoretical relations are applicable to the full temperature
range of the first three divisions.
As noted earlier, the borderline elements of Division IV, those with negative electric
displacement 4, are capable of acting as members of either Division III or Division IV. The
expansion curve for lead, Fig.15, follows the normal Division III pattern. The lower
borderline elements, tin and germanium, have curves in which the initial levels, like those of
the rare earths, are lower than the values corresponding to the melting points. Otherwise,
these curves are also normal. Very little is known about the expansion of the elements of
negative displacement below 4. The theoretical development has not yet been extended to a
consideration of the effect of the strongly electronegative character of these elements on the
volume relations, and the empirical data are both meager and conflicting.
This Division IV situation is part of the general problem of anisotropic expansion, a subject
to which the Reciprocal System of theory has not yet been applied. The measurements
previously cited that apply to anisotropic crystals were made on polycrystalline material in
which the expansion in different directions is averaged as a result of the random orientation
in the aggregate. Both this issue of anisotropic expansion and the application of the thermal
expansion theory to compounds and alloys are still on the waiting list for future
investigation. There is no reason to believe that such an investigation will encounter any
serious difficulties, but for the present other matters are being given the priority.

CHAPTER 9

Electric Currents
Another set of properties of matter that we will want to consider results from the
interaction between matter and one of the sub-atomic particles, the electron. As pointed
out in Volume I, the electron, M 0-0-(1), in the notation used in this work, is a unique
particle. It is the only particle constructed on the material rotational base, M 0-0-0,
(negative vibration and positive rotation) that has an effective negative rotational
displacement. More than one unit of negative rotation would exceed the one positive
rotational unit of the rotational base, and would result in a negative value of the total
rotation. Such a rotation of the basic negative vibration would be unstable in the material
environment, for reasons that were explained in the previous discussion. But in the
electron the net total rotation is positive, even though it involves one positive and one
negative unit, as the positive unit is two-dimensional while the negative unit is one-
dimensional.
Furthermore, the independent one-dimensional nature of the rotation of the electron and
its positive counterpart, the positron, leads to another unique effect. As we found in our
analysis of the rotations that are possible for the basic vibrating unit, the primary rotation
of atoms and particles is two-dimensional. The simplest primary rotation has a one-unit
magnetic (two-dimensional) displacement, a unit deviation from unit speed, the condition
of rest in the physical universe. The electric (one-dimensional) rotation, we found, is not
a primary rotation, but merely one that modifies a previously existing two-dimensional
rotation. Addition of the one-unit space displacement of the electron rotation to an
existing effective two-dimensional rotation increases the total scalar speed of that
rotation. But the one-dimensional rotation of the independent electron does not modify an
effective speed; it modifies unit speed, which is zero from the effective standpoint. The
speed displacement of the independent electron, its only effective component, therefore
modifies only the effective space, not the speed.
Thus the electron is essentially nothing more than a rotating unit of space. This is a
concept that is rather difficult for most of us when it is first encountered, because it
conflicts with the idea of the nature of space that we have gained from a long-continued,
but uncritical, examination of our surroundings. However, the history of science is full of
instances where it has been found necessary to recognize that a familiar, and apparently
unique, phenomenon is merely one member of a general class, all members of which
have the same physical significance. Energy is a good example. To the investigators who
were laying the foundation of modern science in the Middle Ages the property that
moving bodies possess by reason of their motion–"impetus" to those investigators;
"kinetic energy" to us–was something of a unique nature. The idea that a motionless stick
of wood contained the equivalent of this ―impetus‖ because of its chemical composition
was as foreign to them as the concept of a rotating unit of space is to most individuals
today. But the discovery that kinetic energy is only one form of energy in general opened
the door to a major advance in physical understanding. Similarly, the finding that the
"space" of our ordinary experience, extension space, as we are calling it in this work, is
merely one manifestation of space in general opens the door to an understanding of many
aspects of the physical universe, including the phenomena connected with the movement
of electrons in matter.
In the universe of motion, the universe whose details we are developing in this work, and
whose identity with the observed physical universe we are demonstrating as we go along,
space enters into physical phenomena only as a component of motion, and the specific
nature of that space is, for most purposes, irrelevant, just as the particular kind of energy
that enters into a physical process usually has no relevance to the outcome of the process.
The status of the electron as a rotating unit of space therefore gives it a very special role
in the physical activity of the universe. It should be noted at this time that the electron
that we are now discussing carries no charge. It is a combination of two motions, a basic
vibration and a rotation of the vibrating unit. As we will see later, an electric charge is an
additional motion that may be superimposed on this two-component combination. The
behavior of charged electrons will be considered after some further groundwork has been
laid. For the present we are concerned only with the uncharged electrons.
As a unit of space, the uncharged electron cannot move through extension space, since
the relation of space to space does not constitute motion. But under appropriate
conditions it can move through ordinary matter, inasmuch as this matter is a combination
of motions with a net positive, or time, displacement, and the relation of space to time
does constitute motion. The present-day view of the motion of electrons in solid matter is
that they move through the spaces between the atoms. The resistance to the electron flow
is then considered to be analogous to friction. Our finding is that the electrons (units of
space) exist in the matter, and move through that matter in the same manner as the
movement of matter through extension space.
The motion of the electrons is negative with respect to the net motion of material objects.
This is illustrated in the following diagram:

[Diagram not reproduced; it shows the lines X, A, and B described below.]

Line X in the diagram is a representation of a scalar magnitude of extension space, as it
appears in the conventional reference system. Line A shows the effect of a unit of motion
of a material object M through that space. The object that was originally coincident with
spatial unit 1 is now coincident with spatial unit 2. Line B shows what happens if the
original motion of object M is followed by a unit of electron motion. Just as object M
moved through space X in line A, so space X (the electrons) moves through object M in
line B. In one unit of motion (line A) object M advances from spatial unit 1 to spatial unit
2. In the following unit of the inverse type of motion (line B) the numbered spatial
locations advance one unit relative to object M. This brings M back into coincidence with
spatial unit 1, the same result that would have followed if object M had moved backward
in the absence of any electron movement. Thus the movement of space (electrons)
through matter is equivalent to a negative movement of matter through space. It follows
that the voltage differential that causes the electron motion, and the stress in any
substance that absorbs the motion, are likewise negative.
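The bookkeeping behind lines A and B amounts to a two-step calculation. The following lines are only an illustration of the argument just given, with arbitrary position labels:

# Sketch of the diagram's bookkeeping. Positions are arbitrary integer labels;
# only the displacements matter.
space_origin = 0       # reference position of the numbered spatial units
m_position = 1         # object M starts coincident with spatial unit 1

m_position += 1        # line A: M moves one unit through extension space (now at unit 2)
space_origin += 1      # line B: one unit of electron (space) motion; the spatial units advance

relative = m_position - space_origin
print(relative)        # 1: M is back at spatial unit 1, as if it had moved backward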
Directional movement of electrons through matter will be identified as an electric
current. If the atoms of the matter through which the current passes are effectively at rest
relative to the structure of the solid aggregate as a whole, uniform motion of the electrons
(space) through matter has the same general properties as motion of matter through space.
It follows Newton’s first law of motion, and can continue indefinitely without addition of
energy. This situation exists in the phenomenon known as superconductivity that has
been observed experimentally in many substances at very low temperatures. But where
the atoms of a material aggregate are in effective motion thermally, movement of
electrons through the matter adds to the spatial component of the thermal motion (that is,
increases the speed), and thereby imparts energy (heat) to the moving atoms.
The magnitude of the current is measured by the number of electrons (units of space) per
unit of time. Units of space per unit of time is the definition of speed, hence the electric
current is a speed. From a mathematical standpoint it is immaterial whether a mass is
moving through extension space or space is moving through the mass. Thus in dealing
with the electric current we are dealing with the mechanical aspects of electricity, and the
current phenomena can be described by the same mathematical equations that are
applicable to ordinary motion in space, with appropriate modifications for differences in
conditions, where such differences exist. It would even be possible to use the same units,
but for historical reasons, and as a matter of convenience, a separate system of units is
utilized in present-day practice.
The basic unit of current electricity is the unit of quantity. In the natural system it is the
spatial aspect of one electron, which has a speed displacement of one unit. Quantity, q, is
therefore equivalent to space, s. Energy has the same status in current flow as in the
mechanical relations, and has the space-time dimensions t/s. Energy divided by time is
power, 1/s. A further division by current, which has the dimensions of speed, s/t, then
produces electromotive force (emf) with the dimensions 1/s × t/s = t/s². These are, of
course, the space-time dimensions of force in general.
The term "electric potential" is commonly used as an alternative to emf, but, for reasons
to be discussed later, we will not use "potential" in this sense. Where a more convenient
term than emf is appropriate, we will use the term "voltage," symbol V.
Dividing voltage, t/s², by current, s/t, we obtain t²/s³. This is resistance, symbol R, the
only electrical quantity thus far considered that is not equivalent to a familiar mechanical
quantity. Its true nature is revealed by an examination of its space-time structure. The
dimensions t²/s³ are equivalent to mass, t³/s³, divided by time, t. Resistance is therefore
mass per unit time. The relevance of such a quantity can easily be seen when it is realized
that the amount of mass entering into the motion of space (electrons) through matter is
not a fixed quantity, as it is in the motion of matter through extension space, but a
quantity whose magnitude depends on the amount of movement of the electrons. In
motion of matter through extension space the mass is constant while the space depends
on the duration of the movement. In the current flow the space (number of electrons) is
constant while the mass depends on the duration of the movement. If the flow is only
momentary, each electron may move through only a small fraction of the total amount of
mass in the circuit, whereas if it continues for a longer time the entire circuit may be
traversed repeatedly. In either case, the total mass involved in the current flow is the
product of the mass per unit time (the resistance) and the time of flow. In the movement
of matter through extension space, the total space is determined in the same manner; that
is, it is the product of the space per unit time (velocity) by the time of movement.
In dealing with resistance as a property of matter we will be interested mainly in the
specific resistance, or resistivity, which is defined as the resistance of a unit cube of the
substance under consideration. Resistance is directly proportional to the distance traveled
by the current and inversely proportional to the cross-sectional area of the conductor. It
follows that if we multiply the resistance by unit area and divide by unit distance we
arrive at a quantity with the dimensions t²/s² that reflects only the inherent characteristics
of the material and the environmental conditions (principally temperature and pressure)
and is independent of the geometrical structure of the conductor. The reciprocals of
resistance and resistivity are conductance and conductivity, respectively.
With the benefit of the clarification of the space-time dimensions of resistance we can
now go back to the empirically determined relations between resistance and other
electrical quantities, and verify the consistency of the space-time identifications.
Voltage:    V = IR  = s/t × t²/s³ = t/s²
Power:      P = I²R = s²/t² × t²/s³ = 1/s
Energy:     E = I²Rt = s²/t² × t²/s³ × t = t/s
This energy equation demonstrates the equivalence of the mathematical expressions of
the electrical and mechanical phenomena. Since resistance is mass per unit time, the
product of resistance and time, Rt, is equivalent to mass, m. The current, I, is a speed, v.
The electrical energy expression RtI² is thus dimensionally equivalent to the kinetic
energy expression ½mv². In other words, the quantity RtI² is the kinetic energy of the
electron motion.
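Since every quantity involved reduces to space and time, this dimensional bookkeeping is easy to mechanize. The short sketch below is an illustration added here, not part of the original text; it represents each quantity as a pair of exponents (a, b) standing for t^a s^b, with the identifications taken from the discussion above, and checks the combinations just quoted.

# Space-time dimensions as exponent pairs (a, b), meaning t^a * s^b.
def mul(*qs):
    return (sum(q[0] for q in qs), sum(q[1] for q in qs))

def div(x, y):
    return (x[0] - y[0], x[1] - y[1])

s_ = (0, 1)                          # electrical quantity (space)
t_ = (1, 0)                          # time
current = div(s_, t_)                # s/t
energy = (1, -1)                     # t/s
power = div(energy, t_)              # 1/s
voltage = div(power, current)        # should come out as t/s^2
resistance = div(voltage, current)   # should come out as t^2/s^3
mass = (3, -3)                       # t^3/s^3, from Volume I

assert voltage == (1, -2)
assert resistance == (2, -3)
assert mul(resistance, t_) == mass                       # Rt is equivalent to mass
assert mul(resistance, (0, 2), (0, -1)) == (2, -2)       # resistivity = R x area / length
assert mul(mass, current, current) == energy             # (1/2)mv^2 has the dimensions of energy
assert mul(resistance, t_, current, current) == energy   # RtI^2 likewise
print("dimensional identities confirmed")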
Instead of using resistance, time, and current, we may put the energy expression into
terms of voltage, V (equivalent to IR), and quantity, q (equivalent to It). The expression
for the magnitude of the energy (or work) is then W = Vq. Here we have a definite
confirmation of the identification of electric quantity as the equivalent of space. Force, as
described in one of the standard physics textbooks, is "an explicitly definable vector
quantity that tends to produce a change in the motion of objects." Electromotive force, or
voltage, conforms to this description. It tends to cause motion of the electrons in the
direction of the voltage gradient. Energy in general is the product of force and distance.
Electrical energy, as Vq, is the product of force and quantity. It follows that electrical
quantity is equivalent to distance: the same conclusion that we derived from the nature of
the uncharged electron.
In conventional scientific thought the status of electrical energy as one form of energy in
general is accepted, as it must be, since it can be converted to any of the other forms, but
the status of electrical, or electromotive, force as one form of force in general is not
accepted. If it were, the conclusion stated in the preceding paragraph would be
inescapable. But the clear verdict of the observed facts is disregarded because there is a
general impression that electrical quantity and space are entities of a totally different
nature.
The early investigators of electrical phenomena recognized that the quantity measured in
volts has the characteristics of a force, and they named it accordingly. Contemporary
theorists reject this identification because it conflicts with their views as to the nature of
the electric current. W. J. Duffin, for instance, gives us a definition of electromotive force
(emf), and then says,
In spite of its name, it is clearly not a force but is equal to the work done per unit positive charge in taking a
charge completely around [the electric circuit]; its unit is therefore the volt. 13

Work per unit of space is force. This author simply takes it for granted that the moving
entity, which he calls a charge, is not equivalent to space, and he therefore deduces that
the quantity measured in volts cannot be a force. Our finding is that his assumptions are
wrong, that the moving entity is not a charge, but is a rotating unit of space (an uncharged
electron). The electromotive force, measured in volts, is then, in fact, a force. In effect,
Duffin concedes this point when he tells us, in another connection, that "V/m [volts per
meter] is the same as N/C [newtons per coulomb]." 14 Both express the voltage gradient
in terms of force divided by space.
Conventional physical theory does not pretend to give us any understanding of the nature
of either electrical quantity or electric charge. It simply assumes that inasmuch as
scientific investigation has hitherto been unable to produce any explanation of its nature,
the electric charge must be a unique entity, independent of the other fundamental physical
entities, and must be accepted as one of the "given" features of nature. It is then further
assumed that this entity of unknown nature that plays the central role in electrostatic
phenomena is identical with the entity of unknown nature, electrical quantity, that plays
the central role in current electricity.
The most significant weakness of the conventional theory of the electric current, the
theory based on the foregoing assumptions, as we now see it in the light of the more
complete understanding of physical fundamentals derived from the theory of the universe
of motion, is that it assigns two different, and incompatible, roles to the electrons. These
particles, according to present-day theory, are components of the atomic structure, yet at
least some of them are presumed to be free to accommodate themselves to any electrical
forces applied to the conductor. On the one hand, each is so firmly bound to the
remainder of the atom that it plays a significant part in determining the properties of that
atom, and a substantial force (the ionization potential) must be applied in order to
separate it from the atom. On the other hand, these electrons are so free to move that they
will respond to thermal or electrical forces whose magnitude is only slightly above zero.
They must exist in a conductor in specific numbers in order to account for the fact that
the conductor is electrically neutral while carrying current, but at the same time they must
be free to leave the conductor, either in large or small quantities, if they acquire sufficient
kinetic energy.
It should be evident that the theories are calling upon the electrons to perform two
different and contradictory functions. They have been assigned the key position in both
the theory of atomic structure and the theory of the electric current, without regard for the
fact that the properties that they must have in order to perform the functions required by
either one of these theories disqualify them for the functions that they are called upon to
perform in the other.
In the theory of the universe of motion, each of these phenomena involves a different
physical entity. The unit of atomic structure is a unit of rotational motion, not an electron.
It has the quasi-permanent status that is required of an atomic constituent. The electron,
without a charge, and without any connection with the atomic structure, is then available
as the freely moving unit of the electric current.
The fundamental postulate of the Reciprocal System of theory is that the physical
universe is a universe of motion, one in which all entities and phenomena are motions,
combinations of motions, or relations between motions. In such a universe none of the
basic phenomena are unexplainable. "Unanalyzables," as Bridgman called them, do not
exist. The basic physical entities and phenomena of the universe of motion–radiation,
gravitation, matter, electricity, magnetism, and so on–can be defined explicitly in terms
of space and time. Unlike conventional physical theory, the Reciprocal System does not
have to leave its basic elements cloaked in metaphysical mystery. It does not have to
exclude them from physical inquiry, in the manner of the following statement from the
Encyclopedia Britannica:
The question "What is electricity?" like the question "What is matter?" really lies outside the realm of
physics and belongs to that of metaphysics.15

In a universe composed entirely of motion, an electric charge applied to a physical entity
must necessarily be a motion. Thus the problem faced in the theoretical investigation was
not to answer the question, What is an electric charge?, but merely to determine what
kind of motion manifests itself as a charge. The identification of the charge as an added
motion not only clarifies the relation between the charged electron that is observed
experimentally and the uncharged electron that is known only as the moving entity in the
electric current, but also explains the interchanges between the two that are the principal
support for the currently popular opinion that only one entity, the charge, is involved. It is
not always remembered that this opinion achieved general acceptance only after a long
and spirited controversy. There are similarities between static and current phenomena,
but there are also significant differences. Inasmuch as no theoretical explanation of either
kind of electric effect was available at the time, the question to be decided was whether to
regard the two as identical because of the similarities, or as disparate because of the
differences. Once made, the decision in favor of identity has persisted, even though much
evidence against its validity has accumulated in the meantime.
The similarities are of two general types: (1) some of the properties of charged particles
and electric currents are alike, and (2) there are observable transitions from one to the
other. Identification of the charged electron as an uncharged electron with an added
motion explains both types of similarities. For instance, a demonstration that a rapidly
moving charge has the same magnetic properties as an electric current was a major factor
in the victory won by the proponents of the "charge" theory of the electric current many
years ago. But our findings are that the moving entities are electrons, or other carriers of
the charges, and the existence or non-existence of electric charges is irrelevant.
The second kind of evidence that has been interpreted as supporting the identity of the
static and current electrons is the apparent continuity from the electron of current flow to
the charged electron in such processes as electrolysis. Here the explanation is that the
electric charge is easily created and easily destroyed. As everyone knows, nothing more
than a slight amount of friction is sufficient to generate an electric charge on many
surfaces, such as those of present-day synthetic fabrics. It follows that wherever a
concentration of energy exists in one of these forms that can be relieved by conversion to
the other, the rotational vibration that constitutes a charge is either initiated or terminated
in order to permit the type of electron motion that can take place in response to the
effective force.
It has been possible to follow the prevailing policy, regarding the two different quantities
as identical, and utilizing the same units for both, only because the two different usages
are entirely separate in most cases. Under these circumstances no error is introduced into
the calculations by using the same units, but a clear distinction is necessary in any case
where either calculation or theoretical consideration involves quantities of both kinds.
As an analogy we might assume that we are undertaking to set up a system of units in
which to express the properties of water. Let us further assume that we fail to recognize
that there is a difference between the properties of weight and volume, and consequently
express both in cubic centimeters. Such a system is equivalent to using a weight unit of
one gram, and as long as we deal separately with weight and volume, each in its own
context, the fact that the expression "cubic centimeter" has two entirely different
meanings will not result in any difficulties. However, if we have occasion to deal with
both quantities simultaneously, it is essential to recognize the difference between the two.
Dividing cubic centimeters (weight) by cubic centimeters (volume) does not result in a
pure number, as the calculations seem to indicate; the quotient is a physical quantity with
the dimensions weight/volume. Similarly, we may use the same units for electric charge
and electric quantity as long as they are employed independently and in the right context,
but whenever the two enter into the same calculation, or are employed individually with
the wrong physical dimensions, there is confusion.
This dimensional confusion resulting from the lack of distinction between the charged
and uncharged electrons has been a source of considerable concern, and some
embarrassment to the theoretical physicists. One of its major effects has been to prevent
setting up any comprehensive systematic relationship between the dimensions of physical
quantities. The failure to find a basis for such a relationship is a clear indication that
something is wrong in the dimensional assignments, but instead of recognizing this fact,
the current reaction is to sweep the problem under the rug by pretending that it does not
exist. As one observer sees the picture:
In the past the subject of dimensions has been quite controversial. For years unsuccessful attempts were
made to find "ultimate rational quantities" in terms of which to express all dimensional formulas. It is now
universally agreed that there is no one "absolute" set of dimensional formulas.16

This is a very common reaction to long years of frustration, one that we encountered
frequently in our examination of the subjects treated in Volume I. When the most
strenuous efforts by generation after generation of investigators fail to reach a defined
objective, there is always a strong temptation to take the stand that the objective is
inherently unattainable. "In short," says Alfred Lande, "if you cannot clarify a
problematic situation, declare it to be 'fundamental,' then proclaim a corresponding
'principle'." 17 So physical science fills up with principles of impotence rather than
explanations.
In the universe of motion the dimensions of all quantities of all kinds can be expressed in
terms of space and time only. The space-time dimensions of the basic mechanical
quantities were identified in Volume I. In this chapter we have added those of the
quantities involved in the flow of electric current. The chapters that follow will complete
this task by identifying the space-time dimensions of the electric and magnetic quantities
that make their appearance in the phenomena due to charges of one kind or another and in
the magnetic effects of electric currents.
This clarification of the dimensional relations is accompanied by a determination of the
natural unit magnitudes of the various physical quantities. The system of units commonly
utilized in dealing with electric currents was developed independently of the mechanical
units on an arbitrary basis. In order to ascertain the relation between this arbitrary system
and the natural system of units it is necessary to measure some one physical quantity
whose magnitude can be identified in the natural system, as was done in the previous
determination of the relations between the natural and conventional units of space, time,
and mass. For this purpose we will use the Faraday constant, the observed relation
between the quantity of electricity and the mass involved in electrolytic action.
Multiplying this constant, 2.89366 × 10¹⁴ esu/g-equiv., by the natural unit of atomic
weight, 1.65979 × 10⁻²⁴ g, we arrive at 4.80287 × 10⁻¹⁰ esu as the natural unit of electrical
quantity.
The magnitude of the electric current is the number of electrons per unit of time; that is,
units of space per unit of time, or speed. Thus the natural unit of current could be
expressed as the natural unit of speed, 2.99793 × 10¹⁰ cm/sec. In electrical terms it is the
natural unit of quantity divided by the natural unit of time, and amounts to 3.15842 × 10⁶
esu/sec, or 1.05353 × 10⁻³ amperes. The conventional unit of electrical energy, the watt-
hour, is equal to 3.6 × 10¹⁰ ergs. The natural unit of energy, 1.49275 × 10⁻³ ergs, is
therefore equivalent to 4.14375 × 10⁻¹⁴ watt-hours. Dividing this unit by the natural unit of
time, we obtain the natural unit of power, 9.8099 × 10¹² ergs/sec = 9.8099 × 10⁵ watts. A
division by the natural unit of current then gives us the natural unit of electromotive force,
or voltage, 9.31146 × 10⁸ volts. Another division by current brings us to the natural unit of
resistance, 8.83834 × 10¹¹ ohms.
The basic quantities of current electricity and their natural units in electrical terms can be
summarized as follows:

s        quantity      4.80287 × 10⁻¹⁰ esu
s/t      current       1.05353 × 10⁻³ amperes
1/s      power         9.8099 × 10⁵ watts
t/s      energy        4.14375 × 10⁻¹⁴ watt-hours
t/s²     voltage       9.31146 × 10⁸ volts
t²/s³    resistance    8.83834 × 10¹¹ ohms
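
The arithmetic behind these figures can be retraced in a few lines. The sketch below is merely a check on the values quoted above; the factor 2.99793 × 10⁹ esu/sec per ampere is the standard cgs conversion, and small differences in the last digits reflect rounding in the quoted constants.

# Retracing the natural-unit arithmetic from the constants quoted in the text.
faraday = 2.89366e14                    # Faraday constant, esu per gram-equivalent
amu = 1.65979e-24                       # natural unit of atomic weight, grams
charge = faraday * amu                  # natural unit of quantity; text: 4.80287e-10 esu

current_amp = 3.15842e6 / 2.99793e9     # esu/sec to amperes; text: 1.05353e-3 A
voltage = 9.8099e5 / current_amp        # power / current; text: 9.31146e8 V
resistance = voltage / current_amp      # voltage / current; text: 8.83834e11 ohms

print(f"quantity   {charge:.5e} esu")
print(f"current    {current_amp:.5e} A")
print(f"voltage    {voltage:.5e} V")
print(f"resistance {resistance:.5e} ohm")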
Another electrical quantity that should be mentioned because of the key role that it plays
in the present-day mathematical treatment of magnetism is "current density," which is
defined as "the quantity of charge passing per second through unit area of a plane normal
to the line of flow." This is a strange quantity, quite unlike any physical quantity that has
previously been discussed in the pages of this and the preceding volume, in that it is not a
relation between space and time. When we recognize that the quantity is actually current
per unit of area, rather than "charge" (a fact that is verified by the units, amperes per
square meter, in which it is expressed), its space-time dimensions are seen to be s/t × 1/s²
= 1/st. These are not the dimensions of a motion, or a property of a motion. It follows that
this quantity, as a whole, has no physical significance. It is merely a mathematical
convenience.
The fundamental laws of current electricity known to present-day science–such as Ohm's
Law, Kirchhoff's Laws, and their derivatives–are empirical generalizations, and their
application is not affected by the clarification of the essential nature of the electric
current. The substance of these laws, and the relevant details, are adequately covered in
existing scientific and technical literature. In conformity with the general plan of this
work, as set forth earlier, these subjects will not be included in our presentation.
This is an appropriate time to make some comments about the concept of ―natural units.‖
There is no ambiguity in this concept, so far as the basic units of motion are concerned.
The same is true, in general, of the units of the simple scalar quantities, although some
questions do arise. For example, the unit of space in the region inside unit distance, the
time region, as we are calling it, is inherently just as large as the unit of space in the
region outside unit distance, but as measured it is reduced by the inter-regional ratio,
156.444, for reasons previously explained. We cannot legitimately regard this quantity as
something less than a full unit, since, as we saw in Volume I, it has the same status in the
time region that the full-sized natural unit of space has in the region outside unit distance.
The logical way of handling this situation appears to be to take the stand that there are
two different natural units of distance (one-dimensional space), a simple unit and a
compound unit, that apply under different circumstances.
The more complex physical quantities are subject to still more variability in the unit
magnitudes, because these quantities are combinations of the simpler quantities, and the
combination may take place in different ways and under different conditions. For
instance, as we saw in our examination of the units of mass in Volume I, there are several
different manifestations of mass, each of which involves a different combination of
natural units and therefore has a natural unit of its own. In this case, the primary cause of
variability is the existence of a secondary mass component that is related to the primary
mass by the inter-regional ratio, or a modification thereof, but some additional factors
introduce further variability, as indicated in the earlier discussion.

CHAPTER 10

Electrical Resistance
While the motion of the electric current through matter is equivalent to motion of matter
through space, as brought out in the discussion in Chapter 9, the conditions under which
each type of motion is encountered in our ordinary experience emphasize different
aspects of the common fundamentals. In dealing with the motion of matter through
extension space we are primarily concerned with the motions of individual objects.
Newton‘s laws of motion, which are the foundation stones of mechanics, deal with the
application of forces to initiate or modify the motions of such objects, and with the
transfer of motion from one object to another. Our interest in the electric current, on the
other hand, is concerned mainly with the continuous aspects of the current flow, and the
status of the individual objects that are involved is largely irrelevant.
The mobility of the spatial units in the current flow also introduces some kinds of
variability that are not present in the movement of matter through extension space.
Consequently, there are behavior characteristics, or properties, of material structures that
are peculiar to the relation between these structures and the moving electrons. Expressing
this in another way, we may say that matter has some distinctive electrical properties.
The basic property of this nature is resistance. As pointed out in Chapter 9, resistance is
the only quantity participating in the fundamental relations of current flow that is not a
familiar feature of the mechanical system of equations, the equations that deal with the
motion of matter through extension space.
Present-day ideas as to the origin of electrical resistance are summarized by one author in
this manner:
Ability to conduct electricity…is due to the presence of large numbers of quasi-free electrons which under
the action of an applied electric field are able to flow through the metallic lattice…Disturbing
influences…impede the free flow of electrons, scattering them and giving rise to a resistance. 18

As indicated in the preceding chapter, the development of the theory of the universe of
motion arrives at a totally different concept of the nature of electrical resistance. The
electrons, we find, are derived from the environment. It was brought out in Volume I that
there are physical processes in operation which produce electrons in substantial
quantities, and that, although the motions that constitute these electrons are, in many
cases, absorbed by atomic structures, the opportunities for utilizing this type of motion in
such structures are limited. It follows that there is always a large excess of free electrons
in the material sector of the universe, most of which are uncharged. In this uncharged
state the electrons cannot move with respect to extension space, because they are
inherently rotating units of space, and the relation of space to space is not motion. In open
space, therefore, each uncharged electron remains permanently in the same location with
respect to the natural reference system, in the manner of a photon. In the context of the
stationary spatial reference system the uncharged electron, like the photon, is carried
outward at the speed of light by the progression of the natural reference system. All
material aggregates are thus exposed to a flux of electrons similar to the continual
bombardment by photons of radiation. Meanwhile there are other processes, to be
discussed later, whereby electrons are returned to the environment. The electron
population of a material aggregate such as the earth therefore stabilizes at an equilibrium
level.
These processes that determine the equilibrium electron concentration are independent of
the nature of the atoms of matter and of the atomic volume. The concentration of
electrons is therefore uniform in electrically isolated conductors where there is no current
flow. It follows that the number of electrons involved in the thermal motion of atoms of
matter is proportional to the atomic volume, and the energy of that motion is determined
by the effective rotational factors of the atoms. The atomic volume and thermal energy
therefore determine the resistance.
Those substances whose rotational motion is entirely in time (Divisions I and II) have
their thermal motion in space, in accordance with the general rule governing addition of
motions, as set forth in Volume I. For these substances zero thermal motion corresponds
to zero resistance, and the resistance increases with the temperature. This is due to the
fact that the concentration of electrons (units of space) in the time component of the
conductor is constant for any specific current magnitude, and the current therefore
increases the thermal motion by a specific proportion. Such substances are conductors.
Where there are two dimensions of rotation in space, as in many of the elements of
Division IV, the thermal motion, which requires two open dimensions because of the
finite diameters of the moving electrons, is necessarily in time. In this case, zero
temperature corresponds to zero motion in time. Here the resistance is initially extremely
high, but decreases with an increase in temperature. Substances of this kind are known as
insulators or dielectrics.
Where there is only one dimension of spatial rotation, as in Division III, the elements of
greatest electric displacement, those closest to the electropositive divisions, are able to
follow the positive pattern, and are conductors. The Division III elements of lower
electric displacement follow a modified time motion pattern, with resistance decreasing
from a high, but finite, level at zero temperature. These substances of intermediate
characteristics are semiconductors.
For the present we will be concerned primarily with the resistance of conductors, and will
further limit the discussion to what may be called the "regular" pattern of conductor
resistance. A limitation of this kind is necessary at the present stage of the investigation
because the large element of uncertainty in the experimental information on the resistivity
of the various conducting materials makes the clarification of the resistance relations a
slow and difficult process. The early stages of the development of the Reciprocal System
of theory, prior to the publication of the first edition of this work in 1959, which were
very productive in the non-electrical areas, made relatively little progress in dealing with
the electrical properties, largely because of conflicts between the theoretical deductions
and some experimental results that have since been found to be incorrect. The increasing
scope and accuracy of the experimental work in the intervening years has improved this
situation very materially, but the basic problem still remains.
Ideally it should be possible to deduce all of the pertinent information from theoretical
premises alone, without reference to experimental determinations, but as a practical
matter this is not feasible. A few steps can be, and have been, taken on a purely
theoretical basis, particularly where the previous development of the theory has cast some
important new light on the subject matter, but from the practical standpoint an extensive
and detailed investigation in any area is possible only if the theoretical study and the
checking of the theoretical conclusions against experimental and observational data go
hand in hand. It follows that where empirical data are lacking, progress is difficult, and
where they are seriously wrong, it is essentially impossible.
Unfortunately, resistance measurements are subject to many factors that introduce
uncertainty into the results. The purity of the specimen is particularly critical because of
the great difference between the resistivities of conductors and dielectrics. Even a very
small amount of a dielectric impurity can alter the resistance substantially. Conventional
theory has no explanation for the magnitude of this effect. If the electrons move through
the interstices between the atoms, as this theory contends, a few additional obstacles in
the path should not contribute significantly to the resistance. But, as we saw in Chapter 9,
the current moves through all of the atoms of the conductor, including the impurity
atoms, and it increases the heat content of each atom in proportion to its resistance. The
extremely high dielectric resistance results in a large contribution by each impurity atom,
and even a very small number of such atoms therefore has a significant effect.
Semiconducting elements are less effective as impurities, but they may still have
resistivities thousands of times as great as those of the conductor metals.
The resistance also varies under heat treatment, and careful annealing is required before
reliable measurements can be made. The adequacy of this treatment in many, if not most,
of the resistance determinations is questionable. For example, G. T. Meaden reports that
the resistance of beryllium was lowered more than fifty percent by such treatment, and
comments that "much earlier work was clearly based on unannealed specimens."19 Other
sources of uncertainty include changes in crystal structure or magnetic behavior that take
place at different temperatures or pressures in different specimens, or under different
conditions, often with substantial hysteresis effects.
Ultimately, of course, it will be desirable to bring all of these variables within the scope
of the theoretical treatment, but for the present our objective will have to be limited to
deducing from the theory the nature and magnitude of the changes in resistance resulting
from temperature and pressure variations in the absence of these complicating factors,
and then to show that enough of the experimental results are in agreement with the theory
to make it probable that the discrepancies, where they occur, are due to one or more of
these factors that modify the normal values.
Inasmuch as the electrical resistance is a product of the thermal motion, the energy of the
electron motion is in equilibrium with the thermal energy. The resistance is therefore
directly proportional to the effective thermal energy; that is, to the temperature. It follows
that the increment of resistance per degree is a constant for each (unmodified) substance,
a magnitude that is determined by the atomic characteristics. The curve representing the
relation of the resistivity to the temperature, in application to a single atom, is thus linear.
Like the curves representing the temperature variation of the other properties that we
examined in the earlier chapters, and for the same reasons, the initial level of the
resistivity curve is negative. From this initial level to the melting point the resistivity of
an unmodified atom (one that has not undergone a structural rearrangement or other
change that modifies the resistance relations) follows a single straight line, rather than a
curve composed of two or more segments of different slopes, as in the specific heat and
thermal expansion curves. This limitation to a single line is characteristic of the electron
relations, and is due to the fact that the electron has only one rotational displacement unit,
and therefore cannot shift to a multi-unit type of motion in the manner of the complex
atomic structures.
A somewhat similar change in the resistivity curve does occur, however, if the factors
that determine the resistance are modified by some rearrangement of the kind mentioned
earlier. As P. W. Bridgman commented in discussing some of his results, after a change
of this nature has taken place, we are really dealing with a different substance. The curve
for the modified atom is also a straight line, but it is not collinear with the curve of the
unmodified atom. At the point of transition to the new form the resistivity of the
individual atom abruptly changes to a different straight line relation. The resistivity of the
aggregate follows a transition curve from one line to the other, as usual. At the lower end
of the temperature range, the resistivity of the solid aggregate follows another transition
curve of the same nature as those that we found in the curves representing the properties
discussed earlier. The relation of the resistance to the temperature in this temperature
range is currently regarded as exponential, but as we saw in other cases of the same kind,
it is actually a probability curve that reflects the resistivity of the diminishing number of
atoms that are still individually above the temperature at which the atomic resistivity
reaches the zero level. The curve for the solid aggregate also diverges from the single
atom curve at the upper end, due to the increasing proportion of liquid molecules in the
solid aggregate.
In this case, again, two values are required for a complete definition of the linear curve;
either the coordinates of two points on the curve, or the slope of the curve and the
location of one fixed point. A fixed point that is available from theoretical premises is the
zero point temperature, the point at which the curve for the individual atom reaches the
zero resistance level. The theoretical factors that determine this temperature are the same
as those applying to the specific heat and thermal expansion curves, except that since the
resistivity is an interaction between the atom and the electron it is effective only when the
motions of both objects are directed outward. The theoretical zero point temperature
normally applicable to resistivity is therefore twice that applicable to the properties
previously considered.
Up to this point the uncertainties in the experimental results have had no effect on the
comparison of the theoretical conclusions with experience. It is conceded that the relation
of resistivity to temperature is generally linear, with deviations from linearity in certain
temperature ranges and under certain conditions. The only question at issue is whether
these deviations are satisfactorily explained by the Reciprocal System of theory. When
this question is considered in isolation, without taking into account the status of that
system as a general physical theory, the answer is a matter of judgment, not a factual
matter that can be resolved by comparison with observation. But we have now arrived at
a place where the theory identifies some specific numerical values. Here agreement
between theory and observation is a matter of objective fact, not one that calls for a
judgment. But agreement within an acceptable margin can be expected only if (1) the
experimental resistivities are reasonably accurate, (2) the zero point temperatures
applicable to specific heat (which are being used as a base) were correctly determined,
and (3) the theoretical calculation and the resistivity measurement refer to the same
structure.
Table 24 applies equation 7-1, with a doubled numerical constant, and the rotational
factors from Table 22, to a determination of the temperatures of the zero levels of the
resistance curves of the elements included in the study, and compares the results with the
corresponding points on the empirical curves. The amount of uncertainty in the resistivity
measurements is reflected in the fact that for 11 of these 40 elements there are two sets of
experimental results that have been selected as the "best" values by different data
compilers.20 In three other cases there are substantial differences in the experimental
results at the higher temperatures, but the curves converge on the same value of the zero
resistivity temperature. In a situation where uncertainties of this magnitude are prevalent,
it can hardly be expected that there will be anywhere near a complete agreement between
the theoretical and experimental values. Nevertheless, if we take the closer of the two
"best" experimental results in the 11 two-value cases, the theoretical and experimental
values agree within four degrees in 26 of the 40 elements, almost two-thirds of the total.
The rare earth elements were not included in this study because the resistances of these
elements, like so many of their other properties, follow a pattern differing in some
respects from that of most other elements, including a transition to a new structural form
at a relatively low temperature, accompanied by a major decrease in the slope of the
resistivity curve. Because of this low temperature transition it is difficult to locate the
zero point temperature from the empirical data, but in 9 of the 13 elements of this group
for which sufficient data are available to enable an approximate identification of this
temperature, it appears to be between 10 and 20 degrees K. The theoretical range for
these elements, as indicated by the factors listed in Table 22, is from 12 to 20 degrees.
Here again, then, the measured resistivities of two-thirds of the elements are at least
approximately in agreement with the theoretical values.
The existence of this amount of agreement, in spite of all of the influences tending to
generate discrepancies, is about as good a confirmation of the validity of the theory, as a
general proposition, as can be expected under the existing circumstances. Furthermore, it
is not unlikely that there are alternate resistance patterns that result in explainable
deviations from the calculated values, and some of the larger discrepancies may be thus
accounted for when an investigation of broader scope is undertaken.

Table 24: Temperature of Zero Resistance


        Total              T0                          Total              T0
        Factors    Calc.    Obs.                       Factors    Calc.    Obs.
Li      14         56       56              Ru         14         56       44-58
Na       6         24       30              Rh         13         52       44-55
Mg      12         48       45              Pd         10         40       39
Al      14         56       57-60           Ag          8         32       28-35
K        4         16       17              Cd          5         20       18
Sc      10         40       33              In         12         48       19
Ti      14         56       54              Sn          7         28       25
V       12         48       45              Sb          8         32       24-35
Cr      14         56       69              Cs          2          8       8
Fe      16         64       73              Ba          4         16       26
Co      14         56       64-78           Hf          8         32       32
Ni      14         56       55              Ta          8         32       30
Cu      12         48       46-49           W          12         48       46-55
Zn       8         32       27              Re         10         40       45
Ga       4         16       31              Ir         11         44       28-46
As      12         48       42              Pt          8         32       33
Rb       2          8       11              Au          6         24       18
Y        8         32       28              Hg          4         16       7
Zr       9         36       30-45           Tl          4         16       16
Mo      14         56       36-55           Pb          4         16       12
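
The calculated column of this table can be regenerated directly. In the sketch below the zero-resistance temperature is taken as 4 degrees K per unit of the total rotational factors; that value is inferred from the tabulated numbers (equation 7-1 with its constant doubled, as stated above), and only a few representative rows are included.

# Reproducing the "Calc." column of Table 24 for a few elements and applying
# the "within four degrees" comparison used in the text.
rows = {   # element: (total rotational factors, observed T0 range in degrees K)
    "Li": (14, (56, 56)), "Na": (6, (30, 30)), "Mg": (12, (45, 45)),
    "Al": (14, (57, 60)), "K": (4, (17, 17)), "Pd": (10, (39, 39)),
    "Ag": (8, (28, 35)), "Pb": (4, (12, 12)),
}
agree = 0
for element, (factors, (lo, hi)) in rows.items():
    calc = 4 * factors                                   # inferred constant: 4 K per factor unit
    closest = min((lo, hi), key=lambda v: abs(v - calc))
    within = abs(closest - calc) <= 4
    agree += within
    print(f"{element:2}  calc {calc:3}  obs {lo}-{hi}  {'within' if within else 'outside'} four degrees")
print(f"{agree} of {len(rows)} within four degrees")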
For the second defining value of the resistivity curves we can use the temperature
coefficient of resistivity, the slope of the curve, a magnitude that reflects the inherent
resistivity of the conductor material. The temperature coefficient as given in the
published physical tables is not the required value. This is merely a relative magnitude,
the incremental change in resistivity relative to the resistivity at a reference temperature,
usually 20 degrees C. What is needed for present purposes is the absolute coefficient, in
microhm-centimeters per degree, or some similar unit.
Some studies have been made in this area, and as might be expected, it has been found
that the electric (one-dimensional) speed displacement is the principal determinant of the
resistivity, in the sense that it is responsible for the greatest amount of variation.
However, the effective quantity is not usually the normal electric displacement of the
atoms of the element involved, as this value is generally modified by the way that the
atom interacts with the electrons. The conclusions that have been reached as to the nature
and magnitude of these modifications are still rather tentative, and there are major
uncertainties in the empirical values against which the theoretical results would normally
be checked to test their validity. The results of these studies have therefore been omitted
from this volume, in conformity with the general policy of restricting the present
publication to those results whose validity is firmly established.
The experimental difficulties that introduce uncertainties into the correlations between
the theoretical and experimental values of the resistivity do not play as large a role in the
relative resistance under compression. The compression results therefore give us a more
definite and unequivocal picture. Again, however, this initial exploration of the subject,
as it appears in the context of the Reciprocal System of theory, will have to be confined
to the "regular" pattern, the one followed by most of the metallic conductors.
Because the movement of electrons (space) through matter is the inverse of the
movement of matter through space, the inter-regional relations applicable to the effect of
pressure on resistance are the inverse of those that apply to the change in volume under
pressure. We found in Chapter 4 that the volume of a solid under compression conforms
to the relation PV² = k. By reason of the inverse nature of the electron movement, the
corresponding equation for electrical resistance is
P²R = k (10-1)
As in the compressibility equation, the symbol P in this expression refers to the total
effective pressure. If we give the internal component of this total the designation P0, as in
the volume compressibility discussion, and limit the term P to the externally applied
pressure, the equation becomes
(P + P0)²R = k (10-2)
The general situation with respect to the values of the internal pressure applicable to
resistance is essentially the same as that encountered in the study of compressibility.
Some elements maintain the same internal pressure throughout Bridgman‘s entire
pressure range, some undergo second order transitions to higher P0 values, and others are
subject to first order transitions, just as in the volume relations. However, the internal
pressure applicable to resistance is not necessarily the same as that applicable to volume.
In some substances, tungsten and platinum, for example, these internal pressures actually
are identical at every point in the pressure range of Bridgman‘s experiments. In another,
and larger, class, the applicable values of P0 are the same as in compression, but the
transition from the lower to the higher pressure takes place at a different temperature.
The values for nickel and iron illustrate this common pattern. The initial reduction in the
volume of nickel took place on the basis of an internal pressure of 913 M kg/cm².
Somewhere between an external pressure of 30 M kg/cm² (Bridgman's pressure limit on
this element) and 100 M kg/cm² (the initial point of later experiments at very high
pressure) the internal pressure increased to 1370 M kg/cm² (from azy factors 4-8-1 to
4-8-1¹/2). In the resistance measurements the same transition occurred, but it took place at a
lower external pressure, between 10 and 20 M kg/cm². Iron has the same internal
pressures in resistance as nickel, with the transition at a somewhat higher external
pressure, between 40 and 50 M kg/cm². But in compression this transition did not appear at
all in Bridgman's pressure range, and was evident only in the shock wave experiments
carried to much higher pressures.
Table 25 is a comparison of the internal pressures in resistance and compression for the
elements included in the study. The symbol x following or preceding some of the values
indicates that there is evidence of a transition to or from a different internal pressure, but
the available data are not sufficient to define the alternate pressure level.
Table 25: Internal Pressures in Resistance and Compression
(Bridgman's pressure range; P0 values in M kg/cm²)

           Comp.       Res.                Comp.       Res.
Be 571-856 1285 Pd 1004 1004-1506
Na 33.6-50.4 33.6-50.4-134.4 Ag 577-x 577-866
Al 376-564 564-1128 Cd 246-x 246-554
K 18.8 x-37.6 In 236 236-354
V 913-x 1370 Sn 302 226-453
Cr x-913 x-457 Ta 1072 1206-x
Mn 293-1172 586-1172 W 1733 1733
Fe 913 913-1370 Ir 2007 1338-2007
Ni 913-1370 913-1370 Pt 1338 1338
Cu 845-1266 1266 Au 867 650-867
Zn 305 305-610 Tl x-253 169-x
As 274-548 274-548-822 Pb 221-331 165-441
Nb 897-1196 1794 Bi 165-331 x-662
Mo 1442 1442-2121 Th 313-626 626-1565
Rh 1442 1442 U 578-1156 419-838
The amount of difference between the two columns of the table should not be surprising.
The atomic rotations that determine the azy factors are the same in both cases, but the
possible values of these factors have a substantial range of variation, and the influences
that affect the values of these factors are not identical. In view of the participation of the
electrons in the resistivity relations, and the large impurity effects, neither of which enters
into the volume relations, some difference in the pressures at which the transitions take
place can be considered normal. There is, at present, no explanation for those cases in
which the internal pressures indicated by the results of the compression and resistance
measurements are widely divergent, but differences in the specimens can certainly be
suspected.
Table 26 compares the relative resistances calculated from equation 10-2 with
Bridgman's results on some typical elements. The data are presented in the same form as
the compressibility tables in Chapter 4, to facilitate comparisons between the two sets of
results. This includes showing the azy factors for each element rather than the internal
pressures, but the corresponding pressures are available in Table 25. As in the
compressibility tables, values above the transition pressures are calculated relative to an
observed value as a reference level. The reference value utilized is indicated by the
symbol R following the figure given in the "calculated" column.
Table 26: Relative Resistance Under Compression

Pressure            W               Pt               Rh               Cu
(M kg/cm²)        4-8-3            4-8-2            4-8-2          4-8-1¹/2
                Calc.   Obs.     Calc.   Obs.     Calc.   Obs.     Calc.   Obs.
0 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
10 .989 .987 .985 .981 .986 .984 .984 .982
20 .977 .975 .971 .963 .973 .968 .969 .965
30 .966 .963 .957 .947 .960 .953 .954 .949
40 .955 .951 .943 .931 .947 .939 .940 .934
50 .945 .940 .929 .916 .934 .925 .925 .920
60 .934 .930 .916 .903 .922 .912 .912 .907
70 .924 .920 .903 .891 .910 .900 .898 .895
80 .914 .911 .890 .880 .897 .889 .885 .884
90 .904 .903 .878 .870 .886 .880 .872 .875
100 .894 .895 .866 .861 .875 .872 .859 .866
Ni Fe Pd Zn
4-8-1 4-8-1 4-6-2 4-4-1
4-8-1¹/2 4-8-1¹/2 4-6-3 4-4-2
0 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
10 .978 .982 .978 .977 .980 .980 .938 .937
20 .960 .965 .958 .956 .961 .960 .881 .887
30 .946 .948 .937 .936 .943 .942 .836 .847
40 .933 .933 .918 .919 .925 .925 .810 .812
50 .919 .918 .901 .903 .907 .909 .786 .783
60 .907 .904 .889 .888 .891 .894 .762 .756
70 .894 .892 .875 .875 .880 .881 .740 .733
80 .882 .880 .864 .862 .868 .862 .719 .713
90 .870 .869 .853 .851 .858 .858 .699 .695
100 .858R .858 .841R .841 .847R .847 .679R .679
In those cases where the correct assignment of azy factors and internal pressures above
the transition point is not definitely indicated by the corresponding compressibility
values, the selections from among the possible values are necessarily based on the
empirical measurements, and they are therefore subject to some degree of uncertainty.
Agreement between the experimental and the semi-theoretical values in this resistance
range therefore validates only the exponential relation in equation 10-2, and does not
necessarily confirm the specific values that have been calculated. The theoretical results
below the transition points, on the other hand, are quite firm, particularly where the
indicated internal pressures are supported by the results of the compressibility
measurements. On this basis, the extent of agreement between theory and observation in
the values applicable to those elements that maintain the same internal pressures through
the full 100,000 kg/cm² pressure range of Bridgman's measurements is an indication of
the experimental accuracy. The accuracy thus indicated is consistent with the estimates
made earlier on the basis of other criteria.
Inasmuch as the difference in the form of the compressibility equation, PV² = k (equation
4-4), and that of the pressure-resistance equation, P²R = k (equation 10-1), is a
requirement of the general reciprocal relation between space and time specified in the
postulates of the Reciprocal System of theory, the joint verification of these two
equations is a significant addition to the mass of evidence confirming the validity of this
reciprocal relation, the cornerstone of the quantitative expression of the theory of the
universe of motion.
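For readers who find the reciprocity easier to see with both relations written in their full, total-pressure form, the pair can be set side by side. This is merely a restatement of equations 4-4 and 10-2, with P0 the internal pressure component in each case, not a new result:

\begin{align*}
  (P + P_0)\,V^{2} &= k  &&\text{(matter moving through space: volume)} \\
  (P + P_0)^{2}\,R &= k  &&\text{(space, i.e., electrons, moving through matter: resistance)}
\end{align*}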
CHAPTER 11
Thermoelectric Properties
As brought out in Chapter 9, the equivalent space in which the thermal motion of the atoms
of matter takes place contains a concentration of electrons, the magnitude of which is
determined, in the first instance, by factors that are independent of the thermal motion. In
the thermal process the atoms move through the electron space as well as through the
equivalent of extension space. Where the net time displacement of the atoms of matter
provides a time continuum in which the electrons (units of space) can move, a portion of the
atomic motion is communicated to the electrons. The thermal motion in the time region
environment therefore eventually arrives at an equilibrium between motion of matter
through space and motion of space (electrons) through matter.
It should be noted particularly that the motion of the electrons through the matter is a part of
the thermal motion, not something separate. A mass m attains a certain temperature T when
the effective energy of the thermal motion reaches the corresponding level. It is immaterial
from this standpoint whether the energy is that of motion of mass through equivalent space,
or motion of space (electrons) through matter, or a combination of the two. In previous
discussions of the hypothesis that metallic conduction of heat is due to the movement of
electrons, the objection has been raised that there is no indication of any increment in the
specific heat due to the thermal energy of the electrons. The development of the Reciprocal
System of theory has not only provided a firm theoretical basis for what was previously no
more than a hypothesis–the electronic nature of the conduction process–but has also
supplied the answer to this objection. The electron movement has no effect on the specific
heat because it is not an addition to the thermal motion of the atoms; it is an integral part of
the combination motion that determines the magnitude of that specific heat.
Because the factors determining the electron capture from and loss to the environment are
independent of the nature of the matter and the amount of thermal motion, the equilibrium
concentration is the same in any isolated conductor, irrespective of the material of which the
conductor is composed, the temperature, or the pressure. All of these factors do, however,
enter into the determination of the thermal energy per electron. Like the gas pressure in a
closed container, which depends on the number of molecules and the average energy per
molecule, the electric voltage within an isolated conductor is determined by the number of
electrons and the average energy per electron. In such an isolated conductor the electron
concentration is uniform. The electric voltage is therefore proportional to the thermal energy
per electron.
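The gas analogy can be written out explicitly. For an ideal gas, standard kinetic theory gives the pressure as two thirds of the product of the molecular concentration and the average kinetic energy per molecule; the corresponding statement suggested here, written only schematically and not as a derived formula, is that the voltage in an isolated conductor varies with the electron concentration and the average thermal energy per electron:

\begin{align*}
  P &= \tfrac{2}{3}\,n_{m}\,\langle E \rangle  &&\text{(ideal gas, standard kinetic theory)} \\
  \mathcal{V} &\propto n_{e}\,\langle \varepsilon \rangle  &&\text{(schematic analogue for the conductor)}
\end{align*}

where n_m and n_e are the molecular and electron concentrations respectively.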
The energy level at which the electrons are in thermal equilibrium with the atoms of a
conductor depends on the material of which the conductor is composed. If two conductors of
dissimilar composition, copper and zinc, let us say, are brought into contact, the difference
in the electron energy level will manifest itself as a voltage differential. A flow of electrons
will take place from the conductor with the higher (more negative) voltage, the zinc, to the
copper until enough electrons have been transferred to bring the two conductors to the same
voltage. What then exists is an equilibrium between a smaller number of relatively high
energy electrons in the zinc and a greater number of relatively low energy electrons in the
copper.
In this example it is assumed that the voltages in the conductors are allowed to reach an
equilibrium. Some more interesting and significant effects are produced where equilibrium
is not established. For instance, a continuing current may be passed through the two
conductors. If the electron flow is from the zinc to the copper, the electrons leave the zinc
with the relatively high voltage that prevails in that conductor. In this case the lower voltage
of the electrons in the copper conductor cannot be counterbalanced by an increase in the
electron concentration, as all of the electrons that enter the copper under steady flow
conditions pass on through. The incoming electrons therefore lose a portion of their energy
content in the process of conforming to the new environment. The difference is given up as
heat, and the temperature in the vicinity of the zinc-copper junction increases. If the section
of the conductor under consideration is part of a circuit in which the electrons return to the
zinc, this process is reversed at the copper-zinc junction. Here the energy level of the
incoming electrons rises to conform with the higher voltage of the zinc, and heat is absorbed
from the environment to provide the electron energy increment. This phenomenon is known
as the Peltier effect.
In this Peltier effect a flow of current causes a difference between the temperatures at the
two junctions. The Seebeck effect is the inverse process. Here a difference in temperature
between the two junctions causes a current to flow through the circuit. At the heated
junction the increase in thermal energy raises the voltage of the high energy conductor, the
zinc, more than that of the low energy conductor, the copper, because the size of the
increment is proportional to the total energy. A current therefore flows from the zinc into the
copper, and on to the low temperature junction. The result at this junction is the same as in
the Peltier effect. The net result is therefore a transfer of heat from the hot junction to the
cold junction.
Throughout the discussion in this volume, the term “electric current” refers to the movement
of uncharged electrons through conductors, and the term “higher voltage” refers to a greater
force, t/s², due to a greater concentration of electrons or its equivalent in a greater energy per
electron. This electron flow is opposite to the conventional, arbitrarily assigned, “direction
of current flow” utilized in most of the literature on current electricity. Ordinarily the
findings of this work have been expressed in the customary terms of reference, even though
in some cases those findings suggest that an improvement in terminology would be in order.
In the present instance, however, it does not appear that any useful purpose would be served
by incorporating an unfortunate mistake into an explanation whose primary purpose is to
clarify relationships that have been confused by mistakes of other kinds.
A third thermoelectric phenomenon is the Thomson effect, which is produced when a current
is passed through a conductor in which a temperature gradient exists. The result is a transfer
of heat either with or against the temperature gradient. Here the electron energy in the warm
section of the conductor is either greater or less than that in the cool section, depending on
the thermoelectric characteristics of the conductor material. Let us consider the case in
which the energy is greater in the warm section. The electrons that are in thermal
equilibrium with the thermally moving matter in this section have a relatively high energy
content. These energetic electrons are carried by the current flow to the cool section of the
conductor. Here they must lose energy in order to arrive at a thermal equilibrium with the
relatively cold matter of the conductor, and they give up heat to the environment. If the
current is reversed, the low energy electrons from the cool section travel to the warm
section, where they absorb energy from the environment to attain thermal equilibrium. Both
of these processes operate in reverse if the material of the conductor is one of the class of
substances in which the effective voltage decreases with an increase in the temperature.
There are also some substances in which the response of the voltage to a temperature
increment changes direction at some specific temperature level. A similar reversal of the
Thomson effect occurs whenever a change of this kind takes place.
The quantitative measure of capability to produce the thermoelectric effects is the
thermoelectric power of the various conductor materials. This is the electric voltage,
expressed either relative to some reference substance, usually lead, or as an absolute value
measured against a superconducting material. Neither the theoretical study nor the
experimental measurements are far enough advanced to make a quantitative comparison of
theory with experimental results feasible at this time, but some of the general considerations
that are involved in the quantitative determination can be deduced from theoretical premises.
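For orientation, the quantities tabulated in the conventional literature can be summarized as follows. These are the standard definitions used in compiling the experimental values referred to above; they are quoted here for reference only, are not deductions from the present theory, and sign conventions vary between compilations:

\begin{align*}
  S &= \frac{dV}{dT} &&\text{(Seebeck coefficient, the thermoelectric power)} \\
  \dot{Q} &= \Pi_{AB}\,I, \qquad \Pi_{AB} = S_{AB}\,T &&\text{(Peltier heat at a junction)} \\
  \mu &= T\,\frac{dS}{dT} &&\text{(Thomson coefficient)}
\end{align*}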
The basic difference between the thermal motion of the electrons and that of the atoms of
matter is in the location of the initial level, or zero point. The zero for the thermal motion of
the atoms is the equilibrium condition, in which the atom is stationary in a three-dimensional
coordinate system of reference because the motion imparted to it by the progression of the
natural reference system is counterbalanced by the oppositely directed gravitational motion.
On the other hand, the zero for the thermal motion of the electrons, the magnitude of the
motion of the electrons in the absence of thermal motion, is the natural zero, which, in the
context of the stationary reference system, is unit speed, the speed of light. The measure of
the energy of the electron motion in matter is the deviation of the speed upward or
downward from this unit level.
The fact that the zero energy levels of the positive and negative electron motion are
coincident explains why each thermoelectric effect is a single phenomenon in which the zero
level is merely a point in a continuous succession of magnitudes, rather than a discontinuous
phenomenon such as the resistance to current flow. The difference between a small positive
electron speed and a small negative electron speed is itself relatively small, and within the
limits of what can be accomplished by a change in the conditions to which the conductor is
subject. Such a change in conditions may therefore reverse the motion. But a substance that
is a conductor in one temperature or pressure range does not become an insulator in another
range, because the positive zero is the equivalent of the negative infinity, rather than the
negative zero, in application to the atomic motion. There is, as a consequence, an
immense gap between a small positive thermal speed and a small negative speed.
The status of the electron motion as positive or negative is determined by the position that
the interacting atom occupies in its rotational group, in the same manner as the effective
electric displacement of the atom. Each of these rotational groups consists of two divisions
that are positive from the atomic standpoint, followed by two negative divisions. But since
the electron is a single rotating system, instead of a double system of the atomic type, the
various subdivisions of the atomic series are reduced to half size in application to the
electrons. The reversals from positive to negative therefore occur at every divisional
boundary in electronic processes, rather than at every second division.
Identification of individual elements as positive or negative from the thermoelectric
standpoint is necessarily subject to some qualifications because, as previously mentioned,
some elements are positive in one temperature range and negative in another, but a
reasonably good test of the theoretical conclusions can be accomplished by comparing the
sign of the thermoelectric power as observed at zero degrees C with the divisional status of
the elements for which thermoelectric data are available in one of the recent compilations.
Table 27 presents such a comparison, omitting the Division I elements of displacements 1
and 2.
Table 27: Thermoelectric Power
Division
I II III IV
Al+ Co- Cu+ W+ Si-
Ce+ Fe- Zn+ Ir+ Pb-
Ni- Ge+ Pt- Bi-
Mo+ Ag+ Au+
Pd- Cd+ Hg-
In+ Tl+
Sn+
The reason for the omissions from the tabulation is that the first two Division I elements of
each rotational group follow a distinctive pattern of their own. In these elements the factor
controlling the thermoelectric power is the magnetic rotational displacement, rather than the
electric displacement. Because of the single rotation of the electron, the range of magnetic
displacements from 1-1 to 4-4 becomes two divisions, with a reversal of sign at the
boundaries. For reasons of symmetry, the interior section from 2-2 to 3-3 constitutes one
division, in which the displacement one elements, sodium, potassium, and rubidium, have
negative thermoelectric voltages. The corresponding members of the outer groups, lithium
and cesium, have positive voltages. The displacement two elements may follow either the
magnetic or the electric pattern. One of those included in the reference tabulation, calcium,
has the same negative voltage as its neighbor, potassium, but magnesium, the corresponding
member of the next lower group, takes the positive voltage of the higher Division I
elements.
While the theoretical development that is being described in this work has not yet been
extended to the quantitative aspects of the thermoelectric effects thus far discussed, it is of
interest to note that the relation of the thermoelectric power to temperature has many of the
characteristics that we encountered in our previous examination of the response of other
properties of matter to temperature changes. This is well illustrated in Fig.16, which shows
the relation between temperature and the absolute thermoelectric power of platinum.
Without the captions it would be difficult to distinguish this diagram from one applicable to
thermal expansion, or to the specific heat of an element of one of the lower groups. This is
no accident. The curves look alike because the same basic factors are applicable in all of
these cases.
Fig. 16: Absolute Thermoelectric Power - Platinum
In the platinum curve the initial level is positive and the increments due to higher
temperature are negative. This behavior is reversed in such elements as tungsten, which has
a negative initial level and positive temperature increments up to a temperature of about
1400 K. Above this temperature there is a downward trend. This downward portion of the
curve (linear, as usual) is the second segment. At the present stage of the theoretical
development it appears probable that a general rule is involved here; that is, the second
segment of each curve, the multi-unit segment, is directed toward more negative values,
irrespective of the direction of the first (single-unit) segment.
Another thermoelectric effect is the conduction of heat. This is a process that is more
important from a practical standpoint than those effects that were considered earlier, and it
has therefore been given more attention in the present early stage of the development of the
theory of the universe of motion. Although the examination of the subject was a somewhat
incidental feature of the review of electric current phenomena undertaken in preparation for
the new edition of this work, it has produced a fairly complete picture of the heat
conductivity of the principal class of conducting metals, together with a general idea of the
manner in which other elements deviate from the general pattern. It was possible to achieve
these results in the limited time available because, as it turned out, the metallic conduction
of heat is not a complex process, involving difficult concepts such as phonons, orbitals,
relaxation processes, electron scattering, and so on, as seen by conventional physics, but a
very simple process, capable of being defined by equally simple mathematics, closely
related to the mathematical relations governing purely mechanical processes.
In the first situation discussed in this chapter, that in which two previously isolated
conductors of different composition are brought into contact, the electron energies in the two
conductors are necessarily unequal. As brought out there, the contact results in the
establishment of an equilibrium between a larger number of less energetic electrons in one
conductor and a smaller number of more energetic electrons in the other. Such an
equilibrium cannot be established between two sections of a homogeneous conductor
because in this case there is no influence that requires either the individual electron energy
or the electron concentration to take different values in different locations. If the
environmental conditions are uniform, both the energy distribution and the electron
concentration attain uniformity throughout the conductor.
However, if one end of a conductor composed of a material such as iron is heated, the
energy content of the electrons at that location is increased, and a force differential is
generated. Under the influence of the force gradient some of the hot electrons move toward
the cold end of the conductor. At that end the newly arrived electrons give up heat in the
process of reaching a thermal equilibrium with the atomic motion, and join the concentration
of cold electrons previously existing at this location. The resulting higher electron pressure
causes a flow of cold electrons back toward the hot end of the conductor. None of the
characteristic electrical effects are produced in this process, because the two oppositely
directed electron flows are equal in magnitude, and the effects produced by one current are
cancelled by those produced by the other. The only observable result is a transfer of heat
from the hot end of the conductor to the cold end.
It should be noted that no electrostatic potential difference is involved in either of these
current flows. This is one of the obstacles in the way of a simple explanation of heat
conduction in the context of conventional physical theory, where electric currents are
assumed to result from differences in potential. As explained in Chapter 9, our finding is that
all of the forces causing flow of current in the conductor under consideration, that due to the
excess energy of the hot electrons, that due to the increased concentration of electrons at the
cold end, and that due to electric voltage in general, are forces of a mechanical type, not
electrostatic forces.
If the material of the conductor is a substance such as copper in which the voltage decreases
(becomes less negative) as the temperature rises, the same result is produced in an inverse
manner. Here the effective energy of the electrons at the hot end of the conductor is lower
than that of the cold electrons. A flow of cold electrons into the hot region therefore takes
place. These electrons absorb heat from the environment to attain thermal equilibrium with
the matter of the conductor. The resulting increased concentration of hot electrons is then
relieved by a flow of some of these electrons back toward the cold end of the conductor.
Here, again, the two oppositely directed electron flows produce no net electrical effects.
The conduction of heat in metals by movement of electrons is essentially the same process
as the convection of heat by movement of gas or liquid molecules. In a closed system,
energetic molecules from a hot region move toward a cold region, while a parallel flow
carries an equal number of cold molecules back to the hot region. There is only one
significant difference between the two heat transfer processes. Because the fluid molecules
are subject to a gravitational effect, heat transfer by convection is relatively rapid if it is
assisted by a thermally caused difference in density, whereas it is much slower if the
diffusion of the hot molecules operates against the gravitational force.
The quantitative measure of the ability of the electron movement to conduct heat is known
as the thermal conductivity. Its magnitude is determined primarily (perhaps entirely) by the
effective specific heat and the temperature coefficient of resistivity, both of which are
inversely related to the conductivity. There is a possibility that it may also be affected to a
minor degree by some other influences not yet identified, but in any event, all of the
modifying influences other than the specific heat are independent of the temperature, within
the range of accuracy of the measurements of the thermal conductivity, and they can be
combined into one constant value for each substance. The thermal conductivity of the
substance is then this constant divided by the effective specific heat:
Thermal conductivity = k/cp (11-1)
As we saw in the earlier chapters, the specific heat of the conductor materials follows a
straight line relation to the temperature in the upper portion of the temperature range of the
solid state, and the resistance is linearly related to the temperature at all points. At these
higher temperatures, therefore, there is a constant relation between the thermal conductivity
and the electrical conductivity (the reciprocal of the resistivity). This relation is known as
the Wiedemann-Franz law.
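In the conventional formulation this law is usually written as a proportionality between the ratio of the two conductivities and the absolute temperature. That standard statement is reproduced here only for comparison with the relation described in the preceding paragraph:

\begin{equation*}
  \frac{\kappa}{\sigma} = L\,T, \qquad L \approx 2.44 \times 10^{-8}\ \mathrm{W\,\Omega\,K^{-2}}
\end{equation*}

where κ is the thermal conductivity, σ the electrical conductivity, and L the Lorenz constant.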
The relation expressed in this law breaks down at the lower temperatures, as soon as the
specific heat drops below the original straight line. However, the failure of the relation does
not occur as soon as would be expected from the normal specific heats of the metals, most of
which begin to drop away from the upper linear segment of the curve in the neighborhood of
room temperature. The reason for the extension of the high temperature linear relation to a
lower temperature in application to thermal conductivity is that the specific heat under the
conditions applicable to thermal conduction is not subject to all of the limitations that apply
to the transmission of thermal energy by contact between atoms of matter. Instead of going
through some intermediate steps, as in the measured specific heats, the effective specific
heat in thermal conduction continues on the high temperature basis down to the point where
multi-unit motion is no longer possible, and a transition to a single unit basis is mandatory.
The temperature designated as T0 in the previous discussion, the point at which the specific
heat curve reaches the zero level, is the same in thermal conduction as in the atomic
contacts, but in the interaction between the electrons and the atoms the single rotating
system of the electron adds one half unit to the one unit initial level of the double system of
the atom. The initial level of the modified specific heat curve is therefore 1¹/2 units (-1.98)
instead of the usual one unit (-1.32). This makes the slope of the curve somewhat steeper
than that of the initial segment of the normal specific heat curve defined in Chapter 5.
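A minimal numerical sketch of the modified curve just described, assuming, as the text indicates, that the first segment is a straight line running from the negative initial level at zero temperature to zero at T0. The function name, the linear parameterization, and the use of 32 K for T0 (the regular-pattern value quoted below for chromium) are illustrative choices only:

# Illustrative first segment of a specific heat curve: a straight line from
# the initial level at T = 0 to zero at T0.  Initial levels from the text:
# -1.32 for the normal atomic curve, -1.98 for the electron-modified curve
# that governs thermal conduction.

def first_segment(T, T0, initial_level):
    """Linear first segment of a specific heat curve (illustrative form only)."""
    return initial_level * (1.0 - T / T0)

T0 = 32.0                           # K; example value only
for T in (0.0, 16.0, 32.0, 48.0):
    normal = first_segment(T, T0, -1.32)
    modified = first_segment(T, T0, -1.98)
    print(T, round(normal, 2), round(modified, 2))
# The modified curve starts lower (-1.98 instead of -1.32) and rises more
# steeply, while both pass through zero at T0, as stated in the text.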
The deviation of the thermal conductivity from the constant relation expressed by the
Wiedemann-Franz law is the problem with which any theory of thermal conductivity has to
deal, and since the explanation derived from the Reciprocal System of theory attributes this
deviation to the specific heat pattern, the best way to demonstrate the validity of the
explanation appears to be to work backward from the measured thermal conductivities
(reference 21), calculate the corresponding theoretical specific heats from equation 11-1, and
then compare these calculated specific heats with the theoretical pattern just described.
Figure 17: Effective Specific Heat in Thermal Conductivity
Fig.17 is a comparison of this kind for the element copper, for which the numerical
coefficient of equation 11-1 is 24.0, where thermal conductivities are expressed in watts cm⁻²
deg⁻¹. The solid lines in this diagram represent the specific heat curve applicable to the
thermal conductivity of copper, as defined in the preceding discussion. For comparison, the
first segment of the normal specific heat curve of this element is shown as a dashed line. As
in the illustrations of specific heat curves in the preceding chapters, the high temperature
extension of the upper segment of the curve is omitted in order to make it possible to show
the significant features of the curve more clearly. As the diagram indicates, the specific
heats calculated from the measured thermal conductivities follow the theoretical lines within
the range of the probable experimental errors, except at the lower and upper ends of the first
segment, where transition curves of the usual kind reflect the deviation of the specific heat
of the aggregate from that of the individual atoms.
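The "working backward" procedure is simply a matter of inverting equation 11-1. A short sketch, assuming the copper coefficient of 24.0 quoted above; the conductivity figures in the loop are placeholders, not measured values:

# From equation 11-1, conductivity = k / c_p, so the effective specific heat
# implied by a measured thermal conductivity is c_p = k / conductivity.
# k = 24.0 is the copper coefficient quoted in the text.

K_COPPER = 24.0

def effective_specific_heat(thermal_conductivity, k=K_COPPER):
    """Effective specific heat implied by a measured thermal conductivity."""
    return k / thermal_conductivity

for kappa in (4.0, 5.0, 6.0):       # hypothetical conductivity readings
    print(kappa, round(effective_specific_heat(kappa), 2))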
Similar data for lead and aluminum are presented in Fig.18.
Figure 18: Effective Specific Heat in Thermal Conductivity
The pattern followed by the three elements thus far considered may be regarded as the
regular behavior, the one to which the largest number of the elements conform. No full scale
investigation of the deviations from this basic pattern has yet been undertaken, but an idea of
the nature of these deviations can be gained from an examination of the effective specific
heat of chromium, Fig.19. Here the specific heat and temperature values in the low
temperature range have only half the usual magnitude. The negative initial specific heat
level is -1.00 rather than -2.00, the temperature of zero specific heat is 16 K rather than 32
K, and the initial level of the upper segment of the curve is 2.62 instead of 5.23. But this
upper segment of the modified curve intersects the upper segment of the normal curve at the
Neel point, 311 K, and above this temperature the effective specific heat of chromium in
thermal conductivity follows the regular specific heat pattern as defined in Chapter 5.
Figure 19: Effective Specific Heat in Thermal Conductivity
Another kind of deviation from the regular pattern is seen in the curve for antimony, also
shown in Fig.19. Here the initial level of the first segment is zero instead of the usual
negative value. The initial level of the second segment is the half sized value 2.62.
Antimony thus combines the two types of deviation that have been mentioned.
As indicated earlier, it has not yet been determined whether any factors other than the
resistivity coefficient enter into the constant k of equation 11-1. Resolution of this issue is
complicated by the wide margin of uncertainty in the thermal conductivity measurements.
The authors of the compilation from which the data used in this work were taken estimate
that these values are correct only to within 5 to 10 percent in the greater part of the
temperature range, with some uncertainties as high as 15 percent. However, the agreement
between the plotted points in Figs. 17, 18, and 19, and the corresponding theoretical curves
shows that most of the data represented in these diagrams are more accurate than the
foregoing estimates would indicate, except for the aluminum values in the range from 200 to
300 K.
In any event, we find that for the majority of the elements included in our preliminary
examination, the product of the empirical value of the factor k in equation 11-1 and the
temperature coefficient of resistivity is between 0.14 and 0.18. Included are the best known
and most thoroughly studied elements, copper, iron, aluminum, silver, etc., and a range of k
values extending all the way from the 25.8 of silver to 1.1 in antimony. This rather strongly
suggests that when all of the disturbing influences such as impurity effects are removed, the
empirical factor k in equation 11-1 can be replaced by a purely theoretical value k/r, in
which a theoretically derived conversion constant, k, in the neighborhood of 0.15 watts cm⁻²
deg⁻¹ is divided by a theoretically derived coefficient of resistivity.
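Taken at face value, the quoted range of the product implies a rough size for the temperature coefficient of resistivity of each metal. A small arithmetic check using the two extreme k values mentioned above, silver and antimony; the implied coefficients are intended only as order-of-magnitude indications:

# The text reports that k (equation 11-1) multiplied by the temperature
# coefficient of resistivity falls between 0.14 and 0.18 for most of the
# elements examined.  Dividing that range by the quoted k values gives the
# implied coefficient for the two extremes.

PRODUCT_LOW, PRODUCT_HIGH = 0.14, 0.18

for element, k in (("silver", 25.8), ("antimony", 1.1)):
    low, high = PRODUCT_LOW / k, PRODUCT_HIGH / k
    print(f"{element}: implied coefficient between {low:.4f} and {high:.4f}")
# silver:   roughly 0.0054 to 0.0070
# antimony: roughly 0.1273 to 0.1636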
The impurity effects that account for much of the uncertainty in the general run of thermal
conductivity measurements are still more prominent at very low temperatures. At least on
first consideration, the theoretical development appears to indicate that the thermal
conductivity should follow the same kind of a probability curve in the region just above zero
temperature as the properties discussed in the preceding chapters. In many cases, however,
the measurements show a minimum in the conductivity at some very low temperature, with
a rising trend below this level. On the other hand, some of the elements that are available in
an extremely pure state show little or no effect of this kind, and follow curves similar to
those encountered in the same temperature range during the study of other properties. It is
not unlikely that this will prove to be the general rule when more specimens are available in
a pure enough state. It should be noted that an ordinary high degree of purity is not enough.
As the data compilers point out, the thermal conductivities in this very low temperature
region are "highly sensitive to small physical and chemical variations of the specimens."
CHAPTER 12
Scalar Motion
It was recognized from the beginning of the development of the theory of the universe of
motion that the basic motions are necessarily scalar. This was stated specifically in the
first published description of the theory, the original (1959) edition of The Structure of
the Physical Universe. It was further recognized, and emphasized in that 1959
publication, that the rotational motion of the atoms of matter is one of these basic scalar
motions, and therefore has an inward translational effect, which we can identify as
gravitation. Throughout the early stages of the theoretical development, however, there
was some question as to the exact status of rotation in a system of scalar motions,
inasmuch as rotation, as ordinarily conceived, is directional, whereas scalar quantities, by
definition, have no directions. At first this issue was not critical, but as the development
of the theory was extended into additional physical areas, more types of motion of a
rotational character were encountered, and it became necessary to clarify the nature of
scalar rotation. A full scale investigation of the subject was therefore undertaken, the
results of which were reported in The Neglected Facts of Science, published in 1982.
The existence of scalar motion is not recognized by present-day physics. Indeed, motion
is usually defined in such a way that scalar motion is specifically excluded. This type of
motion enters into observable physical phenomena in a rather unobtrusive manner, and it
is not particularly surprising that its existence remained unrecognized for a long time.
However, a quarter of a century has elapsed since that existence was brought to the
attention of the scientific community in the first published description of the universe of
motion, and it is hard to understand why so many individuals still seem unable to
recognize that there are several observable types of motion that cannot be other than
scalar.
For instance, the astronomers tell us that the distant galaxies are all moving radially
outward away from each other. The full significance of this galactic motion is not
apparent on casual consideration, as we see each of the distant galaxies moving outward
from our own location, and we are able to locate each of the observed motions in our
spatial reference system in the same manner as the familiar motions of our everyday
experience. But the true character of this motion becomes apparent when we examine the
relation of our Milky Way galaxy to this system of motions. Unless we take the stand that
our galaxy is the only stationary object in the universe, an assumption that few scientists
care to defend in this modern era, we must recognize that our galaxy is moving away
from all of the others; that is, it is moving in all directions. And since it is conceded that
our galaxy is not unique, it follows that all of the widely separated galaxies are moving
outward in all directions. Such a motion, which takes place uniformly in all directions has
no specific direction. It is completely defined by a magnitude (positive or negative), and
is therefore scalar.
A close examination of gravitation shows that the gravitational motion is likewise scalar,
differing from the motion of the galaxies only in that it is negative (inward) rather than
positive (outward). The resemblance to the motion of the galaxies can easily be seen if
we consider a system of gravitating objects isolated in space–perhaps a group of galaxies
relatively close to each other. From our knowledge of the gravitational effects we can
deduce that each of these objects will move inward toward all of the others. Here again
the motion is scalar. Each object is moving inward in all directions.
A small-scale example of the same kind of motion can be seen in the motion of spots on
the surface of an expanding balloon, often used as an analogy by those who undertake to
explain the nature of the motions of the distant galaxies. Here, too, each individual is
moving outward from all others. If the expansion is terminated, and succeeded by a
contraction, the motions are reversed, and each spot then moves inward toward all others,
as in the gravitational motion.
In the case of the expanding balloon there is a known physical mechanism that is causing
the expansion, and our understanding of this mechanism makes it evident that all
locations on the balloon surface are moving. The spots on this surface have no motion of
their own. They are merely being carried along by the movement of the locations that
they occupy. According to the astronomers' current view, the recession of the distant
galaxies is the same kind of a process. As Paul Davies explains:
Many people (including some scientists) think of the recession of the galaxies as due to the explosion of a
lump of matter into a pre-existing void, with the galaxies as fragments rushing through space. This is quite
wrong… The expanding universe is not the motion of the galaxies through space away from some centre,
but is the steady expansion of space. 22
Here, again, it is the locations that are moving, carrying the galaxies along with them.
But in this case there is no known physical mechanism to account for the movement. Like
the expansion of the balloon, the "steady expansion of space" is merely a description, not
an explanation, of the movement. All that the observations tell us is that an outward
scalar motion of physical locations is taking place, carrying the galaxies with it.
The postulates of the Reciprocal System of theory, the theory of the universe of motion,
generalize this type of motion. They define a universe in which scalar motion of physical
locations is the basic form of motion from which all physical entities and phenomena are
derived. The manner in which this type of motion manifests itself to observation therefore
has an important bearing on the nature of fundamental physical phenomena.
This situation is a good example of the way in which important information is often
overlooked because no one spends the time and effort that are required in order to make a
thorough study of a seemingly unimportant observation. It has long been recognized that
the motion of spots on the surface of an expanding balloon is, in some way, different
from the ordinary motions of our everyday experience. The mere fact that this balloon
motion is so widely used as an analogy in explaining the recession of the distant galaxies
is clear evidence of this general recognition. But the galaxies seem to be a special case,
and expanding balloons do not play any significant part in normal physical activity.
Consequently, no one has been much interested in the physics of these objects, and this
admittedly unique kind of motion was never subjected to a critical examination prior to
the investigation of scalar motion that was undertaken in the course of the theoretical
development reported in the several volumes of this work. The finding that the
fundamental motion of the universe is scalar revolutionizes this situation. The motions of
the galaxies, gravitating objects, and spots on the surface of an expanding balloon are
obviously the kind of motions–scalar motions–that our theory identifies as fundamental.
Scientists are usually, with ample justification, reluctant to accept a hypothesis that
postulates the existence of phenomena that are unknown to observation. It should
therefore be emphasized that scalar motion is not an unobserved phenomenon; it is an
observed phenomenon that has not heretofore been recognized in its true character. Once
the motions identified in the foregoing paragraphs have been critically examined, and
their scalar character has been recognized, the existence of scalar motion is no longer a
hypothesis; it is a demonstrated physical fact. The existence of other scalar motions, as
required by the theory of the universe of motion is then a natural and logical corollary,
and those observed phenomena that have the theoretical properties of scalar motion can
legitimately be identified as scalar motions.
A one-dimensional scalar motion of a physical location is defined by a magnitude, and
can therefore be represented one-dimensionally as a point, or an assemblage of points,
moving along a straight line. Introduction of a reference point–that is, coupling the
motion to the reference system at a specific point in that system–enables distinguishing
between positive motion, outward from the reference point, and negative motion, inward
toward the reference point. The direction imputed to the motion may be a constant
direction, as in the case of the translational motion of the photon, the direction of which is
determined by chance at the time of emission, unless external factors intervene. The key
point disclosed by our investigation is that the direction is not necessarily constant. A
discontinuous, or non-uniform, change of direction could be maintained only by a
repeated application of an external force, but it has been known from the time of Galileo
that a continuous and uniform change of position or direction is just as permanent and
just as self-sustaining as a condition of rest. Our finding merely extends this principle to
the assignment of direction to scalar motion.
As an illustration, let us consider the motion of point X, originating at point A, and
initially proceeding in the direction AB in three-dimensional space. Then let us assume
that line AB is rotated around an axis perpendicular to it, and passing through point A.
This does not change the inherent nature or magnitude of the motion of point X, which is
still moving radially outward from point A at the same speed as before. What has been
changed is the direction of the movement, which is not a property of the motion itself, but
a feature of the relation between the motion and three-dimensional space. Instead of
continuing to move outward from A in the direction AB, point X now moves outward in
all directions in the plane of rotation. If that plane is then rotated around another
perpendicular axis, the outward motion of point X is distributed over all directions in
space. It is then a rotationally distributed scalar motion.
The results of such a distributed scalar motion are totally different from those produced
by a combination of vectorial motions in different directions. The combined effects of the
magnitudes and directions of vectorial motions can be expressed as vectors. The results
of addition of these vectors are highly sensitive to the effects of direction. For example, a
vectorial motion AB added to a vectorial motion AB‘ of equal magnitude, but
diametrically opposite direction, produces a zero resultant. Similarly, vectorial motions of
equal magnitude in all directions from a given point add up to zero. But a scalar motion
retains the same positive (outward) or negative (inward) magnitude regardless of the
manner in which it is directionally distributed.
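The contrast can be checked numerically. In the sketch below, an illustration only, with a uniform sampling of directions in a plane standing in for a full rotational distribution, equal vectorial motions distributed over all directions sum to a zero resultant, while the scalar magnitude of the distributed motion is unchanged by the distribution:

import numpy as np

# Unit "motions" distributed uniformly over directions in a plane.
n = 360
angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
vectors = np.stack([np.cos(angles), np.sin(angles)], axis=1)

# Vectorial addition: equal magnitudes in all directions cancel.
resultant = vectors.sum(axis=0)
print(np.allclose(resultant, 0.0))              # True

# Scalar addition: the outward magnitude survives the distribution.
print(np.linalg.norm(vectors, axis=1).mean())   # 1.0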
None of the types of scalar motion that have been identified can be represented in a fixed
spatial reference system in its true character. Such a reference system cannot represent
simultaneous motion in all directions. Indeed, it cannot represent motion in more than
one direction. In order to represent a system of two or more scalar motions in a spatial
reference system it is necessary to define a reference point for the system as a whole; that
is, the scalar system must be coupled to the reference system in such a way that one of
the moving locations in the scalar system is arbitrarily defined as motionless (from the
scalar standpoint) relative to the reference system. The direction imputed to the motion of
each of the other objects, or physical locations, in the scalar system is then its direction
relative to the reference point.
For example, if we denote our galaxy as A, the direction of the motion of distant galaxy
X, as we see it, is AX. But observers in galaxy B, if there are any, see galaxy X as
moving in the different direction BX, those in galaxy C see the direction as CX, and so
on. The significance of this dependence of the direction on the reference point can be
appreciated when it is contrasted with the corresponding aspect of vectorial motion. If an
object X is moving vectorially in the direction AX when viewed from location A, it is
also moving in this same direction AX when viewed from any other location in the
reference system.
It should be understood that the immobilization of the reference point in the reference
system applies only to the representation of the scalar motion. There is nothing to
prevent an object located at the reference point, the reference object we may call it, from
acquiring an additional motion of a vectorial character. For example, the expanding
balloon may be resting on the floor of a moving vehicle, in which case the reference point
is in motion vectorially. Where an additional motion of this nature exists, it is subject to
the same considerations as any other vectorial motion.
The coupling of a system of scalar motions to a fixed reference system at a reference
point does not alter the rate of separation of the members of the scalar system. The
arbitrary designation of the reference point as motionless (from the scalar standpoint)
therefore makes it necessary to attribute the motion of the reference point, or object, to
the other points or objects in the system.
This conclusion that the observed change of position of an object B is due, in part, to the
motion of some different object A may be hard for those who are thinking in terms of the
conventional view of the nature of motion to accept, but it can easily be verified by
consideration of a specific example. Any two spots on the surface of an expanding
balloon, for instance, are moving away from each other; that is, they are both moving.
While spot X moves away from spot Y, spot Y is coincidentally moving away from spot
X. Placing the balloon in a reference system does not alter these motions. The balloon
continues expanding in exactly the same way as before. The distance XY continues to
increase at the same rate, but if X is the reference point, it is motionless in the reference
system (so far as the scalar motion is concerned), and the entire increase in the distance
XY, including that due to the motion of X, has to be attributed to the motion of Y.
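The same point can be illustrated with a few lines of arithmetic. Assuming a uniform expansion in which every separation grows by the same factor (the coordinates and the factor below are arbitrary), re-describing the system with spot X held fixed leaves every separation unchanged, but the entire increase in each distance from X is then attributed to the other spots:

import numpy as np

# Three spots on an expanding surface.
points = np.array([[0.0, 0.0],    # X
                   [1.0, 0.0],    # Y
                   [0.0, 2.0]])   # Z
factor = 1.5                       # uniform expansion factor (arbitrary)

centroid = points.mean(axis=0)
expanded = centroid + factor * (points - centroid)

# The same expansion re-expressed with X taken as the stationary reference point.
x_fixed = expanded - (expanded[0] - points[0])

def separations(p):
    return [round(float(np.linalg.norm(p[i] - p[j])), 3)
            for i, j in ((0, 1), (0, 2), (1, 2))]

print(separations(expanded))                # every separation grows by 1.5
print(separations(x_fixed))                 # identical separations in the X-fixed description
print(np.allclose(x_fixed[0], points[0]))   # True: X appears motionless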
The same is true of the motions of the distant galaxies. The recession that is measured is
merely the increase in distance between our galaxy and the one that is receding from us.
It follows that a part of the increase in separation that we attribute to the recession of the
other galaxy is actually due to motion of our own galaxy. This is not difficult to
understand when, as in the case of the galaxies, the reason why objects appear to move
faster than they actually do is obviously the arbitrary assumption that our location is
stationary. What is now needed is a recognition that this is a general proposition. The
same result follows whenever a moving object is arbitrarily taken as stationary for
reference purposes. The motion of any reference point of a scalar motion is seen, by the
reference system, in the same way in which we view our motion in the galactic system;
that is, the motion that is frozen by the reference system is seen as motion of the distant
objects.
This transfer of motion from one object to another by reason of the manner in which
scalar motion is represented in the reference system has no significant consequences in
the galactic situation, as it makes no particular difference to us whether galaxy X is
receding from us, or we are receding from it, or both. But the questions as to which
objects are actually moving, and how much they are moving, have an important bearing
on other scalar motions, such as gravitation. With the benefit of the information now
available, it is evident that the rotation of the atoms of matter described in Volume I is a
rotationally distributed negative (inward) scalar motion. By virtue of that motion, each
atom, irrespective of how it may be moving, or not moving, vectorially, is moving inward
toward all other atoms of matter. This inward motion can obviously be identified as
gravitation. Here, then, we have the answer to the question as to the origin of gravitation.
The same thing that makes an atom an atom–the scalar rotation–causes it to gravitate.
Although Newton specifically disclaimed making any inference as to the mechanism of
gravitation, the fact that there is no time term in his equation implies that the gravitational
effect is instantaneous. This, in turn, leads to the conclusion that gravitation is "action at a
distance," a process in which one mass acts upon another distant mass without an
intervening connection. There is no experimental or observational evidence contradicting
the instantaneous action. As noted in Volume I, even in astronomy, where it might be
presumed that any inaccuracy would be serious, in view of the great magnitudes
involved, "Newtonian theory is still employed almost exclusively to calculate the motions
of celestial bodies." 23
However, instantaneous action at a distance is philosophically unacceptable to most
physicists, and they are willing to go to almost any lengths to avoid conceding its
existence. The hypothesis of transmission through a "luminiferous ether" served this
purpose when it was first proposed, but as further studies were made, it became obvious
that no physical substance could have the contradictory properties that were required of
this hypothetical medium.
Einstein's solution was to abandon the concept of the ether as a "substance"–something
physical–and to introduce the idea of a quasi-physical entity, a phantom medium that is
assumed to have the capabilities of a physical medium without those limitations that are
imposed by physical existence. He identifies this phantom medium with space, but
concedes that the difference between his space and the ether is mainly semantic. He
explains, "We shall say our space has the physical property of transmitting waves, and so
omit the use of a word (ether) we have decided to avoid." 24 Since this space (or ether)
must exert physical effects without being physical, Einstein has difficulty defining its
relation to physical reality. At one time he asserts that "according to the general theory of
relativity space is endowed with physical qualities," 25 while in another connection he
says that "The ether of the general theory of relativity [which he identifies as space] is a
medium which is itself devoid of all mechanical and kinematical qualities." 26
Elsewhere, in a more candid statement, he concedes, in effect, that his explanations are
not persuasive, and advises us just to "take for granted the fact that space has the physical
property of transmitting electromagnetic waves, and not to bother too much about the
meaning of this statement." 27
Einstein's successors have added another dimension to the confusion of ideas by
retaining this concept of space as quasi-physical, something that can be "curved" or
otherwise manipulated by physical influences, but transferring the "ether-like" functions
of Einstein's "space" to "fields." According to this more recent view, matter exerts a
gravitational effect that creates a gravitational field, this field transmits the effect at the
speed of light, and finally the field acts upon the distant object. Various other fields–
electric, magnetic, etc.–are presumed to coexist with the gravitational field, and to act in a
similar manner.
The present-day "field" is just as intangible as Einstein's "space." There is no physical
evidence of its existence. All that we know is that if a test object of an appropriate type is
placed within a certain region, it experiences a force whose magnitude can be correlated
with the distance to the location of the originating object. What existed before the test
object was introduced is wholly speculative. Faraday's hypothesis was that the field is a
condition of stress in the ether. Present-day physicists have transferred the stress to space
in order to be able to discard the ether, a change that has little identifiable meaning. As R.
H. Dicke puts it, "One suspects that, with empty space having so many properties, all that
had been accomplished in destroying the ether was a semantic trick. The ether had been
renamed the vacuum." 28 P. W. Bridgman, who reviewed this situation in considerable
detail, arrived at a similar conclusion. The results of analysis, he says, "suggest that the
role played by the field concept is that of an intellectual dummy, which cancels out of the
final result." 29
The theory of the universe of motion gives us a totally different view of this situation. In
this universe the reality is motion. Space and time have a real existence only where, and
to the extent that, they actually exist as components of motion. On this basis, extension
space, the space that is represented by the conventional reference system, is no more than
a frame of reference for the spatial magnitudes and directions of the entity, motion, that
actually exists. It follows that extension space cannot have any physical properties. It
cannot be "curved" or modified in any other way by physical means. Of course, the
reference system, being nothing but a human contrivance, could be altered conceptually,
but such a change would have no physical significance.
The status of extension space as a purely mental concept devised for reference purposes,
rather than a physical entity, likewise means that this space is not a container, or
background, for the physical activity of the universe, as assumed by conventional
science. In that conventional view, everything physically real is contained within the
space and time of the spatio-temporal reference system. When it becomes necessary to
postulate something outside these limits in order to meet the demands of theory
construction, it is assumed that such phenomena are, in some way, unreal. As Werner
Heisenberg puts it, they do not "exist objectively in the same sense as stones or trees
exist." 30
The development of the theory of the universe of motion now shows that the
conventional spatio-temporal system of reference does not contain everything that is
physically real. On the contrary, it is an incomplete system that is not capable of
representing the full range of motions which exist in the physical universe. It cannot
represent motion in more than one scalar dimension; it cannot represent a scalar system in
which all elements are moving; nor can it correctly represent the position of an individual
object that is moving in all directions simultaneously (that is, an object whose motion is
scalar, and therefore has no specific direction). Many of the other shortcomings of this
reference system will not become apparent until we examine the effects of very high
speed motion in Volume III, but those that have been mentioned have a significant
impact on the phenomena that we are now examining.
The inability to represent more than one dimension of scalar motion is a particularly
serious deficiency, inasmuch as the postulated three-dimensionality of the universe of
motion necessarily permits the existence of three dimensions of motion. Only one
dimension of vectorial motion is possible, because all three dimensions of space are
required in order to represent the directions of this one-dimensional motion, but scalar
motion has magnitude only, and a three-dimensional universe can accommodate scalar
motion in all three of its dimensions.
Since the conventional reference system cannot represent all of the distributed scalar
motions, and present-day science does not recognize the existence of any motions that
cannot be represented in that system, it has been necessary for the theorists to make some
arbitrary assumptions as a means of compensating for the distortion of the physical
picture due to this deficiency of the reference system. One of the principal steps taken in
this direction is the introduction of the concept of ―fundamental forces,‖ autonomous
entities that exist in their own right, and not as properties of something more basic. The
present tendency is to regard these so-called fundamental forces as the sources of all
physical activity, and the currently popular goal of the theoretical physicists, the
formulation of a ―grand unified theory,‖ is limited to finding a common denominator of
these forces.
Gravitation is, in a way, an exception, as the currently popular hypothesis as to the nature
of the gravitational force, Einstein‘s general theory of relativity, does attempt an
explanation of its origin. According to this theory, the gravitational force is due to a
distortion of space resulting from the presence of matter. So far as can be determined
from the scientific literature, no one has the slightest idea as to how such a distortion of
space could be accomplished. Arthur Eddington expressed the casual attitude of the
scientific community toward this issue in the following statement: ―We do not ask how
mass gets a grip on space-time and causes the curvature which our theory postulates.‖ 31
But unless the question is asked, the answer is not forthcoming. In Newton‘s theory the
gravitational force originates from mass in a totally unexplained manner. In Einstein‘s
theory it is a result of a distortion, or ―curvature,‖ of space that is produced by mass in a
totally unexplained manner. Thus, whatever its other merits may be, the current theory
(general relativity) accomplishes no more toward accounting for the origin of the
gravitational force than its predecessor.
In order to arrive at such an explanation we need to recognize that force is not an
autonomous entity; it is a property of motion. The motion of an individual mass unit is
measured in terms of speed (or velocity). The total amount of motion in a material
aggregate is then the product of the speed and the number of mass units, a quantity
formerly called ―quantity of motion,‖ but now known as momentum. The rate of change
of the motion of the individual unit is acceleration; that of the total quantity of motion is
force. The force is thus the total quantity of acceleration.
The significance of this, in the present connection, is that force not only produces an
acceleration when applied to a mass (a fact that is currently recognized), it is an
acceleration prior to that application (a fact that is currently overlooked or disregarded).
In other words, the acceleration is simply transferred. For example, when a rocket is
fired, the total ―quantity of acceleration‖ available for application to the rocket (the force)
is the sum of the quantities of acceleration of the individual particles of the gas produced
from the propellant. The division of this total quantity among the mass units of the rocket
determines the acceleration of each individual unit, and therefore of the rocket as a
whole.
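A rough numerical sketch of this rocket example, with purely hypothetical values and the conventional thrust relation, may help fix the idea that the force is the total ―quantity of acceleration‖ delivered per unit time, which is then shared among the mass units of the rocket.

    # Hedged illustration only; every figure below is hypothetical, not taken from the text.
    m_particle = 1.0e-3       # kg of exhaust gas per particle unit
    n_per_second = 2.0e5      # particle units expelled each second
    v_exhaust = 2500.0        # m/s imparted to each particle unit

    thrust = n_per_second * m_particle * v_exhaust   # total quantity of acceleration (newtons)
    m_rocket = 1.0e4                                 # kg, mass of the rocket
    a_rocket = thrust / m_rocket                     # share received by each unit of rocket mass
    print(thrust, a_rocket)                          # 500000.0 N and 50.0 m/s^2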
Since force is a property of a motion rather than an autonomous entity, it follows that
wherever there is a force there must also be a motion of which the force is a property.
This leads to the conclusion that a gravitational force field is a region of space in which
gravitational motions exist. In the context of conventional physical thought this
conclusion is unacceptable, since there are no moving entities in an unoccupied field.
The information about the nature of scalar motion developed in the earlier pages clarifies
this situation. A material aggregate is moving gravitationally in all directions, but the
conventional spatial reference system is unable to represent a system of motions of this
nature in its true character. As previously indicated, where the scalar motion AB of object
A (the massive object now under consideration) toward object B (the test mass) cannot be
represented in the reference system because of the limitations of that system, this motion
AB is shown as a motion BA; that is, a motion of the test mass B toward the massive
object A, constituting an addition to the actual motion of that test mass. Because of the
spherical distribution of the scalar motions of the atoms of mass A, the magnitude of the
motion imputed to mass B depends on its distance from A, and is inversely proportional
to the square of that distance. Thus each point in the region surrounding A corresponds to
a specific fraction of the motion of A, representing the amount of motion that would be
imputed to a unit mass, if such a mass were actually placed at that particular point.
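A brief sketch, assuming nothing more than that a fixed total of motion is spread uniformly over a sphere of radius d, reproduces this inverse square dependence; the unit total used here is an arbitrary placeholder.

    import math

    total_motion = 1.0                    # arbitrary placeholder for the total scalar motion of A
    for d in (1.0, 2.0, 4.0, 8.0):
        imputed = total_motion / (4 * math.pi * d**2)   # share per unit area at distance d
        print(d, imputed)                 # the imputed amount falls off as 1/d^2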
Here, then, is the explanation of the gravitational field (and, by extension, all other fields
of the same nature). The field is not something physically real in the space, not, as Einstein
asserted, ―for the modern physicist as real as the chair on which he sits.‖ 32 Nor is
it, as Faraday surmised, a stress in the ether. Neither is it some kind of a change in the
properties of space, as envisioned by present-day theorists. It is simply the pattern of the
magnitudes of the motions of one mass that have to be imputed to other masses because
of the inability of the reference system to represent scalar motion as it actually exists.
No doubt this assertion that what appears to be a motion of one object is actually, in large
part, a motion of a different object is somewhat confusing to those who are accustomed to
conventional ideas about motion. But once it is realized that scalar motion exists, and
because it has no inherent direction it may be distributed over all directions, it is evident
that the reference system cannot represent this motion in its true character. In the
preceding analysis we have determined just how the reference system does represent this
motion that it cannot represent correctly.
This may appear to be a return to the action at a distance that is so distasteful to most
scientists, but, in fact, the apparent action on distant objects is an illusion created by the
introduction of the concept of autonomous forces to compensate for the shortcomings of
the reference system. If the reference system were capable of representing all of the
scalar motions in their true character, there would be no problem. Each mass would then
be seen to be pursuing its own course, moving inward in space independently of other
objects.
In this case, accepted scientific theory has gone wrong because prejudice supported by
abstract theory has been allowed to override the results of physical observations. The
observers keep calling attention to the absence of evidence of the finite propagation time
that current theory ascribes to the gravitational effect, as in this extract from a news
report of a conference at which the subject was discussed:
When it [the distance] is astronomical, the difficulty arises that the intermediaries need a measurable time
to cross, while the forces in fact seem to appear instantaneously. 33
But it is assumed that we must accept either a finite propagation time or action at a
distance, which, as Bridgman once said, is ―a concept to which many physicists have a
violent allergy.‖ 34 Einstein‘s theory, which supports the propagation hypothesis, has
therefore been accorded a status superior to the observations. The following statement
from a physicist brings this point out explicitly:
Nowadays we are also convinced that gravitation progresses with the speed of light. This conviction,
however, does not stem from a new experiment or a new observation, it is a result solely of the theory of
relativity.35
This is another example of a practice that has been the subject of critical comment in
several different connections in the preceding pages of this and the earlier volume.
Overconfidence in the existing body of scientific knowledge has led the investigators to
assume that all alternatives in a given situation have been considered. It is then concluded
that an obviously flawed hypothesis must be accepted, in spite of its shortcomings,
because ―there is no other way.‖ Time and again in the earlier pages, development of the
theory of the universe of motion has shown that there is another way, one that is free
from the objectionable features. So it is in this case. It is not necessary either to contradict
observation by assuming a finite speed of propagation or to accept action at a distance.
Some of the most significant consequences of the existence of scalar motion are related to
its dimensions. This term is used in several different senses, two of which are utilized
extensively in this work. When physical quantities are resolved into component quantities
of a fundamental nature, these component quantities are called dimensions. Identification
of the dimensions, in this sense, of the basic physical quantities has been an important
feature of the development of theory in the preceding pages. In a different sense of the
term, it is generally recognized that space is three-dimensional.
Conventional physics recognizes motion in three-dimensional space, and represents
motions of this nature by lines in a three-dimensional spatial coordinate system. But these
motions, which exist in three dimensions of space, are only one-dimensional motions.
Each individual motion of this kind can be characterized by a vector, and the resultant of
any number of these vectors is a one-dimensional motion defined by the vector sum. All
three dimensions of the spatial reference system are required for the representation of
one-dimensional motion, and there is no way by which the system can indicate a change
of position in any other dimension. However, the postulate that the universe of motion is
three-dimensional carries with it the existence of three dimensions of motion. Thus there
are two dimensions of motion in the physical universe that cannot be represented in the
conventional spatial reference system.
In common usage the word ―dimensions‖ is taken to mean spatial dimensions, and
reference to three dimensions is ordinarily interpreted geometrically. It should be
realized, however, that the geometric pattern is merely a graphical representation of the
relevant physical magnitudes and directions. From the mathematical standpoint an n-
dimensional quantity is one that requires n independent magnitudes for a complete
definition. Thus a scalar motion in three dimensions, the maximum in a three-
dimensional universe, is defined in terms of three independent magnitudes. One of these
magnitudes–that is, the magnitude of one of the dimensions of scalar motion–can be
further divided dimensionally by the introduction of directions relative to a spatial
reference system. This expedient resolves the one-dimensional scalar magnitude into
three orthogonally related sub-magnitudes, which, together with the directions, constitute
vectors. But no more than one of the three scalar magnitudes that define a three-
dimensional scalar motion can be sub-divided vectorially in this manner.
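The distinction can be illustrated with a short sketch; the magnitude and direction used are arbitrary. One scalar magnitude, combined with a direction in the reference system, resolves into three orthogonal components from which the original magnitude is recovered, while the remaining scalar magnitudes stay as bare numbers.

    import math

    magnitude = 5.0                                  # the one scalar dimension given a direction
    direction = (1/math.sqrt(3),) * 3                # assumed unit vector in the reference system
    components = tuple(magnitude * c for c in direction)    # the three vectorial sub-magnitudes
    recovered = math.sqrt(sum(c * c for c in components))
    print(components, round(recovered, 6))           # the recovered magnitude is again 5.0

    other_scalar_magnitudes = (2.0, 7.0)             # arbitrary; these admit no such resolution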
Here is a place where recognition of the existence of scalar motion changes the physical
picture radically. As long as motion is viewed entirely in vectorial terms–that is, as a
change of position relative to the spatial reference system–there can be no motion other
than that represented in that system. But since scalar motion has magnitude only, there
can be motion of this character in all three of the existing dimensions of the physical
universe. It should be emphasized that these dimensions of scalar motion are
mathematical dimensions. They can be represented geometrically only in part, because of
the limitations of geometrical representation. In order to distinguish these mathematical
dimensions of motion from the geometric dimensions of space in which one dimension of
the motion takes place, we are using the term ―scalar dimension‖ in a manner analogous
to the use of the term ―scalar direction‖ in the earlier pages of this and the preceding
volume.
CHAPTER 13
Electric Charges
The history of the development of a mathematical understanding of electricity and
magnetism has been one of the great success stories of science and engineering. With the
benefit of this information, a class of phenomena totally unknown up to a few centuries ago
has been harnessed in a manner that has revolutionized life in the more advanced human
societies. But in a strange contrast, this remarkable record of success in the identification
and application of the mathematical relations involved in these phenomena coexists with an
almost complete lack of understanding of the basic nature of the quantities with which the
mathematical expressions are dealing.
In order to have a reasonably good conceptual understanding of electricity and magnetism,
we need to be able to answer questions such as these:
What is an electric charge?
What is magnetism?
What is an electric current?
What is an electric field?
What is mass?
What is the relation between mass and charge?
How are electric and magnetic forces produced?
How do they differ from the gravitational force?
How are these forces transmitted?
What is the reason for the direction of the electromagnetic force?
Why do masses interact only with masses, charges with charges?
How are charges induced in electrically neutral objects?
Conventional science has no answers for most of these questions. To rationalize the failure
to discover the explanations, the physicists tell us that we should not ask the questions:
The question ―What is electricity?‖–so often asked–is… meaningless.36
(E. N. daC. Andrade)
What is electricity?...Definitions that cannot, in the nature of the case, be given, should not be demanded. 37
(Rudolf Carnap)
The difficulty in accounting for the origin of the basic forces is likewise evaded. It is
observed that matter exerts a gravitational force, an electric charge exerts an electric force,
and so on, but the theorists have been unable to identify the origin of these forces. Their
reaction has been to evade the issue by characterizing the forces as autonomous,
―fundamental conceptions of physics‖ that have to be taken as given features of the universe.
These forces are then assumed to be the original sources of all physical activity.
So far as anyone knows at present, all events that take place in the universe are governed by four fundamental
types of forces.38
As pointed out in Chapter 12, this assumption is obviously invalid, as it is in direct conflict
with the accepted definition of force. But those who are desperately anxious to have some
kind of a theory of the phenomena that are involved close their eyes to this conflict.
After having ―solved‖ the problem of the origin of the forces by assuming it out of
existence, the theorists have proceeded to solve the problem of the transmission of the basic
forces in a similar manner. Since they have no explanation for this phenomenon, they
provide a substitute for an explanation by equating this transmission with a different kind of
phenomenon for which they believe they have at least a partial explanation. Electromagnetic
radiation has both electric and magnetic aspects, and is unquestionably a transmission
process. In their critical need for some kind of an explanation of the transmission of electric
and magnetic forces, the theory constructors have seized on this tenuous connection, and
have assumed that electromagnetic radiation is the carrier of the electrostatic and
magnetostatic forces. Then, since the gravitational force is clearly analogous to those two
forces, and can be represented by the same kind of mathematical expressions, it has been
further assumed that some sort of gravitational radiation must also exist.
But there is ample evidence to show that these forces are not transmitted by radiation. As
brought out in Volume I, gravitation and radiation are processes of a totally different kind.
Radiation is an energy transmission process. A quantity of radiant energy is produced in the
form of photons. The movement of these photons then carries the energy from the point of
origin to a destination, where it is delivered to the receiving object. No movement of either
the originating object or the receiving object is required. At either end of the path the energy
is recognizable as such, and is readily interchangeable with other forms of energy.
Gravitation, on the other hand, is not an energy transmission process. The (apparent)
gravitational action of one mass upon another does not alter the total external energy content
(potential plus kinetic) of either mass. Each mass that moves in response to the gravitational
force acquires a certain amount of kinetic energy, but its potential energy is decreased by the
same amount, leaving the total unchanged. As stated in Volume I, gravitational, or potential,
energy is purely an energy of position: that is, for any specific masses, the mutual potential
energy is determined entirely by their spatial separation.
All that has been said about gravitation is equally applicable to electrostatics and
magnetostatics. Each member of any system of two or more objects (apparently) interacting
electrically or magnetically has a potential energy determined by the magnitudes of the
charges and the intervening distance. As in the gravitational situation, if the separation
between the objects is altered by reason of the static forces, an increment of kinetic energy is
imparted to one or more of the objects. But its, or their, potential energy is decreased by the
same amount, leaving the total unchanged. This is altogether different from a process such
as electromagnetic radiation which carries energy from one location to another. Energy of
position in space cannot be propagated in space. The concept of transmitting this kind of
energy from one spatial position to another is totally incompatible with the fact that the
magnitude of the energy is determined by the spatial separation.
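A minimal numerical check, using the conventional inverse square force and its associated potential energy in arbitrary units, illustrates the point: as the separation changes, the kinetic energy gained is matched by the potential energy lost, and the total remains constant, so nothing is transmitted.

    # Assumed conventional forms, purely illustrative: attraction F = -k/r^2, potential U = -k/r.
    k, m = 1.0, 1.0
    r, v = 10.0, 0.0                       # initial separation and radial speed (arbitrary units)
    dt = 1.0e-4

    def total_energy(r, v):
        return 0.5 * m * v**2 - k / r      # kinetic plus potential

    e_start = total_energy(r, v)
    for _ in range(50000):                 # let the object fall inward for a while
        v += (-k / (m * r**2)) * dt        # inward acceleration
        r += v * dt
    print(e_start, total_energy(r, v))     # the two totals agree to within numerical error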
As stated earlier, the coexistence of an almost total lack of conceptual understanding of
electric and magnetic fundamentals with a fully developed system of mathematical relations
and representations seems incongruous. In fact, however, this is the normal initial result of
the manner in which scientific investigation is usually handled. A complete theory of any
physical phenomenon consists of two distinct components, a mathematical formulation and
a conceptual structure, which are largely independent. In order to constitute a complete and
accurate definition of the phenomenon, the theory must be both conceptually and
mathematically correct. This is a result that is difficult to accomplish. In most cases it is
practically mandatory to approach the conceptual and mathematical issues separately, so that
this very complex problem is reduced to more manageable dimensions. We either develop a
mathematically correct theory that is conceptually imperfect (a ―model‖), and then attack the
problem of reconciling this theory with the conceptual aspects of the phenomena in
question, or alternatively, develop a theory that is conceptually correct, as far as it goes, but
mathematically imperfect, and then attack the problem of accounting for the mathematical
forms and magnitudes of the physical relations.
As matters now stand in conventional science, the requirement of conceptual validity is by
far the more difficult of the two to meet. With the benefit of the mathematical techniques now available
it is almost always possible to devise an accurate, or nearly accurate, mathematical
representation of a physical relation on the basis of those physical factors that are known to
enter into the particular situation, and the currently accepted concepts of the nature of these
factors. The prevailing policy, therefore, is to give priority to the mathematical aspects of the
phenomena under consideration. Vigorous mathematical analysis is applied to models which
admittedly represent only certain portions of the phenomena to which they apply, and which,
as a consequence, are conceptually incorrect, or at least incomplete. Attempts are then made
to modify the models in such a way that they move closer to conceptual validity while
maintaining their mathematical validity.
There is a sound reason for following this ―mathematics first‖ policy in the normal course of
physical investigation. The initial objective is usually to arrive at a result that is useful in
practical application; that is, something that will produce the correct mathematical answers
to practical problems. From this standpoint, the issue of conceptual validity is essentially
irrelevant. However, scientific investigation does not end at this point. Our inquiry into the
subject matter is not complete until we (1) arrive at a conceptual understanding of the
physical phenomena under consideration, and (2) establish the nature of the relations
between these and other physical phenomena.
A mathematical relation that is unexplained conceptually is of little or no value toward
accomplishing these objectives. It cannot be extrapolated beyond the range for which its
validity has been experimentally or observationally verified without running the risk of
exceeding the limits of its applicability (as will be demonstrated in Volume III). Nor can it
be extended to any area other than the one in which it originated. As it happens, however,
many physical problems have resisted all attempts to discover the conceptually correct
explanations. Many of the frustrated theorists have reacted by abandoning the effort to
achieve conceptual validity, and are now contending that mathematical agreement between
theory and observation constitutes ―experimental verification.‖ Obviously this is not true.
Such a ―verification,‖ or any number of similar mathematical correlations, tell us only that
the theory is mathematically correct. As has been emphasized at several points in the
preceding discussion, mathematical validity does not, in any way, assure conceptual validity.
It gives no indication whether the interpretation that is being given to the mathematical
relations is right or wrong. The inevitable result of the currently prevailing policy is to
overload physical science with theories that are mathematically correct but conceptually
wrong.
Solutions for the many long-standing problems of physical science clearly cannot be
obtained as long as the attacks on the problems are terminated when mathematical
agreement is achieved. But even if this defect in present practice is corrected, it is doubtful
whether the answers to most of these difficult problems can be obtained by the prevailing
method of devising a mathematical solution first, and then looking for a conceptual
explanation. The reason is that a valid mathematical expression can be constructed to fit
almost any model. As Einstein states the case:
It is often, perhaps even always, possible to adhere to a general theoretical foundation by securing the
adaptation of the theory to the facts by means of artificial additional assumptions. 39
Consequently, the mathematical expressions cannot be relied upon to furnish the necessary
clues to a conceptual understanding.
The important contribution of the Reciprocal System of theory to the solution of these
problems is that it enables attacking them from the opposite direction; that is, first arriving at
a conceptual understanding by deduction from very general basic relations, and then
developing the mathematical aspects of the established conceptual relationships. In other
words, instead of getting a mathematical answer and then looking for a conceptual
explanation to fit it, we start by getting a conceptual answer and then look for a
mathematical way of expressing it. In general, this is a much simpler procedure, but it could
not be utilized on any extensive scale until a unified general theory was available, so that
conceptual answers could be obtained by deductive processes. The Reciprocal System of
theory satisfies this requirement.
The clarification of the basic aspects of electricity and magnetism provides a dramatic
example of the power of this new method of approach to physical problems. It is no longer
necessary to deny the existence of answers to the questions listed at the beginning of this
chapter, or to content ourselves with pseudo-answers such as the ―curved space‖ explanation
of gravitation. Two of these questions, ―What is mass?,‖ and ―What is an electric current?,‖
have already been answered in the previous pages of this and the preceding volume. Those
involving magnetism will be answered in the general discussion of that subject which begins
with Chapter 19, and the process of induction of charges will be explained in Chapter 18.
The answers to all of the other questions in the list will be developed in this present chapter.
When these presentations are complete, we will have provided simple and logical
explanations for every one of these items with which present-day science is having so much
difficulty.
In the universe of motion all physical entities and phenomena are motions, combinations of
motions, or relations between motions. It follows that the development of the structure of the
theory that describes this universe is primarily a matter of determining just what motions
and combinations of motions can exist under the conditions specified in the postulates. Thus
far in our discussion of electrical phenomena we have been dealing only with translational
motion, the movement of electrons through matter, and the various effects of that motion,
the mechanical aspects of electricity, so to speak. We will now turn our attention to the
electrical phenomena that involve rotational motion.
As we saw in Volume I, gravitation is a three-dimensional rotationally distributed scalar
motion. Objects having only one or two effective dimensions of scalar rotation were found to
exist, but these objects, sub-atomic particles, have only a limited role in physical
phenomena. In view of the general pattern of generating motions of greater complexity by
combining motions of different kinds, the possibility of superimposing one-dimensional or
two-dimensional scalar rotation on gravitating objects to produce phenomena of a more
complex nature naturally suggests itself. On analyzing the situation, however, we find that
the addition of ordinary rotationally distributed motion in less than three dimensions to the
gravitational motion would merely modify the magnitude of that motion, and would not
result in any new kinds of phenomena.
There is, however, a modification of the rotational distribution pattern that we have not yet
explored. Three general types of simple motion (scalar motion of physical locations) have
thus far been considered: (1) translation, (2) linear vibration, and (3) rotation. We now need
to recognize that there is a fourth type: rotational vibration, a motion that is related to
rotation in the same way that linear vibration is related to translational motion. Vectorial
motion of this type is uncommon–the motion of the hairspring of a watch is an example–and
it is largely ignored in conventional physical thought, but it plays an important part in the
basic motion of the universe.
At the atomic level, rotational vibration is a rotationally distributed scalar motion that is
undergoing a continuous change from outward to inward and vice versa. As in linear
vibration, the change of scalar direction must be continuous and uniform in order to be
permanent. Like the motion of the photon of radiation, it is therefore a simple harmonic
motion. As noted in the discussion of thermal motion in Chapter 5, when such a simple
harmonic motion is added to an existing motion it is coincident with that motion (and
therefore ineffective) in one of the scalar directions, and has an effective magnitude in the
other scalar direction. Every added motion must conform to the rules for the combination of
scalar motions that were set forth in Volume I. On this basis, the effective scalar direction of
a self-sustaining rotational vibration must be outward, in opposition to the inward rotational
motion with which it is associated. A similar addition with an inward scalar direction is not
stable, but can be maintained by an external influence, as we will see later.
A scalar motion in the form of a rotational vibration will now be identified as a charge. A
one-dimensional motion of this type is an electric charge. In the universe of motion, any
basic physical phenomenon such as a charge is necessarily a motion, and the only question
to be answered by an examination of its place in the physical picture is what kind of a
motion it is. We find that the observed electric charge has the properties that the theoretical
development identifies as those of a one-dimensional rotational vibration, and we can
therefore equate the two.
It is interesting to note that conventional science, which has been at so much of a loss to
explain the origin and nature of the charge, does recognize that it is scalar. For instance, W.
J. Duffin reports that experiments which he describes show that ―charge can be specified by
a single number,‖ thus justifying the conclusion that ―charge is a scalar quantity.‖40
However, in current physical thinking this electric charge is regarded as one of the
fundamental physical entities, and its identification as a motion will no doubt be a surprise
to many persons. It should therefore be emphasized that this is not a peculiarity of the theory
of the universe of motion. Irrespective of our findings, based on that theory, a charge is
necessarily a motion on the basis of the definitions that are employed in conventional
physics, a fact that is disregarded because it is inconsistent with present-day theory. The key
factor in this situation is the definition of force. It was brought out in Chapter 12 that force is
a property of motion, not something of a fundamental nature that exists in its own right. An
understanding of this point is essential to the development of the theory of charges, and
some further consideration of the relevant facts is therefore appropriate in the present
connection.
For application in physics, force is defined by Newton‘s second law of motion. It is the
product of mass and acceleration, F = ma. Motion, the relation of space to time, is measured
on an individual mass unit basis as speed, or velocity, v, (that is, each unit moves at this
rate), or on a collective basis as momentum, the product of mass and velocity, mv, formerly
called by the more descriptive name ―quantity of motion.‖ The time rate of change of the
magnitude of the motion is dv/dt (acceleration, a) in the case of the individual unit, and m
dv/dt (force, ma) when measured collectively. Thus force is, in effect, defined as the time
rate of change of the magnitude of the total quantity of motion, the ―quantity of
acceleration‖ we might call it. From this definition it follows that a force is a property of a
motion. It has the same standing as any other property, and is not something that can exist as
an autonomous entity.
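A one-line check with arbitrary mass units confirms the bookkeeping: the sum of the individual contributions m·dv/dt over the units of an aggregate equals the collective product, which is the force.

    masses = [1.0, 1.0, 1.0, 2.0]                 # hypothetical mass units
    dv_dt = 3.0                                   # common rate of change of velocity
    individual = sum(m * dv_dt for m in masses)   # sum of the individual quantities of acceleration
    collective = sum(masses) * dv_dt              # force, F = ma, measured collectively
    print(individual, collective)                 # both 15.0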
The so-called ―fundamental forces of nature,‖ the presumably autonomous forces that are
currently being called upon to explain the origin of the basic physical phenomena, are
necessarily properties of underlying motions; they cannot exist as independent entities.
Every ―fundamental force‖ must originate from a fundamental motion. This is a logical
requirement of the definition of force, and it is true regardless of the physical theory in
whose context the situation is viewed.
Present-day physical science is unable to identify the motions that the definition of force
requires. An electric charge, for instance, produces an electric force, but so far as can be
determined from observation, it does this on its own initiative. There is no indication of any
antecedent motion. This apparent contradiction of the definition of force is currently being
handled by ignoring the requirements of the definition, and treating the electric force as an
entity generated in some unspecified way by the charge. The need for an evasion of this kind
is now eliminated by the identification of the charge as a rotational vibration. It is now clear
that the reason for the lack of any evidence of a motion being involved in the origin of the
electric force is that the charge itself is the motion.
An electric charge is thus a one-dimensional analog of the three-dimensional motion of an
atom or particle that we identified as mass. The space-time dimensions of mass are t³/s³. In
one dimension this is t/s. Rotational vibration is a motion similar to the rotation that
constitutes mass, differing only in the periodic reversal of scalar direction. It follows that the
electric charge, a one-dimensional rotational vibration, also has the dimensions t/s. The
dimensions of the other electrostatic quantities can be derived from those of charge. The
electric field intensity, a quantity that plays an important part in many of the relations
involving electric charges, is the charge per unit area, t/s × 1/s² = t/s³. The product of field
intensity and distance, t/s³ × s = t/s², is a force, the electric potential.
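These combinations can be verified mechanically by writing each quantity as a pair of exponents, the power of t and the power of s; the short bookkeeping sketch below reproduces the results just stated.

    def combine(x, y):                    # multiply two space-time dimensional expressions
        return (x[0] + y[0], x[1] + y[1])

    charge = (1, -1)                      # t/s
    per_unit_area = (0, -2)               # 1/s^2
    distance = (0, 1)                     # s

    field_intensity = combine(charge, per_unit_area)     # (1, -3), that is, t/s^3
    potential = combine(field_intensity, distance)       # (1, -2), that is, t/s^2
    print(field_intensity, potential)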
For the same reasons that apply to the production of a gravitational field by a mass, the
electric charge is surrounded by a force field. However, there is no interaction between mass
and charge. As brought out in Chapter 12, a scalar motion that alters the separation between
A and B can be represented in the reference system either as a motion AB (a motion of A
toward B) or a motion BA (a motion of B toward A). Thus the motions AB and BA are not
two separate motions; they are merely two different ways of representing the same motion in
the reference system. This means that scalar motion is a mutual process, and cannot take
place unless the objects A and B are capable of the same kind of motion. Consequently,
charges (one-dimensional motions) interact only with charges, masses (three-dimensional
motions) only with masses.
The linear motion of the electric charge analogous to gravitation is subject to the same
considerations as the gravitational motion. As noted earlier, however, it is directed outward
rather than inward, and therefore cannot be added directly to the basic vibrational motion in
the manner of the rotational motion combinations. This restriction on outward motion is due
to the fact that the outward progression of the natural reference system, which is always
present, extends to the full unit of outward speed, the limiting value. Further outward motion
can be added only after an inward component has been introduced into the motion
combination. A charge can therefore exist only as an addition to an atom or sub-atomic
particle.
Although the scalar direction of the rotational vibration that constitutes the charge is always
outward, both positive (time) displacement and negative (space) displacement are possible,
as the rotational speed may be either above or below unity, and the rotational vibration must
oppose the rotation. This introduces a rather awkward question of terminology. From a
logical standpoint, a rotational vibration with a space displacement should be called a
negative charge, since it opposes a positive rotation, while a rotational vibration with a time
displacement should be called a positive charge. On this basis, the term ―positive‖ would
always refer to a time displacement (low speed), and the term ―negative‖ would always refer
to space displacement (high speed). Use of the terms in this manner would have some
advantages, but so far as the present work is concerned, it does not seem advisable to run the
risk of adding further confusion to explanations that are already somewhat handicapped by
the unavoidable use of unfamiliar terminology to express relationships not previously
recognized. For present purposes, therefore, current usage will be followed, and the charges
on positive elements will be designated as positive. This means that the significance of the
terms ―positive‖ and ―negative‖ with respect to rotation is reversed in application to charge.
In ordinary practice this should not introduce any major difficulties. In this present
discussion, however, a definite identification of the properties of the different motions
entering into the combinations that are being examined is essential for clarity. To avoid the
possibility of confusion, the terms ―positive‖ and ―negative‖ will be accompanied by
asterisks when used in the reverse manner. On this basis, an electropositive element, which
has low speed rotation in all scalar dimensions, takes a positive* charge, a high speed
rotational vibration. An electronegative element, which has both high speed and low speed
rotational components, can take either type of charge. Normally, however, the negative*
charge is restricted to the most negative elements of this class, those of Division IV.
Many of the problems that arise when scalar motion is viewed in the context of a fixed
spatial reference system result from the fact that the reference system has a property,
location, that the scalar motion does not have. Other problems originate for the inverse
reason: scalar motion has a property that the reference system does not have. This is the
property that we have called scalar direction, inward or outward.
We can resolve this latter problem by introducing the concept of positive and negative
reference points. As we saw earlier, assignment of a reference point is essential for the
representation of a scalar motion in the reference system. This reference point then
constitutes the zero point for measurement of the motion. It will be either a positive or a
negative reference point, depending on the nature of the motion. The photon originates at a
negative reference point and moves outward toward more positive values. The gravitational
motion originates at a positive reference point and proceeds inward toward more negative
values. If both of these motions originate at the same location in the reference system, the
representation of both motions takes the same form in that system. For example, if an object
is falling toward the earth, the initial location of that object is a positive reference point for
purposes of the gravitational motion, and the scalar direction of the movement of the object
is inward. On the other hand, the reference point for the motion of a photon that is emitted
from that object and moves along exactly the same path in the reference system is negative,
and the scalar direction of the movement is outward.
One of the deficiencies of the reference system is that it is unable to distinguish between
these two situations. What we are doing in using positive and negative reference points is
compensating for this deficiency by the use of an auxiliary device. This is not a novel
expedient; it is common practice. Rotational motion, for instance, is represented in the
spatial reference system with the aid of an auxiliary quantity, the number of revolutions.
Ordinary vibrational motion can be accurately defined only by a similar expedient. Scalar
motion is not unique in its need for such auxiliary quantities or directions; in this respect it
differs from vectorial motion only in that it has a broader scope, and therefore transcends the
limits of the reference system in more ways.
Although the scalar direction of the rotational vibration that constitutes the electric charge is
always outward, positive* and negative* charges have different reference points. The
motion of a positive* charge is outward from a positive reference point toward more
negative values, while that of a negative* charge is outward from a negative reference point
toward more positive values. Thus, as indicated in the accompanying diagram, Fig. 20,
while two positive* charges (line a) move outward from the same reference point, and
therefore away from each other, and two negative* charges (line c) do likewise, a positive*
charge moving outward from a positive reference point, as in line b, is moving toward a
negative* charge that is moving outward from a negative reference point. It follows that like
charges repel each other, while unlike charges attract.
As the diagram indicates, the extent of the inward motion of unlike charges is limited by the
fact that it eventually leads to contact. The outward movement of like charges can continue
indefinitely, but it is subject to the inverse square law, and is therefore reduced to negligible
levels within a relatively short distance.
Figure 20
(a) two positive* charges, each moving outward from the same positive reference point: the charges move apart
(b) a positive* charge and a negative* charge, each moving outward from its own reference point: the charges move toward each other
(c) two negative* charges, each moving outward from the same negative reference point: the charges move apart
Electric charges do not participate in the basic motions of atoms or particles, but they are
easily produced in almost any kind of matter, and can be detached from that matter with
equal ease. In a low temperature environment, such as that on the surface of the earth, the
electric charge plays the part of a temporary appendage to the relatively permanent rotating
systems of motions. This does not mean that the role of the charges is unimportant. Actually,
the charges often have a greater influence on the outcome of physical events than the basic
motions of the atoms of matter that are involved in the action, but from a structural
standpoint it should be recognized that the charges come and go in much the same manner
as the translational (kinetic or thermal) motions of the atom. Indeed, as we will see shortly,
charges and thermal motion are, to a considerable degree, interconvertible.
The simplest type of charged particle is produced by imparting one unit of one-dimensional
rotational vibration to the electron or positron, which have only one unbalanced unit of one-
dimensional rotational displacement. Since the effective rotation of the electron is negative,
it takes a negative* charge. As indicated in the description of the sub-atomic particles in
Volume I, each uncharged electron has two vacant dimensions; that is, scalar dimensions in
which there is no effective rotation. We also saw earlier that the basic units of matter, atoms
and particles, are able to orient themselves in accordance with their environments; that is,
they assume the orientations that are compatible with the effective forces in those
environments. When produced in free space–as, for instance, from the cosmic rays–the
electron avoids the restrictions imposed by its spatial displacement (such as the inability to
move through space) by orienting in such a way that one of its vacant dimensions coincides
with the dimension of the reference system. It can then occupy a fixed position in the natural
system of reference indefinitely. In the context of a stationary spatial reference system, this
uncharged electron, like the photon, is carried outward at the speed of light by the
progression of the natural reference system.
If this electron enters a new environment, and becomes subject to a new set of forces, it can
reorient itself to conform to the new situation. On entry into a conducting material, for
instance, it encounters an environment in which it is able to move freely, inasmuch as the
speed displacement in the motion combinations that constitute matter is predominantly in
time, and the relation of the space displacement of the electron to the atomic time
displacement is motion. Furthermore, the environmental factors favor such a reorientation;
that is, they favor an increase in speed above the previous unit level in a high speed
environment, and a decrease in a low speed environment. The electron therefore reorients
with its active displacement in the dimension of the reference system. This is either a spatial
or a temporal reference system, depending on whether the speed is below or above unity, but
the two systems are effectively parallel. They are actually two sections of a single system, as
they represent the same one-dimensional motion in two different speed ranges.
Where the speed is above unity, the representation of the variable magnitude is in the
temporal coordinate system, and the fixed position in the natural reference system appears in
the spatial coordinate system as a movement of the electrons (the electric current) at the
speed of light. Where the speed is below unity, these representations are reversed. It does
not follow that the progression of the electrons along the conductor takes place at these
speeds. In this respect, the electron aggregate is similar to a gas. The individual electrons are
moving at high speeds, but in random directions. Only the net excess of the motion in the
direction of the current flow–the electron drift, as it is usually called–is effective as a
unidirectional movement.
This idea of an ―electron gas‖ is generally accepted in present-day physics, but it is
conceded that ―The simple theory runs into greater difficulties when examined in more
detail.‖41 As noted previously, the prevailing assumption that the electrons of this electron
gas are derived from the structures of the atoms encounters many problems. There is also a
direct contradiction in the specific heat values. ―The electron gas would be expected to
contribute an extra 3/2 R to the specific heat of metals,‖41 but no such specific heat
increment is found experimentally.
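The size of the expected increment is easily evaluated; taking the gas constant R as approximately 8.314 J/(mol·K), the classical electron gas contribution would be about 12.5 joules per mole per kelvin.

    R = 8.314                             # gas constant, J/(mol K)
    expected_increment = 1.5 * R          # the extra 3/2 R the electron gas would contribute
    print(round(expected_increment, 2))   # about 12.47 J/(mol K), which is not observed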
The theory of the universe of motion supplies the answers to both of these problems. The
electrons whose movement constitutes the electric current are not derived from the atoms,
and are not subject to the restrictions that apply to this origin. The answer to the specific
heat problem is provided by the nature of the electron motion. The motion of these
uncharged electrons (units of space) through the matter of the conductor is equivalent to
motion of the matter through extension space. At a given temperature, the atoms of matter
have a certain speed relative to space. It is immaterial whether this is extension space or
electron space. The motion through electron space (movement of the electrons) is part of the
thermal motion, and the specific heat due to this motion is part of the specific heat of the
atom, not something separate.
Once the reorientation of the electrons takes place in response to the environmental factors,
it cannot be reversed against the forces associated with those factors. The electrons therefore
cannot leave the conductor in the uncharged state. The only active property of an uncharged
electron is a space displacement, and the relation of this space to extension space is not
motion. However, an electron can escape from the conductor by acquiring a charge. A
combination of rotational motions (an atom or particle) with a net displacement in space (a
speed greater than unity) can move only in time, as indicated earlier, and one with a net
displacement in time (a speed less than unity) can move only in space, as motion is a
relation between space and time. But unit speed (the natural zero or datum level) is unity in
both time and space. It follows that a motion combination with a net speed displacement of
zero can move in either time or space. Acquisition of a unit negative* charge (actually
positive in character) by the electron, which, in its uncharged state has a unit negative
displacement, reduces the net speed displacement to zero, and allows the electron to move
freely in either space or time.
The production of a charged electron in a conductor requires only the transfer of sufficient
energy to an uncharged electron to bring the existing kinetic energy of that particle up to the
equivalent of a unit charge. If the electron is to be projected into space, an additional amount
of energy is required to break away from the solid or liquid surface and to overcome the
pressure exerted by the surrounding gas. At energies below this level, the charged electrons
are confined to the conductor in the same manner as when they are uncharged.
The necessary energy for the production of the charge and the escape from the conductor
can be supplied in a number of ways, each of which therefore constitutes a method of
producing freely moving charged electrons. A convenient and widely used method furnishes
the required energy by means of a voltage differential. This increases the translational
energy of the electrons until they meet the requirements. In many applications the necessary
increment of energy is minimized by projecting the newly charged electrons into a vacuum
rather than requiring them to overcome a gas pressure. The cathode rays used in x-ray
production are streams of charged electrons projected into a vacuum. The use of a vacuum is
also a feature of the thermionic production of charged electrons, in which the necessary
energy is imparted to the uncharged electrons by means of heat. In photoelectric production,
the energy is absorbed from radiation.
Existence of the electron as a free charged unit is usually of brief duration. Within a short
time after it has been produced by one transfer of energy and ejected into space, it again
encounters matter and enters into another energy transfer by means of which the charge is
converted back into thermal energy or radiation, and the electron reverts to the uncharged
condition. In the immediate neighborhood of an agency that is producing charged electrons
both the creation of charges and the reverse process that transforms them back into other
types of energy are going on simultaneously. One of the principal reasons for the use of a
vacuum in electron production is to minimize the loss of charges by way of this reverse
process.
Charged electrons in space can be observed–that is, detected by various means–and because
of the presence of the charges they are subject to electric forces. This enables control of their
motions, and unlike its elusive uncharged counterpart, the charged electron is an observable
entity that can be manipulated to produce physical effects of various kinds.
It is not feasible to isolate and examine individual charged electrons in matter as we do in
space, but we can recognize the presence of such particles by evidence of freely moving
charges within the material aggregates. Aside from the special characteristics of charges,
these charged electrons in matter have the same properties as the uncharged electrons. They
travel readily through good conductors, less readily through poor conductors, they move in
response to voltage differences, they are restrained by insulators, which are substances that
do not have the necessary open dimensions to allow free electron motion, and so on. In their
activities in and around aggregates of matter, these charged electrons are known as static
electricity.
CHAPTER 14
The Basic Forces
As brought out in the preceding chapter, the development of a purely deductive theory of
the physical universe has enabled reversing the customary procedure in scientific
investigation. Instead of deriving mathematical relations applicable to the phenomena
under consideration, and then looking for an explanation of the mathematics, we are now
able, by deduction from very general premises, to derive a theory that is conceptually
correct, and then look for an accurate mathematical representation of the theory. This is a
much more efficient procedure, for reasons that were previously explained, but it does
not necessarily follow that completing the task by solving the mathematical problems will
be free from difficulty. In some cases, the search for the correct mathematical statement
will require expenditure of a great deal of time and effort. During the course of this
extended investigation there will be some defects in the mathematical ―models‖ that are
being used, just as there are defects in the conceptual models that are utilized in current
practice.
The original development of the theory of the universe of motion, prior to the first
publication in 1959, answered a number of the physical questions with which
conventional physical science was (and still is) unable to deal. Atoms and sub-atomic
particles were identified as combinations of scalar motions, and gravitation was identified
as the inward translational manifestation of these motions. Electric charges were
identified as one-dimensional motions of an oscillating character superimposed on the
basic motion combinations, with similar translational (scalar) resultants. The basic forces
were identified as the force aspects of these basic motions.
These identifications answered the questions as to how the forces are produced and the
nature of the originating entities. They also answered the problem of explaining how the
gravitational and electrostatic effects are (apparently) transmitted, and accounting for the
instantaneous nature of the apparent transmission. Like many of the other answers to
long-standing problems that have emerged from the development of this theory, the
answer to the transmission problem took an unexpected form. From the postulates of the
theory we deduce that each mass and charge follows its own course, and the apparent
transmission is merely a result of the fact that the motion is scalar, and therefore has
either an inward scalar direction, carrying all objects of these classes toward each other,
or an outward scalar direction, carrying all such objects away from each other. There is
no transmission of the effects, and hence no transmission time is involved.
As might be expected, the answers to most of these problems were initially incomplete,
and the history of the theory since 1959 has been that of a progressive increase in
understanding in all physical areas, during which one after another of the remaining
issues has been clarified. In some cases, such as the atomic rotation, the mathematical
aspects of the problems presented no particular difficulty, and the points at issue were
conceptual. In other instances, the difficulties were primarily concerned with accounting
for the mathematical forms of the theoretical relations, and their numerical values.
The most troublesome problem of the latter kind has been that of the force equations. The
force between electric charges can be calculated by means of the Coulomb equation, F =
QQ‘/d², which states that, when expressed in appropriate units, the force is equal to the
product of the (apparently) interacting charges divided by the square of the distance
between them. Aside from the numerical coefficients, this Coulomb equation is identical
with the equation for the gravitational force that was previously discussed, and, as we
will see later, with the equation for the magnetostatic force as well.
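As a concrete illustration, in SI units the Coulomb relation takes the form F = kQQ‘/d² with k = 1/(4πε₀), approximately 8.99 × 10⁹ N·m²/C²; the charges and separation used below are arbitrary example values.

    k = 8.9875e9                  # N m^2 / C^2, the SI Coulomb constant
    q1, q2 = 1.0e-6, 2.0e-6       # coulombs (hypothetical charges)
    d = 0.05                      # metres (hypothetical separation)
    force = k * q1 * q2 / d**2    # product of the charges divided by the square of the distance
    print(round(force, 2))        # about 7.19 N; like charges give a positive (repulsive) result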
Unfortunately, these force relations occupy a dead end from a theoretical standpoint.
Most basic physical relations have the status of points of departure from which more or
less elaborate systems of consequences can be built up step by step. Correlation of these
consequences with each other and with experience then serves either to validate the
theoretical conclusions or to identify whatever errors or inadequacies may exist. No such
networks of connections have been identified for these force equations, and this
significant investigative aid has not been available to those who have approached the
subjects theoretically. The lack of an explanation has not been as conspicuous in the case
of the electric force, as the Coulomb equation, which expresses the magnitude of this
force, is stated in terms of quantities derived from the equation itself, but there is an
embarrassing lack of theoretical understanding of the basis for the relation that is
expressed mathematically in the gravitational equation. Without such an understanding
the physicists have been unable to tie this equation into the general structure of physical
theory. As expressed in one physics textbook, ―Newton‘s law of universal gravitation is
not a defining equation, and cannot be derived from defining equations. It represents an
observed relationship.”
The problems involved in application of the theory of the universe of motion to the
gravitational relations were no less formidable, and the early results of this application
were far from satisfactory. Ordinarily, the results of incomplete investigations of this kind
have not been included in the published material. The opportunities for publication of the
findings of this investigation have been severely limited, and the material released for
publication has therefore been confined, in general, to those results that have been
established as correct both mathematically and conceptually, within the limits to which
the investigation has been carried. If the gravitational motion and force were matters of
an ordinary degree of importance, the best policy probably would have been to put the
unsatisfactory results aside for the time being, and to wait for further developments in
related areas to clarify the general situation enough to make further progress in the
gravitational area possible. But because of the fundamental nature of the gravitational
relations it has been necessary to make extensive use of them, in whatever form they
might happen to be, as the theoretical investigation progressed. The previous
publications, including the first volume of this present series, have therefore contained
some of the tentative, and only partially correct, results of the earlier studies. However,
much new light is being thrown on the subject matter by the continuing advances that are
being made in related areas, and the status of the gravitational theory is consequently
being updated in each new publication.
At the time of the first studies, the most obvious need was a clarification of the
dimensions of the equations. Overall dimensional consistency is something that has never
been attained by conventional physics. In some areas, such as mechanics, the currently
recognized relations are dimensionally consistent, but in many other areas the
dimensional confusion is so widespread that it has led to the previously mentioned
conclusion that a rational system of dimensions for all physical quantities is impossible.
The present standard practice is to cover up the discrepancies by assigning dimensions to
the numerical constants in the equations. Thus the gravitational constant is asserted to
have the dimensions dyne-cm²/gram². Obviously, this expedient is illegitimate. Whatever
dimensions enter into physical expressions are properties of the physical entities that are
involved, not properties of numbers. Dimensions are excluded from numbers by
definition. Wherever, as in the gravitational case, an equation cannot be balanced without
assigning dimensions to a numerical constant, this is prima facie evidence that there is
something wrong in the understanding on which the dimensional assignments are based.
Either the dimensions assigned to the physical quantities in the equation are incorrect, or
the so-called ―numerical constant‖ is actually the magnitude of an unrecognized physical
property. Both types of dimensional errors have been encountered in our examination of
current thought in the areas covered by our investigation.
One of the powerful analytical tools made available by the theory of the universe of
motion is the ability to reduce all physical quantities to terms of space and time only. In
order to be correct, an equation must have a space-time balance; that is, both sides of the
equation must reduce to the same space-time expression. Another useful analytical tool
derived from this theory is the principle of equivalence of units. This principle asserts
that, inasmuch as the basic quantities, in all cases, are units of motion, there are no
inherent numerical constants in the mathematical equations that represent physical
relations, other than what we may call structural constants–values that have definite
physical meanings, as, for instance, the number of active dimensions in one of the
participating quantities. It follows that if the quantities involved in a valid physical
equation are all expressed in natural units, or the equivalent in the units of another
measurement system, the equation is in balance numerically, and no numerical constant is
required.
The gravitational equation, in its usual form, fails by a wide margin to meet the test of
dimensional consistency, but the general nature of the modifications that have to be made
in the dimensional assignments was identified quite early in the investigation. The 1959
publication dealt with this dimensional problem, pointing out the need to reduce the
distance term and one of the mass terms to a dimensionless status; that is, to recognize
that they are merely ratios. It also emphasized the fact that an acceleration term must be
introduced into the equation for dimensional consistency, and showed that this term
represents the inherent acceleration of gravitating objects, which is unity, and therefore
not perceptible in empirical measurements. Application of the principle of equivalence of
natural units was attempted, without much success, but the tentative results of this study
included a derivation of the gravitational constant.
By the time Nothing But Motion was published twenty years later, the lack of a fully
satisfactory interpretation of the gravitational equation had become somewhat
embarrassing. Furthermore, the validity of the original derivation of the gravitational
constant was challenged by some of the author‘s associates, and the evidence in its favor
was not sufficient to meet that challenge effectively. It was therefore decided to abandon
that interpretation, and to look for a new explanation to take its place. In retrospect it will
have to be admitted that this 1979 revision was not a well-conceived attack on the
problem. It was essentially an attempt to find a mathematical (or at least numerical)
solution where the logical development of the theory had met an obstacle. This is the
same policy that, as pointed out in Chapter 13, has brought conventional theory up
against so many blank walls, and it has turned out to be equally unproductive in the
present case. It became increasingly evident that some further study was necessary.
This brings up an issue that has been the subject of some comment. It is our contention
that the many thousands of correlations between the observations and the consequences
of the postulates of the theory of the universe of motion have established that this theory
is a true and accurate representation of the actual physical universe. The skeptics then
want to know how we can arrive at wrong conclusions in some cases, if we are applying a
correct theory; why a conclusion reached in the first volume of a series had to be
modified even before the second volume was published. The answer, as explained in
many of our previous publications, is that while the theory is capable of producing the
right answers, if properly applied, it does not necessarily follow that those who are
attempting to apply it properly will always be successful in so doing. As stated earlier, an
attempt has been made to confine the published material to firmly established items, aside
from a few that are specifically identified as somewhat speculative, but nevertheless,
some of the conclusions that have been published have subsequently been found to be
incomplete, and in a few instances, incorrect.
There is no reason to be apologetic about these few errors and omissions. Present-day
physical theory has been in the process of development for centuries, during which a
myriad of conclusions that have been reached with respect to details of the theory (or
theories) have subsequently had to be abandoned as incorrect. In comparison with this
experience, the error rate in the development of the theory of the universe of motion is
fantastically low. This is no accident. Inasmuch as all conclusions in all areas are derived
deductively from the same set of basic premises, consistency of the interrelations
between phenomena, the basic requirement for conceptual validity, is achieved
automatically. Those cases in which the developers of the theory are having some trouble
merely emphasize the easy and natural way in which solutions for most of the previously
unresolved fundamental problems of physical science have emerged from the theoretical
development.
The review of the gravitational situation that was recently undertaken was able to take
advantage of some very significant advances that have been made in our understanding of
the details of the universe of motion–that is, in the consequences of the postulates–in the
years that have elapsed since publication of Volume I in 1979. Chief among these is the
clarification of the nature and properties of scalar motion, discussed in Chapter 12, and
covered in more detail in The Neglected Facts of Science. The improvement in
understanding of this type of motion has thrown a great deal of new light on the force
relations. It is now clear that the differences between the basic types of forces that were
recognized from the start of the investigation as dimensional in nature are differences in
the number of scalar dimensions involved, rather than geometric dimensions of space.
This provides simple explanations for several of the issues that had been matters of
concern in the earlier stages of the theoretical development.
The significant conceptual change here is in the nature of the relation between motion
and its representation in the reference system. In previous physical thought motion was
regarded as a change of position in a specifically defined physical space (Newton) or
spacetime (Einstein) during a specific physical time. This physical space and time thus
constitute a background, or container. Changes of position due to motion relative to the
spatial background are assumed to be capable of representation by vectors (or tensors of
higher rank). In the theory of the universe of motion, on the other hand, space and time
have physical existence only as the reciprocally related components of motion, and the
three-dimensional space of our ordinary experience is merely a reference system, not a
physical container. Furthermore, the development of the details of the theory in the
preceding pages of this and the earlier volume shows that the spatio-temporal reference
system which combines the three-dimensional spatial frame of reference with the time
magnitudes registered on a clock, is incapable of representing the full range of existing
motions. Some motions cannot be represented in their true character. Others cannot be
represented in this reference system at all.
The deficiency of the reference system with which we are particularly concerned at this
time is its inability to represent multi-dimensional scalar motion. This inability of the
reference system to represent more than one scalar dimension of motion explains why the
forces exerted by charges and masses are all one-dimensional, irrespective of the number
of scalar dimensions applicable to the inherent motion of the charge or mass. Only one of
these scalar dimensions is coincident with the dimension of the reference system, and the
motion in this dimension is therefore the only one that can be represented in the reference
system. As indicated earlier, this limitation on the capacity of the reference system is the
reason for the great disparity in magnitude between the basic forces. The total
magnitudes of the electric and gravitational forces are actually the same, but only the
motion in the dimension of the reference system is effective. In our gravitationally bound
system, the dimensional ratio (in cgs units) is 3 × 10¹⁰. Thus the electric force, which is
one-dimensional, and therefore fully effective, is relatively strong. The gravitational force
actually has the same total strength, but it is distributed over three scalar dimensions, only
one of which coincides with the dimension of the reference system. The effective
gravitational force is therefore weaker than the effective electrostatic force by the factor
9 × 10²⁰.
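The arithmetic behind this factor can be checked with a short computational sketch (Python is used here purely for illustration; the variable names are ours, not part of the original presentation):

# Disparity between the effective electric and gravitational forces, as described
# above: only one of three scalar dimensions coincides with the reference system,
# and the disparity is the square of the unit of speed expressed in cgs units.
c = 3e10                       # cm/s, the cgs unit of speed cited in the text
print(f"c^2 = {c ** 2:.1e}")   # 9.0e+20, the 9 x 10^20 factor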
It should be noted, however, that the difference in the number of effective scalar
dimensions has this effect on the relative magnitude of the forces only because it is
applied to the very large value of the unit of speed, the relation between the sizes of the
units in which we measure space and time. This, in turn, is a consequence of our position
in a gravitationally bound system that is moving inward in space at a high speed,
opposing the spatial component of the outward progression of the natural reference
system. The net motion of the gravitating system in space is relatively small, while the
motion in time proceeds at the full speed of the progression. Thus we experience a small
change in space coincidentally with a very large change in time. We assign values to the
units of these quantities that reflect the manner in which we experience them, and on this
basis we have defined a unit of time (in the cgs system) that is 3 × 10¹⁰ times as large as
our unit of space. Our unit of speed is then 3 × 10¹⁰ space units (centimeters) per unit of
time (second).
As can be seen from the foregoing, the magnitude that we assign to the unit of speed, the
speed of light, customarily represented by the symbol c, is not an inherent property of the
universe (although the magnitude of the speed itself is). The general range within which
this value will fall is determined by our position in a system of gravitating objects, and
the specific value within these limits is assigned arbitrarily. Any change in the unit of
either space or time that is not counterbalanced by an equivalent change in the other
alters the value of c, in our measurement system, and the relation between the magnitudes
of the electric and gravitational forces, c2, is changed accordingly. (The electric force is
usually asserted to be 1039 or 1040 times as strong as the gravitational force, but this figure
is based on a set of erroneous assumptions.)
The further clarification of the mutual nature of scalar motion accomplished in the most
recent studies has also thrown a very significant additional light on the force situation. As
brought out in Chapter 12, it is now evident that a scalar motion AB cannot be
distinguished, in the absence of a fixed coupling to the reference system, from a scalar
motion BA. This means that in considering the mutual gravitational motion of two
masses we are dealing with only one motion, the representation of which in the reference
system depends on external factors.
On this basis, the expression mm′ in the gravitational equation is not a product of two
masses, but the product of one mass and the number of units of mass in the interacting
object. Likewise, the distance term, s², is a pure number, the ratio of s² units to 1² unit.
Thus the only dimensional quantity that appears in the equation, aside from the resultant
force, is one of the mass terms. This result of the current study confirms the original
finding reported in the 1959 publication. It likewise confirms the earlier finding that
another dimensional term, a unit of acceleration, must be inserted into the equation to
produce a dimensional balance. Force in general is the product of mass and acceleration.
It follows that the expression for any particular force must reduce to F = ma when all
dimensions are properly assigned. The existence of the acceleration term is not apparent
without a theoretical analysis because the gravitational acceleration is unity, and therefore
has no effect on the numerical result.
The difficulties that have previously been experienced in applying the principle of the
equivalence of natural units to the gravitational equation are now seen to have been due
to an inadequate understanding of the manner in which the dimensionless terms in the
equation should be treated when the statement of the unit equivalence is formulated. We
now recognize that these terms vanish if they are given unit value in the system of
measurement in which the values of the dimensionless terms are stated, unless some
structural factor is specifically applicable. However, the use of an arbitrary mass unit in
the conventional measurement systems introduces a complication, as it means that two
different systems of units are actually being used. As we saw in the discussion of physical
fundamentals in Volume I, all physical quantities, including mass, can be expressed in
terms of units of space and time only. It follows that when an arbitrary unit is used for the
measurement of mass, we are expressing the mass and the acceleration in different
measurement systems. This is equivalent to introducing a numerical factor into whatever
physical relations may be involved: the ratio between the sizes of the respective units.
Introduction of this factor does not affect the numerical balance of an equation as long as
both sides of the equation contain the same number of mass terms, but in the gravitational
equation
F = kmm‘/d2, there are two mass terms on one side of the equation, while the force, the
lone term on the other side, contains only one mass term (F = ma). In order to balance the
equation numerically, a correction factor must be applied to convert the extra mass term
to the units applicable to space and time. The ratio of the natural space-time unit of mass
to the arbitrary mass unit is the required correction factor. Together with whatever
structural factors are applicable to the equation, it constitutes the gravitational constant.
The ratio of the natural unit of mass in the cgs system to the arbitrary unit, the gram, was
evaluated in Volume I as 2.236055 × 10⁻⁸. It was also noted in that earlier volume that the
factor 3 (evidently representing the number of effective dimensions) enters into the
relation between the gravitational constant and the natural unit of mass. The gravitational
constant is then
3 × 2.236055 × 10⁻⁸ = 6.708165 × 10⁻⁸ (with a small adjustment that will be considered
shortly).
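A minimal numerical sketch of this derivation follows, with a contemporary measured value of the gravitational constant included for comparison (the measured figure is our assumption, not taken from the text):

# Gravitational constant as derived above: 3 times the ratio of the natural
# cgs unit of mass to the gram.
natural_unit_ratio = 2.236055e-8        # natural mass unit / gram (from the text)
G_derived = 3 * natural_unit_ratio
print(f"derived G = {G_derived:.6e}")   # 6.708165e-08
G_measured = 6.673e-8                   # dyne-cm^2/g^2, assumed measured value
print(f"ratio = {G_derived / G_measured:.5f}")   # ~1.005, cf. the 1.00524 factor noted below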
To apply the principle of equivalence of natural units to the gravitational equation, the
dimensionless quantities m′ and d² are given unit value in terms of the conventional
measurement systems, so that they vanish from the equation. The dimensional terms, the
mass term m and the acceleration term inserted into the equation, are then stated in the
appropriate natural units, 1.6197 × 10⁻²⁴ grams and 1.971473 × 10²⁶ cm/sec², respectively.
The natural unit of force derived from these values is 3.27223 × 10² dynes.
The values thus derived exceed the measured gravitational constant and the previously
determined value of unit force by the factor 1.00524. Since it is unlikely that there is an
error of this magnitude in the measurements, it seems evident that there is another, quite
small, structural factor involved in the gravitational relation. This is not at all surprising,
as we have found in the earlier studies in other areas that the primary mass values
entering into physical relations are often subject to modification because of secondary
mass effects. The ratio of the unit of secondary mass to the unit of primary mass is
1.00639. The remaining uncertainty in the gravitational values is thus within the range of
the secondary mass effects, and will probably be accounted for when a comprehensive
study of the secondary mass situation is carried out.
A rather ironic result of the new findings with respect to the gravitational constant, as
described in the foregoing paragraphs, is that they have taken us back almost to where we
were in 1959. The repudiation of the 1959 result in the 1979 publication as a consequence
of the criticism levied against it is now seen to have been a mistake. In the light of the
additional information now available it appears that the shortcoming of the original
results was not that they were wrong, but that they were incomplete and not adequately
supported with explanations and confirmatory evidence, and were therefore vulnerable to
attack. The more recent work has provided the support that was originally lacking.
Clarification of the gravitational force equation is not only important in itself, but has a
further significance in that it opens the door to an understanding of the general nature of
all of the primary force equations. Each of these equations is an expression representing
the magnitude of the force (apparently) exerted by one originating entity (mass or charge)
on another of the same, or equivalent, kind, at a specified distance. All take the same
general form as the gravitational equation, F = kmm′/d².
With the benefit of the information developed in the earlier pages of this chapter, we may
now generalize the equation by replacing m with X, which will stand for any distributed
scalar motion with the dimensions (t/s)ⁿ, and introducing a term Y with the value 1/s ×
(s/t)ⁿ⁻¹. The primary force equation is then F = kXY(X′/d²).
Since only one dimension of an n-dimensional scalar motion is effective in the space of
the conventional reference system, the effective space-time dimensions of the motion
participating in the force equation are t/s. By definition, force has the dimensions t/s². The
function of the term Y in the primary force equation is to reduce (t/s)ⁿ to t/s and to
introduce the term 1/s that is necessary to convert t/s to t/s². In the case of the
gravitational equation, this involves multiplying by s²/t² × 1/s = s/t². These are the
dimensions of acceleration. In the Coulomb equation the correction factor Y is merely
1/s.
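The dimensional reduction described in this paragraph can be illustrated with a small sketch in which each quantity is represented by its exponents of t and s (the helper functions are ours, added for illustration only):

# Represent a space-time dimension as (t-exponent, s-exponent);
# multiplication of quantities adds the exponents.
def mul(*dims):
    return tuple(sum(e) for e in zip(*dims))

def power(dim, n):
    return tuple(n * e for e in dim)

n = 3                                    # scalar dimensions (gravitational case)
X = power((1, -1), n)                    # distributed scalar motion, (t/s)^n
Y = mul((0, -1), power((-1, 1), n - 1))  # 1/s x (s/t)^(n-1)
# X'/d^2 is a pure ratio and contributes no dimensions
print(mul(X, Y))                         # (1, -2), i.e. t/s^2, the dimensions of force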
The term X′/d² is a combination of two ratios, and has unit value in the unit statement of
the equation. The numerical constant k is also unity if all quantities are expressed in units
that are consistent with the units in which the space and time magnitudes entering into the
equation are measured. Where one or more of these quantities are expressed in units of
another kind, the difference in the size of the units appears as the value of the numerical
constant k. In the gravitational case, for example, the gravitational constant reflects the
result of expressing mass in terms of a special unit (grams in the cgs system) rather than
in sec³/cm³.
In essence, all that the force equations do is to reduce the scalar motions (mass, charge,
etc.) to their effective one-dimensional values, introduce the 1/s term that relates the
motion to the corresponding force, and correct for any inconsistencies in the units that are
employed. It is somewhat of an anticlimax to arrive at such a simple explanation after
years of exploring much more complicated hypotheses, but the simplicity of this result is
consistent with the general nature of the findings in the basic areas of other physical
fields. There are many complex phenomena in nature, to be sure, but throughout the
development of the details of the universe of motion we have found that the fundamental
relations are quite simple.
As noted earlier, the reference point of a scalar motion, the point in the fixed reference
system to which an object in the scalar motion system is coupled, may be in motion
vectorially. The mass of this object is a measure of its three-dimensional distributed
scalar motion, the inward gravitational motion. The vectorial motion is outward, and in
order for it to take place, a portion of the inward gravitational motion must be overcome.
The mass is thus also a measure of the magnitude of the resistance to vectorial motion,
the inertia of the object. In the light of the points brought out in the preceding pages, it is
evident that in these manifestations of mass we are looking at two aspects of the same
thing, just as in the case of the rocket, where the quantity of acceleration imparted by the
combustion products (the force) is the same as the quantity of acceleration imparted to
the rocket.
This point was not recognized by the early investigators because they were not aware of
the existence of motion in different scalar dimensions. It appeared to them that two
different quantities were involved: a gravitational mass and an inertial mass. Very
accurate measurements showed that these two masses are identical, a finding that the
physics of that day could not explain. As one observer says, ―Within the framework of
classical physics there is no explanation. When attention was directed to the problem, it
seemed like a complete mystery.‖ 42 A step toward solution of the problem was taken by
Einstein. In the absence of an understanding of scalar motion, he was not able to see that
gravitation is a motion, but he formulated a ―Principle of Equivalence,‖ in which he
postulated that gravitation is equivalent to a motion. Since he viewed ―motion‖ as
synonymous with ―vectorial motion,‖ the postulate meant that gravitation is equivalent to
an accelerated frame of reference, and it is often expressed in these terms. But such an
equivalence is inconsistent with Euclidean geometry. As explained by Tor Gerholm:
If acceleration and gravitation are equivalent, we must apparently also be able to imagine an acceleration
field, a field formed by inertial forces. It is easy to realize that no matter how we try, we will never be able
to get such a field to have the same shape as the gravitational field around the earth and other celestial
bodies...If we want to save the equivalence principle...If we want to retain the identity between
gravitational and inertial mass, then we are forced to give up Euclidean geometry! Only by accepting a
non-Euclidean metric will we be able to achieve a complete equivalence between the inertial field and the
gravitational fields. This is the price we must pay.43

Identification of gravitation as a distributed scalar motion has now thrown an entirely
new light on the situation. Gravitation is an accelerated motion, but it is not geometrically
equivalent to an accelerated frame of reference. Einstein‘s attempt to reconcile these two
phenomena by resort to non-Euclidean geometry is misdirected. Whatever mathematical
results are obtained by the use of this expedient (actually not very many; as Paul Davies
points out, "technical problems of a mathematical nature render all but the simplest
systems hopelessly insoluble"44) are not indicative of the true relations. The scalar
gravitational motion of an object and any vectorial motion that it may possess are quite
different in their nature and properties.
In the case of the propagation of radiation, the principal stumbling block for the ether
theory was the contradictory nature of the properties that the hypothetical substance
―ether‖ must possess in order to perform the functions that were assigned to it. Einstein‘s
solution was to replace the ether with another entity that was assumed to have no
properties, other than an ability to transmit the radiation, an ability which he says we
should ―take for granted.‖ 27
Similarly, the obstacles to accounting for the observed results of the addition of velocities
were the existence of absolute magnitudes and fixed spatial coordinate locations. Here,
the answer was to deny the reality of absolute magnitudes, and as Einstein says, to ―free
oneself from the idea that co-ordinates must have an immediate metrical meaning.‖ 45
Now we find that he deals with the gravitational problem in the same way, loosening the
mathematical constraints, rather than looking for a conceptual error. He invents the
"equivalent of motion," a hypothetical something which has enough of the properties of
motion to enable accounting for the mathematical results of gravitation (at least in
principle) without having those properties of vectorial motion that are impossible to
reconcile with the observed behavior of gravitating objects. In all of these cases, the
development of the theory of the universe of motion has shown that the real reason for
the existence of these problems was the lack of some essential information. In the case of
the composition of velocities, the missing item was an understanding of motion in time.
In the other two cases cited, the problems were consequences of the lack of recognition of
the existence of scalar motion.

CHAPTER 15

Electrical Storage
We now turn to a consideration of the storage of uncharged electrons (electric current), a
subject that was not considered earlier because it was more convenient to wait until after the
nature of electric charges was clarified.
The basic requirement for storage is a suitable container. Any conductor is, to some extent, a
container. Let us consider an isolated conductor of unit cross section, a wire. This conductor
has a length of n units, meaning that it extends through n units of extension space, the space
represented in the reference system. Each of these units of the reference system is a location
in which a unit of actual space (that is, the spatial component of a motion) may exist. In the
absence of an externally applied electric voltage, the wire contains a certain concentration of
uncharged electrons (actual units of space), the magnitude of which depends on the
composition of the material of the conductor, as explained in Chapter 11. If this wire is
connected to a source of current, and a very small voltage is applied, more uncharged
electrons flow into the wire until all of the units of the spatial reference system that
constitute the length of the wire are occupied. Unless the voltage is increased, the inward
flow ceases at this point.
When the wire is fully occupied, the aggregate of electrons could be compared to an
aggregate of atoms of matter in one of the condensed states. In these states all of the units of
extension space within the limits of the aggregate are occupied, and no further spatial
capacity is available. But if a pressure is applied, either an internal pressure, as defined in
Chapter 4, or an external pressure, the inter-atomic motions are extended into time, and the
addition of the spatial equivalent of this time allows more atoms to be introduced into the
same section of the extension space represented in the reference system, increasing the
density of the matter (the number of mass units per unit of volume of extension space)
beyond the normal equilibrium value.
This ability of physical phenomena to extend into time when further extension into space is
prevented is a general property of the universe that results from the reciprocal relation
between space and time. The scope of its application is limited, however, to those situations
in which a spatial response to an applied force is not possible. In the example just discussed,
the compression of solid matter, the obstacle to further inward movement in space is the
discrete unit limitation on subdivision. In a wide variety of astronomical phenomena that
will be considered in Volume III, the obstacle is the limit on one-dimensional spatial speed.
Here, in the electrical storage process, the obstacle is the fixed relation between the unit of
actual space and the unit of extension space. An n-unit section of the extension space
represented in the reference system can contain n units of actual space, and no more.
If a voltage is applied to force additional electrons into the fully occupied section of wire,
the excess electrons are pushed out into time, where they occupy positions in the spatial
equivalent of that time. This penetration into time can only be accomplished by application
of a force, as the concentration of uncharged electrons in time is already at an equilibrium
level. If the voltage is reduced or eliminated, the restoring force tending to bring the electron
concentration back into equilibrium reverses the flow, and the excess electrons move back
out of the wire. Application of a positive* voltage similarly withdraws electrons from the
wire and from equivalent space.
As we have seen in the preceding pages of this and the earlier volume, the region of time
beyond the unit of space is two-dimensional. The concentration of excess electrons and the
effective voltage therefore decrease in direct proportion to the distance from the wire at a
rate determined by basic physical factors and the dimensions of the wire (or other
conductor), reaching the zero level at a specific distance.
Let us consider a case in which a conductor is subjected to a voltage differential of 2V, and
the voltage in equivalent space surrounding each terminal reaches zero at a distance s from
the terminal. As long as the terminals (electrodes) are separated by a distance greater than
2s, the electron storage, the quantity of current that can be withdrawn at the positive*
terminal and introduced at the negative* terminal, is independent of the location of those
terminals. However, if the separation is reduced to less than 2s, a portion of the volume of
equivalent space from which the electrons are being withdrawn coincides with the volume of
equivalent space into which electrons are being introduced. The excess and deficiency of
electrons in this common volume cancel each other, decreasing the net excess or deficiency
at the terminals, and thereby reducing the voltage. This means that where the separation of
the terminals is reduced below 2s, the same amount of storage will take place at a lower
voltage, or alternatively, a greater amount of storage will be possible at the same voltage.
The relations involved in the storage of current (uncharged electrons) are illustrated in
Fig.21.

Figure 21

When the terminals are separated by the distance 2s, the full voltage drop, V, takes place at
each terminal. The electron excess at the negative* terminal, which we will call E, is
proportional to V. If the separation between the terminals is decreased to 2xs, there is an
overlap of the equivalent volumes to which the excess and deficiency of electrons are
distributed, as indicated above. The effective voltage then drops to xV. At this point, the
electron concentration corresponding to xV is in the equivalent volume at the negative*
terminal, while the balance of the total electron input represented by E is in the common
equivalent volume, where the net concentration of excess electrons is zero. If the voltage is
reduced, the electrons from the common equivalent volume, and from the volume related to
the negative* terminal only, flow out of the system in the same proportions in which they
entered. Thus the storage capacity at a separation 2xs and voltage xV is the same as that at a
separation 2s and voltage V. Generalizing this result, we may say that the storage capacity, at
a given voltage, of a combination of positive* and negative* electrodes in close proximity is
inversely proportional to the distance between them.
The ability of a conducting wire to accept additional electrons when subjected to a voltage
makes it available as a container in which uncharged electrons (units of electric current) can
be stored and withdrawn as desired. Such storage has some uses in electrical practice, but it
is inconvenient for general use. More efficient storage is made possible by a device that
contains the necessary components in a more compact form. In this device, a capacitor, two
plates, each with an area s², are separated by a distance s′. Each plate is equivalent to s²
conductors of unit cross section. Thus the storage capacity of a capacitor at a given voltage
is directly proportional to the plate area and inversely proportional to the distance between
the plates. This storage capacity is called the capacitance, symbol C. Since it has the
dimensions of space (s2/s‘ = s), it can be calculated directly from the geometrical dimensions
of the capacitor. The centimeter has been use as a unit, although the present practice is to use
a special unit, the farad.
If a capacitor is connected to a current supply, the effective voltage, a force (t/s²), pushes the
uncharged electrons that constitute the current into the capacitor until the concentration
corresponding to that voltage is reached. The space-time dimensions of the product are
t/s² × s = t/s. This is inverse speed, or energy. It is not a charge, on the basis of the definition
of charge given in this work, but since electric charge has the dimensions of energy, t/s, the
quantity stored is equivalent to charge. To minimize the deviations from currently accepted
terminology, we will call it a capacitor charge. The magnitude of the storage can be
expressed by the equation
Q = CV, where Q is the capacitor charge, C is the capacitance, and V is the voltage
differential across the plates of the capacitor.
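A one-line dimensional check of this equation, in exponent notation (illustrative only; the variable names are ours):

# Q = CV in space-time terms: capacitance (s) times voltage (t/s^2)
C = (0, 1)       # capacitance: s
V = (1, -2)      # voltage, a force: t/s^2
Q = (C[0] + V[0], C[1] + V[1])
print(Q)         # (1, -1), i.e. t/s, the dimensions of energy (capacitor charge)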
The unit of capacitance, the farad, is defined as one coulomb per volt. The volt is one joule
per coulomb. These are units of the SI system, which will be used in most of the subsequent
discussion of electricity and magnetism, rather than the cgs system of measurement that is in
general use in these volumes, the reason being that a substantial amount of clarification of
the physical relations in these areas has been accomplished in very recent years, and most of
the current literature relating to these subjects utilizes the SI system.
Unfortunately, this recent clarification of the electrical and magnetic situations has not
extended to some of the most fundamental issues, including the many problems introduced
into electrical theory by the failure to recognize the existence of uncharged electrons and the
consequent lack of distinction between electric quantity and electric charge. As we saw in
Chapter 9, the unit of electric quantity is a unit of space (s). We find that the unit of electric
charge is a unit of energy (t/s). In current practice, both of these quantities are expressed in
the same measurement unit, esu (cgs system) or coulombs (SI system). Now that the electric
charge has been introduced into our subject matter, we will have to make the distinction that
current theory does not recognize, and instead of dealing only with coulombs, we will have
to specify coulombs (s) or coulombs (t/s). In this work the symbol Q, which is currently
being used for both quantities, will refer only to electric charge, or capacitor charge,
measured in coulombs (t/s). Electric quantity, measured in coulombs (s) will be represented
by the symbol q.
Returning now to the question as to the quantities entering into the capacitance, the volt, a
unit of force, has the space-time dimensions t/s². Since capacitance, as we have now seen,
has the dimensions of space, s, the coulomb, as a product of volts and farads, has the
dimensions t/s² × s = t/s. But the coulomb as the quotient of joules/volts has the dimensions
t/s × s²/t = s. Thus the coulomb that enters into the definition of the farad is not the same
coulomb that enters into the definition of the volt. We will have to revise these definitions
for our purposes, and say that the farad is one coulomb (t/s) per volt, while the volt is one
joule per coulomb (s).
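The distinction between the two coulombs can be restated as a short dimensional sketch (variable names are illustrative):

# The volt is one joule per coulomb: the coulomb in that definition is joule/volt.
joule = (1, -1)     # energy: t/s
volt = (1, -2)      # force: t/s^2
coulomb_q = (joule[0] - volt[0], joule[1] - volt[1])
print(coulomb_q)    # (0, 1): s, electric quantity
# The farad is one coulomb (t/s) per volt: charge/volt.
charge = (1, -1)    # t/s
farad = (charge[0] - volt[0], charge[1] - volt[1])
print(farad)        # (0, 1): s, confirming that capacitance has the dimensions of space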
The confusion between quantity (s) and charge (t/s) prevails throughout the electrostatic
phenomena. In most cases this does not result in any numerical errors, because the
calculations deal only with electrons, each of which constitutes one unit of electric quantity,
and is capable of taking one unit of charge. Thus the identification of the number of
electrons as a number of units of charge instead of a number of units of quantity does not
alter the numerical result. However, this substitution does place a roadblock in the way of an
understanding of what is actually happening, and many of the relations set forth in the
textbooks are incorrect.
For instance, the textbooks tell us that E = Q/s². E, the electric field intensity, is force per
unit distance, and has the space-time dimensions t/s² × 1/s = t/s³. The dimensions of Q/s² are
t/s × 1/s² = t/s³. This equation is therefore dimensionally correct. It tells us that, as we would
expect, the magnitude of the field is determined by the magnitude of the charge. On the
other hand, this same textbook gives the equation expressing the force exerted on a charge
by the field as F = QE. The space-time dimensions of this equation are t/s² = t/s × t/s³. The
equation is therefore invalid. In order to arrive at a dimensional balance, the quantity
designated as Q in this equation must have the dimensions of space, so that the equation in
space-time form will become t/s² = s × t/s³. In this case, then, the Q term is actually q
(quantity) rather than Q (charge), and the applicable relation is F = qE.
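The contrast between the two forms of the force equation can be checked the same way (a sketch, not part of the textbook being quoted):

# F = QE versus F = qE in space-time terms.
def mul(a, b):
    return (a[0] + b[0], a[1] + b[1])

force = (1, -2)    # t/s^2
E = (1, -3)        # field intensity: t/s^3
Q = (1, -1)        # charge: t/s
q = (0, 1)         # electric quantity: s
print(mul(Q, E) == force)   # False: t/s x t/s^3 = t^2/s^4
print(mul(q, E) == force)   # True:  s x t/s^3 = t/s^2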
The error due to the use of Q instead of q enters into many of the relations involving
capacitance, and has introduced considerable confusion into the theory of these processes.
Since we have identified the stored energy, or capacitor charge, as dimensionally equivalent
to charge, Q, the capacitance equation in its customary form, Q = CV, reduces to t/s = s ×
t/s², which is dimensionally consistent. The conventional form of the energy (or work,
symbol W) equation is
W = QV, reflecting the definition of the volt as one joule per coulomb. If CV is substituted
for Q in this equation, as would appear to be justified by the relation Q = CV, the result is
W = CV². This equation is not dimensionally valid, but it and its derivatives can be found
throughout the scientific literature. For instance, the development of theory in this area in
one current textbook46 begins with the equation dW = VdQ for the potential energy of a
charge, and by means of a series of substitutions of presumably equivalent quantities
eventually arrives at an expression for energy in terms of E, the electric field intensity, and
As, the volume occupied by the electric field. The first column of the accompanying
tabulation shows the expressions that are equated to energy in the successive steps in this
development. As indicated in the second column, the dimensional error in the first equation
carries through the entire sequence, and the space-time dimensions of these expressions
remain at t²/s³ instead of the correct t/s.
             In Textbook                             Correct
QV           t/s × t/s² = t²/s³          qV           s × t/s² = t/s
∫(Q/C) dQ    t/s × 1/s × t/s = t²/s³     ∫(Q/C) dq    t/s × 1/s × s = t/s
CV²          s × t²/s⁴ = t²/s³           qV           s × t/s² = t/s
E²As         t²/s⁶ × s² × s = t²/s³      E(q/s²)As    t/s³ × s/s² × s² × s = t/s

The error in this series of expressions was introduced at the start of the theoretical
development by a faulty definition of voltage. As indicated earlier, the volt is defined as one
joule per coulomb, but because of the lack of distinction between charge and quantity in
current practice, it has been assumed that the coulomb entering into this definition is the
coulomb of charge, symbol Q. In fact, as brought out in the previous discussion, the
coulomb that enters into the energy equation is the coulomb of quantity, which we are
denoting by the symbol q. The energy equation, then, is not
W = QV, but W = qV.
The correct terms and dimensions corresponding to those in the first two columns of the
tabulation are shown in columns 3 and 4. Here the term Q in the first two expressions, and
the term CV which was substituted for Q in the last two have been replaced by the correct
term q. As indicated in the tabulation, this brings all four expressions into agreement with
the correct space-time dimensions, t/s, of energy. The purely numerical terms in all of these
expressions were omitted from the tabulation, as they have no bearing on the dimensional
situation.
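The dimensions entering into the tabulation can be verified mechanically with a short sketch (numerical factors omitted, as in the tabulation itself; the notation is ours):

# Each quantity is a (t-exponent, s-exponent) pair; mul() multiplies quantities.
def mul(*dims):
    return tuple(sum(e) for e in zip(*dims))

Q, q, C, V = (1, -1), (0, 1), (0, 1), (1, -2)
E, A, s = (1, -3), (0, 2), (0, 1)
inv_C, inv_s2 = (0, -1), (0, -2)

textbook = [mul(Q, V), mul(Q, inv_C, Q), mul(C, V, V), mul(E, E, A, s)]
correct = [mul(q, V), mul(Q, inv_C, q), mul(q, V), mul(E, q, inv_s2, A, s)]
print(all(d == (2, -3) for d in textbook))   # True: all reduce to t^2/s^3
print(all(d == (1, -1) for d in correct))    # True: all reduce to t/s, energy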
When the full capacity of the capacitor at the existing voltage is reached, the opposing
forces arrive at an equilibrium, and the flow of electrons into the capacitor ceases. Just what
happens while the capacitor is filling or discharging is something that the theorists have
found very difficult to explain. Maxwell found the concept of a ―displacement current‖
essential for completing his mathematical treatment of magnetism, but he did not regard it as
a real current. "This displacement does not amount to a current," he says, "but it is the
commencement of a current." He describes the displacement as "a kind of elastic yielding to
the action of the force."47 Present-day theorists find this explanation unacceptable because
they have discarded the ether that was fashionable in Maxwell‘s day, and consequently have
nothing that can ―yield‖ where the plates of a capacitor are separated by a vacuum. The
present tendency is to regard the displacement as some kind of a modification of the
electromagnetic field, but the nature of the hypothetical modification is vague, as might be
expected in view of the lack of any clear understanding of the nature of the field itself. As
one textbook puts it, ―The displacement current is in some ways the most abstract concept
mentioned in this book so far.‖ 48 Another author states the case in these words:
If one defines current as a transport of charge, the term displacement current is certainly a misnomer when
applied to a vacuum where no charges exist. If, however, current is defined in terms of the magnetic fields it
produces, the expression is legitimate.49

The problem arises from the fact that while the physical observations and the mathematical
analysis indicate that a current is flowing into the space between the plates of the capacitor
when that space is a vacuum, as well as when it is occupied by a dielectric, such a current
flow is not possible if the entities whose movement constitutes the current are charged
electrons, as currently assumed. As stated in the foregoing quotation, there are no charges in
a vacuum. This impasse between theory and observation that now prevails is another of the
many items of evidence showing that the electric current is not a movement of charged
particles.
Our analysis shows that the electrons do, in fact, flow into the spatial equivalent of the time
interval between the plates of the capacitor, but that these electrons are not charged, and are
unobservable in what is called a vacuum. Aside from being only transient, this displacement
current is essentially equivalent to any other electric current.
The additional units of space (electrons) forced into the time (equivalent space) interval
between the plates increase the total space content. This can be demonstrated experimentally
if we introduce a dielectric liquid between the plates, as the increase in the amount of space
decreases the internal pressure, the force per unit area due to the weight of the liquid. For
this purpose we may consider a system in which two parallel plates are partially immersed in
a tank of oil, and so arranged that the three sections into which the tank is divided by the
plates are open to each other only at the bottom of the tank. If we now connect the plates to a
battery with an effective voltage, the liquid level rises in the section between the plates.
From the foregoing explanation it is evident that the voltage difference has reduced the
pressure in the oil. The oil level has then risen to the point where the weight of the oil above
the free surface balances the negative increment due to the voltage differential.
Because accepted theory requires the ―displacement current‖ to behave like an electric
current without being a current, conventional science has had great difficulty in ascertaining
just what the displacement actually is. It is an essential element in Maxwell‘s formulations,
but some present-day authors regard it as superfluous. ―All the physics of dielectrics could
be discussed without ever bringing in the displacement vector,‖ 50 says Arthur Kip. One of
the principal factors contributing to this uncertainty as to its status is that the displacement is
customarily defined and treated in electrostatic terms, whereas it is actually a manifestation
of current electricity. In Maxwell‘s equation for the displacement current, the current
density, I/s², and the time derivative of the displacement, dD/dt, are additive, and are
therefore terms of an equivalent nature; that is, they have the same dimensions. The space-
time dimensions of current density are s/t × 1/s² = 1/st. The dimensions of D, the
displacement, are then 1/st × t = 1/s. Its place in the capacitance picture is then evident. In
the storage process, units of space, uncharged electrons, are forced into the surrounding
equivalent space–that is, the spatial equivalent of time (t = 1/s)–and this inverse space, 1/s,
becomes one of the significant quantities with which we must deal.
In the customary electrostatic treatment of the displacement, it is defined as D = ε₀E, where
E is the field intensity (an electrostatic concept) and ε₀ is the permittivity of free space. Since
the dimensions of E are t/s³, and we have now found those of D to be 1/s, the space-time
dimensions of permittivity are 1/s × s³/t = s²/t. In current practice, however, the permittivity
is expressed in farads per meter. This makes it dimensionless, since both the farad and the
meter are units of space. We are thus faced with a conflict between the dimensional
definition of permittivity expressed in the conventional unit and the definition derived from
Maxwell's relations, a definition that is consistent with the dimensions of displacement. The
relation between the two in space-time terms, which is s²/t, shows where the difference
originates, as this is the relation of the unit of electric current, s, to the electrostatic unit, t/s.
The farad per meter is an electrostatic unit, while the s²/t dimensions for permittivity relate
this quantity to the electric current system.
Permittivity is of importance mainly in connection with non-conducting substances, or
dielectrics. If such a substance is inserted between the plates of a capacitor, the capacitance
is increased. The rotational motions of all non-conductors contain motion with space
displacement. It is the presence of these space components that blocks the translational
motion of the uncharged electrons through the time components of the atomic structure, and
makes the dielectric substance a non-conductor. Nevertheless, dielectrics, like all other
ordinary matter, are predominantly time structures; that is, their net total displacement is in
time. This time adds to the time of the reference system, and thus increases the capacitance.
From this explanation of the origin of the increase, it is evident that the magnitude of the
increment will vary by reason of differences in the physical nature of the dielectrics,
inasmuch as different substances contain different amounts of speed displacement in time,
arranged in different geometrical patterns. The ratio of the capacitance with a given
dielectric substance between the plates to the capacitance in a vacuum is the relative
permittivity, or dielectric constant, of the substance.
The dielectric constants of most of the common dielectric substances–Class A dielectrics, as
they are called–show little variation at low frequencies under ordinary conditions.51 This
indicates that permittivity is an inherent property of the substance, a consequence of its
composition and structure, rather than of its relation to the environment. This is consistent
with the theoretical explanation given above.
Conventional theories of dielectric phenomena are based on the premise that these
phenomena are electrostatic in nature. It should be understood, however, that all theories
which depend on the existence of electric charges in electrically neutral materials cannot be
other than hypothetical. Furthermore, conventional science has no comprehensive
electrostatic theory of dielectrics. As expressed by W. J. Duffin:
It is important to realize that calculations of fields due to, and forces on, charge distribution are performed on a
model and the results compared with experiment...different models are required to account for different sets of
experimental results.52

In the model that is applied to the capacitance problem it is assumed (1) that positive* and
negative* charges exist in the electrically neutral dielectric, (2) that ―small movements of
the charges have taken place in opposite directions,‖ and (3) that these movements produce
the ―polarization which we believe takes place (italics added).‖ 53 As this statement by
Duffin concedes, there is no direct evidence of the polarization that plays the principal role
in the theory. The entire ―model‖ is hypothetical.
Clarification of the dimensions of the quantity known as permittivity eliminates the static
charges from the mathematics of the electrical storage process, and thereby cuts the ground
out from under all of the electrostatic models. The customary mathematical treatment is
carried out in terms of four quantities, the displacement, D, the polarization, P, the electric
field intensity, E, and the permittivity, ε₀. These quantities, the investigators tell us, are
related by the expression P = D - ε₀E. We have already seen, earlier in this chapter, that the
space-time dimensions of D are 1/s, and those of the permittivity, ε₀, are s²/t. The dimensions
of the quantity ε₀E are then s²/t × t/s³ = 1/s. It follows that the dimensions of P are also 1/s.
We thus find that all of the quantities entering into the dielectric processes are quantities
related to the electric current: the electric quantity (s), the capacitance (s), the displacement
(1/s), the polarization (1/s), and the quantity ε₀E, which likewise has the dimensions 1/s. No
quantity with the dimensions of charge (t/s) has any place in the mathematical treatment.
The language is that of electrostatics, using terms such as "polarization," "displacement,"
etc., and an attempt has been made to introduce electrostatic quantities by way of the electric
field intensity, E. But it has been necessary to couple E with the permittivity, ε₀, and to use it
in the form ε₀E, which, as just pointed out, cancels the electrostatic dimension of E. Electric
charges thus play no part in the mathematical treatment.
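The consistency of this relation can be confirmed with the same kind of dimensional sketch (illustrative only):

# P = D - e0*E: the term e0*E must have the same dimensions as D.
def mul(a, b):
    return (a[0] + b[0], a[1] + b[1])

D = (0, -1)       # displacement: 1/s
e0 = (-1, 2)      # permittivity: s^2/t
E = (1, -3)       # field intensity: t/s^3
print(mul(e0, E) == D)   # True: s^2/t x t/s^3 = 1/s, so P is also 1/s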
A similar attempt has been made to bring the electric field intensity, E, into relations that
involve the current density. Here, again, the electrostatic quantity, E, is out of place, and has
to be removed mathematically by coupling it with a quantity that converts it into something
that has a meaning in electric current phenomena. The quantity that is utilized for this
purpose is conductivity, symbol s, space-time dimensions s2/t2. The combination sE has the
dimensions s2/t2 x t/s3 = 1/st. These are the dimensions of current density. Like the
expression 0E, previously discussed, the expression E has a physical meaning only as a
whole. Thus it is indistinguishable from current density. The conventional model brings the
field intensity into the theoretical picture, but here, again, it is necessary to remove it by a
mathematical device before the theory can be applied in practice.
In those cases where the electric field intensity has been used in dealing with electric current
phenomena, without introducing an offsetting quantity such as σ or ε₀, the development of
theory leads to wrong answers. For example, in their discussion of the "theoretical basis of
Ohm's law," Bleaney and Bleaney say that "when an electric field strength E acts on a free
particle of charge q, the particle is accelerated under the action of the force," and this "leads
to a current increasing at the rate dJ/dt = n(q²/m)E,"54 where n is the number of particles
per unit volume. The space-time dimensions of this equation are 1/st × 1/t = 1/s³ × s² × s³/t³ ×
t/s³. Thus the equation is dimensionally balanced. But it is physically wrong. As the authors
admit, the equation "is clearly at variance with the experimental observation." Their
conclusion is that there must be "other forces which prevent the current from increasing
indefinitely."
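The dimensional balance claimed for this equation can be checked directly (a sketch; q is treated here as electric quantity, s, consistent with the discussion above):

# dJ/dt = n (q^2/m) E in space-time terms.
def mul(*dims):
    return tuple(sum(e) for e in zip(*dims))

J = (-1, -1)                 # current density: 1/(s t)
dJ_dt = mul(J, (-1, 0))      # division by time
n = (0, -3)                  # particles per unit volume: 1/s^3
q_sq = (0, 2)                # q^2, with q as electric quantity (s)
inv_m = (-3, 3)              # 1/mass: s^3/t^3
E = (1, -3)                  # field intensity: t/s^3
print(dJ_dt == mul(n, q_sq, inv_m, E))   # True: both sides reduce to 1/(s t^2)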
This fact that the key element of the orthodox theory of the electric current, the hypothesis
as to the origin of the motion of the electrons, is ―clearly at variance‖ with the observed facts
is a devastating blow to the theory, and not all of its supporters are content to simply ignore
the contradiction. Some attempt to find a way out of the dilemma, and produce explanations
such as the following:
When a constant electric field E is applied each electron is accelerated during its free path by a force – E, but
at each collision it loses its extra energy. The motion of the electrons through the wire is thus a diffusion
process and we can associate a mean drift velocity v with this motion.55

But collisions do not transform accelerated motion into steady flow. If they are elastic, as the
collisions of the electrons presumably are, the acceleration in the direction of the voltage
gradient is simply transferred to other electrons. If the force Eq actually existed, as present-
day electrical theory contends, it would result in accelerating the average electron. The
authors quoted in reference 54 evidently recognize this point, but they fall back on the
prevailing confidence that something will intervene to save the "moving charge" theory of the electric current from its multiplicity of problems; there "must be other forces" that take
care of the discrepancy. No one wants to face the fact that a direct contradiction of this kind
invalidates the theory.
The truth is that this concept of an electrostatic force (Eq) applied to the electron mass is one
of the fundamental errors introduced into electrical theory by the assumption that the electric
current is a motion of electric charges. As the authors quoted above bring out in the
derivation of their electric current equation, such a force would produce an accelerating rate
of current flow, conflicting with the observations. In the universe of motion the moving
electrons that constitute the electric current are uncharged and massless. The mass that is
involved in the current flow is not a property of the electrons, which are merely rotating
units of space; it is a property of the matter of the conductor. Instead of an electrostatic
force, t/s², applied to a mass, t³/s³, producing an acceleration (F/m = t/s² × s³/t³ = s/t²), what actually exists is a mechanical force (voltage, t/s²) applied to a mass per unit time, a resistance, t²/s³, producing a steady flow, an electric current (V/R = t/s² × s³/t² = s/t).
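The contrast between the two readings can be restated in the same illustrative exponent-pair form; the variable names below are arbitrary labels, not notation from the text.

# Illustrative contrast between the two interpretations, in exponent-pair form (s^a * t^b).
def mul(x, y):
    return (x[0] + y[0], x[1] + y[1])

force      = (-2, 1)   # t/s^2 (electrostatic force, or voltage)
inv_mass   = (3, -3)   # 1/m, s^3/t^3
inv_resist = (3, -2)   # 1/R, s^3/t^2

print(mul(force, inv_mass))    # (1, -2): s/t^2, an acceleration
print(mul(force, inv_resist))  # (1, -1): s/t, a steady current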
Furthermore, it is observed that the conductors are electrically neutral even when a current is
flowing. The explanation given for this in present-day electrical theory is that the negative*
charges which are assumed to exist on the electrons are neutralized by equivalent positive*
charges on the atomic nuclei. But if the hypothetical electrostatic charges are neutralized so
that no net charge exists, there is no electrostatic force to produce the movement that
constitutes the current. Thus, even on the basis of conventional physical theory, there is
abundant evidence to show that the moving electrons do not carry charges. The
identification of the electric current phenomena with the mechanical aspects of electricity
that we derive from the theory of the universe of motion now provides a complete and
consistent explanation of these phenomena without recourse to the hypothesis of moving
charged electrons.
As noted in Chapter 13, charged electrons are subject to the same forces that apply to their
uncharged counterparts, as well as to those specifically appertaining to the charges. It would
therefore be theoretically possible to apply a voltage and store these charged electrons in
capacitors in the same manner as the uncharged electrons (electric current). In practice,
however, the storage of charged electrons is accomplished in a totally different manner. A
widely used electrostatic device is the Van de Graaff generator. In this generator charged electrons are produced and sprayed onto a moving belt of insulating material. The belt
carries them to a storage unit in the form of a large hollow metal sphere. The electrons pass
from the belt into the sphere, gradually building up a potential that may reach a level as high
as several million volts.
In our examination of electric current phenomena in the preceding chapters we found that
the electrons which constitute the current move from the regions of higher voltage (greater
concentration or higher speed of the electrons) to regions of lower voltage. In the Van de Graaff generator, electrons of very low electrostatic potential on the belt pass into a container
in which the potential may be in the million volt range. Obviously, we are dealing with two
different things, both having the dimensions of force, and both customarily measured in
volts, but physically unlike in some important respects.
It should now be evident why the term "potential" was not used in the preceding pages in connection with capacitor storage, or other electric current phenomena. The property of the electric current that we are calling "voltage" is the mechanical force of the current, a force
that acts in the same manner as the force responsible for the pressure exerted by a gas.
Electrostatic potential, on the other hand, is the radial force of the charges, which decreases
rapidly with the distance. The potential of a charged electron (in volts) is very large
compared to the contribution of the translational motion of that electron to the voltage. It
follows that even where the potential is in the million volt range, the electron concentration
in the storage sphere, and the corresponding voltage, may be low. In that event, the small
buildup of the voltage in the electrode at the end of the belt is enough to push the charged
electrons into the storage sphere, regardless of the high electrostatic potential.
Many present-day investigators realize that they cannot account for electric currents by
means of electrostatic forces alone. Duffin, for instance, tells us that "In order to produce a steady current there must be, for at least part of the circuit, non-electrostatic forces acting on the carriers of charge." 13 His recognition that these forces act on "the carriers of charge," the electrons, rather than on the charges, is particularly significant, as this means that neither the forces nor the objects on which they act are electrostatic. Duffin identifies the non-electrostatic forces as being derived "from electromagnetic induction" or "non-homogeneities such as boundaries between dissimilar materials, or temperature gradients."
Since the electric currents available to both the investigators and the general public are
produced either by electromagnetic induction or by processes of the second non-electrostatic
category mentioned by Duffin (batteries, etc.), the non-electrostatic forces that admittedly
must exist are adequate to account for the current phenomena as a whole, and there is no
need to introduce the hypothetical electrostatic charge and force. We have already seen that
the charge does not enter into the mathematics of the current flow and storage processes.
Now we find that it has no place in the qualitative explanation of current flow either.
Addition of these further items of physical and mathematical evidence to those discussed
earlier now provides conclusive proof that the mathematical structure of theory dealing with
the storage of electric current is not a representation of physical reality. This is not an
isolated case. As pointed out in Chapter 13, the conditions under which scientific
investigation is conducted have had the effect of directing the investigation into
mathematical channels, and the results that have been attained are almost entirely
mathematical. As expressed by Richard Feynman:
Every one of our laws is a purely mathematical statement in rather complex and abstruse mathematics. 56
The development of this mathematical structure of theory is an outstanding achievement, and it has had very important–even spectacular–practical results. However, these successes
have fostered a tendency to forget that mathematics is not physics. It is a useful, perhaps
indispensable, tool for the physicist, but physical phenomena are subject to a multitude of
limitations that do not apply to the mathematics that are utilized to represent these
phenomena, and consequently are not recognized unless they are identified physically. The
mathematical representation of space, for example, can be "curved," or otherwise modified,
but this does not, in any way, assure us that physical space can be so modified. That
question can be settled only by means of a purely physical investigation such as the one
reported in this work, which finds that such a modification of extension space is impossible.
CHAPTER 16
Induction of Charges
Clarification of the structure of the gravitational equation and application of the new
information to formulation of the primary force equation opens the door to an understanding
of the Coulomb equation, F = QQ′/d², that expresses the electrostatic force. This equation is
set up on an equivalent basis without a numerical coefficient; that is, the numerical value of
the charge Q is defined by the equation itself. It would seem, therefore, that when the other
quantities in the equation, force, F, and distance, d, are expressed in terms of the cgs
equivalents of the natural units, Q should likewise take the cgs value of the appropriate
natural unit. But the dimensions of charge are t/s, and the natural unit of t/s in cgs units is 3.334 × 10⁻¹¹ sec/cm, whereas the experimental unit of charge has the different numerical value 4.803 × 10⁻¹⁰. In conventional physics there is no problem here, as the unit of charge is
regarded as an independent quantity. But in the context of the theory of the universe of
motion, where all physical quantities are expressed in terms of space and time only, it has
been a puzzle that we have only recently been able to solve. One of the new items of
information that was derived from the most recent analysis of the gravitational equation, and
incorporated into the primary force equation, is that the individual force equations deal with
only one force (motion). The force (apparently) exerted by charge A on charge B and the
force (apparently) exerted by charge B on charge A are not separate entities, as they appear
to be; they are merely different aspects of the same force. The reasons for this conclusion
were explained in the gravitational discussion.
A second point, also derived from the gravitational study reported in Chapter 14, although it
could have been arrived at independently, is that there is a missing term in the usual
statement of each of the force equations. This term, identified as 1/s × (s/t)ⁿ⁻¹ in the primary force equation, must be supplied in order to balance the equation. In the gravitational
equation it is an acceleration term. In the Coulomb equation it is reciprocal space, 1/s.
Here we encounter a difference between the two equations that we have been examining. In
the gravitational equation the unit of mass is defined independently of the equation. In the
Coulomb equation, however, the unit of charge is defined by the equation. Consequently,
any term that is omitted from the statement of the equation is automatically combined with
the charge, instead of having to be introduced separately, as was necessary in the case of the
acceleration term of the gravitational equation. The quantity 1/s, which, as we have just
seen, is required for a dimensional balance, therefore becomes a component of the quantity
that is called "charge" in the statement of the equation. That quantity is actually t/s (the true dimensions of charge) multiplied by 1/s (the omitted term), which produces t/s².
The same considerations apply to the size of the unit of this quantity. Since the charge is not
defined independently of the equation, the fact that there is only one force involved means
that the expression QQ′ is actually Q½Q′½. It follows that, unless some structural factor (as previously defined) enters into the Coulomb relation, the value of the natural unit of Q derived from that relation should be the second power of the natural unit of t/s². In carrying
out the calculation we find that a factor of 3 does enter into the equation. This probably has
the same origin as the factors of the same size that apply to a number of the basic equations
examined in Volume I. It no doubt has a dimensional significance, although a full
explanation is not yet available.
The natural unit of t/s², as determined in Volume I, is 7.316889 × 10⁻⁶ sec/cm². On the basis of the findings outlined in the foregoing paragraphs, the value of the natural unit of charge is
Q = (3 × 7.316889 × 10⁻⁶)² = 4.81832 × 10⁻¹⁰ esu.
There is a small difference (a factor of 1.0032) between this value and that previously
calculated from the Faraday constant. Like the similar deviation between the values for the
gravitational constant, this difference in the values of the unit of charge is within the range
of the secondary mass effects, and will probably be accounted for when a systematic study
of the secondary mass relations is undertaken.
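Purely as an arithmetical restatement, the calculation and the 1.0032 deviation can be reproduced from the quoted figures alone.

# Illustrative restatement of the calculation above, using the values quoted in the text.
unit_t_per_s2 = 7.316889e-6              # natural unit of t/s^2, sec/cm^2
Q_natural     = (3 * unit_t_per_s2) ** 2 # structural factor 3, then squared
Q_faraday     = 4.80287e-10              # charge unit derived from the Faraday constant, esu

print(Q_natural)               # ~4.81832e-10 esu
print(Q_natural / Q_faraday)   # ~1.0032, the small deviation discussed above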
The equivalence of the scalar motions AB and BA, which plays an important part in the
force relations, is also responsible for the existence of a unique feature of static electricity,
the induction of charges. One of the basic characteristics of scalar motion, resulting from this equivalence, is that it is indifferent to location in the reference system. From the
vectorial standpoint, locations are very significant. A vectorial motion originating at location
A and proceeding in the direction AB is specifically defined in the reference system, and is
sharply distinguished from a similar motion originating at location B and proceeding in the
direction BA. But since a scalar motion has magnitude only, a scalar motion of atom A
toward atom B is simply a decrease in the distance between A and B. As such, it cannot be
distinguished from a similar motion of B toward A. Both of these motions have the same
magnitude, and neither has any other property.
Of course, the scalar motion plus the coupling to the reference system does have a specific
location in that system: a specific reference point and a specific direction. But the coupling
is independent of the motion. The factors that determine its nature are not necessarily
constant, hence the motion AB does not necessarily continue on the AB basis. A change in
the coupling may convert it to BA, or it may alternate between the two.
The rotational component of the scalar motion of a charged atom always maintains the same
relation to an atom at another location. Half of the elements of that rotational motion are
approaching the second atom, while the other half are receding in equivalent directions and
at equivalent speeds. But this is not true of the rotational vibration that constitutes a charge.
In this case the relation of the motion (charge) to the distant atom is continually changing;
that is, the relative motion of the two atoms has the same vibratory character as the charge
itself. As has been stated, a scalar motion of A (such as a charge) toward or away from atom B is indistinguishable from a similar motion of B toward or away from A. The representation
of this motion in the spatial reference system can therefore take either form.
Ordinarily, some redistribution of energy is required for a change from one representation of
a motion to another, and such changes therefore do not usually take place in the absence of
external forces. In fact, Newton's first law of motion requires a motion in the direction AB
to continue in that direction indefinitely unless acted upon by some force. However, there is
an exception to this general rule because of the existence of a class of phenomena that we
may call zero energy processes. Most of the physical processes that have been examined in
the preceding pages either operate by application of energy, or occur spontaneously with
release of energy. For instance, there is a force of cohesion between the atoms of a solid, and
energy must be applied to separate them. If they are allowed to recombine, a corresponding
amount of energy is liberated. But the various components of a combination of basic
motions are not bound to each other in this manner in all cases. Often they are merely
associated, and are free to separate or combine without gaining or losing energy.
One such zero energy process is the simultaneous creation or destruction of charges of the
same magnitude and opposite polarity. It is the existence of this process, together with the
equivalence of scalar motions AB and BA, that makes the induction of electric charges
possible. As we saw earlier, all material objects contain a concentration of uncharged
electrons, which are essentially rotating units of space. In each case where an electron exists
in an atom of matter, the atom likewise exists in the unit of space that constitutes the
electron. This might be compared to a solution of alcohol in water. The atoms of alcohol
exist in the water, but it is equally true that the atoms of water exist in the alcohol.
Let us now consider an example in which a positively* charged body X is brought into the
vicinity of an otherwise isolated metallic object Y. The scalar direction of the vibratory
motion (charge) of atom A in object X is periodically reversing, and at each reversal the
reference point of the motion of A relative to any atom B that is free to move is redetermined
by chance; that is, the motion may appear in the reference system either as a motion of A
toward B or a motion of B toward A. By means of this chance process, the motion is
eventually divided equally between AB and BA.
An atom C that is located in extension space is not free to move because energy would be
required for the motion. But atom B in object Y, which is located in electron space, is not
subject to this energy restriction, as the rotational motions of the atom and the associated
electron are oppositely directed, and the same motion that constitutes a positive* charge on
the atom constitutes a negative* charge on the electron, because in this case it is related to a
different reference point. The production of these oppositely directed charges is a zero
energy process. It follows that atom B is free to respond to the periodic changes in the
direction of the scalar motion originating at A. In other words, the positive* charge on atom
A in object X induces a similar positive* charge on atom B in object Y and a negative*
charge on the associated electron.
The electron is easily separable from the atom, and it is therefore pulled to the near side of
object Y by the positive* charge on object X, leaving atom B in a unit of extension space,
and with a positive* charge. The positions of the positively* charged atoms are fixed by the
inter-atomic forces, and these atoms are not able to move under the influence of the
repulsive forces exerted by charged object X, but the positive* charges are transferred to the
far side of object Y by the induction process. The residual positive* charge on atom B
induces a similar charge on a nearby atom D that is located in electron space. The electron at
D, now with a negative* charge, is drawn to atom B, where it neutralizes the positive*
charge and restores the atom to the neutral status. This process is repeated, moving the
positive* charge farther from object X in each step, until the far side of object Y is reached.
Where the original charge on object X is negative*, a negative* charge is induced on the
electron associated with atom B. This is equivalent to a positive* charge on the atom. In this
case, the negatively* charged electron is repelled by the negative* charge on object X and
migrates to the far side of object Y. The residual positive* charge on the atom is then
transferred to the near side of this object by the induction process.
If metallic object Y is replaced by a dielectric, the situation is changed, because in this case
the electrons no longer have the capability of free movement. The induced charge on the
atom and the opposite charge on the electron (or vice versa) remain joined. It is possible,
however, for this atom to participate in a relative orientation of motions with a neutral atom-
electron unit with which it is in contact, the result being a two-atom combination in which
the negative* pole of one atom is neutralized by contact with the positive* pole of the other,
leaving one atom-electron unit positively* charged and the other negatively* charged (that
is, the charge is on the electron).
When under the influence of an external charge, the optimum separation between the unlike charges is the maximum separation, the condition that is reached when the carriers of the negative* charges are free to move. The situation in the two-atom combination is therefore more
favorable than that in the single atom, and the combination takes precedence. A still greater
separation is achieved if one or more neutral atoms are interposed between the atoms of the
charged combination. Each event of this kind moves either the positive* or the negative*
charge in the direction determined by the inducing charge. Thus the effect of an inducing
charge on a dielectric is a separation of the positive* and negative* charges similar to, but
less complete than, that which takes place in a conductor, because the length of the chains of
atoms is limited by thermal forces.
On the basis of the foregoing explanation, the charges are produced by induction. The
subsequent separation is accomplished by action of the inducing charge on the newly
produced induced charges. Conventional theory of dielectrics is based on the concept of the
nuclear atom, a hypothetical structure in which the components are held together by the
attraction between positive* and negative* charges. It is assumed that these charges have a
limited amount of freedom of movement, and can separate slightly on being subjected to the
effect of an external charge. One observation that has been interpreted as supporting the
assumption that pairs of positive* and negative* charges are always present in the atoms is
that if a charged dielectric is subdivided, each of the parts contains both positive* and
negative* charges. This is quite different from the behavior of charges in conductors. If a
metallic object is cut perpendicular to the line of force while under the influence of an
inducing charge, the two parts are oppositely charged, and will remain so after the inducing
charge is removed. But if the same procedure is followed with a dielectric, both parts have
positive* and negative* charges on the opposite sides, just as in the original object before
separation. And when the inducing charge is removed, both parts revert to the neutral status.
The current interpretation of these results, as expressed in a contemporary textbook, is this:
The inference to be drawn is that insulators contain charges which can move small distances so that attraction
still occurs, but that they are bound in equal and opposite amounts so that no splitting of the body can separate
two kinds of charge.57
The amount of separation of charges that could take place in the manner assumed by this
theory is admittedly very small, and it is difficult to account for the generation of any
substantial attractive or repulsive forces by this means. But forces of this nature actually do
exist. Small static charges, usually produced by friction, are common in the terrestrial
environment, and they produce effects that are quite noticeable. Merely walking across a
carpeted room in cold, dry weather can build up enough charge to give one an
uncomfortable sensation when he touches a metallic object and discharge occurs. Likewise,
the behavior of the modern synthetic fabrics shows the effect of static charges, including the
inductive effect, in a conspicuous, and often annoying, way. These fabrics behave in much
the same way as charged conductors. They attract such things as bits of paper and chips of
wood, and are themselves attracted by the furniture and walls of a room.
The discrepancy between the very small theoretical separation of charges and the relatively
large inductive effect has forced the theorists to call upon collateral factors, such as the
presence of contaminants, to explain the observations. For example, the following statement
taken from a physics textbook refers to the ability of electrically charged non-conducting
objects to pick up bits of paper and wood:
A chip of perfect insulator would show hardly any effect, but bits of wood and paper always have enough
moisture to make them slightly conducting.58
The much greater separation of charges that results from the inductive process described in
this chapter resolves this problem, while it remains consistent with the appearance of
charges at both ends of each piece when a dielectric under the influence of an inducing force
is separated. Before the separation takes place a substantial number of atoms of the dielectric
exist in multi-atom combinations with positively* and negatively* charged ends. Although
the separation of the charges in many of these combinations is large compared to the
distance between atoms, it is very small compared to the dimensions of an ordinary charged
dielectric. Thus when the separation occurs, there are charged combinations of this kind in
each portion. Consequently, each piece has the same charge characteristics as the original
unbroken object.
It was pointed out in Volume I that the existence of positive* and negative* charges in close
proximity, as required by the nuclear theory of the atom, is incompatible with the observed
behavior of charges of opposite polarity. These observations show that such charges
neutralize each other long before they reach separations as small as those which would exist
in the hypothetical nuclear atom. This is a decisive argument against the validity of the
nuclear theory. It is appropriate, therefore, to note that the existence of both positive* and
negative* charges in objects under the influence of inducing charges does not conflict with
our finding that there is a minimum distance (identified as the natural unit of distance, 4.56 × 10⁻⁶ cm) within which charges of opposite polarity cannot coexist. The coexistence of
induced positive and negative charges is possible because they are forcibly prevented from
reaching the limiting distance at which they would combine. If the external charge is
removed, the induced charges do combine and neutralize each other.
In charging by induction it is often convenient to make use of grounding, which is simply
connecting the inductively charged object to the earth by means of a conductor. The earth is
electrically neutral, and so large that it is insensitive to gains or losses of charge in the
amounts actually encountered in practice. If object Y is grounded while under the influence
of a negative* inducing charge, the negatively* charged electrons on the far side of the
object are forced through the conductor into the earth. Breaking the ground connection then
leaves only positive* charges on object Y, and this object remains positively* charged after
the object X that contains the inducing charge is removed. If the induction process is
initiated by a positive* charge on object X, the ground connection permits electrons to be
pulled from the earth and charged negatively* to neutralize the positive* charges on Y,
leaving only negative* charges. Breaking the ground connection then leaves Y negatively*
charged.
The locations occupied by the charges in any charged conducting object not subject to
inductive forces are determined by the repulsion between the charges, which operates to
produce the maximum separation. If the object is under the influence of outside charges, the
charge locations are determined by the net effect of the inductive potential and the repulsion
between like charges. In either case, the result is that the charges are confined to the outside
surfaces of the conducting materials, and, except for local variations in very irregular bodies,
there are no charges in the interiors. The same considerations apply if the objects are hollow.
The inside walls of such objects carry no charges. These walls may be charged by placing an
insulated charged object in the hollow interior, but in that case the inside walls are "outside"
from the standpoint of the inducing charge; that is, they are the locations closest to that
charge.
The observed concentration of charge at the conductor surfaces is another direct
contradiction of the accepted theory of the electric current, which views the current as a
movement of charges. This concentration at the surface is due to the mutual repulsion
between the charges, which drives them to opposite sides of the conductor. The repulsive
force is not altered if the charges move along the conductor, since the direction of this force
is perpendicular to the direction of movement. Nor would the presence of positive* charges
on the interior atoms of the conductor alter this situation, if any such charges existed. If the
electrons, or any portion of them, are firmly held by the attraction of the hypothetical proton
charges, then they cannot move as an electric current. If they are free to move in response to
an electric potential gradient, then they are also free to move to the surfaces of the conductor
under the influence of their mutual repulsions.
From this it follows that if present-day electric theory were correct the current should flow
only along the outer surfaces of the conductors. In fact, however, electric resistance is
generally proportional to the cross-sectional area of the conductor, indicating that the motion
takes place fairly uniformly throughout the entire cross section. This adds one more item of
evidence supporting the finding that the electric current is a movement of uncharged
electrons, not of charges.
Since no charges are induced within a hollow conductor by an outside charge, any object
within a conducting enclosure is insulated against the effects of an electric charge. Similar
elimination, or reduction, of these effects is accomplished by conductors of other shapes that
are interposed between the charge and the objects under consideration. This is the process
known as shielding, which has a wide variety of applications in electrical practice.
Within the limits to which the present examination of electrical phenomena has been carried,
there does not appear to be any major error in the conventional dimensional assignments,
other than those discussed in the preceding pages. Aside from the errors that have been
identified, the SI system is dimensionally consistent, and consistent with the mechanical
system of quantities. The space-time dimensions of the most commonly used electrical units
are listed in Table 28. The first column of this table lists the symbols that are used in this
work. The other columns are self-explanatory.
Table 28: Electric Quantities

Symbol  Quantity            Unit                       Space-time dimensions
t       time                second                     t
        dipole moment       coulomb (t/s) × meter      t
W       energy (work)       watt-hour                  t/s
Q       charge (flux)       coulomb (t/s)              t/s
V       potential           volt                       t/s²
V       voltage             volt                       t/s²
E       field intensity     volt/meter                 t/s³
        flux density        coulomb (t/s)/meter²       t/s³
        charge density      coulomb (t/s)/meter³       t/s⁴
        resistivity         ohm-meter                  t²/s²
R       resistance          ohm                        t²/s³
        current density     ampere/meter²              1/st
        power               watt                       1/s
D       displacement        coulomb (s)/meter²         1/s
P       polarization        coulomb (s)/meter²         1/s
s       space               meter                      s
q       electric quantity   coulomb (s)                s
C       capacitance         farad                      s
I       current             ampere                     s/t
ε       permittivity                                   s²/t
σ       conductivity        siemens/meter              s²/t²
        conductance         siemens                    s³/t²
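A few familiar relations among these quantities can be spot-checked directly from the table entries. The sketch below is an illustration only; the exponent-pair notation is a bookkeeping device and the selection of relations is arbitrary.

# Illustrative spot-check of Table 28: a few familiar electrical relations expressed with
# the exponent pairs (a, b) for s^a * t^b listed in the table.
def mul(x, y):
    return (x[0] + y[0], x[1] + y[1])

def div(x, y):
    return (x[0] - y[0], x[1] - y[1])

I_cur  = (1, -1)    # current, s/t
R_res  = (-3, 2)    # resistance, t^2/s^3
V_volt = (-2, 1)    # voltage, t/s^2
Q_chg  = (-1, 1)    # charge, t/s
length = (1, 0)     # space, s

assert mul(I_cur, R_res) == V_volt     # V = IR        -> t/s^2
assert mul(V_volt, I_cur) == (-1, 0)   # power = VI    -> 1/s
assert div(Q_chg, V_volt) == (1, 0)    # C = Q/V       -> s (capacitance, farad)
assert div(V_volt, length) == (-3, 1)  # E = V/meter   -> t/s^3 (field intensity)
print("Table 28 entries are mutually consistent for these relations")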
The natural units of most of these quantities can be derived from the natural units previously
evaluated. Those of the remaining quantities can be calculated by the methods used in the
previous determinations, but the evaluation is complicated by the fact that the measurement
systems in current use are not internally consistent, and it is not possible to identify a
constant numerical value that relates any one of these systems to the natural system of units,
as was done for the mechanical quantities that involve the unit of mass. Neither the SI
system nor the cgs system of electrical units qualifies as a single measurement system in this
sense. Both are combinations of systems. In the present discussion we will distinguish the
measurement systems, as here defined, by the numerical coefficients that apply, in these
systems, to the natural unit of space, s, and inverse speed, t/s.
On the basis of the values of the natural units of space and time in cgs terms established in
Volume I, the numerical coefficient of the natural unit of s, regardless of what name is applied to the quantity, should be 4.558816 × 10⁻⁶, while that of the natural unit of t/s should be 3.335635 × 10⁻¹¹. In the mechanical system of measurement the quantity s is identified in its
most general character as space, and the unit has the appropriate numerical coefficient. But
the concept of mass was introduced into the t/s quantity, here called energy, and an arbitrary
mass unit was defined. This had the effect of modifying the numerical values of the natural
units of energy and its derivatives by the factor 4.472162 × 10⁷, as explained in Volume I.
The definition of the unit of charge (esu) by means of the Coulomb equation in the
electrostatic system of measurement was originally intended as a means of fitting the
electrical quantities into the mechanical measurement system. But, as pointed out in
Chapter 14, there is an error in the dimensional assignments in that equation which
introduces a deviation from the mechanical values. The electrostatic unit of charge and the
other electric units that incorporate the esu therefore constitute a separate system of
measurement, in which t/s is identified with electric charge. The unit of this quantity was
evaluated from the Faraday constant in Chapter 9 as 4.80287 × 10⁻¹⁰ esu.
Unit charge can also be measured directly, inasmuch as some physical entities are incapable
of taking more than one unit of electric charge. The charge of the electron, for instance, is
one unit. Direct measurement of this charge is somewhat more difficult than the derivation
of the natural unit from the Faraday constant, but the direct measurements are in reasonably
good agreement with the indirectly derived value. In fact, as noted in Chapter 14,
clarification of the small scale factors that affect these phenomena will probably bring all
values, including the one that we have derived theoretically, into agreement.
An electromagnetic unit (emu) analogous to the esu can be obtained by magnetic
measurements, and this forms the basis of an electromagnetic system of measurement. The
justification for using the emu as a unit of electrical measurement is provided by the
assumption that it is an electric unit derived from an electromagnetic process. We now find, however, that it is, in fact, a magnetic unit; that is, it is a two-dimensional unit. It is therefore a unit of t²/s² rather than a unit of t/s. To obtain an electric (one-dimensional) unit, t/s, corresponding to the esu from the emu, it is necessary to multiply the measured value of the emu coefficient, 1.602062 × 10⁻²⁰, by the natural unit of s/t, 2.99793 × 10¹⁰ cm/sec. This brings us back to the electrostatic unit, 4.80287 × 10⁻¹⁰ esu. The
electromagnetic system is thus nothing more than the electrostatic system to which an
additional factor, meaningless in the electrical context, has been applied.
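Numerically, the reduction described above is a single multiplication of the quoted coefficients.

# Illustrative restatement of the emu-to-esu reduction, with the coefficients quoted above.
emu_coefficient = 1.602062e-20
unit_speed      = 2.99793e10        # natural unit of s/t, cm/sec

esu_equivalent = emu_coefficient * unit_speed
print(esu_equivalent)               # ~4.80287e-10, the electrostatic unit of charge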
The SI system of units is a modification of the electromagnetic system. In the early days of
electrical measurement the ampere was selected as the fundamental unit, and was defined on
an arbitrary basis. After more information had been accumulated, and the desirability of
relating the measurement system to physical fundamentals was recognized, the
electromagnetic (emu) system was adopted for general use, but in order to avoid making a
radical change in the size of the ampere, an arbitrary factor of 10 was introduced. As M.
McCaig remarks, the appearance of such a number "in a primary definition is unusual; it arises because although this definition is intended to fix the value of the ampere, we have already decided in advance fairly precisely the value we desire the unit to have." 59
The arbitrary modification of the emu values changed the numerical coefficient of the
natural unit of t/s to 1.602062 × 10⁻¹⁹. Because of the lack of distinction between electric
charge (t/s) and electric quantity (s) in current practice, the same unit is used for both of
these physical quantities in all three of the electrical measurement systems, as shown in
Table 29.
Table 29: Numerical Coefficients of the Natural Units

System                 s                   t/s
Space-time (cgs)       4.558816 × 10⁻⁶     3.335635 × 10⁻¹¹
Mechanical             4.558816 × 10⁻⁶     1.49175 × 10⁻³
Electrostatic          4.80287 × 10⁻¹⁰     4.80287 × 10⁻¹⁰
Electromagnetic        1.602062 × 10⁻²⁰    1.602062 × 10⁻²⁰
SI modification        1.602062 × 10⁻¹⁹    1.602062 × 10⁻¹⁹
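The rows of this table are connected by the factors stated in the preceding paragraphs, a relationship that can be restated numerically as follows, using only the coefficients quoted above.

# Illustrative sketch relating the rows of Table 29 through the factors stated in the text.
cgs_t_per_s = 3.335635e-11          # natural unit of t/s, cgs (space-time) system
esu_unit    = 4.80287e-10           # electrostatic system coefficient
mass_factor = 4.472162e7            # arbitrary mass-unit factor from Volume I
unit_speed  = 2.99793e10            # natural unit of s/t, cm/sec

print(cgs_t_per_s * mass_factor)    # ~1.49175e-3, the mechanical-system coefficient
print(esu_unit / unit_speed)        # ~1.602062e-20, the electromagnetic (emu) coefficient
print(esu_unit / unit_speed * 10)   # ~1.602062e-19, the SI modification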
In applying the principle of the equivalence of natural units to electrical quantities it is
necessary to take into account these differences between the numerical values applying to
the different systems. For example, the natural unit of capacitance, the quantity that plays the principal role in the phenomena discussed in Chapter 15, is the natural unit of electric charge divided by the natural unit of voltage, t/s × s²/t = s. On the basis of the explanation of the natural electrical units given in the preceding paragraphs, the value of the natural unit of electric charge in the cgs electrostatic system is 4.80287 × 10⁻¹⁰ esu. The natural unit of capacitance is this value divided by the natural unit of voltage, which was evaluated in Chapter 9 as 9.31146 × 10⁸ volts. The result is 5.15802 × 10⁻¹⁸ farads. As we found earlier, the farad is a unit of space. The natural unit of space derived in Volume I is 4.558816 × 10⁻⁶ cm. Dividing these two values, we obtain 1.1314 × 10⁻¹² as the ratio of the numerical coefficients of the natural units. From geometric measurements, the centimeter, as a unit of capacitance, has been found equal to 1.11126 × 10⁻¹² farads. The theoretical and
experimental values are therefore in agreement within the limits of accuracy to which the
present study of the electric relations has been carried.
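Restated numerically, using only the figures quoted in this paragraph:

# Illustrative comparison of the two capacitance figures quoted above.
unit_capacitance = 5.15802e-18      # natural unit of capacitance, farads (from the text)
unit_space_cm    = 4.558816e-6      # natural unit of space, cm
cm_in_farads     = 1.11126e-12      # farads per centimeter, from geometric measurement

ratio = unit_capacitance / unit_space_cm
print(ratio)                        # ~1.1314e-12 farads per cm, the theoretical ratio
print(ratio / cm_in_farads)         # ~1.018: agreement within the stated limits of accuracy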
In this case the application of the equivalence principle merely corroborates an experimental
result. Its value as an investigative tool derives from the fact that it is equally applicable in
situations where nothing is available from other sources.
CHAPTER 17
Ionization
Electric charges are not confined to electrons. Units of the rotational vibration that
constitutes electric charge may also be imparted to any other rotational combination,
including atoms as well as other sub-atomic particles. The process of producing such
charges is known as ionization, and electrically charged atoms or molecules are called
ions. Like the electrons, atoms or molecules can be charged, or ionized, by any of a
number of agencies, including radiation, thermal motion, other physical contact, etc.
Essentially, the ionization process is simply a transfer of energy, and any kind of energy
will serve the purpose if it is delivered to the right place and in the necessary
concentration.
As indicated above, one of the sources from which the ionization energy can be derived is
the thermal energy of the ionizable matter itself. We saw in Chapter 5 that the thermal
motion is always directed outward. It therefore joins with ionization in opposition to the
basic inward rotational motions of the atoms, and is to some degree interchangeable with
ionization. The magnitude of the energy required to ionize matter varies with the
structure of the atom and with the existing level of ionization. Each element therefore has
a series of ionization levels corresponding to successive units of rotational vibration.
When the thermal energy concentration (the temperature) of an aggregate reaches such a
level the impacts to which the atoms are subjected are sufficiently energetic to cause
some of the linear thermal motion to be transformed into rotational vibration, thus
ionizing some of the atoms. Further rise in temperature results in ionization of additional
atoms of the aggregate, and in additional ionization (more charges on the same atoms) of
previously ionized matter.
Thermal ionization is only of minor importance in the terrestrial environment, but at the
high temperatures prevailing in the sun and other stars thermally ionized atoms, including
positively* charged atoms of Division IV elements, are plentiful. The ionized condition
is, in fact, normal at these temperatures, and at each of the stellar locations there is a
general ionization level determined by the temperature. At the surface of the earth the
electric ionization level is zero, and except for some special cases among the sub-atomic
particles, any atom or particle that acquires a charge while in the gaseous state is in an
unstable condition. It therefore eliminates the charge at the first opportunity. In some
other region where the prevailing temperature corresponds to an ionization level of two
units, for example, the doubly ionized state is the most stable condition, and any atoms
that are above or below this degree of ionization tend to eliminate or acquire charges to
the extent necessary to reach this stable level.
Since the rotational vibration that we know as ionization is basically a motion in
opposition to the rotational motion of the atom, the ionization cannot exceed the net
effective positive* displacement (the atomic number). In a region where the ionization
level is very high, the heavier elements therefore have a considerably larger content of
positive* displacement in the form of ionization at a given temperature than those of
smaller mass. This point has an important bearing on the life cycle of the elements, and
will be given further consideration later.
In the nuclear theory of atomic structure currently accepted by the physicists the atomic
"nucleus" is surrounded by a number of electrons equal to the atomic number of the
element. Ionization is viewed as a process of detaching electrons from the atom. On this
basis, the maximum degree of ionization is attained when all electrons have been
removed and only the bare nucleus remains. This is a plausible hypothesis, and, on first
consideration, its plausibility would appear to be a point in favor of the nuclear theory. It
should be realized, however, that any tenable theory of atomic structure will have
essentially the same explanation of ionization, differing only in the language in which it
is expressed. Such a theory must identify entities that are added to, or removed from, the
atom as the atomic number increases. Successive addition or elimination of these entities
then explains ionization. In the nuclear theory, which views the atom as a collection of
particles, these entities are electrons. In the theory of the universe of motion, which finds
the atom to be a combination of motions, they are units of rotational motion. Any other
theory that might be formulated would necessarily have to identify some entity that could
similarly be added or removed unit by unit. Thus the ionization process would be
consistent with any theory. Consequently, it gives support to none.
In the terrestrial environment each ionization level of each element has a specific
ionization potential that represents the amount of energy required in order to accomplish
the ionization. It is currently assumed that these values are fixed natural relations and
therefore constant for all environments. The theoretical status of this assumption in the
context of the Reciprocal System of theory has not yet been clarified. It may well be valid
throughout the gaseous state. However, the measured ionization levels are obviously not
applicable to ionization in the condensed gas state, the state in which the gas molecules
are within the equivalent of unit distance of each other. The physical relations in this state
are very different from those in an ordinary gas, including reversal of all scalar directions.
Thus all that we can now say about the ionization potential in this state is that each
successive level of ionization must involve an increase in energy. As we will see in
Volume III, the matter in most of the observed stars is in the condensed gas state.
The relation between temperature and the degree of ionization makes it possible to use the ionization, which can be observed spectroscopically, as a measure of the surface
temperature of the stars. For example, below 12,000 K, helium is not ionized. At about
35,000 K it is mainly in the form of He II (singly ionized). At still higher temperatures it
is doubly ionized (He III). Other elements have similar ionization patterns, and the
mixture of ions observed in the spectrum of a star thus indicates the range of temperature
at its surface. The O stars, which are in the range up to about 80,000 K, are reported to
contain N II, O II, C II, and Si III, as well as helium and hydrogen ions.
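Purely as an illustration of how such a spectroscopic temperature estimate works, the quoted helium thresholds can be turned into a rough classification. The 60,000 K boundary below is an assumed placeholder, since the text specifies only "still higher temperatures" for He III; the temperatures tested are likewise arbitrary examples.

# Illustrative only: a rough mapping from (thermally produced) surface temperature to the
# dominant helium ionization stage, using the temperatures quoted above.  The 60,000 K
# boundary for He III dominance is an assumed placeholder, not a figure from the text.
def dominant_helium_stage(temp_kelvin):
    if temp_kelvin < 12000:
        return "He I (not ionized)"
    elif temp_kelvin < 60000:       # assumed boundary, for illustration only
        return "He II (singly ionized)"
    else:
        return "He III (doubly ionized)"

for temp in (8000, 35000, 80000):
    print(temp, dominant_helium_stage(temp))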
It should be understood, however, that this relation between ionization and temperature
holds good only where the ionization is produced thermally. References are made in the
astronomical literature to "ionization temperatures," but these are merely the temperature
equivalents of the ionization levels. Unless the ionization is thermally produced they do
not indicate the actual temperature. The level of ionization is a reflection of the strength
of the ionizing agency, whatever it may be. If that agency is the thermal energy, then the
ionization is a measure of the temperature. But if the ionizing agency is radiation, the
ionization level is a measure of the strength of the radiation, not the temperature.
In Volume III we will encounter the same kind of a misconception in dealing with the
relation between temperature and the production of x-rays. When the x-rays are thermally
produced, there is actually a relation between the x-ray emission and the temperature, but
here, again, if the x-rays are produced by some other agency, the relation is between the
x-ray emission and the strength of that other agency, and it is independent of the
temperature. The importance of this point lies in the fact that the emission of x-rays is
currently being treated as an indication of high temperature in cases where the nature of
the x-ray production process is unknown; even in cases where the conditions are such that
the temperatures necessary for thermal production of x-rays are impossible. Temperatures
in the millions of degrees are inferred from x-ray observations in locations where the
actual temperature level cannot be more than a few degrees above absolute zero.
"Temperature," without a qualifying adjective, is a specifically defined concept, and it is temperature as thus defined that enters into the various thermal relations. The use of other kinds of "temperature" is entirely in order, providing that each is clearly defined, and is identified by an appropriate adjective, in an expression such as "ionization temperature." In fact, we will introduce such an alternate kind of temperature, a magnetic temperature, in Chapter 24. But it should be recognized that these "temperatures" have their own sets
of properties. The thermal relations do not apply to them. For example, the general gas
law applies only to temperature in the usual (thermal) sense. This law is expressed as PV
= RT, where P is the pressure, V is the volume, T is the temperature, and R is the gas
constant. From this law it is apparent that a high temperature can be developed in a given
volume of gas only under high pressure. In interstellar and intergalactic space the
pressure acting on the extremely tenuous medium is near zero, and from the general gas
law it is evident that the temperature must be at a correspondingly low level. The
temperatures in the millions of degrees that are currently being reported from these
regions are totally unrealistic, if they are intended to mean "temperature" in the thermal
sense.
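As a rough numerical illustration of this argument, the per-particle form of the gas law, P = nkT, gives a correspondingly low temperature when an interstellar-scale pressure is assumed. The particle density and pressure below are assumed round figures, not values taken from the text.

# Illustrative only: the per-particle form of the gas law, P = n k T, applied to assumed
# round figures for interstellar conditions.
k_B = 1.380649e-23     # Boltzmann constant, J/K
n   = 1.0e6            # assumed particle density, particles per m^3
P   = 1.0e-16          # assumed pressure, pascals

T = P / (n * k_B)
print(f"thermal temperature ~ {T:.1f} K")   # a few kelvin, nowhere near millions of degrees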
Some of the existing confusion in this area appears to be due to a failure to draw a clear
distinction between the two types of vectorial motion in which the particles of a gas
participate. These constituent particles share in the translational motion of a gaseous
aggregate as a whole, and it is generally understood that this is not a thermal motion; that
is, a fast-moving aggregate may be relatively cool. An atom or particle moving
independently in space is subject to the same considerations. Its free translational motion
has no thermal significance. The thermal motion is a product of containment. It is the
directionally distributed random motion that results from the restriction of the motion to
the volume within certain limits. The pressure is a measure of the containment. The
temperature, the measure of the thermal motion, is therefore a function of the pressure, as
indicated in the gas laws. High temperatures can only be attained under high pressures. If
part, or all, of the gas in an aggregate escapes from confinement, its constituents move
outward unidirectionally, and the thermal motion is converted to linear translational
motion. The temperatures and pressures decrease accordingly.
The picture of the nature of electric charges and ionization that we derive from the
postulates of the theory of the universe of motion is very different from the currently
accepted explanation of these phenomena, which is an outgrowth of hypotheses
formulated in the early days of electrical investigation on the basis of the limited amount
of empirical information then available. The early investigators in this area identified
negative* charges with electrons and positive* charges with atoms of matter. Meanwhile
it was found that the atoms of certain elements undergo spontaneous disintegration in
which electrons are emitted along with other products. On the basis of these empirical
findings, the scientific community adopted the hypothesis previously mentioned in which
positive* charges are attributed to an atomic "nucleus," and negative* charges entirely to
electrons. Positive* and negative* ionizations were then ascribed to deficiency or excess
of electrons, respectively.
One disturbing feature of this explanation was the great disparity in the sizes of the units
of the two entities that were identified as the carriers of the charges. The roles to be
played by positive* and negative* charges in the theory were essentially reciprocal in
nature, yet the presumed carrier of the positive* charge, the proton, has nearly two
thousand times the mass of the negatively* charged particle, the electron. Physicists were
therefore greatly relieved when the positive* analog of the electron, the positron, was
discovered. It does not seem to be generally appreciated that this discovery, which
restored the symmetry that we have come to expect in nature, has destroyed the
foundations of the orthodox theory. It is now evident that the positive* charge is as much
of a reality as the negative* charge; it is not merely an electron deficiency, as the theory
contends.
While the discovery of the positron solved one of the symmetry problems, it produced
another that has been even more troublesome. Inasmuch as the electron and the positron
are inversely related, so far as we can tell, it would seem that they should appear in equal
numbers. But positrons are scarce in our environment, whereas electrons are plentiful.
Conventional science has no answer to this problem, other than mere speculations. From
the theory of the universe of motion we find that the asymmetrical distribution of
electrons and positrons, and of positive* and negative* charges in general, is not due to
any inherent difference in the character of the motions that constitute the charges, but is a
consequence of the fact that the net rotational displacement of the atoms of ordinary
matter is in time; that is, it is positive. The charges acquired by these atoms in the
ionization process are therefore positive*, except in the relatively few instances where
negative* ionization is possible because of the existence of negative electric rotational
displacement of the appropriate magnitude in the structures of certain atoms. The simple
positively* charged sub-atomic particles, the positrons, are scarce in the vicinity of
material atoms because their net rotational time displacement is compatible with the basic
structure of the atoms, and they are readily absorbed on contact. The corresponding
negatively* charged particles of the material system, the electrons, are abundant, as their
space displacement is usable in the structures of the material atoms only to a very limited
degree.
It is evident that both of the mechanisms discussed in the foregoing pages, the selective
incorporation of the positrons into the structure of matter, which leaves a surplus of free
electrons, and the ionization mechanism, which produces only positive* ions under high
temperature conditions (where most of the ionization takes place), are incompatible with
the existence of a law requiring absolute conservation of charge. This will no doubt
disturb many individuals, because the conservation laws are generally regarded as firmly
established basic physical principles. Some consideration of this issue will therefore be
appropriate before moving on to other subject matter.
In conventional physical science the conservation laws are empirical. As expressed by
one physicist:
We are in a curious situation. We know the conservation laws, but we do not know their underlying
dynamic basis; that is, we do not know the kind of symmetries responsible for them. 60
While the conservation laws have retained their original status as important fundamental
principles of physics during the broad expansion of scientific knowledge that has taken
place in the twentieth century, the general understanding of their nature has undergone a
significant change. Any empirically based relation or conclusion is always subject to
modification by reason of relevant new discoveries. This is what has happened to
conservation. Originally, the law of conservation of energy, for instance, was thought to
be inviolable. "No gain or loss of energy has ever been observed in an isolated system," says a 1919 textbook.61 This statement is no longer true. Mass and energy, we have
found, are interconvertible. Thus one can increase at the expense of the other. The
content of a conservation law has therefore had to be redefined. As expressed by Eric M.
Rogers,
In its present fullest form you may consider it [the conservation of energy] more than a generalization from
experiment; it has expanded into a convention, an agreed scheme of energy now so defined that its total
must, by definition, remain constant.62
It is now frequently stated that we should not speak of the conservation of mass or the
conservation of energy, only the conservation of mass-energy. However, the conversion
of one of these entities into the other occurs only under circumstances that, in the
terrestrial environment, are quite exceptional, and the separate conservation laws are
applicable under all ordinary circumstances. It would seem more appropriate, therefore,
to state these laws individually, as in the past, and to qualify the statements in such a way
as to limit the application of the laws to situations in which there is no conversion to or
from a different form of motion.
These same considerations apply to electric charges. There is a wide range of physical
activity in which the conservation of charge is maintained. Indeed, the currently
prevailing view is that charge conservation is absolute, as indicated in the following
statement:
The law of conservation of electric charge...states that...there is no way to alter, to the slightest degree, the
total amount of electric charge in the world.63
Our finding is that all physical quantities with the dimensions t/s, including electric
charge, are equivalent to, and, under appropriate conditions, interconvertible with kinetic
energy. Thus while energy and charge are each conserved individually within a certain
range of physical processes, there is a wider range of processes in which the quantity t/s
is conserved, but changes occur in the magnitudes of the subsidiary quantities, such as
charge or kinetic energy, because of conversion from one to another.
The law of conservation of electric charge is valid wherever no such conversion takes
place, and it has persisted because most of the common electrical processes are of this
nature. The observation that has been most influential in leading to the conclusion that
charge conservation is absolute is the existence of processes in which positive* and
negative* charges are created in pairs, and destroyed jointly. A unit negative* charge is a
unit of outward scalar motion in time. A unit positive* charge is a unit of outward scalar
motion in space. Since the two motions are oppositely directed from the natural zero
point, a combination of the two units arrives at a net total motion (measured as energy or
speed) of zero on the natural scale. Thus the creation or neutralization of such a pair of
charges involves no change in the total net charge or energy. It is another instance of
what we have called a zero energy process.
The induction process discussed in Chapter 16 is another example. As explained there, an
external positive* charge induces a rotational vibration (charge) which is positive*
relative to each of the atoms of the object subjected to the charge, and negative* relative
to the mobile units of space (electrons) in which some of these atoms are located. The
attractive and repulsive forces due to the external charge then cause each of the atom-
electron combinations to separate into a pair of positively* and negatively* charged
entities. It can be seen that this process does not alter the net amount of electric charge.
An object (a combination of motions) with zero net rotational vibration (charge) separates
into two components, the net total charge of which is zero.
However, it is also evident that these are processes of a special kind, and the fact that
charge is conserved in such processes does not indicate that charge is always conserved.
The best resolution of the conservation question appears to be to recognize that each of
the conservation laws previously formulated is valid within certain limits, and therefore
has a specific field of usefulness, but to state each of these laws in such a form that its
applicability is restricted to the range of conditions in which no conversion from or to
other forms of motion is involved.
While the foregoing is a significant limitation of the field of applicability of the charge
conservation law, there is still a wide range of physical phenomena in which electric
charge is conserved, as the processes that involve changes in the net total t/s in the form
of electric charge are confined mainly to those that take place at very high temperatures,
or very large kinetic energies.
One of the important areas in which charge is conserved is ionization in liquids. The
molecules of a simple chemical compound such as hydrochloric acid (HCl), for example,
consist of two components, in this case a hydrogen atom and a chlorine atom, oriented in
the manner described in Volume I, and held together by the cohesive forces discussed in
Chapter 1 of this volume. In the liquid state the molecules move independently, subject to
the restrictions imposed by the nature of this state of matter. The effective rotation of the
hydrogen atom, as oriented in HCl, is positive, while that of the chlorine atom is
negative. These components of the molecule are therefore capable of taking positive* and
negative* charges respectively, if they separate.
The molecules in a liquid aggregate are in constant motion, and collisions are frequent. A
certain percentage of these collisions, depending on the temperature, are energetic
enough to break the bonds between the molecular components and separate each
molecule into two parts. Ordinarily these parts recombine promptly, but if the atom is
located in a unit of electron space, the collision imparts a rotational vibration to each of
the components. (As noted in Chapter 13, such rotational vibrations, electric charges, are
easily produced in contacts of various kinds.) This rotational vibration is a positive*
motion of the hydrogen atom relative to the associated electron space, and a negative*
motion of the electron relative to the chlorine atom. The generation of the charges is thus
a zero energy process, and it does not add to the energy of the system.
The HCl molecule has now become an H+ molecule, an ion, and a Cl atom associated with
a charged electron, a Cl- ion, we may say. The charges on these new molecules, or ions,
balance the valences of their associated atoms, and the ions are therefore stable in the
same sense as the original HCl molecules, except that there is a rather strong tendency
toward recombination that limits the net amount of ionization.
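In conventional chemical shorthand, the process just described is simply

$\mathrm{HCl} \;\rightleftharpoons\; \mathrm{H^{+}} + \mathrm{Cl^{-}}$,

the reverse arrow representing the recombination tendency mentioned above.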
Let us now turn to an examination of the effects that are produced when a voltage is
applied in such a way as to cause a voltage gradient in a liquid that is, to some extent,
ionized. This is accomplished by inserting two electrical conductors, or electrodes, into
the liquid, and connecting them through a source of current so that electrons are
withdrawn from the positive* electrode, the anode, and forced into the negative*
electrode, the cathode. Liquids such as HCl are not conductors of electricity, in the sense
in which this term is applied to metals; that is, they do not permit free movement of
electrons. However, the introduction of a voltage differential causes a movement of the
ions in the ionized liquid.
As we saw in Chapter 15, this voltage differential forces some of the electrons at the
cathode out into the spatial equivalent of time, and withdraws a similar number of
electrons from the spatial equivalent of time at the anode. Some of the contacts with
liquid molecules are sufficiently energetic to impart charges to electrons in the vicinity of
the cathode. Thus a quantity of negative* charge accumulates in the liquid in this vicinity,
a process known as polarization.
At the anode, the withdrawal of electrons leaves a deficiency of electrons, relative to the
equilibrium concentration. This leads to a break-up of some of the neutral combinations
of positive* atoms and negative* electrons. The electrons thus released are absorbed into
the electron “vacuum,” losing their charges in the process. This leaves a surplus of
positively* charged ions; that is, the region in the vicinity of the anode is positively*
polarized.
As a result of the polarization, the positive* and negative* ions are attracted to the
cathode and anode respectively by the electric forces between unlike charges. The
positive* ions (such as H+) arriving at the cathode neutralize negatively* charged
electrons, and withdraw them from the electron concentration in equivalent space. These
are replaced by electrons drawn from the cathode. Additional electrons then acquire
charges by the collision process to restore the polarization equilibrium in the liquid
surrounding the cathode. Meanwhile the negative* ions (such as Cl-) arriving at the
anode neutralize positive* charges in the vicinity of that electrode, and release electrons,
which are drawn into the anode to restore the polarization equilibrium.
The loss of electrons from the cathode and acquisition of electrons by the anode in the
process that has been described creates a voltage difference between the two electrodes,
in addition to that supplied by the external voltage source. A current therefore flows from
the anode to the cathode through the metallic conductor to restore the equilibrium
condition. This current persists as long as the ions continue to move through the liquid.
The proportion of the total number of molecules that will be ionized in a particular liquid
under specified conditions is a probability function, the value of which depends on a
number of factors, including the strength of the chemical bond, the nature of the other
substances present in the liquid, the temperature, etc. Where the bond is strong, as in the
organic compounds, the molecules often do not ionize at all within the range of
temperature in which the substance is liquid. Substances such as the metals, in which the
atoms are joined by positive bonds, likewise cannot be ionized in the liquid state, since
the zero energy ionization process depends on the existence of a positive*-negative*
combination.
The presence or absence of ions in the liquid is an important factor in many physical and
chemical phenomena, and for that reason chemical compounds are often classified on the
basis of their behavior in this respect as polar or non-polar, electrolytes or non-
electrolytes, etc. This distinction is not as fundamental as it might appear, as the
difference in behavior is merely a reflection of the relative bond strength: whether it is
greater or less than the amount necessary to prevent ionization. The position of organic
compounds in general as non-electrolytes is primarily due to the extra strength of the
two-dimensional bonds characteristic of these compounds. It is worth noting in this
connection that organic compounds such as the acids, which have one atom or group less
strongly attached than is normal in the organic division, are frequently subject to an
appreciable degree of ionization.
Ionization of a liquid is not a process that continues to completion; it is a dynamic
equilibrium similar to that which exists between liquid and vapor. The electric force of
attraction between unlike ions is always present, and if an ion encounters one of the
opposite polarity at a time when its thermal energy is below the ionizing level,
recombination will occur. This elimination of ions is offset by the ionization of additional
molecules whose energy reaches the ionizing level. If conditions are stable, an
equilibrium is reached at a point where the rate of formation of new ions is equal to the
rate of recombination.
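As a purely conventional illustration (not a deduction from the postulates of this work), the equilibrium just described can be put in mass-action form, with $k_i$ and $k_r$ as hypothetical rate constants for ionization and recombination respectively:

$k_i\,[\mathrm{HCl}] \;=\; k_r\,[\mathrm{H^{+}}]\,[\mathrm{Cl^{-}}]$.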
The conventional explanation of the ionization process is that it consists of a transfer of
electrons from one atom, or group of atoms, to another, thus causing a deficiency of
electrons, identified as a positive* charge, in one of the participants, and an excess of
electrons, identified as a negative* charge, in the other. In the electrolytic process, the
negative ions are assumed to carry electrons to the anode, where they leave the ions, enter
the conductor, and flow through the external circuit to the cathode. Here they encounter
the positive* ions that have been drawn to this electrode, and the charges are neutralized,
restoring the electrical balance.
This is a simple and plausible explanation. It is not surprising, therefore, that it has met
with widespread acceptance. Like many another attractive, but erroneous, hypothesis,
however, its net effect has been to direct physical thinking into unproductive channels. In
fact, this interpretation of the electrolytic process is one of the major influences
contributing to the belief that the electric current is a movement of charges, one of the
basic errors of present-day electrical theory.
Since negative* charges clearly do move through the electrolyte to the anode, there is, on
first consideration, an analogy with the metallic circuit, and discussions of electrolysis
habitually refer to “passing a direct current through an electrolytic solution.” If there
actually were a continuous flow around the circuit, and if the moving units could be
identified as negative* charges in one segment of that circuit, it would be reasonable to
assume that the moving units in the remainder of the circuit are also charges. But this
argument is wholly dependent on the continuity, and that continuity clearly does not
exist. The electrolytic process is not a simple flow of current around the circuit; it is a
more complex series of events in which both positive* and negative* charges originate in
the solution and move in opposite directions to the electrodes. This means that
electrolytic conduction has to be explained independently of metallic conduction, and it
eliminates most of the support that the electrolytic process has been presumed to give to
the conventional theory of the electric current.
The final topic for consideration in this chapter is the overall limit on the magnitude of
the combined thermal and ionization energy. As pointed out earlier, the thermal energy
must reach a certain level, which depends on the characteristics of the atoms involved,
before thermal ionization is possible. After this level is reached, an equilibrium is
established between the temperature and the degree of ionization. A further increase in
the temperature of an aggregate causes both the linear speed displacement (particle
speed) and the charge displacement (ionization) to increase, up to the point at which all of
the elements in the aggregate are fully ionized; that is, they have the maximum number of
positive* charges that they are capable of acquiring. Beyond this point of maximum
ionization a further increase in the temperature affects only the particle speeds.
Ultimately the total of the outward displacements (ionization and thermal) reaches
equality with one of the inward magnetic rotational displacement units of the atom. The
inverse speed displacements then cancel each other, and the rotational motions that are
involved revert to the linear status. At this point the material aggregate has reached what
we may call a destructive limit.
There have been many instances in the preceding pages in which a limiting magnitude of
the particular physical quantity under consideration has been shown to exist. We have
just seen that the number of units of electric ionization of an atom is limited to the net
equivalent number of units of effective electric rotational displacement. For example, the
element magnesium, which has the equivalent of 12 net effective electric rotational
displacement units, can take 12 units of electric vibrational displacement (ionization), but
no more. Similarly, we found that the maximum rotational base of the thermal vibration
in the solid state is the primary magnetic rotation of the atom. Most of the limits thus far
encountered have been of this type, which we may designate as non-destructive limits.
When such a limit is reached, further increase of this particular quantity is prevented, but
there is no other effect.
We are now dealing with a quantity, the total outward speed displacement, which is
subject to a different kind of a limit, a destructive limit. The essential difference between
the two stems from the fact that the non-destructive limits merely define the extent to
which certain kinds of additions to, or modifications of, the constituent motions of the
atoms can be carried. Reaching the electric ionization limit only means that no more units
of positive* electric charge can be added to the atom; it does not, in any way, imperil the
existence of the atom. On the other hand, a limit that represents the attainment of equality
with a basic motion of the atom has a deeper significance. Here it should be remembered
that rotation is not a property of the scalar motion itself; it is a property of the coupling of
the motion to the reference system. For example, the basic constituent of the uncharged
electron is a unit of inward scalar motion in space. This motion per se has no properties
other than the unit inward magnitude, but it is coupled to the reference system in such a
way that it becomes a rotation, in the context of that system, retaining its inward scalar
direction. When the electron is charged, the coupling is so modified that an oppositely
directed rotational vibration is superimposed on the rotation. The charged positron is a
unit inward motion in time, similarly coupled to the reference system.
When brought into proximity, a charged electron and a charged positron are attracted
toward each other by the electrical forces. When they make contact, the two rotational
vibrations of equal magnitude and opposite polarity cancel each other. The oppositely
directed unit rotations do likewise. This eliminates all aspects of the coupling of the
motion to the reference system other than the reference point, reducing the particles to
radiation, and bringing them to rest in the natural reference system. As seen in the spatial
reference system, they become two photons moving outward in opposite directions from
the point in the reference system at which the neutralization took place.
This neutralization, or ―annihilation,‖ process becomes more difficult to accomplish as
the particles increase in size and complexity, and takes place on a significant scale only in
the sub-atomic range. However, full units of the magnetic rotation of the atom–units of
inward rotational speed displacement–can be neutralized by combination with outward
displacements of equal magnitude. The outward motions available for this purpose are
ionization and thermal motion. When the total displacement of these motions reaches
equality with that of a full unit of the magnetic rotation of an atom, or any full unit of that
rotation, the existence of the rotational unit terminates, and its speed displacement reverts
to the linear basis (radiation or kinetic energy).
As we saw earlier, the thermal ionization level is related to the temperature. The total
outward speed displacement at which neutralization occurs is therefore reached at a
specific temperature, a destructive temperature limit. Full ionization is attained at a level
far below this limiting temperature. Inasmuch as it is the total outward displacement that
enters into the neutralization process, rather than the thermal motion alone, the
temperature of the destructive limit of an element depends on its atomic number. The
heavier elements have more displacement in the form of ionization when all are fully
ionized, and these elements therefore reach the same total displacement at lower
temperatures.
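Stated symbolically (a restatement of the foregoing, with D denoting speed displacement), the destructive limit of an element is reached at the temperature T for which

$D_{\mathrm{thermal}}(T) + D_{\mathrm{ionization}} \;=\; D_{\mathrm{magnetic\ unit}}$,

so the larger ionization term of a fully ionized heavy element brings the total to the limiting value at a lower temperature.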
When the temperature of an aggregate arrives at the destructive limit of the heaviest
element present, this element reduces to one with less magnetic (two-dimensional)
rotation, the difference in mass, t³/s³, being converted to its one-dimensional equivalent,
energy, t/s. As the rise in temperature continues, one after another of the elements meets
the same fate in the order of decreasing atomic number.

CHAPTER 18
The Retreat From Reality
In the eight chapters from 9 to 17 (excluding Chapter 12) we have described the general
features of electricity–both current electricity and electric charges–as they emerge from a
development of the consequences of the postulates of the theory of the universe of
motion. This development arrives at a picture of the place of electricity in the physical
universe that is totally different from the one that we get from conventional physical
theory. However, the new view agrees with the electrical observations and measurements,
and is entirely consistent with empirical knowledge in related areas, whereas
conventional theory is deficient in both respects. Thus there is ample justification for
concluding that the currently accepted theories dealing with electricity are, to a
significant degree, wrong.
This finding that an entire subdivision of accepted physical theory is not valid is difficult
for most scientists to accept, particularly in view of the remarkable progress that has been
made in the application of existing theory to practical problems. But neither a long period
of acceptance nor a record of usefulness is sufficient to verify a theory. The history of
science is full of theories that enjoyed general acceptance for long periods of time, and
contributed significantly to the advance of knowledge, yet eventually had to be discarded
because of fatal defects. Present-day electrical theory is not unique in this respect; it is
just another addition to the long list of temporary solutions to physical problems.
The question then arises, How is it possible for errors of this magnitude to make their
way into the accepted structure of physical theory? It is not difficult to find the answer.
Actually, there are so many factors tending to facilitate acceptance of erroneous theories,
and to resist parting with them after they are once accepted, that it has been something of
an achievement to keep the error content of physical theory as low as it is. The
fundamental problem is that physical science deals with so many entities and phenomena
whose basic nature is not understood. For example, present-day physics has no
understanding of the nature of the electric charge. We are simply told that we must not
ask; that the existence of charges has to be accepted as one of the given features of
nature. This frees theory construction from the constraints that would normally apply. In
the absence of an adequate understanding, it is possible to construct and secure
acceptance of theories in which charges are assigned functions that are clearly seen to be
incompatible with the place of electric charge in the pattern of physical activity, once that
place is specifically defined.
None of the other basic entities of the physical universe–about six or eight of them, the
exact number depending on the way in which the structure of fundamental theory is
erected–is much, if any, better known than the electric charge. The nature of time, for
instance, is even more of a mystery. But these entities are the foundation stones of
physics, and in order to construct a physical theory it is necessary to make some
assumptions about each of them. This means that present-day physical theory is based on
some thirty or forty assumptions about entities that are almost totally unknown.
Obviously, the probability that all of these assumptions about the unknown are valid is
near zero. Thus it is practically certain, simply from a consideration of the nature of its
foundations, that the accepted structure of theory contains some serious errors.
In addition to the effects of the lack of understanding of the fundamental entities of the
physical universe, there are some further reasons for the continued existence of errors in
conventional physical theory that have their origin in the attitudes of scientists toward
their subject matter. There is a general tendency, for instance, to regard a theory as firmly
established if, according to the prevailing scientific opinion, it is the best theory of the
subject that is currently available. As expressed by Henry Margenau, the modern scientist
does not speak of a theory as true or false, but as “correct or incorrect relative to a given
state of scientific knowledge.”64
One of the results of this policy is that conclusions as to the validity of theories along the
outer boundaries of scientific knowledge are customarily reached without any
consideration of the cumulative effect of the weak links in the chains of deductions
leading to the premises of these theories. For example, we frequently encounter
statements similar to the following:
The laws of modern physics virtually demand that black holes exist.65 No one who accepts general
relativity has found any way to escape the prediction that black holes must exist in our galaxy.66

These statements tacitly assume that the reader accepts the “laws of modern physics” and
the assertions of general relativity as incontestable, and that all that is necessary to
confirm a conclusion–even a preposterous conclusion such as the existence of black
holes–is to verify the logical validity of the deductions from these presumably established
premises. The truth is, however, that the black hole hypothesis stands at the end of a long
line of successive conclusions, included in which are more than two dozen pure
assumptions. When this line of theoretical development is examined as a whole, rather
than merely looking at the last step on a long road, it can be seen that arrival at the black
hole conclusion is a clear indication that the line of thought has taken a wrong turn
somewhere, and has diverged from physical reality. It will therefore be appropriate, in the
present connection, to undertake an examination of this line of theoretical development,
which originated with some speculations as to the nature of electricity.
The age of electricity began with a series of experimental discoveries: first, static
electricity, positive* and negative*, then current electricity, and later the identification of
the electron as the carrier of the electric current. Two major issues confronted the early
theorists, (1) Are static and current electricity different entities, or merely two different
forms of the same thing?, and (2) Is the electron only a charge, or is it a charged particle?
Unfortunately, the consensus reached on question (1) by the scientific community was
wrong. The theory of electricity thus took a wrong direction almost from the start. There
was spirited opposition to this erroneous conclusion in the early days of electrical
research, but Rowland's experiment, in which he demonstrated that a moving charge has
the magnetic properties of an electric current, silenced most of the critics of the “one
electricity” hypothesis.
The issue as to the existence of a carrier of electric charge–a ―bare‖ electron–has not been
settled in this manner. Rather, there has been a sort of a compromise. It is now generally
conceded that the charge is not a completely independent entity. As expressed by Richard
Feynman, “there is still ‘something' there when the charge is removed.”67 But the wrong
decision on question (1) prevents recognition of the functions of the uncharged electron,
leaving it as a vague “something” not credited with any physical properties, or any effect
on the activities in which the electron participates. The results of this lack of recognition
of the physical status of the uncharged electron, which we have now identified as a unit
of electric quantity, were described in the preceding pages, and do not need to be
repeated. What we will now undertake to do is to trace the path of a more serious retreat
from reality that affects a large segment of present-day physical theory, and accounts for
a major part of the difference between current theory and the conclusions derived from
the postulates that define the universe of motion.
This theoretical development that we propose to examine originated as a result of the
discovery of radioactivity and the identification of the three kinds of emanations from the
radioactive substances as positively* charged alpha particles (helium atoms), negatively*
charged electrons, and electromagnetic radiation. It was taken for granted that when
certain particles are ejected from an atom during radioactivity, these particles must have
existed in the atom prior to the radioactive disintegration. This conclusion does not seem
so obvious today, when the photon of radiation (which no one suggests as a constituent of
the undisturbed atom) is recognized as a particle, and a whole assortment of strange
particles is observed to be emitted from atoms during high energy disintegrations. At any
rate, it is clearly nothing more than an assumption.
An extension of this assumption led to the conclusion that the atom is a composite
structure in which the emitted particles are the constituent parts. Some early suggestions
as to the arrangement of the parts gained little support, but a discovery, in Rutherford's
laboratory, that the mass of the atom is concentrated in a very small volume in the center
of the space that it presumably occupies, led to the construction of the Rutherford atom-
model, the prototype of the atom of modern physics. In this model the atom is viewed as
a miniature analog of the solar system, in which negatively* charged electrons are in
orbit around a positively* charged ―nucleus.‖
The objective of this present discussion is to identify the path that the development of
theory on the basis of this atom-model has taken, and to demonstrate the fact that
currently accepted theory along the outer boundaries of scientific knowledge, such as the
theory that leads to the existence of black holes, rests on an almost incredible succession
of pure assumptions, each of which has a finite probability–in some cases a very strong
probability–of being wrong. As an aid in emphasizing the overabundance of these
assumptions, we will number those that we identify as being definitely in the direct line
of the theoretical development that leads eventually to the concepts of the black hole and
the singularity.
In the construction of his model, Rutherford accepted the then prevailing concepts of the
properties of electricity, including the two assumptions previously mentioned, and
retained the assumption that the atom is constructed of separable parts. The first of the
assumptions that he added will therefore be given the number 4. These new assumptions
are:
(4) The atom is constructed of positively* and negatively* charged components.
(5) The positive* component, containing most of the mass, is located in a small nucleus.
(6) Negatively* charged electrons are in orbit around the nucleus.
(7) The force of attraction between unlike charges applied to motion of the electrons
results in a stable orbital equilibrium.
This model met with immediate favor in scientific circles, but it was faced with two
serious problems. The first was that the known behavior of unlike charges does not
permit their coexistence at the very short distances in the atom. Even at substantially
greater distances they neutralize each other. Strangely enough, little attention was paid to
this very important point. It was tacitly assumed (8) that the observed behavior of charges
does not apply in this case, and that the hypothetical charges inside the atom are stable.
There is no evidence whatever to support this assumption, but neither is there any
evidence to contradict it, as the inside of the atom is unobservable. Here, as in many other
areas of present-day physical theory, we are being asked to accept absence of disproof as
the equivalent of proof.
Another of the problems encountered by the new theory involved the stability of the
assumed electronic orbits. Here there was a direct conflict with empirical knowledge.
From experiment it is found that charged objects moving in circular orbits (and therefore
accelerated) lose energy and spiral in toward the center of the circle. On this basis the
assumed electronic orbits would be unstable. This conflict was taken more seriously than
the other, and remained a source of theoretical difficulty until Bohr “solved” the problem
with another assumption, postulating, entirely ad hoc, that the constituents of the atom do
not follow normal physical laws. He assumed (9) that the hypothetical electronic orbits
are quantized, and can take only certain specific values, thus eliminating the spiralling
effect.
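For reference, the standard textbook relations invoked by assumptions (7) and (9), which belong to the conventional model under examination rather than to the present development, are the classical orbital balance and Bohr's quantization condition (hydrogen case):

$\dfrac{m_e v^{2}}{r} \;=\; \dfrac{e^{2}}{4\pi\varepsilon_0 r^{2}}, \qquad m_e v r \;=\; n\hbar, \quad n = 1, 2, 3, \ldots$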
At this point, further impetus was given to the development of the atom-model by the
discovery of a positively* charged particle of mass one on the atomic weight scale. This
particle, called the proton, was promptly assumed (10) to be the bare nucleus of the
hydrogen atom. This led to the further assumption (11) that the nuclei of other atoms
were made up of a varying number of protons. But here, again, there was a conflict with
observation. According to the observed behavior of charged particles, the protons in the
hypothetical nucleus would repel each other, and the nucleus would disintegrate. Again
an ad hoc assumption was devised to rescue the atom-model. It was assumed (12) that an
inward-directed “nuclear force” (of unknown origin) operates against the outward force
of repulsion, and holds the protons in contact.
This assumed proton-electron composition quickly encountered difficulties, one of the
most immediate being that in order to account for the various atoms and isotopes it had to
be assumed that some of the electrons are located in the nucleus–admittedly a rather
improbable hypothesis. The theorists were therefore much relieved when a neutral
particle, the neutron, was discovered. This enabled changing the assumed atomic
composition to identify the nucleus as a combination of protons and neutrons (assumption
13). But the observed neutron is unstable, with an average life of only about 15 minutes.
It therefore does not qualify as a possible constituent of a stable atom. So once more an
ad hoc assumption was called upon. It was assumed (14) that the ordinarily unstable
neutron becomes stable when it enters the atomic structure (where, fortunately for the
hypothesis, it is undetectable if it exists).
As a result of the critical study to which the Bohr atom-model was subjected in the next
few decades, this model, in its original form, was found untenable. Various
―interpretations‖ of the model have therefore been offered as remedies for the defects in
this original version. Each of these adds some further assumptions to those included in
Bohr's formulation, but none of these additions can be considered definitely in the main
line of the theoretical development that we are following, and they will not be taken into
account in the present connection. It should be noted, however, that all 14 of the
assumptions that we have identified in the foregoing paragraphs enter into the theoretical
framework of each modification of the atom-model. Thus all 14 are included in the
premises of the “atom of modern physics,” regardless of the particular interpretation that
is accepted.
It should also be noted that four of these 14 assumptions (numbers 8, 9, 12, and 14) have
a status that is quite different from that of the others. These are ad hoc assumptions,
untestable assumptions that are made purely for the purpose of evading conflicts with
observation or firmly established theory. Assumption 12, which asserts the existence of a
“nuclear force,” is a good example. There is no independent evidence that this assumed
force actually exists. The only reason for assuming its existence is that the nuclear atom
cannot survive without it. As one current physics textbook explains, “A very strong
attractive force is needed to hold the nucleons in the nucleus.”68 What the physicists are
doing here is giving us an untestable excuse for the failure of the nuclear theory to pass
the test of agreement with experience. Such evasive tactics are not new. In Aristotle's
physical system, which was the orthodox view of the universe for nearly two thousand
years, it was assumed that the planets were attached to transparent spheres that rotated
around the earth. But according to the laws of motion, as they were understood at that
time, this motion could not be maintained except by continual application of a force. So
Aristotle employed the same device that his modern successors are using: the ad hoc
assumption. He postulated the existence of angels who pushed the planets along in their
respective orbits. The “nuclear force” of modern physics is the exact equivalent of
Aristotle's “angels” in all but language.
With the benefit of the additional knowledge that has been accumulated in the meantime,
we of the present era have no difficulty in arriving at an adverse judgment on Aristotle's
assumption. But we need to recognize that this is an illustration of a general proposition.
The probability that an untestable assumption about a physical entity or phenomenon is a
true representation of physical reality is always low. This is an unavoidable consequence
of the great diversity of physical existence. When one of these untestable assumptions is
used in the ad hoc manner–that is, to evade a discrepancy or conflict–the probability that
the assumption is valid is much lower.
All of these points are relevant to the question as to whether the present-day nuclear
atom-model is a representation of physical reality. We have identified 14 assumptions
that are directly involved in the main line of theoretical development leading to this
model. These assumptions are sequential; that is, each adds to the assumptions previously
made. It follows that unless every one of them is valid, the atom-model in its present form
is untenable. The issue thus reduces to the question: What is the probability that all of
these 14 assumptions are physically correct?
Here we need to consider the status of assumptions in the structure of scientific theory.
The construction of physical theory is a process of applying reasoning to premises
derived initially from experience. Where the application involves going from the general
to the particular, the process is deductive reasoning, which is a relatively straightforward
operation. To go from the particular to the general requires the application of inductive
reasoning. This is a two-step process. First, a hypothesis is formulated by any one of a
number of means. Then the hypothesis is tested by developing its consequences and
comparing them with empirical knowledge. Positive verification is difficult because of
the great complexity of physical existence. It should be noted, in this connection, that
agreement of the hypothesis with the observation that it was designed to fit does not
constitute a verification. The hypothesis, or its consequences, must be shown to agree
with other factual knowledge.
Because of the verification difficulties, it has been found necessary to make use, at least
temporarily, of many hypotheses whose verification is incomplete. However, a prominent
feature of “modern physics” is the extent to which the structure of theory rests on
hypotheses that are entirely untested, and, in many cases, untestable. Hypotheses that are
accepted and utilized without verification are assumptions. The use of assumptions is a
legitimate feature of theory or model construction. But in view of the substantial
uncertainty as to their validity that always exists, the standard scientific practice is to
avoid pyramiding them. One or two steps into the unknown are considered to be in order,
but some consolidation of the exposed positions is normally regarded as essential before
a further unsupported advance is undertaken.
The reason for this can easily be seen if we consider the way in which the probability of
validity is affected. Because of the complexity of physical existence mentioned earlier,
the probability that an untestable assumption is valid is inherently low. In each case, there
are many possibilities to be conceived and taken into account. If each assumption of this
kind has an even chance (50 percent) of being valid, there is some justification for using
one such assumption in a theory, at least tentatively. If a second untestable assumption is
introduced, the probability that both are valid becomes one in four, and the use of these
assumptions as a basis for further extension of theory is a highly questionable practice. If
a third such assumption is added, the probability of validity is only one in eight, which
explains why pyramiding assumptions is regarded as unsound.
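A minimal numerical sketch of this compounding effect (illustrative only; the function name is simply a label, and the even-chance figure is the one used in the paragraph above):

    # Illustrative sketch: probability that every one of n independent
    # assumptions is valid when each has individual probability p of being correct.
    def compound_probability(p, n):
        return p ** n

    for n in (1, 2, 3):
        print(n, "assumption(s):", compound_probability(0.5, n))
    # prints 0.5, 0.25, 0.125 -- the one-in-two, one-in-four, and
    # one-in-eight figures cited above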
A consideration of the points brought out in the foregoing paragraphs puts the status of
the nuclear theory into the proper perspective. The 14 steps in the dark that we have
identified in the path of development of the currently accepted atom-model are totally
unprecedented in physical science. The following comment by Abraham Pais is
appropriate:
Despite much progress, Einstein's earlier complaint remains valid to this day. “The theories which have
gradually been associated with what has been observed have led to an unbearable accumulation of
individual assumptions.”69

Of course, it is possible for an assumption to be upgraded to the status of established
knowledge by discovery of confirmatory evidence. This is what happened to the
assumption as to the existence of atoms. But none of the 14 numbered assumptions
identified in the preceding discussion has been similarly raised to a factual status. Indeed,
some of them have lost ground over the years. For example, as noted earlier, the
assumption that emission of certain particles from an atom during a decay process
indicates that these particles existed in the atom before decay, assumption (3), has been
seriously weakened by the large increase in the number of new particles that are being
emitted from atoms during high energy processes. The present uncritical acceptance of
the nuclear atom-model is not a result of more empirical support, but of increasing
familiarity, together with the absence (until now) of plausible alternatives. A comment by
N. R. Hanson on the quantum theory, one of the derivatives of the nuclear atom model, is
equally applicable to the model itself. This theory, he says, is “conceptually imperfect”
and “riddled with inconsistencies.” Nevertheless, it is accepted in current practice
because “it is the only extant theory capable of dealing seriously with
microphenomena.”70
The existence, or non-existence, of alternatives has no bearing, however, on the question
we are now examining, the question as to whether the nuclear atom-model is a true
representation of physical reality. Neither general acceptance nor long years of freedom
from competition has any bearing on the validity of the model. Its probability of being
correct depends on the probability that the 14 assumptions on which it rests are all
individually valid. Even if no ad hoc assumptions were involved, this composite
probability, the product of the individual probabilities, would be low because of the
cumulative effect. This line of theoretical development is the kind of product that
Einstein called “an unbearable accumulation of individual assumptions.” Even if we
assume the relatively high value of 90 percent for the probability of the validity of each
individual assumption, the probability that the final result, the atom-model, is correct
would be less than one in four. When the very low probability of the four purely ad hoc
assumptions is taken into account, it is evident that the probability of the nuclear atom-
model, “the atom of modern physics,” being a correct representation of physical reality is
close to zero.
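The arithmetic behind the “less than one in four” figure is simply

$0.90^{14} \approx 0.23 < \tfrac{1}{4}$,

before any allowance is made for the much lower probability of the four ad hoc assumptions.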
This conclusion derived from an examination of the foundations of the currently accepted
model will no doubt be resisted–and probably resented–by those who are accustomed to
the confident assertions in the scientific literature. But it is exactly what many of those
who played leading roles in the development of the long list of assumptions leading to the
present version of the nuclear theory have been telling us. These scientists know that the
construction of the model in terms of electrons moving in orbits around a positively*
charged nucleus does not mean that such entities actually exist in an atom, or behave in
the manner specified in the theory. Erwin Schrödinger, for instance, emphasized that the
model is “only a mental help, a tool of thought,”71 and asserted that if the question, “Do
the electrons actually exist in these orbits within the atom?” is asked, it “is to be
answered with a decisive No.”72 Werner Heisenberg, another of the architects of the
modern version of Bohr's atom-model, tells us that the physicists' atom does not even
“exist objectively in the same sense as stones or trees exist.”73 It is, “in a way, only a
symbol,”9 he says.
These statements, applying specifically to the nuclear theory of the atom, that have been
made by individuals who know the true status of the assumptions that entered into the
construction of that theory, agree with the conclusions that we have reached on the basis
of probability considerations. Thus the confident statements that appear throughout the
scientific literature, asserting that the nature of the atomic structure is now “known,” are
wholly unwarranted. A hypothesis that is “only a mental help” is not a representation of
reality. A theoretical line of development that culminates in nothing more than a
“symbol” or a “tool of thought” is not an exploration of the real world; it is an excursion
into the land of fantasy.
The finding that the nuclear atom-model rests on false premises does not necessarily
invalidate the currently accepted mathematical relationships derived from it, or suggested
by it. This may appear contradictory, as it implies that a wrong theory may lead to correct
answers. However, the truth is that the conceptual and mathematical aspects of physical
theories are, to a large extent, independent. As Feynman puts it, “Every theoretical
physicist who is any good knows six or seven different theoretical representations for
exactly the same physics.”74 Such a physicist recognizes that many different conceptual
explanations can agree with the same mathematical relations. A major reason for this is that the
mathematical relations are usually identified first, and an explanation in the form of a
theory is developed later as an interpretation of the mathematics. As noted earlier, many
such explanations are almost always possible in each case. In the course of the
investigation on which this present work is based, this has been found to be true even
where the architects of present-day theory contend that “there is no other way.”
Since the practical applications of a theory are primarily mathematical, or quantitative,
one might be led to ask, Why do we want an explanation? Why not just use the
mathematics without any concern as to their meaning? The answer is that while the
established mathematical relations may serve the specific purposes for which they were
developed, they cannot be safely extrapolated beyond the ranges of conditions over
which they have been tested, and they make no contribution toward an understanding of
relations in other areas. On the contrary, they lead to wrong conclusions, and constitute
roadblocks in the way of identifying the correct principles and relations in related areas.
This is what has happened as a result of the assumptions that were made in the course of
developing the nuclear atom-model. Once it was assumed that the atom is composed
primarily of oppositely charged particles, and some valid mathematical relations were
developed and expressed in terms of this concept, the prevailing tendency to accept
mathematical agreement as proof of validity, together with the absence (until now) of any
serious competition, elevated this product of multiple assumptions to the level of an
accepted fact. “Today we know that the atom consists of a positively charged nucleus
composed of protons and neutrons surrounded by negatively charged electrons.” This
positive statement, or its equivalent, can be found in almost every physics textbook. But
any proposition that rests on assumptions is hypothesis, not knowledge. Classifying a
model that rests upon more than a dozen independent assumptions, mostly untestable, and
including several of the inherently dubious “ad hoc” variety, as “knowledge” is a travesty
on science.
When the true status of the nuclear atom-model is thus identified, it should be no surprise
to find that the development of the theory of the universe of motion reveals that the atom
actually has a totally different structure. We now find that it is not composed of
individual particles, and in its normal state it contains no electric charges. This new view
of atomic structure was derived by deduction from the postulates that define the universe
of motion, and it therefore participates in the verification of the Reciprocal System of
theory as a whole. However, in view of the crucial position of the nuclear theory in
conventional physics it is advisable to make it clear that this currently accepted theory is
almost certainly wrong, on the basis of current physical knowledge, even without the
additional evidence supplied by the present investigation, and that some of the physicists
who were most active in the construction of the modern versions of the nuclear model
concede that it is not a true representation of physical reality. This is the primary purpose
of the present chapter.
In line with this objective, the most significant of the errors introduced into electric and
magnetic theory by acceptance of this erroneous model of atomic structure have been
identified in the preceding pages. But this is not the whole story. This product of “an
unbearable accumulation of individual assumptions” has had an even more detrimental
effect on astronomy. The errors that it has introduced into astronomical thought will be
discussed in detail in Volume III, but it will be appropriate at this time to point out why
astronomy has been particularly vulnerable to an erroneous assumption of this nature.
The magnitudes of the basic physical properties extend through a much wider range in
the astronomical field than in the terrestrial environment. A question of great
significance, therefore, in the study of astronomical phenomena, is whether the physical
laws and principles that apply under terrestrial conditions are also applicable under the
extreme conditions to which many astronomical objects are subjected. Most scientists are
convinced, largely on philosophical, rather than scientific, grounds, that the same
physical laws do apply throughout the universe. The results obtained by development of
the consequences of the postulates that define the universe of motion agree with this
philosophical assumption. However, there is a general tendency to interpret this principle
of universality of physical law as meaning that the laws that have been established as
applicable to terrestrial conditions are applicable throughout the universe. This is
something entirely different, and our findings do not support it.
The error in this interpretation of the principle stems from the fact that most physical
laws are valid, in the form in which they are usually expressed, only within certain limits.
Many of the currently accepted laws applicable to solids, for example, do not apply at
temperatures above the melting points of the various material substances. The prevailing
interpretation of the uniformity principle carries with it the unstated assumption that there
are no such limits applicable to the currently accepted laws and principles other than
those that are recognized in present-day practice. In view of the very narrow range of
conditions through which these laws and principles have been tested, this assumption is
clearly unjustified, and our findings now show that it is definitely incorrect. We find that
while it is true that the same laws and principles are applicable throughout the universe,
most of the basic laws are subject to certain modifications at critical magnitudes, which
often exceed the limiting magnitudes experienced on earth, and are therefore unknown to
present-day science. Unless a law is so stated that it provides for the existence and effects
of these critical magnitudes, it is not applicable to the universe as a whole, however
accurate it may be within the narrow terrestrial range of conditions.
One property of matter that is subject to an unrecognized critical magnitude of this nature
is density. In the absence of thermal motion, each type of material substance in the
terrestrial environment has a density somewhere in the range from 0.075 (hydrogen) to
22.5 (osmium and iridium), relative to liquid water at 4° C as 1.00. The average density
of the earth is 5.5. Gases and liquids at lower densities can be compressed to this density
range by application of sufficient pressure. Additional pressure then accomplishes some
further increase in density, but the increase is relatively small, and has a decreasing trend
as the pressure rises. Even at the pressures of several million atmospheres reached in
shock wave experiments, the density was only increased by a factor of about two. Thus
the maximum density to which the contents of the earth could be raised by application of
pressure is not more than about 15.
The density of most of the stars of the white dwarf class is between 100,000 and
1,000,000. There is no known way of getting from a density of 15 to a density of 100,000.
And present-day physics has no general theory from which an answer to this problem can
be deduced. So the physicists, already far from the solid ground of reality with their
hypotheses based on an atom-model that is “only a symbol,” plunge still farther into the
realm of the imagination by adding more assumptions to the sequence of 14 included in
the nuclear atom-model. It is first assumed (15) that at some extremely high pressure the
hypothetical nuclear structure collapses, and its constituents are compressed into one
mass, eliminating the vacant space in the original structure, and increasing the density to
the white dwarf range.
How the pressure that is required to produce the “collapse” is generated has never been
explained. The astronomers generally assume that this pressure is produced at the time
when, according to another assumption (16), the star exhausts its fuel supply.
With its fuel gone it [the star] can no longer generate the pressure needed to maintain itself against the
crushing force of gravity.75
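The conventional stellar-structure relation underlying this assertion, which belongs to the theory being examined rather than to the present development, is the hydrostatic equilibrium condition

$\dfrac{dP}{dr} \;=\; -\,\dfrac{G\,M(r)\,\rho(r)}{r^{2}}$,

in which the outward pressure gradient is credited with supporting the star against its own gravity.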

But fluid pressure is effective in all directions; down as well as up. If the “crushing force
of gravity” is exerted against a gas rather than directly against the central atoms of the
star, it is transmitted undiminished to those atoms. It follows that the pressure against the
atoms is not altered by a change of physical state due to a decrease in temperature,
except to the extent that the dimensions of the star may be altered. When it is realized that
the contents of ordinary stars, those of the main sequence, are already in a condensed
state (a point discussed in detail in Volume III), it is evident that the change in
dimensions is too small to be significant in this connection. The origin of the hypothetical
“crushing pressure” thus remains unexplained.
Having assumed the fuel supply exhausted, and the star cooled down, in order to produce
the collapse, the theorists find it necessary to reheat the star, since the white dwarfs are
relatively hot, as their name implies, rather than cold. So again they call upon their
imaginations and come up with a new assumption to take care of the problem. They
assume (17) that when the atomic structure collapses, the matter of the star enters a new
state. It becomes degenerate matter, and acquires a new set of properties, among which is
the ability to generate a new supply of energy to account for the observed temperature of
the white dwarf stars.
Even with the wide latitude for further assumptions that is available in this purely
imaginary situation, the white dwarf hypothesis could not be extended sufficiently to
encompass all of the observed types of extremely dense stars. To meet this problem it
was assumed (18) that the collapse which produces the white dwarf is limited to
relatively small stars, so that the white dwarfs do not exceed a limiting mass of about two
solar masses. Larger stars are assumed (19) to explode rather than merely collapse, and it
is further assumed (20) that the pressure generated by such an explosion is sufficient to
compress the residual matter to such a degree that the hypothetical constituents of the
atoms are all converted into neutrons, producing a neutron star (currently identified with
the observed objects known as pulsars). There is no evidence to support this assumption.
The existence of a process that accomplishes such a conversion under pressure is itself an
assumption (21), and the concept of a neutron star requires the further assumption (22)
that neutrons can exist as stable independent particles under the assumed conditions.
Although this is the currently orthodox explanation of the origin of the pulsars, it is
viewed rather dubiously even by some of the astronomers. Martin Harwit, for instance,
concedes that “we have no theories that satisfactorily explain just how a massive star
collapses to become a neutron star.”76
The neutron star, too, is assumed to have a limiting mass. It is assumed (23) that the
compression due to the more powerful explosion of the larger star reduces the volume of
the residual aggregate enough to enable its self-gravitation to continue the compression.
It is then further assumed (24) that the reduction of the size of the aggregate eventually
reaches the point where the gravitational force is so great that radiation cannot escape.
What then exists is a black hole.
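In conventional general-relativistic terms, the criterion invoked by assumption (24) is that the aggregate has contracted within its Schwarzschild radius,

$r_s \;=\; \dfrac{2GM}{c^{2}}$,

inside which, according to that theory, neither matter nor radiation can escape.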
While it is not generally recognized as such, the “self-gravitation” concept, in application
to atoms, is another assumption (25). Observations show only that gravitation operates
between atoms or independent particles. The hypothesis that it is also applicable within
atoms is derived from Einstein's general theory of relativity, but since there is no proof of
this theory (the points that have thus far been adduced in its favor are merely evidence)
this derivation does not alter the fact that the hypothesis of gravitation within atoms rests
on an assumption.
Most astronomers who accept the existence of black holes apparently prefer to look upon
these objects as the limiting state of physical existence, but others recognize that if self-
gravitation is a reality, and if it is once initiated, there is nothing to stop it at an
intermediate stage such as the black hole. These individuals therefore assume (26) that
the contraction process continues until the material aggregate is reduced to a mere point,
a singularity.
This line of thought that we have followed from the physicists' concept of the nature of
electricity to the nuclear model of atomic structure, and from there to the singularity, is a
good example of the way in which unrestrained application of imagination and
assumption in theory construction leads to ever-increasing levels of absurdity–in this
case, from atomic “collapse” to degenerate matter to neutron star to black hole to
singularity. Such a demonstration that extension of a line of thought leads to an absurdity,
the reductio ad absurdum, as it is called, is a recognized logical method of disproving the
validity of the premises of that line of thought. The physicist who tells us that ―the laws
of modern physics virtually demand that black holes exist‖ is, in effect, telling us that
there is something wrong with the laws of modern physics. In the preceding pages we
have shown just what is wrong: too much of the foundation of conventional physical
theory rests on untestable assumptions and ―models.‖
The physical theory derived by development of the consequences of the postulates that
define the universe of motion differs very radically from current thought in some areas,
such as astronomy, electricity, and magnetism. Many scientists find it hard to believe that
the investigators who constructed the currently accepted theories could have made so
many mistakes. It should be emphasized, therefore, that the profusion of conflicts
between present-day ideas and our findings does not indicate that the previous
investigators have made a multitude of errors. What has happened is that they have made
a few serious errors that have had a multitude of consequences.
The astronomical theories based on the nuclear atom-model that have been mentioned in
this chapter provide a good example of how one basic error distorts the pattern of
thinking over a wide area. In this case, an erroneous theory of the structure of the atom
leads to an erroneous theory of extremely high density, which then results in the
construction of erroneous theories of all of the astronomical objects composed of
ultradense matter; not only the white dwarfs, but also quasars, pulsars, x-ray emitters, and
compact galactic cores. Once the pyramiding of assumptions begins, such spurious
results are inevitable.

CHAPTER 19

Magnetostatics
As we saw in the preceding pages, one of the principal obstacles to the development of a
more complete and consistent theory of electrical phenomena has been the exaggerated
significance that has been attached to the points of similarity between static and current
electricity, an attitude that has fostered the erroneous belief that only one entity, electric
charge, is involved in the two types of phenomena. The same kind of a mistake has been
made in a more complete and categorical manner in the current view of magnetism. While
insisting that electrostatic and current phenomena are simply two aspects of the same thing,
contemporary scientific opinion concedes that there is enough difference between the two to
justify a separate category of electrostatics for the theoretical aspects of the static
phenomena. But if magnetostatics, the corresponding branch of magnetism, is mentioned at
all in modern physics texts, it is usually brushed off as an ―older approach‖ that is now out
of date. Strictly static concepts, such as that of magnetic poles, are more often than not
introduced somewhat apologetically.
The separation of physical fields of study into more and more subdivisions has been a
feature of scientific activity throughout its history. Here in the magnetostatic situation we
have an example of the reverse process, a case in which a major subdivision of physics has
succumbed to cannibalism. Magnetostatics has been swallowed by a related, but quite
different, phenomenon, electromagnetism, which will be considered in Chapter 21. There
are many similarities between the two types of magnetic phenomena, just as there are
between the two kinds of electricity. In fact, the quantities in terms of which the
magnetostatic magnitudes are expressed are defined mainly by electromagnetic relations.
But this is not by any means sufficient to justify the current belief that only one entity is
involved. The subordinate status that conventional physics gives to purely magnetic
phenomena is illustrated by the following comment from K. W. Ford:
As theoretical physicists see it, magnetism in our world is merely an accidental by-product of electricity; it
exists only as a result of the motion of electrically charged particles. 77

The implication of a confident statement of this kind is that the assertions which it makes are
reasonably well established. In fact, however, this assertion that magnetism exists only as a
result of the motion of electrically charged particles is based entirely on unsubstantiated
assumptions. The true situation is more accurately described by the following quotation
from a physics textbook:
It is only within the past thirty years or so that models tying together these two sources of magnetism [magnets
and electromagnetism] have been developed. These models are far from perfect, even today, but at least they
have convinced most people that there is really only one source of magnetic fields: all magnetic fields come
from moving electric charges.78

What this text is, in effect, telling us is that the idea does not work out very well in practice,
but that it has been accepted by majority vote anyway. A prominent American astronomer, J.
N. Bahcall, has pointed out that ―We frequently settle important scientific issues by
acclamation rather than observation.‖79 The uncritical acceptance of the ―far from perfect‖
models of magnetism is a good example of this unscientific practice.
A strange feature of the existing situation is that after having come to this conclusion that
magnetism is merely a by-product of electricity, one of the ongoing activities of the
physicists is a search for the magnetic analog of the mobile electric charge, the electron.
Again quoting K. W. Ford:
An electric particle gives rise to an electric field, and when it moves it produces a magnetic field as a
secondary effect. For symmetry‘s sake there should be magnetic particles that give rise to magnetic fields and
in motion produce electric fields in the same way that moving electric particles create magnetic fields.

This author admits that ―So far the magnetic monopole has frustrated all its investigators.
The experimenters have failed to find any sign of the particle.‖ Yet this will-o‘-the-wisp
continues to be pursued with an ardor that invites caustic comments such as this:
It is remarkable how the lack of experimental evidence for the existence of magnetic monopoles does not
diminish the zeal with which they are sought.80

Ford‘s contention is that ―the apparent non-existence of monopole particles now presents
physicists with a paradox that they cannot drop until they have found an explanation.‖ But
he (unintentionally) supplies the answer to the paradox when he closes his discussion of the
monopole situation with this statement:
What concerns the physicist is that, in defiance of symmetry and all the known laws, no magnetic particle so
far has been created or found anywhere.

Whenever the observed facts ―defy‖ the ―known laws‖ and the current understanding of the
application of symmetry relations to any given situation, it can safely be concluded that the
current understanding of symmetry and at least some of the ―known laws‖ are wrong. In the
present case, any critical appraisal will quickly show not only that a number of the premises
from which the conclusion as to the existence of magnetic monopoles is derived are pure
assumptions without factual support, but also that there is a definite contradiction between
two of the key assumptions.
As explained by Ford, the magnetic monopole for which the physicists are searching so
assiduously is a particle which ―gives rise to magnetic fields‖; that is, a magnetic charge. If
such a particle existed, it would, of course, exhibit magnetic effects due to the charge. But
this is a direct contradiction of the prevailing assumption that magnetism is a ―by-product of
electricity.‖ The physicists cannot have it both ways. If magnetism is a by-product of
electricity (that is, electric charges, in current thought), then there cannot be a magnetic
charge, a source of magnetic effects, analogous to the electric charge, a source of electric
effects. On the other hand, if a particle with a magnetic charge (a magnetic monopole) does
exist, then the physicists‘ basic theory of magnetism, which attributes all magnetic effects to
electric currents, is wrong.
It is obvious from the points brought out in the theoretical development in the preceding
pages that the item of information which has been missing is an understanding of the
physical nature of magnetism. As long as magnetism is assumed to be a by-product of
electricity, and electricity is regarded as a given feature of nature, incapable of explanation,
there is nothing to guide theory into the proper channels. But once it is recognized that
magnetostatic phenomena are due to magnetic charges, and that such a charge is a type of
motion–a rotational vibration–the situation is clarified almost automatically. Magnetic
charges do, indeed, exist. Just as there are electric charges, which are one-dimensional
rotational vibrations acting in opposition to one-dimensional rotations, there are magnetic
charges, which are two-dimensional rotational vibrations acting in opposition to two-
dimensional rotations. The phenomena due to charges of this nature are the subject matter of
magnetostatics. Electromagnetism is a different phenomenon that is also two-dimensional,
but involves motion of a continuous, rather than vibratory, nature.
The two-dimensionality is the key to understanding the magnetic relations, and the failure to
recognize this basic feature of magnetism is one of the primary causes of the confusion that
currently exists in many areas of magnetic theory. The two dimensions of the magnetic
charge and electromagnetism are, of course, scalar dimensions. The motion components in
the second dimension are not capable of direct representation in the conventional spatial
reference system, but they have indirect effects that are observable, particularly on the
effective magnitudes. Lack of recognition of the vibrational nature of electrostatic and
magnetostatic motions, which distinguishes them sharply from the continuous motions
involved in current electricity and electromagnetism, also contributes significantly to the
confusion. Magnetostatics resembles electromagnetism in those respects in which the
number of effective dimensions is the determining factor, whereas it resembles electrostatics
in those respects in which the determining factor is the vibrational character of the motion.
Our findings show that the absence of magnetic monopoles is not a ―defiance of symmetry.‖
The symmetry exists, but a better understanding of the nature of electricity and magnetism is
required before it can be recognized. There is symmetry in the electric and magnetic
relations, and in some respects it is the kind of symmetry envisioned by Ford and his
colleagues. One type of magnetic field is produced in the same manner as an electric field,
just as Ford suggests in his explanation of the reasoning underlying the magnetic monopole
hypothesis. But it is not an ―electric particle‖ that produces an electric field; it is a certain
kind of motion–a rotational vibration–and a magnetic field is produced by a similar
rotational vibration. The electric current, a translational motion of a particle (the uncharged
electron) in a conductor, produces a magnetic field. As we will see in Chapter 21, a
translational motion of a magnetic field likewise produces an electric current in a conductor.
Here, too, symmetry exists, but not the kind of symmetry that would call for a magnetic
monopole.
The magnetic force equation, the expression for the force between two magnetic charges, is
identical with the Coulomb equation, except for the factor t/s introduced by the second
scalar dimension of motion in the magnetic charge. The conventional form of the equation is
F = MM‘/d2. As in the other primary force equations, the terms M‘ and d2 are dimensionless.
From the general principles applying to these force equations, as defined in Chapter 14, the
missing term in the magnetic equation, analogous to 1/s in the Coulomb equation, is 1/t. The
space-time dimensions of the magnetic equation are then F = t2/s2 x 1/t = t/s2.
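The exponent bookkeeping in this equation can be checked mechanically. The short sketch below is an illustrative addition; the names and the (t-power, s-power) notation are chosen here for convenience, and the 1/t factor is the one supplied by the general principles referred to above:

    # Illustrative only: a dimension is written as (t-power, s-power),
    # so force t/s2 is (1, -2) and magnetic charge t2/s2 is (2, -2).
    def mul(a, b):
        return (a[0] + b[0], a[1] + b[1])

    magnetic_charge = (2, -2)   # t2/s2
    per_unit_time   = (-1, 0)   # the missing 1/t term
    force           = (1, -2)   # t/s2

    # F = MM'/d2 x 1/t, with M' and d2 treated as dimensionless
    assert mul(magnetic_charge, per_unit_time) == force
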
Like the motion that constitutes the electric charge, and for the same reasons, the motion that
constitutes the magnetic charge has the outward scalar direction. But since magnetic rotation
is necessarily positive (time displacement) in the material sector, all stable magnetic charges
in that sector have displacement in space (negative), and there is no independent magnetic
phenomenon corresponding to the negative* electric charge. In this case there is no
established usage that prevents applying the designations that are consistent with the
rotational terminology, and we will therefore refer to the magnetic charge as negative, rather
than using the positive* designation, as in application to the electric charge.
Although positive magnetic charges do not exist in the material environment, except under
the influence of external forces in a situation that will be discussed later, the two-
dimensional character of the magnetic charge introduces an orientation effect not present in
the electric phenomena. All one-dimensional (electric) charges are alike; they have no
distinguishing characteristic whereby they can be subdivided into different types or classes.
But a two-dimensional (magnetic) charge consists of a rotational vibration in the dimension
of the reference system and another in a second scalar dimension independent of the first,
and therefore perpendicular to it in a geometrical representation. The rotation with which
this second rotational vibration is associated divides the atom into two halves that can be
separately identified. On one side of this dividing line the rotation appears clockwise to
observation. The scalar direction of the magnetic charge on this side is therefore outward
from a clockwise rotation. A similar charge on the opposite side is a motion outward from a
counterclockwise rotation.
The unit of magnetic charge applies to only one of the two rotating systems of the atom.
Each atom therefore acquires two charges, which occupy the positions described in the
preceding paragraph, and are oppositely directed. Each atom of a magnetic, or magnetized,
substance thus has two poles, or centers of magnetic effect. These are analogous to the
magnetic poles of the earth, and are named accordingly, as a north pole, or north-seeking
pole, and a south pole.
These poles constitute scalar reference points, as defined in Chapter 12. The effective
direction of the rotational vibration that constitutes the charge located at the north pole is
outward from the north reference point, while the effective direction of the charge centered
at the south pole is outward from the south reference point. The interaction of two
magnetically charged atoms therefore follows the same pattern as the interaction of electric
charges. As illustrated in Fig. 22, which is identical with Fig. 20, Chapter 13, except that it
substitutes poles for charges, two north poles (line a) move outward from north reference
points, and therefore outward from each other. Two south poles (line c) similarly move
outward from each other. But, as shown in line b, a north pole moving outward from a north
reference point is moving toward a south pole that is moving outward from a south reference
point. Thus like poles repel and unlike poles attract.

Figure 22
[Diagram, identical with Fig. 20 of Chapter 13 except that poles replace charges: (a) two north poles, each moving outward from a north reference point and hence away from each other; (b) a north pole and a south pole, each moving outward from its own reference point and hence toward each other; (c) two south poles moving away from each other.]
On this basis, when two magnetically charged atoms are brought into proximity, the north
pole of one atom is drawn to the south pole of the other. The resulting structure is a linear
combination of a north pole, a neutral combination of two poles, and a south pole. Addition
of a third magnetically charged atom converts this south pole to a neutral combination, but
leaves a new south pole at the new end of the structure. Further additions of the same kind
can then be made, limited only by thermal and other disruptive forces. A similar linear array
of atoms with north and south poles at opposite ends can be produced by introducing atoms
of magnetizable matter between the magnetically charged atoms of a two-atom combination.
Separation of this structure at any point breaks a neutral combination, and leaves north and
south poles at the ends of each segment. Thus no matter how finely magnetic material is
divided, there are always north and south poles in every piece of the material.
Because of the directional character of the magnetic forces they are subject to shielding in
the same manner as electric forces. The gravitational force, on the other hand, cannot be
screened off or modified in any way. Many observers have regarded this as an indication
that the gravitational force must be something of a fundamentally different nature. This
impression has been reinforced by the difficulty that has been experienced in finding an
appropriate place for gravitation in basic physical theory. The principal objective of the
theorists currently working on the problem of constructing a ―unified theory,‖ or a ―grand
unified theory,‖ of physics is to find a place for gravitation in their theoretical structure.
Development of the theory of the universe of motion now shows that gravitation, static
electricity, and magnetostatics are phenomena of the same kind, differing only in the number
of effective scalar dimensions. Because of the symmetry of space and time in this universe,
every kind of force (motion) has an oppositely directed counterpart. Gravitation is no
exception. It takes place in time as well as in space, and is therefore subject to the same
differentiation between positive and negative as that which we find in electric forces. But in
the material sector of the universe the net gravitational effect is always in space–that is,
there is no effective negative gravitation–whereas in the cosmic sector it is always in time.
And since gravitation is three-dimensional, there cannot be any directional differentiation of
the kind that we find in magnetism.
Because of the lack of understanding of the true relation between electromagnetic and
gravitational phenomena, conventional physical science has been unable to formulate a
theory that would apply to both. The approach that has been taken to the problem is to
assume that electricity is fundamental, and to erect the structure of physical theory on this
foundation, further assumptions being made along the way as required in order to bring the
observations and measurements into line with the electrically based theory. Gravitation has
thus been left with the status of an unexplained anomaly. This is wholly due to the manner
in which the theories have been constructed, not to any peculiarity of gravitation. If the
approach had been reversed, and physical theory had been constructed on the basis of the
assumption that gravitation is fundamental, electricity and magnetism would have been the
―undigestable‖ items. The kind of a unified theory that the investigators have been trying to
construct can only be attained by a development, such as the one reported in this work, that
rests on a solid foundation of understanding in which each of these three basic phenomena
has its proper place.
Aside from the effects of the difference in the number of scalar dimensions, the properties of
the rotational vibration that constitutes a magnetic charge are the same as those of the
rotational vibration that constitutes an electric charge. Magnetic charges can therefore be
induced in appropriate materials. These materials in which magnetic charges are induced
behave in the same manner as permanent magnets. In fact, some of them become permanent
magnets when charges are induced in them. However, only a relatively small number of
elements are capable of being magnetized to a significant degree; that is, have the property
known as ferromagnetism.
The conventional theories of magnetism have no explanation of the restriction of
magnetization (in this sense) to these elements. Indeed, these theories would seem to imply
that it should be a general property of matter. On the basis of the assumptions previously
mentioned, the electrons which conventional theory regards as constituents of atoms are
miniature electromagnets, and produce magnetic fields. In most cases, it is asserted, the
magnetic fields of these atoms are randomly oriented, and there is no net magnetic resultant.
―However, there are a few elements in whose atoms the fields from the different electrons
don‘t exactly cancel, and these atoms have a net magnetic field… in a few materials… the
magnetic fields of the atoms line up with each other.‖81 Such materials, it is asserted, have
magnetic properties. Just why these few elements should possess a property that most
elements do not have is not specified.
For an explanation in terms of the theory of the universe of motion we need to consider the
nature of the atomic motion. If a two-dimensional positive rotational vibration is added to
the three-dimensional combination of motions that constitutes the atom it modifies the
magnitudes of those motions, and the product is not the same atom with a magnetic charge;
it is an atom of a different kind. The results of such additions will be examined in Chapter
24. A magnetic charge, as a distinct entity, can exist only where an atom is so constituted
that there is a portion of the atomic structure that can vibrate two-dimensionally
independently of the main body of the atom. The requirements are met, so far as the
magnetic rotation is concerned, where this rotation is asymmetric; that is, there are n
displacement units in one of the two magnetic dimensions and n+1 in the other.
On this basis, the symmetrical B groups of elements, which have magnetic rotations 1-1, 2-
2, 3-3, and 4-4, are excluded. While the magnetic charge has no third dimension, the electric
rotation with which it is associated in the three-dimensional motion of the atom must be
independent of that associated with the remainder of the atom. The electric rotational
displacement must therefore exceed 7, so that one complete unit (7 displacement units plus
an initial unit level) can stay with the main body of the magnetic rotation, while the excess
applies to the magnetic charge. Furthermore, the electric displacement must be positive, as
the reference system cannot accommodate two different negative displacements (motion in
time) in the same atomic structure. The electronegative divisions (III and IV) are thus totally
excluded. The effect of all of these exclusions is to confine the magnetic charges to Division
II elements of Groups 3A and 4A.
In Group 3A the first element capable of taking a magnetic charge in its normal state is iron.
This number one position is apparently favorable for magnetization, as iron is by far the
most magnetic of the elements, but a theoretical explanation of this positional advantage is
not yet available. The next two elements, cobalt and nickel, are also magnetic, as their
electric displacement is normally positive. Under some special conditions, the displacements
of chromium (6) and manganese (7) are increased to 8 and 9 respectively by reorientation
relative to a new zero point, as explained in Chapter 18 of Volume I. These elements are
then also able to accept magnetic charges.
According to the foregoing explanation of the atomic characteristics that are required in
order to permit acquisition of a magnetic charge, the only other magnetic (in this sense)
elements are the members of Division II of Group 4A (magnetic displacements 4-3). This
theoretical expectation is consistent with observation, but there are some, as yet
unexplained, differences between the magnetic behavior of these elements and that of the
Group 3A elements. The magnetic strength is lower in the 4A group. Only one of the
elements of this group, gadolinium, is magnetic at room temperature, and this element does
not occupy the same position in the group as iron, the most magnetic element of Group 3A.
However, samarium, which is in the iron position, does play an important part in many
magnetic alloys. Gadolinium is two positions higher in the atomic series, which may
indicate that it is subject to a modification similar to that applying to the lower 3A elements,
but oppositely directed.
If we give vanadium credit for some magnetic properties on the strength of its behavior in
some alloys, all of the Division II elements of both the 3A and 4A groups have a degree of
magnetism under appropriate conditions. The larger number of magnetic elements in Group
4A is a reflection of the larger size of this 32 element group, which puts 12 elements into
Division II. There are a number of peculiarities in the relation of the magnetic properties of
these 4A elements, the rare earths, to the positions of the elements in the atomic series that
are, as yet, unexplained. They are probably related to the other still unexplained
irregularities in the behavior of these elements that were noted in the discussions of other
physical properties. The magnetic capabilities of the Division II elements and alloys carry
over into some compounds, but the simple compounds such as the binary chlorides, oxides,
etc. tend to be non-magnetic; that is, incapable of accepting magnetic charges of the
ferromagnetic type.
In undertaking an examination of individual magnetic phenomena, our first concern will be
to establish the correct dimensions of the quantities with which we will be working. This is
an operation that we have had to carry out in every field that we have investigated, but it is
doubly important in the case of magnetism because of the dimensional confusion that
admittedly exists in this area. The principal reason for this confusion is the lack in
conventional physical theory of any valid general framework to which the dimensional
assignments of electric and magnetic quantities can be referred. The customary assignment
of dimensions on the basis of an analysis into mass, length, and time components produces
satisfactory results in the mechanical system of quantities. Indeed, all that is necessary to
convert these mechanical MLT assignments to the correct space-time dimensions is to
recognize the t3/s3 dimensions of mass. But extending this MLT system to electric and
magnetic quantities meets with serious difficulties. Malcolm McCaig makes the following
comment:
Very contradictory statements have been made about the dimensions of electrical quantities. While some
writers state that it is impossible to express the dimensions of all electrical and magnetic quantities in terms of
mass, length, and time, others such as Jeans and Nicolson do precisely that. 82

The nature of the problem that the theorists face in attempting to arrive at an accurate and
consistent set of MLT dimensions can be seen by comparing the dimensions that have been
assigned to one of the basic electric quantities, electric current, with the correct space-time
dimensions that we have identified in the preceding pages. Current, in MLT terms, is
asserted to have the dimensions M¹/2L¹/2T-1. When converted to space-time dimensions, this
expression becomes (t3/s3)¹/2 x s¹/2 x t-1 = t¹/2/s. The correct dimensions are s/t. The reason
for the discrepancy is that the MLT dimensions are taken from the force equations, and
therefore reflect the errors in the conventional interpretation of those equations. The further
error due to the lack of distinction between electric charge and electric quantity is added
when dimensions are assigned to the electric current, and the final result has no resemblance
to the correct dimensions.
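The substitution involved in this comparison can be carried out mechanically. The following sketch is an illustration added here (the function name is arbitrary); it converts any MLT assignment to space-time form by substituting t3/s3 for mass, s for length, and t for time:

    from fractions import Fraction

    # Illustrative only: result is (t-power, s-power).
    def mlt_to_space_time(m, l, t):
        return (3 * m + t, -3 * m + l)

    half = Fraction(1, 2)
    # Electric current as assigned in the MLT system: M^1/2 L^1/2 T^-1
    print(mlt_to_space_time(half, half, -1))   # (1/2, -1), i.e. t^1/2 / s
    # The correct space-time dimensions of current are s/t, i.e. (-1, 1).
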
The SI system and its immediate predecessors avoid a part of the problem by abandoning the
effort to assign MLT dimensions to electric charge, and taking charge as an additional basic
quantity. But here, again, the distinction between charge and quantity is not recognized,
leading to incorrect dimensions for electric current. These dimensions are stated as Q/T, the
space-time equivalent of which is 1/s, instead of the correct s/t. Both the MLT and the
MLTQ systems of dimensional assignment are thus wrong in almost every electric and
magnetic application, and they serve no useful purpose.
In our study of electrical fundamentals we were able to establish the correct dimensions of
the electric quantities by using the mechanical dimensions as a base and taking advantage of
the equivalence of mechanical and current phenomena. This approach is not feasible in
application to magnetism, but we have a good alternative, as our theory indicates that there
is a specific dimensional relation between the magnetic quantities and the corresponding
electric quantities, the dimensions of which we have already established.
The basic difference between electricity and magnetism is that electricity is one-dimensional
whereas magnetism is two-dimensional. However, the various permutations and
combinations of units of motion that account for the differences between one physical
quantity and another are phenomena of only one scalar dimension, the dimension that is
represented in the reference system. No more than this one dimension can be resolved into
components by introduction of dimensions of space (or time). It follows that addition of a
second dimension of motion to an electrical quantity takes the form of a simple unit of
inverse speed, t/s. The dimensions of the magnetic quantity corresponding to any given
electric quantity are therefore t/s times the dimensions of the electric quantity. The
dimensions thus derived for the principal magnetic quantities are shown in Table 30.

Table 30: Electrical Analogs of Principal Magnetic Quantities


Electric Magnetic
t dipole moment t2/s dipole moment
t/s charge t2/s2 flux
t/s2 potential t2/s3 vector potential
t/s3 flux density t2/s4 flux density
t/s3 field intensity t2/s4 field intensity
t2/s2 resistivity t3/s3 inductance
t2/s3 resistance t3/s4 permeability
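As a cross-check, the magnetic column of the table can be generated from the electric column. The sketch below is illustrative only (the quantity names are abbreviated for convenience); it simply adds one unit of t and removes one unit of s from each electric dimension, which is the effect of the t/s factor:

    # Illustrative only: dimensions as (t-power, s-power).
    electric = {
        "dipole moment":   (1, 0),    # t
        "charge":          (1, -1),   # t/s
        "potential":       (1, -2),   # t/s2
        "field intensity": (1, -3),   # t/s3
        "resistivity":     (2, -2),   # t2/s2
        "resistance":      (2, -3),   # t2/s3
    }
    for name, (t, s) in electric.items():
        print(f"{name:15}  electric ({t}, {s})  ->  magnetic ({t + 1}, {s - 1})")
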
Here, then, we have a solid foundation for a critical analysis of magnetic relations, one that
is free from the dimensional inconsistencies that have plagued magnetism ever since
systematic investigation of magnetic phenomena was begun. In the next chapter we will
apply the new understanding of magnetic fundamentals to an examination of magnetic
quantities and units.

CHAPTER 20
Magnetic Quantities and Units
One of the major issues in the study of magnetism is the question as to the units in which
magnetic quantities should be expressed, and the relations between them. ―Since the first
attempts to put its study on a quantitative basis,‖ says J. C. Anderson, ―magnetism has
been bedevilled by difficulties with units.‖83 As theories and mathematical methods of
dealing with magnetic phenomena have come and gone, there has been a corresponding
fluctuation in opinion as to how to define the various magnetic quantities, and what units
should be used. Malcolm McCaig comments that, ―with the possible exception of the
1940s, when the war gave us a respite, no decade has passed recently without some major
change being made in the internationally agreed definitions of magnetic units.‖ He
predicts a continuation of these modifications. ―My reason for expecting further
changes,‖ he says, ―is because there are certain obvious practical inconveniences and
philosophical contradictions in the SI system as it now stands.‖84
Actually, this difficulty with units is just another aspect of the dimensional confusion that
exists in both electricity and magnetism. Now that we have established the general nature
of magnetism and magnetic forces, our next objective will be to straighten out the
dimensional relations, and to identify a consistent set of units. The ability to reduce all
physical quantities to space-time terms has given us the tool by which this task can be
accomplished. As we have seen in the preceding pages, this identification of the space-
time relations plays a major part in the clarification of the physical situation. It enables us
to recognize the equivalence of apparently distinct phenomena, to detect errors and
omissions in statements of physical relationships, and to fit each individual relation into
the total physical picture.
Furthermore, the verification process operates in both directions. The fact that all
physical phenomena and relations can be expressed in terms of space and time not only
enables identifying the correct relations, but is also an impressive confirmation of the
validity of the basic postulate which asserts that the physical universe is composed, in its
entirety, of units of motion, an entity defined as a reciprocal relation between space and
time.
The conventional treatment of magnetic phenomena employs the units of the mechanical
and electrical systems so far as they are appropriate, and also, in some specialized
applications, utilizes the same quantities under different names. For example, inductance,
symbol L, is the term applied to the quantity involved in the production of an
electromotive force in a conductor by means of variations in the current. The
mathematical expression is
F = -L dI/dt
In space-time terms, the inductance is then
L = t/s2 x t/s x t = t3/s3
These are the dimensions of mass. Inductance is therefore equivalent to inertia. Because
of the dimensional confusion in the magnetic area the inductance has often been regarded
as being dimensionally equivalent to length, and the centimeter has been used as a unit,
although the customary unit is now the henry, which has the correct dimensions. The true
nature of the quantity known as inductance is illustrated by a comparison of the inductive
force equation with the general force equation, F = ma.
F = ma = m dv/dt = m d2s/dt2
F =L dI/dt = L d2q/dt2
The equations are identical. As we have found, I (current) is a speed, and q (electric
quantity) is space. It follows that m (mass) and L (inductance) are equivalent. The
qualitative effects also lead to the same conclusion. Just as inertia resists any change in
speed or velocity, inductance resists any change in the electric current.
Recognition of the equivalence of inductance and inertia clarifies some hitherto obscure
aspects of the energy picture. An equivalent mass L moving with a speed I must have a
kinetic energy ¹/2LI2. We find experimentally that when a current I flowing through an
inductance L is destroyed, an amount of energy ¹/2LI2 does make its appearance. The
explanation on the basis of conventional theory is that the energy is ―stored in the
electromagnetic field,‖ but the identification of L with mass now shows that the
expression ¹/2LI2 is identical with the familiar expression ¹/2mv2, and that, like its
mechanical analog, it represents kinetic energy.
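Both the identification of inductance with mass and the kinetic-energy interpretation of ¹/2LI2 reduce to simple exponent arithmetic. The sketch below is illustrative only (the helper names are arbitrary) and carries out both checks:

    # Illustrative only: dimensions as (t-power, s-power).
    def mul(a, b): return (a[0] + b[0], a[1] + b[1])
    def div(a, b): return (a[0] - b[0], a[1] - b[1])

    force   = (1, -2)    # t/s2
    current = (-1, 1)    # s/t
    time    = (1, 0)     # t
    mass    = (3, -3)    # t3/s3
    energy  = (1, -1)    # t/s

    L = div(force, div(current, time))               # from F = L dI/dt
    assert L == mass                                 # inductance has the dimensions of mass
    assert mul(L, mul(current, current)) == energy   # 1/2 L I^2 is a kinetic energy
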
The inverse of inductance, t3/s3, is reluctance, s3/t3, the resistance of a magnetic circuit to
the establishment of a magnetic flux by a magnetomotive force. As can be seen, this
quantity has the dimensions of three-dimensional speed.
In addition to the quantities that can be expressed in terms of the units of the other classes
of phenomena, there are also some magnetic quantities that are peculiar to magnetism,
and therefore require different units. As brought out in the preceding chapter, these
magnetic quantities and their units are analogous to the electric quantities and units
defined in Chapter 13, differing from them only by reason of the two-dimensional nature
of magnetism, which results in the introduction of an additional t/s term into each
quantity.
The basic magnetic quantity, magnetic charge, is not recognized in current physical
thought, but an equivalent quantity, magnetic flux, is used instead of charge, as well as in
other applications where flux is the more appropriate term. The space-time dimensions of
this quantity are the dimensions of electric charge, t/s, multiplied by the factor t/s that
relates magnetism to electricity: t/s x t/s = t2/s2. In the cgs system, magnetic flux is
expressed in maxwells, a unit equivalent to 10⁻⁸ volt-sec. The SI unit is the weber,
equivalent to the volt-sec. The justification for deriving the basic magnetic unit from an
electric unit, the volt, can be seen when this derivation is expressed in space-time terms:
t/s2 x t = t2/s2.
The natural unit of magnetic flux is the product of the natural unit of electric potential,
9.31146 x 10⁸ volts, and the natural unit of time, 1.520655 x 10⁻¹⁶ seconds, and amounts
to 1.41595 x 10⁻⁷ volt-sec, or webers. The natural units of other magnetic quantities can
similarly be derived by combination of previously evaluated natural units.
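The numerical product quoted above is easily verified; the two factors are the values stated in the text, and only the multiplication itself is added here:

    # Numerical cross-check of the stated natural unit of magnetic flux.
    natural_potential = 9.31146e8      # volts
    natural_time      = 1.520655e-16   # seconds
    print(natural_potential * natural_time)   # about 1.41595e-07 volt-sec (webers)
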
Magnetic flux density, symbol B, is magnetic flux per unit area. The space-time
dimensions are t2/s2 x 1/s2 = t2/s4. The units are the gauss (cgs) or tesla (SI). Magnetic
potential (also called vector potential), like electric potential, is charge divided by
distance, and therefore has the space-time dimensions t2/s2 x 1/s = t2/s3. The cgs unit is the
maxwell per centimeter, or gilbert. The SI unit is the weber per meter.
Since conventional physical science has never established the nature of the relation
between electric, magnetic, and mechanical quantities, and has not recognized that an
electric potential is a force, the physical relations involving the potential have never been
fully developed. Extension of this poorly understood potential concept to magnetic
phenomena has then led to a very confused view of the relation of magnetic potential to
force and to magnetic phenomena in general.
As indicated above, the vector potential is the quantity corresponding to electric potential.
The investigators working in this area also recognize what they call a magnetic scalar
potential, which they define as B ds/µ, where s is space and µ is a quantity with the
dimensions t3/s4 that will be defined shortly. The space-time dimensions of the scalar
potential are thus t2/s4 x s x s4/t3 = s/t. The so-called scalar potential is therefore a speed,
equivalent to an
electric current, a conclusion that agrees with the units, amperes, in which this quantity is
measured. W. J. Duffin comments that it is not easy to put a physical interpretation on
magnetic scalar potential.85 The space-time dimensions of this quantity explain why. A
potential (that is, a force) equivalent to a speed is a physical contradiction. The scalar
potential is merely a mathematical construction without physical significance.
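The exponent arithmetic behind this conclusion can be checked as follows (an illustrative sketch; the quantity names are chosen here for readability):

    # Illustrative only: dimensions as (t-power, s-power).
    def mul(a, b): return (a[0] + b[0], a[1] + b[1])
    def div(a, b): return (a[0] - b[0], a[1] - b[1])

    flux_density     = (2, -4)   # B, t2/s4
    element_of_space = (0, 1)    # ds, s
    permeability     = (3, -4)   # t3/s4

    scalar_potential = div(mul(flux_density, element_of_space), permeability)
    assert scalar_potential == (-1, 1)   # s/t, the dimensions of a speed
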
As indicated earlier, the magnetic quantities thus far defined are derived from the
quantities of the mechanical and electrical systems. The units derived from the electrical
system are related to the corresponding units of that system by the dimensions t/s,
because of the two-dimensional nature of magnetism. Most of the other magnetic
quantities in common use are similarly derived, and all quantities of this set are therefore
dimensionally consistent with each other and with the mechanical and electrical
quantities previously defined in this and the preceding volume. But there are some other
magnetic quantities that have been derived empirically, and are not consistent with the
principal set of magnetic quantities or with the defined quantities in other fields. It is the
existence of inconsistencies of this kind that has led to the conclusion of some physicists,
expressed in a statement quoted in Chapter 9, that a consistent system of dimensions of
physical quantities is impossible.
Analysis of this problem indicates that the difficulty, as far as magnetism is concerned, is
mainly due to incorrect treatment of the dimensions of permeability, symbol µ, a
quantity that enters into these and other magnetic relationships. The permeability of the
great majority of substances is unity, or a close approximation thereto. The numerical
results of magnetic measurements on these substances therefore give no indication of its
existence, and there has been a tendency to overlook it, except where some collateral
relation makes it clear that there are missing dimensions. But its field of application is
actually very wide, as our theoretical development indicates that permeability is the
magnetic equivalent of electrical resistance. It has the space-time dimensions of
resistance, t2/s3, multiplied by the factor t/s that relates magnetism to electricity, the result
being t3/s4.
One of the empirical results that has contributed to the dimensional confusion is the
experimental finding that magnetomotive force (MMF), or magnetomotence, is related to
the current (I) by the expression MMF = nI, where n is the number of turns in a coil.
Since n is dimensionless, this empirical relation indicates that the dimensions of
magnetomotive force are the same as those of electric current. The SI unit of MMF has
therefore been taken as the ampere. It was noted in Chapter 9 that the early investigators
of electrical phenomena attached the name ―electromotive force‖ to a quantity that they
recognized as having the characteristics of a force, an identification that we now find to
be correct, notwithstanding the denial of its validity by most present-day physicists. A
somewhat similar situation exists in magnetism. The early investigators in this area
identified a magnetic quantity as having the characteristics of a force, and gave it the
name ―magnetomotive force‖ . The prevailing view that this quantity is dimensionally
equivalent to electric current contradicts the conclusion of the pioneer investigators, but
here, again, our finding is that the original conception of the nature of this quantity is
correct, at least in a general sense. Magnetomotive force, we find, is the magnetic (two-
dimensional) analog of the one-dimensional quantity known as force. It has the
dimensions of force, t/s2, multiplied by the factor t/s that relates electricity to magnetism.
Dimensional consistency in magnetomotive force and related quantities can be attained
by introducing the permeability in those places where it is applicable. Recognition of the
broad field of applicability of this quantity has been slow in developing. As noted earlier,
in most substances the permeability has the same value as if no matter is present, the
reference level of unity, generally called the ―permeability of free space.‖ Because of the
relatively small number of substances in which the permeability must be taken into
account, the fact that the dimensions of this quantity enter into many magnetic relations
was not apparent in most of the early magnetic experiments. However, a few empirical
relations did indicate the existence of such a quantity. For example, one of the important
relations discovered in the early days of the investigation of magnetism is Ampère's Law,
which relates the intensity of the magnetic field to the current. The higher permeability of
ferromagnetic materials had to be recognized in the statement of this relation.
Permeability was originally defined as a dimensionless constant, the ratio between the
permeability of the ferromagnetic substance and that of ―free space.‖ But in order to
make the mathematical expression of Ampère's Law dimensionally consistent, some
additional dimensions had to be included. The texts that define permeability as a ratio
assign these dimensions to the numerical constant, an expedient which, as pointed out
earlier, is logically indefensible. The more recent trend is to assign the dimensions to the
permeability, where they belong. In the cgs system these dimensions are abhenry/cm. The
abhenry is a unit of inductance, t3/s3, and the dimensions of permeability on this basis are
t3/s3 x 1/s = t3/s4, which agrees with the previous determination. The SI units henry/meter
and newton/ampere2 (t/s2 x t2/s2 = t3/s4) are likewise dimensionally correct. The unit
farad/meter has been used, but this unit is dimensionless, as capacitance, of which the
farad is the unit, has the dimensions of space. Using this unit is equivalent to the earlier
practice of treating permeability as a dimensionless constant. McCaig is quite critical of
the unit henry/meter. He makes this comment:
Most books now quote the units of µ0 as henry per metre. Although this usage is now almost universal, it
seems to me to be a howler... The henry is a unit of self or mutual inductance and it seems quite
incongruous to me to associate a metre of free space with any number of henries. If one wishes to be silly,
one can invent numerous absurdities of this kind, e.g., torque is measured in Nm or joule!86

The truth is that these two examples of what McCaig calls dimensional ―absurdities‖ are
quite different. His objection to coupling inductance with length is a purely subjective
reaction, an opinion that they are incompatible quantities. Reduction of both quantities to
space-time terms shows that his opinion is wrong. As indicated above, the quotient
henry/meter has the dimensions t3/s4, with a definite physical meaning. On the other hand,
if the dimensions of torque are so assigned that they are equivalent to the dimensions of
energy, there is a physical contradiction, as a torque must operate through a distance to
do work; that is, to expend energy. This situation will be given further consideration later
in the present chapter.
Returning now to the question as to the validity of the empirical relation MMF = nI, it is
evident from the foregoing that the error in this equation is the failure to include the
permeability, which has unit value under the conditions of the experiments, and therefore
does not appear in the numerical results. When the permeability is inserted, the equation
becomes MMF = µnI, the space-time dimensions of which are t3/s4 x s/t = t2/s3. The
dimensions t2/s3, which are assigned to MMF on this basis, are the appropriate
dimensions for the magnetic analog of electric force, as they are the dimensions of force,
t/s2, multiplied by t/s, the dimensional relation between electricity and magnetism.
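The dimensional statement MMF = µnI can likewise be verified by exponent arithmetic (a sketch added for illustration; n is treated as a pure number):

    # Illustrative only: dimensions as (t-power, s-power).
    def mul(a, b): return (a[0] + b[0], a[1] + b[1])

    permeability = (3, -4)   # t3/s4
    current      = (-1, 1)   # s/t
    force        = (1, -2)   # t/s2
    t_over_s     = (1, -1)   # the factor relating magnetism to electricity

    mmf = mul(permeability, current)       # MMF = mu n I, with n dimensionless
    assert mmf == (2, -3)                  # t2/s3
    assert mmf == mul(force, t_over_s)     # the two-dimensional analog of force
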
In our previous consideration of a magnetic quantity currently measured in amperes, the
magnetic scalar potential, we found that the assigned dimensions are correct, but that the
quantity has no physical significance. In the case of the magnetomotive force, also
measured in amperes in current practice, the magnetic quantity called by this name
actually does exist in a physical sense, and it is a kind of force, but the dimensions
currently assigned to it are wrong.
As in the electric system, the magnetic field intensity is the potential gradient, and should
therefore have the dimensions t2/s3 x 1/s = t2/s4, the same dimensions that we found for the
flux density. The cgs unit, the oersted, is one gilbert per centimeter, and therefore has the
correct dimensions. However, the unit in the SI system is the ampere per meter, the
space-time dimensions of which are s/t x 1/s = 1/t. These dimensions have been derived
from the ampere unit of MMF, and the error in the dimensions of that quantity is carried
forward to the magnetic field intensity. Introducing the permeability corrects the
dimensional error.
Magnetic pole strength is a quantity defined as F/B, where F is the force that is exerted.
Again the permeability dimensions should be included. The correct definition is µF/B,
the space-time dimensions of which are t3/s4 x t/s2 x s4/t2 = t2/s2. Pole strength is thus
merely another name for magnetic charge, as we might expect.
The permeability issue also enters into the question as to the definition of magnetic
moment. The quantity currently called by that name, or designated as the electromagnetic
moment (symbol m), is defined by the experimentally established relation m = nIA,
where n and I have the same significance as in the related expression for the
magnetomotive force, and A is the area of the circuit formed by each turn of a coil. The
space-time dimensions are s/t x s2 = s3/t. The moment per unit volume, the magnetization,
M, is s3/t x 1/s3 = 1/t.
An alternate definition of the magnetic moment introduces the permeability. This
quantity, which is called the magnetic dipole moment to distinguish it from the moment
defined in the preceding paragraph, has the composition µnIA. The space-time
dimensions are t3/s4 x s/t x s2 = t2/s. (The distinction is not always effective, as some
authors – Duffin, for example–apply the dipole moment designation to the s3/t quantity.)
The dipole moment per unit volume, called the magnetic polarization, has the dimensions
t2/s4. This quantity is therefore dimensionally equivalent to the flux density and the
magnetic field intensity, and is expressed in the same units. The question as to whether
the permeability should be included in the ―moment‖ affects other magnetic relations,
particularly that between the flux density B and a quantity that has been given the symbol
H. This is the quantity with the dimensions 1/t that, in the SI system, is called the field
intensity, or field strength. Malcolm McCaig reports that ―the name field for the vector H
went out of fashion for a time,‖ and says that he was asked by publishers to use
―magnetizing force‖ instead. But ―the term magnetic field strength now seems to be in
fashion again.‖87
The relation between B and H has supplied the fuel for some of the most active
controversies in magnetic circles. McCaig discusses these controversial issues at length in
an appendix to his book Permanent Magnets in Theory and Practice. He points out that
there are two theoretical systems that handle this relationship somewhat differently.
―Both systems have international approval,‖ he says, ―but there are intolerant lobbies on
both sides seeking to have the other system banned.‖ The two are distinguished by their
respective definitions of the torque of a magnet. The Kennelly system uses the magnetic
dipole moment (t2/s), and expresses the torque as T = mH. The Sommerfeld system uses
the electromagnetic moment (s3/t) and expresses the torque as T = mB.
Torque is a product of force and distance, t/s2 x s = t/s. The space-time dimensions of the
product mH are t2/s x 1/t = t/s. The equation T = mH is thus dimensionally correct. The
space-time dimensions of the product mB are s3/t x t2/s4 = t/s. So the equation T = mB is
likewise dimensionally correct. The only difference between the two is that in the
Kennelly system the permeability is included in m, whereas in the Sommerfeld system it
is included in B. This situation emphasizes the importance of a knowledge of the space-
time dimensions of physical quantities, particularly in determining the nature of the
connection between one quantity and another. A mathematically correct statement of a
physical relation is not necessarily a true statement, because at least some of the terms of
that relation must have physical dimensions (otherwise it would be merely a
mathematical statement, not a physical statement), and if those dimensions are wrong, the
statement itself is physically wrong, regardless of its mathematical accuracy. The
dimensions constitute a description of the physical nature of the quantities to which they
apply, and give the mathematical statement of each relation a physical meaning.
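The point about the two torque conventions can be made concrete with the same kind of exponent check (illustrative only; the variable names are chosen here, and the Kennelly and Sommerfeld labels refer to the conventions described above):

    # Illustrative only: dimensions as (t-power, s-power).
    def mul(a, b): return (a[0] + b[0], a[1] + b[1])

    permeability   = (3, -4)                       # t3/s4
    em_moment      = (-1, 3)                       # s3/t (Sommerfeld moment)
    dipole_moment  = mul(permeability, em_moment)  # t2/s (Kennelly moment)
    H              = (-1, 0)                       # 1/t
    B              = mul(permeability, H)          # t2/s4
    torque         = (1, -1)                       # t/s, force x distance

    assert mul(dipole_moment, H) == torque   # Kennelly: T = mH
    assert mul(em_moment, B) == torque       # Sommerfeld: T = mB
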
As matters now stand, this is not recognized by everyone. McCaig, for example,
indicates, in his discussion, that he holds an alternate view, in which the dimensions are
seen as merely a reflection of the method of measurement of the quantities. He cites the
case of force, which, he says, could have been defined on the basis of the gravitational
equation, rather than by Newton‘s second law, in which event the dimensions would be
different.
The truth is that we do not have this option, because the dimensions are inherent in the
physical relations. In any instance where two different derivations lead to different
dimensions for a physical quantity, one of the derivations is necessarily wrong. The case
cited by McCaig is a good example. The conventional dimensional interpretation of the
gravitational equation is obviously incompatible with the accepted definition of force
based on Newton‘s second law of motion. Force cannot be proportional to the second
power of the mass, as required by the prevailing interpretation of the gravitational
equation, and also proportional to the first power of the mass, as required by the second
law. And it is evident that an interpretation of the force equation that conflicts with the
definition of force is wrong. Furthermore, this equation, as interpreted, is an orphan. The
physicists have not been able to reconcile it with physical theory in general, and have
simply swept the problem under the rug by assigning dimensions to the gravitational
constant.
McCaig‘s comments about the dimensions of torque emphasize the need to bear in mind
that a numerically consistent relation does not necessarily represent physical reality, even
if it is also consistent dimensionally. Good mathematics is not necessarily good physics.
The definition of torque is Fs, the product of the force and the lever arm (a distance). The
work of rotation is defined as the product of the torque and the angle of displacement .
The work is thus Fs . But work is the product of a force and the distance through which
the force acts. This distance, in rotation, is not , which is purely numerical, nor is it the
lever arm, because the length of the lever arm is not the distance through which the force
acts. The effective distance is s . Thus the work is not Fs x (torque x angle), but F x s
(force x distance). Torque is actually a force, and the lever arm belongs with the angular
displacement, not with the force. Its numerical value has been moved to the force merely
for convenience in calculation. Such transpositions do not affect the mathematical
validity, but it should be understood that the modified relation does not represent physical
reality, and physical conclusions drawn from it are not necessarily valid.
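The distinction being drawn here is numerical as well as conceptual, and a one-line check makes it explicit (the values below are arbitrary examples, not taken from the text):

    import math

    # Torque x angle and force x arc length are the same number; the physical
    # distance traversed is the arc s = r * theta, not the lever arm r.
    F_applied = 2.0                  # force, arbitrary units
    r = 0.5                          # lever arm
    theta = math.pi / 3              # angular displacement, radians

    assert math.isclose((F_applied * r) * theta, F_applied * (r * theta))
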
Reduction of the dimensions of all physical quantities to space-time terms, an operation
that is feasible in a universe where all physical entities and phenomena are manifestations
of motion, not only clarifies the points discussed in the preceding pages, but also
accomplishes a similar clarification of the physical situation in general. One point of
importance in the present connection is that when the dimensions of the various
quantities are thus expressed, it becomes possible to take advantage of the general
dimensional relation between electricity and magnetism as an aid in determining the
status of magnetic quantities.
For instance, an examination in the light of this relation makes it evident that
identification of the vector H as the magnetic field intensity is incorrect. The role of this
quantity H in magnetic theory has been primarily that of a mathematical factor rather than
an expression of an actual physical relation. As one textbook comments, ―the physical
significance of the vector H is obscure.‖88 (This explains why there has been so much
question as to what to call it.) Thus there has been no physical constraint on the
assignment of dimensions to this quantity. The unit of H in the SI system is the ampere
per meter, the dimensions of which are s/t x 1/s = 1/t. It does not necessarily follow that
there is any phenomenon in which H can be identified physically. In current flow, the
quantity 1/s appears as power. Whether the quantity 1/t has a role of this kind in
magnetism is not yet clear. In any event, H is not the magnetic field intensity, and should
be given another name. Some authors tacitly recognize this point by calling it simply the
―H vector.‖
As noted earlier, the magnetic field intensity has the dimensions t2/s4, and is therefore
equivalent to mH (t3/s4 x 1/t) rather than to H. This relation is illustrated in the following
comparison between electric and magnetic quantities:

Field Intensity or Flux Density

Electric
E = V/s = t/s2 x 1/s = t/s3      Potential per unit space
E = R/t = t2/s3 x 1/t = t/s3     Resistance per unit time

Magnetic
B = A/s = t2/s3 x 1/s = t2/s4    Potential per unit space
µH = µ/t = t3/s4 x 1/t = t2/s4   Permeability per unit time
Ordinarily the electric field intensity is regarded as the potential per unit distance, the
manner in which it normally enters into the static relations. As the tabulation indicates, it
can alternatively be regarded as the resistance per unit time, the expression that is
appropriate for application to electric current phenomena. Similarly, the corresponding
magnetic quantity B or µH, can be regarded either as the magnetic potential per unit
space or the permeability per unit time.
A dimensional issue is also involved in the relation between magnetization, symbol M,
and magnetic polarization, symbol P. Both are defined as magnetic moment per unit
volume. The magnetic moment entering into magnetization is s3/t, and the dimensions of
this quantity are therefore
s3/t x 1/s3 = 1/t, making magnetization dimensionally equivalent to H. The magnetic
moment entering into the polarization is the one that is generally called the magnetic
dipole moment, dimensions t2/s. The polarization is then t2/s x 1/s3 = t2/s4. Magnetic
polarization is thus dimensionally equivalent to field intensity B. To summarize the
foregoing, we may say that there are two sets of these magnetic quantities that represent
essentially the same phenomena, and differ only in that one includes the permeability,
t3/s4, while the other does not. The following tabulation compares the two sets of
quantities:
Magnetic moment   s3/t     Dipole moment     t3/s4 x s3/t = t2/s
Magnetization     1/t      Polarization      t3/s4 x 1/t = t2/s4
Vector H          1/t      Field Intensity   t3/s4 x 1/t = t2/s4

A point to be noted about these quantities is that the magnetic polarization is not the
magnetic quantity corresponding to the electric polarization. The magnetic polarization is
a magnetostatic quantity, with dimensions t2/s4, and its electric analog would be an
electrostatic quantity with dimensions t/s3. This is what electric polarization would be on
the basis of the conventional theory of storage of electric charge in capacitors. But, as we
saw in Chapter 15, the capacitor stores electric current, not electric charge. It has
therefore been found necessary to introduce a term with the dimensions s2/t into the
mathematical relations, eliminating the electrostatic quantities; that is, reducing coulombs
(t/s) to coulombs (s). The need for this mathematical adjustment is a verification of our
conclusion that the electrical storage process does not involve any polarization in the
electrostatic sense.
The magnetic quantities identified in the discussion in this chapter–the principal magnetic
quantities, we may say–are listed in Table 31, with their space-time dimensions and their
units in the SI system.
The magnetic scalar potential has been omitted from the tabulation, for the reasons
previously given, together with a number of other quantities identified in the
contemporary magnetic literature in connection with individual magnetic phenomena that
we are not examining in this volume, or in connection with special mathematical
techniques utilized in dealing with magnetism. The dimensionally incorrect SI units for
MMF and magnetic field intensity are likewise omitted.

Table 31: Magnetic Quantities


Quantity           SI Units          Dimensions
dipole moment      weber x meter     t2/s
flux               weber             t2/s2
pole strength      weber             t2/s2
vector potential   weber/meter       t2/s3
MMF                                  t2/s3
flux density       tesla             t2/s4
field intensity                      t2/s4
polarization       tesla             t2/s4
inductance         henry             t3/s3
permeability       henry/meter       t3/s4
magnetization      ampere/meter      1/t
vector H           ampere/meter      1/t
magnetic moment    ampere x meter2   s3/t
reluctance         1/henry           s3/t3
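The internal consistency of these assignments is easy to check mechanically. The following is a minimal sketch, in Python, in which each quantity of Table 31 is written as a pair of exponents standing for its powers of t and s (the pair representation and the helper names are illustrative only), and several of the relations discussed in this chapter and the next are verified.

    def product(*dims):
        # Multiply space-time dimensions given as (t-exponent, s-exponent) pairs.
        return (sum(d[0] for d in dims), sum(d[1] for d in dims))

    def reciprocal(dim):
        return (-dim[0], -dim[1])

    # Dimensions taken from Table 31: (t-exponent, s-exponent), i.e. t^a/s^b.
    permeability    = (3, 4)    # t3/s4
    vector_H        = (-1, 0)   # 1/t
    flux_density    = (2, 4)    # t2/s4
    magnetic_moment = (-1, -3)  # s3/t
    dipole_moment   = (2, 1)    # t2/s
    inductance      = (3, 3)    # t3/s3
    reluctance      = (-3, -3)  # s3/t3
    current         = (-1, -1)  # s/t
    flux            = (2, 2)    # t2/s2

    # B = µH: permeability per unit time gives the flux density.
    assert product(permeability, vector_H) == flux_density
    # Dipole moment = permeability x magnetic moment.
    assert product(permeability, magnetic_moment) == dipole_moment
    # Reluctance (1/henry) is the reciprocal of inductance (henry).
    assert reciprocal(inductance) == reluctance
    # Flux = inductance x current, the LI relation discussed in the next chapter.
    assert product(inductance, current) == flux
    print("Table 31 dimensions are mutually consistent")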
There is a question as to how far we ought to go in attaching different names to quantities
that have the same dimensions and are therefore essentially equivalent. It would appear
that the primary criterion should be usefulness. It is undoubtedly useful to distinguish
clearly between electric quantity (space) and extension space, but it is not so clear that
this is true of the distinction between the various quantities with the dimensions t2/s4, for
example. The magnetic field intensity can be identified with these dimensions, by
analogy with the electric field intensity. Perhaps there is some justification for
distinguishing it from magnetic polarization, which has the same dimensions. Whether
this is also true of the other t2/s4 quantities such as flux density and magnetic induction is
somewhat questionable.
The mathematical treatment of magnetism has improved very substantially in recent
years, and the number of dimensional inconsistencies of the kind discussed in the
preceding pages is now relatively small compared to the situation that existed a few
decades earlier. But the present-day theoretical treatment of magnetism tends to deal with
mathematical abstractions, and to lose contact with physical reality. The conceptual
understanding of magnetic phenomena therefore lags far behind the mathematical
treatment. This is graphically illustrated in Table 32. The upper section of this tabulation
shows the ―corresponding quantities in electric and magnetic circuits,‖89 according to a
current textbook, with the space-time dimensions of each quantity, as determined in the
present investigation. The lower section shows the correct analogs (magnetic = electric x
t/s) in the three cases where a magnetic analog actually exists. Only two of the seven
identifications in the textbook are correct, and in both of these cases the dimensions that
are currently assigned to the magnetic quantity are wrong. As brought out in the
preceding discussion, the permeability, which belongs in both the MMF and the magnetic
field intensity, is omitted from these quantities in the SI system.

Table 32: Corresponding Quantities


Electric                           Magnetic
From reference 89, with space-time dimensions added
s/t     current                    t2/s2   magnetic flux
1/st    current density            t2/s4   magnetic induction
s2/t2   conductivity               t3/s4   permeability
t/s2    EMF                        t2/s3   MMF
t/s3    electric field intensity   t2/s4   magnetic field intensity
s3/t2   conductance                t3/s3   permeance
t2/s3   resistance                 s3/t3   reluctance
Correct analogs (magnetic = electric x t/s)
s/t     current                    no magnetic analog
1/st    current density            no magnetic analog
s2/t2   conductivity               no magnetic analog
t/s2    EMF                        t2/s3   MMF
t/s3    electric field intensity   t2/s4   magnetic field intensity
s3/t2   conductance                no magnetic analog
t2/s3   resistance                 t3/s4   permeability
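The rule for the correct analogs, magnetic = electric x t/s, can be verified with the same exponent bookkeeping. A minimal sketch, using the dimension pairs from the lower section of Table 32:

    def product(x, y):
        # Multiply space-time dimensions given as (t-exponent, s-exponent) pairs.
        return (x[0] + y[0], x[1] + y[1])

    t_over_s = (1, 1)

    # (electric dimensions, magnetic dimensions) for the three valid analogs of Table 32.
    analogs = {
        "EMF / MMF":                        ((1, 2), (2, 3)),
        "field intensity / field intensity": ((1, 3), (2, 4)),
        "resistance / permeability":         ((2, 3), (3, 4)),
    }

    for name, (electric, magnetic) in analogs.items():
        assert product(electric, t_over_s) == magnetic, name
    print("all three analogs satisfy magnetic = electric x t/s")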
When the dimensions of the various magnetic quantities are assigned in accordance with
the specifications in the preceding pages, these quantities are all consistent with each
other, and with the previously defined quantities of the mechanical and electric systems.
This eliminates the need for employing illegitimate artifices such as attaching dimensions
to pure numbers. The numerical magnitudes of the existing valid magnetic relations have
already been adjusted in previous practice to fit the observations, and are not altered by
the dimensional clarification.
This dimensional clarification in the magnetic area completes the consolidation of the
various systems of measurement into one comprehensive and consistent system in which
all physical quantities and units can be expressed in terms that are reducible to space and
time only. There are, of course, many specialized units that have not been considered in
the pages of this and the preceding volume–such as the light year, a unit of distance; the
electron-volt, a unit of energy; the atmosphere, a unit of pressure; and so on–but the
quantities measured in these units are the basic quantities, or combinations thereof, and
their units are specifically related to the units of space and time, both conceptually and
mathematically.

CHAPTER 21

Electromagnetism
The terms ―electric‖ and ―magnetic‖ were introduced in Volume I with the understanding
that they were to be used as synonyms for ―scalar one-dimensional‖ and ―scalar two-
dimensional‖ respectively, rather than being restricted to the relatively narrow significance
that they have in common usage. These words have been used in the same senses in this
volume, although the broad scope of their definitions is not as evident as in Volume I,
because we are now dealing mainly with phenomena that are commonly called ―electric‖ or
―magnetic.‖ We have identified a one-dimensional movement of uncharged electrons as an
electric current, a one-dimensional rotational vibration as an electric charge, and a two-
dimensional rotational vibration as a magnetic charge. More specifically, the magnetic
charge is a two-dimensional rotationally distributed scalar motion of a vibrational character.
Now we are ready to examine some motions that are not charges, but have some of the
primary characteristics of the magnetic charge; that is, they are two-dimensional
directionally distributed scalar motions.
Let us consider a short section of a conductor, through which we will pass an electric
current. The matter of which the conductor is composed is subject to gravitation, which is a
three-dimensional distributed inward scalar motion. As we have seen, the current is a
movement of space (electrons) through the matter of the conductor, equivalent to an outward
scalar motion of the matter through space. Thus the one-dimensional motion of the current
opposes the portion of the inward scalar motion of gravitation that is effective in the scalar
dimension of the spatial reference system.
For purposes of this example, let us assume that the two opposing motions in this section of
the conductor are equal in magnitude. The net motion in this scalar dimension is then zero.
What remains of the original three-dimensional gravitational motion is a rotationally
distributed scalar motion in the other two scalar dimensions. Since this remaining motion is
scalar and two-dimensional, it is magnetic, and is known as electromagnetism. In the usual
case, the gravitational motion in the dimension of the current is only partially neutralized by
the current flow, but this does not change the nature of the result; it merely reduces the
magnitude of the magnetic effect.
From the foregoing explanation it can be seen that electromagnetism is the residue of the
gravitational motion that remains after all or part of the motion in one of the three
gravitational dimensions has been neutralized by the oppositely directed motion of the
electric current. Thus it is a two-dimensional scalar motion perpendicular to the flow of
current. Since it is the gravitational motion in the two dimensions that are not subject to the
outward motion of the electric current, it has the inward scalar direction.
In all cases, the magnetic effect appears much greater than the gravitational effect that is
eliminated, when viewed in the context of our gravitationally bound reference system. This
does not mean that something has been created by the current. What has happened is that
certain motions have been transformed into other types of motion that are more concentrated
in the reference system, and energy has been brought in from the outside to meet the
requirements of the new situation. As pointed out in Chapter 14, this difference that we
observe between the magnitudes of motions with different numbers of effective dimensions
is an artificial product of our position in the gravitationally bound system, a position that
greatly exaggerates the size of the spatial unit. From the standpoint of the natural reference
system, the system to which the universe actually conforms, the basic units are independent
of dimensions; that is, 1³ = 1² = 1. But because of our asymmetric position in the universe,
the natural unit of speed, s/t, takes the large value 3 x 10¹⁰ cm/sec, and this becomes a
dimensional factor that enters into every relation between quantities of different dimensions.
For example, the c² term (the second power of 3 x 10¹⁰) in Einstein‘s equation for the
relation between mass and energy reflects the factor applicable to the two scalar dimensions
that separate mass (t3/s3) from energy (t/s). Similarly, the difference of one dimension
between the two-dimensional magnetic effect and the three-dimensional gravitational effect
makes the magnetic effect 3 x 10¹⁰ times as great (when expressed in cgs units). The
magnetic effect is less than the one-dimensional electric effect by the same factor. It follows
that the magnetic unit of charge, or emu, defined by the magnetic equivalent of the Coulomb
law, is 3 x 10¹⁰ times as large as the electric unit, or esu. The electric unit 4.80287 x 10⁻¹⁰ esu
is equivalent to 1.60206 x 10⁻²⁰ emu.
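This equivalence is easily checked arithmetically. A minimal sketch, using the measured value of the natural unit of speed rather than the rounded 3 x 10¹⁰:

    c_cgs = 2.99793e10              # natural unit of speed (speed of light), cm/sec
    electric_unit_esu = 4.80287e-10
    magnetic_unit_emu = electric_unit_esu / c_cgs   # the emu unit is larger by this factor
    print(magnetic_unit_emu)        # approximately 1.60206e-20, as quoted above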
The relative scalar directions of the forces between current elements are opposite to the
directions of the forces produced by electric and magnetic charges, as shown in Fig. 23,
which should be compared with Fig. 22 of Chapter 19. The inward electromagnetic motions
are directed toward the zero points from which the motions of the charges are directed
outward. Two conductors carrying current in the same direction, AB or A‘B, analogous to
like charges, move toward each other, as shown in line (a) of the diagram, instead of
repelling each other, as like charges do. Two conductors carrying current in the direction BA
or B‘A, as shown in line (c), also move toward each other. But conductors carrying current
in opposite directions, AB‘ and BA‘, analogous to unlike charges, move away from each
other, as indicated in line (b).

Figure 23
B’ A B A’
(a) | |=> | <=|
(b) | <=| |=> |
(c) |=> | <=|
These differences in origin and in scalar direction between the two kinds of magnetism also
manifest themselves in some other ways. In our examination of these matters we will find it
convenient to consider the force relations from a different point of view. Thus far, our
discussion of the rotationally distributed scalar motions–gravitational, electric, and
magnetic–has been carried on in terms of the forces exerted by discrete objects, essentially
point sources of the effects under consideration. Now, in electromagnetism, we are dealing
with continuous sources. These are actually continuous arrays of discrete sources, as all
physical phenomena exist only in the form of discrete units. It would therefore be possible to
treat electromagnetic effects in the same manner as the effects due to the more readily
identifiable point sources, but this approach to the continuous sources is complicated and
difficult. A very substantial simplification is accomplished by introduction of the concept of
the field discussed in Chapter 12.
This field approach is also applicable to the simpler gravitational and electrical phenomena.
Indeed, it is the currently fashionable way of handling all of these (apparent) interactions,
even though the alternate approach is, in some ways, better adapted to the discrete sources.
In examining the basic nature of fields we may therefore look at the gravitational situation,
which is, in most respects, the simplest of these phenomena. As we saw in Chapter 12, a
mass A has a motion AB toward any other mass B in its vicinity. This motion is inherently
indistinguishable from a motion BA of atom B. To the extent that actual motion of mass A is
prevented by its inertia or otherwise, the motion of object A therefore appears in the
reference system as a motion of object B, constituting an addition to the actual motion of
that object.
The magnitude of this gravitational motion of mass A that is attributed to mass B is
determined by the product of the masses A and B, and by the separation between the two, as
is the motion of mass B, if the scalar motion AB is regarded as a motion of both objects. It
then follows that each spatial location in the vicinity of object A can be assigned a
magnitude and a direction, indicating the manner in which a mass of unit size would move
under the influence of the gravitational force of object A if it occupied that location. The
assemblage of these locations and the corresponding force vectors constitute the
gravitational field of object A. Similarly, the distribution of the motion of an electric or
magnetic charge defines an electric or magnetic field in the space surrounding this charge.
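A minimal sketch of this assignment of a magnitude and direction to each location, assuming the conventional inverse-square dependence on separation and arbitrary illustrative values for the constant and the source mass:

    import math

    G = 6.674e-8        # gravitational constant in cgs units (cm3 g-1 s-2)
    mass_A = 1.0e6      # source mass in grams (illustrative value)

    def field_vector(x, y, z):
        # Magnitude and direction assigned to the location (x, y, z), in cm: the motion
        # a unit mass would acquire there, directed toward object A at the origin.
        r = math.sqrt(x*x + y*y + z*z)
        magnitude = G * mass_A / r**2
        return (-magnitude * x / r, -magnitude * y / r, -magnitude * z / r)

    # The assemblage of such locations and vectors is the gravitational field of object A.
    for point in [(10.0, 0.0, 0.0), (0.0, 20.0, 0.0), (10.0, 10.0, 10.0)]:
        print(point, field_vector(*point))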
The mathematical expression of this explanation of the field of a mass or charge is identical
with that which appears in currently accepted physical theory, but its conceptual basis is
entirely different. The conventional view is that the field is ―something physically real in the
space‖ 32 around the originating object, and that the force is physically transmitted from one
object to the other by this ―something.‖ However, as P. W. Bridgman concluded, after
carrying out a critical analysis of this situation, there is no evidence at all to justify the
assumption that this ―something‖ actually exists.29 Our finding is that the field is not
―something physical.‖ It is merely a mathematical consequence of the inability of the
conventional reference system to represent scalar motion in its true character. But this
recognition of its true status as a mathematical expedient does not negate its usefulness. The
field approach remains the simplest and most convenient way of dealing mathematically
with magnetism.
The field of a magnetic charge is defined in terms of the force experienced by a test magnet.
The field of a magnetic pole–one end of a long bar magnet, for example–is therefore radial.
As can be seen from the description of the origin of electromagnetism in the foregoing
paragraphs, the field of a wire carrying an electric current would also be radial (in two
dimensions) if it were defined in terms of the force experienced by an element of the current
in a parallel conductor. But it is customary to define the electromagnetic field on the
magnetostatic basis; that is, by the force experienced by a magnet, or an electromagnet in the
form of a coil, a solenoid, which produces a radial field similar to that of a bar magnet by
means of its geometrical arrangement. When the field of a current-carrying wire is thus
defined, it circles the wire rather than extending out radially. The force exerted on the test
magnet is then perpendicular to the field, as well as to the direction of the current flow.
Here is a direct challenge to physical theory, an apparent violation of physical principles that
apply elsewhere. It is a challenge that has never before been met. The physicists have not
even been able to devise a plausible hypothesis. So they simply note the anomaly, the
―strange‖ characteristics of the magnetic effect. ―The magnetic force has a strange
directional character,‖ says Richard Feynman, ―at every instant the force is always at right
angles to the velocity vector.‖90 It is likely, however, that this perpendicular relation
between the direction of current movement and the direction of the force would not seem so
strange if magnets interacted only with magnets and currents with currents. In that event, the
magnetic effect of current on current would still be ―at right angles to the velocity vector,‖
but it would be in the direction of the field, rather than perpendicular to it, as the field would
have to be defined in terms of the action of current on current. When there is interaction
between current and magnet, the resultant force is perpendicular to the magnetic field; that
is, to the field intensity vector. A test magnet in an electromagnetic field does not move in
the direction of the field, as would be expected, but moves in a perpendicular direction.
Notice how strange the direction of this force is. It is not in line with the field, nor is it in the direction of the
current. Instead, the force is perpendicular to both the current and the field lines. 91

The use of the word ―strange‖ in this statement is a tacit admission that the reason for the
perpendicular direction is not understood in the context of present-day physical theory.
Here, again, the development of the theory of the universe of motion provides the missing
information. The key to an understanding of the situation is a recognition of the difference
between the scalar direction of the motion (force) of the magnetic charge, which is outward,
and that of the electromagnetic motion, which is inward.
The motion of the electric current obviously has to take place in one of the scalar
dimensions other than that represented in the spatial reference system, as the direction of
current flow does not normally coincide with the direction of motion of the conductor. The
magnetic residue therefore consists of motion in the other unobservable dimension and in
the dimension of the reference system. When the magnetic effect of one current interacts
with that of another, the dimension of the motion of current A that is parallel with the
dimension of the reference system coincides with the corresponding dimension of current B.
As indicated in Chapter 13, the result is a single force, a mutual force of attraction or
repulsion that decreases or increases the distance between A and B. But if the interaction is
between current A and magnet C, the dimensions parallel to the reference system cannot
coincide, as the motion (and the corresponding force) of the current A is in the inward scalar
direction, while that of the magnet C is outward.
It may be asked why these inward and outward motions cannot be combined on a positive
and negative basis with a net resultant equal to the difference. The reason is that the inward
motion of the conductor A toward the magnet C is also a motion of C toward A, since scalar
motion is a mutual process. The outward motion of the magnet is likewise both a motion of
C away from A and a motion of A away from C. It follows that these are two separate
motions of both objects, one inward and one outward, not a combination of an inward
motion of one object and an outward motion of the other. It then follows that the two
motions must take place in different scalar dimensions. The force exerted on a current
element in a magnetic field, the force aspect of the motion in the dimension of the reference
system, is therefore perpendicular to the field.
These relations are illustrated in Fig. 24. At the left of the diagram is one end of a bar
magnet. This magnet generates a magnetostatic (MS) field, which exists in two scalar
dimensions. One dimension of any scalar motion can be so oriented that it is coincident with
the dimension of the reference system. We will call this observable dimension of the MS
motion A, using the capital letter to show its observable status, and representing the MS
field by a heavy line. The unobservable dimension of motion is designated b, and
represented by a light line.
We now introduce an electric current in a third scalar dimension. As indicated above, this is
also oriented coincident with the dimension of the reference system, and is designated as C.
The current generates an electromagnetic (EM) field in the dimensions a and b perpendicular
to C. Since the MS motion has the outward scalar direction, while the EM motion is directed
inward, the scalar dimensions of these motions coincident with the dimension of the
reference system cannot be the same. The dimensions of the EM motion are therefore B and
a; that is, the observable result of the interaction between the two types of magnetic motion
is in the dimension B, perpendicular to both the MS field A and the current C.
The comment about the ―strange‖ direction of the magnetic force quoted above is followed
by this statement: ―Another strange feature of this force‖ is that ―if the field lines and the
wire are parallel, then the force on the wire is zero.‖ In this case, too, the answer to the
problem is provided by a consideration of the distribution of the motions among the three
scalar dimensions. When the dimension of the current is C, perpendicular to the dimension
A of the motion represented by the MS field, the EM field is in scalar dimensions a and B.
We saw earlier that the observable dimensions of the inward EM motion and the outward
MS motion cannot be coincident. Thus the EM motion in dimension a is unobservable. It
follows that the motion in scalar dimension B, the dimension at right angles to both the
current and the field has to be the one in which the observable magnetic effect takes place,
as shown in Fig. 24. However, if the direction of the current is parallel to that of the magnetic
field, the scalar dimensions of these motions (both outward) are coincident, and only one of
the three scalar dimensions is required for both motions. This leaves two unobservable
scalar dimensions available for the EM motion, and eliminates the observable interaction
between the EM and MS fields.

Figure 24
As the foregoing discussion brings out, there are major differences between magnetostatics
and electromagnetism. Present-day investigators know that these differences exist, but they
are unwilling to recognize their true significance because current scientific opinion is
committed to a belief in the validity of Ampère‘s nineteenth century hypothesis that all
magnetism is electromagnetism. According to this hypothesis, there are small circulating
electric currents–―Ampèrian currents‖ –in magnetic materials whose existence is assumed in
order to account for the magnetic effects.
This is an example of a situation, very common in present-day science, in which the
scientific community continues to accept, and build upon, hypotheses which have been
revised so drastically to accommodate new information that the essence of the original
hypothesis has been totally negated. It should be realized that there is no empirical support
for Ampère‘s hypothesis. The existence of the Ampèrian currents is simply assumed. But
today no one seems to have a very clear idea as to just what is being assumed. Ampère‘s
hypothetical currents were miniature reproductions of the currents with which he was
familiar. However, when it was found that individual atoms and particles exhibit magnetic
effects, the original hypothesis had to be modified, and the Ampèrian currents are now
regarded as existing within these individual units. At one time it appeared that the assumed
orbital motion of the hypothetical electrons in the atoms would meet the requirements, but it
is now conceded that something more is necessary. The current tendency is to assume that
the electrons and other sub-atomic particles have some kind of a spin that produces the same
effects as translational motion. The following comment from a 1981 textbook shows how
vague the ―Ampèrian current‖ hypothesis has become.
At the present time we do not know what goes on inside these basic particles [electrons, etc.], but we expect
their magnetic effects will be found to be the result of charge motion (spinning of the particle, or motion of the
charges within it).92

Ampère‘s hypothesis was originally attractive because it explained one phenomenon
(magnetostatics) in terms of another (electromagnetism), thereby apparently accomplishing
an important simplification of magnetic theory. But it is abundantly clear by this time that
there are major differences between the two magnetic phenomena, and just as soon as that
fact became evident, the case in favor of Ampère‘s hypothesis crumbled. There is no longer
any justification for equating the two types of magnetism. The continued adherence to this
hypothesis and use of Ampèrian currents in magnetic theory is an illustration of the fact that
there is inertia in the realm of ideas, as well as in the physical world.
The lack of any theory–or even a model–that would explain how either a magnetostatic or
electromagnetic effect is produced has left magnetism in a confused state where
contradictions and inconsistencies are so plentiful that none of them is taken very seriously.
A somewhat similar situation was encountered in our examination of electrical phenomena,
particularly in the case of those issues affected by the lack of distinction between electric
charge and electric quantity, but a much larger number of errors and omissions have
converged to produce a rather chaotic condition in the conceptual aspects of magnetic
theory. It is, in a way, somewhat surprising that the investigators in this field have made so
much progress in the face of these obstacles.
As noted earlier, many of the physical quantities involved in electromagnetism are the same
as those that enter into magnetostatic phenomena. These are quantities applicable to two-
dimensional scalar relations, irrespective of the particular nature of the phenomena in which
they participate. The electromagnetic units applicable to these quantities are therefore the
same ones defined for magnetostatic phenomena in Chapter 20. Some of the relations
between these quantities are also those of two-dimensional motions in general, rather than
being peculiar to either magnetostatics or electromagnetism. More commonly, however, the
relations involved in electromagnetism are analogous to those encountered in current
electricity, as electromagnetism is a phenomenon of current flow rather than of magnetic
charges.
One example is the force between currents. There is no electromagnetic relation analogous
to the Coulomb equation. The theorists commonly use ―current elements‖ for purposes of
analysis, but such units obviously cannot be isolated. A simple interaction between two
units, analogous to the interaction between two charges, therefore does not exist. Instead, the
simplest electromagnetic interaction, the one that is used in defining the unit of current, the
ampere, is the interaction between the magnetic forces of parallel wires carrying currents.
Making use of the field concept, the advantage of which is quite evident in dealing with
currents, we first define the magnetic field of one current in terms of the flux density, B.
This quantity B has been found to be equal to µ0I/(2πs). The space-time dimensions of this
expression are t3/s4 x s/t x 1/s = t2/s4, the correct dimensions of the flux density. The force
exerted by this field on a length l of the parallel current-carrying wire is then BIl,
dimensions t2/s4 x s/t x s = t/s2.
The expressions representing the two steps of this evaluation of the force can be
consolidated, with the result that the force on wire B due to the current in wire A is
µ0IAIBl/(2πs). If the currents are equal this becomes µ0I²l/(2πs). There is some resemblance
between this and an expression of the Coulomb type, but it actually represents a different
kind of a relation. It is a magnetic (that is, two-dimensional) relation analogous to the
electric equation V = IR. In this electric relation, the force is equal to the resistance times the
current. In the magnetic relation the force on a unit length is equal to the permeability (the
magnetic equivalent of resistance) times the square of the current.
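The numbers work out simply in SI units: with one ampere in each wire and a separation of one meter, the force per meter of length is the familiar 2 x 10⁻⁷ newton figure on which the SI definition of the ampere was based. A minimal sketch of the two-step evaluation described above:

    import math

    mu_0 = 4 * math.pi * 1e-7   # permeability of free space, SI units
    I_A = I_B = 1.0             # currents in the two parallel wires, amperes
    s = 1.0                     # separation of the wires, meters
    l = 1.0                     # length of wire considered, meters

    B = mu_0 * I_A / (2 * math.pi * s)   # flux density produced by wire A at wire B
    F = B * I_B * l                      # force on the length l of wire B
    print(F)                             # 2e-7 newton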
The energy relations in electromagnetism have given the theorists considerable difficulty. A
central issue is the question as to what takes the place of the mass that has an essential role
in the analogous mechanical relations. The perplexity with which present-day scientists view
this situation is illustrated by a comment from a current physics textbook. The author points
out that the energy of the magnetic field varies as the second power of the current, and that
the similarity to the variation of kinetic energy with the second power of the velocity
suggests that the field energy may be the kinetic energy of the current. ―This ‗kinetic
energy‘ of a current‘s magnetic field,‖ he says, ―suggests that it has something like mass.‖93
The trouble with this suggestion is that the investigators have not been able to identify any
electric or magnetic property that is ―something like mass.‖ Indeed, the most striking
characteristic of the electric current is its immaterial character. The answer to the problem is
provided by our finding that the electric current is a movement of units of space through
matter, and that the effective mass of that matter has the same role in current flow as in the
motion of matter through space. In the current flow we are not dealing with ―something like
mass,‖ we are dealing with mass.
As brought out in Chapter 9, electrical resistance, R, is mass per unit time, t2/s3. The product
of resistance and time, Rt, that enters into the energy relations of current flow is therefore
mass under another name. Since current, I, is speed, the electric energy equation, W = RtI2,
is identical with the equation for kinetic energy, W = ¹/2 mv2. The magnetic analog of
resistance is permeability, with dimensions t3/s4. Because of the additional t/s term that
enters into this two-dimensional quantity, the permeability is the mass per unit space, a
conclusion that is supported by observation. As expressed by Norman Feather, the mass
―involves the product of the permeability of the medium and a configurational factor having
the dimensions of a length.‖ 94 In some applications, the function of this mass term,
dimensions t3/s3, is clear enough to have led to its recognition under the name of inductance.
The basic equations employed in dealing with inductance are identical with the equations
dealing with the motion of matter (mass) through space. We have already seen (Chapter 20)
that the inductive force equation, F = L dI/dt, is identical with the general force equation,
F = m dv/dt, or F = ma. Similarly, magnetic flux, which is dimensionally equivalent to momentum, is the
product of inductance and current, LI, just as momentum is the product of mass and
velocity, mv. It is not always possible to relate the more complex electromagnetic formulas
directly to corresponding mechanical phenomena in this manner, but they can all be reduced
to space-time terms and verified dimensionally. The theory of the universe of motion thus
provides the complete and consistent framework for electric and magnetic relationships that
has heretofore been lacking.
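These parallels can be confirmed with the same kind of space-time bookkeeping used in the preceding chapter. A minimal sketch (the exponent-pair representation is illustrative only) checking that W = RtI² reduces to energy, t/s, and that flux = LI reduces to the momentum dimensions t2/s2:

    def product(*dims):
        # Multiply space-time dimensions given as (t-exponent, s-exponent) pairs.
        return (sum(d[0] for d in dims), sum(d[1] for d in dims))

    resistance = (2, 3)    # t2/s3, mass per unit time
    time       = (1, 0)    # t
    current    = (-1, -1)  # s/t, a speed
    inductance = (3, 3)    # t3/s3, dimensionally a mass
    energy     = (1, 1)    # t/s
    momentum   = (2, 2)    # t2/s2

    # W = Rt x I^2 reduces to energy, just as the kinetic energy expression does.
    assert product(resistance, time, current, current) == energy
    # Flux = LI reduces to momentum, the analog of p = mv.
    assert product(inductance, current) == momentum
    print("the energy and momentum analogies check dimensionally")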
The finding that the one-dimensional motion of the electric current acting in opposition to
the three-dimensional gravitational motion leaves a two-dimensional residue naturally leads
to the conclusion that a two-dimensional magnetic motion similarly applied in opposition to
gravitation will leave a one-dimensional residue, an electric current, if a conductor is
appropriately located relative to the magnetic motion. This is the observed phenomenon
known as electromagnetic induction. While they share the same name, this induction
process has no relation to the induction of electric charges. The induction of charges results
from the equivalence of a scalar motion AB and a similar motion BA, which leads to the
establishment of an equilibrium between the two motions. As indicated above,
electromagnetic induction is a result of the partial neutralization of gravitational motion by
oppositely directed scalar motion in two dimensions.
This induction process is another of the aspects of electricity and magnetism that is
unexplained in conventional science. As one textbook puts it,
Faraday discovered that whenever the current in the primary circuit 1 is caused to change, there is a current
induced in circuit 2 while that change is occurring. This remarkable result is not in general derivable from any
of the previously discussed properties of electromagnetism. 95

Here, again, the advantage of having at our disposal a general physical theory, one that is
applicable to all subdivisions of physical activity, is demonstrated. Once the nature of
electromagnetism is understood, it is apparent from the theoretical relation between
electricity and magnetism that the existence of electromagnetic induction necessarily
follows.
Since there is no freely moving magnetic particle corresponding to the electron, there is no
magnetic current, but magnetic motion can be produced in a number of ways, each of which
is a method of inducing electric currents or voltage differences. For example, the magnetic
motion may originate mechanically. If a wire that forms part of an electrical circuit is moved
in a magnetic field in such a way that the magnetic flux through the wire changes
(equivalent to a magnetic motion), an electric current is induced in the circuit. A similar
effect is produced if the magnetic field is varied, as, for instance, if it is generated by means
of an alternating current.
The force aspect of the one-dimensional (electric) residual motion left by the magnetic
motion in the electromagnetic induction process can, of course, be represented as an electric
field, but because of the manner in which it is produced, this field is not at all like the fields
of electric charges. As Arthur Kip points out, there is an ―extreme contrast‖ between these
two kinds of electric fields. He explains,
An induced emf implies an electric field, since it produces a force on a static charge. But this electric field,
produced by a changing magnetic flux, has some properties which are quite different from those of an
electrostatic field produced by fixed charges...the special property of this new sort of electric field is that its
curl, or its line integral around a closed path, is not zero. In general, the electric field at any point in space can
be broken into two parts, the part we have called electrostatic, whose curl is zero, and for which electrostatic
potential differences can be defined, and a part which has a nonzero curl, for which a potential function is not
applicable in the usual way.96

While the substantial differences between the two kinds of electric fields are recognized in
current physical thought, as indicated by this quotation, the reason for the existence of these
differences has remained unidentified. Our finding is that the obstacle in the way of locating
the answer to this problem has been the assumption that both fields are due to electric
charges–static charges in the one case, moving charges in the other. Actually, the differences
between the two kinds of electric fields are easily accounted for when it is recognized that
the processes by which these fields have been produced are entirely different. Only one
involves electric charges.
The treatment of this situation by different authors varies widely. Some textbook authors
ignore the discrepancies between accepted theory and the observations. Others mention
certain points of conflict, but do not follow them up. However, one of those quoted earlier in
this volume, Professor W. J. Duffin, of the University of Hull, takes a more critical look at
some of these conflicts, and arrives at a number of conclusions which, so far as they go,
parallel the conclusions of this work quite closely, although, of course, he does not take the
final step of recognizing that these conflicts invalidate the foundations of the conventional
theory of the electric current.
Like Arthur Kip (reference 96), Duffin emphasizes that the electric field produced by
electromagnetic induction is quite different from the electrostatic field. But he goes a step
farther and recognizes that the agency responsible for the existence of the field, which he
identifies as the electromotive force (emf), must also differ from the electrostatic force. He
then raises the issue as to what contributes to this emf. ―Electrostatic fields cannot do so,‖13
he says. Thus the description that he gives of the electric current produced by
electromagnetic induction is completely non-electrostatic. An emf of non-electrostatic origin
causes a current I to flow through a resistance R. Electric charges play no part in this
process. ―No charge accumulates at any point,‖ and ―no potential difference can be
meaningfully said to exist between any two points.‖97
Duffin evidently accepts the prevailing view of the current as a movement of charged
electrons, but, as indicated in a previously quoted statement (reference 13), he realizes that
the non-electrostatic force (emf) must act on the ―carriers of the charges‖ rather than on the
charges. This makes the charges superfluous. Thus the essence of his findings from
observation is that the electric currents produced by electromagnetic induction are non-
electrostatic phenomena in which electric charges play no part. These are the currents of our
ordinary experience, those that flow through the wires of our vast electrical networks.
In the course of the discussion of electricity and magnetism in the preceding pages we have
identified a number of conflicts between the results of observation and the conventional
―moving charge‖ theory of the electric current, the theory presented in all of the textbooks,
including Duffin‘s. These conflicts are serious enough to show that the current cannot be a
flow of electric charges. Now we see that the ordinary electric currents with which the
theory of current electricity deals are definitely non-electrostatic; that is, electric charges
play no part in them. The case against the conventional theory of the current is thus
conclusive, even without the new information made available by the development reported
in this work.

CHAPTER 22

Magnetic Materials
The discussion of static magnetism in Chapter 19 was addressed to the type of two-
dimensional rotational vibration known as ferromagnetism. This is the magnetism known
to the general public, the magnetism of permanent magnets. As noted in that earlier
discussion, ferromagnetism is present in only a relatively small number of substances,
and since this was the only type of magnetism known to the early investigators,
magnetism was considered to be some special kind of a phenomenon of limited scope.
This general belief undoubtedly had a significant influence on the thinking that led to the
conclusion that magnetism is a by-product of electricity. More recently, however, it has
been found that there is another type of magnetism that is much weaker, but is common
to all kinds of matter.
For an understanding of the nature of this second type of static magnetism one needs to
recall that the basic rotation of all material atoms is two-dimensional. It follows from the
previously developed principles governing the combination of motions that a two-
dimensional vibration (charge) can be applied to this two-dimensional rotation. However,
unlike the ferromagnetic charge, which is independent of the motion of the main body of
the atom, this charge on the basic rotation of the atom is subject to the electric rotation of
the atom in the third scalar dimension. This does not alter the vibrational character of the
charge, but it distributes the magnetic motion (and force) over three dimensions, and thus
reduces its effective magnitude to the gravitational level. To distinguish this type of
charge from the ferromagnetic charge we will call it an internal magnetic charge.
As we have seen, the numerical factor relating the magnitudes of quantities differing by
one scalar dimension, in terms of cgs units, is 3 x 10¹⁰. The corresponding factor
applicable to the interaction between a ferromagnetic charge and an internal magnetic
charge is the square root of the product of 1 and 3 x 10¹⁰, which amounts to 1.73 x 10⁵.
The internal magnetic effects are thus weaker than those due to ferromagnetism by about
10⁵.
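As a quick arithmetic check of this factor:

    import math
    print(math.sqrt(1 * 3.0e10))   # approximately 1.73 x 10^5, as quoted above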
The scalar direction of the internal magnetic charge, like that of all other electric and
magnetic charges thus far considered, is outward. All magnetic (two-dimensional)
rotation of atoms is also positive (net displacement in time) in the material sector of the
universe. But the motion in the third scalar dimension, the electric dimension, is positive
in the Division I and II elements and negative in the Division III and IV elements. As
explained in Chapter 19, the all-positive magnetic rotations of the material sector have a
polarity of a different type that is related to the directional distribution of the magnetic
rotation. If an atom of an electropositive element is viewed from a given point in space–
from above, for example–it is observed to have a specific magnetic rotational direction,
clockwise or counterclockwise. The actual correlation with north and south has not yet
been established, but for present purposes we may call the end of the atom that
corresponds to the clockwise rotation its north pole. This is a general relation applying to
all electropositive atoms. Because of the reversals at the unit levels, the north pole of an
electronegative atom corresponds to counterclockwise rotation; that is, this north pole
occupies a position corresponding to that which is occupied by the south pole of an
electropositive atom.
When electropositive elements are subjected to the field of a magnet, the orientation of
the poles is the same in both the atoms and the magnet (which is similarly positive). The
atoms of these elements therefore tend to orient themselves with their magnetic axes
parallel to the magnetic field, and to move toward the stronger part of the field; that is,
they are attracted by permanent magnets. Such substances are called paramagnetic.
Electronegative elements, which have the reverse polarity, are oriented with the poles of
their atoms opposite to those of a magnet. This puts like poles together, causing
repulsion. These atoms therefore tend to orient themselves perpendicular to the magnetic
field, and to move toward the weaker part of the field. Substances of this kind are called
diamagnetic.
In present-day magnetic theory diamagnetism is regarded as a universal property of
matter, the origin of which is unexplained. ―All materials are diamagnetic,‖ 98 says one
textbook. On this basis, paramagnetism or ferromagnetism, where they exist, simply
overpower the basic diamagnetism. Our finding is that each substance is either
paramagnetic or diamagnetic, depending on the scalar direction of the rotation in the
electric dimension. Ferromagnetic substances are paramagnetic with an additional two-
dimensional rotational vibration of the kind previously described.
All elements of the electropositive divisions I and II, except beryllium and boron, are
paramagnetic. As in the case of other properties previously discussed, the positive
preference carries over into some of the adjoining elements of Division III. All other
elements of the electronegative divisions III and IV, except oxygen, are diamagnetic.
The abnormal behavior of some of the elements of Group 2A is a result of the small size
of this 8-member group, which permits the constituent elements, in some instances, to
function as members of the inverse division of the group. Boron, for example, is normally
the third member of the positive division of Group 2A, but it can alternatively act as the
fifth member of the negative division of this group. Boron and beryllium are the positive
elements nearest to the negative division in this group, and therefore the most subject to
whatever influences tend to cause the polarity reversal. Just why oxygen is the element of
the negative division in which the polarity reversal takes place is not yet known.
As brought out in Volume I, all chemical compounds are combinations of electropositive
and electronegative components. The presence of any significant amount of motion in
time (space displacement) in a molecular structure prevents establishment of the positive
magnetic orientation. All compounds, except those that are ferromagnetic, or heavily
weighted with paramagnetic elements, are therefore diamagnetic. This overwhelming
preference for diamagnetism in the compounds is probably what led to the currently
accepted hypothesis of a universal diamagnetism.
The intensity of the magnetic effect in a magnetic material is measured in terms of
magnetization, symbol M, which was defined in Chapter 20. The magnetization and the
intensity of the applied field are additive. Both therefore have the dimensions of magnetic
field intensity, t2/s4, but for historical reasons the field intensity is customarily identified
with the vector H, which has the dimensions 1/t. Since the magnetization must have the
same dimensions as the field intensity, it is also expressed in terms of a unit with the 1/t
dimensions. As we saw in Chapter 20, the actual physical quantities are µM and µH,
rather than M and H, but the permeability, µ, entering into these definitions is the
―permeability of free space,‖ µ0, which has unit magnitude. The dimensional error
therefore does not affect the numerical results of calculations.
From the foregoing, the net total magnetic field intensity, B, is the sum of µ0M and µ0H.
For some purposes it is convenient to express this quantity in terms of H only. This is
accomplished by introducing the magnetic susceptibility, χ, defined by the relation χ =
M/H. On this basis,
B = (1 + χ)µ0H.
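A minimal numerical sketch of this relation in SI units, with an assumed susceptibility of the small magnitude typical of weakly magnetic materials (the specific values are illustrative only):

    import math

    mu_0 = 4 * math.pi * 1e-7   # permeability of free space, SI units
    chi = 2.0e-5                # assumed susceptibility, chi = M/H (dimensionless)
    H = 1000.0                  # applied field, ampere/meter

    M = chi * H                 # magnetization
    B = mu_0 * (H + M)          # net flux density, the sum of mu0*M and mu0*H
    print(B, (1 + chi) * mu_0 * H)   # the two forms agree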
As indicated earlier, the internal magnetic effects are relatively weak. The susceptibilities
of both paramagnetic and diamagnetic materials are therefore low. Those of the
diamagnetic substances are also independent of temperature. Some studies of the factors
that determine the magnitude of the internal magnetic susceptibility were undertaken in
the early stages of the theoretical investigation whose results are here being reported, and
calculations of the diamagnetic susceptibilities of a number of simple organic compounds
were included in the first edition of this work. These results have not yet been reviewed
in the light of the more complete understanding of the nature of magnetic phenomena that
has been gained in the past several decades, but there are no obvious inconsistencies, and
some consideration of these findings will be appropriate at this time.
As would be expected, since the internal magnetic charge is a modification of the
magnetic component of the rotational motion of an atom, the magnetic susceptibility is
the reciprocal of the effective magnetic rotational displacement. There are, of course, two
possible values of this displacement for most elements, but the applicable value is often
indicated by the environment; that is, association with elements of low displacement
generally means that the lower value will prevail, and vice versa. Carbon, for instance,
takes its secondary displacement, one, in association with hydrogen, but changes to the
primary displacement, two, in association with elements of the higher groups.
Another source of variability is introduced by the fact that the susceptibility, like most
other physical properties, has an initial level, and this level is also influenced by
environmental factors. At the present stage of the investigation we are not able to
evaluate these factors from purely theoretical premises, but they vary in a fairly regular
way in the various families of compounds. We can therefore establish what we may call
semi-theoretical values of the diamagnetic susceptibility of many relatively simple
organic compounds with the aid of series relationships.
The experimental values of the susceptibility of these compounds vary over a substantial
range. It was found, however, in the original investigation, that, except for certain
differences in the initial levels, the diamagnetic susceptibility has the same value as a
constant, which we are calling the refraction constant, that determines the index of
refraction. The properties of radiation will not be covered in this volume, but the
measurements of the refractive index are much more accurate than those of the magnetic
susceptibility. It will therefore be desirable to use the refraction constant as a base in the
calculation of the susceptibilities, and some explanation of the manner in which that
constant was derived will be required.
Like the internal susceptibility, the refraction constant is the reciprocal of the effective
magnetic rotational displacement, the total displacement minus the initial level. As in the
case of the susceptibility, the determination of this constant is complicated by a
variability in the initial levels, especially those of the most common elements in the
organic compounds, carbon and hydrogen. For convenience, both in calculation and in
emphasizing the series relationships, a value of the refraction constant is first calculated
on the basis of what we may regard as ―normal‖ values. The deviation of the constant
from the normal value is then determined for each compound.
Table 33 shows the derivation of the refraction factors in three representative organic
families of compounds. In the acids, for example, the normal rotational displacement of
the oxygen atoms and the carbon atom in the CO group is 2, while that of the hydrogen
atoms and the remaining carbon atoms is 1. The normal initial level is 2/9 in all cases.
The normal refraction factors of the individual rotational mass units are then 0.778 for the
displacement 1 atoms, and half this value, or 0.389 for those of displacement 2. All of the
acids from acetic (C2) to enanthic (C7) inclusive have normal initial levels (no deviations),
and the differences in the individual refraction factors are due entirely to a higher
proportion of the 0.778 units as the size of the molecule increases. The normal initial
level in the corresponding hydrocarbons, however, is only 1/9, and when the molecular
chain becomes long enough to free some of the hydrocarbon groups at the positive end of
the molecule from the influence of the acid radical at the negative end, these groups
revert to their normal initial levels as hydrocarbons, beginning with the CH3 end group
and moving inward. In caprylic acid (C8), the three hydrogen atoms in the end group have
made the change, those in the adjoining CH2 group do likewise in pelargonic acid (C9),
and as the length of the molecule increases still further the hydrogen in additional CH2
groups follows suit.

Table 33: Index of Refraction (n-1)/d


Dev.    kr    0.697 kr    Observed
ACIDS
O– .389 CO–.389 C–.778 H–.778
acetic acid 0 .511 .356 .354 .356
propionic 0 .564 .393 .391 .393
butyric 0 .600 .418 .415 .417
valeric 0 .625 .436 .434
caproic 0 .644 .449 .448
enanthic 0 .659 .459 .458
caprylic 3 .675 .470 .472
pelargonic 5 .687 .479 .478
capric 7 .697 .486 .485
hendecanoic 9 .705 .491 .491
lauric 11 .713 .496 .500
myristic 15 .724 .505 .502
palmitic 19 .733 .511 .511
stearic 23 .741 .516 .514
PARAFFINS
C–.778 H–.889
propane 5 .834 .581 .582
butane 3 .820 .572
pentane 3 .818 .570 .570
hexane 3 .816 .568 .568 .569
heptane 3 .814 .567 .567 .568
octane 3 .813 .567 .5655
nonane 3 .812 .566 .565
decane 3 .812 .566 .5645
hendecane 3 .811 .565 .566
dodecane 0 .807 .563 .563
tridecane 0 .807 .562 .575
tetradecane 0 .807 .562
pentadecane 0 .807 .562 .5605
hexadecane 0 .807 .562 .561
heptadecane 0 .807 .562 .562
octadecane 0 .807 .562 .562
2-Me propane 5 .827 .576 .577
2-Me butane 5 .823 .573 .573
2-Me pentane 3 .816 .568 .566
2-Me hexane 3 .814 .567 .567
2-Me heptane 3 .813 .567 .5655
ESTERS
O–.389 CO–.389 C–.778 H–.778
methyl formate 0 .511 .356 .353
ethyl 0 .564 .393 .390 .392
propyl 0 .600 .418 .417 .419
butyl 0 .625 .436 .437
amyl 0 .644 .449 .447 .452
hexyl 3 .664 .462 .463
octyl 5 .687 .479 .479
isopropyl 0 .600 .418 .419
isobutyl 0 .625 .436 .437 .438
isoamyl 0 .644 .449 .449
methyl acetate -3 .556 .387 .385 .389
ethyl -3 .593 .413 .413 .417
propyl 0 .625 .436 .433 .434
butyl 0 .644 .449 .447 .448
amyl 0 .659 .459 .456 .461
hexyl 3 .675 .470 .470
heptyl 3 .685 .477 .478
isopropyl 0 .625 .436 .433
isobutyl 0 .644 .449 .447 .448
isoamyl 0 .659 .459 .458 .459
methyl propionate -3 .593 .413 .412
ethyl -3 .619 .431 .430 .432
propyl 0 .644 .449 .447
butyl 0 .659 .459 .458
methyl butyrate -3 .619 .431 .431
ethyl 0 .644 .449 .447
propyl 0 .659 .459 .458
butyl 0 .671 .467 .463 .467
amyl 3 .685 .477 .477
The deviations from the normal values (expressed in numbers of 1/9 units per molecule)
are shown in the first column of Table 33. The second column shows the refraction
constants, kr, calculated by applying the deviations in column 1 to the normal values. In
columns 3 and 4 the product 0.697 kr is compared with the quantity (n-1)/d, where n is
the refractive index at the sodium D wavelength and d is the density. The refraction
constant is related to the natural unit wavelength rather than to the wavelength at which
the measurements were made, but the difference is incorporated in the factor 0.697 that is
applied before the comparison with the values derived from observation. An explanation
of the derivation of this factor and the reason for making the correlation in this particular
manner would require more discussion of radiation than is appropriate in this volume, but
the status of the calculated refraction constants as specific functions of the composition of
the compounds is evident.
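The comparison can be reproduced directly from the tabulated values. A minimal sketch, using a few of the kr entries and observed (n-1)/d values listed in Table 33:

    # (kr, observed (n-1)/d) for a few compounds listed in Table 33
    table_33 = {
        "acetic acid":   (0.511, 0.356),
        "butyric acid":  (0.600, 0.417),
        "hexane":        (0.816, 0.569),
        "ethyl acetate": (0.593, 0.417),
    }

    for name, (kr, observed) in table_33.items():
        calculated = 0.697 * kr
        print(f"{name:15s} calculated {calculated:.3f}   observed {observed:.3f}")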
In the paraffins the initial levels increase with increasing length of the molecule rather
than decreasing as in the acids. As brought out in Volume I, the hydrocarbon molecules
are not the symmetrical structures that their formulas would seem to represent.
For example, the formula for propane, as usually expressed, is CH3 CH2 CH3, which
indicates that the two end groups of the molecule are alike. But the analysis of this
structure revealed that it is actually CH3.CH2.CH2.H, with a positive CH3 group at one end
and a negative hydrogen atom at the other. This negative hydrogen atom has a zero initial
level, and it exerts enough influence to eliminate the initial level in the hydrogen atoms of
the two CH2 groups, giving the molecule a total of 5 units of deviation from the normal
initial level. When another CH2 group is added to form butane, the relative effect of the
negative hydrogen atom is reduced, and the zero initial level is confined to the CH2 H
combination, with 3 hydrogen atoms. The deviation continues on this basis up to
hendecane (C11), beyond which it is eliminated entirely, and the molecule as a whole
takes the normal 0.889 refraction constant.
Also shown in Table 33 is a representative sample of the monobasic esters, which, as
would be expected of acid derivatives, follow the same pattern as the acids. The only new
feature is the appearance of a -3 deviation in some of the lower compounds. This appears
to be due to a reversal of the influences that are responsible for the additional positive
deviations in the lower paraffins, an interpretation that is supported by the fact that both
end groups of the esters are positive.
The objective of Table 33 is merely to show how the refraction constants that are used in
the susceptibility calculations are derived from the molecular composition and structure,
and the number of compounds listed has been limited to those required for this purpose.
The refraction constants used in application to the greater number and variety of
compounds included in Table 34, which shows the kind of results that are obtained from
the susceptibility calculations, are determined in the same manner.
As noted earlier, the diamagnetic susceptibility of an organic compound is equal to its
refraction constant with an adjustment for a difference in the initial levels. The magnetic
initial level is generally the same as that in refraction except in certain groups in which
the level is modified by some factor not yet specifically identified, but apparently
geometric. In the compounds listed in Table 34, the CH3, CH2OH, and OH end groups
have initial levels 1/9 unit higher, per unit of rotational mass, than the refraction levels.
Interior CH2 groups are subject to a similar modification, half as large (1/18 unit) at
certain points, as the molecular chains lengthen. The sum of the individual differences in
initial level, ΔI, is m'/9, where m' is the number of rotational mass units in the modified
end groups of the molecule, plus half of the number of units in the modified interior
groups, with appropriate adjustments in special cases.
The average difference in initial level for a molecule of rotational mass m is then m'/9m.
In Table 34 this value, shown as ΔI/m, is applied to the refraction constants of
representative groups of simple organic compounds to arrive at the internal magnetic
susceptibilities. The corresponding values from observation are listed in the last three
columns of the table. Values marked with asterisks are taken from a recent
compilation.99 Where no measurement was available from this source, a representative
value from the earlier reports is shown in the same column. The last two columns show
the range of results reported from the earlier measurements.
Table 34: Diamagnetic Susceptibilities
kr    ΔI    ΔI/m    Calc.    Observed
PARAFFINS
propane .834 2.00 .077 .911 .919* .898
pentane .818 2.00 .048 .866 .874* .874
hexane .816 2.00 .040 .856 .865* .858 .888
heptane .814 2.00 .034 .848 .851* .850
octane .813 2.00 .030 .843 .846* .845 .872
nonane .812 2.00 .027 .839 .843* .843
decane .812 2.00 .024 .836 .842* .839
2-Me propane .827 2.00 .059 .886 .890* .888
2-Me butane .823 3.00 .071 .894 .893* .892
2-Me pentane .816 3.00 .060 .875 .873* .873
2-Me hexane .814 3.00 .052 .866 .861* .860 .862
2-Me heptane .813 3.00 .045 .857 .857
2,2-di Me propane .823 2.00 .048 .871 .875* .874
2,2-di Me butane .816 3.00 .060 .876 .885* .883 .885
2,2-di Me pentane .814 3.00 .052 .866 .868* .866 .869
2,3-di Me butane .809 4.00 .080 .889 .885* .883 .885
2,3-di Me pentane .809 4.00 .069 .878 .873* .873 .875
2,3-di Me hexane .808 4.00 .061 .869 .865* .865
2,2,3-tri Me butane .809 4.00 .069 .878 .882* .878 .884
2,2,3-tri Me pentane .808 4.50 .068 .876 .874* .872 .874
ACIDS
acetic acid .511 0.50 .016 .527 .525* .520 .535
propionic .564 1.00 .025 .589 .586* .578 .587
butyric .600 1.00 .024 .624 .625* .627 .636
valeric .625 1.50 .025 .650 .655*
caproic .644 2.00 .031 .675 .676* .676
enanthic .659 2.00 .027 .686 .680* .680
ALCOHOLS
methyl alcohol .599 1.00 .056 .655 .660 .650 .674
ethyl .658 2.00 .077 .735 .728* .717 .744
propyl .686 2.00 .059 .745 .752* .740 .766
butyl .708 2.00 .048 .756 .763* .743 .758
amyl .722 2.00 .040 .762 .766* .766
hexyl .730 2.50 .043 .773 .774* .775 .805
octyl .744 2.50 .034 .778 .777* .788
dodecyl .761 3.00 .028 .792 .792
isopropyl .686 2.50 .074 .760 .762* .759
isobutyl .708 3.00 .071 .779 .779* .772 .798
isoamyl .722 3.00 .060 .782 .782* .782 .799
MONOBASIC ESTERS
methyl formate .511 0.50 .016 .527 .533* .518 .533
ethyl .564 1.00 .025 .589 .580* .580 .588
propyl .600 1.00 .021 .621 .625* .623
butyl .625 1.00 .018 .643 .644* .645
methyl acetate .556 1.00 .025 .581 .575* .570 .590
ethyl .593 1.00 .021 .614 .614* .607 .627
propyl .625 1.00 .018 .643 .645* .645 .651
butyl .644 1.50 .023 .667 .666* .663 .667
methyl propionate .593 1.50 .031 .624 .624* .614 .628
ethyl .619 1.50 .027 .646 .651* .644 .651
propyl .644 1.50 .023 .667 .671* .666
methyl butyrate .619 1.50 .027 .646 .650* .645 .650
ethyl .644 1.50 .023 .667 .669* .667 .669
propyl .659 2.00 .028 .687 .687* .687
isobutyl formate .625 1.50 .027 .652 .654* .654
isoamyl .644 2.00 .031 .675 .674* .675
isopropyl acetate .625 2.00 .035 .660 .656* .656
isobutyl .644 2.00 .031 .675 .676* .676
isoamyl .659 2.00 .027 .686 .687* .687 .690
DIBASIC ESTERS
ethyl oxalate .546 1.00 .013 .559 .560* .552 .554
propyl .585 1.00 .011 .596 .605* .600
methyl malonate .514 1.00 .014 .528 .528* .520
ethyl .564 1.00 .012 .576 .578* .573 .578
methyl succinate .537 1.50 .019 .556 .558* .555
ethyl .578 2.00 .021 .599 .604* .600
AMINES
butylamine .774 1.00 .024 .798 .805* .806
amyl .779 1.00 .020 .799 .796* .795
heptyl .786 1.00 .015 .801 .808* .808
diethyl .774 1.00 .024 .798 .777* .776 .835
dibutyl .788 1.00 .014 .802 .802* .802
CYCLANES
cyclopentane .784 2.23 .056 .840 .844* .843
cyclohexane .787 0.89 .019 .806 .810* .785 .810
Me cyclohexane .790 0.89 .016 .806 .804 .792
Et cyclohexane .788 1.44 .023 .811 .812
BENZENES
benzene .778 -3.11 -.074 .704 .702* .698 .732
toluene .782 -3.50 -.063 .719 .718* .712 .734
o-xylene .786 -3.50 -.055 .731 .733* .733
m-xylene .786 -4.28 -.067 .719 .721* .720 .743
p-xylene .786 -3.89 -.061 .725 .723* .722
ethylbenzene .782 -3.50 -.055 .727 .727* .738
In the normal paraffins the association between the CH2 group and the lone hydrogen
atom at the negative end of the molecule is close enough to enable the CH2.H
combination to act as the end group. This means that there are 18 rotational mass units in
the end groups of each chain. The value of ΔI for these compounds is therefore 18/9 = 2.
Branching adds more ends to the molecule, and consequently increases ΔI. The 2-methyl
paraffins add one CH3 end group, raising ΔI to 3, the 2,3-dimethyl compounds add one
more, bringing this quantity up to 4, and so on. A very close association, similar to that in
the CH2.H combination, modifies this general pattern. In 2-methyl propane, for instance,
the CHCH3 combination acts as an interior group, and the value of ΔI for this compound
is the same as that of the corresponding normal paraffin, butane. The C(CH3)2
combination likewise acts as an interior group in 2,2-dimethyl propane, and as a unit with
only one end group in the higher 2,2-dimethyl paraffins.
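As an illustration of the arithmetic behind the Calc. column, the following sketch evaluates kr + ΔI/m for two normal paraffins, with ΔI = 18/9 = 2 for the two end groups and the rotational mass taken as the sum of the constituent units (C = 6, H = 1, consistent with the 8-unit CH2 group and the 18-unit end-group count cited above). The kr values are those of Table 34.

# Sketch of the Calc. column of Table 34 for normal paraffins:
# susceptibility = k_r + delta_I / m, with delta_I = 2 and m the rotational mass.

def susceptibility(k_r: float, n_carbons: int, delta_I: float = 2.0) -> float:
    m = 6 * n_carbons + (2 * n_carbons + 2)   # rotational mass of CnH2n+2
    return k_r + delta_I / m

print(round(susceptibility(0.818, 5), 3))   # pentane: 0.866 (Table 34: 0.866)
print(round(susceptibility(0.816, 6), 3))   # hexane:  0.856 (Table 34: 0.856)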
Each of the interior CH2 groups with the higher initial level adds nine rotational mass
units rather than the 8 corresponding to the group formula. This seems to indicate that in
these instances a CH2.CH2 combination is acting geometrically as if it were CH3.CH. In
the ring compounds the CH2 and CH groups take the normal 8 and 7 unit values
respectively.
The behavior of the substituted chain compounds is similar to that of the paraffins, but
there is a greater range of variability because of the presence of components other than
carbon and hydrogen. The alcohols, a typical family of this kind, have a CH3 group at one
end of the molecule and a CH2OH group at the other. The value of ΔI for the longer
chains is therefore 26/9 = 2.89. In the lower alcohols, however, the CH2 portion of the
CH2OH group reverts to the status of an interior group, and ΔI drops to 2.00. The methyl
alcohol molecule goes a step farther and acts as if it has only one end. A similar pattern
can be seen in other organic families, such as the esters. Since we have found that the
effective units of some of these compounds in certain of the phenomena previously
examined are double formula molecules, it appears likely that the magnetic behavior of
methyl alcohol and other compounds with similar characteristics can be attributed to the
size of the effective molecule.
No similar studies of paramagnetic materials have yet been made. Unlike diamagnetism,
paramagnetism is temperature dependent. For an explanation of this dependence we need
to recall that magnetism is a motion. One of the significant advantages of recognizing its
status as a motion is that its effect on other motions can be evaluated in terms of a direct
addition or subtraction, rather than having to be approached circuitously by means of
some hypothetical mechanism. Diamagnetism, which is motion in time (negative) has no
connection with the thermal motion, which is motion in space (positive). But
paramagnetism is positive, and has an imputed direction opposite to that of the thermal
motion. Thus an increase in temperature reduces the paramagnetic effect.
The internal magnetism, which has been the principal subject of discussion thus far in the
present chapter, is of interest primarily because of the light that it sheds on the nature and
properties of magnetism in general. From a practical standpoint, "magnetism" is
synonymous with ferromagnetism. No systematic study of ferromagnetism in the context
of the theory of the universe of motion has yet been undertaken. There are, however, a
few points about the place of this phenomenon in the general physical picture that should
be noted.
Ferromagnetism exists only below a temperature, the Curie point, which is specific for
each substance. Inasmuch as this type of magnetism is restricted to positive elements and
some of their compounds, ferromagnetic materials are also paramagnetic, and exhibit
their paramagnetic properties above the Curie temperature. In this range, the
susceptibility is linearly related to the temperature, but the relation is inverse; that is, the
relation is between temperature and 1/χ, the reciprocal of the susceptibility.
In one respect there is a significant difference between the magnetic susceptibility and
most of the physical properties discussed in the earlier pages. The specific heat of any
given substance, for instance, decreases with decreasing temperature, and reaches zero at
a particular temperature level. There is no negative specific heat. Consequently, the
specific heat of the individual atom is zero at all temperatures below this level. But
magnetic forces act upon magnetic substances at all temperatures below the critical
temperature, as well as above it. What we have here is a difference in the significance of
the zero point.
As explained in Volume I, the true datum of physical activity, the natural zero, is unit
speed, the speed of light. Natural physical magnitudes extend from this natural zero to
the natural unit of speed in space (our zero) in one direction, and to unit speed in time
(inverse speed) in the other. These two speed ranges are identical, except for the
inversion. Most of the physical magnitudes with which we deal are in the range from our
zero to the speed of light, but there are some quantities that extend beyond the natural
zero levels. This introduces some modifying factors into the physical relations, as the
natural zero levels are limiting magnitudes of the kind discussed in Chapter 17; that is,
points at which an inversion of most physical properties takes place.
For example, a property such as thermal radiation that increases with the temperature up
to the unit temperature level (the natural zero) does not continue to increase as the
temperature rises still farther. Instead, as we will see in Volume III, it undergoes a
decrease symmetrical with the increase that takes place between zero and unit
temperature. A somewhat similar reversal occurs in the case of those properties that
extend into the region inside unit space, the time region, as we have called it, because all
changes in this region take place in time, while the associated space remains constant at
the unit level.
Ferromagnetism is a phenomenon of the time region, and its natural zero point (the Curie
temperature) is therefore a boundary between two dissimilar regions, rather than a center
of symmetry, like the speed of light, the natural zero of speed. Instead of following the
kind of a linear relation that is characteristic of the properties of the regions outside unit
space, the relation of ferromagnetism to temperature has a more complex form due to the
substitution of the spatial equivalent of time for actual space in this region where no
change in actual space takes place.
No detailed studies in this area have yet been undertaken, but it seems evident that in the
more regular elements the magnetization is subject to the (1-x²)½ relation that applies to
other time region properties examined earlier, and to a square root factor, which may also
be inter-regional. It can therefore be expressed as M = k(1-T²)¼. If the magnetization is
stated as a fraction of the initial magnetization, and the temperature is similarly stated as
a fraction of the Curie temperature, the constant k is eliminated, and the values derived
from the equation apply to all substances that follow the regular pattern. Within the
limits of accuracy of the experimental data, the reduced magnetizations thus calculated
are in agreement with the empirical values, as reported by D. H. Martin.100
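With the reduced variables written out explicitly, the relation reads M/M0 = (1 - (T/Tc)²)¼. The brief calculation below simply tabulates this reduced magnetization at a few reduced temperatures; these are the values that would be compared with the empirical curves.

# Reduced magnetization from M = k(1 - T^2)^(1/4), with M and T expressed as
# fractions of the initial magnetization and the Curie temperature, so k drops out.

def reduced_magnetization(t_reduced: float) -> float:
    """M/M0 as a function of T/Tc for the regular pattern described in the text."""
    return (1.0 - t_reduced ** 2) ** 0.25

for t in (0.0, 0.25, 0.5, 0.75, 0.9, 1.0):
    print(f"T/Tc = {t:4.2f}   M/M0 = {reduced_magnetization(t):.3f}")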
Because the internal magnetic charge is applied against the basic rotational motion of the
atom, its force is symmetrically distributed in the same manner as the gravitational force.
But, as we have seen, ferromagnetism is a motion of an individual, specifically located,
component of the atom. The directional distribution of the ferromagnetic force in the
reference system is therefore determined by the atomic orientation. If each atom acted
independently, the orientation of the atoms of an aggregate would be random, but, in fact,
each magnetically charged atom exerts a force on its magnetic neighbors, tending to line
up these neighboring atoms with its own magnetic directions. This orienting effect
encounters mechanical resistance, and is ordinarily limited in scope. For this reason, and
because the relation of each magnetic aggregate to its magnetic environment changes
from time to time, the magnetic orientation of an aggregate is not usually uniform.
Instead, the aggregate is subdivided magnetically into a number of sections, generally
called "domains."
Ordinarily, the domains are randomly oriented, and the effective magnetic force is
reduced by the distribution over the different directions. Application of an external field
forces a reorientation of the atoms to conform with the direction of the field, the extent of
which depends on the strength of the field. This reorientation concentrates the magnetic
effect in the direction of the field, and results in an increase in the effective magnetic
force, reaching a maximum, the saturation level, when the reorientation is complete.
CHAPTER 23
Charges in Motion
When a negative* charge is added to an electron, the net total scalar speed of the charged
particle is zero. But since the electron rotation has the inward scalar direction, while the
charge has the outward direction, the two motions take place in different scalar
dimensions. Thus the electron does not act physically as a particle of zero speed
displacement, but as an uncharged electron and a charge. A moving charged electron
therefore has both magnetic properties (those of moving uncharged electrons) and
electrostatic properties (those of charges).
The conventional view is that electrostatic phenomena are due to charges at rest, and
magnetic phenomena are due to charges in motion. But, in fact, charges in motion have
exactly the same electrostatic properties as charges at rest. "It is one of the remarkable
properties of electric charge that it is invariant at all speeds," 101 says E. R. Dobbs. So
the motion of the charges is not, in itself, sufficient to account for electromagnetism.
Some additional process must come into operation in order to enable a charged particle to
exhibit magnetic properties when in motion. Whether this additional process involves the
charge or the particle–the "carrier of the charge," as it was called in a statement
previously quoted–is not specifically indicated observationally. Present-day theory
simply assumes that all effects are due to the charges. But since there are "carriers," these
are obviously the moving entities. The charges have no motion of their own; they are
carried. Even on the basis of conventional theory, therefore, the electromagnetic
phenomena are due to the motion of the carriers, not motion of the charges. The
development of electromagnetic theory in Chapter 21 now verifies this conclusion, and
identifies the carriers of the charges as "bare" electrons.
As noted in Chapter 13, a flow of charged electrons through a conductor (a time
structure) follows the same course as the flow of uncharged electrons. But the charged
electrons have a property that their uncharged counterparts do not have. They can also
move freely through the gravitational fields of extension space, producing
electromagnetic phenomena that correspond to the effects of the flow of current in
conductors. This is illustrated by an arrangement such as that shown in Fig.25. In the
center of the diagram is a wire through which a current is moving downward, as indicated
by the arrow. (The conventional "direction of current flow" is opposite to the actual
movement of the electrons, and is upward.) At the right is another conducting wire so
arranged that a segment of the wire is hanging loose in a container filled with mercury.
When a current is passed through this system in the same downward direction, the loose
end of wire is attracted toward the center wire. At the left of the diagram is a vacuum
tube through which a stream of electrons is also moving downward. This stream is
attracted toward the center wire in the same way as the loose wire in the mercury
container.
Figure 25
The movement of charged electrons through extension space is quite different in some
other respects from the movement of uncharged electrons (space units) through matter.
For instance, no electrical resistance is involved, and the motion therefore does not
conform to Ohm's law. But the magnetic effect depends only on the neutralization of one
dimension of a quantity of gravitational motion by the translational motion of the
electrons, and from this standpoint the collateral properties of the motion are irrelevant.
As long as the motion of the charged electrons takes place in a gravitational field, the
requirement for the production of magnetic effects is met.
On the basis of the general principles applying to electromagnetic forces, as defined in
Chapter 21, the magnetic force on a charged particle in a magnetic field is the product of
the magnetic field intensity B and a motion combination with the dimensions s²/t. The
combination applicable to the motion of a charged particle, we find, is electric quantity q
(measured as charge) multiplied by the particle velocity v. The force equation is then F =
Bqv, with space-time dimensions t/s² = t²/s⁴ × s × s/t. The static force of the charge is
F = qE, the dimensions of which are t/s² = s × t/s³.
The electrostatic forces between the charges (units of Q) are independent of the magnetic
forces due to the movement of the electrons (units of q). The total force acting on a
charged electron in a magnetic field is then F = QE + Bqv. Since Q and q are numerically
equal, because each electron takes one unit of charge, this force expression can be written
F = q(E + Bv). The combined force is known as the Lorentz force. Lorrain and Corson
comment on this force as follows:
The Lorentz force of equation 10-2 is intriguing. Why should v x B [velocity x magnetic field intensity]
have the same effect as the electric field E? Clearly from equation 10-2, the particle cannot tell whether it
"sees" an E or a v x B term… Thus v x B is somehow an electric field intensity. 102
The authors then go on to say that the explanation is provided by the theory of relativity.
But the space-time analysis shows that relativity has no bearing on this situation. From a
physical standpoint, electric field intensity acts on a charged particle not as field
intensity, but as a quantity of t/s³. Similarly, magnetic field intensity, t²/s⁴, acting against
an electron moving with a velocity s/t has the effect of a quantity of (t²/s⁴ × s/t); that is, a
quantity of t/s³. The magnitude of the physical result is the same in both cases.
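The dimensional statement in the last two paragraphs can be checked mechanically. In the sketch that follows, each quantity is carried as a pair of exponents (power of s, power of t), and the products qE and q(v x B) are shown to reduce to the same force dimensions, t/s². The representation is only a bookkeeping aid, not part of the development.

# Space-time dimensions represented as (power of s, power of t).
def mul(*dims):
    return (sum(d[0] for d in dims), sum(d[1] for d in dims))

q = (1, 0)      # electric quantity: s
E = (-3, 1)     # electric field intensity acting as a quantity of t/s^3
B = (-4, 2)     # magnetic field intensity: t^2/s^4
v = (1, -1)     # velocity: s/t

print(mul(q, E))       # (-2, 1) -> t/s^2, the force dimensions
print(mul(q, B, v))    # (-2, 1) -> t/s^2 again, as stated in the text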
This is not an unusual situation. On the contrary, it is common throughout all kinds of
physical phenomena. The increase in temperature due to the addition of energy, for
instance, depends entirely on the quantity of t/s that is added to the thermal motion. It is
immaterial whether that energy increment is in the form of kinetic energy, chemical
energy, electrical energy, or any other form of t/s.
The effect of v x B does differ from that of E in direction, and the expression given for
the Lorentz force is therefore valid only in vectorial form. The electric force qE acts in
the direction of the field, and because the field is radial, the charges to which the force is
applied "are accelerated, gaining kinetic energy." 103 The effect of the magnetic forces
follows a different pattern. For the reasons explained in Chapter 21, the force exerted by
a magnetic field on a moving electron is perpendicular to the field. As noted in the
discussion of electromagnetism, this perpendicular direction of the force is an
unexplained anomaly in present-day physical thought. "The strangest aspect of the
magnetic force on a moving charge is the direction of the force," 104 says a current
textbook. When the origin of the magnetic field is understood, there is nothing strange
about this direction. The scalar dimension of the motion of the electron is the dimension
in which a portion of the gravitational motion is neutralized by the one-dimensional
electron movement, and the residual two-dimensional motion necessarily exists in the
two perpendicular dimensions.
The force aspect of this residual motion is also perpendicular to the magnetic field. If this
is a magnetostatic field, it has the outward scalar direction, whereas the residual force has
the inward scalar direction, and must therefore be in a different scalar dimension. If the
field is electromagnetic, the forces are likewise in different dimensions, although the
cause is different. As noted earlier, the motion of the uncharged electrons that constitute
the electric current is in a scalar dimension other than that of the reference system. A
freely moving charged particle, on the other hand, is moving in the space, and therefore
in the scalar dimension, of the reference system. The acceleration of an electron moving
in a uniform magnetic field is thus perpendicular both to the field and to the direction of
motion. Such an acceleration does not change the magnitude of the velocity; it merely
changes the direction. Motion at constant speed with a constant acceleration at right
angles to the velocity vector is motion in a circle. If the particle is also moving in a
direction perpendicular to the plane of the circle, the path of motion is spiral.
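The geometrical conclusion of this paragraph can also be verified numerically. The fragment below integrates a motion whose acceleration is always perpendicular to the velocity and to an assumed field direction; the field strength, initial velocity, and step size are arbitrary illustrative choices, and the only point of the exercise is that the speed remains (within integration error) unchanged while the direction turns, tracing a circle, or a helix when there is a component along the field.

# Illustrative check: force always perpendicular to the velocity leaves the
# speed constant; the in-plane component rotates.  All numbers are arbitrary.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

omega = 1.0                    # arbitrary stand-in for the field factor
B_hat = (0.0, 0.0, 1.0)        # field direction
v = [1.0, 0.0, 0.2]            # initial velocity (0.2 along the field -> helix)
dt, steps = 1e-4, 10000

for _ in range(steps):
    a = [omega * c for c in cross(v, B_hat)]    # acceleration perpendicular to v
    v = [v[i] + a[i] * dt for i in range(3)]

speed = sum(c * c for c in v) ** 0.5
print(round(speed, 3))         # ~1.020, essentially the initial speed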
Most of the empirical knowledge that has been gained with respect to the nature and
properties of sub-atomic particles and cosmic atoms has been derived from observations
of their motion in electric and magnetic fields. Unfortunately, the amount of information
that can be obtained in this manner is very limited. A particularly significant point is that
the experiments that can be made on electrons by the application of electric and magnetic
forces are of no assistance to the physicists in their efforts to confirm one of their most
cherished assumptions: the assumption that the electron is one of the basic constituents of
matter. On the contrary, as pointed out in Chapter 18, the experimental evidence from
this source shows that the assumed nuclear structure of the atom of matter which
incorporates the electron is physically impossible.
The theory postulating orbital motion of negatively* charged electrons around a
hypothetical positively* charged nucleus, developed by Rutherford and his associates
after their celebrated experiments with alpha particles, collided immediately with one of
the properties of the charged electrons. A charged object radiates if it is accelerated.
Since the charge itself is an accelerated motion (for geometrical reasons), the force
required to produce a given acceleration of the charge is less than that required to
produce the same acceleration of the rotational unit. But the charge is physically
associated with the rotational combination, and must maintain the same speed. The
excess energy is therefore radiated away. This loss of energy from the hypothetical
orbiting electrons would cause them to spiral in toward the hypothetical nucleus, and
would make a stable atomic structure impossible.
This obstacle in the way of the nuclear hypothesis was never overcome. In order to
establish the hypothetical structure as physically possible, it would be necessary (1) to
determine just why an accelerated particle radiates, and (2) to explain why this process
does not operate under the conditions specified in the hypothesis. Neither of these
requirements has ever been met. Bohr simply assumed that the motion of the electrons is
quantized and can take only certain specific values, thus setting the stage for all of the
subsequent flights of fancy discussed in Chapter 18. The question as to whether the
quantum assumption could be reconciled with the reasons for the emission of radiation by
accelerated charges was simply ignored, as was the even more serious problem of
accounting for the assumed coexistence of positive* and negative* charges at separations
much less than those at which such charges are known to destroy each other. It should be
no surprise that Heisenberg eventually had to conclude that the nuclear atom he helped to
develop is not a physical particle at all, but is merely a "symbol," that is, a mathematical
convenience.
All of the foregoing discussion of the phenomena involving charges in motion has been
carried out in terms of charged electrons. The same considerations apply, inversely in
some respects, to charged positrons. Like the charged electrons, these positively* charged
particles are capable of moving through space, and since their motion is outward,
differing from that of the charged electrons only in rotational speed, they produce the
same general kind of magnetic effect as the charged electrons. In the cosmic sector, the
cosmic electric current is a flow of uncharged positrons through cosmic matter, and
charged positrons moving through the cosmic gravitational fields in time have magnetic
properties.
The rotational vibration that constitutes a charge may also be applied to other particles or
to atoms. The charge on an atom or multi-unit particle and the unit of rotation that it
modifies constitute a semi-independent component of that entity. The combination of
charge and rotational unit remains as a constituent of the atom or particle, but vibrates
independently, in the same manner as the magnetic motion combinations discussed in
Chapter 19. Inasmuch as this vibrating combination has the same composition as a
charged electron or positron -- a unit rotation modified by a unit rotational vibration–it
has the same electric and magnetic properties.
The charges on atoms may be either positive* or negative*. As explained in Chapter 17,
however, negative* ionization is confined to a relatively small number of elements
because an atom must have a negative rotation in order to acquire a negative* (= positive)
charge, and effective negative electric rotations are confined almost entirely to the
elements of Division IV. On the other hand, any element can take a positive* charge. If
the rotation in the electric dimension of the atom is negative, so that the positive* charge
cannot be applied in this dimension, it can be applied to the rotation in one of the
magnetic dimensions. The magnetic rotation is always positive in the material sector. It
follows that while the mobile sub-atomic particles are predominantly negative*–that is,
electrons–the freely moving (gaseous) ions are predominantly positive*.
The charged particles with which we have been concerned in the foregoing pages are
electrically charged. Since there are also particles that are capable of taking magnetic
charges, the question arises, Why do we not observe magnetically charged particles? The
explanation can be found in the requirement that the net rotational displacement of a
material atom or particle must be positive. The magnetic displacement, which is the
larger component of the total, must therefore also be positive. This means that only
negative magnetic charges can be applied to material particles.
The particles with magnetic rotational displacement are the neutron and the neutrino. The
neutron has no electric displacement and only a single unit of magnetic displacement.
Addition of an oppositely directed (negative) unit of charge therefore reduces the net
displacement to zero, and terminates the existence of the particle. The neutrino has both
electric and magnetic rotational components, and can therefore take a magnetic charge,
but when it is in this charged condition it cannot move through space, for reasons that
will be explained in Chapter 24, where the role of the charged neutrino in physical
processes will be examined in detail.
This chapter concludes the discussion of magnetism as far as this subject will be covered
in the present volume. Before turning to a different subject, it will be appropriate to make
a few comments on the contents of the last five chapters and their relation to the physical
situation in general.
Because the theory of the universe of motion, the detailed development of which is being
described in these volumes, is new to the scientific community, and conflicts with many
ideas and beliefs of long standing, the presentation in the several volumes of this series
has a two-fold objective. It is designed not only to report the findings of the investigation
based on the new theory, but also to provide the evidence that is required in order to
confirm the validity of the findings. It therefore needs to be emphasized that the points
brought out in the discussion of magnetism in these five chapters have made a very
significant contribution to the mass of confirmatory evidence that is now available.
The particular importance of the magnetic evidence lies in the fact that the theory defines
a specific dimensional relation between electricity and magnetism. It follows that
whenever the theory identifies the nature of an electric phenomenon, this identification
carries with it the assertion that there also exists a corresponding magnetic phenomenon,
differing only in that it is two-dimensional, while the electric analog is one-dimensional.
Thus we find from the theory that there is a one-dimensional rotational vibration,
identified as an electric charge, which has the space-time dimensions t/s and gives rise to
a variety of electrostatic phenomena. According to the theory, it necessarily follows that
there must be a two-dimensional rotational vibration, a magnetic charge, with the
dimensions t²/s², that gives rise to an analogous variety of magnetostatic phenomena. The
observations verify the existence of a class of phenomena of this type, and an analysis of
the dimensions of the magnetostatic quantities shows that they are, in fact, related to the
corresponding electric quantities by the factor t/s, as required by the theory.
The dimensional interaction between electricity and magnetism is a particularly
significant demonstration of the predictive power of the theory. We find from theory that
gravitation is a three-dimensional scalar motion, and that an electric current is a one
dimensional flow of units with the dimensions of space through the three-dimensional
gravitating objects. From this it follows that the interaction should leave a two-
dimensional scalar residue, oriented perpendicular to the current flow. Observations show
that such a residue does exist, and that the process which leads to its existence can be
identified with the phenomenon known as electromagnetism. It further follows from the
same premises that the equivalent of a two-dimensional scalar motion through a three-
dimensional gravitating object leaves a one-dimensional scalar motion as a residue. This
interaction can be identified with the observed process known as electromagnetic
induction, and the residue can be identified as an electric current.
The principal dimensional consequences that can be inferred from the theoretical
identification of the electric current, electromagnetism, and gravitation with one, two, and
three dimensions of scalar motion, respectively, are thus definitely correlated with
observed electric and magnetic phenomena. But this is only the groundwork of a massive
accumulation of evidence confirming the dimensional relations derived from theory.
Contemporary science places a great deal of emphasis on the predictive power of new
theories. This is probably an overemphasis, as the ability of a theory to correlate existing
information is as important as its ability to point the way to new information, and is
becoming increasingly important as the "multitude of different parts and pieces" that now
constitutes physical theory continues to expand. In any event, it should be recognized that
deductions from the premises of a theory that identify hitherto unknown relations among
known phenomena are predictions in the same sense as asserting the existence of a
hitherto unknown phenomenon.
For example, the postulate that motion is the sole constituent of the physical universe
carries with it the consequence that all physical quantities can be expressed in terms of
space and time only. This is a prediction. The assertions as to the relation between
electric and magnetic quantities discussed in the foregoing paragraphs are likewise
predictions based on the same premises. The fact that the development of the
consequences of the postulates of the theory of the universe of motion in the pages of this
and the preceding volume has led to a complete and consistent system of space-time
dimensions applicable to mechanical, electric, and magnetic quantities is a verification of
these predictions.
The verification of this prediction is all the more significant because the possibility of
arriving at any consistent system of dimensions, even with the use of four or five basic
quantities, is denied by the majority of physicists.
In the past the subject of dimensions has been quite controversial. For years unsuccessful attempts were
made to find "ultimate rational quantities" in terms of which to express all dimensional formulas. It is now
universally agreed that there is no one "absolute" set of dimensional formulas.16
A similar prediction concerning the numerical values of these physical quantities is also
implicit in the postulates. Since it is postulated that motion exists only in discrete units, it
follows that the other physical quantities, all of which are either motions, combinations of
motions, or relations between motions, likewise exist only in discrete units related to the
units of the basic motion. This means that when the physical relations are correctly stated,
they contain no numerical values other than those specifically identifying numbers of
units, such as the atomic number, for example. The so-called "fundamental constants of
physics" and the multitude of "disposable constants" that appear in relations such as the
equations of state, will all be eliminated.
This fact that the values of the "fundamental constants" have no physical meaning in the
context of the theory of the universe of motion contrasts sharply with the place of these
constants in current scientific thought, where they are regarded as the critical magnitudes
that determine the nature of the universe. Paul Davies expresses the prevailing view in
this statement:
The gross structure of many of the familiar systems observed in nature is determined by a relatively small
number of universal constants. Had these constants taken different numerical values from those observed,
then these systems would differ correspondingly in their structure. What is specially interesting is that, in
many cases, only a modest alteration of values would result in a drastic restructuring of the system
concerned.105
As we have seen in the pages of this and the preceding volume, some of these constants,
the speed of light, the electron charge, etc., are natural units–that is, their true magnitude
is unity–and the others are combinations of those basic units. The values that they take in
the conventional measurement systems are entirely due to the arbitrary magnitudes of the
units in which the measurements are expressed. The only way in which the constants
could take the "different numerical values" to which Davies refers is by a modification of
the measurement system. Such a change would have no physical meaning. Thus the
possibility that he suggests in the quoted statement, and explores at length in the pages of
his book The Accidental Universe, is ruled out by the unitary character of the universe.
No physical relation in that universe is "accidental." The existence of each relation, and
the relevant magnitudes, are necessary consequences of the basic factors that define the
universe as a whole, and there is no latitude for individual modification, except to the
extent that selection among possible outcomes of physical events may be determined by
probability considerations.
The clarification of these numerical relations to put them in terms of natural units is a
gigantic task, and it is still far from being complete, but enough progress has been made,
particularly in the fundamental areas, to make it evident that there is no serious obstacle
in the way of continued progress toward the ultimate goal.
The special contribution of magnetism to the verification of these significant
consequences of the postulates that define the universe of motion has been that, because
of its intermediate position between the one-dimensional and three-dimensional
phenomena, it, in a sense, ties the whole fabric of scalar motion theory together.
Recognition of this point, early in the theoretical development, led to deferring
consideration of magnetism until after the relations in the other major physical areas were
quite firmly established. As a result, the investigation of magnetic phenomena is not as
far advanced, particularly in quantitative terms, as the theoretical development in most of
the other areas that have been covered.
There is also another factor that has limited the extent of coverage, one that is related to
the objective of the presentation. This work is not intended as a comprehensive treatise
on physics. It is simply an account of the results thus far obtained by development of the
consequences of the postulates that define the universe of motion. In this development we
are proceeding from the general principles expressed in the postulates toward their
detailed applications. Meanwhile, the scientific community has been, and is, proceeding
in the opposite direction, making observations and experiments, and working inductively
from these factual premises toward increasingly general principles and relations. Thus the
results of these two activities are moving toward each other. When the development of
the Reciprocal System of theory reaches the point, in any field, where it meets the results
that have been obtained inductively from observation and measurement, and there is
substantial agreement, it is not necessary to proceed farther. Nothing would be gained by
duplicating information that is already available in the scientific literature.
Obviously, the validity of existing theory in any particular area is one of the principal
factors that determine just how far the new development needs to be carried in that area.
As it happens, however, the previous work in magnetism, and to some extent in
electricity as well, has followed along lines that are very different from those that are
defined for us by the concept of a universe of motion, and the results of that previous
work are, to a large extent, expressed in language that is altogether foreign to the manner
in which our findings must necessarily be stated. This makes it difficult to determine just
where we reach the point beyond which we are in agreement with previously existing
theory. Whether the clarification of the electric and magnetic relations in the special areas
covered in the preceding pages will be sufficient, together with a translation of present-
day theory into the appropriate language, to put electricity and magnetism on a sound
theoretical footing, or whether some more radical reconstruction of theory will be
required, is not definitely indicated as yet.
CHAPTER 24
Isotopes
While the magnetic charges involved in the phenomena that we recognize as magnetic all
have the outward scalar direction, this does not mean that inward magnetic charges are
non-existent. It is a result of the fact that the magnetic (two-dimensional) rotational
displacement of material atoms is always inward. The principles governing the addition
of motions, as set forth in Volume I, require charges to oppose the basic motions of the
atoms in order to form stable combinations. The only stable magnetic charge is therefore
the outward charge. However, inward charges may also be produced under appropriate
conditions, and may continue to exist if their subsequent separation from the rotational
combinations is forcibly prevented.
The events that take place during the beginning of the process of aggregation in the
material environment were described in Volume I. As brought out there, the decay of the
cosmic rays entering this environment produces a large number of massless neutrons, M
½-½-0. These are subject to disintegration into positrons, M 0-0-1, and neutrinos, M
½-½-(1). Obviously, the presence of any such large concentration of particles of a particular
type can be expected to have some kind of a significant effect on the physical system. We
have already examined a wide variety of phenomena resulting from the analogous excess
of electrons in the material environment. The neutrino is more elusive, and there is very
little direct experimental information available concerning this particle and its behavior.
However, the development of the Reciprocal System of theory has given us a theoretical
explanation of the role of the neutrinos in physical phenomena, and we are now able to
trace the course of events even where there are no empirical observations or data
available for our guidance.
We can logically conclude that in some environments the neutrinos continue to exist in
the uncharged condition in which they are originally formed, just as we find that the
electron normally has no charge in the terrestrial environment. In this uncharged
condition, the neutrino has a net displacement of zero. Thus it is able to move freely in
either space or time. Furthermore, it is not affected by gravitation or by electrical or
magnetic forces, since it has neither mass nor charge. It therefore has no motion relative
to the natural system of reference, which means that from the standpoint of a stationary
system of reference the neutrinos produced at any given location move outward at unit
speed in the same manner as radiation. Each material aggregate in the universe is
therefore exposed to a continuing flux of neutrinos, which may be regarded as a special
kind of radiation.
Although the neutrino as a whole is neutral, from the space-time standpoint, because the
displacements of its separate motions add up to zero, it actually has effective
displacements in both the electric and magnetic dimensions. It is therefore capable of
taking either a magnetic or an electric charge. Probability considerations favor the
primary two-dimensional motion, and the charge acquired by a neutrino is therefore
magnetic. This charge opposes the magnetic rotation, and since the rotation is inward the
charge is directed outward. Inasmuch as this unit outward charge neutralizes the inward
magnetic rotation, the only effective (unbalanced) unit of displacement of the charged
neutrino is that of the inward negative rotation in the electric dimension. This charged
neutrino is thus, in effect, a rotating unit of space, similar in this respect to the uncharged
electron, and, as matters now stand, indistinguishable from it.
As a unit of space, the charged neutrino is subject to the same limitations as the
analogous uncharged electron. It can move freely through the time displacements of
matter, but it is barred from passage through open space, since the relation of space to
space is not motion. Any neutrino that acquires a charge while passing through matter is
therefore trapped. Unlike the charged electron, it cannot escape from the material
aggregate by acquiring a charge. It must lose its charge in order to reach the neutral
condition in which it is capable of moving through space. This is difficult to accomplish,
as the conditions within the aggregate are favorable to producing charges rather than
destroying them. At first the proportion of neutrinos captured in passing through a newly
formed material aggregate is probably small, but as the number of charged particles
within the aggregate builds up, increasing what we may call the magnetic temperature,
the tendency toward capture becomes greater. Being rotational in nature, the magnetic
motion is not radiated away in the manner of the translational thermal motion, and the
increase of the neutrino population is therefore a cumulative process. There will
inevitably be some differences in the rate of build-up of this population by reason of local
conditions, but in general the older a material aggregate becomes, the higher its magnetic
temperature rises.
The charged neutrino, as a unit of space, is an addition to the space represented by the
reference system, extension space, as we have called it. Where charged neutrinos are
present, some of the atoms of matter exist, for a time, in the space of the neutrinos rather
than in units of extension space, or in the space of the uncharged electrons that, as we
have seen previously, are also present. The charged neutrinos are rotating relative to the
spatial reference system, and they are consequently rotating relative to the systems of
motions that constitute the material atoms, systems that are defined relative to the
reference system. The outward rotational vibration (charge) of the spatial unit, the
neutrino, is therefore equivalent to, and interchangeable with, an inward rotational
vibration (charge) of the time structure, the atom. When the neutrino and the atom
subsequently separate, there is a finite probability that the charge will remain with the
atom.
The inward scalar direction of this two-dimensional atomic charge is the same as that of
the two-dimensional atomic rotation. This fact that the rotational vibration of the atom
induced by a magnetically charged neutrino is compatible with the basic magnetic (two-
dimensional) inward rotation of the atom has a profound effect on the participation of this
motion in physical processes. The ordinary magnetic charge is a foreign element in the
material system, an outward motion in a system of inward motions. Magnetism therefore
plays a detached role of relatively small importance in the local environment. The
neutrino-induced rotational vibration, or charge, on the other hand, adds to the net
rotational displacement (the mass) of the atom, and aside from being more dependent on
conditions in the environment, is fully coordinate with the basic atomic rotation. Instead
of being a distinct added motion, this induced charge modifies the magnitude of the
previously existing atomic rotation.
The presence of a concentration of charged neutrinos tending to produce inward
rotational vibration of the atoms of an aggregate explains why an atom as a whole does
not take an ordinary magnetic charge, and why these ordinary magnetic charges are
confined to asymmetric atoms that have motion components which can vibrate
independently of the main body. The outward motion cannot be initiated against the
forces tending to produce inward motion.
In view of the very significant difference in behavior between the inward charge induced
by the neutrinos and the ordinary outward magnetic charge, we will not use the term
"magnetic charge" in application to the rotational vibration of the type we are now
considering. Instead, we will call this a gravitational charge. Since the motion that
constitutes this charge is a form of rotation, and is compatible with the atomic rotation, it
adds to the net rotational displacement of the atom. However, there is only one rotating
system in the neutrino, whereas the atom is a double system. The mass corresponding to
the unit of gravitational charge is thus only half that of the unit of rotation (the unit of
atomic number). For convenience, the smaller unit has been taken as the unit of atomic
weight, or atomic mass. The primary atomic mass of a gravitationally charged atom is
therefore 2Z + G, where Z is the atomic number, and G is the number of units of
gravitational charge.
In addition to the difference in the size of the units, the gravitational charge (rotational
vibration) also has a relation to the atomic structure in general that is somewhat different
from that of the full rotations. We will therefore distinguish between the rotational mass
of the basic atomic rotation and the mass due to the gravitational charge, which we will
call vibrational mass. The relation between the gravitational charge and the atomic
rotation will have further consideration from the standpoint of the atomic structure in
Chapters 25 and 26, and from the mass standpoint in
Chapter 27.
Inasmuch as the gravitational charge is variable, the masses of the atoms of an element
take different values, extending through a range that depends on the maximum size of the
vibrational mass G under the prevailing conditions. The different states that each element
can assume by reason of the variable gravitational charge are identified as isotopes of the
elements, and the mass on the 2Z + G basis is identified as the isotopic mass. As the
elements occur naturally on the earth, the various isotopes of each element are almost
always in the same, or nearly the same, proportions. Each element therefore has an
average isotopic mass which is recognized as the atomic weight of that element. From the
points brought out in the foregoing discussion, it is evident that the atomic weight thus
defined is a reflection of the local neutrino concentration, the magnetic temperature, as
we have called it, and does not necessarily have the same value in a different
environment.
For reasons that will be explained in Chapter 26, the transfer of magnetic ionization from
neutrino to atom is irreversible under terrestrial conditions. However, there are processes,
to be described later, that gradually transform the vibrational mass into rotational mass.
At a low magnetic temperature (concentration of charged neutrinos) most of the single
gravitational charges are removed from the system by these processes before a second
charge can be added. As the magnetic temperature increases, the frequency of magnetic
ionization of atoms likewise increases because of the larger number of contacts. As a
result, double or multiple ionization occurs in some atoms. Each aggregate thus has a
magnetic ionization level analogous to the level of electric ionization previously
discussed.
The degree of magnetic ionization of the individual elements depends not only on the
magnetic temperature but also on the relative ability of those elements to absorb the
neutrinos. This is a property of the individual units of time displacement. The effective
magnetic ionization, the number of gravitational charges that are added to the atomic
motion, therefore depends on the atomic mass as well as on the magnetic temperature.
From the nature of the addition process we can deduce that at the unit ionization level
each net unit of rotational displacement (atomic number) should be capable of acquiring
one unit of gravitational charge (half the size of the atomic mass unit). But the atom
exists in the time region, whereas the neutrino is not subject to the factors that apply to
motion inside unit space. The relation between the charge and the atomic rotation is
therefore between mv, the vibrational mass, and mr², the second power of the rotational
mass. Furthermore, the atomic rotation in the time region is subject to the inter-regional
ratio, 156.444. Denoting the magnetic ionization level as I, we then have the equilibrium
relation
mv = I mr²/156.444 (24-1)
In this equation the rotational mass, mr, is expressed in the double units (units of atomic
number) and the vibrational mass, mv, in the single units (units of atomic weight).
The value of mv thus derived is the number of units of gravitational charge (mass) that
will normally be acquired by an atom of rotational mass mr if raised to the magnetic
ionization level I. It is quite obvious from the available empirical information that the
magnetic ionization level on the surface of the earth is close to unity. A calculation for
the element lead on the unit ionization basis, to illustrate the application of the equation,
results in mv = 43. Adding the 164 atomic weight units of rotational mass corresponding
to atomic number 82, we arrive at a theoretical atomic weight of 207. The experimental
value is 207.2.
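The calculation for lead reduces to a few lines of arithmetic. The sketch below evaluates equation 24-1 at the unit ionization level and forms the equilibrium mass 2Z + G, with G taken to the nearest whole unit; for Z = 82 it returns the 43 units of vibrational mass and the total of 207 quoted above, and the same procedure reproduces the Calc. column of Table 35.

# Equation 24-1 at magnetic ionization level I: m_v = I * m_r^2 / 156.444,
# with m_r the rotational mass in atomic-number units (m_r = Z) and the result
# in atomic-weight units.  The equilibrium mass is then 2Z + G, with G the
# vibrational mass taken to the nearest whole unit.

INTER_REGIONAL_RATIO = 156.444

def equilibrium_mass(Z, I=1):
    m_v = I * Z * Z / INTER_REGIONAL_RATIO
    return m_v, 2 * Z + round(m_v)

print(equilibrium_mass(82))   # (42.98..., 207): lead, as in the text
print(equilibrium_mass(92))   # (54.10..., 238): uranium, cf. Table 35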
This close agreement is not quite as significant as it appears to be. Actually there are
stable isotopes of lead with isotopic masses ranging from 204 to 208. The explanation is
that the value obtained from equation 24-1 is not necessarily the mass corresponding to
the atomic weight, nor the isotopic mass of the most stable isotope. It is the center of a
zone of isotopic stability. Because of the individual characteristics of the elements, the
actual median of the stable isotopes and the measured atomic weight may be offset to
some extent from this theoretical center of stability, but the deviation is generally small.
In more than 60 percent of the first 92 elements it is only one unit, or none at all.
Furthermore, the agreement is improving as more accurate measurements become
available from experimental sources. In the nearly thirty years since the publication of the
first edition of this work, in which the comparative values were tabulated, the accepted
atomic weight of six elements has been changed by a significant amount, and in all of
these cases the change has been in the direction of closer agreement with the theoretical
values.
Table 35: Atomic Mass Equilibrium Values
Z    mv    m Calc.    m Obs.    Diff.        Z    mv    m Calc.    m Obs.    Diff.
1 .01 2 1 -1 47 14.12 108 108 0
2 .03 4 4 0 48 14.73 111 112.5 +1.5
3 .06 6 7 +1 49 15.35 113 115 +2
4 .10 8 9 +1 50 15.98 116 119 +3
5 .16 10 11 +1 51 16.63 119 122 +3
6 .23 12 12 0 52 17.28 121 128 +7
7 .31 14 14 0 53 17.96 124 127 +3
8 .41 16 16 0 54 18.64 127 131 +4
9 .52 19 19 0 55 19.34 129 133 +4
10 .64 21 20 -1 56 20.05 132 137 +5
11 .77 23 23 0 57 20.77 135 139 +4
12 .92 25 24 -1 58 21.50 138 140 +2
13 1.08 27 27 0 59 22.25 140 141 +1
14 1.25 29 28 -1 60 23.01 143 144 +1
15 1.44 31 31 0 61 23.78 146 145 -1
16 1.64 34 32 -2 62 24.57 149 150 +1
17 1.85 36 35.5 -0.5 63 25.37 151 152 +1
18 2.07 38 40 +2 64 26.18 154 157 +3
19 2.31 40 39 -1 65 27.01 157 159 +2
20 2.56 43 40 -3 66 27.84 160 162.5 +2.5
21 2.82 45 45 0 67 28.69 163 165 +2
22 3.09 47 48 +1 68 29.56 166 167 +1
23 3.38 49 51 +2 69 30.43 168 169 +1
24 3.68 52 52 0 70 31.32 171 173 +2
25 4.00 54 55 +1 71 32.22 174 175 +1
26 4.32 56 56 0 72 33.14 177 178.5 +1.5
27 4.66 59 59 0 73 34.06 180 181 +1
28 5.01 61 59 -2 74 35.00 183 184 +1
29 5.38 63 63.5 +0.5 75 35.96 186 186 0
30 5.75 66 65 -1 76 36.92 189 190 +1
31 6.14 68 70 +2 77 37.90 192 192 0
32 6.55 71 73 +2 78 38.89 195 195 0
33 6.96 73 75 +2 79 39.89 198 197 -1
34 7.39 75 79 +4 80 40.91 201 200.5 -0.5
35 7.83 78 80 +2 81 41.94 204 204 0
36 8.28 80 84 +4 82 42.98 207 207 0
37 8.75 83 85.5 +2.5 83 44.03 210 209 -1
38 9.23 85 88 +3 84 45.10 213 209 -4
39 9.72 88 89 +1 85 46.18 216 210 -6
40 10.23 90 91 +1 86 47.28 219 222 +3
41 10.74 93 93 0 87 48.38 222 223 +1
42 11.28 95 95.5 +0.5 88 49.50 226 226 0
43 11.82 98 98 0 89 50.63 229 227 -2
44 12.37 100 101 +1 90 51.78 232 232 0
45 12.94 103 103 0 91 52.93 235 231 -4
46 13.53 106 106.5 +0.5 92 54.10 238 238 0
Table 35 is an updated version of the original tabulation. The first column of the table
gives the atomic number. The second column shows the value of mv calculated from
equation 24-1. Column 3 is the theoretical equilibrium mass, 2Z + G, taken to the nearest
unit, since the gravitational charge does not exist in fractional units. Column 4 is the
observed atomic weight, also expressed in terms of the nearest integer, except where the
excess is almost exactly one half unit. Column 5 is the difference between the observed
and calculated values. The trans-uranium elements are omitted, as these elements cannot
have (terrestrial) atomic weights in the same sense in which that term is used in
application to the stable elements.
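The construction of the table can be checked in the same way. The following sketch, added purely for illustration, rebuilds a few representative rows; the observed weights are simply copied from Table 35:

    # Rebuild sample rows of Table 35: calc = 2Z + mv (nearest unit), diff = obs - calc.
    observed = {6: 12, 26: 56, 50: 119, 79: 197, 92: 238}   # obs values taken from the table

    for Z, obs in sorted(observed.items()):
        mv = Z**2 / 156.444               # equation 24-1, unit ionization
        calc = 2 * Z + round(mv)
        print(Z, round(mv, 2), calc, obs, obs - calc)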
The width of the zone of stability is quite variable, ranging from zero for technetium and
promethium to a little over ten percent of the rotational mass. The reasons for the
individual differences in this respect are not yet clear. One of the interesting, and
probably significant, points in this connection is that the odd-numbered elements
generally have narrower stability limits than the even-numbered elements. This and other
factors affecting atomic stability will be discussed in Chapter 26. Isotopes that are outside
the zone of stability undergo spontaneous modifications that have the result of moving
the atom into the stable zone. The nature of these processes will be examined in the next
chapter.
In addition to the limitation on its width, the zone of isotopic stability also has an upper
limit due to the restrictions on the total rotation of the atom. It was established in Volume
I that the maximum effective magnetic rotational displacement is four units. The
elements of rotational group 4B have magnetic rotational displacements 4-4. By adding
rotation in the electric dimension it is possible to build the total rotation up to 4-4-31, or
the equivalent 5-4-(1), corresponding to atomic number 117, without exceeding the
overall displacement maximum. But the next step brings the electric rotation up to the
equivalent of the next unit of magnetic rotation. The effective magnetic rotation (that is,
the total less the initial unit) is then four units in each magnetic dimension. As explained
earlier, a displacement of four full magnetic units is equivalent to no displacement at all.
Arrival at this point therefore terminates the rotation. The speed displacement reverts to
the translational status. Element 118 is thus unstable, and will disintegrate promptly if it
is formed. All rotational combinations above element 118 (rotational mass 236) are
similarly unstable, whereas all elements below 118 are stable at a zero ionization level.
At a finite ionization level the corresponding vibrational mass is added to the rotational
mass, and the 236 limit is reached at a lower atomic number. As indicated by Table 35,
the equilibrium mass of uranium, atomic number 92, is 238 at the unit ionization level.
This exceeds the 236 limit. Uranium and all elements above it in the atomic series are
therefore unstable in an environment that is subject to this degree of ionization. The
converse is not necessarily true; that is, it does not necessarily follow that all isotopes
below the 236 limit are stable if they are within the zone of stability defined by the ratio
of vibrational to rotational mass. At the magnetic temperature corresponding to the unit
ionization level most atoms of an aggregate have one gravitational charge. But some have
none, whereas others may possess two charges. The existence of a doubly charged atom
has no observable physical consequences, other than the added mass, unless the second
charge puts the total mass over the 236 limit. In that event, the atom will eventually
disintegrate.
All of the factors that determine the extent of instability in the elements just below
uranium in the atomic series have not yet been identified, but, as would be expected,
there is a general decrease in the tendency toward instability as the atomic number
decreases. The lowest element that could theoretically become unstable by reason of
acquisition of two gravitational charges is gold, element 79, for which the total mass with
two units of charge is 238. However, the probability of the second ionization drops
rapidly as we move down the atomic series, and while the first few elements below
uranium are very unstable, the instability is negligible below bismuth, element 83.
As the magnetic ionization level rises, the stability limit decreases still further in terms of
atomic number. It should be noted, however, that the rate of decrease slows down
quickly. The first stage of ionization reduces the stability limit from 118 to 92, a
difference of 26 in atomic number. The second unit of ionization causes a decrease of 13
atomic number units, the third only 8, and so on.
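This progression follows directly from equation 24-1 and the 236 mass limit. A small sketch, for illustration only (the helper name is ours), locates the lowest atomic number whose equilibrium mass reaches the limit at each ionization level and reproduces the figures quoted above:

    # Lowest Z whose equilibrium mass, 2Z + I*Z**2/156.444, reaches the 236 limit.
    def first_unstable_Z(I):
        Z = 1
        while 2 * Z + I * Z**2 / 156.444 < 236:
            Z += 1
        return Z

    for I in range(4):
        print(I, first_unstable_Z(I))     # 0 118, 1 92, 2 79, 3 71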
CHAPTER 25
Radioactivity
The ejection of positive or negative displacement by an atom that becomes unstable for
one of the reasons discussed in the preceding pages will be identified as radioactivity, or
radioactive decay, and the adjective radioactive will be applied to any element or isotope
that is in the unstable condition. As brought out in Chapter 24, there are two distinct
kinds of instability. Those elements whose atomic mass exceeds 236, either in rotational
mass alone, or in rotational mass plus the vibrational mass added by magnetic ionization,
are beyond the overall stability limit, and must reduce their respective masses below 236.
In a fixed environment this cannot ordinarily be accomplished by modification of the
vibrational mass alone, since the normal ratio of vibrational to rotational mass is
determined by the prevailing magnetic ionization level. The radioactivity resulting from
this cause therefore involves the actual ejection of mass and the transformation of the
element into an element of a lower atomic number. The most common process is the
ejection of one or more helium atoms, or alpha particles, and is known as alpha decay.
The second type of instability is due to a ratio of vibrational to rotational mass which is
outside the zone of stability. In this case ejection of mass is not necessary; the required
adjustment of the ratio can be accomplished by a process that converts vibrational mass
into rotational mass, or vice versa, and thereby transforms the unstable isotope into
another isotope within or closer to the zone of stability. The most common process of this
kind is the emission of a beta particle, an electron or positron, together with a neutrino,
and the term beta decay is applied.
In this work the alpha and beta designations will be used in a more general sense. All
processes that result from instability due to exceeding the 236 mass limit (that is, all
processes that involve the ejection of primary mass) will be classified as alpha
radioactivity, and all processes that modify only the ratio of vibrational mass to rotational
mass will be classed as beta radioactivity. If it is necessary to identify the individual
processes, such terms as b+ decay, etc., will be employed. The nature of the processes
will be specified in terms of the beta particle, and coincident emission of the appropriate
neutrino should be understood.
In analyzing these processes, which are few in number and relatively simple, the essential
requirement is to distinguish clearly between the rotational and vibrational mass. For
convenience we will adopt a notation in the form 6–1, where the first number represents
the rotational mass and the second the vibrational mass. The example cited is the mass of
the isotope Li7. A negative mass (space displacement) will be indicated by parentheses, as
in the expression 4–(1), which is the mass of the isotope He3. This system is similar to the
notation used for the rotational displacement in the different scalar dimensions, but there
should be no confusion since one is a two–number system while the other employs three
numbers.
Radioactive processes generally involve some adjustments of the secondary mass, but
these are minor items that have not yet been studied in the context of the Reciprocal
System of theory. They will not be considered in the present discussion, which will refer
only to the primary mass, the principal component of the total.
The composition of the motions of a stable isotope can be changed only by external
means such as violent contact, absorption of a particle, or magnetic ionization, and the
frequency of such changes is related to the nature of the environment, rather than to
anything inherent in the structure of the isotope itself. An unstable isotope, on the other
hand, is capable of moving toward stability on its own initiative by ejecting the
appropriate motion or combination of motions. Consequently, each such process has a
specific time pattern, subject to the probability relations.
The basic process of alpha radioactivity is the direct removal of rotational mass. Since
each unit of rotational displacement is equal to two units of mass on the atomic weight
scale, the effect of each step in this process is to decrease the rotational mass by 2n units.
The rotational combination with n = 1 is the H2 isotope, which is unstable because its
total rotation is above the limit for either a single rotating system, or an intermediate type
of structure similar to that of the H1 isotope, but is less than one double (atomic number)
unit in each of the two rotating systems of the atomic structure. This H2 isotope therefore
tends either to lose displacement and revert to the H1 status, or to add displacement and
become a helium atom. The particle ejected in alpha radioactivity is the smallest stable
double rotating system, in which n = 2. Emission of this particle, the He4 isotope, with
mass components 4–0, results in a change such as
O16 => C12 + He4
16–0 => 12–0 + 4–0
Since rotational vibration exists only as a modifier of rotation, there are no separate units
of vibrational mass that can be added or subtracted directly in the manner of the alpha
particle. But the mass of the compound neutron has the same single (atomic weight) unit
value as the vibrational mass unit, and like the latter, it is a single rotating system (from
the material standpoint). It is therefore interchangeable with the vibrational mass. In our
numerical notation, it can be expressed as 0–1. This equivalence of the neutron mass and
the unit of vibrational mass makes it possible to modify isotopes by adding or removing
compound neutrons. Thus we may start with the mass two isotope of hydrogen, H2, and
by adding a compound neutron obtain the mass three isotope, H3.
H2 + n => H3
2–0 + 0–1=> 2–1
Beta radioactivity is a conversion process rather than an ordinary addition process. An
isotope that is above the zone of stability has one or more units of magnetic displacement,
¹/2–¹/2–0, in the form of rotational vibration, superimposed on units of the magnetic
rotation of the atom. These vibrational units are only half the size of the rotational units.
Addition of a second half–size unit to one of the combinations of unit rotation and unit
rotational vibration is therefore required to produce an additional rotational unit. This
cannot be accomplished by direct addition, as a rotational unit is not capable of accepting
more than one vibrational unit. However, an unstable isotope is subject to influences that
cause it to eject displacement. (That is what makes it unstable.) An isotope above the
stability zone ejects a cosmic neutrino, (¹/2)–(¹/2)–1 and an electron, 0–0–(1). This
ejection is equivalent to addition of displacement ¹/2–¹/2–0, the addition that is required to
convert one of the half size vibrational units to a rotational unit.
Neither of the ejected particles has any effective primary mass. No change in mass
therefore takes place in this process (b– radioactivity). The original isotope with
rotational mass 2Z and vibrational mass n becomes an isotope with rotational mass
2(Z+1)–that is, an isotope of the next higher element–and vibrational mass n–2. The total
mass of the combination of motions remains the same, but two units of vibrational mass
have been converted to rotational mass, and the combination has moved closer to the
zone of stability. If it is still outside that zone, the ejection process is repeated.
Where an isotope is below the zone of stability (deficient in vibrational mass) the process
described in the foregoing paragraphs is reversed. In this process, b+ radioactivity, a unit
of rotational mass is converted to two units of vibrational mass by ejection of a material
neutrino, ¹/2–¹/2–(1), together with a positron, 0–0–1. The isotope of element Z, with
rotational mass 2Z and vibrational mass n then becomes an isotope of element Z–1, with
rotational mass 2(Z–1) and vibrational mass n+2.
These are the basic radioactive processes. The actual course of events in any particular
case depends on the initial situation. It may involve only one such event; it may consist of
several successive events of the same kind, or a combination of the basic processes may
be required to complete the transition to a stable condition. In natural beta radioactivity
under terrestrial conditions a single beta emission is usually sufficient, as the unstable
isotopes are seldom very far outside the zone of beta stability. However, under some
other environmental conditions the amount of radioactivity required in order to attain beta
stability is very substantial, as we will see in Volume III.
In natural alpha radioactivity, the mass that must be ejected may amount to the equivalent
of several alpha particles even in the terrestrial environment. The loss of this rotational
mass necessitates beta emission to restore the equilibrium between rotational and
vibrational mass. Alpha radioactivity is thus usually a complex process. As an example,
we may trace the steps involved in the radioactive decay of uranium. Beginning with U238,
which is just over the borderline of stability, and has the long half life of 4.5 × 10⁹ years,
the first event is an alpha emission.
U238 => Th234 + He4
184–54 => 180–54 + 4–0
This puts the vibrational mass outside the zone of stability, and two successive beta
events follow promptly, bringing the atom back to another isotope of uranium.
Th234 => Pa234
180–54=> 182–52
Pa234 => U234
182–52 => 184–50
Two successive alpha emissions now take place, with a considerable delay between
stages, as both U234 and the intermediate product Th230 have relatively long half lives.
These two events bring the atomic structure to that of radium, the prototype of the
radioactive elements.
U234 => Th230 + He4
184–50 => 180–50 + 4–0
Th230 => Ra226 + He4
180–50 => 176–50 + 4–0
After another somewhat shorter time interval, a rapid succession of decay events begins.
Half life periods in this phase of the decay range from days down to as low as seconds.
Three more alpha emissions start the sequence.
Ra226 => Rn222 + He4
176–50 => 172–50 + 4–0
Rn222 => Po218 + He4
172–50 => 168–50 + 4–0
Po218 => Pb214 + He4
168–50 => 164–50 + 4–0
By this time the vibrational mass of 50 units is well above the zone of stability, the center
of which is theoretically 43 units at this point. The next event is therefore a beta emission.
Pb214 => Bi214
164–50 => 166–48
This isotope is still above the stable zone, and another beta emission is in order, but a
further alpha emission is also imminent, and the next step may take either direction. In
either case, the emission is followed by one of the alternate kind, and the net result of the
two events is the same regardless of which step is taken first. We may therefore regard
this as a double decay.
Bi214 => Pb210 + He4
166–48 => 164–46 + 4–0
After some delay due to a 22 year half life of Pb210, two successive beta emissions and
one alpha event occur.
Pb210 => Bi210
164–46 => 166–44
Bi210 => Po210
166–44 => 168–42
Po210 => Pb206 + He4
168–42 => 164–42 + 4–0
The lead isotope Pb206 is within the stability limits both with respect to total mass (alpha)
and with respect to the ratio of vibrational to rotational mass (beta). The radioactivity
therefore ends at this point.
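In the rotational-vibrational notation the whole sequence reduces to two elementary operations on a (rotational, vibrational) pair: an alpha step removes 4–0, and a b– step converts two units of vibrational mass into rotational mass (a b+ step would do the reverse). A sketch of that bookkeeping, with the step list taken from the decay scheme above and the alpha branch taken first in the Bi214 double decay, reproduces the end point; the function names are ours:

    # Primary-mass bookkeeping for the U238 decay scheme traced above.
    def alpha(m):                  # eject He4, i.e. 4-0
        r, v = m
        return (r - 4, v)

    def beta_minus(m):             # convert two vibrational units to rotational mass
        r, v = m
        return (r + 2, v - 2)

    state = (184, 54)              # U238
    steps = [alpha, beta_minus, beta_minus,          # -> Th234 -> Pa234 -> U234
             alpha, alpha, alpha, alpha, alpha,      # -> Th230 -> Ra226 -> Rn222 -> Po218 -> Pb214
             beta_minus, alpha, beta_minus,          # -> Bi214 -> (double decay) -> Pb210
             beta_minus, beta_minus, alpha]          # -> Bi210 -> Po210 -> Pb206
    for step in steps:
        state = step(state)
    print(state)                   # (164, 42): the stable Pb206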
The unstable isotopes that are responsible for natural beta radioactivity in the terrestrial
environment originate either as by–products of alpha radioactivity or as a result of atomic
transformations originated by high energy processes, such as those initiated by incoming
cosmic rays. Alpha radioactivity is mainly the result of past or present inflow of material
from regions where the magnetic ionization level is below that of the local environment.
In those regions where the magnetic ionization level is zero, or near zero, all of the 117
possible elements are stable, and there is no alpha radioactivity. The heavy element
content of young matter is low because atom building is a cumulative process, and this
young matter has not had time to produce more than a relatively small number of the
more complex atoms. But probability considerations make it inevitable that some atoms
of the higher groups will be formed in the younger aggregates, particularly where older
matter dispersed into space by explosive processes has been accreted by these younger
structures. Thus, although aggregates composed primarily of young matter have a much
lower heavy element content than those composed of older matter, they do contain an
appreciable number of the very heavy elements, including the trans–uranium elements
that are absent from terrestrial matter. The significance of this point will be explained in
Volume III.
If matter from a region of zero magnetic ionization is transferred to a region such as the
surface of the earth, where the ionization level is unity or above, the stability limit in
terms of atomic number drops, and radioactivity is initiated. Whether the material
constituents of the earth acquired the unit magnetic ionization level at the time that the
earth assumed its present status as a planet, or reached this level at some earlier or later
date is not definitely indicated by the information now available. There is some evidence
suggesting that this change took place in a considerably earlier era, but in any event the
situation with respect to the activity of the elements now undergoing alpha radioactivity
is essentially the same. They originated in a region of zero, or near zero, magnetic
ionization, and either remained in that region while the magnetic ionization level
increased, or in some manner, the nature of which is immaterial in the present
connection, were transferred to their present locations, where they have become
radioactive for the reasons stated.
As noted above, another source of natural radioactivity is atomic rearrangement resulting
from interaction of material atoms with high energy particles, principally the cosmic rays
and their derivatives. In such reactions stable isotopes of one kind or another are
converted into unstable isotopes, and the latter then become sources of radioactivity,
mostly of the beta type. The level of the beta radioactivity produced in this manner is
quite low. The very intense activity of the same general nature that is indicated by the
radio and x–ray emission from certain kinds of astronomical objects originates by means
of a different process, examination of which will be deferred until the nature and
behavior of the objects from which the emissions are observed are developed in Volume
III.
The processes that constitute natural radioactivity can be duplicated experimentally,
together with a great variety of similar atomic transformations which presumably also
occur naturally under appropriate circumstances, but have been observed only under
experimental conditions. We may therefore combine our consideration of natural beta
radioactivity, the so–called artificial radioactivity, and the other experimentally induced
transformations into an examination of atomic transformations in general. Essentially,
these transformations, regardless of the number and type of atoms or particles involved,
are no different from the simple addition and decay reactions previously discussed. The
most convenient way of describing these more complex events is to treat them as
successive processes in which the reacting particles first join in an addition reaction and
subsequently eject one or more particles from the combination. According to some of the
theories currently in vogue, this is the way in which the transformations actually take
place, but for present purposes it is immaterial whether or not the symbolic representation
conforms to physical reality, and we will leave this question in abeyance. The formation
of the isotope P30 from aluminum, the first artificial radioactive reaction discovered, may
be represented as
Al27 + He4 => P30 + n1
26–1 + 4–0 => 30–1 => 30–0 + 0–1
In this case the two phases of the reaction are independent, in the sense that any
combination which adds up to 30–1 can produce P30 + n1, while there are many ways in
which the 30–1 resultant of the combination of Al27 + He4 can be broken down. The final
product may, for instance, be Si30 + H1.
The usual method of conducting transformation experiments is to accelerate a small
atomic or sub–atomic unit to a very high velocity and cause it to impinge on a target. In
general, the degree of fragmentation of the target atoms depends on the relative stability
of those atoms and the kinetic energy of the incident particles. For example, if we use
hydrogen atoms against an aluminum target at a relatively low energy level, we get
results similar to those produced in the Al27 + He4 reaction previously discussed. Typical
equations are
Al27 + H1 => Mg24 + He4
26–1 + 2–(1) => 28–0 => 24–0 + 4–0
Al27 + H1 => Si27 + n1
26–1 + 2–(1) => 28–0 => 28–(1) + 0–1
Greater energies cause further fragmentation and result in such rearrangements as
Al27 + H1 => Na24 + 3 H1 + n1
26–1 + 2–(1) => 28–0 => 22–2 + 6–(3) + 0–1
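Because every entry in these equations is just a (rotational, vibrational) pair, the balance of each reaction can be checked mechanically. The sketch below, added only as an illustration, restates the reactions above and confirms that both components balance; the helper name is ours:

    # Each listed transformation balances in both rotational and vibrational mass.
    reactions = [
        ([(26, 1), (4, 0)],  [(30, 0), (0, 1)]),           # Al27 + He4 -> P30 + n1
        ([(26, 1), (2, -1)], [(24, 0), (4, 0)]),            # Al27 + H1  -> Mg24 + He4
        ([(26, 1), (2, -1)], [(28, -1), (0, 1)]),           # Al27 + H1  -> Si27 + n1
        ([(26, 1), (2, -1)], [(22, 2), (6, -3), (0, 1)]),   # Al27 + H1  -> Na24 + 3 H1 + n1
    ]

    def totals(side):
        return tuple(map(sum, zip(*side)))

    for left, right in reactions:
        print(totals(left), totals(right), totals(left) == totals(right))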
The general principle that the degree of fragmentation is a function of the energy of the
incident particles has an important bearing on the relative probabilities of various
reactions at very high temperatures, and will have further consideration later.
In the extreme situation where the target atom is heavy and inherently unstable, the
fragments may be relatively large. In this case, the process is known as fission. The
difference between the fission process and the transformation reactions previously
described is merely a matter of degree, and the same relationships apply.
Although it is possible in some instances to transform one stable isotope into another by
an appropriate process, the more general rule is that if the original reactants are stable the
major product is unstable, and therefore radioactive. The reason is, of course, that the
stable isotopes have vibrational to rotational mass ratios within the stability zone, and any
change in the ratio tends to move it out of that zone. As an example, the P30 isotope
formed in the reaction between aluminum and helium atoms is below the stability zone;
that is, it is deficient in vibrational mass. It therefore decays by the b+ process to form a
stable silicon isotope.
P30 => Si30
30–0 => 28–2
In the radioactive reactions of the heavy elements the products often have substantial
excesses of vibrational mass, and in these cases successive beta emissions take place,
resulting in decay chains in which the unstable isotopes move step by step toward
stability. One of the relatively long chains of this type that has been identified is the
following:
Xe140 => Cs140 => Ba140 => La140 => Ce140
108–32 => 110–30 => 112–28 => 114–26 => 116–24
(19) (19) (20) (21) (22)
The figures in parentheses refer to the number of units of vibrational mass corresponding
to the center of the zone of stability, as calculated for each element from equation 24–1.
The original product Xe140 has 13 excess vibrational units, and is thus far outside the
stability zone. Successive beta emissions convert two–unit quantities of vibrational mass
to rotational mass, while the stable amount of vibrational mass gradually increases as the
atomic number rises. On reaching Ce140 the excess has been reduced to two units. This is
within the stability margin, and the radioactivity therefore ceases.
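The figures in parentheses, and the shrinking excess along the chain, can be checked directly against equation 24-1. A brief sketch, for illustration only:

    # Stability centers (equation 24-1, unit ionization) along the Xe140 beta chain.
    chain = [("Xe140", 54, 32), ("Cs140", 55, 30), ("Ba140", 56, 28),
             ("La140", 57, 26), ("Ce140", 58, 24)]
    for name, Z, mv in chain:
        center = round(Z**2 / 156.444)
        print(name, center, mv - center)   # centers 19 19 20 21 22; excess falls from 13 to 2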
The foregoing description of the atomic transformation processes has been confined to
the essential element of the transformation, the redistribution of the primary mass, and
the collateral effects have either been ignored or left for later treatment. In the latter
category are the mass–energy relations, which will be discussed in Chapter 27. The
electric charges carried by some of the reacting substances, or the reaction products, have
no significance in the present connection, as they only affect the energy relations.
On first consideration it might appear that the addition processes discussed in the
preceding pages would provide the answer to the problem of accounting for the existence
of the heavier members of the series of chemical elements. In current practice this is
taken for granted, and the question to be answered is accepted as being merely the issue
as to what specific one or more of these processes is operative.
The currently accepted hypothesis is that the raw material from which the elements are
formed is hydrogen, and that mass is added to hydrogen by means of the addition
processes. It is recognized that (with certain exceptions that will be considered later) the
addition mechanisms are high energy processes. Atoms approaching each other at low or
moderate speeds normally rebound, and take up positions at equilibrium distances. The
additions take place only where the speeds are high enough to overcome the resistance,
and these speeds generally involve disruption of the structure of the target atoms,
followed by some recombination.
The only place now known in our galaxy where the energy concentration is at the level
required for operation of these processes on a major scale is in the interiors of the stars.
The accepted hypothesis therefore is that the atom building takes place in the stellar
interiors, and that the products are subsequently scattered into the environment by
supernova explosions. It has been demonstrated by laboratory experiments, and more
dramatically in the explosion of the hydrogen bomb, that the mass 2 and mass 3 isotopes
of hydrogen can be stimulated to combine into the mass 4 isotope of helium, with the
release of large quantities of energy. This hydrogen conversion process is currently the
most powerful source of energy known to science (aside from some highly speculative
ideas that involve carrying gravitational attraction to hypothetical extremes). The attitude
of the professional physicists has always been that the most energetic process known to
them must necessarily be the process by which energy is generated in the stars (even
though they have had to revise their concept of the nature of this process twice already,
the last time under very embarrassing circumstances). The current belief of both the
physicists and the astronomers therefore is that the hydrogen conversion process is
unquestionably the primary stellar energy source. It is further assumed that there are other
addition processes operating in the stars by which atom building beyond the helium level
is accomplished.
It will be shown in Volume III that there is a mass of astronomical evidence
demonstrating conclusively that this hydrogen conversion process cannot be the means
by which the stellar energy is generated. But even without this evidence that demolishes
the currently accepted assumption, any critical examination of the fundamentals of atom
building will make it clear that high energy processes–inherently destructive–are not the
answer to the problem. It is true that the formation of helium from isotopes of hydrogen
proceeds in the right direction, but the fact is that the increase in atomic mass that results
from the hydrogen conversion reaction is an incidental effect of a process that operates
toward a different end. The primary objective of that process, the objective that supplies
the probability difference that powers the process, is the conversion of unstable isotopes
into stable isotopes.
The fuel for the known hydrogen conversion process, that of the hydrogen bomb and the
experiments aimed at developing fusion power, is a mixture of these unstable hydrogen
isotopes. The operating principle is merely a matter of speeding up the conversion,
causing the reactants to do rapidly what they will do slowly, of their own accord, if not
subjected to stimulation. It is freely asserted that this is the same process as that by which
energy is generated in the stars, and that the fusion experiments are designed to duplicate
the stellar conditions. But the hydrogen in the stars is mainly in the form of the stable
mass one isotope, and there is no justification for assuming that this stable atomic
structure can be induced to undergo the kind of a reaction to which the unstable isotopes
are subject by reason of their instability. The mere fact that the conversion process would
be exothermic, if it occurred, does not necessarily mean that it will take place
spontaneously. The controlling factor is the relative probability, not the energy balance,
and so far as we know, the mass one isotope of hydrogen is just as probable a structure as
the helium atom under any physical conditions, other than those, to be discussed in
Chapter 26, that lead to atom building.
At high temperatures the chances of atomic break–up are improved, but this does not
necessarily increase the proportion of helium in the final product. On the contrary, as
noted earlier, a greater kinetic energy results in more fragmentation, and it therefore
favors the smaller unit rather than the larger. A certain amount of recombination of the
fragments produced under these high temperature conditions can be expected,
particularly where the extreme conditions are only temporary, as in the explosion of the
hydrogen bomb, but the relative amounts of the various possible products of
recombination are determined by probability considerations. Inasmuch as stable isotopes
are more probable than unstable isotopes (that is what makes them stable), formation of
the stable helium isotope from the atomic and sub–atomic fragments takes precedence
over recombination of the unstable isotopes of hydrogen. But the mass one hydrogen
isotope that is the principal constituent of the stars is just as stable as helium, and it has
the advantage, in a high energy environment, of being the smaller unit, which makes it
less susceptible to fragmentation, and more capable of recombination if disrupted. Thus it
cannot be expected that recombination of fragments into helium, under high energy
conditions, will occur on a large enough scale to constitute a major source of stellar
energy.
In this connection, it should be noted that the general tendency of high energy reactions
in the material sector of the universe is to break down existing structures rather than build
larger ones. The reason for this should be evident. The material sector is the low speed
sector, and the lower the speed of matter the more pronounced its material character
becomes; that is, the more it deviates from the speeds of the cosmic sector. It follows
that, in general, the lower the speed the greater the tendency to form combinations of the
material type. Conversely, higher speeds lessen the material character of the matter, and
not only inhibit further combination, but tend to disrupt the combinations already
existing. Furthermore, this increase in the amount of negative displacement (thermal or
translational motion) is not conducive to building up positive displacement in the form of
mass. Thus the net result of the reactions in the high speed environment of the stellar
interiors can be expected to decrease, rather than increase, the average atomic weight of
the matter participating in these reactions.
An analogous process in a more familiar energy range is the pyrolysis of petroleum.
Cracking of a paraffinic oil, for instance, yields products that, among other things,
include substantial quantities of complex aromatic compounds. For example, one of those
that makes its appearance is anthracene, a 24–atom molecule. There are few, if any, of the
ring compounds, even the smaller ones, in the original material. Thus it is evident that the
high temperatures of this process have not only broken down the original hydrocarbon
molecules into smaller molecules or atoms, but have also allowed some recombination
into larger molecular units. Nevertheless, the general result of the cracking process is a
drastic reduction in the average size of the molecules, the greater part of the mass being
reduced to hydrogen, methane, and carbon.
The point that needs to be recognized is that this is what high energy processes do to
combinations such as atoms, regardless of whether those atoms are combinations of
particles, as contended by conventional physics, or combinations of different forms of
motion, as deduced from the postulates of the theory of the universe of motion. Such
processes disrupt some or all of the original combinations. In the chaotic conditions
generated by the application of powerful forces there is a certain amount of
recombination going on alongside the disintegration. This may result in the appearance of
some new combinations (isotopes), which may suggest that atom building is occurring.
But, in fact, these constructive events are merely incidental results of a destructive
process.
In the universe of motion, the raw material for atom building consists of massless
particles, the decay products of the cosmic rays. Conversion of these particles into simple
atoms of matter, and production of increasingly more massive atoms from the original
units, is a slow and gradual constructive process, not a high energy destructive process.
This assertion as to the general character of the atom building process is confirmed by the
astronomical evidence, which, as will be brought out in Volume III, shows that atom
building is taking place throughout the universe, not merely at special locations and under
special conditions, as envisioned in present–day theories. The details of the atom building
processes in the universe of motion will be the subject of the next chapter.
CHAPTER 26
Atom Building
Several chapters of Volume I were devoted to tracing the path followed by the matter that
is ejected into the material sector of the universe from the inverse, or cosmic, sector in the
form of cosmic rays. As brought out there, the cosmic atoms that constitute the cosmic
rays, three–dimensional rotational combinations with net speeds greater than unity, are
broken down into massless particles; that is, particles with effective rotation in less than
three dimensions. These particles are then reassembled into material atoms, three–
dimensional rotational combinations with net speeds less than unity. The processes by
which this rebuilding is accomplished have not yet been observed, nor has the applicable
theory been fully clarified. It was stated in the earlier volume that our conclusions in this
area were necessarily somewhat speculative. Additional theoretical development in the
meantime has placed these conclusions on a much firmer basis, and it would now be in
order to call them tentative rather than speculative.
As brought out in Chapter 25, the currently prevailing opinion is that atom building is
carried on by means of addition processes of the type discussed in that chapter. For the
reasons that were specified, we find it necessary to reject that conclusion, and to
characterize these processes, to the extent that they actually occur, as minor and
incidental activities that have no significant influence on the general evolutionary pattern
in the material sector of the universe. However, as noted in the earlier discussion, there is
one addition process that actually does occur on a large enough scale to justify giving it
some consideration before we turn our attention to broadening the scope of the
explanation of the atom building process introduced in Volume I. This addition process
that we will now want to examine is what is known as “neutron capture.”
The observed particle known as the “neutron” is the one that we have identified as the
compound neutron. It has the same type of structure as the mass one hydrogen isotope;
that is, it is a double rotating system with a proton type rotation in one component and a
neutrino type rotation in the other. In the hydrogen isotope the neutrino rotation has the
material composition M ¹/2–¹/2–(1). In the compound neutron it has the cosmic
composition C (¹/2)–(¹/2)–1. The net displacements of this particle are M ¹/2–¹/2–0, the
same as those of the massless neutron. The compound neutron is fully compatible with
the basic magnetic (two–dimensional) rotational displacement of the atoms, and since it
carries no electric charge it can penetrate to the vicinity of an atom much more easily
than the particles that normally interact in the charged condition. Consequently, the
compound neutrons are readily absorbed by atoms. On first consideration, therefore,
neutron capture would appear to be a likely candidate for designation as the primary atom
building process. Nevertheless, the physicists relegate it to a minor role. The prevailing
downgrading of the potential of neutron capture is mainly due to the physicists’
commitment to other processes that they believe to be responsible for the energy
production in the stars. If, as now believed, the continuing additions to the atomic masses
are made as a collateral feature of the stellar energy production processes, neutron
capture can have only a limited significance. Some support for this conclusion is derived
from the finding that there is no stable isotope of mass 5. As the textbooks point out, the
neutron capture process would come to a stop at this point.
In the universe of motion this argument is invalid. As we saw in Chapter 24, isotopic
stability is determined by the level of magnetic ionization. The lack of a stable isotope of
mass 5 is peculiar to the unit ionization level, the level that happens to exist at the surface
of the earth at the present time. In earlier eras, when the magnetic ionization level was
lower, the obstacle at mass 5 was absent, or at least not fully effective, and in the future
when the ionization level has risen, this obstacle will again be minimized or removed.
We must nevertheless concur with the prevailing opinion that neutron capture is not the
primary atom building process, because even though the mass 5 obstacle can be
circumvented, there are not anywhere near enough of the compound neutrons to take care
of the atom building requirements. These particles are produced in limited quantities in
reactions of a special nature. Atom building, on the other hand, is an activity of vast
proportions that is going on continuously in all parts of the universe. The compound
neutron is actually a very special kind of combination of motions. The reason for its
existence is that there are some physical circumstances under which two–dimensional
rotation is ejected from matter. In the material atoms the two–dimensional rotation is
associated with mass because of the way in which it is incorporated into the atomic
structure. There is no way in which this mass can be given up, because the process by
which it originated, bringing a massless particle to rest in the fixed spatial reference
system, is irreversible. The two–dimensional speed displacement is therefore forced into
the only available alternative, the compound neutron structure, even though this structure
is inherently one of low probability.
Let us turn now to the process which, according to the findings reported in Volume I, is,
in fact, the primary means whereby atom building is actually accomplished. As brought
out in that earlier discussion, the principal product of the decay of cosmic atoms, the
original constituents of the cosmic rays, is the massless neutron, M ¹/2–¹/2–0. This particle
can combine with an electron, M 0–0–(1), or eject a positron, M 0–0–1, to form a
neutrino, M¹/2–¹/2–(1). On the basis of the principles governing the combination of
motions, as defined in Volume I, simple combinations of motions do not produce stable
structures unless the added motion has some characteristic opposed to that of the original.
However, this restriction does not apply to a combination with a neutrino, as this particle
has a net total speed displacement of zero, and the added motion is therefore the only
active unit in the combination. Thus a massless neutron can be added to a neutrino. Some
significant consequences ensue.
All massless particles are moving outward at the speed of light (unit speed) relative to the
conventional spatial reference system. But when the neutrino, M¹/2–¹/2–(1), combines
with the massless neutron, M¹/2–¹/2–0, the displacements of the combination are M 1–1–
(1), which means that the combination has an active inward two–dimensional rotational
displacement in a three–dimensional type of structure. The addition of inward motion in
the third scalar dimension brings the consolidated particle to rest in the spatial reference
system. The results of this sequence of events were described in Volume I. As noted
there, although the massless neutron and the neutrino have no effective mass, they do
have the two–dimensional analog, t²/s², of the three–dimensional property, t³/s³, that is
known as mass. When one of these particles, moving at the speed of light relative to the
spatial reference system comes to rest in the gravitationally bound system represented by
the reference coordinates, the unit translational speed thereby eliminated provides the
necessary energy, t/s, to convert the two–dimensional quantity, the internal momentum,
as we have called it, to the three–dimensional quantity, mass.
The product of this process, with rotational displacements 1–1–(1) and a mass of one
atomic weight unit, is the proton. In conventional physics the proton is regarded as a
positively* charged particle that constitutes the nucleus of the hydrogen atom. We find
that it is, in fact, a particle, which may or may not carry a positive* electric charge. We
also find that as a particular kind of motion (not as a particle) it is a constituent of the
hydrogen atom. It is not, however, a ―nucleus.‖ The mass one hydrogen isotope is a
double rotating system in which the proton type of motion is combined with a motion of
the neutrino type. The atom is formed by direct combination of the proton and the
neutrino, but the existence of the particles as particles terminates when the combination
takes place. At this point the motions that previously constituted the particles become
constituent motions of the combination structure, the atom.
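The displacement arithmetic in this sequence can be written out explicitly if each particle is treated as a triple of scalar rotational displacements, with the half units kept as fractions. A minimal sketch, for illustration only (the helper name is ours):

    # Displacement additions described above: massless neutron + neutrino -> proton.
    from fractions import Fraction as F

    half = F(1, 2)
    neutrino         = (half, half, F(-1))    # M 1/2-1/2-(1)
    massless_neutron = (half, half, F(0))     # M 1/2-1/2-0

    def combine(a, b):                        # add scalar displacements dimension by dimension
        return tuple(x + y for x, y in zip(a, b))

    proton = combine(massless_neutron, neutrino)
    print([int(d) for d in proton])           # [1, 1, -1], i.e. M 1-1-(1)
    # The mass one hydrogen atom is then the double system built from the proton
    # and neutrino motions, not a further sum of displacements.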
This is an appropriate point at which to make some general comments about the
successive combinations of different types of motions that are the essence of the atom
building process. The key to an understanding of this situation is a recognition of the fact
that these are scalar motions. The only inherent property of a scalar motion is its positive
or negative magnitude, and the representation of that magnitude in the spatial reference
system is subject to change in accordance with the conditions prevailing in the
environment. The same scalar motion can be either translational, rotational, vibrational,
or a rotational vibration, and it is free to switch from one of these to another to conform
to changed conditions. Such a change is a zero energy process, as previously defined,
merely a rearrangement.
This is the same kind of a situation that we encountered in Chapter 17 in connection with
ionization. As noted there, ionization of a particle can take place by means of any one of
a number of different processes–absorption of radiant energy, capture of electrons,
contact with fast moving particles, etc. Since the motions that are involved are of
different types, it might appear that we are confronted with a difficult problem when we
attempt to explain these processes as interchange of motions. But the situation is simple
when it is viewed in scalar terms. The only inherent property of these scalar motions–the
vibratory photon motion, the rotational electron motion, the translational motion of the
atom or particle–is the magnitude. It follows that the magnitude is the only property that
is necessarily transmitted unchanged in an interaction. The coupling to the reference
system that distinguishes the photon from the electron, or from translational motion, is
free to conform to the new environment. In ionization it takes the form of a rotational
vibration, regardless of the type of the antecedent motion.
Production of the hydrogen atom in the manner described in the preceding pages
terminates the role of the direct addition processes in atom building. The essential step in
this process is to bring the massless neutrons from their normal motion at the speed of
light (stationary in the natural reference system) to a condition of rest in the fixed spatial
reference system. As pointed out in Volume I, this requires the existence of rotational
motion in all three scalar dimensions, since the particle is capable of moving at the speed
of light (relative to the spatial reference system) in any vacant dimension. The massless
neutron does not have the necessary three dimensions of motion, but combination with
the neutrino provides the required addition to the neutron dimensions. This combination,
1–1–(1), has a net total three–dimensional rotational displacement (mass) of one unit.
The 1–1–(1) particle, the proton, thus produced cannot accept another massless neutron
because of the two–dimensional nature of that particle. Nor can it accept a combination
of the massless neutron with a neutrino, as that combination constitutes another proton,
and consolidation of two protons is subject to the opposing factors previously considered
in connection with the direct combination of atoms. Beyond the mass one hydrogen
stage, therefore, atom building takes place mainly by means of an ionization process that
we will now consider.
The neutrinos in the decay products of the cosmic rays are subject to contacts with other
particles, particularly photons of radiation. Some of these contacts result in magnetic
ionization; that is, a two–dimensional rotational vibration is imparted to the neutrino.
Since this is a one–unit displacement in opposition to the one unit of two–dimensional
rotational displacement in the neutrino, the resultant net rotational displacement in these
two dimensions is zero. As can readily be seen, such a charge could not be applied to a
massless neutron. This particle already has zero displacement in the electric dimension,
and if the one unit in the magnetic dimensions is neutralized the particle would have no
effective speed displacement, and would be reduced to the status of the rotational base,
the rotational equivalent of nothing at all. At the primitive level magnetic ionization is
therefore confined to the neutrino.
The magnetic ionization process was discussed at length in Chapters 24 and 25, and the
steps through which the original ionization of the neutrinos is passed on to the atoms
were described in considerable detail. At this time we will take a look at the mass
relations, with the objective of demonstrating that the process by which mass is added
during the events previously described is irreversible (up to the destructive limits defined
in Chapter 25), and that magnetic ionization is therefore an atom–building process of
such broad scope that it is clearly the predominant means of accomplishing the formation
of the heavier elements.
As explained previously, since the magnetically charged neutrino has no active speed
displacement other than the one negative unit in the electric dimension, it is, in effect, a
rotating unit of space vibrating in the magnetic dimensions. A material atom, which is a
time structure (net displacement in time), can exist in this space of the neutrino just as in
any other space. Such an atom is continually moving from one space unit to another. If it
enters the space of a neutrino, the rotational vibration of the space unit (the neutrino) is
equivalent to, and in equilibrium with, a similar, but oppositely directed, rotational
vibration of the atom. When the atom again passes into another space unit it is a matter of
chance whether the vibration goes with it, or is left with the space unit (the neutrino).
Thus some of the magnetic charges originally imparted to the neutrinos in a material
aggregate are transferred from the neutrinos to the atoms.
Neutrinos, whether charged or uncharged, move at unit speed relative to the spatial
reference system, and their occasional periods of coincidence with atoms of matter are
possible only because of the finite magnitude of the units of space and time. If the
magnetic charge stays with the atom when the atom and neutrino separate, the charge,
which is moving at unit speed while it is associated with the neutrino, is brought to rest in
the spatial reference system. Elimination of the unit of outward speed provides the unit of
displacement required for the addition of rotation in the third scalar dimension and
enables the unit of magnetic (two–dimensional) speed displacement to be absorbed by the
atom. Inasmuch as this unit that is absorbed has only half the mass of the full rotational
unit, and has no rotation at all in the third dimension, it enters the atom as a unit of
vibrational mass. If this puts the isotopic weight of the atom outside the zone of stability,
some of the vibrational mass is converted to rotational mass in the manner previously
described, moving the atom to a position higher in the atomic series.
The transition from the massless state (stationary in the natural reference system) to the
material status cannot be reversed in the material environment, as there is no available
process for going directly from rotation to translation. The sub–atomic particles are
subject to neutralization reactions in which oppositely directed rotations cancel each
other, causing their speed displacements to revert to the translational status. But direct
combination of two multi–unit atoms is difficult to accomplish. Because of the reversed
direction of the forces in the time region, there is a strong force of repulsion between two
such structures when they approach each other. Furthermore, each atom is a combination
of motions in different scalar dimensions, and even if two atoms acquire sufficient
relative speed to overcome the resistance and make effective contact, they cannot join
unless the displacements in the different dimensions reach the proper conditions for
combination simultaneously. With few, if any, exceptions, the additions to the masses of
the atoms are therefore permanent (up to the time that one of the destructive limits is
reached).
Here, then, the first application of this atom building process is complete. By means of
the successive steps that have been identified, the magnetic rotational speed displacement
of the massless neutron produced by cosmic ray decay (the only active property of that
particle) is converted into an addition to the mass of an atom. Successive additions of the
same kind move the atom up the atomic series.
Atom building in intergalactic space is slow because of the low density of matter, but the
amount of time spent in this stage is so long that there is sufficient opportunity for
production of a finite quantity of all of the 117 possible elements, in proportions
determined by the relative probabilities. After this initial period, the existing matter is
increasingly concentrated into large aggregates. This speeds up the atom building, but
meanwhile there are processes in operation that destroy some of the heavier elements.
A significant aspect of the theoretical findings reported in this and the immediately
preceding chapters is the important role of the massless particles, entities which, with the
exception of the photon and the neutrino, are not recognized by conventional science. As
brought out in the discussion earlier in this chapter, the characteristic feature of these
particles is that they have no capability of independent motion, and are therefore
stationary in the natural system of reference. It follows that they are moving at unit speed
(the speed of light) in the context of the conventional spatial reference system.
According to our findings, there are three categories of material particles (combinations
of motions without enough rotational displacement to form the atomic type of structure).
These are (1) massless particles, (2) similar particles that have acquired mass, and (3)
particles with structures intermediate between those of class (2) and the full atomic
structure. Table 36 lists the sub–atomic particles of the material sector.
The mass one hydrogen isotope is included in this list because of its intermediate type
structure, although it is generally regarded as a full scale atom. Electric charges that may
be present are not shown, except in the case of the one–dimensional charged particles,
where they provide the rotational vibration that brings these particles into the
gravitationally bound system. Charges applied to other particles in the list have no
significant effect on the phenomena now being considered.
Table 36: The Subatomic Particles
Massless Particles
photon
M0–0–0 rotational base
M0–0–(1) electron
*M¹/2–¹/2–(1) charged neutrino
M0–0–1 positron
M¹/2–¹/2–(1) neutrino
M¹/2–¹/2–0 massless neutron
Particles With Mass
–M0–0–(1) charged electron
+M0–0–1 charged positron
M1–1–(1) proton
Intermediate Systems
M1–1–(1)
C(¹/2)–(¹/2)–1   compound neutron
M1–1–(1)
M¹/2–¹/2–(1)     mass 1 hydrogen
* gravitational charge
– negative* electric charge
+ positive* electric charge
An exact duplicate of the Table 36 list exists in the cosmic sector, with the speed
displacements inverted. In this case the particles are built on the cosmic rotational base,
represented as
C 0–0–0, rather than the material rotational base, M 0–0–0. The particles not listed in
Table 36 that the physicists claim to have discovered–mesons, etc.–are combinations of
the cosmic type, either particles from the cosmic sub–atomic list, or full–sized cosmic
atoms (where the presumed discoveries are authentic). It is even possible that some of the
events of extremely short duration attributed to transient particles may be originated by
cosmic chemical compounds.
Recognition of the place of the massless particles in the evolutionary pattern of matter is
one of the advances in understanding that has given us the present consistent, and
apparently correct, explanation of the transition from cosmic to material (and vice versa).
The 1959 publication identified the cyclic nature of the universe, and gave an account of
the manner in which the transitions between sectors take place. At that time, however, the
existence of the massless particles had not yet been discovered theoretically, and the
particle now identified as the compound neutron was thought to be the intermediary by
means of which intersector transfer is accomplished. When it was finally realized that the
theory requires the existence of a massless neutron, the door to a new understanding of
the transition process was opened. It then became evident that the transition is not
directly from cosmic to material, but from cosmic (moving inward in time) to neutral (no
motion relative to the natural reference system), and then to material (moving inward in
space).
This finding revolutionized our concept of the position of the massless particles in the
physical picture. It can now be seen that these particles–the neutrino (known to
conventional science), the massless electron and massless positron (previously identified
as the moving particles in electric currents), the massless neutron, the rotational base, and
the gravitationally charged neutrino (discovered theoretically)–are the constituents of a
hitherto unknown subdivision of physical existence, a neutral state of the basic units of
matter, intermediate between the states of the cosmic and material sectors.
Inasmuch as the atom building process operates by means of successive additions of
single units, the relative proportions of the various elements in a material aggregate are
directly related to the age of the matter, and inversely related to the atomic number.
However, there are a number of collateral factors that modify the basic relations. As we
have seen, production of the mass one isotope of hydrogen is a relatively simple matter,
involving nothing more than a union of two simple particles. The next step is more
difficult because it requires the formation of a double system in which there are effective
rotational displacements in both components. The great majority of the material atoms
are therefore still in the hydrogen stage. The first full double system, helium, atomic
number 2, is in second place, as would be expected. Beyond this level, the atomic
rotation becomes more complex, and factors other than the required number of additions
of mass units introduce numerous irregularities into what would otherwise be a regular
decrease of abundance with atomic number.
Evidently a single addition to the atomic rotation introduces a degree of asymmetry that
decreases stability, as the even–numbered elements are generally more abundant than the
odd–numbered ones. For instance, the ten most abundant elements beyond hydrogen in
the earth‘s crust include seven even–numbered elements, and only three with odd atomic
numbers. The zone of isotopic stability is likewise wider in the even–numbered than in
the odd–numbered elements, as would be expected if they are inherently more stable.
Many of the odd–numbered group have only one stable isotope, and there are five within
the 117 element range of the terrestrial environment that have no stable isotope at all (in
that environment). On the other hand, no even–numbered element, other than beryllium,
has less than two stable isotopes.
The same kind of symmetry effect can be seen in the first additions of rotation in the
magnetic dimensions. The positive elements of Group 2A, lithium, beryllium, and boron,
are relatively scarce, while the corresponding members of group 2B, sodium, magnesium,
and aluminum, are relatively abundant. At higher levels this effect is not apparent,
probably because the successive additions to these heavier elements are smaller in
proportion to the total mass, while the effects of other factors become more significant.
One of the features of the rotational patterns of the elements that introduces variations in
their susceptibility to the addition of mass, and corresponding variations in the
proportions in which the different elements occur in material aggregates, is the change in
the magnetic rotation that takes place at the midpoint of each rotational group. For
example, let us again consider the 2B group of elements. The first three of these elements
are formed by successive additions of positive electric displacement to the 2–2 magnetic
rotation. Silicon, the next element, is produced by a similar addition, and the probability
of its formation does not differ materially from that of the three preceding elements.
Another such addition, however, would bring the speed displacement to 2–2–5, which is
unstable. In order to form the stable equivalent, 3–2–(3), the magnetic displacement must
be increased by one unit in one dimension. The probability of accomplishing this result is
considerably lower than that of merely adding one electric displacement unit, and the step
from silicon to phosphorus is consequently more difficult than the additions immediately
preceding. The total amount of silicon in existence therefore builds up to the point where
the lower probability of the next addition reaction is offset by the larger number of silicon
atoms available to participate in the reaction. As a result, silicon should theoretically be
one of the most abundant of the post–helium elements. The same considerations should
apply to the elements at the midpoints of the other rotational groups, when due
consideration is given to the general decrease in abundance that takes place as the atomic
number increases.
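The displacement bookkeeping involved in this step can be illustrated with a short
calculation. The sketch below is not part of Larson's own development; it simply encodes
the equivalence used in the text, on the assumption (inferred from the examples given)
that one added magnetic unit is the equivalent of 2n² electric displacement units, where n
is the second magnetic rotation number. On that basis 2-2-5 rebalances to 3-2-(3), and, as
noted later in this chapter, 4-4-31 rebalances to 5-4-(1), element 117.

    # Illustrative sketch only, not Larson's own procedure.  The assumed rule,
    # inferred from the text's examples, is that one added magnetic unit is the
    # equivalent of 2*n**2 units of electric displacement, n being the second
    # magnetic rotation number.
    def stable_equivalent(a, b, c):
        """Rewrite an over-limit electric displacement as a magnetic increment."""
        step = 2 * b * b            # electric equivalent of one magnetic unit (assumed)
        return a + 1, b, c - step   # a negative value is written (n) in the text

    print(stable_equivalent(2, 2, 5))    # -> (3, 2, -3), i.e. 3-2-(3), phosphorus
    print(stable_equivalent(4, 4, 31))   # -> (5, 4, -1), i.e. 5-4-(1), element 117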
As we will see in Volume III, there are reasons to believe that the composition of
ordinary matter at the end of the first phase of its existence in the material sector, the dust
cloud phase, conforms to these theoretical expectations. However, the abundances of the
various elements in the region accessible to direct observation, a region in a later stage of
development, give us a different picture. The total heavy element content does increase
with the age of the matter. A representative evaluation finds the percentage of elements
heavier than helium ranging from 0.3 in the globular clusters, theoretically the youngest
stellar aggregates that are observable, to 4.0 in the Population I stars and interstellar dust
in the solar neighborhood, theoretically the oldest matter within convenient observational
range. These are approximations, of course, but the general trend is clear.
The peaks in the abundance curve that should theoretically exist at the midpoints of the
rotational groups also make their appearance at the appropriate points in the lower groups
of elements. The situation with respect to carbon is somewhat uncertain, because the
observations are conflicting, but silicon is relatively abundant compared to the
neighboring elements, as it theoretically should be, and iron, the predominant member of
the trio of elements at the midpoint of Group 3A, is almost as abundant as silicon. But
when we turn to the corresponding members of the 3B group, ruthenium, rhodium, and
palladium, we find a totally different situation. Instead of being relatively abundant, as
would be expected from their positions in the atomic series just ahead of another increase
in the magnetic displacement, these elements are rare. This does not necessarily mean
that the relative probability effect due to the magnetic displacement step is absent, as all
of the neighboring elements are likewise rare. In fact, all elements beyond the iron–nickel
group exist only in comparatively minute quantities. Estimates indicate that the combined
amount of all of these elements in existence is less than one percent of the existing
amount of iron.
It does not appear possible to explain the relative abundances in terms of the probability
concept alone. A fairly substantial decrease in abundance compared to iron would be in
order if the age of the local system were such as to put the peak of probability somewhere
in the vicinity of iron, but this should still leave the ruthenium group among the relatively
common elements. The nearly complete elimination of the heavy elements, including this
group, which should theoretically be quite plentiful, requires the existence of some
additional factor: either (1) an almost insurmountable obstacle to the formation of
elements beyond the iron group, or (2) a process that destroys these elements after they
are produced.
There is no indication of the existence of any serious obstacle that interferes with the
formation of the heavy elements. So far as we can determine, the atom building process is
just as applicable to the heavy elements as to the light ones. The building of the heavy
elements is endothermic, but this should not be a serious obstacle, and in any event it
does not apply below Group 4A, and therefore has no bearing on the scarcity of the 3B
and lower division 3A elements. The peculiar distribution of abundances therefore seems
to require the existence of a destructive process that prevents the accumulation of any
substantial quantities of the elements heavier than the iron group, even though they are
produced in the normal amounts. We have already seen, in Chapter 17, that such a
process exists. This process will be examined in detail in Volume III, where it will be
shown that the theoretical results of the process are in full agreement with the observed
distribution of abundances of the elements.
The entire atom building process described in this chapter is duplicated in the cosmic
sector, with space and time interchanged. Here inverse mass is added to move the
elements up the cosmic atomic series.

CHAPTER 27

Mass and Energy


The discovery of the mass-energy relation E = mc² by Einstein was a significant advance
in physical theory, and has already had some far-reaching physical applications. It is, of
course, entirely consistent with the Reciprocal System of theory. Indeed, this theory
provides the explanation of the relation that has heretofore been lacking. It is not always
recognized that, in the light of current physical thought, this is a very strange relation.
Why should the relation between mass and energy be expressible in terms of speed?
Einstein supplied no explanation. He derived the relation from the mathematical
expression of his theory of relativity, but a mathematical derivation does not explain
anything until an interpretation of the mathematics gives that derivation a physical
meaning. The information that has been missing is now supplied by the Reciprocal
System. In the universe of motion defined by that system of theory, mass and energy are
both reciprocal speeds, differing only in dimensions, mass being three-dimensional, while
energy is one-dimensional. Unit energy is therefore the product of unit mass and the
second power of unit speed, the speed of light.
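Written in the inverse-speed notation of the earlier volumes, with energy as t/s, speed as
s/t, and mass, as three-dimensional inverse speed, as t³/s³, this statement reduces to a
simple dimensional identity (a restatement of the paragraph above, not an additional
result):

    E = mc^{2}: \qquad \frac{t^{3}}{s^{3}} \times \left( \frac{s}{t} \right)^{2} = \frac{t}{s}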
This finding as to the true significance of the mass-energy relation has an important effect
on its applicability. It shows that the current belief that a quantity of energy always has a
certain mass associated with it is erroneous. Reciprocal speed can exist either as mass, or
as energy, but not both simultaneously. A quantity of mass, three-dimensional scalar
motion, is equivalent to a quantity of energy, one-dimensional scalar motion, only when
three-dimensional motion is actually transformed into one-dimensional motion, or vice
versa. In other words, an existing quantity of mass does not correspond to any existing
energy, but to the quantity of energy that would come into existence if the mass is
actually converted into energy.
For this reason, Einstein‘s hypothesis of an increase in mass accompanying increased
velocity is inconsistent with our findings. The kinetic energy increment could increase
the mass only if it were converted to mass by some appropriate process, and in that event
it would cease to be kinetic energy; that is, the corresponding velocity would no longer
exist. Actually, this hypothesis of Einstein‘s is inconsistent with his valid concept of the
conversion of mass into energy, regardless of the point of view from which the question
is approached. Mass cannot be an accompaniment of kinetic energy, a quantity that
increases as the energy increases, and also an entity that can be converted into kinetic
energy, a quantity that increases as the energy decreases. The two concepts are mutually
exclusive.
In the theoretical universe of motion now being described, the mass-energy relation is
applicable only to those processes in which mass disappears and energy appears, or vice
versa. The most familiar process of this kind is the interchange between mass and energy
that takes place as a result of radioactivity, or similar atomic transformations. As we saw
in Chapter 25, the primary mass is conserved in these reactions. In the radioactive
disintegration Ra²²⁶ → Rn²²² + He⁴, for example, the total primary mass of the original
radium atom was 226. The primary mass of the residual radon atom, 222, and that of the
ejected alpha particle, 4, likewise add up to 226. Thus any mass-energy conversion
involved in atomic transformations of this kind is confined to the secondary mass.
Current scientific opinion regards this secondary mass component as the mass which,
according to accepted theory, is associated with the ―binding energy‖ that holds the
hypothetical constituents of the hypothetical atomic nucleus together. It must be
conceded that this ―binding energy‖ concept fits in very well with the prevailing ideas as
to the nature of the atomic structure, but it should be remembered that the entire nuclear
concept of the atom is purely hypothetical. No part of it has been verified empirically.
Even Rutherford‘s original conclusion that most of the mass of the atom is concentrated
in a small nucleus—the hypothesis from which the present-day atomic theory was
derived—is not supported except on the basis of the assumption that the atoms are in
contact in the solid state, an assumption that we now find is erroneous. And every
additional step that has been taken in the long series of adjustments and modifications to
which the theory has been subjected as a means of extricating it from difficulties has
involved one or more further assumptions, as pointed out in Chapter 18. Thus the fact
that the ―binding energy‖ concept is consistent with this aggregate of hypotheses has no
physical significance. All available evidence is consistent with our finding that the
difference between the observed total mass and the primary mass is a secondary mass
effect due to motion within the time region, and that the conversion of this secondary
mass to energy is responsible for the energy production during radioactivity or other
atomic transformations.
The nature of the secondary mass was explained in Volume I. The magnitudes of this
quantity applicable to the sub-atomic particles and the hydrogen isotopes were also
calculated. Some studies were made on the higher elements during the early stages of the
investigation, and it was shown in the first edition of this work that there is a fairly
regular decrease in the secondary mass of the most abundant isotope of the elements in
the range from lithium to iron. Beyond iron the values are irregular, but the secondary
mass (negative in this range) remains in the neighborhood of the iron value up to about
the midpoint of the atomic series, after which it gradually decreases, and returns to
positive values in the very heavy elements. The effect of this secondary mass pattern is to
make both the growth process in the light elements and the decay process in the heavy
elements exothermic.
From the foregoing, it follows that the secondary mass in the lower half of the atomic
series, with the exception of hydrogen, is negative. This conflicts with the general belief
that mass is always positive, but our previous development of theory has shown that the
observed mass of an atom is the algebraic sum of the mass equivalents of the speed
displacements of the constituent rotations. Where a rotation is negative, the
corresponding mass component is also negative. The net total mass of a material atom is
always positive only because the magnetic rotation is necessarily positive in the material
sector of the universe, and the magnetic rotation is the principal component of the total.
Just why the minimum in the secondary mass is at or near the midpoint of the atomic
series rather than at one of the extremes is still unknown, but a similar pattern was noted
in some of the material properties examined in the preceding pages of this and the earlier
volume, and it is not unlikely that there is a common cause.
Many investigators have devoted considerable effort to the study and analysis of atomic
transformations that might possibly serve as the source of the energy generated in the sun
and other stars. The general conclusion has been that the most likely reactions are those
in which hydrogen is converted into helium, either directly or through a series of
intermediate reactions. Hydrogen is the most abundant element in the stars, and in the
universe as a whole. This hydrogen conversion process, if actually in operation, could
therefore furnish a substantial supply of energy. But, as brought out in Chapter 25, there
is no actual evidence that the conversion of ordinary hydrogen, the H¹ isotope, to helium
is a naturally occurring process in the stars or anywhere else. Even without the new
information supplied by the investigation here being reported, there are many reasons to
doubt that this process is actually operative, and to question whether it would supply
enough energy to meet the stellar requirements if it were in operation. It obviously fails
by a wide margin to account for the enormous energy output of the quasars and other
compact astronomical objects. As one astronomer states the case, the problem of
accounting for the energy of the quasars ―is widely considered to be the most important
unsolved problem in theoretical astrophysics.‖ 106
The catastrophic effect that the invalidation of the hydrogen conversion process as the
stellar energy source would otherwise have on astronomical theory, leaving it without
any explanation of the manner in which this energy is generated, is avoided by the fact
that the development of the Reciprocal System of theory has revealed the existence of not
only one, but two hitherto unknown physical phenomena, each of which is far more
powerful than the hydrogen conversion process. These newly discovered processes are
not only capable of meeting the energy requirements of the stable stars, but also the far
greater requirements of the supernovae and the quasars (when the quasar energies are
scaled down to the true magnitudes from the inflated values based on the current
interpretations of the redshifts of these objects).
Perhaps some readers may find it difficult to accept the thought that there could be
hitherto unknown processes in operation in the universe that are vastly more powerful
than any previously known process. It might seem that anything of that magnitude should
have made itself known to observation long ago. The explanation is that the results of
these processes are known observationally. Extremely energetic events are prominent
features of present-day astronomy. What has not been known heretofore is the nature of
the processes whereby the enormous energies are generated. This is the information that
the theory of the universe of motion is now supplying.
In Chapter 17 we examined one of these processes, the conversion of mass to energy that
results when the matter in the interior of a star reaches the destructive thermal limit. This
is the long-continuing process that supplies the relatively modest (on the astronomical
scale) amount of energy necessary to meet the requirements of the stable stars. It also
accounts for the large energy output of one kind of supernova, as we will see in Volume
III. At this time we will take a look at what happens when a star arrives at a different kind
of a destructive limit.
The destructive limit identified in Chapter 17 is reached when the total of the outward
displacements (thermal and electric ionization) reaches equality with one of the inward
rotational displacements of the atom, reducing the net displacement of the combination to
zero, and destroying its rotational character. A similar destructive limit is reached when
the inward displacements (rotation and gravitational charge) are built up to a level that,
from the rotational standpoint, is the equivalent of zero.
This concept of the equivalent of zero is new to science, and may be somewhat
confusing, but its nature can be illustrated by consideration of the principle on which the
operation of the stroboscope is based. This instrument observes a rotating object in a
series of views at regular intervals. If the interval is adjusted to equal the rotation time,
the various features of the rotating object occupy the same positions in each view, and the
object therefore appears to be stationary. A similar effect was seen in the early movies,
where the wheels of moving vehicles often appeared to stop rotating, or to rotate
backward.
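The aliasing involved can be shown with a couple of lines of arithmetic. This is a generic
illustration of the stroboscope principle invoked above, not anything specific to the
theory, and the numbers are arbitrary:

    # Stroboscope illustration: the angular change seen between successive views
    # depends only on the viewing interval relative to the rotation period.
    # Arbitrary numbers, for illustration only.
    def apparent_step_degrees(rotation_period, view_interval):
        """Apparent change in angle between one view and the next."""
        return (360.0 * view_interval / rotation_period) % 360.0

    print(apparent_step_degrees(1.0, 1.0))    # 0.0   -> the object appears stationary
    print(apparent_step_degrees(1.0, 0.95))   # 342.0 -> seen as a slow backward rotation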
In the physical situation, if a rotating combination completes its cycle in a unit of time,
each of the displacement units of the combination returns to the same circumferential
position at the end of each cycle. From the standpoint of the macroscopic behavior of the
motion, the positions at the ends of the time units are the only ones that have any
significance—that is, what happens within a unit has no effect on other units—and, under
the conditions specified, these positions lie in a straight line in the reference system. This
means that there is no longer any factor tending to keep the units together as a rotational
combination (an atom). Consequently, they separate as linear motions, and mass is
transformed into energy. It should be understood, however, that this transformation at the
destructive limit has no effect on the motion itself. Scalar motion has no property other
than its positive or negative magnitude, and that remains unchanged. What is altered is
the coupling to the reference system, which is subject to change at the end of any unit, if
the conditions existing at that point are favorable for such a change.
The emphasis on the ends of the units of motion in the foregoing discussion is a reflection
of the nature of the basic motions, as defined in the fundamental postulates of the
Reciprocal System of theory. According to these postulates, the basic units of motion are
discrete. This does not mean that the motion proceeds by a succession of jumps. On the
contrary, motion is inherently a continuous progression. A new unit of the progression
begins at the point where the preceding unit ends, so that continuity, in this sense, is
maintained from unit to unit, as well as within units. But since the units are separate
entities, the effects of the events that take place in one unit cannot be carried forward to
the next (although the combination of the internal and external features of the same unit
may be effective, as in the case of the primary and secondary mass). The individual units
of motion may continue on the same basis, but the coupling of the motion to the reference
system is subject to change to conform to whatever conditions may exist at the end of a
unit. When the atom has returned to the situation that existed at the original zero, as is
true if the end of the rotational cycle coincides with the end of the time unit, the motion
has reached a new starting point, a new zero, we may say.
For the reasons previously given, the limiting value, the equivalent of zero in each scalar
dimension, is eight units of one-dimensional, or four units of two-dimensional, rotational
displacement. In the notation used herein, the latter is a 4-4 magnetic combination.
However, as indicated in Chapter 24, the destructive limit is not reached until the
displacement in the electric dimension also arrives at the equivalent of the last magnetic
unit. A rotational combination (atom) is therefore stable, at zero magnetic ionization, up
to 4-4-31, or the equivalent 5-4-(1), which is element 117. One more step reaches the
limit at which the rotational motion terminates.
If the rotational limit is reached in atoms whose individual magnetic ionization is above
the general level in the aggregate of which these atoms are constituents, the effect of
approaching the limit is that the atoms become radioactive, and eject portions of their
masses in the form of alpha particles, or other fragments. This prevents the building of
elements heavier than number 117, but it does not result in destruction of primary mass
such as that which occurs at the destructive thermal limit. Thus the radioactivity is a
means of avoiding the destructive effects of reaching the limiting value of the magnetic
displacement.
This situation is analogous to a number of others that are more familiar. For example, we
saw in Chapter 5 that the limiting value of the specific heat of a solid is reached at a
relatively low temperature. Beyond this limit the atom, or molecule, enters the liquid
state. The transition requires a substantial energy input, and since the lower energy states
are more probable in a low energy environment, the atom avoids the need to provide the
energy increment by changing to a different thermal vibration pattern, if it has the
capability of so doing. The atoms of the heavier elements make several changes of this
kind as new limiting values of the specific heat are encountered at successively higher
temperatures. Eventually, however, a point is reached at which no further expedients of
this kind are available, and the atom must pass into the liquid state. Similarly, the
probabilities favor the continued existence of the combination of motions that constitutes
the atom, as long as this is possible. The destructive effects of arriving at the
displacement limit are therefore avoided by the ejection of mass. But here, too, as in the
case of the specific heat, a point is eventually reached where the level of magnetic
ionization tending to increase the atomic mass prevents further ejection of mass from the
atom, and arrival at the destructive limit can no longer be avoided.
The consequences of reaching this rotational displacement limit at the equivalent of zero
are qualitatively identical with those of reaching the thermal displacement limit at zero.
The various rotational components cancel out, and the motion reverts to the linear basis.
This transforms mass into kinetic energy, most of which is imparted to the residue of the
atoms, or to other matter in the environment. The remainder goes into electromagnetic
radiation. From a quantitative standpoint, there are some significant differences between
the two phenomena. The thermal limit applies only to the heaviest element that is present
in the aggregate in a significant quantity, and the rate at which this element arrives at the
limit is regulated by a process that will be discussed in Volume III. The elements lower in
the atomic series are not affected. Furthermore, the conversion of rotational to linear
displacement (mass to energy) at the thermal limit does not necessarily apply to more
than one of the magnetic displacement units of the atom, and a large part of the atomic
mass may therefore remain intact, either as a residual atom or a number of fragments.
Consequently, the thermal limit has no catastrophic effect until the temperature reaches
the destructive limit of an element, iron, that is present in relatively large quantities. On
the other hand, arrival at the magnetic displacement limit affects the entire mass of each
atom, and the only portion of the mass of an aggregate that remains intact is that in the
outer portions of the aggregate where the magnetic ionization level is lower than in the
deeper interior. There is no process that limits the rate of disintegration at this destructive
limit. The resulting explosion, known as a Type II supernova, is therefore much more
powerful (relative to the mass of the exploding star) than the Type I supernova that
occurs at the thermal limit, although its full magnitude is not evident from direct
observation, for reasons that will be explained in Volume III.
While the thermal disintegration process is operative in every star, it does not necessarily
proceed all the way to destruction of the star. The extent to which the mass of the star,
and consequently the temperature, increases depends on its environment. Some stars will
accrete enough mass to reach the temperature limit and explode; others will not. But the
increase in the magnetic ionization level is a continuing process in all environments, and
it necessarily results in arrival at the magnetic destructive limit when sufficient time has
elapsed. This limit is thus essentially an age limit.
A process related to those that have been described in the foregoing paragraphs is the
sequence of events that counterbalances the conversion of three-dimensional motion
(mass) into one-dimensional motion (energy) in the stars. The energy that is generated by
atomic disintegration leaves the stars in the form of radiation. According to present-day
views, this radiation moves outward at the speed of light, and most of it eventually
disappears into the depths of space. The theory of the universe of motion gives us a very
different picture. It tells us that inasmuch as the photons of radiation have no capability of
independent motion relative to the natural datum, they remain stationary in the natural
reference system, or move inward at the speed of the emitting object. Each photon
therefore eventually encounters, and is absorbed by, an atom of matter. The net result of
the generation of stellar energy by atomic disintegration is thus an increase in the thermal
energy of other matter. As will be explained in Volume III, the matter of the universe is
subject to a continuing process of aggregation under the influence of gravitation.
Consequently, all matter in the material sector, with the added thermal energy, is
ultimately absorbed by one of the giant galaxies that are the end products of the
aggregation process.
When supernova explosions in the interior of one of these giant galaxies become frequent
enough to raise the average particle speed above the unit level, some of the full units of
speed thus made available are converted into rotational motion, creating cosmic atoms
and particles. This cosmic atom building, which theoretically operates on a very large
scale in the galactic interiors, has been observed on a small scale in experiments, the
results of which were discussed in Volume I. In the experiments, the high energy
conditions are only transient, and the cosmic atoms and particles that are produced from
the high level kinetic energy quickly decay into particles of the material system. Some
such decays no doubt also occur in the galactic interiors, but in this case the high energy
condition is quasi-permanent, favoring continued existence of the cosmic units until
ejection of the quasar takes place. In any event, the production of these rotational
combinations has increased the amount of existing cosmic or ordinary matter at the
expense of the amount of existing energy, thus reversing the effect of the production of
energy by disintegration of atoms of matter.
In concluding this last chapter of a volume dealing with the properties of matter, it will be
appropriate to call attention to the significant difference between the role that matter
plays in conventional physical theory, and its status in the theory of the universe of
motion. The universe of present-day physical science is a universe of matter, one in
which the presence of matter is the central fact of physical existence. In this universe of
matter, space and time provide the background, or setting, for the action of the universe;
that is, according to this view, physical phenomena take place in space and in time.
As Newton saw them, space and time were permanent and unchanging, independent of
each other and of the physical activity taking place in them. Space was assumed to be
Euclidean (―flat‖ in the jargon of present-day mathematical physics), and time was
assumed to flow uniformly and unidirectionally. All magnitudes, both of space and of
time, were regarded as absolute; that is, not dependent on the conditions under which
they are measured, or on the manner of measurement. A subsequent extension of the
theory, designed to account for some observations not covered by the original version,
assumed that space is filled with an imponderable fluid, the ether, which interacts with
physical objects.
Einstein‘s relativity theories, which have replaced Newton‘s theory as the generally
accepted view of the theoretical physicists, retain Newton‘s concept of the general nature
of space and time. To Einstein these entities constitute a background for the action of the
universe, just as they did for Newton. Instead of being a three-dimensional space and a
one-dimensional time, independent of each other, as they were for Newton, they are
amalgamated into a four-dimensional spacetime in Einstein‘s system, but they still have
exactly the same function; they form the framework, or container, within which physical
entities exist and physical events take place. Furthermore, these basic physical entities
and phenomena are essentially identical with those that exist in Newton‘s universe.
It is commonly asserted that Einstein eliminated the ether from physical theory. In fact,
however, what he actually did was to eliminate the name ―ether,‖ and to apply the name
―space‖ to the concept previously called the ―ether.‖ Einstein‘s ―space‖ has the same kind
of properties that were formerly assigned to the ether, as he admits in the following
statement:
We may say that according to the general theory of relativity space is endowed with physical qualities; in
this sense, therefore, there still exists an ether.25

The downfall of Newtonian physics was due to a gradual accumulation of discrepancies


between theory and observation, the most critical being the results of the Michelson-
Morley experiment and the measurements of the advance of the perihelion of Mercury,
neither of which could be explained within the limits of Newton‘s system. Some
modification of that system was obviously necessary. The question, as it stood around the
end of the nineteenth century, was what form the revision of Newton‘s ideas should take.
As brought out in Chapter 13, in order to qualify as ―theory,‖ in the full meaning of the
term, the treatment of a physical phenomenon must cover not only its mathematical
aspects, but also its physical aspects; that is, it must provide a conceptual understanding
of the entities and relations to which the mathematics refer. However, the general
tendency in recent years has been to concentrate on the mathematical development and to
omit the parallel conceptual development, substituting conceptual interpretations of the
individual mathematical results. Richard Feynman describes the present situation in this
manner:
Every one of our laws is a purely mathematical statement in rather complex and abstruse mathematics. 56

In his attack on the problem of revising Newton‘s theory, Einstein not only adopted this
policy of widening the latitude for theory construction by restricting his development to
the mathematical aspects of the subject under consideration, and thereby avoiding any
conceptual limitations on his basic assumptions, but went a step farther, and loosened the
normal mathematical constraints as well. He first introduced a high degree of flexibility
into the numerical values by discarding ―the idea that co-ordinates must have an
immediate metrical meaning [an expression that he defines as the existence of a specific
relationship between differences of coordinates and measurable lengths and times].‖ 36
As C. Moller describes this theoretical picture:
In accelerated systems of reference the spatial and temporal coordinates thus lose every physical
significance; they simply represent a certain arbitrary, but unambiguous, numbering of physical events. 107

Along with this flexibility of physical measurement, which greatly increased the latitude
for making additional assumptions, Einstein introduced a similar flexibility into the
geometry of spacetime by assuming that it is distorted or ―curved‖ by the presence of
matter. The particular aim of this expedient was to provide a means of dealing with
gravitation, a key issue in the general problem. One textbook explains the new view in
this manner:
What we call a gravitational field is equivalent to a ―warping‖ of time and space, as if it were a rubbery sort
of material that stretched out of shape near heavy bodies.108

The basis for this assertion is an assumption, the assumption that, for some unspecified
reason, space and matter exert an influence upon each other. ―Space acts on matter,
telling it how to move. In turn, matter acts on space, telling it how to curve.‖ 109
(Misner, Thorne, and Wheeler) But neither Einstein nor his successors have given us any
explanation of how such interactions are supposed to take place—how space ―tells‖
matter, or vice versa. Nor does the theory explain inertia, an aspect of the gravitational
situation that has given the theorists considerable trouble. As Abraham Pais sums up this
situation:
It must also be said that the origin of inertia is and remains the most obscure subject in the theory of
particles and fields.110

Today there is a tendency to call upon Mach‘s principle, which attributes the local
behavior of matter to the influence of the total quantity of matter in the universe. Misner,
Thorne, and Wheeler say that ―Einstein‘s theory identifies gravitation as the mechanism
by which matter there (the distant stars) influences inertia here.‖ 111 But, as indicated in
the statement by Pais, this explanation is far from being persuasive. It obviously gives us
no answer to the question that baffled Newton: How does gravitation originate? Indeed,
there is something incongruous about the acceptance of Mach‘s principle by the same
scientific community that is so strongly opposed to the concept of action at a distance.
The fact is that neither Newton‘s theory nor Einstein‘s theory tells us anything about the
―mechanism‖ of gravitation. Both take the existence of mass as something that has to be
accepted as a given feature of the universe, and both require that we accept the fact that
masses gravitate, without any explanation as to how, or why, this takes place. The only
significant difference between the two theories, in this respect, is that Newton‘s theory
gives us no reason why masses gravitate, whereas Einstein‘s theory gives us no reason
why masses cause the distortion of space that is asserted to be the reason for gravitation.
As Feynman sums up the situation, ―There is no model of the theory of gravitation today,
other than the mathematical form.‖ 56
The concept of a universe of motion now provides a gravitational theory that not only
explains the gravitational mechanism, but also clarifies its background, showing that
mass is a necessary consequence of the basic structure of the universe, and does not have
to be accepted as unexplainable. This theory is based on a new, and totally different, view
of the status of space and time in the physical universe. Both Newton and Einstein saw
space and time as the container for the constituents of the universe. In the theory of the
universe of motion, on the other hand, space and time are the constituents of the universe,
and there is no container. On this basis, the space of the conventional spatio-temporal
reference system is just a reference system—nothing more. Thus it cannot be curved or
otherwise altered by the presence or action of anything physical. Furthermore, since the
coordinates of the reference system are merely representations of existing physical
magnitudes, they automatically have the ―metrical meaning‖ that Einstein eliminated
from his theory to attain the flexibility without which it could not be fitted to the
observations.
The theory of the universe of motion is the first physical theory that actually explains the
existence of gravitation. It demonstrates that the gravitational motion is a necessary
consequence of the properties of space and time, and that the same thing that makes an
atom an atom, the rotationally distributed scalar motion, also causes it to gravitate.
Additionally, the same motion is responsible for inertia.
Of course, this return to absolute magnitudes and mathematical rigidity invalidates the
conceptual interpretations of Einstein‘s solutions of the problems raised by the observed
deviations from the consequences of Newton‘s theory, and requires finding new answers
to these problems. But these answers have emerged easily and naturally during the course
of the development of the details of the new theory. In most cases no changes in the
existing formulation of the mathematical relations have been required. While Einstein‘s
modification of Newton‘s theory was almost entirely mathematical, our modification of
the Newton-Einstein system is primarily conceptual, because the errors in currently
accepted theory are nearly all in the conceptual interpretation of the observations and
measurements; that is, in the prevailing understanding of the meaning of the
mathematical terms and the relations between them.
The changes that the new theory makes in the conceptual aspects of the gravitational
situation do not affect any of the valid mathematical results of Einstein‘s theory. For
example, most of the mathematical consequences of the general theory of relativity that
have led to its acceptance by the scientific community are derived from one of its
postulates, the Principle of Equivalence, which states that gravitation is the equivalent of
an accelerated motion. In the theory of the universe of motion, gravitation is an
accelerated motion. It follows that any conclusion that can legitimately be drawn from the
Principle of Equivalence, such as the existence of gravitational redshifts, can likewise be
derived from the postulates of the theory of the universe of motion in exactly the same
form.
The agreement between the two theories that exists in these subsidiary areas, and in the
mathematical results, does not extend to the fundamentals of gravitation. Here the
theories are far apart. The theoretical development reported in the several volumes of this
work shows that the attempt to resolve physical issues by mathematical means—the path
that has heretofore been followed in dealing with fundamental physics—precludes any
significant conceptual changes in theory, whereas, as our findings have demonstrated,
there are major errors in the basic assumptions upon which the mathematical theories
have been constructed.
Until comparatively recently it was not feasible to locate and correct these errors, because
access to a large amount of factual information is indispensable to such an undertaking,
and the available supply of information was simply not adequate. Continued research has
overcome this obstacle, and the development of the theory of the universe of motion has
now identified the ―machinery,‖ not only of gravitation, but of physical processes in
general. We are now able to identify the common denominator of all of the fundamental
physical entities, and by defining it, we define the entire structure of the physical
universe.

References
1.Weisskopf, V. F., Lectures in Theoretical Physics, Vol. III, Britten, J. Downs, and B.
Downs, editors, Interscience Publishers, New York, 1961, p. 80.
2.Wyckoff, R. W. G., Crystal Structures, and supplements, Interscience Publishers, New
York, 1948 and following.
3.All compression data in the range below 100,000 kg/cm² not otherwise identified are
from the works of P. W. Bridgman, principally his book, The Physics of High Pressure,
G. Bell & Sons, London, 1958, and a series of articles in the Proceedings of the American
Academy of Arts and Sciences.
4.Kittel, Charles, Introduction to Solid State Physics, 5th edition, John Wiley & Sons,
New York, 1976.
5.McQueen and Marsh, Journal of Applied Physics, July 1960.
6.National Bureau of Standards, Tables of Normal Probability Functions, Applied
Mathematics Series, No. 23, Washington, DC, 1953.
7.Specific heat data are mainly from Hultgren, et al, Selected Values of the
Thermodynamic Properties of the Elements, American Society for Metals, Metals Park,
OH, 1973, and Thermophysical Properties of Matter, Vol. 4, Touloukian and Buyko,
editors, IFI Plenum Data Co., New York, 1970, with some supplemental data from
original sources.
8.Heisenberg, Werner, Physics and Philosophy, Harper & Bros., New York, 1958, p. 189.
9.Heisenberg, Werner, Philosophic Problems of Nuclear Science, Pantheon Books, New
York, 1952, p. 55.
10.Bridgman, P. W., Reflections of a Physicist, Philosophical Library, New York, 1955,
p. 186.
11.Weisskopf, V. F., American Scientist, July-Aug., 1977.
12.Values of the coefficient of expansion are from Thermophysical Properties of Matter,
op. cit., Vol. 12.
13.Duffin, W. J., Electricity and Magnetism, 2nd edition, John Wiley & Sons, New York,
1973, p. 122.
14.Ibid., p. 52.
15.Robinson, F. N. H., in Encyclopedia Britannica, 15th edition, Vol. 6, p. 543.
16.Stewart, John W., in McGraw-Hill Encyclopedia of Science and Technology, Vol. 4,
p. 199.
17.Lande, Alfred, Philosophy of Science, 24-309.
18.Meaden, G. T., Electrical Resistance of Metals, Plenum Press, New York, 1965, p. 1.
19.Ibid., p. 22.
20.The resistance values are from Meaden, op. cit., supplemented by values from other
compilations and original sources.
21.Values of the thermal conductivity are from Thermophysical Properties of Matter,
op. cit., Vol. I.
22.Davies, Paul, The Edge of Infinity, Simon & Schuster, New York, 1981, p. 137.
23.Alfven, Hannes, Worlds-Antiworlds, W. H. Freeman & Co., San Francisco, 1966,
p. 92.
24.Einstein and Infeld, The Evolution of Physics, Simon & Schuster, New York, 1938,
p. 185.
25.Einstein, Albert, Sidelights on Relativity, E. P. Dutton & Co., New York, 1922, p. 23.
26.Ibid., p. 19.
27.Einstein and Infeld, op. cit., p. 159.
28.Dicke, Robert H., American Scientist, Mar. 1959.
29.Bridgman, P. W., The Way Things Are, Harvard University Press, 1959, p. 153.
30.Heisenberg, Werner, Physics and Philosophy, op. cit., p. 129.
31.Eddington, Arthur S., The Nature of the Physical World, Cambridge University Press,
1933, p. 156.
32.Einstein and Infeld, op. cit., p. 158.
33.Science News, Jan. 31, 1970.
34.Bridgman, P. W., The Way Things Are, op. cit., p. 151.
35.Von Laue, Max, Albert Einstein: Philosopher-Scientist, edited by P. A. Schilpp, The
Library of Living Philosophers, Evanston, IL, 1949, p. 517.
36.Andrade, E. N. daC., An Approach to Modern Physics, G. Bell & Sons, London, 1960,
p. 10.
37.Carnap, Rudolf, Philosophical Foundations of Physics, Basic Books, New York,
1966, p. 234.
38.Gardner, Martin, The Ambidextrous Universe, Charles Scribner's Sons, New York,
1979, p. 200.
39.Einstein, Albert, Albert Einstein: Philosopher-Scientist, op. cit., p. 21.
40.Duffin, W. J., op. cit., p. 25.
41.Ibid., p. 281.
42.Gerholm, Tor R., Physics and Man, The Bedminster Press, Totowa, NJ, 1967, p. 135.
43.Ibid., pp. 147, 151.
44.Davies, Paul, Space and Time in the Modern Universe, Cambridge University Press,
1977, p. 139.
45.Einstein, Albert, Albert Einstein: Philosopher-Scientist, op. cit., p. 67.
46.Lorrain and Corson, Electromagnetism, W. H. Freeman & Co., San Francisco, 1978,
p. 95.
47.Maxwell, J. C., Royal Society Transactions, Vol. CLV.
48.Rojansky, Vladimir, Electromagnetic Fields and Waves, Dover Publications, New
York, 1979, p. 280.
49.Smythe, William R., McGraw-Hill Encyclopedia, op. cit., Vol. 4, p. 338.
50.Kip, Arthur, Fundamentals of Electricity and Magnetism, 2nd edition, McGraw-Hill
Book Co., New York, 1969, p. 136.
51.Duffin, W. J., op. cit., p. 301.
52.Ibid., p. 313.
53.Ibid., p. 302.
54.Bleaney and Bleaney, Electricity and Magnetism, 3rd edition, Oxford University
Press, New York, 1976, p. 64.
55.Dobbs, E. R., Electricity and Magnetism, Routledge & Kegan Paul, London, 1984,
p. 50.
56.Feynman, Richard, The Character of Physical Law, MIT Press, 1967, p. 39.
57.Duffin, W. J., op. cit., p. 3.
58.Rogers, Eric M., Physics for the Inquiring Mind, Princeton University Press, 1960,
p. 550.
59.McCaig, Malcolm, in Permanent Magnets and Magnetism, D. Hadfield, editor, John
Wiley & Sons, New York, 1962, p. 18.
60.Park, David, Contemporary Physics, Harcourt, Brace & World, New York, 1964,
p. 122.
61.Mellor, J. W., Modern Inorganic Chemistry, Longmans, Green & Co., London, 1919.
62.Rogers, Eric M., op. cit., p. 407.
63.Park, David, op. cit., p. 15.
64.Margenau, Henry, Open Vistas, Yale University Press, 1961, p. 72.
65.Thorne, Kip S., Scientific American, Dec. 1974.
66.Misner, Thorne, and Wheeler, Gravitation, W. H. Freeman & Co., New York, 1973,
p. 620.
67.Feynman, Richard, The Feynman Lectures on Physics, Vol. II, Addison-Wesley
Publishing Co., Menlo Park, CA, 1977, p. 1-3.
68.Bueche, F. J., Understanding the World of Physics, McGraw-Hill Book Co., New
York, 1981, p. 328.
69.Pais, Abraham, Subtle is the Lord, Oxford University Press, New York, 1982, p. 34.
70.Hanson, Norwood R., in Scientific Change, edited by A. C. Crombie, Basic Books,
New York, 1963, p. 492.
71.Schrödinger, Erwin, Science and Humanism, Cambridge University Press, 1952, p. 22.
72.Schrödinger, Erwin, Science and the Human Temperament, W. W. Norton & Co., New
York, 1935, p. 154.
73.Heisenberg, Werner, Physics and Philosophy, op. cit., p. 129.
74.Feynman, Richard, The Character of Physical Law, op. cit., p. 168.
75.Jastrow, Robert, Red Giants and White Dwarfs, Harper & Row, New York, 1967,
p. 41.
76.Harwit, Martin, Cosmic Discovery, Basic Books, New York, 1981, p. 243.
77.Ford, K. W., Scientific American, Dec. 1963.
78.Hulsizer and Lazarus, The New World of Physics, Addison-Wesley Publishing Co.,
Menlo Park, CA, 1977, p. 254.
79.Bahcall, J. N., Astronomical Journal, May 1971.
80.Duffin, W. J., op. cit., p. 165.
81.Hulsizer and Lazarus, op. cit., p. 255.
82.McCaig, Malcolm, op. cit., p. 35.
83.Anderson, J. C., Magnetism and Magnetic Materials, Chapman and Hall, London,
1968, p. 1.
84.McCaig, Malcolm, Permanent Magnets in Theory and Practice, John Wiley & Sons,
New York, 1977, p. 339.
85.Duffin, W. J., op. cit., p. 156.
86.McCaig, Malcolm, Permanent Magnets, op. cit., p. 341.
87.Ibid., p. 8.
88.Shortley and Williams, Elements of Physics, 2nd edition, Prentice-Hall, New York,
1955, p. 717.
89.Lorrain and Corson, op. cit., p. 360.
90.Feynman, Richard, The Feynman Lectures on Physics, op. cit., Vol. II, p. 13-1.
91.Bueche, F. J., op. cit., p. 259.
92.Ibid., p. 267.
93.Rogers, Eric M., op. cit., p. 562.
94.Feather, Norman, Electricity and Matter, Edinburgh University Press, 1968, p. 104.
95.Kip, Arthur, op. cit., p. 278.
96.Ibid., p. 283.
97.Duffin, W. J., op. cit., p. 210.
98.Lorrain and Corson, op. cit., p. 334.
99.Handbook of Chemistry and Physics, 66th edition, Chemical Rubber Publishing Co.,
Cleveland, 1976.
100.Martin, D. H., Magnetism in Solids, MIT Press, 1967, p. 9.
101.Dobbs, E. R., op. cit., p. 54.
102.Lorrain and Corson, op. cit., p. 237.
103.Duffin, W. J., op. cit., p. 60.
104.Concepts in Physics, CRM Books, Del Mar, CA, 1979, p. 266.
105.Davies, Paul, The Accidental Universe, Cambridge University Press, 1982, p. 60.
106.Mitton, Simon, Astronomy and Space, Vol. I, edited by Patrick Moore, Neale Watson
Academic Publishers, New York, 1972.
107.Moller, C., The Theory of Relativity, Clarendon Press, Oxford, 1952, p. 226.
108.Hulsizer and Lazarus, op. cit., p. 222.
109.Misner, Thorne, and Wheeler, op. cit., p. 5.
110.Pais, Abraham, op. cit., p. 288.
111.Misner, Thorne, and Wheeler, op. cit., p. 543.

DEWEY B. LARSON: THE COLLECTED WORKS


Dewey B. Larson (1898-1990) was an American
engineer and the originator of the Reciprocal System of
Theory, a comprehensive theoretical framework capable
of explaining all physical phenomena from subatomic
particles to galactic clusters. In this general physical
theory space and time are simply the two reciprocal
aspects of the sole constituent of the universe–motion.
For more background information on the origin of
Larson‘s discoveries, see Interview with D. B. Larson
taped at Salt Lake City in 1984. This site covers the
entire scope of Larson‘s scientific writings, including his
exploration of economics and metaphysics.
Physical Science

The Structure of the Physical Universe: The original groundbreaking publication
wherein the Reciprocal System of Physical Theory was presented for the first time.

The Case Against the Nuclear Atom: "A rude and outspoken book."

Beyond Newton: "...Recommended to anyone who thinks the subject of gravitation and
general relativity was opened and closed by Einstein."

New Light on Space and Time: A bird's eye view of the theory and its ramifications.

The Neglected Facts of Science: Explores the implications for physical science of the
observed existence of scalar motion.

Quasars and Pulsars: Explains the most violent phenomena in the universe.

Nothing but Motion: The first volume of the revised edition of The Structure of the
Physical Universe, developing the basic principles and relations.

Basic Properties of Matter: The second volume of the revised edition of The Structure
of the Physical Universe, applying the theory to the structure and behavior of matter,
electricity and magnetism.

The Universe of Motion: The third volume of the revised edition of The Structure of
the Physical Universe, applying the theory to astronomy.

The Liquid State Papers: A series of privately circulated papers on the liquid state of
matter.

The Dewey B. Larson Correspondence: Larson's scientific correspondence, providing
many informative sidelights on the development of the theory and the personality of
its author.

The Dewey B. Larson Lectures: Transcripts and digitized recordings of Larson's
lectures.

The Collected Essays of Dewey B. Larson: Larson's articles in Reciprocity and other
publications, as well as unpublished essays.

Metaphysics

Beyond Space and Time: A scientific excursion into the largely unexplored territory of
metaphysics.

Economic Science

The Road to Full Employment: The scientific answer to the number one economic
problem.

The Road to Permanent Prosperity: A theoretical explanation of the business cycle and
the means to overcome it.
From: http://www.reciprocalsystem.com/um/index.htm

The Universe of Motion


DEWEY B. LARSON
Volume III of a revised and enlarged edition of
The Structure of the Physical Universe

Preface
1. Introduction
2. Galaxies
3. Globular Clusters
4. The Giant Star Cycle
5. The Later Cycles
6. The Dwarf Star Cycle
7. Binary and Multiple Stars
8. Evolution–Globular Cluster Stars
9. Gas and Dust Clouds
10. Evolution–Galactic Stars
11. Planetary Nebulae
12. Ordinary White Dwarfs
13. The Cataclysmic Variables
14. Limits
15. The Intermediate Region
16. Type II Supernovae
17. Pulsars
18. Radiative Processes
19. X-ray Emission
20. The Quasar Situation
21. Quasar Theory
22. Verification
23. Quasar Redshifts
24. Evolution of Quasars
25. The Quasar Populations
26. Radio Galaxies
27. Pre-Quasar Phenomena
28. Inter-Sector Relations
29. The Non-Existent Universe
30. Cosmology
31. Implications
References

Preface
This volume applies the physical laws and principles of the universe of motion to a
consideration of the large-scale structure and properties of that universe, the realm of
astronomy. Inasmuch as it presupposes nothing but a familiarity with physical laws and
principles, it is self-contained in the same sense as any other publication in the
astronomical field. However, the laws and principles applicable to the universe of motion
differ in many respects from those of the conventional physical science. For the
convenience of those who may wish to follow the development of thought all the way
from the fundamentals, and are not familiar with the theory of the universe of motion, I
am collecting the most significant portions of the previously published books and articles
dealing with that theory, and incorporating them, together with the results of some further
studies, into a series of volumes with the general title The Structure of the Physical
Universe. The first volume, which develops the fundamental physical relations, has
already been published as Nothing but Motion. This present work is designated as
Volume III. Volume II,
Basic Properties of Matter, will follow.
As stated in Nothing But Motion, the development of thought in these books is purely
theoretical. I have formulated a set of postulates that define the physical universe, and I
have derived all of my conclusions in all physical fields by developing the necessary
consequences of those postulates, without introducing anything from any other source. A
companion volume, The Neglected Facts of Science, shows that many of the theoretical
conclusions, including a number of those that differ most widely from conventional
scientific thought, can also be derived from purely factual premises, if some facts of
observation that have heretofore been overlooked or disregarded are taken into
consideration.
As explained in the introductory chapter of this volume, astronomy is the great testing
ground for physical theory. Here we can ascertain whether or not the physical relations
established under the relatively moderate conditions that prevail in the terrestrial
environment still hold good under the extremes of temperature, pressure, size, and speed
to which astronomical entities are subjected. In order to be valid, the conclusions derived
from theory must agree with all facts definitely established by astronomical observation,
or at least must not be inconsistent with any of them. To show that such an agreement
exists, I have compared the theoretical conclusions with the astronomical evidence at
each step of the development. It should be understood, however, that this comparison
with observation is purely for the purpose of verifying the conclusions; the observations
play no part in the process by which these conclusions were reached.
There are substantial differences of opinion, in many instances, as to just what the
observations actually do mean. As in the particle physics situations discussed in
previous publications, the ―observed facts‖ in astronomy are often ten percent
observation and ninety percent interpretation. In those cases where the astronomers are
divided, the most that any theoretical work can do is to agree with one of the concflicting
opinions as to what has been observed. I have therefore identified the sources of all of the
astronomical information that I have used in the comparisons. Since this work is
addressed to scientists in general, rather than to a purely astronomical audience, I have
taken information from readily accessible sources, where possible, in preference to the
original reports in the astronomical literature.
Once again, as in the preface to Nothing But Motion, I have to say that it is not feasible to
acknowledge all of the many individual contributions that have been made toward
developing the details of the theoretical system and bringing it to the attention of the
scientific community, but I do want to renew my expression of appreciation of the efforts
of the officers and members of the organization that has been promoting understanding
and acceptance of my results. Since the earlier volume was published, this organization,
founded in 1970 as New Science Advocates, has changed its name to the International
Society of Unified Science, in recognition of its increased activity in foreign countries,
three of which are currently represented on the Board of Trustees.
Publication of this present volume has been made possible through the efforts of Rainer
Huck, who has acted as business manager of the publishing project, Jan Sammer, who
has handled all of the many operations involved in taking the work from the manuscript
stage to the point at which it was ready for the printers, and my wife, without whose
encouragement and logistic support the book could not have been written. Also
participating were Eden G. Muir, who prepared the illustrations, and Ronald Blackburn,
Maurice Gilroy, Frank Meyer, and Robin Sims, who assisted in the financing.
March 1984
D. B. Larson

Copyright © 1959, 1984


By Dewey B. Larson
All rights reserved

CHAPTER 1

Introduction
This volume is a continuation of a series which undertakes to determine the
characteristics that the physical universe must necessarily have if it is composed entirely
of discrete units of motion, and to show that the universe thus defined is identical, item
by item, with the observed physical universe. The specific objective of this present
volume is to extend the physical relations and principles developed in the earlier volumes
to a description of the large-scale features of the universe of motion. This is the field of
astronomy, and the pages that follow will resemble an astronomical treatise. In order to
avoid misunderstanding, therefore, we will begin by emphasizing that this is not an
astronomical work, in the usual sense.
Astronomy and astrophysics are based on facts determined by observation. Their
objective is to interpret these facts and relate them to each other in a systematic manner.
The primary criterion by which the results of these interpretive activities are judged is
how well they account for, and agree with, the relevant observational data. But
astronomical data are relatively scarce, and often conflicting. Opinion and judgment
therefore play a very large part in the decisions that are made between conflicting
theories and interpretations. The question to be answered, as it is usually viewed, is
"which is the best explanation?" In practice this means the one that fits best with current
interpretations in related astronomical areas.
The conclusions that are expressed in this work, on the other hand, are derived from the
postulated properties of space and time in a universe of motion, and they are independent
of the astronomical observations. These conclusions must, of course, be consistent with
all that is definitely known from observation, but whatever observational information
may exist, or may not exist, plays no part in the development of thought that arrives at the
conclusions that are stated. Observed astronomical objects and phenomena are not being
described and discussed in this work as a foundation on which to construct theory. They
are introduced only for the purpose of showing that these observations are consistent with
the conclusions derived from theory. Thus the present volume is not an astronomical
work, which interprets and systematizes the information derived from astronomical
observation; it is a physical work, which extends the development of physical theory in
the two preceding volumes into the astronomical field, confirming the previously derived
laws and principles by showing that they still apply under extreme conditions.
The availability of this accurate new physical theory, developed and verified in other
fields where the facts are more readily accessible, now gives us a source of information
about astronomical matters that is not subject to the limitations that are inherent in the
procedures that the astronomers must necessarily employ. It gives us a unique
opportunity to examine the subject matter of astronomy from an outside viewpoint
completely independent of any conclusions that have been reached from the results of
astronomical observation.
The record of advancement of astronomical knowledge has been largely a story of the
invention and utilization of new and more powerful instruments. The optical telescope,
the spectroscope, the photographic plate, the radio telescope, the x-ray telescope, the
photoelectric cell—these and the major improvements that have been made in their power
and accuracy are the principal landmarks of astronomical progress. It is a matter of
considerable significance, therefore, that in application to astronomical phenomena, the
theory of the universe of motion, the Reciprocal System of theory, as we are calling it,
has the characteristics of a new instrument of exceptional power and versatility, rather
than those of an ordinary theory.
Astronomy has many theories, of course, but the products of those theories are quite
different from the results obtained from an instrument, inasmuch as they are determined
primarily by what is already known or is believed to be known, about astronomical
phenomena. This existing knowledge, or presumed knowledge, is the raw material from
which the theory is constructed, and conformity with the data already accumulated, and
the prevailing pattern of scientific thought, is the criterion by which the conclusions
derived from the theory are tested. The results obtained from an instrument, on the other
hand, are not influenced by the current state of knowledge or opinion in the area
involved. (The interpretation of these results may be so influenced, but that is another
matter.) If those results conflict with accepted ideas, it is the ideas that must be changed,
not the information that the instrument contributes. The point now being emphasized is
that the Reciprocal System, like the instrument and unlike the ordinary theory, is wholly
independent of what is known or believed about the phenomena under consideration.
Stars and galaxies are found in the existing astronomical theories because they are put
into these theories. They are aggregates of matter, they exert gravitational forces, and
they emit radiation, and so on, in the theoretical picture, because this information was put
into the theories. They theoretically generate the energy that is required to maintain the
radiation by converting matter to energy, because this, too, was put into the astronomical
theories. They conform to the basic laws of physics and chemistry; they follow the
principles laid down by Faraday, by Maxwell, by Newton, and by Einstein, because these
laws and principles were put into the theories. To this vast amount of knowledge and
pseudo-knowledge drawn from the common store, the theorist adds a few assumptions of
his own that bear directly on the point at issue and, after subjecting the entire mass of
material to his reasoning processes, he arrives at certain conclusions. Such a theory,
therefore, does not see things as they are; it sees them in the context of existing
observational information and existing patterns of thought. We cannot get a quasar, for
instance, out of such a theory until we put a quasar, or something from which, within the
context of existing thought, a quasar can be derived, into the theory.
On the other hand, the existing concepts of the nature of astronomical objects cannot be
put into an instrument. One cannot tell an instrument what it should see or what it should
record, other than by limiting the scope of its application, and it therefore sees things as
they are, not as the scientific community thinks that they ought to be. If there are quasars,
the appropriate instrument, appropriately utilized, sees quasars. Every new instrument
uncovers many errors in accepted thinking about known phenomena, while at the same
time it reveals the existence of other phenomena that were not only unknown, but in
many instances wholly unsuspected.
The Reciprocal System of theory is like an instrument in that it, too, is independent of
existing scientific thought. Stars and galaxies composed of matter appear in this theory,
but neither these objects nor the matter itself are put into the theory; they are
consequences of the theory: results that necessarily follow from the only things that are
put into the theory, the postulated properties of space and time. The astronomical objects
that appear in the theory are subject to the basic physical laws, they exert gravitational
forces, they emit radiation, and so on, not because these things were put into the theory,
but because they are products of the development of the theory itself. All of the entities
and relations that constitute the theoretical universe of motion are consequences of the
fundamental postulates of the system.
While we can hardly say, a priori, that this system of theory sees things as they are, we
can say that it sees things as they must be if the physical universe is a universe of
motion. If there are quasars, then this theory, like an appropriate instrument, and
independently of any previous theoretical or observational information, sees quasars.
Indeed, it did see quasars, somewhat indistinctly, to be sure, but definitely, long before
the astronomers recognized them. As will be brought out in detail in Chapter 20, this pre-
discovery development of theory identified the quasars, together with some related
phenomena that were not distinguished from them at this stage of the theoretical study, as
high-speed products of galactic explosions (not yet discovered observationally), defined
their principal properties, and described their ultimate fate.
Like the invention of the telescope, the development of this new and powerful theoretical
instrument now gives the astronomer an opportunity to widen his horizons, to get a clear
view of phenomena that have hitherto been hazy and indistinct, and to extend his
investigations into areas that are totally inaccessible to the instruments previously
available. The picture obtained from this new instrument differs in many respects from
present-day astronomical ideas—very radically in some instances—but the existence of
such differences is clearly inevitable in view of the limited amount of observational
information that has been available to the astronomers, and the consequent highly
tentative nature of much of the astronomical theory currently in vogue. As has been
demonstrated in the preceding volumes, the correct explanation of a physical situation
often differs from the prevailing ideas to a surprising degree even where the current
theories have been successful enough in practice to win general acceptance. In
astronomy, where comparatively few issues have been definitely settled, and differences
of opinion are rampant, it can hardly be expected that the correct explanations will leave
the previous theoretical structure intact.
This work does not attempt to cover the entire astronomical field. Much of the attention
of the astronomers is centered on individual objects. They determine the distance to
Sirius, the atmospheric pressure on Mars, the temperature of the sun's photosphere, the
density of the moon, and so on, none of which is relevant to the objectives of this present
work, except to the extent that some individual fact or quantity may serve to illustrate a
general proposition. Furthermore, the scope of the work, both in the number of subjects
covered, and in the extent to which the examination of each subject has been carried, has
been severely limited by the amount of time that could be allocated to the astronomical
portion of a project equally concerned with many other fields of science. The omissions
from the field of coverage, in addition to those having relevance only to individual
objects, include (1) items that are not significantly affected by the new findings and are
adequately covered in existing astronomical literature, and (2) subjects that the author
simply has not thus far gotten around to considering. Attention is centered principally on
the evolutionary patterns, and on those phenomena, such as the white dwarfs, quasars,
and related objects, with which conventional theory is having serious difficulties.
One of the recalcitrant problems of major significance is the question as to the origin of
the galaxies.
There are a great many things that the cosmologist not only does not know, but also
finds severe difficulty in envisaging a path towards finding out . . . In particular,
how did the galaxies form? The encyclopaedias and popular astronomical books
are full of plausible tales of condensations from vortices, turbulent gas clouds and
the like, but the sad truth is that we do not know how the galaxies came into
being.1 (Laurie John)
Gerrit Verschuur foresees major changes in current views:
With what perspective will someone fifty years from now read our astronomical
journals and books? . . . I feel that in the area of understanding galaxies we might
well leave present ideas farther behind than in any other area of astronomy.2
Most astronomers apparently believe that the question as to the origin of the stars is
closer to a solution, but when the issue is squarely faced they are forced to admit that no
tenable theory of star formation has yet been devised. For example, I. S. Shklovsky (or
Shklovskii), a prominent Russian astronomer whose views will be quoted frequently in
these pages, concedes that the star formation process is still in "the realm of pure
speculation." He describes the situation in this manner:
It is natural to suppose that the connection [between O and B stars and dust clouds]
should be a genetic one, with the stars in the associations being formed from
condensing clouds of gas and dust. Nevertheless . . . the problem [of proof] has
not yet been definitively solved . . . the situation has turned out to be all too
complicated. New technological developments . . . may ultimately lift the star
formation problem from the realm of pure speculation and make it an exact
science.3
Our first concern in this present work will be with these two basic problems. As we saw
in Volume I, the large-scale action of the universe is cyclic. The contents of the sector of
the universe in which we live, the material sector, originate in a primitive, widely
dispersed form, and undergo a process of aggregation into large units. Ultimately the
aggregates of maximum size are explosively ejected into an inverse sector of the
universe, the cosmic sector. A similar process takes place in that sector, culminating in an
explosive ejection of the major aggregates of cosmic matter back into the material sector.
The two preceding volumes have described the aggregation process in the material sector
insofar as it applies to the primary units: atoms and sub-atomic particles. The incoming
matter from the cosmic sector arrives in the form of cosmic atoms. The structure of these
atoms is incompatible with existence in the material sector (that is, at speeds less than
that of light), and they decay into sub-atomic particles that are able to accommodate
themselves to the material environment. Over a long period of time these particles
combine to form simple atoms, after which the atoms absorb additional particles to form
more complex atoms (heavier elements). Meanwhile the atoms are subject to a continual
increase in ionization, the ultimate result of which is to bring each atom to a destructive
limit. At this point all, or part, of the rotational motion (mass) of the atom is converted to
linear motion (kinetic energy).
This atomic aggregation process, previously described in detail, thus terminates in
destruction of the atom, or a portion thereof, rather than in ejection into the cosmic sector.
In order to understand how the ejection takes place we will have to examine matter from
a different standpoint. Heretofore we have been looking at the behavior of the individual
units, the atoms. Now we will need to turn our attention to the behavior of material
aggregates. This is the principal subject of the present volume.
Let us begin our consideration of these aggregates with a pre-aggregate situation, a
volume of extension space (the space of the conventional reference system) in which
there is a nearly uniform distribution of widely separated hydrogen atoms and sub-atomic
particles, the initial products derived from the incoming cosmic matter: the cosmic rays.
Coexisting with this primitive material there is usually a small admixture of matter that
has been scattered into space by explosive processes, mainly gas and dust, but including
some larger aggregates up to stellar size. There may even be a few small groups of stars.
All this material is subject to the two general forces of the universe, gravitation and the
force due to the outward progression of the natural reference system. The nature of the
aggregates that are formed is determined by the properties of these two forces. Three
general types of aggregates can be distinguished: (1) dust particles, (2) stars and related
aggregates, (3) galaxies and related aggregates.
In the diffuse matter under consideration, the progression of the natural reference system
is the dominant force except at very great distances. As we saw in Volume I, the direction
of this progression is outward, but the natural outward direction, to which this
progression conforms, is away from unity, because the natural datum level is unity, not
zero. Inside unit space, "away from unity" is inward as seen in the reference system.
Inasmuch as the sizes of the atoms and sub-atomic particles put them into what we have
called the time region, the region inside unit space, there is nothing to prevent random
motion of one from bringing it within unit distance of another. When this occurs, the
progression of the reference system moves these objects inward toward each other until
they reach equilibrium positions where the gravitational motion and the progression are
balanced. Such contacts are infrequent because of the very low densities and
temperatures, but over a long period of time these infrequent contacts are sufficient to
build up molecules and dust particles.
Nothing larger than a dust particle can be formed by this contact process, because as soon
as the diameter of the aggregate reaches unit distance, 4.56 x 10⁻⁶ cm, the direction of the
progression of the natural reference system, relative to the conventional spatial coordinate
system, is reversed. Outward from unity becomes outward from each other, and the
particles move apart. Inter-atomic forces of cohesion operate against this outward
progression, and permit the maximum size of relatively complex particles such as the
silicates to exceed the natural unit of distance to a limited extent. The maximum
attainable diameter is something less than one micron (10⁻⁴ cm). This is the explanation
of the "surprising" fact noted by Otto Struve:
It is surprising that the particles of all clouds are of about the same size. . . There
must be a mechanism that prevents the particles from growing larger than one
micron.4
Average grain sizes are closer to the unit of distance, which is equivalent to about 0.05
micron. Simon Mitton reports average values ranging from 0.02 microns for iron to 0.15
microns for silicates.5
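The figures quoted above are easy to cross-check. The following is a minimal arithmetic sketch in Python, assuming only the unit of distance and the micron conversion stated in this chapter (the variable names are merely illustrative); it confirms that the natural unit corresponds to roughly 0.05 micron and restates the reported grain sizes in natural units:

```python
# Cross-check of the grain-size figures quoted above (illustrative only).
UNIT_DISTANCE_CM = 4.56e-6    # natural unit of distance, in centimeters
CM_PER_MICRON = 1.0e-4        # one micron expressed in centimeters

unit_in_microns = UNIT_DISTANCE_CM / CM_PER_MICRON
print(f"unit distance ~ {unit_in_microns:.3f} micron")   # about 0.046 micron

# Reported average grain sizes (Mitton), restated in natural units of distance
reported_sizes = {"iron": 0.02, "silicates": 0.15}        # microns
for material, microns in reported_sizes.items():
    print(f"{material}: {microns} micron = {microns / unit_in_microns:.2f} natural units")
```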
Each of the individual entities with diameters greater than unity existing in the primitive
diffuse volume of matter—molecules, dust particles, and bits of debris from disintegrated
larger aggregates—is far outside the gravitational limits of its neighbors, and the
progression of the natural reference system therefore tends to move them apart, but this
outward motion is opposed, not only by the gravitational forces of the neighbors, but also
by the inward motion due to the combined gravitational effect of all masses within the
effective distance.
If we start from a given point in the region of diffuse matter, and consider spheres of
successively larger radius, the progression of the natural reference system is much greater
than the gravitational effect originally, but the total gravitational force is directly
proportional to the mass—that is, to the cube of the radius, where the density is
uniform—whereas the effect of distance is a decrease proportional to the square of the
radius. The net gravitational force that the mass included within the concentric spheres
exerts against a particle at the outer boundary in each case therefore increases in direct
proportion to the radius of the sphere. Hence, although the gravitational motion (or force)
at the shorter distances is almost negligible compared to the progression of the natural
reference system, equilibrium is eventually reached at some very great distance.
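The way these two effects scale can be illustrated with a minimal numerical sketch. The density, gravitational constant, and outward progression used below are arbitrary placeholders chosen only to exhibit the behavior: because the enclosed mass grows as the cube of the radius while the inverse-square attenuation goes as its square, the net inward effect at the boundary grows in direct proportion to the radius, and therefore eventually matches any fixed outward term.

```python
# Illustrative sketch of the scaling argument: for a uniform sphere, the net
# gravitational effect at the boundary grows linearly with radius, so it
# ultimately reaches equality with a constant outward term.
from math import pi

DENSITY = 1.0e-3            # arbitrary uniform density
G = 1.0                     # arbitrary gravitational constant
OUTWARD_PROGRESSION = 10.0  # arbitrary constant outward term

def net_inward_effect(radius):
    enclosed_mass = DENSITY * (4.0 / 3.0) * pi * radius ** 3   # grows as r^3
    return G * enclosed_mass / radius ** 2                     # falls as 1/r^2; net grows as r

for r in (1, 10, 100, 1_000, 10_000):
    inward = net_inward_effect(r)
    marker = "  <-- exceeds the outward term" if inward >= OUTWARD_PROGRESSION else ""
    print(f"radius {r:>6}: net inward effect {inward:10.4f}{marker}")
```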
Beyond the point of equilibrium the particles of matter are being pulled inward toward
the center of the spherical aggregate. But coincidentally, the gravitational forces acting
from other similar centers are being exerted on the particles in the same region of space,
and the net result is that there is a movement in both directions that leaves a relatively
clear space between adjacent aggregates. The original immense volume of very diffuse
matter thus separates into a number of large autonomous gravitationally bound
aggregates.
Current astronomical thought regards the condensation of a cloud of dust or gas as a
matter of the relative strength of the gravitational force and the opposing thermal forces.
On this basis, it is difficult to account for any large-scale condensation. As expressed by
Gold and Hoyle:
Attempts to explain both the expansion of the universe and the condensation of galaxies
must be very largely contradictory so long as gravitation is the only force field under
consideration. For if the expansive kinetic energy of matter is adequate to give universal
expansion against the gravitational field it is adequate to prevent local condensation
under gravity, and vice versa. That is why, essentially, the formation of galaxies is passed
over with little comment in most systems of cosmology.6
In the universe of motion the inward and outward forces arrive at an equilibrium, as
indicated in the foregoing paragraphs. No condensation would take place if this
equilibrium persisted, but the continued introduction of new matter from the cosmic
sector alters the situation. The added mass strengthens the gravitational force, and
initiates a contraction. The decrease in the distance between particles increases the
gravitational force still further. The contraction is thus a self-reinforcing process, and
once it is started it accelerates.
The two processes that have been described, the gradual contraction of the very large
diffuse aggregate and the consolidation of the individual atoms and sub-atomic particles
into molecules and dust particles, take place coincidentally. The drastic reduction in the
number of separate units in the aggregate brought about by the consolidation leaves an
excess of empty space within the contracting volume, and causes the contracting sphere
of matter to break up into a large number of smaller aggregates separated by nearly
empty space. The product is a globular cluster, in which a large number of submasses—
up to a million or more—are contained within the overall gravitational limit of a large
spherical aggregate. Each of the sub-masses is outside the gravitational limits of its
neighbors, and is therefore moving away from them, but it is being pulled inward by the
gravitational force of the entire aggregate.
Many of the internal condensations take place around the remnants of disintegrated
galaxies that are scattered through the contracting material. In that case, the relatively
massive core thus provided makes the mass a self-contracting unit. Where no such nuclei
are available, the forces of the globular cluster as a whole confine the sub-masses, and the
contraction continues under the influence of these external forces until the density is
adequate to continue the process.
This is where the astronomers' current theories of star formation are stopped cold. They
envision the formation as taking place in the galaxies, but there are no gas or dust clouds,
in our galaxy or in any other so far as we know, that have anywhere near the critical
density, or have any way of increasing their density to the critical level.
Basically there does not appear to be enough matter in any of the hydrogen clouds
in the Milky Way that would allow them to contract and be stable. Apparently our
attempt to explain the first stages in star evolution has failed.7 (G. Verschuur)
If the contraction of the sub-masses contained within the globular cluster is permitted to
continue without interference from outside agencies, the gravitational energy of position
(the potential energy) of their constituent units—atoms, particles, etc.—is gradually
transformed into kinetic energy, and the temperature of the aggregate consequently rises.
At some point, the mass becomes self-luminous, and it is then recognized as a star. The
globular cluster, as we observe it, consists of an immense number of stars, separated by
great distances, and forming a nearly spherical aggregate. As the foregoing discussion
brings out, however, the star cluster stage is preceded by a stage in which the constituent
units, or sub-masses, of the globular cluster are prestellar gas clouds rather than stars. The
existence of such structures has some important consequences that will be explored as we
proceed.
No new assumptions or concepts have had to be introduced in order to derive this picture
of the stellar condensation process in the depths of space. We have simply taken the
physical principles and relations previously obtained from a development of the
consequences of the basic postulates as to the nature of space and time, as described in
the previous volumes of this work, and have applied them to the problems at hand. The
results of this study not only give us a clear picture of how the formation of stars takes
place, but also show that the formation occurs under conditions that necessarily exist
throughout immense regions of space. The production of sufficient star clusters of the
globular type to meet the requirements of the later phases of evolutionary development is
thus shown to be a natural and inevitable consequence of the premises of the theory.
The globular clusters are actually small aggregates of the same general nature as the
galaxies. "There is no absolutely sharp cutoff distinguishing galaxies from globular
clusters,"8 says Martin Harwit. The process just described thus provides the answers for
both of the major astronomical problems identified earlier: the formation of stars and the
formation of galaxies. As noted earlier, present-day astronomy has no tenable theory of
galaxy formation. In the words of W. H. McCrea, "We do not yet know how to tackle the
problem."9 The situation with respect to the formation of stars is somewhat different, in
that, although it is evident that the mechanism of star formation is not yet understood,
there is a general impression that the dust clouds in the galaxies must be the locations in
which this mechanism is operating.
In such cases as this, where the general trend of thought in any field is on the wrong
track, the reason almost invariably is the uncritical acceptance of some erroneous
conclusion or conclusions. As will be brought out in detail in the pages that follow,
astronomy has unfortunately been the victim of two particularly far-reaching errors. The
latter portion of this volume will examine a wide variety of phenomena in which the true
relations have not heretofore been recognized because the general submission to
Einstein's dictum that speeds in excess of that of light are impossible has diverted inquiry
into unproductive channels. The theories applicable to the more familiar astronomical
objects that will be discussed in the earlier chapters have been led astray by another
erroneous conclusion also imported from the physicists. This costly mistake is the
conclusion that the energy production process in the stars is the conversion of hydrogen
to helium and successively heavier elements.
As brought out in Volume II, the development of the consequences of the postulates that
define the universe of motion arrives at a totally different conclusion as to the nature of
the process by which the stellar energy is produced. Inasmuch as there is no direct way of
determining just what is happening in the interiors of the stars, all conclusions with
respect to this energy generation process have to be based on considerations of an indirect
nature. Thus far, the thinking about this subject has been dominated by the physicists'
insistence that the most energetic process known to them must necessarily be the process
whereby the stars generate their energy, regardless of any evidence to the contrary that
may exist in other scientific areas. The fact that they have had to change their conclusions
as to the nature of this process twice already has not altered this attitude. The most recent
change, from the gravitational contraction hypothesis to the hydrogen conversion
hypothesis, was preceded by a long and acrimonious dispute with the geologists, whose
evidence showed that geological history required a great deal more time than was
allowed by the gravitational contraction process. Ultimately the physicists had to concede
defeat.
It might be assumed that the embarrassing outcome of this controversy would have
engendered a certain amount of caution in the claims made for the newest hypothesis, but
there is no indication of it. Today there is ample astronomical evidence that the physicists'
current hypothesis is wrong, just as there was ample geological evidence in the nineteenth
century that their then current hypothesis was wrong. But they are no more willing to
listen to the astronomical evidence today than they were to the geological evidence of the
earlier era. The astronomers are less combative than the geologists, and are not inclined
to challenge the physicists' dicta. So they are ignoring the evidence from their own field,
and accommodating their theories to the hydrogen conversion hypothesis. Curiously
enough, the only real challenge to that hypothesis at the present time comes from a rather
unlikely source, an experiment whose execution is difficult, and whose interpretation is
open to question. This is an experiment designed to measure the rate of emission of
neutrinos by the sun. The number of neutrinos observed is far less than that predicted on
the basis of the prevailing theories. "This is a terrible puzzle,"10 says Hans Bethe.
The neutrino experiment is one of the most interesting to be carried out in
astronomy in recent years, and seems to be giving the most profound and
unexpected results. The least that we can conclude is that until the matter is
settled, we must treat all the theoretical predictions about stellar interiors with a
bit of caution.11 (Jay M. Pasachoff)
The mere fact that the hydrogen conversion process can be seriously threatened by a
marginal experiment of this kind emphasizes the precarious status of a hypothesis that
rests almost entirely on the current absence of any superior alternative. The hypothesis of
energy generation by ordinary combustion processes held sway in its day on the strength
of the same argument. Then gravitational contraction was recognized as more potent, and
became the physicists' orthodoxy, defended furiously against attacks by the geologists and
others. Now the hydrogen conversion process is the canonical view, resting on exactly
the same grounds that crumbled in the two previous instances. In each case the contention
was that there is no other tenable alternative. But in both of these earlier cases it turned
out that there was such an alternative. Even without the contribution of the theory of the
universe of motion, which shows that, in fact, there is a logical and rational alternative, it
should be evident from past experience that the assertion that "there is no other way" is
wholly unwarranted. Without this crutch, the hydrogen conversion process is no more
than a questionable hypothesis, a very provisional conclusion that must stand or fall on
the basis of the way that its consequences agree with physical observations.
Unfortunately the astronomers, whose observations are the ones against which the
hypothesis can be tested, have taken it as an established fact, and have accorded it a status
superior to their own findings, adjusting their interpretations of their own observations to
agree with the physicists' hypothesis. We need go no farther than the first deduction that is
made from the assumed existence of the hydrogen conversion process to encounter a
glaring example of the way in which this pure assumption is allowed to override the
astronomical evidence. In application to the question of stellar ages, this hypothetical
process leads to the conclusion that the hot, massive stars of the O and B classes are very
young, as their output of energy is so enormous that, on the basis of this hypothesis, their
supply of fuel cannot last for more than a relatively short time. It then follows that these
stars must have been formed relatively recently, and somewhere near their present
locations.
No theory that calls for the formation of stars within the galaxies is plausible so long as
the theorists are unable to explain how stars can be formed in this kind of an
environment. One that, in addition, requires the most massive and most energetic of all
stars to be very young, astronomically speaking, converts the implausibility into an
absurdity. Even some of the astronomers find this conclusion hard to swallow. For
instance, Bart J. Bok once observed that
It is no small matter to accept as proven the conclusion that some of our most
conspicuous supergiants, like Rigel, were formed so very recently on the cosmic
scale of time measurements.12
In the context of the theory of the universe of motion, the formation of single stars, or
small groups of stars, by condensation from galactic dust or gas clouds is not possible. In
addition to all of the other problems that have baffled those who have attempted to devise
a mechanism for this purpose, the new theory discloses that there is a hitherto
unrecognized force operating against such a condensation, the force due to the outward
progression of the natural reference system, which makes condensation still more
difficult. No known force other than gravitation is capable of condensing diffuse material
into a star, and gravitation can accomplish this result only on a wholesale scale, under
conditions in which an immense number of stars are formed jointly from a gas and dust
medium of vast proportions.
On this basis, the globular clusters are the youngest aggregates of matter, and the stars of
these clusters are the youngest of all stars. Thus the astronomers have their age sequence
upside down. It may be hard to believe that the present structure of astronomical theory
could contain such a major error in its basic framework. But, as we will see when we
examine the various astronomical phenomena in the pages that follow, even the
astronomers themselves admit that the theoretical conclusions based on the currently
accepted age sequence are inconsistent with the observations all along the line. Of course
they are reluctant to make any blanket statement to this effect, but if we add up their
comments concerning the individual items, this is what they amount to. In the quotations
from astronomical sources that will be introduced in connection with the discussion of
these various subjects we will find that the individual inconsistencies and contradictions
are characterized as "puzzling," "curious," "confusing," "difficult to explain," "not yet
understood," and so on. Some of the more candid writers concede that the theoretical
understanding is unsatisfactory, referring to a particular inconsistency as "an impressive
challenge to theoreticians," admitting that it "imperils" currently accepted theory, or
"conflicts with current models," reporting that "severe problems remain" in arriving at
understanding, or even that the observations constitute an "apparent defiance" of modern
theory.
The existence of this multitude of commonly recognized contradictions and
inconsistencies is a clear indication that there is something radically wrong with the
foundations of present-day astronomical theory. What the development of the theory of a
universe of motion has done is to identify the mistake that has been made. Uncritical
acceptance of an assumption made by the physicists has led to a conclusion regarding the
ages and evolution of stars that is upside down.

CHAPTER 2

Galaxies
From the finding that the initial product of the large-scale aggregation process in the
material sector of the universe is the globular cluster, it follows that galaxies are formed
by consolidation of globular clusters. This conclusion is in direct conflict with the
prevailing astronomical opinion, which is described by John B. Irwin as follows:
The Milky Way system, like other galaxies, is thought to have originated from a
condensation or collapse of the intergalactic medium, which event resulted in a system of
stars. The reason for the collapse is not known, and the details of the process are
uncertain.13
As might be expected where neither the antecedents of the process nor the details are in
any way understood, this explanation has encountered serious difficulties, and is
currently in deep trouble. As expressed by Virginia Trimble in a report of a conference, at
which this situation was discussed at some length, "The conventional wisdom concerning
galaxy formation and evolution is beginning to leak badly at the seams." In the
concluding portion of her report she notes that "Fall, Hogan, and Rees (Cambridge) have
considered the case of a galaxy assembled entirely out of pre-existing star clusters," and
she makes this comment:
The discerning reader will long since have noticed where we are headed—if there are
problems making the biggest things (clusters of galaxies) first, then perhaps we should try
making the smallest things (stars or clusters of stars) first.14
Such a reversal of thinking on the subject is difficult in the context of present-day
astronomical theory because so much of that theory has been specifically tailored to fit
the "big things first" viewpoint but, as we will see in the following pages, if the
observational evidence is taken at its face value and not twisted to conform to the
prevailing theories, the problems disappear. In the universe of motion the galaxies are, in
fact, "assembled entirely out of preexisting star clusters," as the Cambridge astronomers
suggested.
Unlike the individual stars, whose spheres of gravitational control meet at locations of
minimum gravitational force, so that each star is outside the gravitational limits of its
neighbors, the original boundaries of the aggregate that ultimately becomes a globular
cluster meet those of its neighbors at locations of maximum gravitational force. The
contraction of the aggregates leaves the gravitational effect at these locations unchanged,
while the increase in mass due to the influx of material from the cosmic sector adds a
significant increment. Each of the globular clusters is thus well within the gravitational
limits of the adjoining clusters. Consequently, there is a general tendency for the clusters
to move toward each other and combine. When such a combination does occur, the
combined unit exerts a stronger gravitational force within wider spatial limits, and both
the accretion of diffuse material and the attraction of nearby clusters are speeded up. Like
the contraction of the pre-cluster aggregate, the contraction of the group of clusters
leading to combination is thus a self-reinforcing process.
It should be noted in this connection that consolidation of two clusters is inevitable if
their mutual gravitational attraction continues to act without interference from outside
sources (that is, gravitational forces of other aggregates). There has been a rather general
belief that, because of the immense distances between the stars in a cluster or other
aggregate, two such structures could pass through each other with little or no actual
contact. Fred Hoyle expresses this general opinion in this statement:
Think of the stars as ordinary household specks of dust. Then we must think of a galaxy
as a collection of specks a few miles apart from each other, the whole distribution filling
a volume about equal to the Earth. Evidently one such collection of specks could pass
almost freely through another. 15
Our finding that the stars occupy equilibrium positions throws a considerably different
light on this situation. A stellar aggregate such as a cluster has the general characteristics
of a viscous liquid and collision of two such aggregates involves an inelastic impact
similar to the impact of one liquid aggregate upon another. In each case there is a certain
amount of penetration while the kinetic energy of the incoming mass is being absorbed,
but the eventual result is consolidation. The incoming mass meets a wall, not a
passageway.
This liquid-like nature of the aggregates of stars, deduced theoretically and confirmed
observationally by the behavior characteristics of the galaxies and star clusters that will
be examined in the subsequent pages, has a major effect on the phenomena in which
these objects participate. It invalidates many of the conclusions such as the one expressed
by Hoyle in the statement just quoted, and a great many mathematical calculations that
rest on the hypothesis of free movement of the constituent stars of an aggregate.
Consolidation of two globular clusters produces an aggregate which not only has double
the mass of a cluster, but also, because the impact is not exactly central in the usual case,
has a rotational motion that was absent in the original cluster. Instead of an oversize
cluster, we may therefore regard the combination as an aggregate of a new type: a small
galaxy. For a period of time after its formation such a galaxy has a rather confused and
disorderly structure, and is therefore classified as irregular, but in time the disruptions
due to the collision are smoothed out, and the galaxy assumes a more regular form. By
reason of the rotational motion that is now present, the galactic structure deviates to some
extent from the nearly spherical shape of the original clusters, and it is now classed as an
elliptical galaxy.
If some larger unit does not capture this small elliptical galaxy it continues growing by
accretion of dust and gas, and occasionally picks up another globular cluster. In the
earlier stages, each such capture of a cluster disorganizes the galactic structure and puts
the galaxy back into the irregular class for a time, but as it increases in size the galaxy
gradually becomes able to swallow a cluster without any major effect on its own
structure. By this time, however, some combinations of small galaxies begin to take
place. Here, again, a structural irregularity develops, and persists for a time. In this stage
the aggregates are reported to be "several hundred times larger than the dwarf elliptical
galaxies."18
As long as the captured clusters are mature—that is, fully consolidated into stars—the
amount of dust in an elliptical or small irregular galaxy is relatively minor. Eventually,
however, one or more of the captives is a cluster of dust and gas clouds, an immature
globular cluster, rather than a mature cluster of stars. The mixing of this large amount of
dust and gas with the stars of the galaxy alters the dynamics of the rotation, and causes a
change in the galactic structure. If the dust cloud is captured while the galaxy is still quite
small, the result is likely to be a reversion to the irregular status until further growth of
the galaxy takes place. Because of the relative scarcity of the immature clusters, however,
most captures of these objects occur after the elliptical galaxy has grown to a substantial
size. In this case the result is that the structure of the galaxy opens up and a spiral form
develops.
There has been a great deal of speculation as to the nature of the forces responsible for
the spiral structure, and no adequate mathematical treatment of the subject has appeared.
But from a qualitative standpoint there is actually no problem, as the forces, which are
definitely known to exist—the rotational forces and the gravitational attraction—are
sufficient in themselves to account for the observed structure. As already noted, the
galactic aggregate has the general characteristics of a heterogeneous viscous liquid. A
spiral structure in a rotating liquid is not unusual; on the contrary, a striated or laminar
structure is almost always found in a rapidly moving heterogeneous fluid, whether the
motion is rotational or translational. Objections have been raised to this explanation,
generally known as the "coffee cup" hypothesis, on the ground that the spiral in a coffee
cup is not an exact replica of the galactic spiral, but it must be remembered that the coffee
cup lacks one force that plays an important part in the galactic situation: the gravitational
attraction toward the center of the mass. If the experiment is performed in such a manner
that a force simulating gravitation is introduced, as, for instance, by replacing the coffee
cup by a container that has an outlet at the bottom center, the resulting structure of the
surface of the water is very similar to the galactic spiral.
In this kind of a rotational structure the spiral is the last stage, not an intermediate form.
By proper adjustment of the rotational velocity and the rate of water outflow the original
dispersed material on the water surface can be caused to pull in toward the center and
assume a circular or elliptical shape before developing into a spiral, but the elliptic
structure precedes the spiral if it appears at all. The spiral is the end product. The manner
in which the growth of the galaxy takes place has a tendency to accentuate the spiral
form, but the rotating liquid experiment shows that the spiral will develop in any event
when the necessary conditions exist. Furthermore, this spiral is dynamically stable. We
frequently find the galactic spirals characterized as unstable and inherently short-lived,
but the experimental spiral does not support this view. From all indications, the spiral
structure could persist indefinitely if the mass and rotational velocity remained constant.
The conclusion that the spiral arms are quasi-permanent features of the galaxies is
currently contested on other grounds, as in the following quotation from an astronomy
textbook:
The trouble is that this idea predicts the arms should be nearly fixed structures almost as
old as the galaxy itself, whereas actually they are young regions only a few million years
old.16
The assertion that the spiral arms are "young regions" is based on the presence of hot,
massive stars, currently considered to be young, on the strength of the prevailing
assumption as to the nature of the stellar energy generation process. The evidence that
invalidates this hypothesis, which will be presented at appropriate points in the pages that
follow, thus cuts the ground from under this argument.
A spiral galaxy consists of a nucleus, approximately spherical, and a system of curving
arms extending outward from the nucleus. In the smaller and younger objects the nucleus
is small, the arms are thick and widely separated, and the general structure can be
described as loose. As these galaxies grow older and larger, the nucleus becomes more
prominent, the rotational velocity increases, and the greater velocity causes the arms to
thin out and wind up more tightly. Ultimately the arms disappear entirely and the nearly
spherical nucleus becomes the galaxy. At this stage the shape of the galaxy is the same as
that of the smallest and youngest of the galaxies that have attained a stable form, and
these giant old galaxies are generally included in the elliptical category. But putting such
widely different aggregates into the same class simply on the basis of their form leads to
confusion, and cannot be considered good practice. Fortunately, the term "spheroidal" is
being used to some extent in this connection, and since it is quite appropriate, we will
classify these oldest and largest of the stellar aggregates as spheroidal galaxies.
As the foregoing discussion brings out, the primary criterion of the age of galaxies is size,
with shape as a secondary characteristic varying in direct relation to size. It must be
realized, of course, that accidents of environment and other factors will affect this
situation to some extent, so that there are some deviations from the normal pattern, but in
general the ages of the various types of galactic structures stand in the same order as their
sizes. The passage of time also brings other observable results that confirm the ages
indicated by the sizes of the galaxies. One of these is a decrease in abundance. In the
evolutionary course as outlined, each aggregate is growing at the expense of its
environment. The smaller units are feeding on atoms, small particles, and stray stars. The
larger aggregates pull in not only all material of this kind in their vicinity, but also any of
the small aggregates that are within reach.
As a result of this cannibalism the number of units of each size progressively decreases
with age. Observations show that the existing situation is in full agreement with the
theoretical expectation, as the order of abundance is the inverse of the age sequence
indicated by the galactic size and shape. The giant spheroidal galaxies, the senior
members of the galactic family, are relatively rare, the spirals are more common, the
elliptical galaxies are abundant, and the globular clusters exist in enormous numbers.
It is true that the observed number of small elliptical galaxies, those in the range just
above the globular clusters, is considerably lower than would be predicted from the age
sequence, but it is evident that this is a matter of observational selection. When the
majority of galaxies are observed at such distances that only the large types are visible, it
is not at all strange that the number of small elliptical galaxies actually identified is less
than the number which, according to the theory, should exist. The many additional
elliptical galaxies discovered within the Local Group in very recent years, increasing the
already high ratio of elliptical to spiral in the region accessible to detailed observation,
emphasize the effect of the selection process.
Conventional astronomical theory neither requires nor excludes the existence of large
numbers of these dwarf galaxies, and because they are too inconspicuous to demand
attention from an observational standpoint, little notice has been taken of them until
recently. Since our development leads to the conclusion that they are, next to the globular
clusters, the most numerous of the astronomical aggregates, it is worth noting that the
astronomers are beginning to recognize their abundance. For instance, a recent (1980)
comment suggests that these dwarfs "may be the most common type of galaxy in the
universe."17 This is what the theory of the universe of motion says that they must be.
Other observational indications of age will be examined later, after some more
foundations have been laid, but these will merely supply additional confirmation. At this
time it should be noted that all three of the criteria thus far discussed are in agreement
that the observed galaxies and sub-galaxies can be placed in a sequence consistent with
the theoretical deduction that there is a definite evolutionary path in the material sector of
the universe extending from dispersed atoms and sub-atomic particles through multi-
molecular dust particles, clouds of atoms and particles, stars, clusters of stars, elliptical
galaxies, and spiral galaxies to the giant spheroidal galaxies which constitute the final
stage of the material phase of the great cycle of the universe. It is possible, of course, that
some of these units may have remained inactive from the evolutionary standpoint for
long periods of time, perhaps because of a scarcity of available ―food‖ for accretion in
their particular regions of space, and such units may be chronologically older than some
of the aggregates of a more advanced type. Such variations as these are, however, merely
minor fluctuations in a well-defined evolutionary pattern.
"One of the continuing mysteries," says Virginia Trimble, "is why galaxies should have
the range of masses they do."14 The foregoing explanation of the evolution of the galaxies
shows why. The galaxies originate as globular clusters and grow by capture until they
reach a size limit at which their existence terminates. Galaxies therefore exist in all sizes
between these two limits.
Next we turn to a different kind of evidence that gives further support to the theoretical
conclusions. In the preceding discussion it has been demonstrated that the deductions as
to continual growth of the material aggregates by capture of matter from the surroundings
are substantiated by the definite correlation between the size, shape and relative
abundance of the various types of galaxies and clusters. Now we will examine some
direct evidence of captures of the kind required by the theory. First we will consider
evidence which indicates that certain captures are about to take place, then evidence of
captures actually in progress, and finally evidence of captures that have taken place so
recently that their traces are still visible.
The observed positions and motions of the globular clusters provide the most abundant
evidence of impending captures, but the total amount of information about these clusters
now available is sufficient to justify a separate chapter. The capture of clusters by
galaxies will therefore be discussed in Chapter 3, in connection with the general
consideration of the role of these objects.
Capture of galaxies by larger galaxies is much less common than capture of globular
clusters, simply because the clusters are very much more abundant.
We may deduce, however, that there should be a few galaxies on the road to capture by
each of the giant galaxies. This is confirmed by the observation that the nearer large
spirals have "satellites," which are nothing more than small galaxies that are within the
gravitational range of a larger aggregate, and are being pulled in to where they can be
conveniently swallowed. The Andromeda spiral, for instance, has at least eight satellites:
the elliptical galaxies M 32, NGC 147, NGC 185, and NGC 205, and four small galaxies
that have been named Andromeda I, II, III, and IV. The Milky Way galaxy is also
accompanied by at least six fellow travelers, the largest of which are the two Magellanic
Clouds and the elliptical galaxies in Sculptor and Fornax. The expression "at least" must
be included in both cases, as it is by no means certain that all of the small elliptical
galaxies in the vicinity of these two large spirals have been identified.
As one report summarizes the situation, the dwarf galaxies "cluster in swarms about the
giant galaxies." The author goes on to say, "Why this should be is not yet understood; but
theorists believe that it could be telling us much about the way galaxies form."18 In the
light of the information presented in the foregoing pages, it should be evident that what
these observations are telling us is simply that the original products are undergoing a
process of consolidation into larger aggregates.
Some of these galactic satellites not only occupy the kind of positions required by theory,
and to that extent support the theoretical conclusions, but also contribute evidence of the
second class: indications that the process of capture is already under way. The so-called
―irregular‖ galaxies were not given a separate place in the age-size-shape sequence
previously established, as it appears reasonably certain that these galaxies, which
constitute only a small percentage of the total number of galaxies that have been
observed, are merely galaxies belonging to the standard classes which have been
distorted out of their normal shapes by factors related to the capture process. The Large
Magellanic Cloud, for instance, is big enough to be a spiral, and it contains the high
proportion of advanced type stars that is characteristic of the spirals. Why, then, is it
irregular rather than spiral? The most logical conclusion is that the answer lies in the
proximity of our own giant system; that the Cloud is in the process of being swallowed
by our big spiral, and that it has already been greatly modified by the gravitational forces
that will eventually terminate its existence as an independent unit. We can deduce that the
Large Cloud was actually a small spiral at one time, and that the ―rudimentary‖ spiral
structure, which is recognized in this galaxy, is actually a vestigial structure.
The Small Cloud has also been greatly distorted by the same gravitational forces, and its
present structure has no particular significance. From the size of this Cloud we may
deduce that it was a late elliptical or early spiral galaxy before its structure was disrupted.
The conclusion that it is younger than the large Cloud, which we reach on the basis of the
relative sizes, is supported by the fact that the Small Cloud contains a mixture of the type
of stars found in the globular clusters, currently called Population II, and the type found
in the spiral arms, currently called Population I, whereas the stars of the Large Cloud are
predominantly of Population I.
The long arm of the Large Cloud, which extends far out into space on the side opposite
our galaxy, is a visible record of the recent history of the Cloud. The gravitational
attraction of the Galaxy is exerted on each component of the Cloud individually, as well
as on the structure as a whole, since the Cloud is an assembly of discrete units in which
the cohesive and disruptive forces are in balance. This balance is precarious at best, and
when an additional gravitational force is superimposed on the equilibrium within the
Cloud some of the stars are detached from the aggregate. The difference between the
forces exerted by our galaxy on the nearest stars of the Cloud and those exerted on the
most distant stars was unimportant when the Cloud was far away, but as it approached the
Galaxy this force differential increased to significant levels. As the main body was
speeded up by the increasing gravitational pull some stragglers failed to keep up with the
faster pace, and once they had fallen behind, the force differential became even greater.
The Cloud therefore left a luminous trail behind it, marking the path along which it had
traveled.
This is no isolated phenomenon. Small galaxies may be pulled into larger units without
leaving visible evidence behind, as the amount of material involved is too small to be
detected at great distances, but when two large galaxies approach each other we
commonly see luminous trails of the same nature as the one that has just been discussed.
Fig.1 is a diagram of the structural details that can be seen in photographs of the galaxies
NGC 4038 and 4039. Here we can see that one galaxy has come up from the lower right
of the diagram and has been pulled around in a 90 degree bend. The other has moved
down from the direction of the top center and has been deflected toward the first galaxy.
When the action is complete there will be one large spiral moving forward to its ultimate
destiny, leaving the stray stars trailing behind the galaxies to be pulled in individually, or
be picked up by some other aggregate that will come along at a later time. Several
thousand ―bridges‖ that have developed from interaction between galaxies are reported to
be visible in photographs taken with the 48-inch Schmidt telescope on Mount Palomar.
Some of these are trailing arms similar to those in Fig. 1. Others are advance units that
are rushing ahead of the main body. The greater velocity of these advance stars is also
due to the gravitational differential between the different parts of the incoming galaxy,
but in this case the detached stars are the closest to the source of the gravitational pull and
are therefore subject to the greatest force.
Irregularities of one kind or another are relatively common in the very small galaxies, but
these are not usually harbingers of coming events like the gravitational distortions of the
type experienced by the Magellanic Clouds. Instead, they are relics of events that have
already happened. Capture of a globular cluster by a small galaxy is a major step in the
evolution of the aggregate. Consolidation with another small galaxy is a revolutionary
event. Since the relatively great disturbance of the galactic structure due to either of these
events is coupled with a slow return to normal because of the low rotational velocity, the
structural irregularities persist for a longer time in these smaller galaxies. The number of
small irregular aggregates visible at any particular time is correspondingly large.
Although the general spiral structure of the larger galaxies is regained relatively soon
after a major consolidation because of the high rotational velocities that speed up the
mixing process, there are features of some of these structures that seem to be correlated
with recent captures. We note, for instance that a number of spirals have semi-detached
masses, or abnormal concentrations of mass within the spiral arms, that are difficult to
explain as products of the recent development of the spiral itself, but could easily be the
result of recent captures. The outlying mass NGC 5195 seemingly attached to one of the
arms of M 51, for example, has the appearance of a recent acquisition (although there is
some difference of opinion as to the true status of this object). The lumpy distribution of
matter in M 83 gives this galaxy the aspect of a recent mixture which has not yet been
thoroughly stirred; NGC 4631 looks as if it contains a still undigested mass; and so on.
A study of the ―barred‖ spiral galaxies also leads to the conclusion that these objects are
galactic unions that have not yet reached the normal form. The variable factor in this case
appears to be the length of time required for consolidation of the central masses of the
combining galaxies. If the original lines of motion intersect, the masses are no doubt
intermixed quite thoroughly at the time of contact, but an actual intersection of this kind
is not required for consolidation. All that is necessary is that the directions of motion be
such as to bring one galaxy into the general vicinity of the other. The gravitational force
then accomplishes the change of direction that is necessary in order to bring about a
contact of the two objects. Where the gap to be closed by gravitational action is relatively
large, the rotational forces may establish the characteristic spiral form in the outer regions
of the combination before the consolidation of the central masses is complete, and in the
interim the galactic structure is that of a normal spiral with a double center.

Figure 2. (a) the barred spiral NGC 1300; (b) M 51
Fig.2 (a) shows the structure of the barred spiral galaxy NGC 1300. Here the two
prominent arms terminate at the mass centers a and b, each of which is connected with
the galactic center c by a bridge of dense material that forms the bar. On the basis of the
conclusions in the preceding paragraph, we may regard a and b as the original nuclei of
galaxies A and B, the two aggregates whose consolidation produced NGC 1300. The
gravitational forces between a and b are modifying the translational velocities of these
masses in such a manner as to cause them to spiral in toward their common center of
gravity, the new galactic nucleus, but this process is slowed considerably after the galaxy
settles down to a steady rotation, as only the excess velocity above the rotational velocity
of the structure as a whole is effective in moving the mass centers a and b forward in their
spiral paths. In the meantime the gravitational attraction of each mass pulls individual
stars out of the other mass center, and builds up a new galactic nucleus between the other
two. As NGC 1300 continues on its evolutionary course, we can expect it to gradually
develop into a structure such as that in Fig.2 (b), which shows the arms of M 51. Fig.2 (c)
indicates how M 51 would look if the central portions of the arms were removed. The
structural similarity to NGC 1300 is obvious.
Additional evidence of relatively recent captures will be developed in Chapter 8 after
some further groundwork has been laid. Meanwhile the evolutionary pattern of the
constituent stars of the clusters and galaxies will be defined, and it will be shown that the
stellar evolution corresponds with the pattern of evolution of the galaxies, as described in
this present chapter. All in all, the results obtained from these various lines of inquiry add
up to an overwhelming mass of evidence confirming the validity of the theoretical
process of galactic evolution beginning with dispersed matter and ending with the giant
spheroidal galaxies.
This picture of continuous growth from globular cluster to spheroidal galaxy extending
over a period of many billion years is in direct conflict with the prevailing astronomical
view, which regards the galaxies as having been formed directly from dispersed matter in
an early stage of an evolutionary universe, and having remained in essentially the same
condition in which they were originally formed. The difference between this view and
that derived from the Reciprocal System of theory is graphically illustrated by an
argument offered by Shklovsky in support of the contention that a process of star
formation must be operative in the Galaxy. He points out that at least one of the stars of
the Galaxy "dies" each year in a supernova explosion, and then argues that "In order that
the stellar tribe should not become extinct, just as many new stars on the average must be
formed annually in our Galaxy."19 While our findings portray the Galaxy as not only
pulling in single stars on a continuous basis, but also periodically swallowing a globular
cluster, and even an occasional small galaxy, Shklovsky is not even willing to concede
the capture of one star per year.
The same viewpoint is reflected in the current tendency to try to explain the globular
clusters detected in inter-galactic space as outgoing rather than incoming. These
"intergalactic tramps," says one text, "may actually be globular clusters that escaped from
our Galaxy."20 Even the halo stars surrounding the Galaxy tend to be regarded as escapees
from the original galactic system rather than as incoming matter.
In a strange juxtaposition alongside this uncompromising orthodox view, there is a
widespread and growing recognition of the prevalence of galactic cannibalism. For
example, Joseph Silk tells us that "It seems that the giant galaxies have grown at the
expense of other galaxies in their cluster."21 M. J. Rees elaborates on the same theme:
We can see many instances where galaxies seem to be colliding and merging with each
other, and in rich clusters such as Coma the large central galaxies may be cannibalizing
their smaller neighbors . . . Many big galaxies—particularly the so-called cD galaxies in
the centers of clusters—may indeed be the result of such mergers.22
There is also an increased willingness to recognize the observational indications of
galactic collisions. After a number of years during which the collision hypothesis applied
earlier to such powerful radio emitters as Cygnus A was regarded as a mistake, it has
resurfaced, and is now widely accepted. We now frequently encounter unequivocal
statements such as this: "Several hundred collisions or near collisions between galaxies
have been photographed in the past 20 years."23
The concepts of galactic cannibalism, of galaxies ―growing,‖ of ―capture,‖ and of
―collision,‖ are the concepts appertaining to the theory developed in this work, not to the
theory currently accepted by the astronomers. Whether or not the investigators who are
using these concepts realize that they are striking at the foundations of orthodox theory is
not clear, but in any event, that is the effect of the present trend of thought. These
present-day investigators and theorists are providing an increasing amount of significant
support for the conclusions detailed in this volume.
One more question about the aggregation process remains to be considered. We have
found thus far in our examination of this process that the original stellar aggregates, the
globular clusters, enter into combinations, which continue growing until they reach the
status of giant spheroidal galaxies. The question now arises, is this the end of the
aggregation process, or do the galaxies combine into super-galactic aggregates? The
existence of many definite groups of galaxies with anywhere from a dozen to a thousand
members would seem to provide an immediate answer to this question, but the true status
of these groups or clusters of galaxies is not as evident as that of the stars or the galaxies.
Each of the stars is a definite unit, constructed according to a specific pattern from
subsidiary units that are systematically related to each other. The same can be said of the
galaxies. It is by no means obvious, however, that this statement can be applied to the
clusters of galaxies. So let us turn to a theoretical examination of the question.
The globular cluster, we found, originates as a contracting aggregate of diffuse matter in
which numerous centrally concentrated sub-aggregates are forming. Because of their
central concentration these sub-aggregates, which eventually become stars, meet their
neighbors at locations of minimum gravitational effect, and their net movement is
therefore outward away from each other. Dispersed aggregates of near uniform density,
on the other hand, meet their neighbors at locations where the gravitational effect is at a
maximum. They exist as separate entities only because of competition between the
various centers, which limits each aggregate to the minimum stable size. When open
space is made available by reason of contraction of the individual units, these aggregates,
the globular clusters, move inward toward each other.
If we now consider a still larger volume of space, there are no large-scale aggregates
corresponding to the stars; that is, centrally concentrated aggregates that are outside the
gravitational limits of their neighbors. But in their original condition, the assemblage of
globular clusters constitutes a dispersed aggregate similar to the dispersed aggregate of
gas and dust particles, but on a larger scale. Applying the same principles as before, we
can deduce that there exists a gravitationally determined limiting size of the aggregates of
clusters (which we will call groups) corresponding to the limiting size of the aggregates
of gas and dust (the globular clusters). We could continue this hierarchy of aggregates,
and contemplate an aggregate of groups, but before this next level of structure has time to
materialize, the life span of the constituent stars has terminated. Thus the groups of
globular clusters, which eventually become groups of galaxies, are the largest structural
units. The hierarchical theory, in which there are clusters, clusters of clusters, and so on
indefinitely, is thus excluded. This theory has maintained a certain amount of support in
astronomical circles over the years, but on the basis of the foregoing findings it is no
longer tenable.
The theoretically defined groups of galaxies are not necessarily, or even usually,
coincident with the currently recognized aggregates called clusters of galaxies. The
members of each of the classes of aggregates that we have defined, clusters and groups,
are moving inward toward each other. The inward motion of the smaller units, the
clusters, is much the faster. It follows that the net motion of the outer clusters of
adjoining groups carries them away from each other, even though the groups of which
they are components are moving inward. Consequently, the amount of empty space
between groups continually increases. Ultimately the inward motion of the groups would
reverse this trend, if it continued, but before this can happen the time limit intervenes.
Inasmuch as the new groups form in the regions of space left empty by the recession or
disintegration of previously existing groups of galaxies—the ―holes‖ in space reported by
the astronomers—the sizes of the resulting aggregates of galaxies are determined by the
sizes of the vacant spaces. This is a matter of chance, and the individual values are no
doubt distributed over a considerable range, but we can conclude that there is an average
size, probably including some hundreds of visible galaxies and many hundreds of
invisible dwarfs, to which most aggregates will conform approximately, with a relatively
small number substantially above or below this average.
On this basis, the largest units in which gravitation is effective toward consolidation of its
components are the groups of galaxies. Each such group is formed jointly with a number
of adjoining groups. These groups begin separating immediately, but until the outward
movement produces a clear-cut separation, their identity as distinct individuals is not
apparent to observation. Here, then, is the explanation of the large ―clusters‖ and
―superclusters‖ of galaxies. These are not structural units in the same sense as stars or
galaxies, or the groups of galaxies that we have been discussing. Each consists of a
number of independent groups, formed simultaneously in the same general region of
space, and separating so slowly that the processes of galaxy formation and growth are
well under way before the units have moved far enough apart to be recognized as
separate entities. Some of the mathematical aspects of these cluster relationships will be
explored further in Chapter 15.

CHAPTER 3

Globular Clusters
In the preceding chapter we saw that galaxies (small ones, called globular clusters)
condense out of diffuse material, grow by accretion and capture, and finally at an
advanced age reach the limiting size, that of a giant spheroidal galaxy. This is the essence
of the large-scale evolutionary process in the material sector of the universe, the subject
of the first half of this volume. The next several chapters will be devoted to examining
the most significant details of this process. We will first turn our attention to the galaxies,
junior grade, the globular clusters.
It should be noted, in this connection, that current astronomical theory has no explanation
for either the formation of the clusters or their existence in their present form. It is
generally assumed that the clusters are products of the process of galaxy formation, but
this provides no answer to the problem, in view of the absence of anything more than
vague and tentative ideas as to how the galaxies were formed.
The clusters are spherical, or nearly spherical, aggregates containing from about 20,000
stars to a maximum that is subject to some difference of opinion, but is probably in the
neighborhood of a million stars. These are contained in a space with a diameter of from
about 5 to perhaps 25 parsecs. The parsec is a unit of distance equivalent to 3.26 light
years. Both of these units are in common use in astronomy, and in order to conform to the
language in which the information extracted from the astronomical literature is expressed,
both units will be employed in the pages that follow.
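As a rough check on what these figures imply, the mean stellar density of such a cluster follows directly from the star count and the diameter. The sketch below uses representative mid-range values (100,000 stars within a 15-parsec sphere); these particular numbers are illustrative assumptions rather than figures quoted from the astronomical literature, and only the parsec conversion comes from the preceding paragraph.

```python
import math

PARSEC_IN_LIGHT_YEARS = 3.26  # conversion factor stated above

def mean_stellar_density(n_stars, diameter_pc):
    """Average number of stars per cubic parsec for a spherical cluster."""
    radius_pc = diameter_pc / 2.0
    volume_pc3 = (4.0 / 3.0) * math.pi * radius_pc ** 3
    return n_stars / volume_pc3

# Illustrative mid-range figures (assumptions): 100,000 stars, 15-parsec diameter.
density = mean_stellar_density(100_000, 15.0)
print(f"Mean density: about {density:.0f} stars per cubic parsec")
print(f"A 15-parsec diameter is about {15 * PARSEC_IN_LIGHT_YEARS:.0f} light years")
```

The result, a few tens of stars per cubic parsec, is of the same general order as the central densities discussed later in this chapter.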
The structure of these clusters has long been a mystery. The problem is that only one
force of any significant magnitude, that of gravitation, has been definitely identified as
operative in the clusters. Inasmuch as the gravitational force increases as the distance
decreases, the force that is adequate to hold the cluster together should be more than
adequate to draw the constituent stars together into one single mass, and why this does
not happen has never been ascertained. Obviously some counter force is acting against
gravitation, but the astronomers have been unable to find any such force. Orbital motion
naturally suggests itself, in view of the prevalence of such motion among astronomical
objects, but the rotations of the clusters, if they are rotating at all, are far too small to
account for the outward force. For example, K. Cudworth, reporting on a study of M 13,
says that "no evidence of cluster rotation was found."24 It is recognized that this is a
problem that calls for an answer. "Why then is the rotation of globular clusters so
small?"25 ask Freeman and Norris. Those who dislike having to concede that there is a significant
gap in astronomical knowledge here are inclined to make much of the fact that a few
clusters do show some signs of rotation. For instance, Omega Centauri is slightly
flattened, and some indication of rotation has been found in the spectra of M 3. But a
showing that some clusters rotate is meaningless. All must be rotating quite rapidly to
give any substance to the hypothesis that rotational forces are counterbalancing the
gravitational attraction. If even one cluster is not rotating, or is rotating only slowly, this
is sufficient to demonstrate that rotation is not the answer to the problem. Thus it is clear
that rotation does not provide the required counter force.
The suggestion has also been made that these clusters may be similar to aggregates of gas
molecules, in which the individual units maintain a wide separation, on the average. But
such an explanation requires both high stellar speeds and frequent collisions, neither of
which can be substantiated by observation. Furthermore, the existence of the gaseous
type of structure depends on elastic collisions, and the impact of stars upon stars, if it
were possible, would certainly not be elastic. Indeed a rather large degree of
fragmentation could be expected. Together with the large kinetic energies that would be
required to counterbalance the weight of the overlying layers of stars, this would result in
a physical condition in the central regions of the clusters very different from that existing
in the outlying regions. Here, again, no such effect is observed.
The astronomers are reluctant to concede that such a conspicuous problem as that of the
structure of these clusters is without an acceptable solution, and the general tendency is to
assume that the possibilities mentioned in the preceding paragraphs will somehow
develop into an answer at some future time. It is therefore significant that exactly the
same problem exists with respect to the observed dust and gas clouds in the Galaxy, and
here, where the processes suggested as possible explanations of the cluster structure
clearly do not apply, the theorists are forced to admit that this is "a major unanswered
question." The dust cloud situation will be discussed in Chapter 9.
As in so many of the phenomena previously examined, the answer to this problem is
provided by the outward progression of the natural reference system relative to the
conventional stationary system of reference. Because of the way in which the cluster is
formed, every constituent star is outside the gravitational limits of its neighbors, and
therefore has a net outward motion away from each of them. Coincidentally, all of the
stars in the cluster are subject to a motion toward the center of the aggregate by reason of
the gravitational effect of the cluster as a whole. Near this center, where the gravitational
effect of the aggregate is at a minimum, the net motion is outward. But in the outer
regions of the cluster, where the gravitational motion exceeds the progression of the
reference system, the net motion is inward. The outer stars thus exert a force on the inner
ones, confining them to a finite volume, in much the same way that the fabric of a
balloon confines the gas that it encloses. The immense region of space around each star is
thus reserved for that star alone, irrespective of the stellar motions. Whether or not the
cluster acquires a rotation is immaterial. It is equally stable in a static condition.
This question as to the structure of the globular clusters is only one of many physical
situations in which an equilibrium exists between gravitation and a hitherto unidentified
counter force. Because of the lack of understanding of the nature and origin of this force,
the general tendency has been to ignore it, and either to grope for some other kind of
answers, as in the globular cluster case, or to evade the issue in some manner. One of the
few authors who has recognized that an "antagonist" to gravitation must exist is Karl
Darrow. "This essential and powerful force has no name of its own," Darrow points out
in an article published in 1942. "This is because it is usually described in words not
conveying directly the notion of force."26 By this means, Darrow says, the physicist
"manages to avoid the question." In spite of the clear exposition of the subject by Darrow
(a distinguished member of the Scientific Establishment), and the continually growing
number of cases in which the "antagonist" is clearly required in order to explain the
existing relations, the physicists have "managed to avoid the question" for another forty
years.
The development of the theory of a universe of motion has now revealed that the
interaction between two oppositely directed forces plays a major role in many physical
processes all the way from inter-atomic events to major astronomical phenomena. We
will meet the "antagonist" to gravitation again and again in the pages that follow. Like
gravitation, this counter force, which we have identified as the force due to the outward
progression of the natural reference system relative to the conventional system of
reference, is radial in the globular cluster, and since these two are the only forces that are
operative to any significant degree during the formative period, the contraction of the
original cloud of dust and gas into a cluster of stars is accomplished without introducing
any appreciable amount of rotation. This is the answer to the question posed by Freeman
and Norris. As noted in Chapter 2, consolidation of two or more of these clusters to form
a small galaxy usually results in a rotating structure. The same result could be produced
on a smaller scale if the cluster picks up a stray group of stars or a small dust cloud. Some
such event, or gravitational effects during the approach to the Galaxy, probably accounts
for the small amount of rotation that does exist in some clusters.
The compression of the cluster structure reduces the inter-stellar distances to some extent,
but they are still immense. Current estimates put the density at the center of the cluster at
about 50 stars per cubic parsec, as compared to one star per ten cubic parsecs in the solar
vicinity.27 This corresponds to a reduction in separation by a factor of eight. Since the
local separation exceeds 1½ parsecs, or five light years, the average separation in the
central regions after compression is still more than half of a light year, or 3 × 10¹² miles,
an enormous distance.
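The factor of eight cited above follows from the fact that the mean spacing between stars varies as the inverse cube root of the number density. A minimal sketch of the arithmetic, using the densities just quoted (the miles-per-light-year constant is a standard conversion factor supplied here, not a value from the source):

```python
# Mean spacing scales as (number density)**(-1/3), so a 500-fold increase in
# density shortens the average separation by a factor of 500**(1/3), about 8.
central_density = 50.0       # stars per cubic parsec at the cluster center (quoted above)
local_density = 1.0 / 10.0   # one star per ten cubic parsecs in the solar vicinity

reduction = (central_density / local_density) ** (1.0 / 3.0)
local_separation_ly = 5.0    # roughly the local spacing cited above
central_separation_ly = local_separation_ly / reduction

MILES_PER_LIGHT_YEAR = 5.88e12  # standard conversion, assumed here
print(f"Separation reduced by a factor of about {reduction:.1f}")
print(f"Central spacing: about {central_separation_ly:.2f} light years, "
      f"or {central_separation_ly * MILES_PER_LIGHT_YEAR:.1e} miles")
```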
For general application to the inter-stellar distances, the term "star system" has to be
substituted for the word "star" as used in the foregoing paragraphs, but star systems in
this sense are rare in the globular clusters. The origin and nature of double and multiple
systems will be discussed in Chapter 7.
In assessing the significance of the various available items of information about the
globular clusters, to which we will now turn our attention, it should be kept in mind that
all of the conclusions that have been reached in this work concerning these individual
items are derived from the same source as the foregoing explanations of the origin and
structure of the globular clusters; that is, from the postulates that define the universe of
motion.
As indicated in the preceding chapter, the observations of the globular clusters add
materially to the amount of evidence confirming the theoretical conclusions as to the
growth of the galactic aggregates by the capture process. On the basis of this theory, each
galaxy is pulling in all of the clusters within its gravitational limits. We can therefore
expect all galaxies, except those that are still very young and very small, to be surrounded
by a concentration of globular clusters moving gradually inward. Inasmuch as the
original formation of the clusters took place practically uniformly throughout all of the
space under the gravitational control of each galaxy (except for a very large-scale radial
effect that will be discussed later), the concentration of clusters should theoretically
continue to increase as the galaxy is approached, until the capture zone is reached.
Furthermore, the number of clusters in the immediate vicinity of each galaxy should
theoretically be a function of the gravitational force and the size of the region within the
gravitational limit, both of which are related to the size of the galaxy.
These theoretical conclusions are confirmed by observation. A few clusters have been
found accompanying such small galaxies as the member of the Local Group located in
Fornax; there are several in the Small Magellanic Cloud and two dozen or more in the
Large Cloud; our Milky Way galaxy has 150 to 200, when allowance is made for those
which we cannot see for one reason or another; the Andromeda spiral, M 31, has the
same or more; NGC 4594, the "Sombrero" galaxy, is reported to have "several hundred"
associated clusters; while the number surrounding M 87 is estimated to be from one to
two thousand.
These numbers of clusters are definitely in the same order as the galactic sizes indicated
by observation and by criteria previously established. The Fornax—Small Cloud—Large
Cloud—Milky Way sequence is not open to question. M 31 and our own galaxy are
probably close to the same size, but there are indications that M 31 is slightly larger. The
dominant nucleus in NGC 4594 shows that this galaxy is still older and larger, while all
of the characteristics of M 87 suggest that it is near the upper limit of galactic size.
Observation gives us only what amounts to an instantaneous picture, and to support the
validity of the theoretical deductions we must rely primarily on the fact that the positions
of the clusters as observed are strictly in accord with the requirements of the theory. It is
significant, however, that such information as is available about the motions of the
clusters of our own galaxy is also entirely consistent with the theoretical findings. In the
words of Struve, we know "that the orbits of the clusters tend to be almost rectilinear, that
they move much as freely falling bodies attracted by the galactic center."28 According to
the theory of the universe of motion, this is just exactly what they are.
We see the globular clusters as a roughly spherical halo extending out to a distance of
about 100,000 light years from the galactic center. There is no definite limit to this zone.
The cluster concentration gradually decreases until it reaches the cluster density of
intergalactic space, and individual clusters have been located out as far as 500,000 light
years. This distribution of the clusters is completely in agreement with the theoretical
conclusion that the clusters do not constitute parts of the galactic structure, but are
independent units that are on the way to capture by the Galaxy. Both the spherical
distribution and the greater concentration in the immediate vicinity of the Galaxy are
purely geometrical consequences of the fact that the gravitational forces of the Galaxy are
pulling the clusters in from all directions at a relatively constant rate.
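The geometrical point can be put in rough quantitative form with a simple steady-inflow picture: if clusters stream inward from all directions at a constant rate, the number crossing any spherical shell per unit time is the same, so the number density rises smoothly toward the Galaxy and falls off outward without any definite boundary. The free-fall velocity law used below is a conventional point-mass approximation introduced only for illustration; it is not a calculation given in this text.

```python
# Steady, spherically symmetric inflow: n(r) * v(r) * 4*pi*r**2 = constant.
# If the clusters fall freely toward a point mass from rest at large distance,
# v varies as r**-0.5, so n(r) varies as r**-1.5: a gradual concentration
# toward the center with no sharp outer edge.

def relative_density(r_ly, r_ref_ly=100_000.0):
    """Cluster number density at r_ly relative to its value at r_ref_ly."""
    return (r_ly / r_ref_ly) ** -1.5

for r in (25_000, 50_000, 100_000, 200_000, 500_000):
    print(f"r = {r:>7,} light years   n / n(100,000 ly) = {relative_density(r):.2f}")
```

On this simple picture the concentration merely thins out gradually with distance, consistent with the observation that individual clusters are still found several hundred thousand light years from the Galaxy.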
On the basis of the theoretical findings described in the preceding pages, the globular
clusters are the youngest of the visible astronomical structures, and the stars of which
they are composed (aside from an occasional older star or a small group of stars obtained
from the environment in which the cluster condensed) are the youngest members of the
stellar population. One of the observable consequences of this youth is supplied by the
composition of the matter in the cluster stars. Inasmuch as the build-up of the heavier
elements, according to the theoretical findings, is a continuing process, offset only to a
limited extent by the destruction of those atoms that reach one or the other of the
destructive limits, the proportion of heavy elements in any aggregate increases with age.
It can be expected, then, that the stars of the globular clusters, with only a few exceptions,
are composed of relatively young matter, in which the heavy element content is low.
The evidence concerning the stellar composition is somewhat limited, as the observations
reflect only the conditions in the outer regions of the stars, and are influenced to a
substantial degree by the character of the material currently being accreted from the
environment. "Detailed studies of the composition of stars," says J. L. Greenstein, "can
be made only in their atmospheres."29 However, the differences in the reported values are
too large to leave any doubt as to the general situation. For example, the percentage of
elements above helium in the average globular cluster is reported to be lower by a factor
of 10 or more than the corresponding percentage in the sun.30
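For orientation, abundance comparisons of this kind are usually expressed on a logarithmic scale, so that a factor of 10 deficiency corresponds to about one dex below the solar value. The conversion below reflects standard astronomical practice and is not drawn from this text:

```python
import math

heavy_element_ratio = 1.0 / 10.0  # cluster abundance relative to the sun (factor quoted above)
log_deficiency = math.log10(heavy_element_ratio)
print(f"A factor of {1 / heavy_element_ratio:.0f} deficiency is {log_deficiency:+.1f} dex relative to solar")
```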
Current astronomical theory concedes that the matter in the stars of the globular clusters
is matter of a less advanced type than that in the spiral arms, but to reconcile this fact
with the prevailing ideas as to the age of the clusters it invokes the assumptions (1) that
the heavier elements were produced in the stellar interiors, (2) that they were ejected
therefrom in supernova explosions, and (3) that the stars with the greater heavy element
content were formed from this ejected material. This is an ingenious theory, but it is
being called upon to explain a situation that is decidedly abnormal. The normal
expectation would, of course, be that the youngest matter would be found in the youngest
structures. A theory that postulates a reversal of the normal relationships is not ordinarily
given serious consideration unless some strong evidence in its favor can be produced, but
in this case there is no observational evidence to support any of the three assumptions.
Indeed, there is some evidence to the contrary, as in the following report:
The relative abundance of these [heavy] elements in the supernova is not very different
from their abundance in the sun. If the supernovae synthesize heavy elements out of
lighter ones in the course of their explosion, none of that material is initially seen in the
rapidly expanding debris.31 (Robert P. Kirshner).
This is an example of the way in which, as noted in Chapter 1, the astronomical
community is disregarding or distorting the evidence from observation in order to avoid
contradicting the physicists' conclusions as to the nature of the stellar energy generation
process. The failure to find any evidence of the predicted increase in the concentration of
heavy elements in the supernova products is, in itself, a serious blow to a theory that rests
entirely on assumptions, but it is only one of a long list of similar conflicts and
inconsistencies that we will encounter as we proceed with our examination of the
astronomical field.
As will be demonstrated in the pages that follow, all of the relevant astronomical
evidence that is available is consistent with the theoretical identification of the course of
galactic evolution outlined in the preceding pages, and is more than ample to confirm its
validity. In fact, the available data concerning the globular clusters are sufficient in
themselves to provide a conclusive verification of the theoretical conclusions set forth in
this work. The remainder of this chapter will review these globular cluster data, and will
indicate their relevance to the point at issue. The various items of information that have
been accumulated will be described briefly. Each description will then be followed by a
short discussion, indicating the manner in which this item is related to the point that is
being demonstrated: the validity of the new conclusions with respect to the place of the
clusters in the evolutionary sequence.
1. Observation: The globular cluster structure is stable.
Comment: The explanation of the hitherto inexplicable structure of the clusters
has already been discussed, but it should be included in the present review of the
evidence contributed by the observations. The fact that the explanation of the
cluster structure is provided by the existence of the same hitherto unrecognized
factor that accounts for the recession of the distant galaxies is particularly
significant.
2. Observation: The proportion of heavy elements in the stars of the globular
clusters is considerably lower than in the stars and interstellar material in the solar
neighborhood.
Comment: Like item number 1, this fact, already discussed, is being included in
the list so that it will appear in the summary of the evidence.
3. Observation: Some globular clusters contain appreciable numbers of hot stars.
Comment: This observed fact is very disturbing to the supporters of current
theories. Struve, for example, called the presence of hot stars an "apparent
defiance" of stellar evolutionary theory.32 But it is entirely in harmony with the
theory of the universe of motion. Some stars, or groups of stars, are separated
from the various aggregates by explosive processes, and are scattered into
intergalactic space. As the globular clusters form from dispersed material they
incorporate any of these strays that happen to be present. Others are captured as
the clusters move through space. The presence of a small component of older and
hotter stars in some of the young globular clusters is thus normal in the universe
of motion. On the other hand, if the clusters have always existed in the outer
regions of the galaxies, and are composed of very old stars, in accordance with
conventional astronomical theory, the hot stars (which in this theory are young)
should have disappeared long ago.
4. Observation: Some clusters also contain nebulous material.
Comment: Helen S. Hogg, writing in the Encyclopedia Britannica, says,
"Puzzling features in some globular clusters are dark lanes of nebulous material."
It is difficult, she says, "to explain the presence of distinct, separate masses of
unformed material in old systems."33 Quite true. But it is easy to explain the
presence of such material in young systems, which the clusters are, according to
the findings of this work.
5. Observation: There is an increasing amount of evidence indicating that very large
dust clouds are being pulled into the Galaxy.
Comment: This observed phenomenon has not yet been fitted into conventional
astronomical theory. It is part of the cannibalism that is contrary to the premises
of that theory, but is not yet clearly recognized in that light. In the universe of
motion, the significance of these incoming dust clouds is clear. They are simply
unconsolidated globular clusters, aggregates that have been, or are about to be,
captured by the Galaxy before they have had time to complete the process of star
formation. Considerable information concerning the structure of these
unconsolidated clusters, and the nature of the processes that they undergo after
entering the Galaxy, is now available, and will be examined in Chapter 9.
6. Observation: Aside from the somewhat exceptional instances where nebulous
material is present, the globular clusters show little evidence of the presence of
dust.
Comment: Current astronomical theory ascribes this to age, assuming that over a
long period of time the original dust will have been formed into stars, or captured
by stars. Our finding is that the nature of the globular cluster condensation process
results in almost all of the dust and gas of which the cluster was originally
composed being brought under the gravitational control of the stars. In this
condition the dust is not observable as a separate phenomenon. Evidence of the
existence of dust aggregates is observed only where the normal condensation
process has been subject to some disturbing influence, or where a dust cloud has
been captured.
7. Observation: Globular clusters exist in a zone surrounding our galaxy that
extends out to a distance of at least 100,000 light years from the galactic center,
and in similar locations around other galaxies. The existence of a substantial
number of clusters in intergalactic space is also indicated.
Comment: The crucial point in this connection is the number of intergalactic
clusters. According to conventional theory, the formation of the globular clusters
was part of the formation of the galaxies, and there should be no clusters between
the galaxies other than a few strays. In the universe of motion intergalactic space
is the original zone of formation of the clusters, and the concentration around each
galaxy is merely a geometric result of the gravitational motion toward the galaxy
from all directions. On this basis there should be no definite limits to the cluster
zone. The clusters should just thin out gradually until they reach the
approximately uniform density in which they exist in space that is free of large
aggregates of matter. The total number of intergalactic clusters should thus be
very large. The amount of information currently available is not sufficient to
produce a definitive answer to the question as to how common these intergalactic
clusters actually are, but the increasing number of discoveries of distant clusters is
highly favorable to the new theory.
The growing realization that dwarf galaxies, not much larger than globular
clusters, may be "the most common type of galaxy in the universe" is a significant
step toward recognition that intergalactic space is well populated with globular
clusters. Indeed, some of the aggregates that are now being identified as dwarf
galaxies may actually be globular clusters. Current estimates of the size of these
dwarf galaxies, which put the average at about one million stars, are within the
range of the estimates of the sizes of the globular clusters that have been made by
other observers.
8. Observation: The number of clusters associated with each galaxy is a function of
the mass of the galaxy.
Comment: Either theory can produce a satisfactory explanation of this fact. On the
basis of conventional theory the material from which the clusters are formed
should constitute a fairly definite proportion of the total galactic raw material, and
a larger galaxy should therefore provide material for more clusters. The
Reciprocal System of theory asserts that the clusters are being drawn in from
surrounding space, and that the more massive galaxies gather more clusters
because they exert stronger gravitational forces throughout larger volumes of
space.
9. Observation: The distribution of clusters around the Galaxy is nearly spherical,
and there is no evidence that the cluster system participates to any substantial
degree in galactic rotation.
Comment: This is difficult to reconcile with conventional theory. If the formation
of the clusters was a part of the galaxy formation as a whole, it is hard to explain
why one part of the structure acquired a high rotational velocity while another
part of the same structure acquired little or none. B. Lindblad has suggested that
the Galaxy is composed of sub-systems of different degrees of flattening, each
rotating at a different rate. This, however, is simply a description, not an
explanation. The Reciprocal System of theory provides a simple and
straightforward explanation. According to this theory the clusters are not part of
the Galaxy, but are external objects being drawn into the Galaxy by gravitational
force. On this basis the reason why the clusters do not participate in the galactic
rotation is obvious. The nearly spherical distribution is also explained by the
theoretically near uniform distribution of the clusters in the volume of space from
which they were drawn.
10. Observation: Interstellar distances in the outer regions of the globular clusters are
comparable to those in the solar neighborhood. Present estimates are that the
distances in the central regions are less by a factor of about eight.27
Comment: The significant point about the foregoing is that the variations in
interstellar distance are relatively minor, and even in the locations of greatest
density the distances between the stars are enormous. Conventional theory has no
explanation for this state of affairs. In fact, the observed limitation on the
minimum distance between stars is ignored in current astronomical thought, and
close approaches of stars are features of a number of astronomical theories. The
finding of this work is that the immense size of the minimum distance between
stars (other than that between members of binary or multiple systems) is not
accidental; it is a result of the inability of a star (or star system) to come within
the gravitational limit of another. The stars do not approach each other more
closely because they can not do so.
11. Observation: The "orbits" of the clusters are rectilinear. As expressed by Struve
in the statement previously quoted, the clusters "move much as freely falling
bodies attracted by the galactic center."
Comment: Our findings are that this is exactly what they are, and that the
observed motions are therefore just what we should expect. Conventional theory
can explain such motions only by assuming extremely elongated elliptical orbits
with relatively frequent passage of the clusters through the galactic structure. In
view of the liquid-like nature of this structure, as deduced from the postulates that
define the universe of motion, such passages through the galaxy are clearly
impossible. Even without this information, however, it should be rather obvious
that there is some reason why the observed minimum separation between the stars
in the solar neighborhood (the only region in which we can determine the
minimum) is so large. There is no justification for assuming that this reason,
whatever it may be, is any less applicable to the stars of the globular clusters. The
factors that determine this minimum separation bar the passage of any stellar
aggregate through any other such aggregate, irrespective of what their nature may
be. The conventional explanation of the observed inward motions of the clusters
also conflicts with the following observation.
12. Observation: Clusters closer to the galactic center are somewhat smaller than
those farther out. Studies indicate a difference of 30 percent between 10,000
parsecs and 25,000 parsecs.34
Comment: If the "elongated orbit" theory were correct, the present distances from
the galactic center would have no significance, as a cluster could be anywhere in
its orbit. But the existence of a systematic difference between the closer and more
distant clusters shows that the present positions do have a significance. Since the
visible diameter of the average cluster is in the neighborhood of 100 light years,
and the actual overall dimensions are undoubtedly greater, there is a substantial
gravitational differential between the near and far sides of a cluster at distances
within 100,000 light years (a rough numerical sketch of this differential follows
the list below). We can therefore deduce that the clusters are
experiencing an increasing loss of stars as they approach the Galaxy, both by
acceleration of the closest stars and by retardation of the most distant. The effect
of slow losses of this kind on the shape of the aggregate is minor, and the
detached stars remerge with the general field of stars that is present in the same
zone as the cluster. The process of attrition is therefore unobservable in any direct
manner, but we can verify its existence by the comparison of sizes as noted above.
From the observed differences it appears that the clusters lose more than half of
their mass by the time they reach what may be regarded as the capture zone, the
region in which the gravitational action on the cluster structure is relatively
severe.
The loss of stars due to gravitational differentials is substantially less in the case
of a cluster approaching a small elliptical galaxy. Thus we find that an elliptical
galaxy in Fornax, a member of the Local Group with a mass of about 2 × 10⁹ solar
equivalents, "contains about five globulars that are bigger than those in our
galaxy."
13. Observation: There is also an increase in the heavy element content of the cluster
stars as the distance from the galactic center decreases.
Comment: This is another systematic correlation with radial distance that
contradicts the "elongated orbit" theory. It is also inconsistent with the currently
prevailing assumption that the globular clusters are component parts of the
Galaxy and were formed in conjunction with the rest of the galactic structure.
14. Observation: The globular clusters range in size from a few tens of thousands to
over a million stars. No stable stellar aggregates have been found between this
size and the multiple star systems consisting of a few stars separated by very short
distances comparable to the diameters of planetary orbits.
Comment: This is a very striking situation for which present-day astronomical
theory has no explanation. A study of the problem by S. Von Hoerner was able to
conclude only that "the reasons must lie in the original conditions under which the
clusters were formed."35 This is true, but it is not an explanation. What is needed
is the information derived from theory in Chapter 2, the nature of those
"conditions under which the clusters were formed." As brought out there, no star
can be formed within the gravitational limit of an existing star or multiple star
system, since the gravitational pull of that star or star system prevents the
accumulation of sufficient star-forming material. (Binary and multiple stars, as we
will see later, are formed by division of existing stars, not by condensation of new stars.)
Stars formed outside the gravitational limit of an existing star are subject to a net
outward motion. The cluster is held together only by reason of the gravitational
attraction that the cluster as a whole exerts on its constituent stars. A cluster must
therefore exceed a certain minimum size in order to be gravitationally stable.
Such clusters originate only where large numbers of stars are formed
contemporaneously from dust and gas clouds of vast proportions.
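As noted under item 12, the force differential across an approaching cluster can be given a rough numerical form. The sketch below simply applies the inverse-square law to the near and far sides of a cluster 100 light years across; the assumed galactic mass (about 10¹¹ solar masses) is a conventional round figure chosen for illustration and is not taken from this text.

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_GALAXY = 2e41     # assumed galactic mass in kg, roughly 10**11 solar masses
LY = 9.461e15       # meters per light year

def accel(r_ly):
    """Gravitational acceleration toward the galactic center at r_ly light years."""
    r_m = r_ly * LY
    return G * M_GALAXY / r_m ** 2

cluster_diameter_ly = 100.0  # visible diameter quoted in item 12
for distance in (100_000, 50_000, 25_000):
    near = accel(distance - cluster_diameter_ly / 2)
    far = accel(distance + cluster_diameter_ly / 2)
    print(f"at {distance:>7,} light years: near-side/far-side force ratio = {near / far:.4f}")
```

The differential is modest at 100,000 light years and roughly doubles each time the distance is halved, which is the sense in which the attrition discussed in item 12 increases as a cluster closes in on the Galaxy.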
The foregoing discussion has considered 14 sets of facts, derived from observation, that
represent the most significant items of information about the globular clusters now
available, aside from a few items that we will not be in a position to appraise until after
some further background information has been developed. The deductions from the
postulates of the universe of motion that have been described supply a full and detailed
explanation of every one of these sets of facts. The performance of conventional
astronomical theory, on the other hand, is definitely unsatisfactory, even if it is given the
benefit of the doubt where definitive answers to the questions at issue are unavailable.
Evaluation of the adequacy of explanations is, of course, a matter of judgment, and the
exact score will differ with the appraiser, but an evaluation on the basis of the comments
that were made in the preceding discussion leads to the conclusion that conventional
theory provides explanations that are tenable, on the basis of what is known from
observation, for only three of the 14 items (1, 6, 8). It supplies no explanation at all for
five items (2, 7, 9, 10, 14), and the explanation it advances is inconsistent with the
observed facts in six cases (3, 4, 5, 11, 12, 13). Five more sets of observations that are
pertinent to this evaluation will be examined in Chapter 9, and with the addition of these
items the total score for conventional astronomical theory is 4 items explained, 7 with no
explanation, and 8 explanations inconsistent with observation. The significance of these
numbers is obvious.

CHAPTER 4

The Giant Star Cycle


Thus far we have been concerned with the globular clusters and their successors as
aggregates of stars. Now we will turn our attention to the individual stars of which these
aggregates are constructed. As we saw in Chapter 1, the stars originate as dust and gas
clouds. There is no clear line between dust cloud and star. Until comparatively recently
stars could be detected only by means of their radiation in the visible range, and this
established a low limit at about 2500 K. During the last few decades instruments of
greatly extended range have been developed, and stars of normal characteristics are now
being observed down to the neighborhood of 1000 K. Infrared objects of a nature not yet
clearly determined, with surface temperatures as low as 300 to 700 K, have been
reported.
From theoretical considerations we deduce that at some point after the interior of a
contracting cloud of dust and gas has been raised to a high temperature by gravitational
energy, a relatively rapid rise in the temperature of the entire aggregate occurs when the
destructive limit of the heaviest element present is reached in the central regions, and
conversion of mass to energy begins. As explained in Volume II, both the thermal energy
of the matter in the star and its ionization energy are space displacements, and when the
total of these space displacements reaches equality with one of the rotational time
displacements of an atom, the opposite displacements neutralize each other, and the
rotation reverts to the linear basis. In other words, both the ionization and a portion of the
matter of the atoms are converted into kinetic energy. Inasmuch as all atoms are fully
ionized before the temperature limit is reached, and the heavier atoms are capable of
acquiring a greater degree of ionization than the lighter ones, the amount of thermal
energy required to bring the total space displacement up to the limit is less for the heavier
elements. The limiting temperature is therefore inversely related to the atomic mass.
Production of increasingly heavier elements is a continuing process that begins with the
original entry of primitive matter from the cosmic sector. The pre-stellar dust cloud
therefore contains a small proportion of newly formed heavy elements, together with
whatever heavy element content there may have been in the fragments of older matter
incorporated from the surroundings. Inasmuch as the entire structure of the cloud is fluid,
the heavy elements make their way to the center. As the temperature in the central
regions rises, successively lighter elements reach their destructive limits and are
converted to energy.
Activation of this second energy source necessitates an immediate and substantial
increase in the temperature of the aggregate in order to produce enough radiation to reach
equilibrium with the greater energy generation. Thus there is not a gradual rise of the
surface temperature of the aggregate from the near zero of inter-stellar space up to the
levels recognized as those of stars, but rather a long period of no more than minor
warming, followed by a quite sudden jump to the temperature of an infrared star. The
objects cooler than 1000 K generally display some peculiar characteristics that
distinguish them from normal stars, and make it difficult to draw definite conclusions as
to their true nature.
The most significant evolutionary changes that take place in the stars as they grow older
can conveniently be shown on a graph in which the luminosity (expressed as magnitude)
is plotted against some measurement representing the surface temperature. In its original
form, this Hertzsprung-Russell, or HR, diagram utilized an arbitrary spectral
classification as the temperature variable, but the present tendency is to use a color index,
which accomplishes the same result. The textbooks still retain the H-R diagram, probably
for historical reasons, but the color-magnitude, or CM, diagram is now in general use by
the observers.
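For readers who wish to visualize the kind of plot being described, the short Python script
below (a minimal sketch using matplotlib and synthetic, invented data rather than
measurements of M3 or any other cluster) constructs a color-magnitude diagram in the
conventional orientation: color index on the horizontal axis as the temperature proxy, and
magnitude on the vertical axis, inverted so that brighter stars plot toward the top.

import numpy as np
import matplotlib.pyplot as plt

# Synthetic stars: (B - V) color index as a stand-in for surface temperature,
# and absolute magnitude as the luminosity measure.  The numbers are invented
# for illustration only; they are not cluster observations.
rng = np.random.default_rng(0)
color_index = rng.uniform(-0.2, 1.8, 500)                  # blue (hot) to red (cool)
magnitude = 4.5 * color_index + rng.normal(0.0, 0.8, 500)  # a rough main-sequence-like band

plt.scatter(color_index, magnitude, s=4)
plt.gca().invert_yaxis()            # smaller magnitude means brighter, so bright stars plot at the top
plt.xlabel("Color index (B - V)")   # proxy for surface temperature (hot stars on the left)
plt.ylabel("Absolute magnitude")    # proxy for luminosity
plt.title("Schematic color-magnitude (CM) diagram")
plt.show()

With this orientation the hot, luminous main sequence runs from upper left to lower right,
while cool, luminous giants fall in the upper right, the region labeled O in the discussion
that follows.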
The CM diagram of the globular cluster M3 is shown in Fig. 3. In this diagram the points
representing the magnitudes applicable to the individual stars fall mainly within the
crosshatched area. Identification of the locations marked O, A, B, and C has been added
to the conventional diagram for purposes of this present discussion.
The mass, density, and central temperature of the globular cluster stars are related to the
variables of the CM diagram, and although they are subject to modification by other
factors, so that they cannot be represented accurately in this two-component diagram,
they can be located approximately, and adding them to the framework of the diagram for
reference purposes facilitates understanding of the theoretical development. Accurate
measurements of magnitudes in the area of the diagram occupied by the globular cluster
stars are difficult to obtain. S. J. Inglis points out that "There is no red giant whose mass
we know with any degree of certainty." 36 But we can relate these magnitudes to the
evolutionary pattern of the stars, and thus arrive at approximations of their values.
We know, for instance, that the line BC, the main sequence, is the location of
gravitational equilibrium. The stars on this line are therefore at approximately the same
density. The density at C is actually greater than that at B by a factor of 3 or 4, because of
the compression due to the larger stellar mass, but since the equilibrium densities along
the main sequence are more than a million times greater than those in the early portions
of area O, the difference between B and C is negligible on the scale of the diagram. We
may therefore draw lines parallel to BC and treat them as lines of equal density for
analytical purposes. Similarly, the line AB theoretically represents a condition of constant
mass. The theory further indicates that the central temperatures are determined by the
stellar mass. Lines parallel to AB can thus be regarded as lines of equal mass and central
temperature. On the basis of the explanation of the line AC that will be developed in the
following pages, this line represents a condition in which condensation of a dust cloud of
nearly uniform density is proceeding at a rate determined by gravitational forces. We may
call it a line of constant growth.
Fig. 4 is a reproduction of the M 3 diagram with the lines representing these other
variables added. These lines provide a good indication of the way in which the several
variables are related in different regions of the diagram, and reference to the pattern of
this illustration will be helpful in interpreting the CM diagrams that will be introduced
later. The relations represented by the auxiliary lines in Fig. 4 apply to the stars of the
globular cluster type only. As we will see later, the corresponding relations—the lines of
equal mass, for instance—are altogether different for other classes of stars. This is a fact
that has not heretofore been recognized, an oversight that is responsible for many errors
in the orthodox interpretations of the CM diagrams.
All of the stars of a globular cluster condensed from the same dispersed aggregate of
primitive material, but the conditions affecting the rate of condensation varied, and the
evolutionary stages of the stars therefore differ. Consequently, the stars of a cluster such
as M 3 are spread out over a range of the stellar evolutionary pattern on the CM diagram.
The earliest of the visible stars are the coolest but, by reason of the immense area from
which they are radiating, their luminosity is relatively high. These stars therefore occupy
positions in the upper right of the diagram, in the general area marked O. The remainder
of this chapter will give a general description of the paths that these stars follow when
they leave this area. Further details will be added in Chapter 8, after some additional
groundwork has been laid.
The stars of these globular clusters exist in two size ranges. The great majority are small,
in the neighborhood of the solar mass or below. Another portion of the total consists of
stars that are substantially larger. We can identify the latter as stars that had a fragment of
preexisting material as a nucleus for condensation of the pre-stellar dust and gas cloud.
The smaller stars are those that did not enjoy this advantage. The fragments incorporated
into the stars were usually small, as the explosions that scattered them into space were
violent enough to reduce the greater part of the original structure to dust, gas, and small
aggregates. The growth of the stellar structure follows essentially the same course
whether or not it contains a small fragment as a nucleus. The important difference is that
it takes a very long time to build a dust particle up to an aggregate of fragment size. A
pre-stellar aggregate that has a fragment to start with therefore has a big head start over
those that have to build all the way from dust particles, and it is able to establish
gravitational control over a larger volume of the protocluster. Thus, even though the stars
of both of these groups are nearly alike at their points of origin in area O, those of one
group have a much greater potential for growth.
The supply of dust and gas available for capture is, in effect, exhausted for the first group
by the time they reach the vicinity of point A. These stars then cease to grow, and they no
longer continue on the path OC. Instead they make a sharp turn and move downward on a
relatively steep slope, reaching gravitational equilibrium on the main sequence at point B.
Along the path AB the gravitational contraction continues, but because the mass is no
longer increasing, the central temperature remains approximately constant. The decrease
in the size of the radiating surface results in an increase in the surface temperature, but
coincidentally the corresponding increase in density increases the resistance to the flow
of heat from the center of the star to the surface. These two oppositely directed processes
just about counterbalance each other, and the net result, including the effect of the energy
contributed by the contraction, is a small increase in surface temperature. The
combination of a decrease in the radiating surface and a relatively small temperature
change results in a rapid decrease in the luminosity.
With the benefit of this information as to the nature of the changes that take place along
the evolutionary path OAB of the small stars, it can now be seen that the stars on the path
OAC are subject to the same factors, except that there is a continuous addition of more
matter, and a consequent increase in the central temperature. As a result, the increase in
surface temperature is much greater than that along the line AB, and the decrease in
luminosity is smaller, leading to a nearly horizontal movement across the CM diagram.
Arrival at the main sequence, at either point B or point C, eliminates any further
generation of energy from gravitational contraction. Each star then has to establish a
thermal equilibrium on the basis of the atomic energy generation alone. For this purpose
it moves up or down the main sequence to the point where the dissipation of energy by
radiation is in balance with the energy production. The main sequence is the location
where the stars spend most of the latter part of their lives. It has been estimated that about
95 percent of the observable stars are on this sequence (although it should be understood
that the observable stars do not constitute a representative sample of the stars as a whole).
For convenient reference in the subsequent discussion we will designate the stars on the
evolutionary paths OAB or OAC as Class A, and those of the main sequence as Class B.
The stars of Class A and Class B coincide, in general, with those currently called
Population II and Population I respectively. The reason for the reversal of the sequence is
that it puts the classes into the correct evolutionary order. The younger stars are currently
called Population II. The A classification is more appropriate.
In the context of the star and cluster formation process deduced from the postulates that
define the universe of motion, the foregoing explanation of the CM diagram of the
globular clusters is essentially self-evident, but the astronomers cannot take this simple
and logical view of the situation. They did so in an earlier era, but they have changed
their ideas. As one author states, "Present knowledge has forced a nearly complete
reversal of this view." This "knowledge," he says, is partly observational and partly
theoretical. The "observational" items that he cites are (1) "red giants are common in
globular clusters and elliptical galaxies, systems which are known to be of great age . . .
and in which star formation has ceased countless ages ago," and (2) "red giants do not
appear in greater numbers in the nebulous regions of the Galaxy, as they would certainly
do if they had been formed recently from the great gas and dust clouds of space." 37
As can easily be seen, these so-called "observational" items are, in fact, purely
theoretical. Their application to the points at issue depends entirely on the prevailing
theories of stellar formation and of stellar ages. As long as the astronomers were basing
their conclusions on the evidence from their own field, they arrived at an understanding
of the evolutionary course of the globular cluster stars very similar to that which we now
derive from the Reciprocal System of theory. But it became evident that this conclusion
is inconsistent with the physicists' contention that the stellar energy is generated by the
hydrogen conversion process (this is the "present knowledge" cited in the quotation
above). This pure assumption by the physicists is the only basis for the assertion that the
globular clusters "are known to be of great age." There is no astronomical basis for that
conclusion. But since the astronomers are unwilling to challenge the physicists' assertions,
they have, as indicated in the quoted comment, "completely reversed" their own ideas,
and have accommodated their theories to the requirements of the hydrogen process.
On this basis, the stars of the globular clusters are old stars. The evolutionary path
obviously has to start in region O of the diagram, since the protostars are necessarily
diffuse and cool. It is generally recognized that the red giant stars of the globular clusters
are stars of the same type as the protostars. Shklovsky, for instance, concedes that the
massive protostars in a late stage of their evolution "have all the characteristics of giant
stars." 38 But since the astronomers now see the red giants of the globular clusters as old
stars, they cannot accept the conclusion that these are identical objects.
As a consequence of this inability to recognize the identity, astronomical theory first has
to put the stars through the evolutionary process as protostars, and then, after a
hypothetical sojourn on the main sequence, bring them back for another experience as
giant stars. These giants then have to make their way, in some as yet unexplained
manner, directly from their position in region O of the CM diagram to the region of the
early white dwarfs, which is located in the diametrically opposite corner of the diagram.
As expressed by L. H. Aller in an understatement of classic proportions, "the details of its
[the giant star's] evolution are uncertain." 39
When the stars of the globular clusters and dwarf galaxies are recognized as relatively
young objects, only one step beyond the dense dust cloud, or protostar, stage, the
necessity for these contortions in the theoretical evolutionary path is eliminated. The
infrared protostars are precursors of the red giants; they are already giants and on the way
to becoming red. From this cool and diffuse state they follow one or the other of the two
alternate paths to gravitational equilibrium on the main sequence.
After a star has achieved both gravitational and thermal equilibrium, and has settled down
to a somewhat stable condition, its subsequent course depends on the environment. If this
environment is relatively free from dust and gas, the star may not be able to generate
enough energy to replace that lost by radiation, because of a shortage of heavy elements.
In that case it moves slowly down the main sequence to the point where the radiation has
been reduced enough to balance input and output. Whether or not this movement ever
continues far enough to lower the central temperature below the lowest destructive limit,
so that the star loses its energy supply and ceases to be a star, is not clearly indicated at
the present stage of the theoretical development. As matters now stand, however, it seems
probable that any aggregate that is once able to attain the stellar status on the main
sequence will remain a star.
The continual replenishment of the supply of heavy elements by means of the atomic
building process described in Volume II is an important factor in this situation. It plays a
major role even where there is a significant amount of accretion, as there is only a very
small proportion of heavy elements in the accreted matter. Since the amount of atom
building is proportional to the mass of the aggregate, the same rate of heavy element
formation that maintains the stellar status of the smaller stars is sufficient to add
materially to the fuel supply of a larger star.
The automatic reduction in the amount of radiation which takes place in response to a
decrease in the generation of energy enables a star to adjust to a rather wide range of
environmental conditions, and since changes in these conditions occur only on an
extremely long time scale, many of the main sequence stars maintain approximately the
same pattern of thermal behavior for extended periods of time (fortunately for the human
race). But accretion from the environment plays a very important part in the general
evolutionary picture. In the globular clusters the growth comes entirely, or almost entirely,
from the remaining portions of the original pre-stellar dust and gas cloud. But accretion
of matter also takes place from whatever environments the stars enter after consolidation
of the original dust and gas is complete. Such accretion is common in the post-globular
cluster stages, and has a significant effect on many astronomical phenomena, as we will
see in the pages that follow.
For reasons that will be discussed in Chapter 8, the accretion by the average star in the
outer regions of a spiral galaxy exceeds the losses due to radiation, and this star therefore
moves up the main sequence. Stars in regions of greater dust and gas concentrations
evolve still more rapidly, and the process also speeds up, as the stars become more
massive, since the stronger gravitational forces draw material from larger regions of
space.
As the stars increase in mass, the central temperatures increase accordingly, and
successively higher destructive limits are reached, making additional elements available
as fuel for the energy generation process. Since none of the heavy elements is present in
more than a relatively small quantity in a region of minimum accretion, the availability of
an additional fuel supply due to reaching the destructive limit of one more element is not
sufficient to cause any significant change in the energy balance of the stars in the lower
half of the main sequence. The rate of accretion increases as the stars move up the
sequence, but because of the corresponding increase in mass and total energy content,
they are able to absorb greater fluctuations. The main sequence stars are therefore
relatively quiet and unspectacular as they gradually make their way along the
evolutionary path.
The chemical composition of the stars and the distribution of elements in the stellar
interiors are debatable subjects, but the deductions that have been made from the
principles established in the earlier development of theory do not conflict with actual
observations; they merely conflict with some interpretations of those observations. While
the gravitational segregation of the stellar material which theoretically puts a high
concentration of the heavier elements into the central core is not entirely in agreement
with current astronomical thought, it should be emphasized that such a segregation is the
normal result in a fluid medium subject to gravitational forces, and a theory which
requires the existence of normal conditions is never out of order where the true situation
is observationally unknown.
Furthermore, even though the conclusions that have been reached as to the amount of
heavy elements present in the stellar interiors are beyond the possibility of direct
verification, it will be brought out in the subsequent discussion of the solar system that
some strong evidence as to the internal constitution of the stars can be obtained from
collateral sources. Current ideas as to stellar composition are based almost entirely on
spectroscopic information. These data are useful, but they have a limited applicability, as
they only tell us what conditions prevail in the outer regions of the stars. Even from this
restricted standpoint the evidence may actually be misleading, as the spectroscopic results
are affected to a significant degree by the character of the material currently being picked
up through the accretion process. The observed differences in the stellar spectra that can
be attributed to variations in chemical composition are probably more indicative, in many
cases, of the environments in which the stars happen to exist at the moment than of the
true composition of the stars themselves.
The presence of substantial amounts of elements such as technetium, for example, in the
outer regions of some stars poses a formidable problem if we are to regard this as an
actual indication of the composition of the stars. It is doubly difficult for present-day
astronomical theory. If the technetium is manufactured in the regions of maximum
temperature in the center of each star, in accordance with the majority opinion at the
moment, there is a serious problem in explaining how this material gets to the surface
against the density gradient. L. H. Aller makes this comment:
How the star gets the heavy elements from the core to the surface without exploding
provides an impressive challenge to theoreticians. 40
Shklovsky regards this emergence from the central regions as impossible, and contends
that "Only nuclear reactions in the surface layers of the stars can account for the presence
of technetium lines in type S stellar spectra." 41 But this merely replaces one question
with another. Just how the conditions necessary for initiating atomic reactions can be
attained in these surface layers is an equally difficult problem. On the other hand, the
technetium content at the surface of the star is easily explained on the basis that the
observed amounts of this material have been derived from the captured material. This
element is stable, according to the findings detailed in Volume II, wherever the magnetic
ionization level is zero, and relatively heavy concentrations could be produced in areas
that are left undisturbed for long periods of time.
As indicated earlier, the gradual and uneventful progress of the growing stars up the main
sequence is due to the relatively small size of the increments of energy that result from
the attainment of the destructive limits of successively lighter elements. When the
destructive limit of nickel is reached there is a change in the situation, as this element is
present, both in the stars and in the interstellar matter, in quantities that are substantially
greater than those of any heavier element. It could be expected, then, that the attainment
of this temperature limit would result in some observable enhancement of the thermal
activity of the stars that are involved. Such increased activity is observed in a special
class of stars located near the top of the main sequence.
These Wolf-Rayet stars are somewhat less massive than the stars of the O class, the
highest on the main sequence, but they have about the same luminosity, and they are
associated with the O stars in the disk of the Galaxy. Their principal distinguishing
characteristic is a very disturbed condition in their surface layers, with ejection of
material that forms an expanding shell around each star. These special conditions lead to
the existence of a distinctive spectrum. It appears probable that the Wolf-Rayet star is the
one whose central temperature has reached the destructive limit of nickel. We may
interpret its observed characteristics as indicating that arrival at this temperature limit has
resulted in an increase in the production of energy that is large enough to cause violent
internal activity, and ejection of matter from the star, without being enough to initiate a
full-scale explosion. On this basis, the star remains in the Wolf-Rayet condition until the
greater part of the nickel is consumed. It then resumes accreting mass (probably picking
up most of what was ejected) and reverts to the O status.
The foregoing comments on the Wolf-Rayet stars apply only to those known as
Population I Wolf-Rayets. The Wolf-Rayet designation is also applied to some of the
central stars of planetary nebulae, but there is little justification for putting these two
groups of stars into the same class. This issue will be discussed in Chapter 11.
When the temperature corresponding to the destructive limit of iron is reached, the
situation is more drastically changed. This element is not limited to very small quantities,
or even to moderate quantities like the nickel content. It is present in concentrations
that represent an appreciable fraction of the total stellar mass. The sudden arrival of
this quantity of matter at its destructive limit activates a source of far more energy than
the star is able to dissipate through the normal radiation mechanism. The initial release of
energy from this source therefore blows the whole star apart in a tremendous explosion.
According to current estimates, iron is more than 20 times as abundant in the stars as
nickel. If the amount of nickel is sufficient to bring the star to the verge of an explosion,
as the behavior of the Wolf-Rayet stars appears to indicate, the amount of iron is far more
than is needed in order to cause an explosion. The explosion thus takes place as soon as
the first portions of this element are converted into energy. The remainder, together with
the overlying lighter material, is dispersed by the explosive forces. The carry-over of
material from one cycle to the next enables the amount of iron and lighter elements to
continue building up as the age of the system increases, whereas the heavier elements
have to start from scratch after the explosion, except for some limited quantities of the
elements close to iron that have escaped destruction. This accounts for what George
Gamow called the "surprising shape of the empirical curve [of abundance of the
elements]," 42 the existence of distinctly different patterns above and below iron.
The explosion that theoretically occurs at the destructive limit of iron is consistent with
observation, as it can be identified with the observed phenomenon known as a Type I
supernova. However, the characteristics of the supernova explosion, as derived from
theory, conflict, in some respects, with current astronomical opinion. One of these
conflicts concerns the kind of stars that are subject to becoming Type I supernovae.
Inasmuch as the temperature of a star is a function of its mass, the temperature limit at
which the explosion takes place is also a mass limit. According to our theory, then, the
stars that reach the destructive temperature limit and become Type I supernovae are hot
massive stars, and they are all nearly alike.
The astronomers concede the existence of a stellar mass limit. Since there is a recognized
relation between stellar mass and temperature along the main sequence, the existence of a
mass limit carries with it the existence of a temperature limit, as required by the theory of
the universe of motion. Neither limit has an explanation in terms of conventional
astronomical theory, and the observed cut-off in the mass distribution function was
unexpected. "It is a surprise," say Jastrow and Thompson, "that there also appears to be
an upper limit to the mass of a star." 43 These authors put the limit at about 60 solar
masses. Other observers place it at about 100.
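The link between a mass limit and a temperature limit can be illustrated numerically. The
sketch below uses the rough textbook main sequence scalings of conventional astronomy,
L proportional to M^3.5 and R proportional to M^0.8 (approximations that are not taken
from this work, and that become increasingly crude at the highest masses), to estimate the
surface temperature corresponding to the quoted mass limits of 60 and 100 solar masses.

# Rough conventional main-sequence scalings (illustrative approximations only):
#   L/Lsun ~ (M/Msun)**3.5,   R/Rsun ~ (M/Msun)**0.8
# The surface temperature then follows from L = 4*pi*R**2 * sigma * T**4.
T_SUN = 5772.0  # K, solar effective temperature

def surface_temperature(mass_solar):
    luminosity = mass_solar ** 3.5       # in solar luminosities
    radius = mass_solar ** 0.8           # in solar radii
    return T_SUN * luminosity ** 0.25 / radius ** 0.5

for m in (1, 10, 60, 100):
    print(f"M = {m:>3} solar masses  ->  surface temperature ~ {surface_temperature(m):,.0f} K")

A fixed upper bound on the mass therefore translates into a fixed upper bound on the main
sequence surface temperature, which is the point at issue here.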
The astronomers also admit that all Type I supernovae are very much alike. The
observations of these phenomena are thus consistent with our theoretical findings.
Furthermore, the temperature limit can be reached in any galaxy, and Type I supernovae
should therefore occur in all classes of galaxies. They are the only kind that can occur
regularly, according to our findings, in elliptical and small irregular galaxies. Spirals,
such as our Milky Way, and the giant spheroidal galaxies, contain both Type I and Type
II supernovae, which result from a different kind of stellar explosion that we will
examine in detail in Chapter 16. As we will see there, the Type II explosion is the result
of reaching an age limit. Except where some stray old star has been picked up by a young
aggregate, stars cannot reach the age limit in young galaxies. This accounts for the
observed restriction of the Type II supernovae to the older and larger galaxies. All that is
known about the Type I supernovae is thus entirely consistent with the theory of the
universe of motion.
On the other hand, the observations are almost totally inconsistent with conventional
astronomical theory. The astronomers have been almost completely baffled by the
supernova phenomenon. Most investigators are reluctant to admit that they are up against
a blank wall, and tend to describe the situation in ambiguous terms, such as the following,
taken from a recent report on one aspect of the supernova problem: "The exact
mechanism by which a star becomes a supernova is not yet known." 44 The insertion of
the word "exact" into this statement implies that the general behavior of the supernovae is
understood, and that only the details are lacking. But the truth is that the astronomers
have nothing but speculations to work with, and some of the more candid observers admit
this. R. P. Kirshner, for instance, concedes that the "models" thus far proposed for the
origin of supernovae are no more than speculative, and adds this comment:
The train of events leading to a supernova of Type I is more mysterious than that leading
to one of Type II, since a Type I supernova is expected to be the explosion of a star about
as massive as the sun. Since such a star can comfortably settle down to being a white
dwarf, something unusual must happen for it to explode as a supernova. 31
This is a good example of the problems in astronomy that have been created by the
elevation of the physicists' assumption as to the nature of the stellar energy process to a
status superior to that of the astronomical observations. As Kirshner brings out in his
statement, the Type I supernova is mysterious not so much because little is known about
it, but because that which is known from observation conflicts with two items that are
"known" from deductions based on generation of energy by the hydrogen conversion
process. The conclusion that a star of about one solar mass can "comfortably settle down
to becoming a white dwarf" is wholly dependent on the status of the red giants as old
stars. This, in turn, is based entirely on the assumption as to the nature of the energy
generation process. The further conclusion that these "old" red giants develop into white
dwarfs rests on the equally unsupported assumption that the white dwarfs are still older
than the red giants, and that there must be some progression from one to the other. The
astronomical evidence disproving these assumptions will be presented at appropriate
points in the subsequent pages. The fact now being emphasized is that Kirshner's
"mystery" is simply a conflict between the astronomical observations and the
consequences of the physicists' assumption that the astronomers accept as gospel.
The same conflict exists with respect to the other item of "knowledge" cited by Kirshner,
the identification of the Type I supernova with the explosion of a star of about one solar
mass. This is another conclusion that rests entirely on the physicists' hydrogen conversion
hypothesis. On the basis of this hypothesis, it has been concluded that the stars of the
elliptical galaxies and small irregulars are very old. Conventional theory indicates that the
more massive stars (which, according to the theory, are short-lived) would have been
eliminated from these old aggregates by evolutionary processes. The deduction, then, is
that "before their outburst type I supernovae were very old stars whose mass was at most
only slightly (say 10 to 20 percent) greater than the mass of the sun." 45
But this does not fit into the rest of conventional astronomical theory at all. As P. Maffei
puts it, "This result has caused some problems to theoreticians." 46 Kirshner points out
that the supernova explosion is not the fate that present-day theory predicts for the small
stars. Furthermore, the identification of the supernovae with the small stars, whose mass
varies over a wide range, leaves the theory without any explanation for one of the few
things about the Type I supernovae that definitely is known; that is, these explosions are
all very much alike.
In the light of the points brought out in the foregoing paragraphs, it is evident that the
astronomers cannot legitimately claim to have a tenable theory of supernovae. In this
case, then, as in so many of the others that have been, or will be, discussed in this
volume, the deductions from the theory of the universe of motion are simply filling a
vacuum, providing explanations that conventional astronomical theory has been unable to
supply.
CHAPTER 5

The Later Cycles

Only a relatively small proportion of the mass of the star needs to be converted into
energy in order to produce the Type I supernova explosion. The remainder, constituting
the bulk of the original mass, is blown away from the explosion location at high speeds.
We therefore find the site of such an explosion surrounded by a cloud of material moving
rapidly outward. The prevailing view is that the entire mass is dispersed into interstellar
space. As expressed by Shklovsky, "The gaseous material expelled during the outburst
forever breaks its connection with the exploding star and travels out into interstellar
space, interacting with the interstellar medium." 47 In this particular case he is referring
specifically to supernovae of Type II, but his subsequent comments make it clear that
these remarks apply to Type I as well.
It is evident that a large part of the matter ejected into space is actually dispersed in this
manner, but there is likewise a significant part of the total that does not escape. As we
will see in Chapter 6, the matter in the central portion of the star does not participate in
the expansion into space. Because the speeds generated by the explosion are distributed
over a wide range, another substantial portion of the ejected mass is restricted to
relatively moderate outward speeds. One factor that has a bearing on this situation is that
the Type I explosion takes place in the center of the star rather than throughout the
structure. Consequently, much of the ejected material does not come out in the form of
finely divided debris, but consists of portions of the outer sections of the star. These are
ejected in aggregates of various sizes, what we would call fragments if we were dealing
with solid matter. Such quasi-fragments have lower initial velocities than the small
particles or individual atoms, since the acceleration imparted by a given pressure
decreases as a function of the mass, where the density is uniform. They also expand
quickly from their highly compressed initial state, which reduces their temperature
drastically and makes them invisible. The visible portions of the Type I supernova
remnants are mainly the fastest particles.
During their outward travel, these explosion products are subject to the gravitational
effect of the total mass until the fastest components reach the gravitational limit, and to a
gradually decreasing effect thereafter. It follows that the slower components are subject
to gravitational retardation, as well as to some resistance from the interstellar medium,
for a very long period of time. If we take the previously cited figure of 60 solar masses as
the size of the exploding star, and assume that a third of the total mass goes into energy,
the outer portions of the explosion products are subject to the gravitational effect of 40
solar masses. In Chapter 14 we will develop an equation for calculating the gravitational
limit, and from this equation we will find that the gravitational limit of an aggregate of 40
solar masses is 23 light years, or 7 parsecs. The radii of the observed Type I supernova
remnants in the Galaxy average about 5 parsecs. 48 Thus the expansion of these remnants,
great as it has been, has not even taken the fastest of the explosion products beyond the
gravitational limit of the aggregate as yet. Clearly, many of the slower products cease
moving outward long before they reach the gravitational limit of the remaining mass.
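The gravitational limit equation itself is deferred to Chapter 14, so nothing at this point can
reproduce that calculation. The few lines of Python below merely check the bookkeeping
quoted in the preceding paragraph: the mass remaining after a third of the 60 solar masses
is converted to energy, the equivalence of 23 light years and about 7 parsecs, and the
comparison with the 5 parsec average radius of the observed remnants.

LY_PER_PARSEC = 3.2616                        # light years per parsec (standard conversion)

initial_mass = 60.0                           # solar masses, the cited upper mass limit
remaining_mass = initial_mass * (1 - 1 / 3)   # one third assumed converted to energy
grav_limit_ly = 23.0                          # light years, the value quoted in the text
grav_limit_pc = grav_limit_ly / LY_PER_PARSEC

print(f"Mass retaining gravitational control: {remaining_mass:.0f} solar masses")
print(f"Quoted gravitational limit: {grav_limit_ly} light years = {grav_limit_pc:.1f} parsecs")
print(f"Average observed remnant radius of 5 parsecs lies inside the limit: {5.0 < grav_limit_pc}")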
At this stage, where the expansion ceases, there is a cloud of cold and very diffuse
material occupying a tremendous expanse of space. But unlike the large dust and gas
clouds in the galactic arms, this material is under gravitational control. The gravitational
effect of the mass as a whole on each individual particle is small because of the huge
distances involved, but a net gravitational force does exist, and once the expansion has
ceased, a contraction is initiated. Another long interval must elapse while this initially
minute force does its work, but ultimately the constituent particles are pulled back to
where the internal temperature of the mass can rise enough to reactivate the energy
generation process, and the star is reborn.
This star is now back in area O of the CM diagram, first as an infrared star, and later, as it
contracts and increases in temperature, as a red giant. This red giant resembles the first
generation of stars of the same type, but it is not identical with them. It has gone around
the cycle and through the explosion process, and has undergone some modifications in so
doing. The most significant respect in which the new stars of the second cycle differ from
their counterparts of the first cycle is that the second cycle star has a gravitationally stable
core. The first cycle star condensed from a practically uniform dispersed aggregate. As
noted earlier, some of these stars had nuclei on which to build, but only in rare instances
is this anything more than a small fragment. Thus, until it reaches the critical density,
such a star is simply a contracting dust and gas cloud. On the other hand, the aggregate of
matter from which the second cycle star condenses is heavily concentrated towards the
center, the site of the supernova explosion. The gravitational contraction therefore
proceeds much faster in this central region, and a large part of the mass of the star reaches
a state of gravitational equilibrium by the time that the atomic energy process is initiated.
The newly formed second cycle star is thus a two-component system, a stable core with a
large contracting outer envelope.
In this combination structure, the luminosity is determined by the amount of energy
generated. This, in turn, depends on the mass, which is concentrated mainly in the core.
But the surface temperature corresponding to a given luminosity depends on the volume
of the star, and this is mainly the volume of the envelope. Thus the surface temperature of
the early second cycle star is similar to that of an early first cycle star, while the
luminosity is similar to that of a main sequence star. Instead of being concentrated in one
region in the upper right of the CM diagram in the manner of the early first cycle stars,
the early stars of the second cycle occupy a band along the right of the diagram similar to
the upper part of the main sequence on the left. We will designate this type of star as
Class C. Adding the number of the cycle, these stars of the second cycle are Class 2C
stars.
After the initial movement downward from region O to a position determined by the
stellar mass, the evolution of the Class 2C stars, resulting from continuation of the
process of condensing the outer envelope, leaves the luminosity practically unchanged,
but the surface temperature increases because of the reduction in the size of the radiating
surface. This second cycle star thus moves almost horizontally across the CM diagram if
it is in a region of minimum accretion. Any further accretion that takes place puts the
terminal point, the location at which the star reaches gravitational equilibrium, higher on
the diagram. The evolutionary paths of the Class C stars are therefore totally different
from those of Class A, the stars of the first cycle.
The Class C pattern is illustrated in Fig. 5. The numbers shown with the names of the
prominent stars identified in the diagram are the masses in solar units. As can be seen
from these values, the mass scale for the Class C stars on the right of the diagram is
practically identical with that of the Class B (main sequence) stars on the left. The line
XY then represents the evolutionary path of a star of about five solar masses that is
accreting only the remnants of its original dispersed matter. If the star condenses within a
dust cloud, or enters such a cloud before the consolidation of the diffuse matter is
complete, the increase in mass by accretion from the cloud moves the star upward on the
diagram, and the resulting path is similar to the line XZ.
It should be noted that although the evolutionary path of the Class C stars in the CM
diagram is quite different from that of the Class A stars, and the significance of positions
in the diagram, in terms of variables other than temperature and luminosity, is also quite
different, the result of the evolutionary development is the same in both cases. The
evolution carries the stars from a cool and very diffuse condition in region O of the
diagram to a position on the main sequence that is determined by the stellar mass. And it
accomplishes the movement by means of the same process in both cases, a process—
gravitational contraction—that is known to be operative under the existing conditions.
In sharp contrast to this straightforward gravitationally powered process, conventional
astronomical theory offers a bizarre succession of twists and turns that attempt to
reconcile the observational data with the upside down evolutionary sequence based on the
purely hypothetical hydrogen conversion process as the source of stellar energy. As
already noted, this theory requires a movement from region O, the red giant region, of the
CM diagram, to the main sequence, but then finds it necessary to reverse the movement
and bring the stars back to the red giant region again. The theorists have not been able to
define this reverse movement without making the mass of the star an independent
quantity. They have therefore abandoned any systematic connection between mass and
position in the diagram, aside from that which exists along the main sequence. As
Shklovsky puts it, the stars move on the diagram "in a rather meandering fashion." 49
This assumption that the temperature and luminosity of a star can be totally independent
of the mass is another inherently improbable hypothesis. Both of these quantities are
determined by the mass along the main sequence, and the idea that the connection is
completely severed under other conditions is unrealistic. Furthermore, it runs into an
obvious difficulty when the hypothetical evolutionary line again intersects the main
sequence on the road from red giant to white dwarf. If we examine the hypothetical
evolutionary path without regard to its "meanderings," what we find is a "turnoff" from
the main sequence at a point asserted to be determined by the mass of the star, a
horizontal movement to the right, and then a turn upward that continues on a diagonal
line to the red giant region. From there the path extends back to the left along a rather
indefinite horizontal course. A diagram that purports to demonstrate the agreement
between this theoretical pattern and the observations accompanies almost all discussions
of the subject in astronomical literature. This is a composite diagram, combining the CM
diagrams of a number of star clusters. (See, for instance, reference 50.)
Aside from the question as to the direction of movement, which cannot be determined
from observation, the hypothetical evolutionary path agrees, in general, with the CM
diagram of the globular clusters. It could hardly do otherwise, since it was deliberately
designed to fit the globular cluster pattern. The agreement of the composite diagram with
the hypothetical evolutionary pattern is therefore significant only to the extent that there
is agreement in the case of clusters other than those of the globular cluster type. In
Chapter 10 we will find that some clusters such as M 67 and NGC 188 that are classified
as open clusters are actually fragments of globular clusters that have not yet lost all of
their globular characteristics. To arrive at the true significance of the composite diagram
we need to eliminate the clusters of this type, as well as the normal globular clusters, and
examine the extent of agreement between the remaining open clusters and the theoretical
pattern. When we do this we find that there is no correlation whatever. These clusters
have stars along the main sequence, and in the immediate vicinity thereof, and one of
them also contains some red giants. But there is no trace of the evolutionary pattern that
the diagram is supposed to corroborate. The evidence that is asserted to support the
contention that the stars of the open clusters "evolve off the main sequence" simply does
not exist.
A recognition of the true evolutionary pattern, as derived from the theory of the universe
of motion, makes it possible to understand the real meaning of the association of certain
kinds of stars with dust clouds that has led to the belief that the stars are being formed
within the clouds. Two such types of association are recognized. O associations are
composed of stars of the O and B types, the largest and hottest of all stars. T associations
are groups of stars of the T Tauri class, much smaller and cooler than the O and B stars.
"Often, but not always, the T-associations coincide with O-associations." 51 The
prevailing belief that the hot massive stars are young leads to the conclusion that they
were formed somewhere near their present locations. Taken together with the observed
association between the O and B stars and nebulosities, this indicates that the stars of the
O associations have been formed by condensation of portions of the gas and dust clouds
in which they are now located. This hypothesis is currently accepted by most
astronomers, but, as brought out in Chapter 1, they are unable to explain how stars can be
formed from clouds of such low density. "This process," says Simon Mitton, "is almost a
total mystery." 52
Development of the theory of the universe of motion does not provide any way whereby
the dust and gas clouds of the Galaxy can condense into stars. On the contrary, it
identifies still another force opposing such a condensation, the force due to the outward
progression of the natural reference system, and it indicates that condensation cannot take
place unless the clouds are either very much larger or very much denser than anything
that exists in the Galaxy. However, it is clear from the information brought to light by
this development that what is actually happening is accretion of matter from the dust and
gas clouds by previously existing stars. These stars already in existence are not limited by
the factor that prevents dust and gas particles from condensing into a stellar aggregate
under galactic conditions: the net motion of each particle outward, away from all others.
All particles within the gravitational limit of an existing star have a net inward motion
toward the star, and are on the way to capture.
The clouds of dust and gas in the Galaxy are subject to forces that tend to spread them out
and dissipate them. It follows that the identifiable clouds are relatively recent acquisitions
by the Galaxy. As such, they are associated mainly with the relatively recent stellar
acquisitions, the Class 1A stars. As we have seen, these stars are initially divided into two
groups, a large group of small stars that reach gravitational equilibrium in the lower
portion of the main sequence, and a smaller group of large stars that reach equilibrium
well above the midpoint of that sequence. We can therefore expect the products of
accretion to existing stars from the gas and dust clouds to be of two kinds, one group of
hot massive stars and one group of small and relatively cool stars. These two groups
required by the theory can obviously be identified with the O associations and the T
associations respectively. Both groups contain some of the Class 2 stars that have been
mixed with the Class 1 population since entry of the younger stars into the Galaxy.
The positions of the O and T associations in the CM diagram are entirely consistent with
the accretion explanation. The upper portion of the main sequence, in which the O stars
are located, cannot be reached without some accretion from the environment. The largest
Class 1 stars reach the main sequence considerably below this level, and the Class 2 (and
later) red giants, reconstituted from part of the matter of the O type star that exploded, are
necessarily somewhat less massive than the O stars. "There are no super red giants which
would correspond to the evolution of an O-type star." 36 (S. J. Inglis) The stars of all
classes therefore have to grow at the expense of their environment in order to reach the O
status.
The T Tauri stars are found in a location generally described as "above" the lower portion
of the main sequence. Inasmuch as this sequence runs diagonally across the diagram, it is
equally correct to say that these stars are located somewhat to the right of the main
sequence. As can be seen from Figure 5, this is consistent with an ongoing accretion from
the surrounding cloud of dust and gas. A star that is accreting substantial quantities of
such material is in essentially the same condition as one that is consolidating the final
remnants of the material dispersed by a supernova explosion. As we saw earlier, a star of
this latter type (Class 2C) is moving horizontally across the CM diagram from right to
left. In the latter part of this movement it occupies a position similar to that in which the
T Tauri stars are found. The T Tauri position is thus in full accord with the accretion
explanation. The observation that there are "erratic changes in brightness" 53 of these
stars is also consistent with the finding that they are accreting material from the
environment in substantial, and probably variable, quantities.
Let us now take a closer look at the pattern of events in the interior of an aggregate that
has just become a star (of any cycle) by activating the atomic disintegration process as a
source of energy. The additional energy thus released causes a rapid expansion of the
star. This expansion has a cooling effect, which is most pronounced in the central
regions, and as the temperature in these regions drops below the recently attained
destructive limit, the energy generation process itself is shut off, accentuating the cooling
effect. Eventually this cooling stops the expansion and initiates a contraction of the star,
whereupon the temperature again rises, the destructive limit is once more reached, and
the whole process is repeated.
A newly formed star of either Class A or Class C is therefore variable in the amount of its
radiation, an intrinsic variable, as it is called, to distinguish it from stars of the class
whose variability is due to external causes. "Almost all these stars [those below 1700 K]
are, as we had expected, long period variables," 54 report Neugebauer and Leighton,
pioneer investigators of the infrared stars. Not all cool stars are young, but an old cool
star has had time to reach gravitational equilibrium, and it is therefore small, whereas the
young stars are still very diffuse—such a star has been described as nothing but a red hot
vacuum—and hence they are very large. Inasmuch as they are radiating from a much
larger area, their total radiation is much greater than that of old stars of the same surface
temperature. The bright infrared stars are thus the newly formed variables.
The length of the cycle, or period, of a variable star depends on the relation of the
magnitude of the energy released by atomic disintegration to the total energy content of
the star. When the star is very young, and its temperature is barely above the stellar
minimum, the rate of energy generation is large compared to the total energy of the star,
and the swings from the ‖on― to the ‖off― position of the energy production mechanism
are relatively large. Such stars are therefore long-period variables. As a star grows older,
its temperature and energy content increase, because the average energy production
exceeds the radiation in this stage of the evolutionary cycle. The fluctuations in the rate
of energy production thus represent a constantly decreasing proportion of the total energy
of the star. Both the period and the magnitude of the variation (measured in terms of
percent change in radiation) therefore decrease with time.
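As a purely numerical illustration of the proportionality asserted here (the figures below are
invented, and the linear scaling of the period with the energy ratio is simply the statement in
the text taken at face value, not a derived result), the following sketch shows how a fixed
energy increment from the disintegration process becomes a steadily smaller fraction of a
growing total energy content, so that both the relative size of the swings and, on this
assumption, the period shrink as the star ages.

# Hypothetical figures: a fixed energy increment per "on" phase of the energy
# generation process, compared with a total energy content that grows as the
# star accretes matter and heats up.  All values are invented for illustration.
pulse_energy = 1.0                                  # arbitrary units
total_energy_by_age = [5.0, 20.0, 100.0, 1000.0]    # young -> old
reference_period = 400.0                            # days, arbitrary normalization for the youngest stage
reference_ratio = pulse_energy / total_energy_by_age[0]

for total in total_energy_by_age:
    ratio = pulse_energy / total                    # fraction of the star's energy involved in each swing
    period = reference_period * ratio / reference_ratio
    print(f"total energy = {total:>7.1f}   fluctuation = {ratio:7.2%}   period ~ {period:6.1f} days")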
As the average temperature of the star rises, a point is eventually reached at which the
temperature in the central regions during the low phase of the cycle no longer drops
below the destructive limit of the heaviest element present. But this does not put an end
to the variability, because by this time, or very soon thereafter, the high point of the
temperature cycle reaches the destructive limit of the next lighter element, and generation
of energy by destruction of this element takes place in the same kind of an on and off
cycle. The fluctuations never cease entirely, but they decrease in magnitude, and are no
longer evident observationally after the temperature stabilizes, or when the total energy
of the star becomes so large that the effect of the variations is negligible on the scale of
the observations.
One star, the sun, is so close to us that even small variations in energy production should
be detectable. This subject has not yet been studied in the context of the universe of
motion, but some aspects of the sun's behavior are known to be variable. The observed
fluctuations in the sunspot activity are particularly noticeable. The origin of these spots is
unknown, but no doubt they are initiated in some manner by the energy production
process. Hence they may be giving us an indication of the variations in the output of that
process that would be expected from the periodic changes. There are also some relatively
long range variations, such as the decrease in energy output that caused the Little Ice Age
in the seventeenth century and the increase that is producing the gradual warming in the
twentieth, which may be due to variations in the nature or amount of the material accreted
from the environment. In any event these are subjects that warrant some investigation. It
is possible that such an investigation can be extended to more distant stars not currently
classified as variables. Some observations of "variations in activity similar to the sun's 11-
year sunspot cycle" in nearby stars have already been made. 55
The theoretical explanation of the process whereby the heavier elements are built up, as
set forth in Volume II, defines it as a continuous capture process that is taking place
throughout the entire extent of the material sector. In the primitive aggregate of diffuse
material, and in the early dust and gas clouds, the magnetic ionization level is zero, which
means that there is no obstacle to the formation of any of the 117 possible elements. The
time spent in this first of the evolutionary stages is so long that all of the elements are
represented in the constituent dust of the clouds by the time the protostar stage is reached.
Inasmuch as the build-up of the atomic structure is a step-by-step process, the initial
abundance of the elements is an inverse function of the atomic mass (with some
modification by other factors), but even a small amount of the very heavy elements is
sufficient to initiate the atomic disintegration that provides the increment of energy which
raises the dense dust cloud to stellar status.
By the time the initial supply of heavy elements is exhausted, the stellar fuel has been
replenished by accretion of material from the environment, and by the continued
operation of the atomic building process. All accreted matter has some heavy element
content, but the addition to the fuel supply is not limited to this amount. Any matter that
adds significantly to the total mass of the star serves the purpose of activating an
additional source of energy. The increase in mass increases the central temperature of the
star, and it thereby makes more fuel available through reaching the destructive limits of
lighter elements.
Correlation of the central temperature with the mass carries with it the implication that
the principal fuel supply at any given mass level is provided by a specific element. Most
of the very heavy elements are present only in small concentrations, and this makes it
difficult, in most instances, to distinguish the points at which destruction of an additional
element begins. There is, however, a relatively wide mass range, indicated by the cross-
hatched area in Fig. 6, in which the variability is sufficiently regular to make it evident
that a single element is the principal energy source. The distinctive character of the
variability in this region, which we will identify as the Cepheid zone, extends through a
wide enough range of central temperatures to indicate that the energy is being derived
primarily from an element that is present in the star in a higher concentration than that
reached by any element of greater atomic number. The particular element that is involved
cannot be positively identified without further investigation, but since lead is not only the
first moderately abundant element in the descending order of atomic mass, but also the
only such element in the upper portion of the atomic series, we may, at least tentatively,
correlate the destructive thermal limit of this element, number 82, with the central
temperature corresponding to the mass range of the Cepheid zone. It should be noted in
this connection that lead is the heaviest element that is stable against radioactivity in a
region of unit magnetic ionization, and it therefore occupies a preferred position
somewhat similar to that of iron.
The long period variables that precede the Cepheids on the evolutionary path can be
correlated with the elements above lead in the atomic series. Here the quantities of energy
generated as the successive destructive limits are reached are smaller, inasmuch as these
elements are relatively scarce, but each increment of energy has a greater effect on the
stellar equilibrium because of the smaller heat storage capacity of these low temperature
stars. This accentuates the effect of minor variations in the incoming flow of matter from
the environment, and as a result these long period variables are less regular than the
Cepheids. In general, these stars are not separable into easily recognizable classes on the
order of the Cepheids, but some groups of a somewhat similar nature have been
identified. The RV Tauri variables, for instance, are found between the red, Mira type,
long period variables and the Cepheids. 56
There are 35 elements heavier than lead in the primitive material from which the globular
cluster stars were formed. The destructive limit of each of these elements establishes a
central temperature for a particular group of stars in the same manner that the destructive
limit of lead (presumably) establishes the central temperature, and consequently the
characteristic properties, of the Cepheids. Most of the Class C stars are probably at the
unit magnetic ionization level, reducing the number of stable elements above lead to ten,
and the RV Tauri stars account for one of these, but trying to divide all of the variables
earlier than the Cepheids into groups, and to identify the elements that constitute the
principal energy source for each, is clearly impractical, as matters now stand, even if only
nine more groups are involved.
The stars located in the area where the Class A evolutionary line AC crosses the Cepheid
zone are known as RR Lyrae stars. They are abundant in the globular clusters, and for
this reason are also called cluster variables.
However, they are not the only Class A Cepheid stars in these clusters. One kind of a
globular cluster star that we have not yet considered is a star that condenses on a large
nucleus; either a pre-existing small star or an aggregate of planetary mass. When
condensation of a star takes place on a nucleus of this size the line of evolutionary
development is similar to that of the Class C giants, and is shifted upward on the CM
diagram relative to the Class 1A path. This line enters the Cepheid zone at a location
where the mass and central temperature are the same as in the region of the RR Lyrae
stars, but the density and surface temperature are lower, while the luminosity is higher.
The astronomers know the stars of this type as Population II Cepheids, or W Virginis
stars. In our terminology they are Class 1 Cepheids.
The changes that take place in the stars during their trip around the cycle have some
effect on the position that a star of a given kind occupies within its zone of the CM
diagram. One such result is the existence of a third kind of Cepheid star. A giant Class C
star of the second or later cycle moves through the Cepheid zone if it has a large initial
mass, or is subject to heavy accretion. As would be expected, the general characteristics
of this kind of Cepheid are similar to those of the Class 1 Cepheids. Indeed, it is only
relatively recently that the existence of two distinct groups of these large Cepheid type
stars has been recognized. But it is now known that the Class 2C (Population I) Cepheids
are more massive, and are located higher (about 1½ magnitudes) on the CM diagram
than those of Class 1.57 They are also quite uniform in size and other properties. Both the
large mass and the similarity in properties of these Class 2C stars are explained by our
finding that they are stars reconstituted from the products of Type I supernovae, which
are explosions of stars that have reached the mass limit, and are therefore both very large
and very much alike. These characteristics carry over into their products.
As would also be expected, the RV Tauri variables previously mentioned are likewise
separable into two distinct groups similar to the two classes of Cepheids.56
On the other side of the Cepheid zone the controlling factors are reversed. The heat
storage capacity of the star is much greater, because of the higher temperatures and
greater mass. Consequently, any variations, either in the rate of accretion or in the
abundance of heavy elements in the accreted matter, are, to a large extent, smoothed out.
The Cepheid stars have played an important part in the advancement of astronomical
knowledge because there is a specific relation between their periods and their
luminosities. This is understandable as another result of the interrelation between the
different properties of the stars that has been the subject of much of the discussion in this
and the preceding chapter. It probably applies to most other kinds of intrinsic variables as
well as to the Cepheids, but these other types of variable stars are less common, and so
far, less clearly identified. It is also doubtful if any of these other classes of stars are as
uniform as the Cepheids. The period-luminosity relation for the Cepheids, when properly
calibrated, enables the absolute magnitude of a Cepheid star to be determined from the
period, an observable quantity. The relation of the absolute magnitude to the observed
magnitude then indicates the distance to the star, thereby providing a means of measuring
distances up to millions of light years, far beyond the limits of ordinary methods of
measurement.
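As a concrete illustration of the procedure just described, the short calculation below (an added sketch, not part of the original text) combines an assumed period-luminosity calibration with the standard distance modulus. The coefficients in absolute_magnitude are illustrative placeholders; published calibrations of the Cepheid relation differ in their exact values, but the distance modulus step is the standard one.

import math

def absolute_magnitude(period_days, a=-2.81, b=-1.43):
    # Illustrative period-luminosity form: M = a * log10(P) + b
    # (the coefficients are assumed here, for demonstration only)
    return a * math.log10(period_days) + b

def distance_parsecs(apparent_mag, absolute_mag):
    # Standard distance modulus: m - M = 5 log10(d) - 5, with d in parsecs
    return 10 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

M = absolute_magnitude(10.0)          # a Cepheid with a 10-day period
d = distance_parsecs(10.5, M)         # observed at apparent magnitude 10.5
print(round(M, 2), round(d), "pc")    # about M = -4.24 and d = 8,900 parsecs

The period, an observable quantity, fixes the absolute magnitude; comparing that with the observed magnitude then yields the distance, which is the measurement procedure referred to above.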
The explanation of the pulsations of the Cepheids and other similar types of variable stars
given in this work is, of course, quite different from that found in the astronomical
literature. The astronomers envision this as a mechanical vibration—just like a bell, as
one textbook puts it. But the observed characteristics of the pulsation contradict this
hypothesis.
A peculiar fact . . . is that the maximum brightness occurs near the time of most rapid
expansion, while minimum brightness coincides with the most rapid contraction. This is
contrary to any theory which assumes a simple pulsation of the entire stellar body. It
might indeed seem that the star should be brightest and hottest shortly after the
contraction has brought it to a state of highest density and pressure.58 (R. Burnham)
Like so many of the other "peculiar facts" that are noted, but disregarded, in current
practice, this one is giving us a message. It is telling us that the prevailing theory of the
pulsation is wrong. The theory of the universe of motion now reveals just what it is that is
wrong. The pulsation is not a mechanical vibration; it is thermally powered. The interplay
between the two processes, expansion and energy generation, is the cause of the periodicity.
The maximum brightness occurs near the time of maximum expansion because this is the
point at which the generation of energy at the maximum rate has persisted for the longest
time.
Except for some portions of the stellar content of nickel and other elements close to iron
that escape because of local variations in the conditions in the central regions of a star,
the elements heavier than iron are destroyed in the production of energy during the life of
the star, and in the Type I supernova explosion which terminates that life if the star
arrives at the temperature limit of iron. The building up of these elements has to start
approximately from scratch again, but the period of expansion and re-aggregation of the
explosion products is long enough to bring the heavy element concentration in the
second-generation protostars back somewhere near that in the protostars of the first cycle.
Meanwhile the concentration of iron and the elements of lower atomic mass has been
increasing without interruption, and the total heavy element content of the Class 2C stars
(usually expressed as the percentage of elements above hydrogen or above helium, or as
the ratio of the heavier elements to hydrogen) is substantially greater than that of the
Class 1A stars.
The same atom building processes are effective in the environment of the stars, the
interstellar space. The heavy element content is determined by the age of the matter,
irrespective of whether that matter is in the form of dust and gas or is incorporated into
stars. As noted in Chapter 3, the current view of the astronomers is that the heavy
elements are formed in the interiors of the stars and are scattered into the environment by
supernova explosions. On this basis, the heavy element content of young stars is greater
than that of old stars because the proportion of heavy elements in the "raw material"
available for star building increases as the galaxy ages.
Although this view currently enjoys quite general acceptance, more and more anomalies
are appearing as evidence from observation continues to accumulate. In addition to the
many items of evidence contradicting this hypothesis that have been discussed in the
previous pages of this volume, we may now note that there is evidence indicating that the
heavy element content in the interstellar matter of the local environment is not increasing.
Martin Harwit has considered this situation at some length. He observes that the
"similarity in abundances" (that is, in chemical composition, as indicated by the spectra)
of different classes of stars in the Galaxy—B stars, red giants, planetary nebulae, etc.—is
"somewhat puzzling."59 These similarities lead him to this conclusion: "These analyses
show that throughout the lifetime of the Galaxy the interstellar matter has had an almost
unchanged composition." This is definitely in conflict with the basic premise underlying
the currently accepted explanation of the difference in composition between the "old" and
"young" stars: the assumption that the interstellar medium is continually enriched with
heavy elements "cooked" in the stars and scattered into the environment.
Of course, our findings also require the heavy element content of any given quantity of
matter to increase with age, but the existing interstellar matter is not the same matter that
occupied this region of space in earlier times. All galaxies are pulling in diffuse material
from their surroundings, material which, according to our findings, is relatively young.
For example, Harwit refers to a recently discovered, apparently continuous, infall of gas
from outside the Galaxy. As noted in Chapter 2, the larger galaxies are also capturing
some immature globular clusters in which the constituent dust clouds have not yet
consolidated into stars. Meanwhile, the stars are accreting the older interstellar matter. It
is quite likely that these two processes come near enough offsetting each other to leave
the average composition of the interstellar material in the local environment nearly
constant. Harwit's conclusion as to the constancy of composition is therefore consistent
with the theory of the universe of motion insofar as it applies to the situation in the outer
regions of spiral galaxies. The proportion of heavy elements should theoretically be
greater in the older regions of the galaxies, but these are not accessible to detailed
observation, as matters now stand.

CHAPTER 6

The Dwarf Star Cycle


At the very high temperatures prevailing in the interiors of the stars at the upper end of
the main sequence the thermal velocities are approaching the unit level, and when these
already high velocities are further increased by the energy released in the supernova
explosion the speeds of many of the interior atoms rise above unity. The results of speeds
above the unit level were discussed briefly in Volume 1, but a more detailed
consideration will now be required, as these greater-than-unit speeds, which play no part
in the physical activity of our terrestrial environment, are involved in a wide variety of
astronomical phenomena.
The discovery of the existence of speeds greater than that of light is one of the most
significant results of the development of the theory of the universe of motion. It has
opened the door to an understanding of many previously obscure or puzzling phenomena
and relations. But some of the concepts that are involved in dealing with these very high
speeds are new and unfamiliar. For that reason many persons find them hard to accept on
the strength of theoretical reasoning alone, regardless of how solid a base that reasoning
may have. The results of recent research reported in The Neglected Facts of Science,
published in 1982, should be very helpful to these individuals, as that research has shown
that many of the new findings derived from the theory of the universe of motion can also
be derived from purely factual premises, independently of any theory, thus providing an
empirical validation of the theoretical results. Among these theoretical conclusions that
are now provided with factual proof are the items with which we are presently concerned:
the existence of greater-than-unit speeds, and the characteristics of motion at these
speeds. In order to emphasize the point that the theoretical findings in this area, however
strange they may appear in the light of previously accepted ideas, are fully confirmed by
observed facts and logical deductions from those facts, the description of the basic
motions of the universe, for purposes of the theoretical development in this work, has
been taken from the purely factual derivation given in the 1982 publication.
This factual development was made possible by recognition of the physical evidence of
the existence of scalar motion, and a detailed analysis of the properties of motion of this
nature. The scalar nature of the basic motions of the universe is an essential feature of the
Reciprocal System of theory, and has been emphasized from the time of its first
presentation. The points brought out in the extract from the 1982 book are simply the
necessary consequences of the existence of these basic scalar motions. However, in order
to follow the development of thought, it will be necessary to bear in mind some of the
special features of scalar motion that were brought out in the previous volumes of this
work. Although scalar motion, by definition, has no direction, in the usual sense of that
term, it can be either positive or negative. When such motions are represented in a
reference system, the positive and negative magnitudes appear as outward and inward
respectively. For convenient reference, these are designated as "scalar directions."
Inasmuch as a scalar motion is simply the relation between a space magnitude and a time
magnitude, it can be measured either as speed, the relation of space to time, or as inverse
speed, the relation of time to space. Inverse speed was identified, in Volume I, as energy.
A reciprocal relation, such as that between space and time in motion, is symmetrical
about the unit value; that is, speeds of 1/n (which we have identified as motion in space)
are equivalent to inverse speeds, or energies, of n/1, whereas energies of 1/n (which we
have identified as motion in time) are equivalent to speeds of n/1. With the benefit of this
understanding of those of the relevant factors that may be unfamiliar, we may now begin
the extract from the published description of the high-speed regions.
Photons of radiation have no capability of independent motion, and are carried outward at
unit speed by the progression of the natural reference system, as shown in (1), Fig. 7. All
physical objects are moving outward in the same manner, but those objects that are
subject to gravitation are coincidentally moving inward in opposition to the outward
progression. When the gravitational speed of such an object is unity, and equal to the
speed of progression of the natural reference system, the net speed relative to the fixed
spatial reference system is zero, as indicated in (2). In (3) we see the situation at the
maximum gravitational speed of two units. Here the net speed reached is -1, which, by
reason of the discrete unit limitation, is the maximum in the negative direction.

An object moving with speed combination (2) or (3) can acquire a translational motion in
the outward scalar direction. One unit of the outward translational motion added to
combination (3) brings the net speed relative to the fixed reference system, combination
(4), to zero. Addition of one more translational unit, as in combination (5), reaches the
maximum speed, + 1, in the positive scalar direction. The maximum range of the
equivalent translational speed in any one scalar dimension is thus two units.
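The arithmetic of these speed combinations can be tabulated directly. The following sketch (added here as an illustration; it contains nothing beyond the component values just described) sums the scalar components, with outward taken as positive and inward as negative.

# Net speeds of the combinations in Fig. 7, in units of unit speed.
# Components: outward progression of the natural reference system (+1),
# inward gravitation (0, -1, or -2), and added outward translation (0, +1, +2).
combinations = {
    1: {"progression": +1, "gravitation":  0, "translation": 0},  # photon, net +1
    2: {"progression": +1, "gravitation": -1, "translation": 0},  # net zero
    3: {"progression": +1, "gravitation": -2, "translation": 0},  # net -1
    4: {"progression": +1, "gravitation": -2, "translation": 1},  # back to zero
    5: {"progression": +1, "gravitation": -2, "translation": 2},  # net +1
}

for label, parts in combinations.items():
    print(label, sum(parts.values()))

The net values run from -1 to +1, which is the two-unit maximum range in one scalar dimension noted above.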
As indicated in Fig. 7, the independent translational motions with which we are now
concerned are additions to the two basic scalar motions, the inward motion of gravitation
and the outward progression of the natural reference system. The net speed after a given
translational addition therefore depends on the relative strength of the two original
components, as well as on the size of the addition. That relative strength is a function of
the distance. The dependence of the gravitational effect on distance is well known. What
has not heretofore been recognized is that there is an opposing motion (the outward
progression of the natural reference system) that predominates at great distances,
resulting in a net outward motion.
The outward motion (recession) of the distant galaxies is currently attributed to a
different cause, the hypothetical Big Bang, but this kind of an ad hoc assumption is no
longer necessary. Clarification of the properties of scalar motion has made it evident that
this outward motion is something in which all physical objects participate. The outward
travel of the photons of radiation, for instance, is due to exactly the same cause. Objects
such as the galaxies that are subject to gravitation attain a full unit of net speed only
where gravitation has been attenuated to negligible levels by extreme distances. The net
speed at the shorter distances is the resultant of the speeds of the two opposing motions.
As the distance decreases from the extreme values, the net outward motion likewise
decreases, and at some point, the gravitational limit, the two motions reach equality, and
the net speed is zero. Inside this limit there is a net inward motion, with a speed that
increases as the effective distance decreases. Independent translational motions, if
present, modify the resultant of the two basic motions.
The units of translational motion that are applied to produce the speeds in the higher
ranges are outward scalar units superimposed on the motion equilibria that exist at speeds
below unity, as shown in combination (5), Fig. 7. The two-unit maximum range in one
dimension involves one unit of speed, s/t, extending from zero speed to unit speed, and
one unit of inverse speed, t/s, extending from unit speed to zero inverse speed. Unit speed
and unit energy (inverse speed) are equivalent, as the space-time ratio is 1/1 in both
cases, and the natural direction is the same; that is, both are directed toward unity, the
datum level of scalar motion. But they are oppositely directed when either zero speed or
zero energy is taken as the reference level. Zero speed and zero energy in one dimension
are separated by the equivalent of two full units of speed (or energy) as indicated in Fig.8.

In the foregoing paragraphs we have been dealing with full units. In actual practice,
however, most speeds are somewhere between the unit values. Since fractional units do
not exist, these speeds are possible only because of the reciprocal relation between speed
and energy, which makes an energy of n/1 equivalent to a speed of 1/n. While a simple
speed of less than one unit is impossible, a speed in the range below unity can be
produced by addition of units of energy to a unit of speed. The quantity 1/n is modified
by the conditions under which it exists in the spatial reference system (for reasons
explained in the earlier volumes), and appears in a different mathematical form, usually
1/n².
Since unit speed and unit energy are oppositely directed when either zero speed or zero
energy is taken as the reference level, the scalar direction of the equivalent speed 1/n²
produced by the addition of energy is opposite to that of the actual speed, and the net
speed in the region below the unit level, after such an addition is 1 - 1/n². Motion at this
speed often appears in combination with a motion 1 - 1/m² that has the opposite vectorial
direction. The net result is then 1/n² - 1/m², an expression that will be recognized as the
Rydberg relation that defines the spectral frequencies of atomic hydrogen—the possible
speeds of the hydrogen atom.
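For reference, the conventional statement of the Rydberg relation mentioned here expresses the hydrogen wavelengths in terms of two integers. The short calculation below is a standard evaluation of that formula, added only to show the 1/n² - 1/m² structure being referred to; the constant is the accepted empirical value for hydrogen.

# Conventional Rydberg relation for hydrogen: 1/wavelength = R * (1/n^2 - 1/m^2), m > n
R_H = 1.0968e7  # Rydberg constant for hydrogen, in 1/meters

def wavelength_nm(n, m):
    inverse_wavelength = R_H * (1.0 / n**2 - 1.0 / m**2)
    return 1e9 / inverse_wavelength  # convert meters to nanometers

print(round(wavelength_nm(2, 3), 1))  # H-alpha, about 656.5 nm
print(round(wavelength_nm(2, 4), 1))  # H-beta, about 486.3 nm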
The net effective speed 1 - 1/n² increases as the applied energy n is increased, but
inasmuch as the limiting value of this quantity is unity, it is not possible to exceed unit
speed (the speed of light) by this inverse process of adding energy. To this extent we can
agree with Einstein's conclusion. However, his assertion that higher speeds are
impossible is incorrect, as there is nothing to prevent the direct addition of one or two full
units of speed in the other scalar dimensions. This means that there are three speed
ranges. Because of the existence of these three ranges with different space and time
relationships, it will be convenient to have a specific terminology to distinguish between
them. In the subsequent discussion we will use the terms low speed and high speed in
their usual significance, applying them only in the region of three-dimensional space, the
region in which the speeds are 1 - 1/n². The region in which the speeds are 2 - 1/n², that is,
above unity but below two units, will be called the intermediate region, and the
corresponding speeds will be designated as intermediate speeds. Speeds in the 3 - 1/n²
range will be called ultra high speeds.
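The terminology just defined amounts to a classification of net speeds by the number of full units involved. The sketch below (an added illustration; the names and boundaries are simply those stated in the preceding paragraph) expresses that classification directly.

def net_speed(full_units, n):
    # Net speed of the form (full_units - 1/n^2), in units of unit speed,
    # with full_units = 1, 2, or 3 and n the energy component.
    return full_units - 1.0 / n**2

def speed_range(speed):
    if speed <= 1.0:
        return "low or high speed (below unity)"
    if speed <= 2.0:
        return "intermediate speed"
    return "ultra high speed"

for units in (1, 2, 3):
    s = net_speed(units, 4)
    print(units, round(s, 3), speed_range(s))

With n = 4, for example, the three expressions evaluate to roughly 0.94, 1.94, and 2.94 units, falling in the three ranges just named.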
The foregoing paragraphs conclude the portions of the text of The Neglected Facts of
Science that are relevant to the intermediate speed range. Consideration of speeds in the
ultra high range will be deferred to later sections of this volume, as the phenomena now
under review are limited to speeds below two units. However, one point that was
mentioned in the extract from the 1982 publication, which should have some further
emphasis in view of its importance in the present connection, is the status of unit speed.
The true datum level of scalar motion, the physical zero, as we called it in the earlier
volumes, is unit speed, not either of the mathematical zero points. This is significant,
because it means that the second unit of motion, as measured from zero speed, does not
add to the first unit. It replaces that unit. Although the use of zero speed as a reference
level makes it appear that the sequence of units is 0, 1, 2, the status of unit speed as the
true physical zero means that the correct sequence is -1, 0, +1. The importance of this
point is its effect on the second unit of motion. This second unit is not the spatial motion
(speed) of the first unit plus a unit of motion in time (energy), but the unit of motion in
time only.
The speeds of the fast-moving products of the supernova explosions that we are now
undertaking to examine are in the intermediate range, where motion is in time. Instead of
being blown outward in space in the same manner as the products that are ejected at
speeds below unity, these intermediate speed products are blown outward in time. In both
cases, the atoms, which were in relatively close contact in the hot massive star, are widely
separated in the explosion product, but in the intermediate speed product the separation is
in time rather than in space. This does not change either the mass or the volumetric
characteristics of the atoms of matter. But when we measure the density, m/V, of the
giant stars we include in V, because of our method of measurement, not only the actual
equilibrium volume of the atoms, but also the empty three-dimensional space between the
atoms, and the density of the star, calculated on this basis, is something of a totally
different order from the actual density of the matter of which it is composed.
Similarly, where the atoms are separated by empty time rather than by empty space the
volume obtained by our methods of measurement includes the effect of the empty three-
dimensional time between the atoms, which reduces the equivalent space (the apparent
volume), and again the density calculated in the usual manner has no resemblance to the
actual density of the stellar material. In the giant stars the empty space between the atoms
(or molecules) decreases the measured density by a factor which may be as great as 10⁵
or 10⁶. The time separation produces a similar effect in the opposite direction, and the
second product of the explosion is therefore an object of small apparent volume, but
extremely high density: a white dwarf star.
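The densities being contrasted in this discussion are, in every case, simply the mass divided by the apparent volume. A minimal sketch with purely illustrative round numbers (roughly one solar mass, and radii of the general order of those attributed to giants and to white dwarfs) shows how completely the measured radius dominates the result.

import math

def mean_density(mass_g, radius_cm):
    # Mass divided by the measured (apparent) spherical volume, in g/cm^3
    return mass_g / ((4.0 / 3.0) * math.pi * radius_cm**3)

SUN_MASS = 1.99e33     # grams
SUN_RADIUS = 6.96e10   # centimeters

print(mean_density(SUN_MASS, SUN_RADIUS))          # about 1.4 g/cm^3
print(mean_density(SUN_MASS, 100 * SUN_RADIUS))    # giant radius: about 1.4e-6 g/cm^3
print(mean_density(SUN_MASS, 0.02 * SUN_RADIUS))   # dwarf radius: about 1.8e5 g/cm^3

The same mass, measured over apparent volumes differing by many orders of magnitude, yields both the extremely low giant densities and the extremely high white dwarf densities discussed in the text.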
The name "white dwarf" was applied to these stars in the early days just after their
discovery, when only a few of them were known. These had temperatures in the white
region of the spectrum, and the designation that was given them was intended to
distinguish them from the red dwarfs in the lower portions of the main sequence. In the
meantime it has been found that the temperature range of these stars extends to much
lower levels, leading to the use of such expressions as "red white dwarf." But by this time
the name "white dwarf" is firmly established by usage, and it will no doubt be permanent,
even though it is no longer appropriate.
When judged by terrestrial standards, the calculated densities of these white dwarfs are
nothing less than fantastic, and the calculations were originally accepted with great
reluctance after all alternatives that could be found were ruled out for one reason or
another. The indicated density of Sirius B, for instance, is about 130,000 g/cm³, that of
Procyon B is estimated at 900,000 g/cm³, while other stars of this type have still greater
densities. In the light of the relationships developed in this work, however, it is clear that
this very high density is no more out of line than the very low density of the giant stars.
Each of these phenomena is simply the inverse of the other. Donald Lynden-Bell
expresses the conventional wisdom on the subject in this statement:
We know already that some stars have collapsed to a size only ten times larger than that
at which they would become black holes.60
In the face of this asserted "knowledge" it may not be easy to accept the idea that these
objects have, in fact, expanded to their present size; that is, their components have moved
outward away from each other in time, and the small size that we observe is merely a
result of the way in which the expansion in time appears in the spatial reference system.
But this conclusion is a necessary consequence of basic physical principles whose
validity has been demonstrated in the preceding volumes of this series, and, as we will
see in the subsequent pages, it produces explanations of the white dwarf properties that
are in full agreement with all of the definitely established observational information.
Unfortunately, the amount of observational information with respect to the white dwarfs
that has been accumulated thus far is very limited, and much of what is available is of
questionable accuracy. This scarcity of reliable information is due to a combination of
causes. The white dwarfs have been known for only a relatively short time. The first one
to be seen, the "pup" companion of Sirius, the dog star, was observed in 1862, but it was
not until about 1915 that the distinctive character of the properties of this star was
recognized, and theories to account for these properties were not developed until
considerably later. The second reason for the lack of information is the dimness of these
objects, which makes them very difficult to see, and limits both the number of stars that
can be observed and the amount of information that can be obtained from each.
The third factor that has led to confusion in this area is the lack of a correct theoretical
explanation of the white dwarf structure. As indicated in the statement quoted above, the
currently accepted theory envisions an atomic collapse. It is asserted that the energy
supply of a star is eventually exhausted, and that when energy generation ceases, the star
collapses into a hypothetical state called "degenerate matter" in which the space between
the hypothetical constituents of the atoms is eliminated, and these constituents are
squeezed into a close-packed condition. As explained by Robert Jastrow:
With its fuel gone it [the star] can no longer generate the pressures needed to maintain
itself against the crushing force of gravity, and it begins to collapse once more under its
own weight.61
Joseph Silk's explanation is essentially the same:
The pressure exerts an outward force that withstands the gravity of the star, as long as
there is sufficient hydrogen present in the stellar core to produce helium . . .
After the supply of nuclear energy runs out and fails to provide adequate heat and
pressure, gravitational collapse must ensue.62
This is an astounding conclusion. To put it into the proper perspective, it should be
realized that the hypothetical collapse is something that is supposed to take place within
the atom; that is, the pressure exerted on the atoms becomes so great that they are unable
to withstand it. But, in fact, the pressure to which the atoms of the condensed gas are
subjected is not significantly altered by the cooling that results when and if the energy
generation ceases. Each atom is subject to the pressure due to the weight of all overlying
matter in any event, regardless of whether that matter is hot or cold. The pressure due to
the thermal motion has nothing to do with conditions inside the atom; it merely
introduces additional space between the atoms. Certainly, this added space would be
eliminated if the star cooled down by reason of the exhaustion of the energy supply, but
this would not change the conditions to which the atoms are subjected.
The books from which an earlier generation of Americans learned to read contained a
story about a man who was returning home from the city with a heavy sack of flour that
he had purchased. He was afraid that the weight of the flour would be too much for the
horse that he was riding, so to lighten the load on the horse he held the sack in his arms.
In those days, the children that read the story found it hilarious, but now we are
confronted with essentially the same thing in different language, and we are expected to
take it seriously.
Some writers seem to suggest the existence of a kinetic component that would add to the
static pressure against the central atoms. Paolo Maffei gives us this version of the
"collapse":
Eventually, when all the lighter energy-producing elements have been depleted, energy
will no longer be generated in the interior of the sun. In the absence of the internal
pressure that supported them, the outer shells will rapidly fall toward the center due to
gravitational attraction. In the course of this very rapid collapse, the atoms will be
squeezed together ever more tightly, and the electrons will be disassociated from the
nuclei.63
But the assumption that a star could cool down rapidly enough to increase the total
pressure significantly is nothing short of outrageous. There is no reason to believe that
the heat transfer process within the star will be any faster during the cooling process than
in the normal outward flow. Indeed, the cooling will be slowed up considerably by the
release of gravitational energy as the outer portions of the star move inward.
Furthermore, even under the most extreme assumptions, the critical pressure at which the
atomic collapse is presumed to occur could be reached only in the very large stars, since
the central atoms in the smaller stars can obviously withstand pressures immensely
greater than the static pressure to which they are normally subject. (We know this to be
true because atoms of the same kind do withstand these immensely greater pressures in
the large stars). Thus the collapse, if it occurred at all, could occur only in the stars which
current theory says do not collapse, but explode. And no one bothers to explain how the
layers of matter outside the central regions of the star, which certainly are not subjected
to any excessive pressure, are induced to participate in the degeneracy.
The truth is that the question as to how matter gets from its normal state into this
hypothetical degenerate condition is given scant attention. The astronomers have arrived
at an explanation of the extremely high density of the white dwarfs that appears
reasonable in the context of the currently accepted theory of atomic structure. That theory
portrays the atoms in terms of individual constituents separated by large amounts of
space. Elimination of this space seems to be a logical way of accounting for the enormous
increase in density. No direct evidence bearing on this issue is currently available, and the
hypothesis is therefore free from any direct conflict with observation. Having this (to
them) satisfactory explanation of the density of the white dwarf, the astronomers have
apparently considered it obvious that the stars must get from their normal condition to
this white dwarf state in some way. Consequently, they have not considered it necessary
to look very closely into the question as to how the collapse is to be accomplished.
Eddington is often credited with having provided the "explanation" of the white dwarfs.64
But an examination of one of his discussions of the subject, such as that in the chapter on
"The Constitution of the Stars" in his New Pathways in Science,65 reveals that the whole
point of his discussion is to show that the existence of degenerate matter is consistent
with accepted atomic theory. He does not address the question as to how this degeneracy
is to be accomplished, except to comment that it can be produced by pressure, which gets
us nowhere, as he offers no suggestion as to how the necessary pressure could be
produced—the same lacuna that is so evident in the more recent discussions of the
subject. Where such a suggestion is attempted, it is usually an obvious absurdity. Here is
an example:
Gravitation tends to squeeze the star to smaller and smaller dimensions, but every
contraction only strengthens the force, thereby compelling further contraction . . . Its [the
star's] contraction accelerates all the time for the reasons just explained, and it would
collapse outright into a black hole if forces were not generated to counteract the
gravitational contraction. Such a force is the thermal pressure of the gas . . . the pressure
eventually begins to balance gravitation.66 (M. J. Plavec)
This not only conflicts with the previously noted fact that the thermal pressure does not
alter the pressure exerted against the atoms, but is also specifically contradicted by direct
observation, as we know from experience that matter in which thermal pressure is not
"generated to counteract the gravitational contraction"—that is, matter that is near zero
absolute temperature—does not "collapse into a black hole." It remains in the condition
that we call the solid state, in which there is a definite minimum distance between the
atoms. This is an equilibrium distance, and it can be reduced by application of pressure,
but there is no observational indication of any kind of a limit, even though pressures as
high as five million atmospheres have been reached in experiments.
The truth is that there is no empirical evidence to support the assumption that gravitation
operates within atoms. Observations show only that there is a gravitational effect between
atoms (and other discrete particles). Furthermore, the behavior of matter under
compression demonstrates that there is a counterforce, an antagonist to gravitation (the
same one that we encountered earlier in our examination of the structure of the globular
clusters) that limits the extent to which the gravitational force can decrease the inter-
atomic distance.
Plavec's contention that collapse into a black hole will take place unless forces, such as
the thermal pressure, are "generated" to oppose gravitation is contradicted by the
observed behavior of matter, which shows that the necessary counter-force is inherent in
the structure of matter itself, and does not have to be generated by a supplementary
process.
In order to clear the way for the "collapse" hypothesis, it is first necessary to assume
that there is a limit to the strength of the counter-force, an assumption that is entirely ad
hoc, since current science has not even identified the nature of this force, to say nothing
of establishing its limits, if any. Then it is further necessary to assume that the
gravitational force operates within the atom and that the opposing force is not so
operative to any significant extent. The combination of these latter assumptions is
inherently improbable, and in view of the lack of any indication of a limit to the
resistance to compression, the first assumption has no more claim to plausibility. The
theory of atomic collapse is thus simply an excursion into the realm of the imagination.
In the universe of motion stars cannot and do not collapse. The results that are currently
attributed to this hypothetical collapse are produced by the expansion of the fastest
products of the supernova explosion into time. The factor that controls the course of
development of the white dwarf stars is the inversion of physical properties in the
intermediate speed region. As we have seen, the expansion into time increases the
amount of three-dimensional time occupied by this star. This is equivalent to a decrease
in the volume of space; that is, the equivalent spatial dimensions are reduced, resulting in
an increase in density when measured as mass per unit of volume.
Contraction of the matter of the white dwarf star under pressure has the opposite effect,
just as it does in the case of ordinary matter. Pressure thus reduces the density measured
on this same basis. The constituents of a white dwarf star, like those of any other star, are
subject to the gravitational effect of the structure as a whole, and the atoms in the interior
are therefore under a pressure. The natural direction of gravitation is always toward
unity. In the intermediate region (speeds above unity), as in the time region (distances
below unity) that we explored in the earlier volumes, toward unity is outward in the
context of a fixed spatial reference system, the datum level of which is zero. Thus the
gravitational force in the white dwarf star is inverse relative to the fixed system of
reference. It operates to move the atoms closer together in time, which is equivalent to
farther apart in space. At the location where the pressure due to the gravitational force is
the strongest, the center of the star, the compression in time is the greatest, and since
compression in time is equivalent to expansion in space, the center of a white dwarf is the
region of lowest density. As we will see later, this inverse density gradient plays an
important part in determining the properties of the white dwarfs.
Another effect of the inversion at the unit level can be seen in the relation of the size of
the white dwarf to its mass. References are made in the astronomical literature to the
"curious" fact that "the more massive a white dwarf is, the smaller its radius."67 When
the true nature of the white dwarf is understood, this is no longer curious. A massive
cloud of matter expanding into space occupies more space than one of less mass, and the
radius of the massive cloud is therefore greater. A massive cloud of matter expanding
into time similarly occupies more time than one of less mass, and the radius of the
massive cloud (measured as a spatial quantity) is therefore smaller, inasmuch as more
time is equivalent to less space.
Astronomical observations give us only occasional disconnected glimpses of the white
dwarf stars as they move through the various stages of their existence, but we can arrive
at a theoretical picture of their evolution that is in full agreement with the little that is
observationally known. The following paragraphs will outline the general nature of the
evolutionary development, which will be considered in detail in Chapters 11, 12, and 13.
In what may be called Stage 1, the immediate post-ejection period following the
supernova explosion in which the white dwarf is formed, this star is expanding in time.
This means that from a spatial standpoint it is contracting in equivalent space. In this
stage, the constituent particles, newly raised to intermediate speeds, are emitting radiation
at radio frequencies as they move toward isotopic stability at these speeds. (The process
by which the radiation is generated will be examined in Chapter 18). Such a star is
observable only as an otherwise unidentifiable source of radio emission. A great many
such sources—"blank fields," as they are known to the observers—have been located,
and presumably many of these are outgoing white dwarfs.
During this expansion stage energy is being lost to the environment, and there is little
generation of energy to replace the losses. Energy production by atomic disintegration is
reduced as the temperature rises in the range above unity, as this decreases the inverse
temperature, which determines the destructive limits of the elements in the intermediate
speed range. Since unity is the natural datum for physical activity, the critical level at
which the disintegration of the atom takes place is unit equivalent temperature,
corresponding to the speed of light, regardless of whether the predisintegration
temperature is above or below the unit level. A deviation upward from unity (a decrease
in inverse speed) has the same effect on the process as a downward deviation of the same
magnitude (a decrease in speed). Inasmuch as the maximum speed is well above unity,
only the very heavy elements are initially available as fuel.
When the energy loss to the environment has been sufficient to terminate the contraction
in equivalent space, a process of re-expansion begins. The energy loss continues
throughout this second evolutionary stage. As the expansion proceeds, and as the
temperature falls toward unity, energy production increases to some extent, since
successively lighter elements reach their destructive limits in the same manner as in the
inverse situation on the opposite side of the unit temperature level. But the supply of
elements heavier than iron was reduced to near zero before the supernova explosion, and
the expanding white dwarf therefore has very little fuel for energy generation. The atom
building process and the accretion of matter from the environment eventually begin
replenishing the supply, but this proceeds at a relatively slow pace. Furthermore, the
white dwarf does not have the benefit of gravitational energy, such as that which is
released by the contraction of the giant stars, because the effect of gravitation in time is
the inverse of the effect of gravitation in space.
Because of the energy losses, the temperatures of the constituents of a white dwarf
continually decrease, and eventually they begin dropping below the unit level. As this
reversion to the lower speed range proceeds, the star is gradually converted from the
status of a white dwarf (a star whose constituents move at intermediate speeds) to that of
an ordinary main sequence star (one whose constituents move at speeds below the unit
level). The evolution of the white dwarf is thus directed toward the same end as the
evolution of the giant stars; that is, a restoration of the state of gravitational and thermal
equilibrium that was destroyed by the supernova explosion. In the case of the red giant,
the explosion produced a cool and diffuse aggregate, which had to contract and heat in
order to reach the equilibrium condition. In the case of the white dwarf, the explosion
produced a dense hot aggregate that had to expand and cool in order to reach the same
equilibrium condition.
Since the astronomers do not recognize the true nature of the white dwarf star, they have
had great difficulty in charting an evolutionary course for these objects. As noted earlier,
they have developed a theory of stellar evolution that takes the stars as far as the red giant
stage. They regard the white dwarfs as being in the last stage on the road to stellar
oblivion. It follows, so they conclude, that the stars must, in some way, get from red giant
to white dwarf. The amount of progress that has been made toward putting some
substance into this pure assumption during the last twenty years can be seen by
comparing the following two statements:
We know remarkably little about evolution in population I after the red giants.68 (J. L.
Greenstein, 1960)
The details of the process by which the red giants evolve into white dwarfs are poorly
understood.69 (R. C. Bohlin, 1982)
But when a pure assumption of this kind is repeated again and again, its dubious
antecedents are eventually forgotten, and it begins to be accepted as established
knowledge. The remarkable way in which the status of this assumption as to the location
of the evolutionary path has been elevated by the process of repetition, without any
addition to the observational support, can be seen from the following statement from an
astronomy textbook, in which the "poorly understood" and purely hypothetical
evolutionary course becomes a certainty:
We do not know precisely what happens [to the red giants] at this point, but we are sure
that shortly thereafter the star moves rapidly to the left on the H-R diagram and then
downward, fading out slowly in the lingering death of the white dwarf.70
Even in the light of conventional theory, the hypothesis that the stars "move rapidly to the
left on the H-R diagram [from the red giant region] and then downward," meanwhile
shedding mass, is untenable. Movement to the left from the red giant region involves an
increase in the mass of a Class I star, and either an increase or a constant mass for a
member of one of the later classes. The stars in the upper left of the diagram are the most
massive of all of the known stars. The mass loss assumed to be taking place during this
hypothetical leftward movement is incompatible with the observed mass relationships.
Nor is there any explanation as to how this assumed loss of mass could take place.
Shklovsky, for instance, concedes that "we simply do not understand exactly how
material is ejected from the envelopes of such [red giant] stars."71
Furthermore, even where matter is actually ejected from a star, this does not necessarily
mean that it leaves the system. When the issue is squarely faced, it is apparent that there
is no evidence of any significant loss of mass from any star system, other than the stars
that explode as supernovae. There are, of course, many types of stars that eject mass,
either intermittently or on a nearly continuous basis, but they do not give their ejecta
anywhere near enough velocity to reach the gravitational limit and escape from the
gravitational control of the star of origin. This ejected matter therefore eventually returns
to the star from which it originated.
In this connection, it should be noted that although the relation of the stellar mass to the
variables of the CM diagram is different for the different classes of stars, our findings
show that it is fixed for any one of these classes. Stars that are following an evolutionary
course that involves an increase of mass cannot lose mass and still continue on that
course. This not only rules out the theoretical loss of mass by stars such as the red giants
which show no evidence of any significant outflow of matter, but also means that the
observed ejection of material by stars like the Wolf-Rayets is a cyclical process of the
kind discussed in the preceding paragraph. We will encounter this same kind of a cyclical
ejection process in a more extensive form in the case of the planetary nebulae, which will
be examined in Chapter 11.
The present chapter is the first in this volume that involves a full-scale application of the
reciprocal relation between space and time, the most significant consequence of the
postulate of a universe composed entirely of motion. Some of the conclusions of the
preceding chapters depend in part on this relationship, but the entire content of this
chapter rests on the inverse relation between the effects of an expansion into space and
those of an expansion into time. The concept of an object becoming more compact (from
the spatial standpoint) as it expands will no doubt be a difficult one for many individuals
(although for some reason, most seem to be quite comfortable with the fantastic "holes"
in space—black holes, white holes, wormholes, etc.—that figure so prominently in
present-day cosmological speculations). But the validity of the reciprocal relation
between space and time has been demonstrated in many hundreds of applications in the
preceding volumes, and it provides the complete and consistent explanation of the white
dwarfs that conventional astronomical theory is unable to supply.
The theory of white dwarfs in the universe of motion contains none of the awkward gaps
that are so conspicuous in currently accepted astronomical theory. In the context of this
new theory both the nature of the white dwarfs and their properties—those properties that
are so different from those of the familiar objects of everyday life—are necessary
consequences of the event in which these stars originated: the supernova explosion. And
these properties define the ultimate fate of these objects. There is no need to assume a
stellar "death" for which there is no observational evidence. The destiny of the white
dwarf, an eventual return to the main sequence, is implicit in the physical characteristics
that make it the kind of a star that it is.

CHAPTER 7

Binary and Multiple Stars


The prevalence of binary and multiple systems is one of the most striking facts that has
emerged from the astronomers' observations of the stars, but they have not been able thus
far to find an explanation for the existence of these star systems that is plausible enough
to attain general acceptance. A number of different types of theories have been proposed,
but all are subject to serious difficulties. As one astronomy textbook describes the
situation:
Our hopes of understanding all stars would brighten if we could explain exactly how
binary and multiple stars form . . . Unfortunately we cannot.72
In view of this embarrassing lack of understanding of one of the most prominent features
of stellar existence, it is significant that the development of the theory of the universe of
motion provides a detailed account of the origin of these binary and multiple systems, not
as something of a separate nature, but as an integral part of the explanation of the stellar
evolutionary process. Furthermore, this explanation of the origin of these systems carries
with it an explanation of the diversity of the components, another item that has hitherto
puzzled the investigators. A half century ago, James Jeans made the following comment
about this situation, an observation that is equally appropriate today:
Reverting to the special problem of binary systems, it is hard to see how the two
constituents can be of the same age, and yet they can only be of different ages if they
have come together as the result of capture, a contingency which is so improbable that it
can be ruled out as a possible origin for the normal binary system . . . Clearly some piece
of the puzzle is missing.73
The existence of two distinct products of the supernova explosions, with speeds in
different ranges, is the piece of the puzzle that has been missing. On the basis of the
theory of the Type I supernovae outlined in Chapters 4 and 6, every star that has been
through one such expansion is now a star system consisting of two components: an A
component on or above the main sequence, and a B component on or below the main
sequence. This means that the seemingly incongruous associations of stars of very
different types that are so common are perfectly normal developments. Combinations of
giant and dwarf stars, for example, are not freaks or accidents; they are the natural initial
products of the process that produces the second-generation stars.
The significance of the term "star system" introduced earlier should now be apparent. A
star system, in this sense, consists of two or more stars or aggregates of sub-stellar size
that have been produced by subdivision of a single star. Inasmuch as the constituents of
such a system have originated inside the gravitational limit of the original star, they are
gravitationally connected, rather than having a net outward motion away from each other,
as is true of the individual stars.
The term "binary" is frequently used by astronomers in an inclusive sense to cover all
systems with more than one component, but for the purposes of this present work it will
be restricted to the double systems. The star systems with more than two components will
be called multiple systems.
In the early stages the pairing varies with the evolutionary age of the system.
Immediately after the explosion the A component is merely a cloud of dust and gas which
appears as nebulosity surrounding the white dwarf B component. Later the cloud
develops into a pre-stellar aggregate, and then into a giant infrared star. Since these
aggregates are invisible, except under special circumstances, the white dwarf appears to
be alone during this phase. When the giant star gets into the high luminosity range this
situation is likely to be reversed, as the bright star then overpowers its relatively faint
companion. Further progress eventually brings the giant down to the main sequence. The
development of the white dwarf is slower, and there is usually a stage in which a main
sequence star (the former giant) is paired with a white dwarf, as in Sirius and Procyon.
Finally the white dwarf, too, reaches the main sequence, and thereafter both components
progress upward along the same path. The upper (more advanced) portions of the main
sequence therefore contain no associations of dissimilar stars. Many of these stars are
binaries, but they are pairs of the same or closely related types. There are some
differences in composition. The white dwarf gets the lion's share of the heavy elements in
the supernova process, and even though it accretes the same kind of matter as the giants,
it has a larger content of "metals" in the main sequence stage.
The Wolf-Rayet stars appear to reflect this difference. Their distribution and relative size
indicate (on the basis of the theory discussed in Chapter 4 ) that they are former white
dwarfs. They are less massive than the O and B stars with which they are associated.74 As
noted earlier, they are probably rich in nickel, a white dwarf characteristic, and they are
"closely confined to the plane of the Galaxy,"75 indicating that they are stars of Class 2
or later. No Wolf-Rayet stars have been found in the Orion Nebula, where O and B stars
of Class 1 are plentiful.74
It is suspected that all [Wolf-Rayets] may be components of close pairs, the W stars
revolving with larger O type companions, a situation that may provide an important clue
to the still mysterious behavior of Wolf-Rayet stars.76
This "suspected" pairing with O type stars, reported by an astronomy textbook, is fully in
accord with our theoretical findings. The O star is the A component of the binary system,
the former giant, while the Wolf-Rayet star is the B component, the former white dwarf.
The astronomers have been unable to arrive at any explanation as to why so many stars
are binary, and they are even more at a loss to explain the frequent occurrence of pairs of
a very dissimilar nature. The pairing of these dissimilar objects is an anomaly in the
context of conventional astronomical theory, which pictures the two stars in a binary
system as following the same evolutionary path, and therefore occupying very different
locations on that path if they are stars of different types. This presumed difference in
evolutionary status is hard to reconcile with the rather obvious probability that the two
stars of such a system have a common origin. The fact that the white dwarf is normally
(probably always) the less massive of the two stars exacerbates this problem.
Double stars . . . often present the strange circumstance that the more massive star is still
a main sequence object, while the less massive star has reached the white dwarf stage. If
the two stars are of the same age, and have always been a physical pair, then the more
massive star should evolve faster than the other.77
Dean B. McLaughlin makes this comment on a specific situation:
It is curious that several other novalike variables, as well as two recurrent novae, T
Coronae Borealis and RS Ophiuchi, have red giant stars for companions.78
From the standpoint of the findings of this work, there is nothing at all "curious" about
this situation. Nor is it a "strange circumstance" that the more massive star is on the main
sequence. The seeming anomaly is actually an observational repudiation of current
astronomical theory. It exposes the falsity of the assumption upon which the current
theory is based: the assumption that all stars follow the same evolutionary course, and
that the main sequence stars precede the white dwarf stars on this course. Our finding is
that the two constituents of a binary system follow totally different paths, and at any
specific time they are equally far advanced on their respective paths. The path back to the
main sequence is, however, somewhat longer for the white dwarfs, which accounts for
the variety of the combinations. Because of the nature of the process by which they were
formed, all of the stars of the white dwarf class, including the novae and related
variables, are accompanied by stars or pre-stellar aggregates on or above the main
sequence. These companions are not always visible, particularly if they are still in the
pre-stellar stage, but if they are observable, they are either giants, sub-giants, or main
sequence stars.
It is true that some of the observed double stars do not fit into this evolutionary picture on
the basis of their reported composition. For example, Capella is said to be a pair of giants.
Neither of these stars can qualify as the B component of a binary. On the basis of the
theory of the universe of motion, we must therefore conclude that Capella is actually a
multiple system rather than a double star, and that it has two unseen white dwarf or faint
main sequence components. The Algol type stars, in which the main sequence star is
paired with a sub-giant of a somewhat smaller mass, are similarly indicated as multiple
systems. The main sequence star cannot be the B component because it is the larger of
the two units and has already attained the equilibrium status, while the sub-giant cannot
be the B component because it is above the main sequence. We must therefore conclude
that at least one of these stars has undergone a second explosion, and that a faint B
companion accompanies it. This assessment of the situation is supported by the fact that
in Algol itself at least one, and possibly two, small B components have been located
observationally.
The second explosive event attributed to such stars as Capella and Algol is a normal
development that can be expected to occur in any star system of an advanced
evolutionary age, if it is in an appropriate environment. Chronological age alone will not
produce this result, as there is no progress up the main sequence unless sufficient material
is available for accretion. But where there is an adequate supply of "food" in the
environment, the stars continue moving around the cycle until their life span is terminated
by a process that will be discussed in Chapter 15.
Each passage of a single star through the explosive stage of the cycle results in the
production of a binary system (unless the B component is below stellar size, a possibility
that we will examine shortly). The number of stars in the system thus continues to
increase with age, as long as sufficient material for accretion is available. Systems with as
many as six components are found within the present observational range, and
considerations that will be discussed later indicate that even larger systems may exist in
the older regions of large spiral and spheroidal galaxies. The status of these multiple
systems as combinations of separately produced binaries is clearly indicated by their
structures.
In triple systems . . . two stars commonly co-rotate in a close orbit, and a third star
revolves around the pair at a great distance. In quadruple systems, such as Mizar, two
close pairs are likely to revolve around each other at a great distance.79 (W. K. Hartmann)
The local star group, the concentration of stars in the immediate vicinity of the sun, is
composed mainly of Class B stars, those of the main sequence, and since there is ample
evidence, such as that contributed by their heavy element content, that these are second
generation products, Class 2B, they should be largely binaries. This theoretical
conclusion is confirmed by observation. "Single stars are a minority."80 Most of the
recognized binary systems have main sequence stars in both positions, but there are some
main sequence-white dwarf combinations. Few, if any, giant-white dwarf systems are
recognized in this region, but this is probably due to the effect of the time factor on the
number of stars in each part of the cycle, as the interval during which the giant stars are
visible is of short duration compared to the time spent by the white dwarfs in their
evolutionary development.
It should be noted in this connection that this local group is representative only of a
particular evolutionary stage, not of stellar systems in general, and the proportions in
which the various types of stars occur in this local region are not indicative of the
composition of the stellar population as a whole. The white dwarf, for instance, is an
explosion product, a star of the second or later generation, and stars of this type are
almost totally absent from stellar systems such as the globular clusters, which are
composed almost exclusively of first generation stars, those which have not yet passed
through the explosion phase of the cycle. It should not be assumed, therefore, that the
high proportion of white dwarfs in the local region indicates a similar high proportion
throughout the universe, or even throughout the Galaxy.
The same caveat should be applied to the estimate, quoted in Chapter 4, that 95 percent of
all stars are located on the main sequence. This estimate does not give sufficient
consideration to the fact that few of the early type stars, those of the globular clusters and
the early elliptical galaxies, have reached this evolutionary stage. These aggregates,
which constitute the great majority of stellar systems (although they do not necessarily
contain the majority of all stars), are composed almost entirely of Class 1A stars, those
that have not yet reached the main sequence. The number of stars of the later classes in
these aggregates is no more than can be explained on the basis of the strays, the scattered
remnants of disintegrated older structures.
The observers recognize the almost complete absence of the various types of binary stars
from these young aggregates, but it remains unexplained in current thought. Burnham, for
instance, comments that "For some reason not fully understood, eclipsing binaries appear
to be very rare in globular star clusters."81 Likewise novae are scarce. "There are only
two cases of novae in globular star clusters,"82 he says. The search for binaries in the
center of globular clusters has been totally unsuccessful,83 reports Bart J. Bok. Shklovsky
concedes that for Population II stars in general, multiplicity is "fairly rare."84
No reason for this near absence of binaries from Population II (Class 1A) stars is given
in the astronomical literature. Nor is much support given to the rather half-hearted efforts
to explain the origin of the double and multiple systems. The sad fact is that the
astronomers are trapped by their upside down evolutionary sequence. The striking
difference in the abundance of binaries between two groups of stars that admittedly differ
primarily in age shows that this must be an evolutionary effect. But since the astronomers
regard the group with almost no binaries as the older, they have to find one process by
means of which the binaries are produced in the original star formation, or very soon
thereafter, and another process whereby the combinations are uncoupled at some later
evolutionary stage. Even the origin of the binaries is without any explanation that is taken
seriously, and no explanation at all has been advanced to account for the uncoupling.
When the correct evolutionary direction is recognized, one half of this problem
disappears. Only one process remains to be explained: the production of binary systems
at some stage of the evolutionary development. In the context of the theory of the
universe of motion, this is seen to be a necessary consequence of the division between
motion in space and motion in time that takes place in the products of extremely violent
explosions. Here, then, this theory provides a complete and consistent explanation of an
important feature of the astronomical universe that is without any explanation in terms of
conventional astronomical theory.
The clarification of the situation that is accomplished by the new theory does not end at
this point. Because of the lack of understanding of the basic principles that are involved,
the astronomers are unable to distinguish between cause and effect in these phenomena.
For example, Shklovsky expresses the current astronomical opinion in this statement:
Enough has been said to conclude that the doubling of a star decisively controls its
evolution.85
As the points brought out in the preceding discussion demonstrate, this view of the
situation is upside down, like so many other aspects of currently accepted theory. Instead
of the doubling of the star determining its evolution, the evolutionary development of the
star results in the doubling. The conventional view expressed by Shklovsky really does
not explain anything; it merely replaces one question with another. The question as to
what causes the evolution becomes a question as to what causes the doubling. On the
other hand, the answer derived from the theory of the universe of motion is complete.
This theory explains why stars evolve, why this evolution terminates in an explosive
event, and how the doubling of the star results from the explosion.
In a statement quoted in the first volume of the present series, Richard Feynman
commented that ‖Today our theories of physics, the laws of physics, are a multitude of
different parts and pieces that do not fit together very well."86 This description is even
more appropriate in application to the theories of astronomy.
Despite its tradition, which stretches back many millennia, astronomy does not appear to
qualify as a mature science in [Thomas] Kuhn's sense of the word—a science with an
established framework of theory and understanding.87 (Martin Harwit)
The binary star theory is one of the individual "parts and pieces" that has little connection
with anything else. The existence of binaries is simply taken as given, and a set of
conclusions with respect to some of the observed binary phenomena is then drawn from
this existence, without fitting these conclusions, and the phenomena to which they refer,
into the rest of astronomical theory. This comment is not intended as a criticism; it is
simply a statement of one of the aspects of astronomy, as it now exists, that needs to be
taken into consideration in order to understand why the theoretical development in this
series of volumes arrives at so many conclusions that differ radically from the prevailing
astronomical thought. Inasmuch as the astronomers have no general structure of theory,
either in physics or in astronomy, with which to work, they have had no option but to
proceed on this piecemeal basis. Actually, they have made impressive progress in
identifying and clarifying the "multitude of different parts and pieces." What is now
needed is to put these parts and pieces together, turning them right side up where
necessary, and fitting them together in the correct manner. This is the task that the
general physical theory derived from the postulates that define the universe of motion is
now prepared to handle.
With the benefit of the information supplied by this new theoretical system, it can now
be seen that the behavior characteristics of the binary star systems are inherent in the stars
themselves. There is no need to invent processes that call for interaction between the
components. Hypothetical processes of this nature are the current orthodoxy.
Interacting double stars—i.e., those in which gas flows from one star to the other—are in
vogue to explain many peculiar celestial phenomena. The subject has become a
bandwagon during the last decade or less.88 (David A. Allen)
In many binary systems the separation between the stars is relatively small, and some
interaction between them is a definite possibility (although it should be remembered that
where one of the two stars is a white dwarf, there is a separation in time as well as in
space, and the stars are not actually as close to each other as they appear to be). But the
current tendency is to use the hypothesis of mass transfer from one member of a binary
system to the other as a kind of catch-all, to explain away any aspect of binary star
behavior that is not accounted for in any other way. The remarkable extent to which this
hypothetical mass transfer process is currently being stretched is well illustrated by the
purported resolution of what is called the "Algol paradox." As noted earlier in this
chapter, the two principal components of Algol are a relatively large and hot main
sequence star and a less massive, cooler subgiant.
Here lies a paradox. The more massive B or A star should be the one to expand first, yet
the less massive star is the more evolved giant. Why? Is there a fundamental mistake in
our idea of stellar evolution?89 (W. K. Hartmann)
Very little is actually known about the conditions that exist in these binary systems, and
still less is known about the events that have taken place earlier in the lives of these stars.
Thus, at the present level of instrumentation and techniques there is no way of disproving
a hypothesis about these binaries, and the astronomers have taken full advantage of the
freedom for invention. "Theoretical studies have resolved the paradox," Hartmann says.
It is simply assumed that the smaller star was originally the larger, and that after having
achieved the more advanced status, it obligingly transferred most of its mass to its
companion. In other binary star situations, such as in the cataclysmic variables, the
transfer explanation can be used only if the movement is in the opposite direction. So it is
cheerfully assumed, in this case, that the transfer is reversed. As Shklovsky explains:
It seems that the hot component has already passed through its evolution and, at some
epoch in the past, transferred much of its material to its companion star. But now the
companion is returning the favor by restoring to the evolved star the material "borrowed"
many millions of years ago.90
Of course, we have to keep in mind the difficulties under which the astronomers carry on
their work, but nevertheless, there are limits to what can legitimately be classified as
scientific. Acceptance of untestable ad hoc assumptions as the resolution of problems, or
giving them any status other than that of highly tentative suggestions for study, is
incompatible with good scientific practice. It inevitably leads to wrong answers. The
correct answer to Hartmann's question is, Yes, there is a fundamental mistake in current
ideas of stellar evolution. The so-called "paradoxes" are actually observational
contradictions of a theory that has no foundation in fact.
In addition to the binaries, we observe a considerable number of stars in the local
region that appear to be single. Some of these may actually be single stars that have
drifted in as a result of the mixing process that occurs by reason of the rotational motion
of the Galaxy, but others are double stars in which one of the components is
unobservable. We have already noted that the A component of a binary is invisible during
a portion of the early evolutionary stage, and all we see under these conditions is a 1one
white dwarf. The components of the white dwarfs are not dispersed in space, and these
stars do not participate in this kind of a retreat into obscurity, but they become invisible
for other reasons. As we will find in Chapter 11, they cannot be seen at all until they cool
down to a certain critical temperature. Later a bright giant or main sequence companion
may overpower them, or they may simply be too dim to be observable at any
considerable distance.
Inasmuch as the maximum speed produced by the supernova explosions that we are
considering is less—usually considerably less—than two units, the distribution of speeds
above and below unity is asymmetric, with the greater part of the mass taking the lower
speeds. For this reason, even though some of the matter ejected into space escapes from
the gravitational control of the remnants of the star, the amount of retained slow speed
material still exceeds the inward-moving mass in most, and probably all, cases. The giant
member of the binary system therefore has the greater mass. In Sirius, for example, the
main sequence star, originally the giant, has more than twice the mass of the dwarf. Since
even the smallest star is subject to a Type II supernova explosion at the age limit, it is
evident that in many instances the mass of the dwarf component is below the minimum
required for a star, in which case the final product is a single star with one or more
relatively small and cool attendants: a planetary system.
In the supernova explosion the material near the center of the star is obviously the part of
the mass that acquires greater-than-unit speed, and disperses into time. The remainder of
the stellar material is dispersed outward into space. In view of the segregation of heavy
and light components which necessarily takes place in a fluid aggregate under the
influence of gravitational forces, the chemical composition of the two components of the
explosion products differs widely. Most of the lighter elements will have been
concentrated in the outer portions of the star before the explosion; those heavier than the
nickel-iron group will have been converted to energy, except for the stray atoms mixed in
with other material and the recent acquisitions that had not had time to sink to the center;
while the central portions of the star contained a high concentration of the iron group
elements.
When the explosion occurs, the outward moving material, which we will call Substance
A, consists mainly of light elements, with only a relatively small proportion of high
density matter. It can be deduced that the composition of Substance B, the matter of the
inward-moving component, is subject to a considerable amount of variation. The
exploding stars differ in their chemical composition. No doubt there are also differences
in some of their physical properties—rotational speed, for example. Because of these
differences in the stars from which they originate, the size and composition of the white
dwarf components of the explosion products are also variable. If this component is small, it
can be expected to be composed almost entirely of the iron group elements. The large
white dwarfs contain a greater proportion of the lighter materials.
In each of the two products of the stellar explosions that we are now considering the
primary gravitational forces are directed radially toward the center of the mass of the
dispersed material. Hence, unless outside agencies intervene, it is to be expected that any
capture of one subsidiary aggregate by another will result in consolidation, the formation
of a binary or multiple system being ruled out by the absence of non-radial motions.
Ultimately, then, the greater part of the matter of the larger of the two components, the
material dispersed in space, will be collected into one unit. The smaller component then
acquires orbital motion around the larger, consolidation being unlikely in this case, as
neither unit will be moving directly toward the other unless by pure chance. The ultimate
result is a system in which a mass, or a number of masses, composed primarily of
Substance B is moving in an orbit, or orbits, around a central star of Substance A. If the B
component is of stellar size, the system is a binary star; if it is smaller, the product is a
planet, or a planetary system. Because of interaction during the final stages of the
formation process, some of the unconsolidated fragments may take up independent
orbital positions, constituting planetary satellites.
This provides an explanation of the origin of the solar system, a matter that has been the
subject of much speculation among the members of the human race, who occupy a planet
of that system. On the foregoing basis we may conclude that at the beginning of the
formative period of the solar system, after the gravitational forces had almost completed
the task of aggregating the masses dispersed by the supernova explosion, a large mass of
Substance A, with some small subsidiary aggregates and considerable dispersed matter
not yet incorporated into the central mass, was approaching a much smaller and less
consolidated mass of Substance B. When the combination of the two systems took place
under the influence of the mutual gravitational attraction, the major aggregates of the B
component acquired orbital motion around the large central mass of the A component. In
the process of assuming their positions, these newly constituted planets encountered
local aggregates of Substance A which had not yet been drawn into the central star, and
under appropriate conditions these aggregates were captured, becoming satellites of the
planets. At the end of this phase all major units had been incorporated into a stable system
in which planets composed of Substance B were revolving around a star composed of
Substance A, and smaller aggregates of Substance A were similarly in orbit as planetary
satellites.
Small fragments are subject to being pulled out of their normal paths by the gravitational
forces of the larger masses which they may approach, and while orbital motion of these
fragments is entirely possible, the chances of being drawn into one of the larger masses
increase as the size decreases. We may therefore deduce that during the latter part of the
formative period all of the larger members of the system increased their masses
substantially by accretion of fragments of Substance A in various sizes, from
planetesimals down to atoms and sub-atomic particles. Some smaller amounts of
Substance B, in assorted sizes, were also accreted. After the situation had stabilized, the
central star, the sun, consisted primarily of Substance A, with a small amount of
Substance B derived from the heavy portions of the original Substance A mix and the
accretions of Substance B. Each planet consisted of a core of Substance B and an outer
zone of Substance A, the surface layer of which contained some minor amounts of
Substance B acquired by capture of small fragments.
The planetary satellites, which had comparatively little opportunity to capture material
from the surroundings because of their small masses and the proximity of their larger
neighbors, were composed of Substance A with only a small dilution of Substance B. It
can also be deduced that after the formative period ended, further accretion took place at
a slower rate from the remains of the original material, from newly produced matter, and
from matter entering the system out of interstellar space, but the general effect of these
subsequent additions did not differ greatly from that of the accretions during the
formative period, and did not change the nature of the result.
This is the theoretical picture as it can be drawn from the information developed in the
earlier pages. Now let us look at the physical evidence to see how well this picture agrees
with observation. The crucial issue is, of course, the existence of distinct Substances A
and B. Both the deduction as to the method of formation of the planetary systems and the
underlying deduction as to the termination of the dense phase of the stellar cycle at the
destructive limit would be seriously weakened if no evidence of a segregation of this kind
could be found. Actually, however, there is no doubt on this score. Many of the
fragments currently being captured by the earth reach the surface in such a condition that
they can be observed and analyzed. These meteorites definitely fall into two distinct
classes: the irons and the stones, together with mixtures, the stony-irons.
The approximate average composition is as follows:
Chemical Composition of Meteorites

      Irons                      Stones
Iron      0.90          Iron        0.25
Nickel    0.08          Oxygen      0.35
Other     0.02          Silicon     0.18
                        Magnesium   0.14
                        Other       0.08
Total     1.00          Total       1.00
The composition of the iron meteorites is in full agreement with the conclusion that these
are fragments of pure Substance B. The stony meteorites have obviously been unable to
retain any volatile constituents, and when due allowance is made for this fact their
composition is entirely consistent with a status as Substance A. The existence of the
mixed structures, the stony-irons, is easily explained on the basis of the previous
deductions as to the composition of the various sizes of white dwarfs.
It is also reported that the iron meteorites contain practically no uranium or thorium,
whereas stony meteorites do.91 This is another piece of information that fits in with the
theoretical picture. The energy generation process exhausted the supply of very heavy
elements in the central regions of the stars, from which the iron meteorites (Substance B)
are derived, before the supernova explosion occurred. But the outer regions of these stars,
the source of the stony meteorites (Substance A), contained portions of the heavy element
content of the accreted matter that had not yet made their way down to the center. The
evidence from the meteorites thus gives very strong support to those aspects of the theory
that require the existence of two distinct explosion products, Substances A and B.
There is no proof that the meteorites actually originated contemporaneously with the
planets in the manner described, but this is immaterial so far as the present issue is
concerned. The theoretical process that has been outlined is not peculiar to the solar
system; it is applicable to any system reconstituted after a supernova explosion, and the
existence of distinct stony and iron meteorites is just as valid proof of the existence of
distinct Substances A and B whether the fragments originated within the solar system, or
have drifted in from some other system that, according to the theory, originated in the
same manner. The support given to the theory by the composition of the meteorites is all
the more impressive because the segregation of the fragmentary material into two distinct
types on such a major scale has been very difficult to explain on the basis of previous
theories.
Additional corroboration of the theoretical deductions is provided by the spectra of
novae. Since these are stars of the white dwarf class, they are composed of Substance B
as originally formed. However, the white dwarfs accrete matter from the surroundings in
the same manner as other stars, and within a relatively short time the original star is
covered by a layer of Substance A. This material is essentially the same as that in the
outer regions of stars of other types, and the composition of the stellar interior is therefore
not revealed by the spectra obtained during the pre-nova and post-nova stages. But when
the nova explosion occurs, some of the Substance B from the interior of the star forces its
way out, and the radiation from this material can be observed along with the spectrum
from the exterior, As would be expected from theoretical considerations, the explosion
spectra often show strong indications of highly ionized iron.92
Another theoretical deduction that can be compared with the evidence from observation
is the nature of the distribution of Substances A and B in the planetary system. The sun
has a relatively low density, and we can undoubtedly say that it consists primarily of
Substance A, as required by the theory. Whether or not it actually contains the predicted
small amount of Substance B cannot be determined on the basis of the information now
available. The planet that is most accessible to observation, the earth, definitely conforms
to the theoretical requirement that it should consist of a core of Substance B with an
overlying mantle of Substance A. The observed densities of the other inner planets,
together with such other pertinent information as is available, likewise make it practically
certain that they are similarly constituted.
The prevailing astronomical opinion is that the differentiation which produced the iron
cores occurred after the formation of the planets. This necessitates the assumption that
these aggregates passed through a molten, or semi-molten stage, during which the iron
"drained into metallic cores."93 Although this theory is still the one that appears most
frequently in the astronomical literature, it received what is probably a fatal blow from
the results of the Mariner 10 mission to Mercury. A report of these results reads in part as
follows:
Somehow in the region where Mercury formed from the dust and gas of the primeval
nebula, it first gathered iron-rich materials to form a dense core before adding the outer
shells of less dense material. Planetologists here (Jet Propulsion Laboratory) feel this to
be true because there is no evidence revealed by Mariner that Mercury could have gone
through a subsequent hot period during which iron-rich materials could have
differentiated and formed the core.94
These observations indicating that the core formation preceded the acquisition of the
lighter material are fully in accord with the theory of planetary formation derived in the
foregoing pages, a theory which places the differentiation of the iron from the lighter
elements in the pre-supernova star, rather than in the planets.
The observational situation with respect to the major planets is less clearly defined. The
densities of these planets are much lower than those of the earth and its neighbors, but
this is to be expected, since they have been able, by reason of greater size and lower
temperature, to retain the lighter elements, particularly hydrogen, that have been lost by
the inner planets. The observations indicate that the outer regions of these major planets
are composed largely of these light elements. This leaves the internal composition an open
question. It seems, however, that there must have been some kind of a gravitationally
stable nucleus in each case to initiate the build-up of the light material, and it is entirely
possible that this original mass, which is now the core of the planet, is composed of
Substance B. Jupiter has a total mass 317 times that of the earth, and even if the core
represents only a small fraction of the total mass, it could still be many times as large as
the earth's core.
We may thus conclude that, although the observational data on the outer planets do not
definitely confirm the theoretical deduction that they have inner cores of Substance B, the
observed properties are consistent with that finding. Since it is highly probable that all of
the planets have the same basic structure, this lack of any definite conflict between theory
and observation is significant.
The satellites present a similar picture. The verdict with respect to the distant satellites,
like that with respect to the distant planets, is favorable to the theory, but not conclusive.
The available evidence is consistent with the theory that the inner cores of these satellites,
as well as their outer regions, are composed of Substance A, but it does not definitely
exclude other possibilities. The satellite that we know best, like the planet that we know
best, gives us an unequivocal answer. The moon is definitely composed of materials
similar to the stony meteorites and the earth's crust; that is, it is practically pure
Substance A, as it theoretically should be.
It is appropriate to point out that this theory of planetary origin derived by extension of
the development of the consequences of the fundamental postulates of the Reciprocal
System is independent of the temperature limitations that have constituted such
formidable obstacles to most of the previous efforts to account for the existing
distribution of material. The fact that the primary segregation of Substance A from
Substance B antedated the formation of the solar system explains the existence of distinct
core and mantle compositions without the necessity of postulating either a liquid
condition during the formative period, or any highly speculative mechanism whereby
solid iron can sink through solid rock.
This explanation of the formation of the system also accounts for the near coincidence of
the orbital planes of the planets, and for the distribution of the planetary orbits in distance
from the sun. It has been recognized for two hundred years that the planets are not
distributed haphazardly, but occupy positions at distances that are mathematically related
in a regular sequence.
This relation, called Bode's Law (although discovered by Titius), has never been
explained, and since present-day scientists are reluctant to concede that there are answers
which they are unable to find, the present tendency is to regard it as a mere curiosity. "It
is probable that the law is no more than an interesting relation of a coincidental nature,"95
says one textbook.
The basic principles governing this situation were explained in Chapter 6. The white
dwarf is moving in time, and the speeds of its constituents are distributed in the range
between one and two units. Increments of speed above the unit level are limited to unit
values, but since the motion in the intermediate speed range is distributed over the full
three dimensions of time, the applicable units are the three-dimensional units. As we saw
earlier, the two linear units from zero to the one-dimensional limit correspond to eight
three-dimensional units (2³ = 8). The constituents of the white dwarf are thus distributed to a
number of distinct speed levels, with a maximum of seven. The distances in equivalent
space at the point of maximum expansion are similarly distributed. In the subsequent
contraction back to the equilibrium condition these separations are maintained unchanged
although the individual constituents move from one level to the next lower whenever they
lose a unit of speed.
During the contraction in time (equivalent to a reexpansion in space) there are two
processes in operation. The gravitational force of the aggregate as a whole is pulling the
particles in toward the center of mass. Coincidentally, each of the subdivisions of this
aggregate defined by the different speed levels is individually consolidating, since all
particles in each subdivision are moving at the same speed, and are therefore at rest
relative to each other, aside from their mutual gravitational motion. The rate at which
each process takes place depends mainly on the mass that is involved and the distance
through which the constituents have to travel. If the total mass is relatively large, the
central aggregation proceeds rapidly, and the local concentrations are pulled in to the
center before they have a chance to develop very far. Where the total mass is relatively
small the distances involved are about the same, and the central force is therefore weaker.
In this case the subsidiary aggregates have time to form, and the consolidation of these
aggregates into one central mass may not be complete by the time the white dwarf
becomes subject to the gravitational effect of its companion in the binary system.
Up to this point the subsidiary aggregates are all in a straight line spatially. They are
distributed over three dimensions of time, but the spatial equivalent of this time is a scalar
quantity, and it appears in the spatial reference system in linear form. When the white
dwarf reaches the vicinity of its giant or main sequence companion, and is pulled out of
its original line of travel by the gravitational force of the companion, the various
subsidiary aggregates go into orbit at distances from the companion that reflect their
separations, as well as the amount by which the line of movement of the white dwarf is
offset from a direct central impact on its companion.
Bode's Law reproduces these distances, as they appear in the solar system, as far as the
planet Uranus. It provides no explanation as to where its elements come from, but it does
correctly identify these elements as two fixed quantities and one variable. The fixed
quantities are properties of the particular star system (the solar system), and therefore have
to be obtained empirically; they cannot be calculated from theoretical premises. The first
of these represents the distance in actual space between the A component and the closest
of the planetary masses at the time the orbital motion was established. It is the same for
all planets, and has the value 0.4 in terms of the astronomical unit, the mean radius of the
earth's orbit. Our finding confirms the value that appears in Bode's Law. The second
constant is related to factors such as the masses of the two components of the binary
system, and the magnitude of the explosion in which they were produced. In Bode's Law
it has the value 0.3. We arrive at a somewhat lower value, 0.267.
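For reference, the conventional statement of the Titius-Bode relation (a standard textbook form, supplied here for comparison rather than quoted from the foregoing discussion) is

    d = 0.4 + 0.3 × 2^k   (astronomical units)

where k is taken as negative infinity for Mercury and as 0, 1, 2, . . . for Venus, Earth, Mars, the asteroids, and the planets beyond. The 0.4 and 0.3 in this expression are the two fixed quantities identified above, and k is the variable.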
The variable in the distance relation is the speed level of the motion in time. There are
several factors involved in this relation that make it more complex than the simple
sequence in Bode's Law. Two of these factors enter into the values in the first half of the
group of planets. There is a 1½ step in the numerical sequence that does not appear in
Bode's Law. As we have seen in the earlier volumes, this value frequently appears in
such a sequence where the quantity involved is complex, so that it is feasible to have a
combination of one-unit and two-unit components. Apparently the big jump from one to
two (a one hundred percent increase) favors such an intermediate value, which is
relatively rare at the higher levels. The second special factor that enters into the situation
we are now considering is that, for reasons explained in Volume I, all magnitudes in
equivalent space appear in the spatial reference system as second powers of the original
quantities. The distances below n = 4 can thus be expressed by the relation d = 0.267 n² +
0.4. In this lower range the results obtained from this expression are practically identical
with those obtained from Bode's Law, as indicated in Table I, where the observed
distances are compared with those calculated from the two equations.
TABLE I
PLANETARY DISTANCES

Planet          n       Calc.    Obs.    Bode's Law
Mercury         0        0.40    0.40       0.40
Venus           1        0.70    0.70       0.70
Earth           1½       1.00    1.00       1.00
Mars            2        1.50    1.50       1.60
Asteroids       3        2.80    2.80       2.80
                4        4.70
Neutral point            4.95
Jupiter        (4)       5.20    5.20       5.20
Saturn         (3)       8.90    9.50      10.00
Uranus         (2)      19.60   19.20      19.60
Neptune        (1½)     34.50   30.00        —
Pluto           —               39.40      38.80
In this half of the total distance range, the increments of distance add directly, even
though they are the results of increments of motion in time (equivalent space), because
they correspond to the first half of the eight-unit speed range, which is on the spatial side
of the neutral point. Beyond this point, on the temporal side, the relations are inverted.
The n values (number of units from the appropriate zero) move back down, and the
distances in equivalent space, expressed in spatial terms, are inversely related to the
value of n². Furthermore, the transition from space to time at the midpoint involves a
change in the gravitational effect from one positive unit (gravitation in space) to one
negative unit (gravitation in time), a net change of two units. On this basis, the neutral
point is one unit (0.267) above the 4.7 distance corresponding to n = 4 on the space side.
One more such unit brings the distance to 5.2. This is 4.8 plus the 0.4 initial value. For
the more distant planets, the 4.8 applicable to n = 4 is increased in inverse proportion to
n², resulting in the values shown in the table. The applicable equation is d = 76.8/n² + 0.4.
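As a rough numerical check, the short calculation below (an illustrative sketch, not part of the original text; Python is used here, and the planet names and n values are simply those of Table I) evaluates the two relations d = 0.267 n² + 0.4 and d = 76.8/n² + 0.4. The results reproduce the Calc. column of Table I to within a few hundredths of an astronomical unit.

    # Illustrative check of the revised distance relations, using the n values of Table I.
    inner = [("Mercury", 0), ("Venus", 1), ("Earth", 1.5), ("Mars", 2), ("Asteroids", 3)]
    outer = [("Jupiter", 4), ("Saturn", 3), ("Uranus", 2), ("Neptune", 1.5)]

    def inner_distance(n):
        # Lower half of the planetary range: d = 0.267 n^2 + 0.4 (AU)
        return 0.267 * n ** 2 + 0.4

    def outer_distance(n):
        # Upper half, inverse relation: d = 76.8 / n^2 + 0.4 (AU)
        return 76.8 / n ** 2 + 0.4

    for name, n in inner:
        print(f"{name:<10} n = {n:<4}  d = {inner_distance(n):5.2f} AU")
    for name, n in outer:
        print(f"{name:<10} n = ({n})  d = {outer_distance(n):6.2f} AU")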
The agreement between the observed and calculated distances is not as close for these
outer planets as for the inner group, but it is probably as close as can be expected, except
in the case of Pluto. Bode's Law could have a place for Pluto, but only at the expense of
omitting Neptune. This is not acceptable, as Neptune is a giant planet, while Pluto is a
small object of uncertain status. It appears likely that the inverse speed range
corresponding to n = 1½ is the maximum that was reached by the parent white dwarf, and
that both Neptune and Pluto condensed in this relatively wide distance range. This would
account for the fact that the calculated value for n = 1½ falls between the observed
distances of the two planets.
This explanation of the interplanetary distances implies that almost all small stars of the
second generation or later have similar planetary systems in orbit, a point that we will
consider in another connection later. Otherwise, the clarification of the distance situation
is not of any special importance in itself. It is significant, however, that when we put
together the different properties that the motion of the white dwarf constituent of a small
binary system must possess, according to the theory of the universe of motion, we arrive
at a series of interplanetary distances that are almost identical with the observed values.
This numerical agreement between theory and measurement is a substantial addition to
the evidence supporting the theoretical conclusions as to the nature of motion in the upper
speed ranges. The white dwarf is the only object with component speeds greater than the
speed of light that is involved in the astronomical phenomena thus far discussed in this
volume, the phenomena that take up about 80 percent of a standard astronomy textbook.
But the remainder of this work will be concerned mainly with objects whose components,
and often the objects themselves, are moving at upper range speeds. A full understanding
of the nature and properties of the white dwarf will contribute materially to clarification
of the more complex phenomena of the intermediate and ultra high-speed ranges that will
be discussed in the pages that follow.
The smaller components of the solar system include interplanetary dust and gas,
meteorites, asteroids, and comets. The asteroids are aggregates of Substance B, from
1,000 km in diameter downward, which were never captured by planets, and did not accrete
enough material to become planets in their own right. Most of the large asteroids are
located in the "asteroid belt" between Mars and Jupiter, and represent the core of a
potential planet that failed to complete its consolidation because of the gravitational
effect of nearby Jupiter. The orbits of the asteroids are subject to modification by the
gravitational forces of the planets, and occasionally one is deflected into an orbit that
results in capture by the earth. Those that reach the earth intact, or in fragmentary form,
are the previously mentioned iron, or stony-iron, meteorites. Stray aggregates of
Substance A similarly captured are the stony meteorites. Most of these latter objects, like
the asteroids, date from the original formation of the solar system.
Comets are relatively small aggregates of material drawn in by the sun from distant
locations within its gravitational limit. Unless the incoming material happens to make a
direct hit, it goes into a very elongated orbit on the first approach. At each return it loses
part of its mass and reduces the size of its orbit. Eventually its entire contents are either
absorbed by one of the larger bodies of the solar system or distributed in the space
surrounding those bodies. The earth participates in this process in a relatively small way,
capturing both individual particles (sporadic meteors) and meteor swarms, which are
portions of the detached cometary material that follow previous orbits of their parent
comets.
The current view is that there must be a "reservoir" of comets at some relatively large
distance from the sun. In a sense, this is true, as the long period comets spend the greater
part of their lives in the outer portions of their orbits. But this reservoir is merely a
storage location, not a source. There is a certain residual amount of dust and gas within
the gravitational limit of the sun, some inflow of diffuse matter from interstellar space,
and a small, but continuous, formation of new matter from the incoming cosmic rays.
Thus new cometary matter is constantly being made available. The number of comets in
the system is probably now at an equilibrium level where the rate of formation is equal to
the rate of loss due to evaporation from the comets and eventual capture of the remnants.
The contents of this chapter identify some of the factors that have a bearing on the
question as to where planets are likely to exist, a question that excites a great deal of
interest because it is a key element in any assessment of the possibility of the existence of
life—particularly intelligent life—elsewhere in the universe. The B component of a
binary system is either a star or a planetary system, not both. This eliminates all binary
stars, and since all Class 1 stars are automatically excluded, it confines the possibility of
planets to single Class 2 stars (such as the sun), or to single components of multiple
systems (Class 3 and later). Inasmuch as a long period of reasonably stable conditions is
probably required for the development of life (and certainly for the emergence of any of
the higher forms of life), the Class C stars of the second and later cycles, and the stars high
on the main sequence, all of which are subject to relatively rapid change, can also be
crossed off the list.
This wholesale exclusion of so many different classes of stars may seem to limit the
possibility of the existence of extra-terrestrial life very drastically, but in fact, these
conspicuous and well-publicized stars constitute only a minor part of the total galactic
population. The great majority of the stars of the Galaxy are small, and relatively cool,
stars in the lower sections of the main sequence. As we will see in Chapter 12, there is a
lower limit to the mass of a white dwarf star, and when the B component of a system is
below this limit it cannot attain stellar status. This implies the existence of an immense
number of planets among the smaller systems. Of course, there are requirements as to
size, temperature, etc., that a planet must meet in order to be available as an abode for
life, but there is a zone in each system within which a planet of an appropriate size is
quite likely to meet the other requirements. Since Bode's Law (as revised) is applicable to
all systems in which the conditions are favorable for planet formation (the small
systems), it is probable that all of these systems have at least one planet in the habitable
zone.
The findings of this work thus increase the probability that there are a very large number
of habitable planets (earth-like planets, let us say) in our own galaxy, as well as in other
spiral galaxies. There are few, if any, in the galaxies smaller than the spirals—the
ellipticals and the small irregulars—because they are composed almost entirely of Class l
stars. The situation in the giant spheroidals is not yet clear. There are multitudes of lower
main sequence systems in these giants, and these can be expected to have the usual
proportion of planetary systems. However, the intense activity that, as we will see later, is
taking place in the interior zones of these giants no doubt rules out the existence of life.
Whether enough of this activity carries over into the outer parts of these galaxies to
exclude life in these areas as well is uncertain. The oldest of these giants are probably
lifeless. As we will find in Chapter 19, there is a strong X-ray emission from these mature
galaxies, and this is probably lethal. So far as we know at present, however, there may be
outlying regions in some of the younger galaxies of this class in which the conditions are
just as favorable for life as in the spirals.
In today's science fiction, where life in other worlds is a favorite motif, the habitations of
the alien civilizations are identified with familiar names, for reasons that are
understandable. The thrilling action that the authors of these works describe takes place
on planets that circle Sirius, or Arcturus, or some other well-known star. But according to
our findings, few, if any, of these familiar stars are capable of having a habitable planet in
orbit, and are also old enough to have developed complex forms of life. Sirius, for
instance, has a white dwarf companion instead of a planetary system. Arcturus is a young
Class C star. The astronomers do not make the mistake of identifying the environments of
these stars as the abode of life, but they avoid it by making a different mistake. In
selecting the target of their first systematic attempt at interstellar communication
(1974) they were misled by their current view of the evolutionary direction of the stars.
This initial effort was directed at the globular cluster M 13, on the assumption that it is a
very old structure in which the processes that lead to life have had ample time in which to
operate. We now find that the globular clusters are relatively young structures which,
aside from a few stray stars that have been picked up from the environment, are
composed entirely of Class 1 stars. These cluster stars have not been through the
explosion process, and therefore have no planetary companions at all.
As matters now stand the available information indicates that habitable planets are
plentiful, but that the planets on which life probably exists are not located in any systems
that we can call by name. The stars that they are orbiting are undistinguished, anonymous,
and, with few, if any, exceptions, unseen stars of the lower main sequence.
CHAPTER 8

Evolution - Globular Cluster Stars
Even though a globular cluster may contain as many as a million stars, it is too small to
have any major effect on the structure of a large spiral galaxy such as ours when a
capture takes place. But since this capture occurs practically on our doorstep, we are able
to trace the progress of the clusters into the main body of the galaxy, and to read their
history in considerable detail. This process is too slow to be followed observationally, but
we can accomplish essentially the same thing by identifying clusters in successively later
stages of development, and establishing the order in which the various changes take
place.
As brought out in Chapter 3, the globular clusters are being drawn in toward the galaxy
from the surrounding space by gravitational forces, and the observed concentration of the
clusters thus far located within a sphere that has a radius of about 100,000 light years is
merely a geometrical effect. The clusters move "as freely falling bodies attracted by the
galactic center," and they do not participate, to any significant extent, in the rotation of
the Galaxy. Thus the observations indicate that the clusters are on the way to capture by
the Galaxy.
The increasing strength of the gravitational forces as the clusters approach closer to the
Galaxy has a disruptive effect on the positional equilibrium within the clusters. The outer
stars tend to be stripped away, and the clusters therefore decrease in size as they
approach. Observations reported in Chapter 3 indicate that a cluster loses more than one
third of its mass by the time it reaches a position within 10,000 parsecs of the galactic
center. In the capture zone, the region in which the structure of the clusters begins to be
disrupted, the losses are still greater, and at the time when contact is made with the
Galaxy the remaining stars are numbered in the tens of thousands rather than in the
original hundreds of thousands. On entry into the rapidly rotating galactic disk still
further disintegration occurs, and the globular cluster separates into a number of open
clusters. These are relatively small groups, most being in the range from around a dozen
to a few hundred stars, although a few have as many as a thousand.
The total mass of a small cluster of this kind is not large enough to produce a
gravitational attraction that is equal to the outward progression of the natural reference
system, even when augmented by the gravitational effect of the galaxy as a whole. The
open clusters are therefore expanding at measurable rates. One of the results of this rapid
expansion is that the lifetime of these clusters is relatively short. In order to account for
the large number of such clusters now in existence, which runs into the thousands (one
estimate, reference 96, is 40,000 when due allowance is made for the fact that only a
small fraction of the total can be identified from our position in the galaxy), there must be
some process in operation that continually replenishes the supply. The astronomers have
been unable to find any such process. Like other members of the human race, they are
reluctant to admit that they are baffled, so the general tendency at present is to assume
that the open clusters must originate by means of the star formation process that they
believe is taking place in dense dust clouds. But this explanation simply cannot stand up.
If the cohesive forces in these clouds are strong enough to form a cluster they are
certainly strong enough to maintain it. The observed expansion thus contradicts the
hypothesis of formation near the present cluster sites.
Of course, it is conceivable that some clusters formed under certain favorable conditions
might at some later date encounter conditions that would cause them to disintegrate, but
all open clusters are disintegrating, and astronomical theory has to explain this fact. No
stable stellar aggregate exists in the range between the globular clusters and the multiple
star systems. If the issue is squarely faced, it is clear that conditions in the Galaxy are
favorable for dissolution of the clusters, whereas the existing clusters must have been
formed under conditions favorable for such formation.
Those astronomers who do face the issue recognize that current theory has no satisfactory
answer to the problem, notwithstanding the wide range of possibilities that has been
explored. Bok and Bok, who discuss the question at some length, conclude that at least
some classes of clusters are not being replaced. The most conspicuous clusters, the
Pleiades, Hyades, etc., are disintegrating, and these authors say that "there seem to be no
others slated to take their place." Likewise they conclude that the "open clusters with
stars of spectral types A and later . . . may be a vanishing species."97
The obvious answer cannot be ignored completely. Bok and Bok concede that "one might
be tempted to think about dismembered globular clusters as possible future Pleiades-like
clusters," but since this conflicts with the prevailing ideas as to the direction of stellar
evolution, they resist the temptation, and dismiss the idea as impossible. Here, again, the
physicists' assumption as to the nature of the energy generation process must be
supported, whatever the cost to astronomy may be. The two considerations that they say
"show how impossible this would be" are first, that the spectra changes required in going
from globular to Pleiades-like clusters are impossible, and second, that the "rate of
evaporation for globular clusters is far too slow." The first of these objections is simply a
reiteration of the upside down evolutionary sequence that the astronomers have adopted
to conform to the physicists' assumptions. As already explained, the evolutionary path for
all stars is from globular cluster to main sequence, not vice versa. And the globular
clusters that fall into the Galaxy do not shrink slowly by evaporation; they are torn apart
quickly by the rotating matter of the galactic disk. The piece of information that has been
lacking in the astronomers' view of the situation is the existence of an interstellar force
equilibrium that gives an aggregate of stars the physical characteristics of a viscous fluid.
The entry of the cluster into the galaxy is physically similar to the impact of one fluid
aggregate on another. All of the elements of the problem fall into place when it is viewed
in the light of the theory of the universe of motion.
The conclusion as to the origin of the open clusters derived from this theory is reinforced
by the available data on the properties of these stellar groups. One of these properties is
the density of the group. Any gravitationally bound group of stars has a density greater
than that of the field of stars in its environment. Inasmuch as the aggregate of stars in the
Galaxy has the characteristics of a liquid, a stellar group whose density exceeds the
density of the field stars will fall toward the galactic plane. This is a necessary
consequence of the gravitational differential, and the descent will take place regardless of
the nature of the influences that are responsible for the separation between the field stars,
and regardless of whether the clusters fall into the Galaxy, as asserted by the theory of the
universe of motion, or originate somewhere within that structure, in accordance with
present-day astronomical theory. Even the much looser "associations" participate in this
response to the gravitational differential.98
Since the clusters are falling objects, those that are higher above the galactic plane are
younger, on the average, than those lower down. One of the most conspicuous members
of the higher class is M 67, about 440 parsecs above the plane. At the other extreme are
objects such as the double cluster in Perseus, which is in the general vicinity of the plane.
It follows directly from the relative positions of the two classes that the clusters of the M
67 class are the younger and those of the Perseus class are the older.
This conclusion derived from the relation of position to cluster density is corroborated by
direct observation of density changes. Inasmuch as the clusters are expanding, their
densities are decreasing with age. While the density of any individual cluster may reflect
the particular conditions to which it has been subject, the average density of the clusters
of each class should depend mainly on the amount of expansion that has occurred. It
therefore follows that the clusters with the higher average density are the younger, and
those with the lower average density are the older. Studies show that the clusters of the M
67 class have the higher average density.99 Hence these are the young clusters, and the
clusters of the Perseus class are relatively old—the same conclusion that we reach from a
consideration of the positions above the galactic plane. Both of these indications of
relative age are observed properties of the clusters, and are independent of the
astronomical theory in whose context they are viewed. In this case, then, we have
something that is very rare in astronomy: a direct observational indication of the direction
of evolution.
Here we have positive proof that the stars of the main sequence are older than the stars of
the globular cluster type (the kind of which M 67 is composed). This negates the basic
premise on which current theory of stellar evolution is founded. That theory asserts that
the stars of the upper main sequence are necessarily young because the supply of
hydrogen for production of energy will be exhausted in these stars in a relatively short
time. The proof that these stars are not young now turns the argument upside down. The
demonstrated fact that they are relatively old stars shows that hydrogen is not the stellar
fuel.
With the addition of this evidence to the many items previously noted, we now have a
positive definition of the direction of evolution of the stars of the globular and open
clusters, and by extension, a definition of the direction of stellar evolution in general. In
order to see just how this information fits into the theoretical picture, we will now turn to
a consideration of the evolution of the stars in the clusters.
Inasmuch as the remains of disintegrated stars and galaxies are scattered throughout all
space, and atoms of matter are continually forming in this space from the decay products
of the cosmic rays, there is a certain minimum amount of material subject to accretion in
any environment in which a star may be located. Immediately after the formation of a
globular cluster star by condensation of a portion of a protocluster, this thin diet of
primitive material and the atom building that takes place within it, are all that is available
for growth, and the evolution of the stellar structure is correspondingly slow. The stars of
the globular clusters are therefore in an early stage of development. Aside from a few
strays from older systems that have been incorporated during the formation of the cluster,
the distant clusters contain only Class 1 stars: infrared stars, red giants, sub-giants, long-
period Class 1A variables, and variables of the RR Lyrae and associated types. To these,
the clusters closer to the Galaxy add some Class 1B stars of the lower main sequence.
As noted in Chapter 4, the CM diagram provides a picture of the most significant changes
that take place in the constituent stars of the globular clusters. The first stage of their
evolution, after they become observable in area O of the diagram, is a contraction under
the influence of the combined gravitational forces of each star itself and the cluster as a
whole. This ends for each star when it reaches gravitational equilibrium on the line BC,
the main sequence. Thus the paths OAB and OAC on the CM diagram of M 3 are the
routes followed by the stars of this cluster in the continuation of the process by which
they originated. The locations along this path represent what we may call evolutionary
ages. A star at point B or point C has traveled the entire length of its path OAB or OAC.
Although it is common practice to refer to the pre-stellar aggregate as a dust cloud, it is
actually a gas cloud with a small dust content. Thus the physical aspect of the evolution
of the newly formed stars is defined by the behavior of an isolated gaseous aggregate
subjected to a continuing increase in temperature and pressure under the influence of
gravitational forces. Since the matter of the star is above the critical temperature by the
time that the pressure reaches significant levels, it has been assumed that the star is
gaseous throughout its structure. As expressed in one textbook, "Because the sun (a star)
is so hot throughout all its volume, all of its matter must be in the gaseous state."100
This statement is valid on the basis of the conventional definition of the gaseous state, in
which this state has no density limit, but the investigation upon which this work is based
(see Volume II) has shown that this definition leads to some erroneous conclusions. In
particular, it leads to the conclusion that all matter above the critical temperature
conforms to the gas laws—the general gas equation PV = RT, and its derivative relations.
This is not true. In fact, these laws do not apply to matter at all. They apply only to the
empty space between the atoms or molecules of the gas. At very low densities the volume
of a gas aggregate, as measured, consists almost entirely of empty space, and the gas laws
are therefore applicable. As soon as the density increases to the point where the volume
occupied by the particles of matter begins to constitute an appreciable proportion of the
total, a correction for the deviation of the volume from that of the "ideal gas" (the empty
space) must be applied. A further increase in density ultimately brings the aggregate to a
critical point at which the correction becomes the entire volume; that is, the empty space
has been completely eliminated. The aggregate is now a condensed gas.
Inasmuch as conventional physics has no theoretically based relations from which to
compute the magnitudes of the various properties of gas aggregates at high pressures, and
relies on empirical relations, restricted to a relatively low pressure range, for this purpose,
the existence of this third condensed state of matter was not detected prior to the
development of the theory of the universe of motion. In the light of this theory, however,
the existence of this condensed gas state is a necessary consequence of the nature of
physical state. In the gaseous state the individual units—atoms or molecules—are
separated by more than one unit of space, and are therefore moving freely as independent
particles. In the condensed states—solid, liquid, and condensed gas—the separation has
been reduced to the equivalent of less than a unit of space (by the introduction of time).
Here the individual particles occupy fixed (solid state) or spatially restricted (liquid and
condensed gas) positions in which they are subject to a set of relations quite different
from the gas laws. For example, as brought out in Volume II, the volume of a solid
aggregate is inversely proportional to the square root of the total pressure, including the
internal pressure, rather than inversely proportional to the external pressure as in the
gaseous state.
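The contrast between these two pressure-volume relations can be made concrete with a short numerical sketch. This is only an illustration, not a reproduction of the detailed relations of Volume II; the reference volume and the internal pressure used below are hypothetical values, chosen solely to show how much less sensitive a condensed aggregate is to external pressure than an ideal gas.

from math import sqrt

def gas_volume(p_ext, v_ref=1.0, p_ref=1.0):
    # Ideal-gas behavior at constant temperature: P * V = constant, so the
    # measured volume varies inversely with the external pressure alone.
    return v_ref * p_ref / p_ext

def condensed_volume(p_ext, p_int, v_ref=1.0, p_ref=1.0):
    # Condensed-state relation described in the text: the volume varies
    # inversely with the square root of the total (external plus internal) pressure.
    return v_ref * sqrt((p_ref + p_int) / (p_ext + p_int))

# Illustrative comparison over a range of external pressures (in atmospheres);
# the internal pressure of 3000 atm is a purely hypothetical figure.
for p in (1, 10, 100, 1000):
    print(p, round(gas_volume(p), 4), round(condensed_volume(p, 3000.0), 4))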
A study of the volumetric relations carried out in the course of the investigation on which
this work is based has disclosed that the transition to condensed gas takes place within
the temperature and pressure range of much of the experimental work reported in the
scientific literature. For instance, application of the theoretical relations to the volumetric
data on water at 1000° C indicates that the transition from the gaseous state to the
condensed gas state begins at about 600 atm. pressure, and is completed at about 3000
atm. Above this level the condensed gas volume can be computed by means of the
relations that apply to the liquid state. The temperatures in the stars are, of course, vastly
greater, but so are the pressures, and both the gaseous and condensed gas states exist
within the stellar temperature and pressure range, a fact that has an important bearing on
the evolutionary pattern of the stars.
One important property shared by all of the condensed states is that an aggregate in any
one of these states has a definite surface. This is not true of a gas cloud. Such an
aggregate simply thins out with the radial distance until it reaches the density of the
surrounding medium. This point is generally recognized in the case of star clusters and
galaxies, which are structures of the same kind, differing only in that the constituent units
are stars rather than particles. The fact that the dimensions of these objects, as observed,
depend on the limiting magnitude reached by the observations is well known, but the
corresponding phenomenon in the stars, if it is recognized at all, is not emphasized in the
astronomical literature. This is no doubt due, at least in part, to the observational
difficulties. The dimensions of the stars of the dust cloud classes can only be observed by
means of special techniques of limited applicability (such as interference methods) or
under special circumstances (such as in eclipsing variables), and the absence of surfaces
has not been evident enough to attract attention. The only star that is readily accessible to
observation, the sun, belongs to the other class of stars, those that do have definite
surfaces.
The condensation of a dust and gas cloud under the influence of gravitational forces is an
equilibrium process, not a static equilibrium like that of the stars on the main sequence,
where the variables react in such a way as to maintain constant relations, but a dynamic
equilibrium, in which the interactions between the variables maintain a uniform pattern
of change in their relations. Consequently, all of the clouds condensing into stars follow
the same evolutionary path, differing only in the rate at which they move along that path.
At any given stage of the contraction process along the line OA on the CM diagram all
stars therefore have the same effective mass and volume (aside from the variations that
are responsible for the width of the line), irrespective of the size of the dust clouds from
which they are drawing their material.
In this first part of the evolutionary path the continuing condensation of the stellar
aggregate is made possible only by the assistance of the gravitational effect of the cluster
as a whole, as this early type of star is not a self-gravitating object. As indicated in the
earlier discussion, however, the gravitational forces of the star are strengthened as it
becomes denser, and at a certain point, designated A on the CM diagram, Fig. 3, the star
reaches the critical density where it becomes self-gravitating; that is, it is capable of
further contraction toward gravitational stability without outside assistance. Beyond the
point at which the critical density is reached, the two processes, the original growth
process and the self-gravitation, are in competition. The outcome depends on the relative
rapidity of the processes.
If the growth of the star has taken place all the way from particle size, without the benefit
of any gravitationally stable core, the contents of the parent dust cloud are practically
exhausted by the time that the star reaches the critical density at point A. In this event the
self-gravitation initiated at A proceeds at a more rapid rate than the growth by accretion.
The star then pulls away from its surroundings and moves directly down the diagram
along the line AB, the line of constant mass.
If the star did have a pre-existing fragment as a nucleus, growth along the line OAC is
able to continue. As noted in Chapter 4, the availability of even a very small fragment as
a nucleus for condensation gives a star a big advantage over the majority, which have to
start from particles. Because of the much larger amount of dust and gas over which they
are able to establish gravitational control, these stars that had the head start are usually
able to follow the line AC all the way to point C, or at least to the vicinity of that point. In
some cases there is a tendency for the observed paths to bend downward shortly before
reaching C, indicating that the material for growth has been exhausted. In other cases the
trend in the vicinity of point C is upward. This is no doubt due to accelerated accretion
from favorable environments.
Inasmuch as the process by which the primitive cloud of matter was formed, as described
in Chapter 1, produces essentially the same initial conditions in each cluster, the
equilibrium conditions
are practically the same for all clusters. It follows that the critical points A and C on the
line OAC are the same for all of these clusters. This conclusion refers, of course, to the
true values, the absolute magnitudes. But the astronomers' evaluation of absolute
magnitudes is subject to a considerable degree of uncertainty. For present purposes,
therefore, it appears to be advisable to deal with the observed magnitudes, using the
observed magnitude at some identifiable location in each diagram as a reference point.
The resulting diagram is identical with that which would result from the use of the correct
absolute magnitudes, except that the magnitude scale is shifted by an amount that reflects
the effect of distance and obscuration.
There are some other factors (chemical composition, for instance), in addition to the
evolutionary development, that affect the variables represented on the CM diagram, and
these factors, together with the observational uncertainties, result in rather wide
evolutionary paths, but aside from these effects, the foregoing theoretical conclusions
indicate that the upper sections of all CM diagrams of globular clusters should be
identical, to the extent that the evolution of each cluster has progressed.
Fig. 9 shows that this theoretical pattern is followed by six of the most prominent
globular clusters. The outlined areas in each cluster diagram show the observed star
locations. The boundaries of these areas have been located by inspection of diagrams
published in the astronomical literature. Greater accuracy is possible, but this would call
for an expenditure of time and effort that did not appear to be justified for the purposes of
this somewhat preliminary survey of the situation.
The theoretical evolutionary lines, the diagonal lines in the diagrams, are the same for all
clusters, except that in each case the reference point determines the magnitude scale.
Whatever differences in the lengths and slopes of these lines may exist between the
individual diagrams are due to differences in the scales of the original diagrams from
which the data were taken. The upper of the three points identified on each line is the
reference point. The point corresponding to a B-V color index of 1.4 has been selected as
the reference point in most of the CM diagrams in this volume, because it is usually quite
clearly defined by the observations, but where the 1.4 location is uncertain some better
defined location has been substituted. What the diagrams show is that if the location of
the reference point is taken to represent the absolute value of the luminosity, then the
points A and B on the line OAC, as previously defined, have the correct theoretical
relation to the reference value, within the accuracy of the representation. Some of the
evolutionary paths tend to diverge from the theoretical line as they approach the main
sequence at point C, but the deviation is within the range of the processes previously
mentioned as being applicable in this region.
These considerations that apply to the upper section of the diagram, the line OAC, are
likewise applicable to the lower sections, the line AB and the relevant portion of the main
sequence, which have been identified observationally for only a limited number of
clusters. It then follows that when the location of any one point is specified in the manner
that has just been described, the M 3 pattern can be applied to a determination of the
entire theoretical pattern of any globular cluster. The complete CM diagrams thus
obtained for two of the clusters of Fig. 9 are shown in Fig. 10. These clusters clearly
conform to the theoretical pattern.
It is true that there is considerable variability in the line AB, but this is easily understood
as a result of the expansion and contraction of the cluster during the travel toward the
Galaxy. As explained in Chapter 3, the cluster is subject to substantial loss of stars during
its approach, because of differential gravitational effects. These losses alter the
equilibrium in the cluster, and tend to cause density fluctuations. The variations in the
cluster density have a corresponding effect on the pressure that is exerted on the
individual stars by the gravitational force of the cluster as a whole, thus transmitting the
density fluctuations to the stars. If the cluster and its constituent stars are expanding as
the stars approach point A, the contraction along the line AB is delayed to some extent,
and the evolutionary path is displaced to the left of the line. Then, when the expansion
phase of the density cycle is succeeded by a contraction, the path is displaced to the right
at some location farther down the line. There may even be another swing to the left
before the main sequence is reached. As can be seen in the diagrams, this cyclic effect is
at a minimum in M 13, but it shows up clearly in such clusters as M 3 and M 5.
The red giant section OA of the CM diagram of a globular cluster is usually well defined,
even where the limiting magnitude to which the observations have been carried cuts off
most of the lower portions of the diagram. Since only one observed point is required in
order to establish the complete Class 1 diagram, and any point in this well-defined giant
section will serve the purpose, it is not difficult to define the theoretical CM diagram for
an ordinary globular cluster. Furthermore, if the observations extend to the main
sequence, the accuracy of the diagram thus defined can be verified by the observed
positions of the main sequence stars. Thus, as indicated by the diagrams already
introduced, there is little question as to the position of the evolutionary paths.
Uncertainties arise only in the case of the very distant clusters that are observed with such
difficulty that only the most luminous stars can be identified. Even at these distances the
diagrams are often well defined. For instance, Fig. 11 shows the relation between the
theoretical OAC line and the observed locations of the stars of two of the most distant
clusters for which data are available. These clusters, NGC 6356 and Abell 4 have
uncorrected magnitudes at a B-V color index of 1.4 of 16.2 and 18.2 respectively. These
compare with 12.1 for M 13 and 10.4 for NGC 6397, the cluster closest to the sun. The
luminosity of the most distant of these four clusters is less than that of the closest by a
factor of more than a thousand.
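The factor quoted here follows directly from the standard magnitude scale, on which a difference of five magnitudes corresponds to a brightness ratio of 100. A minimal check, using the uncorrected magnitudes just cited:

def brightness_ratio(delta_mag):
    # Standard magnitude scale: a difference of 5 magnitudes is a factor of 100.
    return 100 ** (delta_mag / 5)

# Abell 4 (18.2) compared with NGC 6397 (10.4), uncorrected magnitudes at B-V = 1.4.
print(round(brightness_ratio(18.2 - 10.4)))   # about 1300, i.e. "more than a thousand"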
A point that should be noted in connection with the evolutionary pattern of the globular
clusters is that the difference in luminosity (on the logarithmic scale) between point B
and point A, 5.6 magnitudes, is twice the difference between point B and point C, which
is 2.8 magnitudes. The significance of this relation will be discussed in Chapter 11.
Identification of the globular cluster pattern as a fixed relationship provides a simple and
potentially accurate method of determining the distances to the clusters. Inasmuch as the
theoretical findings indicate that the pattern is identical for all the clusters, it follows that
the absolute magnitude corresponding to any specific color index is the same for all. Like
the evolutionary pattern itself, we will have to determine this absolute magnitude
empirically for the present, but once we have it for one cluster, we can apply it to all
clusters. A value of 4.6 at a B-V color index of 0.4 on the main sequence has been
selected on the basis of two criteria. First, this agrees with the currently accepted values
applicable to the nearest clusters, which are presumably the most favorably situated for
accurate observation, and second, this value yields practically the same average as the
observational values given by W. E. Harris for a long list of clusters.101 These previously
reported values from observation should average close to the correct magnitude if there
are no systematic errors, even though the error range of the individual values is conceded
to be quite wide. The distance moduli (the differences between the apparent and absolute
magnitudes) calculated on the 4.6 basis are compared with those given in the tabulation
by Harris in Table II. A few distant clusters not listed by Harris are also included. For the
benefit of those readers who are not much at home with the astronomers' magnitude
system, the distances are expressed in terms of light years in the last column of the table.
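For readers who want the conversion made explicit: a distance modulus m - M corresponds to a distance of 10^((m - M + 5)/5) parsecs, and a parsec is about 3.26 light years. The following sketch reproduces the kind of calculation behind the last two columns of Table II, using the M 13 value from the table as an example:

def modulus_to_light_years(distance_modulus):
    # Standard relation: m - M = 5 * log10(d) - 5, with d in parsecs.
    parsecs = 10 ** ((distance_modulus + 5) / 5)
    return parsecs * 3.2616   # light years per parsec

# Example from Table II: M 13, distance modulus 14.1 in the "This Work" column.
print(round(modulus_to_light_years(14.1), -2))   # roughly 21,500 light years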
The potential accuracy of the method is not fully attained in the present work because of
the previously mentioned approximate nature of the process employed in identifying the
reference point for each cluster. But even so, more than half of the values calculated on
this basis in the course of the present study agree with the values given by Harris within
his estimate of the probable error range. Most of the observers' original reports do not
specify whether the values shown in their diagrams have been corrected for the reddening
due to dust along the line of travel of the radiation. The theoretical distances in the table
have been calculated on the assumption that no such correction has been applied to the
plotted values. If this is incorrect in any specific case, the calculated distance will be
modified accordingly.
The main sequence, as defined by the astronomers,102 has an absolute magnitude of about
3.8 at a B-V color index of 0.4. This puts it 0.8 magnitudes above the position obtained
for the globular clusters from the study of the CM diagrams. The significance of this
difference will be discussed later.
The evolutionary age of each cluster is indicated by its position on the CM diagram. The
overall range from the earliest to the latest type of star in the cluster remains about the
same, but the positions of both the front end of the age sequence, the location of the most
advanced stars, and the rear end, the location of the least advanced, move forward. Bart J.
Bok comments that the branch of the diagram of the cluster Omega Centauri that is
occupied by the red giants "is unusually long," and also that the data "do not reveal the
full extent of the main sequence."87 He attributes the length of the giant branch to a high
degree of variability in the metal content. Our analysis shows, however, that both of the
features of the diagram that Bok mentions are aspects of the same thing. They indicate
that Omega Centauri is not as far advanced from the evolutionary standpoint as a cluster
such as M 13, for example. There are stars in Omega Centauri that are earlier (that is,
farther to the right in the CM diagram) than the earliest stars in M 13, while not enough
stars have reached the main sequence to give this cluster a main sequence population
comparable to that of M 13.
TABLE II
DISTANCES - GLOBULAR CLUSTERS

                                 Distance Modulus
Cluster            Reddening     Harris    This Work     Distance (lt. years)
Abell 4                                    20.2          357,000
Palomar 14                                 20.0          326,000
NGC 6256                                   19.2          225,000
Kron 3                                     18.8          188,000
NGC 6356           0.21          17.07     18.0          130,000
M 14               0.58          16.9      17.7          113,000
Palomar 5                                  17.2           90,000
NGC 6235           0.38          16.6      16.4           62,000
NGC 6144           0.36          15.6      15.8           47,000
NGC 1851           0.07          15.4      15.8           47,000
NGC 6535           0.36          16.1      15.6           43,000
M 79               0.00          15.65     15.3           37,000
NGC 5053           0.03          16.00     15.3           37,500
NGC 288            0.00          14.70     14.9           31,000
M 68               0.03          15.01     14.7           28,000
M 15               0.07          15.26     14.6           27,000
M 3                0.00          15.00     14.6           27,000
M 30               0.01          14.60     14.5           26,000
M 5                0.07          14.51     14.4           26,000
M 2                0.19          14.30     14.3           23,500
Omega Centauri     0.11          13.92     14.3           23,500
47 Tuc.            0.04          13.46     14.2           22,500
M 71               0.28          13.90     14.1           21,500
NGC 3201           0.28          14.15     14.1           21,500
M 13               0.02          14.35     14.1           21,500
M 22               0.35          13.55     13.9           19,700
M 92               0.01          14.50     13.9           19,700
M 55               0.07          14.00     13.9           19,000
M 4                0.01          13.20     13.1           14,300
NGC 6752           0.01          13.20     13.1           13,700
NGC 6397           0.13          13.30     12.3            9,400
The evolutionary age of the matter of which the stars of a cluster are composed is, of
course, related to the age of the cluster, but these ages are not coincident. The
chronological age of the matter includes not only the time spent in the star cluster stage,
but also the time spent in the diffuse stage that precedes condensation into a star. This is
subject to considerable variation. Furthermore, there are circumstances under which the
evolution of the matter proceeds much faster than the evolution of the cluster. Thus,
although the older clusters are, in general, composed of older matter, there is no direct
relation. Some examples of accelerated evolution of matter in clusters will be examined
in Chapter 9.
The question as to the ages of the globular clusters has received a great deal of attention
from the astronomers because they are presumed to have been formed within a relatively
short time after the Big Bang in which most astronomers now believe the universe
originated. On this basis, as Bok points out in a recent discussion of the subject, the
clusters "seem to be the oldest objects in the Milky Way."83 But it is agreed that the
concentration of "metals" (heavy elements) in an astronomical object is an indicator of its
age, and as Bok acknowledges, there are differences in the metal content of the clusters
that "imperil" the current theory of their formation. Some, particularly those most distant
from the galactic center, are relatively metal-poor, while others have substantially greater
metal content. Harris and Racine give us this assessment of the situation: "It is plain that
the maximum [Fe/H] decreases roughly linearly with log R [distance from the galactic
center], even out to about 100 kpc."104
Many astronomers are beginning to recognize that this radial dependence of the cluster
ages, as indicated by the metal abundances, is inconsistent with present-day astronomical
theory. Bok, for example, recognizes that something is wrong here. He states the case in
this manner: "The spread of ages for the globular clusters conflicts with current models of
how the galaxy evolved."83
Our finding is that almost all of the conclusions in this area that have been reached on the
basis of current astronomical theory are wrong, either in whole or in part. On first
consideration it may seem unlikely that errors would be made on such a wholesale scale,
but actually this is an inevitable result of the manner in which astronomical conclusions
have to be reached under present conditions, where there is no general theoretical
structure connecting the various astronomical areas. In the absence of the restraints that
would be imposed by such a general structure, wrong theories and wrong interpretations
of observations are able to reinforce each other and resist correction. In the case now
under consideration, a wrong theory of stellar energy generation, a wrong theory of the
origin of the universe, and a wrong theory of stellar evolution provide mutual support for
each other, and for the wrong interpretation of the place of the globular clusters in the
astronomical picture.
Correction of these errors one by one is not feasible, because a change in only one of the
erroneous hypotheses introduces obvious contradictions with those that are retained. All
of the major errors that are relevant to the point at issue have to be corrected
simultaneously in order to arrive at a consistent system of thought. This is the objective
of the present work.
CHAPTER 9

Gas and Dust Clouds


As explained in Chapter 1, the original aggregates into which the primitive dispersed
matter separates are the predecessors of the globular clusters. At first they are merely
masses of the primitive matter in gravitational equilibrium, but they are caused to
contract for reasons previously stated, and they eventually arrive at a density sufficient to
justify calling them clouds of dust and gas. If these clouds remain undisturbed for a
sufficient length of time, they ultimately condense into globular clusters of stars, as
indicated in the earlier chapters.
Although the protoclusters are probably somewhere near the same size, they are subject
to different conditions because of such factors as the amount of fragmentary old material
present, and the position of the protocluster in what we have called the group.
Consequently, the rate at which condensation into stars takes place is subject to
considerable variation. In the preceding pages we have been tracing the development of
the faster aggregates, but we have now reached the point where the slower group enters
into the evolutionary process in a significant way. We will therefore want to take a look
at what has happened to the slower aggregates while all of this development of the faster
ones has been going on.
The slower aggregates are subject to the same external gravitational forces as the faster
group. Thus they undergo the same kind of combination and capture processes as their
more advanced counterparts. It is possible that some of them may remain isolated long
enough to complete the process of consolidation into clusters of stars. In that event they
follow the course that we have been describing. But because of the difference in the
amount of time required for completion of the condensation process, many of the slower
aggregates are captured while they are still in the gas and dust cloud stage. As a result,
they enter into the galactic structure as clouds of particles rather than as stars.
We have already noted (in Chapter 3) the existence of evidence indicating that the Galaxy
is capturing some globular clusters in a pre-stellar stage. One report reads as follows:
The most striking result of surveys of the distribution and motions of neutral hydrogen
away from the galactic plane is the discovery of several high velocity hydrogen clouds or
concentrations, nearly all having negative (approaching) radial velocities of up to about
200 km/s.104
Here we see that the unconsolidated clusters, like the globular star clusters, are moving "as
freely falling objects attracted by the galactic center,"28 in accordance with the
conclusions that we derive from the theory of a universe of motion. The observed
approach of these aggregates implies that there have been captures of similar aggregates
in the past, and that the remains of these immature globular clusters are present in the
Galaxy. Unlike the star clusters, which are broken into relatively small units as soon as
they fall into the rotating disk, the particles of which the clouds are composed are able to
penetrate into the interstellar spaces, and they envelop the stars that they encounter, rather
than colliding with their radial force fields. A cloud of this kind therefore tends to
maintain its identity for a substantial period of time, although its shape may be greatly
modified by the objects that it encounters.
Until quite recently no evidence of gas and dust aggregates of globular cluster size had
been found within the Galaxy. Smaller aggregates—nebulae, as they are called—have
been recognized ever since the early days of astronomy; some bright, others dark. Only
within the last few years has it begun to be recognized that many, perhaps most, of these
identified nebulae are portions of much larger aggregates. For instance, Bok and Bok
report that the Orion nebula, the most conspicuous of these objects, is actually a part of a
larger cloud with a total mass of 50,000 to 100,000 solar units (comparable to the size of
the globular clusters that are being captured). They characterize the Orion nebula as "just a
little sore spot of ionized hydrogen in the larger complex."105
Still more recently it has been found that there are many larger clouds of gas in the
Galaxy that have masses comparable to those of the large globular clusters, in the range
from 100,000 to 200,000 solar masses. According to a report by Leo Blitz, these giant
clouds are about 20 times as numerous in the Galaxy as the globular clusters. Both of
these characteristics (size and abundance) are in agreement with what would be expected
on the basis of the theoretical origin of the clouds as captured immature globular clusters.
The gas cloud is less subject to loss of mass in approaching the Galaxy than the star
clusters because of the vastly greater number of units involved (particles in one case,
stars in the other), while, as already noted, it is not subject to being broken up by contact
with the moving stars of the Galaxy in the manner of the globular star clusters.
The report by Blitz contributes some further information that verifies the identification of
these giant gas clouds with the immature globular clusters. "The density of the gas in
each cloud," he says, "is 100 times greater than the average density of the interstellar
medium."106 It is difficult, probably impossible, to explain the formation of a distinct
aggregate of this size within a rotating galaxy, and since the observed density establishes
the cloud as a definite unit, distinct from the interstellar medium, the observations lend
strong support to the theoretical conclusion that the clouds were formed outside the
Galaxy and captured later. Furthermore, Blitz also reports that "the gas in each cloud is
organized into clumps whose density is 10 times greater than the average density in the
cloud." He adds that some clumps with much greater density have been observed. The
nature of these "clumps" is practically obvious, in the light of our findings. Here, of
course, are the immature stars of the immature globular cluster. The clumps that are
larger than the average are the aggregates that would have followed the upper branch
OAC of the Class 1 evolutionary path if capture by the Galaxy had not intervened to
prevent the consolidation that would have given these clumps the status of stars.
The simple history of these gas and dust clouds, as derived from theory— formation by
the globular cluster process, capture by the Galaxy, mixing with the galactic stars,
eventually expanding into and merging with the interstellar medium—is in direct conflict
with the upside down evolutionary view derived from the physicists' assumption as to the
nature of the stellar energy generation process. Since the astronomers have accepted that
erroneous view of the direction of evolution, they are forced to invent processes whereby
the normal course of events is reversed. Instead of originating as massive aggregates and
being gradually disintegrated by the rotational forces of the Galaxy, forces that are known
to exist and to operate in that direction, the astronomers find it necessary to assume the
existence of some unknown counterforce that causes the clouds to form and grow to their
present size against the normal direction of change. "Some mechanism must be
continually forming them in the galaxy," says Blitz. But he admits that the mechanisms
thus far suggested—"density waves," magnetic effects, etc.—are not convincing. "The
solution to the problem of how the complexes form does not seem to be close at hand,"
he concludes. This is another understatement of the kind that is so common in the
astronomers' comments on their problems. The solution not only is not "close at hand"; it
is not perceptible even in the far distance. The problem is still further complicated for the
astronomers because their theory requires the clouds to form and then disperse again,
while they remain in the same environment and subject to the same forces.
The specific words used in the quotation in the preceding paragraph are worth a few
comments, as they are repeated over and over again in current astronomical literature,
and they epitomize the attitude that has made it possible for such a large theoretical
structure of an imaginary nature to develop in the astronomical field. Some mechanism
must exist, the author says, to take care of the problems that are encountered in trying to
reconcile the observations with the deductions from the basic premises of the current
theory. We have met this contention many times in the earlier pages, and we will
encounter it again and again in the pages that follow. The observed facts stubbornly
refuse to cooperate with the theorists, but the basic assumptions from which the
theoretical conclusions are derived, particularly the assumption as to the nature of the
stellar energy generation process, are sacrosanct. They cannot be questioned. There must
be something, somewhere—"some mechanism"—that brings the recalcitrant facts into
line, current astronomical thought insists.
One of the reasons why the astronomers are having so much difficulty in dealing with the
dust and gas clouds in the Galaxy is that they have never arrived at an understanding of
their structure, just what it is that maintains them in their existing condition. As
explained by Blitz in his article:
Under normal circumstances the pressure inside a cloud roughly balances the cloud's self-
gravitation, which would tend to collapse the cloud if its action were unopposed. What
generates the pressure is a major unanswered question.
The truth is that this is the same "major unanswered question" that the astronomers face
with respect to the structure of the globular clusters. They have managed to avoid
conceding their inability to explain the cluster situation, but they have no option in the
case of the clouds, as the opportunities for ad hoc assumptions that would enable them to
evade the issues are too limited. There is clearly no rotation, and the temperature, which
reveals the particle velocities, is observable. As conceded in the foregoing quotation, it is
clear that there is something missing in the current understanding of the physics of the
clouds.
The theory of the universe of motion identifies this missing ingredient as the outward
progression of the natural reference system relative to the conventional stationary system
of reference. Once again we meet the antagonist to gravitation. Both the particles in the
cloud and the stars in the cluster are subject to the outward progression as well as to the
inward gravitational motion. Main sequence stars are gravitationally stable; that is, the
inward gravitational force acting on their outermost atoms exceeds the outward force due
to the progression of the reference system. In aggregates of stars or dispersed particles, on
the other hand, the net force acting on their outer units is outward unless the mass of the
aggregate exceeds a certain limit. For aggregates of the type that we are now considering,
this limit is in the neighborhood of the mass of a large globular cluster. Any mass smaller
than this limit is subject to expansion and loss of its outer units.
The rate of loss depends on the size of the units, the mass of the aggregate relative to the
limiting value, the speed of movement of the constituent units (the temperature, in the
case of the clouds), and the external forces exerted on the aggregate, if any. For the gas
and dust clouds that exist in the Galaxy, all of these factors are favorable to a slow rate of
loss. The units are very small, the clouds themselves are large, the temperature is very
low, and the net external forces exerted on the clouds are small. It appears probable,
therefore, that the existence of a cloud as a distinct unit eventually terminates as a result
of processes other than escape of its outer particles, principally the mixing action that
takes place by reason of the motion of the associated stars. The effects of this process are
clearly visible. The aggregates, originally spherical, are now observed to be irregular in
shape and often elongated.
Accretion of matter from a cloud by the stars enveloped within it during the mixing
process reduces the mass of the diffuse aggregate substantially while the gradual
destruction of the cloud is taking place. This accretion explains the presence of "new"
stars in the clouds, especially the hot stars of the O and B classes, whose existence in
these locations is currently ascribed to condensation directly from the dust and gas.
The association of O and B type stars with gas and dust clouds is well established. Since
the astronomers regard these stars as very young, astronomically speaking, they have
concluded that the stars must have been formed from the clouds, somewhere near their
present locations. Our finding that they are relatively old changes this picture drastically.
There is now no reason why we must assume, in the face of all of the evidence to the
contrary, that the dust and gas clouds in the spiral arms condense into stars. The simple
and logical explanation of the presence of these stars in the clouds is that they are stars of
the galactic population that have been mixed into the incoming dust and gas, and have
grown to their present size by accretion from the clouds. This explanation fits all of the
observational evidence, and it accounts for the existence of stars of these types by the
operation of simple processes that are known to be capable of producing the observed
results, and are known to be operative under the conditions existing in the clouds.
The extent to which accretion of material by the stars takes place has long been subject to
differences of opinion. Some astronomers regard it as minimal. S. P. Wyatt, for instance,
says that "There is virtually no replenishment from the outside."107 The most that he is
willing to concede is the capture of an occasional meteoroid. In fact, however, even a
planet does better than that, in spite of strong competition from the sun. It is reported that
"there is an extremely large flux of meteoroids near the planet Jupiter."108 The truth is
that the astronomers' conclusions as to the amount of accretion by stars have been little
more than guesswork. The existence of some accretion is well established,
notwithstanding assertions such as that by Wyatt. The only open question concerns the
quantities. In this connection it is significant that within very recent years the general
astronomical opinion has moved a long way in the direction of recognizing the
importance of dust and gas in the universe, from a concept of interstellar and intergalactic
space as essentially empty to a realization that the total amount of matter in these regions
is very large, and may even exceed the amount that has been gathered into stars.
Calculations on which adverse conclusions regarding accretion are based generally
assume that the stars are moving through the gas and dust clouds, and that this motion
prevents any substantial amount of accretion. Our theoretical study indicates, however,
that these clouds are participating in the rotation of the Galaxy in the same manner as the
stars, and that the stars are therefore nearly stationary with respect to the clouds, a
situation that is much more favorable to accretion. Bok and Bok specifically say that "the
interstellar gas partakes in the general rotation of the galaxy."109
From the theoretical standpoint, there is nothing uncertain about the accretion situation.
In the cyclic universe of motion everything that enters the material sector must be
counterbalanced by the ejection of its equivalent. As we will see in the final chapters of
this volume, only the explosion products of stars and stellar aggregates can acquire the
speed that is needed in order to cross the regional boundary. It follows that all of the gas
and dust formed from the primitive matter that enters this sector must either be condensed
into stars or accreted by stars. We have already seen that the condensation into stars is not
complete. As we trace the pattern of stellar behavior, it will also become evident that a
great deal of material escapes from the stars before the final explosive events in which
they are ejected from the material sector, and another substantial amount is scattered into
space in connection with those explosions. Some of this dispersed matter is incorporated
into the globular cluster stars as they are formed, but the rest has to be picked up by
existing stars sooner or later. The average star must therefore increase in mass quite
considerably during its lifetime.
It is true that matter is being converted into energy in the stars, and is being lost from
them by radiation, but in a cyclic universe all processes are in equilibrium. The mass loss
by conversion to radiant energy is necessarily counterbalanced by an equivalent
conversion of radiation to matter in processes of the inverse nature. Thus the existence of
the radiation process does not alter the fact that all of the mass entering the material
sector in dispersed form must be aggregated into stars in order to be ejected back into the
cosmic sector to keep the cycle in equilibrium.
The foregoing theoretical conclusions can be summarized by stating that they indicate
that the dust and gas in interstellar and intergalactic space exists in much greater
quantities and plays a much greater part in the evolutionary development of stars and
galaxies than the astronomers have been willing to concede on the basis of their
observations. Since there is no source of empirical information other than these
observations, we have heretofore had to rely on the cogency of the reasoning by which
our conclusions were reached, together with the absence of any actual evidence that
would contradict those conclusions. Now, however, the situation has been revolutionized
by the results of observations with the Infrared Astronomical Satellite (IRAS).
The first observations with this satellite show that dust (and presumably gas) does,
indeed, exist in interstellar space on the massive scale required by the theory of the
universe of motion. As reported in an article in a current periodical (March 1984), "Dust
is what IRAS found everywhere."349 The discovery, also reported in this article, of
substantial quantities of dust surrounding Vega and Fomalhaut, together with indications
that similar concentrations may exist around 50 other stars, is particularly relevant to the
accretion situation. After a quarter of a century, the astronomers are finally arriving at the
same kind of a view of the stellar environment as that which was derived from theory,
and described in the first edition of this work, published in 1959.
The accretion process is theoretically applicable to stars of all kinds, but if the cloud in
which the accretion takes place is located well above the galactic plane, as is true of the
Orion nebula and some of the others that are frequently characterized as "birthplaces" of
stars, it is probable that most of the stars intermixed with the nebulae are of the globular
cluster type. In this event, the effect of the accelerated accretion is to move the stars to the
left from their positions on the two branches of the evolutionary path, and to distribute
them along nearly horizontal lines intersecting the main sequence at relatively high
temperatures. This is where the Orion stars are actually found.110
Occasionally some astronomer does concede that the O and B stars in the nebulae may be
accretion products. For instance, George Gamow, like most of his colleagues, minimized
the importance of the accretion process, but nevertheless admitted that "it is not
impossible that the . . . Blue Giants found in spiral arms are actually old stars formed
during the original process which were rejuvenated by accretion."111
But the orthodox astronomical view at present is that the stars of the O and B
associations are new stars condensed out of the dust and gas clouds by some thus far
unidentified process. Wyatt, for example, refers to "the unquestionable evidence that stars
form out of interstellar matter."112 Here, then, the same textbook author who tells us that
the strong gravitational forces of a stable galactic star are capable of "virtually no"
accretion of matter is, at the same time, contending that the galactic dust clouds, which
are known to exert no net gravitational force on their constituents, are in some unknown
way able to pull those constituents together to form a star. These two propositions are
obviously incompatible, and their coexistence illustrates the disconnected and
compartmentalized nature of present-day astronomical theory. The absence of any general
structure of theory encourages reliance on negative rather than positive evidence. Since
the theorist has no explanation whose validity he can prove, what he attempts to do is to
devise an explanation that cannot be disproved. In this connection it is interesting to
follow the chain of reasoning by which one prominent astronomer arrives at the currently
orthodox conclusion with respect to condensation of stars from dust and gas clouds. The
following are the essential statements from the five paragraphs in which he outlines the
development of thought:
There are virtually no clouds observed in which gravity is strong enough to overwhelm
the temperature effects based on the measurements that can be made at present . . .
There may be a way out of this dilemma . . .
We really do not yet know how much molecular hydrogen lies in typical atomic hydrogen
clouds. Such a situation is tailor-made for any theoretician to work with because there are
no data that could contradict any assumption made about the amount of additional matter
in the clouds. . .
We assume it [the cloud] must have enough matter to cause it to contract.113 (Gerrit
Verschuur)
This explanation of the background of one of the current theories of star formation in the
galactic gas and dust clouds should make it evident why the astronomers are having so
much difficulty in getting down to details. Verschuur is simply assuming the problem out
of existence. Other theorists rely on some different assumptions—a hypothetical process
to supplement the effect of gravitation, for example—but they all operate on the same
principle; that is, they construct their hypotheses in such a way that ‖there are no data that
could contradict― the assumptions. As might be expected, all details are vague. Verschuur
admits that "We are far from understanding all the details of how clouds actually become
stars."114 Perhaps the best assessment of the situation is that it illustrates the validity of
this comment from the British scientific journal Nature (1974):
Indeed, a great many theoretical astronomers delight in a situation where there is just
enough evidence to make model building worthwhile, but not enough to prove that their
favored model is incorrect.115
The effect of the availability of dust and gas on the rate of evolution is illustrated by the
globular clusters that are located in the Large Magellanic Cloud (LMC). Here the
gravitational distortion of the structure of the Cloud has resulted in an irregular
distribution of the dust and gas, and some globular clusters have entered regions of
relatively high density. The rotational forces that would normally break up the clusters as
they approach the central plane of the galaxy (the LMC) have also been greatly reduced
by the gravitational distortion. As a result, some of the globular clusters remain intact in
dusty regions for a long enough period to permit their constituent stars to reach an
evolutionary stage comparable to that of the stars of the open clusters. While the shape
and size of these clusters are those of normal globular clusters, their stars are members of
Class 1B, like those of the open clusters.
We can correlate the evolutionary stages of the stars in the two Magellanic Clouds with
the galactic ages, although the more heterogeneous populations of these larger aggregates
make this correlation less specific than the corresponding results of the globular cluster
study. The most significant observation, in this connection, is that the LMC has many red
supergiant stars associated with hot blue stars in hydrogen clouds. As brought out in
Chapter 5, these two very different types of stars are closely related from the evolutionary
standpoint. The hot blue star (Class 1B) is near the supernova stage. The red giant of the
second cycle (Class 2C) is the first visually observable post-supernova star. The presence
of these red giants thus identifies the LMC as an aggregate in which the most advanced
stars have reached the second evolutionary cycle.
Stars of this class are not found in the Small Magellanic Cloud (SMC).116 Nor have any
supernova remnants been located there.117 Their absence indicates that the most advanced
stars of this galaxy are still in the first cycle. The concentration of Cepheid variables per
unit of volume is much higher in the SMC.118 This is consistent with the evidence from
the giants, as the first Cepheids are Class 1A stars, and the evolution around the cycle
reduces the number of stars of the earlier classes. The conclusion to be drawn from these
observations is that the main body of the SMC is composed of stars of Classes 1A and
1B, whereas the average star in the LMC is in a more advanced evolutionary stage. The
number of 1A stars has decreased, and some of the 1B stars have passed into the 2C stage.
The stellar compositions of the two galaxies thus support the conclusion, based on their
relative sizes, that the LMC is older than the SMC. They also provide the answer to a
question asked in the book from which the data cited above were taken: "Why has the
Large Cloud so many more very young stars than the Small Cloud?"119 The answer is
that the "very young" stars to which the questioner refers are actually relatively old
second-generation stars, and the LMC has more of these stars than the SMC because it is
an older galaxy.
While the gas and dust clouds in the Galaxy are undergoing the changes that have been
described, their constituents are also aggregating into larger units; that is, atoms are
combining to form molecules and dust particles. It has been known for many years that a
number of the elements above helium are present in these clouds, but recently it has been
discovered that these elements are, to some extent, organized into molecules. Over fifty
different molecules, some of considerable complexity, have been identified so far.
In view of the extremely low density and low temperature of the clouds, which limit the
frequency of contact of the constituents, the observed amount of molecule formation was
not anticipated. The results of the present investigation indicate, however, that the
conditions in the clouds are much more favorable for combination, up to a certain limit, than
previously believed. The reason was explained in Chapter 1. Inside unit distance,
4.56×10⁻⁶ cm, the net motion, other than thermal, is inward until an equilibrium point is
reached. At the very low temperatures of the clouds, estimated at about 10 K (reference
106), capture on contact, or even on a near miss, therefore has a high probability. As
brought out in Volume II, physical state is inherently a property of the individual
molecule. At 10 K even the hydrogen molecule is in the solid state. The contact process is
thus capable not only of producing a variety of molecules, but also of building up solid
aggregates to sizes in the neighborhood of unit distance. As noted in the earlier
discussion, the cohesive forces of the molecules enable the maximum size of the dust
particles to exceed unit distance by a relatively small amount. Any further increment puts
the particle into the region where the net motion is outward, and gravitational control
over dispersed matter is possible only in very large aggregates.
With the benefit of the information contained in this and the preceding chapters, we are
now in a position to complete the comparison of the Reciprocal System and conventional
astronomical theory from the standpoint of their ability to explain what is now known
about the globular clusters. This addition to the comparison in Chapter 3 will be set up in
the same manner as the original, and since 13 sets of observed facts were discussed in
that chapter, we will begin with number 14.
14. Observation: The stars of the globular clusters are confined to the region above
and to the right of the main sequence in the CM diagram, and to a relatively short
section of the main sequence.
Comment: Both theories have explanations for the observed situation. Opinions as
to their relative merits will no doubt differ, as long as this situation is considered
in isolation.
15. Observation: Some clusters (M 67, for example) are classified as open clusters on
the basis of size, shape, and location, but have CM diagrams very similar to those
of the globular clusters.
Comment: It is difficult to account for the existence of these hybrid clusters in
terms of the totally different cluster origins portrayed in conventional theory. The
derivation from the theory of the universe of motion arrives at a simple and
straightforward explanation. It identifies M 67 and the others of the same general
type as former globular clusters or parts thereof, which have only recently reached
the galactic disk. The modification of the cluster structure under the influence of
the strong rotational forces of the Galaxy is already under way, but the
acceleration of the evolution of the stars by reason of the availability of more dust
and gas for accretion is a slower process, and it has not yet had time to show
much effect.
16. Observation: The observed motions of the stars in the open clusters show that
these groups are disintegrating at a relatively rapid rate. The large number of these
clusters now in existence in spite of the short indicated life means that some
process of replenishment of the supply must be in operation.
Comment: As indicated in the discussion of this subject in Chapter 8, current
astronomical theory has nothing to offer on this problem but pure speculation.
The theory of the universe of motion identifies the source of the replacements.
17. Observation: Studies indicate that clusters similar to M 67 have a greater density
and are located higher above the galactic plane than clusters that resemble the
double cluster in Perseus.
Comment: The significance of these observations has also been noted earlier.
They constitute prima facie evidence that the accepted view of the direction of
evolution of the clusters and their constituent stars is wrong.
18. Observation: In addition to globular clusters of the normal type, the Magellanic
Clouds contain some clusters that have the size and shape of globular clusters, but
are composed of stars that resemble those of the open clusters in the galaxy.
Comment: As Bart J. Bok pointed out in the statement quoted in Chapter 8, the
existence of stars of different evolutionary ages in globular clusters is inconsistent
with current astronomical theory, which views these clusters as having been
formed early in the history of the universe. But it is easily understood on the basis
of the theory of the universe of motion.
Summarizing, we can add to the previous count one set of facts (number 14)
explained by current astronomical theory, two (15 and 16) without any
explanation, and two (17 and 18) for which the current explanations are
inconsistent with the observed facts. As reported in Chapter 3, this makes the total
score for current astronomical theory 4 items explained, 7 with no explanation,
and 7 with untenable explanations. In sharp contrast to this dismal record, the
deductions from the postulates that define the universe of motion, which are
totally independent of any input from astronomical observation, lead to
explanations for all 18 items that are fully consistent with the observations.
This globular cluster situation is not an isolated case. It is merely a particularly
conspicuous example of the results of basing astronomical theory on pure
assumptions. The principal assumptions that have been made, and the manner in
which they have been utilized to construct a wholly imaginary astronomical
universe, will be reviewed in Chapter 28, after the pertinent information that can
be derived from the theory of the universe of motion has been more fully
developed.
CHAPTER 10
Evolution - Galactic Stars
When a globular cluster finally falls into the Galaxy and becomes subject to the forces of the
galactic rotation, some rather drastic changes take place, and the CM diagram of the cluster
is modified to the point where it is no longer recognizable without some understanding of
the effects that are produced by the galactic forces. These effects are illustrated in Fig.12,
which is the CM diagram of the cluster M 71. In this, and the other CM diagrams that will
follow, any areas in which the star concentration is sufficiently above average to warrant
special consideration are cross-hatched, while sparsely populated areas that may or may not
belong in the diagram are outlined by dashed lines. M 71 is on the borderline, and has been
classified as an open cluster by some observers, although it is now more commonly regarded
as a globular.120 From this uncertainty as to its true status we can deduce that it is a globular
cluster that has reached the edge of the galactic disk and is on the way to becoming an open
cluster, or more likely, will break up into a number of open clusters. The CM diagram of this
cluster is described by Burnham as having a "red giant sequence resembling that of a
globular," with "an unusually large scatter and a steeper slope than normal," but lacking
the usual horizontal branch and extension to the main sequence. Thus, even for the
astronomers, this diagram leaves a great deal to be explained. In the context of the new
information developed in this volume, this diagram has even less resemblance to that of a
normal globular cluster, as a "steep slope" of any of the lines in the diagram is inadmissible.
The theoretical positions of all three of the evolutionary lines are fixed. The portion of the
diagram in the upper right that is being identified as a wide giant branch is too steep to be
the red giant line OA, and the slope of the cross-hatched section at the lower end of the
diagram is not steep enough to be the evolutionary line AB. The diagram looks like a misfit.
So let us examine the situation from a theoretical standpoint. When the cluster enters the
rotating stream, the immediate effect is that the loosely attached matter is stripped away,
both stars from the cluster as a whole, and particles from the individual stars. As noted
earlier, the differential gravitational forces are already reducing the sizes of the clusters very
significantly as they approach the Galaxy, and this loss of stars is accelerated when the
rotational forces are added to the radial gravitational effect. Reduction in size has the
collateral result of reducing the central condensation.
The globular clusters do not move freely through the field of stars in the manner described
by Hoyle in the statement quoted in Chapter 2; they have to push the stars aside in order to
clear their paths. But the individual stars do move through the interstellar medium. In so
doing they lose the unconsolidated material by which they were surrounded, and from which
they were drawing the additional mass that enabled them to follow the normal evolutionary
paths. The loss of this material stops the growth of the star, and prevents it from reaching the
critical density by the accretion route. However, the star is still subject to the compressive
forces due to the gravitational effect of the cluster as a whole, and these forces, together with
the self-gravitation of the star, compress the existing gaseous aggregate, and move it
downward on the CM diagram along a line of constant mass.
The theoretical results of the stripping action on the locations of the stars in the CM diagram
are illustrated in Fig.13. Diagram (a) is the regular cluster diagram for a cluster in which the
most advanced stars have just recently reached the main sequence. Diagram (b) shows
where these stars would be if the cluster remained isolated long enough to permit the
evolutionary development to bring most of the stars down to the main sequence, with only
the least advanced stars still on the path AB. If the cluster falls into the galaxy while it is in
the condition shown in (a), the atmospheres of dust and gas from which the stars along the
path OA are growing are swept away. These stars are then unable to move forward along
this line. Instead of continuing on to the vicinity of point A before the supply of material for
accretion is exhausted, they are deprived of this material almost immediately on entering the
rotating stream. As a result, each star along the line OA leaves that line from whatever
location it may happen to occupy at the time of entry, and moves downward on the diagram
along a path parallel to AB, a line of constant mass.
Thus the effect of the interaction with the interstellar medium is to replace the relatively
narrow path AB with a path that has the same slope and length, but has a width equal to OA.
This path has a lower limit XX', parallel to OA, that represents the extent to which
evolutionary progress has taken place since the beginning of the capture process. As the
evolution continues, the line XX' moves downward on the diagram. The theoretical CM
diagram for a captured cluster in a relatively early stage is then similar to (c).
When the last stars have left OA on the downward path, their positions lie along a line YY',
parallel to OA, constituting an upper limit to the stellar positions on the diagram.
Summarizing this process, in the first interval after the entry of the cluster into the rotational
stream the stars are located in the area between OA and the limit XX'. As the downward
movement continues, the last stars leave OA, and in the next stage the star locations are
between XX' and YY'. Finally XX' is cut off by the main sequence, and in this last portion of
the downward movement, the stars are located between YY' and the main sequence, as
indicated in diagram (d). After the first stars reach the condition of gravitational equilibrium,
the main sequence population continues to increase throughout the remainder of the
evolutionary development.
If we apply diagram (c), which shows the theoretical positions of the stars of a newly
captured cluster, to the M 71 situation, everything falls into line. M 71 shows both of the
characteristics previously mentioned as those of a greatly reduced globular cluster that is
entering the fringes of the rotating galactic disk: a relatively low central condensation and a
relatively small size. Its diameter is said to be about 30 light years; double this value would
still be below average, and the giant clusters exceed 200 light years. The relation of the observed
locations of the stars of this cluster to the theoretical diagram is shown in Fig. 14. Here we
see that the observations fit neatly within the theoretical parallelogram. The absence of
identifiable stars on the line AC, the horizontal branch, is explained by two results of the
stripping process: (1) no new stars are moving into the AC region, and (2) the relatively
small number of stars that were located on this line prior to the start of the capture process
were scattered over the triangular area ABC by the same kind of a downward movement that
occurs in the more heavily populated region on the other side of the path AB.
The M 71 pattern is not uncommon. Five other clusters out of those examined in this
investigation also show the same kind of evidence that they are just entering the rotational
stream. Only one is in the intermediate range where both the upper (YY') and lower (XX')
limits are observable. The more advanced clusters that are limited to the lower section of the
diagram between YY, and the main sequence are again fairly numerous. But here we find
that a new factor has entered into the determination of position on the CM diagram. The
main sequence sections of some of these more advanced clusters are well defined, and they
show that the clusters in this stage of evolution are subject to an upward displacement of the
main sequence.
In the cluster M 67, which is regarded as the prototype of this class of cluster, the shift is
about 2.6 magnitudes. Fig.15 is the CM diagram of M 67. As can be seen, this diagram is
similar to those of M 71 and other newly captured clusters, but a considerable number of the
stars of the cluster have reached the main sequence, and they do not lie on the line BC, the
lower line in Fig. 15. Instead, they follow a line parallel to BC, but above it by the amount of
the displacement. Otherwise, the stellar positions are entirely normal. It is particularly
significant that the upper limit of the populated area, the line designated as YY', is sharp and
distinct, because this line has a definite theoretical relation to the evolutionary pattern. It has
to be parallel to the theoretical line OA, which is specifically defined mathematically, even
though M 67 actually has no stars in the upper areas of the complete globular cluster
diagram.
In order to understand the origin of the displacement of the main sequence, the gravitational
shift, as we will call it, the nature of the equilibrium on the main sequence needs to be
recognized. Basically, this is an equilibrium between the gravitational force (or motion) and
the force (or motion) of the progression of the natural reference system. In the dust cloud
state in which the giant stars originate there are two gravitational components, the self
gravitation of the star and the gravitational effect of the cluster in which the star is located.
The net resultant of all forces is inward, and the star therefore contracts. As the contraction
proceeds, the net inward force weakens, and ultimately the point is reached where the
inward and outward forces are equal. This is the main sequence of the cluster.
Two of the three force components, the progression of the natural reference system and the
self-gravitation of the star, are constant for a star of a given mass and volume, but the third
component is variable, and it determines the location of the main sequence equilibrium. The
stars in a globular cluster occupy equilibrium positions where there is no net force in either
direction. In this case, therefore, the variable force component is zero in the equilibrium
condition, if the contraction is completed within the cluster. Here the stellar equilibrium
within the cluster is identical with that of an isolated star in space.
The stars of the Galaxy also occupy equilibrium positions, but the galactic situation is not a
full three-dimensional equilibrium. It has been attained in part by balancing a portion of the
inward gravitational effect of the galaxy as a whole against the outward component of the
rotational motion. This is a one-dimensional vectorial motion, and while it counterbalances
the gravitational motion so far as the representation in the conventional spatial reference
system is concerned, it does not offset the full effect of a motion such as gravitation that is
effective in all three scalar dimensions. Thus there is a second gravitational component in
the main sequence force equilibrium of the galactic stars. The component due to self-
gravitation at equilibrium is reduced accordingly; that is, the contraction of the star stops at a
lower density (or the star expands back to that density). This puts the main sequence of the galactic
stars somewhat higher on the CM diagram than the main sequence of the globular cluster
stars. As indicated earlier, the difference is about 0.8 magnitudes.
This is a theoretical conclusion that takes us into a hitherto unexplored area of astronomy,
but it is not without observational support. We note, for instance, that when the main
sequence of the clusters is lowered to the 4.6 level, the area of the diagram included between
this and the galactic main sequence at 3.8 magnitude includes the positions of a group of
stars known as sub-dwarfs. "The location of metal-poor subdwarfs is puzzling," say M. and
G. Burbidge, "because they seem less bright than [galactic] main sequence stars of
comparable surface temperature and hence lie below the main sequence." But then these
authors go on to give us the information about the subdwarf stars, which, in the light of the
theoretical conclusions that we have just reached, provides the explanation.
These subdwarfs . . . are not traveling with the sun in its giant orbit around the hub of our
galaxy, and consequently they are moving with high speeds relative to the sun and in one
general direction—that opposite to the direction in which the galactic rotation is carrying the
sun.102
According to our findings, these are stars that have escaped from globular clusters, and have
entered the Galaxy from outer space. The fact that they are relatively metal-poor supports
this conclusion. But in any event, whatever their origin may have been, the significant point
is that they are not "traveling with the sun"; that is, they are not participating (or not
participating fully) in the rotation that we find to be the cause of the 0.8 magnitude
gravitational shift of the galactic field stars. Actually, they can hardly avoid being affected
to some extent by the rotational forces. It follows that they should theoretically be
distributed throughout the region between the two main sequence locations. This is just
where they are found.
Another item of evidence supporting the theoretical identification of the 0.8 magnitude
difference as a gravitational shift will be forthcoming in Chapters 11 and 12, where it will be
shown that the gravitational equilibrium applicable to objects moving in time is related to
the 4.6 magnitude level, rather than to that of the galactic main sequence.
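To put the 0.8 magnitude gravitational shift into more familiar terms, the short sketch below converts it to a luminosity ratio by means of the standard magnitude scale of conventional astronomy. The conversion relation is not part of the present development; only the 4.6 and 3.8 magnitude levels are taken from the foregoing discussion.

```python
# Illustrative only: express the 0.8 magnitude gravitational shift as a luminosity
# ratio, using the conventional (Pogson) magnitude scale. The 4.6 and 3.8 levels
# are the values cited in the text; the magnitude-luminosity relation is standard
# astronomy, not a result of the present theory.

cluster_level = 4.6    # globular cluster main sequence (magnitudes)
galactic_level = 3.8   # galactic field star main sequence (magnitudes)

shift = cluster_level - galactic_level     # 0.8 magnitudes
luminosity_ratio = 10 ** (0.4 * shift)     # one magnitude = factor of 10^0.4 in luminosity

print(f"gravitational shift: {shift:.1f} magnitudes")
print(f"equivalent luminosity factor: {luminosity_ratio:.2f}")   # about 2.1
```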
With the benefit of the foregoing information we are now in a position to explain the
gravitational shifts of M 67 and other open, or galactic clusters. M 67 is a remnant, or
fragment, of a globular cluster that has quite recently fallen into the galaxy. It has reached
the point where it has begun building up a main sequence population, although its slower
stars are still in the process of completing their evolution along the globular cluster path AB
and its rightward extension. It is one of the earliest of the objects classified as open clusters,
and has the principal characteristics of a recent arrival: a star population that is large for an
open cluster, a relatively compact structure, and a position high above the galactic plane.
The big decrease from the globular cluster size and the entry into the galactic disk have
destroyed the structural stability that existed in the parent globular cluster, and M 67 has
begun the expansion that will ultimately terminate its existence as a separate entity.
Now that they are within the Galaxy, the M 67 stars are subject to the same forces as the
galactic field stars, and in addition are subject to the residual cohesive force of the cluster.
Expressing this in another way, we can say that the stars of the main sequence of the open
cluster have not yet completed their transition to gravitational equilibrium. The temporary
equilibrium represented by their main sequence positions includes a diminishing component
from the gravitational force of the cluster as a whole. The cluster stars will not reach main
sequence positions comparable to those of the field stars of the Galaxy until the cluster
expansion is complete, and this extra force component is eliminated. In the meantime, the
main sequence of each cluster will be above that of the field stars by an amount depending
on the remaining cohesive force of the cluster. This gravitational shift is greatest where the
clusters are young, large, and compact, like M 67, and decreases as the cluster becomes
older, smaller, and looser.
As we saw earlier, when galaxies reach the size at which they capture substantial numbers of
globular clusters they also begin to pull in some unconsolidated clusters, aggregates that are
still merely clouds of dust and gas. These clouds arrive too late in the elliptical stage of
galactic evolution to have much effect on the properties of the observed elliptical galaxies,
although they may be responsible for the occurrence of concentrations of blue stars in some
of these galaxies. But when the elliptical structure spreads out to form the spiral, the stars of
the galaxy are mixed with the recent acquisitions of dust and gas. The stage is then set for a
period of rapid advance along the path of stellar evolution, as the availability of this kind of
a supply of material accelerates the evolutionary process.
During the time that the mixing is taking place the dust and gas exist in widely different
concentrations in different parts of the galactic structure. The average concentration in the
outlying regions that it reaches first is sufficient to support an accretion rate that results in a
continuing increase in the mass of the average star. After arrival at the main sequence, the
very small stars, those whose growth was cut off prematurely by the entry of the cluster into
the Galaxy, take up relatively permanent positions in the lower sections of this sequence,
while the larger stars accrete matter and move upward along this path. Since the stars of a
cluster, aside from the few captured strays, were all formed in the same event, and are of
approximately the same age, most clusters occupy only a limited sector of the evolutionary
cycle. The active sector does not expand appreciably, but merely moves forward as the
cluster ages and passes through the various evolutionary stages.
In the Hyades, Fig.16(a), a cluster somewhat older than M 67, a few stars still remain on the
contraction path AB, but the majority have reached the main sequence. Fig.16(b) represents
a still more advanced cluster, the Pleiades, in which the last stragglers have attained
gravitational equilibrium, and the main body of the active stars has moved up along the main
sequence. Whether or not the Pleiades cluster is actually older than the Hyades is uncertain,
as the evolutionary age is not necessarily coincident with the chronological age. The
Pleiades are located in an observable nebulosity, and the accelerated accretion from this
source may account for the more advanced evolutionary stage.
The possible variations in the rate of development of these nearby clusters are of particular
interest in connection with the possibility that many of the open clusters in the local region
of the Galaxy may be fragments of the same disintegrated globular cluster. It has already
been recognized that some of these clusters are similar enough to imply a common origin.
This has been suggested, for example, in the case of Praesepe and the Hyades.121 The
principal objection that has been raised to this hypothesis is that the clusters are too far apart
(the distance between these two is over 450 light-years) to have originated in the same
event. This conclusion is, of course, based on conventional astronomical theory. When it is
realized that the open clusters are fragments of globular clusters this objection is eliminated,
as it is evident that fragments of a disintegrated cluster could be distributed over much
greater distances than those that are observed.
In any event, the greater density of the M 67 class of clusters and their higher galactic
latitude, taken together with the observed expansion of all open clusters, definitely establish
the M 67 class as younger than the main sequence clusters such as the Pleiades and the
Hyades. This conclusion, previously reached, is now corroborated by the relative
magnitudes of the gravitational shifts. Those of the M 67 class average about 2.5
magnitudes, while those of the main sequence clusters are not much above the 0.8 level of
the field stars.
Extension of the findings with respect to accretion by the main sequence stars indicates that
continued development of the Pleiades cluster will eventually bring the hottest stars in this
group to the destructive limit at the top of the main sequence, and will cause these stars to
revert back to the red giant status via the explosion route. In the Perseus double cluster,
Fig.17, such a process has already begun. Here the main body of stars is in the region just
below the upper limit of the main sequence, but a number of red giants are also present. We
can identify these giants as explosion products, stars of Class 2C, rather than new stars,
Class 1A, as this identification keeps all of the stars in the cluster in an unbroken sequence
along the evolutionary path, whereas if these were young stars of the first generation they
would be unrelated to the remainder of the cluster. The presence of 2C giants implies that
there are also young white dwarfs in this cluster, but they may be still in the invisible stage.
Some binary stars are also reported to be present in clusters such as the Hyades and the
Pleiades. In these clusters, however, the A components of the binaries are on the main
sequence, and there is a wide evolutionary gap between them and the Class 1 main sequence
stars of the clusters. There are several possible explanations of their presence: (1) they are
not actually members of the clusters, (2) they are strays, older stars that were picked up
during the condensation of the globular clusters, or during their subsequent travels, or (3)
they were stars from the horizontal branch of the same globular cluster whose vertical
branch produced the Class 1 stars of the open cluster. The cluster diagrams indicate that the
stars of the two branches reach the main sequence at about the same time. Consequently
there is an evolutionary gap between them that is just about right to account for the presence
of some Class 2 (binary) stars in the Class 1 main sequence clusters. It seems probable that
alternative (3) is the source, or at least the principal source, of these binary stars.
It is important to note at this point that in the context of the theory of the universe of motion,
the presence of observable nebulosity is not necessary to account for the position of the
hotter stars of the cluster at the top of the main sequence. As explained earlier, the theory
definitely requires continued stellar growth even under conditions where the density of the
stellar medium is no greater than average. This is something that cannot be confirmed
observationally with currently available instruments and techniques, but neither can it be
disproved. Thus, this aspect of the theory is not inconsistent with anything that is actually
known, which is all that is required in the case of an integrated general theory that is fully
verified in other areas.
It is significant, in this connection, that current astronomical theory is inconsistent with the
observations. This theory places the star formation in dense galactic nebulae. The location
most commonly cited as a stellar birthplace is the Great Nebula in Orion, and the association
between this nebula and a large group of hot O and B type stars is offered as evidence of
recent formation from the existing dust and gas cloud. But no nebulosity can be detected in
the Perseus cluster, or in NGC 2362, another similar cluster that has been extensively
studied, or in a number of other clusters in which O and B stars are prominent, while most of
the main sequence clusters, such as the Pleiades, that do have associated nebulosity have no
O type stars. It is commonly recognized that there is a contradiction here that calls for an
explanation, but since such contradictions abound in astronomy, it is not taken as seriously
as the situation actually warrants.
Some of the open clusters evidently carry over into Class 2B, as there are a large number of
loose, somewhat irregular, clusters that have second generation characteristics. Here we find
a substantial proportion of giant and subgiant stars, indicating that the clusters are either
considerably older or considerably younger than a main sequence cluster such as the
Pleiades. These clusters do not have the characteristics of the M 67 class, the predecessors of
the Pleiades type of cluster, and their structure (or lack of structure) indicates that they have
undergone considerable modification. We can therefore conclude that they are older, and
that their giant stars belong to Class 2C. This conclusion is supported by evidence indicating
that large proportions of the stars of these clusters are binaries.
Up to this point no more than casual consideration has been given to the rotation of the
various astronomical objects that have been discussed, because the significance of the
information available on this subject is not clearly indicated as long as each individual
situation is considered in isolation. We have now reached the point, however, where we can
put together enough information from different sources to show that there is a general
correlation between rotation and age throughout the astronomical universe.
The earliest structures, both the globular clusters and the stars of which they are composed,
have little or no rotation. As explained earlier, this is easily understood as a consequence of
star and cluster formation under conditions in which only radial forces are operative to any
significant degree. But it confronts conventional astronomical theory with difficult
problems. The desperate attempts of the theorists to read some signs of rotation into the
observations of the globular clusters as a means of accounting for the stability of these
structures have already been discussed. In application to the stars, this problem is somewhat
less acute, as the stars actually do rotate, and the issue here is a matter of origin and
magnitude.
According to J. L. Greenstein, the average rotational speeds of stars of spectral class G and
fainter are less than 25 km/sec. His estimates of the giant stars show an increasing trend up
to about 200 km/sec for spectral classes A3 to A7, with a decrease thereafter. The peak for
the "dwarf" class (that is, the main sequence stars) is placed at a somewhat higher
luminosity, in classes B5 to B7, and is estimated at 250 km/sec.29 The existence of these
peaks does not mean that the rotation actually decreases in the largest stars. These are
surface velocities, and the decrease is merely a reflection of the slowing of the speed of the
outer layers of these stars, a differential effect that is evident even in stars as small as the
sun. Current theory offers no explanation as to why speeds of these particular magnitudes
should exist. Indeed, Verschuur points out that, on the basis of the prevailing theories, they
should be much greater.
The simplest calculations for star formation suggest that all stars should be spinning very,
very fast as a result of their enormous contraction from cloud to star, but they do not do so.
Why not? The answer is far from known at present.114
Furthermore, there is direct evidence that the rotational speed is a function of age. For
example, A. G. Davis Philip reports that the rotational velocities of Ap and Am stars
decrease with increasing cluster age (which is decreasing age, according to our findings).122
We might also note that the question as to what happens to the rotational speed as stars go
through the contortions that are required by present-day evolutionary theory receives
practically no attention.
Against this background, the simple, observationally confirmed, picture of the rotational
situation derived from the theory of the universe of motion provides a striking contrast. On
the basis of this theory, all of the primary astronomical objects—stars, star clusters, and
galaxies—originate with little or no rotation, and acquire rotational velocities as a
consequence of the evolutionary processes. This increase in velocity is primarily due to
angular momentum imparted to these objects during the accretion of matter. Globular
clusters, which have little opportunity for accretion, acquire little or no rotation. The larger
galaxies and the stars of the upper main sequence, which grow rapidly, on the astronomical
time scale, increase their rotational velocities accordingly.
From the nature of the evolutionary processes, as they have been described in the preceding
pages, it is apparent that no aggregate consists entirely of a single stellar class. However, the
very young aggregates approach this condition quite closely, inasmuch as they are composed
of young stars, and the only dilution by older material results from picking up an occasional
stray that has been ejected from an older aggregate. Aside from these interlopers, the earlier
globular clusters are pure Class 1A, and their CM diagrams are somewhere between a
concentration at the initial point of the diagram at the extreme end of the red giant region
and a distribution similar to that of M 3, Fig.3.
As brought out in the preceding pages, the evolutionary ages of the observable globular
clusters are correlated with their distances from the Galaxy. On first consideration, the
existence of such a relation may seem rather surprising, but it is an inevitable result of the
kind of a cluster formation process that was described in Chapters 1 and 2. In the
equilibrium condition from which the contraction of the group of proto-clusters begins, the
protoclusters in the outer regions of the group are moving inward, exerting a compressive
force on those closer to the center of the group. Thus there is a
density gradient from the periphery of the group to one or more central locations, just as
there is a similar gradient from the outer regions of the clusters to their centers after they
begin contracting individually. These density centers are the locations in which the
condensation into stars first takes place, and the combination of the clusters into galaxies
begins. Ultimately they become the locations of the major galaxies of each group. The
density gradient from the periphery of the proto-group to the condensation centers then takes
the form of a gradient from the gravitational limits of the major galaxies to the locations of
those galaxies.
The basic physical process in the material sector of the universe is aggregation in space.
Growth of the aggregates proceeds by a mechanism called capture, if it occurs on an
individual basis, or condensation, if it takes place on a collective basis. The rate of growth is
primarily a matter of the density of the medium from which the material is being drawn.
Condensation does not occur at all unless the density exceeds a certain critical value.
Capture is not so limited, but the rate at which it occurs depends on the probability of
making contact, and that probability is a function of the spatial density of the entities subject
to capture. All of the aggregation processes therefore speed up as the clusters move toward
the Galaxy and into a denser environment. This accounts for the evolutionary changes,
already noted, that take place during the travel of the globular clusters from the distant
regions of intergalactic space to the point at which they end their existence as separate
entities by falling into the Galaxy.
The aggregation of matter on the atomic scale that produces successively heavier elements
follows the same general course as the aggregation of the dust and gas particles into stars.
The atom-building process, as described in the previous volumes of this series, is also a
capture process, and it, too, proceeds at a rate that depends on the density of matter in the
environment.
Current estimates of the densities in the different regions through which the clusters pass
give a general indication of the magnitudes that are involved. The following are some recent
figures:123
Density (g/cm³)
Intergalactic space: 10⁻³¹
Space near edge of galaxy: 10⁻²⁸
Interstellar space: 10⁻²⁴
On this basis, the density increases by a factor of 1000 during the travel of the cluster from a
distant point of origin to the edge of the Galaxy. Here, then, is the explanation for the
differences in composition between the distant clusters and those near the Galaxy that were
described earlier. After entry into the galactic environment the increase in density and the
corresponding evolutionary changes are still more rapid.
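The arithmetic behind these statements can be checked directly from the tabulated figures; the sketch below does nothing more than form the ratios of the three densities quoted above.

```python
# Densities quoted in the text (g/cm^3), reference 123.
densities = {
    "intergalactic space": 1e-31,
    "edge of galaxy":      1e-28,
    "interstellar space":  1e-24,
}

# Increase during travel from a distant point of origin to the edge of the Galaxy.
edge_factor = densities["edge of galaxy"] / densities["intergalactic space"]
print(f"intergalactic space to galactic edge: factor of {edge_factor:.0f}")     # 1000

# Further increase after entry into the galactic (interstellar) environment.
interior_factor = densities["interstellar space"] / densities["edge of galaxy"]
print(f"galactic edge to interstellar space:  factor of {interior_factor:.0f}")  # 10000
```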
It is not possible to follow the evolutionary cycle of the stars in the distant galaxies in the
same detail as in our own galaxy, but we can apply our findings from the study of evolution
in the Galaxy to an explanation of some of the changes in the observable features of these
other galaxies. We can deduce that the small elliptical galaxies, including the distorted
members of this class currently classified as irregular, are more advanced than the average
distant globular cluster, and are in an evolutionary stage comparable to that of the most
mature of those clusters. On the basis of the classification that we have set up, this means
that they are composed of a mixture of the 1A and 1B classes of stars.
The older and larger elliptical galaxies (not including the giant spheroidals, which are not
classified as elliptical in this work) are in the same evolutionary stage as the earliest open
clusters, and the CM diagrams of M 67 and the Hyades are representative of the phases
through which these elliptical galaxies pass. It should be noted, however, that because of the
continuing capture of younger aggregates, the early end of the age distribution is not cut off
in the galaxies as it is in the clusters. The CM diagram for an elliptical galaxy in the same
evolutionary stage as the Hyades would extend the sector occupied by the Hyades stars all
the way back through the globular cluster sector to the original zone of star formation.
The rapid evolution in the early spiral stage eliminates most of the 1A stars, except those in
the incoming stream of captured material. Aging of these spirals then results in the
production of second generation stars, beginning with Classes 2C and 2D. All of these stars,
both the giants (2C) and the white dwarfs (2D), are moving toward the main sequence, on
reaching which they enter class 2B, the class to which the sun and its immediate neighbors
belong. There are no giants among these local stars, but the presence of white dwarfs in such
systems as Sirius and Procyon, and the existence of planets, show that the local stars passed
through the explosion phase fairly recently. We may interpret the lack of giants as indicating
that the former giants, such as Sirius, have had time to get back to the main sequence, while
their slower white dwarf companions are still on the way. It is not certain that all of the
nearby stars actually belong in this same evolutionary group, as some younger or older stars
may also be present as a result of the mixing due to the rotation of the galaxy and the
gravitational differentials, but there are no obvious incongruities.
The 2B stars in the regions of average accretion or above move upward along the main
sequence in the same manner as they did when they were 1B stars of the first cycle, and
again undergo the Type I supernova explosions. Eventually they recondense into stars of the
third cycle, Classes 3C and 3D. These are three-member systems, if only one of the stars of
the Class 2 binary system has exploded, or four-member systems if both have gone through
the explosion phase. As indicated earlier, a considerable number of such multiple systems
are known.
Theoretically, this movement around the cycle will continue until the matter of which the
star is composed reaches its age limit, providing that the environment is favorable for
growth, but as mentioned in the discussion of the spiral structure, the contents of the
galaxies are in a physical condition that has the general characteristics of a viscous liquid. In
such an aggregate the heavier material moves toward the center of gravity, displacing the
lighter units, which are concentrated preferentially in the outer regions. This process is slow
and irregular because of the viscosity and the effects of the galactic rotation, but there is a
general tendency for the older and heavier systems to sink toward the galactic center, into
regions where the supply of material for accretion is limited. One six-member system,
Castor, is frequently mentioned in the astronomical literature, but apparently systems of this
size, systems of the fourth cycle, are scarce in the readily observable regions of the Galaxy.
In view of the smaller amount of material available to the stars in the unobservable regions
closer to the galactic center, and the increased competition for the material that is available,
because of the higher concentration of stars, it is quite possible that the movement around
the evolutionary path is limited to four or five cycles.
Some evidence suggesting continuation to additional cycles is available from the cosmic
rays. As explained in Volume 1, the nature of the process whereby matter is transferred from
the material sector to the cosmic sector, and vice versa, is such that this matter is near its age
limit before being ejected from the sector of origin. The cosmic iron content of the cosmic
rays (the incoming matter from the cosmic sector) is something on the order of 50 times that
of the estimated iron content of the local main sequence (Class 2B) stars. If taken at its face
value, this indicates that the evolutionary development, which causes the increase in the iron
content, must extend into more than two or three additional cycles beyond the 2B stage.
However, as noted earlier, the spectra of the stars tell us only what is present in the outer
regions, and there is reason to believe that the iron content of the older stars in the local
environment is substantially greater than indicated by the spectroscopic data. For the present
it seems appropriate to interpret the cosmic ray composition as evidence favoring the higher
iron content of the Class 2B stars rather than as indicating evolution beyond four or five
cycles.
In either case, however, the continuation of the accretion process into a number of cycles
means that the proportion of large stars (products of the explosion of stars of maximum size)
in the galactic population increases as time goes on. Inasmuch as the oldest stars are
concentrated toward the galactic center, it follows that the number of large stars in the
central regions of the Galaxy is considerably greater than would be expected from the
proportions in which they are observed in the local environment. As we will see later, the
presence of this large population of big stars in the central regions of the major galaxies has
some important consequences.
The fact that the development of the spiral structure antedates the appearance of the second-
generation stars enables defining the general distribution of the stellar classes of the Milky
Way galaxy and similar spirals. With the qualification "except for strays from older
systems," which has to be understood as attached to all statements in the discussion of stellar
populations, we may say that the stars of the second and later generations, Class 2C and
later, are confined to the galactic disk (including the arms) and the nucleus. The early first
generation stars, Class 1A, are distributed throughout the outer structure. They constitute
practically the entire halo population. The main sequence stars of the first generation, Class
1B, occupy an intermediate position, most prominently in the spiral arms.
The identification of the conspicuous hot and luminous stars of the upper main sequence
with the spiral arms was the step that led to the original concept of two distinct stellar
populations. However, the information that has been developed herein shows that the
galactic arms actually contain a very heterogeneous population, including not only stars
from the entire first evolutionary cycle, but also stars from several, perhaps nearly all, of the
later cycles.
Observational difficulties limit our ability to follow the evolution of the galaxies beyond the
stage of the spiral arms by studying the individual stars, but we can derive some further
information from the character of the light that is being received from the inner regions.
Since the stars in the galactic nucleus are older than those in the disk, they should be more
advanced from an evolutionary standpoint, on the average. This difference in age is reflected
in a difference in color. However, the correlation is not directly between color and age, but
between color and the positions of the stars in the evolutionary cycle.
It should be realized that the great majority of all stars are red. Consequently, we can expect
red light under all conditions except where the stellar population includes an appreciable
number of the relatively rare blue and white stars of the upper end of the main sequence, and
then only because the emission from these hot stars is so much greater than that from the red
stars that even a small proportion of them has a major effect on the color of the aggregate as
a whole. The hottest stars may be thousands of times as luminous as the average Class 1 star.
Thus the color of a galaxy, or a portion thereof, does not identify the stage of evolution of
the constituent stars. It merely tells us that the aggregate does, or does not, contain a
significant number of stars in that part of an evolutionary cycle which extends into the upper
end of the main sequence. The particular cycle to which these stars belong cannot be
determined from this information, but since the color changes in galaxies take place
gradually, the characteristics of the light emitted by a galaxy, or one of its constituent parts,
supplement the evolutionary criteria previously identified.
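The dominance of a few hot stars over the integrated color can be illustrated with round numbers. The fractions and the luminosity ratio in the sketch below are assumed for the purpose of illustration only; the text states merely that the hottest stars may be thousands of times as luminous as the average Class 1 star.

```python
# Illustrative arithmetic with assumed round numbers (not figures from the text):
# even a 1 percent admixture of very luminous blue stars dominates the total light.

total_stars     = 10_000
blue_fraction   = 0.01      # assumed: 1% of the stars are hot upper main sequence stars
blue_luminosity = 5_000.0   # assumed: each is 5000 times as luminous as an average red star
red_luminosity  = 1.0

blue_light = total_stars * blue_fraction * blue_luminosity
red_light  = total_stars * (1.0 - blue_fraction) * red_luminosity

blue_share = blue_light / (blue_light + red_light)
print(f"share of total light contributed by the hot stars: {blue_share:.1%}")   # about 98%
```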
The integrated light from the larger elliptical galaxies belongs to the spectral type G
(yellow). In the early spirals the emission rises to type F (yellow white), or even type A
(white) in some cases, because of the large number of Class 1B stars that move up to the
higher levels of the main sequence. As these stars pass through the explosion stage and
revert to the 2C and lower 2B status, accumulating to a large extent in the galactic nucleus,
the light gradually shifts back toward the red, and in the oldest spirals the color is much like
that of the ellipticals. Summarizing the color cycle, we may say that the early structures are
red, because they are relatively cool, there is only a small change in the character of the light
during the development of the elliptical galaxy, then a rapid shift toward the blue as the
transition from elliptical to spiral takes place, and finally a slow return toward the red as the
spiral ages.
Current astronomical theory correctly identifies the stars of the nuclear regions of the
galaxies as older than those in the spiral arms, but reaches this conclusion by offsetting one
error with another. This theory identifies the globular cluster stars as older than the main
sequence stars of the galactic arms. This is incorrect. But then the theory equates the stars of
the nucleus with those of the globular clusters. This, too, is an error, but it reverses the first
error and puts the stars of the nucleus in the correct age sequence relative to those of the
galactic arms. However, this superposition of errors leaves the astronomers with an open
contradiction of their basic assumption as to the relation between the age of a star and its
content of heavy elements. This embarrassing conflict between current theory and the
observations is beginning to be a subject of comment in the astronomical literature. For
example, a 1975 review article reports measurements indicating that the "dominant stellar
population in the nuclear bulges of the Galaxy and M 31 consists of old metal-rich stars."124
As the authors point out, this reverses the previous ideas, the ideas that are set forth in the
astronomy textbooks. The expression "old metal-rich stars" is, in itself, a direct
contradiction of present-day theory. The whole fabric of the accepted evolutionary theory
rests on the hypothesis that old stars are metal-poor. The existence of a greater metal content
in the central regions of the galaxies is apparently not contested. Harwit makes this
comment:
There also seems to exist abundant evidence that the stars, at least in our Galaxy and in M
31, have an increasingly great metal abundance as the center of the galaxy is approached.
The nuclear region appears to be particularly metal rich, and this seems to indicate that the
evolution of chemical elements is somehow speeded up in these regions.8
In the light of our findings it is, of course, unnecessary to assume any speeding up of stellar
evolution in the central regions of the Galaxy. All that is needed is to recognize that the stars
in these regions are the oldest in the galaxy, and their evolution has continued for a long
period of time.
This chapter completes our discussion of the more familiar areas of the astronomical
universe. In the remainder of this volume we will be exploring hitherto uncharted regions,
aspects of astronomy where the currently accepted ideas are almost completely wrong,
because of the strangely unquestioning acquiescence in Einstein's assumption that the
experimentally observed decrease in acceleration at high speeds is due to an increase in
mass, and that speeds in excess of that of light are therefore impossible. As has been
demonstrated in the course of the development of the theory of the universe of motion, the
speed of light is a limit applying only to one-dimensional motion in space, and there are vast
regions of the universe in which motion takes place in time, or in multi-dimensional space.
Most of these are inaccessible to observation from our position in the universe, but some of
the entities and phenomena of these regions do have observable effects on the material
sector, the sector in which we make our observations. These effects will constitute the
subject matter of the remaining chapters.
Since these subjects will be approached from a totally different direction, the conclusions
that will be reached will differ radically, in many cases, from those currently accepted by the
astronomical community. As we begin our consideration of these new, unfamiliar, and
perhaps disturbing, findings in the admittedly poorly understood areas of astronomy, it will
therefore be well to bear in mind what the theory of the universe of motion has been able to
do in the presumably quite well understood astronomical areas. It has produced an
evolutionary theory that turns the conventional astronomical theories upside down, and it
has identified a variety of observational data that confirm the validity of the revised
evolutionary sequence, including two sets of observations, the densities of the different
classes of open clusters, and the metal content of the stars in the central regions of the
galaxies, that provide definite proof that evolution takes place in the reverse direction. This
ability of the new theory to correct a major error in current thought with respect to the
phenomena of the better known regions should inspire some confidence in the validity of the
conclusions that are derived from that theory in the relatively unknown astronomical areas,
particularly when it is remembered that scarcity of observational information is not a major
handicap to a purely theoretical structure of thought, whereas it is usually fatal to theories,
like most of those in astronomy, that rest entirely on observational data.
CHAPTER 11
Planetary Nebulae
Inasmuch as the system of reference by means of which we define the positions of
physical objects in the material sector of the universe, the sector in which we are located,
is stationary in space, but moving at the speed of light in time, we cannot detect objects
moving in time, except during an extremely short interval while they pass through our
reference system, and then only atom by atom. As explained earlier, however, if the net
total three-dimensional scalar speed is below the point of equal division between motion
in space and motion in time, any time motion component included in the total acts as a
modifier of the spatial motion—that is, as a motion in equivalent space—rather than as an
independent motion in actual time.
The nature of the modification depends on the magnitude and dimensions of the motion
being modified. The participation of time motion in combinations of motion that are
multi-dimensional in space (ultra high speeds) will be discussed later, in another
connection. The motion with which we are now concerned, motion at intermediate
speeds, is one-dimensional, but the original unit of speed (motion in space) has been
extended linearly to a second unit, which is a unit of motion in time. Because of the effect
of this time component, the successive spatial positions of an object moving freely at an
intermediate speed do not lie on a straight line in the reference system as they would if
the speed were less than unity. Motion in time has no direction in space. The spatial
direction of each successive unit of the time component of the intermediate speed is
therefore determined by chance. However, the average position of the freely moving
object follows the straight line of the purely spatial motion, because the total three-
dimensional motion is still on the spatial side of the sector boundary.
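The statement that the average position follows the straight line of the purely spatial motion can be visualized numerically. The simulation below is only an illustration of the geometry just described; the step counts and unit sizes are arbitrary assumptions, not quantities derived from the theory.

```python
import math
import random

# Illustrative sketch: each step advances one unit in a fixed spatial direction (+x),
# while the time component of the intermediate speed appears as an additional unit
# displacement in a spatial direction chosen at random. Averaged over many runs,
# the random contributions cancel and the mean position lies on the straight
# spatial path, as described in the text.

STEPS  = 1000   # arbitrary number of unit steps
TRIALS = 2000   # arbitrary number of repeated runs

def final_position():
    x = y = 0.0
    for _ in range(STEPS):
        x += 1.0                                    # spatial component, fixed direction
        angle = random.uniform(0.0, 2.0 * math.pi)  # time component, direction by chance
        x += math.cos(angle)
        y += math.sin(angle)
    return x, y

positions = [final_position() for _ in range(TRIALS)]
mean_x = sum(p[0] for p in positions) / TRIALS
mean_y = sum(p[1] for p in positions) / TRIALS

print(f"mean final position: ({mean_x:.1f}, {mean_y:.1f}) versus straight-line ({STEPS}, 0)")
```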
As a result of this time effect, the radiation from a white dwarf in its early stages is not
received from the surface of the star itself, but from a much larger area centered on the
average stellar location. When the inherently weak radiation from this (spatially) very
small star is further diluted by being spread out over this wide area it is reduced below
the observable level. It follows that the white dwarfs expanding back toward the material
sector (evolutionary stage 2) are not observable at all as long as their surface temperature
is above the level corresponding to the unit speed boundary. On that boundary the change
of position in time (equivalent space), relative to the natural datum, the unit speed level,
is zero, and the radiation from the white dwarf is received at full strength. The white
dwarf stars therefore become observable at this point.
Our first concern will be with the relatively large stars, those whose mass exceeds a
certain critical level that we will identify later. The detailed study of the white dwarf stars
and related phenomena in the context of the theory of the universe of motion is still in the
early stages, and we are not yet in a position to calculate the entry temperature for this
class of white dwarf, but it can be evaluated empirically, and is found to be in the
neighborhood of 100,000 K.
At this temperature, where the relatively large white dwarf enters its third evolutionary
stage, it is still a gas and dust cloud in equivalent space; that is, it is in the gaseous state.
In this gaseous state in time the B-V color index for a given temperature is different from
that of the stars on the spatial main sequence. We find empirically that the color index
corresponding to the 100,000 K temperature of the incoming white dwarfs is about -0.3.
On the main sequence this index corresponds to a temperature of about 30,000 K.
Theoretically, these temperatures should be related by a factor of three. On entry into the
observable region, the white dwarf is moving in all three dimensions of time (equivalent
space). The radiation from this star, the wavelength of which determines the color, is one-
dimensional. From the color standpoint, therefore, the radiation consists of three
independent components, each of which has the wavelength and color corresponding to
one third of the total rate of emission of thermal energy. The temperature, on the other
hand, is determined by the total energy emission. Thus the color index of the newly
arrived white dwarf of the class we are now considering is the same as that of a spatial
main sequence star with a temperature one third that of the white dwarf. Since we do not
have the theoretically correct values at this time, we will continue using 100,000 K and
30,000 K, with the understanding that these values refer to a temperature of about
100,000 K and a temperature one third as large, approximately 30,000 K.
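A minimal sketch of the factor-of-three relation just described, using the round figures adopted above (the entry temperature, as noted, is known only empirically):

```python
# The white dwarf's thermal emission, being three-dimensional in time, is treated
# as three independent one-dimensional components; the color index therefore
# corresponds to a spatial main sequence star at one third of the white dwarf
# temperature. The 100,000 K figure is the empirical round value quoted above.

entry_temperature = 100_000                 # K, approximate entry temperature
color_equivalent  = entry_temperature / 3   # about 33,000 K, rounded in the text to 30,000 K

print(f"white dwarf temperature: {entry_temperature:,.0f} K")
print(f"equivalent main sequence temperature (one third): {color_equivalent:,.0f} K")
```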
The location of the -0.3 index on the CM diagram coincides, in general, with the position
of a rather obscure class of stars known as the hot subdwarfs. The "evolutionary status"
of these stars "has not been really understood,"125 say Kudritzke and Simon, but current
opinion apparently favors the suggestion that "On the way to becoming a white dwarf,
while it is still very hot and just before the thermonuclear reactions cease, a star may find
temporary stability in the region below the main sequence."102 In short, this is presumed
to be a way station on the totally unexplained, and poorly defined, route by which,
according to current theory, a red giant becomes a white dwarf.
Observational information about these hot subdwarfs is still scarce and somewhat
uncertain. A 1961 report by K. Hunger, et al., says that "little is known about their
precise location in the H-R diagram."126 These authors make the following comments on
matters that are relevant to the present discussion: (1) a major fraction of these stars are
binaries, (2) some of them are central stars of planetary nebulae, and (3) the mass of one
of them, the star HD 49798, has been evaluated as 1.5 solar masses. According to our
findings, all of these stars are binaries. The relevance of the other two items will appear
as our examination of these stars proceeds.
During the interval between the supernova explosion that produced the white dwarf and
the reentry of that star into the reference system, where it is subject to observation, the
portion of the original material ejected into space at less-than-unit speeds has also
undergone some changes. Immediately following the explosion, the density of the
material moving outward was sufficient to carry everything in the vicinity along with it,
and the visible object was a rapidly expanding cloud of matter. As the expansion
proceeded, the density of the cloud decreased, and in time a point was reached where the
outgoing matter passed through the interstellar material rather than carrying that material
with it. Eventually the outward motion of the ejected matter came to a halt, and inward
motion began under the influence of gravitation, as explained in Chapter 4.
The existence of the hot subdwarfs suggests that the turnaround time is less for the
material dispersed in time than for the material dispersed in space, and that the hot star is
visible for a time before there is any substantial inflow of material from the environment.
But eventually the matter that is being pulled back by the gravitational forces begins
falling into the star. The first material of this kind reaching one of these newly arrived
white dwarf stars, the hot subdwarfs, encounters the extremely high temperature of this
object, and is heated to such an extent that it is ejected back into the surroundings. Since
both the incoming and the outgoing material are at a very low density, there is only a
limited amount of interaction, and the cold material continues to flow inward through the
outward moving matter.
When the incoming matter reaches the hot surface of the star it is not only heated to a
very high temperature, but is also strongly ionized. The outgoing ionized matter emits
visible radiation, and we therefore see a sphere of ionized matter centered on the young
white dwarf. The radiation from ionized atoms occurs when they drop to a lower state of
ionization, and as a consequence the greater part of it takes place after the ejected
material has traveled far enough to lose a substantial part of its original ionization energy,
and before that energy is all dissipated. This leaves a nearly invisible region in the
interior of the sphere. To the observer, the resulting structure has the appearance of a
ring. Such an object is a planetary nebula.
Here we see the significance of the observation, cited above, that some of the hot
subdwarfs are central stars of planetary nebulae. According to our deductions from theory
all of the hot subdwarfs will become central stars of planetary nebulae in due course.
The planetaries are all far distant from our location, and it is difficult to get an accurate
observational picture of the complicated processes that are under way in them.
Consequently, there is considerable difference of opinion as to just what is happening.
Although we seem to comprehend the broad outlines of their formation and development,
much of what we see is confusing and not at all well understood.127 (James B. Kaler)
The outward motion of the ionized gas in the typical large nebula is well established, and
the general tendency is to take this as indicating that the central star, which is conceded to
be a white dwarf, or on the way to becoming a white dwarf, is ejecting mass into its
surroundings as a part of the process which, according to current ideas, will eventually
reduce it to a burned-out cinder. This observed outward flow of matter seems, on first
consideration, to define the nebula as an expanding cloud of material. But there are strong
indications that this simple view is incorrect. One significant point is that the nebulae are
not actually expanding at the rates indicated by the measured speeds of the outgoing
matter. Indeed, some of the nebulae are not expanding at all. For instance, velocity
measurements indicate that the diameter of NGC 2392, the Eskimo Nebula, is increasing
at the rate of about 68 miles per second. But no definite increase in size is shown in
photographs taken 60 years apart.
Our findings now indicate that the prevailing view of the nebulae as expanding structures
is incorrect. Instead of being a rapidly dissipating cloud of material ejected from the
central star in a single burst, or succession of closely spaced bursts, our analysis indicates
that the planetary nebula is a relatively permanent ionization sphere through which the
outgoing stream of material flows. We might compare it to the visible area of a river
illuminated by the beam of a searchlight.
A report by M. and W. Liller concedes that "Very possibly, all planetary nebulae are
ionization spheres,"129 but contends that in general these ionization spheres are
expanding, although at a slower rate than would be indicated by the measured velocities.
The size of the ionization sphere depends on the temperature of the central star and on the
density of the nebula, increasing with higher central temperature and decreasing with
higher density. The temperatures of the central stars necessarily decrease from the
100,000 K initial level. There is little or no loss of matter from the planetary nebula
system, since the initial speeds of the ejected matter, while high by terrestrial standards,
are not anywhere near sufficient to carry the outgoing matter to the gravitational limit
before being slowed down. In the meantime, additional matter is being drawn in from the
environment. Thus the general trend of both temperature and density is in the direction of
reducing the size of the ionization
sphere (the observable nebula). It does not follow, however, that this decrease is
continuous and uniform throughout the planetary nebula stage. On the contrary, the flow
conditions in the nebula are such that fluctuations of a major nature can be expected,
particularly in the early portion of this evolutionary stage.
The inward flow of material toward the central star is not observable. Under ordinary
conditions very diffuse material at great distances and low temperatures cannot be
detected by any means now available. A part of this incoming matter is ionized by the
radiation from the star, but the ionizing effect increases as the star is approached, and the
transitions to lower ionization states that cause the emission of radiation are minimized.
Radiation from the incoming matter is thus minor, and it has not been identified. We can,
however, deduce that the initial inflow of the material, when the hot white dwarf first
establishes a definite position in space, is relatively heavy, as the site of a recent
supernova explosion is well filled with explosion products.
This relatively large amount of incoming matter encounters the maximum 100,000 K
temperature, is strongly ionized on contact, and is ejected at a high speed. Thus a large
ionization sphere is quickly established when the action begins. The outward movement
of the relatively large amount of ejected matter retards the inward flow of matter to some
extent. This has two effects. It reduces the amount of material reaching the central star,
thereby reducing the amount of ejection, and diminishing the outward flow.
Coincidentally, the incoming material that is being held back by the outward flow builds
up a concentration in the regions beyond the ionization sphere. Eventually the reduced
outward flow is unable to hold back this concentration of material that is being pulled
inward by gravitational forces, and there is a surge of matter toward the central star. This
recreates the original situation (at a somewhat lower level, since the temperature of the
central star has decreased in the meantime), and the whole process is repeated.
During the time that the outward flow predominates, the density within the ionization
sphere is decreasing, while because of the reduction in the inflow of cold matter, the
surface temperature of the central star remains approximately constant. The ionization
sphere therefore expands slowly. When the inward surge of matter occurs, these
conditions undergo a rapid change. The density within the ionization sphere increases
sharply, and the surface temperature of the central star decreases. The result is a rapid
contraction of the ionization sphere. After these effects of the surge have run their course,
the heavier outward flow and the expansion of the ionization sphere are resumed, but in
the meantime the internal temperature of the central star has dropped, and the surface
temperature does not regain its former level. The expansion therefore starts from a
smaller size than before, and the next surge occurs before the size of the ionization sphere
reaches its earlier maximum. Thus, as the successive expansions and contractions
continue, the size of the nebula gradually decreases. Eventually the open space in the
center is eliminated, or at least drastically reduced. The older nebulae are therefore
relatively small, and have filled, or partially filled, centers.
One observed phenomenon that tends to confirm the validity of the foregoing explanation
of the general behavior of the planetary nebulae is the existence of faint outer rings in
some of the nebulae. These are just the kind of remnants that would be left behind if there
is a relatively rapid periodic decrease in the size of the ionization sphere, as indicated in
the foregoing theoretical account of the process. The currently favored explanation is that
the rings were produced by explosive outbursts from the central star that preceded an
outburst to which the main portion of the nebula is attributed. But there is no evidence
that such explosive outbursts occur, nor does present-day astronomical theory have any
explanation of how they could originate.
This is only one of many conflicts between the pattern of evolution of the planetary
nebulae, as derived from the theory of the universe of motion, and the view currently
prevailing among the astronomers. In that currently accepted view, these nebulae are seen
as expanding objects, and the largest ones are therefore regarded as the oldest. But the
temperature relations specifically contradict this hypothesis. Examination of the data
reported for a selected group of "prominent" nebulae130 shows that the temperatures of
the central stars range from about 100,000 K to about 30,000 K. The sizes of the nebulae
vary widely, but those members of the sample group with temperatures in the
neighborhood of 100,000 K all have diameters of a minute of arc or more, while almost
all of those at the lower end of the temperature range have diameters of less than 30
seconds. The idea that a ‖dying star― that ‖has come close to the end of its life . . .
destined soon to become a white dwarf, the last stage before it disappears from view
altogether― 129 is steadily increasing in temperature from 30,000 K to 100,000 K during
the planetary stage is preposterous. Even on the basis of the astronomers own theory, the
temperature trend must be downward.
It is true that the luminosity of the central star increases substantially as the size of the
nebula decreases. The data reported for the sample group show that at the high end of the
range of nebular sizes the average magnitude of the central star is about 14. The four with
Messier numbers have magnitudes 13.5, 14, 15, and 16.5. From this level the luminosities
increase rapidly, and at the low end of the nebular size range the average magnitude is
about 11. These are observed, rather than absolute, magnitudes, but the correction for
distance, if available, would not change the general picture. Coincidentally, the
temperatures of the central stars are decreasing. It is evident that what is happening here
is that while the total emission is decreasing as the stellar temperature drops, a larger
proportion of that total is coming directly from the central star, rather than being passed
on to the nebula and emitted from there. The decrease in temperature is the salient feature
of the change that takes place with time, and it establishes the direction of evolution
unequivocally.
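Since the magnitude scale is logarithmic, the quoted averages can be translated into a
luminosity ratio. The short calculation below is only an illustrative sketch; it uses the
standard relation between a magnitude difference and a brightness ratio, with the averages
of 14 and 11 taken from the sample group described above:

# Convert the difference between the quoted average magnitudes into a brightness
# ratio, using the standard relation ratio = 10**(0.4 * delta_m).
m_large_nebulae = 14.0   # average magnitude, central stars of the largest nebulae
m_small_nebulae = 11.0   # average magnitude, central stars of the smallest nebulae
ratio = 10 ** (0.4 * (m_large_nebulae - m_small_nebulae))
print(f"{ratio:.1f}")    # about 15.8: the central stars of the small nebulae are
                         # roughly 16 times brighter than those of the large ones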
Turning now to the question as to the location of the planetaries on the CM diagram, the
first point to be noted is that we are now dealing with stars that are quite different from
those on the upper side of the main sequence. As we saw in Chapter 6, these stars, Class
D stars, as we are calling them, were dispersed in time by the supernova explosion, rather
than being dispersed in space by the process with which we are more familiar. When they
again become visible as stars they are expanding back toward the main sequence, instead
of contracting. The color indexes and luminosities of these stars can be measured, and
they can therefore be represented on a CM diagram. But, as we have already seen in the
case of the Class C stars, the other variable properties of the stars do not necessarily
maintain the same relations to the color index-luminosity function in the different star
classes. For instance, the Class C mass at a given point in the diagram is usually quite
different from the mass of a Class A star at that same point. The truth is that, with the
exception of the spatial main sequence, which is common to all, the CM diagrams of the
different classes of stars are different diagrams.
Fig. 4 and the accompanying discussion in Chapter 4 bring out the fact that the major
properties of Class A stars, other than those on which the CM diagram is based, are
specifically related to the variables of the diagram, so that the stars of this class are alike
if they have the same position on the diagram. This is not true, in general, of a Class A
star and a Class C star at the same location. Similarly, if the Class D central star of a
planetary nebula occupies the same position in the diagram as a certain Class A star, this
does not mean that the two are alike. On the contrary, they are very different, because of
their dissimilarity in the properties that are not portrayed by the diagram.
This issue does not arise in the case of most of the stars of the dwarf classes, as they are
well below the spatial main sequence, but some of the large hot subdwarfs and central
stars of the planetary nebulae are close to, or even above, the location of the main
sequence. It should be recognized that the diagram is misleading in these cases, and that
the stars of these two dwarf classes are actually very different from stars whose motion is
in space. In this volume, all Class D stars will be regarded as "below the main sequence"
for the purposes of the discussion.
The temperature of about 100,000 K at which the white dwarf reaches the observable
region is far above the level of the environment in the material sector of the universe. In
order to reach a point of thermal equilibrium in that sector the star must cool down to a
level within the sector energy range (below unit speed). This cannot be accomplished in
one continuous operation; a three-step process is required. Conversion to the one-
dimensional material status can take place only on a single unit basis, in which a single
unit of one-dimensional time motion converts to a single unit of one-dimensional space
motion. The star must first cool down to a limiting temperature where the individual
atoms at the stellar surface are in the unit condition in time. This is the temperature that
we have identified empirically as approximately 30,000 K. Here the transition
from motion in time to motion in space takes place. The third step in the process, a
further cooling to the equilibrium temperature, then follows.
From the foregoing, we find that the planetary nebulae are located on the conventional
color-magnitude version of the H-R diagram between the two vertical lines drawn in
Fig.18, representing temperatures of 100,000 K and 30,000 K respectively. The plotted
points are the locations of the planetary nebulae in the tabulation by G. O. Abell
(reference 131). All of these points fall within the temperature limits defined by the
specified lines.
For an understanding of the positions and evolutionary changes illustrated by Fig.18 we
need to review some of the findings of the previous volumes of this series with respect to
natural units. According to the fundamental postulates of the theory of the universe of
motion, the basic constituent of the universe, motion, is limited to discrete units. Since all
physical phenomena in this universe are motions, combinations of motions, or relations
between motions, it follows from the discrete nature of the units of motion that all of
these subsidiary phenomena must also be limited to discrete units.
The basic units of space, time, mass, energy, etc., were evaluated in Volume I. However,
these simple units are not directly applicable to complex phenomena. Here a compound
unit usually applies, a combination of the simple primary units. For example, the primary
unit of space has been evaluated as 4.56×10⁻⁶ cm. But within a unit of space there are
compound motions in which the spatial units are modified by certain combinations of
units of time. As a result, the phenomena in this region are not related to the simple units
of space, but to a compound (or modified) unit of space that amounts to 0.0064 of the
full-sized natural unit, or 2.92×10⁻⁸ cm.
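The compound unit quoted here is simply the product of the primary unit and the 0.0064
factor; the following one-line check (a sketch in Python, using only the figures given in
the text) reproduces it:

# Compound (modified) unit of space: the primary natural unit times the 0.0064 factor.
primary_unit_cm = 4.56e-6            # natural unit of space, from Volume I
compound_unit_cm = 0.0064 * primary_unit_cm
print(f"{compound_unit_cm:.3e} cm")  # 2.918e-08 cm, the 2.92e-8 cm value quoted above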
Because of the general applicability of the discrete unit limitation, we can deduce that
wherever we encounter a critical value of some kind, we are dealing with a compound
unit, or a small number of such units. It is not usually possible to evaluate the compound
unit in terms of the simple units of which it is composed until after the theoretical
relations that are applicable have been clarified in considerable detail. For instance, in the
case of the space units, the factor 0.0064 that relates the compound unit to the simple unit
is something that one would not be likely to find unless he had a very good idea as to
where to look for it. The development of the theory of the universe of motion has not yet
been applied to the quantitative aspects of astronomical phenomena on an extensive
enough scale to enable evaluating more than a limited number of the compound
astronomical units. But the mere knowledge that some particular magnitude is a
compound unit, or a small whole number of such units, is very often helpful.
In the present instance, we are able to make use of a feature of the evolutionary pattern of
the globular clusters that was mentioned, but not discussed, in Chapter 8. As noted there,
the difference in luminosity between point A and point B on the CM diagram, on the
logarithmic scale, is twice the difference between point B and point C. Inasmuch as these
points are all critical points in the evolutionary pattern, the difference in magnitude
between any two of them is presumably n compound units, where n is a small whole
number.
The nature of this compound unit has not yet been determined, but the logarithmic
magnitude scale suggests a dimensional relation, and leads to the surmise that the
magnitudes at points B, C, and A are 1, 2, and 3 respectively. There is, of course, a large
hypothetical component in this conclusion, at the present stage of theoretical and
observational knowledge, but we can treat it like any other hypothesis; that is, develop its
consequences and compare them with observation. As will be seen in the discussion that
follows, the consequences of this hypothesis do, in fact, agree with the available
observational information. Within the limits to which the correlation has been carried, the
hypothesis has been verified.
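The internal consistency of this assignment is easily checked: with B, C, and A placed at 1,
2, and 3 compound units, the A-to-B interval is two units and the B-to-C interval is one
unit, which reproduces the two-to-one ratio of luminosity differences noted above. A trivial
sketch of the check:

# Hypothesized magnitudes, in compound natural units, at the three critical points.
B, C, A = 1, 2, 3
assert abs(A - B) == 2 * abs(B - C)   # the A-B interval is twice the B-C interval
print(abs(A - B), abs(B - C))         # 2 units and 1 unit respectively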
The particular value of this hypothesis is that it gives us a means of locating the critical
points in the white dwarf section of the CM diagram. In the earlier volumes it was
established that the boundary between motion in space and motion in time has a finite
width, and that there are two natural units between the respective unit levels. It follows
that if, as we have concluded, point B corresponds to one unit in the spatial direction (+1
we may say), then a point one unit lower on the extension of the line AB corresponds to
zero, and point B', two units lower, corresponds to -1; that is, one unit in the temporal
direction. The line A'B' parallel to BC is then the equivalent of the main sequence for
motion in time.
With the benefit of this information we can now define the evolutionary paths of the
planetary stars. Fig.19 compares these paths with those of the giant stars. The line OAB is
the evolutionary pattern of a giant star that has a mass of about 1.1 solar units at point B.
Such a star originates with a smaller mass, but secretes material as it moves along line
OA, and reaches the 1.1 mass level at point A. This is the critical density level, where the
star acquires the ability to contract by means of its own gravitation, without the aid of
outside forces. This contraction carries it down to the point of gravitational equilibrium at
B.
This star that begins its life as a red giant is in a state of thermal equilibrium; that is, it is
radiating the same amount of heat that it generates. But its density is extremely low, far
below the level of stability. Its evolutionary course beyond point A, unless modified by
accretion, is therefore along a line of constant central temperature toward the main
sequence, the location of gravitational equilibrium. The early white dwarf, on the other
hand, is already in a state of gravitational equilibrium, from the standpoint of gravitation
in space, while it is too hot to be thermally stable. This star therefore moves along the
line of gravitational equilibrium for motion in time, the equivalent of the spatial main
sequence, toward a condition of thermal equilibrium.
The (inverse) volume of the white dwarf star at any given surface temperature is
determined by the mass. Thus the more massive stars reach the 100,000 K temperature
level while their inverse volume, from which they radiate (a point that will be considered
further in Chapter 12), is greater, and their luminosity is consequently higher. The
incoming white dwarfs are thus distributed along the 100,000 K line in accordance with
their masses. At the 1.1 level, identified as A' in Fig.19, the white dwarf occupies a
critical position somewhat similar to the critical density position at point A on the giant
path. This white dwarf of 1.1 solar masses is the smallest star that has sufficient total
thermal energy to maintain the 100,000 K surface temperature in the gaseous type of
gravitational equilibrium.
This critical mass star that originates at point A' moves down along the line A'B',
gradually converting its outermost atoms from three-dimensional motion in time to one-
dimensional motion in time. This conversion is completed at B'. Further cooling then
transforms the one-dimensional motion in time at B' to one-dimensional motion in space
at B. Thus the giant and dwarf stars of the same mass eventually arrive at the same point
on the spatial main sequence.
Giant stars whose growth ceases at some point a between O and A follow a path ab
parallel to AB, terminating at point b on the main sequence. Stars that continue adding
matter beyond point A have a different evolutionary pattern, as previously explained. In
the dwarf region it is the star with a mass less than 1.1 solar units that has a different
pattern of evolution, one that will be examined in Chapter 12. Since the mass of the white
dwarf is constant during its movement along the line of gravitational equilibrium, it
follows that each mass has its own equilibrium line. Thus the equivalent of the main
sequence for motion in time is a series of lines parallel to the spatial main sequence.
The larger stars that we are now considering originate along the 100,000 K temperature
line at locations above point A'. Such a star moves along a path a'b' from a', the point of
origin, to b', a point on the line B'B. It then converts to motion in space at point B in the
same manner as the star of mass 1.1, but B is not a location of thermal equilibrium for the
more massive star. A further movement along the main sequence is required in order to
reach the point of thermal stability. The final position of this star in the CM diagram is a
point somewhere between B and C, the exact location depending on the mass.
On the diagram, the movement from B to x, the final location, appears anomalous, as a
decrease in temperature normally corresponds to a movement to the right. This is another
illustration of the fact that the CM diagrams of stars of different classes, or even different
sub-classes, are actually different diagrams. The temperature corresponding to a given
color index is much lower on the spatial main sequence than on the equivalent path a'b'
for motion in time. The movement of the Class D stars toward the left after reaching the
spatial main sequence is not a temperature effect, but a result of this difference in the
significance of positions in the diagram. The cooling star is actually at a considerably
lower temperature in its final position at point x than it was at point b', even though it is
farther to the left.
Inasmuch as the white dwarfs are contracting in time rather than in space, the spatial
compression due to the gravitational motion toward the galactic center has no effect on
these Class D stars. The evolutionary paths shown in Fig.19 therefore meet at the
globular cluster level of the spatial main sequence, rather than at the position of the
galactic field stars. The final position, designated x, on the spatial main sequence is,
however, subject to the gravitational shift, and the last phase of the conversion from
motion in time to motion in space includes an upward movement of 0.8 magnitudes, as
well as the movement to the left from B to x. As noted in Chapter 10, the observed Class
D pattern is strong evidence of the reality of the gravitational shift.
Observational information about the two classes of relatively large white dwarfs that we
have been considering, the hot subdwarfs and their successors, the central stars of the
planetary nebulae, is very limited, but the positions on the CM diagram indicated by the
available data are entirely consistent with the evolutionary pattern that we have derived
from theory. The dashed line in Fig.18 outlines the locations of the hot subdwarfs as
given by M. and G. Burbidge (reference 102). This indicated area is clearly consistent
with the theoretical conclusions. As noted earlier, the locations of the representative
group of planetary nebulae identified in Fig.18 are also within the theoretical limits.
Because of their decrease in temperature, the movement of these nebulae on the CM
diagram must be, at least generally, from left to right. (Even the adherents of
conventional astronomical theory concede this. See, for instance, the diagram by
Pasachoff, reference 132.) Some further confirmation of the theoretical findings can
therefore be obtained by examination of the relation of the diameters of the planetaries on
the Abell list to their locations on the diagram. Fig.20 is a reproduction of Fig.18, with
the diameters in parsecs shown alongside the points indicating the locations. As might be
expected, in view of the diversity of the conditions under which the nebulae exist, and to
which the observations are subject, there are wide unexplained variations in the
individual values, but the general trend is clear. Disregarding the group of nebulae below
the line A'B', which are subject to some special considerations that we will examine in
Chapter 12, there are 16 nebulae with an average diameter of 84 parsecs in the left half of
the identified nebular region, and 7 nebulae with an average diameter of 47 parsecs in the
right half.
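The comparison made here is simply a pair of averages taken over the two halves of the
identified nebular region. The sketch below shows the form of that tabulation; the list
entries and the dividing value of the color index are illustrative placeholders, not values
taken from the Abell list:

# Average nebular diameters on the two sides of the diagram, given a list of
# (color_index, diameter) pairs.  The sample entries are placeholders only.
def half_averages(nebulae, split):
    left = [d for ci, d in nebulae if ci < split]
    right = [d for ci, d in nebulae if ci >= split]
    avg = lambda xs: sum(xs) / len(xs)
    return avg(left), avg(right)

sample = [(-0.25, 90.0), (-0.20, 80.0), (0.05, 50.0), (0.10, 45.0)]  # illustrative
print(half_averages(sample, split=0.0))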
Observational information on the masses of the planetary stars is minimal, but the little
that is available is consistent with the existence of a lower limit at 1.1 solar masses, or at
least not inconsistent with it. An average of 1.2 solar masses has been suggested.133 As
noted earlier, the mass of one of the preplanetary stars, the hot sub-dwarfs, has been
determined as 1.5 on the same scale. We will see in Chapter 13 that the mass of another
star, which we will identify as a former planetary star, has been calculated at 2.1 solar
masses. These results are too few in number to confirm the theoretical minimum, but they
do point in that direction.
Inasmuch as this shortage of empirical information exists, in one degree or another,
throughout the entire range of the white dwarf phenomena, it is again appropriate to call
attention to the fact that the validity of the general principles and relations that have been,
and will be, applied to explaining these phenomena has already been firmly established in
physical fields where factual data are abundant and reliable. Thus, even though the
correlations between theory and observation that are possible in such areas as that of the
white dwarfs are too limited to provide positive confirmation of the validity of the
theoretical conclusions, the fact that these conclusions are consistent with what is known
from observation is sufficient, in conjunction with the validity of the principles on which
they are based, to establish a strong probability that they are correct.
It was noted in Chapter 6 that some of the central stars of planetary nebulae are currently
identified as Wolf-Rayet stars. The identification is based on their high temperatures and
spectra that are similar to those of the massive Wolf-Rayets. In other respects these
objects are quite different. As described by Smith and Aller, the central stars of the
planetary nebulae are believed to have masses in the neighborhood of solar, and absolute
magnitudes fainter than -3, whereas the Wolf-Rayet stars in the other class are believed to
average about ten solar masses and to have absolute magnitudes brighter than -4. A
comparison of typical stars of these two classes leads these authors to the conclusion that
they have a "totally different evolutionary status." They admit that they "are led to
wonder how many different stages of evolution can yield the Wolf-Rayet form of
spectrum."134
"It is a further problem," says Anne B. Underhill, "to understand . . . why this physical
state may occur early in the life of a massive star (the Wolf-Rayet stars of Population I)
and late in the life of a star of small mass (the Wolf-Rayet stars of the disk population)."75
This problem is resolved by our finding that the planetary stage follows almost
immediately after the Wolf-Rayet stage; that is, the true Wolf-Rayet is a late pre-
explosion star, whereas the central star of a planetary nebula currently confused with the
Wolf-Rayet is an early post-explosion star. The similarity of the spectra is no doubt due
to the existence of very high temperatures in both cases, and to the presence, in both
classes of stars, of matter from the stellar interiors that has been brought to the surface by
explosive activity.
According to the general description of the dwarf star cycle given in Chapter 4, and the
identification of the evolutionary pattern in Fig.19, the white dwarfs ultimately make
their way back to positions on the spatial main sequence. We have now traced the course
of one group of these stars along lines parallel to the main sequence from their points of
entry into the observable region to positions somewhere near the low temperature limit in
the neighborhood of 30,000 K. As has been indicated, the next move will be upward
toward the main sequence. Before discussing the nature of the change that takes place in
this final dwarf stage, however, it will be advisable to examine another group of white
dwarf stars that also has to undergo this final transition to the material status.
CHAPTER 12

Ordinary White Dwarfs

The previous discussion of the white dwarf stars has been directed at the products of the
Type I supernovae, the explosions that take place at the temperature limit to which matter
is subject. As already mentioned, a similar explosion, known as a Type II supernova,
takes place when matter reaches an age limit. This is intrinsically a more violent process,
and in its extreme manifestations it produces results that are quite different from those of
the Type I supernovae. Discussion of these results and the manner in which they are
produced will be deferred to the later chapters. At this time we will want to note that
under less extreme conditions the results of the Type II supernovae are identical with
those of Type I, except that the products are smaller.
The explanation is that the unique character of the products of the extreme Type II
supernovae is due to the ultra high level of the speed imparted to these products by the
combination of a large explosion (that is, one involving a large star) and an extremely
energetic process. The products of Type I supernovae do not reach this speed level, even
though the exploding star is one of maximum size, because the process is less violent.
Similarly, the products of a Type II supernova do not reach the ultra high level if the
exploding star is small, even though they have the benefit of the very energetic process.
Although the age limit can be reached by stars of any size, and the white dwarf products
of Type II explosions extend through a wide size range, the great majority of those that
exist in the outer regions of the galaxies are small, simply because the great majority of
the stars in these regions are small. Many of these small white dwarfs are below the
minimum size of 1.1 solar masses that applies to the central stars of the planetary
nebulae. Our next objective will be to examine the evolutionary course of these smaller
stars, ordinary white dwarfs, as we will call them.
As we saw in Chapter 11, the 1.1 lower mass limit of the planetary nebula region is the
white dwarf mass below which the energy content of the star is not sufficient to maintain
a gaseous structure in gravitational equilibrium. This is analogous to the critical density
of the giant stars. It should be understood that the term "giant" refers to the volume, not
to the mass. Most of these giants are low mass stars. Such stars, whose first stage of
evolution carries them along the path OA, are unable to reach the critical density in the
dust cloud (gaseous) condition, and have to call upon the compressive forces of the
aggregate in which they are located to aid in developing a compact, gravitationally stable,
core in order to increase the average density to the required level. What exists here is a
situation in which the inward-directed forces operate to force the matter of the star into a
gravitationally stable condition. When the star is too small for this condensation to take
place in a single operation, applicable to the star as a whole, it proceeds on a two-
component basis, in which one component, the central core, is compressed to the
condensed gas state, while the remainder of the stellar aggregate continues on the gaseous
basis, gradually converting to condensed gas as the star moves down toward the main
sequence.
In the case of the white dwarfs there is no gravitational problem, as the white dwarf
aggregate is always under gravitational control, but the smaller stars, those with masses
less than 1.1 solar units, do not have enough energy content to maintain the surface
temperature at 100,000 K in the gaseous state. Hence they, too, have to proceed on a two-
component basis, developing a condensed gas component like that of the smaller stars of
the giant class. However, the fact that the motion of the constituents of the white dwarf is
in time rather than in space introduces some differences. Because of the inverse density
gradient in the white dwarf stars this relatively heavy condensed gas component takes the
form of an outer shell, rather than that of an inner core. Then, the presence of this shell
reduces the radiation temperature to that of a condensed gas surface. This is the same
surface condition that exists along the line B'B, the 30,000 K temperature line of the
planetaries. Thus the 100,000 K line above point A' becomes a 30,000 K line below that
level.
The existence of an outer shell has been recognized observationally, but because of the
prevailing theory of white dwarf structure this has been interpreted as a zone of ordinary
matter surrounding the hypothetical degenerate matter of which the white dwarf,
according to current astronomical theory, is composed. Greenstein reports that there is a
non-degenerate envelope about 65 miles deep.135 On the basis of our findings, the
thickness of the shell at the time of entry into the observable region depends on the size
of the star. A white dwarf just below the critical 1.1 mass needs only a thin shell, but the
required thickness increases as the mass of the star decreases.
As brought out in Chapter 11, the central star of a planetary nebula moves down the CM
diagram along the line A'B', or a parallel line above it, to the level at 30,000 K where the
energy content of the outer thermal units of this gaseous aggregate is on the boundary
between motion in time and motion in space. Here the transition from units of temporal
motion to units of spatial motion takes place. But since the ordinary white dwarfs have to
develop an outer shell of condensed gas before they become observable, the energy
content of their outer thermal units is already below the unit speed boundary. A transition
to motion in space on the basis of the full-sized unit is therefore impossible. These small
stars have to cool to a lower critical temperature at which their outer thermal units are at
the level of the smaller compound units of the condensed gas state, a state in which the
atoms occupy equilibrium positions inside unit distance, in what we have called the time
region.
The 30,000 K and 100,000 K temperatures along the line at the left of the CM diagram
are critical values in the sense in which this term was used in the discussion of the
luminosity scale of the diagram. We may therefore deduce by analogy with the situation
in the region above the main sequence that the drop from 100,000 K to 30,000 K at the
point A' involves one of the compound natural units of luminosity. The 30,000 K
equivalent of the line A'B' is then a parallel line one unit lower in the diagram. This line
constitutes the lower boundary of the zone occupied by the ordinary white dwarfs.
Above point A' the constituents of the white dwarf stars are moving freely in time; that is,
they constitute gaseous aggregates in time. It follows that they radiate from the surface
corresponding to an inverse volume. The more massive stars of this group (the hot
subdwarfs and the planetary stars) have the greater inverse volume and are therefore the
more luminous. Below point A' the outer layers of the stars are in the condensed gas
state, in which they are confined within limited volumes of space. These stars radiate
from the spatial surface, the surface corresponding to a direct volume. The more massive
stars of this class have the larger inverse volume, and therefore the smaller direct volume
(a theoretical conclusion that, as we have noted earlier, is confirmed observationally).
Consequently, they are less luminous than the smaller stars of the same class.
There may be some question as to why there should be a difference between the radiation
pattern of the gaseous state and that of the condensed gas state when the motion is in
time, since we do not encounter any such difference in dealing with motion in space. The
stars on or above the spatial main sequence radiate in space regardless of their physical
state. The answer to this seeming contradiction is that condensed gas aggregates radiate
in time if they are condensed in time, whereas they radiate in space if they are condensed
in space. The outer shells of the white dwarfs condense in space.
From their initial locations along the entry line, the cooling ordinary white dwarfs move
down the CM diagram along lines parallel to the spatial main sequence in the same
manner, and for the same reasons, as the planetary stars, within the relatively narrow
band between A'B' and the lower zone boundary. Since the radiation from these stars is
in space, the color-temperature relation applicable to this radiation is the same as that
which applies to the stars of the spatial main sequence. The evolutionary lines of the
ordinary white dwarfs therefore continue to their individual temperature limits, rather
than terminating at the extension of the low temperature limit of the planetaries.
Consideration of the question as to the location of these low temperature limits of the
ordinary white dwarfs will be deferred to the next chapter. At this time we will merely
note that the evolutionary lines followed in the cooling of these stars do not reach the
position of the lower portion of the spatial main sequence, which bends sharply
downward beyond 4000 K. James Liebert reports that there is a cut-off between
magnitudes 15 and 16. This fact that the range of the white dwarfs stops short of the main
sequence has come as an unwelcome surprise to the astronomers. Greenstein makes this
comment:
An anomaly has been found in the number and relative frequency of cool, red white
dwarfs. It has been expected that these would be very common, but, in fact, objects more
than 10,000 times fainter than the sun are rare.136
Main sequence dwarfs are observed all the way down to about magnitude 19, and it has
been anticipated that the white dwarf population would extend to comparable levels. The
observed cut-off at a higher luminosity confronts astronomical theory with an awkward
problem. The evolutionary sequence, according to orthodox ideas, is protostar to main
sequence star to red giant to white dwarf to black dwarf. One of the biggest problems that
arises in the attempts to reconcile this theoretical sequence with the observations is how
to account for the changes in mass that are required if this sequence is followed. As
already noted, the theorists are experiencing major difficulties in accounting for the
reduction in mass that is necessary if the red giant is to evolve into a white dwarf. They
have no explanation at all for an increase in mass during the evolution of the star. The
existence of main sequence stars smaller than the white dwarf minimum thus puts them
into a difficult position. Liebert, arguing from the premises of accepted theory, states that
the observed cut-off implies either (1) an error in the calculations, or (2) a decreased
white dwarf birthrate about 1016 years ago.137
In the context of the theory of the universe of motion aggregates of intermediate speed
matter are produced in all sizes from the maximum downward. But the smaller
aggregates are unable to complete their consolidation into single compact entities. As
explained in Chapter 7, in connection with the formation of planetary systems, the pattern
of gravitational forces in the aggregates of intermediate speed matter favors complete
consolidation of the larger aggregates, but becomes more favorable to multiple products
as the total mass decreases. On this basis, the reason for the absence of white dwarfs
below a mass of about 0.20 solar units is not that white dwarf aggregates of smaller sizes
do not exist, but that these smaller aggregates are not able to complete their
consolidation, and remain as groups of objects of less than stellar size.
The existence of this lower mass limit applying to the white dwarfs is one of the reasons
for the big difference in luminosity between the planetary stars and the ordinary white
dwarfs that has puzzled the observers. As Richard Stothers puts it, there is a "luminosity
gap" between the coolest planetary star and the hottest of the ordinary white dwarfs.138
Some attempts have been made to explain this gap in terms of stellar composition.
Greenstein, for instance, tells us that
The only possible explanation of their low luminosity is that hydrogen must now
comprise less than 0.00001 of the mass of a dwarf star.135
Like so many other astronomical pronouncements, what this assertion really means is that
its author is unable to find any other explanation within the limits of currently accepted
theory. From the theory of the universe of motion we find that the "gap" between the
luminosities is mainly due to the reduction in the luminosity of the ordinary white dwarfs
by reason of the outer shell of condensed gas that characterizes these stars. The
luminosity difference is increased by the existence of the mass minimum, as this
eliminates the small stars that would be the most luminous members of this class.
We can now see the significance of the group of planetary nebulae immediately below
the line A'B' on the CM diagram. These are stars that are small enough to require an
outer shell, but so close to the dividing line that the shell is too thin to block much of the
radiation from the interior. These out-of-place planetaries are found only in a very limited
region of the diagram, because as soon as they cool a little more and move down the
evolutionary path a short distance the shell thickness increases enough to cut off the
planetary type of radiation.
In our examination of the behavior of ordinary white dwarfs we will, as usual, draw upon
various sources in the astronomical literature for the observational information that is
needed, but the specific comparisons with the theoretical pattern will deal mainly with a
group of 60 white dwarfs on which all of the major physical properties have been
determined—absolute magnitudes and color indexes by J. L. Greenstein (reference 139),
and masses and temperatures by H. L. Shipman (reference 140). Fig.21 is the CM diagram
for this group of stars.
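For readers who wish to reproduce a diagram of this kind from published tabulations, the
following is a minimal plotting sketch. The file name and column names are assumptions made
for illustration only; they are not the formats of the Greenstein or Shipman tabulations:

# Minimal color-magnitude diagram for a white dwarf sample.  Assumes a CSV file
# with columns for B-V color index, absolute magnitude, and mass (hypothetical names).
import csv
import matplotlib.pyplot as plt

bv, mag, mass = [], [], []
with open("white_dwarf_sample.csv") as f:         # hypothetical data file
    for row in csv.DictReader(f):
        bv.append(float(row["B_V"]))
        mag.append(float(row["abs_mag"]))
        mass.append(float(row["mass_solar"]))

plt.scatter(bv, mag, c=mass, cmap="viridis", s=20)
plt.colorbar(label="Mass (solar units)")
plt.gca().invert_yaxis()                          # brighter stars plotted toward the top
plt.xlabel("B-V color index")
plt.ylabel("Absolute magnitude")
plt.show()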
All but three of the masses of the sample group fall within the evolutionary band that has
been identified. The average decrease in luminosity is more rapid than that indicated by
the theoretical evolutionary lines, but this faster drop is due to known causes. At the
upper end of the evolutionary band the entire distribution of masses is shifted upward to
some extent. In this early white dwarf stage, when the outer shells are relatively thin,
some of the radiation from the interior is evidently penetrating the shell, increasing the
luminosity beyond the normal levels. This is a weaker form of the same effect that was
noted in connection with the existence of planetary nebulae below the line A'B'. In the
remainder of the band the average luminosity gradually drops away from the theoretical
line in the same manner, and for the same reason, that the spatial main sequence turns
downward in its lower sections. This is a result of the gradual decrease in the frequencies
of the radiation from the stars, which shifts more and more of the radiation into the
optically invisible ranges as the temperature drops.
The general relation between mass and luminosity is definitely inverse, as required by the
theory. While the positions of the individual members of the three mass groups identified
by symbols in Fig.21 are somewhat scattered, those of the smaller stars are all in the
upper portion of the populated areas of the diagram, while those of the group with masses
above 0.8 solar units are all in the lower portion. Most of the stars of the intermediate
group, those with masses between 0.4 and 0.8, are close to the average.
As noted earlier, the lower section of the evolutionary band of the ordinary white dwarfs
is not cut off at the 0.4 color index in the manner of the planetary stars, but continues on
to a limit somewhere in the neighborhood of magnitude 16. The faintest star in the
sample group has magnitude 15.73. The number of stars below the 0.4 color index in this
sample group is rather small, but this is undoubtedly a matter of observational selection.
All of the white dwarfs are relatively dim, and the observational difficulties resulting
from this cause increase as the stars age and become less luminous. The available data on
these objects therefore come preferentially from the earlier, more luminous, stars. As we
will see later, "the most numerous kind of white dwarf" is the cool, dim type of star that
populates the lower luminosity range, beyond a color index of 0.3 or 0.4, the same range
that is so poorly represented in the sample group. The question as to what happens to the
stars that reach the lower limit of this white dwarf evolutionary path will be the subject of
discussion in the next chapter.
The foregoing findings as to the evolutionary course of the ordinary white dwarfs now
enable us to extend the theoretical CM diagram of the planetary stars, Fig.19, to include
the stars of this smaller class, and to show how the zone occupied by these ordinary white
dwarfs is related to the positions of the other classes of stars. For comparison, this
enlarged diagram, Fig.22, also indicates the location of the ordinary white dwarfs as
identified in the illustration accompanying the previously cited article by M. and G.
Burbidge.102
The spectra of the white dwarfs show a considerable amount of variation, and on the
basis of this variability these stars are customarily assigned to a number of different
classes. Greenstein distinguishes nine classes, and the designations that he has applied in
his tabulation141 are in general use. However, the basic distinction appears to be between
the hydrogen-rich stars, designated as Class DA, a few hybrid classes, particularly DAF,
and the balance, which are helium-rich. Much of the discussion in the literature is carried
on in terms of DA and non-DA. H. M. Van Horn, for instance, comments that "The
existence of white dwarfs with non-DA (hydrogen deficient) spectra has not yet been
satisfactorily explained."142
Because of this lack of an acceptable explanation, the astronomers have not reached any
consensus on the question as to whether the observed differences that have led to the
distinction between the various classes reflect actual differences in composition, or are
products of processes that take place during the evolution of the stars. The theoretical
development in this work leads to the conclusion that these differences are primarily
evolutionary. Before discussing these theoretical reasons why changes take place in the
atmospheres of the white dwarfs as they age, we will first examine the evidence which
demonstrates that these stars do, in fact, undergo significant changes as they progress
along their evolutionary paths.
In this case, as is usual in astronomy, the observations give us only what amounts to an
instantaneous picture, and do not specifically indicate whether the regularities that are
observed are time related. This is the reason for the existing uncertainty. But the new
information developed in the foregoing pages has now provided a basis from which we
can approach the question. As shown in Fig.21, the ordinary white dwarfs of different
masses follow parallel cooling lines on the CM diagram, with the smaller stars at the top
of the luminosity range and the larger ones at the bottom. From this demonstrated fact
that the lines parallel to the main sequence in the white dwarf region of the diagram are
lines of equal mass, as the theory requires them to be, it follows that on a plot of mass
against the B-V color index, Fig.23, where the lines of equal mass are horizontal, the
distance from the left side of the diagram along any one of these lines represents time;
that is, it measures the amount of evolutionary development. The general trend obviously
is from the hydrogen-rich stars, Class DA, to the classes marked x on the diagram, the
DC group, we might call them, all of which are classified by Shipman as helium-rich.
At temperatures above a dividing line in the vicinity of 8000 K the great majority are DA
stars, with only about ten percent in the DC group. Below this temperature all of the stars
fall into the DC group or a transitional class. A specific segment of the general transition
from the DA status to that of the DC group can be recognized in the larger stars.
Greenstein defines a class DAF, in which the hydrogen lines characteristic of the DA
spectrum are weaker, and Ca II lines are present. This is followed by Class DF, in which
Ca II appears but hydrogen does not. Evolution through the entire sequence DA, DAF, DF
is taking place in stars with masses above 0.50 solar units.
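The ordering argument used in this discussion can be stated compactly: within a narrow mass
band, one horizontal line on the mass versus color index plot, the color index serves as a
proxy for age, so sorting the stars of that band by color index should reproduce the DA,
DAF, DF sequence. The sketch below uses invented entries purely to show the form of the
argument; they are not catalog data:

# Within one mass band, sort by B-V color index to order the stars by evolutionary
# age.  Tuples are (name, mass in solar units, B-V, spectral class); invented data.
stars = [
    ("WD-a", 0.55, 0.10, "DA"),
    ("WD-b", 0.52, 0.45, "DF"),
    ("WD-c", 0.58, 0.30, "DAF"),
]
band = [s for s in stars if 0.5 <= s[1] <= 0.6]         # one equal-mass line
for name, m, bv, cls in sorted(band, key=lambda s: s[2]):
    print(name, cls, bv)                                # prints DA, DAF, DF in that order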
Next let us turn to the question as to what causes this shift from a hydrogen atmosphere
to a helium atmosphere as the white dwarf ages. The astronomers have no answer to this
question. As explained by James Liebert in a 1980 review article, "The existence of
nearly pure helium atmosphere degenerates over a wide range of temperatures has long
been a puzzle."137 The "cooler helium-rich stars," he reports, are "the most numerous
kind of white dwarf." Furthermore, the concentration of still heavier elements in the
atmospheres of these stars is also too high to be explainable on the basis of current
astronomical theory. Since the interior of the white dwarf is in an unusual physical state
(this is true regardless of whether the matter is "degenerate," as seen by conventional
theory, or expanding into time, as seen by the theory of a universe of motion), the matter
in the atmosphere, which is normal, must have been accreted from the environment.
Liebert points out that
The metals in the accreted material should diffuse downward, while hydrogen should
remain in the convective layer. Thus the predicted metals-to-hydrogen ratios would be at
or below solar (interstellar) values, yet real DF-DG-DK stars have calcium-to-hydrogen
abundance ratios ranging from about solar to well above solar.137
The only possibility that Liebert is able to suggest as a solution to the "puzzle" is that the
hydrogen accretion must be "blocked by some mechanism." This is clearly a "last resort"
kind of hypothesis, lacking in plausibility, and wholly without factual support. On the
other hand, the explanation of the structure of the white dwarf derived from the postulates
that define the universe of motion requires just the kind of a situation that is found by the
observers. As Liebert says, on the basis of conventional theory, "the metals in the
accreted material should diffuse downward." But on the basis of the theory described in
this work, the center of the white dwarf is the region of least density. According to this
theory, then, the hydrogen should "diffuse downward," and the metals should remain in
the outer regions. The helium, too, should remain behind while the lighter hydrogen
sinks. The observed distribution of the three components, hydrogen, helium, and metals,
in the classes of stars identified by Liebert is exactly what the theory of the universe of
motion tells us it should be in the older white dwarfs.
The presence of hydrogen atmospheres in the earlier stars, and the gradual nature of the
transition to helium atmospheres are due to slow transmission of physical effects across
any boundary between motion in space and motion in time. Originally, the white dwarf,
located in the middle of the debris left by the supernova explosion, was able to accrete
matter at a relatively rapid rate. Inasmuch as these accreted explosion products consisted
mainly of hydrogen, the accretion gave the white dwarf a hydrogen atmosphere. But there
was a small proportion of helium and other heavier elements in the accreted matter.
Long-continued preferential movement of hydrogen into the stellar interior therefore
resulted in a gradual increase in the proportion of heavier elements in the atmosphere.
Meanwhile the accretion rate was decreasing as the white dwarf and its giant companion
swept up the residue from the explosion. Eventually the incoming hydrogen passed into
the interior of the star as fast as it arrived. Beyond this point, which we located in the
vicinity of 8,000 K, the atmospheres of the white dwarfs are predominantly helium. In view
of the complete inability of the astronomers to find any tenable explanation of these
helium atmospheres within the limits of accepted physical and astronomical theory, the
agreement with the theory of the universe of motion is impressive.
This is an appropriate point at which to emphasize one of the most significant aspects of a
general physical theory, one that derives all of its conclusions in all physical fields by
deduction from a single set of basic premises, independently of any information from
observation. The development of such a theory not only produces explanations for known
phenomena that have hitherto resisted explanation, but also, because of its purely
theoretical foundations, is able to supply explanations in advance for phenomena that
have not yet been discovered. Items of this anticipatory character have had only a minor
impact on the presentation in the preceding pages of this volume, as the subject matter
thus far covered has been confined almost entirely to phenomena that were already
known prior to the first publication of the theory of the universe of motion in 1959. But
the remainder of this volume will deal mainly with astronomical phenomena that have
been discovered, or at least recognized in their true significance, since 1959. The
explanations that will be given for these phenomena will be taken directly from the 1959
publication, or derived by extension of the findings described therein. One entire chapter
(Number 20) will be devoted to describing the predictions made in 1959 with respect to
the origin and properties of a then unknown group of objects that are now identified with
the quasars, pulsars, and related objects.
The phenomenon that we are now considering, the existence of helium atmospheres in
certain classes of white dwarf stars, is a more limited example of the same kind of
anticipation of the observational discoveries. Here the explanation was provided before
the need for it was recognized. The essential feature of this explanation is the inverse
density gradient. The existence of this inverse gradient is not an ad hoc assumption that
has been formulated to fit the observations, in the manner of so many of the
"explanations" offered by conventional theory. It is something that is definitely required
by the basic postulates of the theory of the universe of motion, and was so recognized,
and set forth in the published works, long before the existence of the helium atmospheres
was reported by the observers, and the need for an explanation of this seeming anomaly
became evident. The 1959 publication stated specifically that "The center of a white
dwarf star is the region of lowest density."
Once the existence of the inverse density gradient was recognized, the presence of helium
atmospheres in the older white dwarf stars could have been deduced, independently of
any observations, if the investigations had been extended into more detail. This was not
feasible as a part of the original project, because of the limited amount of time that could
be allocated to astronomical studies in an investigation covering the fundamentals of all
major branches of physical science. The answer to the problem of the helium
concentration was, however, available for immediate use as soon as the problem was
specifically recognized. In the pages that follow, this experience will be repeated time
and time again. We will encounter a long succession of recent discoveries—some of a
minor character, like the helium atmospheres; others that have a major significance to
astronomy—and we will find simple and logical explanations of these discoveries ready
and waiting in the physical principles that were previously derived from the postulates of
the theory of the universe of motion.
This ready availability of deductively derived answers to current problems is something
that conventional astronomical theory does not have. The astronomers first have to make
the discovery, and then look for an explanation of what they have found. Almost all
important new discoveries come as surprises. Thus it is to be expected, in a rapidly
growing field of knowledge, that there will be many phenomena that are still
unexplained, or not satisfactorily explained, in terms of accepted theories and concepts.
This situation is not looked upon as particularly serious, inasmuch as explanations of a
more or less plausible character can reasonably be expected to be forthcoming for most of
these items as more observations are made and the general level of knowledge in the
relevant areas rises. But the prevalence of these issues of a work-in-progress nature tends
to obscure the fact that among the unexplained phenomena there are some that clearly
cannot be reconciled with the accepted theories, and therefore provide definite proof that
there is something seriously wrong in the currently prevailing structure of theory.
Spontaneous movement of heavy atoms against the density gradient does not occur in the
real world. Technetium cannot rise from the core of a normal star to the surface through
an overlying volume of hydrogen. Helium and the metals cannot remain on the surface of
a normal star, or a highly condensed star, while hydrogen sinks to the center. Inasmuch as
the observations show that technetium is present in the surface layers of some stars, and
the heavier elements do remain in the surface layers of some of the white dwarfs, it is
evident that the current theories are wrong in some essential respects. In the first of these
cases, there is adequate evidence to show that technetium is present in stars of normal
characteristics; that is, matter is not being ejected from the interiors explosively. It then
follows that the technetium is not produced in the core of the star in accordance with the
prevailing ideas. In the white dwarf situation, there is adequate evidence to demonstrate
that the concentration of heavy elements in the outer layers of certain classes of these
stars is greater than that in the matter that is being accreted from the environment. Here,
then, the hydrogen is preferentially sinking into the stellar interior. In this case it
necessarily follows that the white dwarf is not a normal star, or a star composed of
"degenerate" matter, but a star with an inverse density gradient.

CHAPTER 13

The Cataclysmic Variables


The white dwarf situation is a good example of the way in which an erroneous basic
concept can cause almost endless confusion in an area where the information from
observation is erroneously interpreted. This is one of the two most misunderstood areas in
astronomy (aside from cosmology, which belongs in a somewhat different category), and
it is significant that the other badly confused area, the realm of the quasars and associated
phenomena, is another victim of the same basic error: a misunderstanding of the cause of
the extremely high density of such objects as white dwarfs and quasars.
The wrong conclusion as to the nature of the very dense state of matter leads to an
equally wrong conclusion as to the ultimate destiny of the stars that attain this state: the
conclusion that they must, in the end, sink into oblivion as black dwarfs, cold, lifeless
remnants that play no further part in the activity of the universe. This is the basis for the
assumption, already discussed, that the white dwarfs must have evolved from the red
giants. Extension of this line of thought then leads to the conclusion that, except for
"freaks," the stars of the high density classes should line up in some kind of an
evolutionary sequence. As previously noted, the position of the planetary nebulae in the
CM diagram has been interpreted by the astronomers as indicating that they are the first
products of the unidentified hypothetical process that carries the red giants into the white
dwarf region. It then follows that the central stars of the planetary nebulae must evolve
into the ordinary white dwarfs.
Shklovsky regards this as incontestable. "There can be no question," he says, "but that
the stable object into which the nucleus of a planetary nebula evolves should be a white
dwarf."143 But even this essential step in the hypothetical evolutionary course runs into
difficulties. Aller and Liller give us this assessment of the situation:
Our evidence indicates that they [the central stars of the planetary nebulae] evolve into
white dwarfs, but we do not yet know whether they represent an intermediate stage for
most stars or not. Neither do we know from what specific kinds of stars they may
evolve.144
This problem persists all the way down the line. The theorists not only have difficulty in
explaining how the planetaries evolve from the red giants, and how the ordinary white
dwarfs evolve from the planetaries; they are also confronted with the problem of how to
account for the existence of a variety of high density objects for which their evolutionary
sequence has no place. The novae, for instance, must fit into the picture somehow. But
there does not seem to be any place for them in the astronomers' version of the
evolutionary path. "Nova outbursts are too rare to be a typical stage in stellar
evolution,"145 says Robert P. Kraft. Because of the lack of any explanation consistent with the
accepted theories of stellar evolution, there is a rather general tendency to dismiss the
novae and related objects, the cataclysmic variables, as aberrations. For example, one
astronomy textbook offers this comment:
Very little is known about the reason for a nova's outburst. It appears that something has
gone wrong with the process of nuclear energy generation in the star. 146
Development of the theory of the universe of motion now shows that the planetaries and
the ordinary white dwarfs follow parallel, rather than sequential, evolutionary paths. All
of these dwarf stars enter the observable region along a critical temperature line at the left
of the CM diagram, and move downward and to the right along parallel lines as they cool
(evolutionary stage 3). On reaching the temperature at which a transition to motion in
space takes precedence over further cooling of the atoms moving in time, a temperature
that is determined by the stellar mass, each star converts to motion in space. This change
takes it upward on the CM diagram (evolutionary stage 4). The general nature of the
conversion process is the same for all of these stars, but the specific character of the
observable results depends on the magnitudes of the factors involved. Our next objective
will be to examine the details of this process.
As successive portions of the intermediate speed matter of which the two classes of white
dwarf stars are composed cross the unit speed boundary in their continuing loss of
thermal energy, they form local concentrations of gas—bubbles, we may say—with
particle speeds in the range below unity. Because of the inverse density gradient in the
interior of the white dwarf star, these gas bubbles move downward to the center, the
location of lowest density, and accumulate there. Some interchange takes place between
the gas and the surrounding intermediate speed matter, tending to convert part of the gas
back to intermediate speeds, but this interchange is slower than the oppositely directed
movement across the unit boundary that produces the gas in the outer regions. A gas
pressure therefore builds up at the center of the star. When this pressure is high enough,
the compressed gas breaks through the overlying material, and the very hot matter from
the interior is exposed briefly at the surface of the star, increasing its luminosity by a
factor that may be as high as 50,000. The star also becomes an x-ray emitter. The
significance of this emission will be discussed in Chapter 19.
Within a relatively short time (astronomically speaking) the small amount of matter
brought to the surface by the outburst cools, and the star gradually returns to its original
status. A white dwarf is inconspicuous and, since the first observed events of this kind
could not be correlated with previously identified objects, they were thought to involve
the formation of entirely new stars. As a result, the inappropriate term nova was applied
to this phenomenon.
From the foregoing description it is apparent that the nova process is periodic. As soon as
one gas accumulation is ejected, the compressive and thermal forces in the interior of the
star begin working toward development of a successor. Inasmuch as the gravitational
forces operating within the star are gradually expanding it toward the condition of
equilibrium for motion in space represented by the spatial main sequence (that is, they are
drawing the constituent atoms closer together in time), the resistance to the gas pressure
that builds up in the center of the star decreases as the star moves through this stage of its
existence. The decreasing resistance shortens the time interval between explosions. The
first event of this kind may not occur for a very long time after the beginning of the
observable life of the star, but as the star approaches closer to the point of full conversion
to motion in space the time interval decreases, and a number of novae have repeated
within the last 100 years.
Novae are relatively infrequent phenomena, and observationally difficult because of the
relatively short duration of the active period, and the rapid changes that take place during
this time. Meaningful information about them is consequently limited. The theoretical
conclusions with respect to this stage of the evolution of the stars on the dwarf side of the
main sequence can therefore be compared with observation only to a very limited extent.
We will have to be content, in most cases, with a showing that the theoretical findings are
not inconsistent with what has been observed.
Two of the brightest novae, T Coronae Borealis and RS Ophiuchi, are in the class known
as recurrent novae, having repeated three or four times during the period in which they
have been subject to observation. This is another name that is not very appropriate, as
some novae of the more common "classical" type have also been observed to repeat their
outbursts, and theoretical considerations indicate that all will eventually repeat many
times. T Coronae is estimated to have a mass of 2.1 solar units,147 which puts it, and
presumably RS Ophiuchi, in the class of the larger white dwarfs, those that were formerly
the central stars of planetary nebulae. This large mass is consistent with the high
luminosity of the two novae that have been mentioned.
The nature of the nova process is the same regardless of the size of the star that is
involved. In all cases there is a pressure build-up that eventually breaks through the
overlying layers of the star. But there are differences in the rate of pressure increase, and
in the weight of matter through which the confined gas must force its way in order to
escape, and the variability in these factors results in major differences in the character of
the outbursts from different classes and sizes of stars. In the white dwarfs of the larger
(planetary) class, the luminosity and temperature changes required to move a star from
the point on the evolutionary line where it begins its final transition to motion in space to
the appropriate main sequence position on the line segment BC are relatively small,
averaging about three magnitudes, and they are accomplished quite rapidly. This
accounts for the short interval between the outbursts of these stars.
On the other side of the dividing line the situation is quite different. The first stars of the
smaller class, the ordinary white dwarfs, not only enter the observable region at a much
lower luminosity, but undergo a greater decrease in luminosity and temperature as they
cool, so that when they arrive at the point where they are ready to begin the transition
from motion in time to motion in space they have a long way to go, as Fig. 21 clearly
indicates. The time between outbursts is correspondingly long. On the other hand, the
magnitude of the outburst is not related to the amount of energy decrease involved in the
transition, but to the size of the star, which determines the resistance to escape of the
confined gas. Even the largest of the novae produced by ordinary white dwarfs are
therefore less violent than those of the T Coronae class, although their range of
magnitudes is greater. Initially they repeat only at very long intervals, too long for more
than one event to have occurred during the time that observations of these phenomena
have been carried on.
The observers classify novae as slow, fast, or very fast, depending on the rate at which
the luminosity develops and returns to normal. Aside from details of the spectra, which
are not being covered in this work, available quantitative information about these objects
includes the maximum and minimum luminosity, together with the difference between
the two: the total luminosity range. The distances to the novae are not known, and the
absolute magnitudes are therefore unavailable. The most significant luminosity
measurement is the total range, which is independent of the distance, except to the extent
that there has been absorption of light in passing through the intervening matter. Table III
compares the ranges of the group of novae tabulated by McLaughlin (reference 148) with
the assigned classifications and with the number of days required for the luminosity to
decline seven magnitudes, a rough check on the validity of the classification.
Some general conclusions can be drawn from this information. Theoretically the earliest
outbursts of the largest novae should be the fastest, and should have the maximum
magnitude range, since these largest stars are at the bottom of the white dwarf
evolutionary band. Both the rate of luminosity change and the magnitude range should
decrease as the white dwarf star ages. The mass does not change significantly. The
slowest novae with the smallest magnitude range should therefore be those in which the
stars are at the low end of the nova size range, and also near the end of their nova stage.
In between these two extremes, the magnitude range is determined by the size and age of
the nova. Average range may indicate either an old large nova, or a young small one, or
one that is near average in both respects.
Table III
NOVAE
Nova           Range (magnitudes)   Class   Decline (days)
CP Pup 16.6 VF 140
V450 Cyg >14.0 S —
DQ Her 13.6 S 8880
EL Aql 13.5 F —
GK Per 13.3 VF 300
CP Lac 13.2 VF 154
V476 Cyg 12.5 VF 170
V603 Aql 11.9 VF 260
Q Cyg 11.8 VF 250
RR Pic 11.5 S 1000
CT Ser >11.0 F? —
V630 Sgr 11.0 VF 123
T Aur 11.0 S 1800
V528 Aql 10.7 F —
DK Lac 10.5 F 500
V465 Cyg 10.1 S? —
V360 Aql >10.0 VF —
V606 Aql 9.9 F 320
DL Lac 9.8 F 300
V604 Aql >9.2 F 230
XX Tau >9.0 F <500
V356 Aql 9.0 S 1100
HR Lyr 8.7 S 600
Eu Ser >8.6 F 70
T Cr B* 8.6 VF 300
DM Gem 8.5 VF 150
V841 Oph 8.3 S 5000
DO Aql >7.9 S —
DN Gem 7.9 VF 550
V849 Oph >7.6 S —
T Pyx** 7.6 S —
V1017 Sgr** 7.5 S 400
WZ Sge** 7.4 F 300
RS Oph* 6.7 VF —
* "recurrent"  ** "repeated outburst"

The information available from observation gives only a very general indication of how
well the novae conform to this theoretical pattern, but the little that is available is clearly
in agreement with the theory. Most of the novae with large magnitude ranges are in the
very fast category, and there is a general trend toward the next lower rating, the "fast"
class, as the range decreases. Only one nova with a range greater than 11 magnitudes is
definitely classified as fast. Below this magnitude level, the fast group outnumbers the
very fast by about three to one. This is consistent with the theoretical conclusion that the
earliest outbursts of the largest novae should have the maximum magnitude range, that
this range should be less for the smaller novae, and that in all cases the range should
decrease as time goes on and the outbursts are repeated.
While the slow novae are not concentrated at the lower end of the list as strongly as the
very fast are concentrated at the upper end, there is a definite increase in the proportion of
slow novae as the magnitude range decreases. If we omit the two stars of the larger class
(identified as recurrent), the proportion of slow novae in the group with magnitude range
9.0 or below is 64 percent. In the group with ranges above this level it is only 24 percent.
In between the extremes there are some relatively slow novae that are quite high on the
list, and some of the very fast class that are quite low. Theoretically, the latter should be
quite small stars and the former quite large. This cannot be confirmed observationally, as
matters now stand.
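The 64 and 24 percent figures quoted above can be reproduced directly from Table III. The short Python tally below is included only as an arithmetic check, not as part of the original analysis; it counts the queried classifications (S?, F?) at their nominal class, omits the two recurrent novae, and, following the grouping implied by the percentages in the text, places XX Tau (listed as >9.0) in the group above 9.0 magnitudes.

# Tally of Table III (recurrent novae T CrB and RS Oph omitted).
# Entries are (magnitude range, class); ">" lower limits are taken at their tabulated
# values, except that XX Tau (>9.0) is assigned 9.01 so that it falls in the upper group.
novae = {
    "CP Pup": (16.6, "VF"), "V450 Cyg": (14.0, "S"),  "DQ Her": (13.6, "S"),
    "EL Aql": (13.5, "F"),  "GK Per": (13.3, "VF"),   "CP Lac": (13.2, "VF"),
    "V476 Cyg": (12.5, "VF"), "V603 Aql": (11.9, "VF"), "Q Cyg": (11.8, "VF"),
    "RR Pic": (11.5, "S"),  "CT Ser": (11.0, "F"),    "V630 Sgr": (11.0, "VF"),
    "T Aur": (11.0, "S"),   "V528 Aql": (10.7, "F"),  "DK Lac": (10.5, "F"),
    "V465 Cyg": (10.1, "S"), "V360 Aql": (10.0, "VF"), "V606 Aql": (9.9, "F"),
    "DL Lac": (9.8, "F"),   "V604 Aql": (9.2, "F"),   "XX Tau": (9.01, "F"),
    "V356 Aql": (9.0, "S"), "HR Lyr": (8.7, "S"),     "Eu Ser": (8.6, "F"),
    "DM Gem": (8.5, "VF"),  "V841 Oph": (8.3, "S"),   "DO Aql": (7.9, "S"),
    "DN Gem": (7.9, "VF"),  "V849 Oph": (7.6, "S"),   "T Pyx": (7.6, "S"),
    "V1017 Sgr": (7.5, "S"), "WZ Sge": (7.4, "F"),
}
for label, members in (("above 9.0", [c for r, c in novae.values() if r > 9.0]),
                       ("9.0 or below", [c for r, c in novae.values() if r <= 9.0])):
    slow = sum(1 for c in members if c == "S")
    print(f"range {label}: {slow}/{len(members)} slow = {100 * slow / len(members):.0f}%")
# Prints about 24 percent slow for the upper group and 64 percent for the lower group,
# in agreement with the figures stated in the text.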
The novae that are observed to have repeated are at the low end of the list; that is, they
have the lowest magnitude ranges. Of course, these are not the only novae in the list that
have repeated; they are merely the white dwarfs that are approaching the end of the nova
stage of their existence, and are repeating their outbursts at short enough intervals to have
had at least two within the time interval during which observations have been made.
Their position at the low end of the list is another agreement with the theory.
At this point we need to take into account the fact that the possible speeds in the
intermediate speed range do not constitute a continuous succession of values, but are
confined to eight distinct levels, a characteristic of this speed range that we have already
had occasion to recognize in applications such as the explanation of the relation known as
Bode's Law. As noted earlier, four of the eight speed levels are on the spatial side of the
dividing line, and correspond to identifiable locations in equivalent space. In the
planetary stars, which are in the gaseous state throughout, the particles moving at the
different speed levels are well mixed, and there is a continuous density gradient from the
outer to the inner regions. Here the build-up of pressure and eventual escape of the
compressed gas follows essentially the same pattern irrespective of the size of the star.
The situation in the ordinary white dwarf stars is quite different because the outer shell of
these stars is in the condensed gas state. In this state, as in a liquid, matter of different
densities stratifies. The outer shell is therefore not homogeneous, but consists of a
number of layers, initially four. Since the white dwarf aggregate is a time structure, rather
than a space structure, the gas bubbles in the center of the star (space structures) remain
in the aggregate, rather than separating from it. Thus they accumulate in the lowest of the
four layers, and are confined by the weight of the three overlying denser layers.
Smaller stars of the same surface temperature have lower internal temperatures, and at
some point in the mass range the fourth speed level is vacant. The compressed gas in the
central regions of these smaller stars is located in the third level, and is subject to the
weight of only two overlying denser levels. This very substantial reduction in the weight
that is confining the gas results in a corresponding reduction in the pressure that has to be
built up before the compressed gas can break through. It can be expected, therefore, that
at some definite level of white dwarf mass the nova type of outburst will be replaced by a
different type of eruptive behavior in which the outbursts are more frequent but less
violent.
This theoretical expectation is fulfilled observationally. While the white dwarf stars that
reach the main sequence at the higher luminosities are observed as novae during the
conversion from motion in time to motion in space, those that are smaller, and less
luminous in their final state, follow what may be described as a nova pattern in miniature,
with less violent outbursts and much shorter periods, ranging from about a year
downward. These small scale novae, of which SS Cygni and U Geminorum are the type
stars, are classified together with the true novae, the recurrent novae, and certain nova-
like variables, as cataclysmic variables.
The magnitude at which the change in behavior occurs is a critical level that, on the basis
of the considerations previously discussed, should be related to the other critical levels of
the CM diagram by integral numbers of natural (compound) units. We have identified the
difference between points B and C on the spatial main sequence, 2.8 magnitudes, as one
such natural unit. The explanation just given for the transition from nova to a less violent
type of outburst suggests that it should take place one natural unit below the upper limit
of the normal, or "classical" novae, which coincides with the lower limit of the planetary
stars at point B on the diagram, magnitude 4.6. This puts the boundary between the two
types of cataclysmic variables at 7.4 magnitudes on the spatial main sequence. The
corresponding mass is 0.65 solar units. Thus the ordinary white dwarf in the range above
0.65 solar masses follows the nova pattern in its conversion to motion in space, while
those in the range immediately below 0.65 solar masses are SS Cygni variables in the
conversion stage.
We have identified the novae as white dwarfs that have component particles with speeds
in all four of the levels on the (equivalent) spatial side of the dividing line, and the SS
Cygni variables as white dwarfs in which the component speeds are limited to three of
these four levels. On the basis of the same considerations that are applicable to the novae,
the magnitude range of the SS Cygni stars, after conversion to motion in space, should be
between 7.4 and 10.2, and the mass range should be from 0.65 to 0.40 solar units.
Inasmuch as there are white dwarfs with still smaller masses, it follows that there should
exist a class of these stars in which component speeds occupy only two levels. Since this
leaves only one level overlying the one in which the gas is accumulating, it can be
expected that the gas bubbles in these stars will break out at a relatively early stage before
they reach any substantial size. The observations indicate that this third theoretical class
of cataclysmic variable can be identified with the flare stars. These are theoretically stars
of from 0.40 to 0.25 solar masses, with main sequence magnitudes, after conversion to
motion in space, in the range from 10.2 to 13.0.
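The successive boundaries just described follow from repeated application of the 2.8-magnitude natural unit to the point B level of 4.6 magnitudes. The short Python fragment below simply tabulates this arithmetic; the solar-mass figure attached to each boundary is the value quoted in the text, not a quantity computed here.

# Class boundaries on the spatial main sequence, built by adding the 2.8-magnitude
# natural unit to the point B level of 4.6 magnitudes.
NATURAL_UNIT = 2.8            # magnitudes (difference between points B and C)
POINT_B = 4.6                 # lower limit of the planetary stars
masses = [0.65, 0.40, 0.25]   # stated lower-limit masses, in solar units
classes = ["novae", "SS Cygni variables", "flare stars (UV Ceti)"]
lower = POINT_B
for name, mass in zip(classes, masses):
    upper = lower + NATURAL_UNIT
    print(f"{name}: magnitudes {lower:.1f} to {upper:.1f}, down to about {mass} solar masses")
    lower = upper
# novae: 4.6 to 7.4; SS Cygni variables: 7.4 to 10.2; flare stars: 10.2 to 13.0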
The 0.25 lower limit of the mass of the flare stars leaves room for some white dwarfs
with only one speed level, as the minimum mass of the ordinary white dwarfs is
somewhat lower, probably around 0.20. There is no significant amount of resistance to
escape of gas from these stars, other than the viscosity of the condensed gas through
which it has to make its way, but since the gas comes out in the form of bubbles, it is
probable that there are visible flares from these stars similar to those from the two-level
class, known to the astronomers as UV Ceti stars. The flare stars are not usually included
in the classification of cataclysmic variables but they share the distinguishing
characteristic of those variables, periodic outbursts of very energetic matter, and differ
mainly in the magnitude of the outbursts. As indicated in the preceding paragraphs, there
is a specific place for them in the pattern of the cataclysmic variables that has been
derived from the theory of the universe of motion, the pattern that applies also to the
novae and the SS Cygni stars.
Turning now to a consideration of the information that is available from observation, we
find that the average magnitudes reported by the observers are within the theoretical
limits, but these limits are so wide that the agreement with observation is not very
significant. The mean absolute magnitude of the SS Cygni stars is reported as 7.5±0.7
(reference 149) and that of the UV Ceti flare stars as 13.1 (reference 150). The
cataclysmic variable stage begins at about magnitude 16, the low limit of the ordinary
white dwarfs, and extends to the level of the galactic main sequence, 0.8 magnitudes
above the limiting magnitudes on the undisplaced basis as noted above. Since the
conversion process accelerates, the average position of these variable stars, as observed,
should be well below the midpoint of the magnitude range. The 13.1 magnitude reported
for the UV Ceti stars is consistent with this prediction. The 7.5 magnitude of the SS
Cygni stars is too high, near the upper end of the theoretical range, but it is likely that this
value includes a large contribution from the after effects of the interior heat released
during the outbursts.
Other data on the smaller classes of cataclysmic variables are scarce. Unlike the novae,
which are spectacular, but rare because of the long intervals between outbursts, the SS
Cygni stars are dim and hard to detect. It is reported that about 100 of them have been
located, but only a few of them have been studied in detail. These are found to have a
"period-amplitude relation whereby the stars of longer period show the more violent
outbursts,"151 thus continuing the pattern of the true novae, noted earlier. The maximum
observed magnitude range is near 6 magnitudes, about one magnitude below the
minimum of the true novae indicated in Table III.
Very little is known about the properties of the flare stars, aside from those that they
share with the other cataclysmic variables. A. H. Joy describes them as "extremely faint
M-type dwarfs" in which the "light curve rises to maximum in a few seconds or minutes
of time and declines to normal in less than a half hour."152 These light curves "are similar
in form to the light-curves of novae,"153 an observation that supports the theoretical
identification of the flare stars as junior members of the group headed by the novae.
The heterogeneous group of stars known as the nova-like variables do not constitute a
separate class, but are members of the classes already identified, with some special
characteristics that distinguish them from the type stars of their respective classes. For
instance, R Aquarii and similar stars differ from SS Cygni mainly in that in SS Cygni
both components of the binary system are dwarfs, whereas R Aquarii combines a red
giant and a hot blue dwarf.154 Z Andromedae is the prototype of a group of stars that
undergo outbursts of about three magnitudes, and "combine the features of a low
temperature red giant and a hot bluish B star which is probably a subdwarf."155 The
terms applied to the dwarf components in these quoted descriptions of the binary systems
are appropriate for the white dwarf members of cataclysmic variable pairs. A "blue
dwarf" is simply a hot white dwarf, while a "subdwarf" is a dwarf star below the spatial
main sequence in the area in which the cataclysmic variables are theoretically located. As
noted previously, a combination of a red giant and white dwarf is not unusual; it is an
early evolutionary stage that in time evolves into the more familiar combination of main
sequence star and white dwarf.
It is now generally accepted that all cataclysmic variables are binary systems, as required
by the theory developed herein. The following is an expression of the current view:
A dwarf nova, like all cataclysmic variables (novae, recurrent novae, dwarf novae, and
novalike variables) is a close binary system in which the primary component is a white
dwarf. The secondary is a normal star. 156
The current tendency is to ascribe the explosive behavior to this binary nature of the
system. "The sudden outbursts" of the SS Cygni stars, says Burnham, "are undoubtedly
connected in some way with the duplicity of the system, but the exact details are
uncertain."157 Notwithstanding the use of the word "undoubtedly" in this statement, our
findings are that the binary nature of the cataclysmic variables, which we confirm, has no
connection with their explosive behavior. This is why the astronomers have not been able
to explain how their hypothetical process operates. These systems are binary because they
originate in supernova explosions powerful enough to accelerate some of their products
to intermediate speeds, and the white dwarf member of the binary system is explosive
because the intermediate speed component of the supernova products goes through an
explosive stage on its way back to the normal speeds of the material sector. The
theoretical conclusions agree with the observed fact—the binary nature of these objects—
but they disagree with the prevailing assumption as to the nature of the process
responsible for the explosive outbursts.
The situation with respect to the location of the cataclysmic variables on the CM diagram
is similar. We deduce from the theory that these objects are on the way from the white
dwarf status to positions on the spatial main sequence, and therefore occupy intermediate
positions. The observers agree as to the positions.
From their luminosities, which are similar to the sun on the average, we are forced to
conclude that they [the novae] are small superdense stars somewhat like white dwarfs,
but not so extreme.78 (D. B. McLaughlin)
Virtually all known post-nova stars are objects of the same peculiar type, hot bluish
subdwarfs of small radius and high density, apparently intermediate between the main
sequence stars and the true white dwarfs.158 (R. Burnham)
The prenova is below the main sequence intermediate between white dwarf and main
sequence stars.159 (E. Schatzman)
Where our findings differ from astronomical theory is in the direction of the evolution of
the cataclysmic variables. The prevailing astronomical opinion is that the evolutionary
direction is down the CM diagram from some location above the main sequence,
generally identified as the red giant region, toward the white dwarf stage. The white
dwarf is seen as the last form in which the less massive stars are observable, the
penultimate stage on the way to extinction as black dwarfs. This leaves the planetaries
and the cataclysmic variables dangling without any clearly identified role. Shklovsky
calls them "freaks."
Our analysis now shows that here again the astronomers' evolutionary sequence is upside
down. The white dwarfs of both classes (planetaries and ordinary white dwarfs) enter the
field of observation at the left of the CM diagram from an unobservable condition
analogous to that of the earliest of the protostars that eventually enter the diagram in the
red giant region. Just as these giants move to the left and down the diagram to the
equilibrium positions on the spatial main sequence, the white dwarfs move to the right
and up the diagram to reach similar equilibrium positions on that sequence. The upward
movement takes place in the cataclysmic variable stage.
As the foregoing survey of the results of observation of the cataclysmic variables
indicates, existing empirical knowledge is much too limited to provide a clear picture of
these objects. But each of the isolated bits of information currently available fits into the
general pattern derived from the theory of the universe of motion. While the theoretical
pattern of behavior conflicts to some extent with current astronomical thought, it is really
not accurate to say that the results of this present investigation contradict the astronomers'
theory of the cataclysmic variables, because, aside from the rather vague idea of a giant
star "shedding mass" and moving toward the hypothetical black dwarf status along an
unspecified route, the astronomers have no theory of these objects. "Severe problems
remain,"142 in arriving at an understanding, says H. M. Van Horn. A. H. Joy describes
the situation with respect to the stars of the SS Cygni class in this manner:
The general problem of the SS Cygni stars is so complicated that little progress has been
made toward its solution . . . No satisfactory explanation of the novalike outbursts which
occur at semi-regular intervals in the variable stars of this class has been proposed, and
their relationship to other groups has yet to be determined. 160
Gallagher and Starrfield give us a similar assessment of the current state of knowledge
with respect to the novae.
It is clear that there are few problems relating to the novae that we may consider as
solved, and many phenomena for which we have yet even to identify the nature of the
underlying physical processes.161
Dean B. McLaughlin views the nova problem even more pessimistically. He sees little
prospect of improvement.
The cause of nova outbursts is not likely to be identified directly by observation. At best
we can only hope to arrive at an idea of the cause by devising hypotheses, calculating
their consequences, and comparing the expected results with the observed facts.162
The development in this work, based on deductions from the postulates that define the
universe of motion, has now provided the kind of a complete and consistent theory of the
cataclysmic variables that has heretofore been lacking. In the course of this development
we have identified the three basic errors that have diverted astronomical thinking about
the white dwarfs into the wrong channels: (1) the assumption that conversion of hydrogen
into heavier elements is the energy production process in the stars, (2) the assumption
that speeds in excess of that of light are impossible, and (3) the assumption that the white
dwarf is a dying star. Correction of these errors and application of the physical principles
governing motion at speeds greater than light, derived in the preceding volumes of this
work, have arrived at a logical and consistent theory of the entire class of cataclysmic
variables.
These results show that Shklovsky's characterization of the cataclysmic variables as
‖freaks― is totally wrong. These stars (and the planetaries as well) are in the direct line of
one of the two coordinate branches of the stellar evolutionary cycle. They are all white
dwarfs, differing only in the properties that are affected by the particular evolutionary
stage at which each type of object makes its appearance, and they all go through the same
general processes of cooling to a critical temperature level and then converting from
motion in time to motion in space. Meanwhile the companions of these white dwarfs are
going through the successive stages of the giant evolutionary cycle. The two stars of each
of these binary systems are at comparable evolutionary stages, regardless of the
difference in their properties, and they ultimately arrive at the same kind of a
gravitationally and thermally stable state. When their relatively short excursion away
from the main sequence is ended, both of the partners will settle down for another long
stay in that equilibrium condition.

CHAPTER 14
Limits
One of the most significant features of the physical universe, as it emerges from the
development of the consequences of the postulates that define the universe of motion, is the
existence of limits. Everywhere we turn we find ourselves confronted with some kind of
limit—a gravitational limit, a mass limit, an age limit, and so on and on. These limits exist
because the postulates define a universe that is finite, with effective magnitudes that extend,
not from zero, but from unit motion; that is, unit speed or unit energy. Since the deviations
from this datum are finite, neither infinity nor zero is ever reached (except in a mathematical
sense where the difference between two existing quantities of equal magnitude enters into
some physical situation).
Many of the errors in present-day scientific theory owe their existence to a lack of
recognition of the reality of these limits. Some particularly far-reaching conclusions of an
erroneous nature that are pertinent at this stage of our investigation have been drawn from
the Second Law of Thermodynamics. This law has been stated in a number of different
ways. One of the simplest makes use of the physical quantity known as entropy, which is
essentially a measure of the unavailability of energy for doing work. The statement of the
Second Law on this basis is that the entropy of the universe continually increases. In the
absence of recognition of any limits applicable to this process, the conclusion has been
reached that the universe is on the way to becoming a featureless uniformity in which no
significant action will take place. As expressed by Marshall Walker, "Apparently the
universe is 'running down,' and in the remote future it will consist of a disordered cold soup
of matter dispersed throughout space at a uniform temperature of a few degrees above
absolute zero."163 Many writers dispense with qualifying words such as "apparently," and
express this point of view in uncompromising terms. Paul Davies, for example, puts it in this
manner:
Unless, therefore, our whole understanding of matter and energy is totally misconceived, the
inevitability of the end of the world is written in the laws of nature.164
James Jeans, writing half a century earlier, was already firmly convinced, and makes this
equally positive assertion:
The energy is still there, but it has lost all capacity for change . . . We are left with a dead,
although possibly a warm, universe—a "heat death." Such is the teaching of modern
thermodynamics. There is no reason for doubting or challenging it, and indeed it is so fully
confirmed by the whole of our terrestrial experience, that it would be difficult to find any
point at which it could be open to attack.165
But in that earlier day, when the ‖heat death― idea was new and still controversial, Jeans
found it necessary to explain the reasoning by which this conclusion was reached
(something his modern successors usually do not bother with), and in the course of this
explanation he tells us this:
Thus the main physical process of the universe consists in the energy of exceedingly high
availability which is bottled up in atomic and nuclear structures being transformed into heat-
energy at the lowest level of availability. 166
As we have seen in the preceding pages, this is not the "main physical process of the
universe." It is merely one of the subsidiary processes, an eddy in the main stream. The
primary process of the material sector of the universe does not begin with energy of high
intensity "bottled up in atomic structures"; it ends in that form, as the result of a long period
of aggregation under the influence of gravitation. This high intensity state is one of the limits
of the primary physical process. The highly dispersed state from which the aggregation
started is the other. These limits could be compared to the high and low points in the travel
of a pendulum. At its highest point the pendulum is motionless, but it is subject to
gravitation. The gravitational force pulls it down to its low limit, but in so doing imparts a
motion to it, and this motion then takes the pendulum back to the level from which it started.
Similarly, the new matter in the material sector is widely dispersed and motionless.
Gravitational forces pull the dispersed units inward to a limiting concentration, but in so
doing impart a system of motions to the matter. These motions initiate a chain of events that
ultimately brings the matter back into the same dispersed and motionless condition from
which it started.
This definition of the fundamental action of the universe as a cyclic process carries with it
the existence of limits to the various subsidiary quantities. This present chapter will be
addressed to an examination of some of the most significant of these limits.
The Reciprocal System of theory deals exclusively with units of motion, and thus it is
quantitative from the start. As noted in the earlier volumes, the quantitative development
goes hand in hand with the qualitative development as the theory is extended into additional
areas and into more detail. Thus far in the present volume, however, the treatment of the
subject matter has been almost entirely qualitative. There are two reasons for this. The first
is that the objects with which astronomy deals are aggregates of the same kind of matter that
was discussed in the previous volumes, differing only in that the range of sizes of the
aggregates, and the range of conditions to which these aggregates are subject, are much
greater than in the situations previously considered. By and large, therefore, the quantitative
relations applicable to the astronomical aggregates are the same relations that were
developed in the previous volumes. Thus one reason why there was not much quantitative
discussion in the preceding pages is that most of the pertinent quantitative relations had
already been covered in the earlier volumes.
The other reason for the limited amount of quantitative development is that, as noted in
Chapter 1, the quantities of interest in astronomy are largely applicable to individual objects,
whereas our present concern is with classes of astronomical objects and the general
evolutionary patterns of those objects, rather than with individuals. Furthermore, even where
quantitative relations are relevant to the subject matter under consideration, and have not
been previously derived, it has appeared advisable, in most cases, to delay discussing them
until after the general aspects of the astronomical universe, as seen in the context of the
theory of the universe of motion, were examined and placed in their proper relations with
each other. To the extent that this examination has not yet been carried out, these items will
be discussed at appropriate points in the pages that follow. This chapter will take up the
question of the quantitative limits applicable to some of the phenomena already examined
from the qualitative standpoint in the earlier pages.
Chief among these, because of its fundamental nature, is the gravitational limit. On the basis
of the general principles developed in Volume I, the gravitational limit of a mass is the
distance at which the inward gravitational motion of another mass toward the mass under
consideration is equal to its outward motion due to the progression of the natural reference
system relative to our stationary system of reference. If the unit gravitational motion acted
directly against the unit outward speed imparted to the mass unit by the outward progression
of the reference system, the gravitational limit for unit mass would be at one natural unit of
space, by reason of the relation between the natural units explained in Volume I. But the
gravitational effect is distributed over all of the many dimensional variations of both the
rotational and translational motion, and its effective magnitude is reduced by this
dimensional distribution. As we saw in the earlier volumes, the rotational distribution
extends over 128 units in each dimension. Since motion in space involves three dimensions
of space and one dimension of time, the total rotational distribution is (128)⁴. Additionally
there is a translational distribution over the eight units that we have previously identified as
the linear maximum. The total of both distributions amounts to 8 x (128)⁴ = 2.1475 x 10⁹.
What this means is that the gravitational motion is distributed over 2.1475 x 10⁹ units, only
one of which is acting against the outward progression of the natural reference system in the
line of the progression. The effective component of the gravitational force (motion) is thus
reduced by this ratio, the rotational ratio, as we will call it.
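The arithmetic of the preceding paragraph can be checked directly. The short Python fragment below is offered only as a worked example of that calculation; it introduces no quantities beyond the 128-unit and 8-unit distributions just described.

# Rotational distribution of 128 units in each of four dimensions, times the 8-unit
# translational distribution, gives the rotational ratio.
rotational_ratio = 8 * 128 ** 4
print(rotational_ratio)        # 2147483648, i.e. about 2.1475 x 10^9
print(1 / rotational_ratio)    # about 4.66 x 10^-10, the effective fraction used in equation 14-1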
Inasmuch as the one-dimensional analog of this rotational ratio, the interregional ratio,
includes an additional component amounting to 2/9 of the 128 rotational units, increasing
the ratio to 156.444, there may be some question as to why the rotational ratio does not
contain a similar extra component. The explanation is that the atomic rotation is a rotation of
a linear vibration. The total atomic motion is therefore distributed among the vibrational
units (2/9 of 128) as well as among the 128 rotational units. But the 8-unit translational
distribution included in the rotational ratio covers all of the possible linear motion, including
the basic vibrational motion that is being rotated. Thus no additional term is required in the
rotational ratio.
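For comparison, the one-dimensional interregional ratio mentioned above follows from the same kind of arithmetic. The one-line check below is again only an illustration of the stated figures.

# 128 rotational units plus the 2/9 vibrational component.
print(128 * (1 + 2 / 9))    # 156.44..., the 156.444 figure quoted above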
Within the gravitational limit the effective gravitational motion (or force) is inversely
proportional to the square of the distance. Without the distribution to the multiple units, the
equilibrium equation under unit conditions would be m/d0² = 1; that is, the gravitational
force exerted by the natural unit of mass at the natural unit of distance would be in
equilibrium with the unit force of the progression of the natural reference system. The
distribution of the gravitational force reduces its effective value by the rotational ratio. The
effective equilibrium is then:
4.65661 x 10⁻¹⁰ m/d0² = 1 (14-1)
Solving for d0, the gravitational limit, we obtain
d0 = 2.15792 x 10⁻⁵ m½ (14-2)
To convert this equation from natural to conventional units we divide the coefficient by the
number of natural units of distance per light year, 2.0752 x 10²³, and by the square root of
the number of grams per natural unit of mass, 1.65979 x 10⁻²⁴. In terms of grams and light
years, equation 14-2 then becomes:
d0 = 8.0714 x 10⁻¹⁷ m½ light years (14-3)
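The conversion just described can be reproduced with a few lines of Python, shown here purely as an arithmetic check; the three input numbers are those quoted in the preceding sentences.

# Converting the coefficient of equation 14-2 from natural units to grams and light years.
natural_coefficient = 2.15792e-5      # coefficient of equation 14-2
units_per_light_year = 2.0752e23      # natural units of distance per light year
grams_per_mass_unit = 1.65979e-24     # grams per natural unit of mass
coefficient = natural_coefficient / (units_per_light_year * grams_per_mass_unit ** 0.5)
print(coefficient)                    # about 8.07 x 10^-17, as in equation 14-3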
The mass of the sun has been calculated as 2 x 10³³ grams. Applying the coefficient of
equation 14-3, we find that the gravitational limit of the sun is 3.61 light-years. This is
consistent with the observed separations. The nearest star system, Alpha Centauri, is 4.3
light years distant, and the average separation of the stars in the vicinity of the sun is
estimated to be somewhat less than 2 parsecs, or 6.5 light years. Sirius, the nearest star
larger than the sun, has its gravitational limit at 5.3 light years, and the sun, 8.7 light years
away, is well outside this limit.
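As a further numerical check, the short Python calculation below applies equation 14-3 to the solar mass quoted above, and inverts the same relation to show what mass is implied by the 5.3 light-year limit given for Sirius; the implied Sirius mass is a derived figure used only for illustration, not an observed datum.

# Equation 14-3: d0 = 8.0714 x 10^-17 m^(1/2) light years, with m in grams.
COEFF = 8.0714e-17

def gravitational_limit_ly(mass_grams):
    return COEFF * mass_grams ** 0.5

print(gravitational_limit_ly(2e33))    # sun: about 3.61 light years
print((5.3 / COEFF) ** 2 / 2e33)       # mass implied by the 5.3 light-year limit of Sirius,
                                       # roughly 2.2 solar masses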
It is evident that such a distribution of a very large number of objects in space, where the
minimum separation is two-thirds of the average, requires some kind of a barrier on the low
side; it cannot be the result of pure chance. The results of this present investigation show
that the reason why the stars of the solar neighborhood do not approach each other closer
than about four light years is that they can not do so. This finding automatically invalidates
all theories that call for star systems making contact or a close approach (as contemplated in
some theories of the formation of planetary systems), and all theories that call for the
passage of one aggregate of stars through another (such as the currently accepted theory of
"elongated rectilinear orbits" of the globular clusters).
Furthermore, these results show that the isolation of the individual star system is permanent.
These systems will remain separated by the same tremendous distances because each star, or
star system, or prestellar cloud continually pulls in the material within its gravitational
range, and this prevents the accumulation of enough matter to form another star in this
volume of space. The immense region within the gravitational limit of each star is reserved
for that star alone.
The interstellar distance calculated from the number of stars per unit volume is less in the
interiors of the globular clusters and the central regions of the galaxies. But since the three-
dimensional region of space extends only to the gravitational limit, the reduction in
volumetric dimensions beyond this limit, due to the gravitational effect of the aggregate as a
whole, is in equivalent space rather than in actual space. It therefore does not alter the spatial
relation of a star to the gravitational limits of its neighbors.
Galactic masses are usually expressed in terms of a unit equal to the solar mass. Since we
have already evaluated the gravitational limit of the sun, we may express equation 14-3, for
application to the galaxies, in the convenient form:
d0 = 3.61 (m/ms)½ light years (14-4)
This relation enables us to verify the conclusions reached in Chapter 2 with respect to the
process of cannibalism by which the giant spheroidal galaxies have reached their present
sizes. As noted in that earlier discussion, the development of theory indicates that galaxies
as large as the Milky Way are pulling in not only a large amount of diffuse material, but also
individual stars, globular clusters, and small galaxies. The Magellanic Clouds were
identified as galaxies in the process of being captured. In order for such a capture to take
place, the smaller unit must be within the gravitational limit of the larger. Let us see, then,
just what distances are involved.
Estimates of the mass of the Galaxy range from 10¹¹ to 5 x 10¹¹ solar masses. If we accept
an intermediate value, 3 x 10¹¹ for present purposes, equation 14-4 indicates a gravitational
limit of about two million light years. On this basis, the Magellanic Clouds are well on their
way to capture by the Galaxy.
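Expressed numerically, equation 14-4 applied to the quoted range of mass estimates gives the following gravitational limits. The Python fragment is a simple arithmetic check; the three mass values are the estimates cited above, and the highest of them corresponds to the 2½ million light-year figure used later in the chapter.

# Equation 14-4: d0 = 3.61 (m/ms)^(1/2) light years, with the mass in solar units.
def galaxy_limit_million_ly(solar_masses):
    return 3.61 * solar_masses ** 0.5 / 1e6

for mass in (1e11, 3e11, 5e11):
    print(f"{mass:.0e} solar masses: {galaxy_limit_million_ly(mass):.2f} million light years")
# 1e11 gives about 1.1, 3e11 about 2.0 (the "about two million light years" above),
# and 5e11 about 2.55 million light years.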
It was also brought out in the previous discussion that the irregular structures of the
Magellanic Clouds are due to distortion of the original spiral or elliptical structures by
differences in the gravitational forces acting on different parts of the Clouds. The diameters
of the Clouds are approximately 20,000 and 30,000 light years respectively. These distances
are obviously large enough in proportion to the distance from the Galaxy to give rise to
significant differences between the forces acting on the near sides of the Clouds and those
acting on the far sides.
Both of these findings with respect to the Magellanic Clouds can be generalized. We can say
that any galaxies within a distance of about two million light years of a galaxy as large as
the Milky Way are in the process of being pulled in toward the galaxy and will eventually be
captured. We can also say that any galaxy on the way to capture will experience structural
distortion in the last stages of its approach.
The calculations that support the theoretical conclusions as to what is now happening to the
globular clusters and small galaxies in the vicinity of larger aggregates can be extended to
confirm the further conclusions concerning what is going to happen in the future as the
evolutionary development of the galaxies proceeds. In the original discussion of the
aggregation process it was pointed out that the possibility of growth by capture depends on
the location of the gravitational limit. Consolidation of two star systems cannot take place,
because each such system is outside the gravitational limits of all others, and these limits can
be extended only at a relatively slow rate, since there is nothing in interstellar space subject
to capture other than diffuse matter and a few small objects such as comets. On the other
hand, the theoretical consideration of the galactic situation in Chapter 2 showed that the
globular clusters and early type galaxies are inside the gravitational limits of their immediate
neighbors because of the nature of the process by which they were formed. Consolidation of
these objects into larger and larger aggregates therefore proceeds until the greater part of the
mass in each large region of space is gathered into one giant spheroidal galaxy.
This theoretical finding implies that most of the galaxies included in what is known as the
Local Group—our Milky Way, the Andromeda Galaxy, M 33, the newly discovered Maffei
galaxies, and a considerable number of smaller aggregates—will eventually combine. For an
evaluation of the likelihood of such an outcome, let us look at the gravitational limits. We
have found that if we take the mean of present estimates of the size of our galaxy, its
gravitational limit is about two million light years. But if we take the highest of the
previously quoted values, this limit becomes 2½ million light years. It is generally believed
that the Andromeda Galaxy, M 31, about two million light years distant, is somewhat larger
than our own. Thus, if the higher estimate of the mass of the Milky Way is correct, we are
already within the gravitational limit of M 31.
If the actual masses are smaller than these estimates indicate, this conclusion is premature.
But M 31 is growing. Intergalactic space contains a multitude of material aggregates—
globular clusters, unconsolidated dust and gas clouds of globular cluster size, dwarf
galaxies, and stray stars—all of which are subject to capture by the giant spirals, along with
quantities of diffuse matter. If the Milky Way is not yet within the gravitational limit of M
31, it certainly will be within that limit when the capture process is a little farther along. In
the meantime, the total mass of the various aggregates and diffuse matter between the major
galaxies helps to hold the Local Group together while those galaxies continue their
cannibalism. The calculation of the gravitational limits of the local galaxies thus confirms
the theoretical conclusion from more general premises that the eventual result of the
evolutionary process will be a giant spheroidal galaxy containing most of the mass of the
Local Group.
It is even possible that a larger, hitherto unrecognized, member of the group might swallow
both M 31 and the Milky Way galaxy. A very large volume of space in our immediate
vicinity is hidden from our view by the dense portions of our own galaxy, and we do not
know what is behind this barrier. The two Maffei galaxies were just recently discovered on
the fringes of this unobservable zone, and there are reports indicating the existence of two
more in a "heavily obscured region in Cygnus."18 One is said to be a "bright elliptical." If
this is correct, the elliptical galaxy could be the dominant member of the Local Group.
The conclusion as to the ultimate consolidation of most of the mass of the Local Group
conflicts with current astronomical opinion, as expressed by Gorenstein and Tucker, who
assert flatly that "the probability of our galaxy's ever colliding with the Andromeda galaxy is
close to zero."167 But that opinion is a relic of the traditional view of the galaxies as objects
that originated in essentially their present forms in the early stages of the existence of the
universe, and are moving randomly, a view that, although it is still orthodox doctrine, is
gradually giving way as more and more evidence of galactic collisions and cannibalism
accumulates.
Another critical magnitude in which we are interested is the limiting distance beyond which
all galaxies recede at the full speed of light. The importance of this distance lies in the fact
that its relation to the speed of light determines the rate of increase in the recession speed
relative to the distance, the Hubble constant, as the astronomers call it. This is a place where
the theory of the universe of motion arrives at conclusions that are altogether different from
those that the astronomers have reached from their consideration of the only piece of
observational information that has been available to them, the existence of a galactic
recession at a speed that is proportional to the distance. On this slender factual basis, they
have erected an elaborate framework of assumption and inference that serves as the
orthodox theory of the large-scale action of the universe. The cosmological aspects of this
theory will be discussed in the final chapters of this volume. Our concern at present is with
the recession speed. In the absence of any evidence to the contrary, the astronomers have
assumed that the Hubble constant is a fixed characteristic of the physical universe, and they
further assume that the increase in the recession speed with increasing distance continues
indefinitely, approaching the speed of light asymptotically.
While these two assumptions seemed reasonable in the light of the information available to
their originators, the development of a theoretical understanding of the recession process
now shows that both of them are wrong. The recession is not peculiar to the galaxies. It is a
general property of the universe, a relation between the reference system that we utilize and
the reference system to which natural phenomena actually conform, and it applies not only
to the galaxies, but to all material objects, and also to non-material entities, such as the
photons. The outward motion of the photons at the speed of light is the same phenomenon as
the recession of the galaxies, differing only in that the galactic recession is slowed by the
opposing gravitational forces, whereas the photons are not subject to any significant amount
of gravitational retardation, except in the immediate vicinity of very large masses. The
Hubble "constant" is not, as currently assumed, a basic property of the physical universe.
Like the gravitational limit, it is a property of each individual mass aggregate. In application
to the galactic recession this so-called constant is a function of the total galactic mass,
exclusive of the outlying globular clusters and halo stars that are still in free fall.
The assumption that the speed of light is a limiting value, which the recession speed never
quite reaches, is also invalidated by the theoretical findings. As we saw earlier in this
chapter, the effect of the distribution of the gravitational motion over all of the rotational and
translational units involved in the atomic rotation is to require 2.1475 × 10⁹ mass
(gravitational) units to counterbalance each unit of the outward motion of the natural
reference system. The point at which this is accomplished is the gravitational limit. Here the
net speed, relative to the conventional spatial reference system, is zero. Beyond the
gravitational limit the reduced gravitational speed is only able to neutralize a part of the
outward progression, and there is a net outward motion: the galactic recession.
The net outward speed increases with the distance, but it cannot continue increasing
indefinitely. Eventually the attenuation of the gravitational motion by distance brings it
down to the point where the remaining motion of each mass unit is sufficient only to cover
the distribution over the dimensional units involved in direct one-dimensional contact
between motion in space and motion in time. Less than this amount (a natural compound
unit) does not exist. Beyond this point, therefore, the gravitational effect is eliminated
entirely, and the recession takes place at the full speed of light. The limiting distance with
which we are now concerned can thus be obtained by substituting the one-dimensional
relation, 128 × (1 + 2/9) = 156.44 (previously identified as the inter-regional ratio), for the
rotational ratio in equation 14-1. The new equilibrium force equation is then
1/156.44 × m/d₁² = 1 (14-5)
Again, solving for the distance, which in this case we are calling d₁, we have
d₁ = m½/12.51 (14-6)
which can be expressed in terms of solar masses as
d₁ = 13350 (m/ms)½ light years (14-7)
If we again take the intermediate estimate of the mass of the Galaxy that was used earlier, 3
× 10¹¹ solar masses, and apply equation 14-7, we find that the limiting distance, d₁, is 7.3 ×
10⁹ light years. Disregarding the relatively short distance between the Galaxy and its
gravitational limit, we may now calculate the distance from our galaxy to any other galaxy
of the same or smaller mass by converting the redshift in the spectrum of that galaxy to
natural units (fraction of the speed of light), and multiplying by 7.3 × 10⁹ light years, or 2.24
× 10⁹ parsecs. This is equivalent to a value of 134 km/sec per million parsecs for the Hubble
constant.
This calculated value for the Hubble constant does not apply to the recession of a galaxy
larger than our own, as the effective gravitational force that defines the limiting distance is
the force exerted by the larger of the two aggregates. The mass of the smaller aggregate is
immaterial from this standpoint. It is true that the controlling mass exerts a greater
gravitational force on a large mass than on a smaller one, but the opposing effect of the
progression of the natural reference system is subject to the same proportionate increase, and
the equilibrium point remains the same. The astronomers' estimates of the value of the
Hubble constant are based largely on observations of the more massive galaxies, the ones
that are most easily observed. The masses of these giant galaxies are quite uncertain, and the
estimates vary widely, but as a rough approximation we may take the mass of a galaxy of
maximum size to be ten times that of the Milky Way galaxy. Substituting this mass for that
used in the previous calculation, we obtain a Hubble constant of 42 km/sec per million
parsecs.
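The arithmetic behind these two figures is easily checked. The following short Python sketch is purely illustrative (the function names are ours, not part of the theory); it assumes only the coefficient of equation 14-7, 13350 light years per square root of a solar mass, the approximate speed of light of 299,793 km/sec, and the standard conversion of 3.26 light years per parsec.

C_LIGHT_KM_S = 299793          # speed of light, km/sec (approximate)
LY_PER_PARSEC = 3.26           # light years per parsec (approximate)

def limiting_distance_ly(mass_solar):
    # equation 14-7: d1 = 13350 (m/ms)^(1/2) light years
    return 13350 * mass_solar ** 0.5

def hubble_gradient_km_s_mpc(mass_solar):
    # recession gradient implied by the limiting distance,
    # expressed in km/sec per million parsecs
    d1_mpc = limiting_distance_ly(mass_solar) / LY_PER_PARSEC / 1.0e6
    return C_LIGHT_KM_S / d1_mpc

print(limiting_distance_ly(3e11))         # about 7.3e9 light years
print(hubble_gradient_km_s_mpc(3e11))     # about 134 km/sec per million parsecs
print(hubble_gradient_km_s_mpc(3e12))     # about 42, for ten times the Milky Way mass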
The value currently accepted by most astronomers is between 50 and 60 km/sec. Before
1952 the accepted value was 540. By the time the first edition of this work was published in
1959 it was down to about 150. Subsequent revisions have brought it down to the present 50
or 60. These latest results are consistent with the theoretically calculated values within the
accuracy of the galactic mass determination.
Emission of radiation from the rotating atoms of matter is also subject to a dimensional
distribution, but radiation is a much simpler process than the gravitational interaction, and
the distribution is correspondingly more limited. As noted in Volume II, where the
dimensional distribution effective in gravitation was discussed, the theoretical conclusions
with respect to the dimensional distribution of the primary motions are still somewhat
tentative, although the satisfactory agreement with observation gives them a significant
amount of support. It appears from these findings that the radiation distribution is confined
to the basic 128 rotational positions in one dimension.
The application of this distribution with which we are now concerned is its effect on the
magnitude of the increase in the wavelength of radiation (the redshift) due to the outward
progression of the reference system. Since all physical entities are subject to this
progression, it has no effect on ordinary physical phenomena, but it does alter the neutral
point, the boundary between motion in space and motion in time. The outward progression
in space relative to the location from which we are making our observations shifts the
boundary in the direction of longer wavelengths. Observers in the cosmic sector (if there are
any) see a similar shift in the direction of the shorter wavelengths.
Inasmuch as the natural unit in vibrational motion is a half cycle, the cycle is a double unit.
The wavelength corresponding to unit speed is therefore two natural units of distance, or
9.118 × 10⁻⁶ cm. The distribution over 128 positions increases the effective distance to 1.167
× 10⁻³ cm (11.67 microns). This, then, is the effective boundary between motion in space and
motion in time, as observed in the material sector. On the high frequency (short wavelength)
side of the boundary there is first the near infrared, from 1.167 × 10⁻³ cm to 7 × 10⁻⁵ cm, next
the optical region from the infrared boundary to 4 × 10⁻⁵ cm, and finally the x-ray and
gamma ray regions at the highest frequencies. Because of the reciprocal relation between
space and time these high frequency regions are duplicated on the low frequency (long
wavelength) side of the neutral level.
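As a simple numerical check (illustrative only, and assuming the value of the natural unit of distance, 4.558816 × 10⁻⁶ cm, given in the earlier volumes), the boundary wavelength can be reproduced in a few lines of Python:

NATURAL_UNIT_CM = 4.558816e-6              # natural unit of distance, cm

unit_wavelength_cm = 2 * NATURAL_UNIT_CM   # a full cycle is a double unit
boundary_cm = 128 * unit_wavelength_cm     # distribution over 128 positions

print(unit_wavelength_cm)                  # about 9.118e-6 cm
print(boundary_cm)                         # about 1.167e-3 cm
print(boundary_cm * 1e4)                   # about 11.67 microns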
Inasmuch as the processes of the region below unit speed involve transfer of fractional units
of speed—that is, units of energy—the frequencies of the normal radiation from these
processes are on the energy side of the boundary. This is high frequency radiation. At speeds
above unity, this situation is reversed. The physical processes at these speeds involve
transfer of fractional amounts of energy, and the frequencies of the normal radiation are on
the speed side of the unit boundary. In the regions accessible to our observation, these low
frequency processes are less common than those in the high frequency range, and the
instrumentation that has been developed for dealing with them is much less advanced.
Consequently they are not as well known, and only two subdivisions are recognized. The far
infrared corresponds to the near infrared and the optical ranges, while the radio range
corresponds to the x-ray and gamma ray ranges.
The term "normal" in the foregoing paragraph refers to radiation in full units of the type
appropriate to the speed of the emitting objects. For example, thermal radiation is a product
of processes operating at speeds below unity (the speed of light). The full units produced at
these speeds constitute frequencies on the upper side of the unit boundary. Fractional units
do not exist, but adding or subtracting units of time can produce the equivalent of fractional
changes in the amount of space. This enables extension of a portion of the frequency
distribution of thermal radiation into the far infrared, below the unit level. In fact, if the
radiating object is cool, it may radiate entirely in this lower range. But if this radiating object
is hot enough to produce a substantial amount of radiation, the great bulk of this radiation is
in the upper frequency ranges. Thus strong thermal radiation comes from matter in the speed
range below unity. The same principle applies to radiation produced by any other processes
of the low speed range. Conversely, strong radiation of the inverse type (far infrared and
radio) comes from matter in the upper speed ranges (above unity).
The existence of a sharp line of demarcation between the kind of objects that radiate in the
near infrared and the kind that radiate in the far infrared is clearly recognized, even at this
rather early stage of infrared astronomy, but the fact that there is an equally sharp distinction
in the nature of the radiation from these objects has not yet been recognized by the
astronomers. For example, Neugebauer and Becklin suggest that the observed strong
radiation from some objects at 100 microns is "thermal radiation from dust heated by star
light";168 that is, it is essentially equivalent to the radiation from cool stars, although they
also report that the objects which radiate strongly at two microns (in the near infrared) are
altogether different from those that radiate at 20 or 100 microns (in the far infrared). The ten
brightest sources at two microns, they report, are all stars: three supergiants, three giants,
and four long-period variables—the same stars that are bright in the visible region. On the
other hand, none of the ten brightest sources at 20 microns is an ordinary star. They include
the center of the Galaxy, several nebulae, and a number of objects whose nature is, as yet,
not clearly understood. As the investigators say, "At present we lack the information needed
to understand the sources unambiguously."
Our findings show that what is needed is recognition of the existence of the unit boundary at
11.67 microns. Strong radiation in the far infrared, beyond 11.67 microns, comes from
matter with speeds in the upper ranges, above the speed of light, not from relatively cool
thermal sources like those that radiate weakly in the far infrared. As we will see in the pages
that follow, strong infrared emission is one of the conspicuous features of the objects that we
will identify as involving motion at upper range speeds: quasars, Seyfert galaxies, the cores
of other large galaxies, exploding galaxies such as M 82, etc. The infrared radiation from the
quasars is estimated to be 1000 times the radiation in the visible range.167 The association
between infrared emission and radiation in the radio range (which we identify with upper
range speeds) is another feature of these objects, which, like the infrared emission, is
unexplained in current astronomical theory. The significance of the results of the surveys of
the infrared sources within the Galaxy, such as the one reported by Neugebauer and Becklin,
is that they demonstrate the existence of the line of demarcation between the far infrared of
the upper range speeds and the near infrared of the speeds below unity.
In the case of complex objects in which both upper range and normal speeds are strongly
represented, the existence of a discontinuity is evident in the spectra. For instance, the IRAS
observations show that "The spectrum of the Crab nebula 'breaks,' or turns over in the far
infrared," leading to the conclusion that "something must happen in the infrared region that
lies between the near infrared and radio bands."350
The processes other than thermal that give rise to these various radiation frequencies, and
their identification with the speeds of the emitting objects, will be discussed in Chapter 18.
As we saw in Chapter 6, the inverse process of adding energy accomplishes the increase in
speed in the range below the unit level. Addition of n - 1 units of energy to zero speed (1 -
1/1) results in a speed of 1 - 1/n². Obviously, this is a very inefficient method of increasing
the speed, inasmuch as a large increment of energy produces only a very small increase in
the speed. Furthermore, the maximum speed that can be attained by this means is limited to
one unit (that is, the speed of light). But in spite of these highly unfavorable aspects, this is
the way in which additions to the speed are made in the range between zero speed and unit
speed, simply because there is no alternative. Fractional units of speed do not exist.
The subsequent pages of this work will be concerned mainly with phenomena that take place
at speeds in excess of unity, and one of the arguments that will be advanced against the
reality of such speeds by the adherents of orthodox physical thought will be that the amount
of energy required to produce speeds of this magnitude is incredibly great. Indeed, such
arguments are already being raised against suggestions that call for the ejection of galactic
fragments at speeds that merely approach the speed of light. The answer to these objections
is that the upper range speeds are not produced by the inefficient inverse process of adding
energy; they are produced by the direct addition of units of speed, a much more efficient
process.
To illustrate the difference, let us consider the result of adding energy or speed to the initial
situation just mentioned, where the energy is unity and the net speed is 1 - 1/1 = 0. (Most
speeds in the material sector are differences of the form
(1 - 1/n²) - (1 - 1/m²), but it will be convenient to deal with the simpler situation.) If we add
two units to the energy component, the net speed increases to 1 - 1/9 = 0.889. On the other
hand, if we add two units to the speed component the net speed increases to 3 - 1/1 = 2.000,
at the threshold of the ultra high-speed range. The significance of these figures lies in the
fact that in the universe of motion a unit of speed and a unit of energy are equivalent. It
follows that an event that is capable of increasing the net speed of an object only as far as
0.889 by the process of adding energy is capable of increasing it to 2.000 by the process of
adding speed, if speed is available in unit quantities. The conclusion that we reach from the
theoretical development is that matter reaches its age and size limits under conditions that
result in gigantic explosions in which speed is, in fact, released in unit quantities, and is
available for acceleration of the explosion products to the upper range speeds. This means
that intermediate and ultra high speeds are well within the capability of known processes.
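The contrast between the two processes can be put in simple numerical terms. The short Python sketch below is an illustration only (the function names are ours); it tabulates the net speed obtained by raising the energy component to n units, 1 - 1/n², against that obtained by raising the speed component to n units, n - 1/1, starting from the initial condition of one unit of each.

def speed_from_energy_units(n):
    # net speed when the energy component is raised to n units: 1 - 1/n^2
    return 1 - 1 / n ** 2

def speed_from_speed_units(n):
    # net speed when the speed component is raised to n units: n - 1/1
    return n - 1

# starting point: one unit of each component, net speed 1 - 1/1 = 0
for n in (1, 2, 3, 10, 100):
    print(n, round(speed_from_energy_units(n), 4), speed_from_speed_units(n))

# Raising the energy component to three units (adding two) gives 0.889, and no
# amount of added energy ever reaches 1.000; adding two units of speed gives 2.000.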
It is probable that most readers who encounter this idea for the first time will see it as a
strange and unprecedented addition to physical thought. But, in fact, the basic principle that
is involved is the same one that governs a well known, and quite common, type of physical
situation. The only thing new is that this principle has not heretofore been recognized as
applying to the phenomenon now being discussed. The truth is that what we are here dealing
with is simply a threshold effect, something that we meet frequently in physical theory and
practice. The photoelectric effect is a good example. In order to eject electrons from cold
metal, the frequency of the impinging radiation must be above a certain level, the threshold
frequency. The result cannot be accomplished by increasing the total amount of low
frequency energy. Ejection does not take place at all unless the energy is available in units of
the required size. When units of this size are applied, even a small total energy is sufficient.
The production of speeds in excess of the speed of light is governed by the same kind of a
limitation. Speed units of the required magnitude must be available.
A number of other magnitudes that are significant in the quantitative description of the
evolution of the contents of the universe are subject to calculation on the theoretical basis
that has been provided, including such items as the average duration of the various
evolutionary stages, maximum and minimum sizes of the various aggregates, and so on.
Lack of time has prevented undertaking any systematic investigation of these subjects, but
some results have been obtained as by-products of studies made for other purposes. These
are mostly properties of objects moving at upper range speeds, and they can be discussed
more conveniently in connection with the general examination of the phenomena of these
upper speed ranges in the subsequent chapters.

CHAPTER 15

The Intermediate Regions


We are now ready to begin a full-scale examination of the only one of the regions into
which the universe is divided by the reversals that take place at the unit levels of speed,
space, and time, that has not yet been considered. This is an appropriate point at which to
review the general situation, and to see just how each of the several regions fits into the
overall picture.
As brought out in detail in the preceding pages of this and the earlier volumes, the region
that can be accurately represented in a spatial system of reference is far from being the
whole of the physical universe. There is another equally extensive, and equally stable,
region that is not capable of representation in any spatial reference system, but could be
correctly represented in a three-dimensional temporal reference system, and there is a
large, relatively unstable, transition zone between the two regions of stability. The
phenomena of this transition zone cannot be represented accurately in either a spatial or a
temporal reference system.
Furthermore, there is still another region at each end of the speed-energy range that is
defined, not by a unit speed boundary, but by a unit space or time boundary. In large-
scale phenomena, motion in time is encountered only at high speeds. But since the
inversion from motion in space to motion in time is purely a result of the reciprocal
relation between space and time, a similar inversion also occurs wherever the magnitude
of the space that is involved in a motion falls below the unit level. Here motion in space
is not possible, because less than unit space does not exist, but the equivalent of a motion
in space can be produced by adding motion in time, since an energy of n/1 (n units of
energy) is equivalent to a speed of 1/n. This region within one unit of space, the time
region, as we have called it, because all change that takes place within it is in time, is
paralleled by a similar space region at the other end of the speed-energy range. Here the
equivalent of a motion in time is produced inside a unit of time by the addition of motion
in space.
With the addition of these two small-scale regions to those described above, the speed-
energy regions of the universe can be represented in this manner.
Speed      0          1          2
  time  |   3d    |  scalar  |   3d    |  space
  only  |  space  |   zone   |  time   |  only
               2          1          0   Energy
The extent to which our view of the physical universe has been expanded by the
development of the theory of the universe of motion can be seen from the fact that the
one section of this diagram marked '3d [three-dimensional] space' is the only part of the
whole that has been recognized by conventional science. Of course, this is the only region
that is readily accessible to human observation, and the great majority of the physical
phenomena that come to the attention of human observers are phenomena of this three-
dimensional spatial region. But the difficulties that physical science is currently
encountering are not primarily concerned with these familiar phenomena; they arise
mainly from attempts that are being made to deal with the universe as a whole on the
basis of the assumption that nothing exists outside the region of three-dimensional space.
By developing the necessary consequences of the postulates that define the universe of
motion, we have identified the nature and properties of the primary entities and
phenomena of the three-dimensional region of space and those of the time region. The
inverse regions, the three-dimensional region of time and the space region, are
unobservable, but since they are exact duplicates of the regions that we have examined in
detail, the findings with respect to those observable regions are also applicable, with
space and time interchanged, to the two inverse regions. The subdivision that remains to
be explored is the transition zone between the two three-dimensional regions identified in
the diagram as the scalar zone.
The separation of the two sectors of the universe into distinct regions is a result of the
fact that the relation between the natural and conventional reference systems changes at
each unit level, because the natural system is related to unity, while the conventional
system is related to zero. Since the natural system is the one to which the universe
actually conforms, any process, which, in fact, continues without change across a
regional (unit) boundary reverses in the context of the arbitrary, fixed spatial reference
system. Each region thus has its own special characteristics, when viewed in relation to
the spatial reference system.
Inasmuch as the three scalar dimensions are independent, the maximum speed in each is
two units, as shown in Fig. 8 (Chapter 6). Thus there are six total units of speed (or
energy) between the absolute speed zero and the absolute energy zero. It follows that the
neutral point is at three units. At any net speed below this level, the motion of an object,
as a whole, is in space.
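For convenience in the discussion that follows, the six-unit scale just described can be summarized in a short Python sketch. The numerical boundaries are those stated above; the region labels are informal descriptions of our own, not terms used elsewhere in this work.

def speed_region(net_units):
    # classify a net equivalent speed on the six-unit scale (0 to 6);
    # the neutral point is at three units
    if net_units < 1:
        return "low speed range (three-dimensional space)"
    if net_units < 2:
        return "intermediate range"
    if net_units < 3:
        return "ultra high speed range (motion, on the whole, still in space)"
    return "beyond the neutral point (motion, on the whole, in time)"

for s in (0.5, 1.5, 2.5, 4.0):
    print(s, speed_region(s))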
This fact has an important bearing on the nature of the motion. As we have seen in our
examination of fundamentals, an object can move either in space, at a speed of 1/n, or in
time, at a speed of n/1, but it cannot move in both space and time (relative to the natural
datum of unity) coincidentally, since the speed cannot be both above and below unity. As
pointed out earlier, it then follows that where motion in time exists as a minor component
of a motion that, in total, takes place in space, the motion in time acts as a modifier of the
magnitude of the motion in space, rather than as an actual motion in time. In other words,
it is a motion in the spatial equivalent of time: a motion in equivalent space.
As indicated in Fig. 8, the second unit of translational motion is motion in time, but since
the translational motion as a whole is in space as long as it remains below three units, this
second unit of motion takes place in equivalent space. Such motion can be represented in
the conventional spatial reference system to the extent that the magnitudes that are
involved are within the limits of the reference system: zero and unity. As can also be seen
from Fig. 8, the motion in the second unit of one dimension, the motion in time
(equivalent space), is fully coordinate with the motion in the first unit, and has the same
kind of a relation to the natural datum, unity, as the original unit. It therefore substitutes
for the original unit, rather than adding to it. The motion in equivalent space in this
second unit is similar to the motion in space in the first unit, except that it is inverse, and
reduces, rather than increases, the equivalent spatial magnitudes. These magnitudes
therefore remain within the limits of the reference system, and can be represented in it, as
we see in the case of the reduction in the size of the white dwarf.
On the other hand, when the motion is extended to a third unit, a unit of motion in space,
so that there is motion in both a dimension of space and a dimension of equivalent space,
the magnitudes are additive, and the increment due to the motion in equivalent space is
outside the one-unit limit of the reference system. It follows that this increment cannot be
represented in the reference system, although it appears in measurements such as the
Doppler shift that deal with total scalar magnitudes, and are not subject to the limitations
of the reference system.
The two linear scalar units of motion effective in the intermediate region are not
restricted to a specific scalar dimension. Consequently they are distributed over all three
of these dimensions by the operation of probability. This means that there are eight
possible orientations of the units of motion. Motion in actual space is restricted to one of
these by the nature of the reference system that defines what is considered to be actual
space. Inasmuch as this reference system imposes no restrictions on time, the other seven
orientations are available for the motion in the spatial equivalent of time (equivalent
space).
In the absence of any influences tending to favor one orientation over another, the total
scalar motion is distributed equally among the eight orientations. However, for reasons
that were explained in Volume I, all quantities in equivalent space are two dimensional in
terms of actual space. The seven equivalent units are divided between the two
dimensions, usually equally. One of the two resulting speed components adds to the
speed in actual space. The other does not. Furthermore, the second power relation
between the quantities of the two kinds (actual space and equivalent space) leads to a
similar relation between the units. Thus the fractional unit of equivalent space
comparable to the fractional unit n of actual space is n½. Where the spatial speed is n, the
corresponding speed in equivalent space (the distribution in one dimension) is normally
3.5 n½, and the total equivalent spatial speed is n + 3.5 n½.
This quantity, the total equivalent spatial speed, is the total scalar magnitude in the
dimension of the reference system, the quantity that is measured by the Doppler shift.
Motion in the low speed and intermediate ranges is confined to one spatial dimension,
and is limited to the value n, generally represented by the symbol z in this application.
Extension of the motion to a second spatial dimension in the ultra high speed range adds
the magnitude represented by the term 3.5z½. Thus the Doppler shift of any object moving
at ultra high speed is normally z + 3.5z½. This theoretical conclusion will be compared
with the results of observation in Chapter 22.
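The relation just stated is easily applied numerically. In the Python sketch below (an illustration with an arbitrarily chosen sample value, and function names of our own choosing), the factor 3.5 is the seven equivalent-space orientations divided between two dimensions, and the second function inverts the relation to recover the one-dimensional spatial speed z from a measured total shift.

def total_doppler_shift(z):
    # total shift for an object moving at ultra high speed: z + 3.5 z^(1/2)
    return z + 3.5 * z ** 0.5

def spatial_speed_from_shift(total_shift):
    # invert z + 3.5*sqrt(z) = total_shift; quadratic in sqrt(z), positive root
    root = (-3.5 + (3.5 ** 2 + 4 * total_shift) ** 0.5) / 2
    return root ** 2

print(total_doppler_shift(0.16))          # about 1.56
print(spatial_speed_from_shift(1.56))     # about 0.16, recovering the spatial component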
In conventional physical thought the reception of radiation from an object traveling away
from our location at a speed greater than that of light would be impossible, even if such
speeds exist, which conventional theory denies. The reasoning is that an object moving at
such a speed would be traveling faster than the signal that it is emitting. But in the
universe of motion any motion at a speed greater than unity (the speed of light) is a
motion in time at an inverse speed less than unity. This means that the object is moving
slower than light in time. The radiation from an object moving away from us at such a
speed therefore reaches us through time rather than through space, if we are appropriately
located relative to the moving object.
Although the neutral point between motion in space and motion in time is at three units of
equivalent speed, the maximum net translational speed that can be attained is actually two
units, because of the gravitational reversal. When the net speed reaches the two-unit
level, reversal of the gravitational direction (from motion in space to motion in time)
results in a net increase in speed of two units. Any further increase in equivalent speed
beyond this two-unit level is accomplished by reduction of the magnitude of the inverse
speed; that is, by decreasing the motion in time.
While net speeds exceeding two units take the motion across the sector boundary and out
of the observable region, as indicated above, ultra high speeds can exist in the material
sector in conjunction with oppositely directed speeds of different origin that keep the net
effective speed below the two unit level, at least temporarily. Under these conditions, the
distinctive features of ultra high-speed motion, such as existence in two spatial
dimensions, are applicable to objects that are still within the observable range.
The intermediate, or scalar, zone is separated into two regions by the sector boundary.
One of these, from two units of inverse speed to one unit, is on the cosmic side of the
sector boundary, and is unobservable. The other, from one unit of speed to two units, is
outside the region that can be represented accurately in the spatial reference system (the
three-dimensional region of space), but it is within the material sector, and it does have
effects on physical magnitudes in that sector. These effects enable us to detect objects
moving in the upper speed ranges, and to determine their properties. The results of an
investigation of these properties will constitute the subject matter of the remainder of this
volume, except for the last two chapters.
The effects of translational speed in the intermediate range, between one and two units,
insofar as they apply to aggregates of stellar size, were discussed in Chapter 6. Some
similar effects are produced in galaxies, but it will be convenient to consider these after
the characteristics of matter moving at ultra high speeds are developed. Examination of
these intermediate galactic phenomena will therefore be deferred to Chapters 26 and 27.
We will now turn our attention to motion at ultra high speeds, in the range between two
and three units.
Extension of motion to this higher speed range results in the production of some new and
different classes of astronomical phenomena. We will examine the most significant of
these phenomena individually in the pages that follow, but in order to lay a foundation for
the subsequent discussion, and to emphasize the unitary character of the theory that will
be applied to an understanding of the individual items, the general aspects of ultra high
speed motion, as derived from the postulates that define the universe of motion, will be
developed at this point, where they can be considered independently of the particular
conditions that surround the specific applications.
In order to produce speeds in this ultra high range it is, of course, necessary that there be
explosive phenomena that are more violent than the Type I supernovae. We will have no
difficulty in identifying such phenomena. While the ultra high speed range is a full unit
greater than the intermediate range of speeds produced by the Type I explosion, the
availability of the direct process of adding units of speed, as discussed in Chapter 14,
makes the required addition to the violence of the explosion much smaller than would be
indicated by the effects of energy addition in the range below unit speed.
We saw in Chapter 6 that when the Type I supernova explosion occurs, the material in
the central regions of the star is blown outward in time, producing a white dwarf star. A
more violent explosion that accelerates the fastest of the explosion products to ultra high
speeds has the same kind of an effect on the matter in the central regions, except that it
adds an outward translational motion in a second dimension of space to the outward
(inward from the spatial standpoint) expansion in time. This explosion product is another
white dwarf, differing from the white dwarfs discussed in the earlier chapters, in that it is
moving outward at a high rate of speed.
At the time of origin, the explosion product is subject to gravitational forces—that is, to
motion in the inward direction—and even though the explosion speed is in the 3 - 1/n²
range, the net speed is below two units, and the moving object is observable. As this
object travels outward, the gravitational effect is gradually reduced, and unless external
forces intervene, the net speed eventually reaches the two-unit sector limit. Further
reduction in the gravitational component then takes the net speed across the boundary and
into the cosmic sector. At this boundary, gravitation begins operating in time rather than
in space. The constituents of this aggregate then move outward from each other in space
because of the progression of the natural reference system, now no longer offset by
inward gravitational motion. Coincidentally, the process of aggregation in time begins.
Ultimately the aggregate in space ceases to exist, and its constituents aggregate in time.
The normal fate of the ultra high-speed products of the most violent supernova
explosions is thus ejection from the material sector. But unlike the intermediate region
white dwarf, whose destiny is foreordained, the ultra high-speed product may deviate
from the normal pattern. Elimination of gravitation in space does not affect the other
influences that tend to reduce the speed of the moving material object, particularly the
resistance due to the presence of other matter in the line of travel. Instead of continuing to
increase, as gravitation in time becomes increasingly effective, the speed of this object
may, under certain conditions, reach a maximum at some point above the two-unit level,
and then decrease, eventually dropping back into the region of motion in space. Such
objects return to the low speed range in essentially the same condition in which they left
it. They have never ceased to be material aggregates. Only a very limited range of initial
explosion speeds will produce this kind of a result, but it is common enough to give rise
to a distinct class of phenomena that will be examined in Chapter 19.
As we have seen, motion at intermediate speed adds an expansion to the original
translational motion of a material object. Ultra high speed adds another unit of motion,
which takes the form of a translational motion of the expanding object. The fast-moving
white dwarf that we are now considering is expanding in time, like any other white
dwarf, and moving translationally in a scalar dimension of space other than the dimension
of the spatial reference system. But there is no reason why the combination of the two
added scalar motions must necessarily take place in this manner. Other things being
equal, it is just as possible for the expansion to take place in the second dimension of
space, and the translational motion of the expanding object to be in time.
In the central regions of an exploding star, the white dwarf type of product is favored
because the compressive forces produced by the explosion act on matter that is already
highly compressed, and is so situated geometrically that it cannot relieve any of the
pressure by outward motion. The entire action is therefore inward in space, which is
equivalent to outward in time. In the outer regions of the star the initial compression is
less, and much of the explosion pressure can be absorbed by motion of the matter against
which the pressure acts. Here the general direction of the action is outward. These
conditions are favorable for production of the alternate type of combination of motions,
that in which the expansion is in space and the translation is in time.
Expansion in space is equivalent to contraction in time, and vice versa. Thus, in either of
the alternative motion combinations the expansion produces a compact object. Expansion
in time, as in the white dwarf, produces a compact object in space. This object then
moves linearly in space. The constituents of the object are moving in time; the object as a
whole is moving in space. In the alternate motion combination, the constituents of the
compact object are moving in space, while the object as a whole is moving in time.
As has been emphasized repeatedly in all of the volumes of this work, the conventional
fixed spatial reference system is severely limited in its ability to represent the motions
that take place in the universe. Many kinds of motion cannot be represented in this
system at all, and others cannot be represented accurately. The linear outward motion in
the second scalar dimension of space is incapable of representation, as is all motion in
actual time. But where aggregates expanding at the upper range speeds are observable in
the reference system, the speed magnitudes are reflected in the dimensions of these
aggregates. We have already seen how this effect accounts for the extremely small
observed size of the white dwarf. Now we will need to recognize that expansion into a
second scalar dimension of space produces the same kind of a result in the inverse
manner; that is, the moving object appears in greatly extended form in the spatial
reference system.
Some of the change of position due to the unobservable ultra high speeds is represented
in the reference system in an indirect manner. In the initial stage of the outward motion of
the ultra high-speed explosion product, much of the explosion speed is applied to
overcoming the inward gravitational motion of this object. Inasmuch as that gravitational
motion has altered the position (in the reference system) of the matter that now
constitutes the explosion product, elimination of the gravitational motion results in a
movement of this matter back to the spatial position that it would have occupied if the
gravitational motion had not taken place. Since it reverses a motion in the reference
system, this elimination of the gravitational change of position is observable in that
system, even though the change of position due to the motion that produced this effect,
the ultra high speed motion in time or in a second scalar dimension in space, is itself
unobservable.
The gravitational retardation of the speed is at a maximum at the time of ejection, and
decreases thereafter, reaching zero at the point where the explosion-generated motion
arrives at the unit speed level. Thus the greater the net speed of the explosion product, the
smaller the resulting change of position in the reference system. Here we have another of
the many findings of this present investigation that seem nothing short of preposterous in
the light of current scientific thinking. But again, as in the previous instances of a similar
nature, we will find that the validity of this conclusion is verified in a number of
astronomical applications.
Inasmuch as the spatial motion component of the ultra high-speed motion is in a second
scalar dimension, it is perpendicular to the normal dimension of the reference system.
This perpendicular line cannot rotate in a third dimension because the three-dimensional
structure does not exist beyond the unit speed level. Thus the representation of the motion
in the reference system is confined to a fixed line. Consequently the first portion of the
expansion of this ultra high-speed aggregate is linear rather than spherical. The spherical
expansion cannot begin until the net speed reaches the unit level and the linear movement
has ceased.
As we saw in our examination of the fundamentals of scalar motion, this type of motion
does not distinguish between the direction AB and the direction BA, since the only
inherent property of the motion is a magnitude. Unless external influences intervene, any
linear motion originating at a given point is therefore divided equally between two
opposite directions by the operation of probability. In its initial stages the expanding
cloud of ultra high speed particles thus takes the shape of two oppositely directed
cylindrical streams of matter (jets, in the language of the observers) moving outward
from the point of origin.
In the next stage, after the head end of the jet has reached the linear limit, and spherical
expansion has begun, each half of the expanding cloud is a weaker and more irregular jet,
or stream, of matter, with a knob on the outer end. In its unmodified form, the expanding
object as a whole has what is called a dumbbell shape during this stage. In the last stage,
the jet has disappeared, and there are now two spherically expanding clouds of ultra high-
speed matter, each centered at one of the limits of the linear expansion.
In many instances the physical situation is such that expansion in one of the two
directions is prevented, or at least impeded. In that event, the result is a single jet,
sometimes accompanied by a small counter jet. In other cases, obstructions, or motions of
the expanding object in the dimension of the reference system during the period of
expansion, modify the shape and direction of the jet, even to the extent, as we will see in
the next chapter, of altering the structure to the point where it is no longer recognized as a
jet.
The two alternative expansion patterns of these ultra high speed explosion products, as
viewed in the context of the spatial reference system—one a small, dense, and
inconspicuous object, the other an enormous double cloud of widely dispersed matter
spread over an immense volume of space—are so radically unlike that without a
theoretical understanding of their nature and origin one would not suspect that they are
related in any way. But, as we have just seen, they are simply two manifestations of the
same thing: the result of ultra high-speed motion, a form of motion that involves both
expansion and linear movement. In one case the expansion takes place in time, and the
linear motion in space. In the other the roles of space and time are reversed; the
expansion takes place in space and the linear motion in time. The expansion in time
produces an object that is extremely small from the spatial standpoint. The expansion in
space produces one that is extremely large spatially.
In the pages that follow we will examine a variety of astronomical phenomena that
belong in this category, and we will find that, notwithstanding their apparent
dissimilarities, they can all be explained on the basis of the theory set forth in the
preceding paragraphs. Each of the particular applications has some special features that
are peculiar to the existing situation. This has the effect of confusing the essential issues,
as they tend to be buried under a mass of detail that has little, if any, relevance to the
basic elements of the phenomena that are involved. Furthermore, the prevailing view of
the particular situation is, in most cases of the kind we are now considering, incorrect in
some significant respects, and it is hard for those accustomed to these currently accepted
ideas to avoid being influenced by them. It is therefore advantageous to consider the
essential issues on an abstract basis, as we are doing in this chapter, without the
complications that accompany the specific applications, and to establish the theoretical
relations on a firm basis before undertaking to apply them to the individual cases to
which they are applicable.
One more feature of the intermediate regions that should have attention from the
standpoint of fundamental theory before beginning consideration of the observable
phenomena of this region is the nature of the thermal radiation from objects moving with
upper range speeds. As we saw in the earlier volumes, this thermal radiation originates
from linear motion of the small-scale constituents of material aggregates in the dimension
of the spatial reference system. The effective magnitude of this motion is measured as
temperature.
Inasmuch as motion at intermediate speeds is in the same scalar dimension as the motion
at speeds below unity, the vibrational motion that produces the thermal radiation
continues into the upper speed ranges. But because of the reversal at the unit speed level,
the temperature gradient in the intermediate region is inverse; that is, the maximum
intensity of the thermal vibration, and the resulting radiation, is at the unit speed level,
and it decreases in both directions. In this intermediate region, an increase in speed
(equivalent to a decrease in inverse speed) decreases the thermal radiation.
Furthermore, the radiating units of matter are confined within one unit of time at the
upper end of the intermediate temperature range (the lowest inverse temperatures), just as
they are confined within one unit of space at the lower end of the normal temperature
range. This alters the character of the observed radiation. As brought out in the earlier
volumes, and reviewed in Chapter 6, speeds less than unity can be attained only by
addition of units of the inverse quantity, energy. The result of such an addition is a speed
of 1 - 1/n², where n is the number of units of energy. A wider range of values is then
possible by means of combinations of the form (1 - 1/n²) - (1 - 1/m²). When an atom
moves independently, as it does in the true gaseous state, it can only move with certain
specific speeds, the speeds that are defined by the foregoing equation with the applicable
values of the energy components m and n. Thus each kind of atom (each element) has a
specific set of possible radiation frequencies, and a line spectrum.
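By way of illustration (the range of values shown is arbitrary, and the function name is ours), the discrete character of these speeds, and hence of the corresponding radiation frequencies, can be seen by enumerating the combination values for small m and n:

def allowed_speed_values(max_units):
    # speeds of the form (1 - 1/n^2) - (1 - 1/m^2) = 1/m^2 - 1/n^2, with n > m
    values = set()
    for m in range(1, max_units + 1):
        for n in range(m + 1, max_units + 1):
            values.add(1 / m ** 2 - 1 / n ** 2)
    return sorted(values)

for v in allowed_speed_values(4):
    print(round(v, 4))

# The result is a discrete set of fractional values below unity: a line
# spectrum rather than a continuum.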
The radiation is emitted from the atom at these same frequencies regardless of the
physical state of the aggregate in which the atom exists, but at the low end of the ordinary
temperature range, where matter is in the solid or liquid state, the thermal motion of the
atom takes place entirely within one unit of space. The radiation originating from this
motion has to be transmitted across the boundary between the region inside the unit
boundary, the time region, to the outside region where it is observed. For reasons that
have been explained in detail in the earlier volumes, this radiation is distributed in
direction, and over a range of frequencies, in the inter-regional transmission process. This
radiation therefore has a continuous spectrum.
The same situation prevails in the intermediate region, if it is viewed in terms of inverse
speeds and temperatures. When expressed in terms of the speeds and temperatures of the
low speed region, the relations in the intermediate region are inverse. The radiation at
speeds just above unity comes from atoms that are still in the gaseous state, and are
moving freely in time. This radiation, like that in the corresponding range on the lower
side of the unit level, has a line spectrum. As the speed increases still further, the
intensity of the radiation decreases, just as it does at speeds farther from unity in the low
speed region. At a critical level of inverse temperature and pressure the atom drops into
the space region, the region inside unit time; that is, the aggregate of these atoms
condenses into the inverse solid state. The optical radiation from this region, like that
from the time region where ordinary solids are located, has a continuous spectrum.

CHAPTER 16

Type II Supernovae
The derivation of the principal characteristics of objects moving at ultra high speeds in
the preceding chapter gives us a foundation on which we can construct a theoretical
picture of the nature and properties of astronomical objects of this class. Before so doing,
however, it will be appropriate to give some attention to the process by which the ultra
high speeds are generated.
As explained in Volume II, the continued existence of matter is subject to two limits, one
related to the temperature, and therefore to the mass of the star in which the matter is
located, and the other related to the age of the matter itself, subject to some modification
by reason of its location. We have seen that when the temperature limit is reached in the
center of a star, that star explodes in an event known as a Type I supernova. Arrival at the
age limit results in a similar explosion, which is called a Type II supernova. While these
explosions are basically alike, in that each results from the sudden conversion of a
substantial portion of the mass of the star into energy, and each produces some products
that move with speeds greater than that of light, as well as slow moving products, there
are also some significant differences that we will want to explore.
The upper destructive limit of matter is actually a limiting value of the magnetic
ionization, but this is a function of age, because the magnetic ionization level continually
increases under normal conditions. This ionization is equalized when atoms come into
effective contact. All components of a solid aggregate are therefore at the same ionization
level. In the fluid states—liquid, gas, and condensed gas—the equalization process
proceeds more slowly. Where the material aggregate is as large as a star, and there is a
substantial inflow of matter from the environment, an ionization gradient is produced,
extending from the lower level of the accreted material to the higher level of the older
matter in the interior. When the ionization level in the interior reaches the destructive
limit, and the explosion occurs, the matter that is still below the destructive ionization
level is dispersed in space and in time in a manner similar to the dispersion of the
products of the Type I supernova.
Reliable information about supernovae is very limited. Unfortunately, observations of the
individual explosive events can only be made under some rather severe handicaps. No
supernova has been observed in our galaxy for nearly 400 years, and information about
the active stage of these objects can be obtained only from extragalactic observation,
aside from such deductions as can be made from imprecise eyewitness accounts by
observers of the supernovae of 1604 and earlier. The most meaningful information comes
from examination of certain astronomical objects, a few of which are known to be
remnants of old supernovae, and others that are similar enough to justify including them
in the same category. Even at best, however, hard evidence is scarce, and it is not
surprising that there is considerable difference of opinion among the astronomers as to
classification and other issues. As might be expected under these circumstances, our
deductions from physical theory conflict with some current astronomical thought.
The Type I explosion, according to our findings, originates in a star that has reached the
size and temperature limits. This is a hot, massive star at the upper end of the main
sequence, a member of a group of practically identical objects. Thus our theoretical
conclusion is that all Type I supernovae are very much alike. The observers concede the
validity of this conclusion. Here are some typical comments:
Type I supernovae display a fascinating homogeneity of photometric and spectroscopic
properties.170 (David Branch)
Supernovae of Type I form a fairly homogenous group with relatively little variation
between the spectrum of one star and that of the next . . . Supernovae of Type II
constitute a much less homogenous group than those of Type I.31 (Robert P. Kirshner)
The supernovae other than those of Type I are actually so diverse that serious
consideration has been given to defining several additional types. In the light of our
findings it is apparent that a substantial degree of variation in the Type II events can be
expected by reason of the differences in the masses of the exploding stars, and in their
physical condition; that is, in the stage of the evolutionary cycle in which they happen to
be at the time when they arrive at their age limits. Some of the observations show
indications of mass differences. For instance, R. Minkowski reports that "The supernova
of 1961 in NGC 4303 which Zwicky designates as Type III, shows properties that
suggest strongly a supernova of Type II with unusually large ejected mass."171 Massive
objects are, of course, relatively rare in a sample drawn at random from the general run of
stars, the great majority of which are small.
The astronomers have not been able to find a satisfactory explanation for the difference
between the two classes of supernovae. Shklovsky, for instance, points out that this is one
of the things that a theory of stellar explosions should explain:
"Why, for instance, are the light curves of Type I supernovae so similar to one another?
And why are the light curves of Type II supernovae so diverse?" Theoreticians have
found these questions very difficult indeed.172
The principal roadblock in the way of arriving at an answer to these questions is the
prevailing commitment to the upside down evolutionary sequence, which is the basis for
the current belief in astronomical circles that the Type II explosions are the ones that
originate from the hot massive stars. Again quoting Shklovsky:
As for the stars that become Type II supernovae, it is logical to infer that they are young
objects. This conclusion follows from the simple fact that they are located in spiral arms,
where stars are formed out of a gas-dust medium.173
The lack of force in this argument can be appreciated if it is recalled that this same author
characterized the current theory of star formation as "pure speculation." This is another of
the places where the uncritical acceptance of the physicists' assumption as to the nature of
the stellar energy generation process has diverted the astronomers' thinking into the
wrong channels, and induced them to close their eyes to the direct astronomical evidence.
When the correct age sequence is recognized, all of the observations fall into line without
difficulty.
Type I supernovae are found to be distributed among all of the various kinds of galaxies.
This is consistent with our findings, as the limiting mass may theoretically be reached
early in the life of a star, under appropriate circumstances. Age, on the other hand, is
inconsistent with an early type of galaxy (with the usual exception that some old stray
stars may be picked up by a young galaxy). A Type I event, if it occurs at all, must
precede the Type II event that marks the demise of the star. Since the Type II supernova
is a result of age, the explosions of this type are primarily phenomena of the older
galaxies. The absence (or near absence) of Type II supernovae from the Magellanic Clouds, for instance, is easily understood on the age basis, as these Clouds are clearly much younger than the Galaxy, according to the criteria that we have developed. On the other hand, this is a distinct embarrassment for the prevailing "massive star" theory of Type II supernovae. As explained by Shklovsky,
The fact that only Type I supernovae appear in irregular galaxies such as the Magellanic
Clouds would seem inconsistent with the picture we have outlined, for these galaxies
contain a great many hot, massive stars. Why is it that Type II supernovae are not
observed there?173
What needs to be recognized is that when the observed facts are "inconsistent with the picture," then they are telling us that the picture is wrong. This is the same message that
we get from a whole assortment of astronomical observations that were discussed item by
item in the preceding pages of this volume. All agree that the objects—stars, clusters,
galaxies—characterized by astronomers as the older members of their respective classes
are, in fact, the younger. This is the answer to Shklovsky's question, and to a wide range of similar problems.
In spite of the absence of observed events, Type II supernovae are not totally excluded
from small elliptical or irregular galaxies, or even from globular clusters. As pointed out
earlier, all of these aggregates contain a few old stars that have been picked up from the
environment during the formation and subsequent travel of the aggregates. When these
old stars reach their age limits, supernova explosions take place. The absence of observed
events of this kind is due to their scarcity. The Large Magellanic Cloud does contain a
few supernova remnants that can be identified with Type II events, indicating that at least
a few Type II supernovae have occurred in this galaxy within the last 100,000 years.
The observed Type II events are largely in the arms of the spiral galaxies, as indicated in
one of the quotations from Shklovsky, but we find from theory that the great majority of
the Type II supernovae occur in the unobservable inner regions of the giant spheroidal
galaxies and the largest spirals. This is where the oldest stars are concentrated. The
number of stars that undergo Type II explosions is considerably greater than the number
that undergo Type I explosions, since all must eventually meet the Type II fate. This is
offset in part by the fact that many stars repeat the Type I explosion at least once, in some
cases several times. Aside from occurring much later in the life span of the star—at the
very end—the most distinctive feature of the Type II explosion is that the intensity of the
explosion, relative to the stellar mass, is much greater than in Type I. The total mass
participating in the explosion is, in most cases, less than that of the massive star that
becomes a Type I supernova, as the mass of the star involved in the Type II event may be
anywhere between the maximum and minimum stellar limits. But the Type II explosion
converts a much larger proportion of this mass into energy, and the ratio of energy to
unconverted mass is therefore considerably higher, increasing the proportion of the mass
going into the products with upper range speeds, and the maximum explosion speed of
these products.
The optical emission from the explosion products comes mainly from the low speed
component, the material that is expanding outward into space. Since the amount of this
material is much smaller in the Type II events than in those of Type I, the optical
magnitude of the Type II supernova at the peak is considerably less than that of the Type
I events. One investigation arrived at average magnitudes of -18.6 for Type I and -16.5 for Type II.174 The emission from Type II also drops off more rapidly at first than that from Type I, and the light curves of the two types of explosions
are thus quite different. This is one of the major criteria by which the observational
distinction between the two types is drawn.
In view of the limited optical activity and the relatively small mass of the remnants, there
has been some question as to what happened to the energy of these Type II events.
Poveda and Woltjer, for instance, comment that they find it difficult to reconcile current
ideas as to the energy release in the Type II supernovae with the present state of the
remnants.175 This question is answered by our finding that the great bulk of the energy
that is generated goes into the upper range explosion products, most of which are not
optically visible.
These products include some that are moving at intermediate speeds, and are
unobservable because their radiation is widely dispersed by the motion in time, and
others moving at ultra high speeds and therefore optically visible only during the linear
stage of their expansion. The ultra high-speed matter moves outward with the low speed
products during this early stage. The intermediate speed matter has no spatial motion
component of its own, but much of it is entrained with the outward-moving products. As
a result this outward-moving cloud of matter contains local aggregates in which there are
substantial amounts of material with the speeds and other characteristics of the white
dwarf stars.
The long-continued radio emission of the remnants of the Type II supernovae is due to
the presence of these upper range products. It was noted in Chapter 6 that the early white
dwarf product of the Type I supernova is not visible optically, and manifests itself only
by its radio emission. The same is true of the local concentrations of intermediate speed
matter in the remnants, which are the equivalent of small-scale white dwarfs, and pass
through the same evolutionary stages. Because of their small size, their evolution
proceeds more rapidly, and even in the relatively short time during which the remnants
are observable there are portions of the intermediate speed matter in all stages, including
small aggregates with the outer shell of condensed gas that is characteristic of the white
dwarfs in the visible stages. Thus the radiation from the remnants is not limited to
dissipation of the kinetic energy imparted to the explosion products by the supernova.
There is a continued generation of energy within the remnants. As the observers concede,
the brightness of the supernova remnants decreases much less rapidly with increase in
radius than conventional theory predicts.176 The supplemental energy generation is the
answer to this problem.
Continued generation of energy in the remnants is manifested not only by the persistence
of the radio emission, but also by direct evidence of energetic events within these
structures. Inasmuch as conventional astronomical theory provides no means of
generating energy in the explosion products, the prevailing view is that any emission of energy exceeding that which can be ascribed to the initial explosion must be introduced into the remnant from some separate source. In the case of the Crab Nebula, the remnant of a supernova observed in 1054 A.D., it has been estimated that an input of energy "of the order 10³⁸ erg/sec" is required to maintain the observed emission.177 The current
belief is that this energy is derived from a dwarf star located in the center of the nebula,
but this is purely hypothetical, and it depends on the existence of a transfer mechanism of
which there is no evidence, or even a plausible theory.
The explanation that we derive from the theory of the universe of motion is that the
continued supply of energy is due to radioactivity in the local concentrations of upper
range matter in the remnants. It is the existence of this secondary energy generation in the
Type II remnants that accounts for the great difference between the maximum period of
observable radio emission in the Type I remnants, perhaps 3000 years, and that of the
Type II remnants, which is estimated at more than 100,000 years. As an example of this
difference, there is a nebulosity in the constellation Cygnus, known as the Cygnus Loop,
which is generally considered to be a remnant of a Type II supernova, and is estimated to
be about 60,000 years old. After all of this very long time has elapsed, we are still
receiving almost twice as much radiation at 400 MHz (in the radio range) from this
remnant as from all three of the historical (1006, 1572, and 1604) Type I supernova
remnants combined.178
There are a number of other remnants with radio emissions that are far above the
magnitudes that can be correlated with Type I. Also there are some remnants whose radio
emission is within the range of the Type I products, but whose physical condition
indicates an age far beyond the Type I limit. These must also be assigned to Type II. In
general, it is probably safe to say that unless there is some evidence of comparatively
recent origin, all remnants with substantial radio emission can be identified with Type II
supernovae, even though Type I events may be more frequent in the observable region of
our galaxy.
The conclusions as to the relative magnitude of the radio emission enable us to classify
the most conspicuous of the remnants, the Crab Nebula, as a Type II product. The radio flux from this remnant is about 50 times that of the remnant of the Type I supernova that appeared in 1006, which is of practically the same age. The Crab Nebula was
originally assigned to Type I by the astronomers, mainly on the basis of the differences
between it and Cassiopeia A, the remnant of a supernova that occurred about 1670 A.D.,
which was regarded as the prototype of the Type II remnant. More recently it has been
recognized that the differences between the Crab Nebula and the Type I remnants are
more significant. Minkowski, for instance, reports that "an unbiased assessment of the evidence leads to the conclusion that the Crab Nebula is not a remnant of a supernova of Type I."171
This nebula consists of two physically distinct components, "one is an amorphous distribution of gas . . . and the other is a chaotic network of filaments."179 In the center of
the nebula there is a dwarf star of the Type II class, the nature and characteristics of
which will be discussed in the next chapter. The presence of a star of this type definitely
identifies the nebula as a product of a Type II supernova large enough to produce
maximum speeds in the ultra high range.
On the basis of the theoretical considerations discussed in the preceding chapter, the
presence of ultra high speed matter in the inward-moving product of the Type II
supernova implies the existence of an observable outward-moving ultra high speed
component, which should consist of one or more jets of material. Instead, as indicated
above, the observers report the presence of a "chaotic network of filaments." So let us
take a look at the nature of these filaments.
The dictionaries define the word "filament" as a "slender, threadlike object." We are accustomed to the way in which astronomical magnitudes dwarf those of our ordinary experience. Indeed, we commonly use the term "astronomical" in the sense of "extremely large." But even so, it comes as somewhat of a shock when we are told that "on the average the bright filaments are 1.4 arc sec. in diameter, which corresponds to a width of 2.5 × 10¹² km."180 The "slender" object is more than a hundred billion kilometers in diameter. But this does give us an answer to the question as to the nature of the filaments. These "slender" filaments are clearly the same kind of entities that we call jets in a
different context. Their erratic courses are undoubtedly due to the resistance that they
meet as they make their way through the clouds of matter moving at lower speeds.
There is also a problem in connection with the so-called "amorphous" component of the nebula. It must consist in part of the low speed products of the supernova explosion, but the properties of this component do not resemble those of a hot gas and dust mixture. In fact, even though it is identified as a "gas," its spectrum is continuous, like that of a solid.
This seeming anomaly gives us the clue that points the way to an explanation of the
observations. An explosion that is powerful enough to give some of its products speeds in
the ultra high range also accelerates other portions of its products to speeds just below the
ultra high level; that is, the upper part of the intermediate range. These intermediate
products are moving in time only, and have no capability of independent motion in space,
but most of them are entrained in the moving components. Those that mix with the low
speed matter are carried along until the particles individually drop out of the stream. This
settling out process begins immediately after ejection. The outward motion of the
products of the Crab supernova has therefore left the volume of the nebula filled with
scattered particles of intermediate speed matter concentrated toward the center,181 rather
than toward the periphery, as in the shell structures that are typical of supernova remnants
in general.
As we saw in our examination of the theoretical aspects of the upper range speeds in the
preceding chapter, particles moving with speeds in the upper portion of the intermediate
speed range radiate in the same manner as those in the lower portion of the range below
unity; that is, with a continuous spectrum. The physical state of this material is the
temporal equivalent of the solid state: a condition in which the atoms occupy fixed
positions in three-dimensional time, and the emission is modified in the same manner as
in the solid state. Here we have another concept that is totally foreign to conventional
physical thought. For that reason it will undoubtedly be difficult for many persons to
accept. But it is clearly the kind of a result that necessarily follows from the general
reciprocal relation between space and time. The two speed ranges with continuum
emission are symmetrically related with respect to the natural datum level: unit speed.
Furthermore, the intermediate range continuum radiation is not limited to supernova
remnants. We will meet the same kind of radiation from matter in the same temperature
range later, under different circumstances.
The theoretical presentation in Chapter 15 also explains why the filaments, which are in a
still higher speed range, have a line spectrum. As brought out there, motion in a second
scalar dimension is incapable of representation in the conventional spatial reference
system, but the elimination of the gravitational effect by this motion does cause an
observable change of position in that system. This indirect result applies to the thermal
motion as well as to the unidirectional translational motion previously considered, but in
both cases the magnitude of the observed motion is subject to the limitations on the
gravitational speed in one dimension; that is, it is confined to the range below unity.
Thus, even though the speeds of the particles in the filaments are in the ultra high range,
the observable thermal effect is in the low speed range, and the radiation that is produced
has a line spectrum like that of an ordinary hot gas.
It has not been possible to extend the present investigation to an analysis of the spectra of
astronomical objects because of the amount of time that would be required for such an
undertaking. Some aspects of these spectra that are of special significance in connection
with the subjects under discussion will, however, be noted briefly as we proceed. In the
case of the Crab Nebula much stress has been laid by the astronomers on two points: (1)
that the radiation is non-thermal, and (2) that it is polarized. It will therefore be
appropriate to point out that, according to our theoretical findings: (1) all radiation from
objects with upper range speeds, except that generated by indirect processes such as the
one explained in the preceding paragraph, is non-thermal, and (2) all such radiation is
polarized as emitted. Where a lower polarization is observed, this is due to depolarizing
effects during travel of the radiation. A three-dimensional distribution of radiation is
impossible in a two-dimensional region.
As noted earlier, the observed characteristics of Cassiopeia A, the other very conspicuous
(at radio frequencies) supernova remnant, are quite different from those of the Crab
Nebula, even though it is now conceded (not without dissent) that both are Type II
remnants. Here again, there are two components of the remnant, but neither resembles a
component of the Crab Nebula. Both appear to consist mainly of local concentrations of
ordinary matter distributed in the volume of space occupied by the remnant. The objects
of one class are moving rapidly, and are located mainly at the periphery of the remnant in
what is commonly described as a shell. The other objects are larger, more evenly
distributed throughout the remnant, and nearly stationary. The shell is no doubt composed
of the outward-moving low speed explosion products. The problem of accounting for the
quasi-stationary objects in the context of conventional astronomical theory has been very
difficult; so difficult, in fact, that there is a tendency to try to dodge the whole issue, as in
the following statement:
The only possible interpretation of the stationary filaments in Cas A is that these
filaments were present before the supernova outburst.182
Here again we meet the assumption of omniscience that is so curiously prevalent among
the investigators of the least known areas of science. From the very start of the
investigation whose results are being reported in this work, the answers to outstanding
problems have almost invariably been found in areas in which the adherents of orthodox
theories have claimed that they have examined all conceivable alternatives. The
Cassiopeia A situation is no exception. The explanation that these authors characterize as
impossible can be obtained from a consideration of the theory that is being discussed in
this work.
There is no indication of the existence of a Type II dwarf in the remnant. We can
conclude from its absence that the Cassiopeia A supernova was not energetic enough to
produce significant amounts of ultra high-speed products. On this basis, the two
components of the remnant can be identified as low speed and intermediate speed products. This raises another issue, because intermediate speeds in the dense central core
of an exploding star would normally cause inward motion and production of a Type I
dwarf. No such product is observed. From its absence we can conclude that the star of
which Cassiopeia A is a remnant did not have a dense core; that is, it was a star of the
giant, or pre-giant, class, in an early stage before there was much central condensation.
The Type II explosion can take place at any stage of the stellar cycle. If it happens during a diffuse stage, the explosion involves the entire structure. The explosion forces are predominantly outward, and they are distributed so widely that they do not reach the ultra
high levels. In this case the intermediate speed products are entrained in the outgoing low
speed matter, and are distributed in the remnant in much the same manner as the
amorphous mass in the Crab Nebula, but in local concentrations because of the lower
density of the moving matter in which they are being carried.
Explosion of a relatively cool and extremely diffuse star would not be as spectacular an
event as an ordinary supernova. This is probably the reason, or at least a major part of the
reason, why there is no record of an observation of the supernova that produced
Cassiopeia A. The explanation of the strength of the radiation now being received from
this remnant, and the rather rapid decrease in the amount of this radiation, will become
apparent when the process by which the radiation is generated is described in Chapter 18.
From the explanations that have been given, it can be seen that the unique characteristics
of both Cassiopeia A and the Crab Nebula are due to the youth of these objects. These are
features of the very early post-explosion stages. Within a few thousand years these early
phases of the evolutionary development will be completed. The optically observable
activity in the remnant will then be confined almost entirely to the outer shell, where the
outward-moving low speed component is concentrated. Radio and x-ray emission will
continue on a reduced scale for a considerable period of time. The Vela remnant,
estimated to be about 10,000 years old, has already reached this more advanced age.

CHAPTER 17

Pulsars
As indicated earlier, the maximum product speeds of the least powerful Type II
supernovae, those in which the exploding star is relatively small, are in the intermediate
range. Like the fast-moving products of the Type I explosions, the products of these
smaller Type II supernovae are white dwarfs. On the average they are smaller than the
white dwarf products of the Type I supernovae, and their iron content is less, but they
follow the same evolutionary pattern. The ultra high-speed products of the more powerful
Type II explosions follow a different course. As we saw in Chapter 15, they move linearly
outward, and in the usual case ultimately arrive at a net explosion speed exceeding two
units, and disappear into the cosmic sector.
Those of the ultra high-speed products that are expanding in time and moving linearly in
space are fast-moving Stage I (not optically visible) white dwarfs. Their most distinctive
feature is the intermittent nature of the radiation that is received from them, and for this
reason they are called pulsars.
Up to the time when Quasars and Pulsars was published in 1971, about 60 pulsars had
been located. This number has now risen to over 300. Aside from the discovery of x-ray
pulsars and the identification of their properties, progress in the pulsar field during the
intervening years has consisted mainly of accumulating more data of the same nature as
that available in 1971. There has been a great deal of theoretical activity, but since this
has been based almost entirely on the "neutron star" hypothesis, no progress has been
made toward recognition of what this work has identified as the true nature of the pulsars.
This lack of basic progress is clearly demonstrated by the current inability to account for
the two fundamental processes that are involved. As reported by F. G. Smith in a review
of the existing situation, the manner in which the pulsar is produced by the supernova explosion "is not understood," and "little is known about... the mechanism of the radiation."183
Furthermore, no one can explain how the hypothetical neutron stars originate. As brought
out in Chapter 6, the arguments advanced in support of the assumption of a "collapse" under the influence of self-gravitation are absurd, and no other way of producing "degenerate matter" has been identified. But the astronomers continue to insist that
neutron stars must nevertheless exist.
Even now, however, we have no theories that satisfactorily explain just how a massive
star collapses to become a neutron star. We know that neutron stars are possible in our
universe only because we see that they are there—not because we understand how they
form.184 (Martin Harwit)
Harwit defines a neutron star as "a collapsed, compact star whose core consists largely of neutrons."185 Only one of the descriptive words in this definition is supported by the astronomical evidence. This evidence shows that the object that is being called a "neutron star" is indeed a compact object. But, as Harwit himself admits, there is no evidence to support the assertion that it is a "collapsed" star. No one can explain how a star could have collapsed. Nor is there any evidence that this object has a core, or that it is composed, to any significant extent, of neutrons. The definition does not define the observed object; it defines a purely hypothetical object dreamed up by the theorists. Harwit says that "we see that they [the neutron stars] are there." This is definitely not
true. He and his colleagues see that compact stars are there, but the further assertion that
these are neutron stars is pure assumption. It is simply another of the many instances
where astronomical thought has lost touch with reality because of the prevailing tendency
to assume that the most plausible theory available at the moment must necessarily be
correct, regardless of how many questions it leaves unanswered, and how often it
conflicts with the evidence from observation. The case in favor of the neutron star
hypothesis is the same "there is no other way" argument that we have met so often in the
earlier pages of this and the preceding volumes. Of course, the practice of arriving at
conclusions by a process of elimination does have merit under appropriate circumstances.
It is not the use, but the misuse, of this argument that is subject to criticism. As Fred
Hoyle pointed out in connection with one of these misuses:
So the argument amounts to nothing more than the convenient supposition that something
which has not been observed does not exist. It predicates that we know everything.186
This is the crux of the situation. The use of the "no other way" argument is legitimate
only in those cases where we have good reason to believe that we do know everything
that is relevant. In any case where the relevant factors are well understood, the
elimination of all but one of the recognized possibilities creates a rather strong
presumption (although still not a proof) that the one remaining possibility is correct,
providing that this possibility does not involve any conflict with observation or
measurement. The serious mistake that is so often made in present-day scientific practice,
not only in astronomy, but in other areas of physical science as well, is in accepting this
kind of an argument in cases, such as the assumption of the existence of neutron stars,
where the foregoing requirements are not met. The result is that the distinction between
fact and fancy is lost.
The distribution and observed properties of the pulsars indicate that they are situated
within, or close to, the Galaxy. Since one of them is associated with the Crab Nebula, and
another with the Vela Nebula, both supernova remnants, it seems evident that the pulsars
are products of supernovae. The validity of this currently accepted conclusion is
confirmed by our theoretical development. The fact that both of these objects are located
in Type II remnants also supports our finding that the pulsars are products of Type II
explosions only. Some members of the astronomical community are reluctant to accept
this conclusion, as it is difficult to reconcile with current views as to the nature of the
pulsars. Shklovsky, for example, admits that "The two known pulsars in SNR are associated with SN II explosions,"187 but nevertheless expresses the belief that pulsars may yet be discovered in association with Type I remnants. The conclusion that no pulsars form in Type I explosions is "at least premature," he contends. His argument is
that the light curves of all supernovae are best explained by continued input of energy
from pulsars within the remnants, in the manner assumed in the case of the Crab Nebula,
and that the pulsars therefore probably exist in Type I remnants even though none have
been detected.
The truth is that Shklovsky's argument is very much stronger if it is turned upside down.
It contains three statements: (1) the energy in the Crab Nebula is supplied by the pulsar
(neutron star in current thought), (2) the power supply is the same in all remnants, and (3)
the observations show that there are no pulsars in Type I remnants. Shklovsky assumes
that statement (1) is valid, and deduces from the foregoing that statement (3) is false. But
(3), the observation, is far more reliable as a premise on which to base our reasoning. If
we take this observation at its face value, we deduce that statement (1) is false, and that
the energy of the Crab Nebula is not supplied by the pulsar. This agrees with the
conclusion that we reach by deduction from the postulates of the theory of the universe of
motion.
Those astronomers who reject the idea that there are concealed pulsars in Type I
remnants have no explanation for the restriction of the pulsars to Type II events, but
generally agree with F. G. Smith that "the association with Type II supernovae seems established without further argument."188
No pulsars have been discovered in external galaxies, but as noted in Chapter 15, there
are a few remnants of Type II supernovae in the Large Magellanic Cloud, indicating that
pulsars occasionally do appear in relatively small galaxies, as well as in the larger
aggregates. This is consistent with what we have previously found with respect to the
existence of a few older stars in the younger galaxies.
In a number of instances, the observations of the pulsars arrive at results that seem
contradictory. It has been found that many, probably most, of them are moving rapidly,
with speeds often exceeding 100 km/sec.189 Furthermore, the average height of the
pulsars above the galactic plane is considerably greater than is normal for the objects
from which they presumably originated. These motions and positions are seemingly
inconsistent with the fact that the Crab and Vela pulsars have remained near the center of
their respective remnants.
In the universe of motion, the spatial position of the pulsar and its observable spatial
speed are related to the gravitational retardation. The explosion speed, and the resulting
change of position in a second scalar dimension of space, are not capable of
representation in the spatial coordinate system, but, as we saw in Chapter 15, when a
portion of the gravitational motion is eliminated by the oppositely directed motion
generated by the explosion, the outward motion that was being counterbalanced by
gravitation becomes effective, and appears as an observable spatial motion equal in
magnitude and opposite in direction to the gravitational motion that is neutralized. Thus,
during the first portion of the outward travel of the ultra high speed explosion product,
there is an observable spatial speed, and a corresponding change of position in the
reference system, the magnitude of which depends on the strength of the gravitational
force that has to be overcome.
The gravitational effect on an object moving through a portion of the Galaxy is
continually changing. Initially the exploding star is outside the gravitational limit of its
nearest neighbor (unless it is a member of a double or multiple system), and the
gravitational restraint on the pulsar is mainly due to the mass of the slow-moving
remnants of the explosion. This effect decreases rapidly, and as the pulsar moves farther
away from the initial location, the integrated effect of all mass concentrations within
effective range becomes the dominant factor.
This variation in the gravitational restraint explains some of the observations that
otherwise seem mutually contradictory. All pulsars are moving. If the supernova
explosion occurs in an isolated star in the outer regions of the galaxy, the gravitational
restraint on the pulsar is relatively weak, and the outward movement resulting from
elimination of the gravitational effect is correspondingly small. The Crab pulsar, for
example, is moving very slowly with respect to the nebula, and according to present
estimates it will not escape for about 100,000 years.190 At present it is still near the center
of the nebula.
On the other hand, pulsars produced by explosions that are more centrally located in the
galaxy are subject to substantial gravitational forces due to the effects of the central mass
as a whole. In this case, the spatial component of the explosion speed, which causes a
change of position in the space of the reference system, is relatively large. It follows that,
as a rule, we can expect to find the pulsars produced by isolated stars in the outer regions
of the galaxy
moving quite slowly and located in or near the remnants, whereas those produced in
central locations will be moving rapidly, and most of them will be found well away from
the galactic plane. The pulsars produced in binary or multiple star systems, or in clusters,
are subject to more gravitational restraint than the single stars, and if they are located in
the outer regions they follow an intermediate course, not attaining the high speeds of
those produced in the central regions, but moving fast enough to leave the vicinity of the
remnants within a few thousand years. This accounts for the absence of pulsars from
most of the observable remnants.
Another apparent anomaly is that the observed number of pulsars in the Galaxy seems to
require a rate of formation that is considerably in excess of the observed frequency of
Type II supernovae. Smith calls this "a serious discrepancy between the theory of origin of pulsars in supernovae, and the observations of their ages and numbers in the Galaxy."191

Our findings clarify this situation. On the basis of theoretical conclusions reached in the
preceding discussion, the number of Type II supernova explosions occurring in the
Galaxy is not only ample, but greatly in excess of that required to account for the
observed number of pulsars. However, our findings are that the oldest stars, the ones that
reach the age limit and explode as supernovae, are concentrated mainly in the central
regions of the galaxy, the oldest portions of the structure. The great majority of the Type
II supernovae therefore take place in these central regions, where they are unobservable
because of the strong background radiation and obscuration by intervening material.
Furthermore, since the stellar aggregates have the general characteristics of viscous
liquids, they resist penetration by the explosion products. In the central regions of the
largest galaxies, the overlying matter confines all of the explosion products, and the
pulsars included in these products are, like the supernovae, unobservable individually. In
galaxies of less than maximum size, such as the Milky Way, some of the pulsars
originating in the outer portions of the central regions are able to make their way out to
join the pulsars originating from isolated supernovae in the galactic disk. Thus there is no
difficulty in accounting for the number of Type II supernovae required in order to support
the estimated pulsar population.
Conventional pulsar theory rests to a large extent on the current interpretation of the
observations of the Crab Nebula. According to these ideas, the emission of radiation from
the nebula is powered by energy from the pulsar located at its center. But only a few of
the known pulsars are associated with supernova remnants (only two such associations
are definitely confirmed). Some other explanation of the long-continued emission of
energy from the other remnants is therefore required in any event, and when this is
available there is no need for a special process in the Crab Nebula. The theory of the
universe of motion supplies a source of energy that is independent of the existence of
pulsars in the remnants.
The most characteristic property of the pulsars, the one that has given them their name, is
the pulsating nature of the radiation that we receive from them. In the early days of the
pulsar investigation, just after the discovery of the first of these objects, the extreme
regularity of the pulses and the absence of any known natural process whereby they could
be generated, suggested that the pulses might be artificially produced, and for a time they
were facetiously called messages from little green men. When more pulsars were
discovered it became evident that they are natural phenomena, and the little green men
had to be abandoned, but no explanation of the origin of the pulsed radiation that the
astronomers have been able to put together thus far is any less fanciful than the little
green men. As F. G. Smith, one of the prominent investigators in the field, said in the statement previously quoted, "little is known" in this area.
The big problem is that natural processes capable of producing regularly pulsed radiation
are hard to find within the arbitrarily circumscribed boundaries of conventional physical
science. The only such process thus far suggested that has received any appreciable
degree of support is rotation. In the absence of any competition, this is the currently
accepted hypothesis, although, as indicated in the statement by Smith, it is recognized
that this explanation has not been developed to the point where it can be considered
satisfactory. It depends too much on the assumed existence of special conditions of which
there is no observational evidence, and it leaves a number of the observed properties of
the pulsars unaccounted for. Furthermore, when the rotation process is applied to
explaining the periodicity, the theorists are precluded from using it to explain some other
phenomena that, on the basis of the observational evidence, and independently of any
theory, are almost certainly due to rotation—the "drifting sub-pulses," for example.
In the universe of motion, the periodicity of the radiation received from the pulsars is a
necessary consequence of the property that makes them pulsars: the ultra high speed. An
object moving in the explosion dimension with a speed in this ultra high range arrives at
the gravitational limit when its net speed in this dimension (the explosion speed minus
the effective gravitational speed) reaches unity. At this point the effective gravitational
speed, as we saw in Chapter 14, is equal to the oppositely directed unit speed of the
progression of the natural reference system. On the basis of the theory of radiation set
forth in the earlier volumes, this means that at the gravitational limit radiation is being
emitted at such a rate that we receive one unit of radiation from each mass unit per unit of
area per unit of time. At distances beyond this limit, the average amount of radiation
received is less because of the further distribution over equivalent space. But radiation is
a type of motion, and motion exists only in units. The decrease in the average amount of
radiation received can therefore be accomplished only by a reduction in the number of
units of time during which radiation is being received. Radiation from a pulsar beyond
the gravitational limit is received at the same strength as that from one at the gravitational
limit, but only during a constantly decreasing proportion of the total time. All of the mass
units of a star enter the pulsation zone within a very short time, only a small fraction of
the observed period. Thus, even though the total radiation from the star is distributed over
an appreciable time interval, it is received as a succession of separate pulses.
All pulsar periods are lengthening (except in the pulsating x-ray emitters, which we will
consider in Chapter 19). The period is thus clearly an indication of the age of the pulsar,
but the specific nature of the relation is not immediately apparent. At first it was believed
that the age could be determined by simply dividing the period by the rate of change, and
"characteristic ages" thus defined are found in reference works. But it is now evident that
the situation is more complicated, and that most of the ages thus calculated are too high.
The first study of the pulsar ages in the context of the Reciprocal System of theory
likewise took a wrong turn, and arrived at ages that are now seen to be too low. As
pointed out in Volume I, the status of this system of theory, the theory of the universe of
motion, as a general physical theory means that it should be able to provide the correct
explanation for any physical situation. But this explanation does not emerge
automatically. A substantial amount of study and investigation may be required in any
specific case before the correct answers are obtained. The first such study frequently
turns out to be deficient in some respect. Relevant factors may have been overlooked, or
may not have been taken fully into account, even where the development of theory may
have been correct, so far as it went. This was the case in the original pulsar study, which
we now find arrived at results that are correct in their general aspects, but require
modification in some of the details. A full-scale review of the pulsar phenomena
undertaken in connection with the preparation of the text of this new edition has clarified
a number of points that were not correctly interpreted either in conventional astronomical
thought or in Quasars and Pulsars. This clarification is still not complete, but some
significant advances in understanding have been accomplished.
Fig. 24 is a diagram that is found in many recent discussions of the pulsar period
relationship, with some lines added for purposes of the present review. It is recognized
that the diagonal line at the right of the diagram, with a slope proportional to the fifth
power of the period, represents the cut-off at which the pulsed radiation ceases. It is also
realized that there must be some significance in the absence of observations that fall in
the lower left part of the diagram. But, in essence, what this diagram does for the
astronomers is to identify some of the questions. It does not give the answers.
In the context of the theory of the universe of motion, the outer boundary of the material sector, the sector of motion in space, is a spatial limit. Since space and time, in this sector, are subject to the relation s = at², where a is a constant applicable to the specific phenomenon involved, the time magnitude that enters into the quantities related to the sector limit is t². Furthermore, the sector limit applies to the total motion, the motion in all three scalar dimensions; that is, to t⁶. The time interval between successive radiation pulses, the period of the pulsar, is related to the total time. The rate of change of the period, as observed, is therefore the derivative of P⁶. The period decreases with time, but because of the inversion at the unit level, the applicable quantity is not the derivative of the reciprocal of P⁶, but the reciprocal of the derivative of P⁶; that is, the reciprocal of 6P⁵.
This indicates that the points farthest to the left in Fig. 24 define another line with the same slope as the cut-off line on the right of the diagram, intersecting the latter at a period of about 0.62 seconds, as shown in the diagram. This downward-sloping line is the path of the period-derivative relation for a pulsar that conforms to the 1/(6P⁵) relation without modification, and 0.62 seconds is the period at which the pulsar reaches the sector limit. As we saw in Chapter 15, however, there are eight ways in which the motion in the region of equivalent space can be distributed, only one of which results in transmission of the effects across the boundary into the three-dimensional region. Where the motion is distributed over n of the eight, the observed period is increased to nP. Or, if we let P represent the observed period, the true period becomes P/n, and the reciprocal of the derivative is (1/6)(n/P)⁵. Each distribution thus has its separate path extending from the same initial point to a terminus on the cut-off line at a period of 0.62n seconds.
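For readers who wish to trace these paths numerically, the following short Python sketch (not part of the original text) tabulates the family of lines just described. It evaluates the reciprocal-of-derivative relation (1/6)(n/P)⁵ in natural units and the terminal period 0.62n for a few values of the distribution factor n; any scale factor needed to place the lines on an observed period-derivative diagram such as Fig. 24 is left open here.

# A minimal sketch, assuming only the relations stated above; values are in
# natural units, so only the shapes and end points of the lines are meaningful.
P0 = 0.62  # empirically derived reference period, seconds (sector limit, n = 1)

def path_value(P, n):
    """Reciprocal of the derivative of (P/n)^6, i.e. (1/6)(n/P)^5."""
    return (1.0 / 6.0) * (n / P) ** 5

def cutoff_period(n):
    """Period at which the path for distribution factor n meets the cut-off line."""
    return P0 * n

for n in (1, 1.5, 2, 3):
    P_end = cutoff_period(n)
    print(f"n = {n}: path ends at P = {P_end:.2f} s, "
          f"relative value there = {path_value(P_end, n):.3f}")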
While the observed points clearly follow the theoretical lines, as shown in Fig. 24, in some instances there is also considerable scatter in the diagram, the significance of which is not yet clear. The existence of half-integral effective values of n is undoubtedly one of the contributing factors. As we have noted frequently in the pages of the earlier volumes, in cases where the probability considerations favoring n and n + 1 are nearly equal, the result often is that half of the units involved take the n value and the other half the n + 1 value, making the effective magnitude n + ½. The existence of an evolutionary line based on n = 1½ is so evident that this line has been included in the diagram.
Similar half-integral values may exist throughout the total range, and this may be all that
is needed to explain the scatter of the observed points. If not, there probably are some
transitions from one value of n to the next as the net speed increases.
At the present stage of the theoretical development it is not possible to arrive at a firm
theoretical value for the reference magnitude, the period corresponding to the sector limit
where n = 1. In fact, this period may be, to some extent, variable. The value 0.62 seconds
quoted in the foregoing discussion has been derived empirically by fitting the theoretical shape of the diagram in Fig. 24 to the observed points.
The pulsar age involves another reference value for which we will have to use an empirically determined magnitude, 3.25 × 10⁵ years, pending further theoretical study. The current age of the pulsar is the product of this value, the distribution factor n, and the square of the period in terms of the 0.62 unit; that is, (P/0.62)². For the Crab pulsar, which is designated 0531+21, and from which the value of the age constant was derived, we have (0.033/0.62)² × 1 × 3.25 × 10⁵ = 921 years. The Vela pulsar, 0835-45, is on the 1.5 evolutionary line, and its theoretical age is (0.089/0.62)² × 1.5 × 3.25 × 10⁵ = 10,046 years. This agrees with the age of the supernova remnant, estimated at about 10,000 years. The theoretical life spans of these two pulsars, if they stay on their present evolutionary paths, are 3.25 × 10⁵ years and 1.10 × 10⁶ years respectively. The maximum concentration of pulsars is on, or near, the lines with n values of 2 and 3. The corresponding lifetimes are 2.6 × 10⁶ and 8.8 × 10⁶ years. These results are consistent with current estimates based on observation of various pulsar characteristics. F. G. Smith, for instance, arrives at this conclusion: "We therefore take... the maximum lifetime for most pulsars as 3 × 10⁶ years."192
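The arithmetic of the preceding paragraph can be checked with a few lines of Python (not part of the original text). The only inputs are the empirically determined constants quoted above, 3.25 × 10⁵ years and 0.62 seconds, together with the observed periods and n values of the two pulsars.

# A minimal sketch, assuming only the age relation stated in the text:
# age = 3.25e5 years x n x (P / 0.62)^2, with the life span being the age
# reached at the cut-off period 0.62 n seconds.
AGE_CONSTANT = 3.25e5   # years, empirically determined reference magnitude
P_REF = 0.62            # seconds, period corresponding to the sector limit (n = 1)

def pulsar_age(period, n):
    """Current age in years for an observed period (seconds) and distribution factor n."""
    return AGE_CONSTANT * n * (period / P_REF) ** 2

def pulsar_lifetime(n):
    """Theoretical life span: the age reached at the cut-off period 0.62 n seconds."""
    return AGE_CONSTANT * n ** 3

print(round(pulsar_age(0.033, 1)))      # Crab pulsar (0531+21): ~921 years
print(round(pulsar_age(0.089, 1.5)))    # Vela pulsar (0835-45): ~10,046 years
print(pulsar_lifetime(1), pulsar_lifetime(1.5))   # ~3.25e5 and ~1.10e6 years
print(pulsar_lifetime(2), pulsar_lifetime(3))     # ~2.6e6 and ~8.8e6 years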

From the theoretical explanation of the nature of the pulsation it is evident that the shape,
or profile, of the pulse is a reflection of the shape of the radio structure of the object from which the radiation is emitted. The dimensions of the pulsar in the line of sight determine the width and amplitude of the pulse. Thus the pulse profile is a representation
of a cross-section of the pulsar or, more accurately, the summation of a series of cross-
sections.
The most common profile, a single hump, with or without irregularities, clearly originates
from a globular object, which may be somewhat irregular. This simple profile, called Type S, predominates in the younger pulsars, those in the upper left of Fig. 24. As explained in Chapter 15, however, an object whose components are moving at speeds in the ultra high range, between two and three natural units, appears to observation at radio frequencies as a double structure. The separation, initially zero, increases with the distance, and most of the older pulsars therefore have complex profiles, Type C, with
double or multiple peaks.
As the rotation of the pulsar carries its various features across the line of sight, the
amplitude of the radiation varies, giving rise to variations in the individual pulses. But
when the data on these individual pulses are combined into an integrated profile that
reflects the total emission during the full rotational cycle, the profile remains constant,
except to the extent that actual changes in the pulsars (movement of local concentrations
of matter, etc.) take place. The integrated profiles therefore show "well-organized and characteristic behavior."193
The rotation imparted to the pulsar by the original explosion is generally quite limited,
and ordinarily it takes from 500 to 2000 or more pulses for the integrated pulse profile of
a young pulsar to reach the stable form which indicates that a full rotational cycle has
elapsed. Interaction with the environment tends to increase the rotational speed, and
many of the older pulsars, those approaching the cut-off line in Fig. 24, are rotating fast
enough to cause an observable drift of the sub-pulses. "The sub-pulses of successive pulses tend to occur at earlier phases, so that they drift fairly uniformly across the profile."194
It has been noted by observers (see, for instance, Manchester and Taylor, reference 195)
that differences between the pulse shapes at radio and optical frequencies, together with
the discontinuity between the corresponding spectra, suggest different emission
processes, whereas the time coincidence of the peaks indicates that the processes are
closely related. These seemingly contradictory observations are explained by our finding
that the time pattern of the pulses of radiation is independent of the process by which the
radiation is produced. At any specific time, all of the radiation emitted from the matter in
a specific section of the pulsar becomes observable, irrespective of its origin.
Inasmuch as the pulsation is due to the attenuation of the radiation by distance, rather
than to any feature of the emitter or the emission process, radiation from all objects
moving at ultra high speeds is received in pulsed form if emitted during the time that the
object is passing through the pulsation zone, irrespective of the nature of the emitting
object. However, the radiation from the giant clouds of particles that constitute the
second type of ultra high speed explosion product is too diffuse to be observed, while that
from galaxies or galactic fragments is unobservable because the individual stars of which
these aggregates are composed are so far apart that the pulsations in the radiation
received from them are not synchronized.
Since the pulsar radiation originates in a two-dimensional region, it is distributed two-
dimensionally; that is, it is polarized.
Individual pulses, and especially those that have a simple Gaussian shape, are highly
polarized . . . The polarization often reaches 100 percent. 196 (F. G. Smith)
According to the theory of the universe of motion, all radiation originating in the
intermediate speed range is 100 percent polarized at the point of origin, but there are
many depolarizing influences along the line of travel in most cases. The observed
percentage of polarization is an indication of the amount of depolarization rather than of
the initial situation. Thus we note that the radiation from the short-period pulsars with
simple pulse profiles, classified as Type S, which have not yet had time to separate from
the cloud of debris at the site of the supernova explosion, is weakly polarized, while that
from the long-period complex (Type C) pulsars shows strong polarization. 197 Similarly,
the sub-pulses and micropulses are, in general, more highly polarized than the integrated
profiles, a difference that is generally attributed to depolarization.198
Development of the details of the universe of motion as they apply to the pulsar
phenomena has not yet been carried far enough to arrive at firm conclusions concerning
the quantitative relations. We can, however, obtain some tentative results that are
probably at least approximately correct. According to the findings described in the
preceding pages, the size of the pulsar is indicated by the width of the pulse. The basic
period, we found empirically, is 0.62 seconds. The equivalent space is 0.62 × 3 × 10⁵ km = 1.86 × 10⁵ km. The average width of the pulse is reported to be about three percent of the period.199 The indicated diameter of the average pulsar is then 0.03 × 1.86 × 10⁵ km = 5580 km. On this basis, most pulsars are in the range from 5000 to 6000 km in diameter.
This is within the white dwarf range.
We may now divide the corresponding circumferential distance by the time required to
stabilize the integrated pulse profile, and arrive at an approximate value of the equatorial
speed of rotation. For a rapidly rotating pulsar that reaches a stable pulse form in 10
pulses of one second each, the equatorial speed is about 1800 km/sec. This is very fast,
but not out of line for an object that has been traveling at an extremely high speed. It is an
order of magnitude less than some of the rotational speeds suggested in connection with
previous theories.200 Where 1000 pulses are required before the integrated profile is
stable, the equatorial speed is less than 20 km/sec.
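The two estimates just described can be reproduced with a short Python sketch (not part of the original text). The 0.62 second basic period, the three percent average pulse width, and the pulse counts are the figures quoted above; everything else follows from them.

import math

# A minimal sketch of the size and rotation estimates described in the text.
C_KM_PER_S = 3.0e5            # speed of light, km/s
BASIC_PERIOD = 0.62           # seconds, the empirically derived basic period
PULSE_WIDTH_FRACTION = 0.03   # average pulse width as a fraction of the period

equivalent_space = BASIC_PERIOD * C_KM_PER_S        # 1.86e5 km
diameter = PULSE_WIDTH_FRACTION * equivalent_space  # ~5580 km, within the white dwarf range

def equatorial_speed(n_pulses, pulse_period=1.0):
    """Equatorial rotation speed (km/s) if the integrated profile stabilizes
    after n_pulses pulses of pulse_period seconds (one full rotation)."""
    return math.pi * diameter / (n_pulses * pulse_period)

print(diameter)                       # ~5580 km
print(round(equatorial_speed(10)))    # ~1800 km/s for a rapidly rotating pulsar
print(round(equatorial_speed(1000)))  # < 20 km/s when 1000 pulses are required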
One of the major advantages of a general physical theory is that it is a theory of the
unknown physical phenomena of the universe, as well as a theory of the known
phenomena. Of course, as long as a phenomenon remains unknown it is not particularly
helpful to have a theory that explains it, unless that theory helps, in some way, to make
discovery of the phenomenon possible. But once the hitherto unknown phenomenon is
discovered, the existence of a general theory leads almost immediately to an
understanding of the place of this phenomenon in the physical picture, something that
may take a long time to achieve if no theory is available in advance.
In the case of the pulsars, the development of the astronomical aspects of the theory of
the universe of motion had already been carried far enough prior to their discovery to
provide an explanation of the nature and properties of the general class of objects to
which they belong: ultra high speed products of stellar disintegration at the age limit. The
deductions made in the course of the original investigation, and published in 1959, will
be discussed in Chapter 20. This early investigation was directed primarily at the
products of galactic explosions, but as soon as the pulsars were discovered, it was evident
that these objects belong in the same class as the galactic explosion products whose
existence was predicted in the 1959 publication, differing only in those respects where
size is a significant factor.
Conventional science, on the other hand, has no general physical or astronomical theory,
and this has left the pulsar field wide open for speculation. The theorists' imaginations
have had full play. As matters now stand, the prevailing opinion is that the pulsars belong
to the hypothetical category of "neutron stars." Where difficulty is experienced in fitting the neutron stars into the picture, a further exercise of the imagination produces a "black hole."
In considering the conflicts between current astronomical thought and the theory of
pulsars derived from the postulates of the Reciprocal System, it should be recognized that
there is no independent evidence of the existence of such things as neutron stars or black
holes. They are purely hypothetical, and they have been introduced only because
accepted ideas as to the nature and properties of the white dwarfs impose limits on the
roles that these objects can play in physical phenomena; limits that are wholly theoretical
and have no factual support. From an observational standpoint, all of the high-density
stars are alike. There is no physical evidence to indicate any division by sizes of the
nature required by present-day theory. The truth is that the inability of the conventional
white dwarf theory to account for the full range of these observationally similar objects is
a serious defect in the theory; one which, in most fields of science, would be enough to
prevent its acceptance. But in this case, the weakness in the white dwarf theory is used as
an argument in favor of the black hole theory, or at least, as conceded by some of the
proponents of the theory, it is a "key link" in that argument.201
When the existence of matter at extremely high densities was first brought to light by the
discovery of the white dwarf stars it was found possible to devise a theory of this density
that appeared plausible in the context of the facts that were known at that time. But later,
when the same phenomenon—extremely high density—was encountered in the quasars,
where the white dwarf theory that had been constructed is obviously inapplicable, instead
of taking the hint and reexamining the white dwarf situation, the theorists directed their
efforts (so far unsuccessfully) to finding some different explanation that would fit the
quasars.
Then, when the same extremely high density showed up in the pulsars, still another
explanation was required, and this time the neutron star hypothesis was invented. Further
discoveries have revealed the existence of extremely high density in material aggregates
of other kinds where neither white dwarf theory nor neutron star theory meets the
requirements. So here we must have another new theory, and the resourceful theorists
have brought forth the black hole. Thus, in order to explain the different astronomical
manifestations of one physical phenomenon—extremely high density of certain material
aggregates—we have an ever-growing multitude of separate theories, one for the white
dwarfs, one for the pulsars, at least two for the x-ray emitters, several for the dense cores
of certain types of galaxies, and no one knows how many for the quasars.
Even in astronomical circles, the absurdity of this situation is beginning to be recognized.
For instance, M. Ruderman made this comment recently:
Theoreticians have apparently found it easy to understand them [the pulsars] for they
have produced not only a theory of pulsars but dozens of theories of pulsars.202
The application of the Reciprocal System of theory to this problem merely accomplishes
something that was long overdue in any event: a reevaluation and reconstruction of the
entire theory of extremely dense aggregates in the light of the increased amount of
information that is now available. This theoretical development shows that the extremely
high density results, in all cases, from the same cause: component speeds exceeding the
speed of light, unit speed in the universe of motion. All of the stars with extremely high
density, regardless of whether we observe them as white dwarfs, novae, pulsars, x-ray
emitters, or unidentified sources of radio emission, are identically the same kind of
objects, differing only in their speeds and in the current stage of their radioactivity.
Quasars are objects of the same nature, in which the extremely fast-moving components
are stars rather than atoms and particles.

CHAPTER 18

Radiative Processes
Aside from what can be learned from particles or aggregates of matter encountered by the
earth in the course of its motion through space, empirical information about astronomical
entities and phenomena comes almost entirely from incoming electromagnetic radiation.
Until 1932 the observations of this radiation were limited to the optical range and a
portion of the adjoining infrared. In that year radiation at radio wavelengths originating
from extraterrestrial sources was detected. Sixteen years later, x-ray radiation from
astronomical sources joined the list, and gamma rays soon followed. In the meantime,
coverage of the infrared range has gradually been extended. As matters now stand, the
entire electromagnetic spectrum is supplying astronomical information.
The most significant result of this widening of the scope of the observations is not the
increased quantity of information that has been obtained, but the much greater variety of
information. The new observations have not only brought to light new aspects of known
astronomical objects, but have resulted in the discovery of classes of objects that were
previously unknown and unsuspected. The most unexpected feature of these new kinds of
objects, and the most difficult to explain on the basis of conventional astronomical
theory, is the magnitude of the non-thermal components of the radiation being received.
Non-thermal radiation plays only a relatively minor role in the astronomical phenomena
that were known prior to the recent opening up of the high and low frequency ranges of
the spectrum. But the radiation from many of the newly discovered objects is mainly non-
thermal. This has confronted the astronomers with a problem for which they have not yet
been able to find a satisfactory solution within the boundaries of accepted thought.
"We must admit at the outset," says F. G. Smith, "that we have a very poor understanding
of the processes by which pulsars radiate." 203 The primary radiation from these and the
other very compact astronomical objects is definitely non-thermal. As expressed by M.
and G. Burbidge, with particular reference to the continuous radiation from the quasars,
"it is clear that the observed continuous energy distribution does not accord with any
model in which the radiation is emitted thermally from a hot gas." These authors then
assert that "There are only two processes that appear to be possible in this situation. They
are synchrotron emission and emission by the inverse Compton process." 204
This is the same kind of situation that we have encountered so often in the subject areas
examined earlier. No satisfactory explanation is available for the event or phenomenon
under consideration within the limits of existing physical theory, and the investigators are
unwilling to concede that the fault may lie in that theory. The prevailing policy, therefore,
is to take what they consider the least objectionable of the known alternatives, the
synchrotron process, and to accept it as the correct explanation, on the strength of the
assertion that "there is no other way." Time after time in the pages of this and the
preceding volumes we have encountered this assertion, and invariably our theoretical
development has shown that there is another way, one that does lead to a satisfactory
answer. So it is in the present case.
Actually it should be obvious that synchrotron emission is not the kind of process that
meets the requirements as the principal source, or even a major source, of non-thermal
radiation from astronomical objects. It is clearly one of the class of what we may call
incidental processes of generating radiation, processes that produce limited amounts of
radiation under very special conditions, and have no significant impact on the radiation
situation in general.
The basis of the synchrotron process is the property of electrons whereby they radiate if
they are accelerated in a magnetic field. Both an adequate supply of very high energy
electrons and a magnetic field of the necessary strength are therefore required in order to
make this kind of radiation possible. Such a combination is present only under very
unusual circumstances. Indeed, there is little, if any, evidence that the required conditions
actually exist anywhere in the astronomical field. Thus this currently accepted theory
relies on the existence of very special and uncommon conditions to account for
phenomena that are so common and so widely distributed that it is almost self-evident
that they are products of the normal evolutionary development; features of the matter
itself, rather than processes originated by conditions in the environment.
Strong magnetic fields are relatively uncommon. Large concentrations of "relativistic"
electrons—that is, electrons with extremely high speeds—are still less common. As
expressed by Simon Mitton, "our general experience is that sufficiently large volumes of
space are essentially electrically neutral." 205 Furthermore, there are serious problems in
accounting for the containment of the hypothetical concentrations of electrons, and the
persistence of these concentrations over the long periods of time during which the non-
thermal radiation is being emitted. And no one is facing up to the problem of
explaining how the energy of these electrons is maintained during these long periods of
time. The output of energy from many of the sources is enormous, and the theorists are
not even able to account for the generation of these huge amounts of energy, to say
nothing of explaining how that energy is continually converted into the form of high
energy electrons.
The prevailing tendency is to assume that the synchrotron process produces the non-
thermal radiation, and then to deduce from this that the conditions necessary for the
operation of the process must exist in the objects from which the radiation originates. The
following statement from Bok and Bok is typical:
The fact that radio-synchrotron radiation is observed as coming from the direction of
known supernova remnants, such as the Crab Nebula, indicates that large-scale magnetic
fields are associated with them.206
The situation here is very similar to that of the "neutron stars" discussed in the preceding
chapter. In that case, the argument goes like this: (1) pulsars exist, (2) they are thought to
be neutron stars, (3) consequently, some method of producing neutron stars must exist,
even though no one is able to suggest a feasible process for their production. Similarly, in
the present instance, we are offered this argument: (1) non-thermal radiation exists, (2) it
is thought to be synchrotron radiation, (3) consequently, the conditions necessary for the
production of synchrotron radiation (such as strong magnetic fields) must exist, even
though there is no observational indication that this is true.
Other investigators go still farther, and find "proofs" of the validity of the identification
of the non-thermal radiation as a product of the synchrotron process in some of the
characteristics of the radiation, such as the polarization. Here the argument is (1)
synchrotron radiation is expected to be at least partially polarized, (2) the radiation from
some of the non-thermal sources is totally or partially polarized, (3) therefore the
observed radiation must be synchrotron radiation. In order to "save" this obviously
invalid conclusion, the theorists invoke the "no other way" argument, and assert that the
synchrotron process is the only possibility (the inverse Compton process suggested in
reference 204 is usually ignored). This is a flagrant example of the presumptuous attitude
criticized by Hoyle: "It predicates that we know everything."
Similarly, Shklovsky deduces theoretically that synchrotron radiation from distributed
sources such as the supernova remnants should decrease rapidly as these sources expand.
He then asserts categorically that "The detection of the theoretically predicted rapid
decline in the radio flux of Cassiopeia A affords a direct proof of the synchrotron theory
and all its implications." 207
Such assertions are preposterous, regardless of how eminent their authors may be, or how
widely they are accepted. All that has been demonstrated by the agreement with
observation in each of these two instances is that, in some cases, the synchrotron theory
meets one of the many requirements for validity. The "direct proof" claimed by
Shklovsky can come only from meeting all of these requirements in all cases—at least all
known cases. At the current stage of astronomical knowledge, no theory of the non-
thermal radiation can legitimately claim to be verified by astronomical observation. The
observational data are simply far too limited. The astronomers admit that their theory
lacks credibility in application to extreme situations. For instance, we are told that
"synchrotron theory is severely strained to explain the radiation from highly polarized
quasars." 208 It is increasingly evident that some less restricted explanation has to be
found for the extremely powerful and highly variable emission from these and other
major radiation sources. But a theory that is capable of accounting for these powerful
emissions can be applied to the simpler situations as well. There is no need for the
synchrotron process.
The truth is that the astronomers have not yet realized that their discovery of the very
energetic compact extra-galactic objects has opened up a totally new field of study. The
strong non-thermal radiation from these objects, including vast outpourings of radiant
energy unparalleled elsewhere in the universe, is so different in magnitude, and in its
widespread distribution, from the relatively insignificant terrestrial phenomena such as
synchrotron radiation, that it should be evident that there is a difference in kind. The
observers have recognized that the compact objects which they have recently
discovered—pulsars, quasars, etc.—are physical entities of a kind heretofore unknown,
and that some new insight into physical fundamentals will be required in order to gain a
comprehensive understanding of these objects. What is now needed is an extension of
this recognition to the radiation from these unfamiliar entities, a realization that there,
too, some new processes are involved.
One of the basic assumptions generally accepted by scientists is that the fundamental
laws and principles that are found to be valid in terrestrial experience are applicable
throughout the universe. From this it follows that all physical phenomena, wherever
located, should be capable of explanation by means of these same laws and principles.
These are reasonable assumptions, and their validity has now been confirmed in the
course of the development of the Reciprocal System of theory. But there is an unfortunate
tendency to extend this line of thought to the further assumption that all physical
phenomena, regardless of location, must be explainable by means of the same processes
that are found to be in operation in the terrestrial environment. This assumption is
definitely not valid, because the physical conditions prevailing on earth are limited to a
very small part of the total range of those conditions.
It is obviously possible that there may be processes in operation elsewhere in the universe
that cannot be duplicated on earth because the conditions necessary for the operation of
those processes, such as, for example, extremely high temperatures or pressures, are
unattainable. In the previous pages we have seen that such processes do, in fact, exist.
The unwarranted assumption that they do not exist has therefore had a very detrimental
effect on astronomical understanding. We have already seen how its application to the
stellar energy generation problem has resulted in a monumental distortion of evolutionary
theory. Now we meet a similar situation in the application of this assumption to the
theory of non-thermal radiation.
Enough information is now available to show that non-thermal radiation is a major
feature of the physical universe. It is the predominant form of radiation emitted by a
variety of astronomical objects, including the most powerful of the known sources of
radiation. The theories that attempt to explain these very common phenomena of a major
character by means of processes that require very uncommon conditions are clearly on
the wrong track. They are committing the proverbial error of sending a boy to do a man's
work. It is true that strong radiation of this non-thermal type is restricted to certain
particular classes of objects (not clearly identified in current practice, but identified in the
theory of the universe of motion as objects moving at speeds in excess of the speed of
light). But within those classes of objects it is normal, rather than exceptional. The
radiation process must therefore be one that becomes operative in the normal course of
events.
The explanation of the non-thermal radiation that is derived from the Reciprocal System
of theory conforms to this requirement. It brings the large-scale emission of this type of
radiation into the mainstream of physical activity, where it clearly belongs. The finding
of this present work is that the strange astronomical objects discovered in recent years,
and identified as sources of strong non-thermal radiation, are ordinary material
aggregates—stars, galaxies, or fragments thereof—that have been accelerated to speeds
in excess of the speed of light by violent explosions, and are moving in, or returning
from, the upper speed ranges discussed in Chapter 15. The strong non-thermal radiation
from these objects is generated by processes that are normal features of physical
activity where these upper range speeds are involved.
Since the existence of speeds beyond the speed of light has not been recognized by the
astronomers, who accept the physicists' speed limit with the same unquestioning faith that
they manifest in accepting the physicists' equally misleading assumption as to the nature
of the stellar energy generation process, we will be dealing with a hitherto unexplored
field in our examination of the radiation currently classified as non-thermal. This is a
place where we will be able to make good use of the ability of a general physical theory,
one that derives all of its conclusions from the same set of premises, to deal with the
previously unknown areas of the physical universe, as well as those with which we are
already familiar. All of the physical principles, laws, and relations that we will need in
order to get a complete and consistent picture of the radiation situation in the upper speed
ranges are already available. They have been identified and verified in the physical areas
where the empirical facts are readily accessible to accurate observation and measurement,
and they have been explained in detail in the preceding pages of this and the earlier
volumes.
Nothing new is required.
Our first concern will be to identify the different classes of radiation with which we will
be dealing. The customary classification of all radiation into two categories, thermal and
non-thermal, is a reflection of the very narrow limits within which terrestrial experience
is restricted. In the universe as a whole, non-thermal radiation plays a much larger role
than the thermal radiation that is so prominent in the local environment. Actually, there
are four kinds of radiation that can be classified as major features of the physical activity
of the universe, in addition to the processes that, as noted earlier, are minor and
incidental. Thermal radiation is one of the four. The radiation commonly classified as
non-thermal includes the emission at radio wavelengths, x-rays and gamma rays, and an
inverse type of thermal radiation.
As explained in Chapter 14, ordinary thermal radiation is a high frequency phenomenon,
in the sense that it is produced at wavelengths shorter than 11.67 microns. Matter at
temperatures below that corresponding to unit speed produces this thermal radiation.
Matter at temperatures above this level produces inverse thermal radiation by the same
process, but at wavelengths longer than 11.67 microns, and with an energy distribution
that is the inverse of the normal distribution applicable to thermal radiation. In both cases,
a portion of the radiation produced by matter in any of the condensed states (solid, liquid,
or condensed gas) is degraded in passing out of the sub-unit region in which it is
produced. This minor component appears as radiation of the inverse frequency, but it
conforms to the energy distribution of the radiation class to which it belongs. The thermal
energy emission in the infrared, for instance, decreases with increasing wavelength.
Here, then, is the explanation of the infrared component of the observed nonthermal
radiation. Like ordinary thermal radiation, this inverse type is produced by the normal
motions of the matter from which it emanates, and the process requires neither a special
set of environmental conditions in which to operate, nor a separate energy source. Since
every atom contributes to this radiation, all that is necessary in order to constitute a strong
source is a sufficiently large aggregate of matter at a temperature not too far above that
corresponding to unit speed. As we will see in the pages that follow, several classes of
compact objects meet these requirements.
The difference between the thermal and inverse thermal radiation enables us to make a
positive identification of the speed range of the components of astronomical aggregates.
Strong inverse thermal radiation at wavelengths greater than 11.67 microns (in the far
infrared) identifies the emitter as one whose components are in the upper speed ranges.
We may also go a step farther, and deduce that if this emitting object produces radiation
at any wavelength outside the inverse thermal range, this will be radiation at radio
wavelengths.
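As a purely illustrative restatement of this identification rule, the short sketch below classifies a few sample wavelengths against the 11.67-micron boundary. The function name and the sample values are my own assumptions for illustration; only the boundary figure and the two labels come from the text.

```python
# Illustrative sketch of the identification rule described above: emission at
# wavelengths beyond 11.67 microns marks inverse thermal radiation, and hence
# components in the upper speed ranges. Names and sample values are hypothetical.

UNIT_WAVELENGTH_MICRONS = 11.67  # boundary corresponding to unit speed (per the text)

def classify_thermal_emission(wavelength_microns: float) -> str:
    if wavelength_microns < UNIT_WAVELENGTH_MICRONS:
        return "thermal (components below unit speed)"
    if wavelength_microns > UNIT_WAVELENGTH_MICRONS:
        return "inverse thermal (components in the upper speed ranges)"
    return "at the unit boundary"

for wl in (0.5, 5.0, 30.0, 100.0):  # microns; arbitrary sample values
    print(f"{wl:6.1f} um -> {classify_thermal_emission(wl)}")
```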
The inverse thermal process is not capable of producing strong radiation in the radio
range. Like the thermal process, it is one of low intensity relative to the natural datum at
unit speed, and it is therefore limited mainly to wavelengths that are relatively close to
the unit level of 11.67 microns. It has long been understood that most of the observed
radiation at very short wavelengths, x-rays and gamma rays, is not produced thermally,
but by processes of a different kind, involving some more fundamental activity of greater
intensity in the emitting matter. Our finding is that the same is true of the radiation at
very long wavelengths, those in the radio range. In both cases, the principal process
involved is radioactivity, the nature of which was examined in detail in Volume II. As
brought out in that discussion, the essential feature of radioactivity is a change in atomic
structure.
In all radioactive events, the function of the electromagnetic radiation is to take care of
the fractional amounts of motion that remain after the major redistributions, such as the
emission of alpha and beta particles, are accomplished. As we have seen, the equivalent
of a fractional unit of speed is an integral number of units of the inverse entity, energy.
Spontaneous radiation from matter moving at less than unit speed therefore involves
emission of photons of equivalent speed 1/n² (or 1/n²-1/m²), where n is the number of
units of energy. And since the fraction 1/n² is small, for reasons explained in the detailed
discussion in Volume II, n is a relatively large number. The radiation thus consists of
high energy photons, x-rays and gamma rays. In radiation from matter moving at speeds
greater than unity, the full unit is a unit of energy, and the equivalent of a fractional unit
is attained by adding units of speed. Spontaneous radiation from matter moving at upper
range speeds involves emission of low energy photons, with frequencies in the radio
range.
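As a purely numerical illustration of the fractional-unit form quoted above, the sketch below evaluates 1/n² and 1/n² - 1/m² for a few arbitrary values of n and m. The function name and the particular values are assumptions of my own; no correspondence with specific observed frequencies is implied, only the point that the residue becomes small as n grows.

```python
# Illustrative evaluation of the fractional residues described in the text:
# 1/n**2, or 1/n**2 - 1/m**2 for a two-term adjustment. The n and m values
# are arbitrary; they only show how the residual fraction shrinks as n grows.
from typing import Optional

def fractional_residue(n: int, m: Optional[int] = None) -> float:
    """Equivalent fractional unit left over after an isotopic adjustment."""
    value = 1.0 / n ** 2
    if m is not None:
        value -= 1.0 / m ** 2
    return value

for n in (5, 10, 20, 50):
    print(f"n = {n:2d}   1/n^2 = {fractional_residue(n):.6f}")

print(f"n = 10, m = 12   1/n^2 - 1/m^2 = {fractional_residue(10, 12):.6f}")
```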
With this understanding of the radiation pattern, we are now able to identify the general
nature of the strong emitters of radio and x-ray radiation. A small or moderate amount of
radiation of these types may originate in any one of a number of ways, but the discrete
astronomical sources of strong radiation are objects in which radioactive processes are
taking place on a vast scale. For an understanding of how such extremely large quantities
of radioactivity originate, we need to turn to a phenomenon that is not recognized in
conventional science, but was discovered in the course of the theoretical development
described in Volume II. This is the process that we are calling magnetic ionization. Just
as soon as the nature of electric ionization was clarified in the original phase of the
investigation, prior to the publication of the first edition of this work, it became obvious
that there must be a two-dimensional analog of the one-dimensional electric ionization.
The level of this magnetic ionization is the principal determinant of the stability of the
various isotopes of the chemical elements.
As explained in Volume II, an atom of atomic number Z has a rotational mass, mr, equal
to 2Z. At a magnetic ionization level of zero, this is the atomic weight (subject to some
modifications of a minor character). When raised to the magnetic ionization level 1, the
atom acquires a vibrational mass component, mv, of magnitude mr²/156.44. The total of
mr and mv establishes the atomic weight (or isotopic weight) corresponding to the center
of the zone of isotopic stability. If an isotope is outside this zone of stability, it undergoes
a spontaneous radioactive process that moves it back to this stable zone. The composition
of the motions of a stable isotope of an element can be changed only by external
influences, such as a violent contact, or absorption of a particle, and the occurrence of
such changes is related to the nature of the environment, rather than to anything inherent
in the atom itself. Atom building is therefore a slow and uncertain process. On the other
hand, an unstable isotope is capable of moving toward stability on its own initiative by
ejecting the appropriate motion, or combination of motions. The isotopic adjustment
process begins automatically when conditions change.
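The expressions in the preceding paragraph can be written out directly. The sketch below is nothing more than a transcription of those expressions at ionization level 1 (rotational mass 2Z, vibrational component mr²/156.44); the "modifications of a minor character" mentioned in the text are not included, and the function names and the sample atomic number are illustrative choices of my own, with no empirical claim attached to the numbers.

```python
# Transcription of the expressions stated in the text, with no empirical claims:
# rotational mass m_r = 2Z; at magnetic ionization level 1, vibrational mass
# m_v = m_r**2 / 156.44; their sum locates the center of the zone of stability.
# The minor corrections mentioned in the text are omitted.

def rotational_mass(z: int) -> float:
    return 2.0 * z

def vibrational_mass(z: int) -> float:
    m_r = rotational_mass(z)
    return m_r ** 2 / 156.44

def stability_center(z: int) -> float:
    """Atomic weight at the center of the zone of isotopic stability (per the text)."""
    return rotational_mass(z) + vibrational_mass(z)

z = 10  # arbitrary sample atomic number
print(f"Z = {z}: m_r = {rotational_mass(z):.2f}, "
      f"m_v = {vibrational_mass(z):.2f}, center = {stability_center(z):.2f}")
```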
The magnetic ionization level of matter is determined by the concentration of neutrinos in
that matter. The level of concentration is primarily a function of age. Those aggregates
that have existed long enough to reach one or the other of the destructive limits, and
become supernovae, are therefore magnetically ionized. When a portion of such an
aggregate is accelerated to a speed in excess of unity (the speed of light), its constituent
atoms move apart in time, as explained in the previous pages. The neutrinos of the
material type, which cause the magnetic ionization, cannot move through space,
inasmuch as they are inherently units of space, and the relation of space to space is not
motion. But these neutrinos are capable of moving in the empty time that exists between
the fast-moving atoms in the intermediate speed range, since the relation of space
(neutrinos) to time is motion. The diffusion of the neutrinos into this additional time
reduces the neutrino concentration drastically, and the aggregate consequently drops to a
lower ionization level. This lowers the zone of stability, and leaves some isotopes above
the stability zone. These isotopes are then unstable, and must undergo radioactivity to
eliminate some of their vibrational mass. As noted earlier, radioactivity in the
intermediate speed range results in the emission of radiation at radio wavelengths.
Large-scale production of radiation in the radio range thus takes place under conditions in
which extremely large quantities of matter are transferred from one speed range to a
higher range in a relatively short period of time. Such conditions are, almost by
definition, a result of explosive processes. (For an explanation of the concepts such as
magnetic ionization, rotational and vibrational mass, and neutrino concentration, which
enter into the description of the radiation production process, see Volume II).
The time required for isotopic adjustments varies widely, but many of the isotopes are
short-lived in the unstable state. These isotopes are quickly eliminated, and the
radioactivity of the explosion products therefore decreases quite rapidly in the early
stages following the explosion. But there are also many isotopes with much longer half-
lives, some extending to billions of years, and a certain amount of radioactivity persists
for a long period of time. Both the total length of the active period, and the time during
which the radiation is at peak intensity are extended substantially when the aggregate is
initially at a high magnetic ionization level, as the ionization is reduced successively from
one level to the next as the expansion into time proceeds. Each of these reductions puts a
new group of isotopes outside the stability limits, and initiates a new set of radioactive
transformations.
There are no aggregates of intermediate or ultra high speed matter in the material (low
speed) sector of the universe, other than these explosion products, but, as noted earlier, if
an object ejected into the intermediate region by an explosion does not have enough
speed to reach the two-unit level and escape from the material sector, it loses speed in
interactions with the environment, and eventually returns to the region of motion at less
than unit speed. In addition to the outward-moving explosion products, the material
sector thus contains a population of returning objects of the same nature. As the speed of
such an object decreases, the changes that took place during the outward movement are
reversed. The amount of empty time between the components of the aggregate is reduced,
the concentration of neutrinos (the magnetic temperature) is increased, and the aggregate
moves step by step up to its original ionization level. Each successive increase in the
ionization level leaves some isotopes below the new location of the zone of stability, and
therefore radioactive. The last step in this process is a result of the transition from motion
in time to motion in space. In this case the isotopic adjustments take place in matter that
has dropped below unit speed, and the accompanying radiation is in the high frequency
range; that is, it consists of x-rays and gamma rays.
Observers comment on "the great power of the [radiation] sources" and the "rapid and
complex variability." Both of these features are explained by the theory outlined in this
chapter. Strong radioactive emission from masses of stellar magnitude is obviously
sufficient to explain the observed power, while the emission from constantly changing
groups of isotopes, with half-lives all the way from seconds to billions of years, accounts
for both the rapidity and the complexity of the variations. The existence of both radio and
x-ray emission from certain sources has also been noted. This is a result of the turbulent
conditions in the material aggregates in which very energetic processes are taking place.
While the net movement across the speed boundary determines the principal emission,
there are local and temporary reversals of the general trend.
The radio-emitting objects thus far discussed are three classes of white dwarf stars:
ordinary white dwarfs, pulsars, and central stars of planetary nebulae. Most pulsars are
known only by their radio emission. Only two of those located to date (1983) are
optically visible, and if it were not for the pulsations, the remainder, like the non-
pulsating Stage 1 and Stage 2 white dwarfs, would be merely unidentified radio sources.
The relatively small class of pulsars whose radiation consists primarily of x-rays will be
examined in the next chapter. Later we will consider a variety of radio-emitting objects of
galactic size. All of these originate in the manner described in this chapter; that is, they
are either explosion products that have been accelerated to speeds in the upper ranges, or
they are aggregates that contain substantial quantities of such products.
The biggest problem that has confronted the astronomers ever since they extended the
range of their observations beyond the relatively peaceful Milky Way galaxy and into the
realm of violent events that take place in some of the extra-galactic aggregates, has been
to account for the enormous energies that are involved in some of these events. Many
different hypotheses have been advanced, mainly of a highly speculative nature, but none
of these has reached the stage where it can withstand a critical scrutiny. As expressed by
Simon Mitton:
Although we can now give a qualitative picture of certain types of interaction, each time
we have to pull a rabbit out of the hat—a mysterious source of energy. The evidence that
the energy is there in abundance is convincing. But we have only begun to scratch the
surface in the battle to explain whence this energy has come.209
As noted earlier, a major weakness of much of current astronomical theory is that it calls
upon the existence of some very special conditions to explain general features of the
evolution of aggregates of matter. In the theory of the universe of motion, on the other
hand, the general evolutionary features are results of conditions that necessarily arise in
the normal course of events. The basic energy production process in this universe, we
find, is the conversion of rotational motion (mass) to linear motion (energy) at the age
and temperature limits of matter. This one process accounts for the entire range of energy
generation, from supplying the modest fuel requirements of the quiet stars, to providing
the enormous energy required for the ejection of a quasar. And it requires no special
conditions or unusual circumstances to bring it into operation. All matter eventually
arrives at one or the other of these limits.
The manner in which the energy requirements of the violent astronomical phenomena are
met will be developed in detail in the pages that follow. As we will see there, the current
estimates of the energy output of the quasars are grossly overstated, and the largest
sustained emission that we have to account for is comparable to that of the radio galaxies.
In order to get an idea of the amount of energy that is involved, we may turn to the results
of a calculation:
If we are allowed to turn matter into energy with total efficiency we need something
like... 100,000 stars. On the other hand, if we are only allowed to use conventional
astrophysics, 10 million solar masses must be involved in the production of the requisite
energy.210
The processes that take place at the destructive limits of matter, as described in the
preceding pages of this and the earlier volumes, have a maximum capability of complete
conversion of matter into energy, but in practice operate at a lower rate, and thus require
an amount of participating mass somewhat greater than Mitton's lower figure. As we will
see in the later chapters, this is well within the theoretical limits of the mass
concentrations.
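For a sense of the scale of the first figure in Mitton's comparison, the sketch below evaluates the rest-mass energy of 100,000 solar masses using the standard relation E = mc². Only standard constants are used; nothing specific to the conversion process described in the text is assumed.

```python
# Order-of-magnitude check: rest-mass energy of roughly 100,000 solar masses,
# the "total efficiency" figure in Mitton's comparison, using E = m * c**2.
SOLAR_MASS_KG = 1.989e30      # kg
LIGHT_SPEED_M_S = 2.998e8     # m/s

def rest_energy_joules(solar_masses: float) -> float:
    return solar_masses * SOLAR_MASS_KG * LIGHT_SPEED_M_S ** 2

energy = rest_energy_joules(1.0e5)
print(f"1e5 solar masses fully converted ~ {energy:.2e} J  (~{energy * 1e7:.2e} erg)")
```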
With the benefit of the additional information developed in this chapter, we are now in a
position to enlarge upon what was said in Chapter 14 with reference to the conclusions
that are currently being drawn from the Second Law of Thermodynamics. It is now
evident that this law does not have the significance that present-day science attributes to
it. The First Law of Thermodynamics, which expresses the principle of conservation of
energy, defines "energy" in a broad manner, including in this concept both kinetic energy
and potential energy. In the reasoning by which the conclusions as to the eventual "heat
death" are reached, it is taken for granted that the term "energy" has the same meaning in
the statement of the Second Law. In the words of the authors quoted in Chapter 14,
"energy flows always in the same direction" from the highest level "in the hot interior of
a star" to the lowest level, "a disordered cold soup of matter dispersed throughout space."
But this is not true of potential (that is, gravitational) energy. The potential energy of the
matter in the hot interior of a star is at a minimum, while that of the dispersed "cold
soup" is at a maximum. The evolutionary direction of this potential energy is opposite to
that of the kinetic energy.
It should be noted, in this connection, that there is no thermal motion in space at the
temperature of the "cold soup," only a degree or two above absolute zero. Hydrogen is in
the solid state below its melting point of 14 K. In that state (a property of the individual
atom or molecule) the thermal motion is in time (equivalent space), and within the spatial
unit in which each atom is located. Thus there is no outward force acting on the atoms of
matter in this cold dispersed state other than that due to the outward progression of the
natural reference system. Any sufficiently large volume of dispersed matter is therefore
subject to a net gravitational force, and will eventually consolidate in the manner
described in Chapter 1, converting its potential energy into kinetic energy.
Thus the "energy" to which the Second Law applies is not the same "energy" that is
defined in the First Law. The Second Law applies only to kinetic energy. When this fact
is recognized, the conclusions that can be drawn from the Second Law are completely
changed. It then becomes clear that, in application to the large-scale activity of the
universe, the Second Law of Thermodynamics is valid only in conjunction with the
gravitational law. The result of this combination, instead of being a relentless progression
toward a "heat death"—the "end of the world" envisioned by Davies—is a cyclic
movement from maximum kinetic energy and minimum potential energy, in the stellar
interior, to maximum potential energy and minimum kinetic energy, in the cold and
diffuse state, followed by a swing back to the original combination. The findings of this
present investigation show that the aggregation of the diffuse matter under the influence
of gravitation is just as inevitable as the degradation of kinetic energy in thermodynamic
activity. Indeed, as pointed out in Chapter 14, aggregation is the primary process. All
matter entering the material sector is eventually incorporated into stars, but only part of it
is returned to the diffuse state in space by means of the processes to which the Second
Law applies. The remainder is ejected into the cosmic sector, and returns to the diffuse
state in space by a longer route.

CHAPTER 19

X-ray Emission
As we saw in the preceding pages, some of the products of the supernova explosion reach
their maximum speeds in the range between one and two units, intermediate speeds in our
terminology. Inasmuch as these objects continue losing energy to the environment, they
ultimately return to the region of three-dimensional space, below the unit speed level,
where they are observed as white dwarf stars. The general nature of the evolutionary
development of the white dwarfs was discussed in the earlier chapters. At this time we
will review the situation from the standpoint of the changes in the radiation pattern that
take place as the stars move through their successive evolutionary stages.
The radiation during Stage 1, the immediate post-ejection stage, as we have seen, is at
radio frequencies. As explained in Chapter 18, it results from the isotopic readjustments
required to bring some of the components of the star back into the zone of stability, after
they have been left outside that zone by the decrease in the magnetic ionization level that
follows the expansion of the stellar aggregate into time.
It was established in Volume II that the earth is at the one-unit magnetic ionization level.
As we found earlier in the present volume, the solar system is somewhere near the
average age of the stars in the galactic arms. We may therefore conclude that unit
magnetic ionization is normal in the outer regions of the Galaxy. This means that, in the
local environment, only one downward step is usually involved in the decrease of the
magnetic ionization level on entry into the intermediate speed range. In view of the
expansion into a second scalar dimension that occurs at unit speed, it then follows that the
series of changes, which ultimately result in the production of radio radiation, is
initiated immediately after crossing the unit speed boundary. The isotopic adjustments
that are necessary are therefore substantially complete by the end of the outward travel of
the white dwarfs. Consequently, there is little or no radio emission from these objects
during the unobservable stage of the return (Stage 2), or during the time that they are
observable as stable stars (Stage 3). Furthermore, there is a substantial amount of
accretion of low speed matter in this latter stage, facilitated by the fact that the white
dwarf remains spatially stationary in the debris left by the supernova explosion. During
Stage 3, the observable radiation from the white dwarfs comes mainly from the low speed
matter.
Evolutionary Stage 4, which follows, involves a return to the speed range below unity.
This reverses the process that took place when the unit speed level was exceeded in the
outward stage of the travel of these stars. The volumetric change accompanying the drop
into the lower speed range increases the neutrino concentration. This restores the unit
magnetic ionization level, which raises the zone of isotopic stability, and leaves some of
the existing isotopes below the limits of the zone. A series of isotopic adjustments then
follows, accompanied by radioactivity. Since these processes take place after the speed
drops below the unit level, the radiation is in the x-ray range. The Stage 4 white dwarfs,
the cataclysmic variables, are therefore x-ray emitters. "Almost every CV looked at with
Einstein [the orbiting observatory] was an emitter of x-rays." 211 (Mason and Cordova)
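To keep the sequence straight, the following sketch simply tabulates, per the description in this and the preceding paragraphs, the kind of emission associated with each white dwarf stage. The table itself is only an organizational device of my own; the stage labels and the emission types are taken from the text.

```python
# Summary of the emission pattern described in the text for the white dwarf
# evolutionary stages (the dictionary is an organizational device, not data).
WHITE_DWARF_EMISSION = {
    "Stage 1 (outgoing, post-ejection)": "radio (isotopic adjustments above unit speed)",
    "Stage 2 (unobservable return)": "little or no radio emission",
    "Stage 3 (observable stable star)": "mainly radiation from accreted low speed matter",
    "Stage 4 (cataclysmic variable)": "x-rays (isotopic adjustments below unit speed)",
}

for stage, emission in WHITE_DWARF_EMISSION.items():
    print(f"{stage}: {emission}")
```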
Here, then, we have a simple x-ray production process that is a direct result of changes
that take place during the normal evolution of the white dwarf stars, and does not require
the existence of any special or unusual conditions. This contrasts sharply with the
production mechanisms postulated in current astronomical thought, as described in this
statement from a report of a symposium on x-ray astronomy:
Most of the known, realistic, mechanisms for the generation of x-rays lead to somewhat
complicated theoretical statements, and the number of adjustable parameters is often too
great for comfort.212
Since all outgoing explosion products that attain upper range speeds emit radio radiation,
while only part of them return to the lower speed range and emit x-rays, the total radio
emission is much greater than the total radiation at x-ray frequencies. It is also more
easily observed, as a large part of the radiation at radio frequencies penetrates the earth's
atmosphere, and can be observed at the surface, whereas the x-ray radiation is almost
completely blocked, and can be observed only by instruments that are lifted above the
greater part of the atmosphere. But the objects that emit x-rays are moving at speeds
below unity, and are optically visible, while most of the radio-emitting objects within the
Galaxy are invisible. For this reason, the new x-ray branch of astronomy has accumulated
a substantial amount of information about the x-ray emitters and their properties, in spite
of the observational difficulties.
One of the important results of this addition to the body of astronomical knowledge has
been a significant increase in the volume of evidence supporting the evolutionary pattern
of the white dwarf stars that has been derived from the theory of the universe of motion.
According to that theory, the white dwarfs originate in supernova explosions, are
accelerated to speeds in excess of the speed of light, move outward in time to a limiting
distance, then reverse their course, and move back to their original positions and to
speeds below the unit level. These stars undergo certain processes on the outward trip,
and are then subjected to the same processes in reverse during the return. The finding that
the processes leading to the emission of x-rays are the inverse of the corresponding
processes that result in the emission of radio radiation establishes a specific relationship
of a fixed character between the various features of the x-ray emitters and those of the
radio emitters. This means that the nature and properties of the x-ray emitters are strictly
defined theoretically. The fact that all of the observational evidence is consistent with the
rigid theoretical requirements is therefore an impressive confirmation of the whole
interlocking structure of white dwarf theory.
From this theory we find that the x-ray emitters of the type that we are now considering
are components of binary, or multiple, systems in which they are associated with stars
that originate from the low speed supernova products as infrared stars, and pass through a
giant or supergiant stage as they move toward gravitational equilibrium on the main
sequence. Thus far, only about 20 percent of the x-ray emitters that have been identified
as stars are definitely known to have companions, and the theoretical conclusion that they
are all components of binary or multiple systems has been confirmed only to that extent,
but there is no evidence to indicate that the remainder do not have companions. Indeed,
one of the prominent investigators in this field, R. Giacconi, reports that the evidence
from observation warrants "a working hypothesis that all galactic x-ray sources are either
members of a binary system or supernova remnants." 213
The theoretical identification of one class of x-ray emitters as white dwarfs is also in
agreement with the observational finding that the x-rays "must originate from relatively
small, compact objects." 214 This description is applicable both to the stars currently
recognized as white dwarfs and to the stars not currently included in the white dwarf
class, but theoretically identified as Stage 4 white dwarfs: the novae and other
cataclysmic variables.
Another observable characteristic of the discrete x-ray emitters in the Galaxy is their
distribution. Inasmuch as the observable white dwarfs are distributed somewhat
uniformly among the stars in the disk of the Galaxy, as the theory requires, it is to be
expected that the discrete x-ray emitters will share this distribution. The observations are
in full agreement with this expectation. The correlation between the distribution of the
planetary nebulae (early Stage 3 white dwarfs), which are more observable than the
ordinary white dwarf stars, and the discrete x-ray sources (Stage 4 white dwarfs) is
particularly close.215
According to a 1977 report, seven of the approximately 130 observable globular clusters
are probable locations of known x-ray sources.216 This is consistent with the conclusion
that we derive from theory; that is, the only products of supernova explosions that exist in
objects as young as the globular clusters are those produced from the relatively small
number of old stars that have been incorporated into the young aggregates.
The observations support the theoretical finding that the x-ray emission from the white
dwarfs occurs mainly during Stage 4 of the evolution of these objects, the cataclysmic
variable stage. A number of papers concerning the x-ray emission from these variables
were presented at a recent symposium. As one participant observed, reports of such
observations are now ‖appearing in profusion.― Both soft and hard x-rays have been
detected from the cataclysmic variables, according to another of the investigators, who
concedes that ‖production of hard x-rays, as detected in several of these sources, is hard
to understand.― 216 The appearance of x-rays in some of the objects of this class and not in
others has also been regarded as anomalous.
Both of the seeming anomalies are readily explained by the theory discussed in these
pages. The cataclysmic variables are in the last stage of the life of the stars as white
dwarfs. Some of them naturally make faster progress toward completion of the transition
process than others. Thus some are still emitting hard x-rays, while others have
eliminated the hard x-ray sources, the short-lived isotopes, and are emitting soft x-rays.
Furthermore, both the character and the magnitude of the emission are subject to
variation because of the intermittent nature of the explosive activity. During the explosive
outburst, which exposes some of the material from the interior, where the radiation
originates, the emission is at a maximum, and the x-rays are "hard"; that is, their
frequency is relatively high. Between these outbursts, the radiation is reduced, both in
quantity and in frequency, by absorption and re-radiation during its travel through the
outer shell of the star.
From the explanation of the nature of the ejections from the cataclysmic variables in
Chapter 13 it is evident that those ejections which are accompanied by relatively strong
x-ray emissions are nearly continuous in two groups of these objects: the smaller ones,
and the older ones, those that are nearing the end of the eruptive stage. It follows that
there are continuous, as well as intermittent, x-ray emitters among the Stage 4 white
dwarfs. By the time these stars reach the main sequence, the isotopic adjustments are well
along toward completion, and the remaining x-ray radiation is reduced to lower
frequencies on the way out from the stellar interior, without further explosive activity.
We now turn to the other class of discrete galactic x-ray emitters, the pulsars. As we saw
in Chapter 17, the distinguishing characteristic of the pulsars is that they are traveling at
ultra high speed in the explosion dimension. Ordinarily these stars increase their net
speed as the gravitational force is gradually attenuated by distance, and they eventually
disappear into the cosmic sector. But there are influences other than gravitation that tend
to reduce the pulsar speed, particularly the resistance due to the presence of other matter
in the path of movement. In some instances, where the original explosion speed is only
slightly above the two-unit level, the retardation due to these causes may be sufficient to
prevent the net speed from reaching two units. And even if the pulsar does enter the
speed range above two units, where there is no longer any gravitational restraint on a
material object, the pulsar is still subject to the other influences of the material sector.
Under appropriate conditions, therefore, the net speed of the outgoing pulsars may reach
a maximum somewhere in the vicinity of the two-unit level, decreasing thereafter, and
eventually dropping back to levels below unity. A small proportion of the pulsars thus
return to the material status rather than escaping into the cosmic sector, as most pulsars
do.
Since the linear outward motion of the pulsar is in a dimension of space, the transition at
the two-unit level carries it into the region of motion in time. Those pulsars that return,
after crossing the two-unit boundary, resume motion in space. The isotopic adjustments
that follow this change are therefore accompanied by radiation at x-ray frequencies. On
this return trip the pulsars pass through the same pulsation zone that they traversed in the
opposite direction in their outgoing stage, and in so doing they emit pulsed x-rays in the
same manner in which they emitted pulsed radio radiation on the outward course. By the
time the pulsar has passed through the pulsation zone it has had time to complete the
adjustments involving the short-lived isotopes which produce the hard x-rays, and the x-
rays from "most of the persistent sources that do not pulse" are relatively soft.217
Eventually the incoming pulsars revert to the status of normal white dwarfs, and follow
the regular white dwarf evolutionary pattern, including a resumption of x-ray emission in
Stage 4 of that evolution.
The pulsating x-ray emitters have some features that are quite different from those of the
radio pulsars, and those differences have aroused a great deal of speculation, much of
which is little more than fantasy. It is among the complex x-ray emitters, pulsating and
non-pulsating, that the theorists are finding candidates for designation as black holes. The
explanation of the x-ray emitting objects that we derived from the theory of the universe
of motion requires none of these imaginative constructions. As can be seen from what
has been said in the foregoing paragraphs, all that is necessary to explain the x-ray
emission and its pulsation is to invert the theory already developed for the radio-emitting
objects. The expansion into time during the outward travel of the pulsar lowers the zone
of isotopic stability, and initiates isotopic adjustments. On the return trip there is a
contraction which raises the zone of stability back to the original level, and causes a
reversal of the changes in the isotopes. The adjustments during the outward travel take
place while the speeds of the pulsar components are above unity. The fractional residues
of the adjustment process are therefore units of speed, which are ejected in the form of
radiation at radio frequencies. The adjustments during the return take place while the
component speeds are below unity. Here the fractional units are units of energy, and they
are ejected in the form of x-ray radiation. The x-ray process is simply the inverse of the
radio process.
According to the observers' reports, the incoming pulsars are concentrated toward the
galactic center, as would be expected in view of our finding that this is where most
pulsars originate. Since, as we have seen, the pulsars that are produced in these central
regions of the Galaxy are moving rapidly outward to a limiting position in space during
their radio pulsation stage, it follows that any of these objects which return as x-ray
emitters will move down from this limiting position to their original locations as
gravitation again becomes effective. The following is a comment by Shklovsky about one
of the most conspicuous of the incoming pulsars:
The radial velocity of HZ Herculis, close to 60 km/s, is directed toward the galactic
plane. The reason could be that the star has already reached its maximum distance from
the galactic plane and is now moving back down.218
In considering the effects of these motions of the pulsars, it is essential to recognize that
the motions being observed are taking place in a second scalar dimension of space. As
explained earlier, such motion is represented in the reference system only under some
special conditions, and it has no effect on the spatial relationships in the original scalar
dimension. Thus the association between the low speed and ultra high speed products of
the Type II supernova is maintained in essentially the same manner as the association
between the white dwarf product of the Type I supernova and its low speed companion
that we examined in the earlier chapters. In the early stages, the low speed companion of
the pulsar is merely an expanding cloud of dust and gas, and it is doubtful if the life
period of an outgoing pulsar is long enough to enable this cloud to contract into an
observable object. (One such case has been reported, but this identification needs to be
investigated further).
The incoming pulsars are, of course, much older, and their low speed companions have
developed to the point where they are observable. The x-ray emitters are therefore
recognizable as binary systems. The pulsars most likely to fall just short of reaching the
point of no return are those produced by explosions of very large stars. These are
products of rapid accretion, and a large proportion of their total mass is below the
destructive ionization limit at the time of the explosion, because of the amount of time
required for equalization of the ionization levels. This results in an explosion speed near
the lower limit of the ultra high range, and increases the probability that the outward
motion will be halted. When one of these pulsars returns to the speed range in which it is
observable as an x-ray emitter, its large low temperature component is seen to be a giant
or supergiant. A 1975 report states that 5 of the 8 then known binary x-ray stars
incorporate massive supergiants (relatively rare in the Galaxy).219
The white dwarf product of the explosion of a massive star is also a large object of its
class. In Cygnus X-1 the x-ray star is estimated to be in the range from 6 to 10 solar
masses, while the optical star is twice as large. This object is currently the favored
candidate for the black hole status, on the ground that the accepted theories limit both the
white dwarfs and the hypothetical neutron stars to smaller masses. Shklovsky, whose
estimates of the masses of the components of Cygnus X-1 are quoted above, follows
these figures with the comment, "so it would follow that the Cygnus X-1 source is a black
hole." 220
Here one can see how little substance there is to the structure of reasoning on which the
case for the existence of black holes is based. Certain observed entities that have all the
properties of the class of objects called white dwarf stars are excluded from this
classification on the strength of a wholly unsupported theoretical conclusion that the
white dwarfs are subject to a mass limit in the neighborhood of two solar masses. Then
these objects that are observationally indistinguishable from white dwarfs, but are above
the hypothetical mass limit, are arbitrarily assumed to have some different type of
structure. Finally, a structure, the black hole, is invented for these objects on an ad hoc
basis.
When the black hole concept was first proposed, it was recognized in its true colors. As
expressed in one comment published in 1973, only the "counsel of desperation" leads to
invoking this hypothesis.221 This situation has not changed. The black hole hypothesis has
no more foundation today than it had ten years ago when that judgment was passed. But
the intangible nature of the hypothesis, which prevents testing its validity, and its
constant repetition in the astronomical literature, together with the general retreat from
strict scientific standards in recent years, have resulted in a quite general, although
somewhat uneasy, acceptance of the black hole. There is now an increasing tendency to
call upon this purely hypothetical concept for the solution of all sorts of difficult
astronomical problems.
In reaching the conclusion that the compact astronomical objects in the stellar size range
are all white dwarfs differing only in properties related to their speeds, this present work
is not in conflict with any observed facts; it is merely challenging some unfettered flights
of the imagination. In this connection, it should be noted particularly that the assumption
that gravitation is effective within the atom is the cornerstone of all of the currently
accepted theories of the various compact astronomical objects. There is no evidence
whatever to support this assumption. Observation tells us only that gravitation acts
between atoms. The assumption that it also plays a role within the atom rests entirely on a
theory of atomic structure that, as brought out in the preceding pages of this and the
previous volumes, is contradicted by many definitely established physical facts, and is
kept alive only by invoking the aid of a whole series of ad hoc assumptions to evade the
contradictions.
There are some special conditions under which it is theoretically possible for outgoing
pulsars to emit pulsed x-rays. As noted earlier, those pulsars that originate in locations
where the gravitational retardation is minimal have relatively low spatial speeds, and
remain near the location of the supernova explosion for a considerable period of time.
While the entire aggregate of ultra high speed explosion products maintains its identity in
time, and all of its components move at the same explosion speed, so that their pulsations
are synchronized, some portions of the whole are entrained in the material moving
outward in space, in the same manner as the local aggregates of intermediate speed
matter discussed in Chapter 15. Since these spatially detached portions of the pulsar are
in close contact with the low speed matter, interaction with that matter reduces the speeds
of some of the constituent particles below the unit level, causing isotopic adjustments and
emission of x-rays. Similar interactions take place at the surface of the main body of the
pulsar, particularly where, as in the Crab Nebula, the pulsar still remains in the midst of
the debris at the site of the supernova explosion.
The strong x-ray emission from the Crab Nebula pulsar is not duplicated in the Vela
pulsar, the second of these objects located in a supernova remnant. It appears that the
slow-moving explosion products, which are still dense around the 900-year-old Crab
pulsar, have been largely dispersed in the 10,000 years since the Vela pulsar was
produced. This conclusion is consistent with the difference in the polarization of the radio
radiation, which is relatively low (about 25 percent 222) in the radiation from the Crab
Nebula and pulsar, but almost complete in that from the Vela pulsar.223
As noted in Chapter 16, it is currently believed that the energy radiated by the Crab
Nebula is generated by the central star, the pulsar, and is transferred to the nebula, where
the radiation is presumed to be produced by the synchrotron process. We have seen that
the arguments in favor of the synchrotron hypothesis depend mainly on the "there is no
other way" contention, and cannot stand up under critical scrutiny. The hypothetical
energy transfer mechanism is likewise without any firm support. The astronomers admit
that "the mechanisms for the transport of the pulsar power through the nebula and for the
acceleration of the electrons are not well understood."217 "Not well understood" is, of
course, the currently fashionable euphemism for "unknown."
Earlier we found that the energy which powers the radio radiation from the supernova
remnants is not a product of the explosion, but is generated by radioactive processes in
aggregates of intermediate speed matter that have been entrained in the outgoing low
speed explosion products. From the points brought out in the preceding paragraphs it can
now be seen that the x-ray radiation from the remnants originates in a similar manner;
that is, from spatially separated portions of the pulsar. But the x-ray radiation is only a
minor component of the total radiation from an outgoing pulsar, and it therefore
terminates relatively soon. Consequently, the pulsed x-ray emission of this nature is
limited to very young supernova remnants of the Crab Nebula type. This nebula itself is
the only known instance at present (1983). The emission from the young remnant
Cassiopeia A is not pulsed, because, as we saw earlier, the maximum speed of the
explosion products that constitute this remnant was never great enough to carry them to
the pulsation zone.
Any x-ray radiation from outgoing pulsars should be accompanied by strong radio
emission, and pulsed x-rays without any more than a weak radio accompaniment can
usually be regarded as originating in incoming pulsars. However, the most distinctive
characteristic of each class of pulsar is the direction of the change in the period. The
periods of the outgoing pulsars are increasing. Those of the incoming pulsars are
decreasing. Most of the decreasing periods are longer than those of the outgoing pulsars,
and because the return motion is subject to a variety of environmental conditions, they do
not conform to the kind of regular pattern that we find in the periods of the outgoing
pulsars.
Inasmuch as the pulsar moves outward in the explosion dimension as a unit, irrespective
of the spatial locations of its components, and the pulsation rates are determined by the
speed, these rates are the same for all types of emission. Otherwise, the characteristics of
the pulses produced by the different processes can be expected to differ. They may be out
of phase, the relative intensity of the pulses may vary, or x-ray emission may cease while
radio emission is taking place, or vice versa. A number of such dissimilarities are
reported to be present in the radiation from the Vela pulsar.
The "x-ray stars" are relatively rare. Only about 100 are known to exist in the Galaxy,
about 20 of which are pulsating,217 and it is believed that the observed number is almost
complete. This is consistent with the conclusion that the pulsars that fail to reach the
conversion level at unit speed are mainly those originating in explosions of very large
stars, which are likewise rare outside the unobservable central regions of the galaxy.
Actually, none of the classes of discrete x-ray sources thus far considered is observable in
large numbers. R. Giacconi points out that the compact x-ray sources are either
"exceedingly rare or represent short-lived x-ray emitting phases in stellar evolution."224
The increase in the number of weak emitters of soft x-rays detected in recent years
modifies the observational situation to some extent, but the conclusion is still valid. The
theoretical identification of the soft x-rays with age, and the evidence from the remnants
that ages from 25,000 to 50,000 years are sufficient to reduce the emission to the soft
status, show that Giacconi's second alternative is the correct one.
In the case of the x-ray stars, the returning pulsars, the time that was spent above the two-
unit speed level was too short for any large amount of isotopic adjustment to have taken
place, and the reversal of the changes that did occur is accomplished relatively rapidly.
The isotopic composition of the ordinary white dwarfs is fully adjusted to the
intermediate speed during the outward travel of these objects, and the reverse adjustment
continues over a long period of time, but here the strong radiation is intermittent. It
escapes from the star in major quantities only under special circumstances of short
duration. Smaller amounts are emitted as leakages or in minor outbursts.
The supernova remnants provide an opportunity for observing the evolution of the x-ray
emission. The more distant an isotope is from the center of the zone of stability, the more
energetic its radiation, and the shorter its half-life, generally speaking. Consequently, the
original hard, or energetic, x-rays from matter dropping back into the low speed range are
followed by softer emissions as time goes on and the short-lived isotopes are eliminated.
The initial x-ray radiation from the remnants is identical with that from the hard compact
sources. For instance, the x-rays from Cassiopeia A are reported to be "quite hard."225
The radiation then continues on a soft basis for a relatively long period of time. For
example, the x-rays from the Cygnus Loop, one of the older remnants, are all in the soft
range, below 1 keV.226
Because of the variety of sources and conditions involved in the observed emission of x-
rays, it is possible to establish some limitations on the x-ray production process that can
be compared with the theoretical deductions. First, we can conclude that it is extremely
unlikely that two different processes for production of strong x-ray radiation would be
put into operation by the same supernova event. The mechanism by which the x-rays are
produced must therefore be one that is applicable to both of the observed types of
supernova products: compact sources and extended remnants. (It is generally conceded
that the supernova which produces a compact x-ray emitter also leaves a remnant.) This
imposes some severe constraints on the kind of process that can be given consideration.
Furthermore, when the observed emission of x-rays from the remnants of supernovae is
considered in conjunction with the results of observations that have sought, but failed to
detect, high frequency radiation in significant amounts from the supernovae,227 a still
more rigid requirement is imposed on x-ray theory. The fact that the emission occurs both
from concentrations of matter (hot spots) and from diffuse clouds (extended sources) in
the remnants means that the emission must result from the condition of the matter itself,
not from the way in which that matter is aggregated. But the absence of x-ray radiation
during the observable stage of the supernova explosion, when the particle energies are at
a maximum, shows that the thermal processes are not, in themselves, adequate to account
for the strong x-ray emission.
In the remnants, the x-rays come from matter that has been losing energy for a
considerable period of time, more than 50,000 years in some cases, and is now at an
energy level well below the peak reached in the explosion. The observations thus require
the existence of a process in which matter that loses a portion of its energy after having
reached the high energy level of a violent explosion undergoes some kind of change
that involves emission of x-rays. In the preceding pages we have seen that the
development of the theory of the universe of motion leads to just such a process.
All of this development of theory had taken place long before the astronomical x-ray
emitters were discovered. It had already been determined that the fast-moving products of
stellar and galactic explosions undergo inverse radioactivity on crossing from the low
speed to the upper speed ranges, and thereby produce radiation at radio frequencies. It
had also been found, from theoretical considerations, that certain of these explosion
products acquire sufficient speed to escape from the material sector into the cosmic
sector, whereas others do not attain the escape speed, and eventually return to the
relatively low speeds that are normal in the material sector. All that was required in order
to complete the theoretical understanding of the x-ray emitters was a recognition of the
rather obvious fact that the process previously deduced as the source of the radiation at
radio frequencies from the outgoing products of stellar and galactic explosions also works
in reverse to produce x-rays and gamma rays from those of the explosion products that
return to the low speed range.
We thus have a theoretical definition of the origin and properties of the x-ray emitters
that has not been constructed to fit the observations, in the manner in which most
scientific theories are devised, but had already been deduced from the postulates that
define the universe of motion, and had been published prior to the discovery of the
astronomical x-ray emission. The close agreement between this pre-existing theory and
the observational information now available is thus highly significant. Two features of
this explanation of the x-ray emission are especially noteworthy. First, the theoretical x-
ray process is an essential element of the theoretical energy production process. Hence it
is not necessary to provide a separate explanation of how the energy is produced. Second,
the same process is applicable to all of the strong x-ray emitters.
Meanwhile, conventional astronomical theory is faced with the problem of having to put
together a new combination of energy generation and radiation production processes for
almost every new type of x-ray emitter that is encountered. Current thinking in this area
is centered largely on the situation in the Crab Nebula. Here the astronomers have been
able to construct a theory with which they are reasonably comfortable, although, as
indicated earlier, they are somewhat uneasy about their inability to identify the
mechanism by which the energy transfer required by their theory is carried out. In this
theory the central pulsar is a rotating neutron star, which releases energy by slowing its
rotation. This energy is transferred to the nebula, which then radiates "by means of the
synchrotron process, emitting radio waves, visible light, and X-rays."226 But few of the
radiation sources conform to this pattern. Aside from these few, the radio emitting pulsars
have no associated remnants, and the supernova remnants contain no pulsars. Some
different hypotheses are therefore required to account for the radio emission from these
sources. The x-ray emitters complicate the situation still further. Rotation is ruled out as a
source of energy for these objects, as only a few of them exhibit the pulsation that is
interpreted as evidence of rotation, and in these few the pulsation periods are decreasing.
Giacconi points out that, because of this speed-up, "energy could not come from
rotation," and he goes on to say:
The only remaining plausible source of the energy was gravitational energy released by
accretion of material from the companion star onto the x-ray emitting object.228
Here, once again, we meet the ubiquitous "no other way" argument. There is no physical
evidence to support the assumption that such a process is actually in operation. Whether
or not this is "plausible," it is simply a hypothesis based on a set of assumptions about the
nature of the two components of the x-ray emitting binary system, assumptions that, as
we have shown, are invalid.
Neither the synchrotron process nor the accretion process is applicable to the remnants,
other than those of the Crab Nebula type, so still another x-ray emission process has to be
devised for them. Here the assumption is that "the high energy radiation is generated by
heating as the gas of the remnant collides with the interstellar medium."229 Two big
problems confront this hypothesis: (1) it is hard pressed to account for the existence of
"hot spots" in the interiors of many remnants, particularly where, as in Cassiopeia A, the
hot spots are quasi-stationary, and (2) the energy emission from the remnants decreases
much more slowly than this explanation predicts.
The x-ray emission as a whole is another example of the way in which present-day
astronomical theory arrives at many different explanations of the same thing. In view of
the incomplete nature of astronomical knowledge as it now exists, in spite of the
remarkable progress that has been made in the past few decades, it is not possible to test
these hypotheses against established facts. In the absence of the disproof that would
follow from such a test, each of these explanations has a degree of plausibility, when
considered individually, although all are based almost entirely on assumptions. But like
the many different theories devised to account for the individual manifestations of
extremely high density that were enumerated in Chapter 17, the multiplicity of
explanations of the same phenomenon has the cumulative effect of exposing the
artificiality of the foundations of the hypotheses. The demonstrated need to devise a new
explanation of a phenomenon whenever it is encountered in a new setting is prima facie
evidence that there is something wrong with the current understanding of this
phenomenon.
As might be expected, extra-galactic observations add still further dimensions to the
problem. All galaxies emit some x-rays. In many cases, this radiation evidently originates
from sources similar to those that are observed in our own Milky Way galaxy. Both
discrete sources and x-ray emitting supernova remnants have been identified in some of
the other galaxies that are close enough to be within the range of available observational
facilities. Among the more distant galaxies there are some much more powerful x-ray
emitters. The Seyfert galaxies, a class of very active spirals that will be discussed in
Chapter 27, are observed to be strong x-ray sources. Galaxies that show evidence of
violent activity, such as M 82 and NGC 5128, also radiate enormous amounts of x-ray
energy. Quasars are likewise prodigious emitters of x-rays, as would be expected from
the turbulent conditions in these objects, which are undergoing rapid and drastic changes.
Another recent finding that has attracted a great deal of attention is the discovery of x-
rays in the intergalactic space in some distant clusters of galaxies. A 1980 report by
Giacconi identifies two classes of these emissions: one in which the emitting sources are
"clumped around individual galaxies or groups of galaxies" and another in which the
emission is "concentrated near the center and falls off smoothly with distance."228
According to Gorenstein and Tucker, the x-ray emission comes from clusters with "a
centrally located supergiant elliptical [spheroidal] galaxy."167 These authors also report
that M 87, the nearest galaxy of the giant class, and a member of the Virgo cluster, "is
enveloped by an x-ray emitting cloud nearly a million light years across."
The x-ray emission in these clusters of galaxies is currently attributed to the presence of a
hot gas. "The space within such clusters is pervaded by gas heated to some 10 million
degrees,"228 asserts Giacconi. It is evident that this situation calls for some more critical
consideration. In the light of what is known about the fundamentals of heat and
temperature, a high temperature in a medium as sparse as that of intergalactic space is
impossible. As explained in Volume II, the temperature of a gas is the result of
containment. The pressure is a measure of the containment, while the temperature is a
measure of the energy imparted to the gas that is subject to pressure. Thus the
temperature, T, is a function of the pressure, P. For a given volume, V, of "ideal gas,"
the two quantities are directly proportional, as indicated by the general gas law, PV = RT,
where R is the gas constant. If the pressure is very low, as in the near vacuum of
intergalactic or interstellar space, the temperature is likewise very low. It is measured in
degrees, not in millions of degrees.
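To state this proportionality explicitly: for a fixed quantity of gas held at a fixed volume, the general gas law gives

\[
\frac{T_2}{T_1} = \frac{P_2}{P_1} \qquad (V \text{ fixed})
\]

so that, for example, reducing the pressure by a factor of a million at constant volume reduces the temperature indicated by this relation by the same factor of a million.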
It is often asserted that portions of the gas in the vicinity of hot stars or active galaxies are
"heated by radiation." But radiation does not repeal the gas laws. Absorption of radiation
does not increase the temperature if the gas is free to expand. The radiation may ionize
the atoms of the gas, and there appears to be an impression that this elevates the
temperature, but this conclusion is incorrect. The degree of ionization is an indication of
the intensity of the ionizing agency, whatever it may happen to be. At very high
temperatures thermal ionization takes place; that is, a part of the thermal motion is
converted to the type of motion known as ionization. In this case the degree of ionization
is actually an indication of the temperature, the intensity of the thermal motion. But
ionization by another agency, such as radiation, is independent of the temperature.
Motion in the form of radiation is converted directly to motion in the form of ionization.
Here the degree of ionization is an indication of the intensity of the radiation, and has no
relation to the temperature. The possibility of a radiative addition to the gas temperature
must therefore be ruled out. The x-rays in the space around the giant galaxies cannot be
thermally produced. A non-thermal process at the locations where they are observed must
generate them.
In the universe of motion the x-ray emission is due to leakage of intermediate speed
matter from the high pressure region in the interiors of the giant galaxies. Where the
temperature of the leaking matter is in the lower section of the intermediate speed range,
a relatively small amount of cooling carries some of the particles across the unit speed
boundary and into the lower speed range. In this case the emission begins shortly after the
leaking matter leaves the galaxy, and the emission "falls off smoothly with the distance"
as in Giacconi's second category, and around M 87. A higher initial temperature delays
the start of the x-ray radiation, and favors emission in the vicinity of the other galaxies of
the cluster, where the matter escaping from the giant is cooled by contact with the
outlying matter of those galaxies. The distribution of the x-ray emissions then follows the
"clumpy" description in Giacconi's first category. As we will see in Chapter 27, the
Seyfert galaxies are also losing intermediate speed material from their interiors, and the
x-ray emission from these objects, while considerably stronger, for reasons that will be
explained in the subsequent discussion, is a result of essentially the same process that is
operating around the distant giants.
These conclusions with respect to the origin of the x-rays that are observed in the vicinity
of the giant galaxies are also applicable, on a smaller scale, to the production of x-rays in
the surroundings of individual stars. These x-rays are believed to originate in the stellar
coronas, and it has therefore been concluded that "temperatures of a million to 10 million
degrees"228 exist in these coronas. Here, again, the existence of such temperatures is
excluded by basic thermal principles. Consequently, the x-rays cannot be produced
thermally in these locations. But, as in the galactic situation, the x-ray production is
easily explained on the basis of leakage of intermediate speed matter from the interiors of
the stars, followed by a return to the low speed range in the coronas.
This "leakage" explanation also accounts for the relative emission rates from the different
classes of stars, one of the areas in which the new observational findings are upsetting
previous theories. "The predictions of x-ray emission based on classical theories," says
Giacconi, "fail completely to account for the observations."228 As can be seen from the
description of the stellar energy generation process in the earlier pages, the central
regions of all stars are in the condition where the combined thermal and ionization
energies are just sufficient to begin imparting intermediate speeds to a considerable
number of particles. In view of this near uniformity of the internal situation, the principal
determinant of the amount of leakage, aside from the mass of the star, is the thickness of
the layer of matter through which the intermediate speed particles have to make their
way. It follows that the leakage rate should be relatively greater in the smaller stars. This
is confirmed by the Einstein observatory results, which show that the ratio of x-ray to
optical emission is 100,000 times as large in the small main sequence stars of spectral
class M as in the sun, a member of the larger class G. These results, says Giacconi,
"will force a major reconstruction of theories of both stellar atmospheres and stellar
evolution."228

CHAPTER 20

The Quasar Situation

The existence of quasars strongly suggests that we are dealing with phenomena that
present-day physics is at a loss to explain. Perhaps we are making fundamentally wrong
interpretations of some data or it might indicate that there are laws of physics about
which we know nothing yet.230 (Gerrit Verschuur)
The most obvious and most striking feature of the quasars, the point that has focused so
much attention on them, is that they simply do not fit into the conventional picture of the
universe. They are "mysterious," "surprising," "enigmatic," "baffling," and so on. Thus
far it has not even been possible to formulate a hypothesis as to the nature or mechanism
of these objects that is not in open and serious conflict with one segment or other of the
observed facts. R. J. Weymann makes this comment:
The history of our knowledge of the quasi-stellar sources has been one surprise after
another. Indeed, almost without exception, every new line of observational investigation
has disclosed something unexpected.231
The irony of this situation is that long before the quasars were discovered, there was in
existence a physical theory that predicted the existence of the class of objects to which
the quasars belong, and produced an explanation of the major features of these objects,
those features that are now so puzzling to those who are trying to fit them into the
conventional structure of physical and astronomical thought. Although the application of
the theory of the universe of motion to astronomical phenomena was still in a very early
stage at that time, nearly a quarter century ago, the existence of galactic explosions had
already been deduced from the basic premises of the theory, together with the general
nature of the explosion products.
Observational knowledge was far behind the theory. At the time the first edition of this
work was published in 1959 the study of extra-galactic radio sources was still in its
infancy. Indeed, only five of these sources had yet been located. The galactic collision
hypothesis was still the favored explanation of the generation of the energy of this
radiation. The first tentative suggestions of galactic explosions were not to be made
public for another year or two, and it would be three more years before any actual
evidence of such an explosion would be recognized. The existence of quasars was
unknown and unsuspected.
Under these circumstances, the extension of a physical theory to the prediction of the
existence of exploding galaxies, and a description of the general characteristics of these
galaxies and their explosion products was an unprecedented step. It is almost impossible
to extend traditional scientific theory into an unknown field in this manner, as the
formulation of the conventional type of theory requires some experimental or
observational facts on which to build, and where the phenomena are entirely unknown, as
in this case, there are no known facts that can be utilized.
These theoretical steps have to be founded on observational data. Where no data base
exists, the logic of theory alone provides no help.232 (Martin Harwit).
A comprehensive general theory, one that derives all of its conclusions from a single set
of basic premises, without introducing anything from any other source, is not limited in
this manner. It is, of course, convenient to have observational data available for
comparison, so that the successive steps in the development of theory can be verified as
the work proceeds, but this is not actually essential. There are some practical limitations
on the extent to which a theory can be developed without this concurrent verification, as
human imagination is limited and human reasoning is not infallible, yet it is entirely
possible to get a good general picture of observationally unknown regions by appropriate
extensions of an accurate theory. The subject matter of the next six chapters of this
volume, the phenomena of the final stages of the life of material galaxies, provides a very
striking example of this kind of theoretical penetration into the unknown, and before we
undertake a survey of this field as it now stands, it will be appropriate to examine just
what the theory of the universe of motion was able to tell us in 1959 about phenomena
that had not yet been discovered.
We have seen in the preceding pages of this and the earlier volumes that the structure of
matter is such that it is subject to an age limit, the attainment of which results in the
disintegration of the material structure and the conversion of a portion of its mass into
energy. Inasmuch as aggregation is a continuing process in any region of the universe in
which gravitation is the controlling factor (that is, exceeds the recession due to the
progression of the natural reference system), the oldest matter in the universe is located
where the process of aggregation has been operating for the longest period of time, in the
centers of the largest galaxies. Ultimately, therefore, each of the giant old galaxies must
reach the destructive age limit and undergo a violent explosion or series of explosions.
At a time when there was no definite supporting evidence, this was a bold conclusion,
particularly when coming from one who is not an astronomer, and is reasoning entirely
from basic physical premises. As expressed in the 1959 edition:
While this is apparently an inescapable deduction from the principles previously
established, it must be conceded that it seems rather incredible on first consideration. The
explosion of a single star is a tremendous event; the concept of an explosion involving
billions of stars seems fantastic, and certainly there is no evidence of any gigantic variety
of supernova with which the hypothetical explosion can be identified.
The text then goes on to point out that some evidence of explosive activity might be
available, as there was a known phenomenon that could well be the result of a galactic
explosion, even though contemporary astronomical thought did not regard it in that light.
In the galaxy M 87, which we have already recognized as possessing some of the
characteristics that could be expected in the last stage of galactic existence, we find just
the kind of a phenomenon which theory predicts, a jet issuing from the vicinity of the
galactic center, and it would be in order to identify this galaxy, at least tentatively, as one
which is now undergoing a cosmic explosion, or strictly speaking, was undergoing such
an explosion at the time the light now reaching us left the galaxy.
In addition to predicting the existence of the galactic explosions, the 1959 publication
also forecast correctly that the discovery of these explosions would come about mainly as
the result of the large amount of radiation that would be generated at radio wavelengths
by reason of the isotopic adjustment process. The conclusion reached was that
Objects which are undergoing or have recently (in the astronomical sense) undergone
such [extremely violent] processes are therefore the principal sources of the localized
long wave radiations which are now being studied in the relatively new science of radio
astronomy.
Altogether, the theoretical study published in 1959 made the following predictions:
1. That exploding galaxies exist, and would presumably be discovered sooner or
later.
2. That radio astronomy would be the most probable source through which the
discovery would be made.
3. That the distribution of energies in the radiation at radio wavelengths would be
non-thermal.
4. That the exploding galaxies would be giants, the oldest and largest galaxies in
existence.
5. That two distinct kinds of products would be ejected from these exploding
galaxies.
6. That one product would move outward in space at a normal low speed.
7. That the other, containing the larger part of the ejected material, would move
outward at a speed in excess of that of light.
8. That this product would disappear from view.
9. That the explosions would resemble radioactive disintegrations, in that they
would consist of separate events extending over a long period of time.
10. That because of the long time scale of the explosions it should be possible to
detect many galaxies in the process of exploding.
In the quarter century that has elapsed since these predictions were published, the first
three have been confirmed observationally. Evidence confirming the next five is
presented in this work. The information now available indicates that the last two are valid
only in a somewhat limited sense. We now find that the predicted long series of separate
explosions are supernovae in the galactic interiors preceding the final explosion of the
galaxy, and that the latter is an event resembling a boiler explosion. There is evidence
that the products of the supernova explosions do actually build up in the central regions
of the galaxies over a long period of time, as suggested in item 10. This evidence will be
discussed at appropriate points in the pages that follow.
In one respect the 1959 study stopped just short of reaching an additional conclusion of
considerable importance. Inasmuch as one of the products of the galactic explosion is
accelerated to speeds in excess of that of light, it was concluded that this component of
the explosion products would be invisible. This is the ultimate fate of almost all material
ejected with ultra high speeds, including the galactic explosion products. However, the
subsequent finding that the galactic explosion occurs when the internal pressure in the
galaxy becomes great enough to break through the overlying structure means that the
ejected material comes out in the form of fragments of the galaxy—aggregates of stars—
rather than as fine debris. These fragments are subject to strong gravitational forces, and
even though the speeds imparted to them by the explosion exceed the speed of light, the
net speeds after overcoming the oppositely directed gravitational motion are less than that
of light for a finite period of time. It follows that, although the fast-moving component of
the explosion products will finally escape from the gravitational limitations and move off
into the unobservable regions, there is a substantial interim period in which these objects
are accessible to observation. Here, of course, are the quasars, and this is how close the
theoretical study came to identifying them years before they were found by observation; a
point that is all the more worthy of note in view of the fact that conventional theory still
has no plausible explanation of their existence.
As pointed out in the discussion of the pulsars in Chapter 17, what was actually
accomplished in this area in the original investigation reported in 1959 was to predict the
existence and properties of the class of objects to which both the pulsars and quasars
belong. The properties defined in that publication are those which are shared by all
objects of this class. All are explosion products. All have speeds in the upper ranges,
above the speed of light. All are moving outward, rather than being stationary in space
like the related intermediate speed objects, the white dwarfs. Except for the few, such as
those discussed in Chapter 17, that lose enough speed to reverse direction and return to
the material status, all ultimately disappear into the cosmic sector. What the original
investigation failed to do was to carry the theoretical development far enough to disclose
the existence of two different kinds of objects of this class, one originating from the
explosion of a star, the other from the explosion of a galaxy.
The special features of each type of object are due to the differences between stars and
galaxies. The quasar is long lived because it is ejected from a giant galaxy and is subject
to powerful gravitational forces. The pulsar, on the other hand, is ejected from a
relatively small object, a star, and is initially subject to little gravitational restraint. It is
therefore short lived. The many evolutionary features of the quasars have no counterparts
in the life of the pulsar because that life is too short for much evolution. Conversely,
although the pulsed radiation that is the most distinguishing feature of the pulsars
undoubtedly exists in the quasars as well, it is unobservable because the individual
pulsations are lost in the radiation from millions of stars that have entered the pulsation
zone at different times.
As the information in the foregoing paragraphs demonstrates, the theoretical exploration
of the galactic explosion phenomenon carried out prior to 1959 and reported in the book
published in that year, well in advance of any observational discoveries in this area,
supplied us with a large amount of information which, as nearly as we can now determine
on the basis of existing knowledge, is essentially correct. This is a very impressive
performance, and it demonstrates the significant advantage of having access to a theory
of the universe as a whole, one that is independent of the accuracy, and even of the
existence, of observational data in the area under consideration.
Meanwhile, conventional astronomy has been baffled. It has been unable to arrive at any
definite conclusions as to what the quasars are, where they are, or how their unusual
properties originate. The following is an assessment of the existing situation taken from a
current textbook on astronomy:
The most accurate assessment of the quasar problem is that no satisfactory explanation
has been found for the existence of these objects, whose puzzling properties place them
beyond the limits of current astronomical knowledge.233
In this connection, it should be noted that the difficulties which conventional theory is
having with the quasars—those difficulties that have made "quasar" almost synonymous
with "mystery"—are not due to any lack of knowledge about these objects, but to too
much knowledge; that is, more knowledge than can be accommodated within the limits of
the existing concept of the nature of the universe. It is easy to fit a theory to a few bits of
information, and the scientific community currently claims to have a sound theoretical
understanding of a number of phenomena about which very little is actually known—
even about some phenomena that we now find are totally non-existent. But by this time a
great many facts about the quasars have accumulated. As a consequence, orthodox theory
is currently in a position where any explanation that is devised to account for one of the
observed features of the quasars is promptly contradicted by some other known fact.
There is no light on the horizon to indicate that a solution of the existing difficulties is on
the way. More and more data are being gathered, but a basic understanding still eludes
the astronomers. A review of the situation in 1976 by Strittmatter and Williams included
this comment, which is equally appropriate today:
In general this [the large amount of information accumulated in the past seven years] has
led to new problems related to the QSO's, rather than to solution of the many long-
standing problems associated with these objects. The QSO's remain among the most
exciting but least well-understood astronomical phenomena. 234
Ironically, the principal obstacles that have stood in the way of an understanding of the
quasar phenomena are not difficult and esoteric aspects of nature; they are barriers that
the investigators themselves have erected. In the search for scientific truth, a complicated
and difficult undertaking that needs the utmost breadth of vision of which the human race
is capable, these investigators have gratuitously handicapped themselves by placing
totally unnecessary and unwarranted restrictions on the allowed thinking about the
subject matter under consideration. The existing inability to understand the quasars is
simply the result of trying to fit these objects into a narrow and arbitrary framework in
which they do not belong.
Most of these crippling restrictions on thinking result from a widespread practice of
generalizing conclusions reached from single purpose theories. This practice is one of the
most serious weaknesses of present-day physical science. Many of our current theories,
both in physics and in astronomy, are in this single-purpose category, each of them
having been devised solely for the purpose of explaining a single set of observed facts.
This very limited objective imposes only a minimum of requirements that must be met by
the theory, and hence it is not very difficult to formulate something that will serve the
purpose, particularly when the prevailing attitude toward the free use of ad hoc
assumptions is as liberal as it is in present-day practice. This means, of course, that the
probability that the theory is correct is correspondingly low. Such a theory is not, in the
usual case, a true representation of the physical facts. It is merely a model that represents
some of the facts of the physical situation to which it applies. When conclusions derived
from such a theory are applied to phenomena in related fields, the inevitable result is a
distortion of the true relations.
The most damaging of these generalizations based on far-reaching extrapolations of
conclusions derived from very limited data is the pronouncement that there can be no
speeds greater than that of light. For some strange reason, the scientific Establishment has
decreed that this product of a totally unsubstantiated assumption must be treated as Holy
Writ, and accepted without question. "Thou shall not think of speeds greater than that of
light," is the dictum. The conservation laws may be questioned, causality may be thrown
overboard, the rules of logic may be defied, and so on, but one must not suggest that the
speed of light can be exceeded in any straightforward way.
Using a theory of this highly questionable nature as a basis for laying down a limiting
principle of universal significance is simply absurd, and it is hard to understand why
competent scientists allow themselves to be intimidated by anything of this kind. But the
"iron curtain" is almost impenetrable. There are a few signs of a coming revolt against
strict orthodoxy. Some investigators are beginning to chafe under the arbitrary
restrictions on speed, and are trying to find some way of circumventing the alleged limit
without offering a direct challenge to relativity theory. "Tachyons," hypothetical particles
that move faster than light, but have some very peculiar ad hoc properties that enable
them to be reconciled with relativity theory, are now accepted as legitimate subjects for
scientific speculation and experiment. But such halfway measures will not suffice. What
science needs to do is to cut the Gordian knot and to recognize that there is no adequate
justification for the assertion that speeds in excess of the speed of light are impossible.
By an unfortunate coincidence, a universal principle, recognition of which would have
avoided this costly mistake, has never been accepted by physical science. Most other
branches of thought recognize what they call the Law of Diminishing Returns, which
states that the ratio of the output of any physical process to the input does not remain
constant indefinitely, but ultimately decreases to zero. The basis for the existence of such
a law has not been clear, and this is probably one of the principal reasons why scientists
have not accepted it. In the light of the theory of the universe of motion it is now evident
that the law is merely an expression of the fact that the status of unity as the datum for
physical activity precludes the existence of infinity. Zero may exist as a difference
between two finite quantities, but there is no simple zero, and there are no infinities in
nature.
Present-day physicists realize that they are dealing with too many infinities. "If we put all
these principles [the "known" principles of physics] together . . . we get inconsistency,
because we get infinity for various things when we calculate them,"235 says Richard
Feynman. But the physicists have not conceded the existence of the universal law that
bars all infinities, and they have allowed Einstein to assume that the relation F = ma
extends to infinity. (This, of course, is the assumption on which he bases his conclusion
that the limiting zero value of a corresponds to an infinite value of m.)
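For reference, the conventional relation in question can be written out. In the accepted formulation the mass entering F = ma is taken to vary with speed as

\[
m = \frac{m_0}{\sqrt{1 - v^2/c^2}}
\]

so that, for a fixed applied force, the acceleration a = F/m falls toward zero as the speed v approaches c, while the mass m increases without limit. It is the extension of this relation all the way to its limiting value that is being questioned in the foregoing paragraph.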
Meanwhile, conclusions derived from other single purpose theories have compounded the
difficulties due to the arbitrary exclusion of speeds greater than that of light from
scientific thought. The accepted explanation of the high density of the white dwarfs
cannot be extended to aggregates of stars, and therefore stands in the way of a realization
that the high density of the quasars results from exactly the same cause. Acceptance of
the Big Bang theory of the recession of the distant galaxies, a theory designed to explain
one observed fact only, prevents recognition of the scalar nature of motion of the
recession type, and so on.
An unfortunate result of the proliferation of these single purpose theories is that it places
a barrier in the way of correcting errors individually, the step-by-step way in which
scientific knowledge normally advances. Each of these erroneous theories applicable to
individual phenomena rests in part on equally erroneous theories of other phenomena,
and has been forced into agreement with those other theories by means of ad hoc
assumptions and other expedients. Correction of the error or errors in any one of these
interlocking theories is unacceptable because it leaves that theory in conflict with all
others in the network. Scientists are naturally reluctant to make a wholesale change in
their theories and concepts. But when they have maneuvered themselves into the kind of
a theoretical position that they now occupy in the area that we are discussing, there is no
alternative. Elimination of errors must take place on a wholesale scale if it is to be done at
all. The broad scope of the revisions of astronomical thought required by the theory of the
universe of motion should therefore be no surprise.
It is true that correction of a multitude of errors in one operation leads to theoretical
descriptions of some phenomena that are so different from previous views that it might
almost seem as if we are dealing with a different world. But it should be remembered, as
we begin consideration of the quasar phenomena, the terra incognita of modern
astronomy, that the criterion of scientific validity is agreement with the observed facts.
Furthermore, the acid test of a theory, or system of theories, is whether that agreement,
once established, continues to hold good as observation and experiment disclose new
facts. Of course, if the theory can predict the observational discoveries, as the theory of
the universe of motion did in the case of the galactic explosions and a number of the
features of the products thereof, this emphasizes the agreement, but prediction is not
essential. The requirement that a theory must be prepared to meet is that it must be
consistent with all empirical knowledge, including the new information continually being
accumulated. This is the rock on which so many once promising theories have foundered.
Many other theories have survived only with the help of ad hoc assumptions to evade
conflicts. This currently fashionable expedient is not available to the Reciprocal System
of theory, which, by definition, is barred from introducing anything from outside the
system; that is, anything that cannot be derived from its fundamental postulates. But, as
can be seen in the pages of this volume, this new system of theory has no need for such
an expedient. The principal elements of the new observational information acquired by
the astronomers during the last few decades have all been identified with corresponding
elements of the theoretical structure without any serious difficulty, and there is good
reason to believe that the minor details will likewise be accounted for when someone has
time to examine them systematically.
It has been necessary to extend the theoretical development very substantially, not only to
account for the facts disclosed by the new observations, but also to deal with areas not
covered in the original investigation. That original study was not primarily concerned
with astronomical phenomena as such, but rather as physical processes in which the
physical principles derived from the theory could be tested in application under extreme
conditions. In this present volume the objectives have been broadened. In addition to
using astronomy as a proving ground for the laws and principles of fundamental physics,
we are using these laws and principles, now firmly established, to explain and correlate
the astronomical observations.

CHAPTER 21

Quasar Theory
The key to an understanding of the quasars and associated phenomena is a recognition of
their status as the galactic equivalents of the class of white dwarfs known as pulsars. The
theory of the upper speed ranges developed in Chapter 15 and applied to the products of
the supernova explosions in the subsequent discussion can be applied in the same manner
to the quasars, with such modifications as are appropriate in view of the differences
between stars and galaxies.
Aside from these differences due to the larger size, more complex structure, stronger
gravitational forces, etc., the galactic explosions are analogous to the supernovae, the
major products of the galactic explosions are analogous to the major products of the
supernovae, and the unusual properties of the ultra high speed component of the galactic
explosion products are analogous to the unusual properties of the ultra high speed product
of the Type II supernova, the fast-moving white dwarf that we call a pulsar. The
"mysterious" quasars are not so mysterious after all, except in the sense that all entities
and phenomena are mysterious if they are viewed in the context of erroneous theories or
assumptions.
The analogy between the white dwarfs and the quasars is so obvious that it should have
been recognized immediately, in general, if not in detail, when the quasars were first
discovered. The white dwarf is a star whose distinctive property is a density far outside
the range of the densities of normal stars. The quasar is an aggregate of stars, one of the
most distinctive properties of which is a density far outside the range of the densities of
normal stellar aggregates. The conclusion that these new objects, the quasars, are the
galactic equivalent of the white dwarfs follows almost automatically. But however
natural this conclusion may be, the astronomers cannot accept it because they are
committed to conflicting ideas that have been derived from single purpose theories and
have been generalized into universal laws.
The explosive events that produce these two classes of objects differ in some respects,
but the general situation is the same in both cases. One component of the products of both
types of explosions is ejected with a speed less than that of light. Since this is the normal
speed in the material sector of the universe, this product is an object of a familiar type, a
rather commonplace aggregate of the units of which the exploding object was originally
composed. The constituent units of a star are atoms and molecules. When a star explodes
it breaks down into these units, and we therefore see a cloud of atomic, molecular, and
multimolecular particles emanating from the site of the explosion. But there is also a
second component, a peculiar object known as a white dwarf star, which we have now
identified as a cloud of similar particles that has been ejected with a speed greater than
that of light, and is therefore expanding into time rather than into space.
Some of the products of the galactic explosion are likewise reduced to atomic or particle
size, but the basic units of which a galaxy is composed are stars, and hence the material
ejected by the explosion comes out mainly in the form of stars. Instead of clouds of gas
and dust particles, the galactic explosion produces "clouds" of stars: fragments of the
original giant galaxy. Here, as in the supernova explosion, one of the products of a full
scale galactic explosion acquires a speed in excess of that of light, while the other
remains below that level. The aggregate of stars traveling at normal speed is also normal
in other respects, the only prominent distinguishing feature being the strong radio
emission in the early stages, due to the intermediate speed matter incorporated in the
aggregate. This product is a radio galaxy. The ultra high speed product is a quasar.
As noted in Chapter 20, while the additional studies that have been made since the
original publication of the theory in 1959 have confirmed most of the conclusions therein
expressed, the early views as to the mechanism of the galactic explosion have been
modified to some extent. It is now evident that there is a build-up of pressure in the
interiors of the giant galaxies due to Type II supernova explosions that occur in large
numbers when the old stars in the central regions begin to reach their age limits. The
enormous internal pressure thus generated eventually reaches a point at which it blows
out a section of the overlying mass of the galaxy, in the manner of a boiler explosion.
When the pressure is relieved, the galactic structure reforms, and the building up of the
internal pressure is resumed. In due course this results in a repetition of the galactic
explosion. As predicted in the 1959 edition, a long series of such explosive events
ultimately destroys the galaxy.
The build-up of internal pressure that occurs in the central regions of the giant galaxies,
according to our deductions from the postulates that define the universe of motion, would
be impossible on the basis of previous astronomical thought, as conventional theory
provides no means of containment of energetic stars or particles. But our finding is that
the stars in any aggregate occupy equilibrium positions, and they resist any displacement
from these positions. The outer regions of a galaxy thus act as the walls of a container,
resisting the internal forces, and confining the high speed material to the interior of the
galaxy where the large-scale disintegration of stars is taking place. As we saw in
Chapter 19, there is some leakage through the confining walls, but the fact that evidence
of leakage is detected only in the vicinity of the largest galaxies (Reference 167) indicates
that it does not reach major proportions until the internal pressure is almost strong enough
to accomplish the break-through. When the internal pressure does finally arrive at the
level at which it overcomes the resistance, a whole section of the overlying portion of the
galaxy is blown off as a quasar.
A study of quasar sizes, which will be discussed later, indicates that these ejected
fragments of the giant galaxies range in size from about 7×10⁷ stars, the size of a dwarf
elliptical galaxy, to about 2×10⁹ stars, the size of a small spiral. In the pages that follow it
will be shown that the theoretical properties of galactic fragments of these sizes, moving
at ultra high speeds in the explosion dimension, are identical with the observed properties
of the quasars.
It is worth noting at this point that the foregoing explanation of the nature and origin of
the quasars is not in conflict with existing quasar theory, as the astronomers have not yet
been able to formulate a theory of the quasars.
As yet we have no unique theory and no single model that explains the nature of quasars,
let alone their origin or source of energy.236 (Martin Harwit)
Nor do they have a theory as to how and why galaxies explode. Even their theories of
stellar explosions are admittedly little more than speculations.
We should emphasize at the outset that modern science does not yet have a genuine
theory of stellar explosions at its disposal.237 (I. S. Shklovsky)
Motion of an astronomical object perpendicular to the line of sight—proper motion, as it
is known to the astronomers—can be measured, or at least detected, by observation of the
change of position of the object with respect to the general pattern of astronomical
positions. Motion in the line of sight is measured by means of the Doppler effect, the
change in the frequency of the radiation from the object that takes place when the emitter
is moving toward or away from the observer. No proper motion of the quasars or other
very distant galaxies has been detected, and we may therefore conclude that the random
vectorial motions of these galaxies are too small to be observable at the enormous
distances that separate us from these objects. By reason of the progression of the natural
reference system, however, the distant galaxies are receding from each other and from the
earth at high speeds that increase in direct proportion to the distance. The Doppler effect
due to these speeds shifts the spectra toward the red in the same proportion. Inasmuch as
an approximate value of the relation between redshift and distance (the Hubble constant)
can be obtained by observation of the nearby galaxies whose distance can be
approximated by other methods, the redshift serves, in current practice, as a means of
measuring the distances to galaxies that are beyond the reach of other methods.
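As a rough arithmetical sketch of this procedure, the distance follows from the redshift and an adopted
value of the Hubble constant; the constant used below is assumed for illustration only and is not taken
from the measurements discussed here.

    # Redshift as a distance indicator (illustrative sketch).
    c = 299792.458            # speed of light, km/s
    H0 = 55.0                 # assumed Hubble constant, km/s per megaparsec (illustrative value)

    def distance_mpc(z):
        # valid only for the small redshifts of relatively nearby galaxies
        return c * z / H0

    print(round(distance_mpc(0.01)))   # roughly 55 megaparsecs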
One of the most striking features of the quasars is that their redshifts are fantastically
high in comparison with those of other astronomical objects. While the largest redshift
thus far (1983) measured for a normal galaxy is 0.67 (Reference 238), some of the quasar
redshifts are near 4.00. If we assume, as most astronomers now do, that these are ordinary
recession redshifts, then the quasars must be by far the most distant objects ever detected
in the universe.
Our theoretical development indicates that from the standpoint of distance in space this
conclusion is erroneous. In the context of the theory of the universe of motion, the normal
recession redshift cannot exceed 1.00, as this value corresponds to the speed of light, the
full speed of the progression of the natural reference system, the level that is reached
when the effect of gravitation becomes negligible. Even without any detailed
consideration, it is therefore evident that the observed quasar redshift includes another
component in addition to the recession shift. From the account that has been given of the
origin of the quasar it can readily be seen that this excess over and above the redshift due
to the normal recession is a result of the motion in additional dimensions that has been
imparted to the quasar by the violent galactic explosion.
As brought out in Chapter 15, an object with a speed intermediate between one unit (the
speed of light) and two units is moving in the spatial equivalent of a time magnitude. This
motion in equivalent space is not capable of representation in the spatial reference
system, except where it reverses a gravitationally caused change of position. The Doppler
shift, on the other hand, is a simple numerical relation, a scalar total of the speed
magnitudes in all dimensions, independent of the reference system. The effective portion
of the speed in equivalent space therefore appears as a component of the quasar redshift.
The qualification "effective" has to be included in the foregoing statement because the
quasar motion beyond the unit speed level takes place in two scalar dimensions, only one
of which is coincident with the dimension of the spatial reference system. The motion in
the other equivalent space dimension has no effect on the outward radial speed, and
therefore does not enter into the Doppler shift.
Perhaps it would be well to add some further explanation on this point, since the idea of
scalar motion in two dimensions is unfamiliar, and probably somewhat confusing to those
who encounter it for the first time. In application to scalar motion, the term "dimension"
is being used in the mathematical sense, rather than in the geometrical sense; that is, a
two-dimensional scalar quantity is one that requires two independent scalar magnitudes
for a complete definition. When such a two-dimensional scalar quantity is superimposed
on a commensurable one-dimensional quantity, as in the extension of the one-
dimensional scalar motion into the two-dimensional region, only one of the two scalar
magnitudes of the two-dimensional quantity adds to the one-dimensional magnitude.
Since the other is, by definition, independent of the magnitude with which it is associated
in two dimensions, it is likewise independent of the one-dimensional quantity that adds to
that associated magnitude.
On the basis of the theory developed in Chapter 15, the total redshift (a measure of the
total effective speed) of an object moving with a speed greater than unity is the recession
redshift plus half of the two-dimensional addition. As explained in the earlier discussion,
the resulting value is normally z + 3.5z½. Since both the recession in space and the
explosion-generated motion in equivalent space are directed outward, no blueshifts are
produced by either component of the quasar motion.
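The arithmetic of this relation can be sketched in a few lines; the sample recession redshift is
illustrative only.

    # Total quasar redshift from the relation stated above: z + 3.5 * z**0.5
    def total_quasar_redshift(z):
        return z + 3.5 * z ** 0.5

    print(round(total_quasar_redshift(0.04), 2))   # a recession redshift of 0.04 gives 0.74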
The question as to the interpretation of the redshifts has been a lively source of
controversy ever since the original discovery of the quasars. Both of the alternatives
available within the limits of conventional astronomical theory are faced with serious
difficulties. If the redshift is accepted as an ordinary Doppler effect, due to the galactic
recession, the indicated distances are so enormous that other properties of the quasars,
particularly the energy emission, are inexplicable. On the other hand, if the redshift is not
due, or not entirely due, to the recession speed, current theory has no tenable hypothesis
as to the mechanism by which it is produced. So the question at issue, as matters now
stand, is not which alternative is correct, but which of the two untenable alternatives
currently available should be given preference for the time being.
Firm evidence bearing on this issue is difficult to obtain. Arguments on one side or the
other of the question are based mainly on apparent association between quasars and other
objects. Apparent associations with similar redshifts are offered as evidence in support of
the simple Doppler shift, or "cosmological," hypothesis. The opponents counter with
what appear to be associations between objects whose redshifts differ, evidence, they
contend, that two different processes are at work. Each group brands the opponents'
associations as spurious.
Obviously, a kind of evidence that supports both sides of an argument does not settle the
issue. Something more than the mere existence of what may be an association between
astronomical objects is needed before a firm conclusion can be derived from observation.
In the next chapter we will examine the only case now known in which enough additional
information is available to enable reaching firm conclusions.
Some attempts have been made to derive support for the cosmological hypothesis from
the existence of absorption redshifts, the thought being that the absorption may take place
in a cloud of matter existing somewhere in the line of sight, but this idea has never made
much headway, as it has become increasingly clear that the absorption redshifts are
intrinsic to the quasars. Correlation of redshift with observed brightness has also been
called upon to provide an empirical foundation for the currently popular hypothesis. For
example, the results of a comparison by Bahcall and Hills were summarized in a news
report as follows: "The point is simply that, by and large, quasars with large redshifts
seem dimmer than those with small redshifts, just as we would expect if they are farther
away." 239 This is valid evidence against the "local" hypothesis, which asserts that the
quasars have been ejected from our own, or some nearby, galaxy, but it does not favor the
cosmological hypothesis over the assertions of its present-day critics, who merely
contend that there is a second component in the observed redshift, in addition to the
component due to the normal recession.
A somewhat more substantial item of support for the cosmological hypothesis that has
been increasing in popularity in recent years is the finding that there is observable "fuzz"
surrounding many of the quasars. This is interpreted as evidence that the quasars are
simply the active cores of highly disturbed galaxies, similar to the Seyfert galaxies, but
more extreme, Super-Seyferts, so to speak. However, conclusions of this kind, which are
welcomed by the investigators because they tend to support currently popular theories,
are not usually given the critical scrutiny that is applied to less favored products. If we
look at this conclusion without the rose-colored glasses, we will note the following
points: (1) Some "fuzz" can be expected around many of the quasars without any normal
galaxy being present. Its existence is demonstrated by the absorption redshifts. (2) In
view of the presence of the very bright quasars, much, perhaps most, of the optical
radiation from the "fuzz" is reflected light. (3) The properties of the quasars are not
merely more extreme manifestations of the properties of the Seyferts; in many respects
they are quite different. (4) Even if this "fuzz" argument were valid, it does not come to
grips with the key problem of the cosmological hypothesis: the utter inability to account
for the enormous energy output. Thus it does not change the essential element of the
situation.
Most astronomers accept the cosmological hypothesis, not because of the weight of the
evidence, but because they know of no mechanism whereby the second redshift
component can be generated, and they are not willing to concede the existence of an
unknown mechanism. This leaves them with the necessity of finding a new mechanism
whereby energy can be generated in amounts vastly exceeding not only the capabilities of
any known energy generation process, but also the total energy available in any known
source. Just why this should be the preferred alternative is rather difficult to understand.
In either case, something new must be found, but an explanation of the energy generation
process must also provide for an enormous extension of the magnitude of known energy
generation processes. Some hypotheses of a far-out nature have been advanced to meet
this requirement, but as noted by Jastrow and Thompson:
These ideas [about the energy of the quasars] are not supported by observational
evidence. They are no more than desperate efforts by the astronomer to take the most
luminous single objects that he has ever discovered and scale these objects upward in size
and mass by factors of a million-fold or more, without any valid theoretical reason for
doing so.233
In any event, the application of the theory of the universe of motion to this situation now
eliminates the need for any new kind of a mechanism, as it identifies the second redshift
component as another Doppler shift, produced in the same manner as the normal
recession redshift, and constituting a scalar addition to the normal shift.
As in the case of the pulsars discussed in Chapter 17, the quasar remains as a discrete
object in the spatial reference system until the two-unit boundary of the material sector is
reached. But there is an important difference. The gravitational retardation of the pulsar
by the remnants of the star from which it originated is relatively minor. It may be slowed
up to some extent at a later stage of its existence by the combined effect of other stars in
the neighborhood, but it is never subject to any strong gravitational effect. As a result it
reaches the sector boundary and converts to motion in time relatively soon. It is therefore
a short-lived object (astronomically speaking). The quasar, on the other hand, is subject
from the start to the gravitational forces of an entire galaxy of somewhere near 10¹² solar
masses. It therefore accelerates slowly and appears as a visible object in space for a long
period of time while the gravitational attraction is being overcome.
If the quasar is not destroyed by internal violence during its visible life, it finally
disappears when the point of conversion to the cosmic status is reached. As we saw in
Chapter 15, the explosion redshift, 3.5z½, at this point is 2.00. The corresponding
recession redshift is 0.3265, and the total quasar redshift is 2.3265. (In the ensuing
discussion the last digit will be dropped, as the redshift measurements are not currently
carried to more than four significant figures.) Here the motion in space converts to
motion in time. An alternative that may take precedence under appropriate conditions
will be discussed in Chapter 23.
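The arithmetic of this conversion point can be verified directly from the relation stated above.

    # At the conversion point the explosion redshift 3.5 * z**0.5 reaches 2.00.
    z_limit = (2.00 / 3.5) ** 2        # recession redshift at the conversion point: 0.3265
    total_limit = z_limit + 2.00       # total quasar redshift at that point: 2.3265
    print(round(z_limit, 4), round(total_limit, 4))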
In this connection, it is interesting to note that while current astronomical theory regards
the range of quasar speeds as extending without interruption to beyond the 3.5 level,
the observers have reported evidence indicating that something occurs in the vicinity of
what we have identified as the cut-off point at 2.326. This evidence and its implications
will be included in the discussion in Chapter 23.
The energy imparted to the galactic fragment identified as a quasar is, of course, shared
between the motion of the individual stars within the fragment, that of the associated dust
and gas, and the motion of the object as a whole. Indeed, a considerable portion of the
total energy involved is communicated to the constituent stars during the build-up of the
explosive forces before the ejection actually takes place. We may deduce, therefore, that
most, or all, of the stars in the quasar are individually moving at speeds in the upper
ranges. The quasar is consequently expanding in time, which means that it is contracting
in equivalent space. Hence, like the white dwarfs, which are abnormally small stars, the
quasars are abnormally small galaxies (from the spatial standpoint).
This is the peculiarity that has given them their name. They are "quasi-stellar" sources of
radiation, mere points like the stars, rather than extended sources like the normal
galaxies. Some dimensions and structure are now being observed with the aid of powerful
instruments and special techniques, but this new information merely confirms the earlier
understanding that as galaxies, or galactic fragments, they are extremely small. The most
critical issue in the whole quasar situation, as seen in the context of current thought, is
"the problem of understanding how quasars can radiate as much energy as galaxies while
their diameters are some thousand times smaller." 240
But this is not a unique problem; it is a replay of a record with which we are already
familiar. We know that there is a class of stars, the white dwarfs, which radiate as much
energy as some ordinary stars, while their diameters are many times smaller. Now we
find that there is a class of galaxies, the quasars, that has the same characteristics. All that
is required for an understanding is a recognition of the fact that these are phenomena of
the same kind. It is true that the currently accepted theory of the white dwarfs contains an
explanation of their small sizes that cannot be extended to the quasars, but the obvious
conclusion from this is that the current theory of the white dwarfs is wrong. In the
universe of motion the abnormally small dimensions are due to the same cause in both
cases. Speeds in excess of the speed of light introduce motion in time, which has the
effect of reducing the equivalent space occupied by each object. As pointed out earlier,
the quasars are simply the galactic equivalent of the white dwarf stars.
The brightness of the quasars, another of their special characteristics, is also a result of
their abnormally small spatial size. The area from which the radiation of a quasar
originates is much smaller than that of a normal galaxy of equivalent size, while the
emission is greater because of the greater energy density. In this case, the situation is
somewhat more complex than in the white dwarf stars. The increase in the intensity of
emission from these stars is mainly a matter of radiating a similar amount of energy from
a smaller surface. The corresponding increase in the emission per unit of surface area of
the stars of the quasar has no effect on the radiation per unit of surface area of this object
as a whole, but an increase in the radiation intensity is produced by the greater stellar
density—that is, the larger number of stars per unit of volume due to the small size of the
quasar. The intensity of the radiation is still further increased by the emission from the
large concentrations of fast-moving gas and dust particles in the quasars, a galactic
component that is not present in normal galaxies. The radiation from these two separate
sources in the quasars can be identified with the two observed radiation components, that
with a line spectrum from diffuse matter, and that with a continuous spectrum from the
stars.
Because of the diversity of the processes that are taking place in the quasars, the
frequencies of the emitted radiation extend over a wide range. As explained in Chapter 18,
thermal and other processes affecting the linear motions of atoms generate radiation that
is emitted principally at wavelengths relatively close to that corresponding to unit speed,
9.12×10⁻⁶ cm, whereas processes such as radioactivity that alter the rotational motions of
the atoms generate radiation that is mainly at wavelengths far distant from this level.
Explosions of stars or galaxies, particularly the latter, cause rotational readjustments of
both the material and cosmic types, hence these events generate both very long wave
radiation (radio) and very short wave radiation (x-rays and gamma rays), as well as
thermal and inverse thermal radiation.
The question as to the origin of the large amount of energy radiated from the quasars has
been a serious problem ever since the discovery of these objects. The new information
derived from the theory of the universe of motion has now resolved this problem. First, it
has drastically scaled down the indicated magnitude of this energy. The finding that the
greater part of the quasar motion indicated by its redshift has no effect on the position of
this object in space, and that, as a consequence, the quasar is much less distant than the
cosmological interpretation of the redshift would indicate, makes a very substantial
reduction in the calculated energy emission. The further finding that the radiation is
distributed two-dimensionally rather than three-dimensionally simplifies the problem
even more significantly.
For example, if we find that we are receiving the same amount of radiation from a quasar
as from a certain nearby star, and the quasar is a billion (10⁹) times as far away as the
star, then if the quasar radiation is distributed over three dimensions, as currently
assumed, the quasar must be emitting a billion billion (10¹⁸) times as much energy as the
star. But on the basis of the two-dimensional distribution that takes place in equivalent
space, according to the theory of the universe of motion, the quasar is only radiating a
billion (10⁹) times as much energy as the star. Even in astronomy, where extremely large
numbers are commonplace, a reduction of the energy requirements by a factor of a billion
is very substantial. An object that radiates the energy of a billion billion (10¹⁸) stars is
emitting a million times the energy of a giant spheroidal galaxy, the largest aggregate of
matter in the known universe (about 10¹² stars), and attempting to account for the
production of such a colossal amount of energy is a hopeless task, as matters now stand.
On the other hand, an object that radiates the energy of a billion (10⁹) stars is equivalent,
from the energy standpoint, to no more than a rather small galaxy.
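The scaling in this example reduces to a line or two of arithmetic; the distance ratio of a billion is
the illustrative figure used above.

    # Equal received radiation from objects at very different distances.
    distance_ratio = 1e9
    energy_3d = distance_ratio ** 2    # inverse-square (three-dimensional) dilution: 1e18
    energy_2d = distance_ratio         # inverse first-power (two-dimensional) dilution: 1e9
    print(energy_3d, energy_2d)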
While the theory is thus drastically scaling down the amount of energy to be accounted
for, it is at the same time providing a large new source of energy to meet the reduced
requirements. The disintegration of an atom at the upper destructive limit can result in the
complete conversion of the atomic mass into energy. Inasmuch as the magnetic ionization
of the matter of which a star is composed is uniform throughout a large portion of the
mass, the explosion of the star at this upper limit is theoretically able to convert a major
part of the stellar mass into energy. It should also be noted that the quasar is not called
upon to provide its own initial energy supply. The giant galaxy from which the quasar is
ejected provides the kinetic energy that accelerates a quasar as a whole and its constituent
stars to upper range speeds. All that the quasar itself has to do is to meet the subsequent
energy requirements.
A point that has given considerable trouble to those who are attempting to put the
observational data with respect to the quasars into some coherent pattern is the existence
of relatively large fluctuations in the output of radiation from some of these objects in
very short intervals of time. This imposes some limits on the sizes of the regions from
which the radiation is being emitted, and thereby complicates the already difficult
problem of accounting for the magnitude of the energy being radiated. Our new
theoretical development has eliminated these difficulties. The answers to the size and
energy problems have been derived from the fundamental premises of the theory of the
universe of motion in the foregoing pages. When viewed in the context of these general
findings, identification of the primary source of the energy as a large number of
individual stellar explosions that accelerate their products to speeds in excess of the speed
of light is sufficient to account for the fluctuations.
One feature of the quasars that has been given a great deal of attention by the astronomers
is their distribution in space. Almost from the beginning of radio astronomy, it has been
noted that there is an apparent excess of faint radio sources; that is, if it is assumed that
the luminosity is related to the distance in the normal inverse square manner, the density
of the sources increases with the distance. Since the radiation that is now being received
from the more distant sources has been traveling for a longer time, the observations may
be interpreted as indicating that the average density of the objects that emit radio
radiation was greater in earlier eras. Such a conclusion, if valid, would be highly
favorable to the evolutionary theories of cosmology, and the available evidence has been
closely scrutinized for this reason.
As matters now stand, the majority opinion is that the issue has been settled in favor of
the contention that the density of these radio sources is less now than at the time the
radiation left the distant sources; that is, the density is decreasing with time. However,
this conclusion is based on the assumption that the distribution of the radiation is three-
dimensional, and it is invalidated by our finding that the radiation from the quasars is
distributed two-dimensionally. On the basis of this new finding, the excess of faint
sources merely means that some of the sources are quasars, which we already know
without the benefit of the radio source counts.
Because of the much more rapid decrease in visibility on the three-dimensional basis as
compared to the two-dimensional distribution, the theoretical development indicates that
the observable sources of radiation beyond a certain limiting magnitude should all be
objects radiating in two dimensions; that is, quasars. This theoretical conclusion is
confirmed by a study by Bohuski and Weedman, which found that the curve representing
the relation of the number of distant radio sources to the magnitude has a slope of 0.4,
corresponding to a two-dimensional distribution, rather than the 0.6 slope that would
result from a distribution in three dimensions. As expressed by the investigators,
"virtually all stellar objects at high galactic latitude with magnitude of 23 or above are
quasars." 241 Some further observational information supporting the theoretical two-
dimensional distribution of the quasars will be presented in the chapters that follow,
especially Chapter 25.
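One way to see the arithmetic behind the quoted slopes: with magnitudes defined by m = -2.5 log10 F, a
cumulative count law N(>F) proportional to F^-n gives a slope of n/2.5 in the count-magnitude relation,
so that n = 1.5 yields 0.6 and n = 1.0 yields 0.4. The short sketch below performs only this conversion;
the identification of the two exponents with three-dimensional and two-dimensional distributions is that
of the foregoing discussion.

    # Converting a count-law exponent n, where N(>F) ~ F**(-n), into a slope of
    # log N against magnitude, using m = -2.5 * log10(F) + constant.
    def count_slope(n):
        return n / 2.5

    print(count_slope(1.5))   # 0.6
    print(count_slope(1.0))   # 0.4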
The idea of a two-dimensional distribution of radiation is not as unprecedented as it may
seem. It has been recognized that there are aspects of the quasar and pulsar radiation that
are indicative of a distribution in less than three dimensions. Current theory of the pulsars
is expressed in terms of "beams." For instance, A. Hewish, in a review article on pulsars,
refers to "beaming in two coordinates," 242 which is merely one way of describing a two-
dimensional distribution. The essential difference between the conventional view and the
explanation derived from the theory of the universe of motion is that the astronomical
hypotheses depend on the existence of special mechanisms of a highly speculative nature,
whereas the deductive derivation of the properties of the universe of motion leads to a
two-dimensional distribution of all radiation originating from objects moving at upper
range speeds.
The discussion in this chapter can appropriately be closed with some reflections on a
comment by Gerrit Verschuur that reads as follows:
At present there are many areas of astronomy (big bang theory, quasars, black holes) in
which conventional physics seems to fail, and seeking understanding of these strange
phenomena may yet lead to a revolution in thought.243
The perplexity with which the astronomers view the facts that they have accumulated
about the properties of the quasars is well illustrated by this comment which classifies
these observed, but not understood, objects with two hypothetical entities, the Big Bang
and the black hole, that are not only "strange," but wholly non-existent. Verschuur's
suggestion that arriving at a better understanding of the quasar phenomena might require
a change in physical thinking has now been verified, but in the reverse sequence. As the
contents of this chapter show, it is the revolution in physical thought resulting from the
development of the theory of the universe of motion that has enabled understanding of
the quasar phenomena. The chapters that follow will extend this understanding into more
detail.
CHAPTER 22
Verification
The quasar theory described in Chapter 21 accounts for the major features of these objects: what they
are (fast-moving galactic fragments), how they originate (by explosions of massive old galaxies),
where their energy comes from (large numbers of supernovae), what gives them their unique
characteristics (speeds greater than that of light), and what their ultimate destiny will be (escape into
the cosmic sector, the sector of motion in time). These are all necessary consequences of the physical
principles of the universe of motion, as developed in the earlier volumes of this work, and the quasars
are directly in the main line of the cyclic evolution of matter as described in the preceding pages of
this present volume. But in view of the unfamiliar nature of some of the physical principles that are
applicable to objects moving at upper range speeds, and the important part that these unfamiliar
principles play in the quasar theory, it will be advisable to supply some additional verification of the
validity of that theory by locating some specific situations in which we can compare the predictions of
the theory with the results of observation.
The most significant situations of this kind are those in which the predictions of the new theory are
unique; that is, those in which the development of the Reciprocal System of theory arrives at
conclusions that are totally different from those derived from other sources. Situations that lead to
quantitative answers are particularly meaningful. One such item is that there is a specific mathematical
relation between the normal recession redshift and the explosion redshift, the increment due to the
ultra high speed imparted to the quasar by the galactic explosion. The existence of this fixed relation is
due to the fact that the motion generated by the explosion is a scalar motion of the same general nature
as the recession, differing only in the number of dimensional units involved. The retarding effect of
gravitation is variable, but it applies equally to all units, except as modified by the inter-regional
relations. As explained in Chapter 15, where the recession redshift is z, the corresponding explosion
redshift is 3.5z½ (except under some special conditions that will be discussed later). A crucial test of
the theory can therefore be accomplished by identifying the relative magnitudes of the two redshift
components in a representative number of quasars, and comparing the results with the theoretical
values.
As matters now stand, there is no way by which we can separate the observed redshift of an ordinary
quasar into its two components, other than by means of the theoretical relation. But because of the
manner in which the quasars originate, each of these objects is a member of a three-component group
consisting of (1) the galaxy in which the explosion occurred, (2) the quasar, and (3) a radio galaxy. As
a result of the reversal of directions at the unit speed level, the radio galaxy is ejected in the spatial
direction opposite to that of the motion of the quasar. These two objects are therefore located on
opposite sides of the galaxy of origin. All three members of each group occupy adjacent locations in
space, and their recession redshifts are approximately equal, differing only by the amounts due to the
random motion in space and to the relatively small change of position since the explosion.
Disregarding these minor items, a three-component association resulting from a galactic explosion
should consist of a central galaxy with redshift z, an ordinary radio galaxy with redshift z, and a
quasar with redshift z + 3.5z½. In any case where at least one of the associates of a quasar in such a
group can be identified, and the redshifts have been measured, we can test the validity of the
theoretical relation by computing the value of the quasar redshift that theoretically corresponds to the
value z obtained from the associated object at the same spatial location, and comparing it with the
observed quasar redshift.
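The comparison just described can be sketched as follows; the function names are illustrative, and the
sample figures are those quoted later in this chapter for the quasar 3C 254 (recession component 0.039,
observed redshift 0.734).

    # Theoretical quasar redshift corresponding to a recession redshift z, and the
    # ratio used in the comparisons that follow (theoretical value 3.5).
    def predicted_quasar_redshift(z):
        return z + 3.5 * z ** 0.5

    def excess_ratio(quasar_z, z):
        return (quasar_z - z) / z ** 0.5

    print(round(predicted_quasar_redshift(0.039), 3))   # about 0.730
    print(round(excess_ratio(0.734, 0.039), 2))         # about 3.52

Since the 0.039 figure is itself the theoretical split of the 3C 254 redshift, the ratio necessarily
comes out near 3.5 in this example; with a measured redshift for the associated object the same
calculation becomes a genuine test.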
As it happens, Dr. Halton Arp, a prominent American astronomer, has made an intensive search for
radio sources associated with galaxies of a "peculiar" nature. Since both the quasar and the radio
galaxy ejected in a galactic explosion are strong sources of radiation at radio frequencies, these
explosion generated groups are likely candidates for discovery in a search such as that conducted by
Dr. Arp. We may logically conclude that at least some of those of Arp's associations that have the
required composition—a central galaxy that shows visible signs of internal disturbance, and two radio-
emitting objects on opposite sides of this central galaxy, one of which is a quasar—are explosion
systems. This provides us with an opportunity to make a clear-cut test of the validity of the theoretical
conclusions.
If we were working with data of unquestionable reliability, we would simply go ahead with the
calculations without further ado. But the task that Dr. Arp has performed is a difficult one, and it is
unrealistic to expect that all of his "associations" define groups of objects of common origin. Indeed,
the majority of his colleagues seem unwilling to concede any validity to these results, as would be
concluded from the general preference for the cosmological explanation of the quasar redshifts, which
attributes them entirely to the normal galactic recession, and thus denies the existence of the second
redshift component that must exist if Arp's associations are physically real. We are therefore in a
position where we have a double task. We must verify the reality of the associations in the same
operation by which we verify the theoretical redshift relation from the available data.
In order to deal with what may be a mixture of correct and incorrect identifications, it is necessary to
rely upon probability considerations. If the quality of the data available for analysis is poor, it might
not be possible to reach any definite conclusions by this method, as the results that we obtain may not
differ enough from random probability to be statistically significant, but if a reasonable percentage of
Arp's identifications represent actual physical associations, some meaningful results can be obtained.
Inasmuch as the theory being tested calls for the existence of a specific mathematical relation, any
degree of conformity with that relation exceeding random probability is evidence in favor of the
theoretical conclusion. A high degree of correlation, much in excess of random probability, is
tantamount to proof, not only of the validity of the theoretical relation, but also of the accuracy of the
identifications.
The nature of the process is such, however, that some stringent precautions are necessary in order to
avoid introducing some kind of bias that would invalidate the probability argument. The most
essential requirement is that the data that are utilized must be random with respect to the point at issue.
One of the best ways to insure randomness is to utilize data that were previously compiled for some
other purpose. Since Arp carried out his work for one purpose, and we are making use of it for an
entirely different purpose, randomness of the data, with respect to the object of our inquiry, is
achieved automatically.
One further requirement that must be observed, however, if conclusions based on probability
considerations are to be beyond reproach, is that the data must be homogeneous, because unless they
are homogeneous they are not likely to be completely random. We must therefore use only
information that has been gathered on the basis of the same set of criteria and the same processes of
judgment. This means that where a process of selection has been involved in accumulating the data,
we must use these data in their original form, and exclude later additions or modifications, as it is
practically impossible to maintain the original selection criteria unchanged over any substantial period
of time. Even if a conscious effort is made to avoid changes, events taking place in the interim and the
natural evolution of thinking in the course of time will alter the criteria in ways that are difficult to
identify.
For this reason, the comparisons in this chapter are all based on Arp's first extensive set of results,
published in 1967, which was confined to objects included in the 3C (Third Cambridge) catalog of
radio sources.244 Subsequent to this publication Dr. Arp modified some of his original groupings and
identified a number of additional associations, some on the basis of the original considerations, and
some on other grounds. But we cannot use this additional material in conjunction with the original,
because if we do, we no longer have the homogeneous set of random data that is required in order to
assure the validity of our probability arguments.
For example, Arp has found that there are several quasars located in a straight line apparently
proceeding from the galaxy NGC 520, and he regards this as evidence of physical association. But an
identification of association of quasars or other objects based on linear alignment is something quite
different from an identification based on the presence of two radio emitters on opposite sides of a
"peculiar" galaxy, and we are not justified in taking either of these into account when we are
undertaking to apply probability principles to an assessment of the validity of identifications made on
the other basis. Whatever conclusions we may draw from the NGC 520 alignment are separate and
distinct from those derived from the study of objects from the 3C catalog selected on an entirely
different basis. The additional material can, of course, be used for other studies of the same subject,
and the results thereof are entitled to the same kind of consideration as the results of the present
analysis, but it must be separate consideration.
With the benefit of the foregoing understanding as to what we propose to do, and how we propose to
go about it, we may now proceed to an examination of the ten associations from Arp's study of the 3C
objects that are available for this purpose. His list is much longer, but for present purposes we are
interested only in those associations in which one of the observed radio emitters is a quasar. On
beginning the examination, the first thing that we encounter is the necessity of making some further
exclusions, because the theory itself identifies some of the presumed associations as incorrect, and
hence these associations cannot provide any comparison of theory with observation. Where the theory
itself asserts that no agreement is to be expected, a demonstrated lack of agreement is meaningless.
Dr. Arp says that he does not expect to be able to identify the central "peculiar" galaxies beyond a
recession of 10,000 km/sec.245 The quasar 3C 254, with a redshift of 0.734, of which 0.039 is the
normal recession, is theoretically receding at slightly over this limiting recession speed, and is
therefore approximately at the point beyond which the theoretically correct central galaxy is
unobservable. According to the theory, then, any identification of a central galaxy with a quasar
appreciably more distant than 3C 254 (in Arp's association 148) is prima facie wrong, and a
comparison of the redshifts has no significance. We can test the theory only by checking the
correlation in those cases where the theory says that there should be an agreement. On this basis, the
only significant correlations with the central galaxy are the first four in Table IV. In all of these cases
the theoretical and observed values show a satisfactory agreement. (The ratio 2.78 for association 134
would not be satisfactory at a higher z value, but obviously the random motions and other incidental
factors have a higher proportionate effect where the recession is so small).
TABLE IV

Association    Quasar      Basis of       Excess/z½
Number         Redshift    Calculation

134             .158        C              2.78
160             .320        C              3.41
125             .595        C              3.31
148             .734        C              3.76
201            1.037        R              3.56
139            1.055        R              3.31
5055           1.659        R              5.59

Excluded
5223            .849        C              5.3
143            1.063        C              9.1
197            2.38         C             16.7
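As a consistency check on the tabulated values, the recession redshift implied by a quasar redshift q
and a tabulated ratio r can be recovered by solving q - z = r·z½ for z; the sketch below applies this to
the first row of the table.

    # Recession redshift implied by a quasar redshift q and a ratio r = (q - z)/z**0.5.
    def implied_recession_redshift(q, r):
        root = (-r + (r * r + 4.0 * q) ** 0.5) / 2.0   # positive root for z**0.5
        return root ** 2

    print(round(implied_recession_redshift(0.158, 2.78), 4))   # association 134: about 0.0031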
Beyond the point where the correct galaxy of origin becomes unobservable it is still possible that the
radio galaxy associated with a particular quasar may have been correctly identified, as the radio
galaxies can be detected at distances well beyond those at which the features distinguishing a
"peculiar" galaxy can be recognized. The correlations in our analysis have therefore been made on the
basis of the radio galaxy, if the necessary redshift measurement is available, rather than the central
galaxy, for all distances greater than that of association 148, as indicated by the symbol R in the third
column of the table.
Here, again, there is an upper observational limit. It is somewhat indefinite, because of the wide range
of emission energies, but the available evidence indicates that only the exceptional radio galaxy can be
detected (or could be detected with the facilities available in 1967) at the distance corresponding to the
theoretical location of the quasar 3C 280.1 in association 5055. The legitimacy of this association is
therefore open to question. Since we must exclude associations 5223, 143, and 197 on the grounds
previously cited, this questionable case, number 5055, is the only one in the entire list where there is a
lack of agreement with the theory. All of the other associations in which the observed relation between
quasar redshift and normal recession redshift could agree with the theoretical relation do show such an
agreement.
The relevant data from Table IV are shown graphically in Fig. 25. Each plotted point on the graph
indicates the relation of the excess redshift of the particular quasar, the amount by which the quasar
redshift exceeds that of the galaxy with which it is presumably associated, to the square root of the
redshift of the associated galaxy. The diagonal line shows the relation to which these points should
theoretically conform. If the prevailing astronomical opinion were correct, and the redshifts of the
quasars were due to the normal recession alone, there would be no definite relation between the quasar
redshift and that of the object or objects that are grouped with it. In that event the plotted points would
scatter randomly, not only over the area of the graph as shown, but also over a much larger area above
it, extending up to a value of 30 or more, as can be seen from the figures applying to the "excluded"
group in Table IV. The same would be true if the associations are real, but, as Arp himself suggests,
the excess redshift is due to some cause other than motion, and hence not directly related to the normal
recession.
But they are definitely not random. On the contrary, five of the six points fall essentially on the
theoretical line; that is, within the margin that can be attributed to the distances the ejected objects
have moved since the explosion, to random spatial motion, and other minor effects. The probability
that five out of the six would by chance fall on a straight line coinciding with a theoretically derived
relation is negligible. The results of the investigation are therefore conclusive. They constitute a
positive verification of the theoretical 3.5 z½ value of the explosion redshift.
All of the other evidence, both for and against the association of the quasars with objects of lower
redshift, has been indefinite. Most of it rests upon correlations between the redshift of objects whose
projections on the sky are close enough to indicate that these objects may be contiguous. As a general
proposition a finding of this kind, a showing that some of the members of a given class conform to a
specified relationship, has only a very limited significance. It remains little more than speculative
unless further study enables defining a sub-class such that all of the members of this sub-class
conform to the specified relation.
The reason why the results obtained by Arp are conclusive, whereas the other findings are not, is that
Arp has done what no one else has been able to do; that is, he has defined a class of objects,
associations of a specified nature between radio emitters included in the 3C catalog, which, when
further limited by the criteria developed in this work, do conform to a definite and specific redshift
relation. The associations that he has identified are not merely groups of objects whose observable
positions indicate that they may be neighbors. They are groupings whose physical characteristics are
similar, and are in agreement with the theoretical results of galactic explosions. Their identification
depends not only on apparent proximity, but also on (1) abnormalities in the central galaxy (which are
consistent with its having exploded), (2) radio emission from the presumed ejecta (which is
characteristic of high speed explosion products), and (3) existence of the presumed ejecta in pairs at
comparable distances and in positions on opposite sides of the central galaxy (the positions that they
would occupy if they had been thrown off simultaneously in opposite directions as required by the
theory). The number of associations included in Table IV is small, to be sure, but these are all of the
associations of this type that Dr. Arp was able to identify among the objects of a catalog which, at the
time of its compilation, covered all of the accessible extragalactic radio sources then known. In the
sample area that it covers, the study is therefore comprehensive, and the results are conclusive.
These results show that the additional component that is present in the quasar redshift is due to a
physical mechanism that is specifically related to the normal recession. The existence of two distinct
components makes any hypothesis such as that of "tired light" untenable, while the fixed
mathematical relation between the two components rules out anything, such as a redshift of
gravitational origin, that is independent of the recession. Conventional physical theory has no other
explanation to offer, but these features to which the observations point are the same features that we
find when we apply pure reasoning to the properties of space and time as defined in the postulates of
the Reciprocal System of theory. The explosive event that is required by the theory produces exactly
the kind of an association of three related objects—a central galaxy with a radio galaxy on one side
and a quasar diametrically opposite—that Arp has identified in his studies. The ultra high speed
imparted to the quasar by the tremendous amount of energy released in the galactic explosion exists in
a second dimension of motion, and provides a second redshift component, related to, but distinct from,
the normal recession redshift, and the mathematical statement of that relation, as derived from theory,
is identical with the relation between the measured values.
While the pattern of redshift values illustrated in Fig. 25 is conclusive in itself, it does not exhaust all
of the corroboration of the theory that we can extract from Arp's associations. The distances of the
radio emitters from the central galaxy also have a significance in this respect. As explained in
Chapter 15, gravitation is effective in all three scalar dimensions, and therefore operates against the
explosion-generated motion as well as against the normal recession. As a result, the net explosion
speed is initially small, and increases with the distance in the same manner, except for the two-
dimensional effect, as the recession speed. On the other hand, since the greater part of the explosion
speed is initially applied to overcoming the effect of gravitation, which operates within the fixed
spatial reference system, there is a rapid change of position in the reference system during this initial
period when the net total speed, including the scalar speed not capable of representation in this
reference system, is quite small. The rate of change of position then decreases as gravitation is
gradually overcome and the net speed increases. Thus the theory leads to the decidedly
unconventional conclusion that the faster the quasar moves in the explosion dimension, the less its
position in space changes.
According to the theory, the relative spatial speed of the quasar, the component that manifests itself by
changing the quasar position in space, is the difference between 1.0, the speed of light, and the
explosion component of the quasar redshift, 3.5 z½ in the quasars of Table IV. The relative speed of
the radio galaxy is the average outward speed of the stars that fail to reach the 1.0 speed level, and are
therefore ejected in space rather than becoming constituents of the quasar. Since the distribution of
these speeds was initially the tail of a probability curve from 1.0 downward, the average at the time of
observation should be somewhat above 0.5, and nearly the same in all cases. Here, again, Arp's
associations provide a sample that we can test to see if this theoretical requirement is met. In these
associations we can measure the ratio of the distances of the two ejected objects from the central
galaxy, since the three objects lie on a straight line. Inasmuch as the distance traveled since the
explosion is proportional to the average spatial speed, the distance ratio thus determined is also the
ratio of the average speeds. Applying this ratio to the spatial speed in the explosion dimension derived
from the redshift measurement, we arrive at the speed of the radio galaxy.
For this test we are able to use only those associations in which all three components (central galaxies,
quasars, and radio galaxies) have been clearly identified. Four of the associations listed in Table IV
are within the 10,000 km/sec range in which identification of the central galaxy is feasible, but the
radio galaxy in association 148 is unidentified optically. Its approximate location is known, and it can
therefore be included in the study, along with the three associations that are clearly identified, with the
understanding that the results on 148 are subject to some uncertainty. Table V shows the observational
data on these four associations, and the speeds of the radio galaxies as calculated from these data.
TABLE V

Association    Excess      Spatial     Distance    R.G.
Number         Redshift    Speed       Ratio       Speed

134             .155        .845         .73        .62
160             .312        .688         .91        .62
125             .566        .434        1.35        .59
148             .695        .305        2.57        .78
Column 2 of the table gives the explosion redshift of the quasar in the association identified in
Column 1. Column 3 is the relative spatial speed of the quasar, the difference between unity and the
value in column 2. Column 4 is the measured distance ratio. Multiplying Column 4 by Column 3, we
arrive at the speed of the radio galaxy, relative to an explosion speed of 1.0.
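The arithmetic of the table can be reproduced directly; the sketch below uses the first row, association
134.

    # Table V arithmetic for association 134.
    excess_redshift = 0.155                                 # column 2
    distance_ratio = 0.73                                   # column 4
    spatial_speed = 1.0 - excess_redshift                   # column 3: 0.845
    radio_galaxy_speed = spatial_speed * distance_ratio     # column 5: about 0.62
    print(spatial_speed, round(radio_galaxy_speed, 2))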
These results given in Column 5 meet the requirements set forth earlier, that is, they arrive at
essentially the same speed for all four radio galaxies (if we make allowance for the lack of certainty in
the position of the radio galaxy in association 148), and this calculated speed is within the limits that
we can establish from more direct considerations. Furthermore, a very wide range of quasar speeds is
included, as the theoretical spatial speed of the quasar 3C 273 in association 134 is twice that of 3C
345 in association 125, and almost three times that of 3C 254 in association 148. The downward trend
in the relative distance of the quasars from the central galaxy as the speed increases is unmistakable.
Verification of a theoretical conclusion of this nature, one that is nothing short of outrageous in the
context of conventional theory, is particularly significant because it shows that a drastic change in
fundamental theory is required before the full range of physical phenomena can be understood. The
customary process of adjustment and modification of existing theory by means of additional ad hoc
assumptions is clearly incapable of dealing with discrepancies of this magnitude. No amount of
tinkering with the conventional theory of motion can reconcile a decrease in the rate of change of
spatial position with an increase in speed. Some new light on the general situation is indispensable.
A related phenomenon that is equally inexplicable in terms of conventional physical thought is the
nearly constant separation of the radio emitting regions in most quasars. Although the distances to
different quasars vary over an extremely wide range, the apparent separation of the two radio
components is usually close to a constant value. For example, Table VI shows the separations (in
seconds of arc) measured by D. E. Hogg,246 excluding three values that will be considered later.
Table VI
COMPONENT SEPARATIONS

Quasar        Separation      Quasar          Separation
3C 181            6.0         3C 273              19.6
3C 204           31.4         3C 275.1            13.2
3C 205           15.8         3C 280.1            19.0
3C 207            6.7         3C 288.1             6.4
3C 208           10.5         3C 336              21.7
3C 249.1         18.8         3C 432              12.9
3C 261           10.8         MSH 13-011           7.8
3C 268.4          9.4
Similar measurements by Macdonald and Miley include a substantial proportion of larger separations,
but these authors comment that their list includes many objects in which the radio components are so
far distant from the optical center that, in their words, "If the radio structures of the larger QSOs were
not symmetric about the optical QSO they might not have been identified." 247 This suggests that the
quasars with the larger component separations represent a different group of objects, the members of
which have a second observable set of laterally displaced components. Such a hypothesis is supported
by a further comment from the investigators which indicates that, in some instances, both types of
component separation are present in the same structures. "Many sources," they say, "have large scale
structure but small scale components dominate."
The almost constant angular separation of such a large proportion of these radio components of
quasars stands out as an observed fact for which conventional astronomical theory has no explanation.
As expressed by K. I. Kellerman, "either: the linear dimensions of radio sources depend on redshift in
just such a way as to cancel the geometrical effects of the redshift, or: The geometric effect of the
redshift on apparent size is negligibly small." 248 Since neither of these alternatives can be
accommodated within the boundaries of conventional theory, astronomy, Kellerman says, is
confronted with a paradox.
In approaching the question theoretically, we note first that the outward radial movement of the
quasars is beyond the limits of the reference system, and it is therefore incapable of representation in
that system. As explained in connection with the derivation of the applicable general principles in
Volume II, motion in a second dimension is normally excluded from representation in the spatial
reference system because the presence of motion in the original dimension preempts the full capacity
of the reference system. But when representation of the motion in the original scalar dimension is
ruled out for some reason, representation of the motion in the second dimension becomes possible.
The lateral motion of the distant quasars is analogous to the lateral magnetic motion discussed in
Volume II. As in electromagnetism, the motion in the second dimension of the intermediate speed
range appears in the reference system with a direction perpendicular to the line of motion in the
original dimension. In the case of the quasars, this direction is perpendicular to the line of sight.
The recession speed in the second dimension is the same as in the dimension coincident with the
reference system, but as observed it is reduced by the interregional ratio, 156.444. Since it originates
in a two-dimensional region, it is observed as a second power quantity. Thus the ratio of lateral to
radial motion is 2/(156.444)². In the terms in which the astronomers generally express the lateral
displacement, this observable recession in the lateral direction amounts to 16.9 seconds of arc.
Inasmuch as the outward motion of a quasar has a specific direction, as seen in the spatial reference
system, the lateral motion is confined to one specific perpendicular line. As noted earlier, however,
scalar motion does not distinguish between the direction AB and the direction BA. The lateral
recession outward from point X is therefore divided equally between a direction XA and the opposite
direction XB by the operation of probability. Matter moving translationally at upper range speeds thus
appears in the reference system in two locations equidistant from the line of motion in the coincident
dimension (the optical line of sight, in most cases), and separated by 33.8 seconds of arc.
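Taking the lateral-to-radial ratio as 2/(156.444)², as read from the reconstructed expression above, the
quoted angular figures follow from the usual small-angle conversion from radians to seconds of arc.

    # Lateral recession angle and component separation.
    import math
    ratio = 2.0 / 156.444 ** 2                       # about 8.17e-5 (radians, small angle)
    lateral_arcsec = math.degrees(ratio) * 3600.0    # about 16.9 seconds of arc
    separation = 2.0 * lateral_arcsec                # about 33.7; quoted above as 33.8 (2 x 16.9)
    print(round(lateral_arcsec, 1), round(separation, 1))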
It does not follow, however, that the separation observed from the earth will be this large. If the quasar
is a distant one no evidence of its existence can be detected here until the radiation has had time to
travel the long intervening distance. When first received, this radiation will disclose only the situation
that existed at the location and time of ejection, before the lateral recession had begun. The progress
of the recession will be revealed gradually by the radiation subsequently received, but the observed
recession will lag behind the true magnitude by the time required for the travel of the radiation, until
the observed separation reaches the limiting value. In the meantime, the separation will be observed at
some value intermediate between zero and the maximum.
This explains why the observed separations vary, and are generally less than the calculated 33.8
seconds of arc. As can be seen from the foregoing explanation, these observed separations should be
related to the time that has elapsed since the explosive event that produced the fast-moving products
from which the radiation is being emitted. The relation of the optical and radio emissions provides a
rough indication of this time. The ratio of these emissions is affected by the evolutionary changes that
take place in the various stages of the existence of the quasar, but by limiting our consideration to a
homogeneous group of objects we can minimize the effect of these changes. For such a group the
radio emission should decrease with time, as the isotopic adjustment progresses toward completion,
and the ratio of optical to radio emission should increase accordingly. The magnitude of this ratio
should therefore give us an approximate measure of the relative quasar ages.
An appropriate group of this kind consists of the six quasars in Hogg's list with redshifts above 1.00
for which luminosity data are available in the tabulations in Chapter 25. Examination of these data
indicates that the approximate ratio (RL) of optical to radio luminosity is related to the separation of
the radio components (S) by the expression: S = 83RL + 3.0. Separations calculated on this basis are
compared with Hogg's measurements in Table VII.
Table VII
COMPONENT SEPARATIONS

                          Separation
Quasar         RL       Calc.    Obs.
3C 204        0.279     26.2     31.4
3C 208        0.113     12.4     10.5
3C 432        0.112     12.3     12.9
3C 268.4      0.075      9.2      9.4
3C 181        0.033      5.7      6.6
3C 280.1      0.031      5.6     19.0
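The calculated column of Table VII follows directly from the expression given above. The sketch below (a minimal illustration; the RL values are those listed in the table) shows the arithmetic:

    # Check of the empirical relation S = 83*RL + 3.0 against the RL values of Table VII.
    rl_values = {
        "3C 204":   0.279,
        "3C 208":   0.113,
        "3C 432":   0.112,
        "3C 268.4": 0.075,
        "3C 181":   0.033,
        "3C 280.1": 0.031,
    }

    for quasar, rl in rl_values.items():
        s_calc = 83 * rl + 3.0                  # calculated separation, seconds of arc
        print(f"{quasar:10s} {s_calc:5.1f}")    # 26.2, 12.4, 12.3, 9.2, 5.7, 5.6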
All but one of these correlations are within the range of variation that can be expected in view of the
diversity of the conditions affecting the individual quasars. The reason for the discrepancy in the
values applicable to the quasar 3C 280.1 is not known, but it could be the result of a second, very
recent outburst that has renewed the radio emission. On this basis, the low RL value is produced by
the radiation from the second explosive event, whereas the 19.0 figure is the separation between the
products of the earlier event.
The separations greater than about 35 seconds of arc that are included in the reports that were quoted,
those of the three quasars omitted from the tabulation of the Hogg results, and a larger number from
the work of Macdonald and Miley, are due to a different cause. They are the results of actual motion
of the ultra high speed dust and gas from which the radio emission originates, motion that has taken
this material away from the location where the optical radiation is being produced. This is the process
by which the separation of the radio components of the radio galaxies originates, and it will be
examined in connection with the discussion of these objects in Chapter 26. As we will see there, this
process is not operative beyond a distance of 1.00 in the explosion dimension (total redshift 1.081).
Thus, at redshifts above 1.081, there should be no component separations exceeding 33.8 seconds by
more than the observational error. This is consistent with the findings of both of the investigations cited.
In addition to the major explosive events that produce the larger radio aggregates, there is also a
continuing series of explosions of a more limited character (to be explained in Chapter 24) in the older
quasars. In some instances these result in scattered centers of emission along the normal lateral line,
but a large proportion of the total energy is generated by the radioactivity of the short-lived isotopes,
which is observed at or near the optical location. As we will see shortly, there is also another factor
that confines some of the radio emission in the older quasars to the center position. Thus there is a
tendency toward three, rather than two, major locations of radio emission. The prevalence of the three-
component pattern is illustrated in the data reported by Macdonald and Miley. These investigators say
that only 6 of the 36 objects for which they determined radio structures are definitely double, whereas
23 have, or may have, a third component at the center. The remaining 7 are more complex.
The finding that the radio emission from the distant quasars originates mainly at the same spatial
location as the optical emission, but that we see it at two or more locations in the reference system, is
another conclusion that appears outrageous in the context of current physical thought, but like the
equally unconventional findings previously discussed, it is in agreement with the physical
observations, and provides the explanations for aspects of those observations that are in conflict with
conventional astronomical theory. In reality, it is not a strange or unusual phenomenon; it is merely
unfamiliar. Multiple images produced by other means—mirrors, for instance—are commonplace.
All radiation from a quasar is subject to the same considerations, but the stellar constituents from
which the optical emission originates are usually moving at speeds below the two-unit level. Thus the
optical position of a quasar normally shows no lateral displacement. In some stars, however, the
internal speeds may be in the ultra high range. In that event, both the optical and the radio emissions
originate from the laterally displaced locations. The recently discovered cases of "twin quasars,"
which are thought to be duplicate images produced by gravitational lenses, may well be single quasars
with ultra high speed optically emitting components.
When the quasars have reached the point where their net speed exceeds two units and enters the
cosmic range, the gravitational effect is inverted, and motion in time replaces motion in space. This
eliminates the lateral recession in equivalent space that is responsible for the double character of the
radio structure, as seen in the spatial reference system. The radiation from the quasar is still observable
until the motion in time has continued long enough to destroy the status of the quasar as a spatial
aggregate, and in the meantime this radiation is observed in the undisplaced radial location.
Observations indicate that many of the oldest of the visible quasars are in this transitional stage. A
substantial proportion of those quasars that, on the basis of criteria such as the presence of absorption
redshifts, large radio emission, and high z values, are in an advanced stage of development, show no
spatial extension other than that corresponding to the spatial dimensions of the optical objects.
Thus the theory of the universe of motion provides an explanation of the major features of the quasar
structures. Kellerman's "paradox," we find, is simply a message from nature, and it is the same
message that we get from our analysis of the redshifts in Arp's associations. It tells us that inasmuch as
the lateral displacements, like the excess redshift, are directly related to the recession, and are
therefore observable effects of motion, the conventional narrow view of motion, which limits it to
speeds less than that of light and to effects that can be represented within a three-dimensional spatial
system of reference, must be broadened. But this is not something new that we are just now finding
out by examination of the astronomical situation. It is a direct consequence of the inherent nature of
the motion of which the universe is composed, and it plays just as significant a part in the fundamental
physical relations as in the astronomical phenomena we are now considering. The principles here
being applied were developed deductively in the earlier volumes, and were there utilized in
application to many physical phenomena. For example, the physical principle that explains why radio
sources are double (or triple) is not peculiar to this particular application; it is a general property of
scalar motion that has previously been shown to provide the explanation for such diverse phenomena
as the induction of electric charges and the deflection of light by massive objects.
As demonstrated in this and the preceding chapter, the deductions from the Reciprocal System of
theory, incorporating this more comprehensive view of the nature of motion, are in full agreement
with the results of observation in the quasar areas examined thus far. In the pages that follow it will be
shown that this one-to-one correspondence between the theoretical deductions and the observational
results is maintained throughout the entire range of the quasar phenomena. Some of the features of the
account of the origin and nature of the quasars thus derived are in conflict with current astronomical
thought, to be sure, but this merely reveals the erroneous nature of much of the current thinking. For
example, present-day theory sees no way in which the forces necessary to eject a galactic fragment
can be built up within a galaxy. "Obviously a normal assemblage of stars cannot be hurled about like a
snowball," says Arp. However, the observational evidence makes it clear that fragments are ejected
under some circumstances; that is, they are hurled about like snowballs. Current astronomical
literature is full of references to, and hypotheses dependent upon, ejection of "assemblages of stars"
from galaxies. In explaining how this is possible, and indeed, inevitable in the normal course of
galactic evolution, the Reciprocal System is simply filling a conceptual vacuum.

CHAPTER 23

Quasar Redshifts
Although some of the objects now known as quasars had previously been recognized as
belonging to a new and different class of phenomena, because of their peculiar spectra,
the actual discovery of the quasars can be said to date from the time, in 1963, when
Maarten Schmidt identified the spectrum of the radio source 3C 273 as being shifted 16
percent toward the red. Most of the other identifying characteristics originally ascribed to
the quasars have had to be qualified as more data have been accumulated. One early
description, for example, defined them as "star-like objects identified with radio sources."
But present-day observations show that in most cases the quasars have complex
structures that are definitely un-starlike, and there is a large class of quasars from which
no significant radio emission has been detected. But the high redshift has continued to be
the hallmark of the quasar, and its distinctive character has been more strongly
emphasized as the observed range of values has been extended upward. The second
redshift measured, that of 3C 48, is 0.369, substantially above the first measurement,
0.158. By early 1967, when about 100 redshifts were available, the highest value on
record was 2.223, and at the present writing it is up to 3.78.
Extension of the redshift range above 1.00 raised a question of interpretation. On the basis
of the previous understanding of the origin of the Doppler shift, a recession redshift
above 1.00 would indicate a relative speed greater than that of light. The general
acceptance of Einstein's contention that the speed of light is an absolute limit made this
interpretation unacceptable to the astronomers, and the relativity mathematics were
invoked to resolve the problem. Our analysis in Volume I shows that this is a
misapplication of these mathematical relations. In the situations to which those relations
actually do apply, there are contradictions between values obtained by direct
measurement and those obtained by indirect means, such as, for instance, arriving at a
speed measurement by dividing coordinate distance by clock time. In these instances the
relativity mathematics (the Lorentz equations) are applied to the indirect measurements to
bring them into conformity with the direct measurements, which are accepted as correct.
The Doppler shifts are direct measurements of speeds, and require no correction. A
redshift of 2.00 indicates a relative outward motion with a scalar magnitude of twice the
speed of light.
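The contrast between the two readings can be made concrete with a brief sketch. The relativistic Doppler relation shown here is the conventional formula, not one used in this work; it is included only to show what the "correction" amounts to, alongside the direct reading z = v/c adopted in the text:

    # Conventional relativistic reading of a recession redshift versus the direct
    # reading z = v/c adopted in the text (illustrative comparison only).
    def v_over_c_relativistic(z):
        return ((1 + z) ** 2 - 1) / ((1 + z) ** 2 + 1)

    def v_over_c_direct(z):
        return z

    for z in (0.5, 1.0, 2.0):
        print(z, round(v_over_c_relativistic(z), 3), v_over_c_direct(z))
    # At z = 2.00 the relativistic formula gives v = 0.8c; the direct reading gives v = 2c.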
While the high redshift problem was circumvented in conventional astronomical thought
by this sleight-of-hand performance with the relativity mathematics, the accompanying
distance-energy problem has been more recalcitrant, and has resisted all attempts to
resolve it, or to evade it. Reference was made to this problem in Chapter 21, but
inasmuch as it constitutes a crucial issue, for which the theory of the universe of motion
has an answer, while conventional theory does not, a review of the situation will be
appropriate in the present connection.
If the quasars are at cosmological distances—that is, the distances corresponding to the
redshifts on the assumption that they are ordinary recession redshifts—then the amount of
energy that they are emitting is far too great to be explained by any known energy
generation process, or even any plausible speculative process. On the other hand, if the
energies are reduced to credible levels by assuming that the quasars are less distant, then
conventional science has no explanation for the large redshifts.
Obviously something has to give. One or the other of these two limiting assumptions has
to be abandoned. Either there are hitherto undiscovered processes that generate vastly
more energy than any process now known, or there are hitherto unknown factors that
increase the quasar redshifts far beyond the normal recession values. For some reason,
the rationale of which is difficult to understand, the majority of astronomers seem to
believe that the redshift alternative is the only one that requires a revision or extension of
existing physical theory. The argument most frequently advanced against the contentions
of those who favor a non-cosmological explanation of the redshifts is that a hypothesis
which requires a change in physical theory should be accepted only as a last resort. What
these individuals are overlooking is that this last resort is the only thing left. If
modification of existing theory to explain the redshifts is ruled out, then existing theory
must be modified to explain the magnitude of the energy generation.
Furthermore, the energy alternative is much more drastic, inasmuch as it not only
requires the existence of some totally new process, but also involves an enormous
increase in the scale of the energy generation, a rate far beyond anything now known. All
that is required in the redshift situation, on the other hand, even if a solution on the basis
of known processes cannot be obtained, is a new process. This process is not called upon
to explain anything more than is currently recognized as being within the capability of the
known recession process; it merely has to account for the production of the redshifts at
less distant spatial locations. Even without the new information derived in the
development of the theory of the universe of motion it should be evident that the redshift
alternative is by far the better way out of the existing impasse between the quasar energy
and redshift theories. It is therefore significant that this is the explanation that emerges
from the application of the Reciprocal System of theory to the problem.
Such considerations are somewhat academic, as we have to accept the world as we find
it, whether or not we like what we find. It is worth noting, however, that here again, as in
so many instances in the preceding pages, the answer that emerges from the new
theoretical development takes the simplest and most logical path. Indeed, the answer to
this quasar problem does not even involve breaking as much new ground as expected by
those astronomers who favor a non-cosmological explanation of the redshifts. As they see
the situation, some new physical process or principle must be invoked in order to add a
"non-velocity component" to the recession redshift of the quasars. But we find that no
such new process or principle is needed. The additional redshift is simply the result of an
added speed; one that has hitherto escaped recognition because it is not capable of
representation in the conventional spatial reference system.
The preceding chapter explained the nature and origin of the second component of the
redshifts of the quasars, the explosion-generated component, and showed that the validity
of this explanation is confirmed by an analysis of the three-member "associations"
identified by Halton Arp. In this present chapter we will examine the quasar redshifts in
more detail.
As indicated in the preceding pages, the limiting value of the explosion speed, and
redshift, is two net units in one dimension. If the explosion speed is divided equally
between the two active dimensions of the intermediate region, the quasar can convert to
motion in time when the explosion component of the redshift in the initial dimension is
2.00, and the total quasar redshift is 2.326. At the time Quasars and Pulsars was
published only one quasar redshift that exceeded the 2.326 value by any substantial
amount had been reported. As pointed out in that work, the 2.326 redshift is not an
absolute maximum, but a level at which conversion of the motion of the quasar to a new
status, which it will ultimately assume in any event, can take place. Thus the very high
value 2.877 attributed to the quasar 4C 05.34 either indicated the existence of some
process whereby the conversion that is theoretically able to occur at 2.326 is delayed, or
else was an erroneous measurement. Inasmuch as no other data bearing on the issue were
available, it did not appear advisable to attempt to decide between the two alternatives at
that time. In the subsequent years, many additional redshifts above 2.326 have been
found, and it has become evident that extension of the quasar redshifts into these higher
levels is a frequent occurrence. The theoretical situation has therefore been reviewed, and
the nature of the process that is operative at the higher redshifts has been ascertained.
As we have seen, the 3.5 redshift factor that prevails below the 2.326 level is the result of
an equal division of seven equivalent space units between a dimension that is parallel to
the dimension of the spatial motion and a perpendicular dimension. Such an equal
division is the normal result of the operation of probability where there are no influences
that favor one distribution over another, but other distributions are not totally excluded.
There is a small, but not negligible, probability of an unequal distribution. Instead of the
normal 3½ - 3½ distribution of the seven units of speed, the division may become 4 - 3,
4½ - 2½ etc. The total number of quasars with redshifts above the level corresponding to
the 3½ - 3½ distribution is relatively small, and any random group of moderate size—say
100 quasars—would not be expected to contain more than one, if any. A representative
random group of quasars examined in Chapter 25 has none.
An asymmetric dimensional distribution has no significant observable effects at the lower
speed levels (although it would produce anomalous results in a study such as the analysis
of Arp's associations in Chapter 22 if it were more common), but it becomes evident at
the higher levels because it results in redshifts exceeding the normal 2.326 limit. Because
of the second power nature of the inter-regional relation, the 8 units involved in the
explosion speed, 7 of which are in the intermediate region, become 64 units, 56 of which
are in that region. The possible redshift factors above 3.5 therefore increase in steps of
0.125. The theoretical maximum, corresponding to a distribution to one dimension only,
would be 7.0, but the probability becomes negligible at some lower level, apparently in
the neighborhood of 6.0. The corresponding redshift values range up to a maximum of
about 4.0. The largest redshifts thus far measured are as follows:
                     Redshift
Quasar          Observed    Calculated    Factor
2000-330          3.78         3.75       6.000
OQ 172            3.53         3.54       5.625
2228-393          3.45         3.47       5.500
OH 471            3.40         3.40       5.375
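The calculated values follow from the relations already stated: at the conversion level the normal recession redshift is z = (2/3.5)², and the total redshift for a redshift factor f is z + f z½. A minimal sketch (the quasar names and factors are those of the table above):

    import math

    z = (2 / 3.5) ** 2                            # about 0.3265, so that 3.5*sqrt(z) = 2.00
    print(round(z + 3.5 * math.sqrt(z), 3))       # about 2.326-2.327, the normal conversion level

    for quasar, factor in (("2000-330", 6.000), ("OQ 172", 5.625),
                           ("2228-393", 5.500), ("OH 471", 5.375)):
        print(quasar, round(z + factor * math.sqrt(z), 2))
    # Reproduces the Calculated column above to within about 0.01 of rounding.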
An increase in the redshift factor due to a change in the dimensional distribution does not
involve any increase in the distance in space. All quasars with redshifts of 2.326 and
above are therefore at approximately the same spatial distance. This is the explanation of
the seeming inconsistency involved in the observed fact that the brightness of the quasars
with extremely high redshifts is comparable to that of the quasars in the redshift range
around 2.00.
The stellar explosions that initiate the chain of events leading to the ejection of a quasar
from the galaxy of origin reduce a large part of the matter of the exploding stars to kinetic
and radiant energy. The remainder of the stellar mass is broken down into gas and dust
particles. A portion of this dispersed material penetrates into the sections of the galaxy
surrounding the region where the explosions take place, and when one such section is
ejected as a quasar it contains some of this fast-moving dust and gas. Since the maximum
particle speeds are above those required for escape from the gravitational attraction of the
individual stars, this material gradually makes its way outward, and eventually assumes
the form of a cloud of dust and gas around the quasar —an atmosphere, we might call it.
The radiation from the constituent stars of the quasar passes through this atmosphere,
giving rise to absorption lines in the spectrum. The dispersed material surrounding a
relatively young quasar is moving with the main body, and the absorption redshift is
therefore approximately equal to the emission value.
The constituent stars grow older during the time that the quasar moves outward, and in
the later stages of its existence some of these stars reach their destructive limits. These
stars then explode as Type II supernovae in the manner previously described. As we have
seen, such explosions eject one cloud of explosion products outward into space, and
another similar cloud outward into time (equivalent to inward in space). When the
explosion speed of the products ejected into time is superimposed on the speed of the
quasar, which is already near the sector boundary, these products pass into the cosmic
sector and disappear.
The outward motion of the explosion products ejected into space is equivalent to an
inward motion in time. It therefore opposes the motion of the quasar, which is outward in
time. If this inward motion could be observed independently it would produce a blueshift,
as it is directed toward our location, rather than away from it. But since this motion
occurs only in combination with the outward motion of the quasar its effect is to reduce
the net outward speed and the magnitude of the redshift. Thus the slower-moving
products of the secondary explosions move outward in the same manner as the quasar
itself, and their inverse speed components merely delay their arrival at the point where
conversion to motion in time takes place.
A quasar in one of these later stages of its existence is thus surrounded not only by an
atmosphere moving with the quasar itself, but also by one or more independent clouds of
particles moving away from the quasar in time (equivalent space). Each cloud of particles
gives rise to an absorption redshift differing from the emission value by the magnitude of
the inward speed imparted to these particles by the internal explosions. As pointed out in
the discussion of the nature of scalar motion, any object that is moving in this manner
may also acquire a vectorial motion. The vectorial speeds of the quasar components are
small compared to their scalar speeds, but they may be large enough to cause some
measurable deviations from the scalar values. In some cases this results in an absorption
redshift slightly above the emission value. Because of the inward direction of the speeds
resulting from the secondary explosions, all other absorption redshifts differing from the
emission values are below the emission redshifts.
The speed imparted to the ejected particles has no appreciable effect on the recession
redshift z. Like the increase in effective speed beyond the 2.326 level, therefore, the change has to
take place in the redshift factor, and it is limited to steps of 0.125, the minimum change in
that factor. The possible absorption redshifts of a quasar thus exist in a regular series of
values differing by 0.125z½. Inasmuch as the value of z for the quasars reaches a
maximum at 0.326, and all variability of the redshifts above 2.326 results from changes
in the redshift factor, the theoretical values of the possible absorption redshifts above the
2.326 level are identical for all quasars, and coincide with the possible values of the
emission redshifts.
Since most of the observable high redshift quasars are relatively old, their constituents are
in a state of violent activity. This vectorial motion introduces a margin of uncertainty into
the measurements of the emission redshift, and makes it impossible to demonstrate an
exact correlation between theory and observation. The situation is more favorable in the
case of the absorption redshifts because the measured absorption values for each of the
more active quasars constitute a series, and a series relation can be demonstrated even
where there is a substantial degree of uncertainty in the individual values.
This is illustrated in Table VIII, which compares the measured absorption redshifts of
three of the high redshift quasars with the theoretically possible values. The correlation is
impressive in the case of the quasar OH 471. With the exception of the value at redshift
factor 3.75, all of the observed redshifts are within 0.01 of the theoretical values, and
only one of the first seven theoretically possible absorption redshifts is missing from the
observed list. In this instance the agreement between the values is close enough to be
conclusive in itself. The differences between the theoretical and measured values for the
other quasars in the table are typically about 0.02. Since the interval between successive
theoretical redshifts is only 0.07, the 0.02 discrepancy is uncomfortably large, when each
correlation is considered individually. But when all of the values for the quasar 4C 05.34
are compared, as a series, with the series of theoretical values, the two series clearly
agree. The data for the third quasar in the table are more scattered, but the general trend
of the values is similar.
Because the explosion redshift is the product of the redshift factor and z½, each quasar
with a recession speed (z) less than 0.326 has its own set of possible absorption redshifts,
the successive members of each series differing by 0.125z½. One of the largest systems in
this range that has been studied thus far is that of the quasar 0237-233, the observed
redshifts of which are compared with the theoretical values in Table IX. An asterisk
indicates an average of two or more measured values.
Similar data for the quasars PHL 938 and 0424-131 are included in the tabulation. The
theoretical absorption redshifts in this table are calculated from the observed emission
redshifts (indicated by the symbol Em) and are therefore subject to any errors that may
have been made in the determination of the emission shifts. Apparently no major errors
are involved, as the correlations between theory and observation are just as close as in
Table VIII, where the theoretical absorption values are independent of any
measurements.
TABLE VIII
ABSORPTION REDSHIFTS

                          Redshift
Factor     Calc.    OH 471    4C 05.34    0830+115
5.25       3.33      3.34
5.125      3.25      3.25
5.00       3.18      3.19
4.875      3.11      3.12
4.75       3.04
4.625      2.97      2.97                   2.95
4.50       2.90      2.91      2.88         2.91
4.375      2.83                2.81
4.25       2.75                2.77         2.77
4.125      2.68
4.00       2.61                2.59
3.875      2.54
3.75       2.47      2.49      2.47
3.625      2.40
3.50       2.33
3.375      2.25                2.22
3.25       2.18                2.18
3.125      2.11                2.13
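Since all quasars above the 2.326 level share the limiting recession value z = (2/3.5)², the Calc. column of Table VIII is the same series for every quasar in this range. A brief sketch of its generation (the factors are those listed in the table):

    import math

    z = (2 / 3.5) ** 2                       # limiting recession redshift, about 0.3265
    factor = 5.25
    while factor >= 3.125:                   # step the redshift factor down in units of 0.125
        print(f"{factor:5.3f}  {z + factor * math.sqrt(z):.2f}")
        factor -= 0.125
    # Reproduces the Calc. column: 3.33, 3.25, 3.18, ... 2.18, 2.11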
In general, the negative component added to the particle speed by the secondary (internal)
explosions is limited to about 1.50, but in some cases absorption redshifts 2.00 or more
below the emission values have been reported. The significance of these very low values
is still uncertain. Since the speed of the secondary explosion products is independent of
that of the main body of the quasar, the dimensional distribution of this speed may be
different from that of the speed of the quasar, and it is not unlikely that the low redshifts
are due to combinations of explosion speed and change in the dimensional distribution.
There is no currently available information against which this hypothesis can be checked,
and the very low values have therefore been omitted from the tabulations.
Absorption redshifts have been identified in many quasar spectra, but the number of rich
systems thus far located is relatively small. This is significant because the length of the
absorption series is an indication of the extent to which disintegration of the quasar by
destruction of its constituent stars has taken place. Some quasars are already so badly
disintegrated that they will probably never reach the point at which they convert to
motion in time while they are still in the form of aggregates of stars. No doubt the
number of these rich systems will be increased to some extent as more observations are
made, but it seems evident, on the basis of the information now available, that they are a
minority. Most of the larger quasars apparently convert to motion in time while the
quasar structure is still practically intact.
TABLE IX
ABSORPTION REDSHIFTS

            0237-233              PHL 938               0424-131
Factor    Calc.     Obs.       Calc.     Obs.        Calc.     Obs.
3.5       2.223   2.223Em      1.955   1.955Em       2.165   2.165Em
3.375     2.154   2.176
3.25
3.125     2.019   2.013
3.0       1.948   1.955
2.875
2.75                           1.588   1.592         1.763   1.768
2.625                                                1.696   1.715
2.5       1.674   1.673*       1.465   1.463
2.375     1.605   1.623*                             1.561   1.579
2.25      1.536   1.526*                             1.494   1.532
2.125                          1.281   1.261
2.00      1.399   1.364        1.220   1.227*
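For quasars below the 2.326 level the same arithmetic applies once z is recovered from the emission redshift. The sketch below is a minimal illustration (the helper names are ours; the emission value 2.223 is that of 0237-233 in Table IX): it solves Z = z + 3.5z½ for z and then generates the absorption values z + f z½.

    import math

    def recession_z(emission_redshift):
        # Solve z + 3.5*sqrt(z) = Z for z by the quadratic formula in u = sqrt(z).
        u = (-3.5 + math.sqrt(3.5 ** 2 + 4 * emission_redshift)) / 2
        return u * u

    def absorption_redshift(emission_redshift, factor):
        z = recession_z(emission_redshift)
        return z + factor * math.sqrt(z)

    for f in (3.375, 3.125, 3.0, 2.5, 2.375, 2.25, 2.0):
        print(f, round(absorption_redshift(2.223, f), 3))
    # Gives 2.154, 2.017, 1.948, 1.674, 1.605, 1.537, 1.399 -- the Calc. column
    # for 0237-233 to within about 0.002 of rounding.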
The reason for the difference in behavior between these two classes of quasars is that two
different processes are involved. Demise of the quasar within the spatial reference system
is due to age. When the great majority of the stars that constitute the fast-moving galactic
fragment that we call a quasar have reached the age limit of matter, and have individually
disintegrated, the quasar ceases to exist as such, irrespective of where it may happen to be
at that time. On the other hand, the disappearance of the quasar at the sector boundary,
the point at which it begins moving in actual time, is a matter of speed, and consequently
of distance. A quasar that originates at a distant location begins moving outward in time
away from our location when the net total explosion speed relative to our galaxy,
including the component due to our distance from the point of origin of the quasar,
reaches the two-unit level. However, the transition from gravitation in space to
gravitation in time does not take place until the explosion speed alone is two units. A
quasar that has left our field of view by reason of the sector limit is still observable from
other locations closer to the point of origin until the gravitational transition occurs.
Ordinarily a long period of time is required to bring a significant number of the stars of a
quasar up to the age limit that initiates explosive activity. Consequently, absorption
redshifts differing from the emission values do not usually appear until a quasar reaches
the redshift range above 1.75. From the nature of the process, however, it is clear that
there will be exceptions to this general rule. The outer, more recently accreted, portions
of the galaxy of origin are composed mainly of younger stars, but special conditions
during the growth of the galaxy, such as a relatively recent consolidation with another
large aggregate, may have introduced a concentration of older stars into the portion of the
structure of the galaxy that was thrown off in the explosion. These older stars then reach
their age limits and initiate the chain of events that produces the absorption redshifts at a
stage of the quasar life that is earlier than usual. It is unlikely, however, that the number
of old stars included in any newly ejected quasar is ever large enough to generate the
amount of internal activity that would lead to an extensive absorption redshift system.
In the higher redshift range a new factor enters into the situation and accelerates the trend
toward more absorption redshifts. A substantial amount of explosive activity is normally
required in order to impart the increments of speed to the dust and gas components of the
quasar that are necessary for the production of absorption systems. Beyond an explosion
speed of two units, however, this limitation no longer applies. Here the diffuse
components are subject to the environmental influences of the cosmic sector, which tend
to reduce the inverse speed (equivalent to increasing the speed), thus producing
additional absorption redshifts in the normal course of quasar evolution, without the
necessity of further generation of energy in the quasars. Above this level, therefore, "the
quasars . . . all show strong absorption lines." Strittmatter and Williams, from whose
review of the subject the foregoing statement was taken, go on to say that
It is as if there were a threshold for the presence of absorbing material at emission
redshifts of about 2.2.234
This empirical conclusion agrees with our theoretical finding that there is a definite sector
boundary at redshift 2.326.
In addition to the absorption redshifts in the optical spectra, to which the foregoing
discussion refers, some absorption redshifts have also been found at radio frequencies.
The first such discovery, in the radiation from the quasar 3C 286, generated considerable
interest because of a rather widespread impression that the radio absorption requires an
explanation different from that applicable to absorption at optical frequencies. The
original investigators concluded that the radio redshift is due to absorption by neutral
hydrogen in some galaxy lying between us and the quasar. Since the absorption redshift
is about 80 percent of the emission redshift in this case, they regarded the observations as
evidence in favor of the cosmological redshift hypothesis. On the basis of the theory of
the universe of motion, the radio observations do not introduce anything new. The
absorption process that operates in the quasars is applicable to all radiation frequencies.
and the existence of an absorption redshift at a radio frequency has the same significance
as the existence of an absorption redshift at an optical frequency. The measured radio
redshifts of 3C 286 in emission and absorption are 0.85 and 0.69 respectively. At redshift
factor 2.75, the theoretical absorption redshift corresponding to the emission value of
0.85 is 0.68.
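The 0.68 figure can be verified with the same arithmetic used for the optical absorption systems (a sketch only; the 0.85 emission value and the 2.75 factor are those quoted above):

    import math

    u = (-3.5 + math.sqrt(3.5 ** 2 + 4 * 0.85)) / 2   # sqrt(z) from Z = z + 3.5*sqrt(z), Z = 0.85
    z = u * u                                          # about 0.052
    print(round(z + 2.75 * u, 2))                      # 0.68, the theoretical absorption redshift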

CHAPTER 24

Evolution of Quasars
On the basis of the theoretical findings outlined in the preceding pages, the isotopic
readjustment activity in the ejected fragment of the exploding galaxy that constitutes the
quasar is at a high level in the initial stage immediately following the explosion. The
radio emission is correspondingly strong. As time goes on the internal activity gradually
subsides, and eventually radio emission ceases, or at least declines to unobservable
levels. This radio-quiet stage comes to an end when the constituent stars of the quasar
begin to arrive at their age limits in substantial numbers, and the explosions of these stars
renew the isotopic adjustment activity. Radio emission then resumes.
The most distant of the quasars that have been identified belong to the class of radio-
emitting quasars that follows the radio-quiet stage, Class II, as we will call it. Below a
redshift of about 1.00, however, both classes are present, and in order to distinguish
between the two it is necessary to identify some properties in which there is a systematic
difference between the values applicable to the two classes of objects. Ultimately it
should be possible to establish such lines of demarcation from pure theory, but for the
present we will have to rely on semi-empirical distinctions. We can expect, for instance,
that the evolution of the quasars from the early to the later stages will be accompanied by
color changes. Identification of certain specific color characteristics that vary
systematically with the quasar age will be sufficient for present purposes. A full
explanation of the reason for the observed differences can be left for future investigation.
As noted earlier, the colors of astronomical objects are customarily expressed in terms of
color indexes. At this time we will be concerned mainly with the U-B index, which is the
difference between the magnitude measured through an ultraviolet filter and that
measured through a blue filter. Later we will introduce the B-V index, the color index
that we used in dealing with the radiation from the stars, which is the difference between
the blue magnitude and the visual, or photographic, magnitude, obtained through a
yellow filter. The empirical data indicate that in the quasars the U-B index is a rough
indication of temperature. In main sequence stars the U-B index is positive; that is, more
energy is received in the blue range. (It should be remembered that the magnitude scale is
inverse.) This index is also positive in ordinary galaxies, which are composed mainly of
such stars. Because of the inversion that takes place when the speed of light is exceeded,
the theoretical development indicates that in the quasars the color trend should be
reversed, and the U-B index should be negative, indicating that more energy is received
in the ultraviolet range. All of the U-B values quoted in this chapter are negative, and
should be so understood.
The number of quasars on which fairly complete measurements are now available runs
into the hundreds, and it will not be feasible to analyze all of these data in a work of a
broad general nature. Our examination will therefore have to be limited to a
representative sample. The group of quasars studied in Quasars and Pulsars was one for
which the redshifts and color data were tabulated by M. and G. Burbidge in their book
Quasi-Stellar Objects.250 It includes all of the quasars for which these data were available
up to the time of publication, and is therefore free from selection effects, except insofar
as it favors the objects that are the most accessible to observation. No significant
modifications of the conclusions drawn from the original study have been necessary, and
the following discussion will be taken from the earlier publication, with the addition of
the results from some subsequent studies, mainly of the same group of objects, those
listed in the Burbidge table 3.1.
The color indexes are determined primarily by the internal activity (the temperatures,
together with the isotopic adjustments and their consequences) within the quasars. The
pattern of change during the evolution of the quasars should therefore be capable of being
evaluated on a purely theoretical basis. Such a project is beyond the scope of this work,
but the general nature of the changes that take place in the indexes, as empirically
determined, shows a definite qualitative correlation with the changes that theoretically
occur in the generation and dissipation of energy. We can therefore set up some defining
criteria for these quasar classes on a semi-empirical basis.
In the original study reported in Quasars and Pulsars the division was established at U-B
= 0.60 and an absolute radio flux (R.F.) of 6.0 measured at 178 MHz. All quasars with U-
B values less than 0.60 were placed in Class I. Those having higher U-B, but R.F. below
6.0 were found to be continuous with the low U-B quasars in their properties, and were
also assigned to Class I. The high R.F.-high U-B quasars form a discontinuous group
with quite different properties, and were identified as members of Class II.
Fig.26 shows the relation between U-B and R.F. for those of the Class I quasars listed in
the Burbidge Table 3.1 for which the necessary information is available. This diagram is
essentially equivalent to the astronomers' "two color diagram," except that the scales
have been inverted because we are here dealing with phenomena of an inverse region,
the region of intermediate speeds. We will use both colors later, with and without the
radio flux. It would be convenient to define the quasar classes on the basis of color alone,
and some progress in this direction will be made when the B-V index is introduced later
in this chapter, but color criteria that are capable of defining these classes in a manner
that is free from ambiguity have not yet been developed.
When a quasar is first ejected from the galaxy of origin, its constituents are in a state of
violent activity, and its radio flux is abnormally high. Only one of the quasars included in
the group under consideration is still in this very early stage. This is 3C 196, which has
U-B = 0.43 and absolute R.F. = 4 x .3. Its redshift is 0.871, of which 0.054 is the normal
recession component. In this work the symbol z is used to represent the normal recession
redshift only. The explosion-generated component, usually 3.5z½, but subject to
modification of the redshift factor 3.5, will hereafter be designated as q, and the total
quasar redshift will be represented by the symbol Z. We then have the relation Z = z + q.
We will also want to recognize that the redshift component q represents an equivalent
distance (that is, a distance in the spatial equivalent of time), and we will call this the
quasar distance. The quasar distance of 3C 196 is 0.817, which makes it one of the most
distant Class I objects in the Burbidge table.
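A minimal sketch of the decomposition just defined, using the 3C 196 values quoted above (total redshift 0.871, normal recession component 0.054):

    import math

    Z = 0.871                       # total redshift of 3C 196
    z = 0.054                       # normal recession component
    q = Z - z                       # quasar distance, 0.817
    print(round(q, 3), round(3.5 * math.sqrt(z), 3))
    # 0.817, and 3.5*sqrt(z) is about 0.813, the normal explosion component for this z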
After the initial spurt of activity in a quasar dies down to some extent, it can be found in
the zone designated "early" in the upper left of Fig.26. As it ages, and its activity drops
still further, it moves to the right (toward lower R.F.) and downward (toward higher U-
B). Ultimately it passes the zero radio flux line and enters the radio quiet stage.
The tabulated data show that at the time they were compiled no Class I quasars had been
detected at quasar distances greater than 0.900, and no objects of this class that are old
enough to have U-B indexes above 0.60 had been found beyond a quasar distance of
approximately 0.700. The significance of these figures lies in the fact that the high R.F.
quasars with U-B indexes above 0.60 (Class II) can be detected beyond a quasar distance
of 0.700. Indeed, we can follow them all the way out to the ultimate limit at 2.00. It is
clear, then, that these more distant objects are not in the same condition in which they
were when they were originally ejected. In order to move into the range in which they are
now observed these distant Class II quasars must have undergone some process that
released a substantial amount of additional radiant energy at radio wavelengths.
We have already deduced that such a process exists. Because of the long period of time
during which a quasar is traveling outward before it arrives at the point where it converts
to the cosmic status, some of its constituent stars reach the age that corresponds to the
destructive limit. Secondary Type II explosions then occur. Obviously, this is just the
kind of a process that is required in order to explain the emergence of a second class of
radio-emitting quasars at distances beyond the observational limit of Class I objects. It
should be noted that a secondary series of explosions is a natural sequel to the original
explosion of the giant galaxy. That original explosion was initiated as soon as enough of
the oldest stars in the galaxy reached their age limits. The stars in the ejected fragment,
the quasar, were younger, but many of them were also well advanced in age, and after
another long period of time some of these necessarily arrived at the age limit.
The original stellar explosions occurred outside the portion of the galaxy that was ejected
as a quasar; that is, they took place in the interior of the giant galaxy of which the quasar
is a fragment. Thus the radio emission from a Class I quasar is mainly a result of the
extremely violent ejection. On the other hand, the secondary explosions occur in the body
of the quasar itself, and the emission from the Class II quasars comes directly from the
exploding stars. This difference in origin is reflected in the relation between the U-B
index and the radio flux, enabling us to utilize this relation as a means of distinguishing
between the two classes. Fig.27 is a plot of U-B vs. R.F. for the Class II quasars in the
Burbidge table. As can be seen, the points representing these objects fall entirely outside
the section of the diagram occupied by the quasars of Class I. There is no indication in
this diagram that the Class II quasars follow any kind of an evolutionary pattern, but we
will give this question some consideration later.

The quasar 3C 273 is of particular interest. This is definitely a Class II quasar, according
to the criteria that have been defined, but its distance is far out of line with that of all
other known objects of its class. No other Class II quasar in the group we are now
examining has a quasar distance less than 0.315, whereas the quasar distance of 3C 273 is
only 0.156. Ordinarily we can consider that when we measure the redshift of an object we
are also determining its maximum possible age, as this age cannot be greater than the
time required to move out to its present position. On this basis, we would interpret the
low redshift of 3C 273 as an indication that it is an unusually young Class II quasar. This
could be true. It was pointed out in the earlier discussion that the secondary explosions
may occur relatively soon after the original ejection, inasmuch as some of the stars in the
galactic fragment that is ejected as a quasar may already be near the age limit at the time
of the explosion. Very young Class II quasars are therefore definitely possible.
But 3C 273 is not necessarily young. It may be very much older than the 0.156 quasar
distance would indicate, as the general relation between redshift and age does not hold
good at very short distances where the magnitude of the possible random motion is
comparable to that of the recession. Two galaxies that are separated by a distance in the
neighborhood of their mutual gravitational limit can maintain this separation almost
indefinitely, and the width of the zone in which the relative motion can be little or none at
all is increased considerably if there is random motion with an inward component.
Hence 3C 273 may have spent a long time near its present position relative to our Milky
Way galaxy, and may be just as old as the quasars at distances around 0.300.
The observational information currently available is not adequate to enable making a
definite decision between these alternatives, but where we have a choice between
attributing an unusual situation to a chance coincidence that has resulted in an object of a
relatively rare type being located very close to us, or attributing it to a unique
characteristic which we know that the object in question does possess—its proximity—the
latter is clearly entitled to the preference pending the accumulation of further evidence.
We therefore conclude tentatively that 3C 273 is at least as old as the Class II quasars in
the vicinity of quasar distance 0.300.
The position of 3C 273 in Fig.27 is indicated by a triangle. As can be seen from the
diagram, this quasar is among the weaker radio emitters in its class (although we receive
a large radio flux from it because it is so close) but, so far as its properties are concerned,
it is not abnormal, or even a borderline case. Its proximity therefore provides a unique
opportunity to observe at relatively close range a member of a class of objects that can
otherwise be found only at great distances.
Further experience in application of the U-B criterion to distinguishing the quasar classes
has indicated that it is somewhat ambiguous in the region of high U-B values and low
radio emission. An adjustment of the selection criteria has therefore been made by
introducing the B-V index. In this region of high (more negative) U-B values, the location where
the original criteria proved to be deficient, there are some quasars with low radio
emission that have absorption redshifts. As brought out in Chapter 23, this is an
indication of advanced age, which places them in Class II. These objects have B-V
indexes in the upper portion of the full range of values, whereas the indexes of the
relatively low redshift quasars in this region, which can be expected to be Class I objects,
fall in the lower portion of this range. We may tentatively establish a dividing line at B-V
= +0.15, and instead of assigning all quasars with low radio emission and high U-B
indexes to Class I, we will put those members of this group that have B-V indexes above
0.15 in Class II. Until such time as we are able to base the selection criteria on a
theoretical rather than an empirical foundation we can hardly expect precision, but this
change to a two-color basis undoubtedly brings us closer to the correct line of
demarcation. The revised color index pattern for quasars at distances below 1.00 is shown
in Table X. Included in this revision is a change in the U-B classification boundary from
0.60 to 0.59.
The identification of the evolutionary status of the quasars by color and radio flux (or
distance) enables utilizing the data with respect to the other observable features of the
quasars to verify the theoretical conclusions as to the differences between the classes, and
between the earlier and later members of each class, something that we could not do if
these features entered into the criteria by which the classes are identified. For instance,
we have deduced from theoretical premises that the absorption, which gives rise to the
absorption redshift lines in the quasar spectra, takes place in clouds of material
accelerated to high inverse speeds by internal supernova explosions in these objects. No
absorption occurs, therefore, until these explosions occur on a sufficiently large scale. As
noted earlier, this point is not reached until the quasar is somewhere in the radio quiet
stage, while it is evident from the nature of the requirements for the production of
multiple absorption redshift systems that multiplicity will not appear until a still higher
level of activity is reached. On the basis of this evolutionary pattern, we can deduce the
following rules regarding the occurrence of absorption redshifts:
Table X
QUASAR CLASSES

Class        U-B                  B-V                  R.F.
             (negative values)    (positive values)
I early      Below 0.59                                Below 6.0
I late       Above 0.59           Below 0.15           Below 6.0
II early     Above 0.59           Above 0.15           Below 6.0
II late      Above 0.59                                Above 6.0
1. Class I quasars have no absorption redshifts.
2. Absorption redshifts approximating the emission values are possible throughout
most of the radio-quiet region, and in the Class II radio-emitting quasars.
3. Absorption redshifts differing from the emission values by more than the amount
that can be attributed to random motion are possible only in Class II quasars and
relatively old radio-quiet quasars.
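The Table X criteria and the rules above can be summarized in a small classification sketch (the function and the sample inputs are illustrative, not part of the original work; U-B values are entered as the magnitudes of the negative indexes, as in the table):

    def quasar_class(u_b, b_v, rf):
        # Boundary values taken from Table X. Low U-B identifies the early Class I
        # objects regardless of radio flux, as in the original selection criteria.
        if u_b < 0.59:
            return "I early"
        if rf > 6.0:
            return "II late"
        return "I late" if b_v < 0.15 else "II early"

    def absorption_expected(class_name):
        # Rules 1-3: absorption redshifts differing appreciably from the emission
        # value are expected only in Class II (and old radio-quiet) objects.
        return class_name.startswith("II")

    print(quasar_class(u_b=0.70, b_v=0.20, rf=2.0))    # hypothetical values -> "II early"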
A check of 29 quasars with absorption redshifts listed in a 1972 compilation by Burbidge
and O'Dell 251 shows that all of these objects are in compliance with the foregoing rules
when the assignment to classifications is made on the basis that has been specified. Here,
then, we have a significant confirmation of the theoretical description of the conditions
under which the absorption redshifts occur.
It was noted earlier that there would be a further advantage in being able to distinguish
the two classes of radio-emitting quasars by color alone without having to consider the
magnitude of the radio emission. As indicated in Fig.28, which is a combination of Fig.26
and 27, with the B-V index substituted for the radio flux, this is almost accomplished by
the resulting two-color diagram. There is some uncertainty along the dividing line at the
0.15 B-V index, and there is one deviant object, 3C 280.1, which has a B-V index of
0.13, although its redshift far exceeds the Class I limit. Otherwise, the two classes of
quasars are located in separate portions of the diagram, as in Fig.26 and 27. The deviation
of 3C 280.1 from the normal range of B-V indexes is probably due to the same cause as
the deviation of this quasar from the normal radio pattern, as shown in Table VII, Chapter
22.

Thus far we have been looking at the color indexes and radio flux as means of
differentiating between the various classes of quasars. Now we will want to examine the
significance of the changes that take place in these quantities during the evolution of the
quasars. The magnitudes of all of the properties that we are now considering undergo
evolutionary changes. Thus any one of them can serve as an indicator of quasar age.
Obviously, however, the properties that change most uniformly with time are the best
indicators, and on this basis we may consider the radio flux in Fig.26 and 27 as indicating
the quasar age. These diagrams thus show how the quasar temperature (U-B) varies with
age (R.F.). We now find that the B-V index follows approximately the same trend as the
radio flux, which means that this index is also an indicator of age, and can be substituted
for the radio flux in the diagrams.
The U-B indexes of the earliest Class I quasars fall in the range from about -0.40 to -0.59.
As these quasars age, the index moves almost horizontally to the vicinity of B-V = +0.15,
and then turns sharply downward on the diagram (toward more negative values) as the
radio-quiet zone is approached. The B-V index of the earliest Class I quasar in the sample
under examination is 0.60. This index decreases as the quasar ages, reaching positive or
negative values near zero at the radio-quiet boundary. The U-B indexes of the Class II
quasars range from -0.59 to about -1.00, with no apparent systematic variation. The
corresponding B-V indexes for most of the Class II quasars with relatively low redshifts
(below 0.750) are in the neighborhood of +0.20. Beyond 0.750 the index increases, and
the maximum values around 0.60 or 0.70 are reached near the 1.00 distance. This peak is
followed by a decrease to a level at which most values are comparable to those of the
early members of this class.
While the actual mathematical relations between the internal activity of the quasars and
their color indexes have not yet been examined in the light of the Reciprocal System of
theory, the evolutionary pattern followed by the values of these indexes, as described in
the preceding paragraph, shows a definite qualitative correlation with the changes that
theoretically take place in the generation and dissipation of energy. In Class I the initial
energy is high, but it gradually subsides, as no continuing source of large amounts of
energy is available to these objects. Both color indexes respond to this change by moving
toward more negative values as the quasars age. In Class II the initial activity develops
slowly, as it originates from many small events rather than from one big event, and the
Class II quasars do not reach the high temperatures that are characteristic of the early
Class I objects.
The lowest (least negative) U-B values in Class II are in the neighborhood of the dividing
line at -0.59, and the full range extends to about -1.00. The five radio-quiet quasars in the
Burbidge tables for which color indexes are given have U-B indexes in the range from -
0.78 to -0.90. It follows that only those quasars with U-B indexes between about -0.75
and the -0.59 limit can be regarded as having a temperature increment due to the
secondary explosions, and even in this group, which includes about 40 percent of the total
number of Class II quasars, the increment is not large. There is no systematic change with
age in the U-B indexes of these Class II objects. This is understandable on the basis of the
conclusion that this index is related to the temperature, as the temperature variations in
Class II are due to events that can take place at any time during the Class II stage of
quasar existence.
The pattern of values of the B-V index that was previously described indicates that the
processes which determine the magnitude of this index are increasing in strength
throughout the Class II stage. The specific nature of these processes has not yet been
established, but obviously they are aspects of the motion of the quasar constituents, and
for the present we can use the very general term "internal activity" in referring to them.
As the quasar distance increases, the average age of the observable quasars rises,
inasmuch as the age range is continually being extended. This increase in age is
accompanied by a corresponding increase in internal activity, and, below a quasar
distance of 1.00, by an increase in the B-V index. As already mentioned, this index
decreases beyond 1.00 distance, probably because of a decrease in the intensity of the
internal activity due to the dimensional distribution of the various properties of the
quasars that occurs in this distance range.
Inasmuch as the concentration of energetic material in the interior of the giant spheroidal
galaxy from which a quasar was ejected was built up gradually over a long period of
time, the isotopic adjustments taking place in this material at the time of the ejection are
mainly of the long-lived types. Thus the decrease in radio emission and "internal activity"
in the early quasar stage should be quite gradual. The temperature, on the other hand, is
raised to a very high level by the explosion, and can be expected to take a very sharp
initial drop. We would normally expect, therefore, that the early Class I stage would
begin with an exponential decrease in the U-B index (temperature) as a function of the B-
V index (age). But this is not at all what Fig.28 indicates. There is little, if any, decrease
in the U-B index in the early Class I stage. Let us see, then, if we can account for the
observed situation.
One obvious possibility is that the rapid decrease in the temperature precedes the earliest
quasar stage. On this basis, the temperature of the newly ejected galactic fragment drops
rapidly to a certain level, which we can identify as that of the earliest Class I quasars (U-
B = -0.40 ± 0.10), remains at this level to about B-V = +0.15, and then resumes a rapid
drop to a minimum level near 1.00. On first consideration, this may appear to be another
of the combinations of ten percent fact and ninety percent speculation that are so common
in the relatively uncharted areas of physics and astronomy. However, there actually is in
existence a class of objects, not currently identified as quasars, that occupies the position
in this U-B vs. B-V diagram in which the theoretical very early group of quasars would
fall if the foregoing explanation of the nature of the early evolutionary pattern is correct.
Like the quasars, these objects are abnormally small, very powerful extragalactic bodies.
Their existence was first recognized when the radiation from the "variable star" BL
Lacertae was found to have some very peculiar properties. Several dozen similar objects
have since been located. Because their properties are in some respects unique, they have
been placed in a new astronomical category. However, no consensus has been reached on
a name for these objects. As matters now stand, we have a choice between BL Lac
objects, lacertids, and lacertae. The latter term will be used in the discussion that follows.
Most of the differences between the lacertae and the quasars are merely matters of
degree, as would be expected if the lacertae are very young quasars. For instance, the
evidence of association with giant galaxies is much stronger than in the case of the
quasars. Joseph S. Miller describes the results of a recent (1981) investigation in which
both lacertae and quasars were examined as follows:
We conclude that the data are consistent with all BL Lac objects being located in
luminous giant elliptical galaxies . . . No galaxy components were definitely detected for
any of the QSOs in this study.252
These observations are consistent with the status of the lacertae as pre-quasar explosion
products. The observed galaxies are the giants—spheroidal, in the terminology of this
work—from which these objects were ejected. The parent galaxies are more likely to be
observed while the explosion products are still in the lacertae stage immediately
following ejection because these products have not yet had time to travel very far. By the
time the quasar stage is reached the ejected fragment has moved farther away from the
galaxy of origin, and the association between the two is not necessarily evident.
All known lacertae are radio sources, whereas many, perhaps most, quasars are radio
quiet. Here again, the difference is accounted for if we accept the conclusion that the
lacertae are the initial products of the galactic explosions; that is, they are in the violent
post-ejection stage. This conclusion is supported by the observation that "The BL Lac
type objects appear to be very closely related to violently variable QSO's like 3C 279 and
3C 345 (two quasars of Early Class I)." 253 The reason for the lack of radio-quiet lacertae
is then evident. The violent internal activity that produces the radiation at radio
frequencies continues throughout both the lacertae and Early Class I stages.
It has been found that the bright lacertae are not associated with extended radio
sources,254 whereas most quasars of the early classes do show such an association. Here,
again, extreme youth is the explanation. The extended sources have simply not had time
to develop.
The radiation from the lacertae includes optical, radio, and infrared components, all of
which are to be expected from young explosion products moving at upper range speeds.
No x-ray radiation has been detected. This, too, is consistent with the theoretical
evolutionary status of the lacertae. There are no x-rays in very young explosion products,
as we saw earlier in the case of the supernovae. Objects that lose energy after having
been accelerated to upper range speed levels emit X-rays. By the time the ejected
fragment reaches the quasar stage, some loss of energy has taken place, and production of
x-rays has begun.
A clear picture of the relation between lacertae and quasars is provided by the respective
colors. To illustrate this point, the colors of a representative group of lacertae 254 have
been added to Fig.28, and the enlarged diagram is shown in Fig.29. Quite clearly, the
positions of the lacertae in this two-color diagram are fully consistent with the theoretical
conclusion that these objects are the initial products of the galactic explosions, and
precede the early Class I quasars in the evolutionary development of the ejecta from the
explosions. Except for a few objects that have penetrated into the Class II region of the
diagram, the evolutionary path of the lacertae joins that of the Class I quasars in a smooth
transition, and the combined path follows the pattern that, as explained earlier, we would
expect the galactic explosion products to follow in their early stages, on the basis of the
theory that we have
developed.
One more of the distinctive characteristics of the lacertae remains to be examined.
The most intriguing difference between quasars and lacertae is that the quasars have
strong emission lines in their spectra that the lacertae lack. The reason for this is not yet
understood.255 (Disney and Veron)
This, too, is readily explained on the basis of the theoretical description of the immediate
post-ejection conditions. The principle that plays the most important role in this situation
has been encountered repeatedly in connection with other phenomena discussed in the
preceding pages, but it is one of those items that is so foreign to existing physical thought
that it may be a source of conceptual difficulty for many readers. A more detailed
discussion is therefore appropriate at this point, where the relevant observational
evidence is more extensive than in the applications considered earlier.
For reasons already specified, the radioactivity and the accompanying emission of
radiation at radio frequencies decline slowly throughout the Class I quasar stages. This
decline is illustrated in Fig.30. Here the absolute radio emissions are plotted against the
U-B color indexes (indicative of the temperature) in steps of 0.02 of the index. This
procedure results in some values that are averages of two or three individual emissions,
thereby smoothing the resulting curve to some extent. The circled points indicate the
average values. Those not so identified are single values. As might be expected from the
nature of the radio emission process, there are a few widely divergent values, but the
general trend is clearly represented by a line such as that in the diagram, which conforms
to the theoretical expectation.
The optical situation is more complicated because the stellar component speeds that are
produced by acquisition of a part of the explosion energy are much lower than those of
the gas and dust particles that supplied the original explosion energy. These stellar
components therefore return to the speed range below unity during the evolution of the
Class I quasars. The effect on the optical emission is shown in Fig.31, which is similar to
Fig.30, with the absolute optical luminosities substituted for the radio emissions. (The
methods of calculating the absolute values of both the optical and the radio emissions will
be explained in Chapter 25.) Here we see that the luminosity remains nearly constant in
the initial range, up to about U-B = - 0.50. It then begins a rapid rise to a point in the
neighborhood of -0.59. At this point the emission drops by one half. During the late Class
I stage, which follows, there is a moderately fast decrease to a level below -0.05 at the
point of entry into the radio-quiet zone.
Since the stellar component speeds that are primarily responsible for the magnitude of the
optical luminosity are subject to the same conditions that apply to the radio emission, that
is, a gradual decay of the effects of the explosive ejection, the peak in the luminosity
curve is somewhat surprising on first consideration. But, in fact, two different processes
are involved. The isotopic adjustments that produce the radio emissions decrease
gradually in intensity as more and more of them are completed. The optical emission is a
function of the temperature; that is, of the speeds of the component particles. In the low
speed range with which we are all familiar, the rate of emission of radiation increases
with the component speeds (the temperature). It might seem that a still further increase in
the speed would lead to a still greater rate of emission. But in the universe of motion
directions are reversed at the unit level. Consequently, the same factors that cause the
radiation to increase as the component speeds approach unity from lower levels also
operate to increase the radiation as unit speed is approached from the higher levels. It
follows that the radiation is at a maximum at the unit level, and decreases in both
directions.
Applying this principle to the Class I quasars, we see that in the U-B range as far as -
0.45, the component speeds are nearly constant as they slowly approach their maximum,
and begin to decrease. Then the continued radiation losses with no comparable
replacements accelerate the rate of decrease, reaching a maximum at the unit speed level.
During this interval, while the speeds are still above unity, the decrease in speeds results
in an increase in the rate of emission, reaching a peak at unit speed. As the diagram
indicates, this peak coincides with the dividing line between Classes I and II at U-B = -
0.59. Beyond this point the speed drops into the range below unity, the range in which a
decrease in temperature results in a decrease in the radiation. Like gravitation, the
radiation process is operative in both of the active dimensions of the intermediate region.
Half of the radiation is therefore eliminated at the unit speed level.
The lack of emission lines in the spectra of the lacertae is another result of this radiation
pattern. The immediate post-explosion speeds of the gaseous component of the explosion
products are very high, probably close to the two unit level. As brought out in Chapter 15,
this is the zero for motion in time, and the physical condition of an aggregate at this
temperature is similar to that of an aggregate at a temperature near the zero of motion in
space. The explanation of the lack of emission lines, then, is that the temperatures of the
gases in the lacertae are too high to produce a line spectrum. At these extremely high
temperatures (low inverse temperatures) the aggregate is in a condition in time that is
analogous to a solid structure in space, and like the latter it radiates with a continuous
spectrum. This is another example of the same phenomenon that we noted in Chapter 16
in connection with the continuum emission from the Crab Nebula. By the time the quasar
stage is reached, the temperature has dropped enough to give the aggregate the normal
characteristics of a gas, including a line spectrum.
It was evident from the time of the earliest studies of the different classes of quasars,
reported in Quasars and Pulsars, that the -0.59 value of the U-B index marks some kind
of a physical division, and this was one of the criteria on which the classification of the
quasars in that publication was set up. It can now be seen that the -0.59 U-B level
corresponds to unit temperature. The fact that the evolutionary path of the Class I quasars
(including the lacertae) contains a horizontal section, rather than decreasing somewhat
uniformly from the initial to the final state, as might be expected where there is no source
of replacement for the energy that is being lost by radiation, is explained by the transition
from two-dimensional to one-dimensional motion. The energy of the second dimension
of motion in the intermediate speed range is analogous to the heats of fusion and
vaporization. When the change to one-dimensional motion takes place, the energy of
motion in the other dimension becomes available to maintain the temperature, and the U-
B index, at a constant level for a time before the decreasing trend is resumed.
Incorporation of the lacertae into the path of development now completes the
evolutionary picture of the Class I explosion products from the time they are ejected from
the galaxy of origin to their entry into the radio-quiet stage. Some of these objects may
disappear during that stage, for reasons that will be explained in the next chapter. The
remainder eventually undergo secondary explosions and attain the Class II status. There
is no systematic relation between the temperature and age in Class II, because both the
time at which the secondary explosions occur and their magnitude are subject to major
variations. Each individual Class II quasar does, however, follow a course that eventually
brings it to the point where it crosses the sector boundary and disappears.
There are many pitfalls in the way of anyone who attempts to follow a long chain of
reasoning from broad general principles to specific details, and since this is an initial
effort at applying the Reciprocal System of theory to the internal structural features of the
quasars, it must be conceded that modification of some of the conclusions that have here
been reached is likely to be necessary as observational knowledge continues to
accumulate, and further advances in theoretical understanding are made in related areas.
However, the general picture of the quasar structure and evolution derived from theory
corresponds so closely with the information now at hand that there seems little reason to
doubt its validity, particularly since that picture was developed easily and naturally from
the same premises on which the earlier conclusions regarding the origin and nature of the
quasars were based.
It is especially significant that nothing new is required to explain either the existence or
the properties of the quasars (including the lacertae). Of course, nothing new can be put
into a purely deductive theory of this kind. Introduction of additional hypotheses or ad
hoc assumptions of the kind normally employed in the adjustment of theories to fit new
observations is excluded by the basic design of the theoretical system, which calls for
deriving all conclusions from a single set of premises, and from these only. Some new
principles and hitherto unknown phenomena are certain to be revealed by any new
theoretical development of this magnitude, and many such discoveries have, in fact, been
made in the course of the theoretical studies thus far undertaken. Such items as those
utilized in the foregoing applications of the theory to the various aspects of the quasar
situation - the status of all physical phenomena as more or less complex relations between
space and time, the inversion of these relations at unit levels, the role of time as
equivalent space, and the asymmetric transmission of physical effects across unit
boundaries - are all new to science. But these are not peculiar to the quasars; they are
general principles, immediate and direct consequences of the basic postulates, the kind of
features that distinguish the universe of motion from the conventional universe of matter,
and they were discovered and employed in a variety of applications decades before the
quasar study was undertaken. All of the novel principles deduced from theory and
utilized in this work were explicitly stated in the initial presentation of the Reciprocal
System of theory in the first edition of this work, published in 1959, years before the
quasars were discovered.
Furthermore, many of the consequences of these general principles, in the form of
physical phenomena and relations, that are now seen to play important parts in explaining
the origin and evolution of the quasars were likewise pointed out in detail in that 1959
publication, four years before Maarten Schmidt measured the redshift that ushered in the
era of the quasar "mystery." The status of stellar aggregates as structures in positional
equilibrium, which permits the building up of internal pressures in the galaxies, and the
ejection of fragments; the existence of two distinct divisions of the explosion products,
ejected in opposite directions, one moving at normal speed and the other moving at a
speed in excess of that of light; the reduction in the apparent spatial size of aggregates
whose components move at upper range speeds; the generation of large amounts of
radiation at radio wavelengths from the explosion products; and the eventual
disappearance of the ultra high speed material were all derived from theory and
discussed in the published work, not only long before the discovery of the quasars but
years before any definite evidence of the galactic explosions that produce the quasars was
found.
CHAPTER 25

THE QUASAR POPULATIONS


Now that we have identified the different classes of quasars, located them in the evolutionary
course of development, and established criteria by which to distinguish one from another, it
will be of interest to undertake what we may describe as a census, to get an idea as to the
relative numbers of observable objects of the various classes, the factors that are responsible
for the differences between these classes, and the effect of the evolutionary development on
these various populations.
The list of known quasars is continually being extended, both by increasing the capability of
the available instrumentation, and by more use of the existing equipment. A complete survey
of the observable quasars is therefore impossible, as matters now stand. The best that we can
do is to examine all those on which the necessary information was available up to some
particular date. Under the circumstances there is no advantage to a very large sample. As the
modern poll takers have demonstrated, a relatively small sample is adequate if it is actually
representative. Rather than attempting to cover all of the quasars currently known, we will
therefore review and update the results of a study made some years ago on the same group of
quasars examined in the studies reported in the preceding chapter, those on which the
relevant data were available in 1967.
The total number of quasars included in the 1967 tabulation by the Burbidges is 102, but
color indexes were not available for 26 of these. The study was therefore confined to the
other 76. Of these, 45, or sixty percent, were quasars of Class II. The spatial distribution of
these objects is quite uniform out to a quasar distance of 1.00. On the two-dimensional basis
that we have seen is applicable to the intermediate speed range, two independent distributions
are possible in three-dimensional space. The existing quasars can be located either in the
scalar dimension that is represented in the conventional spatial reference system, or in a
dimension that is perpendicular to it. It follows that only half of the existing quasars are
visible. There are 20 visible Class II quasars within a 1.00 radius (quasar distance) and 5
within 0.50. Both of these figures represent the same density: 20 quasars in a sphere of radius
1.00. We may therefore take this as the true density of Class II quasars observable in this
distance range with the 3C instruments and procedures. The total number of these quasars in
equivalent space is twice this number, or 40 per spherical unit. In the second quasar distance
unit, from 1.00 to 2.00, there is another division between two perpendicular dimensions,
which again reduces the visibility by one half, cutting the visible number to one quarter of
the total. This means that where the actual quasar population remains unchanged, only 10
Class II quasars per spherical unit are visible in the quasar distance range from 1.00 to 2.00.
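The counting rule just described can be summarized in a short calculation. The following is a minimal sketch (the function names are arbitrary, and the densities are the ones stated above); it reproduces the calculated values that appear later in Table XI:

    def visible_class_ii(q):
        # Half of the assumed 40 Class II quasars per spherical unit are
        # visible out to quasar distance 1.00 (20 q squared); beyond 1.00 a
        # further halving leaves one quarter visible, so the count grows at
        # only 10 per unit of q squared.
        if q <= 1.0:
            return 20.0 * q ** 2
        return 20.0 + 10.0 * (q ** 2 - 1.0)    # equivalent to 10q^2 + 10

    def visible_class_i_early(q):
        return 20.0 * q ** 2                   # same visible density as Class II

    def visible_class_i_late(q):
        return 30.0 * q ** 2                   # 60 per spherical unit, half visible

    for q in (0.5, 1.0, 1.3, 2.0):
        print(q, round(visible_class_ii(q)))   # 5, 20, 27, 50, as in Table XI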
The number of Class II quasars calculated on the foregoing basis for spheres of successively
larger radius is compared with the observed number in Table XI. There are a number of
factors that cause some deviations from the theoretical distribution at very short distances,
but the number of quasars involved is so small that the effect on the distribution pattern is
negligible. Except for the normal amount of random fluctuation, the theoretical distribution is
maintained throughout the quasar distance range up to about 1.80. Beyond this point there is
a slow decrease as the normal limit at 2.00 (total redshift 2.326) is approached, and an
increasing number of quasars become unobservable because they cross the boundary into the
cosmic sector.
The relation of the number of visible quasars to the distance has been a matter of much
interest to the astronomers because of the bearing that it has, or may have, on the question as
to whether the density of matter in the universe is decreasing, as required by the Big Bang
cosmological theory. This has been a hotly contested subject, but the present consensus, as
reported by H. L. Shipman, is that "Quasars were far more abundant in the early universe
than they are now." 256 But this conclusion is based on the assumption that the quasars are
distributed three-dimensionally, and the data of Table XI that confirm the two-dimensional
distribution, together with the corroborative evidence presented earlier, cut the ground out
from under the astronomers' conclusions. From these data it is evident that there has been no
change in the quasar density during the time interval represented by the quasar distance of
2.00.
The close correlation between the calculated and observed quasar distributions not only
demonstrates the uniformity of the quasar density throughout space, but also confirms the
validity of the theoretical principles on which the calculations were based. It should be
emphasized that this is not merely a case of providing a viable alternative to the currently
accepted view of the situation. The fact that uniformity of distribution on the two-
dimensional basis has been demonstrated not only for the total number of radio-emitting
quasars in a representative sample, but also individually for each of the three classes of
objects included in this total puts the findings on a firm basis. The essential concept of the
Big Bang theory is thus invalidated.
The data for the other two classes of radio-emitting quasars, early Class I and late Class I, are
included in Table XI. Here the distribution is reproduced with space densities of 40 and 60
quasars per spherical unit respectively. We thus find that the predominance of Class II
quasars in the observed list does not reflect the true situation. Instead of being a 40 percent
minority, the Class I objects actually constitute about 70 percent of the total number of
radioemitting quasars.
TABLE XI
DISTRIBUTION OF QUASARS
Class II
Number = 20q² (quasar distance 0.1 to 1.0); Number = 10q² + 10 (quasar distance 1.1 to 2.0)

Quasar Distance   Calc.   Obs.
      0.1            0       0
      0.2            1       1
      0.3            2       1
      0.4            3       5
      0.5            5       5
      0.6            7       5
      0.7           10       7
      0.8           13       8
      0.9           16      15
      1.0           20      20
      1.1           22      23
      1.2           24      25
      1.3           27      29
      1.4           30      31
      1.5           33      32
      1.6           36      34
      1.7           39      36
      1.8           42      41
      1.9           46      44
      2.0           50      45

Class I—Early
Number = 20q²

Quasar Distance   Calc.   Obs.
      0.1            0       0
      0.2            1       0
      0.3            2       0
      0.4            3       2
      0.5            5       2
      0.6            7       7
      0.7           10      11
      0.8           13      14
      0.9           16      16

Class I—Late
Number = 30q²

Quasar Distance   Calc.   Obs.
      0.1            0       0
      0.2            1       1
      0.3            3       3
      0.4            5       7
      0.5            8       9
      0.6           11      11
      0.7           15      14
The sample on which the study was conducted contains no quasars with quasar distances
above 2.00, a fact which indicates that the asymmetric redshift factors, discussed in Chapter
23, that lead to redshifts exceeding the normal limit are relatively uncommon.
Although we know the quasars (and other astronomical objects as well) only as sources of
radiation, the amount of information that can be extracted from this radiation is surprisingly
large; so large, in fact, that much of it will not be needed for purposes of the kind of a general
survey of the various quasar populations that we are now undertaking. The current status of
the quasars as astronomy's greatest mystery is not due to a lack of sufficient information, but
to the astronomers' inability, thus far, to construct the kind of a theoretical framework that
would enable placing the many items of information that now seem irrelevant or
contradictory in their proper places relative to each other, and to the astronomical universe as
a whole. Availability of a purely deductive system of theory, in which all conclusions are
derived by development of the consequences of the fundamental properties of space and
time, now provides what is needed.
Our present undertaking is to examine the primary characteristics of the different classes of
quasars and to show how they fit into the general picture. We will make use of the
information developed in the preceding chapters, particularly that referring to the color
indexes, the recession redshift (and distance), z, and the quasar distance (and redshift), q. The
other magnitudes with which we will be mainly concerned are the optical luminosity, l, its
absolute value, L, and the radio emission or flux, for which we will use the customary
symbol S.
The optical radiation as received is ordinarily expressed in terms of the astronomical
magnitude scale. This system of measurement is presumably satisfactory to the astronomers,
since they continue using it, but it is confusing to just about everyone else. Actually, it is
historical accident. The magnitudes were originally ordinal numbers—simply positions in a
series. The brightest stars were designated as stars of the first magnitude, the next brightest as
stars of the second magnitude, and so on. Later these magnitudes were adjusted to conform to
a specific mathematical relation, so that they became a measurement scale, but in order to
avoid major changes, the upside down ordinal sequence was retained. Thus the stars with the
greatest numerical magnitude are not the brightest, but the faintest. For the same reason, the
numerical scale, which for convenience is exponential, was constructed on an awkward basis
in which 2.5 magnitudes are equivalent to a factor of 10. It has been necessary to refer to
astronomical magnitudes to some extent in this work in order to maintain contact with the
astronomical literature. To facilitate translating these values into terms that are more familiar
to most readers of this volume, the following table of equivalents may be helpful:
Factor   Magnitude Difference
    2          0.75
    4          1.50
    5          1.75
    8          2.25
   10          2.50
   50          4.25
  100          5.00
 1000          7.50
The quantity that is being measured in terms of the magnitude scale is the luminosity of the
object. For our present purposes we will want to deal with the actual luminosity, and we will
therefore convert the magnitudes to luminosities. In order to keep the numerical values
within a convenient range we will state the luminosity in terms of the increments of
magnitude above 15, converted to the luminosity basis. Such values represent the ratio of the
measured luminosity to the luminosity corresponding to visual magnitude 15. For example,
the value 0.200 indicates a luminosity one fifth of the reference level. As indicated by the
foregoing tabulation, reducing the luminosity by a factor of 5 adds 1.75 to the magnitude.
The value 0.200 thus corresponds to magnitude 16.75. We will be concerned mainly with the
absolute luminosity, the actual emission from the quasar, rather than with the observed value,
which varies with the distance. For this purpose, we will establish a reference datum at the
point where q is 1.00 and z is 0.08. The absolute luminosity will be expressed in terms of the
measured value projected to this datum by the appropriate relation.
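In computational terms, the conversion amounts to the familiar relation of 2.5 magnitudes per factor of 10, taken relative to the magnitude 15 reference level. A minimal sketch (the function name is arbitrary):

    def relative_luminosity(magnitude, reference_magnitude=15.0):
        # Luminosity as a fraction of the magnitude 15 reference level;
        # 2.5 magnitudes correspond to a factor of 10.
        return 10.0 ** ((reference_magnitude - magnitude) / 2.5)

    print(relative_luminosity(16.75))   # approximately 0.200, the example given above
    print(relative_luminosity(19.50))   # approximately 0.016, the visibility limit used later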
No doubt some exception will be taken to the use of an unorthodox measurement scale in the
comparisons that follow, but in addition to generating values that are more convenient to
handle, this different scale of measurement will help to avoid the confusion that might
otherwise arise from the fact that the basis for projecting the observed luminosity to the
absolute system is not the same in our calculations as in conventional practice, and the
calculated absolute luminosities corresponding to the observed values will not usually agree.
The same considerations apply to the radio emission values. The values given in the tables
are absolute emissions recalculated from the data of Sandage,257 and expressed on a relative
basis similar to that used for the optical emission.
As we have seen in the preceding pages, the distinctive characteristics of the quasars and
related astronomical objects are due to their greater-than-unit speeds. However, in
undertaking to follow the course of development of these objects it will be necessary to
recognize that the quasar is a complex object with many speeds, each of which may vary
independently of the others. These include:
1. Quasar speeds. The quasars are ejected with scalar speeds exceeding two units.
During the interval in which it is restrained by gravitation, each quasar has a speed of
z in space, due to the normal recession, and a net speed of 3.5 z½ in time (equivalent
space) in the dimension of the spatial reference system. The observed quasar redshift
is a measure of the scalar total of these two redshift components.
2. Stellar speeds. The pre-explosion activity and the violent explosion raise the speeds
of most of the constituent stars of the ejected galactic fragment (the quasar) above the
unit level. It is this intermediate speed of the stars of the quasar, and the consequent
expansion into time, that are responsible for the small apparent sizes of the quasars.
They are galactic equivalents of the white dwarf stars.
3. Stellar component speeds. The speeds of the individual atomic and molecular
components of the stars (temperatures) are independent of the speeds of the stars.
Like the stellar speeds, they are increased to levels in the intermediate range by the
energy released during the explosion, but they are subject to radiation losses, while
the speeds of the stars are not affected by radiation. Consequently, the speeds
(temperatures) of the stellar components decrease relatively rapidly, and in most
quasars they return to the speed range below unity at the end of the early Class I
stage. The stellar speeds, on the contrary, remain in the intermediate range throughout
the entire life of the quasar.
4. Independent particle speeds. Dust and gas particles are accelerated to high speeds in
the stellar and galactic explosions, and they retain these speeds (temperatures) longer
than the atomic and molecular constituents of the stars because of the lower rate of
radiation in the gaseous state. Radio emission therefore continues through both Class
I stages.
As indicated in the foregoing paragraphs, the explosive forces impart the initial speeds of the
quasar system. Prior to the explosion that produces the quasar the interior of the giant galaxy
of origin is in a state of violent activity resulting from a multitude of supernova explosions.
The products of these explosions are confined to this interior region by the overlying stellar
aggregate, which, as pointed out earlier, has physical characteristics resembling those of a
viscous liquid. The dust and gas particles in the agitated interior are moving with speeds
greater than that of light. When the internal pressure finally becomes great enough to blow
out a section of the overlying material as a quasar, a large quantity of this fast-moving
material becomes part of the quasar aggregate. The violent readjustments resulting from the
explosion accelerate a substantial proportion of the component stars of the quasar to these
same intermediate speeds.
After the initial sharp decrease during the lacertae stage, the status of the quasar speeds at the
beginning of the early Class I stage is as follows: The quasar as a whole is moving
unidirectionally outward at ultra high (above two units) speed, but is subject to the
gravitational effect of the galaxy of origin. This results in the net speed reflected in the
observed redshift, z + 3.5z½. The constituent stars of the quasar are moving at intermediate
(between one and two units) speeds, and are therefore expanding into time, causing the
apparent spatial dimensions of the quasar to decrease. The atomic and molecular constituents
of the stars are likewise moving at intermediate speeds, with similar results, putting the stars
into the white dwarf condition. The gas and dust particles, which acquired upper range
speeds prior to the explosion, undergo a relatively slow speed decrease. All matter
accelerated to a higher speed level by the explosion is experiencing isotopic adjustments, and
is therefore emitting strong radiation at radio wavelengths.
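For reference, the quasar distance q used in the tables that follow can be recovered from the observed (total) redshift by treating that redshift as the sum z + 3.5z½ and solving for the recession redshift z. A minimal sketch of this arithmetic, with arbitrary function names:

    import math

    def quasar_distance(observed_redshift):
        # The observed redshift is taken as z + 3.5*sqrt(z); solve the
        # quadratic x**2 + 3.5*x = observed_redshift for x = sqrt(z),
        # then the quasar distance is q = 3.5*x.
        x = (-3.5 + math.sqrt(3.5 ** 2 + 4.0 * observed_redshift)) / 2.0
        return 3.5 * x

    print(round(quasar_distance(0.344), 3))   # 0.335, as listed for 1049-09 in Table XII
    print(round(quasar_distance(2.326), 3))   # 2.0, the sector limit mentioned earlier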
As the quasar ages and moves away from the galaxy of origin its net outward speed increases
because of the continual reduction of the retarding gravitational force. All of the internal
speeds decrease because the large initial energy content is supplied by the galactic explosion,
and there is no active source of energy in the quasar itself, other than the normal stellar
generation processes, which are wholly inadequate to maintain the high energy concentration
that exists initially. The internal motions therefore lose energy in radiation and other
interactions with the environment.
This decrease in the internal activity results in a corresponding decrease in the optical
luminosity. In determining the true, or absolute, luminosity from the observed value, one of
the factors that must be taken into consideration is the effect of the distribution to two
perpendicular planes. This applies to the radiation as well as to our ability to see the quasars,
and it means that only half of the radiation originating from the quasar components that are
moving at speeds below unity is included in the observed luminosity. If the quasar
components from which the radiation originates are moving at intermediate speeds, the
distribution of the radiation is extended to the full eight units of the intermediate region. In
calculating the absolute luminosity, the measured value is thus subject to an increase by a
factor of 2 or 8. The limitation of the intermediate range speeds (temperatures) to the early
Class I stage restricts the application of the ratio of 8 to 1 to this class. For all other classes of
quasars the ratio is 2 to 1.
The other determinant of the relation between the observed and absolute luminosities is the
distance. The magnitude of this effect depends on the route by which the radiation travels.
The normal recession in space of a quasar ejected from a nearby galaxy is small, and the
quasar motion is therefore primarily in time from the very start. Consequently, the radiation
from this object travels back to us through time. On the other hand, a quasar ejected from a
distant galaxy is receding at a high speed in space at the time of the explosion, and a
substantial period of time elapses before the motion in time in the explosion dimension
reaches the recession level. In the meantime the radiation from this quasar travels back
through space. Eventually, however, the continually increasing net explosion speed exceeds
the speed of the recession, after which the travel of the radiation from this distant quasar, like
that from the one nearby, takes place through time.
On this basis, the radiation from the lacertae, the quasars of early Class I, the youngest
members of Late Class I, and a few small, rapidly evolving, members of the radio-quiet class,
travels in space. That from the remainder of late Class I, most of the radio-quiet quasars, and
the quasars of Class II, travels in time. Quasars that are very close, where random motion in
space plays a significant role, may continue on the space travel basis beyond the normal
transition point.
Because of the two-dimensional distribution of the quasar radiation originating in the
intermediate speed range, the radiation received through space is proportional to the first
power of the distance in space, z. Inasmuch as q = 3.5 z½, it is also proportional to q². The
distribution of the radiation in time is likewise two-dimensional, and the quasar radiation
received through time is proportional to the first power of the distance in time (equivalent
space), q. In the discussion that follows all distances will be identified in terms of q (time) or
q² (space).
Table XII gives the observational data for the early Class I quasars in the group under
consideration, expressed in the terms that have been described, together with two calculated
values, the quasar distance, q, and the visibility limit. This visibility limit is the approximate
luminosity that a quasar of a given class and distance must have in order to be located by a
survey with the equipment and techniques available to the observers whose results constitute
the quasar sample that is being examined.
A purely theoretical determination of this limit would require a quantitative evaluation of the
capabilities of the equipment in use at the time the observations were made, an undertaking
that is not feasible as a part of the present investigation. The visibility limits for the quasars
of the various classes have therefore been determined empirically from the minimum
luminosities of the observed Class II quasars; that is, it is assumed for present purposes that
the limiting luminosity actually observed approximates the true limit.
The faintest magnitudes reached in the results here being studied were 19.44 (3C 280.1),
19.35 (3C 2), and 19.25 (1116+12). The corresponding absolute luminosities are 0.025,
0.017, and 0.037. The quasar distance of 3C 2 is 0.962. If we assume that this quasar, which
has the lowest luminosity of any Class II object in the sample group, is almost at the visibility
limit, we can take a luminosity of 0.016 (magnitude 19.50) as the limit at q = 1.00. The
corresponding limits, on the q basis, for 3C 280.1 and 1116+12 are then 0.020 and 0.029
respectively; that is, both of these quasars are close to the limit of visibility. This should be
sufficient to justify using 0.016 for the visibility limit on the q basis for the purposes of our
investigation.
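As a check on these figures, the absolute luminosities quoted above can be reproduced from the observed magnitudes. The following is a minimal sketch, assuming that the first-power (q basis) projection to the 1.00 datum amounts to multiplying the relative luminosity by the quasar distance q; the function names are arbitrary:

    def relative_luminosity(magnitude, reference_magnitude=15.0):
        return 10.0 ** ((reference_magnitude - magnitude) / 2.5)

    def absolute_luminosity_time_travel(magnitude, q):
        # Assumed projection to the q = 1.00 datum for radiation received
        # through time: observed relative luminosity multiplied by q.
        return relative_luminosity(magnitude) * q

    def visibility_limit_time_travel(q):
        return 0.016 * q    # the empirical limit adopted above, scaled by q

    for name, m, q in (("3C 2", 19.35, 0.962), ("1116+12", 19.25, 1.841)):
        print(name,
              round(absolute_luminosity_time_travel(m, q), 3),
              round(visibility_limit_time_travel(q), 3))
    # prints roughly 0.018 and 0.015 for 3C 2, and 0.037 and 0.029 for 1116+12,
    # within rounding of the luminosities and limits quoted above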
TABLE XII
CLASS I QUASARS-EARLY TYPE
Quasar Z q U-B B-V S m Limit L
1049-09 .344 .335 -.49 +.06 .17 16.79 .057 .172
3C 48 .367 .357 -.58 +.42 1.49 16.2 .065 .337
1327-21 .528 .507 -.54 +.10 .31 16.74 .132 .413
3C 279 .538 .516 -.56 +.26 .76 17.8 .136 .162
3C 147 .545 .523 -.59 +.35 2.79 16.9 .140 .381
3C 275.1 .557 .534 -.43 +.23 3.77 19.00 .146 .057
3C 345 .595 .569 -.50 +.29 .72 16.8 .166 .495
3C 261 .614 .586 -.56 +.24 .25 18.24 .176 .140
3C 263 .652 .621 -.56 +.18 .48 16.32 .197 .913
3C 207 .684 .650 -.42 +.43 .43 18.15 .216 .186
3C 380 .692 .657 -.59 +.24 2.61 16.81 .221 .653
1354+19 .720 .682 -.55 +.18 .42 16.02 .238 1.455
3C 254 .734 .695 -.49 +.15 .78 17.98 .247 .247
3C 138 .760 .718 -.38 +.23 1.33 17.9 .264 .285
3C 196 .871 .817 -.43 +.60 3.25 17.6 .342 .486
0922+14 .895 .838 -.52 +.54 .23 17.96 .360 .365
The objects that have been used for the evaluation of this limit are quasars of Class II, in
which, as we have seen, the radiation travels through time (on the q basis). The radiation
from most of the Class I quasars travels through space and this modifies the visibility limits.
The principal factor that enters into this situation is that there is a difference between the
brightness, or luminosity, of an astronomical object, and what we may call the intensity of
the radiation, if the radiating matter is moving at a speed greater than unity (the speed of
light). This difference arises because of the introduction of a second time component at the
higher speed. At speeds less than unity the only time entering into the radiation process is the
clock time. At higher speeds there are also changes in position in three-dimensional time
(relative to the natural datum). Here it becomes necessary to> distinguish between the time of
the progression of the natural reference system, the time that is registered on a clock, and the
total time involved in the physical phenomenon under consideration. This total time is the
sum of the clock time and the change in time location.
Ability to detect radiation with equipment of a given power is determined by the intensity of
the radiation, the radiation per unit of time. Distribution of the radiation over additional units
of time reduces the intensity. The luminosity, however, is measured as the amount of
radiation received during the total time corresponding to a unit of clock time (one of the
components of the total), and it is not affected by the number of units involved in this total.
If the radiation travels through time its magnitude is a scalar quantity in spatial terms. It
therefore has no geometrical distribution, and is received at full strength. However, if
radiation from an object in the intermediate speed range travels through space it is distributed
in the spatial equivalent of time; that is, in equivalent space. As we saw in Chapter 23, the
full distribution extends over 64 effective units. Only two of these are collinear with the
scalar dimension of the spatial reference system. Thus the radiation received through space
from an object in the intermediate region per unit of total time, the intensity of the radiation,
is 1/32 of the total emission.
It follows that the visibility limit for travel in space corresponding to the 0.016 limit for
travel in time is 32 x 0.016 = 0.512. This is the limit applying at quasar distance 1.00. For
other distances, the limits are 0.016 q (time travel) and 0.512 q² (space travel). The limits
shown in Table XII and the tables of the same nature that will follow have been calculated on
this basis.
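The two limits can be expressed as a single rule. A minimal sketch (the function name is arbitrary):

    def visibility_limit(q, travel):
        # Approximate limiting absolute luminosity at quasar distance q,
        # for the sample and equipment considered here.
        if travel == "time":
            return 0.016 * q           # radiation received through time
        if travel == "space":
            return 0.512 * q ** 2      # 32 x 0.016, radiation received through space
        raise ValueError("travel must be 'time' or 'space'")

    print(round(visibility_limit(0.335, "space"), 3))   # 0.057, the first limit in Table XII
    print(round(visibility_limit(1.00, "time"), 3))     # 0.016, the limit at the 1.00 datum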
While this general distribution of the radiation over the full 64 units in time does not affect
the luminosity, we have already found that there are other distributions in space that reduce
the ratio of the observed radiation to the original emission by a factor of 8 for the early Class
I quasars and a factor of 2 for all others. The ratio of intensity to luminosity for motion
through space is then the ratio of intensity to emission, 1/32, divided by the ratio of
luminosity to emission, 1/2 or 1/8. This gives us 1/4 for the early Class I quasars and 1/16 for
the others.
The significance of these ratios is that they enable us to determine the visibility limits in
terms of the observed magnitudes (luminosities) for those Class I quasars whose radiation
travels through space. The 1/4 ratio tells us that quasar radiation originating in the
intermediate speed range and received through space (q² basis) has only one quarter of the
intensity that it would have if travel through time (q basis) were possible. This is equivalent
to a difference of approximately 1.5 magnitudes. The q² limit corresponding to the 19.50
magnitude q limit applicable to the quasar sample under investigation is thus 18.00.
While the equipment used in collecting the data included in this sample was capable of
observing Class II quasars at 19.50 magnitude, early Class I quasars, whose radiation travels
through space, had to be 1.5 magnitudes (4 times) brighter in order to be detected.
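The magnitude offsets follow directly from the intensity-to-luminosity ratios, since a factor f in intensity corresponds to 2.5 log10 f magnitudes. A minimal sketch of that arithmetic (the function name is arbitrary):

    import math

    def limiting_magnitude(intensity_to_luminosity, time_travel_limit=19.50):
        # Each factor of 10 in intensity corresponds to 2.5 magnitudes; a
        # smaller intensity-to-luminosity ratio means the object must be
        # correspondingly brighter to be detected.
        return time_travel_limit - 2.5 * math.log10(1.0 / intensity_to_luminosity)

    print(round(limiting_magnitude(1 / 4), 2))    # about 18.0, early Class I
    print(round(limiting_magnitude(1 / 16), 2))   # about 16.5, other space-travel classes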
The reality of the 18.00 limit can be seen by inspection of the values in Table XII. Only one
of the magnitudes in this list exceeds this limit by more than the amount that can be expected
in view of the variability in the luminosity of these extremely active objects. The one
exception, 3C 275.1, is a very strong radio emitter, with the largest radio output of any quasar
in the sample under examination. It was probably located optically in an intensive search
with powerful equipment.
The gradual decrease in the energy level of the quasars that we observe in the early Class I
stage continues during the late Class I stage, as indicated in both the radio emission (Fig. 30)
and the optical luminosity (Fig. 31). Since the spatial change of position is initially very
slow, the travel of the radiation is still mainly in space (q² basis) at the start of the late stage,
but by its end the radiation from many of the smaller objects (those below about 0.50
absolute luminosity) is reaching us through time (q basis). Coincidentally, the color indexes
become less reliable as an indicator of quasar age, as the smaller aggregates evolve more
rapidly.
These factors introduce some uncertainty into the determination of the absolute luminosity of
objects of this class. Any individual late Class I quasar outside of the local region in which
random motion is significant may be just beyond the early stage, so that its radiation is still
traveling in space, or it may have originated nearby, so that the currently indicated distance
represents travel in time. Usually, however, the relation of the luminosities calculated on the
two different bases to the applicable visibility limits indicates the correct alternative. Most of
the quasars whose absolute luminosities calculated on the q² basis are above the q² limits
probably have true luminosities in the neighborhood of the values calculated on that basis.
Conversely, where the luminosity on the q basis is only slightly above the corresponding
limit, the quasar radiation probably travels through time. In those cases where the luminosity
calculated on the q basis is substantially above the q limit, but the quasar does not qualify as
visible on the q² basis, the absolute luminosity is somewhere between the q and q² values,
and its true magnitude cannot be determined from the information now available.
Luminosity data for the late Class I quasars of the reference list are given in Table XIII. The
basis (either q or q²) on which each of the absolute luminosities in the last column was
calculated is indicated by the column in which the corresponding visibility limit is shown.
For these quasars, whose luminosity to emission ratio is 1/2, the intensity to luminosity ratio
becomes 1/16. This corresponds to a magnitude difference of 3.0, which puts the visibility
limit for this quasar class at 16.50. The limiting magnitudes for the different classes of
quasars are summarized in this tabulation:

                        Intensity   Luminosity   I/L    Limiting Magnitude
Time travel                 1            1         1          19.50
Early Class I              1/32         1/8       1/4         18.00
Other space travel         1/32         1/2       1/16        16.50
TABLE XIII
CLASS I QUASARS—LATE TYPE
Quasar        Z      q     U-B    B-V    S      m     Limit(q)  Limit(q²)    L
2135-14      .200   .197   -.83   +.10         15.53              .020      .048
1217+02      .240   .235   -.87   +.02   .06   16.53              .028      .027
PHL 1093     .260   .255  -1.02   +.05         17.07    .004                .038
PHL 1078     .308   .301   -.81   +.04         18.25    .005                .015
3C 249.1     .311   .303   -.77   -.02   .22   15.72              .047      .095
3C 277.1     .320   .312   -.78   -.17   .20   17.93    .005                .021
3C 351       .371   .360   -.75   +.13   .33   15.28              .066      .200
3C 47        .425   .411   -.65   +.05   .58   18.1     .007                .024
PHL 658      .450   .435   -.70   +.11         16.40              .097      .104
3C 232       .534   .513   -.68   +.10   .18   15.78              .135      .257
3C 334       .555   .532   -.79   +.12   .35   16.41              .145      .155
MSH 03-19    .614   .586   -.65   +.11   .60   16.24              .176      .219
MSH 13-011   .626   .596   -.66   +.14   .48   17.68    .010                .051
3C 57        .68    .646   -.73   +.14   .01   16.40              .214      .230
The limitation of the Late Class I quasars to the shorter distances is a conspicuous feature of
Table XIII, as there are absolute luminosities among this group of objects that are high even
by the standards of the Class II quasars, which can be seen all the way out to the 2.00 sector
limit. No quasars in Table XIII have a quasar distance beyond 0.646. This early cut-off is a
result of the 16.50 limiting magnitude, together with the steep rise of the visibility limit on
the q² basis applying to space travel. Quasars originating nearby and moving out to a greater
distance have passed out of the Class I stage before traveling this far, whereas most of those
originating beyond 0.500 are cut off by the rapidly rising visibility limit, which is up to 0.128
at this point. The most distant late Class I quasar in the list, 3C 57, is a relatively large
fragment, with absolute luminosity 0.230, just above the 0.214 visibility limit corresponding
to this distance.
The existence of the 16.50 magnitude limit is clearly demonstrated in the table. Nine of the
quasars in this list have a high enough luminosity in proportion to the visibility limit to make
it probable that their radiation is transmitted through space, and none of these is appreciably
above 16.50 magnitude (that is, less luminous).
A comparison of the values in Table XIII with those of Table XII shows the extent of the
decrease in energy emission that takes place as the Class I quasars grow older. Because the
early Class I quasars are products of extremely violent galactic explosions, their emission is
very high, both at optical and radio frequencies, much above that of any other quasar class.
In the absence of any adequate source of replacement of the energy that is lost by radiation
the internal activity gradually subsides, and the average emission in the late Class I stage is
much lower. The maximum emission in the early class, both optical and radio, is six times
the maximum of the late class. The average optical luminosity of the quasars of early Class I
is four times the average of those of late Class I. The average radio emission in early Class I
is also four times the average emission of those members of the late class for which radio
data were available.
Since the radio and optical radiation are produced by different processes their decline as a
result of the gradual decrease in the internal energy content of the quasars does not
necessarily have to proceed at exactly the same rate, but the fact that the relative emissions of
the two groups are the same for both types of radiation is a significant confirmation of the
validity of the theoretical relations on which the calculations are based.
The Class I radio-quiet quasars are a distinctive and quite homogeneous group, and some
consideration of their place in the general picture is appropriate, but only two of them appear
in the sample under examination. In order to have an adequate sample, the quasars of this
class listed in the 1972 compilation by Burbidge and O'Dell 251 have been added to those in
the 1967 list. Table XIV gives the emission data for these quasars. As would be expected on
theoretical grounds, these are small objects, their average luminosity being only 0.018,
whereas the average of those of the late Class I radio-emitting quasars of Table XIII that are
in the same distance range is 0.064. The reason for this difference is that the smaller quasars
have less energy to start with, and they dissipate it more rapidly because of their greater ratio
of surface area to mass. They consequently pass through the various stages of evolution in
less time, and some of them reach the radio-quiet stage while the larger Class I quasars of the
same age are still radio emitters.
TABLE XIV
CLASS I RADIO-QUIET QUASARS
Quasar Z q Limit L
B 234 .060 .060 .001 .006
B 264 .095 .094 .002 .016
TON 256 .131 .130 .009* .015*
B 154 .183 .180 .003 .007
B 340 .184 .181 .003 .030
BSO-2 .186 .183 .003 .006
B 114 .221 .217 .003 .015
PHL 1186 .270 .264 .004 .010
B 46 .271 .265 .004 .020
PHL 1194 .299 .292 .005 .029
RS 32 .341 .332 .005 .009
PHL 1027 .363 .353 .006 .054
PHL 1226 .404 .391 .006 .020
B 312 .450 .435 .007 .010
*q² basis
This more advanced evolutionary status is reflected in the mode of travel of the radiation.
While the radiation from the majority of the late Class I radio-emitting quasars travels in
space, all but one of the radio-quiet quasars in Table XIV has reached the stage where the
travel of the radiation is in time. One of the factors that contributes to this result is that the
visibility limit of these small objects on the q² basis is reached relatively soon. Only three of
the 14 quasars listed in Table XIV have absolute luminosities over 0.020. The visibility limit
on the q² basis corresponding to 0.020 luminosity is at a quasar distance of about 0.200. This
means that a Class I radio-quiet quasar whose radiation travels in space is visible only within
this relatively short distance.
As in the case of the Class I radio emitters, the limitation on the distance of the radio-quiet
quasars whose radiation travels in time is a result of evolutionary development. By the time
these objects have moved from their relatively near locations of origin out to a quasar
distance of about 0.400 their optical emission has decreased to the point where it is not
detectable with equipment of the kind used by the investigators whose results are reported in
Table XIV. The most distant quasar of this group is at a quasar distance of 0.435. There are
no radio-quiet objects between this distance and q = 1.136 in either of the two samples that
we are examining. They reappear in the range beyond 1.136. The factors that are responsible
for this distribution pattern will be considered later in this chapter.
There is considerable doubt as to the true status of some of the small objects that have been
classified as quasars. A recent (1982) news item reports that B 234, the closest object in
Table XIV (z = 0.060), and B 272, another object that has been regarded as a nearby quasar
(z = 0.040), are H II galaxies, in which the radiation originates in large regions of ionized
hydrogen.258 The members of this recently recognized class of galaxies appear to be in the
size range of small spirals, and in approximately the same evolutionary stage, but they have
not yet acquired the spiral structure. It is possible that more of the small nearby "quasars" are
actually galaxies of this new class, but this should not change any of the conclusions reached
herein, other than the estimate of the minimum quasar size, which might be increased
slightly.
Inasmuch as the Class II stage is the last of the phases through which a quasar passes
between its origin and its disappearance, a normal Class II quasar has been traveling outward
for a very long time. It therefore follows that the absolute luminosity of such an object should
approximate the value calculated on the q basis. Table XV gives the luminosity data thus
calculated for the Class II quasars from the reference list that are nearer than q = 1.00. There
is one exceptional case in this tabulation. As noted earlier, when a relatively large quasar is
very close to the location from which we are observing it, the outward movement may be
retarded long enough to enable the quasar to reach Class II status before the transition from
radiation travel in space to travel in time. The quasar 3C 273 is in this condition.
TABLE XV
CLASS II QUASARS—BELOW q =1.00
Quasar Z q U-B B-V S m Limit L
3C 273 .158 .156 -.85 +.21 1.50 12.8 .012 .369
2251+11 .323 .315 -.84 +.20 .15 15.82 .005 .148
1510-08 .361 .351 -.74 +.17 .35 16.52 .006 .087
1229-02 .388 .376 -.66 +.48 .20 16.75 .006 .075
3C 215 .411 .398 -.66 +.21 .21 18.27 .006 .020
2344+09 .677 .643 -.60 +.25 .30 15.97 .010 .263
PHL 923 .717 .679 -.70 +.20 17.33 .011 .079
3C 286 .849 .797 -.82 +.22 2.21 17.30 .013 .096
3C 454.3 .859 .806 -.66 +.47 2.13 16.10 .013 .293
1252+11 .871 .817 -.75 +.35 .26 16.64 .013 .181
3C 309.1 .904 .846 -.77 +.46 1.33 16.78 .014 .164
0957+00 .906 .847 -.71 +.47 .23 17.57 .014 .080
3C 336 .927 .866 -.79 +.44 .69 17.47 .014 .089
MSH 14-121 .940 .877 -.76 +.44 .95 17.37 .014 .099
3C 288.1 .961 .895 -.82 +.39 .56 18.12 .014 .050
3C 245 1.029 .955 -.83 +.45 .68 17.25 .015 .120
CTA 102 1.037 .962 -.79 +.42 1.91 17.32 .015 .114
3C 2 1.037 .962 -.96 +.79 .83 19.35 .015 .017
3C 287 1.055 .977 -.65 +.63 1.24 17.67 .016 .084
3C 186 1.063 .984 -.71 +.45 .95 17.60 .016 .090
Table XVI is a similar presentation of the corresponding data for the Class II quasars at
quasar distances greater than 1.00. The objective of separating the Class II objects into these
two groups is to show that, from a luminosity standpoint, the two groups are practically
identical. The range of values in each case is about the same, and the average luminosity for
the group below 1.00 is 0.126, while that for the more distant group is 0.138. In both the
average and the maximum luminosities there is a small increase at the far end of the distance
range, above 1.70, due to the changes that take place as the sector limit at 2.00 is
approached, changes that were previously discussed in connection with the redshifts (Chapter
23) and the color indexes (Chapter 24). Otherwise, wherever we draw out a random sample
of Class II objects we obtain practically the same luminosity mixture.
TABLE XVI
CLASS II QUASARS-ABOVE q = 1.00
Quasar Z q U-B B-V S m Limit L
3C 208 1.110 1.024 -1.00 +.34 .98 17.42 .016 .111
3C 204 1.112 1.026 -.99 +.55 .19 18.21 .016 .053
1127-14 1.187 1.090 -.70 +.27 1.51 16.90 .017 .190
BSO-1 1.241 1.136 -.78 +.31 16.98 .018 .183
1454-06 1.249 1.142 -.82 +.36 .45 18.0 .018 .072
3C 181 1.382 1.254 -1.02 +.43 1.02 18.92 .020 .034
3C 268.4 1.400 1.269 -.69 +.58 .73 18.42 .020 .055
3C 446 1.403 1.271 -.90 +.44 1.48 18.4 .020 .056
PHL 1377 1.436 1.298 -.89 +.15 16.46 .021 .339
3C 298 1.439 1.301 -.70 +.33 3.30 16.79 .021 .250
3C 270.1 1.519 1.367 -.61 +.19 1.03 18.61 .022 .049
3C 280.1 1.659 1.480 -.70 -.13 .80 19.44 .024 .025
3C 454 1.757 1.559 -.95 +.12 .82 18.40 .025 .069
3C 432 1.805 1.597 -.79 +.22 .93 17.96 .026 .104
PHL 3424 1.847 1.630 -.90 +.19 18.25 .026 .082
PHL 938 1.93 1.695 -.88 +.32 17.16 .027 .232
3C 191 1.953 1.713 -.84 +.25 1.18 18.4 .027 .075
0119-04 1.955 1.715 -.72 +.46 .39 16.88 .027 .304
1148-00 1.982 1.736 -.97 +.17 .84 17.60 .028 .158
PHL 1127 1.990 1.742 -.83 +.14 18.29 .028 .084
3C 9 2.012 1.759 -.76 +.23 .41 18.21 .028 .091
PHL 1305 2.064 1.800 -.82 +.07 16.96 .029 .295
0106+01 2.107 1.833 -.70 +.15 .56 18.39 .029 .081
1116+12 2.118 1.841 -.76 +.14 .90 19.25 .029 .037
0237-23 2.223 1.922 -.61 +.15 .74 16.63 .031 .429
This does not mean that the optical characteristics of all Class II quasars are identical; it
merely means that whatever differences do exist are distributed throughout the Class II
evolutionary stage. There are periods in the life of Class II quasars when the internal
explosive activity is at a level above normal, but these active periods are not confined to any
one phase of the Class II existence, and may occur at any time.
One of the significant results of the near identity between these two quasar groups at much
different distances, when their absolute luminosities are calculated by means of the first
power relation derived from theory, is to supply another confirmation of that relation; that is,
to confirm the two-dimensional nature of the quasar radiation. The validity of this
relationship was demonstrated in Quasars and Pulsars by a direct correlation between quasar
distance and the average luminosities of small groups of quasars in which all group members
are at approximately the same distance. Now the relation is verified in a different manner by
showing that the distribution of luminosities calculated on this first power basis is, with the
one exception that has been noted, independent of the distance. Obviously, sample groups
from different sections of the range of distances would not show the close approach to
uniformity that is evident in the tables unless the basis for reducing observed to absolute
luminosity is correct. The identification of the Class II quasars above q = 1.00 is positive, as
no other quasars have quasar distances in this range. It then follows that the agreement
between the properties of the two groups of Class II quasars also validates the criteria by
which the members of the group below 1.00 were differentiated from the Class I quasars that
exist in the same distance range.
It is clear from the entries in Table XVI that the quasars do not thin out gradually with
distance, as expected on the basis of conventional theory. On the contrary, there is evidently
a sharp cut-off at some point just beyond the last object of the sample group (quasar distance
1.922). This is not due to decreased visibility, as the visibility limit at the 1.922 distance is
0.031, far below the 0.133 average luminosity of the Class II quasars. It must result from
some other limiting factor that comes into operation at this distance. This is in full agreement
with the theoretical conclusion that the quasars that retain the normal 3½-3½ distribution of
the intermediate region units of motion convert to motion in time, and disappear from view
at quasar distance 2.00.
The radio-quiet quasars included in Table XVI are relatively large objects, their average
absolute luminosity being 0.145, in sharp contrast to the Class I radio-quiet quasars of Table
XIV, which average only 0.018. A substantial size is thus indicated as a requirement for
attaining the Class II radio-quiet status. This is understandable when we consider the nature
of the process that is responsible for the Class II activity. As we have seen, the Class II stage
is initiated when a considerable number of the stars of the quasar reach their age limits and
undergo supernova explosions. If some or all of the explosion products are confined within
the interior of the structure, the quasar becomes a Class II radio emitter. If it is not big
enough, or compact enough, to confine these products, they are ejected as they are produced,
or at intervals, and the quasar gradually disintegrates.
The luminosity data for the various classes of quasars are summarized in Table XVII. The
most conspicuous feature of this tabulation is the high luminosity of the early Class I objects.
However, when we consider the enormous disparity in size between the exploding galaxy
that produced the early Class I quasar and the exploding fragment that constitutes the Class II
quasar, the difference in luminosity between these two classes is easily accounted for. The
relatively low emission of the late Class I objects is obviously a result of the energy losses
during the time that has elapsed since the galactic explosion. At the end of the Class I stage,
the quasars are in what we may call a condition of minimum internal activity.
TABLE XVII
QUASAR LUMINOSITIES
Class Max. Min. Av. Max/Min
I-Early 1.455 .057 .422 25
I-Late (under 0.76) .257 .024 .155 11
I-Late (over 0.76) .155 .015 .057 10
I-Radio Quiet .054 .006 .017 9
II-Below 1.00 .369 .017 .126 22
II-Above 1.00 .429 .025 .138 17
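The summary figures in Table XVII can be checked directly against the luminosity columns of the detailed tables. The fragment below does this, purely as an arithmetic verification, for the Class II group below q = 1.00; the list is simply the L column of Table XV.

    # L column of Table XV (Class II quasars below q = 1.00).
    L_below = [0.369, 0.148, 0.087, 0.075, 0.020, 0.263, 0.079, 0.096,
               0.293, 0.181, 0.164, 0.080, 0.089, 0.099, 0.050, 0.120,
               0.114, 0.017, 0.084, 0.090]

    maximum = max(L_below)                  # 0.369
    minimum = min(L_below)                  # 0.017
    average = sum(L_below) / len(L_below)   # 0.126
    ratio = maximum / minimum               # about 22

    print(maximum, minimum, round(average, 3), round(ratio))

The same procedure applied to the L column of Table XVI reproduces the 0.429 maximum, 0.025 minimum, and 0.138 average shown in the summary.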
Table XVII separates the 14 quasars of late Class I into two groups of 7 each, with the
dividing line at U-B = 0.76. The ratio of maximum to minimum luminosity in these two sub-
groups is practically identical, indicating that the decrease in internal activity continues
throughout the late Class I stage, as would be expected from theoretical considerations, and
that the difference between the tabulated values for the two groups reflects a decrease in the
luminosity level because of the reduced activity, rather than a difference in the sizes of the
quasars in the two groups. We may thus conclude that the absolute luminosity of a radio-
emitting quasar of minimum size in a condition of minimum internal activity is about 0.015.
As indicated earlier, the radio-quiet quasars in the Class I distance range differ from the
coexisting radio emitters mainly in size. Addition of this radio-quiet class brings the
minimum size down to 0.006, or to make some allowance for the rather small sample, let us
say 0.005. Some question may be raised as to why there should be a minimum size; that is,
why the explosion does not produce debris of all sizes from sub-atomic particles up to some
maximum size of fragment. The answer is that the quasar is the whole cloud of ultra high
speed matter ejected by the explosion, including stars, star fragments, dust, and gas. We see
the cloud as a discrete object because of the great distances that are involved.
The maximum luminosities vary considerably more than the minimum. This is evidently due
to the fact that in the quasars, as well as in the pre-explosion galaxies, the internal activity
can build up to a higher level in the larger aggregates before breaking through the overlying
layers of material. The effect of this factor is shown by the ratios of maximum to minimum
luminosities, which range from 17 to 25 in the active quasar classes, but average only about
10 in the relatively inactive late Class I groups. Since each of the larger quasars passes
through all of the stages represented by the various radio-emitting classes, the range of sizes
should be the same in each, if the sample is representative. The 0.155 maximum of the sub-
group of late Class I in which the U-B index is over 0.76 should therefore be the maximum
value comparable to the 0.015 minimum that we found to be applicable under the condition
of minimum internal activity.
Since the sample is small, there may be some larger objects elsewhere, but the continuity of
the maximum-minimum ratio throughout Class I indicates that the 0.155 value is at least
close to the maximum. Furthermore, the quasar 3C 334, which has the 0.155 luminosity, may
still have somewhat more than the minimum internal activity. These possibilities tend to
counterbalance each other. It thus appears that a value of about 0.150 is acceptable as the
maximum absolute quasar luminosity under conditions of minimum internal activity.
What we now want to consider is the meaning of these maximum and minimum values in
terms of the masses of the quasars; that is, are they consistent with the theoretical conclusion
that the quasar is a fragment of a giant spheroidal galaxy? The various factors that enter into
this situation are not yet defined clearly enough to enable an accurate calculation, but an
approximation is all that is needed in order to answer the question as stated. The most
convenient way of obtaining this answer is to make a direct comparison between a quasar
and the galaxy from which it was ejected, both of which are at the same spatial distance. The
logical pair for this purpose is the one that we know the best, the quasar 3C 273 and its
associate, the giant galaxy M 87.
The largest uncertainty in this evaluation is in the relative mass-to-light ratios of these two
objects. It is known that there is a systematic increase in this ratio as the size of the galaxy
increases, as would be expected from the theoretical information about the galactic structure
developed in the preceding chapters. A recent review by Faber and Gallagher reported
relative values for spiral galaxies ranging from 1.7 for the smaller class to 10 for the largest
spirals.259 Information with respect to the giant spheroidal galaxies, the parent objects of the
quasars, was reported to be scarce, but the available data indicated a substantially higher
ratio, probably at least 20.
The increase in the mass-to-light ratio with the size of the galaxy is mainly due to the
increasing amount of confined high density, high temperature material in the galactic
interiors. At the level of minimum internal activity the quasars contain much less of this
dispersed material, without the confinement.
The stars are still moving at upper range speeds, and the star density remains high, but this
does not affect the mass-to-light ratio, which is determined primarily by the extent to which
upper range speeds exist in the constituents of the stars. As previously noted, these
constituents return to temperatures below unity at the end of the early Class I stage. The
mass-to-light ratios of the quasars in the minimum activity condition should therefore
approximate those of the smaller spiral galaxies. An estimate of 2 should be reasonable. This
means that the ratio of the masses of the minimum activity quasars to those of the galaxies of
origin is less by a factor of about 10 than the ratios of the luminosities.
As indicated in Table XVII, the Class II quasars are about twice as luminous as the quasars in
the stages of minimum internal activity. This brings the mass-to-light ratio of 3C 273 down
to about 1/20 of that applicable to M 87. The observed magnitudes of M 87 and 3C 273 are 9.3
and 12.8 respectively. The corresponding ratio of luminosities is 25. Applying the correction
for the difference in the mass-to-light ratios, we arrive at the conclusion that M 87 is 500
times as massive as 3C 273.
From the data in the tables in this chapter it appears that 3C 273 is somewhere near the
maximum quasar size. On this basis, then, only about 0.2 percent of the mass of a giant
galaxy is ejected in the form of a quasar, even when the fragment is one of maximum size.
This is only a very small portion of the galaxy, but the galaxy itself is so immense (about
10¹² stars, according to current estimates) that 0.2 percent of its mass is a huge aggregate of
matter. It is equivalent to about two billion stars, enough to constitute a small spiral galaxy.
The smallest quasar, radio quiet by the time we observe it, represents only about 0.007
percent of the galactic mass—a mere chip, one might say—yet it, too, is a very large object
by ordinary standards, as it contains approximately 70 million stars, the equivalent of about
100 large globular clusters, or a dwarf elliptical galaxy.
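The chain of estimates in the preceding paragraphs involves nothing more than magnitudes and ratios, and can be restated compactly. The sketch below merely reproduces the arithmetic of the text; the mass-to-light values (about 20 for the giant spheroidal and about 1 for 3C 273) and the 0.005 and 0.150 luminosity limits are the approximations adopted above, and the scaling of the smallest quasar by the ratio of those limits is an inference as to how the 0.007 percent figure was obtained.

    # Apparent magnitudes of M 87 and 3C 273 (values quoted in the text).
    m_M87, m_3C273 = 9.3, 12.8

    # Luminosity ratio corresponding to the 3.5 magnitude difference.
    lum_ratio = 10.0 ** (0.4 * (m_3C273 - m_M87))      # about 25

    # Mass-to-light ratios adopted above: about 20 for the giant
    # spheroidal, about 1 for the Class II quasar 3C 273.
    mass_ratio = lum_ratio * (20.0 / 1.0)               # about 500

    # Fraction of the giant galaxy ejected as a maximum-size quasar,
    # and the equivalent number of stars for a 10**12 star galaxy.
    fraction_max = 1.0 / mass_ratio                     # about 0.2 percent
    stars_max = fraction_max * 1.0e12                   # about 2 billion stars

    # Smallest quasar, scaled by the 0.005 / 0.150 luminosity limits.
    fraction_min = fraction_max * (0.005 / 0.150)       # about 0.007 percent
    stars_min = fraction_min * 1.0e12                   # about 70 million stars

    print(round(lum_ratio), round(mass_ratio), stars_max, stars_min)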
The data examined in this volume, and the two that preceded it, together with the
interpretation of these data in terms of the quasar theory derived from the postulates of the
Reciprocal System give us a picture of the quasars that is complete and wholly consistent. As
this analysis shows, if a fragment of a giant galaxy, of a size consistent with the theory, has
been ejected at a speed greater than that of light, as required by the theory, then the optical
emission from the constituent stars of the fragment, occurring at a rate consistent with the
normal emission from such stars, at the distance theoretically indicated by the redshift, and
distributed in space and its equivalent in the manner required by the theory, will be received
here on earth in just the quantities that are actually observed. There are no inconsistencies of
the kind that are so conspicuous in the application of conventional theory to the quasars. All
of the observations fit easily and naturally into the theoretical structure.
As brought out in the preceding pages, this is true not only of the general situation, but also
of the minor details. The correlation between theory and observation provides individual
confirmation of many of the special features of the theory, such as the first power relation
between distance and luminosity, the changes in color and distribution of the radiation that
take place when the speed exceeds one or another of the unit levels, the special
characteristics of the early type quasars, the differences between the limiting magnitudes of
the various quasar types, etc.
Furthermore, the theory from which all of these results have been obtained is not something
that has been constructed to fit the observations. Each and every conclusion that has been
reached is a necessary consequence of the basic assumptions as to the properties of space
and time. The theoretical development shows that just because space and time have these
postulated properties, quasars must exist, and they must have exactly the characteristics that
are now revealed by observation.
CHAPTER 26
Radio Galaxies
As predicted in the first edition of this work, the fast-moving products of galactic
explosions that are now known as quasars were discovered in the course of observations
of radiation at radio wavelengths. A dozen years earlier, the first radio galaxy, Cygnus A,
had been identified. The optical object corresponding to this radio source was found to
have the appearance of two galaxies in collision. When another very strong radio emitter,
Centaurus A, was discovered and identified with an optical object, NGC 5128, that
likewise appeared to be a pair of colliding galaxies, the galactic collision hypothesis
became the favored explanation of the origin of the extra-galactic radio emission,
although no one could explain how collisions could produce the observed radiation.
As more radio observations accumulated, it became clear that the great majority of the
radio sources are not colliding galaxies. The necessity of providing some other
explanation for most of the sources raised doubts as to the validity of the collision
hypothesis, and "by 1960 the colliding-galaxy theory of radio sources had all but
expired." Ten or twelve years later the pendulum had swung back in the other direction.
The authors of the foregoing comment on the situation in 1960 saw it this way in 1973:
We suspect that in NGC 3921 and similar objects one is witnessing the vigorous tumbling
together or merger of what until recently were two quite separate galaxies.260
A realization that galactic collisions, once thought to be rare events, are actually quite
common, has been a significant factor in this change of attitude. There are many galaxies
with distorted shapes, and it has been found that a substantial number of these are, or at
least appear to be, double structures of some kind, suggesting that two separate galaxies
are, or have been, interacting. In NGC 5128 what we apparently see is a spiral galaxy
plowing into the middle of a giant elliptical galaxy. This view is supported by the
observation that "the gaseous disk is apparently rotating much faster than the elliptical
component."261 The galaxy NGC 4650A is reported to have a similar structure, with an
elliptical core and an outer spiral galaxy revolving around the core.
In considering the situation from a theoretical standpoint, the first point to be noted is that
colliding galaxies produce radio frequency radiation by the same process as any other
strong radio emitter; that is, the radiation comes from particles that have been accelerated
to upper range speeds. The acceleration can be produced in any one of a number of
different ways. Thus it is quite possible that some of the observed radio emission may be
a result of galactic collisions, even though in the majority of the radio emitters it results
from explosive processes. It would be expected, however, that the explosions, the more
violent of the two processes, would produce the stronger radiation.
What needs to be explained, then, in the case of the two sources Cygnus A and Centaurus
A, is the exceptional strength of their radio emissions. The answer that we obtain from
the theory is that the strong emission is not a direct result of the collision but an indirect
result, in which sources of radiation already present are released. It is evident from
observation that in each case one of the two colliding objects is a giant. We have
previously deduced that the interiors of such giant galaxies contain concentrations of
intermediate and ultra high speed matter, enough to make these galaxies strong sources of
radio emission even when their structures are intact and only a small part of the radiation
that is produced is able to pass through the material that overlies the producing zone.
These giant galaxies are large enough, and stable enough, to be able to absorb globular
clusters or small galaxies without any significant disturbance of their own structures, but
a collision with a large spiral can be expected to result in some disruption of the outer
structure of the giant, allowing the escape of large quantities of explosion products from
the interior. Here, then, is a source of radiation that is easily able to account for the strong
emission from the two objects in question.
The alteration of the normal pattern of development of the internal activity by escape of
matter and radiation during collisions is not likely to have any long run significance. It
can be expected that when the consolidation of the two galaxies is complete the new
galactic structure will be able to contain the material moving with upper range speeds,
and the build-up of this material will be resumed, continuing to the ultimate limit in the
normal manner. However, some drastic changes in the pattern of evolution of the galaxy
may result if the large-scale explosive activity is premature. This possibility will be
explored in the next chapter.
Two objections have been raised to the collision hypothesis in application to NGC 5128:
(1) the "dark lane is wider than would be normal for the disk in a spiral galaxy," and
(2) the "lane is more disturbed than the matter in the disk of a spiral galaxy should be."263
Neither of these objections is tenable once it is understood that the stars of a galaxy
occupy equilibrium positions. Disturbance of this equilibrium by contact with another
galaxy generates effects that extend over great distances.
The recent discovery that NGC 5128 is a strong x-ray source supports the conclusion that
a collision has disrupted the structure of the giant galaxy, as the intermediate speed
component of the matter escaping from the central region of the galaxy begins emitting x-
rays as soon as its temperature drops below the unit level. Inasmuch as the x-ray radiation
originates from matter moving at less than unit speed, it should be emitted mainly from
the optical location rather than the radio locations. This theoretical conclusion is
confirmed by observation.264
Whether the speeds responsible for the radio emission of the colliding galaxies are
produced in part by the collision, or whether the fast-moving matter released by the
rupture of the outer layers of the larger galaxy is the sole source of this radiation is not
definitely indicated by the information now available, but the indications are that the
contribution of the collision is no more than minor. The disruption of the outer structure
of a giant spheroidal galaxy is clearly the process that leads to the greatest release of
radio-emitting material, and it accounts for the fact that such objects as Cygnus A are
extremely strong radio emitters.
Smaller amounts of such material escape from other galaxies and from quasars under
special conditions. The giant galaxy M 87, for example, apparently has a hole in its outer
structure through which ultra high speed matter is escaping in the form of a jet. In still
another class of radio emitting objects, the Seyfert galaxies, the containment is quite
limited, and the ultra high speed material escapes continuously, or at short intervals.
These latter two classes of objects will be given some further consideration in the next
chapter.
Another special kind of radio galaxy is the one known as the N galaxy. Most of these
objects are far distant. Consequently they have not been studied as extensively as those
more accessible to observation, and the amount of information about them that is now
available is rather limited. For this reason, whatever conclusions we may reach with
respect to them will have to be somewhat tentative. However, the theory that has been
developed from the postulates of the universe of motion requires the existence of a class
of objects with the same characteristics as those thus far observed in the N galaxies. On
the basis of the information currently available, it thus appears probable that the N
galaxies are the objects that the theory calls for.
Inasmuch as there is no gravitational effect beyond a quasar distance of 1.00, the
explosion speed has no component in the dimension of the reference system in the range
from 1.00 to 2.00. From our point of view, therefore, a quasar originating beyond q =
1.00 remains at its original spatial location (subject to the normal recession) during its
entire life span. Ordinarily the radiation from the quasar overpowers that of the galaxy of
origin, and the quasar appears to be alone. In some circumstances, however, the presence
of the galaxy can be detected. Furthermore, we can deduce from probability
considerations that some of the quasars are located directly behind the heavily populated
galactic centers from which, according to the theory, they originate. In this case the
quasar radiation is absorbed and reradiated.
This means that there should exist a class of galaxies in which the galactic nucleus is
abnormally bright and emits radiation with some of the spectral characteristics of the
radiation from the quasars. The distinguishing feature of the N galaxies is a nucleus of
this nature, and it is now conceded that "the spectra and colors of quasars are similar to
those of the nuclei of N galaxies."265 Indeed, the similarities between these galaxies and
the quasars are so evident that it has been suggested that all quasars may be N galaxies
with very prominent nuclei.
One specific observation that has been interpreted as evidence in favor of this hypothesis
is a change of three magnitudes (a factor of about 16) in the emission from the galaxy X
Comae. This leads the observers to conclude that this is "an object that apparently can
change temporarily from an N-type galaxy to a QSO." This, they say, "clearly supports
the hypothesis" that quasars are simply very bright galactic nuclei.266 However, the
explanation provided by the theory presented in this work is not only equally consistent
with the observations, but also explains how and why the change takes place, something
that is conspicuously lacking in the "bright nucleus" hypothesis. If the quasar is behind
the galaxy from which it was ejected, as we have concluded that the N galaxies are, it is
quite possible for changes to occur, as the galaxy rotates, in the amount of matter through
which the quasar radiation must pass. Such changes are probably no more than minor in
the usual case, but they obviously can extend all the way from a condition in which the
entire radiation from the quasar is absorbed and reradiated, so that we have nothing but
an N galaxy, to a condition in which that radiation passes through essentially unchanged,
and we see only a quasar.
It has also been reported 267 that in some of the objects of this class the quasar component
is "off center" with respect to the underlying galaxy. This is very difficult to explain on
the basis of the hypothesis that the N galaxy is a galaxy with a quasar core, but it is easily
understood if what is being observed is a galaxy with a quasar almost directly behind the
galactic center. Another significant observation is that "the underlying galaxy [of the N
system] has the same colors as a giant elliptical (E) galaxy."265 This supports the
theoretical finding that the underlying galaxy in the N system is a galaxy of maximum
size (and age) that exploded and ejected the quasar.
Further support for this explanation comes from the observation that the N galaxies are x-
ray emitters. After having been raised to the radio-emitting speed level by the strong
radiation from the quasar, some of the gas and dust of the N galaxy loses energy in its
interaction with the other galactic constituents, and returns to the lower speed range. This
initiates x-ray emission. "All optically known N galaxies out to a red shift of 0.06 are
detected as x-ray emitters."227
The general run of radio galaxies-those that are not members of special classes such as
the ones that have been described-are explosion products. As we saw earlier, a radio
galaxy is normally produced jointly with each quasar. It is also possible that in some
galaxies large-scale supernova activity may begin before the galaxy has reached the size
that makes it capable of resisting internal pressures in the ultra high range. In that event,
the galactic explosion will be less violent, and the major explosion product will not attain
the ultra high speed that characterizes the quasar. Instead, it will be a radio galaxy. In all
cases, however, a radio galaxy is an ordinary galaxy, differing from the other members of
its class only in that it contains gas and dust that has been accelerated to speeds greater
than that of light, and is therefore undergoing the isotopic adjustments that produce
radiation at radio frequencies.
Many quasars are strong radio sources, as could be expected from the fact that secondary
explosions take place in the older quasars, giving them a source of replacement for the
particles and the energy that are dissipated. As we saw in our examination of the
absorption spectra, the particle speeds are actually increasing in the older quasars. Radio
galaxies, on the other hand, are limited to the original supply of matter and energy that
they acquire in the explosive event. It should be noted, however, that the strength of the
radiation from the distant quasars is greatly overestimated in current practice, because the
absolute value of the emission is calculated on the basis of a three-dimensional
distribution. As explained earlier, the actual distribution is two-dimensional.
In those scientific areas where data from observation and experiment are scarce and
subject to a variety of interpretations, the generally accepted choice from among the
alternatives often fluctuates in a manner reminiscent of the changes of fashion in
clothing. The changing attitudes toward the process responsible for the generation of the
radiation from the radio galaxies that were mentioned earlier in this chapter now appear
to be entering still another phase. The "high fashion" in today's astrophysical theory is the
black hole. Wherever problems are encountered, the current practice is to call upon the
black hole to provide the answer. So it was probably inevitable that black holes would
find their way into the theory of the radio galaxies.
Just how the black hole accomplishes the observed result is not explained. We are simply
expected to say "black hole" as we would say "open sesame," and take it for granted that
we have the answers. For example, K. I. Kellerman reports evidence supporting the
"speculation that the efficient transport of energy from the black hole to the extended
radio lobes occurs by what is commonly referred to as a relativistic beam or jet."268 The
basic questions as to how and why a black hole produces a "relativistic beam" are passed
over without comment.
Since the astronomers know of no means of producing strong radio radiation other than
the synchrotron process, they assume that this process must be operating, even though
they realize that, as matters now stand, there is no plausible explanation of how the
conditions necessary for the operation of this process could be produced on such an
enormous scale. J. S. Hey tells us that:
The synchrotron theory has remained undisputed as the principal process of radio
emission. But the problems of the production of relativistic particles and their
replenishment by repeated activity have prompted a great deal of speculation . . .
There are at least as many theories as there are theoretical astronomers.269
As this statement indicates, the astronomers' view of this phenomenon has not advanced
beyond the highly speculative stage. H. L. Shipman sums up the situation in this manner:
We have no definite explanation for the appearance of even the most common
form of radio galaxy, the double radio galaxy.270
Here again, the theory of the universe of motion produces the answers to the problems in
the course of a systematic and orderly development of the consequences of its basic
postulates, without the necessity of making any further assumptions, and without calling
upon any black holes or any other figments of the imagination. This theory tells us that,
except for some minor contributions from processes such as galactic collisions, the
energy of the radio radiation is produced explosively. Gas and dust particles are
accelerated to upper range speeds, and radiation at radio frequencies is then produced in
the manner described in Chapter 18. Where conditions are such that the speed of certain
particles drops back below the unit level at some stage of the evolution of the explosion
products, x-ray emission takes place, as also explained in an earlier chapter.
Where the maximum explosion speeds are in the intermediate range, below two units, the
explosion products expanding in time have no other motions. The radio emission
therefore takes place from the original spatial position-that is, the optical location-of the
exploding object, except to the extent that some of the intermediate speed matter may be
entrained in the outward-moving low speed products. The general run of white dwarfs
and many other radio emitters are therefore single radio sources. Explanation of these
sources presents no particular problem, except the basic requirement of accounting for
the production of strong radio radiation. Current astronomical theory has nothing to offer
as a means of meeting this requirement except the synchrotron process, which, as brought
out earlier, is wholly inadequate. But the isotopic adjustment process discussed in
Chapter 18 provides an explanation that is in full agreement with the observations.
The most glaring deficiency in the current astronomical views regarding the radio
radiation is the one that authors such as Shipman are conceding in their discussions of the
subject: the lack of any plausible explanation of the structure of the extended sources.
Our finding is that these sources are expanding clouds of matter not essentially different,
except in the distribution of their component motions, from the other strong radio sources
that we have examined.
In all explosive events within our ordinary experience we observe that an expanding
cloud of material is ejected from the exploding object. A supernova remnant is such a
cloud. One of the rather surprising results of the development of the consequences of the
postulates of the theory of the universe of motion is the finding that the white dwarf, a
small compact object, is likewise an expanding cloud of material. It is essentially the
same kind of thing as the cloud that is expanding in space, differing only in that it is
expanding into time, and is therefore contracting when viewed from the spatial
standpoint. This difference in behavior is easily understood when the inverse nature of
motion in time (as compared to motion in space) is taken into consideration. Expansion
into space increases the spatial size of one cloud of explosion products. Expansion into
time decreases the size of the other.
The "mysterious" pulsars have an equally simple explanation. They are merely moving
white dwarfs. The ordinary white dwarf, as we have seen, is a stationary expanding
object; stationary in space (aside from ordinary vectorial motion) and expanding in time.
The pulsar is moving at ultra high speed, the next higher speed range. This object
therefore adds another motion, expanding in time like any other white dwarf, and, in
addition, moving translationally in a dimension of space other than the one represented in
the conventional spatial reference system.
The quasars have the same kind of a combination of motions as the pulsars. Thus we can
describe both of these classes of objects as stationary in the dimension of the reference
system (except for the normal recession and possible random motion in space),
expanding into time (equivalent space), and having a linear motion in a second spatial
dimension. Here the explosive increase of speed into the ultra high range has resulted in
the addition of two more motion components to the original spatial motion, an expansion
and a translational motion. Because of the alternation of space and time in the basic
motion, one of these added components must be motion in time and the other motion in
space. In the case of the quasars and the pulsars, the expansion is in time and the
translation is in space. But, as we saw in Chapter 15, where the theoretical situation was
examined, it is equally possible, under appropriate circumstances, for the expansion to
take place in space (that is, in the second spatial dimension) and the translation to take
place in time. This produces the same results, except that space and time are
interchanged. Here we have expansion in space and translation in time.
Although the combination of motions is essentially the same in both cases, the observed
phenomena are totally different, because of the limitations of the spatial reference system.
To observation, quasars and pulsars are small, very compact, contracting objects.
Inverting the roles of space and time in this description, we find that the explosion
products of the inverse type are large, very diffuse, expanding objects.
In both cases, the motion in the early stages, immediately following the explosion, is
modified by gravitation. As we saw in the case of the quasars, the spatial motion in the
second scalar dimension is normally unobservable, but for a time subsequent to the
explosion this unobservable scalar motion is acting against gravitation. The gradual
elimination of the gravitational effect allows the progression of the natural reference
system that was counterbalanced by gravitation to become effective, reversing the change
of position in the reference system that resulted originally from gravitation. It was noted
in Chapter 22 that this process results in an observable movement in space during the
early part of the quasar life, gradually decreasing, and terminating at a quasar speed of
1.00.
In this instance, Case I, as we will call it, an object that is expanding in time, and is
therefore compact in space, undergoes a linear outward motion in space. In the inverse
situation, Case II, an object that is expanding in space, and therefore extends over a large
spatial volume, undergoes a linear outward motion in time (unobservable). In both cases,
the first portion of the spatial motion operates against gravitation, and the gravitational
change of position that is eliminated is observable. Thus in Case I there is an observable
linear translational motion that terminates at a quasar distance of 1.00, where the net
gravitational motion reaches zero. In Case II there is an observable linear expansion
terminating at the same 1.00 distance. Beyond this point the expansion takes the normal
spherical form that results from a random distribution of directions.
A rapidly moving stream of particles is commonly called a jet. Thus the spatial expansion
at ultra high speeds takes the form of a jet and sphere combination. As we saw earlier,
scalar motion does not distinguish between the direction AB and the opposite direction
BA. It follows that where there is no obstacle in the way of the expansion, two oppositely
directed jet and sphere combinations originate at each explosion site. The objects
inversely related to the quasars and pulsars therefore manifest themselves by a radiation
pattern that can be described as having a dumbbell shape.
This widely dispersed matter is not generally regarded as an ‖object― in the same sense in
which this term is applied to a quasar, but actually the two are identical in form, aside
from the inversion of space and time. The quasars and pulsars are compact in space and
spread out over a very large expanse of time. The radio-emitting dumbbell is compact in
time and spread out over a very large expanse of space. Both of these kinds of objects are
essentially nothing but expanding clouds of explosion products. The difference between
them, as they appear to our observation, is due to the manner in which we are observing
them; that is, we are able to detect changes of position in three dimensions of space, but
our direct apprehension of time is limited to the scalar progression. We detect other
motion in time only by its effect, if any, on spatial positions.
Deviations from the dumbbell pattern are caused by obstructions in the way of travel of
the explosion products, by supplementary explosive activity, by vectorial motion of the
galaxy of origin during the expansion stage, or by interaction with neighboring galaxies.
The structure of the radio-emitting cloud of matter thus has a considerable amount of
diversity, but the division into two somewhat symmetrical regions is generally apparent,
except where a specific direction is imparted to the motion of the explosion products by
escape through a single orifice.
Distant radio galaxies are subject to the same lateral displacement of the radio image that
applies to the quasars, but this displacement is small compared to that resulting from the
linear expansion of the explosion products, and it is generally obscured by elements of
the structure due to that expansion. However, as noted in Chapter 22, both the large scale
structure and the small scale displacements are observed in some cases.
The ultra high speed motion in the interiors of the giant galaxies is thermal motion, in
which the directions of the motions of the individual particles are continually changing
because of repeated contacts of the moving particles. When the galactic explosion occurs,
those of the ultra high speed particles that escape from the galaxy are incorporated into
the two major explosion products. Here the forces tending to confine this material are
inadequate to accomplish total confinement, and the ultra high speed thermal motion is
therefore gradually converted into ultra high speed linear outward motion. Thus both of
the major products of the galactic explosion, the quasar and the radio galaxy, are ejecting
the dumbbell type of radio-emitting clouds.
As brought out in the theoretical discussion in Chapter 15, the ultra high speed particles
expanding into space in the combination jet and sphere pattern are moving at the same
total speed as the pulsars. Thus their ultimate fate is the same. Except for a relatively
small proportion that are slowed down sufficiently by environmental factors to reduce
their speeds below the two-unit level, the individual particles of the expanding cloud of
matter eventually cross the boundary and escape into the cosmic sector in the same
manner as the pulsars and the quasars. The x-ray radiation from the relatively small
number of particles that return to the lower speed ranges is too widely scattered to be
observable. Optical radiation is visible only from entrained material in the early jet stage.
The ultra high speed expansion is therefore primarily a radio phenomenon.
In addition to the components moving at less than unit speed, and the components
moving at ultra high speed that have just been discussed, the products of the most violent
explosions also include particles moving with intermediate speeds. As we have seen in
the earlier pages, motion in this speed range (the speeds of the components of the white
dwarfs) does not change the position in space. From a spatial standpoint, the particles that
constitute this intermediate speed component are motionless. The spatial densities of the
outward-moving material are high enough to carry most of this otherwise motionless
matter with the streams, but some of it remains at the explosion site. In those cases where
the size of this remainder is substantial, the radio emission pattern has three main centers
rather than only two. Some of the entrained intermediate speed matter may also drop out
of the stream during the jet stage, resulting in local concentrations of material, often
called "knots," in the jet.
The discussion of the cloud of ultra high speed matter that produces the dumbbell type of
radio-emitting structure in this chapter completes the identification of the different types
of motion combinations that are involved in the phenomena of the upper speed ranges.
Summarizing these findings, it can be said that, although the objects included in this
category show a wide diversity of shapes and sizes, all the way from tiny, but extremely
dense, aggregates to very diffuse clouds of material spread out over vast regions of space,
they can all be described as fast-moving clouds of matter, either clouds of particles or
clouds of stars. The very diffuse objects are clouds of matter widely dispersed in space by
the forces of the explosions. The very compact objects are clouds of matter widely
dispersed in time by forces of the same kind.
The variations in the way in which these clouds appear to observation are due to the
differences between motion in space and motion in time, and to the variability in the
manner in which these different motions are distributed among the three speed levels of
the material sector of the universe. The relations between the different kinds of observed
objects are brought out clearly by the comparison in Table XVIII.
Here we see that all of the new type of objects discovered by the astronomers during the
last few decades, from the rather commonplace supernova remnants to the "mysterious"
quasars, are explosion products, differing in the way in which they appear to observation
because some are aggregates of particles while others are aggregates of stars, and because
there are variations in two properties of the motions of their components: the speed level
(which determines whether the motion is in space or in time), and the motion
distribution-unidirectional (linear) or random (expansion or contraction). An additional
variation is due to the fact that some of these objects (the white dwarfs, for example) are
single entities while others are combinations in which a relatively compact object, such as
a radio galaxy, is associated with an extended cloud of material.
In this connection, it should be understood that expansion in time, like any other time
motion, acts as a modifier of the spatial dimension of a cloud-that is, as a contraction in
equivalent space-as long as the total motion of the object has a net spatial resultant. Thus,
even though motion in time is not, in itself, observable, the decrease in the size of an
astronomical object due to expansion in time can be observed.
TABLE XVIII
MOTION COMBINATIONS AT UPPER RANGE SPEEDS
(entries listed in order of increasing speed level, 1 to 3)
Aggregates of particles
White dwarf - early: ET
White dwarf - late: CT
White dwarf remnant: ES
Pulsar - outgoing: ET, LS
Pulsar - incoming: CT, LS
Pulsar remnant
Component A: ES
Component B: LT, ES
Aggregates of stars
Quasar: ET, LS
Radio Galaxy: LS
Intermediate speed gas component: ET
Associated radio cloud: LT, ES
E = expanding; C = contracting; L = moving linearly; S = in space; T = in time
It is appropriate to emphasize that the explanations that emerge from the application of
the Reciprocal System of theory to the extremely compact objects, and related
phenomena, that have been brought within the scope of astronomical observation in very
recent years, are not drawn from the land of fantasy in the manner of "black holes,"
"degenerate matter," and the like, but are simple and direct results of two aspects of
motion that have not been recognized by previous investigators: motion in time and
motion at speeds exceeding that of light.
When the full range of motions is recognized, the explanations of the newly discovered
objects and phenomena emerge easily and naturally, each taking its specific place in the
evolutionary pattern of the material sector of the universe. This characteristic of the
theoretical development continues what has been one of the outstanding features of the
previously described results of the application of the theory of the universe of motion to
the astronomical field. Instead of being a collection of unrelated classes of entities, each
originating under a special set of circumstances, all of the observed astronomical objects
are found to have their definite places in an evolutionary path resulting from aggregation
under the influence of gravitation.
We have seen, for instance, that the formation of stars and galaxies is not the result of
hypothetical processes that operate only under very special conditions, as assumed in
present-day astronomy. Instead, the formation of each class of objects takes place at the
appropriate point in the evolutionary path as the direct result of gravitational aggregation,
a process that is known to exist and to be operative under the conditions existing at the
point of formation of the particular object. The situation with respect to the other
phenomena that have been examined in the preceding pages is similar. It was not
necessary to call upon processes that require the existence of special conditions of an
unusual nature to explain the strong radiation at radio or x-ray frequencies that is received
from certain classes of objects. Here again, the observed phenomena are explained by
means of processes that necessarily take place at certain stages of the evolutionary
development. Nor do we have to follow the astronomers' practice of evading the task of
accounting for such phenomena as the cataclysmic variables by calling them "freaks."
These phenomena have places on the evolutionary path that are just as specific as those
of the better known astronomical objects.
The view of the newly discovered compact objects and other "puzzling" features of the
large-scale activity of the universe that we obtain by applying the physical principles
developed in the two preceding volumes of this work differs quite radically from the way
in which these phenomena are portrayed in current astronomical theory. But when it is
realized that the astronomical theories in these areas are based almost entirely on
assumptions, it should be evident that such conflicts are inevitable. The astounding extent
to which astronomical science has degenerated into science fiction will be described in
Chapter 29. In the interim we will examine a few phenomena that were not taken up
earlier because it was evident that they could be more conveniently considered after the
role of the quasars and associated phenomena had been clearly defined.
CHAPTER 27
Pre-Quasar Phenomena
In the preceding pages we have seen that a Type II supernova in the outer regions of the
galaxy, originating from a relatively large star, produces a pulsar that moves away from
the explosion site at ultra high speed, and also an assortment of products of smaller sizes
and lower speeds, both above and below unit speed (the speed of light). We have also
seen that when large numbers of these supernova explosions occur in the interiors of the
oldest and largest galaxies (as most of them do, since the oldest stars are concentrated in
the central regions of these galaxies), the pressure that is built up by the fast-moving
explosion products ultimately blows out a section of the overlying layers of the galaxy.
This fragment then moves off at ultra high speed as a quasar. Now we will want to give
some consideration to the events that precede this ejection.
The fact that the energy of each of the major explosive events comes from an
accumulation of relatively small (compared to the final energy release) energy increments
contributed by explosions of individual stars not only establishes the normal pre-quasar
pattern, but also determines the kind of variations from the normal pattern that are
possible. Since any small galaxy, or even a globular cluster, may incorporate a few
remnants of disintegrated old galaxies, Type II supernovae may occur in any aggregate,
but they are relatively rare in the small young structures, and most of their products
escape immediately from these structures. However, when a galaxy reaches the stage in
which some of its constituent stars other than the strays begin to arrive at their age limits,
the number of supernovae in the galactic interiors, where the oldest stars are
concentrated, increases dramatically. Coincident with the increase in age, a galaxy also
increases in size, and the interior regions in which the explosive activity is taking place
are enclosed by a continually growing wall of overlying matter. In the ordinary course of
events this growth leads the increase in internal activity by a sufficient margin to prevent
the escape of any large amount of explosion products until the quasar stage is reached.
The normal pre-quasar period is therefore characterized by a slow, but steady, build-up of
intermediate and ultra high speed matter in the galactic interiors.
With the possible exception of one class of galaxies that we will consider shortly, the
galaxies of the normal evolutionary sequence, those that will eventually eject quasars if
they are not captured by larger aggregates before they reach the critical age, show no
structural evidence of the activity that we find from theory is taking place in their
interiors. There are, however, two observable phenomena that indicate the existence and
magnitude of this activity. One of these is radio emission. The magnitude of the radiation
at radio frequencies indicates the rate at which isotopic adjustments are taking place in
matter recently accelerated to speeds greater than unity by the supernova explosions. It
has been shown by Fanti et al. that the amount of radio emission is related to the
brightness, and hence to the size, of both spiral and elliptical galaxies, as the theory
requires.271 All of the more advanced spirals are radio emitters, and the giant spheroidal
galaxies are strong radio emitters.
Further evidence of the presence of upper range speeds in the galactic interiors is
provided by the high density that is characteristic of the central cores of the larger
galaxies. According to current estimates, the density in the core of our Milky Way galaxy
is 30 or 40 times as great as would normally be expected, while the central regions of M
87, the nearest, and consequently the best known of the giants, are estimated to be at least
80 times the normal density. Current efforts to explain these abnormal densities are based
on the assumption that there must be a large number of high density objects in these
central regions: white dwarfs, or the hypothetical neutron stars or black holes.
The development of the theory of the universe of motion now reveals that the extremely
high density of all of the compact astronomical objects-white dwarf stars, pulsars, x-ray
emitters, galactic cores, quasars, etc.-is due to the same cause: speeds in excess of unity
(the speed of light). The conventional explanation of the high density of the white dwarfs
is based on the idea of a ‖collapse― of the atomic structure, and it therefore cannot be
extended to an aggregate composed of stars. The effect of upper range speeds, on the
other hand, is independent of the nature of the moving entities. The reduction in the
effective distance between objects by reason of these speeds is a specific function of the
speed, irrespective of whether these objects are atoms or stars.
Thus, the high density of the central regions of the larger galaxies is not due to the
presence of unusual concentrations of very dense objects, but to the distortion of the scale
of the reference system that results from the high speeds of the normal constituents of the
galactic interiors. The cores of these galaxies are in the same physical condition as the
white dwarf stars and the quasars; that is, their density is abnormally high because
introduction of the time displacement of the upper range speeds has reduced the
equivalent space occupied by the central portions of the galaxies. In brief, we may say
that the reason for the abnormal density in the older and larger galaxies is that these
galaxies have white dwarf cores-not white dwarfs in the core, but cores in which the
constituent stars and particles are in the same condition as the constituent particles of a
white dwarf star.
We do not have enough information to enable tracing the build-up of the internal activity
of the galaxies from its beginning, but we do have some knowledge about the interior of a
galaxy that is not yet very far along this road. This is our own Milky Way galaxy, where
we have the advantage of proximity, and can observe details that would otherwise be
beyond the scope of our instruments. A small region, known as Sagittarius A, apparently
located at the dynamic center of the galaxy, has some unusual characteristics which
indicate that it is the kind of a core that we could expect in a spiral of moderate age. The
picture is not entirely clear as yet, but as one report puts it, "Radio observations indicate
that something quite unusual is going on at the center of our own galaxy."231 Another
observer draws the same conclusion from the infrared emission which, he says, is "so
intense that it cannot easily be interpreted unless we believe that something very special
is occurring there."272
Here we have another instance of the association of strong radio and strong infrared
emission that was discussed in Chapter 14, an association that the astronomers have never
been able to explain. Referring particularly to the quasars, Shipman calls this "the
infrared puzzle."273 Both of these types of radiation are characteristic of matter that is
moving at upper range speeds. Their existence in the core of the Milky Way galaxy
shows that this galaxy has already developed the kind of an intermediate speed core-a
white dwarf core-that we would theoretically expect in a galaxy of this size and age.
The optical radiation from the core is unobservable because of absorption in the
intervening matter, but some information as to the size and properties of this core has
been derived from infrared and radio measurements. It is generally assumed that the
radiation at about two microns wavelength is thermal, and that its intensity is proportional
to the star density. As indicated in Chapter 14, our findings are in agreement with this
conclusion. On this basis it is estimated that there are about 70 million solar masses
within 10 parsecs of the center of the core, and that the density in the innermost volume
of 0.1 parsec radius is 100 million times the star density in the vicinity of the sun.274 On
first consideration such a concentration may seem incredibly large, but when it is realized
that this observed high spatial density is actually a very low density in time, it becomes
evident that the observed magnitude is not out of line with other limiting densities. For
instance, the density of solid matter at zero temperature and pressure is in the
neighborhood of 100 million times the density of the most diffuse stars. 275
The radiation in the near infrared comes from stars that are moving at upper range speeds
(which accounts for their high spatial density), but are composed of particles whose
speeds (temperatures) are in the range below unity (which accounts for the thermal
character of the radiation). In addition to this type of radiation, there is also a very intense
radiation in the far infrared, a nonthermal radiation that "is presumed to be synchrotron
radiation."276 In the light of the findings detailed in the preceding pages, it is evident that
this presumption is incorrect, and that the non-thermal radiation, both the infrared and the
associated radio emission, originates from isotopic adjustments in matter that has been
accelerated to upper range speeds. The existence of radiation of this nature identifies the
Milky Way galaxy as one that has a good start toward the build-up of matter with speeds
in the upper ranges which will eventually lead to the kind of a gigantic explosion that
ejects a quasar. "Astronomers," says Hartmann, "are still groping for explanations of
what is happening at the center of the Milky Way."277 Here is the framework of the
explanation that they are looking for.
As noted earlier, the evidence of internal activity increases as the galaxy becomes older
and larger. We are not yet able to make a quantitative determination of the maximum size
from theoretical premises, but we know from theory that such a limit exists, and this is
confirmed by observation. Fred Hoyle points out that "Galaxies apparently exist up to a
certain limit and not beyond."278 Rogstad and Ekers give us an idea as to the location of
that "certain limit." They report that an absolute photographic magnitude of about -20 is a
necessary condition for a spheroidal galaxy to be a strong radio source.279
Some of the giant galaxies that are in the neighborhood of this limiting size have jets of
high speed material issuing from their central regions. The nature and properties of these
jets were examined in Chapter 26. Our present concern is with their origin. Such a jet is a
conspicuous feature of the giant galaxy M 87. Like the quasar 3C 273, with which it is
associated, M 87 is of special interest because it is the only member of its class near
enough to be accessible to detailed investigation. This object has all of the features that
theoretically distinguish a galaxy that has reached the end of the road. It is a giant
spheroidal, with the greatest mass of any galaxy for which a reasonably good estimate
can be made; it is an intense radio source, one of the first extragalactic sources to be
identified; and a jet of high speed material emitting strongly polarized light can be seen
originating from the interior of the galaxy. These indications of explosive activity are so
evident that they were recognized in the original application of the Reciprocal System of
theory to the astronomical field, just as soon as the theoretical limits to the life of the
galaxies were discovered, long before any observational evidence of galactic explosions
was recognized by the astronomers. The 1959 publication contained this statement: "It
would be in order to identify this galaxy [M 87], at least tentatively, as one which is now
undergoing a cosmic explosion."
Jets such as that issuing from M 87 are obviously produced under conditions in which
pressure is released in a specific direction. Since the galactic explosion that produces a
quasar blows out a particular segment of the outer structure of the galaxy, the spatial
motion of the quasar is given such a direction. Similar conditions may exist where
fragmentary material is ejected, and in that event the initial ejection takes the form of a
jet. The observable astronomical jets are fast-moving streams of unconsolidated material
with individual speeds (temperatures) that extend into the upper ranges. On this basis
they should theoretically be strong emitters of radiation at radio wavelengths, particularly
at the ends of the jets, and the radiation should be highly polarized. These deductions,
based on the theoretical relations developed in the earlier pages, are in agreement with
the observations.
The theoretical development likewise accounts for a remarkable feature of the jets that is
inexplicable in the context of current astronomical theory. This is the nearly uniform
thickness of the M 87 jet and others of a similar nature. The hypothesis that the
astronomers have invoked to account for the radio emission and the polarization would
result in a rather rapid expansion and dissipation of the jet. Why this does not occur is, to
them, a mystery. Simon Mitton makes this comment:
The thickness of the jet is only tens of light years, so there must be a powerful constraint
to the natural expansion of the gas.280
The development of the theory of the universe of motion identifies this "powerful
constraint." Aside from some entrained low speed matter, the constituents of these jets
are atoms and particles moving at speeds in the two upper ranges. At these speeds the
cloud of particles that constitutes the jet is expanding into time, rather than into space,
and its spatial dimensions are decreasing slightly, rather than increasing.
The available evidence does not indicate specifically how the jet originated. It is possible
that the hole in the outer structure of the galaxy through which the material of the jet is
issuing may be the result of a collision similar to that which seems to have taken place in
NGC 5128 and some other radio galaxies. However, the relatively small cross-section of
the jet and the absence of any indication of major distortion of the galactic structure
suggest that the jet is more likely to be an after-effect of the ejection of a quasar or other
explosion product. It no doubt takes an appreciable time to close the opening left by the
ejection of a section of the outer wall of the galaxy, and during this interval there must be
some loss of energetic material from the interior. If this is a correct interpretation of the
situation, the leakage now visible as a jet will eventually terminate as the outer wall of
the galaxy reforms and closes the existing gap.
There is at least one quasar in the immediate vicinity that could have been ejected from
M 87 recently. According to Arp, the average recession speeds of the galaxies in different
parts of the region around M 87 range from about 400 km/sec more than the speed of M
87 to about 400 km/sec less.244 Any quasar or radio galaxy whose normal recession is
within about 0.0015 of the recession of M 87 is therefore a probable member of the
cluster of galaxies centered around M 87, and is a possible product of an explosion of that
galaxy. Included within these limits is a quasar, PKS 1217+02, with a redshift of 0.240,
which is equivalent to a recession shift of 0.0045 (almost the same as that of M 87).
There are also several radio galaxies in the same neighborhood, with redshifts that qualify
them as possible partners of this quasar. It thus appears likely that PKS 1217+02 and one
of the nearby galaxies, perhaps 3C 270, with redshift 0.0037, were ejected in a relatively
recent explosion.
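A brief numerical sketch may help here. Assuming the two-component quasar redshift relation used in the earlier chapters, in which the total quasar redshift z is the recession component q plus an explosion-generated component 3.5q^1/2, the figures quoted above can be reproduced directly (the constant and function names below are merely illustrative):

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def recession_from_total(z_total):
    """Recover the recession component q from a total quasar redshift,
    assuming z_total = q + 3.5 * sqrt(q) as in the earlier chapters."""
    # Solve the quadratic in sqrt(q): x**2 + 3.5*x - z_total = 0
    x = (-3.5 + math.sqrt(3.5**2 + 4 * z_total)) / 2
    return x**2

# Membership window around M 87: +/- 400 km/sec of recession speed
print(400 / C_KM_S)            # ~0.0013, i.e. "about 0.0015"

# PKS 1217+02: total redshift 0.240
q = recession_from_total(0.240)
print(q)                       # ~0.0045, the recession component cited above
print(q + 3.5 * math.sqrt(q))  # ~0.240, recovering the total redshift
```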
Of course, it is not possible to reconstruct the exact sequence of events in this crowded
area where there are so many galaxies that are interacting with each other, but it is clear
that the whole range of explosion products is present, from the very old Class II quasar,
3C 273, to the jet of M 87, which originated only yesterday, astronomically speaking.
There may even have been an explosion of M 87 that did not produce a quasar. It has
been noted that the galaxy M 84 (radio source 3C 272.1) is aligned with the M 87 jet in
such a manner as to suggest that this galaxy may have been formed from material ejected
in a more violent period of the activity of the galaxy that preceded the production of the
jet.281 The present activity of M 87 could well be the concluding phase of that explosive
event.
Ultimately, after a number of ejections have occurred, an exploding galaxy will have lost
so much of its substance that it will be unable to resume its normal shape and once more
confine the explosion products to the interior of the structure. Thereafter the pressures
necessary for the ejection of fragments of the galaxy will not be generated, and the
products of the supernova explosions will be expelled at more moderate speeds in the
form of clouds of dust and gas. The galaxy M 82, the first in which definite evidence of
an explosion was recognized, seems to be in this stage. Photographs of the galaxy taken
with the 200-inch Palomar telescope show immense clouds of material moving outward,
and the galactic structure appears badly distorted.282
Just how large M 82 may have been in its prime, before it began ejecting mass, cannot be
determined from observation, but presumably it was in the giant class. At present it is in
the range of the spirals. Sooner or later the remnants of all such overage galaxies will be
gathered into their younger and larger neighbors. The eventual fate of M 82 is clearly
foreshadowed by a comment in an article by A. R. Sandage that the evidence of explosive
events in this galaxy was discovered in a survey of "a group of visible galaxies centered
on the giant spiral galaxy M 81."282 Capture of one after another of this group of
galaxies will eventually build M 81 up to the maximum size. The giant thus produced
will then continue on its way to the ultimate destruction that it, in turn, will experience,
leaving behind remnants that will be incorporated into the new galaxies that form in the
regions of space that are vacated.
Identification of the galaxies that, like M 82, are in the process of disintegration, is
complicated by the fact that galaxies in the process of consolidation display many of the
same features. A collection of galaxies with these features, an "Atlas of Peculiar
Galaxies," compiled by Halton Arp, probably contains a mixture of both types. The
galactic combinations should outnumber the disintegrations by a rather wide margin,
since many combinations are required to produce one giant spheroidal candidate for
disintegration.
Before turning to another subject, it may be of interest to note that the astronomers have
been so frustrated in their attempts to understand M 82 as an exploding galaxy that they
are now shifting to other ideas in the hope of getting something that they can fit into the
prevailing structure of astronomical theory. The following is a recent statement by
Harwit:
The brightest far-infrared extragalactic source known (M 82) at one time was
thought to be an exploding galaxy because hydrogen is seen streaming out of its
central portions at velocities of 1,000 kilometers a second. Energetic processes
that we do not yet understand appear to be active in this galaxy.283
This is a good example of the operation of one of the policies that has taken modern
astronomical theory out of the real world and into the land of fantasy. M 82 exhibits some
of the characteristic features of an exploding object. Recognition of these features
naturally led at first to the conclusion that an explosion was in process in the interior of
the galaxy. But as more information has been accumulated, difficulties have been
experienced in reconciling this information with current theories as to the nature of such
an explosion. As Harwit reports, the situation is not understood. The rather obvious
implication of these difficulties is that the current astronomical theories, insofar as they
apply to the M 82 situation, are wrong, but rather than pursuing this line of thought the
theorists have chosen to develop some new hypotheses to replace the explosion
assumption-hypotheses that are more speculative and less subject to disproof by testing
against observation. The present-day tendency toward this kind of a retreat from reality
will be given some further consideration in the next chapter.
Another kind of galactic phenomenon results from what we may call premature explosive
activity. A galaxy may, for example, capture a number of relatively old stars quite early
in its life, or it may even pick up some old star clusters or a remnant of a disintegrated
galaxy. These older stars will reach their age limits and explode before the galaxy arrives
at the stage where such explosions are normal events. If the premature activity of this
nature is not extensive, the energy that is released is absorbed in the normal motions of
the galaxy. But where a considerable number of stars-those in a captured cluster, for
instance-reach the age limit in advance of the normal time, some significant results may
follow.
If large scale activity of this kind begins when the galaxy is in an earlier stage in which it
is smaller and less compact than the giant spheroidals, the concentration of explosion
products in the interior may break through the overlying material before the pressure
required for the ejection of a quasar is attained. The theoretical results of this kind of a
situation are observed in a class of objects first identified and described by Carl Seyfert,
and known as Seyfert galaxies. These are spiral galaxies, much smaller than the giant
spheroidals, and by reason of their spiral structure, in which much of the mass is spread
out in the form of a disk, their central regions are relatively exposed, rather than being
buried under the outlying portions of the galaxies, as in the giants. The action that is
going on in the Seyferts is thus more accessible to observation.
Present-day astronomical theory is totally unable to account for the observed properties
of these galaxies. With reference to the facts that are now known about them, D. W.
Weedman makes the comment that "The reason for their existence remains one of the
most pressing astronomical mysteries."284 As in so many of the "mysteries" examined in
the preceding pages, however, these observations are readily explained in the context of
the universe of motion. The biggest enigma for the astronomers is the magnitude of the
energy emission. Radiation of the upper range types-radio and far infrared-is being
emitted from these Seyfert galaxies in the same manner as from the core of the Milky
Way galaxy, but at an immensely greater rate. As reported by Neugebauer and Becklin:
The amount of power such galaxies radiate in the infrared corresponds to as much
as 10¹¹ times the power output of the sun. This is approximately the amount of
power radiated by all the stars in our galaxy at all wavelengths.168
"Conventional concepts of nuclear physics are woefully inadequate in accounting for
such a large energy output from such a miniscule region,― 285says Mitton. The
astronomers' perplexity is still more vividly expressed in this statement:
One cannot help wondering what strange machine is hidden at the center of that
galaxy [NGC 1275, a Seyfert] and others similar to it. Such prodigious emission
of energy and matter from a region that appears to be shrinking, the more we
study it, poses questions to which we have no answers.286 (P. Maffei)
Now that we have established the nature of the quasars, the finding that the Seyferts are
premature quasars identifies the source of the energy, and eliminates the problem of the
size of the emitting region. This region appears small only if we look at it as a spatial
domain, which it is not. It is actually a large region containing a huge number of stars, but
its extension is in time, rather than in space.
The violent motion required by the theory of the universe of motion has been detected in
the cores of these Seyfert galaxies. R. J. Weymann reports that the emission spectra of the
Seyfert galaxies "indicate that the gases in them are in a high state of excitation and are
traveling at high speeds in clouds or filaments. Outbursts probably occur from time to
time, producing new high-velocity material." This, of course, is a description of the state
of affairs that the theory says should exist, not only in the Seyfert galaxies, but in the
cores of the giant spheroidals as well. To the astronomers the whole situation is a
"puzzle" because, unlike the Reciprocal System, conventional astronomical theory
provides no means, other than gravitation, of confining high-speed material within a
galaxy, and gravitational forces are hopelessly inadequate in this case. Weymann
summarizes the situation in this manner:
If we accept the fact that the gas inside the tiny core of a Seyfert galaxy is moving
at the high apparent velocity indicated by the spectra, and if we assume that the
gas is not held within the core by gravitation, we must explain how it is replaced
or conclude that the violent activity observed in the core is a rare transient event
caused by some explosive outburst.
But the latter possibility, he concedes, is inadmissible, because the Seyfert galaxies
"cannot be considered particularly rare." Hence this piece of observational evidence that
is such a significant and valuable item of confirmation of the theory described in this
work, not only the theory of the Seyfert galaxies, but the whole theory of the galactic
explosion phenomena, including the quasars, is nothing but another enigma to
conventional theory.
Weymann also points out that the spectral characteristics of the light from the nuclei of
these Seyfert galaxies are quite different from those of the light coming from the outlying
regions.
Ordinary stars (such as our sun) emit more yellow light than blue light. This is
also the case if one observes a Seyfert galaxy through an aperture that admits
most of the light from the galaxy. As the aperture is reduced to accept light only
from the central regions, however, the ultraviolet and blue part of the spectrum
begins to predominate.231
This is another piece of information that fits neatly into the general theoretical picture.
We have deduced from theory that the predominantly yellow light (positive U-B) that we
receive from ordinary galaxies is characteristic of matter moving with speeds less than
that of light, while the predominantly ultraviolet light (negative U-B) is characteristic of
matter moving with upper range speeds. Now we observe an otherwise normal galaxy
with a nucleus in which there is some unusual activity. From theoretical considerations
we identify this activity as being due to a series of supernova explosions that are
accelerating some particles or aggregates of matter to speeds in excess of the speed of
light, and we find that the light from this galaxy displays just the characteristics that the
theory requires.
The existence of some kind of an unidentified energetic process in the interiors of the
Seyferts-a "strange machine," as Maffei called it in the statement previously quoted-is
generally recognized. Simon Mitton makes this comment:
The variations in NGC 1068 (a Seyfert galaxy) require a non-thermal mechanism
for the generating source of the intense infrared emission . . . Because of the
difficulties with the hot dust concept, Rieke and Low prefer to attribute the
radiation to a mysterious non-thermal source.287
As reported by Mitton, it is now generally agreed that there is sufficient evidence to show
that there are "periodic explosions in the Seyfert nucleus that blast debris into the
surrounding regions." But these explosions are unexplained in current astronomical
thought. "All models of Seyfert nuclei ultimately rely on the ad hoc existence of a
primary energy source."285
The theory developed herein resolves all of these issues. Furthermore, it explains the
periodic nature of the explosive activity. This is one of the most difficult aspects of the
situation from the standpoint of current theory. Observations confirm the existence of
high speed matter in the interiors of the Seyfert galaxies in the intervals between
explosions, but, as pointed out by Weymann, conventional astronomical theory has no
way of explaining the build-up and containment of this very energetic material. In this
case, as in so many others, the Reciprocal System, by providing an explanation, is filling
a conceptual vacuum.
The same factor that makes the internal activity of the Seyfert galaxies more accessible to
observation than that of the giant spheroidals, the thinner layers of overlying material,
also limits the kind of products that can result from this activity. In these smaller galaxies
it is not possible to build up the great concentration of energy that is necessary in order to
eject a quasar, and the emissions of material therefore take less energetic forms. The most
common result is nothing more than an outflow of matter in an irregular pattern, but in
some instances small fragments of the galaxy are ejected, without the ultra high speed of
the quasar.
Because of the periodicity of the explosive events in the Seyfert galaxies the nature and
magnitude of the radiation from the products are variable. Immediately after an outburst
the galaxy is a strong radio and infrared emitter, as noted in Chapter 18. As time goes on,
the isotopic adjustments are completed and this radiation therefore decreases. As a result,
the radio emission from some of the Seyferts is little, if any, greater than that from the
average spiral galaxy. Except for that portion which is entrained in the outgoing low
speed matter, the intermediate speed products of the explosion remain in the immediate
vicinity of the galaxy because of the absence of translational motion in space in the
intermediate speed range. Ultimately this material cools enough to drop back below the
unit speed level. This initiates isotopic adjustments of the inverse nature, producing x-
rays. Thus some Seyferts are strong x-ray emitters, while in others little or no x-ray
radiation is detected,264 depending on the stage in which the galaxy happens to be when
observed. As would be expected, the stronger sources, both radio and x-ray, are subject to
large variations.
It is quite evident that there is some kind of a connection between the Seyferts and the
quasars. As expressed by Weymann, "Except for an apparent difference in luminosity,
Seyfert galaxies and quasars represent essentially similar phenomena."231 Many
astronomers believe that quasars are simply distant Seyfert galaxies, the basis for such a
conclusion being the finding that a number of quasars are surrounded by diffuse matter
that has the same redshift as the quasar itself.
It is difficult, however, to see why this conclusion should necessarily follow from the
observed facts. Some of the reports specify that what has been observed is "nebulosity"
that presumably indicates the presence of hot gas. But the presence of hot gas
surrounding an object does not preclude that object from being a quasar. Indeed, our
findings with respect to the origin of the quasars indicate that they must be surrounded by
hot gas in their early stages, and probably are in their later stages as well. Nor is the
hypothesis as to the identity of the Seyferts and the quasars entitled to any more credence
because an association has been found between a quasar and a galaxy of the same
redshift. The logical conclusion in this case is that the previous classification was in error,
and that the observed object is actually a Seyfert galaxy.
The Seyferts are difficult to identify at great distances because the cores are so much
more luminous than the surrounding structure. It can be expected, therefore, that
improvements in instrumentation and procedures will result in identifying an increasing
number of objects of this type among the distant objects now classified as quasars. Only a
small proportion of the spiral galaxies have thus far been recognized as Seyferts.
Weedman estimates about one percent.284 Even a substantial increase over this percentage
would be consistent with the theoretical status of the Seyferts as deviants from the normal
evolutionary pattern, the pattern that culminates in the production of quasars.
The analog of the Seyfert galaxy is not the quasar but the giant spheroidal galaxy from
which the quasar was ejected. Both of these types of galaxies are subject to periodic
outbursts in which quantities of dust, gas, and galactic fragments are ejected. But the
giant galaxy also ejects quasars and diffuse material at ultra high speeds, while the
Seyfert explosions are not powerful enough to accelerate any of their products into the
ultra high range. Consequently there are no counterparts of the quasars in the Seyfert
products. Nor do these products have any of the other ultra high speed properties, such as
the characteristic radio structure.
No Seyfert galaxy exhibits a double radio structure such as that found in most
radio galaxies and quasars.286 (P. Maffei).
To conclude the discussion of the pre-quasar situation, we turn now to the earliest
ancestors of the giant galaxies that produce the quasars, the globular clusters. The general
run of stars of these clusters are far too young to become supernovae, but as emphasized
in the earlier pages, the dispersed material from which the globular clusters were formed
contained a few remnants of disintegrated galaxies-stars and small star clusters. These are
incorporated into the newly formed globular clusters, usually serving as nuclei for the
cluster formation. They are already well along the way to their limiting age, and may
reach it while the cluster is still an independent unit.
In a large cluster, one that has not yet undergone the attrition that takes place in the
immediate vicinity of a galaxy, the amount of material overlying the central regions is
sufficient to withstand a considerable amount of internal pressure. Any ultra high speed
explosion products probably escape, but those that are moving at less than unit speed are
largely confined, while the intermediate speed products, aside from those that are
entrained in the outward-moving material, remain at the location of origin, inasmuch as
they have no spatial motion components. The presence of these intermediate speed
products results in the existence of a high density region in the center of the cluster, a
small-scale replica of those in the cores of the large galaxies. After the few very old stars
are gone there is no replacement of the energy lost from the explosion products, and their
temperature therefore decreases. At some point it drops below the unit level. This
initiates x-ray emission.
A 1977 publication reported that seven "x-ray stars" had been found in the globular
clusters of our galaxy.288 Unlike the returning white dwarfs, whose x-ray emission is
observable only when the material from the interiors of these stars breaks through the
overlying low speed matter in a nova explosion, these "x-ray stars" are actually
concentrations of explosion products similar to those in the observable supernova
remnants, and they continue their emission in the manner of those remnants, gradually
decreasing as the isotopic adjustments are completed.

CHAPTER 28

Inter-sector Relations
Unquestionably, the most intriguing new finding that has emerged from the development
of the theory of the universe of motion is the existence of an inverse sector of the
universe that duplicates the material sector which has heretofore been regarded as the
whole of physical existence. As might be expected, this finding has met with a cold
reception by those scientists who adhere strongly to orthodox lines of thought. This is, in
a way, rather inconsistent, as these same individuals have been happy to extend
hospitality to the same ideas in different forms. The concept of an ‖antiuniverse―
composed of antimatter surfaced almost as soon as antimatter was established as a
physical reality; the hypothesis of ‖multiple universes― gets a respectful hearing from the
scientific Establishment; and astronomical literature is full of speculations about ‖holes―
that may constitute links between those universes-black holes, white holes, wormholes,
etc.
It should therefore be emphasized that the theory of the universe of motion which
identifies an inverse sector is not based on radical departures from previous thinking, but
on concepts that were already familiar features of scientific thought. Actually, all that has
been done in the extension of the new theory into this area is to take the vague concept of
an antiuniverse, put it on a solid factual foundation, and develop it in logical and
mathematical detail. Many of the conclusions that have been reached in the course of this
development are new, to be sure, but they are implicit in the antiuniverse concept.
Observational identification of antimatter in our local environment shows that the
observed universe and the antiuniverse are not totally isolated from each other; some
entities of the "anti" type exist in observable form in our familiar physical universe. It is
only one step farther-a logical additional step-to a realization that this implies that the
complex entities of the observed type may have components of the "anti" nature. Once
this point is recognized, it can be seen that the unorthodox conclusions that have been
reached in the preceding pages are simply the specific applications of the antiuniverse
concept.
For example, additions to the linear component speeds (temperatures) decrease the
density of ordinary astronomical objects. It follows from the inverse nature of the "anti"
sector, the cosmic sector, as we are calling it, that addition of speeds of the "anti"
character increases the density. Similarly, addition of rotational motion in space to an
atom of matter decreases the isotopic mass, while addition of rotational motion of the
inverse type (motion in time) increases the isotopic mass. And so on.
The new theoretical development has merely taken the familiar idea of a universe of
motion, and the equally familiar idea of existence in discrete units only, and has followed
these ideas to their logical consequences, an operation that was made possible by the only
real innovation that the new development introduces into physical theory: the concept of
a universe composed entirely of discrete units of motion. With the benefit of this new
concept, it has been possible to define the physical universe in terms of the two postulates
stated in Volume I. The contents of this present volume describe the detailed
development of the consequences of these postulates, as they apply to astronomy. Before
concluding this description, and taking up consideration of some of the other
consequences and implications of the findings, it will be appropriate to give further
attention to the few, but important, direct contacts between the two sectors of the
universe.
In one sense, the two primary sectors of the universe, the material and the cosmic, are
clearly differentiated. The phenomena of the material sector take place at net speeds that
cause changes of position in space, whereas the phenomena of the cosmic sector take
place at net speeds that cause changes of position in time. But the space and time of the
material sector are the same space and time that apply to the cosmic sector. For this
reason, the phenomena of each sector are also, to some degree, phenomena of the other as
well.
Some of the observable effects of this inter-sector relationship have already been
discussed. In Volume I the cosmic rays that originate in the cosmic sector were
considered in substantial detail, and in the preceding chapters of this present volume
similar consideration has been given to the quasars and pulsars that are on their way to
the cosmic sector. In these areas previously examined, we have been dealing with
phenomena in which physical objects acquire speeds, or inverse speeds, that cause them
to be ejected from one sector into the inverse sector, where the combinations of motions
that constitute these objects are transformed into other combinations that are compatible
with the new environment. In addition to these actual interchanges of matter between
sectors, there are also situations in which certain phenomena of one sector make
observable contact with the other sector because of this point that has just been brought
out: the fact that the events of both sectors involve the same space and time.
As we have seen in the earlier pages, the dominant physical process in each sector is
aggregation under the influence of gravitation. In the material sector gravitation operates
to draw the units of matter closer together in space to form stars and other aggregates.
When portions of this matter are ejected into the cosmic sector in the form of quasars and
pulsars, gravitation ceases to operate in space. This leaves the outward progression of the
natural reference system unopposed, and that progression, which carries the constituent
units of the spatial aggregates outward in all directions, destroys the spatial structures
and leaves their contents in the form of atoms and particles widely dispersed in both
space and time. Meanwhile, gravitation in time has become operative, and as it gradually
increases in strength it draws the dispersed matter into stars and other aggregates in time.
These aggregates then go through the same kind of an evolutionary course as that
followed by the aggregates in space.
As this description indicates, the basic physical units maintain the continuity of their
existence regardless of the interchanges between sectors, merely altering their
distribution in space and time. In the material sector they are distributed throughout the
full extent of the three dimensions of the spatial reference system, but they move only
through the restricted region of time traversed in a linear progression. In the cosmic
sector these distributions are reversed. Contacts between the entities of the material
sector and those of the cosmic sector are therefore limited. In view of the relatively low
density of matter in the universe as a whole, a cosmic entity moving one-dimensionally
through three-dimensional space will, on the average, have to travel a long way before
encountering a material object. Nevertheless, some such encounters are continually
taking place.
The key factor in this situation is the nature of the relation between space and time. Not
until comparatively recently was it realized that such a relation actually exists. Even in
Newton's day these two entities were still regarded as being totally independent. The
current view is that time is one-dimensional, and constitutes a kind of quasi-space which
joins with three dimensions of space to form a four-dimensional space-time framework,
within which physical objects move one-dimensionally. While this four-dimensional
space-time concept is relatively new, the basic idea of space and time as the elements of a
framework, or setting, for the activity of the universe is one of long standing. Indeed, it is
so deeply embedded in physical thought that it is very difficult to recognize the existence
of any alternative. The problem involved in making a break with this familiar habit of
thought is illustrated by the fact that even in the first edition of this work, the postulates
of the theory being described were still expressed in terms of "space-time." Eventually,
however, it was realized that space-time is actually motion.
Throughout the development of thought concerning this subject, it has been recognized
by everyone that motion is a relation between space and time. The magnitude of the
motion, the speed or velocity, has been expressed accordingly, in terms of centimeters per
second, or some equivalent. The four-dimensional concept embraced by current science
assumes that a totally different kind of relation also exists. In application to entities of a
fundamental nature, such a duality is inherently improbable, and the development of the
theory of the universe of motion now indicates that the assumption is erroneous. Our
finding is that any relation between space and time is motion or an aspect of motion.
It is now evident that the concept of space-time employed in conventional physical
theory, and carried over into the early stages of the development of the theory of the
universe of motion, is a partial, and rather confused, recognition of the nature of the
fundamental relation between space and time. This so-called "space-time," a simple
relation between a space magnitude and a time magnitude, is the basic scalar relation
between space and time; that is, "space-time" is actually scalar motion. Fundamentally,
this relation is mathematical. Its dimensions are therefore mathematical, or scalar,
dimensions. From the mathematical standpoint, an n-dimensional quantity is simply one
that requires n independent scalar magnitudes for a complete definition. It follows that in
a three-dimensional universe there can be three scalar dimensions of motion.
The spatial aspect of one (and only one) of these can be represented geometrically in a
reference system of the conventional coordinate type. Here we are dealing with three
dimensions of space, but only one dimension of motion. The reference system is not
capable of representing motion in the other two scalar (mathematical) dimensions. But
the fact that the same space and time are involved in all types of motion means that there
are some effects of the motion in these other dimensions that are observable, at least
indirectly, in the reference system. The force of gravitation, for example, is reduced by a
distribution over all three dimensions, and only a fraction of it is effective in the space of
the reference system.
Use of the term "dimension" in both mathematical and geometric applications leads to
some confusion. The term is usually interpreted geometrically, and many persons are
puzzled by the introduction of scalar dimensions of motion into the physical picture. It
has therefore been suggested that some different designation ought to be substituted for
"dimension" in one of the two applications. However, all dimensions are inherently
mathematical. The geometric dimensions are merely representations of numerical
magnitudes.
Motion at a speed less than unity causes a change of position in space. The three-
dimensionality of the universe applies to the spatial aspect of this motion as well as to the
motion as a whole. The space involved in one of the scalar dimensions of motion can
therefore be resolved into three independent components, which can be represented
geometrically. Since no more than three dimensions exist, there is no basis, within three-
dimensional geometry, for representation of the spatial aspects of the other two scalar
dimensions of motion, except under certain special conditions discussed in the preceding
pages. Motion at a speed greater than unity causes a change of position in three-
dimensional time. If independent, this motion cannot be represented in the spatial
reference system. However, if it is a component of a combination of motions in which the
net total speed is on the spatial side of the neutral level, the temporal speed acts as a
modifier of the spatial speed; that is, as a motion in equivalent space.
From the foregoing it can be seen that the universe is not four-dimensional, as seen by
conventional science, nor is it six-dimensional (three dimensions of space and three
dimensions of time), as some students of the Reciprocal System of theory have
concluded. We live in a three-dimensional universe. Just how these three dimensions
manifest themselves in any specific case depends on the individual circumstances.
Two physical entities make contact when they occupy adjacent units of either space or
time. It is commonly believed that the essential condition for contact is to reach the same
point in space at the same time, but this is not necessarily true. Objects located in the
spatial reference system must be at the same stage of the progression-that is, the same
clock time-in order to make contact, but this is only because there is a space progression
paralleling the time progression recorded by the clock, and unless two such objects are at
the same stage of this space progression they are not at the same spatial location. Two
objects that are in contact in space are not usually at the same location in
three-dimensional time. Likewise, objects that are in contact in time are usually at
different spatial locations.
This fact that the spatial contact is independent of the time location accounts for the
containment of the material moving at upper range speeds in the interiors of the giant
galaxies prior to the explosions that produce the quasars. Since the components of this
high speed aggregate are expanding into time at speeds in excess of the speed of light, it
might be assumed that they would quickly escape from the galaxy. But the increased
separation in time does not alter the spatial relations. The equilibrium structure in space
that exists in the outer regions of the giant galaxy is able to resist penetration by the high
speed material in the same manner in which it resists penetration by matter moving at less
than unit speed.
The motions of cosmic entities in time are similarly restrained by contacts with cosmic
structures, but these phenomena are outside our field of observation. The phenomena of
the cosmic sector with which we are now concerned are the observable events which
involve contacts of material objects with objects that are either partially or totally cosmic
in character. Interaction of a purely material unit with a cosmic unit, or a purely cosmic
unit with a material unit follows a special pattern. Where the structures are identical,
aside from the inversion of the space-time relations, as in the case of the electron and the
positron, they destroy each other on contact. Otherwise, the contact is a relation between
a space magnitude and a time magnitude, which is motion. Viewed from a geometrical
standpoint, these entities move through each other. Thus matter, which is primarily a time
structure, moves through space, while the uncharged electron, which is essentially a
rotating space unit, moves through matter.
Material and cosmic atoms, and most sub-atomic particles, are composite structures that
include both material (spatial) and cosmic (temporal) components. Inter-sector contacts
between such objects therefore have results similar to those of contacts between material
objects. To an observer, such a contact appears to be the result of a particle entering the
local environment from an outside source. These results are indistinguishable from those
produced by an incoming cosmic atom. The contact will therefore be reported as a
cosmic ray event. The cosmic atoms involved in these events are moving at the ordinary
inverse speeds of the cosmic sector, rather than at the very high inverse speeds of the
atoms that are ejected into the material sector as cosmic rays. The most energetic of the
reported cosmic ray events therefore probably result from these random encounters.
One other cosmic event that has an observable effect in the material sector is a
catastrophic explosion, such as a supernova or a galactic explosion, that happens to
coincide with the time of the spatial reference system. The radiation received in the
material sector from ordinary cosmic stars is widely dispersed in space, because only a
few of the atoms of each of these stars are located in the small amount of space that is
common to the cosmic star and the spatial reference system as they pass through each
other. But a cosmic explosion releases a large amount of radiation in a very small space,
just as an explosion of the material type releases a large amount of radiation in a very
short time. We can thus expect to observe some occasional very short emissions of strong
radiation at cosmic frequencies (that is, the inverse of the frequencies of the radiation
from the corresponding explosions of the material type).
Both the theoretical investigations and the observations in this area are still in the early
stages, and it is premature to draw firm conclusions, but it seems likely that the
theoretical short, but very strong, emissions of radiation can be identified with some of
the gamma ray "bursts" that are now being reported by the observers. A reported "new
class of astronomical objects" is described in terms suggesting cosmic origin. These
objects, says the report, "emit enormous fluxes of gamma radiation for periods of seconds
or minutes and then the emission stops."289 Martin Harwit tells us that "remarkably little
is known about gamma ray bursts,"290 and elaborates on that assessment by citing an
observers' summary of the existing situation, the gist of which is contained in the
following statement:
Neither the indicated direction or coincidence in times of occurrence have yet
established an association between these bursts and any other reported
astrophysical phenomena. Even today, 1978, with 71 bursts cataloged, and with
improved directional resolution available, the sources of these bursts remain
unidentified without even a strong suggestion of the class or classes of objects
responsible.291
In addition to these events involving contacts between the entities of the two sectors,
there are other phenomena which result from the fact that photons of radiation exist on
the sector boundary, and therefore participate in the activities of both sectors. This is a
consequence of the status of unit speed as the speed datum, the physical zero, as we
called it in the earlier discussion. From the standpoint of the natural reference system, a
speed of unity measured from zero speed, and an inverse speed of unity measured from
zero energy (inverse speed) are equal to each other, and equal to zero. An object moving
at this speed relative to the conventional spatial reference system, or to an equivalent
temporal reference system, is not moving at all from the natural standpoint. The photons
of radiation, which move at unit speed in the conventional reference system, are thus
stationary in the natural system of reference, regardless of whether they originate from
objects in the material sector, or from objects in the cosmic sector. It follows that they are
observable in both sectors.
Because of the inversion of space and time at the unit level, the frequencies of the cosmic
radiation are the inverse of those of the radiation in the material sector. Cosmic stars emit
radiation mainly in the infrared, rather than mainly at optical frequencies, cosmic pulsars
emit x-rays rather than radio frequency radiation, and so on. But these individual types of
radiation are not recognized as such in the material sector because, as we found earlier,
the atoms of matter that are aggregated in time to form cosmic stars, galaxies, etc., are
widely separated in space. The radiation from all types of cosmic aggregates is received
from these widely dispersed atoms as a uniform mixture of very low intensity that is
isotropic in space.
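As a purely illustrative sketch of this inversion, the relation described above can be written as f_cosmic = f_unit²/f_material, so that a frequency above the unit level inverts to one below it, and vice versa. The unit frequency used below is a hypothetical placeholder, not an asserted value; it is chosen only to lie between the infrared and optical ranges so that the qualitative statements in the text are reproduced.

```python
# Sketch of frequency inversion at the unit level: a material-sector
# frequency of f (measured in natural units) corresponds to a cosmic-sector
# frequency of 1/f, i.e. f_cosmic = F_UNIT**2 / f_material.
F_UNIT = 1.0e14  # Hz; assumed placeholder between infrared and optical

def cosmic_counterpart(f_material_hz):
    """Cosmic-sector counterpart of a material-sector frequency (Hz)."""
    return F_UNIT**2 / f_material_hz

print(cosmic_counterpart(6e14))  # optical (~6e14 Hz) -> ~1.7e13 Hz, infrared
print(cosmic_counterpart(1e9))   # radio (~1 GHz)     -> ~1e19 Hz, x-ray region
```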
This "background radiation" is currently attributed to the scattered remnants of the
radiation originated by the Big Bang, which are presumed to have cooled to their present
state, equivalent to an integrated temperature of about 3K, in the billions of years that are
supposed to have elapsed since that hypothetical event occurred.
The Big Bang is one of the major features of the universe as it appears in modern
astronomical theory. The next chapter will present a comparison of this astronomical
universe with the universe of motion defined by the postulates of this work. It will be
shown that, although the building blocks of the astronomers' universe are observed
entities -stars, galaxies, etc.-that exist in the real sense, the universe that they have
constructed as a setting for these real objects is a purely imaginary structure that has no
resemblance to the real physical world.
Inasmuch as science claims to have methods and procedures that are capable of arriving
at the physical truth, it may be hard to understand how the astronomers, who presumably
utilize scientific methods, could have reached such very unscientific results. But an
examination of astronomical literature quickly shows just what has gone wrong. The
astronomers have followed the lead of a modern scientific school whose methods and
procedures do not conform to the rigid standards of traditional science.
Of course, this assertion will be vigorously denied by those whose activities are thus
characterized. So let us see just what is involved in this situation. Aside from gathering
information, the traditional way of extending scientific knowledge is by means of what is
known as the hypothetico-deductive method. This method involves three essential steps:
(1) formulation of a hypothesis, (2) development of the consequences thereof, and (3)
verification of the hypothesis by comparing these consequences with the facts disclosed
by observation and measurement. The nature of this process allows a wide latitude for the
construction of the basic hypotheses. On the other hand, the constraints on item (3), the
verification process, are extremely rigid. In order to qualify as an established item of
scientific knowledge, a hypothesis must be capable of being stated explicitly, so that it
can be tested against observation or measurement. It must be so tested in a large number
of separate applications distributed over the entire field to which this item is applicable, it
must agree with observation in a substantial number of these tests, and it must not be
inconsistent with observation in any case.
It is important to bear in mind that a physical proposition of a general nature, the kind of
a hypothesis that enters into the framework of the astronomers' universe, cannot be
verified directly in the manner in which we can verify a simple assertion such as "Water
is a compound of oxygen and hydrogen." Direct verification of a general relation would
require an impracticable number of separate correlations. In this case, therefore, it is
necessary to rely on probability considerations. Each comparison of one of the
consequences of a hypothesis with observed or measured facts is a test of that hypothesis.
Disagreement is positive. It constitutes disproof. If even one case is found in which a
conclusion that definitely follows from the hypothesis is in conflict with a positively
established fact, that hypothesis, in its existing form, is disproved.
Agreement in any one comparison is not conclusive, but if the tests are continued, every
additional test that is made without encountering a discrepancy reduces the probability
that any discrepancy exists. By making a sufficient number and variety of such tests, the
probability that there is any conflict between the consequences of the hypothesis and the
physical facts can be reduced to a negligible level. Just where this level is located is a
matter of opinion, but the principle that is involved is the same as that which applies to
any other application of the probability laws. Many positive correlations are required in
order to establish a probability strong enough to validate a hypothesis. If only a few tests
can be made, the probability of validity remains too low to be acceptable.
To illustrate the effect of a small number of correlations with empirical knowledge, let us
consider one of the coin tossing experiments that are used extensively in teaching
probability mathematics. We will assume, for purposes of the illustration, that the
participants have not been given the opportunity to examine the coin that will be used in
the experiment. Thus there is a small possibility that this coin is a phony object with
heads on both sides. If the first toss comes up heads, this is consistent with a hypothesis
that a two-headed coin is being used, but clearly, this one case of agreement with the
hypothesis does not change the situation materially. The odds are still overwhelmingly in
favor of the coin being genuine. Not until a substantial number of successive heads have
been tossed-perhaps nine or ten-would the double-headed coin hypothesis be taken
seriously, and a still longer run would be required before the hypothesis could be
considered validated.
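The arithmetic behind this illustration can be sketched briefly. The prior probability assumed below is purely hypothetical, chosen only to show how the balance shifts; the point is that one or two agreements leave the improbable hypothesis improbable, while a long unbroken run of agreements does not.

```python
def posterior_two_headed(prior, n_heads):
    """Probability that the coin is two-headed after n successive heads,
    given a (hypothetical) prior probability that it is two-headed.
    A genuine coin gives n heads with probability 0.5**n; a two-headed
    coin gives them with probability 1."""
    genuine = (1 - prior) * 0.5**n_heads
    phony = prior * 1.0
    return phony / (phony + genuine)

prior = 0.001  # assumed small prior chance of a phony coin
for n in (1, 5, 10, 15):
    print(n, round(posterior_two_headed(prior, n), 3))
# 1 head:   ~0.002 - one agreement changes the situation very little
# 5 heads:  ~0.031
# 10 heads: ~0.506 - roughly the "nine or ten" point mentioned in the text
# 15 heads: ~0.970
```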
The effect of the number of trials, or tests, on the probability of the validity of a
hypothesis is independent of the nature of the proposition being tested. Astronomical
conclusions are subject to the same considerations as any other hypotheses, including the
hypothesis of the double-headed coin. But very few of the key features of the
astronomers' picture of the basic structure of the universe are supported by more than one
or two correlations with observation. Some have none at all. The fact that the one or two
cases of agreement between theory and observation, where they exist, do not add
significantly to the probability of validity thus means that these crucial astronomical
conclusions are unconfirmed. As scientific products they are incomplete. The final step in
the standard scientific procedure, verification, has not been carried out.
To make matters worse, many of the conclusions are not merely unverified. The
processes by which they have been reached are such that a large proportion of them are
necessarily wrong. The reason is that these conclusions rest, in whole or in part, on
general principles that are invented. The status of invention as a source of physical theory
was discussed in Volume I, but a review of the points brought out in that discussion that
are relevant to the astronomical situation will be appropriate at this time.
Modern physical theory is a hybrid structure derived from two totally different sources.
In most physical areas, the small-scale theories, those that apply to the individual
physical phenomena and the low-level interactions, are products of induction from factual
premises. Many of the general principles, those that apply to large-scale phenomena, or
to the universe as a whole, are invented. "The axiomatic basis of theoretical physics
cannot be an inference from experience, but must be free invention,"292 is Einstein's
contention.
There is a great deal of misunderstanding as to the role of experience in the first step of
the scientific process, the formulation of a hypothesis, largely because of the language
that is used in discussing it. For example, in describing "how we look for a new
[physical] law," Richard Feynman tells us, "First we guess it."293 This would seem to
leave the door wide open, and such statements are widely regarded as sanctioning free
use of the imagination in theory construction. But Feynman goes on to stipulate that the
hypothesis must be a "good guess," and enumerates a number of criteria that it must
satisfy in order to qualify as "good." Before he is through he concedes that "what we
need is imagination, but imagination in a terrible strait-jacket."294
What Feynman calls a "good" guess is actually one that has a substantial probability of
being correct. As he points out, there are an "infinite number of possibilities" if invention
is unrestricted. The probability of any specific one of these being correct is consequently
near zero. The scientific way of arriving at a reasonably probable hypothesis (the way
that Feynman describes, even though some of his language would lead us to think
otherwise) is to utilize inductive processes such as extrapolation, analogy, etc., to obtain
the kind of an "inference from experience" to which Einstein objects. A hypothesis
derived inductively-that is, an inference from experience-is, in effect, pretested to a
considerable extent. For instance, an analogy in which a dozen or so points of similarity
are noted is equivalent to an equal number of positive correlations subsequent to the
formulation of a hypothesis. Thus the inductive theory has a big head start over its
inventive counterpart, and is within striking distance of proof of validity from the very
start.
But inductive reasoning requires a factual foundation. Inferences cannot be drawn from
experience unless we have had experience of the appropriate nature. In many of the
fundamental areas the necessary empirical foundations for the application of inductive
processes have not been available. The result has been a long-standing inability to find
answers to many of the major problems of the basic areas of physics. Continued
frustration in the search for these answers is the factor that has led to the substitution of
inventive for inductive methods.
A similar situation exists throughout most of the astronomical field, where normal
inductive methods are difficult to apply because of the scarcity of empirical information
and the unfamiliar, and seemingly abnormal, nature of many of the observed phenomena.
The astronomers have therefore followed the example of the inventive school of
physicists, and have drawn upon their imaginations for their hypotheses. Application of
this policy has resulted in replacement of the standard scientific process of theory
construction by a process of "model building." This process starts with a "free
invention," a "castle in the air," as H. L. Shipman describes it. Beginning with "a small,
neat castle in the air," he says, you "patch on extra rooms and staircases and cupolas and
porticos."295 The result is not a theory, an explanation or description of reality, it is a
model, something that, as Shipman explains, is merely intended to facilitate
understanding of the real world. "The model world exists only in people's minds,"296 he
says.
The fatal weakness of this kind of a program, based on invention, is that inventive
hypotheses are inherently wrong. The problems that they attempt to solve almost
invariably exist because some essential piece, or pieces, of information are missing. This
rules out obtaining the answer by inductive methods, which must have empirical
information on which to build. Without the essential information the correct answer
cannot be obtained by any means (except by an extremely unlikely accident). The
invented answer drawn from the imagination to serve as the basis for a model is therefore
necessarily wrong.
Of course, the erroneous invented theories, or models, cannot meet the standard tests of
validity, and the same process of invention has been applied to the development of
expedients for evading the verification requirements. Not infrequently these are
employed to evade actual conflicts with the observed facts. Chief among them is the ad
hoc assumption. When the consequences of a hypothesis are developed, and it is found
that they disagree with observation in some respects, instead of taking this as disproof of
the validity of the hypothesis, the theorist uses his ingenuity to invent a way out of the
difficulty that cannot be tested, and therefore cannot be disproved. He then assumes this
invention to be valid. Like the invented theories themselves, and for the same reasons,
these inventions that take the form of ad hoc assumptions are inherently wrong.
Another of the expedients frequently employed to justify acceptance of a hypothesis
whose validity has not been, or cannot be, tested is the "There is no other way" argument
that we have had occasion to discuss at a number of points in the preceding pages. No
further comment should be necessary on the usual form of this argument, but we often
meet it in a somewhat different form in astronomy. There are many astronomical
phenomena about which very little is known, and only one or two correlations with
observation are possible, as matters now stand. There is a rather widespread impression
that, under the circumstances, if a hypothesis is consistent with observation in these
instances, its validity is established. Here the argument is that the hypothesis has been
tested in the only way that is possible, and has withstood that test. The fallacy involved in
calling this a verification can be seen when it is realized that the limitation of the testing
of a hypothesis by reason of the unavailability of more than one or two tests is equivalent
to discontinuing the coin-tossing tests of the double-headed hypothesis after the first or
second toss. The truth is that the increase in the probability of validity of a hypothesis
that results from a favorable outcome of one or two tests is insignificant, regardless of the
reasons for the limitation of the testing to these cases.
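For readers who wish to see the arithmetic behind this coin-tossing analogy, the short calculation below (not part of the original argument; the 50/50 prior and the framing as a Bayesian update are assumptions made purely for illustration) shows how slowly confidence in the "double-headed coin" hypothesis actually grows when only one or two tosses are available.

```python
# Illustrative sketch only: how much does one favorable outcome really
# add to the probability that a coin is double-headed?
# The 50/50 prior is an assumption made for the sake of the example.

def posterior_double_headed(n_heads, prior=0.5):
    """Probability that the coin is double-headed after observing
    n_heads consecutive heads and no tails."""
    p_data_if_fair = 0.5 ** n_heads   # a fair coin gives n heads with probability (1/2)^n
    p_data_if_double = 1.0            # a double-headed coin always gives heads
    numerator = p_data_if_double * prior
    return numerator / (numerator + p_data_if_fair * (1 - prior))

for n in (1, 2, 5, 10):
    print(n, round(posterior_double_headed(n), 4))
```

On these assumptions, a single toss raises the probability only to about 0.67, and a second toss only to 0.80; something on the order of ten consecutive confirmations is needed before the hypothesis approaches practical certainty, which is the point being made in the text.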
What the current practice amounts to is that instead of proof the astronomers are offering
us absence of disproof. Shipman makes this comment about the situation in one of the
poorly tested areas:
To a great extent this picture [of stellar evolution] is based on limited models,
blind faith, and a few observed facts.297
"Blind faith" may be appropriate in religion, but it is totally unscientific. One of the most
unfortunate results of the reliance on absence of disproof is that it favors departures from
reality in the construction of theories. The farther a hypothesis diverges from reality, the
less opportunity there is for checking it against established facts, and the more difficult it
is to disprove. By the time the speculation reaches such concepts as the black hole, all
contact with reality has long since been lost.
For example, examination of the case in favor of the black hole as the explanation of the
x-ray source Cygnus X-l, the object that is supposed to provide the best observational
evidence for the existence of a black hole, reveals that this case is argued entirely on the
basis of what this object is not. It is not a white dwarf, so it is claimed, because it is larger
than the accepted unverified hypothesis as to the nature of the white dwarf stars will
permit. It is not a neutron star, because, for the same reason, the observations conflict
with the accepted unverified hypothesis as to the nature of the hypothetical neutron stars.
"It is difficult to explain Cygnus X-1 as anything but a black hole,"298 says Shipman. In
less credulous times, the inability of an investigator to find a viable explanation for a
phenomenon would have been regarded as an indication that his job is still unfinished.
But now we are expected to accept the best that he can do as the best that can be done.
In justice to this author, however, it should be noted that, although he accepts the
existence of the black hole as "probable" on the strength of the foregoing argument, he
evidently has some qualms about giving unreserved support to such an excursion into the
land of fantasy, because he goes on to say:
Black holes are, so far, entirely theoretical objects . . . It is very tempting,
especially for people who like science fiction, to succumb to the Pygmalion
syndrome and endow their model black holes with a reality that they do not yet
possess.299
It is, of course, true that the opportunities for gathering factual information are severely
restricted in astronomy, where experimentation is not possible and observation is limited
by the immense distances and very long times that are involved in the phenomena under
consideration. The structure of astronomical theory thus rests on a very narrow factual
base, and it is to be expected that more than the usual amount of speculation will enter
into astronomical thinking. But the presence of this speculative component in current
thought is all the more reason for taking special precautions to maintain a strict
distinction between those items that have met the test of validity and those that are still
unverified. In order to preserve the contact with reality, it is particularly important to
avoid pyramiding unverified results.
Here the demands of science collide with the interests of the scientists, especially the
theorists. Advancement of theoretical knowledge is a slow and difficult task. Few of
those who undertake this task ever accomplish anything of a lasting nature, other than
minor modifications of some features of previous thought. But the professional scientists
of the present day are under intense pressure to produce results of some kind. Financial
support, personal prestige, and professional advancement all depend on arriving at
something that can be published. As expressed among the university faculties, "Publish
or perish." So the theorists concentrate their efforts mainly in the far-out regions where
there is only a minimum of those inconvenient facts that are the principal enemies of
theories, and they fill the scientific literature with products that cannot be tested because
they have too few contacts with physical reality. It is the pyramiding of these untestable
hypotheses that has produced the imaginary universe of modern astronomy that we will
examine in the next chapter.
To the extent that the theorists make any attempt to justify their wholesale use of
imaginary entities and phenomena in the construction of their models, they rely on the
contention that "there is no other way"; that the amount of factual information available
for their use is totally inadequate to provide the foundation for theoretical development.
This is a specious argument. It serves the purpose of the individual whose primary
purpose is to find something that he can publish, but it makes no contribution toward the
advancement of knowledge. On the contrary, to the extent that the imaginary results are
accepted, it places obstacles in the way of real advances.
Furthermore, the lack of factual information is not nearly as acute as the astronomers
depict it. It is true that the amount of information about individual phenomena is often
quite limited, but this is not peculiar to astronomy. It is common to all areas of inquiry,
and science has found ways of overcoming this handicap. For example, information in
several areas may sometimes be pooled. The concept of "energy," which has played an
important part in the development of physical theory, did not emerge from the study of
any one individual area. It was derived by the process known as abstraction, involving the
use of data from many such areas. It would have been equally possible for the
astronomers to have abstracted the property of "extremely high density" from a number
of different astronomical phenomena, and to have examined it in the light of the large
amount of factual information thus collected. This might well have resulted in the
discovery of the true cause of this high density before it was brought to light by the
theory of the universe of motion.
Such considerations are now no more than academic in application to astronomy, since it
has been demonstrated in the preceding pages that the physical principles developed from
the postulates that define the universe of motion are capable of dealing with the whole
range of astronomical phenomena. But one of the things that many scientists have
envisioned is the eventual application of scientific methods and procedures to the solution
of the problems of some of the non-scientific branches of thought that have long been
mired down in confusion and contradiction. Before anything of this kind can be
accomplished, it will obviously be necessary for the scientific profession itself to return
to the traditional methods and procedures that are responsible for its record of
achievement. The black holes, the quarks, the Big Bang, and similar fantasies are the
products that are publicized as the fruits of scientific research in the media, and the
ordinary individual cannot be expected to realize that the remarkable accomplishments of
science over the past several centuries have not been made by such flights of fancy, but
by a steady application of the traditional methods of science to one problem after another,
testing each answer as it is obtained, and building up a solid and stable structure of theory
brick by brick. If science is to be applied to economics, for example, it will have to be in
this slow, careful, and painstaking way. Economics already has too many of the economic
equivalents of the black hole.

CHAPTER 29

The Non-existent Universe


Chapter 28 completes the description of the new view of astronomical phenomena that
we get from the development of the theory of the universe of motion, to the extent that
this development has thus far been carried. Before beginning consideration of a different
aspect of the physical universe in the final two chapters of this volume, it will be
appropriate to take a second look at the universe that this new understanding replaces, the
non-existent universe of the imaginative theorists that plays such a major role in present-
day physics and astronomy. The non-existent entities and phenomena that make up this
phantom universe have been discussed in detail in the earlier pages, but since this
discussion has been distributed over three volumes, there would seem to be some merit in
a recapitulation that brings the major astronomical items together, so that the connections
between the various items can be recognized, and the almost incredible extent of this
realm of fantasy that has grown up within the boundaries of the scientific disciplines can
be fully appreciated.
Construction of this elaborate network of figments of the imagination would have been
impossible in the prosaic and conservative science of Galileo and Newton, but when the
progress of experimental and observational discovery carried empirical knowledge
beyond the range of Newtonian theories, and thereby undermined their authority, Einstein
was able to secure acceptance of his contention that his distinguished predecessors were
wrong in believing that "the basic concepts and laws of physics . . . were derivable by
abstraction, i.e., by a logical process, from experiments."292 General acquiescence in his
dictum that "the axiomatic basis of theoretical physics cannot be an inference from
experience, but must be free invention" opened the gates to a free and unrestrained
exercise of the imagination. Accordingly, Bohr pioneered the idea of inventing new
physical laws for application in those areas where problems were encountered in applying
the established laws and principles, Einstein introduced the concept of flexible
magnitudes, Heisenberg promulgated a principle of uncertainty to legitimize
discrepancies, and soon the era of scientific invention was in full swing.
Now we are going to examine the structure of fantasy that has been erected by those who
have taken advantage of this license to give free rein to the imagination under the banner
of science, so that we can see just how far the universe of modern astronomy has
diverged from the universe of physical reality. Although it is fictional, this imaginary
universe has a logical structure. It is carefully reasoned from specified premises. But
some of these premises involve departures from reality. These are assumptions-free
inventions-in areas where the true facts were unknown, or not yet recognized, prior to the
investigation reported in this work. With the aid of such assumptions to complete their
foundations, the inventive astronomers have been able to build an elaborate structure of
theory extending far beyond the limits of the real universe and into the land of fantasy.
As pointed out in Chapter 28, the retreat from reality is primarily due to the fact that little
or no attempt has been made to subject the inventions, and the theoretical conclusions
based upon them, to the standard tests of validity. Inasmuch as the ties that bind this
structure of theory to the solid ground of observed and measured facts have been severed
only at a few specific points, it is usually difficult to determine by examination of any
one particular physical situation just how much is fact and how much is fiction. But we
can establish a clear line of demarcation between the real and the fictional by identifying
the points at which the false assumptions have been made, and following the lines of
reasoning, based on these assumptions, that lead to the kind of non-existent entities,
phenomena, and relations that populate the phantom universe of present-day science.
We will be concerned mainly with the astronomical fantasies, not only because
astronomy is the primary subject matter of this present volume, but also because it deals
with the physical extremes, and therefore has the effect of magnifying the departures
from reality. It is here, in the astronomical field, that we find the black holes, the
degenerate matter, the singularities, and other extravagances of fertile imaginations. But
the initial points of departure from the real world are at a more fundamental level. The
physicists are the ones that first strayed from the straight and narrow path. Astronomy has
suffered the consequences.
Of course, the astronomers do not recognize the remarkable extent to which their
discipline has taken on a fictional character, but at least some of them realize that there is
little connection between their theoretical universe and what is actually observed. As
Harwit puts it, there is "a gap between theorists and observers." He comments on the
"remarkable detachment" between observation and theory, and goes on to say:
The astrophysical concepts that lead us to an understanding of cosmic phenomena
have a history that is all but decoupled from the actual discovery of the
phenomena . . . Theory and observation pursue their own somewhat separate
ways, and the major cosmic phenomena continue to be discovered mostly by
chance.236
It is also beginning to be recognized that this gap will eventually have to be closed by
means of a reconstruction of basic theory. As noted elsewhere in this volume, there is a
tendency in astronomical circles to expect this reconstruction to take place in the
fundamental physical laws, rather than in astronomy itself. As demonstrated in this work,
a drastic revision of physical fundamentals is indeed required, but such a revision
necessarily has significant repercussions on the superstructure that the astronomers have
erected on the physical foundations that must now be rebuilt. At least some members of
the astronomical community are beginning to recognize this point. For example, Geoffrey
Burbidge, Director of the Kitt Peak National Observatory, made this comment in a recent
(1983) interview:
My suspicion is that Chip Arp [Mount Wilson and Las Campanas Observatories]
is right, and some of the main pillars of extra galactic astronomy are going to
tumble down.300
After all, astronomy is merely large-scale physics, and the astronomers are in the
awkward position of having to place the foundations of their theoretical structure in what
Paul Davies (one of the most enthusiastic of the current generation of fantasy-
constructors) describes as "the Alice-in-Wonderland world of the New Physics, a world
alive with paradoxes, mysteries, and discontinuities."301 As might be expected in an
Alice-in-Wonderland world, the retreat from reality starts at the very base of the
theoretical structure. This can be seen in the following comparison:
1. In the imaginary universe: The fundamental constituents of the universe are
elementary units of matter.
In the real universe: There are no elementary units of matter.
The word "elementary" in this context means "irreducible." In earlier eras matter was
regarded as elementary, in this sense, and since it was known to consist of discrete units,
the existence of an elementary unit of matter was taken for granted. One of the major
objectives of investigators in the physical field has been to identify the elementary unit.
In the meantime, however, the discovery of processes whereby matter can be transformed
into non-matter, and vice versa, has provided concrete proof that matter is not
elementary. Since matter and radiation, for example, are interconvertible, they must
necessarily be different forms of the same thing. And since matter cannot qualify as
radiation, nor radiation as matter, it follows that neither can be elementary. Both must be
forms of the elementary entity. Thus there are no elementary particles of matter in the
real universe.
2. In the imaginary universe: The elementary units of matter are quarks.
In the real universe: There are no quarks.
Non-existent particles obviously cannot be found by the normal scientific process of
discovery. They have to be invented. There seems to be a general impression that if the
inventions are held to a minimum in any specific case, the development of thought is still
scientific; that is, it continues to be a study of nature. But this view greatly
underestimates the effect of a single deviation from reality. The original step into the
phantom world may be relatively harmless. In itself, the issue as to whether or not there is
an irreducible unit of matter has no significant effect on the general physical situation.
But one false step leads to another, and soon the development of thought is far out of
touch with reality.
No invention can anticipate the results of future empirical discoveries. Consequently, the
history of inventive theories is one of never-ending modifications and adjustments,
usually moving farther and farther away from the original point of contact with empirical
facts. The quark hypothesis is the end result (so far) of the effort to identify the non-
existent elementary particle, or particles, of matter, and it carries this process to the point
of absurdity. The quark is purely hypothetical. There is no actual evidence of the
existence of anything of this kind. Indeed, one of the principal activities of "elementary
particle physics" is dreaming up plausible reasons why such evidence cannot be found.
3. In the imaginary universe: The atom is constructed of particles that are made
up of quarks.
In the real universe: The atom is an integral unit that has no "parts."
The quarks are not the only postulated particles that the investigators cannot find in the
real world. They cannot find the particles that are supposed to be constructed of quarks
either. They confuse this issue by giving these imaginary particles, the hypothetical
constituents of the atoms, the same names as observed particles such as electrons and
neutrons. But calling different objects by the same name does not make them the same
kind of objects. Regardless of what they are called, objects belong in the same category
only if they have the same properties. The properties that have to be ascribed to the
hypothetical sub-atomic particles in order to make it theoretically possible for them to be
constituents of atoms differ widely from the properties of the observed particles that are
called by the same names.
Stability, for instance, is an essential property of any atomic constituent, including the
hypothetical particle that is currently called a "neutron." The observed neutron is not
stable. It lives only about 15 minutes. Similarly, the properties that the hypothetical
atomic constituent currently called an "electron" must have in order to fit into its
prescribed place in the atomic structure are quite different from those of the observed
electron. We can deal with these imaginary electrons only on a statistical basis, and as
Herbert Dingle points out, we can make these statistical methods effective "only by
ascribing to the particles properties not possessed by any imaginable objects at all."302
Furthermore, as many leading theorists tell us, the atomic electron cannot be regarded as
a "real" particle. It does not "exist objectively,"337 they say. The idea that the real world
can be constructed of elementary units that are not real-that do not even "exist
objectively"-is the kind of an absurdity that is characteristic of the Wonderland of the
imaginary universe.
4. In the imaginary universe: The atom has a "nuclear" structure in which a
positively charged nucleus containing most of the mass is surrounded by
negatively charged electrons.
In the real universe: The atom is a single integral unit, not a collection of parts.
The experimental "nucleus" is actually the atom itself, and contains all of the mass.
Even though there are no "elementary" particles of matter, the "smallest" or "simplest"
particles of matter can be identified, and if these small or simple particles had the
properties that would qualify them as constituents of the larger particles, it would be in
order to postulate that the larger particles are so constituted. But since we know that
matter is not composed of elementary units of matter, there is no justification for
assuming that the atoms must necessarily be constructed of smaller particles of matter. It
follows that there is no reason why there must be atomic constituents. This eliminates any
grounds that may have existed for conjuring up imaginary constituents such as quarks, or
for inventing modifications of known particles to make them suitable as building blocks.
Since no real particles capable of meeting the requirements that apply to constituents of
atoms can be found, the logical conclusion (the one that has been reached in this work
from different premises) is that the atom is not constructed of subsidiary units. The
prevailing concept of a "nuclear" structure is a hypothetical assemblage of imaginary
particles; assumption piled upon assumption.
5. In the imaginary universe: Atomic behavior is governed by a set of laws
differing in significant respects from the laws governing the behavior of
macroscopic matter.
In the real universe: The same physical laws are applicable everywhere.
The inventive theorists find it necessary to invent new laws (a) to account for the
hypothetical behavior of the non-existent constituents of the atom, and (b) to account for
the phenomena of the region inside unit distance, where the inversion that occurs at all
unit levels (not yet recognized by conventional science) alters the manner in which the
physical laws apply. Even with an unlimited license for making ad hoc assumptions, the
builders of the imaginary universe have not been able to devise a set of laws for their
atoms that is logical and self-consistent. In order to justify holding on to their concept of
the nature of the atomic structure they have therefore advanced the strange contention
that their atom has these incomprehensible characteristics because nature itself is illogical
and inconsistent in the realm of the very small.
6. In the imaginary universe: At the atomic level the universe is illogical and
incomprehensible.
In the real universe: Phenomena at the atomic level have the same character as
those at the macroscopic level.
The physicists' atom is not a real physical entity:
The modern atom is "the solution of a wave equation, and nothing more."303 (E.
N. da C. Andrade) It is "in a way, only a symbol."304 (Werner Heisenberg) The
hypothetical electron constituent of the atom is an "abstract thing, no longer
intuitable in terms of the familiar aspects of everyday experience."305 (Henry
Margenau)
The theory of that atom (the quantum theory) is incomprehensible:
I think I can safely say that nobody understands quantum mechanics.306 (Richard
Feynman) An understanding of the `first order' is . . . almost by definition,
impossible for the world of atoms.307 (Werner Heisenberg)
As these statements from prominent scientists demonstrate, present-day science does not
even pretend that its atom belongs to the world of reality. But it asks us to believe the
preposterous assertion that the reality which admittedly does not exist at the atomic level
is somehow acquired in the course of combining these phantom atoms into macroscopic
structures. P. W. Bridgman states the case specifically in these words:
The world is not intrinsically reasonable or understandable; it acquires these
properties in ever-increasing degree as we ascend from the realm of the very little
to the realm of everyday things.308
This is utter nonsense, quite out of character for Bridgman, one of the keenest analysts
that the scientific profession has produced. A real structure can be built of real bricks. An
imaginary structure can be built of imaginary bricks. But a real structure cannot be built
of these imaginary bricks. What Bridgman has described is not the world as it actually
exists, but the physicists' understanding of that world. A real world can be built of real
entities that the physicists do not understand. Bridgman has used the term "not
understandable" where the correct term is "not understood." The practice of treating that
which is not understood as not understandable is quite common, but obviously without
justification. If this unwarranted extrapolation is removed from Bridgman's statement, it
becomes something like this:
The world is not fully understood. It is understood to an increasing degree as we
ascend from the realm of the very little to the realm of everyday things.
Here we have a correct description of the situation as it stood prior to the development of
the theory of the universe of motion described in this and the preceding volumes. The
point that is being brought out in this present chapter is that, in the absence of an
understanding of the phenomena of "the realm of the very little," the theorists have
invented a universe that they can manipulate to produce imaginary solutions for whatever
problems they may encounter. Thus far in our examination of the framework of this non-
existent universe we have been following the physicists' line of reasoning based on the
assumption (now known to be contrary to fact) that the basic entities of the universe are
elementary units of matter, a development of thought that arrives at an imaginary
structure of the atom of matter. Next we will trace a similar line of reasoning based on a
contrafactual assumption as to the nature of the energy generation process in the stars,
and we will examine the fantastic features of the imaginary world that result from the
merging of these two lines of thought.
7. In the imaginary universe: The light elements are the fuel for the energy
generation in the stars.
In the real universe: The heavy elements are the stellar fuel.
Like the nuclear atom, the hydrogen conversion process appeared plausible when it was
first proposed. Direct observation of the energy production is not possible, but the
assertion that the energy is produced by the only process then known that appeared
capable of meeting the requirements seemed reasonable at that time. However, as soon as
the astronomical consequences of the production of energy by this process were
examined, it should have been clear that this is not the process that the stars utilize in the
real world. A multitude of astronomical observations are in conflict with the
consequences of this assumption.
8. In the imaginary universe: The hot, massive stars are young. The stars of the
globular clusters are old.
In the real universe: The hot, massive stars are the oldest stars of their respective
generations. The stars of the globular clusters are relatively young.
The stellar age sequence in the imaginary universe of present-day astronomy is one of
the direct consequences of the assumption as to the nature of the energy generation
process, and it is a classic example of how an erroneous assumption in one limited area
can have consequences of a far-reaching nature. So far as the energy generation process
itself is concerned, the question as to which constituents of the star supply the energy is
not a critical issue, as long as the energy source is adequate and controllable. But the
indirect results of this error have been disastrous. The general acceptance of the hydrogen
conversion process as the stellar energy source has seduced the astronomers into
embracing an upside down view of the entire evolutionary process. If they had been
presented with this entire package as a whole, and had realized that it was all dependent
on an assumption as to the nature of an unobservable process, it is unlikely that this
package would ever have been accepted. But here, as in so many other cases, most of the
fictional components of theories are the results of extended lines of reasoning in which
the crucial role of the erroneous basic assumptions tends to be obscured. Many
astronomers are uneasy about this situation, and recognize that a fictional element has
entered into astronomy somewhere. Maffei makes this comment:
We are now moving beyond those concepts and the knowledge familiar to us in
the first half of this century, and we are entering a world in which science and
fantasy intertwine.309
It is evident, however, that there is no general understanding of how far the current
astronomical thinking has diverged from reality, or where the excursions into the land of
fantasy have originated. Item number 8 is one of the major points of departure. Another
consequence of the erroneous assumption as to the nature of the stellar energy generation
process that has played a significant part in diverting astronomical theory into fantasy-
land is the conclusion that the stars eventually run out of fuel.
9. In the imaginary universe: The light element fuel supply of a star is eventually
exhausted, and the star ultimately cools down to the temperature of interstellar
space.
In the real universe: The fuel supply is continually replenished by accretion of
matter from the environment.
At this point the lines of development from the basic products of the imagination that we
have identified thus far join to produce some further nonexistent phenomena.
10. In the imaginary universe: "With its fuel gone it [the star] can no longer
generate the pressure necessary to maintain itself against the crushing force of
gravity."61
In the real universe: Gas pressure operates in all directions equally; downward as
well as upward. The gravitational forces therefore remain the same regardless of
the magnitude of the gas pressure.
The structure of matter at zero absolute temperature, where thermal forces are absent,
arrives at an equilibrium condition, in which the gravitational force is counterbalanced by
an opposing force that has not been identified by conventional science, other than as an
"antagonist."26 There is no observational indication that this force is subject to any kind
of a limit, and we now find that in the universe of motion no such limit exists. The
"antagonist" is the force generated by the progression of the natural reference system
relative to the conventional reference system, and it cannot be overcome by the
gravitational force, however great that force may be.
11. In the imaginary universe: "The crushing force of gravity" acting against the
interior atoms of the star, after the elimination of the gas pressure, collapses their
structure.
In the real universe: (a) Elimination of the gas pressure, if it occurred, would not
increase the force acting on the central atoms. (b) The structure of the atom does
not collapse under pressure.
The "collapse" is an imaginary breakdown of the structure of the imaginary nuclear atom.
In this hypothetical atomic structure the imaginary positively and negatively charged
constituents are widely separated (on the atomic scale), leaving nothing but empty space
in the greater part of the volume occupied by the atom. The collapse is presumed to
eliminate most of this empty space, and bring the atomic constituents into contact. There
is ample observational evidence to support the theoretical conclusion that such a collapse
is impossible. The mere existence of stars that are 50 or 100 times as massive as the sun
is positive proof that the inter-atomic equilibrium is able to withstand the greatest
pressures of which we have any definite knowledge, those which exist at the center of
such a star. The contention that this pressure is increased when, and if, the star cools
because of the exhaustion of the fuel supply is pure nonsense. The matter in the center of
the star is subject to the full pressure due to the weight of the overlying material
regardless of whether that material is hot or cold.
12. In the imaginary universe: The collapse of the atomic structure converts the
matter of the star into a strange hypothetical state called "degenerate matter."
In the real universe: There is no degenerate matter.
In this connection, it should be realized that the "collapse" is not merely an assumption
that has no observational support. It is an assumption that is specifically contradicted by
the observed facts. As pointed out above, the existence of very massive stars is definite
proof that the inter-atomic equilibrium is maintained under the greatest pressures that are
known to be brought against it-immensely greater than the maximum pressures reached
in the smaller stars: the ones that are presumed to collapse into the degenerate state. The
truth is that the collapse is merely another addition to the chain of inventions. It is a
mythical collapse of a hypothetical assemblage of imaginary particles. The degenerate
matter is an imaginary product of that mythical collapse.
13. In the imaginary universe: The speed of light is an absolute limit on the speed
of material objects.
In the real universe: The speed of light is the limiting speed in one of the three
scalar dimensions in which motion can take place.
Here, again, the product of the imagination is specifically contradicted by observation
and measurement. As brought out in detail in Volume I of this work, and in other
previous publications, the Doppler shifts of the quasars are direct speed measurements,
and values exceeding 1.00 indicate speeds greater than that of light. The customary
application of Einstein's relativity mathematics to reduce these speeds below the 1.00
level is an unwarranted use of a relationship developed for, and justified in, a totally
different kind of a situation.
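As a point of reference, the contrast being drawn here can be stated compactly. The first relation below is the direct reading of a quasar redshift z as a speed measurement, which is the interpretation adopted in this work; the second is the conventional relativistic Doppler relation that is customarily invoked to keep the inferred speed below unity. Only the choice between the two interpretations is at issue; the algebra of the second relation is standard physics, not something taken from this text.

```latex
% Direct interpretation (as used in this work): the redshift measures the
% speed directly, so z > 1.00 corresponds to a speed greater than that of light.
\frac{v}{c} = z

% Conventional relativistic Doppler relation, by which any finite redshift
% is mapped to a speed below that of light:
1 + z = \sqrt{\frac{1 + v/c}{1 - v/c}}
\qquad\Longrightarrow\qquad
\frac{v}{c} = \frac{(1+z)^{2} - 1}{(1+z)^{2} + 1}
```

For a measured redshift of z = 1, for example, the direct reading gives v/c = 1, while the relativistic formula reduces this to v/c = 0.6.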
In this case, what the erroneous assumption has done is the inverse of the results of the
other basic errors that have been discussed. Those others opened the door to imaginative
ideas having no connection with reality; that is, they resulted in the extension of physical
and astronomical theory into areas that do not exist. General acceptance of the
assumption of an absolute limit at the speed of light has prevented extension of the theory
into some areas of the universe that actually do exist. It has blocked any investigation of
the phenomena of the realm of the very fast, and has enabled the fantasies of the
"degenerate matter" type to be taken seriously because they have had no competition.
14. In the imaginary universe: The white dwarf is an aggregate of degenerate
matter produced by the collapse of a star of small or moderate size.
In the real universe: The white dwarf is one of the products of a supernova
explosion. It is composed of ordinary matter that has been accelerated to speeds in
excess of that of light, and is therefore expanding outward in time (equivalent to
inward in space).
The white dwarf is an aggregate of ordinary matter produced from another aggregate of
such matter (a star) by one of the processes to which ordinary matter is subject, and it has
the properties of ordinary matter. Its only distinctive observable feature is the magnitude
of one of these properties, its density. Conventional science has no explanation for
densities in the range in which the white dwarf densities fall, because it accepts the
dictum of the inventors of the imaginary universe that speeds greater than that of light
(the speeds that are responsible for the high density) do not exist.
15. In the imaginary universe: The ordinary white dwarf eventually cools and
becomes a black dwarf: a dead star.
In the real universe: The white dwarfs lose energy to the environment. In
the case of those produced by Type I or relatively small Type II
supernovae, this energy loss eventually reverses the process that is
responsible for the small size and high density of the white dwarfs, and
expands them back into main sequence stars. There are no dead stars.
The black dwarf is purely hypothetical. There is no observational evidence that any such
objects exist. Like so many other features of the non-existent universe of present-day
astronomy, the black dwarf hypothesis survives only because the existing astronomical
facilities are not capable of producing the physical evidence that would demonstrate that
there are no such objects.
One of the problems that the astronomers have encountered in building their imaginary
universe is that the consequences of some of their basic assumptions do not agree with
the consequences of some of the others. The white dwarf is a case in point. It is the result
of lines of reasoning based on the erroneous assumptions that have been identified in the
foregoing paragraphs. But another assumption, likewise accepted by most astronomers,
leads to a totally different result.
According to conventional physics we should expect stars at the ends of their
lives to contract under their own gravity until their gravitational fields become so
strong that light no longer escapes from them and they become invisible.310
The feature of conventional physics to which this statement refers is Einstein's
assumption that gravitation is a distortion of space-time due to the presence of matter.
16. In the imaginary universe: Gravitation is a distortion of space-time and
therefore acts within the atoms as well as between them.
In the real universe: Gravitation is a motion of the individual units (atoms and
sub-atomic particles) and therefore acts only between the units.
This is another of the basic departures from reality that have taken the astronomers'
perception of the universe into the land of fantasy. From the space distortion hypothesis
the theorists have derived the concept of self-gravitation of the atom. It is assumed that
application of sufficient external force brings matter to a critical point where this self-
gravitation becomes effective. Beyond this point the atoms continue contracting by virtue
of their own gravity.
This process is quite different from the ‖collapse― envisioned in the theory that leads to
the astronomers' conception of the white dwarf. Thus there are two competing theories in
this area. To further complicate the situation, the results of observation do not agree with
either of these theories. The statement quoted above as to the conclusions of
‖conventional physics― goes on to say: ‖in fact, we observe the reverse. Stars typically
explode at a certain critical phase of their tives.― Faced with this real-life observation,
which could not be ignored, the astronomers have worked out a compromise between the
observations and their two theories. As it happens, they have never been able to ascertain
what stars explode, or why the explosions occur. In the absence of this information, the
latitude for ad hoc assumptions is almost unlimited, and the theorists have been able to
put enough of them together to construct an explanation that meets the current liberal
standards of acceptability; that is, there is not enough information available to disprove it.
It is assumed that, for some unspecified reason, large stars are unable to collapse quietly
into white dwarfs in the manner of their smaller counterparts, and instead terminate their
lives with explosions. Then it is further assumed that only the explosion products reach
the self-gravitation stage.
17. In the imaginary universe: Stars that exceed a certain mass limit terminate
their existence with explosive events that leave residues denser than the white
dwarfs.
In the real universe: Every star eventually reaches either a mass limit or an age
limit, and explodes, producing a white dwarf, or its inverse equivalent, or both.
Presumably the hypothetical critical density is somewhat above that of the hypothetical
degenerate matter. As one investigator in this field remarks, "precision is not possible,
because we do not know enough about the properties of matter at the 'supernuclear
densities' of a white dwarf." But according to the astronomers' theory, there must be a
physical state intermediate between the white dwarf and the self-gravitating object. To
meet this demand the theorists again call upon the remarkable property of the imaginary
neutron, that of becoming stable whenever stability is required by a theory.
18. In the imaginary universe: The high density products of explosions of stars in
the intermediate size range are neutron stars. They are observed as pulsars.
In the real universe: The pulsars are fast-moving white dwarfs. There are no
neutron stars.
The general impression today is that the status of the pulsars as neutron stars is an
established fact, although as Martin Harwit admits in a statement quoted earlier, the
astronomers "have no theories that satisfactorily explain just how a massive star collapses
to become a neutron star."184 The problems involved in explaining the properties of the
pulsars in terms of neutron stars are equally intractable. F. G. Smith, one of the leading
investigators in the field, concedes, in another of the earlier references, that little is
known about either the origin or the mechanism of the pulsars.183 Our development
shows that the neutron star is a typical product of the imagination. The inability to define
its properties is not surprising. The properties of non-existent entities are always difficult
to define precisely. The pulsars are actually white dwarf stars produced by supernova
explosions that are powerful enough to give some of their products speeds in the ultra
high range. These result in outward translational motion, as well as the expansion into
time that is characteristic of all white dwarfs.
19. In the imaginary universe: The terminal events in the lives of the largest stars
produce compact objects whose density is above the critical level. These are black
holes.
In the real universe: There are no limits on the size of white dwarfs, other than
those that apply to all stars. There are no black holes.
"Of all the conceptions of the human mind from unicorns to gargoyles to the hydrogen
bomb perhaps the most fantastic is the black hole . . . Like the unicorn and the gargoyle,
the black hole seems much more at home in science fiction or in ancient myth than in the
real universe."201 This comment by K. S. Thorne, one of the enthusiastic searchers for
evidence of these "fantastic" phenomena, is an eminently correct assessment of the
situation. This author goes on to assert that, "Nevertheless, the laws of modern physics
virtually demand that black holes exist." This, too, is true, but only because the particular
"laws of modern physics" to which he refers are not the laws of the solid and stable areas
of physics. They are the laws of the phantom universe.
Without the self-gravitation concept, the theorists have no way of producing the extreme
densities of the black holes. But once they invoke the aid of this concept they have no
way of stopping it. Indeed, it must accelerate. The same imaginary process that accounts
for the existence of black holes in the imaginary universe therefore limits these entities to
no more than a transient existence. The black hole contracts to a point.
20. In the imaginary universe: There is no limit to the process of contraction by
self-gravitation. It therefore continues until the entire star has shrunk to a mere
point: a singularity.
In the real universe: There are no singularities.
One of the recognized principles of logic, the branch of thought upon which scientific
procedure is organized, is the reductio ad absurdum, in which the falsity of a proposition
is established by demonstrating that a logical development of its consequences leads to an
absurdity. The singularity is an absurdity. It is totally foreign to all that we actually know
about the physical universe. It therefore follows that there is an error somewhere in the
line of thought that produced this absurd result. The findings of this present investigation
have now identified many such errors, but even without this new information it should be
clear that every assumption in the lines of thought leading to the singularity is open to
doubt until the situation is clarified.
The general assumption that the existence of black holes is at least quasi-permanent is, in
effect, a denial of the validity of the singularity hypothesis. But those who have so much
to say about the extraordinary properties of black holes are silent on the question as to
why, or how, the contraction process should stop at this black hole stage. Such details, it
seems, are unimportant in a universe of the imagination.
21. In the imaginary universe: The existing physical universe originated in a
gigantic explosion: the Big Bang.
In the real universe: There was no Big Bang. The information now available does
not indicate how the universe originated, or whether it had an origin.
In the singularity hypothesis the observed limits of gravitational contraction are ignored,
and this concept is carried to the point of absurdity. In the Big Bang hypothesis the same
treatment is accorded to the concentration of energy. We find from observation that the
greatest concentration of energy (matter and the motion of matter) in the material sector
of the universe is in a giant spheroidal galaxy containing somewhere in the neighborhood
of 10¹² stars, and we have reasons to believe, even without the positive information
derived from the theory of the universe of motion, that this is a limiting concentration
imposed by natural laws. The Big Bang theory ignores this limitation, and again the result
is an absurdity: a hypothetical event whose antecedents are completely unknown, whose
mechanism cannot be explained, and whose results, as we will see in Chapter 30, do not
agree with what we actually observe.
A comparison of the Big Bang theory (which describes the theoretical results of an
extremely large explosion) with the astronomers' theory of the origin of black holes in
supernova events (which describes the theoretical results of large explosions) provides a
good illustration of the inconsistencies so prevalent in the imaginary universe. In their
study of the ultimate fate of large stars, the theorists have produced a hypothesis, based
on the concept of self-gravitation derived from Einstein's theories, that specifies the results
of a supernova explosion. If the same hypothesis is applied to the Big Bang explosion,
the result of the Big Bang will be an immense black hole, or singularity, surrounded by a
relatively small amount of material expanding in space. This obviously is not the universe
that we observe, so the astronomers simply repudiate Einstein and his gravitational
theories, so far as their application to the Big Bang is concerned, and invent another, very
different, theory for this special situation.
This concludes the description of the principal features of the imaginary universe that
modern theorists have constructed to explain the phenomena that they have not been able
to bring within the bounds of the current understanding of the universe of physical
reality. It is not feasible to examine the immense amount of detail into which the
development of this imaginary universe has been carried-the elaborate computer-
designed fictitious evolutionary paths of the stars, for instance, or the remarkably detailed
(but somewhat discordant) accounts of what happens in the first few seconds after the
hypothetical Big Bang, the comprehensive description of the insides of the imaginary
black holes, and so on-but the points that have been covered in the preceding pages
should be sufficient to indicate the extent of this imaginary universe, and the major part
that it plays in present-day physics and astronomy.
It should also be noted that this description is limited to those items with which most
astronomers agree, as matters now stand. The imaginations of the theorists are by no
means restricted to the areas that have been covered here. A host of books and articles are
currently explaining in great detail the hypothetical properties of other non-existent
entities and processes. "Holes" are the current fad, and new kinds are appearing in
profusion. Some are merely variations of the plain black hole-mini black holes,
superholes, rotating black holes, expanding black holes, etc.-while others step out boldly
with new concepts: white holes, for example, or even "wormholes." The hypothetical
conditions existing in the first minutes after the imaginary Big Bang are likewise high
fashion at the moment, and are being called upon to provide explanations for the
formation of galaxies, the origin of the background radiation, the production of those
chemical elements that are not otherwise accounted for, and a variety of other items.
This is indeed a happy time for the theorists. They live in an era in which the universe of
the imagination is the prevailing orthodoxy, and they are provided with a fertile field in
which to work, one in which there is only a bare minimum of those inconvenient
observed or measured facts that have been the downfall of so many of the cherished
products of their less fortunate predecessors. The case in favor of the most typical
features of the imaginary universe, such items as degenerate matter or singularities, is
entirely negative; that is, it rests on the absence of any observational evidence that
specifically disproves these hypotheses. Thus the farther one of these products of the
imagination departs from reality, the easier it is to meet the requirements for acceptance
by the scientific community.
One of the strangest features of the whole situation is that while the theorists are letting
their imaginations run wild, and indulging in speculations of the most fantastic character,
all in the name of science, they are religiously observing a taboo that prevents them from
investigating the one hitherto unexplored area of the real universe in which the answers to
many of their problems can be found: the region of speeds greater than that of light.
There is nothing irrational or illogical about such speeds. Indeed, up to the beginning of
the present century there was no suggestion that there might be any inherent limitation on
speed. But Einstein has laid down an interdict that prohibits the exploration of the
consequences of motion at speeds greater than that of light, and since a challenge to this
ukase is unthinkable in the present-day scientific community, the astronomers are barred
from even speculating about the immense field of physical existence at speeds greater
than that of light, the field to which the entire latter half of this present volume has been
devoted. Current physical and astronomical theory stops dead at the speed of light.
Inductive reasoning, or exercise of the imagination, beyond this point is, in effect,
prohibited.
The construction of the astronomers' imaginary universe has been a gigantic task because
of the never-ending revisions, re-adjustments, and corrections that have been required by
the new information continually being produced by the work of the observers and
experimenters. Those who have participated in the undertaking are very proud of what
has been accomplished, and those who are now chronicling their endeavors characterize
them in superlatives, such as the following from Paul Davies, referring specifically to the
elucidation of the hypothetical details of the epoch immediately following the imaginary
Big Bang:
The study of this violent primeval epoch must rank as one of the most exciting
intellectual adventures of modern science.311
No doubt this task has been exciting for those who have been engaged in carrying it out,
and in this sense it is an "adventure," but the primary aim of science is to increase our
knowledge of nature, and from a scientific standpoint the psychological reactions of the
investigators are irrelevant. The only legitimate scientific criterion by which the feats of
the imagination involved in constructing the imaginary universe can be judged is whether
or not they have, in fact, added to our knowledge of nature. They certainly have not done
so directly, since false information is not an addition to knowledge. Perhaps these
excursions into the land of fantasy may have stimulated some thinking along lines that
eventually produced some items of real knowledge. However, it is more probable that the
net result of the effort expended on the investigation of the properties of non-existent
entities and phenomena has been to obstruct the advance of knowledge, rather than to
facilitate it. As pointed out in the discussion of this subject in The Neglected Facts of
Science, "It would appear that the main purpose served by inventing a theory is to enable
the scientific community to avoid the painful necessity of admitting that they have no
answer to an important problem."312
In any event, there is no longer any need for a science fiction approach to astronomy. The
development of the theory of the universe of motion has provided a solid foundation of
positive knowledge and a comprehensive theoretical framework that enables fitting all of
the observed phenomena into their proper places in the grand design.

CHAPTER 30

Cosmology
Long before the first records of human activity were scratched on rocks or indented into
clay, the more thoughtful members of the human race were already wondering about the
origin of the world in which they found themselves living, and about its ultimate fate. We
know this to be true because these first records indicate that the thinking about such
matters had already reached a rather high level of sophistication. That early thinking was,
of course, purely speculative; the connection between the premises on which it was based
and the conclusions that were reached was too nebulous to justify calling it inductive
reasoning. Furthermore, these speculative ideas relied almost entirely on supernatural
processes, and they were essentially religious in character.
In the course of time, as various fields of thought split off from religion, and secular
branches of knowledge were originated, the questions as to the origin and fate of the
universe came to be accepted as philosophical issues. Such subjects as cosmology and
cosmogony were therefore defined, until quite recently, as subdivisions of philosophy.
Within the present century, however, some physical phenomena have been discovered
that are believed to have a bearing on these issues, and as a result, most of the theoretical
activity in this area is now carried on in scientific terms, and even though it is just about as
speculative as ever, it is regarded as scientific. As expressed by Hermann Bondi,
‖Nowadays we regard cosmology as a branch of science, or to be more precise, a branch
of astronomy.‖ 313
Bondi defines cosmology as ‖the field of thought that deals with the structure and history
of the universe as a whole.‖ An astronomy textbook gives this somewhat more explicit
definition:
Cosmology is concerned with the nature and origin of the entire universe-its
structure today, its past, and its future.314
The scope of the subject, as thus defined, is greatly extended beyond the earlier
objectives. We may, indeed, regard the modern additions to cosmology as a separate field
of knowledge. This is the view taken by the Encyclopedia Britannica, which places
cosmology under two separate headings: ‖Cosmology, in astronomy‖ and ‖Cosmology,
philosophical.‖ In this work the subject will be divided in essentially the same way. This
chapter will examine the aspects of astronomy that are generally classed as cosmological,
and Chapter 31 will then take up a consideration of the implications of our physical and
astronomical findings on questions of a more philosophical nature.
Present-day cosmological theories can be described as variations of two themes. Ever
since Hubble's discovery of the recession of the distant galaxies, accounting for this
recession has been regarded as the number one requirement of such a theory. The current
favorite, the Big Bang theory, assumes that an enormous explosion at some time in the
remote past hurled the entire contents of the universe out into space at the tremendous
speeds now observed. One variation of this theory sees the expansion as continuing
indefinitely, and the ultimate fate of the universe as a condition in which its constituent
parts are separated by distances too great for any interaction. An alternative view is that
the expansion will ultimately reach a limit, and will be succeeded by a contraction that
will terminate with another Big Bang, the cycle being repeated indefinitely.
These theories based on a Big Bang are evolutionary in character. They depict the
universe as undergoing a continual change from an initial to a final state, with or without
a reversal, depending on the particular version of the theory. The Steady State theories,
the only alternatives to the Big Bang that have been taken very seriously, portray the
universe as unchanging in its general aspects. In fact, one approach to this type of theory
bases it on a ‖Perfect Cosmological Principle,‖ which asserts that this uniformity is a
fundamental principle of nature. In order to maintain the uniformity, the steady state
concept, in its present form, requires the continual creation of new matter from which
new galaxies can be formed to fill the spaces left vacant by the outward movement of the
previously existing galaxies.
The fortunes of these rival theories have fluctuated as new observational discoveries have
posed difficulties for one or the other of them, and as revisions of the theories have been
made to accommodate them to the new information. As matters now stand, the steady
state type of theory is at a low ebb. It has for years been contending against observational
data which are asserted to indicate that there are more faint radio sources at great
distances than would be found under steady state conditions. In 1965 it received another
blow when an isotropic background radiation was discovered and attributed to the
remnants of the Big Bang. The present tendency on the part of the astronomers is to
conclude that the Steady State theories are ‖almost certainly excluded by two
independent sets of facts,‖ 315 and to accept the Big Bang theory as having been
established by default, there being no other contenders.
In view of the very limited amount of factual data available in this area, and the open
questions as to the relevance of these data to the points at issue, the near unanimity of
astronomical opinion is clearly a bandwagon effect. As J. N. Bahcall pointed out in a
recent (1971) article, ‖We frequently settle important scientific issues by acclamation
rather than observation.‖ The general acceptance of the Big Bang theory is a prize
example of this wholly unscientific practice. A few words of caution are being heard. For
instance, Bernard Lovell had this to say:
No one acquainted with the contortions of theoretical astrophysicists in the
attempt to interpret the successive observations of the past few decades would
exhibit great confidence that the solution in favour of the hot big bang would be
the final pronouncement in cosmology.317
Fred Hoyle states the case more bluntly. He tells us, ‖I have little hesitation in saying that
a sickly pall now hangs over the big-bang theory.‖ One of the problems involved in
making a critical examination of invented theories is that they are generally vague
enough to leave room for differences of opinion on major details, often on vital details.
Current scientific literature is full of references to different ‖interpretations‖ of various
theories of this type. The Big Bang cosmological theory is no exception. In fact, the
differences between the interpretations of this theory are so extreme that these
interpretations actually constitute different theories rather than different versions of the
same theory. For this reason, the comments and criticisms that apply to one are not
necessarily applicable to another. To cope with this situation we will first consider the
original form of the theory, in which a highly concentrated aggregate of matter ―explodes
and ejects the galaxies in all directions.‖318 Subsequently we will give some attention to
the more recent interpretations.
The principal objections to the original Big Bang theory, as seen in the context of
conventional astronomical thought, without taking into account the new information
derived from the theory of the universe of motion, which will be considered later, can be
summarized as follows:
1. The Big Bang is pure assumption. There are no physical principles from which it
can be deduced that all of the matter in the universe would ever gather together in
one location, or from which it can be deduced that an explosion would occur if the
theoretical aggregation did take place.
2. Theorists have great difficulty in constructing any self-consistent account of the
conditions existing at the time of the hypothetical Big Bang. Attempts at
mathematical treatment usually lead to concentration of the entire mass of the
universe at a point. ‖The central thesis of Big Bang cosmology,‖ says Joseph Silk,
―is that about 20 billion years ago, any two points in the observable universe were
arbitrarily close together. The density of matter at this moment was infinite.‖ 319
This concept of infinite density is not scientific. It is an idea from the realm of the
supernatural, as most scientists realize when they meet infinities in other physical
contexts. Richard Feynman puts it in this manner: ―If we get infinity [when we
calculate] how can we ever say that this agrees with nature.‖ 235 This point alone
is enough to invalidate the Big Bang theory in all of its various forms.
3. The scale of the magnitudes involved is far out of line with experience, or even
any reasonable extrapolation from experience.
4. As noted in Chapter 29, the results attributed to the Big Bang are inconsistent
with the physical and astronomical theories currently employed in application to
supernova explosions.
5. It is difficult, if not impossible, to account for the isotropy of the observed
universe on the basis of the Big Bang hypothesis. As expressed by Dennis
Sciama, this is ―a headache to the astrophysicist.―320 This problem is particularly
acute in reference to the background radiation that is currently supposed to
provide the best support for the theory.
6. The problem of the formation of the galaxies has never been solved in the context
of this theory. ‖Moreover,‖ says W. H. McCrea, ‖those who have explored it most
fully seem to be the ones who are most convinced that almost no progress has
been made.―321 H. L. Shipman concedes that this is a significant point. ‖Since
galaxies exist, it is embarrassing that we can't make galaxies in a hot, Big Bang
cosmology.―322
7. The theory provides no explanation for a large number of physical phenomena
that are directly connected with the evolution of the hypothetical explosion
products.
8. Because of this lack of tie-in with observational information, the number of
deductions that can be made from the theory is very limited. This minimizes the
possibility of conflict with observation, and gives the impression that there are
few criticisms that can be levied against the theory from the observational
standpoint. In reality, however, what this means is that the theory cannot be
tested.
This is a devastating list of criticisms to be levied against one of the most highly
publicized elements of present-day astronomical thought. Most astronomers are reluctant
to subject the currently favored hypotheses in their field to critical scrutiny, but it is
obvious that these objections to the Big Bang demolish most of the arguments advanced
in favor of that theory in its original form. A large segment of the astronomical
community has therefore abandoned the original concept, and has substituted other, very
different, ideas, retaining only the Big Bang name. We now find many assertions such as
the following in the astronomical literature:
Many people (including some scientists) think of the recession of the galaxies as
due to the explosion of a lump of matter into a pre-existing void, with the galaxies
as fragments rushing through space. This is quite wrong . . . the expanding
universe is not the motion of the galaxies through space, away from some center,
but is the steady expansion of space.323 (Paul Davies)
This conceptual change eliminates some of the serious objections to the original Big
Bang hypothesis, but what does not seem to be realized by its proponents is that it also
eliminates the explanatory character of that hypothesis. The original Big Bang is based on
an analogy with observed explosions. Matter, we know, has an internal energy content
that, under appropriate circumstances, can be released explosively. The Big Bang is
assumed to accomplish such a release on a gigantic scale. But this explosive process
propels matter through space, the effect that Davies specifically repudiates. In order to
produce ‖steady expansion of space‖ explosively it would be necessary to have either a
means of applying the energy of matter to space, something that is totally foreign to
physical science as we know it, or a source of energy in space itself, something of which
there is no indication whatever. Consequently, there is neither observational nor
theoretical justification for the assumption that the concept of an explosion is applicable
to space. Thus the new version of the Big Bang expressed by Davies eliminates the
‖bang.‖ In fact, it eliminates all explanatory content from the hypothesis, and reduces it
to nothing more than a restatement of the observational situation. It merely asserts that
the space between galaxies is continually increasing.
Another alternative to the original hypothesis calls for replacing the Big Bang with a
multitude of little bangs.
The theory seems to call for enormous numbers of small bangs . . . all essentially
simultaneous, close together, and nearly identical.324 (Lyman Spitzer, Jr.)
This suggestion avoids the fatal weakness of the space expansion version of the Big Bang
described by Davies, but only at the expense of introducing many other problems, such as
the question as to how the explosions are synchronized, the exacerbation of the isotropy
problem, etc. Consequently, the little bang hypothesis has received little attention thus
far. The principal significance of the present-day swing away from the original Big Bang
concept in all but name is that it demonstrates a recognition on the part of those who are
supporting the revised hypotheses that the objections to the original Big Bang are
insurmountable.
An examination of the astronomers' Steady State theory, again without considering the
new knowledge made available by the development reported in this work, discloses the
following major objections:
1. In this theory the expansion is a pure assumption. No mechanism for
accomplishing it is provided.
2. The theory requires the continuous creation of matter, which conflicts
with the conservation laws. Like the concept of infinite magnitudes, this
is a resort to the supernatural.
3. The theory has no explanation for the formation of galaxies, a key factor
in the events that this theory purports to explain.
4. The theory has no explanation for the observed background radiation
(aside from a suggestion by Fred Hoyle325 that approximates what we
now find to be the true explanation, but was not taken seriously).
5. In this theory the oldest galaxies are removed from the system by
‖disappearing beyond the time horizon‖ to maintain the unchanging
galactic composition. This hypothesis breaks down when the galaxy
from which the universe is being observed becomes the oldest within the
observational limits. Thereafter the age of the oldest galaxy within these
limits continually increases, violating the basic premise of the theory.
6. The theory provides no explanation for a large number of physical
phenomena that are directly connected with the evolutionary pattern that
it predicts.
7. Because of this lack of detail, it is untestable.
A critical examination of this ‖theory‖ quickly shows that it is not a theory, nor even a
hypothesis. It is merely an unelaborated idea, the idea that is contained in what is known
as the Perfect Cosmological Principle. Most astronomers accept, at least on a tentative
basis, the Cosmological Principle, which asserts that the universe appears the same, aside
from small scale irregularities, from all locations in space. The Perfect Cosmological
Principle extends this idea to include the assertion that it likewise appears the same from
all locations in time. This extension has considerable appeal on broad philosophical
grounds, but in order to give it the status of a cosmological hypothesis that can be
subjected to scientific tests of its validity, it is necessary to identify and postulate
mechanisms whereby the uniformity that is called for can be maintained. There are four
major requirements: (1) a source of raw material for the formation of new galaxies, (2) a
mechanism for accomplishing this formation, (3) a mechanism for implementing the
galactic recession, and (4) a means of removing the over-age galaxies from the system.
The Steady State ‖theory‖ proposed by a group of astronomers does not come anywhere
near providing these details that would convert it from a mere idea into a testable
hypothesis. Its protagonists have suggested a continuous process of creation as the source
of the new matter, and have offered a process of disappearance over the time horizon as
an answer to the problem of removing the over-age galaxies. The latter, as already noted,
is unacceptable. No attempt has been made to account for the formation of galaxies, or
for the observed recession, in the context of the theory.
The Big Bang is a full-fledged hypothesis, not merely an idea like its competitor, but if
cosmology is to deal with the universe as a whole, as indicated by the definitions quoted
earlier, it is not a theory of cosmology. It deals only with the origin of the universe and
with the galactic recession, aside from a misapplication of the second law of
thermodynamics, and says nothing at all about the large number and variety of
phenomena that constitute the activities of the universe as a whole. Calling it a
cosmological theory is equivalent to asserting that the galactic recession is the only thing
of any significance that occurs in the universe subsequent to its origin.
It should be evident that, even on the basis of previously available observational
information, without the benefit of the new knowledge contributed by the theory of the
universe of motion, neither of these present-day cosmological theories is anywhere near
tenable in its present form. The only justification for giving either of them any
consideration at all is the rather tenuous possibility that a continuing effort to overcome,
or at least minimize, their many shortcomings might eventually result in the construction
of a viable theory by a process of modification. But the case for these theories is not
currently being argued on these grounds. What we are being told is that there is no
alternative.
When astronomers express dissatisfaction with both the Big Bang and the Steady
State concepts of the universe, they are in trouble, because it is hard to imagine
radical alternatives.326 (Nigel Calder)
On the next page of his book, however, the author makes a statement that illustrates
where the trouble lies: why alternatives to these untenable theories are so hard to find.
‖The only way anyone has thought of to avoid this conclusion [that the contents of
the universe were formerly much more closely crowded together than they are now],‖ he
says, ‖is to suppose that . . . less matter existed in the universe than does now.―
Here again we meet the ubiquitous ‖only way‖ argument. As in so many similar cases
examined in the earlier pages of this and the preceding volumes, the so-called ‖only way‖
has that status only if it is assumed that the relevant portions of currently accepted
physical and astronomical theory are correct in all respects. This is a totally unwarranted
assumption. Any impasse such as that which exists in this case calls for a critical
examination of the premises on which the accepted view of the situation is based. The
long list of cases in which the investigation reported in this work uncovered new
alternatives where it had been generally accepted, on the basis of assurances from
Einstein and other leading scientists, that no such alternatives existed, is a graphic
illustration of the need for a more critical examination of the foundations on which the
current ideas rest.
What makes alternatives to the existing ideas so difficult to find is that a totally new view
of some essential element in the situation is usually required before the alternative
possibilities can be recognized. It is quite unlikely that the author of this work would
have been able to identify all of the many previously unrecognized alternatives that have
provided the answers to longstanding problems discussed in these volumes if he had not
had the benefit of a general physical theory that enabled him to arrive at these alternatives
by a straightforward process of deduction. The cosmologists have been at a disadvantage,
in that they have not had any assistance of this kind. The Big Bang and Steady State
theories are the only alternatives that they have been able to see in the context of the
current physical and astronomical theories, and they have not explored the possibility that
these theories might be wrong. Their inability to see the true picture is understandable,
but this does not make their conclusions any more acceptable. As this work has
demonstrated, astronomy has not yet produced enough data on which to build a tenable
cosmological theory, and there is no indication that it is likely to do so in the foreseeable
future.
The present data in cosmology are still limited, ambiguous, and fragmentary, and
they all depend on complex instruments stretched right to the limits of their
sensitivity and performance.327 (Martin Rees)
A significant feature of this situation is that the spectacular increase in the scope and
quantity of observational information in the astronomical field in the last few decades has
not resulted in any significant progress toward an understanding of the cosmological
problem. The case in favor of any cosmological theory is still being argued mainly on the
basis of the shortcomings of the alternatives. Each step forward from the observational
standpoint seems to introduce new difficulties. This accumulation of unsolved problems
is a clear indication of the need for new ideas. In his book, The Structure of Scientific
Revolutions, Thomas Kuhn points out that the need for a new and better theory is
generally indicated by ‖a state of growing crisis.‖
The emergence of new theories is generally preceded by a period of pronounced
professional insecurity. As one might expect, that insecurity is generated by the
persistent failure of the puzzles of normal science to come out as they should.
Failure of existing rules is the prelude to the search for new ones.328
The existence of such a crisis in astronomy and cosmology is revealed by the current
reactions to the inability of accepted theory to deal with the many problems now
confronting these disciplines. More and more scientists are coming to realize that some
basic changes in the existing structure of theory will be required. Typical of the
comments now being made in increasing numbers are the following:
In some places, too, the extraordinary thought begins to emerge that the concepts of
physical science as we appreciate them today in all their complexity may be quite
inadequate to provide a scientific description of the ultimate state of the
universe.329 (Bernard Lovell)
It [radio astronomy] is at present producing more and more data that cast more and more
doubt on the big bang and other evolutionary cosmologies, and it will probably continue
to do so until someone is able to propose an entirely new approach to cosmology; for
example, proposing a new physical law whose consequences can be tested by
astronomers.330 (G. Verschuur)
Clearly, the physics of radio galaxies and quasars, the nature of the red shift, and perhaps
fundamental physics itself are being questioned by these measurements [recent radio
observations].331 (K. I. Kellerman)
Astronomers are looking more and more toward a revision of physical theory as an
answer to their currently outstanding problems. In the statements quoted above, Lovell
suggests that the concepts of physical science may be inadequate; Kellerman says that
fundamental physics is subject to question; and Verschuur predicts that a new physical
law will be required. The physicists do not offer much resistance to these conclusions.
They have problems of their own that are just as recalcitrant as those that baffle the
astronomers, and they realize that their theories are in need of some overhauling.
Feynman, for instance, tells us that ‖All the principles that are known are inconsistent
with each other, so something has to be removed.‖ 332 He defines the problem in these
terms: ‖We have to find a new view of the world that has to agree with everything that is
known, but disagree in its predictions somewhere . . . and in that disagreement it must
agree with nature.‖ 294
As Feynman concedes, this is an ‖extremely difficult‖ assignment. The irony of the
situation is that the greater part of the difficulty is not inherent in the problem; it is
gratuitously introduced by the investigators themselves. Feynman's statements show just
where the trouble lies. When he says that the ‖new view . . . has to agree with everything
that is known,‖ he is using the word ‖known‖ in the sense of ‖positively established.‖
This is the only sense in which the statement is valid. But when he says that ‖the
principles that are known are inconsistent with each other,‖ he is using the word ‖known‖
in the sense of ‖currently accepted.‖
The practice of elevating the popular opinion of the moment to the status of established
truth is the root of the present difficulty. It not only stands in the way of finding the
answers to unsolved problems, but also prevents recognition of those answers if and
when they are obtained in spite of all obstacles. Replacement of an erroneous theory of
long standing is difficult enough without this unnecessary handicap, as scientists, like
their counterparts in other fields of human activity, are reluctant to change ideas to which
they are accustomed. In principle, new ideas are welcome, but in practice those that
disturb previous lines of thought encounter an atmosphere of hostility. The following
comment by Geoffrey Burbidge, reported in a news item, describes the existing situation:
As is always the case when scientific questions are really fundamental, new ideas which,
if they prevail, will overturn the old ones, are resisted by all means, in the name of
science, but by any means that come to hand.333
The theory derived from the postulates that define the universe of motion, and presented
in this work, encounters this antagonism in full force when it is extended into the
astronomical field, because it conflicts with many cherished ideas, some of very long
standing. The astronomers should realize, however, that when they reach the point where
they have to hoist the distress signal, and call for help by way of a ‖drastic revision‖ of
physical theory, they must expect some similar major changes in astronomical theory.
The changes required by the theory of the universe of motion are far-reaching, to be
sure, but nothing less will serve the purpose.
While the case in favor of this new theory is affirmative (that is, it is demonstrated in the
preceding pages that the physical universe does, in fact, conform to the principles and
relations derived from the postulates of the theory), the new findings that have emerged
from the development of the consequences of the postulates have added still further
dimensions to the case against both of the astronomers' cosmological theories. For
example, the finding that matter is subject to a destructive temperature limit precludes the
existence of a concentration of matter such as that assumed in the Big Bang hypothesis.
Likewise, the finding that the net motion of the galaxies is inward within the gravitational
limits, and outward in two dimensions beyond 1.00 redshift, rather than always outward
in three dimensions, invalidates the recession explanation in all versions of the Big Bang.
These examples could be multiplied manifold.
The universe of motion described in this work is a universe of the steady state type. It
conforms to the Perfect Cosmological Principle on which the astronomers' Steady State
theory is based; that is, the large-scale features of the universe are unchanging, both in
space and in time. But it is also evolutionary, differing from the Big Bang theory in that
the evolution is a continuing process: a cyclic evolution rather than a linear evolution.
This cyclic feature eliminates the need for continuous creation of matter, one of the
principal objections to the astronomers' Steady State theory, while it also negates the
prediction of a cold and lifeless ultimate state of the universe, a feature of the original Big
Bang theory that is philosophically distasteful to many scientists. Thus the cosmological
aspects of the theory of the universe of motion combine the more desirable features of the
astronomers' cosmological theories, while avoiding the most objectionable aspects of
each.
Unlike its predecessors, which, as noted earlier, are limited to providing explanatory
hypotheses for only a few of the cosmological aspects of the universe, the results of the
theoretical development now being described constitute a comprehensive cosmological
theory in which the evolutionary development of the constituents of the universe (atoms,
molecules, stars, galaxies, etc.) is an integral part of the cosmological process. This
understanding derived from the theory of the universe of motion participates in the proof
of the validity of the theory as a whole that is accomplished by the application of the
probability relations. It may, however, be of interest to supplement this proof by a
summary of the items that are relevant to the validity issue. Most of the content of such a
summary can be expressed by the statement that none of the objections against either the
Big Bang or the Steady State theory identified in the preceding pages is applicable to this
cyclic theory. The following additional points should be noted:
1. No ad hoc assumptions are employed. All conclusions are derived deductively
from the postulates that define the universe of motion.
2. The expansion of the material sector of the universe, as indicated by the recession
of the distant galaxies, is a direct consequence of these postulates.
3. The high degree of isotropy of the matter in the universe is a result of the fact that
the matter entering from the cosmic sector is distributed in space in accordance
with probability considerations.
4. The background radiation currently attributed to the remnants of the Big Bang is
the cosmic equivalent of starlight and other observed radiation of the material
sector. It is isotropic because it is emitted by cosmic sector matter that is
aggregated in time but dispersed in space.
5. The formation of stars, star clusters, and galaxies is a logical and natural part of
the aggregation process deduced theoretically.
6. No creation of matter is required.
7. No special scheme for getting rid of the mature galaxies is necessary. The existing
matter moves in a closed system.
8. The cosmological theory is a part of a general physical theory, applicable to all
physical phenomena. There are innumerable opportunities to test its validity by
correlation with observation.
This item number 8 is the key element in the whole situation. As Martin Rees pointed out
in a statement quoted earlier, the serious handicap under which present-day cosmology
labors is the lack of an adequate supply of relevant and reliable data. Without a solid base
from which to work, no refinement of the reasoning process will enable reaching correct
conclusions. Irwin Shapiro makes this comment:
All chains of reasoning in cosmology are elastic. Almost any observation interpreted to
support one conclusion can, in the hands of a moderately adroit theoretician, be
reinterpreted to support the opposite.334
The availability of a general theory of the physical universe now supplies the solid
theoretical foundation that has been lacking, not only in cosmology, but in astronomy as
well. This fully integrated theoretical structure applicable to the entire range of physical
phenomena throughout the universe enables formulating the general physical principles
from purely theoretical premises, and verifying them in areas that are readily accessible
to observation. We are then able to apply them with confidence to fields such as
cosmology where the information from observation is meager, or, in many cases, non-
existent.

CHAPTER 31

Implications
A scientific theory, such as the one described in the several volumes of this work, the
theory of the universe of motion, consists of a set of assumptions that define the theory,
together with the consequences of these assumptions, developed by applying logical and
mathematical processes to the basic premises. The ordinary scientific theory covers only
a limited portion of the total scientific field, and it is therefore an addition to established
scientific knowledge rather than an independent structure. Hence it necessarily utilizes
various items from the currently accepted body of scientific knowledge in the
development of its consequences. The theory of the universe of motion, on the other
hand, deals with the physical universe as a whole, and is entirely self-contained. All of the
conclusions as to the consequences of this theory are derived from the basic postulates
without introducing anything from any other source.
We have now arrived at the point, however, where it should be recognized that the
foregoing statement applies to the theory of the universe of motion as a scientific product.
Science itself is not entirely self-contained. In order to make scientific investigation
possible, and to give meaning to the results thereof, it is necessary to make certain
preliminary assumptions of a philosophical nature. The validity of these assumptions is
accepted by the workers in the field of science as a condition of becoming scientists, and
since these assumptions form a background for all scientific work, they are not ordinarily
mentioned in scientific discourse, except in those instances where the topics under
consideration are on the borderline between science and philosophy. In this concluding
chapter of the present volume we will undertake to examine some of the questions that
arise along this borderline, and in preparation for that examination we will want to look at
the philosophical underpinnings of physical science: ‖the metaphysical presuppositions
of science,― 335 as one writer calls them. These include the following:
(a) It is assumed that the universe is rational.
(b) It is assumed that the same physical laws and principles apply throughout the
universe.
(c) It is assumed that the results of specific physical actions are reproducible.
(d) It is assumed that the subject of scientific investigation is an objectively real
universe.
(e) It is assumed that physical changes (effects) result from causes.
(f) It is assumed that the results of scientific investigation, when verified in
accordance with standard scientific practice, are certain and permanent.
(g) It is assumed that the laws and principles of the physical universe are, in effect,
restrictions, and that whatever they do not prohibit exists.
Most members of the scientific community simply take these assumptions as axiomatic.
Indeed, the great majority of rank and file scientists would be quite surprised to find that
anyone questions such assumptions as the rationality of the universe, for example. But
some exceptions have been taken to specific items in the list, mainly by individuals who
are particularly interested in the philosophical aspects of science. An element of
uncertainty has thus been introduced into the substratum of physical science. The
development of the theory of the universe of motion has now clarified this situation, and
has demonstrated that the criticisms of these basic assumptions are invalid. It appears,
however, that a few of the criticisms that have been offered are of sufficient interest, in
view of the publicity that they have received, to warrant some discussion in this work.
The assumptions to which the following comments refer are identified by the same letter
symbols that were used in the earlier listing.
(a) If the universe were not rational, the scientific objective of arriving at a systematic
understanding of the activities of the universe would be an impossible task. It is true that,
as noted in Chapter 29, some prominent scientists have characterized the realm of the
very small as irrational, but what this amounts to is excluding this domain from the
scientific field. Our findings indicate that this exclusion is unnecessary.
(b) In present-day practice, the Principle of Uniformity, as we may call it, has not been
accepted in its entirety, because the theorists have been unable to find explanations on
this basis for the phenomena of some special areas, such as the sub-atomic region or the
interiors of the stars. However, it is accepted in a kind of selective way, and regarded as
applying whenever it does not inconvenience the theorists, but leaving open the
possibility of deviations in special situations. The clarification of the physical relations in
the far-out regions that has been accomplished by the development described in this work
has now shown that there are no exceptions to this general principle. The difficulties in
the special areas that have led to suggestions as to exceptions have been due to
inadequate understanding of the phenomena in these areas.
(c) The assumption of reproducibility is usually stated in terms of the reproducibility of
experiments, but it is equally applicable to any other type of physical action.
(d) One school of philosophy contends that the universe exists only in our minds. This is
a difficult position to contravene, as its defenders can simply extend it to apply to the
premises of any adverse argument. But as scientists, we can dismiss this point of view as
irrelevant. A subjective universe cannot be distinguished from an objectively real
universe by any means at our command, and from a scientific standpoint where there is
no distinction there is no difference.
A modification of this point of view that has some support among scientists concedes
reality only to the information received by the senses. The advocates of this interpretation
point out (correctly) that we do not perceive physical objects directly; we have direct
knowledge only of the ‖sense-data.― Our concepts of physical objects are theoretical
constructs based on these data. The conclusion that they have drawn from this is that only
the sense data have objective reality, and that all else is a creation of the human mind. As
expressed by G. C. McVittie:
A preferable alternative to the doctrine of the rational External World is to regard
science as a method of correlating sense-data . . . On this view, the corpus of
sense-data may, or may not, form a rational whole, but the human mind by
selecting classes of data succeeds in grouping them into rational systems . . .
Unobservables such as light, atoms, electromagnetic and gravitational fields, etc.,
are not constituents of an independently existing rational External World; they are
but concepts useful in the manufacture of systems of correlation.336
Other observers have adopted an intermediate position, conceding reality to some
features of the universe, primarily macroscopic objects, but denying the reality, in this
same sense, of other features: atoms and electrons, for example. Heisenberg cautions us
specifically that we must not regard the smallest parts of matter as being objectively real
in the same sense in which rocks and trees are real.337 ‖Atoms are neither things nor
objects,― he says, ‖atoms are parts of observational situations.‖ 338 In another attempt to
describe this strange half-world in which the ‖official― school of modern physics places
the basic units of matter, he characterizes the atom as ‖in a way, only a symbol.‖ 339
The theory of the universe of motion has provided a definitive answer to these questions
about reality. There is an external universe independent of the human race, and
independent of any observations that they may make. The physical universe is a universe
of motion; that is, motion is the reality of which the universe is composed. Motions and
combinations thereof are therefore ‖real― in any ordinary sense of the word. The relations
between these motions have a somewhat different status, and whether they can be
considered real depends on how that term is defined. In any event, some of the
‖unobservables― of modern physics, the nucleus of the atom, for instance, are wholly
non-existent. Some, such as electromagnetic and gravitational fields, are merely special
ways of looking at physical situations (that is, describing the relations between motions),
and belong in the same category in which we place such concepts as the center of gravity
or the poles of the earth. But the smallest subdivisions of matter, the atoms and sub-
atomic particles, have exactly the same claim to reality as the largest aggregates of
matter; the smallest subdivisions of electricity, the electrons, have the same claim to
reality as the heaviest electric currents; and so on. Whether or not the entity in question is
observable, as matters now stand, is irrelevant.
It should be understood, however, that reality, as defined above, is physical reality; that
is, the reality of the universe of motion. This does not necessarily exclude the possibility
that there may be reality of a different nature: a nonphysical reality.
(e) The same frustrations that have led modern scientists to invent theories where their
efforts to apply inductive reasoning to their problems have encountered difficulties have
also impelled them to jettison any of the previously accepted scientific or philosophical
principles that might happen to stand in the way of the inventions. Some are even ready
to discard logic, one of the foundations of the structure of scientific knowledge. For
example, F. Waismann asserts that ‖Quantum physics presents a strong case against
traditional logic,‖ 340 an upside-down conclusion, if there ever was one. But the favorite
target of those who seek to make things easier for the theorists is the connection between
cause and effect.
Like Waismann, most of the others who are attempting to brush aside those principles
that stand in the way of the currently fashionable ideas rely primarily on the quantum
theory, with some assistance from relativity and other theoretical products of the modern
era, according these theories a status superior to that of the previously accepted
principles. As it happens, this quantum theory that is now being used as ammunition with
which to attack some of the essential features of traditional scientific procedure is itself
based on a sound principle, existence only in discrete units, that was derived by one of
these standard scientific procedures: generalization of empirical findings. The
development of the theory of the universe of motion has now shown that this discrete unit
principle is one of the key elements in the basic framework of the physical universe. But
because conventional science is unaware of the directional reversals that take place at the
unit levels, it has not been able to arrive at a theoretical explanation of events inside unit
distance that is consistent with the established laws of physics. This put the theorists in a
position where it seemed that they either had to give up quantum theory or sacrifice some
of the established philosophical principles. They chose the latter course, and quantum
theory, as now constituted, not only defies logic, but also causality and continuity of
existence (that is, it asserts that an object may exist at point A at one time and at point B
at another time without having been anywhere in the interim). The abandonment of
causality is particularly stressed by the expositors of the theory, as in the following
statement:
Whenever he [the physicist] penetrates to the atomic, or electronic level in his
analysis, he finds things acting in a way for which he can assign no cause, for
which he can never assign a cause, and for which the concept of cause has no
meaning, if Heisenberg's principle is right. This means nothing more nor less than
that the law of cause and effect must be given up.341 (P. W. Bridgman)
In the universe of motion all entities and phenomena are motions, combinations of
motions, or relations between motions. It follows that any physical event X involves
modification of an existing motion combination A by another motion or combination B.
Motion B is then the cause of event X. However, the initial combination A was itself the
result of a previous event Y in which a then existing motion combination C was modified
by a motion D to produce combination A. Thus D can also be regarded as a cause of
event X. In fact, any physical event has what amounts to an infinite number of causes.
This event is the intersection of two or more causal systems, and might be compared to a
major river, which is the result of continual joining of the products of intersection of an
almost infinite number of rivulets. Thus the conclusions of quantum theory leading to the
abandonment of causality must be rejected.
In this connection, however, it is necessary to distinguish between causality and
determinism. ‖There is some disagreement among scientists about the concept of
causality. Among many it is essentially equivalent to the notion of determinism.― 342 (R.
B. Lindsay) But there is a distinct difference between the two concepts. Causality implies
nothing more than the existence of a cause for every physical event. Determinism
includes the further premise that the same cause applied to the same kind of a situation
always produces the same result. In the universe of conventional science, which is a
universe of matter, non-material causes act upon material ‖things,― and there are grounds
for concluding that the same cause should produce the same effect if applied to the same
thing under the same conditions. However, the real world does not act in this manner, and
the reaction of ‖modern science― has been to throw the baby out with the bath water; that
is, to reject causality.
Our finding that both matter and non-material phenomena are manifestations of motion
now resolves the problem. On this basis, cause and effect are simply aspects of the
interaction of motions. Causality is maintained in all cases, because a motion cannot be
changed except by an interaction with another motion (since nothing exists but motions).
But, as we have seen in the preceding pages, there are continual interchanges between
different kinds of motion-between scalar and vectorial motion, between one-dimensional
motion and two or three-dimensional motion, between motion in space and motion in
time-and many of these interchanges involve redetermination of direction or magnitude
by chance processes. Because of this intervention by chance, the exact results of such
interactions are unpredictable. Thus, while causality is maintained throughout the
physical world, determinism is ruled out.
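The distinction drawn here can be illustrated in a rough, purely analogical way (this sketch is not part of the foregoing argument, and every name and number in it is hypothetical) by a small simulation in which each change of state is recorded together with the interaction that produced it, while the direction of each change is redetermined by chance:

```python
import random

def interact(state, cause_id):
    """Apply one interaction (the cause) to the state.

    The cause is always recorded, so every change in 'value' can be traced
    back to a definite interaction.  The direction of the change, however,
    is redetermined by chance, so the same cause acting on the same state
    does not always produce the same result.
    """
    direction = random.choice([-1, +1])          # chance redetermination
    return {
        "value": state["value"] + direction,
        "history": state["history"] + [(cause_id, direction)],
    }

def run(n_events):
    state = {"value": 0, "history": []}
    for event in range(n_events):
        state = interact(state, cause_id=event)
    return state

if __name__ == "__main__":
    first = run(10)
    second = run(10)
    # Causality: the record contains a cause for every one of the ten changes.
    assert len(first["history"]) == 10
    # No determinism: the same ten causes usually yield different end values.
    print(first["value"], second["value"])
```

In this toy model the assertion confirms that each change has a recorded cause, while the printed end values generally differ between runs; that is the sense in which causality can be maintained even though exact outcomes remain unpredictable.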
(f) On the basis of this assumption, physical science has a permanent, and ever-growing,
core of positively established knowledge. This is the view of traditional science, a view
that is still accepted by the great majority of scientists. But the general relaxation of
scientific standards that has accompanied the introduction of inventive theories in modern
times has confused the situation to the point where there is no longer any clear distinction
between today's best guess and established fact. This has led to a contention on the part of
some scientists and philosophers that no scientific findings are positively established, an
assertion that is welcomed in some quarters because it tends to excuse the deficiencies of
many unverified theories. ‖The notion that scientific knowledge is certain is an illusion,― 343 says Marshall Walker.
This point of view is based largely on an unrealistic concept of ‖certainty.― It is true that
no physical statement can be verified with what we may call mathematical certainty, in
which the probability of error is zero. Because of the nature of physical observations, the
best that we can do in any physical situation is to arrive at a point where the probability
of error is negligible, a physical certainty, we may say. But from a practical standpoint,
this physical certainty is fully equivalent to mathematical certainty. Drawing a distinction
between the two is meaningless hairsplitting. A theory is verified when its validity is
established with physical certainty.
In this connection, it is important to recognize that scientific statements can be verified
only if they are properly expressed so that they stay within the limits to which
comparisons with observation can be made. Much of the erroneous thinking in this area is
due to a lack of precision in defining the items that are involved. For example, we cannot
ordinarily verify a statement in the form y = 3x, where x and y are physical variables
(unless this proposition can be incorporated into one of greater scope that can be verified
as a whole). In order to be verifiable, the statement will usually have to be put into the
form: Within the limits x = a and x = b, y = 3x to an accuracy of one part in 10ⁿ. When
thus expressed and validated by comparison with the results of observation, this
statement constitutes exact and permanent knowledge, regardless of whether some future
findings may show that the relation is invalid somewhere outside the limits specified, or
that there is a deviation of less than one part in 10ⁿ under some circumstances. As
Lecomte du Nouy points out, ‖science has never had to retract an affirmation based on
facts that are well established within accurately defined limits.― 344
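One possible way of putting such a bounded statement into symbols (the limits a and b and the exponent n are placeholders here, not values taken from the text) is, in LaTeX notation:

$$\text{for all } x \text{ with } a \le x \le b:\qquad \lvert\, y(x) - 3x \,\rvert \;\le\; 10^{-n}\,\lvert\, 3x \,\rvert$$

That is, over the stated interval the observed y departs from 3x by no more than one part in 10ⁿ. A later finding that the relation fails outside the interval, or that there is some deviation smaller than the stated tolerance within it, does not contradict the statement as expressed.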
In support of his assertion that there is no certain scientific knowledge, Walker tells us
that ‖New models are often quite radically different from their predecessors, and often
require the abandonment of ideas that have long been considered obvious and axiomatic.―
This comment illustrates one of the common errors in thought that underlie the denial of
scientific certainty. Walker bases his conclusion on the observation that many ‖models―
and presumably ‖obvious and axiomatic― ideas ultimately had to be abandoned. But the
truth is that few models ever qualify as scientific knowledge. Models do not attempt to
cover all aspects of the phenomena with which they deal (if they did, they would be
theories, not models), and consequently they are inherently erroneous, either in part or in
their entirety. The failure of these models to stand the test of time therefore has no
relevance to the status of firmly established knowledge. Likewise, if an assertedly
‖obvious and axiomatic― idea can be definitely verified, it then constitutes scientific
knowledge, and is both certain and permanent. If it fails the test of comparison with the
observed facts, then it is not, and never was, ‖obvious and axiomatic,― nor is it scientific
knowledge, and the necessity of discarding it has no significance in the present context.
(g) This principle is commonly expressed in the statement that ‖What can exist does
exist.― K. W. Ford puts it in this manner:
One of the elementary rules of nature is that, in the absence of a law prohibiting
an event or phenomenon, it is bound to occur with some degree of probability. To
put it simply and crudely: Anything that can happen does happen.345
This author uses the word ‖happen― rather than ‖exist,― but as he notes in another
connection, at the basic level ‖there is no clear distinction between what is and what
happens.‖ 346
This principle is not as well known, as a principle of nature, among scientists in general
as those previously discussed, but they all employ it, usually unconsciously, in a great
variety of applications. It is this principle that provides the justification for interpolation
and extrapolation. It has been the key factor in such theoretical anticipations as
Mendeleev's prediction of previously unknown elements, Dirac's prediction of the
positron, and myriads of other, less dramatic, scientific advances. And it is the essence of
the lines of reasoning that are being employed in the current attempts to evaluate the
possibility of life elsewhere in the universe. As can be seen in these illustrations, the
absence of a prohibition is first established in one area. The principle that what can exist
does exist is then invoked to justify the assertion that the phenomenon in question also
exists in the other areas.
The validity of this principle, in application to the physical universe, has been clearly
established by our findings. In many cases, entities or phenomena that would otherwise
exist, on the basis of this principle, are excluded by adverse probabilities or other specific
factors. Aside from these exclusions, all of the entities or phenomena that are
theoretically possible within the area thus far covered in the investigation have their
counterparts in the observed physical universe. It is true that only a relatively small
portion of the universe as a whole has been examined in the context of the new
theoretical system, but the area of coverage includes the basic phenomena of all of the
major subdivisions of physical science, and many thousands of individual items. The
probability that there is any violation of this principle anywhere in the universe has thus
been reduced to a negligible level.
Addition of these philosophical principles to the physical knowledge set forth in this and
the preceding volumes now puts us in a position where we are able to arrive at answers to
some long-standing questions about fundamental issues. We will begin with
1. Is the physical universe finite or infinite?
In past discussion of this subject it has usually been assumed that the question reduces to
a matter of whether or not space is finite. Those who favor the finite alternative generally
envision some kind of a space curvature, a geometry that permits space to be finite, yet
unbounded. As brought out in Volume I, space as ordinarily conceived (extension space,
in terms of this work) is not a physical entity. It is merely a reference system, a purely
mental construction. As such, it can be thought of as infinite. But the space that actually
exists in a physical sense is the space aspect of the existing motion of the universe. The
question as to whether this space is finite or infinite therefore becomes a question as to
whether the amount of motion in the universe is finite.
The finding that the activity of the universe is cyclic answers this question immediately.
A cyclic system is a closed system; it is finite. In the universe of motion, spatial
structures exist only for a limited time; that is, a limited segment of the time progression.
Temporal structures (in the cosmic sector) exist only during a limited segment of the
space progression.
The principal obstacle that stands in the way of acceptance of the idea of a finite universe
is the observed outward motion of the photons of light and other electromagnetic
radiation. On first consideration, it would seem that, regardless of what the aggregates of
matter may be doing, the radiation is being dispersed outward into space, and is
eventually lost from the universe as we know it. But we now find that this apparent
outward movement of the photons is an illusion due to the inward movement of the
gravitationally bound system from which we are doing our observing. The photons
actually have no capability of independent motion. This is why the physicists have never
been able to find a mechanism for the ‖propagation of radiation.― There is no such
propagation, and therefore no need for a mechanism. The prevailing impression is that
Einstein provided an explanation for this phenomenon, but, in fact, what he did was to
dismiss the problem as too difficult. In a statement quoted in Volume I, he characterizes
the situation in this manner:
Our only way out . . . seems to be to take for granted the fact that space has the
physical property of transmitting electromagnetic waves, and not to bother too
much about the meaning of this statement.347
Since the photons of radiation remain at their points of origin, in the natural system of
reference, their ultimate fate is not to be lost in the depths of space, as observations from
our locations in the universe of motion appear to indicate. We are doing our observing
from locations that are moving inward at high rates of speed, and our observations are
distorted accordingly. All photons remain in the space over which the matter of the
universe is distributed. It follows that they must ultimately encounter, and be absorbed
by, matter. They are then transformed into thermal motion, or participate in the atom
building process by which radiation is reconverted into matter. A small fraction of the
total are able to pass into the cosmic sector, appearing there as a "background radiation"
of the type discussed in Chapter 30.
2. Did the universe evolve from a primitive condition, or has it been in the same
condition in which we now observe it during its entire existence?
The results of the development of theory in the preceding pages of this and the previous
volumes are consistent with either of these alternatives. The evolution in each sector
begins with matter in a primitive dispersed condition, but it does not necessarily follow
that there was ever a time at which all matter was in this condition. In any event, even if
the universe did originate in a primitive condition, theoretical considerations indicate that
it would eventually arrive at an equilibrium such as that which now appears to exist.
3. Did the universe have a beginning, or has it always existed?
The two parts of this question are not mutually exclusive, as they appear to be. We can
answer the second part affirmatively, but this does not necessarily mean that the answer
to the first part is negative. Such words as "always" and "before" presuppose the existence of time. "Always" means "during all time." "Before" means "at an earlier time." The universe has always existed; that is, it has existed throughout all time, because
time exists only as a constituent of that physical universe.
In the sense in which it is being asked, the first part of this question is meaningless, as it
assumes that the existence of time is independent of the existence of the universe.
Whether or not the question might have a real significance on the basis of something
other than a sequence in time is beyond the scope of the present work.
4. Will the universe eventually come to an end?
All individual objects in the physical universe, including the earth and the solar system,
have finite life spans, and their existence will eventually terminate. But there is nothing in
the physical system that would end the existence of the universe as a whole. The physical
universe is a self-contained and self-perpetuating mechanism. It will continue on the
present basis indefinitely, unless it is destroyed by some outside agency. The question as
to whether any such outside agency exists will be considered later.
5. Was the universe created by some agency?
The development of theory in this work sheds no light on the question of creation. The
only thing that exists in the physical universe is motion. Our theory, as it now stands,
defines what motion is, and what it does, but not how it originated, or whether it had an
origin. Since time, in a universe of motion, exists only as an aspect of that motion, the
universe and time are coeval. On this basis, the universe has existed always, during all time, regardless of whether or not it originated from an act of creation. Neither the theory
of the universe of motion, nor the many hitherto unrecognized physical facts uncovered
during its development, gives any indication as to whether a creation occurred. This
remains a wide open question, so far as science is concerned.
6. Is the activity of the physical universe purposeful, or is it simply mechanistic?
The finding that the physical universe consists entirely of a finite quantity of motion
means that it is purely mechanistic. However, this does not preclude the possibility that
the existence of this machine may have a purpose. This is an issue on which our study of
the mechanism sheds no light, although it does clear the way for a study of the problem.
7. Is the human race merely part of the machine, or does it, in some way, have an
independent role?
Conventional science takes a somewhat ambivalent attitude toward this question. It
portrays the universe as strictly mechanistic, and yet introduces the concept of an
"observer," whose presence is presumed to have a significance with respect to the
outcome of physical processes. The effect of the new information derived from the
development of the theory of the universe of motion on our understanding of the relation
of the human race to its physical environment has been explored in connection with an
extension of the physical investigation into the non-physical fields, the results of which
will be reported in a separate publication.
8. Are we alone, or is there intelligent life elsewhere in the universe?
This is a long-standing question that has entered a new phase since the development of
communication processes that are, at least potentially, capable of transmitting and
receiving messages from distant planets. It is now a lively subject of discussion and
speculation, and some steps have been taken toward a systematic search for evidence of
extra-terrestrial life. This question can be subdivided into the following three parts:
1. Are there other locations in the universe in which the physical conditions are
suitable for the existence of life?
2. Does life necessarily develop in some fraction of the suitable locations?
3. Where life exists, does it necessarily evolve into intelligent life under the most
favorable conditions?
The results obtained from the theory of the universe of motion enable us to give an
affirmative answer to the first of these subsidiary issues. As brought out in Chapter 7, our
findings indicate not only that there are an enormous number of planetary systems, but
also that the planets in these systems are distributed in distance from their controlling
stars in accordance with Bode's Law (as revised). This means that the great majority of
the systems include at least one planet within the habitable zone, a planet that may be
suitable for the development of the higher forms of life.
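For readers unfamiliar with the relation just cited, the conventional (unrevised) Titius-Bode formula gives the approximate distance of the nth planet from its star as
d(n) = 0.4 + 0.3 × 2^n astronomical units (AU), with n = 0, 1, 2, ...
and with Mercury at the limiting value of 0.4 AU. On this conventional reckoning Venus (n = 0) falls at 0.7 AU, Earth (n = 1) at 1.0 AU, and Mars (n = 2) at 1.6 AU, so that, in the solar system at least, the sequence places a planet squarely within the habitable zone. The revised form of the law referred to above is developed in Chapter 7 and is not reproduced here; the conventional form is given only as an illustrative baseline.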
Inasmuch as the results reported in the several volumes of this work do not extend into
the biological field, they do not provide answers for the other two subdivisions of the
main question. However, these results have verified the status of the postulates of the
theory of the universe of motion as a correct definition of the physical universe. If life is a
physical phenomenon, then it, too, is defined by these postulates. Thus the theory opens
an avenue of approach to these other two issues. A preliminary study along these lines
has been included in the extension of the physical investigation that was mentioned in the
answer to question 7.
9. If there are intelligent beings elsewhere in the universe, will we eventually be
able to make some kind of contact with them?
At the present stage of our knowledge, any answer to this question would be pure
speculation.
10. Is there anything outside (that is, independent of) the universe of motion?
This is probably the most important question that can be asked by members of the human
race. Many persons, particularly those with strong religious ties, will be inclined to
contest this assertion, having in mind issues that are more directly connected with their
specific beliefs. But we can safely predict that if these alternative questions are carefully
examined it will be found that they have no meaning unless this question number 10 can
be answered affirmatively.

Conventional science gives us a negative answer. It regards space and time as constituting a background, or setting, in which physical entities exist, and in which
physical activity takes place. All existence, according to this view, is in space and in
time. It then follows that there cannot be any existence outside of space and time. The
prevailing scientific opinion is that this is an incontrovertible conclusion. Furthermore, it
is claimed that every fact to which we have access can reasonably be explained in terms
of the physical universe alone, as would be expected on the basis of the foregoing
assertions.
Although it is generally conceded that this is the verdict of science at the present stage of
knowledge, it is, to most scientists, an unwelcome conclusion. The great majority of these
individuals have some kind of religious or philosophical convictions about non-physical
existence that they are not willing to give up, regardless of how strong a case against the
reality of such an existence science may present. For some this has created a very
difficult situation. As expressed by du Nouy:
It cannot be contested that the heart of many men is the stage of a conflict
between the strictly intellectual activity of the brain, based on the progress of
science, and the intuitive, religious, self. The greater the sincerity of the man, the
more violent is the conflict.348
The fact that the clarification of the physical relationships in our study of the universe of
motion has opened the door to an extension of this study into the non-physical realm thus
has a profound significance. The physical findings clearly demolish what previously
seemed to be an unassailable case against the reality of outside existence. Even the most
casual consideration of the claim that every known fact has a reasonable explanation in
physical terms is sufficient to show that the validity of this claim rests entirely on a
subjective assessment of what constitutes a reasonable explanation in each individual
case. The prevailing scientific position with respect to evidence of nonphysical existence
thus amounts to nothing more than a refusal to recognize any evidence that is offered in
favor of such existence. It follows that the scientific rejection of the possibility of
existence outside the physical universe has no basis other than the premise that all
existence is in space and in time.
In the universe of motion, this is not true. Space and time do not constitute a container for
the entities and phenomena of that universe; they are contents of the universe. Once this
is understood, the obstacle in the way of non-physical existence disappears. The results of
the investigation here being reported show that the physical universe consists entirely of a
specific finite quantity of a particular kind of motion. The question at issue now becomes:
Can anything exist other than this quantity of this kind of motion?
This is an issue that can be investigated by standard scientific methods and procedures.
We cannot apply the purely deductive method by which we have derived the answers to
similar questions within the boundaries of the physical universe after establishing the
validity of the fundamental postulates of the Reciprocal System of theory, as we have no
assurance that the laws and principles of the physical universe are applicable to the
outside region. We can, however, postulate the applicability of those of the previously
established principles that are not subject to any obvious regional limitations, and test the
validity of that postulate in the regular manner. In so doing, we are using one of the
versatile tools of inductive reasoning: the extrapolation process. We are making the kind of "inference from experience" upon which scientific theory was based before the "inventive" school of Einstein and his successors gained control of the scientific
Establishment.
First, we assume the validity of the Principle of Uniformity, identified as Principle (b) in
the list given at the beginning of this chapter. This principle then carries with it the
validity of the other items in the list that are relevant to the point at issue, particularly the
rationality of the outside existence, principle (a), and the assertion that what can exist
does exist, principle (g). We know from observation that motion can exist. Our
observations tell us only that it exists in a certain form and in a certain finite quantity, but
there is no indication of any kind of a limiting factor that would restrict it to this form and
to this quantity. Principle (g) therefore tells us that motion can exist in other forms and in
other quantities if our hypothesis as to the applicability of the Principle of Uniformity to
the outside existence is valid.
Having formulated this hypothesis by extrapolating the principles and relations that we
have established in the physical universe, we are then ready to verify it in the standard
manner by developing the consequences of the hypothesis and comparing them with
observation. Notwithstanding the scientific contention that all observed phenomena can
be explained on a purely physical basis, it quickly becomes evident, when the verification
process is undertaken, that many of the effects of non-physical existence required by the
uniformity hypothesis are, in fact, observable. Their true status as unexplained non-
physical phenomena has not heretofore been recognized because they coexist with many
unexplained physical phenomena and have not been distinguished from these obscure
features of physical existence.
The findings of this extension of the investigation of the physical universe into the non-
physical region are much too voluminous to be included with the physical results, and
will be described in a separate publication, but it would not be appropriate to conclude
the discussion in this volume without calling attention to the manner in which the
clarification of the properties of the physical universe sets the stage for a confirmation of
the reality of existence outside that universe. The more complete understanding of
physical existence opens the door to an exploration of existence as a whole, including
those nonphysical areas that have hitherto had to be left to religion and related branches
of thought. It is now evident that our familiar material world is not the whole of
existence, as modern science would have us believe. It is only a part, perhaps a very small part, of a greater whole.

References
1. John, Laurie, Cosmology Now, Taplinger Publishing Co., New York, 1976, p. 85.

2. Verschuur, Gerrit, Starscapes, Little, Brown and Co., Boston, 1977, p. 143.

3. Shklovskii, I. S., Stars, W. H. Freeman & Co., San Francisco, 1978, p. 66.

4. Struve, Otto, Elementary Astronomy, Oxford University Press, New York, 1959, p. 296.

5. Mitton, Simon, Exploring the Galaxies, Charles Scribner’s Sons, New York, 1976, p. 86.

6. Gold and Hoyle, Paper 104, Paris Symposium on Radio Astronomy, edited by Ronald N.
Bracewell, Stanford University Press, 1959.

7. Verschuur, Gerrit, op. cit., p. 102.

8. Harwit, Martin, Astrophysical Concepts, John Wiley & Sons, New York, 1973, p. 43.

9. McCrea, W. H., Cosmology Now, edited by Laurie John, op. cit., p. 94.

10. Bethe, Hans, Technology Review (MIT), June 1976.

11. Pasachoff, Jay M., Astronomy Now, W. B. Saunders Co., Philadelphia, 1978, p. 135.

12. Bok, Bart J., The Astronomer’s Universe, Cambridge University Press, 1958, p. 91.

13. Irwin, John B., Sky and Telescope, Nov. 1973.

14. Trimble, Virginia, Earth and Extraterrestrial Sciences, March 1978.

15. Hoyle, Fred, Frontiers of Astronomy, Harper & Bros., New York, 1955, p. 278.

16. Hartmann, William K., Astronomy: The Cosmic Journey, Wadsworth Publishing Co., Belmont,
CA, 1978, p. 365.

17. Hershfeld, Alan, Sky and Telescope, Apr. 1980.

18. Couper, Heather, 1978 Yearbook of Astronomy, p. 190.

19. Shklovskii, I. S., op. cit., p. 60.

20. Hartmann, William K., op. cit., p. 386.

21. Silk, Joseph, The Big Bang, W. H. Freeman & Co., San Francisco, 1980, p. 177.

22. Rees, M. J., The State of the Universe, edited by G. T. Bath, The Clarendon Press, Oxford, 1980,
p. 35.
23. Jastrow and Thompson, Astronomy, Fundamentals and Frontiers, 2nd edition, John Wiley &
Sons, New York, 1974, p. 231.

24. Cudworth, K., Astronomical Journal, July 1976.

25. Freeman and Norris, Annual Review of Astronomy and Astrophysics, 1981.

26. Darrow, Karl, Scientific Monthly, Mar. 1942.

27. Hogg, Helen S., McGraw-Hill Encyclopedia of Science and Technology, 1982, p. 13-53.

28. Struve, Otto, Sky and Telescope, July 1955.

29. Greenstein, J. L., McGraw-Hill Encyclopedia, p. 13-49.

30. Pasachoff, Jay M., op. cit., p. 87.

31. Kirshner, Robert P., Scientific American, Dec. 1976.

32. Struve, Otto, Sky and Telescope, June 1960.

33. Hogg, Helen S., Encyclopedia Britannica, 15th edition, p. 17-605.

34. Lohmann, W., Zeitschrift für Astrophysik, Aug. 1953.

35. Von Hoerner, Sebastian, Astrophysical Journal, Mar. 1957.

36. Inglis, Stuart J., Planets, Stars and Galaxies, 3rd edition, John Wiley & Sons, New York, 1961, p.
309.

37. Burnham, Robert, Jr., Burnham’s Celestial Handbook, Dover Publications, New York, 1978, p.
1294.

38. Shklovskii, I. S., op. cit., p. 110.

39. Aller, L. H., Encyclopedia Britannica, 15th edition, p. 17-600.

40. Ibid., p. 17-602.

41. Shklovskii, I. S., op. cit., p. 144.

42. Gamow, George, The Creation of the Universe, Viking Press, New York, 1952, p. 46.

43. Jastrow and Thompson, op. cit., p. 133.

44. Herbst and Assousa, Scientific American, Aug. 1979.

45. Shklovskii, I. S., op. cit., p. 225.

46. Maffei, Paolo, Monsters in the Sky, The MIT Press, 1980, p. 129.

47. Shklovskii, I. S., op. cit., p. 227.

48. Ibid., p. 285.

49. Ibid., p. 186.

50. Ibid., p. 193.

51. Ibid., p. 109.


52. Mitton, Simon, op. cit., p. 89.

53. Aller, L. H., op. cit., p. 17-596.

54. Neugebauer and Leighton, Scientific American, Aug. 1968.

55. Wilson, Olin C., et al., Scientific American, Feb. 1981.

56. Baker and Fredrick, Astronomy, Ninth edition, Van Nostrand Reinhold Co., New York, 1971, p. 393.

57. Kraft, Robert P., Scientific American, July 1959.

58. Burnham, Robert, Jr., op. cit., p. 590.

59. Harwit, Martin, op. cit., pp. 24 and 345.

60. Lynden-Bell, Donald, Cosmology Now, op. cit., p. 50.

61. Jastrow, Robert, Red Giants and White Dwarfs, Harper & Row, New York, 1967, p. 41.

62. Silk, Joseph, op. cit., p. 257.

63. Maffei, Paolo, op. cit., p. 205.

64. See, for instance, Hartmann, op. cit., p. 295.

65. Eddington, Arthur, New Pathways in Science, University of Michigan Press, 1959, Chapter VII.

66. Plavec, M. J., McGraw-Hill Encyclopedia, p. 13-118.

67. Shklovskii, I. S., op. cit., p. 165.

68. Greenstein, J. L., Stellar Atmospheres, University of Chicago Press, 1960, p. 676.

69. Bohlin, R. C., et al., Astrophysical Journal, Jan. 15, 1982.

70. Jastrow and Thompson, op. cit., p. 182.

71. Shklovskii, I. S., op. cit., p. 194.

72. Hartmann, William K., op. cit, p. 338.

73. Jeans, James, The Universe Around Us, fourth edition, Cambridge University Press, 1947, p. 236.

74. Allen, David, 1973 Yearbook of Astronomy.

75. Underhill, Anne B., Annual Review of Astronomy and Astrophysics, 1968.

76. Baker and Fredrick, op. cit., p. 372.

77. Burnham, Robert, Jr., op. cit., p. 407.

78. McLaughlin, Dean B., Sky and Telescope, May 1946.

79. Hartmann, William K., op. cit., p. 334.

80. Ibid., p. 333.

81. Burnham, Robert, Jr., op. cit., p. 995.


82. Ibid., p. 1263.

83. Bok, Bart J., Scientific American, Mar. 1981.

84. Shklovskii, I. S., op. cit., p. 207.

85. Ibid., p. 215.

86. Feynman, Richard, The Character of Physical Law, MIT Press, 1967, p. 30.

87. Harwit, Martin, Cosmic Discovery, Basic Books, New York, 1981, p. 57.

88. Allen, David K., 1981 Yearbook of Astronomy, p. 201.

89. Hartmann, William K., op. cit., p. 337.

90. Shklovskii, I. S., op. cit., p. 214.

91. Davis and Day, Water, Doubleday & Co., New York, 1961, p. 117.

92. Basko, M. M., Annals of the New York Academy of Sciences, Feb. 15, 1980.

93. Hartmann, William K., op. cit., p. 209.

94. News item, New Scientist, Apr. 11, 1974.

95. Ebbighausen, E. G., Astronomy, Charles E. Merrill Books, Columbus, Ohio, 1966, p. 57.

96. Hogg, Helen S., Encyclopedia Britannica, 15th edition, p. 17-604.

97. Bok and Bok, The Milky Way, 4th edition, Harvard University Press, 1974, p. 249.

98. Shklovskii, I. S., op. cit., p. 112.

99. Struve, Otto, Sky and Telescope, Apr. 1960.

100. Ebbighausen, E. G., op. cit., p. 76.

101. Harris, W. E., Astronomical Journal, Dec. 1976.

102. Burbidge, M. and G., Scientific American, June 1961.

103. Harris and Racine, Annual Review of Astronomy and Astrophysics, 1979.

104. Larson, R. B., Nature, Mar. 3, 1972.

105. Bok and Bok, op. cit., p. 97.

106. Blitz, Leo, Scientific American, Apr. 1982.

107. Wyatt, Stanley P., Principles of Astronomy, third edition, Allyn and Bacon, Boston, 1977, p. 562.

108. News item, Sky and Telescope, Oct. 1975.

109. Bok and Bok, op. cit., p. 160.

110. Hartmann, William K., op. cit., p. 284.

111. Gamow, George, op. cit., p. 94.


112. Wyatt, Stanley P., op. cit., p. 568.

113. Verschuur, Gerrit, op. cit., p. 102.

114. Ibid., p. 105.

115. News item, Nature, July 12, 1974.

116. Thackeray, A. D., The Magellanic Clouds, edited by Andre B. Muller, D. Reidel Publishing Co.,
Dordrecht, Holland, 1971, p. 14.

117. Westerlund, Bengt, ibid., p. 31.

118. Payne-Gaposchkin, Cecilia, ibid., p. 36.

119. Oort, J. H., ibid., p. 189.

120. Burnham, Robert, Jr., op. cit., p. 1546.

121. Ibid., p. 347.

122. Philip, A. G. D., Sky and Telescope, July 1978.

123. Hartmann, William K., op. cit., p. 309.

124. Van den Bergh, Sidney, Annual Review of Astronomy and Astrophysics, 1975.

125. Kudritzki and Simon, Astronomy and Astrophysics, Dec. 1, 1978.

126. Hunger, K., et al., Astronomy and Astrophysics, March 1, 1981.

127. Kaler, James B., Sky and Telescope, Feb. 1982.

128. Burnham, Robert, Jr., op. cit., p. 943.

129. Liller and Liller, Scientific American, Apr. 1963.

130. Burnham, Robert, Jr., op. cit., p. 2120.

131. Abell, G. O., Astrophysical Journal, Apr. 1966.

132. Pasachoff, Jay M., op. cit., p. 143.

133. Aller and Liller, Nebulae and Interstellar Matter, edited by Middlehurst and Aller, Univ. of
Chicago Press, 1968, p. 558.

134. Smith and Aller, Astrophysical Journal, Mar. 1, 1971.

135. Greenstein, J. L., Scientific American, Jan. 1959.

136. Greenstein, J. L., McGraw-Hill Encyclopedia, p. 14-633.

137. Liebert, James, Annual Review of Astronomy and Astrophysics, 1980.

138. Stothers, Richard, Astronomical Journal, Dec. 1966.

139. Greenstein, J. L., Astronomical Journal, May 1976.

140. Shipman, H. L., Astrophysical Journal, Feb. 15, 1979.


141. Greenstein, J. L., Stellar Atmospheres, op. cit., p. 689.

142. Van Horn, H. M., Physics Today, Jan. 1979.

143. Shklovskii, I. S., op. cit., p. 198.

144. Aller and Liller, op. cit., p. 483.

145. Kraft, Robert P., Scientific American, Apr. 1962.

146. Ebbighausen, E. G., op. cit., p. 101.

147. McLaughlin, Dean B., Stellar Atmospheres, J. L. Greenstein, editor, op. cit., p. 640.

148. Ibid., p. 593.

149. Kraft and Luyten, Astrophysical Journal, Oct. 1, 1965.

150. Joy, A. H., Stellar Atmospheres, J. L. Greenstein, editor, op. cit., p. 668.

151. Burnham, Robert, Jr., op. cit., p. 225.

152. Joy, A. H., op. cit., p. 666.

153. Haro, Guillermo, Non-Stable Stars, edited by G. H. Herbig, Cambridge University Press, 1957, p.
26.

154. Burnham, Robert, Jr., op. cit., p. 174.

155. Ibid., p. 123.

156. Warner, Brian, Sky and Telescope, Nov. 1973.

157. Burnham, Robert, Jr., op. cit., p. 928.

158. Ibid., p. 218.

159. Schatzman, E., Stellar Structure, edited by Aller and McLaughlin, Univ. of Chicago Press, 1965, p.
329.

160. Joy, A. H., op. cit., p. 672.

161. Gallagher and Starrfield, Annual Review of Astronomy and Astrophysics, 1978.

162. McLaughlin, Dean B., Stellar Atmospheres, op. cit., p. 647.

163. Walker, Marshall, The Nature of Scientific Thought, Prentice-Hall, Englewood Cliffs, N.J., 1963,
p. 132.

164. Davies, Paul, The Runaway Universe, Harper & Row, New York, 1978, p. 159.

165. Jeans, James, op. cit., p. 279.

166. Ibid., p. 281.

167. Gorenstein and Tucker, Scientific American, Nov. 1978.

168. Neugebauer and Becklin, Scientific American, Apr. 1973.

169. Jastrow and Thompson, op. cit., p. 250.


170. Branch, David, Astrophysical Journal, Sept. 15, 1981.

171. Minkowski, R., Nebulae and Interstellar Matter, edited by Middlehurst and Aller, op. cit., p. 629.

172. Shklovskii, I. S., op. cit., p. 297.

173. Ibid., p. 226.

174. Kowal, Charles T., Astronomical Journal, Dec. 1968.

175. Poveda and Woltjer, Astronomical Journal, Mar. 1968.

176. Shklovskii, I. S., op. cit., p. 257.

177. Minkowski, R., op. cit., p. 652.

178. Ibid., p. 658.

179. Mitton, Simon, The Crab Nebula, Charles Scribner’s Sons, New York, 1978, p. 42.

180. Ibid., p. 56.

181. Shklovskii, I. S., op. cit., p. 270.

182. Poveda and Woltjer, Astronomical Journal, Mar. 1968.

183. Smith, F. G., Pulsars, Cambridge University Press, 1977, p. 9.

184. Harwit, Martin, Cosmic Discovery, op. cit., p. 243.

185. Ibid., p. 327.

186. Hoyle, Fred, From Stonehenge to Modern Cosmology, W. H. Freeman & Co., San Francisco,
1972, p. 62.

187. Shklovsky, I. S., Publications of the Astronomical Society of the Pacific, Apr.-May 1980.

188. Smith, F. G., op. cit., p. 220.

189. Ibid., p. 169.

190. News item, Sky and Telescope, Dec. 1979.

191. Smith, F. G., op. cit., p. 229.

192. Ibid., p. 228.

193. Ibid., p. 91.

194. Ibid., p. 103.

195. Manchester and Taylor, Pulsars, W. H. Freeman & Co., San Francisco, 1977, p. 226.

196. Smith, F. G., Nature, Dec. 5, 1970.

197. Manchester and Taylor, op. cit., p. 15.

198. Taylor and Manchester, Annual Review of Astronomy and Astrophysics, 1977.

199. Manchester and Taylor, op. cit., p. 18.


200. Ibid., p. 6.

201. Thorne, Kip S., Scientific American, Dec. 1974.

202. Ruderman, M., Annals of the New York Academy of Sciences, Feb. 15, 1980.

203. Smith, F. G., Pulsars, op. cit., p. 171.

204. Burbidge and Burbidge, Quasi-Stellar Objects, W. H. Freeman & Co., San Francisco, 1967, p. 52.

205. Mitton, Simon, Exploring the Galaxies, op. cit., p. 179.

206. Bok and Bok, op. cit., p. 168.

207. Shklovskii, I. S., Stars, op. cit., p. 256.

208. Moore and Stockman, Astrophysical Journal, Jan. 1, 1981.

209. Mitton, Simon, Exploring the Galaxies, op. cit., p. 108.

210. Ibid., p. 180.

211. Mason and Cordova, Sky and Telescope, July 1982.

212. Fabian, Andrew C., Earth and Extraterrestrial Sciences, Feb. 1973.

213. Giacconi, R., quoted by Fabian, ibid.

214. Fabian and Pringle, New Scientist, Feb. 7, 1974.

215. Hartmann, William K., op. cit., p. 371.

216. Kylafis, N. D., et al., Annals of the New York Academy of Sciences, Feb. 15, 1980.

217. Holt and McCray, Annual Review of Astronomy and Astrophysics, 1982.

218. Shklovskii, I. S., Stars, op. cit., p. 384.

219. Gursky and Van den Heuvel, Scientific American, Mar. 1975.

220. Shklovskii, I. S., Stars, op. cit., p. 400.

221. News Item, Nature, Jan. 26, 1973.

222. Cocke, W. J., et al., Nature, Sept. 26, 1970.

223. Radhakrishnan, V., et al., Nature, Feb. 1, 1969.

224. Giacconi, R., Physics Today, May 1973.

225. Shklovskii, I. S., Stars, op. cit., p. 244.

226. Charles and Culhane, Scientific American, Dec. 1975.

227. Rothchild, R. E., Earth and Extraterrestrial Sciences, Mar. 1979.

228. Giacconi, R., Scientific American, Feb. 1980.

229. Mitton, Simon, The Crab Nebula, op. cit., p. 172.


230. Verschuur, Gerrit, Starscapes, op. cit., p. 171.

231. Weymann, R. J., Scientific American, Jan. 1969.

232. Harwit, Martin, Cosmic Discovery, op. cit., p. 23.

233. Jastrow and Thompson, op. cit., p. 254.

234. Strittmatter and Williams, Annual Review of Astronomy and Astrophysics, 1976.

235. Feynman, Richard, op. cit., p. 155.

236. Harwit, Martin, Cosmic Discovery, op. cit., p. 244.

237. Shklovskii, I. S., Stars, op. cit., p. 288.

238. Mitton, Simon, Exploring the Galaxies, op. cit., p. 135.

239. Bahcall and Hills, Astrophysical Journal, Feb. 1, 1973.

240. News Item, Nature, Sept. 7, 1968.

241. Bohuski and Weedman, Astrophysical Journal, Aug. 1, 1979.

242. Hewish, A., Annual Review of Astronomy and Astrophysics, 1970.

243. Verschuur, Gerrit, Starscapes, op. cit., p. 116.

244. Arp, Halton, Astrophysical Journal, May 1967.

245. Arp, Halton, private communication.

246. Hogg, D. E., Astrophysical Journal, Mar. 1969.

247. Macdonald and Miley, Astrophysical Journal, Mar. 1, 1971.

248. Kellerman, K. I., Astronomical Journal, Sept. 1972.

249. Arp, Halton, Science, Dec. 17, 1971.

250. Burbidge and Burbidge, op. cit., Chapter 3.

251. Burbidge and O’Dell, Astrophysical Journal, Dec. 15, 1972.

252. Miller, Joseph S., Publications of the Astronomical Society of the Pacific, Dec. 81-Jan. 82.

253. Rieke and Lebofsky, Annual Review of Astronomy and Astrophysics, 1979.

254. Stein, W. A., et al., Annual Review of Astronomy and Astrophysics, 1976.

255. Disney and Veron, Scientific American, Aug. 1977.

256. Shipman, H. L., Black Holes, Quasars, and the Universe, Houghton Mifflin Co., Boston, 1980, p.
215.

257. Sandage, Allan R., Astrophysical Journal, Nov. 15, 1972.

258. News item, Sky and Telescope, Jan. 1982.

259. Faber and Gallagher, Annual Review of Astronomy and Astrophysics, 1979.
260. Toomre and Toomre, Scientific American, Dec. 1973.

261. Dufour and Van den Bergh, Sky and Telescope, Nov. 1978.

262. Overbye, Dennis, Sky and Telescope, July 1979.

263. Jastrow and Thompson, op. cit., p. 240.

264. Gursky and Schwartz, Annual Review of Astronomy and Astrophysics, 1977.

265. Metz, William D., Science, Sept. 21, 1973.

266. Bond and Sargent, Astrophysical Journal Letters, Nov. 1, 1973.

267. Kristian, Jerome, Astrophysical Journal, Jan. 15, 1973.

268. Kellerman, K. I., Annals of the New York Academy of Sciences, Feb. 15, 1980.

269. Hey, J. S., The Evolution of Radio Astronomy, Neale Watson Academic Publications, New York,
1973, p. 169.

270. Shipman, H. L., Black Holes, op. cit., p. 204.

271. Fanti, R., et al., Astronomy and Astrophysics, Apr. 1, 1973.

272. Verschuur, Gerrit, Starscapes, op. cit., p. 157.

273. Shipman, H. L., Black Holes, op. cit., p. 180.

274. Mitton, Simon, Exploring the Galaxies, op. cit., p. 112.

275. Hartmann, William K., op. cit., p. 290.

276. Ibid., p. 374.

277. Ibid., p. 375.

278. Hoyle, Fred, Galaxies, Nuclei, and Quasars, Harper & Row, New York, 1965, p. 4.

279. Rogstad and Ekers, Astrophysical Journal, Aug. 1969.

280. Mitton, Simon, Exploring the Galaxies, op. cit., p. 107.

281. Burbidge and Burbidge, Nature, Oct. 4, 1969.

282. Sandage, Allan R., Scientific American, Nov. 1964.

283. Harwit, Martin, Cosmic Discovery, op. cit., p. 145.

284. Weedman, Daniel, Annual Review of Astronomy and Astrophysics, 1977.

285. Mitton, Simon, 1973 Yearbook of Astronomy.

286. Maffei, Paolo, op. cit., p. 288.

287. Mitton, Simon, Exploring the Galaxies, op. cit., p. 120.

288. Clark, George W., Scientific American, Oct. 1977.

289. Leventhal and MacCallum, Scientific American, July 1980.


290. Harwit, Martin, Cosmic Discovery, op. cit., p. 146.

291. Ibid., p. 147.

292. Einstein, Albert, The Structure of Scientific Thought, edited by E. H. Madden, Houghton Mifflin
Co., Boston, 1960, p. 82.

293. Feynman, Richard, op. cit., p. 156.

294. Ibid., p. 171.

295. Shipman, H. L., Black Holes, op. cit., p. 19.

296. Ibid., p. 16.

297. Ibid., p. 63.

298. Ibid., p. 98.

299. Ibid., p. 66.

300. Burbidge, Geoffrey, Sky and Telescope, Sept. 1983.

301. Davies, Paul, Science Digest, Sept. 1983.

302. Dingle, Herbert, A Century of Science, Hutchinson’s Publications, London, 1951, p. 315.

303. Andrade, E. N. da C., An Approach to Modern Physics, G. Bell & Sons, London, 1959, p. 134.

304. Heisenberg, Werner, Philosophic Problems of Nuclear Science, Pantheon Books, New York,
1952, p. 55.

305. Margenau, Henry, Quantum Theory, Vol. I, edited by D. R. Bates, Academic Press, New York, 1961, p. 6.

306. Feynman, Richard, op. cit., p. 129.

307. Heisenberg, Werner, op. cit., p. 38.

308. Bridgman, P. W., Reflections of a Physicist, Philosophical Library, New York, 1955, p. 186.

309. Maffei, Paolo, Beyond the Moon, MIT Press, Cambridge, Mass., 1978, p. 301.

310. News item, New Scientist, Oct. 17, 1968.

311. Davies, Paul, The Runaway Universe, op. cit., p. 33.

312. Larson, Dewey B., The Neglected Facts of Science, North Pacific Publishers, 1982, p. 58.

313. Bondi, Hermann, Cosmology Now, op. cit., p. 11.

314. Jastrow and Thompson, op. cit., p. 259.

315. Ibid., p. 271.

316. Bahcall, J. N., Astronomical Journal, May 1971.

317. Lovell, Bernard, Cosmology Now, op. cit., p. 8.

318. Alfven, H., Worlds-Antiworlds, W. H. Freeman & Co., San Francisco, 1966, p. 100.
319. Silk, Joseph, op. cit., p. 61.

320. Sciama, Dennis, Cosmology Now, op. cit., p. 67.

321. McCrea, W. H., ibid., p. 91.

322. Shipman, H. L., Black Holes, op. cit., p. 256.

323. Davies, Paul, The Edge of Infinity, Simon and Schuster, New York, 1981, p. 137.

324. Spitzer, Lyman, Jr., Searching Between the Stars, Yale University Press, 1982, p. 5.

325. See discussion in Verschuur, Starscapes, op. cit., p. 190.

326. Calder, Nigel, The Violent Universe, The Viking Press, New York, 1969, p. 121.

327. Rees, Martin, Cosmology Now, op. cit., p. 129.

328. Kuhn, Thomas, The Structure of Scientific Revolutions, University of Chicago Press, 1962, p. 67.

329. Lovell, Bernard, Cosmology Now, op. cit., p. 7.

330. Verschuur, Gerrit, The Invisible Universe, Springer-Verlag, New York, 1974, p. 139.

331. Kellerman, K. I., Physics Today, Oct. 1973.

332. Feynman, Richard, op. cit., p. 160.

333. de Vaucouleurs, G., Sky and Telescope, Aug. 1978.

334. Shapiro, Irwin, Technology Review (MIT), Dec. 1975.

335. Walker, Marshall, op. cit., p. 28.

336. McVittie, G. C., General Relativity and Cosmology, Chapman & Hall, London, 1956, p. 5.

337. Heisenberg, Werner, Physics and Philosophy, Harper & Bros., New York, 1958, p. 129.

338. Heisenberg, Werner, Physics and Beyond, Harper & Row, New York, 1971, p. 123.

339. Heisenberg, Werner, Philosophic Problems of Nuclear Science, op. cit., p. 55.

340. Waismann, F., Turning Points in Physics, Interscience Publishers, New York, 1959, p. 154.

341. Bridgman, P. W., op. cit., p. 93.

342. Lindsay, R. B., The Role of Science in Civilization, Harper & Row, New York, 1963, p. 84.

343. Walker, Marshall, op. cit., p. 6.

344. du Nouy, P. L., The Road to Reason, Longmans Green & Co., New York, 1949, p. 20.

345. Ford, K. W., Scientific American, Dec. 1963.

346. Ford, K. W., The World of Elementary Particles, Blaisdell Publishing Co., New York, 1963, p.
214.

347. Einstein and Infeld, The Evolution of Physics, Simon & Schuster, New York, 1938, p. 159.

348. du Nouy, P. L., Between Knowing and Believing, David McKay Co., New York, 1966, p. 239.
349. Gold, Michael, Science 84, Mar. 1984.

350. Schorn, Ronald A., Sky and Telescope, Feb. 1984.

351. Hoyle, Fred, Science Digest, May 1984.

DEWEY B. LARSON: THE COLLECTED WORKS


Dewey B. Larson (1898-1990) was an American
engineer and the originator of the Reciprocal System of
Theory, a comprehensive theoretical framework capable
of explaining all physical phenomena from subatomic
particles to galactic clusters. In this general physical
theory, space and time are simply the two reciprocal
aspects of the sole constituent of the universe–motion.
For more background information on the origin of
Larson’s discoveries, see Interview with D. B. Larson
taped at Salt Lake City in 1984. This site covers the
entire scope of Larson’s scientific writings, including his
exploration of economics and metaphysics.

Physical Science

The Structure of the Physical Universe: The original groundbreaking publication wherein the Reciprocal System of Physical Theory was presented for the first time.

Nothing but Motion: The first volume of the revised edition of The Structure of the Physical Universe, developing the basic principles and relations.

Basic Properties of Matter: The second volume of the revised edition of The Structure of the Physical Universe, applying the theory to the structure and behavior of matter, electricity and magnetism.

The Universe of Motion: The third volume of the revised edition of The Structure of the Physical Universe, applying the theory to astronomy.

The Case Against the Nuclear Atom: "A rude and outspoken book."

Beyond Newton: "...Recommended to anyone who thinks the subject of gravitation and general relativity was opened and closed by Einstein."

New Light on Space and Time: A bird’s eye view of the theory and its ramifications.

The Neglected Facts of Science: Explores the implications for physical science of the observed existence of scalar motion.

Quasars and Pulsars: Explains the most violent phenomena in the universe.

The Liquid State Papers: A series of privately circulated papers on the liquid state of matter.

The Dewey B. Larson Correspondence: Larson’s scientific correspondence, providing many informative sidelights on the development of the theory and the personality of its author.

The Dewey B. Larson Lectures: Transcripts and digitized recordings of Larson’s lectures.

The Collected Essays of Dewey B. Larson: Larson’s articles in Reciprocity and other publications, as well as unpublished essays.

Metaphysics

Beyond Space and Time: A scientific excursion into the largely unexplored territory of metaphysics.

Economic Science

The Road to Full Employment: The scientific answer to the number one economic problem.

The Road to Permanent Prosperity: A theoretical explanation of the business cycle and the means to overcome it.
