From: http://www.reciprocalsystem.com/nbm/index.htm

Nothing But Motion
DEWEY B. LARSON
Volume I of a revised and enlarged edition of THE STRUCTURE OF THE PHYSICAL UNIVERSE
Preface
1. Background
2. A Universe of Motion
3. Reference Systems
4. Radiation
5. Gravitation
6. The Reciprocal Relation
7. High Speed Motion
8. Motion in Time
9. Rotational Combinations
10. Atoms
11. Sub-atomic Particles
12. Basic Mathematical Relations
13. Physical Constants
14. Cosmic Elements
15. Cosmic Ray Decay
16. Cosmic Atom Building
17. Some Speculations
18. Simple Compounds
19. Complex Compounds
20. Chain Compounds
21. Ring Compounds
References

Preface
Nearly twenty years have passed since the first edition of this work was published. As I pointed out in the preface of that first edition, my findings indicate the necessity for a drastic change in the accepted concept of the fundamental relationship that underlies the whole structure of physical theory: the relation between space and time. The physical universe, I find, is not a universe of matter existing in a framework provided by space and time, as seen by conventional science, but a universe of motion, in which space and time are simply the two reciprocal aspects of motion, and have no other significance. What I
have done, in brief, is to determine the properties that space and time must necessarily possess in a universe composed entirely of motion, and to express them in the form of a set of postulates. I have then shown that development of the consequences of these postulates by logical and mathematical processes, without making any further assumptions or introducing anything from experience, defines, in detail, a complete theoretical universe that coincides in all respects with the observed physical universe.

Nothing of this nature has ever been developed before. No previous theory has come anywhere near covering the full range of phenomena accessible to observation with existing facilities, to say nothing of dealing with the currently inaccessible, and as yet observationally unknown, phenomena that must also come within the scope of a complete theory of the universe. Conventional scientific theories accept certain features of the observed physical universe as given, and then make assumptions on which to base conclusions as to the properties of these observed phenomena. The new theoretical system, on the other hand, has no empirical content. It bases all of its conclusions solely on the postulated properties of space and time. The theoretical deductions from these postulates provide for the existence of the various physical entities and phenomena (matter, radiation, electrical and magnetic phenomena, gravitation, etc.), as well as establishing the relations between these entities. Since all conclusions are derived from the same premises, the theoretical system is a completely integrated structure, contrasting sharply with the currently accepted body of physical theory, which, as described by Richard Feynman, is "a multitude of different parts and pieces that do not fit together very well."

The last twenty years have added a time dimension to this already unique situation. The acid test of any theory is whether it is still tenable after the empirical knowledge of the subject is enlarged by new discoveries. As Harlow Shapley once pointed out, facts are the principal enemies of theories. Few theories that attempt to cover any more than a severely limited field are able to survive the relentless march of discovery for very long without major changes or complete reconstruction. But no substantive changes have been made in the postulates of this new system of theory in the nearly twenty years since the original publication, years in which tremendous strides have been made in the enlargement of empirical knowledge in many physical areas. Because the postulates and whatever can be derived from them by logical and mathematical processes, without introducing anything from observation or other external sources, constitute the entire system of theory, this absence of substantive change in the postulates means that there has been no change anywhere in the theoretical structure.

It has been necessary, of course, to extend the theory by developing more of the details, in order to account for some of the new discoveries, but in most cases the nature of the required extension was practically obvious as soon as the new phenomena or relationships were identified. Indeed, some of the new discoveries, such as the existence of exploding galaxies and the general nature of the products thereof, were actually anticipated in the first published description of the theory, along with many phenomena and relations that are still awaiting empirical verification.
Thus the new theoretical system is ahead of observation and experiment in a number of significant respects.

The scientific community is naturally reluctant to change its views to the degree required by my findings, or even to open its journals to discussion of such a departure from orthodox thought. It has been a slow and difficult task to get any significant amount of consideration of the new structure of theory. However, those who do examine this new theoretical structure carefully can hardly avoid being impressed by the logical and consistent nature of the theoretical development. As a consequence, many of the individuals who have made an effort to understand and evaluate the new system have not only recognized it as a major addition to scientific knowledge, but have developed an active personal interest in helping to bring it to the attention of others. In order to facilitate this task an organization was formed some years ago with the specific objective of promoting understanding and eventual acceptance of the new theoretical system, the Reciprocal System of physical theory, as we are calling it. Through the efforts of this organization, the New Science Advocates, Inc., and its individual members, lectures on the new theory have been given at colleges and universities throughout the United States and Canada. The NSA also publishes a newsletter, and has been instrumental in making publication of this present volume possible.

At the annual conference of this organization at the University of Mississippi in August 1977 I gave an account of the origin and early development of the Reciprocal System of theory. It has been suggested by some of those who heard this presentation that certain parts of it ought to be included in this present volume in order to bring out the fact that the central idea of the new system of theory, the general reciprocal relation between space and time, is not a product of a fertile imagination, but a conclusion reached as the result of an exhaustive and detailed analysis of the available empirical data in a number of the most basic physical fields. The validity of such a relation is determined by its consequences, rather than by its antecedents, but many persons may be more inclined to take the time to examine those consequences if they are assured that the relation in question is the product of a systematic inductive process, rather than something extracted out of thin air. The following paragraphs from my conference address should serve this purpose.

Many of those who come in contact with this system of theory are surprised to find us talking of "progress" in connection with it. Some evidently look upon the theory as a construction, which should be complete before it is offered for inspection. Others apparently believe that it originated as some kind of a revelation, and that all I had to do was to write it down. Before I undertake to discuss the progress that has been made in the past twenty years, it is therefore appropriate to explain just what kind of a thing the theory actually is, and why progress is essential. Perhaps the best way of doing this will be to tell you something about how it originated.

I have always been very much interested in the theoretical aspect of scientific research, and quite early in life I developed a habit of spending much of my spare time on theoretical investigations of one kind or another. Eventually I concluded that these efforts would be more likely to be productive if I directed most of them toward some specific goal, and I decided to undertake the task of devising a
method whereby the magnitudes of certain physical properties could be calculated from their chemical composition. Many investigators had tackled this problem previously, but the most that had ever been accomplished was to devise some mathematical expressions whereby the effect of temperature and pressure on these properties can be evaluated if certain arbitrary "constants" are assigned to each of the various substances. The goal of a purely theoretical derivation, one which requires no arbitrary assignment of numerical constants, has eluded all of these efforts.

It may have been somewhat presumptuous on my part to select such an objective, but, after all, if anyone wants to try to accomplish something new, he must aim at something that others have not done. Furthermore, I did have one significant advantage over my predecessors, in that I was not a professional physicist or chemist. Most people would probably consider this a serious disadvantage, if not a definite disqualification. But those who have studied the subject in depth are agreed that revolutionary new discoveries in science seldom come from the professionals in the particular fields involved. They are almost always the work of individuals who might be considered amateurs, although they are more accurately described by Dr. James B. Conant as "uncommitted investigators." The uncommitted investigator, says Dr. Conant, is one who does the investigation entirely on his own initiative, without any direction by or responsibility to anyone else, and free from any requirement that the work must produce results.

Research is, in some respects, like fishing. If you make your living as a fisherman, you must fish where you know that there are fish, even though you also know that those fish are only small ones. No one but the amateur can take the risk of going into completely unknown areas in search of a big prize. Similarly, the professional scientist cannot afford to spend twenty or thirty of the productive years of his life in pursuit of some goal that involves a break with the accepted thought of his profession. But we uncommitted investigators are primarily interested in the fishing, and while we like to make a catch, this is merely an extra dividend. It is not essential as it is for those who depend on the catch for their livelihood. We are the only ones who can afford to take the risks of fishing in unknown waters. As Dr. Conant puts it:

Few will deny that it is relatively easy in science to fill in the details of a new area, once the frontier has been crossed. The crucial event is turning the unexpected corner. This is not given to most of us to do... By definition the unexpected corner cannot be turned by any operation that is planned... If you want advances in the basic theories of physics and chemistry in the future comparable to those of the last two centuries, then it would seem essential that there continue to be people in a position to turn unexpected corners. Such a man I have ventured to call the uncommitted investigator.

As might be expected, the task that I had undertaken was a long and difficult one, but after about twenty years I had arrived at some interesting mathematical expressions in several areas, one of the most intriguing of which was an
expression for the inter-atomic distance in the solid state in terms of three variables clearly related to the properties portrayed by the periodic table of the elements. But a mathematical expression, however accurate it may be, has only a limited value in itself. Before we can make full use of the relationship that it expresses, we must know something as to its meaning. So my next objective was to find out why the mathematics took this particular form. I studied these expressions from all angles, analyzing the different terms, and investigating all of the hypotheses as to their origin that I could think of. This was a rather discouraging phase of the project, as for a long time I seemed to be merely spinning my wheels and getting nowhere. On several occasions I decided to abandon the entire project, but in each case, after several months of inactivity I thought of some other possibility that seemed worth investigating, and I returned to the task. Eventually it occurred to me that, when expressed in one particular form, the mathematical relation that I had formulated for the inter-atomic distance would have a simple and logical explanation if I merely assumed that there is a general reciprocal relation between space and time. My first reaction to this thought was the same as that of a great many others. The idea of the reciprocal of space, I said to myself, is absurd. One might as well talk of the reciprocal of a pail of water, or the reciprocal of a fencepost. But on further consideration I could see that the idea is not so absurd after all. The only relation between space and time of which we have any actual knowledge is motion, and in motion space and time do have a reciprocal relation. If one airplane travels twice as fast as another, it makes no difference whether we say that it travels twice as far in the same time, or that it travels the same distance in half the time. This is not necessarily a general reciprocal relation, but the fact that it is a reciprocal relation gives the idea of a general relation a considerable degree of plausibility. So I took the next step, and started considering what the consequences of a reciprocal relation of this nature might be. Much to my surprise, it was immediately obvious that such a relation leads directly to simple and logical answers to no less than a half dozen problems of long standing in widely separated physical fields. Those of you who have never had occasion to study the foundations of physical theory in depth probably do not realize what an extraordinary result this actually is. Every theory of present-day physical science has been formulated to apply specifically to some one physical field, and not a single one of these theories can provide answers to major questions in any other field. They may help to provide these answers but in no case can any of them arrive at such an answer unassisted. Yet here in the reciprocal postulate we find a theory of the relation between space and time that leads directly, without any assistance from any other theoretical assumptions or from empirical facts, to simple and logical answers to many different problems in many different fields. This is something completely unprecedented. A theory based on the reciprocal relation accomplishes on a wholesale scale what no other theory can do at all. To illustrate what I am talking about, let us consider the recession of the distant galaxies. As most of you know, astronomical observations indicate that the most
distant galaxies are receding from the earth at speeds which approach the speed of light. No conventional physical theory can explain this recession. Indeed, even if you put all of the theories of conventional physics together, you still have no explanation of this phenomenon. In order to arrive at any such explanation the astronomers have to make some assumption, or assumptions, specifically applicable to the recession itself. The current favorite, the Big Bang theory, assumes a gigantic explosion at some hypothetical singular point in the past in which the entire contents of the universe were thrown out into space at their present high speeds. The rival Steady State theory assumes the continual creation of new matter, which in some unspecified way creates a pressure that pushes the galaxies apart at the speeds now observed.

But the reciprocal postulate, an assumption that was made to account for the magnitudes of the inter-atomic distances in the solid state, gives us an explanation of the galactic recession without the necessity of making any assumptions about that recession or about the galaxies that are receding. It is not even necessary to arrive at any conclusion as to what a galaxy is. Obviously it must be something (or its existence could not be recognized), and as long as it is something, the reciprocal relation tells us that it must be moving outward away from our location at the speed of light, because the location which it occupies is so moving. On the basis of this relation, the spatial separation between any two physical locations, the "elapsed distance," as we may call it, is increasing at the same rate as the elapsed time.

Of course, any new answer to a major question that is provided by a new theory leaves some subsidiary questions that require further consideration, but the road to the resolution of these subsidiary issues is clear once the primary problem is overcome. The explanation of the recession, the reason why the most distant galaxies recede with the speed of light, leaves us with the question as to why the closer galaxies have lower recession speeds, but the answer to this question is obvious, since we know that gravitation exerts a retarding effect which is greater at the shorter distances.

Another example of the many major issues of long standing that are resolved almost automatically by the reciprocal postulate is the mechanism of the propagation of electromagnetic radiation. Here, again, no conventional physical theory is able to give us an explanation. As in the case of the galactic recession, it is necessary to make some assumption about the radiation itself before any kind of a theory can be formulated, and in this instance conventional thinking has not even been able to produce an acceptable hypothesis. Newton's assumption of light corpuscles traveling in the manner of bullets from a gun, and the rival hypothesis of waves in a hypothetical ether, were both eventually rejected. There is a rather general impression that Einstein supplied an explanation, but Einstein himself makes no such claim. In one of his books he points out what a difficult problem this actually is, and he concludes with this statement:

Our only way out seems to be to take for granted the fact that space has the physical property of transmitting electromagnetic waves, and not to bother too much about the meaning of this statement.

So, as matters now stand, conventional science has no explanation at all for this fundamental physical phenomenon. But here, too, the reciprocal postulate gives us a simple and logical explanation. It is, in fact, the same explanation that accounts for the recession of the distant galaxies. Here, again, there is no need to make any assumption about the photon itself. It is not even necessary to know what a photon is. As long as it is something, it is carried outward at the speed of light by the motion of the spatial location which it occupies. No more than a minimum amount of consideration was required in order to see that the answers to a number of other physical problems of long standing similarly emerged easily and naturally on application of the reciprocal postulate. This was clearly something that had to be followed up. No investigator who arrived at this point could stop without going on to see just how far the consequences of the reciprocal relation would extend. The results of that further investigation constitute what we now know as the Reciprocal System of theory. As I have already said, it is not a construction, and not a revelation. Now you can see just what it is. It is nothing more nor less than the total of the consequences that result if there is a general reciprocal relation between space and time. As matters now stand, the details of the new theoretical system, so far as they have been developed, can be found only in my publications and those of my associates, but the system of theory is not coextensive with what has thus far been written about it. In reality, it consists of any and all of the consequences that follow when we adopt the hypothesis of a general reciprocal relation between space and time. A general recognition of this point would go a long way toward meeting some of our communication problems. Certainly no one should have any objection to an investigation of the consequences of such a hypothesis. Indeed, anyone who is genuinely interested in the advancement of science, and who realizes the unprecedented scope of these consequences, can hardly avoid wanting to find out just how far they actually extend. As a German reviewer expressed it. Only a careful investigation of all of the author’s deliberations can show whether or not he is right. The official schools of natural philosophy should not shun this (considerable, to be sure) effort. After all, we are concerned here with questions of fundamental significance. Yet, as all of you undoubtedly know, the scientific community, particularly that segment of the community that we are accustomed to call the Establishment, is very reluctant to permit general discussion of the theory in the journals and in scientific meetings. They are not contending that the conclusions we have reached are wrong; they are simply trying to ignore them, and hope that they will eventually go away. This is, of course, a thoroughly unscientific attitude, but since it exists we have to deal with it, and for this purpose it will be helpful to have some idea of the thinking that underlies the opposition. There are some individuals who simply do not want their thinking disturbed, and are not open to any kind of an argument. William James, in one of his books, reports a conversation that he had with a prominent scientist concerning what we now call
ESP. This man, says James, contended that even if ESP is a reality, scientists should band together to keep that fact from becoming known, since the existence of any such thing would cause havoc in the fundamental thought of science. Some individuals no doubt feel the same way about the Reciprocal System, and so far as these persons are concerned there is not much that we can do. There is no argument that can counter an arbitrary refusal to consider what we have to offer.

In most cases, however, the opposition is based on a misunderstanding of our position. The issue between the supporters of rival scientific theories normally is: Which is the better theory? The basic question involved is which theory agrees more closely with the observations and measurements in the physical areas to which the theories apply, but since all such theories are specifically constructed to fit the observations, the decision usually has to rest to a large degree on preferences and prejudices of a philosophical or other non-scientific nature. Most of those who encounter the Reciprocal System of theory for the first time take it for granted that we are simply raising another issue, or several issues, of the same kind. The astronomers, for instance, are under the impression that we are contending that the outward progression of the natural reference system is a better explanation of the recession of the distant galaxies than the Big Bang. But this is not our contention at all. We have found that we need to postulate a general reciprocal relation between space and time in order to explain certain fundamental physical phenomena that cannot be explained by any conventional physical theory. But once we have postulated this relationship it supplies simple and logical answers for the major problems that arise in all physical areas. Thus our contention is not that we have a better assortment of theories to replace the Big Bang and other specialized theories of limited scope, but that we have a general theory that applies to all physical fields. These theories of limited applicability are therefore totally unnecessary.

While this present volume is described as the first unit of a "revised and enlarged" edition, the revisions are actually few and far between. As stated earlier, there have been no substantive changes in the postulates since they were originally formulated. Inasmuch as the entire structure of the theory has been derived from these postulates by deducing their logical and mathematical consequences, the development of the theory in this new edition is essentially the same as in the original, the only significant difference being in a few places where points that were originally somewhat vague have been clarified, or where more direct lines of development have been substituted for the earlier derivations. However, many problems are encountered in getting an unconventional work of this kind into print, and in order to make the original publication possible at all it was necessary to limit the scope of the work, both as to the number of subjects covered and as to the extent to which the details of each subject were developed. For this reason the purpose of this new edition is not only to bring the theoretical structure up to date by incorporating all of the advances that have been made in the last twenty years, but also to present the portions of the original results--approximately half of the total--that had to be omitted from the first edition.

Because of this large increase in the size of the work, the new edition will be issued in several volumes. This first volume is self-contained. It develops the basic laws and principles applicable to physical phenomena in general, and defines the entire chain of deductions leading from the fundamental postulates to each of the conclusions that are reached in the various physical areas that are covered. The subsequent volumes will apply the same basic laws and principles to a variety of other physical phenomena. It has seemed advisable to change the order of presentation to some extent, and as a result a substantial amount of the material omitted from the first edition has been included in this volume, whereas some subjects, such as electric and magnetic phenomena, that were discussed rather early in the first edition have been deferred to the later volumes. For the benefit of those who do not have access to the first edition (which is out of print) and wish to examine what the Reciprocal System of theory has to say about these deferred items before the subsequent volumes are published, I will say that brief discussions of some of these subjects are contained in my 1965 publication, New Light on Space and Time, and some further astronomical information, with particular reference to the recently discovered compact astronomical objects, can be found in Quasars and Pulsars, published in 1971.

It will not be feasible to acknowledge all of the many individual contributions that have been made toward developing the details of the theoretical system and bringing it to the attention of the scientific community. However, I will say that I am particularly indebted to the founders of the New Science Advocates, Dr. Douglas S. Cramer, Dr. Paul F. de Lespinasse, and Dr. George W. Hancock; to Dr. Frank A. Anderson, the current President of the NSA, who did the copy editing for this volume, along with his many other contributions; and to the past and present members of the NSA Executive Board: Steven Berline, Ronald F. Blackburn, Frances Boldereff, James N. Brown, Jr., Lawrence Denslow, Donald T. Elkins, Rainer Huck, Todd Kelso, Richard L. Long, Frank H. Meyer, William J. Mitchell, Harold Norris, Carla Rueckert, Ronald W. Satz, George Windolph, and Hans F. Wuenscher.

D. B. Larson

CHAPTER 1

Background
To the man of the Stone Age the world in which he lived was a world of spirits. Powerful gods hurled shafts of lightning, threw waves against the shore, and sent winter storms howling down out of the north. Lesser beings held sway in the forests, among the rocks, and in the flowing streams. Malevolent demons, often in league with the mighty rulers of the elements, threatened the human race from all directions, and only the intervention of an assortment of benevolent, but capricious, deities made man’s continued existence possible at all.

This hypothesis that material phenomena are direct results of the actions of superhuman beings was the first attempt to define the fundamental nature of the physical universe: the first general physical concept. The scientific community currently regards it as a juvenile and rather ridiculous attempt at an explanation of nature, but actually it was plausible enough to remain essentially unchallenged for thousands of years. In fact, it is still accepted, in whole or in part, by a very substantial proportion of the population of the world. Such widespread acceptance is not as inexplicable as it may seem to the scientifically trained mind; it has been achieved only because the ―spirit‖ concept does have some genuine strong points. Its structure is logical. If one accepts the premises he cannot legitimately contest the conclusions. Of course, these premises are entirely ad hoc, but so are many of the assumptions of modern science. The individual who accepts the idea of a ―nuclear force‖ without demur is hardly in a position to be very critical of those who believe in the existence of ―evil spirits.‖ A special merit of this physical theory based on the ―spirit‖ concept is that it is a comprehensive theory; it encounters no difficulties in assimilating new discoveries, since all that is necessary is to postulate some new demon or deity. Indeed, it can even deal with discoveries not yet made, simply by providing a ―god of the unknown.‖ But even though a theory may have some good features, or may have led to some significant accomplishments, this does not necessarily mean that it is correct, nor that it is adequate to meet current requirements. Some three or four thousand years ago it began to be realized by the more advanced thinkers that the ―spirit‖ concept had some very serious weaknesses. The nature of these weaknesses is now well understood, and no extended discussion of them is necessary. The essential point to be recognized is that at a particular stage in history the prevailing concept of the fundamental nature of the universe was subjected to critical scrutiny, and found to be deficient. It was therefore replaced by a new general physical concept. This was no minor undertaking. The ―spirit‖ concept was well entrenched in the current pattern of thinking, and it had powerful support from the ―Establishment,‖ which is always opposed to major innovations. In most of the world as it then existed such a break with accepted thought would have been impossible, but for some reason an atmosphere favorable to critical thinking prevailed for a time in Greece and neighboring areas, and this profound alteration of the basic concept of the universe was accomplished there. The revolution in thought came slowly and gradually. Anaxagoras, who is sometimes called the first scientist, still attributed Mind to all objects, inanimate as well as animate. If a rock fell from a cliff, his explanation was that this action was dictated by the Mind of the rock. Even Aristotle retained the ―spirit‖ concept to some degree. His view of the fall of the rock was that this was merely one manifestation of a general tendency of objects to seek their ―natural place,‖ and he explained the acceleration during the fall as a result of the fact ―that the falling body moved more jubilantly every moment because it found itself nearer home.‖1 Ultimately, however, these vestiges of the ―spirit‖ concept disappeared, and a new general concept emerged, one that has been the basis of all scientific work ever since. 
According to this new concept, we live in a universe of matter: one that consists of material ―things‖ existing in a setting provided by space and time. With the benefit of this
conceptual foundation, three thousand years of effort by generation after generation of scientists have produced an immense systematic body of knowledge about the physical universe, an achievement which, it is safe to say, is unparalleled elsewhere in human life.

In view of this spectacular record of success, which has enabled the "matter" concept to dominate the organized thinking of mankind ever since the days of the ancient Greeks, it may seem inconsistent to suggest that this concept is not adequate to meet present-day needs, but the ultimate fate of any scientific concept or theory is determined not by what it has done but by what, if anything, it now fails to do. The graveyard of science is full of theories that were highly successful in their day, and contributed materially to the advance of scientific knowledge while they enjoyed general acceptance: the caloric theory, the phlogiston theory, the Ptolemaic theory of astronomy, the "billiard ball" theory of the atom, and so on. It is appropriate, therefore, that we should, from time to time, subject all of our basic scientific ideas to a searching and critical examination for the purpose of determining whether or not these ideas, which have served us well in the past, are still adequate to meet the more exacting demands of the present.

Once we subject the concept of a universe of matter to a critical scrutiny of this kind it is immediately obvious, not only that this concept is no longer adequate for its purpose, but that modern discoveries have completely demolished its foundations. If we live in a world of material "things" existing in a framework provided by space and time, as envisioned in the concept of a universe of matter, then matter in some form is the underlying feature of the universe: that which persists through the various physical processes. This is the essence of the concept. For many centuries the atom was accepted as the ultimate unit, but when particles smaller (or at least less complex) than atoms were discovered, and it was found that under appropriate conditions atoms would disintegrate and emit such particles in the process, the sub-atomic particles took over the role of the ultimate building blocks. But we now find that these particles are not permanent building blocks either. For instance, the neutron, one of the constituents from which the atom is currently supposed to be constructed, spontaneously separates into a proton, an electron, and a neutrino. Here, then, one of the "elementary particles," the supposedly basic and unchangeable units of matter, transforms itself into other presumably basic and unchangeable units.

In order to save the concept of a universe of matter, strenuous efforts are now being made to explain events of this kind by postulating still smaller "elementary particles" from which the known sub-atomic particles could be constructed. At the moment, the theorists are having a happy time constructing theoretical "quarks" or other hypothetical sub-particles, and endowing these products of the imagination with an assortment of properties such as "charm," "color," and so on, to enable them to fit the experimental data. But this descent to a lower stratum of physical structure could not be accomplished, even in the realm of pure hypothesis, without taking another significant step away from reality.
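For reference, the neutron decay mentioned above can be written in the standard notation of particle physics (the modern convention identifies the emitted neutral particle as an antineutrino, and the measured mean life of a free neutron, roughly 880 seconds, corresponds to the figure of about 15 minutes cited later in this chapter):

$$ n \;\rightarrow\; p + e^{-} + \bar{\nu}_{e}, \qquad \tau_{n} \approx 880\ \mathrm{s} \approx 15\ \mathrm{min}. $$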
At the time the atomic theory was originally proposed by Democritus and his contemporaries, the atoms of which they conceived all physical structures to be composed were entirely hypothetical, but subsequent observations and experiments have revealed the existence of units of matter that have exactly the properties that are
attributed to the atoms by the atomic theory. As matters now stand, therefore, this theory can legitimately claim to represent reality. But there are no observed particles that have all of the properties that are required in order to qualify as constituents of the observed atoms. The theorists have therefore resorted to the highly questionable expedient of assuming, entirely ad hoc, that the observed sub-atomic particles (that is, particles less complex than atoms) are the atomic constituents, but have different properties when they are in the atoms than those they are found to have wherever they can be observed independently. This is a radical departure from the standard scientific practice of building theories on solid factual foundations, and its legitimacy is doubtful, to say the least, but the architects of the "quark" theories are going a great deal farther, as they are cutting loose from objective reality altogether, and building entirely on assumptions. Unlike the hypothetical "constituents" of the atoms, which are observed sub-atomic particles with hypothetical sets of properties instead of the observed properties, the quarks are hypothetical particles with hypothetical properties.

The unreliability of conclusions reached by means of such forced and artificial constructions should be obvious, but it is not actually necessary to pass judgment on this basis, because irrespective of how far the subdividing of matter into smaller and smaller particles is carried, the theory of "elementary particles" of matter cannot account for the observed existence of processes whereby matter is converted into non-matter, and vice versa. This interconvertibility is positive and direct proof that the "matter" concept is wrong; that the physical universe is not a universe of matter. There clearly must be some entity more basic than matter, some common denominator underlying both matter and non-material phenomena.

Such a finding, which makes conventional thinking about physical fundamentals obsolete, is no more welcome today than the "matter" concept was in the world of antiquity. Old habits of thought, like old shoes, are comfortable, and the automatic reaction to any suggestion of a major change in basic ideas is resistance, if not outright resentment. But if scientific progress is to continue, it is essential not only to generate new ideas to meet new problems, but also to be equally diligent in discarding old ideas that have outlived their usefulness.

There is no actual need for any additional evidence to confirm the conclusion that the currently accepted concept of a universe of matter is erroneous. The observed interconvertibility of matter and non-matter is in itself a complete and conclusive refutation of the assertion that matter is basic. But when the inescapable finality of the answer that we get from this interconvertibility forces recognition of the complete collapse of the concept of a universe of matter, and we can no longer accept it as valid, it is easy to see that this concept has many other shortcomings that should have alerted the scientific community to question its validity long ago.

The most obvious weakness of the concept is that the theories that are based upon it have not been able to keep abreast of progress in the experimental and observational fields. Major new physical discoveries almost invariably come as surprises, "unexpected and even unimagined surprises,"2 in the words of Richard Schlegel. They were not anticipated on theoretical grounds, and cannot
be accommodated to existing theory without some substantial modification of previous ideas. Indeed, it is doubtful whether any modification of existing theory will be adequate to deal with some of the more recalcitrant phenomena now under investigation. The current situation in particle physics, for instance, is admittedly chaotic. The outlook might be different if the new information that is rapidly accumulating in this field were gradually clearing up the situation, but in fact it merely seems to deepen the existing crisis. If anything in this area of confusion is clear by this time it is that the ―elementary particles‖ are not elementary. But the basic concept of a universe of matter requires the existence of some kind of an elementary unit of matter. If the particles that are now known are not elementary units, as is generally conceded, then, since no experimental confirmation is available for the hypothesis of sub-particles, the whole theory of the structure of matter, as it now stands, is left without visible support. Another prime example of the inability of present-day theories based on the ―matter‖ concept to cope with new knowledge of the universe is provided by some of the recent discoveries in astronomy. Here the problem is an almost total lack of any theoretical framework to which the newly observed phenomena can be related. A book published a few years ago that was designed to present all of the significant information then available about the astronomical objects known as quasars contains the following statement, which is still almost as appropriate as when it was written: It will be seen from the discussion in the later chapters that there are so many conflicting ideas concerning theory and interpretation of the observations that at least 95 percent of them must indeed be wrong. But at present no one knows which 95 percent.3 After three thousand years of study and investigation on the basis of theories founded on the ―matter‖ concept we are entitled to something more than this. Nature has a habit of confronting us with the unexpected, and it is not very reasonable to expect the currently prevailing structure of theory to give us an immediate and full account of all details of a new area, but we should at least be able to place the new phenomena in their proper places with respect to the general framework, and to account for their major aspects without difficulty. The inability of present-day theories to keep up with experimental and observational progress along the outer boundaries of science is the most obvious and easily visible sign of their inadequacies, but it is equally significant that some of the most basic physical phenomena are still without any plausible explanations. This embarrassing weakness of the current theoretical structure is widely recognized, and is the subject of comment from time to time. For instance, a press report of the annual meeting of the American Physical Society in New York in February 1969 contains this statement: A number of very distinguished physicists who spoke reminded us of long-standing mysteries, some of them problems so old that they are becoming forgotten—pockets of resistance left far behind the advancing frontiers of physics.4 Gravitation is a good example. It is unquestionably fundamental, but conventional theory cannot explain it. As has been said it ―may well be the most fundamental and least understood of the interactions.‖5 When a book or an article on this subject appears, we
almost invariably find the phenomenon characterized, either in the title or in the introductory paragraphs, as a ―mystery,‖ an ―enigma,‖ or a ―riddle.‖ But what is gravity, really? What causes it? Where does it come from? How did it get started? The scientist has no answers . . . in a fundamental sense, it is still as mysterious and inexplicable as it ever was, and it seems destined to remain so. (Dean E. Wooldridge)6 Electromagnetic radiation, another of the fundamental physical phenomena, confronts us with a different, but equally disturbing, problem. Here there are two conflicting explanations of the phenomenon, each of which fits the observed facts in certain areas but fails in others: a paradox which, as James B. Conant observed, ―once seemed intolerable,‖ although scientists have now ―learned to live with it.‖7 This too, is a ―deep mystery,‖8 as Richard Feynman calls it, at the very base of the theoretical structure. There is a widespread impression that Einstein solved the problem of the mechanism of the propagation of radiation‖ and gave a definitive explanation of the phenomenon. It may be helpful, therefore, to note just what Einstein did have to say on this subject, not only as a matter of clarifying the present status of the radiation problem itself, but to illustrate the point made by P. W. Bridgman when he observed that many of the ideas and opinions to which the ordinary scientist subscribes ―have not been thought through carefully but are held in the comfortable belief . . . that some one must have examined them at some time.‖9 In one of his books Einstein points out that the radiation problem is an extremely difficult one, and he concludes that: Our only way out seems to be to take for granted the fact that space has the physical property of transmitting electromagnetic waves, and not to bother too much about the meaning of this statement. 10 Here, in this statement, Einstein reveals (unintentionally) just what is wrong with the prevailing basic physical theories, and why a revision of the fundamental concepts of those theories is necessary. Far too many difficult problems have been evaded by simply assuming an answer and ―taking it for granted.‖ This point is all the more significant because the shortcomings of the ―matter‖ concept and the theories that it has generated are by no means confined to the instances where no plausible explanations of the observed phenomena have been produced. In many other cases where explanations of one kind or another have actually been formulated, the validity of these explanations is completely dependent on ad hoc assumptions that conflict with observed facts. The nuclear theory of the atom is typical. Inasmuch as it is now clear that the atom is not an indivisible unit, the concept of a universe of matter demands that it be constructed of ―elementary‖ material units of some kind. Since the observed sub-atomic particles are the only known candidates for this role it has been taken for granted, as mentioned earlier, that the atom is a composite of sub-atomic particles. Consideration of the various possible combinations has led to the hypothesis that is now generally accepted: an atom in which there is a nucleus composed of protons and neutrons, surrounded by some kind of an arrangement of electrons.

But if we undertake a critical examination of this hypothesis it is immediately apparent that there are direct conflicts with known physical facts. Protons are positively charged, and charges of the same sign repel each other. According to the established laws of physics, therefore, a nucleus composed wholly or partly of protons would immediately disintegrate. This is a cold, hard physical fact, and there is not the slightest evidence that it is subject to abrogation or modification under any circumstances or conditions. Furthermore, the neutron is observed to be unstable, with a lifetime of only about 15 minutes, and hence this particle fails to meet one of the most essential requirements of a constituent of a stable atom: the requirement of stability. The status of the electron as an atomic constituent is even more dubious. The properties, which it must have to play such a role, are altogether different from the properties of the observed electron. Indeed, as Herbert Dingle points out, we can deal with the electron as a constituent of the atom only if we ascribe to it ―properties not possessed by any imaginable objects at all.‖11 A fundamental tenet of science is that the facts of observation and experiment are the scientific court of last resort; they pronounce the final verdict irrespective of whatever weight may be given to other considerations. As expressed by Richard Feynman: If it (a proposed new law or theory) disagrees with experiment it is wrong. In that simple statement is the key to science.... That is all there is to it.12 The situation with respect to the nuclear theory is perfectly clear. The hypothesis of an atomic nucleus composed of protons and neutrons is in direct conflict with the observed properties of electric charges and the observed behavior of the neutron, while the conflicts between the atomic version of the electron and physical reality are numerous and very serious. According to the established principles of science, and following the rule that Feynman laid down in the foregoing quotation, the nuclear theory should have been discarded summarily years ago. But here we see the power of the currently accepted fundamental physical concept. The concept of a universe of matter demands a ―building block‖ theory of the atom: a theory in which the atom (since it is not an indivisible building block itself) is a ―thing‖ composed of ―parts‖ which, in turn, are ―things‖ of a lower order. In the absence of any way of reconciling such a theory with existing physical knowledge, either the basic physical concept or standard scientific procedures and tests of validity had to be sacrificed. Since abandonment of the existing basic concept of the nature of the universe is essentially unthinkable in the ordinary course of theory construction, sound scientific procedure naturally lost the decision. The conflicts between the nuclear theory and observation were arbitrarily eliminated by means of a set of ad hoc assumptions. In order to prevent the break-up of the hypothetical nucleus by reason of the repulsion between the positive charges of the individual protons it was simply assumed that there is a ―nuclear force‖ of attraction, which counterbalances the known force of repulsion. And in order to build a stable atom out of unstable particles it was assumed (again purely ad hoc) that the neutron, for some unknown reason, is stable within the nucleus. 
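To give a rough sense of the scale of the repulsion that the assumed "nuclear force" must counterbalance, Coulomb's law can be applied to two protons; the numbers used here (the standard value of the elementary charge and a representative nuclear separation of about one femtometer) are illustrative assumptions, not figures taken from the discussion above:

$$ F \;=\; \frac{e^{2}}{4\pi\varepsilon_{0} r^{2}} \;\approx\; \frac{(8.99\times10^{9}\ \mathrm{N\,m^{2}/C^{2}})\,(1.602\times10^{-19}\ \mathrm{C})^{2}}{(1\times10^{-15}\ \mathrm{m})^{2}} \;\approx\; 2\times10^{2}\ \mathrm{N}, $$

an enormous force at that scale, which indicates how strong the postulated counterbalancing attraction has to be.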
The more difficult problem of inventing some way of justifying the electron as an atomic constituent is currently being handled by assuming that the atomic electron is an entity that transcends reality. It is unrelated to anything that has ever been observed, and is itself
not capable of being observed: an ―abstract thing, no longer intuitable in terms of the familiar aspects of everyday experience,‖13 as Henry Margenau describes it. What the theorists‖ commitment to the ―matter‖ concept has done in this instance is to force them to invent the equivalent of the demons that their primitive ancestors called upon when similarly faced with something that they were unable to explain. The mysterious ―nuclear force‖ might just as well be called the ―god of the nucleus.‖ Like an ancient god, it was designed for one particular purpose; it has no other functions; and there is no independent confirmation of its existence. In effect, the assumptions that have been made in an effort to justify retention of the ―matter‖ concept have involved a partial return to the earlier ―spirit‖ concept of the nature of the universe. Since it is now clear that the concept of a universe of matter is not valid, one may well ask: How has it been possible for physical science to make such a remarkable record of achievement on the basis of an erroneous fundamental concept? The answer is that only a relatively small part of current physical theory is actually derived from the general physical principles based on that fundamental concept. ―A scientific theory,‖ explains R. B. Braithwaite, ―is a deductive system in which observable consequences logically follow from the conjunction of observed facts with the set of the fundamental hypotheses of the system. ―14 But modern physical theory is not one deductive system of the kind described by Braithwaite; it is a composite made up of a great many such systems. As expressed by Richard Feynman: Today our theories of physics, the laws of physics, are a multitude of different parts and pieces that do not fit together very well. We do not have one structure from which all is deduced.15 One of the principal reasons for this lack of unity is that modern physical theory is a hybrid structure, derived from two totally different sources. The small-scale theories applicable to individual phenomena, which constitute the great majority of the ―parts and pieces,‖ are empirical generalizations derived by inductive reasoning from factual premises. At one time it was rather confidently believed that the accumulation of empirically derived knowledge then existing, the inductive science commonly associated with the name of Newton, would eventually be expanded to encompass the whole of the universe. But when observation and experiment began to penetrate what we may call the far-out regions, the realms of the very small, the very large, and the very fast, Newtonian science was unable to keep pace. As a consequence, the construction of basic physical theory fell into the hands of a school of scientists who contend that inductive methods are incapable of arriving at general physical principles. ―The axiomatic basis of theoretical physics cannot be an inference from experience, but must be free invention,‖16 was Einstein’s dictum. The result of the ascendancy of this ―inventive‖ school of science has been to split physical science into two separate parts. As matters now stand, the subsidiary principles, those that govern individual physical phenomena and the low-level interactions, are products of induction from factual premises. The general principles, those that apply to large scale phenomena or to the universe as a whole, are, as Einstein describes them, ―pure inventions of the human mind.‖ Where the observations are accurate, and the
generalizations are justified, the inductively derived laws and theories are correct, at least within certain limits. The fact that they constitute by far the greater part of the current structure of physical thought therefore explains why physical science has been so successful in practice. But where empirical data is inadequate or unavailable, present-day science relies on deductions from the currently accepted general principles, the products of pure invention, and this is where physical theory has gone astray. Nature does not agree with these ―free inventions of the human mind.‖ This disagreement with nature should not come as a surprise. Any careful consideration of the situation will show that ―free invention‖ is inherently incapable of arriving at the correct answers to problems of long standing. Such problems do not continue to exist because of a lack of competence on the part of those who are trying to solve them, or because of a lack of adequate methods of dealing with them. They exist because some essential piece or pieces of information are missing. Without this essential information the correct answer cannot be obtained (except by an extremely unlikely accident). This rules out inductive methods, which build upon empirical information. Invention is no more capable of arriving at the correct result without the essential information than induction, but it is not subject to the same limitations. It can, and does, arrive at some result. General acceptance of a theory that is almost certain to be wrong is, in itself, a serious impediment to scientific progress, but the detrimental effect is compounded by the ability of these inventive theories to evade contradictions and inconsistencies by further invention. Because of the almost unlimited opportunity to escape from difficulties by making further ad hoc assumptions, it is ordinarily very difficult to disprove an invented theory. But the definite proof that the physical universe is not a universe of matter now automatically invalidates all theories, such as the nuclear theory of the atom, that are dependent on this ―matter‖ concept. The essential piece of information that has been missing, we now find, is the true nature of the basic entity of which the universe is composed. The issue as to the inadequacy of present-day basic physical theory does not normally arise in the ordinary course of scientific activity because that activity is primarily directed toward making the best possible use of the tools that are available. But when the question is actually raised there is not much doubt as to how it has to be answered. The answer that we get from P. A. M. Dirac is this: The present stage of physical theory is merely a steppingstone toward the better stages we shall have in the future. One can be quite sure that there will be better stages simply because of the difficulties that occur in the physics of today.17 Dirac admits that he and his fellow physicists have no idea as to the direction from which the change will come. As he says, ―there will have to be some new development that is quite unexpected, that we cannot even make a guess about.‖ He recognizes that this new development must be one of major significance. ―It is fairly certain that there will have to be drastic changes in our fundamental ideas before these problems can be solved‖17 he concludes. The finding of this present work is that ―drastic changes in our fundamental
ideas‖ will indeed be required. We must change our basic physical concept: our concept of the nature of the universe in which we live. Unfortunately, however, a new basic concept is never easy to grasp, regardless of how simple it may be, and how clearly it is presented, because the human mind refuses to look at such a concept in any simple and direct manner, and insists on placing it within the context of previously existing patterns of thought, where anything that is new and different is incongruous at best, and more often than not is definitely absurd. As Butterfield states the case: Of all forms of mental activity, the most difficult to induce even in the minds of the young, who may be presumed not to have lost their flexibility, is the art of handling the same bundle of data as before, but placing them in a new system of relations with one another by giving them a different framework.18 In the process of education and development, each human individual has put together a conceptual framework which represents the world as he sees it, and the normal method of assimilating a new experience is to fit it into its proper place in this general conceptual framework. If the fit is accomplished without difficulty we are ready to accept the experience as valid. If a reported experience, or a sensory experience of our own, is somewhat outside the limits of our complex of beliefs, but not definitely in conflict, we are inclined to view it skeptically but tolerantly, pending further clarification. But if a purported experience flatly contradicts a fundamental belief of long standing, the immediate reaction is to dismiss it summarily. Some such semi-automatic system for discriminating between genuine items of information and the many false and misleading items that are included in the continuous stream of messages coming in through the various senses is essential in our daily life, even for mere survival. But this policy of using agreement with past experience as the criterion of validity has the disadvantage of limiting the human race to a very narrow and parochial view of the world, and one of the most difficult tasks of science has been, and to some extent continues to be, overcoming the errors that are thus introduced into thinking about physical matters. Only a few of those who give any serious consideration to the subject still believe that the earth is flat, and the idea that this small planet of ours is the center of all of the significant activities of the universe no longer commands any strong support, but it took centuries of effort by the most advanced thinkers to gain general acceptance of the present-day view that, in these instances, things are not what our ordinary experience would lead us to believe. Some very substantial advances in scientific methods and equipment in recent years have enabled investigators to penetrate a number of far-out regions that were previously inaccessible. Here again it has been demonstrated, as in the question with respect to the shape of the earth, that experience within the relatively limited range of our day-to-day activities is not a reliable guide to what exists or is taking place in distant regions. In application to these far-out phenomena the scientific community therefore rejects the ―experience‖ criterion, and opens the door to a wide variety of hypotheses and concepts that are in direct conflict with normal experience: such things as events occurring without specific causes, magnitudes that are inherently incapable of measurement beyond a

certain limiting degree of precision, inapplicability of some of the established laws of physics to certain unusual phenomena, events that defy the ordinary rules of logic, quantities whose true magnitudes are dependent on the location and movement of the observer, and so on. Many of these departures from "common sense" thinking, including almost all of those that are specifically mentioned in this paragraph, are rather ill-advised in the light of the facts that have been disclosed by this present work, but this merely emphasizes the extent to which scientists are now willing to go in postulating deviations from every-day experience.

Strangely enough, this extreme flexibility in the experience area coexists with an equally extreme rigidity in the realm of ideas. The general situation here is the same as in the case of experience. Some kind of semi-automatic screening of the new ideas that are brought to our attention is necessary if we are to have any chance to develop a coherent and meaningful understanding of what is going on in the world about us, rather than being overwhelmed by a mass of erroneous or irrelevant material. So, just as purported new experiences are measured against past experience, the new concepts and theories that are proposed are compared with the existing structure of scientific thought and judged accordingly. But just as the "agreement with previous experience" criterion breaks down when experiment or observation enters new fields, so the "agreement with orthodox theory" criterion breaks down when it is applied to proposals for revision of the currently accepted theoretical fundamentals. When agreement with the existing theoretical structure is set up as the criterion by which the validity of new ideas is to be judged, any new thought that involves a significant modification of previous theory is automatically branded as unacceptable. Whatever merits it may actually have, it is, in effect, wrong by definition. Obviously, a strict and undeviating application of this "agreement" criterion cannot be justified, as it would bar all major new ideas. A new basic concept cannot be fitted into the existing conceptual framework, as that framework is itself constructed of other basic concepts, and a conflict is inevitable. As in the case of experience, it is necessary to recognize that there is an area in which this criterion is not legitimately applicable.

In principle, therefore, practically everyone concedes that a new theory cannot be expected to agree with the theory that it proposes to replace, or with anything derived directly or indirectly from that previous theory. In spite of the nearly unanimous agreement on this point as a matter of principle, a new idea seldom gets the benefit of it in actual practice. In part this is due to the difficulties that are experienced in trying to determine just what features of current thought are actually affected by the theory replacement. This is not always clear on first consideration, and the general tendency is to overestimate the effect that the proposed change will have on prevailing ideas. In any event, the principal obstacle that stands in the way of a proposal for changing a scientific theory or concept is that the human mind is so constituted that it does not want to change its ideas, particularly if they are ideas of long standing.
This is not so serious in the realm of experience, because the innovation that is required here generally takes the form of an assertion that "things are different" in the particular new area that is under consideration. Such an assertion does not involve a

flat repudiation of previous experience; it merely contends that there is a hitherto unknown limit beyond which the usual experience is no longer applicable. This is the explanation for the almost incredible latitude that the theorists are currently being allowed in the ―experience‖ area. The scientist is prepared to accept the assertion that the rules of the game are different in a new field that is being investigated, even where the new rules involve such highly improbable features as events that happen without causes and objects that change their locations discontinuously. On the other hand, a proposal for modification of an accepted concept or theory calls for an actual change in thinking, something that the human mind almost automatically resists, and generally resents. Here the scientist usually reacts like any layman; he promptly rejects any intimation that the rules which he has already set up, and which he has been using with confidence, are wrong. He is horrified at the mere suggestion that the many difficulties that he is experiencing in dealing with the ―parts‖ of the atom, and the absurdities or near absurdities that he has had to introduce into his theory of atomic structure are all due to the fact that the atom is not constructed of ―parts.‖ Inasmuch as the new theoretical system presented in this volume and those that are to follow not only requires some drastic reconstruction of fundamental physical theory, but goes still deeper and replaces the basic concept of the nature of the universe, upon which all physical theory is constructed, the conflicts with previous ideas are numerous and severe. If appraised in the customary manner by comparison with the existing body of thought many of the conclusions that are reached herein must necessarily be judged as little short of outrageous. But there is practically unanimous agreement among those who are in the front rank of scientific investigators that some drastic change in theoretical fundamentals is inevitable. As Dirac said in the statement previously quoted, ―There will have to be some new development that is quite unexpected, that we cannot even make a guess about.‖ The need to abandon a basic concept, the concept of a universe of matter that has guided physical thinking for three thousand years is an ―unexpected development,‖ just the kind of a thing that Dirac predicted. Such a basic change is a very important step, and it should not be lightly taken, but nothing less drastic will suffice. Sound theory cannot be built on an unsound foundation. Logical reasoning and skillful mathematical manipulation cannot compensate for errors in the premises to which they are applied. On the contrary, the better the reasoning the more certain it is to arrive at the wrong results if it starts from the wrong premises.

CHAPTER 2

A Universe of Motion
The thesis of this present work is that the universe in which we live is not a universe of matter, but a universe of motion, one in which the basic reality is motion, and all physical entities and phenomena, including matter, are merely manifestations of motion. The atom, on this basis, is simply a combination of motions. Radiation is motion, gravitation is motion, an electric charge is motion, and so on.

The concept of a universe of motion is by no means a new idea. As a theoretical proposition it has some very obvious merits that have commended it to thoughtful investigators from the very beginning of systematic science. Descartes' idea that matter might be merely a series of vortexes in the ether is probably the best-known speculation of this nature, but other scientists and philosophers, including such prominent figures as Eddington and Hobbes, have devoted much time to a study of similar possibilities, and this activity is still continuing in a limited way. But none of the previous attempts to use the concept of a universe of motion as the basis for physical theory has advanced much, if any, beyond the speculative stage. The reason why they failed to produce any significant results has now been disclosed by the findings of the investigation upon which this present work is based. The inability of previous investigators to achieve a successful application of the "motion" concept, we find, was due to the fact that they did not use this concept in its pure form. Instead, they invariably employed a hybrid structure, which retained elements of the previously accepted "matter" concept. "All things have but one universal cause, which is motion,"19 says Hobbes. But the assertion that all things are caused by motion is something quite different from saying that they are motions. The simple concept of a universe of motion, without additions or modifications–the concept utilized in this present work–is that of a universe which is composed entirely of motion.

The significant difference between these two viewpoints lies in the role that they assign to space and time. In a universe of matter it is necessary to have a background or setting in which the matter exists and undergoes physical processes, and it is assumed that space and time provide the necessary setting for physical action. Many differences of opinion have arisen with respect to the details, particularly with respect to space–whether or not space is absolute and immovable, whether such a thing as empty space is possible, whether or not space and time are interconnected, and so on–but throughout all of the development of thought on the subject the basic concept of space as a setting for the action of the universe has remained intact. As summarized by J. D. North: "Most people would accept the following: Space is that in which material objects are situated and through which they move. It is a background for objects of which it is independent. Any measure of the distances between objects within it may be regarded as a measure of the distances between its corresponding parts."20

Einstein is generally credited with having accomplished a profound alteration of the scientific viewpoint with respect to space, but what he actually did was merely to introduce some new ideas as to the kind of a setting that exists. His "space" is still a setting, not only for matter but also for the various "fields" that he envisions. A field, he says, is "something physically real in the space around it."21 Physical events still take place in Einstein's space just as they did in Newton's space or in Democritus' space. Time has always been more elusive than space, and it has been extremely difficult to formulate any clear-cut concept of its essential nature. It has been taken for granted, however, that time, too, is part of the setting in which physical events take place; that is, physical phenomena exist in space and in time.
On this basis it has been hard to specify just wherein time differs from space. In fact the distinction between the two has become

increasingly blurred and uncertain in recent years, and as matters now stand, time is generally regarded as a sort of quasi-space, the boundary between space and time being indefinite and dependent upon the circumstances under which it is observed. The modern physicist has thus added another dimension to the spatial setting, and instead of visualizing physical phenomena as being located in three-dimensional space, he places them in a four-dimensional space-time setting. In all of this ebb and flow of scientific thought the one unchanging element has been the concept of the setting. Space and time, as currently conceived, are the stage on which the drama of the universe unfolds–―a vast world-room, a perfection of emptiness, within which all the world show plays itself away forever.‖22 This view of the nature of space and time, to which all have subscribed scientist and layman alike, is pure assumption. No one, so far as the history of science reveals, has ever made any systematic examination of the available evidence to determine whether or not the assumption is justified. Newton made no attempt to analyze the basic concepts. He tells us specifically, ―I do not define time, space, place and motion, as being well known to all. ― Later generations of scientists have challenged some of Newton's conclusions, but they have brushed this question aside in an equally casual and carefree manner. Richard Tolman, for example, begins his discussion of relativity with this statement: ―We shall assume without examination . . . the unidirectional, one-valued, one-dimensional character of the time continuum.‖23 Such an uncritical acceptance of an unsubstantiated assumption ―without examination‖ , is, of course, thoroughly unscientific, but it is quite understandable as a consequence of the basic concept of a universe of matter to which science has been committed. Matter, in such a universe, must have a setting in which to exist. Space and time are obviously the most logical candidates for this assignment. They cannot be examined directly. We cannot put time under a microscope, or subject space to a mathematical analysis by a computer. Nor does the definition of matter itself give us any clue as to the nature of space and time. The net effect of accepting the concept of a universe of matter has therefore been to force science into the position of having to take the appearances which space and time present to the casual observer as indications of the true nature of these entities. In a universe of motion, one in which everything physical is a manifestation of motion, this uncertainty does not exist, as a specific definition of space and time is implicit in the definition of motion. It should be understood in this connection that the term ―motion,‖ as used herein, refers to motion as customarily defined for scientific and engineering purposes; that is, motion is a relation between space and time, and is measured as speed or velocity. In its simplest form, the ―equation of motion,‖ which expresses this definition in mathematical symbols, is v = s/t. The definition as stated, the standard scientific definition, we may call it, is not the only way in which motion can be defined. But it is the only definition that has any relevance to the development in this work. The basic postulate of the work is that the physical universe is composed entirely of motion as thus defined. What we are undertaking to do is to describe the consequences that necessarily follow in a universe composed of this

kind of motion. Whether or not one might prefer to define motion in some other way, and what the consequences of such a definition might be, has no bearing on the present undertaking. Obviously, the equation of motion, which defines motion in terms of space and time, likewise defines space and time in terms of motion. It tells us that in motion space and time are the two reciprocal aspects of that motion, and nothing else. In a universe of matter, the fact that space and time have this significance in motion would not preclude them from having some other significance in a different connection, but when it is specified that motion is the sole constituent of the physical universe, space and time cannot have any significance anywhere in that universe other than that which they have as aspects of motion. Under these circumstances, the equation of motion is a complete definition of the role of space and time in the physical universe. We thus arrive at the conclusion that space and time are simply the two reciprocal aspects of motion and have no other significance. On this basis, space is not the Euclidean container for physical phenomena that is most commonly visualized by the layman; neither is it the modified version of this concept which makes it subject to distortion by various forces and highly dependent on the location and movement of the observer, as seen by the modern physicist. In fact, it is not even a physical entity in its own right at all; it is simply and solely an aspect of motion. Time is not an order of succession, or a dimension of quasi-space, neither is it a physical entity in its own right. It, too, is simply and solely an aspect of motion, similar in all respects to space, except that it is the reciprocal aspect. The simplest way of defining the status of space and time in a universe of motion is to say that space is the numerator in the expression s/t, which is the speed or velocity, the measure of motion, and time is the denominator. If there is no fraction, there is no numerator or denominator; if there is no motion, there is no space or time. Space does not exist alone, nor does time exist alone; neither exists at all except in association with the other as motion. We can, of course, focus our attention on the space aspect and deal with it as if the time aspect, the denominator of the fraction, remains constant (or we can deal with time as if space remains constant). This is the familiar process known as abstraction, one of the useful tools of scientific inquiry. But any results obtained in this manner are valid only where the time (or space) aspect does, in fact, remain constant, or where the proper adjustment is made for whatever changes in this factor do take place. The reason for the failure of previous efforts to construct a workable theory on the basis of the ―motion,‖ concept is now evident. Previous investigators have not realized that the ―setting‖ concept is a creature of the ―matter‖ concept; that it exists only because that basic concept envisions material ―things‖ existing in a space-time setting. In attempting to construct a theoretical system on the basis of the concept of a universe of motion while still retaining the ―setting‖ concept of space and time, these theorists have tried to combine two incompatible elements, and failure was inevitable. 
When the true situation is recognized it becomes clear that what is needed is to discard the "setting" concept of space and time along with the general concept of a universe of matter, to which it is intimately related, and to use the concept of space and time that is in harmony with the idea of a universe of motion.
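To keep the definition in one place, the standard equation of motion referred to above, together with its rearrangements, can be written out explicitly. The lines below add nothing beyond the definition already stated in the text:

$$v = \frac{s}{t}, \qquad s = v\,t, \qquad t = \frac{s}{v}$$

Read in the manner described above, s and t enter the theory only as the numerator and denominator of this ratio; neither is treated as an independent container in which the motion takes place.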

In the discussion that follows we will postulate that the physical universe is composed entirely of discrete units of motion, and we will make certain assumptions as to the characteristics of that motion. We will then proceed to show that the mere existence of motions with properties as postulated, without the aid of any supplementary or auxiliary assumptions, and without bringing in anything from experience, necessarily leads to a vast number and variety of consequences which, in total, constitute a complete theoretical universe. Construction of a fully integrated theory of this nature, one, which derives the existence, and the properties of the various physical entities from a single set of premises, has long been recognized as the ultimate goal of theoretical science. The question now being raised is whether that goal is actually attainable. Some scientists are still optimistic. ―Of course, we all try to discover the universal law,‖ says Eugene P. Wigner, ―and some of us believe that it will be discovered one day.‖24 But there is also an influential school of thought which contends that a valid, generally applicable, physical theory is impossible, and that the best we can hope for is a ―model‖ or series of models that will represent physical reality approximately and incompletely. Sir James Jeans expresses this point of view in the following words: The most we can aspire to is a model or picture which shall explain and account for some of the observed properties of matter; where this fails, we must supplement it with some other model or picture, which will in its turn fail with other properties of matter, and so on.25 When we inquire into the reasons for this surprisingly pessimistic view of the potentialities of the theoretical approach to nature, in which so many present-day theorists concur, we find that it has not resulted from any new discoveries concerning the limitations of human knowledge, or any greater philosophical insight into the nature of physical reality; it is purely a reaction to long years of frustration. The theorists have been unable to find the kind of an accurate theory of general applicability for which they have been searching, and so they have finally convinced themselves that their search was meaningless; that there is no such theory. But they simply gave up too soon. Our findings now show that when the basic errors of prevailing thought are corrected the road to a complete and comprehensive theory is wide open. It is essential to understand that this new theoretical development deals entirely with the theoretical entities and phenomena, the consequences of the basic postulates, not with the aspects of the physical universe revealed by observation. When we make certain deductions with respect to the constituents of the universe on the basis of theoretical assumptions as to the fundamental nature of that universe, the entities and phenomena thus deduced are wholly theoretical; they are the constituents of a purely theoretical universe. Later in the presentation we will show that the theoretical universe thus derived from the postulates corresponds item by item with the observed physical universe, justifying the assertion that each theoretical feature is a true and accurate representation of the corresponding feature of the actual universe in which we live. 
In view of this one-to-one correspondence, the names that we will attach to the theoretical features will be those that apply to the corresponding physical features, but the development of theory will be concerned exclusively with the theoretical entities and phenomena.

For example, the ―matter‖ that enters into the theoretical development is not physical matter; it is theoretical matter. Of course, the exact correspondence between the theoretical and observed universes that will be demonstrated in the course of this development means that the theoretical matter is a correct representation of the actual physical matter, but it is important to realize that what we are dealing with in the development of theory is the theoretical entity, not the physical entity. The significance of this point is that physical ―matter,‖ ―radiation‖ and other physical items cannot be defined with precision and certainty, as there can be no assurance that our observations give us the complete picture. The ―matter‖ that enters into Newton's law of gravitation, for example, is not a theoretically defined entity; it is the matter that is actually encountered in the physical world: an entity whose real nature is still a subject of considerable controversy. But we do know exactly what we are dealing with when we talk about theoretical matter. Here there is no uncertainty whatever. Theoretical matter is just what the postulates require it to be–no more, no less. The same is true of all of the other items that enter into the theoretical development. Although physical observations have not yet given us a definitive answer to the question as to the structure of the basic unit of physical matter, the physical atom–indeed, there is an almost continuous revision of the prevailing ideas on the subject, as new facts are revealed by experiment–we know exactly what the structure of the theoretical atom is, because both the existence and the properties of that atom are consequences that we derive by logical processes from our basic postulates. Inasmuch as the theoretical premises are explicitly defined, and their consequences are developed by sound logical and mathematical processes, the conclusions that are reached with respect to matter, its structure and properties, and all other features of the theoretical universe are unequivocal. Of course, there is always a possibility that some error may have been made in the chain of deductions, particularly if the chain in question is a very long one, but aside from this possibility, which is at a minimum in the early stages of the development, there is no doubt as to the true nature and characteristics of any entity or phenomenon that emerges from that development. Such certainty is impossible in the case of any theory, which contains empirical elements. Theories of this kind, a category that includes all existing physical theories, are never permanent; they are always subject to change by experimental discovery. The currently popular theory of the structure of the atom, for example, has undergone a long series of changes since Rutherford and Bohr first formulated it, and there is no assurance that the modifications are at an end. On the contrary, a general recognition of the weakness of the theory as it now stands has stimulated an intensive search for ways and means of bringing it into a closer correspondence with reality, and the current literature is full of proposals for revision. When a theory includes an empirical component, as all current physical theories do, any increase in observational or experimental knowledge about this component alters the sense of the theory, even if the wording remains the same. 
For instance, as pointed out earlier, some of the recently discovered phenomena in the sub-atomic region, in which matter is converted to energy, and vice versa, have drastically altered the status of conventional atomic theory. The basic concept of a universe of material "things," to

which physical science has subscribed for thousands of years, requires the atom to be made up of elementary units of matter. The present theory of an atom constructed of protons, neutrons, and electrons is based on the assumption that these are the ―elementary particles‖ ; that is, the indivisible and unchangeable basic units of matter. The experimental finding that these particles are not only interconvertible, but also subject to creation from non-matter and transformation into non-matter, has changed what was formerly a plausible (even if somewhat fanciful) theory into a theory that is internally inconsistent. In the light of present knowledge, an atom simply cannot be constructed of ―elementary particles‖ of matter. Some of the leading theorists have already recognized this fact, and are casting about for something that can replace the elementary particle as the basic unit. Heisenberg suggests energy: Energy . . . is the fundamental substance of which the world is made. Matter originates when the substance energy is converted into the form of an elementary particle.26 But he admits that he has no idea as to how energy can be thus converted into matter. This ―must in some way be determined by a fundamental law,‖ he says. Heisenberg's hypothesis is a step in the right direction, in that he abandons the fruitless search for the ―indivisible particle,‖ and recognizes that there must be something more basic than matter. He is quite critical of the continuing attempt to invest the purely hypothetical ―quark‖ with a semblance of reality: I am afraid that the quark hypothesis is not really taken seriously today by its proponents. Questions dealing with the statistics of quarks, the forces that keep them together, the reason why the quarks are never seen as free particles, the creation of pairs of quarks inside an elementary particle, are all left more or less undefined.27 But the hypothesis that makes energy the fundamental entity cannot stand up under critical scrutiny. Its fatal defect is that energy is a scalar quantity, and simply does not have the flexibility that is required in order to explain the enormous variety of physical phenomena. By going one step farther and identifying motion as the basic entity this inadequacy is overcome, as motion can be vectorial, and the addition of directional characteristics to the positive and negative magnitudes that are the sole properties of the scalar quantities opens the door to the great proliferation of phenomena that characterizes the physical universe. It should also be recognized that a theory of the composite type, one that has both theoretical and empirical components, is always subject to revision or modification; it may be altered essentially at will. The theory of atomic structure, for instance, is simply a theory of the atom–nothing else–and when it is changed, as it was when the hypothetical constituents of the hypothetical nucleus were changed from protons and electrons to protons and neutrons, no other area of physical theory is significantly affected. Even when it is found expedient to postulate that the atom or one of its hypothetical constituents does not conform to the established laws of physical science, it is not usually postulated that these laws are wrong; merely that they are not applicable in the particular case. This fact that the revision affects only a very limited area gives the theory

constructors practically a free hand in making alterations, and they make full use of the latitude thus allowed. Susceptibility to both voluntary and involuntary changes is unavoidable as long as the development of theory is still in the stage where complex concepts such as ―matter‖ must be considered unanalyzable, and hence it has come to be regarded as a characteristic of all theories. The first point to be emphasized, therefore, in beginning a description of the new system of theory based on the concept of a universe of motion, the Reciprocal System, as it is called, is that this is not a composite theory of the usual type; it is a purely theoretical structure which includes nothing of an empirical nature. Because all of the conclusions reached in the theoretical development are derived entirely from the basic postulates by logical and mathematical processes the theoretical system is completely inflexible, a point that should be clearly understood before any attempt is made to follow the development of the details of the theory in the following pages. It is not subject to any change or adjustment (other than correction of any errors that may have been made, and extension of the theory into areas not previously covered). Once the postulates have been set forth, the entire character of the resulting theoretical universe has been implicitly defined, down to the minutest detail. Just because the motion of which the universe is constructed, according to the postulates, has the particular properties that have been postulated, matter, radiation, gravitation, electrical and magnetic phenomena, and so on, must exist, and their physical behavior must follow certain specific patterns. In addition to being an inflexible, purely theoretical product that arrives at definite and certain conclusions which are in full agreement with observation, or at least are not inconsistent with any definitely established facts, the Reciprocal System of theory is one of general applicability. It is the first thing of its kind ever formulated: the first that derives the phenomena and relations of all subdivisions of physical activity from the same basic premises. For the first time in scientific history there is available a theoretical system that satisfies the criterion laid down by Richard Schlegel in this statement: In a significant sense, the ideal of science is a single set of principles, or perhaps a set of mathematical equations, from which all the vast process and structure of nature could be deduced.28 No previous theory has covered more than a small fraction of the total field, and the present-day structure of physical thought is made up of a host of separate theories, loosely related, and at many points actually conflicting. Each of these separate theories has its own set of basic assumptions, from which it seeks to derive relations specifically applicable to certain kinds of phenomena. Relativity theory has one set of assumptions, and is applicable to one kind of phenomena. The kinetic theory has an altogether different set of assumptions, which it applies to a different set of phenomena. The nuclear theory of the atom has still another set of assumptions, and has a field of applicability all its own, and so on. Again quoting Richard Feynman: Instead of having the ability to tell you what the law of physics is, I have to talk about the things that are common to the various laws; we do not understand the connection between them.15

Furthermore, each of these many theories not only requires the formulation of a special set of basic assumptions tailored to fit the particular situation, but also finds it necessary to introduce a number of observed entities and phenomena into the theoretical structure, taking their existence for granted, and accepting them as ―given‖ , so far as the theory is concerned. The Reciprocal System now replaces this multitude of separate theories and subsidiary assumptions with a fully integrated structure of theory derived in its entirety from a single set of basic premises. The status of this system as a general physical theory is not a matter of opinion; it is an objective fact that can easily be verified by an examination of the theoretical development. Such an examination will disclose that the development leads to detailed conclusions in all major physical fields, and that these conclusions are derived deductively from the postulates of the system, without the aid of any supplementary or subsidiary assumptions, and without introducing anything from experience. The new theoretical structure not only covers the field to which the conventional physical theories are applicable; it also gives us answers to the basic physical questions with which the theories based on the ―matter‖ concept have been unable to cope, and it extends the scope of physical theory to the point where it is capable of dealing with those recent experimental and observational discoveries in the far-out regions of science that have been so baffling to those who are trying to understand them in the context of previously existing ideas. Of course, the theoretical development has not yet been carried to the point where it accounts for every detail of the physical universe. That point will not be reached for a long time, if ever. But it has been carried far enough to make it clear that the probability of being unable to deal with the remaining items is negligible, and that the Reciprocal System is, in fact, a general physical theory. The crucial importance of this status as a general physical theory lies in the further fact that it is impossible to construct a wrong general physical theory. At first glance this statement may seem absurd. It may seem almost self-evident that if validity is not required there should be no serious obstacle to constructing some kind of a theory of any subject. But even without any detailed consideration of the factors that are involved in the case of a general physical theory, a review of experience will show that this offhand opinion is incorrect. Construction of a general physical theory has been a prime goal of science for three thousand years, and an immense amount of time and effort has been devoted to the task, with no success whatever. The failure has not been a matter of arriving at the wrong answers; the theorists have not been able to formulate any single theory that would give them any answers, right or wrong, to more than a mere handful of the millions of questions that a general physical theory must answer. A long period of failure to find the correct theory is understandable, since the field that must be covered by a general theory is so immense and so extremely complicated, but thousands of years of inability to construct any general theory are explainable only on the basis that there is a reason why a wrong theory cannot be constructed. This reason is easily understood if the essential nature of the task is carefully examined. 
Construction of a general physical theory is analogous to the task of deciphering a very long message in code. If a coded message is short–a few words or a sentence–alternative

interpretations are possible, any or all of which may be wrong, but if the message is a very long one–a whole book in code would be an appropriate analogy to the subject matter of a general physical theory–there is only one way to make any kind of sense out of every paragraph, and that is to find the key to the cipher. If, and when, the message is finally decoded, and every paragraph is intelligible, it is evident that the key to the cipher has been discovered. The possibility that there might be an alternative key, a different set of meanings for the various symbols utilized, that would give every one of the thousands of sentences in the message a different significance, intelligible but wrong, is preposterous. It can therefore be definitely stated that a wrong key to the cipher is impossible. The correct general theory of the universe is the key to the code of nature. As in the case of the cipher, a wrong theory can provide plausible answers in a very limited field, but only the correct theory can be a general theory; one that is capable of producing explanations for the existence and characteristics of all of the immense number of physical phenomena. Thus a wrong general theory, like a wrong key to a cipher, is impossible. The verification of the validity of the theoretical structure as a whole that is provided by the demonstration that it is a general physical theory does not eliminate the need for checking each of the conclusions of the theory individually. It is not unlikely that those persons who carry out the process of development of the details of the theory will make some mistakes. But the fact that the individual conclusions have been derived by extension of a correct general structure of theory creates a strong presumption of their validity, a presumption that cannot be overcome by anything other than definite and conclusive contrary evidence. Hence, as conclusions are reached in the course of the development, it is not necessary to supply positive proof that they are correct, or to argue that the case in favor of their validity is superior to that of any competitor. All that is required is to show that these conclusions are not inconsistent with any definitely established facts. Recognition of this point is essential for a full understanding of the presentation in the pages that follow. Many persons will no doubt take the stand that they find the arguments in favor of certain of the currently accepted ideas more persuasive than those in favor of the conclusions derived from the Reciprocal System. Indeed, some such reactions are inevitable, since there will be a strong tendency to view these conclusions in the context of present-day thought, based on the no longer tenable concept of a universe of matter. But these opinions are irrelevant. Where it can be shown that the conclusions are legitimately derived from the postulates of the system, they participate in the proof of the validity of the structure of theory as a whole, a proof that has been established by two independent means: (1) by showing that this is a general physical theory, and that a wrong general physical theory is impossible, and (2) by showing that none of the authentic deductions from the postulates of the theory is inconsistent with any positively established information from observation or experiment. This second method of verification is analogous to the manner in which we would go about verifying the accuracy of an aerial map. 
The traditional method of map making involves first a series of explorations, then a critical evaluation of the reports submitted by the explorers, and finally the construction of the map on the basis of those reports that

the geographers consider most reliable. Similarly, in the scientific field, explorations are carried out by experiment and observation, reports of the findings and conclusions based on these findings are submitted, these reports are evaluated by the scientific community, and those that are judged to be authentic are added to the scientific map, the accepted body of factual and theoretical knowledge. But this traditional method of map making is not the only way in which a geographic map can be prepared. We may, for instance, devise some photographic system whereby we can secure a representation of an entire area in one operation by a single process. In either case, whether we are offered a map of the traditional kind or a photographic map we will want to make some tests to satisfy ourselves that the map is accurate before we use it for any important purposes, but because of the difference in the manner in which the maps were produced, the nature of these tests will be altogether different in the two cases. In checking a map of the traditional type we have no option but to verify each significant feature of the map individually, because aside from a relatively small amount of interrelation, each feature is independent. Verification of the position shown for a mountain in one part of the map does not in any way guarantee the accuracy of the position shown for a river in another part of the map. The only way in which the position shown for the river can be verified is to compare what we see on the map with such other information as may be available. Since these collateral data are often scanty, or even entirely lacking, particularly along the frontiers of knowledge, the verification of a map of this kind in either the geographic or the scientific field is primarily a matter of judgment, and the final conclusion cannot be more than tentative at best. In the case of a photographic map, on the other hand, each test that is made is a test of the validity of the process, and any verification of an individual feature is merely incidental. If there is even one place where an item that can definitely be seen on the map is in conflict with something that is positively known to be a fact, this is enough to show that the process is not accurate, and it provides sufficient justification for discarding the map in its entirety. But if no such conflict is found, the fact that every test is a test of the process means that each additional test that is made without finding a discrepancy reduces the mathematical probability that any conflict exists anywhere on the map. By making a suitably large number and variety of such tests the remaining uncertainty can be reduced to the point where it is negligible, thereby definitely establishing the accuracy of the map as a whole. The entire operation of verifying a map of this kind is a purely objective process in which features that can definitely be seen on the map are compared with facts that have been definitely established by other means. One important precaution must be observed in the verification process: a great deal of care must be exercised to make certain of the authenticity of the supposed facts that are utilized for the comparisons. There is no justification for basing conclusions on anything that falls short of positive knowledge. In testing the accuracy of an aerial map we realize that we cannot justify rejecting the map because the location of a lake indicated on the map conflicts with the location that we think the lake occupies. 
In this case it is clear that unless we actually know just where the lake is, we have no legitimate basis on which to dispute the location shown on the map. We also realize that there is no need to pay any attention to items of this kind: those about which we are uncertain. There are hundreds,

perhaps thousands, of map features about which we do have positive knowledge, far more than enough for purposes of comparison, so that we need not give any consideration to features about which there is any degree of uncertainty. Because the Reciprocal System of theory is a fully integrated structure derived entirely by one process–deduction from a single set of premises–it is capable of verification in the same manner as an aerial map. It has already passed such a test; that is, the theoretical deductions have been compared with the observed facts in thousands of individual cases distributed over all major fields of physical science without encountering a single definite inconsistency. These deductions disagree with many currently accepted ideas, to be sure, but in all of these cases it can be shown that the current views are not positive knowledge. They are either conclusions based on inadequate data, or they are assumptions, extrapolations, or interpretations. As in the analogous case of the aerial map, conflicts with such items, with what scientists think, are meaningless. The only conflicts that are relevant to the test of the validity of the theoretical system are conflicts with what scientists know. Thus, while recognition of human fallibility prevents asserting that every conclusion purported to be reached by application of this theory is authentic and therefore correct, it can be asserted that the Reciprocal System of theory is capable of producing the right answers if it is properly applied, and to the extent that the development of the consequences of the postulates of the theory has been correctly carried out, the theoretical structure thus derived is a true and accurate representation of the actual physical universe.
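The logic of this map-style verification can be given a rough quantitative form. The bound below is an illustrative sketch only, not part of the original argument, and the probability p is an assumed quantity:

$$P(\text{a wrong process survives } n \text{ independent checks}) \le (1 - p)^{n}$$

Here p stands for the assumed minimum chance that any single comparison with a positively known fact would expose a flaw if the underlying process were wrong. With thousands of such comparisons, the bound falls toward zero, which is the sense in which each additional successful test reduces the probability that any conflict exists anywhere on the "map."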

CHAPTER 3

Reference Systems
As indicated in the preceding chapter, the concept of a universe of motion has to be elaborated to some extent before it is possible to develop a theoretical structure that will describe that universe in detail. The additions to the basic concept must take the form of assumptions–or postulates, a term more commonly applied to the fundamental assumptions of a theory–because even though the additional specifications (the physical specifications, at least) obviously do apply to the particular universe of motion in which we live, there does not appear to be adequate justification for contending that they necessarily apply to any possible universe of motion. It has already been mentioned that we are postulating a universe composed of discrete units of motion. But this does not mean that the motion proceeds in a series of jumps. This basic motion is a progression in which the familiar progression of time is accompanied by a similar progression of space. Completion of one unit of the progression is followed immediately by initiation of another, without interruption. As an analogy, we may consider a chain. Although the chain exists only in discrete units, or links, it is a continuous structure, not a mere juxtaposition of separate units. Whether or not the continuity is a matter of logical necessity is a philosophical question that does not need to concern us at this time. There are reasons to believe that it is, in fact,

a necessity, but if not, we will introduce it into our definition of motion. In any event, it is part of the system. The extensive use of the term "progression" in application to the basic motions with which we are dealing in the initial portions of this work is intended to emphasize this characteristic. Another assumption that will be made is that the universe is three-dimensional. In this connection, it should be realized that all of the supplementary assumptions that were added to the basic concept of a universe of motion in order to define the essential properties of that universe were no more than tentative at the start of the investigation that ultimately led to the development of the Reciprocal System of theory. Some such supplementary assumptions were clearly required, but neither the number of assumptions that would have to be made, nor the nature of the individual assumptions, was clearly indicated by existing knowledge of the physical universe. The only feasible course of action was to initiate the investigation on the basis of those assumptions which seemed to have the greatest probability of being correct. If any wrong assumptions were made, or if some further assumptions were required, the theoretical development would, of course, encounter insurmountable difficulties very quickly, and it would then be necessary to go back and modify the postulates, and try again. Fortunately, the original postulates passed this test, and the only change that has been made was to drop some of the original assumptions that were found to be deducible from the others and therefore superfluous.

No further physical postulates are required, but it is necessary to make some assumptions as to the mathematical behavior of the universe. Here our observations of the existing universe do not give us guidance of as definite a character as was available in the case of the physical properties, but there is a set of mathematical principles which, until very recent times, was generally regarded as almost self-evident. The main body of scientific opinion is now committed to the belief that the true mathematical structure of the universe is much more complex, but the assumption that it conforms to the older set of principles is the simplest assumption that can be made. Following the rule laid down by William of Occam, this assumption was therefore made for the purpose of the initial investigation. No modifications have since been found necessary. The complete set of assumptions that constitutes the fundamental postulates of the theory of a universe of motion can be expressed as follows:

First Fundamental Postulate: The physical universe is composed entirely of one component, motion, existing in three dimensions, in discrete units, and with two reciprocal aspects, space and time.

Second Fundamental Postulate: The physical universe conforms to the relations of ordinary commutative mathematics, its primary magnitudes are absolute, and its geometry is Euclidean.

Postulates are justified by their consequences, not by their antecedents, and as long as they are rational and mutually consistent, there is not much that can be said about them, either favorably or adversely. It should be of interest, however, to note that the concept of a universe composed entirely of motion is the only new idea that is involved in the postulates that define the Reciprocal System. There are other ideas which, on the basis of current thinking, could be considered unorthodox, but these are by no means new. For

example, the postulates include the assumption that the geometry of the universe is Euclidean. This is in direct conflict with present-day physical theory, which assumes a non-Euclidean geometry, but it certainly cannot be regarded as an innovation. On the contrary, the physical validity of Euclidean geometry was accepted without question for thousands of years, and there is little doubt but that non-Euclidean geometry would still be nothing but a mathematical curiosity had it not been for the fact that the development of physical theory encountered some serious difficulties which the theorists were unable to surmount within the limitations established by Euclidean geometry, absolute magnitudes, etc. Motion is measured as speed (or velocity, in a context that we will consider later). Inasmuch as the quantity of space involved in one unit of motion is the minimum quantity that takes part in any physical activity, because less than one unit of motion does not exist, this is the unit of space. Similarly, the quantity of time involved in the one unit of motion is the unit of time. Each unit of motion, then, consists of one unit of space in association with one unit of time; that is, the basic motion of the universe is motion at unit speed. Cosmologists often begin their analyses of large-scale physical processes with a consideration of a hypothetical ―empty‖ universe, one in which no matter exists in the postulated space-time setting. But an empty universe of motion is an impossibility. Without motion there would be no universe. The most primitive condition, the situation which prevails when the universe of motion exists, but nothing at all is happening in that universe, is a condition in which units of motion exist independently, with no interaction. In this condition all speed is unity, one unit of space per unit of time, and since all units of motion are alike–they have no property but speed, and that is unity for all–the entire universe is a featureless uniformity. In order that there may be physical phenomena that can be observed or measured there must be some deviation from this one-to-one relation, and since it is the deviation that is observable, the amount of the deviation is a measure of the magnitude of the phenomenon. Thus all physical activity, all change that occurs in the system of motions that constitutes the universe, extends from unity, not from zero. The units of space, time, and motion (speed) that form the background for physical activity are simply scalar magnitudes. As matters now stand, we have no geometric means of representation that will express all three magnitudes coincidentally. But if we assume that the time progression continues at a uniform rate, and we measure this progression by some independent device (a clock), then we can represent the corresponding spatial magnitude by a one-dimensional geometric figure: a line. The length of this line represents the amount of space corresponding to a given time magnitude. Where this time magnitude is unity, the length of the line also represents the speed, the space per unit time. In present-day scientific practice, the datum from which all speed measurements are made, the point identified with the mathematical zero, is some stationary point in the reference system. But, as has been explained, the reference datum for physical magnitudes in a universe of motion is not zero speed but unit speed. 
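In the notation already introduced, the basic motion just described, and the datum from which physical magnitudes are measured, can be restated symbolically. This adds nothing beyond what the surrounding text asserts:

$$v_{\text{basic}} = \frac{1\ \text{unit of space}}{1\ \text{unit of time}} = 1$$

so that observable physical magnitudes are, on this view, deviations from unit speed rather than from zero.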
The natural datum is therefore continually moving outward (in the direction of greater magnitudes) from the conventional zero datum, and the true speeds that are effective in the basic physical

interactions can be correctly measured only in terms of deviation upward or downward from unity. From the natural standpoint a motion at unit speed is no effective motion at all. Expressing this in another way, we may say that the natural system of reference, the reference system to which the physical universe actually conforms, is moving outward at unit speed with respect to any stationary spatial reference system. Any identifiable portion of such a stationary reference system is called a location in that system. While less-than-unit quantities of space do not exist, points within the units can be identified. A spatial location may therefore be of any size, from a point to the amount of space occupied by a galaxy, depending on the context in which the term is used. To distinguish locations in the natural moving system of reference from locations in the stationary reference systems, we will use the term absolute location in application to the natural system. In the context of a fixed reference system an absolute location appears as a point (or some finite spatial magnitude) moving along a straight line. We are so accustomed to referring motion to a stationary reference system that it seems almost self-evident that an object that has no independent motion, and is not subject to any external force, must remain stationary with respect to some spatial coordinate system. Of course, it is recognized that what seems to be motionless in the context of our ordinary experience is actually moving in terms of the solar system as a reference; what seems to be stationary in the solar system is moving if we use the Galaxy as a reference datum, and so on. Current scientific theory also contends that motion cannot be specified in any absolute manner, and can only be stated in relative terms. However, all previous thought on the subject, irrespective of how it views the details, has made the assumption that the initial point of a motion is some fixed spatial location that can be identified as the spatial zero. But nature is not required to conform to human opinions and beliefs, and in this case does not do so. As indicated in the preceding paragraphs, the natural system of reference in a universe of motion is not a stationary system but a moving system. Inasmuch as each unit of the basic motion involves one unit of space and one unit of time, it follows that continuation of the motion through an interval during which time is progressing involves a continued increase, or progression, of both space and time. If an absolute spatial location X is in coincidence with spatial location x at time t, then at time t + n this absolute location X will be found at spatial location x + n. As seen in the context of a stationary spatial system of reference, each absolute location is moving outward from its point of reference at a constant unit speed. Because of this motion of the natural reference system with respect to the stationary systems, an object that has no independent motion, and is not subject to any external force, does not remain stationary in any system of fixed spatial coordinates. It remains at the same absolute location, and therefore moves outward at unit speed from its initial location, and from any object that occupies such a location. Thus far we have been considering the progression of the natural moving reference system in the context of a one-dimensional stationary reference system. Since we have postulated that the universe is three-dimensional, we may also represent the progression
in a three-dimensional stationary reference system. Because the progression is scalar, what this accomplishes is merely to place the one-dimensional system that has been discussed in the preceding paragraphs into a certain position in the three dimensional coordinate system. The outward movement of the natural system with respect to the fixed point continues in the same one-dimensional manner. The scalar nature of the progression of the natural reference system is very significant. A unit of the basic motion has no inherent direction; it is simply a unit of space in association with a unit of time. In quantitative terms it is a unit scalar magnitude: a unit of speed. Scalar motion plays only a very minor role in everyday life, and little attention is ordinarily paid to it. But our finding that the basic motion of the physical universe is inherently scalar changes this picture drastically. The properties of scalar motion now become extremely important. To illustrate the primary difference between scalar motion and the vectorial motion of our ordinary experience let us consider two cases which involve a moving object X between two points A and B on the surface of a balloon. In the first case, let us assume that the size of the balloon is maintained constant, and that the object X is something capable of independent motion, a crawling insect, perhaps. The motion of X is then vectorial. It has a specific direction in the context of a stationary spatial reference system, and if that direction is BA–that is, X is moving away from B–the distance XA decreases and the distance XB increases. In the second case, we will assume that X is a fixed spot on the balloon surface, and that its motion is due to expansion of the balloon. Here the motion of X is scalar. It is simply outward away from all other points on the balloon surface, and has no specific direction. In this case the motion away from B does not decrease the distance XA. Both XB and XA increase. The motion of the natural reference system relative to any fixed spatial system of reference is motion of this character. It has a positive scalar magnitude, but no inherent direction. In order to place the one-dimensional progression of an absolute location in a threedimensional coordinate system it is necessary to define a reference point and a direction. In the subsequent discussion we will be dealing largely with scalar motions that originate at specific points in the fixed coordinate system. The reference point for each of these motions is the point of origin. It follows that the motions can be represented in the conventional fixed system of reference only by the use of multiple reference points. This was brought out in the first edition of this work in the form of a statement that photons (which, as will be shown later, are objects without independent motion, and therefore remain in their absolute locations of origin) ―travel outward in all directions from various points of emission.” However, experience has indicated that further elaboration of this point is necessary in order to avoid misunderstandings. The principal stumbling block seems to be a widespread impression that there must be some kind of a conceptually identifiable universal reference system to which the motions of photons and other objects that remain in the same absolute locations can be related. 
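The two balloon cases lend themselves to a small numerical check. The sketch below is only an illustrative aid, not part of the theoretical development; the coordinates, the distance crawled, and the expansion factor are arbitrary values chosen for the example.

    import math

    def distance(p, q):
        # Euclidean distance between two points in the plane
        return math.hypot(p[0] - q[0], p[1] - q[1])

    # Arbitrary positions on the balloon surface (treated here as a flat patch)
    A, B = (0.0, 0.0), (10.0, 0.0)
    X = (6.0, 0.0)

    # Case 1: vectorial motion. X crawls 2 units in the direction BA (away from B)
    X1 = (X[0] - 2.0, X[1])
    print(distance(X1, A) < distance(X, A))   # True: XA decreases
    print(distance(X1, B) > distance(X, B))   # True: XB increases

    # Case 2: scalar motion. The whole surface expands by a factor of 1.5,
    # so every point, including X, moves away from every other point
    k = 1.5
    A2, B2, X2 = [(k * p[0], k * p[1]) for p in (A, B, X)]
    print(distance(X2, A2) > distance(X, A))  # True: XA increases
    print(distance(X2, B2) > distance(X, B))  # True: XB increases

In the expansion case no individual direction can be assigned to the motion of X; every point simply recedes from every other point. A picture of this kind can, however, leave the impression that some single identifiable expanding frame underlies all such motion.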
The expression ―natural reference system‖ probably contributes to this impression, but the fact that a natural reference system exists does not necessarily imply that it must be related in any direct way to the conventional three-dimensional stationary frame of reference.

It is true that the expanding balloon analogy suggests something of this kind, but an examination of this analogy will show that it is strictly applicable only to a situation in which all existing objects are stationary in the natural system of reference, and are therefore moving outward at unit speed. In this situation, any location can be taken as the reference point, and all other locations move outward from that point; that is, all locations move outward away from all other locations. But just as soon as moving objects (entities that are stationary, or moving with low speeds, in the fixed reference system, and are therefore moving with high speeds relative to the natural system of reference–emitters of photons, for example) are introduced into the situation, this simple representation is no longer possible, and multiple reference points become necessary. In order to apply the balloon analogy to a gravitationally bound physical system it is necessary to visualize a large number of expanding balloons, centered on the various reference points and interpenetrating each other. Absolute locations are defined only in a scalar sense (represented one-dimensionally). They move outward, each from its own reference point, regardless of where those reference points may be located in the three-dimensional spatial coordinate system. In the case of the photons, each emitting object becomes a point of reference, and since the motions are scalar and have no inherent direction, the direction of motion of each photon, as seen in the reference system, is determined entirely by chance. Each of the emitting objects, wherever it may be in the stationary reference system, and whatever its motions may be relative to that system, becomes the reference point for the scalar photon motion; that is, it is the center of an expanding sphere of radiation. The finding that the natural system of reference in a universe of motion is a moving system rather than a stationary system, our first deduction from the postulates that define such a universe, is a very significant discovery. Heretofore only one so-called ―universal force,‖ the force of gravitation, has been known. Later in the discussion it will be seen that the customary term ―universal‖ is somewhat too broad in application to gravitation, but this phenomenon (the nature of which will be examined later) affects all units and aggregates of matter within the observational range under all circumstances. While not actually universal, it can appropriately be called a ―general‖ force. In a universe of motion a force is necessarily a motion, or an aspect of motion. Since we will be working mainly in terms of motion for the present, it will be desirable at this point to establish the relation between the force and motion concepts. For this purpose, let us consider a situation in which an object is moving in one direction with a certain velocity, and is simultaneously moving in the opposite direction with an equal velocity. The net change of position of the object is zero. Instead of looking at the situation in terms of two opposing motions, we may find it convenient to say that the object is motionless, and that this condition has resulted from a conflict of two forces tending to produce motion in opposite directions. On this basis we define force as that which will produce motion if not prevented from so doing by other forces. The quantitative aspects of this relation will be considered later. 
The limitations to which a derived concept of this kind is subject will also have consideration in connection with subjects to be covered in the pages that follow. The essential point to be noted here is that ―force‖ is merely a special way of looking at motion.
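The relation between the two ways of describing this situation can be reduced to elementary arithmetic. The following fragment is merely illustrative; the speed values are arbitrary and the names are chosen for the example only.

    # Two equal motions in opposite directions applied to the same object
    speed_one = +2.0            # arbitrary illustrative value
    speed_two = -2.0            # equal magnitude, opposite direction

    net_speed = speed_one + speed_two
    print(net_speed)            # 0.0: no net change of position

    # The alternative, "force" description of the same facts: each motion is
    # read as a force tending to produce motion, and the object is said to be
    # motionless because the two forces balance.
    print(speed_one + speed_two == 0)   # True

Nothing has been added in the second reading; the balance of forces and the balance of motions are the same statement.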

It has long been realized that while gravitation has been the only known general force, there are many physical phenomena that are not capable of satisfactory explanation on the basis of only one such force. For example, Gold and Hoyle make this comment: Attempts to explain both the expansion of the universe and the condensation of galaxies must be very largely contradictory so long as gravitation is the only force field under consideration. For if the expansive kinetic energy of matter is adequate to give universal expansion against the gravitational field it is adequate to prevent local condensation under gravity, and vice versa. That is why, essentially, the formation of galaxies is passed over with little comment in most systems of cosmology.29 Karl K. Darrow made the same point in a different connection, emphasizing that gravitation alone is not sufficient in many applications. There must also be what he called an ―antagonist,‖ an ―essential and powerful force,‖ as he described it. May we now assume that the ultimate particles of the world act on each other by gravity alone, with motion as the sole antagonist to keep the universe from gathering into a single clump? The answer to this question is a forthright and irrevocable No!30 The globular star clusters provide an example illustrating Darrow's point. Like the formation of galaxies, the problem of accounting for the existence of these clusters is customarily ―passed over with little comment, by the astronomers, but a discussion of the subject occasionally creeps into the astronomical literature. A rather candid article by E. Finlay-Freundlich which appeared in a publication of the Royal Astronomical Society some years ago admitted that ―the main problem presented by the globular clusters is their very existence as finite systems.‖ Many efforts have been made to explain these clusters on the basis of motions acting in opposition to gravitation, but as this author concedes, there is no evidence of the existence of motions that would be adequate to establish an equilibrium, and he asserts that ―their structure must be determined solely by the gravitational field set up by the stars which constitute such a cluster.‖ This being the case, the only answer he was able to visualize was that the clusters ―have not yet reached the final state of equilibrium,‖ a conclusion that is clearly in conflict with the many observational indications that these clusters are relatively stable long-lived objects. The following judgment that Finlay-Freundlich expressed with respect to the results obtained by his predecessors is equally applicable to the situation as it stands today: All attempts to explain the existence of isolated globular clusters in the vicinity of the galaxy have hitherto failed.31 But now we find that there is a second ―general force‖ that has not hitherto been recognized, just the kind of an ―antagonist‖ to gravitation that is necessary to explain all of these otherwise inexplicable phenomena. Just as gravitation moves all units and aggregates of matter inward toward each other, so the progression of the natural reference system with respect to the stationary reference systems in common use moves material units and aggregates, as we see them in the context of a stationary reference system, outward away from each other. The net movement of each object, as observed, is
determined by the relative magnitudes of the opposing general motions (forces), together with whatever additional motions may be present. In each of the three illustrative cases cited, the outward progression of the natural reference system provides the missing piece in the physical puzzle. But these cases are not unique; they are only especially dramatic highlights of a clarification of the entire physical picture that is accomplished by the introduction of this new concept of a moving natural reference system. We will find it in the forefront of almost every subject that is discussed in the pages that follow. It should be recognized, however, that the outward motions that are imparted to physical objects by reason of the progression of the natural reference system are, in a sense, fictitious. They appear to exist only because the physical objects are referred to a spatial reference system that is assumed to be stationary, whereas it is, in fact, moving. But in another sense, these motions are not entirely fictitious, inasmuch as the attribution of motion to entities that are not actually moving takes place only at the expense of denying motion to other entities that are, in fact, moving. These other entities that are stationary relative to the fixed spatial coordinate system are participating in the motion of that coordinate system relative to the natural system. The motion therefore exists, but it is attributed to the wrong entities. One of the first essentials for an understanding of the system of motions that constitutes the physical universe is to relate the basic motions to the natural reference system, and thereby eliminate the confusion that has been introduced by the use of a fixed reference system. When this is done it can be seen that the units of motion involved in the progression of the natural reference system have no actual physical significance. They are merely units of a reference system in which the fictitious motion of the absolute locations can be represented. Obviously, the spatial aspect of these fictitious units of motion is equally fictitious, and this leads to an answer to the question as to the relation of the ―space‖ represented by a stationary three-dimensional reference system, extension space, as we may call it, to the space of the universe of motion. On the basis of the explanation given in the preceding pages, if a number of objects without independent motion (such as photons) originate simultaneously from a source that is stationary with respect to a fixed reference system, they are carried outward from the location of origin at unit speed by the motion of the natural reference system relative to the stationary system. The direction of motion of each of these objects, as seen in the context of the stationary system of reference, is determined entirely by chance, and the motions are therefore distributed over all directions. The location of origin is then the center of an expanding sphere, the surface of which contains the locations that the moving objects occupy after a period of time corresponding to the spatial progression represented by the radius of the sphere. Any point within this sphere can be defined by the direction of motion and the duration of the progression; that is, by polar coordinates. The sphere generated by the motion of the natural reference system relative to the point of origin has no actual physical significance. It is a fictitious result of relating the natural reference system to an arbitrary fixed system of reference. 
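The geometry just described can be put into a short sketch. The code below is purely illustrative; the number of objects and the elapsed time are arbitrary, and distances are expressed in natural units in which the speed of the progression is 1.

    import math, random

    def random_direction():
        # Uniformly distributed direction in three dimensions
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        return (r * math.cos(phi), r * math.sin(phi), z)

    t = 5.0                      # elapsed time in natural units; progression speed is 1
    origin = (0.0, 0.0, 0.0)     # location of origin, stationary in the fixed system

    photons = []
    for _ in range(1000):
        d = random_direction()   # direction determined entirely by chance
        # polar description: a direction and a duration; the Cartesian position follows
        photons.append((origin[0] + t * d[0], origin[1] + t * d[1], origin[2] + t * d[2]))

    # Every object lies at distance t from the origin: the surface of the expanding sphere
    radii = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in photons]
    print(max(abs(r - t) for r in radii) < 1e-9)   # True

The sphere traced out in this way is, once more, nothing but a by-product of referring the absolute locations to an arbitrarily selected stationary frame.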
It does, however, define a reference frame that is well adapted to representing the motions of ordinary human experience. Any such sphere can be expanded indefinitely, and the reference system thus defined is therefore coextensive
with all other stationary spatial reference systems. Position in any one such system can be expressed in terms of any other merely by a change of coordinates. The volume generated in this manner is identical with the entity that is called ―space‖ in previous physical theories. It is the spatial constituent of a universe of matter. As brought out in the foregoing explanation, this entity, extension space, as we have called it, is neither a void, as contended by one of the earlier schools of thought, nor an actual physical entity, as seen by an opposing school. In terms of a universe of motion it is simply a reference system. An appropriate analogy is the coordinate system on a sheet of graph paper. The original lines on this paper, generally lightly printed in color, have no significance so far as the subject matter of the graph is concerned. But if we draw some lines on this sheet that are relevant to the subject matter, then the printed coordinate system facilitates our assessment of the interrelations between the quantities represented by those lines. Similarly, extension space, per se, has no physical significance. It is merely a reference system, like the colored lines on the graph paper, that facilitates cognition of the relations between the significant entities and phenomena: the motions and their various aspects. The true ―space‖ that enters into physical phenomena is the spatial aspect of motion. As brought out earlier, it has no independent existence. Nor does time. Each exists only in association with the other as motion. We can, however, isolate the spatial aspect of a particular motion, or type of motion, and deal with it on a theoretical basis as if it were independent, providing that the rate of change of time remains constant, or the appropriate correction is applied for whatever deviation from a constant rate actually does take place. This ability to abstract the spatial aspect and treat it independently is the factor that enables us to relate the spatial aspect of translational motion to the reference system that we recognize as extension space. It may be of interest to note that this clarification of the nature of extension space gives us a partial answer to the long-standing question as to whether this space, which in the context of a universe of matter is ―space‖ in general, is finite or infinite. As a reference system it is potentially infinite, just as ―number‖ is potentially infinite. But it does not necessarily follow that the number of units of space participating in motions that actually have physical significance is infinite. A complete answer to the question is therefore not available at this stage of the development. The remaining issue will have further consideration later. The finding that extension space is merely a reference system also disposes of the issue with respect to ―curvature,‖ or other kinds of distortion, of space, and it rules out any participation of extension space in physical action. Such concepts as those involved in Einstein's assertion that ―space has the physical property of transmitting electromagnetic waves‖ are wholly incorrect. No reference system can have any physical effects, nor can any physical action affect a reference system. Such a system is merely a construct: a device whereby physical actions and their results can be represented in usable form.

Extension space, the ―container‖ visualized by most individuals when they think of space, is capable of representing only translational motion, and its spatial aspect, not physical space in general. But the spatial aspect of any motion has the same relation to the physical phenomena in which it is involved as the spatial aspect of translational motion that we can follow by means of its representation in the coordinate system. For example, the space involved in rotation is physical space, but it can be defined in the conventional reference system only with the aid of an auxiliary scalar quantity: the number of revolutions. By itself, that reference system cannot distinguish between one revolution and n revolutions. Nor is it able to represent vibrational motion. As will be found later in the development, even its capability of representing translational motion is subject to some significant limitations. Regardless of whether motion is translatory, vibratory, or rotational, its spatial aspect is ―space,‖ from the physical standpoint. And whenever a physical process involves space in general, rather than merely the spatial aspect of translational motion, all components of the total space must be taken into account. The full implications of this statement will not become apparent until we are ready to begin consideration of electrical phenomena, but it obviously rules out the possibility of a universal reference system to which all spatial magnitudes can be related. Furthermore, every motion, and therefore every physical object (a manifestation of motion) has a location in three-dimensional time as well as in three-dimensional space, and no spatial reference system is capable of representing both locations. It may be somewhat disconcerting to many readers to be told that we are dealing with a universe that transcends the stationary three-dimensional spatial reference system in which popular opinion places it: a universe that involves three-dimensional time, scalar motion, a moving reference system, and so on. But it should be realized that this complexity is not peculiar to the Reciprocal System. No physical theory that enjoys any substantial degree of acceptance today portrays the universe as capable of being accurately represented in its entirety within any kind of a spatial reference system. Indeed, the present-day ―official‖ school of physical theory says that the basic entities of the universe are not ―objectively real‖ at all; they are phantoms which can ―only be symbolized by partial differential equations in an abstract multidimensional space.‖ 32 (Werner Heisenberg) Prior to the latter part of the nineteenth century there was no problem in this area. It was assumed, without question, that space and time were clearly recognizable entities, that all spatial locations could be defined in terms of an absolute spatial reference system, and that time could be defined in terms of a universal uniform flow. But the experimental demonstration of the constant speed of light by Michelson and Morley threw this situation into confusion, from which it has never fully emerged. The prevailing scientific opinion at the moment is that time is not an independent entity, but is a sort of quasi-space, existing in one dimension that is joined in some manner to the three dimensions of space to form a four-dimensional continuum. Inasmuch as this creates as many problems as it solves, it has been further assumed that this continuum is distorted by the presence of matter. 
These assumptions, which are basic to relativity theory, the currently accepted doctrine, leave the conventional spatial reference system in
a very curious position. Einstein says that his theory requires us to free ourselves ―from the idea that co-ordinates must have an immediate metrical meaning.‖33 He defines this expression ―a metrical meaning‖ as the existence of a specific relationship between differences of coordinates and measurable lengths and times. Just what kind of a meaning the coordinates can have if they do not represent measurable magnitudes is rather difficult to understand. The truth is that the differences in coordinates, which, according to Einstein, have no metrical meaning, are the spatial magnitudes that enter into almost all of our physical calculations. Even in astronomy, where it might be presumed that any inaccuracy would be very serious, in view of the great magnitudes involved, we get this report from Hannes Alfven: The general theory (of relativity) has not been applied to celestial mechanics on an appreciable scale. The simpler Newtonian theory is still employed almost exclusively to calculate the motions of celestial bodies.34 Our theoretical development now demonstrates that the differences in coordinates do have ―metrical meaning‖ , and that wherever we are dealing with vectorial motions, or with scalar motions that can be referred to identifiable reference points, these coordinate positions accurately represent the spatial aspects of the translational motions that are involved. This explains why the hypothesis of an absolute spatial reference system for the universe as a whole was so successful for such a long time. The exceptions are exceptional in ordinary practice. The existence of multiple reference points has had no significant impact except in the case of gravitation, and the use of the force concept has sidestepped the gravitational issue. Only in recent years have the observations penetrated into regions outside the boundaries of the conventional reference systems. But we now have to deal with the consequences of this enlargement of the scope of our observations. In the course of this present work it has been found that the problems introduced into physical science by the extension of experimental and observational knowledge are directly due to the fact that some of the newly discovered phenomena transcend the reference systems into which current science is trying to place them. As we will see later, this is particularly true where variations in time magnitudes are involved, inasmuch as conventional spatial reference systems assume a fixed and unchanging progression of time. In order to get the true picture it is necessary to realize that no single reference system is capable of representing the whole of physical reality. The universe, as seen in the context of the Reciprocal System of theory, is much more complex than is generally realized, but the simple Newtonian universe was abandoned by science long ago, and the modifications of the Newtonian view that we now find necessary are actually less drastic than those required by the currently popular physical theories. Of course, in the final analysis this makes no difference. Scientific thought will have to conform to the way in which the universe actually behaves, irrespective of personal preferences, but it is significant that all of the phenomena of a universe of motion, as they emerge from the development of the Reciprocal System, are rational, clearly defined, and ―objectively real.‖

CHAPTER 4

Radiation
The basic postulate of the Reciprocal System of theory asserts the existence of motion. In itself, without qualification, this would permit the existence of any conceivable kind of motion, but the additional assumptions included in the postulates act as limitations on the types of motion that are possible. The net result of the basic postulates plus the limitations is therefore to assert the existence of any kind of motion that is not excluded by the limiting assumptions. We may express this point concisely by saying that in the theoretical universe of motion anything that can exist does exist. The further fact that these permissible theoretical phenomena coincide item by item with the observed phenomena of the actual physical universe is something that will have to be demonstrated step by step as the development proceeds. Some objections have been raised to the foregoing conclusion that what can exist does exist, on the ground that actuality does not necessarily follow from possibility. But no one is contending that actual existence is a necessary consequence of possible existence, as a general proposition. What is contended is that this is true, for special reasons, in the physical universe. Philosophers explain this as being the result of a ―principle of nature.‖ David Hawkins, for instance, tells us that ―the principle of plenitude . . . says that all things possible in nature are actualized.‖35 What the present development has done is to explain why nature follows such a principle. Our finding is that the basic physical entities are scalar motions, and that the existence of different observable entities and phenomena is due to the fact that these motions necessarily assume specific directions when they appear in the context of a three-dimensional frame of reference. Inasmuch as the directions are determined by chance, there is a finite probability corresponding to every possible direction, and thus every possibility becomes an actuality. It should be noted that this is exactly the same principle that was applied in Chapter 3 to explain why an expanding sphere of radiation emanates from each radiation source (a conclusion that is not challenged by anyone). In this case, too, scalar motions exist, each of which takes one of certain permissible directions (limited by the translational character of the motions), and these motions are distributed over all of the directions. Inasmuch as it has been postulated that motion, as defined earlier, is the sole constituent of the physical universe, we are committed to the proposition that every physical entity or phenomenon is a manifestation of motion. The determination of what entities, phenomena, and processes can exist in the theoretical universe therefore reduces to a matter of ascertaining what kinds of motion and combinations of motions can exist in such a universe, and what changes can take place in these motions. Similarly, in relating the theoretical universe to the observed physical universe, the question as to what any observed entity or phenomenon is never arises. We always know what it is. It is a motion, a combination of motions, or a relation between motions. The only question that is ever at issue is what kinds of motions are involved. There has been a sharp difference of opinion among those interested in the philosophical
aspects of science as to whether the process of enlarging scientific understanding is ―discovery‖ or ―invention.‖ This is related to the question as to the origin of the fundamental principles of science that was discussed in Chapter 1, but it is a broader issue that applies to all scientific knowledge, and involves the inherent nature of that knowledge. The specific point at issue is clearly stated by R. B. Lindsay in these words: Application of the term ―discovery‖ implies that there is an external world ―out there‖ wholly independent of the observer and with built-in regularities and laws waiting to be uncovered and revealed. They have always been there and presumably always will be; our task is by diligent search to find out what they are. On the other hand, the term ―invention‖ implies that the physicist uses not only his observations but his imaginative powers to construct points of view that identify with experience.36 The ―discovery‖ concept, says Lindsay, implies that the acquisition of scientific knowledge is cumulative, and that ultimately our understanding of the physical world should be essentially complete. On the contrary, the ―point of view of invention means that the process of creating new experience and the construction of new ideas to cope with this experience go hand in hand.‖ On this basis, ―the whole activity is open-ended‖; there is no place for the idea of completeness. The Reciprocal System now provides a definitive answer to this question. It not only establishes scientific investigation as a process of discovery, but reduces that discovery to deduction and verification of the deductions. All of the information that is necessary in order to arrive at a full description of any theoretically possible entity or phenomenon is implicit in the postulates. A full development of the consequences of the postulates therefore defines a complete theoretical universe. As will be seen in the pages that follow, the physical processes of the universe include a continuing series of interchanges between vectorial motions and scalar motions. In all of these interchanges causality is maintained; no motions of either type occur except as a result of previously existing motions. The concept of events occurring without cause, which enters into some of the interpretations of the theories included in the current structure of physics, is therefore foreign to the Reciprocal System. But the universe of motion is not deterministic in the strict Laplacian sense, because the directions of the motions are continually being redetermined by chance processes. The description of the physical universe that emerges from development of the consequences of the postulates of the Reciprocal System therefore identifies the general classes of entities and phenomena that exist in the universe, and the relations between them, rather than specifying the exact result of every interaction, as a similarly complete theory would do if it were deterministic. In beginning our examination of these physical entities and phenomena, the first point to be noted is that the postulates require the existence of real units of motion, units that are similar to the units of motion involved in the progression of the natural reference system, except that they actually exist, rather than being fictitious results of relating motion to an arbitrary reference system. 
These independent units of motion, as we will call them, are superimposed on the moving reference background in much the same manner as that in which matter is supposed to exist in the basic space of previous physical theory. Since they are units of the
same kind, however, these independent units are interrelated with the units of the background motion, rather than being separate and distinct from it, in the manner in which matter is presumed to be distinct from the space-time background in the theories based on the ―matter‖ concept. As we will see shortly, some of the independent motions have components that are coincident with the background motion, and these components are not effective from the physical standpoint; that is, their effective physical magnitude is zero. A point of considerable significance is that while the postulates permit the existence of these independent motions, and, on the basis of the principle previously stated, they must therefore exist in the universe of motion defined by the postulates, those postulates do not provide any mechanism for originating independent motions. It follows that the independent motions now existing either originated coincidentally with the universe itself, or else were originated subsequently by some outside agency. Likewise, the postulates provide no mechanism for terminating the existence of these independent motions. Consequently, the number of effective units of such motion now existing can neither be increased nor diminished by any process within the physical system. This inability to alter the existing number of effective units of independent motion is the basis for what we may call the general conservation law, and the various subsidiary conservation laws applying to specific physical phenomena. It suggests, but does not necessarily require, a limitation of the independent units of motion to a finite number. The issue as to the finiteness of the universe does not enter into any of the phenomena that will be examined in this present volume, but it will come up in connection with some of the subjects that will be taken up later, and it will be given further consideration then. The Reciprocal System of theory deals only with the physical universe as it now exists, and reaches no conclusions as to how that universe came into being, nor as to its ultimate fate. The theoretical system is therefore completely neutral on the question of creation. It is compatible with either the hypothesis of creation by some agency, or the hypothesis that the universe has always existed. Continuous creation of matter by action of the physical mechanism itself, as postulated by the Steady State theory of cosmology, is ruled out, and there is nothing in that mechanism that will allow the universe to arrive at any kind of termination of its own accord. The question of creation or termination by action of an outside agency is beyond the scope of the theoretical development. Turning now to the question as to what kinds of motion are possible at the basic level, we note that scalar magnitudes may be either positive (outward, as represented in a spatial reference system), or negative (inward). But as we observe motion in the context of a fixed reference system, the outward progression of the natural reference system is always present, and thus every motion includes a one-unit outward component. The discrete unit postulate prevents any addition to an effective unit, and independent outward motion is therefore impossible. All independent motion must have net inward or negative magnitude. Furthermore, it must be continuous and uniform at this stage of the development, because no mechanism is yet available whereby discontinuity or variability can be produced. 
Since the outward progression always exists, independent continuous negative motion is not possible by itself, but it can exist in combination with the ever-present outward progression.

The result of such a combination of unit negative and unit positive motion is zero motion relative to a stationary coordinate system. Another possibility is simple harmonic motion, in which the scalar direction of movement reverses at each end of a unit of space, or time. In such motion, each unit of space is associated with a unit of time, as in unidirectional translational motion, but in the context of a stationary three-dimensional spatial reference system the motion oscillates back and forth over a single unit of space (or time) for a certain period of time (or space). At first glance, it might appear that the reversals of scalar direction at each end of the basic unit are inadmissible in view of the absence of any mechanism for accomplishing a reversal. However, the changes of scalar direction in simple harmonic motion are actually continuous and uniform, as can be seen from the fact that such motion is a projection of circular motion on a diameter. The net effective speed varies continuously and uniformly from +1 at the midpoint of the forward movement to zero at the positive end of the path of motion, and then to -1 at the midpoint of the reverse movement and zero at the negative end of the path. The continuity and uniformity requirements are met both by a continuous, uniform change of direction, and by a continuous, uniform change of magnitude. As pointed out earlier, the theoretical structure that we are developing from the fundamental postulates is a description of what can exist in the theoretical universe of motion defined by those postulates. The question as to whether a certain feature of this theoretical universe does or does not correspond to anything in the actual physical universe is a separate issue that is explored in a subsequent step in the project, to be started shortly, in which the theoretical universe is compared item by item with the observed universe. At the moment, therefore, we are not concerned with whether or not simple harmonic motion does exist in the actual physical universe, why it exists, if it does, or how it manifests itself. All that we need to know for present purposes is that inasmuch as this kind of motion is continuous, and is not excluded by the postulates, it is one of the kinds of motion that exists in the theoretical universe of motion, under the most basic conditions. Under these conditions simple harmonic motion is confined to individual units. When the motion has progressed for one full unit, the discrete unit postulate specifies that a boundary exists. There is no discontinuity, but at this boundary one unit terminates and another begins. Whatever processes may have been under way in the first unit cannot carry over into the next. They cannot be divided between two totally independent units. Consequently, a continuous change from positive to negative can occur only within one unit of either space or time. As explained in Chapter 3, motion, as herein defined, is a continuous process–a progression–not a succession of jumps. There is progression even within the units, simply because these are units of progression, or scalar motion. For this reason, specific points within the unit–the midpoint, for example–can be identified, even though they do not exist independently. The same is true of the chain used as an analogy in the preceding discussion. Although the chain exists only in discrete units, or links, we can distinguish various portions of a link. 
For instance, if we utilize the chain as a means of measurement, we can measure 10-1/2 links, even though half of a link would not qualify as part of the chain. Because of this capability of identifying the different portions of the unit, we see the vibrating unit as
following a definite path. In defining this path we will need to give some detailed consideration to the matter of direction. In the first edition the word ―direction‖ was used in four different senses. Exception was taken to this practice by a number of readers, who suggested that it would be helpful if ―direction‖ were employed with only one significance, and different names were attached to the other three concepts. When considered purely from a technical standpoint, the earlier terminology is not open to legitimate criticism, as using words in more than one sense is unavoidable in the English language. However, anything that can be done to facilitate understanding of the presentation is worth serious consideration. Unfortunately, there is no suitable substitute for ―direction‖ in most of these applications. Some of the objections to the previous terminology were based on the ground that scalar quantities, by definition, have no direction, and that using the term ―direction‖ in application to these quantities, as well as to vectorial quantities, is contradictory and leads to confusion. There is merit in this objection, to be sure, in any application where we deal with scalar quantities merely as positive and negative magnitudes. But as soon as we view the scalar motions in the context of a fixed spatial reference system, and begin talking about ―outward‖ and ―inward,‖ as we must do in this work, we are referring not to the scalar magnitudes themselves, but to the representation of these magnitudes in a stationary spatial reference system, a representation that is necessarily directional. The use of directional language in this case therefore appears to be unavoidable. There are likewise some compelling reasons for continuing to use the term ―direction in time‖ in application to the temporal property analogous to the spatial property of direction. We could, of course, coin a new word for this purpose, and there would no doubt be certain advantages in so doing. But there are also some very definite advantages to be gained by utilizing the word ―direction‖ with reference to time as well as with reference to space. Because of the symmetry of space and time, the property of time that corresponds to the familiar property of space that we call ―direction‖ has exactly the same characteristics as the spatial property, and by using the term ―direction in time,‖ or ―temporal direction,‖ as a name for this property we convey an immediate understanding of its nature and characteristics that would otherwise require a great deal of discussion and explanation. All that is then necessary is to keep in mind that although direction in time is like direction in space, it is not direction in space. Actually, it should not be difficult to get away from the habit of always interpreting ―direction‖ as meaning ―direction in space‖ when we are dealing with motion. We already recognize that there is no spatial connotation attached to the term when it is used elsewhere. We habitually use ―direction‖ and directional terms of one kind or another in speaking of scalar quantities, or even in connection with items that cannot be expressed in physical imagery at all. We speak of wages and prices as moving in the same direction, temperature as going up or down, a change in the direction of our thinking, and so on. Here we realize that we are using the word ―direction‖ without any spatial significance. 
There should be no serious obstacle in the way of a similar conception of the meaning of ―direction in time.‖ In this edition the term ―direction‖ will not be used in referring to the deviations upward or
downward from unit speed. In the other senses in which the term was originally used it seems essential to continue utilizing directional language, but as an alternative to the further limitations on the use of the term ―direction‖ that have been suggested we will use qualifying adjectives wherever the meaning of the term is not obvious from the context. On this basis vectorial direction is a specific direction that can be fully represented in a stationary coordinate system. Scalar direction is simply outward or inward, the spatial representation of positive or negative scalar magnitudes respectively. Wherever the term ―direction‖ is used without qualification it will refer to vectorial direction. If there is any question as to whether the direction (scalar or vectorial) under consideration is a direction in space or a direction in time, this information will also be given. Vectorial motion is motion with an inherent vectorial direction. Scalar motion is inward or outward motion that has no inherent vectorial direction, but is given a direction by the factors involved in its relation to the reference system. This imputed vectorial direction is independent of the scalar direction except to the extent that the same factors may, in some instances, affect both. As an analogy, we may consider a motor car. The motion of this car has a direction in three-dimensional space, a vectorial direction, while at the same time it has a scalar direction, in that it is moving either forward or backward. As a general proposition, the vectorial direction of this vehicle is independent of its scalar direction. The car can run forward in any vectorial direction, or backward in any direction. If the car is symmetrically constructed so that the front and back are indistinguishable, we cannot tell from direct observation whether it is moving forward or backward. The same is true in the case of the simple scalar motions. For example, we will find in the pages that follow that the scalar direction of a falling object is inward, whereas the scalar direction of a beam of light is outward. If the two happen to traverse the same path in the same vectorial direction, as they may very well do, there is nothing observable that will distinguish between the inward and outward motion. In the usual situation the scalar direction of an observed motion must be determined from collateral information independently of the observed vectorial direction. The magnitude of a simple harmonic motion, like that of any other motion, is a speed, the relation between the number of units of space and the number of units of time participating in the motion. The basic relation, one unit of space per unit of time, always remains the same, but by reason of the directional reversals, which result in traversing the same unit of one component repeatedly, the speed of a simple harmonic motion, as it appears in a fixed reference system, is 1/x (or x/1). This means that each advance of one unit in space (or time) is followed by a series of reversals of scalar direction that increase the number of units of time (or space) to x, before another advance in space (or time) takes place. At this point the scalar direction remains constant for one unit, after which another series of reversals takes place. Ordinarily the vectorial direction reverses in unison with the scalar direction, but each end of a unit is the reference point for the position of the next unit in the reference system, and conformity with the scalar reversals is therefore not mandatory. 
Consequently, in order to maintain continuity in the relation of the vectorial motion to the fixed reference system the
vectorial direction continues the regular reversals at the points where the scalar motion advances to a new unit of space (or time). The relation between the scalar and vectorial directions is illustrated in the following tabulation, which represents two sections of a 1/3 simple harmonic motion. The vectorial directions are expressed in terms of the way the movement would appear from some point not in the line of motion.

Unit Number        1        2        3        4        5        6
Direction
  Scalar           inward   outward  inward   inward   outward  inward
  Vectorial        right    left     right    left     right    left
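The rule underlying this tabulation can be stated compactly and checked mechanically. The sketch below is only an illustrative rendering of the tabulated pattern, not part of the deductive development: the scalar direction reverses at every unit boundary except the one at which the motion advances to a new unit of space, while the vectorial direction reverses at every unit boundary.

    def shm_directions(x, sections):
        # Direction pattern of a 1/x simple harmonic motion: the scalar direction
        # reverses at each unit boundary within a section of x units, but stays
        # unchanged across the boundary where the motion advances to a new unit
        # of space; the vectorial direction reverses at every unit boundary.
        scalar, vectorial = [], []
        current = "inward"                   # the net scalar direction is inward
        for _ in range(sections):
            for i in range(x):
                scalar.append(current)
                vectorial.append("right" if len(vectorial) % 2 == 0 else "left")
                if i < x - 1:                # reverse within the section only
                    current = "outward" if current == "inward" else "inward"
        return scalar, vectorial

    print(shm_directions(3, 2))
    # (['inward', 'outward', 'inward', 'inward', 'outward', 'inward'],
    #  ['right', 'left', 'right', 'left', 'right', 'left'])

With x = 3 and two sections the output is the sequence tabulated above; the oscillation remains within a single unit of one component while the other component extends to three units per section.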

The simple harmonic motion thus remains permanently in a fixed position in the dimension of motion, as seen in the context of a stationary reference system; that is, it is an oscillatory, or vibratory, motion. An alternative to this pattern of reversals will be discussed in Chapter 8. Like all other absolute locations, the absolute location occupied by the vibrating unit, the unit of simple harmonic motion, is carried outward by the progression of the natural reference system, and since the linear motion of the vibrating unit has no component in the dimensions perpendicular to the line of oscillation, the outward progression at unit speed takes place in one of these free dimensions. Inasmuch as this outward progression is continuous within the unit as well as from one unit of the reference frame to the next, the combination of a vibratory motion and a linear motion perpendicular to the line of vibration results in a path which has the form of a sine curve. Because of the dimensional relationship between the oscillation and the linear progression, there is a corresponding relation between the vectorial directions of these two components of the total motion, as seen in the context of a stationary reference system, but this relation is fixed only between these two components. The position of the plane of vibration in the stationary spatial system of reference is determined by chance, or by the characteristics of the originating object. Although the basic one-to-one space-time ratio is maintained in the simple harmonic motion, and the only change that takes place is from positive to negative and vice versa, the net effect, from the standpoint of a fixed system of reference is to confine one component–either space or time–to a single unit, while the other component extends to n units. The motion can thus be measured in terms of the number of oscillations per unit of time, a frequency, although it is apparent from the foregoing explanation that it is actually a speed. The conventional measurement in terms of frequency is possible only because the magnitude of the space (or time) term remains constant at unity. Here, in this oscillating unit, the first manifestation of independent motion (that is, motion that is separate and distinct from the outward motion of the natural reference system) that has emerged from the theoretical development, is the first physical object. In the motion of
this object we also have the first instance of ―something moving.‖ Up to this time we have been considering only the basic motions, relations between space and time that do not involve movement of any ―thing.‖ Experience in presenting the theory to college audiences has indicated that many persons are unable to conceive of the existence of motion without something moving, and are inclined to argue that this is impossible. It should be realized, however, that we are definitely committed to this concept just as soon as we postulate a universe composed entirely of motion. In such a universe, ―things‖ are combinations of motions, and motion is thus logically prior to ―things.‖ The concept of a universe of motion is generally conceded to be reasonable and rational. The long list of prominent and not-so-prominent scientists and philosophers who have essayed to explore the implications of such a concept is sufficient confirmation of this point. It follows that unless some definite and positive conflict with reason or experience is encountered, the necessary consequences of that concept must also be presumed to be reasonable and rational, even though some of them may conflict with long-standing beliefs of some kind. There is no mathematical obstacle to this unfamiliar type of motion. We have defined motion, for purposes of a theory of a universe of motion, by means of the relation expressed in the equation of motion: v = s/t. This equation does not require the existence of any moving object. Even where the motion actually is motion of something, that ―something‖ does not enter into any of the terms of the equation, the mathematical representation of the motion. The only purpose that it serves is to identify the particular motion under consideration. But identification is also possible where there is nothing moving. If, for example, we say that the motion we are talking about is the motion of atom A, we are identifying a particular motion, and distinguishing it from all other motions, but if we refer to the motion which constitutes atom A, we are identifying this motion (or combination of motions) on an equally definite basis, even though this is not motion of anything. A careful consideration of the points brought out in the foregoing discussion will make it clear that the objections that have been raised to the concept of motion without anything moving are not based on logical grounds. They stem from the fact that the idea of simple motion of this kind, merely a relation between space and time, is new and unfamiliar. None of us likes to discard familiar ideas of long standing and replace them with something new and different, but this is part of the price that we pay for progress. This will be an appropriate time to emphasize that combinations or other modifications of existing motions can only be accomplished by adding or removing units of motion. As stated in Chapter 2, neither space nor time exists independently. Each exists only in association with the other as motion. Consequently, a speed 1/a cannot be changed to a speed 1/b by adding b-a units of time. Such a change can only be accomplished by superimposing a new motion on the motion that is to be altered. 
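The point that the equation of motion contains no term for a moving object can be made concrete with a small data-structure sketch. The class and labels below are hypothetical illustrations, not part of the theory itself: a motion is fully specified by its space and time aspects, and a label, where one is wanted, serves only to single the motion out.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Motion:
        # A motion is completely specified by its space and time aspects;
        # v = s/t involves no term for a moving object.
        space_units: float
        time_units: float
        label: str = ""          # optional identification only

        @property
        def speed(self):
            return self.space_units / self.time_units

    # "The motion of atom A" and "the motion which constitutes atom A" are both
    # simply motions; the label distinguishes them but enters no calculation.
    m1 = Motion(1, 4, "motion of atom A")
    m2 = Motion(1, 4, "motion constituting atom A")
    print(m1.speed == m2.speed)   # True: the object, if any, plays no mathematical role

Whichever way the label reads, the mathematical content, the speed s/t, is unchanged.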
The two different operations involved in the investigation from which the results reported herein were derived could have been performed separately: first developing the theoretical structure as far as circumstances would permit, and then comparing this structure with the observed features of the physical universe. In
practice, however, it was more convenient to identify the various theoretical features with the corresponding physical features as the work progressed, so that the correlations would serve as a running check on the accuracy of the theoretical conclusions. Furthermore, this policy eliminated the need for the separate system of terminology that otherwise would have been required for referring to the various features of the theoretical universe during the process of the theoretical development. Much the same considerations apply to the presentation of the results, and we will therefore identify each theoretical feature as it emerges from the development, and will refer to it by the name that is customarily applied to the corresponding physical feature. It should be emphasized, however, that this hand in hand method of presentation is purely an aid to understanding. It does not alter the fact that the theoretical universe is being developed entirely by deduction from the postulates. No empirical information is being introduced into the theoretical structure at any point. All of the theoretical features are purely theoretical, with no empirical content whatever. The agreement between theory and observation that we will find as we go along is not a result of basing the theoretical conclusions on appropriate empirical premises; it comes about because the theoretical system is a true and accurate representation of the actual physical situation. Identification of the theoretical unit of simple harmonic motion that we have been considering presents no problem. It is obvious that each of these units is a photon. The process of emission and movement of the photons is radiation. The space-time ratio of the vibration is the frequency of the radiation, and the unit speed of the outward progression is the speed of radiation, more familiarly known as the speed of light. When considered merely as vibrating units, there is no distinction between one photon and another except in the speed of vibration, or frequency. The unit level, where speed 1/n changes to n/1, cannot be identified in any directly observable way. We will find, however, that there is a significant difference between the manner in which the photons of vibrational speed 1/n enter into combinations of motions and the corresponding behavior of photons of vibrational speed greater than unity. This difference will be examined in detail in the chapters that follow. One of the things that we can expect a correct theory of the structure of the universe to do is to clear up the discrepancies and ―paradoxes‖ of previously existing scientific thought. Here, in the explanation of the nature of radiation that emerges from the development of theory, we find this expectation dramatically fulfilled. In conventional thinking the concepts of ―wave‖ and ―particle‖ are mutually exclusive, and the empirical discovery that radiation acts in some respects as a wave phenomenon, and in other respects as an assembly of particles has confronted physical science with a very disturbing paradox. Almost at the outset of our development of the consequences of the postulates that define a universe of motion we find that in such a universe there is a very simple explanation. The photon acts as a particle in emission and absorption because it has the distinctive feature of a particle: it is a discrete unit. 
In transmission it behaves as a wave because the combination of its own inherent vibratory motion with the translatory motion of the progression of the natural reference system causes it to follow a wave-like path. In this case the problem that seemed impossible to solve while radiation was looked upon as a single entity loses all of its difficult features as
soon as it is recognized as a combination of two different things.

Another difficult problem with respect to radiation has been to explain how it can be propagated through space without some kind of a medium. This problem has never been solved other than by what was described by R. H. Dicke as a "semantic trick"; that is, assuming, entirely ad hoc, that space has the properties of a medium. One suspects that, with empty space having so many properties, all that had been accomplished in destroying the ether was a semantic trick. The ether had been renamed the vacuum.5 Einstein did not challenge this conclusion expressed by Dicke. On the contrary, "he freely admitted" not only that his theory still employed a medium, but also that this medium is indistinguishable, other than semantically, from the "ether" of previous theories. The following statements from his works are typical:

We may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an ether.37

We shall say: our space has the physical property of transmitting waves, and so omit the use of a word (ether) we have decided to avoid.38

Thus the relativity theory does not resolve the problem. There is no evidence to support Einstein's assumption that space has the properties of a medium, or that it has any physical properties at all. The fact that no method of propagating radiation through space without a medium has ever been conceived is therefore still unreconciled with the absence of any evidence of the existence of a medium.

In the theoretical universe of the Reciprocal System the problem does not arise, since the photon remains in the same absolute location in which it originates, as any object without independent motion must do. With respect to the natural reference system it does not move at all, and the movement that is observed in the context of a stationary reference system is a movement of the natural reference system relative to the stationary system, not a movement of the photon itself.

In both the propagation question and the wave-particle issue the resolution of the problem is accomplished in the same manner. Instead of explaining why the seemingly complicated phenomena are complex and perplexing, the Reciprocal System of theory removes the complexity and reduces the phenomena to simple terms. As other long-standing problems are examined in the course of the subsequent development we will find that this conceptual simplicity is a general characteristic of the new theoretical structure.
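As a purely illustrative footnote to the wave-particle discussion above, and not part of the text itself, the wave-like path of the photon can be written out parametrically. If the progression carries the photon forward at the speed of light c while its own vibration displaces it transversely with an assumed amplitude A and frequency ν (symbols introduced here only for this sketch), the path seen in a stationary reference system is

$$x(t) = ct, \qquad y(t) = A \sin(2\pi\nu t), \qquad \text{so that} \qquad y(x) = A \sin\!\left(\frac{2\pi\nu}{c}\,x\right),$$

a sine curve traced out by a discrete unit: wave-like in transmission, particle-like in emission and absorption.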

CHAPTER 5

Gravitation
Another type of motion that is permitted by the postulates, and therefore exists in the theoretical universe, is rotation. Before rotational motion can take place, however, there must exist some physical object (independent motion) that can rotate. This is purely a
matter of geometry. We are still in the stage of the development where we are dealing only with scalar motions, and a single scalar motion cannot produce the directional characteristics of rotation. Like the sine curve of the photon they require a combination of motions: a compound motion, we may say. Thus, while motion is possible without anything moving, rotation is not possible unless some physical object is available to be rotated. The photon of radiation is such an independent motion, or physical object, and it is evident, from the limitations that apply to the kinds of motion that are possible at this stage of the development, that it is the only primary unit that meets the requirement. Simple rotation is therefore rotation of the photon. In our ordinary experience rotation is a vectorial motion, and its direction (a vectorial direction) is relative to a fixed spatial system of reference. In the absence of other motion, an object rotating vectorially remains stationary in the fixed system. However, any motion of a photon is a scalar motion, inasmuch as the mechanism required for the production of vectorial motion is not yet available at this stage of the development. A scalar motion has an inherent scalar direction (inward or outward), and it is given a vectorial direction by the manner in which the scalar motion appears in the fixed coordinate system. As brought out in Chapter 4, the net scalar direction of independent motion is inward. The significance of the term ―net‖ in this statement is that a compound motion may include an outward component providing that the magnitude of the inward component of that motion is great enough to give the motion as a whole the inward direction. Since the vectorial direction that this inward motion assumes in a fixed reference system is independent of the scalar direction, the motion can take any vectorial direction that is permitted by the geometry of three-dimensional space. One such possibility is rotation. The special characteristic of rotation that distinguishes it from the simple harmonic motion previously considered is that in rotation the changes in vectorial direction are continuous and uniform, so that the motion is always forward, rather than oscillating back and forth. Consequently, there is no reason for any change in scalar direction, and the motion continues in the inward direction irrespective of the vectorial changes. Scalar rotation thus differs from inherently vectorial rotation in that it involves a translational inward movement as well as the purely rotational movement. A rolling motion is a good analogy, although the mechanism is different. The rolling motion is one motion, not a rotation and a translational motion. It is the rotation that carries the rolling object forward translationally. Similarly, the scalar rotation is only one motion, even though it has a translational effect that is absent in the case of vectorial rotation. To illustrate the essential difference between rotation and simple harmonic motion, let us return to the automobile analogy. If the car is on a very narrow road, analogous to the one-dimensional path of vibration of the photon, and it runs forward in moving north, then when it reverses its vectorial direction and moves south it also reverses its scalar direction and runs backward. But if the car is on a circular track and starts moving forward, it continues moving forward regardless of the changes in vectorial direction that are taking place. 
The vectorial direction of the inward translational movement of the rotating photon, like the vectorial direction of the non-rotating photon, is a result of viewing the motion in the
context of an arbitrary reference system, rather than an inherent property of the motion itself. It is therefore determined entirely by chance. However, the non-rotating photon remains in the same absolute location permanently, unless acted upon by some outside agency, and the direction determined at the time of emission is therefore also permanent. The rotating photon, on the other hand is continually moving from one absolute location to another as it travels back along the line of the progression of the natural reference system, and each time it enters a new absolute location the vectorial direction is redetermined by the chance process. Inasmuch as all directions are equally probable, the motion is distributed uniformly among all of them in the long run. A rotating photon therefore moves inward toward all space-time locations other than the one that it happens to occupy momentarily. Coincidentally, it continues to move outward by reason of the progression of the reference system, but the net motion of the observable aggregates of rotating photons in our immediate environment is inward. The determination of the vectorial direction corresponding to ―outward‖ automatically determines the vectorial direction of ―inward‖ in each case, inasmuch as one is the reverse of the other. Some of the readers of the first edition found the concept of ―inward motion‖ rather difficult. This was probably due to looking at the situation on the basis of a single reference point. ―Outward‖ from such a point is easily visualized, whereas ―inward‖ has no meaning under the circumstances. But the non-rotating photon does not merely move outward from the point of emission; it moves outward from all locations in the manner of a spot on the surface of an expanding balloon. Similarly, the rotating photon moves inward toward all locations in the manner of a spot on the surface of a contracting balloon. The outward motion is simply the spatial representation of an increasing scalar magnitude, whereas the inward motion is the spatial representation of a decreasing scalar magnitude. If that decreasing magnitude reaches zero, it continues as an increasing negative magnitude; that is, if the object which was moving inward toward a certain location eventually arrives at that location, it continues in motion beyond that point (providing that nothing intervenes). Since space and time locations cannot be identified by observation, neither inward nor outward motion can be recognized as such. It is possible, however, to observe the changes in relations between the moving objects and other physical structures. The photons of radiation, for instance, are observed to be moving outward from the emitting objects. Similarly, each of the rotating photons in the local environment is moving toward all other rotating photons, by reason of the inward motion in space in which all participate, and the change of relative position in space can be observed. This second class of identifiable objects in the theoretical universe thus manifests itself to observation as a number of individual units, which continually move inward toward each other. Here, again, the identification of the physical counterparts of the theoretical phenomena is a simple matter. The inward motion in all directions of space is gravitation, and the rotating photons are the physical objects that gravitate; that is, atoms and particles. Collectively, the atoms and particles constitute matter. 
As in the case of radiation, the new theoretical development leads to a very simple explanation of a hitherto unexplained phenomenon. Previous investigators in this area have arrived at a reasonably good understanding of the physical effects of gravitation, but
they are completely at sea as to how it originated, and how the apparent gravitational effect is propagated. Our finding is that these previous investigators have misunderstood the nature of the gravitational phenomenon. Except at extreme distances, each unit or aggregate of matter in the observed physical universe continually moves toward all others, unless restrained in some way. It has therefore been taken for granted that each particle of matter is exerting a force of attraction on the others. However, when we examine the characteristics of that presumed force we find that it is something of a very peculiar nature, totally unlike the forces of ordinary experience. As nearly as we can determine from observation, the gravitational ―force‖ acts instantaneously, without an intervening medium, and in such a manner that it cannot be screened off or modified in any way. These observed characteristics are so difficult to explain theoretically that the theorists have given up the search for an explanation, and are now taking the stand that the observations must, for some unknown reason, be wrong. Even though all practical gravitational calculations, including those at astronomical distances, are carried out on the basis of instantaneous action, without introducing any inconsistencies, and even though the concept of a force which is wholly dependent upon position in space being propagated through space is self-contradictory, the theorists take the stand that since they are unable to devise a theory to account for instantaneous action, the gravitational force must be propagated at a finite velocity, all evidence to the contrary notwithstanding. And even though there is not the slightest evidence of the existence of any medium in space, or the existence of any medium-like properties of space, the theorists also insist that since they are unable to devise a theory without a medium or something that has the properties of a medium, such an entity must exist, in spite of the negative evidence. There are many places in accepted scientific thought where the necessity of facing up to clear evidence from observation or experiment is avoided by the use of one or more of the evasive devices that the modern theorists have invented for the purpose, but this gravitational situation is probably the only major instance in which the empirical evidence is openly and categorically defied. While the total lack of any explanation of the gravitational phenomenon that is consistent with the observations has undoubtedly been the primary cause of the flagrantly unscientific attitude that has been taken here, an erroneous belief concerning the nature of electromagnetic radiation has been a significant contributing factor. The enormous extension of the known range of radiation frequencies in modern times has been accomplished mainly through the generation of these additional frequencies by electrical means, and it has come to be believed that there is a unique connection between radiation and electrical processes, that radiation is the carrier by means of which electrical and magnetic effects are propagated. From this it is only a short step to the conclusion that there must also be gravitational waves, carriers of gravitational energy. ―Such (gravitational) waves resemble electromagnetic waves,‖ says Joseph Weber, who has been carrying on an extensive search for these hypothetical waves for many years. 
The theoretical development in the preceding pages shows that this presumed analogy does not represent the reality of the universe of motion.

In that universe radiation and gravitation are phenomena of a totally different order. But it is worth noting that radical differences between these two types of phenomena are also apparent in the information that is available from empirical sources. That information is simply ignored in current practice because it conflicts with the popular theories of the moment.

Radiation is a process whereby energy is transferred from a material aggregate at some particular location in space (or time) to other spatial (or temporal) locations. Each photon has a definite frequency of vibration and a corresponding energy content; hence these photons are essentially traveling units of energy. The emitting agency loses a specific amount of energy whenever a photon leaves. This energy travels through the intervening space (or time) until the photon encounters a unit of matter with which it is able to interact, whereupon the energy is transferred, wholly or in part, to this matter. At either end of the path the energy is recognizable as such, and is readily interchangeable with other types of energy. The radiant energy of the impinging photon may, for instance, be converted into kinetic energy (heat), or into electrical energy (the photoelectric effect), or into chemical energy (photochemical action). Similarly, any of these other types of energy, which may exist at the point of emission of the radiation, can be converted into radiation by appropriate processes.

The gravitational situation is entirely different. Gravitational energy is not interchangeable with other forms of energy. At any specific location with respect to other masses, a mass unit possesses a definite amount of gravitational (potential) energy, and it is impossible to increase or decrease this energy content by conversion from or to other forms of energy. It is true that a change in location results in a release or absorption of energy, but the gravitational energy which the mass possesses at point A cannot be converted to any other type of energy at point A, nor can the gravitational energy at A be transferred unchanged to any other point B (except along equipotential lines). The only energy that makes its appearance in any other form at point B is that portion of the gravitational energy which the mass possessed at point A that it can no longer have at point B: a fixed amount determined entirely by the difference in location.

Radiant energy remains constant while traveling in space, but can vary almost without limit at any specific location. The behavior of gravitation is the exact opposite. The gravitational effect remains constant at any specific location, but varies if the mass moves from one location to another, unless the movement is along an equipotential line.

Energy is defined as capability of doing work. Kinetic energy, for example, qualifies under this definition, and any kind of energy that can be freely converted to kinetic energy likewise qualifies. But gravitational energy is not capable of doing work as a general proposition. It will do one thing, and that thing only: it will move masses inward toward each other. If this motion is permitted to take place, the gravitational energy decreases, and the decrement makes its appearance as kinetic energy, which can then be utilized in the normal manner. But unless gravitation is allowed to do this one thing which it is capable of doing, the gravitational energy is completely unavailable.
It cannot do anything itself, nor can it be converted to any form of energy that can do something. The mass itself can theoretically be converted to kinetic energy, but this internal energy equivalent of the mass is something totally different from the gravitational energy. It is
entirely independent of position with reference to other masses. Gravitational, or potential, energy, on the other hand, is purely energy of position; that is, for any two specific masses, the mutual potential energy is determined solely by their spatial separation. But energy of position in space cannot be propagated in space; the concept of transmitting this energy from one spatial position to another is totally incompatible with the fact that the magnitude of the energy is determined by the spatial location. Propagation of gravitation is therefore inherently impossible. The gravitational action is necessarily instantaneous as Newton's Law indicates, and as has always been assumed for purposes of calculation. It is particularly significant, therefore, that the theoretical characteristics of gravitation, as derived from the postulates of the Reciprocal System, are in full agreement with the empirical observations, strange as these observations may seem. In the theoretical universe of motion gravitation is not an action of one aggregate of matter on another, as it appears to be. It is simply an inward motion of the material units an inherent property of the atoms and particles of matter. The same motion that makes an atom an atom also causes it to gravitate. Each atom and each aggregate is pursuing its own course independently of all others, but because each unit is moving inward in space, it is moving toward all other units, and this gives the appearance of a mutual interaction. These theoretical inward motions, totally independent of each other, necessarily have just the kind of characteristics that are observed in gravitation. The change in the relative position of two objects due to the independent motions of each occurs instantaneously, and there is nothing propagated from one to the other through a medium, or in any other way. Whatever exists, or occurs, in the intervening space can have no effect on the results of the independent motions. One of the questions that is frequently asked is how this finding that the gravitational motion of each aggregate is completely independent of all others can be reconciled with the observed fact that the direction of the (apparent) mutual gravitational force between two objects changes if either object moves. On the face of it, there appears to be a necessity for some kind of an interaction. The explanation is that the gravitational motion of an object never changes, either in amount or in direction. It is always directed from the location of the gravitating unit toward all other space and time locations. But we cannot observe the motion of an object inward in space; we can only observe its motion relative to other objects whose presence we can detect. The motion of each object therefore appears to be directed toward the other objects, but, in fact, it is directed toward all locations in space and time irrespective of whether or not they happen to be occupied. Whatever changes appear to take place in the gravitational phenomena by reason of change of position of any of the gravitating masses are not changes in the gravitational motions (or forces) but changes in our ability to detect those motions. Let us assume a mass unit X occupying location a, and moving gravitationally toward locations b and c. If these locations are not occupied, then we cannot detect this motion at all. If location b is occupied by mass unit Y. 
then we see X moving toward Y; that is, we can now observe the motion of X toward location b, but its motion toward location c is still unobservable. The observable gravitational motion of Y is toward X and has the direction ba.

Now if we assume that Y moves to location c, what happens? The essence of the theory is that the motion of X is not changed at all; it is entirely independent of the position of object Y. But we are now able to observe the motion of X toward c because there is a physical object at that location, whereas we are no longer able to observe the motion of X toward location b, even though that motion exists just as definitely as before. The direction of the gravitational motion (or force) of X thus appears to have changed, but what has actually happened is that some previously unobservable motion has become observable, while some previously observable motion has become unobservable. The same is true of the motion of object Y. It now appears to be moving in the direction ca rather than in the direction ba, but here again there has been no actual change, other than the change in the position of Y. Gravitationally, Y is moving in all directions at all times, irrespective of whether or not that motion is observable.

The foregoing explanation has been presented in terms of individual mass units, rather than aggregates, as the basic question with respect to the effect of variable mass on the gravitational motion has not yet been considered. The discussion will be extended to the multiple units in the next chapter.

As emphasized in Chapter 3, the identification of a second general force, or motion, to which all matter is subject, provides the much needed "antagonist" to gravitation, and enables explaining many phenomena that have never been satisfactorily explained on the basis of only one general force. It is the interaction of these two general forces that determines the course of major physical events. The controlling factor is the distance intervening between the objects that are involved. Inasmuch as the progression of space and time is merely a manifestation of the movement of the natural reference system with respect to the conventional stationary system of reference, the space progression originates everywhere, and its magnitude is always the same, one unit of space per unit of time. Gravitation, on the other hand, originates at the specific locations which the gravitating objects happen to occupy. Its effects are therefore distributed over a volume of extension space the size of which varies with the distance from the material object. In three-dimensional space, the fraction of the inward motion directed toward a unit area at distance d from the object is inversely proportional to the total area at that distance; that is, to the surface of a sphere of radius d. The effective portion of the total inward motion is thus inversely proportional to d². This is the inverse square law to which gravitation conforms, according to empirical findings.

The net resultant of these two general motions in each specific case depends on their relative magnitudes. At the shorter distances gravitation predominates, and in the realm of ordinary experience all aggregates of matter are subject to net gravitational motions (or forces). But since the progression of the natural reference system is constant at unit speed while the opposing gravitational motion is attenuated by distance in accordance with the inverse square law, it follows that at some specific distance, the gravitational limit of the aggregate of matter under consideration, the motions reach equality. Beyond this point the net movement is outward, increasing toward the speed of light as the gravitational effect continues to decrease.
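The geometrical step in the preceding paragraph, together with the definition of the gravitational limit, can be sketched in symbols. This is only a restatement of the argument above; the constant k, standing for the as-yet-unquantified gravitational magnitude of the aggregate, is introduced purely for illustration. The fraction of the total inward motion directed through a unit area at distance d is

$$\frac{1}{4\pi d^{2}} \;\propto\; \frac{1}{d^{2}},$$

and the net outward speed, on this illustrative basis, is

$$v_{net}(d) = 1 - \frac{k}{d^{2}}, \qquad v_{net}(d_{0}) = 0 \;\Rightarrow\; d_{0} = \sqrt{k}.$$

Within d₀ the net motion is inward, gravitation predominating; beyond d₀ it is outward, approaching unit speed, the speed of light, as the distance increases. Under the same illustrative assumption, a few lines of Python show the reversal numerically; the function name and the value of k are hypothetical, chosen only to make the crossover visible:

    # Illustrative sketch only: outward progression at unit speed versus an inward
    # gravitational speed assumed to fall off as k / d**2.  The constant k and the
    # sample distances are arbitrary; d0 = sqrt(k) marks the gravitational limit.

    def net_outward_speed(d, k=100.0):
        """Net speed at distance d: unit outward progression minus k/d^2 inward."""
        return 1.0 - k / d**2

    if __name__ == "__main__":
        d0 = 100.0 ** 0.5  # gravitational limit for the assumed k = 100
        for d in (2.0, 5.0, d0, 20.0, 1000.0):
            v = net_outward_speed(d)
            trend = "inward" if v < 0 else ("balanced" if abs(v) < 1e-12 else "outward")
            print(f"d = {d:7.1f}   net speed = {v:+9.4f}   ({trend})")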
As a rough analogy, we may visualize a moving belt traveling outward from a central location and carrying an assortment of cubes and balls. The outward travel of the belt
represents the progression of the natural reference system. The cubes are analogous to the photons of radiation. Having no independent mobility of their own, they must necessarily remain permanently at whatever locations on the belt they occupy initially and they therefore move outward from the point of origin at the full speed of the belt. The balls, however, can be caused to rotate, and if the rotation is in the direction opposite to the travel of the belt and the rotational speed is high enough, the balls will move inward instead of outward. These balls represent the atoms of matter, and the inward motion opposite to the direction of the travel of the belt is analogous to gravitation. We could include the distance factor in the analogy by introducing some means of varying the speed of rotation of the balls with the distance from the central point. Under this arrangement the closer balls would still move inward, but at some point farther out there would be an equilibrium, and beyond this point the balls would move outward. The analogy is incomplete, particularly in that the mechanism whereby the rotation of the balls causes them to move inward translationally is not the same as that which causes the inward motion of the atoms. Nevertheless, it does show quite clearly that under appropriate conditions a rotational motion can cause a translational displacement, and it gives us a good picture of the general relations between the progression of the natural reference system, gravitational motion, and the travel of the photons of radiation. All aggregates of matter smaller than the largest existing units are under the gravitational control of larger aggregates; that is, they are within the gravitational limits of those larger units. Consequently, they are not able to continue the outward movement that would take place in the absence of the larger bodies. The largest existing aggregates are not limited in this manner, and on the basis of the principles that have been stated, any two such aggregates that are outside their mutual gravitational limits recede from each other at speeds increasing with the distance. In the observed physical universe, the largest aggregates of matter are galaxies. According to the foregoing theoretical findings, the distant galaxies should be receding from the earth at extremely high speeds increasing with distance up to the speed of light, which will be reached where the gravitational effect is reduced to a negligible level. Until quite recently, this theoretical conclusion would have been received with extreme skepticism, as it conflicts with what was then the accepted thinking, and there was no way in which it could be subjected to a test. But recent astronomical advances have changed this situation. Present-day instruments are able to reach out to distances so great that the effect of gravitation is minimal, and the observations with this improved equipment show that the galaxies are behaving in exactly the manner predicted by the new theory. In the meantime, however, the astronomers have been trying to account for this galactic recession in some manner consistent with present astronomical views, and they have devised an explanation in which they assume, entirely ad hoc, that there was an enormous explosion at some singular point in the past history of the universe which hurled the galaxies out into space at their present fantastically high speeds. 
If one were to be called upon to decide which is the better explanation–one which rests upon an ad hoc assumption of an event far out of the range of known physical phenomena, or one which
finds the recession to be an immediate and direct consequence of the fundamental nature of the universe–there can hardly be any question as to the decision. But, in reality, this question does not arise, as the case in favor of the theory of a universe of motion is not based on the contention that it provides better explanations of physical phenomena, a contention that would have to depend, in most instances, on conformity with nonscientific criteria, but on the objective and genuinely scientific contention that it is a fully integrated system of theory which is not inconsistent with any established fact in any physical area. Another significant effect of the existence of a gravitational limit, within which there is a net inward motion, and outside of which there is a net outward progression, is that it reconciles the seemingly uniform distribution of matter in the universe with Newton's Law of Gravitation and Euclidean geometry. One of the strong arguments that has been advanced against the existence of a gravitational force of the inverse square type operating in a Euclidean universe is that on such a basis, ―The stellar universe ought to be a finite island in the infinite ocean of space,‖39 as Einstein stated the case. Observations indicate that there is no such concentration. So far as we can tell, the galaxies are distributed uniformly, or nearly uniformly, throughout the immense region now accessible to observation, and this is currently taken as a definite indication that the geometry of the universe is non-Euclidean. From the points brought out in the preceding pages, it is now clear that the flaw in this argument is that it rests on the assumption that there is a net gravitational force effective throughout space. Our findings are that this assumption is incorrect, and that there is a net gravitational force only within the gravitational limit of the particular mass under consideration. On this basis it is only the matter within the gravitational limit that should agglomerate into a single unit, and this is exactly what occurs. Each major galaxy is a ―finite island in the ocean of space,‖ within its gravitational limit. The existing situation is thus entirely consistent with inverse square gravitation operating in a Euclidean universe, as the Reciprocal System requires. The atoms, particles, and larger aggregates of matter within the gravitational limit of each galaxy constitute a gravitationally bound system. Each of these constituent units is subject to the same two general forces as the galaxies, but in addition is subject to the (apparent) gravitational attraction of neighboring masses, and that of the entire mass within the gravitational limits acting as a whole. Under the combined influence of all of these forces, each aggregate assumes an equilibrium position in the three-dimensional reference system that we are calling extension space, or a net motion capable of representation in that system. So far as the bound system is concerned, the coordinate reference system, extension space, is the equivalent of Newton's absolute space. It can be generalized to include other gravitationally bound systems by taking into account the relative motion of the systems. Any or all of the aggregates or individual units that constitute a gravitationally bound system may acquire motions relative to the fixed reference system. 
Since these motions are relative to the defined spatial coordinate system, the direction of motion in each instance is an inherent property of the motion, rather than being merely a matter of chance, as in the case of the coordinate representation of the scalar motions. These
motions with inherent vectorial directions are vectorial motions: the motions of our ordinary experience. They are so familiar that it is customary to generalize their characteristics, and to assume that these are the characteristics of all motion. Inasmuch as these familiar vectorial motions have inherent directions, and are always motions of something, it is taken for granted that these are essential features of motions; that all motions must necessarily have these same properties. But our investigation of the fundamental properties of motion reveals that this assumption is in error. Motion, as it exists in a universe composed entirely of motion, is merely a relation between space and time, and in its simpler forms it is not motion of anything, nor does it have an inherent direction. Vectorial motion is a special kind of motion; a phenomenon of the gravitationally bound systems. The net resultant of the scalar motions of any object–the progression of the reference system and the various gravitational motions–has a vectorial direction when viewed in the context of a stationary reference system, even though that direction is not an inherent property of the motion. The observed motion of such an object, which is the net resultant of all of its motions, both scalar and vectorial, thus appears to be simply a vectorial motion, and is so interpreted in current practice. One of the prerequisites for a clear understanding of basic physical phenomena is a recognition of the composite nature of the observed motions. It is not possible to get a true picture of activity in a gravitationally bound system unless it is realized that an object such as a photon or a neutrino which is traveling at the speed of light with respect to the conventional frame of reference does so because it has no capability of independent motion at all, and is at rest in its own natural system of reference. Similarly, the behavior of atoms of matter can be clearly understood only in the light of a realization that they are motionless, or moving at low speeds, relative to the conventional reference system because they possess inherent motions at high speed which counterbalance the motion of the natural reference system that would otherwise carry them outward at the speed of the photon or the neutrino. It is also essential to recognize that the scalar motion of the photons can be accommodated within the spatial reference system only by the use of multiple reference points. Photons are continually being emitted from matter by a process that we will not be prepared to discuss until a later stage of the theoretical development. The motion of the photons emitted from any material object is outward from that object, not from the instantaneous position in some reference system which that object happens to occupy at the moment of emission. As brought out in Chapter 3, the extension space of our ordinary experience is ―absolute space,‖ for vectorial motion and for scalar motion viewed from one point of reference. But every other reference point has its own ―absolute space,‖ and there is no criterion by which one of these can be singled out as more basic than another. Thus the location at which a photon originates cannot be placed in the context of any general reference system for scalar motion. 
That location itself is the reference point for the photon emission, and if we choose to view it in relation to some reference system with respect to which it is moving, then that relative motion, whatever it may be, is a component of the motion of the emitted photons. Looking at the situation from the standpoint of the photon, we may say that at the moment of emission this photon is participating in all of the motions of the emitting
object, the outward progression of the natural reference system, the inward motion of gravitation, and all of the vectorial motions to which the material object is subject. No mechanism exists whereby the photon can eliminate any of these motions, and the outward motion of the absolute location of the emission, to which the photon becomes subject on separation from the material unit, is superimposed on the previously existing motions. This, again, means that the emitting object defines the reference point for the motion of the photon. In a gravitationally bound system each aggregate and individual unit of matter is the center of a sphere of radiation.

This point has been a source of difficulty for some readers of the first edition, and further consideration by means of a specific example is probably in order. Let us take some location A as a reference point. All photons originating from a physical object at A move outward at unit speed in the manner portrayed by the balloon analogy. Gravitating objects move inward in opposition to the progression, and can therefore be represented by positions somewhere along the lines of the outward movement. Here, then, we have the kind of a situation that most persons are looking for: something that they can visualize in the context of the familiar fixed spatial coordinate system. But now let us take a look at one of these gravitating objects, which we will call B. For convenience, let us assume that B is moving gravitationally with respect to A at a rate which is just equal to the outward progression of the natural reference system, so that B remains stationary with respect to object A in the fixed reference system. This is the condition that prevails at the gravitational limit.

What happens to the photons emitted from B? If the expanding system centered at A is conceived as a universal system of reference, as so many readers have evidently taken it to be, then these photons must be detached from B in a manner which will enable them to be carried along by progression in a direction outward from A. But the natural reference system moves outward from all locations; it moves outward from B in exactly the same manner as it does from A. There is no way in which one can be assigned any status different from that of the other. The photons originating at B therefore move outward from B, not from A. This would make no difference if B were itself moving outward from A at unit speed, as in that case outward from B would also be outward from A, but where B is stationary with respect to A in a fixed coordinate system, the only way in which the motions of the photons can be represented in that system is by means of two separate reference points. Thus there is a sphere of radiation centered at A, and another sphere centered at B. Where the spheres overlap, the photons may make contact, even though all are moving outward from their respective points of origin.

It has been suggested that the theoretical conclusion that the unit outward motion of the photon is added to the motion imparted to the photon by the emitting object conflicts with the empirically established principle that the speed of radiation is independent of the speed of the source, but this is not true. The explanation lies in some aspects of the measurement of speed that have not been recognized. This matter will be discussed in detail in Chapter 7.
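Returning for a moment to the two-sphere picture above, a one-line check may be helpful, under the purely illustrative assumptions that A and B remain a fixed distance D apart and that photons leave both at the same instant. After a time t each sphere of radiation has radius ct, so the spheres first overlap, and contact between photons first becomes possible, when

$$2ct \ge D, \qquad \text{that is,} \qquad t \ge \frac{D}{2c}.$$

Nothing in this requires a single universal expansion center; each sphere is referred to its own point of origin.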

CHAPTER 6

The Reciprocal Relation
Inasmuch as the fundamental postulates define a universe composed entirely of units of motion, and define space and time in terms of that motion, these postulates preclude space and time from having any significance other than that which they have in motion, and at the same time require that they always have that significance; that is, throughout the universe space and time are reciprocally related. This general reciprocal relation that necessarily exists in a universe composed entirely of motion has a far-reaching and decisive effect on physical structures and processes. In recognition of its crucial role, the name ―Reciprocal‖ has been applied to the system of theory based on the ―motion‖ concept of the nature of the universe. The reason for calling it a ―system of theory‖ rather than merely a ―theory‖ is that its subdivisions are coextensive with other physical theories. One of these subdivisions covers the same ground as relativity, another parallels the nuclear theory of the atom, still another deals with the same physical area as the kinetic theory, and so on. It is appropriate, therefore, to call these subdivisions ―theories,‖ and to refer to the entire new theoretical structure as the Reciprocal System of theory, even though it is actually a single fully integrated entity. The reciprocal postulate provides a good example of the manner in which a change in the basic concept of the nature of the universe alters the way in which we apprehend specific physical phenomena. In the context of a universe of matter existing in a space-time framework, the idea of space as the reciprocal of time is simply preposterous, too absurd to be given serious consideration. Most of those who encounter the idea of ―the reciprocal of space‖ for the first time find it wholly inconceivable. But these persons are not taking the postulates of the new theory at their face value, and recognizing that the assertion that ―space is an aspect of motion‖ means exactly what it says. They are accustomed to regarding space as some kind of a container, and they are interpreting this assertion as if it said that ―container space is an aspect of motion,‖ thus inserting their own concept of space into a statement which rejects all such previous ideas and defines a new and different concept. The result of mixing such incongruous and conflicting concepts cannot be otherwise than meaningless. When the new ideas are viewed in the proper context, the strangeness disappears. In a universe in which everything that exists is a form of motion, and the magnitude of that motion, measured as speed or velocity, is the only significant physical quantity, the existence of the reciprocal relation is practically self-evident. Motion is defined as the relation of space to time. Its mathematical expression is the quotient of the two quantities. An increase in space therefore has exactly the same effect on the speed, the mathematical measure of the motion, as a decrease in time, and vice versa. In comparing one airplane with another, it makes no difference whether we say that plane A travels twice as far in the same time, or that it travels a certain distance in half the time. Inasmuch as the postulates deal with space and time in precisely the same manner, aside from the reciprocal relation between the two, the behavior characteristics of the two
entities, or properties, as they are called, are identical. This statement may seem incredible at first sight, as space and time manifest themselves to our observation in very different guises. We know time only as a progression, a continual moving forward, whereas space appears to us as an entity that "stays put." But when we subject the apparent differences to a critical examination, they fail to stand up under the scrutiny.

The most conspicuous property of space is that it is three-dimensional. On the other hand, it is generally believed that the observational evidence shows time to be one-dimensional. We have a subjective impression of a unidirectional "flow" of time from the past, to the present, and on into the future. The mathematical representation of time in the equations of motion seems to confirm this view, inasmuch as the quantity t in v = s/t and related equations is scalar, not vectorial, as v and s are, or can be.

Notwithstanding its general and unquestioning acceptance, this conclusion as to the one-dimensionality of time is entirely unjustified. The point that is being overlooked is that "direction," in the context of the physical processes which are represented by vectorial equations in present-day physics, always means "direction in space." In the equation v = s/t, for example, the spatial displacement s is a vector quantity because it has a direction in space. It follows that the velocity v also has a direction in space, and thus what we have here is a space velocity equation. In this equation the term t is necessarily scalar because it has no direction in space. It is quite true that this result would automatically follow if time were one-dimensional, but the one-dimensionality is by no means a necessary condition. Quite the contrary, time is scalar in this space velocity equation (and in all of the other familiar vectorial equations of modern physics; equations that are vectorial because they involve direction in space) irrespective of its dimensions, because no matter how many dimensions it may have, time has no direction in space.

If time is multi-dimensional, as our theoretical development finds it to be, then it has a property that corresponds to the spatial property that we call "direction." But whatever we call this temporal property, whether we call it "direction in time," as we are doing for reasons previously explained, or give it some altogether different name, it is a temporal property, not a spatial property, and it does not give time any direction in space. Regardless of its dimensions, time cannot be a vector quantity in any equation such as those of present-day physics, in which the property which qualifies a quantity as vectorial is that of having a direction in space.

The existing confusion in this area is no doubt due, at least in part, to the fact that the terms "dimension" and "dimensional" are currently used with two different meanings. We speak of space as three-dimensional, and we also speak of a cube as three-dimensional. In the first of these expressions we mean that space has a certain property that we designate as dimensionality, and that the magnitude applying to this property is three. In other words, our statement means that there are three dimensions of space. But when we say that a cube is three-dimensional, the significance of the statement is quite different. Here we do not mean that there are three dimensions of "cubism," or whatever we may call it. We mean that the cube exists in space and extends into three dimensions of that space.

There is a rather general tendency to interpret any postulate of multi-dimensional time in this latter significance; that is, to take it as meaning that time extends into n dimensions of space, or some kind of a quasi-space. But this is a concept that makes little sense under any conditions, and it certainly is not the meaning of the term "three-dimensional time" as used in this work. When we here speak of time as three-dimensional we will be employing the term in the same significance as when we speak of space as three-dimensional; that is, we mean that time has a property, which we call dimensionality, and the magnitude of that property is three. Here, again, we mean that there are three dimensions of the property in question: three dimensions of time. There is nothing in the role which time plays in the equations of motion in space to indicate specifically that it has more than one dimension. But a careful consideration along the lines indicated in the foregoing paragraphs does show that the present-day assumption that we know time to be one-dimensional is completely unfounded. Thus there is no empirical evidence that is inconsistent with the assertion of the Reciprocal System that time is three-dimensional.

Perhaps it might be well to point out that the additional dimensions of time have no metaphysical significance. The postulates of a universe of motion define a purely physical universe, and all of the entities and phenomena of that universe, as determined by a development of the necessary consequences of the postulates, are purely physical. The three dimensions of time have the same physical significance as the three dimensions of space.

As soon as we take into account the effect of gravitation on the motion of material aggregates, the second of the observed differences, the progression of time, which contrasts sharply with the apparent immobility of extension space, is likewise seen to be a consequence of the conditions of observation, rather than an indication of any actual dissimilarity. The behavior of those objects that are partially free from the gravitational attraction of our galaxy, the very distant galaxies, shows conclusively that the immobility of extension space, as we observe it, is not a reflection of an inherent property of space in general, but is a result of the fact that in the region accessible to detailed observation gravitation moves objects toward each other, offsetting the effects of the outward progression. The pattern of the recession of the distant galaxies demonstrates that when the gravitational effect is eliminated there is a progression of space similar to the observed progression of time. Just as "now" continually moves forward relative to any initial point in the temporal reference system, so "here," in the absence of gravitation, continually moves forward relative to any initial point in the spatial reference system.

Little additional information about either space or time is available from empirical sources. The only items on which there is general agreement are that space is homogeneous and isotropic, and that time progresses uniformly. Other properties that are sometimes attributed to either time or space are merely assumptions or hypotheses. Infinite extent or infinite divisibility, for instance, are hypothetical, not the results of observation.
Likewise, the assertions as to spatial and temporal properties that are made in the relativity theories are, as Einstein says, ―free inventions of the human mind,‖ not items that have been derived from experience.

In testing the validity of the conclusion that all properties of either space or time are properties of both space and time, such assumptions and hypotheses must be disregarded, since it is only conflicts with definitely established facts that are conclusive. The significance of a conflict with a questionable assertion cannot be other than questionable. "Homogeneous" with respect to space is equivalent to "uniform" with respect to time, and because the observations thus far available tell us nothing at all about the dimensions of time, there is nothing in these observations that is inconsistent with the assertion that time, like space, is isotropic. In spite of the general belief, among scientists and laymen alike, that there is a great difference between space and time, any critical examination along the foregoing lines shows that the apparent differences are not real, and that there is actually no observational evidence that is inconsistent with the theoretical conclusion that the properties of space and of time are identical.

As brought out in Chapter 4, deviations from unit speed, the basic one-to-one space-time ratio, are accomplished by means of reversals of the direction of the progression of either space or time. As a result of these reversals, one component traverses the same path in the reference system repeatedly, while the other component continues progressing unidirectionally in the normal manner. Thus the deviation from the normal rate of progression may take place either in space or in time, but not in both coincidentally. The space-time ratio, or speed, is either 1/n (less than unity, the speed of light) or n/1 (greater than unity).

Inasmuch as everything physical in a universe of motion is a motion–that is, a relation between space and time, measured as speed–and, as we have just seen, the properties of space and those of time are identical, aside from the reciprocal relationship, it follows that every physical entity or phenomenon has a reciprocal. There exists another entity or phenomenon that is an exact duplicate, except that space and time are interchanged. For example, let us consider an object rotating with speed 1/n and moving translationally with speed 1/n. The reciprocal relation tells us that there must necessarily exist, somewhere in the universe, an object identical in all respects, except that its rotational and translational speeds are both n/1 instead of 1/n. In addition to the complete inversions, there are also structures of an intermediate type in which one or more components of a complex combination of motions are inverted, while the remaining components are unchanged. In the example under consideration, the translational speed may become n/1 while the rotational speed remains at 1/n, or vice versa. Once the normal (1/n) combination has been identified, it follows that both the completely inverted (n/1) combination and the various intermediate structures exist in the appropriate environment. The general nature of that environment in each case is also indicated, inasmuch as change of position in time cannot be represented in a spatial reference system, and each of these speed combinations has some special characteristics when viewed in relation to the conventional reference systems. The various physical entities and phenomena that involve motion of these several inverse types will be examined at appropriate points in the pages that follow.
The essential point that needs to be recognized at this time, because of its relevance to the subject matter now under consideration, is the existence of inverse forms of all of the normal (1/n) motions and combinations of motions.
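As a concrete tally of the forms just described, and nothing more, the following few lines of Python simply enumerate the direct, intermediate, and completely inverted versions of the two-component example used above; the program and its labels are illustrative additions and encode no physics from the text:

    from itertools import product

    # Each component of the example combination (rotation, translation) is either
    # "1/n" (speed less than unity) or "n/1" (inverse speed, greater than unity).
    SPEED_FORMS = ("1/n", "n/1")

    for rotation, translation in product(SPEED_FORMS, repeat=2):
        inverted = [name for name, form in (("rotation", rotation), ("translation", translation))
                    if form == "n/1"]
        kind = {0: "direct (normal)", 1: "intermediate", 2: "completely inverted"}[len(inverted)]
        print(f"rotation {rotation}, translation {translation}  ->  {kind}")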

This is a far-reaching discovery of great significance. In fact the new and more accurate picture of the physical universe that is derived from the ―motion‖ concept differs from previous ideas mainly by reason of the widening of our horizons that results from recognition of the inverse phenomena. Our direct physical contacts are limited to phenomena of the same type as those that enter into our own physical makeup: the direct phenomena, we may call them, although the distinction between direct and inverse is merely a matter of the way in which we see them, not anything that is inherent in the phenomena themselves. In recent years the development of powerful and sophisticated instruments has enabled us to penetrate areas that are far beyond the range of our unaided senses, and in these new areas the relatively simple and understandable relations that govern events within our normal experience are no longer valid. Newton's laws of motion, which are so dependable in everyday life, break down in application to motion at speeds approaching that of light; events at the atomic level resist all attempts at explanation by means of established physical principles, and so on. The scientific reaction to this state of affairs has been to conclude that the relatively simple and straightforward physical laws that have been found to apply to events within our ordinary experience are not universally valid, but are merely approximations to some more complex relations of general applicability. The simplicity of Newton's laws of motion, for instance, is explained on the ground that some of the terms of the more complicated general law are reduced to negligible values at low velocities, and may therefore be disregarded in application to the phenomena of everyday life. Development of the consequences of the postulates of the Reciprocal System arrives at a totally different answer. We find that the inverse phenomena that necessarily exist in a universe of motion play no significant role in the events of our everyday experience, but as we extend our observations into the realms of the very large, the very small, and the very fast, we move into the range in which these inverse phenomena replace or modify those which we, from our particular position in the universe, regard as the direct phenomena. On this basis, the difficulties that have been experienced in attempting to use the established physical laws and relations of the world of ordinary experience in the far-out regions are very simply explained. These laws and relations apply specifically to the world of immediate sense perception, phenomena of the direct space-time orientation, and they fail in application to any situation in which the events under consideration involve phenomena of the inverse type in any significant degree. They do not fail because they are wrong, or because they are incomplete; they fail because they are misapplied. No law–physical or otherwise-can be expected to produce the correct results in an area to which it has no relevance. The inverse phenomena are governed by laws distinct from, although related to, those of the direct phenomena, and where those phenomena exist they can be understood and successfully handled only by using the laws and relations of the inverse sector. This explains the ability of the Reciprocal System of theory to deal successfully with the recently discovered phenomena of the far-out regions, which have been so resistant to previous theoretical treatment. 
It is now apparent that the unfamiliar and surprising aspects of these phenomena are not due to aspects of the normal physical relations that come into play only under extreme conditions, as previous theorists have assumed; they are due to the total or partial replacement of the phenomena of the direct type by the related, but different phenomena of the inverse type. In order to obtain the correct answers to problems in these remote areas, the unfamiliar phenomena that are involved must be viewed in their true light as the inverse of the phenomena of the directly observable region, not in the customary way as extensions of those direct phenomena into the regions under consideration. By identifying and utilizing this correct treatment the Reciprocal System is not only able to arrive at the right answers in the far-out areas, but to accomplish this task without disturbing the previously established laws and principles that apply to the phenomena of the direct type.

In order to keep the explanation of the basic elements of the theory as simple and understandable as possible, the previous discussion has been limited to what we have called the direct view of the universe, in which space is the more familiar of the two basic entities, and plays the leading role. At this time it is necessary to recognize that because of the general nature of the reciprocal relation between space and time every statement that has been made with respect to space in the preceding chapters is equally applicable to time in the appropriate context. As we have seen in the case of space and time individually, however, the way in which the inverse phenomenon manifests itself to our observation may be quite different from the way in which we see its direct counterpart. Locations in time cannot be represented in a spatial reference system, but, with the same limitations that apply to the representation of spatial locations, they can be represented in a stationary three-dimensional temporal reference system analogous to the three-dimensional spatial reference system that we call extension space.

Since neither space nor time exists independently, every physical entity (a motion or a combination of motions) occupies both a space location and a time location. The location as a whole, the location in the physical universe, we may say, can therefore be completely defined only in terms of two reference systems. In the context of a stationary spatial reference system the motion of an absolute location, a location in the natural reference system, as indicated by observation of an object without independent motion, such as a photon or a galaxy at the observational limit, is linearly outward. Similarly, the motion of an absolute location with respect to a stationary temporal reference system is linearly outward in time. Inasmuch as the gravitational motion of ordinary matter is effective in space only, the atoms and particles of this matter, which are stationary with respect to the spatial reference system, or moving only at low velocities, remain in the same absolute locations in time indefinitely, unless subjected to some external force. Their motion in three-dimensional time is therefore linearly outward at unit speed, and the time location that we observe, the time registered on a clock, is not the location in any temporal reference system, but simply the stage of progression. Since the progression of the natural reference system proceeds at unit speed, always and everywhere, clock time, if properly measured, is the same everywhere.
As we will see later in the development, the current hypotheses which require repudiation of the existence of absolute time and the concept of simultaneity of distant events are erroneous products of reasoning from premises in which clock time is incorrectly identified as time in general.

The best way to get a clear picture of the relation of clock time to time in general is to consider the analogous situation in space. Let us assume that a photon A is emitted from some material object X in the direction Y. This photon then travels at unit speed in a straight line XY which can be represented in the conventional fixed spatial reference system. The line of progression of time has the same relation to time in general (three-dimensional time) as the line XY has to space in general (three-dimensional space). It is a one-dimensional line of travel in a three-dimensional continuum; not something separate and distinct from that continuum, but a specific part of it. Now let us further assume that we have a device whereby we can measure the rate of increase of the spatial distance XA, and let us call this device a "space clock." Inasmuch as all photons travel at the same speed, this one space clock will suffice for the measurement of the distance traversed by any photon, irrespective of its location or direction of movement, as long as we are interested only in the scalar magnitude. But this measurement is valid only for objects such as photons, which travel at unit speed. If we introduce an object which travels at some speed other than unity, the measurement that we get from the space clock will not correctly represent the space traversed by that object. Nor will the space clock registration be valid for the relative separation of moving objects, even if they are traveling at unit speed. In order to arrive at the true amount of space entering into such motions we must either measure that space individually, or we must apply an appropriate correction to the measurement by the space clock.

Because objects at rest in the stationary spatial reference system, or moving at low velocities with respect to it, are moving at unit speed relative to any stationary temporal reference system, a clock which measures the time progression in any one process provides an accurate measurement of the time elapsed in any low-speed physical process, just as the space clock in our analogy measured the space traversed by any photon. Here, again, however, if an object moves at a speed, or a relative speed, differing from unity, so that its movement in time is not the same as that of the progression of the natural reference system, then the clock time does not correctly represent the actual time involved in the motion under consideration. As in the analogy, the true quantity, the net total time, must be obtained either by a separate measurement (which is usually impractical) or by determining the magnitude of the adjustment that must be applied to the clock time to convert it to total time. In application to motion in space, the total time, like the clock registration, is a scalar quantity.

Some readers of the previous edition have found it difficult to accept the idea that time can be three-dimensional because this makes time a vector quantity, and presumably leads to situations in which we are called upon to divide one vector quantity by another. But such situations are non-existent. If we are dealing with spatial relations, time is scalar because it has no spatial direction. If we are dealing with temporal relations, space is scalar because it has no temporal direction. Either space or time can be vectorial in appropriate circumstances.
However, as explained earlier in this chapter, the deviation from the normal scalar progression at unit speed may take place either in space or in time, but not in both coincidentally. Consequently, there is no physical situation in which both space and time are vectorial.

Similarly, scalar rotation and its gravitational (translational) effect take place either in space or in time, but not in both. If the speed of the rotation is less than unity, time continues progressing at the normal unit rate, but because of the directional changes during rotation space progresses only one unit while time is progressing n units. Thus the change in position relative to the natural unit datum, both in the rotation and in its gravitational effect, takes place in space. Conversely, if the speed of the rotation is greater than unity, the rotation and its gravitational effect take place in time. An important result of the fact that rotation at greater-than-unit speeds produces an inward motion (gravitation) in time is that a rotational motion or combination of motions with a net speed greater than unity cannot exist in a spatial reference system for more than one (dimensionally variable) unit of time. As pointed out in Chapter 3, the spatial systems of reference, to which the human race is limited because it is subject to gravitation in space, are not capable of representing deviations from the normal rate of time progression. In certain special situations, to be considered later, in which the normal direction of vectorial motion is reversed, the change of position in time manifests itself as a distortion of the spatial position. Otherwise, an object moving normally with a speed greater than unity is coincident with the reference system for only one unit of time. During the next unit, while the spatial reference system is moving outward in time at the unit rate of the normal progression, gravitation is carrying the rotating unit inward in time. It therefore moves away from the reference system and disappears. This point will be very significant in our consideration of the high-speed rotational systems in Chapter 15. Recognition of the fact that each effective unit of rotational motion (mass) occupies a location in time as well as a location in space now enables us to determine the effect of mass concentration on the gravitational motion. Because of the continuation of the progression of time while gravitation is moving the atoms of matter inward in space, the aggregates of matter that are eventually formed in space consist of a large number of mass units that are contiguous in space, but widely dispersed in time. One of the results of this situation is that the magnitude of the gravitational motion (or force) is a function not only of the distance between objects, but also of the effective number of units of rotational motion, measured as mass, that each object possesses. This motion is distributed over all space-time directions, rather than merely over all space directions, and since an aggregate of n mass units occupies n time locations, the total number of space-time locations is also n, even though all mass units of each object are nearly coincident spatially. The total gravitational motion of any mass unit toward that aggregate is thus n times that toward a single mass unit at the same distance. It then follows that the gravitational motion (or force) is proportional to the product of the (apparently) interacting masses. It can now be seen that the comments in Chapter 5, with respect to the apparent change of direction of the gravitational motions (or forces) when the apparently interacting masses change their relative positions are applicable to multi-unit aggregates as well as to the individual mass units considered in the original discussion. 
The gravitational motion always takes place toward all space-time locations whether or not those locations are occupied by objects that enable us to detect the motion.
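The chain of reasoning in the preceding paragraph can be restated compactly. The symbols below (n1 and n2 for the numbers of effective mass units in two aggregates, d for their separation, g(d) for the gravitational motion between two single mass units at that separation, and M for the total mutual motion) are introduced here only for illustration; they are not part of the original text:

    \[
      M_{1 \to n_2} = n_2\, g(d), \qquad
      M_{n_1 \to n_2} = n_1\, n_2\, g(d) \;\propto\; m_1 m_2 .
    \]

That is, the total gravitational motion (or force) between the two aggregates is proportional to the product of the masses, as stated above; the dependence on the distance d is a separate matter, not at issue in this paragraph.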

A point that should be noted in this connection is that two objects are in effective contact if they occupy adjoining locations in either space or time, regardless of the extent of their separation in the other aspect of motion. This statement may seem to conflict with the empirical observation that contact can be made only if the two objects are in the same place at the same clock time. However, the inability to make contact when the objects reach a common spatial location in a fixed reference system at different clock times is not due to the lack of coincidence in time, but to the progression of space that takes place in connection with the progression of time which is registered by the clock. Because of this space progression, the location that has the same spatial coordinates in the stationary reference system is not the same spatial location that it was at an earlier time. Scientific history shows that physical problems of long standing are usually the result of errors in the prevailing basic concepts, and that significant conceptual modifications are a prerequisite for their solution. We will find, as we proceed with the theoretical development, that the reciprocal relation between space and time which necessarily exists in a universe of motion is just the kind of a conceptual alteration that is needed to clear up the existing physical situation: one which makes drastic changes where such changes are required, but leaves the empirically determined relations of our everyday experience essentially untouched.

CHAPTER 7

High Speed Motion
As brought out in Chapter 3, the "space" of our ordinary experience, extension space, as we have called it, is simply a reference system, and it has no real physical significance. But the relationships that are represented in this reference system do have physical meaning. For example, if the distance between object A and object B in extension space is x, then if A moves a distance x in the direction AB while B remains stationary with respect to the reference system, the two objects will come in contact. The contact has observable physical results, and the fact that it occurs at the coordinate position reached by object A after a movement defined in terms of the coordinates from a specific initial position in the coordinate system demonstrates that the relation represented by the difference between coordinates has a definite physical meaning. Einstein calls this a "metrical" meaning; that is, a connection between the coordinate differences and "measurable lengths and times." To most of those who have not made any critical study of the logical basis of so-called "modern physics" it probably seems obvious that this kind of a meaning exists, and it is safe to say that comparatively few of those who now accept Einstein's relativity theory because it is the orthodox doctrine in its field realize that his theory denies the existence of such a meaning. But any analysis of the logical structure of the theory will show that this is true, and Einstein's own statement on the subject, previously quoted, leaves no doubt on this score.

This is a prime example of a strange feature of the present situation in science. The members of the scientific community have accepted the basic theories of "modern physics" as correct, and are quick to do battle on their behalf if they are challenged, yet at the same time the majority are totally unwilling to accept some of the aspects of those theories that the originators of the theories claim are essential features of the theoretical structures. How many of the supporters of modern atomic theory, for example, are willing to accept Heisenberg's assertion that atoms do not "exist objectively in the same sense as stones or trees exist"?40 Probably about as many as are willing to accept Einstein's assertion that coordinate differences have no metrical meaning.

At any rate, the present general acceptance of the relativity theory as a whole, regardless of the widespread disagreement with some of its component parts, makes it advisable to point out just where the conclusions reached in this area by development of the consequences of the postulates of the Reciprocal System differ from the assertions of relativity theory. This chapter will therefore be devoted to a consideration of the status of the relativity concept, including the extent to which the new findings are in agreement with it. Chapter 8 will then present the full explanation of motion at high speeds, as it is derived from the new theoretical development. It is worth noting in this connection that Einstein himself was aware of "the eternally problematical character" of his concepts, and in undertaking the critical examination of his theory in this chapter we are following his own recommendation, expressed in these words: In the interests of science it is necessary over and over again to engage in the critique of these fundamental concepts, in order that they may not unconsciously rule us. This becomes evident especially in those situations involving development of ideas in which the consistent use of the traditional fundamental concepts leads us to paradoxes difficult to resolve.41

In spite of all of the confusion and controversy that have surrounded the subject, the factors that are involved are essentially simple, and they can be brought out clearly by consideration of a correspondingly simple situation, which, for convenient reference, we will call the "two-photon case." Let us assume that a photon X originates at location O in a fixed reference system, and moves linearly in space at unit velocity, the velocity of light (as all photons do). In one unit of time it will have reached point x in the coordinate system, one spatial unit distant from O. This is a simple matter of fact that results entirely from the behavior of photon X, and is totally independent of what may be done by or to any other object. Similarly, if another photon Y leaves point O simultaneously with X, and travels at the same velocity, but in the opposite direction, this photon will reach point y, one unit of space distant from O, at the end of one unit of elapsed time. This, too, is entirely a matter of the behavior of the moving photon Y, and is independent of what happens to photon X or to any other physical object. At the end of one unit of time, as currently measured, X and Y are thus separated by two units of space (distance) in the coordinate system of reference.

In current practice some repetitive physical process measures time. This process, or the device in which it takes place, is called a clock. The progression of time thus measured is the standard time magnitude which, on the basis of current understanding, enters into physical relations. Speed, or velocity, the measure of motion, is defined as distance (space) per unit time.
In terms of the accepted reference systems, this means distance between coordinate locations divided by clock registration. In the two-photon case, the increase in coordinate separation during the one unit of elapsed time is two units of space.
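Stated as a worked computation, in the natural units used here (so that the speed of light is one unit of space per unit of time), the conventional procedure gives:

    \[
      v_{rel} \;=\; \frac{\Delta x}{\Delta t_{clock}}
              \;=\; \frac{2 \text{ units of space}}{1 \text{ unit of clock time}}
              \;=\; 2 \quad (\text{i.e., } 2c).
    \]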

The relative velocity of the two photons, determined in the standard manner, is then two natural units; that is, twice the velocity of light, the velocity at which each of the two objects is moving. In 1887, an experiment by Michelson and Morley compared the velocity of light traveling over round trip paths in different directions relative to the direction of the earth’s motion. The investigators found no difference in the velocities, although the accuracy of the experiment was far greater than would be required to reveal the expected difference had it been present. This experiment, together with others, which have confirmed the original findings, makes it necessary to conclude that the velocity of light in a vacuum is constant irrespective of the reference system. The determination of velocity in the standard manner, dividing distance traveled by elapsed time, therefore arrives at the wrong answer at high velocities. As expressed by Capek, the initial impact of this discovery was ―shattering.‖ It seemed to undermine the whole structure of theoretical knowledge that had been erected by centuries of effort. The following statement by Sir James Jeans, written only a few decades after the event, shows what a blow it was to the physicists of that day: For more than two centuries this system of laws (Newton's) was believed to give a perfectly consistent and exact description of the processes of nature. Then, as the nineteenth century was approaching its close, certain experiments, commencing with the famous Michelson-Morley experiment, showed that the whole scheme was meaningless and self-contradictory.42 After a quarter of a century of confusion, Albert Einstein published his special theory of relativity, which proposed a theoretical explanation of the discrepancy. Contradictions and uncertainties have surrounded this theory from its inception, and there has been continued controversy over its interpretation in specific applications, and over the nature and adequacy of the various explanations that have been offered in attempts to resolve the ―paradoxes‖ and other inconsistencies. But the mathematical successes of the theory have been impressive, and even though the mathematics antedated the theory, and are not uniquely connected with it, these mathematical successes, in conjunction with the absence of any serious competitor, and the strong desire of the physicists to have something to work with, have been sufficient to secure general acceptance. Now that a new theory has appeared, however, the defects in the relativity theory acquire a new significance, as the arguments which justify using a theory in spite of contradictions and inconsistencies if it is the only thing that is available are no longer valid when a new theory free from such defects makes its appearance. In making the more rigorous appraisal of the theory that is now required, it should be recognized at the outset that a theory is not valid unless it is correct both mathematically and conceptually. Mathematical evidence alone is not sufficient, as mathematical agreement is no guarantee of conceptual validity. What this means is that if we devise a theoretical explanation of a certain physical phenomenon, and then formulate a mathematical expression to represent the relations pictured by the theory, or do the same thing in reverse manner, first formulating the
mathematical expression on an empirical basis, and then finding an explanation that fits it, the mere fact that this mathematical expression yields results that agree with the corresponding experimental values does not assure us that the theoretical explanation is correct, even if the agreement is complete and exact. As a matter of principle, this statement is not even open to question, yet in a surprisingly large number of instances in current practice, including the relativity theory, mathematical agreement is accepted as complete proof.

Most of the defects of the relativity theory as a conceptual scheme have been explored in depth in the literature. A comprehensive review of the situation at this time is therefore unnecessary, but it will be appropriate to examine one of the long-standing "paradoxes" which is sufficient in itself to prove that the theory is conceptually incorrect. Naturally, the adherents of the theory have done their best to "resolve" the paradox, and save the theory, and in their desperate efforts they have managed to muddy the waters to such an extent that the conclusive nature of the case against the theory is not generally recognized. The significance of this kind of a discrepancy lies in the fact that when a theory makes certain assertions of a general nature, if any one case can be found where these assertions are not valid, this invalidates the generality of the assertions, and thus invalidates the theory as a whole.

The inconsistency of this nature that we will consider here is what is known as the "clock paradox." It is frequently confused with the "twin paradox," in which one of a set of twins stays home while the other goes on a long journey at a very high speed. According to the theory, time progresses more slowly for the traveling twin, and he returns home still a young man, while his brother has reached old age. The clock paradox, which replaces the twins with two identical clocks, is somewhat simpler, as it evades the question as to the relation between clock registration and physical processes. In the usual statement of the paradox, it is assumed that a clock B is accelerated relative to another identical clock A, and that subsequently, after a period of time at a constant relative velocity, the acceleration is reversed, and the clocks return to their original locations. According to the principles of special relativity, clock B, the moving clock, has been running more slowly than clock A, the stationary clock, and hence the time interval registered by B is less than that registered by A. But the special theory also tells us that we cannot distinguish between the motion of clock B relative to clock A and the motion of clock A relative to clock B. Thus it is equally correct to say that A is the moving clock and B is the stationary clock, in which case the interval registered by clock A is less than that registered by clock B. Each clock therefore registers both more and less than the other.

Here we have a situation in which a straightforward application of the special relativity theory leads to a conclusion that is manifestly absurd. This paradox, which stands squarely in the way of any claim that relativity theory is conceptually valid, has never been resolved except by means which contradict the basic assumptions of the relativity theory itself. Richard Schlegel brings this fact out very clearly in a discussion of the paradox in his book Time and the Physical World.
―Acceptance of a preferred coordinate system‖ is necessary in order to resolve the contradiction, he points out, but ―such an assumption brings a profound modification to special relativity theory; for the assumption
contradicts the principle that between any two relatively moving systems the effects of motion are the same, from either system to the other.‖43 G. J. Whitrow summarizes the situation in this way: ―The crucial argument of those who support Einstein (in the clock paradox controversy) automatically undermines Einstein's own position.‖44 The theory based primarily on the postulate that all motion is relative contains an internal contradiction which cannot be removed except by some argument relying on the assumption that some motion is not relative. All of the efforts that have been made by the professional relativists to explain away this paradox depend, directly or indirectly, on abandoning the general applicability of the relativity principle, and identifying the acceleration of clock B as something more than an acceleration relative to clock A. Moller, for example, tells us that the acceleration of clock B is ―relative to the fixed stars.‖45 Authors such as Tolman, who speaks of the ―lack of symmetry between the treatment given to the clock A, which was at no time subjected to any force, and that given to clock B which was subjected to . . . forces . . . when the relative motion of the clocks was changed,‖46 are simply saying the same thing in a more roundabout way. But if motion is purely relative, as the special theory contends, then a force applied to clock B cannot produce anything more than a relative motion–it cannot produce a kind of motion that does not exist–and the effect on clock A must therefore be the same as that on clock B. Introduction of a preferred coordinate system such as that defined by the average positions of the fixed stars gets around this difficulty, but only at the cost of destroying the foundations of the theory, as the special theory is built on the postulate that no such preferred coordinate system exists. The impossibility of resolving the contradiction inherent in the clock paradox by appeal to acceleration can be demonstrated in yet another way, as the acceleration can be eliminated without altering the contradiction that constitutes the paradox. No exhaustive search has been made to ascertain whether this streamlined version, which we may call the ―simplified clock paradox‖ has been given any consideration previously, but at any rate it does not appear in the most accessible discussions of the subject. This is quite surprising, as it is a rather obvious way of tightening the paradox to the point where there is little, if any, room for an attempt at evasion. In this simplified clock paradox we will merely assume that the two clocks are in uniform motion relative to each other. The question as to how this motion originated does not enter into the situation. Perhaps they have always been in relative motion. Or, if they were accelerated, they may have been accelerated equally. At any rate, for purposes of the inquiry, we are dealing only with the clocks in uniform relative motion. But here again, we encounter the same paradox. According to the relativity theory, each clock can be regarded either as stationary, in which case it is the faster, or as moving, in which case it is the slower. Again each clock registers both more and less than the other. There are those who claim that the paradox has been resolved experimentally. 
In the published report of one recent experiment bearing on the subject the flat assertion is made that ―These results provide an unambiguous empirical resolution of the famous clock paradox.‖47 This claim is, in itself, a good illustration of the lack of precision in current thinking in this area, as the clock paradox is a logical contradiction. It refers to a specific situation in which a strict application of the special theory results in an absurdity.

Obviously, a logical inconsistency cannot be ―resolved‖ by empirical means. What the investigators have accomplished in this instance is simply to provide a further verification of some of the mathematical aspects of the theory, which play no part in the clock paradox. This one clearly established logical inconsistency is sufficient in itself, even without the many items of evidence available for corroboration, to show that the special theory of relativity is incorrect in at least some significant segment of its conceptual aspects. It may be a useful theory; it may be a ―good‖ theory from some viewpoint; it may indeed have been the best theory available prior to the development of the Reciprocal System, but this inconsistency demonstrates conclusively that it is not the correct theory. The question then arises: In the face of these facts, why are present-day scientists so thoroughly convinced of the validity of the special theory? Why do front-rank scientists make categorical assertions such as the following from Heisenberg? The theory . . . has meanwhile become an axiomatic foundation of all modern physics, confirmed by a large number of experiments. It has become a permanent property of exact science just as has classical mechanics or the theory of heat.48 The answer to our question can be extracted from this quotation. ―The theory,‖ says Heisenberg, has been ―confirmed by a large number of experiments.‖ But these experiments have confirmed only the mathematical aspects of the theory. They tell us only that special relativity is mathematically correct, and that it therefore could be valid. The almost indecent haste to proclaim the validity of theories on the strength of mathematical confirmation alone is one of the excesses of modern scientific practice which, like the over-indulgence in ad hoc assumptions, has covered up the errors introduced by the concept of a universe of matter, and has prevented recognition of the need for a basic change. Like any other theory, special relativity cannot be confirmed as a theory unless its conceptual aspects are validated. Indeed, the conceptual aspects are the theory itself, as the mathematics, which are embodied in the Lorentz equations, were in existence before Einstein formulated the theory. However, establishment of conceptual validity is much more difficult than confirmation of mathematical validity, and it is virtually impossible in a limited field such as that covered by relativity because there is too much opportunity for alternatives that are mathematically equivalent. It is attainable only where collateral information is available from many sources so that the alternatives can be excluded. Furthermore, consideration of the known alternatives is not conclusive. There is a general tendency to assume that where no satisfactory alternatives have thus far been found, there is no acceptable alternative. This gives rise to a great many erroneous assertions that are given credence because they are modeled after valid mathematical statements, and have a superficial air of authenticity. For example, let us consider the following two statements: A. As a mathematical problem there is virtually only one possible solution (the Lorentz transformation) if the velocity of light is to be the same for all. (Sir George Thomson) 49

B. There was and there is now no understanding of it (the Michelson-Morley experiment) except through giving up the idea of absolute time and of absolute length and making the two interdependent concepts. (R. A. Millikan)50

The logical structure of both of these statements (including the implied assertions) is the same, and can be expressed as follows:

1. A solution for the problem under consideration has been obtained.
2. Long and intensive study has failed to produce any alternative solution.
3. The original solution must therefore be correct.

In the case of statement A, this logic is irrefutable. It would, in fact, be valid even without any such search for alternatives. Since the original solution yields the correct answers, any other valid solution would necessarily have to be mathematically equivalent to the first, and from a mathematical standpoint equivalent statements are merely different ways of expressing the same thing. As soon as we obtain a mathematically correct answer to a problem, we have the mathematically correct answer. Statement B is an application of the same logic to a conceptual rather than a mathematical solution, but here the logic is completely invalid, as in this case alternative solutions are different solutions, not merely different ways of expressing the same solution. Finding an explanation which fits the observed facts does not, in this case, guarantee that we have the correct explanation. We must have additional confirmation from other sources before conceptual validity can be established. Furthermore, the need for this additional evidence still exists as strongly as ever even if the theory in question is the best explanation that science has thus far been able to devise, as it is, or at least should be, obvious that we can never be sure that we have exhausted the possible alternatives.

The theorists do not like to admit this. When they have devoted long years to the study and investigation of a problem, and the situation still remains as described by Millikan–that is, only one explanation judged to be reasonably acceptable has been found–there is a strong temptation to assume that no other possible explanation exists, and to regard the available theory as necessarily correct, even where, as in the case of the special theory of relativity, there may be specific evidence to the contrary. Otherwise, if they do not make such an assumption, they must admit, tacitly if not explicitly, that their abilities have thus far been unequal to the task of finding the alternatives. Few human beings, in or out of the scientific field, relish making this kind of an admission. Here, then, is the reason why the serious shortcomings of the special theory are currently looked upon so charitably. Nothing more acceptable has been available (although there are alternatives to Einstein's interpretation of the Lorentz equations that are equally consistent with the available information), and the physicists are not willing to concede that they could have overlooked the correct answer. But the facts are clear. No new valid conceptual information has been added to the previously existing body of knowledge by the special theory. It is nothing more than an erroneous hypothesis: a conspicuous addition to the historical record cited by Jeans:

The history of theoretical physics is a record of the clothing of mathematical formulae, which were right, or very nearly right, with physical interpretations, which were often very badly wrong.51 ‖As an emergency measure,‖ say Toulmin and Goodfield, ―physicists have resorted to mathematical fudges of an arbitrary kind.‖52 Here is the truth of the matter. The Lorentz equations are simply fudge factors: mathematical devices for reconciling discordant results. In the two-photon case that we are considering, if the speed of light is constant irrespective of the reference system, as established empirically by the Michelson-Morley experiment, then the speed of photon X relative to photon Y is unity. But when this speed is measured in the standard way (assuming that this might be physically possible), dividing the coordinate distance xy by the elapsed clock time, the relative speed is two natural units (2c in the conventional system of units) rather than one unit. Here, then, is a glaring discrepancy. Two different measurements of what is apparently the same thing, the relative speed, give us altogether different results. Both the nature of the problem and the nature of the mathematical answer provided by the Lorentz equations can be brought out clearly by consideration of a simple analogy. Let us assume a situation in which the property of direction exists, but is not recognized. Then let us assume that two independent methods are available for measuring motion, one of which measures the speed, and the other measures the rate at which the distance from a specified reference point is changing. In the absence of any recognition of the existence of direction, it will be presumed that both methods measure the same quantity, and the difference between the results will constitute an unexpected and unexplained discrepancy, similar to that brought to light by the Michelson-Morley experiment. An analogy is not an accurate representation. If it were, it would not be an analogy. But to the extent that the analogy parallels the phenomenon under consideration it provides an insight into aspects of the phenomenon that cannot, in many cases, be directly apprehended. In the circumstances of the analogy, it is evident that a fudge factor applicable to the general situation is impossible, but that under some special conditions, such as uniform linear motion following a course at a constant angle to the line of reference, the mathematical relation between the two measurements is constant. A fudge factor embodying this constant relation, the cosine of the angle of deviation, would therefore bring the discordant measurements into mathematical coincidence. It is also evident that we can apply the fudge factor anywhere in the mathematical relation. We can say that measurement 1 understates the true magnitude by this amount, or that measurement 2 overstates it by the same amount, or we can divide the discrepancy between the two in some proportion, or we can say that there is some unknown factor that affects one and not the other. Any of these explanations is mathematically correct, and if a theory based on any one of them is proposed, it will be ―confirmed‖ by experiment in the same manner that special relativity and many other products of present-day physics are currently being ―confirmed.‖ But only the last alternative listed is conceptually correct. This is the only one that describes the situation as it actually exists. 
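A minimal numerical sketch of this analogy (not part of the original text; the function names and values are illustrative only). It takes the simplest case in which the moving object's course keeps a constant angle theta to the line of reference, so that the two measurements differ by the constant factor cos(theta):

    import math

    def measured_speed(v):
        # Method 1: measures the speed itself.
        return v

    def radial_rate(v, theta):
        # Method 2: measures the rate at which the distance from the
        # reference point changes; for a course at a constant angle
        # theta to the line of reference this is v * cos(theta).
        return v * math.cos(theta)

    v = 10.0                     # arbitrary speed, arbitrary units
    theta = math.radians(60.0)   # constant angle of deviation

    m1 = measured_speed(v)       # 10.0
    m2 = radial_rate(v, theta)   # 5.0  (an apparent discrepancy)

    # Because theta is constant, one constant "fudge factor" reconciles
    # the two measurements; it does not tell us which measurement (if
    # either) represents the quantity we believe we are measuring.
    fudge = math.cos(theta)
    print(math.isclose(m1 * fudge, m2))   # True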
When we compare these results of the assumptions made for purposes of the analogy with the observed physical situation in high-speed motion we find a complete
correspondence. Here, too, mathematical coincidence can be attained by a set of fudge factors, the Lorentz equations, in a special set of circumstances only. As in the analogy, such fudge factors are applicable only where the motion is constant both in speed and in direction. They apply only to uniform translational motion. This close parallel between the observed physical situation and the analogy strongly suggests that the underlying cause of the measurement discrepancy is the same in both cases; that in the physical universe, as well as under the circumstances assumed for purposes of the analogy, one of the factors that enters into the measurement of the magnitudes involved has not been taken into consideration. This is exactly the answer to the problem that emerges from the development of the Reciprocal System of theory. We find from this theory that the conventional stationary three-dimensional spatial frame of reference correctly represents locations in extension space, and that, contrary to Einstein's assertion, the distance between coordinates in this reference system correctly represents the spatial magnitudes entering into the equations of motion. However, this theoretical development also reveals that time magnitudes in general can only be represented by a similar three-dimensional frame of reference, and that the time registered on a clock is merely the one-dimensional path of the time progression in this three-dimensional reference frame. Inasmuch as gravitation operates in space in our material sector of the universe, the progression of time continues unchecked, and the change of position in time represented by the clock registration is a component of the time magnitude of any motion. In everyday life, no other component of any consequence is present, and for most purposes the clock registration can be taken as a measurement of the total time involved in a motion. But where another significant component is present, we are confronted with the same kind of a situation that was portrayed by the analogy. In uniform translational motion the mathematical relation between the clock time and the total time is a constant function of the speed, and it is therefore possible to formulate a fudge factor that will take care of the discrepancy. In the general situation where there is no such constant relationship, this is not possible, and the Lorentz equations cannot be extended to motion in general. Correct results in the general situation can be obtained only if the true scalar magnitude of the time that is involved is substituted for clock time in the equations of motion. This explanation should enable a clear understanding of the position of the Reciprocal System with respect to the validity of the Lorentz equations. Inasmuch as no method of measuring total time is currently available, there is a substantial amount of convenience in being able to arrive at the correct numerical results in certain applications by using a mathematical fudge factor. In so doing, we are making use of an incorrect magnitude that we are able to measure in lieu of the correct magnitude that we cannot measure. The Reciprocal System agrees that when we need to use fudge factors in this manner, the Lorentz equations are the correct fudge factors for the purpose. 
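For reference, the equations in question are the standard Lorentz transformation, quoted here in conventional notation (this is textbook material, not something derived in the present work):

    \[
      x' = \gamma\,(x - vt), \qquad
      t' = \gamma\left(t - \frac{v\,x}{c^{2}}\right), \qquad
      \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}\, .
    \]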
These equations simply accomplish a mathematical reconciliation of the equations of motion with the constant speed of light, and since this constant speed, which was accepted by Lorentz as an empirically established fact, is deduced from the postulates of the Reciprocal System, the mathematical treatment is based on the same premises in both cases, and necessarily
arrives at the same results. To this extent, therefore, the new system of theory is in accord with current thinking. As P. W. Bridgman once pointed out, many physicists regard "the content of the special theory of relativity as coextensive with the content of the Lorentz equations."53 K. Feyerabend gives us a similar report: It must be admitted, however, that contemporary physicists hardly ever use Einstein's original interpretation of the special theory of relativity. For them the theory of relativity consists of two elements: (1) The Lorentz transformations; and (2) mass-energy equivalence.54

For those who share this view, the results obtained from the Reciprocal System of theory in this area make no change at all in the existing physical picture. These individuals should find it easy to accommodate themselves to the new viewpoint. Those who still take their stand with Einstein will have to face the fact that the new results show, just as the clock paradox does, that Einstein's interpretation of the mathematics of high speed motion is incorrect. Indeed, the mere appearance of a new and different explanation of a rational character is a crushing blow to the relativity theory, as the case in its favor is argued very largely on the basis that there is no such alternative. As Einstein says, "if the velocity of light is the same in all C.S. (coordinate systems), then moving rods must change their length, moving clocks must change their rhythm . . . there is no other way."55 The statement by Millikan quoted earlier is equally positive on this score. The status of an assertion of this kind, a contention that there is no alternative to a given conclusion, is always precarious, because, unlike most propositions based on other grounds, which can be supported even in the face of some adverse evidence, this contention that there is no alternative is immediately and utterly demolished when an alternative is produced. Furthermore, the use of the "no alternative" argument constitutes a tacit admission that there is something dubious about the explanation that is being offered; something that would preclude its acceptance if there were any reasonable alternative.

Einstein's contribution, in the form of the special theory, can be accurately evaluated only if it is realized that this, too, is a fudge, a conceptual fudge, we might call it. As he explains in the statement that has been our principal target in this chapter, what he has done is to eliminate the "metrical meaning" of spatial coordinates; that is, he takes care of the discrepancy between the two measurements by arbitrarily decreeing that one of them shall be disregarded. This may have served a certain purpose in the past by enabling the scientific community to avoid the embarrassment of having to admit inability to find any explanation for the high speed discrepancy, but the time has now come to look at the situation squarely and to recognize that the relativity concept is erroneous.

It is not always appreciated that the mathematical fudge accomplished by the use of the Lorentz equations works in both directions. If the velocity is not directly determined by the change in coordinate position during a given time interval, it follows that the change in coordinate position is not directly determined by the velocity. Recognition of this point will clear up any question as to a possible conflict between the conclusions of Chapter 5 and the constant speed of light.

In closing this discussion of the high speed problem, it is appropriate to point out that the identification of the missing factor in the motion equations, the additional time component that becomes significant at high speeds, does not merely provide a new and better explanation of the existing discrepancy. It eliminates that discrepancy, restoring the "metrical meaning" of the coordinate distances in a way that makes them entirely consistent with the constant speed of light.

CHAPTER 8

Motion in Time
The starting point for an examination of the nature of motion in time is a recognition of the status of unit speed as the natural datum, the zero level of physical activity. We are able to deal with speeds measured from some arbitrary zero in our everyday life because these are not primary quantities; they are merely speed differences. For example, where the speed limit is 50 miles per hour, this does not mean that an automobile is prohibited from moving at any faster rate. It merely means that the difference between the speed of the vehicle and the speed of the portion of the earth's surface over which the vehicle is traveling must not exceed 50 miles per hour. The car and the earth's surface are jointly moving at higher speeds in several different directions, but these are of no concern to us for ordinary purposes. We deal only with the differences, and the datum from which measurement is made has no special significance. In current practice we regard a greater rate of change in vehicle location relative to the local frame of reference as being the result of a greater speed, that quantity being measured from zero. We could equally well measure from some arbitrary non-zero level, as we do in the common systems of temperature measurement, or we could even measure the inverse of speed from some selected datum level, and attribute the greater rate of change of position to less "inverse speed."

In dealing with the basic phenomena of the universe, however, we are dealing with absolute speeds, not merely speed differences, and for this purpose it is necessary to recognize that the datum level of the natural system of reference is unity, not zero. Since motion exists only in units, according to the postulates that define a universe of motion, and each unit of motion consists of one unit of space in association with one unit of time, all motion takes place at unit speed, from the standpoint of the individual units. This speed may, however, be either positive or negative, and by a sequence of reversals of the progression of either time or space, while the other component continues progressing unidirectionally, an effective scalar speed of 1/n, or n/1, is produced. In Chapter 4 we considered the case in which the vectorial direction of the motion reversed at each end of a one-unit path, the result being a vibrational motion. Alternatively, the vectorial direction may reverse in unison with the scalar direction. In this case space (or time) progresses one unit in the context of a fixed reference system while time (or space) progresses n units. Here the result is a translatory motion at a speed of 1/n (or n/1) units.
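A minimal sketch of how such a pattern of reversals yields a net speed of 1/n, using a simple encoding (+1 for an inward unit, -1 for an outward unit) that is introduced here only for illustration:

    # One three-unit sequence of the space progression for a speed of 1/3:
    # inward, outward, inward (time progresses one unit at every step).
    sequence = [+1, -1, +1]

    repeats = 4                       # any number of repeated sequences
    steps = sequence * repeats

    net_space = sum(steps)            # net change of position in space
    elapsed_time = len(steps)         # one unit of time per unit of motion

    print(net_space, elapsed_time)    # 4 12
    print(net_space / elapsed_time)   # 0.333...  i.e., an effective speed of 1/3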

The scalar situation is the same in both cases. A regular pattern of reversals results in a space-time ratio of 1/n or n/1. In the example shown in the tabulation in Chapter 4, where the space-time ratio is 1/3, there is a one-unit inward motion followed by an outward unit and a second inward unit. The net inward motion in the three-unit sequence is one unit. A continuous succession of similar 3-unit sequences then follows. As indicated in the accompanying tabulation, the scalar direction

                                 DIRECTION

  Unit Number      1         2         3         4         5         6

  Vibratory
    Scalar       inward    outward   inward    inward    outward   inward
    Vectorial    right     left      right     left      right     left

  Translation
    Scalar       inward    outward   inward    inward    outward   inward
    Vectorial    forward   backward  forward   forward   backward  forward

of the last unit of each sequence is inward. (A sequence involving an even number of n alternates n - 1 and n + 1. For instance, instead of two four-unit sequences, in which the last unit of each sequence would be outward, there is a three-unit sequence and a five-unit sequence.) The scalar direction of the first unit of each new sequence is also inward. Thus there is no reversal of scalar direction at the point where the new sequence begins. In the vibrational situation the vectorial direction continues the regular succession of reversals even at the points where the scalar direction does not reverse, but in the translational situation the reversals of vectorial direction conform to those of the scalar direction. Consequently, the path of vibration remains in a fixed location in the dimension of the oscillation, whereas the path of translation moves forward at the scalar space-time ratio 1/n (or n/1). This is the pattern followed by certain scalar motions that will be discussed later and by all vectorial motions: motions of material units and aggregates. When the progression within a unit of motion reaches the end of the unit it either reverses or does not reverse. There is no intermediate possibility. It follows that what appears to be a continuous unidirectional motion at speed 1/n is, in fact, an intermittent motion in which space progresses at the normal rate of one unit of space per unit of time for a fraction 1/n of the total number of space units involved, and has a net resultant of zero, in the context of the fixed reference system, during the remainder of the motion. If the speed is 1/n–one unit of space per n units of time–space progresses only one unit instead of the n units it would progress unidirectionally. The result of motion at the 1/n speed is therefore to cause a change of spatial position relative to the location that would have been reached at the normal rate of progression. Motion at less than unit speed, then, is motion in space. This is a well-known fact. But because of the uncritical acceptance of Einstein’s dictum that speeds in excess of that of light are impossible, and a failure to recognize the reciprocal relation between space and time, it has not heretofore been realized that the inverse of this kind of motion is also a physical reality. Where the speed is n/1, there is a reversal of the time component that results in a change of position in time relative to that which would take place at the normal rate of time progression, the elapsed
time registered on a clock. Motion at speeds greater than unity is therefore motion in time. The existence of motion in time is one of the most significant consequences of the status of the physical universe as a universe of motion. Conventional physical science, which recognizes only motion in space, has been able to deal reasonably well with those phenomena that involve spatial motion only. But it has not been able to clarify the physical fundamentals, a task for which an understanding of the role of time is essential, and it is encountering a growing number of problems as observation and experiment are extended into the areas where motion in time is an important factor. Furthermore, the number and scope of these problems have been greatly increased by the use of zero speed, rather than unit speed, as the reference datum for measurement purposes. While motion at speeds of 1/n (speeds less than unity) is motion in space only, when viewed relative to the natural (moving) reference system, it is motion in both space and time relative to the conventional systems that utilize the zero datum.

It should be understood that the motions we are now discussing are independent motions (physical phenomena), not the fictitious motion introduced by the use of a stationary reference system. The term "progression" is here being utilized merely to emphasize the continuing nature of these motions, and their space and time aspects. During the one unit of motion (progression) at the normal unit speed that occurs periodically when the average speed is 1/n, the spatial component of this motion, which is an inherent property of the motion independent of the progression of the natural reference system, is accompanied by a similar progression of time that is likewise independent of the progression of the reference system, the time aspect of which is measured by a clock. Thus, during every unit of clock time, the independent motion at speed 1/n involves a change of position in three-dimensional time amounting to 1/n units. As brought out in the preliminary discussion of this subject in Chapter 6, the value of n at the speeds of our ordinary experience is so large that the quantity 1/n is negligible, and the clock time can be taken as equivalent to the total time involved in motion. At higher speeds, however, the value of 1/n becomes significant, and the total time involved in motion at these high speeds includes this additional component. It is this heretofore unrecognized time component that is responsible for the discrepancies that present-day science tries to handle by means of fudge factors.

In the two-photon case considered in Chapter 7, the value of 1/n is 1/1 for both photons. A unit of the motion of photon X involves one unit of space and one unit of time. The time involved in this unit of motion (the time OX) can be measured by means of the registration on a clock, which is merely the temporal equivalent of a yardstick. The same clock can also be used to measure the time magnitude involved in the motion of photon Y (the time OY), but this use of the same temporal "yardstick" does not mean that the time interval OY through which Y moves is the same interval through which X moves, the interval OX, any more than using the same yardstick to measure the space traversed by Y would make it the same space that is traversed by X.
The truth is that at the end of one unit of the time involved in the progression of the natural reference system (also measured by a clock), X and Y are separated by two units of total time (the time OX and the time OY), as well as by two units of space (distance). The relative speed is the
increase in spatial separation, two units, divided by the increase in temporal separation, two units, or 2/2 = 1. If an object with a lower speed v is substituted for one of the photons, so that the separation in space at the end of one unit of clock time is 1 + v instead of 2, the separation in time is also 1 + v and the relative speed is (1+ v)/(1 + v) = 1. Any process that measures the true speed rather than the space traversed during a given interval of standard clock time (the time of the progression of the natural reference system) thus arrives at unity for the speed of light irrespective of the system of reference. When the correct time magnitudes are introduced into the equations of motion there is no longer any need for fudge factors. The measured coordinate differences and the measured constant speed of light are then fully compatible, and there is no need to deprive the spatial coordinates of their ―metrical meaning.‖ Unfortunately, however, no means of measuring total time, except in certain special applications, are available at present. Perhaps some feasible method of measurement may be developed in the future, but in the meantime it will be necessary to continue on the present basis of applying a correction to the clock registration, in those areas where this is feasible. Under these circumstances we can consider that we are using correction factors instead of fudge factors. There is no longer an unexplained discrepancy that needs to be fudged out of existence. What we now find is that our calculations involve a time component that we are unable to measure. In lieu of the measurements that we are unable to make, we find it possible, in certain special cases, to apply correction factors that compensate for the difference between clock time and total time. A full explanation of the derivation of these correction factors, the Lorentz equations, is available in the scientific literature, and will not be repeated here. This conforms with a general policy that will be followed throughout this work. As explained in Chapter 1, most existing physical theories have been constructed by building up from empirical foundations. The Reciprocal System of theory is constructed in the opposite manner. While the empirically based theories start with the observed details and work toward the general principles, the Reciprocal System starts with a set of general postulates and works toward the details. At some point each of the branches of the theoretical development will meet the corresponding element of empirical theory. Where this occurs in the course of the present work, and there is agreement, as there is in the case of the Lorentz equations, the task of this presentation is complete. No purpose would be served by duplicating material that is already available in full detail. Most of the other well-established relationships of physical science are similarly incorporated into the new system of theory, with or without minor modifications, as the development of the theoretical structure proceeds, not because of the weight of observational evidence supporting these relations, or because anyone happens to approve of them, or because they have previously been accepted by the scientific world, but because the conclusions expressed by these relations are the same conclusions that are reached by development of the new theoretical system. 
After such a relation has thus been taken into the system, it is, of course, part of the system, and can be used in the same manner as any other part of the theoretical structure.

The existence of speeds greater than unity (the speed of light), the speeds that result in change of position in time, conflicts with current scientific opinion, which accepts Einstein’s conclusion that the speed of light is an absolute limit that cannot be exceeded. Our development shows, however, that at one point where Einstein had to make an arbitrary choice between alternatives, he made the wrong choice, and the speed limitation was introduced through this error. It does not exist in fact. Like the special theory of relativity, the theory from which the speed limitation is derived is an attempt to provide an explanation for an empirical observation. According to Newton's second law of motion, which can be expressed as a = F/m, if a constant force is applied to the acceleration of a constant mass it should produce an acceleration that is also constant. But a series of experiments showed that where a presumably constant electrical force is applied to a light particle, such as an electron, in such a manner that very high speeds are produced, the acceleration does not remain constant, but decreases at a rate which indicates that it would reach zero at the speed of light. The true relation, according to the experimental results, is not Newton's law, a = F/m, but a = √(1 - v²/c²) F/m. In the system of notation used in this work, which utilizes natural, rather than arbitrary, units of measurement, the speed of light, designated as c in current practice, is unity, and the variable speed (or velocity), v, is expressed in terms of this natural unit. On this basis the empirically derived equation becomes a = √(1 - v²) F/m. There is nothing in the data derived from experiment to tell us the meaning of the term 1 - v² in this expression; whether the force decreases at higher speeds, or the mass increases, or whether the velocity term represents the effect of some factor not related to either force or mass. Einstein apparently considered only the first two of these alternatives. While it is difficult to reconstruct the pattern of his thinking, it appears that he assumed that the effective force would decrease only if the electric charges that produced the force decreased in magnitude. Since all electric charges are alike, so far as we know, whereas the primary mass concentrations seem to be extremely variable, he chose the mass alternative as being the most likely, and assumed for purposes of his theory that the mass increases with the velocity at the rate indicated by the experiments. On this basis, the mass becomes infinite at the speed of light. The results obtained from development of the consequences of the postulates of the Reciprocal System now show that Einstein guessed wrong. The new information developed theoretically (which will be discussed in detail later) reveals that an electric charge is inherently incapable of producing a speed in excess of unity, and that the decrease in the acceleration at high speeds is actually due to a decrease in the force exerted by the charges, not to any change in the magnitude of either the mass or the charge. As explained earlier, force is merely a concept by which we visualize the resultant of oppositely directed motions as a conflict of tendencies to cause motion rather than as a conflict of the motions themselves.
This method of approach facilitates mathematical treatment of the subject, and is unquestionably a convenience, but whenever a physical situation is represented by some derived concept of this kind there is always a hazard that the correspondence may not be complete, and that the conclusions reached through the
medium of the derived concept may therefore be in error. This is what has happened in the case we are now considering. If the assumption that a force applied to the acceleration of a mass remains constant in the absence of any external influences is viewed only from the standpoint of the force concept, it appears entirely logical. It seems quite reasonable that a tendency to cause motion would remain constant unless subjected to some kind of a modifier. But when we look at the situation in its true light as a combination of motions, rather than through the medium of an artificial representation by means of the force concept, it is immediately apparent that there is no such thing as a constant force. Any force must decrease as the speed of the motion from which it originates is approached. The progression of the natural reference system, for instance, is motion at unit speed. It therefore exerts unit force. If the force–that is, the effect–of the progression is applied to overcoming a resistance to motion (the inertia of a mass) it will ultimately bring the mass up to the speed of the progression itself: unit speed. But a tendency to impart unit speed to an object that is already moving at high speed is not equivalent to a tendency to impart unit speed to a body at rest. In the limiting condition, where an object is already moving at unit speed, the force due to the progression of the reference system has no effect at all, and its magnitude is zero. Thus, the full effect of any force is attained only when the force is exerted on a body at rest, and the effective component in application to an object in motion is a function of the difference between the speed of that object and the speed that manifests itself as a force. The specific form of the mathematical function, √(1 - v²) rather than merely 1 - v, is related to some of the properties of compound motions that will be discussed later. Ordinary terrestrial speeds are so low that the corresponding reduction in the effective force is negligible, and at these speeds forces can be considered constant. As the speed of the moving object increases, the effective force decreases, approaching a limit of zero when the object is moving at the speed corresponding to the applied force–unity in the case of the progression of the natural reference system. As we will find in a later stage of the development, an electric charge is inherently a motion at unit speed, like the gravitational motion and the progression of the natural reference system, and it, too, exerts zero force on an object moving at unit speed. As an analogy, we may consider the case of a container full of water, which is started spinning rapidly. The movement of the container walls exerts a force tending to give the liquid a rotational motion, and under the influence of this force the water gradually acquires a rotational speed. But as that speed approaches the speed of the container the effect of the ―constant‖ force drops off, and the container speed constitutes a limit beyond which the water speed cannot be raised by this means. The force vanishes, we may say. But the fact that we cannot accelerate the liquid any farther by this means does not bar us from giving it a higher speed in some other way. The limitation is on the capability of the process, not on the speed at which the water can rotate. The mathematics of the equation of motion applicable to the acceleration phenomenon remain the same in the Reciprocal System as in Einstein's theory.
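As a numerical illustration of this point, the following minimal sketch (in Python, using natural units in which c = 1; the function names and the chosen speeds are illustrative and not part of the text) evaluates the empirical relation quoted earlier, a = √(1 - v²) F/m, and shows that reading the √(1 - v²) factor as an increase in mass or as a decrease in effective force yields identical accelerations:

```python
import math

def effective_force_reading(force, mass, v):
    # Force decreases with speed: a = (F * sqrt(1 - v^2)) / m
    return force * math.sqrt(1.0 - v**2) / mass

def mass_increase_reading(force, rest_mass, v):
    # Mass increases with speed: a = F / (m / sqrt(1 - v^2))
    return force / (rest_mass / math.sqrt(1.0 - v**2))

for v in (0.0, 0.5, 0.9, 0.99):
    a1 = effective_force_reading(1.0, 1.0, v)
    a2 = mass_increase_reading(1.0, 1.0, v)
    print(f"v = {v:4.2f}   a = {a1:.4f}   (both readings agree: {math.isclose(a1, a2)})")
```

In either reading the computed acceleration falls toward zero as v approaches unity.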
It makes no difference mathematically whether the mass is increased by a given amount, or the effective force is decreased by the same amount. The effect on the observed quantity, the acceleration, is
identical. The wealth of experimental evidence that demonstrates the validity of these mathematics therefore confirms the results derived from the Reciprocal System to exactly the same degree that it confirms Einstein’s theory. All that this evidence does in either case is to show that the theory is mathematically correct. But mathematical validity is only one of the requirements that a theory must meet in order to be a correct representation of the physical facts. It must also be conceptually valid; that is, the meaning attached to the mathematical terms and relations must be correct. One of the significant aspects of Einstein’s theory of acceleration at high speeds is that it explains nothing; it merely makes assertions. Einstein gives us an ex cathedra pronouncement to the effect that the velocity term represents an increase in the mass, without any attempt at an explanation as to why the mass increases with the velocity, why this hypothetical mass increment does not alter the structure of the moving atom or particle, as any other mass increment does, why the velocity term has this particular mathematical form, or why there should be a speed limitation of any kind. Of course, this lack of a conceptual background is a general characteristic of the basic theories of present-day physics, the ―free inventions of the human mind,‖ as Einstein described them, and the theory of mass increase is not unusual in this respect. But the arbitrary character of the theory contrasts sharply with the full explanation provided by the Reciprocal System. This new system of theory produces simple and logical answers for all questions, similar to those enumerated above, that arise in connection with the explanation that it supplies. Furthermore, none of these is, in any respect, ad hoc. All are derived entirely by deduction from the assumptions as to the nature of space and time that constitute the basic premises of the new theoretical system. Both the Reciprocal System and Einstein's theory recognize that there is a limit of some kind at unit speed. Einstein says that this is a limit on the magnitude of speed, because on the basis of his theory the mass reaches infinity at unit speed, and it is impossible to accelerate an infinite mass. The Reciprocal System, on the other hand, says that the limit is on the capability of the process. A speed in excess of unity cannot be produced by electromagnetic means. This does not preclude acceleration to higher speeds by other processes, such as the sudden release of large quantities of energy in explosive events, and according to this new theoretical viewpoint there is no definite limit to speed magnitudes. Indeed, the general reciprocal relation between space and time requires that speeds in excess of unity be just as plentiful, and cover just as wide a range, in the universe as a whole, as speeds less than unity. The apparent predominance of low-speed phenomena is merely a result of observing the universe from a location far over on the low-speed side of the neutral axis. One of the reasons why Einstein's assertion as to the existence of a limiting speed was so readily accepted is an alleged absence of any observational evidence of speeds in excess of that of light. Our new theoretical development indicates, however, that there is actually no lack of evidence. The difficulty is that the scientific community currently holds a mistaken belief as to the nature of the change of position that is produced by such a motion.
We observe that a motion at a speed less than that of light causes a change of location in space, the rate of change varying with the speed (or velocity, if the motion is other than linear). It is currently taken for granted that a speed in excess of that of light
would result in a still greater rate of change of spatial location, and the absence of any clearly authenticated evidence of such higher rates of location change is interpreted as proof of the existence of a speed limitation. But in a universe of motion an increment of speed above unity (the speed of light) does not cause a change of location in space. In such a universe there is complete symmetry between space and time, and since unit speed is the neutral level, the excess speed above unity causes a change of location in three-dimensional time rather than in three-dimensional space. From this it can be seen that the search for ―tachyons,‖ hypothetical particles that move with a spatial velocity greater than unity, will continue to be fruitless. Speeds above unity cannot be detected by measurements of the rate of change of coordinate positions in space. We can detect them only by means of a direct speed measurement, or by some collateral effects. There are many observable effects of the required nature, but their status as evidence of speeds greater than that of light is denied by present-day physicists on the ground that it conflicts with Einstein’s assumption of an increase in mass at high speeds. In other words, the observations are required to conform to the theory, rather than requiring the theory to meet the standard test of science: conformity with observation and measurement. The current treatment of the abnormal redshifts of the quasars is a glaring example of this unscientific distortion of the observations to fit the theory. We have adequate grounds to conclude that these are Doppler shifts, and are due to the speeds at which these objects are receding from the earth. Until very recently there was no problem in this connection. There was general agreement as to the nature of the redshifts, and as to the existence of a linear relation between the redshift and the speed. This happy state of affairs was ended when quasars were found with redshifts exceeding 1.00. On the basis of the previously accepted theory, a 1.00 redshift indicates a recession speed equal to the speed of light. The newly discovered redshifts in the range above 1.00 therefore constitute a direct measurement of quasar motions at speeds greater than that of light. But the present-day scientific community is unwilling to challenge Einstein, even on the basis of direct evidence, so the mathematics of the special theory of relativity have been invoked as a means of saving the speed limitation. No consideration seems to have been given to the fact that the situation to which the mathematical relations of special relativity apply does not exist in the case of the Doppler shift. As brought out in Chapter 7, and as Einstein has explained very clearly in his works, the Lorentz equations, which express those mathematics, are designed to reconcile the results of direct measurements of speed, as in the Michelson-Morley experiment, with the measured changes of coordinate position in a spatial reference system. As everyone, including Einstein, has recognized, it is the direct speed measurement that arrives at the correct numerical magnitude. (Indeed, Einstein postulated the validity of the speed measurement as a basic principle of nature.) Like the result of the Michelson-Morley experiment, the Doppler shift is a direct measurement, simply a counting operation, and it is not in any way connected with a measurement of spatial coordinates.
Thus there is no excuse for applying the relativity mathematics to the redshift measurements. Inasmuch as the ―time dilatation‖ aspect of the Lorentz equations is being applied to some other phenomena that do not seem to have any connection with spatial coordinates,
it may be desirable to anticipate the subsequent development of theory to the extent of stating that the discussion in Chapter 15 will show that those ―dilatation‖ phenomena that appear to involve time only, such as the extended lifetime of fast-moving unstable particles, are, in fact, consequences of the variation of the relation between coordinate spatial location (location in the fixed reference system) and absolute spatial location (location in the natural moving system) with the speed of the objects occupying these locations. The Doppler effect, on the other hand, is independent of the spatial reference system. The way in which motion in time manifests itself to observation depends on the nature of the phenomenon in which it is observed. Large redshifts are confined to high-speed astronomical objects, and a detailed examination of the effect of motion in time on the Doppler shift will be deferred to Volume II, where it will be relevant to the explanation of the quasars. At this time we will take a look at another of the observable effects of motion in time that is not currently recognized as such by the scientific community: its effect in distorting the scale of the spatial reference system. It was emphasized in Chapter 3 that the conventional spatial reference systems are not capable of representing more than one variable–space–and that because there are two basic variables–space and time–in the physical universe we are able to use the spatial reference systems only on the basis of an assumption that the rate of change of time remains constant. We further saw, earlier in this present chapter, that at all speeds of unity or less time does, in fact, progress at a constant rate, and all variability is in space. It follows that if the correct values of the total time are used in all applications, the conventional spatial coordinate systems are capable of accurately representing all motions at speeds of 1/n. But the scale of the spatial coordinate system is related to the rate of change of time, and the accuracy of the coordinate representation depends on the absence of any change in time other than the continuing progression at the normal rate of registration on a clock. At speeds in excess of unity, space is the entity that progresses at the fixed normal rate, and time is variable. Consequently, the excess speed above unity distorts the spatial coordinate system. In a spatial reference system the coordinate difference between two points A and B represents the space traversed by any object moving from A to B at the reference speed. If that reference speed is changed, the distance corresponding to the coordinate difference AB is changed accordingly. This is true irrespective of the nature of the process utilized for measurement of the distance. It might be assumed, for instance, that by using something similar to a yardstick, which compares space directly with space, the measurement of the coordinate distance would be independent of the reference speed. But this is not correct, as the length of the yardstick, the distance between its two ends, is related to the reference speed in the same manner as the distance between any other two points. If the coordinate difference between A and B is x when the reference speed has the normal unit value, then it becomes 2x if the reference speed is doubled.
Thus, if we want to represent motions at twice the speed of light in one of the standard spatial coordinate systems that assume time to be progressing normally, all distances involved in these motions must be reduced by one half. Any other speed greater than unity requires a corresponding modification of the distance scale.

The existence of motion at greater-than-unit speeds has no direct relevance to the familiar phenomena of everyday life, but it is important in all of the less accessible areas, those that we have called the far-out regions. Most of the consequences that apply in the realm of the very large, the astronomical domain, have no significance in relation to the subjects being discussed at this early stage of the theoretical development, but the general nature of the effects produced by greater-than-unit speeds is most clearly illustrated by those astronomical phenomena in which such speeds can be observed on a major scale. A brief examination of a typical high-speed astronomical object will therefore help to clarify the factors involved in the high-speed situation. In the preceding pages we deduced from theoretical premises that speeds in excess of the speed of light can be produced by processes that involve large concentrations of energy, such as explosions. Further theoretical development (in Volume II) will show that both stars and galaxies do, in fact, undergo explosions at certain specific stages of their existence. The explosion of a star is energetic enough to accelerate some portions of the stellar mass to speeds above unity, while other portions acquire speeds below this level. The low-speed material is thrown off into space in the form of an expanding cloud of debris in which the particles of matter retain their normal dimensions but are separated by an increasing amount of empty space. The high-speed material is similarly ejected in the form of an expanding cloud, but because of the distortion of the scale of the reference system by the greater-than-unit speeds, the distances between the particles decrease rather than increase. To emphasize the analogy with the cloud of material expanding into space, we may say that the particles expanding into time are separated by an increasing amount of empty time. The expansion in each case takes place from the situation that existed at the time of the explosion, not from some arbitrary zero datum. The star was originally stationary, or moving at low speed, in the conventional spatial reference system, and was stationary in time in the moving system of reference defined by a clock. As a result of the explosion, the matter ejected at low speeds moves outward in space and remains in the original condition in time. The matter ejected at high speeds moves outward in time and remains in its original condition in space. Since we see only the spatial result of all motions, we see the low-speed material in its true form as an expanding cloud, whereas we see the high-speed material as an object remaining stationary in the original spatial location. Because of the empty space that is introduced between the particles of the outward-moving explosion product, the diameter of the expanding cloud is considerably larger than that of the original star. The empty time introduced between the particles of the inward-moving explosion product conforms to the general reciprocal relation, and inverts this result. The observed aggregate, a white dwarf star, is also an expanding object, but its expansion into time is equivalent to a contraction in space, and as we see it in its spatial aspect, its diameter is substantially less than that of the original star. It thus appears to observation as an object of very high density.
The white dwarf is one member of a class of extremely compact astronomical objects discovered in recent years that is today challenging the basic principles of conventional physics. Some of these objects, such as the quasars, are still without any plausible explanation. Others, including the white dwarfs, have been tied in to current physical
theory by means of ad hoc assumptions, but since the assumptions made to explain each of these objects are not applicable to the others, the astronomers are supplied with a whole assortment of theories to explain the same phenomenon: extremely high densities. It is therefore significant that the explanation of the high density of the white dwarf stars derived from the postulates of the Reciprocal System of theory is applicable to all of the other compact objects. As will be shown in the detailed discussion, all of these extremely compact astronomical objects are explosion products, and their high density is in all cases due to the same cause: motion at speeds in excess of that of light. This is only a very brief account of a complex phenomenon that will be examined in full detail later, but it is a striking illustration of how the inverse phenomena predicted by the reciprocal relation can always be found somewhere in the universe, even if they involve such seemingly bizarre concepts as empty time, or high speed motion of objects stationary in space. Another place where the inability of the conventional spatial reference systems to represent changes in temporal location, other than by distortion of the spatial representation, prevents them from showing the physical situation in its true light is the region inside unit distance. Here the motion in time is not due to a speed greater than unity, but to the fact that, because of the discrete nature of the natural units, less than unit space (or time) does not exist. To illustrate just what is involved here, let us consider an atom A in motion toward another atom B. According to current ideas, atom A will continue to move in the direction AB until the atoms, or the force fields surrounding them, if such fields exist, are in contact. The postulates of the Reciprocal System specify, however, that space exists only in units. It follows that when atom A reaches point X one unit of space distant from B, it cannot move any closer to B in space. But it is free to change its position in time relative to the time location occupied by atom B, and since further movement in space is not possible, the momentum of the atom causes the motion to continue in the only way that is open to it. The spatial reference system is incapable of representing any deviation of time from the normal rate of progression, and this added motion in time therefore distorts the spatial position of the moving atom A in the same manner as the speeds in excess of unity that we considered earlier. When the separation in time between the two atoms has increased to n units, space remaining unchanged (by means of continued reversals of direction), the equivalent spatial separation, the quantity that is determined by the conventional methods of measurement, is 1/n units. Thus, while atom A cannot move to a position less than one unit of space distant from atom B, it can move to the equivalent of a closer position by moving outward in time. Because of this capability of motion in time in the region inside unit distance it is possible for the measured length, area, or volume of a physical object to be a fraction of a natural unit, even though the actual one, two, or three-dimensional space cannot be less than one unit in any case. It was brought out in Chapter 6 that the atoms of a material aggregate, which are contiguous in space, are widely separated in time.
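The reciprocal conversion just described, in which a separation of n units of time appears to measurement as an equivalent spatial separation of 1/n natural units, can be restated in a few lines. The following is a minimal sketch of the arithmetic only, with exact fractions used for clarity; the function name is illustrative and not taken from the text:

```python
from fractions import Fraction

def equivalent_spatial_separation(time_units):
    # Two atoms one unit of space apart, separated by n units of time,
    # measure as 1/n natural units apart (the relation stated above).
    return Fraction(1, time_units)

for n in (1, 2, 4, 8):
    print(n, "units of time ->", equivalent_spatial_separation(n), "natural unit of equivalent space")
```

On this accounting, a measured length smaller than one natural unit is simply the equivalent-space expression of a time separation greater than one unit.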
Now we are examining a situation in which a change of position in the spatial coordinate system results from a separation in time, and we will want to know just how these time separations differ. The explanation is that the individual atoms of an aggregate such as a gas, in which the atoms are
separated by more than unit distance, are also separated by various distances in time, but these atoms are all at the same stage of the time progression. The motion of these atoms meets the requirement for accurate representation in the conventional spatial coordinate systems; that is, it maintains the fixed time progression on which the reference system is based. On the other hand, the motion in time that takes place inside unit distance involves a deviation from the normal time progression. A spatial analogy may be helpful in getting a clear view of this situation. Let us consider the individual units (stars) of a galaxy. Regardless of how widely these stars are separated, or how much they move around within the galaxy, they maintain their status as constituents of the galaxy because they are all receding at the same speed (the internal motions being negligible compared to the recession speed). They are at the same stage of the galactic recession. But if one of these stars acquires a spatial motion that modifies its recession speed significantly, it moves away from the galaxy, either temporarily or permanently. Thereafter, the position of this star can no longer be represented in a map of the galaxy, except by some special convention. The separations in time discussed in Chapter 6 are analogous to the separations in space within the galaxies. The material aggregates that we are now discussing retain their identities just as the galaxies do, because their individual components are progressing in time at the same rates. But just as individual stars may acquire spatial speeds which cause them to move away from the galaxies, so the individual atoms of the material aggregates may acquire motions in time which cause them to move away from the normal path of the time progression. Inside unit distance this deviation is temporary and quite limited in extent. In the white dwarf stars the deviations are more extensive, but still temporary. In the astronomical discussions in Volume II we will consider phenomena in which the magnitude of the deviation is sufficient to carry the aggregates that are involved completely out of the range of the spatial coordinate systems. So far as the inter-atomic distance is concerned, it is not material whether this is an actual spatial separation or merely the equivalent of such a separation, but the fact that the movement of the atoms changes from a motion in space to a motion in time at the unit level has some important consequences from other standpoints. For instance, the spatial direction AB in which atom A was originally moving no longer has any significance now that the motion is taking place inside unit distance, inasmuch as the motion in time which replaces the previous motion in space has no spatial direction. It does have what we choose to call a direction in time, but this temporal direction has no relation at all to the spatial direction of the previous motion. No matter what the spatial direction of the motion of the atom may have been before unit distance was reached, the temporal direction of the motion after it makes the transition to motion in time is determined purely by chance. Any kind of action originating in the region where all motion is in time is also subject to significant modifications if it reaches the unit boundary and enters the region of space motion. For example, the connection between motion in space and motion in time is scalar, again because there is no relation between direction in space and direction in time. 
Consequently, only one dimension of a two-dimensional or three-dimensional motion can
be transmitted across the boundary. This point has an important bearing on some of the phenomena that will be discussed later. Another significant fact is that the effective direction of the basic scalar motions, gravitation and the progression of the natural reference system, reverses at the unit level. Outside unit space the progression of the reference system carries all objects outward in space away from each other. Inside unit space only time can progress unidirectionally, and since an increase in time, with space remaining constant, is equivalent to a decrease in space, the progression of the reference system in this region, the time region, as we will call it, moves all objects to locations which, in effect, are closer together. The gravitational motion necessarily opposes the progression, and hence the direction of this motion also reverses at the unit boundary. As it is ordinarily observed in the region outside unit distance, gravitation is an inward motion, moving objects closer together. In the time region it acts in the outward direction, moving material objects farther apart. On first consideration, it may seem illogical for the same force to act in opposite directions in different regions, but from the natural standpoint these are not different directions. As emphasized in Chapter 3, the natural datum is unity, not zero, and the progression of the natural reference system therefore always acts in the same natural direction: away from unity. In the region outside unit distance away from unity is also away from zero, but in the time region away from unity is toward zero. Gravitation likewise has the same natural direction in both regions: toward unity. It is this reversal of coordinate direction at the unit level that enables the atoms to take up equilibrium positions and form solid and liquid aggregates. No such equilibrium can be established where the progression of the natural reference system is outward, because in this case the effect of any change in the distance between atoms resulting from an unbalance of forces is to accentuate the unbalance. If the inward-directed gravitational motion exceeds the outward-directed progression, a net inward motion takes place, making the gravitational motion still greater. Conversely, if the gravitational motion is the smaller, the resulting net motion is outward, which still further reduces the already inadequate gravitational motion. Under these conditions there can be no equilibrium. In the time region, however, the effect of a change in relative position opposes the unbalanced force, which caused the change. If the gravitational motion (outward in this region) is the greater an outward net motion takes place, reducing the gravitational motion and ultimately bringing it into equality with the constant inward progression of the reference system. Similarly, if the progression is the greater, the net movement is inward, and this increases the gravitational motion until equilibrium is reached. The equilibrium that must necessarily be established between the atoms of matter inside unit distance in a universe of motion obviously corresponds to the observed inter-atomic equilibrium that prevails in solids and, with certain modifications, in liquids. Here, then, is the explanation of solid and liquid cohesion that we derive from the Reciprocal System of theory, the first comprehensive and completely self-consistent theory of this phenomenon that has ever been formulated. 
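The difference between the two regions can be illustrated with a toy feedback calculation. In the sketch below (Python), the gravitational motion is assumed, purely for illustration, to vary as k/d² with the separation d, and the progression of the reference system contributes a constant unit motion; the particular function, the step size, and the starting values are arbitrary choices made for the example, since the point at issue is only the sign of the feedback in each region:

```python
def settle(d0, time_region, k=1.0, step=0.01, iterations=5000):
    """Iterate the net motion described above.  Outside unit distance the progression
    is outward and gravitation inward; in the time region both directions reverse."""
    d = d0
    for _ in range(iterations):
        g = k / d**2                     # assumed distance dependence (illustrative only)
        net = (g - 1.0) if time_region else (1.0 - g)
        d = max(d + step * net, 1e-6)    # positive net moves the separation outward
    return d

print(settle(0.8, time_region=True))     # approaches the equilibrium separation (1.0 with these numbers)
print(settle(1.2, time_region=True))     # approaches the same equilibrium from the other side
print(settle(1.2, time_region=False))    # no equilibrium: the separation simply keeps growing
```

In the time-region case any departure from the balance point produces a net motion back toward it, while outside unit distance the same imbalance feeds on itself, which is the distinction drawn in the preceding paragraph.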
The mere fact that it is far superior in all respects to the currently accepted electrical theory of matter is not, in itself, very significant, inasmuch as the electrical hypothesis is definitely one of the less successful
segments of present-day physical theory, but a comparison of the two theories should nevertheless be of interest from the standpoint of demonstrating how great an advance the new theoretical system actually accomplishes in this particular field. A detailed comparison will therefore be presented later, after some further groundwork has been laid.

CHAPTER 9

Rotational Combinations
One of the principal difficulties that is encountered in explaining the Reciprocal System of theory, or portions thereof, is a general tendency on the part of readers or listeners to assume that the author or speaker, whoever he may be, does not actually mean what he says. No previous major theory is purely theoretical; every one takes certain empirical information as a given element in the premises of the theory. The conventional theory of matter, for example, takes the existence of matter as given. It then assumes that this matter is composed of ―elementary particles,‖ which it attempts to identify with observed material particles. On the basis of this assumption, together with the empirical information introduced into the theory, it then attempts to explain the observed range of structural characteristics. Inasmuch as all previous theories of major scope have been constructed on this pattern, there is a general impression that physical theories must be so constructed, and it is therefore assumed that when reference is made to the fact that the Reciprocal System utilizes no empirical data of any kind, this statement must have some meaning other than its literal significance. The theoretical development in the preceding chapters should dispose of this misapprehension so far as the qualitative aspect of the universe is concerned. While the task is still only in the early stages, enough of the basic features of the physical universe–radiation, matter, gravitation, etc.–have been derived by deduction from the postulates, without the aid of further assumptions, or of empirical information, to demonstrate that a purely theoretical qualitative development is, in fact, feasible. But a complete account of a theoretical universe must necessarily include the quantitative aspects of physical phenomena as well as the qualitative aspects. Here is another place where the way in which the development of theory has taken place is mistakenly regarded as the way in which this development must take place. The theoretical products of the Newtonian era, the so-called ―classical‖ physics, were capable of being expressed in simple mathematical terms. But some deviations from the classical laws have been encountered in the far-out regions that have been reached by observation and experiment in recent years, and the physicists have not been able to account for these deviations without employing extremely complex mathematical processes, together with conceptual artifices of a rather dubious character, such as Einstein’s ―rubber yardstick,‖ or fudge factor. In the light of the points brought out in the preceding chapter it is now evident that the difficulties are due to a misunderstanding of the basic nature of the far-out phenomena, but since the modern theorists have not realized this, they have
concluded that the true relationships of the universe are extremely complex, and that they cannot be expressed by anything other than very complex mathematics. The general acceptance of this view of the situation has led a large segment of the scientific community, particularly the theoretical physicists, to the further conclusion that any treatment of the subject matter by means of simple mathematics is necessarily wrong, and can safely be dismissed without examination. Indeed, many of these individuals go a step farther, and characterize such a treatment as ―non-mathematical.‖ This attitude is, of course, preposterous, and it cannot be defended, but it is nevertheless so widespread that it constitutes a serious obstacle in the way of a full appreciation of the merits of any simple mathematical treatment. In beginning the quantitative development of the Reciprocal System of theory it is therefore necessary to emphasize that simplicity is a virtue, not a defect. It is so recognized, in principle, by scientists in general, including those who are now contending that the universe is fundamentally complex, or even, as expressed by P. W. Bridgman, that it ―is not intrinsically reasonable or understandable.‖56 In its entirety, the universe is indeed complex, extremely so, but as the initial steps in the development of the Reciprocal System in the preceding pages have already begun to demonstrate from a qualitative standpoint, it is actually a complex aggregate of interrelated simple elements. The principal advantage of mathematical treatment of physical subject matter is the precision with which knowledge of a mathematical character can be developed and expressed. This is offset to a considerable degree, however, by the fact that mathematical knowledge of physical phenomena is incomplete, and from the physical standpoint, ambiguous. No mathematical statement of a physical relation is complete in itself. As Bridgman frequently pointed out, it must be accompanied by a ―text‖ that tells us what the mathematics mean, and how they are to be applied. There is no definite and fixed relation between this text and the mathematics; that is, every mathematical statement of a physical relation is capable of different interpretations. The importance of this point in the present connection lies in the fact that the Reciprocal System makes relatively few changes in the mathematical aspects of current physical theory. The changes that it calls for are primarily conceptual. They require different interpretations of the mathematics, changes in the text, as Bridgman would say. Such changes, modifications of our ideas as to what the mathematics mean, obviously cannot be represented by alterations in the mathematical expressions. These expressions will have to stand as they are. Many readers of the first edition have asked that the new ideas be ―put in mathematical form.‖ But what these individuals really mean is that they want the theory put into some different mathematical form. They are, in effect, demanding that we change the mathematics and leave the concepts alone. This, we cannot do. The errors in current physical thought are primarily conceptual, not mathematical, and the corrections have to be made where the errors are, not somewhere else. There is nothing extraordinary about the close correlation between the mathematical aspects of the Reciprocal System and those of current theory. 
The conventional mathematical relations were, for the most part, derived empirically, and any correct theory of a more general nature must necessarily arrive at these same mathematics. But
there is no guarantee that the prevailing interpretation of these mathematical results is correct. On the contrary, as Jeans pointed out in the statement previously quoted, the physical interpretations of correct mathematical formulae have often been ―badly wrong.‖ Correction of the errors that have been made in the interpretation of the mathematical expressions often has very significant consequences, not so much in the particular area to which such an expression is directly applicable, but in collateral areas. The interpretation is usually tailored to fit the immediate physical situation reasonably well, but if it is not correct it becomes an impediment to progress in related areas. If it does not actually lead to erroneous conclusions such as the limitation on speed that Einstein derived from a wrong interpretation of the mathematics of acceleration at high speeds, it at least misses all of the significant collateral implications of the true explanation. For example, the mathematical statement of the recession of the distant galaxies merely tells us that these galaxies are receding at speeds directly proportional to their distances. The currently popular interpretation of this mathematical relation assumes that the recession is an ordinary vectorial motion. The problem in accounting for it then becomes a matter of identifying (or inventing) a force of sufficient magnitude to produce the extremely high speeds of the most distant objects. The accepted hypothesis is that they were produced by a gigantic explosion of the entire contents of the universe at some unique stage of its history. The Reciprocal System is in agreement with the mathematical aspects of current theory. It arrives theoretically at the conclusion that the distant galaxies must recede at speeds proportional to their respective distances: the same conclusion that present-day astronomy derives empirically. But the new theoretical system says that this recession is not a vectorial motion imparted to the galaxies by some powerful force. It is a scalar outward motion that results from viewing the galaxies in the context of a stationary spatial frame of reference rather than in the natural moving system of reference to which all physical objects actually conform. So far as the recession phenomenon itself is concerned, it makes little difference, aside from the implications for cosmology, which interpretation of the mathematical relation between speed and distance is accepted, but on the basis of the currently popular hypothesis, this relation has no further significance, whereas on the basis of the explanation derived from the postulates of the Reciprocal System, the same forces that apply to the distant galaxies are applicable to all atoms and aggregates of matter, producing effects which vary with the relative magnitudes of the different forces involved. On the basis of this new information, the mathematical relation, which applies to the galaxies, is one of far-reaching importance. This present chapter will initiate a demonstration that the very complex mathematical relations that are encountered in many physical areas are the result of permutations and combinations of simple basic elements, rather than a reflection of a complex fundamental reality. The process whereby the compound unit of motion that we call an atom is produced by applying a rotational motion to a previously existing vibrational motion, the photon, is typical of the manner in which the complex phenomena of the universe are built up from simple foundations. 
We start with a uniform linear, or translational, motion at unit speed. Then by directional reversals we produce a simple harmonic motion, or vibration. Next the vibrating unit is caused to rotate. The addition of this motion of a
different type alters the behavior of the unit–gives it different properties, as we say–and puts it into a new physical category. All of the more complex physical entities with which we will deal in the subsequent pages are similarly built up by compounding the simpler motions. The first phase of this mathematical development is a striking example of the way in which a few very simple mathematical premises quickly proliferate into a large number and variety of mathematical consequences. The development will begin with nothing more than the series of cardinal numbers and the geometry of three dimensions. By subjecting these to simple mathematical processes, the applicability of which to the physical universe of motion is specified in the fundamental postulates, the combinations of rotational motions that can exist in the theoretical universe will be ascertained. It will then be shown that these rotational combinations that theoretically can exist can be individually identified with the atoms of the chemical elements and the sub-atomic particles that are observed to exist in the physical universe. A unique group of numbers representing the different rotational components will be derived for each of these combinations. The set of numbers applying to each element or type of particle theoretically determines the properties of that substance, inasmuch as these properties, like all other quantitative features of a universe of motion, are functions of the magnitudes of the motions that constitute the material substances. It will be shown in this and the following chapter that this theoretical assertion is valid for some of the simpler properties, including those which depend upon the position of the element in the periodic table. The application of these numerical factors to other properties will be discussed from time to time as consideration of these other properties is undertaken later in the development. One preliminary step that will have to be taken is to revise present measurement procedures and units in order to accommodate them to the natural moving system of reference. Because of the status of unity as the natural reference datum, a deviation of n-1 units downward from unity to a speed 1/n has the same natural magnitude as a deviation of n-1 units upward to a speed n/1, even though, when measured from zero speed in the conventional manner, the changes are wholly disproportionate. When n is 4, for example, the upward change is from 1 to 4, an increase of 3 units, whereas the downward change is from 1 to ¼, a decrease of only ¾ unit. In order to reflect the fact that these deviations are actually equal in magnitude from the natural standpoint, the basis on which the fundamental processes of the universe take place, it is necessary to set up a new system of speed measurement, in which we express the magnitude of the speed in terms of the deviation, upward or downward, from unit speed, instead of measuring from some zero in the conventional manner. Inasmuch as the units in which speeds are measured on this basis are not commensurable with those of speed as measured from zero, it would lead to complete confusion if the units of the new system were called units of speed. For this reason, when reference is made to speed in terms of its natural magnitude in any of the publications dealing with the Reciprocal System of theory, it is not called speed. Instead, the term ―speed displacement‖ is used, the units of this displacement being natural units of deviation from unity.
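As a compact statement of this convention, the sketch below (Python) converts a speed of the form 1/n or n/1 into its displacement, the deviation from unit speed in natural units. The sign choice, positive on the low-speed side and negative on the high-speed side, anticipates the naming introduced in the following paragraphs; the function itself is offered only as an illustration and is not notation used in the text:

```python
from fractions import Fraction

def speed_displacement(speed):
    # Deviation from unit speed for a speed of the form 1/n or n/1.
    s = Fraction(speed)
    if s <= 1:
        return s.denominator - 1     # speed 1/n: n - 1 units below unity (positive displacement)
    return -(s.numerator - 1)        # speed n/1: n - 1 units above unity (negative displacement)

print(speed_displacement(Fraction(1, 4)))   # speed 1/4  ->  3
print(speed_displacement(4))                # speed 4/1  -> -3
print(speed_displacement(Fraction(1, 4)) + speed_displacement(4))   # equal deviations cancel: 0
```

Measured from zero the two changes in the n = 4 example look wholly different, 3 units upward against ¾ unit downward, but measured from unity each is the same three units of displacement, and displacements of opposite sign add algebraically to a net of zero.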

In practice, the term ―speed displacement‖ is usually shortened to ―displacement,‖ and this has led to some criticism of the terminology on the ground that ―displacement‖ already has other scientific meanings. But it is highly desirable, as an aid to understanding, that the idea of a deviation from a norm should be clearly indicated in the language that is used, and there are not many English words that meet the requirements. Under the circumstances, ―displacement,‖ appears to be the best choice. The sense in which this term is used will almost always be indicated by the context in which it appears, and in the few cases where there might be some question, the possibility of confusion can be avoided by employing the full name, ―speed displacement.‖ Another reason for the use of a distinctive term in designating natural speed magnitudes is that this is necessary in order to make the addition of speeds meaningful. Conventional physics claims that it recognizes speed as a scalar quantity, but in actual practice gives it no more than a quasi-scalar status. True scalar quantities are additive. If we have five gallons of gasoline in one container and ten gallons in another, the total, the quantity in which we are most interested, is fifteen gallons. The corresponding sum of two speeds of the same object–rotational and translational, for example–has no meaning at all in current physical thought. In the universe of motion described by the Reciprocal System of theory, however, the scalar total of all of the speeds of an object is one of the most important properties of that object. Thus, even though speed has the same basic significance in the Reciprocal System as in conventional theory–that is, it is a measure of the magnitude of motion–the manner in which speed enters into physical phenomena is so different in the two systems that it would be inappropriate to express it in the same units of measurement in both cases, even if this were not ruled out for other reasons. It would, of course, be somewhat simpler if we could say ―speed‖ whenever we mean speed, and not have to use two different terms for the same thing. But the meaning of whatever is said should be clear in all cases if it is kept in mind that whenever reference is made to ―displacement,‖ this means ―speed,‖ but not speed as ordinarily measured. It is speed measured in different quantities, and from a different reference datum. A decrease in speed from 1/1 to 1/n involves a positive displacement of n-1 units; that is, an addition of n-1 units of motion in which time is unidirectional while the space direction alternates, thus, in effect, adding n-1 units of time to the original speed 1/1. Similarly, an increase in speed from 1/1 to n/1 involves a negative displacement, an addition of n-1 units of motion in which space is unidirectional while the time direction alternates; thus, in effect, adding n-1 units of space to the original speed 1/1. In the first edition of this work the displacements here designated positive and negative were called ―time displacement‖ and ―space displacement‖ respectively, to emphasize the fact that the positive displacement represents an increased amount of time in association with one unit of space, while the reverse is true in negative displacement. 
Experience has shown, however, that the original terminology tends to be confusing, particularly in that it is frequently interpreted as indicating addition of independent quantities of time or space to the phenomena under consideration, whereas, in fact, it is the speed that is being increased or decreased. As pointed out in Chapter 2, in a universe of motion there is no such thing as physical space or time independent of motion. We can abstract the space aspect of motion mentally, and imagine it existing independently, as a reference system
(extension space) or otherwise, but we cannot add or subtract space or time in actual practice except by superimposing a new motion on the motion we wish to alter. If we were dealing with speed measured from the mathematical zero it would be logical to apply the term positive to an addition to the speed, but where we measure from unity the values increase in both directions, and there is no reason why one increase should be considered any more ―positive‖ than the other. The choice has therefore been made on a convenience basis, and the ―positive‖ designation has been applied to the displacements on the low speed side of the unit speed datum because these are the displacements of the material system of phenomena. We will find, as we proceed, that the displacements toward higher speeds, where they occur at all in the material sector, do so mainly as negative modifications of the predominantly low speed motion combinations. Inasmuch as the units of positive displacement and of negative displacement are simply units of deviation from the natural speed datum they are additive algebraically. Thus, if there exists a motion in time with a negative speed displacement of n-1 units (equivalent to n units of speed in conventional terms) we can reduce the speed to zero, relative to the natural datum, by adding a motion with a positive speed displacement of n-1 units. Addition of further positive displacement will result in a net speed below unity; that is, a motion in space. But there is no way by which we can alter either the time aspect or the space aspect of the motion independently. The variable in a universe of motion is speed, and the variation occurs only in displacement units. The change in terminology has been made in the hope that it will contribute toward a full realization that what we are dealing with are units of speed, even though, for technical reasons, we cannot call it speed. In the case of radiation, there is no inherent upper limit to the speed displacement (conventionally measured as frequency), but in actual practice a limit is imposed by the capabilities of the processes that produce the radiation, examination of which will be deferred until after further groundwork has been laid. The range of radiation frequencies is so wide that, except near 1/1, where the steps from n to n + 1 are relatively large, the frequency spectrum is practically continuous. The rotational situation is very different. In contrast to the almost unlimited number of possible vibrational frequencies, the maximum number of units of rotational displacement that can participate in any one combination of rotations is relatively small, for reasons which will appear in the course of the discussion. Furthermore, probability considerations dictate the distribution of the total number of rotational displacement units among the different rotations in each individual case, so that in general there is only one stable combination among the various mathematically possible ways of distributing a given total rotational displacement. This limits the possible rotational combinations that we identify as material atoms and particles to a relatively small series, the successive members of which differ initially by one displacement unit, and at a later stage by two of the single displacement units. With this understanding of the fundamentals, let us now proceed to an examination of the general characteristics of the combinations of rotational motions. 
The existence of different rotational patterns is clear from the start, as the rotation can not only take place at different speeds (displacements), but, in a three-dimensional universe, can also take
place independently in the different dimensions. As we will see in our investigation, however, some restrictions are imposed by geometry. The photon cannot rotate around the line of vibration as an axis. Such a rotation would be indistinguishable from no rotation at all. But it can rotate around either or both of the two axes perpendicular to the line of vibration and to each other. One such rotation of the one-dimensional photon generates a two-dimensional figure: a disk. Rotation of the disk around the second available axis then generates a three-dimensional figure: a sphere. This exhausts the available dimensions, and no further rotation of the same nature can take place. The basic rotation of the atom or particle is therefore two-dimensional, and, as brought out in Chapter 5, it is in the inward scalar direction. But after the two-dimensional rotation is in existence it is possible to give the entire combination of vibrational and rotational motions a rotation around the third axis, which is also inward from the scalar standpoint, but is opposed to the two-dimensional rotation vectorially. This reverse rotation is optional, as the basic rotation is distributed over all three dimensions, and nothing further is required for stability. A rotating system therefore consists of a photon rotating two-dimensionally, with or without a reverse rotation in the third dimension. Although the two dimensions of the basic rotation have been treated separately for descriptive purposes, first generating a disk by one rotation, and then a sphere by the second, it should be understood that there are not two one-dimensional rotations; there is one two-dimensional rotation. This distinction has a significant bearing on the properties of the rotational combinations. The combined magnitude of two one-dimensional rotations of n displacement units each is 2n. The magnitude of a two-dimensional rotation in which the displacement is n in each dimension is n². It is not essential that all of the rotations be effective in the physical sense. Unless there is effective rotation in at least one dimension it is meaningless to speak of rotation, as such motion cannot be distinguished from translation. But if there is effective rotation–that is, rotation with a speed differing from unity–in at least one dimension, there can be rotation at unit speed (zero displacement) in the other dimension or dimensions. The vibrational speed displacement of the basic photon may be either negative (greater than unity) or positive (less than unity). Let us consider the case of a photon with a negative displacement, to which we propose to add a unit of rotational displacement (rotate the photon). Inasmuch as the individual units of vibrational displacement are discrete (that is, they are not tied together in any way), the one applied unit of rotational motion results in rotation of only one of the vibrational units. Because of the lack of any connection between the vibrational units there is no force resisting separation. When the one unit starts moving inward by reason of the rotation it therefore moves away from the remainder of the photon, which continues to be carried outward by the progression of the natural reference system. Irrespective of the number of vibrational units in the photon to which the rotational displacement was added, the compound motion produced by this addition thus contains only the vibrational units that are being rotated.
The remaining vibrational units of the original photon continue as a photon of lower displacement. When a compound motion of this type, rotation of a vibration, is formed, the inward motion due to the rotation replaces the outward motion of the progression of the reference
system. Thus the components of the compound motion are not subject to oppositely directed motions in the manner of the multi-unit rotating photons, and these components do not separate spontaneously. However, the rotational displacement of the photon now under consideration is negative. If the rotational displacement applied to this photon is also negative, the displacement units, being units of the same scalar nature, are additive in the same manner as the vibrational units of a photon. Like the photon units, they are easily separated when even a relatively small force is applied, and the rotational displacement is therefore readily transferred from the original photon to some other object, under appropriate conditions. For this reason, combinations of negative vibrational and negative rotational displacements are inherently unstable. On the other hand, if the applied rotational displacement is positive, equal numbers of the positive and negative displacement units neutralize each other. In this case the combination has no net displacement. A motion that does have a net displacement cannot be extracted from such a combination without the intervention of some outside agency. It is simple enough to separate one negative unit from an aggregate of n negative units, but getting one negative unit out of nothing at all is not so easily accomplished. A combination of a negative vibration and a positive rotation (or vice versa) is therefore inherently stable. All that has been said about additions to a photon with negative displacement applies with equal force, but in the inverse manner, to the addition of rotation to a photon with positive displacement. We therefore arrive at the conclusion that in order to produce stable combinations photons oscillating in time (negative displacement) must be rotated in space (positive displacement), whereas photons oscillating in space must be rotated in time. This alternation of positive and negative displacements is a general requirement for stability of compound motions, and it will play an important part in the theoretical development in the subsequent pages. It should be understood, however, that stability is dependent on the environment. Any combination will break up if the environmental conditions are sufficiently unfavorable. Conversely, there are situations, to be examined later, in which environmental influences create conditions that confer stability on combinations that are normally unstable. The combinations in which the net rotation is in space (positive displacement) can be identified with the relatively stable atoms and particles of our local environment, and constitute what we will call the material system. For the present we will confine the discussion to the members of this material system, and will leave the inverse type of combination, the cosmic system, as we will call it, for later consideration. Inasmuch as the oscillating photon is being rotated in two dimensions (the basic positive rotation), one unit of two-dimensional positive displacement is required to neutralize the negative vibrational displacement of the photon, and reduce the net total displacement to zero. Because of its lack of any effective deviation from unit speed (the reference datum) this combination of motions has no observable physical properties, and for that reason it was somewhat facetiously called ―the rotational equivalent of nothing‖ in the first edition. But this understates the significance of the combination. 
While it has no effective net total magnitude, its rotational component does have a direction. The idea of a motion that has direction but no magnitude sounds something like a physicist's version of the Cheshire cat, but the zero effective magnitude is a property of the structure as a whole,
while the rotational direction of the two-dimensional motion, which makes the addition of further positive rotational displacement possible, is a property of one component of the total structure. Thus, even though this combination of motions can do nothing itself, it does constitute a base from which something (a material particle) can be constructed that cannot be formed directly from a linear type of motion. We will therefore call it the rotational base. There are actually two of the rotational bases. The one we have been discussing is the base of the material system. The structures of the cosmic system are constructed from a different base; one that is just the inverse of the material base. In this inverse combination the photon is oscillating in space (positive displacement) and rotating in time (negative displacement). Successive additions of positive displacement to the rotational base produce the combinations of motions that we identify as the sub-atomic particles and the atoms of the chemical elements. The next two chapters will describe the structures of the individual combinations. Before beginning this description, however, it will be in order to make some general comments about the implications of the theoretical conclusion that the atoms and particles of matter are systems of rotational motions. One of the most significant results of the new concept of the structure of atoms and particles that has been developed from the postulates of the Reciprocal System is that it is no longer necessary to invoke the aid of spirits or demons or their modern equivalents: mysterious hypothetical forces of a purely ad hoc nature–to explain how the parts of the atom hold together. There is nothing to explain, because the atom has no separate parts. It is one integral unit, and the special and distinctive characteristics of each kind of atom are not due to the way in which separate ―parts‖ are put together, but are due to the nature and magnitude of the several distinct motions of which each atom is composed. At the same time, this explanation of the structure of the atom tells us why such a unit can expel particles, or disintegrate into smaller units, even though it has no separate parts; how it can act, in some respects, as if it were an aggregate of sub-atomic units, even though it is actually a single integral entity. Such a structure can obviously part with some of its motion, or absorb additional units of motion, without in any way altering the fact that it is a single entity, not a collection of parts. When the pitcher throws a curve ball, it is still a single unit–it is a baseball–even though it now has both a translational motion and a rotational motion, which it did not have while still in his hand. We do not have to worry about what kind of a force holds the rotational ―part,‖ the translational ―part,‖ and the horsehide covered ―nucleus‖ together. There has been a general impression that if we can get particles out of an atom, then there must be particles in atoms; that is, the atom must be constructed of particles. This conclusion seems so natural and logical that it has survived what would ordinarily be a fatal blow: the discovery that the particles which emanate from the atom in the process of radioactivity and otherwise are not the constituents of the atom; that is, they do not have the properties which are required of the constituents. Furthermore, it is now clear that a great variety of particles that cannot be regarded as constituents of normal atoms can be
produced from these atoms by appropriate processes. The whole situation is now in a state of confusion. As Heisenberg commented: Wrong questions and wrong pictures creep automatically into particle physics and lead to developments that do not fit the real situation in nature.27 It is now apparent that all of this confusion has resulted from the wholly gratuitous, but rarely questioned, assumption that the sub-atomic particles have the characteristics of ―parts‖ ; that is, they exist as particles in the structure of the atom, they require something that has the nature of a ―force‖ to keep them in position, and so on. When we substitute motions for parts, in accordance with the findings of the Reciprocal System, the entire situation automatically clears up. Atoms are compound motions, sub-atomic particles are less complex motions of the same general nature, and photons are simple motions. An atom, even though it is a single unitary structure without separate parts, can eject some of its motion, or transfer it to some other structure. If the motion which separates from the atom is translational it reappears as translational motion of some other unit; if it is simple linear vibration it reappears as radiation; if it is a rotational motion of less than atomic complexity it reappears as a sub-atomic particle; if it is a complex rotational motion it reappears as a smaller atom. In any of these cases, the status of the original atom changes according to the nature and magnitude of the motion that is lost. The explanation of the observed interconvertibility of the various physical entities is now obvious. All of these entities are forms of motion or combinations of different forms, hence any of them can be changed into some other form or combination of motion by appropriate means. Motion is the common denominator of the physical universe.

CHAPTER 10

Atoms
In some respects, the combinations of motions with greater rotational displacement, those which constitute the atoms of the chemical elements, are less complicated than those with the least displacement, the sub-atomic particles, and it will therefore be convenient to discuss the structure of these larger units first. Geometrical considerations indicate that two photons can rotate around the same central point without interference if the rotational speeds are the same, thus forming a double unit. The nature of this combination can be illustrated by two cardboard disks interpenetrated along a common diameter C. The diameter A perpendicular to C in disk a represents one linear oscillation, and the disk a is the figure generated by a one-dimensional rotation of this oscillation around an axis B perpendicular to both A and C. Rotation of a second linear oscillation, represented by the diameter B, around axis A generates the disk b. It is then evident that disk a may be given a second rotation around axis A, and disk b may be given a second rotation around axis B without interference at any point, as long as the rotational speeds are equal.

The validity of the mathematical principles of probability is covered in the fundamental postulates by specifically including them in the definition of ―ordinary commutative mathematics,‖ as that term is used in the postulates. The most significant of these principles, so far as the atomic structures are concerned, are that small numbers are more probable than large numbers, and symmetrical combinations are more probable than asymmetrical combinations of the same total magnitude. For a given number of units of net rotational displacement the double rotating system results in lower individual displacement values, and the probability principles give them precedence over single units in which the individual displacements are higher. All rotating combinations with sufficient net total displacement to enable forming double units therefore do so. To facilitate a description of these objects we will utilize a notation in the form a-b-c, where c is the speed displacement of the one-dimensional reverse rotation, and a and b are the displacements in the two dimensions of the basic two-dimensional rotation. Later in the development we will find that the one-dimensional rotation is connected with electrical phenomena, and the two-dimensional rotation is similarly connected with magnetic phenomena. In dealing with the atomic and particle rotations it will be convenient to use the terms ―electric‖ and ―magnetic‖ instead of ―one-dimensional‖ and ―two-dimensional‖ respectively, except in those cases where it is desired to lay special emphasis on the number of dimensions involved. It should be understood, however, that designation of these rotations as electric and magnetic does not indicate the presence of any electric or magnetic forces in the structures now being described. This terminology has been adopted because it not only serves our present purposes, but also sets the stage for the introduction of electric and magnetic phenomena in a later phase of the development. Where the displacements in the two magnetic dimensions are unequal, the rotation is distributed in the form of a spheroid. In such cases the rotation which is effective in two dimensions of the spheroid will be called the principal magnetic rotation, and the other the subordinate magnetic rotation. When it is desired to distinguish between the larger and the smaller magnetic rotational displacements, the terms primary and secondary will be used. Where motion in time occurs in the material structures now being discussed, the negative displacement values of this motion will be distinguished by placing them in parentheses. All values not so identified refer to positive displacement (motion in space). Some questions now arise as to the units in which the displacements should be expressed. As will quickly be seen when we start to identify the individual structures, the natural unit of displacement is not a convenient unit in application to the double rotating systems. The smallest change that can take place in these systems involves two natural units. As stated in Chapter 9, probability considerations dictate the distribution of the total displacement of a combination among the different dimensions of rotation. The possible rotating combinations therefore constitute a series, successive members of which differ by two of the natural onedimensional units of displacement. Since we will not encounter single units in these atomic structures, it will simplify our calculations if we work with double units rather than the single natural units. 
We will therefore define the unit of electric displacement in the atomic structures as the equivalent of two natural one-dimensional displacement units.

On this basis, the position of each element in the series of combinations, as determined by its net total equivalent electric displacement, is its atomic number. For reasons that will be brought out later, half of the unit of atomic number has been taken as the unit of atomic weight. At the unit level dimensional differences have no numerical effect; that is, 1³ = 1² = 1. But where the rotation extends to greater displacement values a two-dimensional displacement n is equivalent to n² one-dimensional units. If we let n represent the number of units of electric displacement, as defined above, the corresponding number of natural (single) units is 2n, and the natural unit equivalent of a magnetic (two-dimensional) displacement n is 4n². Inasmuch as we have defined the electric displacement unit as two natural units, it then follows that a magnetic displacement n is equivalent to 2n² electric displacement units. This means that the unit of magnetic displacement, the increment between successive values of the two-dimensional rotational displacement, is not a specific magnitude in terms of total displacement. Where the total displacement is the significant factor, as in the position in the series of elements, the magnetic displacement value must be converted to equivalent electric displacement units by means of the 2n² relation. For some other purposes, however, the displacement value in terms of magnetic units does have significance in its own right, as we will see in the pages that follow.

In order to qualify as an atom–a double rotating system–a rotational combination must have at least one effective magnetic displacement unit in each system, or, expressing the same requirement in a different way, it must have at least one effective displacement unit in each of the magnetic dimensions of the combination structure. One positive magnetic (double) displacement unit is required to neutralize the two single negative displacement units of the basic photons; that is, to bring the total scalar speed of the combination as a whole down to zero (on the natural basis). This one positive unit is not part of the effective rotation. Thus, where there is no rotation in the electric dimension, the smallest combination of motions that can qualify as an atom is 2-1-0. This combination can be identified as the element helium, atomic number 2. Helium is a member of a family of elements known as the inert gases, a name that has been applied because of their reluctance to enter into chemical combinations. The structural feature that is responsible for this chemical behavior is the absence of any effective rotation in the electric dimension.

The next element of this type has one additional unit of magnetic displacement. Since the probability factors operate to keep the eccentricity at a minimum, the resulting combination is 2-2-0, rather than 3-1-0. The succeeding increments of displacement similarly go alternately to the principal and subordinate rotations. Helium, 2-1-0, already has one effective displacement unit in each magnetic dimension, and the increase to 2-2-0 involves a second unit in one dimension. As previously indicated, the electric equivalent of n magnetic units is 2n². Unlike the addition of another electric unit, the addition of a magnetic unit is not a simple process of going from 1 to 2. In the case of the electric displacement there is first a single unit, then another single unit for a total of two, another bringing the total to three, and so on. But 2 x 1² = 2, and 2 x 2² = 8. In order to increase the total electric equivalent of the magnetic displacement from 2 to 8 it would be necessary to add the equivalent of 6 units of electric displacement, and there is no such thing as a magnetic equivalent of 6 electric units. The same situation arises in the subsequent additions, and the increase in magnetic displacement must therefore take place in full 2n² equivalents. Thus the succession of inert gas elements is not 2, 10, 16, 26, 36, 50, 64, as it would be if 2(n+1)² replaced 2n² in the same manner that n+1 replaces n in the electric series, but 2, 10, 18, 36, 54, 86, 118. For reasons which will be developed later, element 118 is unstable, and disintegrates if formed. The preceding six members of this series constitute the inert gas family of elements.

The number of mathematically possible combinations of rotations is greatly increased when electric displacement is added to these magnetic combinations, but the combinations that can actually exist as elements are limited by probability considerations, as noted in Chapter 9. The magnetic displacement is numerically less than the equivalent electric displacement, and is more probable for this reason. Its status as the essential basic rotation also gives it precedence over the electric rotation. Any increment of displacement consequently adds to the magnetic rotation if possible, rather than to the rotation in the electric dimension. This means that the role of the electric displacement is confined to filling in the intervals between the inert gas elements.

On this basis, if all rotational displacement in the material system were positive, the series of elements would start at the lowest possible magnetic combination, helium, and the electric displacement would increase step by step until it reached a total of 2n² units, at which point the relative probabilities would result in a conversion of these 2n² electric units into one additional unit of magnetic displacement, whereupon the building up of the electric displacement would be resumed. This behavior is modified, however, by the fact that electric displacement in ordinary matter, unlike magnetic displacement, may be negative instead of positive. The restrictions on the kinds of motions that can be combined do not apply to minor components of a system of motions of the same type, such as rotations. The net effective rotation of a material atom must be in space in order to give rise to those properties which are characteristic of ordinary matter. It necessarily follows that the magnetic displacement, which is the major component of the total, must be positive. But as long as the larger component is positive, the system as a whole can meet the requirement that the net rotation be in space (positive displacement) even if the smaller component, the electric displacement, is negative. It is possible, therefore, to increase the net positive displacement a given amount either by direct addition of the required number of positive electric units, or by adding a magnetic unit and then adjusting to the desired intermediate level by adding the appropriate number of negative units. Which of these alternatives will actually prevail is affected to a considerable degree by the conditions that exist in the atomic environment, but in the absence of any bias due to these conditions, the determining factor is the size of the electric displacement, lower displacement values being more probable than higher values.
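The group arithmetic just described can be put into a short computational sketch. The following Python fragment is purely editorial (the function name and the explicit list of n values are not part of the original presentation); it accumulates the 2n² equivalents, each n value being used twice, once for each magnetic dimension, and reproduces the inert gas series:

def inert_gas_series():
    # Each magnetic increment contributes 2*n^2 equivalent electric
    # displacement units; each n value occurs twice, once for each of
    # the two magnetic dimensions.
    totals = []
    running_total = 0
    for n in (1, 2, 2, 3, 3, 4, 4):
        running_total += 2 * n ** 2
        totals.append(running_total)
    return totals

print(inert_gas_series())    # [2, 10, 18, 36, 54, 86, 118]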
In the first half of each group intermediate between two inert gas elements, the electric displacement is minimized if the increase in atomic number (equivalent electric displacement) is accomplished by direct
addition of positive displacement. When n² units have been added, the probabilities are nearly equal, and as the atomic number increases still further, the alternate arrangement becomes more probable. In the latter half of each group, therefore, the increase in atomic number is normally attained by adding one unit of magnetic displacement, and then reducing to the required net total by adding negative electric displacement, eliminating successive units of the latter to move up the atomic series.

By reason of the availability of negative electric displacement as a component of the atomic rotation, an element with a net displacement less than that of helium becomes possible. Adding one negative electric displacement unit to helium, thereby, in effect, subtracting one positive electric unit from the equivalent of two units (above the rotational base) that helium possesses, produces this element, 2-1-(1), which we identify as hydrogen. Hydrogen is the first in the ascending series of elements, and we may therefore give it the atomic number 1. The atomic number of any other material element is its net equivalent electric displacement.

Above helium, 2-1-0, we find lithium, 2-1-1, beryllium, 2-1-2, boron, 2-1-3, and carbon, 2-1-4. Since this is an 8-element group, the probabilities are nearly even at this point, and carbon can also exist as 2-2-(4). The elements that follow move up the atomic series by reducing the negative displacements: nitrogen, 2-2-(3), oxygen, 2-2-(2), fluorine, 2-2-(1), and finally the next inert gas, neon, 2-2-0. Another similar 8-element group follows, adding a second magnetic unit in the other magnetic dimension. This carries the series up to another inert gas element, argon, 3-2-0. Table 1 shows the normal displacements of the elements to, and including, argon.

TABLE 1
THE ELEMENTS OF THE LOWER GROUPS

Displacements      Element        Atomic Number
2-1-(1)            Hydrogen        1
2-1-0              Helium          2
2-1-1              Lithium         3
2-1-2              Beryllium       4
2-1-3              Boron           5
2-1-4, 2-2-(4)     Carbon          6
2-2-(3)            Nitrogen        7
2-2-(2)            Oxygen          8
2-2-(1)            Fluorine        9
2-2-0              Neon           10
2-2-1              Sodium         11
2-2-2              Magnesium      12
2-2-3              Aluminum       13
2-2-4, 3-2-(4)     Silicon        14
3-2-(3)            Phosphorus     15
3-2-(2)            Sulfur         16
3-2-(1)            Chlorine       17
3-2-0              Argon          18
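The rule underlying Table 1 can be checked mechanically. In the sketch below (Python; the names are editorial, not part of the original presentation) the atomic number is obtained by adding the electric displacement, positive or negative, to the cumulative 2n² total of the magnetic combination, that is, to the atomic number of the corresponding inert gas:

# Atomic numbers of the a-b-0 (inert gas) combinations, taken from the
# cumulative 2*n^2 totals developed above.
INERT_GAS = {(2, 1): 2, (2, 2): 10, (3, 2): 18, (3, 3): 36,
             (4, 3): 54, (4, 4): 86, (5, 4): 118}

def atomic_number(a, b, c):
    # c is the electric displacement; values shown in parentheses in the
    # tables are negative and are simply subtracted.
    return INERT_GAS[(a, b)] + c

print(atomic_number(2, 1, -1))   # hydrogen 2-1-(1) -> 1
print(atomic_number(2, 1, 4))    # carbon   2-1-4   -> 6
print(atomic_number(2, 2, -3))   # nitrogen 2-2-(3) -> 7
print(atomic_number(3, 2, 0))    # argon    3-2-0   -> 18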

At element 18, argon, the magnetic displacement has reached a level of two units above the rotational base in each of the magnetic dimensions. In order to increase the rotation in either dimension by an additional unit a total of 2x3², or 18, units of electric displacement are required. This results in a group of 18 elements which reaches the midpoint at cobalt, 3-2-9, and terminates at krypton, 3-3-0. A second 18-element group follows, as indicated in Table 2.

TABLE 2
THE INTERMEDIATE ELEMENTS

Displacements      Element        Atomic Number
3-2-1              Potassium      19
3-2-2              Calcium        20
3-2-3              Scandium       21
3-2-4              Titanium       22
3-2-5              Vanadium       23
3-2-6              Chromium       24
3-2-7              Manganese      25
3-2-8              Iron           26
3-2-9, 3-3-(9)     Cobalt         27
3-3-(8)            Nickel         28
3-3-(7)            Copper         29
3-3-(6)            Zinc           30
3-3-(5)            Gallium        31
3-3-(4)            Germanium      32
3-3-(3)            Arsenic        33
3-3-(2)            Selenium       34
3-3-(1)            Bromine        35
3-3-0              Krypton        36
3-3-1              Rubidium       37
3-3-2              Strontium      38
3-3-3              Yttrium        39
3-3-4              Zirconium      40
3-3-5              Niobium        41
3-3-6              Molybdenum     42
3-3-7              Technetium     43
3-3-8              Ruthenium      44
3-3-9, 4-3-(9)     Rhodium        45
4-3-(8)            Palladium      46
4-3-(7)            Silver         47
4-3-(6)            Cadmium        48
4-3-(5)            Indium         49
4-3-(4)            Tin            50
4-3-(3)            Antimony       51
4-3-(2)            Tellurium      52
4-3-(1)            Iodine         53
4-3-0              Xenon          54
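The division of each group into a first half, in which positive electric displacement is added to the lower magnetic combination, and a second half, in which negative electric displacement is subtracted from the next higher one, can likewise be expressed as a short sketch. The Python fragment below is purely illustrative; at the exact midpoint, where the text notes that the probabilities are nearly equal, it arbitrarily returns the first-half form.

# Atomic numbers and magnetic displacements of the inert gases, in order.
INERT_GASES = [((2, 1), 2), ((2, 2), 10), ((3, 2), 18), ((3, 3), 36),
               ((4, 3), 54), ((4, 4), 86), ((5, 4), 118)]

def normal_displacements(z):
    # Hydrogen and helium belong to the lowest magnetic combination, 2-1.
    if z <= 2:
        return (2, 1, z - 2)
    for (mag_low, z_low), (mag_high, z_high) in zip(INERT_GASES, INERT_GASES[1:]):
        if z_low < z <= z_high:
            midpoint = (z_high - z_low) // 2        # n^2 positions past z_low
            if z - z_low <= midpoint:
                return mag_low + (z - z_low,)       # first half: positive electric
            return mag_high + (z - z_high,)         # second half: negative electric

print(normal_displacements(15))   # phosphorus -> (3, 2, -3)
print(normal_displacements(26))   # iron       -> (3, 2, 8)
print(normal_displacements(28))   # nickel     -> (3, 3, -8)
print(normal_displacements(92))   # uranium    -> (4, 4, 6)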

The final two groups of elements, Table 3, contain 2x4², or 32, members each. The heaviest elements of the last group have not yet been observed, as they are highly radioactive, and consequently unstable in the terrestrial environment. In fact, uranium, element number 92, is the heaviest that exists naturally on earth in any substantial quantities. As we will see later, however, there are other conditions under which the elements are stable all the way up to number 117.

TABLE 3
THE ELEMENTS OF THE HIGHER GROUPS

Displacements      Element          Atomic Number
4-3-1              Cesium            55
4-3-2              Barium            56
4-3-3              Lanthanum         57
4-3-4              Cerium            58
4-3-5              Praseodymium      59
4-3-6              Neodymium         60
4-3-7              Promethium        61
4-3-8              Samarium          62
4-3-9              Europium          63
4-3-10             Gadolinium        64
4-3-11             Terbium           65
4-3-12             Dysprosium        66
4-3-13             Holmium           67
4-3-14             Erbium            68
4-3-15             Thulium           69
4-3-16, 4-4-(16)   Ytterbium         70
4-4-(15)           Lutetium          71
4-4-(14)           Hafnium           72
4-4-(13)           Tantalum          73
4-4-(12)           Tungsten          74
4-4-(11)           Rhenium           75
4-4-(10)           Osmium            76
4-4-(9)            Iridium           77
4-4-(8)            Platinum          78
4-4-(7)            Gold              79
4-4-(6)            Mercury           80
4-4-(5)            Thallium          81
4-4-(4)            Lead              82
4-4-(3)            Bismuth           83
4-4-(2)            Polonium          84
4-4-(1)            Astatine          85
4-4-0              Radon             86
4-4-1              Francium          87
4-4-2              Radium            88
4-4-3              Actinium          89
4-4-4              Thorium           90
4-4-5              Protactinium      91
4-4-6              Uranium           92
4-4-7              Neptunium         93
4-4-8              Plutonium         94
4-4-9              Americium         95
4-4-10             Curium            96
4-4-11             Berkelium         97
4-4-12             Californium       98
4-4-13             Einsteinium       99
4-4-14             Fermium          100
4-4-15             Mendelevium      101
4-4-16, 5-4-(16)   Nobelium         102
5-4-(15)           Lawrencium       103
5-4-(14)           Rutherfordium    104
5-4-(13)           Hahnium          105
5-4-(12)                            106
5-4-(11)                            107
5-4-(10)                            108
5-4-(9)                             109
5-4-(8)                             110
5-4-(7)                             111
5-4-(6)                             112
5-4-(5)                             113
5-4-(4)                             114
5-4-(3)                             115
5-4-(2)                             116
5-4-(1)                             117

For convenience in the subsequent discussion these groups of elements will be identified by the magnetic n value, with the first and second groups in each pair being designated A and B respectively. Thus the sodium group, which is the second of the 8-element groups (n=2) will be called Group 2B. At this point it will be appropriate to refer back to this statement that was made in Chapter 9: The (mathematical) development will begin with nothing more than the series of cardinal numbers and the geometry of three dimensions. By subjecting these to simple mathematical processes, the applicability of which to the physical universe of motion is specified in the fundamental postulates, the combinations of rotational motions that can exist in the theoretical universe will be ascertained, and it will be shown that these rotational combinations that theoretically can exist can be individually identified with the atoms of the chemical elements and the sub-atomic particles that are observed to exist in the physical universe. A unique group of numbers representing the different rotational components will be derived for each of these combinations. A review of the manner in which the figures presented in Tables 1 to 3 were derived will show that this commitment, so far as it applies to the elements, has been carried out in full. This is a very significant accomplishment. Both the existence of a series of theoretical elements identical with the observed series of chemical elements, and the numerical values which theoretically characterize each individual element have been derived from the general
properties of mathematics and geometry, without making any supplementary assumptions or introducing any numerical values specifically applicable to matter. The possibility that the agreement between the series of elements thus derived and the known chemical elements could be accidental is negligible, and this derivation is, in itself, a conclusive proof that the atoms of matter are combinations of motions, as asserted by the Reciprocal System of theory. But this is only the beginning of a vast process of mathematical development. The numerical values at which we have arrived, the atomic numbers and the three displacement values for each element, now provide us with the basis from which we can derive the quantitative relations in the areas that we will examine. The behavior characteristics, or properties, of the elements are functions of their respective displacements. Some are related to the total net effective displacement (equal to the atomic number in the combinations thus far discussed), some are related to the electric displacement, others to the magnetic displacement, while still others follow a more complex pattern. For instance, valence, or chemical combining power, is determined by either the electric displacement or one of the magnetic displacements, while the inter-atomic distance is affected by both the electric and magnetic displacements, but in different ways. The manner in which the magnitudes of these properties for specific elements and compounds can be calculated from the displacement values has been determined for many properties and for many classes of substances. These subjects will be considered individually in the chapters that follow. One of the most significant advances toward an understanding of the relations between the structures of the different chemical elements and their properties was the development of the periodic table by Mendeleeff in l869. In this diagram the elements are arranged horizontally in periods and vertically in groups, the order within the period being that of the atomic number (approximately defined in the original work by the atomic weights). When the elements are correctly assigned in the periods, those in the vertical groups are quite similar in their properties. On comparing the periodic table with the rotational characteristics of the elements, as tabulated in this chapter, it is evident that the horizontal periods reflect the magnetic rotational displacement, while the vertical groups represent the electric rotational displacement. In revising the table to take advantage of the additional information derived from the Reciprocal System of theory we may therefore replace the usual group and period numbering by the more meaningful displacement values. When this is done it is apparent that a further revision of the tabular arrangement is required in order to put all of the elements into their proper positions. Mendeleeff's original table included nine vertical groups, beginning with the inert gases, Group O, and ending with a group in which the three elements iron, cobalt, and nickel, and the corresponding elements in the higher periods, were all assigned to a single vertical position. In the more modern versions of the table the number of vertical groups has been expanded to avoid splitting each of the longer periods into two sub-periods, as was done by Mendeleeff. 
One of the most popular of these revised versions utilizes 18 vertical groups, and puts 15 elements of each of the last two periods into one of these 18 positions in order to accommodate the full number of elements. In the light of the new information now available, it can be seen that Mendeleeff based his
arrangement on the relations existing in the 8-element rotational groups, 2A and 2B in the notation used in this work, and forced the elements of the larger groups into conformity with this 8-element pattern. The modern revisers have made a partial correction by setting up their tables on the basis of the 18-element rotational groups, 3A and 3B, leaving blank spaces where the 8-element groups have no counterparts of the 18-element values. But these tables still retain a part of the original distortion, as they force the members of the 32-element groups into the 18-element pattern. To construct a complete and accurate table, it is only necessary to carry the revision procedure one step farther, and set up the table on the basis of the largest of the magnetic groups, the 32-element groups 4A and 4B.

For some purposes a simple extension of the current versions of the table to the full 32 position width necessary to accommodate Groups 4A and 4B is probably all that is needed. On the other hand, the useful chemical information displayed by the table is confined mainly to the elements with electric displacements below 10, and separating the central elements of the two upper groups from the main portion of the table, as in the conventional arrangements, has considerable merit. The particular elements that are thus separated on the basis of the electric displacement are not the same ones that are treated separately in the conventional tables, but the general effect is much the same. When the table is thus divided into two sections, it also appears that there are some advantages to be gained by a vertical, rather than a horizontal, arrangement, and the revised table, Fig. 1, has been set up on this basis. The new concept of "divisions," which is emphasized in this table, will be explained in Chapter 18. Inasmuch as carbon and silicon play both positive and negative roles rather freely, they have each been assigned to two positions in the table, but hydrogen, which is usually shown in two positions in the conventional tables, is necessarily negative on the basis of the principles that have been developed in this work and is only shown in one position. The aspects of its chemical behavior that have led to its classification with the electropositive elements will also be explained in Chapter 18.

[Figure 1, The Revised Periodic Table of the Elements, appears here. The elements are arranged by magnetic displacement (2-1, 2-2, 3-2, 3-3, 4-3, and 4-4, with a separate section for the central members of the 4-3, 4-4, and 5-4 groups) and by electric displacement, both positive and negative, with the divisions I through IV marked. Carbon and silicon each occupy two positions; hydrogen occupies one.]
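The group designations used in this discussion (2A and 2B for the 8-element groups, 3A and 3B for the 18-element groups, 4A and 4B for the 32-element groups) follow directly from the inert gas boundaries and can be assigned mechanically. The small Python sketch below is editorial; in particular, the assumption that each group runs from just above one inert gas up to and including the next, and the treatment of hydrogen and helium, are not stated in the original text.

# Group boundaries taken at the inert gases: each group is assumed to run
# from just above one inert gas up to and including the next.
GROUPS = [(2, 10, "2A"), (10, 18, "2B"), (18, 36, "3A"),
          (36, 54, "3B"), (54, 86, "4A"), (86, 118, "4B")]

def group_label(z):
    for low, high, label in GROUPS:
        if low < z <= high:
            return label
    return None          # hydrogen and helium are left unlabeled here

print(group_label(11))   # sodium  -> 2B (the sodium group)
print(group_label(26))   # iron    -> 3A
print(group_label(92))   # uranium -> 4B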

In the original construction of the periodic table the known properties of certain elements were combined with the atomic number sequence to establish the existence of the relations between the elements of the various periods and groups, and thereby to predict previously undetermined properties, and even the existence of some previously unknown elements. The table thus added significantly to the chemical knowledge of the time. In this work, however,
the revised table is not being presented as an addition to the information contained in the preceding pages, but merely as a convenient graphic method of expressing some portions of that information. Everything that can be learned from the table has already been set forth in more detailed form, verbally and mathematically, in this and the earlier chapters. Some of the implications of this information, such as its application to the property of valence, will have further consideration later.

CHAPTER 11

Sub-Atomic Particles
While the series of elements contains no combinations of motions with net positive displacement less than that of hydrogen, 2-1-(1), this does not mean that such combinations are non-existent. It merely means that they do not have sufficient speed displacement to form two complete rotating systems, and consequently do not have the properties, which distinguish the rotational combinations that we call atoms. These less complex combinations of motion can be identified as the sub-atomic particles. As is evident from the foregoing, these particles are not constituents of atoms, as seen in current scientific thought. They are structures of the same general nature as the atoms of the elements, but their net total displacement is below the minimum necessary to form the complete atomic structure. They may be characterized as incomplete atoms. The term ―sub-atomic‖ is currently applied to these particles with the implication that the particles are, or can be, building blocks from which atoms are constructed. Our new findings make this sense of the term obsolete, but the name is still appropriate in the sense of a system of motions of a lower degree of complexity than atoms. It will therefore be retained in this work, and applied in this modified sense. The term ―elementary particle‖ must be discarded. There are no ―elementary‖ particles in the sense of basic units from which other structures can be formed. A particle is smaller and less complex than an atom, but it is by no means elementary. The elementary unit is the unit of motion. The theoretical characteristics of the sub-atomic particles, as derived from the postulates of the Reciprocal System, have been given considerable additional study since the date of the last previous publication in which they were discussed, and there has been a significant increase in the amount of information that is available with respect to these objects, including the theoretical discovery of some particles that are more complex than those described in the first edition. Furthermore, we are now in a position to examine the structure and behavior of the cosmic sub-atomic particles in greater depth (in the later chapters). In order to facilitate the presentation of this increased volume of information, a new system of representing the dimensional distribution of the rotation has been adopted. This means, of course, that we are now using one system of notation for the rotation of the elements, and a different system to represent the rotations of the same nature when we are dealing with the particles. At first glance, this may seem to be introducing an unnecessary
complication, but the truth is that as long as we want to take advantage of the convenience of using the double displacement unit in dealing with the elements, while we must use the single unit in dealing with the particles, we are necessarily employing two different systems, whether they look alike or not. In fact, lack of recognition of this difference has led to some of the confusion that we now wish to avoid. It appears, therefore, that as long as two different systems of notation are necessary for convenient handling of the data, we might as well set up a system for the particles in a manner that will best serve our purposes, including being distinctive enough to avoid confusion. The new notation used in this edition will indicate the displacements in the different dimensions, as in the first edition, and will express them in single units, as before, but it will show only effective displacements, and will include a letter symbol that will specifically designate the rotational base of the particle. It is necessary to take the initial non-effective rotational unit into consideration in dealing with the elements because of the characteristics of the mathematical processes that we will utilize. This is not true in the case of the subatomic particles, and as long as the atomic (double) notation cannot be used in any event, we will show only the effective displacements, and will precede them with either M or C to indicate whether the rotational base of the combination is material or cosmic. This will have the added advantage of clearly indicating that the rotational values in any particular case are being expressed in the new notation. This change in the symbolic representation of the rotations, and the other modifications of terminology that we are making in this edition, may introduce some difficulties for those who have already become accustomed to the manner of presentation in the earlier works. It seems advisable, however, to take advantage of any opportunities for improvement in this respect that may be recognized in the present early stage of the theoretical development, as improvements of this nature will become progressively less feasible as time goes on and existing practices become resistant to change. On the new basis, the material rotational base is M 0-0-0. To this base may be added a unit of positive electric displacement, producing the positron, M 0-0-1, or a unit of negative electric displacement, in which case the result is the electron, M 0-0-(1). The electron is a unique particle. It is the only structure constructed on a material base, and therefore stable in the local environment, that has an effective negative displacement. This is possible because the total rotational displacement of the electron is the sum of the initial positive magnetic unit required to neutralize the negative photon displacement (not shown in the structural notation) and the negative electric unit. As a two-dimensional motion, the magnetic unit is the major component of the total rotation, even though its numerical magnitude is no greater than that of the one-dimensional electric rotation. The electron thus complies with the requirement that the net total rotation of a material particle must be positive. As brought out earlier, adding motion with negative displacement has the effect of adding more space to the existing physical situation, whatever it may be, and the electron is therefore, in effect, a rotating unit of space. 
We will see later that this fact plays an important part in many physical phenomena. One immediate, and very noticeable, result is that electrons are plentiful in the material environment, whereas positrons are extremely rare. On the basis of the same considerations that apply to the electron, we can regard the
positron as essentially a rotating unit of time. As such, it is readily absorbed into the material system of combinations, the constituents of which are predominantly time structures; that is, rotational motions with net positive displacement (speed = 1/t). The opportunities for utilizing the negative displacement of the electrons in these structures, on the contrary, are very limited.

If the addition to the rotational base is a magnetic unit rather than an electric unit, the result could be expressed as M 1-0-0. It now appears, however, that the notation M ½-½-0 is preferable. Of course, half units do not exist, but a unit of two-dimensional rotation obviously occupies both dimensions. To recognize this fact we will have to credit one half to each. The ½-½ notation also ties in better with the way in which this system of motions enters into further combinations. We will call this M ½-½-0 particle the massless neutron, for reasons which will appear shortly.

At the unit level in a single rotating system, the magnetic and electric units are numerically equal; that is, 1² = 1. Addition of a unit of negative electric displacement to the M ½-½-0 combination of motions, the massless neutron, therefore produces a combination with a net total displacement of zero. This combination, M ½-½-(1), can be identified as the neutrino.

In the preceding chapter, the property of the atoms of matter known as atomic weight, or mass, was identified with the net positive three-dimensional rotational displacement (speed) of the atoms. This property will be discussed in more detail in the next chapter, but at this time we will note that the same relationship also applies to the sub-atomic particles; that is, these particles have mass to the extent that they have net positive rotational displacement in three dimensions. None of the particles thus far considered meets this requirement. The electron and the positron have effective rotation in one dimension; the massless neutron in two. The neutrino has no net displacement at all. The sub-atomic rotational combinations thus far identified are therefore massless particles. By combination with other motions, however, the displacement in one or two dimensions can attain the status of a component of a three-dimensional displacement. For instance, a particle may acquire a charge, which is a motion of a kind that will be examined later in the development, and when this happens, the entire displacement, both of the charge and of the original particle, will then manifest itself as mass. Or a particle may combine with other motions in such a way that the displacement of the massless particle becomes a component of the three-dimensional displacement of the combination structure.

Addition of a unit of positive, instead of negative, electric displacement to the massless neutron would produce M ½-½-1, but the net total displacement of this combination is 2, which is sufficient to form a complete double rotating system, an atom, and the greater probability of the double structure precludes the existence of the M ½-½-1 combination, other than momentarily. The same probability considerations likewise exclude the two-unit magnetic structure M 1-1-0, and its positive derivative M 1-1-1, which have net displacements of 2 and 3 respectively. However, the negative derivative, M 1-1-(1), formed in practice by the addition of a neutrino, M ½-½-(1), to a massless neutron, M ½-½-0, can exist as a particle, as its net total displacement is only one unit; not enough to make the double structure mandatory. This particle can be identified as the proton.

Here we have an illustration of the way in which particles that are individually massless, because they have no three-dimensional rotation, combine to produce a particle with an effective mass. The massless neutron rotates only two-dimensionally, while the neutrino has no net rotation. But by adding the two, a combination with effective rotation in all three dimensions is produced. The resulting particle, the proton, M 1-1-(1), has one unit of mass.

At the present, rather early, stage of the theoretical development it is not possible to make a precise evaluation of the probability factors and other influences that determine whether or not a theoretically feasible rotational combination will actually be able to exist under a given set of circumstances. The information now available indicates, however, that any material type combination with a net displacement less than 2 should be capable of existing as a particle in the local environment. In actual practice none of the single system combinations identified in the preceding paragraphs has been observed, and there is considerable doubt as to whether there is any way whereby they can be observed, other than through indirect processes which enable us to infer their existence. The neutrino, for example, is "observed" only by means of the products of certain events in which this particle is presumed to participate. The electron, the positron, and the proton have been observed only in the charged state, not in the uncharged condition, which constitutes the basic state of all of the rotational combinations thus far discussed. Nevertheless, there is sufficient evidence to indicate that all of these uncharged structures do, in fact, exist, and play significant roles in physical processes. This evidence will be forthcoming as we continue the theoretical development.

In the previous publications, the M ½-½-0 combination (1-1-0 in the notation utilized in those works) was identified as the neutron, but it was noted that in some physical processes, such as cosmic ray decay, the magnetic displacement that could be expected to be ejected in the form of neutrons is actually transferred in massless form. Since the observed neutron is a particle of unit atomic weight, it was at that time concluded that in these particular instances the neutrons act as combinations of neutrinos and positrons, both massless particles. On this basis, the neutron plays a dual role, massless under some conditions, but possessing unit mass under other circumstances. Further investigation, centering mainly on the secondary mass of the sub-atomic particles, which will be discussed in Chapter 13, has now disclosed that the observed neutron is not the single effective magnetic rotation with net displacements M ½-½-0, but a more complex particle of the same net displacement, and that the single magnetic displacement is massless. It is no longer necessary to conclude that the same particle acts in two different ways. Instead, there are two different particles. The explanation is that the new findings have revealed the existence of a type of structure intermediate between the single rotating systems of the massless particles and the complete double systems of the atoms. In these intermediate structures there are two rotating systems, as in the atoms of the elements, but only one of these systems has a net effective displacement.
The rotation in this system is that of the proton, M 1-1-(1). In the second system there is a neutrino type rotation. The massless rotations of the second system can be either those of the material neutrino, M ½-½-(1), or those of the cosmic neutrino, C ½-½-1. With the material neutrino rotation the combined displacements are M 1½-1½-(2). This combination is the mass one isotope of hydrogen, a structure identical with that of the normal mass two atom (deuterium), M 2-2-(2), or M 2-1-(1) in the atomic notation, except that it has one less unit of magnetic rotation, and therefore one less unit of mass. When the cosmic neutrino rotation is added to the proton, the combined displacements are M 2-2-0, the same net total as that of the single magnetic rotation. This theoretical particle, the compound neutron, as we will call it, can be identified as the observed neutron.

The identification of the separate rotations of these intermediate type structures with the rotations of the neutrinos and protons should not be interpreted as meaning that neutrinos and protons actually exist as such in the combination structures. What is meant is that one of the component rotations that constitutes the compound neutron, for instance, is the same kind of a rotation as that which constitutes a proton when it exists separately.

Inasmuch as the net total displacement of the compound neutron is identical with that of the massless neutron, those aspects of the behavior of the particles–properties, as they are called–which are dependent on the net total displacement are the same for both. Likewise, those properties that are dependent on total magnetic displacement, or total electric displacement, are identical. But there are other properties that are related to those features of the particle structure in which the two neutrons differ. The compound neutron has an effective unit of three-dimensional displacement in the rotating system with the proton type rotation, and it therefore has one unit of mass. The massless neutron, on the other hand, has no effective three-dimensional displacement, and therefore no mass. The two neutrons also differ in that, although it is (or at least, as we will see in Chapter 17, may be) a still unobserved particle, the massless neutron is theoretically stable in the material environment, whereas the life of the compound neutron is short because of the "foreign" nature of the rotation in the second system. After about 15 minutes, on the average, the compound neutron ejects the second rotating system in the form of a cosmic neutrino, and the particle reverts to the proton status.

The structures of the sub-atomic particles of the material system may now be summarized as follows:

Massless particles
M 0-0-0                  rotational base
M 0-0-1                  positron
M 0-0-(1)                electron
M ½-½-0                  massless neutron
M ½-½-(1)                neutrino

Particles with mass
M+ 0-0-1                 charged positron
M- 0-0-(1)               charged electron
M 1-1-(1)                proton
M+ 1-1-(1)               charged proton
M 1-1-(1) C (½)-(½)-1    compound neutron
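The mass criterion stated above (a particle has mass only when it has net positive rotational displacement in all three dimensions) can be restated compactly. The Python sketch below is purely editorial; it encodes the effective displacements listed for the single-system particles, and the way the two conditions are combined is an interpretation of the text, not part of it. The charged states and the compound neutron, which involve additional motions, are omitted.

from fractions import Fraction

HALF = Fraction(1, 2)

# Effective displacements (two magnetic dimensions, one electric) of the
# single-system particles of the material system, as listed above.
PARTICLES = {
    "rotational base":  (0, 0, 0),
    "positron":         (0, 0, 1),
    "electron":         (0, 0, -1),
    "massless neutron": (HALF, HALF, 0),
    "neutrino":         (HALF, HALF, -1),
    "proton":           (1, 1, -1),
}

def has_mass(displacements):
    # Mass requires effective displacement in all three dimensions and a
    # positive net total.
    all_dimensions_effective = all(d != 0 for d in displacements)
    net_total_positive = sum(displacements) > 0
    return all_dimensions_effective and net_total_positive

for name, displacements in PARTICLES.items():
    status = "has mass" if has_mass(displacements) else "massless"
    print(f"{name:17s} {status}")    # only the proton has mass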

CHAPTER 12

Basic Mathematical Relations
It was pointed out in the introductory chapters that when we postulate a universe composed entirely of motion, every entity or phenomenon that exists in this universe is either a motion, a combination of motions, or a relation between motions. The discussion thus far has been addressed mainly to an examination of the primary features of the possible motions, and certain of the combinations of these motions. At this point it will be advisable to consider some of the basic kinds of relations that exist between motions. Inasmuch as motion in general is defined as a relation between space and time, expressed symbolically by s/t, all of the different kinds of motions, and the relations between motions, can be expressed in space-time terms. Such an analysis into space and time components will be particularly helpful in putting the various physical relationships into the proper perspective, and our first objective in the field we are now entering will therefore be to establish the space-time equivalents of the various quantities that constitute the so-called ―mechanical‖ system. Consideration of the analogous quantities of the electrical system will be deferred until we are ready to begin an examination of electrical phenomena. One set of these mechanical quantities is customarily expressed in velocity terms, and it presents no problems. One-dimensional velocity is, by definition, s/t. It follows that twodimensional and three-dimensional velocity is s²/t² and s³/t³ respectively. Acceleration, the time rate of change of one-dimensional velocity, is s/t². In addition to these quantities which express motion as velocities (or speeds), there is also a set of quantities which are fundamentally based on resistance to movement, although in some applications this basic significance is obscured by other factors. The objects, which resist movement, are atoms and particles of matter: three-dimensional combinations of motions. In a universe of motion, where nothing exists but motion, the only thing that can resist change of motion is motion. The particular motion that resists any change in the motion of an atom is the inherent motion of the atom itself, the motion that makes it an atom. Furthermore, only a three-dimensional motion, or motion that is automatically distributed over three dimensions, is able to offer effective resistance, as any vacant dimension permits motion to take place without hindrance. The magnitude of the resistance can be expressed in terms of the quantity required to eliminate the effective existing motion; that is, to reduce this motion to unity in the conventional reference system. This is the inverse of the motion of the atom, s³/t³, and the

resistance to motion, or inertia, is therefore t³/s³. In more general application, inertia is known as mass. Inasmuch as current physical theory recognizes gravitation and inertia as phenomena of a quite different character, the equivalence of gravitational and inertial mass, which has been experimentally demonstrated to the almost incredible accuracy of less than one part in 10¹¹, is regarded as very significant, although there is considerable difference of opinion as to what that significance actually is. As expressed by Clifford M. Will, "the theoretical interpretation of the Eötvös experiment (which demonstrates the equivalence) has varied."57 Will asserts that it is now believed that the results of this experiment rule out all non-metric theories of gravitation (he defines metric theories as those "in which gravitation can be treated as being synonymous with the curvature of space and time"). After the theorists have arrived at such a far-reaching conclusion on the basis of what Will admits is no more than a "conjecture," it comes as something of an anticlimax when the Reciprocal System reveals that nothing of an esoteric nature is involved. Gravitation is a motion, but it can manifest itself either directly as motion or inversely as resistance to another motion.

Multiplying mass, t³/s³, by velocity, s/t, we obtain momentum, t²/s², the reciprocal of two-dimensional velocity. Another multiplication by velocity, s/t, gives us energy, t/s. Energy, then, is the reciprocal of velocity. When one-dimensional motion is not restrained by opposing motion (force) it manifests itself as velocity; when it is so restrained it manifests itself as potential energy. Kinetic energy is merely "energy in transit," so to speak. It is a measure of the energy that has been used to produce the velocity of a mass (½mv² = ½ t³/s³ x s²/t² = ½ t/s), and can be extracted for other use by terminating the motion (velocity).

This explanation of the nature of energy should be of some assistance to those who are still having some difficulty with the concept of scalar motion. Both speed and energy are scalar measures of motion. But on our side of the unit speed boundary, the low-speed side, where all motion is in space, speed can be represented in our conventional spatial system of reference because it causes a change of position, inward or outward, in space, whereas energy cannot be so represented. On the high-speed side of the boundary, the relations are inverted. There all motion is in time, and the measure of that motion, the energy, t/s, the inverse of speed, s/t, can be represented in a stationary temporal reference system, whereas speed is neither inward nor outward from the time standpoint, and cannot be represented in the temporal coordinate system. Here is the reason for the purely scalar nature of any increment of speed beyond the unit level, such as those discussed in Chapter 8. The added speed does have a direction, but it is a direction in time, and it has no vectorial effect in a spatial system of reference. We will find this very significant when we undertake an examination of some of the recently discovered high-speed astronomical objects in Volume II.

Force, which is defined as the product of mass and acceleration, becomes t³/s³ x s/t² = t/s². Acceleration and force are thus inverse quantities, in the sense in which that term is generally used in this work; that is, they are identical except that space and time are
interchanged. They are not inverse in the mathematical sense, as their product is not equal to unity. One special type of force that is of particular interest is the gravitational force, that which the aggregates of matter appear to exert on each other by reason of their motions inward in space. In this case, the mathematical expression F = kmm'/d² by which the force is ordinarily calculated is quite different from the general force equation F = ma. When taken at their face value, these two expressions are clearly irreconcilable. If gravitational force is actually a force, even a force of the "as if" variety, it cannot be proportional to the product of two masses (that is, to m²) when force in general is proportional to the first power of the mass. There is an obvious contradiction here.

Most of the other common quantities of the mechanical system can be reduced to space-time terms without any complications. For example:

Impulse, the product of force and time, has the same dimensions as momentum. Ft = t/s² x t = t²/s²
Both work and torque are the products of force and distance, and have the same dimensions as energy. Fs = t/s² x s = t/s
Pressure is force per unit area. F/s² = t/s² x 1/s² = t/s⁴
Density is mass per unit volume. m/s³ = t³/s³ x 1/s³ = t³/s⁶
Viscosity is mass per unit length per unit time. m x 1/s x 1/t = t³/s³ x 1/s x 1/t = t²/s⁴
Surface tension is force per unit length. F/s = t/s² x 1/s = t/s³
Power is work per unit time. W/t = t/s x 1/t = 1/s

All of the established relations in the field of mechanics have the same dimensional consistency on the basis of these space-time dimensions as in the conventional forms, since the mass terms in the equations are, in all cases, balanced by derivatives of mass on the opposite side of the equation. The numerical values in these equations likewise retain the same relationships, as all that we have done, from this standpoint, is to change the size of the unit in which the quantity of mass is expressed. What has been accomplished, then, is to express mass in terms of the components of motion. Since mechanics deals
only with space, time, and mass, it follows that, so far as mechanics is concerned, by reducing mass to motion we have confirmed the validity of the basic postulate that the physical universe is composed entirely of motion. This is a very significant point. The concept of the nature of the physical universe on which conventional physics is based, the concept of a universe of matter existing in a framework provided by space and time, identifies matter as a fundamental quantity. The results of this present work now show that, in the physical field that is the most completely developed and understood, the fundamental entity is motion, not matter. Furthermore, it is now possible to see why the common denominator of the universe has to be motion; why it could not be anything else. It has to be something to which all of the mechanical quantities can be reduced (and all other physical quantities as well, but for the present we are examining the mechanical relations). The only entity that meets these requirements is the simple relationship between space and time that we are defining as motion. Motion is the common denominator of the field of mechanics. It still remains to be established that motion is the common denominator of the entire universe, but the demonstration that all of the quantities with which mechanics deals, including mass, can be reduced to motion creates a strong presumption that when the more complex phenomena in other fields are equally well understood they will also be found to be reducible to motion. The development of theory in the subsequent pages of this and the volumes to follow will show that this logical expectation is realized, and that all physical phenomena and entities can, in fact, be reduced to motion.

The application of the Reciprocal System of theory to mechanics throws a significant light on the relation of this theoretical system to conventional scientific thought. It was asserted in Chapter 6 that the concept of a universe of motion, on which the new theoretical system is based, is "just the kind of a conceptual alteration that is needed to clear up the existing physical situation: one which makes drastic changes where such changes are required, but leaves the empirically determined relations of our everyday experience essentially untouched." Here, in application to a field in which the entire body of knowledge is a network of "empirically determined relations," the validity of this assertion is dramatically demonstrated. The only change that is found to be necessary in mechanics is to recognize the fact that mass is reducible to motion. Otherwise, the entire structure of mechanical theory is incorporated into the Reciprocal System just as it stands. As will be shown in the pages that follow, the same is true in other fields to the extent that the prevailing ideas in those fields are, like the principles of mechanics, solidly based on empirically determined facts. But where the prevailing ideas are based on assumptions, "free inventions of the human mind," in Einstein's words, the development of the theory of a universe of motion now shows that most of these invented ideas are erroneous, in part if not in their entirety. The Reciprocal System diverges from current scientific thought only in those respects where current theory has been led astray by erroneous assumptions. As indicated earlier, the phenomena involved are mainly those not accessible to direct apprehension, primarily the phenomena of the very small, the very large, and the very fast.
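The space-time reduction of the mechanical quantities lends itself to a purely mechanical check. The short Python sketch below is an illustrative aid rather than part of the derivation: it stores each quantity as a pair of exponents of s and t, as assigned in the preceding paragraphs, and verifies that the familiar defining relations of mechanics remain dimensionally consistent when mass is taken as t³/s³.

    # Each quantity is written as s^a * t^b and stored as the exponent pair (a, b);
    # multiplying quantities simply adds the exponents.
    def mul(*quantities):
        return tuple(sum(exponents) for exponents in zip(*quantities))

    SPACE        = (1, 0)     # s
    TIME         = (0, 1)     # t
    SPEED        = (1, -1)    # s/t
    ACCELERATION = (1, -2)    # s/t^2
    MASS         = (-3, 3)    # t^3/s^3
    ENERGY       = (-1, 1)    # t/s
    FORCE        = (-2, 1)    # t/s^2
    MOMENTUM     = (-2, 2)    # t^2/s^2

    assert mul(MASS, SPEED) == MOMENTUM            # momentum = mv
    assert mul(MASS, SPEED, SPEED) == ENERGY       # kinetic energy ~ mv^2
    assert mul(MASS, ACCELERATION) == FORCE        # F = ma
    assert mul(FORCE, SPACE) == ENERGY             # work = Fs
    assert mul(FORCE, TIME) == MOMENTUM            # impulse = Ft
    assert mul(FORCE, (-2, 0)) == (-4, 1)          # pressure = F/s^2 -> t/s^4
    assert mul(MASS, (-3, 0)) == (-6, 3)           # density = m/s^3 -> t^3/s^6
    print("All of the mechanical identities check out in space-time terms.")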
In all of the space-time expressions of physical quantities that were derived in the preceding pages of this chapter, the dimensions of the denominator of the fraction are either equal to or greater than the dimensions of the numerator. This is another result of the discrete unit postulate, which prevents any interactions from being carried beyond the unit level. Addition of speed displacement to motion in space reduces the speeds; the atomic rotation can take place only in the negative scalar direction, and so on. The same principle applies to the dimensions of physical quantities, and the dimensions of the numerator of the space-time expression of any real physical quantity cannot be greater than those of the denominator. Purely mathematical relations that violate this principle can, of course, be constructed, but according to the theoretical findings they have no real physical significance. For example, the reciprocal of viscosity is known as fluidity, and in certain applications it is more convenient for purposes of calculation to work with fluidity values rather than viscosity values. But the space-time expression for fluidity is s⁴/t², and on the basis of the principle just stated, we must conclude that viscosity is the quantity that has a real physical existence.

The most notable of the quantities excluded by this dimensional principle is "action." This is the product of energy, t/s, and time, t, and in space-time terms it is t²/s. Thus it is not admissible as a real physical quantity. In view of the prominent place which it occupies in some physical areas, this conclusion that it has no actual physical significance may come as quite a surprise, but the explanation can be seen if we examine the most familiar of the conventional applications of action: its use in the expression of Planck's constant. The equation connecting the energy of radiation with the frequency is E = hν, where h is Planck's constant. In order to be dimensionally consistent with the other quantities in the equation this constant must be expressed in terms of action. It is clear, however, from the explanation of the nature of the photon of radiation that was developed in Chapter 4, that the so-called "frequency" is actually a speed. It can be expressed as a frequency only because the space that is involved is always a unit magnitude. In reality, the space dimension belongs with the frequency, not with the Planck constant. When it is thus transferred, the remaining dimensions of the constant are t²/s², which are the dimensions of momentum, and are the reversing dimensions that are required to convert speed s/t to energy t/s. In space-time terms, the equation for the energy of radiation is t/s = t²/s² x s/t.

Similar situations have developed in other cases where dimensions have been improperly assigned in current practice. The energy of rotation, for instance, is commonly expressed as ½Iω², where I is the moment of inertia, and ω is the angular velocity. The moment of inertia is the product of the mass and the square of the distance: I = ms² = t³/s³ x s² = t³/s. This result shows that the moment of inertia is an artificial construct without physical significance. The important part that it plays in the expression for rotational energy may seem inconsistent with this conclusion, but again the explanation is that the space magnitude has been improperly assigned. It belongs with the velocity term, not with the
mass term. When it is so transferred, the moment of inertia is eliminated, and the rotational energy equation reverts to the normal kinetic form E = ½mv². The equation in its usual form is merely a mathematical convenience, and does not reflect the actual physical situation.

In addition to the kinds of relations that have been discussed so far in this chapter, where the relations themselves are familiar, and only the analysis into space and time components is new, there are other types of physical relations that are peculiar to the universe of motion. At this time we will want to examine two of these: the limitations on unidirectional motion, and the relations between motion in space and motion in time.

The translational and vibrational speeds with which we have been mainly concerned thus far are speeds attained by means of directional reversals, and their magnitudes are not subject to any limits other than those arising from the finite capabilities of the originating processes. Rotation, however, is unidirectional from the scalar standpoint, and unidirectional magnitudes are limited by the discrete unit postulate. On the basis of this postulate, the maximum possible one-dimensional unidirectional speed is one net displacement unit. However, the atom rotates in the inward scalar direction, and inward motion necessarily takes place in opposition to the omnipresent outward motion of the natural reference system. Two inward displacement units are therefore required in order to reach the limit of one net unit. These two units extend from unity in the positive scalar direction (the positive zero, in terms of the natural system) to unity in the negative scalar direction (the negative zero), and they constitute the maximum for any one-dimensional unidirectional motion. In three-dimensional space (or time) there can be two displacement units in each of the three dimensions, and the maximum three-dimensional unidirectional displacement is therefore 2³, or 8, units.

There have been some suggestions that the number of possible directions (and consequently displacements) in three-dimensional space ought to be 3 x 2 = 6 rather than 2³ = 8. It should therefore be emphasized that we are not dealing with three individual dimensions of motion, we are dealing with three-dimensional motion. The possible directions in a three-dimensional continuum can be visualized by regarding a two-unit cube as being an assemblage of eight one-unit cubes. The diagonals from the center of the assemblage to the opposite corner of each of the cubes then define the eight possible directions.

An important consequence of the fact that there are eight displacement units between the zero point of the positive motion and the end of the second unit, which is the zero from the negative standpoint, is that in any physical situation involving rotation, or other three-dimensional motion, there are eight displacement units between positive and negative magnitudes. A positive displacement x from the positive datum is physically equivalent to a negative displacement 8 - x from the negative datum. This is a principle that will have a wide field of application in the pages that follow.

The key factor in the relation between motion in space and motion in time is the previously mentioned fact that in the context of a spatial reference system all motion in time is scalar, and in the context of a temporal reference system all motion in space is scalar. The regions of motion in time and motion in space therefore meet in what is
essentially no more than a point contact. It follows that of all of the possible directions that a motion in time can take, only one of these time directions brings the motion in time into contact with the region of motion in space. Only in this one direction can an effect be transmitted across the regional boundary. Inasmuch as all possible directions are equally probable, in the absence of any factors that would establish a preference, the ratio of the total magnitude of the motion to the transmitted effect is numerically equal to the total number of possible directions.

As can be seen from the foregoing explanation, the transmission ratio depends on the nature of the motion, particularly on the number of dimensions involved. However, the value with which we will be most concerned is that applicable to the basic properties of matter. This is the relation that was called the inter-regional ratio in the first edition, and it appears advisable to retain this name, although the more extensive information now available shows the relation is not as general as the name might indicate. On the basis of the theoretical considerations discussed in the preceding paragraphs, there are 4 possible orientations of each of the two two-dimensional rotations of the atoms, and 8 possible orientations of the one-dimensional rotations, making a total of 4 x 4 x 8 = 128 different positions that a unit displacement of the scalar translational motion of the atom (the inward scalar effect of the rotation) can take in three-dimensional time. In addition, each of the rotating systems of the atom has an initial unit of vibrational displacement with three possible orientations, one in each dimension. For the two-dimensional basic rotation this means nine possible positions, of which two are occupied. Thus, for each of the 128 possible rotational positions there is an additional 2/9 vibrational position which any given displacement unit may occupy. The inter-regional ratio is then 128 (1 + 2/9) = 156.44.

It is this inter-regional ratio that accounts for the small "size" of atoms when the dimensions of these objects are measured on the assumption that they are in contact in the solid state. According to the theory developed in the foregoing pages, there can be no physical distance less than one natural unit, which, as we will see in the next chapter, is 4.56 x 10⁻⁶ cm. But because the inter-atomic equilibrium is established in the region inside this unit, the measured inter-atomic distance is reduced by the inter-regional ratio, and this measured value is therefore in the neighborhood of 10⁻⁸ cm.

The inversion of space and time at the unit level also has an important effect on the dimensions of inter-regional relations. Inside unit space no changes in space magnitudes can take place, since less than unit space does not exist. However, as pointed out earlier, the motion in time, which can take place inside the space unit, is equivalent to a motion in space because of the inverse relation between space and time. An increase in the time aspect of a motion in this inside region (the time region, where space remains constant at unity) from 1 to t is equivalent to a decrease in the space aspect from 1 to 1/t. Where the time is t, the speed in this region is equivalent space 1/t divided by time t, or 1/t². In the region outside unit space, the speed corresponding to one unit of space and time t is 1/t. Now we find that in the time region it is 1/t². The time region speed, and all quantities derived therefrom, which means all of the physical phenomena of the inside region, as all of these phenomena are manifestations of motion, are therefore second power expressions
of the corresponding quantities of the outside region. This is an important principle that must be taken into account in any relation involving both regions. The intra-region relations may be equivalent; that is, the expression a = bc is the mathematical equivalent of the expression a² = b²c². But if we measure the quantity a in the outside region, it is essential that the equation be expressed in the correct regional form: a = b²c².

Although the difficulties which the Reciprocal System of theory does not encounter do not enter into the development of thought in these pages, and, strictly speaking, have no real place in the discussion, it may be of interest, while we are considering some of the factors that enter into the phenomena of very small dimensions, to point out that the theory of a universe of motion is free from the problem of infinities that plagues all conventional theories in this physical area. Richard Feynman gives us a candid assessment of the existing theoretical situation:

We really do not know exactly what it is that we are assuming that gives us the difficulty producing infinities. A nice problem! However, it turns out that it is possible to sweep the infinities under the rug, by a certain crude skill, and temporarily we are able to keep on calculating.... We have all these nice principles and known facts, but we are in some kind of trouble: either we get the infinities, or we do not get enough of a description - we are missing some parts.58

The Reciprocal System is free of these problems because it is a fully quantized system of theory. Every physical phenomenon, this theory tells us, is a manifestation of motion, and every motion involves at least one unit of space and one unit of time. For convenience, we may identify a "point" within a unit of space or a unit of time, but such a point has no independent existence. Nothing less than one unit of either space or time exists in the universe of motion.

CHAPTER 13

Physical Constants
Because motion and its components, space and time, exist only in units, the derivatives of motion, dimensional variations of the basic relation between space and time, such as acceleration, force, etc., also exist only in natural units. A natural unit of force, for example, is a natural unit of time divided by a two-dimensional natural unit of space. It then follows that where a relation of the kind discussed in Chapter 12 is correctly stated, it is valid as a quantitative relation between units without any arbitrary "constant." The expression F = ma, for example, tells us that one natural unit of force applied to one natural unit of mass will produce an acceleration of one natural unit. When all quantities are expressed in natural units there are no numerical constants in equations of this kind aside from what we may call structural factors: geometrical factors such as the number of effective dimensions, numerical factors such as the second and third powers of the quantities entering into the relations, and so on.

There has been a great deal of speculation as to the nature and origin of the "fundamental constants" of present-day physics. An article in the Sept. 4, 1976 issue of Science News, for example, contends that we are confronted with a dilemma, inasmuch as there are only two ways of looking at these constants, neither of which is really acceptable. We must either, the article says, "swallow them ad hoc" without justification for "their necessity, their constancy, or their values," or we must accept the Machian hypothesis that they are, in some unknown way, determined by the contents of the universe as a whole. The development of the Reciprocal System of theory has now resolved this dilemma in the same way that it handled a number of the long-standing problems considered in the earlier pages; that is, by exposing it as fictitious. When all quantities are expressed in the proper units, the natural units of which the universe of motion is constructed, the "fundamental constants" reduce to unity and vanish.

A preliminary step that has to be taken before we can compare the mathematical results derived from the new theory with the numerical values obtained by measurement is to ascertain the conversion ratios by which the values in the natural system can be converted to the conventional system of units in which the measurements are reported. Inasmuch as the conventional units are arbitrary, there is no way in which the conversion factors can be calculated theoretically. It is necessary to utilize a measurement of some specific physical quantity for each independent conventional unit. Any physical quantity which involves the item in question, and can be clearly identified, will theoretically serve the purpose, but for maximum accuracy certain basic phenomena that are relatively simple, and have been carefully studied observationally, are clearly preferable. There is no question as to where we should obtain the value of the natural unit of speed, or velocity. The speed of radiation, measured as the speed of light in a vacuum, 2.99793 x 10¹⁰ cm/sec, is an accurately measured quantity that is definitely identified as the natural unit by the theoretical development.

There are some uncertainties with respect to the other conversion factors, both as to the accuracy of the experimental values from which they have to be calculated, and as to whether all of the minor factors that enter into the theoretical situation have been fully taken into account. Some improvement has been made in both respects since the first edition was published, and the principal discrepancies that existed in the original findings have been eliminated, or at least greatly minimized. No significant changes were required in the values of the basic natural units, but some of the details of the manner in which these units enter into the determination of the "constants" and other physical magnitudes have been clarified in the course of extending the development of the theoretical structure. One of the problems in this connection is that of arriving at a decision as to which of the reported measured values should be used in the calculations. Ordinarily it would be assumed that the more recent results are the more accurate, but an examination of these recent values and the methods by which they have been obtained indicates that this is not necessarily true.
Apparently the "consistent" values listed in the up-to-date tabulations involve some adjustments of the raw data to conform with current theoretical ideas as to the relations that should exist between the various individual values. For purposes of this present work the unadjusted data are preferable.

The principal question at this point concerns the experimental values of Avogadro's number, as only three conversion constants are required for present purposes, and there are no significant differences in the measurements of the quantities that will be used in calculating two of these constants. The more recent values reported for Avogadro's number are somewhat lower than those reported earlier, but the correlation with the gravitational constant, which will be discussed shortly, favors some of the earlier results. The value adopted for use in evaluating the conversion constant for mass, 6.02486 x 10²³ g-mol⁻¹, has therefore been taken from a 1957 tabulation by Cohen, Crowe and DuMond.59 In any event, it should be understood that wherever the results obtained in this work are expressed in the arbitrary units of a conventional system, they are accurate only to the degree of accuracy of the experimental values of the quantities used in determining the conversion constants. Any future change in these values resulting from improvement of experimental techniques will involve a corresponding change in the values calculated from theoretical premises. However, this degree of uncertainty does not apply to any results that are stated in natural units, or in conventional terms such as units of atomic number that are equivalent to natural units.

As in the first edition, the natural unit of time has been calculated from the Rydberg fundamental frequency. A question has arisen here because this frequency varies with the mass of the emitting atom. The original calculation was based on the value applicable to hydrogen, but this has been questioned, as the prevailing opinion regards the value applicable to infinite mass as the fundamental magnitude. A definitive answer to this question will not be available until the theory of the variation in the frequency has been worked out, but in the meantime a review of the situation indicates that we should stay with the hydrogen value. From the theoretical viewpoint it would seem that the unit value would come from an atom of unit magnitude, rather than from an infinite number of atoms. Also, even though the difference is small, the value thus derived seems to be more consistent with the general pattern of measured magnitudes than the alternative.

From the manner in which the Rydberg frequency appears in the mathematics of radiation, particularly in such simple relations as the Balmer series of spectral lines, it is evident that this frequency is another physical manifestation of a natural unit, similar in this respect to the speed of light. It is customarily expressed in cycles per second on the assumption that it is a function of time only. From the explanation previously given, it is apparent that the frequency of radiation is actually a velocity. The cycle is an oscillating motion over a spatial or temporal path, and it is possible to use the cycle as a unit only because that path is constant. The true unit is one unit of space per unit of time (or the inverse of this quantity). This is the equivalent of one half-cycle per unit of time rather than one full cycle, as a full cycle involves one unit of space in each direction. For present purposes the measured value of the Rydberg frequency should therefore be expressed as 6.576115 x 10¹⁵ half-cycles per second. The natural unit of time is the reciprocal of this figure, or 1.520655 x 10⁻¹⁶ seconds.
Multiplying the unit of time by the natural unit of speed, we obtain the value of the natural unit of space, 4.558816 x 10⁻⁶ centimeters.
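The two conversion factors just derived can be checked in a few lines. The following Python sketch is illustrative only; the constants are those quoted in the text (the hydrogen Rydberg frequency restated as half-cycles per second, and the measured speed of light), and the final line applies the inter-regional ratio of Chapter 12 to show that the natural unit of space corresponds to the observed order of magnitude of inter-atomic distances.

    c = 2.99793e10                      # natural unit of speed: speed of light, cm/sec
    rydberg_half_cycles = 6.576115e15   # Rydberg frequency, half-cycles per second

    t_unit = 1 / rydberg_half_cycles    # natural unit of time, sec
    s_unit = c * t_unit                 # natural unit of space, cm

    print(t_unit)              # ~1.520655e-16 sec
    print(s_unit)              # ~4.5588e-6 cm
    print(s_unit / 156.444)    # ~2.9e-8 cm, the measured inter-atomic scale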

By combining these two natural units as required, the natural units of all of the quantities of the velocity group can be calculated. Those of the inverse quantities, the energy group, can also be calculated in the same centimeter-second terms, but this gives us expressions such as 3.711381 x 10⁻³² sec³/cm³, which is the natural unit of mass. This value has no practical use, because the inverse relations between the quantities of the velocity group and those of the energy group have not hitherto been recognized. In setting up the conventional system of units it has been assumed that mass is another fundamental quantity for which an additional arbitrary unit is necessary. The ratio of the velocity-based unit of mass to this arbitrary unit, the gram, can be derived from any clearly defined physical relation involving mass that has been accurately measured in conventional units. As indicated earlier, the measurement selected for this purpose is that of Avogadro's constant. This constant is the number of molecules per gram molecular weight, or in application to atoms, the number of atoms per gram atomic weight. The reported value is 6.02486 x 10²³. The reciprocal of this number, 1.65979 x 10⁻²⁴, in grams, is therefore the mass equivalent of unit atomic weight, the unit of inertial mass, as we will call it. With the addition of the value of the natural unit of inertial mass to the values previously derived for the natural units of space and time, we now have all of the information required for calculation of the natural units of the other primary quantities of the mechanical system. The mechanical units can be summarized as follows:

Natural Units of Primary Quantities

                            Space-time Units               Conventional Units
s      space                4.558816 x 10⁻⁶ cm             4.558816 x 10⁻⁶ cm
t      time                 1.520655 x 10⁻¹⁶ sec           1.520655 x 10⁻¹⁶ sec
s/t    speed                2.997930 x 10¹⁰ cm/sec         2.997930 x 10¹⁰ cm/sec
s/t²   acceleration         1.971473 x 10²⁶ cm/sec²        1.971473 x 10²⁶ cm/sec²
t/s    energy               3.335635 x 10⁻¹¹ sec/cm        1.49175 x 10⁻³ ergs
t/s²   force                7.316889 x 10⁻⁶ sec/cm²        3.27223 x 10² dynes
t/s⁴   pressure             3.520646 x 10⁵ sec/cm⁴         1.57449 x 10¹³ dynes/cm²
t²/s²  momentum             1.112646 x 10⁻²¹ sec²/cm²      4.97593 x 10⁻¹⁴ g-cm/sec
t³/s³  inertial mass        3.711381 x 10⁻³² sec³/cm³      1.65979 x 10⁻²⁴ g
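The tabulation can be reproduced directly from the two basic units. The Python sketch below is an illustrative check using the figures quoted above: it rebuilds the space-time column from the natural units of space and time, and then confirms that, in conventional units, one natural unit of force acting on one natural unit of inertial mass yields one natural unit of acceleration, as F = ma requires when no arbitrary constant is present. Small differences in the last digits are rounding in the quoted values.

    s = 4.558816e-6      # natural unit of space, cm
    t = 1.520655e-16     # natural unit of time, sec

    print(s / t)         # speed          ~2.997930e10  cm/sec
    print(s / t**2)      # acceleration   ~1.971473e26  cm/sec^2
    print(t / s)         # energy         ~3.335635e-11 sec/cm
    print(t / s**2)      # force          ~7.316889e-6  sec/cm^2
    print(t / s**4)      # pressure       ~3.520646e5   sec/cm^4
    print(t**2 / s**2)   # momentum       ~1.112646e-21 sec^2/cm^2
    print(t**3 / s**3)   # inertial mass  ~3.711381e-32 sec^3/cm^3

    force_cgs = 3.27223e2     # natural unit of force, dynes
    mass_cgs = 1.65979e-24    # natural unit of inertial mass, grams
    print(force_cgs / mass_cgs)   # ~1.9715e26 cm/sec^2, the natural unit of acceleration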

The values given in the first column of this tabulation are those derived by applying the natural units of space and time to the space-time expressions for each physical quantity. In the case of the quantities of the speed or velocity type, these are also the values applicable in the conventional systems of measurement. However, mass is regarded as an independent fundamental variable in the conventional systems, and a mass term is introduced into each of the quantities of the energy type. Momentum, for example, is not treated as t²/s², but as the product of mass and velocity, which, in space-time terms, is t³/s³ x s/t. The use of an arbitrary unit of mass then introduces a numerical factor. Thus, in order to arrive at the values of the natural units in terms of the cgs system of measurement, each of the values given for the energy group in the first column of the tabulation must be divided by this factor: 2.236055 x 10⁻⁸.

As we saw in Chapter 10, the masses of the atoms of matter can be expressed in terms of units of equivalent electric displacement. The minimum quantity of displacement is one
atomic weight unit. It is therefore evident that this displacement unit is some kind of a natural unit of mass. In the first edition it was identified as the natural unit of mass in general. The continuing theoretical development has revealed, however, that this atomic weight unit, the unit of inertial mass, is actually a composite that includes not only a unit of what we will now call primary mass, the basic mass quantity, but also a unit of secondary mass. The concept of secondary mass was introduced in the first edition, without being developed very far. A considerably more detailed treatment is now available. The inward motion in space which gives rise to the primary mass does not take place from an initial level occupying a fixed location in a stationary frame of reference. Instead, the initial level itself is in motion in the region inside unit space. Since mass is an expression of the inward motion that is effective in the context of a stationary reference system, the primary mass is modified by the mass equivalent of the motion of the initial level. While the previous deductions with respect to the essential features of the secondary mass component have been confirmed in the subsequent studies, a few of the details take on a somewhat different appearance when viewed in the light of the more complete information now available. The recent findings indicate that although the primary mass is a function of the net total effective positive rotational displacement, the movement of the initial level that is responsible for the existence of the secondary mass depends on the magnitudes of the displacements in the different dimensions separately. The scalar directions of the motions inside unit distance play an important part in determining these magnitudes. Outside unit distance, the scalar direction of the rotational motion is inward because it must oppose the outward motion of the natural reference system. However, as we saw in Chapter10, the magnitude of that inward motion depends to some extent on whether the displacement in the electric dimension is positive or negative. Inside unit space there is still more variability, as the motion in this region is in time, and there is no fixed relation between direction in time and direction in space. (The rotational motion of which the material atom or particle is constructed is motion in space, but inside one spatial unit the translational motion of the atom is in time.) Because of this directional freedom in the time region, the secondary mass may be either positive or negative. Furthermore, the directions of the individual displacement units are independent of each other, and the net total secondary mass of a complex atom may be relatively small because of the presence of nearly equal numbers of positive and negative secondary mass components. This directional variability introduces a number of complications into the secondary mass pattern of the elements. The complete pattern has not yet been identified, but a substantial amount of information is now available with respect to the values applying to sub-atomic particles and the elements of low atomic number. The magnitudes of the natural units applicable to physical quantities are independent of the sector or region of the universe in which the phenomena to which they relate are located. 
As explained in Chapter 12, however, only a fraction of any physical effect can be transmitted across a regional boundary, and the measured value beyond that boundary is substantially less than the original unit. This is the principal reason for the great
disparity between the magnitudes of the primary and secondary mass. A unit of mass in the region inside unit distance is inherently just as large as a unit of mass in the region outside unit distance. But when both are measured in terms of their effect in the outside region, the inside, or secondary, mass is reduced by the interregional ratio. In this chapter we are dealing with some very small quantities, and for greater accuracy we will extend the previously calculated value of the inter-regional ratio to two more decimal places, making it 156.4444. The reciprocal of this ratio, 0.00639205, is the fraction of a time region unit that is effective outside unit distance. It is therefore the unit of secondary mass applicable to the basic two-dimensional rotation of the atom or particle. The unit of inertial mass is one such secondary unit plus one unit of primary mass, or a total of 1.00639205. An analysis of the secondary mass relations enables us to compute the mass of each of the sub-atomic particles, a magnitude that is of interest not only as one more item of information about the physical universe, but also because of the light that it throws on the structure of the individual particle. Here we must take into account not only the twodimensional component of the secondary mass, the magnetic component, as we will call it, following our usual terminology, but also the other components that may be involved in the secondary mass. One of these is the component due to the electric rotation, if any. Inasmuch as this electric rotation, the rotation in the third dimension, is not an independent motion, but a reverse rotation of the pre-existing two-dimensional rotating system, or systems, it adds neither primary mass nor the magnetic unit that is the principal component of the secondary mass. It contributes only the mass equivalent of a unit of one-dimensional rotation. In this case, the 1/9 factor representing the possible positions of the basic photon applies directly against the basic 1/128 relation. We then have for the unit of electric mass: 1/9 x 1/128 = 0.00086806 This value applies specifically where the motion around the electric axis is a rotation of a two-dimensional displacement distributed over all three dimensions, as in a double rotating system. Where only one two-dimensional rotation is involved, the electric mass is 2/3 of the full unit, or 0.00057870. When two of the two-dimensional rotations (four dimensions in all) are consolidated to form a double rotating system (three dimensions), the two 0.00057870 mass units become one 0.00086806 unit. Another secondary mass component that may be present is the mass due to an electric charge. Like all other phenomena in a universe of motion, a charge is a motion, an additional motion of the atom or particle. We are not ready to discuss charges in detail at this stage of the presentation, so for the present we will merely note that on the basis of the restrictions on combinations of motions defined in Chapter 9, the charge, as a motion of the rotating particle or atom, must have a displacement opposite to that of the rotation in order to be stable. This means that the motion that constitutes the charge is on the far side of another regional boundary–another unit level–and it is subject to two successive inter-regional transmission factors.

The relation between the time region and the third region, in which the motion of the charge takes place, is similar to that between the time region and the region outside unit space. The inter-regional ratio is the same, except that because the electric charge is one-dimensional the factor 1 + 1/9 has to be substituted for the factor 1 + 2/9 that appears in the inter-regional ratio previously calculated. This makes the inter-regional ratio applicable to the relation with the third region 128 x (1 + 1/9) = 142.2222. The mass of unit charge is the reciprocal of the product of the two inter-regional ratios, 156.4444 and 142.2222, and amounts to 0.00004494.

The charge applicable to electrons and positrons deviates from this normal value because these particles have effective rotations in only one dimension, leaving the other two dimensions open. In some way, the exact nature of which is not yet clear, the motion of the charge is able to take place in these two dimensions of the time region instead of in the normal manner. Since this is on the opposite side of the unit boundary, the direction of the effect is reversed, making the mass increment due to the charge negative, as well as reducing its magnitude by one third. The effective mass of a charge applied to an electron or positron is therefore -2/3 x 0.00004494 = -0.00002996.

We may now apply the calculated values of the several mass components, as given in the foregoing paragraphs, to a determination of the masses of the sub-atomic particles described in Chapter 11. For convenience, these values will be recapitulated as follows:

p   primary mass                 1.00000000
m   magnetic mass                0.00639205
    gravitational mass           1.00639205
E   electric mass (3 dim.)       0.00086806
e   electric mass (2 dim.)       0.00057870
C   mass of normal charge        0.00004494
c   mass of electron charge     -0.00002996
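All seven of the component values recapitulated above follow from the two inter-regional ratios. The Python sketch below is simply an illustrative recomputation of those figures.

    ratio_2d = 128 * (1 + 2/9)     # 156.4444..., ratio for the two-dimensional rotation
    ratio_1d = 128 * (1 + 1/9)     # 142.2222..., ratio applicable to the charge region

    p = 1.0                            # primary mass
    m = 1 / ratio_2d                   # magnetic (secondary) mass,        ~0.00639205
    gravitational = p + m              # unit of inertial mass,            ~1.00639205
    E = (1/9) * (1/128)                # electric mass, three dimensions,  ~0.00086806
    e = (2/3) * E                      # electric mass, two dimensions,    ~0.00057870
    C = 1 / (ratio_2d * ratio_1d)      # mass of a normal unit charge,     ~0.00004494
    c = -(2/3) * C                     # mass of an electron or positron charge, ~-0.00002996

    for label, value in [("p", p), ("m", m), ("p + m", gravitational),
                         ("E", E), ("e", e), ("C", C), ("c", c)]:
        print(f"{label:6s} {value:11.8f}")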

The values listed above are the masses of the various components on the natural scale. The measured values are reported in terms of a scale based on an arbitrarily assumed mass for some atom or isotope that is taken as a standard. For a number of years there were two such scales in common use, the chemical scale, based on the atomic weight of oxygen as 16, and the physical scale, which assigned the 16 value to the O¹⁶ isotope. More recently, a scale based on an atomic weight of 12 for the C¹² isotope has found favor, and most of the values given in the current literature are expressed in terms of the C¹² scale. In the light of the findings of this work the shift away from the O¹⁶ scale is unfortunate, as the theoretical development indicates that the O¹⁶ isotope has a mass of exactly 16 on the natural scale, and the physical scale (O¹⁶ = 16) is therefore coincident with the natural scale. It will, of course, be necessary to use the natural scale for our purposes. The observed values quoted for comparison with the theoretical masses will therefore be stated in terms of the equivalent O¹⁶ physical scale.

Here again we face the same issue that was encountered early in this chapter in connection with the selection of an empirical value of Avogadro's number as a basis for calculating the unit of mass: the question as to whether we should regard the most recent determination as the most accurate. It would appear that the arguments that led to the acceptance of the 1957 value of Avogadro's number are also applicable to the particle
masses, particularly since the agreement between the calculated and observed masses of the electron and proton is quite satisfactory on this basis. The empirical values cited in the paragraphs that follow have therefore been taken from the 1957 compilation by Cohen, Crowe and DuMond.59

Since mass is three-dimensional, an independent one-dimensional or two-dimensional rotation has no mass. Nevertheless, when such a rotation becomes a component of a three-dimensional rotation, it contributes to the mass equivalent of that rotation. The amount that a rotation which is massless when independent will add to the mass of a particle or atom when it joins such a combination of motions constitutes what we will call potential mass. In the case of the particles with no effective two-dimensional rotational displacement, the electron and the positron, the appropriate unit of electric mass, 0.00057870, is the entire mass of the particle, and even that mass is only potential, rather than actual, as long as the particle is in the basic uncharged condition. When a charge is added, the effect of the charge is distributed over all three dimensions by the chance process that governs the directions of the motion of the charge in the time region. Thus the charged particle has effective motion in all three dimensions, irrespective of the number of dimensions of rotation. This not only makes the mass of the charge itself an effective quantity, but, as indicated in Chapter 11, it also raises the potential mass of the rotation of the particles to the effective status. The net effective mass of the electron or the positron is then the rotational value 0.00057870 less the mass of the charge 0.00002996, or 0.00054874. The observed value is 0.00054877.

The massless neutron, the M ½-½-0 combination, has no effective rotation in the third dimension, but no rotation from the natural standpoint is rotation at unit speed from the standpoint of a fixed reference system. This rotational combination therefore has an initial unit of electric rotation, with a potential mass of 0.00057870, in addition to the mass of the two-dimensional basic rotation 1.00639205, making the total potential mass of this particle 1.00697075. In this connection, it should be noted that the electron and positron also have rotation at unit speed (no rotation, in terms of the natural system) in the two inactive dimensions, but these rotations involve no mass, as they are independent, and are not rotating anything. The initial unit of rotation in the third dimension of the massless neutron, on the other hand, is a reverse rotation of the two-dimensional structure, and it therefore adds an electric mass unit.

The neutrino, M ½-½-(1), has the same unit positive displacement in the magnetic dimensions as the massless neutron, but it has neither primary nor magnetic mass because these are functions of the net total displacement, and that quantity is zero for the neutrino. But since the electric mass is independent of the basic rotation, and has its own initial unit, the neutrino has the same potential mass as the uncharged electron or positron, 0.00057870. The potential mass of both the massless neutron and the neutrino is actualized when the rotations of these particles are joined to produce a three-dimensional rotation. The mass
of the resulting particle is then 1.00754945. As indicated in Chapter 11, this particle is the proton. As it is observed, however, the proton is positively charged, and in this condition the foregoing figure is increased by the mass of a unit charge, 0.00004494. The resulting mass of the theoretical charged proton is 1.00759439. The mass of the observed proton has been measured as 1.007600.

Consolidation of two protons results in the formation of a double rotating system. As stated earlier, this substitutes one three-dimensional electric unit of mass for two of the two-dimensional units, reducing the combined mass by 0.00028935. The mass of the product, the deuterium atom (H²), is the sum of two (uncharged) proton masses less this amount, or 2.014810. The corresponding observed value is 2.014735.

Inasmuch as the proton already has a three-dimensional status, addition of another neutrino alters only the electric mass. The material neutrino adds the normal two-dimensional electric unit, 0.00057870, making the total for the product, the mass one isotope of hydrogen, 1.00812815. The measured value is reported as 1.008142.

The successive additions of neutrinos to the massless neutron that eventually produce the mass one isotope of hydrogen should be given special attention, as the considerations which will be discussed in Chapter 17 indicate that this addition process plays a very significant part in the overall cyclic mechanism of the universe. The following tabulation shows how the mass of the hydrogen isotope is built up step by step.

Step by Step Building Process for the Hydrogen Isotope

                                   primary mass   magnetic mass   electric mass   total mass
massless neutron   M ½-½-0         1.00000000     0.00639205      0.00057870      1.00697075*
neutrino           M ½-½-(1)                                      0.00057870*
proton             M 1-1-(1)                                                      1.00754945
neutrino           M ½-½-(1)                                      0.00057870*
hydrogen (H¹)      M 1½-1½-(2)                                                    1.00812815

* potential mass

Neutrinos are plentiful in the local environment. The requirement for production of new matter in the form of hydrogen by the addition process is therefore a continuing supply of massless neutrons. In Chapter 15 we will find that there is in operation a gigantic process that furnishes just such a supply. Addition of a cosmic neutrino, the rotational displacements of which are on the opposite side of the unit boundary, to the proton, involves an additional initial electric unit, as both the rotation in time and the rotation in space must start from unity. Also the spatial effect of the cosmic neutrino rotation is three-dimensional, since the spatial direction of motion in time is indeterminate. The total addition of mass to the proton in the production of the compound neutron is then 0.00144676, and the resulting mass of the particle is 1.00899621. It has been measured as 1.008982.

The following is a summary of the particle masses and the mass components from which these masses are built up. The empirical values from the 1957 compilation are given for comparison. As noted earlier, the correlation is quite satisfactory for the electron and the proton, as it is within the estimated range of experimental error. The divergence in the case of the heavier particles is not large, but it exceeds the estimated error. Whether the source of this discrepancy is in the theoretical development or in the experimental determinations remains to be ascertained.

Composition          Particle             Calculated Mass   Observed Mass
e - c                charged electron     0.00054874        0.00054876
e - c                charged positron     0.00054874        0.00054876
e                    electron             0.00057870*       massless
e                    positron             0.00057870*       massless
e                    neutrino             0.00057870*       massless
p + m + e            massless neutron     1.00697075*       massless
p + m + 2e           proton               1.00754945        unobserved
p + m + 2e + C       charged proton       1.00759439        1.007593
p + m + 3e           hydrogen (H¹)        1.00812815        1.008142
p + m + 3e + E       compound neutron     1.00899621        1.008982

* potential mass
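The calculated column of this summary is obtained by adding the component values according to the composition shown in the first column. The Python sketch below is an illustrative check; the two charge masses are entered as magnitudes so that the compositions read exactly as tabulated.

    p, m = 1.0, 0.00639205          # primary and magnetic mass
    E, e = 0.00086806, 0.00057870   # electric mass, 3-dim. and 2-dim.
    C, c = 0.00004494, 0.00002996   # magnitudes of the normal and electron charge masses

    particles = [
        ("charged electron or positron", e - c),            # 0.00054874
        ("electron, positron, neutrino", e),                # 0.00057870 (potential)
        ("massless neutron",             p + m + e),        # 1.00697075 (potential)
        ("proton",                       p + m + 2*e),      # 1.00754945
        ("charged proton",               p + m + 2*e + C),  # 1.00759439
        ("hydrogen (H1)",                p + m + 3*e),      # 1.00812815
        ("compound neutron",             p + m + 3*e + E),  # 1.00899621
    ]
    for name, mass in particles:
        print(f"{name:30s} {mass:.8f}")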

In the first edition the relation between the natural unit of mass and the arbitrary unit in the cgs system was identified in terms of the gravitational constant. It has recently been pointed out by Todd Kelso and Steven Berline that the relation thus established cannot be converted to a different system of units such as the SI (mks) system. This made it evident that the interpretation of the gravitational phenomenon on which the previous determination was based was, in some way, erroneous. An analysis of the situation was therefore carried out in order to locate the point of error. The invalidation of the interpretation of the gravitational equation has no effect on any other feature of the theoretical results that have been obtained from the Reciprocal System, as described in this volume. Its sole result has been to leave this system of theory without any connection between the gravitational equation and the theoretical structure.

Once the situation is viewed in this light, it is immediately apparent that the lack of connection between the equation and physical theory is not peculiar to the Reciprocal System. Conventional theory does not identify the connection either. The physics textbooks find it necessary to admit this fact in statements such as the following: "It should be noted that Newton's law of universal gravitation is not a defining equation like Newton's second principle of mechanics and cannot be derived from defining equations. It represents an observed relation." This is a theoretical discrepancy that conventional physics has not been able to resolve. But it is an isolated discrepancy, and it has been swept under the rug by assigning fictitious dimensions to the gravitational constant. It follows from this that the error lies in some interpretation of that "observed relation" that has been common to both conventional theory and the Reciprocal System. Evidently the developers of both systems of theory have misunderstood the true nature of the phenomenon. Here, again, recognition of the source of the difficulty points the way to the
resolution of the problem. As brought out in the earlier chapters, one mass does not actually exert a force on another (each is pursuing its own course independently of all others), but the results of the inward motions of two masses are similar to those that would follow if the masses did attract each other. These results can therefore be represented in terms of an attractive force, on an "as if" basis. But in order to do this we must put the "as if" forces on the same footing as real forces. A force can only be exerted against a resistance. Hence, when we attribute a force to the motion of one mass we cannot also attribute a force to the motion of the other. We must attribute a resistance to the second mass. Thus, an "as if" force, a gravitational force, is exerted against an "as if" inertial resistance. In the previous discussion we identified gravitation as three-dimensional motion, s³/t³, and inertia as three-dimensional resistance to motion, t³/s³. The product of the gravitational motion and the inertial resistance therefore does not have the dimensions of mass to the second power, as the conventional expression of the gravitational equation indicates; it is dimensionless. This is a situation in which the ability to reduce all physical quantities to space-time terms is very helpful. It will also be convenient to examine the dimensional situation independently before taking up the question of the numerical values. The gravitational equation, as expressed in current practice, is assigned dimensions as follows:

(dynes cm² g⁻²) x g² x cm⁻² = dynes     (13-1)

Reducing equation 13-1 to space-time terms in accordance with the relations established in Chapter 12 (in which dynes, as g-cm/sec², are t³/s³ x s x 1/t² = t/s²), we have

(t/s² x s² x s⁶/t⁶) x t⁶/s⁶ x 1/s² = t/s²     (13-2)

In the light of the new understanding of the mm' term as the dimensionless product of gravitational and inertial mass, it is now evident that the s⁶/t⁶ dimensions belong with mm' rather than with the gravitational constant. When they are so applied, the resulting dimensions of mm' cancel out, as the true theoretical dimensions do. We can therefore replace them with the correct dimensions. As pointed out in the first edition, there are also two other errors in the customary assignment of dimensions to this equation. The distance term is actually dimensionless. It is the ratio of 1/n² to 1/1². The dimensions that are mistakenly assigned to this term belong to a term whose existence has not been recognized because it has unit value, and therefore does not enter into the numerical calculation. In order to put the "as if" gravitational interaction on the same basis as a real interaction, we have to express it in terms of the action of a force on a resistance, not as the action of a mass on a resistance. And since the dimensions of the mass term cancel, so that the gravitational mass enters the equation only as a dimensionless number, the force of gravitation has to be expressed in actual force terms; that is, as t/s². The correct dimensional form of the equation is then

(s³/t³ x t³/s³) x t/s² = t/s²     (13-3)

Turning now to the numerical magnitudes, we note that while the dimensions of the mm' term cancel out, the magnitudes do not. Every unit of mass is both a unit of s³/t³ and a unit of t³/s³, each in its proper context. Since the units are independent, the effective magnitude of the "as if" action of m units of gravitation against m' units of inertial resistance is mm'. However, expressing both of the mass terms in conventional units
introduces a numerical error, as only the inertial mass term is counterbalanced by a conventional mass magnitude on the other side of the equation. To compensate for this error a corresponding inverse factor must be introduced into the gravitational constant. There is no error if the gravitational mass is expressed in natural units, as the value 1 does not require any counterbalancing term. The relation between the natural and conventional units therefore determines the magnitude of the necessary correction factor. One gram is 6.02486 x 10²³ units of inertial mass (t³/s³). The reciprocal of this number is 1.65979 x 10⁻²⁴. But only one sixth of the total number of mass units is effective in the gravitational interaction, because this "as if" interaction takes place in only one dimension, and in only one of the two directions in this dimension. The total number of s³/t³ units corresponding to an effective mass of one gram is therefore 9.95874 x 10⁻²⁴. Expressing this mass as one unit overstates the numerical value, and a correction of this magnitude must therefore be included as a component of the gravitational constant.

A small additional correction is required because of the effect of the secondary mass. Gravitation and inertia are inversely related relative to the primary mass; that is, the primary mass is p/(p + s) units of gravitational mass and also p/(p + s) units of inertial mass, where p and s are the primary and secondary masses respectively. The product of a unit of gravitational mass and a unit of inertial mass is therefore 1/(1 + s)² units of primary mass. Where the result is expressed in terms of inertial mass, another 1 + s factor is introduced. The total effect of the secondary mass is then the introduction of a factor of 1.019299. Applying this factor to the value 9.95874 x 10⁻²⁴, we obtain 1.015093 x 10⁻²³. Replacing the 1/s² distance term by a t/s² force term has the effect of introducing a time dimension, which must be expressed in natural units to avoid creating a numerical unbalance. The numerical value of the natural unit of time, 1.520655 x 10⁻¹⁶, offsets in part the errors in the mass term. The net correction to be made is 1.015093 x 10⁻²³ divided by the natural unit of time, and amounts to 6.67537 x 10⁻⁸. This is the gravitational constant in the cgs system of units.

Looking now at the question of conversion to a different system of units, the issue that initiated the restudy of the situation, we find that a change from cgs to mks units in the conventional form of the equation (13-1) results in a change of 10⁻⁶ in the mass term, 10⁻⁴ in the distance term, and 10⁻⁵ in the force term. A change of 10⁻³ in the gravitational constant is then required for a balance. In the theoretical equation (13-3) the net effect of a change in the system of units is confined to the relation of the natural and conventional units of mass. As can be seen from the explanation that has been given, the gravitational constant is proportional to the ratio of these units. Changing the conventional unit from grams to kilograms alters this ratio by 10⁻³. The gravitational constant is then changed by the same amount. This agrees with the result obtained from equation 13-1.

Those who are familiar with the first edition will have noticed that the values of the natural unit of inertial mass and related quantities, as given earlier in this chapter, are larger than the values given in the original publication.
At the time of the original investigation it seemed clear that a factor of 1/3 entered into the mass situation in some way, and there appeared to be sufficient justification for applying this factor to the size of the basic unit. As brought out in the preceding paragraphs, we now find that the 1/3 factor
is a result of the one-dimensional nature of the ―as if‖ gravitational interaction. This factor has therefore been eliminated from the mass units. As a result, the natural unit of inertial mass, as defined in this edition, is three times the value given in the first edition (with a small adjustment to reflect the results of the continuing studies of the details of the phenomena involved). The use of these larger units has no effect on the physical relations involving inertial mass, as the expressions of these relations are balanced equations in which the mass terms are in equilibrium with terms representing quantities derived from mass.
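The numerical factors described in the preceding paragraphs for the cgs gravitational constant can be drawn together in a few lines. The Python sketch below is an illustrative recomputation, assuming the 1957 value of Avogadro's number and the natural unit of time quoted earlier in this chapter.

    avogadro = 6.02486e23     # inertial mass units (t^3/s^3) per gram
    t_unit = 1.520655e-16     # natural unit of time, sec

    gram_in_natural_units = 1 / avogadro           # ~1.65979e-24
    effective = 6 * gram_in_natural_units          # one dimension, one direction: ~9.95874e-24
    secondary_factor = 1.019299                    # effect of the secondary mass
    corrected = effective * secondary_factor       # ~1.015093e-23

    G = corrected / t_unit
    print(G)    # ~6.675e-8, the gravitational constant in cgs units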

CHAPTER 14

Cosmic Elements
As pointed out in Chapter 6, the inversion of space and time in physical phenomena that is possible by reason of the reciprocal relation between the two entities may apply to only one of the constituent motions of a complex physical entity or phenomenon, or it may apply to the entire structure. We have already examined some of the effects of inversion of single motion components, such as translational motion in time, negative displacement in the electric dimension of the atomic rotation, etc. Now we are ready to take a look at the consequences of complete inversions. It has already been noted that the rotational combinations which constitute the atoms and sub-atomic particles of the material system are photons vibrating in time and rotating in space, and that they are paralleled by a similar system of combinations in which the photons are vibrating in space and rotating in time. The point to be emphasized at this juncture is that the inverse system, the cosmic system of atoms and sub-atomic particles, is identical with the material system in every respect, except for the space-time inversion. Corresponding to carbon, 2-1-4, there is cosmic carbon, (2)-(1)-(4). Corresponding to the neutrino, M ½-½-(1), there is a cosmic neutrino, C (½)-(½)-1, and so on. Furthermore, this identity applies with equal force to all of the entities and phenomena of the physical universe. Since everything that exists in the material sector of the universe is a manifestation of motion, every item is exactly duplicated in the cosmic sector with space and time interchanged. The detailed description of the material sector of the universe that we are deriving item by item through development of the consequences of the basic postulates of the Reciprocal System of theory is therefore equally applicable to the cosmic sector. Thus, even though the cosmic sector is almost entirely unobservable, we have just as exact and just as detailed knowledge of that sector (aside from information about specific individuals of the various classes of objects) as we do of the material sector. It should be noted, however, that our knowledge of the material sector is knowledge of how the phenomena of that sector appear to observation from a point within that sector; that is, a location in a gravitationally bound system. What we know about the cosmic sector through application of the reciprocal relation is knowledge of the same kind,
information as to how the phenomena of the cosmic sector appear to observation from a location within that sector; a location in a system that is gravitationally bound in time. Such knowledge has no direct significance from our standpoint, as we cannot make observations from such a base, but it does provide a basis from which we can determine how the phenomena of the cosmic sector, and the phenomena originating in that sector, theoretically should appear to our observation. One of the most perplexing questions of present-day physics is: Where is the antimatter? Considerations of symmetry applied to the current theories of the structure of matter indicate that there should be ―anti‖ forms of the elements of which ordinary matter is constituted, and that the ―antimatter‖ composed of those ―antielements‖ ought to be equally as abundant in the universe as a whole as ordinary matter. ―Antistars‖ and ―antigalaxies‖ should theoretically be as plentiful as ordinary stars and ordinary galaxies. But there is no hard evidence of the existence of any such objects. It has been suggested, to be sure, that some of the observed galaxies may be composed of antimatter. Alfven, for example, says that there is a ―distinct possibility that antiworlds may actually be neighbors of ours, astronomically speaking. It cannot be excluded that the Andromeda nebula, the closest galaxy to ours, or even stars within our own galaxy, are composed of antimatter.‖60 But this is pure speculation, in the absence of any demonstrated means of distinguishing the radiation produced by a galaxy of the hypothetical antimatter from that produced by a galaxy of ordinary matter. So the question remains, Where is the antimatter? The Reciprocal System now provides the answer. This new structure of theory agrees that antimatter (actually reciprocal matter: cosmic matter, as we are calling it) exists, and that it is equally as abundant in the physical universe as ordinary matter. But it tells us that the galaxies of cosmic matter are not localized in space; they are localized in three-dimensional time. The progression of time to which we are subject carries us through this three-dimensional time in a manner analogous to a linear motion through three-dimensional space. Only a very small fraction of the total number of objects occupying positions in the spatial reference system would be encountered in the course of a one-dimensional spatial motion of this kind, and the same is true of the number of cosmic objects that are encountered in our progression through time, as compared with the total number of such objects occupying positions in a three-dimensional temporal reference system. Furthermore, gravitation in the cosmic sector acts in time, rather than in space, and the atoms of which a cosmic aggregate is composed are contiguous in time, but widely dispersed in space. Thus, even the relatively small number of cosmic aggregates that we do encounter in our movement through time are not encountered as spatial aggregates; they are encountered as individual atoms widely dispersed in space. We cannot recognize a cosmic star or galaxy because we observe it only one atom at a time. Radiation from the cosmic aggregate is similarly dispersed. Such radiation is continually reaching us, but as we observe it, this radiation originates from individual, widely scattered, atoms, rather than from localized aggregates, and it is therefore isotropic from our viewpoint.
This radiation can no doubt be equated with the ―blackbody radiation‖ currently attributed to the remnants of the ―Big Bang.‖
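The geometric point made above, that a one-dimensional path through a three-dimensional array of positions meets only a vanishingly small fraction of its contents, can be illustrated with a simple count. This is only an illustrative toy model, not part of Larson's development; the grid sizes are arbitrary choices of mine.

```python
# Illustrative toy model: fraction of sites in an N x N x N lattice that lie
# on a single straight line parallel to one axis. The only point is that the
# fraction shrinks as 1/N^2 as the array grows.

def fraction_on_line(n: int) -> float:
    total_sites = n ** 3     # objects occupying positions in three dimensions
    sites_on_line = n        # positions met along a one-dimensional path
    return sites_on_line / total_sites

for n in (10, 100, 1000):
    print(f"N = {n:5d}: fraction encountered = {fraction_on_line(n):.1e}")
# N = 1000 gives 1.0e-06: almost everything in the array is missed.
```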

All of the somewhat sensational suggestions as to the existence of observable stars and galaxies of antimatter, and the possible consequences of interaction between these aggregates and bodies composed of ordinary matter are thus without foundation. The antimatter-fueled generators, which supply the energy for space travel in science fiction, will have to remain on the science fiction shelves. The difference between a cosmic star and a white dwarf star should be noted particularly. Both are on the time side of the dividing line so far as the translational speed is concerned; that is, both are composed of matter that is moving faster than the speed of light. But the white dwarf is otherwise no different from the ordinary star of the material sector. The space-time relationship is inverted only in the translational motion of its components. In the cosmic star, on the contrary, all of the space-time relations are the inverse of those of the ordinary material star; not only the translational motion, but also the vibrational and rotational motions of its constituent atoms, and, what is especially significant in the present connection, the effect of gravitation. Consequently, the white dwarf is an aggregate in space, and we see it as such, whereas the cosmic star is an aggregate in time, and we cannot recognize it as an aggregate. Even those contacts which do take place between matter and the individual particles of cosmic matter (antimatter) that enter the local environment do not have the kind of results that are anticipated on the basis of current theory. In present-day thought the essential difference between matter and antimatter is conceived as a charge reversal. An atom is thought to consist of a positively charged nucleus surrounded by negatively charged electrons. It is then assumed that the antiatom has the reverse structure: a negatively charged nucleus surrounded by positively charged electrons (positrons). The further assumption then follows that an effective contact between any particle and its antiparticle would result in cancellation of all charges and reduction of both particles to radiant energy. This is a typical example of the results of the compartmental nature of present-day physical theory, which permits an assumption to be used in one field of application, and a direct contradiction of that assumption to be applied in another field, both under the banner of ―modern physics.‖ Where the accepted theory requires that opposite charges neutralize each other on close approach, it is assumed that they do so. Where this does not fit the theory, as in the electrical explanation of the structure of matter, it is cheerfully assumed that the charges accommodate their behavior to the requirements of the theory, and take up stable relative positions instead of destroying each other. In the present instance, both of these contradictory assumptions are employed at the same time. The stable charges that somehow have no effect on each other are ―annihilated,‖ by other charges, presumably identical in nature. Our findings are that wherever electric charges actually do exist, opposite charges destroy each other on contact. It does not follow, however, that charge neutralization is equivalent to annihilation. In actual practice, only one of the reactions between particles and what are presumed to be antiparticles follows the theoretical scenario of annihilation. The electron and positron do, in fact, annihilate each other on contact, with the production of oppositely directed photons. 
The antiparticle of the proton, in the accepted sense of the term–a particle equivalent to the proton in all observable respects except that it is negatively charged–has
been detected, but contact of this antiproton with a proton does not result in annihilation of the particles into radiant energy. ―Here the situation is not as straightforward as in the annihilation of an electron-positron pair,‖61 report Boorse and Motz. And indeed it is not. The interaction of these particles produces an assortment of transient and stable particles not essentially different from those which appear in other high-energy interactions. As these authors say, ―different kinds of mesons are released‖ in the process. In the light of our new findings it is evident that these are not annihilation reactions; they are cosmic atom building reactions. We will examine the nature and characteristics of such reactions in Chapter 16. Detection of the antineutron has also been reported, but the evidence for this is indirect, and it is rather difficult to reconcile the various ideas as to just what an antineutron would be with the concept of charge reversal as the essential difference between particle and antiparticle. On the basis of the charge reversal hypothesis, the neutral particles should have no ―anti‖ forms. Indeed, those who contend that ―every particle has its antiparticle‖ justify this statement by asserting that each neutral particle is its own antiparticle. This would rule out the existence of a distinct antineutron, in the currently accepted sense of the term. In any event, this problem with respect to the neutral particles is another item that, like the lack of annihilation in the ―annihilation reactions,‖ emphasizes the inadequacy of the conventional theory of atomic structure in application to the ―antimatter‖ phenomena. In a universe of motion the atom is not an electrical structure. As has been brought out in detail in the earlier pages, it is a combination of rotational and vibrational motions. In the structures of the material type the speed of the rotational motions is less than unity (the speed of light) while the speed of the vibrational motion is greater than unity. In the structures of the cosmic type these relations are reversed. Here the speed of the vibrational motion is less than unity and the speed of the rotational motion is greater than unity. The true ―antiparticle‖ of a material particle or atom is a combination of motions in which the positive rotational displacements and negative vibrational displacements of the material structure are replaced by negative rotational displacements and positive vibrational displacements of equal magnitude. In one of the reactions currently attributed to mutual annihilation of antiparticles, the neutralization of displacements is actually accomplished, and in this case, the combination of electrons and positrons, the particles are actually annihilated; that is, they are converted to radiant energy and their existence as particles of the rotational class is terminated. But there are, in reality, two different processes involved in this reaction. First, the oppositely directed charges cancel each other, leaving both particles in the uncharged condition. Subsequently, their rotations, M 0-0-1 and M 0-0-(1), combine to 0-0-0, which is no effective rotation at all. In the vernacular, we might describe this second process as straightening out the rotational motion. There is a short interval between the two processes, and the effects attributed to ―positronium,‖ a hypothetical short-lived combination of an electron and a positron, probably originate during this interval.
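The displacement bookkeeping in the second step can be shown in one line. This is only an illustrative restatement of the combination just described, writing the parenthesized (negative) displacements as negative numbers; which rotation is assigned to the electron and which to the positron follows the earlier chapters and is incidental to the arithmetic.

```python
# The rotational combination described above: M 0-0-1 and M 0-0-(1) combine
# component by component to 0-0-0, i.e. no effective rotation remains.

positron = (0, 0, 1)    # M 0-0-1
electron = (0, 0, -1)   # M 0-0-(1), negative displacement written as -1

combined = tuple(p + e for p, e in zip(positron, electron))
print(combined)         # (0, 0, 0)
```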
The extent to which annihilation can actually take place in contacts between antiparticles other than the electron and positron is still an open question. If the observed antiproton is actually the true antiparticle of the proton–that is, a cosmic proton–the results of the
observed contacts of these particles indicate rather definitely that annihilation is confined to the one-dimensional particles. If the observed antiproton is merely a material proton with a negative charge, a possibility that cannot be ruled out at the present stage of the investigation, the observed results of the interactions are not relevant to the question, but the situation is still unfavorable for annihilation, as the obstacles in the way of securing simultaneous contact between the corresponding motions obviously increase with the complexity of the rotational combination, and it is very doubtful if the necessary coincident contacts can be obtained in different dimensions. It therefore appears that the intriguing possibility of energy production by contact between matter and antimatter is not only ruled out as a large scale process by the impossibility of concentrating antimatter in space, as previously indicated, but is also unlikely even as a single atom process. Inasmuch as our present objective is to examine those phenomena of the cosmic sector of the universe that are accessible to our observation, the observed antiparticles, which are products of high-energy processes in the material sector, are pertinent only to the extent that they throw some light on the kind of behavior that can be expected from the cosmic objects that do enter our field of observation. As indicated earlier, some of these incoming objects make themselves known as a result of chance encounters during our progress through three-dimensional time. Additionally, there are processes, to be described later, which result in the ejection of substantial quantities of matter from each sector into the other. The portion of the material sector within our observational range is therefore subject to a continual inflow of cosmic matter. The incoming particles of this matter can be identified as the cosmic rays. As they appear to observation, the cosmic rays are particles entering the local frame of reference from all directions and at extremely high speeds, together with a variety of secondary particles produced in events initiated by the primary particles. The secondaries include some common sub-atomic particles of the material system, such as electrons and neutrinos, and also a number of transient particles of extremely short lifetime, from 10⁻⁶ seconds downward, that were unknown prior to the discovery of the cosmic rays, but have since been produced by high energy processes in the particle accelerators. In current thought, the primaries are regarded as ordinary material atoms. The evidence in favor of this conclusion may be summarized as follows:

1. Sub-atomic particles are excluded, as they are all incapable, for one reason or another, of producing the observed effects. This means that, unless they belong to an otherwise unknown class of particle, the primary cosmic rays must be atoms.

2. The masses of the atoms that constitute the primaries cannot be determined at the present stage of instrumentation and techniques, but it is possible to determine the charges on the individual particles, and on the assumption that they are fully ionized, this indicates the atomic numbers. The distribution of the elements in the incoming cosmic rays, on this basis, approximates the estimated distribution in the observed universe as a whole.

In the absence of any known alternative, this amount of evidence has been sufficient to secure general acceptance of the conclusion that the primaries are atoms of ordinary
material elements. When the issue as to its validity is raised, however, as it must be when an alternative appears, it is clear that there are many counter indications in the empirical data. The most serious items are the following:

1. The speeds and energies of the primaries are too high to be compatible with production by ordinary physical processes. No known process, or even a plausible speculative process, based on conventional physics, is capable of producing energies that extend up to the vicinity of 10²⁰ eV. As expressed in the Encyclopedia Britannica, ―how to explain the acquisition of such energies is a disturbing physical and cosmological problem.‖

2. With the exception of some of the relatively low energy rays that are thought to originate in the sun, most of the primaries have energies in the range which indicates speeds in the neighborhood of the speed of light. Inasmuch as some decrease in speed has undoubtedly taken place before the observations, it is quite probable on the basis of the observational evidence (that is, disregarding any purely theoretical limitation) that the rays originally entering the local environment were traveling at the full speed of light. This is another indication of an extraordinary origin.

3. While the distribution of elements deduced from the cosmic ray charges approximates the estimated distribution in the observed universe as a whole, there are some very significant differences. For example, the proportion of iron atoms in the cosmic rays is 50 times that in average matter. Lithium has been reported to be as much as 1000 times as abundant (although some of the lithium may be a decay product). The cosmic rays therefore cannot be merely ordinary matter drawn from the common pool and accelerated to high speeds by some unknown process. They must have originated from some unusual kind of source. These anomalies in the ―charge spectrum‖ of the cosmic rays are given little attention in current physical thought, probably because they have no known explanation, but the significance that such deviations from the normal abundance would have, if confirmed, was clearly recognized at the time when the first indications of these deviations were observed. For instance, Hooper and Scharff (1958) made this comment: ―An excess of heavy nuclei would suggest the necessity of reconsidering our fundamental ideas on the origin of the primary radiation.‖62

4. All of the major products of the primary rays have extremely short lifetimes. If they do not undergo collisions before this time has elapsed, they decay in flight to particles of lower mass and equal or longer lifetime. There is much available evidence to indicate that this is also true of the primaries. For example, in some of the observed events a transient particle leaves the scene of the event in a continuation of the line of travel of the primary, and carries the bulk of the original energy. The straightforward interpretation of such events is that they represent processes in which the primary decays to the transient particle and continues on its way. The existence of a substantial number of high-energy pions in the incoming stream of particles is another item of evidence pointing in the same direction, as similar, but earlier, decays of primaries will produce pions with very high energies. It has been estimated that as much as 15 percent of the
incoming high-energy particles are pions. The conclusion that can logically be drawn from the observations is that the primaries are of the same general nature as the known transient particles, and that the entire cosmic ray phenomenon is a single process taking place in a succession of decay events–a process in which an atom with some strange and unusual properties is converted first into other similar, but less massive, particles, and then finally into products that are compatible with the local environment. The considerations summarized in the foregoing paragraphs indicate that the current explanation of the nature of the primary cosmic rays is not correct. They point to the conclusion that these primaries are not atoms of material elements, as now believed, but atoms of a special kind which have characteristics similar to those of the transient particles, and are produced under some unusual conditions that lead to entry into the local frame of reference at the full speed of light. Since we now find from the theoretical development that there is a continuing inflow of cosmic atoms, which are atoms of a special kind that, according to the theory, enter our environment at the speed of light, and are subject to rapid decay in the manner of the observed transient particles, the identity of the theoretical and observed phenomena is almost self-evident. An outstanding characteristic of the results obtained from development of the consequences of the postulates of the Reciprocal System of theory–one that we have had occasion to mention several times in the preceding pages–is the way in which they resolve long-standing and seemingly extremely difficult questions in a surprisingly simple manner. Nowhere is this more evident than in the case of the cosmic rays, where the finding that these incoming particles are atoms from the high-energy sector of the universe clears up the many previously intractable issues in this area with remarkable ease. The basic questions: What are the cosmic rays?, and Where do they come from?, are answered automatically by the theoretical discovery of a sector of the universe in which objects with the observed properties of the cosmic rays are indigenous. The particular properties that characterize the constituents of the cosmic rays, and distinguish them from the constituents of aggregates of ordinary matter, are naturally the ones that are the most difficult to explain on the basis of current theories which try to fit them into the material system of phenomena, but these explanations are practically obvious once the existence of the cosmic (high energy) sector is recognized. The energy questions are the central problems. As stated by W. F. G. Swann, ―no piece of matter can, under ordinary circumstances, contain, in any form, enough energy to provide cosmic ray energies for its particles.‖63 But this is only one phase of the energy problem. The total energy involved is also far too large. If cosmic rays move in straight lines, as does starlight, and have the same energy density as starlight, then the power supplies to each will have to be the same. There seems no conceivable way to find this much energy for cosmic radiation. (Leverett Davis, Jr.)64 Here again we meet the ―There is no other way‖ contention that is being used to justify so many of the otherwise untenable theories and assertions of present-day science, and again
the development of the Reciprocal System demonstrates that there is a ―conceivable way.‖ But because the cosmic ray physicists have been confined within the limited horizons of conventional basic ideas, they have not been able to account for the observed energies on any straightforward basis. They have therefore been forced to invent exotic hypothetical mechanisms for acceleration of the cosmic rays from the relatively low energies that are available in the material sector to the high levels that are actually observed, and equally far-fetched ―storage‖ processes to avoid the difficulty cited by Davis. The existence of another half of the universe, in which the prevailing speeds are greater than the speed of light, and the energies of the mass units are correspondingly great, disposes of both aspects of the energy issue. There are observable explosion processes in the material sector (which will be examined in detail in Volume II) that result in the acceleration of large quantities of matter to speeds in excess of the speed of light. The most energetic portions of these high-speed explosion products are ejected into the cosmic sector, the sector of motion in time. From the general reciprocal relation between space and time we can deduce that these same processes are operative in the cosmic sector, and that they result in the ejection of large quantities of cosmic matter into the material sector. This is the matter that we observe in the form of the cosmic rays. The characteristics of these interchange processes, as they will be developed in Volume II, explain why the distribution of the elements in the cosmic rays differs from the estimated average distribution in the observed physical universe. It will be shown that the proportion of heavier elements in matter increases with the age of the matter, and it will be further shown that the matter ejected from one sector of the universe into the other consists principally of the oldest (or most advanced) matter in the originating sector. Thus the cosmic rays are not representative of cosmic matter in general; they are representative of the cosmic matter that corresponds to the oldest matter in the material sector. The isotropic distribution of the incoming rays is likewise a necessary result of entry from the region of motion in time. Both the spatial location of entry, and the direction of motion of the particle after entry, are determined by chance, as the contact of the space and time motions is purely scalar. The identification of the cosmic rays as atoms of the cosmic elements was clear from the beginning of the development of the Reciprocal System. As stated earlier, the available evidence indicates that these so-called ―rays‖ must be atoms. On the other hand, their observed properties are quite different from those of the atoms of ordinary matter. The natural conclusion from these facts would be that the atoms of the cosmic rays are atoms of some different kind. Conventional science cannot accept this answer because it has no place for the kind of an atom that is indicated. The physicists have therefore been forced to conclude that the cosmic rays are ordinary atoms that, for some unknown reason, have unusual properties. In contrast, the basic postulates of the Reciprocal System require the existence of a type of atom, the inverse of the material atom, that has just the kind of characteristics, when observed in the material sector, that are found in the cosmic rays. 
It should be noted in this connection that the concept of antimatter, the conventional alternative to the reciprocal matter required by the postulates of the Reciprocal System, cannot be applied to the cosmic rays, because the interaction of matter and antimatter is
theoretically supposed to result in annihilation of both substances, rather than the particle production and other phenomena that are actually observed in the cosmic ray interactions. Although only a limited amount of time could be allotted to the cosmic rays in the early stages of the development of the Reciprocal System, because of the large number of physical areas that had to be given some study in order to confirm the status of the theory as one of general application, the first edition did include an account of the nature and origin of the primary rays, an explanation of the kind of modifications that these particles must undergo in the material environment, and a general description of this modification, or decay, process. In the meantime there has been substantial progress, both experimentally and theoretically, and it is now possible to expand the previous presentation very materially. The extension of theory in the cosmic ray area that has taken place in the twenty years since the publication of the first edition provides a good illustration of what is involved in the development of the theoretical system from the fundamental postulates. The basic facts–the identity of the cosmic rays, their place of origin, the reason for their enormous energies, etc.–were almost self-evident once the reciprocal relation between space and time was recognized. But it cannot be expected that such an understanding of the basic facts will immediately clear up the entire multitude of questions that arise in the course of developing the details of the theoretical structure. The answers to these questions are available. They can be derived from the fundamentals of the system of theory. But they do not emerge automatically. Where a theory is developed entirely by deduction from a single set of premises, as is true of the Reciprocal System, there should not be many cases in which wrong answers are reached, if the theoretical foundations are solid, and due care is exercised in the logical development. Only a very few of the conclusions stated in the first edition of this work have been invalidated by the twenty years of additional study that have followed. But it is altogether unrealistic to expect that the first exploration of a physical field by means of a totally new method of approach will accurately identify all of the significant features of the phenomena in that field. It is a virtual certainty that many of the original conclusions will be incomplete. Here, again, the Reciprocal System is no exception. The explanation of cosmic ray decay that will be given in the next chapter is, in all essential respects, the same explanation that was presented in the first edition. However, the development of the theoretical structure in the intervening years has brought to light many necessary consequences of the postulates of the Reciprocal System that have a significant bearing on the decay process and contribute to a more complete understanding of the decay events. These new items of information include such things as the existence of a transition zone, the two-dimensional nature of the motion in that zone, the existence of the massless form of the neutron, and the nature of the limitation on the lifetimes of the cosmic particles. With the benefit of all of this additional theoretical knowledge, and a substantial increase in the amount of available empirical information, it will be possible to define the decay sequence more accurately. 
Nevertheless, the presentation in Chapter 15 is not a new explanation of the phenomenon; it is the same explanation in more complete form.

CHAPTER 15

Cosmic Ray Decay
On the basis of the information developed in Chapter 14 we may describe the cosmic rays in general terms as cosmic atoms and particles which enter the material environment at the speed of light, at random spatial locations, and with random directions. Here, then, are the contents of the cosmic sector of the universe as they appear, very fleetingly, to our observation. We will now examine what happens to these objects after they arrive. In the earliest observed stages the cosmic particles are known as the primary cosmic rays. As many observers have pointed out, there is no assurance that these are the original rays, as the decay process may have already begun before the primary rays are observed. The theoretical development indicates that this is, indeed, true, as the primaries contain a considerable percentage of particles that are clearly decay products rather than normal constituents of the origina1 rays. In the subsequent discussion we will follow the general practice, and will refer to the observed incoming particles as the primary rays, but it should be understood that this does not imply that the observed primaries are identical with the particles that originally crossed the boundary into the material sector. Since the cosmic rays enter the material sector from a region in which the prevailing speeds are greater than unity, these particles make their entry at the speed of light. It is the decrease from a speed greater than unity to a speed less than unity which constitutes entry into the materia1 sector, but the dividing line between the cosmic sector and the material sector is unit speed in all three scalar dimensions. The speed of the primaries therefore remains at or near unity in the observable dimension even after the speed, in total, has decreased to some extent. This accounts for the previously noted fact that the observed speeds of the incoming particles are mainly close to the speed of light. Inasmuch as these speeds, and the corresponding kinetic energies, are greatly in excess of the normal speeds and energies of the material sector, transfer of the excess kinetic energy to the environment begins immediately on entry. Gravitational and electromagnetic forces, to which the cosmic atom is subject as soon as it crosses the boundary, accomplish part of the energy reduction. Contact with material particles is also an important factor, and a further loss occurs in connection with the reduction of the internal energy that must also take place. The cosmic atoms of maximum energy content (kinetic equivalent) are those of the most abundant cosmic elements: c-hydrogen and c-helium. The principal constituents of the cosmic rays, the cosmic elements of low atomic number, are therefore not only entering the material frame of reference at speeds which are far too high to be compatible with the material environment, but are also entering in the form of structures whose internal energy (rotational displacement) content is also much too great. These elements must lose rotational energy, as well as kinetic energy, before they can assume forms that will merge with the material phenomena. The required loss of rotational energy from the atomic structures is accomplished by ejection of particles of an appropriate nature. A readjustment of some kind in the atomic motions is required at very short intervals, and
the probability principles insure that the direction of the rearrangements is toward greater stability. In the material environment this means a reduction of the excess rotational energy. At the present stage of the theoretical development it appears that the limitation of the lifetimes of the cosmic elements to extremely short intervals is due to the fact that the rotation in the cosmic structure takes place at a speed greater than unity, and this structure therefore moves inward in time, rather than in space. Consequently, it can exist in a stationary spatial frame of reference for only one unit of time. If it is moving translationally at a speed above unity in all scalar dimensions, as is true of most of the cosmic atoms encountered by chance in our passage through time, it moves away from the line of the time progression of the material sector, and disappears. But this option is not available to cosmic atoms that have dropped below unit speed, and instead, they separate into two or more particles, each of which then has its own appropriate lifetime. The natural unit of time, in application to macroscopic physical phenomena, was evaluated in Chapter 13 as 1.521 x 10⁻¹⁶ seconds. Some of the observed particles have lifetimes in this neighborhood, but others range all the way from about 10⁻¹⁶ seconds to about 10⁻²⁴ seconds. As will be brought out later, the magnitude of the deviation from unit time has been correlated with the dimensions of the spatial motion of the particles, but the exact nature of the modifying factor has not yet been identified, and for the present we will treat it as a modifier of the unit of time, similar to the inter-regional ratio that modifies the unit of space in application to the time region. The limiting lifetime to which the foregoing comments apply is the limit at zero speed. At higher speeds, the lifetime, as measured by a conventional clock, increases in accordance with the relations expressed in the Lorentz equations, which, as noted earlier, are equally as applicable in the Reciprocal System of theory as in conventional physics. The explanation of this longer life that we deduce from theory is that the particle can remain intact in the spatial reference system as long as it remains in the same unit of time. But an object moving at the speed of light remains in the same unit of time (in the natural system, which is controlling) permanently and such an object can exist indefinitely in any system of reference. The decrease in life at the lower speeds follows the mathematical pattern derived by Lorentz. From the foregoing it is evident that the primary cosmic rays, moving at the speed of light, did not necessarily enter the material sector in our immediate vicinity. The rays that we observe may have entered anywhere in interstellar, or even in intergalactic, space. In general, as pointed out in the first edition, the successive steps of the decay process which the cosmic atoms undergo after their entry consist of ejections of rotational displacement in the form of massless particles, which continue until the residual cosmic element reaches a status in which it can be transformed into a material structure. Of course, nothing physical can be transformed into something different. Only in the world of magic is that possible. Addition or removal of some constituent can alter a physical entity, but it can be transformed only into some other form of the same thing, as the term itself implies.
In the case of the elements the transformation is made possible by the specific relation between the space and time zero points.

As explained in Chapter 12, the difference between a positive speed displacement x and the corresponding negative speed displacement 8-x (or 4-x in the case of two-dimensional motion) is simply a matter of the orientation of the motion with respect to these space and time zero points. The rotational motions of material atoms and particles are all oriented on the basis of the spatial (positive) zero, because, as noted earlier, it is this orientation that enables the rotational combination to remain in a fixed spatial reference system. Similarly, the cosmic atoms and particles are oriented on the basis of the temporal (negative) zero, and are therefore capable of remaining permanently within a fixed temporal reference system, whereas they have only a transient existence in a spatial system. The only difference between a motion with a positive speed displacement x and one with a negative speed displacement 8-x (or 4-x) is in this orientation of the scalar direction. Either can therefore be converted to the other by a directional inversion. For example, if the negative magnetic displacements of the cosmic helium atoms, (2)-(1)-0, are replaced by the 4-x positive values, this inverts the scalar directions of the rotations without altering the nature or magnitude of either of the rotational components. The product, an atom of the material element argon, 2-3-0 (or 3-2-0 in our usual notation), is therefore the same physical object as the cosmic helium atom. It is merely moving in a different scalar direction. Conversion of cosmic helium into argon is nothing more than a change to another form of the same thing, and thus it is a physical possibility that can be accomplished under the right conditions and by the appropriate processes. Every atom of either the cosmic or the material type in which the speed displacements do not exceed 3 in either of the magnetic dimensions or 7 in the electric dimension has an equivalent oppositely directed structure. This is illustrated in the following table of equivalents of cosmic and material elements of the inert gas series, the elements with no effective displacement in the electric dimension.

Cosmic System               Material System
c-helium     (2)-(1)-0      2-3-0    argon
c-neon       (2)-(2)-0      2-2-0    neon
c-argon      (3)-(2)-0      1-2-0    helium
c-krypton    (3)-(3)-0      1-1-0    2 neutrons
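The inversion rule just described can be made concrete with a short sketch. This is my own restatement, not Larson's notation: the function name is illustrative, and the handling of a zero (no effective displacement, which is assumed to stay zero) is inferred from the inert gas table above.

```python
# Sketch of the directional inversion described above: each effective magnetic
# displacement x is replaced by 4 - x, and an effective electric displacement
# by 8 - x; a zero (no effective displacement) is left unchanged.

def material_equivalent(cosmic):
    """Map a cosmic displacement triple (m1, m2, e) to its material equivalent."""
    m1, m2, e = cosmic
    invert_magnetic = lambda x: 4 - x if x else 0
    invert_electric = lambda x: 8 - x if x else 0
    return (invert_magnetic(m1), invert_magnetic(m2), invert_electric(e))

inert_gas_series = {
    "c-helium":  (2, 1, 0),   # written (2)-(1)-0 in the text
    "c-neon":    (2, 2, 0),
    "c-argon":   (3, 2, 0),
    "c-krypton": (3, 3, 0),
}

for name, cosmic in inert_gas_series.items():
    a, b, c = material_equivalent(cosmic)
    print(f"{name:10s} -> {a}-{b}-{c}")
# c-helium -> 2-3-0 (argon), c-neon -> 2-2-0 (neon),
# c-argon -> 1-2-0 (helium), c-krypton -> 1-1-0 (two massless neutrons)
```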

It does not follow that a direct conversion of an atom of such an element to the equivalent inverse structure is always possible. On the contrary, it is seldom possible. For instance, in order to convert the cosmic helium atom directly to argon the rotations in the two magnetic dimensions would have to be inverted simultaneously, and at the same time the approximately 40 mass units required by the argon atom would have to be obtained from somewhere. The c-helium atom cannot meet these requirements, so at the end of the appropriate unit of time when it must do something, it does what it can do; that is, it ejects a massless particle. This carries away some positive rotational displacement, and moves the residual cosmic atom up the series of elements toward a higher cosmic atomic number, the equivalent of a lower material atomic number. This process continues until the residual cosmic atom is c-krypton, each rotating system of which is equivalent to a neutron. Here the transformation requirements can be met, as the inversion of each rotation involves only a single effective unit, and no provision for addition of mass is necessary, since the product of the conversion is a massless neutron.

The scalar directions of the c-krypton motions therefore invert, and two massless neutrons take their places in the material system. The question as to what then happens to these particles will be discussed in Chapter 17. The general nature of the cosmic ray decay process, as described in the foregoing paragraphs, was clear from the start of the investigation of the role of the cosmic rays in the theoretical universe of the Reciprocal System. It was therefore evident that the ejections during this decay process must consist of positive rotational displacement in order that the cosmic atoms would be modified in the direction of greater stability in the material environment and ultimately built up to the level where conversion is possible. In the first edition these ejections were discussed in terms of neutrons and neutron equivalents, although it was noted that, in the terrestrial environment at least, they must be massless. Transfer of mass in these events is impossible, as the cosmic atoms have no actual mass. The mass indicated by their behavior in the observed reactions is merely the mass equivalent of the cosmic (inverse) mass that these atoms of the cosmic elements actually do possess. What these atoms must eject is positive magnetic rotational displacement, and this can only take place through the medium of massless particles. The conclusion reached in the earlier study was that in these ejection events the carrier particles must be pairs of neutrinos and positrons (jointly equivalent to neutrons rotationally, but massless) rather than neutrons of the observed type. The more recent finding that the neutron exists in a massless form now resolves this difficulty, as it is now evident that the ejected particles are massless neutrons. The progress that has been made in both the observational and the theoretical fields has also enabled defining the decay path more accurately and in more detail than was possible in the first edition. Inasmuch as all features of the cosmic sector of the universe are identical with the corresponding features of the material sector, except that space and time are interchanged, the matter accelerated to high speeds by cosmic explosions of astronomical magnitude includes all of the components of cosmic matter: sub-atomic particles and atoms of all of the elements. But in order to be accelerated all the way to unity in three dimensions, a particle must offer a full unit of resistance in all three dimensions. Consequently, the only particles that are able to accelerate up to the escape speeds are the double rotating systems, the atoms. The unit particle in the interchange between the cosmic and material sectors is the atom of unit atomic number, the mass two isotope of hydrogen (deuterium). The mass one isotope of hydrogen does not qualify as a full-sized unit, but it lacks only the equivalent of a cosmic massless neutron, and this can be provided by ejection of a massless neutron of the material type. When subjected to a powerful explosive acceleration the H1 atom therefore ejects such a particle and assumes the H² status. The sub-atomic particles are not capable of being accelerated to the escape speed. They are all either inherently massless, or easily separated into massless components, and when they reach their limiting speeds they take the massless forms and thereby terminate the acceleration. 
The total absence of sub-atomic particles in the cosmic rays that results from this inability to reach the escape speed is not currently recognized because the singly charged particles are mistakenly identified as protons, and the cosmic atoms in the decay sequence–mesons, in the conventional terminology–are accorded a somewhat
indefinite kind of a sub-atomic status. But the absence of electrons is a conspicuous and puzzling feature of the cosmic ray phenomenon, and it imposes some severe constraints on theories which try to account for the origin of the rays. An effect so gross as to exclude completely high-energy electrons from the spectrum at the earth should, it would seem, be accounted for unambiguously by any successful theory for the origin of the cosmic radiation. (T. M. Donahue)65 The unambiguous explanation is now available. No sub-atomic particles are present in the original cosmic rays because these particles are not capable of accelerating to the high inverse speeds necessary for entry into the material sector. The cosmic property of inverse mass is observed in the material sector as a mass of inverse magnitude. Where a material atom has a mass of Z units on the atomic number scale, the corresponding cosmic atom has an inverse mass of Z units, which is observed in the material sector as if it were a mass of 1/Z units. The masses of the particles with which we are now concerned are conventionally expressed in terms of million electron volts (MeV). One atomic mass unit (amu) is equivalent to 931.152 MeV. The atomic number equivalent is twice this amount, or 1862.30 MeV. The primary rotational mass of an element of atomic number Z is then 1862.30 Z MeV, and that of a cosmic element of atomic number Z is 1862.30/Z MeV. Where the atomic mass m is expressed in terms of atomic weight, this becomes 3724.61/m MeV. As matters now stand, neither the theoretical calculations nor the observations of the masses of the cosmic elements above hydrogen in the cosmic atomic series are sufficiently accurate to justify taking the secondary mass into consideration. The theoretical discussion of the masses of these elements will therefore be confined to the primary mass only, disregarding the small modification due to the secondary mass effect. For the same reasons, both the calculated and observed values in the comparisons that follow will be stated in terms of the nearest whole number of MeV. An exception has been made in the case of hydrogen, because the secondary mass of this element under normal conditions is relatively large, and the probability that it will be altered by changes in environmental conditions is relatively small. Since the mass of a material H² atom is 1.007405 on the atomic number scale, the mass of a cosmic H² atom is the reciprocal of this figure, or 0.99265 units. This is equivalent to 1848.61 MeV. At this point it will be necessary to recognize that the combinations of motions that constitute the atoms of the elements, both material and cosmic, are capable of acquiring additional motion components of a different kind, each unit of which alters the mass of the atom by one atomic weight unit. It will be convenient to defer the detailed consideration of this new type of motion, which we will call a gravitational charge, until we are ready to discuss the entire class of motions to which it belongs, but for present purposes we need to note that each material element of atomic number Z exists in a number of different forms, or isotopes, each of which has atomic weight 2Z+G, where G is the number of gravitational charges. The normal mass of the corresponding cosmic isotopes is the reciprocal of 2Z+G,
but when the cosmic atoms enter the material environment they are able to add gravitational charges of the material (positive) type to the cosmic combinations of motions (including the gravitational charges of the cosmic
(negative) type, if any). Each such material type charge adds one atomic weight unit, or 931.15 MeV, to the isotopic mass of the cosmic atom. In the first edition it was recognized that the incoming cosmic rays would consist primarily of c-hydrogen, but at that time there were no observational indications of any cosmic ray particles in the hydrogen mass range, and the extension of the theoretical development to the questions of scalar motion in two dimensions and the lifetimes of the cosmic atoms had not yet been undertaken. The exact theoretical status of the incoming c-hydrogen atoms was therefore still uncertain. Inasmuch as the ―mesons‖ then known were mainly cosmic elements of the inert gas series, it was concluded that the original c-hydrogen atoms must be stripped of their one-dimensional rotation and reduced to the two-dimensional (inert gas) condition almost immediately on crossing the speed boundary. In the meantime, however, the investigators have been able to extend their observations to earlier portions of the decay path, and they have recently discovered a short-lived particle with a mass that is reported as 3695 MeV. Identification of this 3695 ―psi‖ particle as a ―cosmic deuteron with two material isotopic charges‖66 by Ronald W. Satz was the crucial theoretical advance that opened the door to a clarification of the status of cosmic hydrogen. This now enables us to close the gap, and trace the progress of the cosmic atom from its entry into the material sector in the form of cosmic hydrogen (c-H²) all the way to its final transformation into material particles. For reasons which will be explained in Volume II, the cosmic atom has an effective translational motion in two of the three scalar dimensions at the neutral point where it enters the material half of the universe. The terrestrial environment, into which the observable cosmic atoms enter, is favorable for the acquisition of gravitational charges of the material type. Each of the two dimensions of motion therefore adds such a charge. The two charges acquired by the c-H² atom add 1862.30 MeV to the 1848.61 MeV mass equivalent of the cosmic mass, bringing the total mass of this, the first of the theoretical cosmic ray particles, to 3710.91 MeV. The mass of the newly discovered psi particle is reported as 3695 MeV. In view of the many uncertainties involved in the observations, this can be regarded as consistent with the theoretical value. As mentioned earlier, the particle lifetimes are correlated with the dimensions of the spatial motions that the particles acquire, the translational motion and the gravitational charges. While the theoretical situation has not yet been clarified, we find empirically that the life of a particle with two dimensions of scalar motion in space and no gravitational charge is about 10⁻¹⁶ seconds, approximately the natural unit of time. Each dimension of motion modifies the unit of time applicable to the particle life by approximately 10⁻⁸, while each gravitational charge modifies the unit by about 10⁻². On this basis, the following approximate lifetimes are applicable:

Dimensions    Charges    Life (sec)
    3            0         10⁻²⁴
    2            2         10⁻²⁰
    2            0         10⁻¹⁶
    1            1         10⁻¹⁰
    1            0         10⁻⁸
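The mass and lifetime rules stated above can be checked with a few lines of arithmetic. The sketch below is my own restatement rather than Larson's; it assumes the constants quoted in the text (931.15 and 1862.30 MeV, the 1.007405 figure for material H²) and expresses the empirical lifetime rule just tabulated as a single formula.

```python
# Arithmetic check of the figures quoted above; helper names are illustrative.

AMU_MEV = 931.15           # one atomic weight unit, in MeV
UNIT_MEV = 2 * AMU_MEV     # atomic number equivalent, 1862.30 MeV

def cosmic_mass_mev(atomic_weight, grav_charges=0):
    """Primary mass of a cosmic element plus material gravitational charges."""
    return 2 * UNIT_MEV / atomic_weight + grav_charges * AMU_MEV

# c-H2 is the special case: the text uses the reciprocal of 1.007405.
c_h2 = UNIT_MEV / 1.007405                 # ~1848.6 MeV
psi = c_h2 + 2 * AMU_MEV                   # two gravitational charges
print(round(psi, 2))                       # ~3710.9 MeV, vs. the reported 3695 MeV
print(round(cosmic_mass_mev(3, 2)))        # ~3104 MeV, the c-He3 figure used below

def approx_lifetime_seconds(dimensions, grav_charges):
    """Empirical rule above: 1e-16 s at two dimensions and no charges,
    a factor of 1e-8 per added dimension, 1e-2 per gravitational charge."""
    return 10.0 ** (-16 - 8 * (dimensions - 2) - 2 * grav_charges)

print(approx_lifetime_seconds(2, 2))       # 1e-20 s, the psi particle range
```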

The reported lifetime of the 3695 psi particle is in the neighborhood of 10⁻²⁰ seconds, which agrees with the theoretical determination of the dimensions of motion on which the mass calculation is based. The general decay pattern defined in the preceding pages indicates that c-H² should undergo an ejection of positive rotational displacement, converting it to c-He³. From the expression 3724.61/m, we obtain 1242 MeV as the rotational mass of c-He³, to which we add the mass of two gravitational charges for a total of 3104 MeV. The observed 3695 particle decays to another psi particle with a reported mass of 3105 MeV, and a life of about 10⁻²⁰ seconds. This second particle can clearly be identified with the c-He³ atom. Thus the observed masses, the lifetimes, and the decay pattern all confirm the basic identification of the c-hydrogen particle by Satz. Another decay of the same kind would produce c-He⁴, and it is probable that some particles of this composition are occasionally formed. Indeed, any cosmic atom between c-hydrogen and c-krypton may appear in the cosmic ray products. But the probabilities favor certain specific cosmic elements, and these are the products that constitute the normal decay sequence we are now examining. The speeds of the cosmic rays and their decay products decrease rapidly in the material environment, and by the time the decay of c-He³ is due the additional energy loss in the decay process is usually sufficient to drop the cosmic residue into the speed range below unity. The consequent elimination of the motion in the second scalar dimension results in a double decay which adds two atomic weight units to the cosmic atom. The product is c-Li⁵. Further increases in the inverse mass of the residual cosmic atom by successive additions of single atomic weight units would be possible, but the probabilities favor larger steps as the material equivalent of a cosmic unit increment continues decreasing. The one unit increment in each of the two steps from c-He³ to c-Li⁵ is therefore followed by a series of increments that are uniformly one atomic weight unit larger in each successive decay, except for the step between c-N¹⁴ and c-Ne²⁰, where the increase over the size of the previous increment is two units. On this basis, the two 1-unit increments that produce c-Li⁵ are followed by a 2-unit increment to c-Be⁷, a 3-unit increment to c-B¹⁰, a 4-unit increment to c-N¹⁴, and a 6-unit increment to c-Ne²⁰. These decay products are not capable of retaining both of the gravitational charges of their precursors, but they keep one of the charges, and all of the cosmic elements identified as members of this section of the decay sequence have masses which include a 931.15 gravitational increment, as well as the basic mass equivalent of the cosmic element, 1862.30/Z MeV. The indicated life of a cosmic atom with one gravitational charge, after dropping into the range of one-dimensional motion, is about 10⁻¹⁰ seconds. These theoretical masses and lifetimes are in agreement with the observed properties of the class of transient cosmic ray particles known as hyperons, as indicated in the following tabulation:

Element     Particle      Mass (MeV)               Lifetime (sec)
                          Calculated   Observed
c-Li⁵       omega            1676        1673       1.30 x 10⁻¹⁰
c-B¹⁰       xi               1304        1321       1.67 x 10⁻¹⁰
c-N¹⁴       sigma            1197        1197       1.48 x 10⁻¹⁰
c-Ne²⁰      lambda           1117        1116       2.52 x 10⁻¹⁰

The masses given are those of the negatively charged particles. Positive electric charges and other variable factors introduce a ―fine structure‖ into the numerical values of the properties of the particles that has not yet been studied in the context of the Reciprocal System. The observed decay pattern is in agreement with the theory, so far as its general direction is concerned; that is, all of the members of the series decay in such a manner that the eventual result is c-neon. It is still uncertain, however, whether the decay always passes through all of the stages identified with the normal sequence, or whether this sequence is subject to modification, either by omission of one or more of the steps, or by a variation in the size of the ejections of time displacement. The c-Be⁷ atom, mass 1463 MeV, for instance, is not listed in the tabulation, as its identification with an observed particle of mass 1470 MeV is rather uncertain. This does not preclude its definite identification as a decay product eventually. It may be noted in this connection that the omega particle (c-Li⁵) was found only as a result of an intensive search stimulated by a theoretical prediction. However, the fact that the last three members of this hyperon series (which were the first to be discovered and are still the best known) are separated by only one decay step suggests that there is little, if any, deviation from the normal sequence in those cases where the full range of decay from c-He to c-Ne is involved. When we examine the properties of gravitational charges at a later stage of the theoretical development we will find that the stability of these charges is a function of the atomic number. The mathematical expression of this relation which we will derive from theory indicates that the stability limit for a double gravitational charge in the terrestrial environment falls between the material equivalents of c-He³ and c-Li⁵. This accounts for the previously mentioned fact that c-Li⁵ and the elements above it in the cosmic series are incapable of retaining two gravitational charges. But the center of the zone of stability for these elements is closer to the +1 isotope (one gravitational charge) than to the zero isotope (the basic rotation), and for this reason they are all singly (gravitationally) charged, as indicated in the preceding discussion. From c-Si²⁷ upward in the cosmic series, the center of the zone of stability is closer to the zero isotope, and these elements carry no gravitational charges. Without the gravitational charge, the mass of c-Si²⁷, the decay product resulting from a 7-unit addition to c-Ne²⁰, is 137.95 MeV, and the low speed lifetime is about 10⁻⁸ seconds. The corresponding observed particle is the pion, with measured mass 139.57 MeV, and lifetime 2.602 x 10⁻⁸ seconds. Pions are frequently reported as products of observed cosmic ray events initiated by primaries. As we will see in the next chapter, such production is quite feasible where there is a violent contact of some kind, with the release of a large amount of energy, but direct production of pions in decay is not consistent with the decay pattern as derived from theory. The apparent direct production is, however, understandable when the relative lifetimes of the pion and the earlier decay products are taken into consideration.
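The calculated column of the hyperon table, the c-Be⁷ value, and the pion and muon figures quoted in this chapter all come from the same 3724.61/m rule, with one gravitational charge (931.15 MeV) retained through the hyperon series and none from c-Si²⁷ onward. The following short check is my own tabulation of that rule; the list of identifications simply restates the text.

```python
# Check of the calculated masses quoted in this chapter: basic cosmic mass
# 3724.61/m MeV for atomic weight m, plus 931.15 MeV per retained
# gravitational charge. Identifications follow the text.

def calculated_mass_mev(atomic_weight, grav_charges):
    return 3724.61 / atomic_weight + 931.15 * grav_charges

decay_products = [
    ("c-Li5  (omega)",   5, 1),   # hyperons keep one gravitational charge
    ("c-Be7",            7, 1),
    ("c-B10  (xi)",     10, 1),
    ("c-N14  (sigma)",  14, 1),
    ("c-Ne20 (lambda)", 20, 1),
    ("c-Si27 (pion)",   27, 0),   # no gravitational charge from here on
    ("c-Ar35 (muon)",   35, 0),
]

for name, m, g in decay_products:
    print(f"{name:16s} {calculated_mass_mev(m, g):7.1f} MeV")
# 1676.1, 1463.2, 1303.6, 1197.2, 1117.4, 137.9 and 106.4 MeV respectively,
# matching the 1676, 1463, 1304, 1197, 1117, 137.95 and 106.42 quoted in the text.
```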

There is no reason to believe that normal decay in flight will result in any change of direction. Ejection of massless particles will take care of the conservation requirements without the necessity of directional modification. Because the entire decay process up to the production of the pion occupies only a very short time compared to the lifetime of the pion itself, it is unlikely that the usual methods of observation will be able to distinguish between a pion and a cosmic particle undergoing a complete decay to the pion status in flight. In the kind of situation mentioned in Chapter 14, for instance, where a pion apparently leaves the scene of an event in a continuation of the direction of motion of the primary, and carries the bulk of the original energy, leading to the conclusion that the primary decayed directly to the pion, there is nothing in the observations that is inconsistent with the theoretical conclusion that during a short interval at the beginning of the motion attributed to the pion, the cosmic particle was actually going through the preceding steps in the decay sequence.

The next event in this decay sequence, the decay of the pion, involves an 8-unit increment to c-Ar35. Again the zero isotope is the stable form. This leads to a mass of 106.42 MeV and a theoretical life equal to that of the pion. The observed particle is the muon, with mass 105.66 MeV, formed by decay of the pion, as required by the theory. Both the decay to c-Si27 (the pion) and the subsequent decay to c-Ar35 (the muon) continue the same pattern of a uniform one unit increase in the cosmic mass increment in each succeeding event that was followed in the earlier decay steps. But inasmuch as c-argon is equivalent to helium, which, from the material standpoint, is only one step away from the neutron that is the end product of the decay process, the following ejection of positive displacement carries the cosmic atom to the final cosmic structure, c-krypton. Each of the two rotating systems of the c-Kr atom is rotationally equivalent to a neutron, and converts to that particle. Since c-Kr is massless (that is, its observed mass is merely the mass equivalent of the inverse mass of the cosmic sector) the conversion products are massless neutrons, or their equivalents, pairs of neutrinos and positrons. Some of the aspects of this conversion process will be given further consideration in Chapter 17.

Unlike the decay events, which involve changes in the atomic structure, and therefore do not take place until they must, the conversion of the c-krypton rotations to massless neutrons is merely a change in scalar direction to conform with the new environment, and it takes place as soon as it can do so. Consequently, the c-krypton atom, as such, has a zero lifetime. As soon as the particle ejection from c-argon takes place, the conversion to massless neutrons begins. In view of the non-appearance of c-krypton, the apparent lifetime of c-argon, the muon, is the sum of its own proper lifetime and the conversion time. The value reported from observation is 2.20 x 10⁻⁶ seconds. A theoretical explanation of this value is not yet available, but it is probably significant that the difference between this and the life of an uncharged particle moving in one dimension, about 10⁻⁸ seconds, is approximately that associated with a gravitational charge.
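The pion and muon identifications just described rest on the same mass expression, taken without the gravitational charge. A minimal sketch (Python; the variable names are ours):

```python
# Uncharged cosmic elements: theoretical mass is 3724.61 MeV divided by the
# atomic weight, with no gravitational charge added.

ROTATIONAL_BASE = 3724.61   # MeV

for element, weight, particle, observed in [
    ("c-Si27", 27, "pion", 139.57),
    ("c-Ar35", 35, "muon", 105.66),
]:
    calculated = ROTATIONAL_BASE / weight
    print(f"{element}: calculated {calculated:.2f} MeV, observed {particle} mass {observed} MeV")
```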
The absence of the c-krypton atom from the decay process is not due to any abnormal instability of this cosmic atom itself, but to the preference for the alternate scalar direction that prevails in the material environment. In the reverse process, where the

directional preference favors the c-krypton atom over the neutron alternate, it plays a prominent part, as we will see in Chapter 16. In those cases where the incoming cosmic atom is not in the normal decay sequence it ejects enough positive displacement in one or two decay events to reach one of the positions in that sequence, after which it follows the normal path in the same manner as the products of the decay of cosmic hydrogen. However, these heavier elements are beyond the stability limit for two gravitational charges in a low energy environment, and consequently they do not form structures analogous to the psi particles. This has the effect of increasing the probability that some of the decay products that normally carry one gravitational charge will occasionally be found in the uncharged condition. The one allowable charge would result in an asymmetrical structure during the time that the speed of these particles is in the two-dimensional range, and if they are observed at this stage they are likely to be uncharged (gravitationally). The uncharged lifetime for a particle moving two-dimensionally is approximately one natural unit of time, or about 10⁻¹⁶ seconds. Such a life is the most definite indication that an observed particle is in this early stage of the decay process. For example, the eta particle, with observed mass 549 MeV and a life of 0.25 x 10⁻¹⁶ seconds, is probably a gravitationally uncharged c-Be7 atom, which theoretically has a mass of 532 MeV. A more questionable identification equates the rho particle with c-Li5. The theoretical mass in this case is 745 MeV, and the observed values range from 750 to 770, the more recent measurements being the higher. The rho lifetime has been reported as about 10⁻²³ seconds, but this is too short to be a decay time. It is evidently a fragmentation time, a concept which will be explained in connection with the discussion of particle production in the accelerators. Both c-Li5 and c-Be7 are in the normal decay sequence, a fact which lends some support to the foregoing identifications. The reported observations of particles that are outside the normal decay sequence will be given some further consideration in the next chapter. If the incoming cosmic atom is above c-krypton in the cosmic atomic series, so that it cannot enter the normal decay sequence in the manner of the elements of lower atomic number, it must nevertheless separate into parts at the end of the appropriate unit of time, and since it cannot eject massless neutrons as the lighter atoms do, it fragments into smaller units, which then follow the normal decay path.

CHAPTER 16

Cosmic Atom Building
In essence, the cosmic ray decay is a process whereby high energy combinations of motions that are unstable at speeds less than that of light are converted in a series of steps to low energy structures that are stable at the lower speeds. A requirement that must be met in order to make the process feasible is the existence of a low energy environment that can serve as a sink for the energy that must be withdrawn from the cosmic structures.

Where a high energy environment is created, either fortuitously or deliberately, the decay process is reversed, and cosmic elements of lower atomic number are produced from cosmic elements of higher atomic number, or from material particles, kinetic energy being absorbed from the environment to meet the additional energy requirements. The first step in the reverse process is the inverse of the last step in the decay process: a neutron equivalent is converted into one of the rotating systems of a cosmic krypton atom by inversion of the orientation with respect to the space-time zero points.

It is convenient, from a practical standpoint, to work with electrically charged particles. The standard technique in the production of transient particles therefore is to use protons, or hydrogen atoms which fragment to protons, as the "raw material" for cosmic atom building. In the high energy environment that is created in the production apparatus, the particle accelerators, the proton, M 1-1-(1), ejects an electron, M 0-0-(1), and then separates into two massless neutrons, M ½-½-0, each of which converts to a half c-Kr atom (that is, one of the rotating systems of that atom) by directional inversion. These half c-Kr atoms cannot add displacement and become muons because they are unable to dispose of the proton mass, which persists as a gravitational charge (half of the normal size, as the proton has only one rotating system). They remain as particles of a distinct type, each with half of the c-Kr mass (52 MeV), and half of the 931 MeV mass of a normal gravitational charge, the total being 492 MeV. They can be identified as K mesons, or kaons, the observed mass of which is 494 MeV. As can be seen from the foregoing, the initial production of transient (cosmic) particles in the accelerators is always accompanied by a copious production of kaons.

Each of the subsequent steps in the cosmic atom building process that requires the addition of mass, such as the production of c-neon (the lambda particle) from c-silicon (the pion) and the production of the psi-3105 particle from one of the heaviest of the hyperons, is similar to the initial cosmic particle production, except that the proton mass is added to the product as a gravitational charge instead of forming a kaon. Where kaons appear in connection with the production of these particles, they are the result of secondary processes. Furthermore, kaons are not produced in the decay processes, either in the cosmic rays or in the accelerators, because the decay takes place on a massless basis. A few kaons appear in the cosmic ray decay events, but they are not decay products. They are produced in collisions of cosmic rays with material atoms under conditions such that a temporary excess of energy is created, in miniature equivalents of the particle accelerators, we may say. If the reverse process, the atom building process, is carried beyond c-hydrogen the final particle vanishes into the cosmic sector. Otherwise the cosmic atom building which takes place in the material sector is eventually succeeded by a decay that follows the normal path back to the point of reconversion into massless neutrons. Where the excess kinetic energy in the environment is too great to permit the decay to proceed to completion, the production and decay processes arrive at an equilibrium consistent with the existing energy level.
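The kaon mass quoted above can be checked directly from the figures stated in the text (half of the 52 MeV c-Kr rotational mass plus a half-sized gravitational charge). A minimal sketch:

```python
# Kaon mass as described above: one rotating system of c-Kr carries half of
# the c-Kr rotational mass plus a half-sized gravitational charge.

half_c_kr = 52 / 2          # MeV, half of the c-Kr rotational mass
half_charge = 931.15 / 2    # MeV, half of a normal gravitational charge

kaon_mass = half_c_kr + half_charge
print(f"theoretical kaon mass: {kaon_mass:.0f} MeV (observed 494 MeV)")   # about 492 MeV
```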

In such a high-energy environment, the life of a particle may be terminated by a fragmentation process before the unit time limitation takes effect. This is simply a process of breaking the particle into two or more separate parts. The degree of fragmentation depends on the energy of the disruptive forces, and at the lower energy levels the products of fragmentation of any transient particle are mainly pions. At higher energies kaons appear, and in the fragmentation of hyperons the mass of the gravitational charges may come off in the form of neutrons or protons. Corresponding to fragmentation is the inverse process of consolidation, in which particles of smaller mass join to form particles of larger mass. Thus a particle with a mass measured as 1020 MeV has been observed to fragment into two kaons. The 36 MeV excess mass goes into kinetic energy. Under appropriate conditions, the two kaons may consolidate to form a particle, utilizing 36 MeV of kinetic energy to supply the necessary addition to the mass of the two smaller particles.

The essential difference between the two pairs of processes (building and decay on the one hand, fragmentation and consolidation on the other) is that building and decay proceed from higher to lower cosmic atomic number, and vice versa, whereas fragmentation and consolidation proceed from greater to less equivalent mass per particle, and vice versa. The decay process as a whole is a conversion from cosmic status to material status, and the atom building in the particle accelerators is a partial and temporary reversal of this process, but fragmentation and consolidation are merely changes in the state of the atomic constituents, a process that is common in both sectors. The change in cosmic atomic number due to fragmentation may be either upward or downward, in contrast to the decay process, which always results in an increase in the cosmic atomic number. This difference is a consequence of the manner in which the mass of the gravitational charges enters into the respective processes. For example, the decay of c-Si, the pion, is in the direction of c-Kr. On the other hand, the kaon, a gravitationally charged c-Kr atom, cannot decay into any other cosmic particle, as it is at the end of the line so far as decay is concerned, but it can fragment into any combination of particles whose combined mass does not exceed the 492 MeV kaon mass. Fragmentation into pions reverses the direction of the decay. If the maximum conversion to pions (mass 138 MeV each) takes place, three pions are produced. Frequently, a larger part of the total energy goes into the kinetic energy of the products, and the production of pions decreases to two. The existence of both 2-pion and 3-pion events has been given a great deal of attention because of the bearing that they have on various hypotheses as to the laws that govern particle transformations. The present study indicates, however, that if the basic requirement, an excess energy environment, is met, so that conversion of the kaon to the material status is prevented, there are no restrictions on the fragmentation reactions, other than those considerations that are applicable to matter and energy in general in the material sector of the universe.

The study of the transient particles, which had its origin in the observation of the cosmic rays, is now carried on mainly in the accelerators. It is assumed that the same particles and the same processes are involved, and that the details thereof can be more conveniently ascertained where the conditions are subject to control.
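The energy bookkeeping in the fragmentation example above is simple enough to spell out. This sketch only restates the figures given in the text:

```python
# Fragmentation of a 1020 MeV particle into two kaons, and the reverse
# consolidation: the mass difference appears as, or is drawn from, kinetic energy.

parent_mass = 1020   # MeV, measured mass of the fragmenting particle
kaon_mass = 492      # MeV, theoretical kaon mass

excess = parent_mass - 2 * kaon_mass
print(f"kinetic energy released by fragmentation into two kaons: {excess} MeV")
print(f"kinetic energy absorbed when two kaons consolidate: {excess} MeV")
```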

This is true, to a degree, of course, but the situation in the accelerators is much more complex than that to which the incoming cosmic rays are subject. The atom building process does not merely invert the decay process. The actual inverse of the cosmic ray decay is a situation in which material elements enter a cosmic (high energy) environment and eject negative displacement in order to build up into structures that can ultimately convert to the cosmic status. The cosmic entities initially produced in this process are sub-atomic particles. The accelerators, however, produce the cosmic elements that are closest to conversion to the material status (c-Kr, etc.), and then drive them back up the decay path by creating temporary energy concentrations in the material (low energy) environment. Because of the uneven character of these concentrations of energy, cosmic atom building in the accelerators is accompanied by numerous events of the inverse (decay) character, and by various fragmentation and consolidation processes that involve neither building nor decay. Many of the phenomena observed in the accelerator experiments are therefore peculiar to the kind of environment existing in the accelerators, and are not encountered in either the cosmic ray decay or in normal cosmic atom building.

It should also be kept in mind that the actual observations of these events, the "raw" data, have little meaning in themselves. In order to acquire any real significance they must be interpreted in the light of some kind of a theory as to what is happening, and in such areas as particle physics the final conclusion is often ten percent fact and ninety percent interpretation. The theoretical findings of this work are in agreement with the experimental results, and they also agree with the conclusions of the experimenters in most cases, but it can hardly be expected that the agreement will be complete where there are so many uncertainties in the interpretation of the experimental results.

The sequence of events in cosmic atom building in the accelerators has been observed experimentally in the so-called "resonance" experiments. These involve accelerating two streams of particles, stable or transient, to extremely high speeds and allowing them to collide. The relation of the amount of interaction, the "cross-section," to the energy involved is not constant, but shows peaks or "resonances" at certain fairly well-defined values. This result is interpreted as indicating the production of very short-lived particles (indicated lifetime about 10⁻²³ seconds) at the energies of the resonance peaks. This interpretation is confirmed in this work by the agreement of the sequences of resonance particles with the theoretical results of the cosmic atom building process. Because of the difference in the nature of the processes, the sequence of elements in cosmic atom building is not the inverse of the decay sequence, although most of the decay products above c-He are included. As brought out in Chapter 15, the decay process is essentially a matter of ejecting positive rotational displacement. There is also a decrease in equivalent mass, but the mass loss is a secondary effect. The primary objective of the process is to get rid of the excess rotational energy. In the atom building process in a high energy environment the necessary energy is readily available, and the essential task is to provide the required mass.
This is supplied in the form of c-Kr atoms, mass 51.73 MeV each. The full sequence of cosmic atoms in the building process therefore consists of a series of elements, the successive members of which differ by 52 MeV. Aside from the lower end of the series, where two of the 52 MeV units are required per

cosmic atomic weight unit, the only significant deviations from this pattern in the experimental results are that c-B9 is absent, while c-Ne (a member of the decay sequence) and c-O appear in lieu of, or in addition to, c-F. The complete atom building sequence is shown in Table 4.

TABLE 4 COSMIC ATOM BUILDING SEQUENCE
Atomic Number   Element     Atomic Mass   51.73 n
36              *c-Kr            52          52
18              *c-Ar           103         103
12              c-Mg            155         155
(10)            *c-Ne           186
9               c-F             207         207
(8)             c-O             232
7               *c-N            266         259
6               c-C             310         310
5               *c-B10          372         362
4-½             c-B9            414         414
4               c-Be8           466         466
3-½             *c-Be7          532         517
                (inter-stage)               569
3               c-Li6           621         621
                (inter-stage)               672
2-½             *c-Li5          745         724
* member of decay sequence
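The 51.73 n column of Table 4 can be generated directly from the rotational masses. The sketch below (Python; the element list and names are ours) shows how closely the rotational masses of members of the building sequence track successive multiples of the c-Kr unit; the small deviations, for example c-B10 (372 against 362), correspond to the differences between the two columns of the table.

```python
# Successive additions of c-Kr mass units (about 51.73 MeV each) compared
# with the rotational masses of cosmic elements in the building sequence.

C_KR_UNIT = 3724.61 / 72    # about 51.73 MeV

building_sequence = [("c-Kr", 72), ("c-Ar", 36), ("c-Mg", 24), ("c-F", 18),
                     ("c-C", 12), ("c-B10", 10), ("c-B9", 9), ("c-Be8", 8)]

for element, weight in building_sequence:
    rotational = 3724.61 / weight
    n = round(rotational / C_KR_UNIT)
    print(f"{element:6s} rotational mass {rotational:4.0f} MeV   {n:2d} x 51.73 = {n * C_KR_UNIT:4.0f} MeV")
```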

Most of the reported experimental results omit many of the steps in the full sequence. Whether this means that double or triple jumps are being made, or whether the intermediate stages have been missed by the investigators, is not yet clear. However, the most complete set of results, the "sigma" series, is close enough to the theoretical sequence to suggest that the build-up does, in fact, proceed step by step as indicated in Table 4. Regardless of any deviations from the normal sequence that may take place earlier, the first phase of the atom building process always terminates at c-Li5 (the omega particle, mass 1676 MeV) because, as is evident from the description of the steps in the cosmic ray decay, the motion must enter a second dimension in order to accomplish any further decrease in the cosmic atomic number. This requires a relatively large increase in energy, from 1676 to 3104 MeV. In the decay process there is no alternative, and the big drop in energy must take place, but in the reverse process the addition of energy in smaller amounts is made possible by reason of the ability of the cosmic atom to retain additional gravitational charges in an excess energy environment. The doubly (gravitationally) charged cosmic element of lowest energy within the atom building range is c-Kr, the first atom that can be formed from conversion of material particles.

The energy difference between doubly charged c-Kr and the last singly charged product, c-Li5, is substantial (238 MeV), and all of the cosmic atom building series theoretically include doubly charged c-Kr as well as singly charged c-Li5. There are, in fact, some intermediate stages. All but the last small increment of the mass required for the second charge is added in the form of c-Kr atoms (52 MeV each), as in building up the rotational mass, and this addition is accomplished in four steps. Similar inter-stages are possible between c-Be7 and c-Li6, and also between c-Li6 and c-Li5, where two c-Kr mass increments are required between the cosmic elements. Beyond doubly charged c-Kr, the regular sequence is again followed, with some omissions or deviations which, as mentioned earlier, may or may not represent the true course of events. At doubly charged c-Li5, mass 2607 MeV, the atom building process again reaches the one-dimensional limit, and a third charge is added in the same manner as the second, inaugurating a new series of resonances which extends to the neighborhood of the 3104 MeV required for the production of the first of the particles that have scalar motion in two dimensions.

Table 5 compares the theoretical and observed values of the masses of the particles included in the several series of resonances that have been reported. The agreement is probably about as close as can be expected in view of the difficulties involved in making the measurements. In more than a third of the total number of cases the measured mass is within 10 MeV of the theoretical value. It is also worth noting that in the only case where enough measurements are available to provide a good average value for an individual cosmic element, the 11 measurements on c-Li5, the agreement between this average and the theoretical mass is exact. All of the singly charged transient particles moving in only one dimension are stable against decay for about 10⁻¹⁰ seconds. However, they are extremely vulnerable to fragmentation under conditions such as those that prevail in the accelerators, and only the particles of lowest mass escape fragmentation long enough to decay. The lifetime of the heavier particles is limited by fragmentation to the absolute minimum, which appears to be the unit of time corresponding to three scalar dimensions of motion, or about 10⁻²⁴ seconds.
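The four steps just mentioned appear in Table 5 below as the entries labelled a to d. A minimal sketch (Python) of the step sequence from singly charged c-Li5 toward doubly charged c-Kr; the results agree with the tabulated inter-stage values to within a unit of rounding.

```python
# Building up the mass of the second gravitational charge in c-Kr increments,
# starting from singly charged c-Li5 (1676 MeV, as given in the text).

C_KR_UNIT = 3724.61 / 72          # one c-Kr mass unit, about 51.73 MeV
start = 1676                      # MeV, singly charged c-Li5

for step, label in enumerate("abcd", start=1):
    print(f"inter-stage {label}: {start + step * C_KR_UNIT:.0f} MeV")

double_c_kr = 3724.61 / 72 + 2 * 931.15
print(f"doubly charged c-Kr: {double_c_kr:.0f} MeV")
```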

TABLE 5 “BARYON RESONANCES”
Sigma Series
c-Atomic number   Element           Grav. charge   Theor. mass
7                 *c-N                   1             1197
4                 c-Be8                  1             1397
3-½               *c-Be7                 1             1463
3                 c-Li6                  1             1552
                  (inter-stage a)                      1604
2-½               *c-Li5                 1             1676
                  (inter-stage a)                      1728
                  (inter-stage b)                      1779
                  (inter-stage c)                      1831
                  (inter-stage d)                      1882
36                *c-Kr                  2             1914
18                *c-Ar                  2             1965
12                c-Mg                   2             2017
10                *c-Ne                  2             2048
9                 c-F                    2             2069
8                 c-O                    2             2095
7                 *c-N                   2             2128
5                 *c-B                   2             2234
3                 c-Li6                  2             2483
2-½               *c-Li5                 2             2607
10                *c-Ne                  3             2979
Observed masses **:  1190, 1385, 1480, 1620, 1670, 1750, 1765, 1915, 1940, 2000, 2030, 2070, 2080, 2100, 2250, 2455, 2620, 3000
Observed masses ***: 1690, 1840, 1880

Lambda Series
c-Atomic number   Element           Grav. charge   Theor. mass
10                *c-Ne                  1             1117
4                 c-Be8                  1             1397
3                 c-Li6                  1             1552
2-½               *c-Li5                 1             1676
                  (inter-stage a)                      1728
                  (inter-stage b)                      1779
                  (inter-stage c)                      1831
                  (inter-stage d)                      1882
12                c-Mg                   2             2017
8                 c-O                    2             2095
4                 c-Be8                  2             2328
2-½               *c-Li5                 2             2607
Observed masses **:  1115, 1405, 1520, 1670, 1815, 1830, 1870-1860, 2020-2010, 2110
Observed masses ***: 1690, 1750, 2100, 2350, 2585

Xi Series
c-Atomic number   Element           Grav. charge   Theor. mass
5                 *c-B                   1             1303
3                 c-Li6                  1             1552
2-½               *c-Li5                 1             1676
                  (inter-stage c)                      1831
36                *c-Kr                  2             1914
10                *c-Ne                  2             2048
5                 *c-B                   2             2234
3                 c-Li6                  2             2483
Observed masses **:  1320, 1530, 1630, 1820, 1940, 2030, 2250, 2500

N Series
c-Atomic number   Element           Grav. charge   Theor. mass
3-½               *c-Be7                 1             1463
3                 c-Li6                  1             1552
2-½               *c-Li5                 1             1676
                  (inter-stage a)                      1728
                  (inter-stage b)                      1779
                  (inter-stage d)                      1882
14                *c-Si                  2             1995
10                *c-Ne                  2             2048
8                 c-O                    2             2095
6                 c-C                    2             2172
5                 *c-B                   2             2234
2-½               *c-Li5                 2             2607
10                *c-Ne                  3             2979
Observed masses **:  1470, 1535, 1670, 1700, 1780, 1860, 1990, 2190, 2220, 2650, 3030
Observed masses ***: 1520, 1688, 2040, 2100, 2175

Delta Series
c-Atomic number   Element           Grav. charge   Theor. mass
6                 c-C                    1             1241
2-½               *c-Li5                 1             1676
                  (inter-stage d)                      1882
36                *c-Kr                  2             1914
18                *c-Ar                  2             1965
6                 c-C                    2             2172
3-½               *c-Be7                 2             2394
36                *c-Kr                  3             2845
Observed masses **:  1236, 1670, 1890, 1910, 1950, 2420, 2850
Observed masses ***: 1690, 1960, 2160

* member of decay sequence
** well-established resonances
*** less certain resonances

In the tabulations of particle data in the current scientific literature, the information with respect to the series of resonances thus far discussed is presented under the heading of "Baryon Resonances." A further classification of "Meson Resonances" gives similar information concerning particles that were observed by a variety of other techniques. These are, of course, entities of the same nature (cosmic elements in the decay range), and largely the same elements, but because of the wide variations in the conditions under which they were produced the meson list includes a number of additional elements. Indeed, it includes all of the elements of the regular atom building sequence (with c-Ne and c-O substituted for c-F, as previously noted), and one additional isotope, c-C11. The masses derived from the experiments are compared with the theoretical masses of the cosmic elements in Table 6. The names currently applied to the observed particles have no significance, and have been omitted. In preparing this table, the observed particles were first assigned to the corresponding cosmic elements, an assignment that could be made without ambiguity, as the maximum experimental deviations from the theoretical masses are, in all but a very few instances, considerably less than the mass differences between the successive elements or isotopes. On the assumption that the deviations of the reported values from the true masses of the particles are due to causes whose effects are randomly related to the true masses, the individual values were averaged for comparison with the theoretical masses. The close correlation between the two sets of values not only confirms the status of these observed particles as cosmic elements, but also validates the assumption of random deviations, on which the averaging was based. Presumably these deviations are, in part, due to inaccuracies in obtaining and processing the experimental data, but they may also include a random distribution of differences of a real character: more of the "fine structure" which, as previously noted, has not yet been studied in the context of the Reciprocal System. The averaged values are shown in parentheses. Where only single measurements are available, the deviations from the theoretical values are naturally greater,

but they are generally within the same range as those of the individual values that enter into the averages. Longer-lived decay products such as c-Ne and c-N are not usually classified with the resonances, but they have been included in the table to show the complete picture. The gaps still remaining in the table will no doubt be filled as further experimental work is done. Indeed, many of these gaps, particularly in the upper portion of the mass range, can be filled immediately, simply by consolidating Tables 5 and 6. The difference between these two sets of resonances is only in the experimental procedures by which the reported values were derived.

TABLE 6 “MESON RESONANCES”
c-Atomic number   Element            Grav. charge   Theor. mass   Observed mass   Individual values
3                 c-Li6                   0              621
                  (inter-stage a)                        673           700
2-½               *c-Li5                  0              745          (760)        750, 770
                  (inter-stage a)                        797           784
                  (inter-stage d)                        952          (951)        940, 953-958
36                *c-Kr                   1              983          (986)        970, 990, 997
18                *c-Ar                   1             1034         (1031)        1020, 1033, 1040
12                c-Mg                    1             1086         (1090)        1080, 1100
10                *c-Ne                   1             1117          1116
8                 c-O                     1             1164         (1165)        1150, 1170-1175
7                 *c-N                    1             1197          1197
6                 c-C12                   1             1241         (1240)        1237, 1242
5-½               c-C11                   1             1270         (1274)        1265, 1270, 1286
5                 *c-B10                  1             1303          1310
4-½               c-B9                    1             1345
4                 c-Be8                   1             1397
3-½               *c-Be7                  1             1463         (1455)        1440, 1470
                  (inter-stage a)                       1515          1516
3                 c-Li6                   1             1552          1540
                  (inter-stage a)                       1604         (1623)        1600, 1645
2-½               *c-Li5                  1             1676         (1674)        1660, 1664-1680, 1690
                  (inter-stage b)                       1779         (1773)        1760, 1765-1795
                  (inter-stage c)                       1831         (1840)        1830, 1850
36                *c-Kr                   2             1914          1930
8                 c-O                     2             2095          2100
5                 *c-B10                  2             2234          2200
4-½               c-B9                    2             2276          2275
4                 c-Be8                   2             2328          2360
3-½               *c-Be7                  2             2394          2375
36                *c-Kr                   3             2845          2800
36                (kaon) ½ c-Kr          1-½            1423         (1427)        1416, 1421, 1430, 1440
* member of decay sequence
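The averaging used for the parenthesized entries in Table 6 can be illustrated with the rho measurements cited earlier in this chapter. A minimal sketch, using only figures that appear above:

```python
# Averaging individual reported masses for comparison with the theoretical
# value, as done for the parenthesized entries of Table 6. The figures are
# the reported rho masses and the uncharged c-Li5 mass from the text.

individual_values = [750, 770]       # reported rho masses, MeV
average = sum(individual_values) / len(individual_values)
theoretical = 3724.61 / 5            # uncharged c-Li5, about 745 MeV

print(f"average of reported values: {average:.0f} MeV")
print(f"theoretical c-Li5 mass: {theoretical:.0f} MeV")
```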

All of the transient particles, irrespective of the category to which they are currently assigned, are cosmic elements or isotopes, with or without gravitational charges of the material type. The absence of singly (gravitationally) charged particles corresponding to c-B9 from the list of observed resonances is rather conspicuous, particularly since the similar particle of twice this atomic weight, c-F18, is also missing, as noted earlier. The reason for this anomaly is still unknown. The last particle listed in Table 6 is a kaon, one of the two rotating systems of a c-Kr atom, with a full gravitational charge in addition to the half-sized charge that it normally carries. This particle has the same relation to the normal kaon that the atoms of the doubly charged series in Tables 5 and 6 bear to the corresponding singly charged atoms.

In the first edition it was suggested that some of the cosmic ray particles entering the material sector might be cosmic chemical compounds rather than single atoms. In the light of the more complete information now available with respect to the details of the inter-regional transfer of matter, this possibility must now be excluded, but short-lived associations between cosmic and material particles, and perhaps, in some cases, between cosmic particles, are feasible, and evidence of some such associations has been obtained. For example, the lambda meson (c-neon) is reported to participate in a number of combinations with material elements, called hyperfragments, which disintegrate after a brief existence. The current view is that the meson, which is assumed to be a sub-atomic particle, replaces one of the "nucleons" in the material atom. However, we find (1) that the material atom is not composed of particles, (2) that there are no nucleons, and (3) that the mesons are full-sized atoms, not sub-atomic particles. The hyperfragment therefore cannot be anything more than a temporary association between a material atom and a cosmic atom.

The new findings as to the nature of the transient particles, and their production and decay, do not negate the results of the vast amount of work that has been done toward determining the behavior characteristics of these particles. As stated earlier in this chapter, these theoretical findings are generally consistent not only with the actual experimental results, but also with the experimenters' ideas as to what the raw data (the various "tracks," electrical measurements, counter readings, etc.) signify with respect to the existence and behavior of the different transient particles. But what appears to be an immense amount of experimental data actually contributes very little toward an explanation of the nature of these particles, and their place in the physical universe; it merely serves to define the problem. As expressed by V. F. Weisskopf, in a review of the situation, "The present theoretical activities are attempts to get something from almost nothing." Much of the information derived from observation is ambiguous, and some of it is definitely misleading. The experimentally established facts obviously have a bearing on the problem, but they are too limited in their scope to warn the investigators that they cannot be fitted into the pattern to which scientists are accustomed. For instance, in the world of ordinary matter, a particle mass less than that of the lightest isotope of hydrogen indicates that the particle belongs to the sub-atomic class.
But when the effective masses of the transient particles, as determined by experiment, are interpreted according to this

familiar pattern, they give a totally false account of the nature of these entities. Thus, while the determination of the particle masses adds to the total amount of factual information available, its practical effect is to lead the investigators away from the truth rather than toward it. The following statements by Weisskopf in his review indicate that he suspected that some such misinterpretation of the empirical data is responsible for the confusion that currently surrounds the subject:

"We are exploring unknown modes of behavior of matter under completely novel conditions. ... It is questionable whether our present understanding of high-energy phenomena is commensurate to the intellectual effort directed at their interpretation."67

Availability of a general physical theory which enables us to deduce the nature and characteristics of the transient particles in full detail from theoretical premises, rather than having to depend on physical observation of a very limited scope, now opens the door to a complete understanding. The foregoing pages have provided an account of what the transient particles are, where the particles of natural origin (the cosmic rays) come from, what happens to them after they arrive, and how they are related to the transient particles produced in the accelerators. The aspects of these particles that have been so difficult to explain on the basis of conventional theory (their multiplicity, their extremely short lifetimes, the high speed and great energies of the natural particles, and so on) are automatically accounted for when their origin and general nature is understood. Another significant point is that, on the basis of the new theoretical explanation, the cosmic rays have a definite and essential place in the mechanism of the universe. One of the serious weaknesses of conventional physical theory is that it is unable to find roles for a number of the recently discovered phenomena such as the cosmic rays, the quasars, the galactic recession, etc., that are commensurate with the magnitude of the phenomena, and is forced to treat them as products of exceptional or abnormal circumstances. In view of the wide extent of the phenomena in question, and their far-reaching consequences, such characterization is clearly inappropriate. The theoretical finding that these are stages of the cosmic cycle through which all matter eventually passes now eliminates this inconsistency, and identifies each of these phenomena with a significant phase of the normal activity of the universe. The existence of a hitherto unknown second half of the universe is the key to an understanding of all of these currently misinterpreted phenomena, and the most interesting feature of the cosmic rays is that they give us a fleeting glimpse of the entities of which the physical objects of that second half, the cosmic sector, are constructed.

CHAPTER 17

Some Speculations
The Reciprocal System of theory consists of the fundamental postulates, together with everything that is implicit in the postulates; that is, everything that can legitimately be derived from those postulates by logical and mathematical processes without introducing

anything from any other source. It is the theory as thus defined that can claim to be a true and accurate representation of the observed physical universe, on the grounds specified in the earlier pages. The conclusions stated in this and related publications by the present author and others are the results of the efforts that have thus far been made to develop the consequences of the postulates in detail. However, the findings that have emerged from the early phases of this theoretical development call for some drastic modifications of the prevailing conceptions of the nature of some of the basic physical entities and phenomena. Such conceptual changes are not easily made, and the persistence of previous habits of thought makes it difficult, not only for the readers of these works, but also for the investigators themselves, to grasp the full implications of the new ideas when they first make their appearance. The existence of scalar motion in more than one dimension, which plays an important part in the subject matter of the two preceding chapters, is a good example. It is now clear that such motion is a necessary and unavoidable consequence of the basic postulates, and there is no inherent obstacle that would stand in the way of a complete and detailed understanding of its nature and effects if it could be considered in isolation, without interference from previously existing ideas and beliefs. But this is not humanly possible. The minds into which this idea enters are accustomed to thinking along very different lines, and inertia of thought is similar to inertia of matter, in that it can be fully overcome only over a period of time. Even the simple concept of motion that is inherently scalar, and not merely a vectorial motion whose directional aspects are being disregarded, involves a conceptual change of no small magnitude, and the first edition of this work did not go beyond this point, except in specifying that the increase in the speed of recession of the galaxies is linear beyond the gravitational limit, a tacit assertion that the increment is scalar. Subsequent studies of high energy astronomical phenomena carried the development of thought on the subject a step farther, as they led to the conclusion that the quasars are moving in two dimensions. However, it took additional time to achieve a recognition of the fact that unit scalar speed in three dimensions constitutes the line of demarcation between the region of motion in space and the region of motion in time, and the first publication in which this point was brought out specifically was Quasars and Pulsars (1971). Now we further find that the same considerations also apply to the incoming cosmic particles. At the moment, it appears that the full scope of the subject has been covered, but past experience does not encourage a positive statement to that effect. This experience demonstrates how difficult it is to attain a comprehensive understanding of the various aspects of any new item of information that is derived from the basic postulates, and it explains why identification of the source from which the correct answers can be obtained does not automatically give us all of those answers; why the results obtained by application of the Reciprocal System of theory, like the products of all other research into previously unknown physical areas, necessarily differ in the degree of certainty that can be ascribed to them, particularly in the relatively early stages of an investigation.
Many are established beyond a reasonable doubt; others can best be characterized as "work in progress"; still others are little, if any, more than speculations.

However, because of the extremely critical scrutiny to which a theory based on a new and radically different fundamental concept is customarily (and properly) subjected, publication of the results of the theoretical development described in this work has, in general, been limited to those items which have been given long and careful examination, and can be considered as having a very high degree of probability of being correct. Almost thirty years of study and investigation went into the project before the first edition of this work was published. The additions and modifications in this new edition are the result of another twenty years of review and extension of the original findings by the author and others. Inasmuch as the results of this development are conclusions about one universe derived in their entirety from one set of basic premises, every advance that is made in the understanding of phenomena in one physical field throws some light on outstanding questions in other fields. A review such as that required for the preparation of this new edition has the benefit of all of the advances that have been made subsequent to the last previous systematic study of each area, and a considerable amount of clarification of the subject matter previously examined, and extension of the development into new subject areas, was accomplished almost automatically during the revision of the text. Where it is evident that the new theoretical conclusions thus derived are firm enough to meet the criteria that were applied to the original publication they have been included in this new edition. But in general, any new ideas of major consequence that have emerged from this rather rapid review have been held over for further study in order to be sure that they receive adequate consideration before publication. In one particular case, however, there seems to be sufficient justification for making an exception to this general policy.

In the preceding pages, the discussion of the decay of the cosmic elements after entry into the material environment was carried to the point where the decay was complete, and it was noted that the ultimate result would necessarily be conversion of the cosmic elements into forms that would be compatible with the new environment. Since hydrogen is the predominant constituent of the material sector of the universe, this element must ultimately be produced from the decay products, but just how the transition is accomplished has not been clear theoretically, and empirical information bearing on the subject is practically non-existent. It would be a significant advance toward completion of the basic theoretical structure if this gap could be closed. Consideration of the question during preparation of the text of the new edition has uncovered some interesting possibilities in this connection, and a discussion of these ideas in the present work seems to be warranted, even though it must be admitted that they are still speculative, or at least no more than "work in progress."

The first of these tentative new conclusions is that the muon neutrino is not a neutrino. As the theoretical development now stands, there is no place for any neutrinos other than the electron neutrino and its cosmic analog, the electron antineutrino, as it is currently known. Of course, the door is not completely closed.
Earlier in this volume it was asserted that sufficient evidence is now available to demonstrate that the physical universe is, in fact, a universe of motion, and that a correct development of the consequences of the postulates that define such a universe will produce an accurate representation of the existing physical universe. It is not contended, however, that the present author and his associates are infallible, and that the conclusions which they have reached by these means are always correct.

It is conceivable that further theoretical clarification may change some aspects of our existing view of the neutrino situation, but the theory as it now stands has no place for muon neutrinos. As brought out in the previous pages, however, the theory does require the production of a different massless particle in the processes in which the "muon neutrino" now appears, and the logical conclusion is that the particle now called the muon neutrino is the particle required by the theory: the massless neutron. From the observational standpoint this changes nothing but the name, as these two massless particles cannot be distinguished by any currently known means. On the theoretical side, the observed particle fits in very well with the theoretical deductions as to the behavior of the massless neutron. This particle should theoretically be produced in every decay event, whereas the neutrino should appear only in the last step, where separation of the residual cosmic atom into two massless particles takes place. This is in accord with observation, as the "muon neutrino" appears in both the pion decay and the muon decay, whereas the electron neutrino appears only in the decay of the muon. Empirical confirmation of the theoretical production of massless neutrons in the earlier decay events has not yet been obtained, but this is understandable.

The reported products of the decay of a positive muon are also in agreement with the massless neutron hypothesis. These products are currently considered to be a positron, which, according to our findings, is M 0-0-1, an electron neutrino, M ½-½-(1), and a "muon antineutrino," which we now identify as a massless neutron, M ½-½-0. The positron and the electron neutrino are jointly equivalent to a second massless neutron. Their appearance as two particles rather than one is probably due to the fact that they are the products of the final conversion of the residual cosmic atom, in which the electric and magnetic rotations are oppositely directed, rather than merely discrete particles ejected from the cosmic atom. It is claimed that muons also exist with negative charges, and that these decay into the antiparticles of the decay products of the positive muon: an electron, an electron antineutrino, and a "muon neutrino." These asserted products are the equivalent of two cosmic massless neutrons. The production of such particles, or of cosmic particles of any kind, other than the members of the regular decay sequence, as the result of a decay process, is rather difficult to reconcile with the theoretical principles that have been developed. Theoretical considerations indicate that there is no such thing as an "antimeson," and that the negatively charged muon is identical with the positively charged muon, except for the difference in the charge. On this basis, the decay products should differ only in that an electron replaces the positron. Inasmuch as two of the decay particles in each case are unobservable, there appears to be a rather strong probability that their identification in current physical thought comes from the ninety percent of interpretation rather than from the ten percent of observation that enters into the reported results. However, it is the existence of some unresolved questions of this kind that has made it necessary to characterize the contents of this chapter as somewhat speculative.
On the basis of the theoretical decay pattern, the incoming cosmic atoms are eventually converted into massless neutrons and their equivalents. The problem then becomes: What

happens to these particles? There are no experimental or observational guideposts along this route; we will have to depend entirely on theoretical deductions. The massless neutron already has a material type structure (that is, a negative vibration and a positive rotation), and no conversion process is required. Likewise, no decay or fragmentation process is possible because this particle has only one rotational displacement unit. Progress toward the hydrogen goal must therefore take place by means of addition processes. Addition of a massless neutron to a positron, a proton, a compound neutron, or a second massless neutron, would produce a particle in which there is a single rotating system of displacement 2 (on the particle scale). As indicated in Chapter 11, it appears that such a particle, if it exists at all, is unstable, and in the absence of any means of transferring one of the units of displacement to a second rotating system, the unstable particle will decay back to particles of the original types. Such additions will therefore accomplish nothing.

The additions that are actually possible constitute a regular series. The decay product, the massless neutron, M ½-½-0, can combine with an electron, M 0-0-(1), to form a neutrino, M ½-½-(1). Another massless neutron added to the neutrino produces a proton, M 1-1-(1). As has been indicated, addition of a massless neutron to the proton is not feasible, but a neutrino can be added, and this produces the mass one hydrogen isotope, M 1½-1½-(2). So far as the rotational displacement is concerned, we now have a clear and consistent picture. By addition of the supply of massless neutrons resulting from the decay of the cosmic rays to electrons and neutrinos, particles that are plentiful in the material environment, hydrogen, the basic element of the material system, is produced. But there is still one important factor to be accounted for. There is no problem in the addition of the massless neutron to the electron, but in adding to the neutrino to produce the proton a unit of mass must be provided. The question that must be answered before this hypothetical hydrogen building process can be considered a reality is: Where does the required mass come from?

It appears, on the basis of the recent extensions of the theory, that the answer to this question can be found in a hitherto unrecognized property of particles with two-dimensional rotation. As explained in Chapter 12, mass is t³/s³, the reciprocal of three-dimensional speed, whereas energy is t/s, the reciprocal of one-dimensional speed. Obviously, there is an intermediate quantity, the reciprocal of two-dimensional speed, t²/s². This has been recognized as momentum, or impulse, but it has been regarded as a derivative of mass. Indeed, momentum is customarily defined as the product of mass and velocity. What has not been recognized is that the reciprocal of two-dimensional speed can exist in its own right, independent of mass, and that a two-dimensional massless particle can have what we may call internal momentum, t²/s², just as a three-dimensional atom has mass, t³/s³. The internal energy of an atom, the energy equivalent of its mass, is equal to the product of its mass and the square of unit speed, t³/s³ x s²/t² = t/s. This is the relation discovered by Einstein, and expressed as E = mc². In order to provide the unit mass required in the addition of a massless neutron to a neutrino to form a proton, a unit quantity of energy, t/s, must be provided.
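The chain of additions just described can be followed arithmetically if the M a-b-(c) notation is read as a displacement triple and the combinations are treated as component-wise sums; this reading is our own illustration, not a statement from the text. A minimal sketch:

```python
# Component-wise addition of rotational displacement triples, with negative
# electric displacement written as a negative number.

def combine(p, q):
    """Add two displacement triples component by component."""
    return tuple(a + b for a, b in zip(p, q))

massless_neutron = (0.5, 0.5, 0)     # M ½-½-0
electron         = (0, 0, -1)        # M 0-0-(1)

neutrino = combine(massless_neutron, electron)     # M ½-½-(1)
proton   = combine(neutrino, massless_neutron)     # M 1-1-(1)
hydrogen = combine(proton, neutrino)               # M 1½-1½-(2), the mass one isotope

print(neutrino, proton, hydrogen)
# (0.5, 0.5, -1) (1.0, 1.0, -1) (1.5, 1.5, -2)
```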

The kinetic energy of a particle with internal momentum M is the product of this momentum and the speed: Mv = t²/s² x s/t = t/s. Inasmuch as the massless neutron has unit magnetic displacement, and therefore unit momentum, and being massless it moves with unit speed (the speed of light), its kinetic energy is unity. Thus the kinetic energy of the massless neutron is equal to the energy requirement for the production of a unit of mass, and by coming to rest in the stationary frame of reference the massless neutron can provide the energy as well as the rotational displacement necessary to produce the proton by combination with a neutrino. Here, then, is what appears, on initial consideration at least, to be a complete and consistent theoretical explanation of the transition from decay product to material atom. There is, of course, no observational confirmation of the hypothetical processes, and such confirmation may be hard to get. The conclusions that have been reached will therefore have to rest entirely on their theoretical foundations for the time being. It is worth noting that, on the basis of these conclusions, the hydrogen produced from the decay products originates somewhat uniformly throughout the extension space of the material sector, inasmuch as the neutrino population must be fairly uniformly distributed. This is in agreement with other deductions that were discussed in the first edition, and will be given further consideration in Volume II of this work. The standing of the conclusions that have just been outlined is considerably strengthened by the fact that the two lines of theoretical development meet at this point. As stated earlier, the inflow of cosmic matter into the material sector is counterbalanced by an ejection of matter from the material sector into the cosmic sector in the form of high-speed explosion products. These are the two crucial phases of the great cycle which constitutes the continuing activity of the universe. But the slow process of growth and development that the arriving matter undergoes before it is ready to participate in the events which will eject it back into the cosmic sector, and complete the cycle, is an equally important, even though less spectacular, aspect of the cycle. Consequently, one of the major tasks involved in developing a theoretical account of the physical universe from the basic postulates of the Reciprocal System is to trace the evolutionary path of the new matter, and of the aggregates into which that matter gathers. Our first concern, however, must be to identify the participants in physical activity, and to define their principal properties, as these are items of information that will be required before the events in which these entities participate can be accurately evaluated. Now that we have arrived, at least tentatively, at the hydrogen stage, we will defer further consideration of the evolution of matter to Volume II, and will return to our examination of the individual material units and their primary combinations.

CHAPTER 18

Simple Compounds
In the preceding chapters we have determined the specific combinations of simple rotations that are stable in the material sector of the universe, and we have identified each of these

combinations, within the experimental range, with an observed sub-atomic particle or atom of an element. We have then shown that an exact duplicate of this system of material rotational combinations, with space and time interchanged, exists in the cosmic sector, and we have identified all of the observed particles that do not belong to the material system as atoms or particles of the cosmic system. To the extent that observational or experimental data are available, therefore, we have established agreement between the theoretical and observed structures. So far as these data extend, there are no loose ends; all of the observed entities have been identified theoretically, and while not all of the theoretical entities have been observed, there are adequate theoretical explanations for this. The number of observed particles is increased substantially by a commonly accepted convention which regards particles of the same kind, but with different electric charges, as different particles. No consideration has been given to the effects of electric charges in this present discussion, as the existence of such charges has no bearing on the basic structure of the units. These charges may play a significant part in determining whether or not certain kinds of reactions take place under certain circumstances, and may have a major influence on the details of those reactions, just as the presence or absence of concentrations of kinetic energy may have a material effect on the course of events. But the electric charge is not part of the basic structure of the atom or sub-atomic particle. As will be brought out when we take up consideration of electrical phenomena, it is a temporary appendage that can be attached or removed with relative ease. The electrically charged atom or particle is therefore a modified form of the original rotational combination rather than a distinctly different type of structure. Our examination of the basic structures is not yet complete, however, as there are some associations of specific numbers of specific elements that are resistant to dissociation, and therefore act in the manner of single units in processes of low or moderate energy. These associations, or molecules, play a very important part in physical activity, and in order to complete our survey of the units of which material aggregates are composed we will now develop the theory of the structure of molecules, and will determine what kinds of molecules are theoretically possible. The concept of the molecule originated from a study of the behavior of gases, and as originally formulated it was essentially empirical. The molecule, on this basis, is the independent unit in a gas aggregate. But this definition cannot be applied to a solid, as the independent unit in a solid is generally the individual atom, or a small group of atoms, and in this case the molecule has no physical identity. In order to make the molecule concept more generally applicable, therefore, it has been redefined on a theoretical rather than an empirical basis, and as now conceived, a molecule is the smallest unit of a substance which can (theoretically) exist independently and retain all of the properties of the substance. The atoms of a molecule are held together by inter-atomic forces, the nature and magnitude of which will be examined in detail later. 
The strength of these forces determines whether or not the molecule will break up under whatever disruptive forces it may be subjected to, and the manner in which certain atoms are joined in a molecule may have an effect on the magnitude of the inter-atomic forces, but the determination of what atoms can combine with

what other atoms, and in what proportions, is governed by an entirely different set of factors. In current theory, the factors responsible for the inter-atomic force, or "bond," are presumed to have a double function, not only determining the strength of the cohesive force, but also determining what combinations can take place. The results of the present investigation indicate, however, that the force which determines the equilibrium distance between any two atoms is identical in origin and in general character regardless of the kind of atoms involved, and regardless of whether or not those atoms can, or do, take part in the formation of a molecule. Experience has indicated that it is advisable to lay more emphasis on the independence of these two aspects of the interrelations between atoms, and for this purpose the plan of presentation employed in the first edition will be modified in some respects. As already mentioned, the information that will be developed with respect to the molecular structure will be presented before any discussion of inter-atomic forces is undertaken. Furthermore, present indications are that whatever advantages there may be in using the familiar term "bond" in describing the various molecular structures are outweighed by the fact that the term "bond" almost inevitably implies the notion of a force of some kind. Inasmuch as the different molecular "bonds" merely reflect different relative orientations of the rotations of the interacting atoms, and have no force implications, we will abandon the use of the term "bond" in this sense, and will substitute "orientation" for present purposes. The term "bond" will be used in a different sense in a later chapter where it will actually relate to a force.

The existence of molecules, either combinations of specific numbers of like atoms, or chemical compounds, which are combinations of unlike atoms, is due to the limitations on the establishment of inter-atomic equilibrium that are imposed by the presence of motion in time in the electric dimension of the atoms of certain elements. Those elements whose atoms rotate entirely in space (positive displacement in all rotational dimensions), or which are able to attain the all-positive status by reorientation on the 8-x basis, are not subject to any such limitations. An atom of an element of this kind can establish an equilibrium with any other such atoms in any proportions, except to the extent that the physical properties of the elements involved (such as the melting points) or conditions in the environment (such as the temperature) interfere. Material aggregates of this kind are called mixtures. In some cases, where the mixture is homogeneous and the composition is uniform, the term alloy is applied. There is a class of intermetallic compounds, in which these positive constituents are combined in definite proportions. CuZn and Cu5Zn8 are compounds of this class. But the combinations of copper and zinc are not limited to specific ratios of this kind in the way in which the composition of true chemical compounds is restricted. The commercially important alloys of these two metals extend through the entire range from a brass with 90 percent copper and 10 percent zinc to a solder with 50 percent of each constituent, and the possible alloys extend over a still wider range. The intermetallic compounds are merely those alloys whose proportions are especially favorable from a geometric standpoint.
A typical comment in a chemistry textbook is that ―The theory of the bonding forces involved in these intermetallic compounds is very complex and is not, as yet, very well understood.‖ The reason is that there are no ―bonding forces‖ in these substances in the same sense in

which that term is ordinarily used in application to the true chemical compounds. As has been stated, negative rotation in the electric dimension of an atom is admissible because the requirement that the net total rotational displacement must be positive (in the material sector) can be met as long as the magnetic rotation is positive. In the time region inside unit distance, however, the electric and magnetic rotations act independently. Here the presence of a randomly oriented electric rotation in time makes it impossible to maintain a fixed inter-atomic equilibrium. Any relation of space to time is motion, and motion destroys the equilibrium. But an equilibrium can be established in certain cases if both of the interacting atoms are specifically oriented along the line of interactions in such a manner that the negative displacement in the electric dimension of one atom is counterbalanced by an equal positive displacement in one of the dimensions of the second atom, so that the magnitude of the resulting relative motion is zero with respect to the natural datum. Or a multi-atom group equilibrium may be established where the total negative displacements of the atoms with electric rotation in time are exactly equal to the total effective positive displacements of the atoms with which the interaction is taking place. In these cases there is an equilibrium because the net total of the positive and negative displacement involved is zero. Alternatively, the equilibrium may be based on a total of 8 or 16 units, since, as we have found, there are 8 displacement units between one zero point and the next. A negative displacement x may be counterbalanced by a positive displacement 8-x, the net total being 8, which is the next zero point, the equivalent of the original zero. As an analogy, we may consider a circle, the circumference of which is marked off into 8 equal divisions. Any point on this circle can be described in either of two ways: as x units clockwise from zero, or as 8—x units counter-clockwise from 8. A distance of 8 units clockwise from zero is equivalent to zero. Thus a balance between x and 8—x, with the midpoint at 8, is equivalent to a balance between x and -x, with the midpoint at zero. The situation in the inter-atomic space-time equilibrium is similar. As long as the relative displacement of the two interacting motions, the total of the individual values, amounts to the equivalent of any one of the zero points, the system is in equilibrium. Because of the specific requirements for the establishment of equilibrium, the components of combinations of this kind, molecules of chemical compounds, exist in definite proportions, each n atoms of one component being associated with a specific number of atoms of the other component or components. In addition to the constant proportions of their components, compounds also differ from mixtures or alloys in that their properties are not necessarily similar to those of the components, as is generally true in the all-positive combinations, but may be of an altogether different nature, as the resultant of a space-time equilibrium of the required character may differ widely from any of the effective rotational values of the individual elements. The rotational displacement in the dimension of interaction determines the combining power, or valence, of an element. 
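The equilibrium conditions just described are purely arithmetic, and can be restated in a few lines of code. The sketch below (written in Python purely for illustration; the function name and the representation of displacements as plain integers are conveniences introduced here, not part of the text) checks the two first order possibilities: exact cancellation at the original zero point, or a combined total of eight units, which reaches the next zero point, just as a point x units clockwise on the eight-unit circle coincides with the point 8-x units counter-clockwise.

def meets_zero_point(positive: int, negative: int) -> bool:
    """True if a positive and a negative electric displacement can reach a
    space-time equilibrium: either they cancel exactly (net zero), or together
    they span the 8 units separating one zero point from the next."""
    return positive == negative or positive + negative == 8

# x balanced against x meets the original zero; x balanced against 8 - x
# meets the next zero point, the equivalent of the original zero.
for x in range(1, 5):
    assert meets_zero_point(x, x)       # normal orientation
    assert meets_zero_point(8 - x, x)   # orientation on the 8 - x basis
print(meets_zero_point(3, 2))           # False: a net displacement of 1 remains, so no equilibrium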
Since the negative displacement is the foreign component of the material molecule that has to be counterbalanced by an appropriate positive displacement to make the compound possible, the negative valence of an element is the number of units of effective negative displacement that an atom of that element possesses. It

follows that, with some possible exceptions that will be considered later, there is only one value of the negative valence for any element. The positive valence of an atom in any particular orientation is the number of units of negative displacement which it is able to neutralize when oriented in that manner. Each element therefore has a number of possible positive valences, depending on its rotational displacements and the various ways in which they can be oriented. The occurrence of these alternate orientations is largely dependent upon the position of the element within the rotational group, and in preparation for the ensuing discussion of this subject it will be advisable, for convenient reference, to set up a classification according to position.

Within each of the rotational groups the minimum electric displacement for the elements in the first half of the group is positive, whereas for those in the latter half of the group it is negative. We will therefore apply the terms electropositive and electronegative to the respective halves. It should be understood, however, that this distinction is based on the principle that the most probable orientation in the electric dimension considered independently is that which results in the minimum displacement. Because of the molecular situation as a whole, an electronegative element often acts in an electropositive capacity – indeed, nearly all of them take the positive role in chemical compounds under some conditions, and many do so under all conditions – but this does not affect the classification that has been defined. There are also important differences between the behavior of the first four members of each series of positive or negative elements and that of the elements with higher rotational displacements. We will therefore divide each of these series into a lower division and an upper division, so that those elements with similar general characteristics can be treated together. This classification will be based on the magnitude of the displacement, the lower division in each case including the elements with displacements from 1 to 4 inclusive, and the upper division comprising those with displacements of 4 or more. The elements with displacement 4 belong to both divisions, as they are capable of acting either as the highest members of the lower divisions or as the lowest members of the upper divisions. It should be recognized that in the electronegative series the members of the lower divisions have the higher net total positive displacement (higher atomic number). For convenience, these divisions within each rotational group will be numbered in the order of increasing atomic number as follows:

Division I    Lower electropositive
Division II   Upper electropositive
Division III  Upper electronegative
Division IV   Lower electronegative

These are the divisions which were indicated in the revised periodic table in Chapter 10. As will be seen from the points developed in the subsequent discussion, the division to which an element belongs has an important bearing on its chemical behavior. Including this divisional assignment in the table therefore adds substantially to the amount of information that is represented.

Where the normal displacement x exceeds 4, the equivalent displacement 8-x is numerically less than x, and therefore more probable, other things being equal. One effect of this probability relation is to give the 8-x positive valence preference over the negative valence in Division III, and thereby to limit the negative components of chemical compounds to the elements of Division IV, except in one case where a Division III element acquires the Division IV status for reasons that will be discussed later.

When the positive component of a compound is an element from Division I, the normal positive displacement of this element is in equilibrium with the negative displacement of the Division IV element. In this case both components are oriented in accordance with their normal displacements. The same is true if either or both of the components is double or multiple. We will therefore call this the normal orientation. The corresponding normal valences are the positive valence (x) and the negative valence (-x). It is theoretically possible for any Division I element to form a compound with any Division IV element on the basis of the appropriate normal valences, and all such compounds should be stable under favorable conditions, but whether or not any specific compound of this type will be stable under the normal terrestrial conditions is determined by probability considerations. An exact evaluation of these probabilities has not yet been attempted, but it is apparent that one of the most important factors in the situation is the general principle that a low displacement is more probable than a high displacement. If we check the theoretically possible normal valence compounds against the compounds listed in a chemical handbook, we will find nearly all of the low positive-low negative combinations in this list of common compounds. The low positive-high negative, and the high positive-low negative combinations are much less fully represented, while we will find the high positive-high negative combinations rather scarce. The geometrical symmetry of the resulting crystal structure is the other major determinant. A binary compound of two valence four elements (RX), for example, is more probable than a compound of a valence four and a valence three element (R3X4). The effect of both of these probability factors is accentuated in Division II, where the displacements corresponding to the normal valence have the relatively high values of 5 or more. Consequently, this valence is utilized only to a limited extent in this division, and is generally replaced by one of the alternative valences.
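The way in which a valence balance fixes the proportions of a compound RmXn can likewise be reduced to a short calculation. The following sketch simply finds the smallest whole-number ratio of atoms that equalizes the positive and negative valences; the function name is an illustrative convenience, and the examples merely restate combinations already mentioned above.

from math import gcd

def formula_ratio(positive_valence: int, negative_valence: int) -> tuple[int, int]:
    """Smallest whole-number ratio (a, b) for R_a X_b such that
    a * positive_valence == b * negative_valence."""
    g = gcd(positive_valence, negative_valence)
    return negative_valence // g, positive_valence // g

print(formula_ratio(4, 4))   # (1, 1)  -> RX:   two valence four components
print(formula_ratio(4, 3))   # (3, 4)  -> R3X4: valence four with valence three
print(formula_ratio(1, 1))   # (1, 1)  -> RX:   low positive with low negative
print(formula_ratio(2, 1))   # (1, 2)  -> RX2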

Inasmuch as the basic requirement for the formation of a chemical compound is the neutralization of the negative electric displacement, the alternative positive valences are simply the results of the various ways in which the atomic rotation can be oriented to attain an effective positive displacement that will serve the purpose. Since each type of valence corresponds to a particular orientation, the subsequent discussion will be carried on in terms of valence, the existence of a corresponding orientation in each case being understood. The predominant Division III valence is based on balancing the 8-x displacement (positive because of the zero point reversal) against the displacement of the negative component. The resulting relative displacement is 8, which, as explained earlier, is the equivalent of zero. We will call this the neutral valence. This valence also plays a prominent part in the purely Division IV compounds. The higher Division III members of Groups 4A and 4B are unable to utilize the 8-x neutral valence because for these elements the values of 8-x are less than zero, and therefore meaningless. Instead, these elements form compounds on the basis of the next higher equivalent of zero displacement. Between the 8-unit level and this next zero equivalent there are two effective initial units of motion, as well as an 8-unit increment. The total effective displacement at this point is therefore 18, and the secondary neutral valence is 18-x. A typical series of compounds utilizing this valence, the oxides of the Division III elements of Group 4A, consists of HfO2, Ta2O5, WO3, Re2O7, and OsO4.

Symmetry considerations favor balancing two electric displacements to arrive at the necessary space-time equilibrium, where conditions permit, but where the all-electric orientation encounters difficulties, it is possible for one of the magnetic rotations to take the positive role in the inter-atomic equilibrium. The magnetic valences, which apply in these magnetic-electric orientations, are the most common basis of combination in Division II, where the positive valences are high, and the neutral valences are excluded because the 8-x displacement is negative. They also make their appearance in the other three divisions where probability considerations permit. Each element has two magnetic rotations and therefore has two possible first order magnetic valences. In alternate groups the two rotations are equal, where no environmental influences are operative, and on this basis the number of magnetic valences should be reduced to one in half of the groups. As we saw in our original consideration of the atomic rotation in Chapter 10, however, any element can rotate with an addition of positive electric rotational displacement to the appropriate magnetic rotation, or with an addition of negative electric rotational displacement to the next higher magnetic rotation. Because of this flexibility, the limitation of the elements of alternate groups to a single magnetic valence actually applies only to the elements of Division I. Here this restriction has no real significance, as the elements of this division make little use of the magnetic valence in any event, because of the high probability of the low positive valences.

To distinguish between the two magnetic valences, we will call the larger one the primary magnetic valence, and the smaller one the secondary magnetic valence. Neither of these valences has any inherent probability advantage over the other, but the geometrical considerations previously mentioned do have a significant effect. For instance, where the magnetic valence can be either two or three, a combination with a valence three negative element takes the form R3X2 if the magnetic valence is two, and the form RX if the alternate valence prevails. The latter results in the more symmetrical, and hence more probable, structure. Conversely, if the negative element has valence four, the R2X structure developed on the basis of a magnetic valence of two is more symmetrical than the R4X3 structure that results if the magnetic valence is three, and it therefore takes precedence. Many of the theoretically possible magnetic valence compounds that are on the borderline of stability, and do not make their appearance as independent units, are stable when joined with some other valence combination.
For example, there are three theoretically possible first

order valence oxides of carbon: CO2 (positive electric valence), CO (primary magnetic valence), and C2O (secondary magnetic valence). The first two are common compounds. C2O is not. But there is another well-known compound, C3O2, which is obviously the combination CO•C2O. As we will see later, this ability of the less stable combinations to participate in complex structures plays an important role in compound formation. The first order valences of the elements, the valences that have been discussed thus far, are summarized in Table 7. The great majority of the true chemical compounds of all classes are formed on the basis of these valences.
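As an illustration of how the first order valences translate into specific compounds, the short sketch below pairs carbon's three positive valences with the negative valence 2 of oxygen and reproduces the three oxides just listed. The valence values restate figures given in the text; the dictionary and function names are conveniences introduced here for the reader.

from math import gcd

carbon_valences = {"normal": 4, "primary magnetic": 2, "secondary magnetic": 1}
OXYGEN_NEGATIVE = 2  # negative valence of oxygen

def oxide(carbon_valence: int) -> str:
    """Formula of the carbon-oxygen compound that balances the given valence."""
    g = gcd(carbon_valence, OXYGEN_NEGATIVE)
    c, o = OXYGEN_NEGATIVE // g, carbon_valence // g
    return f"C{c if c > 1 else ''}O{o if o > 1 else ''}"

for name, valence in carbon_valences.items():
    print(name, oxide(valence))
# normal CO2, primary magnetic CO, secondary magnetic C2O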

TABLE 7
FIRST ORDER VALENCES

Group  Division  Magnetic Valences     Elements                                Electric Valences
                 Primary   Secondary                                           Normal            Neutral (*Sec.)          Negative

1B     IV        1         1           H                                                                                  1
1B     0         2         1           He
2A     I         2         1           Li Be B C                               1 2 3 4
2A     IV        2         1           C N O F                                                   4 5 6 7                  4 3 2 1
2A     0         2         2           Ne
2B     I         2         2           Na Mg Al Si                             1 2 3 4
2B     IV        3         2           Si P S Cl                                                 4 5 6 7                  4 3 2 1
2B     0         3         3           Ar
3A     I         3         2           K Ca Sc Ti                              1 2 3 4
3A     II        3         2           V Cr Mn Fe Co Ni                        5 6 7 8
3A     III       3         2           Cu Zn Ga Ge                                               1 2 3 4
3A     IV        3         2           Ge As Se Br                                               4 5 6 7                  4 3 2 1
3A     0         3         3           Kr
3B     I         3         3           Rb Sr Y Zr                              1 2 3 4
3B     II        4         3           Nb Mo Tc Ru Rh Pd                       5 6 7 8
3B     III       4         3           Ag Cd In Sn                                               1 2 3 4
3B     IV        4         3           Sn Sb Te I                                                4 5 6 7                  4 3 2 1
3B     0         4         3           Xe
4A     I         4         3           Cs Ba La Ce                             1 2 3 4
4A     II        4         3           Pr Nd Pm Sm Eu Gd Tb Dy Ho Er Tm Yb Lu  5 6 7 8
4A     III       4         3           Hf Ta W Re Os Ir Pt Au Hg Tl Pb                           4* 5* 6* 7* 8*; 1 2 3 4
4A     IV        4         3           Pb Bi Po At                                               4 5 6 7                  4 3 2 1
4A     0         4         4           Rn
4B     I         4         4           Fr Ra Ac Th Pa U Np Pu                  1 2 3 4 5 6 7 8
4B     II        5         4           Am Cm Bk Cf Es Fm Md No
4B     III       5         4           Lr Rf Ha                                                  4* 5*

There is also an alternate type of inter-atomic orientation that gives rise to what we may call second order valences. As has been emphasized in the previous discussion, an equilibrium between positive and negative rotational displacements can take place only where the net resultant is zero, or the equivalent of zero, because any value of the space-time ratio other than unity (zero displacement) constitutes motion, and makes fixed equilibrium positions impossible. In the most probable condition, the initial level from which each rotation extends is the same zero point, or, where the nature of the orientation requires different zero points, the closest combination that is possible under the circumstances. This arrangement, the basis of the first order valences, is clearly the most probable, but it is not the only possibility. Inasmuch as the separation between natural zero points (unit speed levels) is two linear units (or eight three-dimensional units) it is possible to establish an equilibrium in which the initial level of the positive rotation (the positive zero) is separated from the initial level of the negative rotation (the negative zero) by two linear units. The effect of this separation on the valence is illustrated in Fig. 2. Fig.2 (a) (b) (c)

The basis of the first order valences is shown in (a). Here the normal positive valence V balances an equal negative valence V at an equilibrium point represented by the double line. In (b) the initial level of the positive rotation has been offset to the next zero point, two units distant from the point of equilibrium. These two units, being on the positive side of the equilibrium point, add to the effective positive displacement, and the positive valence therefore increases to V+2; that is, V+2 negative valence units are counterbalanced. In (c) it is the initial level of the negative rotation that has been offset from the point of equilibrium.

Here the two intervening units add to the effective negative displacement, and the positive valence decreases to V-2, as the V units of positive displacement are now able to balance only V-2 negative valence units. By reason of the availability of the zero point modifications illustrated in Fig. 2(b), each of the positive first order valences corresponds to a second order valence, an enhanced valence, as we will call it, that is two units greater in the case of the direct valences (x+2), and two units less for the inverse valences: 8 - (x+2) = 6-x. Compounds based on enhanced normal valences are relatively uncommon, as the normal valence itself has a high degree of probability, and the enhanced valence is not only inherently less probable, but also has a higher effective displacement in any specific application, which decreases the relative probability still further. The probability factors are more favorable for the enhanced neutral valence, as in this case the effective displacement is less than that of the corresponding first order valences. The compounds of this type are therefore more numerous, and they include such well-known substances as SO2 and PCl3. An interesting application of this valence is found in ozone, which is an oxide of oxygen, analogous to SO2. It should theoretically be possible for valences to be diminished by orientation in the manner shown in Fig. 2(c), but it is doubtful if any stable compounds are actually formed on the basis of diminished electric valences. The reason for their absence is not yet understood.

The magnetic valences are both enhanced and diminished. Either the primary or the secondary valence may be modified, but since enhancement is in the direction of lower probability (higher numerical value) the number of common compounds based on the enhanced magnetic valences is relatively small. Diminishing the valence improves the probability, and the diminished valence compounds are therefore more plentiful in the rotational groups in which they are possible (those with primary magnetic valences above two), although the list is still very modest compared to the immense number of compounds based on the first order valences.

As indicated earlier, one component of any true chemical compound must have a negative displacement of four or less, as it is only through the establishment of an equilibrium between such a negative displacement and an appropriate positive displacement that the compound comes into existence. The elements with the required negative displacement are those which comprise Division IV, and it follows that every compound must include at least one Division IV element, or an element which has acquired Division IV status by valence enhancement. If there is only one such component, the positive-negative orientation is fixed, as the Division IV element is necessarily the negative component. Where both components are from Division IV, however, one normally negative element must reorient itself to act in a positive capacity, and a question arises as to which retains its negative status. The answer to this question hinges on the relative negativity of the elements concerned. Obviously a small displacement is more negative than a large one, since it is farther away from the neutral point where positive and negative displacements of equal magnitude are equivalent. Within any one group the order of negativity is therefore the same as the displacement sequence.
In Group 2B, for instance, the most negative element is chlorine, followed by sulfur, phosphorus, and silicon, in that order. This means that the negative component in any Division IV chlorine-sulfur combination is chlorine, and the product is a

compound such as SCl2, not ClS or Cl2S. On the other hand, the compound P2S3 is in order, as phosphorus is normally positive to sulfur. Where the electric displacements are equal, the element with the smaller magnetic displacement is the more negative, as the effect of a greater magnetic displacement is to dilute the negative electric rotation by distributing it over a larger total displacement. We therefore find ClF3 and IBr3, but not FCl3 or BrI3. The magnitude of the variation in negativity due to the difference in magnetic displacement is considerably less than that resulting from inequality of electric displacement, and the latter is therefore the controlling factor except where the electric displacements are the same in both components.

On the foregoing basis, all elements of Divisions I, II, and III are positive to Division IV elements. The displacement 4 elements on the borderline between Divisions III and IV belong to the higher division when combined with elements of lower displacement, and when elements lower in the negative series acquire valences of 4 or more through enhancement or reorientation they also assume Division III properties and become positive to the other Division IV elements. Thus chlorine, which is negative to oxygen in the purely Division IV compound OCl2, is the positive component in Cl2O7. Similarly, the normal relations of phosphorus and sulfur, as they exist in P2S3, are reversed in S3P4, where sulfur has the valence 4.

Hydrogen, like the displacement 4 members of the higher groups, is a borderline element, and because of its position is able to assume either positive or negative characteristics. It is therefore positive to all purely negative elements (Division IV below valence 4), but negative to all strictly positive elements (Divisions I and II), and to the elements of Division III. Because of its lower magnetic displacement, it is also negative to the higher borderline elements: carbon, silicon, etc. The fact that hydrogen is negative to carbon is particularly significant in view of the importance of the carbon-hydrogen combination in the organic compounds. Another point that should be noted here is that when hydrogen acts in a positive capacity, it does so as a Division III element, not as a member of Division I. Its +1 valence is therefore magnetic. This is why hydrogen was assigned only to the negative position in the revised periodic table, rather than giving it two positions, as has been customary.

The variation in negativity with the size of the magnetic displacement has the effect of extending the Division III behavior into Division IV to a limited extent in the higher groups. Lead, for example, has practically no Division IV characteristics, and bismuth has less than its counterparts in the lower groups. At the lower end of the atomic series this situation is reversed, and the Division IV characteristics extend into Division III, as an alternative to the normal positive behavior of some of the elements of that division. Silicon, for instance, not only forms combinations such as MnSi and CoSi3, which, on the basis of the information currently available, appear to be intermetallic compounds similar to those of the higher Division III elements, but also combinations such as Mg2Si and CaSi2, which are probably true compounds analogous to Be2C and CaC2. Carbon carries this trend still farther and forms carbides with a wide variety of positive components.
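The relative negativity rule stated earlier in this discussion lends itself to a compact summary. In the sketch below each Division IV element is represented by a pair of numbers: its electric displacement, and a rank standing in for its magnetic displacement (smaller rank for the lower groups). The particular numerical entries are assumptions adopted only for this example, but the ordering they produce reproduces the combinations cited above.

elements = {
    # (electric displacement, group rank): a smaller electric displacement means
    # more negative; the group rank breaks ties, since a smaller magnetic
    # displacement also means more negative.
    "F": (1, 1), "Cl": (1, 2), "Br": (1, 3), "I": (1, 4),
    "O": (2, 1), "S":  (2, 2),
    "N": (3, 1), "P":  (3, 2),
}

def negative_component(a: str, b: str) -> str:
    """Return whichever of two Division IV elements takes the negative role."""
    return min((a, b), key=lambda e: elements[e])

print(negative_component("S", "Cl"))   # Cl -> compounds such as SCl2, not Cl2S
print(negative_component("P", "S"))    # S  -> P2S3
print(negative_component("Cl", "F"))   # F  -> ClF3, not FCl3
print(negative_component("Br", "I"))   # Br -> IBr3, not BrI3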

In the 2A group, the Division IV characteristics extend to the fifth element, boron. This is the only case in which the fifth element of a series has Division IV properties, and the behavior of boron in compound formation is correspondingly unique. In its Division I capacity, as the positive component in compounds such as B2O3, boron is entirely normal. But its first order negative valence would be -5. Formation of compounds based on this -5 valence conflicts with the previously stated limitation of the negative valence to a maximum of four units. Boron therefore shifts to an enhanced negative valence, adding two positive units to its first order value of -5, with a resultant of -3. The direct combinations of boron with positive elements have such structures as FeB and Cu3B2. However, many of the borides have complex structures in which the effective valences are not as clearly indicated. This raises a question as to whether boron may be an exception to the rule limiting the maximum negative valence to -4, and may utilize both the -5 and -3 valences. This issue will be considered in the next chapter.

CHAPTER 19

Complex Compounds
The discussion in the preceding chapter had direct reference only to compounds of the type RmXn, in which m positive atoms are combined with n negative atoms, but the principles therein developed are applicable to all combinations of atoms. Our next objective will be to apply these principles to an examination of some of the more complex situations.

Any atom in one of the simple compounds may be replaced by another atom of the same valence number and type. Thus any or all of the four chlorine atoms in CCl4 may be replaced by equivalent negative atoms, producing a whole family of compounds such as CCl3Br, CCl2F2, CClI3, CF4, etc. Or we may replace n of the valence one chlorine atoms by one atom of negative valence n, obtaining such compounds as COCl2, COS, CSTe, and so on. Replacements of the same kind can be made in the positive component, producing compounds like SnCl4. Simple replacement by an atom of a different valence type is not possible. Copper, for instance, has the same numerical valence as sodium, but the sodium atoms in a compound such as Na2O are not replaceable by copper atoms. There is a compound Cu2O, but the neutral valence structure of this compound is very different from the normal valence structure of Na2O. Similarly, if we exchange a positive hydrogen (magnetic valence) atom for one of the sodium (normal valence) atoms in Na2O, the process is not one of simple replacement. Instead of NaHO, we obtain NaOH, a compound of a totally different character.

A factor which plays an important part in the building of complex molecular structures is the existence of major differences in the magnitudes of the rotational forces in the various inter-atomic combinations. Let us consider the compound KCN, for example. Nitrogen is the negative element in this compound, and the positive-negative combinations are K-N

and C-N. When we compute the inter-atomic distances, by means of the relations that will be developed later, we find that the values in natural units are .904 for K-N and .483 for C-N. As stated in Chapter 18, the term ―bond‖ is not being used in this work in any way connected with the subject matter of that chapter: the combining power or valence. The term ―valence bond,‖ or any derivative such as ―covalent bond,‖ has no place in the theoretical structure of the Reciprocal System. However, use of the word ―bond‖ is convenient in referring to the cohesion between specific atoms, atomic groups, or molecules, and in the subsequent discussion it will be employed in this restricted sense. On this basis we may say that the force of cohesion, or ―bond strength,‖ is considerably greater for the C-N bond than for the K-N bond, as is indicated by the difference in the inter-atomic distances.

It has usually been assumed that this force of cohesion is an indication of the strength of the inter-atomic forces, but in reality the relation is inverse. As explained in Chapter 8, the gravitational forces exerted by the atoms, the forces due to the atomic rotation, are forces of repulsion in the time region, and the cohesion is therefore greater when the rotational forces are weaker. The short C-N distance, and the corresponding strength of this bond, are the results of inactive force dimensions in this combination which reduce the effective repulsive force, and require the atoms to move closer together to establish equilibrium with the constant force of the progression of the reference system. Because of its greater strength, the C-N bond remains intact through many processes which disrupt or modify the K-N bond, and the general behavior of the compound KCN is that of a K-CN combination rather than that of a group of independent atoms such as we find in K2O.

Groups like CN which have relatively high bond strengths and are therefore able to maintain their identity while changes are taking place elsewhere in the compounds in which they exist are called radicals. Inasmuch as the special properties of these radicals are due to the differences between their bond strengths and those of the other bonds within the compounds, the extent to which any particular group acts as a radical depends on the magnitude of these differences. Where the inter-atomic forces are very weak, and the bond is correspondingly strong, as in the C-N combination, the radical is very resistant to separation, and acts as a single atom in most respects. At the other extreme, where the differences between the various inter-atomic forces in the molecule are small, the boundary line between radicals and non-radical atomic groupings is rather vague.

The stronger radicals are definite structural groups. NH4 is, to a large degree, structurally interchangeable with the sodium atom, OH can substitute for I in the CdI2 crystal without changing the structure, and so on. The weakest radicals, those with the smallest margins of bond strength, crystallize in structures in which the radical, as such, plays no part, and the structural units are the individual atoms. The perovskite (CaTiO3) structure is a familiar example. Here each atom is structurally independent, and hence this type of arrangement is available for a compound like KMgF3 in which there definitely are no radicals, as well as for a compound such as KIO3 which contains a borderline group.
From a structural standpoint the IO3 group in KIO3 is not a radical, although it acts as a radical in some other physical phenomena, and is commonly recognized as one.

From a thermal standpoint, for example, the IO3 group is definitely a radical at low temperatures, the entire group acting as a unit. But unlike the strong radicals such as OH and CN, which maintain this single unit status under all ordinary conditions, IO3 separates into two thermal units at higher temperatures. Other radical groups are still less resistant to the thermal forces. The CrO3 group, for example, acts as a single thermal unit at the lower temperatures, but in the upper part of the solid temperature range all four atoms are thermally independent. The thermal behavior of chemical compounds, including the examples mentioned, will be discussed in a subsequent volume.

In order to take the place of single atoms in the three-dimensional inorganic structures, the radicals must have three-dimensional force distributions, and where some of the inter-atomic forces are inherently two-dimensional, as is true in some of the lower group elements, for reasons that will be explained later, the three-dimensional distribution must be achieved by the geometrical arrangement. The typical inorganic radical therefore consists of a group of satellite atoms clustered three-dimensionally around one or more central atoms. Inasmuch as the satellite atoms are between the central atom and the opposite component of the compound, the effective valence of the radical must have the same sign as that of the satellite atoms. This limitation on the net valence means that the great majority of these inorganic radicals are negative, as hydrogen is the only element that has a two-dimensional force distribution when acting in a positive capacity. The most important hydrogen radical of this class is ammonium, NH4, in which hydrogen has the magnetic valence 1 and nitrogen the negative valence 3 for a net group valence of +1. The phosphonium radical is similar, but less common. A variation of NH4 is the tetramethylammonium radical N(CH3)4, in which the hydrogen atoms are replaced by positive CH3 groups.

The theoretically possible number of negative radicals is very large, but the effect of probability factors limits the number of those actually existing to a small fraction of the number that could theoretically be constructed. Other things being equal, those groups with the smallest net displacement are the most probable, so we find BO2, with net valence -1, commonly, and BO3, with net valence -3, less frequently, but not BO4 (-5), BO5 (-7), or the other higher members of this series. Geometrical considerations also enter into the situation, the most probable combinations, where other features are equal, being those in which the forces can be disposed most symmetrically. The status of the binary radicals such as OH, SH, and CN, is ambiguous on the basis of the criteria developed thus far, since there is no distinction between central and satellite atoms in their structures, but these groups can be included with the inorganic radicals because they are able to enter into the three-dimensional inorganic geometric arrangements.

Another special class of radicals combines positive and negative valences of the same element. Thus there is the azide radical N3, in which one nitrogen atom with the neutral valence +5 is combined with two negative nitrogen atoms, valence -3 each, for a group total of -1. Similarly, a carbon atom with the primary magnetic valence +2 joins with a negative carbon atom, valence -4, to form the carbide radical, C2, with a net valence of -2.
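The net group valences quoted in the foregoing paragraphs follow from simple bookkeeping, which the following sketch makes explicit. The individual atomic valences are those stated in the text; the list-of-pairs representation and the function name are merely illustrative devices introduced here.

def net_valence(members):
    """Sum of (valence, count) pairs for the atoms making up a radical."""
    return sum(valence * count for valence, count in members)

ammonium = [(+1, 4), (-3, 1)]    # NH4: four magnetic-valence hydrogens with one nitrogen
azide    = [(+5, 1), (-3, 2)]    # N3:  one neutral-valence nitrogen with two negative nitrogens
carbide  = [(+2, 1), (-4, 1)]    # C2:  primary magnetic carbon with a negative carbon
borate   = [(+3, 1), (-2, 2)]    # BO2: boron +3 with two oxygens

for name, group in [("NH4", ammonium), ("N3", azide), ("C2", carbide), ("BO2", borate)]:
    print(name, net_valence(group))
# NH4 1, N3 -1, C2 -2, BO2 -1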

The common boride radicals, the combination boron structures mentioned in Chapter 18, are B2, B4, and B6. The best known B4 compounds are all direct combinations with valence 4 elements of Division I. It can therefore be concluded that the net valence of the B4 combination is -4. Similarly, the role of B6 in such compounds as CaB6 and BaB6 indicates that the net valence of the B6 radical is -2. The status of B2 is not as clearly indicated, but it also appears to have a net valence of -2; that is, it is simply half of the B4 combination. This net valence of -2 could be produced either by a combination of the -3 negative valence with the secondary magnetic valence, +1, or by a combination of the -5 negative valence with the positive valence +3. The same two alternatives are available for B4. The combination of +1 and -3 valences is also feasible for the radical B6, and on the basis of these values the valences of all of the boride radicals constitute a consistent system, as shown by the following tabulation:

       Positive    Negative    Net
B2     B+1         B-3         -2
B4     2 B+1       2 B-3       -4
B6     4 B+1       2 B-3       -2
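The tabulation above can be verified by the same kind of bookkeeping; the dictionary layout in this brief sketch is an illustrative convenience only.

borides = {
    "B2": [(+1, 1), (-3, 1)],
    "B4": [(+1, 2), (-3, 2)],
    "B6": [(+1, 4), (-3, 2)],
}
for name, members in borides.items():
    print(name, sum(valence * count for valence, count in members))
# B2 -2, B4 -4, B6 -2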

On the other hand, the B6 radical cannot be produced by a combination of +3 and -5 valences, and in order to utilize the -5 valence it would be necessary to substitute valence +2 in the positive position. The -3 negative valence thus leads to a more consistent set of combinations, as well as being consistent with the boron valence in the direct combinations of boron with positive elements. At least for the present, therefore, it will have to be concluded that the weight of the evidence favors a single negative valence (-3) for boron.

The general principles of compound formation developed for the simpler combinations apply with equal force to compounds containing radicals of the inorganic class. The basic requirement is that the group valence of the radical be in equilibrium with an equal and opposite valence. A negative radical such as SO4 therefore joins the necessary number of positive atoms to form a compound on the order of K2SO4. The positive NH4 radical similarly joins with a negative atom to produce a compound like NH4Cl. Or both components may be radicals, as in (NH4)2SO4. One new factor introduced by the grouping is that the relative negativity of the atoms within the group no longer has any significance. The azide group, N3, for instance, is negative, and cannot be anything but negative. In the compound ClN3, then, the chlorine atom is necessarily positive, even though chlorine is negative to nitrogen in direct Division IV combinations such as NCl3.

In the magnetic valence compounds the negative electric displacement is in equilibrium with one of the magnetic displacements of the positive component. This leaves the positive electric displacement free to exert a directional influence on other molecules or atoms. In its general aspects, this directional effect is similar to the orienting influence of the space-time equilibrium that is required in order to enable atoms of negative elements to join with other atoms in compounds. In both cases there are certain relative positions of the interacting atoms or molecules that permit a closer approach, which results in a greater cohesive force. Neither of these orienting agencies contributes anything to the cohesive forces; they simply hold the participants in the positions in which the stronger forces are generated. Without the directional restrictions imposed by these orienting

influences the relative positions would be random, and the greater cohesive forces would not develop. Since all magnetic valence compounds have free electric displacements, they all have strong combining tendencies, forming what we may call molecular compounds; that is, compounds in which the constituents are molecules instead of the individual atoms or radicals of the atomic compounds. Inasmuch as the free electric displacements are all positive, there is no valence equilibrium involved, and the molecular compounds can be of almost any character, but geometrical and symmetry considerations favor associations with units of the same kind, or with closely related units. Double molecules of a compound are not readily recognized in the solid or liquid states, but in spite of the obstacles to recognition there are many well-known combinations such as FeO, Fe2O3, C2O, CO, etc. Water and ammonia, both magnetic valence compounds, are particularly versatile in forming combinations of this type, and join with a great variety of substances to form hydrates and ammoniates.

There is only one free electric displacement in any binary magnetic valence combination, and the orienting effect is therefore exerted in only one direction. When the active molecular orientation effects, as we will call them, of a pair of molecules such as FeO and Fe2O3 are directed toward each other, the system is closed, and the resulting Fe3O4 association has no further combining tendencies. Even where several H2O molecules combine with the same base molecule, as is very common, the association is between the base molecule and each H2O molecule individually.

A different situation develops where a two-dimensional molecule is formed on the basis of a magnetic valence. Here the intermolecular distance may be reduced to the point where three molecules are within a single natural unit of space, in which case each molecule exerts an orienting effect not only upon its immediate neighbor in the active direction, but also upon the next molecule beyond it. Limitation of the effective inter-atomic forces to two dimensions in this class of compounds contributes to the extension of the magnetic orientation effects in two separate ways. First, it reduces the inter-atomic distance by one third, since there is no effective rotational force in the third dimension. In the compound lithium chloride, for example, the distance between lithium and chlorine atoms on a three-dimensional basis would be 1.321 natural units. By reason of the two-dimensional orientation, this drops to .881 units. Then, the distance between molecules 1 and 3 is further reduced by the geometric effect illustrated in Fig. 3. In an aggregate in which the structural units are arranged three-dimensionally, as in (a), molecule 2 interposes its full diameter between molecules 1 and 3. Where the inter-atomic distance is x, the distance between the centers of molecules 1 and 3 is then 4x. But if the structural units are arranged two-dimensionally, as in (b), this distance is reduced to 2y, where y is the distance between adjacent central atoms. In the case of lithium chloride, this reduction is not sufficient to enable any interaction between molecules 1 and 3, as the 2y distance is 1.398, and no effect is exerted where this distance exceeds unity. But there are other compounds, particularly those of carbon and nitrogen, in which the 2y distance is, or can be, less than unity. The C-C distance, for example, ranges from .406 to .528. With some aid from the geometric

Figure 3 (a) (b)

arrangement in the case of the greater distances, a large number of carbon compounds based on the magnetic orientation are within the range where the orienting effects of the free electric displacement extend to the third molecule. These two-dimensional magnetic valence molecules with very short inter-atomic distances are actually stable structures with their negative electric rotations fully counterbalanced by appropriate positive magnetic rotations, and they are therefore capable of independent existence in the manner of the other molecules that we have considered. Because of their strong combining tendencies, however, most of them do not actually lead an independent life more than momentarily if there are other molecules present with which they can combine, and in recognition of the fact that they are normally constituents of molecular compounds rather than molecules in their own right we will hereafter refer to them as magnetic neutral groups. While there are many atomic combinations with inter-atomic distances less than one half natural unit, or so close to this figure that they can be brought within it by structural modifications, the number of such combinations that can form magnetic neutral groups is limited by various factors such as probability, valence, relative negativity, etc. Thus the combinations CN and OH are excluded because they have active valences; that is, they are negative radicals, not neutral groups. NH2 is excluded by a probability situation that will be discussed later; OH2 is excluded because hydrogen is strongly positive to oxygen, and so on. Furthermore, the binary valence two combinations are subject to an additional restriction. Its exact nature is not yet clear, but its effect is to put CO at the limit of stability, so that combinations such as NO and CS are excluded. The practical effect of these several restrictions, together with the limitations on the inter-atomic distance, is to confine the magnetic neutral groups, aside from CO, almost entirely to combinations of carbon, nitrogen, and boron with valence one negative atoms or radicals. In the subsequent discussion we will find it convenient to use a diagram which identifies the orientation effects that are exerted by the various structural units, and thus shows how the different types of molecular compounds are held in combining positions; that is, positions in which the inter-group cohesive forces are maximized. In the diagram we will represent valence effects by double lines, as in CH3 =OH, while the primary molecular orientation effect will be represented by single lines, as in CH-CH. The secondary molecular effects exerted on the third group in line will then be shown by connecting lines, with arrows to indicate the direction of the orienting effect.

As this diagram indicates, there is a primary orientation effect between CH groups 1 and 2, and between groups 3 and 4. Because these effects are unidirectional, and paired, there is no interaction between groups 2 and 3. If the CH groups were three-dimensional, like the FeO and Fe2 O3 molecules previously mentioned, there would be no combination between the 1-2 pair and the 3-4 pair, and the result would be two CH-CH molecules. But because group 3 is within one unit of distance of group 1, the orienting effect of the free electric displacement of group 1, which acts at short range against group 2, also acts against group 3 at longer range, as shown in the diagram. Similarly, the 4-3 effect acts at long range against group 2. Thus the 1-2 and 3-4 pairs are held in the combining position by the secondary orientation effects in spite of the lack of any primary effect between groups 2 and 3. The relation of these orienting influences to the cohesion between the constituents of the atomic or molecular compound can be compared to the effect of a reduced temperature on a saturated liquid. The result of the lower temperature is solidification, and in the solid there is an additional cohesive force between the atoms that did not exist in the liquid, but this new force is not supplied by the temperature. What the change in the temperature actually accomplished was to create the necessary conditions under which the atoms could assume the relative positions in which the inter-atomic forces of cohesion are operative. Similarly, the orienting effects of the valence equilibrium and the free rotational displacement of the magnetic neutral groups do not provide the forces that hold the molecules together; they merely create the conditions which allow the stronger cohesive forces to operate. When the atoms or neutral groups are subjected to the orienting effects that permit them to establish equilibrium at one of the shorter inter-atomic or inter-group distances, it is the point of equilibrium between the rotational forces and the oppositely directed force due to the progression of the natural reference system that determines the magnitude of the cohesive forces. An important consequence is that the cohesive force between any two specific magnetic neutral groups is the same regardless of whether the orientation results from the short range primary effect, or the long range secondary effect, of the free electric displacements. In the preceding diagram, the magnitude of the cohesive force between groups 2 and 3 is identical with that of the 1-2 and 3-4 forces. It is simply the cohesive force between two CH groups. As we will see later, particularly in Chapter 21, this point is quite significant in connection with the attempts that are being made to draw conclusions concerning the molecular structure from the magnitudes of the inter-group forces. As the diagram indicates by the arrows at the two ends of the four-group combination, the 2-1 and 3-4 secondary orientation effects are not satisfied, and they are capable of extension to any other atom or group that comes within range. Such a combination of neutral groups is therefore open to further combination in both directions. The system is not closed by the addition of more groups of the same character, since this still leaves active secondary orientation effects at each end of the combined structure. 
The unique combining power that results from this continuation of the secondary effects gives rise to an extremely large and complex variety of chemical compounds. There is almost no limit on the number of groups that can be joined. As long as each end of the molecule is a

magnetic neutral group with an active secondary effect, there are still two active ends no matter how many groups are added. The necessary closure to form a compound without further combining tendencies can be attained in one of two ways. Enough of these magnetic neutral groups may combine to permit the ends of the chain to swing around and join, satisfying the unbalanced secondary effects, and creating a ring compound. Or, alternatively, the end groups may attach themselves to atoms or radicals which do not have the orienting effects of the magnetic groups. Such additions close the system and form a chain compound.

Both the chain and ring structures are known as organic compounds, a name surviving from the early days of chemistry, when it was believed that natural products were composed of substances of a nature totally different from that of the constituents of inorganic matter. As used herein, the term ―organic‖ will refer to all compounds with the characteristic two-dimensional magnetic valence structure, rather than being defined as usual to cover only carbon compounds with certain exceptions. The excluded carbon compounds are practically the same under both definitions, and the only significant difference is that in this work a few additional compounds, such as the hydronitrogens, which have the same type of structure as the organic carbon compounds, are included in the organic classification.

The valence equilibrium must be maintained in the chain compounds, and the addition of a positive radical or atom at one end of the chain must be balanced by the addition of a negative unit with the same net valence at the other end. This equilibrium question does not arise in connection with the ring compounds as all of the structural units in the ring are either magnetic neutral groups or neutral associations of atoms or groups with active valences. Here the complete valence balance is achieved within the groups or associations.

In order to join the two-dimensional magnetic group structures any radicals which are to occupy the end positions must also be two-dimensional. The inherently three-dimensional inorganic radicals such as NO3, SO4, etc., do not qualify. The two-atom and three-atom radicals like OH, CN, and NO2 are arranged three-dimensionally in the inorganic compounds, but they are not necessarily limited to this kind of an arrangement, and they can be disposed two-dimensionally. These radicals are therefore available for the two-dimensional compounds. The two-dimensional structure also reverses the requirement with respect to the net valence of the radicals. The external contacts of the two-dimensional groups are made primarily by the central atoms, and instead of having the same direction as that of the satellite atoms, the net group valence conforms to the valence of the central atom. These groups, the organic radicals, are therefore opposite in valence to their counterparts among the inorganic radicals. Corresponding to the positive ammonium radical NH4 is the negative amine radical NH2; the negative radical CN-, in which carbon has the magnetic valence 2, has an organic analog in the positive radical CN+, in which carbon has the normal valence 4; and so on. Furthermore, the combinations of carbon and the valence one negative elements, including hydrogen, which are inherently two-dimensional, and are therefore precluded from acting as inorganic radicals, are fully

compatible with the two-dimensional neutral groups. Since there are a large number of such combinations, the great majority of the organic radicals are structures of this type. From the foregoing it can be seen that the organic compounds are subject to exactly the same valence considerations as the inorganic compounds. They are, in fact, atomic associations of identically the same general nature. The only difference is that the very short inter-atomic distances in the magnetic valence compounds of the lower group elements permit the existence of secondary orientation effects that enable these compounds to unite into complex structures. This unification of the whole realm of chemical compounds is an example of the kind of simplification that results when the true reason for a physical phenomenon is ascertained. As we saw in Chapter 18, the formation of chemical compounds takes place because the atoms of the purely electronegative elements (Division IV) cannot establish a stable relationship with atoms of other elements except under certain special conditions in which their negative displacement (motion in time) is counterbalanced by an appropriate positive displacement of the elements with which they are interacting. These requirements are equally as applicable to carbon and the other lower elements as to the constituents of the inorganic compounds. All chemical compounds are governed by the same general principles. The clarification of the nature of the organic compounds will, of course, require some modification of existing chemical ideas. The concept of an electronic origin of the cohesive forces must be abandoned. Electrons are independent physical entities. They are not constituents of atoms, and they are not available to generate cohesive forces, even if they were capable of so doing. (It should be noted that the foregoing statement does not assert that there are no electrons in the atoms. That is an entirely different issue which will be given consideration when we are ready to begin a discussion of electrical phenomena.) The concepts of ―double bonds‖ and ―triple bonds‖ will also have to be discarded, along with the curious idea of ―resonance,‖ in which a system alternating between two possible states is supposed to acquire an additional energy component by reason of the alternation. Some of the theoretical concepts that are untenable in the light of the new findings, such as the ―double bonds‖ , have been quite useful in practice, and for this reason many chemists will no doubt find it difficult to believe that these ideas are actually wrong. As explained in the introductory discussion, however, much of the progress that has been made in the scientific field has been made with the help of theories that are now known to be wrong, and have been discarded. The reason for this is that none of these theories was entirely wrong. In order to gain any substantial degree of acceptance a theory must be correct in at least some respects, and, as experience has demonstrated in many cases, these valid features can contribute materially to an understanding of the phenomena to which they relate, even though other portions of the theory are totally incorrect. 
The necessity of parting with cherished ideas of long standing will be less distressing if it is realized that the ―double bonds‖ and associated concepts that must now be abandoned are not tangible physical entities; they are merely inventions by which certain empirical relations of a mathematical nature are clothed in descriptive language for more convenient manipulation. Linus Pauling brings this out clearly in the following statements:

The structural elements that are used in classical structure theory, the carbon-carbon single bond, the carbon-carbon double bond, the carbon-hydrogen bond, and so on, also are idealizations, having no existence in reality.... It is true that chemists, after long experience in the use of classical structure theory, have come to talk about, and probably to think about, the carbon-carbon double bond and other structural units of the theory as though they were real. Reflection leads us to recognize, however, that they are not real, but are theoretical constructs in the same way as the individual Kekule structures for benzene.68 When a correct theory appears it must include the valid features of the previous incorrect theory. But the identity of these features as they appear in the context of the different theories is often obscured by the fact that they are expressed in different language. In the case we are now considering, current chemical theory says that the cohesion in organic compounds is due to electronic forces. Development of the Reciprocal System of theory now leads to the conclusion that there are no electrons in the atomic structures, and consequently there are no electronic forces. At first glance, then, it would appear that the new findings repudiate the entire previous structure of thought. On closer examination, however, it can be seen that the electrons, as such, actually play no part in most of the explanations of physical and chemical phenomena that are presumably derived from the electronic theory. The theoretical development actually uses only the numerical values. For example, the conclusions that are drawn from the positions of the elements in the periodic table are currently expressed in terms of the number of electrons. Carbon has a valence of four in its ―saturated‖ condition because it has four electrons in its atomic structure, so the electronic theory says. It is clear from the empirical evidence that there actually are four units of some kind in the carbon atom, whereas the sodium atom has only one unit of this kind. But the empirical observations give us nothing but the numbers 4 and 1; they tell us nothing at all about the nature of the units to which the numerical values apply. The conclusion that these units are electrons is pure assumption, and the identification with electrons plays no part in the application of the theory. The maximum valence of carbon is four, not four electrons. Moseley's Law, which relates the frequencies of the characteristic x-rays of the elements to their atomic numbers, is another example. It is currently accepted as ―definite proof‖ of the existence of specific numbers of electrons in the atoms of these elements. Conclusions of the same kind are drawn from the optical spectra. In a publication of the National Bureau of Standards entitled Atomic Energy Levels we find this positive statement: ―Each chemical element can emit as many atomic spectra as it has electrons.‖ But, in fact, the empirical evidence in both cases contributes nothing but numbers. Here, again, the observations tell us that certain specific numbers of units are involved, but they give us no indication as to the nature of these units. So far as we can tell from the empirical information, they can be any kind of units, without restriction. Thus, when we discard the electronic theory in application to these phenomena we are not making any profound change; we are merely altering the language in which our understanding of the phenomena is expressed. 
Instead of saying that there are 11 electrons in sodium, one of which is in a particular "configuration," we say, on the basis of our theoretical findings, that the total number of effective speed displacement units in

the rotational motions of the sodium atom is 11, and that only one of these applies to the electric (one-dimensional) rotation. Carbon has 6 total displacement units in its rotational motions, with 4 in the electric dimension. It follows that in those properties which are related to the total effective speed displacement (the net total quantity of motion in the atom) the number applicable to sodium is 11, and that applicable to carbon is 6, while in those properties which are determined by the displacement in the electric dimension individually the respective numbers are 1 for sodium and 4 for carbon.

It is an equally simple matter to translate the formation of "ionic compounds" from the language of the electronic theory to the language of the Reciprocal System. The electronic theory says that stability is attained by conforming to the "electronic configuration" of one of the inert gas elements, and that potassium and chlorine, for example, accomplish this by transferring one electron from potassium to chlorine, thus bringing both to the status of the inert gas element argon. The Reciprocal System says that chlorine has a negative rotational speed displacement of one unit (a unit motion in time) in its electric dimension, and that it can enter into a chemical combination only by means of a relative orientation in which that negative displacement is balanced at a zero point by an appropriate positive displacement. Potassium has a positive displacement of one unit, and the combination of this one positive unit and the negative unit of chlorine produces the required net total of zero.

So far as the "ionic compounds" are concerned, the Reciprocal System changes practically nothing but the language, as the foregoing example shows. But when the language change is made, it becomes evident that the same theory that applies to this one restricted class of compounds applies to all of the true chemical compounds. On this basis there is no need for the profusion of subsidiary theories that have been formulated in order to deal with those classes of compounds to which the basic "ionic" explanation is not applicable. Instead of calling upon the multitude of different "bonds"–the ionic bond, the ion-dipole bond, the covalent bond, the hydrogen bond, the three-electron bond, and the numerous "hybrid" bonds–that are required in order to adapt the electronic theory to the many types of compounds, the Reciprocal System applies the same theoretical principles to all compounds.

In these cases that we have considered, the translation from electronic language to the language of the Reciprocal System leads to a significant clarification of the mechanism of the processes that are involved. Whatever value there may be in the electronic theory is not lost when that theory is abandoned; it is carried over into the theoretical structure of the Reciprocal System in different language.
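Since the argument above turns entirely on numerical values, the bookkeeping can be set out explicitly. The short Python sketch below is purely illustrative and is not part of the original text; the names are arbitrary, and the only data used are the displacements quoted above (sodium 1 of 11, carbon 4 of 6, potassium +1, chlorine -1).

    # Illustrative sketch only: electric-dimension speed displacements quoted above.
    ELECTRIC_DISPLACEMENT = {
        "Na": +1,   # sodium: one of its 11 total displacement units is electric
        "C":  +4,   # carbon: 4 of its 6 total units are in the electric dimension
        "K":  +1,   # potassium: positive displacement of one unit
        "Cl": -1,   # chlorine: negative displacement of one unit (motion in time)
    }

    def net_displacement(atoms):
        """Sum the electric displacements of a simple combination of atoms."""
        return sum(ELECTRIC_DISPLACEMENT[a] for a in atoms)

    # Potassium chloride: the one positive unit of K balances the one negative
    # unit of Cl, giving the required net total of zero.
    print(net_displacement(["K", "Cl"]))   # 0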

CHAPTER 20

Chain Compounds
In undertaking a general survey of such an extended field as that of the structure of the organic compounds it is obviously essential to use some kind of a classification system to

group the compounds of similar characteristics together, so that we may avoid the necessity of dealing with so many individual substances. The distinction between chain and ring compounds has already been mentioned. The chemical properties of the chain compounds are determined primarily by the nature of the positive and negative radicals or atoms, and it will therefore be convenient to set up two separate classifications for these compounds, one on the basis of the positive component, and the other on the basis of the negative component. In general, the classifications utilized in this work will conform to the commonly recognized groupings, but the defining criteria will not necessarily be the same, and this will result in some divergence in certain cases.

The first positive classification that we will consider comprises those compounds whose positive components contain valence four carbon atoms. These are called paraffins. This name originally referred only to hydrocarbons, but as used herein it will apply to all chain compounds with valence four carbon at the positive end of the molecule. The term "saturated compound" is commonly used with essentially the same significance so far as the chain compounds are concerned, but its application is usually extended to the cyclic compounds as well. To avoid confusion it will not be used in this work, since the cyclic compounds cannot be considered saturated on the basis of the criteria that we are setting up.

The paraffin hydrocarbon, or alkane, chain is a linking of CH2 neutral groups with a CH3 positive radical at one end of the chain, and a negative hydrogen atom at the other. The cohesion between this hydrogen atom and the adjacent CH2 group is very strong, and for most purposes it will be convenient to regard the CH2 • H combination as a negative CH3 radical. On this basis, the paraffin hydrocarbon chain is CH3 • (CH2)n • CH3.

If a valence two carbon atom is substituted for the valence four carbon atom of the paraffins, the result is an olefin, a chain which is identical with that of the paraffins except that it has the primary magnetic valence radical CH instead of the normal valence radical CH3 in the positive position. The general formula for the olefin hydrocarbons, or alkenes, is CH • (CH2)n • CH3. In the usual version of this formula one of the CH2 groups is placed outside of the CH group, but this is obviously incompatible with the structural principles developed in the preceding pages.

On first consideration it might appear that the chemical evidence is favorable to the conventional CH2 • CH sequence. When we remove all of the internal magnetic neutral groups we come down to CH • CH3 as the theoretical structure of ethylene, the first of the olefins, whereas it is generally agreed that the chemical behavior of this compound is more in harmony with the structure CH2 • CH2. This apparent contradiction is explained by the nature of the CH3 negative radical. As has been pointed out, this radical is actually CH2 • H. For most purposes the combination may be treated as a single unit, but if we express the ethylene formula in full form as CH • CH2 • H it can be seen that the association between the CH and H structural units is closer than that between CH2 and H. It is true that the CH2 group is between CH and H when the ethylene molecule is intact, but CH and H are partners in a valence equilibrium,

whereas the intervening CH2 group is neutral. Consequently, if the molecule is sufficiently disturbed by chemical or other means, the CH and H units join and the compound enters the subsequent reaction as two methylene (CH2) molecules. This is not an unusual situation. Many observers have commented that the reacting molecule under such circumstances is not necessarily the same as the static molecule.

A valence one carbon atom in the positive position produces an acetylene. Both the olefin and acetylene classifications, as herein defined, should be understood as including all compounds with the specified positive components, not merely the hydrocarbons. In the acetylenes, as in the olefins, the currently accepted molecular formulas must be revised to put the positive valence component at the end of the chain. We also find that the valence one orientation of a lone carbon atom is more stable if it is joined to a neutral group in which carbon has the same valence, rather than to one in which the carbon valence is +2. The independent carbon atom that constitutes the positive component of the acetylenes is therefore followed by a CH neutral group. The remainder of the acetylene hydrocarbon, or alkyne, molecule is identical with the corresponding portion of a molecule of either of the other two hydrocarbon chains, and the general formula is C • CH • (CH2)n • CH3. Acetylene itself is similar to ethylene in that the true structure is C • CH • H, with a valence equilibrium between the single C and H atoms which causes them to combine if the molecule breaks up. The compound therefore acts chemically as two CH units.

Addition of CH2 neutral groups to the straight chain hydrocarbons does not necessarily take place in the existing chain. The incoming groups may instead be inserted between the positive and negative components of any of the neutral groups, enlarging that group from CH2 to CH • CH2 • H, which we may write as CH • CH3, or CHCH3, as previously indicated. Further additions may then be made in the same manner as they are made in the principal chain, lengthening the neutral group indefinitely. Such a lengthened group is known as a branch of the principal chain, and structures of this kind are called branched chain compounds. No branching of the CH3 radical is possible, since addition of a CH2 group results in CH2 • CH2 • H, or CH2 • CH3, which merely extends the straight chain. A CH2 group may be added to the CH olefin radical however, as the product in this case is CCH3, which is not equivalent to an extension of the chain. This CCH3 group may then be lengthened in the usual manner to C • CH2 • CH3, and so on.

Under the accepted systems of nomenclature the branched chain compounds are named as derivatives of the straight chain compounds, the chain position being indicated by number, as in 2-methyl butane, 2,3-dimethyl hexane, etc. The added possibility of a modification of the positive radical in the olefins introduces an extra variation into the system which is taken into account by setting up several basic classifications: 1-olefins, 2-olefins, 3-olefins, and so on. Branching is handled in the same manner as in the paraffins, and the compounds have names such as 2-ethyl-1-hexene, 3,4-dimethyl-2-pentene, etc.
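The three chain families just described differ only at the positive end of the molecule, so their general formulas can be written out mechanically. The following sketch is illustrative only (the function names are arbitrary); it simply assembles the formulas given above, with n interior CH2 neutral groups.

    # Straight-chain hydrocarbon series in the group notation used in this chapter.
    def paraffin(n):
        return " • ".join(["CH3"] + ["CH2"] * n + ["CH3"])      # CH3 • (CH2)n • CH3

    def olefin(n):
        return " • ".join(["CH"] + ["CH2"] * n + ["CH3"])       # CH • (CH2)n • CH3

    def acetylene(n):
        return " • ".join(["C", "CH"] + ["CH2"] * n + ["CH3"])  # C • CH • (CH2)n • CH3

    print(paraffin(2))    # CH3 • CH2 • CH2 • CH3 (butane)
    print(olefin(2))      # CH • CH2 • CH2 • CH3 (the corresponding olefin chain)
    print(acetylene(1))   # C • CH • CH2 • CH3 (the corresponding acetylene chain)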

The names applied to the paraffins under this current system are equally applicable to these compounds on the basis of the structural relations developed in this work. However, the current ideas as to the structure of the olefins and acetylenes, and the system of nomenclature that has been applied to them, are products of the electronic theory of compound formation. The results of our theoretical development show that certain modifications of the previously accepted structural arrangements are required, as has been noted, and the nature of these modifications is such that changes in the names applied to some of the compounds would also be appropriate. On this new basis no special system of names is required for the olefins, as the paraffin system can be applied to the olefins as well. The only difference between the two is in the branching of the olefin radical, and this can be handled by utilizing the 1-alkyl term, available but not used in the paraffin compounds. On this basis 1-pentene, CH • (CH2)3 • CH3, will become simply pentene, while 2-pentene, CCH3 • (CH2)2 • CH3, becomes 1-methyl butene, and 3-pentene, (C • CH2 • CH3) • CH2 • CH3, becomes 1-ethyl propene. The paraffin names are also applicable to the acetylenes in the same manner. 1-pentyne, C • CH • (CH2)2 • CH3, becomes pentyne; 2-pentyne, C • CCH3 • CH2 • CH3, becomes 2-methyl butyne, and so on. Such a revision of the nomenclature is not only desirable from the standpoint of more accurately reflecting the true structure of the molecules, and for the sake of uniformity, but also accomplishes a substantial amount of simplification.

The information derived from theory will likewise require some modification of the conventional methods of representing the molecular structure of the organic compounds. The so-called "extended" formulas, based on concepts such as electrons and double bonds that have no place in the molecule as we find it, must be discarded. But for most purposes the exact arrangement of the individual atoms is immaterial. The structural unit is the group rather than the atom, and the positions of the groups determine the nature and magnitude of the structure-dependent properties of the compound. The notation that has been used thus far, the "condensed" structural formula which shows only the composition and sequence of the groups, is therefore adequate for most normal applications.

The usual arrangement of these condensed formulas is not entirely satisfactory, as it does not recognize the existence of positive and negative valences, and therefore fails to distinguish between groups of the same composition but opposite valence. The CH3 end groups in the paraffin molecule, for example, are currently regarded as identical. Since the opposing valences play a very important part in the molecular structure it is desirable that the formula should definitely indicate the positive and negative components of the compound. This can be accomplished without any serious dislocation of familiar patterns by identifying the positive and negative components of the compound as a whole with the left and right ends of the formula respectively, as is common practice in the inorganic division.
It would be logical to extend this policy to the individual components of the molecules, and that probably should be done some day as a matter of consistency, but some compromise with logic and consistency seems advisable in this present work in order to avoid creating further complications for the readers, who already have many unavoidable departures from conventional practice to contend with. The familiar expressions for such

primary units as NH2 and OH will therefore be retained, together with expansions such as NH • CH2 • CH3, O • CH2 • CH3, etc., even though this reverses the regular positive to negative order in most of the negative radicals. Continued use of CH3 rather than CH2 • H to represent the negative methyl radical is also a departure from consistent practice, but in this case the condensed form is not only more familiar but also more convenient. The full CH2 • H representation will therefore be used only where, as in the discussion of the structure of the ethylene molecule, it is necessary to stress the true nature of the radical. In the case of the analogous CH2 negative radical there is no significant advantage to be gained by use of the condensed expression, and this radical, which is a combination of a CH neutral group and a negative hydrogen atom, will be shown in its true form as CH • H.

For a correct representation of the molecular structure it is essential that the neutral groups be clearly identified. Where there are methyl substitutions, the identification can be accomplished by omitting the dividing mark between the components of the neutral group; e.g., CH3 • CHCH3 • CH2 • CHCH3 • CH3, 2,4-dimethyl pentane. Longer neutral groups can be identified by parentheses, the positive-negative order being preserved within the group. The formula of 3-propyl pentane on this basis is CH3 • CH2 • (CH • CH2 • CH2 • CH3) • CH2 • CH3. If further subdivision within the neutral groups is necessary, the distinction between main and subgroupings can be indicated by brackets or other suitable symbols. Where a valence two negative component is involved and the chain is double, the customary expression such as (CH3 • CH2)2 • O is appropriate if the chains are equal. Unequal chains can be represented by treating the valence two component and one of the branches as a negative radical in this manner: CH3 • CH2 • CH2 • (O • CH2 • CH3), or the two branches can be shown on separate lines.
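A minimal sketch of this notation (illustrative only, and not part of the text) treats a molecule as an ordered list of groups, positive end first, with a branched neutral group written either by omitting the dividing mark or by enclosing the group in parentheses:

    # The two examples from the text, rendered from ordered group lists.
    def render(groups):
        """Join groups with the dividing mark, positive component on the left."""
        return " • ".join(groups)

    # 2,4-dimethyl pentane: methyl branches shown by omitting the dividing mark
    # within the neutral groups (CHCH3).
    print(render(["CH3", "CHCH3", "CH2", "CHCH3", "CH3"]))

    # 3-propyl pentane: the longer neutral group is kept in parentheses, with
    # the positive-negative order preserved inside it.
    print(render(["CH3", "CH2", "(CH • CH2 • CH2 • CH3)", "CH2", "CH3"]))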

In order to facilitate the presentation of the new principles of molecular structure that have been developed from the postulates of the Reciprocal System the revised structural formulas as described in the foregoing paragraphs will be used throughout this work. In designating positions in the chain we will number from the positive end, rather than following the Geneva system, which regards the two ends as interchangeable. The different numbering is necessary for clarity, in view of the modifications that have been made, not only in the order of the groups but also, in some cases, in the group composition. However, this revised numbering will be used only for purposes of the discussion, and the accepted names of the compounds will be retained, to avoid unnecessary confusion. A complete overhaul of the organic nomenclature will be advisable sooner or later.

The somewhat minor modifications of current structural ideas that are required in the olefins and acetylenes become more significant in the diolefins, a class of compounds in which a pair of CH neutral groups with the acetylene carbon valence (one) is inserted into the olefin chain, a valence two structure. The C5 compounds of this class are known as pentadienes. If the CH groups replace the CH2 groups in the third and fourth positions of

pentene, the result is CH • CH2 • CH • CH • CH3. Instead of using the same numbering system that is applied to the other hydrocarbon families, the diolefins are numbered according to the locations of the hypothetical "double bonds," and this compound is called 1,3-pentadiene. Since the CH3 group at the negative end of the pentene molecule is actually CH2 • H, the CH2 portion is open to replacement by CH. The incoming CH groups may therefore occupy the fourth and fifth positions, producing CH • CH2 • CH2 • CH • CH • H, now called 1,4-pentadiene. Another possible structure involves removing the hydrogen atom from the CH positive radical, and splitting the molecule into two chains. If the chains are equal, we have C(CH • CH3)2.

This is 2,3-pentadiene. A variation of this structure removes the CH2 group from one of the CH3 combinations. This reduces the compound to a C4 status, but it can be brought back up to a pentadiene by inserting the CH2 group in the other branch, which produces what is called 1,2-pentadiene.

One of the most important of the diolefins, from the industrial standpoint, is isoprene, another C5 compound, currently called 2-methyl-1,3-butadiene. The structure is the same as that of 1,4-pentadiene, except that the CH2 group next to the first of the CH neutral groups is moved out of the chain and attached to the CH group as a branch: CH • CH2 • CCH3 • CH • H.

Nitrogen, which is next to carbon in the atomic series, is also the next most prolific in the formation of compounds. Some of the "carbon" compounds, such as urea, one of the first organic compounds to be synthesized, actually contain more nitrogen than carbon, but the positive component in these compounds is carbon, and the lengthening of the chain takes place primarily by the addition of carbon groups. There are other compounds, however, in which nitrogen takes the positive role both in the compound as a whole and in the neutral groups.

Corresponding to the hydrocarbons are the hydronitrogens. The positive nitrogen radical in these compounds is NH2+, in which nitrogen has the enhanced neutral valence three. A combination of this radical with the negative amine group is hydrazine, NH2 • NH2. Inserting one NH neutral group we obtain triazane, NH2 • NH • NH2. Another similar addition produces tetrazane, NH2 • NH • NH • NH2. Just how far this addition process can be carried is uncertain, as the theoretical limits have not been established, and the hydronitrogens have not been given the same exhaustive study as the corresponding carbon compounds. A nitrogen series corresponding to the acetylenes has a lone nitrogen atom with the secondary magnetic valence one as the positive component. The parent compound of this series is diimide, N • NH2. One added NH neutral group results in

triazene, N • NH • NH2, and by a second addition we obtain tetrazene, N • NH • NH • NH2. Here again, the ultimate length of the chain is uncertain. All of the neutral groups in these nitrogen compounds have the composition NH, in which nitrogen has the secondary magnetic valence one. A neutral group NH2 based on the primary magnetic valence is theoretically possible, but this group is identical with the amine radical except for the rotational orientation, and the orientation is subject to change in accordance with the relative probabilities. The amine radical is a more probable structure, and it prevents the existence of the NH2 neutral group. The NH2+ radical is also a much less probable structure than the amine radical, in which nitrogen has its normal negative valence, but this positive radical is not in competition with the amine group. Wherever a number of NH2 units exist in close proximity the interatomic forces tend toward combination, and in order that such combination may take place some groups must be reoriented so that they may act as the positive components of the compounds. The NH2+ radical has the most probable of the positive orientations, and it therefore takes over the positive role in NH2 • NH2 and similar combinations, a position that is not open to the amine radical. The NH2 neutral group has no such protected status.

Beyond carbon and nitrogen the ability to form compounds of the molecular type drops sharply, but the corresponding elements in the higher groups do participate in a few compounds of this nature. Silicon forms a series of hydrides analogous to the paraffin hydrocarbons, with the composition SiH3 • (SiH2)n • H, and also some compounds intermediate between the silicon and carbon chains. Typical examples of the latter are SiH3 • CH2 • SiH2 • H, and Si(CH3)3 • CH2 • SiH2 • CH2 • SiH2 • H. Germanium forms a series of hydrides, known as germanes, which are similar to the silicon hydrides, or silanes, and have the composition GeH3 • (GeH2)n • H. Only a few members of this series are known. An unstable tin hydride, SnH3 • SnH2 • H, has also been reported.

It could be expected that the higher valence three elements would form a limited number of compounds similar to the hydronitrogens, but the known compounds of this type are still scarce. Among those that have been reported are diphosphine, PH2 • PH2, and cacodyl, As(CH3)2 • As(CH3)2. Since the minimum magnetic valence of phosphorus and arsenic is two, these compounds cannot have the hydrazine structure NH2 • NH • H, and are probably PH • PH2 • H and AsCH3 • As(CH3)2 • CH3. As pointed out in connection with ethylene and acetylene, the chemical behavior of such compounds is explained by the tendency of the positive and negative components of the compound as a whole, such as PH and H in diphosphine, to join when the compound is disturbed during a chemical reaction.

Another series of compounds of the molecular class, but not related to either carbon or nitrogen, is based on boron. Because it acts as a Division IV element in these two-dimensional compounds, boron takes the valence five, rather than the normal valence three which it has in a compound such as B2O3, where it acts as an element of Division I. The valence one radical on the valence five basis would be BH4, or an equivalent, but such a radical would be three-dimensional, and not capable of joining a two-dimensional chain. The positive radical in the boron chain is therefore the valence two combination BH3.
As in the hydrocarbons, the negative component of the molecule as a whole is hydrogen, and because of the valence of the positive radical, two negative hydrogen atoms are required. Here again, the association between the hydrogen atoms and the adjacent

BH neutral group is close, as in the hydrocarbons, and the combination could be regarded as a valence two negative BH3 radical. For present purposes, however, it appears advisable to show it in its true form as BH • H2. The magnetic neutral groups of the boron compounds can be formed on the basis of either the primary or the secondary magnetic valence, which produce BH2 and BH respectively. Because it minimizes the number of hydrogen atoms at the negative end of the molecule, the negative radical BH • H2 takes precedence over BH2 • H2, even where the interior groups are BH2 combinations. This presence of a BH neutral group at the negative end of the compound, together with some other factors that apparently favor BH over BH2, has the effect of making the BH structures more stable than those in which the neutral groups are BH2.

The basic hydride of boron is diborane, BH3 • BH • H2. Addition of BH neutral groups produces a series of compounds with the composition BH3 • (BH)n • H2, the best known of which are hexaborane, in which n is 5, and decaborane, in which n is 9. Substitution of a pair of BH2 groups for two of the BH groups results in a series which has the composition BH3 • (BH2)2 • (BH)n • H2. Beyond tetraborane, the first member of this series (n = 1), these compounds, as indicated in the preceding paragraph, are less stable than the corresponding compounds of the all-BH series. In all of these boron compounds replacement of hydrogen atoms by other valence one atoms or radicals is possible in the same manner as in the hydrocarbons, but to a much more limited extent.

As noted earlier, the extension of Division IV characteristics into Division III, which gives rise to the two-dimensional combining tendencies of boron, does not apply to the corresponding elements of the higher groups to any substantial degree, and they do not duplicate the boron series of compounds. There is an unstable hydride of aluminum, Al2H6, and a compound Ga2H6 called digallane, both of which may be structurally similar to diborane, but there is little, if any, lengthening of these compounds by means of magnetic neutral groups.

From the overall chemical standpoint, the molecular compounds formed by positive elements other than carbon are not of much concern, and they are given little or no attention in any but specialized textbooks. They are important in the present connection, however, because they serve to confirm the theoretical conclusions that were reached with respect to the structure of the carbon compounds. The nitrogen and boron compounds are not only constructed in accordance with the general pattern deduced from theory, and followed by the carbon compounds-that is, a chain of magnetic neutral groups with a positive radical at one end and a negative radical at the other-but also support the theoretical conclusions with respect to the structural details, inasmuch as they are like the carbon compounds in those respects in which the theory finds these elements to be alike, whereas they differ from the carbon compounds in those respects in which there are theoretical differences. For example, all three of these elements form both valence two (CH2 etc.) and valence one (CH etc.) magnetic neutral groups (with the exception of NH2, the absence of which has been explained), because these magnetic valences are properties of the group of elements (2A) to which all three belong.
On the other hand, the radicals in the end positions are unlike because the electric valences, which apply to these radicals, are properties of each of the three elements individually, and they are all different.
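Because the boron series described above is built by simple group additions, its compositions can be checked by counting atoms. The sketch below is an illustrative tally only (the function name is arbitrary), based on the series BH3 • (BH)n • H2 given in the text.

    # Atom counts for the boron hydride series BH3 • (BH)n • H2 (illustrative).
    def borane(n):
        """Return (boron, hydrogen) atom counts for BH3 • (BH)n • H2."""
        boron = 1 + n          # one B in the BH3 radical plus n BH neutral groups
        hydrogen = 3 + n + 2   # three H in BH3, one per BH group, two negative H
        return boron, hydrogen

    print(borane(1))   # (2, 6)   -> B2H6, diborane
    print(borane(5))   # (6, 10)  -> B6H10, hexaborane
    print(borane(9))   # (10, 14) -> B10H14, decaborane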

The second system of classification of the organic chain compounds, that based on the nature of the negative components, is not an alternate but a parallel system. A compound classified as an alcohol because of the nature of its negative component also belongs to one of the categories set up on the basis of the identity of the positive component. The previous discussion was confined mainly to the hydrocarbons to simplify the presentation, but all of the statements that were made with reference to compounds in which the negative component is hydrogen, alone or in combination with CH2 as a negative CH3 radical, are equally applicable to those in which the hydrogen has been replaced by an equivalent negative atom or group. Thus we have paraffinic alcohols, olefinic (unsaturated) alcohols, and so on.

The primary requirement for the one for one substitutions is that the valence of the substituent must conform to the hydrogen valence both in magnitude and in sign. This requirement has been obscured to a large extent by current structural theories which do not recognize the existence of positive and negative valence in organic compounds, but some of the hydrogen atoms in these compounds are positive and others are negative, and this determines what substitutions can take place. Hydrogen in combination with carbon is negative, and may be replaced by any of the halogens or by negative radicals. Hydrogen combined with oxygen is positive, and can therefore be replaced only by positive elements and radicals. Thus from acetic acid, CH3 • CO • OH, we obtain by substitution CH2Cl • CO • OH, chloroacetic acid, and CH3 • CO • ONa, or Na • (O • CO • CH3), sodium acetate. A hydrogen atom acting alone may be either positive or negative, depending on its environment. The hydrogen atom at the end of a hydrocarbon chain is negative, and may be replaced by a halogen. CH3 • CH2 • H, ethane, becomes CH3 • CH2 • Cl, ethyl chloride. The lone hydrogen atom in formic acid, H • CO • OH, is positive, and a halogen cannot replace it. The normal valence alkali elements cannot replace this lone magnetic valence hydrogen atom either, and an incoming positive atom goes to the OH radical. The hydrogen in N-H combinations is also resistant to monatomic substitutions, but replacement by radicals of the proper valence is readily accomplished. Elements with higher valences substitute quite freely for either carbon or hydrogen in the positive and negative radicals, but enter into the magnetic neutral groups mainly as constituents of the common valence one radicals: OH, NH2, etc. Except in the direct carbon-oxygen combination CO, a single atom of valence two or three in a neutral group is necessarily a constituent of an extended radical such as (O • CH2 • CH3).

In beginning a consideration of the principal families of substituted compounds, we will look first at the alcohols. This alcohol classification is one of several which result from the addition of oxygen to the hydrocarbons in different ways. Here an OH radical is directly attached to a hydrocarbon group, replacing a negative hydrogen atom. It is not essential, however, that this OH group replace the particular atom that constitutes the negative component of the compound as a whole. The chemical behavior of the normal alcohols, in which the OH radical is at the end of the chain, as in ethyl alcohol, CH3 • CH2 • OH, is closely paralleled if OH is substituted for a hydrogen atom in one of the neutral groups, as in secondary butyl alcohol, CH3 • CH2 • CHOH • CH3.
If the substitution takes place in the positive radical the result is somewhat different. Such a substitution is more

readily made if oxygen is first introduced at the more favorable negative end of the compound, and the product of a double OH substitution is a dibasic alcohol, or glycol, the most familiar compound being ethylene glycol, CH2OH • CH2 • OH.

Earlier in this chapter it was noted that the paraffin hydrocarbons are not actually the symmetrical structures that they appear to be. There is a combination of one carbon atom and three hydrogen atoms at each end of the molecule, but one end of the chain is necessarily positive, which means that the CH3 group at this end is a radical in which carbon has the +4 valence, while the other end is necessarily negative, and this, as previously explained, means that the CH3 group in this position is actually a close association of a negative hydrogen atom with a CH2 neutral group in which carbon has its +2 valence. Where the true molecular structure is important, as in understanding the chemical behavior of ethylene, it is essential to recognize that CH3 in the negative position is, in fact, CH2 • H. As indicated in the formula given for ethylene glycol, this same asymmetry also exists in the other seemingly symmetrical compounds. The CH2OH group in the positive position in the glycols has a +4 carbon valence and a group valence of +1. In the negative position, the carbon valence is +2, and the true structure is CH2 • OH. The chemistry textbooks contain statements such as this: "Theoretically the simplest glycol should be dihydroxy methane, CH2(OH)2." The foregoing explanation of the glycol structure shows why this compound would not be a glycol, and why no such compound has been found.

An oxygen atom added to a hydrocarbon may replace the two hydrogen atoms of a CH2 neutral group rather than forming an OH radical. The resulting group CO is very close to the point of not being able to act as a magnetic neutral group at all, and it is greatly restricted as to its position in the molecule. Straight chains of CO groups similar to the CH2 chains are not possible. This explains why carbon monoxide occurs as a separate compound, whereas methylene does not. In order to enable the CO group to join an organic combination some assistance from the geometric arrangement is necessary (a point which will be discussed further in connection with our examination of the ring compounds), and in the chain compounds this can be accomplished most readily at the negative end of the molecule. In the usual arrangement, therefore, a single CO neutral group is joined directly to the negative atom or radical. If the negative component is the radical OH, the resulting compound contains the combination CO • OH, and is an acid. Acetic acid, CH3 • CO • OH, and acrylic acid, CH • CH2 • CO • OH, are representative paraffinic and olefinic (unsaturated) acids respectively. Here again, a shift of the carbon valence to +4 produces a positive radical of the same composition, and enables formation of dibasic acids, such as oxalic acid, COOH • CO • OH, maleic acid, COOH • CH • CH • CO • OH, etc.

Modification of the acid structure by substituting an alkyl group for the hydroxyl hydrogen results in another prolific family of compounds, the esters. Ethyl acetate, CH3 • CO • (O • CH2 • CH3), and diethyl oxalate, CO(O • CH2 • CH3) • CO • (O • CH2 • CH3), are typical of the mono and di esters respectively. A similar substitution in an alcohol produces an ether. This compound may be considered as a radical of the composition O • (CH2)n • CH3 in combination with an alkyl group.
If we now substitute a second radical of the same kind for one of the hydrogen atoms in the adjacent hydrocarbon group we

obtain an acetal. Another such replacement results in an orthoester. By successive substitutions in ethyl alcohol, CH3 • CH2 • OH, for instance, we produce methyl ethyl ether, CH3 • CH2 • (O • CH3), dimethyl acetal, CH3 • CH • (O • CH3)2, and trimethyl orthoacetate, CH3 • C • (O • CH3)3. Elimination of a water molecule from two acid molecules produces an anhydride, such as acetic anhydride, (CH3 • CO)2 • O. No new structural features are involved in these compounds.

If the CO neutral group is joined directly to the negative hydrogen atom at the end of the hydrocarbon chain the compound is an aldehyde. Acetaldehyde, CH3 • CO • H, is the most familiar member of this family. The aldehyde radical is usually expressed as CHO (to avoid confusion with the OH radical, the textbooks say), but this does not reflect the true status of the CO combination as a neutral group. It may be worth noting that the CHO representation also does not explain, as the CO • H formula does, why one of the most prominent features of the aldehydes is that they are good reducing agents. Like the other organic families that have been discussed thus far, the aldehydes form dibasic, as well as monobasic, compounds. The simplest dibasic aldehyde is glyoxal, COH • CO • H.

As in such structures as COOH • CO • OH, the conversion of the negative radical to a positive radical involves a valence shift, but in the acids the change is in the carbon valence, which goes from +2 in CO • OH to +4 in COOH, while in the aldehydes the change is in the hydrogen valence, which goes from -1 in CO • H to +1 in COH. These are the most basic valence changes in organic reactions, and their concurrent accomplishment is an essential element in a wide variety of chemical reactions. For instance, in the addition reactions that convert olefinic compounds to the paraffin status, such as adding HBr to acrylic acid, the carbon valence in the positive radical increases two units from +2 to +4. At the same time, the hydrogen atom that had a +1 valence in HBr decreases that valence by two units to the -1 level in the addition product CH2Br • CH2 • CO • OH. There are no obstacles in the way of a change of valence. This is merely a matter of reorientation, a change of rotational direction, and each atom is free to reorient itself to conform to its environment. But the positive-negative balance in the compound must be maintained, and the change from positive to negative, or vice versa, in the hydrogen valence is one of the most common ways of compensating for an increase or decrease in the carbon valence.
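The compensation just described is simple arithmetic, and it may help to set it out explicitly. The following lines are illustrative only, using the valence figures quoted above for the addition of HBr to acrylic acid:

    # Valence bookkeeping for the addition of HBr to acrylic acid (illustrative).
    carbon_shift   = (+4) - (+2)   # carbon in the positive radical goes from +2 to +4
    hydrogen_shift = (-1) - (+1)   # the hydrogen from HBr goes from +1 to -1

    # The two shifts cancel, so the positive-negative balance of the compound
    # is maintained in the addition product CH2Br • CH2 • CO • OH.
    print(carbon_shift, hydrogen_shift, carbon_shift + hydrogen_shift)   # 2 -2 0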

Because of the close association between the negative hydrogen atom of the hydrocarbons and the adjoining CH2 group, the CO neutral group is able to occupy a position adjoining the CH2 • H combination as an alternate to the aldehyde position next to the hydrogen atom. In this more remote position it is near the limit of stability, and this makes association with the positive radical more probable than participation in the negative combination CO • CH2 • H. For this reason, the monobasic compounds in this family, the ketones, have oxygen in the positive radical, COCH3, rather than in the negative radical as usual. The first member of the family, dimethyl ketone, or acetone, has the structure COCH3 • CH2 • H. The corresponding dibasic compound is dimethyl diketone, COCH3 • CO • CH2 • H.

The monobasic ketone structure can be verified by comparing the results of simple addition reactions of the ketones with those of the aldehydes, the isomeric compounds in which the CO group is neutral. The addition of hydrogen to the aldehydes proceeds in this manner:

CH3 • CH2 • CO • H + H2 = CH3 • CH2 • CH2 • OH

The final product, propyl alcohol, is a normal chain compound with a CH3 radical in the positive position, just as in the aldehyde itself. Only the negative end of the molecule has been altered. If the CO group in the corresponding ketone, methyl ethyl ketone, or 2-butanone, had the same status as in the aldehyde (that is, if the compound were CH3 • CH2 • CO • CH3), we would expect essentially the same result. We would expect the CH3 positive radical to remain intact, and the product to be a primary, or perhaps a secondary, alcohol. But since the CO group in the ketone is part of a radical in which the carbon valence is four, and the compound is actually COCH3 • CH2 • CH3, both CH3 groups are negative. Addition of a hydrogen atom to the neutral group CH2 produces a third negative CH3 group. Inasmuch as no positive CH radical is present, hydrogenation results in a tertiary alcohol, in which the CH3 groups are negative, as in the original ketone:

COCH3 • CH2 • CH3 + H2 = C(CH3)3 • OH
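The two hydrogenation equations above can be checked by a simple atom count. The following sketch is illustrative bookkeeping only; each structural group is entered as its element counts, and both equations are confirmed to balance.

    # Atom balance for the aldehyde and ketone hydrogenation reactions above.
    from collections import Counter

    def total(*groups):
        """Add up the element counts of the listed structural groups."""
        out = Counter()
        for g in groups:
            out.update(g)
        return dict(out)

    CH3 = {"C": 1, "H": 3}
    CH2 = {"C": 1, "H": 2}
    CO  = {"C": 1, "O": 1}
    H   = {"H": 1}
    OH  = {"O": 1, "H": 1}
    H2  = {"H": 2}

    # Aldehyde route: CH3 • CH2 • CO • H + H2 -> CH3 • CH2 • CH2 • OH
    print(total(CH3, CH2, CO, H, H2) == total(CH3, CH2, CH2, OH))            # True

    # Ketone route: COCH3 • CH2 • CH3 + H2 -> C(CH3)3 • OH
    COCH3 = {"C": 2, "H": 3, "O": 1}
    print(total(COCH3, CH2, CH3, H2) == total({"C": 1}, CH3, CH3, CH3, OH))  # True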

In the organic chain compounds thus far discussed, lengthening of the chain is accomplished mainly by the addition of CH2 neutral groups and, in some cases, CH • CH pairs. Introduction of oxygen produces a neutral group CHOH, and substitution of this group for CH2 originates additional families of compounds. These include such important substances as the hydroxy acids, the polyhydroxy alcohols, and the saccharides. The hydroxy acids may be either monobasic, like lactic acid, CH3 • CHOH • CO • OH, or dibasic, similar to tartaric acid, COOH • (CHOH)2 • CO • OH. In both cases the chains can be extended by adding more CHOH groups, although addition of CH2 is also possible, as in malic acid, COOH • CHOH • CH2 • CO • OH. The polyhydroxy alcohols are extensions of the glycol chain with CHOH neutral groups. The general formula is CH2OH • (CHOH)n • CH2 • OH. The saccharides result from conversion of the CH3 radicals in the aldehydes and ketones to CH2OH and addition of CHOH neutral groups. The products derived from the aldehydes are aldoses, the general formula for which is CH2OH • (CHOH)n • CO • H. Those derived from the ketones are ketoses, and have the structure (CO • CH2 • OH) • (CHOH)n • CH2 • OH.

When nitrogen is introduced into an aldehyde or ketone, replacing the carbon-oxygen combination with a triple combination of nitrogen, hydrogen, and oxygen in the form of the valence two oxime radical NH • O, the nature of the addition products again shows the same relation to the structures of the two oxo derivatives that we noted in the case of hydrogen addition. Adding NH to the aldehyde alters only the negative radical, which expands from CO • H to CH • NH • O. Propionaldehyde, CH3 • CH2 • CO • H, for example, becomes propionaldehyde oxime, CH3 • CH2 • (CH • NH • O). On the other hand, addition of NH to the ketones requires a molecular rearrangement to bring both CH3 groups, which are negative, into combination with positive carbon in the positive radical. Adding NH to acetone, COCH3 • CH3, produces dimethyl ketoxime, C(CH3)2 • NH • O. As indicated in these formulas, it is necessary to change the expression for the oxime radical from the conventional NOH to NH • O to show the true composition.

Another way in which nitrogen may be introduced into the hydrocarbons is by substituting the NH2 amine group for negative hydrogen. Further substitutions are then possible for the positive hydrogen atoms in NH2, giving rise to a great variety of structures. The compounds in which the NH2 radical remains intact are primary amines, those with NH and one positive substitution are secondary amines, and those in which both hydrogen atoms have been replaced, leaving only the lone nitrogen atom from the original amine group, are tertiary amines. Since the amine replacements are positive, these compounds may have more than one olefinic branch, as in diallylamine, (CH • CH2 • CH2)2 • NH, a type of structure not found in the hydrocarbons, where all hydrogen atoms are negative, and can be replaced only by negative substituents. Diamines have the usual double structure, with CH2NH2 in the positive position and the normal amine combination CH2 • NH2 at the negative end of the molecule.

Like the hydroxyl group OH which attaches to CH to form the neutral group CHOH, the amine group joins with CH to form a neutral group CHNH2. This group is more restricted as to its position in the chains than CHOH, which substitutes quite freely for CH2, but it has a special importance in that it is an essential component of the amino acids, which, in turn, are the principal building blocks of the proteins, the basic constituents of living matter. In the monoacids the CHNH2 group in effect extends the acid radical from CO • OH to CHNH2 • CO • OH. Further lengthening of the chain takes place by addition of hydrocarbon neutral groups, or CHOH, rather than CHNH2. Thus d-alanine, CH3 • CHNH2 • CO • OH, lengthens to l-leucine, CH3 • CHCH3 • CH2 • CHNH2 • CO • OH. These two compounds are members of one sub-group of the amino acids in which the positive radical is CH3. A second sub-group utilizes the carboxyl radical COOH in the positive position. The simplest compound of this type is d-aspartic acid, COOH • CH2 • CHNH2 • CO • OH. The third of the sub-groups, the diamino acids, has amine radicals in both the positive and negative positions, as in d-lysine, CH2NH2 • (CH2)3 • CHNH2 • CO • OH.

Another combination containing nitrogen is the cyanide, or nitrile, radical. In the normal radical CN nitrogen has the negative valence three and carbon has the primary magnetic valence two, the net group valence being -1. The positive and negative roles are reversed in the radical NC, in which nitrogen has the enhanced neutral valence three. In this orientation nitrogen has Division III properties, and is positive to carbon rather than negative as usual. Since the negative valence of carbon is four, the net valence of the radical NC is -1, identical with the valence of CN. The NC compounds, the isocyanides, therefore have the same composition as the cyanides, but different properties. The CN+ radical makes its appearance in such compounds as cyanoacetic acid, CN • CH2 • CO • OH. Here nitrogen is negative, as in the CN- radical, but carbon has the normal positive valence four, and the net group valence is therefore +1. Cyanogen, CN • CN, is a combination of the +1 and -1 radicals. Compounds with the CO • CN combination in the negative position are not generally regarded as constituting a separate family, and are named as members of the normal cyanides.
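The net group valences just quoted are again a matter of simple addition. The sketch below is illustrative only, using the individual valences given in the text for the cyanide family of radicals:

    # Net group valences of the nitrile radicals discussed above (illustrative).
    radicals = {
        "CN":  {"C": +2, "N": -3},   # normal cyanide radical: net -1
        "NC":  {"N": +3, "C": -4},   # isocyanide radical: net -1
        "CN+": {"C": +4, "N": -3},   # the radical in cyanoacetic acid: net +1
    }

    for name, valences in radicals.items():
        print(name, sum(valences.values()))

    # Cyanogen, CN • CN, pairs the +1 and -1 radicals for a net of zero.
    print(sum(radicals["CN+"].values()) + sum(radicals["CN"].values()))   # 0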

Introduction of the CO neutral group in conjunction with NH2 produces an amide, a structure which is open to an unusually wide variety of additions and substitutions. If we start with acetamide, CH3 • CO • NH2, we may add CH2 groups in the normal manner to form propionamide, CH3 • CH2 • CO • NH2, and the higher homologs, or we may substitute positive radicals for the amine hydrogen, obtaining compounds like N-ethyl acetamide, CH3 • CO • (NH • CH2 • CH3). The NH combination, which has a net valence of -2, can take the place of oxygen in the CO group of the amide, forming a CNH neutral group which has similar properties. Such a replacement in acetamide gives us acetamidine, CH3 • CNH • NH2. If the neutral CO group in acetamide is replaced by the positive CO radical we obtain aminoacetone, COCH3 • CH2 • NH2. Further replacement of carbon by nitrogen then changes the radical COCH3 to CONH2, and produces a whole new series: urea, CONH2 • NH2, and its derivatives. Another CO group changes the monobasic carbamide, urea, to a dibasic compound, oxamide, CONH2 • CO • NH2.

A negative combination of oxygen and nitrogen that can be substituted for hydrogen is the nitro group, NO2. This results in a family known as the nitroparaffins. 1-nitropropane, CH3 • (CH2)2 • NO2, is typical. The NO2 group in these nitroparaffins is a combination of positive nitrogen (valence +3) with negative oxygen (-2 each). An isomeric family of compounds, the alkyl nitrites, substitutes a group ONO, in which one oxygen atom with the enhanced neutral valence +4 and a nitrogen atom with its normal -3 valence form a valence one positive radical ON. A further combination with negative oxygen then produces a valence one negative radical ONO. The CO • NO2 combination, like CO • CO, is outside the magnetic neutral limits under ordinary conditions, and there is no CO • NO2 series of compounds corresponding to those based on CO • NH2.

In the quaternary ammonium compounds nitrogen has its neutral valence five, as in the inorganic nitrates, and joins with the equivalent of five valence one negative atoms or radicals to form compounds ranging from simple combinations such as tetramethylammonium hydroxide, N(CH3)4 • OH, to some very complex, and biologically important, compounds such as lecithin. The quaternary ammonium portion of the lecithin molecule also exists separately as choline, N(CH3)3OH • CH2 • CH2 • OH.

Addition of oxygen to the cyanide and isocyanide radicals produces the radicals OCN and ONC, which form the basis of the cyanates and isocyanates. A comparison of the cyanides and cyanates provides a good illustration of the way in which the various pertinent factors enter into the construction of chemical compounds. Each element has several possible rotational orientations which it can assume to form chemical combinations, and in each of these orientations it has an effective speed displacement, or valence, which determines the status that the element can assume in a compound, and the ratio in which it combines with the other components. Some orientations are inherently more probable than others, but the type of combination that will be the most stable cannot be determined solely on the basis of this probability, since other factors also enter into the situation. The limitation imposed on direct combinations by the relative negativity of the constituents is one such factor. The greater relative probability of low net group valences in the radicals is another. Replacement capacity is likewise a significant factor. A valence

one radical is not only an inherently more probable structure than one of higher valence; it also has an ability to replace hydrogen atoms quite freely, while radicals of higher valence can accomplish such replacements only with some difficulty. In an environment favorable to these replacements the valence one radical therefore takes precedence, if such a radical can be formed. In any particular instance where there are two or more possible ways of constructing a valence one radical, the combined influence of all effective factors determines which of the possible combinations has the greatest over-all probability, and consequently the greatest stability. Where the margin of one structure over another is small, both may exist under appropriate conditions; where it is large, only the more stable compound can exist.

In the cyanides the net total of all factors affecting the combination of carbon and nitrogen favors carbon valence +2 and nitrogen valence -3. An alternate with carbon -4 and nitrogen +3 is close enough to be stable. When oxygen, with valence -2, is added to either of these radicals the positive valence must increase by two units if the addition product is to be a valence one substitute for negative hydrogen. This is possible in both cases, as both carbon and nitrogen have the required higher valences. Carbon steps up from the primary magnetic valence +2 in CN to the normal valence +4 in OCN. Nitrogen goes from the enhanced neutral valence +3 in NC to the neutral valence +5 in ONC. The negative valences are unchanged: nitrogen has -3 in both CN and OCN, carbon has -4 in NC and ONC.

The participation of elements of the higher rotational groups in chemical compounds involves no new structural features. Because of factors such as the higher magnetic valences, the greater inter-atomic distances, and the prevalence of three-dimensional force distributions in the higher rotational groups, these elements are excluded from many of the types of combinations and structures in which the elements of Group 2A participate. But to the extent to which these elements can occupy positions in such combinations and structures, they do so on the same basis as the analogous Group 2A elements. The descriptions of the various types of combinations and structures in the preceding pages therefore apply to the compounds of these higher group elements as well as to those of the elements that were specifically mentioned.

Sulfur comes the nearest to duplicating the lower group structures. The corresponding Group 2A element, oxygen, uses its negative valence almost exclusively, and to the extent that its somewhat greater inter-atomic distances will permit, sulfur, which has the same -2 valence, duplicates the oxygen compounds. Corresponding to the alcohols, acids, ethers, amides, etc., which have been discussed in the preceding pages, there are thioalcohols, thioacids, thioethers, thioamides, etc., that are identical except that sulfur substitutes for oxygen. The inter-atomic distance C-S is greater than the C-O distance, and this makes the sulfur compounds somewhat less stable than their oxygen analogs, limiting the total number of these compounds rather severely. One significant point is that the C-S distance will not permit the formation of CS neutral groups, or the replacement of neutral CO by CS. This eliminates the possibility of families of sulfur compounds similar to the oxygen families whose negative radicals are CO • OH, CO • NH2, CO • OCH3, and so on.
There are thioacids, but the radical is not CS • OH, or CS • SH; it is CO • SH. Where the formula of

a compound, as written in accordance with current practice, appears to indicate the presence of a CS group in a neutral position, this is actually a valence two combination that forms part of the positive radical. Thus thioacetamide and thiourea, commonly represented as CH3 • CS • NH2 and NH2 • CS • NH2, are actually CSCH3 • NH2 and CSNH2 • NH2. Neither CSOH nor CSSH is barred from acting as a valence one positive radical, a position in which the inter-atomic distance is not a controlling factor, but both are limited in their stability. CSOH tends to rearrange to the more probable form COSH, while CSSH is vulnerable to loss of a CS2 molecule. For example, xanthic acid, CSSH • (O • CH2 • CH3), spontaneously separates into CS2 and ethyl alcohol.

Oxidation of the sulfides provides another example of the displacement of the valences by addition of a strongly negative element. In methyl sulfide, (CH3)2S, sulfur has its normal negative valence, -2. Because it is positive to oxygen, oxidation forces it into the positive position in the compound, with a +4 valence, and the CH3 groups, which can take either +1 or -1, shift to the negative. The product is methyl sulfoxide, SO(CH2 • H)2. An additional oxygen atom is accommodated by a further shift in the sulfur valence to its maximum value +6 (the neutral valence). The new compound that is formed is methyl sulfone, SO2(CH2 • H)2.

The single element radicals, such as N3 (N+5 • N-3 • N-3) and C2 (C+2 • C-4), conform to the same pattern of behavior as the other radicals. These particular combinations form azides and carbides respectively. The latter, since they contain no element other than carbon and hydrogen, have been named as a hydrocarbon family, although from a structural standpoint the introduction of the C2 radical into a normal hydrocarbon is the equivalent of the substitution of any other radical, and the resulting compounds should logically be called carbides. The carbide structure is quite evident in such compounds as (CH • CH2)2 • C2, which is divinylacetylene, or 1,5-hexadien-3-yne. The valence balance here is the same as in the binary carbides: CaC2, etc. As indicated earlier, however, probability considerations favor valence one radicals, where such radicals are possible, and in the hydrocarbons the C2 combination generally joins with a positive hydrogen atom to form the valence one radical C2H, structurally analogous to OH. The compounds utilizing this radical may be either olefinic (example: vinylacetylene, CH • CH2 • C2H) or acetylenic (example: butadiyne, C • CH • C2H). Magnetic neutral groups can be added in the usual manner, forming compounds such as 1,5-hexadiyne, C • CH • CH2 • CH2 • C2H. This compound, also known as dipropargyl, is isomeric with benzene, and attracted a great deal of attention in the early days of structural chemistry when the "benzene problem" was the center of attention.

A simple carbide, H • C2H, is the initial product of the action of water on calcium carbide, but since hydrogen is negative to carbon a direct combination of this kind between carbon and positive hydrogen is unstable, and the hydrogen carbide promptly changes to acetylene, in which the hydrogen atoms are negative. The valence changes in this series of reactions are interesting. In the original calcium carbide the valences are Ca +2, C +2, C -4. The reaction with water substitutes two +1 hydrogen atoms for the calcium.
The relative negativity of carbon and hydrogen then forces hydrogen into the negative position, and since the total negative valence on this basis is only two units, carbon has to take its +1 valence to reach an equilibrium.
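As a final illustrative check (not part of the original text), the single-element radicals and the calcium carbide balance described above can be verified with the quoted valences:

    # Valence sums for the single-element radicals and calcium carbide (illustrative).
    print(sum([+5, -3, -3]))      # N3 azide radical: net -1
    print(sum([+2, -4]))          # C2 carbide radical: net -2
    print(sum([+2, -4, +1]))      # C2H radical (C2 plus a positive hydrogen): net -1
    print(sum([+2, +2, -4]))      # CaC2, with Ca +2, C +2, C -4: net 0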

Although the three-dimensional inorganic radicals of the SO4 type are not able to substitute freely for hydrogen in organic compounds in the manner of the organic radicals, it is possible for organic chains to replace the atoms that are joined to these three-dimensional radicals in the inorganic compounds. In other words, there is no room for a three-dimensional component in a two-dimensional structure, but a two-dimensional combination can occupy a position in a three-dimensional structure. Typical compounds are ethyl sulfate, (CH3 • CH2)2 • SO4, and methyl phosphate, (CH3)3 • PO4. Compounds of the metals with organic radicals are usually grouped in a separate category as metal-organic, or organometallic, but they are classified as organic in this work, inasmuch as they have the regular organic structure. A compound such as ethyl sodium, Na • CH2 • CH3, has exactly the same structure as the corresponding paraffin hydrocarbon, propane, CH3 • CH2 • CH3. A compound such as diphenyl tin has exactly the same structure as diphenyl methane, one of the aromatic ring compounds that we will examine in Chapter 21. No separate consideration needs to be given, therefore, to either the organometallic compounds, or those compounds which have both organic and inorganic components, in this discussion of molecular structure. The number and diversity of the chain compounds can be increased enormously by additional branching, by combinations of the various substituents that have been discussed, and by the use of some less common substituents, but all such compounds follow the same structural principles that have been outlined for the most common organic chain families. There are some additional ways in which structural variations can occur, and to complete the molecular picture a few comments on these items are advisable, but since they are equally applicable to the ring compounds it will be appropriate to defer this discussion until after we have examined the ring structures.

CHAPTER 21

Ring Compounds
The second major classification of the organic compounds is that of the ring compounds. These ring structures are again divided into three sub-classes. In two of these, the positive components of the magnetic neutral groups of the rings are carbon atoms: the cyclic, or alicyclic, compounds in which the predominant carbon valence is two, and the aromatic compounds in which this valence is one. In the third class, the heterocyclic compounds, one or more of the carbon atoms in the ring is replaced by an atom of some other element. All of these classes are further subdivided into mononuclear and polynuclear divisions, the basic structure of the latter being formed by a condensation or fusion of two or more rings. It should be understood that the classifications are not mutually exclusive. A compound may consist of a ring joined to one or more chains; a chain compound may have one paraffinic and one olefinic branch; a cyclic ring may be joined to an aromatic ring; and so on.

As in the chain compounds, a parallel classification divides the ring compounds into families characterized by the nature of the negative components: hydrocarbons, alcohols, amines, etc. The normal cyclic hydrocarbon, a cyclane, or cycloparaffin, is a simple ring of CH2 neutral groups. The general formula can be expressed as -(CH2)n-. Beginning with cyclopropane (n = 3), normal cyclanes have been prepared with all values of n up to more than 30. The neutral groups in these rings are identical with the CH2 neutral groups in the chain compounds, and they may be expanded in the same manner by CH2 additions. Corresponding to the branched chain compounds we therefore have branched rings such as ethylcyclohexane, -(CH2)5 • (CH • CH2 • CH3)-, and 1-methyl-2-ethyl cyclopentane, -CHCH3 • (CH • CH2 • CH3) • (CH2)3-. In the notation used herein, the neutral groups will be clearly identified by parentheses or other means, and the positive-negative order will be preserved within these groups as in the neutral groups of the chain compounds. To identify the substance as a ring compound and to show that the end positions in the straight line formula have no such special significance as they do in the chain compounds, dashes will be used at each end of the ring formula as in the examples given. If two or more rings are present, or if a portion of the compound is outside the ring, the positions of the dashes will so indicate. While any group could be taken as the starting point in expressing the formula of a single ring, the order of the usual numbering system will be followed as far as possible, to minimize the deviations from familiar practice. The branch names such as 1-methyl-2-ethyl are then clearly indicated by the formula. Replacement of all of the valence two groups in the cyclic ring by valence one groups, where such replacement is possible, converts the cyclic compound into an aromatic. In general, however, the distinctive aromatic characteristics do not appear unless the replacement is complete, and the intermediate structures in which CH or its equivalent has been substituted for CH2 in only part of the ring positions will be included in the cyclic classification. Since the presence of the remaining CH2 groups is the principal determinant of the molecular properties, the predominant carbon valence, in the sense in which that term is used in defining the classes of ring compounds, is two, even where there are more CH than CH2 groups in the molecule. As mentioned earlier, the probabilities favor association of like forces in the molecular compounds. The CH2 groups have sufficient latitude in their geometric arrangement to be able to compensate for substantial variations, and single CH2 groups can therefore fit into the molecular structure without difficulty, but the CH groups have very little geometric leeway, and for that reason they nearly always exist in pairs. This does not mean that the individual group is positively barred from existing separately, and in some of the more complex structures single CH groups can be found, but in the simple rings the pairs are so much more probable than the odd numbers of groups that the latter are excluded. The first two-group substitution in the cyclanes produces the cyclenes, or cycloolefins. A typical compound is cyclohexene, -(CH2)4 • (CH)2-. The designations cycloparaffin and cycloolefin are not appropriate, in view of the findings of this work, as the cycloparaffins contain no carbon atoms with the characteristic paraffin valence, and it is the substitution

of two acetylene valence groups into the CH2 rings that forms the cycloolefins. The names cyclane and cyclene are therefore preferable. Substitution of two more CH groups into the ring produces the cyclodienes. The existence of two CH • CH pairs in these compounds introduces a new factor in that the positions of the pairs within the ring may vary. No question of this kind arises in connection with cyclopentadiene, -(CH)4 • CH2-, the first compound in this series, but in cyclohexadiene two different arrangements are possible: -(CH)4 • (CH2)2-, which is known as 1,3-cyclohexadiene, and -(CH)2 • CH2 • (CH)2 • CH2-, which is 1,4-cyclohexadiene. Negative hydrogen atoms in the cyclic compounds may be replaced by equivalent atoms or groups in the same manner as those in the magnetic neutral groups of the chain compounds. The resulting products, such as cyclohexyl chloride, -(CH2)5 • CHCl-, cyclohexanol, -(CH2)5 • CHOH-, cyclohexylamine, -(CH2)5 • CHNH2-, etc., have properties quite similar to those of the equivalent chain compounds: chlorides, alcohols, amines, and so on. There are no atomic groups in the normal cyclic rings which have an amount of freedom of geometric arrangement comparable to that of the radicals at the two ends of the aliphatic chains, and the substituents which are limited to the radicals in the chains do not appear at all in the cyclic compounds unless a branch becomes long enough to put the end group beyond the range of the forces originating in the ring. In this case the structure is in effect a combination chain and ring compound. Because of this geometric restriction the range of substituents in the normal types of cyclic compounds is considerably narrower than in the chains. In addition to those already mentioned, Cl, OH, and NH2, the primary list includes the remaining halogens, oxygen, CN, and CO • OH. The compounds formed by direct substitution of oxygen for the two hydrogen atoms of the CH2 group are named as ketones, but they do not have the ketone structure, as the resulting CO group is part of the ring and is a magnetic neutral group. One substitution produces cyclohexanone, -(CH2)5 • CO-. A second results in a compound such as 1,3-cyclohexanedione, -CO • CH2 • CO • (CH2)3-. The CO substitution can extend all the way to cyclohexane hexone, -(CO)6-, in which no hydrogen remains. It is also possible to make the oxygen substitution by means of a valence one combination instead of the full valence two replacement, in which case we obtain a compound such as cyclohexyl methyl ether, -(CH2)5 • (CH • OCH3)-. Additional families of compounds are produced both by secondary substitutions, which result in structures on the order of cyclohexyl acetate, -(CH2)5 • CH(O • CO • CH3)-, and by parallel substitutions in two or more neutral groups. An example of the type of structure that is produced by the multiple substitutions is 1,2,3-cyclopropanetricarboxylic acid, -(CH • CO • OH)3-. The naturally occurring compounds of this cyclic class are highly branched rings beginning with such substances as menthol, -CHCH3 • CH2 • CHOH • (CH • CHCH3 • CH3) • (CH2)2-, and extending to very complex structures, but they follow the same general structural patterns as the simpler cyclic compounds, and will not require additional discussion in the present connection.
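Since each formula in this notation is simply a sequence of atomic groups, the gross composition of a ring follows by adding up the atoms in its groups. The short Python sketch below is purely illustrative; the group contents are transcribed from the formulas just given.

    from collections import Counter

    # Atom content of the structural groups used in the ring formulas above.
    CH2 = Counter({"C": 1, "H": 2})
    CH  = Counter({"C": 1, "H": 1})
    CO  = Counter({"C": 1, "O": 1})

    def composition(groups):
        """Add up the atoms in a sequence of structural groups."""
        total = Counter()
        for group in groups:
            total += group
        return dict(total)

    print(composition([CH2] * 6))                  # cyclohexane -(CH2)6-       : C6H12
    print(composition([CH2] * 4 + [CH] * 2))       # cyclohexene -(CH2)4.(CH)2- : C6H10
    print(composition([CO, CH2, CO] + [CH2] * 3))  # 1,3-cyclohexanedione       : C6H8O2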

As mentioned earlier, the CH2 groups have a considerable degree of structural latitude because of their three-atom composition. The angle between the effective lines of force varies from about 120 degrees in cyclopropane to less than 15 degrees in the largest cyclic rings thus far studied. The two-atom groups such as CH do not have this structural freedom, and are restricted to a narrow range in the vicinity of 60 degrees. The theoretically exact limits have not yet been determined, but the difficulties involved in the preparation of derivatives of cyclooctatetraene, -(CH)8, indicate that this compound is at the extreme limit of stability. This would suggest a maximum deviation of about 15 degrees from the 60 degree angle of the six-member ring. The atoms of which the molecular compounds are composed have a limited range in which they can assume positions above or below the central plane of the molecule. The actual angles between the effective lines of force will therefore deviate slightly from the figures given above, which are based on positions in the central plane, but this does not affect the point which is being made, which is that the cyclic ring is very flexible, whereas the aromatic ring is practically rigid. As long as there is even one CH2 group in the ring it has the cyclic flexibility. Cyclopentadiene can exist in spite of the rigidity of the portion of the ring occupied by the four CH groups because the CH2 group that completes the structure is able to accommodate itself to the position necessary for closing the ring. But when all of the three-atom groups have been replaced by two-atom groups or single atoms the ring assumes the aromatic rigidity. Cyclobutadiene, for example, would consist of four CH groups only, and the maximum deviation of the CH lines of force, somewhere in the neighborhood of 75 degrees, is far short of the 90 degrees that would be required for closure of the cyclobutadiene ring. All attempts to produce such a compound have therefore failed. The properties of the various ring compounds are dependent to a considerable degree on this question as to whether the members of the rings are restricted to certain definite positions, or have a substantial range of variability within which they can adjust to the requirements for combination. In view of this natural line of demarcation, the aromatic classification, as used in this work, is limited to the rigid structures, specifically to those compounds composed entirely of valence one CH groups or their monovalent substitution products, except for such connecting carbon atoms as may be present. Because of the limitations on the atomic positions, the aromatic compounds, with the exception of cyclooctatetraene, are confined to the six-member rings, the valence one equivalents of cyclohexane and its derivatives, and there are no aromatic analogs of cyclobutane, cycloheptane, etc. The structural rigidity therefore limits the compound forming versatility of the aromatic rings to a substantial degree, but this is more than offset by other effects of the same factor. The locations in the chain compounds which are open to the greatest variety of combinations are the ends of the chain and its longer branches, if any. In the aromatic rings every ring location has, to some degree, the properties of an end. 
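Returning for a moment to the ring-closure geometry described above: closing a ring of n equivalent groups requires an average turn of 360/n degrees per group, so the approximate flexibility ranges quoted in the text (roughly 15 to 120 degrees for the three-atom CH2 group, and roughly 60 plus or minus 15 degrees for the two-atom CH group) can be tested directly. The numerical limits used in this illustrative sketch are the rough figures given above, not exact theoretical values.

    def ring_angle(n):
        """Average turn per group needed to close a ring of n groups (degrees)."""
        return 360.0 / n

    # Approximate flexibility ranges quoted in the text (degrees).
    CH2_RANGE = (15.0, 120.0)   # three-atom groups: highly flexible
    CH_RANGE  = (45.0, 75.0)    # two-atom groups: about 60 deg plus or minus 15 deg

    def can_close(n, angle_range):
        lo, hi = angle_range
        return lo <= ring_angle(n) <= hi

    print(can_close(3, CH2_RANGE))   # True : cyclopropane, 120 deg
    print(can_close(6, CH_RANGE))    # True : benzene, 60 deg
    print(can_close(8, CH_RANGE))    # True : cyclooctatetraene, 45 deg (marginal)
    print(can_close(4, CH_RANGE))    # False: cyclobutadiene would need 90 deg

The four-member all-CH ring fails the test while the eight-member ring barely passes, which matches the observed non-existence of cyclobutadiene and the marginal stability of cyclooctatetraene noted above.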
Also, because of the rigidity of the ring, the maximum intergroup distance 1-3 in the ring is about ten percent less than the distance between the equivalent groups in the aliphatic chain, after making an allowance for the small amount of flexibility that does exist. This

brings some additional combinations of elements within the limit of effectiveness of the free electric displacements, and in these rings we find not only groups such as COH, CCl, CNH2, etc., which are the valence one equivalents of the combinations that make up the cyclic rings and the interior portions of the chain compounds, but also other combinations such as CNO2 and CSH which are just beyond the magnetic neutral limits in the nonaromatic structures. The number of available combinations in which the neutral group CO accompanies the negative radical is similarly increased. Secondary substitutions extend the length and diversity of the magnetic neutral groups of the ring, and produce a wide variety of single branch compounds on the order of isobutyl benzene, -(CH)5 • (C • CH2 • CHCH3 • CH3)-, and N-ethyl aniline, -(CH)5 • (C • NH • CH2 • CH3)-, but the principal field for variability in the mononuclear aromatics lies in their capability of multiple branching. The aromatic rings not only have a greater variety of available substituents than any other type of molecular compound, but also a larger number of locations where these substituents may be introduced. This versatility is compounded by the fact that in the rings, as in the chains, the order of sequence of the groups has a definite effect on the properties of the compound. The behavior of 1,2-dichlorobenzene, -(CCl)2 • (CH)4-, for instance, is in many respects quite different from that of 1,4-dichlorobenzene, -CCl • (CH)2 • CCl • (CH)2-. A significant feature of the aromatic rings is their ability to utilize larger numbers of the less versatile substituents. For example, the limitation of such groups as NO2 to the negative radical in the chains means that only one such group can exist in any chain compound, unless a branch becomes so long that the compound is in effect a union of two chains. In the aromatic ring this limitation is removed, and compounds with three or four of the highly reactive nitro groups in the six-member ring are common. The list includes such well-known substances as picric acid (2,4,6-trinitrophenol), -COH • CNO2 • CH • CNO2 • CH • CNO2-, and TNT (2,4,6-trinitrotoluene), -CCH3 • CNO2 • CH • CNO2 • CH • CNO2-. Since there is only one hydrogen atom in the CH group, the direct substitutions in the aromatic rings are limited to valence one negative components. In order to establish a valence equilibrium with a bivalent atom or radical two of the aromatic rings are required. These bivalent atoms or groups therefore constitute a means whereby two rings can be joined. Diphenyl ether, for example, has the structure -(CH)5 • C-O-C • (CH)5-, in which the oxygen atom is not a member of either ring but participates in the valence equilibrium. The bivalent negative radical NH similarly produces diphenylamine, -(CH)5 • C-NH-C • (CH)5-. Each of these rings is a very stable structure with a minimum of eleven constituent atoms, and a possibility of considerable enlargement by substitution. This method of joining rings is therefore a readily available process whereby stable molecules of large size may be constructed. Further additions and substitutions may be made not only in the rings and their branches, but in the connecting link as well. Thus the addition of two CH2 groups to diphenyl ether produces dibenzyl ether, -(CH)5 • CCH2 • O • CH2C • (CH)5-.

According to the definition of an aromatic compound, these multiple ring structures are not purely aromatic, as the connecting links do not qualify. This is a situation which we will encounter regardless of the manner in which the various organic classifications are set up, as the more complex compounds are primarily combinations of the different basic types of structure. Ordinarily a compound is classified as a ring structure if it contains a ring of any kind, even though the ring may be only a minor appendage on a long chain, and it is considered as an aromatic if there is at least one aromatic ring present. In the multiple ring compounds the combination (CH)5 • C, which is a benzene ring less one hydrogen atom, acts as a monovalent positive radical, the phenyl radical, and the simple substituted compounds can be named either as derivatives of benzene or as phenyl compounds; i.e., chlorobenzene or phenyl chloride. The net positive valence one is the valence condition in which the ring is left when a hydrogen atom is removed, but this net valence is due entirely to the +1 valence of the lone carbon atom from which the hydrogen atom was detached, all other groups being neutral, and it does not necessarily follow that the carbon valence will remain at +1. As emphasized earlier, valence is simply a matter of rotational orientation, and when acting alone any atom can assume any one of its possible valences, providing that there are no specific obstacles in the environment. The lone carbon atom is therefore free to accommodate itself to different environments by reorientation on the basis of any of its alternate valences: +2, +4, or -4. If two phenyl radicals are brought together, the inter-atomic forces will tend to establish an equilibrium. A valence balance is a prerequisite for a force equilibrium, and the carbon atoms will therefore reorient themselves to balance the valences. There are two possible ways of accomplishing this result. Since carbon has only one negative valence, -4, one carbon atom takes this valence, and a second must assume the +4 valence in order to arrive at an equilibrium. In a direct combination of two phenyl groups these valence changes can be made in the two independent carbon atoms, without modifying the neutral groups in any way, and this is therefore the most probable structure in such compounds as biphenyl, -(CH)5 • C-C • (CH)5-. A similar balanced pair of positive and negative valence 3 nitrogen atoms may be introduced, in combination with the valence 4 carbon atoms, to form azobenzene, -(CH)5 • CNNC • (CH)5-. The alternative is to make both valence changes in the same phenyl group, giving the lone carbon atom the -4 valence and increasing the valence of the carbon atom in an adjacent neutral group from +1 to +4. The product is a ring in which there are four CH neutral groups, a CH group with a net valence of +3, and a single carbon atom with the -4 valence. By this means the phenyl group is changed from a univalent positive radical, C • (CH)5, to a univalent negative radical, (CH)4 • CH • C. Like the methyl group, which can act either as a positive radical CH3 with valence +1, or as a negative radical CH2 • H with valence -1, the phenyl group is able to combine with substances of either valence type, taking the negative valence in combination with a positive component, and the positive valence when combining with a negative atom or group.
It is negative in all of the phenyl compounds of the metal-organic class, and not only forms compounds such as phenyl copper, Cu-C • (CH)5-, and diphenyl zinc, Zn(-C • (CH)5-)2, but also combination phenyl-halide structures like phenyl tin trichloride, SnCl3-C • (CH)5-.
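The two orientations of the phenyl radical described above can be verified by adding the valence contributions of the six ring members. The following sketch is illustrative only; the group valences are those stated in the text.

    # Net valence of the phenyl radical in its two orientations.
    # Each list gives the valence contribution of the six ring members.

    # Positive phenyl: five neutral CH groups plus the lone carbon at +1.
    positive_phenyl = [0, 0, 0, 0, 0, +1]

    # Negative phenyl: four neutral CH groups, one CH group at net +3
    # (carbon reoriented to +4, hydrogen -1), and the lone carbon at -4.
    negative_phenyl = [0, 0, 0, 0, +3, -4]

    print(sum(positive_phenyl))   # +1, combines with negative components
    print(sum(negative_phenyl))   # -1, combines with positive components such as the metals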

In combination with the CH3 radical the phenyl group is positive. Either radical can take either valence, but the methyl group probabilities are nearly equal, while the positive valence is more probable in the phenyl group, since it involves no change in the benzene ring other than the removal of a hydrogen atom. The combination -(CH)5 • CCH3- is therefore toluene, with positive phenyl and negative methyl (carbon valence two), rather than phenyl methane, which would have negative phenyl and positive methyl (carbon valence four). This option is not available in combination with other hydrocarbon radicals, or with carbon itself, and in such compounds the phenyl radical replaces hydrogen, and is negative. An additional phenyl substitution in toluene, for example, reduces the CH3 radical to CH2. This group cannot have the -2 net valence that would be necessary for combination with positive phenyl radicals, and both of the phenyl groups assume the negative status in the resulting compound, diphenyl methane. The olefinic and acetylenic benzenes likewise have this type of structure in which the phenyl radical is negative. Styrene, for instance, is not vinyl benzene, -(CH)5 • C-CH2 • CH, as that combination would contain two positive components and no negative. It is phenyl ethylene, CH • CH2 • -C • (CH)5-, in which CH is positive and the phenyl group is negative. An interesting phenyl compound is phenyl acetylene, the conventional formula for which is C6H5 • C • CH. On the basis of our finding that hydrogen is negative to carbon, the hydrogen atom in the acetylene CH would have to be negative. But this is not true, as it can be replaced by sodium. It seems evident, then, that this is phenyl carbide, -(CH)5 • CC2H, a compound similar to butadiyne, which we have already identified as a carbide, C • CH • C2H. As noted previously, the relative negativity of carbon and hydrogen has no meaning with reference to the carbide radical, which has a net negative valence, and cannot be other than negative regardless of what element or group it combines with. According to the textbooks, the phenyl compound is identified as an acetylene because "it undergoes the typical acetylene reactions." But so does any other carbide. The acetylene lamp was a "carbide" lamp to the cyclists of an earlier day. Like the phenyl radical, the cyclic radicals can accommodate themselves to either the positive or negative position in the molecule. These radicals, too, are positive in the monosubstituted compounds. A methyl substitution produces hexahydro toluene, not cyclohexyl methane. But if there are two cyclic substitutions in a methyl group they are both negative, and dicyclohexyl methane is a reality. At this point it will be desirable to examine the effects of the various modifications of the ring structure on the cohesion of the molecule. We may take the benzene ring as the basic aromatic structure. Textbooks and monographs on the aromatic compounds typically contain a chapter, or at least a lengthy section, on the "benzene problem."69 The problem, in essence, is that all of the evidence derived from observation and experiment indicates that the interatomic forces and distances between any two of the six CH groups in the ring are identical, but no theory of the chemical "bond" has been able to account for the structure of the benzene molecule without utilizing two or more different kinds of bonds.
The currently favored "solution" of the problem is to sweep it under the rug by postulating that the structure alternates, or "resonates," between the different bond arrangements.

The development of the Reciprocal System of theory now shows that the forces between the groups in the benzene ring are, in fact, identical. As has been emphasized throughout the preceding discussion, however, the existence and nature of chemical compounds are not determined by the cohesive forces between the atoms of the different elements, but by the directional relationships which the atomic rotations must assume in order to permit elements with electric rotation in time to establish stable force equilibria in space. The findings of this theoretical development agree that the orienting effects which enable CH groups to combine into the benzene ring are of two different types, a short range effect and a long range effect, but they also reveal that the nature of the orienting influences has no bearing on the magnitude of the interatomic forces, and this explains why no difference in these forces can be detected experimentally. The forces between any two of the CH neutral groups in the ring are identical. Inasmuch as the orienting factors cause the atoms to align their rotations in certain specific relative directions, they are, in a sense, forces, but in order to distinguish them from the actual cohesive forces that hold the atoms, groups, and molecules together in the positions determined by these orienting factors we are using the term "effects" rather than "forces" in application to the orientation, even though this introduces an element of awkwardness into the presentation. The nature of these effects, as they apply to the benzene ring, can be illustrated by an orientation diagram of the kind previously introduced.

The pairs of CH groups, 1-2, 3-4, and 5-6, in the diagram, are held in the combining positions by the orienting effects of a directional character that are exerted by all magnetic groups or compounds. Alternate groups, 1-3, 2-4, etc., are within unit distance, and therefore within the effective range of these orienting effects. The primary effect of group 1, for instance, is directed toward group 2, but group 3 is also within unit distance, and consequently there is a long range 1-3 secondary effect as well as a short range 1-2 primary effect. Because of the directional nature of these orienting effects there is no 2-3 primary effect, but the pairs 1-2 and 3-4 are held in position by the 1-3 and 4-2 secondary effects. If we replace one of the hydrogen atoms with some negative substituent, the orientation situation is unchanged. The new neutral group, or that portion of it which is within the range of the ring forces if the group is a long one, takes over the functions of the CH group without alteration. However, removal of a hydrogen atom and conversion of the benzene molecule into a positive phenyl radical changes the orientation pattern to

The secondary effect 3-5 has now been eliminated, as the lone carbon atom does not have the free electric rotation characteristic of the magnetic groups or compounds, but the

remaining orientation effects are still adequate to hold the structure together. The further valence change that is necessary if the phenyl radical is to assume a negative valence similarly eliminates the 4-2 secondary effect, as group 4 is no longer magnetic. However, the two carbon atoms and one hydrogen atom combine into a radical CCH, with a net valence of - 1. This radical has no orienting effect on its neighbors, but the adjoining magnetic neutral groups do exert their effect on it. The orientation pattern is

As previously explained, the carbon atoms in the CCH combination have valences +4 and -4. If we remove the hydrogen atom from this group we obtain a ring in which four CH neutral groups are combined with two individual carbon atoms. This structure is neutral and is capable of existing as an independent compound, but, like the methylene molecule, it does not actually do so, because it has a strong tendency to form a double ring. The four CH groups which are attached to the C-C combination can be duplicated on the opposite side of the C-C line of action, forming another similar ring which utilizes the same pair of carbon atoms as part of its ring structure. The fact that the effects originating from the free electric rotations are exerted on the carbon atoms by the CH groups on one side does not in any way interfere with the existence of similar effects on the other side. The orientation relations in the second ring are identical with those of the first. Neither ring can now recapture a hydrogen atom and become a phenyl radical because the presence of the other ring prevents the approach of the free hydrogen atoms. The double ring compound therefore has a high degree of stability. This compound is naphthalene, -(CH)4 • C=C • (CH)4-, a condensed ring aromatic hydrocarbon. When used in the formula of a compound in this work, the double mark between two carbon atoms is a symbol indicating the condensed ring type of structure in which the rings are joined at two positions rather than at a single position as in compounds such as biphenyl. It has no implications of the kind associated with the "double bonds" of the electronic theory. A third ring added in the same manner produces anthracene. Further similar additions in line result in a series of compounds: naphthacene, pentacene, and so on. But it is not necessary that the additions be made in line, and each of these compounds is accompanied by others which have the same composition, but different structures. For instance, the four ring compounds of the naphthacene composition, C18H12, include chrysene, naphthanthracene, 3,4-benzophenanthrene, and triphenylene. Pyrene has the same four rings, but a more compact structure, and a composition C16H10. The structural behavior of the condensed rings is essentially the same as that of the single benzene rings. They join to form compounds such as binaphthyl and bianthryl; they act as radicals (naphthyl, anthryl, phenanthryl, etc.); they attach more rings by substitution for hydrogen to produce compounds such as triphenyl anthracene; and they form a great variety of compounds by utilizing the other negative substituents available to the aromatic rings. Many interesting and important compounds are included in this category,

but no new structural features are involved, and they are therefore outside the scope of the present discussion. The two CH groups of the middle ring of the anthracene structure are not necessary for stability, and they can be eliminated. The resulting compound is biphenylene, -(CH)4 • CC=CC • (CH)4-. A structure with only one CH group in the middle ring, intermediate between anthracene and biphenylene, is ruled out by the low probability of the continued existence of a single CH group, but a similar compound can be formed by putting a CH2 group in the intermediate position, as the CH2 groups are not restricted to pairs. The new compound is fluorene. Another CH2 group in the opposite position restores the anthracene structure with a cyclic middle ring. This compound is dihydroanthracene. As previously mentioned, a ring with even one CH2 group deviates substantially from the typical aromatic behavior, and any such ring is classified with the cyclic structures, but this effect is confined to the specific ring, and any adjacent aromatic rings retain their aromatic character. Such compounds as fluorene and dihydroanthracene should therefore be regarded as combination cyclic-aromatic structures. These compounds occur in large numbers and in great variety, but the principles of combination are the same as in the purely aromatic compounds, and do not need to be repeated. Since the cyclic compounds are less stable than the corresponding aromatics, the combination structures do not cover as large a field as the aromatic compounds, but a very stable structure such as that of naphthalene does extend through the entire substitution range. Beginning with the purely aromatic compound, successive pairs of hydrogen atoms can be added all the way to the purely cyclic compound, decahydronaphthalene. The reduction in the variety of combination structures due to the fact that the cohesive force in the cyclic ring is weaker than that in the aromatic ring is offset to some extent by the ability of the CH2 groups to form rings of various sizes. 1,2,3,4-tetrahydronaphthalene, for instance, can drop one of its CH2 groups, forming indane, -(CH)4 • C=C • (CH2)3-. Because of the CH2 flexibility, the cyclic ring in this compound is still able to close even if two of the remaining CH2 groups are replaced by CH. This produces indene, -(CH)4 • C=C • (CH)2 • CH2-. Polynuclear cyclic compounds are formed in the same manner as the polynuclear aromatic and combination structures, but not in as great a number or variety. Corresponding to biphenyl and its substitution products are dicyclopentyl, dicyclohexyl, etc., and their derivatives; triphenyl methane has a cyclic equivalent in tricyclohexyl methane; the cyclic analog of naphthalene is bicyclodecane, and so on. The last major division of the ring compounds is the heterocyclic class, in which are placed all compounds in which any of the carbon atoms in the cyclic or aromatic rings are replaced by other elements. The principal reason for setting up a special classification for these compounds is that most of the substitutions of other elements for carbon require valence changes of one kind or another, unlike the substitutions for hydrogen, which normally involve no valence modifications, except in those cases where two valence one hydrogen atoms are replaced by one valence two substituent.

Some of the heterocyclic substitutions are of this two for one character, and in those cases the normal cyclic or aromatic structure is not altered. For example, if we begin with quinone, -(CH)2 • CO • (CH)2 • CO-, an aromatic carbon compound, and replace two of the CH groups with NH neutral groups we obtain uracil, -NH • CO • NH • CH • CH • CO-. One more similar pair replacement removes the last of the hydrocarbon groups and results in urazine, -NH • CO • NH • NH • CO • NH-. In the compound cyclohexane hexone previously mentioned all of the hydrogen has been replaced, and in borazole, -BH • NH • BH • NH • BH • NH-, all carbon is eliminated. All of these heterocyclic compounds are composed entirely of two-member magnetic neutral groups, and therefore have the benzene structure: six groups arranged in a rigid aromatic ring. More commonly, however, the heterocyclic substituent is a single atom or a radical, and such a substitution requires a valence change in some other part of the ring to maintain the valence equilibrium. Substitutions therefore often take place in balanced pairs. In pyrone, -(CH)2 • CO • (CH)2 • O-, for example, the CO combination is not a neutral group, but a radical with valence +2 which balances the -2 valence of the oxygen atom. The CH2 radical, in which carbon also has its normal valence +4, has the same function in pyran, -(CH)2 • CH2 • (CH)2 • O-. Substitution of two nitrogen atoms with the balanced valences of +3 and -3 in the aromatic ring produces a diazine. If the nitrogen atoms are in the 1,2 positions the compound is pyridazine, -N • N • (CH)4-. The properties of the 1,3 and 1,4 compounds are sufficiently different from those of pyridazine that they have been given distinctive names, pyrimidine and pyrazine, respectively. Since the positive and negative radicals in a ring have no fixed positions similar to the two ends of the chains, it is not possible to indicate their status by their positions as we do in the formulas we are using for the chain compounds. Some appropriate method of identification probably should be devised in order to make the formula as representative of the actual structure as possible, but this is not necessary for the purposes of the present work, and can be left for later consideration. The following orientation diagrams for pyrone and pyridazine are typical of those for heterocyclic compounds with single atom or radical substitutions:

If the valence equilibrium is not achieved in this manner by means of a pair of substitutions, a valence change in one of the neutral groups is necessary. A single nitrogen atom substituted into the ring requires a +3 valence elsewhere in the structure to counterbalance the negative nitrogen valence. This is readily accomplished by a shift of one of the carbon valences to +4. The reconstructed ring then consists of a nitrogen atom, valence -3, a CH radical, valence +3, and four CH neutral groups. This compound is

pyridine, -(CH)5 • N-. Hydrogenation can be carried out by steps through intermediate compounds all the way to the corresponding cyclic structure, piperidine, -(CH2)5 • NH-. When oxygen, or another valence two negative component, is introduced into the aromatic ring the necessary valence balance may be attained by a simultaneous replacement of one of the CH neutral groups by a CH2 radical, as already noted in the case of pyran. Or the required balance can be achieved without introduction of additional hydrogen if the carbon valences in two of the CH groups are stepped up to the +2 level (the primary magnetic valence), forming two CH radicals, each with valence +1. This leaves an unstable odd number of CH neutral groups in the six-member ring, but there is sufficient flexibility in the structure to enable a ring closure on a five-member basis, and stability is restored by ejecting a neutral group. The resulting compound is furan, -(CH)4 • O-, a five-member ring with one oxygen atom, two CH neutral groups, and two CH valence one positive radicals. Substituting sulfur instead of oxygen produces thiophene, -(CH)4 • S-, while inserting the negative radical NH into the same position produces pyrrole, -(CH)4 • NH-. Each of these furan type compounds also exists in the cyclic dihydro and tetrahydro forms. The furan orientation pattern is

The essential feature of all of these five-member rings of the furan class is a valence equilibrium in which three of the five components participate, the two remaining components being the neutral groups that furnish the ring-forming capability. In furan the equilibrium combination is C+1-O-2-C+1. Formation of a similar combination with nitrogen in the negative position requires that some element or radical positive to nitrogen take the positive position, and in the heterocyclic division nitrogen itself commonly accepts this role. The most probable valence under these conditions is +3, as in hydrazine. The two nitrogen valences, +3 and -3, are then in equilibrium, and in this case the fifth component of the five-member ring must be a neutral group. Since it is a single group, it is the cyclic group CH2, and the neutral trio is N+3-N-3-CH2°. The compound is isopyrazole, -N • CH • CH • CH2 • N-. An alternate group arrangement produces isoimidazole, -N • CH2 • N • CH • CH-. A variation of this structure moves a hydrogen atom from the CH2 group to the positive nitrogen, which changes the neutral combination to NH+2-N-3-CH+1. The compounds formed on this basis are pyrazole, -N • (CH)3 • NH-, and imidazole, -N • CH • NH • CH • CH-. From these basic heterocyclic types a great variety of condensed systems such as coumarone (benzofuran), indole (benzopyrrole), quinoline (benzopyridine), etc., can be formed by combination with other rings. Both the single rings and the condensed systems are then open to further enlargement by all of the processes of addition and substitution previously discussed, and a very substantial proportion of the known organic compounds belong to this class. From a structural standpoint, however, the basic principles involved in the formation of all of these compounds are those that have been covered in the preceding discussion.
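The same zero-sum check used for the chain compounds applies to the heterocyclic equilibria described in the last few paragraphs. The sketch below is illustrative only; the component valences are those assigned in the text, and neutral groups contribute zero.

    # Valence equilibrium in some of the heterocyclic rings discussed above.
    # Each entry lists (component, valence); neutral groups contribute 0.

    rings = {
        # pyrone: CO radical +2 balances O -2; the four CH groups are neutral
        "pyrone":   [("CO", +2), ("O", -2)] + [("CH", 0)] * 4,
        # pyridine: one CH radical at +3 balances N -3; four CH groups neutral
        "pyridine": [("CH", +3), ("N", -3)] + [("CH", 0)] * 4,
        # furan: two CH radicals at +1 balance O -2; two CH groups neutral
        "furan":    [("CH", +1), ("CH", +1), ("O", -2)] + [("CH", 0)] * 2,
        # pyrazole: NH +2 and CH +1 balance N -3; two CH groups neutral
        "pyrazole": [("NH", +2), ("CH", +1), ("N", -3)] + [("CH", 0)] * 2,
    }

    for name, parts in rings.items():
        assert sum(valence for _, valence in parts) == 0, name
        print(f"{name}: balanced")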

In the foregoing pages we have encountered several kinds of isomerism, the existence of different compounds with the same composition. Some, such as the cyanides and isocyanides, differ only in valence; some, such as the straight chain and branched paraffins, differ in the position of the neutral groups; and some, such as the aldehydes and the ketones, differ in the assignment of the atoms of the constituent elements to the structural groups. Most of these isomers that we have examined thus far are distinct stable compounds. There are also some isomeric systems in which the two forms of a substance convert so readily from one to the other that they establish an equilibrium which varies in accordance with the conditions to which the compound is subject. This form of isomerism is known as tautomerism. One of the familiar examples of tautomerism is that between the "keto" and "enol" forms of certain substances. Ethyl acetoacetate, COCH3 • CH2 • CO • (O • CH2 • CH3), is the keto form of a compound that also exists in the enol form as the ethyl ester of hydroxycrotonic acid, COH • CHCH3 • CO • (O • CH2 • CH3). The compound freely changes from one form to the other to meet changing physical and chemical conditions. This is another example of counterbalancing carbon and hydrogen valence changes, and it is an indication of the ease with which such changes can be made. In the radical COCH3 the carbon valence is +4, and all hydrogen is negative. The transition to the enol form involves a drop in the carbon valence to +2, and one hydrogen atom shifts from -1 to +1 to maintain the balance. The CH2 group in the radical is then superfluous, and it moves to the adjacent neutral group. The remainder of the molecule is unchanged. The development of the Reciprocal System of theory has not yet been extended to a study of tautomerism. Nor has it been applied to those kinds of isomerism which depend on the geometrical arrangement of the component parts of the molecules, such as optical isomerism. These aspects of the general subject of molecular structure will therefore have to be left for later treatment. This chapter is the last of the four that have been devoted to an examination of the structure of chemical compounds. In closing the discussion it will be appropriate to point out just how the presentation in these chapters fits into the general plan of the work, as defined in Chapter 2. The usual discussion of molecular structure, as we find it in the textbooks, starts with the empirical observation that certain chemical compounds (sodium chloride, benzene, water, ethyl alcohol, etc.) exist, and have certain properties, including different molecular structures. The theoretical treatment then attempts to devise plausible explanations for the existence of these observed compounds, their structures, and other properties. This present work, on the other hand, is entirely deductive. By developing the necessary consequences of the fundamental postulates of the Reciprocal System we find that in a universe of motion matter must exist; it must exist in the form of a series of elements; and those elements must have the capability of combining in certain specific ways to form chemical compounds. In this and the preceding chapters, the most important of the possible types of molecular structures have been derived from theory, and specific compounds have been characterized by composition and structure. The second objective of the work is to identify these theoretical combinations with the observed chemical compounds.
For example, we deduce purely from theory that there must exist a compound in the form of a chain of three groups of atoms, in which the first

group contains three atoms of element number one and one atom of element number six, and has a net group combining power, or valence, of +1. The second group has two atoms of element number one and one of number six, and is neutral; that is, its net valence is zero. The third group has one atom of element number one and one of element number eight, and a valence of -1. This theoretical composition and structure are in full agreement with the composition of the observed compound known as ethyl alcohol, and with the structure of that compound as deduced from physical and chemical observation and measurement. We are thus entitled to conclude that ethyl alcohol is the chemical compound existing in the physical universe that corresponds to the compound which must exist in the theoretical universe of the Reciprocal System. In other words, we have identified the theoretical compound as ethyl alcohol. The great majority of the identifications cited in the preceding pages are unequivocal, almost self-evident, we may say, and this agreement establishes the validity of both the theoretical development and the empirical determination of the molecular structures. Where there are discrepancies, some of them, such as the one involved in the structure of ethylene, are quite easily explained. However, as the size and complexity of the molecules increase, the number and variety of the possible modifications of the theoretical structure also increase, in even greater proportion, and the observable differences between the various modifications decrease. The validity of the identifications is therefore less certain than in the case of the smaller and simpler molecules, but this does not mean that there is any additional uncertainty with respect to the existence of the more complex theoretical compounds. It merely means that the available empirical information is not adequate to permit a definite decision as to which of the observed compounds corresponds to a particular theoretical structure. It can be expected, therefore, that further investigation will clear up most of these questions. The discussion of chemical compounds in this and the preceding three chapters completes the description of the primary physical entities, the actors in the drama of the physical universe. In the next volume we will begin applying the theoretical findings to an examination of the drama itself: the action in which these entities are involved.
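As a concrete restatement of the ethyl alcohol identification described above, the three theoretical groups can be rendered in conventional symbols by a few lines of Python. This is purely illustrative; element number one is hydrogen, number six is carbon, and number eight is oxygen, as stated in the text.

    # Translating the theoretical description into a conventional formula.
    SYMBOL_ORDER = [(6, "C"), (8, "O"), (1, "H")]   # order chosen to match the text's notation

    def group_symbol(atoms):
        """Render a group given as {atomic number: count}, e.g. {6: 1, 1: 3} -> 'CH3'."""
        return "".join(f"{sym}{atoms[z] if atoms[z] > 1 else ''}"
                       for z, sym in SYMBOL_ORDER if z in atoms)

    # The three theoretical groups: (atom counts, net valence)
    groups = [
        ({6: 1, 1: 3}, +1),   # one carbon, three hydrogens: the positive radical
        ({6: 1, 1: 2},  0),   # one carbon, two hydrogens: the neutral group
        ({8: 1, 1: 1}, -1),   # one oxygen, one hydrogen: the negative radical
    ]

    assert sum(valence for _, valence in groups) == 0   # valence equilibrium
    print(" . ".join(group_symbol(atoms) for atoms, _ in groups))   # CH3 . CH2 . OH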

Nothing but Motion Dewey B. Larson
References
1. Butterfield, Herbert, The Origins of Modern Science, Revised Edition, The Free Press, New York, 1965, page 18.
2. Schlegel, Richard, Completeness in Science, Appleton-Century-Crofts, New York, 1967, page 152.
3. Burbidge and Burbidge, Quasi-Stellar Objects, W. H. Freeman & Co., San Francisco, 1967, page vii.
4. New Scientist, Feb. 13, 1969.

5. Dicke, R. H., American Scientist, March 1959.
6. Wooldridge, Dean E., The Machinery of Life, McGraw-Hill Book Co., New York, 1966, page 4.
7. Conant, James B., Modern Science and Modern Man, Columbia University Press, 1952, page 47.
8. Feynman, Richard, The Character of Physical Law, The M.I.T. Press, 1967, page 145.
9. Bridgman, P. W., The Nature of Physical Theory, Princeton University Press, 1936, page 134.
10. Einstein and Infeld, The Evolution of Physics, Simon & Schuster, New York, 1938, page 159.
11. Dingle, Herbert, A Century of Science, Hutchinson's Publications, London, 1951, page 315.
12. Feynman, Richard, op. cit., page 156.
13. Margenau, Henry, Quantum Theory, Vol. 1, edited by D. R. Bates, Academic Press, New York, 1961, page 6.
14. Braithwaite, R. B., Scientific Explanation, Cambridge University Press, 1953, page 22.
15. Feynman, Richard, op. cit., page 30.
16. Einstein, Albert, The Structure of Scientific Thought, E. H. Madden, editor, Houghton Mifflin Co., Boston, 1960, page 82.
17. Dirac, P. A. M., Scientific American, May 1963.
18. Butterfield, Herbert, op. cit., page 13.
19. Hobbes, Thomas, The Metaphysical System of Hobbes, M. W. Calkins, editor, The Open Court Publishing Co., La Salle, Ill., 1948, page 22.
20. North, J. D., The Measure of the Universe, The Clarendon Press, Oxford, 1965, page 367.
21. Einstein, Albert, Relativity, Henry Holt & Co., New York, 1921, page 74.
22. Hocking, William E., Preface to Philosophy, The Macmillan Co., New York, 1946, page 425.
23. Tolman, Richard C., The Theory of the Relativity of Motion, University of California Press, 1917, page 27.

24. Wigner, Eugene P., Symmetries and Reflections, Indiana University Press, 1967, page 30.
25. Jeans, Sir James, The Universe Around Us, Cambridge University Press, 1947, page 113.
26. Heisenberg, Werner, On Modern Physics, Clarkson N. Potter, New York, 1961, page 16.
27. Heisenberg, Werner, Physics Today, March 1976.
28. Schlegel, Richard, op. cit., page 18.
29. Gold and Hoyle, Paris Symposium on Radio Astronomy, paper 104, Stanford University Press, 1959.
30. Darrow, Karl K., Scientific Monthly, March 1942.
31. Finlay-Freundlich, E., Monthly Notices of the Royal Astronomical Society, 105, 237.
32. Heisenberg, Werner, Philosophic Problems of Nuclear Science, Pantheon Books, New York, 1952, page 38.
33. Einstein, Albert, Albert Einstein: Philosopher-Scientist, Paul Schilpp, editor, The Library of Living Philosophers, Evanston, Ill., 1949, page 67.
34. Alfven, Hannes, Worlds-Antiworlds, W. H. Freeman & Co., San Francisco, 1966, page 92.
35. Hawkins, David, The Language of Nature, W. H. Freeman & Co., San Francisco, 1964, page 183.
36. Lindsay, R. B., Physics Today, Dec. 1967.
37. Einstein, Albert, Sidelights on Relativity, E. P. Dutton & Co., New York, 1922, page 23.
38. Einstein and Infeld, op. cit., page 185.
39. Einstein, Albert, Relativity, op. cit., page 126.
40. Heisenberg, Werner, Physics and Philosophy, Harper & Bros., New York, 1958, page 129.
41. Einstein, Albert, Foreword to Concepts of Space, by Max Jammer, Harvard University Press, 1954.
42. Jeans, Sir James, op. cit., page 78.
43. Schlegel, Richard, Time and the Physical World, Michigan State University Press, 1961, page 160.

44. Whitrow, G. J., The Natural Philosophy of Time, Thomas Nelson & Sons, London, 1961, page 218.
45. Moller, C., The Theory of Relativity, The Clarendon Press, Oxford, 1952, page 49.
46. Tolman, Richard C., Relativity, Thermodynamics and Cosmology, The Clarendon Press, Oxford, 1934, page 195.
47. Science, July 14, 1972.
48. Heisenberg, Werner, Philosophic Problems of Nuclear Science, op. cit., page 12.
49. Thomson, Sir George, The Inspiration of Science, Oxford University Press, London, 1961, page 66.
50. Millikan, Robert A., Time and Its Mysteries, Collier Books, New York, 1962, page 24.
51. Jeans, Sir James, Physics and Philosophy, The Macmillan Co., New York, 1945, page 190.
52. Toulmin and Goodfield, The Architecture of Matter, Harper & Row, New York, 1962, page 298.
53. Bridgman, P. W., A Sophisticate's Primer of Relativity, Wesleyan University Press, 1962, page 10.
54. Feyerabend, P. K., Philosophy of Science, The Delaware Seminar, Vol. 2 (1962-1963), Bernard Baumrin, editor, Interscience Publishers, New York, 1963, page 17.
55. Einstein and Infeld, op. cit., page 195.
56. Bridgman, P. W., Reflections of a Physicist, Philosophical Library, New York, 1955, page 186.
57. Will, Clifford M., Scientific American, Nov. 1974.
58. Feynman, Richard, op. cit., pages 156, 166.
59. Cohen, Crowe and Du Mond, The Fundamental Constants of Physics, Interscience Publishers, New York, 1957.
60. Alfven, Hannes, Scientific American, Apr. 1967.
61. Boorse and Motz, The World of the Atom, Vol. 2, Basic Books, New York, 1966, page 1457.
62. Hooper and Scharff, The Cosmic Radiation, John Wiley & Sons, New York, 1958, page 57.
63. Swann, W. F. G., Journal of the Franklin Institute, May 1962.

64. Davis, Leverett, Jr., Nuovo Cimento Suppl., 10th Ser., Vol. 13, No. 1, (1959).
65. Donahue, T. M., Physical Review, 2nd Ser., Vol. 84, No. 5, (1951), page 972.
66. Satz, Ronald W., Reciprocity, May 1975.
67. Weisskopf, V. F., Comments on Nuclear and Particle Physics, Jan.-Feb. 1969.
68. Pauling, Linus, The Nature of the Chemical Bond, Cornell University Press, 1960, page 217.
69. See, for instance, Badger, G. M., The Structures and Reactions of the Aromatic Compounds, Cambridge University Press, 1954, Chapter 1.
70. Weisskopf, V. F., Lectures in Theoretical Physics, Vol. III, Britten, J. Downs, and B. Downs, editors, Interscience Publishers, New York, 1961, page 80.

DEWEY B. LARSON: THE COLLECTED WORKS

Dewey B. Larson (1898-1990) was an American engineer and the originator of the Reciprocal System of Theory, a comprehensive theoretical framework capable of explaining all physical phenomena from subatomic particles to galactic clusters. In this general physical theory space and time are simply the two reciprocal aspects of the sole constituent of the universe–motion. For more background information on the origin of Larson’s discoveries, see Interview with D. B. Larson taped at Salt Lake City in 1984. This site covers the entire scope of Larson’s scientific writings, including his exploration of economics and metaphysics.

Physical Science
The Structure of the Physical Universe: The original groundbreaking publication wherein the Reciprocal System of Physical Theory was presented for the first time.
The Case Against the Nuclear Atom: "A rude and outspoken book."
Beyond Newton: "...Recommended to anyone who thinks the subject of gravitation and general relativity was opened and closed by Einstein."
New Light on Space and Time: A bird's eye view of the theory and its ramifications.
The Neglected Facts of Science: Explores the implications for physical science of the observed existence of scalar motion.
Quasars and Pulsars: Explains the most violent phenomena in the universe.
Nothing but Motion: The first volume of the revised edition of The Structure of the Physical Universe, developing the basic principles and relations.
Basic Properties of Matter: The second volume of the revised edition of The Structure of the Physical Universe, applying the theory to the structure and behavior of matter, electricity and magnetism.

The Universe of Motion: The third volume of the revised edition of The Structure of the Physical Universe, applying the theory to astronomy.
The Liquid State Papers: A series of privately circulated papers on the liquid state of matter.
The Dewey B. Larson Correspondence: Larson's scientific correspondence, providing many informative sidelights on the development of the theory and the personality of its author.
The Dewey B. Larson Lectures: Transcripts and digitized recordings of Larson's lectures.
The Collected Essays of Dewey B. Larson: Larson's articles in Reciprocity and other publications, as well as unpublished essays.

Metaphysics
Beyond Space and Time: A scientific excursion into the largely unexplored territory of metaphysics.

Economic Science
The Road to Full Employment: The scientific answer to the number one economic problem.
The Road to Permanent Prosperity: A theoretical explanation of the business cycle and the means to overcome it.

From; http://www.reciprocalsystem.com/bpm/index.htm

Basic Properties of Matter
DEWEY B. LARSON
Volume II of a revised and enlarged edition of THE STRUCTURE OF THE PHYSICAL UNIVERSE

Preface 1. Solid Cohesion 2. Inter–atomic Distances 3. Distances in Compounds 4. Compressibility 5. Heat 6. Specific Heat Patterns 7. Temperature Relations 8. Thermal Expansion 9. Electric Currents 10. Electrical Resistance 11. Thermoelectric Properties 12. Scalar Motion 13. Electric Charges 14. The Basic Forces 15. Electrical Storage 16. Induction of Charges 17. Ionization 18. The Retreat From Reality 19. Magnetostatics 20. Magnetic Quantities and Units 21. Electromagnetism 22. Charges in Motion 23. Magnetic Materials 24. Isotopes 25. Radioactivity 26. Atom Building 27. Mass and Energy References

Preface
THIS volume is the second in a series in which I am undertaking to develop the consequences that necessarily follow if it is postulated that the physical universe is composed entirely of motion. The characteristics of the basic motion were defined in Nothing But Motion, the first volume of the series, in the form of seven assumptions as to the nature and interrelation of space and time. In the subsequent development, the necessary consequences of these assumptions have been derived by logical and mathematical processes without the introduction of any supplementary or subsidiary assumptions, and without introducing anything from experience. Coincidentally with this theoretical development, it has been shown that the conclusions thus reached are consistent with the relevant data from observation and experiment, wherever a comparison can be made. This justifies the assertion that, to the extent to which the

development has been carried, the theoretical results constitute a true and accurate picture of the actual physical universe. In a theoretical development of this nature, starting from a postulate as to the fundamental nature of the universe, the first results of the deductive process necessarily take the form of conclusions of a basic character: the structure of matter, the nature of electromagnetic radiation, etc. Inasmuch as these are items that cannot be apprehended directly, it has been possible for previous investigators to formulate theories of an ad hoc nature in each individual field to fit the limited, and mainly indirect, information that is available. The best that a correct theory can do in any one of these individual areas is to arrive at results that also agree with the available empirical information. It is not possible, therefore, to grasp the full significance of the new development unless it is recognized that the new theoretical system, the Reciprocal System, as we call it, is one of general application, one that reaches all of its conclusions in all physical fields by deduction from the same set of basic premises. Experience has indicated that it is difficult for most individuals to get a broad enough view of the fundamentals of the many different branches of physical science for a full appreciation of the unitary character of this new system. However, as the deductive development is continued, it gradually extends down into the more familiar areas, where the empirical information is more readily available, and less subject to arbitrary adjustment or interpretation to fit the prevailing theories. Thus the farther the development of this new general physical theory is carried, the more evident its validity becomes. This is particularly true where, as in the subject matter treated in this present volume, the theoretical deductions provide both explanations and numerical values in areas where neither is available from conventional sources. There has been an interval of eight years between the publication of Volume I and the first complete edition of this second volume in the series. Inasmuch as the investigation whose results are here being reported is an ongoing activity, a great deal of new information has been accumulated in the meantime. Some of this extends or clarifies portions of the subject matter of the first volume, and since the new findings have been taken into account in dealing with the topics covered in this volume, it has been necessary to discuss the relevant aspects of these findings in this volume, even though some of them may seem out of place. If, and when, a revision of the first volume is undertaken, this material will be transferred to Volume I. The first 11 chapters of this volume were published in the form of reproductions of the manuscript pages in 1980. Publication of the first complete edition has been made possible through the efforts of a group of members of the International Society of Unified Science, including Rainer Huck, who handled the financing, Phil Porter, who arranged for the printing, Eden Muir, who prepared the illustrations, and Jan Sammer, who was in charge of the project.
D. B. Larson
December 1987

CHAPTER 1

Solid Cohesion
The consequences of the reversal of direction (in the context of a fixed reference system) that takes place at unit distance were explained in a general way in Chapter 8 of Volume I. As brought out there, the most significant of these consequences is that establishment of an equilibrium between gravitation and the progression of the natural reference system becomes possible. There is a location outside unit distance where the magnitudes of these two motions are equal: the distance that we are calling the gravitational limit. But this point of equality is not a point of equilibrium. On the contrary, it is a point of instability. If there is even a slight unbalance of forces one way or the other, the resulting motion accentuates the unbalance. A small inward movement, for instance, strengthens the inward force of gravitation, and thereby causes still further movement in the same direction. Similarly, if a small outward movement occurs, this weakens the gravitational force and causes further outward movement. Thus, even though the inward and outward motions are equal at the gravitational limit, this is actually nothing but a point of demarcation between inward and outward motion. It is not a point of equilibrium. In the region inside unit distance, on the contrary, the effect of any change in position opposes the unbalanced forces that produced the change. If there is an excess gravitational force, an outward motion occurs which weakens gravitation and eliminates the unbalance. If the gravitational force is not adequate to maintain a balance, an inward motion takes place. This increases the gravitational effect and restores the equilibrium. Unless there is some intervention by external forces, atoms move gravitationally until they eventually come within unit distance of other atoms. Equilibrium is then established at positions within this inside region: the time region, as we have called it. The condition in which a number of atoms occupy equilibrium positions of this kind in an aggregate is known as the solid state of matter. The distance between such positions is the inter-atomic distance, a distinctive feature of each particular material substance that we will examine in detail in the following chapter. Displacement of the equilibrium in either direction can be accomplished only by the application of a force of some kind, and a solid structure resists either an inward force, a compression, or an outward force, a tension. To the extent that resistance to tension operates to prevent separation of the atoms of a solid it is commonly known as the force of cohesion. The conclusions with respect to the nature and origin of atomic cohesion that have been reached in this work replace a familiar theory, based on altogether different premises. This previously accepted hypothesis, the electrical theory of matter, has already had some consideration in the preceding volume, but since the new explanation of the nature of the cohesive force is basic to the present development, some more extensive comparisons of the two conflicting viewpoints will be in order before we proceed to develop the new theoretical structure in greater detail.

The electrical, or electronic, theory postulates that the atoms of solid matter are electrically charged, and that their cohesion is due to the attraction between unlike charges. The principal support for the theory comes from the behavior of ionic compounds in solution. A certain proportion of the molecules of such compounds split up, or dissociate, into oppositely charged components which are then called ions. The presence of the charges can be explained in either of two ways: (1) the charges were present, but undetectable, in the undissolved material, or (2) they were created in the solution process. The adherents of the electrical theory base it on explanation (1). At the time this explanation was originally formulated, electric charges were thought to be relatively permanent entities, and the conclusion with respect to their role in the solution process was therefore quite in keeping with contemporary scientific thought. In the meantime, however, it has been found that electric charges are easily created and easily destroyed, and are no more than a transient feature of matter. This cuts the ground from under the main support of the electrical theory, but the theory has persisted because of the lack of any available alternative. Obviously some kind of a force must hold the solid aggregate together. Outside of the forces known to result directly from observable motion, there are only three kinds of force of which there has heretofore been any definite observational knowledge: gravitational, electric, and magnetic. The so-called ―forces‖ which play various roles in present-day atomic physics are purely hypothetical. Of the three known forces, the only one that appears to be strong enough to account for the cohesion of solids is the electric force. The general tendency in scientific circles has therefore been to take the stand that cohesion must result from the operation of electrical forces, notwithstanding the lack of any corroboration of the conclusions reached on the basis of the solution process, and the existence of strong evidence against the validity of those conclusions. One of the serious objections to this electrical theory of cohesion is that it is not actually a theory, but a patchwork collection of theories. A number of different explanations are advanced for what is, to all appearances, the same problem. In its basic form, the theory is applicable only to a restricted class of substances, the so-called ―ionic‖ compounds. But the great majority of compounds are ―non-ionic.‖ Where the hypothetical ions are clearly non-existent, an electrical force between ions cannot be called upon to explain the cohesion, so, as one of the general chemistry tests on the author‘s shelves puts it, ―A different theory was required to account for the formation of these compounds.‖ But this ―different theory,‖ based on the weird concept of electrons ―shared‖ by the interacting atoms, is still not adequate to deal with all of the non-ionic compounds, and a variety of additional explanations are called upon to fill the gaps. In current chemical parlance the necessity of admitting that each of these different explanations is actually another theory of cohesion is avoided by calling them different types of ―bonds‖ between the atoms. The hypothetical bonds are then described in terms of interaction of electrons, so that the theories are united in language, even though widely divergent in content. As noted in Chapter19, Vol. 
I, a half dozen or so different types of bonds have been postulated, together with ―hybrid‖ bonds which combine features of the general types.

Even with all of this latitude for additional assumptions and hypotheses, some substances, notably the metals, cannot be accommodated within the theory by any expedient thus far devised. The metals admittedly do not contain oppositely charged components, if they contain any charged components at all, yet they are subject to cohesive forces that are indistinguishable from those of the ionic compounds. As one prominent physicist, V. F. Weisskopf, found it necessary to admit in the course of a lecture, ―I must warn you I do not understand why metals hold together.‖ Weisskopf points out that scientists cannot even agree as to the manner in which the theory should be applied. Physicists give us one answer, he says, chemists another, but ―neither of these answers is adequate to explain what a chemical bond is.‖1 This is a significant point. The fact that the cohesion of metals is clearly due to something other than the attraction between unlike charges logically leads to a rather strong presumption that atomic cohesion in general is non-electrical. As long as some nonelectrical explanation of the cohesion of metals has to be found, it is reasonable to expect that this explanation will be found applicable to other substances as well. Experience in dealing with cohesion of metals thus definitely foreshadows the kind of conclusions that have been reached in the development of the Reciprocal System of theory. It should also be noted that the electrical theory is wholly ad hoc. Aside from what little support it can derive from extrapolation to the solid state of the conditions existing in solutions, there is no independent confirmation of any of the principal assumptions of the theory. No observational indication of the existence of electrical charges in ordinary matter can be detected, even in the most strongly ionic compounds. The existence of electrons as constituents of atoms is purely hypothetical. The assumption that the reluctance of the inert gases to enter into chemical compounds is an indication that their structure is a particularly stable one is wholly gratuitous. And even the originators of the idea of ―sharing‖ electrons make no attempt to provide any meaningful explanation of what this means, or how it could be accomplished, if there actually were any electrons in the atomic structure. These are the assumptions on which the theory is based, and they are entirely without empirical support. Nor is there any solid basis for what little theoretical foundation the theory may claim, inasmuch as its theoretical ties are to the nuclear theory of atomic structure, which is itself entirely ad hoc. But these points, serious as they are, can only be regarded as supplementary evidence, as there is one fatal weakness of the electrical theory that would demolish it even if nothing else of an adverse nature were known. This is our knowledge of the behavior of positive and negative electric charges when they are brought into close proximity. Such charges do not establish an equilibrium of the kind postulated in the theory; they destroy each other. There is no evidence which would indicate that the result of such contact is any different in a solid aggregate, nor is there even any plausible theory as to why any different outcome could be expected, or how it could be accomplished. 
It is worth noting in this connection that while current physical theory portrays positive and negative charges as existing in a state of congenial companionship in the nuclear theory of the atom and in the electrical theory of matter, it turns around and gives us explanations of the behavior of antimatter in which these charges display the same
violent antagonism that they demonstrate to actual observation. This is the kind of inconsistency that inevitably results when recalcitrant problems are ―solved‖ by ad hoc assumptions that involve departures from established physical laws and principles. In the context of the present situation in which the electrical theory is challenged by a new development, all of these deficiencies and contradictions that are inherent in the electrical theory become very significant. But the positive evidence in favor of the new theory is even more conclusive than the negative evidence against its predecessor. First, and probably the most important, is the fact that we are not replacing the electrical theory of matter with another ―theory of matter.‖ The Reciprocal System is a complete general theory of the physical universe. It contains no hypotheses other than those relating to the nature of space and time, and it produces an explanation of the cohesion of solids in the same way that it derives logical and consistent explanations of other physical phenomena: simply by developing the consequences of the basic postulates. We therefore do not have to call upon any additional force of a hypothetical nature to account for the cohesion. The two forces that determine the course of events in the region outside unit distance also account for the existence of the inter-atomic equilibrium inside this distance. It is significant that the new theory identifies both of these forces. One of the major defects of the electrical theory of cohesion is that it provides only one force, the hypothetical electrical force of attraction, whereas two forces are required to explain the observed situation. Originally it was assumed that the atoms are impenetrable, and that the electrical forces merely hold them in contact. Present-day knowledge of compressibility and other properties of solids has demolished this hypothesis, and it is now evident that there must be what Karl Darrow called an ―antagonist,‖ in the statement quoted in Volume I, to counter the attractive force, whatever it may be, and produce an equilibrium. Physicists have heretofore been unable to find any such force, but the development of the Reciprocal System has now revealed the existence of a powerful and omnipresent force hitherto unknown to science. Here is the missing ingredient in the physical situation, the force that not only explains the cohesion of solid matter, but, as we saw in Volume I, supplies the answers to such seemingly far removed problems as the structure of star clusters and the recession of the galaxies. One point that should be specifically noted is that it is this hitherto unknown force, the force due to the progression of the natural reference system, that holds the solid aggregate together, not gravitation, which acts in the opposite direction in the time region. The prevailing opinion that the force of gravitation is too weak to account for the cohesion is therefore irrelevant, whether it is correct or not. Inasmuch as the new theoretical system applies the same general principles to an understanding of all of the inter-atomic and inter-molecular equilibria, it explains the cohesion of all substances by the same physical mechanism. It is no longer necessary to have one theory for ionic substances, several more for those that are non-ionic, and to leave the metals out in the cold without any applicable theory. 
The theoretical findings with respect to the nature of chemical combinations and the structure of molecules that were outlined in the preceding volume have made a major contribution to this simplification of the cohesion picture, as they have eliminated the need for different kinds of cohesive forces, or ―bonds.‖ All that is now required of a theory of cohesion is that it
supply an explanation of the inter-atomic equilibrium, and this is provided, for all solid substances under all conditions, by balancing the outward motion (force) of gravitation against the inward motion (force) of the progression of the natural reference system. Because of the asymmetry of the rotational patterns of the atoms of many elements, and the consequent anisotropy of the force distributions, the equilibrium locations vary not only between substances, but also between different orientations of the same substance. Such variations, however, affect only the magnitudes of the various properties of the atoms. The essential character of the inter-atomic equilibrium is always the same. As indicated in the original discussion of gravitation, even though the various aggregates of matter do not actually exert gravitational forces on each other, the observable results of their gravitational motions are identical with those that would be produced if such forces did exist. The same is true of the results of the progression of the natural reference system. There is a considerable element of convenience in expressing these results in terms of force, on an ―as if‖ basis, and this practice has already been followed to some extent in the previous volume. Now that we are ready to begin a quantitative evaluation of the inter-atomic relations, however, it is desirable to make it clear that the force concept is being used only for convenience. Although the quantitative discussion that follows, like the earlier qualitative discussion, will be carried on in terms of forces, what we will actually be dealing with are the inward and outward motions of each individual atom. While the items that have been mentioned add up to a very impressive case in favor of the new theory of cohesion, the strongest confirmation of its validity comes from its ability to locate the point of equilibrium; that is to give us specific values of the interatomic distances. As will be demonstrated in Chapter 2, we are already able, by means of the newly established relations, to calculate the possible values of the inter-atomic distance for most of the simpler substances, and there do not appear to be any serious obstacles in the way of extending the calculations to more complex substances whenever the necessary time and effort can be applied to the task. Furthermore, this ability to determine the location of the point of equilibrium is not limited to the simple situation where only the two basic forces are involved. Chapters 4 and 5 will show that the same general principles can also be applied to an evaluation of the changes in the equilibrium distance that result from the application of heat or pressure to the solid aggregate. Although, as stated in Volume I, the true magnitude of a unit of space is the same everywhere, the effective magnitude of a spatial unit in the time region is reduced by the inter-regional ratio. It is convenient to regard this reduced value, 1/156.44 of the natural unit, as the time region unit of space. The effective portion of a time region phenomenon may extend into one or more additional units, in which case the measured distance will exceed the time region unit, or the original single unit may not be fully effective, in which case the measured distance will be less than the time region unit. 
Thus the interatomic equilibrium may be reached either inside or outside the time region unit of distance, depending on where the outward rotational forces reach equality with the inward force of the progression of the natural reference system. Extension of the interatomic distance beyond one time region unit does not take the equilibrium system out of the time region, as the boundary of that region is at one full-sized natural unit of distance,
not at one time region unit. So far as the inter-atomic force equilibrium is concerned, therefore, the time region unit of distance does not represent any kind of a critical magnitude. As we saw in our examination of the composition of the magnetic neutral groups, however, the natural unit as it exists in the time region (the time region unit) is a critical magnitude from the orientation standpoint. An explanation of this difference can be derived from a consideration of the difference in the inherent nature of the two phenomena. Where the inter-atomic distance is less than one time region unit, the rotational forces are acting against the inward force of the progression of the reference system during only a portion of the unit progression. Similarly, where the inter-atomic distance is greater than one time region unit, the unit inward force is acting against only a portion of the greater-than-unit outward rotational forces. The variations in distance thus reflect differences in the magnitudes of the rotational forces. But the orientation effect has no magnitude. It either exists, or does not exist. As we have noted in the previous discussion, particularly in connection with the structure of the benzene molecule, this effect, if it exists, is the same regardless of whether it acts at short range or at long range. The essential requirement that it must meet is that it must be continuously effective. Otherwise, the orientation is destroyed during the off period. Where the rotational forces extend beyond one time region unit, so that the unit orientation effect is coincident with only a portion of the total rotational forces, the orienting effect is not continuous, and no orientation takes place.

In this chapter we are dealing mainly with what we are calling "rotational forces." These are, of course, the same "as if" forces due to the scalar aspect of the atomic rotation that were called "gravitational" in some other contexts, the choice of language depending on whether it is the origin or the effect of the force that is being emphasized in the discussion. For a quantitative evaluation of the rotational forces we may use the general force equation, provided that we replace the usual terms of the equation with the appropriate time region terms. As explained in introducing the concept of the time region in Chapter 8 of Vol. I, equivalent space 1/t replaces space in the time region, and velocity is therefore 1/t². Energy, the one-dimensional equivalent of mass, takes the place of mass in the time region expression of the force equation, because the three rotations of the atom act separately, rather than jointly, in this region; it is the reciprocal of this expression, or t². Acceleration is velocity divided by time: 1/t³. The time region equivalent of the equation F = ma is therefore F = Ea = t² × 1/t³ = 1/t in each dimension.

At this point we will need to take note of the nature of the increments of speed displacement in the time region. In the outside region additions to the displacement proceed by units: first one unit, then another similar unit, yet another, and so on, the total up to any specific point being n units. There is no term with the value n. This value appears only as a total. The additions in the time region follow a different mathematical pattern, because in this case only one of the components of motion progresses, the other remaining fixed at the unit value. Here the displacement is 1/x, and the sequence is 1/1, 1/2, 1/3 ... 1/n. The quantity 1/n is the final term, not the total.
To obtain the total that corresponds to n in the outside region it is necessary to integrate the quantity 1/x from x = 1 to x = n. The result is ln n, the natural logarithm of n.

Many readers of the first edition have asked why this total should be an integral rather than a summation. The answer is that we are dealing with a continuous quantity. As pointed out in the introductory chapters of the preceding volume, the motion of which the universe is constructed does not proceed in a succession of jumps. Even though it exists only in units, it is a continuous progression. A unit of this motion is a specific portion of this continuity. A series of units is a more extended segment of that continuity, and its magnitude is an integral. In dealing with the basic individual units of motion in the outside region it is possible to use the summation process, but only because in this case the sum is the same as the integral. To get the total of the 1/x series we must integrate. To evaluate the rotational force we integrate the quantity 1/t from unity, the physical datum or zero level, to t:

F = ∫ dt/t = ln t    (1-1)

If the quantity ln t is below unity in any dimension there is no effective outward force in that dimension, but the natural logarithm exceeds unity for all values of x above 2, and the atoms of all elements have a rotational displacement of 2 (equivalent to t = 3) or more in at least one dimension. Consequently, all have effective rotational forces. The force computed from equation 1-1 is the inherent rotational force of the individual atom; that is, the one-dimensional force which it exerts against a single unit of force. The force between two (apparently) interacting atoms is

F = ln tA ln tB    (1-2)

For a two-dimensional magnetic rotation this becomes

F = ln² tA ln² tB    (1-3)

As we found in Chapter 12, Vol. I, the equivalent of distance s in the time region is s², and the gravitational force in this region therefore varies inversely as the fourth power of the distance rather than the square. Applying this factor to the expression for the force of the two-dimensional rotation, together with the inter-regional ratio, the ratio of effective to total force derived in the same chapter, we obtain the effective force of the magnetic rotation of the atom:

Fm = (0.006392)⁴ s⁻⁴ ln² tA ln² tB    (1-4)

The distance factor does not apply to the force due to the progression of the natural reference system, as this force is omnipresent, and unlike the rotational force is not altered as the objects to which it is applied change their relative positions. At the point of equilibrium, therefore, the rotational force is equal to the unit force of the progression. Substituting unity for Fm in equation 1-4, and solving for the equilibrium distance, we obtain

s0 = 0.006392 ln½ tA ln½ tB    (1-5)
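Spelled out, this step is simply the result of setting the rotational force of equation 1-4 equal to the unit force of the progression and solving for the distance:

$$1 = (0.006392)^4\, s_0^{-4}\, \ln^2 t_A\, \ln^2 t_B \quad\Longrightarrow\quad s_0 = 0.006392\; \ln^{1/2} t_A\; \ln^{1/2} t_B$$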

The inter-atomic distances for those elements which have no electric rotation, the inert gas series, may be calculated directly from this equation. In the elements, however, tA = tB in most cases, and it will be convenient to express the equation in the simplified form:

s0 = 0.006392 ln t    (1-6)

The values thus calculated are in the neighborhood of 10⁻⁸ cm, and for convenience this quantity has been taken as a unit in which to express the inter-atomic and inter-molecular distances. When converted from natural units to this conventional unit, the Angstrom unit, symbol Å, equation 1-6 becomes

s0 = 2.914 ln t Å    (1-7)

In applying this equation we encounter another of the questions with respect to terminology that inevitably arise in a basically new treatment of any subject. The significance of the quantity t as used in the foregoing discussion and in the equations is obvious from the context (it is the magnitude of the effective rotation), but the question is: What shall we call it? The basic quantity with which we are dealing, the rotational speed displacement, does not enter into the equations directly. The mathematical structure of these equations requires us to enter them with values that include the initial unit which constitutes the natural zero datum. Furthermore, each double vibrational unit rotates independently, and when the rotation extends to a second such unit the increment in the value of t is only one half unit per added unit of displacement. Under these circumstances, where the relation of the term t to the displacement is variable, it seems advisable to give this term a distinctive name, and we will therefore call it the specific rotation. As brought out in the discussion of the general characteristics of the atomic rotation in Chapter 10, Vol. I, the two magnetic displacements may be unequal, and in this event the speed distribution takes the form of a spheroid with the principal rotation effective in two dimensions and the subordinate rotation in one. The average effective value of the specific rotation under these conditions is (t₁²t₂)¹/³. In this case we are dealing with the properties of a single entity, and the mathematical situation seems clear. But it is not so evident how we should arrive at the effective specific rotation where there is an interaction between two atoms whose individual rotations are different. As matters now stand it appears that the geometric mean of the two specific rotations is the correct quantity, and the values tabulated in Chapters 2 and 3 have been calculated on this basis. It should be noted, however, that this conclusion as to the mathematics of the combination is still somewhat tentative, and if further study shows that it must be modified in some, or all, applications, the calculated values will be subject to corresponding modifications. Any changes will be small in most cases, but they will be substantial where there is a large difference between the two components. The absence of major discrepancies between the calculated and observed distances in combinations of atoms with much different dimensions therefore gives some significant support to the use of the geometric mean pending further theoretical clarification. The inter-atomic distances of four of the five inert gas elements for which experimental data are available follow the regular pattern. The values calculated for these elements are compared with the experimental distances in Table 1.

Table 1: Distances - Inert Gas Elements

Atomic Number   Element    Specific Rotation   Distance
                                               Calculated   Observed
10              Neon       3-3                 3.20         3.20
18              Argon      4-3                 3.76         3.84
36              Krypton    4-4                 4.04         4.02
54              Xenon      4½-4½               4.38         4.41
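For readers who wish to check the arithmetic, the Table 1 values can be reproduced with a few lines of Python. This is only an illustrative sketch: the figure used for the natural unit of space (4.558816 × 10⁻⁶ cm) is the value given in Volume I and is assumed here rather than re-derived, the function names are ours, and the spheroidal mean (t₁²t₂)¹/³ described above is assumed to apply to the unequal rotation 4-3 of argon.

```python
import math

# Constants as given in the text; the natural unit of space (4.558816e-6 cm)
# is the Volume I figure and is assumed rather than re-derived here.
INTER_REGIONAL_RATIO = 156.444
NATURAL_UNIT_CM = 4.558816e-6

# 1/156.444 is the 0.006392 factor of equations 1-4 to 1-6; expressed in
# conventional units it becomes the 2.914 Å coefficient of equation 1-7.
angstrom_coefficient = NATURAL_UNIT_CM / INTER_REGIONAL_RATIO * 1e8

def effective_rotation(t1, t2):
    """Spheroidal mean (t1² t2)^(1/3) used when the two magnetic rotations differ."""
    return (t1 * t1 * t2) ** (1.0 / 3.0)

def inert_gas_distance(t1, t2):
    """Equation 1-7: s0 = 2.914 ln t (Å), t being the effective specific rotation."""
    return angstrom_coefficient * math.log(effective_rotation(t1, t2))

print(f"time region unit = {1 / INTER_REGIONAL_RATIO:.6f} natural units")  # 0.006392
print(f"Angstrom coefficient = {angstrom_coefficient:.3f}")                # 2.914

# Specific rotations and observed distances from Table 1
for name, (t1, t2), observed in [
    ("Neon",    (3.0, 3.0), 3.20),
    ("Argon",   (4.0, 3.0), 3.84),
    ("Krypton", (4.0, 4.0), 4.02),
    ("Xenon",   (4.5, 4.5), 4.41),
]:
    print(f"{name:8s} calculated {inert_gas_distance(t1, t2):.2f} Å   observed {observed:.2f} Å")
```

Running this reproduces the calculated column of Table 1 (3.20, 3.76, 4.04, 4.38 Å) and shows how the 2.914 Å coefficient of equation 1-7 follows from dividing the natural unit of space by the inter-regional ratio.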

Helium, which also belongs to the inert gas series, has some special characteristics due to its low rotational displacement, and will be discussed in connection with other elements affected by the same factors. The reason for the appearance of the 4½ value in the xenon rotation will also be explained shortly. The calculated distances are those which would prevail in the absence of compression and thermal expansion. A few of the experimental data have been extrapolated to this zero base by the investigators, but most of them are the actual observed values at atmospheric pressure and at temperatures which depend on the properties of the substances under examination. These values are not exactly comparable to the calculated distances. In general, however, the expansion and compression up to the temperature and pressure of observation are small. A comparison of the values in the last two columns of Table 1 and the similar tables in Chapters 2 and 3 therefore gives a good picture of the extent of agreement between the theoretical figures and the experimental results. Another point about the distance correlations that needs to be taken into account is that there is a substantial amount of variation in the experimental results. If we were to take the closest of these measured values as the basis for comparison, the correlation would be very much better. One relatively recent determination of the xenon distance, for example, arrives at a value of 4.34, almost identical with the calculated distance. There are also reported values for the argon distance that agree more closely with the theoretical result. However, a general policy of using the closest values would introduce a bias that would tend to make the correlation look more favorable than the situation actually warrants. It has therefore been considered advisable to use empirical data from a recognized selection of preferred values. Except for those values identified by asterisks, all of the experimental distances shown in the tables are taken from the extensive compilation by Wyckoff.2 Of course, the use of these values selected on the basis of indirect criteria introduces a bias in the unfavorable direction, since, if the theoretical results are correct, every experimental error shows up as a discrepancy, but even with this negative bias the agreement between theory and observation is close enough to show that the theoretical determination of the inter-atomic distance is correct in principle, and to demonstrate that, with the exception of a relatively small number of uncertain cases, it is also correct in the detailed application. Turning now to the elements which have electric as well as magnetic displacement, we note again that the electric rotation is one-dimensional and opposes the magnetic rotation. We may therefore obtain an expression for the effect of the electric rotational force on the magnetically rotating photon by inverting the one-dimensional force term of equation 1-2.

Fe = 1/(ln t'A ln t'B)    (1-8)

Inasmuch as the electric rotation is not an independent motion of the basic photon, but a rotation of the magnetically rotating structure in the reverse direction, combining the electric rotational force of equation 1-8 with the magnetic rotational force of equation 1-4 modifies the rotational terms (the functions of t) only, and leaves the remainder of equation 1-4 unchanged.

F = (0.006392)⁴ s⁻⁴ ln² tA ln² tB / (ln t'A ln t'B)    (1-9)

Here again the effective rotational (outward) and natural reference system progression (inward) forces are necessarily equal at the equilibrium point. Since the force of the progression of the natural reference system is unity, we substitute this value for F in equation 1-9 and solve for s0, the equilibrium distance, as before.

s0 = 0.006392 (ln½ tA ln½ tB) / (ln¼ t'A ln¼ t'B)    (1-10)

Again simplifying for application to the elements, where A is generally equal to B,

s0 = 0.006392 ln t / ln½ t'    (1-11)

In Angstrom units this becomes

s0 = 2.914 ln t / ln½ t' Å    (1-12)

As already noted, when the rotation is extended to a second (double) vibrational unit, to vibration two, we may say, each added displacement unit adds only one half unit to the specific rotation. Inasmuch as 8 electric displacement units distributed three-dimensionally bring the rotation to a new zero point, and cause the rotational motion to revert to the translational status, the change to vibration two in the electric dimension must take place before the displacement reaches 8. Specific rotation 8 (displacement 7) is therefore followed by 8½, 9, 9½, etc. But the first effective rotational displacement unit is necessarily one-dimensional, and the linear equivalent of the 8-unit limit is 2 units. Thus this first unit has already reached the one-dimensional limit. The succeeding displacement units have the option of continuing on the one-dimensional basis and extending the rotation to vibration two rather than extending it into additional dimensions. The change to vibration two therefore may take place immediately after the first displacement unit. In this case specific rotation 2 (displacement 1) is followed by 2½, 3, 3½, etc. The lower value is commonly found where it first becomes possible; that is, displacement 2 normally corresponds to rotation 2½ rather than 3. The next element may take the intermediate value 3½, but beyond this point the higher vibration one value normally prevails.

In the first edition it was indicated that the one or two vibrational displacement units being rotated did not necessarily constitute the entire vibrational component of the basic photon, inasmuch as these one or two units are capable of being rotated independently of
the remaining vibrational units, if any. Further consideration now leads to the conclusion that one or two units of a multi-unit photon frequency can, in fact, be set in rotation independently, as previously indicated, and that the original photon may have had an excess of vibrational units, but that in such an event the rotating portion of the photon begins moving inward, whereas the non-rotating portion continues moving outward by reason of the progression of the natural reference system. The two portions therefore separate, and the rotating portion retains no non-rotating vibrational component. The general pattern of the magnetic rotational values is the same as that of the electric values. The tendency to substitute specific rotation 2½ for 3 applies to the magnetic as well as to the electric rotation, and in the lower group combinations (both elements and compounds) that follow the regular electropositive pattern the specific magnetic rotations are usually 2½-2½ or 3-2½, rather than 3-3. But the upper limit for specific magnetic rotation on a vibration one basis is 4 (three displacement units) instead of 8, as the two-dimensional rotation reaches the upper zero level at 4 displacement units in each dimension. Rotation 4½ therefore follows rotation 4 in the regular sequence, as we saw in the values given for xenon in Table 1. It is possible to reach rotation 5 in one dimension, however, without bringing the magnetic rotation as a whole up to the 5 level, and 5-4 or 5-4½ rotation occurs in some elements either in lieu of, or in combination with, the 4½-4 or 4½-4½ rotation.
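The two alternative electric-dimension sequences described above, an immediate shift to vibration two after the first displacement unit or a shift deferred until displacement 7, can be summarized in a short sketch. This is only an illustration of the counting rule as stated in the text; the function name and the shift_at parameter are ours, not part of the theoretical terminology.

```python
def specific_rotation_sequence(shift_at, max_displacement=8):
    """Illustrative sketch: how the specific rotation grows as electric
    displacement units are added. The first unit gives specific rotation 2
    (the displacement plus the initial unit); each further unit adds 1 on a
    vibration-one basis, but only 1/2 once the shift to vibration two has
    occurred. The shift may come as early as displacement 1 or as late as 7.
    """
    rotation = 2.0
    sequence = [rotation]
    for displacement in range(2, max_displacement + 1):
        rotation += 0.5 if displacement > shift_at else 1.0
        sequence.append(rotation)
    return sequence

print(specific_rotation_sequence(shift_at=1))  # 2, 2.5, 3, 3.5, ... (early shift)
print(specific_rotation_sequence(shift_at=7))  # 2, 3, 4, 5, 6, 7, 8, 8.5 (late shift)
```

The first call reproduces the 2, 2½, 3, 3½, ... pattern; the second reproduces 2, 3, ... 8, 8½; intermediate choices of the shift point give the other sequences mentioned above.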

CHAPTER 2

Inter-atomic Distances
As equation 1-10 indicates, the distance between any two atoms in a solid aggregate is a function of the specific rotations of the atoms. Since each atom is capable of assuming any one of several different relative orientations of its rotational motions, it follows that there are a number of possible specific rotations for each combination of atoms. This number of possible alternatives is still further increased by two additional factors that were discussed earlier. The atom has the option, as we noted in Chapter10, vol. I, of rotating with the normal magnetic displacement and a positive electric displacement, or with the next higher magnetic displacement and a negative electric increment. And in either case, the effective quantity, the specific rotation, may be modified by extension of the motion to a second vibrating unit, as brought out in Chapter 1. It is possible that each of these many variations of the magnitude of the specific rotation, and the corresponding values of the inter-atomic distances, may actually be realized under appropriate conditions, but in any particular set of circumstances certain combinations of rotations are more probable than the others, and in ordinary practice the number of different values of the distance between the same two atoms is relatively small, except in certain special cases. As matters now stand, therefore, we are able to calculate from theoretical premises a small set of possible inter-atomic distances for each element or compound. Ultimately it will no doubt be advisable to evaluate the probability relations in detail so that
the results of the calculations will be as specific as possible, but it has not been feasible to undertake this full treatment of the probability relationships in this present work. In an investigation of so large a field as the structure of the physical universe there must not only be some selection of the subjects that are to be covered, but also some decisions as to the extent to which that coverage will be carried. A comprehensive treatment of the probability relations wherever they enter into physical situations could be quite helpful, but the amount of time and effort required to carry out such a project will undoubtedly be enormous, and its contribution to the major objectives of this present undertaking is not sufficient to justify allocating so much of the available resources to it. Similar decisions as to how far to carry the investigation in certain areas have had to be made from time to time throughout the course of the work in order to limit it to a finite size. It might be well to point out in this connection that it will never be possible to calculate a unique inter-atomic distance for every element or combination of elements, even when the probability relations have been definitely established, as in many cases the choice from among the alternatives is not only a matter of relative probability, but also of the history of the particular specimen. Where two or more alternative forms are stable within the range of physical conditions under which the empirical examination is being made; the treatment to which the specimen has previously been subjected plays an important part in the determination of the structure. It does not follow, however, that we are totally precluded from arriving at definite values for the inter-atomic distances. Even though no quantitative evaluation of the relative probabilities of the various alternatives is yet available, the nature of the major factors involved in their determination can be deduced theoretically, and this qualitative information is sufficient in most cases to exclude all but a very few of the total number of possible variations of the specific rotations. Further-more, there are some series relations by means of which the range of variability can be still further narrowed. These series patterns will be more evident when we examine the distances in compounds in the next chapter, and they will be given more detailed consideration at that point. The first thing that needs to be emphasized as we begin our analysis of the factors that determine the inter-atomic distance is that we are not dealing with the sizes of atoms; what we are undertaking to do is to evaluate the distance between the equilibrium positions that the atoms occupy under specified conditions. In Chapter 1 we examined the general nature of the atomic equilibrium. In this and the following chapter we will see how the various factors involved in the relations between the rotations of the (apparently) interacting atoms affect the point of equilibrium, and we will arrive at values of the inter-atomic distances under static conditions. Then in Chapters 5 and 6 we will develop the quantitative relations that will enable us to determine just what changes take place in these equilibrium distances when external forces in the form of pressure and temperature are applied. As we have seen in the preceding volume, all atoms and aggregates of matter are subject to two opposing forces of a general nature: gravitation and the progression of the natural reference system. 
These are the primary forces (or motions) that determine the course of physical events. Outside the gravitational limits of the largest aggregates, the outward motion due to the progression of the natural reference system exceeds the inward motion of
gravitation, and these aggregates, the major galaxies, move outward from each other at speeds increasing with distance. Inside the gravitational limits the gravitational motion is the greater, and all atoms and aggregates move inward. Ultimately, if nothing intervenes, this inward motion carries each atom within unit distance of another, and the directional reversal that takes place at the unit boundary then results in the establishment of an equilibrium between the motions of the two atoms. The inter-atomic distance is the distance between the atomic centers in this equilibrium condition. It is not, as currently assumed, an indication of the sizes of the atoms. The current theory which regards the inter-atomic distance as a measure of "size" is, in many respects, quite similar to the electronic "bond" theory of molecular structure. Like the electronic theory, it is based on an erroneous assumption - in this case, the assumption that the atoms are in contact in the solid state - and like the electronic theory, it fits only a relatively small number of substances in its simple form, so that it is necessary to call upon a profusion of supplementary and subsidiary hypotheses to explain the deviations of the observed distances from what are presumed to be the primary values. As the textbooks point out, even in the metals, which are the simplest structures from the standpoint of the theory, there are many difficult problems, including the awkward fact that the presumed "size" is variable, depending on the nature of the crystal structure. Some further aspects of this situation will be considered in Chapter 3. The resemblance between these two erroneous theories is not confined to the lack of adequate foundations and to the nature of the difficulties that they encounter. It also extends to the resolution of these difficulties, as the same principles that were derived from the postulates of the Reciprocal System to account for the formation of molecules of chemical compounds, when applied in a somewhat different way, are the general considerations that govern the magnitude of the inter-atomic distance in both elements and compounds. Indeed, all aggregates of electronegative elements are molecular in their composition, rather than atomic, as the molecular requirement that the negative electric displacement of an atom of such an element must be counterbalanced by an equivalent positive displacement in order to arrive at a stable equilibrium in space applies with equal force to a combination with a like atom. As we saw in our examination of the structural situation, electropositive elements are not subject to this restriction, but in many cases the molecular (balanced orientation) type of structure takes precedence over the electropositive structure by reason of collateral factors that affect the relative probability. Because of this fact that the distances follow the structural pattern, the various ways of orienting the atomic rotations that were discussed in Chapter 18, Vol. I, with a few modifications due to the special conditions that exist in the elemental aggregates, determine the manner in which the atoms of an element are able to combine with each other, and the effective values of the specific rotations in these combinations. In the electropositive elements the specific rotations are based, in the first instance, on the rotational displacements as listed in Chapter 10, Vol. I.
Where the inter-atomic orientation is the normal positive arrangement, the displacements as listed are translated directly into specific rotations by addition of the initial unit and reduction of the incremental values where the rotation extends to vibration two. Except for the elements of group 2A, which, as already noted, are subject to some special considerations because of their low magnetic
displacements, the elements of Division I all follow the regular electropositive pattern of specific rotations. The only irregularities are in the electric rotations of the second and third elements of each group, where the point of transition to vibration two varies between groups. The inter-atomic distances in this division are listed in Table 2. The regular electropositive pattern is also applicable in Division II, and a number of the Division II elements of Group 3A crystallize on this basis, with inter-atomic distances determined in the same manner as in Division I. As noted in Volume I, however, the Division II elements generally favor the magnetic type of orientation in chemical compounds because the normal positive orientation becomes less probable as the displacement increases. The same probability considerations operate against the positive orientation in the elements of this division, but instead of employing the magnetic orientation as the alternate, these elements utilize a type of orientation that is available only where all rotations of each participant in a combination are identical with those of the other. This arrangement reverses the effective directions of the rotations of alternate atoms. The resulting relative rotation is a combination of x and 8-x (or 4-x), as in the neutral orientation, and the effective specific rotations are 10 for vibration one and 5 for vibration two. A combination value 5-10 is also common.

Table 2: Distances - Division I

                                       Specific Rotation        Distance
Group   Atomic Number   Element        Magnetic     Electric    Calc.   Obs.
2B      11              Sodium         3-2½         2           3.70    3.71
        12              Magnesium      3-3 3-2½     2½          3.17    3.21
        13              Aluminum       3-2½         3           2.83    2.86
3A      19              Potassium      4-3          2           4.49    4.50
        20              Calcium        4-3          2½          4.00    3.98
        21              Scandium       4-3          4           3.18    3.20
        22              Titanium       4-3          5           2.95    2.92
3B      37              Rubidium       4-4          2           4.85    4.87
        38              Strontium      4-4          2½          4.32    4.28
        39              Yttrium        4-4          3½          3.64    3.63
        40              Zirconium      4-4          5           3.18    3.23
4A      55              Cesium         4½-4½        2           5.23    5.24
        56              Barium         5-4½         3           4.36    4.34
        57              Lanthanum      4½-4½        4           3.70    3.74
        58              Cerium         5-4½         5           3.61    3.63
4B      89              Actinium       4½-5         4           3.79    3.76*
        90              Thorium        4½-5         5           3.52    3.56

This reverse type of structure makes its appearance in body-centered cubic crystal forms of Chromium and Iron which coexist with the regular positive hexagonal or face-centered
cubic structures. Vanadium and Niobium, the first Division II elements of their respective groups, combine the positive and reverse orientations. Beyond Niobium the positive orientation does not appear in the common Division II forms of the elements, the structures to which the present discussion is limited, and all elements take the reverse orientation, except Europium and Ytterbium, which combine it with a unit-specific rotation; that is, no electric-rotational displacement at all, as in the inert gas elements. On the basis of the considerations discussed in Chapter 1, the average effective specific rotation for such rotational combinations has been taken as the geometric mean of the two components. Where the orientations are the same, and the only difference is in the magnitude, as in the 5-10 combination, and in the combinations of magnetic rotations that we will encounter later, the equilibrium is reached in the normal manner. If two different electric rotations are involved, the two-atom pairs cannot attain spatial equilibrium individually, but they establish a group equilibrium similar to that which is achieved where n atoms of valence one each combine with one atom of valence n. The Division II distances are shown in Table 3.

Table 3: Distances - Division II

                                         Specific Rotation        Distance
Group   Atomic Number   Element          Magnetic     Electric    Calc.   Obs.
3A      23              Vanadium         4-3          6-10        2.62    2.62
        24              Chromium         4-3          7           2.68    2.72
                                         4-3          10          2.46    2.49
        25              Manganese        4-3          8           2.59    2.58
        26              Iron             4-3          8½          2.56    2.57
                                         4-3          10          2.46    2.48
        27              Cobalt           4-3          9           2.52    2.51
        28              Nickel           4-3          9½          2.49    2.49
3B      41              Niobium          4-4          6-10        2.83    2.85
        42              Molybdenum       4-4½         10          2.72    2.72
        43              Technetium       4-4½         10          2.73    2.73*
        44              Ruthenium        4-4½         10          2.73    2.70
        45              Rhodium          4-4          10          2.66    2.69
                                         4-4½         10          2.73    2.76
        46              Palladium        4-4½         10          2.73    2.74
4A      59              Praseodymium     5-4½         5           3.61    3.64
        60              Neodymium        5-4½         5           3.61    3.65
        62              Samarium         5-4½         5           3.61    3.62*
        63              Europium         4½-5         1-5         3.96    3.96
        64              Gadolinium       5-4½         5           3.61    3.62
        65              Terbium          5-4½         5           3.61    3.59
        66              Dysprosium       5-4½         5           3.61    3.58
        67              Holmium          4½-5         5           3.52    3.56
        68              Erbium           4½-5         5           3.52    3.53
        69              Thulium          4½-5         5           3.52    3.52
        70              Ytterbium        4½-4½        1-5         3.86    3.87
        71              Lutetium         4½-5         5           3.52    3.50*
4B      91              Protactinium     4½-5         5-10        3.22    3.24*
        92              Uranium          4½-4½        10          2.87    2.85
        93              Neptunium        4½-4½        5           3.43    3.46*
        94              Plutonium        4½-4½        5-10        3.14    3.15*
        95              Americium        4½-4½        5           3.43    3.46*
        96              Curium           4½-4½        5-10        3.14    3.10*
        97              Berkelium        4½-4½        5           3.43    3.40*

Because of the greater probability of the electropositive types of combinations, the characteristics of Division II carry over into the first elements of Division III, and these elements, Nickel, Palladium, and Lutetium, are included in the table. Some similar modifications of the normal division boundaries have already been noted in connection with other subjects. The net total rotation of the material atom is a motion with positive displacement - that is, a speed less than unity - and as such it normally results in a change of position in space. Inside unit space, however, all motion is in time. The orientation of the atom for the purpose of the space-time equilibrium therefore exists in the three dimensions of time. As we saw in our examination of the inter-regional situation in Chapter 12, Volume I, each of these dimensions contacts the space of the region outside unit distance individually. To the extent that the motion in a dimension of time acts along the line of this contact it is a motion in equivalent space. Otherwise it has no spatial effect beyond the unit boundary. Because of the independence of the three dimensions of motion in time the relative orientation of the electric rotation of any combination of atoms may be the same in all spatial dimensions, or there may be two or three different orientations. In most of the elements that have been discussed thus far the orientation is the same in all spatial dimensions, and in the exceptions the alternate rotations are symmetrically distributed in the solid structure. The force system of an aggregate of such elements is isotropic. It follows that any aggregate of atoms of these elements has a structure in which the constituents are arranged in one of the geometrical patterns possible for equal forces: an isometric crystal. All of the electropositive elements (Divisions I and II) crystallize in isometric forms, and, except for a few which apparently have quite complex structures, each of the crystal forms of these elements belongs to one or another of three types: the face-centered cube, the body-centered cube, or the hexagonal close-packed structure. We now turn to the other major subdivision of the elements, the electronegative class, those whose normal electric displacement is negative. Here the force system is not necessarily isotropic, since the most probable arrangement in one or two dimensions may be the negative orientation, a direct combination of two negative electric displacements, similar to the all-positive combinations. It is not possible to have negative orientation in all three dimensions, and wherever it does exist in one or two dimensions the rotational forces of the atoms are necessarily anisotropic. The controlling factor is the requirement that the net total rotational displacement of a material atom as a whole must be positive. Negative orientation in all three dimensions is obviously incompatible with this requirement, but if the negative
displacement is restricted to one dimension the aggregate has fixed atomic positions in two dimensions, with a fixed average position in the third because of the positive displacement of the atom as a whole. This results in a crystal structure that is essentially equivalent to one with fixed positions in all dimensions. Such crystals are not usually isometric, as the interatomic distance in the odd dimension is generally different from that of the other two. Where the distances in all dimensions do happen to coincide, we will find on further investigation that the space symmetry is not an indication of force symmetry. If the negative displacement is very small, as in the lower division IV elements, it is possible to have negative orientation in two dimensions if the positive displacement in the third dimension exceeds the sum of these two negative components, so that the net result is still positive. Here the relative positions of the atoms are fixed in one dimension only, but the average positions in the other two dimensions are constant by reason of the net positive displacement of the atoms. An aggregate of such atoms retains most of the external characteristics of a crystal, but when the internal structure is examined the atoms appear to be distributed at random, rather than in the orderly arrangement of the crystal. In reality there is just as much order as in the crystalline structure, but part of the order is in time rather than in space. This form of matter can be identified as the glassy, or vitreous, form, to distinguish it from the crystalline form. The term ―state‖ is frequently used in this connection instead of ―form‖, but the physical state of matter has an altogether different meaning based on other criteria, and it seems advisable'to confine the use of this term to the one application. Both glasses and crystals are in the solid state. In beginning a consideration of the structures of the individual electronegative elements, we will start with Division III. The general situation in this division is similar to that in Division II, but the negativity of the normal electric displacement introduces a new factor into the determination of the orientation pattern, as the most probable orientation of an electronegative element may not be capable of existing in all three dimensions. As stated earlier, where two or more different orientations are possible in a given set of circumstances the relative probability is the deciding factor. Low displacements are more probable than high displacements. Simple orientations are more probable than combinations. Positive electric orientation is more probable than negative. In Division I all of these factors operate in the same direction. The positive orientation is simple, and it also has the lowest displacement value. All structures in this division are therefore formed on the basis of the positive orientation. In Division II the margin of probability is narrow. Here the positive displacement x is greater than the inverse displacement 8-x, and this operates against the greater inherent probability of a simple positive structure. As a result, both the positive and reverse types of structure are found in this division, together with a combination of the two. In Division III the negative orientation has a status somewhat similar to that of the positive orientation in Division II. As a simple orientation, it has a relatively high probability. But it is limited to one dimension. 
The regular division III structures of Groups 3A and 3B are therefore anisotropic, with the reverse orientation in the other two dimensions. A combination of these two types of orientation is also possible, and in copper and silver, the first Division III elements of their respective groups, the crystals formed on the basis of this
combination orientation have cubic symmetry. As in Division II, the elements of Division III in Groups 4A and 4B crystallize entirely on the basis of the reverse orientation. Table 4 lists what may be considered as the regular inter-atomic distances of the elements of Division III. Although the probability of the negative orientation is greater in Division IV than in Division III, because of the smaller displacement values, this type of structure seldom appears in the crystals of the lower division. The reason is that where this orientation exists in the elements of the lower displacements, it exists in two dimensions, and this produces a glassy or vitreous aggregate rather than a crystal. The reverse orientation is not subject to any restrictive factor of this nature, but it is less probable at the lower displacements, and except in Group 4A, where it continues to predominate, this orientation appears less frequently as the displacement decreases. Where it does exist it is increasingly likely to combine with some other type of orientation. As a result of these limitations that are applicable to the inherently more probable types of orientation, many of the Division IV structures are formed on the basis of the secondary positive orientation, a combination of two 8-x displacements.

Table 4: Distances - Division III
Group   Atomic   Element     Specific Rotation        Distance
        Number               Magnetic    Electric     Calc.    Obs.
3A      29       Copper      4-3         8-10         2.53     2.55
        30       Zinc        4-4         7            2.90     2.91
                             4-4         10           2.66     2.66
        31       Gallium     4-3         6            2.79     2.80
                             4-3         10           2.46     2.44
3B      47       Silver      4-5         8-10         2.87     2.88
        48       Cadmium     5-4         7            3.20     3.26*
                             5-4         10           2.94     2.97
        49       Indium      5-4         6            3.33     3.37
                             5-4         6-10         3.21     3.24
4A      72       Hafnium     4-4½        5            3.26     3.32
        73       Tantalum    4½-4½       10           2.87     2.86
        74       Tungsten    4-4½        10           2.73     2.74
        75       Rhenium     4-4½        10           2.73     2.77*
        76       Osmium      4-4½        10           2.73     2.73
        77       Iridium     4-4½        10           2.73     2.71
        78       Platinum    4-4½        10           2.73     2.77
        79       Gold        4½-4½       10           2.87     2.88
        80       Mercury     4-4½        5-10         2.98     3.00
        81       Thallium    4½-4½       5            3.43     3.47
                             4½-4½       5            3.43     3.45

The secondary positive orientation is not possible in the electropositive divisions, as 8-x is negative in these divisions, and like the negative orientation itself, an 8-x negative combination would be confined to a subordinate role in one or two dimensions of an asymmetric structure. Such a crystal structure cannot compete with the high probability of

the symmetrical electropositive crystals, and therefore does not exist. In the electronegative divisions, however, the 8-x displacement is positive, and there are no limitations on it, aside from those arising from the high displacement values. The effective displacement of this secondary positive orientation is even greater than might be expected from the magnitude of the quantity 8-x, as the change of zero points for the two oppositely directed motions is also oppositely directed, and the new zero points are 16 displacement units apart. The resultant relative displacement is 16-2x, and the corresponding specific rotation is 18-2x. In Division IV the numerical values of the latter expression range from 10 to 16, and because of the low probability of such high rotations, the secondary positive orientation is limited to one or one and one-half dimensions in spite of its positive character. In Division III the 8-x displacements are lower, but in this case they are too low. A two-unit separation of the zero points (16 displacement units) cannot be maintained unless the effective displacement is at least 8 (one full three-dimensional unit). The secondary positive orientation is therefore confined to Division IV. A special type of structure is possible only for those electronegative elements which have a rotational displacement of four units in the electric dimension. These elements are on the borderline between Divisions III and IV, where the secondary positive and reverse orientations are about equally probable. Under similar conditions other elements crystallize in hexagonal or tetragonal structures, utilizing the different orientations in the different dimensions. For these displacement 4 elements, however, the two orientations produce the same specific rotation: 10. The inter-atomic distance in these crystals is therefore the same in all dimensions, and the crystals are isometric, even though the rotational forces in the different dimensions are not of the same character. The molecular arrangement in this crystal pattern, the diamond structure, shows the true nature of the rotational forces. Outwardly this crystal cannot be distinguished from the isotropic cubic crystals, but the analogous body-centered cubic structure has an atom at each corner of the cube as well as one in the center, whereas the diamond structure leaves alternate corners open to accommodate the abnormal projection of forces in the secondary positive dimension. In those of the lower elements of Division IV that are beyond the range of the inverse type of orientation, there is no available alternative for combination with the secondary positive orientation. The crystals of these elements therefore have no effective electric rotation in the remaining dimensions, and the relative specific rotation in these dimensions is unity, as in all dimensions of the inert gas elements. The most common distances in the aggregates of the Division IV elements are shown in Table 5. Up to this point, no consideration has been given to the elements of atomic number below 10, as the rotational forces of these elements are subject to certain special influences which make it desirable to discuss them separately. One cause of deviation from the normal behavior is the small size of the rotational groups. In the larger groups the four divisions are distinct, and, except for some overlapping, each has its own characteristic force combinations, as we have seen in the preceding paragraphs. 
In an 8-element group, however, the second series of four elements, which would normally constitute Division III, is actually in the Division IV position. As a result, these four elements have, to a certain extent, the properties of both divisions. Similarly, the Division I elements of these groups may, in some

cases, act as if they were members of Division III. A second influence that affects the forces and the crystal structures of the lower group elements is the inactivity of the rotational forces in certain dimensions that was mentioned earlier. A specific rotation of two units produces no effect in the positive direction. The reason for this is revealed by equation 1-1. By applying this equation we find that the effective rotational force (ln t) for t = 2 is 0.693, which is less than the opposing space-time force 1.00. The net effective force of specific rotation 2 is therefore below the minimum value for action in the positive direction. In order to produce an active force the specific rotation must be high enough to make ln t greater than unity. This is accomplished at rotation 3.
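The numerical claim here is easy to verify. The following short sketch in Python (the identifiers are ours, introduced only for illustration) evaluates ln t for the first few values of the specific rotation and compares each with the opposing unit space-time force:

import math

# ln t is the effective rotational force referred to by equation 1-1; a unit
# space-time force opposes it, so rotation acts in the positive direction only
# where ln t exceeds 1.00.
for t in (1, 2, 3, 4):
    force = math.log(t)
    print(t, round(force, 3), "active" if force > 1.0 else "inactive")

As the output shows, ln 2 = 0.693 falls short of unity, while ln 3 = 1.099 is the first value to exceed it, which is the point made in the text.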

Table 5: Distances - Division IV
Group 2B Atomic Number 14 15 16 17 3A 32 33 34 35 3B 50 51 52 53 4A 82 83 84 Specific Rotation Magnetic Electric Silicon 3-3 5-10 3-3 10 Phosphorus 3-4 3-4 1 3-3 10 Sulfur 3-3 1 3-3 16 Chlorine 3-3 1-16 Germanium 4-3 10 4-3 12 Arsenic 4-3 10 4-3 14 Selenium 3-4 1 4-3 16 Bromine 3-4 1 4½-4 10 Tin 5-4 5-10 5-4 10 5-4 12 Antimony 5-4 4-10 5-4½ 14 Tellurium 5-4½ 1-10 5-4 16 Iodine 5-4 1-16 5-4 1 Lead 4½-4½ 5 4½-4½ 5 Bismuth 4½-4½ 5-10 Polonium 4½-4½ 5 Element Distance Calc. Obs. 2.31 2.35 2.19 3.46 2.11 3.21 1.92 2.48 2.46 2.37 2.46 2.32 3.46 2.25 3.46 2.80 3.22 2.94 2.83 3.34 2.82 3.71 2.68 3.54 4.46 3.43 3.43 3.14 3.43 2.2 3.48* 2.07 3.27* 1.82 2.52 2.43 2.44* 2.51 2.32 3.46 2.27 3.30 2.80 3.17 3.02 2.87 3.36* 2.86 3.74 2.70 3.54 4.41* 3.49 3.47* 3.10 3.40*

The specific magnetic rotation of the 1B group, which includes only the two elements hydrogen and helium, and the 2A group of eight elements beginning with lithium, combines the values 3 and 2. Where the value 2 applies to the subordinate rotation (3-2), one dimension is inactive; where it applies to the principal rotation (2-3), two dimensions are inactive. This reduces the force exerted by each atom to 2/3 of the normal amount in the case of one inactive dimension, and to 1/3 for two inactive dimensions. The inter-atomic distance is proportional to the square root of the product of the two forces involved. Thus the reduction in distance is also 1/3 per inactive dimension. Since the electric rotation is not a basic motion, but a reverse rotation of the magnetic rotational system, the limitations to which the basic rotation is subject are not applicable. The electric rotation merely modifies the magnetic rotation, and the low value of the force integral for specific rotation 2 makes itself apparent by an inter-atomic distance which is greater than that which would prevail if there were no electric displacement at all (unit specific rotation). Theoretical values of the inter-atomic distances of the lower group elements are compared with measured values in Table 6

Table 6: Distances - Lower Group Elements
Group   Atomic   Element         Specific Rotation        Distance
        Number                   Magnetic    Electric     Calc.    Obs.
1B      1        Hydrogen        3(1)        10           0.70     0.74*
        2        Helium          3(1)        1            1.07     1.09
2A      3        Lithium         2½-2½       2            3.05     3.03
        4        Beryllium       3(2)        2½           2.282    2.28
        5        Boron           3(2)        5            1.68     1.74*
                                 3-3         10           2.11     2.03*
        6        C (diamond)     3(2)        5-10         1.54     1.54
                 C (graphite)    3(2)        1            1.41     1.42
                                 3-3         1            3.21     3.40
        7        Nitrogen        3(1½)       10           1.06     1.06
                                 3-3         1            3.21     3.44*
        8        Oxygen          3(1½)       10           1.06     1.15*
                                 3-3         1            3.21     3.20*
        9        Fluorine        3(2)        10           1.41     1.44*

The figures in parentheses in column 4 of this table indicate the effective number of dimensions. Thus the notation 3(1) shown for hydrogen means that this element has a specific magnetic rotation of 3, effective in only one dimension. Except where the crystals are isometric, there is still much uncertainty in the distance measurements on these lower group elements, and many other values have been reported in addition to those included in the table. This situation will be discussed at length in Chapter 3, where we will have the benefit of measurements of the distances between like atoms that are constituents of chemical compounds.

As indicated in the introductory paragraphs of this chapter, we are not yet in a position where we can determine specifically just what the inter-atomic distance will be for any given element under a given set of conditions. The theoretical considerations that have been discussed actually do lead to specific values in many cases, but in other instances there is an uncertainty as to which of two or more theoretically possible rotational arrangements corresponds to the observed crystal structure. Continuing progress is being made in both the experimental and the theoretical fields, and it can be expected that these uncertainties will gradually diminish toward the irreducible minimum that was mentioned earlier. In the course of this process there will necessarily be some changes in the identifications of the observed inter-atomic distances with the theoretically possible structures. A comparison of Tables 1 to 6 with the corresponding tabulations of the first edition should therefore be of interest as an indication of the nature and magnitude of the changes that have taken place in our view of this interatomic distance situation in the last twenty years, and by extension, an indication of the amount of change that can be expected in the future. Such a comparison shows that the modifications of the original conclusions that now appear to be required, in the light of the additional information that has been made available, are confined almost entirely to those which have resulted from a better theoretical understanding of the behavior of the specific magnetic rotation above an effective value of 4. Few changes are required in either the magnetic or electric values in those rotational combinations where the specific magnetic rotation is 4-4 or less. One of the puzzling features of the rotational situation as it appeared at the time of the original publication was the apparent retrograde progression of the specific magnetic rotation in Groups 4A and 4B. It was recognized at that time that both the 4½ and 5 values of the specific rotation correspond to the same displacement, 4, the difference being that in the case of the 4½ value the rotation extends to two units of vibration, and the last increment of specific rotation in this case is only half size. The next half unit increment, if such an increment were possible, would bring the 4½ rotation back to the 5 value. It would therefore appear that the sequence of specific rotations beyond 4½-4 should be 4½-4½, 5-4½, 5-5, and so on. But the tendency is in the opposite direction. Instead of moving toward higher values as the atomic number increases, there is actually a decreasing trend. This was already evident at the time of publication of the first edition, as the low inter-atomic distances of the series of elements from Tungsten to Platinum could not be accounted for unless the specific magnetic rotation dropped back to 4-4½ from the higher levels of the preceding elements of the 4A group. This decreasing trend has become even more prominent as distances have become available for additional elements of Group 4B, as some of these values indicate specific magnetic rotations of 4-4, or possibly even 4-3½. As it happens, the continuation of the trend toward lower values in the more recent data has had the effect of clarifying the situation. It is now evident that the 5-5 specific rotation is not reached within the accessible portion of Groups 4A and 4B. 
(Considerations that will be discussed later show that the specific rotation of 5-5 would be unstable.) The lower values in the 4A and 4B groups do not result from a decrease in the magnetic displacement, but from a shift of the existing displacement units from vibration one to vibration two, a process which reduces the specific rotation of the units by one half. On a vibration one basis,

rotational displacements 4-3 correspond to specific rotations 5-4. Conversion of successive units of displacement to vibration two, without change in the number of displacement units, results in a series of specific rotations, 5-4, 4½-4, 4-4½, 4-4, and so on. A similar series with one additional displacement unit goes through the values 5-4½, 4½-5, 4½-4½, 4½-4, and then follows the same route as the series with the lower displacement. The modifications that have been made in the theoretical rotational values applicable to the elements of these two highest rotational groups since the publication of the first edition are the result of a review of the situation in the light of this new understanding of the trend of the specific rotation. The general pattern in group 4A is now seen to be that of the series from 5-4½ to 4-4½, with a return to 4½-4½ in the lower electronegative elements. So far as can be determined at this time, Group 4B follows the same pattern one step farther advanced; that is, it begins with 4½-5 rather than 5-4½. The difference in the inter-atomic distance corresponding to one of the steps in this conversion process is relatively small, and in view of the substantial variation in the experimental values it has not appeared advisable to take into account the possibility of combinations such as 4½-5 specific rotation of one atom of a pair and 4½-4½ in the other. It seems clear that such combinations do exist in some of the lower group elements, Sodium, for example, and they probably play some part in the higher groups. Most of the reported distances for Holmium and Erbium, for instance, agree more closely with a combination of 5-4½ and 4½-5 than with either individually. However, all of these values are theoretically possible, and the only question at issue in this and many other similar cases is which theoretical value corresponds to the observed distance. Definitive answers to identification questions of this kind will have to wait until the theoretical probabilities are specifically evaluated, or the experimental uncertainties are resolved. Many questions concerning alternate crystal structures will also have to wait for more information from theory or experiment, particularly where crystal forms that exist only at high temperatures or pressures are involved. There is, however, a large body of information already available in this area, and it can be tied into the theoretical picture as soon as someone has the time and the inclination to undertake the task.

Chapter 3

Distances in Compounds
Thus far in the discussion of the inter-atomic distances we have been dealing with aggregates composed of like atoms. The same general principles apply to aggregates of unlike atoms, but the existence of differences between the components of such systems introduces some new factors that we will now want to examine. The matters to be considered in this chapter have no relevance to direct combinations of electropositive elements (aggregates of which are mixtures or alloys, rather than chemical compounds). As noted in Chapter 18, Vol. I, the proportions in which such elements can

combine may be determined, or limited, by geometrical considerations, but aside from such effects, unlike atoms of this kind can combine on the same basis as like atoms. Here the forces are identical in character and concurrent, the type of combination that we have called the positive orientation. The resultant specific electric rotation, according to the principles previously set forth, is (t1t2)½, the geometric mean of the two constituents. If the two elements have different magnetic rotations, the resultant is also the geometric mean of the individual rotations, as the magnetic rotations always have positive displacements, and these combine in the same manner as the positive electric displacements. The effective electric and magnetic specific rotations thus derived can then be entered in the applicable force and distance equations from Chapter 1. Combinations of unlike positive atoms may also take place on the basis of the reverse orientation, the alternate type of structure that is available to the elemental aggregates. Where the electric rotations of the components differ, the resultant specific rotation of the two-atom combination will not be the required neutral 5 or 10, but a second pair of atoms inversely oriented to the first results in a four-atom group that has the necessary rotational balance. As brought out in Volume I, the simplest type of combination in chemical compounds is based on the normal orientation, in which Division I electropositive elements are joined with Division IV electronegative elements on the basis of numerically equal displacements. The resultant effective specific magnetic rotation can be calculated in the same manner as in the all-positive structures, but, as we saw in our consideration of the inter-atomic distances of the elements, where an equilibrium is established between positive and negative electric rotations, the resultant is the sum of the two individual values, rather than the mean.
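These two combination rules can be restated in computational form. The sketch below is ours (the function name and the particular numerical examples are illustrative only): it returns the geometric mean for like-directed rotations and the sum for a positive-negative equilibrium of the normal orientation.

import math

def resultant_specific_rotation(t1, t2, equilibrium):
    # Like-directed (all-positive) rotations combine as a geometric mean;
    # a positive-negative equilibrium of the normal orientation combines as a sum.
    if equilibrium == "positive":
        return math.sqrt(t1 * t2)
    if equilibrium == "normal":
        return t1 + t2
    raise ValueError(equilibrium)

print(resultant_specific_rotation(2, 8, "positive"))  # 4.0, the geometric mean
print(resultant_specific_rotation(2, 2, "normal"))    # 4, e.g. the value listed for the valence-one compounds of Table 7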

Table 7: Distances - NaCl Type Compounds
Compound   Specific Rotation            Distance
           Magnetic           Elec.     Calc.    Obs.
LiH        3(2)    3(2)       3         2.04     2.04
LiF        3(2)    3(2)       3         2.04     2.01
LiCl       3(2)    3½-3½      4         2.57     2.57
LiBr       3(2)    4-4        4         2.77     2.75
LiI        3(2)    5-4        4         2.96     3.00
NaF        3-2½    3(2)       4         2.26     2.31
NaCl       3-2½    3½-3½      4         2.77     2.81
NaBr       3-2½    4-4        4         2.94     2.98
NaI        3-3     5-4        4         3.21     3.23
MgO        3-3     3(2)       5½        2.15     2.10
MgS        3-3     3½-3½      5½        2.60     2.59
MgSe       3-3     4-4        5½        2.76     2.72
KF         4-3     3(2)       4         2.63     2.67
KCl        4-3     3½-3½      4         3.11     3.14
KBr        4-3     4-4        4         3.30     3.29
KI         4-3     5-4        4         3.47     3.52
CaO        4-3     3(2)       5½        2.38     2.40
CaS        4-3     3½-3½      5½        2.81     2.84
CaSe       4-3     4-4        5½        2.98     2.95
CaTe       4-3     5-4        5½        3.13     3.17
ScN        4-3     3(2)       7         2.22     2.22
TiC        4-3     3(2)       8½        2.12     2.16
RbF        4-4     3(2)       4         2.77     2.82
RbCl       4-4     3½-3½      4         3.24     3.27
RbBr       4-4     4-4        4         3.43     3.43
RbI        4-4     5-4        4         3.61     3.66
SrO        4-4     3(2)       5½        2.51     2.57
SrS        4-4     3½-3½      5½        2.92     2.93
SrSe       4-4     4-4        5½        3.10     3.11
SrTe       4-4     5-4        5½        3.26     3.24
CsF        5-4     3(2)       4         2.96     3.00
CsCl       5-4     4-3        4         3.47     3.51
BaO        5-4½    3(2)       5½        2.72     2.76
BaS        5-4½    4-3        5½        3.17     3.17
BaSe       5-4½    4-4        5½        3.30     3.31
BaTe       5-4½    5-4        5½        3.47     3.49
LaN        5-4     3(2)       6         2.61     2.63
LaP        5-4     4-3        6½        2.99     3.01
LaAs       5-4     4-4        7         3.04     3.06
LaSb       5-4     5-4        7         3.20     3.24
LaBi       5-4     5-4½       7         3.24     3.28

When this arrangement unites one electropositive atom with each electronegative atom the resulting structure is usually a simple cube with the atoms of each element occupying alternate corners of the cube. This is called the Sodium Chloride structure, after the most familiar member of the family of compounds crystallizing in this form. Table 7 gives the interatomic distances of a number of common NaCl type crystals. From this tabulation it can be seen that the special rotational characteristics which certain of the elements possess in the elemental aggregates carry over into their compounds. The second element in each group shows the same preference for rotation on the basis of vibration two that we encountered in examining the structures of the elements. Here, again, this preference extends to some of the following elements, and in such series of compounds as CaO, ScN, TiC, one component keeps the vibration two status throughout the series, and the resulting effective rotations are 5½, 7, 8½, rather than 6, 8, 10. The elements of the lower groups have inactive force dimensions in the compounds just as in the elemental structures previously examined. If the active dimensions are not the same in both components, the full rotational force of the more active component is effective in its excess dimensions, the effective rotation in an inactive dimension being unity. For example, the value of ln t for magnetic rotation 3 is 1.099 in three dimensions, or 0.7324 in two dimensions. If this two-dimensional rotation is combined with a three-dimensional magnetic rotation x, the resultant value of ln t is (0.7324x)½, the geometric mean of the individual values, in two dimensions, and x in the third. The average value for all three dimensions is (0.7324x²)⅓. This dimensional inactivity in the lower groups plays only a minor role in the structures of the elements, as can be seen from the fact that it did not need any attention until almost the end of Table 8.
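A small numerical sketch of this dimensional averaging may be helpful (Python; the identifiers are ours, and the three-dimensional value used for x is simply the rotation 3 value quoted in the text, taken as an example):

ln_t_2dim = 0.7324            # ln t of magnetic rotation 3 limited to two dimensions
x = 1.099                     # ln t of a fully three-dimensional rotation 3, for illustration

shared = (ln_t_2dim * x) ** 0.5            # effective value in each of the two shared dimensions
average = (ln_t_2dim * x * x) ** (1 / 3)   # geometric mean over all three dimensions
print(round(shared, 4), round(average, 4))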

Table 8: Distances - CaF2 Type Compounds
Compound   Specific Rotation            Distance
           Magnetic           Elec.     Calc.    Obs.
Na2O       3-2½    3(2)       3½        2.39     2.40
Na2S       3-2½    4-3        4         2.83     2.83
Na2Se      3-2½    4-4        4         2.94     2.95
Na2Te      3-2½    5-4½       4         3.13     3.17
Mg2Si      3-3     4-3        5         2.73     2.77
Mg2Ge      3-3     4-4        5½        2.76     2.76
Mg2Sn      3-3     5-4        5½        2.90     2.93
Mg2Pb      3-3     5-4½       5½        2.94     2.96
K2O        4-3     3(2)       3½        2.79     2.79
K2S        4-3     4-3        4         3.17     3.20
K2Se       4-3     4-4        4         3.30     3.33
K2Te       4-3     5-4½       4         3.51     3.53
CaF2       4-3     3(2)       5½        2.38     2.36
Rb2O       4-4     3(2)       3½        2.94     2.92
Rb2S       4-4     4-3        4         3.30     3.31
SrF2       4-4     3(2)       5½        2.50     2.50
SrCl2      4-4     4-3        5½        2.98     3.03
BaF2       5-4     3(2)       5½        2.68     2.68
BaCl2      5-4½    4-3        5½        3.17     3.18*

The compounds of lithium with valence one negative elements follow the regular pattern, and were included in Table 7, but the compounds with valence two elements are irregular, and they have therefore been omitted from Table 8. As we will see in Chapter 6, the irregularity is due to the fact that the two lithium atoms in a molecule of the CaF2 type act as a radical rather than as independent constituents of the molecule. These two normal orientation tables, 7 and 8, provide an impressive confirmation of the validity of the theoretical findings. One of the problems in dealing with the inter-atomic distances of the elements is that because of the relatively small total number of elements, the number to which any particular magnetic rotational combination is applicable is quite small, and consequently it is rather difficult to establish a prima facie case for the authenticity of the rotational values. But this is not true of the normal type compounds, as they are more numerous and less variable. There are two elements in these tables, sulfur and chlorine, that have different magnetic rotations under different conditions. These elements have 4-3 rotation in the CaF2 type crystals, and in the NaCl type combinations with elements of Group 4A. In the other compounds of the NaCl type they take the 3½-3½ rotations. There are also two more elements, each of which, according to the information now available, deviates from its normal rotations in one of the listed compounds. Otherwise, all of the elements entering into the 60 compounds in the two tables have the same specific magnetic rotations in every compound in which they participate. Furthermore, when the inherent differences between the elemental and compound aggregates are taken into account, there is also agreement between these rotations in the compounds and the specific rotations of the same elements in the elemental aggregates. The most common difference of this kind is a result of the fact that the Division IV element in a compound has a purely negative role. For this reason it takes the magnetic rotation of the next higher group. In the elemental aggregates half of the atoms are reoriented to act in a positive capacity. Consequently, they tend to retain the normal rotation of the group to which they actually belong. For example, the Division IV elements of Group 3A, germanium, arsenic, selenium, and bromine, have the normal specific rotation of their group, 4-3, in the crystals of the elements, but in the compounds they take the 4-4 specific rotation of Group 3B, acting as negative members of that group. Another difference between the two classes of structures is that those elements of the higher

groups that have the option of extending their rotation to a second vibrational unit are less likely to do so if they are combining with an element which is rotating entirely on the basis of vibration one. Aside from these deviations due to known causes, the values of the specific magnetic rotation determined for the elements in Chapter 2 are also generally applicable to the compounds. This equivalence does not apply to the specific electric rotations, as they are determined by the way in which the rotations of the constituents of each aggregate are oriented relative to each other, a relation that is different in the two classes of structures. This applicability of the same equations and, in general, the same numerical values, to the calculation of distances in both elements and compounds contrasts sharply with the conventional theory that regards the inter-atomic distance as being determined by the "sizes" of the atoms. The sodium atom, or "ion," in the NaCl crystal, for example, is asserted to have a radius only about 60 percent as large as the radius of the atom in the elemental aggregate. If this atom takes part in a compound which cannot be included in the "ionic" class, current theory gives it still a different "size": what is called a "covalent" radius. The need for assuming any extraordinary changeability in the size of what, so far as we can tell, continues to be the same object, is now eliminated by the finding that the variations in the inter-atomic distance have nothing to do with the sizes of the atoms, but merely indicate differences in the location of the equilibrium between the inward and the outward forces to which the atoms are subject. Another type of orientation that forms a relatively simple binary compound is the rotational combination that we found in the diamond structure. As in the elements, this is an equilibrium between an atom of a Division IV element and one of Division III, the requirement being that t1 + t2 = 8. Obviously, the only elements that can meet this requirement by themselves are those whose negative rotational displacement (valence) is 4, but any Division IV element can establish an equilibrium of this kind with an appropriate Division III element. Closely associated with this cubic diamond-like Zinc Sulfide class of crystals is a hexagonal structure based on the same orientation, and containing the same equal proportions of the two constituents. Since these controlling factors are identical in the two forms, the crystals of the hexagonal Zinc Oxide class have the same inter-atomic distances as the corresponding Zinc Sulfide structures. In such instances, where the inter-atomic forces are the same, there is little or no probability advantage of one type of crystal over the other, and either may be formed under appropriate conditions. Table 9 lists the inter-atomic distances for some common crystals of these two classes.

Table 9: Distances - Diamond Type Compounds
Compound   Specific Rotation            Distance
           Magnetic           Elec.     Calc.    Obs.
ZnS (Cubic) Class
AlP        3-4     3½-3½      10        2.32     2.35
AlAs       3-4     4-4        10        2.62     2.43
AlSb       3-4     5-4½       10        2.62     2.66
SiC        3-4     3(2)       10        1.94     1.93*
CuCl       3-4     3½-3½      10        2.32     2.35
CuBr       3-4     4-4        10        2.46     2.46
CuI        3-4     5-4        10        2.59     2.62
ZnS        3-4     3½-3½      10        2.32     2.36
ZnSe       3-4     4-4        10        2.46     2.45
ZnTe       3-4     5-4½       10        2.62     2.63*
GaP        3-4     3½-3½      10        2.32     2.36
GaAs       3-4     4-4        10        2.46     2.43
GaSb       3-4     5-4½       10        2.62     2.65
AgI        4-4     5-4        10        2.80     2.81
CdS        4-4     3½-3½      10        2.51     2.52
CdTe       4-4     5-4        10        2.80     2.78
InP        4-4     3½-3½      10        2.51     2.54
InAs       4-4     4-4        10        2.66     2.62
InSb       4-4     5-4        10        2.80     2.80
ZnO (Hexagonal) Class
AlN        3-4     3(2)       10        1.94     1.90
ZnO        3-4     3(2)       10        1.94     1.95
ZnS        3-4     3½-3½      10        2.32     2.33
GaN        3-4     3(2)       10        1.94     1.94
AgI        4-4     5-4        10        2.80     2.81
CdS        4-4     3½-3½      10        2.51     2.51
CdSe       4-4     4-4        10        2.66     2.63
InN        4-4     3(2)       10        2.15     2.13

The comments that were made about the consistency of the specific rotation values in Tables 7 and 8 are applicable to the values in Table 9 as well. Most of the elements participating in the compounds of this table have the same specific rotations as in the previous tabulations, and where there are exceptions, the deviations are of a regular and predictable nature. A feature of Table 9 is the appearance of one of the normally electropositive elements of Group 2B, Aluminum, in the role of a Division III element. Beryllium and magnesium also form ZnS type compounds, but like the lithium compounds previously mentioned they are irregular, probably for the same reason, and have not been included in the tabulation. The Division III behavior of these normally Division I elements is a result of the small size of the lower groups, which puts their Division I elements into the same positions with respect to the electronegative zero point as the Division III elements of the larger groups. This relationship is indicated in the following tabulation, where the asterisks identify those elements that are normally in Division I.

Division III
Be*   B*    C     N     O     F
Mg*   Al*   Si    P     S     Cl
Zn    Ga    Ge    As    Se    Br

None of the orientations thus far considered is applicable to compounds of the Division II elements. The normal orientation does not exist above a specific rotation of 5, as the higher value would put the relative rotation above the limiting value 10. The Zinc Oxide and Zinc Sulfide types of combination are electronegative structures, and the reverse orientation of the Division II elemental structures is not available for compounds with negative elements. The Division II elements therefore form their compounds on the basis of the magnetic orientation. This type of structure is theoretically available for any element, but its use is limited by probability considerations. It is utilized in many of the compounds of Divisions III and IV, especially in the higher rotational groups, but rarely appears in Division I combinations because of the very high probability of the normal orientation in this division. Since the magnetic rotation is distributed over all three dimensions, its effective component is not altered by a change in position, and has the same value in the magnetic orientations as in the corresponding compounds based on the electric orientations. In order to establish the magnetic type of equilibrium, however, the axis of the negative electric rotation has to be parallel to that of one of the magnetic rotations, and it is therefore perpendicular to the axis of the positive electric rotation. Consequently, the latter takes no part in the normal interatomic force equilibrium, and it constitutes an additional orienting influence, the effects of which were discussed in Volume I. In these compounds of the magnetic type the displacement of the negative component (-x) is balanced by a numerically equal positive displacement (x). Thus the magnetic orientation is somewhat similar to the normal orientation. However, the magnetic rotation is opposite in vectorial direction to the electric rotation, and the resultant relative rotation effective in the dimension of combination is therefore one of the neutral values 10, 5, or a combination of these two, rather than the 2x of the normal orientation. Compounds based on the magnetic orientation occur in a variety of crystal forms, the nature of which depends on the degree of force symmetry and the number of atoms of each kind in the equilibrium system. In some cases there is enough symmetry to make isometric structures of the NaCl, CaF2, and similar types possible. Other crystals are asymmetric. A common arrangement for the binary compounds is the Nickel Arsenide structure, a hexagonal crystal in which the positive atoms occupy the face positions and the negative atoms are in the central positions, spaced alternately 1/4 and 3/4 along the c axis. Table 10 shows the inter-atomic distances calculated for some NiAs and NaCl, type crystals of binary magnetic orientation compounds of Group 3A. Almost all of the NiAs type compounds that have been examined in the course of this present work take the vibration one value of the specific electric rotation: 10. The magnetic orientation compounds with the NaCl structure are quite evenly divided between the 10 rotation and the combination 5-10 in the 3A group, but utilize the 5-10 rotation almost exclusively in the higher groups. 
In order to show as wide a variety of the features of these magnetic type compounds as is possible in the limited amount of space that can be allotted to them, Table 10 has been restricted to Group 3A compounds, and the following Table 11 gives the data for a representative sample of the compounds of the rare earth elements (from Group 4A), together with a selection of compounds from Group 4B, in which the identical values of the inter-atomic distance in the combinations of the elements of this group with the

Division IV elements of Group 2A are emphasized.

Table 10: Distances - Binary Magnetic Orientation Compounds
Compound   Specific Rotation            Distance
           Magnetic           Elec.     Calc.    Obs.
NiAs (Hexagonal) Class - Group 3A
VS         4-3     3½-3½      10        2.42     2.42
VSe        4-3     4-4        10        2.56     2.55
CrS        4-3     3½-3½      10        2.42     2.44
CrSe       4-3     4-4        10        2.56     2.54
CrSb       4-3     5-4½       10        2.73     2.74
CrTe       4-3     5-4½       10        2.73     2.77
MnAs       4-3     4-4        10        2.56     2.58
MnSb       4-3     5-4½       10        2.73     2.78
FeS        4-3     3½-3½      10        2.42     2.45
FeSe       4-3     4-4        10        2.56     2.55
FeSb       4-3     5-4        10        2.69     2.67
FeTe       3-4     5-4        10        2.59     2.61
CoS        3-4     3½-3½      10        2.32     2.33
CoSe       3-4     4-4        10        2.46     2.46
CoSb       3-4     5-4        10        2.59     2.58
CoTe       3-4     5-4        10        2.59     2.62
NiS        3½-3½   3½-3½      10        2.37     2.38
NiAs       3½-3½   4-3        10        2.42     2.43
NiTe       3½-3½   5-4        10        2.64     2.64
NaCl (Cubic) Class - Group 3A
VN         4-3     3(2)       10        2.04     2.06
VO         4-3     3(2)       10        2.04     2.05
CrN        4-3     3(2)       10        2.04     2.07
MnO        3½-3½   3(2)       5-10      2.18     2.22
MnS        3½-3½   3½-3½      5-10      2.59     2.61
MnSe       3½-3½   4-4        5-10      2.75     2.72
FeO        3-4     3(2)       5-10      2.12     2.16
CoO        3-4     3(2)       5-10      2.12     2.12

Thus far the calculation of equilibrium distances has been carried out by crystal types as a matter of convenience in identifying the effect of various atomic characteristics on the crystal form and dimensions. It is apparent from the points brought out in the discussion, however, that identification of the crystal type is not always essential to the determination of the inter-atomic distance. For example, let us consider the series of compounds NaBr, Na2Se, and Na3As. From the relations that have been established in the preceding pages we may conclude that these Division I compounds are formed on the basis of the normal orientation. We therefore apply the known value of the relative specific electric rotation of a normal orientation Sodium compound, 4, and the known values of the normal specific magnetic rotations of Sodium and the Group 3B elements, 3-3½ and 4-4 respectively, to

equation 1-10, from which we ascertain that the most probable inter-atomic distance in all three compounds is 2.95, irrespective of the crystal structure. (Measured values are 2.97, 2.95, and 2.94 respectively.)

Table 11: Distances - Binary Magnetic Orientation Compounds
Compound   Specific Rotation            Distance
           Magnetic           Elec.     Calc.    Obs.
CeN        5-4     3(2)       5-10      2.52     2.50
CeP        5-4     4-3        5-10      2.94     2.95
CeS        5-4     3½-3½      5-10      2.89     2.89*
CeAs       5-4     4-4        5-10      3.06     3.03
CeSb       5-4     5-4        5-10      3.22     3.20
CeBi       5-4     5-4        5-10      3.22     3.24
PrN        5-4     3(2)       5-10      2.52     2.58
PrP        5-4     4-3        5-10      2.94     2.93
PrAs       4½-4    4-4        5-10      2.98     3.00
PrSb       4½-4    5-4        5-10      3.14     3.17
NdN        5-4     3(2)       5-10      2.52     2.57
NdP        5-4     4-3        5-10      2.94     2.91
NdAs       4½-4    4-4        5-10      2.98     2.98
NdSb       4½-4    5-4        5-10      3.14     3.15
EuS        5-4     4-3        5-10      2.94     2.98
EuSe       5-4     4-4        5-10      3.06     3.08
EuTe       5-4     5-4½       5-10      3.26     3.28
GdN        5-4     3(2)       5-10      2.52     2.50*
YbSe       4½-4    4-4        5-10      2.98     2.93
YbTe       4½-4    5-4        5-10      3.14     3.17
ThS        4½-4½   3½-3½      5-10      2.85     2.84
ThP        4½-4½   4-3        5-10      2.91     2.91
UC         4½-4½   3(2)       5-10      2.47     2.50*
UN         4½-4½   3(2)       5-10      2.47     2.44*
UO         4½-4½   3(2)       5-10      2.47     2.46*
NpN        4½-4½   3(2)       5-10      2.47     2.45*
PuC        4½-4½   3(2)       5-10      2.47     2.46*
PuN        4½-4½   3(2)       5-10      2.47     2.45*
PuO        4½-4½   3(2)       5-10      2.47     2.48*
AmO        4½-4½   3(2)       5-10      2.47     2.48*

The possible inter-atomic distances in the more complex compounds can be calculated in a similar manner, without the necessity of analyzing the great variety of geometrical structures in which these compounds crystallize. The usefulness of this procedure in application to compounds in general is limited, at the present stage of the theoretical development, because we are not normally able to define the specific rotations from theoretical premises as definitely as in the foregoing illustration. It is of considerable value, however, in dealing with the lower electronegative elements, whose specific electric rotations are confined to the neutral values, and whose variability in the magnetic dimensions is only in the number of

inactive dimensions (that is, dimensions in which the specific rotation is 2). The elements involved are those of groups 1B and 2A; hydrogen, carbon, nitrogen, oxygen, and fluorine, together with Boron, one of the normally electropositive elements of Group 2A. The other two positive elements of this group, lithium and beryllium, are also two-dimensional under most conditions, but they take the positive orientation, and have much greater inter-atomic distances. Table 12 gives the theoretically possible inter-atomic distances of these lower group elements, with some examples of the measured values corresponding to the calculated distances.

Table 12: Distances - Lower Negative Elements
Specific Rotation        Distance             Comb.    Example          Obs.
Magnetic        Elec.    n.u.      Å
3(1)   3(1)     10       0.241     0.70       H-H      H2               0.74
3(1)   3(1½)    10       0.317     0.92       H-F      HF               0.92
                                              H-C      Benzene          0.94
                                              H-O      Formic acid      0.95
3(1½)  3(1½)    10       0.363     1.06       H-N      Hydrazine        1.04
                                              H-C      Ethylene         1.06
                                              C-N      NaCN             1.09
                                              N-N      N2               1.09
                                              C-O      COS              1.10
3(1)   3(2)     10       0.406     1.18       C-O      CO2              1.15
                                              C-N      Cyanogen         1.16
                                              H-B      B2H6             1.17
                                              N-N      CuN3             1.17
                                              N-O      N2O              1.19
                                              C-C      Acetylene        1.20
3(1½)  3(2)     10       0.445     1.30       H-B      B2H6             1.27
                                              C-O      CaCO3            1.29
                                              B-F      BF3              1.30
                                              C-N      Oxamide          1.31
                                              C-F      CF3Cl            1.32
                                              C-C      Ethylene         1.34
3(2)   3(2)     10       0.483     1.41       C-C      Benzene          1.39
                                              N-O      HNO3             1.41
                                              C-C      Graphite         1.42
                                              C-N      DL-Alanine       1.42
                                              C-O      Methyl ether     1.42
                                              C-F      CH3F             1.42
3(2)   3(2)     5-10     0.528     1.54       C-C      Diamond          1.54
                                              C-C      Propane          1.54
                                              B-C      B(CH3)2          1.56

The experimental results are not all in agreement with the theory. On the contrary, they are widely scattered. The measured C-C distances, for example, cover almost the entire range from 1.18, the minimum for this combination, to the maximum 1.54. However, the basic compounds of each class do agree with the theoretical values. The paraffin hydrocarbons, benzene, ethylene, and acetylene, have C-C distances approximating the theoretical 1.54,

1.41, 1.30, and 1.18 respectively. All C-H distances are close to the theoretical 0.92 and 1.06, and so on. It can reasonably be concluded, therefore, that the significant deviations from the theoretical values are due to special factors that apply to the less regular structures. A detailed investigation of the reasons for these deviations is beyond the scope of this present work. However, there are two rather obvious causes that are worth mentioning. One is that forces exerted by adjacent atoms may modify the normal result of a two-atom interaction. An interesting point in this connection is that the effect, where it occurs, is inverse; that is, it increases the atomic separation, rather than decreasing it as might be expected. The natural reference system always progresses at unit speed, irrespective of the positions of the structures to which it applies, and consequently the inward force due to this progression always remains the same. Any interaction with a third atom introduces an additional rotational (outward) force, and therefore moves the point of equilibrium outward. This is illustrated in the measured distances in the polynuclear derivatives of benzene. The lowest C-C distances in these compounds, 1.38 and 1.39, are found along the outer edges of the molecular structures, while the corresponding distances in the interiors of the compounds, where the influence of adjoining atoms is at a maximum, characteristically range from 1.41 to 1.43. Another reason for discrepancies is that in many instances the measurement and the theoretical calculation do not apply to the same quantity. The calculation gives us the distance between structural units, whereas the measurements apply to the distances between specific atoms. Where the atoms are the structural units, as in the compounds of the NaCl class, or where the inter-group distance is the same as the inter-atomic distance, as in the normal paraffins, there is no problem, but exact agreement cannot be expected otherwise. Again we can use benzene as an example. The C-C distance in benzene is generally reported as 1.39, whereas the corresponding theoretical distance, as indicated in Table 12, is 1.41. But, according to the theory, benzene is not a ring of carbon atoms with hydrogen atoms attached; it is a ring of CH neutral groups, and the 1.41 neutral value applies to the distance between these neutral groups, the structural units of the molecule. Since the hydrogen atoms are known to be outside the carbon atoms, if these atoms are coplanar it follows that the distance between the effective centers of the CH groups must be somewhat greater than the distance between the carbon atoms of these groups. The 1.39 measurement between the carbon atoms is therefore entirely consistent with the theoretical distance calculations. The same kind of deviation from the results of the (apparent) direct interaction between two individual atoms occurs on a larger scale where there is a group of atoms that is acting structurally as a radical. Many of the properties of molecules composed in part, or entirely, of radicals or neutral groups are not determined directly by the characteristics of the atoms, but by the characteristics of the groups. The NH4 radical, for example, has the same specific rotations, when acting as a group, as the rubidium atom, and it can be substituted in the NaCl type crystals of the rubidium halides without altering the volume. Consequently, the inter-atomic distances have no direct significance in compounds containing these groups.
It is theoretically feasible to locate the effective centers of the various groups, and to measure the inter-group distances that correspond to those calculated from theory, but this task has not yet been undertaken, and it will not be possible at this time to present a comparison between theoretical and experimental distances in compounds containing radicals

comparable to the comparisons in Tables 1 to 12. Some preliminary results have been obtained, however, on the relation between the theoretical distances and the density in complex compounds. There are a number of factors, not yet investigated in detail, that have some influence on the density of solid matter, and for that reason the conclusions thus far derived from theory are somewhat tentative, and the correlations between theory and observation are only approximate. Nevertheless, certain aspects of these tentative results are significant, and are of enough interest to justify giving them some attention. If we divide the molecular mass, in terms of atomic weight units, by the density, we arrive at the molecular volume in terms of the units entering into the density measurement. For present purposes it will be convenient to convert this quantity to natural units of volume. The applicable conversion factor is the cube of the time region unit of distance divided by the mass unit atomic weight. In the cgs system of units it has the numerical value 14.908. In Table 13 the average volumes per volumetric group of a representative number of inorganic compounds containing radicals (V), as calculated from the measured densities, are compared with the cubes of the inter-group distances (S03), as calculated on the theoretical basis previously described.
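As a concrete illustration of this conversion, the following short Python fragment (the function name is ours) reproduces the V column of Table 13 from the molecular mass, the density, and the number of volumetric groups:

CONVERSION = 14.908   # cgs value of the conversion factor quoted above

def volume_per_group(m, d, n):
    # m: molecular mass in atomic weight units, d: density in g/cm3,
    # n: number of volumetric groups in the molecule
    return m / (d * CONVERSION * n)

print(round(volume_per_group(85.01, 2.261, 2), 3))   # NaNO3 -> 1.261
print(round(volume_per_group(101.10, 2.109, 2), 3))  # KNO3  -> 1.608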

Table 13: Molecular Volume
Compound      m        d       n    V       S03     c    ab1      ab2
NaNO3         85.01    2.261   2    1.261   1.241   4    3-3      4-5
KNO3         101.10    2.109   2    1.608   1.565   4    4-3      4-5
Ca(NO3)2     164.10    2.36    3    1.554   1.565   4    4-3      4-5
RbNO3        147.49    3.11    2    1.590   1.63    4    4-4      4-4
Sr(NO3)2     211.65    2.986   3    1.585   1.631   4    4-4      4-4
CsNO3        194.92    3.685   2    1.774   1.825   4    4½-4½    4-4
Na2CO3       106.00    2.509   3    0.944   0.970   4    3-3      3½-3½
MgCO3         84.33    3.037   2    0.931   0.970   4    3-3      3½-3½
K2CO3        138.20    2.428   3    1.272   1.222   4    4-3      3½-3½
CaCO3        100.09    2.711   2    1.238   1.222   4    4-3      3½-3½
BaCO3        197.37    4.43    2    1.494   1.532   4    4½-4½    3½-3½
FeCO3        115.86    3.8     2    1.022   0.976   5    4-3      3½-3½
CoCO3        118.95    4.13    2    0.966   0.976   5    4-3      3½-3½
Cu2CO3       187.09    4.40    3    0.950   0.976   5    4-3      3½-3½
ZnCO3        125.39    4.44    2    0.947   0.976   5    4-3      3½-3½
Ag2CO3       275.77    6.077   3    1.015   1.096   5    4-4      3½-3½

The specific electric rotation (c) for the compounds with the normal orientation is 4, as in the valence one binary compounds. Those with the magnetic orientation take the neutral value 5. The applicable specific magnetic rotations for the positive component and the negative radical are shown in the columns headed ab1 and ab2 respectively. Columns 2, 3, and 4 give the molecular mass (m), the density of the solid compound (d), and the number of volumetric units in the molecule (n). Here, again, as in the earlier tables, the calculated and

empirical values are not exactly comparable, as the measured values of the densities have been used directly, rather than being projected back to zero temperature, a refinement that would be required for accuracy, but is not justified at this early stage of the investigation. In this table there are five pairs of compounds, such as Ca(NO3)2 and KNO3 in which the inter-group distances are the same, and the only difference between the pairs, so far as the volumetric factors are concerned, is in the number of structural groups. Because of the uncertainties involved in the measured densities, it is difficult to reach firm conclusions on the basis of each pair considered individually, but the average volume per group, calculated from the density, in the five two-group structures is 1.267, whereas in the five three-group structures the average is 1.261. It is evident from this that the volumetric equality of the group and the independent atom which we noted in the case of the NH4 radical is a general proposition, in this class of compounds at least. This is a point that will have a special significance when we take up consideration of the liquid volume relations. In closing the discussion in this chapter it is appropriate to reiterate that the values of the inter-atomic and inter-group distance derived from theory apply to the separations as they would exist if the equilibrium were reached at zero temperature and zero pressure. In the next two chapters we will consider how these distances are modified when the solid structure is subjected to finite pressures and temperatures.
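The averages quoted in the preceding paragraph can be reproduced from the V values of Table 13. The grouping used in the short Python check below is our reading of the five pairs (the text names only the Ca(NO3)2-KNO3 pair explicitly), so it should be taken as a plausibility check rather than as part of the original presentation:

two_group   = [1.608, 1.590, 0.931, 1.238, 0.966]   # KNO3, RbNO3, MgCO3, CaCO3, CoCO3
three_group = [1.554, 1.585, 0.944, 1.272, 0.950]   # Ca(NO3)2, Sr(NO3)2, Na2CO3, K2CO3, Cu2CO3

print(round(sum(two_group) / 5, 3), round(sum(three_group) / 5, 3))   # 1.267 and 1.261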

Chapter 4

Compressibility
One of the simplest physical phenomena is compression, the response of the time region equilibrium to external forces impressed upon it. With the benefit of the information developed earlier, we are now in a position to begin an examination of the compression of solids, disregarding for the present the question of the origin of the external forces. For this purpose we introduce the concept of pressure, which is defined as force per unit area.

P = F/s²   (4-1)

In many cases it will be convenient to deal with pressure on a volume basis rather than on an area basis. We therefore multiply both force and area by distance, s, which gives us the alternative equation:

P = Fs/s³ = E/V   (4-2)

In the region outside unit distance, where the atoms or molecules of matter are independent, the total energy of an aggregate can thus be expressed in terms of pressure and volume as

E = PV   (4-3)

As we will find in the next chapter when we begin consideration of thermal motion, a condition of constant temperature is a condition of constant energy, other things being equal. Equation 4-3 thus tells us that for an aggregate in which the cohesive forces between the atoms or molecules are negligible, an ideal gas, the volume at constant

temperature is inversely proportional to the pressure. This is Boyle's law, one of the well-established relations of physics. For application to the time region in which the solid equilibrium is located, the second power of the volume must be substituted for the first power, in accordance with the general inter-regional relation previously established. The time region equivalent of Boyle's Law is therefore

PV² = k   (4-4)

In terms of volume this becomes

V = k/P½   (4-5)

This equation tells us that at constant temperature the volume of a solid is inversely proportional to the square root of the pressure. The pressure represented by the symbol P in this equation is, of course, the total effective pressure; that is, the pressure equivalent of all of the forces acting in opposition to the rotational forces of the atom. The force due to the progression of the natural reference system opposes the rotational forces, and acts in parallel with the external compressive forces, but it has the same magnitude regardless of whether or not any such external forces are present. It therefore exerts what we may call an internal pressure, an already existing level of pressure to which an external pressure becomes an addition. In order to conform to established usage and to avoid confusion, the symbol P will hereafter refer to the external pressure only, the total pressure being expressed as P0 + P. On this basis, equation 4-5 may be restated as

V = k/(P0+P)½   (4-6)

Compression is normally expressed in terms of relative rather than absolute volumes, the reference volume being the volume at zero external pressure, where equation 4-6 has the form

V0 = k/P0½   (4-7)

Dividing equation 4-6 by equation 4-7, and rearranging, we obtain

V/V0 = P0½/(P0+P)½   (4-8)
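Equation 4-8 gives the relative volume directly once the internal pressure is known. The sketch below (Python; the function name is ours, and the internal pressure used is roughly the sodium value listed later in Table 14, 33.4 M kg/cm², written out in kg/cm²) shows how the curve behaves:

def relative_volume(P, P0):
    # Equation 4-8: V/V0 = sqrt(P0 / (P0 + P)), with P the external pressure
    return (P0 / (P0 + P)) ** 0.5

P0 = 33400.0   # internal pressure in kg/cm2, approximately the sodium value of Table 14
for P in (0.0, 10000.0, 30000.0, 100000.0):
    print(int(P), round(relative_volume(P, P0), 3))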

As this equation brings out, the internal pressure, P0, is the key factor in the compression of solids. Inasmuch as this pressure is a result of the progression of the natural reference system which, in the time region, is carrying the atoms inward in opposition to their rotational forces (gravitation), the inward force acts only on two dimensions (an area), and the magnitude of the pressure therefore depends on the orientation of the atom with respect to the line of the progression. As indicated in connection with the derivation of the inter-regional ratio, there are 156.444 possible positions of a displacement unit in the time region, of which a fraction az represents the area subjected to pressure, a and z being the effective displacements in the active dimensions. The letter symbols a, b, and c, are used as indicated in Chapter 10, Volume I. The displacement z is either the electric displacement c or the second magnetic displacement b, depending on the orientation of the atom.

From the principle of equivalence of natural units it follows that each natural unit of pressure exerts one natural unit of force per unit of cross-sectional area per effective rotational unit in the third dimension of the equivalent space. However, the pressure is measured in the units applicable to the effect of external pressure. The forces involved in this pressure are distributed to the three spatial dimensions and to the two directions in each dimension. Only those in one direction of one dimension–one sixth of the total–are effective against the time region structure. Applying this 1/6 factor to the ratio az/156.444, we have for the internal pressure per rotational unit at unit volume,

P0 = az/938.67   (4-9)

This expression may now be generalized to apply to y rotational units and V units of volume, as follows:

P0 = azy/(938.67V)   (4-10)

The force exerted by the progression of the natural reference system is independent of the geometrical arrangement of the atoms, and the volume term in equation 4-10 refers to what we may call the three-dimensional atomic space, the cube of the inter-atomic distance, rather than to the geometric volume. We will therefore replace V by S03. This gives us the internal pressure equation in final form:

P0 = azy/(938.67S03)   (4-11)

The value derived from this equation is the magnitude of the internal pressure in terms of natural units. To obtain the pressure in terms of any conventional system of units it is only necessary to apply a numerical coefficient equal to the value of the natural unit of pressure in that system. This natural unit was evaluated in Volume I as 5.282 x 10¹² dynes/cm². The corresponding values in the systems of units used in the reports of the experiments with which comparisons will be made in this chapter are:

1.554 x 10⁷ atm
1.606 x 10⁷ kg/cm²
1.575 x 10⁷ megabars

In terms of the units used by P.W. Bridgman, the pioneer investigator in the field, in most of his work, equation 4-11 takes the form

P0 = 17109 azy/S03 kg/cm²   (4-12)
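In Bridgman's units, equation 4-12 is a one-line computation. The sketch below (the function name is ours) reproduces two of the internal pressures that appear later in Table 14:

def internal_pressure_M_kg_cm2(a, z, y, s0_cubed):
    # Equation 4-12, with the result converted to thousands of kg/cm2 (M kg/cm2)
    return 17109 * a * z * y / s0_cubed / 1000.0

print(round(internal_pressure_M_kg_cm2(4, 1, 1, 1.151), 1))  # lithium, factors 4-1-1 -> 59.5
print(round(internal_pressure_M_kg_cm2(4, 8, 1, 1.033), 0))  # titanium, factors 4-8-1 -> 530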

The internal pressure thus calculated for any specific substance is not usually constant through the entire external pressure range. At low total pressures, the orientation of the atom with respect to the line of progression of the natural reference system is determined by the thermal forces which, as we will see later, favor the minimum values of the effective cross-sectional area. In the low range of total pressures, therefore, the cross-section is as small as the rotational displacements of the atom will permit. In accordance with Le Chatelier's Principle, a higher pressure, either internal or external, applied against the equilibrium system causes the orientation to shift, in one or more steps, toward higher displacement values. At extreme pressures the compressive force is exerted against the maximum cross-section: 4 magnetic units in one dimension and 8 electric units in another. Similarly, only one of the magnetic rotational units in the atom participates in the radial component y of the resistance to compression at the low

pressures, but further application of pressure extends the participation to additional rotational units, and at extreme pressures all of the rotational units in the atom are involved. The limiting value of y is therefore the total number of such units. The exact sequence in which these two kinds of factors increase in the intermediate pressure range has not yet been determined, but for present purposes a resolution of this issue is not necessary, as the effect of any specific amount of increase is the same in both cases. Helium and neon, the first two of the inert gases, the elements that have no effective rotation in the electric dimension, take the absolute minimum compression factors: one rotating unit with one effective unit of displacement in each of the two effective dimensions. The azy factors for these elements can be expressed as 1-1-1. In this notation, which we will use for convenience in the subsequent discussion, the numerical values of the compression factors are given in the same order as in the equations. It should be noted that the absolute minimum compression, that applicable to the elements of least displacement, is explicitly defined by the factors 1-1-1. The value of the factor a increases in the higher members of the inert gas series because of their greater magnetic displacement. Because of their negative displacement in the electric dimension, which, in this context, is equivalent to the zero displacement of the inert gases, the electronegative elements follow the inert gas pattern, taking the minimum 1-1-1 factors in the lowest members of the lowest rotational groups, and values that are higher, but still generally well below those of the corresponding electro-positive elements, as the displacement increases in either or both of the atomic rotations. None of the elements of the electronegative divisions below electric displacement 7 has the 4-8 az factors initially, although they are capable of these high levels, and can eventually reach them under appropriate conditions. All of the electropositive elements studied by Bridgman have the full 4 units in one dimension; that is, a = 4. The value of z for the alkali metals is equal to the electric displacement, one unit, and since y takes the minimum value under low pressure conditions, the compression factors for these elements are 4-1-1. The displacement 2 elements (calcium, etc.) take the intermediate values 4-2-1 or 4-3-1. The greater displacements of the elements that follow have a double effect. They increase the internal pressure directly by enlarging the effective cross-section, and this higher internal pressure then has the same effect as a greater external pressure in causing a further increase in the compression factors. Most of these elements therefore utilize the full displacements of the active cross-section dimensions from the start of compression; that is, 4-4-1 (az = ab, two magnetic dimensions) in some of the lower group elements and the transition elements of Group 4A, and 4-8-1, or 4-8-n (az = ac, one magnetic and one electric dimension) in the others. The factors that determine the internal pressures of the compounds that have been investigated thus far fall mainly in the intermediate range, between 4-1-1 and 4-4-1. NaCl, for instance, has 4-2-1 initially, and shifts to 4-3-1 in the pressure range between 30 and 50 M kg/cm2. AgCl has 4-3-1 initially, and carries these factors up to a transition point near Bridgman's pressure limit of 100 M kg/cm2. 
CaF2 has the factors 4-4-1 from the start of compression. The initial values of the internal pressure of most of the inorganic compounds examined in this investigation are based on one or another of these three patterns. Those of the organic compounds are mainly 4-1-1, 4-2-1, or an intermediate value, 4-1½-1.

Compression is ordinarily measured in terms of relative volume, and most of the discussion in this chapter will deal with the subject on this basis, but for some purposes we will be interested in the compressibility, the rate of change of volume under pressure. This rate is obtained by differentiating equation 4-8:

(1/V0)(dV/dP) = P0^(1/2) / [2(P0 + P)^(3/2)]    (4-13)

The compressibility at P = 0, the initial compressibility, is of particular interest. For all practical purposes it is the same as the compressibility at one atmosphere, this pressure being only a small fraction of the internal pressure P0. The initial compressibility may be obtained from equation 4-13 by letting P equal zero. The result is

(1/V0)(dV/dP) (P = 0) = 1/(2P0)    (4-14)
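As a quick numerical check, equation 4-14 can be applied directly to the internal pressures listed in Table 14. The brief sketch below assumes only that the tabulated "M kg/cm2" values are thousands of kg/cm2; with that reading it reproduces the calculated compressibility column for the alkali metals.

```python
# Equation 4-14: initial compressibility = 1/(2*P0).
# Assumption: the tabulated "M kg/cm2" values are thousands of kg/cm2.
internal_pressures = {"Li": 59.5, "Na": 33.4, "K": 18.7}   # P0, M kg/cm2 (Table 14)

for element, p0_m in internal_pressures.items():
    p0 = p0_m * 1.0e3                          # kg/cm2
    beta0 = 1.0 / (2.0 * p0)                   # initial compressibility, per kg/cm2
    print(f"{element}: {beta0 * 1e6:.2f} x 10^-6")
# Prints roughly 8.40, 14.97, 26.74 - compare the Calc. column of Table 14.
```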

Since the initial compressibility is a quantity that can be measured, its simple and direct relation to the internal pressure provides a significant confirmation of the physical reality of that theoretical property of matter. The compression factors derived theoretically for those elements on which consistent compressibility data are available for comparison, the internal pressures calculated from these factors, and the initial compressibilities corresponding to the calculated internal pressures are listed in Table 14, together with measured values of the initial compressibility at room temperature. Two sets of experimental values are given, one from Bridgman and one from a more recent compilation. Values of S0³, except those marked with asterisks, are computed from the inter-atomic distances (S0) in the tables of Chapter 2. Where the structure is anisotropic the S0³ value shown is the product of one of the distances given in the earlier tabulations and the square of the other. The reason for the occurrence of the indicated deviations from the Chapter 2 values will be explained later.

Table 14: Initial Compressibility
            S0³     Comp. Factors    P0            Initial Compressibility x 10^6
                    a   z   y        (M kg/cm2)    Calc.    Obs.3    Obs.4
Li         1.151    4   1   1          59.5         8.42     8.41     8.46
Be         0.482    4   4   1         568           0.88     0.87     0.98
C (dia.)   0.147    4   6   1        2793           0.18     0.18     0.18
Na         2.048    4   1   1          33.4        14.97    15.1     14.42
Mg         1.291    4   4   1         212           2.36     2.86     2.77
Al         0.915    4   5   1         374           1.34     1.30     1.36
Si         0.497    4   4   1         551           0.91     0.31     0.99
K          3.659    4   1   1          18.7        26.74    31.0     30.4
Ca         2.588    4   3   1          79.3         6.31     5.51     6.45
Ti         1.033    4   8   1         530           0.94     0.77     0.93
V          0.729    4   8   1         751           0.67     0.59     0.61

Cr Mn Fe Co Ni Cu Zn Ge Rb Sr Zr Nb Mo Ru Rh Pd Ag Cd In Sn Sb Cs Ba La Ce Pr Nd Sm Gd Dy Ho Er Tm Yb Lu Ta W Ir Pt Au Ti Pb Bi Th U

0.603 0.705 0.603 0.603* 0.603* 0.652 0.903 0.603 4.616 3.268 1.306 0.921 0.764* 0.764* 0.764 0.823 0.956 1.118 1.165* 0.913* 1.325* 5.774 2.686* 2.044 1.893 1.758* 1.758* 1.758* 1.346* 1.346 1.346* 1.346* 1.346* 2.167* 1.346* 1.027* 0.953* 0.823 0.823 0.953 1.631 1.249* 1.249 1.758 0.984

4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4

8 8 8 8 8 6 4 4 1 3 6 8 8 8 8 8 8 4 4 4 4 1 2 4 4 4 4 4 4 4 4 4 4 2 4 8 8 8 8 8 4 4 3 8 8

1 1 1 1 1 1 1 1 1 1 1½ 1½ 2 2 2 1½ 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 3 3 2 1½ 1 1 1 1 1

908 777 908 908 908 630 303 454 14.8 62.8 472 892 1433 1433 1433 998 573 245 235 300 207 11.9 51.0 134 145 156 156 156 203 203 203 203 203 63.2 203 1066 1723 1996 1330 862 168 219 164 311 556

0.55 0.64 0.55 0.55 0.55 0.79 1.65 1.10 33.78 7.96 1.06 0.56 0.35 0.35 0.35 0.50 0.87 2.04 2.13 1.67 2.42 42.0 9.80 3.73 3.45 3.21 3.21 3.21 2.46 2.46 2.46 2.46 2.46 7.92 2.46 0.47 0.29 0.25 0.38 0.58 2.98 2.25 3.05 1.61 0.90

0.50 0.76 0.57 0.52 0.50 0.70 1.64 1.33 38.7 7.9 1.06 0.55 0.34 0.34 0.36 0.51 0.96 1.89 1.64 2.32 59.0 3.39 3.45

0.47 0.28 0.35 0.56 3.31 2.29 2.71 0.94

0.52 1.65 0.58 0.51 0.53 0.72 1.64 1.27 31.4 8.46 1.18 0.58 0.36 0.31 0.36 0.54 0.97 2.10 2.38 0.80 2.56 49.1 9.78 4.04 4.10 3.21 3.00 3.34 2.56 2.55 2.47 2.38 2.47 7.38 2.38 0.49 0.30 0.28 0.35 0.57 2.74 2.29 3.11 1.81 0.99

In most cases the difference between the calculated and measured compressibilities is within the probable experimental error. Substantial deviations from the calculated values are to be expected in the case of low melting point elements such as the alkali metals, unless corrections have been applied to the empirical data, as there is an additional component in the initial volume of such substances. Elsewhere, the differences between the calculated compressibilities and either of the two sets of experimental values are, on the average, no greater than the differences between the experimental results.

As noted earlier, the shift to higher compression factors is repeated at successively higher pressure levels until the maximum compression factors for the element are reached. Because of the nature of this compression pattern, a convenient way to analyze the experimental volumes of various substances under compression is to express equation 4-8 in the form

(V0/V)² = 1 + P/P0    (4-15)

According to this equation, if we plot the reciprocals of the squares of the relative volumes against the corresponding total pressure ratios we should obtain a straight line intersecting the zero pressure ordinate at the reference volume 1.00. The slope of the line is determined by the magnitude of the internal pressure, P0. Fig. 1(a) is a curve of this kind for the element tin, based on Bridgman's experimental values.
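The linear form of equation 4-15 lends itself to a simple fitting procedure. The following sketch takes the observed tin volumes of Table 15 for the 0 to 30 M kg/cm2 range (the portion below any change of compression factors) and, as one reading of the linearization, fits 1/V² against the external pressure; the fitted internal pressure comes out in the neighborhood of the 300 M kg/cm2 listed for tin in Table 14.

```python
# Equation 4-15 rearranged: (V0/V)^2 = 1 + P/P0, so a plot of 1/V^2 against
# external pressure P should be a straight line with slope 1/P0 and intercept 1.
# The volumes below are read from the Sn (observed) column of Table 15.
import numpy as np

P = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])          # M kg/cm2
V = np.array([1.000, 0.991, 0.982, 0.975, 0.966, 0.960, 0.951]) # V/V0

slope, intercept = np.polyfit(P, 1.0 / V**2, 1)
print(f"P0 ~ {1.0/slope:.0f} M kg/cm2, zero-pressure intercept ~ {intercept:.3f}")
# Comes out near 290 M kg/cm2 with an intercept close to 1.00, reasonably
# consistent with the 300 M kg/cm2 internal pressure listed for tin in Table 14.
```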

Figure 1: Compression Patterns

Where there is a transition to a higher set of compression factors within the experimental range, and the magnitude of P0 changes, the volumes diverge from the original line and follow a second straight line, the slope of which is determined by the new compression factors. On preparing curves of this kind for the other elements investigated by Bridgman, we find that about two-thirds of them actually do conform to a single straight line up to the 30,000 kg/cm2 pressure limit of his earlier work. His studies of the less compressible substances, such as the higher elements of the electropositive divisions, were not carried beyond this level, but he measured the compression up to 100,000 kg/cm2 on many other elements, and most of them were found to undergo a transition in which the effective internal pressure increases without any volume discontinuity. The compression curve for such a substance consists of two straight line segments connected by a smooth transition curve, as in Fig. 1(b), which represents Bridgman's values for silicon. In addition to the changes of this type, commonly called second order transitions, some solid substances undergo first order transitions in which there is a modification of the crystal structure and a volume discontinuity at the transition point. The effective internal pressure generally changes during a transition of this kind, and the resulting volumetric pattern is similar to that of KCl, Fig. 1(c). With the exception of some values which are rather erratic and of questionable validity, all of Bridgman's results follow one of these three patterns or some combination of them. The antimony curve, Fig. 1(d), illustrates one of the combination patterns. Here a second order transition between 30,000 and 40,000 kg/cm2 is followed by a first order transition at a higher pressure. The numerical values corresponding to these curves are given in the tables that follow.

The experimental second order curves are smooth and regular, indicating that the transition process takes place freely when the appropriate pressure is reached. The first order transitions, on the other hand, show considerable irregularity, and the experimental results suggest that in many substances the structural changes at the transition points are subject to a variable amount of delay due to internal conditions in the solid aggregate. In these substances the transition does not take place at a well-defined pressure, but somewhere within a relatively broad transition zone, and the exact course of the transition may vary considerably between one series of measurements and another. Furthermore, there are many substances which appear to experience similar delays in achieving volumetric equilibrium even where no transitions take place. The compression curves suggest that a number of the reported transitions are actually volume adjustments which merely reflect delayed response to the pressure applied earlier. For example, in the barium curve based on Bridgman's results there are presumably two transitions, one between 20,000 and 25,000 kg/cm2, and the other between 60,000 and 70,000 kg/cm2. Yet the experimental volumes at 60,000 and 100,000 kg/cm2 are both very close to the values calculated on the basis of a single straight line relation. It is quite probable, therefore, that this element actually follows one linear relation at least up to the vicinity of 100,000 kg/cm2.

The deviations from the theoretical curves that are found in the experimental volumes of substances with relatively high melting points are generally within the experimental error range, and those larger deviations that do make their appearance can, in most cases, be explained on the foregoing basis. The compression curves for substances with low melting points show systematic deviations from linearity at the lower pressures, but this is a normal pattern of behavior resulting from the proximity of the change of state. As will be brought out in detail in our examination of the liquid state, the physical state of matter is basically a property of the individual atom or molecule. The state of the aggregate merely reflects the state of the majority of its individual constituents. Consequently, a solid aggregate at any temperature near the melting point contains a specific proportion of liquid molecules. Since the volume of a liquid molecule differs from that of a solid molecule, the volume of the aggregate is modified accordingly. The amount of the volume deviation in each case can be calculated by methods that will be described in the subsequent discussion of the liquid volume relations.

Table 15 compares the results of the application of equation 4-8 with Bridgman's measurements on some of the elements that maintain the same internal pressure all the way up to his pressure limit of 100,000 kg/cm2.
In many cases he made several series of measurements on the same element. Most of these results agree within about 0.003, and it does not appear that listing all of the individual values in the tabulations would serve any useful purpose. The values given in Table 15, and in the similar tables that follow, are those obtained in experiments that were carried to the full 100,000 kg/cm2 pressure level. Where the high pressure measurements were started at some elevated pressure, or where the measurement interval was greater than usual, the gaps have been filled from the results of other Bridgman experiments.

Table 15: Relative Volumes Under Compression
Pressure        Zn 4-4-1          Zr 4-6-1½         In 4-4-1          Sn 4-4-1
(M kg/cm2)    Calc.    Obs.     Calc.    Obs.     Calc.    Obs.     Calc.    Obs.
    0         1.000    1.000    1.000    1.000    1.000    1.000    1.000    1.000
    5          .992     .992     .995     .995     .988     .988     .992     .991
   10          .984     .984     .990     .989     .980     .980     .984     .982
   15          .976     .977     .985     .983     .970     .967     .976     .975
   20          .969     .969     .980     .978     .960     .955     .968     .966
   25          .961     .964     .975     .973     .951     .948     .961     .960
   30          .954     .954     .970     .969     .942     .936     .954     .951
   35          .947              .965     .964     .933     .932     .947
   40          .940     .939     .960     .960     .925     .919     .940     .936
   50          .927     .925     .951     .946     .909     .903     .926     .923
   60          .914     .912     .942     .937     .893     .888     .913     .909
   70          .902     .900     .933     .929     .878     .874     .901     .897
   80          .890     .889     .925     .922     .864     .860     .889     .886
   90          .879     .878     .917     .916     .851     .847     .878     .875
  100          .868     .868     .909     .910     .838     .835     .867     .864

Table 16 extends the volume comparisons to representative elements of the classes that are subject to transitions within the experimental range of pressures. Transitions reported by the investigator or indicated by the theoretical calculations are shown by horizontal lines in the appropriate columns. In these tabulations the position of the upper branch of each curve has been fixed by using the experimental volume at a selected pressure in the straight line segment above the transition (identified by the symbol R) as a reference point. Thus the slope of this upper branch of the curve is determined theoretically, but its position relative to the 1/V² scale is empirical. Some work has been done toward extending the theoretical development to determine the exact position of the upper section of each curve, but this project is not far enough advanced to justify any discussion of it at this time.

Table 16: Relative Volumes Under Compression
Calc. Pressure (M kg/cm2) 0 5 1.000 .993 Al 4-5-1 4-8-1 1.000 .993 1.000 .996 Obs. Calc. Si 4-4-1 4-8-1 1.000 .995 1.000 .970 Obs. Calc. Ca 4-3-1 4-4-1 1.000 .969 Obs. Calc. Obs. Sb 4-4-1 4-4-1½ 1.000 .988 1.000 .987

10 15 20 25 30 35 40 50 60 70 80 90 100

.987 .981 .974 .968 .964 .957 .949 .942 .935 .928 .922 .915 R Ba 4-2-1 4-3-1

.987 .981 .975 .969 .964 .958 .951 .944 .937 .929 .922 .915

.991 .987 .982 .978 .974 .966 .960 .956 .952 .948 .944 .940 R La 4-4-1 4-8-1

.990 .986 .981 .978 .974 .968 .962 .957 .952 .948 .944 .940

.943 .917 .895 .878 .862 .847 .832 .805 R .780 .758 .737 .718 .701

.942 .918 .897 .878 .861 .845 .832 .805 .780 .748 .732 .716 .702

.977 .966 .955 .945 .935 .925 .916 .899 .888 .875 .864 R

.975 .964 .954 .944 .934 .925 .917 .899 .886 .875 .864 .815 .803

Pr 4-4-1 4-4-1½ 1.000 1.000 .984 .983 .970 .967 .955 .953 .942 .940 .929 .927 .916 .915 .904 .904 .893 .893 .878 .878 .863 .863 .849 R .849 .835 .836 .822 .823 .810 .811

U 4-8-1 4-8-2 1.000 1.000 .996 .955 .991 .990 .987 .986 .983 .981 .979 .978 .975 .973 .971 .971 .967 .966 .960 .960 .956 .955 .952 .951 .949 .947 .945 .944 .941 R .941

0 5 10 15 20 25 30 35 40 50 60 70 80 90 100

1.000 .955 .915 .880 .848 .820 .794 .771 .750 .712 .679 .650 .625 .603 .582

1.000 .955 .914 .879 .841 .814 .789 .770 .747 .712 .682 .639 .618 .598 .580

1.000 1.000 .982 .981 .965 .963 .949 .947 .933 .933 .918 .917 .904 .905 .891 .893 .878 .881 .858 .863 .845 .846 .833 .832 .821 .819 .809 .808 .798 R .798

Compressibility patterns of compounds are theoretically identical with those of the elements, and this theoretical conclusion is confirmed by compression data for a representative group of inorganic compounds presented in Table 17.

Table 17: Relative Volumes Under Compression
Calc. Pressure (M kg/cm2) 0 5 10 Obs. Calc. Obs. Calc. Obs. Calc. Obs.

NaCl 4-2-1 4-2-1½ .994 .979 .964 1.000 .982 .966 .987 .964 .942

NaI 4-2-1 4-2-1½ 1.000 .970 .944

KCl 4-2-1 4-2-1½ .994 .973 .953 1.000 .974 .952

ZnS 4-4-1 4-4-1½ .995 .991 .986 1.000 .994 .988

15 20 25 30 35 40 50 60 70 80 90 100

.950 .937 R .924 .912 .900 .889 .867 .847 .829 .815 .802 .790 R AgCl 4-3-1

.951 .937 .924 .912 .901 .892 .865 .848 .832 .817 .803 .790

.922 .903R .885 .868 .853 .840 .819 .799 .781 .765 .749 .734 R

.922 .902 .886 .871 .858 .840 .816 .795 .777 .761 .747 .734

.934 .916 R .803 R .791 .779 .768 .757 .741 .727 .714 .702 .690 .679 R

.933 .916 .803 .789 .778 .768 .758 .742 .723 .710 .698 .688 .679

.982 .977 R .973 .969 .964 .960 .952 .945 .940 .934 .929 .924 R

.982 .977 .972 .967 .963 .961 .954 .947 .940 .934 .929 .924

CsBr 4-3-1 4-4-1 .984 .962 .942 .923 .905 R .888 .871 .856 .842 .815 .790 .777 .760 .743 .728 R 1.000 .971 .947 .925 .905 .888 .870 .859 .840 .814 .792 .773 .757 .742 .728

NH4Cl 4-2-1 4-4-1 1.000 1.000 .974 .973 .950 .951 .928 .933 .910 .918 .900 .905 .889 .891 .879 .883 .869 .867 .851 .846 .833 .828 .817 .812 .801 .798 .787 .785 .773 R .773

KNO3 4-3-1 4-3-2 .894 .878 .862 .847 .833 .820R .807 .783 .761 .744 .733 .723 .712 .703 R 1.000 .882 .862 .846 .831 .820 .804 .781 .762 .745 .732 .720 .711 .703

0 5 10 15 20 25 30 35 40 50 60 70 80 90 100

1.000 .990 .980 .971 .961 .952 .944 .935 .927 .911 .895 .881 .867 .854 .841

1.000 .989 .979 .969 .960 .952 .942 .937 .926 .910 .896 .883 .871 .860 .835

As might be expected from their less uniform composition, transitions are somewhat more common in the compounds, but otherwise there is no difference in the compression curves. The curve for KCl, shown graphically in Fig. 1 and by numerical values in Table 17, is of special interest because it includes a sharp first order transition in which there is a substantial decrease in the basic volume while the compression factors remain unchanged. The magnitude of the volume reduction that takes place indicates that there is a reorientation of the atomic rotations in which the neutral specific electric rotation 5 is substituted for the normal rotation 4 as the effective relative value. The theoretical volumes beyond the transition point, as shown in the table, are based on the smaller atomic volume corresponding to the higher rotation. Up to 20,000 kg/cm2 the volume follows the curve corresponding to compression factors 4-2-1 and S0³ = 1.222, which produce an internal pressure of 112.7 M kg/cm2. At the transition point the basic volume (S0³) drops to 0.976, increasing the internal pressure to 141.1 M kg/cm2. The compression then continues on this basis up to the vicinity of 45,000 kg/cm2, where the compression factors change from 4-2-1 to 4-3-1, and the internal pressure rises accordingly.

As in the compression of the elements, the theoretical calculations do not always confirm the transitions reported by the experimenters. On the other hand, these calculations show that a large proportion of the compounds, including six of the eight in Table 17, undergo either a transition or some other process in which they eliminate a volume component in the pressure range below 5000 kg/cm2. The effect on the compression curve is to cause the linear segment of the curve to intersect the zero pressure ordinate at a volume below 1.000. The origin of these volume adjustments is still uncertain. The occurrence of a number of observable first order transitions at relatively low pressures suggests that some early second order transitions may also be taking place. But it is also possible that voids in the structure may be eliminated in the early stages of compression, or that there are geometrical readjustments.

The structural characteristics of the organic compounds make them particularly susceptible to such geometrical readjustments. Because of their low melting points, their volumes under low pressure also include the additional component that exists near the change of state. It appears, however, that in a wide range of compounds elimination of these extra volume components is substantially complete at some pressure well below the 40,000 kg/cm2 level to which Bridgman's measurements on solid organic compounds were carried. This means that there is a fairly wide pressure range in which these compounds follow the normal compression pattern. The following comparison of theoretical and observed volume ratios for benzene and some of its polynuclear derivatives gives an indication of how the elimination of the excess volume progresses. A measured ratio lower than the theoretical means that some of the excess volume is eliminated in the pressure range for which the ratio is measured, and the amount of the difference is an indication of the amount by which the normal loss of volume due to compression is increased.

Benzene
  P ratio (M kg/cm2)    Calc.    Obs.
  40/20                 .938     .920
  40/25                 .954     .943
  40/30                 .970     .964
  40/35                 .985     .984

  Ratio 40/25           Calc.    Obs.
  Benzene               .954     .943
  Naphthalene           .954     .950
  Anthracene            .954     .953

As these figures indicate, benzene is just getting rid of the last of the excess volume at the pressure limit of the experiments, and there is no linear section of the benzene compression curve on which the slope can be measured for comparison with the theoretical value. With increased molecular complexity, however, the linear section of the curve lengthens, and for compounds with characteristics similar to those of anthracene there is a 15,000 kg/cm2 interval in which the measured volumes should follow the theoretical line. Compounds of this nature have magnetic rotation 3-3 and electric rotation 4. The effective value of S0³ is therefore 0.812, and where the compression factors are 4-1½-1 the resulting internal pressure is 127.2 M kg/cm2. As shown in the values tabulated for benzene, which were computed on the basis of this internal pressure, the ratio of the volume at 40,000 kg/cm2 to that at 25,000 kg/cm2 should be 0.954 for all organic compounds with characteristics (molecular complexity, melting point, compression factors, etc.) similar to those of anthracene. Table 18 shows that this theoretical conclusion is corroborated by Bridgman's measurements.
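The 0.954 ratio can be verified directly from the internal pressure quoted above. A minimal sketch, taking P0 = 127.2 M kg/cm2 as given and using the relative volume relation implied by equation 4-15:

```python
# Volume ratio between 40,000 and 25,000 kg/cm2 for an anthracene-like compound.
# P0 = 127.2 M kg/cm2 is the internal pressure quoted in the text; the relative
# volume relation V/V0 = (P0/(P0 + P))**0.5 follows from equation 4-15.
P0 = 127.2                       # M kg/cm2

def rel_volume(p, p0=P0):
    return (p0 / (p0 + p)) ** 0.5

print(round(rel_volume(40.0) / rel_volume(25.0), 3))   # 0.954
```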

Table 18: Measured Volume Ratios - 40/25 M kg/cm2 (Theoretical ratio: .954)
Urea                     .954        p-Nitroiodobenzene        .955
Nitrourea                .956        o-Chlorobenzoic acid      .954
Cyanamide                .953        m-Chlorobenzoic acid      .953
o-Xylene                 .956        p-Chlorobenzoic acid      .954
p-Xylene                 .956        o-Bromobenzoic acid       .954
Triphenyl methane        .953        m-Bromobenzoic acid       .954
o-Diphenyl benzene       .954        p-Bromobenzoic acid       .954
m-Diphenyl benzene       .955        m-Iodobenzoic acid        .955
p-Diphenyl benzene       .955        p-Iodobenzoic acid        .953
Chlorobenzene            .954        p-Nitroaniline            .954
o-Nitrochlorobenzene     .956        o-Acetyl toluidine        .954
o-Nitrobromobenzene      .955        Tetrahydronaphthalene     .953
p-Nitrobromobenzene      .953        Anthracene                .953
o-Nitroiodobenzene       .953        Acenaphthene              .955

At the time the theoretical values listed in the foregoing tables were originally calculated, Bridgman's results constituted almost the whole of the experimental data then available in the high pressure range, and his experimental limit at 100,000 kg/cm2 was the boundary of the empirical knowledge of the effect of high pressure. In the meantime the development of shock wave techniques by American and Russian investigators has made it possible to measure compressions at pressures up to several million atmospheres. With the benefit of these new measurements we are now able to extend the correlation between theory and experiment into the region of the maximum compression factors.

The nature of the response of the compression factors to the application of pressure has already been explained, and the maximum factors for each group of elements have been identified. However, the magnitude of the base volume (S0³) also enters into the determination of the internal pressure, and coincident with the increase in these factors there is a trend toward a minimum base volume.

In themselves, modifications of the crystal structure play only a small part in the compressibility picture. Application of sufficient pressure causes a solid to assume one of the crystal forms corresponding to the closest packing of the atoms, the face-centered cubic or close-packed hexagonal for isometric crystals, and the nearest equivalent structures if the crystals are anisometric. If some different crystal form exists at zero pressure, the volume decrease due to the change to one of the close-packed forms shows up as a percentage reduction in all subsequent volumes, but the compressibility is not otherwise affected. However, a difference in crystal structure often indicates a difference in the relative orientation of the atomic rotations. Any such change in orientation alters the internal pressure, and consequently has a significant effect on the compressibility.

Application of pressure tends to favor what may be called "regular" structures at the expense of those structures that are able to exist only because of special conditions applicable to the particular elements involved. This tendency is evident from the start of the compression process, and is responsible for the large number of deviations from the Chapter 2 values of the inter-atomic distances that are identified by asterisks in Table 14. For example, the five elements from chromium to nickel have a number of different interatomic distances at low pressure, and are able to crystallize in alternate forms. In the early stages of compression, however, all of these elements, except manganese, orient themselves on the basis of the neutral relative rotation 10, and have an internal pressure that reflects the corresponding value of S0³, which is 0.603. At still higher pressures vanadium shifts to the same relative rotation and joins the group. Manganese probably does likewise, but empirical confirmation of this change is still lacking. Thus the amount of variation in the atomic arrangements is greatly reduced by external pressure. One of the collateral effects is that the amount of uncertainty in the identification of the rotation orientation, and the resulting base volume, is minimized.

Most of the elements that change to a lower base volume at the start of compression maintain this new value of S0³ throughout the remainder of the present range of the shock wave experiments. Those that do not make this change in the early stages of compression generally do so at some higher pressure. Only a few keep the same base volume up to the shock wave pressure limit. Still fewer undergo a second transition to a lower base volume. Thus the general pattern involves one reduction of the base volume in the pressure range from zero external pressure up to the limit of the shock wave experiments. This pattern is reflected in the twelve series of measurements that have been selected for comparison with the theoretical values. Out of the twelve elements that are included, only two, copper and chromium, have the same base volume in the shock wave range as at zero pressure. Four continue with the values of S0³ applicable to the early stages of compression, the values listed in Table 14, and six change to a lower base volume somewhere above Bridgman's pressure limit. The minimum base volumes, the corresponding maximum compression factors, and the resulting internal pressures for these elements are shown in Table 19.

Table 19: Maximum Internal Pressures
        c        a-b       S0³       a-z-y     P0 (M kg/cm2)
V       10       4-3       0.603     4-8-2     1816
Cr      10       4-3       0.603     4-8-3     2724
Co      10       4-3       0.603     4-8-3     2724
Ni      10       4-3       0.603     4-8-3     2724
Cu      8-10     4-3       0.652     4-8-3     2519
Mo      10       4-4       0.764     4-8-4     2866
Ag      8-10     4-4       0.823     4-8-4     2661
W       10       4-4½      0.822     4-8-5     3330
Au      10       4-4½      0.822     4-8-5     3330
Tl      5-10     4-4½      1.074     4-8-5     2549
Pb      5-10     4-4½      1.074     4-8-5     2549
Th      5        4½-4½     1.631     4-8-5     1678

Here again, as in the pressure range of the Bridgman experiments, the theoretical development is not yet far enough advanced to enable specifying the exact locations of the upper sections of the compression curves. Nor is it yet clear in all cases just how many of the possible intermediate values of the compression factors are actually utilized as the pressure increases. What we are able to do at the present rather early stage of the development of the theory is to demonstrate that in this extreme high pressure range, as well as at the lower pressures of the preceding tables, the volume varies inversely with the square root of the total pressure, strictly in accordance with the theory. In this connection it should be noted that the section of each compression curve that is based on the maximum value of the internal pressure is long enough to make the square root pattern clear and distinct. Furthermore, we are able to show that the slope of the last section of the experimental curve for each element is identical with the theoretical slope determined by the calculated maximum values of the internal pressure, and that the slope of each of the intermediate sections is in agreement with one of the possible intermediate values of that internal pressure. An exact theoretical definition of the curves will have to wait for further progress along the lines discussed earlier. In the meantime, the amount of theoretical information already available will serve as a means of testing the validity of each set of empirical results, and will also enable a reasonable amount of extrapolation of the compression curves beyond the present limits of the shock wave technology.

Table 20 is a comparison of the theoretical volumes, based on an empirical reference volume for each of the sections of the curves, as in the preceding tables, with the shock wave results obtained at Los Alamos5 on the elements that were investigated over the widest range of pressures. Unless there is an increase in the compression factors in the vicinity of 100,000 atmospheres, the compression curves established on the basis of Bridgman's measurements extend into the lower range of these shock wave experiments. In these cases the theoretical volumes up to the first change in the compression factors are calculated on the basis of the reference volume selected from the Bridgman data, and no reference point is identified in this table.

Table 20: Shock Wave Compressions
P a-z-y 0.1 4-8-3 .2 .3 .4 .5 4-8-4 .6 .7 .8 .9 1.0 1.1 1.2 4-8-5 1.3 Calc. W .972 .946 .922 .900 .880 .865 .850 .836R .823 .810 .798 .787 .778 Obs. a-z-y .970 4-8-1½ .944 4-8-3 .921 .901 .882 .866 .851 .836 .824 4-8-5 .812 .800 .790 .780 Calc. Au .946 .911 .888R .867 .847 .828 .811 .794 .780 .771 .762R .754 .745 Obs a-z-y .953 4-8-2 .917 .888 .864 4-8-3 .843 .825 .810 .796 .783 .772 .762 .752 .743 4-8-4 Calc. Mo .966 .936 .908 .885 .868 .851 .836 .822 .808 .795R .783 .771 .761 Obs. .966 .937 .912 .890 .870 .852 .836 .821 .807 .795 .783 .772 .762

1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 0.1 4-8-1½ .2 .3 .4 .5 .6 .7 4-8-3 .8 .9 1.0 1.1 1.2 1.3 1.4 0.1 4-8-1½ .2 .3 .4 .5 .6 .7 .8 .9 4-8-3 1.0 1.1 1.2 1.3 1.4 1.5 1.6 0.1 4-8-1 .2 4-8-2 .3 .4

.770 .762R .754 .747 .739 .732 .725 .718 Cr .955R .924 .895 .869 .845 .823 .805 .794 .783 .772R .762 .752 .742 .733 Co .953 .921 .893 .867 .843R .821 .801 .782 .769 .759 .749 .739 .730 .721 .712R .704 Ag .922 .879 .848 .820

.771 .762 .754 .746 .738 .731 .725 .718 .955 .920 .891 .867 .846 .827 .811 .797 .784 .772 .761 .751 .742 .733 4-4-1½ 4-4-3

.737 .730 .722 .715 .708 .701 .694 Pb .858 .796R .753 .716 .691 .673R .656 .640 .628 .619R .610 .602 .594 .586 Ni .953 .921 .893 .867 .843R .821 .801 .790 .779 .768 .758 .748 .739 .730 .721R Tl .850 .787 .736R .702

.735 .728 .720 .714 .708 .702 .696

.752 .743R .734 .726

.752 .743 .734 .726

4-8-3

4-8-5

.865 4-8-1 .796 4-8-1½ .751 .718 .693 .673 .656 .642 4-8-2 .630 .619 .609 .600 .593 .586 .954 4-8-1 .919 .889 4-8-1½ .865 .843 .825 .808 4-8-3 .794 .780 .768 .757 .747 .738 .729 .721

V .939 .900 .867R .838 .811 .787 .765 .750 .736 .723R .710 .698 .687 Cu .945 .898 .865 .838 .814R .792 .772 .760 .749 .738 .728 .718 .708 .699 .690R Th .869 .792 .747 .710

.945 .902 .867 .838 .812 .790 .770 .753 .737 .723 .709 .697 .686

.956 4-8-1½ .920 .890 .865 .843 .823 .806 .791 4-8-3 .776 .764 .752 .741 .731 .721 .712 .704 .929 4-4-3 .881 .845 .817 4-8-3

.940 .897 .864 .836 .814 .794 .777 .762 .749 .737 .726 .716 .707 .698 .690

.853 4-8-1 .783 4-8-2 .736 .703

.870 .795 .744 .707

.5 .6 .7 4-8-4 .8 .9 1.0 1.1 1.2 1.3 1.4 1.5 1.6

.794R .771 .752 .741 .730 .720R .710 .701 .692 .683 .675 .667

.794 .775 .759 .744 4-8-5 .731 .720 .710 .700 .692 .684 .677 .670

.678R .656 .637 .623 .614 .605R .597 .588 .581 .573 .566

.678 .658 4-8-3 .642 .628 .616 .605 4-8-5 .596 .587 .580 .573 .567

.677R .652 .632R .613 .596 .583 .572 .562 .553 .544 .535

.677 .652 .632 .614 .599 .585 .573 .562 .553 .544 .535

A rather surprising feature of these comparisons is that the agreement between the shock wave results and the theoretical volumes is as close as the agreement between Bridgman's static values and the theory. It is true that this set of measurements was deliberately selected for the comparison, and it represents the best results rather than the average, but in any event the close correlation is a significant confirmation of the validity of both the shock wave techniques and the theoretical relations.

The question that now arises is what course the compressibility follows beyond the pressure range of this table. In some cases a transition to a smaller base volume appears to be possible. Copper, for instance, may shift to the rotations of the preceding electropositive elements at some pressure above that of the tabulation. Aside from such special cases, the factors that determine the compressibility in the range below two million atmospheres have reached their limits. At the present stage of the investigation, however, the possibility that some new factor may enter into the picture at extreme pressures cannot be excluded. A "collapse" of the atomic structure of the kind envisioned by the nuclear theory is, of course, impossible, but as matters now stand we are not in a position to say that all aspects of the compressibility situation have been explored. It is conceivable that there may be some, as yet unknown, capability of change in the atomic motions that would increase the resistance to pressure beyond what now appears to be the ultimate limit.

Some shock wave measurements have been made at still higher pressure levels, and these should throw some light on the question. Unfortunately, however, the results are rather ambiguous. Three of the elements included in these experiments, lead, tin, and bismuth, follow the straight line established in Table 20 up to the maximum pressures of about four million atmospheres. On the other hand, five elements on which measurements were carried to maximums between three and five million atmospheres show substantially lower compressions than a projection of the Table 20 curves would indicate. The divergence in the case of gold, for example, is almost eight percent. But there are equally great differences between the results of different experiments, notably in the case of iron. Whether or not some new factor enters into the compression situation at pressures above those of Table 20 will therefore have to be regarded as an open question.

Chapter 5

Heat
If an atom is subjected to an external force of a transient nature, such as that involved in a violent contact, a motion is imparted to it. Where the magnitude of the force is great enough the atom is ejected from the time region and the inter-atomic equilibrium is destroyed. If the force is not sufficient to accomplish this ejection, the motion is turned back at some intermediate point and it becomes a vibratory, or oscillating, motion.

Where two or more atoms are combined into a molecule, the molecule becomes the thermal unit. The statements about atoms in the preceding paragraph are equally applicable to these molecular units. In order to avoid continual repetition of the expression "atoms and molecules," the references to thermal units in the discussion that follows will be expressed in terms of molecules, except where we are dealing specifically with substances such as aggregates of metallic elements, in which the thermal units are definitely single atoms. Otherwise the individual atoms will be regarded, for purposes of the discussion, as monatomic molecules.

The thermal motion is something quite different from the familiar vibratory motions of our ordinary experience. In these vibrations that we encounter in everyday life, there is a continuous shift from kinetic to potential energy, and vice versa, which results in a periodic reversal of the direction of motion. In such a motion the point of equilibrium is fixed, and is independent of the amplitude of the vibration. In the thermal situation, however, any motion that is inward in the context of the fixed reference system is coincident with the progression of the natural reference system, and it therefore has no physical effect. Motion in the outward direction is physically effective. From the physical standpoint, therefore, the thermal motion is a net outward motion that adds to the gravitational motion (which is outward in the time region) and displaces the equilibrium point in the outward direction.

In order to act in the manner described, coinciding with the progression of the natural reference system during the inward phase of the thermal cycle and acting in conjunction with gravitation in the outward phase, the thermal vibration must be a scalar motion. Here again, as in the case of the vibratory motion of the photons, the only available motion form is simple harmonic motion. The thermal oscillation is identical with the oscillation of the photon except that its direction is collinear with the progression of the natural reference system rather than perpendicular to it. However, the suppression of the physical effects of the vibration during the half of the cycle in which the thermal motion is coincident with the reference system progression gives this motion the physical characteristics of an intermittent unidirectional motion, rather than those of an ordinary vibration. Since the motion is outward during half of the total cycle, each natural unit of thermal vibration has a net effective magnitude of one half unit.

Inasmuch as the thermal motion is a property of the individual molecule, not an aspect of a relation between molecules, the factors that come into play at distances less than unity do not apply here, and the direction of the thermal motion, in the context of a stationary reference system, is always outward. As indicated earlier, therefore, continued increase in the magnitude of the thermal motion eventually results in destruction of the inter-atomic force equilibrium and ejection of the molecule from the time region. It should be noted, however, that the gravitational motion does not contribute to this result, as it changes direction at the unit boundary. The escape cannot be accomplished until the magnitude of the thermal motion is adequate to achieve this result unassisted.

When a molecule acquires a thermal motion it immediately begins transferring this motion to its surroundings by means of one or more of several processes that will be considered in detail at appropriate points later in this and the subsequent volumes. Coincident with this outflow there is an inflow of thermal motion from the environment, and, in the absence of an externally maintained unbalance, an equilibrium is ultimately reached at a point where inflow and outflow are equal. Any two molecules or aggregates that have established such an equilibrium with each other are said to be at the same temperature.

In the universe of motion defined by the postulates of the Reciprocal System, speed and energy have equal standing from the viewpoint of the universe as a whole. But on the low speed side of the neutral axis, where all material phenomena are located, energy is the quantity that exceeds unity. Equality of motion in the material sector is therefore synonymous with equal energy. Thus a temperature equilibrium is a condition in which inflow and outflow of energy are equal. Where the thermal energy of a molecule is fully effective in transfer on contact with other units of matter, its temperature is directly proportional to its total thermal energy content. Under these conditions,

E = kT    (5-1)

In natural units the numerical coefficient k is eliminated, and the equation becomes

E = T    (5-2)

Combining equation 5-2 with equation 4-3 we obtain the general gas equation, PV = T, or, in conventional units, where R is the gas constant,

PV = RT    (5-3)

These are the relations that prevail in the "ideal gas state." Elsewhere the relation between temperature and energy depends on the characteristics of the transmission process. Radiation originates three-dimensionally in the time region, and makes contact one-dimensionally in the outside region. It is thus four-dimensional, while temperature is only one-dimensional. We thus find that the energy of radiation is proportional to the fourth power of the temperature:

Erad = kT⁴    (5-4)

This relation is confirmed observationally. The thermal motion originating inside unit distance is likewise four-dimensional in the energy transmission process. However, this motion is not transmitted directly into the outside region in the manner of radiation. The transmission is a contact process, and is subject to the general inter-regional relation previously explained. Instead of E = kT⁴, as in radiation, the thermal motion is E² = k′T⁴, or

E = kT²    (5-5)

A modification of this relation results from the distribution of the thermal motion over three dimensions of time, while the effective component in thermal interchange is only one-dimensional. This is immaterial as long as the thermal motion is confined to a single rotational unit, but the effective component of the thermal motion of magnetic rotational displacement n is only 1/n³ of the total. We may therefore generalize equation 5-5 by applying this factor. Substituting the usual term heat (symbol H) for the time region thermal energy E, we then have

H = T²/n³    (5-6)

The general treatment of heat in conventional physical theory is empirically based, and is not significantly affected by the new theoretical development. It will not be necessary, therefore, to give this subject matter any attention in the present work, where we are following a policy of not duplicating information that is available elsewhere, except to the extent that reference to such information is required in order to avoid gaps in the theoretical development. The thermal characteristics of individual substances, on the other hand, have not been thoroughly investigated. Since they are of considerable importance, both from the standpoint of practical application and because of the light that they can shed on fundamental physical relationships, it is appropriate to include some discussion of the status of these items in the universe of motion.

One of the most distinctive thermal properties of matter is the specific heat, the heat increment required to produce a specific increase in temperature. This can be obtained by differentiating equation 5-6:

dH/dT = 2T/n³    (5-7)

Inasmuch as heat is merely one form of energy it has the same natural unit as energy in general, 1.4918 × 10⁻³ ergs. However, it is more commonly measured in terms of a special heat energy unit, and for present purposes the natural unit of heat will be expressed as 3.5636 × 10⁻¹¹ gram-calories, the equivalent of the general energy unit. Strictly speaking, the quantity to which equation 5-7 applies is the specific heat at zero pressure, but the pressures of ordinary experience are very low on a scale where unit pressure is over fifteen million atmospheres, and the question as to whether the equation holds good at all pressures, an issue that has not yet been investigated theoretically, is of no immediate concern. We can take the equation as being applicable under any condition of constant pressure that will be encountered in practice.

The natural unit of specific heat is one natural unit of heat per natural unit of temperature. The magnitude of this unit can be computed in terms of previously established quantities, but the result cannot be expressed in terms of conventional units because the conventional temperature scales are based on the properties of water. The scales in common use for scientific purposes are the Celsius or Centigrade, which takes the ice point as zero, and the Kelvin, which employs the same units but measures from absolute zero. All temperatures stated in this work are absolute temperatures, and they will therefore be stated in terms of the Kelvin scale. For uniformity, the Kelvin notation (°K, or simply K) will also be applied to temperature differences instead of the customary Celsius notation (°C).

In order to establish the relation of the Kelvin scale to the natural system, it will be necessary to use the actual measured value of some physical quantity involving temperature, just as we have previously used the Rydberg frequency, the speed of light, and Avogadro's number to establish the relations between the natural and conventional units of time, space, and mass. The most convenient empirical quantity for this purpose is the gas constant. It will be apparent from the facts developed in the discussion of the gaseous state in a subsequent volume of this series that the gas constant is the equivalent of two-thirds of a natural unit of specific heat. We may therefore take the measured value of this constant, 1.9869 calories, or 8.31696 × 10⁷ ergs, per gram mole per degree Kelvin, as the basis for conversion from conventional to natural units. This quantity is commonly represented by the symbol R, and this symbol will be employed in the conventional manner in the following pages. It should be kept in mind that R = 2/3 natural unit. For general purposes the specific heat will be expressed in terms of calories per gram mole per degree Kelvin in order to enable making direct comparisons with empirical data compiled on this basis, but it would be rather awkward to specify these units in every instance, and for convenience only the numerical values will be given. The foregoing units should be understood.

Dividing the gas constant by Avogadro's number, 6.02486 × 10²³ per g-mole, we obtain the Boltzmann constant, the corresponding value on a single molecule basis: 1.38044 × 10⁻¹⁶ ergs/deg. As indicated earlier, this is two-thirds of the natural unit, and the natural unit of specific heat is therefore 2.07066 × 10⁻¹⁶ ergs/deg. We then divide unit energy, 1.49175 × 10⁻³ ergs, by this unit of specific heat, which gives us 7.20423 × 10¹² degrees Kelvin, the natural unit of temperature in the region outside unit distance (that is, for the gaseous state of matter).

We will also be interested in the unit temperature on the T³ basis, the temperature at which the thermal motion reaches the time region boundary. The 3/4 power of 7.20423 × 10¹² is 4.39735 × 10⁹. But the thermal motion is a motion of matter and involves the 2/9 vibrational addition to the rotationally distributed linear motion of the atoms. This reduces the effective temperature unit by the factor 1 + 2/9, the result being 3.5978 × 10⁹ degrees K. On first consideration, this temperature unit may seem incredibly large, as it is far above any observable temperature, and also much in excess of current estimates of the temperatures in the interiors of the stars, which, according to our theoretical findings, can be expected to approach the temperature unit. However, an indication of its validity can be obtained by comparison with the unit of pressure, inasmuch as the temperature and pressure are both relatively simple physical quantities with similar, but opposite, effects on most physical properties, and should therefore have units of comparable magnitude.
The conventional units, the degree K and the gram per cubic centimeter, have been derived from measurements of the properties of water, and are therefore approximately the same size. Thus the ratio of natural to conventional units should be nearly the same in temperature as in pressure. The value of the temperature unit just calculated, 3.5978 × 10⁹ degrees K, conforms to this theoretical requirement, as the natural unit of pressure derived in Volume I is 5.386 × 10⁹ g/cm³.

Except insofar as it enters into the determination of the value of the gas constant, the natural unit of temperature defined for the gaseous state plays no significant role in terrestrial phenomena. Here the unit with which we are primarily concerned is that applicable to the condensed states. Just as the gaseous unit is related to the maximum temperature of the gaseous state, the lower unit is related to the maximum temperature of the liquid state. This is the temperature level at which the unit molecule escapes from the time region in one dimension of space. The motion in this low energy range takes place in only one scalar dimension. We therefore reduce the three-dimensional unit, 3.5978 × 10⁹ K, to the one-dimensional basis, and divide it by 3 because of the restriction to one dimension of space. The natural unit applicable to the condensed state is then (1/3)(3.5978 × 10⁹)^(1/3) degrees K = 510.8°K.

The magnitude of this unit was evaluated empirically in the course of a study of liquid volume carried out prior to the publication of The Structure of the Physical Universe in 1959. The value derived at that time was 510.2, and this value was used in a series of articles on the liquid state that described the calculation of the numerical values of various liquid properties, including volume, viscosity, surface tension, and the critical constants. Both the 510.2 liquid unit and the gaseous unit were listed in the 1959 publication, but the value of the gaseous unit given there has subsequently been increased by a factor of 2 as a result of a review of the original derivation.

Since the basic linear vibrations (photons) of the atom are rotated through all dimensions they have active components in the dimensions of any thermal motion, whatever that dimension may be, just as they have similar components parallel to the rotationally distributed motions. As we found in our examination of the effect on the rotational situation, this basic vibrational component amounts to 2/9 of the primary magnitude. Because the thermal motion is in time (equivalent space) its scalar direction is not fixed relative to that of the vibrational component. This vibrational component will therefore either supplement or oppose the thermal specific heat. The net specific heat, the measured value, is the algebraic sum of the two. This vibrational component does not change the linear relation of the specific heat to the temperature, but it does alter the zero point, as indicated in Fig. 2.
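The chain of unit conversions described above can be followed numerically. The sketch below assumes only the constants quoted in the text (the gas constant, Avogadro's number, and the natural unit of energy) and reproduces the gaseous temperature unit, the boundary value, and the 510.8 degree condensed-state unit.

```python
# Natural temperature units, recomputed from the constants quoted in the text.
R_ergs  = 8.31696e7     # gas constant, ergs per g-mole per deg K
N_A     = 6.02486e23    # Avogadro's number (value used in the text)
E_unit  = 1.49175e-3    # natural unit of energy, ergs

k_B = R_ergs / N_A                  # Boltzmann constant, ~1.38044e-16 ergs/deg
c_unit = 1.5 * k_B                  # natural unit of specific heat (R = 2/3 unit)
T_gas = E_unit / c_unit             # ~7.204e12 deg K, gaseous unit
T_boundary = T_gas**0.75 / (1.0 + 2.0/9.0)     # ~3.598e9 deg K
T_condensed = T_boundary**(1.0/3.0) / 3.0      # ~510.8 deg K, condensed-state unit

print(f"{k_B:.5e}  {T_gas:.5e}  {T_boundary:.4e}  {T_condensed:.1f}")
```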

Figure 2

In this diagram the line OB′ is the specific heat curve derived from equation 5-7, assuming a constant value of n and a zero initial level. If the scalar direction of the vibrational component is opposite to that of the thermal motion, the initial level is positive; that is, a certain amount of heat must be supplied to neutralize the vibrational energy before there is any rise in temperature. In this case the specific heat follows the line AA′, parallel to OB′ and above it. If the scalar direction of the vibrational component is the same as that of the thermal motion, the initial level is negative, and the specific heat follows the line CC′, likewise parallel to OB′ but below it. Here there is an effective temperature due to the vibrational energy before any thermal motion takes place. Although this initial component of the molecular motion is effective in determining the temperature, its magnitude cannot be altered and it is therefore not transferable. Consequently, even where the initial level is negative, there is no negative specific heat. Where the sum of the negative initial level and the thermal component is negative, the effective specific heat of the molecule is zero.

It should be noted in passing that the existence of this second, fixed, component of the specific heat confirms the vibrational character of the basic constituent of the atomic structure, the constituent that we have identified as a photon. The demonstration that there is a negative initial level of the specific heat curve is a clear indication of the validity of the theoretical identification of the basic unit in the atomic structure as a vibratory motion.

Equation 5-7 can now be further generalized to include the specific heat contribution of the basic vibration: the initial level, which we will represent by the symbol I. The net specific heat, the value as measured, is then

dH/dT = 2T/n³ + I    (5-8)
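Expressed as a rule, the net specific heat of equation 5-8 is the thermal term plus the initial level, with negative totals replaced by zero. A minimal sketch follows, working in natural units of specific heat; the identification of one natural unit with 1.5 R (so that the -2/3 R initial level becomes -4/9 natural unit) and the use of T running from 0 to 1 over the first vibrational unit are interpretations adopted for the illustration, not statements from the text.

```python
# Equation 5-8 with the zero clamp: net specific heat = max(0, 2T/n^3 + I).
# Units: natural units of specific heat (1 natural unit taken as 1.5 R), so the
# normal -2/3 R initial level is I = -4/9; T runs from 0 to 1 over the first
# vibrational unit. These unit choices are an interpretation, not a quotation.
def net_specific_heat(T, n=1, I=-4.0/9.0):
    return max(0.0, 2.0 * T / n**3 + I)

print(net_specific_heat(0.1))   # 0.0 - the initial level is not yet neutralized
print(net_specific_heat(1.0))   # ~1.556 natural units, i.e. 2 1/3 R, the level
                                # at which the first transition occurs
```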

Where there is a choice between two possible states, as there is between the positive and negative initial levels, the probability relations determine which of the alternatives will prevail. Other things being equal, the condition of least net energy is the most probable, and since the negative initial level requires less net energy for a given temperature than the positive initial level, the thermal motion is based on the negative level at low temperatures unless motion on this basis is inhibited by structural factors.

Addition of energy in the time region takes place by means of a decrease in the effective time magnitude, and it involves eliminating successive time units from the vibration period. The process is therefore discontinuous, but the number of effective time units under ordinary conditions is so large that the relative effect of the elimination of one unit is extremely small. Furthermore, observations of heat phenomena of the solid state do not deal with single molecules but with aggregates of many molecules, and the measurements are averages. For all practical purposes, therefore, we may consider that the specific heat of a solid increases in continuous relation to the temperature, following the pattern defined by equation 5-8.

As pointed out earlier in this chapter, the thermal motion cannot cross the time region boundary until its magnitude is sufficient to overcome the progression of the natural reference system without assistance from the gravitational motion; that is, it must attain unit magnitude. The maximum thermal specific heat, the total increment above the initial level, is the value that prevails at the point where the thermal motion reaches this unit level. We can evaluate it by giving each of the terms T and n in equation 5-7 unit value, and on this basis we find that it amounts to 2 natural units, or 3R. The normal initial level is -2/9 of this 3R specific heat, or -2/3 R. The 3R total is then reached at a net positive specific heat of 2⅓ R. Beyond this 3R thermal specific heat level, which corresponds to the regional boundary, the thermal motion leaves the time region and undergoes a change which requires a substantial input of thermal energy to maintain the same temperature, as will be explained later. The condition of minimum energy, the most probable condition, is maintained by avoiding this regional change by whatever means are available. One such expedient, the only one available to molecules in which only one rotational unit is oscillating thermally, is to change from a negative to a positive initial level. Where the initial level is +2/3 R instead of -2/3 R, the net positive specific heat is 3⅔ R at the point where the thermal specific heat reaches the 3R limit. The regional transition is not required until this higher level is reached. The resulting specific heat curve is shown in Fig. 3.

Inasmuch as the magnetic rotation is the basic rotation of the atom, the maximum number of units that can vibrate thermally is ordinarily determined by the magnetic displacement. Low melting points and certain structural factors impose some further restrictions, and there are a few elements, and a large number of compounds, that are confined to the specific heat pattern of Fig. 3, or some portion of it. Where the thermal motion extends to the second magnetic rotational unit, to rotation two, we may say, using the same terminology that was employed in the inter-atomic distance discussion, the Fig. 3 pattern is followed up to the 2⅓ R level. At that point the second rotational unit is activated. The initial specific heat level for rotation two is subject to the same n³ factor as the thermal specific heat, and it is therefore 1/n³ × 2/3 R = 1/12 R. This change in the negative initial level raises the net positive specific heat corresponding to the thermal value 3R from 2.333 R to 2.917 R, and enables the thermal motion to continue on the basis of the preferred negative initial level up to a considerably higher temperature.

Figure 3

When the rotation two curve reaches its end point at 2.917 R net positive specific heat, a further reduction of the initial level by a transition to the rotation three basis, where the higher rotation is available, raises the maximum to 2.975 R. Another similar transition follows, if a fourth vibrating unit is possible. The following tabulation shows the specific heat values corresponding to the initial and final levels of each curve. As indicated earlier, the units applicable to the second column under each heading are calories per gram mole per degree Kelvin.

Vibrating     Effective Initial Level       Maximum Net Specific Heat
  Units                                     (negative initial level)
    1         -0.667  R      -1.3243        2.3333 R       4.6345
    2         -0.0833 R      -0.1655        2.9167 R       5.7940
    3         -0.0247 R      -0.0490        2.9753 R       5.9104
    4         -0.0104 R      -0.0207        2.9896 R       5.9388
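The tabulated maxima follow from the statement that the initial level for n vibrating units is (2/3 R)/n³ below zero while the thermal maximum remains at 3R. A short sketch, with R taken as the 1.9869 calories per gram mole per degree Kelvin quoted earlier:

```python
# Maximum net specific heat for n vibrating units: 3R less the (2/3 R)/n^3
# initial level, converted to calories with R = 1.9869.
R = 1.9869   # cal per g-mole per deg K

for n in (1, 2, 3, 4):
    initial = -(2.0 / 3.0) / n**3          # initial level, in units of R
    max_net = 3.0 + initial                # maximum net specific heat, in R
    print(n, round(initial, 4), round(max_net, 4), round(max_net * R, 3))
# Compare the tabulation above: 2.3333 R / 4.6345, 2.9167 R / 5.7940, etc.
# (the last decimal differs slightly because of rounding in the tabulation).
```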

Ultimately the maximum net positive specific heat that is possible on the basis of a negative initial level is attained. Here a transition to a positive initial level takes place, and the curve continues on to the overall maximum. As a result of this mechanism of successive transitions, each number of vibrating units has its own characteristic specific heat curve. The curve for rotation one has already been presented in Fig.3. For convenient reference we will call this a type two curve. The different type one curves, those of two, three, and four vibrating units, are shown in Fig.4. As can be seen from these diagrams, there is a gradual flattening and an increase in the ratio of temperature to specific heat as the number of vibratory units increases. The actual temperature scale of the curve applicable to any particular element or compound depends on the thermal characteristics of the substance, but the relative temperature scale is determined by the factors already considered, and the curves in Fig.4 have been drawn on this relative basis.

Figure 4

As indicated by equation 5-8, the slope of the rotation two segment of the specific heat curve is only one-eighth of the slope of the rotation one segment. While this second segment starts at a temperature corresponding to 2¹/3 R specific heat, rather than from zero temperature, the fixed relation between the two slopes means that a projection of the two-unit curve back to zero temperature always intersects the zero temperature ordinate at the same point regardless of the actual temperature scale of the curve. The slopes of the three-unit and four-unit curves are likewise specifically related to those of the earlier curves, and each of these higher curves also has a fixed initial point. We will find this feature very convenient in analyzing complex specific heat curves, as each experimental curve can be broken down into a succession of straight lines intersecting the zero ordinate at these fixed points, the numerical values of which are as follows:

Vibrating    Specific Heat at 0º K (projected)
  Units
    1        -0.6667 R    -1.3243
    2         1.9583 R     3.8902
    3         2.6327 R     5.2298
    4         2.8308 R     5.6234

These values and the maximum net specific heats previously calculated for the successive curves enable us to determine the relative temperatures of the various transition points. In the rotation three curve, for example, the temperatures of the first and second transition points are proportional to the differences between their respective specific heats and the 3.8902 initial level of the rotation two segment of the curve, as both of these points lie on this line. The relative temperatures of any other pair of points located on the same straight line section of any of the curves can be determined in a similar manner. By this means the following relative temperatures have been calculated, based on the temperature of the first transition point as unity.

Vibrating          Relative Temperature
  Units      Transition Point    End Point
    1             1.000             1.80
    2             2.558             4.56
    3             3.086             9.32
    4             3.391            17.87
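The whole construction can be restated compactly in units of R and of the first transition temperature T1: each segment for n vibrating units has 1/n³ of the basic slope, begins where the preceding segment reaches its maximum, and is projected back to zero temperature. A sketch along these lines (my own restatement, not a quotation of the original calculation) reproduces both the projected intercepts and the relative transition temperatures listed above, to within rounding:

```python
# Specific heats in units of R, temperatures in units of T1.  Segment n has
# slope 3/n^3; it starts from the end point of segment n-1 and is projected
# back to zero temperature to obtain its fixed intercept.
max_net = {1: 2.3333, 2: 2.9167, 3: 2.9753, 4: 2.9896}   # from the earlier tabulation

intercept = {1: -0.6667}      # rotation one starts at -2/3 R
t_trans = {1: 1.0}            # first transition point defines the unit of temperature

for n in (2, 3, 4):
    slope = 3.0 / n**3
    intercept[n] = max_net[n - 1] - slope * t_trans[n - 1]    # projected 0 K value
    t_trans[n] = (max_net[n] - intercept[n]) / slope          # next transition point

for n in (1, 2, 3, 4):
    print(f"{n}   intercept {intercept[n]:7.4f} R   T(transition)/T1 {t_trans[n]:5.3f}")
```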

The curves of Figs.3 and 4 portray what may be called the “regular” specific heat patterns of the elements. These are subject to modifications in certain cases. For instance, all of the electronegative elements with displacements below 7 thus far studied substitute an initial level of -0.66 for the normal -1.32. Another common deviation from the regular pattern involves a change in the temperature scale of the curve at one of the transition points, usually the first. For reasons that will be developed later, the change is normally downward. Inasmuch as the initial level of each segment of the curve remains the same, the change in the temperature scale results in an increase in the slope of the higher curve segment. The actual intersection of the two curve segments involved then takes place at a level above the normal transition point. There are some deviations of a different nature in the upper portions of the curves where the temperatures are approaching the melting points. These will not be given any

consideration at this time because they are connected with the transition to the liquid state and can be more conveniently examined in connection with the discussion of liquid properties. As mentioned earlier, the quantity with which this and the next two chapters are primarily concerned is the specific heat at zero external pressure. In Chapter 6 the calculated values of this quantity will be compared with measured values of the specific heat at constant pressure, as the difference between the specific heat at zero pressure and that at the pressures of observation is negligible. Most conventional theory deals with the specific heat at constant volume rather than at constant pressure, but our analysis indicates that the measurement under constant pressure corresponds to the fundamental quantity.

CHAPTER 6

Specific Heat Patterns
Fig.5 is a specific heat curve derived from experimental data. The points shown in this graph are the measured values of the specific heat of silver. The accompanying solid lines are the segments of the theoretical four-unit curve of Fig.4 with the temperature scale located empirically. While the curve defined by the plotted points has the same general shape as the theoretical curve, it is quite different in appearance inasmuch as the sharp angles of the theoretical curve have been replaced by smooth and gradual transitions. The explanation of this difference lies in the manner in which the measurements are made. As indicated by equation 5-8 and the curves in Figs.3 and 4, the specific heat of an individual molecule can be represented by a succession of straight lines. Experimental observations, however, are not made on single molecules, but on aggregates of molecules, and the observed temperature of the aggregate is the average of many different individual molecular temperatures, which are distributed about the average in accordance with the probability relations. Midway between the transition points the relation between temperature and specific heat for most of the individual molecules is such that their specific heats lie on the same straight line in the diagram. The average consequently lies on the same line, and coincides with the true molecular specific heat corresponding to the average temperature. In the neighborhood of a transition point, however, the molecules that are individually at the higher temperatures cannot continue on the same line beyond the 3R limit, and must conform to a lower curve based on a higher number of rotating units. This operates to reduce the specific heat of the aggregate below the true molecular value for the prevailing average temperature. In the silver curve, Fig.5, for example, the true atomic specific heat at 75º K is 4.69. This would also be the average specific heat of the silver aggregate at that temperature if the silver atoms were able to continue vibrating on the basis of one rotating unit up to the point beyond which the probability distribution is negligible. But at a specific heat of 2¹/3 R (4.633) the vibration changes to the two-unit basis. Those atoms in the probability distribution that have specific heats above this level cannot conform to the one-unit line

but must follow a line that rises at a much slower rate. The lower specific heat of these atoms reduces the average specific heat of the aggregate, and causes the aggregate curve to diverge more and more from the straight line relation as the proportion of atoms reaching the transition point increases. The divergence reaches a maximum at the transition temperature, after which the specific heat of the aggregate gradually approaches the upper atomic curve. Because of this divergence of the measured (aggregate) specific heats from the values applicable to the individual atoms the specific heat of silver at 75º K is 4.10 instead of 4.69.

Figure 5: Specific Heat -Silver

A similar effect in the opposite direction can be seen at the lower end of the silver curve. Here the specific heat of the aggregate (the average of the individual values) could stay on the one-unit theoretical curve only if it were possible for the individual specific heats to fall below zero. But there is no negative thermal energy, and the atoms which are individually at temperatures below the point where the curve intersects the zero specific heat level all have zero thermal energy and zero specific heat. Thus there is no negative deviation from the average, and the positive deviation due to the presence of atoms with individual temperatures above zero constitutes the specific heat of the aggregate. The specific heat of a silver atom at 15º K is zero, but the measured specific heat of a silver aggregate at an average temperature of 15º K is 0.163. Evaluation of the deviations from the linear relationship in these transitional regions involves the application of probability mathematics, the validity of which was assumed as

a part of the Second Fundamental Postulate of the Reciprocal System. For reasons previously explained, a full treatment of the probability aspects of the phenomena now under discussion is beyond the scope of this work, but a general consideration of the situation will enable us to arrive at some qualitative conclusions which will be adequate for present purposes.

At the present stage of development of probability theory there are a number of probability functions in general use, each of which seems to have advantages for certain applications. For the purpose of this work the appropriate function is one that expresses the results of pure chance without modification by any other factor. Such a function is strictly applicable only where the units involved are all exactly alike, the distribution is perfectly random, the units are infinitely small, the variability is continuous, and the size of the group is infinitely large. The ordinary classes of events around which most present-day probability theory has been constructed, such as coin and dice experiments, obviously fail to meet these requirements by a wide margin. Coins, for instance, are not continuously variable with an infinite number of possible states. They have only two states, heads and tails. This means that a major item of uncertainty has become almost a certainty, and the shape of the probability distribution curve has been altered accordingly. Strictly speaking, it is no longer a true probability curve, but a combination curve of probability and knowledge.

The basic physical phenomena do conform closely to the requirements of a system in which the laws of pure chance are valid. The units are nearly uniform, the distribution is random, the variability is continuous, or nearly continuous, and the size of the group, although not infinite, is extremely large. If any of the probability functions in general use can qualify as representing pure chance, the most likely prospect is the so-called “normal” probability function, which can be expressed as

y = (1/√(2π)) e^(-x²/2) (6-1)

Tables of this function and its integral to fifteen decimal places are available.6 It has been found in the course of this work that sufficient accuracy for present purposes can be attained by calculating probabilities on the basis of this expression, and it has therefore been utilized in all of the probability applications discussed herein, without necessarily assuming the absolute accuracy of this function in these applications, or denying the existence of more accurate alternatives. For example, Maxwell’s asymmetric probability distribution is presumably accurate in the applications for which it was devised (a point that has not yet been examined in the context of the Reciprocal System), and it may also apply to some of the phenomena discussed in this work. However, the results thus far obtained, particularly in application to the liquid properties, favor the normal function. In any event it is clear that if any error is introduced by utilizing the normal function it is not large enough to be significant in this first general treatment of the subject matter.

On the foregoing basis, the distribution of molecules with different individual temperatures takes the form of a probability function øt, where t is the deviation from the average temperature. The contribution of the øt molecules at any specified temperature to the deviation of the specific heat from the theoretical value corresponding to the average

temperature depends not only on the number of these molecules but also on the magnitude of the specific heat deviation attributable to each molecule; that is, the difference between the specific heat of the molecule and that of a molecule at the average temperature of the aggregate. Since the specific heat segment from which the deviation takes place is linear, this deviation is proportional to the temperature difference t, and may be represented as kt. The total deviation due to the øt molecules at temperature t is then ktøt, and the sum of all deviations in one direction (positive or negative) may be obtained by integration. It is quite evident that the deviations of the experimental specific heat curves from the theoretical straight lines, both at the zero level and at the transition point have the general characteristics of the probability curves. However, the experimental values are not accurate enough, particularly in the temperature range of the lower transition, to make it worth while to attempt any quantitative correlations between the theoretical and experimental results. Furthermore, there is still some theoretical uncertainty with respect to the proper application of the probability function that prevents specifying the exact location of the probability curve. The uncertain element in the situation is the magnitude of the probability unit. Equation 6-1 is complete mathematically, but in order to apply it, or any of its derivatives, to any physical situation it is necessary to ascertain the physical unit corresponding to the mathematical unit. One pertinent question still lacking a definite answer is whether this probability unit is the same for all substances. If so, the lower portion of the curve, when reduced to a common temperature base, should be the same for all substances with the 1.32 initial level. On this basis, the specific heat of the aggregate at the temperature T0, where the theoretical curve intersects the zero axis, should be a constant. Actually, most of the elements with the -1.32 initial level do have a measured specific heat in the neighborhood of 0.20 at this point, but a few others show substantial deviations from this value. It is not yet clear whether this is a result of variability in the probability unit, or reflects inaccuracies in the experimental values. Whether all of the curves with the same maximum deviation (0.20) are coincident below T0 is likewise still somewhat uncertain. There is a greater spread in the observed specific heats below 0.20 than can be ascribed to errors in measurement, but most of the scatter can probably be explained as the result of lack of thermal equilibrium. At these low temperatures it no doubt takes a long time to establish equilibrium, and even an accurate measurement will not produce the correct result unless the aggregate is in thermal equilibrium. It is significant that the specific heats of the common elements which have been studied most extensively deviate only slightly from a smooth curve in this low temperature region. Fig.6, which shows the measured values of the specific heats of six of these elements on a temperature scale relative to T0, demonstrates this coincidence. If the probability unit is the same for all, or most, of the elements, as these data suggest, the deviation of the experimental curve from the theoretical curve for the single atom at the first transition point, T1, should also have a constant value. 
Preliminary examination of the curves of the elements that follow the regular pattern indicates that the values of this deviation actually do lie within a range extending from about 0.55 to about 0.70. Considerable additional work will be required before these curves can be defined

accurately enough to determine whether there is complete coincidence, but present indications are that the deviation at T1 is, in fact, a constant for all of the regular elements, and is in the neighborhood of three times the deviation at T0.
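A rough numerical illustration of this averaging effect can be given, though it is no more than a sketch of my own: the temperature values are loosely modeled on the silver example, the width of the molecular temperature distribution (the unresolved probability unit discussed above) is an arbitrary assumption, and the flat cap at the 2¹/3 R level is only a crude stand-in for the slower-rising two-unit segment.

```python
# Illustration only: average the piecewise-linear one-unit specific heat of the
# individual molecules over an assumed normal distribution of molecular
# temperatures, and observe how the sharp corners at T0 and T1 are rounded off.
import math

R  = 1.9865
T1 = 75.0                 # illustrative first transition temperature (roughly the silver value)
SIGMA = 12.0              # assumed width of the molecular temperature distribution (arbitrary)

def molecular_cp(t):
    """One-unit segment: -2/3 R at 0 K rising linearly to 2-1/3 R at T1.
    Negative values are cut off at zero; the rise is crudely capped at 2-1/3 R."""
    linear = 3.0 * R * t / T1 - (2.0 / 3.0) * R
    return max(0.0, min(linear, (7.0 / 3.0) * R))

def aggregate_cp(t_avg, steps=2001):
    """Average molecular_cp over a normal distribution of temperatures centred on t_avg."""
    total = weight = 0.0
    for i in range(steps):
        t = t_avg - 4.0 * SIGMA + 8.0 * SIGMA * i / (steps - 1)
        w = math.exp(-0.5 * ((t - t_avg) / SIGMA) ** 2)
        total += w * molecular_cp(t)
        weight += w
    return total / weight

for t in (15.0, 45.0, 75.0):
    print(f"T = {t:4.0f} K   molecule {molecular_cp(t):5.2f}   aggregate {aggregate_cp(t):5.2f}")
```

With these assumptions the computed aggregate values are positive at the low end, where the molecular value is zero, and depressed near the transition point, which is the qualitative behavior of the measured silver curve; no quantitative agreement is implied.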

Figure 6: Specific Heat -Low Temperatures

With the benefit of the foregoing information as to the general nature of the deviations from the theoretical curves of Chapter 5 due to the manner in which the measurements are made, we are now prepared to examine the correlation between the theoretical curves and the measured specific heats. In order to arrive at a complete definition of the specific heat of a substance it is not only necessary to establish the shapes of the specific heat curves, the objective at which most of the foregoing discussion is aimed, but also to define the temperature scale of each curve. Although the theoretical conclusions with respect to these two aspects of the specific heat situation, like all other conclusions in this work, are derived by developing the consequences of the fundamental postulates of the Reciprocal System of theory, they are necessarily reached by two different lines of theoretical development. For this reason a more meaningful comparison with the experimental data can be presented if we deal with these two aspects independently. In this chapter, therefore, the experimental values will be compared graphically with theoretical curves in which the temperature scales are empirical. Chapter 7 will complete the definition of the curves by deriving the relevant temperature magnitudes.

The curves of Fig.7 are typical of those of most of the elements.7 As indicated in Fig.4, the final straight line segment of each curve occupies the greater part of the temperature range of the solid state in the case of the high melting point elements. The significant features of the curves are therefore confined to the lower temperatures, and in order to bring them out more clearly only the lower temperature range (up to 300º K) is shown in the illustrations that follow. The remaining sections of the curves of Fig.7 are extensions of the lines shown in the diagram, except in the case of tungsten, which undergoes a transition to the four-unit status at about 325º K.

Figure 7: Specific Heat

Fig.8 is a similar group of specific heat curves for four of the electronegative elements with the -0.66 initial level. Aside from this higher initial level these curves are identical with those of Fig.7 when all are reduced to a common temperature scale. The transition to the two-unit vibration takes place at 4.63 (2¹/3 R) regardless of the higher initial level. This point will be given further consideration in Chapter 7. The upper portions of the lead and antimony curves, which are not shown on the graph, are extensions of the lines in the diagram. Arsenic and silicon have transitions at temperatures above 300º K.

Figure 8: Specific Heat

As noted in Chapter 5, there are a number of elements that undergo a modification of the temperature scale at the first transition point. Two curves with the modified second segment are shown in Fig.9. These two curves actually apply to four elements, as the specific heat of lithium follows the aluminum curve, while that of ruthenium coincides with the molybdenum curve. Coincidence of the specific heat curves of different elements, as in the instances mentioned, is not as uncommon as might be expected. The number of possible curve patterns is quite limited, and, as we will see in the next chapter, where the nature of the change in the temperature will be examined, the temperature factors are confined to specific values mainly within a relatively narrow range. Also included in Fig.9 is an example of a specific heat curve for an element which undergoes an internal rearrangement that modifies the thermal pattern. The measurements shown for samarium follow the regular pattern up to the vicinity of the first transition point at 35º K. Some kind of a modification of the molecular structure is evidently initiated at this point in lieu of, or in addition to, the normal transition to the two-unit vibrational status. This absorbs a considerable quantity of heat, which manifests itself as an addition to the measured specific heat over the next portion of the temperature range. By about 175º K the readjustment is complete, and the specific heat returns to the normal curve. Most of the other rare earth elements undergo similar readjustments at comparable temperatures. Elsewhere, if changes of this kind take place at all, they almost always occur at relatively high temperatures. The reason for this peculiarity of the rare earth group is, as yet, unknown.

Figure 9: Specific Heat

All of the types of deviations from the regular pattern that have been discussed thus far are found in the electronegative elements of the lower rotational groups. There is also an additional source of variability in the specific heats of these elements, as their atoms can combine with each other to form molecules. The result is a wide enough variety of

behavior to give almost every one of these elements a unique specific heat curve. Of special interest are those cases in which the variation is accomplished by omitting features of the regular pattern. The neon curve, for example, is a single straight line from the -1.32 initial level to the melting point. The specific heat curve of a hydrogen molecule, Fig.10, is likewise a single straight line, but hydrogen has no rotational specific heat component at all, and this line therefore extends only from the negative initial level, -1.32, to the specific heat of the positive initial level, +1.32, at which point melting takes place. The specific heats of binary compounds based on the normal orientation, simple combinations of Division I and Division IV elements, follow the same pattern as those of the electropositive elements. In these compounds each atom behaves as an individual thermal unit just as it would in a homogeneous aggregate of like atoms. The molecular specific heats of such compounds are twice as large as the values previously determined for the elements, not because the specific heat per atom is any different, but because there are two atoms in each formula molecule.

Figure 10: Specific Heat -Hydrogen

The curves for KCl and CaS, Fig.11, illustrate the specific heat pattern of this class of compounds. Some binary compounds of other structural types conform to the same regular pattern as in the curve for AgBr, also shown in Fig.11.

Figure 11

As in the elements, there is also a variation of this regular pattern in which certain compounds of the electronegative elements have a higher initial level, but in compounds such as ZnO and SnO this level is zero, rather than -0.66, as it is in the elements. Some of the larger molecules similarly act thermally as associations of independent atoms. CaF2 and FeS2 are typical. More often, however, two or more of the constituent atoms of the molecule act as a single thermal unit. For example, both the KHF2 molecule, which contains four atoms, and the CsClO4 molecule, which contains six, act thermally as three units. In the subsequent discussion the term thermal group will be used to designate any combination of atoms that acts as a single thermal unit. Where individual atoms participate in thermal motion jointly with groups of atoms, the individual atoms will be regarded as monatomic groups. On this basis we may say that there are three thermal groups in each of the KHF2 and CsClO4 molecules. The great majority of compounds not only form thermal groups but also alter the number of groups in the molecule as the temperature varies. A common pattern is illustrated by the chromium chlorides. CrCl2 acts as a single thermal group at very low temperatures; CrCl3 as two. The initial specific heat levels are -1.32 and -2.64 respectively. There is a gradual increase in the average number of thermal groups per molecule up to the first transition point, at which temperature all atoms are acting independently. At the initial point of the second segment of the curve this independent status is maintained, and above the transition temperature the CrCl2 molecule acts as three thermal groups, while CrCl3 has four. At the present stage of the investigation we can determine from theory the possible ways in which a molecule can split up into thermal groups, but we are not yet able to specify on theoretical grounds just which of these possibilities will prevail at any given temperature, or where the transition from one to the other will take place. The theoretical information thus far developed does, however, enable us to analyze the empirical data

and to establish the specific heat pattern of each substance; that is, to determine just how it acts thermally. Aside from some cases, mainly involving very large molecules, where the specific heat pattern is unusually complex, and those instances where experimental errors lead to erroneous interpretation, it is possible to identify the effective number of thermal groups at the critical points of the curves. Once this information is available for any substance, the definition of its specific heat curve is essentially complete, except for the temperature scale, the determinants of which will be identified in Chapter 7. Where n is the number of active thermal groups in a compound, the initial level is -1.32 n, the initial point of the second segment of a Type 1 curve is 3.89 n, and the first transition point is 4.63 n.

The tendency of the atoms of multi-atom molecules to form thermal groups is particularly evident where the molecules contain radicals, because of the major differences in the cohesive forces that are responsible for the existence of the radicals. The extent to which the association into thermal groups is maintained naturally depends on the relative strength of the cohesive and disruptive forces. Those radicals such as OH and CN in which the bonds are very strong act as single thermal groups under all ordinary conditions. Those with somewhat weaker bonding (CO3, SO4, NO3, etc.) also act as single units at the lower temperatures. Thus we find that at the initial points of both the first and second segments of the specific heat curves there are two groups in MnCO3, three in Na2CO3, four in KAl(SO4)2, five in Ca3(PO4)2, and so on. At higher temperatures, however, radicals of this class split up into two or more thermal groups. Still weaker radicals such as ClO4 constitute two thermal groups even at low temperatures.

It was mentioned in Volume I that the boundary line between radicals and groups of independent atoms is rather indefinite. In general, the margin of bond strength required for a structural radical is relatively large, and we find many groups commonly recognized as radicals crystallizing in structures such as the CaTiO3 cube in which the radical, as such, plays no part. The margin required in thermal motion is much smaller, particularly at the lower temperatures, and there are many atomic groups that act thermally in the same manner as the recognized radicals. In Li2CO3, for example, the two lithium atoms act as a single thermal group, and the specific heat curve of this compound is similar to that of MgCO3 rather than of Na2CO3.

Extension of the thermal motion by breaking some of the stronger bonds at the higher temperatures gives rise to a variety of modifications of the specific heat curves. For example, MoS2 has only two thermal groups in the lower range, but the S2 combination breaks up as the temperature rises, and all atoms begin vibrating independently. VCl2 similarly goes from one group to three. Splitting of the radical accounts for a change from two groups to three in SrCO3, from one to three in AgNO3, and from two to six in (NH4)2SO4. All of these alterations take place at or prior to the first transitional point. Other compounds make the first transition on the initial basis and break up into more thermal groups later.
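The critical levels just given for a molecule acting as n thermal groups are simple multiples of the single-atom values, and can be evaluated directly. A minimal sketch using the rounded per-group constants quoted above:

```python
# Critical specific heat levels for a molecule that acts as n thermal groups,
# using the per-group constants quoted above (in cal per g-mol per deg K).
def critical_levels(n_groups):
    return {
        "initial level":           -1.32 * n_groups,
        "second segment initial":   3.89 * n_groups,
        "first transition point":   4.63 * n_groups,
    }

# KHF2 and CsClO4 are both described above as acting thermally as three groups,
# so their first transition comes at about 3 x 4.63 = 13.89.
print(critical_levels(3))
```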
In a common pattern, a radical that acts as one thermal group at the low temperatures splits into two groups in the temperature range of the second segment of the curve, just as the CO3 radical in SrCO3, PbCO3 and similar compounds does at a lower level. There are a number of structures such as KMnO4 and KIO3, where this

increases the number of groups in the molecule from two to three. Pb3(PO4)2, in which there are two radicals, goes from five to seven, and so on. The effect of water of crystallization is variable, depending on the strength of the cohesion. For example, BaCl2.2H2O acts as three thermal groups at the lower temperatures, the water molecules being firmly bound to the atoms of the compound. As the temperature increases these bonds give way, and the molecule begins vibrating on a five-group basis. In Al2(SO4)3.6H2O and in NH4Al(SO4)2.12H2O the bonds with the water molecules remain fixed through the entire experimental range, up to about 300º K, and the thermal groups in these hydrates are five and six respectively, just as in the corresponding anhydrous compounds.

An example of a drastic change in thermal behavior due to the disruption of inter-atomic bonds by thermal forces is shown in Fig.12. The radical CrO3 in the compound AgCrO3 is a single thermal group at very low temperatures. There is a gradual separation into two groups in the temperature range up to the first transition point, and the change to the two-unit vibration is made on the basis of a two-group radical. At about 150º K all four atoms in the radical begin vibrating independently, and the molecule undergoes a transition from the second segment of a three-group curve to the second segment of a five-group curve. At about 250º K the compound makes the normal transition to three-unit vibration, continuing as five thermal groups.

The compounds used as examples in the foregoing discussion were selected mainly on the basis of the availability of experimental data within the significant temperature ranges. For an accurate definition of the slope of each of the straight line segments of any empirical curve it is necessary to have measurements in the temperature range where the deviations due to the proximity of a transition point are negligible. The examples have been taken from among those of the experimental results that satisfy this requirement.

Figure 12: Specific Heat -AgCrO3

In a theoretical treatment of specific heat such as that in this present work it is necessary to deal with this quantity on a per molecule basis. For practical application, however, it is more convenient to use the specific heat per unit of mass, and most of the collected data are expressed in this manner. It should be noted that the effect of association into thermal groups is to reduce the specific heat per unit of mass. For this reason, the specific heat of most complex compounds is relatively low at low temperatures, and rises toward the values applicable to individual atoms as increasing temperature breaks up the original thermal groups. The simplest organic compounds, those composed of only two or three structural units, generally divide into no more than two thermal groups. Many of the somewhat larger organic molecules, particularly among the ring structures and branched compounds, follow the same rule. The specific heat relations of these compounds are similar to those of the inorganic compounds, except that there are more organic compounds in which the thermal motion is restricted to one rotational unit. These substances, the hydrocarbons and some other compounds of the lower elements, undergo a transition to a positive initial level on reaching their first (and only) transition point. The resulting specific heat curve, the one illustrated in Fig.3, is not much more than a straight line with a bend in it. A few compounds, including ethane and carbon monoxide, even omit the bend, and do not make the transition to the positive initial level. Further addition of structural units, such as CH2 groups, to the simple organic compounds results in the activation of internal thermal groups, units that vibrate thermally within the molecules. The general nature of the thermal motion of these internal groups is identical with that of the thermal motion of the molecule as a whole. But the internal motion is independent of the molecular thermal motion, and its scalar direction (inward or outward) is independent of the scalar direction of the molecular motion. Outward internal motion is thus coincident with outward molecular motion during only one quarter of the vibrational cycle. Since the effective magnitude of the thermal motion, which determines the specific heat, is the scalar sum of the internal and molecular components, each unit of internal motion adds one-half unit of specific heat during half of the molecular cycle. It has no thermal effect during the other half of the cycle when the molecule as a whole is moving inward. Because of the great diversity of the organic compounds the specific heat patterns occur in a variety that is correspondingly large. The effect of internal motion in those of the organic compounds in which it is present is well illustrated, however, by the specific heats of the normal paraffins. The values of the initial levels and the specific heat at T1 for the compounds of this series in the range from C3(propane) to C16(hexadecane) are listed in Table 21, together with the number of internal thermal units in the molecule of each compound.

Table 21: Specific Heats -Paraffin Hydrocarbons
                  Internal         Initial Levels      Specific Heat
                Thermal Units      1st       2nd           at T1
Propane               0           -2.64      2.64           9.27
Butane                0           -2.64      2.64           9.27
Pentane               2           -2.64      6.62          13.90
Hexane                3           -2.64      7.95          16.22
Heptane               4           -3.96      9.27          18.54
Octane                5           -3.96     10.59          20.86
Nonane                6           -5.30     11.92          23.18
Decane                7           -5.30     13.24          25.49
Hendecane             8           -5.30     14.57          27.81
Dodecane              9           -6.62     15.89          30.12
Tridecane            10           -6.62     17.22          32.44
Tetradecane          11           -6.62     18.54          34.76
Pentadecane          12           -6.62     19.86          37.08
Hexadecane           13           -7.95     21.19          39.39

Propane and butane have only the two molecular thermal groups corresponding to the positive and negative ends of the molecules, and their specific heat at T1 is the normal two-group value: 9.27. Beginning with two internal groups in pentane, each added CH2 structural group becomes an internal thermal unit, and adds 2.317 to the total specific heat of the molecule at the transition point. The initial level of the first segment of the specific heat curve is -2.64 (the two-group value) in the lower compounds, and changes slowly, adding units of -1.32, as the length of the chain increases. The initial level of the second segment is 2.64 in butane and propane. In the higher compounds, each of which consists of n structural groups (CH2 and CH3), this second initial level is 1.324 n. The values thus derived theoretically are all consistent with the experimental curves. In a few cases the intersection of the two curve segments may not coincide with the calculated specific heat of the transition point, but these deviations, if they are real, are small enough to be explainable on the basis of changes in the temperature factors, the nature of which will be one of the subjects of discussion in Chapter 7.

Branching of a hydrocarbon chain tightens the structure and tends to reduce the number of internal thermal units. For example, octane has five internal thermal units, and a specific heat of 20.86 at the transition point. But 2,2,4-trimethyl pentane, a branched compound with the same composition, has no internal motion at all, and the T1 specific heat of this compound is 9.27, identical with that of the C3 paraffin, propane. Ring formation has a similar effect. Ethyl-benzene and the xylenes, which are also C8 compounds, have some internal motion, but their T1 specific heats are 11.59 (one internal unit) and 13.90 (two internal units) respectively, well below the octane level. In Fig.13 the specific heat curves of hexane (straight chain) and benzene (ring), both C6 hydrocarbons, are contrasted.
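These relations account for the last two columns of Table 21. A short sketch (a restatement of the rules just stated, not an independent calculation) reproduces them to within rounding:

```python
# T1 specific heats and second-segment initial levels of the normal paraffins,
# from the relations stated above: 9.27 for the two molecular groups, plus
# 2.317 for each internal thermal unit; second initial level 1.324 per
# structural group (CH2 or CH3), i.e. per carbon atom, above butane.
paraffins = [
    # (name, carbon atoms, internal thermal units)
    ("propane", 3, 0), ("butane", 4, 0), ("pentane", 5, 2), ("hexane", 6, 3),
    ("octane", 8, 5), ("dodecane", 12, 9), ("hexadecane", 16, 13),
]

for name, n_carbon, internal in paraffins:
    cp_t1 = 9.27 + 2.317 * internal
    second_initial = 2.64 if n_carbon <= 4 else 1.324 * n_carbon
    print(f"{name:10s}   2nd initial {second_initial:5.2f}   cp at T1 {cp_t1:5.2f}")
```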

The subject matter of this and the preceding five chapters consists of various aspects of the volumetric and thermal relations of material substances. The study of these relations was the principal avenue of approach to the clarification of basic physical processes that ultimately led to the identification of the physical universe as a universe of motion, and the determination of the nature of the fundamental features of that universe. These relations were examined in great detail over a period of many years, during which thousands of experimental results were analyzed and studied. Incorporation of the accumulated mass of information into the theoretical structure was the first task undertaken after the formulation of the postulates of the Reciprocal System of theory, and it has therefore been possible to present a reasonably complete description of each of the phenomena thus far discussed, including what we may call the small-scale effects. Beginning with the next chapter, we will be dealing with subjects not covered in the inductive phase of the theoretical development. In this second phase, the deductive development, we are extending the application of the theory to all of the other major fields of physical science, in order to demonstrate that it is, in fact, a general physical theory. Obviously, where the area to be covered is so large, no individual investigator can expect to carry the development into great detail. Consequently, some of the conclusions expressed in the subsequent pages with respect to the small-scale features of the areas covered are subject to a degree of uncertainty. In other cases it will be necessary to leave the entire small-scale pattern for some future investigation.

Figure 13: Specific Heat

CHAPTER 7

Temperature Relations
As explained in introducing the comparisons of the theoretical specific heats with experimental results, the curves in Figs. 5 to 13 verify only the specific heat pattern, the temperature scale of each curve being adjusted to the empirical results. In order to complete the definition of the curves we will now turn our attention to the temperature relations. All of the distinctive properties of the different kinds of matter are determined by the rotational displacements of the atoms of which these substances are composed, and by the

way in which the displacements enter into the various physical phenomena. As stated in Volume I,
The behavior characteristics, or properties, of the elements are functions of their respective displacements. Some are related to the total net effective displacement... some are related to the electric displacement, others to the magnetic displacement, while still others follow a more complex pattern. For instance, valence, or chemical combining power, is determined by either the electric displacement or one of the magnetic displacements, while the inter-atomic distance is affected by both the electric and magnetic displacement, but in different ways.

The great variety of physical phenomena, and the many different ways in which different substances participate in these phenomena result from the extension of this “more complex pattern” of behavior to a still greater degree of complexity. One of these more complex patterns was examined in Chapter 4, where we found that the response of the solid structure to compression is related to the cross–section against which the pressure is exerted. The numerical magnitude involved in this relation is determined by the product of the effective cross-sectional factors, together with the number of rotational units that participate in the action, a magnitude that determines the force per unit of the cross– section. Inasmuch as one of the dimensions of the cross–section may take either the effective magnetic displacement, represented by the symbol b in the earlier discussion, or the electric displacement, represented by the symbol c, two new symbols were introduced for purposes of the compressibility chapter: the symbol z to represent the second displacement entering into the cross–section (either b or c), and the symbol y to represent the number of effective rotational units (related to the third of the displacements). The a– b–c factors were thus represented in the form a–z–y. The values of these factors relative to the positions of the elements in the periodic table follow the same general pattern in application to specific heat as in compressibility, and most of the individual values are either close to those applying to compressibility or systematically related to those values. We will therefore retain the a–z–y symbols as a means of emphasizing the similarity. But the nature of the thermal relations is quite different from that of the relations that apply to compressibility. The temperature is not related to a cross–section; it is determined by the total effective rotation. Consequently, instead of the product, azy, of the effective rotational factors, the numerical magnitude defining the temperature scale of the thermal relations is the scalar sum, a+z+y, of these rotational values. This kind of a quantity is quite foreign to conventional physics. The scalar aspect of vectorial motion is recognized; that is, speed is distinguished from velocity. But orthodox physical thought does not recognize the existence of motion that is inherently scalar. In the universe of motion defined by the postulates of the Reciprocal System of theory, on the other hand, all of the basic motions are inherently scalar. Vectorial motions can exist only as additions to certain kinds of combinations of the basic scalar motions. Scalar motion in one dimension, when seen in the context of a stationary spatial reference system, has many properties in common with vectorial motion. This no doubt accounts for the failure of previous investigators to recognize its existence. But when motion extends into more than one dimension there are major differences in the way these two types of motion present themselves (or do not present themselves) to observation. Any

number of separate vectorial motions of a point can be combined into a single resultant, and the position of the point at any specified time can be represented in a spatial system of reference. This is a necessary consequence of the fact that vectorial motion is motion relative to that system of reference. But scalar motions cannot be combined vectorially. The resultant of scalar motion in more than one dimension is a scalar sum, and it cannot be identified with any one point in spatial coordinates. Such motion is therefore incapable of representation in a spatial reference system of the conventional type. It does not follow, however, that inability to represent this motion within the context of the severely limited kind of reference system that we are accustomed to use means that such motion is non-existent. To be sure, our direct perception of physical events is limited to those that can be represented in this type of a reference system, but Nature is not under any obligation to stay within the perceptive capabilities of the human race. As pointed out in Chapter 3, Volume I, where the subject of reference systems was discussed at length, there are many aspects of physical existence (that is, many motions, combinations of motions, or relations between motions) that cannot be represented in any single reference system. This is not, in itself, a new, or unorthodox conclusion. Most modern physicists, including all of the leading theorists, have realized that they cannot accommodate all of present-day physical knowledge within the limitations of fixed spatial reference systems. But their response has been the drastic step of cutting loose from physical reality, and building their fundamental theories in a shadow realm where they are free from the constraints of the real world. Heisenberg states their position explicitly. “The idea of an objective real world whose smallest parts exist objectively in the same sense as stones and trees exist, independently of whether or not we observe them... is impossible,” 8 he says. In the strange half-world of modern physical theory the only realities are mathematical symbols. Even the atom itself is “in a way only a symbol,” 9 Heisenberg tells us. Nor is it required that symbols be logically related or understandable. Nature, these front rank theorists contend, is inherently ambiguous and subject to uncertainties of a fundamental and inescapable nature. “The world is not intrinsically reasonable or understandable,” Bridgman explains, “It acquires these properties in ever-increasing degree as we ascend from the realm of the very little to the realm of everyday things.” 10

What the Reciprocal System of theory has done in this area is to show that once the true status of the physical universe as a universe of motion is recognized, and the properties of space and time are defined accordingly, there is no need for the retreat from reality, or for the attempt to blame Nature for the prevailing inability to understand the basic relations. The existence of phenomena not capable of representation in a spatial reference system is a fact that we must come to terms with, but the contribution of the Reciprocal System has been to show that the phenomena outside the scope of the conventional spatial reference systems can be described and evaluated in terms of the same real entities that exist within the reference system.
The scalar sum of the magnitudes of motions in different dimensions, the quantity that we will now use in analyzing the temperature relations, is an item of this nature. It is just as real as any other physical quantity, and its components, the motions in the individual dimensions, are motions of the same nature as those one-dimensional scalar motions that are capable of representation in the spatial reference

systems, even though the scalar sum cannot be so represented in any manner accessible to our direct perception. In the theoretical minimum situation, where the effective thermal factors are 1-0-0, and the scalar sum of these factors is one unit, the temperature of the initial negative level is one unit out of the total of 128 that corresponds to the full 510.7 degrees temperature unit of the condensed states. But since the thermal motion is effective in only one direction, the ratio becomes 1/256, and the zero point temperature, T0, the temperature at which the thermal motion counterbalances the negative initial level of vibration, is 1.995º K. For a substance with thermal factors a, z, and y, and the normal 2/9 initial specific heat level, we then have

T0 = 1.995 (a+z+y) degrees K (7-1)

This value completes the definition of the specific heat curves by defining the temperature scales. It will be more convenient, however, to work with another of the fixed points on the curves, the first transition point, T1. As this is the unit specific heat level on the initial linear section of the curve, while T0 is 2/9 unit above the initial point, the temperature of the first transition point is

T1 = 8.98 (a+z+y) degrees K (7-2)

Thermal factors of the elements for which reliable specific heat patterns are available, and the corresponding theoretical first transition temperatures (T1) are listed in Table 22, together with the T1 values derived from curves of the type illustrated in Figs. 5 to 13, in which the temperature scale is empirical. In effect, this is a comparison between theoretical and experimental values of the temperature scales of the specific heat curves. The experimental values are subject to some uncertainty, as they have been obtained by inspection from graphs in which the linear portions of the curves were also drawn from visual inspection. Greater accuracy could be attained by using more sophisticated techniques, but the time and effort required for this refinement did not appear to be justified for the purposes of this initial investigation of the subject.

The compressibility factors derived in Chapter 4, with a few values restated in different, but equivalent, terms, are shown in the table for comparison with the corresponding thermal factors. The principal determinants of the compressibility values, aside from the effect of the pressure level itself (including the internal pressure), were found to be the magnitude and sign (positive or negative) of the displacement in the electric dimension. The rotational group to which the element belongs (determined by the magnetic displacements) is much less significant. In the thermal situation the rotational group becomes the dominant influence. The elements of Group 3B (magnetic displacements 3-3), midway in the group order, generally have thermal factors close to the compression values. In half of the 3B elements included in the table the deviation is no more than one unit. But in each direction from this central group there is a systematic deviation from the compressibility values, upward in the lower groups and downward in the higher groups. Every element above number 42, molybdenum, that is included in the table, with one exception, has thermal factors either equal to or less than the corresponding compressibility factors. Every element below molybdenum, with three exceptions (two of which are alkali metals), has thermal factors that are either equal to or greater than the corresponding compressibility factors.
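Equations 7-1 and 7-2 are straightforward to apply. A minimal sketch follows; the 126º K and 108º K figures it checks against are the molybdenum values quoted later in this chapter.

```python
# Equations 7-1 and 7-2 in numerical form.  The 510.7 degree unit divided by
# 256 gives about 1.995 deg K per unit of (a+z+y); T1 is 9/2 times T0, since
# T1 lies one full specific heat unit above the initial level while T0 lies
# only 2/9 unit above it.
def T0(a, z, y):
    return 510.7 / 256.0 * (a + z + y)     # equation 7-1, about 1.995 (a+z+y)

def T1(a, z, y):
    return 4.5 * T0(a, z, y)               # equation 7-2, about 8.98 (a+z+y)

# Thermal factors 4-8-2 (molybdenum and ruthenium at low temperature) and the
# 4-6-2 factors that apply above their transition point:
print(round(T1(4, 8, 2)), round(T1(4, 6, 2)))    # -> 126 108
```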

It was noted in Chapter 4 that in compression the lowest electropositive elements do not take the minimum 1-1-1 factors of their electronegative counterparts, but have a = 4 in all of the elements of this class investigated by Bridgman. The reason for this difference in behavior is not yet known (although it is no doubt connected with the all–positive nature of the rotational displacement of these elements), but it is even more pronounced in the thermal factors. Except for the alkali metals above sodium, which, as noted above, have thermal factors even lower than the compressibility values, the lower electropositive elements not only maintain the 6-unit minimum (4-1-1 or equivalent) but raise the effective magnitudes of their thermal factors still farther by omitting the n = 1 section of the specific heat curve based on equation 5-6, and going immediately to n = 2, which increases the temperature scale by a factor of 8. This pattern is followed by boron and carbon, and in part, by beryllium. The corresponding members of the next higher group, magnesium, aluminum, and silicon, also have n = 2 from the start of the thermal motion, but here the second unit is one–dimensional rather than three–dimensional. Beryllium combines the two patterns. It has the same thermal factors as lithium, but a dimensional multiplier halfway between those of lithium and boron, the two adjoining elements.

Table 22: Effective Rotational Factors
Factors Comp Therm. Li 4-1-1 4-2-1 4-1-1 Be 4-4-1 4-2-1 n 2 2 2 8 4-1-1 2 8 4-1-1 4-4-1 4-3-1 4-1-1 4-1-1 3-1-1 Al 4-5-1 4-2-1 4-1-1 Si 4-4-1 4-6-2 P-r 4-6-2 P-w 4-2-1 S 4-1-1 4-4-1 Cl 4-2-1 Ar 1-1-1 4-6-1 4-2-1 4-1-1 4-4-1 B C-d C-g Na Mg 8 8 8 2 2 2 2 2 2 14 12 14 56 12 48 48 72 64 6 12 10 14 12 24 24 7 9 7 3 126 108 314 314 269 269 431 647 575 54 108 90 126 108 216 216 63 81 63 27 T1 Tot. Calc. Obs. 131 110 323 323 267 267 420 635 578 52 109 91 131 112 220 207 66 84 62 28 Rh Pd Ag Cd In Sn Sb Te I Xe Cs Ru Y Zr Mo Factors 4-2-1 4-3-1 4-8-1 4-4-1 4-8-2 4-8-2 4-6-2 4-8-2 4-8-2 4-6-2 4-8-2 4-8-1 4-6-1 4-6-2 4-4-2 4-4-1 4-4-2 4-3-1 4-4-1 2-2-1 4-4-1 4-6-2 4-4-1 4-2-1 4-1-1 4-4-1 4-3-1 4-3-1 4-2-1 2-2-1 1-1-0 4-1-1 1-1-0 8 9 14 12 14 12 13 11 10 9 8 5 12 7 6 8 7 5 2 2 72 81 126 108 126 108 117 99 90 81 72 45 108 63 54 72 63 45 18 18 T1 71 84 129 107 128 107 117 95 91 78 72 46 105 66 57 68 61 44 19 17 Comp.Therm. Tot. Calc. Obs.

K Ca Sc Ti V Cr Mn Fe Co Ni Cu Zn Ga Ge As Se Br Kr Rb

4-1-1 2-1-1 4-3-1 4-3-1 4-6-1 4-5-1 4-8-1 4-8-2 4-8-1 4-8-3 4-6-2 4-8-1 4-8-2 4-8-1 4-8-1 4-5-1 4-8-1 4-8-4 4-6-2 4-8-1 4-8-2 4-6-1 4-8-1 4-8-2 4-6-1 4-6-1 4-6-2 4-4-1 4-3-1 2-1-1 4-4-1 4-8-1 4-4-1 4-6-2 4-1-1 4-3-1 4-2-1 1-1-0 4-1-1 1-1-0

4 8 11 10 14 15 12 14 13 10 16 12 14 11 14 11 12 8 4 13 12 8 6 2 2

36 72 99 90 126 135 108 126 117 90 144 108 126 99 126 99 108 72 36 117 108 72 56 18 18

32 76 103 88 124 133 107 162 128 115 92 142 108 126 100 131 97 108 73 36 119 106 75 54 20 20

Ba La Pr Nd Sm Eu Gd Tb Dy Ho Er Tm Yb Hf Ta W Re Ir Pt Au Hg Tl Pb Bi

4-2-1 4-4-1 4-4-1 4-4-1 4-4-1 4-4-1 4-4-1 4-4-1 4-4-1 4-4-1 4-4-1 4-4-1 4-2-1 4-8-2 4-8-3

4-8-3 4-8-2 4-6-2 4-4-1 4-4-1 4-3-1

2-1-1 2-2-1 1-1-1 1-1-1 2-1-1 2-1-1 2-2-1 2-2-1 2-2-1 2-1-1 1-1-1 1-1-1 2-1-1 4-3-1 4-3-1 4-6-2 4-4-2 4-4-1 4-6-1 4-5-1 4-3-1 4-1-1 2-1-1 2-1-1 2-1-1 2-2-1

4 5 3 3 4 4 5 5 5 4 3 3 4 8 8 12 10 9 11 10 8 6 4 4 4 5

36 45 27 27 36 36 45 45 45 36 27 27 36 72 72 108 90 81 99 90 72 54 36 36 36 45

34 42 27 31 36 33 48 44 41 33 28 29 37 71 74 108 93 78 98 88 76 57 32 34 33 44

The option of one dimension or three dimensions is open whenever motion advances from one unit to two units, but not under any other conditions. Three-dimensional motion of one displacement unit is meaningless, as 1³ = 1. After two units there is no option, as there cannot be more than two units in linear succession, for reasons that were discussed in Volume I. But two-unit motion can be either one-dimensional or three-dimensional. At the point where the advance from one to two units takes place, the motion is therefore able to take the dimensions that are best suited to the existing situation. A one-dimensional increase in the value of n results in increasing the temperature scale by a factor of 2 rather than 8. The alkali metals, which diverge from the normal electropositive behavior in a number of respects because of their low electric displacement, follow the same pattern as the elements listed in the preceding paragraph, but one step lower, as indicated in the following comparison:

Group     Alkalis    Other Positive
 1B        n = 2        n = 8
 2A        4-x-x        n = 2
 2B        1-1-x        4-x-x
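On my reading of the relation just described, the factor of 2 versus 8 is simply the dimensional power of the unit step; a trivial sketch:

```python
# Temperature-scale multiplier when the vibration advances from n = 1 to n = 2:
# the advance contributes a factor of 2 in each dimension in which it occurs,
# so a three-dimensional advance multiplies the scale by 2**3 = 8 and a
# one-dimensional advance by 2**1 = 2.
def scale_multiplier(dimensions):
    return 2 ** dimensions

print(scale_multiplier(3), scale_multiplier(1))   # -> 8 2
```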

As we found in the specific heat investigation, the electronegative elements below displacement 7 have a half-size initial negative specific heat level: 1/9 unit instead of the normal 2/9. It might be expected that this would result in a net effective specific heat of

8/9 unit or 2²/3 R, at the transition point instead of the 7/9 unit (2¹/3 R) that exists when the initial negative level is 2/9 unit. But it is quite clear from the measured specific heat values that this is not true. The first transition point in the specific heat curves of the electronegative elements is 2¹/3 R just as it is in the curves with the 2/9 unit (2/3 R) negative initial level. Apparently the restriction that prevents the existence of the more negative initial level in the specific heat of these elements is gradually eliminated as the temperature rises, so that at the transition point the effective negative component of the specific heat is the normal 2/9 unit.

The thermal factors of the higher inert gases, krypton and xenon, which have no rotation in the electric dimension, are 1-1-0 rather than 1-1-1, as in compressibility. This is a peculiarity of the mathematics, and has no physical significance. In both cases the meaning of the symbols is that the effective magnitude is determined entirely by the factors a and z. In multiplication this requires a unit value in the y position, whereas in addition a zero is required for the same purpose. But this equivalence of the 1-1-1 compressibility and 1-1-0 thermal factors does not mean that 1-1-1 thermal and 1-1-0 thermal are equivalent. The 1-1-1 thermal combination is the minimum for a substance with effective rotational displacement in all three dimensions. Where the thermal factors drop to 1-1-0, as indicated for rubidium and cesium, there is no effective displacement in the electric dimension, and the thermal motion is following the inert gas pattern. Such behavior is uncommon, but it is not without precedent in other properties. We found in Chapter 1, for instance, that a number of elements, including the halogens, the elements corresponding to the alkalis on the opposite side of the inert gases, have inter-atomic distances in one or two dimensions that are similarly based on magnetic rotation only.

Since the empirical values listed in Table 22 are subject to a considerable degree of uncertainty, small differences between them and the calculated values have no significance. In some cases, however, the discrepancy is large enough to be real, and further study of the thermal relations of these elements will be required. Only one of the experimental values shown in the table, one of those applicable to chromium, is far enough from any theoretical temperature to be incapable of explanation on the basis of the theoretical information now available.

As brought out in the discussion of the general pattern of the specific heat curves in Chapter 5, in many substances there is a change in the temperature scale of the curve at the first transition point (T1), as a result of which the first and second segments of the curve do not intersect at the 2¹/3 R end point of the lower segment of the curve in the normal manner. This change in scale is due to a transition to the second set of thermal factors given, for the elements in which it occurs, in Table 22. With the benefit of the information that we have developed regarding the factors that determine the temperature scale we can now examine the quantitative aspects of these changes. As an example, let us look at the specific heat curve of molybdenum, Fig.9, which, as previously noted, also applies to ruthenium. The thermal factors applicable to these elements at low temperatures are 4-8-2, identical with the compressibility factors.
The first transition point, specific heat 4.63, is reached at 126°K on the basis of these factors. The corresponding empirical temperatures, determined by inspection of the trend of the experimental values of the specific heats, are 129 for molybdenum and 128 for ruthenium, well within the range of uncertainty of the techniques employed in estimating the empirical values. If the thermal factors remained constant, as they do in the “regular” pattern followed by such elements as silver, Fig. 5, there should be a transition to n = 2 at this 126°K temperature, and the specific heat above this point would follow the extension of a line from the initial level of 3.89 to 4.63 at 126°K. But instead of continuing on the 4-8-2 basis, the thermal factors decrease to 4-6-2 at the transition point. These factors correspond to a transition temperature of 108°K. The specific heat of the molecule therefore undergoes an isothermal increase at 126°K to the extension of a line from the initial level of 3.89 to 4.63 at 108°K, and follows this line at higher temperatures. The effect of the isothermal increase in the specific heat of the individual molecules is, of course, spread out over a substantial temperature range in application to a solid aggregate by the distribution of molecular velocities.

The temperatures of the subsequent transition points and the end points of the various segments of the specific heat curves can be calculated from the temperatures of the first transition points by applying the relative values listed in Chapter 5 to the appropriate values of T1. An approximate agreement between the empirical data and the higher transition points thus calculated is indicated, but the angles at which the upper segments of the curves intersect are too small to permit any close empirical definition of the temperature of intersection. The only one of the end points that has any real significance is the end point of the last segment of the curve applicable to the substance under consideration. This is the temperature limit of the solid. Any further addition of heat initiates the transition to the liquid state.

Inasmuch as it is the individual molecule that reaches its thermal limit at the solid end point, it is the individual molecule that makes the transition to the liquid state. Physical state is thus basically a property of the individual molecule rather than a property of the aggregate, as it is seen to be in conventional physical theory. The state of the aggregate is merely a reflection of the state of the majority of its constituents. Recognition of this fact some forty years ago, in the early stages of the investigation that led to the results now being reported, was a major step in the clarification of physical fundamentals that ultimately opened the door to the formulation of a general physical theory.

The liquid state has long been an enigma to conventional physics. As expressed by V. F. Weisskopf, “A liquid is a highly complex phenomenon in which the molecules stay together yet move along each other. It is by no means obvious why such a strange object should exist.”11 Weisskopf goes on to speculate as to what the outcome would be if physicists knew the fundamental principles on which atomic structure is based, as present-day theory sees them, but “had never had occasion to see structures in nature.” He doubts if these theorists would ever be able to predict the existence of liquids. In the Reciprocal System of theory, on the other hand, the liquid state is a necessity, an intermediate condition that must necessarily exist between the solid and gaseous states.
When the thermal motion of a molecule reaches equality with the inward progression of the natural reference system in one dimension of the region outside unit distance, the cohesive force in that dimension is eliminated. The molecule is then free to move in that dimension, while it is held in a fixed position, or a fixed average position, in the other dimensions by the cohesive forces that are still operative. The temperature at which the freedom in one dimension is reached is the melting point of the aggregate, because any additional thermal energy supplied to the aggregate is absorbed in changing the state of additional molecules until the remaining content of solid molecules reaches the percentage that can be accommodated within the liquid aggregate. These remaining solid molecules are gradually converted to the liquid state in a temperature range above the melting point. Thus the liquid aggregate in this range contains a percentage of solid molecules, while the solid aggregate in a similar temperature range below the melting point contains a percentage of liquid molecules. The presence of these “foreign” molecules has a significant effect on the physical properties of matter in both of these temperature ranges, an effect which, as we will see in the subsequent discussion of the liquid state, can be evaluated accurately by using probability relations to determine the exact proportions in which molecules of the two states exist at each temperature.

While the end point of the solid state is the temperature at which the intermolecular forces reach an equilibrium at the unit level, arrival at this end point does not mean automatic entry into the liquid state. It merely means that the cohesive forces of the solid are no longer operative in all three dimensions, and therefore do not prevent the free movement in one dimension of space that is the distinguishing characteristic of the liquid state. The significant point here is that a liquid molecule is limited to certain specific temperatures. A liquid aggregate can take any temperature within the liquid range, but only because the aggregate temperature is an average of a large number of the restricted individual values. This same restriction to one of a limited set of values also applies to the temperature of the solid molecule, but in the vicinity of the melting point the solid is at a high time region temperature level, where the proportionate change from one possible value, n units, to the next, n + 1 units, is small. The motion of the liquid state, on the other hand, is in the region outside unit space, and is equivalent to gas motion in the one dimension in which the thermal energy exceeds the solid state limit. As we saw in Chapter 5, temperatures in the vicinity of the melting point are very low on the scale applicable to this outside region, and the proportionate change from n to n + 1 is large. The intervals between the possible temperatures of liquid molecules are therefore large enough to be significant.

Because of the limitation of the liquid temperatures to specific values, the temperature at which a molecule qualifies as a liquid is not the end point temperature of the solid state, but a higher value that includes the increment necessary to bring the end point temperature up to the next available liquid level. This makes it impossible to calculate melting points from solid state theory alone. Such calculations will have to wait until the relevant liquid theory is developed in a subsequent volume in this series, or elsewhere. But the temperature increment beyond the solid end point is small compared to the end point temperature itself, and the end point is not much below the melting point. A few comparisons of end point and melting point temperatures will therefore serve to confirm, in a general way, the theoretical deductions as to the relation between these two magnitudes.
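The logic of the preceding paragraph can be put in schematic form. The short Python sketch below is purely illustrative and is not part of the theoretical development; in particular, the allowed liquid temperature levels used in it are hypothetical placeholders, since the quantitative liquid theory is deferred to a later volume.

    # Schematic illustration only: the melting point is the lowest allowed liquid
    # temperature level at or above the solid end point. The level values below
    # are hypothetical placeholders, not results of the theory.

    def melting_point(solid_end_point, liquid_levels):
        """Lowest allowed liquid level that is not below the solid end point."""
        return min(level for level in liquid_levels if level >= solid_end_point)

    hypothetical_levels = [300.0, 320.0, 340.0, 360.0]   # degrees K, invented spacing
    print(melting_point(333.0, hypothetical_levels))      # -> 340.0, a little above the end point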

There is a considerable degree of uncertainty in the experimental results at the high temperatures reached by the melting points of many of the elements, and there are also some theoretical aspects of the thermal situation in the vicinity of the melting point that have not yet been fully explored. The examples for discussion in this initial approach to the subject have been selected from among those in which these uncertainties are at a minimum.

First, let us look at element number 19, potassium. This element has a specific heat curve of the type identified by the notation n = 3 in Fig. 4. Its thermal factors are 2-1-1, and it maintains the same factors throughout the entire solid range. As indicated in Chapter 5, the end point temperature of this type of curve is 9.32 times the temperature of the first transition point. This leads to an end point temperature of 336°K. The measured melting point is 337°K. In this case, then, the solid end point and the melting point happen to coincide within the limits of accuracy of the investigation.

Chlorine, an element only two steps lower in the atomic series than potassium, but a member of the next lower group, has the lower type of specific heat curve, with n = 2. The end point temperature of this curve is 4.56 on the relative scale where the first transition point is unity. The thermal factors that determine the transition point, and are applicable to the first segment of the curve, are 4-2-1, but if these factors are applied to the end point they lead to an impossibly high temperature. It is thus apparent that the factors applicable to the second segment of the curve are lower than those applicable to the first segment, in line with the previously noted tendency toward a decrease in the thermal factors with increasing temperature. The indicated factors applicable to the end point in this case are the same 2-1-1 combination that we found in potassium. They correspond to an end point temperature of 164°K, just below the melting point at 170°K, as the theory requires.

Next we look at two curves of the n = 4 type, the end point of which is at a relative temperature of 17.87. On the basis of thermal factors 4-6-1, the absolute temperature of the end point is 1765°K, which is consistent with the melting points of both cobalt (1768) and iron (1808). Here, too, the indicated factors at the end point are lower than those applicable to the first segment of the specific heat curve, but in this case there is independent evidence of the decrease. Cobalt, which has the factors 4-8-2 in the first segment, is already down to 4-6-1 at the second transition point, while iron, the initial factors of which are also 4-8-2, has reached 4-6-2 at this point, with two more segments of the curve in which to make the additional reduction.

Compounds of elements above group 1B, or having a significant content of such elements, follow one or another of the Type 1 patterns that have been illustrated by examples from the elements. The hydrocarbons and other compounds of the lower group elements have specific heat curves of Type 2 (Fig. 3), in which the end point is at a relative temperature of 1.80. As an example of this class we can take ethylene. The thermal factors of these lower group compounds are limited to 1-1-1, 2-1-1, and the combination value 1 1/2-1-1. As we found in Volume I, however, the two groups of atoms of which ethylene and similar compounds are composed are inside one time region unit of distance. They therefore act jointly in thermal interchange rather than acting independently in the manner of two inorganic radicals, such as those in NH4NO3. Each group contributes to the thermal factors of the molecule, and the value applicable to the molecule as a whole is the sum of the two components. Ethylene uses the 1-1-1 and 1 1/2-1-1 combinations. A difference of this kind between the two halves of an organic molecule is quite common, and no doubt reflects the lack of symmetry between the positive and negative components that was the subject of comment in the discussion of organic structure. The combined factors amount to a total of 6 1/2 units. This corresponds to a transition point at 58°K, which agrees with the empirical curve, and an end point at 104°K, coincident with the observed melting point.

The joint action of the two ends of an organic molecule, which combines their thermal factors in the temperature determination, is maintained when additional structural units are introduced between the end groups. As brought out in Chapter 6, such an extension of the organic type of structure into chains or rings also results in the activation of additional thermal motions of an independent nature within the molecules. The general nature of this internal motion was explained in the previous discussion. The same considerations apply to the transition point temperature, except that the internal motion is independent of the molecular motion in vectorial direction as well as in scalar direction. It is therefore distributed three-dimensionally, and the fraction in the direction of the molecular motion is 1/8 rather than 1/2. Each unit of internal motion thus adds 1/8 of 8.98 degrees, or 1.12 degrees K, to the transition point temperature. With the benefit of this information we are now able to compute the temperatures corresponding to the specific heats of the paraffin hydrocarbons of Table 21. These values are shown in Table 23.
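Since the arithmetic involved is simple, it may be helpful to state it explicitly. The short Python sketch below is an added illustration, not part of the original development; it merely applies the 8.98-degree increment per unit of molecular thermal factors and the 1.12-degree increment per effective unit of internal motion, using factor totals and internal-unit counts of the kind listed in Table 23, and it reproduces the tabulated transition points to within about a degree.

    # Illustrative sketch of the transition-point arithmetic described above.
    # Constants and factor totals are taken from the text and Table 23; the
    # compound entries below are examples, not an exhaustive list.

    DEG_PER_FACTOR = 8.98          # degrees K per molecular thermal factor unit
    DEG_PER_INTERNAL = 8.98 / 8    # about 1.12 degrees K per internal motion unit

    def transition_point(molecular_factors, internal_units):
        """First transition point T1, in degrees K."""
        return DEG_PER_FACTOR * molecular_factors + DEG_PER_INTERNAL * internal_units

    examples = {
        "ethylene":   (6.5,  0),   # 1-1-1 plus 1 1/2-1-1; quoted above as 58 K
        "propane":    (6.0,  0),   # Table 23 gives 54 K
        "hexane":     (7.5,  3),   # Table 23 gives 71 K
        "hexadecane": (7.5, 13),   # Table 23 gives 82 K
    }

    for name, (factors, internal) in examples.items():
        print(f"{name:12s} T1 = {transition_point(factors, internal):5.1f} K")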

Table 23: Temperatures of Critical Points – Paraffin Hydrocarbons

Thermal Factors

                     Trans. Point Factors     Total    End Point Factors      Total
Propane              1-1-1      1-1-1           6      1-1-1      1-1-0          5
Butane               1-1-1      1-1-1/2       5 1/2    2-1-1      1 1/2-1-1    7 1/2
Pentane              1 1/2-1-1  1 1/2-1-1       7      2-1-1      2-1-1          8
Hexane and above     2-1-1      1 1/2-1-1     7 1/2    2-1-1      2-1-1          8

Temperatures (°K)

                 Internal           End Point Factors
                  Units      T1     Internal   Total    End Point   Melting Point
Propane              0       54         0        5          81            85
Butane               0       50         1      8 1/2       137           138
Pentane              2       65         1        9         145           143
Hexane               3       71         3       11         178           179
Heptane              4       72         3       11         178           182
Octane               5       73         5       13         210           216
Nonane               6       74         5       13         210           220
Decane               7       75         7       15         242           243
Hendecane            8       76         7       15         242           247
Dodecane             9       77         8       16         259           263
Tridecane           10       79         8       16         259           268
Tetradecane         11       80         9       17         275           279
Pentadecane         12       81         9       17         275           283
Hexadecane          13       82        10       18         291           291
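As a rough check on the End Point column, recall that for curves of this type the solid end point lies at a relative temperature of 1.80, with the first transition point as unity, and note (as explained in the discussion that follows) that the active internal units count as full factors at the end point. The following Python sketch is an added illustration of that reading of the table; it reproduces the tabulated end points and sets them beside the observed melting points.

    # Illustrative check of the End Point column of Table 23: end point taken as
    # 1.80 times the 8.98-degree equivalent of the total end-point factors.
    # The factor totals and melting points are those listed in the table.

    END_POINT_RATIO = 1.80   # relative end point for this curve type
    DEG_PER_FACTOR = 8.98    # degrees K per factor unit

    def solid_end_point(total_end_factors):
        return END_POINT_RATIO * DEG_PER_FACTOR * total_end_factors

    data = {
        "propane":    (5,   85),
        "hexane":     (11, 179),
        "decane":     (15, 243),
        "hexadecane": (18, 291),
    }

    for name, (factors, melting) in data.items():
        print(f"{name:11s} end point {solid_end_point(factors):5.0f} K   melting point {melting} K")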

The first section of this table traces the gradual increase in the thermal factors as the molecule makes the transition from a simple combination of two structural groups, with properties that are similar to those of inorganic binary compounds, except for the joint thermal action due to the short inter-group distance, to a long-chain organic structure. The increase in the factors follows a fairly regular course in this range except in the case of butane. If the experimental values of the specific heat of this compound are accurate, its transition point factors drop back from the total of 6 that applies to propane to 5 1/2, whereas they would be expected to advance to 6 1/2. The reason for this anomaly is unknown. At the C6 compound, hexane, the transition to the long-chain status is complete, and the thermal factors of the higher compounds as far as hexadecane (C16), the limit of the present study, are the same as those of hexane.

In the second section of the table the transition point temperatures are calculated on the basis of 8.98 degrees K per molecular thermal factor, as shown in the upper section of the table, plus 1.12 degrees per effective unit of internal motion. The number of internal motions shown in Column 1 for each compound is taken from Table 21. Columns 3 and 4 are the values entering into the calculation of the solid end point, Column 5. As the table indicates, some of the internal motions that exist in the molecule at the transition temperature are inactive at the end point. However, the active internal motion components are thermally equivalent to the molecular motions at this point, rather than having only 1/8 of the molecular magnitude as they do at T1. This is a result of the general principle that the state of least energy takes precedence (in a low energy environment