Without transcendent universal mind there is no distinction between consciousness being a One or a many. The collective mass of what we don't know we know. And of course, it is this inherent and perhaps unbounded multifacetedness of the individually collected human experiential data, which must transcend the meaning of individual human experience (as attributed by each individual to his or her own experience), that so strongly suggests the reality of the transcendent realm and its Deity. This transcendent realm may be thought of as akin to Peirce's "unlimited semiosis". Deity is implied by the notion of an objective observer, which is in turn implied by the necessity of an ultimate "tying off" point for this otherwise unbounded semiosis.

August 2012
Having the epiphany that you personally don't really know anything at all means that Man's knowledge can be at best but a collective illusion. . . and all this within the context of a sheer abundance of grace. How can one not then have faith upon recognizing the Providence involved in so coherent a manifestation of the radical unknown? One is called forth from the Void and enters the world from one unknown only to pass from it into another unknown. What is lost on some is that the world itself is yet a third unknown, rendering the first and the latter qualitatively distinct. And so metaphysical work can only be performed by experience if it is possible to transcend all dual opposite categories. You emerged into the world from a hole you didn't crawl into and shall leave it through a hole you can't crawl out of. No one, as they say, "gets out of here alive."

"If a man is standing in the middle of the forest speaking and there is no woman around to hear him, is he still wrong?" "Yep... because of the X chromosome lurking in each cell of his body."

"Clay tablets would be better." A comment about the fragility of digital media as data storage.

http://www.scribd.com/search?category=&language=1&num_pages=100%2B&filetype=
pdf&uploaded_on=&paid=false&query=%22Introduction+to+Virology%22

Begrudging the cultural and artistic expressions of others. Two forms of the very same kind of blind faith: that of the "God of the Gaps" versus "Science will eventually find the answers". What are the implications of a radical logical self-consistency approach to a theory of truth? How does the principle of quantum superposition endanger the law of excluded middle? Or does quantum superposition actually support the excluded middle? The principle of the loose ends only showing on the underside of the carpet? Is behavioral genetics fundamentally responsible for the coherence and cohesiveness of the perceived external world and society?

http://www.scribd.com/doc/33520644/The-Nature-of-Qualia-a-Neurophilosophical-
Analysis-PhD-Dissertation-de-Sousa

www.forgottenbooks.org

ROUTLEDGE PHILOSOPHY GUIDEBOOKS
Edited by Tim Crane and Jonathan Wolff,
University College London

@$ "at money" (an important or seminal passage)
@? "kernel idea requiring further development"

au= "author is" (an important or seminal thinker)
cit= "citation is" (important citation)
con= "concept" (important concept)
cont'd= (to be continued)
epi= "epigram" (a candidate bon mot)
ess= "essay" (passage containing a promising essay topic)
fic= "novel or short story idea"
hyp= "hypothesis" (important hypothesis)
kwd= "keyword" (an exhaustive list needs to be developed for essay production)
kwo= "quote from elsewhere in this document"
ref= "reference" (missing reference)
ph= (a borrowed phrase, whose original context requires background explanation)
per= (from personal conversation or correspondence)
prn= "principle" (important principle)
pru= "proof" (a proof is being demonstrated)
voc= "vocabulary" (a term whose meaning is less than certain or contextually clear)
coi= "coining" (the coining of a new phrase with future illustrative utility)
web= "web address"
rsc= "research" (any "proword" or phrase that seems to require further research)
<phrase or something> (Google search is indicated as appropriate)
Month yyyy (month and year a passage was added, e.g., "June 2011")

@? Develop a list of kernel idea search terms to facilitate expanding expositions of the ideas in this text. (Actually, hyperlinked Excel spreadsheets should be developed for all of the above superscripted "footnotes", to include hyperlinking different Excel objects together. Better yet, the Excel spreadsheets should merely exist in potentia as queries to an Access database. Developing a list of key terms, that is, those with, say, more than 100 hits within this document, might facilitate this.)
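The key-term tally proposed here need not wait for Excel or Access; a minimal sketch in Python (the 100-hit threshold is the one named above; the tokenization rule is an illustrative assumption) would be:

```python
import re
from collections import Counter

def key_terms(text: str, threshold: int = 100) -> dict[str, int]:
    """Return every word appearing more than `threshold` times in `text`."""
    # Lowercase alphabetic tokens only; hyphenation and markers are ignored.
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(words)
    return {w: n for w, n in counts.items() if n > threshold}
```

Run against a plain-text export of this document, the result is exactly the candidate keyword list described: terms frequent enough to anchor essay production.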

"There is no remembrance of men of old, and even those who are yet to come will not be remembered by those who follow." - Ecclesiastes 1:11

"Whosoever reflects on four things, it were better for him if he had not come into the world: what is above; what is beneath; what is before; and what is after."
- The Mishnah, Hagigah 2:1

"I could say harsh things about it but I cannot bring myself to do it—it is like hitting a child. Man is not to blame for what he is.... He is flung head over heels into this world without ever a chance to decline, and straightaway he conceives and accepts the notion that he is in some mysterious way under obligations to the unknown Power that inflicted this outrage upon him..." - Reflections on Religion, Mark Twain

My contributions to these writings during the period January through November 2011 were made while deployed to Bagram Airfield, Afghanistan with the 415th MI BN in support of DFIP operations for OEF. Contributions made during July 2003 through February 2004 were made during my deployment to Gjilane, Kosovo as THT 6 Team Sergeant in support of KFOR's HUMINT collection efforts in support of OEF there. By the way, if you happen upon these writings and it is not yet mid-century (by that I mean the 21st!), then permit me to indulge in a bit of naïve, wishful thinking by saying that there is perhaps a fair chance that I'm still knocking about this rock; feel free to provide me with your feedback at rusmann@yahoo.com. Bitte schön.



July 2011
The perception of one's own intentional thought process is temporally interwoven with the perception of sense data and memory, as well as that of one's own bodily movements, and orchestrated within Libet's 500 millisecond window of pre-consciousness in just such a way as to maintain awareness of intention and action as interconnected, so as to present the appearance of a freely acting "center of volition." This raises the question of what value there may be to natural selection in the mere appearance of a self possessing free will, if in fact the continual stable appearance to the human animal of its possessing a self as the author of its own decisions and actions is an illusion without substantive causal efficacy.


We sometimes revisit perforce the haughty metaphysical opinions of our youth because the passing of all the learning and experience of the decades has done nothing to undermine them; and these same opinions, if refined and dressed up as though uttered by a mature person, will not so quickly be dismissed out of hand and will thus lend their weight to sober consideration.

cit= ArXiv: hep-th/9308061v1: au= "Aharonov, Anandan, and Vaidman [1] have recently argued that in addition to its usual epistemological role, the wave function in quantum mechanics in certain situations also has an ontological role. In other words, in addition to acting as a device in the theory to encode the conditions (of our knowledge of the world) it must also, in certain circumstances, be regarded as real [italics mine], in the sense that one can completely determine an unknown wavefunction of a single system. Certainly if their claim were true, that one could take a single system with an unknown wavefunction, and completely determine that wave function on that single system, one would have to accord the wave function a reality on the grounds that that which is measurable is real. In the course of this paper I will argue that they have failed to establish the measurability of the wave function, and thus have failed in their attempt to demonstrate the reality of the wave function. The argument is however subtle. Thus the plan of this paper will be to first discuss the problem of reality in quantum mechanics, to set the stage for the question that they are trying to answer." My comment: au= Aharonov proved the reality of A, the vector potential, so that if A is identified with the photon wavefunction (see au= Bohm in this connection), then the reality of Psi has been adequately demonstrated.

"Science proceeds as if the past was the home of explanation; whereas the future and the future alone holds the key to the mysteries of the present. When that first cell divided, the meaning of that division was to be discovered in the future, not in the past; when some pre-human ancestor first uttered a human sound, the significance of that sound was to be interpreted by human language, not by apish grunts; when the first plant showed solicitude for its seed, the interest of that solicitude lay in the promise of maternal affection. Things must be judged in the light of the coming morning, not in the setting stars." (au= Sedgwick, 1916) And here it is again as stated by au= Terence McKenna: "For me, the key to unlocking what is going on with history, creativity, and progressive processes of all sorts is to see the state of completion at the end as a kind of higher-dimensional object that casts an enormous and flickering shadow over the lower dimensions of organization, of which this universe is one", c.f., cit= Trialogues at the Edge of the West. Philosophers have a term for this: causal supervenience. The details of how causal supervenience works may forever remain mysterious; however, in general we can say that it must involve the spontaneous generation (whether or not intended) of vacuum fluctuations, which are correlated in a manner that does not support a time-reversible causal connection. Presumably entropy-laden, irreversible processes also have a causal (time-reversible) physical basis, at least with respect to a sufficiently small scale of spacetime, but at some sufficiently large spacetime scale the nature of the correlation of vacuum fluctuations underlying the physical process in question invokes quantum entanglement that is not reversible. Given that quantum entanglement is a relation of absolute simultaneity, i.e., in all reference frames, we suspect the irreversibility comes into play on account of higher dimensions of temporality.

April 2011
"Sedgwick's principle" is particularly relevant in connection with higher order regulation of gene expression. 98.5% of the DNA base pair patterns in the human genome are held in common with the chimpanzee and the bonobo. Approximately 50% of base pair sequences are the common genetic heritage of both humans and the lowly yeast mold. Recent research reveals that approximately 50% of human DNA is also of viral origin. Perhaps so much of human DNA is of viral origin because these viral genes are leftover "vectors" utilized in the distant past as a means of inserting gene sequences which otherwise would have taken too long to develop on their own via natural evolutionary processes.
August 2011
Viruses, which are not included in any of the three Linnaean kingdoms of taxonomic classification (well, arguably there are now four as of this writing) because they are not considered by biologists to be alive, seem to predate the appearance of the first unit of heredity, i.e., DNA and/or RNA, and so must have originated and developed according to the intrinsic self-organizing properties of atoms and molecules - what is thought to have been responsible for the first billion years of chemical evolution that took place prior to and wholly in the absence of Darwinian natural selection. Because viruses function so efficiently within the cell and cell nucleus, as well as interoperate admirably with strings of DNA and/or RNA, both reorganizing genetic base pair sequences and altering the expression of these same sequences, it is tempting to suppose that viruses themselves originated within ancient living cells or cellular nuclei. But this would of course land us in a "chicken or the egg" paradox. This reminds us of the case in which a shattered hologram is gradually reassembled and meanwhile the image encoded in the hologram becomes ever sharper. (Do base pair sequences within viruses function akin to metaphors in relation to the base pair sequences of the DNA within the cellular machinery that the invading virus subverts?) Now if this impossibly causally twisted temporal relationship could be shown to be a mere projection or shadow (appearance) of what is occurring in a higher dimension onto some lower dimension, then this paradox would be solved or, rather, dissolved. Since causal relationships within 1-dimensional time are, according to Bohm's causal principle, equivalent to a specific set of correlated vacuum fluctuations, which however only constitutes a tiny subset of the total array of correlated fluctuations within the quantum vacuum, we may seek the needed higher dimensional causal relationships, i.e., causal relationships within higher temporal dimensions, within the quantum vacuum, perhaps within its higher order fluctuations, e.g., 2nd, 3rd, 4th loop, etc. (There is a New Scientist article of recent writing entitled something to the effect of "matter is composed of vacuum fluctuations".) Or, perhaps even confining ourselves to 1st order fluctuations, if two or more sets of frequencies can be shown to be orthogonal, out of which distinct time series can be constructed, then this may be a strong indication of higher temporal dimensionality. (This orthogonality is distinctly different from that by which functions of different frequencies are assigned different weighting coefficients within a Fourier expansion of a single time domain function.)
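The orthogonality of time series gestured at above can at least be stated concretely: two sampled series are orthogonal when their inner product vanishes. A minimal sketch (the particular pair of sinusoids, the sampling grid, and the one-period window are illustrative choices, not anything implied by the passage):

```python
import numpy as np

# Sample two sinusoids of different frequency over one common period.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
f = np.sin(2 * np.pi * t)      # one cycle over [0, 1)
g = np.sin(4 * np.pi * t)      # two cycles over [0, 1)

# Normalized discrete inner product; zero (to rounding) means orthogonal.
inner = np.dot(f, g) / len(t)
```

This is precisely the Fourier-coefficient sense of orthogonality the parenthesis distinguishes itself from; whatever "orthogonality of distinct time series" would have to mean for higher temporal dimensions, it would need to go beyond this inner-product criterion.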

August 2012
A unit of heredity is to natural selection as a rational individual is to the free
market. Both DNA and the rational individual are providential kernels that grace their
respective dynamical systems while transcending the scope of explanation supported by
the logic of either.

April 2011
What makes mankind different from his more primitive apelike forebears is not so much distinct differences in genetic base pair sequences as it is differences in the regulation of the expression of these genes held in common with earlier or less evolved life forms. There have been simultaneously two trains of evolution operating: evolution of the individual gene and evolution of the regulation of the expression of genes and gene sequences. It appears that the "temporal space" within which evolution takes place must possess a basis of at least two dimensions.
June 2011
The fact that the regulation of the expression of genetic base pair sequences is open ended implies that the interpretation of the genetic code as a language is no mere cute analogy, @# but is a fact of some profound and enduring significance! The context sensitivity of the genetic code is likely to be a two-way affair. The DNA only contains information if it is a component of an informational system, which is to say that any information stored in it has to have been put there and that the molecule is open to modification so as to receive additional information. All this is by way of saying that an element only contains information if it is contained within a feedback circuit or network such that the "flow of information" is two-way.

April 2011
Communications refer to the necessary or important features of things without ever specifying the things themselves. Only data is transmitted between minds, which is then interpreted as information. Context gives meaning just as nonlocality gives reference. One comes from one unknown, spends the duration of one's life in another unknown, only to pass on to still another unknown. (Aside: it is a most important life's goal to find the message that was left waiting for one within the world into which one was born.) And the two domains of the unknown bookending this infinitesimal existence we tend to suppose are eternity past and eternity future. The question arises whether these two domains of oblivion are indeed identical or whether this mote of a single fleeting human existence enjoys or partakes of the metaphysical power to divide eternity such that it must take on an aspect of everlastingness, itself of two parts. This brief speck in time represented by a human existence converts eternity into an everlasting oblivion. (Eternity into "everlastingness")
September 2011
I am reminded here of the playfully irreverent statement by the crucified character played by Eric Idle at the end of the Monty Python film, Life of Brian, which goes something like this: "you started with nothing and now you've ended with nothing - you haven't lost anything!"
September 2011
However, consider that the proposition "Russell Clark does not exist" uttered in 1958 (a year before my mother conceived me) and "Russell Clark does not exist" uttered in 2058 (after I am dead) can't possibly mean/refer to the same thing: @$ in the first instance, the subject "Russell Clark" doesn't refer to anything (only if the "Russell Clark" from the second utterance is what is intended); otherwise this term refers to any number of persons named "Russell Clark" - past, present and future, including this Russell Clark. So intentionality doesn't equate with reference, because being and what is called existence are not coextensive categories. What distinguishes intentionality and reference, suggested above, shares some features with McTaggart's incompatible time predicates. For "Russell Clark" to successfully refer to this Russell Clark (the current one writing this), it must fail to refer to him at a certain earlier and all previous times. But there is something which, if it were ever successfully referred to once and named at that time, would be successfully referred to with this name at all other times. And this something is that something which avoids contradiction of the incorrigible principle, ex nihilo nihil fit. Things which don't exist get and stay connected (say, in the quantum nonlocality sense) with things that do exist via the operation of mind. In this way all things are connected, regardless of what categories we might apply to them, including minds via the operation of mind. But how might two or more minds be connected when the subjective by definition is not composed of the intersubjective ("inter"-subjective), unless "inter" only obtains its meaning via the above alluded to connection principle? This is the case where the metaphysics of pluralism fails utterly and a kind of monism must take its place. Another example of failure to refer is what is called unspecified reference. An example of this is arbitrarily making up a name for a character in a story not yet conceived of and never written down.
February 2012
All important philosophical, religious and evangelical atheistic writings (in fact, any writings which imply metaphysical claims, affirmative or negative) are open-ended in that they necessarily contain equivocations of sense traceable to this confusion of the notions of being, existence and subsistence!

The principle of spontaneous decoherence, namely that of Penrose's "one graviton limit", i.e., a Planck mass limit on the magnitude of a single fluctuation of the vacuum, defies Leibniz's principle that whatever conditions were sufficient to create a thing are necessary at every succeeding moment to maintain that thing in existence. We may take as a new principle that the inadequacy of nonlocality implies the necessity of temporality. Correlated fluctuations embrace distinct subsets: those which are or are not causality preserving and/or energy conserving. This reminds us of the humorous saw that "if it weren't for the existence of time, everything would happen at once!" The Planck mass fluctuation limit means that cont'd
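For reference, the Planck mass scale that Penrose's criterion invokes is fixed by three standard constants; the computation is routine (constant values are the CODATA recommendations):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # Newtonian gravitational constant, m^3 kg^-1 s^-2

# Planck mass: m_P = sqrt(hbar * c / G), about 2.18e-8 kg (~22 micrograms)
m_planck = math.sqrt(hbar * c / G)
```

That the scale lands at roughly the mass of a speck of dust, rather than at an atomic or cosmological extreme, is what makes it a plausible threshold for gravitationally induced collapse.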


January 2012
Can it be demonstrated that the concept of temporally pure causation, i.e., pure temporal persistence, i.e., as an isolated system and in the absence of the causal context of a substantive or "beable", is logically inconsistent? In a physical sense this appears impossible on account of the grounding of physical temporality in an embedding quantum vacuum energy fluctuation field (responsible for the decay of energy eigenstates, transitions between energy eigenstates and the temporal evolution of density functions). Could phase rotation of the system wavefunction qualify as pure, context-free temporal evolution?

au= "Von Neumann's Process 1 is the physical aspect of the choice on the part of the human agent. Its psychologically described aspect is experienced and described as a focusing of attention and effort on some intention, and the physically described aspect consists of the associated choice of the basis vectors and of the timings of the action. Then there is a feedback quantum jump whose psychologically described aspect is experienced and described as an increment in knowledge, and whose physical aspect is a "quantum jump" to a new physical state that is compatible with that increment in knowledge", c.f., cit= Gravity and Consciousness (35700017)

Without quantum entanglement being involved in the act of perception of the outcome of a quantum experiment (likely through the quantum vacuum in which the observed system and the observer's brain are commonly embedded), there would be the ever-present necessity of the consciousness (whose brain it is that supplies the observation to this consciousness) revamping its system of interpreting the quantum behavior of "its brain", e.g., how does the observer synch up his subjective perception of "dead cat" vs. "live cat" with the actual Schrödinger's cat experimental outcome of "dead cat" vs. "live cat"?

November 2011
It is very significant that the prime phenomenological manifestation of the "ground of being" in the realm of physical being, i.e., "quantum entanglement", is just now making itself known. Science, through the emergence of the field of the study of QE, appears to be arriving at a point analogous to that of the dreamer who finds himself at the cusp between un-self-consciousness and "lucidity". Science has finally found one of the "handles" of the "reality bootstrap mechanism" and so must be especially careful, for it is just at the point of the onset of lucidity that the dream continuum encounters a bifurcation point between meta- and robust-stability.
June 2012
I suspect that many of the properties of the wavefunction, particularly with respect to the function's "fragility", are shared in common with those of the lucid dreaming state of consciousness.



May 1997
Inertial mass may be based in the density of momentum exchanges taking place
between the various subatomic particles and quantum fields composing a given mass,
while gravitational mass may be based in the density of energy exchanges taking place
between these subatomic particles/ quantum fields and the quantum vacuum field. The
equivalence of inertial and gravitational masses may be an artifact of the conservation of
momentum-energy uncertainty or the conservation of virtual momentum and energy as a
momentum-energy fluctuation four vector within four dimensional spacetime.

Relativistic mass increase may be associated with a shift in the relative densities of 3-momentum and imaginary 4-momentum exchanges taking place between the mass and itself and between the mass and its embedding quantum vacuum, respectively. (This remark added March 1, 2006 in terms of my "old parlance" of May 1997)
June 2011
The reason for the equality of inertial and gravitational mass is to be found in the happy coincidence of there being a common mechanism between mass' effect upon the vacuum and vacuum's effect upon mass. @$ Mass is an effect of gravity's interaction with the vacuum *and* gravity is an effect of mass' interaction with this same vacuum.

July 1997
It is only nonzero expectation values of momentum-energy which may possess
gravitational or inertial mass. And what contributes to this mass is any boundary
conditions placed upon the quantum vacuum field which alters this field so that the 3-
momentum fluctuations and imaginary 4-momentum fluctuations do not precisely cancel.

June 1998
The expectation values may always be defined in terms of a fluctuation term and an uncertainty. This fluctuation term may be intrinsic to the quantum vacuum field, and the uncertainty may be associated with the observer of the quantum system. Through a kind of coherence (or resonance) between the intrinsic vacuum fluctuation term and the observer's uncertainty, a nonzero expectation value, i.e., a classical observable, may emerge. There is no reason why we cannot attribute the entirety of the term ΔE to the observer performing the energy-determining measurement. But to do so means that one is considering the Heisenberg Uncertainty Principle (HUP) to be entirely epistemological in nature. To attribute this uncertainty entirely to the quantum system itself is to maintain that the Heisenberg uncertainty is ontological in nature. This alternative interpretation of the HUP is not feasible, however, as the observer's brain equally constitutes a quantum mechanical system, just as does the system he is observing or performing measurements upon. The perhaps more reasonable approach to interpreting the HUP might be to compromise between the two extremes by admitting that there is a dynamic interrelationship between the uncertainties of both the observer's brain and the quantum system he is observing.
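Whatever interpretive stance one takes on the HUP, its numerical content is fixed, and a Gaussian wavepacket saturates the bound. A minimal numerical check (natural units with ħ = 1; the grid extent and spacing are arbitrary choices) recovers the minimum-uncertainty product Δx·Δp = ħ/2:

```python
import numpy as np

hbar = 1.0
sigma = 1.0
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

# Real Gaussian wavepacket, normalized on the grid.
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(psi**2) * dx)

# Position spread: sqrt(<x^2> - <x>^2).
ex = np.sum(x * psi**2) * dx
dx_spread = np.sqrt(np.sum((x - ex) ** 2 * psi**2) * dx)

# Momentum spread: for real psi, <p> = 0 and <p^2> = hbar^2 * int |psi'|^2 dx.
dpsi = np.gradient(psi, dx)
dp_spread = np.sqrt(hbar**2 * np.sum(dpsi**2) * dx)

product = dx_spread * dp_spread  # should come out near hbar / 2
```

The product is the same whichever party, observer or system, one chooses to charge the uncertainty to, which is exactly why the passage's question is interpretive rather than empirical.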




March 2006
It might be supposed that the manner in which the world appears, i.e., is perceived by the observer, is merely a collaborative product of the observer and the world through the quantum interference between the correlated quantum fluctuations of the observer's brain and the correlated quantum fluctuations constituting the system being observed. But this cannot ultimately be a correct description of the mechanism of conscious observation because the quantum correlated fluctuational structure of the observer's brain possesses this "necessary interiority" by virtue of outstripping the brain's embedding quantum vacuum's computational capacity qua quantum computer. 03/01/06



There must arise an exact matching or mutual coherence of internal and external frequencies for an object to become manifest.
March 2006
Conscious manifestation of form within the observer's perceptions of his environment is supported by the irreducible complexity of the quantum fluctuation-correlational structure, c.f., Reznik's three-way vacuum nonlocality, which we have said engenders interiority to this quantum fluctuation structure that cannot be modeled within the simpler fluctuation structure of the brain's embedding in a quantum vacuum substrate,
November 2011
c.f., au= Plotinus in cit= Ennead IV.4.23, 4–19: "Well, then, the soul will either apprehend alone by itself or in company with something else. But how can it do this when it is alone and by itself? For when it is by itself it apprehends what is in itself, and is pure thought. If it also apprehends other things [i.e. sensibles], it must first have taken possession of them as well, either by becoming assimilated to them, or by keeping company with something which has been assimilated. But it cannot be assimilated while it remains in itself. For how could a point be assimilated to a line? For even the intelligible line would not assimilate to the sensible one, nor would the intelligible fire or man assimilate to the sense-perceived fire or man. . . . But when the soul is alone, even if it is possible for it to direct its attention to the world of sense, it will end with an understanding of the intelligible; what is perceived by sense will escape it, as it has nothing with which to grasp it. Since also when the soul sees the visible object from a distance, however much it is a form which comes to it, that which reaches it, though it starts by being in a way without parts, ends in the substrate which the soul sees as color and shape with the extension it has out there."
July 2011
If the substrate of being is information rather than independently existing fundamental particles, then there is really no impasse posed by it being otherwise necessary for matter to "bootstrap" itself into greater complexity. It is tempting to suppose that vacuum 3-way and higher order nonlocality is the source of the 2-way nonlocality of all quantum mechanical states/systems. The interiority of subjectivity would on this view not be a truly emergent quality of biological systems, but merely the awareness of this interiority would be.

One way to model the relativistic effects of mass, length and time might be to think of a mass as the result of the mutual interference of myriad alternate universe copies of the abstract objects possessing the formal structure which the mass exhibits upon analysis. The greater the coherence of the MWI (many worlds interpretation) quantum copies of the object, that is, the more closely the copies mutually interfere, the greater is the mass of the object, the more dilated is the external time (and more contracted is the internal time) of the mass. In this way Penrose's connection between gravity and wavefunction collapse may be explored further.
March 2006
Perhaps this is better put in terms of: the greater the density of cohering (and mutually interfering, i.e., quantum correlated) MWI quantum near-duplicates, the greater is the object's inertial mass.

These frequencies, or their spectra, would be associated with the fluctuations in the vacuum's intrinsic energy and with fluctuations in the ability of an ideal observer to determine the system's energy independently of the effect of the vacuum energy fluctuations, respectively. We are basing this idea of observer-system resonance as the basis of perception on an idea expressed by David Bohm in his book, cit= Quantum Theory. According to au= Bohm (1951), @$ the fluctuations in an observable, in combination with the correlations of the phases of these quantum fluctuations, together comprise the average values and average temporal evolution of any observable. An act of observation has the effect of destroying the delicate phase relations between the eigenfunctions, the products of which constitute the pure state wavefunction representing the state of the system.
July 2011
(Can changes in the entanglement signature of nonlocally correlated fluctuations induced by acts of conscious observation impact causal relationships? Conversely, can causally induced changes to the system's momentum and energy be traced to corresponding changes in nonlocally correlated fluctuations in the system's momentum and energy?)
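Bohm's point about observation destroying phase relations can be illustrated with a standard textbook toy model (the two-level system below is my illustrative stand-in, not anything from Bohm's text): erasing the off-diagonal terms of the density matrix destroys the interference statistics while leaving the eigenvalue statistics untouched.

```python
import numpy as np

# Equal superposition of two eigenstates: pure state with definite phases.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# "Destroying the delicate phase relations": averaging over random phases
# zeroes the off-diagonal terms, leaving a diagonal (classical) mixture.
rho_dephased = np.diag(np.diag(rho_pure))

# An interference-sensitive observable distinguishes the two cases.
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
coherent_ev = np.trace(rho_pure @ sigma_x).real      # phases intact
dephased_ev = np.trace(rho_dephased @ sigma_x).real  # phases destroyed
```

The diagonal entries (the "average values" in Bohm's sense) are identical before and after; only the phase-dependent expectation value collapses, which is the operational content of the passage above.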

This is reflected in the instantaneous shift in the values of all incompatible observables relative to the observable whose value is being more fully determined as a result of observation. The effect of the fluctuation energy upon our energy measuring devices is, of course, an effect which even the perfect calibration of our energy-measuring instruments cannot in principle eradicate. If the observer's consciousness inevitably induces collapse of the wavefunction for the system he's observing, then this is perhaps because: @$ 1) the dynamics of the observer's conscious mental processes is fundamentally quantum mechanical in nature and 2) the mental processes of the observer are quantum entangled with those of the system under observation.

Mass-energy is a result of an imbalance in these two energy terms. In this way particles are seen not as flux-stabilities in themselves, but as structured alterations in the flux-stabilities, penultimately under the influence of our energy measuring devices and ultimately, per von Neumann, under the influence not of the individual mind per se but of a consciousness fundamental in nature. This consciousness is structured through the complex system of boundary conditions that the operation of the observer's brain places upon the very same vacuum field being measured; for if it were identified with the observer's individual consciousness, the existence of the brain as a mass-energy system would presuppose the existence of that which its observations are partially constituting.
"If reality is this second way, then the role of the neuronal system is not to mysteriously create awareness and mind from alien substance. Rather, it is to organize a pre-existing propensity for awareness into useful, functional awareness, and provide for its modulation by useful information", c.f., Implications of a Fundamental Consciousness, MacDonald (1998). "The mere possibility of observation results in the reduction of the state vector."
November 2007
If a great enough interlocking feedback between such possibilities comes about, which then alters the statistics of matter and energy (including the embedding vacuum energy field) and results in a great enough contraction/collapse in the density rate of change of these state vector reductions (through the conversion of disjoint states into correlated mixtures), producing an overall coherent state, then a barrier will spontaneously be created between internal and external; i.e., a rudimentary real, as opposed to a merely hypothetical, possible observer will be engendered (consider here the necessary interiority of the brain of the quantum observer), c.f., so-called "Boltzmann brains."
June 2012
"We would like to argue that this is not the case. Suppose we do not look at the whole box at once, but only at a piece of the box. Then, at a certain moment, suppose we discover a certain amount of order. In this little piece, white and black are separate. What should we deduce about the condition in places where we have not yet looked? If we really believe that the order arose from complete disorder by a fluctuation, we must surely take the most likely fluctuation which could produce it, and the most likely condition is not that the rest of it has also become disentangled! Therefore, from the hypothesis that the world is a fluctuation, all of the predictions are that if we look at a part of the world we have never seen before, we will find it mixed up, and not like the piece we just looked at. If our order were due to a fluctuation, we would not expect order anywhere but where we have just noticed it", c.f., Richard Feynman on Boltzmann Brains by Sean Carroll.

By the equivalence principle, fermion-antifermion production in a gravitational field
should not exist for a freely falling observer. But neither should this freely falling
observer witness a blackbody spectrum of photons. Since, through the (likely identical) particle production mechanisms underlying both Hawking radiation and Davies-Unruh radiation, the fermion-antifermion and boson particle production fields are observer-dependent in their intensities, there should be an invariant transformation rule by which
we can connect the respective particle production rates for bosons and fermion-
antifermion pairs out of the vacuum.
November 2007
This reminds us of the equilibrium principle manifest in the relationship of the charged particle and its electric field during free fall: a charged particle accelerates through an electric potential in such a manner that the particle's electric field appears to it (relative to the particle's instantaneous Lorentz frame) to be spherically distributed about itself.



This invariance is probably not that of simple Lorentz invariance, because the observer-dependent shift in the intensities/current densities of the particle production depends upon the acceleration of the observer, not on his relative velocity. For instance, the masses of the particles produced, in the case of fermion-antifermion production, vary in an opposing sense to the manner in which the fermion-antifermion rate alters under an arbitrary Lorentz transformation of the gravitational field engendering the enhanced f(+)/f(-) production. And this occurs in just such a manner that the mass-creation rate for f(+)/f(-)'s remains constant. This would seem to imply that the concept of mass is absolute within the Theory of Relativity. The 4-volume in which f(+)/f(-) creation/annihilation is taking place within this gravitational field is also unaffected by an arbitrary Lorentz transformation, since the length contraction and time dilation take place in opposite senses as well. In this way, the mass creation rate for f(+)/f(-)'s divided by the local 4-volume we are considering, i.e., the 4-current density of the general relativistic particle production, is conserved under an arbitrary Lorentz transformation of the gravitational field inducing the particle production field.
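The opposite-senses cancellation claimed here can be checked symbolically: under a boost with factor γ, lengths along the boost axis contract as L/γ while time intervals dilate as γT, so their product (and hence the 4-volume element) is unchanged. A minimal sympy sketch, assuming a boost along one spatial axis:

```python
import sympy as sp

beta = sp.symbols('beta', positive=True)
gamma = 1 / sp.sqrt(1 - beta**2)   # Lorentz factor

L, T = sp.symbols('L T', positive=True)
L_boosted = L / gamma              # length contraction along the boost axis
T_boosted = gamma * T              # time dilation

# Only the boosted axis and the time coordinate change, so the 4-volume
# element transforms by this ratio, which is identically 1:
four_volume_ratio = sp.simplify((L_boosted * T_boosted) / (L * T))
assert four_volume_ratio == 1      # contraction and dilation cancel exactly
```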

Returning to our first analogy, this exchange of information is not actually occurring among the pixels (as was the case for the gnats), but is, for the greater part, occurring within the CPU itself, that is, between its individual circuit elements; in small part, this exchange of energy/information is taking place between the CPU and the pixels on the screen it is controlling. The greater the ratio of information exchanges taking place between the CPU and itself relative to those taking place between the CPU and the screen, the slower will be the maximum permissible velocity across the screen for an object represented on this screen, assuming an absolute clock rate for the CPU (akin to the notion of cosmological proper time).
similar statement would apply to the "acceleration" of the cursor across the screen - the
larger the group of pixels which one wishes to simultaneously move across the screen,
the smaller will be the maximum acceleration attainable by the coherent group of pixels
represented by the cursor.
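The scaling claimed in this analogy can be made concrete with a toy model. Everything below beyond the bare idea is an illustrative assumption: we posit a CPU with a fixed clock rate and suppose that only the fraction of operations directed at the screen can advance a represented object, so the attainable screen velocity falls as internal bookkeeping grows:

```python
def max_screen_velocity(clock_rate_hz, internal_ops_per_frame,
                        screen_ops_per_frame, pixels_per_update=1):
    """Toy model: pixels/second an object can move, given that only
    screen-directed operations advance it (an illustrative assumption)."""
    total_ops = internal_ops_per_frame + screen_ops_per_frame
    screen_fraction = screen_ops_per_frame / total_ops
    return clock_rate_hz * screen_fraction * pixels_per_update

# As the internal-to-screen ratio grows, the attainable velocity drops:
v_low_overhead = max_screen_velocity(1e9, internal_ops_per_frame=10,
                                     screen_ops_per_frame=90)
v_high_overhead = max_screen_velocity(1e9, internal_ops_per_frame=90,
                                      screen_ops_per_frame=10)
assert v_high_overhead < v_low_overhead
```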

The quantum principle of the identity of indiscernibles is weakened for composite objects, which is related to the action of the 2nd Law of Thermodynamics. The discernibility of composite objects grows sharper as each object develops its own unique history (as opposed to mere "trajectory"), and the failure of this quantum principle, i.e., the identity of indiscernibles, in view of the 2nd Law is bound up in the vacuum energy/cosmological constant paradox (non-gravitating vacuum energy). It is moreover bound up with the important distinction between data and information. The acceleration of an object changes the state of the object in a way that must be reconciled not only with the boundary conditions but also with the initial conditions by which the object was originally constituted. Thus gravitational fields must be capable of possessing nonlocal components.
03/01/06
We recall here that a Bohmian style interpretation of the EPR experiment with a Stern-Gerlach device calls for backward-in-time signaling connecting the separately measured particles with their earlier state as part of a composite spin-0 particle just prior to this particle's spontaneous decay.
03/01/06


If we are looking for something to play the role of "mass" within our computer analogy
we would do so in vain unless we modify somewhat (in a way which doesn't render our
analogy useless, I think) the programming of the software driving the computer monitor
(output device). If we were to think of the quantum vacuum field as generating and
sustaining all of the various "forms" such as all of the particles and fields of spacetime,
doing this in a manner exactly paralleling that in which a CPU creates/sustains all of the
digital graphical representations appearing on a computer screen, then the suggestion
arises that perhaps there is not only a maximum possible velocity, but also a maximum
possible acceleration through spacetime. An obvious choice for this maximum acceleration is simply c²/L_Planck. (But certainly the maximum acceleration becomes smaller in magnitude as we treat progressively more complex systems.) An equivalent representation of this limit is c/t_Planck. And, of course, we are thinking of L_Planck as the dimension of a three dimensional "pixel" composing the spatial part of global spacetime, while (t_Planck)⁻¹ represents the clock rate of the "global spacetime central processing unit (CPU)," i.e., the global quantum mechanical vacuum. We stated earlier that the temporality of a quantum mechanical system is owing entirely to the presence of energy uncertainty within this system.
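The equivalence of the two expressions for the proposed maximum acceleration is easy to verify numerically; the constants below are standard CODATA values, not figures from the text:

```python
import math

c = 2.99792458e8          # speed of light, m/s
G = 6.67430e-11           # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34    # reduced Planck constant, J s

l_planck = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m
t_planck = l_planck / c                 # Planck time, ~5.4e-44 s

a_max_1 = c**2 / l_planck               # c^2 / L_Planck
a_max_2 = c / t_planck                  # c / t_Planck, the equivalent form

assert math.isclose(a_max_1, a_max_2, rel_tol=1e-12)
print(f"maximum acceleration ~ {a_max_1:.2e} m/s^2")   # ~5.6e51 m/s^2
```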
03/01/06
(It should be noted here that different energy
uncertainties of identical magnitude might represent different quantities of information
upon interaction with the environment due to their possessing distinct quantum
fluctuation-correlational structure.)
03/01/06
We now realize that the temporality of a quantum mechanical system owes to the interaction of this system with the fluctuating quantum mechanical vacuum; consequently, the rate at which time passes within a given region of spacetime is a function of the energy density of the vacuum within this region. We have proposed that the inertial mass of a body is directly related to its binding energy due to nongravitational forces. This is a seeming paradox, since binding energy is negative and should result in an overall reduction in the inertial mass (positive energy) of the body.

May 1997
An example where there is a change only in gravitational binding energy is when the increase in negative binding energy results from the action of gravitation alone and is exactly counterbalanced by the general relativistic increase in the mass energy of the body. To wit, here we have increased the gravitational binding energy of a body without having affected the total energy, and hence the inertial mass, of the body.

When the density of a given region of space increases, there does not result merely a simple decrease in the energy density of the vacuum. Rather, the momentum current density tensor, which is diagonal in free space, experiences a shuffling of its components so that it is no longer diagonal with respect to a free space Minkowski spacetime.

August 1996
There is another way of looking at the phenomenon of inertia: in terms of the spin-coupling of real bosons of integral spin and real fermions of half-integral spin to the spins of virtual bosons and fermions. The particle manifestations of the vacuum momentum-energy fluctuations may be incorporated into the mechanism of inertia/gravitation alluded to in the paragraph immediately above. It is through the spin-coupling of real and virtual particles that the momentum current density components are altered from their diagonal 2nd rank tensor distribution to a non-diagonal component distribution of this momentum-energy which underlies manifest gravitational fields. The theory of "squeezed states," where the uncertainties in momentum along a particular axis are increased by borrowing momentum uncertainty from other orthogonal axes, may provide the necessary mathematical framework within which the effects of matter upon vacuum momentum-energy uncertainty may be adequately described: matter affects the quantum vacuum by inducing a broadening of the vacuum's momentum uncertainty, i.e., its momentum fluctuation spectrum, by utilizing fluctuation energy provided by the vacuum's uncertain energy, which decreases in step at the same time.

August 1996
This may be described in terms of the rotation of the matter + vacuum
momentum current density tensor. A second rank tensor multiplied by this diagonal
momentum current density four vector would produce the appropriate connection
between this four vector at points in spacetime infinitesimally contiguous with one
another. Such a 2nd rank tensor must somehow be assimilated to the metric tensor of
general relativity. Instantaneous correlations would manifest themselves as a departure
from locally deterministic causality and could constitute an explanation for the existence
of Heisenberg energy uncertainty. I think there is no doubt that if the basic framework of
Special Relativity is to be maintained, then we are forced to accept an origin for nonlocal
correlations which lies completely outside four dimensional spacetime or at least outside
the local absolute past/future of the best possible causal chain.
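The idea that a diagonal momentum current density acquires off-diagonal components under such a rotation can be illustrated with a plain similarity transformation: rotating a diagonal 2nd rank tensor produces off-diagonal entries while leaving its trace invariant. A minimal numerical sketch (the specific values are arbitrary):

```python
import numpy as np

theta = 0.3                       # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

T = np.diag([2.0, 1.0])           # diagonal tensor (a 2D slice, free-space case)
T_rot = R @ T @ R.T               # rotated tensor

assert abs(T_rot[0, 1]) > 0                      # off-diagonal components appear
assert np.isclose(np.trace(T_rot), np.trace(T))  # the trace, an invariant, is preserved
```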
11/11/07
These nonlocal correlations must always comprise causal interactions and so never be explicable in terms of them. We know that a photon traveling through free space experiences acceleration due to the cosmological expansion, and that this acceleration is equal to Hc, where H is Hubble's constant and c is the speed of light in vacuum. Therefore, if a spherical mass is instantaneously converted into pure energy, i.e., photons, the photons will instantly, collectively exert a positive pressure,

P = MHc/4πR².

Consequently, the vacuum must exert a pressure upon spherical masses, which is equal
and opposite to this above quantity. This notion should be investigated further in
connection with the Pioneer Anomaly, i.e., the anomalous component of acceleration that
some physicists have connected to either a cosmological constant or to the cosmological
acceleration field in some other sense, dark energy.
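The magnitude of the cosmological acceleration Hc invoked here can be estimated directly. Assuming H₀ ≈ 70 km/s/Mpc (a conventional value, not one given in the text), Hc comes out within an order of magnitude of the anomalous Pioneer acceleration reported in the literature (~8.7 × 10⁻¹⁰ m/s²):

```python
c = 2.99792458e8                   # m/s
Mpc = 3.0857e22                    # meters per megaparsec
H0 = 70e3 / Mpc                    # Hubble constant in s^-1, assuming 70 km/s/Mpc

a_cosmo = H0 * c                   # the Hc acceleration acting on free photons
a_pioneer = 8.74e-10               # m/s^2, reported anomalous Pioneer acceleration

print(f"Hc = {a_cosmo:.2e} m/s^2")       # ~6.8e-10 m/s^2
assert 0.5 < a_cosmo / a_pioneer < 1.0   # same order of magnitude
```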
11/11/07
Some theoretical evidence for this claim can be provided by the calculation of a work-energy integral. This integral is ultimately motivated by an extension of the equipartition theorem of kinetic gas theory to the cosmological distribution of energy in the Universe. This question will be addressed on a later occasion, however. As for the integral itself, it is used to calculate the work
which the Universe performed on some small volume of energy as the energy density of
this volume decreased from a very high value early in the history of the Universe (say in
the first few seconds) until the present epoch of cosmological expansion when the energy
density of this volume has become almost negligible. One may think of this work as
being performed on this volume by some cosmological acceleration force field and if we
assume that this tiny volume managed to hold itself together without expanding
throughout the entire expansion phase, then this volume must have exerted a force upon
the Universe equal and opposite to the cosmological force which was attempting to
spread it apart. Conservation of momentum holds for the combined mass-
energy/vacuum-energy system so that there is a balancing of the force of the Hubble
cosmological force field acting upon the vacuum and the gravitational force of the
vacuum acting upon the total matter distribution of the Universe. We may even say that
the vacuum's gravitational field is simply a reaction force produced by the tendency of
the Hubble cosmological acceleration force to alter the momentum of the vacuum. This
reaction force acts to conserve the momentum of the vacuum energy field. This action -
reaction force relationship is expressed by the equation given below,

H²r × E_v = E_o × GM/R²,

where GM/R² = the acceleration field produced by the vacuum's gravitational field. The
gravitational field of matter distributions is not an inherent property of these distributions,
but must be conceived along the same general lines as the electrical repulsive force
between dislocations or holes in an otherwise electrically neutral crystalline matrix. This
idea is more or less captured by the following relationship,

E_v/E_o × H²r = {c²/r × (1 - E/E_v) - c²/r} = g,

where the term c²/r is the acceleration field produced by the vacuum reaction force
which compensates the action of the Hubble cosmological acceleration force upon the
vacuum energy field. Hc is the cosmological acceleration field, which acts upon freely
moving photons, and implies the existence of a precisely balancing and opposing reactive
force upon the particles forming a bound matter distribution. Let us assume that this tiny
volume is that occupied by a neutron and that the work-cycle is to begin at an early epoch
in the Universe's expansion when the average energy density of the vacuum was equal to
that of the neutron itself: approximately 10³³ Joules/m³. The work integral is defined to be:

W = ∫ P dV,

where the limits of integration are to be from ε_i to ε_f, where P and ε are the pressure and energy density of the vacuum, respectively.

We will at first define the work integral in terms of ∫ F dr. F is just the cosmological acceleration force acting on the tiny volume and F = MH²r, where r is the radius of the volume, M is the mass contained within the volume and H is Hubble's constant. If the volume were to expand exactly in step with the expansion of space in its immediate local region, then the surface of the volume would move with respect to its center (chosen as coordinate origin) with velocity Hr, and the acceleration of this surface with respect to the chosen origin would be d/dt[Hr] = H[dr/dt] = H²r, so that, again, F = MH²r. The work integral becomes

W = ∫ MH²r dr

between the distance limits R_i = R_neutron and R_f = R_universe. To transform this work integral into one in terms of pressure and volume rather than force and distance involves defining the parameters ρ and dρ, i.e., mass density and differential of mass density, respectively.


ρ = 3M/4πR³ ===> R = [3M/4π]^(1/3) × ρ^(-1/3)

dr = -(1/3) × [3M/4π]^(1/3) × ρ^(-4/3) dρ
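The change of variables from radius to mass density sketched here can be checked symbolically: differentiating R(ρ) = [3M/4πρ]^(1/3) with respect to ρ reproduces the stated differential.

```python
import sympy as sp

M, rho = sp.symbols('M rho', positive=True)

# Radius of a sphere of mass M as a function of its mass density rho:
R = (3*M / (4*sp.pi*rho))**sp.Rational(1, 3)

dR_drho = sp.diff(R, rho)
expected = (-sp.Rational(1, 3) * (3*M/(4*sp.pi))**sp.Rational(1, 3)
            * rho**sp.Rational(-4, 3))

assert sp.simplify(dR_drho - expected) == 0   # matches the stated dr
```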


If one performs this work integral one finds that the energy necessary to expand a neutron-sized packet of unbound neutron mass-energy is precisely,

E = G(M_neutron)²/R,

so that the energy necessary to prevent this expansion is

E = -G(M_neutron)²/R.

This result is, of course, provided that the mass density of the universe is given by the formula,

ρ = 3H²/4πG

This formula for the mass density of the Universe is implied by an equality of magnitude of the kinetic energy and gravitational binding energy of the Universe as a whole. This equality constitutes a proposed solution to the so-called "flatness problem" of cosmological theory. Alan Guth's Inflationary Theory was originally proposed to solve, essentially, just this cosmological problem. A rough and ready definition of the
flatness problem is the nearly exact equality between the Universe's expansion velocity
and its "escape velocity" - of somewhere between 1 part per 10¹² and 1 part per 10⁶⁰,
depending upon which sources in the literature are cited; the problem is not that this exact ratio conflicts with the standard "Big Bang" cosmological model, but rather that it is obviously a non-arbitrary (structural) feature of the Cosmos of which the model can give no meaningful explanation. Proponents of the Standard Model are forced to lump this fact in with the other initial conditions which were set at the beginning of the Universe's expansion, and which physical science cannot explain, such as the fundamental physical constants and the Universe's initial mass. Other proponents of this model invoke the Anthropic Cosmological Principle to explain this ratio. The argument goes as follows: if the ratio of escape velocity vs. expansion velocity is too much greater than 1, then the Universe would have already re-collapsed by this time and we would not be here; on the other hand, if the ratio is too much less than 1, then the density of the universe would not have been great enough, for long enough, to allow the formation of stars and galaxies, so that yet again we would not be here to worry about the question. But the Anthropic Cosmological Principle cannot explain the exactness with which this ratio approaches unity; it can only provide relatively crude limits on either side of this ratio, say between 0.9 and 1.1. Conceivably these limits could be one order of magnitude tighter, but this shaves off only 1 or 2 orders of magnitude and there are still at least 11 more orders of magnitude in need of explanation.
October 2011
A related problem in cosmology, and one which is not currently thought to pertain to the sensitivity of initial and boundary conditions, is that of the stupendous mismatch between the predicted and observed energy densities of the vacuum represented by the "cosmological constant": cosmological theory predicts a value for this constant on the order of 10⁻²⁶ kg/m³ while quantum theory predicts a value on the order of 10⁹⁵ kg/m³, a discrepancy of approximately 120 orders of magnitude! When one considers, on the one hand, that the
densities of the mass energy and vacuum energy (cosmological prediction) are of the same magnitude, if not almost precisely equal (the condition for a "flat" universe), and on the other, that gravity might be attributed not to the absolute energy density of the vacuum but to the difference between two large energy densities approximating one another to within an order of magnitude and constituting two distinct components of the enormous vacuum energy density predicted by quantum mechanics, then, if this energy density difference is attributed to an "effective energy density" for the vacuum, i.e., that to which Einstein's equivalence principle applies, we would in turn be able to explain away the enormous energy density predicted by quantum theory for the vacuum using a logic similar to that of renormalization theory. The cost of all this is perhaps only an aesthetic one for purist relativists: the seeming loss of theoretical elegance and simplicity in the form of a considerably reduced universality of Einstein's strong equivalence principle. Note that the effective energy density in this approach to solving the "cosmological constant problem" is now comparable to that of the
cosmologically predicted mass density. The above approach does not seem so far-fetched once one considers that the starting mass density of the universe must have been very sensitively determined by initial and boundary conditions upon the cosmological constant itself. An additional consideration here is that a so-called flat universe also requires that the density of the gravitational binding energy of the Universe be equal to its mass energy density, c.f., the derivation above for the equality of, e.g., the neutron's mass and gravitational binding energy densities. On account of the combined principles of equipartition of energy between degrees of freedom and conservation of quantum entanglement between the two complementary components of the fluctuation momentum-energy of the quantum vacuum, which together make up its total energy density (zero-point energy), we suspect that it is these two components of the physical vacuum which are placed out of mutual balance by the perturbing effect of inertial mass, while the back reaction of this induced vacuum momentum-energy imbalance upon mass constitutes the origin of gravitational mass.
Here the mismatch is still greater. But invoking the so-called Solipsistic Cosmological Principle may well place far tighter constraints upon the value of the cosmological constant.
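As a quick arithmetic check on the discrepancy quoted above (the two densities are those cited in the text):

```python
import math

rho_cosmological = 1e-26   # kg/m^3, value cited from cosmological theory
rho_quantum = 1e95         # kg/m^3, value cited from quantum theory

orders = math.log10(rho_quantum / rho_cosmological)
assert 120 <= orders <= 122    # "approximately 120 orders of magnitude"
```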
November 2007
The theory which I propose, however, which can be considered to
be an extension of Van Flandern's S-hypothesis, explains this ratio not as an arbitrary
initial condition, but as a necessary feature of any universe where the energy of
cosmological expansion drives the forces of the universe's gravitation; to wit, if the
expansion velocity of the Universe were greater than it is, then the energy of its
expansion would be greater and hence the gravitational energy of the Universe would be
correspondingly increased such that the ratio of unity would be maintained. This
postulate has a favorable bearing on many other unsolved problems of cosmological
theory. Our postulate states, in essence, only that the total gravitational potential and kinetic energies are equal. The postulate is not a mere arbitrary assumption, however, as it is supported by the principle of energy equipartition. However, this principle can only be applied if it is assumed that there exists some form of energy which acts as a medium physically linking these two types of energy, i.e., the energy of position with the energy of momentum, in order that the equilibrium between them can be maintained through mutual energy exchanges - in much the same way that the rotational and vibrational energies of gas molecules maintain a balance through the continual exchange of kinetic energy between these molecules through random collisions.
May 1997
A ready candidate for this medium connecting the gravitational potential and
kinetic energies of particles, for example, is the fluctuating component of the
Hamiltonian for the quantum system in question. As stated earlier, this fluctuation
component of the Hamiltonian cannot be "screened." This fluctuation component of the
Hamiltonian may be thought of as the product of its space and time components, H(r) and
H(t). This Hamiltonian is, of course, only an average, <H(r,t)>. There are three basic types of interaction for this system: exchanges of momentum/energy between the parts of the system entirely among themselves, exchanges of momentum/energy between the system and other similar systems, and exchanges of momentum/energy between the system and its fluctuation Hamiltonian.

July 1998
The equation, constant = p*p + r*r, seems to imply that p and r may both be incompatible observables despite the absence of fluctuations in the sum, p*p + r*r. But if one looks at this equation, one immediately realizes that it is the equation of a circle in phase space. But a circle in phase space represents a precisely defined trajectory which, in turn, implies that p and r, though each uncertain in an epistemological sense, must at any moment both possess precise values. And this fact would contradict the thesis of p's and r's incompatibility as observables.
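The observation about the circle can be made explicit with the classical harmonic-oscillator parametrization p = A cos t, r = A sin t (an illustrative choice, not from the text): the sum p² + r² is exactly constant even though p and r each vary, which is precisely the sharp phase-space trajectory the argument appeals to.

```python
import sympy as sp

t, A = sp.symbols('t A', real=True)
p = A * sp.cos(t)      # momentum along the trajectory
r = A * sp.sin(t)      # position along the trajectory

# p**2 + r**2 equals A**2 for all t: a circle in phase space, with both
# coordinates possessing precise values at every moment.
assert sp.simplify(p**2 + r**2 - A**2) == 0
```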

Conservation of vacuum 4-momentum is asserted here to provide the mechanism by which the necessary energy exchanges are effected between the gravitational and kinetic energies of the vacuum. We have already seen how gravitational acceleration itself, i.e., the conversion of potential energy into kinetic energy (the converse of acceleration under thrust), which has been formalized by, e.g., Hamilton's canonical equations of motion, results from a spatio-temporal vacuum energy density gradient which itself, in turn, comes into being through the operation of the principle of vacuum momentum conservation, and which sustains itself in existence through the vacuum's fundamental dynamism of self-energy-exchange. Gravitational potential, it is said, cannot be defined absolutely. Rather, only relative differences in potential are meaningful. For mathematical convenience, all potentials are referenced with respect to a potential at infinity, where the 1/R dependence of the potential causes it to vanish to zero. Yet this definition contains a presumption, namely, that it is meaningful to speak of a gravitational potential at an infinite distance. In actuality, the furthest that a mass can be placed so that its potential is a minimum is at the so-called edge of the observable Universe, that is, just this side of the spherical light horizon - where the cosmological red-shift of electromagnetic radiation becomes infinite. According to some simple calculations I have performed, this distance is roughly 1.1 x 10²⁶ meters. In the particular case of our own Earth this potential is about 20 orders of magnitude smaller than the potential at the Earth's surface - a vanishingly small value of approximately 10⁻³⁰ Joules per kilogram.
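The quoted distance to the light horizon is of the same order as the Hubble length c/H. A rough check, assuming H₀ ≈ 70 km/s/Mpc (a conventional value; the 1.1 × 10²⁶ m figure corresponds to a somewhat different choice of H):

```python
c = 2.99792458e8          # m/s
Mpc = 3.0857e22           # meters per megaparsec
H0 = 70e3 / Mpc           # s^-1, assuming 70 km/s/Mpc

hubble_length = c / H0    # ~1.3e26 m, same order as the quoted 1.1e26 m
assert 1e26 < hubble_length < 2e26
```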
The Schrodinger equation may be thought of as describing diffusion along the ict axis. Moreover, Graham's Law of effusion states that more massive particles diffuse more slowly than less massive particles. According to Hawking and Bekenstein, the entropy of a black hole is directly proportional to the surface area of the hole. This relation is given below.

S ~ 4πR²

But the energy density of the black hole is given by the relation,

ε = 3c⁴/4πR²G

so that S ∝ ε⁻¹,

where S is the entropy and ε is the energy density of the black hole, respectively.
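The inverse relation between entropy and energy density follows directly from the two expressions just given: S scales as R² while the energy density scales as R⁻², so their product carries no R dependence. A symbolic check:

```python
import sympy as sp

R, G, c = sp.symbols('R G c', positive=True)

S = 4*sp.pi*R**2                      # entropy ~ horizon surface area
eps = 3*c**4 / (4*sp.pi*R**2*G)       # energy density, as given above

product = sp.simplify(S * eps)        # reduces to 3*c**4/G
assert not product.has(R)             # no R dependence: S is proportional to 1/eps
```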
Consequently, if the energy density of the vacuum is equal to the energy density of black hole masses, then the entropy of the vacuum should increase with decreasing vacuum energy density. We believe that the energy density of the vacuum is equal to the effective energy density of black holes because the radial outward pressure of the vacuum, P_vac, must be 0 at the event horizon surface of a black hole and the vacuum obeys the equation of state ε_vac = P_vac. Furthermore, as already stated elsewhere, ε_o = ε_mass + ε_vac, because there is no fundamental distinction between creating mass from the vacuum energy locally available within a particular region of spacetime and importing already existing mass into this region from outside it; this, in turn, because matter particles may not be thought of as having a permanent, continuous existence after the manner of the substances of Aristotelian physics, which follows from the fact that there is no real distinction between relativistic and non-relativistic mass. P_vac must be 0 here because the matter composing a black hole may exchange energy only with itself; it exchanges no energy with the vacuum energy field outside its event horizon. ε_matter = ε_o in this particular case and, as well, ε_vac = P_vac = 0. Again, half of the mass-energy contained within the black hole is due solely to the general relativistic increase in mass which accumulated as the hole formed.

ε = ε_o{1 - GM/Rc²},

where ε_o = 3c⁴/4πGR².

Generalizing this result, we may say that the maximum rate of increase in the entropy of the vacuum is parallel to the direction along which the decrease in the vacuum's energy density is maximal. In so-called free space, the direction along which the maximal
decrease in the vacuum's energy density exists is along the ict axis; in other words, the
vacuum energy density varies in a purely temporal manner in free space, i.e., due to
cosmological expansion. Therefore, the so-called thermodynamic arrow of time points in
a direction orthogonal (in free space) to any 3 dimensional rectangular system of
coordinates describing an inertial frame of reference; moreover, a gravitational field is
associated with an alteration in the orientation of the thermodynamic arrow of time
because a component of the direction of the maximally increasing vacuum entropy now
points radially inward - in the simple case of spherical masses. The thermal particle
creation which is observed to occur within accelerated reference frames is a manifestation
of a creation/annihilation process which is normally balanced in the free space vacuum
but which is unbalanced within the accelerated frame. In the presence of a gravitational
potential, the arrow of time possesses a component along the vacuum energy density
gradient so that a new time axis is defined within this new vacuum which exactly
corresponds to this new time axis as defined within the general theory of relativity as
applied to the Minkowski light cone.

11/96 The second law of thermodynamics only applies to physical processes taking
place within a closed system which is in interaction with an infinite heat reservoir. The
2nd Law does not, however, apply to open thermodynamics systems since in these
systems no global thermodynamic arrow of time can be consistently defined. Such
thermodynamic arrows can only be defined locally. This reminds us of how standing
waves cannot form in containers of infinite size. So the concept of a particle, which is
itself just a Gaussian packet of superposed standing waves, can only possess validity in a
local sense; globally speaking, the notion of a particle does not refer to anything which
possesses ultimate reality, but is instead an abstraction grounded in a low-order
approximation. See Quantum Field Theory in Curved Spacetime and Black Hole
Thermodynamics by Robert M. Wald, University of Chicago Press, for further discussion
of the limitations of the "particle concept" in strongly curved or rapidly time-varying
spacetimes. This book also discusses the phenomenon of particle production in
expanding Einstein-de Sitter spacetimes as being closely related to Hawking radiation.

December 1996
Changes in the boundary conditions of the wavefunction which take place
with a rapidity such that,

dB/dt > ΔB/Δt ≈ (ΔE/h) × ΔB,

where B are the boundary conditions of the quantum mechanical superposition state, S,
will inevitably result in a collapse of the wavefunction, Psi, into one of its eigenstates of
the observable bound by B. This is provided that the new boundary conditions, B', are
stabilized to within c × Δt, where Δt is the time uncertainty in the time interval of this
transition, B → B'. Wavefunctions representing locally-connected quantum
mechanical systems are constituted by a system of boundary conditions placed upon the
nonlocally-connected quantum vacuum stress-momentum-energy field. The principle of
superposition illustrates the importance of unrealized possibilities: they play a
substantive role in the behavior of the real.
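The collapse criterion above can be put in rough numerical terms. The following is a minimal sketch, assuming the relation is read as "the boundary conditions change faster than the system's characteristic quantum timescale h/ΔE"; the function name `is_sudden` and the sample numbers are hypothetical illustrations, not the author's formalism:

```python
# Hypothetical numeric reading of the criterion above: a change in
# boundary conditions B -> B' counts as "sudden" (collapse-inducing)
# when its rate dB/dt exceeds (Delta_E / hbar) * Delta_B, i.e., when
# it completes faster than the quantum timescale hbar / Delta_E.
HBAR = 1.054571817e-34  # J*s

def is_sudden(dB_dt, delta_B, delta_E):
    """True if the boundary-condition change outpaces the system's
    characteristic quantum evolution timescale hbar/delta_E."""
    return dB_dt > (delta_E / HBAR) * delta_B

# Example: delta_E = 1e-21 J gives a timescale hbar/delta_E ~ 1e-13 s,
# so a unit change in B completed in ~1e-15 s counts as sudden.
print(is_sudden(dB_dt=1e15, delta_B=1.0, delta_E=1e-21))  # True
```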
October 2011
This notion apparently does not
occur to defenders of the doctrine of Modal Realism, e.g., David Lewis. Initial and
boundary conditions are what distinguish the merely virtual from the real, the possible
from the actual, cf. Max Tegmark's hierarchy of multiverse types.

The energy uncertainty of a quantum mechanical system, ΔE, is both independent of the
observer, that is, it represents an ontological, rather than a merely epistemological,
uncertainty in the energy of the system, and it is dependent upon the state of the observer's
knowledge of this system. This suggests that the observer and his state of knowledge are
essentially separable. His knowledge of quantum mechanical system states is from the
inside, meaning that the observer's knowledge is coded nonlocally in the quantum energy
uncertainty of his own brain, itself a quantum mechanical system!
The brain of the
observer simply provides a set of boundary conditions upon the quantum vacuum energy
field. Thermal particle production is expected to occur in the direction of the entropy
gradient of a vacuum possessing a gravitational potential; and the principle of relativity
demands that particle production be associated with the global increase in vacuum
entropy engendered by the process of cosmological expansion. The maximally entropic
state within any region of spacetime is that of the vacuum itself. In general, due to
gravitational time dilation, the entropy of matter distributions can never catch up, so to
speak, with the entropy of the vacuum: the result of this is that matter and energy
distributions can never quite reach a state of thermodynamic equilibrium within an
expanding universe.

December 1996
It is our belief that the global orientation of the arrow of time is determined by
the global distribution of matter in the Universe, and that without the presence of matter,
there is no determinate direction for the arrow of time. This implies that the Universe
conceived of as a radically open system cannot possess a complete, self-consistent
topological description. Using the analogy of a system of vibrating strings: a finite sum
of Fourier component functions, F(w), adequately describes the system of string
vibrations provided that each of the strings be "anchored" on at least one end, which is to
say that, in the absence of spatial boundary conditions placed upon the strings' vibrations,
standing wave patterns of string vibration cannot exist and no purely spatial description
of the system of string vibrations is possible - only a spatiotemporal description is
possible in this case, and one in which there is no unique decomposition of the
spatiotemporal description into a particular 3 (space) + 1 (time) manifold. A result
similar to the one above obtains when no unique time direction for the dynamical
evolution of the system can be specified. The ratio of mass-energy density to vacuum
energy density varies as 1/R for spherical masses: e = e0[1 - GM/(Rc^2)]. The
previous formula seems to imply that when R = R_Schwarzschild, the energy density of
the vacuum has only been reduced to 1/2 of its normal free-space value. However, this is
to neglect the effect which a reduced vacuum energy density has upon the measurement
of mass values: the inverted fraction by which the vacuum's energy density is reduced
gives us the fraction by which the masses occupying this vacuum relativistically increase.
In other words, the mass of a body may increase to just short of 1/2 of its Schwarzschild
value and still remain stable against total gravitational collapse. When the mass of a
body increases to just over its Schwarzschild mass a positive feedback occurs between
each successive "cycle" of relativistic mass increase, whereupon half of the vacuum's
energy has already been "displaced" by the piling on of mass from outside, while the
other half of the vacuum's energy is converted directly into mass energy entirely through
relativistic mass increase. This is the reason why we may properly say that the true
energy density of the vacuum is not 3c^4/8πGR^2, but actually twice this value: e0 =
3c^4/4πGR^2. Also, when one considers the process of "evaporation of black holes" via
the mechanism of Hawking radiation, it is easy to see that in a very real sense the density
of black holes must be exactly twice that predicted by the general theory of relativity,
more particularly, via the Schwarzschild solution to the field equations: a quantity of
energy, 2mc^2, where m is the mass of the black hole, must be created from out of the
vacuum before a black hole of mass, m, evaporates completely.
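The arithmetic behind the halving claim can be checked directly. The following is a minimal numeric sketch of the formula e = e0[1 - GM/(Rc^2)] quoted earlier in this entry, using standard SI constants; the function name is a hypothetical label:

```python
# Numeric check of the text's formula e/e0 = 1 - GM/(R c^2):
# at the Schwarzschild radius R_s = 2GM/c^2, the term GM/(R c^2)
# equals exactly 1/2, so the vacuum energy density is halved.
G = 6.674e-11   # m^3 kg^-1 s^-2, gravitational constant
C = 2.998e8     # m/s, speed of light

def vacuum_density_fraction(M, R):
    """e/e0 = 1 - GM/(R c^2), per the formula quoted in the text."""
    return 1.0 - G * M / (R * C**2)

M_sun = 1.989e30                     # kg, solar mass
R_s = 2 * G * M_sun / C**2           # Schwarzschild radius of the Sun
print(vacuum_density_fraction(M_sun, R_s))  # 0.5: density halved at R_s
```

At R = 2GM/c^2 the bracketed term is 1/2 identically, independent of the mass chosen, which is the "reduced to 1/2 of its normal free space value" statement in the text.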

The ultimate substratum which mediates all the fundamental physical interactions must
itself be nondeterministically chaotic in nature; or else time cannot be considered a true
dynamical variable. Since a fundamental process of creation and annihilation underlies
all particle interactions, the action of the vacuum energy field may be identified with the
translation of all composite matter along a direction orthogonal to the total set of
orthogonal spatial axes.


January 1997
Space without Time is Determinism. Time without Space is Chaos.
Determinism and Chaos are simply opposite ends of a single continuum. Complexity is
that which governs the movement of a dynamical system back and forth along what we
might well term the Cosmos/Chaos continuum. The underlying order which pushes a
dynamical system this way and that along this continuum cannot itself be described in
terms of a classical, dynamical system because this order necessarily operates from
outside this continuum. What ultimately governs this movement of dynamical systems
along this continuum is the underlying fluctuations in spacetime.

November 1997
Deterministic change can only be a phenomenal appearance since either the
deterministic phenomena are the play of projections from determinate objects from
within higher dimensional spaces containing our space or the phenomena conceal an
indeterminism at a deeper level behind the appearances. In the same way that the
continual creation and destruction of a circular disk confined to a two dimensional sphere
may be thought of as the continuous penetration or projection of a three dimensional
cylinder orthogonally through this two dimensional spherical surface, we may model the
continual process of creation and annihilation of spherical massive bodies as the
continuous penetration or projection of hypercylindrical bodies orthogonally through a
three dimensional hypersurface constituting normal three dimensional space. If massive
bodies were composed of permanent, continuously existing substance, there would be no
reason to postulate the existence of an additional 4th spatial axis associated with the
dimension of time. It is the energy of matter's continual re-creation of itself which
constitutes the latent energy of matter, E = mc^2. When a material body is uniformly
accelerated, the body is no longer re-creating itself along the time dimension alone, but
must be considered to be in the act of re-creating itself along two orthogonal component
directions: part of the energy of re-creation is associated with a momentum in the
direction the body is accelerating, and the remaining part of this re-creation energy is
associated with the body's momentum in a direction orthogonal to this acceleration
vector, and moreover, orthogonal to the 3 dimensional space (instantaneous inertial
frame) which it occupies at any given moment. Our question at this juncture, then, is: is
there any reason for treating a 4th spatial dimension as being ontologically real, rather
than as just an abstract entity within a particular formalization of special relativity? Yes.
We list them below.

1) Conservation of vacuum momentum.

2) The conversion of mass to energy as the 90° rotation of imaginary momentum.

3) The thermodynamic arrow of time in a gravitational field.

4) The Hubble distance-velocity relationship describing galactic recession.

5) The tunneling of all masses through a hyperspherical potential barrier.

6) The derivation of Einstein's mass-velocity relationship within an expanding four-
hyperspherical universe.

7) The conservation of four dimensional angular momentum as an explanation for the
perihelion advance in the orbit of the planet Mercury.

8) The implication of quantum mechanics that real particles possess no continuous
existence, but are essentially being continuously created and destroyed.
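Item 2 of the list above (mass-to-energy conversion as a 90° rotation of imaginary momentum) can be pictured with elementary complex arithmetic. This is only a toy illustration of the rotation, not the author's formalism; the names and numbers are hypothetical:

```python
# Toy picture of item 2: take the rest momentum of a mass m to be the
# pure imaginary quantity i*m*c; rotating it by 90 degrees in the
# complex plane (multiplying by e^{-i*pi/2}) yields the real quantity
# m*c, whose associated energy is m*c^2 -- the latent energy E = mc^2
# cited in the text.
import cmath

C = 2.998e8  # m/s

def rotate_90(p_imag):
    """Rotate a complex momentum by -90 degrees in the complex plane."""
    return p_imag * cmath.exp(-1j * cmath.pi / 2)

m = 1.0                 # kg, illustrative mass
p = 1j * m * C          # pure imaginary rest momentum i*m*c
p_real = rotate_90(p)   # now (numerically) purely real: m*c
print(p_real.real, abs(p_real.imag) < 1e-6)
```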

ds^2 = c^2t^2 - x^2 - y^2 - z^2, so that the interval, ds, may take on either real or imaginary
values. If ds^2 > 0, then two events separated by this interval are locally connectable, and
may be connected by a series of reversible interactions. If ds^2 < 0, then two events
separated by this interval are nonlocally connectable, and may only be connected by a
series of irreversible interactions. All reversible processes are mediated by vacuum
processes which are themselves irreversible. Because gravitation is a phenomenon
resulting from conservation of four-momentum, the sign of mass (+/-) is immaterial to
the direction of the gravitational acceleration vector. If anything analogous to what might
be termed mass charge exists, it is in the form of an imaginary mass. Imaginary mass
would have the effect of producing a gravitational field with an acceleration vector which
is reversed in its normal direction. This suggests to us that the mass, or energy, of the
vacuum field is itself imaginary so that real mass may be understood as a deficit of
imaginary energy within the vacuum field, producing an acceleration vector of the
normal gravitational acceleration vector field. If gravitons, as massless particles, are
assumed to be the true mediators of the gravitational force, then there is a serious
problem with interpreting the gravitational field associated with a spherical wavefront of
gravitons which is expanding outward at the speed of light.

We notice that in the many
various forms in which the Heisenberg Uncertainty Principle may be stated there is
always the product of two uncertainties in physical quantities which is greater than or
equal to Planck's constant and that one of these paired uncertainties is with respect to a
physical quantity which is conserved, and for which there exists a quantum number,
while the other paired uncertainty is with respect to a physical quantity which is not
conserved, and for which no quantum number exists.
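The interval rule ds^2 = c^2t^2 - x^2 - y^2 - z^2 introduced above, with its local/nonlocal labels, can be sketched as a small classifier. The function names are hypothetical; the labels follow the text's own terminology ("locally connectable" for ds^2 > 0, "nonlocally connectable" for ds^2 < 0):

```python
# Classify event separations per ds^2 = c^2 t^2 - x^2 - y^2 - z^2.
C = 2.998e8  # m/s

def interval_squared(t, x, y, z):
    return (C * t)**2 - x**2 - y**2 - z**2

def classify(t, x, y, z):
    """'local' (ds^2 > 0, reversible per the text), 'nonlocal'
    (ds^2 < 0, irreversible per the text), else 'lightlike'."""
    ds2 = interval_squared(t, x, y, z)
    if ds2 > 0:
        return "local"
    if ds2 < 0:
        return "nonlocal"
    return "lightlike"

print(classify(1.0, 1e8, 0, 0))  # local: light covers ~3e8 m in 1 s
print(classify(1.0, 1e9, 0, 0))  # nonlocal: separation exceeds c*t
```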

To list just a few examples of this general rule: ΔEΔt ≥ h, ΔpΔx ≥ h, ΔnΔL ≥ h, etc.
Moreover, each form of expression of the fundamental Heisenberg uncertainty relation
may be, in turn, paired with another such expression where the conserved quantities of
the two paired expressions form with one another a symmetrical tensor which possesses
the property of Lorentz invariance, while the unconserved quantities of the two paired
expressions form, with one another, another symmetrical tensor which also possesses the
property of Lorentz invariance. It is the Lorentz-invariant tensorial relationship of the
paired conserved quantities which is responsible for the Lorentz invariance and tensorial
nature of the paired unconserved quantities, and not the converse. For example, the fact
that momentum and energy may be subsumed together under a unified description as the
relativistic momentum-energy tensor is what is responsible for the tensorial nature of the
interrelationship of the space and time variables, i.e., the Lorentz invariance of space and
time which manifests itself separately as the time dilation and length contraction
observed within frames of reference traveling at an appreciable fraction of the velocity of
light relative to an observer's reference frame. The momentum-energy tensor is, by the
way, also responsible for the Lorentz-invariant, tensorial nature of the Maxwell tensor
describing the electromagnetic field, and we may now see why the Maxwell tensor does
not possess a term denoting the divergence of the magnetic field, i.e., why magnetic
monopoles do not exist in nature. Aesthetically minded physicists have for generations
noted this missing term in Maxwell's equations and suggested the inevitable existence of
monopoles, since their existence would render the electromagnetic field equations more
perfectly symmetrical. But we see now that the lack of greater symmetry in Maxwell's
equations is explicable in terms of the presence of the even deeper symmetry of the
Heisenberg uncertainty relations, and so this apparent lack of symmetry on the part of the
electromagnetic field need no longer be viewed as a "flaw" in the structure of
mathematical physics.
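The recurring product-of-two-uncertainties structure noted above can be illustrated with the textbook minimum-uncertainty Gaussian packet, for which Δx·Δp = ħ/2 regardless of packet width. This is standard quantum mechanics used only to exhibit the invariance of the product; it is not specific to the tensor-pairing argument:

```python
# For a minimum-uncertainty Gaussian wave packet (textbook result),
# Delta_x = sigma and Delta_p = hbar/(2*sigma), so the product
# Delta_x * Delta_p = hbar/2 for every choice of packet width sigma.
HBAR = 1.054571817e-34  # J*s

def uncertainty_product(sigma):
    delta_x = sigma
    delta_p = HBAR / (2.0 * sigma)
    return delta_x * delta_p

for sigma in (1e-10, 1e-9, 1e-3):
    assert abs(uncertainty_product(sigma) - HBAR / 2) < 1e-40
print("Delta_x * Delta_p = hbar/2, independent of packet width")
```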

This deeper symmetry may be understood in the following way: the fluctuation in
electric field strength ( an unconserved quantity ) is due to the uncertainty in the position
(an unconserved quantity) of a conserved quantity - electric charge, combined with the
uncertainty in momentum ( a conserved quantity ) of the magnetic charge ( an
unconserved quantity ). If we try to establish the fluctuation in the magnetic field
strength independently of the fluctuation in electric field strength, we end up violating the
symmetry of the uncertainty relations, e.g., the fluctuation in magnetic field strength ( an
unconserved quantity ) is due to the uncertainty in charge momentum ( a conserved
quantity) of a conserved quantity - electric charge, combined with the uncertainty in
charge position (an unconserved quantity ) of the magnetic charge (assumed here to be a
conserved quantity ). Again, the symmetry is only restored here by treating magnetic
charge as an unconserved quantity. We may apply our rule in a more direct fashion by
postulating an uncertainty relation which obtains provided that magnetic charges do exist.
This uncertainty relation is the product of uncertainties in electric and magnetic charge.
To wit, the product in the uncertainties of these two physical quantities must be greater
than or equal to the value of Planck's constant. Following our same symmetrically-based
rule, we find that Planck's constant must be less than or equal to the product of
uncertainties in a conserved quantity and an unconserved quantity. This new uncertainty
relationship would be written, expressing Planck's constant as the lower limit for the
product of the uncertainty in electric charge with the uncertainty in the quantity of
magnetic charge, divided by c, the speed of light, in order to have consistency of physical
dimensions. Again, only one of these paired quantities is the conserved quantity, and this
conserved physical quantity must be the electric charge. So we see from consideration of
the symmetry exhibited by the many alternate expressions of the Heisenberg uncertainty
principle, that if monopoles exist, their charge cannot be a conserved quantity so that
magnetic charge may not possess a quantum number. However, if Maxwell's equations
are modified to allow for the existence of magnetic charge, the symmetry of these
equations demands magnetic charge conservation, but this leads to a contradiction with
the more general symmetry argument for the non-conservation of magnetic charge, and
so we see that magnetic charge cannot exist.

QZ Particle creation in a non-inertial reference frame is not a symmetrical process: it is
not possible for one to accelerate in such a manner that real particles become virtual
particles, i.e., are absorbed back into the vacuum energy field from which they were
originally created in the same way that it is not possible to accelerate in such a manner
that the local rate at which time passes increases rather than decreases; however, if a
given real particle does not have an infinite lifetime (which no particle does), then within
an unaccelerated reference frame, the lifetimes of quasi-stable particles, as viewed from
an accelerated reference frame, will be shortened by the relativistic time dilation factor.
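The "relativistic time dilation factor" invoked above is the standard Lorentz γ. The following minimal sketch puts a number on it, using the textbook muon-lifetime example (values approximate, illustrative only):

```python
# Standard special-relativistic time-dilation factor gamma, used to
# quantify the "relativistic time dilation factor" invoked in the text.
import math

def gamma(v_over_c):
    """Lorentz factor 1/sqrt(1 - (v/c)^2)."""
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

tau_muon = 2.2e-6   # s, muon proper lifetime (approximate)
g = gamma(0.99)     # gamma ~ 7.09 at v = 0.99c
print(round(g, 2), tau_muon * g)  # lifetime scales by ~7x between frames
```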

It is in terms of this fundamental asymmetry that we can more simply resolve the
so-called twin paradox of special relativity: the acceleration of the spacefaring twin and
the earthbound twin cannot be considered to be merely relative because the twin in the
rocketship observes thermal particle production within his vacuum, while the twin
confined to the Earth observes no such phenomenon within his own vacuum. This
phenomenon of particle production within accelerated reference frames is to be expected
because a particle is real only if its energy is greater than the energy uncertainty of the
quantum system to which it belongs, and the time dilation associated with accelerated
motion affects the fundamental uncertainty relation, ΔEΔt ≥ h, such that some particles
which were virtual within the unaccelerated frame relativistically increase their energy
which is now even greater in relation to a reduced energy uncertainty, and so "become"
real particles within the new vacuum state. All particles which are virtual in one
particular reference frame are real particles with respect to some other reference frame;
the converse of this is not the case, however - the irreversibility enters the picture, as
stated before, through the differential observations of thermal particle production within
the vacuum of observers within different inertial frames of reference. This relationship
between real and virtual particles within special relativity can perhaps be understood as a
restatement of the principle of causality within special relativity: events which are
causally connected in one particular reference frame are causally connected and have the
same time order within all possible reference frames, and it is only those events which are
not causally connected (nor potentially causally connected) which might have the order
of their occurrence switched when observed from the standpoint of different reference
frames, from which it also follows that events which are not causally connected within a
given frame of reference, are not connected in any reference frame. Real particle
production within a vacuum of reduced energy uncertainty may be interpreted as being
converse but parallel to the process of virtual particle production within a vacuum of
increased energy uncertainty. Also, if real particles are understood as feedback structures
of virtual particle processes which essentially may be understood as a network of circular
energy fluxes, these individual processes being causally connected with one another
within one particular spacetime, then these feedback structures are destroyed when the
energy uncertainty of the vacuum becomes greater than the energy of the real particles, so
that an increase of energy uncertainty is associated with a loss of information in the form
of the cybernetic control "holding the particles together."

Consequently, the particles which are produced within an accelerated reference
frame, say, within the curved spacetime of a gravitational potential, may not "appear" out
of the vacuum in a collective state of causal interconnection with one another. The only
assurance that a set of particles is not causally connected with one another, i.e., locally
connected, is if they are nonlocally connected. Moreover, the notion of the continuous
existence of particles is simply not consistent with the asymmetry of virtual particle/ real
particle transformations which are necessitated by a change in Lorentz frames.

We know that virtual particles do not preserve their identity from one moment to
the next; by "moment" we mean a period of time greater than h/E , where E is the total
energy of the virtual particle-antiparticle pair which has been spontaneously created out
of the vacuum state. We also know that this particular virtual pair will appear as a pair of
real particles with respect to an accelerated frame of reference. So if the virtual pairs
possess no enduring continuous existence within a flat spacetime, then neither do they
possess a continuous existence within any other possible frame of reference. This notion
follows simply from the principle of the general equivalence (from the standpoint of the
fundamental invariance of physical law) of all frames of reference. From this we arrive
at the general result that real particles, what we call matter, must be a stable pattern of
fluctuation of the field energy of the quantum mechanical vacuum, whereas virtual
particles are unstable patterns of vacuum field fluctuation. Here we see that the
fundamental difference between stable and unstable patterns of vacuum fluctuation, real
and virtual particles, respectively, is not qualitative, but quantitative; it is due merely to
the availability or non-availability of raw, undifferentiated energy. The structure of all
possible matter configurations already exists latent within the vacuum fluctuation field;
what is required to "create" these configurations is simply the necessary quantity of raw
energy.
April 2011
But this is only true up to the limit set by the Planck energy. Configurations
of vacuum stress-momentum-energy larger than E_Planck must have been "assembled"
from configurations smaller than E_Planck, cf. arXiv:quant-ph/0603269 v2 12 Jul 2006.

When energy is supplied to the vacuum, the structures which are produced are simply
those which are the most probable and hence the simplest. More exotic configurations of
matter may be produced if energy is supplied to the vacuum field while it is experiencing
"improbable" fluctuation patterns. These so-called improbable fluctuations are simply
those which possess a more fleeting existence.

In the August 1993 issue of Scientific American there appears an article which
describes experiments in which the time for photons to quantum mechanically tunnel
through a barrier is measured for a coherent beam of incident photons where 99% of the
beam is reflected off of the barrier, but in which approximately 1% of the photons are
transmitted ("tunnel") across the barrier. The experimental data indicated that the
photons which tunneled through the barrier traveled at superluminal speeds, some of the
photons reaching 1.7c. The phenomenological explanation for this was that the tunneling
photons changed the shape of their wavefunctions such that the peak of the wave function
is shifted in the direction of photon tunneling, resulting in the photons having a finite
probability of being found just on the opposite side of the barrier somewhat earlier than if
the shape of their wavefunctions had experienced no distortion. Increasing the width of
the barrier decreased the probability of photons successfully tunneling through the
barrier, but resulted in increased measured superluminal velocities for the photons, which
actually succeeded in tunneling through the barrier.

6/97 >> In this case, the photons' wavefunction peak had to shift toward the opposite
end of the barrier faster if they were to be observed on the other side of the barrier within
the short time that it would have taken for the photon to be absorbed by the barrier. <<
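The reported trend (wider barriers, larger superluminal speeds) is what one would expect if the tunneling time is roughly independent of barrier width, i.e., the Hartman effect. The following is a minimal numeric sketch under that assumption; the barrier widths and the 2 fs tunneling time are hypothetical illustrations, not the experiment's actual values:

```python
# If the tunneling time tau is roughly independent of barrier width L
# (the Hartman effect), the apparent tunneling velocity v = L/tau grows
# with L and can exceed c -- matching the reported trend that wider
# barriers gave larger measured superluminal speeds.
C = 2.998e8     # m/s
TAU = 2.0e-15   # s, assumed width-independent tunneling time (illustrative)

def apparent_velocity(L):
    """Apparent barrier-traversal velocity for width L (meters)."""
    return L / TAU

for L in (0.6e-6, 1.0e-6, 1.2e-6):   # illustrative barrier widths, meters
    print(L, apparent_velocity(L) / C)  # velocity in units of c
```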

In theory, particles which quantum-tunnel through a potential barrier possess a
negative kinetic energy, and hence an imaginary momentum while engaged in the
tunneling process. If the four-momentum of the tunneling photons is conserved (as it is
required to do by special relativity), then an increased photon imaginary momentum must
be precisely compensated by an increased real photon momentum such that the
magnitude of total four-momentum of the photon is, again, conserved: the tunneling
photons are effectively being scattered in four-dimensional spacetime!
April 2011
The tunneling photons possess a negative imaginary momentum while in the act of
tunneling through the barrier.


A photon scattered within a four-dimensional space would experience a decrease in its
so-called real momentum (actually, in this case, the real momentum of the photon is
simply the momentum associated with its motion through the space which is directly
observable to us, i.e., 3 dimensions); however, for the scattering of a photon within a 4-
dimensional space in which the interval may satisfy ds^2 < 0, superluminal velocities
are made possible by the conservation, as stated earlier, of the photon's four-momentum.

If there is a functional relationship between the integral of both the gravitational self-
energy and the kinetic energy of cosmological expansion, then there will be a functional
relationship between the gravitational self-energy of expansion and the kinetic energy of
expansion such that when the kinetic energy of cosmological expansion approaches zero,
the gravitational self-energy of the Universe approaches zero, implying a flat global
spacetime geometry. Because of the negative feedback coupling between the kinetic and
gravitational self-energies, we expect that these two energies are strongly coupled in the
early history of the cosmological expansion, but become very weakly coupled by this
relatively late epoch in the history of the Universe. In this scenario we expect a time
variation in the strength of Newton's gravitational constant which is proportional to
the time derivative of the quantity e^(-t/T), where T = 1/H, H being Hubble's constant.

This gives a time variation of G of (H/e) × G. Since the coupling between the
gravitational self-energy and the kinetic energy of cosmological expansion is virtually
zero in the present epoch of the Universe's history, we expect that there will obtain a
force of cosmological repulsion which almost exactly counterbalances the gravitational
force which would tend to slow and eventually reverse the process of cosmological
expansion.
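The H/e figure above follows from elementary calculus: the derivative d/dt e^(-t/T) evaluated at the present epoch t = T has magnitude (1/T)e^(-1) = H/e. The following is a quick finite-difference check, using an illustrative SI value for H:

```python
# Checking the arithmetic: with T = 1/H, |d/dt e^{-t/T}| at t = T is
# (1/T) * e^{-1} = H/e, the fractional variation (H/e) * G in the text.
import math

H = 2.27e-18   # s^-1, roughly 70 km/s/Mpc (illustrative value)
T = 1.0 / H

def f(t):
    return math.exp(-t / T)

dt = T * 1e-6
deriv = abs((f(T + dt) - f(T - dt)) / (2 * dt))  # central difference at t = T
print(deriv / H)  # ~ 1/e ~ 0.3679
```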

QUESTION: We know that for low velocities, the addition of velocities is according to
Galilean relativity, i.e., velocities are simply additively superposed. However, it does not
appear that small accelerations may be simply additively superposed according to
Galilean relativity. According to what rule are both large and small accelerations added
together to yield the total relative acceleration?

The energy required to rotate a pure
imaginary momentum by 90° so that this momentum becomes a pure real momentum is
just mc^2. This quantity of energy may be thought of as the latent energy of matter which
it possesses by virtue of its being initially accelerated by the forces of the Big Bang
explosion. The negative kinetic energy of matter implies the existence of a
hyperspherical potential barrier through which all matter tunneled (in quantum
mechanical fashion) and through which it continues to tunnel. This notion constitutes a
kind of hyper-extended inflationary theory. The gradient of this potential associated with
this barrier may be described by a pure imaginary four-vector ( in "free space" ), while
the orientation of the gradient of this hyperspherical potential is altered in the presence of
mass-energy in such a manner that the magnitude of the gradient (in four dimensions ) is
always conserved. Given a typical distribution of matter, in general this four vector will
possess no zero components, and the introduction of new matter into this distribution
will transform the components of the potential gradient four-vector after the manner of a
second rank tensor. In fact, this tensor provides the "connecting rule" by which the
gradient transforms, as we move along an arbitrary trajectory through a given matter
distribution, considering in succession points along the trajectory which are only
negligibly distant from one another (so that the potential does not change "too rapidly"
between successive points). All of the terms of Einstein's general relativistic field
equations are second rank tensors, the energy-momentum tensor providing the rule by
which the metric tensor at one point in spacetime is transformed at infinitesimally
contiguous points of spacetime. We must keep in mind that the potential gradient around
any particular particle of matter is described by a four-vector, and it is only the meshing
of the gradients of one particle's vector field with that of its neighbor which requires the
use of a tensor description.

If this vector field were assumed to be quantized, so that a unique exchange
particle, or boson, were thought to mediate the action of the field, then this boson would
have a spin of 1, not 2, and hence could not be described as a graviton, itself the mediator
of a purely attractive force field; a spin 1 particle, however, is the exchange particle of a
force field which is, like the photon, either attractive or repulsive, depending on whether
the gradient of the potentials of both particles are of like sign or of opposite sign. The
"charge" of the matter particles corresponds to the case of the particles being either of
real or imaginary mass, as stated earlier. The effect, however, of two matter particles of
either both real mass or both imaginary mass upon each other's spin 1 vector fields is to
create a stress within the spacetime between the two particles which, as we stated earlier,
must be described in terms of a tensor field. The imaginary mass of virtual particles, as
alluded to earlier, would result in a mutually repulsive force field tending to drive these
virtual particles apart from one another, resulting in the cosmological expansion of the
vacuum, or of space itself. Localized deficits in the density of imaginary mass (due to the
"displacing" presence of real mass) would manifest themselves in a diminution of the
cosmological acceleration vector describing the cosmological force of repulsion
obtaining between all virtual particles. The acceleration of massive particles due to
gravitational fields may be interpreted as an attempt on the part of real massive particles
to maintain a spherically symmetrical distribution of vacuum energy about themselves - a
condition obtaining for a particle at "rest" with respect to some fundamental reference
frame. The general relativistic effect of mass increase within a gravitational field may be
explained in terms of a function of the alteration in the three variables: vacuum energy
density, magnitude of the hyperspherical potential barrier, and the imaginary momentum
of the particle experiencing the mass increase. The mechanism by which the vacuum
energy density is reduced by the presence of mass-energy has already been discussed.
The reduction in the local value of the hyperspherical potential is explained in terms of
the projection of its gradient within an altered spacetime. The alteration in the imaginary
momentum is also explained in terms of its projection within the same altered spacetime.

The retardation in the local rate of cosmological expansion manifests itself
as a linearly growing loss of synchronization between clocks separated by a difference in
gravitational potential. General relativity attributes this effect to gravitational time
dilation alone; on our view it follows from the conservation of the four-momentum of the
body engendering the gravitational potential. The mass of the body, as
measured from the point of weaker gravitational potential, is increased by a fraction equal
to the fractional change in the vacuum's zero-point energy density at the point of greater
potential, relative to the point of weaker potential, where the density of this vacuum
energy (in free space) is equal to the density of mass-energy of a black hole of
radius equal to the radius of the body in question which is producing the difference in
gravitational potential. One might justifiably ask about any 2nd or higher order effects
which could arise out of the particular cosmological vacuum mechanism that we propose
for the gravitational field. For instance, if the time rate of decrease in the energy density
of the vacuum is suppressed (relative to its "free space" value) in regions of spacetime
possessing massive bodies, then wouldn't one expect a kind of "piling up" of vacuum
energy in those regions of spacetime where general relativistic time dilation is locally
strongest, in such a manner that a repulsive gravitational field develops (cf. Dr. Brian
L. Swift)? The relationship in general relativity between mass and curvature where
increasing curvature leads to increasing mass as well as increasing mass leading to
increasing curvature has an analogy within our theory of gravitation based on
spatiotemporal variation in vacuum energy density. Within our theory, decreasing
vacuum energy density leads to increasing mass and increasing mass leads to decreasing
vacuum energy density. Within our theory, the role of the metric tensor components, g_ik,
corresponds to the 2nd partial derivatives of vacuum energy density with respect to the
variables x, y, z, ict. The stress-momentum-energy tensor of general relativity corresponds
to the 2nd partial derivatives of the mass, or nongravitational binding energy density
within our theory. The 1st partial derivatives are not sufficient to provide the
mathematical structure needed to describe the spatiotemporal variations in the vacuum
energy density responsible for the parasitic gravitational force. We must remember that
Newton's third law of action-reaction is modified within relativity theory and that it does
not strictly hold within this theory. Gravitational forces must not lie along any 3-
hypersurface of simultaneity within 4-dimensional spacetime. It is easy to see why this is
so when one considers two distinct points which are gravitationally coupled, i.e.,
connected by a geodesic arc.
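The linear growth of clock desynchronization with a difference in gravitational potential can be illustrated numerically using the standard general-relativistic rate factor for a static clock. This is a minimal sketch of textbook time dilation, not of the vacuum-density mechanism itself; the Earth mass, radius, and altitude figures below are assumed purely for illustration.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s

def clock_rate_factor(mass_kg: float, radius_m: float) -> float:
    """Rate of a static clock at radius r relative to a clock at infinity:
    dtau/dt = sqrt(1 - 2GM/(r c^2))  (Schwarzschild exterior)."""
    return math.sqrt(1.0 - 2.0 * G * mass_kg / (radius_m * C**2))

def desynchronization(mass_kg: float, r_low: float, r_high: float,
                      elapsed_s: float) -> float:
    """Accumulated lag (in seconds) of the lower clock relative to the
    higher one -- note it grows linearly with the elapsed time."""
    return elapsed_s * (clock_rate_factor(mass_kg, r_high)
                        - clock_rate_factor(mass_kg, r_low))

# Assumed illustrative values: Earth's surface vs. 1000 km altitude, one year.
M_EARTH, R_EARTH = 5.972e24, 6.371e6
lag_one_year = desynchronization(M_EARTH, R_EARTH, R_EARTH + 1.0e6,
                                 365.25 * 86400.0)
```

The lower clock lags by a few milliseconds per year in this configuration, and doubling the elapsed time doubles the lag, which is the linearity the passage describes.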

The time rates of change in the vacuum energy density at these two spacetime
points differ by an amount related to the differential severity of gravitational time dilation
(relative to some arbitrary 3rd point in spacetime) and so there is a variation in the time
rate of change of vacuum energy density as one moves along the geodesic arc connecting
these two points. We believe that the role of the curvature tensor within general relativity
is to fix the relationship of the metric and momentum-energy tensors with respect to the
condition of spacetime at the arbitrary point within it where the observer is located. Are
two gravitationally coupled points within spacetime linked by a geodesic arc of the
spacetime, or are they linked by an arc length of null spacetime interval, where ds = 0?
Suppose a spin 0 particle decays into two spin 1/2 particles of opposite sign (so as to
conserve the spin quantum number), and the two spin 1/2 particles become separated by a
great distance. When a quantum spin measurement is then performed upon one of the two
particles, the wavefunction which describes both particles "collapses," so that the spin
orientation of the unmeasured particle must instantly become opposite to the spin
orientation observed in the measurement of the first particle. This EPR
(Einstein-Podolsky-Rosen) type gedanken experiment, performed within a curved
spacetime raises an interesting question concerning the wavefunction which describes the
two particles, as this wavefunction takes two different forms at two points along any
segment of a curved spacetime. If the communication between the two spin 1/2 particles
is nonlocal and hence "instantaneous," then the wavefunction experiences a discontinuous
change at the point in spacetime occupied by the second particle, i.e., the wavefunction as
expressed within spacetime B is instantaneously expressed in terms of the nonlocally
connected spacetime A, where measurement of the spin of the first particle was
performed; in this way the spins of the two particles would add to zero, resulting in spin
remaining a "good" quantum number. However, the only way to avoid the appearance of
discontinuity (of the wavefunction), in this case, is to postulate the existence of a
physical description which is more fundamental than the wavefunction itself so that the
wavefunction becomes but the projection, within a given local spacetime, of the more
fundamental physical description, which itself remains continuous.
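The perfect anticorrelation at issue can be made concrete with a toy simulation of a spin-singlet pair measured along a common axis: each individual outcome is random, yet the pair always sums to zero spin. This is a sketch of the flat-spacetime bookkeeping only; it does not address the curved-spacetime question raised above.

```python
import random

def measure_singlet_pair(rng: random.Random) -> tuple[int, int]:
    """Measure both members of a spin-0 -> two spin-1/2 decay along the
    same axis.  The first outcome (+1 or -1) is a fair coin flip; the
    partner's outcome is always the opposite, conserving total spin."""
    first = rng.choice((+1, -1))
    return first, -first

rng = random.Random(0)
outcomes = [measure_singlet_pair(rng) for _ in range(10_000)]
total_spins = {a + b for a, b in outcomes}   # every pair sums to zero
```

Individually the results look like noise; only the pairwise sum reveals the correlation, which is why spin remains a "good" quantum number after collapse.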

6/97 >> If such a more fundamental description of the quantum mechanical system
exists, then why is the reduction of the wavepacket or collapse of the wavefunction itself
necessarily accompanied by a discontinuous change in the probabilities for
observation/measurement of physical observables? We might rather assume for
consistency's sake (that of QM) that the wavefunction describing the particle pair must
undergo a "self-collapse" when some critical separation of the particles is reached - a
separation at which the difference in the representation of the pair, in terms of its
wavefunction expressed within the local spacetimes of either particle of the pair, has
reached some critical value. This critical value would, according to Penrose, be related to
the mass-energy difference of the spacetimes in which each particle is embedded.
Perhaps as long as this mass-energy difference is less than the most energetic massless
particle which can be defined within a self-consistent theory of quantum gravity, say, the
mass-energy of a Planck particle of some 10^-8 g, there is no necessity that the
wavefunction describing the particle pair undergo what Penrose terms "Objective
Reduction," (OR), because, perhaps, the energy difference up to this critical value of
mass-energy can be compensated through the exchange of a massless quantum (boson),
i.e., through exchange of a virtual particle representing a vacuum 3-momentum
fluctuation. Another possible explanation of the objective reduction of the pair's
wavefunction is related to the overall energy uncertainty of the component of the
quantum vacuum of both particles. This is to suggest that when the difference in mass-
energy of the local spacetimes of both particles exceeds the energy uncertainty of the
nonlocally connected component of the local vacua of the particles, objective reduction
of the pair's wavefunction must take place - for otherwise, the mass-energy difference in
the local spacetimes of the particles has outstripped the nonlocally-connected vacuum's
ability to compensate the disparity in the local spacetime representations of the pair's
wavefunction in the spacetimes of each particle, resulting in the incommensurability of
the quantum numbers of each particle should a reduction of the pair's wavefunction take
place after this critical difference in spacetimes has been reached - as a result of the
spatial separation of the particles. What has been said thus far suggests that quantum
entanglement, i.e., nonlocal connectivity, of particles or fields within significantly
differing local spacetimes may not be admissible in a consistent theory of quantum
gravity. This, in turn, suggests that nonlocal vacuum processes may not actually be
responsible for maintaining particular spacetime geometries, or that there is some
rather small limit to the differences in local spacetime curvatures within an overall
nonlocally connected vacuum. We must investigate the possibility that the temporality,
i.e., the rate of time's passage relative to cosmic time, of a local spacetime is directly
related to the nonlocal connection of the local vacuum of this spacetime to the
nonlocally-connected vacuum of the universe at its largest scale. <<
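Penrose's objective-reduction proposal, invoked in the passage above, pairs an energy difference between the superposed spacetimes with a collapse timescale tau ~ hbar/E_G (in his actual criterion E_G is the gravitational self-energy of the difference of the mass distributions). A minimal numeric sketch of the reciprocal relation; the two energy values below are assumptions for illustration only.

```python
HBAR = 1.055e-34   # reduced Planck constant, J*s

def or_timescale(energy_difference_j: float) -> float:
    """Penrose objective-reduction timescale tau ~ hbar / E_G: the larger
    the energy difference between the superposed spacetimes, the sooner
    the superposition self-collapses."""
    return HBAR / energy_difference_j

# Illustrative (assumed) energy differences, in joules:
tau_small = or_timescale(1e-30)   # tiny difference  -> long-lived superposition
tau_large = or_timescale(1e-20)   # larger difference -> rapid objective reduction
```

The point of the sketch is only the scaling: below some critical energy difference the superposition persists long enough for vacuum fluctuations to compensate it, consistent with the suggestion above.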


The result of this maneuver, however, is that quantum mechanics could no longer
be viewed as a "complete theory," since the wavefunction would no longer constitute a
complete description, in general, of a quantum mechanical system.

On the other hand, if the expression of the wavefunction remains in terms of its
own local spacetime, then there is no unique wavefunction which describes both particles
prior to a measurement being performed on one of the particles, so that the spins of the
two particles would not necessarily add to zero after a spin measurement is performed,
with the result that spin would not be a "good" quantum number within a curved
spacetime. In such a case, the general invariance of physical law within the theory of
relativity would be violated. According to the physicist David Bohm, in his book, The
Special Theory of Relativity, the latent energy, E = mc^2, which any particle of mass, m,
possesses, exists by virtue of internal motions, which may be thought of as taking place
within the particle, or alternately defining the existence of the particle, and that the
conversion of mass into energy, and vice versa, consists merely in converting the circular
internal motions of a number of mass(ive/less) virtual particles into a set of linear
external motions of a number of massless real particles, and then converting them back
again into the original set of circular internal motions. It is as though one were to take a
tiny particle in rapid linear motion, bend or divert this motion so that it assumed the form
of a rapid circular motion, so that the particle now possessed the appearance of a ring,
and then utilize a portion of this circular motion to set the ring rotating so rapidly that the
ring now took on the appearance of a solid sphere, most of which, to be sure, would be
composed of empty space, but which would possess a great deal of energy by virtue of
the two perpendicular internal circular motions which, in conjunction with one another,
defined the sphere's existence, and then to proceed to undo, or reverse this series of
operations, retrieving the original linear motion with which one started. We know, of
course, that this simple analogy of the "kinetic sphere" is rather naive, and that it is only
intended as a basic model of the interconnected meshwork of virtual particle reactions,
which defines the existence of any real particle. The important point here is that there is
an exact parallel between this internal circular motion and the linear motion of massive
particles along the imaginary axis of our cosmological model of the hypersurface which
is expanding at the speed of light, or rather, of the hyperspherical potential energy barrier
through which all massive particles are presently in the process of quantum mechanically
tunneling, at approximately the speed of light. To explore this parallel, we need to
make a relatively simple observation about the relationship of the internal circular
motions to the external linear motions into which they are converted whenever mass is
converted into energy. We notice from the example of the "kinetic sphere," that it only
required two independent (orthogonal) internal motions in order to define the existence of
this 3 dimensional object. We imagine that this conversion of these two circular,
orthogonal "internal" motions will result in the creation of two linear, orthogonal
"external" motions.

We believe this conversion process is described by an isomorphic group
operation, such that the number of dimensions of motion is conserved, while the
orthogonality of the motions is retained, because the conversion of energy into mass, the
reverse of this operation, is accomplished through a continuous series of simple Lorentz
transformations, i.e., through the relativistic mass-velocity relationship of Einstein, and
because we know that the conversion of energy into mass is a reversible (symmetrical)
operation so that the conversion of mass into energy can be described in terms of a linear
matrix operation; i.e., it is group-theoretic in nature. But we know that the direct
conversion of mass into energy produces an out-rush, if you will, of released energy
which streams outward in 3 spatial dimensions. This obvious empirical fact seems to
require that there be 3 independent orthogonal, circular, internal motions which underlie
the latent energy of massive bodies, and this implies that massive bodies which possess
this latent energy, E = mc^2, must be, either themselves, 3 dimensional hypersurfaces
binding a 4 dimensional hypervolume, such that the mass possesses three independent
degrees of freedom, or that they must possess two circular (orthogonal) internal motions,
defining two dimensional surfaces binding 3 dimensional volumes constituting the
massive particles, and that the third orthogonal internal degree of freedom is that
associated with the linear motion of these 3 dimensional objects which occupy a 3
dimensional hypersurface which is expanding within four spatial dimensions at the speed
of light. There seems to be a problem, however, in associating all of the latent energy of
motion, E = mc^2, with the linear degree of freedom associated with the cosmological
expansion, simply because it means ignoring the contributions from the two other internal
degrees of freedom, corresponding to the internal motions of massive bodies. If,
however, we assume an equipartition of energy between the energy magnitudes
associated with all three degrees of freedom, and this seems reasonable because the
Lorentz transformation of special relativity represents a symmetrical operation, then the
energy, mc^2, which is needed to rotate the pure imaginary linear momentum by 90°, to
convert it into a pure real momentum, is provided by the two energies, (1/2)mc^2 and
(1/2)mc^2, respectively, associated with the two circular internal degrees of freedom. In
this way, the two energies, (1/2)mc^2, combined with the negative kinetic energy,
-(1/2)mc^2, of massive particles, tunneling through the hyperspherical potential energy
barrier, yields the new energy, +(1/2)mc^2, associated with the pure real momentum of
outstreaming massless particles which results from the total conversion of a real massive
body into energy.
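The energy bookkeeping in the preceding paragraph can be checked with exact arithmetic. The sketch below simply verifies that the shares combine as stated, taking as given the text's assignments of +(1/2)mc^2 to each circular internal degree of freedom and -(1/2)mc^2 of tunneling kinetic energy; those assignments themselves are the author's assumptions.

```python
from fractions import Fraction

# Energy shares in units of mc^2, exactly as assigned in the text.
internal_1 = Fraction(1, 2)    # first circular internal degree of freedom
internal_2 = Fraction(1, 2)    # second circular internal degree of freedom
tunneling  = Fraction(-1, 2)   # negative kinetic energy of barrier tunneling

# Budget available to rotate the imaginary momentum by 90 degrees: one full mc^2.
rotation_budget = internal_1 + internal_2

# Net energy after combining all three shares: +(1/2)mc^2, as the text claims.
released = internal_1 + internal_2 + tunneling
```

Exact fractions avoid any floating-point ambiguity in confirming that the three stated shares do sum to the +(1/2)mc^2 of the paragraph.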

We now see that the energy of massive bodies, mc^2, may be thought of as
stemming, alone, from the internal motions defining these bodies, which is released
whenever these circular internal motions, i.e., the energy circulating within the feedback
loops of the virtual particle reactions composing the massive bodies, is "deflected" into
the linear motion of real massless particles. An additional bonus from these
considerations is that it is now possible to see that the distinction between virtual and real
particles is not a fundamental one. Mass, on this view, is simply a function of the
topological structure of the virtual particle reactions which occur everywhere within the
quantum mechanical vacuum on account of a fundamental energy uncertainty of the
vacuum state which, in turn, stems from the fact that the Hamiltonian of the vacuum is,
itself, a function of the "incompatible" observables, position and momentum. Moreover,
the massless force-carrying particles, i.e., bosons, which are the end product of any total
conversion of matter into energy, exist solely by virtue of their interaction with the
vacuum state, and in no way depend upon, or are defined by, any self-interaction.
Consequently, these massless bosons, e.g., photons, can be considered to be virtual
particles even though they are capable of being observed. In other words, the mass which
a given volume of space possesses is merely a function of the imbalance in the ratio of the
volume's self-interaction to its, if you will, not-self-interaction: in free space, where no
matter is present, the flux density of energy exchange between the interior of an arbitrary
volume with itself and the flux density of energy exchange between the interior of this
volume and its exterior, is delicately balanced. It is the alteration of this balance in favor
of greater self-energy exchanges, which engenders the phenomenon of mass. The self-
energy exchanges correspond to the energies of circular internal motion, discussed
earlier, which we invoked as a simplistic model of the interconnected meshwork of
virtual particle reaction paths defining the existence of massive bodies. There is a very
convenient mathematical description of this self-energy and so-called not-self-energy
exchanges; these are, respectively: the energy density and the pressure of the vacuum.
This balance of external and internal vacuum energy exchanges is exact, indicating
the condition of free space; it obtains, therefore, when the pressure and energy density of
the vacuum are equal, and it is under this condition that the speed of light has its maximum
local value, as seen from application of Mach's formula for the speed of pressure wave
oscillations within a material medium. There is no reason why we may not apply Mach's
formula in this case because the only essential difference between the propagation of
pressure oscillations in a material medium such as the Earth's atmosphere and such
oscillations in the vacuum is that of Lorentz invariance, i.e., the value of the speed of
sound within a material medium is dependent on the state of motion of the observer
performing the velocity measurement, while the velocity of "sound," i.e., light, within the
vacuum is itself independent of the state of motion of the observer within this vacuum.
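Mach's formula referred to here is the standard expression for the speed of pressure waves in a medium, c = sqrt(dP/drho), which for an ideal gas takes the Newton-Laplace form sqrt(gamma * P / rho). A minimal check against air at sea level; the gas constants below are assumed textbook values, and nothing in the sketch depends on the vacuum analogy.

```python
import math

def pressure_wave_speed(gamma: float, pressure_pa: float,
                        density_kg_m3: float) -> float:
    """Newton-Laplace form of Mach's relation: c = sqrt(gamma * P / rho)."""
    return math.sqrt(gamma * pressure_pa / density_kg_m3)

# Assumed values for dry air at sea level:
# gamma = 1.4, P = 101325 Pa, rho = 1.225 kg/m^3.
c_air = pressure_wave_speed(1.4, 101325.0, 1.225)   # roughly 340 m/s
```

Recovering the familiar speed of sound is the sanity check; the passage's claim is that the same functional form, applied to the vacuum's pressure and energy density, yields the local speed of light.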

Another way to point up this difference between the vacuum and an ordinary material
medium is to note that the particles within the vacuum, being virtual particles, do not
possess a continuous existence with time and so cannot be chosen as the origin of an
absolute frame of reference within which the velocity of pressure oscillations of the
vacuum might be measured relative to an observer's state of motion. An ordinary
material medium such as the atmosphere is composed of particles which possess a
continuous existence and so an absolute frame of reference may be established within this
medium by which the velocity of pressure oscillations of the medium may be measured
which is then dependent upon the state of motion of the observer.

The ict axis of Minkowski four-dimensional spacetime, may be understood not to
represent a physically real 4th dimension, in an analogous sense to the other three
familiar spatial dimensions, but that it merely functions as an abstraction within the
formalism of special relativity to model conservation laws, e.g., energy, momentum, etc.,
and the linear transformation law connecting inertial reference frames (the Lorentz
transformation) which, in turn, govern the general relationship of the internal and external
motions of real/virtual mass and field energy within a universe of three spatial
dimensions and one time dimension. Perhaps now it is easy to see why it is not necessary
to underpin the mathematical structure of special relativity with a physically real 4th
dimension. It is perhaps possible to remain consistent with Einstein's and Minkowski's
view of time as being associated with what is merely an abstract (not physically real)
dimension. If the Universe is not, in fact, an expanding 4-dimensional hypersphere,
then the Hubble distance-velocity relationship for galactic recession requires the
existence of a repulsive cosmological force field whose force increases linearly with
galactic separation. The gradient of the hyperspherical potential, postulated earlier to
explain, in part, the imaginary coefficient of the ict axis, would itself then have to be
interpreted as a manifestation of the negative time-rate-of-change in the energy density of
the quantum mechanical vacuum which occurs due to the global cosmological expansion.
Moreover, the continuous series of local Lorentz transformations which may be thought to
connect two non-inertial reference frames (centered about two points in space of differing
gravitational potential) would be understood in terms of a continuous tensor
transformation of the four independent components of spatiotemporal variation in
vacuum energy density, i.e., the gravitational energy gradient (spatial variation in vacuum
energy density) in conjunction with the temporal variation of vacuum energy density,
connecting the two non-inertial frames of reference.
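The defining property of each local Lorentz transformation in such a continuous series is that it preserves the spacetime interval s^2 = x^2 - (ct)^2. A minimal numeric check of that invariance for a single boost (standard special relativity, not specific to the vacuum-density reformulation; units with c = 1 are assumed):

```python
import math

C = 1.0   # work in natural units where c = 1

def boost(x: float, t: float, v: float) -> tuple[float, float]:
    """Lorentz boost by velocity v (|v| < 1 in c = 1 units)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (x - v * t), gamma * (t - v * x)

def interval_sq(x: float, t: float) -> float:
    """Invariant spacetime interval s^2 = x^2 - (ct)^2."""
    return x * x - (C * t) ** 2

x0, t0 = 3.0, 5.0           # an arbitrary event
x1, t1 = boost(x0, t0, 0.6) # the same event in a frame boosted at 0.6c
# interval_sq(x0, t0) and interval_sq(x1, t1) agree up to rounding error.
```

Composing many such boosts, each preserving the interval, is the "continuous series" the passage reinterprets as a tensor transformation of the vacuum-density variations.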

The equivalence between spatiotemporal variations in vacuum energy density and
variations in spacetime curvature may be more simply grasped by examining two
different expressions of the Heisenberg uncertainty principle within curved spacetimes.
These two expressions are:

ΔEΔt > h   and   ΔpΔx > h.

We know that within a curved spacetime, say, in the vicinity of a massive spherical body,
there is a general relativistic length contraction along the spherical body's radial direction
while at the same time there is relativistic dilation of time. If we are considering virtual
particles, then the > sign appearing in the two formulas, above, may be replaced by an =
sign, so that a dilation and a contraction in the variables, Δt and Δx, respectively, must be
coupled with an inversely proportional shrinkage and dilation in the dual variables, ΔE
and Δp, respectively. In this way, the energy of the vacuum decreases as one moves into
regions of increasing gravitational potential while the momentum of the vacuum, if you
will, increases along this direction. If the vacuum momentum is correctly described by a
four-vector of conserved magnitude, then the vacuum momentum may only increase with
increasing strength of local gravitational potential at the expense of a compensating
decrease in the vacuum's momentum along an orthogonal direction. It is the decrease in
the vacuum's momentum in the direction orthogonal to the radius of our spherical
massive body with which we must associate the decrease in the vacuum's energy along
the body's radial direction. So we obtain what perhaps appears to be a trivial result: the
momentum of the vacuum along a certain direction may only be increased by utilizing the
energy of the vacuum itself associated with its momentum in directions orthogonal to the
direction of increasing momentum, so that local mass distributions do not, themselves,
provide the energy required to support the existence of a local gravitational field; the
effect of mass is merely to redirect the vacuum momentum, utilizing the locally available
energy of the vacuum itself; to put this in the language of Einstein: mass does not
produce spacetime curvature, it locally alters the global curvature of spacetime. This
may all seem like an exercise in splitting hairs, but there is an important difference in
these two interpretations in the relationship of mass to spacetime curvature: if mass, or
what amounts to mass, alone, is responsible for the existence of spacetime curvature, then
an "empty" universe may not possess a globally curved spacetime. On the other hand, if
mass merely locally alters the background spacetime curvature, then there is nothing to
prevent the existence of so-called empty, curved spacetimes.
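The coupling argued for above, with the inequality saturated (ΔEΔt = h), reduces to a simple reciprocal scaling: stretching Δt by a gravitational dilation factor shrinks the conjugate ΔE by the same factor. A minimal sketch under that saturation assumption (the saturation for virtual particles, and the flat-spacetime Δt used, are the passage's own premises):

```python
H = 6.626e-34   # Planck constant, J*s

def scaled_uncertainties(dt_flat: float,
                         dilation_factor: float) -> tuple[float, float]:
    """Assume the saturated relation dE * dt = h.  If gravitational time
    dilation stretches dt by `dilation_factor`, the conjugate energy
    uncertainty dE must shrink by exactly the same factor."""
    dt = dt_flat * dilation_factor
    de = H / dt
    return de, dt

de_flat, dt_flat = scaled_uncertainties(1e-15, 1.0)   # far from the mass
de_deep, dt_deep = scaled_uncertainties(1e-15, 2.0)   # deeper potential, 2x dilation
```

The vacuum energy scale thus falls as one moves into regions of stronger time dilation, which is the qualitative behavior the equivalence argument requires.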

It is not correct to say that energy and information are interdefinable so that if energy is a
conserved quantity, then information is also a conserved quantity. A simple
counterexample suffices here. It is possible for transitions to occur, within a gas, say,
where both the entropy and the energy of the gas are conserved, even though the different
configurations between which the transitions occur may be thought of as representing
different quantities of information so that information is not itself conserved. The notions
of energy and entropy are separable from the notion of information because the former
are only definable with respect to a closed system of a finite number of distinct state
space configurations while the latter is always defined with respect to something outside
the system in which its coded configuration is defined. It is not possible for one thing to
represent another unless there be at least two distinct levels of description available to the
system within which the representation is to be constructed. If we waive the requirement
of an "external" observer who is to give different meanings to the different
configurations, then information and energy are not interdefinable.
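The counterexample can be made concrete with a toy model in which "energy" is just a sum of per-symbol energies: two configurations with identical symbol counts then have identical energy and identical Shannon entropy, yet are different configurations and so can encode different messages. The bit strings and per-symbol energies below are illustrative assumptions.

```python
from collections import Counter
from math import log2

def shannon_entropy(config: str) -> float:
    """Entropy (bits/symbol) of the empirical symbol distribution."""
    counts = Counter(config)
    n = len(config)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def toy_energy(config: str, per_symbol: dict[str, float]) -> float:
    """Toy model: total energy is a sum of per-symbol energies, so any
    rearrangement of the same symbols conserves energy."""
    return sum(per_symbol[s] for s in config)

ENERGIES = {"0": 0.0, "1": 1.0}
a, b = "00110101", "10101010"   # same symbol counts, different configurations
# a and b tie on energy and on entropy, yet differ as messages: conserving
# energy and entropy does not by itself conserve information.
```

The transition a -> b is exactly the kind the passage describes: invisible to energy and entropy, visible only to an external reader who assigns the configurations meanings.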

Another reason for not equating the two, i.e., energy and information, is on account of the
existence of energy degeneracy. Since different wavefunctions may possess the same
associated energy eigenfunctions, it should be possible for a quantum mechanical system
possessing energy degeneracy to undergo arbitrary transitions from one degenerate
eigenfunction to another without these changes being associated with changes in any
definable QM observables.
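The degeneracy argument can be sketched with the simplest possible case, a two-fold degenerate level: any rotation among the degenerate eigenstates yields a different wavefunction with exactly the same energy expectation. A minimal linear-algebra sketch, assuming an illustrative diagonal 2x2 Hamiltonian with equal eigenvalues:

```python
import math

E0 = 3.0                      # the shared (degenerate) energy eigenvalue, arbitrary units
H = [[E0, 0.0], [0.0, E0]]    # 2x2 Hamiltonian with a two-fold degenerate level

def energy_expectation(state: list[float]) -> float:
    """<psi|H|psi> for a real, normalized 2-component state."""
    hpsi = [H[0][0] * state[0] + H[0][1] * state[1],
            H[1][0] * state[0] + H[1][1] * state[1]]
    return state[0] * hpsi[0] + state[1] * hpsi[1]

psi_a = [1.0, 0.0]                          # one degenerate eigenstate
theta = 0.7                                 # arbitrary rotation angle
psi_b = [math.cos(theta), math.sin(theta)]  # a rotated, distinct degenerate state
# The "transition" psi_a -> psi_b changes the wavefunction but leaves every
# energy measurement untouched: a change with no associated energy observable.
```

This is the energy-degeneracy point in miniature: distinct wavefunctions, indistinguishable to the energy observable.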
07/98


I have found that under certain unusual sleeping conditions, such as the presence of bright
lights, chemical stimulants (caffeine or a nicotine patch), after having previously taken a
standard dose of melatonin, and especially after having returned to sleep just after being
briefly awakened, that I am able to experience dreams in which I become aware that I am,
in fact, merely dreaming and that none of what I am witnessing and doing is real but is
something which I am manufacturing out of my own consciousness, including myself as
one of the participants in the action. But precisely when this happens, when my
consciousness of my role as creator of all I survey becomes total, the events I am
witnessing begin to lose their independent character and the whole scene begins to
quickly dissolve whereupon I immediately awaken. It is as though my simultaneous
existence as subject and object is outlawed as paradoxical, except in the partial sense of
my being a mere participating "character" in the unfolding action. This is perhaps
because the "currents" which power the dream become like the tributaries of a river
whose flow must quell upon reaching the level of the river's source; as long as the "self"
which is creating the dream-phenomena (the source) is "higher" than the "self" which
participates in it on equal par with the other "participants" (the tributaries) the action
continues in a natural and uninterrupted manner. In the fleeting moments of near total
consciousness, before the dreamscape has had adequate time to disintegrate, it usually
occurs to me to try to do something, which is considered impossible in waking life. I
may cause various objects around me to leap into the air or explode or I might even move
myself to hover in the air and fly about. But usually there is barely time even to begin to
try out my new "powers" before I am forced to awaken. Overwhelmingly, such
miraculous acts as these I freely perform in the normal state of dreaming consciousness in
which I have not the slightest clue that I am doing anything extraordinary.

Jesus Christ informs us that were we to have a mustard seed's faith we would be able to
hurl mountains into the sea. As Christians we cannot help but take him at his word on
this. Perhaps here we are now able to glimpse a fundamental difference between
certainty and what we call faith. Usually Christians equate the two in some fashion, but
compare such common exhortations as "You cannot be certain of this, but you must
have faith." If ordinary parlance can clarify the point I want to make, it can also obscure it:
" I'm not as certain of this as you" vs. "He has more faith than I." That is, we commonly
speak both of degrees of certainty as well as degrees of faith. Yet in the most literal
sense faith is something which indeed admits of degree; certainty, on the other hand,
doesn't - you are either certain about something or you are not, in which case, you are
entertaining doubt. But if faith and certainty are different in the way I describe, then one
cannot have faith and have certainty at one and the same time. But we just said that if we
are not certain then we must in some way entertain doubt. Faith can exist in the face of
doubt, but it takes a faith, which is completely free from all doubt for us to budge
mountains from their very foundations. We cannot, of course, have certainty that any
thing in particular shall come to pass unless we have the type of assurance which only
God himself provides in the form of Scripture or Revelation. Can we distinguish Jesus
and God the Father in the following manner: Jesus is the incarnate, humble God; we
might say that the Angels and ourselves witness his glory in Heaven, though it is not
experienced by Himself personally. God the Father is Himself not a humble Being, having
never experienced the humiliation which Christ experienced for us on the Cross, but He
experiences this Glory in an intimately personal way which we shall never be able to
imagine; by meditating upon it, however, we may learn to marvel at the mere idea of it.
Jesus says that we must come to him as little children. What this means is that our belief
in Him is based on our faith and has nothing to do with our intellects and so what we call
certainty is not here the relevant quantity. We should not therefore envy those who seem
to possess intellectual certainty concerning their Christian beliefs as still more clever
arguments to the opposite side may be waiting just around the corner. And one is in a
Catch-22 situation here because if one possesses the headstrong pride to resist the
seductive and highbrowed attacks upon one's own intellectualized beliefs, then one holds
these beliefs for the wrong reasons. On the other hand, if one gives way to "superior"
argument, all is lost; if indeed anything worthwhile was risked in the first place. Faith
can exist in the face of the gravest of doubts, cf. "Oh Lord, please give me faith now in
my moment of doubt."

Intellectual certitude is toppled by even a slight suspicion. It is characterized by
ignorance of the subtler complications at still deeper levels and dismay is perhaps
experienced when they are suddenly uncovered, threatening what were thought to be pat
arguments. However, a Christian who has tried his hand at intellectualizing his beliefs but
who has now learned to base them on faith, probably retains some respect for their
intellectual dimension although he now trusts that the hidden complications must
ultimately work out in his favor. The Sophists of ancient Greece enjoyed indulging in
disputation and debate for its own sake alone and used to pride themselves for being able
to make the worse side of an argument seem the better one. Although these kinds of
people are still around today, intellectual justifications need not even be attacked to lose
their quality of conviction as they have a natural half-life which is on the whole quite
short; the newness of discovering or inventing them must soon enough begin to wear off.
The Christian beliefs which they were thought to support are now weakened although the
intellectual vanity which they are really in service to merely goes on to still subtler
or fancier arguments. Christian apologists must realize that part of what motivates them
to put pen to paper is identical to that which motivates any writer, that is, intellectual
vanity and pride. It is an inevitable fact that if one were ever to become conscious of not
being a proud person one would at that very same time become proud about it. If I work
a miracle I do so in complete humility; as soon as I become conscious of myself as the
author of this great sign the conduit of my power becomes squelched. The faith of the
mustard seed which allows me to hurl the mountain is intimately connected with my
conviction that it is not myself which is the source of this terrific power, but God
Himself. This complete humility which is inextricably one with the faith necessary to
perform miracles can only exist if it is not conscious of itself. As we know a miracle
involves the suspension of natural law which normally prevents it. We also know that
natural law can only be formulated concerning objects or processes which are in principle
reproducible as reproducibility depends in turn upon what is called reducibility. This is
why we refer to deterministic laws describing reproducible phenomena as reductionistic.
Reductionistic laws are "closed" form expressions of physical relationships which is to
say they are mathematical. "Closed" because the expression contains in an abstract way
everything that can possibly occur between the terms related to one another by the
expression - all that is needed are the initial and boundary conditions ; nothing is able to
come in from "outside," as it were, to disrupt or complicate the lattice-work of physical
relationships.

@$
And this is just the point. In interacting with a truly unique entity or system I am not
able to independently vary the circumstances in which I observe this system unless I am
capable of distinguishing what are the dependent from what are the independent variables
and to succeed in doing this I must be able to vary these experimental circumstances in a
way which can be disentangled from the resulting changes in the behavior of the system
itself as opposed to the conditions of the experimental setting. I can do this if the
previous state of the system is made not to carry over and influence the experiment at a
later time when the system is in its new state, cf.
au=
Rupert Sheldrake,
cit=
The Presence
of the Past. Essentially this can only happen if the new state of the system is just the state
of some different but identical system, so that these states become decoupled from each
other, thus allowing the mutual relationships among the dependent (internal) variables to
be distinguished from the relationships of the dependent variables to the independent
(external) variables, which can now in this way be manipulated independently. So
"closedness" and "openness" of a system are not merely convenient descriptive labels but
quite aptly describe genuine topological relations between the system variables. Should
there exist variables which can be varied independently but which we are not able to
control, then usually this is a symptom of over-complexity of the system which is best
overcome by the defining of various "random" variables which are then treated
statistically. However, this leaves the possibility of independent variables which are
neither random nor ones which we are able to control. These would be undefinable or
unknown variables and would represent influences upon our system which issue from some
hidden region "outside" the space-time continuum. Naturally, such systems will not be
reductionistic and hence they are not reproducible but instead, are unique indeed. Every
act which issues from the Will of God is necessarily a mystery to all but Himself. But the
difference is that a good apologist knows how hopeless it is to completely subdue this
monster and so exploits it in the service of a lofty purpose: that of hopefully turning the
intellectual vanity of others against their unbelief for just long enough so that something
of real convicting power might be given its chance to work, which is to say, God's grace.
By a natural extension of this analogy we may easily illustrate the major difference
between what might be termed the Eastern versus Western view of God and Man as well
as their relation to one another. The Eastern view of this relationship seems to be that of
a dream in which one of the participants is secretly the dreamer himself, and that even
though the dream eventually passes it is soon enough replaced by yet another in which this
special participant assumes some other, possibly quite different, form and perhaps even
recalls events and participants from some previous dream.

Although reincarnation is usually associated with this kind of conception it is not really
consistent with the full-blooded Eastern view that the individual soul is just an illusion; if
I am, on account of this doctrine, not really different from you or anybody else for that
matter, then neither am I any more like one particular person who lived at any time in the
past than I am like any of his millions of contemporaries unless, of course, I secretly
subscribe to the more or less theistic view that there is, indeed, such a thing as an
individual immortal soul. If I were to claim that I am the reincarnation of Thales, the
founder of Greek philosophy, then you may deduce from this that I could not then be the
reincarnation of Lao Tzu, the founder of Taoism, as he and Thales were contemporaries;
but this in turn implies that the distinction between Thales and Lao Tzu cannot secretly
be an illusion, and this is contra hyp. Salvation for the Easterner
is effected by the shedding of this illusion of individuality; the paltry and minuscule
particle of being returns to its source which is Being itself. Here the Ego is identified
with self-consciousness. On the Western, theistic view, on the other hand, the Ego and
the individual Soul are indeed two distinct entities, the Ego being a false aggrandizement
of one's true individuality (the Soul), alienating it from the Divine Will and known as Sin
which is Pride. Salvation for Christianity is effected by the removal of Sin (Pride) with
the result that the individual Will and the Divine Will become One. We should consider
in this connection the puzzling statement in the book of Ecclesiastes, that the spirit
returns to the Father who made it. (
August 2012
This is apparently without regard to the
ultimate fate of the soul). Another important distinction here is the manner in which
Salvation is effected. For the Easterner, Salvation can be achieved through the
individual's own efforts, through the practice of various types of self-discipline, and
concerns itself not so much with seeking a desirable state in the Afterlife as with the
avoidance of suffering in an endless cycle of Death and Rebirth to which one is confined
through the influence of Karma.

@$
Perception is largely governed by expectations conditioned by previous experience.
Memory is selective. The tale grows in the re-telling. The most we can mean by
"objective reality" is intersubjective reality. Language is incapable of conveying actual
substance, or content, but can only convey distinctions and similarities at multiple levels
of abstraction. Human suggestibility is always a factor in the seeming immediacy of
interpersonal communication. The commonality of language and environment structure
creates the illusion that our perceptions of this environment are similar, if not identical,
to those of our neighbors. The vagueness and confusion, if not the outright
contradictoriness, of our conceptions and beliefs is never fully revealed to the individual
in the absence of problem-solving applications. The courtesy and civility of strangers and
acquaintances is frequently mistaken for genuine caring and regard for oneself and one's
interests. The continual appearance of so-called "coincidences," which make life seem
more intimately related to the individual than it is in reality, is largely an artifact of the
filtering of sensory/perceptual data in the light of recent learning, still reverberating
within the individual's subconscious. Altruism is an investment in the future caring and
kindness extended to oneself. Originality is frequently a combination of sloppy
scholarship and an imperfect memory. Superstition is a metaphorical, inductive
generalization upon coincidentally connected events.

epi=
We do not change. We simply embark upon new self-deceptions.

Are all signals secretly meaningful, none intrinsically noisy?
July 2011
Or is it just that one
man's noise is another man's signal? There is no clear demarcation between data,
information, and executive function signaling in the brain, as these are all quantities at
different levels of abstraction, each possessing the same underlying biophysical reality as
the others, or so we suppose. The probability of a given complex signal belonging to two
or more distinct contexts should be rendered small by virtue of the complexity potential
of the signal carrier code addressing a much greater number of contexts than are
actually possible.
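The low probability of a complex signal fitting two contexts can be sketched with a toy calculation (my own illustration, not from the text; the bit-width and context count are assumed round numbers): if a carrier code of n bits can address 2^n contexts but only a small fraction are ever in use, the chance that a given signal is also valid in a second context is vanishingly small.

```python
# Toy model: each context in use claims one codeword out of 2**code_bits,
# so a random complex signal collides with a second context with probability
# (contexts in use) / (contexts addressable).
def collision_probability(code_bits: int, contexts_in_use: int) -> float:
    """Chance that a random signal of `code_bits` bits also belongs
    to one of `contexts_in_use` other contexts."""
    return contexts_in_use / float(2 ** code_bits)

# A 64-bit carrier code with only a million contexts actually in use:
p = collision_probability(64, 10**6)
assert p < 1e-12  # effectively negligible
```

The point of the sketch is only that addressability vastly exceeding actual use is what keeps cross-context ambiguity improbable.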
October 2011 @$
There may be a necessity of making a further distinction
between data and information in the light of recent quantum entanglement and
teleportation experiments, and in light of Hawking's notion of chronology protection. We
perhaps need to distinguish between controllable, intentional information and
information merely as "interpreted data". Thought experiments involving quantum
entanglement-based communication between two or more quantum computers are what
suggests the need for this additional distinction. Since the gimmick of "parallel
universes" is the favorite mechanism of choice among science fiction writers for the obviating
of time paradoxes, it might be worthwhile to seek one or more of the essential
components of chronology protection in gedanken experiments connected with quantum
superposition, specifically in the observational/experimental logic pointed up by "delayed
choice", "quantum nondemolition", and "quantum eraser" experiments. There are several
distinct stages in any causal chain of events that potentially lead to a "time paradox"
where one could insert a "no-go" theorem in order to head off its formation. An event
horizon of a black hole is an example of chronology protection. This is because the
formation of a "naked singularity", which would otherwise be permitted, would enable
backward timelike trajectories through spacetime. But causality is also violated by any
freely willed act. The causal paradoxes involved in free will are most likely buffered and
defused through the probabilistic quantum structure of spacetime, in particular the
superposition principle and the principle of the complementarity of incompatible
observables, i.e., the fact that these paired observables are related through Fourier series
and inverse Fourier series expansions. (
July 2012
Parallel multiverse selves act freely
without engendering causal paradoxes – this is a corollary to the principle of chronology
protection and to what should be understood as a kind of "containment principle".)
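The Fourier relation between complementary observables invoked above can be illustrated numerically (a sketch of my own, using a Gaussian wavepacket; the widths and grid are arbitrary choices): a packet made narrow in one domain is necessarily broad in the conjugate domain.

```python
import numpy as np

# Complementary observables are related by Fourier transforms, so the
# spread of a packet in time and the spread of its spectrum trade off.
def spectral_width(sigma_t: float, n: int = 4096, dt: float = 0.01) -> float:
    """RMS width of the power spectrum of a Gaussian packet of width sigma_t."""
    t = (np.arange(n) - n / 2) * dt
    packet = np.exp(-t**2 / (2 * sigma_t**2))   # Gaussian in time
    spectrum = np.abs(np.fft.fft(packet))**2     # its power spectrum
    freqs = np.fft.fftfreq(n, d=dt)
    return float(np.sqrt(np.sum(freqs**2 * spectrum) / np.sum(spectrum)))

narrow, wide = spectral_width(0.05), spectral_width(0.5)
assert narrow > wide  # tighter in time implies broader in frequency
```

This reciprocal spreading is the formal core of the complementarity of incompatible observables the text appeals to.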
Turning to the theory of the correlation-dissipation of fluctuations in the vacuum's energy
within the finer "interior structure" of manifest Heisenberg energy uncertainty, i.e., the
"hidden variables" of quantum uncertainty, we may profitably seek to define each stage in
the causal chain of a "closed spacetime loop" in its act of establishing itself, where nature
could reliably and opportunistically intervene. The development of a standing wave
structure within a superluminal signal wave packet allows us to make signal phase
velocity effectively dependent upon the wavepacket group velocity instead of vice versa
(the usual case), provided that the wavepacket is itself built up from the mutual
interference of "retarded" and "advanced" waves, which are in turn dependent upon the
initial and boundary conditions. This presupposes the notion that wavepackets, which
travel with group velocity can be treated as "phase waves", cf. signals propagating with
imaginary momentum during "quantum tunneling". But this is subject to the special
conditions for wavepacket lifetime in relation to the wavepacket's Heisenberg energy
uncertainty, i.e., simply that ΔE·Δt ≥ ħ. The distinction between the measurability of real
particles and fields on the one hand and the immeasurability of virtual particles and fields
on the other (except in statistical terms) should be of service here. Again, at each crucial
step in the would-be formation of a closed spacetime loop (which, by the way, seems
necessarily to invoke a topology of two-dimensional time), there should be opportunity for a
presumed chronology protection mechanism to intervene. Note that if causality is really
underpinned in higher dimensional time, i.e., 5d, 6d, etc. spacetime, then closed time
loops in 4d spacetime are no real threat to causality, but merely present the appearance of
such to creatures not possessing the perspectival advantage afforded by intersubjectively
mediated theories of causality within these higher spacetime dimensions. The final stage
at which a chronology protection mechanism could reasonably intervene actually is
composed of three distinct stages, namely the following: that of the formation of the
observer's and experimenter's ideas, that of the reliable recording, and that of the rational
application and implementation of these ideas. We would of course like "chronology
protection" to possess an underlying mechanism, but unfortunately, "mechanism" by its
very nature is essentially part of that which a chronology protection mechanism is there
to protect. This suggests that the chronology protection "mechanism" is to be sought
within recursive structures, which would have to possess no beginning in time (to exempt
them from needing to be "chronology protected"). It is starting to look like we shall be
forced to seek chronology protection at some "transhuman" level, e.g., Laszlo's Akashic
Record, Hawking's Chronology Protection Agency, the big-headed alien "programmers"
"outside" the Ancestor Simulation, and so on. Because of the fundamental limitations
imposed upon automation, i.e., "mechanism", by Gödel's and Turing's theorems, and the
suspected immunity of consciousness to the theoretical limitations imposed by these
theorems (probably because consciousness itself is the theory builder), we shall have to
take our lead from
au=
Chomsky: the most crucial step in the state censorship of ideas
does not occur "at the printing press", but at the very root of the process for the operation
of an otherwise free press, that is, at the level of "the formation of ideas", i.e., at the level
of "consciousness". We may ultimately seek the master mechanism of chronology
protection at the highest possible level of correlation of quantum fluctuations in the sense
of "higher" versus "lower" information processing, or at the boundary between reducible
and irreducible complexity within the quantum vacuum, namely at the Planck energy.
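As a minimal numerical sketch of the energy-time bound quoted earlier in this passage, ΔE·Δt ≥ ħ (the 1 eV figure below is my own illustrative choice, not the author's):

```python
# The energy-time uncertainty relation fixes a minimum lifetime for a
# wavepacket of given energy spread: the smallest Δt satisfying ΔE·Δt ≥ ħ.
HBAR = 1.054571817e-34  # reduced Planck constant, J·s (CODATA)

def min_lifetime(delta_E_joules: float) -> float:
    """Smallest Δt compatible with ΔE·Δt ≥ ħ for the given ΔE."""
    return HBAR / delta_E_joules

# Example: a 1 eV energy uncertainty permits lifetimes no shorter
# than a few hundred attoseconds.
dt = min_lifetime(1.602176634e-19)  # 1 eV in joules
assert 6e-16 < dt < 7e-16
```

The bound is why the text treats wavepacket lifetime and energy uncertainty as two sides of the same condition.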

December 2011 epi=
God didn't build the Heisenberg Uncertainty Principle into the very
structure of the Universe to grace Mankind with the elbow room befitting a free will, but
so as to spare Himself from the need to attend to the myriad details of His creation.

November 1996
What psychologists, philosophers, and theologians have referred to for
millennia as "consciousness" is something apart from any structures of thought which
might "inhere" in it and is apart from any particular sensory, perceptual, or conceptual
modalities of this consciousness. This is related to Hume's complaint that whenever he
tried to perceive what is called the Self, he only could seize upon some particular
sensation or perception or other.

July 1997
The fundamental process by which the objects of perception are generally
constituted cannot, of course, ever itself be an object of perception. Stating this in
quantum mechanical terms, quantum observables cannot constitute or be used to describe
what consciousness is in its essence, but are necessarily mere manifestations of
consciousness. Now the quantum state of a system is what creates the possibility of
observables. Consciousness, therefore, must be intimately associated with the creation of
such quantum states, but not through the mere temporal evolution of pre-existing
quantum states. Consciousness must be ultimately mediated through discontinuous
changes in boundary conditions, the primary example of which is the instituting of
boundary conditions upon the fundamental quantum (vacuum) field from outside
spacetime. This is essentially what the process of informing means. "The question is
actually quite simple in nature. Is consciousness a primordial aspect of existence, or is
consciousness somehow derivative of the information that is quantized out of the void of
empty space? Does consciousness somehow arise from the way information is coherently
organized, or is its nature primordial, an aspect of the existence of the void? Is the void
the true nature of consciousness?" "…It is possible to prove that it is impossible for
consciousness to arise from the way information is coherently organized, no matter how
complex that organization. If the only logical alternative is proven to be false, then in
effect we have proven the only true proposition, which is that the void is the nature of
consciousness."
See Non-duality: A Scientific Perspective, cf. www.nonduality.com/publications.htm

And so, information, unlike energy, cannot be understood in terms of the operation of any
continuity equation. The action of consciousness is inherently discontinuous with respect
to any particular system of abstract forms. This type of action necessarily involves
violation of any conservation laws and so the action of consciousness is inherently
asymmetrical and irreversible, and, moreover its dynamic cannot be understood in terms
of quantizable physical parameters/variables. An operation, such as the searching of a
memory cache, does not constitute what is called thought, but is merely the seeking of
thought for a connection which will merely enable its manifestation or expression in
terms of a representation which is not unique, as would be appropriate to the operation of
a code, and is always completed through a kind of open-ended complementing of the
cipher medium through the interpretive action of its ground. This complementing is not
logical, but translogical.
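For contrast, the local continuity equation that does hold for a conserved quantity such as energy or charge (and which, on the view above, cannot hold for information) takes the standard form:

```latex
% Continuity equation for a conserved density \rho with flux \mathbf{J}:
% any decrease of \rho in a region is exactly accounted for by flow
% across its boundary -- nothing appears or vanishes discontinuously.
\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{J} = 0
```

The text's claim is precisely that no such equation governs informing, since the action of consciousness is discontinuous with respect to any fixed system of forms.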

Consciousness, as understood by these thinkers, if it indeed "exists," as opposed to
possessing being, must be something forever prior to and underlying all these modalities
or structurings of it. Consciousness is not something which is merely common to all
of these, for if it were, it would be something which transcends individual experience, since
it would be common to the experiences of any and all possible or actual persons; but the most
that one can possibly mean by the term "consciousness" is one's own individual consciousness. The only way
we can actually possess a genuine intuition of consciousness is if the distinction between
different individual consciousnesses is merely like the distinction between a given
individual's consciousness at various particular times within that individual's biography
but with the merely incidental difference that the structurings/ modalities of one
individual consciousness are not cross-referenced to those of another individual by virtue
of a more general modality which includes them - itself less general than consciousness at
large.

April 1997
In other words, if consciousness possesses being as a particular among
particulars, it may only possess an objective meaning if it possesses an intersubjective
meaning. But according to what has just been observed, various self-consciousnesses
must be merely different structurings of a single consciousness and so consciousness
therefore must have a being which transcends any particularity, any particular form of
itself. Consciousness as such is therefore nondual, transcendent of Form and so must be
constituted from beyond the "field of time." Consciousness is constituted within Eternity
and so qualifies as the ground of existence. This is pure Consciousness.

It is just that the many different structurings of consciousness at large do not completely
cohere, but form numerous relatively disjoint domains marking the different individual
consciousnesses. Consciousness as such, on this view, cannot be understood by an
individual consciousness which is merely a transitory, finite structuring of this fundamental
consciousness. Here again we see our principle of the inability of the stream (the
individual's stream of consciousness, in this case) to rise to the level of its source
(consciousness at large). It will not be possible for computer science or brain physiology
or whatever to come to an adequate theoretical understanding of the process which gives
a particular individual the gift (or curse) of the individual consciousness that he happens
to possess unless any of these sciences can somehow ground the explanation of his
mental processes in the consciousness originating utterly from outside him qua
individual. One must, in other words, first understand consciousness as such before
moving on to an explanation of the individual's own particular consciousness. But this
means that science will not be able to arrive at a general theory of consciousness through
induction from individual cases. So application of the scientific method is barred from
treating the problem of consciousness as such. Nothing that the individual is capable of
perceiving could count as the essential feature which makes both his consciousness and
that of others examples of consciousness as such. Suchness is inherently
transcendental. Negative Karma is the result of the individual's choices made in
previous lifetimes while the equivalent to this in Christianity is Sin which is inherited by
all human beings at birth and cannot be overcome through the individual's efforts alone
but only through the active intervention of God himself in His immanent form as Jesus
Christ. Although in a historical sense the person of Jesus is the incarnation of God the
Father in human form, in no way does He become absorbed or lose His identity upon
being assumed into Heaven, but is Himself unique and eternally part of the Divine Trinity.
We know that the individual soul does not return to the great Ocean of Being at each
interval between incarnations as the identity of the individual is utterly blotted out
whenever this occurs.

Otherwise there would be no definite guarantee that one's eventual escape from the
Wheel of Life, through Spiritual Enlightenment, would be indeed a permanent one since
obviously whatever process started the chain of incarnation in the first place might well
again give rise to one's very self-same soul, as it would customarily do between each
incarnation, reinitiating the whole process; if one insists that the original process is
unique and hence unrepeatable, then one is implying that each person's individual identity
is really unique and therefore not in the least illusory and furthermore that neither is our
separation from the Godhead an illusion, but the very real and potentially alarming
circumstance of our Earthly existence.

September 1992
Leibniz writes in 1714, "Moreover, it must be avowed that perception and
what depends upon it cannot possibly be explained by mechanical reasons, that is, by
figure and movement. Supposing that there be a machine, the structure of which
produced thinking, feeling, and perceiving; imagine this machine enlarged but preserving
the same proportions, so that you might enter it as if it were a mill. This being supposed,
you might enter its inside; but what would you observe there? Nothing but parts which
push and move each other, and never anything that could explain perception." The
philosopher David Cole argues to disarm Leibniz' "mill" argument in the following way,
"Blow up a tiny drop of water until it is the size of a mill. Now the H₂O molecules are as
big as the plastic models of H₂O molecules in a chemistry class. You could walk
through the water droplet and never see anything wet." Cole's point is that we all know
that an individual water molecule is "wet" and so our inability to see its wetness from the
inside, as it were, doesn't prove that it's not really wet; similarly, a machine might be
conscious even though this consciousness is invisible from every perspective - except the
machine's, of course. There are a number of serious objections to Cole's refutation of
Leibniz' mill argument. Firstly, it is doubtful whether the property of wetness may be
attributed to a single water molecule in the first place because wetness can be seen to be
an essentially collective phenomenon. The following example will serve to explain. In
the modern version of Robert Millikan's oil-drop experiment, wherein he demonstrated
that the charge of the electron is quantized, oil drops, being somewhat messy and
inconvenient, have today been replaced by tiny silicon spheres. These little spheres are
so diminutive (about 0.001 millimeters in diameter) that they are not visible to the naked eye,
nor are they palpable in the hand, unless there are quite a large number of them
assembled together, in which case they "coalesce" together to form a little pool in the
palm. This pool is indistinguishable from one made of water except that there is a slight
difference in the perceived viscosity of the "fluid" which seems a little on the cloudy side.
The point here is that this collection of tiny spheres feels wet; and there is no question
whatever that a tiny sphere of silicon, blown up to the size of one of Cole's overgrown
H₂O molecules, simply could not possess the property or quality of wetness.

But what is all this thought to prove? Well, in Cole's case, a person hearing his
argument experiences a contradiction of sorts between his expectation that size (which is
owing to external perspective) shouldn't make any difference in the wetness (intrinsic
property) of a water molecule, and his vague, visually based intuition that "if I can see the
damned parts of the thing, then, well, how can it really be wet?" In the case of the tiny
spheres of the "oil-drop" experiment, we know that wetness is an emergent collective
property which depends upon our not being able to distinguish the individual spheres,
either visually or in tactile manner. But the reader should not think that I have simply
done Cole's argument one better; for a quality which arises in perception strictly through
our inability to perceive what is part of the object "out there" cannot be thought to be a
quality of the object itself, but is supplied by the perceiving mind. But there is an even
more serious objection to Cole here. A hypothetical set of conditions, relating to mere
appearance, cannot be of any support to an argument if the would-be physical state of
affairs, to which the appearance corresponds, is itself impossible - however clearly one
"intuits" its possibility. This observation is very much operative in the case of Cole's
overgrown H₂O molecule: to conceive of the water molecule as increased in size by a
factor of roughly 10 billion is equivalent to shrinking Planck's constant by the square of
this factor, or by a factor of about 10^20; this is because the constant contains
units of length squared.
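The scaling claim can be checked with a few lines of arithmetic (a sketch of my own; the molecular diameter and mill size are assumed round numbers, and the Bohr-radius scaling is the standard a₀ ∝ ħ² dependence, not something stated in the text):

```python
# Enlarging a ~0.3 nm molecule to a ~3 m "mill" is a linear factor of ~1e10.
# Since Planck's constant carries units of length squared, matching that
# change of scale amounts to shrinking h by the factor squared, ~1e20.
molecule_m = 3e-10          # rough H2O diameter (assumed)
mill_m = 3.0                # rough size of a walkable mill (assumed)
scale = mill_m / molecule_m
h_shrink = scale ** 2
assert abs(h_shrink / 1e20 - 1.0) < 1e-9

# The Bohr radius a0 = 4*pi*eps0*hbar**2 / (m_e * e**2) goes as hbar**2,
# so with h shrunk by ~1e20 the electron orbitals would collapse toward
# the nuclei by a factor of ~1e40 -- no covalent bonds, no "wetness".
a0_shrink = h_shrink ** 2
assert abs(a0_shrink / 1e40 - 1.0) < 1e-9
```

This is the quantitative sense in which the enlarged-molecule intuition describes a physically impossible state of affairs.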

But as any theoretical physicist will tell you, the energy uncertainty (via
Heisenberg's principle) is proportional to this constant, and this energy uncertainty is
responsible for the fluctuations in vacuum energy that "pump up" the electronic structure
of all atoms. With the value of h so diminished, all of the electrons within the H₂O
molecule will collapse into their respective nuclei, making the water molecule's covalent
bond structure, and so its intrinsic "wetness", flatly impossible. Since all matter particles
are created and sustained through the dynamical action of the fluctuating vacuum energy,
these particles exist merely as abstractions from the vacuum structure, with the quantum
vacuum providing the unified natural physical law for the cosmos, so we might expect
that all peculiar phenomena which emerge across the interface between the microscopic
and macroscopic domains (of Reality) are due to the fundamental vacuum dynamism; and
this goes for the emergence of "mind" within the developing human brain, as it has been
found that individual cortical neurons are capable of responding to the presence of
vacuum electromagnetic field fluctuations. But if a unified law of the vacuum exists, it
must be of a transcendental nature since the vacuum constitutes an open system; any
statement of physical law for such a vacuum must necessarily leave something out of its
scope. A physical law for the vacuum is always for the vacuum plus certain
superimposed boundary and initial conditions. Infinite perturbative levels of description
of the vacuum's Hamiltonian are possible. It is perhaps more than a strange coincidence
that our inability to distinguish, one from another, the discrete energy states of the H₂O
molecule, reflected in this molecule's quantum energy uncertainty, is responsible for its
quality of "wetness," just as in the case of the tiny silicon spheres, which, through our
ignorance of their individuality, coalesced to form a pool of wetness. To continue what
may amount to the unfair importation of facts into a philosophical discussion, we will
continue in our vein of drawing lessons from quantum theory so as to further examine the
possibility of a "thinking machine." Another way of looking at Leibniz' mill argument
against a conscious computing device is in terms of the blueprint containing the design
from which the would-be computing device is constructed. We should ask ourselves this
important question, "How does a computing device, while it is engaged in the very act of
"thinking," constitute a better embodiment of mental activity than does the network of ink
scratches on paper which constitute its blueprint?" "Well," we might say, "the ink
scratches are just sitting there on the paper, they're not doing anything - still less could
they be thinking!" You see, somehow we have the intuition that something which is
moving or undergoing change possesses a better chance of having thought than
something which isn't moving. Partly this is just the influence of that residue of animistic
thinking which we have inherited from our primitive ancestors, but partly, again, it is, I
believe, a case of our seeing, however vaguely, into the heart of a problem which, it
seems to us, involves an elusive peculiarity of consciousness' relation to time.

Now, if a computer can simulate the human thought process so well that the
simulation becomes the reality, then what is there, in principle, to prevent a human being,
if only a particularly gifted one, from simulating the functioning of a computing device -
by simply being given a problem and then tracing out with his eye the relevant wire and
circuit element symbols on the blueprint so as to produce the correct answer? Now such
a hypothetical human being would not need to know how to solve the problem posed to
him, but he need only know how to read the circuit diagram describing the computer
whose functioning he is simulating. Now suppose that this talented human were to
simulate the computer's act of imagining a red sphere against a background of sky-blue.
To do this the human must read the right series, in the right sequence, of markings on the
computer's logic diagram; does it matter how fast the series is read off? - that is a
question which we will have cause to examine later. But if we postulate that
consciousness is a necessary component in the faculty of recognition - a faculty very
much involved in the human's act of reading a stream of symbols - then it appears we
require consciousness (that of the human) to get consciousness ( that of the computing
device): even if the recognition required to read or interpret the symbols of the circuit
diagram doesn't itself require consciousness, it nevertheless requires the utilization of
circuitry (and plenty of it!) not described anywhere on the blueprint itself because
presumably this part of the computer's blueprint has to be read as well and so we're
landed in a viciously circular, infinite regress. But suppose, all the same, we tried to
construct the blueprint in this manner, representing at each progressively tinier level of
calligraphic scale, the details of the blueprint at one particular level, which were to
provide the instructions for reading the blueprint at the next higher scale. At some level
along the downward spiraling hierarchy of spatial scale we would be working, whether
with micro-rapidograph or submicroscopic etching tool, at a scale where the particle
behavior of matter (as collections of independently existing "things") gives way to its
wavelike behavior. It is at this scale where the tiny subdiagrams which we try to etch
into the paper are subject to the seemingly random fluctuations of the vacuum
electromagnetic field energy so that the tiny etchings themselves fluctuate at this level.

It is clear that, however these sub-sub-etc.-diagram etchings fluctuate, they do so
in a way which still manages to capture the design of our computing device. It now
appears that any attempt to produce a static description of the computing system
architecture, i.e., its blueprint, results in a dynamic fluidity in the structure of the physical
realization of this description at the level of scale marking the approximate boundary
between particle and wavelike behavior of matter.
July 2011
The ultimate context for the
atomic scale circuitry of our proposed conscious computing device must be the quantum
vacuum electromagnetic field, itself the original source of Heisenberg momentum-energy
uncertainty. The vacuum fields provide the context for the computer‘s circuitry, which
itself provides the initial and boundary conditions upon the vacuum fields‘ self-
interaction. This circuitry acts passively to allow the vacuum to reprocess its own
preexistent quantum entanglements (containing preexistent quantum entanglement-
encoded information).

Dear Greg,

I'd like to share with you some thoughts I've had since we last spoke in person
concerning the fundamental limitations of artificial intelligence, i.e., "why computers
can't think." One feature that we can point to straight away as being necessary to the
possession of consciousness in a cybernetic system is memory. Memory is what makes
James' "stream of consciousness" the Heraclitean river into which one cannot step twice.
Henri Bergson has pointed out in his book, Time and Free Will, that a genuine
recurrence of a thought or feeling is impossible if it is suffused with the memory of its
having occurred at some earlier time – this peculiarly self-referential quality of the
memory could not have formed part of the texture of the thought as it originally occurred:
if the same notion occurs to one at points in one‘s life widely separated in time, the
context will be significantly altered between each such occurrence so that this notion as it
appears in this new context must stand in a metaphorical relationship to itself in its
originating context – that is, if the latter thought is to constitute the recollection of the
former. Paradoxically enough, if the second occurrence of an idea were really identical
to what is was at its first occurrence, then the example of having remembered something,
since there could be no memory must be an abstract feature of an experiential field which
is itself temporally integrated. The process of the temporal integration of experience is a
process which itself must occur in a time sequence. We might ask the question, What is
the purpose of experiential, i.e., phenomena which formerly bodied forth within the
consciousness stream, if they are never again recollected?, or rather, never can again be
recollected? How could these very experiential phenomena ever have formed part of the
consciousness-stream, itself essentially characterized by the unique property of temporal
integration or wholeness, if they are never referred to further on ―down the stream?‖
The answer, or something fundamentally akin to an answer to this question, must be that
all experience is in some sense remembered experience, that each and every experience
which one can point to as it "bodies forth" in the stream of individual consciousness
contains within itself densely-packed myriad references to analogues of itself in earlier
"incarnations." Conscious experience itself, in other words, but makes its appearance
within the stream as already temporally integrated. Another theory is that there are
myriad different but interacting selves connected with the normal functioning of a human
brain, each with its own information frequency bandwidth, with the range of frequencies
peculiar to each being associated with its own scale of temporal integralness or
wholeness. One of the major functions of the faculty of attention may be the switching
of consciousness between different bandwidths associated with distinct experiential
temporal scales.
July 2011
The brain stimulation experiments of Libet, in which the cortical
area of the hand must be electrically stimulated for 500 ms before a
primary evoked potential (EP) associated with conscious perception of stimulation of the
hand develops, combine with the fact that stimulation of the hand
several hundred milliseconds after cortical stimulation of the corresponding area had
already begun was always perceived as occurring hundreds of milliseconds earlier than
the direct cortical stimulation. Libet interpreted this as meaning that an
approximately 500 ms time delay or time buffer is needed in order to effect the necessary
temporal integration of conscious sense-perception.
Also, the backwards-in-time
referral of the stimulation event by up to 500 ms is a phenomenon (one might say
epiphenomenon) of this temporal integration as well as being tangible proof of the
temporal multidimensionality of conscious experience and the subjective, projective
nature of the perceived unidirectionality of so-called objective time. Along the lines of
the "Boltzmann Brain" idea, the quantum vacuum is supposed to possess a latent
recursive structure, one that includes not only perceptions and memories of experiences,
but also memories of memories of experiences, etc.
July 2012
"If one has an equilibrium
state that lasts an infinite time, fluctuations around equilibrium can lead to any state
whatever popping out of the vacuum just as a statistical fluctuation, with emergence of a
local arrow of time. This leads to Poincaré's Eternal return (any state whatever that has
occurred will eventually recur) and the Boltzmann Brain scenario: you can explain the
existence of Boltzmann brain not as a result of evolution but just as an eventual inevitable
result of statistical fluctuations if an infinite amount of time is available ([12]:201-227)";
cf. http://www.mth.uct.ac.za/~ellis/Quantum_arrowoftime_gfre.pdf
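The recurrence claim in the quoted passage can be illustrated with a toy model: an ergodic random walk on a finite state space (a drastic, finite stand-in for the infinite-time fluctuation argument) eventually revisits its starting state. The ring size, step bound, and seed below are arbitrary illustrative choices, not anything from the text.

```python
import random

# Toy analogue of eternal return: a nearest-neighbor random walk on a ring
# of 8 states is recurrent, so it eventually revisits the state it started from.
random.seed(0)  # arbitrary seed, for reproducibility only
NUM_STATES = 8
current = 0
first_return = None
for step in range(1, 100_000):
    # Move one step left or right around the ring with equal probability
    current = random.choice([(current - 1) % NUM_STATES,
                             (current + 1) % NUM_STATES])
    if current == 0:
        first_return = step
        break
print(first_return)
```

In the quoted argument the analogue of the ring is an unbounded state space explored over infinite time; the toy merely shows why "any state whatever that has occurred will eventually recur" holds for recurrent dynamics.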

Large bandwidths associated with large information frequency ranges would be in turn
associated with large integral time scales in which richer, more meaningful and nuanced
experiences would be possible than on smaller time scales.

The commonsensical way of comparing these different scales experientially is in terms of
the concept of retentive memory. A new anti-anxiety drug much used in dental offices
and perhaps more popular among both dentists and their patients than the old standby,
i.e., laughing gas, is the drug Ativan. Ativan succeeds as an anti-anxiety agent due to its
peculiar property of contracting the anxious patient’s retentive memory span. Pain
experienced by the dental patient during surgical procedures is eased through the use of
this drug, not by an actual reduction of the intensity of the pain itself, but through a
reduction of the patient’s anxiety about the pain he is experiencing.

The temporal window within which the patient is constrained to experience what is
happening to him while he is under the influence of the drug is simply too contracted to
contain temporally unwieldy thoughts such as, "Oh no, he's going to hit the root," or, "I
wonder if it's going to get worse?!" In retrospect, my experience of my own state of
consciousness while under the drug's influence was so like how I imagine my childhood
consciousness to have been, although paradoxically, I was much more afraid of the
dentist then than I am presently!

This suggests an altogether new way of understanding the function of the brain as the
"organ of consciousness." Instead of building up conscious experience from some
determinate set of primitive elements, e.g., sense data, the brain abstracts from a sea of
information signals (eternally preexistent, as we will argue later), in much the same
manner, I believe, that a single strand of DNA extracts from the surrounding nucleotide
soup the complementary bases it needs to duplicate itself. The brain acts as a kind of
template for thought in the sense that the activity of the brain as a whole serves as a frame
within which that portion of the activity of the signal matrix, e.g., the quantum vacuum
electromagnetic field, which is complementary to the particular brain excitation pattern,
is, as it were, defined through "resonating", i.e., interacting, with a peculiar spectrum of
vacuum electromagnetic field fluctuations; cf. an old Google cache of a HeartMath
Institute webpage detailing experiments in which DNA electromagnetic signatures were
recorded in the vacuum electromagnetic fields of resonant quantum cavities (circa 1995).

Certainly the more a task is repeated, the more the series of actions constituting this task
converges to a rigidly inflexible sequence of acts: less and less, therefore, does the
sequence of neural discharges underlying its outward performance admit modification
due to the presence of the constantly varying web of cerebral electrochemical events
taking place within the totality of contemporaneous brain discharges. The neural
sequence which forms the biophysical basis of a task memorized by rote becomes more
and more impervious to the influence of what is occurring within the brain as a whole,
i.e., progressively more insensitive to the neurochemical context; and to precisely the
extent to which the neural sequence becomes impervious to outside influences, to this
very extent it sinks into the milieu of unconscious mental processes.

So memory on the part of cybernetic systems, generally speaking, makes it impossible for
their mental states, i.e., their computational states, to be reproducible in time. It can also be shown
that memory prevents a cybernetic machine‘s computational states from being
reproducible in space, i.e., they cannot be copied. One of the functions of consciousness
may be the inscribing of patterns of sensory data into memory for use in the interpretation
of patterns of sensory stimulation occurring in future – for establishing the context of
these future events, which have not been experienced consciously.

But the converse of this is also true, namely, that the faculty of memory is essential for
the existence of conscious states of awareness for it is only through the placing of sensory
stimuli into a context formed in light of previous experience that these new stimuli might
be categorized, that patterns of stimuli might be recognized in terms of the presence of
some object or objects, in other words, that sensory experience might possess the
property of intentionality.

This is one of the essential characteristics of historical change which distinguishes it
primarily from deterministic change. Historical change possesses temporal integrity
because the series of events/processes comprising it tells a story, or potentially does so,
by referring to a subject or a set of related subjects. In a deterministic series of
events, in a very real sense, nothing new ever happens because all of the information
about the entire series already exists at the time of the first event in the series. ^^(look at
fringe effects due to boundary and initial conditions – if the series has a beginning, then it
can't be completely deterministic – there is a powerful theorem lurking here, I suspect! –
probably just Gibbs' Theorem or an adaptation/lemma associated with Gibbs' Theorem
though)^^ Time is spatialized in such a series because the series is infinitely divisible; it
is utterly without the property of integral wholeness, contained within a single instant of
time which sweeps along a line of finite length. (Paradoxically, infinite divisibility is
intimately connected with a holographic topology, which is to say an important variety of
integral wholeness.) Events within a historical series, however, do not assume a
recognizable identity unless something is known about some of the events leading up to
them as well as concerning events succeeding in their wake. Moreover, the identity and
meaning of historical changes shift in light of increasing knowledge of the history of
the events themselves. This is to say, historical events are always partially indeterminate
with respect to the revelation of future historical change; furthermore, indeterminacy
forms the very ground of temporality in that all changes of state are ultimately predicated
upon it. This assertion is borne out quite literally by quantum theory through the
statement of its time-energy uncertainty principle first put forward by Werner
Heisenberg, one of the theory‘s early founders. One of the major implications of this
quantum principle is that the transitoriness of all quantum processes is directly
proportional to the size of the energy uncertainty of the quantum mechanical system in
which these processes are taking place. The magnitude of this energy uncertainty is
directly related to the degree by which the system violates the energy conservation
principle of classical physics.
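The time-energy relation invoked here can be made concrete with a back-of-the-envelope calculation; the 1 MeV figure below is an arbitrary illustrative choice, not from the text. The larger the energy "borrowed" against the Heisenberg uncertainty, the shorter the interval over which the classical conservation principle can appear violated.

```python
# Heuristic time-energy uncertainty: dE * dt >= hbar / 2, so a fluctuation
# of energy dE can persist for at most roughly dt ~ hbar / (2 * dE).
hbar = 1.054571817e-34        # reduced Planck constant, J*s (CODATA)
eV = 1.602176634e-19          # joules per electronvolt

dE = 1.0e6 * eV               # a 1 MeV energy fluctuation (assumed example)
dt_max = hbar / (2 * dE)      # maximum duration of the fluctuation
print(dt_max)                 # on the order of 3e-22 seconds
```

This matches the claim in the text: the transitoriness of a quantum process grows (its duration shrinks) in direct proportion to the size of the energy uncertainty.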

Information is characterized by the appearance of something new, or the notification that
something new has occurred – itself the occurrence of something new. Information and
energy differ from one another in the sense that the very same spatiotemporal series of
inputs of energy into an energy system may occur again and again, but the analogue to
this situation for data and informational systems is not definable: here repeated data
inputs to an informational system do not constitute two identical series of
information inputs to the system, and, again, this is due to the possession of memory by
any informational system worthy of the name.
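The asymmetry claimed here between energy inputs and information inputs can be sketched in code: because the receiving system keeps a memory, feeding it the very same series of data a second time does not reproduce the original informational event. The class and the novelty measure below are illustrative inventions, not anything from the text.

```python
class InformationalSystem:
    """Toy system whose response to a datum depends on its memory."""

    def __init__(self):
        self.memory = []

    def receive(self, datum):
        # Model a datum's informational role by its novelty: how many
        # times the system has already seen it.
        seen_before = self.memory.count(datum)
        self.memory.append(datum)
        return (datum, seen_before)

sys_ = InformationalSystem()
first_series = [sys_.receive(x) for x in "ab"]
second_series = [sys_.receive(x) for x in "ab"]
# The identical input series "ab" yields distinct informational events:
print(first_series)   # [('a', 0), ('b', 0)]
print(second_series)  # [('a', 1), ('b', 1)]
```

An energy system has no analogue of `self.memory`: the same sequence of energy inputs produces the same effect each time, which is exactly the contrast the paragraph draws.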

This follows from the indeterminate's two basic properties: 1) the negation of the
indeterminate is itself also indeterminate so that it contains within itself a contradiction
which it transcends and from which anything at all may follow, and 2) the indeterminate,
not existing in any particular state at any particular time, must be forever undergoing all
manner of fluctuations. To wit, the indeterminate is the eternal excluded middle – the
void through which everything passes as it changes from what it is into what it is not. It
must be clear that when A changes into B, it does so by passing through a state which is
neither A nor B, and so there must be that which is necessarily neither/nor in general, i.e.,
the indeterminate. Intentionality fulfills the function of an underlying fundamental
substance, or subject of change. The physical is characterized by wholes being
determined through merely the sum of their parts; while the mental is characterized by
parts being determined by wholes. This definition of the distinction of mental and
physical is very similar to that provided by Schopenhauer.

These unconscious mental processes are probably logical or computational in their
essential nature and only alert the conscious mind when a recursive tangle between
several different levels of description takes place, for what is called recursion cannot be
adequately dealt with by any logical/deductive system whatever possessing at least
the deductive power of simple arithmetic. A process becomes automatic to precisely the
degree that it occurs without the aid of conscious guidance or reflection. This is owing to
each successive occurrence being less and less connected to the situational context, more
and more connected internally to its immediately previous occurrence.

Another characteristic of conscious thought which is essential to it is what John Searle
calls, in his book Minds, Brains and Science, intentionality. A thought possesses
intentionality if it is "about something." In other words, thoughts, to be of the conscious
variety, must be transcendent in the same sense in which ink scratches on paper in a
language known to the person reading them, i.e., the interpreter of these scratches,
transcend them as physical tokens. The ink scratches on paper which constitute the
circuit diagram of a so-called supercomputer do not appear to be transcendent, at least as
far as the computer itself is concerned, as the behavior of the computer is simply
isomorphic, i.e., runs parallel to, the structure embodied in the diagram. Any sort of
causal interaction between the physical embodiment of the diagram, i.e., the computer's
"hardware," and any program it might be carrying out, i.e., its "software," disrupts this
nice isomorphism and would constitute transcendence by the computer of its program
together with the physical embodiment of its circuit diagram, its hardware. The
distinction hardware/software significantly parallels the distinction (more fundamental
for our purposes), that of energy/information.
October 2011
But any computer program is
always implemented within a causal context; however, the fluctuation-correlation
structure of the underlying quantum vacuum does not significantly contribute to this
context until a delicacy, fineness and subtlety of operation is reached approaching that of
energies comparable in size to quantum fluctuations, which are in turn comparable in size
to the Heisenberg energy uncertainty of the computer's central processor in its capacity
as a quantum mechanical system.

Although the context of thought is forever changing, we do not find ourselves lost in a
bewildering phantasmagorical world of endless metamorphosis. The human mind is able
to utilize notions, in their original occurrence as insights, in ever newly-arising contexts.
Some philosophers of mind style mind as the metaphor of all metaphors. The stability
of the Self within the stream of consciousness is heavily dependent upon its facile use of
metaphor. Metaphor, however, is only the application of categories of thought in one
context which have been borrowed from another. Needless to say, these categories had
to be at some point created ab initio through the more general process of abstraction, or,
the formation of abstract categories. The process of abstraction always involves the
treatment of certain details in which things differ as unimportant so that other features
may be foregrounded and grouped together within the same set or class. The creation of
a system of such categories sets the stage for the recognition of a cybernetic system.
There will always be multiple ways of categorizing the data which are continually
streaming into the sensory apparatus of the system; no hard and fast rule or set of rules
may be worked out ahead of time to prevent the emergence of an ambiguous collection of
data, and so the necessity is always at hand of deciding how the data will be interpreted,
and this may only be accomplished through metaphor or through the defining of new
broader or narrower categories with which one structures the ambiguous data. Note that
the information content of data is always open-ended and contingent. Abstraction
requires first of all the capacity of the cybernetic system to define for itself what is to be
considered relevant and what is not. Relevance, however, may only be established in the
light of some previously determined aim or purpose. Purposes are always defined in
service to the larger or broader aim which is in view. The broader the purpose which
one is pursuing, the greater the scope which one has in satisfactorily fulfilling it. For
human beings, this broadest purpose, the instinct which we share with the rest of
biological creation, is simply the ever-recurring goal of physical survival.

If the Copenhagen interpretation of quantum mechanics is essentially correct, i.e., where
the wavefunction is a probability wave representing the state of an observer's knowledge
so that it is indeed the consciousness of the observer which is responsible for collapsing
the wavefunction and not the physical disturbance to the wavefunction provoked by his
measuring device, then it should be possible to carry out a "delayed choice" type
experiment: a standard two-slit interference setup is constructed where two video
cameras are substituted for two conscious observers, one "viewing" both slits (camera A)
and another (camera B) "viewing" the backstop where either an interference pattern or a
random "buckshot" pattern of photon strikes appears. If this experiment is performed in
the absence of a human observer and then afterwards, perhaps years later, the films in the
backs of cameras A and B are examined, it will be found that the order in which the
cameras are opened and their film emulsions examined will make a difference in whether
the film from camera B contains recorded on it either an interference pattern of photon
waves or a "buckshot" pattern of photon "bullets." In other words, if the film in camera
A is examined first, then an observer possesses knowledge as to which slit each photon
passed through, so that the wavefunction of the paramagnetic particles coating the surface
of the film emulsion in camera B undergoes a collapse from the previous superposition
state, which would have yielded an interference pattern, to a positional eigenstate yielding
the "buckshot" pattern of photon "strikes." On the other hand, if the back of camera B is opened up first
and its film developed and examined, then one finds that an interference pattern has been
recorded on the film. But what now for the film in the back of camera A which had been
set up to "view" and record events at the double-slit? Should not the series of images
recorded on this film be smeared out just enough to prevent us from telling which
photons traveled through which slits? If this is the case, then the images stored on the
film of camera B may be used to tell us whether camera A in fact did or did not record
the "actual paths" taken by the photons, though the double-slit superposition state
associated with the photon interference pattern does not require any unique and
mysterious influence of human consciousness upon the results of the experiment, but
amounts to nothing more than the effect of camera A in blocking the "pilot waves"
traveling through the slits through which the photons are observed not to be traveling.
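The difference between the two film records can be mimicked numerically: without which-path information one adds the complex amplitudes from the two slits and then squares; with which-path information one adds the squared amplitudes. The geometry and wavenumber below are arbitrary assumed parameters in dimensionless units, not measurements from any actual apparatus.

```python
import numpy as np

# Screen positions and schematic geometry (assumed, dimensionless units)
x = np.linspace(-10, 10, 1001)
k = 20.0    # wavenumber
d = 1.0     # slit half-separation
L = 50.0    # slit-to-screen distance

# Path lengths from each slit to each screen position
r1 = np.sqrt(L**2 + (x - d)**2)
r2 = np.sqrt(L**2 + (x + d)**2)
a1 = np.exp(1j * k * r1)
a2 = np.exp(1j * k * r2)

# No which-path record: amplitudes add, then square -> fringes
fringes = np.abs(a1 + a2)**2

# Which-path record available: probabilities add -> featureless "buckshot"
buckshot = np.abs(a1)**2 + np.abs(a2)**2

print(fringes.min(), fringes.max())    # oscillates between near 0 and 4
print(buckshot.min(), buckshot.max())  # flat: 2 everywhere
```

The two formulas differ only in where the squaring happens, which is the whole mathematical content of "collapsing the superposition" in this toy picture.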

March 1997
This preposterously counter-intuitive thought experiment can be defused if one
requires that merely the possibility of an observer gaining knowledge about which slit the
electrons went through would be sufficient to collapse the electron position
wavefunctions so as to produce the "buckshot" pattern of electron strikes on the
phosphorescent backstop. This is actually what has been demonstrated by several
ingenious "delayed-choice" experiments performed during the 1990s.
And it is the position of the camera relative to the slits which, of course, determines this.
October 2011
As an aside, consider that the question of whether or not neutrinos can be
observed with superluminal velocities may be dependent upon experimental set up, which
is in accordance with the logic of the observer's role in interpreting the results of a two-
slit experiment. If true, this would mean that the logic of superposition and wavefunction
collapse serves the vital role of "chronology protection." Note that chronology
protection is not any concern when one is speaking of the subjective information or
incommunicable knowledge of the individual. Any information that cannot be translated
into intersubjective communication must avoid the stringent restrictions of chronology
protection. I say "must" in this connection because of the "fecundity principle" of
quantum mechanics (first uttered by Feynman): anything not forbidden by the laws
of quantum mechanics happens. Since the information contained in energy structures
smaller than the Heisenberg energy uncertainty, ΔE, of the system cannot be
communicated locally – only being able to accompany a state undergoing quantum
teleportation, it follows that the physical processes constitutive of consciousness
(assuming consciousness is not "non-physical") are of a piece with the fundamental
processes of virtual particles, fields and their reactions. If consciousness is to be
considered "non-physical", then the distinction "physical/non-physical" lines up with
the parallel distinction "real/virtual" of quantum field theory.
January 2012
Einstein's rod
and clock convention could perhaps be recast in terms of neutrinos as the "relativity
yardstick" instead of photons. If the neutrino maximum speed could be pinned down to
just a tiny fraction of a percent faster than the speed of light, a tiny fraction which only
results in a correspondingly tiny fraction of a percent error in all heretofore well
established measurable relativistic corrections, e.g., with respect to time, mass, length,
etc., then the theory of special relativity could be retained more or less intact. It would be
just as though Einstein had started his gedanken experiments using the neutrino instead
of the light ray (photon) such that special relativity remains valid as a physical principle.
This of course poses problems for the myriad physical analogue alternative
interpretations of relativity, e.g., Desiato's polarizable vacuum model. The momentum
and energy deficits in particle interaction calculations that formerly relied on a photon-
based special relativity might then point to the existence of heretofore undetected (and
unsuspected) theoretical particles of extremely low mass, which would now be necessary
in order to reestablish an exact balance of, for example, nuclear particle scattering and
reaction equations.

But what if we could assure nature, as it were, that despite the appropriate positioning of
the camera in front of the double-slit, the observer, or any observer, would be unable to
take advantage of the appropriate physical arrangement of the camera in order to
determine which slit the electrons go through? I believe that, in this case, the interference
pattern would, again, reappear on the phosphorescent backstop! If this were the case,
then the observer would regain his mysterious status with respect to wavefunction
collapse; cf. Vic Stenger, The Myth of Quantum Consciousness. One must, to wit,
assure, first of all, that it is possible to establish a closed system within which the
experimental apparatus is to be contained, in order to, in turn, assure that, no matter how
large a physical arena this quantum experiment is performed in, the observer will not
possess the possibility of knowing the trajectories of the electrons. Not all closed
experimental situations can assure this; rather, a closing off of the experimental
setup from the rest of an open reality must be achievable to assure the inability of the
observer to draw on hidden resources to divine the trajectories of the electrons from their
source to the backstop. (Does this mean we must here be able to isolate the system,
which is the subject of our experiment from its embedding quantum vacuum so as to
effectively separate the system from the observer's brain, which is also "embedded" in
the context of this same "quantum vacuum"?) This suggests that the observer's ability to
collapse the wavefunction consists in a peculiar connection which he is able to make with
an open-ended reality, a reality which, as alluded to earlier, is therefore indeterminate,
i.e., nondeterministic. It is interesting to note that it is only within a closed physical
system, where the boundary conditions of the vacuum field are changing only
adiabatically, that a superposition state may be supposed to exist. Presumably, the closed
system cannot adequately accommodate the phenomenon of the observer's consciousness,
which is what disturbs the system, resulting in a collapse of the superposition state which
heretofore existed within it, and this, just by virtue of the mere possibility that the
observer may obtain knowledge of the system's state with respect to the superposition
observables.
October 2011
So whether or not a system is "closed" or "open" is of material
importance to the question of whether what the system "contains" is context-free data or
context-dependent information. An important question is whether the deciding difference
here is to be determined exclusively through some fundamental difference in the
correlational structure of the system's "fluctuation matrix", e.g., recursive vs.
nonrecursive, etc.

Throughout this discussion, we must not lose sight of the fact that the wavefunction itself
does not actually represent anything physically real or measurable, and so all purported
interactions occurring between wavefunctions must be realized in terms of the interaction
of their associated probability density functions. (Caveat: there is growing experimental
evidence at the time of this writing, October 2011, that the wavefunction is measurable
and so constitutes a real physical entity.) For example, Aharonov experimentally proved
the reality of the magnetic vector potential A by measuring changes in the quantum
phase of the electron within a region where the magnetic field B was absent. So if A is
identified with the wavefunction of the photon, this proves the reality of Psi.

A superposition state is only defined where each of the component superposed
wavefunctions has an associated probability via the square of its amplitude, although here
the assignment of unique probabilities to either the interference pattern or the "buckshot"
pattern is determined, on the Copenhagen interpretation, solely by the decision of the
conscious observer as to which camera, A or B, he/she opens first. Remember that in the
theory of quantum mechanics a particular event only possesses a probability of 1 if it has
already occurred. It is in this sense in which we speak of the superposition state as a
combination of quantum states, no one of which is real in itself.
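The point that no component of a superposition carries a probability of 1 until a measurement is made can be illustrated with the Born rule. The two-state vectors below are a generic toy, not a model of the camera experiment: probabilities are squared amplitudes, and which probabilities obtain depends on the basis in which the state is interrogated.

```python
import numpy as np

# An equal superposition |psi> = (|0> + |1>) / sqrt(2)
psi = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule: probabilities are squared amplitudes, and are basis-dependent
p_basis01 = np.abs(psi)**2                  # [0.5, 0.5] in the |0>,|1> basis

plus = np.array([1.0, 1.0]) / np.sqrt(2)    # rotated basis vector |+>
minus = np.array([1.0, -1.0]) / np.sqrt(2)  # rotated basis vector |->
p_plus = abs(np.vdot(plus, psi))**2         # 1.0: |psi> is an eigenstate here
p_minus = abs(np.vdot(minus, psi))**2       # 0.0

print(p_basis01, p_plus, p_minus)
```

In the |0>,|1> basis neither component is real "in itself" (each has probability 1/2); only in the rotated basis, i.e., relative to a different measurement question, does a probability of 1 appear.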

The pre-Socratic philosopher Parmenides was of that philosophical tradition which
considered the ultimate metaphysical question to be "Why is there something rather than
nothing?" And he is noted for having proclaimed "Nothing does not exist." But he
considered that all real change necessarily involved the instant by instant creation of new
attributes ex nihilo. Parmenides concluded from this that change was, itself, impossible:
being cannot come from nonbeing, therefore the universe is a static and indestructible
closed system; time was for Parmenides a kind of tenacious illusion.
Time would later be characterized similarly by Einstein. In the present day, owing to
the advent and development of the Quantum Theory, the suggested reformulation of this
most fundamental metaphysical question is: "Why is there Information rather than
Chaos?" For those persons for whom the question, "why is there something rather than
nothing," is meaningful, belief in the existence of a transcendent reality beyond space and
time, and what is more, beyond the most general dichotomy, the dual opposite categories
of existence vs. nonexistence, makes the granting of the being of Deity theoretically but a
small step. Such persons merely have to be convinced of the necessity of Will within the
realm beyond Representation, cf. Schopenhauer, The World as Will and Representation.
For other persons, this most fundamental of metaphysical questions is, as Martin Gardner
puts it, "cognitively meaningless."
September 2011
As the late 20th Century philosopher Robert Nozick pointed out in his Philosophical
Explanations, "The question cuts so deep…that any approach that stands a chance of
yielding an answer will look extremely weird. Someone who proposes a non-strange
answer shows he didn't understand the question" [italics mine]. As logicians are fond of
saying, "Everything follows from a contradiction." If "nothing" and "everything" are
veritable dual-opposite categories, then logic tells us that everything in between these two
extremes stems from the interrelation or interactivity of the two. In a sense, it is
"nothing" which gives "everything" its ontological status. On the information paradigm
of existence, "nothing" corresponds to chaos, while "everything" corresponds to pure,
self-existent information, and anything in between would seem to depend upon both. In
a twist of Clarke's principle, i.e., that "any sufficiently advanced technology would be
indistinguishable from magic", we could say that any sufficiently information-dense
structure would be indistinguishable from chaos. Here we are relying on the rather
speculative intuitive notion that encryption is necessary to achieve optimal information
"packing densities". The most efficient form of encryption is likely to invoke
self-referentiality or incursiveness of some kind. We propose that the reductio ad
absurdum of self-referentiality is represented by consciousness. Consciousness
represents the ultimate exemplar of "hard encryption". The doctrines of the
incorrigibility of sense data and of Russell's "privileged access" to the contents of
consciousness are consistent with the notion that the hard encryption represented by
consciousness is "incorrigible" and the contents to which it grants access are
"privileged." Substance is indifferent to the passage of time and so must be bound up in
the phenomenon of emergence through substance's ability to transcend all possible
historical accounts of a process. Just think of Leibniz's monads here as "possessing no
windows". The quality or qualities of substance, being that it/they universally
pervade(s) all things that exist, would forever be free of being intersubjectively
identified or classified. Classification depends upon some things or states being the case
that fall outside a given class's purview. The substance that all contents of a given
individual consciousness must be supposed to possess can never be identified or
classified by that individual; moreover, on account of the hard encryption represented by
consciousness, neither shall any other individual be able to identify or classify the
substance that makes up that individual's subjective contents of consciousness. Note
that there could potentially be an indefinite number of distinct differences between the
consciousnesses of an unlimited number of individual minds, both real and possible,
opening up the possibility of an ever greater number of abstract categories and relations
involving these individual consciousnesses. And yet all of this taxonomic knowledge is
forever totally inaccessible, except to a transcendental being. It was not without very
good reason that Alvin Plantinga devoted an entire book to the topic of God and Other
Minds.
July 2012
Unfortunately, one cannot find a free copy of Plantinga's 1967 book, and he only cuts to
the chase and states his real argument in the last 3 or 4 pages. The rest of the book is
groundwork examining all of the arguments in favor of God's existence and why these
arguments fail. Plantinga's own argument does not fail, once one really takes to heart the
fact that one has only ever known one's own sensory states. Even if one lives forever in a
heaven or hell after this life, it will still be *just you* experiencing *your own thoughts
and sensory states*. The reality of a genuine plurality of subjective states of conscious
experience (of distinct persons) necessarily brings in a perspective transcending mere
everlastingness of an individual (particular) mind, i.e., that of a Universal Mind. In a
word, even if you live for infinite time, you cannot have proof of the existence of other
minds - this shall always remain a question of faith, a faith no weaker than that of the
deist who has faith in a Universal Mind.
April 2012
Consider ethics without any practical possibility of reciprocation and without an
underlying karmic metaphysical principle - e.g., ethics for a consciousness that is
continually branching through new and possibly only projectively extant universes.
This is an example of the hidden presumption of theism. Many other examples exist,
drawn from the philosophy of science, linguistics, art, sociology, psychology, etc.


September 2012
"Without transcendent, universal mind there is no distinction between the case
of consciousness being a one or its indeed being a many." The universal consciousness
field splits the number degeneracy, e.g., the "photon number degeneracy", of the
Boltzmann brains, which real, biological brains resonantly tune to whenever those brains
enjoy conscious states of awareness. The continuity of conscious experience necessarily
"piggybacks" off of the nonlocal connectivity of unique vacuum fluctuation frequency
spectra. And so personal identity qua substantial continuity of mind and mental states is
far more a function of vacuum nonlocality than it is a function of specific reproducible
instantaneous patterns of neural or microtubule network interaction configurations in a
given biological brain. In a word, personal identity is a function of resonant Boltzmann
brains qua nonlocally connected vacuum entropy fluctuations. And here the normal
distinction of closed versus open thermodynamic systems must be reinterpreted in light of
this nonlocal connectivity, which in some sense renders mysteriously fuzzy this otherwise
hard and fast distinction from the theory of classical thermodynamics.

At the level of individual human beings, there are myriad though mutually conflicting
points of coherence pertaining to an equally diverse number of points of view.

If naive realism is metaphysical baking powder, then atheism is just half-baked
solipsism.
April 2012
Buddhism seems to offer the only viable path bypassing the theism vs.
solipsism dual opposition. This is because Buddhism views the self as an illusion,
whether that of the individual or of God.

This reformulation constitutes, almost by itself, the answer to its precursor: the
pre-Socratic question "Why is there something rather than nothing?" is insoluble in its
demand for a relation between being and nonbeing apart from their mutual exclusiveness,
whereas the modern counterpart to this question does not at all demand from us the
impossible, as there are many examples, both empirical and mathematical, where chaotic
systems acquire order through self-organization or ordered systems become chaotic
through an increase in entropy; cf. also "deterministic chaos".
July 2011
Data may be considered to be the embodiment of information in the sense of
constituting the necessary, but not the sufficient, condition for the presence of
information. There must be a special characterization of some subset of the sum total of
necessary conditions for some state to occur or obtain, which combined with another
subset of such conditions constitutes the sufficient condition. The relationship of the two
subsets would be complementary, akin to figure-ground.
April 2012
If information is gerundial, as in informing, then there should be something akin to
proto-information. The question then is whether this implies such a process as
"proto-consciousness". It might be profitable to distinguish instructions, data, metadata,
and information. Analogies such as the Internet, webpage, hyperlink, operating system,
and cloud computing may be both helpful and limiting here, hence the ever-present need
to create new contexts, i.e., new myths.

But what, you may ask, is contained within the Quantum Theory which suggests this
reformulation? Very simply, the Quantum Theory does not treat the vacuum as a
veritable emptiness, but rather as a medium of chaotic fluctuations of positive and
negative energy which cancel each other, averaging out to zero net energy over distances
larger than an atomic diameter, say. Subatomic particles, the penultimate constituents of
matter, come into existence when energy fluctuations over a small region of the vacuum
respond to each other's presence through the accidental formation of feedback paths
among themselves. These feedback structures may remain stable for only extremely
fleeting periods of time or they may become robust and persist against their chaotic
backdrop for longer periods permitting the formation of more complex hierarchical
structures. The presence of information is the key ingredient determining if such
fluctuation networks persist against the background of quantum fields. In terms of
information theory, the vacuum is filled with an infinite number of messages crossing it
to and fro from every direction; material particles are constituted by more messages being
exchanged within this region than between this region and the "outside" of this region.
On this interpretation, matter does not respond instantaneously to accelerations
(possesses inertia) owing to a communication bottleneck existing between its interior and
the surrounding vacuum; matter cannot respond to the world in "real time," but must take
time out to "process" the coded instructions which it receives from its "inputs." One need
here only compare the ease with which a single gnat can change its direction in flight (to
avoid an obstacle, say) to the difficulties involved when an entire swarm of gnats, or a
swarm of swarms of gnats, for that matter, attempts to perform the same maneuver based
on the intelligence (in the military sense) of a small group of harbinger gnats. These
chaotic fluctuations of vacuum energy are a manifestation of the Heisenberg uncertainty
principle. This principle states a numerical relationship between the dual physical
quantities position/momentum and time/energy. The bridging constant between these
dual quantities is Planck's constant, h, and the exact expression of this relation is:

dX * dP >= h/4pi and dT * dE >= h/4pi,

which may be compared with Planck's older relation,

E = h * f;

where E is energy (joules), f is frequency (hertz), dX is the uncertainty in position
(meters), dP in momentum, dT in time (seconds) and dE in energy; h is, of course,
Planck's constant, which has units of joule-seconds. There is a more sophisticated and
complete matrix algebraic statement of the
principle, but this need not concern us here. Heisenberg's uncertainty principle is an
epistemological one as it rigidly specifies how the accuracy in our knowledge of one
physical quantity affects the accuracy of our determination of the remaining paired
quantity. Heisenberg's principle can be obtained by generalizing Planck's relation in
terms of the matrix algebraic expression:

p*q - q*p = (h/2pi*i) I .
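The position-momentum relation above can be checked numerically; a minimal sketch in Python, using standard constant values (the 1-angstrom confinement length is an illustrative assumption, not a figure from the text):

```python
import math

h = 6.62607015e-34          # Planck's constant, J*s
hbar = h / (2 * math.pi)    # reduced Planck's constant

def min_momentum_spread(delta_x):
    """Smallest dP (kg*m/s) compatible with dX * dP >= h/4pi for a given dX (m)."""
    return hbar / (2 * delta_x)

# Electron confined to roughly an atomic diameter (~1 angstrom):
dp = min_momentum_spread(1e-10)   # about 5.3e-25 kg*m/s
```

Halving the confinement region doubles the minimum momentum spread, which is the qualitative point the text relies on when it speaks of fluctuations growing at small scales.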

If consciousness is, itself, required to collapse the wavefunction, then consciousness
must originate in the interaction of uncollapsed wavefunctions. This suggests that the
wavefunctions interacting with one another within consciousness are of the "already
collapsed" variety, that is, the perceptual representations of wavefunctions all interact
based upon a subluminal propagation of mutual influence. Quantum wavefunctions
which have not yet collapsed are capable of interacting with one another at a distance
instantaneously, and this sort of phenomenon is referred to by physicists as the
Einstein-Podolsky-Rosen, or EPR, effect. There are two basic schools of the Quantum
Theory. Where they differ is in their interpretation of the status of Heisenberg's
uncertainty principle. One school maintains that this uncertainty is due merely to the
practical limitations of observation, that is, the uncertainty is only epistemological in
nature. The other school maintains that this uncertainty is a theoretical limitation, that
is, the uncertainty is ontological in character. The dispute between these two schools is
solved easily enough, however. In the 1990's, when computers have reached a relatively
high level of sophistication, it is not uncommon to encounter the opinion, among
otherwise enlightened (educated) individuals, that computers are capable of or exhibit a
kind of elementary consciousness. These are the same people who would deny without
hesitation that earlier, more primitive computers such as Babbage's Analytical Engine
(originally designed in the 1840's) or perhaps even the ENIAC (circa 1945), which
calculated artillery trajectories, are capable of anything approaching what might be
called conscious thought. This reveals an intuition that somehow sheer complexity is
the essential factor which separates the mechanical brute or automaton from the
sophisticated high-speed digital computer of today.
August 2011
Note: "sheer complexity" of deterministic computing is only important because there
exists some intrinsic threshold within those nondeterministic fields in which the
classical digital state machine is/becomes embedded, which provides context for an
otherwise meaningless, context-free affair (just as in the case of Babbage's Analytical
Engine). It is likely that this threshold lies at the boundary between the quantum and
classical worlds, which exhibit wavelike vs. particle-like behavior, respectively. There
is no context-free threshold of computing complexity at which any qualitative change in
the nature of computing is to be rationally expected. That is just magical thinking.

Even among those who flatly deny that modern high speed computers possess anything
like real intelligence or consciousness, there is the implicit assumption of sheer
complexity as the necessary magical ingredient: a revolutionary jump in switching
speed, memory capacity, architecture design - all of which are essentially functions of
increased density of miniaturized components - would undoubtedly bring about the
necessary gain in complexity, i.e., that which approaches the complexity of the human
brain itself, so that machines would acquire a kind of consciousness. Marvin Minsky -
the leading figure within the so-called hard-AI community - once designated human
brains as nothing more than "meat machines." But if there is this almost ineffable
intuition about a vital connection between complexity and consciousness, then a perhaps
even greater or deeper one is that between the notions of consciousness and freedom. So-
called hard-AI theorists such as Minsky, Dennett, and the Churchlands use analytical
arguments which miss the point in objecting to Searle, because they do not address their
criticisms to the principal thesis that he advances, namely, that the causal powers of
matter play an essential role in determining the phenomenon of consciousness and that
such causal considerations go beyond those of formal symbol manipulation. Where
these two intuitions meet and reinforce one another is when one considers a digital
computing device, say, where the packing densities of the microelectronic components
approach that of naturally occurring crystals. It is at precisely this point where we expect
to see the quantum mechanical effects described by the famous Heisenberg Uncertainty
Principle. Here despite all attempts at insulation and grounding, filtering, or rectification,
it becomes nevertheless impossible to force the device to operate according to some pre-
established blueprint of operation, i.e., program, as the fluctuating voltages and electric
currents inherently reside within the device as a consequence of the interaction of the
device's wires and circuit elements with the vacuum electromagnetic field which in its
own turn must fluctuate randomly. This random and irretrievable fluctuation in the
vacuum's electromagnetic field is due to an extension of the Uncertainty principle which
states that the electric and magnetic field strengths may not both be simultaneously
specified at any point in space - in much the same fashion in which the position and
momentum of an individual particle may not be simultaneously specified. A sharp
determination of the electric field at a point will cause a large spread of uncertainty in
measurements of the magnetic field at this point and vice versa. The fluctuating
(quantum) voltages and currents to which the circuitry of any really advanced computing
device would be subject would be utterly useless and manifest themselves as noise
signals disruptive to the normal operation of the device - unless the device could manage
to interact with these fluctuating fields.

February 1998
Clearly the spatiotemporal scale at which these quantum fluctuations take place
represents an insuperable physical barrier to the continued operation of Moore's Law -
which states that microprocessor computing power doubles every eighteen months to
two years - given, at least, no significant departure from conventional microprocessor
architectural design as it has manifested itself over the previous four or five generations
of microprocessor development.
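Moore's Law as stated here is just compound doubling; a toy sketch, with the 18-month doubling period taken as an assumed parameter:

```python
# Moore's Law as stated above: computing power doubles every eighteen
# months to two years.  The doubling period is a parameter, not a fixed fact.
def moores_law(power_0, years, doubling_months=18):
    """Projected relative computing power after `years` of steady doubling."""
    return power_0 * 2 ** (years * 12 / doubling_months)

growth = moores_law(1.0, 10)   # a decade at 18-month doublings: roughly 100x
```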
October 2011
Although in some sense spatial scale is only meaningful within the context of locality,
with respect to Penrose's "one-graviton limit" energy scale is indeed relevant.

The breakthrough in the evolution of microprocessor technology which will make
possible the continuation of Moore's Law, at the same time as it transcends it, will come
in the form of a significant paradigm shift in the relationship between computer
architecture designers and programmers and knowledge. This paradigm shift will
manifest itself in two distinct but closely related ways. The movement will take place
from a representational to a participatory basis for the communication of information and
knowledge. Instead of knowledge undergoing many transformations from information to
data to information and back again at each stage in its passing from one person to
another, knowledge will be communicated not through any physical transmission data,
but through a nonrepresentational and participatory sharing of knowledge between minds.

But this is still not enough. Our intuition that the phenomenon of consciousness is a
radically deep one pushes us to suppose that this device - however it is supposed to
function - merely sets the stage for this chaotically fluctuating vacuum field to interact
with itself - the device becomes just an intermediary, a facilitator, of a process which
must ultimately fall under the control of this energy itself. We see that our intuition
about the importance of smallness and complexity, captured in the notion of sensitivity
(to vacuum fluctuations), and our intuition concerning freedom (of vacuum energy to
self-organize) appear to intersect. There is an exact parallel between the relationship of
energy and entropy to each other and the relationship between signal bandwidth and
signal information capacity. We might liken the comparison between a conscious
(intelligent) computer and an automaton (unintelligent computer) in the following
manner: a dumb computer is searching a maze for its exit… There are a number of
respects in which the paradigm shift from a bottom-up to a top-down metaphysics may
be realized:

1) Physical processes are not "pushed up from below" by blind efficient causation, but
are "pulled up from above" by teleological causation. This may be seen through
Margenau's observation that all differential equations representing processes of blind
causation may be recast as any one of an infinite family of integral equations (depending
on initial conditions) where some physical quantity such as time, energy, distance, etc., is
minimized or maximized. Teleology, however, in its own way, presupposes the existence
of a determinate framework just as much as does classical physics; in fact, events are not
merely determined within teleological causation, but are overdetermined.

2) The vacuum is not empty, as it was conceived to be in 18th century classical physics
with solid particles caroming through it; the vacuum is, rather, a plenum, a fullness of
energy, while so-called particles are mere excitations of this vacuum medium. The energy
density of the vacuum is far greater than the energy density of the particles "occupying"
it.

3) Chaos may be reinterpreted as a thermal reservoir of virtually infinite information
content, as opposed to a condition of no information.

4) There is an empirical-theoretical spectrum with the unified theory of physics at the
theoretical end of this spectrum and pure consciousness at the opposite empirical end.
Therefore, it is just as meaningless to ask what the fundamental "constituents" of matter
posited by unified physical theory are in themselves as it is to inquire into the process by
and through which the phenomenon of consciousness originates.

Both questions are posed at the wrong extreme of the empirical-theoretical spectrum, so
that any attempt to answer them appears incoherent or self-contradictory. If the
empirical-theoretical distinction really constitutes a spectrum which exhaustively
"covers" reality, then we expect that the bootstrap explanations applied at each end of
this spectrum must somehow merge or interpenetrate.

5) The creation of material particles is not the direct conversion of energy into matter;
rather, the energy required is that needed to dissociate them from the network of
interactions in which they pre-exist. Creation is not ex nihilo, but is an abstraction of a
low level of structure from a preexisting dynamic whole of virtually infinite (maximal)
complexity. Each act of abstraction, however, is founded on negations performed within
a predefined whole, which is itself a form of abstraction of a higher order than mere
negation - negation being an operation which presupposes the ability to partition a
system into disjoint and complementary halves. This setting up of such a system,
decomposable into complementary partitions, is the higher-order abstraction which
cannot be understood as being based in mere negation within a larger system. The
transformation of elements within a particular system of representation through
expansion of the context grounding the representational elements is a kind of
transformation which cannot be explained in causal or merely rational terms.

6) Consciousness is channeled, structured, limited, and abstracted by the functioning of
the human brain; it is not produced through the brain's action. The brain acts, per the
Bergson-James-Lange theory, as a kind of reducing valve.

7) Gravitational time dilation, rather than being an effect of a gravitational field, may be
an essential part of the physical vacuum mechanism by which matter produces a
gravitational field.

8) Rather than conservation of four-momentum being deduced from the theory of special
relativity, conservation of four-momentum is the very foundation upon which the edifice
of special relativity is built. What is referred to as locality is the sum of physical
processes governed via the strong coupling mediated through the exchange of energy
between particles possessing an energy greater than the energy uncertainty of the
quantum mechanical system within which these energy exchanges are occurring. What is
called nonlocality is the sum of physical interactions governed via the weak coupling
mediated through the exchange of energy between particles possessing an energy smaller
than the energy uncertainty of the quantum mechanical system within which these
"weak" energy exchanges are occurring.

The presence of real photons is evidence that at some point in spacetime a
fermion made an energy transition which was triggered either by bombarding real
photons or by the action of the vacuum electromagnetic field, i.e., spontaneous
emission. Of course, both processes must be invoked again, repeatedly, to explain the
existence of the bombarding photons. This infinite regress converges in the sense that at
progressively earlier moments we find the vacuum electromagnetic field in an ever more
compressed state, and as the process of spontaneous emission tends to outstrip that of
stimulated emission at high frequencies, the explanation for the decay of excited
fermionic states is found to lie exclusively with the action of the vacuum
electromagnetic field.
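The claim that spontaneous emission outstrips stimulated emission at high frequencies can be checked numerically: for a thermal radiation field, Einstein's coefficients give the ratio of the two rates as exp(hf/kT) - 1. The temperature and frequencies below are illustrative values, not figures from the text:

```python
import math

h = 6.62607015e-34   # Planck's constant, J*s
k = 1.380649e-23     # Boltzmann's constant, J/K

def spont_over_stim(f, T):
    """Ratio of spontaneous to stimulated emission rates in a thermal field."""
    return math.exp(h * f / (k * T)) - 1.0

# At T = 300 K, optical transitions are overwhelmingly spontaneous,
# while radio-frequency transitions are dominated by stimulated emission:
optical = spont_over_stim(5e14, 300.0)   # enormously greater than 1
radio = spont_over_stim(1e9, 300.0)      # far smaller than 1
```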


06/98

The reason for the momentum fluctuation spectrum of an electron contained within a
quantum well being identical to the spectrum of possible discrete energy transitions
between quantum well energy levels may be on account of the following simple
observation. Such transitions downward by a real electron are stimulated to occur by
either real or virtual photons, while such transitions upward by a virtual electron are
likewise stimulated to occur by either a real or a virtual photon, and the spectrum of such
virtual photons represents that of the vacuum electromagnetic waves with which the
bound electron can resonate and exchange energy. Since the photon propagates through
the vacuum part of the time as an electron/positron pair, and in a gravitational field the
density of virtual fermion/antifermion pairs is somewhat decreased, it follows that the
velocity of the photon through this modified vacuum will be correspondingly decreased.
It follows from this that the energy density of the vacuum must vary proportionally to
the cube of the local value of the speed of light within the gravitational-field-laden, and
hence modified, vacuum. This may similarly be interpreted as the energy density of the
vacuum being proportional to the inverse cube of the frequency of vacuum
electromagnetic waves. This is just the relationship of vacuum energy density to virtual
photon frequency which renders the quantum vacuum perfectly Lorentz-invariant.
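For comparison, the standard Lorentz-invariance argument for the zero-point spectrum may be stated explicitly. With a mode density of 8 * pi * f^2 / c^3 per unit volume per unit frequency (counting both polarizations) and a zero-point energy of h * f / 2 per mode, the spectral energy density of the vacuum electromagnetic field is

rho(f) = 4 * pi * h * f^3 / c^3,

and a spectrum proportional to f^3 / c^3 is the unique form left unchanged under Lorentz boosts, since Doppler shifting then merely relabels the modes without altering the spectrum as a whole.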

In Nature, Oct. 19, p. 574, the time required for quantum mechanical tunneling
of an electron across a Josephson junction was measured. This result means that there is
some meaning which can be attached to the velocity of the particle during its act of
quantum tunneling. Sudden, nonadiabatic compression of the Casimir plates should
result in the spontaneous emission of photons by the vacuum. Similarly, nonadiabatic
expansion of tightly compressed plates should result in the spontaneous absorption of
some real photons which happen to be within the geometry of the plates at this time.

NOTE: This statement may not be true, because the Einstein coefficient of spontaneous
absorption is identically zero; the coefficients of spontaneous emission and, hence, of
stimulated absorption and emission may be changed through altering the vacuum
electromagnetic energy density utilizing Casimir plates, resonant cavities, etc.
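The tunneling-time measurement mentioned above concerns a regime where the textbook rectangular-barrier formula already gives a feel for the numbers; a minimal sketch with illustrative values (not the Josephson-junction calculation itself):

```python
# Transmission through a rectangular barrier of height V0 and width L,
# for particle energy E < V0:
#   T = 1 / (1 + V0^2 sinh^2(kappa L) / (4 E (V0 - E))),
#   kappa = sqrt(2 m (V0 - E)) / hbar.
import math

hbar = 1.054571817e-34        # reduced Planck's constant, J*s
m_e = 9.1093837015e-31        # electron mass, kg
eV = 1.602176634e-19          # joules per electron-volt

def transmission(E, V0, L, m=m_e):
    """Tunneling probability for energy E (J), barrier height V0 (J), width L (m)."""
    kappa = math.sqrt(2 * m * (V0 - E)) / hbar
    s = math.sinh(kappa * L)
    return 1.0 / (1.0 + (V0**2 * s**2) / (4 * E * (V0 - E)))

# A 5 eV electron meeting a 10 eV barrier 1 nm wide tunnels only rarely:
T = transmission(5 * eV, 10 * eV, 1e-9)
```

Thinning the barrier raises the transmission probability steeply, which is why tunneling rates are so sensitive to junction geometry.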

The Universe might be described by a wavefunction representing its tunneling
through a hyperspherical barrier, in four real spatial dimensions. The quantum tunneling
of the Universe through this hyperspherical barrier may be alternately described as the
collapse of a false vacuum state and the subsequent creation of free particle
wavefunctions propagating along an imaginary axis of a four dimensional hypersphere of
3 real + 1 imaginary spatial dimensions. The probability density of this wavefunction
adjusts as time passes, reflecting the increasing uncertainty of its would-be position
eigenstate. Any vector at a point where its scalar product with the wavenumbers of the
eigenfunction expansion (of the universal wavefunction) is zero is assigned an imaginary
coefficient, reflecting its being rotated 90 degrees with respect to the wavenumber set of
the eigenfunction expansion. There was a recently announced discovery that the linear
Hubble relationship between galactic distances and recession rates does not strictly hold,
but that the recession velocities are distributed discretely with increasing distance, each
velocity being roughly an integral multiple of 72 km/sec. These observations suggest
two distinct but related possibilities.
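The 72 km/sec quantization reported above is easy to state operationally; a minimal sketch (the sample velocities are illustrative, not observational data):

```python
# A velocity obeys the reported quantization if it sits near an integer
# multiple of 72 km/s; the residual measures how far off it is.
def nearest_multiple_residual(v, step=72.0):
    """Distance (km/s) from velocity v to the nearest integer multiple of step."""
    n = round(v / step)
    return abs(v - n * step)

r = nearest_multiple_residual(288.0)   # 288 = 4 x 72, so the residual is 0
```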

One, that the initial collapse of the quantum mechanical vacuum state occurred in
discrete stages, in much the same way that an excited electron decays from a highly
excited state. Two, that the Universe tunneled, in quantum mechanical fashion, out of a
hyperspherical potential barrier where, as in the usual case, the transmission coefficient
varied sinusoidally with the wavenumber.

The vacuum electromagnetic field is said to be incompressible, but this is not strictly
true. The vacuum electromagnetic field actually appears to decrease in energy density
when confined within a resonant cavity of decreasing volume. This seems to suggest
that the energy density of the vacuum electromagnetic field is in a sense negative. We
may think of the effect of shrinking the resonant cavity upon the photons present within
this cavity in two distinct ways:

1) The photons' wavelengths are simply compressed by the cavity shrinkage factor, or

2) The zero-point of the vacuum electromagnetic field is altered by a certain fraction so
that the energies of photons within the cavity "appear" to be greater (relative to the new
zero-point) by this same fraction. Of course, the first alternative appears more intuitively
evident but embodies the simplistic assumption that the photons within the cavity possess
some permanent and abiding existence, rather than being packets of energy which are
continually being emitted (created) and absorbed (annihilated) by the fluctuating
electromagnetic vacuum field. If a photon is in a momentum eigenstate, then the
position of this photon along its translation axis is totally uncertain. We say, therefore,
that in the position representation of the photon's wavefunction the probability density
of the photon along its translation axis is everywhere vanishingly small. Consequently,
a photon or photon beam which is in a momentum eigenstate - and hence an energy
eigenstate also - does not alter the probability-versus-frequency distribution function
(along its translation axis) for virtual photons of like eigenenergy. This may be seen to
follow from the fact that an increased likelihood of finding a photon of a particular
eigenenergy within a certain spatial interval means that the probability-versus-frequency
distribution function in this region experiences a peak at the frequency corresponding to
this eigenenergy.

The rates of stimulated emission and absorption of electromagnetic radiation at a
particular frequency are proportional to the density of the ambient radiation at this
frequency. The constants of proportionality are the Einstein coefficients of emission and
absorption, respectively. It was stated earlier as a general principle that all physical
processes were mediated through the exchange of energy between matter and the
vacuum, the reservoir of energy uncertainty. This principle may be made more specific
by invoking the Einstein relationships for electromagnetic radiation emission and
absorption as the mechanism for all energy emission - absorption, that is, for all forms of
energy exchange, so that the rates at which all physical processes take place becomes
proportional to the spectral energy density of the fluctuating boson fields of the vacuum -
in accordance with our earlier intuitions. this assignment of the Einstein mechanism ( for
want of a more convenient term) for physical processes in general depends upon the
implicit assumption that in the absence of stimulated emission (and absorption) the
coefficients of spontaneous emission and absorption are identical - just as are the
coefficients of stimulated emission and absorption are identical in the absence of
spontaneous emission. But the problem here is that there really is no such thing as
spontaneous absorption - as noted before this condition would violate the principle of
energy conservation. Spontaneous emission appears to only occur to electrons which
have already been elevated to excited energy levels through stimulated absorption - in
other words the energy fluctuations of the vacuum serve merely to trigger the decay of
excited states produced through ambient electromagnetic radiation. However, this would
not be the case if spontaneous absorption applied only to energy in the form of virtual
particles. The lifetime of a virtual particle is determined by the uncertainty principle and
therefore the absorption of these particles out of the vacuum does not violate conservation
of energy. It must be observed here that the assignment of the value 0 to the coefficient
of spontaneous absorption is only required by the assumption that the energy density of
the vacuum is itself zero. A number of experiments on vacuum cavity resonance suggest
that spontaneous emission rates are suppressed by imposing boundary conditions upon
the electromagnetic vacuum. It is our deepest suspicion that the fraction by which the
emission rate is suppressed is equal to the fraction by which the density of the
electromagnetic vacuum is reduced through the imposed boundary conditions. In the
chapter on nonclassical light in the work, Light and Quantum Fluctuations, a
correspondence is drawn between the effect of a dielectric medium within a certain
region of the vacuum and the alternate introduction of specific boundary conditions upon
this vacuum, say, utilizing conducting plates, resonant cavities, etc. In this chapter it was
concluded that the fractional increase in the index of refraction is directly proportional to
the fractional increase in the electromagnetic energy density of the vacuum with the
wavenumber being also altered by this fraction but with the frequency being unaltered by
the dielectric medium so that a fractionally decreased local value of the speed of light
results.
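The Einstein rate relations invoked in this passage can be made concrete with a short numerical sketch. The relations used below (B12 = B21 for equal degeneracies, and A21/B21 = 8πhν³/c³) are the standard textbook ones; the particular frequencies and temperature are illustrative choices, not values from the text.

```python
import math

# Physical constants (SI)
h = 6.62607015e-34   # Planck constant, J s
k = 1.380649e-23     # Boltzmann constant, J/K
c = 2.99792458e8     # speed of light, m/s

def planck_density(nu, T):
    """Spectral energy density rho(nu) of thermal radiation, in J s / m^3."""
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k * T))

def stimulated_over_spontaneous(nu, T):
    """Ratio (B21 * rho) / A21, using the Einstein relation A21/B21 = 8 pi h nu^3 / c^3.

    The ratio reduces to the mean photon number per mode, 1/(exp(h nu / k T) - 1),
    so the ambient radiation density directly sets the stimulated rate.
    """
    return planck_density(nu, T) * c**3 / (8 * math.pi * h * nu**3)

# Optical transition (~500 nm) at room temperature: spontaneous emission dominates.
nu_optical = c / 500e-9
print(stimulated_over_spontaneous(nu_optical, 300))   # utterly negligible, ~1e-42

# Microwave transition (~10 GHz) at room temperature: stimulated processes dominate.
print(stimulated_over_spontaneous(10e9, 300))         # ~624
```

The crossover between the two regimes is set by hν ≈ kT, which is why the relative weight of spontaneous and stimulated processes depends so strongly on the part of the spectrum considered.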

How do we represent a trajectory despite the fact that the motion of the particle
must be continually recast in terms of a time-varying set of basis functions? This time
variation of the basis functions must contain an element of randomness, or
unpredictability since otherwise a unique unchanging basis could be found with which to
represent the motion. Distinct trajectories can only be co-represented within the same
presentational space if each and all are differing projections of a single evolving
trajectory. Each eigenfunction is related to its noncommuting spectrum of superposed
complementary eigenfunctions in the sense that figure is related to ground. The
complementary eigenfunction spectrum is a data set; the selection of one of these
eigenfunctions within the observational context constitutes the engendering of a bit of
information. The component eigenfunctions become mutually coupled provided that
their wavefunction resists alteration through external influences. The eigenfunctions are
coupled to one another if each contains at least a tiny projection along all of the other
eigenfunctions, which together with it make up their wavefunction. This is only possible
if this set of eigenfunctions contributes to the defining of the Hilbert space geometry
within which they find expression. This requires that the time evolution of the
wavefunction be nondeterministic, which is to say, nonunitary.

The information content of a given structure is determined by the degree to which
it approximates its intentional object. On this view, things are defined in terms of a
holographic contextual matrix or system. Meaning is context-dependent. Because of
this, there is a world of difference between what is called data and what is called
information. Information may be thought of as data provided with context adequate to
determine its meaning; information is processed data, while data may be conversely
thought of as uninterpreted signals. Data are overdetermined by information; information
is underdetermined by data. Data may be physically transmitted through space, but this is
not so for information. Data suggest myriad possible informational structures while
information narrows one's focus upon a tiny subset of an unlimited variety of different
possible sets of data. Recalling the beads on a string analogy, rational numbers may be
represented by a finite number of terms of a convergent infinite series which itself
represents an irrational number. This finite set of terms, gotten by truncating a
convergent infinite series, is amenable to arithmetic manipulation. This is because,
metaphorically speaking, we are able to take the beads off their finite string and re-thread
them in arbitrary order without changing the topological relationships of the beads. Not
so for an infinite number of beads on an infinite string. A finite number of beads on an
infinite string may correspond to matrices. The rearrangement of the order of the beads is
here a reversible process or procedure and so may not be thought to possess intrinsic
information.

I am fascinated by systems with a group theoretic structure. More generally, I am
intrigued by specialized language systems. Why do such systems appear to be "closed"
and yet permit the appearance within themselves of genuinely novel, or emergent,
structures? Emergence is always explainable in terms of the interpretation of such
structures within the context of larger, in fact, "open" systems. Formal symbol
manipulating systems, such as computing devices, do not admit the existence of what are
called semantic structures. Information is reduced, or de-interpreted, if you will, by one
programmer, to produce a coherent set of inputs to the computing device, and the outputs
engendered by the computational process are then re-interpreted by another (or the same)
programmer. The computational process itself, in isolation from the interpretation
process, which bounds it, is not "about anything." Structure only has meaning when it is
de-constituted back into the unbounded self-referential flux from which it originally arose
through the process of abstraction. In fact, the general procedure of composing a
computer program is itself an example par excellence of reduction or abstraction. If the
activity of the system as a whole, or, rather, as a totality, is without meaning because of
its not being embedded in some larger context, then neither are any sub processes
occurring within it meaningful. By extension, the human brain must be embedded in a
larger mediating context, which is itself completely open-ended in its possibilities -
otherwise those processes occurring within any given human brain would not be, as
alluded to earlier, "about anything."
September 2011
Such brain processes could not then
possess intentionality, which is yet another way of seeing the incompatibility of free will
and determinism, cf. Empedocles' remarks concerning the incompatibility of "atoms and
void" determinism and logical reasoning (ratiocination), cf. God and the Argument
from Mind (Chapter 13)…


Besides excluding temporal evolution by being deterministic, closed "dynamic" systems
lack temporality because, being in addition closed bound-energy systems, their energy
may only change in discrete amounts. Any finite set of data within an infinite
informational system has an unlimited number of theoretical structures, which suffice to
explain the coherence of these data. An infinite informational system is able to contain
within itself a complete symbolic representation of its own structure.

The phrase "information processing" is a confusing and ambiguous one, as
information is probably itself a stable pattern of interlocking activity within the flux of a
more substantive and comprehensive data processing action. On this view, data and
information are not synonymous commodities - data possessing merely a form in the
restricted sense of a spatiotemporal frequency, in itself possessing no content or
intentional object, which is to say meaning. Whereas information is the result of the
interpretation of data, not in the sense of divining their intended message or meaning
(data possess none such in and of themselves), but in the less obvious sense of
reconciling the new data with a long interpretive history stored in memory based on data
received previously.

Given an infinite number of possible 'events,' the probability of any one occurring
is infinitesimal; still less could the 'same' events occur repeatedly and in predictable
order, unless the events were causally overdetermined by a sequence of preceding events
which themselves constitute a backwards diverging sequence of necessary causes. The
origin of these "infinitely improbable events" must be a nonlocally connected infinite set
of events (a continuum) where the singular event is an intentional object defined in terms
of the self-referential topological relationship/interaction of infinite subsets within the
continuum. Another paradoxical usage popular in the literature of physiological
psychology, artificial intelligence, philosophy of mind, etc., is the phrase "transmitting
information." One must realize that only energy may be transmitted, information is
always constructed through the interpretation of data received in situ; information does
not physically move from place to place. On this view, information is not a conserved
quantity, at least in the sense of some physical continuity equation governing its 'flow,'
and so if energy and information are in some physical context interdefinable, it should
only be under a set of circumstances where the principle of conservation of energy does
not strictly hold. Transmission presupposes the notion of the conveyance of some
conserved quantity within some closed space or continuum. We now know that "closed
continuum" is a contradiction in terms. The only such situation known to physics is the
one in which processes occur within a frequency spectrum with a lowest frequency larger
than the reciprocal of the quantum mechanical time uncertainty of the physical system
within which the relevant processes are occurring. Another reason to believe that a
physical continuity equation does not apply to information or its flows is that information
appears to reside in between the discrete energy levels of crystalline quantum systems,
and so information is not here really spatially localizable, in principle. Reducing the
energy uncertainty of a neural network will squelch some of the nonlocally virtual
interactions occurring within the energy bands of the network because the bands will be
contracted resulting in a contraction of the bandwidth of vacuum electromagnetic field
frequencies available to the network, reducing the data processing capacity of the
network.


Information is not stored in the brain's neural network at any particular physical
locations within the brain per se; a more nearly correct description is to say
that learned information is stored at various discrete energy levels of this network,
conceived as a quantum mechanical system. When a "piece of information" or a memory
is recalled, the neural network will attempt to connect to a new spectrum of the
nonlocally connected quantum vacuum fluctuation field. In essence, the brain becomes
embedded in a new vacuum or ground state which causes a restructuring or reconstituting
of its "stack" of energy levels at which the data was stored.
06/98
After the restructuring
of this stack, a new array of discrete energy levels prevails along with a new spectrum of
possible virtual energy transitions within the new stack. Now the brain has become
resonantly tuned to a new spectrum of (nonlocally-connected) vacuum energy
fluctuations. This is how data or "passive information" gets re-presented as "active
information." The brain may be thought of analogously to a hardware interface between
the individual soul and the impersonal and open-ended information reservoir. As
indicated already, data encoded by the vacuum in the discrete energy structure of the
brain is overdetermined. This same data as decoded from "memory traces" within the
brain's energy structure is underdetermined. What permits a quantity of data within the
brain's neural network to persist as this selfsame quantity is established neither by any
physical continuity which the brain may possess from one moment to the next, nor can
the persistence of this data be placed on any formally descriptive footing. The
informational continuity of the "memory traces", i.e., data stored within the brain's neural
network is maintained outside the brain in the sense of this continuity being of a nonlocal
nature: not contained within the brain's local spacetime. So on this view, the brain may
be thought of as a kind of "terminal" interfacing with the "network" of the nonlocally
connected, fluctuating quantum vacuum energy field. The physical traces within the
brain associated with memory consist merely of pointers, or, borrowing from the more
current Internet metaphor, these traces are to be thought of (along with Laszlo) as being
akin to "links to World Wide Web sites," so that memories are not stored in the brain but
merely memory addresses. Continuing the Internet analogy, these physical memory
traces within the brain may be understood after the fashion of web browser "bookmarks."
Particular eigenvalues of energy associated with the discrete quantized energy levels of
the brain's neural network cannot remain proper memory addresses for information
dynamically stored within the "quantum vacuum network" if the underlying
eigenfunctions are not adequately "tracked" through sufficient self-interaction of the
vacuum with itself through the brain as quantum neural network hardware interface. This
is due to the inevitable presence of energy degeneracy within the brain. The brain may
be functioning as a running convolution integrator of multiple unbounded vacuum
spectra. The brain in this way establishes resonant and therefore maximal connectedness
between different vacuum topologies. These vacuum topologies are not contained within
the local spacetime of the brain because any particular metric must presuppose an already
given spacetime topology.

There seem to be two conflicting views of the vacuum electromagnetic field in
its important role in opening up the otherwise mechanically determined processes of the
human neural network. Firstly, the v.e.f. provides context for real particle/field processes
occurring within the brain, and secondly, it provides the field of possible informational
structures which are filtered and selected by the brain's neural network ( if only
passively) to give meaning to its interior processes. The reason the question, "Why does
time appear to pass at the particular rate that it does?," does not really make sense is
because the interpretation of sensory data is radically dependent upon the timing of the
events represented by these data with respect to the mind which interprets them and gives
them contextual significance, and so there is no such thing as identical sequences of
events occurring at different rates. It is more true to say that formal systems are created
with the intent of demonstrating (formally) certain theorems which some person already
has in mind, rather than, that theorems are to be deduced from within already existing
formal deductive systems of inference. Energy and information are not interdefinable
within a closed dynamical system. What are called "mind" and "matter" are not
fundamental categories in terms of which fundamental distinctions may be validly
thought to subsist. Both terms represent somewhat complementary ways of abstracting
from the fundamental substantive process of the absolute ground of being. Reality as it is
in and of itself is neither and both of these "things." A unitary and unique "pure
consciousness" offers itself up as the best candidate for ultimate Reality or the ground of
existence: it is the most harmonious integration of all possible abstract forms while being
at the same time the most concrete, logically a priori, entity.

April 1997
The distinction between that which has form and that which is formless is a
distinction which cuts across the distinction between mind and matter since one may
speak of both formless mind and formless matter.

December 1996
But there may, indeed, be no most harmonious integration as such, but an
unlimited number of progressively higher integrations. To suppose that there is some
unique highest integration would be to presume that there can be some objective rule
relating lower level manifestations of ground into a convergence. Recursive structures
may only come into existence by being distilled from other recursive structures more
complex than themselves.

Particle creation at the event horizon of a black hole gives rise to a precisely thermal
spectrum. This suggests that the vacuum itself is in thermal equilibrium with itself so that
the vacuum must be continually exchanging energy with itself. Because the time rate of
change of all physical quantities depends on the existence of energy uncertainty: dq/dt =
(i/ℏ)[H, q] + ∂q/∂t, where the last term represents any explicit time dependence of q. On this view, quantum
mechanical systems possess energy uncertainty because they are continually perturbed by
intrinsic vacuum energy fluctuations. In this way, all mass-energy systems are in a
process of constant energy exchange with the quantum mechanical vacuum. Since all
macroscopic transfers and exchanges of energy between two points in spacetime are
mediated via the submicroscopic energy exchanges occurring within the vacuum, it
follows that conservation of energy macroscopically is dependent upon conservation of
energy exchanges within the vacuum. It is not possible to distinguish different time rates
of change within a closed dynamical system. This is because such a closed system
possesses only a finite number of discrete energy levels, and when the total system is in a
particular energy eigenstate, its energy uncertainty is 0 so that there are no vacuum
fluctuations available with which to mediate changes in physical observables of the
system. We may define the distance separating two events as a function of the number of
vacuum momentum fluctuations existing between the two said events. Similarly, we may
define the time interval between two such events as a function of the number of vacuum
energy fluctuations existing between the two said events. Of course, the partitioning of
the relativistic momentum - energy tensor into pure momentum versus pure energy
components is dependent upon the particular Lorentz reference frame within which one
performs the momentum and energy measurements; the converse of this is also true.
Since the energy levels at which information is stored in a neural network are defined in
terms of the lowest stable energy of the neural network as a whole, virtual energy
transitions between these energy levels presuppose a coupling between the wavefunctions
describing the quantum mechanical states of all of the individual neurons of the network
in the sense of their being nonlocally connected.
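The Heisenberg-picture equation of motion cited earlier in this passage, dq/dt = (i/ℏ)[H, q] + ∂q/∂t, can be verified numerically. The sketch below is illustrative (a finite Fock-space truncation with ℏ = m = ω = 1 and an arbitrary low-lying test state, none of which appear in the text): for the oscillator position operator, the commutator term alone must reproduce ⟨p⟩/m.

```python
import numpy as np

N = 30                       # Fock-space truncation (illustrative choice)
n = np.arange(1, N)
a = np.diag(np.sqrt(n), 1)   # annihilation operator in the number basis
ad = a.T                     # creation operator

# hbar = m = omega = 1 throughout
H = ad @ a + 0.5 * np.eye(N)      # harmonic-oscillator Hamiltonian
q = (a + ad) / np.sqrt(2)         # position operator
p = 1j * (ad - a) / np.sqrt(2)    # momentum operator

# A test state confined to low-lying levels, so truncation effects are negligible
psi = np.zeros(N, dtype=complex)
psi[0], psi[1], psi[2] = 1.0, 0.5 + 0.5j, 0.25
psi /= np.linalg.norm(psi)

# Heisenberg equation (no explicit time dependence of q):
# d<q>/dt = (i/hbar) <[H, q]>  should equal  <p>/m
lhs = 1j * (psi.conj() @ (H @ q - q @ H) @ psi)
rhs = psi.conj() @ p @ psi
print(lhs.real, rhs.real)   # the two expectation values agree
```

Since [H, q] = (a† − a)/√2 holds exactly even in the truncated basis, the agreement here is to machine precision.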

It is the spontaneous coherence in which the neural network is embedded which
provides the ultimate context within which the neurological events are to be interpreted.
This coherent field is that of the nonlocally connected vacuum electromagnetic
fluctuation field. The many worlds interpretation of the quantum measurement problem
may be understood as a reversal in causal relationship between the uncollapsed
wavefunction representing the mind of the observer and the uncollapsed wavefunction
representing the potentialities of the quantum mechanical system being observed by this
mind in the following manner: when the observer notes the collapse of the wavefunction
with respect to an observable he is attempting to measure, what is actually occurring is
the collapse of the wavefunction describing the observer's mind so that it (the observer's
mind) now abstracts from the Weltall one particular eigenvalue of the object
wavefunction, but without inducing a collapse of the object wavefunction itself. Without
a God's eye view of Reality in which to ground these complementary possibilities, there
is no legitimate distinction that can be made between them. One might ask what is
the fundamental difference between these two interpretations if there is not some third
realm, independent of both the observer's and object wavefunctions in terms of which one
interpretation might be favored over the other as being ontologically prior. This third
realm belongs neither to that of causality (the mutual interaction of collapsed
wavefunctions), nor to that of contingency (the interaction of collapsed with uncollapsed
wavefunctions, and vice versa), but to that realm constituted solely by the mutual
interaction of all uncollapsed wavefunctions. This realm we may refer to as the
composite contingency - necessity manifold or continuum.

There is an exactly parallel assimilation between the category space - time and
our category of necessity - contingency. In this way we may realize that the concepts of
locality and nonlocality constitute a distinction that cuts across that constituted by the
polar concepts chance and necessity, time and space. There is chaos, Heraclitus' ever-
living fire, the dynamic substance out of which all forms are derived. Then there are the
forms, themselves, both actual and potential. But there is a third factor, if you will, and it
is whatever power extracts these forms from the flux. This power possesses the freedom
of the flux, but also the order of all those forms which it is capable of extracting, or,
rather, abstracting from this flux, and so is not contained within either category, that of
order and that of chaos.
July 2011
Epicurus writes to Herodotus that ". . . we must admit
that nothing can come of that which does not exist; for were the fact otherwise, then
everything would be produced from everything, and there would be no need of any seed."
Now the great relevance of Epicurus' notion of the essential importance of a "seed" to
us is that there is indeed a "third power" apart from Monod's "chance and necessity"
and that this power is information. Information is abstract in that there are endless open-
ended means of encoding information as data. We say this with the express
understanding that information is never exhaustively determined by data, but context is
always required in addition to the awareness and intention by which a set of abstract
relations was enacted when the information was originally encoded. (By the way, there
must be a converse process to abstraction, i.e., Whitehead's concretion.) The material
medium in which information is encoded as data can never be uniquely associated with
said information, except by an arbitrary act (arbitrary from the standpoint of
determinism), that is, by assignment and convention, not to mention interpretation
wherein new information is engendered from old via the operation of metaphor
(reprocessing of information native to one context within a distinctly different context).
If there is an evolutionary process which profits by being graced with a preexistent
infrastructure of the very subtlest of data processing machinery which is suited to operate
(by imposing initial and boundary conditions) upon a medium, then the question becomes
whether this medium must itself be creatively dynamic in the sense of "self-organizing",
or if merely the presence of a sufficient density of "relic information" encoded within the
medium‘s fundamental processes should provide sufficient grist for an upward
evolutionary process.

According to the molecular biologist Stuart Kauffman, the evolvability of
dynamic systems is maximized precisely on the boundary between the system's chaotic
and orderly regimes, far from system equilibrium. Good is that which enhances
creativity which is the explicit expression of implicit integral wholeness. Evil constitutes
that which seeks to destroy, confuse, disintegrate as well as to impair the expression of
unity and wholeness through creativity. All creativity is in reality re-creativity (of God).

The probability spectrum of a given wavefunction may be underdetermined so
that there exists an unlimited number of ways in which an ensemble of measurements of
the eigenstates of the wavefunction with respect to a particular observable may sum
together so that the wavefunction appears perfectly normalized; this property may permit
an additional degree of freedom within quantum mechanical virtual processes not
previously suspected to exist.

Probability density conservation in 4-dimensional spacetime is at the heart of the
underlying physical mechanism for gravitation that we are proposing. For instance, the
gravitational reddening of starlight may be simply explained in terms of this concept of
probability (density) conservation. Probability conservation is the most general statement
of the principle of causality. There is an absolute simultaneity, which mental events
distinctly enjoy due to the fact that they do not admit of perspective; if anything they
constitute perspective. However, the order in which neurophysiological occurrences
occur (in the brain) is at least partially dependent upon the reference frame (in the
relativistic sense) in which these events occur (as observables). There must be an embedding
of these neural events in a substrate, which extends beyond the merely
neurophysiological in order for a reference frame to be defined in which there can arise a
correspondence between subjective and objective simultaneities. The nonlocally
connected vacuum electromagnetic field offers itself as the prime candidate for this
embedding substrate.

If metaphysical dualism is false in the strict sense of there existing two distinct
and parallel fundamental processes, one physical, the other mental, but if this doctrine is
nevertheless true in the less restrictive sense of there actually existing mental and
physical realms which are not distinct but somehow mutually interacting, then it is in
principle impossible to formalize the operation of mind.

It is quite true what many psychologists (as well as lay persons) have noted
concerning the tendency of a task to become executable without the aid of conscious
attention the more and more that it is performed. However, what has not perhaps been
widely noted by either is the somewhat contrary tendency for one to become more, rather
than less, aware of the abstract operations lying behind the performance of a task in new
contexts where the specific concrete operations constituting the task would never
otherwise suggest themselves. This tendency for us to become aware of the abstract
operations specific to one particular oft-repeated task within a context normally foreign to
it, or at least for our performances of operations within new previously unrelated contexts
to be guided by these abstract operations, I refer to as operational modulation - or op-
mod, for short. What we are calling op-mod may be alternately thought of as the
manipulation of something in terms of an operational metaphor; it is itself the very
essence of the human tool-using intelligence, and may be considered to be a general
property of any neural network-computing device.

More specifically, op-mod is peculiar to the problem solving strategy of the
neural network device because the specific neural circuits which are utilized by such a
network for solving one particular "problem" will necessarily overlap with neural circuits
which are being established in the course of attempting to solve "similar" problems in
new extraneous contexts.

The existence of the ground of Reality consists exhaustively in its very activity.
Consequently, that which creates this ground is that which sustains this ground; from
which further follows the truth of Leibniz's principle that, "the conditions sufficient to
create the world are necessary at every succeeding moment to sustain its existence."

But the implications of quantum mechanics as pertains to what is called the quantum
vacuum conceived of as the naturalistic interpretation of the ground of being in the
application of this concept to induced gravity theory or effective field theories of gravity
and inertia may suggest that Leibniz's principle must break down in connection with the
fundamental quantum-thermodynamic phenomenon of environmental decoherence.
March 2011
Decoherence is witness to the fact that the conduit of communication between
the quantum system and its supporting vacuum state does not possess "enough
bandwidth" for the system to update itself "in real time", hence the relatedness of
gravitational decoherence and gravitational time dilation.

We know that there has to have always been something in existence and so the ground of
Reality must be self-sustaining, and hence, self-creating. It follows that the ground of
existence necessarily exists, and so is eternal. All possibility ultimately lies dormant
within that which necessarily exists. In the language of quantum mechanics, every
determinate eigenstate with respect to a particular physical observable may be alternately
represented as a series of eigenstates with respect to an indeterminate physical observable
incompatible with the first.
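The quantum-mechanical statement above can be illustrated with the smallest possible example, a spin-1/2 system (an illustrative sketch; the text does not specify a system): a determinate eigenstate of σz, expanded in the eigenbasis of the incompatible observable σx, becomes an indeterminate, equal-weight superposition.

```python
import numpy as np

# Pauli observables; [sigma_z, sigma_x] != 0, so they are incompatible
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

# A determinate eigenstate of sigma_z: spin-up
up_z = np.array([1, 0], dtype=complex)

# Eigenbasis of the incompatible observable sigma_x
vals, vecs = np.linalg.eigh(sigma_x)

# Expand |up_z> in the sigma_x eigenbasis: c_i = <x_i|up_z>
coeffs = vecs.conj().T @ up_z
print(np.abs(coeffs)**2)   # equal weights [0.5, 0.5]: sigma_x is fully indeterminate
```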

December 1996
When one conceives of some universal substance or "stuff" which does not
depend on any activity for its existence, one is conceiving of something which is at once
a form and a substance. One is conceiving of a substance under that particular
determination of it which possesses the greatest generality.

Hermann Weyl notes in his book, "The Open World," that the state of a two-electron
system is not determined by the state of each individual electron added together, but that
the states of each electron may be deduced from the state of the two-electron system.
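Weyl's observation can be reproduced with a two-qubit stand-in for the two-electron system (an illustrative sketch, assuming a singlet-like entangled state): each electron's state follows from the joint state by partial trace, yet the joint state cannot be rebuilt from the two single-electron states.

```python
import numpy as np

# Singlet-like entangled state of a two-electron (two-qubit) system
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)
rho = np.outer(psi, psi.conj())                             # joint density matrix

# Reduced state of each electron: partial trace over the other electron
rho4 = rho.reshape(2, 2, 2, 2)
rho_A = np.trace(rho4, axis1=1, axis2=3)   # trace out electron B
rho_B = np.trace(rho4, axis1=0, axis2=2)   # trace out electron A
print(rho_A)   # maximally mixed I/2: no pure single-electron state exists
print(rho_B)

# The product of the two reduced states is NOT the joint state:
print(np.allclose(np.kron(rho_A, rho_B), rho))   # False
```

The asymmetry is exactly Weyl's point: the deduction runs from the two-electron state down to the individual electrons, never the other way around.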

Leibniz's series: 1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + . . . , does not converge when the terms
are rearranged into a sum of the following two series: (1 + 1/5 + 1/9 + . . . ) + ( -1/3
- 1/7 - 1/11 - . . . ). This is a rather common property of what are called alternating
infinite series. This property is very mysterious, but can be made to seem less so if
one pictures each term of the sequence as a numbered bead on a string. A finite number
of terms of the series may be rearranged arbitrarily to produce the identical sum, and this
may be thought to be possible simply because the string, being finite in length, permits
the removal, and hence, rethreading of all the beads onto the string in any arbitrary order.
However, given an infinite number of beads, the string is now itself infinite in length and
so it is no longer possible to remove the beads so as to put them into a new order.

The order of the beads may only be changed into that represented by the two sums
provided that the original string is cut, and this changes the topological relationship of the
beads; in a finite sequence the order of the terms (beads) may be rearranged without
altering the topological relationship of the beads. Herein lies the irreversibility of the
procedure. It is also interesting to note that Leibniz' series converges to the value pi/4:
the value of convergence is itself an irrational number possessing a decimal
expansion with no determinate order whatever, so that what we have is an
equation between an irrational number and an infinite sum of rational numbers, on the
one hand, and, on the other hand, an equation holding between an infinite sum of terms
possessing a mathematically determinate sequential order with respect to a simple
mathematical operation, namely, addition, and an infinite sum of terms possessing no
mathematically determinable sequential order - no sequential order with respect to any
definable mathematical operation. We may suspect that Cantor's argument for the
existence of what he calls nondenumerable infinity, i.e., the famous "diagonal argument,"
can be applied to the decimal expansion of pi to show that this sequence of decimal
fractions itself constitutes a nondenumerable set of rational numbers. What is interesting
here is that no possible rearrangement of the indeterminate sequence of nondenumerable
rational numbers constituting the decimal expansion of pi will produce a series
which diverges, although there do exist rearrangements of the terms of Leibniz's series
which diverge. From this simple fact we may deduce that there is no infinite sequence of
denumerably infinite subsets of terms taken from Leibniz' series, on the left hand side of
our equation, which will produce a one-to-one correspondence with the individual
rational numbers of the infinite sequence of rational numbers in the decimal expansion of
pi.
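The rearrangement behavior described above is easy to exhibit numerically. A minimal Python sketch (the function names are mine, purely illustrative): partial sums of the series in its natural order settle toward pi/4, while the positive sub-series taken alone grows without bound, which is why the split into two one-signed series destroys convergence.

```python
# Leibniz's series 1 - 1/3 + 1/5 - 1/7 + ... converges (to pi/4)
# only in its natural alternating order; each one-signed sub-series
# obtained by splitting it diverges on its own.
import math

def leibniz_partial(n):
    """Partial sum of the first n terms in the natural order."""
    return sum((-1) ** k / (2 * k + 1) for k in range(n))

def positive_partial(n):
    """Partial sum of the first n positive terms: 1 + 1/5 + 1/9 + ..."""
    return sum(1 / (4 * k + 1) for k in range(n))

print(leibniz_partial(200_000))    # close to pi/4 ~ 0.7853981...
print(positive_partial(200_000))   # keeps growing, roughly like ln(n)/4
print(positive_partial(2_000_000))
```

The positive sub-series grows like (1/4) ln n, so its partial sums eventually pass any bound; the same holds for the negative sub-series, mirroring the bead-and-string picture above.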
July 2011
Although there's a kernel idea here that requires further development, it's
obvious that what has just been noted about Leibniz's series reveals that pi has a
topological structure. Does the fact that the real line possesses a topology imply
that the infinite real line must possess some kind of closed-loop structure?

Godel has stated that his incompleteness theorem applies only to logico-deductive
systems at least as powerful as that represented by arithmetic (Peano Arithmetic). This is
because the proof of the theorem is based on the Godel-numbering procedure, whereby each
operator, as well as every symbol utilized by the system, is represented by a Godel
number, while all of the logical operations of the system are defined in terms of
arithmetic operations. So we may say that arithmetic is definable within all so-called
Godelian deductive systems. The domain of all arithmetical operations is a domain
devoid of topological structure. Self-referential propositions introduce a topological
structure into the domain of proof.

Rational numbers are the sums of convergent infinite series where the order in
which the terms of the series appear does not affect the value of the sum. We may say in
this case that rational numbers occupy a number field possessing arithmetic, or null,
topological structure. Irrational numbers, on the other hand, are the sums of infinite
series, which may diverge if the order in which the terms of the series appear is altered.
We may say that the irrational numbers occupy a number field possessing a topological
structure. The degrees of freedom required for certain reactions, or interactions, to take
place, are only allowable within a space of large enough dimensionality to accommodate
them. The unreasonable effectiveness of mathematics within the physical sciences,
borrowing the famous phrase of the quantum physicist Eugene Wigner, is owing to the
radically and, perhaps, infinitely, overdetermined nature of natural phenomena. To wit,
sensory data are grossly insufficient to determine uniquely the information structures
with which they are interpreted and explained. A genuinely recursive system may only
be derived from a recursive system equally or more complex than itself, or if the
recursive system is "constructed" out of simpler recursive elements, the control system
effecting or mediating the process of construction is, itself, a recursive system, of greater
complexity than the system being constructed. The information content of a particular
structure is defined by the degree of preciseness to which the system approximates its
intentional object. This definition is best understood in terms of the "shattered hologram"
metaphor. A molecule belonging to a complementary molecule pair, two molecules
which naturally hydrogen-bond to one another, favors the spontaneous self-assembly
(from locally available components) of the molecule to which it bears a topologically
complementary relationship. More generally, the spontaneous self-assembly of
molecules is favored by a vacuum containing energy resonances complementary to those
upon which the molecule's energy structure depends for its sustained existence. On this
view, the quantum vacuum electromagnetic field may be thought of as a kind of dynamic
template which "informs" certain simple molecules "how to self-assemble," with these
simple molecules acting as complex waveguides receptive, or sensitive to, a certain tiny
portion of the spectrum of electromagnetic frequencies originating from within this
vacuum.

July 2011
An important question in this connection is whether there are contingent
conditions for the emergence of altogether new structures, and whether there is any
contradiction in the dynamical substrate of the quantum vacuum being able to support
and sustain emergent structures that it is nonetheless unable to anticipate. Another way
to put this is: can open-ended conditions be posited for irreducibly complex structures to
"bootstrap" themselves into existence? Here the dynamical substrate is intelligent
and creative, though not all-knowing. Here also, the complex structures engendered are
irreducibly complex, though in the absence of intelligent design, as only
intelligent recognition is required. The teleology bespoken by the emergence of
irreducibly complex structures in one temporal dimension can be given a causal
explanation through the operation of feedback structures in higher dimensions of time.
September 2011
There are two fundamentally disparate concepts of intelligent design: in the one,
a biochemist or molecular biologist applies design concepts derived wholly from
his study of already available biological structures and systems; in the other,
a demiurge or deity develops a system or structure by directly lifting it out of chaos,
calling it out of the inchoate flux of open-ended possibilities.

If what might be called time-scale reductionism (TSR) constitutes a fundamentally
false understanding of the dynamics of natural phenomena, then the traditional
philosophical view of time as possessing only a single dimension must be abandoned.
Time-scale reductionism says, simply, that events taking place over a certain time
interval are owing exclusively to events taking place over intervals of time smaller than
and "contained within" the first time interval, which are in turn dependent upon events
occurring over smaller time intervals, and so on.
July 2011
The phenomena of quantum
entanglement and teleportation, particularly within the transactional interpretation of
quantum mechanics (cf. Cramer), appear to flout the principle of TSR. One ready
example of the failure of TSR is the case of historical time. In the case of historical
time, there is a critical "window of opportunity" within which certain events must
transpire if certain significant changes or revolutions, e.g., cultural, social, political, are to
occur. Paradoxically, the timeline's sensitivity to initial conditions goes hand-in-
hand with its "robustness"; cf. the misguided, awkward, and ultimately
unsuccessful attempts of future time travellers to meddle with the timeline. More
broadly, events in a historical sequence do not merely cause each other or concatenate as
in a blind causal sequence of events, but events in the historical process echo as well as
anticipate events in the past and future, respectively. Clearly it is historical events'
both creating and reacting to a temporal context that makes this type of determination
in time possible. And here it is obvious that the temporal context is only efficacious if it
is also meaningful, which implies the operation of consciousness in both its individual
and collective forms. Quantum entanglement may be understood as causality operating
collectively rather than merely individually as in the case of classical physics. We should
be mindful here that what is called thermodynamics is merely a collective description of
particles acting individually according to Newtonian mechanics and which does not
invoke any new concept of causality.
October 2011
It may turn out that we shall only succeed
in developing a "concept of consciousness" for the individual by borrowing from the
theory of the consciousness of the collective. If individual consciousness is not a
metaphysical entity, i.e., substance, but is instead a social construct, then the
philosophical quest to solve Chalmers' "hard problem" of consciousness shall be seen to
have been all along the pursuit of a red herring. "Contrary to what most people believe,
nobody has ever been or had a self. But it is not just that the modern philosophy of mind
and cognitive neuroscience together are about to shatter the myth of the self. It has now
become clear that we will never solve the philosophical puzzle of consciousness - that is,
how it can arise in the brain, which is a purely physical object - if we don't come to
terms with this simple proposition: that to the best of our current knowledge there is no
thing, no indivisible entity, that is us, neither in the brain nor in some metaphysical realm
beyond this world. So when we speak of conscious experience as a subjective
phenomenon, what is the entity having these experiences?" (Thomas Metzinger, The Ego
Tunnel: The Science of the Mind and the Myth of the Self).

John Searle, the philosopher of language and mind, has stated that formal computational systems
are incapable of consciousness because such formal systems do not effectively exploit the
causal powers of matter available for utilization by the human brain. Since the
causal powers of matter, as Searle terms them, stem from what is forever spontaneously
occurring in the natural realm at the very smallest dimensions of time and space, the
process of abstraction, itself founded upon the systematic ignorance of finer details of
structure and function, introduces a kind of built-in blockheadedness into systems of
"artificial intelligence" which are physically realized from relatively macroscopic and
"insensitive" component parts, in accordance with "analytically closed-form" designs.

Vacuum fluctuations which are simultaneous in one reference frame (Lorentz frame) will
not necessarily be simultaneous in other frames. This theoretical implication of special
relativity for quantum mechanics, combined with the fact that the energy density of the
quantum vacuum is decreasing with time as the universe expands, leads us to deduce that,
not only is the density of the quantum vacuum different in different Lorentz frames, but so
is its time rate of decrease.

I do not think that Hugh Everett's many worlds interpretation of quantum mechanics is
consistent with the implications of quantum experiments which have been performed in
the last few decades since the time (1957) when he originally proposed his interpretation
of quantum theory. In Everett's theory, the collapse of the wavefunction is interpreted as
a sudden, discontinuous branching of the observer from one parallel universe, where the
wavefunction is uncollapsed, to a new parallel universe where the wavefunction exists in
one of its component eigenstates. From this, are we to suppose that all of the collapsed
wavefunctions within our universe owe their existence to observations made by quantum
physicists doing experiments in other universes?

Enantiomer molecules, that is, molecules which were once thought to be identical in
every way except that they are the mirror reflection of each other, have recently been
generally found to differ in respect to their binding energies. So-called "right-handed"
molecules, such as the amino acids, D-tyrosine, D-glutamine, etc., have been found to
possess smaller binding energies (and hence are less stable) than their mirror image
counterparts, the L - series amino acids of the same names. Given the existence of a
spatial fourth dimension, it is possible to convert a right-handed molecule into its
identical left-handed counterpart by pulling the molecule into 4 - space, rotating it
180° against the hyperplane (normal to it), and returning the molecule to its original position
within this 3 - hypersurface. This would suggest the existence of a preferential curl field
acting within this four dimensional continuum in a direction opposing the rotation of an L
- molecule and aiding the rotation of a R - molecule. This mechanism would be one
logical way to account for the observed differences in the binding energies of identical L
- and R - molecules. But such imagined hyperdimensional rotations must be seen as
only a metaphor for a re-creation of the molecule as its mirror-reversed double. This is
because the metric of Minkowski spacetime is not positive definite, but indefinite.
Information is neither created nor destroyed; information is always conserved, and when
it appears to be created, it is merely being transduced, i.e., re-expressed within another
medium.
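The hyperdimensional rotation invoked above can be made concrete in coordinates. A minimal Python sketch (a labeled point set standing in for a molecule - an illustration of the geometry only, not a physical model): a 180-degree rotation in the x-w plane of 4-space, applied to points lying in the w = 0 hyperplane, acts on them exactly as the 3-D mirror reflection x -> -x, flipping the handedness of a chiral configuration.

```python
# A 180-degree rotation in the x-w coordinate plane of 4-space acts on
# points with w = 0 exactly like the 3-D mirror reflection x -> -x:
# a rigid motion in 4-D realizes an orientation reversal in 3-D.
import numpy as np

def rotate_xw(points4, theta):
    """Rotate 4-D row-vector points by theta in the x-w plane."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0, 0, -s],
                  [0, 1, 0,  0],
                  [0, 0, 1,  0],
                  [s, 0, 0,  c]])
    return points4 @ R.T

def handedness(p):
    """Sign of the scalar triple product of the three edge vectors of a
    tetrahedron: +1 and -1 distinguish the two mirror-image forms."""
    v = p[1:, :3] - p[0, :3]
    return np.sign(np.linalg.det(v))

# A chiral tetrahedron, lifted into 4-space with w = 0.
tet = np.array([[0, 0, 0, 0],
                [1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, 0]], dtype=float)

mirrored = rotate_xw(tet, np.pi)
print(handedness(tet), handedness(mirrored))  # opposite signs: +1 and -1
```

Since the rotation is a rigid motion of 4-space, nothing is "cut" or deformed; the orientation reversal appears only from the standpoint of the 3-D hypersurface.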

There are myriad different media through which portions of the eternally pre-existent
information may be expressed, but there exists a primary medium which contains all
information originally. All other media through which information might be expressed
are ultimately dependent upon this primary information medium. In the same way that
the transduction of energy from one medium, say mechanical, to another medium, say
electrical, is always accompanied by a loss of a portion of the transduced energy as heat
energy (whereby entropy is increased), some information is always lost in the
transduction of information from the primary medium to other secondary media. For this
reason, no informational systems or structures are permitted to come into being which
possess an information density greater than that of the volume which they occupy, this
volume being pervaded by energy in its primary form (vacuum energy). In the same
way, there is a limit to the mass-energy density of any particular volume of spacetime;
this limit is that specified by Schwarzschild's equation for the energy density of black
holes. The information which is inevitably lost as a result of the transduction of
information from the primary medium to secondary media simply passes back into the
primary medium.

July 1998
Information cannot be independent of the medium in which it is expressed. Data,
on the other hand, are independent of the medium in which they are expressed.

March 1998
The pre-existence and transduction of information are not logically self-
consistent notions. Pre-existence implies something which is continually a part of the
temporal progression of the whole but which itself remains latent and changeless.
Transduction of information also implies a contradictory context-freedom for
information. For the transduction of information implies that, like energy, no information
is gained or lost in its "changing form" as it passes from one medium through another and
then to another, and so on. This is to say that the media carrying information contribute
nothing to the content of this information. And this is also to say that information is
always abstract and is constituted by relationships. One might then ask: what is it
that differentiates information from mere data - or are they synonymous? Data and
information may be understood as constituting a merely relative distinction. What is
meant by this is that what are data in one context may function as information in
another. In other words, information is data interpreted in light
of context, while data in this same context function as information with respect to smaller
subcontexts contained therein. Since information is context-dependent, it would follow
that all information possesses a characteristic lifetime, rather analogous to the half-life of
radioactive isotopes.

The law of the temporal evolution of information systems is provided by the pre-existing
spatial distribution of information. The determinate is dependent upon the indeterminate.
The finite exists only through its participation with the infinite. All transformations are
definable in terms of mere projection operations; therefore, these transformations, when
investigated, always reveal the presence of conservation laws which seem to govern, or
provide constraints upon, these transformations.

What is called the unity of apperception in Kant's Critique of Pure Reason is synonymous
with the existence of the underlying noumenon, which provides the rationality of any
particular series of perceived continuous transformations entertained within a finite mind.
The interpenetration of the categories of time and space support the unity of
apperception.

October 1997
A functionalist theory of mind must presuppose a decomposition of spacetime
into a particular "3 + 1" configuration of absolute space and absolute time. It must do
this in order to define the boundary between what are merely input-output operations and
what constitute operations of the processing of inputted data/information into outputted
data/information. We may understand the distinction between information and data to be
simply this: information are data placed in context and interpreted in light of this
context; data are information taken out of context, that is to say, data are simply context-
free information. Now in order for information to be passed from one person (or
subjectivity) to another, information which the one person intends to convey to the other
must be translated, or more aptly, perhaps, converted into context-free data.

Although there is no such distinction as subjective versus intersubjective data, we may
support such a distinction for information. Intersubjective information may be
understood as information which two or more persons may hold in common with one
another due to similarities in the nature of the mental boundary conditions, if you will,
which act upon their respective consciousnesses. Subjective information may be
understood as information which is private to each person and therefore cannot be held in
common between different subjectivities. Different consciousnesses are not all exemplars
of consciousness itself, or consciousness at large, by virtue of each individual
consciousness possessing one or more general qualities in common with the others, for
this would be to presuppose that different individual consciousnesses are simply different
structurings of a single fundamental consciousness.

If we suppose that the constituting of each individual subjective spatiotemporal
continuum assumes the prior existence of an objective, as opposed to an absolute, spatio-
temporal continuum, and that this objective space and time are, in turn, constituted out of
the activity of some fundamental consciousness, then each individual consciousness, or
ego, is merely one particular structuring of the fundamental and unitary consciousness
among many other such possible structurings.

Temporality presupposes the givenness of Space. Duration, however, does not
presuppose the givenness of Space. Temporality pertains to the evolution of things
existing within a particular space. The temporal evolution of a particular spacetime treats
this spacetime as though it is itself a "thing." The local temporal evolution of spacetime,
which is an admissible concept within classical general relativity, is not reducible to the
temporal evolution of entities and their mutual spatial relations within this spacetime.
Temporality pertains to changes occurring to the system boundary conditions. Duration
pertains to changes of the system including the changes to the system boundary
conditions. Within temporality, the rate at which a sequence of events or evolution takes
place cannot be determined; temporality and duration are required for this. Temporality
is duration plus deterministic causal relations taking place within a particular spacetime
frame of reference.

May 1997
And this is perhaps another important distinction which can be made between the
subjective (mental) and the objective (physical): in subjectivity, the forms of time and
space are fused and continuous with one another. Communication between different
subjectivities, that is, intersubjectivity which is objectivity, requires that the fused
spatiotemporality of each subjectivity be decomposed into separate space and time
dimensions within the realm of the objective.

The increase in complexity of coherent systems with time would seem to involve the
creation ex nihilo of quantities of information. There are reasons for believing, however,
that what is really involved in cases such as this is merely the partial redistribution of data
along a spatial information gradient onto a gradient which is part spatial and part
temporal where the total quantity of information is conserved in the process. With the
introduction of excitation energy, the nonlocal, distributed information content is partially
transformed into local, lumped information content. The information content contained
within a purely spatial manifold is nonlocally encoded through relations within the
manifold which originate externally to it. The relations which are constitutive of a
spacetime manifold cannot be represented in terms of a distribution of relations between
localities on this manifold. No causal theory can explain the manner in which a
spacetime is constituted. A spacetime is constituted nonlocally, that is, through the
operation of nonlocally connected dynamical processes which are only partially located
within this spacetime. Is it, in principle, possible for all the neural firings which comprise
the brain state to which is associated a particular mental state to have been stimulated to
occur entirely from outside the brain's neural network, obviating the need for intricate
feedback loops connecting the neurons with one another which normally support such
patterns of neuron firings? Intuitively we suspect that merely reproducing the requisite
neural firing patterns from outside the network would not be sufficient to produce the
normally occurring associated mental state. This is because the observed neural firings
would only possess a determinate order in terms of the perception of their order by means
of a neural network genuinely possessing intricate feedback structures.

We might, in turn, be puzzled by the force of this particular intuition which has at its root
the notion of the importance of timing and synchronization of the neurons with respect to
one another. But this would really only be important if there was something which the
neurons incidentally interact with in the course of their normal process of functioning to
which the order of their "firing" might be fundamentally related. We might then seek to
include this additional something and produce the changes in it also from outside, just as
in the case of the neurons. Notice that in every case where we are supposedly able to
reproduce a given sequence of neural firings, we are dependent upon a favorable
condition wherein the time interval between firings within a given small region of the
brain are larger than the time uncertainty of the quantum system constituting the neural
network. We find that our earlier intuition about the problem of the timing of the events
appropriate to the establishing of the requisite brain states crops up yet again. The timing
of local causal interactions is between particular boundary conditions of, and relative to,
the nonlocally connected vacuum in which the energy uncertainties of the neural network
as a quantum mechanical system ultimately originate. There is still something with
respect to which the patterned events (comprising the requisite brain states) occur which
is important from the standpoint of timing and synchronization, and we might, therefore,
again, seek to include it, just as before. The point here is that this process of trying to
include the entire background against which the timing of the brain events are significant
can never be completed; we face an infinite regress here, or, if successful in including the
entire background, then there remains nothing against which the rate of causally
sequential events, or the timing of merely correlated (not causally connected) events within the
network, may be established. This regress is apparently resolved within the quantum
mechanically uncertain time interval of the network and therefore is forever beyond
manipulation from outside, that is to say, there cannot exist a determinate program
adequate to produce the timing necessary to integrate or unify the neural firings into the
requisite coherent pattern we term consciousness. This timing is not to be understood
after the normal fashion of a mechanical timing of articulated events. For purely relative
timing in the above sense does not take into account the duration of the whole process
(relative to its determining ground). Moreover, the rate at which a sequence of causally
connected physical occurrences unfolds is determined through the availability of the
spacetime constituting vacuum momentum/energy fluctuations with which the network
must continually interact. To restate, this is because the ultimate embedding substrate of
the neural network functions through interconnected events possessing a time uncertainty
which prevents their delicate synchronization from ever being introduced from outside -
outside either in the physical/spatial sense or in the purely formal sense of a design or
template imposed upon the concrete dynamical substrate through the imposition of
boundary conditions. Of course, what is called dualism is completely ruled out in the
case where the brain is thought to function in a deterministic manner. This is because
the isomorphism which must maintain between brain states and mental states precludes
the possibility that these two qualitatively different types of states are causally connected;
for any effective causal interaction between the two would necessarily disrupt the
isomorphism which is presupposed by the dualistic theory of mind.

On the other hand, in the absence of causal interaction between brain states and mental
states, there is no rational basis upon which we can say that particular mental states
correspond to, or belong to a particular brain. On the other hand again, however, if
dualism is rejected and causal relationships are allowed to obtain between brain states
and mental states, then both types of events/processes must be mediated by the very same
underlying substance, or substrate. In this way, the whole distinction between what are
called brain states and mental states completely breaks down, and one is forced to adopt a
monistic theory of mind. Because of the veritable existence of temporality, we know that
the fundamental dynamism mediating all physical processes must be irreversible (in the
thermodynamic sense). Consequently, the appearance of reversibility in physical
reactions, e.g., chemical, nuclear, etc., is just that, and the entities which take part in these
physical reactions/processes are fundamentally overdetermined by the underlying
dynamism grounding them, producing an underdetermination of their mutual causal
interactions which is in principle irremediable. This is due to the essential nature of
causal analysis as always being performed in terms of the laws governing the dynamical
relations between indefinitely reproducible entities.

March 1997
In the passage from one "state" to the next, a system exists momentarily in a
configuration which cannot be characterized as either state, and this no matter how finely we
might seek to partition the system's flow of change. From this it immediately follows
that it is what is peculiar to a dynamic system, yet common to all such systems
generally, which permits it to transcend the sequential state description of the change
it experiences, and upon this its ineradicable temporality depends. Temporality, to wit,
exists exclusively in the domain which transcends or lies beyond all possible abstract
descriptions. The concrete is the temporal. As we have already commented, the medium
of abstraction must not admit of an adequate description in terms of any set of abstract
categories, since the categories presuppose the existence of that which brings them forth.
The notion of historicism, in the sense provided by the theories of Marx and Weber, is
conceptually unintelligible because it assumes the existence of a distinction the validity
of which it then later denies. That is the distinction between physical causal factors and
historical factors in the explication of social, political, cultural, and economic
developments. The validity of historicism would mean that history as a science of large
scale human development doesn't really work, that it doesn't have anything substantive to
say at all because the real causal efficacy behind the changes which history has
traditionally studied lies at a level of description which is at once lower and more
fundamental than that where historical explanations are articulated.

That which is the source and sustainer of all things cannot be viewed as being anything
but infinite. The paradigmatic example of this transduction process is the spontaneous
production of fundamental particles out of the vacuum within accelerated reference (non-
Lorentzian) frames.

Evolution may only be a local phenomenon - not a global phenomenon; evolution in the
sense of the genuine emergence of wholly unprecedented and unanticipated forms,
structures, or dynamisms - without this process of development somehow drawing upon a
pre-existing reservoir of information within which these "emergent" forms are at least
implicitly prefigured, and which mediates and underpins the evolutionary developmental
process - is tantamount to the acceptance of a fundamental process which is itself
uncaused and which is not admitted to be the cause (or reason) for its own existence. In
the vacuum, information exists in a nonlocal, simultaneously connected form. When the
vacuum energy fluctuations mediate the occurrence of physical processes, there is a
transduction of nonlocal, parallel and simultaneously connected information into a new
local, sequential, and temporally connected form. But such a transduction phenomenon
cannot take place within the pristine vacuum, that is, within the vacuum in the absence of
any action upon it even if this action ultimately arises from itself. This vacuum must
have something to react with or against; it must be "seeded." The ground of existence
cannot be outstripped by any possible existent "thing." Nonlocality presents the
possibility of putting quantum mechanical probability on a "rational" footing; in other
words, a given wavefunction is normalizable on the average. The condition of
normalizability is not a very restrictive condition on a quantum mechanical
wavefunction; there are an infinite number of ways to refract or decompose a given
wavefunction into a spectrum of eigenstates (of an incompatible observable) so as to
satisfy the normalization condition.

December 1996
It cannot be by way of some impossibly complex causal sequence that a thing
manifests itself as such. For instantaneous context, which cannot be analyzed in causal
terms, must play a role in the act of determination. The fundamental paradigm shift which
marked the transition from classical (Newtonian) mechanics to the mechanics of quantum
phenomena may be captured in the manner in which the implied unified physical law
constrains the phenomena of nature: classical physical law states that what it prescribes
to occur must necessarily occur - any behavior apart from this being forbidden; quantum
mechanical physical law states that what it does not proscribe or forbid to occur
necessarily occurs. This constitutes a kind of fecundity principle. And the possibilities
can only be defined when boundary conditions are superimposed upon the system which
in its natural state is not necessarily inclusive of any particular set of boundary conditions.
The boundary conditions upon the system are what constitutes locality. Only the effects
upon the bounded system can be measured. Causality concerns the dynamics of the
boundary conditions but cannot capture the behavior of the system in an unconditional
way, that is, in the absence of any supplied boundary conditions. If the quantum
mechanical vacuum is the origin of temporality, then the vacuum must itself be timeless,
which is to say, eternal; cf. Ehrenfest's Theorem, in which the instantaneous rate of
change of the expectation value of an operator is given, up to a factor of 1/iħ, by the
expectation value of the commutator of that operator with the Hamiltonian, and vanishes
for a conserved observable.
Moreover, that which is the originator of space must itself be spatially unlimited. Human
intelligence has evolved to a point just short of that required to think something genuinely
interesting. Cf. Understanding Quantum Physics by Michael A. Morrison for a fuller
explication of how temporality and Heisenberg energy uncertainty are related.
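For reference, the theorem invoked above reads, in its standard form, for an operator with no explicit time dependence:

```latex
\frac{d}{dt}\langle \hat{A} \rangle \;=\; \frac{1}{i\hbar}\,\bigl\langle [\hat{A},\hat{H}] \bigr\rangle ,
```

so that an observable commuting with the Hamiltonian has a stationary, which is to say "timeless," expectation value.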

The neural network computer does not store information in any particular location within
the network, but stores information at particular energy levels of the global interaction of
the network as a whole. Each new bit of data which is fed into the network is stored at a
next higher energy level of the network. What ultimately becomes this next higher
energy level is determined by a virtually chaotic process of neural "firings" which occurs
throughout the network and which is stimulated by the introduction of a data input signal.
Myriads of neurons throughout the entire network continue to fire randomly until a new
state of least energy is reached by the neural network as a whole.
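The relaxation described above, with information stored globally in the interactions rather than at any particular location, and noisy firings settling into a least-energy state, is essentially the behavior of a Hopfield-style attractor network. A minimal sketch (network size, pattern, and update rule are my own standard choices, not taken from the text):

```python
import random

# A stored pattern lives in the weight matrix of the whole network, not at
# any single site; a corrupted input relaxes to the least-energy state.
random.seed(0)
N = 16
pattern = [random.choice([-1, 1]) for _ in range(N)]

# Hebbian weights distribute the pattern across all pairwise connections.
W = [[(pattern[i] * pattern[j] if i != j else 0) for j in range(N)]
     for i in range(N)]

def energy(s):
    return -0.5 * sum(W[i][j] * s[i] * s[j]
                      for i in range(N) for j in range(N))

# Corrupt a few units, then let asynchronous updates relax the state.
state = pattern[:]
for i in range(4):
    state[i] = -state[i]

for _ in range(5):                      # a few sweeps suffice here
    for i in range(N):
        h = sum(W[i][j] * state[j] for j in range(N))
        state[i] = 1 if h >= 0 else -1  # each flip never raises the energy

print(state == pattern)                 # the stored pattern is recovered
```

Each asynchronous update can only lower (or leave unchanged) the global energy, so the network necessarily settles into a minimum of the energy landscape.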

October 1996
According to a kind of James-Lange theory of the operation of mind (the view
of the individual mind as a kind of "reducing valve") the brain does not react to
environmental stimuli in the commonsensical way, but it is a dynamic system of
resonators which are continually retuned to new signals within the quantum vacuum
electromagnetic field in response to stimuli.
September 2011
But this notion of the brain
acting merely as a kind of filter of vacuum signals is only part of the story. The brain
also reprocesses the vacuum signals of a spectrum peculiar to it, and it is this reprocessed
spectrum that undergoes the filtering action.
April 2012
"The fact that a high level of
consciousness is associated with complex neural structures does not prove that the neural
structures produce this consciousness." – David Bohm

June 2012
An important question is whether or not multiple brains process quantum
entanglement from the same vacuum information spectrum, putting back reprocessed and
new entanglements into this same spectrum or whether each does so only in interaction
with its own, unique signature-quantum vacuum information spectrum. If each brain
only interacts with its own vacuum information and signal spectrum, the question arises
as to whether entanglement information can be passed between consciousnesses or only
signals, data and instructions, the meaning of which is exclusively the domain of the
recipient of interpersonal data. There is also an additional distinction to be drawn
between data within different regions of the same brain versus data transmitted via
physical signals passing between one brain and another.

If the separations between different energy levels within the neural network (representing
different bits of information) are close enough together in energy, then it becomes very
probable that there will be a process of continual virtual energy transitions occurring
between the various discrete energy levels of the network throughout its entirety. An
interesting point here is that these virtual energy transitions within the network owe
entirely to the action of the quantum fluctuations in the energy of the vacuum
electromagnetic field. Moreover, the probabilities of given neural energy transitions
occurring within the network are determined by the presence of the constantly occurring
virtual energy transitions of the network which, again, are mediated entirely by way of
the quantum mechanical fluctuations in the vacuum electromagnetic field, themselves,
owing to the necessary presence of Heisenberg energy uncertainty within the quantum
vacuum. An essential difference between what are called virtual and what are called real
energy transitions, is parallel to the distinction between what are called virtual particles
and what are called real particles, respectively, in the theory of particle physics, namely,
virtual energy transitions cannot be measured directly, whereas real energy transitions
can be measured directly, for instance, in laboratory experiments. The real energy
transitions which take place within the neural network, and which are responsible for the
communication of its processed information to the "outside world," i.e., to the
consciousness of both the individual subject as well as his listeners, in the case of verbal
communication, are, themselves, overdetermined phenomena. This is to say, there are an
indefinite number of distinct sequences of virtual energy transitions, which are capable of
producing the very same real energy transition within the neural network. This assertion
reminds us of Feynman's sum-over-histories formalism for calculating the probabilities of
fundamental particle reactions. If it were not for the existence of energy degeneracy
within the neural network, there would be only one path of neural firings possible
connecting one energy level of the network to the next higher one.
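The Feynman formalism alluded to here assigns to a transition an amplitude summed over every path connecting the endpoints, each path weighted by its classical action:

```latex
K(b,a) \;=\; \sum_{\text{paths } x(t)} e^{\,iS[x(t)]/\hbar} ,
```

so that many distinct histories contribute to one and the same measurable transition, which is the analogue of the many virtual-transition sequences producing a single real transition in the network.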

The operation of a neural network would in this case be formalizable in the form of a
computer algorithm. With degeneracy, the way is opened to what is called intentionality:
the very same determinate sequence of neural firings may, in other words, have an
unlimited number of alternative future brain states in view. It is interesting to note
that the interaction of the virtual energy states of the neural network is not mediated
primarily by the physical realization of the network itself, but by the next highest order of
perturbations to the energy of the neural network. What we have been calling "virtual
energy transitions," are really only the first order perturbations to the global energy of the
neural network, conceived of as a quantum mechanical system. The first order
perturbations, what we have been calling, "virtual transitions" within the network, are
themselves, informed or mediated by, the quantum mechanical perturbations to the first
order perturbation energies of the network, i.e., 2nd order energy perturbations, thus
making the first order perturbations overdetermined phenomena as well. In turn, the
second order perturbation energy transitions (what might whimsically be called, virtual -
virtual energy transitions) are mediated by the occurrence of transitions between second
order perturbation energies, etc., and so on. At this point we might realize that the real
energy transitions occurring within the neural network which are normally thought to be
immediately responsible for the processing of all information by the network, are
engendered not by physical processes occurring at a lower level of organization, but via
processes taking place at higher levels of organization, represented by the next higher
order perturbation energy description of the neural network. We know that the virtual
energy transitions of any given quantum mechanical system are owing to the presence of
energy uncertainty in the system. It is more accurate, however, to say that this energy
uncertainty is present in the quantum vacuum itself, and is merely communicated to the
quantum system, of interacting elementary particles, say, through the exchange of energy
between the quantum system and the vacuum, which is itself where the energy
uncertainty originates; we saw earlier that the wavefunction describing a quantum
mechanical system cannot be normalized if the energy uncertainty is conceived of as
being a property of the quantum system itself, so that it must be an inherent property of
the quantum vacuum.
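The hierarchy of perturbation orders invoked above can be made explicit; in standard Rayleigh–Schrödinger perturbation theory the first- and second-order energy corrections are:

```latex
E_n^{(1)} = \langle n^{(0)} \vert \hat{V} \vert n^{(0)} \rangle ,
\qquad
E_n^{(2)} = \sum_{m \neq n}
\frac{\bigl\vert \langle m^{(0)} \vert \hat{V} \vert n^{(0)} \rangle \bigr\vert^{2}}
     {E_n^{(0)} - E_m^{(0)}} ,
```

where the second-order correction is a sum over intermediate ("virtual") states, which is the formal sense in which each order of correction is mediated by the next.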

So perhaps we see now that the neural network itself acts merely as a kind of terminus to
an information reduction process, it acts as a kind of "reducing valve" which serves to
abstract a relatively tiny portion of information from the virtually infinite information
content of the overdetermined quantum vacuum which is instantaneously and nonlocally
connected with itself, and therefore represents the highest level of information processing
because it constitutes the interconnectivity of energy at its most exhaustive level. On this
view, information, like energy, may not be created or destroyed, but is a conserved
quantity, and its ultimate source is the infinite energy reservoir represented by the
quantum vacuum. We already saw how Temporality itself stems from the presence of
quantum energy uncertainty, which, in turn, originates in the vacuum, conceived of,
again, as a reservoir of virtually infinite energy density. Consequently, since Temporality
itself has its origin in the vacuum, it follows that this infinite sea of vacuum energy
itself had no beginning in time! The vacuum now begins to remind us of Heraclitus'
"ever living fire," "in measures kindling and in measures going out," and thereby
mediating, as an eternal flux, all the changes continually taking place in the natural order.
Moreover, Heraclitus' statement that, "everything is an exchange for fire, and fire an
exchange for every thing," reminds us of the interconvertibility of mass and energy in
quantum relativistic field theory, this interconvertibility being mediated by the continual
exchange of energy between matter and vacuum. Heraclitus' "ever living fire" is to him
the fire of the gods, yet uncreated by the gods. His statement that "Thunderbolt steers
the Universe" no doubt refers to the thunderbolt wielded by Zeus, the greatest of all the
Olympian gods; when this thunderbolt is identified with "the fire of the gods," that is,
with Heraclitus' ever living fire, the parallel between it and the vacuum becomes an
intriguingly close one; the quantum vacuum, by eternally mediating all physical
processes, manages to "steer the Universe." It is also interesting that Greek mythology
tells us that Time owes its existence to Chaos through the fact that the god, Chaos, is
named as the father of Kronos. Moreover, the Greek word, arche, which means source or
origin in ancient Greek, is translated into Latin as principium, i.e., ordering principle.
The idea behind this particular translation of arche into principium is the same one
expressed by Leibniz, when he states in his Monadology that, "the conditions
sufficient to create the world are necessary at every succeeding moment of time in order
to sustain the world's existence." We now arrive at the notion of first cause, not in the
usual sense of first in a temporal sequence, but in the at once broader and subtler sense of
most fundamental or substantive. "Given such a reality, the author concludes that human
mentality evolved in bottom-up fashion, with mind-associated neuronal systems not so
much creating mind as organizing a pre-existing propensity for awareness into useful,
functional awareness, and providing for its modulation by useful information," cf.
Implications of a Fundamental Consciousness (1998) by Copthorne MacDonald.

Since it is the pattern of virtual particle emission and absorption which every real particle
continually undergoes which determines the mass of the particle, it follows that real
particle masses are determined through the particular manner in which real particles
exchange energy with the fluctuating quantum vacuum field; consequently, alterations in
the density of the vacuum field energy will affect the masses of particles occupying this
vacuum. We might expect that this relationship between mass-energy and vacuum-
energy is symmetrical in nature because the interactions mediating the continual
exchange of energy between matter and vacuum are themselves reversible interactions.

November 1996
The quantum vacuum energy fluctuations collectively, as we have seen, may
be understood as the first cause of the world in the more fundamental sense of sustainer
of all of the structures ultimately deriving from it in that the quantum vacuum is the
originator of temporality. Matter does not possess a genuine substantial existence since
its energy is forever being replenished by the vacuum fluctuations continually interacting
with it, much in the same manner as a particular spot in a river is continually replenished
with new waters so that, as Heraclitus says, one cannot step twice into the same place
within it. This two-way causal, symmetrical relationship between mass energy and
vacuum energy within quantum field theory reminds us of a similar relationship between
mass and space-time curvature within the theory of general relativity: the presence of
mass within a given region of spacetime produces an additional curvature in this
spacetime; also, an increase in the curvature of a particular region of spacetime produces
an increase in the mass of particles or material bodies already occupying this region.
Since spatio-temporal variations in the energy density of the vacuum energy field are
correlated with variations in spacetime curvature, we might suppose that some sort of
conformal mapping relationship obtains between the ratio of real particle to virtual
particle energy densities and the degree of mutual inclination of the time and space axes (
of the Minkowski light cone ) to one another. This relationship is also suggested by the
fact that real particles are virtual particles which have been promoted to the level of real
existence through the absorption of energy; particles are excitations of the vacuum state
which is itself a reservoir or sea of virtual particles. Also, through the application of Mach's
formula for the speed of sound to this vacuum energy reservoir, we see that such a
conformal mapping relationship between Einsteinian space-time curvature and spatial-
temporal variations in the zero-point energy of the vacuum (or, alternatively, its energy
density) must involve mappings between the hypersolid angle swept out by the light line
in four-dimensional (Minkowski ) spacetime, and the energy density (or pressure) of the
vacuum.
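The "Mach's formula" invoked here is presumably the Newton–Laplace relation for the speed of sound in a medium of pressure p and density ρ:

```latex
c_s \;=\; \sqrt{\frac{\partial p}{\partial \rho}} ,
```

applied, on the author's speculation, to the vacuum energy reservoir, so that variations in vacuum energy density (or pressure) would fix a characteristic propagation speed.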

We must distinguish between evolution's creative and its critical faculties. Adaptation is
not the purpose of evolution; it is the trial and error critical process which evolution is
subjected to by the contingent environmental conditions within which it finds itself
operating. Darwinian natural selection is merely a critical process; it is not in any way a
creative process, in and of itself. Natural selection merely structures, channels or filters
the creative suggestions made to it; it plays no role whatever in the fundamental
dynamism driving biological evolution; natural selection is merely the set of boundary
conditions within which the dynamism of evolution functions, perhaps in the sense of
Bergson's elan vital. Here again, we have an example of how boundary and initial
conditions are essentially separable from the dynamisms which they circumscribe. Similar
remarks were made regarding the process of technological advancement, which was
viewed as a progression in sophistication in imposing initial and boundary conditions
upon the invariant and unitary dynamism of Nature. We know that natural selection is
not able to operate unless self-reproducing information-bearing structures are already in
existence; moreover, natural selection has little opportunity to mold these self-
reproducing structures into more complex forms unless it can profit from the creative
suggestions made to it through the operation of random mutations, themselves useless if
they cannot be passed on to future offspring. So it is also necessary that something
analogous to a genetic code be contained within these self-reproducing structures,
themselves the expression of the information contained within this genetic code.

The problem, then, with Darwinism, or its modern derivative, neo-Darwinism, is that a
great deal of evolutionary development must have already occurred, in the form of so-
called chemical evolution prior to the appearance of the first self-reproducing,
information bearing structures, before the distinctly Darwinian process of natural
selection of random mutations is permitted to begin. So the creative dynamism, spoken
of previously, is to be identified with that dynamism lying behind the prebiotic process of
chemical evolution, a process which does not halt its operation once the Darwinian
process of evolution commences, but which continues hand in hand with natural
selection, and, moreover, maintaining its central role as the motivating dynamism in the
evolution of progressively more complex forms of life. The subjective "I am," which is
just the individual's personal consciousness, dependent upon the objective world outside
itself, is not to be confused with the objective "I AM" (see Nisargadatta Maharaj's book,
I AM THAT), which is the one and unique self-existent Consciousness which is
the source of all individual subjective consciousnesses.

It is the quite common type of fantasy, frequently indulged in by proud and vain human
mortals, to imagine oneself in some glorious situation where one is either vindicated,
suddenly elevated in greatness (or suddenly shown to have been great) or rendered,
usually by one's own efforts, victorious or triumphant over some powerful adversity or
persecution, or moreover, to receive praise and adulation from the many as one speaks,
performs or otherwise acts in a manner which compellingly displays one's authority. We
human beings indulge in this kind of fantasization a great deal when we are children,
perhaps more so when developing adolescents, while some of us, upon becoming
"adults," tend as we approach middle age to set aside, eventually completely, such
obvious puerile self-glorifications of the imagination. Some of us, on the other hand,
never seem to put such self-gloried imaginings behind us, despite advancing age and
maturity. Everyone has either heard of or reflected upon the phenomenon of
selfishness exhibited in the strong tendency we all have of seeking out (in secret, of
course) from a pack of family photographs, those particular photos in which we ourselves
appear. Eventually, we become aware of the implications of this kind of behavior, which
shames us: if everyone were to be this self-centered, then there would be no one left to
care for me as much as (or more than) I care for myself and I would be a selfish person
alone in a universe of selfish persons. Of course, part of one's motivation for thinking in
this way, perhaps unbeknownst to oneself, is the Kantian categorical imperative, a formal
cousin of the Golden Rule; to wit, that I must act in such a manner that I can will that my
action become a universal moral
law. But what of the phenomenon where I imagine myself being some other person
when I am in the midst of some profound or deeply moving aesthetic or intellectual
experience; I imagine what this experience would be like for this other person and
somehow the intensity and wonder of the experience is amplified for me, myself, through
this psychological projection. Part of the augmentation of the aesthetic experience for me
is the sense of personal, if partly disembodied glory which redounds to my sense of
identity because it is I who am leading this person, in my mind's eye, to the
unaccustomed though fuller appreciation of this experience. Partly again, the experience
is, for me, augmented because I borrow the other's innocence, using it as a foil against
which the experience may be rediscovered by me in all its aboriginal wonder. Moreover,
I act as this person's spiritual mentor; I help this person penetrate a mystery which I have
long ago discovered for myself, and if this other person is ultimately identified with my
own self; this is implied because all of these projections occur within my mind's eye, then
I seek to view myself as the father of my own future spiritual and intellectual
development.

But on a more basic human level, I am imagining the sharing of a profound idea or
experience with another person in a way which is seldom, if ever demonstrated in actual
social intercourse with my fellow human beings; certainly it is love which motivates this
peculiar psychological projection - the kind of love which does not distinguish self from
other.

I have no acquaintance with either physical objects so-called nor with any phenomena
taking place within the domain of other minds; in fact, I have had no acquaintance with
any phenomena whatever other than those pertaining to my own psychological states,
states which are presumably closely related to the concerted electrochemical activity of
my billions of cortical neurons. Consequently, I am forced to accept the existence of
physical objects and other minds purely on a faith borne of appearances, which might be
easily explained numerous other ways than those, which seem to be dictated by what is
called "common sense." I wish to remark here that if an omniscient (not to mention
omnipotent) God fails to exist, He being, by the way, the only possible guarantor of the
existence of an objective world containing consciousnesses other than my own, then there
is absolutely nothing standing in the way of my drawing the less than comforting
conclusion that I alone exist, i.e., Solipsism is the metaphysical truth, and moreover, there
is absolutely nothing standing in the way of my concluding that I, myself, am ultimately
responsible for all the phenomena which I have experienced or ever will experience and
that God does indeed exist and that I am Him. But of course, I wholeheartedly reject
such a preposterous conclusion: solipsism is a thesis, which I must reject out of hand and
with it the proposition that God does not exist. What I have just stated above is by no
means a rational proof of the existence of God. But it is an argument, which reflects the
inner logic of the whole of my being in its ineluctable affirmation of His existence from
which I have not the strength to depart. The very structure of language contains within
itself the subtle presupposition that all human beings possess the belief, whether they
consciously realize it or not, that the sum total of possible knowledge is integral. But
this hypothesis about the inherently integral nature of knowledge implies, in turn, the
existence of a unitary repository for this sum of knowledge – one that is by its very nature
dynamic (because knowledge cannot be "static"), which is to say, a universal mind or intellect. It
occurs to me that all true mysteries are intimately connected and intertwined with one
another; to find the solution to only one of these mysteries would mean having found the
answer to all, since in the case of solving either it was necessary to trace back to the same
common origin.

Just listing some examples may succeed in illustrating to one's deeper intuition that this
must be true: a few of these mysteries are that of existence itself, the origin of
consciousness, freedom of the will, the mystery of Time and along with it that of eternity,
the mystery of immortality and that of divinity. A bright young child may agree with this
observation, remarking, "well, God exists and He knows the answer to all things." But it
really does seem that the contemplation of any one of these mysteries inevitably leads to
the contemplation of all the others as well as some which I haven't mentioned, and one
may ask, "why might this be so?" Most individuals are totally incapable of what is called
lucid dreaming, dreaming where the dreamer remains aware that it is he who is in control
of all the action of the dream. Freud's doctrine of the conservation of psychic energy
suggests that the control of the dream action is mediated by the domain of the psyche
lying between full consciousness and the level of consciousness at which the dreamer's
mind operates so that the action of the dream must dissolve upon the dreamer attaining
his full consciousness because the intermediary domain of consciousness which controls
the dream is reduced to nil or "squeezed out." An analogy will serve here. A river may
not rise to an altitude greater than that of its source, at which level its kinetic energy is
completely transmuted into potential energy. It might therefore be thought that only
those individuals who experience repression of their normal full consciousness would be
capable of "lucid dreaming" as the control of the dream action would be mediated by the
consciousness within the domain between the individual's repressed consciousness and
his normal potentially full consciousness; this is just a slightly more abstract way of
saying that the psychic energy which is usually unavailable for utilization by the
conscious mind is freed up during the unconsciousness of sleep and rendered available to
the unconscious for use in mediating the phantasmagorical action of the various dream
sequences, themselves, according to Freud, the acting out of wish fulfillments. The
upshot of all this is that the presence of lucid dreaming is a possible indication that the
individual experiencing it is not reaching his normal psychic potential for full wakeful
consciousness and that the reason for this is a deficit of available psychic energy due to
the presence of myriad emotional conflicts lying repressed within his unconscious mind.
Psychic energy is bound up for use in maintaining a compartmentalization of early
experiences repressed from conscious recollection. There is no reason during the
unconsciousness of sleep for this psychic energy to continue to be diverted to man the
defenses against the recollection of early childhood experiences by the now inert,
anaesthetized ego, and so this psychic energy suddenly becomes available during deep
sleep.

November 1996
I have frequently had the experience of a renewed fixation upon some person,
usually a former love-interest, lasting from hours to perhaps an entire 24-hour period,
whenever I had dreamed about the person on the previous night. Perhaps the
remobilization of repressive psychic defenses takes a characteristic time of several hours
after these defenses have been relaxed during dreaming. Repressed wish-fulfillment
fantasies may sometimes manifest themselves as "false memories." The only way that
one knows these memories to be false is through their failure to cohere with other well-
established parts of one's biography, parts which one has memories of having previously
recollected at numerous different times throughout one's past. One does not possess
memories of having recollected the false memories at any time in the past so that they are
not "woven into" one's biographic database, as it were. These false memories do have,
however, the compelling tinge of having actually happened in the form of a very
idiosyncratic feeling associated with them which is perhaps merely the more or less
requisite vividness of a legitimate recollection. In order for the categories of Being and
non-Being to be of an absolute nature, the indeterminate or infinite must possess a
structure so that nothing contained within it may possess a definition as such. But this is
precisely what the indeterminate does not admit. Reality is both bottomless and without
a determinate apex. Reality is, in other words, boundless and this is what gives it its
fundamental, which is to say, irreducible temporality.

April 1997
This is Eternity bringing forth Time. Therefore, all categorical distinctions are
transcended by Reality, including that of Being and non-Being. Existence is then a
genuine predicate, but not an absolute one. Given any domain there is, indeed, a most
perfect being, but perfection is now merely relative and there is no Absolute Perfection.
There is, however, that which transcends any possible relative perfection and this is the
eternally pre-existent, boundless Indeterminate in which unlimited possibilities exist in
potentia.

Quantum Mechanics verifies the old Scholastic metaphysical understanding of all change
or Becoming as occurring due to a deficit of Being: all real physical processes are
mediated via virtual processes; these virtual processes possess by definition an energy
smaller than the energy uncertainty of the quantum mechanical system which is
comprised by the real processes mediated by them. The total energy uncertainty of a
quantum mechanical system is, by the way, relative to the reference frame within which
the system is "viewed," and therefore differences in the vacuum's zero-point energy
reflect changes in our frame of motion - in the sense provided by relativity. More
specifically, Lorentz contractions occur not only to the eigenvalues of length, but also to
the quantum uncertainties of length. Similar statements may be made with respect to
momentum, energy, time, etc. The unity of all opposites cannot itself possess an
opposite.
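Taken at face value (this is the author's extrapolation, not standard theory), the claim about contracting uncertainties is that a length uncertainty transforms like a length under a boost of velocity v:

```latex
\Delta x' = \frac{\Delta x}{\gamma},
\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} ,
```

with time, and hence via the Heisenberg relation energy, uncertainties transforming correspondingly.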

April 1997
Duality arises out of that which is itself Nondual. The basis of identity which
transcends abstract description is that of continuity, substantial continuity. This is where
substance is the concrete medium from which all forms are generated through abstraction
which is limitation and negation. The modern version of the Being versus Becoming
dichotomy of the pre-Socratic philosophers is that of Space and Time of modern physics.
Whereas Being and Becoming were thought to be disjoint categories by the Greeks, in
modern times, the theory of relativity has shown space and time to have only a separate
existence as abstractions dependent upon the frame of reference of the observer within
objective spacetime.
If through Descartes' categories of Res Cogitans vs. Res Extensa
we relate space to matter and time to mind, then relativity perhaps points to a unity which
transcends this distinction of mind vs. matter. Newtonian mechanics effectively
"spatialized" time. Physical laws are simply "descriptions of," as Russell reminds us,
―how Nature, in fact, behaves;" Nature does not "obey" any such "laws," nor is she
"governed" by them. This would be to explain a process in terms of the very derivatives
or by-products which necessarily presuppose the process allegedly "explained" by them.
An example of this fallacy is saying that natural selection "explains" or "causes"
evolution. Natural Selection cannot even begin to operate until a genome, or some such
unit of heredity, is already in existence. The question concerning the origins of life and
that of Darwinian evolution are seemingly quite distinct. This is a great and ever growing
problem for evolution theory as the science of self-organizing complexity continues to
develop. We speak always of the information contained within the genetic code as being
"expressed," either in terms of the synthesis of particular proteins or as control of the
expression of other genes. Yet we never seem to think of the fact that language is two-
sided; information is not only expressible, that is, decodable, but it is also encodable.
Without the interpretive function of mind, information is never expressed; data are
merely transduced from one form within one medium into another form in another. Can
the function of expression be reversible? Transduction seems to fall short of the
creativity of expression. The function of expression is not reversible without the context
for the original encoding, examples of which are the creation of art, music, or poetry.
Would we be satisfied with a concept of information as merely metadata?

January 1998
There are myriad medically documented cases of persons in states of profound
hypothermia having undergone cardiac arrest and existing in a state of clinical death for
periods of up to several hours who, upon being gradually thawed and heated in an
emergency operating room, revived completely and without exhibiting any sign of brain
trauma or loss of other healthy physiological function. Open heart surgery is now
performed in the former Soviet Union upon patients whose core body temperatures have
been carefully lowered to just above freezing to buy precious additional hours of surgical
time in cases where particularly complex and life-threatening procedures are required.
Now assuming that the persons who revive are the selfsame individuals whose bodies had
been in a state of clinical death, we may conclude that, whatever is causally necessary to
provide the underlying continuity of personal identity which remains preserved
throughout, must not depend upon the chemical processes which are significantly
impaired or halted as a result of a near freezing core body temperature. We may deduce
from this that, although one's individual consciousness is structured and shaped by the
near infinite number of electrochemical reactions taking place within the brain, such
electrochemical processes are not, themselves, responsible for the fact of one's individual
consciousness existing. Perhaps this line of reasoning appears to beg the question, since
the emphasis placed on continuity here presupposes that consciousness is some kind of
substance.

April 1997
Darwinian evolutionary theory is always opposed to the special creation theory,
but Darwinian theory does not really stand in complete contradiction to the Biblical
creation account, because it admits no flow of information into the genome - only
outward flow in the form of the genome's expression as phenotype. In any
informational system, however, elements do not contain information statically, but only
within an interpretive context. This implies that the genome represents a nexus for the
exchange of information between two or more systems, and so the Darwinian doctrine of
one-way flow of genetic information renders the theory inconsistent and prevents it from
being in true opposition to the special creation theory.

April 1997
Encoding is over-determination. Decoding is under-determination. Hence, in a
deterministic system, in which one state of the system simply determines, and neither
over-determines nor under-determines, its succeeding state - in such a system neither
decoding nor encoding is taking place. Information is not a relevant quantity where
strictly deterministic systems are concerned.
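The claim that information is not a relevant quantity in a strictly deterministic system agrees with Shannon's measure, which assigns zero information to a fully determined outcome. The following check is my illustration, not the author's:

```python
import math

# Shannon's measure assigns zero entropy to an outcome with probability 1
# (one state strictly determines the next) and positive entropy only where
# the succeeding state is under-determined.
def entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

deterministic = [1.0]           # successor state fully determined
stochastic = [0.5, 0.25, 0.25]  # successor state under-determined

print(entropy(deterministic))   # 0.0
print(entropy(stochastic))      # 1.5
```

On this measure, encoding and decoding only have work to do where the distribution is nondegenerate, matching the over-/under-determination contrast drawn above.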

This precise "deficit of Being" may only be defined in terms of the complete description
of the total process; this description, as already noted, exists only for itself and cannot be
derived from without as an infinite regress of description stands in our way here. In this
connection we may state the fundamental principle that "the mediator may not be defined
in terms of the total set of processes which it mediates." This "deficit of Being" of
Scholastic philosophy is exactly analogous to the energy uncertainty of quantum
mechanics. If Hegel is correct in saying that positive entities exist only by virtue of the
accumulation of various negations of relations holding between the Absolute and itself,
then each entity must more or less clearly and distinctly exhibit the unified totality of
Reality in microcosm, the developmental program of which is contained in the greater
quantity of information which is determined when a number of these entities come into
conjunction with one another. For instance, the molecular bonding of atoms, whether it
be ionic, covalent, or by the weaker van der Waals force, cannot be induced to occur
within a previously specified period of time simply through the manipulation of the
atoms "from outside;" one may only place the atoms in the approximate position
whereupon the action of bond formation ensues through the spontaneous action of the
vacuum electromagnetic field. In fact, the quantum mechanical method utilized in
physical or quantum chemistry to determine the various bonding probabilities has nothing
whatever to say about which precise configuration or shape, out of the many possible
configurations, will be formed as a result of the spontaneous bonding process.

This fact is readily seen when one attempts to view the spontaneous conformation of a
denatured macromolecule such as a protein as the result of the amino acid residues
composing the molecule trying myriad possible different conformations by trial and error
until the energetically favored (most stable) conformation is found. In even relatively
small macromolecules the total number of possible conformations is so staggeringly large
that even if the components try a different configuration every 10^-13 seconds (a very
liberal assumption) the time required to hit on the "correct" conformation by chance
would take many orders of magnitude longer than the present age of the observable
universe! The wavefunction which describes the total system of interacting atoms
entering into the bonding process is approximated as a product of the individual
wavefunctions describing the approximate quantum state of the atoms; it is only the
complete wavefunction which describes these atoms as being inextricably entangled with
the whole vacuum-energy/ mass-energy matrix which contains the information about the
shape of the resultant molecule, among other things. No individual quantum mechanical
wavefunction is truly normalizable, although a large ensemble of such wavefunctions will
approach normalizability in the limit of infinite ensembles. There will always appear to
be coupling between the eigenstates of a wavefunction which is, itself, merely an
approximately exact wavefunction; in reality, there is only one universal wavefunction,
as its normalizability requires.
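The conformational-search arithmetic above (Levinthal's paradox) is easy to verify. The residue count and the number of conformations per residue below are illustrative assumptions of mine; only the 10^-13 second trial time comes from the text:

```python
# Back-of-envelope check of the conformational-search argument.
# Residue count and conformations per residue are illustrative assumptions.
residues = 100                  # a modest macromolecule
conformations_per_residue = 3   # a conservative choice
trial_time = 1e-13              # seconds per trial, as in the text

total_conformations = conformations_per_residue ** residues
search_time = total_conformations * trial_time  # exhaustive search, seconds

age_of_universe = 4.3e17        # seconds, roughly 13.8 billion years

print(f"{search_time:.1e} s")             # ~5e+34 seconds
print(search_time / age_of_universe)      # ~1e+17 ages of the universe
```

Even under these deliberately generous assumptions, blind trial and error overshoots the age of the observable universe by many orders of magnitude, which is the force of the argument in the text.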

This process is very much akin to the decrease of fuzziness in a holographic image which
occurs on two or more pieces of a shattered holographic emulsion when the various
pieces are refitted together. On this view all development of material systems from the
simplest subatomic constituents to the most complex living organisms, consists in the
negation of negation, engendered by the conjunctions of these constituents which occurs
by chance outside the quantum mechanical positional (or length) uncertainty of the
constituents, but which is actively directed once they interpenetrate within this
uncertainty, where they enter into the effective range of their innate quantum-uncertainty
mediated tendencies toward self-organization, tendencies which are a manifestation of
the partially distinct "image" of the Absolute, i.e., the universal wavefunction, which each
constituent contains in microcosm within itself. Because creation is conceived under the
aspect of the negation of the negation of contextual relatedness within the Absolute, this
negation which is negated being understood as the reduction of information resulting
from the partial expression of the universal wavefunction as a product of partial
wavefunctions corresponding to relatively uncertain subatomic constituents, the problem
of "Why is there something rather than nothing?," is no longer rendered insoluble by the
requirement of explaining creation ex nihilo, but it is simply recognized that there has to
have always been something or other, so what better candidate for this something than
that unified plenum which is its own description, which is per Aristotle its own eternal
contemplation; that entity which Hegel calls the Absolute, and which we have styled
"the universal wavefunction." "In the same way, the Buddhists express the same idea
when they call the ultimate reality sunyata for emptiness or void and affirm that it is a
living void which gives birth to all forms in the phenomenological world." (Capra, 1976)

Dr. Scholem, in his Major Trends in Jewish Mysticism, tells us that in any case where
there is a transition from one state or configuration to another, "the void is spanned
for an instant." Madame Blavatsky refers to evolution as a "spiritual Opus in reverse,"
meaning that the world arose through a process of involution (a reduction of the infinite
to the finite), but containing a vague recollection of the whole from which it was
originally abstracted and which guides its future development. This future development
is constituted by the return of the Absolute back unto itself, directed from above and
hence is a recapitulation of the timeless and transcendent within the realm of the temporal
and immanent. This is to understand evolution as the negation of negation. To say that
Reality cannot be given a complete description on account of the inevitable pitfall of
infinite regress is merely to say that, if this description does not already exist, then it can
in no way be constructed; there is nothing in what has just been said to prevent a
complete description of Reality, which has always existed. In fact, it is the lack of
a complete description which is responsible for the existence of temporality, cf. the
fluctuations necessarily associated with a perturbation theory description of the quantum
field. This observation is very much in the same spirit as the mathematician Kurt
Gödel's discovery that the notion of truth is stronger than the notion of provability; the
fact that a theorem expressible within the symbolic language of a particular logico-
deductive, mathematical system may not be constructed from the axioms of its system
utilizing the rules of inference of this system does not prevent this theorem from
"existing" in the sense of its being true, i.e., "subsisting".

September 2011
It is actually the
interaction of the open-ended sets of suppressed details (that is, those concrete details
suppressed that originally constituted the categories of abstract thought), which do the
"heavy lifting" on behalf of creative intelligence and not the categories themselves. The
categories, when wielded (just think of tools here) merely signal to the underlying
creative ground, which direction it is to apply the genius of colluding devilish detail to
which problem, as well as conveniently indexing the end products of lucubration and so
enabling their inter-subjective communication. On this view the sociolinguistic construct
of the self is merely the master abstract category, which is but one of the many categories
of thought, and which serves the real master that is the will.


But one might ask what is the meaning of mathematical theorems which are beyond the
comprehension of any finite mind and which are not true by construction from a possible
collection of axioms, but true in some more fundamental sense. Wittgenstein tells us
that we may not use substantives in a manner which attempts to extend the reference of
these terms beyond the scope of all possible experience without falling into either
meaninglessness or absurdity. Therefore, when we ask the question, "Does God exist?"
the most we can possibly mean by this question is, "Does God exist within our space-time
continuum?" We cannot ask whether God exists in Reality (Wirklichkeit) since Reality
cannot possess a complete description without this description having always existed and
without admitting the existence of such a description, one necessarily beyond all possible
attempts to construct it, and which may only exist from a single point of view or
perspective. So it seems that one may not ask whether God exists in Reality (in the sense
of Wirklichkeit) without presupposing an answer in the affirmative, because the
admission of the existence of Ultimate Reality is one and the same with the admission of
God's existence. It is meaningful, however, to ask this same question in the sense of
Realität. That which has always existed, and its complete description, which has always
existed must be a part of this same eternally pre-existent Reality. It is obvious that the
description and the Reality must be essentially one and the same entity which is, who is,
God. The finite may not be complete without conditions being specified, and these
specified conditions may not obtain except within the purview of a larger context,
containing the particular finite thing.

Only those independently occurring genetic mutations at lower levels in the
gene regulatory hierarchy, i.e., to structural genes, may become integrated together
within a member of an evolving species which might have possessed a common source in
the form of a single mutation to a higher-order regulatory gene, one which controls the
expression of the original set of structural genes prior to the occurrence of these
independent mutations. The operation of Darwinian natural selection presupposes the
existence of a gene regulatory hierarchy which controls not only the expression of
individual genes, but more importantly controls the integration of the expressions of large
numbers of genes at the lower levels of this hierarchy.

To reiterate, there has always been something. What better candidate for this something
than that than which nothing greater can be conceived (per Anselm)? This something
has no complete description except within and through itself. The only truly determinate
entity is the totality of everything which is possible or it is the void, as nothingness is, by
its very nature, determinate. It has been humorously noted that Time exists so that
everything will not happen at once. Given this obvious truth, then if Reality is infinite in
scope, there is no "everything" for there to possibly be given or occur all at once. For the
term "everything" presupposes a finally completed something which is incompatible with
infinitude. So temporality may be simply necessary in an infinite Reality. Temporality
seems to require that (per Bergson) genuine novelty continually intrude into the world.
The act of creation is not an event within history; it is the beginning of history; it is the
very inception of novelty. Bergson advocates what might be called a Machian theory of
Time.

But since the continued presence of temporality seems to require continual activity of
creation, it seems more consistent to assume that creation itself is a fundamental process
which itself had no beginning, which was itself never created, that the activity of creation
is that which has always existed and which requires no explanation other than itself. On
the other hand, however, it seems that this fundamental process of creation cannot be a
unified process, for otherwise it is an act which could have been consummated
instantaneously, once for all. Temporality must therefore be a process of recapitulation
of timeless reality with Reality itself as the medium through which the recapitulation is
accomplished. If Reality has a complete description, it is not one which can be
constructed, not one which had any origin in time: it is indeterminacy itself which is
ultimately responsible for the existence of temporality itself. If such a complete
description exists it must have always existed and the description and the reality must be
one and the same. Consciousness offers itself immediately as a likely candidate for such
an ultimate reality since consciousness is its own representation. If it is true that
consciousness is required in order to collapse a quantum mechanical wavefunction into a
determinate eigenstate, then consciousness, if it had an origin in time, must have arisen
out of the interaction of uncollapsed wavefunctions - it must have arisen out of a situation
of infinite energy uncertainty. The velocity of light is determined by the ratio of real
infinite energy uncertainty. The velocity of light is determined by the ratio of real
particle energy to virtual particle energy. Hence, the elsewhere region of the Minkowski
light cone may be identified with that region of the vacuum which stands in a relation of
Bell nonlocality to the observer. The unity of the conscious mind is no doubt owing to
Bell-nonlocal connectivity of neurons within the brain. If it is true that there has always
been something (as in the metaphysical question, "why is there something rather than
nothing"), out of which the Universe arose, assuming that this something is not just the
Universe itself, then there must not be, ultimately speaking, a universal or all
encompassing, ongoing process of either evolution or degradation in the development of
Reality. This is because by this present time an infinite amount of time must have
already passed so that Reality should have either degraded into utter nothingness or
reached a state of eternally
unchanging perfection and we do not observe the Universe to presently occupy either of
these two extreme states: temporality could not exist within a universe which derives its
existence from a ground of unchanging perfection (fullness of being); nor could the
universe derive its existence from a ground of nothingness (complete degradation).

December 1996
Evolution and devolution are concepts which may only possess an application
locally, however broadly. We now arrive at the conclusion that Reality as a whole is
neither evolving (increasing in complexity) nor is it devolving (decreasing in complexity)
so that any apparent changes in complexity in the Universe, e.g., biological evolution,
increasing entropy, are merely local changes which are on the whole compensated by
opposing processes elsewhere. We may think of causal relationships as obtaining
between terms or entities occupying a plane of equal levels of abstraction with the
process of abstraction itself and its opposing process, that of concretion, being processes
which do not admit of an exhaustively causal analysis. If it is only possible to alter the
boundary conditions and initial conditions which the dynamism of Nature is subject to,
but not to alter in any way the dynamism itself, then the most advanced technologies will
amount to nothing more than having discovered how to goad Nature into doing, in the
best way she knows how, what she has all along been in the very act of doing. It is the
possibility of formulating recursive propositions and this possibility alone which allows
the domain of true theorems, expressible within the language of a deductive system, to
transcend in magnitude the domain of provable theorems, i.e., theorems which may be
deduced from the axioms of the system through application of the rules of inference of
the system. There is no comprehensive rule by which a computing device may recognize
the appearance of a recursive proposition since recursiveness is a structure which can
only be suggested; it cannot be literally stated. All baryons are composed of various
combinations of three different quarks out of the six possible different types of quark; the
mesons, however, are each composed of quark-antiquark pairs drawn from among the six
fundamental quark types. The fundamental force responsible for binding together the
various quark combinations out of which all baryons and mesons are composed possesses
the bizarre property of increasing in strength as the distance separating the quarks
increases.
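The earlier claim in this passage, that no comprehensive rule lets a computing device recognize recursive propositions, echoes Turing's diagonal argument against a general halting detector. The sketch below is my hedged illustration, not the author's construction:

```python
# Sketch of the diagonal argument: suppose halts(prog, inp) were a total,
# correct halting detector. The program built below would then halt if and
# only if it does not halt - a contradiction - so no such detector exists.
def paradox_builder(halts):
    """Given a purported halting detector, build its counterexample."""
    def diagonal(prog):
        if halts(prog, prog):   # if the detector claims "halts"...
            while True:         # ...loop forever,
                pass
        return None             # ...otherwise halt immediately.
    return diagonal

# Feeding diagonal to itself defeats any candidate detector:
# halts(diagonal, diagonal) can be neither True nor False consistently.
```

Any fixed rule for spotting self-reference can be outrun in just this way, which is why recursiveness, as the text puts it, can only be suggested and never exhaustively stated.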

The important result of this is that it is impossible to fragment these quark laden particles
into their fundamental constituent quarks: it is impossible to produce a "free quark," in
other words. This is almost as though the quark does not possess a determinate energy
structure except as it exists within groups of two, three or possibly larger numbers of
quarks. The quark may be an example of an entity whose identity is totally context
dependent with the quark itself, paradoxically, providing the context for its own
determination as a substantive entity. Such an entity might not possess a definite energy
equivalence in the sense that it is not possible to determine the quark's energy
independently of the energies associated with the force fields the particle produces. An
entity such as the quark which is defined in terms of the combined effect that it has upon
itself (and one or more duplicates of itself ) provides us with the best example to date of
what might be called a self-existent entity. Quantum nonlocality effects could govern the
superluminal transmission of information between various widely separated locations
within the brain's cortex without producing a verifiable violation of special relativity's
restriction on superluminal transmission of information between separate minds. This
would be possible if the very same nonlocality effects are required for the integration of
the individual's consciousness as a whole. The idea or flash of insight would then be
time-delayed by the necessary time interval involved in conveying this idea from the
location in his brain where the crucial integration occurred to those parts of his nervous
system which pick up this message and in turn translate it into classically described nerve
impulses then responsible for the ultimate series of muscle cell contractions required to
transmit the message to the external world of attending observers.

Another way of looking at this kind of nonlocality is for the nonlocalized quantum effects
to be confined to a vacuum energy/information reservoir, exhaustively connected with
itself nonlocally, which is continually tapped into by the neural network of the person's
brain. There seems to be nothing to strictly forbid the existence of superluminal causal
connections between events which lie outside of any observer's Minkowski light cone.

Since the common sense view alleges that the past is nothing more than a fixed,
crystallized record of what has actually occurred within the present, it follows from this
view then that a present which has not had adequate time within which to resolve its
internal contradictions and ambiguities must retain a certain small degree of freedom
within which change might continue even after this moment becomes past. In this way,
backwards causality would be admissible, if only for the purpose of "cleaning up" the
"loose ends" of still indeterminate entities occupying past times. But doesn't common
sense also define the past in such a manner that it does not actually become past as such
until such a point as this crystallization process is complete?

January 1998
Within the above schema of "concrete" temporal evolution, the time dimension
cannot be spatialized and treated analogously to an additional dimension of space such as
it is treated by the Minkowski relativistic formalism.

In other words, common sense defines the past in such a way that the time axis is
necessarily orthogonal to the three dimensions of space and this at every spatial scale,
however small; it defines the past in such a manner that there is no substantive distinction
between past, present and future, which is to say, it defines the past as a half-infinite time
interval with its later endpoint being the present moment whose status as present must be
completely arbitrary. Within a deterministic time continuum there is no nonarbitrary
basis upon which any particular moment along the continuum may be chosen over other
contenders as being actually present.

January 1998
The first order approximate description of the first order perturbations to a
given quantum mechanical system may be evaluated in terms of the discrepancy between
the system's first order and its second order perturbation description. The true nature of
the perturbations of the system, to which the system entirely owes its genuine
temporality, cannot be understood with respect to any possible common descriptive
denominator which these perturbations may be thought to have with the formal elements
of the first or, for that matter, any higher order perturbative descriptions of the system. In
other words, in absolute terms, the perturbations to any system cannot be given any
formal description or representation of any kind.

A natural lawlike system of relationships which govern the behavior of a given entity
may only be formulated provided that this entity is not unique (provided that multiple
duplicates of the entity exist and are available), as in the case of subatomic particles, e.g.,
an electron.

January 1998
Otherwise, there is no objective means of distinguishing dependent from
independent variables of the system.

As observed by Jacques Monod in Chance and Necessity, "Science can neither say
nor do anything about a unique occurrence. It can only consider occurrences that form a
class, whose a priori probability, however faint, is yet definite."

We note here that a determinate occurrence merely constitutes a special case of a definite
probability, namely, that of unity. Consequently, if a probability of unity for a particular
occurrence demands that this occurrence be contained within a finite number which
altogether form a closed system of possible occurrences, then equally does so a definite
probability for this, or any other, occurrence within the self same system. It is doubtful
whether the probabilities of "material" events can change due to disturbances of the
system from outside which leave these events unchanged as discrete entities. For the
sensitivity of system elements within open systems follows from the general nature of
open systems as not being themselves constituted from closed systems. To wit,
additivity and commutativity, which are manifestations of system reversibility, do not
obtain within open systems.

The meaning of discursive symbols must be completely arbitrary if these symbols are to
be interpretable, i.e., if they are to be data to which information can be associated. There
must be an important distinction to be drawn between the dissemination of data and the
transmission of information. There is no distinction between these two types of
propagation of influence if the endpoints are not taken into account. The transmission of
data from point A to point B is merely an arbitrary link within a larger arbitrary link
connecting point A' to point B' of the longer path containing path AB. And the smaller
path, AB, is arbitrary because it exists only through the arbitrary abstraction of it from the
larger path A'B'. In other words, there is no reason for considering the path AB other than
of arbitrary choice. There is no determinate relationship between data and information
because they do not exist at the same level of abstraction. This would presuppose,
however, that data and information, are only different, generally speaking, in a relative
sense, i.e., relative to the level of abstraction.


The quest for the "theory of everything" is therefore doomed to ultimate failure, since
what we call "everything" is necessarily unique, and this uniqueness prevents us from
separating those "variables" which are particular to the thing itself from those which owe
in part to our investigatory involvement with this thing. The self, in the act of
investigating ultimate reality, must be included within the dynamic of the reality for
which we are seeking a complete description. This inherent recursiveness which lies at
the heart of any earnest attempt to develop a complete description of reality is alone
responsible for the fact that the domain of truth necessarily transcends the sum of
knowledge comprising any point of view (of reality).

Quantum Mechanics tells us that a closed dynamical system may only undergo temporal
evolution provided that a certain energy uncertainty exists within the system. This
energy uncertainty is just the standard deviation of the energy about its mean or
expectation value. This energy uncertainty may be interpreted in terms of a time-average
sum of random energy perturbations to the system "from outside" the system. The phase
of the isolated quantum system formally undergoes temporal evolution, but there is no
physical meaning to be attached to an absolute phase. It is only when another system is
brought into interaction with the first system that we get temporal evolution of relative
phases of the two systems which possess measurable and observable effects. If these
energy perturbations, or some component of them are not removable, are not merely the
artifacts of our inadequate perturbative analyses of quantum systems, but are
ontologically real, then the infinity, and perhaps the infinite dimensionality, of the world
logically follow.
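The energy uncertainty invoked above has a standard quantum-mechanical form; as a reference restatement (textbook material, not the author's own notation), the spread and the Mandelstam-Tamm time-energy relation it enters are:

```latex
% Standard definitions, supplied for reference (not the author's notation).
\Delta E = \sqrt{\langle \hat{H}^{2}\rangle - \langle \hat{H}\rangle^{2}},
\qquad
\Delta E \,\tau_{A} \;\ge\; \frac{\hbar}{2},
\quad\text{where}\quad
\tau_{A} = \frac{\Delta A}{\bigl|\,d\langle \hat{A}\rangle/dt\,\bigr|}.
```

On this reading, a vanishing energy spread forces the evolution timescale of every observable to diverge: a closed system with sharp energy is stationary, which is the claim made in the text.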

These random energy perturbations manifest themselves in the form of energy exchanges
between the quantum mechanical system and the sea of virtual particles in which this
system is embedded. The interaction of these virtual particles with the quantum
mechanical system are responsible for virtual transitions of the quantum state of the
system to other quantum states. The only real energy transitions available to the quantum
mechanical (dynamical) system are those from amongst the set of virtual energy
transitions which are continually occurring within the time interval specified by the
system's time uncertainty. The density of this virtual particle energy sea has a direct
bearing upon the rate of temporal evolution of any given quantum mechanical system.

Our central hypothesis is that the presence of matter has a perturbing effect upon this
virtual particle energy sea, i.e., the quantum vacuum field, and this perturbing effect is,
namely, to decrease the overall density of this vacuum energy which results in a similar
decrease in the time rate of change of all physical processes within the spatial volume
occupied by this matter. This proposed vacuum mechanism is exactly similar to the
mechanism by which a quantum resonant cavity decreases the rate of spontaneous
emission of 'cavity - detuned' photons by a Rydberg excited atom. The resonant cavity
achieves this by excluding most of the photons of half-wavelength larger than the cavity
diameter: to wit, it does this by decreasing the energy density of vacuum electromagnetic
field fluctuations of roughly the same energy as that of the suppressed atomic energy
transitions.
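The cavity suppression just described is conventionally quantified by the Purcell factor of cavity QED; as a hedged aside (a standard result, not stated in the text), the on-resonance modification of the spontaneous emission rate scales as

```latex
% Purcell factor (standard cavity QED result, supplied for reference).
F_{P} = \frac{3}{4\pi^{2}}\left(\frac{\lambda}{n}\right)^{3}\frac{Q}{V}
```

with Q the cavity quality factor and V its mode volume; a detuned or below-cutoff cavity conversely starves the transition of vacuum modes and suppresses emission, as the passage describes.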

Given this inherent circularity in the nature of technological growth, it follows that the
ultimate constituents of the World must be wholly recursive: they must be their own
symbolic representations. If a "conscious computer" is ever developed in what will
undoubtedly be the far distant future, this mysterious property of such a device will in no
way stem solely from the design or blueprint by which its engineers will have conceived
its construction; the blueprint will, of course, be a necessary component in the realization
of such a conscious machine, but will have been derived from a necessarily somewhat
fortuitous discovery, owing to much trial and error investigation, of the "correct"
hardware interface of the device with the recursive, self-organizing matrix of quantum -
vacuum energy fluctuations which underpin and mediate the stability of all fundamental
particles and their exchange forces. Only in appropriate dynamic conjunction with this
fluctuating energy matrix will any realization of a hardware design possess the
topological energy structure sufficient to tap the pre-existing "consciousness potential" of
spacetime. In other words, it is only the grace of Reality's fundamental dynamism which
will permit the eventual construction of a so-called conscious computing device.

This empirical discovery of the correct interface design will manifest itself perhaps
during a testing phase where a long series of simulated sensory inputs, of increasing
complexity, are in the process of being fed into the device while its output signals
(responses) are being closely monitored. The memory medium of the device will begin
to accumulate stored or remembered inputs which it has never received through its
various sensory apparatus. Identical sets or series of inputs will produce significantly
different series of outputs both from an individual machine over time as well as from
different machines at the same time - even if these machines possess identical designs.
Occasionally, radically different series or sets of inputs will produce identical sets of
outputs. A significant portion of the functional relationship between output and input
will depend upon changes in energy in the internal components of the machine's
hardware which are, themselves, smaller than the overall quantum energy uncertainty of
the device as a whole. Moreover, no mutually orthogonal set of eigenfunctions will
describe the "functioning components" of the device. This is why we have been saying
that the abstract spatial structure of our hypothetical computing device is non-topological.
Clearly, any realization of a static blueprint for a computing device, regardless of how
complex, in the form of a dynamically functioning prototype will itself be merely a
topology-preserving transformation of the blueprint from 2 or perhaps 3 spatial
dimensions to 4 spatial dimensions, rather than the topology-non-preserving
transformation from 3 spatial dimensions to the 4 dimensions of 3-space and 1 time. This is because the
state space of the transcribed structure, i.e., the design, can be adequately described in
spatial terms. In a very real sense, an object may not be thought to possess an internality
unless it possesses a genuine "outside" in the sense of a radically open system - a system
which cannot be contained within a closed system. A system is "closed" only if it is finite
and neither receives nor transmits energy to or from any system except itself. Such a
closed system possesses no "outside."

Contingent uniqueness versus necessary uniqueness. The size of the Universe and the
black hole mass limit as important parameters in determining the density of the quantum
vacuum energy.

The asymmetrical nature of time perhaps has some bearing on the hierarchical structuring
of complex macromolecules. The fact that a molecule has been formed from a set of
simpler constituents does not guarantee that it can then be decomposed back into this set
of constituents. Similarly, the fact that a molecule has been broken down into a set of
simpler constituents does not guarantee that it can be spontaneously recomposed from
this selfsame set of constituents. Perhaps the asymmetrical nature of temporality implies
that any sufficiently large set of macromolecules may be partitioned into two disjoint
parts: those molecules possessing a bottom-up structure and those possessing a
top-down structure. This distinction which I am drawing is not a solid theoretical one; it is a
pragmatic distinction which assumes the status of a theoretical distinction when we refer
to molecules occupying either extreme end of the probability spectrum (in terms of their
ability to form "naturally" from simpler parts). Will may only be defined in terms of a
"rational" order foreign to itself which it resists or subverts. Will is the external
manifestation of consciousness. Will is a principle of incommensurate causation. The
set of lawlike relations which may be said to govern Will's operation are unique and
irreproducible. Rational order is simply that order which can be made to manifest its
lawlike relations in a reproducible manner. There is no need to invoke a temporal
description of this state space - the only reason one would attempt it is that we project
our genuine temporality onto the mind's-eye realization of the computing device in its act
of "functioning." Henri Bergson, in his essay, Time in the History of Western
Philosophy, complained of a confusion which inevitably cropped up whenever
metaphysicians attempted to analyze the problem of motion. With a kind of gentle
contempt he described the frustration of these philosophers in trying to reconstruct linear
motion from infinite series of "immobilities", i.e., fixed points of infinitesimal length.

He explained their failure as being due to their ignorance of the nature of a linear interval
as a mere projection of "mobility onto immobility." Such a projection, naturally,
does not capture the whole phenomenon, but merely a point of view with respect to it out
of an infinity of equally valid points of view, and so from a single projection, or even a
finite number of projections, one is never permitted to reconstruct the original object as it
is. There are possible boundary conditions which might easily be placed upon the
dynamic of the "flux" which are nonetheless infinitely improbable as "natural"
occurrences, which is to say that the operation of intelligence is required to institute them.
It is these seemingly magical self-organizing properties of matter, owing to the
recursiveness of its ultimate "constituents," which make any attempt to calculate the
"improbability" of biological macromolecules an incoherent and meaningless enterprise.
Similar activities are the routine pastime of myriad scientifically inclined creationists
attacking evolution. The staggeringly large numerical ratios which they cite against the
"chance occurrence" of the simplest eukaryote DNA are calculated upon a
permutational/combinational modeling of a prebiotic "soup" in which chance collisions continually
occur between precursor molecules, e.g., peptides, amino acids, nucleic acids, etc. The
serious problem with such a modeling approach is that it is not an empirically derived
statistical calculation as in actuarial computations, where a distinct probability is assigned
to each possible event within a pool, based on the observed relative frequencies of each
event, but is an abstract calculation where the probabilities are fixed at the outset and
remain unchanged throughout all series of calculations.

For example, there are a vast number of nucleic acid reactions taking place within the
ribosome organelle of every living animal cell which in the absence of certain mediating
enzymes will take place anywhere from 9 to 20 orders of magnitude more slowly than if
these enzymes are present - the ribosome is responsible for "translating" the coded
information of nucleic acids into various macromolecules (proteins) and in so doing
expressing the genotype of the developing organism. We see from this example that the
probability of the occurrence of various macromolecules essential to the appearance of
the first reproducing, metabolizing organic units begins infinitesimally small when the
molecule's precursors are yet unavailable, but that this probability grows by large
discontinuous jumps each time one of these precursors, the precursors' precursors, etc.
arise inadvertently in the prebiotic "soup" so that by the time the exhaustive set of
macromolecular precursors is present, the formation of the original macromolecule is
virtually assured.
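The "discontinuous jumps" being described can be caricatured with a toy calculation. The numbers and the model below are purely illustrative assumptions of my own, not a chemical claim: they contrast the fixed-probability combinatorial picture, in which all k precursors must appear in a single trial, with a conditional picture in which each precursor, once formed, persists and so raises the probability of the next step.

```python
import random

random.seed(7)
p = 0.2   # chance of one precursor forming per trial (illustrative assumption)
k = 4     # number of distinct precursors required (illustrative assumption)

def trials_all_at_once():
    """Fixed-probability model: every trial must yield all k pieces at once."""
    t = 0
    while True:
        t += 1
        if all(random.random() < p for _ in range(k)):
            return t

def trials_with_persistence():
    """Conditional model: each precursor, once formed, persists thereafter."""
    t, formed = 0, 0
    while formed < k:
        t += 1
        if random.random() < p:
            formed += 1
    return t

runs = 200
naive = sum(trials_all_at_once() for _ in range(runs)) / runs       # ~ 1/p**k
stepwise = sum(trials_with_persistence() for _ in range(runs)) / runs  # ~ k/p
print(naive, stepwise)
```

The expected waiting times differ as 1/p^k versus k/p, so the gap widens explosively with k; this is the sense in which fixing the probabilities "at the outset" misrepresents a process whose probabilities jump each time a precursor becomes available.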

The ribosome itself, despite its inordinately complex structure, has been observed under
experimental conditions to reform spontaneously after having been dissociated within a
carefully prepared enzyme bath into its precursor polynucleotide constituents - and this
within the space of only several hours! It is indeed true that a countless variety of
different enzymes (of the precisely correct type) must be simultaneously present along
with the polynucleotide matrix for this seemingly magical act of spontaneous
self-organization to occur. This is because the self-organization of such an enormously
complex organic structure depends, throughout the entire process, upon the almost
simultaneous juxtaposition (collision is a better term) of three or more precursor
molecules which all happen to have the exactly correct spatial orientation, with sufficient
kinetic energy to overcome the activation energy barrier against the reaction occurring. It
should be noted here that just the chance of any three compatible molecules (in terms of
a desired product) colliding at once with the roughly correct spatial orientation is an
event with a probability only infinitesimally greater than zero - let alone the question of
proper activation energies. And so, even if the primordial Earth possessed the
appropriate reducing atmosphere, with oceans chock-full of all of the required precursor
molecules for the ribosome to properly form, without the necessary catalyzing cofactors
(the enzymes) there would not likely have formed a single such structure by this late
epoch. One might think that there must then have been an extremely long period of time
during which the necessary enzymes appeared on the scene. One suspects, then, a
similar self-organizing process behind the formation of these necessary enzymes, the
only difference being that the precursors with which we are now concerned are simpler,
while their precursors must have been simpler still, and so on. But the precursor
macromolecules for many particular enzymes have, indeed, never been manufactured
(because we do not know how to do it), but have to be obtained from molecules more
complex than themselves, if not from living or recently living organisms. The theory of
evolution, chemical evolution in this case, has secretly conditioned us to believe that
there must be some definite if inordinately complex sequence: precursors + cofactors →
simpler precursors + cofactors → etc., leading back to the very simplest organic molecules,
which form by self-organization spontaneously and easily in a wide variety of
environments and without the need for cofactors or helper molecules of any kind, and
that it must have been such a process (only in reverse) which ultimately led to the first
self-reproducing biological units, which could then be developed further through
Darwinian natural selection.

The notion of self-organization gives some of us pause because it concerns a natural
process which sits precisely on the fence between what might be called
less-than-self-organization, i.e., formation from simpler components, and what is aptly called
greater-than-self-organization, i.e., formation from more complex components - and it is just
such a notion which strongly suggests a top-down hierarchy within the natural order
which can only have intelligence at its apex. At every step in the chain in the formation
of higher-level precursor molecules, the mediation of the required reactions is
accomplished via self-organization principles: those who attempt to calculate the
probability against "chance" formation of important precursor molecules forget a very
important general principle first articulated by the great rationalist philosopher Leibniz -
namely, that the set of conditions which in combination are sufficient to produce some
complex structure must necessarily remain in operation at every succeeding moment to
sustain the existence of this structure. The upshot of this is that a complex structure
which owes its origin to mere chance cannot endure, still less could it respond to its
environment in an adaptive fashion. To bend Nature toward our intentions it is merely
necessary for us to block all of those conditions except the one paralleling our
own. It is in the very nature of recursion not to be derivable from anything but recursion
itself. Therefore, if a recursion or recursive structure has a beginning in time it is
complex, but may only be analyzed in terms of other "simpler" recursive structures.
These simpler components of the recursive structure are themselves approximations to
the larger structure which they form in combination with one another. The behavior of
self-organizing systems cannot ultimately be explained in terms of principles of
organization which obtain externally, which is to say, such dynamic structures will not
submit to a reductionistic analysis.

The distinction between the "mental" and the "physical" may be drawn in the following
way: both are wholes composed of parts, both possess principles of organization, but
what is termed a physical whole is defined exclusively in terms of its constituent parts
while the "parts" which "compose" what is termed a mental whole are, themselves,
defined in terms of the whole which they compose. The reconstruction of a mental whole
must be guided in a top-down manner whereas the construction of a physical whole
must be guided in a bottom-up manner. The principle of a mental whole must exist
prior to its actual realization (in terms of whatever substance). Without substance,
change is not possible. Coextensive with this principle is the following: change owes itself to a lack of
determination, to a deficit of Being, to negation. From this it at once follows that
substance, rather than being the seat of being, of thinghood, as common sense conceives
it to be, owes its existence, to the contrary, to a lack of being. It is not possible for a
determinate thing to be made up out of substance insofar as this thing possesses
determination. It is easy enough to see that continuity is required for the subsistence of
what is called historical time which we will henceforth refer to as temporality.
Indeterminate substance is the only basis for the continuity underlying all change. The
theory of entropic processes tells us that energy and information are interdefinable and
this fact in conjunction with the principle of energy conservation suggests that
information, like energy, may neither be created nor destroyed: the "creation" of a
quantity of information is really constituted by its being abstracted from the preexisting
integral system of information defining it. Like a piece broken off from a holographic
emulsion, there is a necessary trade-off involved in this abstraction process: the "newly
created" entity only acquires a relative independence from the whole in which it eternally
preexisted (and which defined its true being) by losing some of its knowledge of itself.
There is a direct analogy with this process in the creation of "new" subatomic particles
through the collision of particles within a linear accelerator.

In the first couple of decades after the first "atom - smashing" experiments performed
with the primitive particle accelerators of the 1930's, it had been supposed that the
particle products of these violent collisions were actually pieces of the colliding particles
which had been jarred loose by the sudden impulsive force of their slamming together.
But soon after this early period the kinetic energies of the particles going into these
accelerator collisions began significantly to exceed the combined mass-energy of the
particles which themselves initiated the reaction, with the result that the end product of
these collisions was a set of particles whose individual members possessed a mass
greater than the combined mass of the particles originally participating in the collision.
The common sense "broken pieces" explanation of accelerator products now had to be
modified in some way or rejected outright. Two alternative interpretations of this "mass
paradox" were suggested by particle theorists: either the product particles were created
from the excitation of the vacuum by the kinetic energy of the collision with the "input"
particles serving as the points of application of the excitation energy, or they were really
inside the initial particles all along but the excess mass-energy was being exactly
balanced by an equal and opposite negative energy due to the internal binding forces
holding the particles together. The science of Physics, at least before the development of
relativistic quantum field theory, in the 1940's, imagined that there might be such things
as ultimate material constituents - elementary particles - out of which all material bodies
would be composed. The implicit Metaphysics behind the notion of elementary particles
is that of substance. There is no such thing as nothing. Nothing, by its very nature, is a
nonexistent entity: it is its own negation. We might be tempted to say then that
"something," being the opposite of nothing, must exist. But not just any "something"
constitutes the opposite or negation of nothing/nothingness, but only a something which
is, itself, the unity of all that might possibly exist, and the very essence of which is to
exist. In other words, nothing, not being possible because it contains within itself its own
negation, implies that there must always have been something (or other).

But the only guarantee that there has always been something is the existence of
something which contains within itself its own affirmation, if you will, the reason for its
own existence. A fundamental and most general property of a thing which contains
within itself the reason for its own existence is that of recursion, something which is
defined solely in terms of itself, a recursive structure. There are logical grounds for
believing that there can be only one recursive structure, that there can be only one self-
existent entity - with this entity being the "ground" for existence of all other entities. A
recursive structure, if it may be thought to be composite, would be composed of parts
which are totally interdependent upon one another; no part is the cause of any other
without also being itself caused by this other part and so if this recursive structure had a
beginning in time, it must have been given existence through a pre-existing, broader and
more complex recursive structure. We see now that a given finite recursive structure
comes into existence through a process of uncovering or abstraction from a more
complex whole - through a process of negation. We are reminded of Michelangelo's
claim that a truly great work of sculpture already exists within its marble block and that
all he did in order to produce his works was merely to free them from the marble in
which they were imprisoned. All simpler recursive structures come into being through a
kind of figure-ground transformation, through a change in perspective. This reminds us
of Leibniz's monads, each of which is only a different perspective on an eternally
pre-existent whole (cf. holography), with each monad defined in terms of, or dependent
upon, all of the other monads making up the whole. This exhaustive interdependence is
what Leibniz refers to as the preestablished harmony between monads. Again, a
recursion may only be defined in terms of other recursions like itself. Consciousness is
an example of an inherently recursive structure as it is its own symbolic representation (
of itself to itself). Consciousness' objectivity is exactly coextensive with its subjectivity;
the representation of a decomposed 3 + 1 space and time is always against a
nondecomposed spacetime where space and time are fused. This is simply a restatement
of Bishop Berkeley's principle, esse est percipi, i.e., to be is to be perceived, and as
consciousness necessarily experiences itself as a unity and never as a multiplicity -
the objective reality of any multiplicity of consciousnesses could only exist as a subjective
reality within a larger individual consciousness (itself a unity) and so cannot really be a
multiplicity of consciousnesses at all. This was one of Schrodinger's metaphysical
insights.

This argument against the multiplicity of consciousness was succinctly stated by the
physicist Erwin Schrodinger in his short seminal work, Mind and Matter. It follows
that since consciousness cannot experience itself as a multiplicity, it therefore cannot
exist objectively as a multiplicity: consequently there can only be one consciousness.
But if the nature of consciousness is that of self-transcending, then Schrodinger's
clever line of reasoning loses considerable force. Both the American psychologist
William James and the French philosopher Henri Bergson did not believe that
the brain was itself productive of consciousness, but rather that the brain was merely a
kind of reducing valve (Bergson) which filtered and reduced down the cosmic
consciousness (James) to a vastly simpler form containing roughly the level of
information-handling capacity (intelligence) which the human organism would need to
adapt to the limited challenges of an earthly environment. The novelist and author of
Brave New World (1932), Aldous Huxley, discussed this view of consciousness in his
popular treatise, The Doors of Perception - Heaven and Hell. In this work Huxley
describes the effects of the psychoactive (psychedelic) drugs mescaline and LSD, with
which he experimented towards the latter part of his life in an attempt to trigger or
facilitate the mystical experience of enlightenment, in which he had had an almost
lifelong curiosity. Huxley explained the effects of these substances in terms of the
James-Bergson theory of consciousness: the experience of self-transcendence and
transfiguration which Huxley underwent on mescaline was for him due to the drug's
disabling effect upon the cosmic reducing valve. The brain under the influence of
mescaline is no longer able to filter out the thoughts and intuitions irrelevant to earthly
life (because appropriate to the upwardly contiguous levels of consciousness) - hence
the experience of a vastly larger mental space. This type of explanation would have been
readily understandable to the ancient Greek philosopher Socrates, who viewed
learning (through experience or education) as simply a mechanism by which latent
knowledge is recollected. A computational state space configuration which cannot be
duplicated in space can neither be duplicated in time, except by virtue of the additional
advantage afforded by substantial continuity.
Somehow the quantum no-cloning theorem does not apply to two or more identical states
of the same system. Genealogy is the temporal component of meaning-purveying
context.
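The no-cloning theorem invoked here has a compact textbook proof from unitarity; the following sketch is standard material and not specific to this manuscript:

```latex
% Suppose one unitary U could clone two distinct states:
%   U(|psi>|0>) = |psi>|psi>  and  U(|phi>|0>) = |phi>|phi>.
% Unitarity (U^dagger U = I) preserves inner products, so
\[
\langle\phi|\psi\rangle
  = \bigl(\langle\phi|\langle 0|\bigr)\,U^{\dagger}U\,\bigl(|\psi\rangle|0\rangle\bigr)
  = \bigl(\langle\phi|\langle\phi|\bigr)\bigl(|\psi\rangle|\psi\rangle\bigr)
  = \langle\phi|\psi\rangle^{2},
\]
% forcing the overlap to be 0 or 1: only mutually orthogonal or identical
% states admit a common cloning operation.
```

This makes precise the observation above: for two identical states of the same system the overlap is already 1, so the theorem's prohibition does not bite.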

Moreover, a state space configuration which cannot be duplicated in time can neither be
duplicated in space. In an infinite state space, there are an infinite number of available
states, and so the probability of any one state occurring is, formally speaking, zero.
However, if the state space has a nonpermutational/combinational structure, then it is
possible for individual states to be overdetermined with a given state representing an
infinite number of different "configurations" within the state space. This would be the
case if the state space is that belonging to a computing device possessing genuine
memory. For the device would either remember that it had been in the exact same
computational state before, in which case it would have to be in a new state slightly
different from the one it "remembers" being in previously, or it would not remember ever
having been in that state which it had occupied in the past. The device could only really
succeed in exactly remembering one of its previous states if the device had

1) been in operation for an infinite period of time and,

2) the device possessed a memory of infinite size so as to adequately contain an infinitely
regressive self-representation of one of its previous states, itself an infinitely regressive
self-representation. A state space configuration can only be copied if it is locally
separable from the nonlocally connected environment which contains it.

Otherwise, the state cannot be defined sharply enough in order to be copied.

Separating the state from the nonlocally connected (non-representational) environment in
which it is embedded, in order to try to copy it spatially, results in the destruction of the
state earlier than, or just before, the moment when a complete description of the state -
which is required as a template - is in hand. The velocity of light limits the transmission speed of
intersubjective data, which must be carried on a carrier signal of energy greater than ΔE,
the energy uncertainty of the nonlocally connected quantum substrate through which the
carrier propagates. That which is locally separable from its physical environment is a
nonrepresentational abstraction. An abstract system is a re-presentation if it remains
connected to its defining context; if it remains coupled to the medium in which it
historically originated. Semantical symbol manipulations are characterized by
transformations of symbols through the alteration of the defining contexts in which the
symbols are "embedded." Syntactical symbol manipulations are characterized by
transformations of symbols through the application of rules of inference to the symbols
within a constant defining context. It is perhaps possible to see that syntactical and
semantical transformations are mutually "orthogonal" operations.

Recursive relationships involve a content, theorem or proposition, in at least two distinct
contextual levels. Infinitely recursive functions, therefore, whether they be propositional,
mathematical, etc., may constitute a kind of hybridized operation possessing both
semantical and logical features simultaneously. Is it a valid line of speculation to suppose
that since some particles which are virtual within an inertial frame of reference, become
real particles with respect to an accelerated, or noninertial, reference frame, therefore
inertia/gravitation must bear some intimate connection with the phenomenon of the
"collapse of the wavefunction," since this phenomenon means the rendering as "real" one
among the many superposed virtual quantum states? Is particle production associated
with nonadiabatic, and hence, irreversible, changes in the energy density of the quantum
vacuum electromagnetic field?

October 1996
We know that nonadiabatic changes in the boundary conditions of the infinite
potential well problem result in a transition of the particle energy to an excited state with
respect to the new wavefunction describing the new potential well resulting from this
sudden change. This suggests that perhaps irreversible, or, nonadiabatic, changes in a
quantum mechanical system are necessary for the wavefunction describing it to undergo
"collapse." Perhaps changes in the boundary conditions of the (non?)locally connected
vacuum can be modeled upon a change in the dynamics of this vacuum in the absence of
changes of the boundary conditions. Perhaps all changes in the dynamics of the
nonlocally connected vacuum are only measurable in terms of their manifestation as
changes in the boundary conditions of a locally connected quantum system. When the
boundary conditions applied to a given wavefunction are treated classically, then
nonadiabatic changes in the boundary conditions will usually result in a discontinuous
change in the wavefunction, i.e., a collapse in the wavefunction. But if the classical
boundary conditions are themselves treated quantum-mechanically, then the composite
wavefunction will not suffer a collapse, but will evolve according to the time-dependent
Schrodinger wave equation.
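The sudden (nonadiabatic) change described in this entry can be checked numerically for the textbook case of an infinite well whose width is instantaneously doubled. The script below is my own illustration, not from the manuscript: it projects the old ground state onto the eigenbasis of the widened well and shows that the system is left in a superposition, i.e., partly "excited" relative to the new well.

```python
import numpy as np

# A particle in the ground state of an infinite well of width L has the
# right-hand wall moved instantaneously to 2L.  In the sudden approximation
# the wavefunction is unchanged, so it becomes a superposition of the new
# well's eigenstates with populations |c_n|^2 = |<n_new | 1_old>|^2.
L = 1.0
x = np.linspace(0.0, 2 * L, 20001)
dx = x[1] - x[0]

def eigenstate(n, width):
    """Normalized infinite-well eigenfunction on [0, width], zero beyond."""
    psi = np.sqrt(2.0 / width) * np.sin(n * np.pi * x / width)
    psi[x > width] = 0.0
    return psi

psi_old = eigenstate(1, L)              # ground state of the original well
populations = {}
for n in range(1, 8):
    c_n = np.sum(eigenstate(n, 2 * L) * psi_old) * dx   # overlap amplitude
    populations[n] = c_n ** 2

# Analytically |c_1|^2 = 32/(9*pi^2) ~ 0.360 and |c_2|^2 = 1/2, with the
# populations summing to 1 as more states are included.
for n, pop in populations.items():
    print(n, round(pop, 4))
```

That half the probability ends up in the first excited state of the new well is the concrete sense in which a nonadiabatic change in boundary conditions leaves the particle "in an excited state with respect to the new wavefunction."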

January 1997
Is a nonadiabatic change to be understood as a change in vacuum boundary
conditions which cannot be expected (by the vacuum itself) because ΔB/Δt > ΔB × h/ΔE?
It is clear in a geometrically intuitive sense that transformations of entities which are not
truly independent and separable from an open-ended context or system in which they are
grounded cannot be genuinely reversible, but only abstractly so, to within a certain
approximation. Participatory knowledge transcends abstract description in terms of
abstract representations of independent "things" or entities. This is the knowledge based
in the intimate interaction with the open-ended. Is it possible to not be in an eigenstate of
any quantum mechanical observable whatever? Does this describe the normal condition
which the quantum vacuum finds itself in? If the mode of interaction of real particles
with real particles, i.e., real-real interactions, is correctly described as deterministically
ordered, and the mode of interaction of real with virtual particles as randomly ordered,
then should we describe the mode of interaction of virtual particles with themselves as
both random and deterministic?

Is a superposition state possible in the absence of wavefunction boundary conditions?
Are some superpositions well-formed in the sense that they can be inverse Fourier-
transformed to a unique eigenstate with respect to a definable observable? Are some ill-
formed in the contrary sense of not possessing an inverse Fourier-transform to a unique
eigenstate of a single observable? Perhaps well-formed superposition states may only be
defined given appropriate spacetime boundary conditions, i.e., initial and boundary
conditions. Notice that when a measurement is performed upon one of the separated
particles of an EPR-type experiment, the particles remain nonlocally connected after
the "collapse of the wavefunction" describing the particles jointly, although the particles
are now nonlocally connected in a new way precipitated by the observer's act of
measurement. Has the observer simply succeeded in discontinuously altering the inertial
frame of reference in which the particle pair is embedded? If so, doesn't he do this by
accelerating the particles? Are the nonlocal connections within the observer's mind
merely of a piece with the nonlocally connected vacuum state in which his brain is
embedded, and so when he performs his measurement upon the particle pair, the pair
must "jump into" a new nonlocally connected vacuum state, resulting in a discontinuous
change in its superposition state? Does the observer recoil nonadiabatically into a new
nonlocally-connected vacuum upon performing an act of quantum measurement which
induces what appears to him as a wavefunction collapse? Must the vacuum possess
infinite self-similarity so that "identical events" may unfold with different rates of
temporal evolution, depending upon which inertial frame of reference they are "viewed
from?" Self-similarity can never be exact. If the vacuum state were merely locally
connected, then its temporal evolution "as a whole" would necessarily follow along a
predetermined continuum of vacuum states. However, a nonlocally connected vacuum
state creates its own trajectory as it evolves temporally.

I am trying to build a case for distinguishing between two seemingly very different
descriptions of the process of quantum measurement, namely, the discontinuous collapse
of the wavefunction of the quantum mechanical system being observed/measured versus a
similar collapse of the wavefunction describing the mind of the observer performing the
measurement - which is to say, the Copenhagen from the "Many Worlds" interpretation.
As is well known, Newton's law of gravitation may be given a Gaussian formulation
exactly paralleling the electromagnetic flux law. What is surprising is that the black hole
mass of a given radius may also be given a Gaussian formulation.
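For reference, the Gaussian form of Newton's law and its electrostatic parallel are standard textbook results; the black-hole flux expression that follows in the text is the author's own construction:

```latex
\[
\oint_{S}\mathbf{g}\cdot d\mathbf{A} \;=\; -\,4\pi G\,M_{\mathrm{enc}}
\qquad\text{paralleling}\qquad
\oint_{S}\mathbf{E}\cdot d\mathbf{A} \;=\; \frac{Q_{\mathrm{enc}}}{\varepsilon_{0}},
\]
```

where the flux of the field through any closed surface S measures the source (mass or charge) enclosed.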

To wit, (1/4πG) ∮ Hc dS = M_blackhole.
It is possible to "derive" the Pauli
Exclusion Principle from the Heisenberg Uncertainty Principle. This may be shown in
the following manner. If two spin-1/2 particles of the same quantum mechanical system
were to be in the same quantum state - which is precisely what the Pauli principle forbids -
say, two electrons "orbiting" the same hydrogen nucleus, it would be possible for us to
measure the kinetic energy (a function of momentum) of one of the electrons, and then to
measure the potential energy (a function of position) of the other electron with the result
that we would have demonstrated the existence of a quantum mechanical state
possessing, simultaneously, an exact potential and an exact kinetic energy; but this is
precisely what is forbidden to exist by the Heisenberg uncertainty principle - QED.
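Set out schematically (the notation here is mine), the argument just given runs:

```latex
% Two electrons assumed to occupy the identical state psi (contra Pauli):
\[
\begin{aligned}
&\text{measure } T = p^{2}/2m \text{ on electron 1}
  \;\Longrightarrow\; p \text{ sharp for } \psi,\\
&\text{measure } V = V(x) \text{ on electron 2}
  \;\Longrightarrow\; x \text{ sharp for } \psi,\\
&\text{hence } \Delta x\,\Delta p = 0 \;<\; \hbar/2 \quad\text{(contradiction).}
\end{aligned}
\]
```

The crucial hidden premise is that both measurements characterize one and the same state ψ; the June 1997 note below identifies exactly this step as the one that blocks the conclusion.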

June 1997
This conclusion does not go through, however, if the requirement is made that two
particles which are in the same quantum state must be described by one and the same
wavefunction. <<

The Feynman path-integral formalism of relativistic quantum field theory indicates that
real particles, i.e., fundamental particles whose mass-energy is greater than the quantum
mechanical energy uncertainty of the quantum mechanical system to which they belong,
may be represented as stable and interlocking patterns of vacuum energy fluctuation, that
is, as patterns of virtual particle creation, annihilation, and particle (fermion and boson)
exchange processes which form with one another a stable, interconnected meshwork of
feedback loops of virtual particle reactions.

June 1997
It is not certain what the concept of stability means within the context of virtual
particle processes. "Stable" certainly does not mean here persistence of a structure
against fluctuations or perturbations - thermal or otherwise, since the virtual particle
processes themselves are the fluctuation phenomena. Stability must mean in this case
the relatively unchanging probabilities of recurring patterns of quantum fluctuation
manifesting themselves as virtual particle reactions. <<

Thus, real fundamental particles are viewed within this formalism as mere excitations of
the vacuum (ground) state with more complex matter structures, e.g., atoms, molecules,
etc., as feedback structures into which these vacuum excitations are organized - provided
that adequate excitation energy is available.

One possible test as to whether or not a given particle is composite or simple might be:
does the particle have a virtual counterpart, i.e., can the particle be produced out of the
vacuum as a pure energy fluctuation - out of a fluctuation of purely imaginary 4-
momentum? <<Although in theory it should be possible to produce whole atoms,
molecules, or more complex matter structures through direct excitation of the vacuum
state (see above paragraph), the intelligent coordination of the myriad and highly
localized excitations required to do this, from within any particular modified vacuum
state, is probably rendered impossible due to the inherent uncertainty of total energy
which is responsible for vacuum fluctuations: certain existing boundary conditions to the
matrix of vacuum fluctuations may already be immediately present - in the form of
already created particles, molecules, etc., but these boundary conditions cannot be
produced ab initio, but may only be "reproduced," utilizing identical pre-existing
boundary conditions (in the form of already available matter) as template and catalyst for
the reproduction of the desired vacuum boundary conditions. Any instrumentalities
which we might employ to alter the vacuum field boundary conditions would only be
effective by virtue of the vacuum field fluctuations themselves which mediate their
action; we must realize that the imposition of genuinely new boundary conditions upon
the vacuum, i.e., without the utilization of a "template," even if locally, would imply a
change in the global boundary conditions of the entire vacuum energy system ( the entire
spacetime continuum). On the view of matter and vacuum which is espoused here,
matter is seen as not having an existence independent of the vacuum energy field, rather,
the stability of matter at all levels of its hierarchical structure, is underpinned by the
myriad field energy fluctuations of the quantum vacuum. Consequently, matter does not
possess an independent existence in the Democritean sense of "atoms and void;" our
view is more consonant with that put forward by Heraclitus, to wit, that everything is
composed "of fire in measures kindling and in measures going out;" all change is driven
by the clash of opposites and all potential for change lies with the tension between these
opposites.

Here "fire" is given the modern physical interpretation as "vacuum energy" and the "clash
of opposites," as the creation and annihilation operators ( 2nd quantization of quantum
theory) into which all operators corresponding to physical "observables" are analyzable.
What Heraclitus' physics lacked was a basis for physical continuity from one moment of
time to the next; the reproduction of vacuum boundary conditions (in virus - like manner)
supplies this missing element within modern physics. Within this understanding of the
relationship between matter and vacuum Democritus' notion of persisting "substance" no
longer has any application and the continuous existence of real matter particles consists
in the continual recreation of a coherent and interlocking pattern of virtual particle
reactions which is of a piece with the larger pattern of vacuum energy fluctuations within the
indefinite surrounding region of spacetime.

October 1996
The basic idea behind a perturbative analysis of a quantum system is that one is
not able to write down with infinite precision the exact Hamiltonian of the system under
consideration and so one describes the energy of the system in terms of a Hamiltonian
plus a perturbation energy. This perturbation energy is usually the first nonzero term in
an expansion whose successive terms are progressively smaller and must
be neglected, since including them renders the analysis intractable. In other words, one does
not have the precise energy eigenfunction expansion of the system's wavefunction; if one
did, then one could in theory prepare the system in any one of its energy eigenstates
where the system would exist at a precisely defined energy for all time, assuming the
system were not interfered with as a result of exchanging energy with some other system.
But since the Hamiltonian of a quantum mechanical system is always a function of the
system's momentum and position, which are incompatible observables, the energy of the
system, which is a function of both the system's particle/field momentum and
particle/field source position, can never be precisely defined. In this way we see that
energy perturbations are not merely an ad hoc, practically useful accounting device needed to
make up for a merely practical, and hence theoretically removable, ignorance
concerning the system's real energy eigenfunction expansion. Rather, perturbations to the
system's energy - any system's energy - are not mere artifacts of a perturbative analysis,
but are ontologically real and not due to a temporary inability to specify the system's true
energy eigenfunction expansion. There is a small component of the perturbation energy
which is forever irremediable and represents the exchange of energy between any
quantum system and another quantum system which is always present.
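
The perturbative bookkeeping described above can be sketched in its simplest form: the first-order correction E1 = <psi|V|psi> for a particle in a box, with a small linear perturbation V(x) = lam*x as our toy choice (none of these parameters come from the text):

```python
# First-order perturbation energy for an infinite square well of width L,
# perturbed by V(x) = lam * x.  For any level n, |psi|^2 is symmetric about
# L/2, so E1 = lam * L / 2 independently of n.
import numpy as np

L, lam, n = 1.0, 0.01, 2
x = np.linspace(0, L, 200001)
dx = x[1] - x[0]
psi = np.sqrt(2 / L) * np.sin(n * np.pi * x / L)   # unperturbed eigenstate

E1 = np.sum(psi**2 * lam * x) * dx                 # <psi|V|psi>
print(E1)                                          # -> 0.005 = lam*L/2
```
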

An important conclusion to be drawn for quantum theory here is that, the wavefunction
only represents the most that can be known about a quantum system in the absence of the
irremovable perturbations. We might be tempted to speculate here that more can be
known about a quantum system than can be contained in any wavefunction provided that
the effect of the irremovable perturbations are included. If the objective and the
subjective are considered to be disjoint categories, then we may say that just as the
wavefunction represents the most that can be objectively known about a quantum system,
what can be subjectively known about a quantum system is due entirely to influences
lying altogether outside all possible wavefunction descriptions of the system. Such
influences, collectively, are the so-called irremovable perturbations. We must not
straight-away identify such "irremovable perturbations" with the virtual particles and
fields of relativistic quantum field theory as these entities are largely artifacts of low
order perturbative analysis involving perturbations which are largely removable, in
theory, should the observer acquire greater knowledge of the system under observation.
What uniquely distinguishes virtual particles and fields from their real counterparts does,
perhaps, point to some of the properties of the medium with which all quantum systems
forever exchange energy, leading to the so-called irremovable perturbations.

Therefore, the introduction of matter particles into a volume of spacetime is not distinct
in principle from creating these particles ab initio from a portion of the vacuum energy
already present within this particular volume of spacetime; in an inertial frame of
reference, a real matter particle imparts an excitation energy to the vacuum such that a
particle identical to itself is created out of the fluctuating vacuum field energy; at the
same time the previous particle is destroyed, its mass-energy providing the excitation
energy necessary to re-create itself anew. In an accelerated, or more generally, a non-
inertial reference frame, the particle's mass-energy excites the vacuum field in a different
manner, continually producing a new variety of particles to take its place. It has often
been noted in the literature of modern physics that particle production from the vacuum
state is to be expected within curved spacetimes. This leads us to the idea that merely
localized alterations in boundary conditions of the vacuum field in no way alters the total
energy density of the region occupied by the vacuum field, but merely changes the ratio
of mass - energy to vacuum energy from zero to some fraction approaching infinity (in
the case of black hole masses). The general relativistic alteration in the local velocity of
light may be understood in terms of Mach's formula for the speed of sound in an energy
conducting medium in its application to the quantum vacuum. Mach's formula states that
the velocity of sound in an energy conducting medium is a function of the pressure and
the energy density of the medium. Specifically, the velocity of sound in the medium is
the square root of the pressure of the medium times the speed of light squared divided by
the energy density of the medium, i.e., vsound = sqrt(p*c2/E).

Since the pressure of the vacuum is equal to its energy density, and the pressure of matter
is effectively zero, the energy density and pressure terms in Mach's formula are the total
energy density and pressure of space, respectively; the pressure of the vacuum is always
equal to its energy density, which decreases in step with the increase in the mass-energy
density. By letting the total energy density of space equal to the sum of the vacuum
energy and mass-energy densities, i.e., Etot = Ev + Em, and the vacuum pressure equal to
the modified vacuum energy density, i.e., Ev' = Ev - Em, Mach's formula works out to
vsound = sqrt[(Ev - Em)c2/Etot] which reduces to the result, vsound = [1 - GM/Rc2] * c,
and this result is identical to the reduced local value of the speed of light calculated from
general relativity (in the weak field limit). Our requirement of no spatial variation in the
total energy density of space, i.e., that the mass-energy and vacuum energy densities are
complementary, seems to demand that the density of gravitational energy . . .

If
we are correct in reinterpreting the gravitational redshift of photons propagating in a
spatially varying gravitational potential as being due to a spatial variation in the zero-
point of the vacuum's energy (against which the photon's energy is to be measured), then
the imposition of boundary conditions upon the vacuum field merely produces local and
discontinuous variations in the spatial (and temporal) distribution of the field energy,
leading to the appearance of negative binding energies which exactly counterbalance the
positive gains in mass-energy which thereby result; it is in this sense that mass-energy
may be thought to occupy a "hollow" in the vacuum energy field and the "displaced"
vacuum energy has merely assumed a new form as mass-energy: Meff = Mr/sqrt(1 -
(c')2/c2), where Mr = binding mass and Meff = effective mass. The binding mass stems
from the sum of all (+) and (-) non-gravitational binding energies. The accumulation of
many such discontinuous energy gradients submicroscopically leads to the appearance,
macroscopically, of continuous energy gradients in the vacuum. Since the energy of the
vacuum field owes its existence entirely to the quantum mechanical energy uncertainty of
spacetime, in turn owing to the fact that the energy Hamiltonian is a function of mutually
incompatible observables, it follows that the vacuum field shares in the general properties
of quantum mechanical energy uncertainty. One such property is that energy uncertainty
is required for any discrete change in a quantum mechanical observable; for example, all
changes in a physical system stem from the application of forces upon the system while
all fundamental forces of nature are mediated via the exchange of virtual bosons between
fermions composing the system.
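
The weak-field reduction quoted earlier, vsound = [1 - GM/Rc2] * c, can be checked numerically. In this sketch we take Etot = Ev + Em and effective pressure Ev - Em, as in the text, and compare the exact square root against the linearized form 1 - Em/Ev; the identification Em/Ev with GM/(Rc2) is implicit in the text's derivation:

```python
# Weak-field limit of Mach's formula as used above:
#   v/c = sqrt((Ev - Em)/(Ev + Em)) ~= 1 - Em/Ev  for Em << Ev.
import math

Ev = 1.0
for Em in (1e-2, 1e-4, 1e-6):
    exact  = math.sqrt((Ev - Em) / (Ev + Em))
    approx = 1 - Em / Ev
    print(Em, exact - approx)      # residual shrinks like Em**2 / 2
```
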

Consequently, physical processes undergo temporal evolution only insofar as they
comprise quantum mechanical systems possessing finite energy uncertainty, with the
rates of the component processes determined by the magnitude of system energy
uncertainty. A fermion-boson quantum mechanical system may be thought of as an
interconnected meshwork of temporal fermion energy transitions with spatial boson
momentum transitions, with the fermion wavefunctions and boson wavefunctions being
antisymmetric and symmetric, respectively, so that increasing the density of interacting
fermions and bosons within a particular region of spacetime results in a decrease in the
energy uncertainty and increase in the momentum uncertainty of the vacuum state,
respectively.

November 1996
Any wavefunction may be alternately represented as a sum of symmetric and
antisymmetric wavefunctions. If one calculates the probability density function for a
wavefunction in this new representation, one is tempted to give some physical
interpretation of the three distinct components which result.

X*X = X*symXsym + X*antiXanti + 2Re(X*symXanti)

The first term represents the probability function resulting from the mutual interaction of
bosons while the second term represents the probability function resulting from the
mutual interaction of fermions. The third term may represent the probability function
resulting from the interaction of bosons and fermions with each other.
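
For a real-valued wavefunction the three terms can be checked directly; the Gaussian and the symmetric grid below are our arbitrary choices for illustration:

```python
# Split an arbitrary real function into symmetric and antisymmetric parts and
# verify that |X|^2 decomposes into the three terms listed above.
import numpy as np

x = np.linspace(-5, 5, 1001)            # grid symmetric about x = 0
X = np.exp(-(x - 1)**2)                 # arbitrary real "wavefunction"

Xsym  = 0.5 * (X + X[::-1])             # (X(x) + X(-x)) / 2
Xanti = 0.5 * (X - X[::-1])             # (X(x) - X(-x)) / 2

lhs = X**2
rhs = Xsym**2 + Xanti**2 + 2 * Xsym * Xanti   # cross term, real case
print(np.max(np.abs(lhs - rhs)))              # -> ~0 (machine precision)
```
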

July 1997 >>
In Fourier analysis, a function which satisfies the Dirichlet conditions may
always be represented as a Fourier sum of weighted sine and cosine functions of the
bounded variables.
We note here that this function may be represented as either purely
even or purely odd, i.e., as either purely a Fourier sum of cosine functions or sine
functions, provided that the appropriate transformation of the coordinate system is
performed within which the Fourier expansion is to be computed. In direct analogy to
what has been said concerning Fourier analysis, we may say that through a judicious
transformation of the spacetime coordinates, we may represent an arbitrary wavefunction
as either of purely even parity or of purely odd parity. The notable exception to this is
what we cannot do, however: take a wavefunction of purely even parity and transform the
coordinate system so that this function is now represented as possessing purely odd
parity, or vice versa. Continuing with our analogy, we cannot represent a sine function in
terms of a sum of cosine functions and so on. We cannot do this, as was said, through a
transformation of the spacetime coordinates, however, an odd function can be readily
converted into an even function and vice versa through the mere addition of a phase
factor ( of pi/2 ) within the argument of the function we wish to transform. We know that
if an operator does not commute with the Hamiltonian operator, then the observable
corresponding to the first operator cannot be a conserved quantity. Conversely, any
operator which commutes with the Hamiltonian will be tied to a change in the total
energy of the system if this operator itself suffers any changes. It is well known that
parity is conserved within the theory of both the electromagnetic and strong nuclear
interactions. This is all to suggest that an alteration of the momentum-energy tensor
through the judicious insertion of phase factors into each momentum and energy
eigenfunction, may result in a transformation of the momentum-energy eigenfunction,
Psi(x,y,z,t), without altering the momentum-energy tensor, Ti,k itself. (See spontaneous
symmetry breaking in gauge theory and its artifact exchange boson, which results from
the transition from global to local field symmetry.) This is just saying that the
wavefunction representing the quantum mechanical system with momentum-energy
tensor, Ti,k, is itself degenerate with respect to the phase. We may deduce from this that
matter cannot exist in either a purely fermionic or purely bosonic state. Otherwise, we
would be in a position to alter the tensor, Ti,k, describing this matter distribution, through
a non-coordinate transformation, namely, through the mere introduction of an arbitrary
nonperiodic phase factor into the energy eigenfunction representing this mass
distribution. This would constitute a stark violation of the Equivalence Principle of
General Relativity which implies that each distinct stress-momentum-energy distribution,
as represented by T, uniquely correlates to a distinct curvature of the spacetime metric.
To wit, matter must always exist as a mixed system of fermions and bosons, namely, any
given real matter distribution must be described by a wavefunction which is neither
purely symmetric nor purely antisymmetric. This figures into the necessity of any
quantum field giving rise to virtual particles.
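
The phase-factor remark above is checked in a couple of lines (numpy is our choice of tool): adding pi/2 to the argument turns the odd function sin into the even function cos, while no coordinate transformation alone can do this.

```python
# sin is odd, cos is even, and a pi/2 phase shift converts one into the other.
import numpy as np

x = np.linspace(-np.pi, np.pi, 1001)
shifted = np.sin(x + np.pi / 2)
print(np.max(np.abs(shifted - np.cos(x))))      # -> ~0

print(np.max(np.abs(np.cos(x) - np.cos(-x))))   # -> ~0 (even parity)
print(np.max(np.abs(np.sin(x) + np.sin(-x))))   # -> ~0 (odd parity)
```
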

November 1996
By calculating expectation values for various observables for the quantum
vacuum, such as <p>, <E>, <p2>, <E2>, etc., we may be able to exploit our intuitions
about what X*qX(vac), where q is the observable in question, must be in order to guess at
the probable relationships of these various vacuum expectation values.

The relativistic effects upon kinematics (space and time) are grounded in the relativistic
effects upon the dynamics through the conservation of momentum-energy. We believe,
for instance, that the relativistic contraction of the positional uncertainty of a particle, say,
and the relativistic time dilation (of the particle's lifetime, if it is unstable), do not lie
behind the dilation of Δp and contraction of ΔE, respectively, through the Heisenberg
uncertainty relations. This would be to ground dynamical effects in mere kinematics.
Rather, the kinematics should be grounded in the dynamics: the effects on space and
time are epiphenomenal to the substantive effects associated with the conservation of
momentum-energy. This is thought to take place through the Heisenberg uncertainty
relations for position/momentum and time/energy. Changes in the components of the
momentum-energy tensor cause alterations in the tensor of stress-momentum-energy
uncertainty. We may suppose that the presence of real fermions reduces the number of
available vacuum fermionic energy states while the presence of real bosons increases the
number of available virtual bosonic momentum states, relative to the reduced number of
virtual fermionic energy states. In this manner, more virtual energy transitions occurring
within the vacuum state must be effected via similar transitions occurring within the
massive body in question. This situation is consistent with the effect mass has upon the
surrounding vacuum of simultaneously decreasing the energy uncertainty and increasing
the momentum uncertainty radially about the gravitating massive body. A general result
of the preceding discussion is that the accumulation of mass - energy, more particularly
binding energy, within a volume of spacetime causes a corresponding reduction in the
density of energy uncertainty (vacuum energy), in turn resulting in a corresponding
decrease in the rate at which physical processes occur within this particular region of
spacetime. How are we to understand so-called energy-degenerate transitions within the
vacuum state, which is to say, transitions within the vacuum state not involving a change
in the vacuum's energy? The degenerate wavefunctions represent the possibility of
change which falls outside of the physically temporal.

An important question in this connection is whether gravitational time dilation shall have
any effect upon the frequency of energy degenerate transitions. Is the density matrix an
approximation made in lieu of the actual wavefunction which we are for merely practical
reasons unable to specify, or does a quantum system sometimes not possess a bona fide
wavefunction at all? What relation does the 2nd rank tensor relating two different virtual
particle current densities have to the momentum-energy tensor of GR...to the metric
tensor of GR? Would an exceedingly intense beam of coherent electromagnetic radiation
(laser beam) result in a kind of anti-squeezed state? This might have the precisely
opposite effect to that of the Casimir Effect which normally induces an expansion of the
momentum uncertainty along two orthogonal directions to the axis along which the
conducting plates are oriented. A question here is whether the momentum uncertainty
along the time axis (the energy uncertainty) is also dilated due to a squeezing of the
momentum uncertainty between the plates. The token reflexives, here and now, seem to
presuppose the token-reflexive, I, or me. Conversely, the token-reflexives, I, or me, seem
to equally presuppose the token-reflexives, here and now. This seems to suggest that the
nonlocal connections, manifested in the relations of virtual particles/fields to abstract
spacetime may also be essential in mediating the individual consciousness of observers
interacting with spacetime. Within the context of an expanding universe, then, matter
does not merely alter the density of the vacuum, but also alters the rate at which the
density of the vacuum energy decreases with time due to cosmological expansion, and
since the time rate of change in energy density is, itself, a physical process, matter, by
reducing the energy uncertainty of the vacuum, also causes a radially varying vacuum
field energy density which manifests itself as a spherically symmetric energy gradient
centered about a mass which is identical to the gravitational field!
July 2011
There is an
exponential relationship involved here with the effect of cosmological expansion upon
the time rate of change of vacuum energy density. The discrete cosmological redshift
may be understood in terms of the model suggested here, cf. the WKB approximation of
the electron tunneling problem. Certain versions of Modified Newtonian Dynamics
(MOND) call for a gravitational "constant" with an exponential factor giving rise to the
small anomalous constant acceleration which has been observed producing inaccuracies
in the charting of deep space probe trajectories, cf. the Pioneer Anomaly and the papers of
J. D. Anderson.

Changes in the composition of the total energy density of a region of space with respect
to the proportions of mass - energy and vacuum energy are reflected in the transformation
of the spatio-temporal variation in vacuum energy density from being purely temporal, in
the case of free space, to a mixture of two parts, temporal and spatial, in the case of
typical distributions of matter, to a purely spatial variation of vacuum energy density, in
the case of black hole masses; and there is a homologous mapping between the degree of
tipping of the Minkowski light cone in curved spacetimes and the degree of
transformation of a temporally varying vacuum energy into one which is purely spatial in
its variation. Within curved spacetimes, the local value of the velocity of light is reduced
below its normal value in "free space," and this may be envisioned as a narrowing of the
hypersolid angle swept out by the Minkowski light cone centered at a given point within
this region possessing a gravitational potential. This contraction in the area of the
hypersurface of the Minkowski light cone may be alternately described in terms of a light
cone which suffers no contraction of its hypersurface area, but a decrease in the uniform
density of vacuum energy occupying the uncontracted light cone surface, and hence
the
equivalence of the spacetime curvature with the spatiotemporal variation in vacuum
energy density.

If we are correct in positing an exact equivalence between spacetime curvature
and spatio-temporal variations in the density of the vacuum's zero-point energy, then the
phenomenon of particle production in a spatially or temporally varying spacetime
curvature, or via the equivalence principle, due to the effects of noninertial motion, may
be explained alternatively in terms of spatial or temporal variations in the boundary
conditions on the vacuum field such that spatial or temporal variations in its zero-point
energy result. In this scenario, the existence of real particles is understood as just a
manifestation of zero-point energy from the vantage point of a noninertial frame of
reference or equivalently, from the standpoint of a region of the vacuum possessing "less
restrictive" boundary conditions than the region of the vacuum in which the particles
appear. On account of the precisely thermal spectrum of the particles produced within
curved spacetimes and also due to the unique requirement of a thermal spectrum for the
vacuum itself in order that it possess Lorentz invariance, an entropy may be meaningfully
assigned to both the vacuum as well as the particles produced from it as a result of the
imposed vacuum boundary conditions.

June 2011
There must be a degeneracy in how energy uncertainty in the vacuum is
transformed into 3-momentum uncertainty such that entropy is increased, i.e., there is a
choice among alternative ways to effect this transformation that is not determined by the
initial and boundary conditions. For example, lifting a particle out of a gravitational
potential and allowing it to fall back to its starting point involves a path-dependence in
addition to creating an "open loop" in spacetime and so necessarily involves the
production of some entropy. "The equivalence principle strongly suggests that freely
falling motion in a gravitational field should be viewed as analogous to inertial motion in
pre-relativity physics and special relativity," cf. Teaching General Relativity,
arXiv:gr-qc/0511073v1, 14 Nov 2005. So what determines the local velocity of light
does so by determining the local velocity of cosmological expansion.
See the
relationship between quantum entanglement, polarization and magnetization, electric
field permittivity and magnetic field permeability, Heisenberg 3-momentum and energy
uncertainty, spin-0, spin-1 and composite spin-2 correlated vacuum fluctuations,
Einstein's causality principle versus the so-called Bohm's causality principle.

Since this production of particles from the vacuum state due to imposed boundary
conditions is a reversible process, because the particles are reabsorbed if the boundary
conditions are later removed, the change in the entropy of the vacuum field must be
exactly compensated by the entropy increase due to the particle creation so that the total
entropy of the particle - vacuum system is a constant. The Feynman path integral
technique for calculating the ground state energies of atoms may ( in principle ) similarly
be utilized to calculate the ground state energy of the vacuum state of free space or,
indeed, the vacuum state of a region of space in which a gravitational field is present. It
is probable that fewer paths comprise the Feynman integral where a gravitational field is
present than in the free space vacuum; this limits the number of valid available paths
along which energy may be exchanged between two points in this particular region of
spacetime - hence the reduced value of the integral, and in turn, the decreased value of
the vacuum state energy in this region. The reduced number of Feynman paths, or
histories, means that the vacuum's ability to exchange energy with itself, as well as its
ability to exchange energy with particles and fields, and thusly to mediate the exchange
of energy between particles and fields among themselves, is correspondingly diminished
so that the rate at which the vacuum's energy density decreases with time ( due to the
expansion of the universe ) is likewise diminished.
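
As a toy stand-in for the path-integral ground-state calculation described above (not the vacuum-state computation itself), one can project out the ground-state energy of a 1-D harmonic oscillator by repeatedly applying a discretized imaginary-time propagator; with hbar = m = omega = 1 the exact answer is 1/2:

```python
# Imaginary-time (Euclidean) propagation onto the harmonic-oscillator ground
# state.  The short-time kernel uses a symmetric Trotter splitting, so the
# dominant eigenvalue of (K * dx) is exp(-dt * E0) up to O(dt^2) error.
import numpy as np

dt, xmax, npts = 0.05, 6.0, 241
x = np.linspace(-xmax, xmax, npts)
dx = x[1] - x[0]
V = 0.5 * x**2

free = np.sqrt(1 / (2 * np.pi * dt)) * np.exp(-(x[:, None] - x[None, :])**2 / (2 * dt))
K = np.exp(-0.5 * dt * V)[:, None] * free * np.exp(-0.5 * dt * V)[None, :]

psi = np.ones(npts)                      # arbitrary start; overlaps ground state
for _ in range(2000):
    new = K @ psi * dx
    ratio = np.linalg.norm(new) / np.linalg.norm(psi)
    psi = new / np.linalg.norm(new)

E0 = -np.log(ratio) / dt
print(E0)                                # -> ~0.5
```

The same projection idea, with the oscillator potential replaced by the relevant field modes, is what a ground-state (vacuum) energy estimate via Feynman paths amounts to in practice.
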

In light of the diminished self-energy of the vacuum, the resultant increased
inertial mass of particles within this altered vacuum may be viewed in two distinct, but
fundamentally similar ways. First, the diminished capacity of the vacuum to undergo
energy exchange with itself means that it is more difficult for the gravitational field
energy to redistribute itself in response to changes in the matter distribution within the
altered vacuum state; consequently, by the general equivalence of gravitational and
inertial masses, it follows that there is an equal difficulty for matter configurations to
change their distributions in response to impressed external forces attempting to
accelerate these mass configurations. This is further theoretical evidence for the
complementary relationship between the mass energy density and the vacuum energy
density which together define the total energy density of any particular region of
spacetime. Moreover, if there are already existing particles both prior and subsequent to
the imposition of the vacuum boundary conditions, then the masses of these previously
existing particles are expected to increase in accordance with the decrease in the vacuum
energy density (and vice versa); this is consistent with viewing particle production more
generally as an increase in mass within the region of varying vacuum energy - as the
conversion of vacuum energy into mass - energy: the fraction by which particle masses
are increased in transporting them from a region of higher vacuum energy density to one
of lower density must complement the fraction by which the vacuum energy density
decreases between these two points.

This means that the maximum density of mass possible within a certain spherical
region is equal to the maximum density of particles which may be created from the
vacuum energy occupying this region, via excitation of the vacuum state. We arrive at
the interesting result that the density of the vacuum energy in a certain spherical volume
of free space (where no mass-energy is present) is precisely equal to the mass-energy
density of a black hole which could possibly occupy this same volume. One important
idea which suggests itself within the context of this discussion is the famous
cosmological constant problem and the discordant interpretations of it within quantum
theory and general relativity theory. There is a 46-order-of-magnitude discrepancy
between the calculations of the value of this constant within these two theories, hence the
profound difficulties in developing a consistent theory of quantum gravity! Now if the
energy of the vacuum is interpreted as suggested by the work of Sakharov and more
recently by the zero-point energy gravitation theory of Hal Puthoff then rather than being,
itself, a source of gravitational fields, like particle or field energy, the energy of the
vacuum would merely be the mediator of gravitation so that differences in gravitational
potential would correspond exactly to differences in the energy density of the vacuum at
two different points in spacetime. A uniform distribution of vacuum field energy would
therefore have no more effect upon matter particles within this energy distribution than
would a series of concentric mass shells upon the matter particles contained within them;
which is to say, no effect whatever, and this due to the precise mutual cancellation of the
combined perturbations to the matter particles by the fluctuating vacuum energy field.
Thus, only differences in vacuum energy density would have any meaning so that the
overall vacuum energy density would play no role in the definition of Einstein's
cosmological constant, and there would be no necessity of postulating a unique exchange
particle mediating the gravitational force; gravity would not in this case be viewed as a
fundamental force as are the electromagnetic, strong and weak nuclear forces, but would
be understood as a "parasitic" force stemming from the imposing of boundary conditions
upon the combined vacuum electromagnetic, strong and weak nuclear fields which
together owe their existence to the fundamental energy uncertainty of the vacuum state,
described by an energy Hamiltonian which is a function of incompatible observables.
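
The black-hole side of the maximum-density claim made earlier is easy to check: the mean density of a Schwarzschild mass of radius R, with M = c2R/2G, falls as 1/R2, so the maximum mass density a spherical region can hold is fixed by its radius alone (a sketch with SI constants):

```python
# Mean density of a Schwarzschild black hole of horizon radius R:
#   rho = M / ((4/3) pi R^3)  with  M = c^2 R / (2 G)  =>  rho ~ 1/R^2.
import math

G, c = 6.674e-11, 2.998e8

def bh_density(R):
    M = c**2 * R / (2 * G)
    return M / ((4 / 3) * math.pi * R**3)

for R in (1e3, 1e6, 1e9):
    print(R, bh_density(R))      # density drops by 1e6 per factor-1e3 in R
```
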

The pure imaginary momentum of all "rest masses" within the 4-hyperspherical
cosmological model may be justified beyond its value as a convenient mathematical
formalism if these masses are viewed as presently being in the act of tunneling through a
hyperspherically symmetric potential barrier. The gradient of this hyperspherical
potential would be a four-vector with components 1, 2, and 3 vanishing in free space, but
transforming through multiplication by a tensor into a new four-vector with non-
vanishing spatial components, resulting in the appearance of a gravitational field.
Certainly this tensor is the matter-stress-energy tensor described in the field equations of
Einstein; the only difference is that the vacuum energy does not contribute to the value of
T, the matter-stress-energy tensor, which is responsible for altering the metric tensor
which describes the curvature of spacetime, or alternatively, the spatiotemporal variation
in the vacuum field energy density. It is perhaps now easier to see at an intuitive level
why the field equations of general relativity predict the existence of a universe which is
either globally contracting or expanding: unless the energy density of the vacuum field is
temporally varying in free space, the matter-stress-energy tensor operates upon a zero
four-vector (representing the gradient of the hyperspherical potential) and the
introduction of matter distributions, represented by the matter-stress-energy tensor, into
this vacuum field, cannot produce a non-zero four-vector, namely, non-vanishing spatial
components of the free space four-vector, i.e., a gravitational field. Within this
particular cosmological model, the energy, linear 4-momentum, and angular 4-
momentum of a particle is always conserved, regardless of motions or accelerations
which it might undergo as a result of interactions with other particles and fields.
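For the tunneling reading of "imaginary momentum," the standard one-dimensional quantum-mechanical result is worth recording (a textbook sketch; whether it carries over to a hyperspherically symmetric barrier is the speculative step):

```latex
% Under a barrier with V > E the stationary Schroedinger equation
% yields an evanescent solution rather than an oscillatory one:
\[
  \psi(x) \propto e^{-\kappa x},
  \qquad
  \kappa = \frac{\sqrt{2m\,(V-E)}}{\hbar},
\]
% which, read as a plane wave $e^{ipx/\hbar}$, corresponds to the
% pure imaginary momentum
\[
  p = i\hbar\kappa = i\sqrt{2m\,(V-E)}.
\]
```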

We are saying here that gravitation is, itself, a four-vector, whose magnitude is always
conserved independently of the matter distribution. The matter-stress-energy distribution
within a particular volume of space merely alters the decomposition of this four-vector
into a new set of vector components in much the same way that a boost, rotation or
translation produces a new decomposition of the Minkowski four-vector which describes
the instantaneous world segment of a particle; hence, matter distributions manifest
themselves as tensor fields in spacetime. If the gravitational field owed its existence to
the presence of matter-stress-energy distributions in spacetime, then we would certainly
describe the gravitational field as being itself a tensor field; however, the gravitational
field is actually a conserved four-vector ( in the sense that the magnitude of this vector is
conserved ), and this four-vector owes its existence to the inverse square decrease in the
vacuum's zero-point energy density in combination with the inverse cubic decrease in the
mass-energy density which results due to the process of cosmological expansion. The
action of matter distributions, however, must be described in terms of a tensor field;
again, the gravitational field, itself, is not a tensor field; the action of mass upon this field
is, however, tensorial in nature. As we know, from the many discussions of attempts to
produce quantum gravity theories, quantization of a 2nd order tensor field results in the
appearance of a spin 2 boson which acts as the unique exchange particle mediating the
tensor field. The four-dimensional zero-point energy gradient does not transform itself
with time in free space in a manner which necessitates a tensor description;
consequently, gravitons will not be present in free space as vacuum field fluctuations;
however, any valid theory of quantum gravity ( assuming one is possible ) demands,
along with the uncertainty principle, that the total vacuum field contain virtual gravitons
in its mix of fluctuating energy, but because a tensor does not describe the transformation
with time of the free space vacuum, the quantization of the total free space vacuum field
cannot include spin 2 particles, which is to say, the free space quantum mechanical
vacuum does not possess virtual gravitons and hence does not possess (per se)
gravitational field fluctuations. Consequently, gravitons do not exist in regions where
matter distributions are present so that the search for gravitational waves must turn out to
be a fruitless endeavor.
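The decomposition analogy leaned on above is the familiar Lorentz-transformation fact: a boost or rotation reshuffles a four-vector's components while preserving its Minkowski norm. This is standard special relativity, stated here only to make the analogy explicit:

```latex
\[
  p'^{\mu} = \Lambda^{\mu}{}_{\nu}\,p^{\nu},
  \qquad
  \eta_{\mu\nu}\,p'^{\mu}p'^{\nu} = \eta_{\mu\nu}\,p^{\mu}p^{\nu},
\]
% the components change under the transformation, but the invariant
% magnitude does not -- the role the text assigns to matter
% distributions acting tensorially on the gravitational four-vector.
```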

Another way in which the imaginary coefficient may be justified is to note that
the rate at which the vacuum energy density decreases with time is proportional to the
vacuum energy density itself, just as are the time rates of all physical processes, so that if
the vacuum energy density is reinterpreted as its probability density (in terms of the
square of the vacuum wavefunction amplitude), then the negative exponential time
evolution of the vacuum probability density implies that the vacuum has a purely
imaginary four-momentum with a four-velocity of magnitude c. The effect of
accelerations, for instance, upon a particle is merely to change the distribution of its total
linear/angular momentum within the conserved 4-quantity. The perihelion shift in the
orbit of Mercury, predicted by general relativity, may be simply understood as a cyclic
redistribution of the planet's 4-angular momentum as it moves around its orbit so that the
3-dimensional projection of its 4-angular momentum varies sinusoidally with the orbital
period; this causes Mercury's 3-angular momentum to be slightly greater than that
predicted by classical mechanics, producing the observed advance in perihelion. The
black hole, as noted earlier, represents mass-energy in its most compressed state. For
maximum symmetrical energy exchange between any two shells occupying a given
volume of matter ( of uniform density ) where the density of vacuum energy exchanges is
proportional to the density of the vacuum energy itself, we require that the density of
mass-energy decrease with the inverse square because certainly the density of bundled
energy trajectories (along which all energy exchanges occur) must also fall off with the
inverse square due simply to the geometry of spherically symmetric radiation of energy
in 3 dimensions. We expect the density of exchange energy, due to vacuum field
fluctuations, to be proportional to the density of energy so exchanged because it has
already been established that the rate at which all physical processes occur is proportional
to the density of Heisenberg-uncertain energy (vacuum energy) and the decrease in the
density of this energy with the expansion of the universe is itself a physical process;
moreover, there is a vectorial continuity equation, analogous to a field equation of
Maxwell's, which describes the relationship of spatial and temporal variations in the
density of the vacuum field energy so that the spatial variation of this zero-point energy
will have the same structure as the temporal variation of the zero-point energy due to
cosmological expansion. The question then arises, " what is the structure of this variation
in vacuum energy density in free space, where no mass-energy is present?"
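For reference, the perihelion advance mentioned above, which any such model must reproduce, can be checked against the standard general-relativistic formula Δφ = 6πGM/(c²a(1−e²)) per orbit. Applied to Mercury's orbital elements (standard tabulated values, not derived from the vacuum-energy argument), it gives the familiar ~43 arcseconds per century:

```python
import math

# Standard GR perihelion advance per orbit:
#   dphi = 6*pi*G*M / (c^2 * a * (1 - e^2))
GM_SUN = 1.327e20          # m^3 / s^2 (G times the solar mass)
C = 2.998e8                # m / s
a = 5.791e10               # m, Mercury's semi-major axis
e = 0.2056                 # Mercury's orbital eccentricity
T_DAYS = 87.969            # Mercury's orbital period in days

dphi = 6 * math.pi * GM_SUN / (C**2 * a * (1 - e**2))  # radians per orbit
orbits_per_century = 36525.0 / T_DAYS
arcsec = dphi * orbits_per_century * (180.0 / math.pi) * 3600.0
print(f"{arcsec:.1f} arcsec per century")  # ~43
```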

Well, the density of the vacuum zero-point energy is only meaningful as a physical
quantity in relation to the density of the mass-energy just as the energy of a particle is
only meaningful in relation to the energy of the vacuum state, so the general time
variation in the mass-energy density due to cosmological expansion should give us a clue
to the manner in which the vacuum energy density changes with time; provided that our
hypothesis of a dynamic vacuum energy mechanism for gravitational fields is
fundamentally correct. Therefore, if we postulate this vacuum mechanism, then it is clear
that the time variation of the vacuum energy density in the universe due to cosmological
expansion must be such that the ratio of the temporal variation in vacuum energy and
mass-energy has the same mathematical structure as the spatial variation in the ratio of
these two densities about massive bodies which acts as a gravitational potential. Since the
gravitational potential decreases inverse linearly so that the strength of the gravitational
field itself decreases with the inverse square, and since the density of vacuum energy
(zero-point energy) must be smaller in stronger gravitational potentials than in weaker
ones because gravitational time dilation increases in step with the increasing potential, it
follows that the ratio of mass-energy to vacuum energy must decrease inverse linearly to
mimic the inverse linear variation in the magnitude of gravitational time dilation;
remember that gravitational time dilation is owing to a decrease in available exchange
energy with which all physical processes are mediated. Hence, since the decrease in
mass-energy is with the inverse cube, the decrease in vacuum energy must itself be with
the inverse square. At this point we note that the decrease in black hole energy density is
with the inverse square of black hole radius. We are therefore led to think of a black hole
as constituting the maximum density of mass-energy possible in the sense that all energy
exchanges occurring within the volume of space occupied by the black hole, occur
between the black hole and itself, symmetrically, with no exchange energy left over to
mediate matter-vacuum energy exchanges.
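The claim that black-hole energy density falls off as the inverse square of the radius can be checked for the simplest case: the mean density inside the Schwarzschild radius R_s = 2GM/c² scales as 1/R_s². A sketch using standard constants (this verifies only the geometric scaling, not the exchange-energy argument):

```python
import math

G = 6.674e-11   # m^3 / (kg s^2)
C = 2.998e8     # m / s

def schwarzschild_radius(m_kg):
    return 2 * G * m_kg / C**2

def mean_density(m_kg):
    """Mass divided by the volume enclosed by the Schwarzschild radius."""
    r = schwarzschild_radius(m_kg)
    return m_kg / ((4.0 / 3.0) * math.pi * r**3)

# Doubling the mass doubles R_s and quarters the mean density,
# i.e. density goes as 1/R_s^2:
m = 10 * 1.989e30                      # a 10-solar-mass hole
ratio = mean_density(2 * m) / mean_density(m)
print(ratio)  # 0.25
```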

This is presumably why the intensity of gravitational time dilation is infinite at the
surface of a black hole; the vacuum energy fluctuation field (zero-point energy) no longer
interacts with the black hole mass so that no physical processes (which can be
communicated to the "outside") are mediated. As stated earlier, it is the interaction of the
vacuum zero-point energy with quantum mechanical systems which is wholly responsible
for all changes in the quantum mechanical observables in the system, i.e., temporality of
the system. The theory of quantum electrodynamics explains the propagation of
fermions and bosons in the following manner: a massless photon propagates through
spacetime by continually transforming into an e+e- pair and back again into a photon of
identical energy (assuming a flat spacetime), while an electron propagates through
spacetime by continually transforming into an electron plus virtual photon and back
again into an electron of identical energy.


I thought I would write you, first of all, to thank you for the interpretive astrological chart
you mailed to me a couple of days ago. I could have done so by telephone, but somehow
it's more sincere to reciprocate by writing you - thanks so much for the time and
consideration you must have put into constructing it, as well as explaining its obviously
portentous, if to me somewhat cloudy significance. Even if it turns out that you have
enlisted the aid of a computing device in interpreting it, I remain flattered, nonetheless,
by the obvious attention. I have always known that the horoscopes appearing in your
average city newspaper are very much like fortune-cookie messages. They are so
ambiguous that, combined with human suggestibility ( cause for the unreasonable
effectiveness of natural languages ), the component words can't help but conspire to form
an intriguing personal insight, fleeting, as it usually turns out. In the case of this "a boy's
first astrological reading," I'm sentimental in thinking that my chart, lovingly prepared by
you, does, in fact, contain some important hints and warnings which I might do well to
meditate on.

Your claim, through the chart, that I need to "do more Leo things," and that I need
to seek out persons with a lot of energy in their earth and air signs in order to balance the
plethora of energy flowing from my water trine (sp?), seems to be really good advice.
Also, the fact that the north node of my moon, representing my path of highest potential,
is opposed by Saturn, representing the influence of my father, or perhaps fatherly
influences, appears to explain chronic problems I've had in the past in self-definition and
development. I confess, Leslie, that I haven't given enough thought to the other messages
in my chart. I have already ordered the book, Inner Sky, from Eliot's, and I've promised
myself that I will return to the study of my personal chart anew, once I feel I've gleaned
enough of a working knowledge from reading, or skipping through, the book. Getting
back to the ambiguity question: astrology probably possesses too few built-in
constraints upon its interpretive procedures, the number and variety of which have
steadily increased over the millennia, compared to its relatively smaller and constant
number of possible symbolic structures - notwithstanding the discovery of a few new
planets. I nonetheless feel that it is valuable as a metaphor in a couple of different ways.
Firstly, it acknowledges the fact, kept secret since the Enlightenment, I think, that the
development, or unfolding, if you will, of history is by no means linear or logical, but at
once cyclical and suprarational: history, both that of civilizations as well as that of
individual persons, repeats itself interminably, though never exactly in the same way
twice. It is an inextricably intertwined dance between the act of creation and the act of
interpretation. If History seems so amenable to a systematic interpretation, it is only
because so many of the great human figures who have played a role in shaping it have,
themselves, been serious students of history. Secondly, the social relationships which
form between persons place them in various mutual orbits which help to realize or inhibit
their multiple though not altogether consistent potentialities, determining more fully their
identity and, hence, their life's fate. Assimilating the basic handful of distinct personality
archetypes to their analogous astrological signs opens up a rich interpretive structure
within which the transformative or stultifying effects of each personality type upon the
other may be predicted and explained. The positions of the various planets within our
Solar system move along cyclical courses, to be sure, but they never exactly repeat any of
their previously held configurations - contrary to what the best of eighteenth century
science might have had to say about it. This seems to grace existence with a richness of
multiple potentialities.


07/98

And now, we will practice Microsoft Word by taking notes on Jacques Derrida, "The
Retrait of Metaphor," which was published in ENCLITIC 2:2 (1978): 5-33. As we know,
the editors point out that the article works with two semantic systems for the word
RETRAIT, which has a variety of meanings in French.

The reason that words tend to have myriad meanings is that words coined to denote
things or activities within some original context are borrowed for use in an unfamiliar or
less familiar one. But the first denotative terms were actually metaphor since various
images were being assimilated over time (in the subjective experience of primitive man)
to the notion of a thing which appeared and reappeared. So objects are distilled out of the
flux of experience in the developing feedback between the infant and his environment
and constitute a kind of reified metaphor.
September 2011
Nozick notes that Frege held the view that ―concepts cannot be referred to (as concepts).‖

And this is what the Self really is. This is somewhat Kantian and is along the lines of
what Piaget says about cognitive development in the sensorimotor stage: the infant
learns through interacting with a reactive environment that the image of her hand moving,
the sound it makes when it strikes a mobile hanging over her crib, the feeling in her hand,
the kinesthetic sensations in the arm and shoulder muscles are all part of the same
―thing.‖ This integration of sensation has to be learned from experience. Existence
definitely precedes essence and objects in the external world and the Self emerge from
the flux of dissociated sensation simultaneously. We commonly hear of the
―thrownness‖ of the individual. More correct here is to speak of the thrownness of the
self and its world together in a single act.
Information never once entered one‘s
developing infantile brain. Rather, data streamed in the form of trains of neural impulses
into one‘s developing brain via the various sensory nerve channels and were interpreted in
terms of context-providing structures, which were in the midst of being developed by you
or, rather, these context-providing structures and your ego/self were hand-in-hand
simultaneously developing.

Metaphor represents the right brain version of the left brain/analytical activity of the
instantiation of abstract categories.
September 2011
Metaphor cuts across the established lines
along which abstract categories are fashioned and seeks a larger umbrella category for
two or more contexts whereas abstraction or abstract thought consistently works within a
single context. This distinction is akin to that between disciplinary and interdisciplinary
theoretical research.

There is always somewhat of an insight involved in the use of metaphor and it‘s a
linguistic competency not likely to ever be equaled by a machine. When we learn a
language this latent structure of metaphor lying at various levels beneath the surface of
language is subconsciously assimilated and it conditions and delimits all thought, even at
its most creative. Especially then.

―Rich as the English language is in media of expression, it is curiously lacking in terms
suitable to the conveyance of abstract philosophical premises. A certain intuitive grasp of
the subtler meanings concealed within groups of inadequate words is necessary therefore
to an understanding of the ancient Mystery Teachings‖; cf. The Secret Teachings of All
Ages.

Now mostly what Heidegger does when he's doing metaphysics is to unearth this latent
structure by going back, he thinks, much closer to where it originated. Usually in the
Greek. When you read Heidegger you realize that, at bottom, that's all metaphysics really
is - it's just archeology of language, the "mining" of latent metaphors which are
masquerading as purely denotative concepts or logical categories. When I'm doing
metaphysics, I always feel that I'm not completely in control of what I'm thinking and
sometimes I feel like I'm more or less a passive vessel into which insights flow and
intuitions crystallize. And that's because I think that I'm supposed to be utilizing clear
and distinct categories although I'm really utilizing metaphor. All of the time, in fact.
This is why logocentrism doesn‘t really work. And logocentrism is itself the
reification of a metaphor and does not really qualify as a truly denotative concept.
Deconstruction deconstructs itself. The statement that absolute truth is false cannot be
absolutely true! Deconstruction is a giant case of question-begging, I think.


In the instantiation of a concept or logical category, the grasping of a particular is prefigured
in the pre-existing concept or form which is not expanded or enlarged through this re-
cognition under the concept of one of its concrete particulars. In metaphor, however,
there is a creative interpretation of the unknown or unfamiliar through the importation of
a contextual web of associations (based in experience) as opposed to logical relations or
abstract categories. A static, stable order in the old context becomes a dynamic ordering
principle whenever it is transplanted into the new context. The dynamism is generated
by the reactivity of the new context as ground into which a seed or viral contaminant of
foreign meaning is introduced. In metaphor an inductive as opposed to a deductive step
is taken which enlarges the original category that was borrowed. And all of the entities
treated of denotatively are, as alluded to already, metaphorical constructs. This is what
makes metaphor open-ended and irreducible in the scope of its action, as well as
translogical: logic presupposes metaphorical relationships, and so the process by
which formal categories are generally brought about cannot itself be given a formal
description, which is to say, no formal description can be given for how formal
descriptions are generally brought into being. Metaphor, which is prelogical, underlies
the production of all formal categories/abstract concepts. This idea seems to support
Alan Watts‘ critique of the presumably inviolate principle, come down to us from the
ancient Greek philosophers, of ex nihilo nihil fit.
September 2011
―Discover all that you are
not — body, feelings, thoughts, time, space, this or that — nothing, concrete or abstract,
which you perceive can be you. The very act of perceiving shows that you are not what
you perceive.‖ [italics mine] - Sri Nisargadatta Maharaj, I am That.

One of the systems has to do with retreats, retracings, withdrawals, and so on: leading to
questions of economy, pathway, passage, and circulation. The second system has to do
with erasure/rubbing but also usury, by which use and wear BUILD UP or increase
value/meaning.

There are two kinds of metaphor - two senses in which metaphor is a "Retrait." The first
is the interpretation of the new in terms of the old. The second is the reverse of this: the
reinterpretation of the old in terms of the new, such as a metaphor, suggested, for
example, by new social relations enabled by developments in technology. An example of
this might be the drawing of an analogy between the rise of the Internet and the World
Wide Web's impact upon postmodernity and the social/cultural impact of the printing
press upon the Renaissance in Europe ( in terms of the freeing-up of individualism). By
making of history a Palimpsest, we make the transition (passage) into the future less
discontinuous and more comprehensible.

Wornness, worn-outness, will be important here as well, since Derrida will be talking
about metaphor as something old, something coming near its end. Is Derrida talking
about the ending of History in the sense of the end of grand narratives?

Myth is metaphysics clothed in metaphor. The most fundamental myth is that of the Ego
or Self-consciousness. The Ego is the most fundamental of myths because it represents
the operation of metaphor at its most fundamental: consciousness is an unbounded flux
which is in continual change along a determined but not predetermined path. The Ego
possesses continuity throughout this fluxion despite its always being the artifact of an
ever changing ground. The Ego always manages to reconstitute itself as such against this
changing, grounding flux of altering consciousness. The Ego in the present moment is
always the importation of a structure from the previous momentary ground
(consciousness) into a new one all the while remaining the self same Ego. Sorry if I‘m
belaboring the obvious.

Derrida begins by pointing out that metaphor works with these notions of passage and
circulation: inhabiting, transporting oneself, passing through, and so on: all of this is of
course good for poetry in general, and given my fixation, for Vallejo in particular. A
key initial idea is that while we think we "use" metaphor, it in fact comprehends us,
sweeps us away, displaces us: we aren‘t like a pilot in his ship, we‘re DRIFTING,
skidding.

The importation of the structuring of the old ground from the preceding moment manages
always (or almost always) to impose a new structure upon the newly emerging ground
which returns the Ego to itself. This return of the Self to Itself continually, all the while
the ground of consciousness fluctuates underneath it, represents the power of metaphor in
its greatest generality.

For this reason we might term Mind the metaphor of all metaphors. And that is
inevitable, for no speech is possible without metaphor.

[It is not clear to me why Derrida thinks metaphor is coming to the end of its life; he says
it‘s old, but does he say how he knows it‘s almost ―retiring‖ (he says it is retiring)?] But here
comes something: because it‘s old, it has MORE and not less weight: a lot is attached to
metaphor. Metaphor is "a suspensive withdrawal and return supported by the line
(TRAIT) delimiting a contour" (9) [this again is good for Vallejo].

Now he asks why we privilege Heidegger‘s text (he doesn‘t say which text) on this topic.
It seems to be because of H‘s concentration on TRAIT, in the sense of line, the "tracing
incision of language" (10). Now D reveals two of H‘s titles: DER SATZ VOM GRUND
and UNTERWEGS ZUR SPRACHE. He also reminds us, in his inimitable way, that he
will quote himself ("WHITE MYTHOLOGY: Metaphor in the Text of Philosophy") but
this is not in order to draw attention to himself but rather, so as not to have to repeat here
what he said there (yeah, yeah, Jacques-baby).

This is getting difficult. D is going to slip himself through one of H‘s notes on metaphor
- in which "the metaphoric exists only within the boundaries of metaphysics" - as
discussed by Ricoeur in LA METAPHORE VIVE, whose eighth essay, in turn, discusses
D‘s "White Mythology" piece. [Gossip: the current piece by Derrida was read at a
symposium in Geneva where Ricoeur also read.] Anyway, the point is that we will be
relating metaphor and metaphysics here, in the above sense in which the metaphoric exists
only within the boundaries of metaphysics. [Guessing: as we know, D wants to get
beyond metaphysics, so I suppose this article will try to lead us beyond metaphor: let‘s
see, that‘s interesting, it sounds THEOLOGICAL to me and I know D would probably
hate me for thinking so.]

D says R didn‘t pay enough attention to this point of H‘s. So now he will critique R. First
point. R, according to D, assimilates D too easily to H. Second point. More on R‘s
misreading of "White Mythology;" over-assimilation to H. [Not having read "WM" or
the Heidegger piece on it, it‘s hard for me to comment here.] [Gossip: D comes from a
repressive family background, I can tell, he‘s like me, keeps saying "but that‘s not what I
said, how can you attribute it to me" - he is very fixated on being precisely understood, I
agree intellectually with that feeling, but what I am gossiping about here is
his tone.] Here, he‘s also mad at R because, D says, R criticizes D from the place to
which D had himself carried the critique.

A key point appears to be that according to R, "WM" makes death or dead metaphor its
watchword - this idea offends D (note though that R‘s text is called LIVE METAPHOR).
What D purports to really be talking about is the TWO DEATHS or SELF-
DESTRUCTIONS OF METAPHOR [he doesn‘t explain this here; we have to read
"WM" which I‘m beginning to suspect is more interesting than the piece at hand].

Now we talk about economy. A. usury B. the law of the house C. EREIGNIS [?] D.
passage, fraying, transfer, translation, withdrawal (because, I intuit, metaphor TODAY is
withdrawing, according to D). Now we look at mother-tongue and father-language again,
complicated little arguments, my first guess here is that mother-tongue is not metaphoric,
but father-language is metaphorical and metaphysical, has to do with formal language,
the law, and so forth.

Retreat, tracing, translation - let‘s talk about "traits," then. We need metaphor when we
can‘t get to Being - if we could get there, there would be no metaphor. And, what
Heidegger calls metaphysics ITSELF corresponds to a withdrawal of Being. So we only
get out of metaphysical/metaphorical discourse by a withdrawal of the withdrawal of
Being.

[SIDE NOTE: COMPARE TO THE NIETZSCHEAN TALK ON MINERALS IN THE
YES: COMPARED TO THAT, THIS SEEMS VERY WESTERN AND ENCLOSED, AND
COMPARED TO EASTERN RELIGION, WELL, NEED I SAY MORE?] Anyway,
what we‘re going to get with metaphor is a series of retreats, withdrawals - this is how
metaphor gets so complex - as it withdraws, it "gives place" to "an abyssal generalisation
of the metaphoric."

Being, like metaphor, "withdraws into its crypt" [VAMPIRES AGAIN! DOES THIS
MAKE BEING A VAMPIRE? PROBABLY, DAS STIMMT, IT FITS.] BUT, and this
is going to be important, we get a CATASTROPHE when metaphoricity no longer allows
itself to be contained in its ―metaphysical‖ concept - when (I THINK) metaphor stops
being a metaphor of something that is absent (but whose absence is palpable, as in the
absence of Abraham‘s God).


Existence is, of course, the contextualizing of Being. The withdrawal of Being would
mean the loss of coherence of the ground of existence. Metaphor is the continual
recontextualization of Being which maintains this coherence of ground. Metaphor, in its
root and most basic manifestation, effectively simulates the continued presence of Being.
But this is for some not satisfactory. One tragically desires the Being of the Other. The
phenomena resulting from the very action of Being, Being‘s metaphorical manifestation
as existent entities falls short of this tragic desire for Being. But secretly the Being of the
Self and the Being of the Other are one and the same. For one is the Other for the Other.
But one can see from one‘s own case, that one is more than merely the Other for the
Other! Metaphysics is the attempt to discursively describe what can only be glimpsed,
which is the coherence of existence in the light of Being‘s presencing as Other.
Metaphysics tries to reconstitute Being from out of the coherence of Being‘s existence
even in Being‘s absence. The immediacy of Being obviates the necessity of metaphysics
as "ontological neurosis" caused by its withdrawal. On the other hand, the thrownness of
Being is its thrownness as Self and Other simultaneously.

The Self is a sociolinguistic construct. The self only emerges within the social
environment of a linguistic community. Part of the process of learning any given task is
that of the making of subvocalized, mental notes to oneself as one is attempting to
perform and master the task. So this is not entirely a case of learning by doing. But
when it comes to learning the ―task‖ of becoming minimally competent in one‘s first
language - this is entirely an example of ―learning by doing‖. The understanding of what
one is doing appears later after the necessary preparation of ground.

We have here the real thing in hand and we can dispense with saying what something is
like. Our difficulties in having an authentic relationship with Being, which would have
powerfully validated the self, stimulate in us an impulse to hatefully gossip - to talk badly
about Being. Metaphysics is an attempt to deconstruct Being which is motivated by a
dark, underlying necrophilic urge to tear down, demystify, and demythologize the Other
which seems to have rejected us, not unlike a haughty and unapproachable, would-be
lover. But it is not Being which has done the rejecting here. Rather, this necrophilic and
destructive impulse, which manifests itself in the form of a metaphysics of Being, is
precipitated not through Being‘s callous rejection of us, but on account of rage against our
own impotence to relate intimately to Being.

But there is another sort of metaphor in Heidegger, a non-metaphysical one. [AND AT
THIS POINT, I AGAIN CONFESS I DON‘T KNOW: I THINK DERRIDA READS
HEIDEGGER MORE WESTERN-LY THAN I, AND THIS MAY BE CORRECT - BUT
SOMEONE TELL ME SOMETHING ON HEIDEGGER, I READ HIS BEING IN
SORT OF A ZEN SENSE, THOUGH I WOULD SAY HE IS MORE INTERESTED IN
AN ANTHROPOMORPHIC WHOLE THAN ARE THESE EASTERN TYPES - I GET
THE STRANGE FEELING THAT I DON‘T KNOW ENOUGH TO HAVE AN
OPINION, BUT THAT ON THE OTHER HAND IF I KNEW ENOUGH TO OPINE I
WOULD BE INSIDE THE DISCOURSE AND HAVE TO AGREE WITH IT.]

End of metalanguage, metaphysics, meta-rhetoric, but pure metaphoricity. By now
we‘re talking about famous Heidegger lectures like "The Nature of Language."
Metaphors, words, are INCISIONS, tracings - as in wood-cuttings, gravures,
engravings - and these incisions make possible graftings, so to speak, splicings - and
BEING ITSELF IS A DIVERGENCE, A SPLITTING, SOMETHING CONSTANTLY
IN WITHDRAWAL, A BORDER [***AGAIN THINK VALLEJO, THIS IS VERY
SUGGESTIVE FOR "LINDES"] - ITS INSCRIPTION SUCCEEDS ONLY IN BEING
EFFACED (that‘s the English translation, a more interesting affirmation than the French
original ―n‘arrive qu‘à s‘effacer‖) - being happens and comes about only in effacing
itself (there is more on this). The essence of speech is INCISION [this is interesting, we
speak of "incisive arguments" but here speech IS incision] - INCISION BRINGS
TOGETHER AND SEPARATES THE VEILING AND THE UNVEILING [now there‘s
a metaphorical phrase ;)] - so today, metaphor is withdrawing, splicing, un/joining.
What is happening? ―Rien, pas de réponse, sinon que de la métaphore le retrait se passe
de lui-même‖ - roughly: nothing, no answer, except that the retrait of metaphor comes to
pass of itself.

I have often marvelled at how the movement of Being through time is at the selfsame,
identical instant, both a passing away and a coming into being. In other words, the
coming into being and the passing away of the Self within the flux of consciousness
(during each passing instant) are grounded in the very same phenomena, and this paradox
of passage is essential to the continuity of experienced time.

07/98
Representation is grounded in the participatory, and the objective is no more "real"
than the intersubjective. Representations are metaphors and convenient recapitulations of
an open-ended historical process. All form is metaphor; the concrete always transcends
the metaphorical.
07/98


The rate at which a dynamical system temporally evolves may be given a consistent definition in
terms of the ratio of the density of energy exchanges of the system with its outside
environment to the density of 3-momentum exchanges of the system with itself. By this
definition, the most rapidly temporally evolving dynamical system would be that of the
pristine quantum mechanical vacuum state - the quantum vacuum in the absence of real
particles or fields. We must note that the notion of the absolute passage of time, i.e., the
passage of time for reality as a whole, is a meaningless concept, or at least, a concept
which cannot be given a self-consistent formulation or interpretation. This fact is
intimately related to the fact that a thermodynamic system to which the notion of entropy
applies (the 2nd Law of Thermodynamics) is by definition an open system in the sense of
a system undergoing continual energy exchange with a thermal reservoir or "heat bath."
A completely closed system, as noted earlier, would possess initial and boundary
conditions resulting in the quantizing of energy and momentum throughout the system
giving it a closed state space and a Poincare recurrence time which would be
indistinguishable from a finite 4th spatial dimension. In such a system, with time being
spatialized, the notion of the direction of time is completely arbitrary - there is no outside
to which the system is tied which can serve as a memory of the history of the system to
prevent the system from being completely reversible. The system would be ergodic and
possess a conserved phase space volume. In perturbation theory within quantum
mechanics, we find that an incompletely described dynamical system is approximated by
a Hamiltonian possessing a perturbation energy: the system may be thought of as one
exactly described in terms of a Hamiltonian, H_0, which is interacting with a larger energy
system through the perturbation Hamiltonian, H_fluc, which is simply added to H_0 such that
the new wavefunction calculated from this sum through the Schrodinger equation is just
the new wavefunction expanded in terms of the old one defined in terms of H_0. In this
way the actual system is seen to be the old system undergoing virtual transitions between
its energy eigenfunctions. The old system's energy uncertainty is represented in terms of
the perturbation energy associated with the fluctuation Hamiltonian, H_fluc. In this way, it
is seen that, in general, the temporal evolution of any quantum system is representable in
terms of the interaction of an approximate system, represented by a zeroth order
Hamiltonian, H_0, with its outside environment from which it has originally been
abstracted. When one has taken into account all possible perturbations due to real
particles and fields interacting with the given system in question, one is left with the
ineradicable residue of the quantum vacuum itself. So the concrete (and real)
temporality of any quantum system, once the mere appearance of change in the system
due to inadequacies in our nth order perturbation expansion description of the system
has been taken into account, is wholly attributable to the action of the quantum
mechanical vacuum. So we now come to an important distinction: changes in the system
which are not directly measurable, and hence understood as virtual transitions between
energy levels of an approximate Hamiltonian description of the system, versus transitions
between energy levels of the system due to an actual incompleteness or openness of the
system description, i.e., due to ontological (actual) indeterminacy or indefiniteness of the
system itself, as opposed to mere epistemological indefiniteness of the system, which is a
mere artifact of an incomplete quantum-perturbative analysis of the system. This is the
distinction of ontological versus epistemological energy uncertainty of a quantum
mechanical system. The above discussion pertains to the distinction, made in an earlier
letter, of ΔE, which I have said may be wholly attributable to the observer, and the square
root
cont'd
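The decomposition of a system into H_0 plus a fluctuation term described above can be sketched numerically. A minimal sketch, assuming a hypothetical two-level system (the eigenvalues and the coupling strength are illustrative choices, not values from the text): diagonalizing H_0 plus a small off-diagonal perturbation shows each exact eigenstate as an expansion in the old H_0 basis, the "old system undergoing virtual transitions between its energy eigenfunctions."

```python
import math

# Unperturbed two-level Hamiltonian H_0 (diagonal, eigenvalues 1.0 and 3.0)
# and a small "fluctuation" coupling H_fluc (off-diagonal). All values are
# illustrative assumptions.
E1, E2 = 1.0, 3.0
g = 0.2  # perturbation strength

# Full Hamiltonian H = H_0 + H_fluc as a symmetric 2x2 matrix:
#   [[E1, g],
#    [g,  E2]]
# Closed-form eigenvalues of a symmetric 2x2 matrix:
mean = 0.5 * (E1 + E2)
half_gap = math.sqrt((0.5 * (E2 - E1)) ** 2 + g ** 2)
lam_minus, lam_plus = mean - half_gap, mean + half_gap

# Mixing angle: each exact eigenstate is an expansion in the old H_0 basis.
theta = 0.5 * math.atan2(2 * g, E2 - E1)
overlap = math.cos(theta) ** 2  # weight of the old ground state in the new one

print(lam_minus, lam_plus)  # perturbed levels, repelled away from 1.0 and 3.0
print(overlap)              # close to 1 for weak coupling
```

Note that the trace is preserved: the perturbation shifts the two levels apart but cannot change their sum, which is the finite-dimensional analogue of the perturbation merely redistributing, not creating, energy.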



Since momentum and position are incompatible observables, so are a function of
momentum and a function of position. Now the total energy of any quantum mechanical
system, the Hamiltonian, H(p,r), is the sum of its kinetic and potential energies, H(p,r) =
T(p) + V(r), where p and r are momentum and position, respectively. So, by what has been
said, H(p,r) cannot have a precise value, for this would imply simultaneously precise
values for the kinetic and potential energies, which, in turn, would imply simultaneous
values of p and r. So the value H(p,r) must undergo fluctuations of a fundamental sort.
Now even the vacuum is a quantum system, i.e., a qm ground state.
So the vacuum's
Hamiltonian, that is, its total energy, must also fluctuate. These fluctuations interact with
every particle and field, introducing uncertainty in the location of particles in phase
space, i.e., x-p space. All measurement does is alter the shape of the area of phase space
"occupied" by the particle. Measurement does not, however, change the area of phase
space where the particle is likely to be found. The particle does not possess an exact
"position" within the x-p (phase) space. We can never say beforehand how the vacuum
fluctuations interacting with the particle (and out of which the particle is constituted and
sustained) will nonlocally resonate with the vacuum fluctuations interacting at the time of
measurement with the observer's brain (the observer's brain is also a quantum system,
BTW). Remember that q̄ = sqrt[q̂**2 - Δq**2], where q̂ is the fluctuation of q due to
the quantum vacuum and Δq is the uncertainty in q which may be wholly attributed to the
observer's brain due to the influence of vacuum energy fluctuations upon it! It is the
cooperation of these two terms which results in q̄, the expectation value (classical value)
of q! This perhaps reminds some of you of Huxley's theory of perception: the receipt of
photons by the retina of the observer results in a stimulation of the brain in such a way
that its "ether wave filters" reconfigure so that the signals representing the object seen are
no longer screened out by the consciousness-reducing valve (the brain, that is) and are
then "picked up". The brain is then conceived of as a kind of ether wave tuning device,
and perception is just an altering of the set of frequencies of ether waves (vacuum
fluctuations, if you prefer modern parlance) with which the vacuum can resonate, where
the brain acts only as a hardware interface between two unbounded sets of interfering
ether wave spectra.
The brain on this view is simply a changeable and complex set of boundary conditions
placed upon the vacuum electromagnetic field's self-interaction! "What we see and hear,
or what we feel and smell and taste, is only a small fraction of what actually exists out
there. Our conscious model of reality is a low-dimensional projection of the inconceivably
richer physical reality surrounding and sustaining us. Our sensory organs are limited:
They evolved for reasons of survival, not for depicting the enormous wealth and richness
of reality in all its unfathomable depth. Therefore, the ongoing process of conscious
experience is not so much an image of reality as a tunnel through reality," The Ego
Tunnel: The Science of the Mind and the Myth of the Self (Metzinger).

Is there some general relationship between the height of the potential barrier and the
magnitude of the energy uncertainty? Or is there really no general principle at work here
relating these two quantities?

Hψ = Eψ --> <ψ*Hψ> = <E> --> <ψ*H**2ψ> = <E**2>, <ψ*Hψ>**2 = <E>**2
ΔE = sqrt{<E**2> - <E>**2}, where H = H(T(p),V(x))
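The energy-uncertainty formula above can be checked with a toy computation. A minimal sketch, assuming a hypothetical two-level system with the state an equal superposition of energy eigenstates (the eigenvalues are illustrative only): the variance <E**2> - <E>**2 is nonzero precisely because the state is not an eigenstate of H.

```python
import math

# Hypothetical two-level system: eigenenergies E1, E2 (illustrative values).
E1, E2 = 1.0, 3.0

# State |psi> = a|E1> + b|E2>, an equal superposition (not an H eigenstate).
a = b = math.sqrt(0.5)
p1, p2 = a * a, b * b  # Born-rule probabilities

# Moments of the energy, as in <psi*H psi> = <E> and <psi*H**2 psi> = <E**2>.
mean_E = p1 * E1 + p2 * E2
mean_E2 = p1 * E1 ** 2 + p2 * E2 ** 2

# Energy uncertainty: Delta E = sqrt(<E**2> - <E>**2).
delta_E = math.sqrt(mean_E2 - mean_E ** 2)
print(mean_E, delta_E)  # -> 2.0 1.0
```

Running the same computation with a = 1, b = 0 (an energy eigenstate) gives ΔE = 0: the fluctuation is a property of superposition, not of the spectrum itself.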

What is the relationship between the reduction of the wavepacket upon an observation
being performed on some quantum mechanical system and the conversion of virtual
particles into real particles?

It may be possible to modify Poisson's equation, ∂²P/∂r² = 4πρ, to include a 2nd
partial derivative of P, the potential, with respect to the time, such that we might
assimilate the 2nd partial derivative with respect to r to the state variable, ρ_mass, and
assimilate the 2nd partial derivative of P with respect to t to the state variable,
ρ_vacuum, so that ρ in the above equation may be interpreted as the space density,
which is a locally conserved quantity.
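The modification just proposed can be written out schematically. A LaTeX sketch of one possible reading (the assignment of each derivative to a density, and the relative sign of the time term, are assumptions made for illustration, not statements from the text):

```latex
% Standard Poisson equation for the potential P:
%   \frac{\partial^2 P}{\partial r^2} = 4\pi\rho
% Proposed modification: add a second time derivative and split the
% source into mass and vacuum pieces (assumed correspondence):
\frac{\partial^2 P}{\partial r^2} \;\longleftrightarrow\; \rho_{\text{mass}},
\qquad
\frac{\partial^2 P}{\partial t^2} \;\longleftrightarrow\; \rho_{\text{vacuum}},
\qquad
\frac{\partial^2 P}{\partial r^2} + \frac{\partial^2 P}{\partial t^2}
  = 4\pi\,\rho_{\text{space}},
\quad
\rho_{\text{space}} = \rho_{\text{mass}} + \rho_{\text{vacuum}}.
```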


Let us examine Einstein's field equation for any potential mathematical affinity it might
have with respect to our equation relating the space energy density to the sum of the
vacuum and mass energy densities.

====>  T_uv = -R_uv - (1/2) R g_uv


Each of these three terms is what is called a tensor density. They have physical
dimensions of energy density. In Mach's formula for the speed of pressure-wave
oscillations in a continuous, energy-conducting medium, the pressure is associated with
the vacuum energy density, since the quantum vacuum always obeys the equation of state
that its pressure and energy density are identical. But this identification leaves only one
possible further identification of the medium energy density: the energy density
must be identified with the total energy density of space, what is termed within our
theory the space density. In order for an entropy and temperature to be assigned to the
quantum vacuum, we must suppose that this vacuum remains in thermal equilibrium with
this heat reservoir, the energy density of which is the space density referred to earlier.

Intuitively, if any further identifications are to be made between terms within our theory
and terms within Einstein's theory, then the following identifications might be made:

The scalar curvature, R, should be identified with the space density; the momentum-
energy tensor, T_uv, should be identified with the mass-energy density; and the term,
-R_uv, should be identified with the vacuum energy density. The term, g_uv, which in
relativity theory is the dimensionless dot product of the spacetime coordinate unit
vectors, e_u and e_v, may be alternatively interpreted to correspond to the ratio of the sum
of the momentum-energy and Riemannian tensor densities to the scalar energy density.
Within our theory, the g_uv correspond to mixed 2nd order partial derivatives of the ratio
of the vacuum scalar energy density to the total space energy density.
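The proposed dictionary between the two theories can be tabulated. A LaTeX sketch summarizing the identifications as stated (the correspondence arrows are the text's conjecture, not established results):

```latex
% Conjectured identifications between Einstein's equation
%   T_{\mu\nu} = -R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu}
% (sign convention as written in the text) and the densities of the theory:
R \;\longleftrightarrow\; \rho_{\text{space}}, \qquad
T_{\mu\nu} \;\longleftrightarrow\; \rho_{\text{mass}}, \qquad
-R_{\mu\nu} \;\longleftrightarrow\; \rho_{\text{vacuum}}.
```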

xx

[Diagram: three bordered panels labeled "virtual - virtual" (x's only), "real - real"
(o's only), and "real - virtual" (a mixture of x's and o's), depicting the three modes of
particle exchange.]



G = U - TS (free energy) is minimized, and configurational entropy is maximized, when
rho(v) = rho(m) in the formation of a black hole.

xx

Do the partial derivatives of the gravitational potential transform like the components
of a four-vector? It would appear that the 1st order partial derivatives of a standard
static gravitational potential should transform under an arbitrary Lorentz transformation
so as to evince the existence of a time-varying potential, and hence, that of a
4-hyperspherical potential.




There is an important distinction to be made between massive and massless particles.
This distinction consists in the fact that a massive particle which is seen to be at rest has a
4-momentum which is purely imaginary, but which may be re-represented by a Lorentz
transformation in terms of a new set of real and imaginary components within some
different inertial reference frame. This is not generally true of massless particles,
however. A massless particle, such as a photon, possesses a relativistic 4-momentum
which is purely real in any and all inertial reference frames. There is no possible Lorentz
transformation which can succeed in re-representing the 4-momentum of the photon as a
mixture of real and imaginary momentum components. However, in the case of real
massive particles, the relativistic mass increases exactly in step with the increase in
imaginary momentum. This suggests that perhaps photons do not possess a gravitational
mass, and that the true source of the gravitational field is a massive body's imaginary
momentum. How then, if this is true, do we account for the disappearance of the
gravitational mass which results from the total conversion of mass into photon energy?
Does this energy disappear in the form of longitudinal pressure waves in the quantum
vacuum?
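The contrast drawn above between massive and massless 4-momenta can be illustrated numerically. A minimal sketch in units with c = 1 (the particular mass and boost velocity are illustrative assumptions): a Lorentz boost reshuffles the components of a 4-momentum but preserves the invariant E**2 - p**2, which equals m**2 for a massive particle and remains exactly zero for a photon in every frame.

```python
import math

def boost(E, p, v):
    """Lorentz-boost a (1+1)-dimensional 4-momentum (E, p) by velocity v (c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (E - v * p), gamma * (p - v * E)

# Massive particle at rest (illustrative mass m = 2.0): E = m, p = 0.
m = 2.0
E1, p1 = boost(m, 0.0, 0.6)  # viewed from a frame moving at v = 0.6

# Massless photon: E = |p| in every frame.
E2, p2 = boost(1.0, 1.0, 0.6)

# The boost only "rotates" the 4-momentum; its invariant magnitude is fixed.
print(E1 * E1 - p1 * p1)  # stays at m**2 (4.0, up to rounding)
print(E2 * E2 - p2 * p2)  # stays at 0.0 for the photon
```

The design point: no choice of v can make the photon's invariant nonzero, which is the numerical counterpart of the claim that no Lorentz transformation can re-partition a photon's 4-momentum the way it can a massive particle's.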




A photon which is climbing out of a gravitational potential must acquire an imaginary
component of 4-momentum relative to its previous location within a stronger potential.
We say, then, that a photon possesses an imaginary momentum relative to a point in
spacetime of greater gravitational potential.





The inertial frame-dragging effect deduced by Lense and Thirring from Einstein's field
equations may be understood intuitively in the following manner: the angular momentum of
a massive gravitating body as observed from a great distance away (where the body's
gravitational potential has fallen off appreciably) appears greatly reduced when the
observer is transported close to this body. This change in the appearance of the 3-angular
momentum of the massive body in transporting the observer from a reference frame of
small gravitational potential to one possessing large potential may be understood in
terms of a different partitioning of the total conserved 4-angular momentum of the body
in the two different, locally Minkowskian frames. In other words, 4-angular momentum
which is mostly about an arbitrary z-axis, for example, when the body is viewed from a
region of spacetime of small potential, (relative to so-called "free space") is rotated
within 4-dimensional spacetime in moving the observer to the region of large potential in
such a way that most of the 4-angular momentum of the body now "appears" along the
local time axis within this spacetime. The angular momentum seen by the more distant
observer is hidden from the observer in close vicinity to the body because he is
occupying a space which is, relative to the distant observer, spinning in the same sense as
the body itself. This interpretation is consistent with the general relativistic effect of
perihelion precession, which occurs in the sense opposite to the direction of the body's
orbital motion.
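The frame-dragging picture above can be given rough numbers. A minimal sketch, using the standard weak-field Lense-Thirring dragging rate on the spin axis, Omega = 2GJ/(c²r³); the choice of this particular formula and the approximate Earth values below are my assumptions for illustration:

```python
# Weak-field Lense-Thirring frame-dragging angular velocity on the spin
# axis, Omega = 2*G*J / (c**2 * r**3). Earth values are approximate.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
I_earth = 8.03e37      # Earth's moment of inertia, kg m^2 (approx.)
spin = 7.292e-5        # Earth's rotation rate, rad/s
J = I_earth * spin     # spin angular momentum, kg m^2 / s

r = 7.0e6              # roughly a low-Earth-orbit radius, m
omega = 2.0 * G * J / (c ** 2 * r ** 3)

# The dragging rate is minuscule compared with Earth's own spin rate,
# which is why detecting it required dedicated satellite experiments.
print(omega)           # of order 1e-14 rad/s
```

For a supermassive, rapidly spinning black hole the numerator J is enormously larger, which is what makes the galactic-center conjecture in the next paragraph at least dimensionally plausible.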

In a conversation with Brian Swift it was suggested by me, in connection with a
discussion of the old density wave theory of galactic spiral arm formation, that perhaps a
spinning supermassive blackhole lies at the center of any given spiral arm galaxy and that
the Lense-Thirring inertial frame-dragging effect could be at least partially responsible
(maybe also in conjunction with density waves) for the formation of the classic spiral arm
structure of, for example, the Milky Way Galaxy.





Outline of a taped conversation between Dr. Brian Swift and Russell Clark


Energy and time transform in opposite manner within relativity from how they transform
within the Heisenberg Uncertainty Principle. The local velocity of light is affected by a
Lorentz transformation analogously to the way time and length transform within this
transformation. There may be a difference between energy and mass parallel to the
distinction between fermions and bosons within quantum mechanics. It may be that
gravity is only generated by fermions and not by bosons. This would violate the principle
of broken supersymmetry; cf. Barrow and Scherrer, astro-ph/0406088v3, "Do Fermions
and Bosons Produce the Same Gravitational Field?" "Spin" is responsible for generation
of "artificial gravity".
November 2011
According to Einstein, the gravitational force is not a
bona fide "force", but is an effect due to curved spacetime. There may be a
generalization lurking within the notion of "spin", so that "real gravity", like artificial
gravity, is an effect generated by the action of "spin", though at the subatomic level of
scale.

Energy and mass may not be equivalent in all reference frames. Mass and energy may
transform in opposite manner within a Lorentz transformation.
Real fermions disturb the normally balanced renormalization which exists between the
vacuum fermion and boson fluctuation fields. There is no fundamental distinction
between real bosons and virtual bosons. End of this installment of the conversation
between Dr. Brian Swift and Russell Clark (1996)


Is it possible to understand, from quantum theory, the causal relationship between the
momentum-energy tensor and the space-time tensor of general relativity by noting a
pair of bridging relations between these tensors via the Heisenberg space-momentum and
time-energy uncertainty relations? These uncertainty relations prevent the defining of
precise, deterministic trajectories for particles moving within 4-dimensional Minkowski
spacetime. In particular, no precise trajectory can be defined for particles whose sole
component of motion is along the Minkowski ict axis. Such particles are observed to be
"at rest" with respect to the local system of coordinates. What does it mean, we may ask,
for a particle at rest to have an ill-defined trajectory, as implied by the Heisenberg
principle? One obvious interpretation is for the particle to lack the continuous,
independent existence of a classically described, inert and atom-like substance, which
subsists indifferent to the passage of time. The particle must possess an uncertain
momentum and hence trajectory on account of continual exchange of virtual photons (in
the case of a charged particle) with the vacuum electromagnetic field.
The analogue of the particle-wave complementarity in quantum theory is the dualism
between mass and energy within the theory of relativity. The general absence of either a
precisely defined particle position or momentum implies an oscillation of the particle
between its particle and wave mode manifestations which may be understood in terms of
the continual back-and-forth transformation of matter from its mass to its energy
manifestation. This spontaneous activity on the part of matter may be visualized in terms
of its continual reformation and disintegration into mass and energy.
July 2011
Just consider
here the fact that all quantum mechanical operators may be alternately expressed in terms
of the creation and annihilation operators corresponding to field observables.

Only massless particles are reintegrated exclusively from the vacuum energy. Though
massive particles are largely reintegrated out of the energy of the quantum vacuum, a tiny
percentage of this energy must be supplied internally, that is, from energy resources of
the mass itself, which is to imply that, massive particles possess internal binding energies
whereas the photon does not. However, just as in the case of superconductivity in which
the photon takes on mass, the photon takes on an effective mass within a vacuum laden
with a gravitational potential. The measure of this fraction is the ratio of the mass-energy
and vacuum energy densities within the volume occupied by the mass.
This tendency
for matter to replenish itself from a fraction of its own existing mass-energy in
competition with its reintegration out of the locally available vacuum energy may account
for the linkage of inertia and gravitation. This reintegration process may be modeled as a
constant process of 3-momentum (boson) exchanges amongst matter particles (fermions)
in competition with energy exchanges between these particles and the thermal reservoir
of the vacuum nuclear electro-weak field fluctuations necessitated by the Heisenberg
uncertainty principle. The exchange of energy within quantum mechanical systems may
be generally characterized by three principal modes of energy exchange: first, the
exchange of energy between mass-energy and itself, which is mediated by the totality of
fundamental force-carrying particles, collectively known as bosons. This particular mode
of energy exchange is owing to the position-momentum manifestation of the generalized
Heisenberg principle,
April 2011
c.f., creation-annihilation/exchange of quantum correlated,
spin-1 virtual vector bosons. Second, the exchange of energy between mass-energy and
the vacuum energy field which is the mode of energy exchange responsible for the
phenomena of spontaneous emission, nuclear decay, quantum mechanical tunneling, etc.,
owes its origin to the time-energy form of the Heisenberg principle. Finally, there is the
energy exchange mode taking place between the vacuum energy field and itself,
April 2011
c.f., creation-annihilation of quantum correlated, spin-0 virtual fermion-antifermion
pairs (virtual Cooper pairs/scalar bosons). This energy exchange mode we suspect
powers the process of global cosmological expansion. Dark matter and dark energy may
perhaps be better understood in connection with the model suggested here. In general, an
operator which does not commute with the Hamiltonian operator, i.e., [q,H] ≠ 0, must
experience fluctuations. The Hamiltonian itself is subject to fundamental quantum
fluctuations, so we may say that [H,H] ≠ 0. This means that changing the order in which
we measure H makes a difference in the results of our measurement. This doesn't seem
to make sense unless we are speaking of making these same measurements, but in
opposite time order.
If this is the correct interpretation of [H,H] ≠ 0, then quantum
fluctuations in the Hamiltonian of spacetime may be responsible for time's fundamental
asymmetry. But how can H fail to commute with itself? This mode also constrains, we
believe, the thermodynamic equilibrium of mass-energy systems embedded within the
expanding mass-energy/vacuum-energy system, and so seems the most general
manifestation of the Heisenberg principle.

The overarching system of energy exchanges will altogether comprise a total
conservative energy system to which will correspond the conservative force-field known
as gravitation. On this view, gravitation is not thought to be mediated by a unique force-
carrying particle, or boson, i.e., graviton, but is a fundamentally "parasitic" force, one
which depends for its action on the collective interaction between matter, its fundamental
exchange forces, and the total vacuum nuclear-electroweak field. Specifically, it is the
shift in the balance between the three types of energy exchange continually occurring
within the quantum vacuum: particle-particle, particle-wave, and wave-wave energy
exchanges. Because matter is continually being reintegrated from the vacuum energy
field which originally created it, the transport of matter particles from one region of
vacuum locally, to another region, cannot, on our view, be understood as being
fundamentally different from the destruction of these particles within one local region of
the vacuum field (and subsequent conversion to vacuum energy within this region) with
the subsequent re-creation of these particles from the vacuum energy locally available
within the destination-region where they are ultimately "brought to rest."
April 2011
This is
a crude paraphrase of David Bohm's principle, set forth in his dialectical materialist
quantum mechanics textbook, Quantum Theory (1951), heretofore and subsequently
referred to as Bohm's principle of causality/Bohm's causal principle: the principle,
namely, that all causal relationships may be alternately represented as a sum of correlated
fluctuations. The question arises at this point as to whether there are two distinct types of
causality implied by Bohm's causal principle, classically correlated versus quantum
correlated fluctuations, or whether perhaps four distinct types of causality are implied here:
classically correlated classical fluctuations, quantum correlated classical fluctuations,
classically correlated quantum fluctuations and quantum correlated quantum fluctuations?
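The idea that a causal relationship can be re-expressed as correlated fluctuations can be illustrated statistically. A minimal sketch (the linear model, the coupling 0.8, and the sample size are hypothetical choices for illustration, not anything from Bohm's text): a variable causally driven by another shows up, in the fluctuation picture, simply as a strong positive correlation between the two fluctuation series.

```python
import math
import random

random.seed(0)

# "Cause" x fluctuates; "effect" y is driven by x plus independent noise.
# The coupling 0.8 and the sample size are illustrative assumptions.
n = 10000
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [0.8 * xi + random.gauss(0.0, 0.5) for xi in x]

def covariance(a, b):
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

# The causal link x -> y reappears as a correlation between fluctuations:
cov = covariance(x, y)
corr = cov / math.sqrt(covariance(x, x) * covariance(y, y))
print(corr)  # close to the theoretical 0.8 / sqrt(0.89), about 0.85
```

This captures only the classically-correlated-classical-fluctuations case; the quantum-correlated cases the text asks about would require entangled states, not independent noise.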

July 2011
The brain's real action may indeed be in the construction of quantum
entanglements between the subtlest components of its structure, i.e., microtubule dimer
electrons as well as electronic states of DNA. What actually processes these
entanglements, giving them context, meaning and reference (pointing "outside" the
otherwise closed system of the biological brain), and indeed giving them temporality
as well, is the preexistent quantum entanglement network constituted by the quantum
vacuum electromagnetic field, aka the QED vacuum. The substance of awareness lies
here, and the object of awareness is constituted from a particular and unique spectrum of
frequencies of the entangled vacuum electromagnetic field with which this or that
particular person's brain resonantly tunes. The question arises as to whether the unique
spectrum of networked quantum entangled vacua which constitutes the particular
person's dynamic contextual ground, and with which his brain resonantly tunes, was
originally "knitted together" after his fetal brain had reached a crucial stage of
neuroanatomical development, in which case brain development and that of the networked
vacuum frequency spectrum move hand in hand, or whether the fetal brain must first reach
a certain critical stage of developmental complexity before this matter-vacuum feedback
mechanism can begin to operate. What perhaps allows this feedback to "phase-lock" and
to begin "homing" and "tracking" is the fact that the quantum vacuum itself also informed
the evolution and function of the DNA which coded for the structure and function of that
individual person's brain in the first instance.
It seems as though providence is
inescapable when it comes to the bringing into being of a self-conscious entity.

Therefore, we believe that the total energy density of any given region of locally
Euclidean 3 - space may not be altered through changes in the local distribution of energy
constituted by real matter particles and fields. We understand energy density more
broadly here as the total four-momentum density of local regions of Minkowski
spacetime, and understand the conservation of energy density as the constancy of total 4-
momentum density despite phenomenological (apparent) variations in energy density
(classically understood) within local Euclidean 3-spaces. To wit, though the magnitudes
of the various components of the total 4-momentum density may change within an
arbitrary 3-volume of Euclidean space, the magnitude of the total 4-momentum density of
spacetime does not change locally; that is to say, it does not change observably over
relatively small distances and times within a Minkowski metric.
April 2011
The vacuum energy, within the context of cosmological expansion, either acts like an
expanding medium of finite energy or behaves like a cosmological constant; cf. the
arXiv preprint article, Dilution of Zero Point Energies in the Cosmological Expansion.
The so-called mass-energy reformation process is limited by the density of available
vacuum field energy out of which real particle/field energy systems must constantly
reform themselves, and there is an antagonistic relationship between real particle/field
energy and virtual particle/field energy (via the Pauli exclusion and Bose
condensation/Pauli inclusion principles) such that the relative alterations in the densities
of each are constrained by the principle of their conservation in total combination through
the principle of conservation of total 4-momentum density.
In general outline, the mechanism of gravitation works through the parallel connections
mentioned earlier between the momentum-energy tensor and the space-time tensor in the
following manner: a decrease in the positional uncertainty of a collection of particles
induces an increase in the momentum uncertainty of these particles, one which is
associated, through the definition of momentum uncertainty within quantum mechanics,
with an increase in the collective energy of the particles which cannot originate with the
forces initially bringing the particles together. Consequently, to conserve energy, this
energy must be supplied from somewhere; we maintain that this energy is supplied by the
quantum vacuum. This consequent decrease in the energy of the vacuum energy field
leads, in turn, to an increase in the energy of other distributions of particles already
occupying the general region of this modified vacuum state. This increase in energy of
the other particles occurs through an increase in the expectation value of the square of the
particles' momentum, but without altering the quantum expectation value of the
magnitude of the particles' total 4-momentum (consistent with special relativity). The
only consistent way of effecting such a change in the quantum state of these particles is
for the momentum uncertainty of the particles to increase. In turn, the positional
uncertainty of these neighboring particles must decrease, and in such a manner that the
total system of particles experiences a decrease in its positional uncertainty.
April 2011
The loss of magnetization of the vacuum spin-1/2 fields is compensated by the increased
polarization of the vacuum spin-1 fields. Quantum entanglement is conserved, though it
is transferred from the spin-1/2 fields to the spin-1 fields in this process. Similarly, the
momentum-energy uncertainty is rotated in spacetime.

The specific manner in which the particles do this is by being attracted toward the center
of mass of the total particle distribution - an effect which manifests itself generally in the
phenomenon of gravitational attraction. Because a particle's energy uncertainty is not an
intrinsic property of the particle itself, but must be communicated to the particle through
the interaction of the particle and the vacuum energy field sustaining its existence, the
communication of energy uncertainty between particles distributed throughout space is
after the fashion of an inverse-square law. Of course, a collection of particles may not
really be thought to have a defined positional uncertainty unless these particles form with
one another a bound system of particles. This is why we suspect that the gravitational
force is only capable of coupling to binding energy so that the energy of the
unconstrained vacuum may not itself be thought to gravitate; it is only spatiotemporal
variations in the energy of the vacuum field which may be thought to produce
gravitational effects. In fact, it is the tendency of massive bodies to hold themselves
together against the opposite tendency of the cosmological acceleration field to disperse
the particles forming these bodies, which sets up the spherically symmetric imbalance in
the distribution and flow of the vacuum energy field (in the case of spherically
symmetric matter distributions) which manifests itself as the gravitational field
engendered by these and all other massive bodies within the expanding universe.
Three-momentum is conserved in particle collisions because the acceleration of a
particle always involves the rotation of its 4-momentum, describable by a Lorentz
transformation, and equal and opposite 4-momentum rotations on the part of the colliding
particles always results; this is just a relativistic expression of Newton's action-reaction
principle. In the case of two colliding particles with 0 initial and final total net
momentum, an arbitrary quantity of energy may be supplied to the two particles without
disturbing the net momentum of the particles. This may be regarded as a special instance
of a property of momentum which is normally not obvious to an observer confined in his
observations to the three dimensions of Euclidean space, but which is always operative
within the context of the higher dimensionality of Minkowski spacetime. Accelerations
merely have the effect of rotating the 4-momentum of particles within Minkowski space,
as mentioned earlier, and so the magnitude of a particle's 4-momentum can never be
altered. In general, forces are manifestations of momentum exchanges between the local
imaginary and real momenta of particles and fields. When these momenta exchanges are
rendered asymmetrical, the 3-momenta of particles and fields are not generally
conserved. Within a hypersurface of simultaneity in flat Minkowski space, the vacuum
3-momenta are conserved despite the participation of the vacuum energy field in the local
cosmological velocity field. This is due to the inherent symmetry of the momenta
exchanges between the real and imaginary vacuum momentum components. The
presence of matter induces an asymmetry in the momentum exchanges between the
vacuum's real and imaginary components of momentum reflected in the asymmetry of the
vacuum's self-energy exchanges. When energy is spontaneously imparted to a massy
particle and then returned spontaneously to the vacuum energy field, within this brief
interval of time, the energy state of the local vacuum has altered slightly in the direction
of decreasing vacuum energy density so that each time the energy originally imparted to
the mass is paid back to the vacuum, the vacuum receives in return a slightly smaller
quantity of energy.
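The claim that accelerations merely rotate the 4-momentum while preserving its Minkowski magnitude can be checked with a single boost (units with c = 1; the velocity v = 0.6 is an arbitrary illustrative value):

```python
import math

# A boost along x "rotates" the 4-momentum (E, p_x) in Minkowski space;
# the invariant E^2 - p_x^2 (with c = 1) is unchanged by the rotation.
def boost_x(E, px, v):
    g = 1.0 / math.sqrt(1.0 - v * v)   # Lorentz factor
    return g * (E + v * px), g * (px + v * E)

m = 1.0
E0, px0 = m, 0.0                  # particle initially at rest
E1, px1 = boost_x(E0, px0, 0.6)
inv_before = E0**2 - px0**2       # squared invariant mass before the boost
inv_after = E1**2 - px1**2        # ... and after: identical
```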

May 2012
If we recast the Heisenberg Uncertainty Principle (HUP) in terms of information
and its conjugate or complementary quantity, then the Boltzmann Brain Paradox can be
considerably sharpened. If the enormous information content of the cosmos is assumed
to have arisen from a vacuum fluctuation on account of the HUP, then if energy content
and information content are proportional (in some sense), then for an enormous
information fluctuation (we need at some point to define the notion of negative
information because conservation requires fluctuations to occur in +/- pairs) we have a
vanishingly small lifetime for this information (but doesn‘t it continue to exist in some
latent form in between manifestations?) so either the great age of the universe is an
illusion or . . . the true age of the universe (age of the true universe) is immensely greater
than the age of our universe. On the other hand, for the subjective appearance of a
universe to persist for say, a human lifetime, the amount of information borrowed would
have to be relatively small (compared to that contained in an entire cosmos).
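The "vanishingly small lifetime" follows from the time-energy uncertainty relation Δt ≈ ℏ/(2ΔE). A rough sketch, assuming an order-of-magnitude figure of 10^53 kg for the mass content of the observable cosmos (the argument is insensitive to the exact number):

```python
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s

# Order-of-magnitude assumption only: ~1e53 kg for the mass content of the
# observable universe.
mass_universe = 1e53                  # kg (assumed round number)
dE = mass_universe * c**2             # energy "borrowed" by the fluctuation
dt = hbar / (2.0 * dE)                # time-energy uncertainty bound on lifetime
```

The resulting Δt is smaller than any physically meaningful timescale, which is exactly the tension with the universe's apparent great age noted above.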
June 2012
Can
the Boltzmann brain paradox be dealt with by invoking the fact that human consciousness
only possesses a limited temporal integration span, i.e., a specious present or span of
integrated time of something between one and two seconds, such that the complementary
quantity of energy, or rather its subjective time analogue, a form of integrated energy or
“intergy,” is vast? If this complementary quantity of integrated energy or intergy provides
the informational grounding or context for its embedded bubble of subjective time, then
there would be some natural limit for the temporal size of the bubble based upon
available forces of temporal cohesion provided by the underlying intergy reservoir. If the
time bubble of the specious present grows too large, the reservoir of available intergy
becomes too diminished. The action of the brain as filter, reducing valve and interface
provides the conduit for this reservoir of integrated (quantum-entangled) energy. So we
are concerned here not merely with the quantity of energy, but with the quality of this
energy in terms of the energy's degree of development of quantum entanglement.

June 2012
The shared, causally and narratively structured intersubjective world of any given
quantum observer is not contradicted without reason, i.e., without causal or narrative and
hence, reasonable explanation by the accounts of other persons occupying the observer‘s
holographic projection because, however the MWI wavefunction collapses, it does so in
such a manner as to respect causality and narrative coherence up to and including the
sensory and mnemonic states of other iconic observers. All are part of the same shared
physical reality, in reality discontinuous though always perceived/remembered as
continuous, i.e., classical. Invoking a generalization of the anthropic principle in this
connection seems to suggest that it is my universe that undergoes superposition and
collapse and everybody else is just carried along for the ride! Everett's relative state
formulation of quantum mechanics seems to imply, in other words, that I share my
world with a motley collection of also-rans. But is not my subjective state of
consciousness, as perceiver of each new alternate quantum universe, just as much
along for the ride as is anyone else's? I mean, isn't my sense of continuity with my
remembered past just as arbitrary as is that of those who are carried forward with me into
each altogether new branching of the universal wavefunction? I know that I am
continuous while they are not. But cannot each of them say the very same thing about
themselves? The sensory and mnemonic states of my brain are just as discontinuously
and randomly changing as are theirs, and I am no more the same person in this new
quantum universe than they are, isn‘t that correct? This is a case of completely
egalitarian multisolipsism in other words. We all secretly rely on God or universal,
transcendent mind as bulwark against the collapse of ontology into what is merely a
largest conceivable epistemology.

The result of this is that the mass of the particle continually increases very slowly with
passing time as the universe continues to expand during the course of the constant
exchange of energy between the particle and the vacuum in which it is embedded.
It is this constant exchange of energy between the particle and its vacuum energy field
which is responsible for the magnitude of the particle's momentum/energy uncertainty.
We term
this the "perturbation interpretation" of the Heisenberg uncertainty principle. Because the
cosmological expansion rate is locally constant, the imaginary momentum of particles is
always increasing very slowly with the cosmological expansion. It can be independently
demonstrated that the real momentum of particles is always increasing at the very same
rate as is their imaginary momentum. If the mass of a body is relativistically increased,
then if the magnitude of its 4-momentum is to be conserved, then the 4-momentum of this
massive body must experience a rotation in Minkowski space which just compensates the
effect of this increase in mass on the imaginary momentum of the body. In brief, we say
that an acceleration field induces an increase in the relativistic mass of a body, and
conversely, a field which induces an increase in the relativistic mass of a body, must
itself constitute an acceleration field.

The presence of a real fermion inhibits the appearance of certain virtual fermion-
antifermion pairs out of the vacuum because, by the Pauli exclusion principle, a virtual
fermion in the same quantum state as the real fermion which is already present is
forbidden to appear where the positional uncertainties of the real and virtual fermions
were to overlap. Thus, the creation of the entire pair within this region of overlapping
positional uncertainty is suppressed. There should, of course, be some sort of smooth
decay of this suppressive effect of real fermions on the creation of virtual fermion-
antifermion pairs in the vacuum away from the center of the "volume of positional
uncertainty" within which the real fermion is to be found. In a similar manner, an energy
of 2m_s c^2 must be created out of the vacuum in order for a black hole of mass-energy
m_s c^2 to "evaporate" via the emission of Hawking radiation. In the case of bosons, the
opposite principle is operating. This principle might be termed the Pauli "inclusion
principle." The more bosons we have in a particular quantum state, the greater is the
probability that more bosons will enter this same quantum state. We might, therefore,
expect the presence of real matter to enhance the probability of spontaneous
emission/absorption of virtual bosons from the vacuum in a quantum state with operator
values closely approximating those expectation values describing the bosons mediating
the mean nuclear electro-weak field responsible for the binding forces of this matter. Of
course, what we are really saying here is that the operator expectation values themselves
for vacuum operators are altered, or shifted in value, from their "free space" values. This
alteration in the vacuum field may be viewed as stemming from either: 1) a shift in the
value of the quantum operators, 2) an alteration of the vacuum wavefunction acted upon
by the quantum operators, or 3) a combination of both 1) and 2). In the particular case
where only the vacuum wavefunction itself is altered, we might interpret this in terms of
an alteration of the vacuum Hamiltonian from which the vacuum wavefunction is
calculated. We already know that any alteration in the Hamiltonian describing the energy
of a harmonic oscillator will result in the oscillator undergoing a change in its zero-point
oscillations, that is to say, the oscillator will suffer a shift in its zero-point energy. Any
change to the zero-point energy of a harmonic oscillator may be modeled on a change in
the oscillator's Hamiltonian owing exclusively to the appearance of an additional
potential term within the Hamiltonian function of the oscillator.
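The dependence of the zero-point energy on the Hamiltonian's potential term can be made concrete: for a harmonic oscillator E0 = ℏω/2 with ω = √(k/m), so an added potential term that stiffens the effective spring constant shifts E0. A sketch in natural units (the numbers are arbitrary):

```python
import math

hbar = 1.0   # natural units

# Zero-point energy of a harmonic oscillator: E0 = hbar * omega / 2,
# with omega = sqrt(k / m).
def zero_point_energy(k, m):
    return 0.5 * hbar * math.sqrt(k / m)

m = 1.0
e0 = zero_point_energy(1.0, m)           # original Hamiltonian
e0_shifted = zero_point_energy(1.21, m)  # extra potential term: k -> 1.21 k
shift = e0_shifted - e0                  # shift in zero-point energy
```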

If we want to integrate the quantum mechanical and relativistic effects of matter on the
vacuum nuclear electro-weak field, then we must reconcile the influence, which changing
mass-energy distributions have upon the uncertainty relations within the vacuum, with
our requirement that the variations in vacuum momentum-energy and position-time
uncertainties be connected to one another along contiguous points in spacetime by series
of instantaneous Lorentz transformations. If the energy structure of the vacuum is
modeled as a crystalline lattice of coupled harmonic oscillators, then the reconcilement of
the two so-called Heisenberg and Einstein effects of matter upon the vacuum energy field
might be possible. We might succeed in doing this by introducing just the sort of ad hoc
potential term alluded to earlier. By this, we mean the potential function which, when
incorporated into the Hamiltonian of the vacuum's oscillator meshwork, effects the
desired spatio-temporal alteration in the vacuum's zero-point energy. Such a spatio-
temporal variation in the vacuum's zero-point energy should recoup all of the anticipated
general relativistic effects, e.g., gravitational redshift, light deflection, time dilation,
length contraction, mass increase, etc. It should achieve this while at the same time
explaining a concomitant change in the Bose-Einstein and Fermi-Dirac statistics of the
vacuum consistent with the application of wavefunction symmetry/antisymmetry to the
interaction of matter and vacuum. We might begin doing this by explaining away, if you
will, the seemingly inconsistent demands of the time/energy expression of the Heisenberg
principle and the relativistic expressions for time and energy within relativity theory.
This must be done with respect to the predicted interactions of time and energy
uncertainty within both theories. First, let us note that both principles, Einstein's and
Heisenberg's, agree with one another concerning the relationships of changes in length
and positional uncertainty, on the one hand, and momentum and momentum uncertainty,
on the other hand. Where these two theories conflict, is in comparing the effect of a
change in energy uncertainty on the value of the time uncertainty: relativity predicts that
a relativistic increase in energy uncertainty will be accompanied by a relativistic increase
in time uncertainty, while the Heisenberg uncertainty principle predicts that an increase in
the energy uncertainty of a quantum mechanical system (here, a relativistic increase) will
be associated with a decrease in the time uncertainty of the system. The solution to this
dilemma may lie with the simple fact that position and time are not on an equal footing
with one another as they are within the special relativity theory - there is no operator
corresponding to the time variable within quantum mechanics as in the case of position,
momentum and energy. Or the solution may lie with the possible inconsistencies of the
notion of energy uncertainty within both theories. This may be due to a deeper
inconsistency in the definition of energy within both theories. Energy in quantum
mechanics is defined as the Hamiltonian function whereas the energy referred to in
relativity theory is the mass-energy, or, perhaps, the kinetic energy. The Hamiltonian is,
of course, the sum of both the kinetic and potential energies of the quantum system.
Of course, if the vacuum is modeled as a Debye solid, that is, as a network of coupled
harmonic oscillators, then the Hamiltonian describing this system of oscillators must be
consistent with relativity. The potential energy of the Hamiltonian must be a function of
not only x,y and z, but must also be a function of the variable, ict, within the Minkowski
metric. The kinetic energy component of this vacuum Hamiltonian must be a function of
all four components of the relativistic 4-momentum vector of special relativity.

Perhaps we may think of virtual particle reactions as lying "off the mass shell" between
two extreme points off-shell. These are: virtual momentum fluctuations with negligible
virtual fluctuations in energy, and virtual energy fluctuations with negligible virtual
fluctuations in momentum. We may liken the spontaneous creation of virtual bosons
from the vacuum as pure momentum fluctuations, and of virtual fermion-antifermion
pairs as pure energy fluctuations of this vacuum. Is spin another name for angular
momentum about the ict axis? Is it possible, then, for a spin 0 particle to possess a
component of angular momentum within the three normal spatial dimensions? If so, then
wouldn't this constitute a stark violation of the principle of the relativistic invariance of
angular momentum?

There is an apparent paradox associated with the gravitational redshift of starlight
predicted by Einstein's theory of general relativity. The general theory explains this
reddening of the sun's light, for instance, as being due to the fact that the energy of
photons has an inertia associated with it and that, therefore, the photons must give up the
requisite energy in overcoming the Sun's gravitational potential as they fly away from the
Sun, off to infinity. The specific paradox is seen when one considers the reverse of this
process, the gravitational "bluing" of starlight as it falls into a gravitational potential, and
then imagines "bouncing" photons off of a huge mirror stationed close to the surface of
the Sun, presumably in a very tight circular orbit! Photons leaving the Earth for the Sun,
for example, experience an increase in their energy ("bluing") which will exactly offset
the decrease in their energy on their return journey, after bouncing off the mirror, so that
the wavelength of these photons will not differ from that when initially leaving the Earth.
October 2011
So, for example, although the characteristic wavelength of photons emitted by
spectroscopically known elements or molecules on the Sun‘s surface will exhibit a
redshift of the precise magnitude predicted by general relativity, these same characteristic
photons, if emitted, say, from Earth's orbit in the direction of the Sun, will suffer no
observable redshift on their return journey after being reflected from a mirror stationed at
the Sun‘s surface. But we must consider here that what is called reflection involves the
stimulation of an excited atomic state, i.e., the "boosting" of an outer orbital electron into
an energetically unstable orbit, which immediately decays, yielding back a photon of
precisely the same wavelength as originally caused the excited state, and a "blueshifted"
photon would possess an energy too large to produce a reflection photon of the
appropriate wavelength predicted by general relativity when the gravitational redshift is
taken into account. This appears to suggest that, for example, a television signal
containing real time footage of an Earth-based chronograph, if transmitted to and
reflected off the surface of the Sun, would reveal no mismatch between the time
displayed by the transmitted image of the clock and the actual time indicated on the
original chronograph. And this would be presumed to be the case even though real time
footage of a second clock based on the Sun‘s surface and transmitted to us
simultaneously along with the reflected transmission of the Earth-based chronograph
reveals a time-dilated rate of elapsed time of the Sun-based clock relative to our Earth-
based clock.
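For scale, the weak-field surface-to-infinity redshift under discussion is z ≈ GM/(Rc²), roughly two parts per million for the Sun:

```python
# Weak-field gravitational redshift from the solar surface to infinity,
# z ~ GM / (R c^2), using standard values for the Sun.
G = 6.67430e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30         # kg
R_sun = 6.957e8          # m
c = 2.99792458e8         # m/s

z = G * M_sun / (R_sun * c**2)   # dimensionless fractional wavelength shift
```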

When particles are compressed into a progressively smaller volume of space, the
positional uncertainty of all the particles decreases. Consequently, the momentum
uncertainty of all of the particles will increase. Although neither the quantum mechanical
expectation value of momentum for the particles nor the square of this expectation value
will be affected by a change in the momentum uncertainty of the particles, the
expectation value of the square of the momentum will change - it will increase. This all
follows from the mere definition of momentum uncertainty in
quantum mechanics. This is to say that the total energy of the particles will be increased
simply by virtue of the obvious decrease in quantum positional uncertainty of the
particles as a result of their having been confined to a smaller volume. Note that this
energy conferred to the particles cannot be explained in terms of any work which might
have been performed upon the particles in the process of pushing them together, as we
might have taken, theoretically, any amount of force at all in pushing them together,
depending upon how much time we were willing to take in doing so. This is yet another
reason for believing that the collective vacuum energy field is associated with the
operation of a conservative force-field. If we have not really imparted any energy to
these particles simply by virtue of having moved them together somewhat, then how are
we to explain the appearance of this energy in such a manner that the total energy of the
volume occupied by the particles remains constant, that is to say, so that the total energy
of this volume is conserved? We might postulate a kind of hidden energy which, along
with the particles, also occupies their space. We might further suppose that these particles
may be thought to be made out of this energy so that an increase in the energy of particles
within a particular volume of space becomes tied to a corresponding and compensating
decrease in the amount of this hidden energy such that the total energy of the volume
remains unchanged - a kind of radical energy conservation principle. One way to make
such an assumption, and there are indeed many different ways in which this assumption
might be realized, would be to postulate that there is a fourth component of particle
momentum, previously unsuspected, itself unchanged by our having pushed the particles
together, but possessing a square whose quantum expectation value has been altered in a
manner which exactly cancels the changes in the expectation values of the squares of the
usual three independent components of momentum along the x, y and z axes of a
Cartesian coordinate system. One way for the momentum of the particles along the
hypothetical "w-axis," as well as along the other three axes, to remain unchanged, with
the energy of the particles changing at the same time, would be if the masses of the
particles were permitted to change in inverse proportion to the change in the velocity of
the particles along this new w-axis. We can succeed in doing this by permitting the
particles to possess a negative kinetic energy which is decreased as the particles are
pushed together. But turning to an analogy with the case of a particle "tunneling"
through a potential barrier, any change in the necessarily negative kinetic energy of the
tunneling particle could be compensated for through judicious instantaneous adjustment
of the height of the potential barrier though which it is moving, that is to say, through the
appearance of a kind of ad hoc potential term which is to be added to the original barrier
potential, V(x). If we identify this ad hoc potential so-called with the gravitational
potential, then two things immediately follow: 1) a gravitational potential exists in space
whether or not matter is present; it is built into the very structure of space itself. And, 2)
matter has the peculiar effect of altering this essentially cosmologically-based potential
through quantum mechanical interactions taking place between all matter particles and
the continuum of space in which they are embedded. The quantum vacuum offers itself
as a logical candidate for this medium of space (aether, if you will) with which all matter
particles are in interaction. Moreover, the variation of the density of this vacuum energy
due to the process of the cosmological expansion of space provides a logical basis for our
postulated potential barrier.
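The confinement energy at issue can be illustrated with the simplest bound system, a particle in an infinite square well, where E1 = π²ℏ²/(2mL²): halving the box quadruples the ground-state energy, independently of any work performed in pushing the particle inward. A sketch (electron mass and nanometer-scale box chosen only for illustration):

```python
import math

hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # kg (electron mass, illustrative choice)

# Infinite square well ground state: E1 = pi^2 hbar^2 / (2 m L^2). The energy
# comes purely from confinement (reduced positional uncertainty).
def ground_state_energy(L):
    return math.pi**2 * hbar**2 / (2.0 * m_e * L**2)

E_wide = ground_state_energy(1.0e-9)    # 1 nm box
E_tight = ground_state_energy(0.5e-9)   # box halved
ratio = E_tight / E_wide                # confinement energy scales as 1/L^2
```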

The increase in energy of this hypothetical system of particles is based on the decrease in
their mutual positional uncertainty and the masses of the particles are irrelevant to the
determination of this energy increase. If gravitational effects are to be ultimately traced
to variations in the energy uncertainty of mass-energy distributions, leading in turn to a
modification in the cosmological spatiotemporal variation in the vacuum nuclear-
electroweak field from its equilibrium momentum density in so-called free-space, then
there must be some means of defining the masses of particles, as well as the mass
equivalence of field energies, in terms of their binding or self-energies alone. Lorentz
attempted to do this in the early 1900s with respect to the mass of the electron; he tried
to define the mass of the electron exclusively in terms of its electromagnetic self-energy.
He was, however, unsuccessful, and to my knowledge, no further efforts have been made
to repeat the attempt.

Let us look at this question in terms of a hopefully illustrative analogy. Suppose instead
of simple monochromatic light, we send a modulated carrier wave of electromagnetic
radiation from the Earth to the Sun and back again. Suppose the modulation upon the
carrier wave was a simple TV transmission of a normally functioning analogue wall
clock.

Particle creation at the event horizon of a black hole gives rise to a precisely thermal
spectrum. This suggests that the vacuum itself is in thermal equilibrium with itself, so
that the vacuum must be continually exchanging energy with itself, because the time rate
of change of all physical quantities depends on the existence of energy uncertainty:
dq/dt = (i/ℏ)[H, q] + ∂q/∂t, where the last term expresses any explicit time dependence
of q. On this view, quantum
mechanical systems possess energy uncertainty because they are continually perturbed by
intrinsic vacuum energy fluctuations. In this way, all mass-energy systems are in a
process of constant energy exchange with the quantum mechanical vacuum. Since all
macroscopic transfers and exchanges of energy between two points in spacetime are
mediated via the submicroscopic energy exchanges occurring within the vacuum, it
follows that conservation of energy macroscopically is dependent upon conservation of
energy exchanges within the vacuum. The temporal evolution of the quantum vacuum is,
therefore, mediated by its own action. A number of conclusions follow from this fact:
1) the vacuum's energy is conserved, but not by virtue of this energy possessing a
determinate quantity: the vacuum's energy is conserved even though it is an
indeterminate quantity. 2) The rate of decrease of the vacuum's energy density,
cosmologically, is exponential, because the energy density of the vacuum itself governs
the rate of decrease.
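The equation of motion invoked above can be verified directly on truncated harmonic-oscillator matrices; in natural units (ℏ = m = ω = 1) the Heisenberg equation dx/dt = i[H, x] reproduces p (that is, p/m). A minimal sketch:

```python
import numpy as np

# Truncated harmonic-oscillator matrices in natural units (hbar = m = omega = 1).
N = 12
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
ad = a.conj().T                             # creation operator
x = (a + ad) / np.sqrt(2.0)
p = 1j * (ad - a) / np.sqrt(2.0)
H = ad @ a + 0.5 * np.eye(N)                # oscillator Hamiltonian

# Heisenberg equation of motion for an observable with no explicit time
# dependence: dx/dt = i [H, x]; for the oscillator this equals p.
dxdt = 1j * (H @ x - x @ H)
```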

It is not possible to distinguish different time rates of change within a closed dynamical
system. This is because such a closed system possesses only a finite number of discrete
energy levels, and when the total system is in a particular energy eigenstate, its energy
uncertainty is 0 so that there are no vacuum fluctuations available with which to mediate
changes in physical observables of the system.

We may define the distance separating two events as a function of the number of vacuum
momentum fluctuations existing between the two said events. Similarly, we may define
the time interval between two such events as a function of the number of vacuum energy
fluctuations existing between the two said events. Of course, the partitioning of the
relativistic momentum - energy tensor into pure momentum versus pure energy
components is dependent upon the particular Lorenz reference frame within which one
performs the momentum and energy measurements.



Since the energy levels at which information is stored in a neural network are defined in
terms of the lowest stable energy of the neural network as a whole, virtual energy
transitions between these energy levels presuppose a coupling between the wavefunctions
describing the quantum mechanical states of all of the individual neurons of the network
in the sense of their being nonlocally connected.

It is the spontaneous coherence in which the neural network is embedded which provides
the ultimate context within which the neurological events are to be interpreted. This
coherent field is that of the nonlocally connected vacuum electromagnetic fluctuation
field.

The many worlds interpretation of the quantum measurement problem may be understood
as a reversal in causal relationship between the uncollapsed wavefunction representing
the mind of the observer and the uncollapsed wavefunction representing the potentialities
of the quantum mechanical system being observed by this mind in the following manner:
when the observer notes the collapse of the wavefunction with respect to an observable
he is attempting to measure, what is actually occurring is the collapse of the
wavefunction describing the observer's mind so that it now abstracts from the Weltall one
particular eigenvalue of the object wavefunction, but without inducing a collapse of the
object wavefunction itself. One might ask what is the fundamental difference between
these two interpretations if there is not some third realm, independent of both the
observer's and object wavefunctions in terms of which one interpretation might be
favored over the other as being ontologically prior. This third realm belongs neither to
that of causality (the mutual interaction of collapsed wavefunctions), nor to that of
contingency (the interaction of collapsed with uncollapsed wavefunctions, and vice
versa), but to that realm constituted solely by the mutual interaction of all uncollapsed
wavefunctions. This realm we may refer to as the composite contingency - necessity
manifold or continuum. There is an exactly parallel assimilation of the category of
space-time to our category of necessity-contingency. In this way we may realize
that the concepts of locality and nonlocality constitute a distinction which cuts across that
constituted by the polar concepts chance and necessity.

Good is that which enhances creativity which is the explicit expression of implicitly
integral wholeness. Evil constitutes that which seeks to destroy, confuse, disintegrate as
well as to impair the expression of unity and wholeness through creativity. All creativity
is in reality re-creativity.

The probability spectrum of a given wavefunction may be overdetermined so that there
exists an unlimited number of ways in which an ensemble of measurements of the
eigenstates of the wavefunction with respect to a particular observable may sum together
so that the wavefunction appears perfectly normalized; this property may permit an
additional degree of freedom within quantum mechanical virtual processes not previously
suspected to exist.



There is an absolute simultaneity which mental events distinctly enjoy due to the fact that
they do not admit of perspective; if anything they constitute perspective. However, the
order in which neurophysiological occurrences take place (in the brain) is at least partially
dependent upon the reference frame (in the relativistic sense) in which these events occur (as
observables). There must be an embedding of these neural events in a substrate which
extends beyond the merely neurophysiological in order for a reference frame to be
defined in which there can arise a correspondence between subjective and objective
simultaneities.

If metaphysical dualism is false in the strict sense of there existing two distinct and
parallel fundamental processes, one physical, the other mental, but if this doctrine is
nevertheless true in the less restrictive sense of there actually existing mental and
physical realms which are not distinct but somehow mutually interacting, then it is in
principle impossible to formalize the operation of mind.

It is quite true what many psychologists (as well as lay persons) have noted concerning
the tendency of a task to become executable without the aid of conscious attention the
more and more that it is performed. However, what has not perhaps been widely noted
by either is the somewhat contrary tendency for one to become more, rather than less,
aware of the abstract operations lying behind the performance of a task in new contexts
where the specific concrete operations constituting the task would never otherwise
suggest themselves. This tendency for us to become aware of the abstract operations
specific to one particular oft-repeated task within a context normally foreign to it, or at
least for our performances of operations within new previously unrelated contexts to be
guided by these abstract operations, I refer to as operational modulation - or op-mod, for
short. What we are calling op-mod may be alternately thought of as the manipulation of
something in terms of an operational metaphor; it is itself the very essence of the human
tool-using intelligence, and may be considered to be a general property of any neural
network computing device. More specifically, op-mod is peculiar to the problem solving
strategy of the neural network device because the specific neural circuits which are
utilized by such a network for solving one particular "problem" will necessarily overlap
with neural circuits which are being established in the course of attempting to solve
problems in extraneous contexts.

The existence of the ground of Reality consists exhaustively in its very activity.
Consequently, that which creates this ground is that which sustains this ground; from
which further follows the truth of Leibniz's principle that, "the conditions sufficient to
create the world are necessary at every succeeding moment to sustain its existence." We
know that there has to have always been something in existence and so the ground of
Reality must be self-sustaining, and hence, self-creating. It follows that the ground of
existence necessarily exists, and so is eternal. All possibility ultimately lies dormant
within that which necessarily exists. In the language of quantum mechanics, every
determinate eigenstate with respect to a particular physical observable may be alternately
represented as a series of eigenstates with respect to an indeterminate physical observable
incompatible with the first.
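In standard Dirac notation (a textbook identity, not an addition to the argument), the quantum mechanical claim reads:

```latex
% A determinate eigenstate |a> of observable A, expanded in the
% eigenbasis {|b_n>} of an incompatible observable B:
\lvert a \rangle \;=\; \sum_n \langle b_n \vert a \rangle \, \lvert b_n \rangle ,
\qquad [\hat{A}, \hat{B}] \neq 0 .
```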

Hermann Weyl notes in his book, "The Open World," that the state of a two-electron
system is not determined by adding together the states of the individual electrons, but
that the states of the individual electrons may be deduced from the state of the two-electron system.
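Weyl's observation is exemplified by the two-electron singlet state (a standard modern illustration, not Weyl's own notation):

```latex
% The composite state is pure and fully determinate:
\lvert \Psi \rangle \;=\; \tfrac{1}{\sqrt{2}}
  \bigl( \lvert \uparrow \rangle_1 \lvert \downarrow \rangle_2
       - \lvert \downarrow \rangle_1 \lvert \uparrow \rangle_2 \bigr) ,
% yet it is not a product of one-electron states; each electron's
% (mixed) state is deduced from the whole by a partial trace:
\rho_1 \;=\; \operatorname{Tr}_2 \, \lvert \Psi \rangle \langle \Psi \rvert
       \;=\; \tfrac{1}{2} \, I .
```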

Leibniz's series, 1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + . . . , does not converge when the
terms are rearranged into the sum of the following two series: (1 + 1/5 + 1/9 + . . . ) +
( -1/3 - 1/7 - 1/11 - . . . ), each of which diverges on its own. This is a general property
of what are called conditionally convergent series (Riemann's rearrangement theorem).
The property is very mysterious, but can be made to seem
less so if one pictures each term of the sequence as a numbered bead on a string. A finite
number of terms of the series may be rearranged arbitrarily to produce the identical sum,
and this may be thought to be possible simply because the string, being finite in length,
permits the removal, and hence, rethreading of all the beads onto the string in any
arbitrary order. However, given an infinite number of beads, the string is now itself
infinite in length and so it is no longer possible to remove the beads so as to put them into
a new order. The order of the beads may only be changed into that represented by the
two sums provided that the original string is cut, and this changes the topological
relationship of the beads; in a finite sequence the order of the terms (beads) may be
rearranged without altering the topological relationship of the beads. It is also
interesting to note that Leibniz's series converges to the value of pi/4, an irrational
number whose decimal expansion possesses no determinate order whatever. What we
have, then, is an equation between an irrational number and an infinite sum of rational
numbers; that is, an equation holding between, on the one hand, an infinite sum of terms
possessing a mathematically determinate sequential order with respect to a simple
mathematical operation, namely addition, and, on the other hand, a sequence of terms
possessing no mathematically determinable sequential order - no sequential order with
respect to any definable mathematical operation. We may suspect that Cantor's argument for the existence of
what he calls nondenumerable infinity, i.e., the famous "diagonal argument," can be
applied to the decimal expansion of pi to show that this sequence of decimal fractions
itself constitutes a nondenumerable set of rational numbers. What is interesting here is
that no possible rearrangement of the indeterminate sequence of nondenumerable rational
numbers constituting the decimal expansion of pi will produce an irrational which
diverges although there do exist rearrangements of the terms of Leibniz' series which
diverge. From this simple fact we may deduce that there is no infinite sequence of
denumerably infinite subsets of terms taken from Leibniz' series, on the left hand side of
our equation, which will produce a one-to-one correspondence with the infinite sequence
of rational numbers in the decimal expansion of pi.
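The behavior described above can be checked numerically. The following sketch (plain Python, written for this note rather than drawn from the text) compares partial sums of the original series with partial sums of the positive sub-series, whose unbounded growth is what breaks the rearrangement:

```python
import math

# Partial sums of the Leibniz series 1 - 1/3 + 1/5 - 1/7 + ...,
# which converges (conditionally) to pi/4.
def leibniz_partial(n):
    return sum((-1) ** k / (2 * k + 1) for k in range(n))

# Partial sums of the positive sub-series 1 + 1/5 + 1/9 + ...
# (denominators 4k + 1), which grows without bound, roughly like (1/4)*ln(n).
def positive_sub_partial(n):
    return sum(1 / (4 * k + 1) for k in range(n))

print(abs(leibniz_partial(100_000) - math.pi / 4))  # small, shrinking with n
print(positive_sub_partial(100_000))                # keeps growing with n
```

Doubling n changes the first printed quantity negligibly but adds roughly (1/4) ln 2 to the second, which is the numerical signature of conditional convergence.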

Godel has stated that his incompleteness theorem applies only to logico-deductive
systems at least as powerful as arithmetic. This is because the proof of the theorem is
based on the Godel-numbering procedure, whereby each operator, and every symbol
utilized by the system, is represented by a Godel number, while all of the logical
operations of the system are defined in terms of arithmetic operations. So we may say
that arithmetic is definable within all so-called Godelian deductive systems.
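The arithmetization described here can be illustrated with a toy Gödel numbering (the symbol codes below are arbitrary choices for illustration, not Gödel's own): a formula is encoded as a product of consecutive primes raised to symbol codes, and unique prime factorization guarantees the formula can be recovered, so statements about formulas become statements about numbers.

```python
SYMBOLS = {'0': 1, 'S': 2, '=': 3, '(': 4, ')': 5, '+': 6}
DECODE = {v: k for k, v in SYMBOLS.items()}

def primes(n):
    """First n primes, by trial division (fine for toy sizes)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def goedel_number(formula):
    """Encode a symbol string as 2^c1 * 3^c2 * 5^c3 * ..."""
    g = 1
    for p, ch in zip(primes(len(formula)), formula):
        g *= p ** SYMBOLS[ch]
    return g

def decode(g):
    """Recover the formula by factoring out consecutive primes."""
    out = []
    for p in primes(64):  # more than enough for toy formulas
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        if e == 0:
            break
        out.append(DECODE[e])
    return ''.join(out)

n = goedel_number('S0=S0')
print(n, decode(n))  # the number uniquely encodes, and recovers, the formula
```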

The domain of all arithmetical operations is a domain devoid of topological structure.
Self-referential propositions introduce a topological structure into the domain of proof.
Rational numbers are the sums of convergent infinite series where the order in which the
terms of the series appear does not affect the value of the sum. We may say in this case
that rational numbers occupy a number field possessing arithmetic, or null, topological
structure. Irrational numbers, on the other hand, are the sums of infinite series which
may diverge if the order in which the terms of the series appear is altered. We may say
that the irrational numbers occupy a number field possessing a topological structure.

The degrees of freedom required for certain reactions, or interactions, to take place, are
only allowable within a space of large enough dimensionality to accommodate them.

The unreasonable effectiveness of mathematics within the physical sciences, borrowing
the famous phrase of the quantum physicist Eugene Wigner, is owing to the radically
overdetermined nature of natural phenomena.

A genuinely recursive system may only be derived from a recursive system equally or
more complex than itself; if a recursive system is "constructed" out of simpler
recursive elements, then the control system effecting or mediating the process of
construction is itself a recursive system of greater complexity than the system being constructed.

The information content of a particular structure is defined by the degree of precision
with which the structure approximates its intentional object. This definition is best
understood in terms of the "shattered hologram" metaphor.

A molecule belonging to a complementary pair (two molecules which naturally
hydrogen-bond to one another) favors the spontaneous self-assembly of the molecule to
which it bears a topologically complementary relationship. More generally, the
spontaneous self-assembly of molecules is favored by a vacuum containing energy
resonances complementary to those upon which the molecule's energy structure depends
for its sustained existence.

John Searle, the philosopher of mind and language, has stated that formal computational
systems are incapable of consciousness because such formal systems do not share the
causal powers possessed by the human brain. Since the
causal powers of matter, as Searle terms them, stem from what is forever spontaneously
occurring in the natural realm at the very smallest dimensions of time and space, the
process of abstraction, itself founded upon the systematic ignorance of finer details of
structure and function, introduces a kind of built-in blockheadedness into systems of
"artificial intelligence" which are physically realized from relatively macroscopic and
"insensitive" component parts, in accordance with "analytically closed-form" designs.

Vacuum fluctuations which are simultaneous in one reference frame (Lorentz frame) will
not necessarily be simultaneous in other frames. We may deduce from this that the
density of the quantum vacuum is different in different Lorentz frames.


I do not think that Hugh Everett's many worlds interpretation of quantum mechanics is
consistent with the implications of quantum experiments which have been performed in
the last few decades since the time (1957) when he originally proposed his version of
quantum theory. In Everett's theory, the collapse of the wavefunction is interpreted as a
sudden, discontinuous branching of the observer from one parallel universe, where the
wavefunction is uncollapsed, to a new parallel universe where the wavefunction exists in
one of its component eigenstates.

Enantiomer molecules, that is, molecules which were once thought to be identical in
every way except for being mirror reflections of each other, have recently been found
generally to differ in respect to their binding energies. So-called "right-handed"
molecules, such as the amino acids D-tyrosine, D-glutamine, etc., have been found to
possess smaller binding energies (and hence are less stable) than their mirror-image
counterparts, the L-series amino acids of the same names. Given the existence of a
spatial fourth dimension, it is possible to convert a right-handed molecule into its
identical left-handed counterpart by pulling the molecule into 4-space, rotating it
180° against the hyperplane (normal to it), and returning the molecule to its original
position within this 3 - hypersurface. This would suggest the existence of a preferential
curl field acting within this four dimensional continuum in a direction opposing the
rotation of an L - molecule and aiding the rotation of a R - molecule. This mechanism
would be one logical way to account for the observed differences in the binding energies
of identical L - and R - molecules.
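The 4-space maneuver described can be demonstrated with coordinates (a toy calculation written for this note; the tetrahedron of labelled points is an arbitrary chiral marker): a 180° rotation in the x-w plane carries a figure out of the hyperplane w = 0 and back into it with its x-coordinates negated, i.e., as its mirror image.

```python
import math

# Rotate a 4-vector by angle theta in the x-w plane; y and z are untouched.
def rotate_xw(point, theta):
    x, y, z, w = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * w, y, z, s * x + c * w)

# A minimal chiral marker: four labelled, non-coplanar points with w = 0.
tetrahedron = [(0, 0, 0, 0), (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)]

# After a half-turn through 4-space the figure lies in w = 0 again,
# but with x -> -x: the mirror-image (enantiomorphic) copy.
mirrored = [tuple(round(v, 9) for v in rotate_xw(p, math.pi))
            for p in tetrahedron]
print(mirrored)
```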


Information is neither created nor destroyed; information is always conserved, and when
it appears to be created, it is merely being transduced by being re-expressed within
another medium. There are myriad different media through which portions of the
eternally pre-existent information may be expressed, but there exists a primary medium
which contains all information originally. All other media through which information
might be expressed are ultimately dependent upon this primary information medium. In
the same way that the transduction of energy from one medium, say mechanical, to
another medium, say electrical, is always accompanied by a loss of a portion of the
transduced energy as heat energy (whereby entropy is increased), some information is
always lost in the transduction of information from the primary medium to other
secondary media. For this reason, no informational systems or structures are permitted to
come into being which possess an information density greater than that of the volume
which they occupy, this volume being pervaded by energy in its primary form (vacuum
energy). In the same way, there is a limit to the mass-energy density of any particular
volume of spacetime; this limit is that specified by Schwarzschild's equation for the
energy density of black holes. The information which is inevitably lost as a result of the
transduction of information from the primary medium to secondary media simply passes
back into the primary medium.

The law of the temporal evolution of information systems is provided by the pre-existing
spatial distribution of information. The determinate is dependent upon the indeterminate.
Any eigenstate which results from the process of quantum measurement is sustained in
existence by the eigenfunction spectrum of noncommuting operators. In other words, the
determinate eigenvalue associated with the determinate eigenstate which results from the
act of quantum measurement must be sustained in existence through the fluctuations of
incompatible eigenvalues which constitute the infinite Heisenberg uncertainty which
exists with respect to the noncommuting variables. To wit, the finite exists only through
its participation with the infinite. All transformations are definable in terms of mere
projection operations; therefore, these transformations, when investigated, always reveal
the presence of conservation laws which seem to govern, or provide constraints upon,
these transformations. What is called the unity of apperception in Kant's Critique of
Pure Reason is synonymous with the existence of the underlying noumenon which
provides the rationality of any particular series of perceived continuous transformations
entertained within a finite mind. The interpenetration of the categories of time and space
supports the unity of apperception.


The increase in complexity of coherent systems with time would seem to involve the
creation ex nihilo of quantities of information. There are reasons for believing, however,
that what is really involved in cases such as this is merely the partial redistribution of data
along a spatial information gradient onto a gradient which is part spatial and part
temporal where the total quantity of information is conserved in the process. With the
introduction of excitation energy, the nonlocal, distributed information content is partially
transformed into local, lumped information content.


Is it, in principle, possible for all the neural firings which comprise the brain state to
which is associated a particular mental state to have been stimulated to occur entirely
from outside the brain's neural network, obviating the need for intricate feedback loops
connecting the neurons with one another which normally support such patterns of neuron
firings? Intuitively we suspect that merely reproducing the requisite neural firing patterns
from outside the network would not be sufficient to produce the normally occurring
associated mental state. This is because the observed neural firings would only possess a
determinate order in terms of the perception of their order by means of a neural network
genuinely possessing intricate feedback structures. We might, in turn, be puzzled by the
force of this particular intuition which has at its root the notion of the importance of
timing and synchronization of the neurons with respect to one another. But this would
really only be important if there were something which the neurons incidentally interact
with in the course of their normal process of functioning to which the order of their
"firing" might be fundamentally related. We might then seek to include this additional
something and produce the changes in it also from outside, just as in the case of the
neurons. Notice that in every case where we are supposedly able to reproduce a given
sequence of neural firings, we are dependent upon a favorable condition wherein the time
interval between firings within a given small region of the brain is larger than the time
uncertainty of the quantum system constituting the neural network. It is precisely at this
point where the time interval between neural events becomes comparable to the quantum
mechanical time uncertainty in which we become no longer able to determine the
sequence of these events from without, or outside these events! It is here that we say that
the temporal order of events becomes indeterminate. However, what we really mean to
say here is that the sequence of events has become indeterminate with respect to a set of
external controls. We find that our earlier intuition about the problem of the timing of the
events appropriate to the establishing of the requisite brain states crops up yet again.
There is still something with respect to which the patterned events (comprising the
requisite brain states) occur which is important from the standpoint of timing and
synchronization, and we might, therefore, again, seek to include it, just as before. The
point here is that this process of trying to include the entire background against which the
timing of the brain events are significant can never be completed; we face an infinite
regress here. This regress is apparently resolved within the quantum mechanically
uncertain time interval of the network and therefore is forever beyond manipulation from
outside, that is to say, there cannot exist a determinate program adequate to produce the
timing necessary to integrate or unify the neural firings into the requisite coherent pattern
we term consciousness. To restate, this is because the ultimate embedding substrate
within which the neural network functions is made up of interconnected events
possessing a time uncertainty which prevents their delicate synchronization from ever
being introduced from outside; moreover, such a synchronization of events could not be
simulated by a computer utilizing a deterministic program since the notion of the rate of
time's passage is meaningless within a deterministic system.

We must remember that a computer simulation is a strictly formal operation for which the
physical system mediating the simulation must satisfy a mere set of necessary conditions
such as stability, continuity, causality, internal synchronization, etc. The timing of
physical occurrences taking place within the physical medium of the program with
respect to external events is completely immaterial because, in effect, nothing new ever
takes place during the course of the calculation which is one and the same process as the
simulation itself. Contrast this with a situation in which the medium in which data
manipulation or processing is taking place constitutes not merely a necessary condition,
but a necessary and sufficient condition for the "calculation" being performed.
Distributed throughout the medium is the information necessary to carry through the
action of data processing to its desired conclusion, but this information is not
immediately available; it must be accessed.
We stated earlier that a digital computer does
not, in a fundamental sense, possess memory, since it is always possible to duplicate a
digital computer possessing data in memory without the duplicate ever having been
"switched on" so that it might receive "input." We also noted that a quantum computer
is capable of possessing genuine memory due to the impossibility, in principle, of
copying a quantum state! This genuine memory possessed by the ideal quantum
computer represents an authentic example of privileged or private access of the computer
to its own "computational" states!
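The in-principle impossibility invoked here is the no-cloning theorem; a one-line linearity argument (standard quantum mechanics, supplied here for reference):

```latex
% Suppose a unitary U copied arbitrary unknown states:
%   U ( |\psi\rangle \otimes |0\rangle ) = |\psi\rangle \otimes |\psi\rangle .
% For |\psi\rangle = a|0\rangle + b|1\rangle, linearity of U gives
U \bigl( \lvert \psi \rangle \otimes \lvert 0 \rangle \bigr)
  = a \, \lvert 0 \rangle \lvert 0 \rangle + b \, \lvert 1 \rangle \lvert 1 \rangle ,
% whereas cloning would require the product state
\lvert \psi \rangle \lvert \psi \rangle
  = a^2 \lvert 0 \rangle \lvert 0 \rangle + ab \, \lvert 0 \rangle \lvert 1 \rangle
  + ab \, \lvert 1 \rangle \lvert 0 \rangle + b^2 \lvert 1 \rangle \lvert 1 \rangle .
% The two agree only when a = 0 or b = 0, so no universal copier exists.
```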

How do we succeed in explaining the action of a medium in terms of elements derived
through reductive abstraction of these elements from the original activity of such a
medium? An example of this paradox is the on-going attempt by theoretical physicists
to explicate the action of the total quantum vacuum energy field in terms of the
subatomic particles discovered to date.


Of course, what is called dualism is completely ruled out in the case where the brain is
thought to function in a deterministic manner. This is because the isomorphism which
must obtain between brain states and mental states precludes the possibility that these
two qualitatively different types of states are causally connected; for any effective causal
interaction between the two would necessarily disrupt the isomorphism which is
presupposed by the dualistic theory of mind. On the other hand, in the absence of causal
interaction between brain states and mental states, there is no rational basis upon which
we can say that particular mental states correspond to, or belong to, a particular brain.
If, however, dualism is rejected and causal relationships are allowed
to obtain between brain states and mental states, then both types of events/processes must
be mediated by the very same underlying substance, or substrate. In this way, the whole
distinction between what are called brain states and mental states completely breaks
down, and one is forced to adopt a monistic theory of mind.

The notion of historicism, in the sense provided by the theories of Marx and Weber, is
conceptually unintelligible because it assumes the existence of a distinction the validity
of which it then later denies: namely, the distinction between physical causal factors and
historical factors in the explication of social, political, cultural, and economic
developments. The validity of historicism would mean that history as a science of large
scale human development doesn't really work, that it doesn't have anything substantive to
say at all because the real causal efficacy behind the changes which history has
traditionally studied lies at a level of description which is at once lower and more
fundamental than that where historical explanations are articulated.


That which is the source and sustainer of all things cannot be viewed as being anything
but infinite. Evolution may be only a local phenomenon, not a global one. Evolution
in the sense of the genuine emergence of wholly unprecedented and
unanticipated forms, structures, or dynamisms - without this process of development
somehow drawing upon a pre-existing reservoir of information within which these
"emergent" forms are at least implicitly prefigured, and which mediates and underpins the
evolutionary developmental process - is tantamount to the acceptance of a fundamental
process which is itself uncaused and which is not admitted to be the cause (or reason) for
its own existence. In the vacuum, information exists in a nonlocal, simultaneously
connected form. When the vacuum energy fluctuations mediate the occurrence of
physical processes, there is a transduction of nonlocal, parallel and simultaneously
connected information into a new local, sequential, and temporally connected form. The
paradigmatic example of this transduction process is the spontaneous production of
fundamental particles out of the vacuum within accelerated reference (non-Lorentzian)
frames.

The ground of existence cannot be outstripped by any possible existent "thing."
Nonlocality presents the possibility of putting quantum mechanical probability on a
"rational" footing; in other words, a given wavefunction is normalizable on the average.
The condition of normalizability is not a very restrictive condition on a quantum
mechanical wavefunction; there are an infinite number of ways to refract or decompose a
given wavefunction into a spectrum of eigenstates (of an incompatible observable) so as
to satisfy the normalization condition.
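The normalization condition invoked here can be written out explicitly; the following is the standard textbook statement (Parseval's relation), not an addition to the argument:

```latex
% Expand the wavefunction in any complete eigenbasis {|\phi_n\rangle}:
\lvert \psi \rangle = \sum_n c_n \lvert \phi_n \rangle ,
\qquad c_n = \langle \phi_n \vert \psi \rangle .
% Normalization then reads the same in every such basis:
\langle \psi \vert \psi \rangle = \sum_n \lvert c_n \rvert^2 = 1 ,
% so each of the infinitely many decompositions into eigenstate
% spectra satisfies the condition equally.
```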

The fundamental paradigm shift which marked the transition from classical (Newtonian)
mechanics to the mechanics of quantum phenomena may be captured in the manner in
which the implied unified physical law constrains the phenomena of nature: classical
physical law states that what it prescribes to occur must necessarily occur - any behavior
apart from this being forbidden; quantum mechanical physical law states that what it does
not proscribe or forbid to occur necessarily occurs. This constitutes a kind of fecundity
principle.

If the quantum mechanical vacuum is the origin of temporality, then the vacuum must
itself be timeless, which is to say, eternal. Moreover, that which is the originator of space
must itself be spatially unlimited.

Human intelligence has evolved to a point just short of that required to think something
genuinely interesting.

The neural network computer does not store information in any particular location within
the network, but stores information at particular energy levels of the global interaction of
the network as a whole. Each new bit of data which is fed into the network is stored at a
next higher energy level of the network. What ultimately becomes this next higher
energy level is determined by a virtually chaotic process of neural "firings" which occurs
throughout the network and which is stimulated by the introduction of a data input signal.
Myriads of neurons throughout the entire network continue to fire randomly until a new
state of least energy is reached by the neural network as a whole. If the separation
between different energy levels within the neural network (representing different bits of
information) is small enough, then it becomes very probable that
there will be a process of continual virtual energy transitions occurring between the
various discrete energy levels of the network throughout its entirety. An interesting point
here is that these virtual energy transitions within the network are due entirely to the action
of the quantum fluctuations in the energy of the vacuum electromagnetic field.
Moreover, the probabilities of given neural energy transitions occurring within the
network are determined by the presence of the constantly occurring virtual energy
transitions of the network which, again, are mediated entirely by way of the quantum
mechanical fluctuations in the vacuum electromagnetic field, themselves, owing to the
necessary presence of Heisenberg energy uncertainty within the quantum vacuum. An
essential difference between what are called virtual and what are called real energy
transitions, is parallel to the distinction between what are called virtual particles and
what are called real particles, respectively, in the theory of particle physics, namely,
virtual energy transitions cannot be measured directly, whereas real energy transitions
can be measured directly, for instance, in laboratory experiments. The real energy
transitions which take place within the neural network, and which are responsible for the
communication of its processed information to the "outside world," i.e., to the
consciousness of both the individual subject as well as his listeners, in the case of verbal
communication, are, themselves, overdetermined phenomena. This is to say, there are an
indefinite number of distinct sequences of virtual energy transitions which are capable of
producing the very same real energy transition within the neural network. This assertion
reminds us of Feynman's sum-over-histories formalism for calculating the probabilities of
fundamental particle reactions. If it were not for the existence of energy degeneracy
within the neural network, there would be only one path of neural firings possible
connecting one energy level of the network to the next higher one. The operation of a
neural network would in this case be formalizable in the form of a computer algorithm.
Thus is the way to what is called intentionality opened up and made possible: the very
same determinate sequence of neural firings may have an unlimited number of alternative
future brain states in view. It is interesting to note that the interaction of
the virtual energy states of the neural network is not mediated primarily by the physical
realization of the network itself, but by the next highest order of perturbations to the
energy of the neural network. What we have been calling "virtual energy transitions," are
really only the first order perturbations to the global energy of the neural network,
conceived of as a quantum mechanical system. The first order perturbations, what we
have been calling, "virtual transitions" within the network, are themselves, informed or
mediated by, the quantum mechanical perturbations to the first order perturbation
energies of the network, i.e., 2nd order energy perturbations, thus making the first order
perturbations overdetermined phenomena as well. In turn, the second order perturbation
energy transitions (what might whimsically be called, virtual - virtual energy transitions)
are mediated by the occurrence of transitions between second order perturbation energies,
etc., and so on.
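The relaxation-to-least-energy picture sketched above corresponds, in conventional (non-quantum) terms, to a Hopfield network. The sketch below is a standard classical Hopfield model, offered only as an illustration of energy-minimizing retrieval, not of the quantum-vacuum mechanism proposed in the text:

```python
import random

# One stored pattern, Hebbian weights (zero diagonal).
PATTERN = [1, -1, 1, -1, 1, -1, 1, -1]
N = len(PATTERN)
W = [[0 if i == j else PATTERN[i] * PATTERN[j] for j in range(N)]
     for i in range(N)]

def energy(state):
    """Hopfield energy E = -(1/2) sum_ij W_ij s_i s_j."""
    return -0.5 * sum(W[i][j] * state[i] * state[j]
                      for i in range(N) for j in range(N))

def relax(state, steps=200, seed=0):
    """Asynchronous random updates; each update can only lower E."""
    rng = random.Random(seed)
    state = list(state)
    for _ in range(steps):
        i = rng.randrange(N)                        # pick a random "neuron"
        h = sum(W[i][j] * state[j] for j in range(N))
        state[i] = 1 if h >= 0 else -1
    return state

corrupted = [1, 1, 1, -1, 1, -1, 1, -1]  # stored pattern with one bit flipped
print(energy(corrupted), energy(relax(corrupted)))  # energy decreases
```

Here information is stored in the weights of the whole network rather than at any location, and the corrupted input slides down the energy landscape to the stored pattern, the classical analogue of the "new state of least energy" described above.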

At this point we might realize that the real energy transitions occurring within the neural
network which are normally thought to be immediately responsible for the processing of
all information by the network, are engendered not by physical processes occurring at a
lower level of organization, but via processes taking place at higher levels of
organization, represented by the next higher order perturbation energy description of the
neural network. We know that the virtual energy transitions of any given quantum
mechanical system are owing to the presence of energy uncertainty in the system. It is
more accurate, however, to say that this energy uncertainty is present in the quantum
vacuum itself, and is merely communicated to the quantum system, of interacting
elementary particles, say, through the exchange of energy between the quantum system
and the vacuum, which is itself where the energy uncertainty originates; we saw earlier
that the wavefunction describing a quantum mechanical system cannot be normalized if
the energy uncertainty is conceived of as being a property of the quantum system itself,
so that it must be an inherent property of the quantum vacuum. So perhaps we see now
that the neural network itself acts merely as a kind of terminus to an information
reduction process, it acts as a kind of "reducing valve" which serves to abstract a
relatively tiny portion of information from the virtually infinite information content of the
overdetermined quantum vacuum which is instantaneously and nonlocally connected
with itself, and therefore represents the highest level of information processing because it
constitutes the interconnectivity of energy at its most exhaustive level. On this view,
information, like energy, may not be created or destroyed, but is a conserved quantity,
and its ultimate source is the infinite energy reservoir represented by the quantum
vacuum. We already saw how Temporality itself stems from the presence of quantum
energy uncertainty, which, in turn, originates in the vacuum, conceived of, again, as a
reservoir of virtually infinite energy density. Consequently, since Temporality itself has
its origin in the vacuum, it follows that this infinite sea of vacuum energy itself had
no beginning in time! The vacuum now begins to remind us of Heraclitus' "ever living
fire," "in measures kindling and in measures going out," and thereby mediating, as an
eternal flux, all the changes continually taking place in the natural order. Moreover,
Heraclitus' statement that, "everything is an exchange for fire, and fire an exchange for
every thing," reminds us of the interconvertibility of mass and energy in quantum relativistic
field theory, this interconvertibility being mediated by the continual exchange of energy
between matter and vacuum. Heraclitus' "ever living fire" is to him the fire of the gods,
yet uncreated by the gods. His statement that "Thunderbolt steers the Universe" no
doubt refers to the thunderbolt wielded by Zeus, the greatest of all the Olympian gods;
when this thunderbolt is identified with "the fire of the gods," that is, with Heraclitus'
ever living fire, the parallel between it and the vacuum becomes an intriguingly close
one; the quantum vacuum, by eternally mediating all physical processes, manages to
"steer the Universe." It is also interesting that Greek mythology tells us that Time owes
its existence to Chaos through the fact that the god Chaos is named as the father of
Kronos. Moreover, the Greek word, arche, which means source or origin in ancient
Greek, is translated into Latin as principium, i.e., ordering principle. The idea behind this
particular translation of arche into principium is the same one expressed by Leibniz,
when he states in his Monadology that, "the conditions sufficient to create the world are
necessary at every succeeding moment of time in order to sustain the world's existence."
We now arrive at the notion of first cause, not in the usual sense of first in a temporal
sequence, but in the at once broader and subtler sense of most fundamental or
substantive.







Since it is the pattern of virtual particle emission and absorption which every real particle
continually undergoes which determines the mass of the particle, it follows that real
particle masses are determined through the particular manner in which real particles
exchange energy with the fluctuating quantum vacuum field; consequently, alterations in
the density of the vacuum field energy will affect the masses of particles occupying this
vacuum. We might expect that this relationship between mass-energy and vacuum-
energy is symmetrical in nature because the interactions mediating the continual
exchange of energy between matter and vacuum are themselves reversible interactions.
This two-way causal, symmetrical relationship between mass energy and vacuum energy
within quantum field theory reminds us of a similar relationship between mass and space-
time curvature within the theory of general relativity: the presence of mass within a
given region of spacetime produces an additional curvature in this spacetime; also, an
increase in the curvature of a particular region of spacetime produces an increase in the
mass of particles or material bodies already occupying this region. Since spatio-temporal
variations in the energy density of the vacuum energy field are correlated with variations
in spacetime curvature, we might suppose that some sort of conformal mapping
relationship obtains between the ratio of real particle to virtual particle energy densities
and the degree of mutual inclination of the time and space axes (of the Minkowski light
cone) to one another. This relationship is also suggested by the fact that real particles are
virtual particles which have been promoted to the level of real existence through the
absorption of energy; particles are excitations of the vacuum state which is itself a
reservoir or sea of virtual particles. Also, through the application of Mach's formula for the
speed of sound to this vacuum energy reservoir, we see that such a conformal mapping
relationship between Einsteinian space-time curvature and spatial-temporal variations in
the zero-point energy of the vacuum (or, alternatively, its energy density) must involve
mappings between the hypersolid angle swept out by the light line in four-dimensional
(Minkowski) spacetime, and the energy density (or pressure) of the vacuum.
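The "Mach's formula" alluded to above is presumably the standard thermodynamic relation for the speed of sound in a medium; for reference (this is the standard formula, not a claim specific to the vacuum argument):

```latex
% Speed of sound c_s in a medium of pressure p and density \rho,
% with the derivative taken at constant entropy S:
c_s^2 = \left( \frac{\partial p}{\partial \rho} \right)_{S}
```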


We must distinguish between evolution's creative and its critical faculties. Adaptation is
not the purpose of evolution; it is the trial and error critical process which evolution is
subjected to by the contingent environmental conditions within which it finds itself
operating. Darwinian natural selection is merely a critical process; it is not in any way a
creative process, in and of itself. Natural selection merely structures, channels or filters
the creative suggestions made to it; it plays no role whatever in the fundamental
dynamism driving biological evolution; natural selection is merely the set of boundary
conditions within which the dynamism of evolution functions, perhaps in the sense of
Bergson's elan vital. Here again, we have an example of how boundary and initial
conditions are essentially separable from the dynamisms which they circumscribe. Similar
remarks were made regarding the process of technological advancement, which was
viewed as a progression in sophistication in imposing initial and boundary conditions
upon the invariant and unitary dynamism of Nature. We know that natural selection is
not able to operate unless self-reproducing information-bearing structures are already in
existence; moreover, natural selection has little opportunity to mold these self-
reproducing structures into more complex forms unless it can profit from the creative
suggestions made to it through the operation of random mutations, themselves useless if
they cannot be passed on to future offspring. So it is also necessary that something
analogous to a genetic code be contained within these self-reproducing structures,
themselves the expression of the information contained within this genetic code.

The problem, then, with Darwinism, or its modern derivative, neo-Darwinism, is that a
great deal of evolutionary development must have already occurred, in the form of so-
called chemical evolution prior to the appearance of the first self-reproducing,
information bearing structures, before the distinctly Darwinian process of natural
selection of random mutations is permitted to begin. So the creative dynamism, spoken
of previously, is to be identified with that dynamism lying behind the prebiotic process of
chemical evolution, a process which does not halt its operation once the Darwinian
process of evolution commences, but which continues hand in hand with natural
selection, and, moreover, maintaining its central role as the motivating dynamism in the
evolution of progressively more complex forms of life.



The subjective "I am," which is just the individual's personal consciousness, dependent
upon the objective world outside itself, is not to be confused with the objective "I AM,"
which is the one and unique self-existent Consciousness which is the source of all
individual subjective consciousnesses.

It is the quite common type of fantasy, frequently indulged in by proud and vain human
mortals, to imagine oneself in some glorious situation where one is either vindicated,
suddenly elevated in greatness (or suddenly shown to have been great) or rendered,
usually by one's own efforts, victorious or triumphant over some powerful adversity or
persecution, or moreover, to receive praise and adulation from the many as one speaks,
performs or otherwise acts in a manner which compellingly displays one's authority. We
human beings indulge in this kind of fantasization a great deal when we are children,
perhaps more so when developing adolescents, while some of us, upon becoming
"adults," tend as we approach middle age to set aside, eventually completely, such
obvious puerile self-glorifications of the imagination. Some of us, on the other hand,
never seem to put such self-gloried imaginings behind us, despite advancing age and
maturity.
Everyone has either heard of or reflected upon the phenomenon of selfishness exhibited
in the strong tendency we all have of seeking out (in secret, of course) from a pack of
family photographs, those particular photos in which we ourselves appear. Eventually,
we become aware of the implications of this kind of behavior, which shames us: if
everyone were to be this self-centered, then there would be no one left to care for me as
much as (or more than) I care for myself and I would be a selfish person alone in a
universe of selfish persons. Of course, part of one's motivation for thinking in this way,
perhaps unbeknownst to oneself, is Kant's categorical imperative; to wit, that I
must act in such a manner that I will that my action become a universal moral law.
But what of the phenomenon where I imagine myself being some other person when I
am in the midst of some profound or deeply moving aesthetic or intellectual experience; I
imagine what this experience would be like for this other person and somehow the
intensity and wonder of the experience is amplified for me, myself, through this
psychological projection. Part of the augmentation of the aesthetic experience for me is
the sense of personal, if partly disembodied glory which redounds to my sense of identity
because it is I who am leading this person, in my mind's eye, to the unaccustomed though
fuller appreciation of this experience. Partly again, the experience is, for me, augmented
because I borrow the other's innocence, using it as a foil against which the experience
may be rediscovered by me in all its aboriginal wonder. Moreover, I act as this person's
spiritual mentor; I help this person penetrate a mystery which I have long ago discovered
for myself, and if this other person is ultimately identified with my own self; this is
implied because all of these projections occur within my mind's eye, then I seek to view
myself as the father of my own future spiritual and intellectual development. But on a
more basic human level, I am imagining the sharing of a profound idea or experience
with another person in a way which is seldom, if ever demonstrated in actual social
intercourse with my fellow human beings; certainly it is love which motivates this
peculiar psychological projection - the kind of love which does not distinguish self from
other.



I have no acquaintance with physical objects so-called, nor with any phenomena
taking place within the domain of other minds; in fact, I have had no acquaintance with
any phenomena whatever other than those pertaining to my own psychological states,
states which are presumably closely related to the concerted electrochemical activity of
my billions of cortical neurons. Consequently, I am forced to accept the existence of
physical objects and other minds purely on a faith borne of appearances which might be
easily explained numerous other ways than those which seem to be dictated by what is
called "common sense." I wish to remark here that if an omniscient (not to mention
omnipotent) God fails to exist, who is, by the way, the only possible guarantor for the
existence of an objective world containing consciousnesses other than my own, then there
is absolutely nothing standing in the way of my drawing the less than comforting
conclusion that I alone exist, i.e., Solipsism is the metaphysical truth, and moreover, there
is absolutely nothing standing in the way of my concluding that I, myself, am ultimately
responsible for all the phenomena which I have experienced or ever will experience and
that God does indeed exist and that I am Him. But of course, I wholeheartedly reject
such a preposterous conclusion: solipsism is a thesis which I must reject out of hand and
with it the proposition that God does not exist. What I have just stated above is by no
means a rational proof of the existence of God. But it is an argument which reflects the
inner logic of the whole of my being in its ineluctable affirmation of His existence from
which I have not the strength to depart.

The very structure of language contains within itself the subtle implication that all
human beings possess the belief, whether they consciously realize it or not, that the sum
total of possible knowledge is itself integral. But this hypothesis about the inherently
integral nature of knowledge implies, in turn, the existence of a unitary repository for this
sum of knowledge, which is to say, a universal mind or intellect.





It occurs to me that all true mysteries are intimately connected and intertwined with one
another; to find the solution to only one of these mysteries would mean having found the
answer to all. Just listing some examples may succeed in illustrating to one's deeper
intuition that this must be true: a few of these mysteries are that of existence itself, the
origin of consciousness, freedom of the will, the mystery of Time and along with it that
of eternity, the mystery of immortality and that of divinity. A bright young child may
agree with this observation, remarking, "well, God exists and He knows the answer to all
things." But it really does seem that the contemplation of any one of these mysteries
inevitably leads to the contemplation of all the others as well as some which I haven't
mentioned, and one may ask, "why might this be so?"




Most individuals are totally incapable of what is called lucid dreaming, dreaming in which
the dreamer remains aware that he is dreaming and is in control of all the action of the dream.
Freud's doctrine of the conservation of psychic energy suggests that the control of the
dream action is mediated by the domain of the psyche lying between full consciousness
and the level of consciousness at which the dreamer's mind operates so that the action of
the dream must dissolve upon the dreamer attaining his full consciousness because the
intermediary domain of consciousness which controls the dream is reduced to nil or
"squeezed out." An analogy will serve here. A river may not rise to an altitude greater
than that of its source, at which level its kinetic energy is completely transmuted into
potential energy. It might therefore be thought that only those individuals who
experience repression of their normal full consciousness would be capable of "lucid
dreaming" as the control of the dream action would be mediated by the consciousness
within the domain between the individual's repressed consciousness and his normal
potentially full consciousness; this is just a slightly more abstract way of saying that the
psychic energy which is usually unavailable for utilization by the conscious mind is freed
up during the unconsciousness of sleep and rendered available to the unconscious for use
in mediating the phantasmagorical action of the various dream sequences, themselves,
according to Freud, the acting out of wish fulfillments. The upshot of all this is that the
presence of lucid dreaming is a possible indication that the individual experiencing it is
not reaching his normal psychic potential for full wakeful consciousness and that the
reason for this is a deficit of available psychic energy due to the presence of myriad
emotional conflicts lying repressed within his unconscious mind.




Quantum Mechanics verifies the old Scholastic metaphysical understanding of all change
or Becoming as occurring due to a deficit of Being: all real physical processes are
mediated via virtual processes; these virtual processes possess by definition an energy
smaller than the energy uncertainty of the quantum mechanical system which is
comprised by the real processes mediated by them. The total energy uncertainty of a
quantum mechanical system is, by the way, relative to the reference frame within which
the system is "viewed," and therefore differences in the vacuum's zero-point energy
reflect changes in our frame of motion - in the sense provided by relativity. More
specifically, Lorentz contractions occur not only to the eigenvalues of length, but also
to the quantum uncertainties of length. Similar statements may be made with respect to
momentum, energy, time, etc.
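The energy-time uncertainty bound underlying the claim that virtual processes carry less energy than the system's energy uncertainty can be illustrated numerically. This is a standard back-of-the-envelope estimate; the choice of a virtual electron-positron pair is purely an illustrative assumption:

```python
# Time-energy uncertainty sketch: a virtual process "borrowing" an energy dE
# can persist for at most roughly hbar / (2 * dE).
hbar = 1.054571817e-34      # reduced Planck constant, J*s
m_e = 9.1093837015e-31      # electron mass, kg
c = 2.99792458e8            # speed of light, m/s

dE = 2 * m_e * c**2         # energy borrowed to form a virtual e-/e+ pair, J
tau = hbar / (2 * dE)       # maximum lifetime permitted by the uncertainty relation, s

print(f"borrowed energy: {dE:.3e} J")
print(f"lifetime bound:  {tau:.3e} s")
```

The resulting lifetime is on the order of 10^-22 seconds, which gives a feel for why virtual processes never appear as directly observable events.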
This precise "deficit of Being" may only be defined in terms of the complete
description of the total process; this description, as already noted, exists only for itself
and cannot be derived from without as an infinite regress of description stands in our way
here. In this connection we may state the fundamental principle that "the mediator may
not be defined in terms of the total set of processes which it mediates." This "deficit of
Being" of Scholastic philosophy is exactly analogous to the energy uncertainty of
quantum mechanics. If Hegel is correct in saying that positive entities exist only by
virtue of the accumulation of various negations of relations holding between the Absolute
and itself, then each entity must more or less clearly and distinctly exhibit the unified
totality of Reality in microcosm, the developmental program of which is contained in the
greater quantity of information which is determined when a number of these entities
come into conjunction with one another. For instance, the molecular bonding of atoms,
whether it be ionic, covalent, or by the weaker Van der Waal's force, cannot be induced
to occur within a previously specified period of time simply through the manipulation of
the atoms "from outside"; one may only place the atoms in the approximate position
whereupon the action of bond formation ensues through the spontaneous action of the
vacuum electromagnetic field. In fact, the quantum mechanical method utilized in
physical or quantum chemistry to determine the various bonding probabilities has nothing
whatever to say about which precise configuration or shape, out of the many possible
configurations, will be formed as a result of the spontaneous bonding process. This fact
is readily seen when one attempts to view the spontaneous folding of a denatured
macromolecule such as a protein as the result of the amino acids composing the
molecule trying myriad different conformations by trial and error until the
energetically favored (most stable) conformation is found. In even relatively small
macromolecules the total number of possible conformations is so staggeringly large that
even if the components try a different configuration every 10^-13 seconds (a very liberal
assumption) the time required to hit on the "correct" conformation by chance would take
many orders of magnitude longer than the present age of the observable universe! The
wavefunction which describes the total system of interacting atoms entering into the
bonding process is approximated as a product of the individual wavefunctions describing
the approximate quantum state of the atoms; it is only the complete wavefunction which
describes these atoms as being inextricably entangled with the whole vacuum-energy/
mass-energy matrix which contains the information about the shape of the resultant
molecule, among other things. No individual quantum mechanical wavefunction is truly
normalizable, although a large ensemble of such wavefunctions will approach
normalizability in the limit of infinite ensembles. There will always appear to be
coupling between the eigenstates of a wavefunction which is, itself, merely an
approximately exact wavefunction; in reality, there is only one universal wavefunction,
as its normalizability requires. This process is very much akin to the decrease of
fuzziness in a holographic image which occurs on two or more pieces of a shattered
holographic emulsion when the various pieces are refitted together. On this view all
development of material systems from the simplest subatomic constituents to the most
complex living organisms, consists in the negation of negation, engendered by the
conjunctions of these constituents, which occur by chance outside the length uncertainty of
the constituents but which are actively directed once the constituents interpenetrate to
within the effective range of their innate quantum-uncertainty-mediated
tendencies toward self-organization, tendencies which are a manifestation of the partially
distinct "image" of the Absolute, i.e., the universal wavefunction, which each constituent
contains in microcosm within itself. Because creation is conceived under the aspect of
the negation of the negation of contextual relatedness within the Absolute, this negation
which is negated being understood as the reduction of information resulting from the
partial expression of the universal wavefunction as a product of partial wavefunctions
corresponding to relatively uncertain subatomic constituents, the problem of "Why is
there something rather than nothing?," is no longer rendered insoluble by the
requirement of explaining creation ex nihilo, but it is simply recognized that there has to
have always been something or other, so what better candidate for this something than
that unified plenum which is its own description, which is per Aristotle its own eternal
contemplation; that entity which Hegel calls the Absolute, and which we have styled "the
universal wavefunction."
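The conformational-search estimate made above (a new configuration tried every 10^-13 seconds) is easy to check directly. The residue count and the number of conformations per residue below are illustrative assumptions in the spirit of Levinthal's classic argument, not measured values:

```python
# Back-of-the-envelope Levinthal-style estimate of an exhaustive
# conformational search (illustrative assumptions: 100 residues,
# 3 conformations per residue, one trial every 1e-13 s).
residues = 100
conformations_per_residue = 3
trial_time_s = 1e-13

total_conformations = conformations_per_residue ** residues
search_time_s = total_conformations * trial_time_s
search_time_years = search_time_s / 3.15e7     # ~seconds per year

age_of_universe_years = 1.38e10
print(f"possible conformations:   {total_conformations:.2e}")
print(f"exhaustive search time:   {search_time_years:.2e} years")
print(f"ratio to universe's age:  {search_time_years / age_of_universe_years:.2e}")
```

Even with these modest assumptions the search time exceeds the age of the observable universe by many orders of magnitude, which is the point the passage above is making.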
September 2011
Should the notion of substance turn out to be an
incoherent one, then we are not barred from asserting that, "you can indeed get something
from nothing provided that there are no possible conditions to prevent this."


Dr. Scholem, in his Major Trends in Jewish Mysticism, tells us that in any case where there is a
transition from one state or configuration to another that, "the void is spanned for an
instant." Madame Blavatsky refers to evolution as a "spiritual Opus in reverse," meaning
that the world arose through a process of involution (a reduction of the infinite to the
finite, but containing a vague recollection of the whole from which it was originally
abstracted and which guides its future development). This future development is
constituted by the return of the Absolute back to itself, directed from above and hence is
a recapitulation of the timeless and transcendent within the realm of the temporal and
immanent.

To say that Reality cannot be given a complete description on account of the inevitable
pitfall of infinite regress is merely to say that if this description does not already exist,
then it can in no way be constructed; there is nothing in what has just been said to prevent
a complete description of Reality which has always existed. In fact, it is the lack
of a complete description that is responsible for the existence of temporality. This
observation is very much in the same spirit as the mathematician Kurt Gödel's discovery
that the notion of truth is stronger than the notion of provability; the fact that a theorem
expressible within the symbolic language of a particular logico-mathematical system may
not be constructed from the axioms of its system utilizing the rules of inference of this
system does not prevent this theorem from "existing" in the sense of its being true. But
one might ask what is the meaning of mathematical theorems which are beyond the
comprehension of any finite mind and which are not true by construction from a possible
collection of axioms, but true in some more fundamental sense.

Wittgenstein tells us that we may not use substantives in a manner which attempts to
extend the reference of these terms beyond the scope of all possible experience without
falling into either meaninglessness or absurdity. Therefore, when we ask the question,
"Does God exist?" the most we can possibly mean by this question is, "Does God exist
within our space-time continuum?" We cannot ask whether God exists in Reality
(Wirklichkeit) since Reality cannot possess a complete description without this
description having always existed and without admitting the existence of such a
description, one necessarily beyond all possible attempts to construct it, and which may
only exist from a single point of view or perspective. So it seems that one may not ask
whether God exists in Reality (in the sense of Wirklichkeit) without presupposing an
answer in the affirmative, because the admission of the existence of Ultimate Reality is
one and the same with the admission of God's existence. It is meaningful, however, to
ask this same question in the sense of Realität. That which has always existed, and its
complete description, which has likewise always existed, must both be part of this same eternally pre-
existent Reality. It is obvious that the description and the Reality must be essentially one
and the same entity which is, who is, God. The finite may not be complete without
conditions being specified, and these specified conditions may not obtain except within
the purview of a larger context, containing the particular finite thing.


Only those independently occurring genetic mutations may occur at lower levels in the
gene regulatory hierarchy, i.e., to structural genes, which become integrated together
within a member of an evolving species which might have possessed a common source in
the form of single mutation to a higher order regulatory gene, one which controls the
expression of the original set of structural genes prior to the occurrence of these
independent mutations. The operation of Darwinian natural selection presupposes the
existence of a gene regulatory hierarchy which controls not only the expression of
individual genes, but more importantly controls the integration of the expressions of large
numbers of genes at the lower levels of this hierarchy.




To reiterate, there has always been something. What better candidate for this something
than that than which nothing greater can be conceived (per Anselm)? This something
has no complete description except within and through itself. The only truly determinate
entity is either the totality of everything which is possible or the void, as nothingness is, by
its very nature, determinate.




It has been humorously noted that Time exists so that everything will not happen at once.
Temporality seems to require that (per Bergson) genuine novelty continually intrude into
the world. The act of creation is not an event within history; it is the beginning of
history; it is the very inception of novelty. But since the continued presence of
temporality seems to require continual activity of creation, it seems more consistent to
assume that creation itself is a fundamental process which itself had no beginning, which
was itself never created, that the activity of creation is that which has always existed and
which requires no explanation other than itself. On the other hand, however, it seems that
this fundamental process of creation cannot be a unified process, for otherwise it is an act
which could have been consummated instantaneously, once for all. Temporality must
therefore be a process of recapitulation of timeless reality with Reality itself as the
medium through which the recapitulation is accomplished.



If Reality has a complete description, it is not one which can be constructed, not one
which had any origin in time: it is indeterminacy itself which is ultimately responsible
for the existence of temporality itself. If such a complete description exists it must have
always existed and the description and the reality must be one and the same.
Consciousness offers itself immediately as a likely candidate for such an ultimate reality
since consciousness is its own representation.
If it is true that consciousness is required in order to collapse a quantum mechanical
wavefunction into a determinate eigenstate, then consciousness, if it had an origin in
time, must have arisen out of the interaction of uncollapsed wavefunctions - it must have
arisen out of a situation of infinite energy uncertainty.

The velocity of light is determined by the ratio of real particle energy to virtual particle
energy.
Hence, the elsewhere region of the Minkowski light cone may be identified with that
region of the vacuum which stands in a relation of Bell nonlocality to the observer.

The unity of the conscious mind is no doubt owing to Bell-nonlocal connectivity of
neurons within the brain.

If it is true that there has always been something (as in the metaphysical question, "why is
there something rather than nothing"), out of which the Universe arose, assuming that this
something is not just the Universe itself, then there must not be, ultimately speaking, a
universal or all encompassing, ongoing process of either evolution or degradation in the
development of Reality. This is because by this present time an infinite amount of time
must have already passed so that Reality should have either degraded into utter
nothingness or reached a state of eternally unchanging perfection and we do not observe
the Universe to presently occupy either of these two extreme states: temporality could
not exist within a universe which derives its existence from a ground of unchanging
perfection (fullness of being) nor could the universe derive its existence from a ground of
nothingness (complete degradation). We now arrive at the conclusion that Reality as a
whole is neither evolving (increasing in complexity) nor is it devolving (decreasing in
complexity) so that any apparent changes in complexity in the Universe, e.g., biological
evolution, increasing entropy, are merely local changes which are on the whole
compensated by opposing processes elsewhere.




We may think of causal relationships as obtaining between terms or entities occupying a
plane of equal levels of abstraction with the process of abstraction itself and its opposing
process, that of concretion, being processes which do not admit of an exhaustively causal
analysis.

If it is only possible to alter the boundary conditions and initial conditions which the
dynamism of Nature is subject to, but not to alter in any way the dynamism itself, then
the most advanced technologies will amount to nothing more than having discovered how
to goad Nature into doing, in the best way she knows how, what she has all along been in
the very act of doing.

It is the possibility of formulating recursive propositions and this possibility alone which
allows the domain of true theorems, expressible within the language of a deductive
system, to transcend in magnitude the domain of provable theorems, i.e., theorems which
may be deduced from the axioms of the system through application of the rules of
inference of the system.

There is no comprehensive rule by which a computing device may recognize the
appearance of a recursive proposition since recursiveness is a structure which can only be
suggested; it cannot be literally stated.
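The claim that no device can mechanically recognize recursive (self-referential) structure parallels the undecidability of the halting problem. A minimal diagonal sketch, in which `decides_halting` is a hypothetical oracle that cannot actually be implemented:

```python
# Diagonal construction: assume a hypothetical total oracle decides_halting(f)
# that returns True iff calling f() would halt. make_contrary defeats any
# such oracle by doing the opposite of whatever the oracle predicts about it.
def make_contrary(decides_halting):
    def contrary():
        if decides_halting(contrary):
            while True:          # oracle said "halts" -> loop forever
                pass
        return "halted"          # oracle said "loops" -> halt immediately
    return contrary

# Any concrete candidate oracle is refuted. A toy oracle that always answers
# False ("loops forever") is immediately contradicted: contrary() then halts.
toy_oracle = lambda f: False
print(make_contrary(toy_oracle)())
```

The self-reference here (the function consulting the oracle about itself) is exactly the recursive structure that, as the passage notes, can only be exhibited, not captured by a comprehensive rule.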


June 1998
The recursiveness of consciousness is apparent in the fact that the dying away of
representations within consciousness takes place in absolute simultaneity with the
bringing into being of new representational contents to consciousness. In fact, these two
processes, that of the passing away and the coming into being of consciousness'
transitory representations, indeed, are one and the very same process!

Unlike representations within consciousness, in which there necessarily exists a figure/ground
structure, with only figure being represented and ground supporting this figure beneath
the level of awareness, in the dynamic flow of the representations of consciousness,
figure and ground coexist on an equal footing as a unified phenomenal flux. We are,
however, only using the terms, figure and ground, in the metaphorical sense of becoming
as figure, or figuring, and fading, if you will, as ground, or, more aptly, grounding, i.e.,
grounding of figuring. But insofar as this metaphor of figure and ground applies to the
changing representations of consciousness, to this extent we shall say that the process of
the simultaneous becoming and fading of these representations is a process which is not
enframed. It is a process which takes place ultimately always outside. The contents of
consciousness always come into being not from elsewhere within itself, but always from
outside itself - from otherness or from alterity, but never from within alterity. For there is
no within-ness for the unconditioned, the unbounded, the indeterminate. The
indeterminate does not specify anything of itself as a coordinated totality. The contraction
of the infinite upon finiteness is itself a transfinite or transcendental process. Now a
transcendental being is itself an infinite determination. The outsideness of alterity
transcends the category of spatiality. For this variety of outsideness is not simply
coextensive with the outsideness of all other determinations. This holds for a particular
existent among a finite set of existents within a preestablished spacetime. In short, the
outsideness of alterity is intrinsic or innate, that is, it is not determined outside itself.
So alterity possesses its own alterity and this implies the givenness of other alterities.
This is why an object determined in space and time is not other in the same way that
another being like myself is other. The becoming of a transcendental being is
nontemporal as it is not reducible to a determination of substance. The being and
becoming of alterity is what constitutes spatiotemporality.

All baryons are composed of various combinations of three different quarks out of the six
possible different types of quark; the mesons, however, are each composed of a
quark-antiquark pair drawn from the six fundamental quark types. The fundamental force
responsible for binding together the various quark combinations out of which all baryons
and mesons are composed possesses the bizarre property of increasing in strength as the
distance separating the quarks increases. The important result of this is that it is
impossible to fragment these quark laden particles into their fundamental constituent
quarks: it is impossible to produce a "free quark," in other words. This is almost as
though the quark does not possess a determinate energy structure except as it exists
within groups of two, three or possibly larger numbers of quarks. The quark may be an
example of an entity whose identity is totally context dependent with the quark itself,
paradoxically, providing the context for its own determination as a substantive entity.
Such an entity might not possess a definite energy equivalence in the sense that it is not
possible to determine the quark's energy independently of the energies associated with
the force fields the particle produces. An entity such as the quark which is defined in
terms of the combined effect that it has upon itself (and one or more duplicates of itself )
provides us with the best example to date of what might be called a self-existent entity.
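The confinement behavior described above can be sketched numerically with the phenomenological Cornell potential; the coupling and string-tension values below are illustrative assumptions of mine, not figures taken from the text.

```python
# Toy illustration of quark confinement via the Cornell potential
#   V(r) = -(4/3) * alpha_s * (hbar*c) / r + kappa * r
# Units: energy in GeV, separation r in fm. Parameter values are illustrative.

HBARC = 0.1973   # GeV * fm
ALPHA_S = 0.3    # assumed strong coupling (illustrative)
KAPPA = 0.9      # assumed string tension, GeV/fm (typical quoted scale)

def cornell_potential(r_fm):
    """Quark-antiquark potential energy (GeV) at separation r_fm (fm)."""
    return -(4.0 / 3.0) * ALPHA_S * HBARC / r_fm + KAPPA * r_fm

# The potential rises without bound as the quarks are pulled apart, so it is
# always energetically cheaper to create a new quark pair than to free a quark:
energies = [cornell_potential(r) for r in (0.2, 0.5, 1.0, 2.0, 5.0)]
assert all(b > a for a, b in zip(energies, energies[1:]))
```

The monotonic rise of the linear term is the whole point: there is no separation at which the binding "lets go," which is why no free quark is ever produced.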

Quantum nonlocality effects could govern the superluminal transmission of information
between various widely separated locations within the brain's cortex without producing a
verifiable violation of special relativity's restriction on superluminal transmission of
information between separate minds. This would be possible if the very same nonlocality
effects are required for the integration of the individual's consciousness as a whole. The
idea or flash of insight would then be time-delayed by the time interval necessarily
involved in conveying this idea from the location in the individual's brain where the
crucial integration occurred to those parts of his nervous system which pick up this message and
in turn translate it into classically described nerve impulses then responsible for the
ultimate series of muscle cell contractions required to transmit the message to the
external world of attending observers. Another way of looking at this kind of nonlocality
is for the nonlocalized quantum effects to be confined to a vacuum energy/information
reservoir, exhaustively connected with itself nonlocally, which is continually tapped into
by the neural network of the person's brain.

September 2011
It is intriguing to consider that consciousness (implementing a will capable of
"startling" the quantum vacuum in its instinctive ambition to marshal its multifarious
fluctuations into a giant and useless standing wave pattern) is informed by brain
functions likely constituted out of the very superluminal signals that are intersubjectively
forbidden by special relativity. Consciousness acts to buffer these forbidden information
signals. Time paradoxes can be harmlessly dissipated if the would-be causally
contradictory information generated by processes potentially leading to them is
"vented" to higher-dimensional time. Libet's subjective "time bubble" (specious present)
of approximately 500 ms in duration swallows up any possible forbidden signals, since
these presumably can travel no further into the past or future than somewhat less than 250
ms. If freely willed action on the part of an embodied spacetime agent, in the sense of his
possessing the powers and authority to initiate causal chains, presupposes conscious
intention to act, while consciousness is incapable of "bootstrapping" itself into an
intentional state in real time, but requires the minimum time to do this which is
presupposed by the brain stimulation experiments of Dr. Libet, then what we appear to
have here is a natural mechanism within the brain enforcing chronology protection. In a
word, if information is always constituted by superluminal signals (because this
necessarily occurs within a unified conscious state), but superluminal signals are never
constituted by information, then chronology protection seems assured. The fact that only
data can be transmitted and received intersubjectively, while information is always
transmitted "infrasubjectively", i.e., is constitutive of subjective conscious states capable
of informing intentions of the free agent to act by initiating a causal chain, is what at
bottom enforces chronology protection. Also, the prevention of cross-talk between
distinct subjective centers of volition, through the judicious administration and parceling
of vacuum signal bandwidth by what may well constitute the equivalent of a "cosmic
FCC", could be described as an equivalent way of describing the chronology protection
mechanism. Or perhaps we are really only attempting to describe the very same
chronology protection mechanism from opposite ends, as it were. Recall that the
initiation of a causal chain itself constitutes a "causality violation". An amusing thought
is that individual consciousnesses may serve a larger "ecological purpose" vis-à-vis
chronology protection within the transhuman context of cosmic spacetime. I am
reminded here of those happy and enthusiastic "scrubbing bubbles" from the ubiquitous
toilet bowl cleaner commercials of the 1990s. It would not surprise me if consciousness
possesses a peculiar action akin to an entropy engine, the fuel for which would be these
superluminal signals that Einstein says cannot exist. Note that the conscious mind, in the
fundamental role conceived for it here, is entirely powerless to stop or prevent the
superluminal propagation of momentum and energy!
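As a toy restatement of the timing argument, using only the 500 ms and 250 ms figures quoted above, the "time bubble" can be modelled as a symmetric window centered on the moment of integration:

```python
# Toy check of the timing claim above: a ~500 ms specious present centered on
# the moment of integration absorbs any signal displaced by less than ~250 ms
# into the past or the future. The numbers are the ones quoted in the text.

SPECIOUS_PRESENT_MS = 500.0

def inside_specious_present(displacement_ms):
    """True if a temporal displacement (signed, in ms) falls within the window."""
    return abs(displacement_ms) < SPECIOUS_PRESENT_MS / 2.0

assert inside_specious_present(249.0)       # just inside the bubble
assert inside_specious_present(-249.0)      # equally true toward the past
assert not inside_specious_present(300.0)   # a larger displacement would escape
```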

There seems to be nothing to strictly forbid the existence of superluminal causal
connections between events which lie outside of any observer's Minkowski light cone.
July 2011
Since the speed of light does not serve as a limit for virtual particle processes, we
might suppose that the elsewhere region, that region outside the light cone, is constituted by
vacuum fluctuations possessing a correlational structure inconsistent with determinism.
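The "elsewhere" region invoked here is the set of spacelike-separated event pairs. A minimal sketch of the standard classification, in units with c = 1 and one spatial dimension:

```python
# Classifying the separation between two events by the Minkowski interval
#   s^2 = (c*dt)^2 - dx^2       (c = 1, one spatial dimension)
# Events outside each other's light cone ("elsewhere") have s^2 < 0.

def interval_squared(dt, dx, c=1.0):
    return (c * dt) ** 2 - dx ** 2

def classify(dt, dx):
    s2 = interval_squared(dt, dx)
    if s2 > 0:
        return "timelike"   # connectable by subluminal signals
    if s2 == 0:
        return "lightlike"  # connectable only at exactly the speed of light
    return "spacelike"      # the "elsewhere" region discussed in the text

assert classify(2.0, 1.0) == "timelike"
assert classify(1.0, 1.0) == "lightlike"
assert classify(1.0, 2.0) == "spacelike"
```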

Since the common sense view alleges that the past is nothing more than a fixed,
crystallized record of what has actually occurred within the present, it follows from this
view then that a present which has not had adequate time within which to resolve its
internal contradictions and ambiguities must retain a certain small degree of freedom
within which change might continue even after this moment becomes past. In this way,
backwards causality would be admissible, if only for the purpose of "cleaning up" the
"loose ends" of still indeterminate entities occupying past times. But doesn't common
sense also define the past in such a manner that it does not actually become past as such
until such a point as this crystallization process is complete.
In other words, common sense defines the past in such a way that the time axis is
necessarily orthogonal to the three dimensions of space and this at every spatial scale,
however small; it defines the past in such a manner that there is no substantive distinction
between past, present and future, which is to say, it defines the past as a half-infinite time
interval with its later endpoint being the present moment whose status as present must be
completely arbitrary. Within a deterministic time continuum there is no nonarbitrary
basis upon which any particular moment along the continuum may be chosen over other
contenders as being actually present.

A natural lawlike system of relationships which govern the behavior of a given entity
may only be formulated provided that this entity is not unique (provided that multiple
duplicates of the entity exist and are available), as in the case of subatomic particles, e.g.,
an electron. The quest for the "theory of everything" is therefore doomed to ultimate
failure, since what we call "everything" is necessarily unique, and this uniqueness
prevents us from separating those "variables" which are particular to the thing itself from
those which owe in part to our investigatory involvement with this thing. The self, in the
act of investigating ultimate reality, must be included within the dynamic of the reality
for which we are seeking a complete description. This inherent recursiveness which lies
at the heart of any earnest attempt to develop a complete description of reality is alone
responsible for the fact that the domain of truth necessarily transcends the sum of
knowledge comprising any point of view (of reality).

Quantum Mechanics tells us that a closed dynamical system may only undergo temporal
evolution provided that a certain energy uncertainty exists within the system. This
energy uncertainty is just the standard deviation of the energy about its mean or
expectation value. This energy uncertainty may be interpreted in terms of a time-average
sum of random energy perturbations to the system "from outside" the system. These
random energy perturbations manifest themselves in the form of energy exchanges
between the quantum mechanical system and the sea of virtual particles in which this
system is embedded. The interaction of these virtual particles with the quantum
mechanical system is responsible for virtual transitions of the quantum state of the
system to other quantum states. The only real energy transitions available to the quantum
mechanical (dynamical) system are those from amongst the set of virtual energy
transitions which are continually occurring within the time interval specified by the
system's time uncertainty. The density of this virtual particle energy sea has a direct
bearing upon the rate of temporal evolution of any given quantum mechanical system.
Our central hypothesis is that the presence of matter has a perturbing effect upon this
virtual particle energy sea, i.e., the quantum vacuum field, and this perturbing effect is,
namely, to decrease the overall density of this vacuum energy which results in a similar
decrease in the time rate of change of all physical processes within the spatial volume
occupied by this matter. This proposed vacuum mechanism is closely analogous to the
mechanism by which a quantum resonant cavity decreases the rate of spontaneous
emission of 'cavity-detuned' photons by a Rydberg excited atom. The resonant cavity
achieves this by excluding most of the photons of half-wavelength larger than the cavity
diameter: to wit, it does this by decreasing the energy density of vacuum electromagnetic
field fluctuations of roughly the same energy as that of the suppressed atomic energy
transitions.
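The energy-time relation underlying this paragraph can be made concrete. The sketch below uses the Mandelstam-Tamm form Δt ≥ ħ/(2ΔE), with the caveat that factor conventions vary; the ΔE values are arbitrary examples:

```python
# Numerical sketch of the energy-time relation: a system whose energy has
# standard deviation dE needs at least dt ~ hbar / (2*dE) to evolve to a
# distinguishable state (Mandelstam-Tamm form; conventions for the factor vary).

HBAR = 1.054571817e-34  # J*s (CODATA value)

def min_evolution_time(delta_E_joules):
    """Minimum time (s) for temporal evolution given energy uncertainty dE (J)."""
    return HBAR / (2.0 * delta_E_joules)

# A smaller energy uncertainty implies slower evolution; in the limit dE -> 0
# (a strict energy eigenstate, i.e. a truly closed system) nothing evolves at all:
t_broad = min_evolution_time(1.0e-20)   # larger energy spread
t_narrow = min_evolution_time(1.0e-22)  # smaller energy spread
assert t_narrow > t_broad
```

This is the quantitative content behind the claim that a closed dynamical system with zero energy uncertainty cannot undergo temporal evolution.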



Given this inherent circularity in the nature of technological growth, it follows that the
ultimate constituents of the World must be wholly recursive: they must be their own
symbolic representations.
If a "conscious computer" is ever developed in what will undoubtedly be the far distant
future, this mysterious property of such a device will in no way stem solely from the
design or blueprint by which its engineers will have conceived its construction; the
blueprint will, of course, be a necessary component in the realization of such a conscious
machine, but will have been derived from a necessarily somewhat fortuitous discovery,
owing to much trial and error investigation, of the "correct" hardware interface of the
device with the recursive, self-organizing matrix of quantum - vacuum energy
fluctuations which underpin and mediate the stability of all fundamental particles and
their exchange forces. Only in appropriate dynamic conjunction with this fluctuating
energy matrix will any realization of a hardware design possess the topological energy
structure sufficient to tap the pre-existing "consciousness potential" of spacetime. In
other words, it is only the grace of Reality's fundamental dynamism which will permit the
eventual construction of a so-called conscious computing device. This empirical
discovery of the correct interface design will manifest itself perhaps during a testing
phase where a long series of simulated sensory inputs, of increasing complexity, are in
the process of being fed into the device while its output signals (responses) are being
closely monitored. The memory medium of the device will begin to accumulate stored or
remembered inputs which it has never received through its various sensory apparatus.
Identical sets or series of inputs will produce significantly different series of outputs both
from an individual machine over time as well as from different machines at the same time
- even if these machines possess identical designs. Occasionally, radically different
series or sets of inputs will produce identical sets of outputs. A significant portion of the
functional relationship between output and input will depend upon changes in energy in
the internal components of the machine's hardware which are, themselves, smaller than
the overall quantum energy uncertainty of the device as a whole. Moreover, no mutually
orthogonal set of eigenfunctions will describe the "functioning components" of the
device. This is why we have been saying that the abstract spatial structure of our
hypothetical computing device is non-topological. Clearly, any realization of a static
blueprint for a computing device, regardless of how complex, in the form of a dynamically
functioning prototype, will itself be merely a topological transformation of the blueprint
from 2 or perhaps 3 spatial dimensions to 4 spatial dimensions, rather than the non-
topological transformation from 3 spatial dimensions to the 4 dimensions of 3-space and
1 time. This is because the state space of the transcribed structure, i.e., the design, can be
adequately described in spatial terms.
In a very real sense, an object may not be thought to possess an internality unless it
possesses a genuine "outside" in the sense of a radically open system - a system which
cannot be contained within a closed system; a system is "closed" only if it is finite and
neither receives nor transmits energy to or from any system except itself. Such a closed
system possesses no "outside."


QZ Contingent uniqueness versus necessary uniqueness. The size of the Universe and
the black hole mass limit as important parameters in determining the density of the
quantum vacuum energy.




The asymmetrical nature of time perhaps has some bearing on the hierarchical structuring
of complex macromolecules. The fact that a molecule has been formed from a set of
simpler constituents does not guarantee that it can then be decomposed back into this set
of constituents. Similarly, the fact that a molecule has been broken down into a set of
simpler constituents does not guarantee that it can be recomposed from this selfsame set
of constituents. Perhaps the asymmetrical nature of temporality implies that any
sufficiently large set of macromolecules may be partitioned into two disjoint parts; those
molecules possessing a bottom - up structure and those possessing a top - down structure.
This distinction which I am drawing is not a solid theoretical one; it is a pragmatic
distinction which assumes the status of a theoretical distinction when we refer to
molecules occupying either extreme end of the probability spectrum (in terms of their
ability to form "naturally" from simpler parts).

Will may only be defined in terms of a "rational" order foreign to itself which it resists or
subverts. Will is the external manifestation of consciousness. Will is a principle of
incommensurate causation. The set of lawlike relations which may be said to govern
Will's operation are unique and irreproducible. Rational order is simply that order which
can be made to manifest its lawlike relations in a reproducible manner.


There is no need to invoke a temporal description of this state space - the only reason one
would attempt it is because we project our genuine temporality onto the mind's eye
realization of the computing device in its act of "functioning." Henri Bergson, in his
essay, Time in the History of Western Philosophy, complained of a confusion which
inevitably cropped up whenever metaphysicians attempted to analyze the problem of
motion. With a kind of gentle contempt he described the frustration of these philosophers
in trying to reconstruct linear motion from infinite series of "immobilities", i.e., fixed
points of infinitesimal length. He explained their failure as being due to their ignorance
of the nature of a linear interval as a mere projection of "mobility onto immobility." This
projection, naturally as such, does not capture the whole phenomenon, but merely a point
of view with respect to it out of an infinity of equally valid points of view, and so from a
single projection, or even a finite number of projections, one is never permitted to
reconstruct the original object as it is.

There are possible boundary conditions which might be easily placed upon the dynamic
of the "flux" which are nonetheless infinitely improbable as "natural" occurrences, which
is to say that the operation of intelligence is required to institute them.



It is these seemingly magical self - organizing properties of matter, owing to the
recursiveness of its ultimate "constituents," which make any attempt to calculate the
"improbability" of biological macromolecules an incoherent and meaningless enterprise.
Similar activities are the routine pastime of myriad scientifically inclined creationists
attacking evolution. The staggeringly large numerical ratios which they cite against the
"chance occurrence" of the simplest eukaryote DNA are calculated upon a permutational
/ combinational modeling of a prebiotic "soup" in which chance collisions continually
occur between precursor molecules, e.g., peptides, amino acids, nucleic acids, etc. The
serious problem with such a modeling approach is that it is not an empirically derived
statistical calculation as in actuarial computations, where a distinct probability is assigned
to each possible event within a pool, based on the observed relative frequencies of each
event, but is an abstract calculation where the probabilities are fixed at the outset and
remain unchanged throughout all series of calculations. For example, there are a vast
number of nucleic acid reactions taking place within the ribosome organelle of every
living animal cell which in the absence of certain mediating enzymes will take place
anywhere from 9 to 20 orders of magnitude more slowly than if these enzymes are
present - the ribosome is responsible for "translating" the coded information of nucleic
acids into various macromolecules (proteins) and in so doing expressing the genotype of
the developing organism. We see from this example that the probability of the
occurrence of various macromolecules essential to the appearance of the first
reproducing, metabolizing organic units begins infinitesimally small when the molecule's
precursors are yet unavailable, but that this probability grows by large discontinuous
jumps each time one of these precursors, the precursors' precursors, etc. arise
inadvertently in the prebiotic "soup" so that by the time the exhaustive set of
macromolecular precursors is present, the formation of the original macromolecule is
virtually assured. The ribosome itself, despite its inordinately complex structure, has
been observed under experimental conditions to reform spontaneously after having been
dissociated within a carefully prepared enzyme bath into its precursor polynucleotide
constituents - and this within the space of only several hours! It is indeed true that a
countless variety of different enzymes (of the precisely correct type) must be
simultaneously present along with the polynucleotide matrix for this seemingly magical
act of spontaneous self - organization to occur. This is because the self - organization of
such an enormously complex organic structure depends, throughout the entire process,
upon the almost simultaneous juxtaposition (collision is a better term) of three or more
precursor molecules which all happen to have the exactly correct spatial orientation, with
sufficient kinetic energy to overcome the activation energy barrier against the reaction
occurring. It should be noted here that just the chance of any three compatible molecules
( in terms of a desired product ) colliding at once with the roughly correct spatial
orientation is an event with a probability only infinitesimally greater than zero - let alone
the question of proper activation energies. And so, even if the primordial Earth possessed
the appropriate reducing atmosphere, with oceans chock-full of all of the required
precursor molecules for the ribosome to properly form, without the necessary catalyzing
cofactors (the enzymes) there would not likely have formed a single such structure by
this late epoch. Then perhaps there must have been an extremely long period of time
during which the necessary enzymes appeared on the scene, one might think. One
suspects, then, a similar self - organizing process behind the formation of these necessary
enzymes, the only difference being that the precursors which we are now concerned with
are simpler, while their precursors must have been simpler still, and so on. But the
precursor macromolecules for many particular enzymes have, indeed, never been
manufactured (because we don't know how to do it), but have to be obtained from more
complex molecules than themselves, if not from living or recently living organisms. The
theory of evolution, chemical evolution in this case, has secretly conditioned us to believe
that there must be some definite if inordinately complex sequence: precursors +
cofactors → simpler precursors + cofactors → etc., leading back to the very simplest organic
molecules which form by self-organization spontaneously and easily in a wide variety of
environments and without the need for cofactors or helper molecules of any kind, and
that it must have been such a process (only in reverse) which ultimately led to the first
self-reproducing biological units, which could then be developed further through
Darwinian natural selection.
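The claim above, that such probabilities jump discontinuously as precursors appear rather than remaining fixed at the outset, can be caricatured in a toy model. Every number below is invented purely for illustration:

```python
# Toy model of the argument in the text: the probability of forming a target
# macromolecule is near zero with no precursors available, jumps by large
# factors as each precursor arises, and approaches certainty once all are
# present. All parameter values are invented for illustration only.

def formation_probability(n_present, n_needed,
                          p_spontaneous=1e-30, p_with_all=0.99):
    """Crude interpolation between 'no precursors' and 'all precursors'."""
    if n_needed == 0:
        return p_with_all
    fraction = n_present / n_needed
    # each new precursor multiplies the odds by a large discontinuous factor
    return min(p_with_all,
               p_spontaneous * (p_with_all / p_spontaneous) ** fraction)

probs = [formation_probability(k, 10) for k in range(11)]
assert probs[0] < 1e-29          # effectively impossible from scratch
assert probs[10] > 0.9           # virtually assured once all precursors exist
assert all(b > a for a, b in zip(probs, probs[1:]))  # monotone upward jumps
```

The contrast with the creationist calculation criticized in the text is that the probabilities here are conditional on the evolving state of the soup, not frozen at their initial values.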
The notion of self - organization gives some of us pause because it concerns a natural
process which sits precisely on the fence between what might be called less - than - self -
organization, i.e., formation from simpler components, and what is aptly called greater -
than - self - organization, i.e., formation from more complex components - and it is just
such a notion which strongly suggests a top - down hierarchy within the natural order
which can only have intelligence at its apex. At every step in the chain in the formation
of higher level precursor molecules, the mediation of the required reactions is
accomplished via self - organization principles: those who attempt to calculate the
probability against "chance" formation of important precursor molecules forget a very
important general principle first articulated by the great rationalist philosopher Leibniz:
that the set of conditions which in combination suffice to produce some
complex structure must necessarily remain in operation at every succeeding moment to
sustain the existence of this structure. The upshot of this is that a complex structure
which owes its origin to mere chance cannot endure, still less could it respond to its
environment in an adaptive fashion.

To bend Nature toward our intentions it is merely necessary for us to block all of those
tendencies except the one paralleling our own.

It is in the very nature of recursion not to be derivable from anything but recursion itself.
Therefore, if a recursion or recursive structure has a beginning in time it is complex, but
may only be analyzed in terms of other "simpler" recursive structures. These simpler
components of the recursive structure are themselves approximations to the larger
structure which they form in combination with one another.

The behavior of self - organizing systems cannot ultimately be explained in terms of
principles of organization which obtain externally, which is to say, such dynamic
structures will not submit to a reductionistic analysis.

The distinction between the "mental" and the "physical" may be drawn in the following
way: both are wholes composed of parts, both possess principles of organization, but
what is termed a physical whole is defined exclusively in terms of its constituent parts
while the "parts" which "compose" what is termed a mental whole are, themselves,
defined in terms of the whole which they compose. The reconstruction of a mental whole
must be guided in a top - down manner whereas the construction of a physical whole
must be guided in a bottom - up manner. The principle of a mental whole must exist
prior to its actual realization ( in terms of whatever substance).


Without substance change is not possible. Coextensive with this principle is: change
owes itself to a lack of determination, to a deficit of Being, to negation. From this it at
once follows that substance, rather than being the seat of being, of thinghood, as common
sense conceives it to be, owes its existence, on the contrary, to a lack of being. It is not
possible for a determinate thing to be made up out of substance insofar as this thing
possesses determination.


It is easy enough to see that continuity is required for the subsistence of what is called
historical time which we will henceforth refer to as temporality. Indeterminate substance
is the only basis for the continuity underlying all change.



The science of Physics, at least before the development of relativistic quantum field
theory in the 1940s, imagined that there might be such things as ultimate material
constituents - elementary particles - out of which all material bodies would be composed.
The implicit Metaphysics behind the notion of elementary particles is that of substance.


There is no such thing as nothing. Nothing, by its very nature, is a nonexistent entity: it
is its own negation. We might be tempted to say then that "something," being the
opposite of nothing, must exist. But not just any "something" constitutes the opposite or
negation of nothing/nothingness, but only a something which is, itself, the unity of all
that might possibly exist, and the very essence of which is to exist. In other words,
nothing, not being possible because it contains within itself its own negation, implies that
there must have always been something (or other). But the only guarantee that there has
always been something is the existence of something which contains within itself its own
affirmation, if you will, the reason for its own existence. A fundamental and most
general property of a thing which contains within itself the reason for its own existence is
that of recursion, something which is defined solely in terms of itself, a recursive
structure. There are logical grounds for believing that there can be only one recursive
structure, that there can be only one self-existent entity - with this entity being the
"ground" for existence of all other entities. A recursive structure, if it may be thought to
be composite, would be composed of parts which are totally interdependent upon one
another; no part is the cause of any other without also being itself caused by this other
part and so if this recursive structure had a beginning in time, it must have been given
existence through a pre-existing, broader and more complex recursive structure. We see
now that a given finite recursive structure comes into existence through a process of
uncovering or abstraction from a more complex whole - through a process of negation.
We are reminded of Michelangelo's claim that a truly great work of sculpture already
exists within its marble block and that all he did in order to produce his works was
merely to free them from the marble in which they were imprisoned.
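As an analogy of my own, not the author's, a thing "defined solely in terms of itself" can be modelled computationally as the fixed point of a self-application; the Z combinator below builds recursion out of nothing but self-reference:

```python
# A computational analogue of a recursive, self-defining structure: the strict
# (Z) fixed-point combinator. Z(f) returns a function g satisfying g = f(g),
# i.e. a function defined purely in terms of its own self-application.

def Z(f):
    """Strict fixed-point combinator for one-argument functions."""
    return (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A factorial that never names itself: the recursion arises only from the
# self-applicative structure of Z, not from any external reference.
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
assert fact(5) == 120
```

The point of the analogy is structural: the "definition" of `fact` contains no appeal to anything outside the self-application itself, mirroring the text's notion of a self-existent, recursively grounded entity.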
All simpler recursive structures come into being through a kind of figure-ground
transformation, through a change in perspective. This reminds us of Leibniz' monads,
each of which are only different perspectives on an eternally pre-existent whole, with
each monad defined in terms of, or dependent upon, all of the other monads making up
the whole. This exhaustive interdependence is what Leibniz refers to as the
preestablished harmony between monads. Again, a recursion may only be defined in
terms of other recursions like itself. Consciousness is an example of an inherently
recursive structure as it is its own symbolic representation (of itself to itself).
Consciousness's objectivity is exactly coextensive with its subjectivity; this is simply a
restatement of Bishop Berkeley's principle, esse est percipi, i.e., to be is to be perceived.
And since consciousness necessarily experiences itself as a unity and never as a multiplicity,
the objective reality of any multiplicity of consciousnesses could only exist as a subjective
reality within a larger individual consciousness (itself a unity), and so cannot really be a
multiplicity of consciousnesses at all. This argument against the multiplicity of
consciousness was succinctly stated by the physicist Erwin Schrödinger in his short
seminal work, Mind and Matter. It follows that since consciousness cannot experience
itself as a multiplicity, it therefore cannot exist objectively as a multiplicity:
consequently there can only be one consciousness.
Both the American psychologist William James and the French philosopher Henri
Bergson denied that the brain was itself productive of consciousness, holding instead that the
brain was merely a kind of reducing valve (Bergson) which filtered and reduced down the
cosmic consciousness (James) to a vastly simpler form containing roughly the level of
information-handling capacity (intelligence) which the human organism would need to
adapt to the limited challenges of an earthly environment. In fact, if we view the brain as
being productive of consciousness rather than as merely structuring a pre-existing
consciousness, then there seems no compelling reason to believe in a so-called infinite
consciousness. However, if the brain is viewed as not productive of thought, but as merely
providing a complex set of constraining boundary conditions upon some pre-existent
consciousness field, equated, say, with the quantum vacuum energy field, then the case
for a universal infinite consciousness does become rather compelling. The novelist and
author of Brave New World (1932), Aldous Huxley, discussed this view of
consciousness in his popular treatise, The Doors of Perception - Heaven and Hell. In this
work Huxley describes the effects of the psychoactive (psychedelic) drugs mescaline and
LSD, with which he experimented towards the latter part of his life in an attempt to trigger
or facilitate the mystical experience of enlightenment, in which he had an almost
lifelong interest. Huxley explained the effects of these substances in terms of the
James-Bergson theory of consciousness: the experience of self-transcendence and
transfiguration which Huxley experienced on mescaline was for him due to the drug's
disabling effect upon the cosmic reducing valve. The brain under the influence of
mescaline is no longer able to filter out the thoughts and intuitions irrelevant to earthly
life (because appropriate to the upwardly contiguous levels of consciousness) - hence the
experience of a vastly larger mental space. This type of explanation would have been
readily understandable to the ancient Greek philosopher Socrates who viewed learning
(through experience or education) as simply a mechanism by which latent knowledge is
recollected.
The theory of entropic processes tells us that energy and information are interdefinable
and this fact in conjunction with the principle of energy conservation suggests that
information, like energy, may neither be created nor destroyed: the "creation" of a
quantity of information is really constituted by its being abstracted from the preexisting
integral system of information defining it.
Like a piece broken off from a holographic emulsion, there is a necessary trade - off
involved in this abstraction process: the "newly created" entity only acquires a relative
independence from the whole in which it eternally preexisted (and which defined its true
being) by losing some of its knowledge of itself. There is a direct analogy with this
process in the creation of "new" subatomic particles through the collision of particles
within a linear accelerator.
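A standard, if partial, quantitative anchor for the energy-information link asserted above is Landauer's principle, which prices the erasure of one bit of information; it is offered here as a related textbook result, not as the author's own formulation:

```python
# Landauer's principle: erasing one bit of information at temperature T
# dissipates at least E = k_B * T * ln(2) of energy, tying information to
# energy quantitatively.

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_limit(temperature_kelvin):
    """Minimum energy (J) dissipated when one bit is erased at temperature T."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K) the cost per bit is about 2.87e-21 J:
e_room = landauer_limit(300.0)
assert 2.8e-21 < e_room < 2.9e-21
```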
In the first couple of decades after the first "atom-smashing" experiments performed
with the primitive particle accelerators of the 1930s, it had been supposed that the
particle products of these violent collisions were actually pieces of the colliding particles
which had been jarred loose by the sudden impulsive force of their slamming together.
But soon after this early period the kinetic energies of the particles going into these
accelerator collisions began to significantly exceed the combined mass-energy of the
particles which themselves initiated the reaction, with the result that the end product of
these collisions was a set of particles with individual member particles possessing a mass
greater than the combined mass of the particles originally participating in the collision.
The common sense "broken pieces" explanation of accelerator products now had to be
modified in some way or rejected outright. Two alternative interpretations of this "mass
paradox" were suggested by particle theorists: either the product particles were created
from the excitation of the vacuum by the kinetic energy of the collision with the "input"
particles serving as the points of application of the excitation energy, or they were really
inside the initial particles all along but the excess mass-energy was being exactly
balanced by an equal and opposite negative energy due to the internal binding forces
holding the particles together.
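The modern resolution of this "mass paradox" is that only the total four-momentum is conserved: the invariant mass of the final state grows with collision energy. A small sketch in natural units (c = 1; the 10 GeV beam energy is an arbitrary illustrative value):

```python
from math import sqrt

# Natural units (c = 1): a particle is (E, p) with m^2 = E^2 - p^2.
# Two protons (m ~ 0.938 GeV) collide head-on with large kinetic energy;
# the invariant mass available to the products exceeds the summed rest
# masses of the particles that went in.
M_PROTON = 0.938  # GeV

def invariant_mass(particles):
    """Invariant mass of a set of (E, p) pairs moving along one axis."""
    E = sum(e for e, p in particles)
    p = sum(p for e, p in particles)
    return sqrt(E * E - p * p)

E_beam = 10.0  # GeV per beam, illustrative
p_beam = sqrt(E_beam**2 - M_PROTON**2)
# Head-on collision: momenta cancel, all the energy is available.
m_final = invariant_mass([(E_beam, p_beam), (E_beam, -p_beam)])
print(f"{m_final:.2f} GeV")  # 20.00 GeV >> 2 * 0.938 GeV
```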


Thanks for the response. An energy eigenstate is an abstraction in the sense that only a
closed system can be in an energy eigenstate, but thermodynamically this is not possible
because of the fact that vacuum fluctuations in momentum-energy cannot be screened
(kinda like gravity). Some modes can be screened of course and the Casimir Effect is an
example of this. But in this case only momentum fluctuations, virtual photons, are being
suppressed here. Virtual electron/positron pairs are not suppressed, in fact, the
probability of the creation/annihilation of these pairs is actually enhanced between the
Casimir plates. We may think of virtual fermion/antifermion creation-annihilation events
as energy fluctuations, collectively of spin 0 and the photon creation-annihilations as
momentum fluctuations of spin 1. Together, the spin 0 energy fluctuations and the spin 1
momentum fluctuations may be considered to be a fluctuating momentum-energy four
vector of expectation value for T(i,k) (momentum-energy tensor) = 0 and this is part of
the reason that the vacuum does not gravitate, I believe. Getting back to the main point,
as long as there is a fluctuation component to the Hamiltonian, which cannot be
completely removed through supplied boundary conditions, the system will never exist in
a true energy eigenstate and will be forced to temporally evolve due to the exchange of
momentum energy between the system and its fluctuation Hamiltonian (the vacuum
fluctuations). (April 2011 addition: reevaluate the above in terms of experimental
investigations of the Scharnhorst effect)

Regards,
Russell

P.S., see my post, "What is gravity?" Use dejanews power search.
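For reference alongside the Casimir discussion in the post above: the standard idealized result for the attraction between perfectly conducting parallel plates is P = π²ħc/(240 d⁴). A quick numerical sketch (the 1 μm separation is illustrative):

```python
from math import pi

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def casimir_pressure(d: float) -> float:
    """Attractive pressure (Pa) between ideal parallel plates a distance d (m) apart."""
    return pi**2 * HBAR * C / (240.0 * d**4)

# For plates 1 micron apart:
print(f"{casimir_pressure(1e-6):.3e} Pa")  # ~1.3e-3 Pa
```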

I see two electrical cords coming into my computer. One of these is the power cord and
the other is the data cord. We may think of the first cord as a conduit of the substance of
what is simulated on the screen of my computer and the second, of the form which this
substance shall be induced to assume at any given time. Data versus information:
passive versus active channeling of the chaotically dynamic substance.



Downloading versus moving of files from one directory to another.

Beginning of Physics Notes

--------------------
web: update www.driveway.com with new material highlighted in blue.

Do nonlocally connected observables commute? (April 2011: If they are compatible
observables, otherwise, not) What about spacelike separated ones? (April 2011: Yes,
unless they are nonlocally connected)
An adiabatic change in a thermal system is one in which there is no exchange of heat
(energy, really) between the system and its outside. Now on account of exchanges of
energy continually taking place between the system and the quantum vacuum, no thermal
(or quantum mechanical, for that matter) system can really undergo changes without
some exchange of energy with the system's outside. But cannot a large component of
quantum vacuum energy fluctuations interacting with such a system be just that necessary
to reconstitute the system from one moment to the next? How can those quantities of
energy be understood to alter the net entropy of the system? Is the system entropy
changed, then, by just those fluctuations of vacuum energy that are coupled to the internal
exchanges of 3-momentum within the system responsible for the system's binding
together of itself, and hence, also of its inertia (gravitation)? There are two distinctly
different possible bases for novelty. These are: changes in vacuum energy interacting
with a quantum system, which cannot be anticipated by the system and, changes in the
state of the system, in terms of its internal energies, which cannot be anticipated by the
vacuum itself. Interactions taking place internally to the system fail to be anticipated by
a given vacuum (heretofore referred to as the vacuum) on account of these interactions
being mediated via an altogether different vacuum state, unbeknownst to the first
vacuum state. This amounts to a more general statement of the problem of wavefunction
collapse being triggered by unanticipated internal interactions of a quantum system – one
which subsumes the distinction, heretofore noted, that between interactions of the
vacuum not anticipated by a system and interactions of the system not anticipated by the
vacuum, both of which, perhaps, then, turn out to be different though equivalent
interpretations of the mechanism of wavefunction collapse.
April 2011: Does the forcing function of a partial differential equation have an
equivalent in terms of time-varying boundary conditions and an absent forcing function?
An adiabatic interaction with a quantum mechanical system does not change the
wavefunction, but may result in the system making a transition between energy
eigenstates.
A closed system is one in which what "takes place" within the system is not and cannot
be communicated to consciousness. Is there a distinction here, that between what, in
fact, is versus what may only potentially be, communicated to a state of consciousness?
These kinds of considerations might help to clarify the connections between the quantum
wavefunction, as representing the most that can be possibly known
(information/consciousness) about the state of the system it describes, and gravitation,
wherein, it has been stated on various sides, that either gravitation or consciousness or
both are relevant to the underlying mechanism of wavefunction collapse, itself
understood to be a nondeterministic phenomenon. Here the "binding problem" of
consciousness, nonlocality and the fact of the general relativistic nonlocality of
gravitational energy (which must be described within this theory by a pseudotensor rather
than by a bona fide tensor) may be found to possess a close connection to one another,
while at the same time a description of the complex structure of the relationship of
information to entropy supplants the heretofore simplistic understanding of this
relationship as being one of mere dual opposition.
Implicate Order as a bed of quantum degeneracy. Nonlocality~correlations~energy
fluctuations. Locality~causality~momentum fluctuations.

Certainly quantum theory allows that the manner in which the subatomic particles shall
interact with one another during a random collision (mutual scattering) must be
significantly different if this event occurs under the watchful eye of some physicist armed
with a high-resolution electron force microscope.

Might nonlocal interactions be partly explicable in terms of a latent >c velocity of
propagation? In the case of the rapidly separating components of the decayed particle,
the state of each component may be overdetermined. The extra information (about the
state of the opposite particle) may be carried comoving with each component by each
particle's local vacuum state.

But in the case of both component particle and comoving vacuum state propagating at the
speed of light, there would seem to be no room for interaction between each component
particle and its locally comoving vacuum state.
April 2011: This is reminiscent of the problem of symmetry breaking for psychophysical
dualism.

Cosmologically, the direction of time was consistent with the direction in which the rate
of decrease of the (matter + vacuum) energy density and the rate of entropy increase were
maximal. This direction would be described by a vector within Euclidean 4-space and
the manner of cosmological decomposition of this 4-space into a 3+1 dimensional
spacetime would have been determined by the global, cosmic distribution of momentum-
energy. The coupling of the dynamics of space to time (and vice versa) must, of course,
be mediated by the nondiagonal elements of the stress-momentum-energy tensor, T_uv.

Fluctuations in the energy of a system due to Heisenberg energy uncertainty introduce
discontinuity into the system, i.e., change the system's topology. The less the system
interacts with vacuum energy fluctuations, the greater is the system's inertia.

All of these distinctions of beings, past vs. present vs. future vs. potential ("potential-in-
the-past," "-future," "-present"), across "uni-verses," within "universes," and all of this
within consciousness that is still to be distinguished from beings outside consciousness,
paradoxically, all of these distinctions of different types may be unified by admitting the
reality of only one "concourse of transcendental otherness" populated by an infinite
number of infinite beings (with perhaps an infinite majority of them abstaining from
self-limitation).

Virtual momentum and energy imply the existence of virtual space and virtual time
(through the Heisenberg uncertainty relation). Virtual stress implies the coupling of
virtual space to virtual time. But independent fluctuations in vacuum stress must
become correlated in order for these fluctuations to grow (relative to a background) and
stabilize at elevated levels, i.e., real stress possessing a net spacetime curvature.
April 2011: This looks like the quantum entanglement basis of induced gravity.
It appears that an initial "seed complement" or quantity of real stress energy must be
supplied before the global spacetime boundary conditions can be favorable for the
production of real stress-momentum-energy. To have a net spacetime curvature, "flat"
or otherwise (positive, negative or some complex, periodic or aperiodic, structuring of
positive and negative curvatures), momentum and energy have to be initially "coupled."
Of course, once a global spacetime structure is already in place, the coupling of
momentum and energy within this continuum or on this manifold is a given. This is
because gravitational waves (or gravitons as spin-2 exchange particles) presuppose such a
coupling of space and time (through the coupling of momentum to energy).
The second law of thermodynamics states that all matter and energy tend toward a
maximally entropic state. If we were to interpret entropy as the complement of
information, then the increase of entropy of a closed system would mean that information
was flowing out of a closed system! Based on arguments supplied elsewhere, a closed
system does not possess temporality, but time is spatialized within such a system.
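The complement reading of entropy and information discussed above can be made concrete in Shannon's terms: for a system with N accessible states, entropy H and negentropy log2(N) − H always sum to log2(N). A sketch (the four-state distributions are illustrative):

```python
from math import log2

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

def negentropy(probs):
    """Information as the complement of entropy relative to the maximum
    log2(N) -- the sense in which entropy + information is a fixed total
    for a system with a fixed number N of accessible states."""
    n = len(probs)
    return log2(n) - shannon_entropy(probs)

uniform = [0.25] * 4             # maximally spread distribution
peaked = [1.0, 0.0, 0.0, 0.0]    # fully determined distribution
print(shannon_entropy(uniform), negentropy(uniform))  # maximal entropy (2 bits), zero information
print(shannon_entropy(peaked), negentropy(peaked))    # zero entropy, maximal information (2 bits)
```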
Discuss the relationships between Penrose‘s argument in favor of a gravitational basis for
decoherence (as well as for wavefunction collapse due to measurements performed by a
conscious observer) and the necessity of the breaching of a closed quantum mechanical
system, i.e., superposition state, producing state vector reduction. A relevant
consideration here is our earlier discussion of information being a (perhaps necessary)
artifact of an open system, an eigenstate of the Hamiltonian being a state of a system with
zero energy uncertainty in which the coupling of the system to the quantum vacuum
(understood as the system's ground of possible change, including both growth and
decay) vanishes. But then we are left with the notion of entropy increasing through
information leaving an open system. But then also, how can entropy be the complement
of information within an open system? The notion of something's being the complement
of something else presupposes another notion of a closed set that can be partitioned into
two disjoint subsets, e.g., X entropy + X information = 0 entropy and 0 information. If
there is a residue of entropy or information within the system (and we still wish to term
the system causally closed), then this residue should be nonlocal, and it should be present
in the form of quantum mechanical, nonlocal correlations.
Also, we are faced with the notion of information "flowing" out of an open system to
somewhere outside. Now two or more closed systems exchanging momentum and
energy with one another implies the existence of a background spacetime (which must
possess some kind of curvature structure), so that in reality these two systems must also
be exchanging some small component of stress, and the momentum and energy
exchanged between them must be partially (if only infinitesimally) coupled. In turn each
system itself must possess a coupling together of its own local momentum and energy.
There is by the way potential relevance of the speed of gravity controversy here. Two
truly isolated systems would only be able to exchange nonlocal correlations with one
another.

Nonadiabatic changes in boundary conditions cause increase in system entropy.

It seems probable that somehow the two distinctly quantum phenomena of wavefunction
"collapse" and nonlocality (in the form of the determination, instantaneously and at a
distance, of the eigenstate of a quantum mechanical system) reflect opposite tendencies
of the vacuum to act (in determining a quantum state) in which the vacuum either has or
has not, respectively, had adequate time to prepare itself for such an act of
determination.

Nonlocality may be explained in terms of the fact that both widely separated components
of a decayed particle, for example, are perceived simultaneously by this same vacuum.
And although the quantum state of either, if determined individually, without knowledge
of the uncertain state of the other particle, must collapse to produce a randomly
determined eigenstate, information nonetheless
exists between the structured uncertainties of each particle of this pair in the form of a
correlation of two random signals.

We may look at causality as not just a propagation of momentum, stress, and energy
along trajectories through space, but also as propagation in the sense of causal influences
progressively enlisting one another to form structures of ever easier detectability, that is,
as propagation upward from smaller to larger, through continua of spatiotemporal scale.

November 12, 2000

Since the action of every operator, ^q, is reducible to the actions of ^a+ and ^a, then
certainly the quantum statistics of the quantum states upon which ^a+ and ^a act are to be
investigated as the likely underpinning of the dynamics of all quantum mechanical
systems.
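The reduction of operators to ^a+ and ^a can be checked concretely in a truncated Fock basis: here the position operator is rebuilt from the ladder matrices, and the canonical commutator [^a, ^a+] = 1 holds everywhere except at the truncation edge. A sketch (the dimension N = 8 is arbitrary):

```python
import numpy as np

# Truncated Fock-space matrices for the annihilation operator a and its
# adjoint a+; observables of the oscillator (here, position x) can be
# rebuilt from them.
N = 8  # truncation dimension, an illustrative choice
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # a |n> = sqrt(n) |n-1>
a_dag = a.T.conj()

# x ~ (a + a+)/sqrt(2) in natural units (hbar = m = omega = 1)
x = (a + a_dag) / np.sqrt(2)

# [a, a+] = 1 holds exactly except in the last row/column, an artifact
# of truncating the infinite-dimensional matrices.
comm = a @ a_dag - a_dag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N)[:-1, :-1]))  # True
```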

What is the relationship, if any, between the mechanism (perhaps this term is a
fundamental misnomer, since normalization is always assumed) of "late time
normalization" and the role of consciousness in the quantum measurement problem?

Can an extremely dense body, e.g., a neutron star, be a black hole in some reference
frames but not in others? If not, then would this constitute a counterexample to
Einstein's equivalence principle?

^H(x_i, p_i) ψ(x_i) = E ψ(x_i)

But V(x_i) is a function of Δp_i; i = 1, 2, 3

And T(p_i) is a function of <p_i>,

So more generally V(x_i) is a function of the magnitude of fluctuations in the vacuum's
momentum-energy and T(p_i) is a function of the expectation values of the components of
the momentum-energy of real particles and fields within the quantum vacuum.

Cellular automata theory can, of course, be recast in terms of conservation of some large,
but nonetheless finite, signal bandwidth.
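As a sketch of that recasting: an elementary cellular automaton over N binary cells can carry at most N bits per synchronous update, however intricate its dynamics. Rule 90 (each cell becomes the XOR of its neighbors) as an illustration:

```python
# An N-cell binary cellular automaton has a state space of only 2**N
# points, so its per-step "signal bandwidth" is bounded by N bits.
def step_rule90(cells):
    """One synchronous Rule 90 update with periodic boundary conditions."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

state = [0, 0, 0, 1, 0, 0, 0, 0]  # a single seed cell
for _ in range(3):
    state = step_rule90(state)
print(state)
# Whatever pattern emerges, the information content per step is bounded
# by len(state) bits -- the finite bandwidth in question.
```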

Time uncertainty necessarily includes fluctuations in the direction of time due to the
simultaneous presence of momentum fluctuations. Just as the external input of energy
can make a virtual particle "real" so can an input of energy make an alternate time axis
real. ΔtΔE >= h takes into account the possibility of energy degeneracy. A similar
statement applies to the equation, ΔxΔp >= h.

Perhaps the energy uncertainty, ΔE, can be broken into two pseudo-orthogonal
components, a degenerate and a non-degenerate component, each with their respective
conjugate time variables.
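The ΔtΔE relation above has a standard operational reading: an energy width ΔE bounds the lifetime of a state from below. A sketch using the conventional ħ/2 form (the notes above write h; the numerical factor is a matter of convention):

```python
HBAR = 6.582119569e-16  # reduced Planck constant in eV*s

def min_lifetime(delta_E_eV: float) -> float:
    """Lower bound on lifetime implied by an energy width, via dE*dt >= hbar/2."""
    return HBAR / (2.0 * delta_E_eV)

# A resonance with a 1 eV energy width:
print(f"{min_lifetime(1.0):.3e} s")  # ~3.3e-16 s
```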

Do disturbances to the potential energy structure, i.e., the structural relationships of local
energy minima, have to be treated as ―late time‖ (ad hoc) adjustments to the system initial
conditions?

The degeneracy space is not spanned by basis vectors associated with thermodynamic
degrees of freedom.

Superlattice of quantum wells model of competing tunneling resonance phenomena.

Myriad, interpenetrating 4-vacua which are to varying degrees in and out of resonance
with one another. Each possesses a timeless temporality that only takes on the ordinary
attributes of time (as historical time) through its resonance and interaction with other 4-
vacua.

The speed of light increases between Casimir plates due to an increase in the probability
rate of production of virtual e+e- pairs between the plates. The e+e- virtual pair
creation/annihilation events are how the energy uncertainty between the plates in the
modified vacuum manifests itself as energy fluctuations. So the decrease in momentum
fluctuations, virtual photon exchanges between the plates, is offset by a corresponding
increase in energy fluctuations between the plates in the form of virtual scalar particle
exchanges between the modified vacuum and the four dimensional quantum vacuum.

The substance of any given relative 3 + 1 vacuum state is an absolute 4-vacuum. These
two vacua continually exchange energy with one another.


The linear density of virtual e+e- pairs is increased although photon wavelengths are
unchanged, resulting in higher frequency and the same wavelength, i.e., a greater value of
"c." This is explained by a greater probability rate for energy-compatible (or what is
called "resonant") virtual e+e- pairs being created and annihilated along the photon's
"trajectory" between the Casimir plates. The concept of resonance is pervasive, with
instances in almost all branches of physics.

In the above paragraph we have an example of "resonance-tuning." By altering the
electromagnetic boundary conditions of the quantum vacuum electromagnetic field
through the use of simple rectangular Casimir plates, one has succeeded in modifying the
free space vacuum by altering this vacuum's resonance structure. Alternately, virtual
photons of a given energy/frequency within this modified quantum vacuum are no longer
described by virtual e+e- pairs of energy, E = hc/lambda or, no longer resonate with
virtual e+e- pairs of energy, hc/lambda. We speculate that the change introduced to the
vacuum's resonance spectrum is confined to the volume contained within the world
hypervolume defined by the Casimir plate geometry over time. If this is indeed the case,
then the change to the probability rate of virtual e+e- pairs being created and annihilated
within the geometry of the Casimir plates is only for those pairs possessing "half-
wavelength" smaller than the plate separation. Here is the conceptual difficulty:
although the density of vacuum electromagnetic photons within the plate geometry has
been effectively decreased, the density of vacuum e+e- pairs being created and
annihilated within this same geometry is increased. The idea behind these two allegedly
related vacuum effects is that of the conservation of momentum-energy or 4-momentum
density within all possible (self-consistent) hypervolumes. A simplistic way to understand
this phenomenon where the above Casimir geometry is concerned, which is not, I believe,
fundamentally misguided, is the following. The amount of energy (in the form of
photons) that has been excluded from the Casimir plate geometry is made up for by an
exactly offsetting increase in the density of virtual photons of energies falling within the
frequency spectrum, hc/lambda, where lambda < 2 x the distance between the plates,
d_Casimir.

E ≠ hc/lambda

So in a given interval of time, say, for convenience here, d_Casimir/c, where "c" is the
normal value of the speed of light in vacuo, the linear probability density of virtual e+e-
creation/annihilation events along an axis perpendicular to the plates and contained well
within the geometry described by said plates (in order to be able to ignore
electromagnetic "fringe effects") is enhanced above its free space value. So a real photon
introduced from one end of the plate geometry with momentum, h/lambda', directed
perpendicular to the plates and possessing an energy greater than hc/lambda where,
again, lambda is less than twice the value of d_Casimir, shall, on account of the enhanced
linear density of resonant e+e- virtual pairs created and annihilated along the photon's
perpendicular path, reach the metal plate at the opposite end of the Casimir geometry
sooner than would have been implied by the simple equation, E = pc. This equation is, in
fact, no longer valid within the modified vacuum of the volume described by the Casimir
plate geometry.
But some equation reminiscent of E^2 = m^2c^4 + p^2c^2 must hold instead. Only by
introducing an imaginary mass component to the photon that is sufficiently, offsettingly
large may the simultaneous increase (above that of "c" in vacuo) be permitted for the real
photon's velocity perpendicular to the Casimir plates. Now it is suggested that, by
performing a modification to the vacuum's resonant structure opposite in sense to that
described above, we shall succeed in imbuing the affected particle (that confined to the
vacuum thus modified) with a real as opposed to imaginary mass!
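The imaginary-mass bookkeeping invoked here is just the statement that, in units with c = 1, m² = E² − p² goes negative for any signal with p > E, i.e., with velocity v = p/E > c. A minimal check:

```python
# In units with c = 1, the effective mass-squared of a signal with energy E
# and momentum p is m^2 = E^2 - p^2. A signal that outruns light (p > E,
# since v = p/E for a free particle) has m^2 < 0: an imaginary mass.
def mass_squared(E: float, p: float) -> float:
    return E * E - p * p

print(mass_squared(1.0, 1.0))    # 0.0 -- ordinary photon, v = c
print(mass_squared(1.0, 1.001))  # negative -- "tachyonic", imaginary mass
```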

What symmetry is implied by the conservation of uncertainty - phase space graininess,
perhaps?

The kernel of a thought can travel from one mind to another at a velocity which is
effectively faster than light, but the time taken for this kernel to develop into actionable
thought would be tied to an implied observance of the above speed being "effectively c."

Decoherence of quantum states through entanglement interpreted as observation by the
universal consciousness field. Why clones may not become fully conscious and fully in
possession of a free volition explained in terms of the proprietary nature of brain quantum
vacuum fields and their evolution in relation to the evolution of the universal
consciousness field.

Any interference with the vacuum within spacetime which exceeds, in the energy of its
imparted impulse, Penrose's one graviton limit, cannot be transmitted through (and
perhaps ultimately absorbed by) this vacuum exclusively via nonlocal, virtual particle and
field interactions. Here we might describe such an interaction with the vacuum as
exceeding maximum energy uncertainty supportable free of momentum exchanges
between multiple components, i.e., the energy fluctuation of spacetime has exceeded the
threshold of energy at which the quantum vacuum can manifest fluctuations of its
momentum-energy as pure energy, as opposed to a mixture of energy and momentum
fluctuations. Is this an example of time symmetry breaking and the loss of energy
degeneracy through having exceeded the largest quantity of non-measurable energy?
The necessity for energy fluctuations in the vacuum, exceeding the one graviton
threshold, to manifest themselves as a collection of particles and fields. It is at or just
beyond this energy threshold that the energy of the fluctuation must fragment into
elements or structures forming a composite maintained by a sustained mutual interaction
of the components- with this composite structure having necessarily now become
―embedded‖ within spacetime as one of its really existing (as opposed to merely virtual)
objects.

It is only through the interference of the wavefunction, either self-interference or mutual
interference, that the wavefunction takes on a physical meaning in terms of the
probability of a physical, spacetime phenomenon. Within quantum theory temporality
manifests itself in two
seemingly fundamentally distinct manners, deterministically through the evolution of the
time dependent Schrodinger wave equation, non-deterministically via the discontinuous
reduction of the state vector as a result of quantum measurement. The relativistic effects
of Einstein's special and general theories (of relativity) do not appear to make any such
distinction of varieties of time.
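The role of interference claimed above is visible in the cross term: |ψ1 + ψ2|² exceeds |ψ1|² + |ψ2|² by 2 Re(ψ1* ψ2). A numerical check with two counter-propagating plane waves (the wavenumber is an arbitrary illustrative choice):

```python
import numpy as np

# |psi1 + psi2|^2 differs from |psi1|^2 + |psi2|^2 by the interference
# cross term 2*Re(conj(psi1)*psi2), which carries the fringe structure.
x = np.linspace(-5, 5, 1001)
k = 3.0  # illustrative wavenumber
psi1 = np.exp(1j * k * x)   # plane wave moving right
psi2 = np.exp(-1j * k * x)  # plane wave moving left

separate = np.abs(psi1)**2 + np.abs(psi2)**2    # flat: 2 everywhere
combined = np.abs(psi1 + psi2)**2               # fringes: 4*cos^2(kx)
cross_term = 2 * np.real(np.conj(psi1) * psi2)  # the interference term

print(np.allclose(combined, separate + cross_term))  # True
```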

Only if the circuit elements of an artificial intelligence device are sensitive enough to a
whole spectrum of energy fluctuations in the quantum vacuum, as well as these elements
being adequately sensitive in their mutual interaction via the changing currents and fields
sustained within the device's circuitry, will an adequately responsive feedback obtain
between this device and the embedding quantum vacuum that will enable the vacuum (or
some spectrum of quantum fluctuations "within" it) to utilize the device circuitry to
bring about coherent and robust changes to the boundary conditions upon the
vacuum's self-interaction.
September 2011: Quantum entanglement provides the underlying
mechanism for temporal feedback. Temporal feedback of this type exploits the degrees
of freedom (quantum vacuum electromagnetic field fluctuation-correlation structure)
normally hidden from intersubjective awareness, i.e., measurement. So processes with
energies smaller than the Heisenberg energy uncertainty of the system embedding these
processes constitute an infrastructure supporting the engendering of quantum
entanglements, as well as the reprocessing of quantum entanglements into causal sets of
correlated quantum fluctuations. Although the wavefunction of a solitary system cannot
be determined, this wavefunction may nonetheless possess causal efficacy, which might
seem to support Bohm's hidden variable hypothesis, but not necessarily so because on
this view causal relationships have a fluctuation-correlation substructure that is broader
than any set of correlated fluctuations that may formally represent a causal relationship.

Similar remarks to the following are to be found elsewhere. Another way of looking at
the interaction of the subjective state of the observer with the observed measured
quantum state, which is responsible for state vector reduction, is that of the copresence of
the newly observed system (upon observation) with everything else which the observer
has in the concurrent state of awareness as constituting the irreversible entanglement of
the newly observed system‘s state with those of the other objects within the observer‘s
immediate environment. This would help to unify the two heretofore distinctly
understood mechanisms of state vector reduction – that of observation by a conscious
observer with the mechanism of state function decoherence via irreversible
environmental entanglement. Here the consciousness of the observer performing a
quantum measurement simply plays the role of irreversibly connecting link between the
isolated quantum system and the outside physical environment. Another possible
interpretation might be that of the individual consciousness of the quantum mechanical
observer somehow alerting the "universal consciousness field" (UCF) of the presence of
the particular system observed. The idea here is that the UCF notices the quantum state
of various systems where no human observer is present, if such systems are not small
enough or, if large, not isolated enough to "escape its notice."

The actions of the human observer, say, in the laboratory, of utilizing measurement
apparatus either to amplify the effect (upon the observer) of the states of systems
otherwise too minuscule, or to breach the heretofore isolation of these systems from "the
world at large," render the states of systems known to a human observer and, hence, to
the UCF or to consciousness at large. The possible importance of a hypothesized UCF is
particularly implicated in those cases in which the mere fact that a quantum measurement
could have been performed upon the system observable proves enough to cause state
vector reduction.

That which is responsible for sustaining the existence of all things would not itself
possess a sustained existence. The above statement points up the fundamental
distinction of the real versus the virtual. As perhaps alluded to above, the real is nothing
more than a distinction within the virtual. In the formation of bound energy structures,
e.g., atoms, molecules, etc., the application of focused, undifferentiated energy alone to
the quantum vacuum is less than adequate and some mutual interaction of locally
available (real) particles is additionally necessary for such bound structures to form, in
other words, differentiated energy must be available.


(1) Important here is the positional uncertainty of these momentum fluctuations.

(2) Relative to the photon, all of the changes occurring during its movement between two
distinct points take place instantaneously.


Travelling waves obeying the Schrodinger Wave Equation can always be represented as a
superposition of standing waves within a closed spacetime manifold in which time is like,
although not identical to, an extra dimension of space. When energy is conserved, the
system Hamiltonian operator must be able to be diagonalized. There must be a reference
frame in which the system is in an eigenstate of the Hamiltonian.
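The standing-wave decomposition asserted above is, in the simplest case, just the angle-subtraction identity sin(kx − ωt) = sin(kx)cos(ωt) − cos(kx)sin(ωt). A numerical check with arbitrary illustrative parameters:

```python
import numpy as np

# A travelling wave sin(kx - wt) is exactly the superposition of two
# standing waves, sin(kx)cos(wt) and -cos(kx)sin(wt).
x = np.linspace(0, 2 * np.pi, 500)
k, w, t = 2.0, 1.5, 0.7  # illustrative wavenumber, frequency, instant

travelling = np.sin(k * x - w * t)
standing_sum = np.sin(k * x) * np.cos(w * t) - np.cos(k * x) * np.sin(w * t)

print(np.allclose(travelling, standing_sum))  # True
```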

The masslessness of light is connected to the fact that it is a chain of momentum
fluctuations in the quantum vacuum relative to which no uniform motion can take place
in the direction parallel to the momentum of these fluctuations(1). These fluctuations of
which light is composed possess 0 energy uncertainty and hence, infinite time
uncertainty. (This proposition needs to be reexamined more carefully.) That is, relative to
the vantage point of a photon, nothing takes place, i.e., energy does not fluctuate (no
energy enters the hypersurface of simultaneity defined by the photon's instantaneous
reference frame?), anywhere in the Universe during the photon's "flight."

The degenerate evolution of a quantum system's wavefunction does not require any
change in the system's total energy, and so interactions with such an energy-degenerate
system by a "non-physical" agency would not violate conservation of energy. This also
goes for changes in the kinetic and potential energies of the system separately which,
when combined, do not result in a change to the system's Hamiltonian.

Resonances (as Platonic forms) are dependent on the structure of boundary conditions,
which are contingent and unforeseeable in their effects. An example of this is the
superfluidity of Helium-4 produced and observed for the first time ever in a cryogenics
laboratory!

The fringe effects represented by the Gibbs phenomenon of Fourier analysis, as applied
to quantum probability amplitudes, represent the dynamical (and creative) interface
between all (would-be) isolated quantum mechanical systems and the creative ground in
which they find themselves embedded, and out of which they are necessarily (partially)
constituted.

Fringe effects are unavoidable within any artificial construct. This is the dual, onto-
epistemological principle of the boundary (conditions) never becoming the master of the
dynamic, and of substantive processes never being fully explicable in terms of relations
on the set of their possible (epi)phenomena. (The Gibbs phenomenon as applied to
probability amplitudes and quantum tunneling bears some consideration here.)

How does the distinction, real versus virtual, relate to the nature of superposition? Also,
how is the phenomenon of Bose condensation related to superposition? Are "viral"
memes exchanged between people? How are symmetry and degeneracy related?

The interaction of two or more open systems would not merely be exchanging energy,
since energy is a property only defined as such if it is a conserved quantity. There could
be no time translation symmetry in such an interaction, and something altogether
new must come out of it, so that we then speak of the systems rather than of energy. The
2nd Law of thermodynamics does not encompass nonlocally connected systems.

In a closed, dynamic system a quantity has its particular character as conserved substance
by virtue of boundary conditions and topology. Such is not the case for "quantities"
"occupying" an open-ended system.


The resonant modes of vacuum field oscillation to which a crystal may be "tuned" are
simply those defined by the frequencies for allowed transitions between crystalline
energy levels.

What is the function ("meaning") of less than perfectly stable modes of excitation and
resonance?


The de Broglie spacetime wavenumber may be aptly understood as the localized
bandwidth for the access of a dynamic system to the "quantum vacuum" CPU.

Mutual interference of wavefunctions (or of eigenfunctions) does not constitute a bona
fide ―interaction‖ between the state functions in the absence of nonlocal connections
obtaining between the functions.

A function that back reacts, in the course of its (temporal) evolution (what are we to
make of the notion of the ―spatial evolution‖ of a wavefunction?), upon one or more of its
independent variables, or whose independent variables act and react upon each other
without merely being functions of each other, e.g., where the initially stated set of
―independent‖ variables proves to be reducible to some smaller set, must be a nonlinear
function (of a set of variables).

Do nonlocal connections in the form of advanced wave solution components of particle
or field propagators create a kind of backward in time feedback which is responsible for
exchange resonances displayed by, e.g., fields never before in direct or indirect mutual
contact?

De Broglie waves for a translating particle have both energy (timelike) and momentum
(spacelike) components, or, alternatively, particle-like and wavelike components,
respectively. A particle's de Broglie frequency, if you will, is maximal when the particle
is "at rest." Upon translation, some of the particle's rest frequency shows up as
de Broglie wavenumber.
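
This trade-off follows from the standard relations ω = γmc²/ħ and k = γmv/ħ; a small numerical sketch (my own illustration, not the notes' derivation) checking that the rest frequency is the invariant "budget" that translation redistributes between frequency and wavenumber:

```python
import math

# Standard de Broglie relations for a moving particle:
#   omega = gamma * m * c**2 / hbar   (timelike, "energy" component)
#   k     = gamma * m * v / hbar      (spacelike, "momentum" component)
hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m/s
m = 9.1093837015e-31    # electron mass, kg (illustrative choice)

def debroglie(v):
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * m * c**2 / hbar, gamma * m * v / hbar  # (omega, k)

omega0, _ = debroglie(0.0)        # rest (Compton) angular frequency
omega, k = debroglie(0.6 * c)     # the same particle, translating

# The invariant omega^2 - (c k)^2 equals the squared rest frequency:
inv = math.sqrt(omega**2 - (c * k) ** 2)
print(abs(inv - omega0) / omega0 < 1e-9)  # True
```

The spacelike component k grows exactly at the expense of keeping this invariant fixed, mirroring the conserved momentum-energy four-vector.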


This constitutes quantum-based special relativistic effects for the "individual particle."
But if de Broglie frequency and wavenumber are understood after the fashion of the
unified quantity of momentum-energy, then we may speak of the de Broglie spacetime
wavenumber of composite matter as being a sort of conserved fluid.


Propagation speeds of greater than "c" are possible so long as the velocity of light in
vacuo is not exceeded during the necessary phases of preparation of experimental
apparatus at both ends (and everywhere in between) of the path of the, e.g., particle, field,
or disturbance in question.

If a superluminal message is received, the time that must be taken to decrypt or decipher
the message must be greater than t = d/c, where "d" is the distance between transmitter
and receiver and "c" is the speed of light.
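
The bound t = d/c is elementary arithmetic; a sketch with purely illustrative numbers:

```python
# Minimum decipherment time t = d / c demanded above of a superluminal message:
# if decoding takes longer than the light-travel time, no usable information
# outruns light. The distance below is purely illustrative.
c = 2.99792458e8          # speed of light, m/s
d = 3.844e8               # roughly the Earth-Moon distance, m (illustrative)

t_min = d / c             # seconds the decipherment must exceed
print(round(t_min, 3))    # about 1.282 s
```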

This suggests an objective relationship between "encryption hardness" and entropy (or,
conversely, information content) of the transmitted "script." "Time . . . flowing like a
river . . ." Of course, if time is to be properly likened to a (conserved) fluid, then
certainly the appearance of vortices within the linear flow of this fluid corresponds to an
"equal and opposite" reduction in the linear current density of the fluid.



There is a shifting back and forth of current density between its linear and vorticular
components; however, the overall current density of the system is conserved. Of course,
the fluid is under acceleration within its vorticular component, but no work is performed
by this component of the fluid if the pressures and stresses within the fluid are orthogonal
to the vortices of the fluid.

Is any energy dissipated by the vacuum in the mere sustaining of stable matter that is not
currently undergoing any "reactions," e.g., chemical, nuclear, etc.? Energy is dissipated
by the quantum vacuum when a mass is accelerated. Is this energy dissipated because
the vacuum must be continually "updated" on the current state of motion of the mass?
What about in the case of gravitating masses? As the Schrödinger equation dictates the
temporal (and spatial) evolution of a quantum system's wavefunction, e.g., that of an
accelerated particle, or of a bound structure composed of multiple particles and fields, any
change in the momentum-energy of some quantum mechanical system introduced from
outside the scope of the system wavefunction must induce a "collapse" of this
wavefunction, as well as suddenly alter the uncertainties of both conserved and
nonconserved system observables. So within each set of incompatible, conjugate
observables, the action of measurement, environmental entanglement, or acceleration
introduces either a decoherence or a reduction of the state vector, resulting in an increase in
the Heisenberg uncertainties of one set of observables along with a corresponding
decrease in these uncertainties with respect to the conjugate set of observables of the
system. So it appears that information must be taken up by the system as a result of a
change in the system wavefunction. So differences between distinct system
wavefunctions may be thought to "contain" information, whereas a single, isolated
wavefunction, not correlated or otherwise entangled with other systems, may not be
supposed to possess information in an absolute sense. In other words, only relative
differences between wavefunctions possess information content.


Locality and causality are merely abstract features of the physical world. How can form
sustain itself in the absence of boundary conditions upon the dynamic?

Standing wave structures possess boundary conditions at both ends (one dimensional
case) while travelling waves only have one initial condition. Is relativistic (spacetime)
symmetry violated here?

Energy uncertainty without the necessity of vacuum fluctuations – superposition within a
closed system. So here we have two types of fluctuation: that which is only definitional,
i.e., fluctuations within a closed system, and the other which is inherently unpredictable,
i.e., not part of any system behavior. So we may make the distinction of these two
fundamental types of fluctuation thus: fluctuations that are real versus virtual
perturbations. Temporality cannot subsist in virtual perturbations – real fluctuations
must be present.

Is some kind of degeneracy and symmetry involved in the overdetermination of one's
interpretation of one's own mental states? Causal connections as interpretations of
correlations, and causal entities as "beds of correlation."

The basis for the radical overdetermination of nature is the fact of its origin being an
infinitely symmetric state. This infinitely symmetric "initial state" was partially broken in
the "moment of creation." Therefore all of the symmetries observed in nature are partial
ones, described by subgroups of some original symmetry group.

Strangeness of infinite sets: cultural polarization field, pattern recognition, and
enhancement of patterns and metaphysical presence. Mentor-protégé relationship for all
members of a set of student/teachers without entangling of levels. Recursiveness only
causes entangling of levels of description in a finite field of elements.


Now a given mass, accelerated to a velocity approaching that of light, cannot approach
this delimiting value so closely as to become more massive than its own black hole
mass. And the black hole mass is determined by the density of the mass having reached
the delimiting value of the vacuum energy density/c^2 for the particular volume occupied
by the accelerating mass. Also, we know that the acceleration of the mass is just the 90°
spacetime rotation of the mass' original, purely timelike velocity.

In this way the limiting velocity to which a given mass may be accelerated is a function
of the energy density of the vacuum occupied by the accelerating mass. So we see now
that the original, purely timelike four velocity of the mass (prior to acceleration) is itself a
function of the density of the mass relative to the density of the vacuum energy
coextensive with the mass' displaced volume. But if the acceleration to light velocity is
actually to be effected, the mass must utilize its own mass energy as the "fuel" for
propelling itself such that, upon actually approaching c, all of its starting mass has
already been converted to photon (or other massless particle) energy. The fact that some
of this fuel must come from the binding energy of the mass rather than from the positive
mass energy itself suggests that the mass, when "at rest," already possesses a small
quantity of spacelike momentum connected with dynamical processes within the mass
responsible for the existence of this binding energy.
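
The claim that essentially all of the starting rest mass must be spent on approach to c can be checked against the standard photon-rocket mass-ratio relation, M/M0 = sqrt((1 - β)/(1 + β)); this is a textbook result, not derived in these notes, used here only as an illustration:

```python
import math

# Photon rocket: a mass propelling itself by radiating photons retains, at speed
# beta = v/c, the rest-mass fraction  M/M0 = sqrt((1 - beta) / (1 + beta)).
# This illustrates the point above: as beta -> 1, essentially all of the
# starting rest mass must already have been converted to photon energy.
def remaining_fraction(beta):
    return math.sqrt((1.0 - beta) / (1.0 + beta))

for beta in (0.5, 0.9, 0.99, 0.9999):
    print(beta, round(remaining_fraction(beta), 4))
```

The fraction falls toward zero as β approaches 1, so "all of its starting mass has already been converted" is the limiting case of this relation.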


A theoretical group describes a closed set of reversible operations with the elements of
the group. What is closed is the set composed of the distinct operations; the number of
distinct elements, which composes the domain, which the operations of the group take as
inputs, may itself be infinite. When the symmetry of the group is "broken," the
operations of the group are altered and perhaps also new operations are added, which are
no longer reversible, singly or collectively. The discovery of a new mathematical group
for the description of some physical phenomenon means that various manifestations of
physical variables, heretofore treated as distinct, are now seen to be differing
manifestations of the same, underlying entity/physical quantity.
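
The closure and reversibility that define a group can be checked mechanically; a minimal sketch (my own illustration) using the symmetric group S3 as a stand-in for the "closed set of reversible operations" described above:

```python
from itertools import permutations

# Elements of S3 as permutations of (0, 1, 2); compose(p, q) applies q, then p.
elems = list(permutations(range(3)))

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

identity = (0, 1, 2)

# Closure: composing any two operations stays inside the set.
closed = all(compose(p, q) in elems for p in elems for q in elems)

# Reversibility: every operation has an inverse within the set.
invertible = all(any(compose(p, q) == identity for q in elems) for p in elems)

print(closed, invertible)  # True True
```

Note that the set of operations is finite and closed even though the domain they act on could be made infinite, which is exactly the distinction drawn above.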

Changes in state to some subpart of a closed dynamic system which are wholly
attributable to other, earlier changes in neighboring subparts of the overarching system
constitute ceteris paribus reversible, symmetric interactions. However, the reactions
triggered by these interactions may not be themselves symmetric and time-reversible.
Any asymmetries of the system's changes of state are ultimately traceable to input to the
system wholly from outside it. The sense of "outside" intended here is one in which the
system as a whole is not secretly a subpart of some overarching system, with
perturbations to its outside transmitted to it, mediated by the overarching system's
matrix, and originating within some other subpart of the crypto-overarching system.
What we are really saying here is that asymmetrical interactions are always ultimately
between the uppermost overarching system and its outside. Of course, the uppermost
overarching system in almost any physical setting is just the global spacetime continuum
itself in which this system is "embedded."


Infinite degeneracy would mean that the wavefunction of the degenerate system could
undergo transition between an infinite number of eigenstates on a continuous spectrum
without measurable changes resulting to any of its quantum observables.

Should Psi be understood as a knowledge representation rather than the most that can
actually be known about the system?

Since no energy is required to effect changes in the system's quantum state from one to
another of its energy-degenerate eigenstates, free will and conscious thought may
presuppose energy degeneracy. Also, there is no basis for assigning scales of physical
time to energy-degenerate quantum transitions.
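
The point that transitions within an energy-degenerate subspace cost no energy can be illustrated with a toy Hamiltonian; a sketch (my own construction, not from these notes) rotating a state inside a two-fold degenerate level:

```python
import math

# Toy Hamiltonian with a two-fold degenerate level: H = diag(1, 1, 2).
# Rotating the state within the degenerate {E = 1} subspace changes the state
# but leaves <H> exactly unchanged -- the "transition" requires no energy.
H = [1.0, 1.0, 2.0]                     # eigenvalues on the diagonal

def expectation(state):
    return sum(E * abs(a) ** 2 for E, a in zip(H, state))

psi = [1.0, 0.0, 0.0]                   # start in the first degenerate eigenstate
theta = 0.7                             # arbitrary rotation within the subspace
psi_rot = [math.cos(theta), math.sin(theta), 0.0]

print(abs(expectation(psi) - expectation(psi_rot)) < 1e-12)  # True
```

Since no observable energy difference marks the before and after, nothing within the system fixes a timescale for the change, which is the second point above.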

Infinite symmetry may be understood as infinite sameness in which, no matter where the
observer is and no matter what happens, no "objective" differences between here and
there and no changes from now to then are in any sense evident. Now such a state of
infinite symmetry in the absence of degeneracy (with respect to some set of as yet
"unmanifest" parameters) is synonymous with an empty void or nothingness. Moreover,
for there to exist a state of infinite symmetry, an infinite degeneracy must be present, that
is, degeneracy with respect to an infinite number of distinct though still unmanifest
parameters. But all this is to presume that there is some presently obtaining state, which
represents physical reality (as a whole, in some sense) but which is both underpinned by
and insulated against an infinitely chaotic flux of change to an infinite number of still
unmanifest proto-physical variables (proto-observables?). A tremendous quantity of
entropy is released as a result of the breaking of this state of infinite symmetry (one might
suppose an infinite quantity for this entropy), so that the aboriginal state of infinite
symmetry is to be understood as possessing infinite order. The breaking of this infinitely
symmetric initial state results in a kind of "unpacking" of this state into an intricate,
dynamic structure/continuum of ordered and disordered energy, i.e., some of the
symmetry (order) of the initial state survives the transition to the broken-symmetry state
and the remainder is "lost." But a perfectly and infinitely symmetric state would
possess no internal fluctuations in any of its group-symmetric parameters outside of the
spectrum of potential change for the parameters defined within the symmetry group, and
so it is hard to imagine how such a symmetric system, heretofore everlastingly cut off
from any outside (a realm possessing foreign parameters or "off-scale" values of
"domestic" parameters), could "spontaneously break," that is, without having been
"helped" from outside.

If the Big Bang is simultaneous with the initial, abrupt phase of spontaneous symmetry
breaking, then its continued expansion may perhaps be understood as a continuous
further breaking of (global) symmetry in favor of the continued establishment of greater
myriads of domains of local symmetry, interconnected through mutual exchanges of
quantities of their local variables. We might suppose that the rate-density of entropy
production therefore is uniform throughout the Universe, provided that one refers to the
production of total entropy, local + global, i.e., that due to the local evolution of complex
structures/systems + local contributions to the entropy production exclusively due to the
global entropy production occurring as a consequence of cosmological expansion. This
means that the global rate-density of entropy increase is affected by the rates of change to
the local entropy. The rate-density of global entropy may be understood, paradoxical
though it may seem, as the local rate of time's passage relative to changes in the cosmic time
parameter. This cosmic time parameter is based upon changes in the total entropy of the
Universe.


Ambiguities of interpretation of fluctuations occurring near the vertex of the light cone.
Ambiguities of the interpretation of quantum tunneling. Can we have tunneling of
particles across a timelike potential barrier? Can we construct an experiment to test this,
say one which involves the tuning/detuning of some already well understood tunneling?
The consistency of the notion of virtual gravity waves depends on the independent
existence of spacetime fluctuations.


At the event horizon of a black hole the space and time axes are in some sense reversed.
This is because an object (or particle, so that we can neglect tidal effects) just outside of the
event horizon possesses virtually all of its momentum in the direction of the hole's center
and relatively none of it in a timelike direction. But neither is there any "room" for
internal spacelike momentum for this mass (in the form of binding force-mediating boson
particle exchanges). All of the fluctuations of the vacuum are in the form of 3-
momentum and none in the form of imaginary (timelike) 4-momentum. And so the mass
is not so much torn apart as it is dissolved, due to the hole's sapping of the mass' own
binding forces. In the extreme case depicted here, gravity is not truly a "force," but our
projecting of our large, latent stores of common experience with everyday objects causes
the phenomenology of gravitation to be naturally interpreted in terms of the action at a
distance of a gravitating force. Gravitation is more accurately (or objectively)
understood as the effects of mass upon a higher dimensional quantum vacuum that must,
after the fashion of a cellular automaton, simultaneously parallel-reprocess the data
representing both matter and the colocated/coextensive 3-vacuum, and where this
vacuum possesses only a finite computational capacity.

If the vacuum does not itself gravitate, then momentum and energy fluctuations do not
directly cause spacetime fluctuations, but only do so indirectly through the effects of these
momentum-energy vacuum fluctuations upon matter itself. Fluctuations of momentum
and energy, in other words, directly produce fluctuations in x and t of test particles that
we conveniently interpret as the effects of fluctuations of x and t upon test particles. The
problem here is that if we allow δx and δt to affect <p> and <E> directly, then we also
have to permit them to affect δp and δE directly, with the result that the vacuum itself
must possess a gravitational mass.

Perhaps the way to put time and space on an equal footing within a theory of quantum
gravity should be not to try to make the time parameter into a bona fide observable, but to
take away this status from the space variable, x.

This reminds us of the expansion of a permutation group through the discovery that some
members of this group are not simple, but composite.

Processing information means
taking two pieces of information from two smaller, and heretofore never before directly
connected, contexts, and bringing the pieces of information together in such a way that
one now has new information concerning the broader context including all of the original
"pieces."



The breaking of the spacetime symmetry of the quantum vacuum by mass is not the
fundamental or absolute symmetry breaking that requires the engendering of a new gauge
boson. This situation is quite unlike that where a broken global symmetry causes the
creation of a gauge boson, which, by being exchanged between splintered symmetry
domains, results in the restoring locally of the symmetry that was broken globally, e.g.,
the creation of the Higgs boson in the theory of electroweak symmetry breaking.
Another reason that the breaking of the vacuum spacetime symmetry by mass is not
fundamental is that mass is not an irreducible, conserved physical quantity, but is a
phenomenon produced by the peculiar manner in which the components of the total
(mass-energy + vacuum-energy) fluctuation stress-momentum-energy tensors mutually
interact.
April 2011
"5. Spin 1 photon field has positive zero point quantum energy density.
Spin 1/2 Dirac electron-positron field has negative zero point quantum energy density
from spin-statistics connection. Ref: Peter Milonni, 'The Quantum Vacuum'"; cf.
Sarfatti's ebook, "Destiny Matrix".

Prior to the breaking of the spacetime symmetry of the vacuum by mass, the spectrum of
momentum fluctuations within the vacuum is just that defined in terms of the spectrum of
virtual transitions allowed between the discrete energy levels of this global vacuum state.
This is exactly the situation we should expect if the quantum vacuum can correctly be
modeled as a four-dimensional array of coupled harmonic oscillators, i.e., as a four-
dimensional crystalline latticework. The additional 3-momentum fluctuations, over and
above those permitted in the symmetric state of this vacuum, correspond to those
transition energies of the modified vacuum lattice which are now forbidden by the state
of broken symmetry.
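
The "crystalline latticework" picture at least has a well-defined one-dimensional analogue; a sketch (a standard coupled-oscillator chain, my own illustration rather than the notes' four-dimensional model) recovering the discrete normal-mode spectrum from which such transition frequencies would be drawn:

```python
import math

# Normal modes of a chain of N coupled harmonic oscillators (unit mass, unit
# spring constant, fixed ends): omega_n = 2 * sin(n * pi / (2 * (N + 1))).
# Differences of these discrete frequencies play the role of the "spectrum of
# virtual transitions between the discrete energy levels" of a lattice-like vacuum.
N = 8
modes = [2.0 * math.sin(n * math.pi / (2 * (N + 1))) for n in range(1, N + 1)]
print([round(w, 3) for w in modes])
```

The spectrum is discrete and bounded, so forbidding or allowing particular transitions (as the broken-symmetry picture above requires) amounts to pruning or densifying a fixed set of mode differences.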

After this symmetry is broken, the spectrum of vacuum 3-momentum fluctuations
becomes denser while that of the vacuum's energy fluctuations, i.e., fluctuations in the
imaginary component of the vacuum's 4-momentum, becomes attenuated. Were this
vacuum two dimensional, this change in symmetry could have been effected by a simple
"rotation" of the fluctuation momentum-energy vector, i.e., a 2nd rank tensor would have
been required to describe the rotation, but not the end state, which could be adequately
described in terms of a 2-vector. And the dynamics of this rotation of momentum-
energy within this 2d spacetime can be modeled in terms of the exchange of spin-1 gauge
bosons. However, in four-dimensional spacetime, a 2nd rank tensor is required to
describe the vacuum's end state – that post symmetry-breaking. Moreover, a 4th rank
tensor is necessary to describe this transformation of the vacuum caused by the breaking
of its symmetry, as mentioned earlier. Consequently, a transformation of the fluctuation
stress-momentum-energy tensor can be described in terms of time-varying probability
current densities of both spin-1 vector gauge bosons and spin-2 gauge bosons.

As the density of a given mass is increased, so increases the communications
"bottleneck" between change within the mass, mediated by momentum fluctuations in the
form of spin-1 boson exchange, and the temporality of the quantum vacuum. Since the
modes of momentum fluctuation are determined (just as in the case of a crystal) by the
available energy transitions between discrete energy levels of the (lattice) vacuum,
simultaneous increases in momentum fluctuation density with corresponding decreases in
energy fluctuations within the vacuum occupying the very same volume (as the mass in
question), require that these momentum fluctuations become progressively more energy
degenerate. Is this paradoxical? We must understand the relationship between the
concepts of symmetry and degeneracy to answer this question.

The nonlinearities exhibited by general relativity may only apply to the expectation
values of all forms of energy and not to mere fluctuations in energy (that are grounded in
Heisenberg energy uncertainty, that is). Is gravitation a manifestation of the loss of
spacetime symmetry? In other words, the spatiotemporal distribution of the
wavefunction, which matters not to an isolated, single particle within an ―empty‖
universe, in terms of when and where this particle is situated within this empty spacetime,
becomes an important parameter for the momentum-energy (or 4-momentum) of this
(test) particle, i.e., a degeneracy has been removed (and not here because of some specific
force coupling having been introduced) – remember, gravitation is not a ―force.‖


From my light reading on the subject of supersymmetry (SUSY), I gather that when a
global symmetry is broken, one or more gauge bosons are engendered as a result. These
bosons are necessary to restore this broken symmetry, but only at a "local" level. The
continual mutual exchange of these gauge bosons between the appropriate fermions
(those "feeling" the gauge force mediated by these particular bosons) constitutes the
mechanism by which this local gauge symmetry is maintained. My understanding is
that, in the case of all gauge bosons, with the notable exception of the Higgs, the
fermions involved in the exchanging of these bosons acquire an effective mass
component of their total mass. (Lorentz, by the way, had tried during the early years of
the 20th Century to demonstrate that all of the electron's mass was electromagnetic in
origin and was attributable to the electron's electromagnetic self-energy. Lorentz's
attempt ultimately proved unsuccessful: it appears that even the great hunches of great
minds do not always "pan out!") Now the hypothesis for the gravity mechanism alluded
to throughout these writings points to the mutual exchanges of gauge (vector) bosons as
being materially important in understanding this mechanism as well as the mechanism of
inertia – hopefully so, since we would like to remain true to Einstein's strong equivalence
principle, the other postulate of general relativity being that of general covariance.

Can we think of the loss of purely timelike 4-momentum, and this 4-momentum being
reconstituted into both timelike and spacelike components as representing a particularly
simple form of symmetry breaking? An important question in this connection becomes
does the vacuum not gravitate because of a kind of mutual cancellation of its timelike and
spacelike momentum fluctuation current densities? Or is this the case because the 4-
momentum of the vacuum, in the absence of mass, is itself purely timelike?
Supersymmetry, which requires the contributions to the vacuum energy from
creation/annihilation of virtual fermion-antifermion pairs to contribute a negative
component of this energy, also requires the contribution to this vacuum energy from the
creation/annihilation of bosons to be positive. Moreover, SUSY requires that these two
contributions from the two basic types of vacuum fluctuation somehow precisely cancel!

Magnitude of the tensor as a kind of container for a conserved fluid which can have 16
distinct (flow?) components. Shuffling of component intensities is reversible.

It has been stated many times by physicists that the existence of Heisenberg energy
uncertainty implies the non-conservation of energy. Similar statements apply to the
momentum. But with the unification of momentum and energy by special relativity into
a conserved momentum-energy four-vector, and the concomitant unification of space and
time into a symmetric, spacetime continuum, it becomes possible for both Heisenberg
momentum and energy uncertainty to exist throughout spacetime without the necessity of
net uncertainty of the four-momentum throughout this spacetime. This appears possible
if the vacuum fluctuations in 3-momentum are dynamically related to those of the energy
so that the sum of these two yields a magnitude of the four-momentum fluctuations of the
vacuum which remains a constant 0 or undetectably close to 0, such that a) the quantum
vacuum does not itself act as a gravitating source and b) the spacetime maintains (for all
practical purposes) its Lorentz invariance. The action of mass through the mechanisms of
Pauli Exclusion (and "Inclusion") upon the vacuum and, conversely, of this vacuum upon
the mass, serves to disrupt this spacetime symmetry (of Lorentz invariance), causing an
imbalance in the mutual cancellation of vacuum momentum and energy fluctuations.
This imbalance, induced by mass on the vacuum, acts to give the vacuum an effective
mass just as this modified vacuum enhances the mass of test bodies introduced into it
from outlying, virtual "free space." Mass is enhanced in two distinct but closely related
ways: the real time processing burden that increased momentum fluctuation densities
(within vacuum occupied by masses) pose for the quantum vacuum that mediates them
and the reduced computational resources available to this vacuum with which to perform
this function. On this view, the density of vacuum energy that acts as a source of
gravitation is only that component of the vacuum‘s total energy density which fails to
cancel with its momentum fluctuations. In this way, the effective vacuum energy density
is determined by the density of mass in the universe. It is this tiny component of the
total quantum vacuum energy density, which we should term the cosmological constant.

The building-in of phenotype degeneracy may depend also upon quantum indeterminacy.
Can the phenomenon of quantum degeneracy be related to that of classical, deterministic
chaos, e.g., chaotic attractor theory, i.e., "quantum chaos"?

So it is actually the tidal forces associated with a gravitational field that produce the
acceleration of massive bodies.

By virtue of the Bose principle (PIP), the enhanced binding energy of the vacuum
associated with its enhanced density of 3-momentum fluctuations/suppressed energy
(imaginary 4-momentum) fluctuations induces a mirroring (of this shift in the magnitude
of the components of its stress-momentum-energy fluctuation tensor) by the equivalent
matter fluctuation tensor.

This tendency of the structure of matter to imitate/borrow from that of the sustaining
vacuum energy is simply suggested by the distinction of real versus virtual, e.g., particle,
field, etc.


A_11 A_12 A_13 A_14     B_1     C_11 C_12 C_13 C_14
A_21 A_22 A_23 A_24  =  B_2  X  C_21 C_22 C_23 C_24
A_31 A_32 A_33 A_34     B_3     C_31 C_32 C_33 C_34
A_41 A_42 A_43 A_44     B_4     C_41 C_42 C_43 C_44

Contract each C_ij with B_j to get the magnitude of the vector; then, to get the direction of
this magnitude, take four dot products with the basis four-vectors of the coordinate system,
and contract vector with tensor.
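
The contraction described above is mechanical index algebra; a numpy sketch (with arbitrary illustrative components, not physical values) of contracting C_ij with B_j and then reading off directions against the basis:

```python
import numpy as np

# Contract a 2nd-rank tensor C_ij with a 4-vector B_j over the index j,
# yielding a 4-vector A_i. Components are arbitrary illustrative numbers.
rng = np.random.default_rng(0)
B = rng.normal(size=4)
C = rng.normal(size=(4, 4))

A = np.einsum('ij,j->i', C, B)   # A_i = sum over j of C_ij * B_j

# "Taking four dot products with the basis four-vectors" just reads off the
# components of A in that basis (here the Cartesian basis e_k):
basis = np.eye(4)
components = np.array([A @ basis[k] for k in range(4)])
print(np.allclose(components, A))  # True
```

The same `einsum` notation extends directly to the 4th-rank contractions mentioned below, e.g. `'ab,abcd->cd'` for contracting a 2nd-rank tensor with a 4th-rank one.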

T_αβ contracted with R_αβγδ yields 16 dot products of 16 pairs of 2nd rank tensors; or,
contracting T_ik with 16 metric-tensor-like 2nd rank tensors produces T_αβ, a new 2nd
rank tensor.

Contravariant versus covariant tensors, vectors, etc.?

∂_i T^ik = 0. T is a conserved quantity. What's the underlying symmetry at work here?

T_ik can be symmetric or antisymmetric, covariant or contravariant.

The derivative of T_ik is not a tensor except on a manifold with a metric of (+,+,+,−), i.e.,
η_ik, the Minkowski metric. T_ik must be defined on a flat spacetime manifold in order for
this differentiation to produce a tensor.


The spin-statistics connection is not necessary in nonrelativistic quantum mechanics, i.e.,
there is no necessary connection within this theory between the spin of a particle and the
symmetry (or antisymmetry) of the particle's wavefunction. This connection is, however,
demanded by the consistency requirements of relativity applied to quantum mechanics.

ΔQ is the Heisenberg uncertainty, not the total uncertainty, which includes statistical and
experimental uncertainties, etc.

There is a distinction between the quantum uncertainty, ΔQ, and the fluctuations in Q,
i.e., δQ, for all quantum systems except that of the quantum mechanical observer's own
brain.

Effortless execution of what has been performed and practiced on many previous
occasions is possible as we exploit the well-worn rut of a one-to-one, "onto" function.
Effortless improvisation exploits an altogether different functionality. Sometimes it does
seem as though we are getting some help in developing and ordering our thoughts,
somewhat after the fashion of a matching-funds subsidy. This is somewhat after the
fashion, again, in which the poet's leaps of insight are facilitated by the simultaneous striving
after an appropriate-sounding verse. Another related phenomenon is that by which
certain conceptions, best expressed in a particular vernacular or dialect, are also more
easily conceived – even when these conceptions are altogether novel to the person
imitating a particular form of speech.

It is not that faster-than-light influences do not exist; it is just that these superluminal
influences are not measurable. The influences would possess the normally conserved
quantities in amounts less than the Heisenberg uncertainties of these quantities. These
quantities are in this case not conserved, and so, for example, spacetime symmetry is
absent over spacetime regions of dimension "smaller" than Δx^0, i.e., Δt, Δx^1, Δx^2, Δx^3.
These quantities are, of course, Δp_x, Δp_y, Δp_z, ΔE. The question arises whether such
small momenta and energies are actually not conserved or whether these quantities are
just below the threshold of the measurability necessary to verify the conservation of these
quantities. What this means is that there is a question about the origin of the
nonconservation of the Heisenberg uncertainties of these dynamical quantities. The
hypothesis advanced here is that the expectation values of these dynamical quantities are
jointly determined by the observer uncertainties in these quantities and the magnitude of
the vacuum fluctuations of these quantities. What is conserved is perhaps the vacuum
fluctuation component of the expectation value, while the nonconserved component
remains the quantum uncertainty itself.

Perhaps vacuum fluctuation momentum and energy are nonconserved quantities
only in the restricted sense of there being no measurable inputs of momentum and energy
to the continuity equation of fluctuation momentum current density.

Perhaps the expectation values of, for example, a system's momentum or energy are
conserved quantities due to a special coordination between the observer's uncertainties
and the vacuum fluctuations in the system's (matter + vacuum) momentum and energy.

Our theory is in difficulty in regard to one important point: we have maintained that
the magnitudes of the fluctuations in the quantum vacuum's momentum and energy
combine into a conserved four-vector (in the case of an "empty" spacetime), and
yet we are also maintaining that the gravitational field breaks the spacetime symmetry of
this vacuum, say, by disrupting the phase relationships of the individual amplitudes in the
vacuum's momentum and energy fluctuations.

There is another form of probability, of course, which is that of detector efficiency.

There is a conscious experience corresponding to each alternative possible outcome of an
act of observation (of the state of a quantum system); therefore consciousness has nothing
whatever to do with the selection from among these alternative possibilities.

There is no basis provided within Everett's theory for distinguishing actual and
possible quantum states. Everett's theory depends upon the possibility of there being a
complete quantum description of the Universe.

Of course, a computation, taking place within the Universe, of the state of the Universe as
a whole is in principle impossible, unless, perhaps, one is only speaking of this whole as
a kind of shell or outer form with inner states which remain undetermined. An
interesting thought here is that considerations of logic only tell us that something is
impossible, but leave completely unexplained the why and wherefore of this
impossibility. The question arises whether there is some meaning in asking why some
thing or event, dictated by logic to be an impossibility, is impossible in terms of
limitations within a physical order.

Consciousness is a structure of simultaneity, which is grounded in the integration of
temporality. Does this imply that temporality can exist in an aboriginal, unintegrated
state, only at a later time being brought, paradoxically, into a state of greater integral
wholeness? Isn't a mere single dimension of time an inadequate domain from which the
functions of integration are to take their arguments, i.e., independent variables?

Temporal multidimensionality may be found within Everett's Many Worlds theory
of quantum measurement, if the integration of temporality is effected through the interaction
of physically noncommunicating branches of the Universe (or merely of the individual
mind's universe).

Conscious action may be defined within a quantum mechanical context as the
organization of undetermined quantum states by thought.

We might say that choice transcends quantum description because only the relative
statistical weights are given for each eigenfunction within a given quantum superposition
state; but if we do, we neglect the fact that however human beings choose to actualize the
eigenstates of their quantum brain superposition states, over time these choices must
collectively always be achieved in such a manner that the relative statistical weights are
not violated. Does this suggest that if, unlike the case of pure statistical probability in
the classical world, the actualizations of quantum brain states by the individual human
person have continued to defy, if you will, what is otherwise indicated by the relative
statistical weightings of the individual quantum brain eigenstates, then some
quantum potential must have all along been concurrently building up (in the quantum
vacuum with which the person's brain is interacting or, perhaps, within that person's
brain itself), which strives (and ultimately must succeed) to redress this disparity so that
the relative statistical probability weights remain in force? After all, Bohm's coauthor,
Basil Hiley, himself routinely referred to the quantum potential as an information
potential.
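The constraint described here, that individual actualizations, however they are "chosen," must collectively reproduce the relative statistical weights, is just the long-run convergence of outcome frequencies to the Born weights. A simulation sketch, with a hypothetical three-eigenstate superposition (the weights 0.5, 0.3, 0.2 are illustrative, not from the text):

```python
import random

# Sketch: sample "actualizations" of a 3-eigenstate superposition and check
# that long-run frequencies approach the relative statistical weights.
random.seed(0)
weights = [0.5, 0.3, 0.2]   # hypothetical |c_i|^2 values
n = 100_000
counts = [0, 0, 0]

for _ in range(n):
    r = random.random()
    if r < weights[0]:
        counts[0] += 1
    elif r < weights[0] + weights[1]:
        counts[1] += 1
    else:
        counts[2] += 1

# Empirical frequencies converge toward the weights as n grows.
freqs = [c / n for c in counts]
```

Whatever mechanism selects individual outcomes, any mechanism consistent with quantum statistics must leave these frequencies intact, which is exactly the "redressing" tendency the paragraph speculates about.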

The fundamental difference between the eigenstates of a superposition and those of a
statistical mixture is essentially this: in a statistical mixture of quantum states there is one
state among the states of the mixture that the system is actually in; in the case of a
superposition the system is not secretly already in one of these possible states because
this has yet to be determined by, e.g., observation or decoherence.

The order of spacelike separated events is only determined within an individual
consciousness.

The observer can't copy a quantum state, but can arrange things so that the system and his own
brain state form a joint quantum state.


The dynamism of fragments resulting from a broken symmetry is convergent and
evolutionary. The internal symmetry is broken and simple enough to fit into externality
whereupon the original symmetrical, highly-ordered structure can be approximately
reconstituted. This reconstitution is effected and sustained by a set of locally symmetric
interactions, which can be contained within locality, i.e., spacetime.

The breaking of the infinite, global symmetry of an infinite, unified consciousness
produced the local gauge symmetric network of interacting (temporal) consciousnesses
(minds). The order which began as infinite and nondistributed has been transmuted into
finite and distributed order of intersubjective spacetime.

Spacetime is the symmetry domain of momentum-energy. What is the symmetry
domain for data-information?

Gravitational redshift can be interpreted as a lengthening of the photon's wavelength or,
equivalently, as a reduction in the photon's momentum, which in turn manifests as a shift
in the photon's wavelength/frequency.
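The two readings agree quantitatively, since E = h·f and p = h/λ, so a fractional momentum loss equals the fractional frequency shift. A sketch using the standard weak-field formula z ≈ GM/(Rc²) with solar values (the formula and numbers are standard results, not derived in the text):

```python
# Sketch: weak-field gravitational redshift for light leaving the Sun.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M = 1.989e30      # solar mass, kg
R = 6.957e8       # solar radius, m

# z = (lambda_observed - lambda_emitted) / lambda_emitted
z = G * M / (R * c * c)
# z is ~2e-6: a ~2 ppm lengthening of wavelength, equivalently a ~2 ppm
# reduction of the photon momentum p = h/lambda.
```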

We must distinguish changes in the energy of the crystalline lattice 3-vacuum due to its
interaction with the 4-vacuum versus changes in 3-vacuum energy due to gauge
interaction within this 3-vacuum.

Classical GR schema: matter tells spacetime how to curve and the curvature of spacetime
informs matter's geodesic motion.

Quantum GR schema: the momentum-energy (of matter) affects the fluctuation momentum-
energy of the quantum vacuum through their shared quantum statistics. And the
structure of the quantum vacuum fluctuation momentum-energy in turn affects the
expectation values of the momentum-energy. Here the effects of fluctuations in
position-time, i.e., spacetime fluctuations, are not fundamental, i.e., are not on an equal
footing with momentum-energy fluctuations. In other words, each fluctuation in position
or time (which under resonance produces nonzero alterations in the expectation values of
x^i and t) is derivative of its corresponding momentum or energy fluctuation. And so, on
this view, spacetime fluctuations do not themselves have any independent existence, i.e.,
virtual gravitons are fictional entities (even as virtual particles!). The virtual transition
of energy between distinct levels of the crystalline vacuum lattice may be interpreted as
virtual, timelike tunneling of fermions of the lattice to higher energy levels of the crystal
followed by decay of these virtual, excited states.

But this prevents the existence of real gravitons as mere excitations of the vacuum
graviton field. Moreover, on this view, gravitational waves are an artifact of a purely
phenomenological description of a peculiar dynamics of gravitationally perturbed matter
distributions.

Such is the difference between a particle tunneling through a potential barrier versus
tunneling over this barrier.

Can we understand the quantum measurement process as an example of induced
symmetry breaking? That is, because the measurement of an observable of a complex
quantum state always returns a real number, what is quantum-measured is taken from the
general linear group GL(n; C) to the general linear subgroup of GL(n; C), i.e., GL(n; R).

If so, then what is the quantity that is conserved within GL(n; C) which is no longer
conserved within GL(n; R)? The uncertainties in the observables commuting with H?
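A small numerical illustration of why measurement "always returns a real number": observables are Hermitian operators, and a Hermitian matrix acting on a complex space nevertheless has purely real eigenvalues. A sketch with a randomly generated observable (the 4x4 size and random construction are illustrative choices):

```python
import numpy as np

# Sketch: build a random Hermitian "observable" on a complex state space
# and confirm its spectrum (the possible measurement outcomes) is real.
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
h = (a + a.conj().T) / 2            # Hermitian part: H = H^dagger

eigvals = np.linalg.eigvalsh(h)     # real-valued spectrum for Hermitian input
max_imag = float(np.max(np.abs(np.imag(eigvals))))
```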


Gravitational redshift: as the photon is parallel transported along with its simultaneity
hypersurface (image), the imaginary momentum of the hypersurface increases, with total
4-momentum conserved. 02-14-02



Not only does the concentration of the nongravitational binding force interactions (i.e.,
virtual photon, W, Z, gluon, etc. exchanges between real fermions) increase, but along
with this, field energy from these increasing binding forces excites "unoccupied" virtual
fermion "vacuum modes," resulting in the concomitant creation (out of this field energy)
of real fermions, which then become part of the structure of the gravitationally collapsing
matter distribution. And there is reason to believe (of course, also from the standpoint of
symmetry considerations) that there obtains an equipartition of field energy (real
fermions) and interaction or binding energy (virtual bosons).

Do fermions and bosons provide boundary conditions for each other?

Exchanging a fermion for an antifermion induces a sign change in the time variable.
Exchanging a boson for an antiboson induces no sign change in either the space or time variables.


θ = k·x − ω·t

Δθ = Δk·x + k·Δx − Δω·t − ω·Δt
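The second line is the first-order (total differential) expansion of the first, and can be checked numerically. A sketch in one spatial dimension with arbitrary illustrative values:

```python
# Sketch: verify that d(theta) = dk*x + k*dx - dw*t - w*dt is the
# first-order variation of theta = k*x - w*t (1-D, arbitrary units).
def theta(k, x, w, t):
    return k * x - w * t

k, x, w, t = 2.0, 3.0, 5.0, 1.5          # illustrative base point
dk, dx, dw, dt = 1e-6, 2e-6, -1e-6, 3e-6  # small increments

exact = theta(k + dk, x + dx, w + dw, t + dt) - theta(k, x, w, t)
first_order = dk * x + k * dx - dw * t - w * dt

# The two agree up to second-order terms (dk*dx - dw*dt ~ 5e-12 here).
err = abs(exact - first_order)
```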

Oscillatory transmission coefficient for a barrier tunneling problem may be a good
analogy for the Hubble recessional velocity periodicity of 72 km/sec/Mpc for the
cosmological expansion.


Virtual gravitons are just stress-energy fluctuations of the quantum vacuum, which can
also be understood, through a generalization of the Heisenberg uncertainty principle, as
fluctuations in the coupling of space to time, i.e., as fluctuations in the strength of the
coupling of momentum and energy fluctuations to the quantum vacuum.

"Late time normalization" involves an accounting system, i.e., the conservation of
probability (or probability density, tensor density, etc.), which subsumes (and may well
essentially depend on) the action of nonlocal quantum influences.

Nonlocal interactions seem to be necessary for a system described by a wavefunction.
This is because the wavefunction perfectly mimics a particular class of random function,
although the physics underpinned by this function's dynamics is anything but random.

Although the momentum-energy of a system of charged particles and that of the
electromagnetic fields do not each separately transform covariantly, the momentum-energy
of the combined system of both does transform covariantly.

ΔX_μν × ΔT^μν ≥ h² (possible generalization of the Heisenberg Uncertainty
Principle?)

But don't we need a metric tensor, g_μν, in front of the left-hand side of the above
equation? So here we have fluctuations in T^μν, i.e., δT^μν, related to fluctuations in
X_μν, i.e., δX_μν. We have already related fluctuations in 3-momentum to the exchanges of
vector bosons within 3-space and fluctuations in energy to the creation/annihilation of
virtual fermion-antifermion pairs in vacuum. How are we to represent fluctuations in the
stress component of the vacuum momentum-energy tensor, t_μν, in terms of the
fundamental particle and field processes of the quantum vacuum? We mentioned earlier
that these stress fluctuations could be described in terms of the exchange of spin-2
particles, i.e., gravitons.

Now the quantity T^μν in ΔT^μν (above) must be a conserved quantity, and so the quantity
X_μν in ΔX_μν (above) must be an unconserved quantity. But then this means that X_μν
cannot be properly quantized, such that gravitons (as the spin-2 quanta of spacetime)
cannot exist.

Should scattering in vacuum be treated any differently than scattering in a refractive or
dielectric medium?

A four-vector equation shows the relationship of two quantities, one of which is
conserved. There should be a symmetry associated with the space defined by the
unconserved quantity. Then there should also exist an uncertainty relation (of the
Heisenberg variety) obtaining between the uncertainties in both the conserved and
unconserved quantities.

Can we speak of the symmetry space of the unconserved quantity as being "curved" or
"warped" as a result of the action of the conserved quantity, say due to some dynamism
taking place "within" the local symmetry space "occupied" by the conserved quantity?

The gravitational field is the spacetime field, which exists against the backdrop of an
infinite, Euclidean space?

Topology is a consideration of quantum statistics because of the underlying
transformation of the exchange of identical particles. Can many different topologies
have the very same metric, or is it that any given topology can possess myriad distinct
metrics? Presumably the metric tensor varies throughout the spacetime of a
gravitational field even though the topology of the spacetime manifold, i.e., its
connectedness, remains constant.

Can we calculate the increase in entropy resulting from the breaking of a symmetry?
Look at the interacting gas compartment model of the entropy of mixing as a kind of
reduction in symmetry.
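For the gas-compartment model mentioned, the entropy of mixing has a closed form. A sketch using two moles of distinguishable ideal gases, one per compartment (the standard ΔS = 2·N·k_B·ln 2 result, not derived in the text):

```python
import math

# Sketch: entropy increase when the partition between two compartments,
# each holding N particles of a *different* ideal gas, is removed.
# Each gas doubles its available volume: dS = 2 * N * k_B * ln(2).
k_B = 1.380649e-23     # Boltzmann constant, J/K
N = 6.02214076e23      # one mole per compartment

dS = 2 * N * k_B * math.log(2)   # ~11.5 J/K for two moles mixing
# If the gases are identical the entropy change is zero (Gibbs paradox),
# i.e., the "symmetry reduction" must be real, not merely nominal.
```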

Fluctuation size as related to 1/√n in the occupation number formalism of quantum
mechanics. The mathematics of thermal fluctuations adequately describes thermal
phenomena only if a background heat reservoir is assumed.
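The 1/√n scaling of relative fluctuation size can be illustrated by simulation; a sketch assuming Poisson-distributed occupation numbers (an illustrative modeling choice, for which std/mean = 1/√n exactly in expectation):

```python
import numpy as np

# Sketch: relative fluctuation (std/mean) of a mode occupation number n
# falls off as 1/sqrt(n), simulated with Poisson-distributed occupations.
rng = np.random.default_rng(1)

def relative_fluctuation(n, trials=5000):
    samples = rng.poisson(lam=n, size=trials)
    return samples.std() / samples.mean()

r_small = relative_fluctuation(100)     # expected near 1/sqrt(100)  = 0.1
r_large = relative_fluctuation(10000)   # expected near 1/sqrt(10000) = 0.01
```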

Rotation angular momentum – absolute direction – inertial guidance system. Δφ·ΔL ≥ h.

Phase relations are disrupted by observation and by a gravitational field. Spin versus phase
fluctuations. Special relativity requires spin statistics, and general relativity is mediated
by spin-statistical forces.

Metaphor underpins the integral consistency of culture. It is because of this fact that we
may meaningfully speak of revolutions occurring in numerous distinct fields of human
endeavor simultaneously.

If it is possible to move relative to an always greater fraction of the total vacuum energy
density contained within a region upon which is impressed a gravitational field, this is
due to the locally reduced value of the velocity of light within this region.

Phase is determined by both frequency, f, and wavenumber, k. In other words, φ_sys is
determined by the energy and momentum of the quantum mechanical system. Changes
in the phase with time may be linked to fluctuations in the quantum mechanical system's
energy, while changes in φ_sys along coordinate intervals may be linked to fluctuations in
the quantum mechanical system's momentum. Are nondeterministic functions described
by anharmonic Fourier expansions, deterministic ones by harmonic?

Performing a quantum measurement disrupts the delicate phase relationships between the
orthogonal modes of the quantum mechanical system, i.e., between the system's
orthogonal eigenstates. Does quantum mechanical measurement abruptly introduce
anharmonicity into the system? Does adiabatic change in the system mean merely a shift
in the amplitudes of the system's eigenfunctions?

In some gauge theories, the vacuum is defined by a field that induces a breaking of the
vacuum's symmetry (the Higgs mechanism), giving particles their respective masses (as
opposed to merely promoting the already defined, potential masses of each type of particle
from their status as heretofore latent and virtual to a new status as actual
(or manifest) and real).

On p. 152 of How is Quantum Field Theory Possible?, we read, "Quantum theories tell us
there is no coherent formulation of nothingness."

All operators corresponding to quantum vacuum observables, e.g., spin, momentum,
energy, etc., are expressible in terms of the creation and annihilation operators â† and â.
â† and â are equivalent (in the case of small amplitudes of oscillation) to the excitation
and deexcitation of the quantum vacuum's normal modes of harmonic oscillation. Of
course, for large amplitudes of excitation (and deexcitation, as well?), the distinct
harmonics are no longer independent of one another, i.e., orthogonal. In this situation
the vacuum has been excited into anharmonic oscillation. By what mechanism are we to
model the induced interaction of the vacuum normal modes under the condition of
anharmonicity? Does anharmonicity imply the occurrence of broken symmetry, e.g.,
spatiotemporal or in the intricate phase relations of these normal modes? Does the
disruption of harmonicity imply the exchange of quantities of phase between the
interacting normal modes, and is the quantity of phase so exchanged itself quantized?
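A truncated-Fock-space sketch of these operators (the 6-level truncation is an illustrative choice, not from the text): observables such as the number operator are built from â† and â exactly as the paragraph describes.

```python
import numpy as np

# Sketch: matrix representations of the annihilation operator a and the
# creation operator a-dagger on a Fock space truncated to 6 levels.
d = 6
a = np.diag(np.sqrt(np.arange(1, d)), k=1)   # a|n>      = sqrt(n)   |n-1>
adag = a.conj().T                            # a-dag|n>  = sqrt(n+1) |n+1>

# Number operator N = a-dag * a has eigenvalues 0, 1, ..., d-1.
N = adag @ a

# Canonical commutator [a, a-dag] = 1 holds except at the truncation
# boundary (an artifact of the finite-dimensional cutoff).
comm = a @ adag - adag @ a
```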


The comments about the accelerated electric charge pertain to freely-falling charges.

Are the energies of vacuum fluctuations blue or red shifted relative to an accelerated
body – a body either being accelerated within free space or freely falling in a
gravitational field? Or are the probability current densities of these fluctuations merely
shifted in their distributions?

Re-examine the question of gravitating quantum vacuum in light of the notion of metric
degeneracy of spacetime topological fluctuations.

Distance and time interval can be defined in terms of the line density of momentum and
energy fluctuation events, respectively.


Think about Δx and Δp in relation to double potential barrier height (in distinguishing the
two interpretations of tunneling). This question of interpretation can be connected to
another one where vacuum spacetime fluctuations are interpreted as either
phenomenological or physically real.


Cf. p. 247, Theoretical Concepts in Physics (Longair): "fluctuations in the field are of
the same magnitude as the energy density of the radiation itself" (waves of random
phase for a particular mode of oscillation).


An electric charge moves in such a manner (when undergoing accelerated motion, that is)
as to remain at the center of its "field emanations," so that it "appears to itself" to be at
rest. This is an example of forces being induced by the breaking of a symmetric vacuum
field. This would seem to imply that a component of this charged particle's mass is
electromagnetic in origin. We may make similar observations concerning the other
components of the particle's mass in terms of its maintaining local symmetry as a source
of other fundamental force fields, i.e., strong nuclear, weak nuclear, and perhaps also
gravitational. Such a particle, while being accelerated in "free space," disrupts the
global spacetime symmetry of the vacuum electromagnetic field, along with the other
global symmetries of this vacuum – those associated with the other fundamental forces.
But then the particle proceeds to emit and absorb various force-carrying quanta, i.e.,
bosons, in such a manner that the local vacuum symmetries with respect to these
fundamental fields are retained. A kind of "self-force" develops out of this tendency of
the particle to maintain the symmetries of its local spacetime (vacuum), which acts in the
opposite sense to the acceleration. And it is these self-forces, acting in concert, which
account for the inertia of the particle. In the case where this particle is bound up within
the quantal structure of bulk matter, considered to be "at rest," the fundamental fields
now manifest themselves in the form of continual emission and absorption of force-
carrying quanta.

These are the same forces which manifested themselves as the self-forces described
above for an accelerating particle. These forces are now mediated by the to-and-fro
exchange of force-carrying quanta between the particle and its immediate neighbors (to
which it is bound). These are the forces underlying the binding energies of bulk matter,
which collectively engender its inertial mass. The sum of the momentum current
densities, δp_i, is at the expense, as explained earlier, of the imaginary component, δp_î,
of the total 4-momentum current density of the "free space" quantum vacuum. In the
absence of internal stresses and strains within this bulk matter, the fluctuation 4-
momentum current density tensor is diagonal, containing only pressure and energy
density terms.

In the special case of the event horizon of a spherical black hole, the pressure terms are
maximal and the energy density term is 0. In other words, at the event horizon of the
hole, all of the fluctuation 4-momentum current density is in the form of internal
exchanges of fluctuation 3-momentum. The time-time component, as well as the other
timelike components, of the fluctuation stress-momentum-energy tensor for the local
vacuum (just outside the event horizon) are 0. So at the event horizon of a black hole, the
energy uncertainty of the quantum vacuum is also 0. Notice that the fluctuation energies
(as opposed to momenta) associated with all of the fundamental vacuum fields are
each, individually, 0 (since there can be no mutual cancellation between the energies of
distinct spin-1 fields).

The duration of individual vacuum energy fluctuations is not what is altered by time
dilation within a gravitational field, just the interval between these energy transitions and
the current density of the vacuum fluctuations in energy relative to the increased
momentum current densities within bulk matter. Similarly, the wavelength of individual
momentum fluctuations is not contracted in the radial direction within a gravitational
field, just the average distance between individual momentum fluctuation events. If this
is a more or less correct interpretation, then how are we to explain the phenomenon of
gravitational redshift?

So would energy and momentum here be collective phenomena rather than absolute
physical quantities, such that the momentum and energy of real particles are not absolute,
for instance, but merely change relative to the abstract representations of the
vacuum's momentum and energy grounded in this vacuum's momentum and energy
current densities? This interpretation is perhaps consistent with, and helps to clarify the
meaning of, the notion of time being relative (rather than "absolute").

So it is actually the tidal forces associated with a gravitational field that produce the
acceleration of massive bodies.

By virtue of the Bose principle (PIP), the enhanced binding energy of the vacuum
associated with its enhanced density of 3-momentum fluctuations / suppressed energy
(imaginary 4-momentum) fluctuations induces a mirroring (of this shift in the magnitude
of the components of its stress-momentum-energy fluctuation tensor) by the equivalent
matter fluctuation tensor.

The question arises: if the fluctuations in the momentum and energy of the particles of
which the mass is composed are independent of the fluctuations of the local vacuum
state, then, interpreting either set of fluctuations as the Heisenberg uncertainties in
momentum and energy and the remaining set of fluctuations as the fluctuation terms of
these two quantities, nonzero expectation values result for the momentum and energy of
this matter, provided these two sets of fluctuations, matter and vacuum, are appropriately
correlated. More specifically, negatively correlated sets of fluctuations will produce
nonzero expectation values of the momentum or energy, and positively correlated sets of
fluctuations will produce zero expectation values of the momentum or energy. An
important question here is: how are vacuum momentum fluctuations related to vacuum
energy fluctuations (within the same spacetime "neighborhood")?

This tendency of the structure of matter to imitate/borrow from that of the sustaining
vacuum energy is most directly suggested by the distinction of real versus virtual
particles and fields, and is physically realized through the action of PIP and the divergence-
free nature of T^αβ, i.e., the fact

that ∇_α T^αβ = 0,

where T^αβ is the vacuum stress-momentum-energy fluctuation tensor and ∇_α is a
covariant derivative.

The symmetry of the real vs. virtual duality is broken in at least one important and perhaps
highly significant sense: real particles possess mass both singly and collectively; virtual
particles possess this property of mass only as individual particles. This is because the
principles of PEP and PIP apply equally to the interaction of virtual with virtual particles
as they do to the interaction of real particles with virtual particles. The characteristic of
massiveness does not extend to virtual particles collectively because virtual particles
generally do not possess a continuous existence, i.e., existence through more than one
"period" defined by Planck's constant divided by the particle's energy.

In terms of the cellular automata (CA) model of vacuum and matter, the information
processing capacity that must be respectively devoted to the computation of virtual versus
real particle structures (individual and collective) and virtual versus real spacetime
trajectories (evolution) surely must be of widely divergent magnitude.

The difference in the computational resources required (on the part of the quantum
vacuum) in order to continuously recreate a particle of the same type or class, according to
a statistical rule, versus the computational resources that must be devoted to the task of
continuously recreating a certain individual (individually labeled) particle, according to a
deterministic rule, must be one of many orders of magnitude. This difference in order of
magnitude may well approximate that between the vacuum energy densities predicted by
quantum theory for the quantum vacuum and by general relativity theory for the
cosmological constant. In this manner, CA theory may allow a large step in the direction
of reconciling these two fundamental and radically conflicting theories.

The probability of throwing a "head" and a "tail" when tossing two coins is, classically
speaking, 1/2. The quantum probability of throwing a "head" and a "tail" is just 1/3. In
the first case, the two coins are distinguishable; in the latter, they are indistinguishable.
This is far less than an order of magnitude difference, if we carry this over from the
context of simple probabilities to that of probability densities or probability current
densities and probability rates.
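Both counts can be checked by direct enumeration; a sketch of the coin analogy as stated, with classical (ordered) versus Bose-Einstein (unordered) outcome counting:

```python
from itertools import product
from fractions import Fraction

# Classical (distinguishable coins): 4 equally likely ordered outcomes
# HH, HT, TH, TT; "one head and one tail" occurs in 2 of them.
ordered = list(product("HT", repeat=2))
p_classical = Fraction(sum(1 for o in ordered if set(o) == {"H", "T"}),
                       len(ordered))             # 2/4 = 1/2

# Bose-Einstein counting (indistinguishable outcomes): 3 equally likely
# unordered outcomes {HH}, {HT}, {TT}; the mixed outcome is 1 of them.
unordered = {tuple(sorted(o)) for o in ordered}
p_quantum = Fraction(sum(1 for o in unordered if set(o) == {"H", "T"}),
                     len(unordered))             # 1/3
```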

When distinguishing "throws" (which may be likened to creation/annihilation events), we
note that, classically, when distinguishing coins (which may be likened to distinguishing
particles), the probability of each possible "throw" is 1/4 or 0.25. When not
distinguishing "throws," which is what is required in analogy with the statistical
requirements of quantum theory, the probability of each possible "throw" is 1/3 or
0.33. So how can we, making use of this analogy between coins and particles and
between "throws" and quantum "events," a, a†, account for the fantastic order-of-
magnitude difference between what are effectively the probability current densities of the
vacuum in QM and GR? The ratio (0.33/0.25)^n with n ~ 1000 yields a factor of an order of
magnitude of 10^120.
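The closing arithmetic can be checked directly; a one-line sketch of the exponent:

```python
import math

# Sketch: check that (0.33/0.25)^n with n ~ 1000 is of order 10^120.
ratio = 0.33 / 0.25            # = 1.32
n = 1000
order = n * math.log10(ratio)  # exponent of 10 in ratio**n
# order is ~120.6, i.e., ratio**1000 is a factor of roughly 10^120,
# matching the famous QM/GR vacuum energy density discrepancy.
```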

The computational resource requirements for a representation within the computational
state space of the cellular automaton (CA), in relation to the total computational
resources available to the CA with which to compute this representation, is the principal
determining factor of the inertial mass exhibited by the representation (representational
object). The information content of a state space configuration is very closely tied to the
probability of this configuration relative to that of the other distinct configurations of the
state space. When the degeneracy of a representation increases, its relative probability
within the state space is enhanced.

Within classical CA theory, there is no meaning in the notion of a degenerate state;
degeneracy is always that of a representation within the state space with respect to some
quantum observable; identical states are indistinguishable within classical CA theory
(time-independent quantum state space, i.e., state space in which the total set of possible
configurations is closed?). Representations, which may comprise any number of distinct
states, may be degenerate. The degeneracy of an individual state would allow such a state
to continue through more than one clock cycle of the CA's CPU, which would imply the
possibility of distinguishing identical states of the CA state space. In the case where
individual identical states of the CA state space are distinguishable (time-dependent
quantum state space? – state space with memory?), then the probabilities of a given state
are

Alternatively, the divergence-free nature of T^αβ, i.e., ∇_α T^αβ = 0, may itself be
understood to stem from the combined manifestation of Pauli Exclusion and Bose
Inclusion, if you will (what we have been calling the Pauli Inclusion Principle).

When the quantum uncertainties are expressed in terms of the creation and annihilation operators, a and a†, the causal indeterminacy underlying quantum mechanical phenomena is much more clearly discernible.

All of the basic definitions of special relativity presuppose the concept of "event." This concept is not clearly and unambiguously defined within this theory.

The apparent standing contradiction between two otherwise "correct" theories, general relativity and quantum mechanics, in terms of these theories' widely divergent predictions of the vacuum's energy density, combined with the renormalization problem of quantum gravity theory as well as the corresponding absence of experimental observations of gravitational waves, strongly suggests that a SUSY theory in which both the quantum of the gravitational field and a gravitating vacuum energy are absent may hold much promise as a future successful theory that solves these problems as well as reconciling QM and GR.

Mass skews the spacetime symmetry of the quantum vacuum via the Pauli exclusion mechanism (which, according to Feynman, applies equally to both real and virtual fermions) and, concomitantly, via the mechanism underlying the phenomena of Bose condensation, superconductivity, superfluidity, lasing, etc. The principle of the action of this mechanism we will term, for convenience's sake, the Pauli Inclusion Principle (PIP). This mechanism also applies equally to both real and virtual bosons.

Because matter, through the Pauli Exclusion Principle (PEP), acts upon scalar energy functions of the vacuum, in the form of, e.g., virtual e⁺e⁻ creation-annihilation events, in a precisely converse manner to how matter acts via PIP upon the vector momentum fluctuations of this vacuum, the overall effect of matter upon the momentum and energy fluctuations of the vacuum is to destroy the diagonality of the vacuum's momentum-energy fluctuation tensor, effectively rotating (relative to some cosmic background spacetime, perhaps defined in terms of nonlocal energy fluctuations) the local space and time uncertainties experienced by, say, test masses within this modified vacuum. The result is that time dilation and length contraction (in the direction of the gravitating mass) are experienced by the mass in this vacuum, simultaneous with the appearance of stress fluctuation terms within the momentum-energy fluctuation tensor.

Push and pull forces are responsible for changes in the diagonal components of the momentum-energy fluctuation tensor, which are described in terms of virtual spin-1 particles (bosons). However, the mutual exchange of stresses within the mass's bulk must be described in terms of the mutual exchange of virtual spin-2 particles (gravitons).

The transformation of a 1st-rank tensor is described, of course, by multiplication of this vector by a 2nd-rank tensor. The transformation of a 2nd-rank tensor (even where this tensor's diagonal components are all 0) into another 2nd-rank tensor must be described by a 2nd-rank tensor being multiplied by a 4th-rank tensor.

The structure of the spacetime fluctuation tensor (a mere phenomenological entity) may
be related to the metric tensor of GR.

The stress-energy-momentum fluctuation tensor of combined matter and vacuum may be related to the tensor T_uv of GR. The 4th-rank tensor alluded to above may be related to the Riemannian curvature tensor of GR. The fact that weak gravitational fields in the vacuum correlate with a vacuum momentum-energy fluctuation tensor which contains negligibly small off-diagonal components sensibly corresponds to a vacuum dominated by scalar and vector fluctuations in its stress, a case where a scalar-vector theory of gravitation such as that of Newtonian gravity stands as a more than adequate, approximate description of the dynamics of mass distributions.

Psi represents everything which can be known about the system described by Psi. But
this is not correct in the unqualified manner in which this assertion is usually made. For
Psi is always somewhere at some time, i.e., Psi = Psi(x,t). And although we cannot
perform a measurement upon the system described by Psi(x,t) without inadvertently
inducing a discontinuous change in its wavefunction (unless the system happened to
already be in an eigenstate of the operator corresponding to the type of measurement of
the system we are performing), the system itself is not destroyed by such a measurement
being performed upon it, merely the quantum state of the system has been
discontinuously changed. The transition from Psi(1) to Psi(2), where Psi(j) does not represent an eigenstate of the relevant operator, could not, of course, have been predicted from Psi(1), nor could it have been predicted from the mutual interference of the wavefunction describing the observer with the wavefunction Psi(1).

A more precise way of stating the fact of the relativity of time, or the "rate of its passage" (a rather problematic notion, by the way), is to do so in terms of relative probability rates, where the notion of quantum probability can only be rendered logically consistent by bringing in the notion of probability density. Density of what, of some kind of substance of events? And the answer here is both: the probability density of that which constitutes the substance of events as such. More specifically, the relevant quantity is the density of events within spacetime, rather than simply within space alone. This is because, although it is prima facie possible (on a deeper level, within the context of general relativity, this can be questioned) for motion to occur along one spatial axis without the necessity of motion, however small, along perpendicular spatial axes, this is not true where "motion" along the time axis is concerned. This is in part because relative motion in space (and its associated momentum) is dependent upon a borrowing, during some earlier phase, of kinetic energy associated with the erstwhile purely timelike motion of the starting masses.


Regardless of what happens to the wavefunction, one's consciousness does not "come to a halt." The wavefunction, we are told, represents the most that one can possibly objectively know about a quantum system that is in a pure state (with respect to some system observable). Of course, if the system is in a superposition state with respect to (wrt) one particular observable, it must be a so-called pure state with respect to each incompatible, or conjugate, observable. And this is just the basis of the uncertainty principle via the theorems of Fourier analysis. Two questions arise here; one for the moment appears to be merely secondary, the other more fundamental, although we might find later that this situation is actually the reverse.

What does the quantum state of a given observer's brain then represent? The most that that observer can know about the quantum state, and the evolution of this state, which his brain is presently in.

For some reason, the entanglement of a given system's wavefunction with those forming the system's "sum of histories" does not trigger decoherence of Psi.

One's state of consciousness is thought by some to represent the continuous "self-measurement" by the observer of his own quantum state (that of his brain, at least, based on the theory that it is processes within one's brain that constitute all that is relevant to consciousness). Somehow this continuous self-measurement does not cause any reduction of the brain's Psi.

This would suggest that the brain is in an eigenstate of the observable which is the continual subject of (self-)measurement.


Nothing to distinguish one alternate universe (in the superposition) from another without
intention and awareness. The superposition possessed temporal symmetry (time
translation symmetry) so that energy was conserved.

Energy uncertainty without the necessity of vacuum fluctuations – superposition within a
closed system. So here we have two types of fluctuation: that which is only definitional,
i.e., fluctuations within a closed system, and the other which is inherently unpredictable,
i.e., not part of any system behavior. So we may make the distinction of these two
fundamental types of fluctuation thus: fluctuations that are real versus virtual
perturbations. Temporality cannot subsist in virtual perturbations – real fluctuations
must be present.


The basis for the radical overdetermination of nature is the fact of its origin being an infinitely symmetric state. This infinitely symmetric "initial state" was partially broken in the "moment of creation." Therefore all of the symmetries observed in nature are partial ones, described by subgroups of some original symmetry group.


Is some kind of degeneracy and symmetry involved in the overdetermination of one's interpretation of one's own mental states? Causal connections as interpretations of correlations, and causal entities as "beds of correlation."


Now a given mass, accelerated to a velocity approaching that of light, cannot so closely approach this delimiting value as to become more massive than its own black hole mass. And the black hole mass is determined by the density of the mass having reached the delimiting value of the vacuum energy density/c² for the particular volume occupied by the accelerating mass. Also, we know that the acceleration of the mass is just the 90° spacetime rotation of the mass's original, purely timelike velocity.

In this way the limiting velocity to which a given mass may be accelerated is a function of the energy density of the vacuum occupied by the accelerating mass. So we see now that the original, purely timelike four-velocity of the mass (prior to acceleration) is itself a function of the density of the mass relative to the density of the vacuum energy coextensive with the mass's displaced volume. But if the acceleration to light velocity is actually to be effected, the mass must utilize its own mass-energy as the "fuel" for propelling itself such that, upon actually approaching c, all of its starting mass has already been converted to photon (or other massless particle) energy. The fact that some of this fuel must come from the binding energy of the mass rather than from the positive mass-energy itself suggests that the mass, when "at rest," already possesses a small quantity of spacelike momentum connected with dynamical processes within the mass responsible for the existence of this binding energy.

A theoretical group describes a closed set of reversible operations on the elements of the group. What is closed is the set composed of the distinct operations; the number of distinct elements composing the domain, which the operations of the group take as inputs, may itself be infinite. When the symmetry of the group is "broken," the operations of the group are altered, and perhaps new operations are added, which are no longer reversible, singly or collectively. The discovery of a new mathematical group for the description of some physical phenomenon means that various manifestations of physical variables, heretofore treated as distinct, are now seen to be differing manifestations of the same underlying entity/physical quantity.
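The closure and reversibility properties described above can be checked mechanically for a small example group; the sketch below uses the symmetric group S3 as an illustrative stand-in, not a group drawn from the physics discussed here:

```python
from itertools import permutations

# Verify closure and invertibility for S3, the group of all
# permutations of three elements.
def compose(p, q):
    """Apply permutation q first, then p: (p o q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

S3 = list(permutations(range(3)))
identity = (0, 1, 2)

# Closure: composing any two elements stays inside the set.
assert all(compose(p, q) in S3 for p in S3 for q in S3)
# Reversibility: every element has an inverse within the set.
assert all(any(compose(p, q) == identity for q in S3) for p in S3)
print("S3 is closed and every operation is reversible")
```

Breaking the symmetry would correspond to removing or altering operations so that some elements no longer have inverses, at which point the assertions above would fail.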

Changes in state to some subpart of a closed dynamic system which are wholly attributable to other, earlier changes in neighboring subparts of the overarching system constitute, ceteris paribus, reversible, symmetric interactions. However, the reactions triggered by these interactions may not themselves be symmetric and time-reversible. Any asymmetries of the system's changes of state are ultimately traceable to input to the system wholly from outside it. The sense of "outside" intended here is one in which the system as a whole is not secretly a subpart of some overarching system, with perturbations to its outside transmitted to it and mediated by the overarching system's matrix, originating within some other subpart of the crypto-overarching system. What we are really saying here is that asymmetrical interactions are always, ultimately, between the uppermost overarching system and its outside. Of course, the uppermost overarching system in almost any physical setting is just the global spacetime continuum itself in which this system is "embedded."

Infinite degeneracy would mean that the wavefunction of the degenerate system could
undergo transition between an infinite number of eigenstates on a continuous spectrum
without measurable changes resulting to any of its quantum observables.

Should Psi be understood as a knowledge representation rather than the most that can
actually be known about the system?

Since no energy is required to effect changes in the system's quantum state from one to another of its energy-degenerate eigenstates, free will and conscious thought may presuppose energy degeneracy. Also, there is no basis for assigning scales of physical time to energy-degenerate quantum transitions.

Infinite symmetry may be understood as infinite sameness in which, no matter where the observer is and no matter what happens, no "objective" differences between here and there, and no changes from now to then, are in any sense evident. Now such a state of infinite symmetry in the absence of degeneracy (with respect to some set of as yet "unmanifest" parameters) is synonymous with an empty void or nothingness. Moreover, for there to exist a state of infinite symmetry, an infinite degeneracy must be present, that is, degeneracy with respect to an infinite number of distinct though still unmanifest parameters. But all this is to presume that there is some presently obtaining state which represents physical reality (as a whole, in some sense) but which is both underpinned by and insulated against an infinitely chaotic flux of change to an infinite number of still unmanifest proto-physical variables (proto-observables?). A tremendous quantity of entropy is released as a result of the breaking of this state of infinite symmetry (one might suppose an infinite quantity for this entropy), so that the aboriginal state of infinite symmetry is to be understood as possessing infinite order. The breaking of this infinitely symmetric initial state results in a kind of "unpacking" of this state into an intricate, dynamic structure/continuum of ordered and disordered energy, i.e., some of the symmetry (order) of the initial state survives the transition to the broken-symmetry state and the remainder is "lost." But a perfectly and infinitely symmetric state would possess no internal fluctuations in any of its group-symmetric parameters outside of the spectrum of potential change for the parameters defined within the symmetry group, and so it is hard to imagine how such a symmetric system, heretofore everlastingly cut off from any outside (a realm possessing foreign parameters or "off-scale" values of "domestic" parameters), could "spontaneously break," that is, without having been "helped" from outside.

If the Big Bang is simultaneous with the initial, abrupt phase of spontaneous symmetry breaking, then its continued expansion may perhaps be understood as a continuous further breaking of (global) symmetry in favor of the continued establishment of ever greater myriads of domains of local symmetry, interconnected through mutual exchanges of quantities of their local variables. We might suppose that the rate-density of entropy production is therefore uniform throughout the Universe, provided that one refers to the production of total entropy, local + global, i.e., that due to the local evolution of complex structures/systems + local contributions to the entropy production exclusively due to the global entropy production occurring as a consequence of cosmological expansion. This means that the global rate-density of entropy increase is affected by the rates of change of the local entropy. The rate-density of global entropy may be understood, paradoxical as this may seem, as the local rate of time's passage relative to changes in the cosmical time parameter. This cosmic time parameter is based upon changes in the total entropy of the Universe.


Ambiguities of interpretation of fluctuations occurring near the vertex of the light cone.
Ambiguities of the interpretation of quantum tunneling. Can we have tunneling of
particles across a timelike potential barrier? Can we construct an experiment to test this,
say one which involves the tuning/detuning of some already well understood tunneling?
The consistency of the notion of virtual gravity waves depends on the independent
existence of spacetime fluctuations.


At the event horizon of a black hole the space and time axes are in some sense reversed. This is because an object (or particle, so that we can neglect tidal effects) just outside of the event horizon possesses virtually all of its momentum in the direction of the hole's center and relatively none of it in a timelike direction. But neither is there any "room" for internal spacelike momentum for this mass (in the form of binding-force-mediating boson particle exchanges). All of the fluctuations of the vacuum are in the form of 3-momentum and none in the form of imaginary (timelike) 4-momentum. And so the mass is not so much torn apart as it is dissolved, due to the hole's sapping of the mass's own binding forces. In the extreme case depicted here, gravity is not truly a "force," but our projecting of our large, latent stores of common experience with everyday objects causes the phenomenology of gravitation to be naturally interpreted in terms of the action at a distance of a gravitating force. Gravitation is more accurately (or objectively) understood as the effects of mass upon a higher-dimensional quantum vacuum that must, after the fashion of a cellular automaton, simultaneously parallel-reprocess the data representing both matter and the colocated/coextensive 3-vacuum, and where this vacuum possesses only a finite computational capacity.

If the vacuum does not itself gravitate, then momentum and energy fluctuations do not directly cause spacetime fluctuations, but only do so indirectly through the effects of these momentum-energy vacuum fluctuations upon matter itself. Fluctuations of momentum and energy, in other words, directly produce fluctuations in x and t of test particles, which we conveniently interpret as the effects of fluctuations of x and t upon test particles. The problem here is that if we allow δx and δt to affect <p> and <E> directly, then we also have to permit them to affect δp and δE directly, with the result that the vacuum itself must possess a gravitational mass.

Perhaps the way to put time and space on an equal footing within a theory of quantum
gravity should be not to try to make the time parameter into a bona fide observable, but to
take away this status from the space variable, x.


This reminds us of the expansion of a permutation group through the discovery that some members of this group are not simple, but composite. Processing information means taking two pieces of information from two smaller, heretofore never directly connected contexts and bringing them together in such a way that one now has new information concerning the broader context that includes all of the original "pieces."

It has been stated many times by physicists that the existence of Heisenberg energy uncertainty implies the non-conservation of energy. Similar statements apply to the momentum. But with the unification of momentum and energy by special relativity into a conserved momentum-energy four-vector, and the concomitant unification of space and time into a symmetric spacetime continuum, it becomes possible for both Heisenberg momentum and energy uncertainty to exist throughout spacetime without the necessity of net uncertainty of the four-momentum throughout this spacetime. This appears possible if the vacuum fluctuations in 3-momentum are dynamically related to those of the energy, so that the sum of these two yields a magnitude of the four-momentum fluctuations of the vacuum which remains a constant 0, or undetectably close to 0, such that a) the quantum vacuum does not itself act as a gravitating source and b) the spacetime maintains (for all practical purposes) its Lorentz invariance. The action of mass through the mechanisms of Pauli Exclusion (and "Inclusion") upon the vacuum and, conversely, of this vacuum upon the mass, serves to disrupt this spacetime symmetry (of Lorentz invariance), causing an imbalance in the mutual cancellation of vacuum momentum and energy fluctuations. This imbalance, induced by mass on the vacuum, acts to give the vacuum an effective mass, just as this modified vacuum enhances the mass of test bodies introduced into it from outlying, virtual "free space." Mass is enhanced in two distinct but closely related ways: the real-time processing burden that increased momentum fluctuation densities (within vacuum occupied by masses) pose for the quantum vacuum that mediates them, and the reduced computational resources available to this vacuum with which to perform this function. On this view, the density of vacuum energy that acts as a source of gravitation is only that component of the vacuum's total energy density which fails to cancel with its momentum fluctuations. In this way, the effective vacuum energy density is determined by the density of mass in the universe. It is this tiny component of the total quantum vacuum energy density which we should term the cosmological constant.
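The cancellation condition described here, that energy and 3-momentum fluctuations sum to a vanishing four-momentum magnitude, can be sketched numerically. The fluctuation values below are invented toy numbers, with c set to 1:

```python
import math

# Toy check: if the energy fluctuation satisfies dE = |dp|*c, the
# Minkowski norm of the fluctuation four-momentum vanishes, so the
# vacuum contributes no net (gravitating) four-momentum uncertainty.
c = 1.0
dp = (0.3, 0.4, 0.0)                      # hypothetical 3-momentum fluctuation
dE = c * math.sqrt(sum(x * x for x in dp))  # matched energy fluctuation

norm_sq = (dE / c) ** 2 - sum(x * x for x in dp)  # (dE/c)^2 - |dp|^2
print(abs(norm_sq) < 1e-12)  # True: exact cancellation
```

Any imbalance between the two terms (as the note attributes to the presence of mass) would leave a nonzero residual norm, which on this view is what gravitates.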

The breaking of the spacetime symmetry of the quantum vacuum by mass is not the
fundamental or absolute symmetry breaking that requires the engendering of a new gauge
boson. This situation is quite unlike that where a broken global symmetry causes the
creation of a gauge boson, which, by being exchanged between splintered symmetry
domains, results in the restoring locally of the symmetry that was broken globally, e.g.,
the creation of the Higgs boson in the theory of electroweak symmetry breaking.
Another reason that the breaking of the vacuum spacetime symmetry by mass is not
fundamental is that mass is not an irreducible, conserved physical quantity, but is a
phenomenon produced by the peculiar manner in which the components of the total
(mass-energy + vacuum-energy) fluctuation stress-momentum-energy tensors mutually
interact.

Prior to the breaking of the spacetime symmetry of the vacuum by mass, the spectrum of momentum fluctuations within the vacuum is just that defined in terms of the spectrum of virtual transitions allowed between the discrete energy levels of this global vacuum state. This is exactly the situation we should expect if the quantum vacuum can correctly be modeled as a four-dimensional array of coupled harmonic oscillators, i.e., as a four-dimensional crystalline latticework. The additional 3-momentum fluctuations, over and above those permitted in the symmetric state of this vacuum, correspond to those transition energies of the modified vacuum lattice which are now forbidden by the state of broken symmetry.
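As a toy stand-in for the coupled-oscillator lattice picture (a 1-D chain rather than the four-dimensional lattice described above, with invented values for the site count and spring constant), the allowed fluctuation spectrum is just the set of normal-mode frequencies given by the standard dispersion relation w(q) = 2*sqrt(k/m)*|sin(q/2)|:

```python
import numpy as np

# Normal-mode spectrum of a periodic 1-D chain of N coupled oscillators.
# The discrete set of mode frequencies plays the role of the "allowed
# virtual transitions" between lattice energy levels.
N = 8            # hypothetical number of lattice sites
k_over_m = 1.0   # hypothetical spring constant / mass ratio

# Allowed wavenumbers q_j = 2*pi*j/N give frequencies w_j = 2*sqrt(k/m)*|sin(pi*j/N)|
modes = [2 * np.sqrt(k_over_m) * abs(np.sin(np.pi * j / N)) for j in range(N)]
print(sorted(set(round(w, 6) for w in modes)))
```

The spectrum is discrete and bounded; modifying the lattice (the analogue of symmetry breaking in the note) shifts which transition frequencies are allowed or forbidden.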

After this symmetry is broken, the spectrum of vacuum 3-momentum fluctuations becomes denser while that of the vacuum's energy fluctuations, i.e., fluctuations in the imaginary component of the vacuum's 4-momentum, becomes attenuated. Were this vacuum two-dimensional, this change in symmetry could have been effected by a simple "rotation" of the fluctuation momentum-energy vector, i.e., a 2nd-rank tensor would have been required to describe the rotation, but not the end state, which could be adequately described in terms of a 2-vector. And the dynamics of this rotation of momentum-energy within this 2d spacetime can be modeled in terms of the exchange of spin-1 gauge bosons. However, in four-dimensional spacetime, a 2nd-rank tensor is required to describe the vacuum's end state, that post symmetry-breaking. Moreover, a 4th-rank tensor is necessary to describe this transformation of the vacuum caused by the breaking of its symmetry, as mentioned earlier. Consequently, a transformation of the fluctuation stress-momentum-energy tensor can be described in terms of time-varying probability current densities of both spin-1 vector gauge bosons and spin-2 gauge bosons.


As the density of a given mass is increased, so increases the communications "bottleneck" between change within the mass, mediated by momentum fluctuations in the form of spin-1 boson exchange, and the temporality of the quantum vacuum. Since the
modes of momentum fluctuation are determined (just as in the case of a crystal) by the
available energy transitions between discrete energy levels of the (lattice) vacuum,
simultaneous increases in momentum fluctuation density with corresponding decreases in
energy fluctuations within the vacuum occupying the very same volume (as the mass in
question), require that these momentum fluctuations become progressively more energy
degenerate. Is this paradoxical? We must understand the relationship between the
concepts of symmetry and degeneracy to answer this question.

The nonlinearities exhibited by general relativity may only apply to the expectation values of all forms of energy and not to mere fluctuations in energy (those grounded in Heisenberg energy uncertainty, that is). Is gravitation a manifestation of the loss of spacetime symmetry? In other words, the spatiotemporal distribution of the wavefunction, which matters not to an isolated, single particle within an "empty" universe in terms of when and where this particle is situated within this empty spacetime, becomes an important parameter for the momentum-energy (or 4-momentum) of this (test) particle, i.e., a degeneracy has been removed (and not, here, because of some specific force coupling having been introduced); remember, gravitation is not a "force."

From my light reading on the subject of supersymmetry (SUSY), I gather that when global symmetry is broken, one or more gauge bosons are engendered as a result. These bosons are necessary to restore this broken symmetry, but only at a "local" level. The continual mutual exchange of these gauge bosons between the appropriate fermions (those "feeling" the gauge force mediated by these particular bosons) constitutes the mechanism by which this local gauge symmetry is maintained. My understanding is that, in the case of all gauge bosons, with the notable exception of the Higgs, the fermions involved in the exchanging of these bosons acquire an effective mass component of their total mass. (Lorentz, by the way, had tried during the early years of the 20th Century to demonstrate that all of the electron's mass was electromagnetic in origin and was attributable to the electron's electromagnetic self-energy. Lorentz's attempt ultimately proved unsuccessful: it appears that even the great hunches of great minds do not always "pan out!") Now the hypothesis for the gravity mechanism alluded to throughout these writings points to the mutual exchanges of gauge (vector) bosons as being materially important in understanding this mechanism as well as the mechanism of inertia; hopefully so, since we would like to remain true to Einstein's strong equivalence principle, the other postulate of general relativity being that of general covariance.

Can we think of the loss of purely timelike 4-momentum, and this 4-momentum being reconstituted into both timelike and spacelike components, as representing a particularly simple form of symmetry breaking? An important question in this connection becomes: does the vacuum not gravitate because of a kind of mutual cancellation of its timelike and spacelike momentum fluctuation current densities? Or is this the case because the 4-momentum of the vacuum, in the absence of mass, is itself purely timelike? Supersymmetry, which requires the contributions to the vacuum energy from creation/annihilation of virtual fermion-antifermion pairs to contribute a negative component of this energy, also requires the contribution to this vacuum energy from the creation/annihilation of bosons to be positive. Moreover, SUSY requires that these two contributions from the two basic types of vacuum fluctuation somehow precisely cancel!


An individual random sequence of numbers may not possess any information. But what
about two random sequences which are correlated?

Computation can be represented in terms of the deterministic evolution of Psi in conjunction with adiabatic changes in the boundary conditions on Psi. Thought, on the other hand, cannot be understood in this manner, but must be represented by discontinuous changes in Psi with nonadiabatic, irreversible changes in the boundary conditions upon Psi. If the boundary conditions on Psi are overdetermined (degenerate), then

Individuality and unity go hand in hand. Not either/or but both/and.

What is the precise relationship between degeneracy, e.g., energy degeneracy, and symmetry, e.g., with respect to time reflection? The interpretation of rotation in four spatial dimensions is unproblematic, but what about within four-dimensional Minkowski spacetime?

Leonard Mandel's 1991 optical-interference experiment, in which the very possibility of determining which path the quantum particle takes through the double slit destroys the particle's superposition state and the observable interference pattern at the phosphorescent screen. The possibility of knowledge has the same effect here as actual knowledge.



Contrast the reaction of a crystal lattice (a closed system) to a perturbation or disturbance with the response of an open system to an "input." Without any definite or determinable boundary between itself and its alterity, there is no possibility either of intentionally manipulative inputting into the system, or of a chance, finite collection of applied inputs that would cause the system to change its state in a determinate or determinable manner. In such a case, the system cannot be understood or modeled as a kind of state machine, e.g., a Turing machine. The "operation" of such a system is therefore not reversible. This is because, by back-reacting upon its embedding ground state, changing this ground state in a necessarily irreversible manner (because the ground state is an open system), the system's own reaction to its altered ground state means there is no conceivable continuity of this system (against the backdrop of an irreversible change in its ground state). Consequently, there is no conceivable set of "inputs" (to the system from outside) that could have reproduced (really, alternately produced) the system's behavior (response).


The input of power, i.e., the supply of energy over time to the system succeeds only in
redistributing the probabilities or probability rates for fundamental fluctuations.

The collapse of the wavefunction necessitates violation of energy conservation, for if energy were conserved, the evolution of the wavefunction would have remained deterministic.

Decoherence of the Psi function occurs when, we might say, the quantum system in
question becomes more connected to (or ―entangled‖ with) the local environment than
with the nonlocal quantum vacuum state in which it originated.

How can it be demonstrated, scientifically that is, that the Psi function of some quantum system in a prepared superposition state can be "collapsed" via the influence of a physical measuring instrument alone, apart from the influence of any would-be "conscious observer"? Such a physical measuring device must be able to enter a classically describable physical state, e.g., experience a determinate deflection of an indicator needle on one of its analogue measurement dials.

Is there any fundamental difference between the measurement apparatus collapsing Psi and the measurement apparatus entering a superposition state of its own, which is then collapsed by the conscious act of observation on the part of the experimenter reading one of the device's dial indicators?

The measurement apparatus, being of a piece with the larger quantum mechanical state of
which it and the observed system form but a small part, cannot effect collapse of Psi_obs
any more than it can collapse the overarching system state function. The conscious
observer introduces the irreversible element by virtue of the fact that his mind is not, nor
can itself ever be, a quantum mechanical observable.

The awesome potential for growth of large scale economies is rooted in one fundamental
economic fact or principle: the collective wealth represented by a nickel in the hands of
50 million wage slaves is far outstripped by 2.5 million dollars in the hands of a few
creative and dynamic entrepreneurs.

ΔxΔP_x ≥ ħ  →  ΔxΔ(mv_x) ≥ ħ  →  Δx((Δm)v_x + mΔv_x) ≥ ħ

(Δx/c²)((ΔE)v_x + EΔv_x) ≥ ħ  →  ((ΔE)v_x + EΔv_x) ≥ ħc²/Δx

To each observable there corresponds a quantum number so that the physical quantity
represented by this observable is quantized. Such a physical quantity is conserved,
implying that the quantity exhibits a certain kind of symmetry. Such observables are
Hermitian, i.e., they may only possess real eigenvalues, and each of these observables
commutes with some and not with other observables, implying that this and every other
observable participates in a Heisenberg uncertainty relation with at least one other
noncommuting observable.
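These standard properties (Hermiticity, real eigenvalues, noncommutation) can be checked directly in a few lines. A minimal numerical sketch using the Pauli matrices, a stock example of mutually noncommuting Hermitian observables chosen here purely for illustration:

```python
import numpy as np

# Pauli spin matrices: textbook Hermitian observables that do not commute.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Hermiticity: each observable equals its conjugate transpose,
# which guarantees real eigenvalues.
for s in (sx, sy, sz):
    assert np.allclose(s, s.conj().T)
assert np.allclose(np.linalg.eigvalsh(sx), [-1.0, 1.0])

# Noncommutation: [sx, sy] = 2i*sz != 0, so sx and sy participate in a
# Heisenberg uncertainty relation with each other.
commutator = sx @ sy - sy @ sx
assert np.allclose(commutator, 2j * sz)
assert not np.allclose(sx @ sz, sz @ sx)
```

Each Pauli matrix fails to commute with the other two, so every one of these observables has at least one noncommuting partner, as the text describes.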

Such observables may exist in various superposition states of their eigenfunctions, their
Psi function undergoing state vector reduction, i.e., "collapse," when an observation is
performed with respect to this observable. The deterministic Schrodinger wave equation
can only consistently describe an isolated system, and only such a closed system may
enter into a superposition state. This is due to the Schrodinger equation being derived
from a linear differential equation involving a Hamiltonian representing a conserved
system total energy. As Wigner has forcefully pointed out, one system cannot induce Psi-
function collapse of some other system if it is possible to describe the "observed" and
"observer" systems as two components of a larger Psi-function separable into these and
other component Psi_i, from the product of which the original Psi may be simply
constituted.

Can we say that consciousness (of the individual), if it can be treated as an observable in
the first instance, is an observable that does not commute with itself?

Are strange attractors in phase space also regions of dynamical resonance?

Do e+e- virtual pairs suppressed by strong electromagnetic fields manifest themselves as
the creation of real e+e- pairs?

In Psi collapse, an action (rather than a "disturbance") that is originally incommensurate
with the vacuum or ground state within the spacetime region in question is
spontaneously "fitted" or "fused" or "grafted" to the local vacuum of the event, whether
decoherence, quantum measurement, etc. One way of doing this might be to "update"
the local ground state initial conditions. Another way might be to suppose that the
ground state or quantum vacuum is itself in a perennial near infinite superposition state.
Does one quantum measurement wipe out all information about previous measurements?

Quantum fluctuations mediate the closeness of coincidence of competing fermionic
states. How are Pauli Exclusion and Bose Inclusion related to the phenomena of
constructive and destructive interference?

In the absence of the electron, the vacuum is free to experience a fluctuation in its energy,
δE_vac, at the particular spacetime coordinate in question (well, really we must consider the
values of Δρ, where ρ is the radius of a 3-sphere, and Δt for real fermions when applying the Pauli
Exclusion Principle to both real and virtual particles). If a real electron is "present", say
within some crystal lattice, its energy of fluctuation, rather than manifesting itself as the
brief appearance (and then disappearance) of a virtual e+e- pair, manifests itself as the
raising and lowering of the real electron, already occupying one of the distinct quantum
states available to the crystal, between two distinct energy levels of the crystal, resulting
in the exchange of a virtual photon between the real electron and the crystal.

A generalization of the Pauli exclusion principle would be the following. Rather than
a given fermion totally excluding, i.e., excluding with a probability of 1.0, the
simultaneous and coincident occupancy by an identical fermion of a quantum state
characterized by a set of discrete eigenvalues with respect to a complete commuting set
of observables, we would speak more generally of this exclusion falling between
0 and 1.0 according to the degree of spatiotemporal overlap of the wavefunctions of the
two fermions in question.
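One conventional way to quantify the degree of overlap invoked here is the overlap integral |⟨ψ₁|ψ₂⟩|² between normalized wavefunctions, which runs from 0 (fully distinguishable) to 1 (coincident). A minimal numerical sketch with two Gaussian wave packets; the Gaussian form and the separations are illustrative assumptions, not anything fixed by the text:

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

def gaussian(grid, x0, sigma=1.0):
    """Normalized 1-D Gaussian wave packet centered at x0."""
    psi = np.exp(-(grid - x0) ** 2 / (4 * sigma ** 2))
    return psi / np.sqrt(np.sum(psi ** 2) * dx)

def overlap(d):
    """Overlap probability |<psi1|psi2>|^2 for packets separated by d."""
    return (np.sum(gaussian(x, 0.0) * gaussian(x, d)) * dx) ** 2

# Full overlap for coincident packets, falling smoothly toward 0 with separation.
assert abs(overlap(0.0) - 1.0) < 1e-6
assert overlap(2.0) < overlap(1.0) < overlap(0.0)
assert overlap(15.0) < 1e-10
```

The quantity `overlap(d)` interpolates continuously between 0 and 1, which is the kind of graded exclusion weight the paragraph above gestures at.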

The undifferentiated position and time uncertainties say where a particle could be found
but without providing information about the probabilities of the particle being found
within various sub-intervals within these overall uncertainties.

By accelerating we can shrink the spacelike component of the spacetime trajectory to our
destination, but only by at the same time, of course, lengthening the timelike component
of this interval. What we cannot do is the converse of this: change our state of motion
in such a way that the spacelike and timelike components of our trip are lengthened and
contracted, respectively.
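One standard way to read this is through length contraction and time dilation: as the traveler's speed grows, the distance to the destination measured in the traveler's frame shrinks, while each tick of the traveler's clock spans more rest-frame coordinate time. A sketch in natural units (c = 1); the distance value is purely illustrative:

```python
import math

c = 1.0   # natural units
d = 10.0  # rest-frame distance to the destination (illustrative)

def gamma(v):
    """Lorentz factor for speed v."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def contracted_distance(v):
    """Spacelike leg of the trip as measured in the traveler's frame."""
    return d / gamma(v)

def coord_time_per_proper_time(v):
    """Rest-frame coordinate time elapsed per unit of traveler proper time."""
    return gamma(v)

for v_slow, v_fast in [(0.1, 0.5), (0.5, 0.9), (0.9, 0.99)]:
    # Faster travel: the spacelike leg shrinks ...
    assert contracted_distance(v_fast) < contracted_distance(v_slow)
    # ... while each unit of proper time spans more coordinate time.
    assert coord_time_per_proper_time(v_fast) > coord_time_per_proper_time(v_slow)
```

The two trends always move in opposite directions, which is one way of seeing why the "converse" adjustment described above is unavailable.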

Teleportation, if it is not to be mere propagation in disguise, must be effectively the
disappearance of an object at A and the reappearance of the very same object at B. But
this seemingly defining requirement for teleportation cannot possibly be met for
quantum particles, which, if similar, must be indistinguishable in quantum theory.
Because a teleported particle possesses no timelike component in its spacetime trajectory,
the particle must in some important and relevant capacity already exist at point B just
prior to the teleportation of the identical particle at A.

So the teleportation of a particle does not entail the creation of a particle at B, merely that
the required quantity of energy be available at B or within a region defined by the 3-
sphere centered about B (Δρ, ρΔθ, ρsinθΔφ) within a time interval centered about the
moment of the particle being destroyed at A – or rather, about the centroid of the
uncertain time interval within which the event of the destruction of the particle at A
actually took place.

Duality → trinity
Logic → dialectic
Causal → historical

We must make a key distinction here between teleporting, copying, and propagating.
Two fundamentally distinct paradigms inform the question of teleportation, embodied in
two very different methods – copying from the outside in versus copying from the
inside out. Any structure of higher resolution than that dictated by the Heisenberg
uncertainty principle shall be imposed by the process upon the object to be teleported. In
the method according to the 2nd paradigm, the object imposes its fine structure, if you
will, upon the teleportation process.


A test particle at potential Phi(B) has angular momentum perpendicular to the x-ict plane
(according to a right hand rule?) relative to the origin of a coordinate system based at a
potential Phi(A). In what direction does this angular momentum vector point?

What is the difference between a linear succession of virtual e+e- pair
creation/annihilation events which represents the propagation of a real photon and a
similar linear succession of e+e- events that represents only the passage of a virtual
photon?

Light intro stuff: The thought of timelike vectors or vectors oriented along the time axis
should easily make sense to the average person. We can look forward, for example, in
time just as we look out into space. The time axis is aptly described as imaginary.

Electrical interactions of particles depend on the spin [only in the relativistic regime], cf. p. 230,
vol. 3 of Landau and Lifshitz. This suggests that any electromagnetic basis for gravity
is itself based in spin interactions.

The very reason for the existence of spin is relativity. J = L + S.

The, e.g., photon population in the final state stimulates the transition of more electrons/
positrons into this same state (resonance phenomenon here, e.g., lasing). This inclusion
principle applies equally well to virtual photons and virtual electrons/ positrons. This
implies that the vacuum does have, for example, a bound structure of virtual e+e-
continually undergoing transitions in energy.

Two mechanisms of exclusion (a la Pauli) are at work here, however. Decreased
probability density rate of creation/annihilation of e+e- pairs of particular arbitrary energy,
and decreased availability of quantum energy states due to "occupancy" of these states by
real fermions within bulk material.

It is significant that all quantum mechanical operators may be alternately expressed as
linear combinations of creation and annihilation operators.



δ²E_vac = [−δ(mc²)]² + [δ(pc)]²

δ²E_vac = δ²(mc²) + δ²(pc)
 = {δ(mc²)}² + {δ(pc)}²
 = {(δm)c² + 2mc·δc}² + {(δp)c + p·δc}²

Let δc = 0:

δ²E_vac = (δm)²c⁴ + (δp)²c²

but δE_vac|_spacelike = 0; δE_vac|_timelike = 0

Let −δ(mc²) + δ(pc) = 0:

−(δm)c² + (δp)c = 0

(δp)c = (δm)c²

δp = (δm)c

So the magnitude of spacelike momentum fluctuations is equal to the magnitude of the
timelike fluctuations, if

δ_vac|_four-momentum = 0

In a Lorentz frame, δ_vac|_4-momentum = 0, but not in an accelerated frame, in which
nondiagonal components of the quantity δT_µν become nonzero. These are the
stress terms of T_µν(vac).

If the vacuum has time to respond, i.e., adiabatically equilibrate itself during
accelerations, then no nonzero stress terms in T_µν(vac) develop and the momentum and
energy uncertainties of the quantum vacuum remain equal to the corresponding free space
fluctuation values of these quantities.

Free space fluctuations of momentum and energy cannot induce wavefunction collapse of
any quantum mechanical system. The boundary conditions applied to the free space
vacuum by a system of real particles (partons) and fields (bosons) alter the values of
δp_vac and δE_vac so that ⟨p⟩ and ⟨E⟩ take on nonzero values, i.e., the uncertainties of each
are such that,

Δp > δp and ΔE > δE

Δp = √(⟨p²⟩ − ⟨p⟩²) and ΔE = √(⟨E²⟩ − ⟨E⟩²)
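These are just the standard-deviation definitions of the uncertainties, and they can be exercised on any distribution; a minimal sketch in which the discrete momentum distribution is made up purely for illustration:

```python
import numpy as np

# Illustrative discrete momentum distribution (values and probabilities
# are invented for the example, in arbitrary units).
p_values = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
probs = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
assert abs(probs.sum() - 1.0) < 1e-12

mean_p = np.sum(probs * p_values)          # <p>
mean_p2 = np.sum(probs * p_values ** 2)    # <p^2>
delta_p = np.sqrt(mean_p2 - mean_p ** 2)   # uncertainty sqrt(<p^2> - <p>^2)

# This distribution is symmetric, so <p> = 0 and delta_p^2 reduces to <p^2>.
assert abs(mean_p) < 1e-12
assert abs(delta_p ** 2 - mean_p2) < 1e-12
```

For a distribution with ⟨p⟩ = 0, the uncertainty is carried entirely by ⟨p²⟩, which matches the text's picture of fluctuation values about a zero mean in free space.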

When a test particle is in free fall and following a spacetime geodesic, the stress terms in
its local quantum vacuum are 0.

The quantum vacuum is not a thermal system within any Lorentz frame and therefore
possesses no definable entropy, only acquiring entropy within an accelerated frame. A
thermal vacuum, i.e., an accelerated vacuum, can cause decoherence of a quantum
superposition. By possessing entropy, the vacuum lacks certain information. Gravity
→ thermal vacuum → decoherence.
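The thermal character of the accelerated vacuum alluded to here is conventionally quantified by the Unruh temperature, T = ħa/(2πck_B). A quick computation, using CODATA constants, showing how small the effect is at ordinary accelerations:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
k_B = 1.380649e-23      # Boltzmann constant, J/K

def unruh_temperature(a):
    """Unruh temperature (K) seen by an observer with proper acceleration a (m/s^2)."""
    return hbar * a / (2 * math.pi * c * k_B)

# The effect is fantastically small at everyday accelerations:
# roughly 4e-20 K at one Earth gravity.
T_1g = unruh_temperature(9.81)
assert 3e-20 < T_1g < 5e-20

# Reaching even ~1 K requires an acceleration of order 2.5e20 m/s^2.
a_for_1K = 1.0 / unruh_temperature(1.0)
assert 1e20 < a_for_1K < 1e21
```

The linearity of T in a is one way of stating the "accelerated frame acquires entropy" idea: an inertial observer (a = 0) sees a strictly zero-temperature vacuum.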

Casimir pressure and Casimir energy density as spacelike and timelike manifestations of
the vacuum‘s momentum-energy tensor. Matter can be viewed as simply vacuum
boundary conditions with the dynamics of a quantum system being that of the vacuum
subject to boundary conditions supplied by the presence of real particles and fields.
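For reference, the standard ideal-plate Casimir results are an energy per unit area E/A = −π²ħc/(720d³) and a pressure P = −π²ħc/(240d⁴) at plate separation d. A quick numerical check that the pressure is indeed the (negative) derivative of the energy per area, i.e., that the "spacelike" and "timelike" faces of the effect are consistent with each other:

```python
import math

hbar = 1.054571817e-34  # J*s
c = 2.99792458e8        # m/s

def casimir_pressure(d):
    """Attractive Casimir pressure (Pa) between ideal parallel plates at separation d (m)."""
    return -math.pi ** 2 * hbar * c / (240.0 * d ** 4)

def casimir_energy_per_area(d):
    """Casimir energy per unit plate area (J/m^2) at separation d (m)."""
    return -math.pi ** 2 * hbar * c / (720.0 * d ** 3)

# P = -d(E/A)/dd, checked by central finite difference at d = 1 micron.
d, h = 1e-6, 1e-12
numeric = -(casimir_energy_per_area(d + h) - casimir_energy_per_area(d - h)) / (2 * h)
assert abs(numeric - casimir_pressure(d)) / abs(casimir_pressure(d)) < 1e-4

# Magnitude sanity check: about 1.3 mPa of attraction at 1 micron separation.
assert abs(casimir_pressure(1e-6) + 1.3001e-3) < 1e-6
```

Both quantities scale as inverse powers of d, so the effect is negligible at macroscopic separations and dominant only at sub-micron scales.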





δE_vac = δE²|_{a†a} + δP²|_{ea}

δ²E_vac = (1/2m)[(imc)² + (−imc)²] + (δpc)²



Because i² = (−i)² = −1, the creation/annihilation of, e.g., e+e- virtual pairs, i.e.,
fluctuations in the quantum vacuum's energy, may be treated as a single imaginary
momentum fluctuation in the vacuum possessing a spin of 0.


E²_vac = [(imc)²/2m]² + [(−imc)²/2m]² + p²c²

If p = mc, i.e., if h/λ = mc, then c = λf and E_vac = 0.


If c > c₀, then the photon possesses an imaginary mass, and mass is relativistically
decreased as deduced from the inertia of mass exhibited during accelerations along a
vector perpendicular to the Casimir plates, and E_vac < 0.

This relativistic equation remains to be worked out, but the standard relativistic mass
equation certainly can be used as a guide in the construction of this equation.


If c < c₀, then the photon possesses a real mass, and mass is relativistically increased
as deduced from the inertia of mass exhibited during accelerations along a vector
perpendicular to the Casimir plates, and E_vac < 0.


Because an energy fluctuation in the form of a creation/annihilation of a virtual e+e- pair
possesses spin 0, it cannot trigger an atomic transition involving a change in angular
momentum if the atom is blocked from emitting (spontaneously) a photon into a mode of
appropriate quantum numbers, i.e., if this mode (of electromagnetic oscillation) is one of
the modes excluded from the modified, Casimir vacuum.

ΔLΔφ ≥ (1/2)(h/2π)

Let Δφ ≤ 4π steradians for a spin 1 particle. The orientation of the photon's spin, 1ħ,
can be anywhere in 3-space. Because the photon moves at c, it can't have any spin
component oriented perpendicularly to its local 3-space.


e+e- = spin 0; timelike
e+e+ = spin +1; spacelike
e-e- = spin −1; anti-spacelike

2πR = d/dR(πR²)

4πR² = d/dR((4/3)πR³)

2π²R³ = d/dR(π²R⁴/2)
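The pattern in these identities, that each sphere's boundary measure is the radial derivative of its enclosed measure, can be verified symbolically; the standard 4-ball hypervolume π²R⁴/2 is assumed for the last one:

```python
import sympy as sp

R = sp.symbols('R', positive=True)

# Circle: circumference = d/dR(area).
assert sp.simplify(sp.diff(sp.pi * R**2, R) - 2 * sp.pi * R) == 0

# 2-sphere: surface area = d/dR(volume).
assert sp.simplify(sp.diff(sp.Rational(4, 3) * sp.pi * R**3, R) - 4 * sp.pi * R**2) == 0

# 3-sphere: hypersurface area = d/dR(hypervolume), hypervolume = pi^2 R^4 / 2.
assert sp.simplify(sp.diff(sp.pi**2 * R**4 / 2, R) - 2 * sp.pi**2 * R**3) == 0
```

The same boundary-equals-derivative relation holds for the n-ball in every dimension, since the volume accumulates in concentric shells of area equal to the boundary measure.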



Does a transformation of an e+e+ pair into an e-e- pair always involve passing through
the e+e- state?

Matter itself provides the ideal boundary conditions to modify the vacuum statistics
within its bulk resonance structure, so as to dilate time uniformly within its bulk.
Casimir plates merely selectively induce probability rates of specific processes.

For a photon, ΔL = ΔS, and when exposed as e+e-, e+e+, or e-e-, or admixtures, i.e.,
superpositions of these, the variation in the component of spin of the photon in a
particular coordinate direction can be as much as 1ħ.

The distribution of spin amongst all of the virtual particles within the vacuum determines
both the distribution of momentum-energy and the geometry of spacetime so that
momentum-energy and position-time are related by Einstein‘s equations through a
correlation of these via the spin statistics connection of real and virtual particles/fields.

A photon is slowed in a gravitational field due to spending "more time" as a particular
virtual e+e- pair in which the e+ and e- exchange virtual photons (3-momentum
fluctuations) with each other and "less time" being resorbed and reemitted in a linear
succession of ever new virtual e+e- pairs (energy or imaginary momentum
fluctuations).

A photon is accelerated (relative to its velocity in vacuo) when travelling perpendicularly
to parallel Casimir plates due to spending "less time" as a particular virtual e+e- pair in
which the virtual e+ and e- exchange virtual photons (3-momentum fluctuations) with
each other and "more time" being resorbed and reemitted in a linear succession of ever
new virtual e+e- pairs (energy or imaginary momentum fluctuations).

Similar observations can be made about the delay in the recreation of fermions (by
exchange of real matter fermions with the virtual (antimatter) fermions within
created/annihilated virtual fermion-antifermion pairs) due to these real fermions spending
some "time" exchanging virtual bosons with other (neighboring) real fermions, cf.
cellular automata theory.

A thermal reservoir composed of fermions will emit particles into available quantum
states, i.e., modes, in a phase space-filling process that is governed by the Pauli Exclusion
Principle. A thermal boson reservoir will emit particles into available modes of phase
space through a process mediated by Bose-Einstein statistics, through either stimulated or
spontaneous emission. The particle number fluctuation functions of each thermal
emission process are described by the following equations.

⟨ΔN_k²⟩_B = ⟨N_k⟩(1 + ⟨N_k⟩); bosons, and

⟨ΔN_k²⟩_F = ⟨N_k⟩(1 − ⟨N_k⟩); fermions, and where

⟨N_k⟩ = f_F(E_k) and ⟨N_k⟩ = f_B(E_k) are the Fermi-Dirac

and Bose-Einstein distributions, respectively.
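The two fluctuation formulas can be exercised numerically; a minimal sketch in natural units (k_B = 1), where the mode energy and temperature values are arbitrary illustrations:

```python
import math

def f_BE(E, T):
    """Bose-Einstein mean occupation <N_k> of a mode with energy E (mu = 0, k_B = 1)."""
    return 1.0 / (math.exp(E / T) - 1.0)

def f_FD(E, T):
    """Fermi-Dirac mean occupation <N_k> of a mode with energy E (mu = 0, k_B = 1)."""
    return 1.0 / (math.exp(E / T) + 1.0)

def fluct_B(n):
    """Boson number variance <dN_k^2>_B = <N_k>(1 + <N_k>)."""
    return n * (1.0 + n)

def fluct_F(n):
    """Fermion number variance <dN_k^2>_F = <N_k>(1 - <N_k>)."""
    return n * (1.0 - n)

E, T = 1.0, 1.0  # arbitrary illustrative values
nB, nF = f_BE(E, T), f_FD(E, T)

# Bosons fluctuate super-Poissonically ("bunching"); fermions sub-Poissonically,
# with the variance vanishing as occupation approaches 0 or 1 (Pauli exclusion).
assert fluct_B(nB) > nB       # boson variance exceeds the mean occupation
assert fluct_F(nF) < nF       # fermion variance falls below the mean
assert 0.0 < nF < 1.0         # FD occupation is bounded by exclusion
```

The opposite signs of the ±⟨N_k⟩ correction terms are the statistical face of Bose "inclusion" versus Pauli exclusion discussed throughout these notes.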




The Pauli Exclusion Principle applies to the occupation of "cells" within phase space by
fermions. The question arises as to what is the structure of these cells to which the Pauli
principle applies. Does time-energy, position-momentum, or some other structure define
these cells? Or are these cells "addressed" by the four angular momentum uncertainties,
ΔL₀, ΔL₁, ΔL₂, ΔL₃? A given angular momentum uncertainty, ΔL_µ, µ = 0, 1, 2, 3,
is defined as

ΔL_µ = ⟨X_i⟩ΔP_j + ⟨P_i⟩ΔX_j, where i = j = µ

Due to the Heisenberg Uncertainty Principle (HUP), these cells in phase space cannot
possess infinitesimal phase space volume. We can use the additional relation,

ΔP_jΔX_j ≥ h

The expectation values, ⟨X_i⟩ and ⟨P_i⟩, together address the centroid of each cell,
ΔP_jΔX_j, when i = j.

It has already been stated as a working hypothesis that vacuum momentum fluctuations
and vacuum energy fluctuations may be combined together into a conserved 4-vector of
fluctuation momentum-energy. Let us examine what possible conserved quantities are
available to consider.

Conserved quantities: ⟨p⟩, ⟨E⟩, δp, δE, and hence Δp and ΔE, the uncertainties of p and
E, are conserved; x₁, x₂, x₃, and ict, or x₀, are not conserved, but the interval, s² = x₀² + x₁²
+ x₂² + x₃² = 0, is conserved. Also, if the angular momenta Lx₀, Lx₁, Lx₂, Lx₃ are
conserved, then there is conservation of phase space in each plane.



The speed of light increases between casimir plates due to an increase in the probability
rate of production of virtual e+e- pairs between the plates. The e+e- virtual pair
creation/annihilation events are how the energy uncertainty between the plates in the
modified vacuum manifests itself as energy fluctuations. So the decrease in momentum
fluctuations, virtual photon exchanges between the plates, is offset by a corresponding
increase in energy fluctuations between the plates in the form of virtual scalar particle
exchanges between the modified vacuum and the four dimensional quantum vacuum.

The substance of any given relative 3 + 1 vacuum state is an absolute 4-vacuum. These
two vacua continually exchange energy with one another.



The linear density of virtual e+e- pairs is increased although photon wavelengths
are unchanged, resulting in higher frequency and the same wavelength, i.e., a greater value of
"c." This is explained by a greater probability rate for energy-compatible (or what is
called "resonant") virtual e+e- pairs being created and annihilated along the
photon's "trajectory" between the Casimir plates. The concept of resonance is pervasive
within physics, and there are myriad instances of it in almost all branches of physics.

In the above paragraph we have an example of "resonance-tuning." By altering the
electromagnetic boundary conditions of the quantum vacuum electromagnetic field
through the use of simple Casimir rectangular plates, one has succeeded in modifying the
free space vacuum by altering this vacuum's resonance structure. Alternately, virtual
photons of a given energy/frequency within this modified quantum vacuum are no longer
described by virtual e+e- pairs of energy, E = hc/lambda, or no longer resonate with
virtual e+e- pairs of energy, hc/lambda. We speculate that the change introduced to the
vacuum's resonance spectrum is confined to the volume contained within the world
hypervolume defined by the Casimir plate geometry over time. If this is indeed the case,
then the change to the probability rate of virtual e+e- pairs being created and annihilated
within the geometry of the Casimir plates is only for those pairs possessing "half-
wavelength" smaller than the plate separation. Here is the conceptual difficulty:
although the density of vacuum electromagnetic photons within the plate geometry has
been effectively decreased, the density of vacuum e+e- pairs being created and
annihilated within this same geometry is increased. The idea behind these two allegedly
related vacuum effects is that of the conservation of momentum-energy or 4-momentum
density within all possible (self-consistent) hypervolumes. A simplistic way to understand
this phenomenon where the above Casimir geometry is concerned, which is not, I believe,
fundamentally misguided, is the following. The amount of energy (in the form of
photons) that has been excluded from the Casimir plate geometry is made up for by an
exactly offsetting increase in the density of virtual photons of energies falling within the
frequency spectrum, hc/lambda, where lambda < 2 × the distance between the plates,
d_Casimir.

E ≠ hc/lambda

So in a given interval of time, say, for convenience here, d_Casimir/c, where c is the
normal value of the speed of light in vacuo, the linear probability density of virtual e+e-
creation/annihilation events along an axis perpendicular to the plates and contained well
within the geometry described by said plates (in order to be able to ignore
electromagnetic "fringe effects") is enhanced above its free space value. So a real photon
introduced from one end of the plate geometry with momentum, h/lambda′, directed
perpendicular to the plates and possessing an energy greater than hc/lambda where, again,
lambda is less than twice the value of d_Casimir, shall, on account of the enhanced linear
density of resonant e+e- virtual pairs created and annihilated along the photon's
perpendicular path, reach the metal plate at the opposite end of the Casimir geometry
sooner than would have been implied by the simple equation, E = pc. This equation is, in
fact, no longer valid within the modified vacuum of the volume described by the Casimir
plate geometry; some equation reminiscent of E² = m²c⁴ + p²c² must hold instead. Only
by introducing a sufficiently large, offsetting imaginary mass component to the photon is
the simultaneous increase (above that of c in vacuo) permitted for the real photon's
velocity perpendicular to the Casimir plates. Now it is suggested that by performing a
modification to the vacuum's resonant structure opposite in sense to that described above,
we shall succeed in imbuing the affected particle (that confined to the vacuum thus
modified) with a real as opposed to imaginary mass! This suggests that bound
configurations of matter particles, i.e., fermions, by enhancing the probability (in
comparison with the free space vacuum) of the mutual exchange of force-mediating
photons, suppress at the same time the probability current density of virtual e+e- pairs
within the bulk of the matter distribution in question, resulting in a reduced local velocity
of light and the appearance of real mass.
July 2011
However, we must consider that the density of radiation in the form of virtual photons
and other bosons is likewise decreasing with cosmological expansion and, according to
basic astrophysics, the density of radiation decreases with the inverse 4th power. This to
the contrary implies that the velocity of light is actually increasing with cosmological
expansion.
October 2011
This is similar to the underlying explanation of the anomalous sunward acceleration noted
for a number of deep space probes from the 1970s as owing to a mismatch of actual, local
cosmologically expanding and projected (static, Ephemeris) reference frames within
which the acceleration of the space probes is reckoned in accordance with classical
mechanics + negligible relativistic corrections. Except in this instance, the rate of
cosmological expansion is reckoned within a uniformly expanding reference frame where
the speed of light is assumed constant though in which the speed of light is actually
increasing with the cosmological expansion on account of a progressive shift in the
relative densities of spin 1 and composite spin 0 bosons, which of course mimics the
reverse situation for how these relative densities in the two species of virtual bosons
change with an increasing gravitational potential.

Here we may indeed have a ready explanation for dark energy and the accelerating
cosmological expansion and may also have an explanation for dark matter, since general
relativistic time dilation would be greatest for the greatest relative accumulations of
matter, i.e., in galaxies, cf. Feynman's parton "bag" model. It appears that the
accelerated cosmological expansion occurs within a background three dimensional
Euclidean space, which implies that the observed effects of relativity are due to properties
of the expanding vacuum and not of the background space (background vacuum).
Absolute space and time in other words exist in the background of our apparent
expanding spacetime so that the physics of special and general relativity are now
appreciated as merely properties of a simulated physics.

August 2011
Is it possible that a physical process, e.g., fundamental particle or interaction
might one day be discovered, which fits into the order of the cosmos solely due to its
satisfaction of some anthropic cosmological imperative? A candidate example of such a
process is that represented by what is termed dark matter. Dark matter only interacts
with the rest of the particles and processes of the universe via gravitation and appears to
serve no other purpose than to enhance the gravitational stability of large scale
aggregations of normal matter, which were absolutely necessary for the formation of the
first stars and for the integrity of galactic and higher order gravitational structures. If a
spacetime wormhole structure amenable to exploitation for interstellar travel is ever
discovered, it appears likely that dark matter shall play an indispensable role in the
stability of this cosmic transportation infrastructure. Whatever appears on the surface to
possess only an anthropic cosmological purpose must upon deeper investigation be
revealed to be a manifestation of the integral nature of the cosmos at the most
fundamental level. This notion is of course consistent with the Eastern mystic view of
the individual as an immanent manifestation of transcendental, i.e., "duality-
transcending" deity. The anthropic cosmological principle taken in earnest in its full
metaphysical implications, effectively makes Man the fundamental or "God particle" of
the cosmos. If true, then the search for a bona fide God particle, e.g., the "Higgs boson,"
is likely to prove a fruitless one.

We may be able to interpret the creation and annihilation of virtual e+e- pairs as the
exchange of superluminal photons of imaginary mass by the same bound structure of
matter with itself at two different times. The difference between these two "times"
constitutes a time interval defined by Planck's constant divided by the energy of the e+e-
virtual pair in question, i.e., h/E_{e+e-}.

Without the presence of mass or any dynamic independent of the quantum vacuum as a
whole, this vacuum is free to communicate with all past and future versions of itself.
The appearance of any momentum-energy within this vacuum which is not already
eternally prefigured within it disrupts the delicate structure of this temporally integrated
and pristine vacuum state.

The collapse of a system wavefunction not only wipes out all contending eigenfunctions
excepting one, distributed throughout instantaneous space, but also replaces one set of
"probable histories" for the system with another. Now the communication of "mutually
exclusive" eigenfunctions may be equivalent to the communication of the system as a
whole with itself in the immediate future/immediate past.

Concerning the phenomenon of the appearance of a "force" between the Casimir plates
which tends to push them together, we may give an interpretation in terms of a magnetic
force. With the decrease in the vacuum electromagnetic field between the plates,
together with an increase in the probability current density of virtual e+e- pairs, also
between the plates, we might suppose that, in the case of a static configuration of plates,
it is an imbalance in the static magnetic force between the plates that is actually
responsible for pushing them together. Can the force between the plates be interpreted as
being due to the appearance of a surface density of charge upon the plates?







For some particle or field to act as a gravitational source, it must be possible for a test
particle to be accelerated relative to this source in such a manner that their velocities
through time may be made equal.

Is the velocity of a mass through proper time, i.e., its 4-velocity while ―at rest‖
determined by the vector sum of the rest mass times the velocity along its proper time
axis with the product of this mass with the relative velocity of ―in-falling vacuum‖
energy? And can this be equivalently described in terms of the relative magnitudes of
the quantum vacuum timelike and spacelike current densities?

It is the effect of matter upon the quantum vacuum to rotate the vacuum momentum
current density from being virtually purely timelike to being less timelike and more
spacelike, through increasing the spacelike current density while decreasing the timelike
current density. An exact match in the "in-fall" and "out-fall" rates of the quantum
vacuum energy would mean that the net cosmological expansion is zero. A slight
imbalance in the above "in-fall" rates would translate to a local time of the "free space"
vacuum that is somewhat retarded relative to the "rate of passage" of "cosmical time."
This implies that the universe possesses an overall net mass. An accelerating cosmic
expansion rate (relative to cosmic time) would imply a decrease in the "rate of local
time" relative to the rate of cosmic time. The practical upshot of this is that the local
velocity of light is decreasing with cosmic time. This would seem to imply that matter is
being created out of the vacuum with the course of cosmological expansion.

We may think of the matter this way: a photon may either pass through a dielectric
medium without interacting with it or it may be absorbed by this medium with the
production of the appropriate transition between energy levels of this dielectric material
(assuming this medium possesses a ―perfect‖ crystalline structure). We may assume that
the virtual transitions continually occurring to the vacuum‘s energy are always in tandem
with the absorption by this vacuum of virtual bosons, e.g., photons so that a decrease in
the density of energy transitions in vacuum ( per unit time) must be associated with either
an increase in the, e.g., photon, flux density or in the current density of spacelike
momentum fluctuations, i.e., exchanges of, e.g., virtual photons between components of
a, e.g., dielectric medium (other than the quantum vacuum itself).

The vacuum acts as a reference frame in a number of senses, but not in the usual sense of
"reference frame" invoked in elementary discussions of special relativity. One such sense
is the role of the vacuum as providing both the ground and the medium for all forms of
resonance/sympathetic interaction of quantum mechanical systems (which is to say, all
physical systems). (The role of the quantum vacuum as resonance medium/ground for
conscious states shall be discussed later.) Moreover, the vacuum distinguishes virtual
from real components of quantum mechanical systems by only entering into complete
resonance with a system of virtual particles/fields. The resonance of the vacuum with
real quantum mechanical systems is always damped.

Keywords for websearch: damping, resonance, transmission, potential, barrier, tunneling,
imaginary, momentum, negative kinetic energy, etc.

Look at tunneling rates versus transmission times. If all particles are tunneling through a
4-hyperspherical potential energy barrier at near the speed of light, then particles are
normally "near vacuum resonance." When the velocity of tunneling of matter particles
is reduced significantly relative to the free-space value of c, then we say that the particles
are far from vacuum resonance and the probability (rate) of their tunneling is
greatly reduced below the "free space" resonant value of the tunneling rate for this
particle (as a virtual particle there is "perfect vacuum resonance" and a maximal tunneling
probability rate, i.e., the virtual particle is "moving" at the speed of light in vacuo along
the cosmic time axis). Of course, what causes matter to possess less than perfect
resonance with the local quantum vacuum is the presence of competing resonant
processes taking place within the bulk of the matter.
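For comparison with these notes, the textbook picture behind the keyword list above can be sketched numerically. This is a minimal illustration of the standard rectangular-barrier estimate T ≈ exp(-2κL), not of the 4-hyperspherical barrier speculated about here; inside the barrier the kinetic energy E - V0 is negative, so the momentum ħκ is imaginary, which is exactly the "imaginary momentum / negative kinetic energy" cluster listed above.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837e-31     # electron mass, kg
EV = 1.602176634e-19    # joules per electron-volt

def transmission(E_eV, V0_eV, width_m, mass=M_E):
    """Approximate transmission probability T ~ exp(-2*kappa*L) for a
    rectangular barrier of height V0 and width L, with E < V0.
    Inside the barrier the kinetic energy E - V0 is negative, so the
    momentum hbar*kappa is imaginary."""
    E, V0 = E_eV * EV, V0_eV * EV
    if E >= V0:
        return 1.0  # classical passage; no tunneling suppression
    kappa = math.sqrt(2.0 * mass * (V0 - E)) / HBAR
    return math.exp(-2.0 * kappa * width_m)

# A 1 eV electron meeting a 2 eV barrier: the tunneling rate falls off
# sharply with barrier width.
for L in (0.1e-9, 0.5e-9, 1.0e-9):
    print(L, transmission(1.0, 2.0, L))
```

The exponential sensitivity to the barrier width is the quantitative content of "far from resonance means a greatly reduced tunneling rate."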

The cosmological redshift quantization and the accelerated cosmological expansion are
two manifestations of a single underlying energy structure of the universe. This appears
as an exponentially increasing, oscillatory solution to a differential equation, suggesting
that the kinetic energy of expansion and the gravitational potential energy of the universe
are not, in fact, precisely balanced (so-called flat space solution), but the hyperspherical
potential energy barrier is larger than the kinetic energy imparted to matter by the big
bang. The frequency of the oscillatory solution should indicate how closely matched the
kinetic energy of expansion is to the energy of the hyperspherical potential.


What would a topological analysis of the concept of pure temporality reveal? Can there
be temporality without spatiality? No, because there must always be breaking into any
time line that attempts to establish itself and these disruptions always come from
perpendicular to this would-be timeline. Spatial dimension is a deterministic time
dimension, c.f., concept of three dimensional time (physics eprint archive/ gr-qc). Self
organization is only an appearance or representation within a deterministic state space,
but can actually exist within an open system. Investigate the concept of an open system ,
which is closed with respect to some other open system.

The Schrödinger's Cat experiment can be modified so that we are not dealing with a
superposition of a dead cat and a live one, but with a superposition of two distinct states
of a single conscious human individual. (The latter case we can consider later, in which
we have a superposition of two distinct persons, say, where, depending upon which of an
equiprobable two-state radioactive transition occurs, either person A or person B enters
and seals himself in some kind of isolation chamber.) But I insist that these are two
radically different types of "quantum" superposition. The point here, however, is that
quantum mechanics does not recognize a distinction between these two types of
superposition: one, which is a superposition of two different states of the same person,
and two, a superposition of two distinct persons! Does this mean that we can never look
to quantum mechanics for a solution to the question, "Why am I me rather than that guy
over there?" For in this gedanken experiment, we have two cases of superpositions or
eigenstates with respect to what we presume, for purposes of argument, are quantum
observables. But what type of observables are we referring to here? Mustn't observables
be at least intersubjectively defined within quantum mechanics? And yet we intuitively
see that these "consciousness observables" and their eigenstates are of fundamentally
distinct types; I would say the most fundamentally distinct conceivable! So we see that
quantum mechanics cannot form the basis of a Theory of Everything because it cannot
possibly take into account the distinctness of individual consciousness, partly because
quantum observables must be intersubjectively defined, so that what we might call
infra-subjectivity, that is, subjectivity itself, is prior to the critical notion of quantum
observable and, hence, to physical theory as such.

In this case, the necessity for spontaneous collapse of superposed quantum states at some
mesoscopic scale, i.e., somewhere between the subatomic and the macroscopic,
forcefully presents itself for consideration. On the one hand, the individual does always
(when awake) appear to be in some constant state of his own individual consciousness,
regardless of how his mood and state of mind might otherwise change or fluctuate. And
so there seems to be some reason for supposing that the general state of an individual‘s
being conscious is an eigenstate with respect to some kind of quantum observable or
other. But if we admit this seemingly unproblematic idea, then we must also consider
what is to be meant by Heisenberg uncertainty with respect to such a quantum observable
as is represented by the individual‘s general consciousness. The spectrum of
consciousness, if you will, represented by such a quantum uncertainty, if a discrete
spectrum, cannot be consistently thought to be one composed of the distinct
consciousness eigenstates of distinct individuals. This is because the general
consciousness of each distinct individual is represented as the eigenstate with respect to a
distinct quantum observable. Of course, the concept of one‘s individual consciousness
being a quantum observable, observable only by himself, as well as the implication that
the individual can somehow be thought of as separate from his own individual
consciousness such that it makes some sense to speak of him observing his own state of
consciousness, is rather ludicrous, one is forced to admit. So the idea of other people's
consciousnesses being observables for me, as well as the notion of my own consciousness
being an observable for me, are both inconsistent and, hence, untenable concepts.

The greatest possible energy uncertainty is represented by the Planck mass,
m_Pl = sqrt(hbar*c/G).
Any energy larger than this cannot ―exist‖ as a virtual particle or field (state), but must be
―real‖ and measurable. Paradoxically, a time eigenstate, (were the concept of time as a
quantum observable admissible in this theory) would correspond to a system in which
time both stands still and ―moves‖ infinitely fast. The question arises, what energy
uncertainty must a given quantum mechanical system possess in order for it to move
through time at the speed of light? It is interesting that the ΔE of a quantum mechanical
system is responsible for the system's temporal evolution (apart from changes in the
system's phase), and that interactions between the brain and a nonlocally connected ΔE
are beginning to be thought to mediate consciousness, which may otherwise be thought of
in Kantian terms as the "intuition of time."
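For reference, the Planck-mass bound invoked above is easy to recompute. The interpretation of the Planck mass as a "greatest possible energy uncertainty" is the author's speculation; only the arithmetic below, using standard CODATA constants, is textbook material:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # Newton's constant, m^3 kg^-1 s^-2
GEV = 1.602176634e-10   # joules per GeV

m_planck = math.sqrt(HBAR * C / G)    # Planck mass, ~2.18e-8 kg
E_planck_GeV = m_planck * C**2 / GEV  # equivalent energy, ~1.22e19 GeV
print(m_planck, E_planck_GeV)
```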

web=
In the following, c.f., www.integralscience.org, Consciousness and Quantum
Mechanics.
"Now Schrödinger gets devious. He puts the radioactive atom, a detector, a hammer, a
poison bottle into a cage with a cat in it. . ."

Is it the cat‘s ―conscious‖ brain back-reacting upon the vacuum which the radioactive
atom interacts with which causes the decoherence/prevents a live cat/dead cat
superposition from forming? Or is it just that the mass of the cat is greater than the
Planck mass?

Because nonlocal connections are superluminal, we may say that all nonlocal connections
are, independently of the observer, only correlations. Do we mean here that nonlocal
connections only exist if they are seen? (For the nonlocal connection that we are saying
existed previously to our knowledge could have itself propagated backwards in time.)

The time-travel ("grandfather") paradox is only insoluble if we assume that there is
physical continuity between myself in the present and myself in the time and place at
which I perpetrate my grandfather's murder.

What is the meaning, if any, of nonlocally connected influences ―propagating?‖ To
trigger the manifestation of nonlocal connections is to back-react upon the ground of the
phenomena of the physical system concerned – perhaps even to back-react upon one‘s
own ground.


Loose speculation concerning the relationships of gravity, topology, and degeneracy:


Does an individual mind constitute a gravitational equivalence class of degenerate
topologies? Does the effect of conscious observers upon prepared quantum systems, in
the sense of inducing wavefunction collapse, have a basis in topology? Are time and
space separable and absolute solely in systems composed only of topological equivalence
classes defined by a particular gravitational degeneracy, which may be viewed as a
special case of quantum energy degeneracy in which temporality of the system is distinct
from the temporality of a global quantum system which is itself nondegenerate? Is this
the origin of subjective versus objectively real temporality? Are all infinite sets of
continuously energy degenerate quantum systems temporally orthogonal with respect to
all other such infinite sets?

Is the value of the cosmological constant so small despite an enormous vacuum energy
density due to this vacuum energy being overwhelmingly degenerate such that the
transformations of its wavefunction do not possess real physical temporality and so do
not represent a spatiotemporal momentum distribution of real energy?

A 360-degree rotation introduces a "twist" into the topology of either the object or the
embedding space of the object. Now what do we have to assume occurs at the quantum
level within the substance of the rotated object which prevents this twist from occurring,
i.e., preserves the topology? The principle of preservation of topology for macroscopic
objects and their kinematics. Topological fluctuations which are coherently
interconnected so as to maintain a zero net change in the expectation value of the
topology of the embedding space of the metric. How does this fact of the relationship of
topology and rotation within three spatial dimensions affect our consideration of general
four-dimensional spacetime rotations? Might inertia originate out of a Lorentz-type
force, but one more general than one possessing a merely electromagnetic origin? Might
acceleration of a mass result in such a "gravitational Lorentz-type force" through this
acceleration's interaction with the interconnected matrix of macroscopic
topology-preserving quantum fluctuations in spacetime topology necessitated by a
four-dimensional generalization of the topology theorem alluded to above? Or, rather, by
the topological elasticity responsible for these compensating quantum fluctuations in
spacetime topology? Could a four-dimensional statement of Gauss's law, or a
generalization of ideas taken from the three-dimensional derivation of the equivalence of
its surface and volume expressions, be utilized in motivating a proof of a four-
dimensional generalization of the three-dimensional "rotation/twist theorem" stated
above?

The nonlinearity of general relativity may lie behind the cosmological constant problem.
If gravity is a truly universal ―force‖ in the sense of sustaining the equivalence principle,
then we would expect the quantum vacuum to generally mediate the gravitational
interaction, rather than a particular exchange particle, i.e., ―graviton.‖

Time may be likened to a CPU clock while motion of coherent structures is a
combination of linear and feedback, i.e., ―circular‖ binary calculations. The competition
between feedback binary and CPU binary operations (by which coherent structures are
―refreshed‖ or updated – the CPU clock rate constitutes the ―refresh rate‖ for non-
cohesive ―pixel sets;‖ for cohesive pixel sets, the refresh rate for the structure is less than
the CPU clock rate) may be likened to special and general relativistic effects,
respectively.

Higgs boson as a particle-physics metaphorical particle. Photon mass would be
determined by gravitational size, i.e., the black-hole radius of the Universe, and would be
related to the breakdown of perfect Lorentz invariance.
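A rough order-of-magnitude sketch of this idea. The association of a photon mass with the size of the observable universe is the author's speculation; the Compton-wavelength-style estimate m ~ ħ/(cR) and the radius value below (roughly the comoving radius of the observable universe) are assumptions for illustration only:

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
R_UNIVERSE = 4.4e26     # assumed radius of the observable universe, m

# photon mass whose Compton wavelength equals the cosmic radius
m_photon = HBAR / (C * R_UNIVERSE)  # on the order of 10**-69 kg
print(m_photon)
```

Interestingly, this is close in order of magnitude to published upper bounds on the photon mass.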

What type of new conservation principle is pointed to by supersymmetry?

Dynamical symmetry breaking requires a composite Higgs particle, perhaps virtual
Cooper-like fermion/antifermion pairs. Supersymmetry entails several Higgs bosons,
perhaps all of the various types of virtual fermion/antifermion pairs which are
manifestations of the fundamental energy fluctuations of the quantum vacuum.

Spin appears to be the most ubiquitous property of particles, both matter particles as well
as the particles responsible for mediating all of the fundamental forces of nature. Spin is
an essential consideration in all interactions among subatomic particles. So the
equivalence principle should be consistent with a spin-based theory of quantum gravity,
rather than an electromagnetic-based theory such as that put forward by Haisch, Rueda,
and Puthoff. "In fact, the spin of a planet is the sum of the spins and angular momenta of
all its elementary particles." But can angular momentum itself be ultimately reducible to
subatomic particle spins?
cit=web=
(c.f., www.sciam.com/askexpert/physics/physics10.html, page 2)

Spin, circular motion, accelerated motion, spin networks, symmetry of rotation (not just
in space, but in spacetime), symmetry and conservation laws (of interaction)

Loops in space interwoven with loops in time in an elastic, dynamic network of
interactions.

A maximum density of momentum exchanges in matter would imply a minimum current
density of imaginary 4-momentum exchanges.

Is a time interval being so small that time ―loses its meaning‖ the same thing as quantized
time?

Is vacuum lattice gravity theory related to spin network gravity theory?



Gravity and density of happenings: if all action be ultimately composed of energy
fluctuations. . .
It does not exist far away (at a distance) in space, but in time.

Why the gradient of an energy field is a force and the phenomenological basis of mass,
inertia, gravity, etc. Do we really need the notion of substance within physical theory?


How is the behavior of a spacelike-separated collection of energy fluctuations which are
nonlocally connected distinct, in its collective behavior, from that of such a set which
exhibits the same behavior but without this connection? Without the nonlocal connection,
the collective behavior of these fluctuations is identical to that of the former in only a
single reference frame. This reminds us of comparing light propagating through a
luminiferous ether with its manner of propagation within a Lorentz-invariant vacuum.

Mass is conserved in pre-relativity physics. In relativity theory mass and energy are
interconvertible. But energy is conserved in relativity theory (the special theory, at least),
while energy is not conserved in quantum theory. The question perhaps arises here: into
what is energy interconvertible that "accounts" for the breakdown of conservation of
energy within quantum mechanics?

Everett‘s relative state interpretation of quantum measurement seems to invoke the
existence of a kind of ―hypertime dimension.‖ Just as the integration of the time
dimension with those of space explained the nonconservation of mass, perhaps the
incorporation of Everett‘s hypertime dimension with that of four dimensional spacetime
will account for the nonconservation of energy within standard quantum theory.

What is the real purpose of the Laser Interferometer Gravitational-Wave Observatory
(LIGO), which has been built in Livingston Parish, Louisiana, if not to detect
gravitational waves generated by supernovae, colliding black holes, etc.? Might this
facility be used to detect gravity waves generated by extraterrestrial warp engines? It
would be much easier to detect locally occurring alien propulsion than astronomically
distant astrophysical processes. Using the Drake equation, combined with some other
intelligent assumptions about advanced extraterrestrial civilizations and the typical
frequency density of supernovae, might we be able to show that there is a much greater
likelihood of detecting alien spaceships?
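The Drake equation mentioned here is just a product of factors, N = R* · fp · ne · fl · fi · fc · L. A minimal sketch; the parameter values below are purely illustrative placeholders, not estimates the author endorses:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: N = R* * fp * ne * fl * fi * fc * L,
    the expected number of communicating civilizations in the galaxy.
    R_star: star-formation rate (stars/year); f_p: fraction with planets;
    n_e: habitable planets per such system; f_l, f_i, f_c: fractions
    developing life, intelligence, and detectable technology;
    L: lifetime of the detectable phase (years)."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative (not authoritative) parameter choices:
N = drake(R_star=1.0, f_p=0.5, n_e=2.0, f_l=0.5, f_i=0.1, f_c=0.1, L=10_000)
print(N)
```

Every factor past R* is highly uncertain, which is why the equation functions more as a framework for the question above than as a calculator.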
September 2011
By the way, my stab at answering the Fermi Paradox is
that all of our immediate interstellar neighbors are inhabited by highly developed
civilizations, which have told those still more advanced extraterrestrials (more remote
from us) visiting them, "Don't worry about the inhabitants of the Sol System; they still
live just on their home planet and aren't even close to developing the resources of their
planetary system enough for us to be able to profitably trade with them." Presumably it
was long ago realized by the older, more advanced civilizations of the galaxy that the
economic cost of gaining raw materials from a neighboring system is much lower once
one has a naïve native host with which to trade. Just imagine here Europeans trading
worthless trinkets for the gold of naïve Mesoamerican natives.

Real particles as solitons in the locally connected quantum vacuum momentum-energy
field. By viewing virtual particles within the vacuum as being themselves solitons in the
nonlocally connected vacuum field, we are admitting the existence of forms of matter and
energy more fundamental than the particles and fields treated in the ―standard model‖ of
particle physics.

What is the quantum conjugate quantity which should be paired with information,
conceived as a physical quantity? And is information the conserved quantity of such a
conjugate pair, with the other conjugate quantity serving merely as a bookkeeping or
accounting variable?



Dynamically interacting vacua, each of which is an open system. How do these vacua
mutually interfere constructively with one another in the absence of a closed system of
feedback? Within a closed system of feedback there is a back-and-forth exchange of
energy, but no "communication" (only context-free data are exchanged). However, in
the interaction of two or more open systems, stable and persisting structures can only be
created and sustained through communication and cooperation between these various
systems. The individual fluctuations of energy which collectively comprise ΔE may not
be thought to be evolving in time, as ΔE is what determines Δt, which, in turn, defines the
time scale of dynamical processes within the system. Spacetime is a projection based
upon the expectation values of time and position. The field equations relate the
expectation value of the spacetime curvature to the expectation value of the momentum-
energy density. This is a formal relationship, which is concretely underpinned by the
physical connection between the fluctuations in momentum and energy and the
fluctuations in position and time. Fluctuations in time are not merely fluctuations in the
position in time at which a particular event occurs; otherwise, an absolute time would
have to be assumed, which would serve as a backdrop for these fluctuations in the timing
of events.



The imaginary momentum of a particle tunneling within 3-space suggests a purely
timelike motion for a particle moving along a spacelike interval. This is a case of a
particle being only an information-based representation. Quantum tunneling may be
mediated exclusively via nonlocally connected quantum fields. This also suggests that
the particle, while tunneling at least, possesses no continuous existence within the
potential energy barrier, where it possesses a negative kinetic energy.

Is there a single mathematical function that connects all possible numbers, real,
imaginary, matrix, fractal, surreal, etc.?

There is the temporality of the sustaining of form and then the temporality of the
transformation of forms. The mass, length, and time effects of relativity are distinct but
related, depending upon whether one treats accelerated motion or not. The temporality of
the sustainment of the structure of spacetime is a temporality not incorporated into the
spacetime itself.

Clearly the timelike momentum of objects on a spacetime surface is connected with their
motion through and along this hypersurface. Compare the oppositions of the prepositions
"on" versus "by" with "through" versus "along." Time dilation and Hubble-constant
contraction are determined by the ratio of mass to black hole (perhaps also "vacuum")
density. Reconcile this with a 3H**2/8piG cosmic mass density. Black hole entropy
suggests that black hole density should be proportional to 1/R**2, or to black hole surface
area, c.f., cellular automata-based theories of relativity. The matter and vacuum
momentum current densities are reduced by this time dilation or Hubble frequency
contraction factor.
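The cosmic mass density invoked here is the standard critical (closure) density, ρ_c = 3H²/(8πG). A quick numerical check; the Hubble-constant value below is an assumption, taken as 70 km/s/Mpc:

```python
import math

G = 6.67430e-11               # Newton's constant, m^3 kg^-1 s^-2
MPC = 3.085677581e22          # meters per megaparsec
H0 = 70.0e3 / MPC             # assumed Hubble constant, converted to 1/s

# critical density of the universe: rho_c = 3 H^2 / (8 pi G)
rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)  # ~9e-27 kg/m^3
print(rho_crit)
```

This works out to roughly a few hydrogen atoms per cubic meter, the benchmark against which the "flat space" balance discussed earlier is measured.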

The circular motion, feedback through 3-momentum exchanges, is converted to linear 3-
momentum directed along a single axis. When the propellant is in its "inert" form, 3-
momentum is virtual and directed along all three spatial axes. When a mass is
accelerated from rest to some velocity, v, its mass and length change locally and
relativistically, but the quantum vacuum undergoes a global relativistic change in its
energy density (during the acceleration). A Lorentz transformation has no effect upon the
energy density of the quantum vacuum, as can be mathematically demonstrated for a
vacuum with energy density proportional to frequency**3.

It appears at least superficially that the density of mass is increased by a factor of
gamma**2 as a result of a Lorentz transformation. How do we explain the single gamma
factor involved in time dilation, given the proposed cellular automata model of relativity,
in this case?

Since the vacuum does not obey the Lorentz symmetry group, we might say that
gravitation breaks the Lorentz symmetry of the quantum vacuum. We must conclude that
the momentum-energy density of the vacuum is unaffected by a Lorentz transformation,
in agreement with what theory predicts for an energy field with a directly proportional
frequency**3 dependence.
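The standard result alluded to here (worked out, e.g., in Boyer's analyses of the classical zero-point field) can be stated compactly. The zero-point spectral energy density is

```latex
u(\omega)\,d\omega \;=\; \frac{\hbar\,\omega^{3}}{2\pi^{2}c^{3}}\,d\omega ,
```

and under a Lorentz boost each mode's frequency is Doppler-shifted, $\omega' = D\,\omega$ with $D = \gamma\,(1 + \beta\cos\theta)$, while aberration and the transformation of the mode volume rescale the mode density in just such a way that

```latex
\frac{u'(\omega')}{\omega'^{3}} \;=\; \frac{u(\omega)}{\omega^{3}} ,
```

so a spectrum proportional to the cube of frequency, and only such a spectrum, looks the same to every inertial observer.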

Clearly the vacuum possessing a gravitational field loses its proportional-to-frequency**3
energy density dependence! Hence, the gravitational field is at least partly based in
spatial or spatiotemporal variations in the vacuum's energy density.

There is nothing remarkable about coincidences from a probabilistic standpoint, as they
inevitably happen given a sufficient number of distinct settings and a sufficient amount of
time.

A Lorentz transformation causes a specific, coordinated change in both vacuum frequency
and wavenumber. The strict proportional-to-frequency**3 dependence is lost due to the
presence of a new proportional-to-wavenumber**3 component. All of the vacuum's
momentum is timelike and none of its momentum is spacelike, hence the lack of
wavenumber dependence. A gravitating body must possess some spacelike momentum
(in the form of exchanges of force-mediating virtual bosons), and this costs the mass some
of its timelike (or imaginary) momentum. This prevents the energy density of the
vacuum from being purely dependent upon the frequency, since this dependency
presupposes a constant relationship between frequency and wavenumber through the
constancy of the speed of light. However, the speed of light is not a constant within
general relativity, i.e., where gravitational fields are present. In other words, saying that
the energy density of the vacuum is directly proportional to the cube of frequency is to
say, in Minkowski or "flat" space, that this energy density is proportional to the cube of
the product of the velocity of light and the wavenumber. In a gravitational field, the
velocity of light is not a constant, but can vary spatially throughout the field (and, by the
relativity principle, the velocity of light should also vary temporally, since there is no
objective decomposition of spacetime into separate space and time components). A given
frequency of vacuum electromagnetic field fluctuation is given in free space (in the
absence of a gravitational field) by just the velocity of light multiplied by the
wavenumber of the fluctuation. In a gravitational field this frequency is given by the
product of three factors: the speed of light in vacuum, c; the wavenumber of the
fluctuation, 1/lambda; and a dimensionless functional with a spatial (and perhaps
temporal) dependence, which is determined by the spatial (and perhaps temporal)
dependence of the 16 gravitational potentials, i.e., the metric components, specified by
the Einstein field equations. A Lorentz transformation alters frequency and wavenumber
in a manner complementary to that in which it alters time and length. So although a
Lorentz transformation alters the vacuum's frequency, it also alters the vacuum's
wavenumber, producing a null net effect on the energy density. So in a gravitational field
the density of the quantum vacuum is proportional to the cube of the vacuum
electromagnetic field fluctuation frequency provided that the above functional is itself
invariant under a Lorentz transformation. This is only possible if multiplication by this
functional generally represents a transformation which forms at least a subgroup of the
Lorentz transformation group. This is not possible, because the Lorentz group is itself a
subgroup of the diffeomorphism group of general-relativistic coordinate transformations.



All conserved dynamical variables are purely timelike in free space. Inertia and
gravitation are phenomena associated with the projection of these timelike four-vectors
into spacelike components. An important question here is whether there is a conserved
four-potential. Does the creation of spacelike components of the four-potential induce a
change in the timelike four-potential, so that the timelike and spacelike components of
some new four-potential vectorially sum to produce a new four-potential with the same
magnitude as the inertial free-space four-potential?


It has been suggested that the ubiquitous and collectively enormous energy fluctuations
of the quantum vacuum are scattered echoes, virtually infinite in number, of the initial
scattering, or shattering, explosion of the Big Bang, itself thought to have originated in a
vacuum fluctuation of statistically infinitesimal probability.

In nonrelativistic physics, any linear momentum can be transformed away via an
appropriate Galilean transformation. This is not the case with angular momentum,
because this is motion not taking place within an inertial frame. In relativity, the 3-
momentum can only be transformed away while preserving the total 4-momentum.
Varying L_z = x*p_y - y*p_x term by term:
ΔL_z = x*Δp_y + Δx*p_y - y*Δp_x - Δy*p_x + Δx*Δp_y - Δy*Δp_x

Use superposition to express L_z in four-dimensional spacetime. Necessity of quantum
mechanics within relativistic physics. Δf*Δg = h/2pi, where either f or g must be a
quantum number and the other term represents a nonconserved quantity.

In the paper, The Mystery of the Cosmic Vacuum Energy Density and the Accelerated
Expansion of the Universe, it is stated that the effective cosmological constant is
expected to obtain contributions from short-distance physics, corresponding to an energy
scale of at least 100 GeV.


The density of the quantum vacuum is calculated within present quantum theory to be
approximately 10**96 kg/m**3. This is what is called the Planck density of matter. But
for matter this density can only exist at a particular smallest spatial dimension, i.e.,
roughly 10**-35 meters, and for virtual matter, at this same spatial dimension but for the
most fleeting instant of around 10**-44 seconds. The difference between real and virtual
Planckions here is that in the former case all of the fluctuations are in the form of
3-momentum exchanges, whereas in the latter they are entirely in the form of 4-momentum
exchanges, imaginary, and exclusively along the time dimension. Between these two
forms of Planck matter there subsists a 90-degree hyperspatial rotation in accordance with
the symmetry of the Lorentz group. There is a similar relationship obtaining between rest
mass and the collection of photons yielded up when this "inert" mass is 100% converted
into energy, in accordance with E = mc**2. The reason that energy possesses mass when
subject to spatiotemporal boundary conditions is that some of the energy is necessarily
converted into mass as a result of the boundary conditions imposed upon this otherwise
totally free energy. The reason that the black hole state equation, if you will, places such
a tight restriction upon the relationship of mass and length, or rather of density and length
in this case, whereas this restriction is utterly absent in the case of vacuum energy, is no
doubt owing to the imposed boundary conditions. We may then think of mass as just
vacuum energy with suitably imposed boundary conditions, the dynamics of matter and
vacuum being otherwise essentially the same.
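The Planck-scale figures discussed in this note follow from the standard combinations of ħ, c, and G; for comparison with the values quoted above, a quick recomputation (standard constants, nothing model-specific):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # Newton's constant, m^3 kg^-1 s^-2

l_p = math.sqrt(HBAR * G / C**3)  # Planck length, ~1.6e-35 m
t_p = math.sqrt(HBAR * G / C**5)  # Planck time,   ~5.4e-44 s
m_p = math.sqrt(HBAR * C / G)     # Planck mass,   ~2.2e-8 kg
rho_p = m_p / l_p**3              # Planck density, ~5e96 kg/m^3
print(l_p, t_p, m_p, rho_p)
```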

A tangled network of feedback loops of quantum vacuum fluctuation momentum-energy
constitutes the structure of matter. Feedback necessarily involves the action of memory.
There is feedback within a given data/information system and then this feedback is itself
continually being updated through nonlocal feedback with some global, distributed
system. This updating of the system‘s internal, dynamical feedback structures may be
thought of as the feedback of that component of the system which is globally unified with
itself at different times.

Data are related synchronically while information may be thought of as data related
diachronically. Note here that the synchronic cannot become diachronic without
participating in the history of its system.



The obvious advantage from a technological standpoint to a spin-based theory of inertia
and gravitation is the potential that exists within such theories for the manipulation of
inertia and gravitation through the application of electromagnetic fields.

It is interesting how phonons are formally identical to the bosons which are responsible
for the binding together of particles into extended material structures. Phonons may be
properly understood as quanta of sound. So it is quanta of sound, i.e., phonons, which
are responsible for the extended structure of all matter. Bosons (as a kind of phonon)
are responsible for transforming timelike, nonlocally connected, unbound vacuum
fluctuations (in the form of virtual particles and fields) into spacelike, extended,
locally connected, bound configurations of real particles. If real particles are bound
together and interact with one another through the exchange of virtual bosons, then how
are virtual particles bound together, how do they interact with one another, and is this a
meaningful question?

There seems to be an analogous relationship of the systems, consonants and vowels
versus fermions and phonons.

To calculate the binding energy of nuclear matter, one must perform a partial sum of an
infinite number of infinite terms, yielding a finite result.

We can look at the ordering relation between matter and vacuum in two possibly related
ways. The structure of matter mirrors that of the vacuum because it is constituted out of
and sustained by this vacuum – from what other source could matter have derived its
structure? Alternatively, we can assume matter has an existence somehow independent
of the vacuum. Here matter resides in this vacuum and the structure of the vacuum is
perturbed by the presence of matter, more specifically, the structure of matter imposes the
pattern of its structure upon that of the vacuum or upon the structure of its pattern of
fluctuating.


Two scalar densities may be equivalent up to a divergence term. A variation in a
physical quantity may always be represented as a coordinate derivative multiplied by
some infinitesimal displacement in a coordinate space: δA = (∂A/∂X_i)·δX_i (summed
over i).

Can a variation in some physical quantity, such as energy or action, be self-consistently
defined within a coordinate space which has an indeterminate or time varying topology or
metric?

If we are speaking of the variation in the action integral from which the general
relativistic field equations are to be determined, then certain restrictions upon the phase
space of the action are necessarily established beforehand. Is the problem with the
equivalence principle parallel to the difficulties with the Everett theory of quantum
measurement? The two-way coupling of matter to its electromagnetic field, and vice
versa, is not pervasive enough to necessitate a nonlinear description. It is the
nonlinearity of the GR field equations which seems to demand the existence of black hole
singularities.

The logical inconsistency of a time-varying spacetime seems to pose serious problems
for Einstein's equivalence principle. More specifically, it is the Einsteinian concept of
the inertia of energy which is at the root of the present and long-standing conflict
between GR and QM, particularly with regard to the two theories' predictions of the
vacuum energy density.

Invariance with respect to the local phase group is accomplished by converting all
derivatives to covariant derivatives. What would happen if a small sub-volume of the
total momentum-energy tensor field were to free fall (move along a geodesic path?) within
the very spacetime determined by this global stress-momentum-energy tensor? A related
question: is a particular stress-momentum-energy distribution necessary to sustain the
"flat space" Minkowski metric? Certainly if we are limited to the context of classical
relativity, the answer is no. In the case where quantum vacuum zero-point
fluctuations become necessary, however, the question remains open.

Since altering the zero-point of the vacuum field alters the mass of particles occupying
this vacuum, we might ask whether the gravitational and inertial masses of the particles
have been changed so as to leave the equivalence principle unharmed. The zero-point
can, of course, be shifted through imposition of electromagnetic boundary conditions
upon the vacuum electromagnetic field. The equivalence principle would have us
believe that the masses of the particles have been altered simply because the density of
vacuum energy contained within the particles has changed (decreased). We might term
this a locality-based explanation for the change in mass. We might also say that general
relativity's explanation of the mass of bodies is strictly phenomenological.



A tensor relates vector inputs and vector outputs. But what happens "within the black
box" is just exchanges of momentum and energy, which are vector, rather than tensor,
qualities. Is it possible for tensor fields to be reducible to some system of exchanges of
vector impulse (fluctuation) quantities which merely reconfigure themselves spatially and
temporally in a manner conveniently described by 2nd and higher rank tensor quantities?
Can all tensors be built up out of scalar and vector operations applied to scalar and vector
field quantities? Still more, can all higher rank tensors be built up from scalar quantities,
say, by adding dimensions and then projecting higher dimensional scalar quantities
"down into" lower dimensional, embedded spaces?

This is analogous to the case of four dimensional rotations of three dimensional objects
being equivalent (at the level of expectation values) to disassembly and reassembly of a
three dimensional object into its mirror image (enantiomorph).

The distribution of momentum-energy within free space is analogous to that of a black hole
with a "sliding Schwarzschild radius." The cosmological redshift contains a component
which is a function of the angular displacement of the distant body's local time axis
relative to that of the observer's local spacetime.

Atoms cannot be created out of the vacuum's fluctuations in energy; cf. Physics News
Update. The temporality of a quantum system is a measure of its vacuum resonance. In a
gravitational field a photon can no longer consistently be described as a virtual e+e-
pair.

Because of the tilting of the light cone within a gravitational field (relative to a reference
frame in free space), virtual e+e- pairs manifest themselves to a small degree as real
photons and to a larger degree (determined by gamma = GM/rc**2) as virtual photons,
i.e., a rotation of flucE (/\E) with respect to flucP (/\P). Stress terms, i.e., "off-diagonal"
terms in T(j,k), indicate that p and E in T(j,k) deviate from being mutually orthogonal.
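The factor gamma = GM/rc² is dimensionless and very small for ordinary astrophysical bodies. A quick sketch evaluating it at the Sun's surface with standard constants (the application to virtual-pair rotation is the note's conjecture; this only fixes the magnitude involved):

```python
# Standard constants, rounded.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m

gamma = G * M_sun / (R_sun * c**2)   # dimensionless potential at the solar surface
print(f"gamma at the Sun's surface: {gamma:.2e}")   # ~2e-6: deep weak-field regime
```

So even at the surface of a star the light-cone "tilt" parameter is of order 10⁻⁶, which is why any real-photon admixture it governs would be tiny.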


There is a minimum velocity supportable by a cellular automata (CA) system, due to the representation of
time differences between inertial frames; i.e., the amount of time dilation cannot be less than
T(Planck). Some information in a CA has to be bound up in memory, which is
information about information – the context, time, and place of the information.





Intrinsic quantum uncertainty, what might be conveniently termed iso-uncertainty, is due
to the presence of vacuum fluctuations.

The force acting on a photon escaping from a gravitational field is just

F(photon) = d/dr {c[r] fluc[p]} = d/dr {c[r] h/lambda[r]}, i.e., F(photon) = d/dr fluc[E], or

F(photon) = (dc[r]/dr) h/lambda[r] - c[r] (h/lambda**2[r]) (d lambda[r]/dr)
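The expanded form of F(photon) is just the product rule applied to c(r)·h/λ(r). A numerical sketch confirming the expansion, using toy profiles for c(r) and λ(r) of my own choosing (illustrative, not from the text):

```python
import math

h = 6.626e-34        # Planck's constant, J*s

def c_of_r(r):
    """Toy profile: light speed increasing with r, as outside a mass."""
    return 3.0e8 * (1.0 - 1.0 / r)

def lam_of_r(r):
    """Toy profile: wavelength growing with r (gravitational redshift)."""
    return 5.0e-7 * (1.0 + 0.5 / r)

def deriv(f, r, eps=1e-6):
    """Central-difference derivative of f at r."""
    return (f(r + eps) - f(r - eps)) / (2 * eps)

r = 3.0
lhs = deriv(lambda x: c_of_r(x) * h / lam_of_r(x), r)       # d/dr {c h/lambda}
rhs = (deriv(c_of_r, r) * h / lam_of_r(r)                   # (dc/dr) h/lambda
       - c_of_r(r) * h / lam_of_r(r)**2 * deriv(lam_of_r, r))  # - c h/lambda^2 dlambda/dr
print(abs(lhs - rhs) / abs(lhs))   # relative discrepancy is tiny
```

Both the rising c(r) and the growing λ(r) contribute with the signs shown; for these toy profiles each term is positive, giving an outward force on the escaping photon.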

The notion of a photon moving "through" space at a finite speed due to the photon's
being scattered along its path by virtual particles neglects the basic fact that the photon is
itself constituted from moment to moment by such vacuum fluctuations with which the
photon allegedly interacts during its travels.

Rather than a photon being absorbed and re-emitted by charged virtual fermions, we say
that the photon is itself the pairing, into spin-1 complexes, of these selfsame charged
virtual particles.

The larger that /\E is for the quantum vacuum, the smaller is /\t, which may be understood
as the minimum time delay between photon scattering events. /\x may be interpreted as
the minimum distance between these scattering events.
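The inverse relation claimed here is the energy-time uncertainty relation /\E·/\t ≥ ħ/2, read as a minimum time delay. An illustrative evaluation (the choice of /\E = 1 eV is mine):

```python
# The larger the vacuum's /\E, the smaller the compatible /\t,
# via /\E * /\t >= hbar/2. Illustrative numbers.
hbar = 1.0546e-34     # reduced Planck constant, J*s
eV = 1.602e-19        # one electron-volt in joules

dE = 1.0 * eV                 # assumed energy uncertainty (my choice)
dt_min = hbar / (2 * dE)      # minimum compatible time scale
print(f"{dt_min:.2e} s")      # ~3e-16 s
```

Multiplying dt_min by c would then give the corresponding minimum length scale /\x in this reading of the note.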

The shift in /\E along a photon's trajectory is /\fluc[E], where fluc[E] is the spectral
fluctuation energy and /\fluc[E] = h × /\fluc[f].

The time delay between photon scatterings between point A and point B, where the
distance line[AB] = d, is increased. This is due to the increased linear density of
scattering events between A and B.

There is an increase in the energy degeneracy of matter in a gravitational field relative to free space. The
temporality of energy-degenerate transformations begins to supplant the free space
temporality associated with changes in quantum system energy. A black hole is 100%
energy degenerate in that all processes taking place within the event horizon occur
without the hole changing its energy.

The spaceship is not moving "through space"; just its representation moves across the
observer's visual field. Because the observer views the ship from a slightly different
reference frame (separated from that of the ship by a timelike Lorentz "boost"), one views
a tiny component of the ship's motion through time along a spatial axis of one's own
reference frame.

Some of the information dictating the reconstitution or recreation of individual fermions
is received not from the quantum vacuum but from other fermions with which the first
fermion is exchanging 3-momentum.

The input of alleged raw energy is responsible for extremely complicated changes in
subatomic processes.

The vacuum must not only reconstitute the mass, but it must do so for the mass plus its
kinetic energy. To reconstitute energy from vacuum energy may be "less labor
intensive" than reconstituting a similar amount of energy in the form of mass. This is
because of the information content associated with the boundary conditions to the energy
which cause this energy to manifest itself as interacting matter particles and fields. To
reconstitute the boundary conditions, energy must enter the mass from the vacuum side,
travel a certain minimum circuit within the mass responsible for its material structure,
then exit the mass and return to the vacuum before the vacuum possesses sufficient
information to reconstitute the mass for the "next cycle." In the case of uniform
velocity, the extra amount of time the vacuum needs to perform this calculation is
determined (in the 2nd order approximation) by the ratio of the kinetic and rest mass
energies of the mass in question. In other words, the vacuum does not second-guess the
mass that it will continue in uniform motion through space, because the vacuum has no
knowledge of such a quantity, perhaps.

Does light travel faster between Casimir plates? Cf. Ostoma, physics eprint archive.


Superposition versus eigenvalues – unitary evolution of Psi(r,t) according to
Schrödinger's equation versus Psi collapse.

Schrödinger's principle of objectivation breaks down at sensitive feedback
scales.

Is no objectively valid theory of Heisenberg uncertainty possible because of the necessary
ultimate interaction of the quantum uncertainties of the observer's brain with those of the
quantum system he is observing? What about coherent, resonant interaction between the
quantum fluctuations of the observer's brain and the quantum fluctuations in the system
being observed? Would the correlation of fluc(obs) with fluc(sys) be manifest in the
array of expectation values for objective observables, while the interactions of fluc(obs)
with fluc(sys) are responsible for the uncertainties in these physical observables that can
be wholly attributed either to the system or to the observer? Would the abstract features of
the system being observed be based in the nonlocal connectivity of an infinite system of
would-be observers, while the concrete features of the system are based in local
connections, or potential causal relationships, between the system and all those
experimenters and observers "actually present" at the experiment?


If wavefunction collapse is truly random, then how is quantum computing possible?
The prohibition stated in special relativity that no mass may move faster than light is
perhaps more correctly cast as "no mass may travel faster than the velocity of time." Of
course, in a vacuum possessing an energy density greater than that of the free space
vacuum, masses could theoretically travel faster than "c," which is really only a local
velocity of light – this is because "free space" is a misnomer if what is implied here is a vacuum
completely free of boundary conditions upon its momentum-energy as well as upon the
statistics of the fluctuations of this momentum-energy.

July 2011
This could possibly play a role in the anomalously high energies of some cosmic rays.

We may speak of mass as energy in its purely spatial aspect or mode. Conversely, we
may speak of energy as mass in its purely temporal aspect. Gravitational energy cannot
be included with all other energy into a tensor: this quantity must be described in general
relativity as a pseudo tensor, which is an unconserved quantity. It is for this reason that
gravitational energy cannot be localized within GR theory.

An object with mass cannot, according to the theory of general relativity, be indifferent to
the passage of time. Even photons are not totally indifferent to the passage of time,
provided that they traverse a space containing a gravitational field. The notion of substance necessarily involves
substances' indifference to the passage of time. Do we see a way of deriving quantum
mechanics from general relativity, say, through a stochastic dynamic principle, i.e., where
time is ineradicable within general relativity because of the way this theory necessarily
treats the concept of mass? The nonlocality of quantum mechanics is strongly suggested
by the essential nonlocality of gravitational energy within general relativity theory.

But this time that is ―stimulated‖ is not some universal, cosmic time, for none such exists:
intersubjective temporality is always reference frame dependent – nonlocal quantum time
is not. Each person as his or her own private, nonlocally connected vacuum state with its own /\E,
/\p, etc.

Is locality produced by the collective mutual interference of quantum nonlocal systems?
Eigenfunctions of the same Psi do not mutually interfere because all are mutually
orthogonal, but how can two independent, nonlocally connected quantum fields interfere?

Your argument for a wrong potential energy term in the Schrodinger equation seems
based on the notion that imaginary velocity is meaningless. You must know that
imaginary velocity is a staple notion within special relativity theory, e.g., the current
density of electric charge of a charge "at rest" is just Rho*(ic), where current density is
defined as density times velocity.

Moreover, during quantum tunneling a charged particle possesses an imaginary
momentum. If it were not possible to accurately measure the potential barrier through
which the charged particle is tunneling, it might be conceivable to say the particle does
not actually possess an imaginary velocity, but that the potential through which it moves
must have a, say, frequency structure, which allows the particle to harmonically penetrate
it (or something similar). If you do some more basic research, you will see that the
notion of imaginary velocity is widespread in modern physics and doesn't appear to land
physicists in contradictions or absurdities.


Regards,
Russell

The distinction of real versus virtual which, of course, comes out of quantum field theory,
is a binary opposition which cuts across the opposition existence versus non-existence. If
this is true, then existence versus non-existence is not itself a binary opposition, i.e.,
existence and nonexistence are not mutually exclusive categories. If not mutually
exclusive categories, then each category, that of existence and that of non-existence, may
be defined independently of the negation of the other. This might be the case if existence
and non-existence are manifestations of some deeper system, e.g., that of Being. Since
both mathematical entities and physical objects possess Being although mathematical
entities themselves do not "exist", rather, they may be thought of as "subsisting," Being is
seen as a broader category than mere Existence. Your theory seems to presuppose that
existence and non-existence are dual opposing, or, binary opposite or mutually exclusive
categories. But there seems no good reason for supposing this presupposition correct.

Regards,
Russell


Think of a spherical shell of arbitrary thickness expanding outward at the speed of light
from some origin. If the speed of gravitation is c, which is somewhat in doubt (cf. Tom
Van Flandern Internet postings c/o www.deja.com), then the gravitational field of
this spherically symmetric distribution of photons will also be moving outward at the
speed of light. The photon shell's gravitational field is comoving, if you will, with the
photons.

But if the gravitational field comoves, as it were, with the photons, then the notion of the
photons being the "source" of this gravitational field becomes highly problematic, as you
can see if you stop to consider the gravitational field at points at rest ( "at rest" in the
sense of null red- or blue- shifting of the photons relative to this point) within the
expanding photon distribution. There is, of course, no gravitational field outside the
photon distribution.

It is much less problematic to view the comoving gravitational field of the photon shell as
a retarded potential expanding outward at the speed of light and stemming from a "matter
distribution" existing just prior to its being converted into energy, i.e., photons, than to
contemplate an instantaneous (as opposed to retarded) gravitational (potential) field
generated by the photons in real time, if you will.

It appears that if the speed of gravity is only c, then the gravitational field is separable
from its source upon this source being converted completely into energy, i.e., photons
and other massless particles.

Regards,
Russell



We cannot predict individual quantum events, but only the probabilities of those
individual events occurring. Similarly, we cannot predict a given particle‘s momentum,
generally speaking, but we can generally predict the particle‘s momentum uncertainty.
Hypothesis: only conserved quantities can be predicted; if the value of a non-conserved
quantity can be predicted, it is by way of some conserved quantity to which it is
connected.

We imagine the photon passing in front of us and unreflectingly suppose that, although
relativity states that no time passes for the photon, we see that time does not pass for the
photon "from our point of view," in which we imagine the photon moving at the speed of
light "through space" and "across our visual field." But for no time to pass for the
photon, it must not interact with anything along its path, for to do so would mean that less
than 100% of the photon's action is directed along its direction of motion. The
scattering of the photon away from its original trajectory involves an exchange of energy
along other spatial directions, which is secretly an exchange of i-momentum, say, for
energy, and then an exchange of this energy for j- and k-momentum, say. Exchanges of 3-
momentum are always at the expense of exchanges of energy, and quantum spin is the
index of the distribution of the components of stress-momentum-energy among the three
types of exchange: vac-vac, mat-vac, and mat-mat. The components of (generalized)
angular momentum are each composed of commuting observables, one of which
is conserved and possesses a quantum number (and so is subject to quantum selection rules),
while the other with which it is paired to make up the angular momentum component is not
conserved. In an abstract and formal way, contracting /\Px will result in a sympathetic
expansion of /\x via the simple Heisenberg relation, /\x /\Px >= h/2pi. However, the
underlying mechanism for the above Heisenberg relation lies with the four-dimensional
spin network that comprises both matter and vacuum.
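The reciprocity between /\x and /\Px can be checked numerically: a Gaussian wavepacket saturates the Heisenberg bound, /\x·/\Px = ħ/2 (the note writes the bound loosely as h/2pi; the strict minimum is ħ/2 = h/4π). A pure-Python sketch with an illustrative width:

```python
import math

hbar = 1.0                        # work in units where hbar = 1
sigma = 0.7                       # illustrative wavepacket width (my choice)
dx = 0.001
xs = [i * dx for i in range(-8000, 8001)]

def psi(x):
    """Normalized real Gaussian wavefunction of width sigma."""
    return (2 * math.pi * sigma**2) ** -0.25 * math.exp(-x**2 / (4 * sigma**2))

# /\x from <x^2> (here <x> = 0 by symmetry)
x2 = sum(x * x * psi(x) ** 2 for x in xs) * dx
# /\p from <p^2> = hbar^2 * integral of |dpsi/dx|^2 (psi real, so <p> = 0)
p2 = hbar**2 * sum(((psi(x + dx) - psi(x - dx)) / (2 * dx)) ** 2 for x in xs) * dx

product = math.sqrt(x2) * math.sqrt(p2)
print(product)                    # ~0.5, i.e. hbar/2
```

Halving sigma halves /\x and doubles /\p, which is the "sympathetic expansion" the note describes.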

In time's own reference frame, no time passes, provided that time does not interact with
anything along its trajectory. A free photon is in an energy eigenstate. Quantum
mechanics from general relativity. Time-space versus here-now, with each person
having his or her own time line. Is there gravity for the subjective spacetime?


We must remember that a mass which falls in a straight radial line in three-dimensional
space is following a curved trajectory within four-dimensional spacetime. Moreover,
this curvature is, by and large, entirely with respect to the time dimension. The mass'
instantaneous velocity along its curved timelike trajectory has everything to do with the
timelike curvature "seen" by this mass. We state here as a hypothesis that the radius of
timelike curvature experienced by a test mass falling toward the centroid of a spherically
symmetric mass distribution can be simply calculated from the mass' instantaneous
velocity and acceleration, according to the equation below:

R(timelike) = v(instantaneous) / |a|(instantaneous)

So curvature isn't uniquely defined independent of a reference frame being specified. Of
course, inertial reference frames are defined in terms of the four-velocity of a test mass.
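Taking the hypothesis R(timelike) = v/|a| at face value for the simplest case, a mass falling from rest near Earth's surface: the ratio carries units of time (consistent with a timelike radius measured in seconds), and under uniform acceleration it equals the elapsed time of the fall. A sketch (the scenario and numbers are my illustration; the formula is the note's conjecture):

```python
g = 9.81             # m/s^2, uniform gravitational acceleration near Earth's surface
t = 3.0              # seconds into a fall from rest (my choice)
v = g * t            # instantaneous speed
R_timelike = v / g   # the note's hypothesized ratio; units of seconds
print(R_timelike)    # equals t: the elapsed time of the fall
```

So for this simplest case the hypothesized timelike radius grows linearly with the duration of the fall, which at least makes its reference-frame dependence explicit.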

Although the form of ordinary human communication is largely after the fashion of an
absolute, the substance thereof is by and large metaphorical. An idiom may be thought
of as a latent metaphor, expressed in the grammar current when the historical context of
the idiom's metaphor was commonly understood.

The irreversibility of time‘s arrow may equivalently be described as the hysteresis of the
Universe within the frequency domain. Reversibility is prevented when the underlying
ground of change is itself back-reacted upon by change to the entities it creates and
sustains, resulting in a global shift or change in it.

Vacuum fluctuations vs. radiation reaction is related to the Lorentz transformation of mass
energy vs. the Lorentz transformation of vacuum energy. The energy density of the vacuum is
Lorentz transformed, but not its mass.

The curvature of a circular arc possessing a timelike circumference results in a
spacelike acceleration toward a centroid of a mass distribution. And so in this way we
see gravitation as a kind of centripetal/centrifugal force. Perhaps the expansion of the
universe (or the repulsive force behind it) is normally in local balance with . . . ?


The self-generative aspect (endurance) of matter is dependent upon the binding forces
ultimately responsible for gravity. Persistence of a particle depends upon dilation of the
particle‘s local time. Binding problem and temporality of consciousness.

The recursive nonlinearity of general relativity suggests that spacetime and matter were
engendered together during the Big Bang.

Momentum-energy exchanges in free space vacuum are symmetrical, implying that the
net spacetime density of momentum-energy fluctuations is 0. A slight deviation of this
density might be closely related to Einstein's cosmological constant. Perhaps the local
imbalances in the density of momentum-energy vacuum fluctuations must be redressed
for equilibrium's sake by an equivalent counteracting system of nonlocal, global
fluctuations. This reminds us of Puthoff's zero-point field electromagnetic energy
feedback loop between real-particle-produced ZPF and quantum vacuum electromagnetic
fluctuations. Perhaps this is the reason for radiation reaction and vacuum fluctuations
comprising the overall ZPF in equal parts.

Only the timelike component of the momentum-energy fluctuation field connected with
the "plus part" (relative to free space) will possess an approximately inverse linear spatial
variation (weak field limit assumed here).

The spatial component of the difference in momentum-energy fluctuation between "true
free space" and that of the Universe's global spacetime, is what must spatially vary in
order to produce a Newtonian inverse square field.

Giving a particle a Lorentz boost changes the vacuum that the particle interacts with (and
not just in the direction of motion), but does not change it in any way that is
detectable/measurable within the particle; i.e., there is no measurable shift in the
distribution of momentum-energy fluctuations within the particle (or mass). Perhaps a
Lorentz boost causes nonlocal changes to the vacuum, say, in the form of timelike and
spacelike shifts in the phase of Psi functions. Lorentz invariance depends on the /\p and /\E
of the vacuum with which a mass interacts, not on any /\p and /\E intrinsic to the mass
itself. We must remember that vacuum statistics are Lorentz invariant.

Spacetime is constructed from the disparities in local and global values of the
uncertainties and fluctuations in momentum and energy. Momentum and energy must
have an internal component - otherwise we cannot invoke their uncertainties and
fluctuations in order to define spacetime. One must already have a spacetime in
existence prior to defining a gravitational field - this is the notion of gravity atop a
background metric - weak field approximation.

Certainly what the spacetime is constituted from does not require the existence of
spacetime metric for its existence, but the metric is simply a collection of initial and
boundary conditions upon this transcendental vacuum field. The word "transcendental"
is invoked because the vacuum here is understood to transcend any particular spacetime
or spatiotemporality itself.




Symmetries are generally dependent upon the specification of initial and boundary
conditions for some nonconserved field quantity, resulting in a new description for this
field involving a collection of interacting fields, which together comprise a closed system
of substrate, common denominator "stuff" for which there exists a continuity equation.
The components of this conserved quantity, i.e., the "stuff," are reversibly inter-
transformable, and reactions between these components, i.e., the distinct field quantities,
undergo time-reversible reactions with one another. Since the above boundary
conditions are actually artifacts of the nonconserved field and sustained by it, irreversible
and noncomputable processes underlie the maintenance of the field components as static
components of the total conserved, composite, and interacting field system. The
component field quantities of the closed system construct, borne of the nonlocally
connected, open system field (the field existing prior to the instituting of any initial and
boundary conditions whatever), are thusly radically overdetermined. And so the "mother
field" can temporally evolve independently of the temporal evolution of the field
components existing as artifacts of the imposed closed system of fields. And there is no
unique interconvertibility between the closed and open system field quantities.


If spacetime is generated and sustained through fluctuations of vacuum energy, then the
notion of vacuum fluctuation length, time, and mass is to "put the cart before the horse."

So is Psi really a function of vacuum momentum-stress-energy current/flux densities
instead of the spacetime coordinates? Einstein did not include a term for external
pressure in his cosmological field equations, since he believed that spacetime could not
actually "contain" vacuum energy, but he included in these equations a term for "internal
pressure."

There is no alteration in the mass of the energy and momentum fluctuations in the
quantum vacuum, but only changes in their flux densities. The only way to reconcile the
120-order-of-magnitude discrepancy between general relativity's and quantum theory's
predictions for the energy density of the vacuum is to locate the underlying mechanism
for gravitation within the dynamics of the quantum vacuum itself. This is to say that, in
this case, the subatomic virtual processes which constitute the phenomenon of gravitation
as such would not themselves be gravitational sources. A similar example of this line
of reasoning might be the statement that the physical processes taking place within the
brain, by which the perception of color is effected, do not themselves possess color, but
color is constituted out of these processes. Another example of this deductive/causal
principle applies to artificial intelligence and the question of whether or not conscious
thought might be formalizable in the form of a computer program: the process by which
abstract categories are generally brought into being, or brought into being as such, does
not itself submit to a complete description in terms of abstract categories, i.e., particular
exemplars of what it is ultimately and generally responsible for producing.

Discuss the difference, if any, between current and flux densities. Cellular automata
theory provides an underlying mechanism for inertial mass, and hence, by the
equivalence principle, for gravitation as well, hopefully. Current densities are dependent
upon the velocity of some substance, i.e., conserved quantity, obeying a continuity
equation. The flux density is not dependent by definition upon the velocity, as is the
current density, but rather upon the number of particles penetrating a surface, or
hypersurface, for that matter, per unit time. The asymmetry of lower dimensional
interactions can be compensated for within a higher dimensional space/continuum.

We are saying that the uncertainties in x, t, p, and E obey Lorentz covariance as
components and Lorentz invariance when together comprising tensorial quantities. But
we are also saying that the dynamics of the interaction of these uncertainties, specifically
through the interaction of the fluctuations comprised by these uncertainties, does not
itself conform to Lorentz covariance/invariance. So this suggests that a time asymmetry
appears as a result of the correlation/interference of fluctuations in momentum-energy.
This accounts for the fluctuations not themselves being gravitational sources while being
intimately related to the thermodynamics of black holes and the vacuum as
manifested in such phenomena as Hawking radiation, spontaneous emission, and the
Davies-Unruh effect, as well as the phenomenon of wavefunction collapse.



The reason that the velocity of light is reduced in a gravitational field (relative to the free
space field) is that the velocity of time is reduced in such spacetime regions. This in turn
is due to a reduction by a similar fraction in the density, or rather the current density, of
the quantum vacuum's energy.
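The claimed reduction can be made concrete with the standard Schwarzschild-coordinate speed of radially moving light, c(r) = c(1 − 2GM/rc²), which ties the slowing of light to the slowing of time as the note suggests (evaluating it at the Sun's surface is my illustration):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def c_coord(r):
    """Schwarzschild-coordinate speed of radially moving light at radius r."""
    return c * (1.0 - 2.0 * G * M_sun / (r * c**2))

ratio = c_coord(6.957e8) / c     # at the solar radius
print(ratio)                     # ~0.9999958: light "slows" where time slows
```

Locally, of course, any observer still measures c; the reduction appears only in the coordinate description, which is the sense in which "the velocity of time" is reduced.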

The vacuum is not unified; rather, there is a plurality of nonlocal vacua which interact with
one another. Momentum exchanges are defined by virtual energy transitions between
these nonlocal vacua. Reduction of uncertainty is information. But here it is the uncertainty
of distinct nonlocal vacua that is reduced. Interference of energy eigenstates creates energy
uncertainty. Interference of nonlocal vacua creates information. The nonlocal quantum
vacua, through their mutual interference, constitute spacetime.

The temporal evolution of the amplitude, Psi(t) = Psi(0) × exp(iwt), is an oscillation or cycling
through a periodically connected continuum of degenerate Psi-i. This temporal
evolution of mere amplitudes does not itself translate into a temporal evolution of any
physical observables. Only interference of this temporally evolving amplitude
with other such temporally oscillating amplitudes will translate into a physically
measurable temporal evolution of some density function with respect to some physical
observable.
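The point about amplitudes versus observables can be checked directly: |exp(iwt)|² is constant, so a lone eigenstate's density is static in time, while a two-eigenstate superposition's density beats at the difference frequency. A minimal sketch with illustrative numbers:

```python
import cmath

w1, w2 = 2.0, 5.0                 # illustrative angular frequencies (my choice)
a1, a2 = 0.8, 0.6                 # real amplitudes at some fixed point x

def density_single(t):
    """|psi|^2 for one energy eigenstate: time drops out."""
    return abs(a1 * cmath.exp(-1j * w1 * t)) ** 2

def density_super(t):
    """|psi|^2 for a superposition: oscillates at w2 - w1."""
    return abs(a1 * cmath.exp(-1j * w1 * t) + a2 * cmath.exp(-1j * w2 * t)) ** 2

print(density_single(0.0), density_single(1.3))    # equal: no time dependence
print(density_super(0.0), density_super(1.3))      # differ: beats at w2 - w1
```

The cross term 2·a1·a2·cos((w2 − w1)t) is exactly the "interference of temporally oscillating amplitudes" the note says is needed for measurable temporal evolution.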

Two dimensional time: the rate at which a mass reconstitutes itself out of the quantum
vacuum (the other) and the rate at which the mass reconstitutes itself out of itself.
February 2012
So can time dilation be more intimately related to an absolute 3 + 2 structure of
spacetime in which local temporality is governed by shifting in the components of 2d
time within a time plane? Is each time component weighted after the fashion of the
orthogonal components of a 2d wavefunction? On this view, extending this notion to a
model of hyperdimensional spacetime, the spatial hypersurface would be structured by a
sufficiently rich n-dimensional temporal space, e.g., with each g(i,k) in the metric tensor
corresponding to a distinct dimension of absolute/simulation time.

There are momentum fluctuations with energies identical to the transition energies within the
vacuum lattice. Apparently, with the exchange of virtual bosons between real fermions
occupying quantum states within the vacuum, this exact correspondence between
momentum and energy fluctuations is broken. What symmetry is broken by the
gravitational field here? So are virtual energy transitions decreased in density, allowing a
higher current density of momentum fluctuations in the form of boson exchanges?

In special relativity, spacetime is a closed system; but due to the irreversibility and
nonlinearity of general relativity and the nonlocality of gravitational energy, gravitation
opens up spacetime as it breaks its Lorentz symmetry.

Information is not a function of a closed system and so is not a conserved quantity. Does
this mean that an infinite amount of information exists? Certainly an infinite quantity is
not conserved. This would fail to be true only if information is a component of some larger
quantity that is conserved.

When we say that a quantity is not conserved, we might mean either that it passes away
or can be destroyed, or that it can be created, or that it keeps "coming back," i.e.,
spontaneously reappearing – that it fluctuates in an unpredictable manner.

Interference of orthogonal Psi-i leads to energy uncertainty. Interference of
nonorthogonal Psi-i leads to momentum uncertainty. Does the introduction of a
gravitational field into the environment of a quantum superposition of orthogonal energy
eigenstates cause a shift toward nonorthogonality of these energy eigenfunctions, say,
through the "curving of time"? And does the concomitant "curving of space," in a contrary
manner to the dilation of time, result in suppressed energy exchanges and enhanced
momentum exchanges?

The sixteen g(i,k) and the coordinates of the 16-dimensional phase hyperspace: Sum(i) ΔXi ΔE + Sum(i) Δt ΔPi = Δ**2 F(x,y,z,t). These 16 dimensions are defined by the orthogonal phase uncertainty planes, (Δx, ΔPx), (Δy, ΔPy), (Δz, ΔPz), (Δct, ΔPct). Just as four orthogonal lines form a four-dimensional space, four orthogonal planes form a 16-dimensional phase uncertainty superspace. Can this information be recovered by considering the angular momentum uncertainties matrix? But there are only 12 distinct components (at first glance) within this matrix. Does this imply that there are four equations which can be found for the parameters of the phase uncertainty superspace given above? There are, of course, the four Heisenberg uncertainty relations between Δx and ΔPx, Δy and ΔPy, etc. Each of the above uncertainty planes appears to possess an area of Planck's constant units of angular momentum. Does this suggest a cellular automata analogy for spacetime? Or maybe these 16 parameters are functions of points within a four-dimensional space, since Δx and ΔPx are related by a constant, h? This sounds somewhat more reasonable than invoking 16 separate dimensions of a superphase space. After all, the g(i,k) are defined on spacetime. In other words, we expect to be able to recover both the spacetime curvature information and the dynamical geodesic motion information from the ΔXi and ΔPi.
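The four Heisenberg relations invoked here can at least be checked numerically. A minimal sketch (NumPy; the Gaussian packet and grid parameters are illustrative assumptions, not from the text) for a wavepacket that saturates the bound Δx ΔPx = hbar/2:

```python
import numpy as np

hbar = 1.0          # work in natural units
sigma = 0.7         # packet width -- an arbitrary illustrative choice

# Position grid and a normalized Gaussian wavefunction
x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# Delta-x from the position-space probability density
prob_x = np.abs(psi) ** 2
mean_x = np.sum(x * prob_x) * dx
delta_x = np.sqrt(np.sum((x - mean_x) ** 2 * prob_x) * dx)

# Delta-p from the momentum-space density (discrete Fourier transform)
p = 2 * np.pi * hbar * np.fft.fftfreq(x.size, d=dx)
dp = 2 * np.pi * hbar / (x.size * dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi * hbar)
prob_p = np.abs(phi) ** 2
prob_p /= np.sum(prob_p) * dp      # renormalize against FFT conventions
mean_p = np.sum(p * prob_p) * dp
delta_p = np.sqrt(np.sum((p - mean_p) ** 2 * prob_p) * dp)

product = delta_x * delta_p        # equals hbar/2 for a Gaussian packet
```

For any non-Gaussian packet the same computation yields a product strictly greater than hbar/2.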

Virtual particles are phenomena of one's own vacuum state. Real particles are the product of the interaction of nonlocal vacua, i.e., personal ground states. Virtual particles are "off mass shell." Is this related to the question of whether virtual particles have a real mass at all?

As is well understood in the conventional, i.e., Copenhagen, interpretation of quantum theory, the amplitude or wave function does not itself possess any absolute physical meaning; only the square of this quantity does. Just as the cross term of two distinctly different, but nonorthogonal, wave functions is interpreted as a correlation of these two wave functions, the square of any given amplitude may be understood as the self-correlation of this wave function. And it is this self-correlation which can be measured and which possesses a definite physical meaning. Self-correlation, i.e., |Psi x Psi*|, if Psi is in an eigenstate with respect to the energy, results in a purely spatially varying density distribution function. Any correlation of nonorthogonal eigenfunctions which is not an autocorrelation will generally give a density function which is both spatially and temporally varying throughout some region of spacetime. So even though each eigenfunction is in an eigenstate of the energy and so is not temporally varying in an absolute sense, the correlation of these two eigenfunctions, which is their interaction density, is temporally varying.
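The contrast drawn here between the stationary density of a single energy eigenstate and the temporally varying density arising from the cross term of two eigenstates can be made concrete with a particle-in-a-box toy model (the box, units, and the chosen pair n=1, n=2 are illustrative assumptions, not from the text):

```python
import numpy as np

hbar, L = 1.0, 1.0                 # natural units, unit box
x = np.linspace(0, L, 512)

def eigenstate(n):
    """Stationary state of a particle in a box of width L."""
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

def energy(n):
    return (n * np.pi * hbar) ** 2 / 2    # mass set to 1

def density(t):
    """|Psi|^2 for an equal superposition of the n=1 and n=2 states."""
    psi = (eigenstate(1) * np.exp(-1j * energy(1) * t / hbar)
           + eigenstate(2) * np.exp(-1j * energy(2) * t / hbar)) / np.sqrt(2)
    return np.abs(psi) ** 2

# A single eigenstate's density is time-independent ...
rho1_a = np.abs(eigenstate(1) * np.exp(-1j * energy(1) * 0.0)) ** 2
rho1_b = np.abs(eigenstate(1) * np.exp(-1j * energy(1) * 1.3)) ** 2

# ... while the superposition's density sloshes at the transition
# (beat) frequency (E2 - E1)/hbar, via the cross term
drift = np.max(np.abs(density(0.0) - density(0.2)))
```

The `drift` here is entirely the cross term, i.e., the "interaction density" of the two eigenfunctions.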


Is there a single, mathematical function that connects all possible numbers, e.g., real,
imaginary, matrix, fractal, surreal, etc., . . . ?

One integrates the differential of the functional representing the Lorentz transformation along the geodesic path of the test particle to transform from frame A(x`,y`,z`,t`) to frame B(x``,y``,z``,t``) within a gravitational field.

The steady state response of the system is, under certain (boundary and initial) conditions, the infinite sum of all of its possible transient responses. Of course, each of the system's transient responses may be alternatively represented in the frequency domain via Fourier analysis.
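The steady-state/frequency-domain relationship can be illustrated with a driven, damped oscillator (the oscillator, its parameters, and the RK4 integration are all assumptions of this sketch): after the transients decay, the simulated late-time amplitude should match the frequency-domain transfer function evaluated at the drive frequency.

```python
import numpy as np

# Driven, damped oscillator: x'' + 2*g*x' + w0**2 * x = cos(wd * t)
w0, g, wd = 2.0, 0.3, 1.5     # illustrative parameters

def rhs(t, y):
    x, v = y
    return np.array([v, np.cos(wd * t) - 2 * g * v - w0**2 * x])

# Classical RK4 from rest, long past the ~1/g transient decay time
dt, t, y = 0.002, 0.0, np.array([0.0, 0.0])
xs = []
for _ in range(50000):                 # integrate out to t = 100
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, y + dt / 2 * k1)
    k3 = rhs(t + dt / 2, y + dt / 2 * k2)
    k4 = rhs(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
    xs.append(y[0])

# Late-time (steady-state) amplitude from the time-domain simulation ...
sim_amp = np.max(np.abs(xs[-3000:]))

# ... versus the frequency-domain prediction |H(wd)|
pred_amp = 1.0 / np.sqrt((w0**2 - wd**2) ** 2 + (2 * g * wd) ** 2)
```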


The reason that the velocity of light is reduced in a gravitational field (relative to the free
space field) is that the velocity of time is reduced in such spacetime regions. This in turn
is due to a reduction by a similar fraction in the density, or rather, the current density, of the quantum vacuum's energy.
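One coordinate-level way to make "the velocity of light is reduced in a gravitational field" quantitative is the coordinate speed of a radial light ray in the Schwarzschild metric; the metric and constants below are standard physics, not taken from the text, and the statement is coordinate-dependent.

```python
# Coordinate speed of a radially moving light ray in Schwarzschild
# coordinates: c_coord(r) = c * (1 - r_s / r), with r_s = 2*G*M/c**2.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
R_sun = 6.957e8      # m  (a ray grazing the solar surface)

r_s = 2 * G * M_sun / c**2             # Schwarzschild radius of the Sun, ~3 km
c_coord = c * (1 - r_s / R_sun)        # coordinate light speed at the surface
fractional_slowing = 1 - c_coord / c   # ~4e-6
```

Tiny as it is, this slowing is measurable as the Shapiro time delay of radar signals passing near the Sun.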

The vacuum is not unified, but there is a plurality of nonlocal vacua which interact with
one another. Momentum exchanges are defined by virtual energy transitions between
these nonlocal vacua. Reduction of uncertainty is information. But there is reduction of
uncertainty of distinct nonlocal vacua. Interference of energy eigenstates creates energy
uncertainty. Interference of nonlocal vacua creates information. The nonlocal quantum
vacua between themselves constitute spacetime.

The temporal evolution of the amplitude, Psi(t) = Psi(0) x exp(iwt), is a continuous oscillation through degenerate Psi-i.
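That the oscillation is through physically degenerate states can be checked directly: a global phase exp(iwt) leaves |Psi|**2 unchanged at every instant. A toy finite-dimensional state (chosen arbitrarily for the sketch):

```python
import numpy as np

w = 3.0                              # arbitrary angular frequency
psi0 = np.array([0.6, 0.8j, 0.0])    # arbitrary normalized state vector

# Density |Psi|^2 at several instants of the phase oscillation
densities = [np.abs(psi0 * np.exp(1j * w * t)) ** 2 for t in (0.0, 0.7, 2.9)]
```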

Two dimensional time: the rate at which a mass reconstitutes itself out of the quantum
vacuum (the other) and the rate at which the mass reconstitutes itself out of itself.


The circular motion, feedback through 3-momentum exchanges, is converted to linear 3-
momentum directed along a single axis. When the propellant is in its "inert" form, 3-
momentum is virtual and directed along all three spatial axes. When a mass is
accelerated from rest to some velocity, v, its mass and length locally, relativistically
change, but the quantum vacuum undergoes a global relativistic change in its energy
density (during the acceleration). A Lorentz transformation has no effect upon the energy density of the quantum vacuum, as can be mathematically demonstrated for a vacuum with energy density proportional to frequency**3 (f**3).

It appears at least superficially that the density of mass is increased by a factor of gamma**2 as a result of a Lorentz transformation. How do we explain the single gamma factor involved in time dilation, given the proposed cellular automata model of relativity, in this case?

Since the vacuum does not obey the Lorentz symmetry group, we might say that gravitation breaks the Lorentz symmetry of the quantum vacuum. We must conclude that the momentum-energy density of the vacuum is unaffected by a Lorentz transformation, in agreement with what theory predicts for an energy field with a directly proportional frequency**3 dependence.

Clearly the vacuum possessing a gravitational field loses its proportional-to-frequency**3 energy density dependence! Hence, the gravitational field is at least partly based in spatial or spatiotemporal variations in the vacuum's energy density.

There is nothing remarkable about coincidences from a probabilistic standpoint, as they inevitably happen given a sufficient number of distinct settings and a sufficient amount of time.
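This can be made quantitative: with per-occasion probability p and n independent opportunities, P(at least one coincidence) = 1 - (1 - p)**n, which approaches certainty as n grows. A sketch with purely illustrative numbers:

```python
# Even a "one in a million" coincidence becomes near-certain given enough
# independent opportunities.  All figures below are illustrative assumptions.
p = 1e-6                   # chance of the coincidence per occasion
occasions_per_day = 100    # distinct "settings" per person per day
people = 1_000_000
days = 365

n = occasions_per_day * people * days
p_at_least_one = 1 - (1 - p) ** n      # effectively 1.0

# Whereas for a single person over a single day, it stays negligible:
p_single = 1 - (1 - p) ** occasions_per_day
```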

A Lorentz transformation causes a specific, coordinated change in both vacuum frequency and wavenumber. The strict proportional-to-frequency**3 dependence is lost due to the presence of a new proportional-to-wavenumber**3 component. All of the vacuum's momentum is timelike and none of its momentum is spacelike, hence the lack of wavenumber dependence. A gravitating body must possess some spacelike momentum (in the form of exchanges of force-mediating virtual bosons) and this costs the mass some of its timelike (or imaginary) momentum. This prevents the energy density of the vacuum from being purely dependent upon the frequency, since this dependency presupposes a constant relationship between frequency and wavenumber through the constancy of the speed of light. However, the speed of light is not a constant within general relativity, i.e., where gravitational fields are present. In other words, saying that the energy density of the vacuum is directly proportional to the cube of frequency is equivalent to saying, in Minkowski or "flat" space, that this energy density is proportional to the cube of the product of the velocity of light and the wavenumber. In a gravitational field, the velocity of light is not a constant, but can vary spatially throughout the field (and by the relativity principle, the velocity of light should also vary temporally, since there is no objective decomposition of spacetime into separate space and time components). A given frequency of vacuum electromagnetic field fluctuation is given in free space (in the absence of a gravitational field) by just the velocity of light multiplied by the wavenumber of the fluctuation. In a gravitational field this frequency is given by the product of three factors: the speed of light in vacuum, c; the wavenumber of the fluctuation, 1/lambda; and a dimensionless functional with a spatial (and perhaps temporal) dependence which is determined by the spatial (and perhaps temporal) dependence of the 16 gravitational potentials, i.e., the metric components, specified by the Einstein field equations. A Lorentz transformation alters frequency and wavenumber in a manner complementary to that in which it alters time and length. So although a Lorentz transformation alters the vacuum's frequency, it also alters the vacuum's wavenumber, producing a null net effect on the energy density. So in a gravitational field the density of the quantum vacuum is proportional to the cube of the vacuum electromagnetic field fluctuation frequency provided that the above functional is itself invariant under a Lorentz transformation. This is only possible if multiplication by this functional generally represents a transformation which forms at least a subgroup of the Lorentz transformation group. This is not possible, because the Lorentz group is itself a subgroup of the diffeomorphism group of general relativistic coordinate transformations.


Clearly the timelike momentum of objects on a spacetime surface is connected with their motion through and along this hypersurface. Compare the oppositions of the prepositions "on" versus "by" with "through" versus "along." Time dilation and Hubble constant contraction are determined by the ratio of mass to black hole (perhaps also "vacuum") density. Reconcile this with a H**2/8piG cosmic mass density. Black hole entropy suggests that black hole density should be proportional to 1/R**2 or to black hole surface area, cf. cellular automata-based theories of relativity. The matter and vacuum momentum current densities are reduced by this time dilation or Hubble frequency contraction factor.


To calculate the binding energy of nuclear matter, one must perform a partial sum of an
infinite number of infinite terms, yielding a finite result.

We can look at the ordering relation between matter and vacuum in two possibly related
ways. The structure of matter mirrors that of the vacuum because it is constituted out of
and sustained by this vacuum – from what other source could matter have derived its
structure? Alternatively, we can assume matter has an existence somehow independent
of the vacuum. Here matter resides in this vacuum and the structure of the vacuum is
perturbed by the presence of matter, more specifically, the structure of matter imposes the
pattern of its structure upon that of the vacuum or upon the structure of its pattern of
fluctuating.


It has been suggested that the ubiquitous and collectively enormous energy fluctuations
of the quantum vacuum are scattered echoes, virtually infinite in number, of the initial
scattering, or shattering, explosion of the Big Bang, itself thought to have originated in a
vacuum fluctuation of statistically infinitesimal probability.

The density of the quantum vacuum is calculated within present quantum theory to be approximately 10**96 kg/m**3. This is what is called the Planck density of matter. But for matter this density can only exist at a particular smallest spatial dimension, i.e., roughly 10**-35 meters, and for virtual matter, at this same spatial dimension but for the most fleeting instant of around 10**-44 seconds. The difference between real and virtual Planckions here is that in the former case all of the fluctuations are in the form of 3-momentum exchanges whereas in the latter they are entirely in the form of 4-momentum exchanges, imaginary, and exclusively along the time dimension.
Between these two forms of Planck matter there subsists a 90-degree hyperspatial rotation in accordance with the symmetry of the Lorentz group. There is a similar relationship obtaining between rest mass and the collection of photons yielded up when this "inert" mass is 100% converted into energy, in accordance with E = mc**2. The reason that energy possesses mass when subject to spatiotemporal boundary conditions is that some of the energy is necessarily converted into mass as a result of the boundary conditions imposed upon this otherwise totally free energy. The reason that the black hole state equation, if you will, places such a tight restriction upon the relationships of mass and length, or rather, density and length in this case, whereas this restriction is utterly absent in the case of vacuum energy, is no doubt owing to the imposed boundary conditions. We may then think of mass as just vacuum energy with suitably imposed boundary conditions, the dynamics of matter and vacuum being otherwise essentially the same.
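The Planck figures quoted above can be recomputed from the fundamental constants; a quick arithmetic check (SI values; the standard results come out near 1.6 x 10**-35 m, 5.4 x 10**-44 s, and 5 x 10**96 kg/m**3):

```python
import math

# Planck scales from first principles, as a consistency check on the
# order-of-magnitude figures in the note above.
hbar = 1.0546e-34    # J s
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s

l_planck = math.sqrt(hbar * G / c**3)    # "smallest spatial dimension"
t_planck = l_planck / c                  # "most fleeting instant"
rho_planck = c**5 / (hbar * G**2)        # Planck density, kg/m^3
```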

A tangled network of feedback loops of quantum vacuum fluctuation momentum-energy
constitutes the structure of matter. Feedback necessarily involves the action of memory.
There is feedback within a given data/information system and then this feedback is itself
continually being updated through nonlocal feedback with some global, distributed
system. This updating of the system's internal, dynamical feedback structures may be thought of as the feedback of that component of the system which is globally unified with itself at different times.


In nonrelativistic physics, any linear momentum can be transferred away via an appropriate Galilean transformation. This is not the case with angular momentum, because this is motion not taking place within an inertial frame. In relativity, the 3-momentum can only be transformed away while preserving the total 4-momentum.
ΔLz = x·ΔPy + Py·Δx - y·ΔPx - Px·Δy + Δx·ΔPy - Δy·ΔPx

Use superposition to express Lz in four-dimensional spacetime. Necessity of quantum mechanics within relativistic physics. Δf·Δg = h/2pi, where either f or g must be a quantum number and the other term represents a nonconserved quantity.
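The propagation of shifts through Lz = x*Py - y*Px can be verified directly; since Lz is bilinear in its arguments, the expansion in the shifts is exact, with no higher-order remainder (the numerical values below are arbitrary test inputs):

```python
# Exact change of Lz = x*Py - y*Px under shifts (dx, dy, dPx, dPy):
#   dLz = x*dPy + Py*dx - y*dPx - Px*dy + dx*dPy - dy*dPx
x, y, Px, Py = 1.3, -0.7, 2.1, 0.4           # arbitrary test values
dx, dy, dPx, dPy = 0.11, 0.05, -0.08, 0.21   # arbitrary shifts

Lz0 = x * Py - y * Px
Lz1 = (x + dx) * (Py + dPy) - (y + dy) * (Px + dPx)
formula = x*dPy + Py*dx - y*dPx - Px*dy + dx*dPy - dy*dPx
```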

In the paper "The Mystery of the Cosmic Vacuum Energy Density and the Accelerated Expansion of the Universe," it is stated that the effective cosmological constant is expected to obtain contributions from short-distance physics, corresponding to an energy scale of at least 100 GeV.


All conserved dynamical variables are purely timelike in free space. Inertia and
gravitation are phenomena associated with the projection of these timelike four vectors
into spacelike components. An important question here is whether there is a conserved four potential. Does the creation of spacelike components of four potential induce a change in the timelike four potential, so that the timelike and spacelike components of some new four potential vectorially sum to produce a four potential with the same magnitude as the inertial free space four potential?

Re: Todd Desiato's Probability Wave Dispersion Interpretation of Relativity. To include inertia, use the analogy of an RLC circuit. The "L" and "C" components of the circuit mediate the time/energy component, and the "R" component the position/momentum component, of the Heisenberg uncertainty.

We must remember that a mass which falls in a straight radial line in three-dimensional space is following a curved trajectory within four-dimensional spacetime. Moreover, this curvature is, by and large, entirely with respect to the time dimension. The mass's instantaneous velocity along its curved timelike trajectory has everything to do with the timelike curvature "seen" by this mass. We state here as a hypothesis that the radius of timelike curvature experienced by a test mass falling toward the centroid of a spherically symmetric mass distribution can be simply calculated from the mass's instantaneous velocity and acceleration, according to the equation below:

R(timelike) = v(instantaneous)**2 / |a|(instantaneous)

So curvature isn't uniquely defined independent of a reference frame being specified. Of course, inertial reference frames are defined in terms of the four-velocity of a test mass.
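A quick numerical illustration of the hypothesis R = v**2/|a| for free fall near the Earth's surface (g treated as constant; the comparison with c**2/g is an addition of this sketch, not a claim of the text):

```python
# The hypothesized radius of timelike curvature for a freely falling body.
g = 9.81           # m/s^2, near-surface gravitational acceleration
t_fall = 3.0       # seconds of free fall from rest (illustrative)

v = g * t_fall                 # instantaneous speed: 29.43 m/s
R_timelike = v**2 / g          # ~88.3 m by the hypothesized formula

# For comparison: substituting c for v gives the familiar worldline
# curvature radius c**2/g, on the order of one light-year.
c = 2.998e8
R_worldline = c**2 / g         # ~9.2e15 m
```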

A mass moves as though it has a conserved four-momentum within a (+++-) signature
four-dimensional spacetime. Time axis as direction of centripetal/centrifugal force
makes the notion of local spacetime concrete.


The irreversibility of time's arrow may equivalently be described as the hysteresis of the Universe within the frequency domain. Reversibility is prevented when the underlying ground of change is itself back-reacted upon by changes to the entities it creates and sustains, resulting in a global shift or change in it.


Vacuum fluctuations vs. radiation reaction is related to the Lorentz transformation of mass energy vs. the Lorentz transformation of vacuum energy. The energy density of the vacuum is Lorentz transformed, but not its mass.

The curvature of a circular arc possessing a timelike circumference, resulting in a spacelike acceleration toward a centroid of a mass distribution. And so in this way we see gravitation as a kind of centripetal/centrifugal force. Perhaps the expansion of the universe (or the repulsive force behind it) is normally in local balance with . . .?


We cannot predict individual quantum events, but only the probabilities of those individual events occurring. Similarly, we cannot predict a given particle's momentum, generally speaking, but we can generally predict the particle's momentum uncertainty. Hypothesis: only conserved quantities can be predicted; if the value of a non-conserved quantity can be predicted, it is by way of some conserved quantity to which it is connected.
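The point that individual outcomes are unpredictable while the spread itself is predictable can be illustrated with a toy "momentum measurement" drawn from a Gaussian distribution (the distribution and its sigma are illustrative assumptions):

```python
import random

# Individual draws are unpredictable; the uncertainty (sigma) converges
# to a predictable value as measurements accumulate.
random.seed(42)
sigma_true = 2.5
samples = [random.gauss(0.0, sigma_true) for _ in range(200000)]

mean = sum(samples) / len(samples)
sigma_est = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
```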

We imagine the photon passing in front of us and unreflectingly suppose that, although relativity states that no time passes for the photon, we see that time does not pass for the photon "from our point of view," in which we imagine the photon moving at the speed of light "through space" and "across our visual field." But for no time to pass for the photon, it must not interact with anything along its path, for to do so would mean that less than 100% of the photon's action is directed along its direction of motion. The scattering of the photon away from its original trajectory involves an exchange of energy along other spatial directions, but which is secretly an exchange of i-momentum, say, for energy, and then an exchange of this energy for j- and k-momentum, say. Exchanges of 3-momentum are always at the expense of exchanges of energy, and quantum spin is the index of the distribution of the components of stress-momentum-energy of the three types of vac-vac, mat-vac, and mat-mat exchanges. The components of (generalized) angular momentum are each composed of commuting observables, one of which is conserved and possesses a quantum number (and so is subject to quantum selection rules), while the other, with which it is paired to make up the angular momentum component, is not conserved. In an abstract and formal way, contracting ΔPx will result in a sympathetic expansion of Δx via the simple Heisenberg equation, Δx·ΔPx >= h/2pi. However, the underlying mechanism for the above Heisenberg relation lies with the four-dimensional spin network that comprises both matter and vacuum.

In time's own reference frame, no time passes, provided that time does not interact with anything along its trajectory. A free photon is in an energy eigenstate. Quantum mechanics from general relativity. Time-space versus here-now, with each person having his or her own time line. Is there gravity for the subjective spacetime?


We may speak of mass as energy in its purely spatial aspect or mode. Conversely, we
may speak of energy as mass in its purely temporal aspect. Gravitational energy cannot
be included with all other energy into a tensor: this quantity must be described in general
relativity as a pseudo tensor, which is an unconserved quantity. It is for this reason that
gravitational energy cannot be localized within GR theory.


An object with mass cannot, according to the theory of general relativity, be indifferent to the passage of time. Even photons are not totally indifferent to the passage of time, provided that they traverse a space in which a gravitational field is present. The notion of substance necessarily involves the indifference of substances to the passage of time. Do we see a way of deriving quantum mechanics from general relativity, say, through a stochastic dynamic principle, i.e., where time is ineradicable within general relativity because of the way this theory necessarily treats the concept of mass? The nonlocality of quantum mechanics is strongly suggested by the essential nonlocality of gravitational energy within general relativity theory.


Is locality produced by the collective mutual interference of quantum nonlocal systems? Eigenfunctions of the same Psi do not mutually interfere because all are mutually orthogonal; but how can two independent, nonlocally connected quantum fields interfere?


Your argument for a wrong potential energy term in the Schrodinger equation seems based on the notion that imaginary velocity is meaningless. You must know that imaginary velocity is a staple notion within special relativity theory; e.g., the current density of electric charge of a charge "at rest" is just Rho*(ic), where current density is defined as density times velocity.

Moreover, during quantum tunneling a charged particle possesses an imaginary
momentum. If it were not possible to accurately measure the potential barrier through
which the charged particle is tunneling, it might be conceivable to say the particle does
not actually possess an imaginary velocity, but that the potential through which it moves
must have a, say, frequency structure, which allows the particle to harmonically penetrate
it (or something similar). If you do some more basic research, you will see that the
notion of imaginary velocity is widespread in modern physics and doesn't appear to land
physicists in contradictions or absurdities.
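The imaginary momentum inside a tunneling barrier can be given numbers: for barrier height V > E the wavevector is imaginary, p = i*hbar*kappa, and the amplitude decays as exp(-kappa*x). A sketch with electron-scale values (all parameters illustrative):

```python
import math

# Evanescent decay constant for an electron tunneling through a barrier.
hbar = 1.0546e-34    # J s
m_e = 9.109e-31      # kg
eV = 1.602e-19       # J

E = 1.0 * eV         # particle energy
V = 5.0 * eV         # barrier height (V > E, so momentum is imaginary)
width = 1e-9         # 1 nm barrier

kappa = math.sqrt(2 * m_e * (V - E)) / hbar      # ~1e10 per meter
suppression = math.exp(-2 * kappa * width)       # rough tunneling scale
```

The suppression factor of roughly 10**-9 for a 1 nm barrier is why tunneling currents are exquisitely sensitive to distance, as exploited in scanning tunneling microscopy.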


Think of a spherical shell of arbitrary thickness expanding outward at the speed of light
from some origin. If the speed of gravitation is c, which is somewhat in doubt (cf. Tom Van Flandern's Internet postings c/o www.deja.com), then the gravitational field of this
spherically symmetric distribution of photons will also be moving outward at the speed of
light. The photon shell's gravitational field is comoving, if you will, with the photons.

But if the gravitational field comoves, as it were, with the photons, then the notion of the
photons being the "source" of this gravitational field becomes highly problematic, as you
can see if you stop to consider the gravitational field at points at rest ( "at rest" in the
sense of null red- or blue- shifting of the photons relative to this point) within the
expanding photon distribution. There is, of course, no gravitational field outside the
photon distribution.

It is much less problematic to view the comoving gravitational field of the photon shell as a retarded potential expanding outward at the speed of light and stemming from a "matter distribution" existing just prior to its being converted into energy, i.e., photons, than to contemplate an instantaneous (as opposed to retarded) gravitational (potential) field generated by the photons in real time, if you will.

It appears that if the speed of gravity is only c, then the gravitational field is separable
from its source upon this source being converted completely into energy, i.e., photons
and other massless particles.
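The puzzle above leans on the shell theorem: a spherically symmetric shell exerts no net field at interior points and acts as a point mass at exterior points. A crude Newtonian quadrature check (G, M, and the shell radius R are normalized to 1; the ring decomposition and step count are assumptions of the sketch):

```python
import math

G, M, R = 1.0, 1.0, 1.0   # shell of radius R and total mass M

def g_axial(d, n=400):
    """Axial gravitational field at distance d from the shell's center,
    by midpoint-rule integration over rings of the shell surface."""
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * math.pi / n                  # polar angle
        dm = M * 0.5 * math.sin(theta) * (math.pi / n)   # ring mass element
        z = d - R * math.cos(theta)                      # axial offset
        rho = R * math.sin(theta)                        # ring radius
        r2 = z * z + rho * rho
        total += G * dm * z / r2 ** 1.5    # only the axial component survives
    return total

g_inside = g_axial(0.5)    # interior point: field should vanish
g_outside = g_axial(2.0)   # exterior point: should equal G*M/d**2 = 0.25
```

By Birkhoff's theorem the same interior/exterior behavior carries over to the spherically symmetric case in general relativity, which is what makes the "comoving field" of the expanding photon shell puzzling.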


Does the independence of the spin quantum number from Lorentz transformations imply that there is no substantive interaction, i.e., in the sense of producing dynamical effects, between spin and "flat spacetime"?


Dynamically interacting vacua, each of which is an open system. How do these vacua
mutually interfere constructively with one another in the absence of a closed system of
feedback? Within a closed system of feedback there is a back and forth exchange of
energy, but no ―communication.‖ However, in the interaction of two or more open
systems, stable and persisting structures can only be created and sustained through
communication and cooperation between these various systems. The individual
fluctuations of energy which collectively comprise ΔE may not be thought to be evolving in time, as ΔE is what determines Δt, which, in turn, defines the time scale of dynamical processes within the system. Spacetime is a projection based upon the expectation values of time and position. The field equations relate the expectation value of the spacetime curvature to the expectation value of the momentum-energy density. This is a formal relationship, which is concretely underpinned by the physical connection between the fluctuations in momentum and energy and the fluctuations in position and time. Fluctuations in time are not merely fluctuations in the position in time at which a particular event occurs. Otherwise, an absolute time would have to be assumed, which would serve as a backdrop for these fluctuations in the timing of events.

Mass is conserved in pre-relativity physics. In relativity theory mass and energy are interconvertible. But energy is conserved in relativity theory (the special theory, at least), while energy is not conserved in quantum theory. The question perhaps arises here: "into what is energy interconvertible, which 'accounts' for the breakdown of conservation of energy within quantum mechanics?"

Everett's relative state interpretation of quantum measurement seems to invoke the existence of a kind of "hypertime dimension." Just as the integration of the time dimension with those of space explained the nonconservation of mass, perhaps the incorporation of Everett's hypertime dimension with that of four-dimensional spacetime accounts for the nonconservation of energy.


Real particles as solitons in the locally connected quantum vacuum momentum-energy field. By viewing virtual particles within the vacuum as being themselves solitons in the nonlocally connected vacuum field, we are admitting the existence of forms of matter and energy more fundamental than the particles and fields treated in the "standard model" of particle physics.

What is the quantum conjugate quantity, which should be paired with information,
conceived as a physical quantity? And is information the conserved quantity of such a
conjugate pair? With the other conjugate quantity serving merely as a bookkeeping or
accounting variable?


What is the meaning, if any, of nonlocally connected influences "propagating"? To trigger the manifestation of nonlocal connections is to back-react upon the ground of the phenomena of the physical system concerned – perhaps even to back-react upon one's own ground.

Time may be likened to a CPU clock, while the motion of coherent structures is a combination of linear and feedback, i.e., "circular," binary calculations. The competition between binary and CPU operations (by which coherent structures are "refreshed" or updated – the CPU clock rate constitutes the "refresh rate" for non-cohesive "pixel sets"; for cohesive pixel sets, the refresh rate for the structure is less than the CPU clock rate) may be likened to special and general relativistic effects, respectively.

Higgs boson as a particle-physics metaphorical particle. Photon mass would be determined by gravitational size, i.e., the black hole radius of the Universe, and would be related to the breakdown of perfect Lorentz invariance.

What type of new conservation principle is pointed to by supersymmetry?

Dynamical symmetry breaking requires a composite Higgs particle, perhaps virtual
Cooper pairs of fermion/ antifermion pairs. Supersymmetry entails several Higgs bosons,
perhaps all of the various types of virtual fermion/antifermion pairs which are
manifestations of the fundamental energy fluctuations of the quantum vacuum.

Spin appears to be the most ubiquitous property of particles, both matter particles as well
as the particles responsible for mediating all of the fundamental forces of nature. Spin is
an essential consideration in all interactions among subatomic particles. So the
equivalence principle should be consistent with a spin-based theory of quantum gravity,
rather than an electromagnetic-based theory such as that put forward by Haisch, Rueda,
and Puthoff. "In fact, the spin of a planet is the sum of the spins and angular momenta of all its elementary particles." But can angular momentum itself be ultimately reducible to subatomic particle spins? (cf. www.sciam.com/askexpert/physics/physics10.html, page 2)

Spin, circular motion, accelerated motion, spin networks, symmetry of rotation (not just
in space, but in spacetime), symmetry and conservation laws (of interaction)

Loops in space interwoven with loops in time in an elastic, dynamic network of
interactions.

A maximum density of momentum exchanges in matter would imply a minimum current
density of imaginary 4-momentum exchanges.

Is a time interval being so small that time "loses its meaning" the same thing as quantized time?

Is vacuum lattice gravity theory related to spin network gravity theory?






Obviously, if the energy density of the quantum vacuum is on the order of 10**95
kg/m**3, matter cannot actually pass "through" this medium, but must traverse it by
moving "atop" this hyperdense medium. Continuous reformation/reconstruction of
particles and fields at successive locations of spacetime appears the only viable means for
matter to have motion within the vacuum. A ready analogy here is the software icon
appearing on one‘s computer desktop. This icon possesses only so much continuous
existence as is permitted by the limited speed of the electron gun which is continually
―repainting‖ the desktop. CPU processor speed may also play somewhat of a role
though.
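As a rough sanity check on the order of magnitude: the quoted figure is close to the Planck density, the natural density scale built from c, G, and h/2pi (a minimal sketch; constants are rounded, and the match is order-of-magnitude only):

```python
# The Planck density rho_P = c**5 / (hbar * G**2), the natural density
# scale formed from the fundamental constants (rounded values).
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # Newton's constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s

rho_planck = c**5 / (hbar * G**2)
print(rho_planck)  # ~5e96 kg/m^3, within roughly an order of 10**95
```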

The apparent repulsive and attractive forces involved in the collective behavior of
particles in accordance with the quantum statistics of fermions and bosons operate in the
absence of exchange particles mediating the three or four fundamental forces of nature.
These "statistical forces" operate in addition to the fundamental "exchange forces."
These two types of quantum statistical forces, those described by the Pauli exclusion
principle and those exemplified in the phenomenon of Bose condensation, when each operates
independently of the other, manifest their action at a local level usually confined to the
subatomic scale. However, these non-exchange forces may prove to be of a more
fundamental nature than those forces dubbed as fundamental, i.e., the electromagnetic,
strong, and weak nuclear forces. And this might prove to be particularly evident where
these two statistical forces are strongly interacting with one another all the while that they
are acting upon particles of matter. The existence of geometrodynamic theories of
quantum field fluctuations of the vacuum in accordance with these quantum statistical
laws, may describe how these fluctuations underpin, in turn, fluctuations and stability of
spacetime topology. Spacetime topology is itself necessarily presupposed as initial and
boundary conditions by Einstein‘s field equations of the inertio-gravitational field, which
suggests that the energy density of the vacuum field may fall outside of the purview of
the Einstein stress-momentum-energy tensor, the tensor which general relativity treats as
the complete source of the gravitational field. This further suggests that such quantum
statistical laws may be turned to for an explanation of the physical mechanism by which
the efficacy of the field equations is realized – in short, these quantum statistical laws
may hold the key to understanding the underlying dynamics of gravitation, fleshing out,
if you will, the abstract formalism of general relativity.
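The purely statistical character of these forces can be illustrated with a standard textbook computation (a sketch, nothing specific to the vacuum hypothesis above): for two particles occupying orthogonal 1D harmonic-oscillator states, symmetrizing the two-particle wavefunction pulls the particles together on average, while antisymmetrizing pushes them apart, with no exchange boson anywhere in the calculation.

```python
import numpy as np

# Statistical "forces": mean-square separation of two particles placed in
# the two lowest 1D harmonic-oscillator states (units hbar = m = omega = 1).
x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]
phi0 = np.pi**-0.25 * np.exp(-x**2 / 2)                   # ground state
phi1 = np.sqrt(2) * np.pi**-0.25 * x * np.exp(-x**2 / 2)  # first excited state

def ip(f, g):
    # inner product of two real grid functions
    return np.sum(f * g) * dx

x2_0 = ip(phi0, x**2 * phi0)   # <0|x^2|0> = 0.5
x2_1 = ip(phi1, x**2 * phi1)   # <1|x^2|1> = 1.5
x01 = ip(phi0, x * phi1)       # cross term <0|x|1> = 1/sqrt(2)

d2_dist = x2_0 + x2_1          # distinguishable particles (<x> = 0 here)
d2_sym = d2_dist - 2 * x01**2  # symmetrized (bosonic): effective attraction
d2_anti = d2_dist + 2 * x01**2 # antisymmetrized (fermionic): effective repulsion
print(d2_sym, d2_dist, d2_anti)   # approx. 1.0, 2.0, 3.0
```

The shift comes entirely from the exchange (cross) term in the symmetrized or antisymmetrized wavefunction, which is the precise sense in which these are "non-exchange-boson" forces.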

Although gravitation may not be an ―exchange force‖ in the standard sense of its being
mediated through the exchange of a specific force-carrying boson, as in the case of the
other three ―fundamental forces,‖ this force may nonetheless be aptly described as an
"exchange force," albeit one of the most general kind conceivable, so as properly to
account for the universality of gravity's influence while opening the way to solving the
heretofore intractable Cosmological Constant Problem. Since gravity is not merely a
force acting in three dimensions, as is very much the case with the other three
fundamental forces, when they are not operating within a sufficiently strong gravitational
field, that is, we expect gravity to be a force which is at least partially mediated through
particle exchanges which take place along the local time axis. Such a very general type
of exchange might be effected through the two basic types of exchanges of stress-
momentum-energy - those which take place between real matter and virtual matter, i.e.,
between matter and the quantum vacuum, and those which take place between this
vacuum and itself. One might ask what interpretation is to be made of an obvious third
category of ―exchange,‖ that occurring between real matter and itself. This would be the
component of force which is mediated by specific exchange bosons operating within a 3-
dimensional hypersurface. This component would be combined, as alluded to earlier,
with the purely timelike exchanges of momentum-energy (which we are saying are
somehow intimately associated with the action of the gravitational field).


An exchange particle would appear to only be necessary in cases where the action of the
fundamental force in question was only between particles of a particular type, e.g.,
photons interact via the electromagnetic force but do not interact via the strong nuclear force,
gluons do not interact via the weak force, etc. Since the action of the gravitational force
is supposed via the equivalence principle to be truly universal, and we can only be
assured of a complete correspondence between real particles and fields and their virtual
counterparts, rather than supposing a universal force interaction, i.e., "correspondence" to
somehow be maintained through the action of a specific exchange particle, we expect the
total symmetric/ antisymmetric/ nonsymmetric quantum vacuum field to be the logical
candidate for mediating the "gravitational force." The problem with quantizing the
gravitational field, then, is that the action of the gravitational field is, via the equivalence
principle, supposed to be truly universal on the one hand, whereas, on the other, a quantized
gravitational field would be mediated through the action of a specific exchange boson, i.e., the graviton.






Loose speculation concerning the relationships of gravity, topology, and degeneracy:

So long as continuity of action is maintained, the selfsame topology remains in effect.
Is it possible to have a degenerate metric, that is, with respect to multiply distinct
topologies? Would this imply that changes in topology might be without gravitational
effect? What might be termed here as gravitational equivalence classes of topology?
There might be transitions between distinct wavefunctions in the absence of changes in
energy such that we may speak of atemporal changes in a quantum mechanical system.




Each order process in a perturbative expansion of the vacuum state must be composed of
all topologically distinct ways that fermions can interact. Neither does general relativity
distinguish between topologically equivalent spacetimes so that the interaction of
topologically degenerate spacetimes as well as the mutual transformation of topologically
degenerate spacetimes are types of interaction occurring outside the scope of general
relativity theory.

Phonons are an example of bosons. The mutual exchange of phonons binds together
quasiparticles. Quasiparticles and phonons are artifacts of the mean or effective field
model of quantum particles and their interactions.



Time lag in the exchange of force-mediating bosons? Does this figure in the mechanism
of inertia?

Convergence of elements from different points in one‘s biography to form a more
meaningful and coherent history.

Concerning the possibility of true singularities:
/\E = 0 is inconsistent with the requirement of Lorentz invariance of the ground state.
This is connected with the nonexistence of true energy eigenstates, due to the fact of the
universe being an open system and continually exchanging energy with the virtual
particles and fields of the quantum vacuum through the existence of the /\E.

Concerning the end state of the Hawking radiation process:
Conservation of information problem. Actually, this is a question of the conservation (or
nonconservation) of data. Information is only definable on an open system, whereas
entropy is only definable on a closed system (just as the 2nd Law of Thermodynamics is
only applicable to closed systems). Is a black hole an open or a closed system?
Eigenstates are not definable within an open system. This may explain the
indeterministic change in the wavefunction, Psi, as a result of measurements performed
by a ―conscious observer.‖

The lynchpin point of incompatibility between general relativity and quantum mechanics
is that posed by The Cosmological Constant Problem.

Can Psi constitute spacetime while being defined within a particular spacetime?
Nonlinearity problem?

Energy cutoff in the quantum vacuum that is gravitationally effectual. The assumption
that the equivalence principle applies to vacuum energy is perhaps holding back the
development of a workable quantum gravity theory.

Degeneracy with respect to various quantum numbers.

One becomes confined to a subgroup of the original larger symmetry group.

Invariance, covariance, symmetry, conservation of physical quantities/quantum numbers.

Reduction in the number of distinct eigenstates.

Degeneracy is when the same eigenvalue can be associated with different eigenstates.
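A minimal concrete illustration of this definition, using the familiar particle in a 2D square box, where E(nx, ny) is proportional to nx**2 + ny**2:

```python
from itertools import product

# Degeneracy: the same eigenvalue shared by distinct eigenstates.
# For a particle in a 2D square box, E(nx, ny) ~ nx**2 + ny**2, so the
# distinct states (1, 2) and (2, 1) fall on one and the same energy level.
levels = {}
for nx, ny in product(range(1, 5), repeat=2):
    levels.setdefault(nx**2 + ny**2, []).append((nx, ny))

for energy in sorted(levels):
    if len(levels[energy]) > 1:            # only the degenerate levels
        print(energy, levels[energy])      # e.g. 5 [(1, 2), (2, 1)]
```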

If the source of inertia lies outside the spatial boundary of the object then the ―object‖ is
only a system representation – like an icon on a personal computer desktop.

Some mechanism which prevents directed dispersion of Psi packets, i.e., acceleration of
Psi packet.

Ontological priority of state space description over that of the spacetime manifold.

The energy of a nonlocally-connected field cannot gravitate without a modification of the
field‘s equations being required. Deficit of vacuum energy may act as the gravitational
source term due to the vacuum possessing a negative energy. In this case, only
differences in the density of vacuum energy would be significant for gravitation.
Perhaps a negative vacuum energy density would drive cosmological expansion, resulting
in a local time-varying metric against a background of constant spacetime topology.

Torsion of spacetime is determined by vacuum spin currents which are conserved, but
which determine the momentum and energy uncertainties of a given local volume of
spacetime ( local 3-hypersurface). Vacuum spin structure dynamics ~ the torsion of
spacetime.

Pauli and Bose vacuum statistics which are determined and evolve via this vacuum spin
structure/dynamics.

To include inertia, use the analogy of an RLC circuit. The "L" and "C" components of
the circuit mediate the time-energy components, and the "R" component the position-
momentum components, of the tensor uncertainty.

x/\Py + y/\Px + /\xPy + /\yPx + /\x/\Py + /\y/\Px = /\Lz + [(/\Lz)**2]/(h/2pi)

We can, of course, choose (x,y) = (0,0), if we interpret them as expectation values, that is,
so that

/\xPy + /\yPx + /\x/\Py + /\y/\Px = /\Lz + [(/\Lz)**2]/(h/2pi)

Px and Py can be defined as zero by appropriate choice of equivalent inertial reference
frame, provided we interpret Px and Py as expectation values so that

/\x/\Py + /\y/\Px = /\Lz + [(/\Lz)**2]/(h/2pi)

Intuitively, the dimensions of this ―equation‖ do not balance with respect to the degree of
the uncertainty, /\, if ―/\‖ is interpreted as an operator. Of course, operators possess
special properties and one must perform a proof of the operator property of ―/\‖ in order
to sustain this part of the discussion. But if ―/\‖ in the above does, indeed, function as an
operator, then we may drop the term, ―/\Lz‖ and say the following:

/\x/\Py + /\y/\Px = [(/\Lz)**2]/(h/2pi)

The generalization of which might be,

Sum(i = 1 to 3) Sum(j = 1 to 3) {1 – KroneckerDelta(i,j)}[/\Xi/\Pj] =
[(/\Lk)**2]/(h/2pi)

Where, of course, KroneckerDelta(i,j) = 1 when i = j and 0 otherwise.

Notice that the above tentative formula is nonrelativistic. How would one generalize this
formula to 4 dimensions of spacetime? This would require use of the notion of ―rotations
about the time axis" – something which is surely demanded by a relativistic spin-0
(scalar) field such as that presented by the vacuum energy fluctuations in the form of
creation and annihilation of virtual Cooper pairs.

September 2012
Although a doubly-spinning sphere or ball is difficult to visualize, a doubly-
spinning ring can be visualized relatively easily. If one can convince oneself that
arbitrary angular momenta are not substantively different from orthogonal angular
momentum components, then there is no reason to believe that adding or decomposing
angular momenta in the case of an arbitrarily spinning sphere or ball is substantively
different from that in the case of an arbitrarily spinning ring. What is the term for a
simplification of an insoluble problem that does not take away any of the relevant
features of the problem, but which makes the original problem relatively intuitive and
easy to solve?

All conserved dynamical variables are purely timelike in free space. Inertia and
gravitation are phenomena associated with the projection of these timelike four vectors
into spacelike components. An important question here is whether there is a conserved
four potential. Does the creation of spacelike components of a, say, hyperspherical four
potential induce a change in the original pure timelike potential so that the sum of time
and spacelike components of some new four potential vectorially sum to produce a new
four potential with the same magnitude as the initial free space four potential? Could
there be a preferred reference frame defined by that 3-hypersurface slice of spacetime
possessing the greatest timelike four potential, or, alternatively, that 3-hypersurface slice
possessing no spacelike components of the four potential? But the four potential may be
a function of mass, length, and time in such a manner that merely translational motion at
constant velocity does not result in a reconfiguration of the space and timelike
components of the four potential, i.e., the four potential is Lorentz-invariant.

Obviously, if the energy density of the quantum vacuum is on the order of 10**95
kg/m**3, matter cannot actually ―pass through‖ this medium, but must traverse it by
moving "atop" this hyperdense medium. In a sense all particle motion is a kind of
tunneling, in that a "particle" which tunnels through a potential barrier (in the
conventional quantum tunneling sense) must somehow merge with the energy of the field
composing the potential barrier. Tunneling possesses three distinct
cases, E > V, E < V, and E = V. Continuous reformation/reconstitution of particles and
fields, either in or out of resonance with each other (perhaps there is a special case where
the particle is at ―near resonance‖ with the energy field composing the potential through
which it is tunneling ) at successive locations of spacetime appears the only viable means
for matter to have motion within the vacuum. A ready analogy here is the software icon
appearing on one‘s computer desktop. This icon possesses only so much continuous
existence as is permitted by the limited speed of the electron gun which is continually
―repainting‖ the desktop. CPU processor speed may also play somewhat of a role
although F(cpu) > F(electron gun).

Dirac point particles which are fermions do not perturb the Fermi–Dirac statistics of the
quantum vacuum without simultaneously perturbing the Bose–Einstein statistics of the
vacuum, as well. Do isolated Dirac point particles (fermions) have an individual gravity
field?

The role of the Higgs boson may be fulfilled by virtual fermion/antifermion ―Cooper
pairs." Do these Cooper pairs, i.e., "pairons," confer mass on particles? Systems
(mesons) with different spin are different particles with different masses.

"Indeed vacuum energy is modified in a space curved by the gravitational field." (c.f.,
quant-ph/9801071, Relativity of motion in vacuum.) Action-reaction principle here – the
modified vacuum manifests itself as gravitation. The Euler–Lagrange formulation of the
field equations of gravitation describes the dynamics of continuously existing particles in
terms of kinetic and potential energy (which go into the definition of the Lagrangian and
Hamiltonian).

High density vacuum field comes from a gauge transformation of the Psi associated with
the cosmological constant.

If spin is for a fermion formally similar to polarization for a photon, then several
questions at once arise. Can this similarity be extended to other vector bosons?

If a lower limit can be placed on pseudo-gravitational effects caused by the interaction of
mass and vacuum quantum statistics that is large enough for those effects to be observed,
and yet no deviation from geodesic motion is observable, then this mechanism must be
the one underlying gravity itself. In other words, if the predicted effects are large enough
to be observed but are not in fact observed, then the theory is an alternate account for
effects already familiarly observed.

The vacuum must possess a spin 2 gauge symmetry such that the presence of gravitons
remains latent or is rendered unnecessary. Normally, the spin ½ and spin 1
wavefunctions cannot be superposed and therefore cannot interact.

Symmetry breaking, phase transition, vacuum decay, gauge bosons, etc. Matter perturbs
the symmetry of the vacuum.

The fact that composite matter cannot exist as virtual particles suggests that it is only
composite matter, i.e., matter possessing a binding energy over and above that of the
vacuum fields constituting its elementary particles separately, that gravitates. This fact
suggests that the particle "vacuum fields" do not, in isolation, possess distinct
gravitational fields.

Relative versus absolute spacetime rotations are important in connection with spin
statistics.

During uniform acceleration, the space and time axes are displaced relative to one
another by twice the angle through which each axis is displaced relative to its former
position.
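This can be checked directly from the Lorentz transformation in a spacetime diagram with c = 1 (a small numerical sketch; v = 0.6 is an arbitrary choice):

```python
import math

# Boosted axes in a spacetime diagram (c = 1). Under a boost with velocity
# v, the t' axis is the boosted observer's worldline, and the x' axis is
# the observer's line of simultaneity t' = 0.
v = 0.6
g = 1 / math.sqrt(1 - v * v)     # Lorentz gamma factor

# A unit tick on the t' axis, (t', x') = (1, 0), seen in the unprimed
# frame via the inverse boost t = g*(t' + v*x'), x = g*(x' + v*t'):
t, x = g * 1.0, g * v
tilt_t = math.atan2(x, t)        # tilt of the t' axis off the t axis

# A unit tick on the x' axis, (t', x') = (0, 1):
t, x = g * v, g * 1.0
tilt_x = math.atan2(t, x)        # tilt of the x' axis off the x axis

# Each axis tilts by atan(v) toward the light cone, so the angle between
# the two axes changes by twice the angle each individual axis moves.
angle_between = math.pi / 2 - tilt_t - tilt_x
print(math.degrees(tilt_t), math.degrees(angle_between))
```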

The future crystallizing until it becomes the present. Then does the present moment
decay? Like an excited atom or radioactive isotope? Crystallizing into an excited state
of something else?

Does the vacuum possess a gauge symmetry which makes the spin 2 exchange particle
unnecessary?

Supposing that the quantum vacuum itself can be the source of a gravitational field leaves
no quantum mechanism available to mediate gravitation.

Certainly the indefinite signature (+,+,+,-) of the Minkowski metric is intimately
related to the symmetry and antisymmetry of the wavefunctions describing fermions and
bosons.

Because topology is constituted by spin statistics of virtual particles, the energy of the
zero-point field (ZPF) falls outside of the scope of general relativity.

Because energy uncertainty drives temporal evolution and gravitation can only manifest
its power by deflecting timelike four vectors so that they acquire spacelike components, it
follows that the underlying dynamism of gravity must be the energy fluctuations of the
quantum vacuum.

Because an object's energy is, even if only in a tiny part, reconstituted from out of its
own energy, the object cannot move along its own time axis at the speed of light.

An antiparticle can be consistently described as a particle travelling backwards in time.
The process of the creation of virtual particle/antiparticle pairs is a reversible process,
according to quantum mechanics. We might imagine more complex virtual structures,
i.e., composite particles, such as simple molecules, being reversibly created out of the
vacuum (along with their "anti-molecules") in that they are immediately destroyed again
(returned to the quantum vacuum) within an exceedingly brief period of time specified by
the Heisenberg Uncertainty Principle.

Perhaps virtual bosons are exchanged between both real and virtual fermions in a
completely indiscriminate fashion.



080799

Of course, strictly speaking, this type of creation/annihilation is only possible if the
virtual molecule/anti-molecule pair collectively constitute a spin zero ―particle.‖ (It may
be possible to understand ―spin 0‖ as being spin about the particle‘s local time axis and
―spin 1‖ being spin about an axis oriented in some way in the 3-dimensional space to
which the local time axis is orthogonal). Such tiny systems could be consistently and
exhaustively described with quantum theory by some finite set of quantum numbers,
being in this way indistinguishable from any other system defined by the same quantum
numerical set. These quantum numbers, as such, index observables, which are conserved
physical quantities. But we know that at some point reversibility is lost and this must
take place when the structure can no longer be produced from out of the vacuum ―in a
single go,‖ but must be ―cobbled together‖ from a number of such vacuum-engendered
particles which are to exist in some kind of bound structure maintained through
exchanges of momentum between all of those particles (which emerged from the vacuum
in a single step). This might well be due to there being no definable ―anti-entity‖ with
which the ―entity‖ can annihilate so as to return the pair to the quantum vacuum from
which they had originated. We may suppose that it is here that physical processes
describable in terms of nonconserved quantities come into play. Is it at this point that the
phenomena of inertia and gravitation emerge or become significant? Do irreversible
structures participate in more than one distinct vacuum state? Or do they just possess
some kind of independence from a single vacuum state, preventing the vacuum from
―anticipating‖ an anti-structure? It is interesting to note that irreversibility creeps into
those systems that cannot be maintained by the same processes by which they were
originally engendered. It is at this very same stage where physical processes within the
system are no longer directly, but only indirectly supported by quantum vacuum
processes.

We do not expect nonlocal energy distributions to possess inertia or to be sources of
gravitational fields. Inertia and gravitation are phenomena based in the distribution and
dynamics of the distribution of momentum and energy within four dimensional
spacetime. An individual vacuum fluctuation possesses only an uncertain momentum
energy which therefore possesses no determinate composition of momentum energy.
This is based on the hypothesis, derived from statements of David Bohm in his work,
Quantum Theory, that all causal relationships between the expectation values of physical
quantities are constituted out of correlations of fluctuations in the values of these physical
quantities. In other words, it is only coherent networks of interrelated momentum energy
fluctuations that exhibit the back-reaction of inertia.
080599

The construction of ―squeezed states‖ in which the momentum uncertainty along a
particular axis is decreased at the expense of increases in this uncertainty along other
orthogonal axes supplies tangible proof that the spacetime components in quantum
mechanical momentum-energy uncertainty form with one another a conserved four
vector. If this is true, then it should be possible to construct a squeezed state in which the
energy uncertainty of a system is increased due to the construction of squeezed states in
which a component of the 3-momentum uncertainty is decreased.
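The trade-off can be checked numerically for Gaussian (squeezed-vacuum-like) wavepackets, where narrowing /\x inflates /\p so that the product stays pinned at the minimum-uncertainty value (here hbar = 1, so /\x /\p = 1/2):

```python
import numpy as np

# Squeezed Gaussian wavepackets (hbar = 1): shrinking Delta-x inflates
# Delta-p, while the product Delta-x * Delta-p stays at the minimum 1/2.
x = np.linspace(-20, 20, 200001)
dx = x[1] - x[0]

def uncertainties(sigma):
    # real, normalized Gaussian with position spread Delta-x = sigma
    psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))
    dx_unc = np.sqrt(np.sum(x**2 * psi**2) * dx)   # <x> = 0 for this state
    dpsi = np.gradient(psi, dx)
    dp_unc = np.sqrt(np.sum(dpsi**2) * dx)         # <p^2> = integral |psi'|^2
    return dx_unc, dp_unc

for sigma in (1.0, 0.3):
    sx, sp = uncertainties(sigma)
    print(sigma, sx, sp, sx * sp)   # product stays ~0.5 in both cases
```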

Through a Lorentz transformation the expectation values of all components of 3-
momentum can be adjusted to zero so that the quantum uncertainties in the components
of 3-momentum are wholly constituted by the respective momentum fluctuation terms.
The question arises as to whether the timelike component of the 4-momentum can
likewise be "transformed away" through such a simple operation as a Lorentz
transformation?
In the absence of accelerated motion or gravitational fields, the velocity of light is a
universal constant. To transform away all of a mass‘ timelike momentum would require
that one utilize a frame of reference which is itself moving at the speed of light within
some 3-hypersurface. There is a kind of symmetry between the spacelike and timelike
components of an object‘s momentum: no component of a massive object‘s spacelike
velocity may reach the speed of light and, the object‘s timelike velocity can never reach
zero.

Now from previous discussion we are aware that no massive body actually possesses a
timelike momentum such that its velocity through time is actually 100% of the speed of
light (in vacuum). The symmetry underlying momentum-energy would be broken, if we
allowed what is permitted in free space, namely a Lorentz transformation wherein an
object is given a component of 3-velocity which, though still less than the velocity of
light in free space, is nonetheless greater than the timelike velocity of the object. The
structure of spacetime within the 3-hypersurface surrounding the object must have been
altered so as to prevent the acceleration of an object to velocities within this part of the
hypersurface which are greater than the timelike velocity of co-located objects at rest
relative to the chosen coordinate system.
The appearance of tidal forces responsible
for the initial acceleration of objects released in a gravitational field is easily explained in
terms of conservation of four momentum in conjunction with the spatially varying local
velocity of light, c.f., ―For it cannot actually be "rigid" due to these tidal forces; in fact,
the concept of a "rigid body" is already forbidden in special relativity as allowing
instantaneous causal actions. Secondly, such a rod must indeed be "infinitesimal", i.e., a
freely falling body of negligible thickness and of sufficiently short extension, so as to not
be stressed by gravitational field inhomogeneities; just how short depending on strength
of local curvatures and on measurement error‖ (Torretti (1983), 239), c.f., Early
Philosophical Interpretations of General Relativity (Nov 28, 2001). ―…Reichenbach's
analysis of spacetime measurement treatment is plainly inappropriate, manifesting a
fallacious tendency to view the generically curved spacetimes of general relativity as
stitched together from little bits of flat Minkowski spacetimes. Besides being
mathematically inconsistent, this procedure offers no way of providing a non-
metaphorical physical meaning for the fundamental metrical tensor gμν, the central
theoretical concept of general relativity, nor to the series of curvature tensors derivable
from it and its associated affine connection. Since these sectional curvatures at a point of
spacetime are empirically manifested and the curvature components can be measured,
e.g., as the tidal forces of gravity, they can hardly be accounted as due to conventionally
adopted "universal forces". Furthermore, the concept of an "infinitesimal rigid rod" in
general relativity cannot really be other than the interim stopgap Einstein recognized it to
be. For it cannot actually be "rigid" due to these tidal forces; in fact, the concept of a
"rigid body" is already forbidden in special relativity as allowing instantaneous causal
actions‖, c.f., Ibid.

A hollow sphere filled with electromagnetic radiation, i.e., photons, possesses an
additional mass equal to the total energy of the photons divided by the speed of light
squared, solely due to the impulsive forces and accelerations experienced by the photons
as they bounce around inside the sphere. Of course, from the DeBroglie relation, p = h/lambda,
and the red shifting of photons moving in the direction of the sphere‘s motion and the
blue shifting of photons moving in the direction contrary to this motion, we can easily
deduce that when the sphere is uniformly accelerated, there will result an increasing
differential of impulsive momenta developing between the red and blue shifted photons.
In other words, the instantaneous change in this momentum differential with respect to
time will correspond to a force, F = d(/\p)/dt, which will oppose the acceleration of the
photon-filled sphere. This force divided by the acceleration of the hollow sphere will, of
course, yield the effective mass of the photons. Note that it is only because the photons
change direction through interaction with (impacting against) the hollow sphere that the
photons collectively acquire an effective mass.

Hypothesis: when Psi is in any eigenstate of the Hamiltonian, a superposition state of Psi
with respect to purely time-varying eigenfunctions fully accounts for the uncertainty in
the lifetime. I was just trying to say that,

Psi(x,t) = Psi(0)(x) exp(iwt) for Psi whenever Psi is in an eigenstate of H.
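In other words, the probability density of such a stationary state never changes (a trivial but reassuring numerical check, with the harmonic-oscillator ground state standing in for an arbitrary eigenstate):

```python
import numpy as np

# A stationary state Psi(x,t) = Psi_0(x) * exp(i*w*t) (hbar = 1): its
# probability density |Psi|^2 is the same at every time t.
x = np.linspace(-5, 5, 1001)
psi0 = np.pi**-0.25 * np.exp(-x**2 / 2)   # harmonic-oscillator ground state
w = 0.5                                   # its angular frequency E/hbar

for t in (0.0, 1.3, 7.7):
    psi_t = psi0 * np.exp(1j * w * t)     # only the overall phase evolves
    assert np.allclose(np.abs(psi_t)**2, psi0**2)
print("probability density is time-independent")
```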

A Theory of Everything would be able to determine the true Hamiltonian, H, for any
system, including for ―the whole Universe‖. Such a theory would render any energy
fluctuation term in H, H(fluc), a mere phenomenological artifact of our previous
ignorance of the correct refinement of quantum theory. A "TOE" would convert the
Heisenberg Uncertainty Principle into a purely epistemological principle. Such a system
would possess no fundamental fluctuations because it would possess no indeterminate
―outside‖ with which it could be in the process of dynamically exchanging energy. In
such a situation, the ―power input‖ to the Universe as a whole would be identically 0.

It is hard to conceive of how anything could ever happen within such a "zero power
input" device. In short, it is hard to conceive how a Universe with zero power input
could be rightfully said to possess any real temporality, beside endlessly repeating
patterns of interference between a closed set of time-independent eigenfunctions.

If an attempt to accelerate by mechanical means a perfectly spherical mass leads not to
a change in the location of this body's center of mass in the direction along which we
would attempt to make it move, but instead, to a superfluid-like streaming of its composite
material around the hand and between the fingers which together would urge it forward,
then despite this action having led to a redistribution of the body's mass, no energy may
be supposed to have been expended (is there a question of degeneracy here?) throughout
the course of this operation. Such a strange object may be said to possess no inertia. It
is hopefully obvious from what has been considered thus far that, were it but for the
absence of all internal binding forces within this ―mass‖ (as opposed to the notable case
of a ―superfluid‖), at least some small acceleration of the body‘s center of mass would
have been effected in the direction along which one‘s hand was attempting to urge it. We
note the absence, in the case considered above, of all compression forces in the direction
of the body‘s would-be acceleration. The opposite-directed tension force is likewise
zero, as the matter distribution was still prior to our attempting to move it. Moreover, all
shear forces within the mass were similarly zero. Now it is but a simple and reversible
linear transformation of spacetime coordinates connecting the representation of a matter
distribution as possessing pressure, energy density and stress (relating to the presence of
shear forces within the body) to another representation of this distribution as one
possessing only energy density and pressure but without any stress due to shear forces.
In other words, locally at least, shear forces can always be transformed away through an
appropriate choice of spacetime coordinates.
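The last claim has an elementary 3-dimensional analogue: rotating a symmetric stress tensor to its principal axes removes all off-diagonal (shear) terms. A sketch, using an arbitrary made-up tensor:

```python
import numpy as np

# "Shear can locally be transformed away": rotating to the principal axes
# of a symmetric stress tensor removes its off-diagonal (shear) entries.
stress = np.array([[10.0, 3.0, 1.0],
                   [ 3.0, 5.0, 2.0],
                   [ 1.0, 2.0, 7.0]])   # arbitrary symmetric stress tensor

eigvals, R = np.linalg.eigh(stress)     # columns of R: principal directions
principal = R.T @ stress @ R            # the tensor in rotated coordinates

# In the principal frame only diagonal (pressure-like) terms survive.
assert np.allclose(principal, np.diag(eigvals))
print(np.round(principal, 10))
```

This is only a spatial rotation, of course, not a general spacetime coordinate change, but it shows concretely how off-diagonal terms are frame artifacts of a suitably chosen representation.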
October 2011
The generalization of
conservation of momentum to the conservation of stress-momentum-energy in general
relativity means that the metric responds to inertial forces in just such a manner that the
time rate of change (with respect to "proper time") of some important quantity
proportional to f(g^uv, T^uv) is zero:

d/d(tau) f(g^uv, T^uv) = 0

But it is clear that a mere change in coordinate system will have no effect whatever upon
any actual physics – this is merely a somewhat informal restatement of the principle of
general relativity. So any mass distribution whose energy tensor possesses no
off-diagonal terms in one system of coordinates may be represented within some new
coordinate system as having an energy tensor that does possess such off-diagonal (stress)
terms, these owing to the existence of shear forces within the body. Now it is the binding
forces within a body which are responsible for that body possessing forces of
compression, tension, and shear. The question which faces us now is this: might a body
possess an energy tensor with only a single nonzero term, with this remaining true for all
possible transformations of the spacetime coordinates?
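A small numerical sketch of this point, using a purely spatial rotation as a stand-in for the coordinate change (the specific numbers and the use of numpy are illustrative assumptions, not anything from the note): a stress tensor that is diagonal in one frame acquires off-diagonal, shear-like components in a rotated frame, and diagonalization recovers a shear-free frame.

```python
import numpy as np

# A stress tensor that is diagonal (no shear) in one coordinate frame:
# unequal principal pressures, so that a rotation mixes components.
T = np.diag([3.0, 1.0, 1.0])

# Rotate the spatial coordinates by 45 degrees about the z-axis.
th = np.pi / 4
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])

# Tensor components in the rotated frame: T' = R T R^T.
T_rot = R @ T @ R.T
print(T_rot)  # off-diagonal (shear-like) terms have appeared

# Conversely, diagonalizing recovers a frame with no off-diagonal terms.
w, V = np.linalg.eigh(T_rot)
T_diag = V.T @ T_rot @ V
print(np.round(T_diag, 12))  # diagonal again: shear "transformed away"
```

The same symmetric-matrix fact underlies the local claim in the text: the spatial stress block can always be brought to principal axes at a point.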

Is there some component of the energy tensor which cannot be, locally at least,
transformed away? Now a transformation of the spacetime coordinates can always be
found which allows us to locally transform away a body‘s gravitational field.
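The standard form of this local statement, for reference (not spelled out in the note), is that about any point p one may choose Riemann normal coordinates in which

```latex
g_{\mu\nu}(p) = \eta_{\mu\nu}, \qquad \partial_\lambda\, g_{\mu\nu}(p) = 0
```

so that the Christoffel symbols, and with them the local gravitational field strength, vanish at p; the curvature tensor (tidal forces), however, cannot be transformed away.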

What are we to make of the Gibbs phenomenon in the case of waves of the probability
distribution of quantum states? Might we expect extremely counter-intuitive behavior by
quantum systems at the spacetime boundaries of their system wavefunctions?
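The overshoot in question can be exhibited numerically. The sketch below is a generic square-wave Fourier sum in numpy, nothing specific to any particular quantum system: the partial sums overshoot the jump by roughly 9% of the jump height no matter how many terms are kept.

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Partial Fourier sum of a +/-1 square wave: (4/pi) * sum of sin(kx)/k over odd k."""
    s = np.zeros_like(x)
    for k in range(1, 2 * n_terms, 2):  # odd harmonics 1, 3, ..., 2*n_terms - 1
        s += np.sin(k * x) / k
    return 4.0 / np.pi * s

# Sample finely near x = 0, where the wave jumps from -1 to +1.
x = np.linspace(1e-4, np.pi - 1e-4, 200_000)

overshoots = {}
for n in (10, 100):
    overshoots[n] = square_wave_partial_sum(x, n).max()
    print(n, overshoots[n])

# The peak does not settle toward 1 as terms are added: it approaches
# roughly 1.179, an overshoot of about 9% of the full jump of 2
# (the Gibbs phenomenon).
```

If wavefunctions with sharp spatial cutoffs are expanded in a similar basis, the same persistent overshoot appears near the cutoff, which is what motivates the question above.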

For quantum tunneling, Δx ≥ Δx_0 and/or (both?) ΔE ≥ ΔE_0.

It is only nonzero expectation values of momentum-energy which may possess
gravitational mass/inertial mass equivalency. The expectation values may always be
derived from a combination of fluctuation terms and uncertainties. The fluctuation
term for the energy may be wholly attributed to the vacuum, whereas the uncertainty
in the energy may be attributed to the effect of the fluctuation energy upon our
energy-measuring apparatus, i.e., to what perfect calibration cannot, even in principle,
eradicate. Mass-energy is the result of an imbalance between these two energy terms.
In this way particles are seen to be not flux-stabilities in themselves, but structured
alterations in the flux-stabilities. These alterations result from the influence,
penultimately, of our energy-measuring devices and, ultimately (per von Neumann),
not of the individual mind per se but of the consciousness fundamental in nature.
This consciousness is structured through the complex system of boundary conditions
upon the quantum vacuum field being measured, boundary conditions constituted
(in essence) through the operation of the observer's brain; for the brain, as a
mass-energy system, if identified with the observer's individual consciousness, would
otherwise presuppose the existence of that which its observations are potentially
constituting.


If all topological transformations of spacetime at the quantum level may be reducible to
successive or collective symmetric and antisymmetric topological transformations
grounded in virtual boson and fermion particle exchanges, then spacetime topology
would be determined by vacuum quantum statistics. So this spacetime topology, about
which general relativity is undecided, would be determined by the quantum statistics of
the quantum vacuum. On this view, gravitation and inertia would necessitate ―preloaded‖
quantum vacuum boundary conditions. So gravitation, in particular, could no longer be
treated as possessing its own unique and universal quantum field; rather, each
gravitational field would simply be the vacuum field plus the particular boundary
conditions supplied for this vacuum.

Of course, the zero-point energy field is responsible for inertia since matter remains at
rest, i.e., continues travelling at near the speed of light along the time axis, due to its
energy bein