Edward Harrison [15,16] has argued that there are many of these physically significant
dimensionless ratios. He maintained that they cluster around values of the sequence:

..., N^-2, N^-3/2, N^-1, N^-1/2, N^0, N^1/2, N^1, N^3/2, N^2, ...
If the constants had been accidentally specified, their values would have been random, as
would be all physically significant dimensionless ratios. In this case, we would not have
expected any such pattern. This sequence, if real, would indicate that the constants were
not accidentally specified. This would serve to falsify all accidental scenarios. [17]
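As a toy illustration of this clustering test (not part of Harrison's own argument), one can check how close a given dimensionless ratio lies to the nearest half-integer power of N. The particular values of N and of the ratio below are hypothetical, chosen only to show the mechanics:

```python
import math

def nearest_half_power(ratio, N):
    """Return the half-integer exponent k for which N**k is closest to
    `ratio` in log space, together with the log-space residual."""
    x = math.log(ratio, N)   # exact exponent: ratio = N**x
    k = round(2 * x) / 2     # snap to the nearest multiple of 1/2
    return k, abs(x - k)

# Hypothetical example: with N = 1e39 (a Dirac-style large number),
# a ratio of ~1e78 sits essentially exactly at N**2.
k, resid = nearest_half_power(1e78, 1e39)
```

Under the accidental scenarios, the residuals for a collection of such ratios should be scattered uniformly rather than clustering near zero.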
As for theories of intelligent design, these also come in two broad categories:
theistic and naturalistic, which we now consider in turn. Various religious academics,
e.g., the astronomers Bernard Lovell [18] and Hugh Ross, [19] as well as the philosopher
William Lane Craig, [20] have argued that the apparent fine tuning is evidence of a divine
origin. Other theorists, perhaps most notably William A. Dembski [21-23] and Michael
Behe, [24] advocate a theistic model for all complexity in nature. This model would account
for the apparent fine-tuning, but it suffers from obvious issues relating to the
supernatural; any such hypothesis is highly speculative (to say the least) and, almost by
definition, beyond the reach of experimental or observational testing. The scientific
community has overwhelmingly rejected this approach.
If the accidental and theistic scenarios are indeed eliminated, then by logical
necessity, we are left with those of naturalistic intelligent design (NID). In modern times,
the development of this thesis arguably begins with Einstein; he often expressed support
for similar ideas. [25,26] Most memorably, he made reference to an amazement at the
harmony of natural law, which reveals "an intelligence of such superiority that, compared
with it, all the systematic thinking and acting of human beings is an utterly insignificant
reflection." [27] Nevertheless, it is probably more accurate to say that, as a scientific thesis,
NID begins with the astronomer Fred Hoyle. [28,29] He suggested the possibility of a
"super-calculating intellect" in connection with accounting for the fortuitous properties of the
carbon atom. A little later, the cosmologists Edward Farhi and Alan Guth [30] discussed, in
some detail, the possibility that naturally occurring intelligence in this Metacluster might
someday have the scientific and technological sophistication to produce a new
metacluster, although these authors concluded that "the requirement for an initial
singularity appears to be an insurmountable obstacle." This topic has also been
considered by M. Mitchell Waldrop. [31] (Waldrop seems to credit Guth for the idea.)
Subsequent development of the approach then takes off from, oddly enough, a scenario of
the accidental category proposed by the physicist Lee Smolin. [32] He suggested that
metaclusters are the result of a process of natural selection favoring the formation of
black holes, which, in turn, supposedly favor life. This inspired the cosmologist Edward
Harrison [33,34] to postulate a theory which identifies intelligent life as a critical component
in a process of natural selection that favors the formation of metaclusters suitable for
intelligent life. Similar notions have been championed by the mathematician Louis
Crane. [35] James N. Gardner [36-38] has been perhaps the most noticeable proponent of this
model. He suggests that humans, aided by highly advanced computers, will one day have
the capacity to replicate the Metacluster. He also argues that his theory is potentially
falsifiable in various ways, but the experimental scenarios themselves are speculative and
we are apparently required to wait a very lengthy period before the opportunities arise.
More recently, the cosmologist John Gribbin [39] has argued in favor of NID. Linde [40] has
suggested that a "physicist hacker" may have been responsible for creating the
Metacluster. The biologist John E. Stewart [41] has discussed the possibility of naturally
occurring life that tunes the parameters of offspring universes. Various others have
commented on this thesis, sometimes negatively. These include the philosopher Clément
Vidal, [42-44] the astrophysicists J. Richard Gott and Li-Xin Li [45] and the mathematician John
Byl. [46]
This approach is hardly problem-free. It is, for example, at least at present, highly
speculative. These theories also encounter resistance because they so closely resemble
the theistic models; they are thus sometimes assumed to be unscientific.
However, in explicitly eliminating any reference to the supernatural, they adequately
address this objection. The critical problem concerns the fact that no one has specified a
physically realistic scenario by which this replication is to be achieved, and therein lies
the potential for resolving problems relating to verification and falsification. If a scenario
were developed with sufficient detail to provide physical implications, we would have
this potential. The following discussion, then, can be viewed as an attempt to express the
implications of systematic parameter variation. As we will see, this leads directly to a
remarkably simple scenario in which naturally occurring intelligent life acts to specify the
parameters in a cyclic fashion, one with numerous observational implications.
Before proceeding, however, it would be helpful to consider briefly just what we
would realistically require of any successful scenario that explained the cosmos as
resulting from naturalistic intelligent design. Alternatively expressed, what would be the
demand of an ardent skeptic? We would be rightly dubious of any such theory, since it
would be reminiscent of ancient myths. In order to be truly comfortable, then, we would
hold it to the most stringent standards. Any such explanation would be so unexpected and
so far-reaching in its impact that we would require, above all else, several forms of
evidence. Indeed, this has been one of the important lessons from the history of standard
model cosmology; the systematic redshifts of distant galaxies, now recognized as
unequivocal evidence in support of the model, were not enough to be decisive, at least
not at first; corroborating evidence, the CMB, was required. But in the case of NID, even
multiple categories of evidence might be insufficient. We would certainly be much more
comfortable if the explanation were also epistemically ideal. In this regard, it would have
to be, most importantly, completely consistent with established science; if the theory
were to require a reexamination of any established principle, we would be rightly
dubious. Likewise, we would prefer that it be especially simple and straightforward; if
the theory involved any implausible assumptions, slippery logic, dubious method or
unexpectedly complex projections, we would be, again, less than satisfied. We would
expect it to be falsifiable, and almost immediately so; we would not want to wait a
million years to have this. We would also expect it to solve theoretical problems (e.g.,
questions relating to net baryon number), since this is a common characteristic of
important, successful theories. We would also want a simple, clear-cut resolution to the
various philosophic issues associated with the question of ultimate origins (e.g., first
cause vs. infinite regress). Moreover, we would want some clearly credible explanation
as to why creation myths would have been crudely accurate. More to the point, this
account would have to be entirely consistent with any of the established facts of
comparative mythology (the study of myths, a cross-disciplinary focus of anthropology,
linguistics, psychology, history and religious studies). Nevertheless, any such theory
would be, prima facie, so implausible, that we might still be uncomfortable if not for
something possibly quite new to science: self-fulfilling validity (if such a thing is
possible). This, then, is the ideal and a tall order it is. Some of us might be content with
less, but these criteria specify the comfort level of the die-hard skeptic. Let us see how
well we do.
There is another way to look at this. If the cosmos is the result of deliberate
specification, the intelligent agent, being clever enough to have produced the
Metacluster with its rightly skeptical conscious life, would have anticipated a scientific
society's reluctance to accept something so reminiscent of ancient myths. This agent
would have done everything possible to provide such things as unequivocal physical
evidence, epistemic perfection and self-fulfilling validity. If these considerations are
valid, then the hypothesis of NID necessarily implies the availability of this support. The
ideal character of the explanation itself would be part of the design. This is not to say that
this support would have always been available; an intelligent species would presumably
have to reach a certain level of sophistication before becoming aware of it.
As we will see, the scenario presented below involves only a few, reasonable,
indeed common, assumptions, and it appears to be entirely consistent with established
science. Furthermore, it has numerous, testable implications and multiple categories of
potentially supporting evidence, some of which is modestly developed and presented
below. This explanation addresses numerous outstanding problems in science and
philosophy, especially astrophysics, as is discussed, and it apparently does so without
producing a tangle of new problems. Further still, the scenario entails, as alluded to,
something that might be characterized as self-fulfilling validity. Putting aside, for the
moment, questions relating to present conditions, the explanation presented here is such
that it would apply to future conditions if we were determined to make it so. Since these
future conditions would be indistinguishable from present conditions, as is made clear
below, the explanation would apply to both.
3. Basic Postulates
Let us start, then, by supposing that the Metacluster provides some mechanism by
which (a) reiteration is to occur and (b) in such a manner as to produce an otherwise
identical system that is scaled down at all levels of material organization. Alternatively
expressed, this would be a system that is scaled down in terms of all fundamental
parameters, most notably the masses of elementary particles. This would involve the
input of only a very modest mass, as compared, for example, to the total mass of the
Metacluster for the classical, oscillating model (or the creation of matter in other models
of Metacluster replication). Any intelligent life in such a scaled system would evaluate its
environment without reference to anything external, as we do in ours. They would find
their situation indistinguishable from that in which we find ourselves; reiteration would
have been achieved. This scenario, then, provides the desired end without invoking
speculative notions of mass-energy creation, supplemental spatial dimensions, bubbles
in spacetime, etc. The second-generation system, being essentially identical to its
predecessor, would also include the same mechanism for reiteration, and so on. Notice
that this would in fact circumvent the heat death without violating the second law.
Postulate 1: The reiteration of cosmogony takes place based on a fractional
portion of the Hubble mass, producing a second-generation system that is scaled down in
terms of all fundamental parameters.
Hereafter, this fractional portion will be referred to as the bulk mass. (This term has no
connection to the concept of bulk space in M theory.)
Now, this process is significantly simplified by several considerations. First, the
bulk mass need only be reduced to conditions under which it would evolve essentially in
accordance with the standard model (proper scaling assumed). This is an almost obvious
fact, of course, but it does significantly simplify the process; there would be no need for
any further influence after such initial conditions have been specified. Indeed, it is
generally thought that any such influence would be impossible. [39]
Postulate 2: The bulk mass is reduced to conditions under which it would evolve
essentially in accordance with the standard model (proper scaling assumed).
This reiterative process is greatly simplified in yet another respect. Suppose that the bulk
mass is reduced as described, with the exception of some small residual portion. Since the
reduced portion would have a density equivalent to that of the Metacluster at some early
moment (~10^90 g cm^-3), it would have far in excess of the density of a black hole and
would be capable of assimilating the residual mass via gravitational accretion. The
assimilation of additional matter would likely produce changes in the developmental
trajectory of the initially reduced portion; thus, initial specifications would have to be
such that after assimilating the additional mass, the aggregate would evolve essentially in
accordance with the standard model. Therefore, as long as initial specifications took this
accretion into consideration, any change caused by it would have the effect of adjusting
the process such that the total mass would evolve as required.
We can extend this logic to a scenario in which the initially reduced mass is much
less than the bulk mass, i.e., some very small portion might be initially reduced and then
situated such as to assimilate the now substantial, residual portion; a progenitive
modicum is initially produced and then situated within the bulk mass. The theory thus
reduces to the suggestion that the reiteration of cosmogony is dependent simply on the
production of a progenitive modicum of suitable properties. Presumably, this would be
some variation on the conditions that existed at a very early point in the origin of our
Metacluster, either a false vacuum state or a plasma of bosons and fermions (properly
scaled). If it is the false vacuum option, then there would be some energy imparted
during its formation. After inflation, the latent energy breaks free, reheats space, and
particle formation commences. Indeed this does seem the more likely choice, since
inflationary theory has been convincingly supported by WMAP observations. [47,48] There
are still problems with inflationary theory, most notably the requirement for an initial
fine-tuning, [49] but notice that this problem is immediately resolved if initial conditions are
deliberately contrived. As for other problems with inflationary theory, they may result
from the assumption that there would have been no mass accretion. In any case, that is
the essence of the theory: if such a modicum were produced, inevitable consequences
would spontaneously produce a descendant Metacluster.
Postulate 3: The reiteration of cosmogony depends simply on the synthesis of the
progenitive modicum (proper situation assumed).
Notice that this does not involve an initial singularity. Note also that this is the full extent
of speculation for the mechanics of the model proposed here. (Some additional
guesswork is discussed below, but it relates to verification rather than the mechanics of
the process.) It is perhaps also worth mentioning that theorists seem to agree that
deliberate metacluster formation is at least theoretically possible. [30,35,40]
Now, the evaporation time for a black hole is given by

t_ev = 5120π G^2 M_0^3 / (ħ c^4),

where G is the gravitational constant, M_0 is the initial mass of the black hole, ħ is the
reduced Planck constant and c is the speed of light. Therefore, a very small black hole simply explodes.
However, this equation assumes no mass accretion and no technological manipulation,
such as reflection due to force fields or confinement of some other sort. For the time
being, let us assume that the modicum either is initially too large to explode or is
technologically or situationally manipulated to prevent this. No attempt will be made here
to describe how this modicum is to be produced, although various authors, including
Stephen Hawking, have suggested that particle accelerators might be used to produce
black holes. [50-53] This topic, however, would need to be developed at some point in time.
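For orientation, the evaporation formula above can be evaluated numerically; the two sample masses below are merely illustrative:

```python
import math

G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34   # reduced Planck constant, J s
C    = 2.998e8     # speed of light, m s^-1
YEAR = 3.156e7     # seconds per year

def t_evaporation(M0):
    """Evaporation time t_ev = 5120*pi*G^2*M0^3 / (hbar*c^4), in seconds."""
    return 5120 * math.pi * G**2 * M0**3 / (HBAR * C**4)

# A solar-mass hole (~2e30 kg) survives for ~1e67 yr -- effectively forever --
# while a small ~2e11 kg hole evaporates within roughly the current cosmic age,
# which is why a very small black hole "simply explodes" on short timescales.
t_sun   = t_evaporation(2.0e30) / YEAR
t_small = t_evaporation(2.0e11) / YEAR
```

The cubic dependence on mass is what makes confinement or accretion necessary for any very small progenitive modicum.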
We can take at least a very modest first step toward putting this on a quantitative
basis by considering the scaling factor for mass in a second-generation system. If the
subsequent system is to be based on the mass of, say, a star (10^30 kg), and with the
Hubble mass as 10^55 kg, then the scaling factor would be 10^-25

then implies emission at R ≈ 3.28 × 10^3 m. The continuum emission, then, is coming from
just outside of the Schwarzschild radius. Far from being a galaxy, a quasar would be about
the size of a large building. In this case, for a quasar seen edge-on, the transits of planets
in that system would produce noticeable occultations. Aside from any periodic
fluctuations due to the accretion process, this would appear as an occasional blinking.
Assuming that such planetary systems are similar to ours, these occultations would
produce a characteristic effect. If we assume that the system has a Mercury-like planet,
for example, there would be an occultation approximately every eighty-eight days.
Superimposed on this would be the occultations of a Venus-like planet, approximately
every 225 days, and so on for additional planets. The duration of an occultation for the
Mercury-like planet would be approximately 100 seconds. For a Venus-like planet, it
would be approximately 346 seconds. To establish a pattern, we would want to observe at
least three occultations. For the Mercury-like pattern, this would require approximately
one year of observations. (To the same level of rigor, occultations for the three inner
planets would require about three years of observations.) With some 10^7 objects to choose
from, we should be able to find at least a few with sufficiently precise alignment for this
test. (All of this assumes that the edge-on view is not excessively obscured by dust in the
plane of the planetary system.) The existing observational data is already suggestively
consistent with these expectations. The typical quasar variability is on the order of a few
months, just what we would expect for the most noticeable occultations, those of a
Mercury-like planet. [67] Furthermore, as indicated above, the reduction in luminosity
should be brief (about a minute) followed by sustained, months-long outbursts. This is,
apparently, the typically observed condition. [66]
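As a quick numerical sanity check on the scales involved, the sketch below computes the Schwarzschild radius for a ~10^30 kg hole (the illustrative stellar mass used earlier) and the observing baselines implied by three occultations of each inner planet:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m s^-1

def schwarzschild_radius(M):
    """R_s = 2*G*M / c^2, in meters."""
    return 2 * G * M / C**2

# A ~10^30 kg hole has R_s of roughly 1.5 km, so continuum emission at a
# few kilometers is indeed coming from just outside the horizon.
R_s = schwarzschild_radius(1.0e30)

# Three occultations of a planet with a Mercury-like 88-day period fit
# within a year of monitoring; a Venus-like 225-day period needs about two.
mercury_baseline_days = 3 * 88    # 264 days
venus_baseline_days   = 3 * 225   # 675 days
```

These are order-of-magnitude checks only; a real campaign would also have to model the accretion-driven variability that the occultations are superimposed on.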
Thus, a careful reexamination of the quasars provides the best opportunity to
develop evidence in support of these postulates. This could conceivably be accomplished
in as little as a few months. It is mostly a matter of working out the details of the model;
the observational data is largely in place.
Meanwhile, for purposes of illustrating the potential, it would certainly be helpful
to develop this evidence to some small degree. For this purpose, we will calculate the
number of civilizations that would have reached the reiterative phase during that
cosmological epoch which would correspond to the incidence of quasars at this time. We
will then compare this to the known number of quasars, N_qso. If the above postulates are
correct, we would have the verifying condition:

N_civ ≈ N_qso.
The first step in this calculation requires an estimation of the relevant number of
civilizations, and to determine this, we need to know the time frame over which to make
the count. This period can be parameterized as beginning at

t = 1.37 × 10^10 yr - t_p - t_civ - t_qso

and ending at

t = 1.37 × 10^10 yr - t_p - t_civ,

where 1.37 × 10^10 yr is the present age of the Metacluster, t_p is the time for a
civilization to first appear on an Earth-like planet, t_civ is the lifetime of a civilization
and t_qso is the luminous period of a quasar. Quasars formed earlier than
t = 1.37 × 10^10 yr - t_p - t_civ - t_qso would now be black holes and therefore
invisible, and no quasars would have been formed yet for the period subsequent to
t = 1.37 × 10^10 yr - t_p - t_civ.
Now, as is well known, t_p ≈ 4.57 Gyr. [68] As for t_civ, many authors have offered
essentially speculative estimates, with values ranging from a few hundred years [69] to
virtual infinity. [1] However, the most rigorous estimate comes from von Bloh et al. [62] These
theorists have developed an integrated Earth system analysis that takes into
consideration stellar luminosity variations and geodynamic factors such as silicate rock
weathering and the global energy balance. This analysis appears to be widely respected
and gives a value of t_civ ≈ 5 × 10^8 yr. As we will see, the calculations offered below
constitute a test of this value. Note, then, that:

1.37 × 10^10 yr - t_p - t_civ ≈ 8.63 × 10^9 yr.
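The upper integration limit quoted above follows from simple arithmetic on the three timescales; a minimal check:

```python
T_NOW = 1.37e10   # present age of the Metacluster, yr
T_P   = 4.57e9    # time for a civilization to first appear, yr
T_CIV = 5.0e8     # civilization lifetime per von Bloh et al., yr

# Upper bound of the counting window: 1.37e10 - 4.57e9 - 5e8 = 8.63e9 yr.
upper_bound = T_NOW - T_P - T_CIV
```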
Finally in this connection, we consider t_qso. The lifetime of a quasar, per the
above postulates, would correspond to the accretion time for a small black hole
assimilating a Sun-like star. Observations of the quasars indicate that this is an accretion
disc process, and most such models presume a stationary and axially symmetric disc. In
this case, we would use the Eddington accretion rate,

Ṁ_Edd ≈ 1.5 × 10^17 (M/M_solar) g sec^-1,

where M_solar is the mass of the Sun and M is the mass of the hole when the quasar is first
visible. Accurately estimating M is unnecessary for present purposes. However, we
crudely recognize that some substantial portion of the star would have to be consumed
before the accretion would be visible through the stellar envelope. Even if this portion
were as small as 0.1 M_solar, t_qso ≈ 10^11 yr. This would be two orders of magnitude greater
than the entire prior period indicated above. We can conclude, then, that our lower
bound is effectively the time t = 0 and the upper bound is given by the expression above. More
accurately, Earth-like planets do not even begin to form until t ≈ 2.5 Gyr. [70] In the integral
below, we will use the limits 2.5 × 10^9 yr and 8.63 × 10^9 yr.
Historically, a count for the number of civilizations would have been pursued
within the context of an abbreviated form of the Drake equation. This, however, is an
approach that has fallen out of favor with astrobiologists. A more rigorous approach can
be based on the work of Charles Lineweaver [70] and von Bloh et al. [62] This involves a
convolution of probabilities relating to metallicity and the prevalence of hot Jupiters to
calculate the number of Earth-like planets over cosmic time. If we further assume that
each such planet harbors a civilization (per considerations following Postulate 4), this
approach will provide a count of civilizations.
According to this approach, the number of civilizations in the Milky Way over a
period t_1 to t_2 would be given by:

N_civ = ∫_{t_1}^{t_2} PFR(t) p_hab(t) dt,

where PFR(t) is the Earth-like planet formation rate:
PFR(t) = A SFR(t) ∫ P_HE(ζ) [1/(√(2π) σ)] exp[-(ζ - ζ̄(t))^2 / (2σ^2)] dζ,

where ζ̄(t) is the mean metallicity at time t, and p_hab(t) is the probability that an
Earth-like planet is within the habitable zone of a Sun-like star:
p_hab(t) = (1/C) ∫_{0.8 M_solar}^{1.2 M_solar} ∫_{R_inner(M,t)}^{R_outer(M,t)} M^-2.5 R^-1 dR dM.
Here A = 0.05 is the fraction of stars that form Sun-like stars, ζ is the metallicity, σ = 0.3
is the dispersion, M is stellar mass, P_HE(ζ) is the probability that a star may harbor an
Earth-like planet, C ≈ 1.57 M_solar^-1.5 is a normalization factor, R_inner and R_outer are the inner
and outer boundaries of the circumstellar habitable zone and SFR(t) is the star formation
rate.
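To illustrate the structure of p_hab(t) as written above, the sketch below evaluates the unnormalized double integral using constant, purely hypothetical habitable-zone boundaries (0.95 and 1.37, standing in for R_inner and R_outer, which in the actual model depend on M and t):

```python
import math

def p_hab_unnormalized(r_inner, r_outer, m_lo=0.8, m_hi=1.2, n=2000):
    """Evaluate the double integral of M^-2.5 * R^-1 over
    [m_lo, m_hi] x [r_inner, r_outer].  The R integral has the closed
    form ln(r_outer/r_inner); the M integral uses the midpoint rule.
    Constant habitable-zone boundaries are a simplification here."""
    dm = (m_hi - m_lo) / n
    total = 0.0
    for i in range(n):
        m = m_lo + (i + 0.5) * dm
        total += m**-2.5 * math.log(r_outer / r_inner) * dm
    return total

# Hypothetical boundaries, chosen only for illustration.
num = p_hab_unnormalized(0.95, 1.37)

# Fully closed form for this simplified case:
#   [(m_lo^-1.5 - m_hi^-1.5) / 1.5] * ln(r_outer / r_inner)
exact = ((0.8**-1.5 - 1.2**-1.5) / 1.5) * math.log(1.37 / 0.95)
```

With the separable integrand the closed form provides a check on the numerical routine; the full calculation would replace the constant boundaries with R_inner(M,t) and R_outer(M,t) and apply the normalization C.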
Putting all of this together with the above-indicated limits of integration, we get:

N_civ = ∫_{2.5×10^9 yr}^{8.63×10^9 yr} ∫ ∫_{0.8 M_solar}^{1.2 M_solar} ∫_{R_inner(M,t)}^{R_outer(M,t)}
        0.05 SFR(t) P_HE(ζ) [1/(√(2π) σ)] exp[-(ζ - ζ̄(t))^2 / (2σ^2)]
        × (1/C) M^-2.5 R^-1 dR dM dζ dt.
As we see, this is somewhat more involved than the outdated Drake equation. The
solution is N_civ ≈ 1.95 × 10^7.
The current estimate for the number of quasars is given by the generally preferred
quasar luminosity function of Hopkins et al.: [71]

dΦ/d(log L) = φ_* / [(L/L_b)^γ1 + (L/L_b)^γ2],

where φ_* is a normalization, L_b is the break luminosity and γ1 and γ2
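As a sketch of how this double power-law form behaves, the function below evaluates dΦ/d(log L); the break luminosity, normalization and slope values are placeholders, not fitted parameters:

```python
def qlf(L, L_b, phi_star, gamma1, gamma2):
    """Double power-law luminosity function:
    dPhi/dlogL = phi_star / ((L/L_b)**gamma1 + (L/L_b)**gamma2)."""
    x = L / L_b
    return phi_star / (x**gamma1 + x**gamma2)

# At L = L_b the two power-law terms are equal, so the function returns
# exactly phi_star / 2 regardless of the slopes.
value_at_break = qlf(1.0e12, 1.0e12, 1.0e-6, 0.4, 2.1)
```

Below the break the shallower slope dominates and above it the steeper one, which is what allows the function to be integrated for a total quasar count.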