
How Unlikely is a Doomsday Catastrophe?

Max Tegmark¹ & Nick Bostrom²

¹ Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
² Future of Humanity Institute, Faculty of Philosophy, Oxford University, OX1 4JJ, Oxford, UK

(Dated: December 18, 2005. This paper is an extended version of the Brief Communication published in Nature, 438, 754 [1].)
Numerous Earth-destroying doomsday scenarios have recently been analyzed, including breakdown of a metastable vacuum state and planetary destruction triggered by a “strangelet” or microscopic black hole. We point out that many previous bounds on their frequency give a false sense of security: one cannot infer that such events are rare from the fact that Earth has survived for so long, because observers are by definition in places lucky enough to have avoided destruction. We derive a new upper bound of one per 10^9 years (99.9% c.l.) on the exogenous terminal catastrophe rate that is free of such selection bias, using planetary age distributions and the relatively late formation time of Earth.

I. INTRODUCTION

As if we humans did not have enough to worry about, scientists have recently highlighted catastrophic scenarios that could destroy not only our civilization, but perhaps even our planet or our entire observable universe. For instance, fears that heavy ion collisions at the Brookhaven Relativistic Heavy Ion Collider (RHIC) might initiate such a catastrophic process triggered a detailed technical report on the subject [2], focusing on three risk categories:

1. Initiation of a transition to a lower vacuum state, which would propagate outward from its source at the speed of light, possibly destroying the universe as we know it [2, 3, 4].

2. Formation of a black hole or gravitational singularity that accretes ordinary matter, possibly destroying Earth [2, 4].

3. Formation of a stable “strangelet” that accretes ordinary matter and converts it to strange matter, possibly destroying Earth [2, 5].

Other catastrophe scenarios range from uncontroversial to highly speculative:

4. Massive asteroid impacts, nearby supernova explosions and/or gamma-ray bursts, potentially sterilizing Earth.

5. Annihilation by a hostile space-colonizing robot race.

The Brookhaven report [2] concluded that if 1-3 are possible, then they will with overwhelming likelihood be triggered not by RHIC, but by naturally occurring high-energy astrophysical events such as cosmic ray collisions. Risks 1-5 should probably all be treated as exogenous, i.e., uncorrelated with human activities and our technical level of development. The purpose of the present paper is to assess the likelihood per unit time of exogenous catastrophic scenarios such as 1-5.

One might think that since life here on Earth has survived for nearly 4 Gyr (Gigayears), such catastrophic events must be extremely rare. Unfortunately, such an argument is flawed, giving us a false sense of security. It fails to take into account the observation selection effect [6, 7] that precludes any observer from observing anything other than that their own species has survived up to the point where they make the observation. Even if the frequency of cosmic catastrophes were very high, we should still expect to find ourselves on a planet that had not yet been destroyed. The fact that we are still alive does not even seem to rule out the hypothesis that the average cosmic neighborhood is typically sterilized by vacuum decay, say, every 10,000 years, and that our own planet has just been extremely lucky up until now. If this hypothesis were true, future prospects would be bleak.

We propose a way to derive an upper bound on cosmic catastrophe frequency that is unbiased by such observer selection effects. We argue that planetary and stellar age distributions bound the rates of many doomsday scenarios, and that scenarios evading this bound (notably vacuum decay) are instead constrained by the relatively late formation time of Earth. The idea is that if catastrophes were very frequent, then almost all intelligent civilizations would arise much earlier than ours did. Using data on planet formation rates, it is possible to calculate the distribution of birth dates for intelligent species under different assumptions about the rate of cosmic sterilization. Combining this with the information about our own temporal location enables us to conclude that the cosmic sterilization rate for a habitable planet is at most of order one per Gigayear.

FIG. 1: The left panel shows the probability distribution for observed planet formation time assuming catastrophe timescales τ of ∞
(shaded), 10, 9, 8, 7, 6, 5, 4, 3, 2 and 1 Gyr, respectively (from right to left). The right panel shows the probability of observing a
formation time ≥9.1 Gyr (that for Earth), i.e., the area to the right of the dotted line in the left panel.
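The reweighting and tail-probability computation that Figure 1 illustrates can be sketched in a few lines. This is a hedged illustration only: the function `f_p` below is a crude log-normal stand-in for the simulated habitable-planet formation rate of [10], and `catastrophe_weighted_tail` is a name introduced here, not the authors' code.

```python
import math

def catastrophe_weighted_tail(tau, t_earth=9.1, t_max=14.0, n=2000):
    """Probability that an observer's planet formed at t >= t_earth,
    given catastrophe timescale tau (all times in Gyr).

    Implements f_p*(t) ∝ f_p(t) e^(-t/tau), then integrates the tail
    beyond Earth's formation time (cf. Figure 1, right panel)."""
    def f_p(t):
        # Toy log-normal formation-rate curve peaking a few Gyr after
        # the Big Bang -- an illustrative stand-in, not the curve of [10].
        return math.exp(-0.5 * ((math.log(t) - math.log(8.0)) / 0.4) ** 2) / t

    dt = t_max / n
    ts = [(i + 0.5) * dt for i in range(n)]           # midpoint grid
    w = [f_p(t) * math.exp(-t / tau) for t in ts]     # survival-weighted rate
    tail = sum(wi for t, wi in zip(ts, w) if t >= t_earth)
    return tail / sum(w)   # dt cancels in the ratio

# Lowering tau shifts the observed formation-time distribution left,
# shrinking the probability of a formation time as late as Earth's:
assert catastrophe_weighted_tail(2.0) < catastrophe_weighted_tail(1e9)
```

Scanning tau downward until this tail probability falls below 5%, 1%, or 0.1% is the logic behind confidence limits on τ, though any numbers produced here depend entirely on the toy choice of f_p.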

II. AN UPPER BOUND ON THE CATASTROPHE RATE

Suppose planets get randomly sterilized or destroyed at some rate τ^(-1), which we will now constrain. This means that the probability of a planet surviving a time t decays exponentially, as e^(-t/τ).

The most straightforward way of eliminating observer selection bias is to use only information from objects whose destruction would not yet have affected life on Earth. We know that no planets from Mercury to Neptune in our solar system have been converted to black holes or blobs of strange matter during the past 4.6 Gyr, since their masses would still be detectable via their gravitational perturbations of the orbits of other planets. This implies that the destruction timescale τ must be correspondingly large, unless their destruction is linked to ours, either by a common cause or by their implosion resulting in the emission of doomsday particles like black holes or strangelets that would in turn destroy Earth. This observer selection effect loophole is tightened if we consider extrasolar planets that have been seen to partially eclipse their parent star [8] and are therefore known not to have imploded. The doomsday particles discussed in the literature would be more readily captured gravitationally by a star than by a planet, in which case the observed abundance of very old (≳ 10 Gyr) stars (e.g., [9]) would further sharpen the lower bound on τ.

The one disaster scenario that exploits the remaining observer bias loophole and evades all these constraints is vacuum decay, either spontaneous or triggered by a high-energy event. Since the bubble of destruction expands with the speed of light, we are prevented from observing the destruction of other objects: we only see their destruction at the instant when we ourselves get destroyed. In contrast, if scenarios 2 or 3 involved doomsday particle emission and proceeded as a chain reaction spreading subluminally, we would observe spherical dark regions created by expanding destruction fronts that have not yet reached us. We will now show that the vacuum decay timescale can be bounded by a different argument.

The formation rate f_p(t_p) of habitable planets as a function of time since the Big Bang is shown in Figure 1 (left panel, shaded distribution). This estimate is from [10], based on simulations including the effects of heavy element buildup, supernova explosions and gamma-ray bursts. If regions of space get randomly sterilized or destroyed at a rate τ^(-1), then the probability that a random spatial region remains unscathed decays as e^(-t/τ). This implies that the conditional probability distribution f_p*(t_p) for the planet formation time t_p seen by an observer is simply the shaded distribution f_p(t_p) multiplied by e^(-t_p/τ) and rescaled to integrate to unity, giving the additional curves in Figure 1 (left panel).¹

¹ Proof: Let f_o(t_o) denote the probability distribution for the time t_o after planet formation when an observer measures t_p. In our case, t_o = 4.6 Gyr. We obviously know very little about this function f_o, but it fortunately drops out of our calculation. The conditional probability distribution for t_p, marginalized over t_o, is

    f_p*(t_p) ∝ ∫_0^∞ f_o(t_o) f_p(t_p) e^(-(t_o+t_p)/τ) dt_o ∝ f_p(t_p) e^(-t_p/τ),    (1)

independently of the unknown distribution f_o(t_o), since e^(-(t_o+t_p)/τ) = e^(-t_o/τ) e^(-t_p/τ) and hence the entire integrand is separable into a factor depending on t_p and a factor depending on t_o.

As we lower the catastrophe timescale τ, the resulting distributions (left panel) are seen to peak further to the left and

the probability that Earth formed as late as observed (9.1 Gyr after the Big Bang) or later drops (right panel). The dotted lines show that we can rule out the hypothesis that τ < 2.5 Gyr at 95% confidence, and that the corresponding 99% and 99.9% confidence limits are τ > 1.6 Gyr and τ > 1.1 Gyr, respectively.

Risk category 4 is unique in that we have good direct measurements of the frequency of impacts, supernovae and gamma-ray bursts that are free from observer selection effects. Our analysis therefore used the habitable planet statistics from [10] that folded in such category 4 measurements.

Our bounds do not apply in general to disasters of anthropogenic origin, such as ones that become possible only after certain technologies have been developed, e.g., nuclear annihilation or extinction via engineered microorganisms or nanotechnology. Nor do they apply to natural catastrophes that would not permanently destroy or sterilize a planet. In other words, we still have plenty to worry about [11, 12, 13, 14]. However, our bounds do apply to exogenous catastrophes (e.g., spontaneous or cosmic ray triggered ones) whose frequency is uncorrelated with human activities, as long as they cause permanent sterilization.

Our numerical calculations made a number of assumptions. For instance, we treated the exogenous catastrophe rate τ^(-1) as constant, even though one could easily imagine it varying by of order 10% over the relevant timescale, since our bound on τ is about 10% of the age of the Universe.² Second, the habitable planet formation rate involved several assumptions detailed in [10] which could readily modulate the results by 20%. Third, the risk from events triggered by cosmic rays will vary slightly with location if the cosmic ray rate does. Fourth, due to cosmological mass density fluctuations, the mass to scatter off of varies by about 10% from one region of size cτ ~ 10^9 lightyears to another, so the risk of cosmic-ray triggered vacuum decay will vary on the same order.

In summary, although a more detailed calculation could change the quantitative bounds by a factor of order unity, our basic result that the exogenous extinction rate is tiny on human and even geological timescales appears rather robust.

III. CONCLUSIONS

We have shown that life on our planet is highly unlikely to be annihilated by an exogenous catastrophe during the next 10^9 years. This numerical limit comes from the scenario on which we have the weakest constraints: vacuum decay, constrained only by the relatively late formation time of Earth. This conclusion also translates into a bound on hypothetical anthropogenic disasters caused by high-energy particle accelerators (risks 1-3). This holds because the occurrence of exogenous catastrophes, e.g., those resulting from cosmic ray collisions, places an upper bound on the frequency of their anthropogenic counterparts. Hence our result closes the logical loophole of selection bias and gives reassurance that the risk of accelerator-triggered doomsday is extremely small, so long as events equivalent to those in our experiments occur more frequently in the natural environment. Specifically, the Brookhaven Report [2] suggests that possible disasters would be triggered at a rate that is at the very least 10^3 times higher for naturally occurring events than for high-energy particle accelerators. Assuming that this is correct, our 1 Gyr limit therefore translates into a conservative upper bound of 1/(10^3 × 10^9) = 10^(-12) on the annual risk from accelerators, which is reassuringly small.
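The closing arithmetic can be checked directly; this fragment simply restates the two inputs, the 1-per-Gigayear exogenous bound and the ≥ 10^3 natural-to-accelerator rate ratio from [2]:

```python
# Annual exogenous sterilization risk, bounded above by one per Gigayear.
exogenous_annual_risk = 1 / 1e9
# Brookhaven Report [2]: natural triggers outnumber accelerator
# triggers by a factor of at least 10^3.
natural_to_accelerator_ratio = 1e3
# Conservative upper bound on the annual accelerator-triggered risk.
accelerator_annual_risk = exogenous_annual_risk / natural_to_accelerator_ratio
assert abs(accelerator_annual_risk - 1e-12) < 1e-18
```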

² As pointed out by Jordi Miralda-Escude (private communication), the constraint from vacuum decay triggered by bubble nucleation is even stronger than our conservative estimate. The probability that a given point is not in a decayed domain at time t is the probability of no bubble nucleations in its backward light cone, whose spacetime 4-volume ∝ t^4 for both matter-dominated and radiation-dominated expansion. A constant nucleation rate per unit volume per unit time therefore gives a survival probability e^(-(t/τ)^4) for some destruction timescale τ. Repeating our analysis with e^(-t/τ) replaced by the sharper cutoff e^(-(t/τ)^4) sharpens our constraint. Our quoted bound corresponds to the conservative case where τ greatly exceeds the age of the universe at the dark energy domination epoch, which gives a backward lightcone volume ∝ t.
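Footnote 2's claim, that a bubble-nucleation survival law e^(-(t/τ)^4) cuts off more sharply than the exponential used in the main analysis, is easy to see numerically (a sketch with illustrative values only; times in Gyr):

```python
import math

def survival_exponential(t, tau):
    """Survival probability under a constant per-region catastrophe rate."""
    return math.exp(-t / tau)

def survival_nucleation(t, tau):
    """Survival probability when bubbles nucleate at a constant rate per
    unit 4-volume, so the backward light cone's 4-volume grows as t^4."""
    return math.exp(-((t / tau) ** 4))

# The t^4 law stays closer to 1 for t < tau but plunges far faster
# beyond it -- a sharper cutoff, hence a tighter bound on tau:
t_early, t_late, tau = 1.0, 9.1, 2.5
assert survival_nucleation(t_early, tau) > survival_exponential(t_early, tau)
assert survival_nucleation(t_late, tau) < survival_exponential(t_late, tau)
```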

Acknowledgements: The authors are grateful to Adrian Kent, Jordi Miralda-Escude and Frank Zimmermann for spotting loopholes in the first version of this paper, to the authors of [10] for use of their data, and to Milan Circovic, Hunter Monroe, John Leslie, Rainer Plaga and Martin Rees for helpful comments and discussions. Thanks to Paul Davies, Charles Harper, Andrei Linde and the John Templeton Foundation for organizing a workshop where this work was initiated. This work was supported by NASA grant NAG5-11099, NSF CAREER grant AST-0134999, and fellowships from the David and Lucile Packard Foundation and the Research Corporation.

[1] M. Tegmark and N. Bostrom, Nature, 438, 754 (2005)
[2] R. L. Jaffe, W. Busza, J. Sandweiss, and F. Wilczek, Rev. Mod. Phys., 72, 1125 (2000)
[3] P. Frampton, Phys. Rev. Lett., 37, 1378 (1976)
[4] P. Hut and M. J. Rees, “How Stable Is Our Vacuum?”, Nature, 302, 508 (1983); P. Hut, Nucl. Phys. A, 418, 301C (1984)
[5] A. Dar, A. De Rujula, and U. Heinz, Phys. Lett. B, 470, 142 (1999)
[6] B. Carter, in IAU Symposium 63, ed. M. S. Longair (Reidel: Dordrecht, 1974)
[7] N. Bostrom, Anthropic Bias: Observation Selection Effects in Science and Philosophy (Routledge: New York, 2002)
[8] F. Pont, astro-ph/0510846 (2005)
[9] B. M. S. Hansen et al., ApJ, 574, L155 (2002)
[10] C. H. Lineweaver, Y. Fenner, and B. K. Gibson, Science, 303, 59 (2004)
[11] J. Leslie, The End of the World: The Science and Ethics of Human Extinction (Routledge: London, 1996)
[12] N. Bostrom, Journal of Evolution and Technology, 9, 1 (2002)
[13] M. J. Rees, Our Final Hour: How Terror, Error, and Environmental Disaster Threaten Humankind’s Future in This Century — On Earth and Beyond (Perseus: New York, 2003)
[14] R. Posner, Catastrophe: Risk and Response (Oxford Univ. Press: Oxford, 2004)
