New version of “Structure of global catastrophe”

2016 revision, draft.
Most ideas are the same, but some maps are added.

How to prevent
existential risks

From full list to complete prevention plan

(Draft)

Copyright © 2016 Alexey Turchin.

Contents

Preface
  About this book
  What are human extinction risks?

Part 1. General considerations.

Chapter 1. Types and nature of global catastrophic risks
  Main open question about x-risks: before or after 2050?
  The difference between a global catastrophe and existential risk
  Smaller catastrophes as steps to possible human extinction
  One-factorial scenarios of global catastrophe
  Principles of classification of global risks
  Precautionary principle for existential risks
  X-risks and other human values
  Global catastrophic risks, human extinction risks and existential risks

Chapter 2. Problems of probability calculation
  Problems of calculation of probabilities of various scenarios
  Quantitative estimates of the probability of the global catastrophe, given by various authors
  The model of the future

Chapter 3. History of the research on global risk
  Current situation with global risks

Part 2. Typology of x-risks

Chapter 4. The map of all known global risks

Block 1 – Natural risks

Chapter 5. The risks connected with natural catastrophes
  Universal catastrophes
  Geological catastrophes
  Eruptions of supervolcanoes
  Falling of asteroids
  Asteroid threats in the context of technological development
  Zone of destruction depending on the force of explosion
  Solar flares and luminosity increase
  Gamma-ray bursts
  Supernova stars
  Super-tsunami
  Super-earthquake
  Polarity reversal of the Earth's magnetic field
  Emergence of new illnesses in nature
  Debunked and false risks from the media, science fiction, fringe science and old theories

Block 2 – Anthropogenic risks

Chapter 6. Global warming
Chapter 7. The anthropogenic risks which are not connected with new technologies
Chapter 8. Artificial triggering of natural catastrophes
Chapter 9. Nuclear weapons
  The evolution of scientific opinion on nuclear winter
  Nuclear war and human extinction
  More exotic nuclear scenarios
  Nuclear space weapons
  Nuclear attack on nuclear power stations
  Cheap nukes and new ways of enrichment
  Estimating the probability of nuclear war causing human extinction
  Near misses
  The map of x-risks connected with nuclear weapons
Chapter 10. Global chemical contamination
Chapter 11. Space weapons
  Latest developments

Block 4 – Supertechnologies: nanotech and biotech

Chapter 12. Biological weapons
  The map of biorisks
Chapter 13. Superdrug
  Design features
Chapter 14. Nanotechnology and robotics
  The map of nanorisks
Chapter 15. Space weapons
Chapter 16. Artificial Intelligence
  Current state of AI risks – 2016
  Distributed agent net as a basis for the hyperbolic law of acceleration and its implications for AI timing and x-risks
  The map of AI failure modes and levels
  AI safety in the age of neural networks and Stanislaw Lem's 1959 prediction
  The map of x-risks prevention
  Estimation of the timing of AI risk
  The Doomsday argument in estimating AI arrival timing
  Several interesting ideas of AI control
Chapter 17. The risks of SETI
  Latest developments in 2016
Chapter 18. The risks connected with blurring the borders between humans and transhumans
  The risks connected with the problem of "the philosophical zombie"
Chapter 19. The causes of catastrophes unknown to us now
  Phil Torres' article about unknown unknowns (UU)
  List of possible types of unknown unknowns
  Unexplained aerial phenomena as global risks

Part 3. Different factors influencing the global risks landscape

Chapter 20. Ways of detecting one-factorial scenarios of global catastrophe
  The general signs of any dangerous agent
Chapter 19. Multifactorial scenarios
  Integration of the various technologies, creating situations of risk
  Double scenarios
  Studying of global catastrophes by means of models and analogies
  Inevitability of the achievement of a steady condition
  Recurrent risks
  Global risks and the problem of the rate of their increase
  Comparative force of different dangerous technologies
  Sequence of appearance of various technologies in time
  Comparison of various technological risks
  The price of the question of x-risks prevention
  The universal cause of the extinction of civilizations
  Does the collapse of technological civilization mean human extinction?
Chapter. Agents which could start x-risks
  The social groups willing to risk the destiny of the planet
  Humans as a main factor in global risks, as a coefficient in risk assessment
  Decision-making about nuclear attack
Chapter 21. The events changing the probability of global catastrophe
  Definition and the general reasons
  Events which can open a vulnerability window
  System crises
  Crisis of crises
  Technological Singularity
  Overshooting leads to simultaneous exhaustion of all resources
  System crisis and technological risks
Chapter 21. Cryptowars, arms race and other scenario factors raising the probability of global catastrophe
  Cryptowar
Chapter 22. The factors influencing the speed of progress
  Global risks of the third sort
  Moore's law
Chapter 23. X-risks prevention
  The general notion of preventable global risks
  Introduction
  The problem
  The context
  In fact, we don't have a good plan
  Overview of the map
  The procedure for implementing the plans
  The probability of success of the plans
  Steps
  Plan A. Prevent the catastrophe
  Plan A1. Super UN or international control system
  A1.1 – Step 1: Research
  A1.1 – Step 2: Social support
  Reactive and proactive approaches
  A1.1 – Step 3: International cooperation
  Practical steps to confront certain risks
  A1.1 Risk control
  Elimination of certain risks
  A1.1 – Step 4: Second level of defense on the high-tech level: worldwide risk prevention authority
  Planetary unification war
  Active shields
  Step 5: Reaching indestructibility of civilization with negligible annual probability of global catastrophe: Singleton
  Plan A1.2 – Decentralized risk monitoring
  A1.2 – 1: Values transformation
  Ideological payload of new technologies
  A1.2 – 2: Improving human intelligence and morality
  Intelligence
  A1.2 – 3: Cold War, local nuclear wars and WW3 prevention
  A1.2 – 4: Decentralized risk monitoring
  Plan A2. Creating Friendly AI
  A2.1 Study and promotion
  A2.2 Solid Friendly AI theory
  A2.3 AI practical studies
  Seed AI
  Superintelligent AI
  Unfriendly AI
  Plan A3. Improving resilience
  A3.1 Improving sustainability of civilization
  A3.2 Useful ideas to limit the scale of catastrophe
  A3.3 High-speed tech development needed to quickly pass the risk window
  A3.4 Timely achievement of immortality on the highest possible level
  AI based on uploading of its creator
  Plan A4. Space colonization
  A4.1 Temporary asylums in space
  A4.2 Space colonies near the Earth
  Colonization of the Solar System
  A4.3 Interstellar travel
  Interstellar distributed humanity
  Plan B. Survive the catastrophe
  B1. Preparation
  B2. Buildings
  Natural refuges
  B3. Readiness
  B4. Miniaturization for survival and invincibility
  B5. Rebuilding civilization after the catastrophe
  Reboot of civilization
  Plan C. Leave backups
  C1. Time capsules with information
  C2. Messages to ET civilizations
  C3. Preservation of earthly life
  C4. Robot-replicators in space
  Resurrection by another civilization
  Plan D. Improbable ideas
  D1. Saved by non-human intelligence
  D2. Strange strategy to escape the Fermi paradox
  D4. Technological precognition
  D5. Manipulation of the extinction probability using the Doomsday argument
  D6. Control of the simulation (if we are in it)
  Bad plans
  Prevent x-risk research because it only increases risk
  Controlled regression
  Depopulation
  Computerized totalitarian control
  Choosing the way of extinction: UFAI
  Attracting a good outcome by positive thinking
  Conclusion
  Literature
  Active shields
  Existing and future shields
  Conscious stop of technological progress
  Means of preventive strike
  Removal of sources of risks to a considerable distance from the Earth
  Creation of independent settlements in the remote corners of the Earth
  Creation of the file on global risks and growth of public understanding of the problematics connected with them
  Refuges and bunkers
  Quick spreading in space
  «All somehow will manage itself»
  Degradation of the civilization to the level of a steady condition
  Prevention of one catastrophe by means of another
  Advance evolution of man
  Possible role of the international organizations in prevention of global catastrophe
  Infinity of the Universe and the question of irreversibility of human extinction
  Assumptions that we live in the "Matrix"
  Global catastrophes and society organization
  Global catastrophes and the current situation in the world
  The world after global catastrophe
  The world without global catastrophe: the best realistic variant of prevention of global catastrophes
  Maximizing pleasure if catastrophe is inevitable
Chapter 24. Indirect ways of estimating the probability of global catastrophe
Chapter 25. The most probable scenario of global catastrophe

Part 5. Cognitive biases affecting judgments of global risks

Chapter 1. General remarks: cognitive biases and global catastrophic risks
Chapter. Meta-biases
Chapter 2. Cognitive biases concerning global catastrophic risks
Chapter 3. How cognitive biases in general influence estimates of global risks
Chapter 4. Universal logical errors which can appear in reasoning about global risks
Chapter 5. Specific errors arising in discussions about the danger of uncontrollable development of Artificial Intelligence
Chapter 7. Conclusions from the analysis of cognitive biases in the estimation of global risks and possible rules for rather effective estimation of global risks
The conclusion. Prospects of prevention of global catastrophes
What is AI? – MA part
  Artificial Intelligence Today
  Projects in Artificial General Intelligence
  Whole Brain Emulation
  Ensuring the Continuation of Moore's Law
  The Software of Artificial General Intelligence
  Features of Superhuman Intelligence
  Seed Artificial Intelligence
  From Virtuality to Physicality
  The Yudkowsky-Omohundro Thesis of AI Risk
  Friendly AI
  Stages of AI Risk
  AI Forecasting
  Broader Risks
  Preventing AI Risk and AI Risk Sources
  AI Self-Improvement and Diminishing Returns Discussion
  Philosophical Failures in Advanced AI, Failures of Friendliness
  Impact and Conclusions
  Frequently Asked Questions on AI Risk
Chapter. Collective biases and errors

Preface
Existential risk – one where an adverse outcome would either
annihilate Earth-originating intelligent life or permanently and
drastically curtail its potential.
N. Bostrom. “Existential Risks: Analyzing Human
Extinction Scenarios and Related Hazards”

About this book
This book has developed from an encyclopedia of global catastrophic risks into a roadmap for risk prevention.

What are human extinction risks?
In the 20th century the possibility of the extinction of humankind was connected almost entirely with the threat of nuclear war. Now, at the beginning of the XXI century, we can easily name more than ten different sources of possible irreversible global catastrophe, mostly deriving from novel technologies, and the number of sources of risk grows constantly. Research on global risk is widely neglected. Problems such as the potential exhaustion of oil, the future of the Chinese economy, or outer space exploration get more attention than irreversible global catastrophes. It seems senseless to discuss the future of human civilization before we have an intelligent estimate of its chances of survival. Even if as a result of such research we learn that the risk is negligibly small, it is important to study the question. Preliminary studies by experts do not give encouraging results: Sir Martin Rees, the British Astronomer Royal and author of a book on global risks, estimates mankind's chances of survival to 2100 at only 50%.
This book is devoted to a consistent review of the “threats to existence”, that is, to risks of the irreversible destruction of all human civilization and the extinction of mankind. The purpose of this book is to give a wide review of the theme. However, many of the claims herein have a debatable character. Our goal is not to give definitive answers, but to encourage measured thought in the reader and to prepare the soil for further discussion. Many of the hypotheses stated here might seem unduly radical. However, when discussing them, we were guided by a “precautionary principle” which directs us to consider worst realistic scenarios as a matter of caution. The point is not to fantasize about doom for its own sake, but to give the worst scenarios the attention they deserve from a straightforward utilitarian moral perspective.
In this volume you will find the monograph Risks of Human Extinction. The monograph consists of two parts: a study of concrete threats, and a methodology of analysis. The analysis of concrete threats in the first part consists of a detailed list with references to sources and critical analysis. Then the systemic effects of the interaction of different risks are investigated, and probability estimates are assessed. Finally, we suggest a roadmap for the prevention of existential risks.
The methodology offered in the second part consists of a critical analysis of the ability of humans to intelligently estimate the probability of global risks. It may be used, with little change, in other systematic attempts to assess the probabilities of uncertain future events.
Though only a few books offering general reviews of the problem of global risks have been published in the world, a certain tradition has already formed. It consists of discussion of methodology, classification of possible risks, estimates of their probability, ways of ameliorating those risks, and then a review of further philosophical issues related to global risks, such as the Doomsday argument, which will be introduced shortly. The current main books on global catastrophic risks are: J. Leslie's The End of the World: The Science and Ethics of Human Extinction, 1996; Sir Martin Rees' Our Final Hour, 2003; Richard Posner's Catastrophe: Risk and Response, 2004; and the volume edited by Nick Bostrom, Global Catastrophic Risks, 2008.
This book differs considerably from previous books on the topic of global risks. First of all, we review a broader set of risks than prior works. For example, an article by Eliezer Yudkowsky lists ten cognitive biases affecting our judgment of global risks; in our book, we address more than 100 such biases. In the section devoted to the classification of risks, we mention some risks which are entirely missing from previous works. I have aspired to create a systematic view of the problem of global risks, which allows us not only to list and understand various risks, but also to see how different risks, influencing one another, form an interlocked structure.
I will use the terms “global catastrophe”, “x-risk”, “existential risk” and “human extinction” as synonyms designating the total and irreversible extinction of Homo sapiens.
The main conclusion of the book is that the chance of human extinction in the XXI century is around 50 per cent, but that it could be lowered by an order of magnitude if all needed actions are taken.
All information used in the analysis is taken from the open sources listed in the bibliography.

Part 1. General considerations.

Chapter 1. Types and nature of global catastrophic risks

Main open question about x-risks: before or after 2050?

Robin Hanson has written a great deal about two modes of thinking that people usually display: near mode and far mode. People have very different attitudes toward things that are happening now and things that may happen in the distant future. For example, if there is a fire in the house, everyone will try to escape; but if the question under discussion is whether humanity should live forever, many otherwise nice people will say that they are fine with human extinction.
And even within the discussion of x-risks we can easily distinguish two approaches. There are two main opinions about timing: either the catastrophe will happen in the next 15–30 years, or in the next couple of centuries.
If we take into account the many predictions of continuing exponential or even hyperbolic development of new technologies, we should conclude that superhuman AI and the ability to create super-deadly biological viruses will arrive between 2030 (Vinge) and 2045 (Kurzweil). We write this text in 2014, so that is just 15–30 years from now. Likewise, predictions about runaway global warming, limits of growth, peak oil and some versions of the Doomsday argument all center around the years 2030–2050.
If it is 2030, or even earlier, not much can be done to prepare. Later dates leave more room for possible action. But the main problem is not that the risks are so large; it is that society, governments and research communities are completely unprepared and unwilling to deal with them.
But if we take a hundred-year risk timeframe, we (as authors) gain certain advantages. We signal that we are more respectable and conservative. It will almost never be proved during our lifetimes that we are wrong. We have a better chance of being right simply because the timeframe is larger. We have plenty of time to implement defensive measures – or, in fact, to assume that such measures will be implemented in the remote future (they will not). We may also think that we are correcting an overoptimistic bias: it is well known that predictions about AI used to be overoptimistic.
The difference between the two timeframe predictions is like the difference between two predictions about the future of a man: one claims that he will die within the next 50 years, the other that he will have cancer within the next 5 years. The first brings almost no new information; the second is an urgent message, which could be false and carry high costs. Urgent messages also tend to attract more attention, which could bias sensationalist authors. More scientific authors tend to be more careful, try to distinguish themselves from the sensationalists, and so give predictions over longer time periods.
A good example here is E. Yudkowsky, who in 2001 claimed that superexponential growth with ever-shorter doubling periods was possible, with superintelligent AI by 2005. After this prediction failed, he and his community LessWrong became biased toward an estimate of around 100 years until superintelligent AI.
So the question of whether technologies will continue their exponential growth is equivalent to the question of the timescale of global catastrophe. If they do continue to grow exponentially, then the global catastrophe will either happen in the nearest decades or be permanently prevented.
Let us take a closer look at both scenarios. Arguments for the “decades” scenario:
1. Because of NBIC convergence, advanced nanotech, biotech and AI will appear almost simultaneously.
2. New technologies will grow exponentially with a doubling time of approximately 2 years, and their risks will grow with a similar or even greater speed.
3. They will interact with each other, as any smaller catastrophe could lead to a bigger one. For example, global warming will lead to a fight for resources and nuclear war, and this nuclear war will result in the release of dangerous biological weapons. This may be called “oscillations near the Singularity”.
4. Several possible triggers of x-risks could happen in the near future: world war (and especially a new arms race), peak oil, and runaway global warming.
There are several compelling arguments for the “centuries” scenario:
1. Most predictions about AI have been premature. The majority of “Doomsday” prophecies have also proven false.
2. Exponential growth will level off. Moore's law may come to an end in the near future.
3. The most likely x-risks could be caused by an independent accidental event of unknown origin, and not by a complex interaction of known things.
4. There will be no cascading chain reaction of destructive events.
5. Long-term predictions look more scientific in the public view, thus improving the reputation of the field of x-risks research and ultimately helping to prevent x-risks.
The decades scenario is worse because it is sooner: we have less time to prepare – in fact no time, given how little has been done so far – because its catastrophic scenarios are more complex, and because it shortens expected personal and civilizational lifetimes.
One of the main factors in the timeframe of x-risks is our assessment of when full AI capable of self-improvement will be created. Several articles have tried to address this question from different points of view, using modelling, extrapolation and expert surveys – for example: http://sethbaum.com/ac/2011_AI-Experts.html We will address this question again in the AI chapter, but the fact is that nobody knows for sure, and we should use a very vague prior that covers all the different estimates. Bostrom claims that AI will either be created before the beginning of the 22nd century, or it will be proved that some hard obstacle exists; this vaguest of estimates seems to be true.
We need to search for effective modes of action to prevent x-risks. We need to create social demand for preventing existential risks – like, for example, the fight against nuclear war in the 1980s. We need political parties which consider the prevention of existential risks, as well as life extension, to be the main goals of society.
I would like to draw attention to the investigation of non-scientific social and media factors in discussions of global catastrophe. Much attention is concentrated on scientific research, with popular reception and reaction seen as a secondary factor. However, the value of this secondary factor should not be underestimated, as it can have a huge effect on the way in which a global catastrophe might be met and, eventually, averted. Consider the example of 1980s nuclear disarmament following global antinuclear protests.
The main difference between the two scenarios is that the first would happen during our expected lifetimes, while the second most likely will not touch us personally. The second is therefore hypothetical and not urgent. The border between the two scenarios is around 2050.
Claims that a global catastrophe may happen only after 2050 make it seem an insignificant problem and preclude any real efforts to prevent it.
The human mind and society are built in such a way that few questions about the remote future interest us, with several important exceptions. One is the safety of buildings; another is our interest in the wellbeing and longevity of our children and grandchildren. But these questions have existed for centuries, and our culture has adapted to build strong buildings and invest in children. Global risks are a new problem, and no such adaptation has happened. More about cognition concerning global risks is said in the second part of the book.
The difference between a global catastrophe and existential risk
Any global catastrophe will affect the entire surface of the Earth and all of its inhabitants, though not all of them will perish. From the viewpoint of the personal history of any individual, there is no big difference between a global catastrophe and total extinction – in both cases, he will most likely die. But from the viewpoint of human civilization, the difference is enormous: it will either end forever, or simply transform and continue on a new path.
Bostrom suggested expanding the term “existential risks” to include events that threaten human
civilization with irreversible damage, like, for example, half-hostile artificial intelligence that
evolves in the direction completely opposite to the current values of humanity, or a worldwide
totalitarian government that will forever stop progress.
But perpetual worldwide totalitarianism is impossible: it will either lead to the extinction of civilization,
maybe in several million years, or smoothly evolve into a new form.
However, many other things could still be included under the category of “existential risks” – we have indeed lost many things over the course of history: dead languages, extinct artistic styles, ancient philosophy and so on.
The real dichotomy lies between complete extinction and a “simple” global catastrophe. Extinction means the complete destruction of mankind and the cessation of history. A global catastrophe could destroy 90% of the population, yet only slow the course of history by 100 years.
The difference here, rather, is the value of the continuation of human history and the value of future
generations, which for most people is extremely speculative. This probably presents one of the
reasons for ignoring the risks of complete extinction.

Smaller catastrophes as steps to possible human extinction
Though in this book we investigate global catastrophes, which can lead to human extinction, it
is easy to notice that the same catastrophes on a smaller scale may not destroy mankind, but damage
us greatly.
For example, a global virus pandemic could either completely destroy humanity or kill only part of it. In the second case, the level of civilization would decline to some extent, after which either further extinction or restoration of civilization is possible. Therefore, the same class of catastrophe can be both the direct cause of human extinction and a factor that opens a window of vulnerability to subsequent catastrophes. For example, after a pandemic, civilization would be more vulnerable to the next pandemic or famine, or unable to prevent a collision with an asteroid.
In 2007, Robin Hanson published the article “Catastrophe, Social Collapse, and Human Extinction”, in which he used a simple mathematical model to estimate how variance in resilience changes the probability of extinction in the aftermath of a non-total catastrophe. He wrote: “For many types of
disasters, severity seems to follow a power law distribution. For some of types, such as wars and
earthquakes, most of the expected harm is predicted to occur in extreme events, which kill most
people on Earth. So if we are willing to worry about any war or earthquake, we should worry
especially about extreme versions. If individuals varied little in their resistance to such disruptions,
events a little stronger than extreme ones would eliminate humanity, and our only hope would be to
prevent such events. If individuals vary a lot in their resistance, however, then it may pay to increase
the variance in such resistance, such as by creating special sanctuaries from which the few
remaining humans could rebuild society”.
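Hanson's point can be illustrated with a toy Monte Carlo model. The sketch below is not Hanson's actual model: the power-law exponent, the lognormal spread of individual resistance, the median resistance of 10, and the 100-survivor viability threshold are all illustrative assumptions.

```python
import math
import random

def extinction_probability(sigma, trials=200_000, population=7e9,
                           viable_minimum=100, median_resistance=10.0):
    """Monte Carlo estimate of P(extinction) when disaster severity follows
    a power law and individual resistance is lognormally distributed."""
    extinctions = 0
    for _ in range(trials):
        severity = random.paretovariate(1.0)  # power-law tail: P(S > s) = 1/s
        if sigma == 0:
            # Everyone has identical resistance: any disaster stronger than
            # the common threshold kills literally everyone.
            alive_fraction = 1.0 if severity <= median_resistance else 0.0
        else:
            # Fraction of people whose lognormal resistance exceeds severity.
            z = (math.log(severity) - math.log(median_resistance)) / sigma
            alive_fraction = 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        if alive_fraction * population < viable_minimum:
            extinctions += 1
    return extinctions / trials

# Wider spread of resistance -> far fewer disasters kill everyone.
for sigma in (0.0, 0.5, 1.0, 2.0):
    print(f"sigma = {sigma}: P(extinction) ~ {extinction_probability(sigma):.5f}")
```

Increasing the spread of resistance makes the tail of survivors heavier, which is exactly Hanson's argument for sanctuaries: they raise the variance of resistance.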
Depending on the size of the catastrophe, there can be various degrees of damage, characterized by different probabilities of subsequent extinction and of further recovery. It is possible to imagine several semi-stable levels of degradation:
1. “New Middle Ages”: destruction of the social system, similar to the downfall of the Roman Empire but on a global scale. It means the termination of the development of technologies, reduced connectivity and a population decline of several percent; however, some essential technologies continue to develop successfully, just as some kinds of agriculture continued to develop in the early Middle Ages. Technological development continues; the manufacture and use of dangerous weaponry also continues, which could lead to total extinction or to degradation to an even lower level during the next war. The degradation of Easter Island during internal warfare is a clear example. But a return to the previous level is rather probable.
2. “Postapocalyptic world”: considerable degradation of the economy, loss of statehood and disintegration of society into small warring fiefdoms. To some extent, present-day Somalia is an example. The basic forms of activity are robbery and scavenging in ruins. This is probably what a society after a large-scale nuclear war would look like. The population is reduced many times over, but, nevertheless, millions of people survive. Reproduction of technologies stops, but separate carriers of knowledge and libraries remain. Such a world could be united in the hands of one ruler, and the revival of the state would begin. Further degradation could occur as a result of epidemics, pollution of the environment, excess weaponry stored from the previous epoch, etc.
3. “Survivors”: a catastrophe in which only separate small groups of people survive, unconnected to each other: polar explorers, crews of ships at sea, inhabitants of bunkers. On the one hand, small groups are in a more favorable position than in the previous case, as there is no struggle between groups of people. On the other hand, the forces that led to a catastrophe of such scale are very great, most likely continue to operate, and limit the freedom of movement of people from the surviving groups. These groups will be compelled to struggle for their lives. The restoration period, under the most favorable circumstances, will take hundreds of years and will involve a change of generations, with loss of knowledge and skills. The ability to sustain sexual reproduction will be the basis of the survival of such groups. Hanson estimated that a surviving group needs around 100 healthy individuals; this count does not include the injured, the elderly, and parentless young children, who might survive the initial catastrophe but would not contribute to the future survival of the group.
4. “Last man on Earth”: only a few people survive on the Earth, but they are capable neither of preserving knowledge nor of giving rise to a new mankind. In this case, people are most likely doomed.
5. “Bunker people”: it is also possible to designate a “bunker” level – a level at which only those people survive who are outside the usual environment. Groups of people would survive in certain closed spaces, whether by accident or by design. A conscious transition to the bunker level is possible even without loss of quality – if mankind keeps the ability to further develop technologies.
Intermediate scenarios of the post-apocalyptic world are also possible, but I believe that the variants listed are the most typical. Each step down means a bigger chance of falling even lower and a smaller chance of rising. On the other hand, an island of long-term stability is possible at the level of separate tribal communities, once dangerous technologies have already collapsed, the dangerous consequences of their application have disappeared, and new technologies are not yet created and cannot be created – just as different species of Homo lived in tribes for millions of years before the Neolithic. But that stability rested on a lower level of intelligence as a stabilizing factor, and on weaker Darwinian pressure to increase intelligence again.
It is thus incorrect to think that degradation is simply a rewinding of historical time by a century or a millennium – for example, to the level of society of the XIX or XV centuries. Degradation of technologies will be neither linear nor simultaneous. For example, it will be difficult to forget such a thing as the Kalashnikov rifle: in Afghanistan, locals have learned to make rough copies of Kalashnikovs. But in a society where automatic weapons exist, knightly tournaments and cavalry armies are impossible. What was a stable equilibrium on the way from the past to the future may not be an equilibrium condition on the path of degradation. In other words, if technologies of destruction degrade more slowly than technologies of creation, society is doomed to a continuous slide downwards.
However, we can also classify the degree of degradation not by the number of victims, but by the degree of loss of knowledge and technologies. In this sense it is possible to use historical analogies, understanding, however, that the loss of technologies will not be linear. Maintaining social stability at a lower level of evolution demands a smaller number of people, and such a level is more stable against both progress and decline. Such communities can arise only after a long period of stabilization following the catastrophe.
As to “chronology”, the following base variants of regress into the past (partly similar to the previous classification) are possible:
1. Industrial production level: railways, coal, firearms, etc. Self-maintenance at this level probably demands tens of millions of humans. In this case it is possible to expect the preservation of all basic knowledge and skills of an industrial society, at least by means of books.
2. A level sufficient for the maintenance of agriculture. This demands, probably, from thousands to millions of people.
3. The level of a small group, like a hunter-gatherer society.
We could also measure smaller catastrophes by the time by which they delay technical progress, and by the probability that humanity will recover at all.
Technological Level of Catastrophe and "Periodic System" of possible disasters
Possible disasters can be classified in terms of the contribution of humans, and then of technology, to their causes. These disasters can also be labeled by the period of history in which they are most typical. One can also give a total estimate of the probability of each type of disaster for the 21st century; it then turns out that the more technological a disaster is, the higher its probability. In addition, disasters can be classified according to their possible causes, in the sense of how they cause the death of people (explosion, replication, poisoning or infectious disease). On this basis I made a sort of “periodic system” outlining the possible global risks – a large map available on the site “immortality-roadmap.com”, along with a map of the ways to prevent global risks and a map of how to achieve immortality.
1) Natural. These are disasters which have nothing to do with human activity and can occur on their own. They include falling asteroids, supernova explosions, and so on. Their likelihood is relatively small – on the order of one event in tens of millions of years – but perhaps we seriously underestimate them because of the effects of observational selection.
2) Anthropogenic. These are natural processes triggered by human activity. The first examples are global warming and resource depletion. There are also more exotic options, such as the artificial awakening of a supervolcano using atomic bombs. The main point is that a human action triggers a natural process.
3) Risks from existing technologies. These are primarily connected with atomic weapons, as well as conventional biological weapons based on already existing agents: influenza, smallpox, anthrax.
4) Risks from expected breakthrough technologies. These are, first of all, nanotech and biotech: the creation of microscopic robots, and the creation of entirely new biological organisms through genetic engineering and synthetic biology. They can be called super-technologies, because in the end they will give power over dead and living matter.
5) Artificial intelligence of superhuman level, as the ultimate technology. Although it may be created in about the same period as nanotech, due to its potential ability to self-improve it will be able to transcend or create any other technology.
6) Space-related and hypothetical risks that we will face in the distant future, as we expand into the universe.
Various possible disasters differ in their destructive power: some could be withstood relatively easily, others practically not at all. It is more difficult to resist disasters that have greater speed, more power, more penetrating ability and, most importantly, an agent with greater intelligence – and also those that are more probable and more sudden, which makes them difficult to predict and hence to prepare for.
Therefore, man-made disasters are more dangerous than natural ones, and the more technological they are, the more dangerous. At the top of the pyramid of disasters sit the super-technology disasters, and their king is hostile artificial intelligence: the catastrophe at the top of the pyramid is both the most likely and the most destructive.

One-factorial scenarios of global catastrophe
In several following chapters we will consider the classic point of view on global catastrophes,
which consists of individual disasters, each of which might lead to the destruction of all mankind.
Clearly this description is incomplete, because it does not consider multifactorial and long-decline
scenarios of global catastrophe. A classic example of the consideration of one-factorial scenarios is
covered in Nick Bostrom's article “Existential risks”.
Here we will also consider some sources of global risks which, from the point of view of the author, are not real global risks, but whose danger is exaggerated in the public view; we will examine them nonetheless. In other words, we will consider all events which are usually counted as global risks, even if we later reject them.
The next step is studying double-factor scenarios of existential risk, as Seth Baum recently did in his article “Double Catastrophe: Intermittent Stratospheric Geoengineering Induced By Societal Collapse” (http://sethbaum.com/ac/2013_DoubleCatastrophe.pdf). In it he studied a hypothetical situation in which an anti-global-warming geoengineering program is interrupted by societal collapse, leading to a rapid rise in global temperatures.
There are many possible pairs of such double risks, and from such pairs “chains” could be built. For example: nuclear war – accidental release of bioweapons.

Principles of classification of global risks
The method of classification of global risks is extremely important because, like the periodic table of elements, it allows us to find empty spots and to predict the existence of new threats. It also opens the possibility of understanding our own methodology and offering principles by which new risks may be assessed.
The most obvious approach to establishing possible sources of global risks is the historiographical method. It consists in the analysis of all accessible scientific literature on the theme – first of all, the survey works on global risks already conducted. One could also scan existing hard science fiction for interesting ideas about human extinction.
The method of extrapolation of small catastrophes consists in finding small types of catastrophes and analyzing whether a similar event could occur on a much larger scale. For example, seeing small meteors fall on Earth, we could ask: is it possible that a large enough asteroid would wipe out all life on Earth?
The method of upgrading global events is based on the idea that any global catastrophe must have global reach. So we should consider any event that could influence the whole surface of the Earth and ask whether it could become deadly. For example: could a sufficiently large tidal wave in the oceans destroy all life on the continents if a heavy celestial body flew nearby?
The method of upgrading causes of death consists in considering one particular way a human can die and asking whether this cause of death could become global. For example, some people die of allergic shock: is it possible that some bioengineered pollen could cause a global deadly allergy? Probably not.
The method of extrapolating causes of extinction from other species: 99 per cent of all species that ever existed have vanished, and many continue to do so. They go extinct for different reasons, and by analyzing these reasons we can hypothesize that our species could go the same way. For example, in the early XX century the dominant cultivar of the food banana was destroyed by a fungal disease, and we now eat a different cultivar; fungi are also known to wipe out other species completely. So is it possible to genetically engineer a human-killing fungus? I do not know the answer to this question.
The paleontological method consists in the analysis of previous mass extinctions in Earth's history, such as the Permian-Triassic extinction, which wiped out up to 96% of species on Earth. Could it happen again, with greater force?
Finally, the method of the “devil's advocate” consists in the deliberate design of extinction scenarios, as though our purpose were to destroy the Earth.
Each extinction risk can be described by the following criteria: whether it is anthropogenic or natural; its total probability; whether the needed technologies already exist; how far away from us it is in time; and how we could defend ourselves against it.

Natural risks can happen to any species of living beings. Technological risks are not quite identical to anthropogenic risks: overpopulation and the exhaustion of resources are anthropogenic but not technological. The defining feature of technological risks is that they are unique to a technological civilization.
There is also a division into proved and unproved existential risks:
Events which we considered as possible x-risks and concluded are not.
Events about which we cannot yet say definitely whether they are risky or not.
Events about which we have a good scientific basis to say that they are risky.
The biggest category, however, consists of events which may be risky based on some consideration that seems true but cannot be proved as a matter of fact, and which are therefore questioned by many skeptics.
It is possible to distinguish four categories of technological risks:
Risks for which the technology is completely developed or demands only slight improvement. This includes nuclear warfare.
Risks whose technologies are “under construction”, with no visible theoretical obstacles to their development in the foreseeable future (e.g. biotechnology).
Risks whose technologies are consistent with known laws of physics, but for which large practical and theoretical obstacles must be overcome – that is, nanotechnology and AI.
Risks which demand new physical discoveries for their appearance. A great part of the global risk of the XX century came from what was then an essentially new and unexpected discovery: nuclear weapons.
The list of global risks in the following chapters is sorted by the degree of readiness of the technologies necessary for them.

Precautionary principle for existential risks
The classical interpretation of the precautionary principle concerns who must prove safety: “The precautionary principle or precautionary approach states that if an action or policy has a suspected risk of causing harm to the public or to the environment, in the absence of scientific consensus that the action or policy is not harmful, the burden of proof that it is not harmful falls on those taking an action” (Wikipedia).

In our reasoning we will use our own version of the precautionary principle: something is an existential risk until the contrary is proved. “Something” here is a hypothesis or a crazy idea. And the burden of proof lies not on the one who suggests the crazy idea, but on the specialist in existential risks. For example, if someone asks whether the Earth's core could explode, we should calculate all possible scenarios in which it might happen and estimate their probabilities.

X-risks and other human values
To prevent x-risks, we need to value x-risk prevention. This value probably consists of the value of the lives of all people living now, plus the value of future generations, plus the value of the whole of human history.
If we value only people who live now, we should concentrate on preventing their death and aging.
The main thing that makes x-risks unique is the value of future generations. Yet most people assign very little value to future generations – especially as a practical value expressed in real actions, rather than a claimed value – and this translates into marginal interest in remote x-risks. Perhaps we should forge a stronger connection between existing values and x-risks if we want real action against them.
People were afraid of nuclear catastrophe because it was able to affect them at that very moment; so they thought about it in “near mode” and were ready to protest against it. People also assign a lot of value to their children and grandchildren, but almost zero value to the sixth generation after them. One article was titled “Let's prevent AI from killing our children”, and this seems a good way to connect to existing values.
Global catastrophic risks, human extinction risks and existential risks

Global catastrophic risks (GCR) have been defined as risks of a catastrophe in which 90 per cent of humanity would die. The victims would most likely include me and the reader. From a personal point of view there is not much difference between human extinction and a global catastrophe; the main difference lies in the future of humanity.
The main question is with what probability a GCR would result in human extinction. The remaining 700 million people is still a lot, but a scavenging economy, remaining nukes and other weapons, worldwide guerrilla war, epidemics, AIDS, global warming and the depletion of easily accessible resources could result in constant decline and even in human extinction. I will address this question later, but I think it is safe to estimate that a GCR carries something like a 1 per cent chance of resulting in human extinction. This equivalence helps us unite two fields of research, one of which is much more established, while the other is more important.
Phil Torres has rightfully criticized Bostrom's term “existential risks” (“Problems with Defining an Existential Risk”, http://ieet.org/index.php/IEET/more/torres20150121). The term is not self-evident, and it lumps together human extinction and the mere failure to realize humanity's full potential. Humanity could live billions of years and colonize the Galaxy and still not reach its whole potential – maybe because another civilization colonizes some other galaxy. Most human beings live full lives, but how could we say that a random person has reached his full potential? Maybe he was born to be Napoleon? Even Napoleon did not get what he wanted in the end.
Nevertheless, the term “existential risks” has won out, and is often shortened to “x-risks”. Bostrom's classification of four types of x-risks has not become popular.
He suggests the following classification:
1. Bangs – abrupt catastrophes which result in human extinction.
2. Crunches – slow arrest of development: resource depletion, totalitarian government. This is not extinction, and may not even be very bad for the people living through it, but it is an unstable configuration which must eventually resolve into either extinction or supercivilization. The lower the level of the equilibrium, the longer a civilization can exist on it, that is, the more stable it is: Paleolithic people could have lived for millions of years.
3. Shrieks – not extinction, but the replacement of humanity by some other, more powerful, non-human agent: either AI or some kind of posthumans.
4. Whimpers – “A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved.” These are mostly catastrophes that could happen to an advanced supercivilization. Bostrom suggests two examples: war with aliens and the erosion of our core values.
We could also add context risks: different situations in the world imply different chances of a global catastrophe. A cold war results in an arms race and risks of a hot war; apocalyptic ideologies raise the probability of existential terrorism. We could add risks that change the speed of technological progress. Perhaps we should also add a category of risks that change our ability to prevent x-risks. I am not in a position to make recommendations that are likely to be implemented, but it might be safe to create a couple more centers of x-risk research – or not, if that results in rivalry and incompatible safety solutions.

Chapter 2. Problems of probability calculation

A human extinction catastrophe is a one-time event which will have no observers (as long as anyone is alive, it is not such a catastrophe).
For this reason the traditional use of the term “probability” in relation to the likelihood of global risks is rather problematic, no matter whether we understand probability in statistical terms, as a proportion, or in Bayesian terms, as a measure of the uncertainty of our knowledge.
If the catastrophe starts to happen, we cannot distinguish whether it is a very rare type of catastrophe or an inevitable one.
The concept of probability has undergone a long evolution, and it has two directions: objectivist, where probability is considered the fraction of events of a certain set, and subjectivist, where probability is considered a measure of our ignorance. Both approaches are applicable to defining the probability of global catastrophe, but with certain amendments.
The question “What is the probability of global catastrophe?” is raised regularly, but the answer depends on what kind of probability is meant. I propose a list of different definitions of probability and notions of x-risk probability under which the answer makes sense.
1. “The Fermi probability” (so named by me in honor of the Fermi paradox). I suggest this term for the probability that a certain share of technological civilizations in the Universe die for one specific reason; it is defined as their share of the total number of technological civilizations. This quantity is unknown and unlikely to be known objectively until we survey the entire Galaxy, so it can only be the object of subjective assumptions. Obviously, some civilizations will make very big efforts at risk prevention and others smaller ones, but the Fermi probability also reflects the total effectiveness of prevention – I mean the chances that preventive measures will be applied and will be successful. I call it the Fermi probability because knowing this probability distribution could help answer the Fermi paradox.
2. Objective probability: if we lived in a multiverse, this would be the share of all versions of the future Earth in which a global catastrophe of a certain type occurs. In principle it should be close to the Fermi probability, but it would differ from it owing to any special features of the Earth, whether existing or created by us.
3. Conditional probability of a certain type of catastrophe: the probability of that catastrophe, provided that no other global catastrophe happens first (e.g., the chance of an asteroid impact within the next million years). It contrasts with the probability that this specific type of catastrophe, rather than any other, will be the one that happens.
4. Minimum probability: the probability that a disaster happens anyway, even if we undertake all possible efforts to prevent it. The maximum probability of an x-risk is its probability if nothing is done to prevent it. I think these probabilities differ on average by a factor of 10 for global risks, but a better assessment is needed, perhaps based on analogies.
5. The total probability of global catastrophe versus the probability of extinction from one particular cause. Many scenarios of global catastrophe include several causes: for example, one in which most of the population dies as a result of a nuclear war, the rest are severely affected by a multipandemic, and the last survivors on a remote island die of hunger. What is the main cause in this case – hunger, biotech or nuclear war – or the dangerous combination of the three, none of which is so deadly independently?
6. Assigned probability: the probability we must ascribe to a particular risk in order to protect ourselves from it in the best way, without overspending resources that could be directed to other risks. This is like a stake in a game, with the only difference being that the game is played once. Here we are talking about order-of-magnitude estimates, which are needed to plan our actions properly. It is also a replacement for the Bayesian probability of existential risk, which cannot be calculated without some subjectivism. The Torino scale of asteroid risk is a good example here.
7. Expected survival time of the civilization. Although we cannot measure the probability itself of a global catastrophe of some type, we can transform it into an expected lifetime. The expected lifetime incorporates our knowledge of future changes in the probability of a disaster – whether it will grow exponentially or decrease smoothly as our knowledge of how to prevent it increases.
8. Yearly probability density. For example, if the probability of a certain event is 1 per cent per year, over 100 years the cumulative probability is about 63 per cent, and over a 1000-year period more than 99.9 per cent. A constant yearly probability density implies that the total probability of the event approaches 1 exponentially, as in radioactive decay (see the sketch after this list).
9. Exponentially growing probability density of total x-risk. This can be associated with the exponential growth of new technologies under Moore's law, and it makes the total probability of catastrophe an exponent of an exponent, i.e. growing very quickly. It could go from near 0 to almost 1 in just 2–3 doublings of technology under Moore's law, or in 4–6 years at the current pace of technological development. This means that the “period of high catastrophic risk” would last around 6 years, and some smaller catastrophes would probably also happen during it.
10. A posteriori probability: the probability of a global catastrophe which we estimate after it has failed to happen – for example, of all-out nuclear war in the XX century (if we assume that it was an existential risk). Such an assessment of probability is greatly distorted by observational selection toward understatement.
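The arithmetic behind points 8 and 9 is easy to check directly. Below is a minimal sketch; the 1% annual density and the 2-year doubling time are simply the illustrative numbers used in the text.

```python
import math

def cumulative_probability(annual_p, years):
    """Probability that an event with a constant annual probability
    happens at least once within the given number of years."""
    return 1.0 - (1.0 - annual_p) ** years

# Point 8: a constant 1% yearly probability density accumulates quickly.
print(cumulative_probability(0.01, 100))    # ~0.634 (the ~63% in the text)
print(cumulative_probability(0.01, 1000))   # ~0.99996 (the text's >99.9%)

# Point 9: if the annual density itself doubles every 2 years, total risk
# climbs from near 0 to near 1 within a few doublings.
def cumulative_growing_density(p0, years, doubling_time=2.0):
    survive = 1.0
    for year in range(years):
        annual = min(1.0, p0 * 2.0 ** (year / doubling_time))
        survive *= 1.0 - annual
    return 1.0 - survive

for years in (4, 8, 12, 16):
    print(years, round(cumulative_growing_density(0.01, years), 3))
```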
We will return to the question of the probability of x-risks at the end of the book.

Problems of calculation of probabilities of various scenarios
The picture of global risks and their interaction with each other inspires a natural desire to
calculate exact probabilities of different scenarios. However, it is obvious that giving exact answers
is impossible; the probabilities merely represent our guesses, and may be updated with further
information. Even though our probabilities are not perfect, refusing to make any estimate is not
helpful. It is important to interpret and apply the probabilities we derive in a reasonable way. For
example, say we determine, based on some model, that the probability of appearance of dangerous
unfriendly AI is 14% over the next 30 years. How can we use this information? Will our actions
differ if the estimate were 15%? Exact probabilities matter only insofar as they could change our actions.
Further, such calculations should consider the time sequence of different risks. For example, if risk A has a probability of 50% in the first half of the XXI century, and risk B of 50% in the second half, our real chance of dying from risk B is only 25%, because in half of the cases we will not survive long enough to face it (see the sketch below).
Finally, for different risks we want to obtain an annual probability density. Here the formula of continuously compounding percentages should be applied, as in the case of radioactive decay. It means that any risk stated for some time interval can be normalized to a “half-life period”, that is, to the time over which it would imply a 50% probability of extinction of civilization.
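Both calculations fit in a few lines. This is a sketch using the 50/50 example above; the half-life normalization assumes a constant hazard rate within the stated interval.

```python
import math

# Time sequence of risks: risk B only matters if we survive risk A first.
p_a, p_b = 0.5, 0.5
print((1 - p_a) * p_b)          # 0.25 - the real chance of dying from B

def half_life(total_p, interval_years):
    """Normalize a risk stated as 'probability P over T years' to the time
    at which the cumulative probability reaches 50%, assuming a constant
    hazard rate (continuous compounding, as in radioactive decay)."""
    hazard_rate = -math.log(1.0 - total_p) / interval_years
    return math.log(2.0) / hazard_rate

print(half_life(0.5, 100))      # 100.0 years: 50% per century by definition
print(half_life(0.1, 100))      # ~658 years for a 10%-per-century risk
```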
In our methodology part we will discuss approximately 150 different cognitive biases, which
can perturb the rational evaluation of risks. Even if the contribution of each bias error is no more
than one percent, it adds up. When people undertake a project for the first time, they usually
underestimate the riskiness of the project by a factor of as much as 40–100. This was apparent in the
examples of Chernobyl and the Challenger. (Namely, the Space Shuttle had been calculated for one
failure every 1000 flights, but exploded on the 25th flight. In his paper on cognitive biases with
respect to global risks, Yudkowsky highlights that a safety estimate of 1 in 25 would be more
correct, 40 times greater than the initial estimate; the Chernobyl reactors were calculated to undergo
one failure every one million years, but the first large scale failure occurred after less than 10,000
station-years of operation, that is, a safety estimate 100 times lower would have been more precise.)

So there are serious biases to consider, which means we should greatly expand our default confidence intervals to arrive at more realistic estimates. A confidence interval is a range of probabilities for some risk; for example, nuclear war may have a probability interval of 0.5-2% per year. By how much should we expand our confidence intervals?
For decision-making we need to know the order of magnitude of a risk, not its exact value.
Let's assume that the probability of a global catastrophe can be estimated, at best, to within an order of magnitude (and that the accuracy of such an estimate is plus or minus an order of magnitude), and that such an estimate is enough to determine the necessity of further attentive research and monitoring of the problem. Similar examples of risk scales are the Torino and Palermo scales of asteroid risk.
The eleven-point (from 0 to 10) Torino scale of asteroid danger characterizes the degree of potential danger of an Earth-threatening asteroid or comet. A rating on the Torino scale is assigned to a small Solar system body at the moment of its discovery, depending on the body's mass, speed, and probability of collision with the Earth. As the orbit of the object is studied further, its Torino rating can be updated. Zero means an absence of threat; ten indicates a probability of more than 99% of a collision with a body more than 1 km in diameter. The Palermo scale differs from the Torino scale in that it also considers the time remaining before the potential impact: less time means a higher score.
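As a hedged illustration of how the Palermo scale works, the published formula (Chesley et al. 2002) compares an asteroid's impact probability with the background rate of impacts of at least the same energy, f_B = 0.03 * E^(-4/5) impacts per year with E in megatons of TNT; the example values below are invented.

```python
import math

def palermo_scale(impact_prob, energy_megatons, years_until_impact):
    """Palermo Technical Impact Hazard Scale: log10 of the ratio between the
    impact probability and the background probability of an equally energetic
    impact accumulated over the time remaining until the possible event."""
    background_rate = 0.03 * energy_megatons ** (-0.8)
    return math.log10(impact_prob / (background_rate * years_until_impact))

# An invented example: 1000-megaton impactor, 1-in-a-million chance, 50 years out.
print(palermo_scale(1e-6, 1000.0, 50.0))   # about -3.8: far below background
```

A score of 0 means the event is exactly as likely as the background hazard; positive scores mean it deserves serious attention.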

Quantitative estimates of the probability of the global catastrophe, given by
various authors
The estimates of extinction risk by leading experts are not far from one another. Of course there could be some bias in the selection of experts, but these experts form a group which deliberately studies the risks of human extinction from all possible causes:

J. Leslie, 1996, "The End of the World": 30% over the next 500 years taking into account the Doomsday Argument; without it, 5%.

N. Bostrom, 2002, in «Existential risks. The analysis of scenarios of human extinction
and similar dangers» wrote: «my subjective opinion is that setting this probability
lower than 25% would be misguided, and the best estimate may be considerably
higher… in the next two centuries».

Sir Martin Rees, 2003, «Our final hour»: 50% in the XXI century.


In 2009 a book by Willard Wells, "Apocalypse When?" [Wells 2009], was published, devoted to the mathematical analysis of a possible global catastrophe, mostly based on the Doomsday argument in Gott's version. Its conclusion is that the probability is about 3% per decade, which compounds to roughly 26% per century.
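The per-decade to per-century conversion is simple compounding, assuming the 3% risk applies independently in each decade:

```python
p_decade = 0.03
p_century = 1 - (1 - p_decade) ** 10   # probability over ten decades
print(round(p_century, 3))             # 0.263, i.e. roughly 26%
```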

On the other hand, in the days of the Cold War some estimates of the probability of extinction were even higher.
The researcher of the problem of extraterrestrial civilizations Horner attributed a 65% chance to the "hypothesis of self-liquidation of the psychozoic".
Von Neumann considered that nuclear war was inevitable and that everyone would die in it.
During the 2008 conference "Global Catastrophic Risks" a survey of experts was conducted, which estimated the total risk of human extinction before 2100 at 19 percent (http://www.fhi.ox.ac.uk/gcr-report.pdf). The results of the survey are presented in the table:

[Table: expert estimates of extinction risks from the 2008 Global Catastrophic Risks survey]

The model of the future
The model of global risks depends on the underlying model of the future. A model of the future should answer what the main driving force of the historical process is, and how this process will develop in the near-term, medium-term and long-term perspective.
This book is based on the idea that the main driving force is the self-sustained evolution of technologies and (as I will show later) that the speed of this evolution is hyperbolic.
Different theories of the future predict different global risks. For example, the Club of Rome model states that the driving force of evolution is the interaction of five forces, of which the most important is the exhaustion of natural resources. This predicts a sinusoidal graph of future evolution, where overpopulation and ecological crisis are the main expected catastrophes.
If we take the religious model of the world, it is driven by God's will and its end is the Apocalypse. Taleb's "black swan" world model emphasizes unknown risks as the most important.
There are also only two possible final outcomes of our civilization's evolution: extinction, or becoming a supercivilization controlled by AI and practicing interstellar travel.
There are many more world models, and they are presented in the following table. Models are not reality, and they may or may not be useful.
Many of these models are based on empirical data, so how should we choose the right one?
Main principle: the strongest process wins. A large wave will obliterate smaller waves.
In this case the hyperbolic model should win. But the main counterargument to it is its too extreme predictions, which will begin to diverge from the other models very soon. It predicts a singularity in 2030, and the extreme acceleration of events should be starting now. (We could use the ad hoc idea that the singularity will be postponed for 20 years or so because of the growing resistance of the material. It is like a collapsing star, where the collapse stops several times because the compressing and heating matter produces enough pressure to temporarily postpone contraction, until this matter cools somehow and the pressure drops.)
Meta-model: as we are uncertain about which of the models is correct, we could try a Bayesian approach: choose several of the most promising models and update our estimates of them based on new facts.
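A minimal sketch of such a Bayesian meta-model (the model weights and likelihoods below are purely illustrative assumptions):

```python
# Prior weights over candidate future models (assumed numbers).
priors = {"hyperbolic": 0.4, "exponential": 0.3, "wave": 0.2, "black_swan": 0.1}

# Likelihood of an observed fact (say, another decade of smooth growth)
# under each model: again, invented values for illustration.
likelihoods = {"hyperbolic": 0.6, "exponential": 0.7, "wave": 0.3, "black_swan": 0.5}

evidence = sum(priors[m] * likelihoods[m] for m in priors)
posteriors = {m: priors[m] * likelihoods[m] / evidence for m in priors}
print(posteriors)   # updated weights; repeat the update as new facts arrive
```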
United models: for example, a spiral-hyperbolic model. Some models may be paired with one another.


The precautionary principle recommends that we do the same: as we can't know for sure which future model is correct, and different models imply different global risks, we should use several of the most plausible models. (We can't use all models, as we would get too much noise in the result.)
In our case the hyperbolic, exponential, wave and black swan models are the most plausible.
To assess the models we can use several characteristics:

1) Refutability – some models make early predictions, and we can check these predictions to see whether the model works. This is like Popper's criterion. Models that cannot be falsified are weaker.

2) Completeness – the model should take into account all known strong historical trends.

3) Evidence base – the model should "predict the past", that is, be built on a large empirical base.

4) Support – the model should have support from different futurists, and it is better if they came to it independently.

5) We should distinguish predictive models from planning (normative) models. Planning models tend to turn into wishful thinking; planning must itself be based on some model of the future.

6) Complexity – if the model is too complex, or its predictions are too sharp, it is probably false, as we live in a very uncertain world.

7) Too strong predictions for the near future contradict Copernican mediocrity: if we are randomly chosen from the time when the model works, we should find ourselves somewhere in its middle. For example, if I predict that nuclear war will happen tomorrow, I am betting against the fact that it hasn't happened for 70 years, so its a priori probability of happening tomorrow is very small.

---- The same text in thesis mode ----

TL;DR: Many models of the future exist. Several are relevant.
Hyperbolic model is strongest, but too strange.
Wall of text: off
Our need: correct model of the future
Different people: different models = no communication.
Assumptions:
Model of the future = main driving force of historical process + graph of changes
Model of the future determines global risks

The map
The map lists all main future models.
Structure: from fast-growth to slow-growth models.

How to estimate the validity of future models?
Refutability – check early predictions. Popper's criterion.
Falsification.
Completeness – Include: all known strong historic trends.
Evidence base – “predict past” + built on large empirical base.
Support – Who: thinkers and futurists. How: independently
Planning (normative) models. No wishful thinking. Planning
must be based on some model of future itself.
Complexity. Too complex = false. Too exact = false.
Too strong predictions for the near future contradict Copernican mediocrity. Because if we are randomly chosen from the time when the model works, we should be somewhere in the middle of it. For example, if I predict that nuclear war will happen tomorrow, I bet against the fact that it didn't happen for 70 years, and its a priori probability of happening tomorrow is very small.
Main principle: strongest process wins.
Metaphor: Large wave will obliterate smaller waves.
Conclusion:
Hyperbolic model should win. It is strongest process.
But: Too extreme predictions.
Like: Singularity in 2030
Ad hoc repair: growing inertia of process results in postponed
Singularity (20 years).
Analogy: Stellar collapse into black hole. Overheated matter delays
collapse by pressure. Not for long.


Combining models

1. Meta model: Bayesian approach, many models.
How: Update estimates of models based on evidence

2. United models
Example: spiral-hyperbolic model.
Some models may be paired with one another.

3. The precautionary principle: use several models in risk assessment.
Criteria: Only plausible models.
In case of global risks: hyperbolic, exponential, wave and black swan
models

Other methods of prediction:
Extrapolation
Analogy
Statistical
Induction
Trends

Poll of experts, foresight
Prediction markets
Debiasing

Chapter 3. History of the research on global risk

Ancient and medieval ideas about doomsday held that it would happen by the will of God or as a result of a war between demons and gods (Armageddon, Ragnarok).
The first scientific ideas of the end of the world and the human race appeared in the XIX century. At the beginning of the XIX century Malthus created the idea of overpopulation; though it was not directly related to complete human extinction, it became the basis of the future "limits of growth" ideas. Lord Kelvin in the 1850s suggested the possibility of the "thermal death" of the Universe, that is, thermal equilibrium and the stopping of all processes. The most popular idea was that life on Earth would vanish after the dimming of the Sun. Now we know that the Sun grows brighter as it grows older; also, the 19th-century timescale for these events was quite different, on the order of several million years, as the source of the energy of stars was not yet known. As the idea of space travel had not yet appeared, the death of life on Earth was equal to the death of humanity. All these scenarios had in common that the natural end of the world would be a slow and remote process, something like freezing, which humans could neither stop nor cause.
In the first half of the XX century we find descriptions of grandiose natural disasters in science fiction, for example in the works of H.G. Wells ("The War of the Worlds") and Sir Arthur Conan Doyle ("The Poison Belt"): collisions with giant meteors, poisoning by comet gases, genetic degradation. During the great influenza pandemic of 1918 one physician remarked that if things continued this way, humanity would be finished in several weeks.
The history of the modern scientific study of global risks dates back to 1945. Before the first atomic bomb test in the United States there were worries that it could lead to a chain fusion reaction of the nitrogen in Earth's atmosphere. In order to assess this risk a commission was established, headed by the physicist Arthur Compton. The resulting report, LA-602 "Ignition of the Atmosphere with Nuclear Bombs" [LA-602 1945], was recently declassified and is now available to everyone on the Internet in the form of poorly scanned typewritten pages. It shows that, due to the scattering of photons on electrons, the latter will be cooled (because their energy is greater than that of the photons), so the radiation will not heat but cool the reaction region. Thus, as the area of the reaction increases, the process will not be able to become self-sustaining. This ensured the impossibility of a chain reaction in atmospheric nitrogen. However, at the end of the text it is mentioned that not all factors were taken into account, for instance the effect of the water vapor contained in the atmosphere.
Because it was a secret report, it was not intended to convince the public, which distinguishes it favorably from recent reports on the safety of the collider; its target audience was the decision makers. Compton told them that the chance that a chain reaction would start in the atmosphere was 3 per million. In the 1970s a journalistic investigation found that Compton had taken this figure "out of his head", because he found it compelling enough for the president: the report itself contains no probability estimates [Kent 2004]. Compton believed that a realistic assessment of the probability of disaster was not important, because if the Americans repudiated the bomb tests, the Germans or other hostile countries would carry them out anyway.
In 1979 an article by Weaver and Wood [Weaver, Wood 1979] on thermonuclear detonation in the atmosphere and oceans was published, which shows that conditions suitable for a self-sustaining thermonuclear reaction do not exist on our planet (but that they are possible on other planets with a high enough concentration of deuterium, for which the authors provide minimum requirements).
The next important step in the history of global risk research was the realization of not just the possibility of an accidental global catastrophe, but also the technical possibility of the deliberate destruction of humanity. It became clear after Leo Szilard's proposal of the cobalt bomb [Smith 2007]. During a 1950 radio debate with Hans Bethe about the possible threat to life on Earth from nuclear weapons, he proposed a new type of bomb: a hydrogen bomb (which did not yet physically exist at that time, though the possibility of creating one was being widely discussed), wrapped in a shell of cobalt-59, which the explosion would convert into cobalt-60. This highly radioactive isotope, with a half-life of about 5 years, could make an entire continent or the whole Earth uninhabitable, if the bomb were large enough. After this declaration the Atomic Energy Commission decided to conduct an investigation in order to prove that such a bomb was impossible. However, the scientist it hired showed that if the mass of the bomb were 200,000 tonnes (i.e. something like 20 modern nuclear reactors, and thus theoretically achievable), it would be enough to destroy all highly organized life on Earth. Such a device would inevitably be stationary, so it could be used only as a doomsday weapon: after all, no one would dare attack a country that had created such a device. In the 60s the idea of the theoretical possibility of destroying the world using a cobalt bomb was very popular and widely discussed in the press, in scientific literature and in art, for example in Kahn's book "On Thermonuclear War" [Kahn 1960], N. Shute's novel "On the Beach" [Shute 1957], and Kubrick's movie "Dr. Strangelove"; but then it was largely forgotten.
In 1950 Fermi posed his famous paradox, "Where are they?": if alien civilizations exist, why don't we see them? One obvious explanation at that time was that civilizations tend to destroy themselves in nuclear war, and the silence of the Universe is explained by this self-destruction. And since we are a typical civilization, we will probably also destroy ourselves.
Furthermore, in the 60s many ideas appeared about potential disasters or dangerous technologies that could be developed in the future. The English mathematician I.J. Good wrote the essay "Speculations Concerning the First Ultraintelligent Machine" [Good 1965], where he showed that as soon as such a machine is created, it will be able to improve itself and leave humans behind forever. These ideas later formed the basis of V. Vinge's notion of the technological singularity [Vinge 1993], the essence of which is that, based on current trends, by 2030 an artificial intelligence superior to human beings will be created, after which history will become fundamentally unpredictable.
The astrophysicist F. Hoyle [Hoyle 1962] wrote the novel "A for Andromeda", which described an attack on Earth by a hostile alien artificial intelligence downloaded via radio telescope from space. He gave the most plausible description of the scenario of such an attack, proceeding in several steps.
The physicist Richard Feynman gave the famous talk "There's Plenty of Room at the Bottom" [Feynman 1959], where he was the first to suggest the possibility of molecular manufacturing, i.e. nanorobots.
An important role in the realization of global risks was played by science fiction, which had its golden age in the sixties.
In 1960 von Foerster published an article entitled "Doomsday: Friday, 13 November, A.D. 2026. At this date human population will approach infinity if it grows as it has grown in the last two millennia" [Foerster 1960]. It is likely that he chose this title not to make a prediction but to draw attention to his explanation of past growth. So he made a deliberately false projection of infinite growth of the human population, but this prediction interestingly gives nearly the same date as several other predictions made by different methods, such as Vinge's date for AI: 2030. It also paved the way for the Limits to Growth theories. Von Foerster's idea was that the human population grows according to a hyperbolic law, and any hyperbolic law reaches infinity in finite time. Of course the population cannot grow that much for biological reasons, but if we add the "population" of computers we find that his prediction was still on track around 2010.
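A short sketch of von Foerster's hyperbolic law, N(t) = C / (T - t), which diverges as t approaches T (the constants below are rough illustrative values, not his fitted parameters):

```python
C = 2.0e11    # scale constant in person-years (assumed)
T = 2026.0    # approximate singularity year of the fitted law

for year in (1000, 1800, 1960, 2000, 2020):
    population = C / (T - year)
    print(year, round(population / 1e9, 2), "billion")
```

The structural point is that any growth rate proportional to N^2 (rather than to N, which gives an ordinary exponential) reaches infinity in finite time.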

In 1972 the Meadows book "The Limits to Growth" was published. It didn't directly predict human extinction, only a decline of the human population at the end of the XXI century due to a complex crisis caused by overpopulation, limited resources and pollution.
In general we have two lines of thought regarding a future global catastrophe. One is based on Malthusian theories, the other on predictions of technological development. The idea of total human extinction belongs to the second line, because "Limits to Growth" theories tend to underestimate the role of technologies in all aspects of human life, one of which is the role of technologies in building weapons. Meadows' theory doesn't take into account the possibility of nuclear war (or of even more destructive wars and catastrophes based on XXI-century technologies), which could logically be predicted as a result of a war for resources.
In the 1970s the danger associated with biotechnology became clear. In 1971 the American biologist Robert Pollack learned [Teals 1989] that experiments were planned in a neighboring laboratory to embed the genome of the oncogenic virus SV40 into the bacterium Escherichia coli. He immediately realized that if such E. coli spread throughout the world, it could cause a worldwide epidemic of cancer. He appealed to this laboratory to suspend the experiments before they were started. The result was the ensuing discussion at the Asilomar Conference in 1975, which adopted recommendations for safe genetic engineering (http://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA).
In 1981 Asimov published the book "A Choice of Catastrophes" [Asimov 2002]. Although it was one of the first attempts to systematize various global risks, its focus was on distant events, such as the expansion of the Sun, and the main message of the book was that people would be able to overcome the global risks.
In 1983 B. Carter suggested the now famous anthropic principle. Carter's reasoning had a second part, which he decided not to publish, reporting it only at a meeting of the Royal Society, because he knew that it would cause an even bigger protest. Later it was popularized by J. Leslie [Leslie 1996]. This second half of the argument became known as the Doomsday Argument (DA). Briefly, its essence is that, on the basis of humanity's past lifetime and the assumption that we are roughly in the middle of its existence, we can estimate the future duration of mankind's existence. Carter used a more complex form of the DA, with the conditional probability of a future catastrophe changing depending on whether we find ourselves before or after the catastrophe. In 1993 Richard Gott suggested a simpler version, which works directly with the future lifetime.
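Gott's version can be stated in one formula: if we observe a process at a random moment of its lifetime, then with confidence c its future duration lies between t_past*(1-c)/(1+c) and t_past*(1+c)/(1-c). A sketch applying it to humanity (the 200,000-year age of our species is a round assumption):

```python
def gott_interval(t_past, confidence=0.95):
    """Gott's 'delta t' Doomsday Argument: confidence bounds on the future
    duration of a process observed at a random point of its lifetime."""
    low = t_past * (1 - confidence) / (1 + confidence)
    high = t_past * (1 + confidence) / (1 - confidence)
    return low, high

print(gott_interval(200_000))   # roughly (5,100 ; 7,800,000) years
```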
In the early '80s a new theory of human extinction as a result of the use of nuclear weapons appeared: the theory of "nuclear winter". Computer simulations of the behavior of the atmosphere after a nuclear war showed that the shading caused by the emission of soot particles into the troposphere would be long and significant, resulting in prolonged freezing. The question of how realistic such a blackout is, and what temperature drop mankind can survive, remains open. This theory was part of the political fight against nuclear war ongoing in the 80s. Nuclear war was portrayed in the mass consciousness as inevitably leading to human extinction. While this was not true in the most realistic cases, it was helpful in promoting the idea of nuclear disarmament, and it resulted in a drastic reduction of nuclear arsenals after the Cold War. So a successful public fight against existential risks is possible.
In the 80s the first publications appeared about the risks of particle accelerator experiments.
In 1985 E. Drexler's book "Engines of Creation" [Drexler 1985] was published, devoted to radical nanotechnology, that is, the creation of self-replicating nanobots. Drexler showed that such an event would have revolutionary consequences for the economy and military affairs. He examines various scenarios of global catastrophe associated with nanorobots. The first is "gray goo", i.e. the unrestricted breeding of nanorobots in the environment after control over them is lost. In just a few days they could fully absorb the Earth's biosphere. The second risk is an "unstable arms race". Nanotechnology will allow the fast and extremely cheap creation of weapons of unprecedented destructive power: first of all, microscopic robots capable of engaging the manpower and equipment of the enemy. The instability of such an arms race means that "the one who starts first takes all", and a balance between two opposing forces, as existed during the Cold War, is impossible.
Public perception of existential risks was mostly formed by art, which was later criticized for unrealistic descriptions of risk scenarios. In 1984 the first movie of the Terminator series appeared, in which the military AI Skynet tries to eliminate humanity for reasons of self-defense; it later became a metaphor for dangerous AI. In fact, the risks of military AI are still underestimated by the Friendly AI community, partly because of the rejection they feel toward the Terminator movies.
In 1993 Vernor Vinge coined the idea of the technological Singularity, the moment when the first superhuman AI is created, and one clear option after it is that it destroys all of humanity. He wrote that he would be surprised if it happened before 2005 or after 2030. All predictions about AI are known to be premature.
In 1996 the book by the Canadian philosopher John Leslie, "The End of the World: The Science and Ethics of Human Extinction" [Leslie 1996], was published; it differed radically from Asimov's book, primarily in its pessimistic tone and its focus on the near future. It examines all the newly conceived hypothetical disasters, including nanorobots and the DA, and concludes that the chances of human extinction are 30 percent over the next 500 years.
John Leslie was probably the first to summarize all possible risks as well as the DA, and he started the modern tradition of their discussion; but it was Bill Joy who brought these ideas to the public.
In 2000 Wired magazine came out with a sensational article by Bill Joy, one of the founders of Sun Microsystems, "Why the Future Doesn't Need Us" [Joy 2000]. In it he paints a very pessimistic picture of the future of civilization, in which people will be replaced by robots: humans will be, at best, like pets for AI. Advances in technology will create "knowledge of mass destruction" that can be distributed over the Internet, for example the genetic codes of dangerous viruses. In 2005 Joy took part in the campaign to remove the recently published genome of the Spanish flu virus from the Internet. In 2003 Joy said that he had written two book manuscripts which he decided not to publish. In the first he wanted to warn people of the impending danger, but his published article had fulfilled this task. In the second he wanted to offer possible solutions, but the solutions did not yet satisfy him, and "knowledge is not an area where you have a right to a second shot".
Since the end of the last century J. Lovelock [Lovelock 2006] has been developing the theory of possible runaway global warming. The gist of it is that if the ordinary warming associated with the accumulation of carbon dioxide in the atmosphere exceeds a certain, rather small threshold (1-2 degrees C), then the vast reserves of methane hydrates on the seabed and in the tundra, accumulated there during the recent ice ages, begin to be released. Methane is tens of times stronger as a greenhouse gas than carbon dioxide, so this may lead to a further increase in the temperature of the Earth and launch other positive-feedback chains. For example, vegetation on land could start burning, emitting more CO2 into the atmosphere; the oceans would warm up, the solubility of CO2 would fall, and again it would be emitted into the atmosphere; anoxic areas would form in the ocean, emitting methane. In September 2008 bubbles of methane were discovered escaping in columns from the bottom of the Arctic Ocean. Finally, water vapor is also a greenhouse gas, and as its concentration rises, temperatures will rise as well. As a result, the temperature could rise by tens of degrees, a greenhouse catastrophe would happen, and all living things would die. Although this is not inevitable, such a development is the worst possible outcome, with the maximum expected damage. From 2012, and as of 2014, a group of scientists has been united in the Arctic Methane Emergency Group, with a collective blog (http://arctic-news.blogspot.ru/). They predicted total ice melt in the Arctic as early as 2015, which would facilitate further release of methane and, in their opinion, could lead to a temperature rise of 10-20 degrees in the XXI century and total human extinction.
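The runaway logic can be illustrated with a toy linear-feedback model (this is not a climate model; the gain values are invented): an initial push dT0 is amplified by a feedback gain g, so the total warming is the sum dT0*(1 + g + g^2 + ...), which converges to dT0/(1-g) for g < 1 and runs away for g >= 1.

```python
def total_warming(dT0, gain, steps=100):
    """Sum a geometric feedback chain: each round of warming triggers
    `gain` times as much additional warming in the next round."""
    total, push = 0.0, dT0
    for _ in range(steps):
        total += push
        push *= gain
    return total

print(total_warming(1.0, 0.5))   # converges toward 2.0 degrees
print(total_warming(1.0, 1.1))   # grows without bound: the runaway case
```

In Lovelock's scenario the methane and water-vapor loops correspond to pushing the effective gain above 1 once the 1-2 degree threshold is crossed.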

At the end of the XX and the beginning of the XXI century several articles appeared describing fundamentally new risks, whose realization became conceivable through creative analysis of the capabilities of new technologies. These are the works of R. Freitas on the "gray goo problem" [Freitas 2000], R. Carrigan's "Do potential SETI signals need to be decontaminated?" [Carrigan 2006], the book "Doomsday Men" by P.D. Smith [Smith 2007], and "Accidental Nuclear War" by Bruce Blair [Blair 1993]. Another risk is the artificial awakening of a supervolcano by means of deep drilling. There are projects for autonomous probes that could penetrate toward the core of the Earth to a depth of 1000 km by melting through mantle material [Stevenson 2003], [Circovic 2004].
Since 2001 E. Yudkowsky has been exploring the problem of so-called Friendly AI (that is, safe self-improving AI) in California. He created the Singularity Institute (now MIRI; do not confuse it with the startup incubator Singularity University) and wrote several papers on the safety of AI, the first of them "Creating Friendly AI". In 2006 he wrote two articles to which we will often refer in this book: "Cognitive Biases Potentially Affecting Judgment of Global Risks" [Yudkowsky 2008b] and "Artificial Intelligence as a Positive and Negative Factor in Global Risk" [Yudkowsky 2008a].
In 2003 the English Astronomer Royal Sir Martin Rees published the book "Our Final Hour" [Rees 2003]. It is much smaller in volume than Leslie's book and does not contain fundamentally new information; however, it was addressed to a wide audience and sold in large quantities.
In 2002 Nick Bostrom wrote his seminal article "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards", and several other articles about the DA and the so-called Simulation Argument. He coined the term "existential risk", which includes not only extinction but everything that could harm humanity's potential forever. He also showed that the main risks are "unknown unknowns", that is, new unpredictable risks.
In 2004 Richard Posner published "Catastrophe: Risk and Response" [Posner 2004], which repeats much of the previous books but provides an economic analysis of the risks of global catastrophe and of the price of efforts to prevent them (for example, efforts to deflect asteroids, and the value of the experiments on accelerators).
In the XXI century the main task of researchers became not merely listing the various possible global risks, but analyzing the general mechanisms of their origin and prevention. It turned out that most of the possible risks are associated with the wrong knowledge and wrong decisions of people. At the beginning of the XXI century the formation of a global risk analysis methodology began: the transition from the description of risks to the meta-analysis of the human ability to detect and correctly assess global risks. This should mostly be attributed to Bostrom and Yudkowsky.


In 2008 several events increased interest in the risks of global catastrophe: the planned (though not yet fully realized) start of the Large Hadron Collider, the vertical jump in oil prices, the release of methane in the Arctic, the war in Georgia, and the global financial crisis.
In 2008 a conference was held in Oxford, "Global Catastrophic Risks", and its proceedings were published under the same title, edited by N. Bostrom and M. Ćirković [Bostrom, Circovic 2008]. The volume includes more than 20 articles by various authors.
It contains an article by M. Ćirković about the role of observation selection in the evaluation of the frequency of future disasters, which claims that it is impossible to draw any conclusions about the frequency of future disasters based on their previous frequency.
Arnon Dar examines the risks of supernovae and gamma-ray bursts, and also shows that a specific threat to the Earth comes from cosmic rays produced by galactic gamma-ray bursts.
William Napier, in his article about the threat of comets and asteroids, shows that we are perhaps living in a period of intense cometary bombardment, when the frequency of impacts is 100 times higher than average.
Michael Rampino gave an overview of the catastrophic risks associated with supervolcanoes.
At the beginning of the XXI century several organizations appeared that promote protection from global risks, for example the Lifeboat Foundation, CRN (the Center for Responsible Nanotechnology), MIRI, the Future of Humanity Institute in Oxford, and GCRI, founded by Seth Baum. Most of them are very small and don't have much impact. The most interesting work is done by MIRI and the LessWrong community forum associated with it.
In 2012 the Centre for the Study of Existential Risk was created in Cambridge, with several prominent figures on its board: Huw Price, Martin Rees and Jaan Tallinn (http://cser.org/). It got a lot of public attention, which is good, but not much actual work has been done.
Except for MIRI, which created a working community, all the other institutions remain in the shadow of the work of their leaders, or are not doing any work at all. For example, the Lifeboat Foundation has very large advisory boards with thousands of people on them, but rarely consults them. It does, however, run a good mailing list about x-risks.
The blog Overcoming Bias and the articles of Robin Hanson were an important contribution to the study of x-risks. Katja Grace from Australia made an important contribution to the theory of the Doomsday argument by mathematically connecting it with the Fermi paradox (http://www.academia.edu/475444/Anthropic_Reasoning_in_the_Great_Filter).
The study of global risks has followed this path: awareness of the possibility of human extinction, and of the possibility of extinction in the near future; then the realization of several distinct risks; then an attempt to create an exhaustive list of global risks; and then the creation of a system of their description which takes any global risk into account and determines the risk posed by any new technology or discovery. A systematic description has greater predictive value than a mere list, as it allows us to find new points of vulnerability, just as the periodic table allows us to find new elements. And then comes the study of the limits of human thinking about global risks, as the first step in creating a methodology capable of effectively finding and evaluating global risks.
The period from 2000 to 2008 was the golden age of x-risk research. Many seminal books and articles were published, from Bill Joy to Bostrom and Yudkowsky, and many new ideas appeared. But after that the stream of new ideas almost stopped. This might be good, because every new idea increases the total risk, and perhaps all the important ideas on the topic had already been discussed; but unfortunately nothing was done to prevent x-risks, and the dangerous tendencies continued.
Risks of nuclear war are growing. No FAI theory exists. Biotech is developing very quickly, and genetically modified viruses are getting cheaper and cheaper. The time until the catastrophe is running out. The next obvious step is to create a new stream of ideas: ideas on how to prevent x-risks and when to implement them. But before doing this, we need a consensus between researchers about the structure of the incoming risks. This can come through dialog, especially informal dialogs during scientific conferences.
The lack of new ideas was somewhat compensated by the appearance of many think tanks, as well as by the publication of many popular articles about the problem. FHI, GCRI, LessWrong and the Arctic methane group are among the new players in the field. But communication between them has not been good, especially where there is an ideological barrier, mostly about which risk is the most serious: AI or climate, war or viruses.
Also, x-risk researchers seem to be less cooperative than, for example, anti-aging researchers, maybe because each x-risk researcher aspires to save the world and has his own understanding of how to do it, and these theories don't add up with one another. This is my personal impression.

Current situation with global risks


The first version of this book was written in Russian in 2007 (under the name "The Structure of the Global Catastrophe"), and since that time not much has changed for the better. The predicted risky trends have continued, and now the risks of world war are very high. World war is a corridor for creating x-risk weapons and situations, as well as for disseminating values which promote x-risks. These are the values of nationalism, religious sectarianism, mysticism, fatalism, short-term income, risky behaviour and "winning" in general. The values of human life, safety, rationality and the unity of humankind are not growing as quickly as they should.
In fact there are two groups of human values. One of them is about fighting other groups of people and is based on false and irrational beliefs drawn from nationalism and religion, while the other is about the value of human life. The first promotes global catastrophe, while the second counters it. Of course this is an oversimplification, but this simple map of values is very useful. Values alone can't save us from catastrophe, because different people have different values, but they can change its probability.
1. Most important: a global catastrophe has not happened. Colossal terrorist attacks, wars and natural disasters also have not happened.
2. Key technology trends, like exponential growth in the spirit of Moore's Law, have not changed. This is especially true of biology and genetics.
3. The economic crisis of 2008 began, and I think its aftermath is not over yet, because Quantitative Easing created a lot of money which could spiral into inflation, and large defaults are still possible.
4. Several new potentially pandemic viruses appeared, such as swine flu and MERS.
5. New artificial viruses were created to test how mutated bird flu could wipe out humanity, and the protocols of the experiments were published.
6. Arctic ice is collapsing, and methane readings in the Arctic are high.
7. The Fukushima nuclear catastrophe showed again that unimaginable catastrophes can happen. By the way, Martin Rees predicted it in his book about existential risks.
8. The orbital infrared telescope WISE was launched; it will be able to clarify the question of the existence of dark comets and directly answer the question of the risk associated with them.
9. Many natural language processing AI projects have started, and maximum computer power has risen around 1,000 times since the first edition of the book.
10. The 2012 end-of-the-world craze greatly spoiled the efforts to promote a rational approach to global risks.

11. The start of the Large Hadron Collider helped to raise questions about the risks of scientific experiments, but the truth was lost in quarrels between opinions for and against it. I mean the works of Adrian Kent and Anders Sandberg about small risks with large consequences.
12. In 2014 the situation in Ukraine came close to a war between Russia and the West. The peculiarity of this situation is that it could deteriorate in small steps, as there is no natural barrier here like the taboo on using nuclear weapons in the case of nuclear war. It has already resulted in a new cold war, an arms race and a nuclear race.
13. Peak oil has not happened, mostly because of shale oil and shale gas. Again intellect proved to be more powerful than the limits of resources.


Part 2. Typology of x-risks
Chapter 4. The map of all known global risks

I like to create full exhaustive lists, and I could not stop myself from creating a list of human extinction risks. Soon I reached around 100 items, although not all of them are really dangerous. I decided to convert them into something like a periodic table, i.e. to sort them by several parameters, in order to help predict new risks.
For this map I chose two main variables: the basic mechanism of risk and the historical epoch during which it could happen. Also, any map should be based on some kind of future model, and I chose Kurzweil's model of exponential technological growth, which leads to the creation of super technologies in the middle of the 21st century. Risks are also graded according to their probability: main, possible and hypothetical. I plan to attach to each risk a wiki page with its explanation.
I would like to know which risks are missing from this map. If your ideas are too dangerous to publish openly, PM me. If you think that any mention of your idea will raise the chances of human extinction, just mention its existence without the details.
I think that a map of x-risks is necessary for their prevention. I offered prizes for improving the previous map, which illustrates possible prevention methods of x-risks, and that really helped me to improve it. But I am not offering prizes for improving this map, as that could encourage people to be too creative in thinking up new risks.

http://immortality-roadmap.com/x-risks%20map15.pdf
lesswrong discussion: http://lesswrong.com/lw/mdw/a_map_typology_of_human_extinction_risks/
In the following chapters I will go into detail about all the mentioned risks.

Block 1 – Natural risks
Chapter 5. The risks connected with natural catastrophes

Universal catastrophes

Catastrophes which would change the whole Universe as such, on a scale equal to the Big Bang, are theoretically possible. From statistical reasoning their probability is less than 1% in the nearest billion years, as Bostrom and Tegmark have shown. However, the validity of Bostrom and Tegmark's reasoning depends on the validity of their premise, namely that intelligent life in our Universe could have arisen not only now but also several billion years ago. The premise rests on the fact that the heavy elements necessary for the existence of life arose only a few billion years after the Universe appeared, long before the formation of the Earth. Obviously, however, the degree of reliability we can attribute to this premise is far from certain, as we do not have direct proof of it, namely traces of early civilizations. Moreover, the apparent absence of earlier civilizations (Fermi's paradox) lends a certain credibility to the opposite idea, namely that mankind has arisen improbably early. Possibly, the existence of heavy elements is not the only necessary condition for the emergence of intelligent life; there may be other conditions, for example that the frequency of flashes of nearby quasars and hypernovae has considerably decreased (and the density of these objects really does decrease as the Universe expands and the hydrogen clouds are exhausted). Bostrom and Tegmark write: «One might think that since life here on Earth has survived for nearly 4 Gyr (Gigayears), such catastrophic events must be extremely rare. Unfortunately, such an argument is flawed, giving us a false sense of security. It fails to take into account the observation selection effect that precludes any observer from observing anything other than that their own species has survived up to the point where they make the observation. Even if the frequency of cosmic catastrophes were very high, we should still expect to find ourselves on a planet that had not yet been destroyed. The fact that we are still alive does not even seem to rule out the hypothesis that the average cosmic neighborhood is typically sterilized by vacuum decay, say, every 10000 years, and that our own planet has just been extremely lucky up until now. If this hypothesis were true, future prospects would be bleak».
And though Bostrom and Tegmark go on to reject the assumption of a high frequency of "sterilizing catastrophes", based on the late time of the Earth's existence, we cannot accept their conclusion, because, as we said above, the premise on which it rests is unreliable. This does not mean, however, that extinction as a result of a universal catastrophe is imminent. Our only source of knowledge about possible universal catastrophes is theoretical physics, since, by definition, such a catastrophe has never happened during the life of the Universe (except for the Big Bang itself). Theoretical physics generates a large quantity of unchecked hypotheses, and in the case of universal catastrophes they may be essentially uncheckable. We also note that, proceeding from today's understanding, we can neither prevent a universal catastrophe nor protect ourselves from it (though we might provoke one: see the section about dangerous physical experiments). Let us now list the universal catastrophes that are possible from the point of view of some theorists:
1. Disintegration of the false vacuum. We have already discussed the problems of false vacuum in connection with physical experiments.
2. Collision with an object in multidimensional space: a brane. There are hypotheses that our Universe is only an object in a multidimensional space, called a brane (from the word "membrane"), and that the Big Bang was the result of a collision of our brane with another one. If there is one more collision, it will destroy our whole world at once.
3. The Big Rip. The recently discovered dark energy leads, as is currently thought, to an ever more accelerated expansion of the Universe. If the speed of expansion keeps growing, at some moment it will tear apart the Solar system, but according to current theories that moment is tens of billions of years away. (Phantom Energy and Cosmic Doomsday. Robert R. Caldwell, Marc Kamionkowski, Nevin N. Weinberg. http://xxx.itep.ru/abs/astro-ph/0302506)
4. Transition of residual dark energy into matter. The hypothesis has recently been put forward that this dark energy could suddenly convert into ordinary matter, as already happened at the time of the Big Bang.
5. Other classic scenarios of the death of the universe are heat death (the rise of entropy and the equalization of temperature in the universe) and the contraction of the Universe under gravitational forces. But these, again, are tens of billions of years away from us.
6. One can assume the existence of some physical process that makes the Universe unfit for habitation after a certain time (as it was unfit for habitation, because of the intense radiation of the nuclei of galaxies, the quasars, during the early billions of years of its existence). For example, such a process could be the evaporation of primordial black holes through Hawking radiation. If so, we exist in a narrow interval of time when the Universe is habitable, just as the Earth is located in the narrow habitable zone around the Sun, and the Sun in a narrow region of the galaxy where the frequency of its rotation is synchronized with the rotation of the arms of the galaxy, so that it does not fall into those arms and is not exposed to supernovae.
7. If our world to some extent arose from nothing in a way absolutely unknown to us, what prevents it from disappearing just as suddenly?

Geological catastrophes
Geological catastrophes kill millions of times more people than falling asteroids; however, proceeding from modern understanding, they are limited in scale. Nevertheless, the global risks connected with processes inside the Earth surpass space risks. It is possible that there are mechanisms of release of energy and of poisonous gases from the bowels of the Earth which we simply have not yet encountered, owing to the effect of observation selection.

Eruptions of supervolcanoes
The probability of the eruption of a supervolcano of proportional intensity is much greater than the probability of the fall of an asteroid. However, modern science can neither prevent nor even predict this event. (In the future it may be possible to gradually relieve the pressure in magma chambers, but this is itself dangerous, as it would require drilling into their roofs.) The basic damaging force of a supereruption is volcanic winter. It is shorter than a nuclear winter, as particles of volcanic ash are heavier than soot, but there can be much more of them. In this case a volcanic winter can lead to a new steady state: a new ice age.
A large eruption is accompanied by the emission of poisonous gases, including sulphur compounds. In a very bad scenario this could considerably poison the atmosphere. Such poisoning would not only make the air of little use for breathing, but would also result in universal acid rains which would burn vegetation and destroy harvests. Big emissions of carbon dioxide and hydrogen are also possible.
Finally, volcanic dust is dangerous to breathe, as it clogs the lungs. People can easily provide themselves with gas masks and gauze bandages, but it is not certain that these will suffice for cattle and pets. Besides, volcanic dust simply covers huge surfaces with a thick layer, and pyroclastic flows can extend over considerable distances. Finally, explosions of supervolcanoes generate tsunamis.
All this means that people will most likely survive a supervolcano eruption, but it will, with considerable probability, send mankind into one of the post-apocalyptic stages. Mankind once found itself on the verge of extinction because of the volcanic winter caused by the eruption of the volcano Toba 74,000 years ago. However, modern technologies of food storage and bunker building would allow a considerable group of people to live through a volcanic winter of such scale.
In antiquity enormous effusive volcanic eruptions took place which flooded millions of square kilometres with molten lava: in India, on the Deccan plateau, at the time of the extinction of the dinosaurs (possibly provoked by the fall of an asteroid on the opposite side of the Earth, in Mexico), and also on the East Siberian platform. There is a doubtful hypothesis that the strengthening of hydrogen degassing on the Russian plain is a harbinger of the appearance of a new magmatic centre. There is also a doubtful hypothesis about the possibility of a catastrophic splitting of the Earth's crust along the lines of oceanic rifts, with powerful explosions of water steam under the crust.
An interesting question is whether the overall inner heat of the Earth grows through the disintegration of radioactive elements, or, on the contrary, decreases due to radiative cooling. If it grows, volcanic activity should increase over hundreds of millions of years. (Asimov writes in the book "A Choice of Catastrophes", about ice ages: "From volcanic ash in ocean sediments it is possible to conclude that volcanic activity in the last 2 million years was approximately four times more intense than in the previous 18 million years.")

Falling of asteroids
The falling of asteroids and comets is often considered one of the possible causes of the extinction of mankind. And though such collisions are quite possible, the chances of total extinction as a result of them are often exaggerated. Experts think an asteroid would need to be about 37 miles (60 km) in diameter to wipe out all complex life on Earth. However, asteroids of such size hit the Earth extremely rarely, approximately once every billion years. In comparison, the asteroid that wiped out the dinosaurs was about 6 mi (10 km) in diameter, a volume roughly 200 times less than that of a potential life-killer.
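A quick check of the quoted volume ratio (volume scales with the cube of the diameter):

```python
ratio = (60 / 10) ** 3   # 60 km "life-killer" vs the 10 km dinosaur impactor
print(ratio)             # 216, i.e. roughly 200 times
```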
The asteroid Apophis has approximately a 3 in a million chance of impacting the Earth in 2068, but being only about 1,066 ft (325 m) across, it is not a threat to the future of life. In a worst-case scenario it could impact in the Pacific Ocean and produce a tsunami killing several hundred thousand people.
2.2 million years ago a comet 0.5-2 km in diameter fell between South America and Antarctica (the Eltanin impact, http://de.wikipedia.org/wiki/Eltanin_(Asteroid)). The wave, 1 km in height, threw whales onto the Andes. In the vicinity of the Earth there are no asteroids of a size that could destroy all people and the whole biosphere. However, comets of such size can come from the Oort cloud. In the article by Napier et al., "Comets with low reflecting ability and the risk of space collisions", it is shown that the number of dangerous comets may be essentially underestimated, as the observed quantity of comets is 1000 times less than expected; this is connected with the fact that comets, after several passes around the Sun, become covered with a dark crust, cease to reflect light and become imperceptible. Such dark comets are invisible to modern means. Besides, the ejection of comets from the Oort cloud depends on the tidal forces exerted by the Galaxy on the Solar system. These tidal forces increase when the Sun passes through denser areas of the Galaxy, namely through the spiral arms and the galactic plane. And just now we are passing through the galactic plane, which means that during the present epoch comet bombardment is 10 times stronger than the average over the history of the Earth. Napier connects the previous epochs of intense comet bombardment with the mass extinctions 65 and 251 million years ago.
The basic damaging factor of an asteroid impact would be not only the tsunami wave, but also an "asteroid winter" connected with the ejection of dust particles into the atmosphere. The fall of a large asteroid can cause deformations in the Earth's crust which would lead to volcanic eruptions. Besides, a large asteroid would cause a worldwide earthquake, dangerous first of all for our technogenic civilization.
A scenario of intense bombardment of the Earth by a set of fragments is even more dangerous: the strikes would be distributed more evenly and would require a smaller quantity of material. These fragments could result from the disintegration of some space body (see further about the threat of the explosion of Callisto), from the splitting of a comet into a stream of fragments (the Tunguska meteorite was probably a splinter of comet Encke), from an asteroid hitting the Moon, or as a secondary damaging factor of the collision of the Earth with a large space body. Many comets already consist of groups of fragments, and they can also break up in the atmosphere into thousands of pieces. This can also occur as a result of an unsuccessful attempt to deflect an asteroid by means of nuclear weapons.
The fall of an asteroid can provoke the eruption of a supervolcano if the asteroid hits a thin part of the Earth's crust or the cap of the magma chamber of a volcano, or if the shock from the strike disturbs remote volcanoes. The molten iron formed by the fall of an iron asteroid could play the role of a "Stevenson probe", if such a probe is possible at all: that is, melt through the Earth's crust and mantle, forming a channel into the Earth's bowels, which is fraught with enormous volcanic activity. Though this did not usually occur when asteroids fell to the Earth, the lunar "seas" could have arisen this way. Besides, outpourings of magmatic rock could hide the craters of such asteroids. Such outpourings are the Siberian trap basalts and the Deccan plateau in India. The latter is simultaneous with two large impacts (the Chicxulub and Shiva craters). It is possible to assume that shock waves from these impacts, or a third space body whose crater has not survived, provoked this eruption. It is not surprising that several large impacts can occur simultaneously. For example, the cores of comets can consist of several separate fragments: comet Shoemaker-Levy 9, which ran into Jupiter in 1994, left a dotted trace on it, as by the moment of collision it had already broken up into fragments. Besides, there can be periods of intensive formation of comets, when the Solar system passes near another star, or as a result of collisions of asteroids in the asteroid belt.
Much more dangerous are air explosions of meteorites some tens of metres in diameter, which can trigger false alarms in early warning systems of nuclear attack, or the impacts of such meteorites in the areas where missiles are based.

Pustynsky in his article comes to the following conclusions: "According to the estimates made in the present article, the prediction of a collision with an asteroid cannot so far be guaranteed and remains a matter of chance. It is impossible to exclude that a collision will occur absolutely unexpectedly. At the same time, for collision prevention a lead time on the order of 10 years is necessary. Detection of an asteroid some months prior to the collision would allow the evacuation of the population and of nuclear-dangerous plants in the impact zone. A collision with an asteroid of small size (up to 1 km in diameter) will not lead to planet-wide consequences (excluding, of course, the practically improbable direct hit on a concentration of nuclear materials). A collision with a larger asteroid (approximately from 1 to 10 km in diameter, depending on the speed of the collision) is accompanied by a most powerful explosion, the full destruction of the fallen body, and the ejection into the atmosphere of up to several thousand cubic km of rock. In its consequences this phenomenon is comparable with the largest catastrophes of terrestrial origin, such as explosive volcanic eruptions. Destruction in the impact zone will be total, and the planet's climate will change sharply, settling back to normal only after a few years (not decades or centuries!). The exaggeration of the threat of global catastrophe is confirmed by the fact that during its history the Earth has survived a set of collisions with similar asteroids, and this has not left a demonstrably appreciable trace in its biosphere (in any case, far from always). Only a collision with larger space bodies (more than ~15-20 km in diameter) could make a more appreciable impact on the biosphere of the planet. Such collisions occur less often than once in 100 million years, and we do not yet have techniques allowing us even approximately to calculate their consequences."
So, the probability of the destruction of mankind as a result of an asteroid impact in the XXI century is very small.

Asteroid threats in the context of technological development
It is easy to notice that the direct risks of collision with an asteroid decrease as technology develops. First of all, they decrease due to more accurate measurement of the probability itself, that is, due to ever more precise detection of dangerous asteroids and measurement of their orbits. (If, however, the assumption that we live during an episode of cometary bombardment is confirmed, the risk assessment will increase 100-fold over the background.) Secondly, they decrease due to the growth of our ability to deflect asteroids.
On the other hand, the consequences of asteroid strikes are becoming larger, not only because the density of the population is growing, but because of the growing connectivity of the global system, as a result of which damage in one spot can backfire on the entire planet.
In other words, although the probability of collision is decreasing, the indirect risks related to the asteroid danger are increasing.
The main indirect risks are as follows:
A) The destruction of hazardous facilities at the impact site, for example a nuclear power plant. The whole mass of the station would in such a case be vaporized, and the release of radiation would be higher than in Chernobyl. In addition, there could be additional nuclear reactions due to the heavy compression of the plant as the asteroid destroys it. Still, the chances of a direct asteroid hit on a nuclear plant are small, but they grow with the number of plants.
B) There is a risk that even a small group of meteors, moving at a certain angle toward a certain place on the Earth's surface, could trigger an early warning system and cause an accidental nuclear war. The same consequences could follow from the explosion of a small asteroid (a few meters in size) in the air. The first option is more likely for superpowers whose missile-attack warning systems have unsecured areas (as in the Russian Federation, which is unable to track a full missile trajectory), while the second is more likely for regional nuclear powers (India and Pakistan, North Korea, etc.), which cannot track missiles but can react to a single explosion.
C) Technology for moving asteroids will in the future create the hypothetical possibility of directing asteroids not only away from the Earth but also toward it. Even if an asteroid impact is accidental, there will be rumors that it was sent on purpose. Yet hardly anyone will direct asteroids at the Earth, as such an action is easy to spot, hit accuracy is low, and it would have to be done decades before the impact.
D) Safe deflection of asteroids will require the creation of space weapons, which could be nuclear, laser, or kinetic. Such weapons could be used against the Earth or against the satellites of a potential enemy. Although the risk of their use against the Earth is small, they still create a greater potential for damage than falling asteroids.
E) Destroying an asteroid with a nuclear explosion would increase its lethal force through fragmentation, that is, a larger number of explosions over a larger area, as well as radioactive contamination of the debris.
Modern technical means are able to deflect only relatively small asteroids, which pose no global threat. The real danger is dark cometary bodies several kilometers across moving along elongated elliptical orbits at great speed.
However, in the future (maybe as soon as 2030-2050), space could be quickly and cheaply scanned (and transformed) by self-replicating robots based on nanotech. They would create huge telescopes in space able to detect all dangerous bodies in the solar system. It would be enough to land a microrobot on an asteroid; it would multiply there, take the asteroid apart into pieces, and build an engine that would change its orbit. Nanotech would also help create self-sustaining human settlements on the Moon and other celestial bodies. This suggests that the problem of asteroid hazard will be irrelevant in a few decades.
Thus, the problem of preventing the collision of the Earth with asteroids in the coming decades may only be a diversion of resources from global risks.


First, because we still cannot deflect the objects that could really lead to the complete extinction of mankind.
Second, because by the time a system for nuclear annihilation of asteroids is created (or shortly thereafter), it will already be obsolete, as nanotech could be used for the quick and cheap exploration of the solar system by the middle of the 21st century, and perhaps earlier.
Third, because while the Earth is divided into warring states, an asteroid deflection system would itself be a weapon in the event of war.
Fourth, because the probability of human extinction as a result of an asteroid impact in the narrow period of time when the asteroid deflection system is already deployed but powerful nanotechnology has not yet been established is extremely small. This time interval can be set at 20 years, say from 2030 to 2050, and the chance of a 10-kilometer body falling during this time, even if we assume that we live in a period of cometary bombardment when the intensity is 100 times higher than normal, is 1 in 15,000 (based on an average incidence of such bodies of once in 30 million years; a quick numerical check is sketched after this list). Moreover, if we consider the dynamics, only by the end of this period will we be able to deflect the really dangerous objects, and perhaps even later, since the larger the asteroid, the more large-scale and long-term the project for its deflection must be. Although 1 in 15,000 is still an unacceptably high risk, it is commensurate with the risk of the use of space-based weapons against the Earth.
Fifth, anti-asteroid protection diverts attention from other global problems, due to the limits of human attention (even mass-media attention) and financial resources. This is because the asteroid danger is very easy to understand: it is easy to imagine the impact, easy to calculate its probability, and it is understandable to the general public. There is no doubt about its reality, and it is clear how to protect ourselves against it. (For example, the probability of a volcanic catastrophe comparable to an asteroid impact of the same energy level is, by various estimates, 5 to 20 times higher, but we have no idea how it could be prevented.) This differs from other risks that are difficult to imagine and cannot be quantified, but which may carry a probability of extinction of tens of percent: the risks of AI, biotech, nanotech, and nuclear weapons.

Sixth, if we talk about relatively small bodies like Apophis, it may be cheaper to evacuate the area of the future impact than to deflect the asteroid. And most likely the fall area would be ocean, so anti-tsunami measures would be required.
Still, I do not call for abandoning anti-asteroid protection, because our first need is to find out whether we are living in a period of cometary bombardment. In that case, the probability of a 1 km body impact within the next 100 years would be around 6 percent. (This is based on hypothetical impacts in the last 10,000 years, such as the so-called Clovis comet http://en.wikipedia.org/wiki/Younger_Dryas_impact_event, whose traces may be the roughly 500,000 crater-like formations called Carolina bays http://en.wikipedia.org/wiki/Carolina_bays, and the large crater near New Zealand dated to 1443 http://en.wikipedia.org/wiki/Mahuika_crater, etc.) It is necessary first of all to direct effort toward monitoring dark comets and toward the analysis of fresh craters.
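The text does not show how the 6 percent figure is obtained, but a number of that order follows if the Holocene impact candidates are read as roughly six 1-km-class events per 10,000 years and impacts are treated as a Poisson process. The rate below is therefore my illustrative assumption, not a figure from the source.

import math

assumed_rate = 6 / 10_000   # assumed impacts per year during a bombardment episode
century = 100

p = 1 - math.exp(-assumed_rate * century)
print(f"P(>=1 km-class impact in 100 years) = {p:.1%}")   # ~5.8%, near the quoted 6%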
The idea that we live during a cometary bombardment episode is advanced by a group of scientists, the Holocene Impact Working Group https://en.wikipedia.org/wiki/Holocene_Impact_Working_Group
Napier also wrote that we may strongly underestimate the number of dark comets http://lib.convdocs.org/docs/index-124751.html?page=30: the number of such comets we have found is 100 times less than expected.
If we improve our observation techniques, we could show that the probability of an extinction-level impact in the 21st century is effectively zero. If we prove this, there will be no need for large-scale space defense systems, which could be dangerous in themselves. A smaller-scale system may be useful for stopping regional-effect impactors like the Chelyabinsk asteroid.
We have already demonstrated a reduced probability of impact by creating a large catalog of near-Earth objects.
A very important instrument here is the infrared telescope WISE. If its observations are correct, it has now found 90 percent of planet-killing asteroids, which is equal to a 90 percent reduction of their risk in the near-term perspective.
“In 2005 Congress gave the agency until 2020 to catalogue 90 percent of the total population of midsized NEOs at or above 140 meters in diameter—objects big enough to devastate entire regions on Earth. NASA has already catalogued 90 percent of the NEOs that could cause planetary-scale catastrophe—those with a diameter of one kilometer or more—but is unlikely to meet the 2020 deadline for cataloguing midsize NEOs.”
But in 2016 some questions arose about the validity of these data:
http://www.scientificamerican.com/article/for-asteroid-hunting-astronomersnathan-myhrvold-says-the-sky-is-falling1/
If the infrared observations are correct, they greatly reduce the chances of a large family of dark comets (though 99 percent of such comets spend most of their time beyond Mars, which makes them difficult to find).
In a 2015 article Napier discusses the dangers of centaur comets, which sometimes come from the outer Solar System, disintegrate, and produce a period of bombardment:
https://www.ras.org.uk/images/stories/press/Centaurs/Napier.Centaurs.revSB.pdf
Comet Encke could be a fragment of a larger disintegrated comet. “Analysis of the ages of the lunar microcraters (“zap pits”) on rocks returned in the Apollo programme indicate that the near-Earth interplanetary dust (IPD) flux has been enhanced by a factor of about ten over the past ~10 kyr compared to the long-term average.”
“The effects of running through the debris trail of a large comet are liable to be complex, and to involve both the deposition of fine dust into the mesosphere and, potentially, the arrival of hundreds or thousands of megaton-level bolides over the space of a few hours. Incoming meteoroids and bolides may be converted to micron-sized smoke particles (Klekociuk et al. 2005), which have high scattering efficiencies and so the potential to yield a large optical depth from a small mass. Modelling of the climatic effects of dust and smoke loading of the atmosphere has focused on the injection of such particulates in a nuclear war.”
Such work has implications for atmospheric dusting events of cosmic origin, although there are significant differences, of course. Hoyle & Wickramasinghe (1978) considered that the acquisition of ~10^14 g of comet dust in the upper atmosphere would have a substantial effect on the Earth's climate. Such an encounter is a reasonably probable event during the active lifetime of a large, disintegrating comet in an Encke-like orbit (Napier 2010).
Apart from their effects on atmospheric opacity, a swarm of Tunguska-level fireballs could yield wildfires over an area of order 1% of the Earth's surface.

Zone of defeat depending on force of explosion
Here we consider the destructive effects of an explosion resulting from an asteroid impact (or from any other cause). A detailed analysis with similar conclusions can be found in Pustynsky's article.
The destruction zone grows only slowly with the force of the explosion, which is true both for asteroids and for super-powerful nuclear bombs. Though the energy of the blast falls proportionally to the square of the distance from the epicenter, in a huge explosion it falls much faster: first, because of the curvature of the Earth, which shields whatever is behind the horizon (this is why nuclear explosions are most effective in the air rather than on the ground); and second, because the ability of matter to elastically transfer a shock wave is limited from above by a certain threshold, so all energy above it is not transferred but turns to heat around the epicenter. For example, in the ocean there cannot be a wave higher than its depth, and since an explosion's epicenter is a point (unlike the epicenter of a typical tsunami, which is a fault line), the wave height will then decrease linearly with distance. The surplus heat formed in the explosion is either radiated into space or remains as a lake of molten rock at the epicenter. The Sun delivers to the Earth in a day light energy on the order of 1000 gigatons (10^22 joules), so the thermal contribution of a superexplosion to the overall temperature of the Earth is insignificant. (On the other hand, the mechanism of heat distribution from the explosion would not be streams of heated air, but the cubic kilometers of fragments thrown out by the explosion, with total weight comparable to the weight of the asteroid but smaller energy, many of which would have speeds close to orbital velocity and would therefore fly on ballistic trajectories, as intercontinental missiles do. Within an hour they would reach all corners of the Earth, and though, acting as kinetic weapons, they would not hit every point on the surface, on reentry into the atmosphere they would release huge quantities of energy, heating the atmosphere over the whole area of the Earth, possibly to the ignition temperature of wood, which would aggravate things further.)
We can roughly assume that the destruction zone grows proportionally to the fourth root of the force of the explosion (exact values are determined empirically by the military from test data, and the exponents lie between 0.33 and 0.25, depending on the force of the explosion, etc.). Each ton of meteorite mass yields approximately 100 tons of TNT equivalent of energy, depending on the collision speed, which is usually some tens of kilometers per second. (By this measure a stone asteroid of 1 cubic km will give an energy of about 300 gigatons. The density of comets is much less, but they can be scattered in the air, strengthening the strike, and besides, they move on trajectories perpendicular to the Earth's orbit with much greater speeds.) Taking the radius of complete destruction from a 1-megaton hydrogen bomb to be 10 km, we can derive the radii of destruction for asteroids of different sizes, assuming that the destruction radius scales as the fourth root of the explosion's force. For example, for an asteroid of 1 cubic km the radius will be 230 km; for an asteroid 10 km in diameter, 1300 km; for a 100 km asteroid, a destruction radius on the order of 7000 km. For this radius of guaranteed destruction to become more than half the Earth's circumference (20,000 km), that is, to cover the whole Earth with certainty, the asteroid would have to be on the order of 400 km in size. (If we instead assume that the destruction radius grows as the cube root, the diameter of an asteroid destroying everything would be about 30 km. The real value lies between these two figures (30-400 km); Pustynsky gives an independent estimate of 60 km.)
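The scaling rule above fits in a few lines of code. The sketch below uses only the chapter's own rough assumptions: a destruction radius anchored at 10 km for 1 megaton and growing as the fourth root of yield (cube root shown for comparison), and ~100 tons of TNT per ton of impactor; the stony density of 3 t/m^3 is my added assumption. Because it computes yield from a spherical diameter rather than from a fixed cubic kilometer, its outputs only approximately reproduce the 230/1300/7000 km figures in the text.

R0_KM, E0_MT = 10.0, 1.0          # anchor: 1 Mt -> 10 km radius of complete destruction

def destruction_radius_km(yield_mt: float, exponent: float = 0.25) -> float:
    """Radius of complete destruction for a given yield in megatons."""
    return R0_KM * (yield_mt / E0_MT) ** exponent

def asteroid_yield_mt(diameter_km: float, density_t_m3: float = 3.0) -> float:
    """TNT-equivalent yield, using ~100 t TNT per ton of impactor mass."""
    volume_m3 = (4 / 3) * 3.14159 * (diameter_km * 500) ** 3   # radius in meters
    mass_t = volume_m3 * density_t_m3
    return mass_t * 100 / 1e6                                  # tons TNT -> megatons

for d in (1, 10, 100, 400):
    y = asteroid_yield_mt(d)
    print(f"{d:>4} km body: ~{y:.1e} Mt, "
          f"radius ~{destruction_radius_km(y):,.0f} km (4th root), "
          f"~{destruction_radius_km(y, 1/3):,.0f} km (cube root)")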
Though these calculations are extremely approximate, it is clear from them that even the asteroid linked with the extinction of the dinosaurs did not devastate the whole territory of the Earth, nor even the whole continent where it fell. And the extinction, if it was connected with an asteroid (it is now thought that the causes there have a complex structure), was caused not by the strike itself but by the subsequent effect: an «asteroid winter» connected with dust carried through the atmosphere. Also, a collision with an asteroid can cause an electromagnetic pulse, as in a nuclear bomb, owing to the fast movement of plasma. Besides, it is interesting to ask whether thermonuclear reactions could occur in a collision with a comet if its speed is close to the maximum possible of about 100 km/s (a comet on a head-on course, the worst case), since at the point of impact there could be temperatures of millions of degrees and huge pressures, as in the implosion of a nuclear bomb. Even if the contribution of these reactions to the energy of the explosion were small, they could produce radioactive pollution.
A strong explosion would also create strong chemical pollution of the whole atmosphere, at least by oxides of nitrogen, which would form rains of nitric acid. And a strong explosion would litter the atmosphere with dust, creating the conditions for a nuclear winter.
From all this it follows that a nuclear superbomb would be terrible not for the force of its explosion but for the quantity of radioactive fallout it would produce. Besides, it is clear that the terrestrial atmosphere acts as the most powerful factor in the distribution of effects.

Solar flashes and luminosity increase
What we know about the Sun gives no grounds for anxiety. The Sun cannot explode. Only processes unknown to us, or extremely improbable ones, could lead to a flare (coronal ejection) that would strongly singe the Earth in the XXI century. But other stars do have flares millions of times surpassing solar ones. And changes in the luminosity of the Sun do influence the Earth's climate, as is suggested by the coincidence of the Little Ice Age in the XVII century with the Maunder minimum of sunspots. Possibly the ice ages are also connected with luminosity fluctuations.
The process of gradual increase in the Sun's luminosity (10 percent per billion years) will in any case lead to the boiling of the oceans, taking other warming factors into account, within the next 1 billion years (that is, much earlier than the Sun becomes a red giant and, especially, a white dwarf). However, in comparison with the 100-year interval we are examining, this process is insignificant (unless it combines with other processes leading to irreversible global warming; see below).
There are assumptions that as hydrogen burns out in the central part of the Sun, which is already occurring, not only will the Sun's luminosity grow (luminosity grows on account of the growth of its size, not of its surface temperature), but also the instability of its burning. Possibly the last ice ages are connected with this reduction in the stability of burning. This can be understood through the following metaphor: when a fire has a lot of firewood, it burns brightly and steadily, but when most of the firewood has burnt through, it starts to die down a little and then flares up brightly again when it finds an unburnt branch.
A reduction of the hydrogen concentration in the center of the Sun could provoke convection, which usually does not occur in the Sun's core, so that fresh hydrogen would arrive in the core. Whether such a process is possible, whether it would be smooth or catastrophic, whether it would take years or millions of years, is difficult to say. Shklovsky assumed that as a result of convection the Sun's temperature falls every 200 million years for a period of 10 million years, and that we live in the middle of such a period. What is dangerous is the end of this process, when fresh fuel at last arrives in the core and the luminosity of the Sun increases. (However, this is a marginal theory, since one of the basic problems that generated it, the problem of solar neutrinos, has now been resolved.)
It is important to underline, however, that according to our physical understanding the Sun cannot flare up as a supernova or nova.
At the same time, to end intelligent life on Earth it would be enough for the Sun to warm by 10 percent over 100 years (that would raise the temperature on Earth by 10-20 degrees without a greenhouse effect, but with the greenhouse effect taken into account it would most likely rise above the critical threshold of irreversible warming). Such slow and rare changes in the temperature of solar-type stars would be difficult to notice by astronomical methods in observations of sun-like stars: the necessary accuracy of equipment has been reached only recently. (Besides, a logical paradox of the following kind is possible: sun-like stars are stable stars of spectral class G7 by definition. It is not surprising that as a result of observing them we find that these stars are stable.)
So, one variant of global catastrophe is that as a result of certain internal processes the luminosity of the Sun steadily increases by a dangerous amount (and we know that sooner or later this will occur). At the moment the Sun is on an ascending century-scale trend of activity, but no special anomalies in its behavior have been noticed. The probability of this happening in the XXI century is insignificantly small.
The second variant of global catastrophe connected with the Sun is the coincidence of two improbable events: a very large flare occurring on the Sun, and the ejection from that flare being directed at the Earth. Concerning the probability distribution of such an event, it may be assumed that the same empirical law operates here as for earthquakes and volcanoes: a 20-fold growth in the energy of an event leads to a 10-fold decrease in its frequency (the Gutenberg-Richter law of recurrence). In the XIX century a flare was observed that was, by modern estimates, 5 times stronger than the strongest flare of the XX century. Possibly, once in tens or hundreds of thousands of years, flares occur on the Sun similar in rarity and scale to terrestrial eruptions of supervolcanoes. Nevertheless these are extremely rare events. Large solar flares, even if they are not directed at the Earth, could slightly increase solar luminosity and lead to additional heating of the Earth. (Usual flares contribute no more than 0.1 percent.)
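The recurrence law cited above pins down a power-law exponent, and a short sketch makes its implications explicit; the numerical examples beyond the 20x/10x anchor are extrapolations of that assumed law, not data.

import math

# "20x the energy -> 10x rarer" fixes frequency ~ E^(-alpha),
# with alpha = log(10)/log(20) ~ 0.77.
alpha = math.log(10) / math.log(20)

def relative_frequency(energy_ratio: float) -> float:
    """How much rarer an event is, relative to a reference event of ratio 1."""
    return energy_ratio ** (-alpha)

print(f"alpha = {alpha:.3f}")
print(f"20x energy   -> {relative_frequency(20):.2f}x frequency (10x rarer)")
print(f"5x energy    -> 1 in {1 / relative_frequency(5):.1f}")    # the 19th-century flare
print(f"1000x energy -> 1 in {1 / relative_frequency(1000):.0f}") # supervolcano-class rarity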
At the moment mankind is incapable of affecting processes on the Sun, and this looks much more difficult than influencing volcanoes. The idea of dumping hydrogen bombs on the Sun to initiate a thermonuclear reaction looks unpersuasive (though such ideas have been expressed, which speaks to the human mind's tireless search for a Doomsday weapon).
There is a fairly precisely calculated scenario of the impact on the Earth of the magnetic component of a solar flare. In the worst case (which depends on the force of the magnetic impulse and its orientation: it should be opposite to the Earth's magnetic field), such a flare would create strong induced currents in long-distance electric power transmission lines, burning out transformers at substations. Under normal conditions the replacement cycle for transformers takes 20-30 years, and if they all burned down there would be nothing to replace them with, since manufacturing a similar quantity of transformers would take many years and would be difficult to organize without electricity. Such a situation would hardly lead to human extinction, but it is fraught with a global economic crisis and wars, which could start a chain of further deterioration. The probability of such a scenario is difficult to estimate, as we have possessed electric networks for only about a hundred years.

Gamma ray bursts
Gamma-ray bursts are intense short streams of gamma radiation coming from deep space. Gamma-ray bursts are apparently radiated in the form of narrow beams, so their energy is more concentrated than in ordinary stellar explosions. Possibly, strong gamma-ray bursts from nearby sources caused several of the mass extinctions tens and hundreds of millions of years ago. It is supposed that gamma-ray bursts occur in collisions of black holes and neutron stars or in collapses of massive stars. A close gamma-ray burst could destroy the ozone layer and even ionize the atmosphere. However, in the nearest environment of the Earth there are no visible suitable candidates either for sources of gamma-ray bursts or for supernovae. (The nearest candidate for a gamma-ray burst source, the star Eta Carinae, is far enough away, on the order of 7000 light years, and its axis is unlikely to be directed at the Earth when it inevitably explodes in the future, since gamma-ray bursts propagate as narrow beamed jets. However, the axis of the potential hypernova star WR 104, at almost the same distance, is directed almost toward the Earth. This star will explode within the next several hundred thousand years, which means the chance of a catastrophe from it in the XXI century is less than 0.1%, and taking into account the uncertainty of its rotation parameters and of our knowledge of gamma-ray bursts, even less.) Therefore, even taking into account the effect of observation selection, which in some cases increases the expected frequency of future catastrophes up to 10 times in comparison with the past (see my article «Anthropic principle and Natural catastrophes»), the probability of a dangerous gamma-ray burst in the XXI century does not exceed thousandths of a percent. Mankind could survive even a serious gamma-ray burst in various bunkers.
Estimating the risk of gamma-ray bursts, Boris Stern writes: «Take a moderate case of an energy release of 10^52 erg and a distance to the burst of 3 parsecs, 10 light years, or 10^19 cm; within such limits there are tens of stars. At such a distance, within a few seconds, 10^13 erg will be deposited on each square centimeter of the planet lying in the path of the gamma rays. This is equivalent to the explosion of a nuclear bomb on each hectare of the sky! The atmosphere does not help: though the energy will be dumped in its top layers, a considerable part will instantly reach the surface in the form of light. Clearly, all life on half of the planet will be instantly exterminated, and on the second half a little later through secondary effects. Even if we take a 100 times greater distance (that is the thickness of the galactic disk and hundreds of thousands of stars), the effect (a nuclear bomb per square with a side of 10 km) will be a hard strike, and here it is already necessary to estimate seriously what will survive, and whether anything will survive at all». Stern believes that a gamma-ray burst in our galaxy happens on average once in a million years. A gamma-ray burst from a star like WR 104 could cause intense destruction of the ozone layer on half of the planet. Possibly a gamma-ray burst was the cause of the Ordovician mass extinction 443 million years ago, when 60% of species (and a considerably larger share by number of individuals, since for the survival of a species the preservation of only a few individuals is enough) were lost. According to John Scalo and Craig Wheeler, gamma-ray bursts have a significant impact on the biosphere of our planet approximately every five million years.
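Stern's per-square-centimeter figure can be checked with one line of geometry: spreading 10^52 erg isotropically over a sphere of radius 10^19 cm. The sketch below does this and also reproduces the "nuclear bomb per hectare" comparison. (Real bursts are beamed, so the isotropic figure is a floor for an on-axis observer.)

import math

E_erg = 1e52          # assumed energy release of the burst
d_cm = 1e19           # ~10 light years

fluence = E_erg / (4 * math.pi * d_cm ** 2)      # erg per cm^2 at that distance
print(f"Fluence: {fluence:.1e} erg/cm^2")        # ~8e12, i.e. ~10^13 as quoted

# A hectare is 10^8 cm^2; 1 kiloton of TNT is 4.18e19 erg. The result is a
# Hiroshima-scale bomb per hectare, matching Stern's comparison.
print(f"Per hectare: {fluence * 1e8 / 4.18e19:.0f} kt TNT equivalent")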
Even a distant gamma-ray burst or other high-energy cosmic event can be dangerous through radiation exposure of the Earth, and not only through direct radiation, which the atmosphere largely blocks (though avalanches of high-energy particles from cosmic rays do reach the surface), but also through the formation of radioactive atoms in the atmosphere, which would lead to a scenario similar to the one described in connection with the cobalt bomb. Besides, gamma radiation causes oxidation of atmospheric nitrogen, creating an opaque poisonous gas, nitrogen dioxide, which forms in the upper atmosphere and can block sunlight and cause a new ice age. There is a hypothesis that the neutrino radiation arising in supernova explosions can in some cases lead to mass extinction, since neutrinos scatter elastically on heavy atoms with higher probability, and the energy of this scattering is sufficient to break chemical bonds; therefore neutrinos would cause DNA damage more often than other kinds of radiation with much greater energy. (J.I. Collar. Biological Effects of Stellar Collapse Neutrinos. Phys.Rev.Lett. 76 (1996) 999-1002, http://arxiv.org/abs/astro-ph/9505028)
The danger of a gamma-ray burst lies in its suddenness: it begins without warning from invisible sources and propagates at the speed of light. In any case, a gamma-ray burst can strike only one hemisphere of the Earth, as it lasts only a few seconds or minutes.
Activation of the core of the galaxy (where there is a huge black hole) is also a very improbable event. In distant young galaxies such cores actively absorb matter, which twists as it falls into an accretion disk and radiates intensely. This radiation is very powerful and could also interfere with the emergence of life on planets. However, the core of our galaxy is very large and can therefore absorb stars almost at once, without tearing them apart, and hence with less radiation. Besides, it is quite observable in the infrared (the source Sagittarius A*), though hidden by a thick dust layer in the optical range, and near the black hole there is no considerable quantity of matter ready for absorption, only one star in an orbit with a period of 5 years, and even that one can keep flying for a long time yet. And most importantly, it is very far from the Solar system.
Apart from distant gamma-ray bursts, there are also soft gamma-ray bursts connected with catastrophic processes on special neutron stars, magnetars. On August 27, 1998, a flare on a magnetar led to an instant decrease in the height of the Earth's ionosphere by 30 km, yet this magnetar was at a distance of 20,000 light years. Magnetars in the vicinity of the Earth are unknown, but detecting them may not be simple.

Supernova stars

A real danger to the Earth would be a close supernova explosion at a distance of 25 light years or less. But in the vicinity of the Sun there are no stars that could become a dangerous supernova. (The nearest candidates, Mira and Betelgeuse, are at distances of hundreds of light years.) Besides, the radiation of a supernova is a rather slow process (lasting months), and people would have time to hide in bunkers. Finally, only if a dangerous supernova were strictly in the equatorial plane of the Earth (which is improbable) could it irradiate the entire terrestrial surface; otherwise one of the poles would escape. See Michael Richmond's review «Will a Nearby Supernova Endanger Life on Earth?» http://www.tass-survey.org/richmond/answers/snrisks.txt. A rather close supernova could be a source of cosmic rays, which would lead to a sharp increase in cloud cover on the Earth, connected with the increase in the number of condensation nuclei for water. This could lead to a sharp cooling of the climate for a long period. (Nearby Supernova May Have Caused Mini-Extinction, Scientists Say, http://www.sciencedaily.com/releases/1999/08/990803073658.htm)

Super-tsunami
Ancient human memory keeps an enormous flood as the most terrible catastrophe. However, on the Earth there is not enough water for the ocean level to rise above the mountains. (Reports of the recent discovery of underground oceans are somewhat exaggerated: in fact they concern only rocks with elevated water content, at the level of 1 percent.) The average depth of the world ocean is about 4 km, and the limiting maximum height of a wave is of the same order, if we discuss the possibility of the wave itself rather than whether causes capable of creating a wave of such height are possible. That is less than the height of the high-mountain plateaus in the Himalayas, where people also live. Variants in which such a wave would be possible include a huge tidal wave arising if a very massive body flew near the Earth, or if the axis of rotation of the Earth were displaced or its speed of rotation changed. All these variants, though they appear in various "horror stories" about doomsday, look impossible or improbable.
So, it is very improbable that a huge tsunami will destroy all people, as submarines, many ships, and planes would escape. However, a huge tsunami could destroy a considerable part of the population of the Earth, transferring mankind to a postapocalyptic stage, for several reasons:


1. The energy of a tsunami, as a surface wave, decreases proportionally to 1/R if the tsunami is caused by a point source, and hardly decreases at all if the source is linear (as in an earthquake on a fault); see the sketch after this list.
2. Losses in the transmission of energy by the wave are small.
3. A considerable share of the population of the Earth, and a huge share of its scientific, industrial, and agricultural potential, is directly at the coast.
4. All oceans and seas are connected.
5. The idea of using a tsunami as a weapon already arose in the USSR in connection with the idea of creating gigaton bombs.
The good side here is that the most dangerous tsunamis are generated by linear natural sources, the movements of geological faults, while the sources of tsunami most accessible for artificial generation are point sources: explosions of bombs, falling asteroids, collapses of mountains.
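The geometric contrast in point 1 of the list above can be shown in a few lines: energy per unit length of wavefront from a point source is spread over a growing circle, while a long linear source keeps its fronts nearly parallel. This is an idealized sketch that ignores dissipation and dispersion.

import math

def point_source_flux(R_km: float, E: float = 1.0) -> float:
    """Energy per km of wavefront at distance R from a point source (circle grows)."""
    return E / (2 * math.pi * R_km)

def line_source_flux(R_km: float, E_per_km: float = 1.0) -> float:
    """Energy per km of front from a long linear source: distance-independent."""
    return E_per_km

for R in (10, 100, 1000):
    print(f"R = {R:>5} km: point source {point_source_flux(R):.2e}, "
          f"line source {line_source_flux(R):.2e}")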

Super-Earthquake
We could call a super-earthquake a hypothetical large-scale quake leading to the full destruction of human-built structures over the entire surface of the Earth. No such quakes have happened in human history, and the only scientifically solid scenario for them seems to be a large asteroid impact.
Such an event could not result in human extinction by itself, as there would be ships, planes, and people in the countryside. But it would unequivocally destroy all technological civilization. To do so it would need an intensity of around 10 on the Mercalli scale http://earthquake.usgs.gov/learn/topics/mercalli.php

It would be interesting to assess the probability of worldwide earthquakes, which could destroy everything on the Earth's surface. Plate tectonics as we know it cannot produce them. But the distribution of the largest earthquakes could have a long, heavy tail, which may include worldwide quakes.
So, how could it happen?
1) An asteroid impact surely could result in a worldwide earthquake. I think a 1-mile asteroid would be enough to create one.

2) A change in the buoyancy of a large land mass might result in a whole continent uplifting, perhaps by miles. (This is just my conjecture, not a proven scientific fact, so the possibility needs further assessment.) A smaller-scale event of this type happened in 1957 during the Gobi-Altay earthquake, when a whole mountain ridge moved: https://en.wikipedia.org/wiki/1957_Mongolia_earthquake
3) Unknown processes in the mantle sometimes result in large deep earthquakes: https://en.wikipedia.org/wiki/2013_Okhotsk_Sea_earthquake
4) Very hypothetical changes in the Earth's core could also result in worldwide earthquakes: if the core somehow collapsed, because of a change in the crystal structure of its iron or because of a possible explosion of a (hypothetical) natural uranium nuclear reactor inside it. Passing through clouds of dark matter might activate the Earth's core, as it could be heated by the annihilation of dark matter particles, as was suggested in one recent study: http://www.sciencemag.org/news/2015/02/did-dark-matter-kill-dinosaurs Such warming of the Earth's core would result in its expansion and might trigger large deep quakes.
5) Superbomb explosion. Blockbuster bombs in WW2 were used to create mini-quakes as their main killing effect, exploding after penetrating the ground. Large nukes could be used the same way, but a super-earthquake requires energy beyond the current power of nukes by several orders of magnitude. Many superbombs might be needed to create a superquake.
6) Cracking of the Earth in the area of oceanic rifts. I have read suggestions that oceanic rifts expand not gradually but in large jumps. These mid-oceanic rifts create new oceanic floor: https://en.wikipedia.org/wiki/Mid-Atlantic_Ridge The evidence for this is the large "steps" in the ocean floor in the zones of oceanic rifts. Boiling of water trapped in a rift and in contact with magma might also contribute to an explosive, zipper-style rupture of the rifts. But this idea may come from fringe-science catastrophism, so it should be taken with caution.
7) Supervolcano explosions. Large-scale eruptions like kimberlite-pipe explosions would also produce an earthquake that would be felt over the whole Earth, though not uniformly. They would have to be much stronger than the Krakatoa explosion of 1883. Large explosions of natural explosives like TNT https://en.wikipedia.org/wiki/Trinitrotoluene at a depth of 100 km have been suggested as a possible mechanism of kimberlite explosions.
Superquake effects:
1. A superquake would surely come with a megatsunami, which would cause most of the damage. A supertsunami could be miles high in some areas and scenarios. The tsunamis could have different etiologies; for example, resonance could play a role, or a change in the speed of the Earth's rotation.
2. Ground liquefaction https://en.wikipedia.org/wiki/Soil_liquefaction could result in "ground waves", a kind of surface wave on certain kinds of soil (this is my own idea, which needs more research).
3. Supersonic impact waves and high-frequency vibration. A superquake could come with unusual wave patterns that normally dissipate in soil or do not appear at all. These could include killing sound, more than 160 dB, or supersonic shock waves that reflect from the surface and destroy solid surfaces by spalling, the same way anti-tank munitions do: https://en.wikipedia.org/wiki/High-explosive_squash_head
4. Other volcanic events and gas releases. Methane deposits in the Arctic would be destabilized, and methane, a strong greenhouse gas, would erupt to the surface. Carbon dioxide would be released from the oceans as a result of shaking (the same way shaking a soda can produces bubbles). Other gases, including sulfur compounds and CO2, would be released by volcanoes.
5. Most dams would fail, resulting in flooding.
6. Nuclear facilities would melt down. See the discussion by Seth Baum: http://futureoflife.org/2016/07/25/earthquake-existentialrisk/#comment-4143
7. Biological weapons would be released from facilities.
8. Nuclear warning systems would be triggered.
9. All roads and buildings would be destroyed.
10. Large fires would break out.
11. As the natural ability of the Earth to dissipate seismic waves became saturated, the waves would reflect inside the Earth several times, resulting in a very long and repeating quake.
12. The waves (from a surface event) would focus on the opposite side of the Earth, as may have happened after the Chicxulub asteroid impact, which coincides with the Deccan traps on the opposite side of the Earth and with comparable destruction there.
13. Large displacement of mass could result in a small change in the speed of rotation of the Earth, which would contribute to tsunamis.
14. Secondary quakes would follow, as energy was released from tectonic tensions and mountain collapses.
Large, non-global earthquakes could also become precursors of global catastrophes in several ways. The following podcast by Seth Baum is devoted to this possibility: http://futureoflife.org/2016/07/25/earthquake-existentialrisk/#comment-4147
1) Destruction of biological facilities, like the CDC, which holds smallpox samples and other viruses.
2) Nuclear meltdowns.
3) Economic crisis or slowing of technological progress in the case of a large earthquake in San Francisco or another important area.
4) The start of a nuclear war.
5) X-risk prevention groups are disproportionately concentrated in San Francisco and around London. They are more concentrated than the possible sources of risks, so in the event of a devastating earthquake in SF our ability to prevent x-risks could be greatly reduced.

Polarity reversal of the magnetic field of the Earth
We live in a period of weakening, and probably of polarity reversal, of the magnetic field of the Earth. In itself a reversal of the magnetic field will not result in the extinction of people, as reversals have repeatedly occurred in the past without appreciable harm. In the process of a reversal the magnetic field could fall to zero or become oriented toward the Sun (with a pole on the equator), which would lead to an intense "sucking" of charged particles into the atmosphere. The simultaneous combination of three factors (the fall of the Earth's magnetic field to zero, exhaustion of the ozone layer, and a strong solar flare) could result in the death of all life on Earth, or at least in the crash of all electric systems, which is fraught with the fall of technological civilization. And this crash itself is not so terrible; what is terrible is what would happen in its course with nuclear weapons and all other technologies.

Nevertheless, the magnetic field is decreasing slowly enough (though the speed of the process is growing) that it will hardly reach zero in the coming decades. Another catastrophic scenario is that magnetic field change is connected with changes in the flows of magma in the core, which somehow could influence global volcanic activity (there are data on the correlation of periods of volcanic activity with periods of pole change). A third risk is the possibility that our understanding of the reasons for the existence of the Earth's magnetic field is wrong.
There is a hypothesis that the growth of the solid core of the Earth has made the Earth's magnetic field less stable, exposing it to more frequent reversals, which is consistent with the hypothesis of the weakening of the protection we «receive» from the anthropic principle.

Emergence of a new illness in nature
It is extremely improbable that one illness could appear capable of destroying all people at once. Even in the case of a mutation of bird flu or bubonic plague, many people would survive or would not catch the disease. However, as the number of people grows, the number of "natural reactors" in which a new virus can be cultivated grows as well. Therefore it is impossible to exclude the chance of a large pandemic in the spirit of the "Spanish" flu of 1918. Though such a pandemic could not kill all people, it could seriously damage the level of development of society, lowering it to one of the postapocalyptic stages. Such an event can happen only before powerful biotechnologies appear, as they will be able to create medicines against it quickly enough, while simultaneously eclipsing the risks of natural illnesses with the much greater speed of creation of artificial diseases. A natural pandemic is also possible at one of the postapocalyptic stages, for example after a nuclear war, though in that case the risks of the application of biological weapons would prevail. For a natural pandemic to become really dangerous to all people, a set of essentially different deadly agents would have to appear simultaneously, which is naturally improbable. There is also a chance that a powerful epizootic - like colony collapse disorder of bees (CCD), the African fungus on wheat (Uganda mold UG99), bird flu, and the like - will disrupt the supply system of humanity so badly that it results in a world crisis fraught with wars and a decrease in the level of development. The appearance of a new illness would strike not only at the population, but also at the connectivity which is an important factor in the existence of a unified planetary civilization. The growth of the population and the increase in the volume of identical agricultural crops increase the chances of the accidental appearance of a dangerous virus, as the speed of "search" increases. From this it follows that there is a certain limit to the size of the interconnected population of one species, beyond which new dangerous illnesses will arise every day. Of real-life illnesses it is necessary to note two:
Bird flu. As has already been said repeatedly, it is not bird flu itself that is dangerous, but a possible mutation of strain H5N1 capable of being transmitted from human to human. For this, in particular, the attachment fibers on the surface of the virus would have to change so that the virus attached not deep in the lungs but higher up, where it has more chances to get out as cough droplets. Possibly this is a rather simple mutation. Though there are different opinions on whether H5N1 is capable of mutating this way, history already holds precedents of deadly flu epidemics. The worst estimate of the number of possible victims of a mutated bird flu was 400 million people. And though that does not mean the full extinction of mankind, it would almost certainly send the world to a certain post-apocalyptic stage.
AIDS. This illness in its modern form cannot lead to the full extinction of mankind, though it has already sent a number of African countries to a post-apocalyptic stage. There are interesting arguments by Supotinsky about the nature of AIDS and about how epidemics of retroviruses have repeatedly culled the population of hominids. He also assumes that HIV has a natural carrier, possibly a microorganism. If AIDS began to spread like the common cold, the fate of mankind would be sad. However, even now AIDS is deadly in almost 100% of cases, and it develops slowly enough to have time to spread.
We should also note new strains of microorganisms that are resistant to antibiotics, for example hospital infections of golden staphylococcus and drug-resistant tuberculosis. The process of growth of the resistance of various microorganisms to antibiotics is ongoing, and such organisms are spreading more and more, which at some point could produce a cumulative wave of many resistant illnesses (against the background of weakened human immunity). Certainly we may count on biological supertechnologies to defeat them, but if there is a certain delay in the appearance of such technologies, the fate of mankind is not good. The revival of smallpox, plague, and other illnesses is possible, but separately each of them cannot destroy all people. By one hypothesis, Neanderthals died out because of a variety of "mad cow disease", that is, an illness caused by a prion (an autocatalytic misfolded form of a protein) and spread by means of cannibalism; so we cannot exclude the risk of extinction from a natural illness for people as well.
Finally, the story of how the virus of the "Spanish" flu was extracted from burial sites, its genome read and published on the Internet, looks absolutely irresponsible. Under pressure from the public the genome was then removed from open access. But after that there was also a case when this virus was by mistake sent out to thousands of laboratories around the world for equipment testing.

Marginal natural risks
Further we will mention global risks connected with natural events whose probability in the XXI century is smallest, and whose very possibility, moreover, is not generally accepted. Though I myself believe that these events are in general impossible, I think they should be taken into consideration, and that it is necessary to create a separate category for them in our list of risks so that, following the precautionary principle, we keep a certain vigilance for new information able to confirm these assumptions.

Hypercanes

Kerry Emanuel of MIT put forward the hypothesis that in the past the Earth's atmosphere was much less stable, resulting in mass extinctions. If the temperature of the ocean surface increased by 15-20 degrees, which is possible as a result of sharp global warming, an asteroid fall, or an underwater eruption, it would give rise to a so-called hypercane: a huge storm with wind speeds of approximately 200-300 meters per second, the size of a continent, a long lifetime, and a pressure in the center of about 0.3 atmospheres. Moving away from its place of origin, such a hypercane would destroy all life on land, while at the same time a new hypercane would form in its place over the warm ocean site. (This idea is used in John Barnes's novel «Mother of Storms».)
Emanuel has shown that when an asteroid more than 10 km in diameter falls into a shallow sea (as happened 65 million years ago, when the asteroid fell near Mexico, an event associated with the extinction of the dinosaurs), a high-temperature site about 50 km across could form, which would be enough to produce a hypercane. The hypercane ejects a huge amount of water and dust into the upper atmosphere, which could lead to dramatic global cooling or warming.
http://en.wikipedia.org/wiki/Great_Hurricane_of_1780
http://en.wikipedia.org/wiki/Hypercane
Emanuel, Kerry (1996-09-16). "Limits on Hurricane Intensity". Center for Meteorology and Physical Oceanography, MIT. http://wind.mit.edu/~emanuel/holem/node2.html#SECTION00020000000000000000
Did storms land the dinosaurs in hot water? http://www.newscientist.com/article/mg14519632.600-did-storms-land-the-dinosaurs-in-hot-water.html

Unknown processes in the core of the Earth
There are assumptions that the source of terrestrial heat is a natural nuclear reactor of uranium several kilometers in diameter at the planet's center. Under certain conditions, V. Anisichkin assumes, for example in a collision with a large comet, it could pass into a supercritical state and cause an explosion of the planet, which possibly caused the explosion of Phaeton, from which part of the asteroid belt may have been generated. The theory is obviously disputable, as even the existence of Phaeton is not proven; on the contrary, it is considered that the asteroid belt was formed from independent planetesimals. Another author, R. Raghavan, assumes that the natural nuclear reactor in the center of the Earth has a diameter of 8 km and could cool down and cease to create terrestrial heat and the magnetic field.
If by geological measures certain processes have already ripened, it is much easier to press the «trigger» that starts them, and so human activity could awaken them. The distance to the border of the terrestrial core is about 3000 km, while to the Sun it is 150,000,000 km. Geological catastrophes kill tens of thousands of people every year, while solar catastrophes kill nobody. Directly under us there is a huge cauldron of viscous lava impregnated with compressed gases. The largest extinctions of living beings correlate well with epochs of intensive volcanic activity. Processes in the core in the past possibly caused such terrible phenomena as trap volcanism. At the boundary of the Permian period, 250 million years ago, 2 million cubic km of lava poured out in Eastern Siberia, which exceeds the volumes of eruptions of modern supervolcanoes a thousandfold. It led to the extinction of 95% of species.
Processes in the core are also connected with changes in the magnetic field of the Earth, the physics of which is not yet very clear. V.A. Krasilov, in the article «Model of biospheric crises. Ecosystem reorganisations and biosphere evolution», assumes that periods of invariance, followed by periods of variability, of the Earth's magnetic field precede enormous trap eruptions. Now we live in a period of variability of the magnetic field, but not after a long pause. Periods of variability of the magnetic field last tens of millions of years, being replaced by no less long periods of stability. So in the natural course of events we have millions of years before the next act of trap volcanism, if it happens at all. The basic danger here is that people, by any deep penetrations into the Earth, could push these processes, if they have already ripened to a critical level.
In the liquid terrestrial core the most dangerous thing is the gases dissolved in it. They are capable of bursting to the surface if they are given a channel. As heavy iron settles downward, it is chemically reduced (at the expense of heat), and a greater and greater quantity of gases is liberated, generating the process of degassing of the Earth. There are assumptions that the powerful atmosphere of Venus arose rather recently, as a result of intensive degassing of its interior. A certain danger is presented by the temptation to obtain free energy from the Earth's interior by pumping out heated magma. (Though if this were done in places not connected with plumes, it should be safe enough.) There is an assumption that the spreading of the oceanic floor from the zones of mid-ocean rifts occurs not smoothly but in jerks, which on the one hand are much rarer than earthquakes in subduction zones (which is why we have not observed them), but on the other hand much more powerful. The following metaphor is pertinent here: the rupture of a balloon is a much more powerful process than its crumpling. The thawing of glaciers leads to the unloading of lithospheric plates and to the strengthening of volcanic activity (for example, in Iceland, by a factor of 100). Therefore the future thawing of the Greenland ice sheet is dangerous.
Finally, there are bold assumptions that in the center of the Earth (and also of other planets and even stars) there are microscopic (on astronomical scales) relic black holes which arose at the time of the Big Bang. See A.G. Parhomov's article «About the possible effects connected with small black holes». Under Hawking's theory, relic holes should evaporate slowly, but with increasing speed closer to the end of their existence, so that in its last seconds such a hole produces a flash with an energy equivalent to approximately 1000 tons of mass (228 tons in the last second alone), which is approximately equivalent to 20,000 gigatons of TNT, roughly the energy of a collision of the Earth with an asteroid 10 km in diameter. Such an explosion would not destroy the planet, but would cause an earthquake of huge force over the whole surface, probably sufficient to destroy all structures and to throw civilization back to a deeply postapocalyptic level. However, people would survive, at least those who were in planes and helicopters at that moment. A microscopic black hole in the center of the Earth would undergo two processes simultaneously: accretion of matter and energy loss by Hawking radiation, which could be in balance; however, a shift of the balance in either direction would be fraught with catastrophe: either the explosion of the hole, or the absorption of the Earth, or its destruction through a stronger release of energy during accretion. I remind the reader that there are no facts confirming the existence of relic black holes, and this is only an improbable assumption which we consider proceeding from the precautionary principle.
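The 20,000-gigaton figure for the final flash can be checked directly with E = mc^2; the sketch below converts the 1000 tons of evaporating mass (and the 228 tons of the last second) into TNT equivalent.

c = 3.0e8                      # speed of light, m/s
mass_kg = 1000 * 1000          # 1000 tons of mass converted to energy
E_joules = mass_kg * c ** 2    # ~9e22 J

GT_TNT = 4.18e18               # joules per gigaton of TNT
print(f"E = {E_joules:.1e} J = {E_joules / GT_TNT:,.0f} Gt TNT")   # ~21,500 Gt
print(f"Last second alone: {228_000 * c ** 2 / GT_TNT:,.0f} Gt TNT")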

Sudden degassing of gases dissolved in the world ocean
Gregory Ryskin published in 2003 the article «Methane-driven oceanic eruptions and mass extinctions», in which he considers the hypothesis that disturbances of the metastable state of gases dissolved in water, first of all methane, were the cause of many mass extinctions. With the growth of pressure the solubility of methane grows, so at depth it can reach considerable concentrations. But this state is metastable: if the water is stirred, a chain reaction of degassing begins, as in an opened bottle of champagne. The energy release in the worst case would exceed the energy of all nuclear arsenals on Earth by a factor of 10,000. Ryskin shows that in the worst case the mass of the released gases could reach tens of billions of tons, which is comparable to the mass of the entire biosphere of the Earth. The release of gases would be accompanied by powerful tsunamis and the burning of the gases. It could result either in the cooling of the planet through the formation of soot, or, on the contrary, in irreversible warming, as the released gases are greenhouse gases. The necessary conditions for the accumulation of dissolved methane in the ocean depths are anoxia (the absence of dissolved oxygen, as, for example, in the Black Sea) and the absence of mixing. The decomposition of methane hydrates on the sea bottom could also promote the process. To cause catastrophic consequences, Ryskin thinks, the degassing of even a small area of the ocean is enough. The sudden degassing of Lake Nyos, which in 1986 carried away the lives of 1700 people, is an example of a catastrophe of a similar sort. Ryskin notes that the question of the accumulation of dissolved gases in the modern world ocean demands further research.
Such an eruption would be relatively easy to provoke by lowering a pipe into the water and starting to pump water upward, which would launch a self-reinforcing process. This could also happen accidentally when drilling the deep seabed. A large quantity of hydrogen sulfide has accumulated in the Black Sea, and there are anoxic areas there as well.
Gregory Ryskin. Methane-driven oceanic eruptions and mass extinctions. Geology 31, 741-744, 2003. http://pangea.stanford.edu/Oceans/GES205/methaneGeology.pdf


Explosions of other planets of the solar system
There are other assumptions about possible causes of explosions of planets, besides the explosions of uranium reactors in the centers of planets according to Anisichkin, namely special chemical reactions in electrolyzed ice. E.M. Drobyshevsky, in the article «Danger of explosion of Callisto and priority of space missions» (Zhurnal Tekhnicheskoi Fiziki, 1999, vol. 69, issue 9, http://www.ioffe.ru/journals/jtf/1999/09/p10-14.pdf), assumes that such events regularly occur in the ice satellites of Jupiter, and that they are dangerous to the Earth through the formation of a huge meteoric stream. The electrolysis of ice occurs as a result of the movement of the celestial body containing it through a magnetic field, which causes powerful currents. These currents lead to the decomposition of water into hydrogen and oxygen, which produces an explosive mixture. He states the hypothesis that in all the satellites these processes have already come to an end, except on Callisto, which could blow up at any moment, and he suggests directing considerable resources to the research and prevention of this phenomenon. (It should be noted that in 2007 Comet Holmes exploded, and nobody knows why; electrolysis of its ice during its flyby of the Sun is possible.)
I would note that if the Drobyshevsky hypothesis is correct, the very idea of a research mission to Callisto and deep drilling of its interior in search of electrolyzed ice is dangerous, because it could trigger the explosion.
In any case, whatever might cause the destruction of another planet or a large satellite in the Solar system, it would represent a long-lasting threat to terrestrial life through the fall of fragments. (For a description of one hypothesis about the infall of fragments, see: An asteroid breakup 160 Myr ago as the probable source of the K/T impactor, http://www.nature.com/nature/journal/v449/n7158/abs/nature06070.html)

Cancellation of "protection" which to us provided Antropic principle
I consider this question in detail in the article «Natural catastrophes and the Anthropic principle». The essence of the threat is that intelligent life on Earth most likely emerged near the end of the period of stability of the natural factors necessary to sustain it. Or, in short: the future is not similar to the past, because we see the past through the filter of observation selection. An example: a man has won at roulette three times in a row, betting on a single number. Using inductive logic, he comes to the false conclusion that he will keep winning. But if he knew that 30,000 people were playing alongside him and that all of them had been eliminated, he could come to the truer conclusion that with odds of 35 to 36 he will lose in the next round. In other words, his period of stability, consisting of a series of three wins, has ended.
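The roulette example can be made concrete with a toy simulation (a sketch; the player count and payout odds are the illustrative numbers from the text, not data):

    import random

    # Toy model of observation selection: many players each bet on one of
    # 36 numbers; we then interview only those who won three times in a row.
    N_PLAYERS, WIN_P = 30_000, 1 / 36
    survivors = sum(
        all(random.random() < WIN_P for _ in range(3))
        for _ in range(N_PLAYERS)
    )
    # Roughly 30000 / 36**3 ~ 0.6 survivors per run; each of them has seen
    # nothing but wins, yet their chance of losing next round is still 35/36.
    print(survivors)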
For intelligent life to form on Earth, a unique combination of conditions had to operate for a long time: uniform luminosity of the Sun, absence of nearby supernovae, absence of collisions with very large asteroids, and so on. However, it does not follow at all that these conditions will continue to operate forever. Accordingly, in the future we can expect these conditions to gradually disappear. The speed of this process depends on how improbable the combination of conditions was that allowed intelligent life to form on Earth (as in the roulette example: the more improbable the run of three wins, the greater the probability that the player loses in the fourth round; if the roulette wheel had 100 divisions, the chance of surviving the fourth round would fall to 1 in 100). The more improbable the combination, the sooner it will end. This follows from the effect of elimination: if in the beginning there were, say, billions of planets around billions of stars on which intelligent life could have started to develop, then as a result of elimination intelligent life formed only on Earth, while the other planets dropped out, as Mars and Venus did. However, the intensity of this elimination is unknown to us, and the effect of observation selection prevents us from learning it, since we can only find ourselves on a planet where life has survived and intelligence has been able to develop. Yet the elimination continues at the same rate.
To an external observer this process will look like a sudden and causeless deterioration of many of the vital parameters supporting life on Earth. Considering this and similar examples, one can assume that this effect may increase the probability of sudden natural catastrophes capable of cutting off life on Earth, but by no more than a factor of 10. (No more than that, because restrictions of the kind described in the article by Bostrom and Tegmark, who consider this problem in relation to cosmic catastrophes, come into play. However, the real value of these restrictions for geological catastrophes requires more exact research.) For example, if the absence of super-huge volcanic eruptions flooding the entire surface of the Earth is a lucky coincidence, and in the normal course of things they should occur once every 500 million years, then the chance of the Earth finding itself in its current unique position would be 1 in 256, and the expected remaining time of existence of life would be about 500 million years (a worked version of this estimate is given below).
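The arithmetic behind the 1-in-256 figure, written out (a sketch; it assumes a 1/2 chance of escaping an eruption in each period, which is one simple way to reproduce the figure in the text):

$$ \frac{4\times10^{9}\ \text{yr}}{5\times10^{8}\ \text{yr}} = 8 \quad\Rightarrow\quad P(\text{no eruption in 8 periods}) = \left(\tfrac{1}{2}\right)^{8} = \frac{1}{256}. $$

A Poisson model with one expected eruption per 500-million-year period would instead give $e^{-8} \approx 1/2981$; either way, our unbroken record looks like a rare draw, and the expected time to the next event is of order one period, i.e. 500 million years.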
We will return to the discussion of this effect in the chapter on indirect estimates of the probability of global catastrophe at the end of the book. The important methodological consequence is that with respect to global catastrophes we cannot use any reasoning of the form: it will not happen in the future because it did not happen in the past. On the other hand, a tenfold worsening of the odds of natural catastrophes reduces the expected time of existence of conditions for life on Earth from a billion years to a hundred million, which makes only a very small contribution to the probability of extinction in the XXI century.
A frightening confirmation of the hypothesis that we most likely live at the end of a period of stability of natural processes is the article by R. Rohde and R. Muller in Nature on the cycle of extinctions of living beings with a period of 62 (±3) million years: the last extinction was just 65 million years ago, so the time of the next cyclic extinction event is already long overdue. We should also note that if the proposed hypothesis about the role of observation selection in the underestimation of the frequency of global catastrophes is true, it means that intelligent life on Earth is an extremely unusual event in the Universe, and that with high probability we are alone in the observable Universe. In that case we need not fear extraterrestrial invasion, but we also cannot draw any conclusions about the frequency of self-destruction of advanced civilizations from Fermi's paradox (the silence of space). As a result, the net contribution of the stated hypothesis to our estimate of the probability of human survival may be positive.

Debunked and false risks from media, science fiction and fringe science or old theories

Nemesis
Gases from comets
Rogue black holes
Neutrinos warming the Earth's core

There are also a number of theories that have either been put forward by various researchers and since refuted, or that circulate in the yellow press and in popular consciousness and are based on honest mistakes, lies and misunderstanding, or that are associated with certain belief systems. We should, however, allow a tiny chance that some of these theories prove correct.
1. A sudden change of the direction and/or speed of rotation of the Earth, causing catastrophic earthquakes, floods and climate change. A change in the shape of the Earth associated with growth of the polar caps might cause the rotation axis to cease to be the axis with the lowest moment of inertia, so that the Earth would flip over like a «Dzhanibekov nut». Alternatively, this could happen as a result of changes in the Earth's moment of inertia associated with a rearrangement of its interior, or of a change of rotation speed as a result of a collision with a large asteroid.
2. Theories of a «great deluge», based on the Biblical legend.
3. Explosion of the Sun six years from now, supposedly predicted by a Dutch astronomer.
4. Collision of the Earth with a wandering black hole. As far as is known, there are no black holes in the vicinity of the Sun, since they would be detectable by the accretion of interstellar gas onto them and by gravitational distortion of the light of more distant stars. Moreover, the «sucking ability» of a black hole is no different from that of any star of similar mass, so a black hole is no more dangerous than a star. Collisions with stars, or at least dangerous close approaches to them, occur rarely, and all such approaches are millions of years away. Since black holes in the galaxy are far less numerous than stars, the chances of a collision with a black hole are smaller still. We cannot, however, exclude a collision of the Solar system with a single rogue planet, but this is highly unlikely, and a relatively harmless event.

Weakening of stability and human interventions
The contribution of the probability shift from the cancellation of the Anthropic principle's protection to the total probability of extinction in the XXI century is apparently small. Namely, even if the Sun will maintain a comfortable temperature on the Earth not for 4 billion more years but only for 400 million, the XXI century still receives only ten-thousandths of a percent of catastrophe probability, if we distribute the probability of the Sun's "failure" uniformly over that time (the arithmetic is written out below).
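A minimal worked version of this estimate, assuming the failure probability is spread uniformly over the remaining lifetime (it gives a figure even smaller than the one the argument needs, i.e. of the same negligible order):

$$ P(\text{failure in the XXI century}) \approx \frac{100\ \text{yr}}{4\times10^{8}\ \text{yr}} = 2.5\times10^{-7} \approx 0.000025\%. $$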
However, the weakening of the stability that the Anthropic principle used to give us means, first, that processes become less steady and more prone to fluctuations (this is well known for the Sun, which, as its hydrogen is exhausted, will burn ever more brightly and non-uniformly), and second, and seemingly more important, that they become more sensitive to possible small human influences. It is one thing to tug on a slack elastic band, and quite another to tug on one stretched to its breaking point.
For example, if a certain supervolcano eruption has ripened, many thousands of years may pass before it occurs on its own, but a borehole a few kilometres deep could be enough to break the stability of the cap of the magma chamber. As the scale of human activity grows in all directions, the chances of stumbling onto such an instability increase. It could be an instability of the vacuum, or of the terrestrial lithosphere, or of something else we have not thought of at all.

Block 2 – Anthropogenic risks
Chapter 6. Global warming
TL;DR: The small probability of runaway global warming requires the preparation of urgent unconventional prevention measures, namely sunlight dimming.
Abstract:
The most expected version of limited global warming of several degrees C in the 21st century will not result in human extinction, as even the thawing after the Ice Age in the past did not have such an impact. The main question of global warming is the possibility of runaway global warming and the conditions in which it could happen. Runaway warming means warming of 30 C or more, which would make the Earth uninhabitable. It is an unlikely event, but it could result in human extinction. Global warming could also create some context risks, which would change the probability of other global risks.
I will not go into all the details of the nature of global warming and the established ideas about its prevention, as they have extensive coverage in Wikipedia (https://en.wikipedia.org/wiki/Global_warming and https://en.wikipedia.org/wiki/Climate_change_mitigation). Instead I will concentrate on heavy-tail risks and less conventional methods of global warming prevention.
The map provides a summary of all known methods of GW prevention, and also of ideas about the scale of GW and the consequences of each level of warming.

The map also shows how the prevention plans depend on the current level of technologies. In short, the map has three variables: level of tech, level of urgency of GW prevention, and scale of the warming.
The following text and the map are complementary: the text provides in-depth details on some of the ideas, and the map gives a general overview of the prevention plans.
The map: http://immortality-roadmap.com/warming3.pdf
Uncertainty
The main feature of climate theory is its intrinsic uncertainty. This uncertainty is not about climate change denial; we are almost sure that anthropogenic climate change is real. The uncertainty is about its exact scale and timing, and especially about low-probability tails with high consequences. In risk analysis we can't ignore these tails, as they bear the most risk. So I will focus mainly on the tails, but this in turn requires a focus on more marginal, contested or unproved theories.
These uncertainties are especially large if we make projections for 50-100 years from now; they are connected with the complexity of the climate, the unpredictability of future emissions and the chaotic nature of the climate.
Clathrate methane gun
An unconventional but possible global catastrophe accepted by several researchers is a greenhouse catastrophe named the "runaway greenhouse effect". The idea is well covered in Wikipedia: https://en.wikipedia.org/wiki/Clathrate_gun_hypothesis
Currently large amounts of methane clathrate are present in the Arctic, and since this area is warming more quickly than other regions, the gases could be released into the atmosphere. https://en.wikipedia.org/wiki/Arctic_methane_emissions
Predictions of the speed and consequences of this process differ. Mainstream science sees the methane cycle as a dangerous but slow process which could eventually result in a 6 C rise in global temperature; this seems bad, but survivable. It would also take thousands of years.
It has happened once before, in the Late Paleocene, in the event known as the Paleocene-Eocene Thermal Maximum (PETM), https://en.wikipedia.org/wiki/Paleocene%E2%80%93Eocene_Thermal_Maximum, when the temperature jumped by about 6 C, probably because of methane. Methane-driven global warming is just 1 of 10 hypotheses explaining the PETM. But during the PETM global methane clathrate deposits were around 10 times smaller than they are at present, because the ocean was warmer. This means that if the clathrate gun fires again, it could have much more severe consequences.
But some scientists think that it may happen quickly and with stronger effects, resulting in runaway global warming because of several positive feedback loops. See, for example, the blog http://arctic-news.blogspot.ru/
There are several possible positive feedback loops which could make methane-driven warming stronger:
1) The Sun is now brighter than before because of stellar evolution. The increase in the Sun's luminosity will eventually result in runaway global warming in a period 100 million to 1 billion years from now. The Sun will become thousands of times more luminous when it becomes a red giant. See more here: https://en.wikipedia.org/wiki/Future_of_the_Earth#Loss_of_oceans
2) After a long period of cold climate (ice ages), a large amount of methane clathrate accumulated in the Arctic.
3) Methane is a short-lived atmospheric gas (around seven years), so the same amount of methane produces much more intense warming if it is released quickly rather than scattered over centuries. The speed of methane release depends on the speed of global warming. Anthropogenic CO2 increases very quickly and could be followed by a quick release of the methane. (A toy model of this lifetime effect is sketched after this list.)
4) Water vapor is the strongest greenhouse gas, and more warming results in more water vapor in the atmosphere.
5) Coal burning resulted in large global dimming (https://en.wikipedia.org/wiki/Global_dimming), and the current switch to cleaner technologies could stop the masking of the warming.
6) The ocean's ability to dissolve CO2 falls with a rise in temperature.
7) The Arctic has the biggest temperature increase due to global warming, with a projected growth of 5-10 C; as a result it will lose its ice shield, which would reduce the Earth's albedo and result in higher temperatures. The same is true for permafrost and snow cover.
8) Warmer Siberian rivers bring their water into the Arctic ocean.
9) The Gulf Stream will bring warmer water from the Gulf of Mexico to the Arctic ocean.
10) The current period of a calm, spotless Sun will end, resulting in further warming.
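A toy illustration of point 3 (a sketch; the seven-year lifetime is from the text, the release sizes and units are arbitrary assumptions):

    # Toy model: atmospheric methane burden M obeys dM/dt = source - M/tau,
    # with lifetime tau ~ 7 years. Release the same total amount over
    # 10 years vs. 500 years and compare the peak burden (arbitrary units).
    tau, total = 7.0, 1000.0
    dt, t_end = 0.1, 600.0
    for t_release in (10.0, 500.0):
        m, peak = 0.0, 0.0
        for step in range(int(t_end / dt)):
            source = total / t_release if step * dt < t_release else 0.0
            m += (source - m / tau) * dt
            peak = max(peak, m)
        print(t_release, round(peak, 1))
    # Prints a peak near 530 for the 10-year release and near 14 for the
    # 500-year release: the same methane, but an almost 40x stronger pulse.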
Anthropic bias
One unconventional reason why global warming may be more dangerous than we are used to thinking is anthropic bias.
1. We tend to think that we are safe because no runaway global warming event has ever happened in the past. But we could only observe a planet where this never happened. Milan Ćirković and Nick Bostrom have written about this; the real rate of runaway warming could thus be much higher. See here: http://www.nickbostrom.com/papers/anthropicshadow.pdf
2. Also, we humans tend to find ourselves in a period when climate changes are very strong, because of climate instability. This is because human intelligence, as a universal adaptation mechanism, was more effective in periods of instability. So climate instability helps to breed intelligent beings. (This is my idea and may need additional proof.)
3. But if runaway global warming is long overdue, this would mean that our environment is sensitive even to smaller human actions (compare it with an over-pressured balloon and a small needle). In this case the amount of CO2 we currently release could be such an action. So we could be underestimating the fragility of our environment because of anthropic bias. (This is my idea and I wrote about it here: http://www.slideshare.net/avturchin/why-anthropic-principle-stopped-to-defend-us-observation-selection-and-fragility-of-our-environment)
The timeline of possible runaway global warming
We could call runaway global warming a Venusian scenario: thanks to the greenhouse effect, the temperature on the surface of Venus is over 400 C, despite the fact that, owing to its high albedo (0.75, caused by white clouds), it receives less solar energy than the Earth (albedo 0.3).
A greenhouse catastrophe could consist of three stages:
1. Warming of 1-2 degrees due to anthropogenic CO2 in the atmosphere, and passage of a «trigger point». We don't know where the tipping point is; we may have passed it already, or conversely we may be underestimating natural self-regulating mechanisms.
2. Warming of 10-20 degrees because of methane from gas hydrates and the Siberian bogs, as well as the release of CO2 currently dissolved in the oceans. The speed of this self-amplifying process is limited by the thermal inertia of the ocean, so it would probably take about 10-100 years. This process can be arrested only by sharp high-tech interventions, like an artificial nuclear winter and/or the eruption of multiple volcanoes. The more warming occurs, the less able civilization is to stop it, as its technologies will be damaged; on the other hand, the later that runaway warming happens, the higher the tech that can be used to stop it.
3. Moist greenhouse. Steam is a major contributor to the greenhouse effect, which results in an even stronger and quicker positive feedback loop. A moist greenhouse will start if the average temperature of the Earth reaches 47 C (currently 15 C), and it will result in runaway evaporation of the oceans and 900 C surface temperatures (https://en.wikipedia.org/wiki/Future_of_the_Earth#Loss_of_oceans). All the water on the planet will boil, resulting in a dense water vapor atmosphere. See also: https://en.wikipedia.org/wiki/Runaway_greenhouse_effect
Prevention
If we survive until a positive Singularity, global warming will not be an issue. But if strong AI and other super-technologies do not arrive before the end of the 21st century, we will need to invest a lot in prevention, as civilization could collapse before the creation of strong AI, which would mean that we could never use its benefits.
I have a map which summarizes the known ideas for global warming prevention and adds some new ones for urgent risk management: http://immortality-roadmap.com/warming2.pdf
The map has two main variables: our level of tech progress and the size of the warming which we want to prevent. Another key variable is the ability of humanity to unite and act proactively. In short, the plans are:
No plan – do nothing, and just adapt to the warming.
Plan A – cutting emissions and removing greenhouse gases from the atmosphere. Requires a lot of investment and cooperation; long-term action and remote results.
Plan B – geo-engineering aimed at blocking sunlight. Little investment is needed and unilateral action is possible; quicker action and quicker results, but risky in the case of switching off.
Plan C – emergency actions for Sun dimming, like an artificial volcanic winter.
Plan D – moving to other planets.
All plans could be executed at the current tech level, and also at a high tech level through the use of nanotech and so on.
I think that climate change demands that we go directly to Plan B. Plan A is cutting emissions, and it is not working, because it is very expensive and requires cooperation from all sides. Even then it will not achieve immediate results, and the temperature will still continue to rise for many other reasons.
Plan B is changing the opacity of the Earth's atmosphere. It could be a surprisingly low-cost exercise and could be carried out locally. There are suggestions to release something as simple as sulfuric acid into the upper atmosphere to raise its reflective ability.
"According to Keith’s calculations, if operations were begun in 2020, it would take 25,000 metric
tons of sulfuric acid to cut global warming in half after one year. Once under way, the injection of
sulfuric acid would proceed continuously. By 2040, 11 or so jets delivering roughly 250,000 metric
tons of it each year, at an annual cost of $700 million, would be required to compensate for the
83

increased warming caused by rising levels of carbon dioxide. By 2070, he estimates, the program
would need to be injecting a bit more than a million tons per year using a fleet of a hundred aircraft."
https://www.technologyreview.com/s/511016/a-cheap-and-easy-plan-to-stop-global-warming/
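The implied unit costs are easy to check (a sketch using only the numbers in the quote above; the linear cost-scaling at the end is my own assumption):

    # Implied economics of Keith's aerosol program, from the quoted figures.
    tons_2040, cost_2040 = 250_000, 700e6      # metric tons/year, USD/year
    print(cost_2040 / tons_2040)               # ~$2,800 per ton delivered
    tons_2070 = 1_000_000
    # Assuming cost scales roughly linearly with tonnage (my assumption):
    print(cost_2040 * tons_2070 / tons_2040)   # ~$2.8 billion/year by 2070

Even a few billion dollars a year is tiny compared with estimates of the cost of deep emission cuts, which is why the text calls Plan B surprisingly cheap.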
There are also ideas to recapture CO2 using genetically modified organisms, iron seeding in the oceans, and dispersal of the carbon-capturing mineral olivine.
The problem with this approach is that it can't be stopped. As Seth Baum wrote, a smaller catastrophe could disrupt such engineering, with the consequent immediate return of global warming with a vengeance. http://sethbaum.com/ac/2013_DoubleCatastrophe.html
There are other ways of preventing global warming. Plan C is creating an artificial nuclear winter through a volcanic explosion or by starting large-scale forest fires with nukes. This idea is even more controversial and untested than geo-engineering.
A regional nuclear war is capable of putting 5 million tons of black carbon into the upper atmosphere: "average global temperatures would drop by 2.25 degrees F (1.25 degrees C) for two to three years afterward, the models suggest." http://news.nationalgeographic.com/news/2011/02/110223-nuclear-war-winter-global-warming-environment-science-climate-change/ Nuclear explosions in deep forests may have the same effect as attacks on cities in terms of soot production.
Fighting between Plan A and Plan B
So we are not even close to being doomed by global warming, but we may have to change the way we react to it.
While cutting emissions is important, it will probably not work within a 10-20 year period, so quicker-acting measures should be devised.
The main risk is abrupt runaway global warming. It is a low-probability event with the highest consequences. To fight it we should prepare rapid response measures.
Such preparation should be done in advance, which requires expensive scientific experiments. The main problem here is (as always) funding, and regulators' approval. The impact of sulfur aerosols should be tested, and complicated mathematical models should be evaluated.
Counter-arguments are the following: “Openly embracing climate engineering would probably also
cause emissions to soar, as people would think that there's no need to even try to lower emissions
any more. So, if for some reason the delivery of that sulfuric acid into the atmosphere or whatever
was disrupted, we'd be in trouble. And do we know enough of such measures to say that they are
safe? Of course, if we believe that history will end anyways within decades or centuries because of
singularity, long-term effects of such measures may not matter so much… Another big issue with
changing insolation is that it doesn't solve ocean acidification. No state actor should be allowed to
start geo-engineering until they at least take simple measures to reduce their emissions.” (comments
from Lesswrong discussion about GW).
Currently it all looks like a political fight between Plan A (cutting emissions) and Plan B (geo-engineering), in which Plan A is winning. It has been suggested that Plan B should not be implemented, so that the increase in warming will demonstrate a real need to implement Plan A (cutting emissions).
Regulators did not approve even the smallest experiments with sulfur shielding in Britain. Iron ocean seeding also has regulatory problems.
But the same logic works in the opposite direction: China and the coal companies may decline to cut emissions because they want to press policymakers into implementing Plan B. It looks like a prisoner's dilemma between the two plans.
The difference between the two plans is that Plan A would return everything to its natural state, while Plan B is aimed at creating instruments to regulate the planet's climate and weather.
In the current global political situation, cutting emissions is difficult to implement, because it requires collaboration between many rival companies and countries. If several of them defect (most likely China, Russia and India, which make heavy use of coal and other fossil fuels), it will not work, even if all of Europe were solar-powered.
A transition to a zero-emission economy could happen naturally within 20 years, once electric transport and solar energy become widespread.
Plan C should be implemented if the situation suddenly changes for the worse, with the temperature jumping 3-5 C in one year. In this case the only option we have is to bomb the Pinatubo volcano to make it erupt again, or probably even several volcanoes. A volcanic winter would give us time to adopt other geo-engineering measures.
I would also advocate a mixture of both plans, because they work on different timescales. Cutting emissions and removing CO2 with the current level of technology would take decades to have an impact on the climate. But geo-engineering has a reaction time of around one year, so we could use it to cover the bumps in the road.
Especially important is the fact that if we completely stop emissions, we would also stop the global dimming from coal burning, which would result in a 3 C global temperature jump. So stopping emissions may itself cause a temperature jump, and we need a protection system for this case.
In all cases we need to survive until stronger technologies develop. Using nanotech or genetic engineering we could solve the warming problem with less effort. But we have to survive until then.
It seems to me that the idea of cutting emissions is overhyped and solar radiation management is "underhyped" in terms of public opinion and funding. By changing that imbalance we could achieve more common good.
An unpredictable climate needs a quicker regulation system
The management of climate risks depends on their predictability, and it seems that this predictability is not very high. The climate is a very complex and chaotic system. It may react unexpectedly in response to our own actions. This means that long-term actions are less favorable: the situation could change many times during their implementation.
Quick actions like solar shielding are better for the management of poorly predictable processes, as we can see the results of our actions and quickly cancel them or strengthen them if we don't like the results.

Context risks – influencing the probability of other global risks
Global warming has some context risks:
1. It could slow tech progress.
2. It could raise the chances of war (this has probably already happened in Syria because of drought, http://futureoflife.org/2016/07/22/climate-change-is-the-most-urgent-existential-risk/), and exacerbate conflicts between states about how to share resources (food, water etc.) and about responsibility for risk mitigation. All such context risks could lead to a larger global catastrophe. See also the book "Climate Wars": https://www.amazon.com/Climate-Wars-Fight-Survival-Overheats/dp/1851688145
3. Another context risk is that global warming captures almost all the public attention available for global risk mitigation, so other, more urgent risks may get less attention.
4. Impaired cognition. Rising CO2 levels could also impair human intelligence and slow tech progress, as CO2 levels near 1000 ppm are known to have negative effects on cognition.
5. Warming may also result in very large hurricanes (hypercanes). They could appear if the sea temperature reaches 50 C, and they would have wind speeds of 800 km/h, enough to destroy any known human structure. They would also be very stable and long-lived, influencing the atmosphere and creating strong winds all over the world. The highest sea temperature currently is around 30 C. http://en.wikipedia.org/wiki/Hypercane
Warming and other risks
Many people think that runaway global warming constitutes the main risk of global catastrophe; another group thinks it is AI, and there is no dialog between these two groups.
The level of warming which is survivable strongly depends on our tech level. Some combinations of temperature and humidity are non-survivable for human beings without air conditioning: if the temperature rises by 15 C, half of the population will be in a non-survivable environment (http://www.sciencedaily.com/releases/2010/05/100504155413.htm), because very humid and hot air prevents cooling by perspiration and feels like a much higher temperature. With the current level of tech we could fight this, but if humanity falls back to a medieval level, it would be much more difficult to recover in such conditions.
In fact we should compare not the magnitude but the speed of global warming with the speed of tech progress. If the warming is quicker, it wins. If we have very slow warming but even slower progress, the warming still wins. In general I think that progress will outrun warming, and we will create strong AI before we have to deal with serious global warming consequences.
War could also break out if one country, say the USA, attempts geo-engineering, and another country (China or Russia) regards it as a climate weapon which undermines its agricultural productivity. Scientific study and preparation is probably the longest part of geo-engineering; it could and should be done in advance, and it need not provoke war. Then, if a real necessity for geo-engineering appears, all the needed technologies will be ready.
Different predictions
Multiple people predict extinction due to global warming, but they are mostly labeled as "alarmists" and ignored. Some notable predictions:
1. David Auerbach predicts that by 2100 warming will be 5 C, and that combined with resource depletion and overcrowding it will result in global catastrophe. http://www.dailymail.co.uk/sciencetech/article-3131160/Will-child-witness-end-humanity-Mankind-extinct-100-years-climate-change-warns-expert.html
2. Sam Carana predicts that warming will be 10 C in the 10 years following 2016, and that extinction will happen by 2030. http://arctic-news.blogspot.ru/2016/03/ten-degrees-warmer-in-a-decade.html
3. Conventional predictions of the IPCC give a maximum warming of 6.4 C by 2100 in the worst-case emission scenario with the worst climate sensitivity: https://en.wikipedia.org/wiki/Effects_of_global_warming#SRES_emissions_scenarios
4. The consensus of scientists is that the climate tipping point will come around 2200: http://www.independent.co.uk/news/science/scientists-expect-climate-tipping-point-by-2200-2012967.html
5. If humanity burns all known carbon sources, it will result in a 10 C warming by 2300. https://www.newscientist.com/article/mg21228392-300-hyperwarming-climate-could-turn-earths-poles-green/ The only scenario in which we are still burning fossil fuels by 2300 (but are neither extinct nor a solar-powered supercivilization running nanotech and AI) is a series of nuclear wars or other smaller catastrophes which permit the existence of regional powers that repeatedly smash each other into ruins and then rebuild using coal energy. Something like a global nuclear "Somali world".
6. "Kopparapu says that if current IPCC temperature projections of a 4 degrees K (or Celsius) increase by the end of this century are correct, our descendants could start seeing the signatures of a moist greenhouse by 2100". Earth on the edge of runaway warming: http://arctic-news.blogspot.ru/2013/04/earth-is-on-the-edge-of-runaway-warming.html
We should give more weight to less mainstream predictions, because they describe the heavy tails of possible outcomes. I think it is reasonable to estimate the risk of extinction-level runaway global warming in the next 100-300 years at 1 per cent, and to act as if it is the main risk from global warming.


The map of (runaway) global warming prevention

The original map is a graphic (http://immortality-roadmap.com/warming3.pdf); its content is summarized below. For each plan the map gives a low-tech realization (first half of the 21st century) and a high-tech realization (second half of the 21st century).

No plan. Local adaptations to the changing climate: air conditioning, relocation of cities, new crops, irrigation. This could work in the case of small changes like 2-4 C, but will not work in the case of runaway warming. High-tech version: wait until strong AI creation.

Plan A. Greenhouse gases limitation.
• Cutting emissions of CO2: switching from coal to gas; switching to electric cars; more fuel-efficient cars; transition to renewable and solar energy, which does not result in CO2 emissions; lower consumption. High-tech version: using nanotech and biotech to create new modes of transport and energy sources; using AI to create a comprehensive climate model and calculate impacts.
• Reduction of the emissions of other greenhouse gases: use of lasers to destroy CFC gases; destruction of atmospheric methane by releasing hydroxyl groups or methanophile organisms.
• Capturing CO2 from the atmosphere: reforestation; stimulation of plankton by seeding the ocean with iron; use of the common mineral olivine, which when sprayed enters into a chemical reaction with CO2 and absorbs it. High-tech version: GMO organisms capable of capturing CO2; nanorobots using carbon from the air for self-replication; nanomachines in the upper layers of the atmosphere trapping harmful gases.

Plan B. Use of geo-engineering for reflection of sunlight. Risks: turning it off could result in an immediate rebound of global warming; it reduces the incentives for cutting emissions.
• Reflection of sunlight in the stratosphere: stratospheric aerosols based on sulfuric acid. Risks: destruction of the ozone layer; acid rains. Expected price: possibly near zero, if cheap plane fuel is used.
• Increase of the albedo of the Earth's surface (https://en.wikipedia.org/wiki/Reflective_surfaces_(geoengineering)): reflective construction technologies; crops with high albedo; foam in the ocean; covering deserts with reflective plastic; huge airships; reflective thin films in the upper atmosphere.
• Increase of the albedo of clouds over the sea by injecting sea-water-based condensation nuclei (https://en.wikipedia.org/wiki/Cloud_reflectivity_modification); 1500 ships could do it, and the spray itself would rise to the upper atmosphere. Risks: the water turns to steam, which is itself a greenhouse gas; clouds at the wrong altitude can lead to heating. See also https://en.wikipedia.org/wiki/Solar_radiation_management#Weaponization
• Space solutions: mirrors in space; spraying moondust (explosions on the Moon could create a cloud of dust); construction of factories on the Moon producing umbrella-satellites; lenses or smart dust at the L1 point; deflection of an asteroid into the Moon, so that the impact creates a dust cloud in the Moon's orbit which screens solar radiation. High-tech version: robot replicators in space building mirrors.

Plan C. Urgent measures to stop global warming: artificial explosions of volcanoes to create an artificial volcanic winter; an artificial nuclear winter created by explosions in the taiga and in coal beds. Plan C could be realized within half a year with the help of already existing nuclear weapons.

Plan D. Escape: escape into high mountains, like the Himalayas, or to Antarctica; high-tech air-conditioned refuges; space stations; colonization of other planets; uploading into AI.

Types of warming and their consequences (the map also gives rough probabilities for each outcome, not reproduced here):
• New Ice Age: large economic problems; millions of people would die.
• No warming: useless waste of money, time and attention on fighting global warming.
• Warming according to IPCC predictions (2-4 C by the end of the 21st century): large economic problems; hurricanes, famine, changes of sea level; millions of people would die.
• Catastrophic warming (10-20 C): large parts of the land become uninhabitable, humans move to the poles and into the mountains; civilizational collapse.
• Catastrophic warming (new equilibrium of the climate at 55 C, i.e. 40 C of warming): small groups of people could survive on very high mountains, like the Himalayas.
• Venusian runaway warming (mild; the mean temperature reaches more than 100 C): some forms of life survive on mountain tops.
• Venusian runaway warming (strong; all water evaporates; the mean temperature reaches more than 1600 C): life on Earth irreversibly ends.
Chapter 7. The anthropogenic risks which are not connected with new technologies
Exhaustion of resources
The problems of exhaustion of resources, growth of population and pollution of the environment form a systemic problem, and in that capacity we will consider them later. Here we will consider only whether each of these factors separately can lead to mankind's extinction.
A widespread opinion is that technogenic civilization is doomed because of the exhaustion of readily available hydrocarbons. In any case, this in itself would not result in the extinction of all mankind, since people lived without oil before. However, there will be serious problems if oil runs out sooner than society has time to adapt to it, that is, if it runs out quickly. Coal stocks, however, are considerable, and the technology for making liquid fuel from coal was actively applied in Hitler's Germany. Huge stocks of methane hydrate lie on the sea floor, and effective robots could extract them. And wind power, conversion of solar energy and the like are, with existing technologies, on the whole sufficient to keep civilization developing, though a certain decrease in the standard of living is possible, and in the worst case a considerable decrease in population, but not full extinction.
In other words, the Sun and the wind contain energy which exceeds the requirements of mankind by a factor of thousands, and on the whole we understand how to extract it (a rough version of this figure is worked out below). The question is not whether we will have enough energy, but whether we will have time to put the necessary capacity into operation before a shortage of energy undermines the technological possibilities of civilization under an adverse scenario.
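A rough version of the "thousands of times" figure (a sketch; both input numbers are my own round estimates, not from the text):

$$ \frac{P_{\text{solar, intercepted by Earth}}}{P_{\text{humanity}}} \approx \frac{1.7\times10^{17}\ \text{W}}{2\times10^{13}\ \text{W}} \approx 10^{4}. $$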
It may seem to the reader that I underestimate the problem of exhaustion of resources, to which a multitude of books (Meadows, Parkhomenko), studies and websites (in the spirit of www.theoildrum.com) are devoted. In fact, I do not agree with many of these authors, because they start from the premise that technical progress will stop. Let us look at recent research in the field of energy supply. In 2007 industrial production of solar cells at a cost of less than 1 dollar per watt began in the USA, which is half the cost of capacity at a coal power station, not counting fuel. The quantity of wind energy which can be extracted from ocean shallows in the USA is 900 gigawatts, which covers all the electricity requirements of the USA. Such a system would give a uniform stream of energy owing to its large size. The problem of storing surplus electric power is solved by pumping water back up at hydroelectric power stations, by the development of powerful accumulators, and by distribution, for example into electric cars. A large amount of energy can also be extracted from sea currents, especially the Gulf Stream, and from underwater deposits of methane hydrates.
Besides, the end of resource exhaustion lies beyond the forecast horizon which is established by the rate of scientific and technical progress. (But the moment of change of the trend, Peak Oil, is within this horizon.)
One more variant of global catastrophe is poisoning by the products of our own life. For example, yeast in a bottle of wine grows exponentially and is then poisoned by the products of its own decay (alcohol), and all of it dies. This process happens with mankind too, but it is not known whether we can pollute and exhaust our habitat so badly that this alone would lead to our complete extinction. Besides energy, people need the following resources:
· Materials for manufacturing: metals, rare-earth elements and so on. Many important ores could run out by 2050. However, materials, unlike energy, do not disappear, and with the development of nanotechnology full recycling of waste becomes possible, along with extraction of needed materials from sea water, where large quantities are dissolved (of uranium, for example), and even transportation of needed substances from space.
· Food. According to some data, the peak of food production has already passed: soils are disappearing, urbanization seizes the fertile fields, the population grows, fisheries are collapsing, the environment is polluted by waste and poisons, water is insufficient, pests are spreading. On the other hand, a transition is possible to an essentially new industrial type of food production based on hydroponics, that is, cultivation of plants in water, without soil, in closed greenhouses, which is protected from pollution and parasites and completely automated (see Dmitry Verkhoturov and Kirillovsky's article "Agrotechnologies of the future: from the plowed field to the factory"). Finally, margarine and, possibly, many other necessary components of foodstuffs can be made from oil at chemical plants.
· Water. It is possible to provide potable water through desalination of sea water; today it costs about a dollar per ton. But the great bulk of water goes to growing crops, up to a thousand tons of water per ton of wheat, which makes desalination unprofitable for agriculture (the arithmetic is sketched after this list). With a transition to hydroponics, however, water losses to evaporation would sharply decrease, and desalination could become profitable.
· Place to live. Despite the fast growth of the Earth's population, we are still far from the theoretical limit.


· Clean air. Already there are air conditioners that clear the air of dust and raise its oxygen content.
· Exceeding the global hypsithermal limit.
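The desalination arithmetic from the water item above, written out (a sketch; the wheat price is my own assumed round figure, the other numbers are from the text):

    # Why desalinated water is unprofitable for conventional agriculture.
    desal_cost_per_ton = 1.0     # USD per ton of fresh water (from the text)
    water_per_ton_wheat = 1000   # tons of water per ton of wheat (from the text)
    wheat_price = 200.0          # USD per ton, an assumed round market figure

    water_cost = desal_cost_per_ton * water_per_ton_wheat
    print(water_cost, wheat_price)   # $1000 of water per ~$200 ton of wheat
    # Hydroponics cuts evaporation losses, shrinking water_per_ton_wheat by
    # an order of magnitude or more, which is what could make desalination pay.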
Artificial Wombs

The technological revolution introduces the following factors into population growth:
· An increase in the number of beings to which we attribute rights equal to human rights: monkeys, dolphins, cats, dogs.
· Simplification of the birth and upbringing of children: possibilities of reproductive cloning, creation of artificial mothers, household robot assistants, etc.
· The appearance of new mechanisms claiming human rights and/or consuming resources: cars, robots, AI systems.
· Possibilities of prolongation of life and even revival of the dead (for example, by cloning from preserved DNA).
· Growth of the "normal" consumption level.

Crash of the biosphere
If people master genetic technologies, this will make it possible both to arrange a crash of the biosphere of improbable scale and to find the resources for its protection and repair. It is possible to imagine a scenario in which the whole biosphere is so infected by radiation, genetically modified organisms and toxins that it becomes incapable of meeting mankind's food requirements. If this happens suddenly, it will put civilization on the edge of economic crash. However, a sufficiently advanced civilization could set up production of foodstuffs in a kind of artificial biosphere, like greenhouses. Hence, biosphere crash is dangerous only if civilization subsequently rolls back to the previous technological step, or if the crash of the biosphere itself causes this rollback.
The biosphere is a very complex system in which self-organized criticality and sudden collapse are possible. A well-known story is the extermination of sparrows in China and the subsequent problems with food because of the invasion of pests. Or, for example, corals are now perishing worldwide because sewage carries bacteria that harm them.


Chapter 8. Artificial Triggering of Natural Catastrophes
There are a number of natural catastrophes which could potentially be triggered by
the use of powerful weapons, especially nuclear weapons. These include: 1) initiation of a
supervolcano eruption, 2) dislodging of a large part of the Cumbre Vieja volcano on La
Palma in the Canary Islands, causing a megatsunami, 3) possible nuclear ignition of a gas
giant planet or the Sun. The first is the most dangerous and plausible, the second is
plausible but not dangerous to the whole of mankind, and the third is currently rather
speculative, but in need of further research. Besides these three risks, there is also the risk
of asteroids or comets being intentionally redirected to impact the Earth's surface and the
risk of destruction of the ozone layer through an auto-catalyzing reaction. These five risks
and some closely related others will be examined in this chapter.
We begin with the Cumbre Vieja risk, as it is the most studied among these possibilities and serves as an illustrative example. According to heavily contested claims, there may be a block with a volume between 150 km³ and 500 km³ on the Cumbre Vieja volcano on La Palma in the Canary Islands, which, if dislodged, would cause waves of 3-8 m (10-26 ft, in the instance of a 150 km³ collapse) to 10-25 m (33-108 ft, for a 500 km³ collapse) to hit the North American Atlantic seaboard, causing massive destruction, with waves traveling up to 25 km (16 mi) inland [1]. All the assumptions underlying this scenario have been heavily questioned, including the most basic one, that there is an unstable block to begin with [2,3]. According to the researchers (Ward & Day) who made the original claims, there is a 30 km (19 mi) fissure along Cumbre Vieja. Quoting Ward and Day, “The unstable block above the detachment extends to the north and south at least 15 km.” The evidence for this detachment is not based on any obvious surface feature but rather on an analytic argument by Day et al. which incorporates evidence such as vent activity. As such, the statement has been questioned by critics, who say that the evidence does not imply such a large detachment and that at most the detachment is 3 km (1.8 mi) in length.
We leave detailed discussion of the Cumbre Vieja volcano to the referenced works; our point is to establish an archetype for the category of artificially-triggered natural risk. If the volcano exploded, or a nuclear bomb were detonated in the right place, it could send a large chunk of land from La Palma into the ocean, creating a mega-tsunami that would sink ships, impact the East Coast and cost many lives.
Saying with confidence that this would certainly not occur is not possible now, because the case has not been investigated thoroughly enough. Unfortunately, the other risks which we discuss in this chapter have been analyzed even less. It is important to highlight them, however, so that future research can be prompted.
Yellowstone Supervolcano Eruption
A less controversial claim than the status of the Cumbre Vieja volcano is that there is a huge, pressurized magma chamber beneath Yellowstone National Park in Wyoming. Others have gone on to suggest, off the record, that it would blow if its cap were destroyed by a nuclear weapon. No geologists have publicly addressed the possibility, but it is entirely consistent with their statements on the pressure of the magma chamber and its depth [4]. The magma chamber is 80 km (50 mi) long and 40 km (24 mi) wide, and has 4,000 km³ (960 cu mi) of underground volume, of which 10–30% is filled with molten rock. The top of the chamber is 8 km (5 mi) below the surface, the bottom about 16 km (10 mi) below. That means that anything which could weaken or annihilate the 8 km (5 mi) cap could release the hyper-pressurized gases and molten rock and trigger a supervolcanic eruption, causing the loss of millions of lives.
Before we review the effects of a supervolcano eruption, it is worth considering the depth penetration of nuclear explosions. During Operation Plowshare, an experiment in the peaceful use of nuclear weapons, craters 100 m (320 ft) deep were created. Simplistically speaking, this means that 80 similar weapons would be needed to burrow all the way down to the Yellowstone Caldera magma chamber. Realistically, fewer would be needed, since deep nuclear explosions cause collapses which reduce overhead pressure. Our estimate is that just 10 ten-megaton nuclear explosions or fewer would be sufficient to connect the magma chamber to the surface (a crude version of this arithmetic is sketched below). If there are solid boundaries between the explosion cavities, they could be bridged by a drilling machine. That would release the pressure and allow the magma to explode to the surface. You might wonder what sort of people would have the motivation to do such a thing. America's enemies, for one, but there are other possibilities; we explore the general case in a later chapter.
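A crude version of this estimate (a sketch; the cube-root yield scaling and the 104 kt Sedan reference point are standard rules of thumb, and the collapse factor is my own assumption):

    # Rough estimate of how many stacked nuclear craters reach the magma chamber.
    sedan_yield_kt, sedan_depth_m = 104, 100   # Plowshare "Sedan" shot as anchor
    target_depth_m = 8000                      # depth of the chamber cap
    bomb_yield_kt = 10_000                     # one ten-megaton device

    # Crater depth grows roughly as the cube root of yield (rule of thumb).
    depth_per_bomb = sedan_depth_m * (bomb_yield_kt / sedan_yield_kt) ** (1 / 3)
    print(round(depth_per_bomb))                   # ~460 m per explosion
    print(round(target_depth_m / depth_per_bomb))  # ~17 naively stacked craters
    # Collapse chimneys above each deep detonation (assumed here to roughly
    # double the effective depth per shot) bring the count toward the ~10
    # mentioned in the text.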
For now, let's review the effects of a supervolcano eruption. Like many of the risks discussed in this book, it would not be likely to kill all of humanity, but “only” a few hundreds of millions of people. It is in combination with other risks that the supervolcano risk becomes a threat to the human species in general. Regardless, it would be cataclysmic.
Specifically: a supervolcano eruption on the scale of Yellowstone's prior events would eject about 1,000 km³ (250 cu mi) of molten rock into the sky, creating ash plumes which could cover as much as two-thirds of the United States in a layer of ash a foot thick, making it uninhabitable for decades. 80,000 people would be killed instantly, and hundreds of millions more would die over the subsequent months due to lack of food and water in the wake of the eruption. Ash would kill all plants on the surface, causing the death of any animals not able to survive on detritus or fungus. The average world temperature would drop by several degrees, causing catastrophic crop failures and hundreds of millions of deaths by starvation. The situation would be similar to that after a nuclear war, with the ameliorating factor that volcanic aerosols persist in the atmosphere for less time than nuclear aerosols, due to their greater average particle size. Therefore, a “volcanic winter” would be shorter than a nuclear winter, although the acute effects would be far worse. Nuclear war would barely affect many inland areas, but a Yellowstone supereruption would cover them uniformly in ash. As a world power, the United States would be done for.
A foot-thick layer of ash pretty much puts an end to all human activity. USGS geologist Jake Lowenstern is quoted as saying that a Yellowstone supereruption would deposit a layer of ash at least 10 cm (4 in) thick over a radius of 500 miles around Yellowstone [5]. This is somewhat less than the “10 foot thick layer of ash” claims seen in disaster documentaries and other sensationalist sources, but still more than enough to heavily disrupt activity across a huge area. The effects would be similar to the catastrophe outlined in the nuclear weapons chapter, but even more severe. Naturally, water would still be available from wells and streams, but all crops and much game would be ruined, leaving people completely dependent upon canned and other stored food. The systems that provide power would be covered in dust, requiring weeks to months of repairs. More likely than not, civil order would completely collapse across the affected area, making repairs to power systems difficult and piecemeal. Of course, if the victims don't prepare enough canned food, they can always eat one another, which is the standard (and rarely spoken of) outcome in this kind of disaster scenario.


Still, little of this directly matters in the context of this book, since such an event, while severe, does not threaten humanity as a whole. Although worldwide temperatures would drop by a few degrees and the continental US would be devastated, the world population as a whole would survive and live on, guaranteeing humanity's future. The scenario is still worth noting because 1) there may be multiple supervolcanos worldwide which could be triggered simultaneously, and which either on their own or concurrently with nuclear weapons could cause a volcanic/nuclear winter so severe that no one survives it, and 2) it is an exacerbating factor which could add to the tension of a World War or similar scenario. A strike on a supervolcano would be deadlier than a strike on a major city, and could correspondingly raise the stakes of any international conflict. Threatening to attack a supervolcano with a series of ICBMs could be a potential blackmail strategy in the darkest hours of a World War.
Asteroid Bombardment
A risk which has more potential to be life-ending than a supervolcano eruption is that of intentionally-directed asteroid bombardment, which could be quite dangerous to the planet indeed. Directing an asteroid of sufficient size towards the Earth would require a tremendous amount of energy, orders of magnitude greater than mankind's current total annual energy consumption, but it could eventually be done, perhaps with the tools described in the nanotechnology chapter.
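To see the scale involved, here is a crude kinetic-energy estimate for nudging a Ganymed-sized asteroid (a sketch; the density and the required delta-v are my own assumptions, the diameter is from the discussion below):

    import math

    # Energy to give a ~33-km asteroid a 1 km/s velocity change, compared
    # with world annual energy use. Density and delta-v are assumed numbers.
    radius_m = 16_500
    density = 2_500                       # kg/m^3, assumed rocky body
    mass = density * 4 / 3 * math.pi * radius_m ** 3   # ~4.7e16 kg
    delta_v = 1_000                       # m/s, assumed for a large orbit change
    energy = 0.5 * mass * delta_v ** 2    # ~2.4e22 J
    world_annual_energy = 6e20            # J/yr, rough current primary energy use
    print(f"{energy:.1e} J, {energy / world_annual_energy:.0f}x annual use")

Even this optimistic version comes out tens of times larger than the whole world's annual energy budget, before accounting for the inefficiency of any real propulsion scheme.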
The Earth's orbit already places it in some danger of being hit by a deadly asteroid. 65 million years ago, an asteroid between 5 km (3 mi) and 15 km (9 mi) across impacted the Earth, causing the extinction of the dinosaurs. There is an asteroid, 1950 DA, 1 km (0.6 mi) in diameter, which scientists say has a 0.3 percent chance of impacting Earth in 2880 [6]. The dinosaur-killer impact was so severe that its blast wave ignited most of the forests in North America, destroying them and making fungus the dominant species for several years after the impact [7]. Despite this, some human-sized species survived, including various turtles and alligators. On the other hand, many human-sized and larger species were wiped out, including all non-avian dinosaurs. More research is needed to determine whether a dinosaur-killer-class asteroid would be likely to wipe out humanity in its entirety, taking into account our ability to take refuge in bunkers with food and water for decades or possibly even centuries at a time. Detailed studies have not been done, and we were only able to locate one paper on the topic [8].
In the geological record, there are asteroid impacts up to half the size of the Chicxulub impactor which wiped out the dinosaurs, and which are known not to have caused mass extinctions. The Chicxulub impactor, on the other hand, caused a mass extinction that destroyed about three-quarters of all living plant and animal species on Earth. It seems fair to assume that an asteroid needs to be at least as large as the Chicxulub impactor to have a chance of wiping out humanity, and probably significantly larger.
Sometimes asteroids with a diameter of greater than 10 km (6 mi) are called “life-killer” asteroids, though this is an exaggeration. At least one asteroid of this size has impacted the Earth during the last 600 million years without wiping out multicellular life, though it did burn down the entire biosphere. The two largest impact craters known, with diameters of 300 and 250 km (186 and 155 mi) respectively, correspond to impacts which occurred before the evolution of multicellular life. The Chicxulub crater, with a diameter of 180 km (112 mi), is the third-largest impact crater on Earth which is definitively known, and the only known major asteroid to hit the planet after the rise of complex, multicellular life. Craters of similar size from the last 600 million years cannot be definitively identified, but that does not mean that such an impact has not occurred. The crater could very well have been on a part of the Earth's surface that has since been subsumed beneath a continental plate, or could be camouflaged by surface features. There is one possible crater of even larger size, the Shiva “crater”, which, if real (it is highly contested), is 600 km (370 mi) long by 400 km (250 mi) wide, and may correspond to the impact of a 40 km (25 mi) sized object. If this were confirmed, it would substantially increase the necessary size for a humanity-killer asteroid, but since it is highly contested, it does not count, and we ought to assume that an asteroid just modestly larger than the Chicxulub impactor, say near the top of its probable size range, 15 km (9 mi), could potentially wipe out the human species, just as it did the dinosaurs. Comets have somewhat greater speed than asteroids, due to their origin farther out in the solar system, and can be correspondingly smaller than asteroids but still do equivalent damage. It is unknown whether the Chicxulub impactor was an asteroid or a comet. To put it in perspective, an asteroid 10 km (6 mi) across is similar in size to Mt. Everest, but with greater mass (due to its spherical, rather than conical, shape).

Restricting our search to asteroids of size 15 km (9 mi) or larger, we might wonder
where objects of this size may be found. Among 1,006 potentially hazardous asteroids
(PHOs) classified by NASA, the largest is 4.75×2.4×1.95 km, with a mass of 5.0×10 13 kg.
This is rather large, but not large enough to be categorized as a potential humanity-killer,
according to our analysis. (Furthermore, this object has a relatively low density, which
would make its impact even slighter than its size alone would suggest.) Moving on to near-Earth objects (NEOs), asteroids which orbit in Earth's neighborhood but are not currently at risk of impact, more than 10,713 are categorized by NASA. Over 1,000 are estimated to have a diameter greater than 1 km (0.6 mi). The largest, 1036 Ganymed, has an estimated diameter of 32–34 km (20–21 mi), more than enough to qualify as a potential "humanity-killer". Like all impacts of similar size, it would eject a huge amount of rock and dust into the
upper atmosphere, which would heat up into molten rock upon reentry, “broiling” the
surface upon its return. At a distance of 800 km (~500 mi), the ejecta would take roughly 8
minutes to arrive. According to one source, the impact would result in an impact winter with a temperature drop of 13 Kelvin after 20 days, rebounding by about 6 K after a year, at which point one-third of the Northern Hemisphere would be covered in ice [9]. In addition, a sufficiently large
impact would create a mantle plume at the antipodal point (opposite side of the planet),
which would cause a supervolcanic eruption and a volcanic winter all on its own. The
combination of an impact and a supervolcanic eruption could cause a temperature drop
even greater than 13 Kelvin. Impactors greater than 3 km (1.9 mi) in diameter are likely to create global firestorms, probably burning up a majority of the world's dense forests [10, 11].
The smoke from the wood fires would cause the impact winter to last even longer, creating
even thicker ice sheets at lower latitudes which would be even more omnicidal. Still,
according to our best guess, a decade of food and underground refuges would be enough
for humans to survive an impactor of equivalent size to the Chicxulub impactor. We could find no studies of the civilizational outcome of a Ganymed-sized impactor, but it is fair to say
it would be very bad, though thankfully very unlikely.
There are several reasons it seems unlikely that a single asteroid impact could wipe
out humanity. The foremost is that, unlike the dinosaurs, some minority of us could seal
ourselves in caves with a pleasant climate and live for dozens if not hundreds of years on
stored food alone. If supplied with a nuclear reactor, metal shop, a large supply of light
bulbs, a reliable water source, and means of disposing of waste, people could even grow
plants underground and extend their stay for the life of the reactor, which could be 100
years or longer. A thick ice sheet forming on top of the bunker could kill all life inside, by
denial of oxygen, but such ice sheets are unlikely to form in the tropics, where billions of
people live today. Perhaps a greater risk would be a series of artificially engineered
asteroid impacts, 50-100 years apart, designed to last for thousands of years. It seems
more likely that an unnatural scenario such as that could actually wipe out all human life,
but also seems correspondingly more difficult to engineer. It could become possible during
the late 21st century and beyond, however.
Manually redirecting an asteroid with an orbit relatively far from the Earth, say 0.3
astronomical units (AU) in the case of 1036 Ganymed, would require a tremendous amount
of energy, even by the standards of high energy density MNT-built machinery. Redirecting
an object of that size by a substantial amount would require many tens of thousands of
years and a corresponding number of orbits. If someone had an objective to destroy a
target on Earth, it would be much simpler to blow it up with a nuclear weapon or direct
sunlight from a space mirror to vaporize it rather than drop an asteroid on it. It would even be far easier to pick up a mountain, launch it off the surface of the Earth, orbit it around
the Earth until it picked up sufficient energy, then drop it on the Earth, rather than
redirecting a distant object. For this reason, a man-engineered asteroid impact seems like
an unlikely risk to humanity for the foreseeable future (tens of thousands of years), and an
utterly negligible one for the time frame under consideration, the 21st century. Of course,
the probability of a natural impact of this size in the next century is minute.
Runaway Autocatalytic Reactions
Very few people know that if the concentration of deuterium (a heavy isotope of hydrogen; water enriched in it is known as "heavy water") in the world's oceans were just 22 times higher than it is, it would be possible to ignite a self-sustaining nuclear reaction that would vaporize the seas and the Earth's crust with them. The oceans contain one deuterium atom per 6,247 hydrogen atoms, and the critical threshold for a self-sustaining nuclear chain reaction is one atom per 300 [12]. It may be that there are deposits of ice or heavy water with the required concentration of deuterium. Heavy water has a slightly higher melting point than normal water and thus concentrates during natural processes. Even a cube of heavy water ice just 100 m (330 ft)
on a side, small in terms of geologic deposits, could release energy equivalent to many
gigatons of TNT if ignited, greater than the largest nuclear bombs ever detonated. The
reason we do not observe these events in other star systems may be that
artificial nuclear explosions are needed to trigger them, or that the required concentrations
of deuterium do not exist naturally. More research on the topic is needed. There is even a
theory that the Moon formed during a natural nuclear explosion from concentrated uranium
at the core-mantle boundary [13]. It may be that we have not observed such an event in other
star systems yet because it is sufficiently uncommon.
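A minimal sketch of the figures above. Beyond what the text states, it assumes an ice density of about 1,000 kg/m³, a deuterium mass fraction of roughly 20 percent in heavy water, and a full D-D burn-up yield of about 3.5×10¹⁴ J per kilogram of deuterium; the burn fraction in a real detonation is unknown, so it is left as a parameter:

```python
# Deuterium enrichment factor and the energy content of a 100 m cube
# of heavy-water ice (assumptions noted in the lead-in above).
OCEAN_RATIO = 1 / 6247      # deuterium per hydrogen atom in seawater (from the text)
CRITICAL_RATIO = 1 / 300    # claimed chain-reaction threshold [12]
print(f"enrichment needed: {CRITICAL_RATIO / OCEAN_RATIO:.0f}x")   # ~21x

mass_deuterium_kg = 0.20 * (100 ** 3) * 1000.0   # ~2e8 kg of deuterium
GT_TNT_J = 4.184e18                              # joules per gigaton of TNT
for burn_fraction in (1.0, 0.001):
    gigatons = mass_deuterium_kg * 3.5e14 * burn_fraction / GT_TNT_J
    print(f"burn fraction {burn_fraction}: ~{gigatons:,.0f} Gt TNT")
```

Even a 0.1 percent burn fraction exceeds the largest bomb ever detonated (about 0.05 Gt) by orders of magnitude, consistent with the "many gigatons" figure above.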
It should be possible to search for higher-than-normal concentrations of deuterium in Arctic ice deposits. These studies should be pursued out of an abundance of caution. If such deposits are found, this would provide more evidence for the usefulness of space stations distant from the Earth as an insurance policy against geogenic human extinction triggered by a nuclear chain reaction in geologic deposits. Other autocatalytic runaway reactions may be possible, threatening structures such as the ozone layer. These possibilities should be studied in greater detail by geochemists. Dangerous concentrations of deuterium may exist in the icy bodies of the outer solar system, on Mars, or among the asteroids. The use of nuclear weapons in space should be restricted until these objects are thoroughly investigated for deuterium levels.
Gas Giant Ignition
The potential ignition of gas giants is of sufficient importance that it merits its own
section. The first reaction to such an idea is that it sounds utterly crazy. There is a reflexive
search for a rationalization, a quick rebuttal, to put such an implausible idea out of our
heads. Problematically, however, many of the throw-away rebuttals have themselves been dismissed on rational grounds, and if we are going to rule out this possibility decisively, more work needs to be done [14]. Specifically, we are talking about a nuclear chain reaction being
triggered by a nuclear bomb detonated in a deuterium-rich cloud layer of a planet like
Jupiter, Saturn, Uranus, or Neptune. Even a pocket “only” several kilometers in diameter
would be enough to sterilize the solar system if ignited.
For ignition to be a danger, there needs to be a pocket in a gas giant where the level
of deuterium per normal hydrogen atom is at least 1:300. That is all it takes. A nuclear
explosion could then theoretically start a self-reinforcing nuclear chain reaction. If all the
deuterium in the depths of Jupiter were somehow ignited, it would release energy
equivalent to 3,000 years of the Sun's luminosity released over a few tens of seconds, enough to melt the first few kilometers of the Earth's crust and penetrate much deeper with x-rays.
Surviving this would be extremely difficult, though possibly machine-based life forms buried
deeply underground could do it. If it turns out to be possible, the threat of blowing up a gas giant could be used as a blackmail device to force all of humanity to do what the attacker wants.
The average deuterium concentration of Jupiter's atmosphere is low, about 26 per 1
million hydrogen atoms, similar to what is thought to be the primordial ratio of deuterium
created shortly after the Big Bang. Although this is a small amount, there may be chemical and physical processes which concentrate deuterium in parts of the atmosphere of Jupiter.
Natural solar heating of ice in comets leads to isotope separation and greater local
concentrations, for instance. The scientific details of how isotope separation occurs and what isotope concentrations are reached in the interiors of gas giants are poorly understood.
If the required deuterium concentrations exist, there are additional reasons to be
concerned that a runaway reaction could be initiated in a gas giant. One is the immense
pressure and opacity of the gas giants beneath the cloud layer. 10,000 km beneath
Jupiter's cloud tops, the pressure is 1 million bar. For comparison, the pressure at the core
of the Earth is 3 million bar. Deep in the bowels of gas giants, hydrogen changes to a
different phase called metallic hydrogen, where it is so compressed that it behaves as an
electrical conductor, and would be considerably more opaque than normal hydrogen. The
pressure and opacity would help contain any nuclear reaction and ensure it gets going
before fizzling out. Another factor to consider is that a gas giant has plenty of fusion fuel
which has never been involved in nuclear reactions, elements like helium-3 and lithium.
This makes it potentially more ignitable than a star.
There are runaway fusion reactions which occur in astronomical bodies in nature. The
most obvious is a supernova, where a star collapses and fuses a large amount of fuel very quickly. Another example is a helium flash, where a degenerate star crosses a critical threshold of helium pressure and 60-80 percent of the helium in its core fuses in a matter of seconds. This causes the luminosity of the star to increase to about 10¹¹ solar luminosities, similar to the luminosity of an entire galaxy. Jupiter, which is about a thousand times less massive than the Sun, would still sterilize the solar system if any substantial portion of its helium fused. Helium makes up roughly a quarter of the mass of Jupiter.
The arguments for and against the likelihood of planetary ignition are complicated,
and require some knowledge of nuclear physics. For this reason, we will conclude
discussion of the topic here and point to some key references. Besides the risk of planetary
ignition, there has also been some discussion of the possibility of solar ignition, but this has
been more limited. As with planetary ignition, it is tempting to dismiss the possibility without considering the evidence, which is a danger.
Deep Drilling and Geogenic Magma Risk
Returning to the realm of Earth, it may be possible to dig very deeply, creating
an artificial supervolcano even more energetic than Yellowstone. We should always
remember that 1,792 mi (2,885 km) beneath us lies a molten, pressurized, liquid core
interspersed with gas. If even a tiny amount of that energy could be released to the
surface, it could wipe out all life. This is especially concerning in light of recent proposals to
send a probe to the mantle-core boundary. Although the liquid iron in the core would be too
heavy to rise to the surface on its own, the gases in the core could eject it through a
channel, like opening the cork of a champagne bottle. If even a small channel were flooded
with pressurized magma, it could make the fissure larger by melting its walls all the way to
the top, until it became large enough to eject a substantial amount of liquid iron.
A scientist at Caltech has devised a proposal for sending a grapefruit-sized
communications probe to the Earth's core, by creating a crack and pouring in a huge
amount of molten iron. This would slowly melt its way downwards, taking about a week to
travel the 1,792 miles to the core. Being conducted as a routine scientific exploration, like a
space mission, the risks might be ignored until it is too late. The idea of a probe to the
Earth's core causing the end of life on the surface just feels far-fetched, even though we don't know enough about the physics to be sure one way or the other. These
“seemingly far-fetched” risks are especially dangerous because the reasons for dismissing
them are superficial.

Artificial Irreversible Global Warming
The risk of runaway global warming caused by the release of methane from methane hydrates in the Arctic has long been discussed (https://en.wikipedia.org/wiki/Clathrate_gun_hypothesis). These hydrates are deposited at depths of 300 meters or more, in layers a few hundred meters thick. Warming of the ocean water, connected with changes in the extent of Arctic surface ice, as well as the inflow of warm water from rivers and the Gulf Stream, gradually destabilizes the hydrates, which are stable only in a narrow range of temperatures and pressures: they require high pressure and a temperature around 0 °C.
If the temperature rises by a few degrees, the hydrates will break down and release methane (some researchers think this could be a quick process, others that it will be slow). They would also be destroyed if the pressure drops.
A significant portion of the hydrates is already on the verge of stability, and methane is now being released in the form of bubbles reaching the surface, leading to an increased concentration of methane over the Arctic (sometimes more than 2,200 ppb, against a mean of about 1,600 ppb). Over short time scales methane is roughly a hundred times more powerful a greenhouse gas than carbon dioxide, so 2 parts per million of methane is equivalent to about 200 parts per million of carbon dioxide. By this measure, the contributions of methane and carbon dioxide to warming are of almost equal size.
According to some scientists, the release of methane from the Arctic tundra and subsea permafrost is a positive-feedback process that will lead to global warming of a few degrees in the period 2020-2030, followed by a stronger greenhouse effect from water vapor; after that, the global temperature would rise by tens of degrees, and the death of all life on Earth would follow within a century (http://arctic-news.blogspot.ru/). Others take a more conservative approach and think that the outgassing will take millennia (https://en.wikipedia.org/wiki/Runaway_climate_change).
Although this view is not shared by everybody, and its probability is not high, since these processes may be much slower, the warming mechanism itself appears to be real.

As a result, there is a technical possibility of destabilizing the methane hydrate fields with nuclear weapons or other means, which would lead to accelerated release of methane and could result in rapid runaway global warming.
Such weapons could be used as a doomsday machine for blackmailing an adversary by a large Arctic country (Russia, or Canada and the US). Russia has recently built the secret Arctic deep-water submarine "Losharik", of unknown purpose, which is reportedly capable of diving 2,500 meters below the surface (https://en.wikipedia.org/wiki/Russian_submarine_Losharik).
Here we will discuss the minimum requirements for the initial explosions, and the factors that would lead to the greatest release of methane. These estimates are very preliminary. Let us suppose that there is a country with access to the Arctic that possesses a nuclear arsenal of 10,000 warheads (like the USSR and the United States at the height of the Cold War).
It could distribute these warheads across the bottom of the Arctic Ocean at the points where the largest and most unstable deposits of methane hydrates lie, so that their simultaneous detonation would blast craters in the seafloor and destabilize the hydrates on a large scale.
The placement (and detonation order) of the charges might also be chosen to create a large wave in the ocean, which would pass over other fields of methane hydrates, changing the pressure above them, breaking them up and leading to explosive outgassing. The wave could also strike the enemy's shore.
In addition, the charges should be located close to one another, so that the total heating contributed by the explosions is not dissipated over a large area; heating and mixing of the ocean water would further contribute to methane outgassing.
The warheads could also be buried beneath the seafloor, so that the shock wave would shatter the methane hydrates and throw pieces of them toward the surface, increasing the outgassing.
If a single explosion on the order of 1 Mt destabilized an area of about 1 square kilometer, it could release around 10 million tons of methane.

Using 10,000 bombs would thus release about 100 gigatons of methane, raising the atmospheric concentration of methane more than tenfold. In its effect on global warming this is roughly equivalent to a tenfold increase in atmospheric CO2, and it would produce an immediate rise in global temperature of around 10 °C, probably enough to start a chain reaction of fires, further outgassing, and evaporation leading to even higher temperatures. If the mean temperature of the Earth ever rose to 47 °C, the water vapor greenhouse effect could push it up to a new stable state at around 900 °C.
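The arithmetic behind these figures is simple enough to lay out explicitly; the only added input below is the present-day atmospheric methane burden of roughly 5 Gt, a standard estimate rather than a number from the text:

```python
# Reproducing the back-of-envelope methane arithmetic above.
methane_per_warhead_gt = 10e6 / 1e9     # ~10 million tonnes per 1 Mt shot, in Gt
n_warheads = 10_000
total_gt = methane_per_warhead_gt * n_warheads
print(f"total methane released: ~{total_gt:.0f} Gt")      # ~100 Gt

current_burden_gt = 5.0     # rough present-day atmospheric methane (assumption)
print(f"increase over today's burden: ~{total_gt / current_burden_gt:.0f}x")
```

By this crude bookkeeping the atmospheric methane burden would rise by an order of magnitude or more, matching the roughly tenfold increase claimed above.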
Of course, part of the methane would burn off, but its concentration would rise across the Northern Hemisphere, which would sustain the continued release of methane from wetlands and the ocean.
The process is similar to shaking a can of soda, which typically results in strong outgassing.
A similar effect might be achieved, perhaps at lower cost, by dropping millions of ordinary depth charges on the seabed, or by using just a few multi-megaton devices.
The lower the stability of the gas hydrates, the less intervention is needed to destabilize them.
Another scenario for causing irreversible global warming is to spray more potent greenhouse substances into the upper atmosphere.
For example, freons, substances used in refrigeration, can be 8,000 times more effective as greenhouse gases than carbon dioxide (https://en.wikipedia.org/wiki/Greenhouse_gas). This means that a strong change in the thermal balance of the atmosphere could result from the emission of about 1 Gt of freons. In addition, such an amount in the stratosphere would totally destroy the ozone layer. Currently about 5 million tons of freons are contained in various old refrigeration systems (https://en.wikipedia.org/wiki/Chlorofluorocarbon).
The most potent greenhouse gas is sulfur hexafluoride, whose effect is 23,000 times greater than that of CO2 over a century-long interval (the immediate effect is only about 3,200 times greater). Its annual production is about 10,000 tons, and its current contribution to global warming is about 0.2 percent (https://en.wikipedia.org/wiki/Sulfur_hexafluoride).
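Using only the multipliers quoted above, the CO2-equivalent of these scenarios can be compared directly; the grouping and the 40 Gt benchmark for annual CO2 emissions are our additions:

```python
# CO2-equivalent of the greenhouse-gas scenarios quoted above.
scenarios = {
    # name: (mass in tonnes, warming potency relative to CO2, from the text)
    "1 Gt of freons (hypothetical release)": (1e9, 8000),
    "existing freon stock (~5 Mt)":          (5e6, 8000),
    "one year of SF6 output (~10 kt)":       (1e4, 23000),
}
for name, (tonnes, potency) in scenarios.items():
    print(f"{name}: ~{tonnes * potency / 1e9:,.2f} Gt CO2-equivalent")
# For scale, annual anthropogenic CO2 emissions are on the order of 40 Gt.
```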

Synthesizing gigatons of freons would be a huge scientific and technical challenge, easily visible, expensive, and pointless, but the creation of genetically modified bacteria that produce freons is theoretically possible.
Another greenhouse gas is nitrous oxide, N2O, which has a greenhouse effect 298 times stronger than that of carbon dioxide (https://en.wikipedia.org/wiki/Nitrous_oxide). It can be produced by atomic explosions and volcanic eruptions, and also by solar radiation and supernova explosions. It is also heavier than air and has a narcotic effect, that is, it can cause physiological reactions.
Ozone has an immediate greenhouse effect about 1,000 times greater than that of CO2, but tropospheric ozone decays in about 22 days. Ozone can therefore cause regional greenhouse warming, and in some regions its contribution to warming is greater than that of CO2. Nuclear explosions can create large amounts of local ozone through ionizing radiation.
The main greenhouse gas on Earth is actually water vapor, and nuclear explosions can locally increase the concentration of water vapor as well as loft it into the upper atmosphere. However, water vapor can also condense into clouds and ice crystals that increase the Earth's albedo and lower the temperature.
One of the possible causes of the extinction 250 million years ago was the appearance of a new class of methane-generating bacteria, which arose through the emergence and horizontal transfer of a single gene. These bacteria used nickel, emitted by the eruption of the Siberian Traps, as a catalyst for the synthesis of methane from organic sediments on the ocean floor. The exponential growth in the number of these bacteria led to rapid global warming (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3992638/).
Catalysts are already used to stimulate the growth of plankton (iron seeded into the ocean), and the production and dispersion of nickel are quite large.
Several factors could lead to an increase in the number of methane-generating bacteria and a large-scale release of methane, which could result in irreversible global warming:
- a GMO bacterium that can synthesize methane;
- a gene encoding this catalytic ability spreading to such bacteria;
- a supervolcano eruption in nickel deposits;
- unintentional accumulation of nickel and iron in the ocean from waste.
Conclusion: the deliberate creation of a doomsday machine based on a methane gun seems implausible, mostly because the results of such an attack would be unpredictable. Even if a state decided to build a doomsday machine, it has many more predictable options, including biological and nuclear ones. On the other hand, if it were ever shown that triggering the clathrate gun is the most devastating possible use of the existing nuclear arsenal, then the risk would be real.

References

1. Ward, S. N. & Day, S. J. 2001. "Cumbre Vieja Volcano; potential collapse and tsunami at La Palma, Canary Islands". Geophysical Research Letters 28 (17), 3397-3400.
2. "La Palma Tsunami: The mega-hyped tidal wave story". http://www.lapalmatsunami.com/
3. "Risk is low, but US East Coast faces variety of tsunami threats". November 16, 2011. NBC News.
4. Alexandra Witze. 2013. "Large magma reservoir gets bigger". Nature.
5. Annalee Newitz. "What will really happen when the Yellowstone supervolcano erupts?" May 17, 2013. io9.
6. Steven N. Ward and Erik Asphaug. 2003. "Asteroid impact tsunami of 2880 March 16". Geophysical Journal International 153, F6-F10.
7. Robertson, D.S., Lewis, W.M., Sheehan, P.M. & Toon, O.B. 2013. "K/Pg extinction: re-evaluation of the heat/fire hypothesis". Journal of Geophysical Research: Biogeosciences.
8. Victoria Garshnek, David Morrison, and Frederick M. Burkle Jr. 2000. "The mitigation, management, and survivability of asteroid/comet impact with Earth". Space Policy 16 (3), pp. 213-222.
9. M.C. MacCracken, C. Covey, S.L. Thompson, and P.R. Weissman. 1994. "Global Climatic Effects of Atmospheric Dust from An Asteroid or Comet Impact on Earth". Global and Planetary Change 9 (3-4): 263-273.
10. Clark R. Chapman and David Morrison. 1994. "Impacts on the Earth by Asteroids and Comets - Assessing the Hazard". Nature 367 (6458): 33-40.
11. John S. Lewis. 1997. Rain Of Iron And Ice: The Very Real Threat Of Comet And Asteroid Bombardment. Helix Books.
12. Thomas A. Weaver and Lowell Wood. 1979. "Necessary conditions for the initiation and propagation of nuclear-detonation waves in plane atmospheres". Physical Review A 20 (1).
13. R.J. de Meijer, V.F. Anisichkin, and W. van Westrenen. 2010. "Forming the Moon from terrestrial silicate-rich material". Chemical Geology.
14. Alexei Turchin. "The possibility of artificial fusion explosion of giant planets and other objects of Solar system". Scribd.

Block 3 – Risks connected with 20th century technologies
Chapter 9. Nuclear Weapons
The risk of nuclear warfare has been discussed in many places, but this book is the
first to thoroughly analyze the risks of electromagnetic pulse (EMP) and nuclear winter and
how these effects combine on the ground. Research which has only been published since
2007 has shed new light on the effects of nuclear winter and is the first research to
examine 10-year, multi-scenario simulations. Previous simulations were run in the late 80s
and early 90s and used far inferior computers.
We should state from the outset that the risk of human extinction from nuclear
weapons alone is rather minute. Rather, nuclear warfare could create a general
atmosphere of chaos and decline that eventually leads to possible human extinction, in
combination with other factors such as biological warfare (covered in a later chapter) and
and nuclear winter. Even taking all of this completely into account, our estimate of the risk
of human extinction from nuclear warfare is rather small, less than 1 percent over the
course of the 21st century. We aren't going to specify how far below 1 percent it is, but
simply use 1 percent as a convenient number that seems to be in the right ballpark.
To analyze the effects of nuclear warfare properly requires a complete understanding
of two fields: the literature and debate on nuclear winter, and the likely effects of
electromagnetic pulse (EMP) caused by a high-altitude nuclear detonation. People with a
thorough understanding of both these domains are fairly rare, even within the small slice of
academia and public policy circles that discusses the risks of nuclear war. Therefore, it is
important to understand that overall appraisals of nuclear war from seemingly trustworthy
sources, including physicists and arms control experts, may be unreliable or even
completely worthless, as these individuals often lack knowledge on crucial pieces of the
puzzle. The spectacular and intimidating nature of nuclear warfare tends to make casual
scientists exploring the topic fixate on a particular interpretation and be reluctant to back
away from it in the face of new evidence. Popular incorrect interpretations abound, such as
the false notion that nuclear war would assuredly wipe out humanity (a mistaken view caused by sensationalist fictional accounts such as On the Beach) or the idea that nuclear
war would have rather mild long-term effects (this idea was prominent in the late 90s and
early 00s).
The leading scientific conception of the effects of nuclear war, particularly nuclear
winter, has taken a serpentine path, fluctuating back and forth since the invention of the
bomb. Since 2007, it is firmly understood, thanks to the work of Alan Robock, that even a
minor nuclear exchange, such as a few dozen warheads between India and Pakistan,
would have important worldwide climatic effects which would be devastating to crop yields [1]. Crops are bred to mature within a very specific time frame, which would be gravely
disrupted if the growing season were shortened by even a couple weeks, as would occur if
there were even a “minor” nuclear exchange. The end result would be worldwide food
shortages, food riots, civil disorder, hunger-motivated crimes and atrocities, and mass
starvation.

The Evolution of Scientific Opinion on Nuclear Winter
Before we launch into likely scenarios for nuclear war, and to what degree they
threaten the long-term survival of humanity, we will briefly overview the shifting sands of
the dominant opinion on nuclear winter. In 1983 the first highly influential paper on nuclear
winter was published by Turco, Toon, Ackerman, Pollack, and Carl Sagan (TTAPS,
pronounced “T-Taps”). The paper was titled “Nuclear Winter: Global Consequences of
Multiple Nuclear Explosions" [2], and is credited with introducing the term "nuclear winter" to
the public. The paper claimed, “For many simulated exchanges of several thousand
megatons, in which dust and smoke are generated and encircle the earth within 1 to 2
weeks, average light levels can be reduced to a few percent of ambient and land
temperatures can reach -15 degrees to -25 degrees C.”
These numbers later turned out to be overly pessimistic, which Sagan admitted in his
1995 book Demon-Haunted World. Though land temperatures would reach -15 degrees to
-25 degrees C in some highly landlocked areas in the event of full-scale nuclear war
(thousands of megatons), temperatures would not fall so low across most of the Earth's
land mass, according to the best present studies (more detail later). The paper also
claimed that in certain high-warhead exchange scenarios, the ambient radiation across the
surface of the Earth would reach 50 rad, which turned out to be a great exaggeration. The
danger of nuclear war is not primarily from the radiation (though it would kill hundreds of
millions) but the long-term temperature drop and its resulting impact on harvests.
Over the course of the 1980s, doubt proliferated regarding the extremity of the
nuclear winter predictions made by TTAPS, and the motives of the authors were
questioned. In 1987, Cresson Kearny, a respected civil defense engineer and former
military officer, rebutted the claims of TTAPS in his book Nuclear War Survival Skills. In the
book, Kearny said [3]:
Unsurvivable “nuclear winter” is a discredited theory that, since its conception
in 1982, has been used to frighten additional millions into believing that trying to
survive a nuclear war is a waste of effort and resources, and that only by ridding the
world of almost all nuclear weapons do we have a chance of surviving.
Non-propagandizing scientists recently have calculated that the climatic and
other environmental effects of even an all-out nuclear war would be much less
severe than the catastrophic effects repeatedly publicized by popular astronomer
Carl Sagan and his fellow activist scientists, and by all the involved Soviet scientists.
Conclusions reached from these recent, realistic calculations are summarized in an
article, "Nuclear Winter Reappraised", featured in the 1986 summer issue of Foreign
Affairs, the prestigious quarterly of the Council on Foreign Relations. The authors,
Starley L. Thompson and Stephen H. Schneider, are atmospheric scientists with the
National Center for Atmospheric Research. They showed " that on scientific grounds
the global apocalyptic conclusions of the initial nuclear winter hypothesis can now
be relegated to a vanishing low level of probability."
Kearny's primary source for these statements was the 1986 paper "Nuclear Winter Reappraised" by Starley Thompson and Stephen Schneider [4]. They later clarified that they "resisted the interpretation that this means a rejection of the basic points made about nuclear winter" [5]. Regardless, all of this thinking is now obsolete, since the present science
is much better and our computers are thousands of times faster.
It turns out, in light of modern research and simulations, that Kearny and Sagan were both wrong. Nuclear war is neither a guarantee of intense nuclear winter, as Sagan and
TTAPS implied, nor is it of “vanishingly low level of probability”. Some form of nuclear
winter is practically guaranteed under any nuclear conflict, though in some cases it might
be mild enough to qualify as a nuclear “autumn”, meaning a drop of only a few degrees
Celsius. A full-blown, extremely deadly nuclear winter scenario, however, is quite assured if
any significant portion of the present-day nuclear arsenals of the United States and Russia
are used. “Deadly” in this sense means causing hundreds of millions of deaths, perhaps as
many as 2 billion.
In his book, Kearny claims that the estimated temperature drop would be about 20
degrees Fahrenheit (11 degrees Celsius) and last only a few days. It turns out he was
completely incorrect; the average global temperature drop would be close to twice that, and
as much as three times that in deep inland areas such as Kazakhstan and other parts of
interior Eurasia. This is extreme cooling, and well worthy of the term “nuclear winter”.
What's more, the severe temperature drop would last for 5-10 years, with atmospheric
smoke reducing by a factor of e (2.718) every 5.5 years in the 150 Tg (teragram) smoke
injection scenario, which corresponds to full-blown nuclear war [6]. This would be extremely
unpleasant if it were to happen, and the details will be left to later in this chapter.
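The quoted e-folding time implies a simple exponential decay of the stratospheric smoke load; a minimal sketch:

```python
# Smoke loading over time in the 150 Tg scenario, using the e-folding
# time of 5.5 years quoted above [6].
import math

S0_TG, TAU_YEARS = 150.0, 5.5
for year in (0, 5.5, 11, 16.5, 22):
    remaining = S0_TG * math.exp(-year / TAU_YEARS)
    print(f"year {year:>4}: ~{remaining:6.1f} Tg of smoke")
```

After roughly ten years about a sixth of the smoke still remains aloft, which is why the severe cooling persists for 5-10 years rather than a single season.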
For every stage of scientific progress on the question of nuclear war, there are people, including scientists, still stuck there. Most of them are stuck in the year 1983, when
it was thought that nuclear winter would be universally fatal and that nothing could be done
to survive it. Some of them are stuck in the year 1987 (as one of the authors was until
performing research for this book), thinking that nuclear winter is not really an issue and
would only be a mild “nuclear autumn”. The great majority of commentators on nuclear war
and nuclear winter are mentally stuck in one of these two places, being more than 20 years
out of date. Imagine if the knowledge of most “computer experts” were 20 years out of
date. That is where the state of knowledge is today regarding nuclear war. Because
nuclear war is such a complex topic, few people take notice of the mismatch. 20-year-old
obsolete knowledge is considered acceptably up-to-date since nearly everyone, including the well-educated, shares the same ignorance.
The next major course correction in the view of nuclear winter took place in 1990,
when TTAPS published a revised estimate that lowered their estimation of the temperature
drop which would ensue from nuclear war [7]. Their update was "Climate and Smoke: An
Appraisal of Nuclear Winter", published in Science. The article did not sufficiently retract the initial mistaken estimates from 1983, which triggered another backlash of skepticism and accusations of a political disarmament agenda [8]. The
abstract summarizes their results: “For the most likely soot injections from a full-scale
nuclear exchange, three-dimensional climate simulations yield midsummer land
temperature decreases that average 10 degrees to 20 degrees C in northern mid-latitudes,
with local cooling as large as 35 degrees C, and subfreezing summer temperatures in
some regions.” These numbers are significantly less than the initial numbers and added
much-needed nuance, provided by better simulations on superior computers. At the time,
the longest detailed simulation was only 105 days, and the computing power was not
available to make an accurate long-term estimate. The authors exhibited overconfidence in
the wording of their statements, exaggerating the degree of confidence it was possible to
achieve with the tools of the time.
From 1991 until 2007, the field of nuclear winter simulation studies was relatively
stagnant. According to climatologist Alan Robock, no major simulations worth citing were
conducted between the years 1991 and 2007 [9]. By the time studies resumed in 2007, the
price performance of computers had improved by a factor of more than 800, representing nearly 10 doublings under Moore's law. This allowed the simulations to be correspondingly
more detailed. In a 2012 featured article for the Bulletin of the Atomic Scientists, Robock
wrote that previous models were “based on primitive computer models of the climate
system” and that the duration of nuclear winter was found to be much longer than
previously thought [10].
Today, Alan Robock's detailed work is the dominant model for the effects of nuclear
war, and his results, while not as gloomy as Sagan's, show that nuclear winter would still
be extremely severe, even in a limited regional conflict [11]. However, billions of people would
still be likely to survive. The temperature drop in places like South America, Africa, southeast Asia, and Australia would be more survivable, ranging from 2.5 to 12.5 degrees
Celsius, depending on location. In most population centers in these locales, the
temperature drop would average around 5 degrees. Rather than being directly threatened
by the cold, as the northern continental areas would be, these people would be more
threatened by the indirect effects of food shortages, which would be severe, but probably
not civilization-threatening.
Robock's work shows that the temperature drops would be the greatest for North
America, Europe, and Eurasia, including China. In the case of a full-blown nuclear war injecting 150 Tg of smoke into the upper atmosphere, the aforementioned areas would
suffer temperature drops between 15 and 35 degrees Celsius. In areas like Ukraine, the
temperature would remain below-freezing year round. Ukraine, being the breadbasket of
Europe, would not be able to provide any crops whatsoever. Canada, much of Russia, and
large parts of China would also remain below-freezing year round. Such temperatures are
even lower than those during the last Ice Age. During the last Ice Age, global temperatures
were only around 5 degrees Celsius colder than today. In this scenario, all of Europe and
America would be 15 to 20 degrees colder.
It is hard to properly convey how cold this would be. In combination with the effects of
ruined infrastructure due to EMP and general disorder, greater than 95 percent of the
population of Europe and America would be likely to perish. This percentage estimate is
from the authors of this work—Robock does not give any precise estimate of likely
casualties. He only calls nuclear warfare “self-assured destruction,” meaning that an
attacking state would seal their own demise, even if they were not hit by any nuclear
weapons in return. Experts commenting on EMP have stated that “9 in 10 Americans would
die” in a grid-down scenario in the United States, however [12]. It is our view that the
combined effects of EMP, nuclear war, and nuclear winter would be likely to kill 95 percent
of citizens in the target countries, if not more. Some of this stems from the greater susceptibility of the northern hemisphere to nuclear winter and power grid disruption, due to
the greater concentration of land mass, distance from the equator, and dependency on
electricity, respectively. A greater concentration of land mass means that more land is
isolated from the thermal moderating effects of the world's oceans. This applies strongly to
North America and Eurasia.
In 2013, Robock co-authored a CNN article with Ira Helfand titled, “No such thing as a
safe number of nukes," which argues exactly that [13]. In the article, they write, "A study by
Physicians for Social Responsibility showed that if only 300 warheads in the Russian
arsenal got through to targets in American cities, 75 million to 100 million people would be
killed in the first 30 minutes by the explosions and firestorms that would destroy all of our
major metropolitan areas, and vast areas would be blanketed with radioactive fallout.”
The key questions regarding the severity of nuclear winter are the following:
- What quantity of soot will arise and be thrown into the troposphere in the case of large-scale nuclear war?
- How will it influence the temperature of the Earth?
- How long will the soot remain in the upper atmosphere, and at what latitudes will it persist?
- What influence will the temperature drop have on the ability of humans to survive?
Research relevant to nuclear winter makes slightly different assumptions regarding all
of these points, and comes to different conclusions accordingly. It is important, however, to
cut back on all the complexity and hypothesize a set of “normative scenarios,” or there is
the inclination to get lost in analysis and avoid the urgency of concrete action. That is why
this chapter mentioned Robock's normative scenario before introducing ambiguity in the
form of the above questions.
The primary questions that have the greatest possible latitude in answering are 1)
how much smoke is injected into the atmosphere, 2) what latitude is that smoke injected at
(this strongly influences how much makes it into the upper atmosphere), and 3) what
influence will the temperature drop have on the ability of humans to survive? Regarding the
question of volume of smoke injection, there is a fair amount of uncertainty, because it
depends strongly on the scale of war. Robock's nuclear winter scenario outlined above is
likely in the instance of a full-scale nuclear war, but a more limited exchange would have
correspondingly more limited effects. An even worse outcome is possible if nuclear
weapons are intentionally detonated over forests in addition to cities, to deliberately
increase smoke injection into the upper atmosphere. A detailed scenario regarding the
likely effects on human survival and daily life that nuclear war and nuclear winter are likely
to have is presented later in this chapter.
There's more on this, but hold that thought, because it's time to switch gears to
another almost-certain prelude to nuclear war—electromagnetic pulse.
Effects of Electromagnetic Pulse
Any nuclear attack would almost certainly be preceded by the high-altitude detonation
of a hydrogen bomb, which would create a massive electromagnetic pulse, frying much
electronic equipment in the target area, which could extend across much of the continental United States or western Russia. Several bombs might be used, saturating
the entire target country, or most of it, with the EMP. In 2008, a congressionally mandated body, the EMP Commission, completed a detailed study of the probable effects of EMP on critical American infrastructure [15]. The prognosis was for the complete collapse of the power
grid, and ensuing collapse of essentially all critical infrastructure, from water infrastructure
to food infrastructure to all emergency services.
The long-term effects would be grim. A conference by the national security/public
interest group The United West featured speakers who argued that an EMP attack alone
would result in the death of 90 percent of Americans within 12 to 18 months. William
Forstchen, author of the EMP warning book One Second After, cited a 2004 study and
said, “Testimony in that study said 90 percent, let me repeat that, 90 percent of all
Americans would die within 12-18 months of an EMP attack”. The conference featured
prestigious speakers such as (quoting from an article) “CIA Director R. James Woolsey,
CIA Covert Operative Reza Kahlili, House Armed Services Committee Chairman Rep.
Roscoe Bartlett, former Director of Strategic Defense Initiative Organization Henry Cooper,
former National Intelligence Council Chairman Fritz Erdmarth and William Graham,
President Reagan’s science adviser and chairman of the EMP Commission.”
In the scenario of an EMP attack, a hydrogen bomb would be detonated about 100
miles above the surface of the United States, possibly launched from a container ship in
the Gulf of Mexico or sent via intercontinental ballistic missile (ICBM). This would produce
a blast of x-rays which would ionize massive volumes of air, generating an electromagnetic
pulse. The radiation would ricochet within the ionosphere, reflecting back and forth,
building in intensity and blanketing the ground below it with the massive pulse. The EMP
would create a tremendous inductive current in all exposed wires, delivering a crippling
voltage spike to anything connected to the power network. The pulse would be front-loaded, meaning that surge protectors would not have the chance to detect a rise in
electrical activity and switch off in advance, destroying nearly every electrical appliance.
The surge would be channeled through phone cords, power cords, ethernet cords,
everything. The overload would destroy all transformers in the power grid, from large to
small. The largest transformers take three years to replace under normal conditions and are manufactured only offshore. The United States lacks the manufacturing capacity to
make them. Without these transformers, the power grid would go down, and stay down.
Even a relatively minor disturbance has historically caused large parts of the grid to
temporarily collapse, and this disturbance would be on a level far greater than the grid has
ever been hit with. The widespread, simultaneous failure would make timely repairs
impossible, not only due to a lack of replacement parts, but also due to the ensuing
collapse of social order certain to be caused by grid down. The grid would be done for
good, more or less, and all crucial components would need to be completely replaced at
immense cost.
Besides destroying the power grid itself, the EMP would also destroy much of the
critical equipment at power plants and relay facilities. It would destroy the pumps that relay
water to vast areas which depend on them. It would destroy the pumps that send gasoline
from refineries to trucks which then distribute gasoline to gas stations around the country. It
would destroy the refineries that convert crude oil to gasoline and other products. It would
shut down the 25,000 food processing facilities in the United States that create food
products from raw foodstuffs. Nearly every feature of civilization as we know it would be destroyed, at least 90 percent of the population would die, and the recovery time would be greater than a decade, perhaps two or three. A decade, at minimum, would likely be required even to reunify the country; until then it would consist of feudal, isolated groups. Taking into
account the crop-destroying effects of nuclear winter, food would be extremely scarce. The
attack would cause the complete collapse of civil society for more than a decade, possibly
as long as a generation. It would truly be an event without any historic precedent
whatsoever. For a visual dramatization of just ten days of grid down, see the National
Geographic docudrama American Blackout (2013) [15]. Imagine the scenario portrayed in that
documentary, but continuing for 5-20 years.
The EMP Commission verified that all this critical infrastructure would fail by analyzing
the impact of EMP on SCADA (supervisory control and data acquisition) boxes, which are
interspersed with nearly all modern infrastructural equipment, from power plants to water
utility plants to relay stations and so on. The repair teams created to deal with malfunctions
in these devices only have the numbers and spare parts needed to repair a few at a time
when they break down, and would never be able to handle a simultaneous, system-wide
grid-down scenario. Because the large transformers are the most crucial part of the grid
and the most difficult to replace, we'll quote the section from the EMP Commission's report
on damage to these components in its entirety:
The transformers that handle electrical power within the transmission system
and its interfaces with the generation and distribution systems are large, expensive,
and to a considerable extent, custom built. The transmission system is far less
standardized than the power plants are, which themselves are somewhat unique
from one to another. All production for these large transformers used in the United
States is currently offshore. Delivery time for these items under benign
circumstances is typically one to two years. There are about 2,000 such
transformers rated at or above 345 kV in the United States with about 1 percent per
year being replaced due to failure or by the addition of new ones. Worldwide
production capacity is less than 100 units per year and serves a world market, one
that is growing at a rapid rate in such countries as China and India. Delivery of a
new large transformer ordered today is nearly 3 years, including both manufacturing
and transportation. An event damaging several of these transformers at once means
it may extend the delivery times to well beyond current time frames as production is
taxed. The resulting impact on timing for restoration can be devastating. Lack of
high voltage equipment manufacturing capacity represents a glaring weakness in
our survival and recovery to the extent these transformers are vulnerable.
Nuclear War and Human Extinction
In discussions of human extinction, it is often remarked that nuclear war alone does
not threaten to wipe out humanity. In general, this is correct—nuclear war alone does not
threaten to wipe out humanity. But what about nuclear war in combination with, say,
biological warfare involving anthrax and novel prions? What about nuclear warfare in
conjunction with a nuclear bomb-triggered supervolcano eruption? What about nuclear war
in conjunction with nuclear weapons deliberately planted in coal seams or tropical forests,
which detonate and release even more particulate matter, blocking out the sun so intensely
that it triggers a new Ice Age? Nevertheless, we must concede that total human extinction
is not particularly likely in any of these scenarios. The objective in mentioning them is to
show that those who too quickly dismiss the threat of nuclear war may not have considered
conjunctive scenarios which could greatly exacerbate it. Many of them also underestimate
the intensity of nuclear war and nuclear winter and certainly have not gone so far as to
come to the realization that mass cannibalism would be a likely nutritional necessity in the
instance of nuclear winter. This shows a lack of earnestness or honesty in the analysis.

More Exotic Nuclear Scenarios
Putting mass cannibalism behind us, there are a number of more exotic nuclear risks
which ought to be mentioned, although conventional nuclear war is the greatest and most probable risk. The most notable exotic scenario concerns the use of cobalt bombs, that is, nuclear weapons wrapped in an envelope of cobalt [23]. Upon detonation, the free neutrons
created by the explosions would transmute the cobalt into the highly radioactive isotope
cobalt-60, which has a half-life of 5.27 years. This is in contrast with most other fallout
products of a nuclear explosion, which only have a half-life of a few days and decay to
acceptable levels in just three to five weeks. In the cobalt bomb scenario, the fallout would
remain lethal for decades instead of weeks, and the only way to deal with it would be to
wait for it to break down.
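The persistence follows directly from the half-life; a short sketch of the decay curve:

```python
# Cobalt-60 activity falls by half every 5.27 years, so contaminated
# ground stays dangerous for decades rather than weeks.
def co60_fraction_remaining(years, half_life_years=5.27):
    return 0.5 ** (years / half_life_years)

for years in (1, 5.27, 10, 25, 50):
    print(f"after {years:>5} years: {co60_fraction_remaining(years):.2%} remains")
```

Compare ordinary fission fallout, which, as noted above, decays to acceptable levels within three to five weeks.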
The concept of the cobalt bomb was first made public in a 1950 radio interview by
physicist Leo Szilard. In the interview, he suggested that an arsenal of cobalt bombs could
destroy all life on Earth, an assertion which has since been refuted by experts [24]. Szilard remarked that making the surface of the Earth uninhabitable to humans would require the transmutation and vaporization of a mass of cobalt equal to half that of the battleship USS
Missouri, which weighs in at 40,820 tons. The idea that some country or organization would
1) accumulate enough nuclear bombs to vaporize all this cobalt directly enough to
transmute it, 2) gather 20,000 tons of cobalt, 3) place these bombs all across the world, in
nearly every country, in such a perfect distribution as to cover as much land as possible...
is implausible. Fallout can travel only a limited distance, due to the weight of its particles and the strength of the prevailing winds. A major fallout plume can only travel a few
hundred miles, along a fairly restricted flight path a few dozen miles wide, and does not
distribute any serious quantity of particles or radiation farther than that. Any practical
number of bombs hitting restricted strategic targets would never produce enough fallout to
cover all the inhabitable areas of the Earth, or even more than about 10 percent of it. At the
very least, there are numerous remote islands and areas with no strategic targets, which will remain fallout-free.
When Szilard gave his radio interview and hypothesized that cobalt bombs could
mean the end of humanity, it was generally thought that radioactive fallout would be
dispersed by atmospheric winds evenly across the entire planet, and fall to the earth in a
more or less homogeneous manner. The more contained and directional nature of fallout
plumes was not well understood at the time. If fallout really did behave the way they thought it did, cobalt bombs would be a danger to humanity's survival; but it does not, and they are not. Of course, the inclusion of cobalt bombs in a nuclear war could make a grim
outcome even more grim, by more thoroughly ensuring that people in the fallout zone are
killed by radiation poisoning, but many billions of people would remain outside of any fallout
zone and would be spared from the radioactive cobalt, even if it did make a portion of the
Earth's surface completely uninhabitable for several decades.
Another unusual nuclear weapon risk would be the detonation of nuclear bombs in a
coal seam, resulting in a major smoke injection into the atmosphere. The severe nuclear
winter scenario described by Robock involved the injection of 150 teragrams (150 million
tonnes) of smoke into the upper atmosphere, resulting in a nuclear winter of ten years or
longer. Robock calculated plausible smoke injection levels for various nuclear war
scenarios based on the fuel loading (carbon footprint) of the average person in given
countries. In a scenario involving 50 weapons, each with an extremely low yield of 15
kilotons, for instance, he calculated that Argentina being struck by that number of weapons
of that yield would produce about 1 million tons of smoke from burning cities, Brazil 2
million tons, China 5 million, Egypt 2.5 million, France 1 million, India 3.7 million, Iran 2.4
million, Israel 1 million, Japan 2 million, Pakistan 3 million, Russia 2 million, UK 1 million,
US 1 million [25]. Let us compare these numbers to the plausible smoke injection from a coal
seam detonation.
As the yield and number of the weapons increases, so does the projection of emitted
smoke, though at somewhat of a shallow slope. At the level of 2,000 weapons of 100
kiloton yield, which is more in the ballpark of what a full-fledged nuclear war would be like,
the emitted smoke would be 90 million tons of smoke for China, 43 million tons for Russia,
and 38 million tons for the United States. That adds up to about 171 million tons of smoke,
which would cause the severe nuclear winter scenario described earlier. These numbers
don't even take into account the effects of smoke from Europe, though that is likely to be
similar to the numbers for Russia and the US, or about 40 million tons, giving a possible
grand total of roughly 211 million tons, for that scenario. Robock estimates 180 million
tonnes of smoke released in a scenario involving 4,400 100-kiloton weapons.
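These totals are simple sums of Robock's per-country figures as quoted above:

```python
# Summing the per-country smoke estimates for the 2,000-weapon,
# 100 kt scenario (figures in millions of tonnes, from the text).
smoke_mt = {"China": 90, "Russia": 43, "United States": 38}
subtotal = sum(smoke_mt.values())       # 171 Mt
europe_estimate = 40                    # the text's rough figure for Europe
print(f"US/Russia/China subtotal: {subtotal} Mt of smoke")
print(f"with Europe added: ~{subtotal + europe_estimate} Mt")   # ~211 Mt
```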
We can roughly calculate the amount of particulate matter which would be ejected
from a nuclear explosion in a coal seam. The radius of the rock melted by an underground
nuclear explosion is about 12 meters times the cube root of the yield in kilotons [26]. Consider
a one megaton nuclear device. The number of kilotons (1,000) has a cube root of 10,
multiply that by 12 and we get 120, so it would produce a cavity with a radius of 120
meters. Coal seams have been discovered which are as thick as 45 meters, so a nuclear
explosion from a one-megaton device could vaporize a cylindrical coal area with a radius of
120 meters. That's about 2 million cubic meters of coal. Repeat that with ten devices, and
you have 20 million cubic meters of coal completely vaporized. Figure half of that makes it
to the stratosphere. That gives you a smoke injection of 10 million cubic meters of coal
smoke. A cubic meter of solid butiminous coal is 1,346 kilograms, so that gives us a smoke
injection of 13 billion kilograms, or 13 teragrams. This is substantial relative to the 150 Tg
scenario described earlier, so it could be an aggravating factor if some country or leader
were insane enough to try it. Threatening to nuke a coal seam and cause nuclear winter in
the case of attack could be a considerable Doomsday threat, even if not conducive to
international respect. This sort of self-destructive behavior has been described as the
“Samson Option”. It is represented by the phrase “mess with me, and I'll take down
everyone around me”. We certainly wouldn't put it past some dictators to try this, and thus it
must be included in our sphere of consideration.
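The chain of estimates in this paragraph can be reproduced step by step:

```python
# The coal-seam smoke arithmetic from the text, made explicit.
# Empirical rule quoted above: cavity radius ~ 12 m * (yield in kt)^(1/3) [26].
import math

def cavity_radius_m(yield_kt):
    return 12.0 * yield_kt ** (1 / 3)

r_m = cavity_radius_m(1000)                     # one-megaton device -> 120 m
seam_thickness_m = 45                           # thickest known coal seams
coal_per_shot_m3 = math.pi * r_m ** 2 * seam_thickness_m   # ~2 million m^3
lofted_m3 = 10 * coal_per_shot_m3 / 2           # ten devices, half reaching the stratosphere
COAL_KG_PER_M3 = 1346                           # solid bituminous coal
smoke_tg = lofted_m3 * COAL_KG_PER_M3 / 1e9     # kilograms -> teragrams
print(f"cavity radius: {r_m:.0f} m")
print(f"smoke injection: ~{smoke_tg:.0f} Tg")   # ~13-14 Tg vs the 150 Tg benchmark
```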
Another risk concerning nuclear weapons which has been mentioned in the risk
literature, in passing, is the use of nuclear weapons to vaporize methane clathrate on the
ocean floor, in an attempt to trigger runaway global warming. Methane clathrates are ice crystals with embedded methane, sometimes called "fire ice" for the way they burn when
lit. The 20-year global warming potential (GWP) of methane is 86 times greater than
carbon dioxide, so methane is a potent greenhouse gas. Due to the current trend of global
warming, methane clathrates are melting and releasing methane from the ocean floor in
large amounts, which is thought to be accelerating global warming [27]. However, we strongly
doubt that the use of nuclear weapons could accelerate this trend enough to make much of
a difference, and the global cooling potential of using nuclear weapons on cities, forests, or coal seams is much greater than the global warming potential of using them on methane clathrate deposits. Also, it is doubtful that global warming could cause human extinction
anyway. Global warming will be explored in much more detail in the later chapter on that
subject.
Greater than the risk of nuclear weapons being used to attack objects like coal seams
is the risk that uranium could become much easier to enrich and therefore accessible to
many more states, increasing the overall probability of nuclear war. Obviously, if nuclear
weapons can be built by more states more cheaply, the likelihood that they will be used
increases. Our global safety has been safeguarded by the fact that nuclear weapons are
difficult to produce. It's been more than 70 years since they were invented, but only nine
states have developed nuclear weapons since then. These are the United States, Russia,
China, France, the United Kingdom, India, Pakistan, North Korea, and Israel. All except for
North Korea are thought to have at least 100 warheads, enough to cause a serious (though
not severe, as outlined earlier) nuclear winter. What if another 20 states acquired nuclear
weapons? States like Sudan, South Africa, Libya, Syria, Lebanon, Egypt, Turkey, Saudi
Arabia, Iraq, Afghanistan, and so on. What if North Korea had 10,000 nuclear warheads
instead of just 10-20? With better enrichment technology, in the long run of the 21st century,
it could happen.
Throughout nuclear history, the primary method of uranium enrichment has been
gaseous diffusion through semi-permeable membranes. This method is extremely
expensive and power-hungry. The Portsmouth Gaseous Diffusion Plant south of Piketon,
Ohio covers 640 acres; its largest buildings, the process buildings, cover 93 acres,
extend more than one and a half miles, and contain 10 million square feet of floor space. At
its peak power consumption the plant used 2,100 megawatts of power (it shut down in
2001). In contrast, the power consumption of New York City is about 4,570 megawatts, so
this plant used almost half as much electrical power as New York City. This is an extremely
large amount of power, equivalent to the average power consumption of about 4 million people.
These extreme power requirements put uranium enrichment out of reach of many states.
Construction costs for the plant were about $750 million.
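A quick plausibility check on these figures (a sketch; the per-person power draw is an assumed round number used for illustration, not a sourced statistic):

PLANT_MW = 2100.0                      # peak consumption of the Portsmouth plant
NYC_MW = 4570.0                        # approximate New York City consumption
print(PLANT_MW / NYC_MW)               # ~0.46, i.e. almost half of NYC's power
PER_PERSON_W = 525.0                   # assumed average per-capita electric draw
print(PLANT_MW * 1e6 / PER_PERSON_W)   # ~4,000,000 people's worth of power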
The extreme cost of uranium enrichment has been lowered with the development of
centrifuge enrichment technology, which was adopted in the 1970s. Gas centrifuge
enrichment is the dominant enrichment method today, making up about 54 percent of
worldwide uranium enrichment. Instead of pushing uranium hexaflouride gas through semipermeable membranes, this approach uses centrifuge cascades to separate the slightly
lighter uranium isotope U-235, which can be used in nuclear power plants and nuclear
weapons, from the more common U-238 isotope. The exact process is highly secretive.
Gas centrifuges have been used by Iran to enrich uranium for nuclear plants, which has led
to international sanctions and much noise being made at the United Nations. The gas
only needs to pass through 30-40 centrifuge stages to reach the 90 percent enrichment
level needed for weapons-grade uranium. However, the process is still “crude,” according
to enrichment experts; it must be repeated over and over, and it still consumes massive
amounts of electrical power, at immense cost.
Today, these two methods are the only ones in use. New methods are under development
which may offer significant improvements to cost-effectiveness. One method, currently
being developed by a team of Australian scientists led by Michael Goldsworthy, is called
laser enrichment28. Instead of spinning uranium gas or pushing it through membranes, this
method uses a precisely tuned laser to selectively photoionize uranium-235 atoms, which
then adhere to a collector. The longer name for this is atomic vapor laser isotope
separation. It has been researched since at least 1985, and still has not been made more
cost-effective than centrifuges, though the Australian team claims they have made
breakthroughs that put it within their reach. This may just be hype, and only time will tell. In
the mid-90s, the United States Enrichment Corporation contracted with the Department of
Energy for a $100 million project to develop the technology, but the project failed 29. Iran is known to
have had a secret laser enrichment program, but it was discovered in 2003 and they have
since officially claimed it was dismantled. Whether this actually happened has not been
verified by weapons inspectors.
Laser enrichment technology, if perfected, would hardly offer revolutionary
improvements in enrichment capability. However, it is still making regulatory bodies like
the US Nuclear Regulatory Commission (NRC) concerned about proliferation risk. The
lower cost and more compact nature of laser enrichment would allow enrichment facilities
to be about four times smaller, making them harder to detect from surveillance photos. This
could allow so-called “rogue states” to enrich uranium and possibly manufacture weapons
with impunity.
The effectiveness of centrifuges depends upon manufacturing details and power-to-weight
ratio. It may be possible to make them vastly more effective by inventing
fundamentally new manufacturing technology and building new, better centrifuges with
superior velocities and enrichment throughput. We will explore this possibility in the later
chapter on nanotechnology, and consider the mass-production of nuclear weapons to be a
risk associated with advancements in that field. During that discussion later in the book, be
sure to recall the grave dangers of nuclear warfare and nuclear winter discussed in this
chapter. Even a limited nuclear exchange would cause a serious nuclear winter, and a full
nuclear exchange would cause a crippling one. If enrichment technology becomes much
cheaper and more widespread, we could see a renewed nuclear arms race, where dozens
of states each have tens or even hundreds of thousands of warheads. That could quickly
lead us to an extremely unstable world. Competitive rhetoric between states does not cool
down just because they have nuclear weapons. Human nature and aggressiveness are
constant, but our destructive capability is not.
There are a couple more nuclear risks which we ought to mention, although they are
longer-term risks, more likely to emerge near the end of the 21st century rather than the
beginning or the middle. The first is the use of extremely high-yield bombs to vaporize large
amounts of rainforest, injecting more smoke into the atmosphere than the 150 Tg scenario.
The highest-yield nuclear weapon ever built, Tsar Bomba, had a yield of 50 megatons, and
its intensity was truly impressive. Quoting the nuclear weapon archive 30, the yield of the
bomb was “10 times the combined power of all the conventional explosives used in World
War II, or one quarter of the estimated yield of the 1883 eruption of Krakatoa, and 10% of
the combined yield of all nuclear tests to date.” The fireball alone was five miles (8
kilometers) in diameter and all buildings were destroyed as far as 34 miles (55 km) away
from the explosion. The heat of the explosion was intense enough that it could have
caused third degree burns 62 miles (100 km) away from ground zero. The amount of forest
a bomb of this nature could vaporize would be truly apocalyptic, and it could conceivably
inject enough smoke into the atmosphere to cause a truly world-ending temperature drop.
More research into this possibility is needed. Could humanity survive in a world that has a
temperature equal to interior Antarctica? Let's hope we never find out.
In the same vein, in the more distant future it may be possible to create extremely
large bombs and detonate them high up in the atmosphere, showering the earth with x-rays
and gamma rays in sufficient quantities to be fatal to life on the surface. It is not known
what the maximum possible yield of a nuclear bomb truly is. Apparently, at one point there were
vague references to the idea that a 500 megaton nuclear bomb could be developed by the
Soviet Union and used to trigger a gigantic tidal wave to slam into California 31. There are
scary nuclear possibilities raised during the 50s and 60s which seem to be all
but forgotten today. Who knows what obscure facts about the maximum potential of
nuclear weapons can be found in the United States' and Russia's top secret archives?
There may be devices which could be assembled on short order that make our currently-known
nuclear weapons seem like firecrackers. This is a topic that is scarcely discussed,
and would benefit from academic analysis.
Still another highly intimidating scenario is the potential use of large nuclear bombs to
“uncork” supervolcanoes, specifically the Yellowstone Supervolcano in Yellowstone
National Park, which lies mostly within the boundaries of the state of Wyoming. If this
supervolcano erupted, it would shower the western United States in a layer of ash several
feet deep. Calculations have shown, however, that the cooling potential of volcanic ash is
much lower than that of smoke from mass fires, and an eruption would cause a correspondingly
milder volcanic winter32. In 1816, the “Year Without a Summer” is thought to have been triggered
by a volcanic eruption, and contributed to famine, though modern agriculture would be
much more resistant to such an event occurring in contemporary times. Although
“uncorking” a supervolcano would be a highly effective attack on the United States itself,
and could weaken the country severely, it is not likely to threaten the entirety of humanity
nearly as much as nuclear war would. This is primarily because volcanic ash particles are
larger and denser than smoke particles from fires and would drop from the sky much faster.
There are around 20 dormant supervolcanoes on Earth, and some of them might be
triggered by means of nuclear weapons. If several supervolcanoes were triggered
simultaneously, the result would be much larger than a single eruption.

Nuclear space weapons

Nuclear attack on nuclear power stations

Cheap nukes and new ways of enrichment

Estimating the Probability of Nuclear War Causing Human Extinction
In this chapter, we have thoroughly examined the dangers of nuclear warfare and
nuclear weapons at a level that is commensurate with the best published literature on the
topic. We have integrated the most up-to-date information at the time of this writing
(January 2015) on EMP, nuclear war, and nuclear winter, to give an integrated sketch of the
severe risks. After all this analysis, we can carefully conclude that nuclear weapons are not
likely to cause the end of humanity. The only scenario we can imagine which could truly be
fatal to the human species would be the smoke injection of massive amounts of incinerated
rainforest by Tsar Bomba-class nuclear bombs. Even then, there would probably be
significant equatorial regions where the temperature drop would only be 30 degrees
Celsius or so, and millions of people could survive there, though they might be
alarmed at the darkened skies and unprecedented snowfall.
No matter how much smoke is injected into the atmosphere, Robock found that it would
decrease by a factor of e (~2.718) roughly every 5.5 years, and that makes almost any
conceivable scenario survivable. If people can survive at the peak of nuclear winter, during
the first 5 years, then they will likely be able to survive the whole thing. The Earth is a large
planet, with very high temperatures near the equator, which make it vastly resistant to
global cooling. In planetary history, roughly 700 million years ago, the Earth did go through
the “Cryogenian,” a period where glaciers extended to the equator and significant portions
of the oceans may have frozen over. Multicellular life is thought to have blossomed shortly
after the end of this period, so it may be that it could not have really existed during the
Cryogenian, a testament to how intensely cold it must have been.
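As a sketch of what this e-folding decay implies, here is the calculation in Python; the 5.5-year constant is Robock's, and the 150 Tg starting point is the severe scenario discussed earlier.

import math

E_FOLDING_YEARS = 5.5            # smoke falls by a factor of e every ~5.5 years
initial_tg = 150.0               # the severe 150 Tg scenario

for year in (0, 5, 10, 20):
    remaining = initial_tg * math.exp(-year / E_FOLDING_YEARS)
    print(year, round(remaining, 1))   # 150.0, ~60.4, ~24.4, ~4.0 Tg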
If a “Snowball Earth” scenario could be triggered by enough smoke injection, it would
actually be possible that nuclear weapons could cause the end of humanity. We lack the
technology to create self-sufficient colonies in orbit, on the Moon, or on Mars, and will likely
lack it for many decades, perhaps all of the 21st century. So, we truly do depend on the
habitability of the Earth. Yet, no climatologist has yet performed an analysis of how much
smoke would need to be injected into the atmosphere to trigger Snowball Earth, so we are
quite literally in the dark on this.
As stated at the beginning of the chapter, we have decided to assign a probability of 1
percent to the possibility that nuclear war could cause the end of humanity in the 21st
century, partially because of the plausible intensity of nuclear war as an exacerbating factor
of extinction when considered in combination with other risks. There is no other currently-existing
technology which we can confidently say would wipe out 95 percent of the
population in areas unlucky enough to be ravaged by it. Certainly, there is no historical
precedent for epidemics that kill so many. Although nuclear war lacks the prospect of
totality that the worst biotechnology, nanotechnology, and Artificial Intelligence scenarios
present, we think it would be premature to dismiss the risk. Thus, we cautiously assign a
risk probability of 1 percent, with more probability concentrated towards the end of the
century.

Near misses
I wrote an article on how we could use such data in order to estimate the cumulative
probability of nuclear war up to now.
TL;DR: from other domains we know that the frequency of close calls to actual events is
around 100:1. If we apply this ratio to nuclear war and assume that there were many more
near misses than we know of, we could conclude that the probability of nuclear war was very
high and that we live in an improbable world where it didn't happen.

Yesterday, 27 October, was Arkhipov Day, in memory of the man who prevented nuclear war.
Today, 28 October, is Bordne and Bassett Day, in memory of the Americans who
prevented another near-war event. Bassett did most of the work of
preventing a launch based on a false attack code, and Bordne made the story public.
The history of the Cold War shows us that there were many occasions when the world
stood on the brink of disaster, the most famous of them being the cases
of Petrov, Arkhipov, and the recently revealed Bordne case in Okinawa.
I know of more than ten, but fewer than a hundred, similar cases of varying degrees of reliability.
Other global catastrophic risk near-misses are not nuclear but biological, such as the Ebola
epidemic, swine flu, bird flu, AIDS, oncoviruses and the SV-40 vaccine.
The pertinent question is whether we have survived as a result of observational selection,
or whether these cases are not statistically significant.
In the Cold War era, these types of situations were quite numerous (such as the Cuban
missile crisis). However, in each case, it is difficult to say if the near-miss was actually
dangerous. In some cases, the probability of disaster is subjective, that is, according to
participants it was large, whereas objectively it was small. Other near-misses could be a
real danger, but not be seen as such by operators.
We can define a near-miss of the first type as a case that meets both of the following criteria:
a) safety rules have been violated;
b) emergency measures were applied in order to avoid disaster (e.g. emergency braking
of a vehicle, or refusal to launch nuclear missiles).
A near-miss can also be defined as an event which, according to some participants, was
very dangerous; or as an event during which a number of factors (but not all) of a possible
catastrophe coincided.
Another type of near-miss is the miraculous salvation. This is a situation whereby a
disaster was averted by a miracle, that is, it had to happen, but it did not happen because
of a happy coincidence of newly emerged circumstances (for example, a bullet stuck in the
gun barrel). Obviously, in the case of miraculous salvation the chance of catastrophe was much
higher than in near-misses of the first type, on which we will now focus.
We may take the statistics of near-miss cases from other areas where a known correlation
between near-misses and actual events exists; for example, we can compare the statistics of
near-misses and actual accidents with victims in transport.
Industrial research suggests that one crash accounts for 50-100 near-miss cases in
different areas, and 10,000 human errors or violations of regulations (“Gains from Getting
Near Misses Reported”).
Another survey estimates 1 to 600, another 1 to 300, and even 1 to 3,000 (but in the case of
unplanned maintenance).
The spread of estimates from 100 to 3000 is due to the fact that we are considering
different industries, and different criteria for evaluating a near-miss.
However, the average ratio of near-misses is in the hundreds, and so we cannot conclude
that the observed non-occurrence of nuclear war results from observational selection.
On the other hand, we can use a near-miss frequency to estimate the risk of a global
catastrophe. We will use a lower estimate of 1 in 100 for the ratio of near-miss to real case,
because the type of phenomena for which the level of near-miss is very high will dominate
the probability landscape. (For example, if an epidemic is catastrophic in 1 to 1000 cases,
and for nuclear disasters the ratio is 1 to 100, the near miss in the nuclear field will
dominate).
During the Cold War there were several dozen near-misses, as well as several near-miss
epidemics; this indicates that at the current level of technology we have
about one such case a year, or perhaps more. If we analyze the press, several times a
year there is some kind of situation which could lead to global catastrophe: a threat of
war between North and South Korea, an epidemic, the passage of an asteroid, a global
crisis. Many near-misses also remain classified.
If the average level of safety in regard to global risks does not improve, the frequency of
such cases suggests that a global catastrophe could happen in the next 50-100 years,
which coincides with the estimates obtained by other means.
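A minimal sketch of this estimate, where the one-to-two-per-year near-miss rate and the 1:100 ratio are the assumptions just stated:

NEAR_MISS_TO_EVENT_RATIO = 100.0   # assumed lower-bound industry ratio
near_misses_per_year = 1.0         # assumed current rate of global-risk near-misses

annual_probability = near_misses_per_year / NEAR_MISS_TO_EVENT_RATIO
print(annual_probability)          # 0.01, i.e. ~1 percent per year
print(1.0 / annual_probability)    # ~100 years expected waiting time
# With two near-misses a year, the expected wait halves to ~50 years,
# which is the 50-100 year range given above.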
It is important to increase detailed reporting on such cases in the field of global risks, and
to learn how to draw useful conclusions from them. In addition, we need to reduce the
level of near misses in areas of global risk by rationally and responsibly increasing the
overall level of security measures.

The map of x-risks connected with nuclear weapons
http://immortality-roadmap.com/nukerisk2.pdf

interactive version: http://immortality-roadmap.com/nukerisk3bookmarks.pdf
http://lesswrong.com/lw/n3k/global_catastrophic_risks_connected_with_nuclear/

References

Alan Robock, Luke Oman, Georgiy L. Stenchikov, Owen B. Toon, Charles
Bardeen, and Richard P. Turco. “Climatic consequences of regional nuclear
conflicts”. 2007a. Atmospheric Chemistry and Physics, 7, 2003-2012.

Alan Robock, Luke Oman, and Georgiy L. Stenchikov. “Nuclear winter
revisited with a modern climate model and current nuclear arsenals: Still
catastrophic consequences”. 2007b. Journal of Geophysical Research, 112,
D13107.

Cresson Kearny. Nuclear War Survival Skills. 1979. Oak Ridge National
Laboratory.

Starley L. Thompson and Stephen H. Schneider. “Nuclear Winter
Reappraised”. 1986. Foreign Affairs, Vol. 64, No. 5 (Summer, 1986), pp. 981-1005.

Stephen H. Schneider, letter, Wall Street Journal, 25 November 1986.

Robock 2007b.

R.P. Turco, O.B. Toon, T.P. Ackerman, J.B. Pollack, Carl Sagan. “Climate and
Smoke: an Appraisal of Nuclear Winter”.

Malcolm M. Browne. “Nuclear Winter Theorists Pull Back”. January 23, 1990.
The New York Times.

Robock 2007b.

Alan Robock and Owen Brian Toon. “Self-assured destruction: The climate
impacts of nuclear war”. 2012. Bulletin of the Atomic Scientists, 68(5), 66-74.

Robock 2007a.

Joseph Farah. “EMP Could Leave '9 Out of 10 Americans Dead'”. May 3, 2010.
WorldNetDaily.

Ira Helfand and Alan Robock. “No such thing as safe number of nukes”. June
20, 2013. CNN.

John S. Foster et al. “Report of the Commission to Assess the Threat to the
United States from Electromagnetic Pulse (EMP) Attack”. April 2008. Congress
of the United States.

American Blackout 2013. October 27, 2013. National Geographic Channel.

Daniel DeSimone et al. The Effects of Nuclear War. May 1979. Congress of
the United States.

Daniel Ellsberg. “U.S. Nuclear War Planning for a Hundred Holocausts”.
September 13, 2009. Ellsberg.net.

Farah 2010.

Supply and Disappearance Data. USDA.gov.

Yingcong Dai (2009). The Sichuan Frontier and Tibet: Imperial Strategy in the
Early Qing. University of Washington Press. pp. 22–27.

Costco Annual Report 2013.

“About the NALC”. Native American Land Conservancy.

Brian Clegg. Armageddon Science: The Science of Mass Destruction. 2011.
St. Martins Griffin. p. 77.

The Effects of Nuclear Weapons (Report) (3rd ed.). 1977. Washington, D.C.:
United States Department of Defense and Department of Energy.

Robock 2007a.

Carey Sublette. “The Effects of Underground Explosions”. March 30, 2001.
NuclearWeaponArchive.org.

Michael Marshall. “Major methane release is almost inevitable”. February 21,
2013. New Scientist.

Richard Macey. “Laser enrichment could cut cost of nuclear power”. May 27,
2006. The Sydney Morning Herald.

Macey 2006.

“Big Ivan, The Tsar Bomba (“King of Bombs”)”. September 3, 2007.
NuclearWeaponsArchive.org.

Citation needed for 500 megaton “tidal wave” bomb. (ask Alexei.)

Stephen Self. "The effects and consequences of very large explosive volcanic
eruptions". August 15, 2006. Philosophical Transactions of the Royal Society A,
vol. 364 no. 1845 2073-2097.


Other comments:
1 Satellites may host nuclear weapons for a first EMP strike – and North Korea's first
satellite was on a polar orbit, which is best suited for this.
2 China may have a much larger nuclear arsenal than is known, maybe several thousand
nuclear bombs – link below
3 At some point pure fusion weapons could be created – link below
4 Large nuclear bombs were proposed to serve as an anti-asteroid shield but could be used in
nuclear war. Teller planned to do it.
5 Nuclear summer after nuclear winter
6 Question of nuclear detonation of the atmosphere. LA-602, “Ignition of the Atmosphere with Nuclear
Bombs” [LA-602 1945], proved it to be impossible in air. Another article assessed whether it is possible in sea
water and found that the deuterium concentration is 20 times lower than needed (but this is a small margin, as
some places on Earth may have naturally higher concentrations of deuterium and also lithium, like dry lakes).
There are also other constraints, like the enormous initial bomb required. A large bomb in a uranium shaft or in
a lithium deposit might work, but probably not. This unfinished article discusses the topic further:
http://www.scribd.com/doc/8299748/The-possibility-of-artificial-fusion-explosion-of-giant-planets-and-other-objects-of-Solar-system
7 Risks of accidental nuclear war. Bruce Blair's book is about this. Nuclear proliferation leads to many more
hostile pairs of nuclear states.
8 Nuclear terrorism – provocation of war – attack on a nuclear station. What would happen if a bomb
exploded inside a nuclear reactor?
9 Use of nuclear weapons against bioweapons facilities. Would this kill the virus or disseminate it?

10 The probability of nuclear war is around 1 percent a year, based on a frequency approach that counts
1945 as a nuclear war. But most future nuclear wars may not be all-out nuclear wars. The probability (or,
speaking more precisely, our credence based on known facts and oriented toward risk prevention) of a
nuclear war that stops progress for 10-500 years is on the order of 10 percent.
11 A coronal mass ejection could have the same devastating results as an EMP.
http://www.washingtonpost.com/blogs/capital-weather-gang/wp/2014/07/23/how-a-solar-storm-nearly-destroyed-life-as-we-know-it-two-years-ago/?hpid=z5
In the years 774-775 AD an event 20 times stronger happened: http://arxiv.org/pdf/1212.0490v1.pdf
12 If one country is attacked with a high-altitude EMP, it has an incentive to put other rival countries in the
same conditions – so attacks in many places all over the world are possible.
13 Nuclear stations need electricity for cooling, or they will melt down like Fukushima.

Chapter 10. Global chemical contamination
Chemical weapons are usually not considered a doomsday weapon. There are no
academic references of which the authors are aware that seriously argue that chemical
weaponry could be a threat to the survival of the human species as a whole. However, in
the interests of completeness, and considering distant risks, we will address chemical
weapons in the context of global catastrophic risks here.
Of course, chemical and biological weapons are completely different. Biological
weapons, such as smallpox, may be self-replicating, whereas chemical weapons are not.
VX gas, the most persistent chemical warfare agent, has a maximum persistence in a cold
environment (40-60 degrees F) of thirty to ninety days 1. In a warm environment (70-90
degrees F), it persists for ten to thirty days. In the context of global risks, every habitable
location on the globe would need to be saturated with VX liquid in enough quantity to kill
every, or nearly every, human being within ten days or less. Given currently available
technology, this would be impossible. Even taking into account mass-produced unmanned
aerial vehicles, it would probably take billions of flying robots working around the clock for
many days, drawing on a supply of many thousands of tonnes, to exterminate all human life
on Earth. It does seem to be theoretically possible in the long-term future, if robotics,
nanotechnology, and artificial intelligence continue to advance and if progress in offensive
applications outpaces defensive applications.
The lethal dose for VX gas is only ten milligrams coming into contact with the skin,
30-50 mg if inhaled. Assuming ten billion people and perfectly accurate delivery methods, it
would take about 100 tonnes to exterminate the human population. In reality, it would
probably take 100-1,000 times that to cover sufficient area to be sure of contaminating it,
so, a supply of 10,000-100,000 tonnes would be required. Throughout the course of World
War I, it is estimated that the Germans and French used about 100,000 tons of chlorine
gas, so such stockpiles are certainly within the realm of possibility, it is just global delivery
which presents a technological challenge. In the 1960s, about 77,000 tons of Agent Orange
defoliant was sprayed in Vietnam2, along with over 50,000 tons of Agent White, Blue,
Purple, Pink, and Green. Of course, the problem with using agents like VX gas to try and
wipe out the human species is defensive measures like bunkers with air purification
systems and gas masks, though one can imagine chemical weapons being used in
tandem with other methods.
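A sketch of the quantity estimate above, using the text's assumed lethal dose, population, and 100-1,000x coverage factor:

LETHAL_SKIN_DOSE_MG = 10.0        # assumed lethal dermal dose of VX per person
POPULATION = 10e9                 # assumed ten billion people
MG_PER_TONNE = 1e9

ideal_tonnes = LETHAL_SKIN_DOSE_MG * POPULATION / MG_PER_TONNE
print(ideal_tonnes)               # 100 tonnes with perfect delivery
# Realistic area coverage is assumed to need 100-1,000 times more agent:
print(ideal_tonnes * 100, ideal_tonnes * 1000)   # 10,000-100,000 tonnes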
There are agents more lethal than VX gas, such as botulinum toxin, for which the
lethal dose is only 0.1 micrograms, but it is very unstable in the environment. Such toxins
may one day be used as the lethal payload for military microrobots 3, and a relatively small
amount of toxin, less than a kilogram, would be sufficient to kill everyone on Earth. Again,
delivery is the problem. The risk of microbot-administered botulinum toxin will be
addressed in the chapter on robotics and nanotechnology, and considered a robotic risk
rather than chemical.
Dioxins are another class of compounds which are extremely lethal, with an LD50 (the dose at
which half of the experimental animal models die) of about 0.6 micrograms per kilogram of
body weight, and are very stable in the environment, putting them in the category of
persistent organic pollutants (POPs). A leak of about 25 kilograms of dioxin at Seveso in
Italy in 1976 caused a contamination of 17 square kilometers, killing 3,300 local animals
within days4. 80,000 animals had to be slaughtered to prevent dioxin from entering the
food chain. Dioxins bioaccumulate, meaning they reach higher concentrations in animals
higher on the food chain. Through this, they contaminate meat and other animal products.
The Seveso disaster resulted in no confirmed human casualties, but almost 200 cases of
chloracne, a severe type of disfiguring acne caused by chemical contamination. The total
cost of decontamination exceeded 40 billion lire (US $47.8 million).
The most prominent mental association of dioxin is with Agent Orange. Although
Agent Orange only contains small amounts of dioxin, it was used in sufficient quantities in
Vietnam to cause severe birth defects and other long-term health problems among the
native population. Since dioxin is persistent, it has more potential to be globally destructive
than a substance like VX, which rapidly dissipates. However, since dioxin has never been
used in a purposeful military way to attack enemy armies or civilians, the quantity needed
to kill a large percentage of people in a given area over a given amount of time is not well
understood. Starting with the Seveso contamination numbers of 25 kilograms of dioxin
contaminating a 17 square kilometer area, we can estimate the amount required to
contaminate other inhabited areas to a similar level. The Earth's land surface area minus
Antarctica is about 138 million square kilometers. Half of this is uninhabited, so we can
estimate the inhabited area of the planet as roughly 69 million square kilometers.
Contaminating this entire area on a level comparable to the Seveso disaster would require
100 million kilograms, or 100,000 tonnes of dioxin. This is well within the storage capacity
of a Suezmax-class oil tanker. It is theoretically possible that an industrially developed state
could manufacture this much dioxin over a few years. As always, distribution is the
problem.
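The scaling estimate above can be written out explicitly (a sketch; the Seveso figures and the half-uninhabited assumption are taken from the text):

SEVESO_DIOXIN_KG = 25.0           # mass leaked at Seveso
SEVESO_AREA_KM2 = 17.0            # area contaminated by that leak
LAND_KM2 = 138e6                  # Earth's land area minus Antarctica
inhabited_km2 = LAND_KM2 / 2.0    # assume half of land is uninhabited

required_kg = inhabited_km2 * SEVESO_DIOXIN_KG / SEVESO_AREA_KM2
print(required_kg / 1e3)          # ~101,000 tonnes, i.e. roughly 100,000 tonnes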
Another potential means of global chemical contamination is from some natural
process, or an artificial catalyst or trigger accelerating a natural process. The major risk
factor is the unity of the terrestrial atmosphere. If enough poison is produced somewhere, it
will eventually circulate everywhere. Some candidates for natural causes of global
chemical contamination include: supervolcano eruption 5, release of methane clathrates
causing runaway global warming6, or the sudden oxidation of large, unknown subterranean
mineral deposits, such as heretofore unknown hydrocarbon layers. Supervolcano eruption
and methane clathrate release are important enough topics that they will receive their own
treatment in the later chapter on natural risks, but we want to at least mention them here.
As a separate artificial risk, we may also consider the gradual accumulation of
chemicals destructive to the environment, such as freon, which may be relatively harmless
individually but harmful taken in combination with other destructive chemicals. It may also
be possible to destroy the ozone layer through chemical means, which would increase
the incidence of cancer but would not be likely to wipe out the human species.
Besides the scenarios outlined above, there are eight improbable variants of global
chemical contamination we will mention for the sake of completeness:
a) Global extermination through carbon dioxide poisoning. Breathing
concentrations of carbon dioxide in air at greater than 3 percent is dangerous
over the long term and can cause death by hypercapnia (carbon dioxide
poisoning). The normal atmospheric concentration of CO2 is just 0.03
percent. At the Permian-Triassic boundary, there is evidence that CO2
concentrations increased by 2000 ppm (parts per million) and global
temperature increased by 8 °C (14 °F) in a relatively short period of time,
10,000 years or less7. This was probably caused by extensive volcanism.
Even 2000 ppm is only a 0.2 percent concentration, more than an order of
magnitude below what is needed to directly kill complex organisms (see the
sketch after this list), though
the warming effects of carbon dioxide (along with other factors) were fatal to
96% of all marine species and 70% of all terrestrial vertebrates living at the
time. Risks connected to global warming through greenhouse gas emission or
volcanic eruption will be covered in the chapter on global warming. It is worth
noting that the high CO2 concentrations on planet Venus derive from
volcanism. If several supervolcanoes around the world could be artificially
triggered by nuclear explosions, it might be possible to render the surface of
the earth uninhabitable for complex life.
b) Catastrophic methane release from methane clathrate deposits in the tundra
and on the continental shelf would release immense amounts of methane, a
gas with 80 times more warming potential than carbon dioxide. During the
Permian-Triassic extinction event, it is thought that such a methane release, possibly
driven by a buildup of methane-producing microbes, caused a catastrophic
global warming episode triggering anoxia in the ocean and increased aridity
on land. This resulted in the death of so many plants that the predominant
river pattern of the era switched from meandering to braided, meaning there
were too few plants to channel the water flow.8

c) There may exist gigantic deposits of hydrogen and/or hydrocarbons deep
within the earth, which, if released, could cause all kinds of mayhem, from
destroying the ozone layer to igniting and causing a gigantic explosion. There
may be natural processes which create hydrocarbons within the earth. As an
example, there are natural hydrocarbons on celestial bodies such as Titan,
Saturn's moon, which has hundreds of times the hydrocarbons of all known
gas and oil reserves on Earth. If these reserves exist on Earth, they could be
a vast threat in the form of potential explosive energy and atmospheric
chemical contamination.

d) It may be possible to exhaust the oxygen in the atmosphere by some
spectacular combustion process, such as the quick combustion of massive
deposits of hydrogen or hydrocarbons released from deep within the earth.

e) The fall of a comet with a considerable amount of poisonous gas. Cyanide
gas and cyanide polymers are thought to be a major component of comets 9. A
paper on the topic says, “The original presence on cometary nuclei of frozen
volatiles such as methane, ammonia and water makes them ideal sites for the
formation and condensed-phase polymerization of hydrogen cyanide. We
propose that the non-volatile black crust of comet Halley consists largely of
such polymers.” The tails of comets are rich in cyanide, but it is not dense
enough to cause damage when the Earth passes through such a tail. We can
imagine it causing much more damage if the comet itself fell to Earth,
vaporized, and circulated poisonous dust around the planet.
f) Poisoning of the world ocean through oil or other means. Imagine a very large
undersea oil deposit uncorked through the application of a nuclear weapon. A
series of nuclear weapons could open up a huge channel between the ocean
and an oil well a mile or so deep. Due to the massive size of the resulting
hole, it would be impossible to seal, and would simply release an enormous
quantity of oil into the ocean. The Deepwater Horizon oil leak in 2010
released almost five million barrels of oil into the sea; a nuclear weapon-caused
leak could release considerably more. Whether this could profoundly
threaten the ocean's biosystems, and thereby the rest of the world, has not
been well studied. Terrestrial life is dependent on microorganisms in the
ocean for participation in the carbon cycle.

g) Blowout of the Earth's atmosphere. This would need to be caused by an
explosion so powerful that it accelerates a substantial part of the atmosphere
to escape velocity. Difficult to imagine, but mentioned here for the sake of
completeness.

h) An auto-catalytic reaction extending across the surface of the Earth in the
spirit of “Ice-9” from the novel Cat's Cradle by Kurt Vonnegut. Similar
reactions have been confirmed in narrow circumstances. Nick Szabo
describes it on his blog10: “Self-replicating chemicals are not merely
hypothetical: since Cat's Cradle, scientists have discovered some real-world
example of crystals that seed the environment, converting other forms
(polymorphs) of the crystal into their own. The population of the original
polymorph diminishes as it is converted into the new form: it is a
“disappearing polymorph.” In 1996 Abbott Labs began manufacturing the new
anti-AIDS drug ritonavir. In 1998 a more stable polymorph appeared in the
American manufacturing plant. It converted the old form of the drug into a
new polymorph, Form 2, that did not fight AIDS nearly as well. Abbott’s plant
was contaminated, and it could no longer manufacture effective ritonavir.
Abbott continued to successfully manufacture the drug in its Italian plant.
Then American scientists visited, and that plant too was contaminated and
could henceforth only produce the ineffective Form 2.
Apparently the scientists had carried some Form 2 crystals into the plant on
their clothing.” Could such an autocatalytic reaction be discovered that puts
humanity at risk, perhaps by spreading from brain to brain as a contagious
prion? We might hope not, but the fact is that we don't yet know.
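As promised in item (a), a small sketch of the concentration comparison there (thresholds as given in the text):

DANGEROUS_CO2_PERCENT = 3.0      # long-term dangerous breathing concentration
NORMAL_CO2_PERCENT = 0.03        # normal atmospheric concentration
PERMIAN_RISE_PPM = 2000.0        # estimated CO2 rise at the Permian-Triassic boundary

permian_percent = PERMIAN_RISE_PPM / 10000.0     # ppm to percent: 2000 ppm = 0.2%
print(permian_percent)                           # 0.2
print(DANGEROUS_CO2_PERCENT / permian_percent)   # 15x below direct lethality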
The authors do not rate global chemical contamination as a very severe global
catastrophic or extinction risk. Our estimate of the probability of a severe global chemical
contamination event is on the order of 0.1% for the duration of the 21st century. As with
many of these risks, the current probability is very small, but advances in areas such as
nanotechnology would allow for the cheap mass production of dangerous chemical agents.
Fortunately, those very same advances would also allow means to clean up any chemical
contamination or even enhance living humans to make them immune to chemical
contaminants.
Our conclusion is that although contaminating the
atmosphere or surface of the planet with chemical agents is theoretically possible, the risk
is far outweighed by that of toxic and epidemiological bio-agents, which are not
only self-replicating but have a much better track-record of mortality. Whatever can be
done with chemical weapons, can be done much more cheaply and effectively with
genetically engineered pathogens. Pathogens also have the advantage that the
default kind used in biological warfare would threaten just humans and not the animal or
plant kingdom. Presumably, even psychopaths would be significantly more likely to want to
target humanity specifically and not actually end all life on Earth. The amount of planning
and implementation needed to seriously threaten all life on the planet with chemical
weapons is immense, and such effort would more likely be put to use in biological warfare. However,
we should not rule out the use of chemical weapons in a total war-type scenario, and
should take their possible utilization into account in considering different scenarios.
References
1. Federation of American Scientists. “Chemical Weapons Information Table”. 2007.
2. Aspen Institute. “History: Agent Orange/Dioxin in Vietnam”. August 2011.
3. Jurgen Altmann. Military Nanotechnology: Potential Applications and Preventive
Arms Control. New York: Routledge. 2006.
4. "The Seveso Accident: Its Nature, Extent and Consequences". Ann. Occup. Hyg
(Pergamon Press) 22 (4): 327–370. doi:10.1093/annhyg/22.4.327
5. Morgan T. Jones, R. Stephen J. Sparks, Paul J. Valdes. “The Climatic Impact of
Supervolcanic Ash Blankets”. Climate Dynamics, November 2007, Volume 29, Issue
6, pp 553-564.
6. L. D. Danny Harvey, Zhen Huang. “Evaluation of the potential impact of methane
clathrate destabilization on future global warming”. Journal of Geophysical
Research. 01/1995; 100:2905-2926.
7. Michael R. Rampino and Ken Caldeira. “Major perturbation of ocean chemistry and
a ‘Strangelove Ocean’ after the end-Permian mass extinction”. Terra Nova, vol. 17,
issue 6, 554–559, December 2005.
8. Peter D. Ward, David R. Montgomery, Roger Smith. “Altered River Morphology in
South Africa Related to the Permian-Triassic Extinction”.
9. “Hydrogen cyanide polymers, comets and the origin of life”. Faraday Discussions,
2006, 133, 393-401.
10. Nick Szabo. “Patent goo: self-replicating Paxil”. Unenumerated blog. November
2005.

Chapter 11. Space Weapons
The late 21st century could witness a number of existential dangers related to possible
expansion into space and the deployment of weapons there. Before we explore these, it is
necessary to note that mankind may choose not to engage in extensive space expansion
during the 21st century, so many of these risks may be a non-issue, at least during the next
100 years. A generation of Baby Boomers and their children were inspired and influenced
by science fiction such as Star Trek and Star Wars, which causes us to systematically
overestimate the likelihood of space expansion in the near future. Similarly, limited success
at building launch vehicles by companies such as SpaceX and declarations of desire to
colonize Mars by its CEO Elon Musk are a far cry from self-sustaining, economically
realistic space colonization.
Space colonization feels futuristic, it feels like something that should happen in the
future, but this is not a compelling argument for why it actually will. During the 60s and 70s,
many scientists were utterly convinced that expansion into space was going to occur in the
2000s, with permanent moon bases by the 2020s. This could still happen, but it seems far
less likely than it did during the 70s. In our view, large-scale space colonization is unlikely
during the 21st century (unless there is a Singularity, in which case anything is possible),
but could pick up shortly after it. This chapter examines the possibility that it will happen
during the 21st century even though it may not be the most likely outcome.
There are many challenges to colonizing space which make it more appealing to
consider colonizing places like Canada, Siberia, the Dakotas, or even Greenland or
Antarctica first. Much of this planet is completely uninhabited and unexploited. If we seek to
travel to new realms, exploit new resources, and so on, we ought to look to North Dakota
or Canada before we look to the Moon or Mars. The economic calculus is strongly in favor
of colonizing these locations before we colonize space. First of all, lifting anything into
space is extremely expensive; $2,200 per kilogram ($1,000/lb) to low Earth orbit, at least 1.
Would the American pioneers have ventured West if they needed to pay such a premium
for each kilogram of baggage they carried? Absolutely not. It would be impractical. Second,
space is empty. For the most part, it is a void. All proposals for developing it, such as space
colonies, asteroid mining, or helium-3 harvesting on the Moon, have extremely high capital
investment costs that will make them prohibitive to everyone well into the 21st century.
Third, space and zero gravity are dangerous to human health. Cosmic rays and
weightlessness cause a variety of health problems and a heightened risk of cancer 2, not to
mention the constant risk of micrometeorites3,4 and the hazards of taking spacewalks to
repair even the most minor equipment. These three barriers to entry make space
colonization highly unlikely during the 21st century unless advanced molecular
nanotechnology (MNT) is developed, but few space enthusiasts even know what these
words mean, and as we reviewed in the earlier chapter on the topic, basic research
towards MNT is extremely slow in coming.
To highlight the dangers of space, consider some recent comments by the Canadian
astronaut Robert Thirsk, who calls a one-way Mars trip a “suicide mission”5. Thirsk, who
has spent 204 days in orbit, says we lack the technology to survive a trip to Mars, and that
he spent much of his time in space repairing basic equipment like the craft's CO2
scrubbers and toilet. His comments were prompted by plans by the Netherlands-based
Mars One project to launch a one-way trip to Mars, but they also apply to any long-term
efforts in space, including space stations in low Earth orbit, trips to the asteroids, lunar
colonies, and so on. A prerequisite for space colonization are basic systems, like CO2
scrubbers and toilets, which break down only very rarely. Until we can manufacture these
systems with extreme reliability, we haven't even taken the first serious step towards
colonizing space. You can't colonize space until you can build a toilet that doesn't
constantly break down.
Next, the topic of getting there. Consider a rocket. It is a bomb with a hole poked in
the side. A rocket is a hollow skyscraper filled with enormous amounts of fuel costing tens
of millions of dollars, all of which is burned away in a few minutes, never to be recovered.
Rockets have a tendency to spontaneously explode for the most trivial of reasons, such as
errors in a few lines of code6,7. If the astronauts on-board do happen to make it to orbit in
one piece, a chipped tile is enough to seal their fate during reentry 8. Even if they do make it
to orbit, what is there? Absolutely nothing. To make true use of space requires putting
megatons of equipment, water, soil, and other resources up there, creating a simulacrum of
Earth. Without extremely ambitious engineering projects like Space Piers (to be described
soon) or asteroid-towing spacecraft, a space station is just too small, a vulnerable tin can
filled with slightly nauseous people who must exercise vigorously on a continuous basis to
make sure their bones don't become irreversibly smaller 9. A finger-sized hole in a space
station is enough to kill everyone on board 10. Of the five Space Shuttle orbiters that flew,
two were destroyed in catastrophic accidents. That would be considered a failure. Without radically improved
materials, automation, means of launch, reliability, and tests spanning many decades,
space is only a highly experimental domain suitable for a few dozen specialists at a time.
“Space hotels,” notwithstanding the self-promotional announcements made by companies
such as Russian firm Orbital Technologies11, are more likely to resemble jail cells than the
Ritz for decades to come.
Since the 1970s, space has been the ultimate mismatch between vision and
practicality. This is not to say that space isn't important, it eventually will be. People are just
prone to vastly overestimating how soon that is likely to be. For the foreseeable future (at
least the first half of the 21st century), it is likely to just be a diversion for celebrities who
take quick trips into low Earth orbit. One accident that causes the death of a few celebrities
could set the entire field back by a decade or more.
While the Baby Boomer generation, who are among the most enthusiastic about
space, quickly moves into retirement, a new generation is being raised on computers and
video games, a new “inner space” that offers more than outer space plausibly can 12. Within
a couple decades, there will be compelling virtual reality and haptic suits that put the user
in another place, psychologically speaking. These worlds of our own invention have more
character and practical value than a dangerous vacuum at near absolute zero temperature.
If people want to experience space, they will enjoy it in virtual reality, in the safety of their
homes on terra firma. It will be far cheaper and safer to build interactive spaces that
simulate zero-g with robotic suspension systems and VR goggles than to actually launch
people into space and take that risk. Colonizing space is not like Europeans colonizing the
Americas, where both areas have the same temperature, the same gravity, the same
abundant organic materials, wide open spaces for farming, an atmosphere to protect them
from ultraviolet light, and the familiarity of Earth's landscape. The differences between the
Earth's surface and the surface of the Moon or Mars are shocking and extreme.
Otherworldly landscapes are instantly deadly to unprotected human beings. Without a
space suit, a person on Mars would lose consciousness in 15 seconds due to a lack of
oxygen. Within 30 seconds to 1 minute, bodily fluids begin to boil due to low pressure. This is fatal.

To construct an extremely effective and reliable space suit, such as a buckypaper skintight
suit, would likely require molecular manufacturing.
Few advocates of space colonization understand the level of technological
improvement which would be required for humans to live in orbit, on the Moon, or on Mars in
any appreciable numbers for any substantial length of time. You'd need a space elevator,
Space Pier13, or a large set of mass accelerators, which would cost literally trillions of dollars
to build. (Not enough rockets could be built to launch thousands of people and all the
resources they would need; they are too expensive.) Construction would be a multi-decade
project unless it were carried out almost completely by robots. Even if such a bridge were
built, the energy costs of ferrying items up and sending them out would still be in the
hundreds of dollars per kilogram. Loads of a few tens of tons could be sent every 5-10
minutes or so at most per track (for a Space Pier), which is a serious limitation if we
consider that a space station that can hold just 3,000 people would weigh seven million
tons14. The original proposed design for Kalpana One, a space station, assumes $500/kg
launch costs and hundreds of thousands of flights from the Moon, Earth, and NEOs to
deliver the necessary material, including millions of tons of regolith for radiation shielding.
This is all to build a space station for just a few thousand people which has no economic
value. The inhabitants would be sealed on the inside of a cylinder. It seems very difficult to
economically justify such a project unless the colony is filled with multi-millionaires giving
up a major portion of their wealth just to live there. Meanwhile, without advanced
robotics or next-generation self-repairing materials to protect them, a single hull breach
would be all it takes to kill everyone on board. The cost of building a rotating colony alone
is prohibitive unless the majority of the materials-gathering and launch work is conducted in
an automated fashion by robots and artificial intelligences. Without the assistance of highly
advanced nano-manufacturing and artificial intelligence, such a project would likely be
delayed to the early 22nd century. It is often helpful to belabor these points, since so many
intelligent people have such an emotional over-investment in space colonization and space
technologies, and a naïve underestimation of the difficulties involved.
Keeping all this in mind, this chapter will look to a possible future in the late 21st
century, where, due to abrupt and seemingly miraculous breakthroughs in automation and
nano-manufacturing, it becomes economically feasible to construct large facilities in space,
including space colonies and asteroid mines, which fundamentally change the context of
human civilization and open up entirely new risks. We envision scenarios where hundreds
of thousands of people can make it into space with the required tools to keep them there,
within the lifetime of some people alive today (the 2060s-2100s). The possibility of
expanding into space would tempt the nations of Earth to compete for new resources, to
gain a foothold in orbit or on the Moon before their rivals do. Increased competition
between the United States, Russia, and China could become plausible, much like
competition between the US and Russia for Arctic resources is looming today. The
scenarios analyzed in this chapter also have relevance to human extinction risks on
timescales beyond the 21st century, and we briefly violate our exclusive focus on the 21st
century in some of these sections.
Overview of Space Risks
Space risks are in a different category from biotech risks, nuclear risks,
nanotechnology, robotics, and Artificial Intelligence. They are in a category of lower
intensity. Space risks require much more scientific effort and advanced technology to
become a serious global threat than any of the other risks mentioned. Even the smallest
space risks that plausibly involve the extinction of mankind involve megaengineering
projects with space stations tens, hundreds, even hundreds of thousands of kilometers
across. Natural space risks, such as Gamma Ray Bursts or massive asteroid or large
comet impacts, only occur once every few hundred million years. With such low
probabilities of natural disasters from space, our attention is well spent on artificial space
weapons instead, which could be constructed on the timescale of decades or centuries.
Though such risks may seem far off today, they may develop over the course of the 21st or
22nd centuries to become a more substantial portion of the total probability mass of global
catastrophic risk. Likewise, they may not. The only way to make a realistic estimate is to
observe technological developments as they progress. Today's space efforts, such as
those by SpaceX, likely have very little to do with future space possibilities, since, as we've
argued and will continue to argue, any large-scale future space exploitation will likely be
based on MNT, not on anything we are developing today. Whether space technology
appears to be progressing slowly or quickly from our vantage point, it will be leapfrogged
when and if MNT is developed. If MNT is not developed in the 21st century, then space
technologies pose no immediate threat, since weapons platforms will not be built on a
scale large enough to do any serious damage to Earth. Full-scale space development and
possible global risk from space is almost wholly dependent on molecular manufacturing; it
is hard to imagine it happening otherwise, and especially not in this century. Other
technologies simply cannot build objects on the scale needed to be notable in the context of
global risk. The masses and energies involved are too great, on the order of tens of
thousands of times greater than global annual electricity consumption. We discuss this in
more detail shortly.
There are four primary anthropogenic space risks: 1) orbiting mirrors or particle beam
weapons that set cities, villages, soldiers, and civilians on fire, 2) nuclear weapons
launched from space, 3) biological or chemical weapons dispersed from space, and 4)
hitting the Earth with a fast-moving projectile. Let's briefly review these. To wipe out
humanity with orbiting mirrors, the aspiring evil dictator would need to build a lot of them;
hundreds or thousands, each nearly a mile in diameter, then painstakingly aim them at
every human being on Earth until they all died. This would probably take decades. Then
the dictator would need to kill himself and all his followers, or accidentally achieve the
same, otherwise the total extinction of humanity (what this book is exclusively concerned
with) would not be secured. This is obviously not a highly plausible scenario, but we aren't
ruling anything out here. The nuclear weapons scenario is more plausible; enough nuclear
weapons could be launched from a space platform that the Earth goes through a crippling
nuclear winter and becomes temporarily uninhabitable. This could be combined with space
mirror attacks, artificially triggering supervolcano eruptions, and so on. The third scenario,
dispersion of biological weapons, will be discussed in more detail later in the chapter. The
fourth, hitting the Earth with a giant projectile, is very complicated and energy-intensive,
and we also cover it at some length here. This scenario is notable because while difficult to
pull off, a sufficiently large and fast projectile would be extremely destructive to the entire
surface of the Earth, sterilizing it in one go. Such an attack could literally burn everything
on the surface to ashes and make it nearly uninhabitable. Fortunately, it would require
about 100,000 times more energy than the United States consumes in a year to accelerate
a projectile to a suitable speed, among other challenges.
Deviation of Asteroids
The first catastrophic space risk that many people immediately think of is the
deviation of an asteroid to impact Earth, like the kind that killed off the dinosaurs. There are
a number of reasons, however, that this would be difficult to carry out, inertia being
foremost among them. Other attack methods would be preferable from a military and cost
perspective if one were trying to do a tremendous amount of damage to the planet. The
energy required to deviate an asteroid of any substantial size is prohibitive, so much so
that in nearly every case, it would be preferable to just build an iron projectile and launch it
directly at the Earth with rockets, or to use nuclear weapons or some other destructive
means instead.
One of the definitions of a planet is that it is a celestial body which has cleared the neighborhood around its orbit. Its great mass and gravity have pulled in all the rocks that were in its orbital neighborhood billions of years ago, when the solar system was formed out of dust and rubble. These ancient rocks are mostly long gone from Earth's orbit. Of the three categories of Near-Earth Objects (NEOs), the closest, the Aten asteroids, number only 815, and nearly all of them are smaller than 100 m (330 ft) in diameter15. A 100 meter diameter asteroid, impacting into dense rock, creates about a 1 megaton explosion, the size of a modern nuclear weapon16. This is enough to wipe out a city if it is in the wrong place at the wrong time, but is not likely to have any global effects. About 867 NEOs are 1 km (3,280 ft) in size or greater, and 167 of these are categorized as PHOs (potentially hazardous objects)17. About 92 percent of these are estimated to have been discovered so far. If a 1 km (0.6 mi) wide impactor hit the Earth, it would release a couple of tens of thousands of megatons of TNT equivalent of energy, enough to create a 12 km (7 mi) crater and a century-long period of lower temperatures, which would affect harvests worldwide and could lead to billions of deaths by starvation18,19,20,21,22,23. It would be a catastrophic event, creating an explosion more than a hundred times greater than the largest atom bomb ever detonated, but it would not threaten humanity in general. The asteroid that wiped out the dinosaurs was ten times larger.
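As a rough cross-check of the 1 km figure, here is a back-of-the-envelope sketch in Python (the ~3,000 kg/m3 stony density and ~17 km/s impact velocity are assumed typical values, not taken from the cited studies; smaller impactors lose much of their energy to the atmosphere, so this simple formula overestimates their ground-level yield):

import math

MEGATON_TNT = 4.184e15  # joules per megaton of TNT

def impact_energy_mt(diameter_m, density=3000.0, velocity=17000.0):
    # Kinetic energy of a spherical stony impactor, in megatons of TNT.
    radius = diameter_m / 2.0
    mass = density * (4.0 / 3.0) * math.pi * radius ** 3  # kg
    return 0.5 * mass * velocity ** 2 / MEGATON_TNT

print(f"1 km impactor: ~{impact_energy_mt(1000):,.0f} MT")  # ~50,000 MT, i.e. tens of thousands
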
To evaluate the risk to humanity from asteroid redirection, we located the largest possible NEO with a future trajectory that brings it close to Earth, and calculated the energy input which would be required to cause it to impact. The asteroid that stands out the most is 4179 Toutatis, which will pass within 8 lunar distances of the Earth in 2069. The distance to the Moon is 384,400 km (238,855 miles), meaning 8 lunar distances is 3,075,200 km (1,910,840 mi) from Earth. 4179 Toutatis is a huge rock with dimensions approximately 4.75×2.4×1.95 km (2.95×1.49×1.21 mi), shaped like a lumpy potato, with a mass of about 50 billion tonnes. Imagine pushing 50 billion tonnes for almost two million miles by hand. Seems impossible, right? Pushing it with a rocket turns out to be almost as hard, by both the standards of today's technology and the likely technology available in the 21st century. About 1.5 × 10^16 newton-seconds of impulse would be required, equal to the total impulse of about 2,142,857 Saturn V heavy-lift rockets. The Saturn V rocket is what took the Apollo astronauts to the Moon; it is 363.0 feet (110.6 m) tall, with a diameter of 33.0 feet (10.1 m), weighs 3,000,000 kg (6,600,000 pounds), and costs about $47.25 billion in 2015 dollars to build. Thus, it would cost about $101,250 trillion to redirect the asteroid with currently imaginable technology, about 6,457 times the annual GDP of the United States. But wait: those rockets need other rockets to send them up to space in one piece and unused. If we just sent the rockets up into space on their own, they would be depleted and couldn't push the asteroid. You get the idea. If a nation could afford hundreds of millions or possibly billions of Moon rockets, it would be far cheaper and more destructive to directly bombard the target with such rockets than to redirect an asteroid that is mostly made of loose rubble anyway and wouldn't even cause that much destruction when it lands. Various impact effects calculators available online allow for an estimate of the immediate destruction caused by a single impact24.
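The arithmetic behind these figures can be reproduced directly (a sketch using only the numbers above; the per-rocket impulse and unit cost are this chapter's working assumptions, not official NASA figures):

TOUTATIS_MASS = 5.0e13       # kg, about 50 billion tonnes
IMPULSE_NEEDED = 1.5e16      # newton-seconds, as estimated above
SATURN_V_IMPULSE = 7.0e9     # N*s per rocket, implied by the figures above
SATURN_V_COST = 47.25e9      # dollars per rocket in 2015 dollars, as above

delta_v = IMPULSE_NEEDED / TOUTATIS_MASS         # ~300 m/s of velocity change
rockets = IMPULSE_NEEDED / SATURN_V_IMPULSE      # ~2.14 million Saturn Vs
cost_trillions = rockets * SATURN_V_COST / 1e12  # ~$101,250 trillion

print(f"delta-v: {delta_v:.0f} m/s; rockets: {rockets:,.0f}; "
      f"cost: ${cost_trillions:,.0f} trillion")
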
Other asteroids are even more massive or distant from the Earth. 1036 Ganymed, the largest NEO, is about 34 km (21 mi) in diameter, with a mass of about 10^17 kg, a hundred trillion tonnes. Its closest approach to the Earth, 55,964,100 km (34,774,500 mi), is roughly a third of the distance between the Earth and the Sun. Ganymed would destroy practically everything on the surface of the Earth if it impacted us, but with masses and distances of that magnitude, no one is directing it to hit anything. It is difficult to intuitively imagine how much momentum a 34 km (21 mi) wide asteroid has, and how much energy is needed to move it even a few feet off its current course. Exploding all the atomic bombs in the US and Russian nuclear arsenals would barely scratch it. The entire energy output of the history of human civilization would scarcely move it by a few hundred feet. One day, a super-advanced civilization may be able to slowly move objects like this with energy from solar panels larger than Earth's surface, but it is not at the top of our list of risks to humanity in the 21st century. The same applies to breaking off a piece of Ganymed and moving it; it is simply too distant. Asteroids in the asteroid belt are even more distant, and even more impractical to move. It would be far easier to build a giant furnace and cook the Earth's surface to a crisp than to move these distant asteroids.
Chicxulub Impact
As many know, approximately 65.5 million years ago, an asteroid 10 km (6 mi) in diameter crashed into the Earth, causing the extinction of roughly three-quarters of all terrestrial species, including all non-avian dinosaurs, and a third of marine species25. The effects of this object hitting the Earth, just north of the present-day Yucatan peninsula in Mexico, were severe and global. The impact kicked up about 5 × 10^15 kg of flaming ejecta, sending it well above the Earth's atmosphere and raining down around the globe at velocities between 5 and 10 km/s26. Reentering the skies worldwide, the tremendous air friction made this material glow red-hot and broiled the surface in thermal radiation equivalent to 1 megaton nuclear bombs detonated at 6 km (3.7 mi) intervals around the globe. This is like detonating about 20 million nuclear weapons in the skies above the Earth. For at least an hour and as long as several hours, the entire surface, from Antarctica to the equator, was bathed in thermal radiation 50 to 150 times more intense than full sunlight. This is enough to ignite most wood, and certainly enough to ignite all the dry tinder on the world's forest floors. A thick ash layer in the geologic record shows us that the entire biosphere burned down. Being on the opposite side of the planet from the impact would have hardly helped. In fact, the amount of flaming ejecta raining from the sky at the so-called "antipodal point" was greater than anywhere except the area immediately around the impact.
The ash layer evidence that the biosphere burned down is corroborated by the survival pattern of species that made it through the K-T extinction event, as it is called27. The animals that survived were those with the ability to burrow or hide underwater during the heat flux in the hour or two after the impact. During this time, the surface was literally as hot as a broiler, and almost every single large animal, including favorites like Tyrannosaurus rex and Triceratops, would have been cooked to a blackened steak. A few fortunate enough to hide underwater or in caves may have survived. Large survivors wouldn't have lasted long, however, since the impact winter began within a few weeks to a few months after the impact, lasting for decades and causing even further destruction28. Living plant and animal matter would have been scarce, meaning only detritivores (animals that can survive on detritus) had enough to eat. During this time, only a few isolated communities in "refugia," protected places like swamps alongside overhanging cliffs or equatorial islands, would have survived. Examples of animals that would have had a relatively easy time surviving are the ancestors of modern-day earthworms, pill bugs, and millipedes, all of which feed mainly on detritus.
Events like the Chicxulub impact, which happen only once every few hundred million
years, are different from some of the earlier risks discussed in this book (except AI) in the
sense that they are more comprehensively destructive and involve the release of more
energy, especially energy in the form of heat. Whereas some individuals may have
immunity to certain microbes during a global plague, or be able to survive nuclear winter in
a self-sufficient fortress in the mountains, an asteroid or comet impact that throws flaming
ejecta across the planet has a totality and intensity that is hard for many other risks to
match. The only risk we've discussed so far that is comparable is an Artificial Intelligence
using nano-robots to convert the entire Earth into paperclips. An asteroid impact is
intermediate between AI risk and the risk of a bio-engineered multi-plague or similar event
in terms of its brute killing power. It is intense enough to destroy the entire surface through
brute heat, but not dangerous enough to intelligently seek out and kill humans.
After the multi-decade impact winter, the Chicxulub impactor caused centuries of greater-than-normal temperatures due to greenhouse effects from all the carbon dioxide released by the incinerated biosphere. This, in combination with the scarcity of plants caused by their conflagration, caused huge interior continental regions to transform into deserts and badlands. The only comparable natural events that can create climatic changes of this magnitude are probably prolonged flood basalt eruptions, such as the Siberian Traps at the end of the Permian era, 250 million years ago.
The K-T extinction was highly selective. Many alligator, turtle, and salamander species survived. This was because they could both hide underwater and eat detritus. In general, detritus-eating animals were able to survive, since detritus was all there was for many years after the impact. Like the alligators and turtles of 65 million years ago, if a Chicxulub-sized asteroid were to hit us today, many human beings would figure out a way to survive, both during the initial impact and in the ensuing years. Like many of the scenarios in this book, such an event would likely wipe out 99 to 99.99 percent of humanity, but many (over 500,000) would survive. Many people work underground or in places where they would be completely shielded from the initial thermal pulse. Even if everything exposed on the surface burned, there would be sufficient stored food underground to keep millions alive for decades without farming. Wheat berries stored in an oxygen-free environment can retain nutritional value for hundreds of years. This would give humanity enough time to start growing new food and locate refugia where possible. The Earth's fossil fuels and many functional machines and electronics would remain, giving us tools to stage a recovery. A five degree or even ten degree Celsius temperature increase lasting hundreds of years, while extremely harsh and potentially lethal to as many as 90-95 percent of all living plant and animal species, could not wipe out every single human. There would always be somewhere, like Iceland, Svalbard, northern Siberia, and Canada, which would remain at mild temperatures even if the global average greatly increased. Humans are not dinosaurs. We are smarter, our nutritional needs are fewer, and we would be able to survive even if photosynthesis completely shut down for several years.
A brief word here on the difficulty of surviving various global temperature changes. During humanity's 200,000 years of existence on Earth, there have been global temperature variations of significant magnitude, mostly in the negative direction relative to the present day. Antarctic ice cores show that the global temperature average during the last Ice Age was about 8-9 degrees Celsius cooler than it is now29. We know that humanity and other animal species can survive significant drops in temperature. What makes impact winter qualitatively different from a simple Ice Age is the complete shutdown of photosynthesis caused by ash-choked skies. Even just a few years of this is enough to reshape global biota entirely. The third effect, global warming, is distinct from global cooling and photosynthetic shutdown, but like them it follows from a major asteroid impact, and it has the potential to be just as deadly, because it is more difficult for life to adapt to increased temperatures than to decreased temperatures. If an animal is cold, it can eat more food, or migrate to a warmer place, and survive. If an animal is too hot, it can migrate, but it suffers more in the process of doing so. Overheating is extremely dangerous, which is why tropical animals have so many elaborate adaptations for preventing it. The relative contributions of initial firestorms, impact winter, and impact summer to species extinction at the K-T boundary are poorly studied and require more research.
Daedalus Impact
To consider an impact that could truly wipe out humanity completely, we have to analyze the impact of a larger object, a faster object, or an object with both qualities. We should also expand our scope beyond natural asteroids and comets, large versions of which impact us only rarely, and consider the possibility of artificially accelerated objects. At some point in the future, humanity will probably harvest the Sun's energy in larger amounts, with huge arrays of solar panels which might even approach the size of planetary surfaces. If we do spread beyond Earth and conquer the solar system, this would be very useful for our energy needs. Such systems would give humanity and our descendants access to tremendous amounts of energy, enough to accelerate large objects up to a significant fraction of the speed of light, say 0.1 c.
Assuming sophisticated automated robotics, artificial intelligence, and nanotechnology, it would be possible to disassemble asteroids and convert them into massive solar arrays within the next hundred years. If the right technology were in place, it could be done with the press of a button, and the size of the asteroid itself would be immaterial. This energy could then be applied to harvesting other fuels, such as helium-3 in the atmosphere of Uranus. This could give human groups access to tremendous amounts of energy, possibly even thousands of times greater than the Earth's present power consumption, within a hundred to two hundred years. We aren't saying this is particularly likely, just imaginable. After all, the Industrial Revolution also rapidly improved the capabilities of humanity within a short amount of time, and some scientists and engineers anticipate an even greater capability boost from molecular manufacturing, AI, and robotics during the 21st century. So it shouldn't be considered completely out of the question.
The energy released by the Chicxulub impactor was between 310 million megatons and 13.86 billion megatons of TNT, according to a careful study30. (100 million megatons, a frequently cited number, is too low.) As a comparison, the largest nuclear bomb ever detonated, Tsar Bomba, had a yield of 50 megatons. Scaling way up, we make the general assumption that it would require an explosion with a yield of 200 billion megatons to completely wipe out humanity. This would be an explosion much larger than anything that has occurred during the era of multicellular life. An asteroid has not triggered an explosion this large since the Late Heavy Bombardment, roughly 4 billion years ago, when huge asteroids were routinely hitting the Earth. It seems like a roughly arbitrary number, which it is, but it has a few things to recommend it: 1) it's more than ten times greater than the explosion which wiped out the dinosaurs, 2) it's large enough to be beyond the class of objects that has any chance of hitting Earth naturally, but small enough that it isn't completely outlandish, and 3) it's easily large enough to argue that it could wipe out all of humanity, even that it would be likely to do so, in the absence of bunkers deliberately built to last through decades of major temperature drops and no farming.
An impact releasing energy equivalent to 200 billion megatons (2 × 10^11 MT) of TNT is enormous and difficult to imagine. The following effects are all from the results of the impact simulator of the Earth Impact Effects Program, a collaboration between astronomers and physicists at Imperial College London and Purdue University. A 200 billion MT impact would release approximately 10^27 joules of energy, opening a "crater" in water (assuming it strikes the ocean) with a diameter of 881 km (547 mi), about the distance between San Francisco and Las Vegas. The crater opened on the seafloor would be about 531 km (329 mi) in diameter, which, after the crater wall undergoes collapse, would form a final crater 1,210 km (749 mi) in diameter. This crater would be so large it would span almost the entire states of California and Nevada, far larger than the Chicxulub crater, which is only 180 km (110 mi) in diameter. The explosion, between roughly 14 and 645 times more powerful than the Chicxulub impact (using the energy range above), leaves a crater about 7 times wider, with about 49 times the area. All this greater area corresponds to more dust and soot clogging up the sky in the decades to come, as well as more molten rock raining from the heavens in the few hours following the impact. Both heat and dust translate into killing power.
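For reference, the unit conversions behind these figures can be checked in a few lines (a sketch; the only added constant is the standard 4.184 × 10^15 joules per megaton of TNT):

MEGATON_TNT = 4.184e15                        # joules per megaton of TNT

daedalus_mt = 2e11                            # 200 billion megatons, the working assumption
chicxulub_lo, chicxulub_hi = 310e6, 13.86e9   # energy range from the cited study

print(f"Daedalus energy: {daedalus_mt * MEGATON_TNT:.2e} J")  # ~8.4e26 J, i.e. ~1e27 J
print(f"{daedalus_mt / chicxulub_hi:.0f}x to "
      f"{daedalus_mt / chicxulub_lo:.0f}x the Chicxulub impact")  # ~14x to ~645x
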
The physical effects of the impact itself would be mind-boggling. The fireball generated would be more than 122 km (76 mi) across. At a distance of 1,000 miles, the thermal pulse would hit 15.6 seconds after impact, with a duration of 6.74 hours and a radiant heat flux 5,330 times greater than full sunlight (all these numbers are from the impact effects calculator). At a distance of 3,000 miles (4,830 km), equivalent to that between San Francisco and New York, the fireball would appear 5.76 times larger than the sun, with an intensity 13.7 times greater than full sunlight, enough to ignite clothing, plywood, grass, and deciduous trees, and to inflict third-degree burns over most of the body. Roughly 815,000 cubic miles of molten material would be ejected, arriving approximately 16.1 minutes after impact. The impact would cause shaking at 12.1 on the Richter scale, greater than any earthquake in recorded history. Even at this continental distance of 3,000 miles, the ejecta, which arrives after about 25.4 minutes as an all-incinerating wave of flaming dust particles, has an average thickness on the ground of 6.28 meters (20.6 ft). That is incredible, enough to cover and kill just about everything. The air blast would arrive about 4 full hours after impact, with a maximum wind velocity of 1,760 mph, peak overpressure of 10.6 bars (150 psi), and a sound intensity of 121 dB. At a distance of 12,450 miles (20,037 km), the maximum possible distance from the impact, the air blast arrives after 16.8 hours, with a peak overpressure of 0.5 bars (7.6 psi), a maximum wind velocity of 233 mph, and a sound intensity of 95 dB. That blast would be enough to collapse almost all wood-frame buildings and blow down 90 percent of trees. Altogether, more than 99.99 percent of the world's trees would be blown down, regardless of where the impact hits.
No authors have considered in much detail the effects such a blast would have on humanity itself, probably because it could only be caused by an artificial impact rather than a natural one, and anthropogenic high-velocity object impacts are rarely considered, especially at that energy level. What is particularly interesting about blasts of around this level is that they are somewhere between "sure survival" and "sure death" for the human species, so there is definite uncertainty about the likely outcome. Nearly anyone not in an underground shelter would be destroyed, just as they would be in the winds of a powerful hurricane. Hurricanes do not bring flaming hot ejecta that lands in 10-ft thick layers and burns away all oxygen on the surface, however. The Chicxulub impact kicked up ~5 × 10^15 kg of material, which deposited an average of 10 kg (22 lb) per square meter, forming a layer 3-4 mm thick on average. The larger impact we describe above, which we will call a Daedalus impact for reasons which will become clear shortly, would eject at least 3,625,000 cubic miles (15,110,000 cu km) of material, equivalent to a cube 153 miles (246 km) on a side, into the atmosphere, for a mass of ~5 × 10^19 kg, roughly 10,000 times greater. Extrapolating, this means we could expect an average deposition of 100 tonnes (110 tons) per square meter, forming a layer 30-40 meters (100-130 ft) thick on average! This is inconsistent with the Impact Effects Calculator result of just 6 meters thickness only 3,000 miles away, and knowing the difference is crucial. (Note: double-check this result and consult with an expert.)
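The extrapolation is simple division over the Earth's surface area; a sketch (the 2,700 kg/m3 settled-ejecta density is an assumed round value used to convert mass load into layer thickness):

EARTH_SURFACE = 5.1e14     # m^2
EJECTA_DENSITY = 2700.0    # kg/m^3, assumed density of settled ejecta

for name, mass_kg in [("Chicxulub", 5e15), ("Daedalus", 5e19)]:
    per_m2 = mass_kg / EARTH_SURFACE         # kg per square meter
    thickness_m = per_m2 / EJECTA_DENSITY    # meters, if spread evenly
    print(f"{name}: {per_m2:,.0f} kg/m^2, layer ~{thickness_m:.3f} m thick")
# Chicxulub: ~10 kg/m^2 and ~4 mm; Daedalus: ~100,000 kg/m^2 and ~36 m
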
If the impact really did leave a layer of molten silicate dust 30 meters thick, 100
tonnes per square meter, it is easy to imagine how that might threaten the existence of
every last human being, especially as the post-impact years tick by and food is hard to
come by. During the K-T event, global temperature is thought to have dropped by 13 Kelvin
after 20 days31. With 10,000 times as much dust in the atmosphere as the K-T event, how
much cooling would the Daedalus event cause? It could be catastrophic cooling, not just in
the sense of wiping out 75-90 percent of all terrestrial animal species, but more in the
sense of potentially wiping out all terrestrial non-arthropod species.
There would be some warning time for those distant from the blast. At a distance of about 10,000 km (6,210 miles), the seismic shock would arrive after about 33.3 minutes, measuring 12.2 on the Richter scale. In comparison, the 1906 San Francisco earthquake, which destroyed 80 percent of the city, was about 7.8 on the moment magnitude scale (the modern successor to the Richter scale), which corresponds to about 7.6 on the old Richter scale. Each increment on the scale corresponds to a 10 times greater shaking amplitude, so the earthquake following a Daedalus impact would have a shaking amplitude more than 20,000 times that of the San Francisco earthquake at the hypocenter, naively implying a shaking amplitude of tens of miles. At a distance of 10,000 km, this would rate at VI or VII on the Mercalli scale. VI on the Mercalli scale refers to "Felt by all, many frightened. Some heavy furniture moved; a few instances of fallen plaster. Damage slight." VII refers to "Damage negligible in buildings of good design and construction; slight to moderate in well-built ordinary structures; considerable damage in poorly built or badly designed structures; some chimneys broken." Upon feeling the seismic shock, people would check the Internet, television, or radio to find news that an object had hit the Earth, and that the deadly blast wave was on its way. Starting after about 45 minutes, the ejecta would begin arriving, the red-hot rain of dust getting thicker over the next 25 minutes and reaching maximum intensity 70 minutes after impact. This all-encompassing heat would be sufficient to cook everything on the surface. The devastating blast wave, which is the largest physical disruption and would include maximum winds of up to 677 mph and pressures of 30.8 psi, similar to those felt at ground zero of a nuclear explosion, would arrive after about 8.42 hours. This would completely level everything on the ground, including any structures.
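The amplitude comparison is a one-line calculation (Richter-style magnitudes are logarithmic, each whole step being a tenfold increase in shaking amplitude; the 7.8 figure is the moment magnitude quoted above):

amplitude_ratio = 10 ** (12.2 - 7.8)  # Daedalus-impact quake vs. 1906 San Francisco
print(f"~{amplitude_ratio:,.0f}x the shaking amplitude")  # ~25,000x, i.e. "more than 20,000"
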
Contrary to popular conception, pressures similar to those directly underneath a nuclear explosion are survivable, using a simple earth-arched structure over a closed trench. One such secure trench was located near ground zero in Nagasaki, where pressures approached 50 psi. Such trenches require a secure blast door, however. Perhaps a greater problem would be the buildup of ejecta, which would rest everywhere and exert a great amount of pressure, crushing unreinforced structures and their inhabitants. It would also be super-hot. We can imagine a more hopeful situation on the side of a cliff or mountain, where ejecta slides off to lower altitudes, or in tropical lagoons and lakes, where even 100 ft worth of dust may simply sink to the bottom, sparing anyone floating on the surface. Oxygen would be a greater problem, however, meaning that those in elevated, steep areas with plenty of fresh air would be at an advantage. Such areas would have increased exposure to thermal radiation, on the other hand, making it a tradeoff. Ideal for survival would be a secure structure carved into a cliff or mountainside, or a hollowed-out mountain like the Cheyenne Mountain nuclear bunker in Colorado, which is manned by about 1,400 people. The bunker has a water reservoir of 1,800,000 US gal (6,800,000 l), more than enough to support its 1,400 workers for over a year. One wonders if the intense heat of a 100-130 ft layer of molten dust would be sufficient to melt and clog the air intake vents and suffocate everyone inside. Our guess would be probably not, which makes us question the assumption that a 200 billion megaton explosion would be sufficient to truly wipe out all of humanity. The general concept requires deeper investigation.
If staff in a highly secured bunker like the Cheyenne complex were somehow able to survive the initial ejecta and blast wave, the world they emerged into once it cooled would be very different. Unlike the post-apocalyptic landscape faced by the survivors of the Chicxulub impact, which was only dusted with a few millimeters of material, these refugees would be dealing with layers of dust many stories deep, which would get into everything and create a nutrient-free layer nearly impossible for plants to grow in. Any survivors would need to find a deposit of remaining soil, which could be done by digging into a mountainside or possibly clearing an area with a nuclear explosion. Then, they would need to use rain or whatever other water source was available to attempt to grow food. Given that the sky would be blacked out for at least several years and possibly longer, this would be quite a challenge. Perhaps they could grow plants underground with artificial light from a nuclear-powered generator. Survivors could scavenge any underground grain reserves, if they managed to locate them and expend the energy to dig down to them by hand. The United States and many other countries have grain surpluses sufficient to feed a few thousand people almost indefinitely, as we reviewed in the nuclear chapter.
Over the years, moss and lichen would begin to grow over the surface of the cooled ash, and rivers would wash some of it into the sea. If humanity were lucky, there would be a fern spike, a massive recolonization of the land by ferns, as generally occurs after mass extinctions. However, the combination of falling temperatures, limited power sources, everything on Earth being covered in a layer of ash 100 feet deep, and so on, could easily prove sufficient to snuff out a few isolated colonies of several thousand people fortunate enough to have survived the initial blast and heat. One option might be to construct sea cities, if the survivors had the technology available for it. It would be difficult to reboot the large-scale technological infrastructure needed to construct such cities, especially working with few people, though possibly working components could be salvaged. In a matter of a couple of decades, without a technological infrastructure to build new tools, all but the simplest devices would break down, putting humanity back into a technological limbo similar to the early Bronze Age. This would make it difficult to achieve tasks such as determining the precise locations of grain stores hundreds of miles away and digging 100 feet down to reach them. Because of all these extreme challenges to survival, we tentatively anticipate that such an impact probably would wipe out humanity, and indeed most of the rest, if not all, complex terrestrial life.
How could such an impact occur? It would have to be an artificial projectile, about the size of the Great Pyramid of Giza: an iron sphere with a radius of 0.393 km, accelerated into the surface of the Earth at one-tenth the speed of light (0.1 c). The name "Daedalus" is borrowed from Project Daedalus, a project undertaken by the British Interplanetary Society between 1973 and 1978 to design an unmanned interstellar probe32. The craft would have a length of about 190 meters (626 ft) and weigh 54,000 tons, with a scientific payload of 400 tons. The craft was designed to be powered by nuclear fusion, fueled by deuterium harvested from an outer planet like Uranus. Accelerating to 0.12 c over the course of 4 years, the craft would then cruise for 46 years to reach its target star system, Barnard's Star, 5.9 light years distant. The craft would be shielded by an artificially generated cloud of particles called "dust bugs" hovering 200 km ahead of the vehicle to remove larger obstacles such as small rocks. Micro-sized dust grains would impact the craft's beryllium shield and ablate it over time.
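As a sanity check, the stated projectile roughly reproduces the 200-billion-megaton figure (a sketch using the classical kinetic energy formula, which understates the true relativistic energy at 0.1 c by only about 1 percent; the 7,874 kg/m3 iron density is a standard handbook value):

import math

IRON_DENSITY = 7874.0     # kg/m^3
C = 3.0e8                 # speed of light, m/s
MEGATON_TNT = 4.184e15    # joules per megaton of TNT

radius_m = 393.0          # as stated above
velocity = 0.1 * C

mass = IRON_DENSITY * (4.0 / 3.0) * math.pi * radius_m ** 3   # ~2e12 kg
energy_j = 0.5 * mass * velocity ** 2                         # ~9e26 J
print(f"mass: {mass:.2e} kg; yield: {energy_j / MEGATON_TNT:.2e} MT")
# ~2.15e11 MT, i.e. roughly 200 billion megatons
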
The Daedalus craft would only accelerate 400 tons of payload, instead of the 9.3 × 10^8 kg (930,000 tonnes, 1.02 million tons) required to deal a potential deathblow to mankind; it would need to be scaled up by a factor of 2,550 to reach that critical mass. You hear discussion about starships in science fiction frequently, but no science fiction I'm aware of addresses the consequences of the fact that an interstellar probe just 2,550 times larger than the minimum viable interstellar probe could cause an explosion on the Earth that covers the entire surface in 50-100 feet of molten rock. Once such a probe got going, it would be very difficult to stop. No object would have the momentum to push it off course; it would need to be stopped before it acquired high speed, within its four-year acceleration window. If an antimatter drive could be developed that allows an even greater speed, one closer to the speed of light, the projectile would only require a mass of 8.38 × 10^8 kg, about a 29 m (96 ft) radius iron sphere, similar to the mass of the Seawise Giant, the longest and heaviest ship ever built. On the human scale, that is rather large, but on the cosmic scale, it's like a dust grain. Any future planetary civilization that wants to survive unknown threats will need high-resolution monitoring of its entire cosmic neighborhood, all the way out to tens of light years and as far beyond that as possible.
Consider that hundreds of years ago, all transportation was by wind power, foot, or pack animal, and the railroad, diesel ship, and commercial aircraft didn't exist. As humanity developed huge power sources and more energetic modes of transport, energy release levels thousands, tens of thousands, and hundreds of thousands of times greater than we were accustomed to became routine. Today, there are about 93,000 commercial flights daily, and about one in every hundred million is hijacked and flown into something, causing events like 9/11. Imagine a future where interstellar travel is routine, and a similar circumstance might become a threat with respect to starships used as projectiles. Instead of killing a few thousand people, however, the hijacking could kill everyone on Earth. Such an attack would be especially plausible if the terrorist were a nationalist for a future country outside the Earth, located on the Moon, Mars, the asteroid belt, among space stations, or even in a separate star system. For whatever reason, such an individual or group may have no sympathy for the planet and be perfectly willing to ruin it. This would probably not put humanity as a whole at risk, since many would be off-world at that stage, but the logic of the scenario has implications for our security in the long-term future.
Heliobeams
Perhaps a more plausible risk in the nearer future is some group using a space
station as a tool to attack the Earth. They might wipe out most or all of the human species
to replace us with their own people, or to otherwise dominate the planet. This scenario was
portrayed in the 1979 James Bond film Moonraker. Space is the ultimate high ground; from
it, it would be easier to distribute capsules of some lethal biological agent, observe the
enemy, launch nuclear weapons, and so on. Perhaps, speculatively, this process could get
carried away until those who control low Earth orbit begin to see themselves as gods and
start to pose a threat to humanity in general.
To create space stations on a scale required to actually control or threaten the entire Earth's surface would be a difficult task. In the 1940s, Nazi scientists developed the design for a large orbiting mirror 1 mile in diameter33, for focusing light on the surface and incinerating cities or military formations. Called the Sun Gun or heliobeam, it was anticipated to take around 50 to 100 years to construct. No detailed documents for the Sun Gun design survived the war, but assuming a mass of about 1,191 kg (2,626 lb) per square meter, corresponding to a steel plate 6 inches thick, a heliobeam a mile in diameter would have a surface area of about 2,034,162 square meters (21,895,500 sq ft) and a mass of about 2,422,690 tonnes (2,670,560 tons), similar to the mass of 26 Nimitz-class aircraft carriers. At launch costs of $2,200/kg ($1,000 per pound), as quoted for the Falcon Heavy rocket as of March 2013, lifting that much material to orbit would cost about $5.33 trillion, roughly the annual GDP of Japan. Even given the relatively enormous US military budget, this would be a rather difficult expense to justify. Such a project would have to be carried out over the course of many years, or weight would have to be sacrificed, making the heliobeam thinner, which would make it more vulnerable to attack or disruption.
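The mass and launch-cost arithmetic, as a sketch (only the figures quoted above are used):

import math

diameter_m = 1609.34       # one mile
areal_density = 1191.0     # kg/m^2, the 6-inch steel plate assumed above
launch_cost = 2200.0       # $/kg, the Falcon Heavy figure quoted above

area = math.pi * (diameter_m / 2) ** 2   # ~2.03e6 m^2
mass_kg = area * areal_density           # ~2.42e9 kg, ~2.4 million tonnes
cost = mass_kg * launch_cost             # ~$5.3 trillion

print(f"area: {area:,.0f} m^2; mass: {mass_kg / 1e9:.2f} million tonnes; "
      f"launch cost: ${cost / 1e12:.2f} trillion")
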
There are ways to take the heliobeam concept as devised by the Nazis and transform it into a better design, one easier to construct and more resilient to attack. Instead of one giant mirror, it could be a cluster of several dozen large mirrors, communicating over data links and designed to point at the same spot. Instead of being made of thick metal, these mirrors could be constructed out of futuristic nanomaterials which are extremely light yet durable. This could lower the launch weight by a factor of tens, hundreds, maybe even thousands. The preferred construction material for a scaffold would be diamond or fullerenes. Even if MNT is not developed in the near future, the cost of industrially produced diamond is falling, though not to the extent that would be required to make such large quantities affordable. This makes the construction of a heliobeam seem mostly dependent on progress in the field of MNT, much like the other hypothetical structures in this chapter.
If launch costs and the costs of bulk diamond could be brought way down, an effective heliobeam could potentially be constructed for only a few trillion dollars instead of a few hundred trillion, which might put it within the reach of the US or Russian military during the latter half of the 21st century. The United States is spending between $620 and $661 billion over the next decade on maintaining its nuclear weapons, as an example of a project in a similar cost window. Spending a similar amount on a heliobeam could be justified if the ethical and geopolitical considerations looked right. After all, a heliobeam would be like a military laser that never runs out of power. There would be few limits to the destruction it could do. Against stationary targets in particular, it would be extremely effective. There is nothing like vaporizing your enemies with a giant beam from space.
Countries are especially dependent on their power and military infrastructure to persist. If these could be destroyed one by one, over the course of several days, by a giant orbiting mirror, that could cripple a country and make it ripe for ground invasion. Nuclear missiles could defend against such a mirror by damaging it, but a mirror could be sent up in the aftermath of a World War during which most nuclear weapons were used up, leaving the remainder at the mercy of the gods in the sky with the sun mirror. Whatever group controlled the sun mirror could use it to cripple key human infrastructure (power plants, ports), forcing humanity to live as it did in ancient times, without electricity. If the group were the only ones who retained industrial technology, such as advanced fighter aircraft, they would be fighting against people who had little more than crude firearms. The difference in technological capability between Israelis and Palestinians comes to mind, only more so. The future could be a world of entirely different technological levels, where an elite group is essentially running a giant zoo for its own amusement. If they got tired of the masses of humanity, they might even decide to exterminate us through other means, such as nuclear weapons. A heliobeam need not be used to perform every important military maneuver, just the most crucial moves, like destroying power plants, air bases, and tank formations. It would still give whoever controlled it a dominant position.
There are many important strategic, and therefore geopolitical, differences between a hypothetical heliobeam and nuclear weapons. Nuclear weapons are more of an all-or-nothing thing. When you use a nuclear weapon, it makes a gigantic explosion with ultra-high-speed winds and a fearsome mushroom cloud that showers lethal radioactivity for miles around. Even the most power-hungry politician or military commander knows they are not to be used lightly. A heliobeam, on the other hand, could be used in an attack which is arbitrarily small and seemingly muted. By selectively obscuring parts of a main mirror, or selecting just a few mirrors out of a cluster to target sunlight at a particular location, a heliobeam array could be used to destroy just a few hundred troops instead of a few thousand, or to harass on an even smaller scale. Furthermore, its use would be nearly free once it was built. The combination of low maintenance costs and arbitrarily minor attack potential would make the incentive to use a heliobeam substantially greater than the incentive to use nuclear missiles. A country with messianic views about its role in the world, such as the United States, could use it to get its way in every conflict, no matter how minor. This could exacerbate global tensions, paving the way for an eventual global dictatorship with absolute power. We ought to be particularly wary of any technologies which could be used to enforce global dictatorship, due to the potentially irreversible nature of a transition to one, and the attendant suffering it could cause once firmly in place34.
In conclusion, it is difficult to imagine how space-based heliobeams could kill all of humanity, but they could certainly oppress us greatly, possibly locking us into a state where a dictatorship controls us completely. Combined with life extension technologies and oppressive brain implants (see next chapter), a very long-lived dictator could secure his position, preventing technological development and personal freedom in perpetuity. Nick Bostrom defines an existential risk as "One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential."35 This particular risk seems less likely to annihilate life than to permanently and drastically curtail its potential if put in the wrong hands. As a counterpoint, however, one may argue that nuclear weapons have the same great power, yet they have not resulted in the establishment of a global dictatorship.
Dispersal of Biological Agents
In Moonraker, bad guy Hugo Drax attempts to release 50 spheres of nerve gas from a space station to wipe out the global population, replacing it with his own master race. Would this be possible? Almost certainly not. Even if 50 spheres could hold sufficient toxic agent, they would not achieve the level of dispersion necessary to distribute a lethal dose across the entire surface of the Earth, or even a tiny fraction of it. But is it possible in principle? As always, it is our job to find out.
First, it would not make sense to disperse a biological agent in an untargeted fashion. If someone is trying to kill as many people as possible with a biological agent, it makes the most sense to distribute it where there are people, especially people quickly dispersing across far distances, as at an airport. This is best done on the ground, with an actual human vector spraying aerosols in places like public bathrooms. The method of dispersal would be up close, personal, and meticulous, not slipshod or haphazard, as in a shotgun approach. The downside of this is that you can't hit millions of people at once, and it puts the attacker himself at risk of contracting the disease.
Any biological weapon launched from a distance has to deal with the problem of adequate dispersal. The biological bomblets developed by the United States military during its biological weapons program were small spheres designed to spin and spray a biological agent as they were dropped from a plane. If bomblets are to be launched from a space station, a number of weaponization challenges would need to be overcome. First, the bomblets would need to be encased in a larger module with a heat shield to protect them from burning up on reentry. Most of the heat of a reentering space capsule is transferred just 6 km (3.7 mi) above the ground, where the air is much thicker than in the upper atmosphere, so the bomblets have to remain enclosed down to that altitude at least (unless constructed from some advanced material like diamond). By the time they are that far down, they can't disperse across that wide an area, maybe an area 10 km (6.2 mi) across. Breaking this general rule would require either making ultra-tough reentry vehicles that can reenter in one piece despite being relatively small, or having the reentry module break up into dozens or hundreds of rockets which fly off horizontally in all directions before falling down to Earth. Both are serious challenges, and the technology does not exist, as far as is publicly known.
Most people live in cities. Cities are the natural place where an aspiring Bond villain would want to spread a plague. A problem is that major cities are far apart. Another problem is that doomsday viruses don't survive very well without a host, quickly getting destroyed by low-level biological activity (such as virophages) or sunlight. To optimally hit humanity with a virus would require not only a plethora of viruses (since many people would undoubtedly be immune to any single one), but a multitude of living hosts to spread the viruses, since viruses in aerosols or otherwise unprotected would very quickly be degraded by UV light from the sun. So the problem of wiping out humanity with a biological weapon launched from a space station is actually a problem of launching bats, monkeys, rats, or some similar stable vector in large numbers from a space station to hundreds or thousands of major cities on Earth. When you think about it that way, in combination with the reentry module challenges cited above, pulling this off is clearly a bit more complicated than was portrayed in Moonraker.
Could biological bomblets filled with bats be launched into the world's major cities, flooding them with doomsday bats? It's certainly imaginable, and would be a more subtle and achievable way of trying to kill off humanity than using the Daedalus impactor. However, it certainly doesn't seem like a threat in the near future, and it is difficult to imagine even in this century, though difficulty of imagination is not always the best criterion for predicting the future. For the vectors to make it to their targets in one piece, they would need to be put in modules with parachutes, which would be rather noticeable upon entering a city. Many modules could be used, thousands of separate pods filled with bats, each landing in a different location and a different city, for a total of hundreds or thousands of modules. This would entail a lot of mass, in the tens of thousands of tons. Keeping all these bats or rats and their 50-100 necessary deadly viruses contained on a space station would be a major task, and would require a substantial amount of space and resources. It could conceivably be done, but it would have to be a relatively large space station, one with living and working space for at least 1,000 people. There would need to be isolation chambers on the space station itself, with workers entering numerous staging chambers filled with the modules, which would need to be loaded with the biological specimens by hand or via remote-controlled robot.
To take a serious crack at destroying as many human beings as possible, the biological attack would need to be targeted at every city with over a million people, of which there are 476. To ensure that enough people are infected to actually get the multi-pandemic going, rather than being contained, one would need as many vectors as possible, in the tens of thousands per city. Rats, squirrels, or mice would be more suitable than bats, since they would seem more innocuous in world cities, though all of the above could be used. Each city would need its own reentry capsule, which, to contain 10,000 rats without killing them, assuming (generously) ten per cubic foot, would need to have around 1,000 cubic feet of internal space, a cube 10 ft (3 m) on a side. The Apollo Command Module, for instance, had an internal space of 213 cubic feet. Assuming a module carrying rats would need to be about 4 times heavier to contain the necessary internal space, that gives us a weight of 98,092 kg (216,256 lbs) per module, which we round off to 100 tonnes. One module for every city with over a million people makes 47,600 tonnes. Now we see the scale a space station would need to contain the necessary facilities to launch a species-threatening bio-attack on the Earth. A space station of five million tonnes or more would likely be needed to contain all the modules, preparatory facilities, staff, gardens to grow food for the staff, space for it to be tolerable for them to live there, the water they need, and so on. At that point, you might as well build Kalpana One, the AIAA (American Institute of Aeronautics and Astronautics) designed space colony we mentioned earlier, built to house 3,000 people in a colony that would weigh about 7 million tonnes36. Incidentally, this number is close to the minimum viable population (MVP) needed for a species to survive. A meta-analysis of the MVP of different species found the average number to be 4,16937.
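The module arithmetic, reproduced as a sketch (the rat-packing density and the rounded 100-tonne module mass are the rough assumptions stated above):

cities = 476                 # cities with over a million people
rats_per_city = 10_000
rats_per_cubic_foot = 10     # generous packing assumption from above

volume_ft3 = rats_per_city / rats_per_cubic_foot   # 1,000 cubic feet per module
module_mass_t = 100                                # tonnes per module, rounded as above

total_mass_t = cities * module_mass_t              # 47,600 tonnes of modules
print(f"{volume_ft3:,.0f} ft^3 per module; {total_mass_t:,.0f} tonnes of modules total")
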
Would 10,000 rats released in each major city in the world, carrying a multitude of viruses, be sufficient to wipe out humanity? It seems unlikely, but it could be possible in combination with nuclear bombardment and targeted use of a heliobeam. After wiping out all human beings on the surface, the space station could be struck by a NEO and suffer catastrophic reentry into the atmosphere, killing everyone on board. After that, no more humanity. Destroying every last human being on Earth with viruses seems seriously difficult, given that there are individuals living out in rural Canada or Russia with no human contact whatsoever. Yet, if only a few thousand or tens of thousands of individuals remained, too widely distributed, and failed to mate and reproduce in the wake of a serious disaster, it could be possible. It would require a lot of things all going wrong simultaneously, or in sequence. But it is definitely possible.
Motivations
Given all this, we may rightly wonder: what would motivate someone to do these horrible things in the first place? Most of us don't seriously consider wiping out humanity, and those who do tend to be malcontent misfits with little hope of carrying their plans out. The topic of motivations will be addressed at greater length further along in the book, but we ought to briefly address it here in the specific context of space-originating artificial risks. Space represents a new frontier, the possibility of starting over. To some, this means replacing the old with the new. When you consider that 7-10 billion people have a collective mass of only around 316 million tons, a tiny fraction of the size of the Earth, you realize that's not a lot of matter to scramble in order to have the entire Earth to yourself. People concerned about the direction of the planet, or of society, combined with a genocidal streak, may be sufficiently motivated to consider wiping humanity out and starting over. They may even see a messianic role for themselves in accomplishing it. Wiping out humanity opens up the possibility of using the Earth for literally anything an individual or small group could imagine. It would allow them to impose their preferences on the whole future.
The science fiction film Elysium showed a possible future for Earth: the planet noisy, messy, dusty, and crowded, and a space station where everything is clean and utopian. In the movie, people from Earth were able to launch themselves aboard the space station and reach "asylum," but in real life, it would be quite difficult to dock at a space station if the inhabitants didn't want you there. Defended by a cluster of mirrors or lasers, they could cook any unwanted visitors before they ever got to the front door. If a space colony could actually maintain a stable social system, its people might develop a very powerful sense of in-group, more powerful than that experienced by the great majority of humans throughout history. Historically, communities almost always have some contact with the outside, but on a space station, true self-sufficiency and isolation could become possible. 3,000 people might not be sufficient to create full self-sufficiency, but a network of 10-20 such space stations might. If these space stations had access to molecular manufacturing, they could build things without the large factories we have today, making the best use of available space. Eventually such colonies could spread to the Moon and Mars, colonizing the solar system. Powerful in-group feelings could cause them to eventually think much less of those left behind on Earth, even to the point of despising us. Of course, this is by no means certain, but it ought to be considered.
The potential for hyper-in-group feelings increases when we consider the possibility of
mental and physical enhancement, modifications to the body and brain that change who
people are. A group of people on a space station could become a new species, capable of
flying through outer space unprotected for short periods. Their skin cells could be equipped
with small scales that fold to create a hard shell which maintains internal pressure when
exposed to space. These people could work on asteroids wearing casual clothing, a new
race of human beings adapted to the harshness of the vacuum. Beyond physical
modifications, they could have mental modifications as well. These could increase the
variance of possible emotions, allow long-term wakefulness, or even increase the
happiness set-point. They might see themselves as doing a service by wiping out the
billions of “model 1.0” humans on Earth. In the aftermath of such an act, infighting among
these space groups could lead to their eventual death and the total demise of the human
species in all its variations.
In the context of all of this, it is important to recall Fermi's Paradox and the Great Filter. Though it may simply be that the prior probability of the development of intelligent life is extremely low, there may be more sinister reasons for the silence in the cosmos: developmental inevitabilities that cause the demise of any intelligent race. These may be more subtle than nanotechnology or artificial intelligence; they could involve psychological or mental dead-ends that inevitably occur when an intelligent species starts expanding out into space, modifying its own brain, or both. Maybe intelligent species reliably expand out into the space immediately around their planet, inevitably wipe out the people left behind, then inevitably wipe themselves out in space. Space certainly has far fewer organic resources than the Earth, and a lot more things that can go wrong. If the surface of the planet Earth became uninhabitable for whatever reason, hanging onto survival in space would be much more of a risky prospect. Maybe small groups of people living in space eventually go insane over time scales of decades or centuries. We really have no idea. Nothing should be taken for granted, especially our future.
Space and Molecular Nanotechnology

At the opening of this chapter, we made the controversial claim that molecular manufacturing (MM) is a necessary prerequisite to large-scale space development. In this section we'll provide more justification and reasoning for this assumption, while being more specific about what precise levels of space development we mean.
Let's begin by considering an optimal scenario for space development in the absence of MM. Say that a space elevator is actually built in 2050, as one Japanese company claims38. Say that we actually find a way to make carbon nanotubes of the required length and thickness (we can't now), tens of thousands of miles long. Say that geopolitical considerations are overcome and nations don't fight over or forbid the existence of an elevator which would give whoever controls it a huge military and strategic advantage over other nations. Say that the risk of having a huge cable in space which would cut through or get tangled in anything that got in its way, posing a huge hazard to anything in orbit, was considered acceptable. If all these challenges were overcome, it would still take a number of years to thicken a space elevator to the point where it could carry substantial loads. Bradley C. Edwards, who conducted a space elevator study for NASA, found that five years of thickening a space elevator cable would make it strong enough to send 1,100 tons (10^6 kg) of cargo into orbit every four days39. That's about 91 trips a year, lifting roughly 10^8 kg (100,000 tonnes) annually. Space enthusiasts have trouble grasping how little this is: a year's lifting is less than one-seventieth of the quantity needed to build a rotating space colony that holds only 3,000 people.
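The throughput arithmetic, as a sketch (the 7-million-tonne colony mass is the Kalpana One figure cited earlier; the ton/tonne distinction is ignored at this level of precision):

trips_per_year = 365 / 4    # one load every four days
tons_per_trip = 1_100       # ~1e6 kg per load
annual_tons = trips_per_year * tons_per_trip    # ~100,000 tons per year

colony_tons = 7_000_000     # Kalpana One, a colony for only 3,000 people
years_for_colony = colony_tons / annual_tons    # ~70 years of lifting for one colony
print(f"{annual_tons:,.0f} tons/year; ~{years_for_colony:.0f} years per small colony")
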
The capacity of a space elevator can be increased by robots that add to its thickness, but the weight of these robots prevents the elevator from being used while construction is happening. The thickness that can be added to a space elevator on any given trip is extremely limited, because the elevator itself is at least 35,800 km (22,245 mi) long and a robot can only carry so much material up on any given trip without breaking the elevator through excessive weight. Spreading a few hundred tons along a string that long adds only a little width. These facts create fundamental limitations on the rate at which a space elevator can be improved. In the long term, if a reliable method of mass-producing carbon nanotubes is found and space elevators prove safe and defensible from attack, then extremely large space elevators could be constructed, but that seems unlikely in this century, which is our scope of focus. It's possible that space elevators may be the primary route to space in the 22nd or 23rd century, but it seems unlikely in the 21st.
Enthusiasts might object that mass drivers based on electromagnetic acceleration could be used to send up more material more cheaply, with claims of electricity costs as low as $1/lb40. Edwards claims his space elevator design could put a kilogram in orbit for $220, the cost falling to $55/kg as the power beaming system's efficiency improves. Still, these systems have limited launch capacity. Even if sending payloads into space were free, that limitation would remain. Perhaps it could be bypassed somewhat by constructing mass drivers on the Moon which could send up large amounts of material. Even then, the number of mass drivers required to send up any substantial amount of mass would require trillions of dollars of investment, and involve mega-projects on another celestial body, something which is very far off (without the help of MM). This is something that could be started in the late 21st century, but definitely not completed. What about towing NEOs into the Earth's orbit as building materials? As we analyzed earlier in this chapter, the energy costs of modifying the orbits of large objects are very high.
Simple facts remain: the gravity wells of both the Moon and Earth are very powerful,
making it difficult to send up any substantial amount of matter without great cost. Without
nanotechnology and automated self-replicating robotics, space colonization is a project
which would unfold over the timescale of centuries, and mostly involve just a few tens of
thousands of people. That's less than 0.001 percent of the human race. These are likely to be
professionals living in close quarters, like the scientists who do work in the Antarctic, not
like the explorers who won the West. The romance of large-scale space colonization is a
future reserved only for people living in a civilization with molecular manufacturing and self-replicating robotics. Efforts like SpaceX or Mars One are token efforts only, fun for press
releases, which may inspire people, but actually getting to space requires self-replicating
robotics. Anyone working on rockets is not working on the fastest route to space
colonization; they are working on an irrelevant route to space colonization, one which will
never reach its target without the help of nanotechnology. Space enthusiasts should be
working on diamondoid mechanosynthesis (DMS), but very few of them even know what
those words mean. Rockets are sexier than DMS. These will be great for space tourism,
glimpsing the arc of the Earth for a few minutes before returning, but they will not allow us
to build space colonies of any appreciable size. Rockets are simply too expensive on a per-kilogram basis. Space elevators and mass drivers do not have the necessary capacity to
send up supplies to build facilities for more than a few thousand people on the timescale of
decades. 1,100 tons every four days simply isn't a lot, especially when you need 10 tons
per square meter of hull just to shield yourself from radiation.
We've mentioned every key technology for getting into space without taking into
account MM: space elevators, mass drivers (including on the Moon), and rockets. They all
suffer from capacity problems in the 21st century. Based on all this, it is hard to conclude
that pre-molecular manufacturing space-based weaponry is a serious risk to human
survival during the 21st century. Any risk of space-based weaponry appears to derive from
molecular manufacturing and from molecular manufacturing only. Accordingly, it seems
logical to tentatively see space weapon risks as a subcategory of the larger class of
molecular manufacturing (MM) risks.
To review, molecular manufacturing is the hypothetical future technology that uses
self-replicating nanosystems to manufacture “nanofactories” which are exponentially self-replicating and can build a wide array of diamondoid products with fantastic strength and
power specifications at a high throughput. Developing MM requires us to master
diamondoid mechanosynthesis (DMS), the science and engineering of joining together
individual carbon atoms into atomically precise diamond shapes. This has not been
achieved yet, though very limited examples of mechanosynthesis (with silicon) have been
demonstrated41. Only a few scientists are working on mechanosynthesis or even consider it
important42. However, many prominent futurists, including the most prominent of them, Ray
Kurzweil, predict that nanofactories will be developed in the 2030s, which will allow us to
enhance our bodies, extend our lifespans, develop realistic virtual reality, and so on 43. The
mismatch between his predictions and the actual state of the art of nanotechnology in
2015, however, is jarring. We seem nowhere close to developing nanofactories, with much
more basic research, design, and development required 44. The entire field of
nanotechnology needs to entirely reorient itself towards this goal, which it is currently
ignoring45.
Molecular manufacturing is fundamentally different from any other technology
because it is 1) self-replicating, 2) atomically precise, 3) able to build almost anything, and 4)
cheap once it gets going. Nanotechnology policy analysts have even called it “magic”
because of these properties. Any effort to colonize space needs it; it's key.
How about colonizing space with the assistance of molecular nanotechnology? It gets
much easier. Say that the first nanofactory is built in 2050. It replicates itself in under 12
hours; those two nanofactories replicate themselves, and so on, using natural gas, which is
highly plentiful, as feedstock46. Even after the natural gas runs out, we can obtain
large amounts of carbon from the atmosphere47. Reducing the atmospheric CO2 levels
from about 362 parts per million to 280 parts per million (pre-industrial levels) would allow
for the extraction of 118 gigatonnes of carbon. That's 118 billion metric tons. Given that
diamond produced by nanofactories would be atomically perfect and ultra-strong, we could
build skyscrapers or megastructures which use only 1/100th the material for the load-supporting structures, compared to present-day buildings. Burj Khalifa, the world's tallest
building as of this writing, at 829.8 m (2,722 ft), uses 55,000 tonnes of steel rebar in its
construction. Creating a building of the same height with diamond beams would only
require 550 tonnes of diamond. For the people who design and build buildings, it's hard to
imagine that so little material could be used, but it can. Some might object that diamond is
more brittle than steel and therefore not a suitable structural material, since it has no
flexibility. That problem is easily solvable by using carbon nanotubes,
also known as fullerenes, for construction. Another option would be to use diamond but
connect together a series of segments with joints that allow the structure to bend slightly.
With nanofactories, you don't just have diamond at your disposal, but the entire class of
materials made out of carbon, which includes fullerenes such as buckyballs and
buckytubes (carbon nanotubes)48.
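To make the replication and materials claims concrete, here is a minimal Python sketch under the text's own assumptions (a 12-hour replication time and the 1/100th structural-material ratio):

# Exponential nanofactory replication: one factory, doubling every 12 hours
hours = 30 * 24                             # one month of replication
factories = 2 ** (hours // 12)              # ignoring feedstock and energy limits
print(f"{factories:.2e}")                   # ~1.2e+18 nanofactories
# Diamond vs. steel for a Burj Khalifa-scale load-bearing frame
burj_steel_t = 55_000                       # steel rebar in the existing building
print(burj_steel_t / 100)                   # 550 tonnes of flawless diamond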
Imagine a tower much taller than Burj Khalifa: 100 kilometers (62 mi) in height instead
of 829.8 meters. Such a tower would use roughly 100 times more structural material than
the 550-tonne diamond version of Burj Khalifa. Flawless diamond is so strong (about 50
GPa compressive strength) that a tower of that height does not need to taper at all to be
stable. That works out to about 55,000 tonnes of
material. Build 10 of these structures in a row and put a track on top of them, and we have
a Space Pier, a launch platform made up of about 600,000 tonnes of material which puts
us “halfway to anywhere,” designed by J. Storrs Hall 49. At that altitude, the air is 100 times
thinner than at ground level, making it easier to launch payloads into space. In fact, the
structure is so tall that it reaches the Karman Line, the international designation for the
boundary of space. The top of the tower itself is in space, technically. The really fantastic
thing about this structure is that it could hold great weight (tens of thousands of tons) and
run launches every 10 minutes or so instead of every four days. It avoids many other
problems that space elevators have, such as the risk of colliding with objects in low Earth
orbit. It is far more structurally stable than a space elevator. It only occupies the territory of
one sovereign nation, and avoids issues relating to who owns the atmosphere and space
above a given country. (To say that a country automatically owns the space above it and
has a right to build a space elevator there is not consistent with any current legal
conception of space.)
When it comes to putting objects into orbit, a Space Pier and a space elevator are in
completely different categories. Although a Space Pier does not actually extend into space
like a space elevator does, it is much more efficient at getting payloads into orbit. A 10
tonne payload can be sent into orbit for just $4,300 of electricity, a rate of 43 cents per
kilogram. That is cheap, much cheaper than any other proposal. Why build a mass driver
on the ground when you can build it at the edge of space? Only molecular nanotechnology
makes it possible; nothing else will do. Nothing else can build flawless diamond in the
massive quantities needed to put up this structure. Nothing but a Space Pier can put
payloads into space in sufficient quantities to achieve the space expansion daydreams of
the 60s and 70s.
A “future of mankind in space” is only enabled by 1) molecular nanotechnology, and
2) Space Piers, which can only be built using it. Nothing else will do. A Space Pier
combines two simple concepts: that of a mass driver, and putting it above most of the
atmosphere. It's extremely simple, and is the logical conclusion of building taller and taller
buildings. Although a Space Pier is large, building it would only require a small amount of
the total carbon available. A Space Pier consumes about half a million tonnes, while our
total carbon budget from the atmosphere is about 118 billion tonnes. That's roughly
1/200,000 of the carbon available. It's definitely a major project that would use up
substantial resources, but is well within the realm of possibility.
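A quick check on the material budget (a Python sketch; the tower masses and carbon budget are the text's figures):

pier_t = 600_000                            # ten 55,000-tonne towers plus track
carbon_budget_t = 118e9                     # extractable atmospheric carbon, in tonnes
print(carbon_budget_t / pier_t)             # ~197,000, i.e. roughly 1/200,000 of the budget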
Assuming a launch of 10 tonnes every ten minutes, we get a figure of 5,256,000
tonnes a decade, instead of the measly 100,000 tonnes a decade of the Edwards space
elevator. That makes a Space Pier roughly 52 times better at sending payloads into space
than a space elevator, which is a rather big deal. It's an even bigger deal when you start
adding additional tracks and support to the Space Pier, allowing it to support multiple
launches simultaneously. At some point, the main restriction becomes how much power
you can produce at the site using nuclear power plants, rather than the costs of additional
construction itself, which are modest in comparison. A Space Pier can be worked on and
expanded while launches are ongoing, unlike a space elevator which is non-operational
during construction. A 10-track Space Pier can launch 52,560,000 tonnes a decade,
enough to start building large structures on the Moon, such as its own Space Pier. Carbon
is practically non-existent on the Moon's surface, so any Space Pier built there would need
to be built with carbon sent from Earth. 50 million tonnes would be enough to build over a
thousand mass drivers on the Moon, which could then send up 5 billion tonnes of Moon
rock in a single decade! Now we're talking. Large amounts of dead, dumb matter like Moon
rock are needed to shield space colonies from the dangerous effects of radiation. Each
square meter will need at least 10 tonnes, more for colonies outside of the Earth's Van
Allen Belts, in attractive locations like the Lagrange points, where colonies can remain
stable in their orbits relative to the Earth and Moon. 5 billion tonnes of Moon rock is enough
to facilitate the creation of about 500 space colonies, assuming 10 million tonnes per
colony, which is enough to contain roughly 1,500,000 people. Even with the “magic” of
nanotechnology, a ten-track Space Pier on the Earth, a thousand industrial-scale mass
drivers on the Moon operating around the clock, each launching ten tonnes every ten
minutes, and two or three decades of work to build it all, we could put only about 0.02 percent of
the global population into space. To grasp the amount of infrastructure it would take to do
this, it's comparable in mass to the entire world's annual oil output.
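The launch-rate arithmetic in this scenario can be checked directly (a Python sketch; all inputs are the text's figures, and the ~7.3 billion world population is a 2015 estimate):

launches_per_decade = 10 * 365 * 24 * 6     # one launch every 10 minutes, one track
one_track_t = launches_per_decade * 10      # 10 tonnes per launch
print(one_track_t)                          # 5,256,000 tonnes per decade
print(10 * one_track_t)                     # 52,560,000 tonnes for a 10-track pier
lunar_t = 1_000 * one_track_t               # 1,000 lunar mass drivers: ~5.3e9 tonnes
colonies = lunar_t / 10_000_000             # at 10 million tonnes per colony: ~500
people = colonies * 3_000                   # ~3,000 people per colony
print(people / 7.3e9 * 100)                 # ~0.02 percent of humanity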
Creating space colonies could even be accelerated beyond the above scenario by
using self-replicating factories on the Moon which create mass drivers primarily out of local
materials, using ultra-strong diamond materials only for the most crucial components.
Since the Moon has no atmosphere, you can build a mass driver right on the ground,
instead of building a huge, expensive tower to support it. Robert A. Freitas conducted a
study of self-replicating factories on the Moon, titled “A Self-Replicating, Growing Lunar
Factory” in 198150. Freitas' design begins with a 100 ton seed and replicates itself in a year,
but it does not use molecular manufacturing. A design based on molecular manufacturing
and powered by a nuclear reactor could replicate itself much faster, on the order of a day.
After just 18 days of replication, the combined fleet of factories could harvest 4 billion tons
of rock a year, roughly equivalent to the annual industrial output of all human civilization.
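A rough sketch of that growth in Python (the 18-day and 4-billion-ton figures are the text's; the per-factory throughput is simply what they jointly imply):

factories = 2 ** 18                         # one seed, replicating daily, after 18 days
print(factories)                            # 262,144 factories
print(round(4e9 / factories))               # ~15,259 tons of rock per factory per year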
From then on, providing nuclear power for the factories is the main limitation on
increasing output. A single nuclear power plant with an output of 2 GW is enough to power
174 of these factories, so many nuclear power plants would be needed. Operating solely
on solar energy generated by themselves, the factories would take a year to self-replicate.
Assisting the factories with large solar panels in space beaming down power is another
option. A 4 GW solar power station would weigh about 80,000 tonnes, which could be
launched from an Earth-based 10-track Space Pier in pieces over the course of six days.
Every six days, energy infrastructure could be provided for the construction of 4,000
factories with a combined annual output of 400 million tons (~362 million tonnes) of Moon
rock, assuming each factory requires about 1 MW to power it (Freitas' paper estimates
power requirements of 0.47 MW to 11.5 MW per factory). That alone would be sufficient to produce 3.6
billion tonnes a decade, more than half the 5 billion tonne benchmark for 1,000 mass
drivers sending material up into space. If the self-replicating lunar robotics could construct
mass drivers on their own, billions of tonnes of Moon rock might be sent up every week
instead of every decade. Eventually, this would be enough to build so many space colonies
around the Earth that they would create a gigantic and clearly visible ring.
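The power-budget arithmetic can be sketched the same way (Python; the ~100,000-ton annual throughput per factory is an assumption back-calculated from the text's 400-million-ton total):

print(round(2e9 / 11.5e6))                  # one 2 GW plant at Freitas' upper power
                                            # estimate: ~174 factories
factories = int(4e9 / 1e6)                  # one 4 GW solar station at ~1 MW each
print(factories)                            # 4,000 factories
tons_per_year = factories * 100_000         # ~400 million tons of Moon rock per year
print(tons_per_year * 10)                   # ~4e9 tons (~3.6e9 tonnes) per decade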
Hopefully these sketches illustrate the massive gulf between space colonization with
pre-MM technologies and space colonization with MM. The primary limiting factor in the
case of MM is human supervision; how much of it would be necessary is still unknown. If
factories, mass drivers, and space stations can all be constructed
according to an automated program, with minimal human supervision, the colonization of
space could proceed quite rapidly in the decades after the development of MM. That is why
we can't completely rule out major space-based risks this century. If MM is developed in
2050, that gives humanity 50 years to colonize space in our crucial window, at which point
risks do emerge. The bio-attack risk outlined in this chapter could certainly become feasible
within that window. However, if MM is not developed this century, then it seems that major
risks from space are also unlikely.
Space and the Far Future

In the far future, if humanity doesn't destroy itself, some mass expansion into space
seems likely. It probably isn't necessary in the nearer term, as the Earth could hold trillions
of people with plenty of space if huge underground caverns are excavated and flooded with
sunlight, millions of 50-mile high arcologies are built, and so on. The latest population
projections, contrary to two decades of prior consensus, have world population continuing
to increase throughout the entire 21st century rather than leveling off51. Quoting the
researchers: “world population stabilization unlikely this century”. Assuming the world
population continues to grow indefinitely, eventually all these people will need to find
somewhere to live other than the Earth.
Assuming we do expand into space, increasing our level of reproduction to produce
enough people to take up all that space, what are the long-term implications? Though
space may present dangers in the short term, in the long term it actually secures our future
as a species. If mankind spreads across the galaxy, it would become almost impossible to
wipe us out. For a detailed step-by-step scenario of how mankind might go from building
modest structures in low Earth orbit to colonizing the entire galaxy, we recommend the
book The Millennial Project by Marshall T. Savage52.
The long-term implications of space colonization depend heavily on the degree of
order and control that exists in future civilization. If the future is controlled by a singleton, a
single top-level decision-making agency, there may be little to no danger, and it could be
safe to live on Earth53. If there is a lack of control, we could reach a point where small
organizations or even individuals could gather enough energy to send a Great Pyramidsized iron sphere into the Earth at the speed of light, causing impact doomsday.
Alternatively, perhaps the Earth could be surrounded by defensive layers and structures so
powerful that they can stop such attacks. Or, perhaps detectors could be scattered across
the local cosmic neighborhood, alerting the planet to the unauthorized acceleration of
distant objects and allowing the planet to form a response long in advance. All of these
outcomes are possible.
Probability Estimates and Discussion
As we've stated several times in this chapter, space colonization on the level required
to pose a serious threat to Earth does not seem particularly likely in the first half of the 21st
century. One of the reasons why is that molecular nanotechnology does not seem very
likely to be developed in the first half of the century. There is close to zero effort towards
the prerequisite steps, namely diamondoid mechanosynthesis, complex nanomachine
design, and positional molecular placement. As we've argued, space expansion depends
upon the development of MM to become a serious risk.
Even if MM is developed, there are other risks that derive from it which seem to pose
a greater danger than the relatively speculative and far-out risks discussed in this chapter.
Why would someone launch a bio-attack on the Earth from space when they could do it
from the ground and seal themselves inside a bunker? For the cost of putting a 7 million
ton space station into orbit, they could build an underground facility with enough food and
equipment for decades, and carry out their plans of world domination that way. Anything
that can be built in space can be built on the ground in a comparable structure, for great
savings, improved specifications, or both.
Space weapons seem to pose a greater threat in the context of destabilizing the
international system than they do in terms of direct threat to the human species. Rather
than space itself being the threat, we ought to consider the exacerbation of competition
over space resources (especially the Moon) leading to nuclear war or a nano arms race on
Earth.
We've reviewed that materials in space are scarce, there are not many NEOs
available, and launching anything into space is expensive, even with potential future
assistance from megastructures like the Space Pier. Even if a Space Pier can be built in a
semi-automated fashion, there is a limit to how much carbon is available, and real estate
will continue to have value. A Space Pier would cast vast shadows which would affect
property values across tens of thousands of square miles. Perhaps the entire structure
could be covered in displays, such as phased array optics, which simulate it being
transparent, but then the footprint of the towers themselves would still require substantial
real estate. Perhaps the greatest cost of all would be the idea that it exists and the demand
for its use, which could cause aggravation or jealousy among other nations or powerful
companies.
All of these factors mean that access to space would still have value. Even if many
Space Piers are constructed, it will continue to be relatively scarce and probably in
substantial demand. Space has resources, like NEOs and the asteroid belt, which contain
gigatons of platinum and gold that could be harvested, returned to Earth, and sold for a
profit. A Space Pier would need to be defended, just like any other important structure. It
would be an appealing target for attack, just as the Twin Towers in New York were.
The scarcity of materials in space and the cost of sending payloads into orbit means
that the greatest chunk of material there, the Moon, would have great comparative value.
The status of the Moon is ostensibly determined by the 1979 Moon Treaty, The Agreement
Governing the Activities of States on the Moon and Other Celestial Bodies, which assigns
the jurisdiction of the Moon to the “international community” and “international law”. The
problem is that the international community and international law are abstractions which
aren't actually real. Sooner or later there will be some powerful incentive to begin
colonizing the Moon and mining it for material to build space colonies, and figuring that the
whole object is in the possession of the “international community” will not be specific
enough. It will become necessary to actually protect the Moon with military resources.
Currently, putting weapons in space is permitted according to international law, as long as
they are not weapons of mass destruction. So, the legal precedent exists to deploy weapon
systems to secure the Moon. Whoever protects the Moon will become its de facto owner,
regardless of what treaties say.
Consider the electrical cost of launching a ten-tonne payload into orbit from a
hypothetical Space Pier: about $4,300. Besides that cost, there is also the cost of building the
structure in the first place, and it seems likely that these costs will be amortized and
wrapped into charges for each individual launch. Therefore, launches could be
substantially more expensive than $4,200. If the cost for building the structure is $10 billion,
and the investors want to make all their money back within five years, and launches occur
every ten minutes around the clock without interruption, a $38,051 charge would need to
be added to every ten-tonne load. That brings the total cost per launch to roughly $42,351.
Launching ten tonnes from the Moon might cost only $1,000 or less, considering many
factors; 1) the initially low cost of lunar real estate, 2) the Moon's lack of atmosphere
encumbering the acceleration of the load, 3) the comparatively lower gravity of the Moon. If
someone is building a 7 million tonne space station with materials launched from the Earth,
launch costs would be $26,635,700,000, or roughly $26 billion. Building a similar mass
driver on the surface of the Moon would be a lot cheaper than building a Space Pier;
perhaps by a factor of 10. We can ballpark the cost of launch per ten tonnes of Moon rock
at $5,000, making the same assumption that investors will want their money back in five
years and that electricity costs will be minimal. At that price, launching 7 million tonnes
works out to $3,500,000,000, or about 7 times cheaper. The basic fact is that the Moon's
gravity is 6 times less than the Earth's and it has no atmosphere. That makes it much
easier to use as a staging area for launching large quantities of basic building materials
into the Earth-Moon neighborhood.
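A sketch of the amortization arithmetic (Python; the $10 billion cost, five-year payback, launch cadence, and per-launch prices are the text's assumptions):

launches = 5 * 365 * 24 * 6                 # one launch every 10 minutes for 5 years
surcharge = 10e9 / launches
print(int(surcharge))                       # ~$38,051 amortization per launch
print(int(surcharge + 4_300))               # ~$42,351 total per ten-tonne launch
print(int(700_000 * surcharge))             # 7-million-tonne station from Earth:
                                            # ~$26.6 billion in amortization charges
print(700_000 * 5_000)                      # same mass from lunar mass drivers:
                                            # $3.5 billion, roughly 7 times cheaper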
The uncertain status of the Moon and its status as a source of material at least ten
times cheaper than the Earth (much more at large scales) means that there will be an
incentive to fight over it and control it. Like the American continent, it could become home
to independent states which begin as colonies of Earth and eventually declare their
independence. This could lead to bloody wars going both ways, wars waged with futuristic
weapons like those outlined in this and earlier chapters. Those on the Moon might think
little of people on the Earth, and do their best to sterilize the surface. If the surface of the
Earth became uninhabitable, say through a Daedalus impact, the denizens of the Moon
and space would have to be extremely confident in their ability to keep the human race
going without that resource. That ability seems very plausible, especially when we take into
account human enhancement, but the risk is worth noting. Certainly such an event would lead to
many deaths and lessen the quality of life for all humanity.

Latest developments:
Yuri Milner has launched a project (Breakthrough Starshot) to use a very powerful laser to
send gram-scale "nanocraft" probes to the stars. Such a phased array, with gigawatts of power,
would be based in space. It could be used to attack targets on Earth, and could probably be
retargeted thousands of times a second or more, so it could be used to kill people. The safe
solution would be to place it on the far side of the Moon, facing away from Earth.

References

1. “Upgraded SpaceX Falcon 9.1.1 will launch 25% more than old Falcon 9 and bring
price down to $4109 per kilogram to LEO”. March 22, 2013. NextBigFuture.
2. Kenneth Chang. “Beings Not Made for Space”. January 27, 2014. The New York
Times.
3. Lucian Parfeni. “Micrometeorite Hits the International Space Station, Punching a
Bullet Hole”. April 30, 2013. Softpedia.
4. David Dickinson. “How Micrometeoroid Impacts Pose a Danger for Today’s
Spacewalk”. April 19, 2013. Universe Today.
5. Bill Kaufmann. “Mars colonization a ‘suicide mission,’ says Canadian astronaut”.
6. James Gleick. “Little Bug, Big Bang”. December 1, 1996. The New York Times.
7. Leslie Horn. “That Massive Russian Rocket Explosion Was Caused by Dumb
Human Error”. July 10, 2013. Gizmodo.
8. “Space Shuttle Columbia Disaster”. Wikipedia.
9. “Your Body in Space: Use it or Lose It”. NASA.
10. Alexander Davies. “Deadly Space Junk Sends ISS Astronauts Running for Escape
Pods”. March 26, 2012. Discovery.
11. Tiffany Lam. “Russians unveil space hotel”. August 18, 2011. CNN.
12. John M. Smart. “The Race to Inner Space”. December 17, 2011. Ever Smarter
World.
13. J. Storrs Hall. “The Space Pier: a hybrid Space-launch Tower concept”. 2007.
Autogeny.org.
14. Al Globus, Nitin Arora, Ankur Bajoria, Joe Straut. “The Kalpana One Orbital Space
Settlement Revised”. 2007. American Institute of Aeronautics and Astronautics.
15. “List Of Aten Minor Planets”. February 2, 2012. Minor Planet Center.
16. Robert Marcus, H. Jay Melosh, and Gareth Collins. “Earth Impact Effects Program”.
2010. Imperial College London.
17. Alan Chamberlin. “NEO Discovery Statistics”. 2014. Near Earth Object Program,
NASA.
18. Clark R. Chapman and David Morrison. “Impacts on the Earth by Asteroids and
Comets - Assessing the Hazard”. January 6, 1994. Nature 367 (6458): 33–40.
19. Curt Covey, Starley L. Thompson, Paul R. Weissman, Michael C. MacCracken.
“Global climatic effects of atmospheric dust from an asteroid or comet impact on
Earth”. December 1994. Global and Planetary Change: 263-273.
20. John S. Lewis. Rain Of Iron And Ice: The Very Real Threat Of Comet And Asteroid
Bombardment. 1997. Helix Books.
21. Kjeld C. Engvild. “A review of the risks of sudden global cooling and its effects on
agriculture”. 2003. Agricultural and Forest Meteorology. Volume 115, Issues 3–4, 30
March 2003, Pages 127–137.
22. Covey, C; Morrison, D.; Toon, O.B.; Turco, R.P.; Zahnle, K. “Environmental
Perturbations Caused By the Impacts of Asteroids and Comets”. Reviews of
Geophysics 35 (1): 41–78.
23. Baines, KH; Ivanov, BA; Ocampo, AC; Pope, KO. “Impact Winter and the Cretaceous-Tertiary Extinctions - Results of a Chicxulub Asteroid Impact Model”. Earth and
Planetary Science Letters 128 (3-4): 719–725.
24. Earth Impact Effects Program.
25. Alvarez LW, Alvarez W, Asaro F, Michel HV. “Extraterrestrial cause for the
Cretaceous–Tertiary extinction”. 1980. Science 208 (4448): 1095–1108.
26. H. J. Melosh, N. M. Schneider, K. J. Zahnle, D. Latham. “Ignition of global wildfires
at the Cretaceous/Tertiary boundary”. 1990.
27. MacLeod N, Rawson PF, Forey PL, Banner FT, Boudagher-Fadel MK, Bown PR,
Burnett JA, Chambers P, Culver S, Evans SE, Jeffery C, Kaminski MA, Lord AR,
Milner AC, Milner AR, Morris N, Owen E, Rosen BR, Smith AB, Taylor PD, Urquhart
E, Young JR. “The Cretaceous–Tertiary biotic transition”. 1997. Journal
of the Geological Society 154 (2): 265–292.
28. Johan Vellekoop, Appy Sluijs, Jan Smit, Stefan Schouten, Johan W. H. Weijers,
Jaap S. Sinninghe Damsté, and Henk Brinkhuis. “Rapid short-term cooling following
the Chicxulub impact at the Cretaceous–Paleogene boundary”. May 27, 2014.
Proceedings of the National Academy of Sciences of the United States of America,
vol. 111, no 21, 7537–7541.
29. Petit, J.R., J. Jouzel, D. Raynaud, N.I. Barkov, J.-M. Barnola, I. Basile, M. Benders,
J. Chappellaz, M. Davis, G. Delayque, M. Delmotte, V.M. Kotlyakov, M. Legrand,
V.Y. Lipenkov, C. Lorius, L. Pépin, C. Ritz, E. Saltzman, and M. Stievenard. “Climate
and atmospheric history of the past 420,000 years from the Vostok ice core,
Antarctica”. 1999. Nature 399: 429-436.
30. Hector Javier Durand-Manterola and Guadalupe Cordero-Tercero. “Assessments of
the energy, mass and size of the Chicxulub Impactor”. March 19, 2014. Arxiv.org.
31. Covey 1994
32. Project Daedalus Study Group: A. Bond et al. Project Daedalus – The Final Report
on the BIS Starship Study, JBIS Interstellar Studies, Supplement 1978.
33. “Science: Sun Gun”. July 9, 1945. Time.
34. Bryan Caplan. “The Totalitarian Threat”. 2006. In Nick Bostrom and Milan Cirkovic,
eds. Global Catastrophic Risks. Oxford: Oxford University Press, pp. 504-519.
35. Nick Bostrom. “Existential Risks: Analyzing Human Extinction Scenarios”. 2002.
Journal of Evolution and Technology, Vol. 9, No. 1.
36. Globus 2007
37. Traill LW, Bradshaw JA, Brook BW. “Minimum viable population size: A meta-analysis of 30 years of published estimates”. 2007. Biological Conservation 139 (1-2): 159–166.
38. Michelle Starr. “Japanese company plans space elevator by 2050”. September 23,
2014. CNET.
39. Bradley C. Edwards, Eric A. Westling. The Space Elevator: A Revolutionary Earth-to-Space Transportation System. 2003.
40. Henry Kolm. “Mass Driver Update”. September 1980. L5 News. National Space
Society.
41. Oyabu, Noriaki; Custance, Óscar; Yi, Insook; Sugawara, Yasuhiro; Morita, Seizo.
“Mechanical Vertical Manipulation of Selected Single Atoms by Soft Nanoindentation
Using Near Contact Atomic Force Microscopy”. 2003. Physical Review Letters 90
(17).
42. Nanofactory Collaboration. 2006-2014.
43. Ray Kurzweil. The Singularity is Near. 2005. Viking.
44. Ralph Merkle and Robert A. Freitas. “Remaining Technical Challenges for Achieving
Positional Diamondoid Molecular Manufacturing and Diamondoid Nanofactories”.
2007. Nanofactory Collaboration.
45. Eric Drexler. Radical Abundance: How a Revolution in Nanotechnology Will Change
Civilization. 2013. PublicAffairs.
46. Michael Anissimov. “Interview with Robert A. Freitas”. 2010. Lifeboat Foundation.
47. Robert J. Bradbury. “Sapphire Mansions: Understanding the Real Impact of
Molecular Nanotechnology”. June 2003. Aeiveos.
48. Drexler 2013.
49. Hall 2007
50. Robert A. Freitas. “A Self-Replicating, Growing Lunar Factory”. Proceedings of the
Fifth Princeton/AIAA Conference. May 18-21, 1981. Eds. Jerry Grey and Lawrence
A. Hamdan. American Institute of Aeronautics and Astronautics
51. Patrick Gerland, Adrian E. Raftery, Hana Ševčíková, Nan Li, Danan Gu, Thomas
Spoorenberg, Leontine Alkema, Bailey K. Fosdick, Jennifer Chunn, Nevena Lalic,
Guiomar Bay, Thomas Buettner, Gerhard K. Heilig, John Wilmoth. “World population
stabilization unlikely this century”. September 18, 2014. Science.
52. Marshall T. Savage. The Millennial Project: Colonizing the Galaxy in Eight Easy
Steps. 1992. Little, Brown, and Company.
53. Nick Bostrom. “What is a singleton?” 2006. Linguistic and Philosophical
Investigations, Vol. 5, No. 2: pp. 48-54.

Block 4 – Super technologies. Nanotech and Biotech.
Chapter 12. Biological weapons
The most immediate global catastrophic risk facing humanity, especially in the period
before 2030, plausibly appears to be the threat of genetically engineered viruses
causing a global pandemic. In 1918, the Spanish flu killed 50 million people, a number
greater than the casualties from World War I which occurred immediately prior. After
biology researchers publicly released the full genome for the virus, tech thinkers Ray
Kurzweil and Bill Joy published an op-ed article in The New York Times in 2005 castigating
them1, saying, “This is extremely foolish. The genome is essentially the design of a weapon
of mass destruction.”
The greatest risk comes into play when we consider the possibility of either creating
dangerous pathogens, like the Spanish flu, from scratch, or modifying the genomes of
existing pathogens, like bird flu, to make it transmissible between humans 2. Once the new
virus exists, the risk is that it either is used in warfare, or spread around airports in a
terrorist act.
Biological weapons are forbidden by the Biological Weapons Convention, which
entered into force in 1975 under the auspices of the United Nations 3. The treaty has been
signed or ratified by 154 UN member states, signed but not ratified by 10 UN member
states, and not signed or ratified by an additional 16 UN member states. There are three
non-UN member states which have not participated in the treaty: Kosovo, Taiwan, and
Vatican City. A summary of the convention is in Article 1 of its text:
Each State Party to this Convention undertakes never in any circumstances to
develop, produce, stockpile or otherwise acquire or retain:

(1) Microbial or other biological agents, or toxins whatever their origin or
method of production, of types and in quantities that have no justification for
prophylactic, protective or other peaceful purposes;
(2) Weapons, equipment or means of delivery designed to use such agents
or toxins for hostile purposes or in armed conflict.
The two largest historical stockpiles, those of the United States and Russia, have
allegedly been destroyed, though Russian destruction of Soviet biological weapons
facilities is mostly undocumented. In 1992, Ken Alibek, a former Soviet biological warfare
expert who was the First Deputy Director of the Biopreparat (Soviet biological weapons
facilities), defected to the United States. Over the subsequent decade, he wrote various op-eds and a book on how massive the Soviet biological weapons program was, and lamented
the lack of international oversight of Russia's biological weapons program, pointing out that
inspectors are still denied access to many crucial facilities 4.
Both the United States and Russia still maintain facilities that study biological
weapons for “defensive purposes,” and these facilities are known to contain samples of
dangerous viruses, such as smallpox. Two medical journal papers claim that as recently as
the late 80s, the United States military was developing vaccines for “tularemia, Q
fever, Rift Valley fever, Venezuelan equine encephalitis, Eastern and Western equine
encephalitis, chikungunya fever, Argentine hemorrhagic fever, the botulinum toxicoses, and
anthrax”5,6. The biological weapons convention is mostly a “gentleman's agreement,” is not
backed by monitoring or inspections, and offers wide latitude for the development of
offensive biological weaponry. The James Martin Center for Nonproliferation Studies maintains a
detailed list of “known,” “probable,” and “possible” biological and chemical agents
possessed by various countries, a list on which many dangerous pathogens appear 7.
Furthermore, the Federation of American Scientists has questioned whether the current US
interpretation of the Biological Weapons Convention is in line with its original
interpretation8:
Recently, the US interpretation of the Biological Weapons Convention has come to
reflect the point of view that Article I, which forbids the development or production of
biological agents except under certain circumstances, does not apply to non-lethal
biological weapons. This position is at odds with original US interpretation of the
Convention. From the perspective of this original interpretation, current non-lethal
weapons research clearly exceeds the limits of acceptability defined by Article I.
There is also the “dual-use problem,” meaning that pathogens grown for testing and
defensive uses can quickly be re-purposed for offensive use. The United States had an
extensive biological weapons program from 1943 to 1969, weaponizing anthrax, tularemia,
brucellosis, Q-fever, VEE, botulism, and staph B. The procedures and methods needed to
grow and weaponize these pathogens are known to the US military, and industrial
production could begin at any time. The US developed munitions such as the E120
biological bomblet, a specialized bomblet designed to hold 0.1 kg of biological agent, to
rotate rapidly and shatter on impact, spraying tularemia bacteria across as wide an area as
possible. Despite the international agreement, all the tools for biological warfare could
readily be built.
What is tularemia? It is a highly infectious bacteria which is found in various
lagomorphs (hares, rabbits, pikas) in North America. Humans can become infected by as
few as 10-50 bacteria. From the point of view of biological weaponry, tularemia is attractive
because it is easy to aerosolize, easy to clean up (unlike anthrax), and is incapacitating to
the victim. Within a few days of contact, tularemia causes pus-filled lesions to form,
followed by fever, lethargy, anorexia, symptoms of blood poisoning, and possibly death.
Tularemia causes the lymph nodes to swell and fill with pus, similar to bubonic plague.
The death rate with treatment is about 1%; without treatment it is about 7%. The low death rate is
actually somewhat helpful from the perspective of warfare, as sick or injured soldiers and
civilians must be taken care of by others, creating a burden that hampers fighting
capability.
Even worse than tularemia is weaponized anthrax, because anthrax spores are
persistent and very difficult to clean up. To clean them up requires sterilizing an entire area
with diluted formaldehyde. Anthrax spores are not fungal spores—the scientific term is
“endospores,” which are just a dormant and more resilient form of the original bacteria.
“Spores” for short. Anthrax spores stay in a dehydrated form and are reactivated when a
host inhales or ingests them, or they come into contact with a skin lesion. At that point, they
come to life quickly and can cause fatal disease. We'll spare you a detailed description of
the disease, but the death rate for ingestion of anthrax spores is 25-60%, depending upon
promptness of treatment9.
After anthrax-contaminated letters containing just a few teaspoons' worth of spores
were sent by terrorists to congressional officials in 2001, it cost over $100 million to clean
up the anthrax trail safely, which included spores at the Brentwood mail sorting facility in
northeast Washington and the State Department mail sorting facility in Sterling, Virginia 10.
Dorothy A. Canter, chief scientist for bioterrorism issues at the Environmental Protection
Agency, who tracked the decontamination work, said, “It's in the hundreds of millions of
dollars for the cleanup alone.”
If it cost over $100 million to clean up just two mailing facilities, imagine what it
would cost to clean up a whole city. It would be close to impossible. Anthrax spores are
very long-lasting—there have been cases where anthrax-infected animal corpses have
caused disease 70 years after they were dug up. So, it is reasonable to assume that a site
thoroughly contaminated with anthrax would remain uninhabitable for 70 years or longer 11.
It would be cheaper and easier to just rebuild a city in a different place than clean one up.
The total real estate value of Manhattan was $802.4 billion in 2006; consider if that entire
area were contaminated with anthrax. It would knock more than $800 billion off the US
economy all at once, and far more would be lost to second-order economic disruption.
The case of Gruinard Island, an island in Scotland contaminated with anthrax due to
biological weapons testing, illustrates what is required to remove anthrax 12. The island is
just soil and dirt (no trees), and is under a square mile in size. To decontaminate the island
required diluting 280 tonnes of formaldehyde in seawater and spreading it over the entire
surface of the island, with complete removal of the worst-contaminated topsoil. After the
decontamination was completed, a flock of sheep was introduced to the island, and they
thrived without incident.
These descriptions of biological weapons are included to give the reader more
clarity about what we are referring to when the phrase is used.
Consider a total nuclear war between the United States and Russia, fought over
Crimea or some other border area. After each side unloaded on one another, many major
cities and military bases would be destroyed, but each side would still have ample military
resources to continue fighting. As the conflict dragged on, there would be an increasing
desperation to turn to exotic weapons with a better cost-benefit profile than conventional
arms. Biological weapons fall into this category. Anthrax could quickly be produced in huge
quantities (if there aren't already secret stockpiles we don't know about), loaded up into
bomblets, and delivered to the enemy. Recall that few people even remember the exact
reason why World War I was fought, yet it still lasted for four years and led to the
death of more than 9 million combatants. Such a total war is certainly plausible, even if it
would be fought for reasons that don't seem to make much sense to us today.
Alongside anthrax, other dangerous pathogens, such as tularemia, might be used,
creating an artificial epidemic. Refugees pouring out from the cities into the countryside
would carry the pathogens along with them, spreading them among friends, family, and
whoever else they came across. The war might be accompanied by the high-altitude
detonation of hydrogen bombs, causing a continent-wide EMP that fries all unprotected
electronics and makes industrial agriculture impossible 13. All sophisticated farm machinery
is dependent on electronic circuits and cannot function without them. They would be fried in
an EMP attack. Hygiene would collapse, as refugee camps would lack proper sanitation
and would likely be contaminated with human waste.
Combinations of factors such as these—nuclear war, biological war, pandemic, breakdown
of infrastructure—all could combine to deal an extremely hard hit to large countries such as
Russia and the United States, causing them to lose more than nine-tenths of their
population. Frank Gaffney, head of the Center for Security Policy, has said of an EMP
attack, “Within a year of that attack, nine out of 10 Americans would be dead, because we
can’t support a population of the present size in urban centers and the like without
electricity.”14
Without some external force to reestablish order in a situation such as that
described here, countries like the United States and Russia could even decline into feudal
societies. Further war and conflict based on grudges from the original conflict could
continue whittling down the human species, until crop failures from nuclear winter finally kill
us off. Modern agriculture is based on artificial fertilizer, and if infrastructure is down and
the security situation is poor, artificial fertilizer cannot be produced. Based on subsistence
agriculture alone, the carrying capacity of the Earth would be significantly lower than it is
now. Our modern population depends on industrial agriculture and artificial fertilizer to
sustain this many people. Without these tools, billions would perish.
We've briefly reviewed one possible disaster scenario, which would not be fatal to
humanity all at once, but could be the trigger for a long-term decline that eventually
culminates in extinction. We will return to such scenarios later in the chapter, but for now
we will review the tools which could be used to cause trouble.
The key danger area is synthetic biology, a broad term defined as follows 15:
A) the design and construction of new biological parts, devices, and systems, and
B) the re-design of existing, natural biological systems for useful purposes.
The term “synthetic biology” somewhat replaces the older term “genetic
engineering,” since synthetic biology has a more inclusive definition which captures much
more of the work happening today. In synthetic biology, genes are not merely modified, but
completely new genes may be synthesized and inserted, or even viruses constructed from
entirely synthetic genomes, as was achieved by Craig Venter in 2010 16. Venter had
previously chemically synthesized the genome of the bacteriophage phi X 174 in
2003, but it was not until 2010 that an artificial bacterial genome was created. The genome
was synthesized and injected into a recipient bacterium whose own DNA had been removed;
it “booted up” the organism, which came to life, Frankenstein-style, and
demonstrated itself capable of self-replication. The bacterium was named Mycoplasma
laboratorium, after the Mycoplasma genitalium bacterium which inspired its creation, the
simplest bacterium known at the time. Venter's Institute for Biological Energy Alternatives called the
original 2003 viral synthesis, “an important advance toward the goal of a completely
synthetic genome that could aid in carbon sequestration and energy production.” 17
In 2002, the polio virus was synthesized from an artificial genome 18, an advance that
Craig Venter called “irresponsible” and “inflammatory without scientific justification.” These
synthetic virus particles are absolutely indistinguishable from natural particles on all
parameters, from size to behavior and contagiousness. Though Venter was working on
developing a synthetic artificial viral genome at the time, the objectionable nature of the
experiment was that polio is a virus infectious to humans. The project was financed by the
US Department of Defense as part of a biowarfare response program. We see that “biowarfare
response” also encompasses creating dangerous pathogens, not just “responding” to them.
That is the fundamental nature of dual use. Commenting on the advance, researcher
Eckard Wimmer said, “The reason we did it was to prove that it can be done and it is now a
reality”.19
The synthetic biology path pursued by Craig Venter's J. Craig Venter Institute (JCVI)
focuses on stripping down the simplest known bacterium, Mycoplasma genitalium, which
consists of 525 genes making up 582,970 base pairs. The idea, called the “Minimal
Genome Project” is to simplify the genome of the bacterium to make a free-living cell with
the simplest possible genome for survival, which can then serve as a base template for
engineering new organisms with functions like producing renewable fuels from biological
feedstock. The overview of the “Synthetic Biology and Bioenergy” group of JCVI says,
“Since 1995, Dr. Venter and his teams have been trying to develop a minimal cell both to
understand the fundamentals of biology and to begin the process of building a new cell and
organism with optimized functions.”
Activity on the Minimal Genome Project picked up in 2006, with
participation from Nobel laureate Hamilton Smith and microbiologist Clyde A. Hutchison III.
The project is still ongoing, and has reached important milestones. From the 525 genes of
M. genitalium, JCVI plans to scale down to just 382 genes, synthesizing them and
transplanting them into an M. genitalium cell whose own genome has been removed for this
purpose. This has not yet been achieved as of this writing (January 2015). In 2010, the
group successfully synthesized the 1,078,809 base pair genome of Mycoplasma mycoides
and transplanted it into a Mycoplasma capricolum cell, which was shown to be viable, i.e.,
it self-replicated billions of times. Craig Venter said the new organism, called Synthia, was
“the first species... to have its parents be a computer”. 20
Today, synthesizing genomes of bacterial length, i.e., half a million or more base
pairs, is extremely expensive and complicated. The 1,078,809 base pair genome of
Mycoplasma laboratorium cost $40 million to make and required a decade of work from 20
people. That works out to almost $40 per base pair. For more routine gene synthesis of
segments about 2,000-3,000 base pairs long, the more typical cost is 28 cents per base
pair, or $560 for a 2,000 base pair segment. Synthesizing a viral genome like phi X 174,
5,375 base pairs in length, currently costs about $1,500. So, the barrier to creating new
doomsday viruses is not cost so much as it is knowledge about what kind of genome to
synthesize.
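The cost arithmetic is easy to reproduce (a Python sketch; all figures are from the text):

print(round(40e6 / 1_078_809, 2))           # Synthia-scale synthesis: ~$37 per base pair
print(0.28 * 2_000)                         # routine 2,000 bp segment: $560
print(round(0.28 * 5_375))                  # phi X 174-length viral genome: ~$1,505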
Basic tools for garage lab bioengineering and synthetic biology have been
developed. One of the most prominent international groups is the International Genetically
Engineered Machines (iGEM) competition, which was initially oriented towards
undergraduate students, but has since expanded to include high school students and
entrepreneurs. The MIT group led by Tom Knight which created iGEM also created a
standardized set of DNA sequences for genetic engineering, called BioBricks, and The
Registry of Standard Biological Parts, used by iGEM. The way iGEM works is that students
are assigned a kit of biological parts from the Registry of Standard Biological Parts, and are
given the summer to engineer these parts into working biological systems operating within
cells. As an example, the winners in 2007 were a team from Peking University, who
submitted “Towards a Self-Differentiated Bacterial Assembly Line”. Other winners have
“developed a new, more efficient approach to bio production (Slovenia 2010), improved the
design and construction of biosensors (Cambridge 2009), created a prototype designer
vaccine (Slovenia 2008), and re-engineered human cells to combat sepsis (Slovenia 2006)”21.
There is a BioBrick Public Agreement, which is “a free-to-use legal tool that allows
individuals, companies, and institutions to make their standardized biological parts free for
others to use.” This makes what is essentially a “DNA dev kit” available for all to use. The
Standard Biological Parts Registry currently contains over 8,000 standard biological parts,
which can be used to create a wide variety of biological systems within cells. Most of the
parts within the registry are designed to operate within the E. coli bacterium, which lives in
the intestine and is the most widely studied prokaryotic organism.
The basic technological trend is towards biotech lab equipment becoming cheaper
and the knowledge of how to use it becoming more diffuse. The constant reduction in price
and improvements in the simplicity of DNA sequencers and synthesizers make
“biohackers” possible. However, it is important to point out that falling costs in the field of
gene synthesis are absolutely not exponential, and there is no predictable “Moore's law”
type effect happening (besides, Moore's law itself is leveling off). Still, we are entering an
era where it will be possible for nearly anyone to synthesize custom viruses, and eventually
bacteria.
Even simple genetic engineering, making a few modifications to an existing genome,
can be extremely dangerous. In 2014, controversial Dutch virologist Ron Fouchier and
his team at Erasmus Medical Center in the Netherlands studied the H5N1 avian flu
virus and determined that only five mutations were needed to make this bird flu
transmissible between humans22. This was confirmed by making the new strain and
spreading it between ferrets, which serve as a reliable human model for contagiousness. In
2011, Fouchier and his group had published details on how to make bird flu contagious
among humans, which was met by widespread condemnation from critics23. Microbiologist
David Relman of Stanford University told NPR, “I still don't understand why such a
risky approach must be taken […] I'm discouraged.”24 D. A. Henderson of the
University of Pittsburgh, a pioneer in the eradication of smallpox, said of this
“ominous news” that “the benefits of this work do not outweigh the risks.”25 The
World Health Organization conveyed “deep concern” about “possible risks and misuses
associated with this research” and “the potential negative consequences.” 26 In December
2011, the National Science Advisory Board for Biosecurity (NSABB) ruled that Fouchier's
work be censored, but after much debate, the ruling was reversed and all the specific
mutation data was openly published27. Around that same time, Secretary of State Hillary
Clinton said there is “evidence in Afghanistan that…al Qaeda…made a call to arms for—
and I quote—'brothers with degrees in microbiology or chemistry to develop a weapon of
mass destruction.’”28 Fouchier told New Scientist that his human-transmissible bird flu
variant “is transmitted as efficiently as seasonal flu” and that it has a 60 percent mortality
rate29.
Biological risk
The basic one-factorial scenario of biological catastrophe is a global pandemic caused
by a single type of virus or bacterium. The spread of this pathogen could occur via two means:
as an epidemic transferred from human to human, or in the form of contamination of the
environment (air, water, food, soil). The Spanish flu epidemic of 1918, for example, spread
over the entire world except for several remote islands. Would-be man-killing
epidemics face two problems. The first is that if carriers of the pathogen die too quickly, it
can't spread. The second is that no matter how lethal the epidemic, some people usually have
congenital immunity to it. For example, about one in every three hundred people has an
innate immunity to AIDS, meaning they can contract HIV and it never progresses to AIDS.
Even if innate immunity is rare or non-existent, as is apparently the case with anthrax (the
authors were not able to find any medical research papers citing cases of innate immunity
to anthrax), vaccines are usually available.
A second variant of a global biological catastrophe would be the appearance of an
omnivorous agent which destroys the entire biosphere, devouring all living cells, or a more
limited version that takes out a large chunk of the biosphere. This is a more fantastic
scenario, less likely in the near term, but of uncertain likelihood in the long term. It seems
fanciful that natural evolution could produce such an agent, but perhaps through deliberate
biological engineering it could be possible. A bacterium immune to bacteriophages due to
fundamental genetic differences from the rest of all living bacteria would be a possible
candidate for such a pathogen. Harvard geneticist George Church is working towards such
a bacterium for research purposes30.
In 2009, Chris Phoenix of the Center for Responsible Nanotechnology wrote 31:
I learned about research that is nearing completion to develop a strain of E. coli
which cannot be infected by bacteriophages. Phages are a major mechanism—
likely *the* major mechanism—that keeps bacteria from growing out of control. A
phage-proof bacterium might behave very similarly to "red tide" algae blooms, which
apparently happen when an algae strain is transported away from its specialized
parasites. But E. coli is capable of living in a wide range of environments, including
soil, fresh water, and anaerobic conditions. A virus-proof version, with perhaps 50%
lower mortality, and (over time) less metabolic load from shedding virus defenses
that are no longer needed, might thrive in many conditions where it currently only
survives. The researchers doing this acknowledge the theoretical risk that some
bacteria might become invasive, but they don't seem to be taking anywhere near the
appropriate level of precaution. They are one gene deletion away from creating the
strain.
The work Phoenix is referring to was undertaken by the famous geneticist George
Church, who describes it in a Seed magazine article32:
Given this knowledge, the modern tools of biotechnology allow us to do something
amazing: We can alter the translational code within an organism by modifying the
DNA bases of its genome, making the organism effectively immune to viral infection.
My colleagues and I are exploring this within E. coli, the microbial powerhouse of
the biotech world. By simply changing a certain 314 of the 5 million bases in the E.
coli genome, we can change one of its 64 codons. In 2009 this massive (albeit
nanoscale) construction project is nearing completion via breakthroughs in our
ability to “write” genomes. This process is increasingly automated and inexpensive — soon
it will be relatively easy to change multiple codons. Viral genomes range
from 5,000 to a million bases in length, and each of the 64 codons is present, on
average, 20 times. This means that to survive the change of a single codon in its
host, a virus would require 20 simultaneous, specific, spontaneous changes to its
genome. Even in viruses with very high mutation rates, for example HIV, the chance
of getting a mutant virus with the correct 20 changes and zero lethal mutations is
infinitesimally small.
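To see why this chance is infinitesimally small, a rough back-of-the-envelope calculation helps. The sketch below is illustrative only: the per-site mutation rate is an assumed round number near the upper end reported for fast-mutating RNA viruses, not a figure from Church's article.

    # Rough estimate of the chance that a virus acquires 20 specific,
    # simultaneous point mutations in a single replication.
    # Assumption: per-site, per-replication mutation rate of ~1e-4,
    # near the upper end for RNA viruses such as HIV (illustrative only).
    per_site_rate = 1e-4
    required_sites = 20  # one compensating change per codon occurrence

    p_all_at_once = per_site_rate ** required_sites
    print(f"P(20 specific simultaneous mutations) ~ {p_all_at_once:.0e}")
    # ~1e-80: vanishingly small even across astronomical numbers of virions.

Even multiplying this figure by the enormous number of virions produced during an epidemic leaves the probability effectively at zero, which is the point Church is making.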
As of January 2015, this E. coli has not yet been created, though intermediate
variants that display increased virus resistance have been. The basic idea is scary, though
—how much more effective would E. coli be if it were immune to bacteriophages? What
would the large-scale consequences be? Are we going to be up to our necks in self-replicating,
virus-immune E. coli? The specific safety issues and the worst-case scenarios in this vein
have been studied poorly, if at all. Articles on the topic are often filled with reassurances by
scientists that nothing will go wrong, while other scientists issue ominous warnings. George
Church recommends the following countermeasures 33:
If we engineer organisms to be resistant to all viruses, we must anticipate that
without viruses to hold them in check, these GEOs could take over ecosystems.
This might be handled by making engineered cells dependent on nutritional
components absent from natural environments. For example, we can delete the
genes required to make diaminopimelate, an organic compound that is essential for bacterial cell walls (and hence bacterial survival) yet very rare in humans and our
environment.
Notice that it takes special effort even to come up with a way to make these organisms less harmful. By default, if released, they would take over planetary ecosystems.
That is a scary thought. Church refers to the possibility of “molecular kudzu” taking over the
biosphere. The risk seems so clear that engineering such organisms is arguably a risk we
should not be willing to take. Where are the international treaties forbidding the creation of
such organisms? With one of the world's most prominent geneticists, George Church,
pioneering their creation, it seems extremely unlikely that the United States government will
do anything to restrict this research. We're just going to have to wait until these organisms
are created and see what happens.
The third variant of a globe-threatening biological disaster is a binary bacteriological weapon. For example, tuberculosis and AIDS are each chronic illnesses, but they are even more deadly in combination. A similar concept would be a two-level biological weapon. The first stage would be a certain bacterium whose toxin imperceptibly spreads worldwide. The second stage would be a bacterium that makes an additional toxin which is lethal in combination with the first. This opens up a larger space of possibilities than a single-factor biological attack. Such an attack could be possible, for instance, if a ubiquitous bacterium such as E. coli were used as the template for a bioweapon. The selective nature of a two-staged attack would also make it more amenable to targeting a certain nation or ethnic group.
The fourth variant of a biological global catastrophic risk is the dispersion in the air of
a considerable quantity of anthrax spores (or a similar agent). This variant does not
demand a self-replicating pathogenic agent—it may be inert but lethal. As previously
mentioned, anthrax contamination lasts a very long time, more than 50 years.
Contamination does not require a very large quantity of the pathogen. One gram of
anthrax, properly dispersed, can infect a whole building. To contaminate a substantial
portion of the planet would require thousands of tonnes of anthrax, and very effective
delivery methods (drones), but it could be done in theory. The amount of anthrax required is not unattainable—the USSR had stockpiled 100 to 200 tons of anthrax at Kantubek on
Vozrozhdeniya Island until it was destroyed in 199234. If this amount of anthrax had been released from its drums as a powder and stirred up by a large storm, it could have contaminated an area hundreds of miles across for more than 50 years. At its height, the Soviet biological weapons program employed over 50,000 people and produced 100 tons of weaponized smallpox a year35. The huge fermenting towers at Stepnogorsk in modern-day Kazakhstan could produce more than two tons a day, enough to wipe out an entire city36. More advanced basic technology and manufacturing could improve this throughput
by an order of magnitude or more, and it could all be done in secret. Many in the US intelligence community are skeptical that Russia has ever completely shut down its biological weapons program. When Ken Alibek defected from Russia in 1992, he said the
biological weapons program was still ongoing, despite the country being a signatory to the
Biological Weapons Convention forbidding the use or production of biological weapons.
A fifth variant of a world-threatening bioweapon is an infectious agent that changes human behavior, for example by causing heightened aggression leading to war. A pathogen that causes aggression or the loss of the feeling of fear (as toxoplasma does) can induce behavior in infected animals that promotes transmission to other animals. It is theoretically possible to imagine an agent which would drive a person to spread it to others, and for this to continue until everyone is infected. This may actually be more intimidating than all the previous scenarios, because a pathogen spread by aggressive carriers could be spread far more thoroughly and effectively than a typical pandemic, reaching even remote settlements.
Toxoplasma gondii is the world's best known behavior-changing pathogen, a parasitic
protozoan estimated to infect up to a third of the world's population. One study of its
prevalence in the United States found the protozoan in 11 percent of women of childbearing age37 (15 to 44 years), while another study put the overall US prevalence rate at
22.5 percent38. The same study found a prevalence rate of 75% in El Salvador. Research
has linked toxoplasmosis with ADD 39, OCD40, and schizophrenia41, and numerous studies
have found a positive correlation between latent toxoplasmosis and suicidal behavior 42,43,44.
T. gondii has the capability to infect nearly all warm-blooded animals, though pigs, sheep,
and goats have the highest infection rates. The prevalence of T. gondii among domestic
cats is 30-40 percent45. Czech parasitologist Jaroslav Flegr found that men with toxoplasmosis are likely to act more suspiciously relative to controls, while infected women show more warmth46. In addition, infected men show higher blood testosterone than
uninfected men, while infected women have lower levels of testosterone relative to their
counterparts.

Another obvious example of a behavior-changing pathogen is rabies. Pathogens like
rabies take one to three months to show symptoms, which would be enough time to spread
around much of the world. As an example, the Spanish flu spread extremely quickly, with
populations going from near-zero flu-related deaths to 25 deaths per thousand people per
week in just under a month.47
It is not just unusual pathogens like T. gondii and rabies that studies have shown to modify human behavior. One team of epidemiologists found that even the common flu vaccine modifies human behavior. Their abstract reads48:
Purpose: The purpose of this study was to test the hypothesis that exposure to a
directly transmitted human pathogen-flu virus-increases human social behavior
presymptomatically. This hypothesis is grounded in empirical evidence that animals
infected with pathogens rarely behave like uninfected animals, and in evolutionary
theory as applied to infectious disease. Such behavioral changes have the potential
to increase parasite transmission and/or host solicitation of care.
Methods: We carried out a prospective, longitudinal study that followed participants
across a known point-source exposure to a form of influenza virus (immunizations),
and compared social behavior before and after exposure using each participant as
his/her own control.
Results: Human social behavior does, indeed, change with exposure. Compared to
the 48 hours pre-exposure, participants interacted with significantly more people,
and in significantly larger groups, during the 48 hours immediately post-exposure.
Conclusions: These results show that there is an immediate active behavioral
response to infection before the expected onset of symptoms or sickness behavior.
Although the adaptive significance of this finding awaits further investigation, we anticipate it will advance ecological and evolutionary understanding of human-pathogen interactions, and will have implications for infectious disease epidemiology and prevention.
This study shows that we are only beginning to uncover the complex nature of
human-pathogen interactions, and common diseases may have a more substantial
behavioral modification component than is commonly thought. That behavioral modification
component is likely to be adaptive from the perspective of the pathogen; that is, the pathogen forces people to transmit it to others more effectively. With the right genetic
tweaks, could this property be magnified to an intense level? Only future genetic
engineering and experimentation will be able to determine that. It is definitely a global risk
worth considering, and is almost never discussed.
A sixth variant of the biological threat is an auto-catalytic molecule capable of
spreading beyond natural boundaries. One example would be prions, protein fragments
which cause mad cow disease and in some cases infect and kill humans. Prions only
spread through meat, however. Prions are unique among pathogens in that they completely lack nucleic acids. Even more frightening, prion diseases affect the brain and neural tissue, are untreatable, and are universally fatal. They consist of a misfolded
protein which enters the brain and causes healthy proteins to change to the misfolded
state, leading to destruction of the healthy brain and death. The prion particles are
extremely stable, meaning they are difficult to destroy through conventional chemical and
physical denaturation methods, complicating containment and disposal. Fungi also harbor
prion-like molecules, though they do not damage their hosts. Outbreaks of prion diseases
among cattle, called mad cow disease because of its unusual behavioral effects, can occur
when cows are given feed that includes ground-up cow brains.
Is there some kind of prion or other subviral protein particle which can spread quickly
between people through mucous membranes or lesions, which antibiotics and antiviral
drugs cannot fight? We may observe that the fact this has not happened in the course of
recorded history is evidence against its possibility, but synthetic biology will open up vast
new spaces of molecular engineering, and we will be able to create and experiment with
molecules inaccessible to natural evolutionary processes. Among these possibilities may
be a dangerous new prion or some other kind of auto-catalytic molecule that bypasses our
immune systems completely.
A seventh variant of biological disaster is the distribution throughout the biosphere of
some species of yeast or mold which is genetically engineered to release a deadly toxin
such as dioxin or botulinum. Since yeast and mold are ubiquitous, they would distribute the
toxin everywhere. It would probably be much easier to carry out this kind of bio-attack than
manufacturing dioxin in bulk and distributing it around the globe. Botulinum bacteria are capable of producing eight types of toxin, of which there are antitoxins for only seven. In 2013, scientists discovered a new type of botulinum toxin and the gene that allows the cell to produce it, and in a highly unusual move, withheld its genetic details from publication until an antitoxin could be developed.49
One last biological danger is the creation of so-called "artificial life," that is, living organisms constructed with the use of DNA or another set of amino acids but with deep biochemical differences from other life forms. Such organisms might prove invincible to
the immune systems of modern organisms and threaten the integrity of the biosphere, as in
the E. coli example discussed earlier. The virus-immune bacteria described above would
just be the first step into a world of organisms that have a different number or different
types of fundamental codons than the rest of life. In this way, we will eventually create
organisms which can rightly be said to be “outside the Tree of Life”.
A possible countermeasure to all these threats would be to create a “world immune
system”—that is, the worldwide dispersion of sets of genetically modified bacteria which
are capable of neutralizing dangerous pathogens. However, such a strategy would have to
be extremely complex, and may introduce other problems, such as “autoimmune reactions”
or other forms of loss of control. A worldwide immune system based on reprogrammable nanotechnology would likely be more effective in the long term, but it would be even more difficult to build than a biological global immune system. Alternatively, nanobots could
be developed which can be injected into the bloodstream of any human and can eliminate
any pathogen whatsoever50.
Ubiquitous Polyomaviruses: SV40, BKV, JCV
There is another case which may help to better illustrate biological risk, the case of
the asymptomatic simian virus 40, which was inadvertently spread to hundreds of millions
of people in the United States and the former Soviet Union through polio vaccines 51. The
case of SV40 is extremely controversial, and talking about it deeply bothers some biologists, who call it a "zombie scare" and attack any discussion of it as scaremongering.
Regardless of these attacks, it is an objective fact that over a hundred million people were
infected with this virus in the 1950s and 1960s, when polio vaccine stocks were
contaminated with it. One study found that 4 percent of healthy American subjects have it 52.
The virus was present in humans even before the poliovirus vaccine was invented 53, and
appears to be endemic in the human population, though the primary reservoir is unknown.
The controversy over SV40 primarily revolves around whether it causes cancer in humans. We find the studies that argue there is no link between SV40 and cancer in humans to be convincing54. For us, a far more interesting topic is recognizing that there are
asymptomatic viruses which are prevalent in the population. Studies have found that SV40
antigen prevalence (a key indicator of the presence of the virus) in England is a few
percent55. There are other viruses which are far more common; BK virus, from the same
family of polyomaviruses, has a prevalence between 65 percent and 90 percent in
England, depending on age56. Prevalence is highest for those under age 40 and declines somewhat in old age. Another virus, JCV, starts off at just a 10 percent prevalence
among English infants, jumping to 20 percent prevalence in teenagers and escalating to 50
percent prevalence among seniors. Studies have confirmed that BKV and JCV are both
horizontally and vertically transmissible in humans 57,58. Despite their ubiquity, practically
nothing is known about the method of transmission.
BKV, JCV, and SV40 are all similar viruses, and cause tumors in monkeys and
rodents. The notion that there are ubiquitous viruses which cause cancer in animals and
about which close to nothing is known of the method of transmission is rather important,
and scary. It raises the possibility of a similar virus being genetically engineered in the
future, one that actually does cause cancer in humans (like SV40 has been incorrectly
claimed to), possibly at a high incidence. If the virus does not respond to antiviral drugs
(many viruses don't), it might be able to spread in an (unknown) fashion similar to the BK
virus, achieving 80 percent incidence in the population and causing a high incidence of
fatal cancer. It would be difficult for the human species to “shake loose” from the virus,
since it would be passed from mothers to children. The only way of perpetuating the human species in such circumstances would be for only women and men known to be uninfected to produce offspring. Meanwhile, normal horizontal infection would keep occurring. Keeping
civilization going in such a circumstance, especially as infrastructure collapsed due to a
lack of population, could be very difficult.
Plural biological strike
Though it may be possible to stop the spread of one pandemic through the effective
use of vaccination and antibiotics, an epidemic caused by several dozen species of diverse
viruses and bacteria, released simultaneously in many different parts of the globe, would
produce a much greater challenge. It would be difficult to assemble all the appropriate
antibiotics and vaccines and administer them all in time. If a single virus with 50 percent lethality would simply be a huge catastrophe, 30 diverse viruses and bacteria with 50 percent lethality would mean the guaranteed destruction of all who had not hidden in bunkers. Alternatively, about 100 different organisms with 10 percent lethality could have a similar outcome.
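The arithmetic behind this claim is simple; the sketch below assumes the agents kill independently of one another, which is an idealization.

    # Survival probability under several simultaneous, independent epidemics.
    def survival_probability(lethality: float, n_agents: int) -> float:
        # Chance of surviving each agent is (1 - lethality); surviving all
        # of them multiplies those chances together.
        return (1.0 - lethality) ** n_agents

    print(survival_probability(0.5, 30))   # ~9.3e-10: essentially no survivors
    print(survival_probability(0.1, 100))  # ~2.7e-5: similar outcome, weaker agents

In other words, no plausible level of congenital immunity or medical response protects against dozens of independent agents at once.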
A plural strike could be the most powerful means of conducting biological war, or of setting up a Doomsday weapon. But a plural strike could also occur inadvertently, through the release of pathogenic agents by different terrorists, or through competition among biohackers in a wartime scenario. Even nonlethal agents, in conjunction, could weaken a person's immune system so greatly that their further survival becomes improbable.
The possibility of the plural application of biological weapons is one of the most significant factors of global risk.
Biological delivery systems
To cause mass mayhem and damage, a biological weapon should not only be deadly,
but also infectious and rapidly spreading. Genetic engineering and synthetic biology offer
possibilities not only for the creation of a lethal weapon, but also new methods of delivery.
It does not take great imagination to picture a genetically modified malarial mosquito which can live in any environment and spread with great speed around
the planet, infecting everyone with a certain bio-agent. Fleas, rats, cats, and dogs are all
nearly global animals which would be excellent vectors or reservoirs with which to infect
the human population.
The probability of use of biological weapons and its distribution in time
Taking everything into account, we estimate there is a probability of 10% that biotechnologies will lead to human extinction during the 21st century. This estimate is based on the assumption that there will inevitably be a wide circulation of cheap tabletop devices allowing the simple creation of various biological agents. The assumption is that gene synthesizers and bioprinting machines will be circulated on a scale comparable to computers today.
The properties a dangerous bioprinter (cheap mini-laboratory) would have are:
1) inevitability of appearance,
2) low cost,
3) wide prevalence,
4) uncontrollable by the authorities,
5) ability to create essentially new bio-agents,
6) simplicity of application,
7) a variety of created objects and organisms,
8) appeal as a device for manufacturing bioweapons and drugs.
A device meeting these requirements could consist of a typical laptop, a program (distributed through software-piracy channels) with a library of initial genetic elements, and a reliable gene synthesis or simple genetic engineering device, like CRISPR. These components already exist today; they just need to be refined, improved, and made cheaper. Such a set might be called a "bioprinter". Drug-manufacturing communities could be the distribution channel for the
complete set. As soon as these critical components come together, the demand rises, and
the system shows itself to have many practical uses (biofuels, pharmaceuticals,
psychedelics, novelties like glowing plants), it will quickly become difficult to control. The
future of biopiracy could become similar to the present of software piracy—ubiquitous and
rampant.
A mature bioprinter would allow the user to synthesize self-replicating cells with nearly
arbitrary properties. Anyone could synthesize cells that pumped out psilocybin (magic
mushrooms), THC (marijuana), cocaine precursors, spider silk, mother of pearl, or botulism
toxin. You could print out cells that produce life-saving medicines, produce glowing moss,
or print your own foods. Eventually, a very wide range of biomaterials and biological structures could be created using this system, including some very deadly and self-replicating ones.
The probability of global biological catastrophe, given widely available bioprinters, will increase very quickly as such devices are perfected and distributed. We can describe a probability curve which now has a small but non-zero value, and after a while escalates to a large one. What is interesting is not the exact form of the curve, but the time when it starts to grow sharply.
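One simple way to picture such a curve is a logistic function; the sketch below is purely illustrative, and the midpoint, steepness, and ceiling parameters are assumptions, not estimates.

    import math

    # Hypothetical logistic model of the yearly probability of a global
    # biological catastrophe as bioprinters spread. All parameters are
    # illustrative assumptions.
    def yearly_risk(year, midpoint=2025, steepness=0.5, ceiling=0.05):
        # Near zero well before `midpoint`, rising sharply around it,
        # saturating at `ceiling` afterwards.
        return ceiling / (1.0 + math.exp(-steepness * (year - midpoint)))

    for y in (2015, 2020, 2025, 2030):
        print(y, round(yearly_risk(y), 4))

On this picture, the question that matters is the location of the midpoint, the year the curve turns sharply upward.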
The authors estimate that this major upswing will take place sometime between 2020
and 2030. (Martin Rees estimates that a major terrorist bio attack with more than a million
casualties will occur by 2020, and he has put a wager of $1,000 behind this prediction 59.)
The cost of gene sequencers and synthesizers continues to fall, and will eventually make a bioprinter accessible. The genomes of many new organisms are sequenced every year, and there is an explosion of knowledge about the operating principles of organisms at the genetic and biochemical level. A much cheaper method of gene synthesis is also needed;
current methods involve expensive reagents.
Soon there will be software programs that can quickly model the consequences of a
wide variety of possible genetic interventions to common lab organisms. This software,
along with improved hardware, will allow the creation of the bioprinter described above.
That growth of probability density around 2020-2030 does not mean that there aren't
already terrorists who could develop dangerous viruses in a laboratory today.
As stated previously, based on what we currently know, the misuse of biotechnologies
appears to be number one on the current list of plausible global catastrophes (especially if
concurrent with a nuclear war), at least for the next 20-30 years. This risk would be
substantially lowered in the following scenarios:
1) Biological attack could be survived in underground shelters. The United States and
other countries could substantially upgrade their shelter infrastructure to preserve a major
percentage of the affected population after biological warfare or terrorism. There are
bunkers for the entire population of Switzerland, for instance, though they are not
adequately stocked.
2) The first serious catastrophe connected with a leak of dangerous pathogens results
in draconian control measures, enough to prevent the creation or distribution of dangerous
bioprinters. This would require a government substantially more authoritarian than what we
have now.
3) AI and molecular manufacturing are developed first, and serve as a shield against
bioweapons before they are developed. (Highly unlikely as these technologies seem much
more difficult than bioprinters.)
4) Nuclear war or other disaster interrupts the development of biotechnologies.
Obviously, depending on a disaster to mitigate another disaster is not the best strategy.
5) It is possible that biotechnologies will allow us to create something like a universal
vaccine or artificial immune system before biological minilaboratories become a serious
problem.
Unfortunately, there is an unpleasant feedback loop connected with protection against biological weapons. For the best protection we should prepare vaccines and countermeasures against as many viruses and bacteria as possible, but the more experts work on that task, the greater the chance that one of them goes rogue and becomes a terrorist or troublemaker. Another risk, besides targeted, weaponized pathogens,
is the probability of creating biological “green goo”, that is, a universally omnivorous
microorganism capable of quickly digesting a wide variety of organic matter in the
biosphere in general. For this purpose, it would be necessary to create one microorganism with properties which are available separately in different microorganisms: the abilities to photosynthesize, to dissolve and acquire nutrients, to breed with the speed of E. coli, and so on.
Usually, such a scenario is not considered because it is assumed that such organisms
would have already evolved, but the design space accessible through deliberate biodesign
is much, much larger than through evolution and natural selection alone. Such an organism that ends up as a bioweapon might even be designed for some other purpose,
such as cleaning up an oil spill. There is also a view that enhancement of crops for farming
could eventually lead to the creation of a “superweed” that outcompetes a majority of
natural plants in their biological niches.
In conclusion for this section, there are many imaginable ways of applying
biotechnology to the detriment of mankind, and we've only scratched the surface here. The
space of unknown risks is much larger than has been explicitly enumerated. Though it is
possible to limit the damage of each separate application of biotechnology, the low cost, inherent privacy, and prevalence of these technologies make their ill-intentioned application practically inevitable. In addition, many biological risks are not obvious and will
emerge over time as biotechnology develops.
Already, biological weapons are the cheapest way of causing mass death: it takes a fraction of a cent's worth of botulinum toxin to kill someone. Manufacturing modern bioweapons like anthrax for military purposes necessitates large protected laboratories and skilled personnel, but it is only a matter of time until this changes. It can be even cheaper to kill if we consider the ability of a pathogenic agent to self-replicate.
It is often said that biological weapons are not effective in military applications. However, they may have non-conventional uses: as a weapon for a stealth strike targeting an enemy's rear, or as a last-ditch defense weapon to infect invading armies.
Plausible Motives for Use of Biological Weapons

The issue of motive with regard to unleashing extremely destructive and potentially
uncontrollable agents will be addressed in further detail in section two of this book, but it's
worth addressing here specifically with regard to biological weapons.
There are two likely scenarios in which biological weapons might be used: warfare and terrorism. As mentioned above, biological weapons could be used non-conventionally
in warfare, either as a stealth-strike in a time of great military need, a last-ditch weapon to
wipe out invaders, or as a scorched earth tactic.
There are two prominent historical examples where either a stealth strike or
aggressive scorched earth tactics were considered. The first is Hitler's so-called Nero
Decree, named after the Roman emperor Nero who allegedly engineered the Great Fire of
Rome in 64 AD. Hitler ordered that “anything […] of value be destroyed” in German
territory, to prevent its use by the encroaching Allies 60. The relevant section of the decree is
as follows:
It is a mistake to think that transport and communication facilities, industrial
establishments and supply depots, which have not been destroyed, or have only
been temporarily put out of action, can be used again for our own ends when the
lost territory has been recovered. The enemy will leave us nothing but scorched
earth when he withdraws, without paying the slightest regard to the population. I
therefore order:
1) All military transport and communication facilities, industrial establishments and
supply depots, as well as anything else of value within Reich territory, which could in
any way be used by the enemy immediately or within the foreseeable future for the
prosecution of the war, will be destroyed.
The Nazi Minister of Armaments and War Production, Albert Speer, who had been
assigned responsibility for carrying out the decree, secretly ignored it, and Hitler committed
suicide 41 days later, making it impossible to ensure his orders were carried out. A
proclamation by Nazi party leaders in Düsseldorf in the closing weeks of the war, to be posted throughout the city, called for the evacuation of all inhabitants and said, "Let the enemy march into a burned out, deserted city!"61 Around the same time, many sixteen-year-olds
and old men were also recruited in a “great levy” as a last-ditch effort to fight the Allies.


Returning to the Nero Decree: the orders for destruction were sweeping in scope. The
details are recalled in Speer's memoirs62:
As if to dramatize what lay in store for Germany after Hitler's command, immediately
after this discussion a teletype message came from the Chief of Transportation.
Dated March 29, 1945, it read: “Aim is creation of a transportation wasteland in
abandoned territory... Shortage of explosives demands resourceful utilization of all
possibilities for producing lasting destruction.” Included in the list of facilities slated
for destruction were, once again, all types of bridges, tracks, roundhouses, all
technical installations in the freight depots, workshop equipment, and sluices and
locks in our canals. Along with this, simultaneously all locomotives, passenger cars,
freight cars, cargo vessels, and barges were to be completely destroyed and the
canals and rivers blocked by sinking ships in them. Every type of ammunition was to
be employed for this task. If such explosives were not available, fires were to be set
and important parts smashed. Only the technician can grasp the extent of the
calamity that execution of this order would have brought upon Germany. The
instructions were also prime evidence of how a general order of Hitler's was
translated into terrifying thorough terms.
Speer called the order for the destruction of "all resources within the Reich" a "death sentence for the German people":
The consequences would have been inconceivable: For an indefinite period
there would have been no electricity, no gas, no pure water, no coal, no
transportation. All railroad facilities, canals, locks, docks, ships, and locomotives
destroyed. Even where industry had not been demolished, it could not have
produced anything for lack of electricity, gas, and water. No storage facilities, no
telephone communications—in short, a country thrown back into the Middle Ages.
This shows the degree to which an authoritarian leader may choose to ruin his own country to deny its resources to a victorious enemy, even at great detriment to the future of the homeland. Had these orders been carried out, they would have caused many millions of unnecessary deaths. To put it bluntly, Hitler was willing to sacrifice millions of German lives to express his contempt for the enemy.

Now, imagine Hitler had access to a Doomsday virus which could kill off millions of the invading Allied soldiers, but only at the cost of millions of native German lives. Would he have used it? Given the reluctance of his subordinates to carry out the scorched-earth policy, it might not have worked; but what if he were a dictator in the future, around the year 2040, with access to an army of drones which could disperse the pathogen at a single push of a button, no cooperation from subordinates required? In that case, we can imagine that it is well within possibility. Perhaps Hitler would rather have seen the destruction of Europe than suffer the failure of his plan to dominate it. Are there not dictators in the world today who might do the same thing? Will there not be such dictators in the future? It certainly seems there will be. With an arsenal of biological weapons and dispersal
drones at their disposal, combined with a fevered and angry state of mind, such an
outcome is certainly imaginable. From the initial hot zone, the disease could spread
worldwide.
The second example of the potential use of non-conventional weapons in warfare on
a mass scale was the case of Churchill wanting to use poison gas on German cities in
1944 to counter the continuous rocket attacks hitting British cities. He issued a
memorandum telling his military chiefs to "think very seriously over this question of using poison gas", that "it is absurd to consider morality on this topic when everybody used it in the last war without a word of complaint", and that63:
I should be prepared to do anything [Churchill's emphasis] that would hit the enemy
in a murderous place. I may certainly have to ask you to support me in using poison
gas. We could drench the cities of the Ruhr and many other cities in Germany... We
could stop all work at the flying bombs starting points.... and if we do it, let us do it
one hundred per cent.
—Winston Churchill, 'Most Secret' Prime Minister's Personal Minute to the Chiefs of
Staff, 6 July 1944

There have also been claims that Churchill advocated the use of anthrax, but these
have since been shown to be false. The key point is that Churchill was willing to do anything to damage the enemy, meaning that if he had had sufficient anthrax, it seems likely he would have advocated its use. Such a mass distribution of anthrax would have contaminated wide areas of Germany for decades to come, as cleanup would have had a prohibitive cost unless the agent were used only in very limited areas. If anthrax had been used, many cities would have had to
have been completely abandoned and fenced off for generations. Anthrax has the
advantage that it is not contagious in the way that flu is; you can only catch it by directly
inhaling spores from cadavers. Therefore, a long-distance bombing would minimize the risk
of collateral damage, and increase the incentive for its use.
Many view WWII as the most intense war in history, which it certainly was, but they
also give it an aura of mystery and unrepeatability as if it were a unique event. That is not
necessarily the case. We are still at risk of a World War today. The Mutually Assured
Destruction (MAD) doctrine is false—it certainly would be possible to disable a great many
nuclear silos with submarine-launched ballistic missiles, and thereby greatly ameliorate the
damage of a counterstrike. Leaning on MAD as a guardian angel to protect from the threat
of another World War is not very prudent. In a total war, it is plausible that all ethics would fly out the window, as they very nearly did in World War II. When the big guns come out, they
could very well include next-generation, genetically engineered biological weapons
alongside nuclear missiles and conventional arms. The military scenarios outlined above
make that imaginable.
The third major category of plausible usage of a Doomsday virus is, of course,
through a terrorist incident. The key threat is fundamentalist Islam, and Hillary Clinton was
previously cited in this chapter as noting that Al Qaeda was specifically looking for
specialists in the area. Fundamentalist Muslims might be quite happy with wiping out
billions of people in North America, South America, and Eurasia as long as, in their eyes, it
gave Islam a better foothold in the world. In the not-too-distant future, it may be possible to
design viruses that selectively target specific races, which would help enable this.
Ethnic bioweapons are a well-established and well-recognized area of risk 64. In
ancient times, the rulers of Nepal maintained the malaria-infested Terai forest as a natural defense against invaders from the Ganges plain to the south, as the natives of the forest had innate
resistance against malaria while the invaders did not. Certain populations living around
specific microbes for a long period of time evolve defenses against them which other
groups lack. Microbes originating in the crowded cities of Europe killed millions of natives in
the Americas when the European explorers first arrived, far more than could be killed with
guns or swords alone. The native population simply lacked immune defenses against these germs.
In 1997, US Secretary of Defense William Cohen referred to the concept of
ethnic-specific bioweapons, saying 65, “Alvin Toffler has written about this in terms of some
scientists in their laboratories trying to devise certain types of pathogens that would be
ethnic specific so that they could just eliminate certain ethnic groups and races". A 1998 interview with a UK defense intelligence official, Dr. Christopher Davis, refers to ethnic
bioweapons66. Davis said, “...we also have the possibility of targeting specific ethnic groups
of specific genetic subtypes, if you like, of the population, that you can indiscriminately, in a
way, spray something, but it only kills the certain people that this material is designed to
find and attack.” In 2005, an official announcement of the International Committee of the
Red Cross said67, “The potential to target a particular ethnic group with a biological agent is
probably not far off. These scenarios are not the product of the ICRC's imagination but
have either occurred or been identified by countless independent and governmental
experts.”
Even more highly specific bioweapons than ethnic bioweapons may be
possible. In 2012, an article in The Atlantic covered various advances in synthetic biology
and concluded that a weapon that, for instance, only caused a mild flu for the general
population but was fatal to the President of the United States could be possible in the
future (timeframe unspecified)68. None of the authors of the article were biologists, so the
article should be taken with a grain of salt, but there are enough references to the
possibility of gene-specific bioweapons from more solid sources, such as by the Red Cross
and British Medical Association, that the prospect is worth taking seriously. In our literature
search, we could not find an in-depth description of the precise possible mechanisms or a
biochemical analysis of such a hypothetical pathogen. Microbes that affect one group more
than others are common in nature, but a pathogen that exclusively infects one group and
infects close to zero of another group is very rare, mostly because worldwide travel has
spread pathogens universally and produced immune resistance in many to all of them.
Consider if there were a massive global war and various ethnicities unleashed
bioweapons which were all targeted to kill one another, ultimately resulting in omnicide. Or perhaps a virus designed to attack one ethnicity might be built in such a way that it could mutate to affect other ethnicities, perhaps even infecting the ethnic group of the bioweapon's creators. If biological minilaboratories with cheap gene
synthesis capability become universal and lack sufficient safeguards or are hackable, this
is a real possibility. Such an attack backfiring should be taken into account.
Aside from interethnic or international warfare and religious terrorism, there is also
the possibility of even more abstract and sinister ideological threats, such as Doomsday
cults who desire the destruction of the entire population. Heaven's Gate, the cult that committed mass suicide in California in 1997, anticipated the "cleansing of the Earth" by aliens who had ascended to "a level higher than human." Perhaps, if they could have deliberately carried out such a cleansing themselves, they would have done it. There are
many examples of such cultists who get themselves into a state of mind where the body is
just a “vessel,” so engaging in mass killing would not be construed as actually doing wrong,
but rather freeing their fellow man from their “vessel” and allowing them to “ascend to the
higher level” in a “cosmic body”. People who are not cultists (most of us, presumably) flinch
away from considering such worldviews, but they do exist among many people, and do
threaten us whether we acknowledge their existence or not.
This chapter has been a lengthy one, and we've covered many of the basic threats
from biological weapons, and the motivations which might give rise to their use. Overall,
this analysis just scratches the surface, and a more detailed, up-to-date book-length
analysis is required. A very in-depth analysis will be needed to inform scientists and
policymakers of the true extent of the dangers and allow them to chart a course that
minimizes aggregate risk.

The map of biorisks

The map of global catastrophic risks connected with biological weapons and
genetic engineering
TL;DR: Biorisks could result in extinction via a multipandemic in the near future, and their risk is of the same order of magnitude as the risk of UFAI. Many biorisks exist; they are cheap and could materialize soon.

It may be surprising that the number of published studies on the risks of a global biological catastrophe is much smaller than the number of papers on the risks of self-improving AI. (One exception here is the "Strategic Terrorism" research paper by Nathan Myhrvold, former chief technology officer of Microsoft.)

This cannot be explained by biorisks having a smaller probability; that will not be known until Bostrom writes the book "Supervirus". I mean, we will not know it until a lot of research has been done.
Also, biorisks are closer in time than AI risks, and because of that they overshadow AI risks: they lower the probability that extinction will happen by means of UFAI, because it could happen first by means of bioweapons (e.g. if the UFAI risk is 0.9, but the chance that we die from bioweapons before its creation is 0.8, then the actual AI risk is 0.18). So studying biorisks may be more urgent than studying AI risks.
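The discounting logic can be made explicit; the probabilities below are the illustrative figures from the example above, not estimates.

    # A later risk must be discounted by the probability of surviving
    # until it arrives.
    p_ufai = 0.9       # conditional probability that UFAI kills us, if we reach it
    p_bio_first = 0.8  # probability that bioweapons kill us before UFAI exists

    effective_ufai_risk = p_ufai * (1 - p_bio_first)
    print(effective_ufai_risk)  # 0.18, matching the figure in the text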
There is no technical barrier to creating a new flu virus that could kill a large part of the human population. And the idea of a multipandemic (the possibility of releasing 100 different agents simultaneously) tells us that biorisk could have arbitrarily high global lethality. Most of the bad things on this map could be created in the next 5-10 years, and no improbable insights are needed. Biorisks are also very cheap to produce, and a small civic or personal biolab could be used to create them.
Maybe research estimating the probability of human extinction from biorisks has been done secretly? I am sure that a lot of analysis of biorisks exists in secret. But this means that it does not exist in public, and scientists from other domains of knowledge cannot independently verify it or incorporate it into a broader picture of risks. Secrecy here may be useful where it concerns concrete facts about how to create a dangerous virus. (I was surprised by the effectiveness with which the Ebola epidemic was stopped after the decision to do so was made, so maybe I should not underestimate government knowledge on the topic.)
I had concerns about whether I should publish this map. I am not a biologist, and the chances that I have found really dangerous information are small. But what if I inspire bioterrorists to create bioweapons? In any case, we already have a lot of movies offering such inspiration.
So I self-censored one idea that may be too dangerous to publish and put a black box in its place. I have also included a section on prevention methods in the lower part of the map. All the ideas in the map may be found in Wikipedia or other open sources.
The goal of this map is to show the importance of risks from the new kinds of biological weapons which could be created if all the recent advances in bioscience were used for ill. The map shows what we should be afraid of and try to control. It is thus a map of the possible future development of the field of biorisks.
Not every biocatastrophe will result in extinction; extinction lies in the fat tail of the distribution. But smaller catastrophes may delay other good things and widen our window of vulnerability. If protective measures are developed at the same speed as the risks, we are mostly safe. If the overall morality of bioscientists is high, we are most likely safe too: no one will carry out dangerous experiments.
Timeline: biorisks are growing at least exponentially, at the speed of the Moore's law of biology. After AI is created and used for global governance and control, biorisks will probably end. This means that the last years before AI's creation will be the most dangerous from the point of view of biorisks.
The first part of the map presents biological organisms that could be genetically edited for global lethality; each box presents one scenario of a global catastrophe. While many boxes are similar to existing bioweapons, they are not the same, as few known bioweapons could cause a large-scale pandemic (smallpox and flu are exceptions). The most probable biorisks are outlined in red on the map. And the real one will probably not be on the map at all, as the world of biology is very large and I cannot cover it all.
The map is provided with links, which are clickable in the pdf: http://immortalityroadmap.com/biorisk.pdf

References
1. Ray Kurzweil and Bill Joy. “Recipe for Destruction”. The New York Times, October
17, 2005.
2. Linster M, van Boheemen S, de Graaf M, et al. “Identification, Characterization and
Natural Selection of Mutations Driving Airborne Transmission of H5N1 Virus”. Cell.
2014.
3. Convention on the Prohibition of the Development, Production and Stockpiling of
Bacteriological (Biological) and Toxin Weapons and on Their Destruction. Signed at
London, Moscow and Washington on 10 April 1972. Entered into force on 26 March
1975.
4. Alibek, K. and S. Handelman. Biohazard: The Chilling True Story of the Largest
Covert Biological Weapons Program in the World - Told from Inside by the Man Who
Ran it. 1999.
5. Huxsoll DL, Parrott CD, Patrick WC III. "Medicine in defense against biological
warfare". JAMA. 1989;265:677–679.
6. Takafuji ET, Russell PK. "Military immunizations: Past, present and future
prospects". Infect Dis Clin North Am. 1990;4:143–157.
7. “Chemical and Biological Weapons: Possession and Programs Past and Present”,
James Martin Center for Nonproliferation Studies, Middlebury College, April 9, 2002.
8. “Introduction to Biological Weapons”, Federation of American Scientists, official site.


9. “Anthrax Q&A: Signs and Symptoms”. Emergency Preparedness and Response.
Centers for Disease Control and Prevention. 2003.
10. Scott Shane. “Cleanup of anthrax will cost hundreds of millions of dollars”.
December 18, 2012. Baltimore Sun.
11. Jeanne Guillemin. Anthrax: The Investigation of a Deadly Outbreak. 1999. University
of California Press.
12. Pearson, Dr. Graham S. (October 1990) “Gruinard Island Returns to Civil Use” The
ASA Newsletter. Applied Science and Analysis. Inc. Retrieved 12 January 2008.
13. John S. Foster et al. Report of the Commission to Assess the Threat to the United
States from Electromagnetic Pulse (EMP) Attack. 2004. Congressional report.
14. “EMP Could Leave '9 Out of 10 Americans Dead'” WND.com. May 3, 2010.
15. Landing page, SyntheticBiology.org.
16. Craig Venter et al. “Creation of a Bacterial Cell Controlled by a Chemically
Synthesized Genome”. Science 329 (5987): 52–56.
17. JCVI press release. “IBEA Researchers Make Significant Advance in Methodology
Toward Goal of a Synthetic Genome”. November 3, 2010.
18. Andrew Pollack. “Scientists Create a Live Polio Virus”. The New York Times. July
12, 2002.
19. Sarah Post. “For the first time, scientists create a virus using only its genome
sequence”. GenomeWeb. July 23, 2012.
20. “How Scientists Made 'Artificial Life'”. BBC. May 20, 2010.
21. “BWC calls on more awareness about Biosecurity”. Kuwait News Agency. October
11, 2011.
22. Linster op cit.
23. Denise Grady and Donald G. McNeil Jr. “Debate Persists on Deadly Flu Made
Airborne”. The New York Times. December 26, 2011.
24. Nell Greenfieldboyce. “Scientists Publish Recipe For Making Bird Flu More
Contagious”. NPR. April 10, 2014.
25. Thomas Inglesby, Anita Cicero and D.A. Henderson. “Benefits of H5N1 Research Do Not Outweigh the Risks”. Milwaukee-Wisconsin Journal Sentinel.
26. “WHO concerned that new H5N1 influenza research could undermine the 2011
Pandemic Influenza Preparedness Framework”. World Health Organization
statement. December 30, 2011.

27. Donald G. McNeil Jr. “Bird Flu Paper is Published After Debate”. The New York
Times. June 21, 2012.
28. Frank Jordans, Associated Press. “Clinton warns of bioweapon threat from gene
tech”. Physorg. December 7, 2011.
29. Deborah MacKenzie. “Five easy mutations to make bird flu a lethal pandemic”. New
Scientist. September 26, 2011.
30. Philip Bethge and Johann Grolle. “Interview with George Church: Can Neanderthals
Be Brought Back from the Dead?” Spiegel Online International. January 18, 2013.
31. David Brin quoting Chris Phoenix. “The Old and New Versions of Culture War”.
Contrary Brin. May 8, 2009. Originally from Center for Responsible Nanotechnology
blog.
32. George Church. “Safeguarding Biology”. Seed magazine. February 2, 2009.
33. Church op. cit.
34. Michael R. Edelstein, Astrid Cerny, Abror Gadaev, eds. Disaster by Design: the Aral
Sea and its Lessons for Sustainability. 2012. Emerald.
35. Ken Alibek and Stephen Handelman. Biohazard: The Chilling True Story of the
Largest Covert Biological Weapons Program in the World--Told from Inside by the
Man Who Ran It. 2000. Delta.
36. Michael Dobbs. “Program to Halt Bioweapons Work Assailed”. Washington Post.
September 12, 2002.
37. Jones JL, Kruszon-Moran D, Sanders-Lewis K, Wilson M (2007). “Toxoplasma
gondii infection in the United States, 1999–2004, decline from the prior decade”. Am
J Trop Med Hyg 77 (3): 405–10.
38. Montoya J, Liesenfeld O (2004). “Toxoplasmosis”. Lancet 363 (9425): 1965–76.
39. Kathleen McAuliffe. “How Your Cat is Making You Crazy”. The Atlantic. February 6,
2012.
40. McAuliffe op cit.
41. Torrey, EF; Bartko, JJ; Yolken, RH (May 2012). “Toxoplasma gondii and other risk
factors for schizophrenia: an update”. Schizophrenia bulletin 38 (3): 642–7.
42. Zhang, Yuanfen; Träskman-Bendz, Lil; Janelidze, Shorena; Langenberg, Patricia; Saleh, Ahmed; Constantine, Niel; Okusaga, Olaoluwa; Bay-Richter, Cecilie; Brundin, Lena; Postolache, Teodor T. (Aug 2012). “Toxoplasma gondii immunoglobulin G antibodies and nonfatal suicidal self-directed violence”. J Clin Psychiatry 73 (8): 1069–76.

43. Pedersen, Marianne Giørtz; Mortensen, Preben Bo; Norgaard-Pedersen, Bent; Postolache, Teodor T. (2012). “Toxoplasma gondii Infection and Self-directed Violence in Mothers”. Archives of General Psychiatry 69 (11): 1.
44. Ling, VJ; Lester, D; Mortensen, PB; Langenberg, PW; Postolache, TT (2011). “Toxoplasma gondii Seropositivity and Suicide Rates in Women”. The Journal of Nervous and Mental Disease 199 (7): 440–444.
45. Elmore, SA; Jones, JL; Conrad, PA; Patton, S; Lindsay, DS; Dubey, JP (April 2010).
"Toxoplasma gondii: epidemiology, feline clinical aspects, and prevention". Trends in
parasitology 26 (4): 190–6.
46. McAuliffe op cit.
47. J. K. Taubenberger and D. M. Morens. “1918 Influenza: the Mother of All
Pandemics”. Cdc.gov.
48. Reiber C, Shattuck EC, Fiore S, Alperin P, Davis V, Moore J. “Change in human
social behavior in response to a common vaccine.” Annals of Epidemiology. 2010
Oct;20(10):729-33.
49. Debora MacKenzie. “New botox super-toxin has its details censored”. New Scientist.
October 14, 2013.
50. J. Storrs Hall. Nanofuture: What's Next for Nanotechnology. 2005. Prometheus
Books.
51. Robert L. Garcea and Michael J. Imperiale. “Simian Virus 40 Infection of Humans”.
Journal of Virology. May 2003 vol. 77 no. 9 5039-5045.
52. Rui-Mei Li, Mary H. Branton, Somsak Tanawattanacharoen, Ronald A. Falk, J.
Charles Jennette and Jeffrey B. Kopp. “Molecular Identification of SV40 Infection in
Human Subjects and Possible Association with Kidney Disease”. Journal of the
American Society of Nephrology, September 1, 2002 vol. 13 no.9, 2320-2330.
53. Garcea op. cit.
54. NIH/National Cancer Institute. “Studies Find No Evidence That Simian Virus 40 Is
Related To Human Cancer.” ScienceDaily. August 25, 2004.

55. Knowles WA, Pipkin P, Andrews N, Vyse A, Minor P, Brown DWG, Miller E. “Population-based study of antibody to the human polyomaviruses BKV and JCV and the simian polyomavirus SV40.” Journal of Medical Virology 2003; 71: 115–23.
56. Knowles op. cit.


57. T. Kitamura, T. Kunitake, J. Guo, T. Tominaga, K. Kawabe and Y. Yogo.
“Transmission of the human polyomavirus JC virus occurs both within the family and
outside the family.” Journal of Clinical Microbiology. 1994, 32(10):2359.
58. Boldorini R., Allegrini S., Miglio U., Paganotti A., Cocca N., Zaffaroni M., Riboni F., Monga G., Viscidi R. “Serological evidence of vertical transmission of JC and BK polyomaviruses in humans.” Journal of General Virology, 2011 May;92(Pt 5):1044–50.
59. Martin Rees. “By 2020, bioterror or bioerror will lead to one million casualties in a
single event.” LongBets.org.
60. Albert Speer. Inside the Third Reich [Translated by Richard and Clara Winston].
1970. New York and Toronto: Macmillan
61. Speer op. cit.
62. Speer op. cit.
63. Paxman, Jeremy; Harris, Robert (2002-08-06) [1982]. "The War That Never Was". A
higher form of killing: the secret history of chemical and biological warfare. p. 128.
64. Malcolm Dando. Biotechnology, Weapons and Humanity II. 2004. British Medical
Association report. BMA Professional Division Publications.
65. William Cohen. “Terrorism, Weapons of Mass Destruction, and U.S. Strategy”. 1997.
Sam Nunn Policy Forum, University of Georgia.
66. Interview of Dr Christopher Davis, UK Defence Intelligence Staff, Plague War, Frontline,
PBS. October 1998.
67. Preventing the use of biological and chemical weapons: 80 years on, Official
Statement by Jacques Forster, vice-president of the ICRC, 10-06-2005.
68. Andrew Hessel, Marc Goodman, Steven Kotler. “Hacking the President’s DNA”. The Atlantic, 2012.

Chapter 13. Superdrug

Biotechnology and cognitive science could eventually lead to the production of
superdrugs, or “soma”. Consider a drug that has a 100% addiction rate and makes its addicts completely dependent on their supplier. They might quit their jobs and go to work in
fields or factories just to make more of the drug, a scenario portrayed in Philip K. Dick's
novel A Scanner Darkly (1977). If the drug did not incapacitate people, and they retained
all their vigor and cognitive capabilities (or even enhanced them), entire societies and
nations could be based around addiction to it. Such societies could even launch wars on
rival nations and force them into addiction until the entire world is consumed by the drug. If
the drug had long-term lethal or anti-reproductive effects, it could lead to the end of
civilization or humanity itself.
Instead of a literal drug, the drug might also be a computer game or wearable computer. It could even be a pill filled with microbots which are able to cross the blood-brain barrier (perhaps by creating a small perforation) and create a direct brain-to-computer interface. This was the drug “Accela” portrayed in the Japanese animation Serial Experiments Lain (1998). In the story, the drug acted to multiply the brain's operational
capacity by 2 to 12 times, speeding it up and making the outside world seem slower. A real
version of such a pill would likely not be developed until after 2045 and would have more
limited effects, such as giving a mind direct access to search engines and other secondary
processing tools. It all depends on when the technology of molecular manufacturing would
be developed (see chapter 7 for more details).
We can postulate that any drug will not affect the entire population of the planet, as
there will always be people who will refuse it. On the other hand, we can imagine several
superdrugs emerging at once, all of which have the general effect of switching people off
from social life. Relatedly, the popularity of virtual reality and massively multiplayer online
RPGs (MMOs) could reach a point where fertility rates drop to 1 child per couple or lower
and there are devastating consequences to civilization. Fertility rates in tech-heavy places
like Japan and South Korea have already dropped to 1.39 and 1.24 respectively 1. The
fertility rate a population needs to replace itself is about 2.1. So, these populations are
shrinking and aging. By 2060, more than 40 percent of the Japanese population will be
over age 652.
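To see how quickly sub-replacement fertility shrinks a population, a rough generational calculation helps. The sketch below is a simplification: it ignores migration and changes in mortality, and treats a generation as a single step.

    # Each generation is roughly TFR / 2.1 times the size of the previous one
    # (2.1 births per woman is the approximate replacement rate).
    # Ignores migration and mortality changes; illustration only.
    def cohort_after(generations: int, tfr: float, replacement: float = 2.1) -> float:
        return (tfr / replacement) ** generations

    # With Japan's TFR of 1.39, each generation (~30 years) shrinks to ~66%:
    print(cohort_after(1, 1.39))  # ~0.66
    print(cohort_after(3, 1.39))  # ~0.29 of the original cohort after ~90 years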
A super-strong drug could be similar to an infectious illness if it causes people to produce more of the drug and introduce others to it. Here is a list of possible superdrug varieties:

- A superdrug that has a direct influence on the pleasure centers of the brain. This may
be through chemical means, like ecstasy or THC, or otherwise. There is research on
stimulating the brain's pleasure centers by means of a rotating magnetic field (the
Persinger helmet, the Shakti helmet), transcranial magnetic stimulation, audiostimulation
(binaural beats), photostimulation, and biological feedback through devices that
can read EEGs (encephalograms), such as thought helmets for computer games.
- The future appearance of microrobots capable of direct stimulation of the brain,
for instance of the pleasure centers or working memory centers 3.
- Bio-engineering will allow the creation of genetically modified plants which produce
a range of psychoactive chemicals but appear to be simple house plants. The
knowledge to produce these plants would be widely distributed over the Internet, and
it could be possible to produce seeds for them with the mini bio-laboratories
discussed in the previous chapter.
- Better knowledge of biology, neurology, and neurochemistry will allow the design of
much stronger psychoactive substances with precisely specified properties and
fewer side effects, making them more attractive.
- Specially programmed microbes could be genetically modified to cross the blood-brain
barrier, perhaps through a temporarily induced hole, and integrate with the
brain to stimulate the pleasure centers or produce some other neuropsychological
effect. This might be technologically easier than creating microbots to perform the
same function.
- Virtual reality is progressing and could lead to helmets which are extremely
addictive. These could risk disconnecting millions or even billions of people from
reality in unhealthy ways. Arguably this is already on the verge of happening.
It is clear that various combinations of the listed items are possible, which would
only strengthen their effects.
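As promised above, here is a minimal contagion-style sketch of how a superdrug whose users recruit new users could spread like an epidemic. This is our illustration only; the population size, recruitment rate, and monthly time step are arbitrary assumptions.

    # Discrete-time contagion model of drug adoption: each user introduces the
    # drug to `recruitment_rate` susceptible people per time step. As the pool
    # of susceptible people shrinks, growth slows, as in an epidemic.
    def spread(population: int, initial_users: int,
               recruitment_rate: float, steps: int) -> int:
        users = initial_users
        for _ in range(steps):
            susceptible = population - users
            new_users = users * recruitment_rate * (susceptible / population)
            users = min(population, users + int(new_users))
        return users

    # 1,000 initial users, each recruiting ~0.5 people per month, over 5 years:
    print(spread(population=7_000_000_000, initial_users=1_000,
                 recruitment_rate=0.5, steps=60))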

Design features
There are several major technical challenges to creating a drug which could pose a
threat to humanity as a whole. First, the drug would have to not incapacitate its users too
much, at least initially, otherwise a great many people would be extremely resistant to
taking it, and it would not spread. Recreational drug use, outside of a few extremely
common drugs like cannabis and caffeine, is a niche culture. For instance, only about 0.6
percent of Americans use cocaine 4. Cannabis and caffeine are common, however—4
percent of the world's adult population uses cannabis annually 5, and in North America, 90
percent of adults consume caffeine daily 6. The point is that a drug would need
extraordinarily appealing properties, or would need to be administered by force, to reel in
a large percentage of the population. It would also need to be cheap and easy to produce.
First, we ought to consider a drug based on conventional neurochemistry (rather than
complex functional microbots or the like) which could pose a threat to humanity. Such a
drug could be extremely simple as long as it were satisfying enough to prevent humans
from reproducing. Arguably, drugs like alcohol and cannabis are already used in this way
by some people today: those who live their lives partying instead of reproducing. Since
1960, the fertility rate (births per woman) in the United States has dropped from 3.65 to
1.85, below replacement 7. Obviously, drugs and alcohol are not exclusively to blame, if at
all; broader “social forces” are. Eventually, the problem will work itself out, as populations
which don't reproduce die out and are replaced by those which do. Or perhaps there
will be some more science fictional long-term solution, like artificial wombs. Artificial wombs
and robotic maids could ameliorate some of the reproduction-dampening effects of any
major drug addiction epidemics of the future, but such techno-fixes will not be a panacea
unless driven by advanced artificial intelligence, since otherwise there will always need to
be humans to maintain and direct machines to some degree.
Consider how typical pleasure center stimulating drugs, such as cocaine, peter out
before making the entire population addicted to them. Most people simply prefer the
pleasure they get from the real world, or they associate drug use with failure at life. What if
there were a drug that stimulated the pleasure center more powerfully than any drug
currently available? When considering this possibility, our minds jump right away to the use
of microbots, or some other direct technological neural stimulation, rather than through the
indirect means of psychochemistry. No currently known drug can stimulate the pleasure
center to the extreme degree we are familiar with from science fiction stories and rat
experiments.

A classic set of experiments, the morphine rat experiments, involved rats in small
cages, deprived of sensory stimulation and socialization, with access to a lever that
administered morphine through an intravenous delivery tube 8. The experiments found that
the rats would self-administer the drug until they died of thirst. The result was an artifact of
the sparse environment, however—a subsequent experiment found that when the rats
were given a cage 200 times larger, and opportunities to socialize, they only used
morphine in moderation9. There have also been experiments involving rats with the ability
to directly stimulate their own pleasure centers through an electrode 10. Like the morphine
experiment, they self-administered the stimulation until they died of thirst. No experiment
has yet been conducted that gives rats the ability to directly self-stimulate their pleasure
centers but also have a cage large enough to move around in and socialize with other rats.
We speculate that rats given such a larger cage would actually not self-stimulate to the
point of dying of thirst, but the experiment would need to be run before we can know for
sure.
Because of the results of the second rat morphine experiment, we are skeptical that a
drug or means of directly stimulating the pleasure center would necessarily suck in all of
humanity and make us all addicted. It would need to be combined with other features to be
sufficiently interesting to be a threat. For instance, a brain-computer interface connected to
an elaborate virtual world. People might be able to earn money in the virtual world to order
food, so they would never need to leave it. They might never exercise and eventually die of
obesity. Of course, this would not be a threat to the portion of the world without electricity,
but eventually the entire world might be electrified.
A sufficiently advanced brain-computer interface could even simulate the feeling of
being outside, and trigger the appropriate brain centers to release the endorphins and
hormones which would be gained from exercise, to give someone the sensation they were
outdoors and exercising. It is not known if this feeling could be perfectly simulated by a
brain-computer interface, but it certainly is a possibility to consider. The virtual world might
ultimately be able to simulate every single possible sensation available in the real world—
from sex to hiking to other sports and so on 11. This might not even be a bad thing, if
humanity can manage to keep one foot in the real world and maintain basic functions such
as reproduction, physical construction, and socialization. The real danger is if such
“addicts” really do not contribute to society and come to be regarded as deadweight, to be
discarded at the earliest opportunity, and if an increasing number of people fall into this
category.
Some futurists argue that virtual reality will become our preferred reality. That is, we
will spend all our time in virtual realities. Others argue that the structure of human brains
and minds will actually be uploaded into computers, a la the movie Transcendence. A
similar possibility would be people controlling proxy bodies from inside a computer, as in
Avatar, except that the controller would be a conscious computer program rather than a
biological body.
Some may argue that this is impossible, but we mention the possibility anyway for
completeness. For those interested in the debate, search for the phrase “Church-Turing
thesis”.
Bearing all this in mind, there are two questions we have to ask:

1. Is it possible that a psychopharmacological drug could be designed which would bring
humanity to its knees?
2. Given that the right brain-computer interface scenario would be particularly likely to
bring humanity to its knees, what sorts of scenarios would qualify?
Note that we assume from the outset that some kind of brain-computer interface
would indeed cripple humanity. This is an important point. The space of possible
brain-computer interface programs is so large, so expansive, that it seems plausible to
assume that somewhere in it lurks a program or device which would engross us entirely,
beyond all boundaries of reason. Of course, for this to be a true threat, the cost of the
interface would need to fall below something like 20 dollars, or it would need to be spread
coercively, and it would need to catch on at a point in the world's development when most
people have access to electricity. Only about 80 percent of the world is currently
electrified, meaning over 1.3 billion people are without electricity, about half of them in
Africa and the other half in South Asia. These people would be outside the reach of any
coercive brain-computer interface for now, but probably not by 2030.
Speculating loosely, here are the design features we postulate a drug would need in
order to pose a risk to humanity:

- Extremely engaging virtual reality, either psychedelic, brain-computer interface
facilitated, or both.
- Pleasure center stimulation: the ability to confer the sensation of psychological pleasure
directly.
- Preferable to real reality in every way, and capable of simulating many of the most
appealing features of everyday reality, including the outdoors.
If all these conditions are met, we imagine that the drug or brain-computer interface
could be a threat to the continuation of human civilization.
There is an interesting incentive to develop such a system—the foreseen level of
structural unemployment caused by automation and robotics. As of the time of this writing
(June 2014), the adult labor force participation rate in the United States is under 63
percent, meaning 37 percent of adults do not work 12. Only 12.7 percent of the adult
population are seniors, so a large share of non-working adults are neither retired nor in
education or training of any kind. They are perfect targets for games and drugs, thirsty for
any sensation or meaning. Some economists argue that this low level of employment is
“the new normal,” and that reemployment is not particularly likely. Just as horses became
economically obsolete with the invention of the car, unspecialized workers are becoming
economically obsolete with the spread of outsourcing and automation.
There is a name for 20-to-40-somethings who stay inside all day, have no romantic
partners, play video games, and surf the Internet. They are called “hikikomori,” from the
Japanese for “pulling inward, being confined”. According to the government statistics
of Japan, about 700,000 people live as hikikomori in that country, with an average age of
31, but the numbers are almost certainly higher 13. 700,000 is about half a percent of the
population of Japan, but the real numbers likely exceed 1 percent. Furthermore, a great
number of Japanese men and women say they are uninterested in dating and sex—45
percent of Japanese women between the ages of 16 and 24 claim they are “uninterested
in or despise sexual contact” 14. A survey in 2011 found that 61 percent of unmarried men
and 49 percent of unmarried women aged 18-34 were not in any kind of romantic
relationship. This “celibacy syndrome” is concerning, and is likely related to technology.
From a peak of 126 million, the population of Japan is expected to fall to 42 million by
2100, the same as its 1900 level 15. That would be the elimination of two-thirds of the
population, merely through a lack of desire to reproduce.
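The implied pace of that projected decline is worth spelling out; the following is our arithmetic on the figures just quoted.

    # Japan's projected fall from a 126 million peak to 42 million by 2100.
    peak, projected = 126e6, 42e6
    years = 2100 - 2015  # roughly 85 years from the time of writing

    total_decline = 1 - projected / peak                  # ~67%, i.e. two-thirds
    annual_rate = (projected / peak) ** (1 / years) - 1   # ~ -1.3% per year

    print(f"total decline: {total_decline:.1%}")
    print(f"implied average annual change: {annual_rate:.2%}")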
All these anecdotes aside, it is extremely difficult for us to predict when potentially
catastrophic drugs or online interfaces will be developed, or what precise characteristics
they will have. A brain-computer interface that achieves mass participation from humanity
may actually be useful and enhance our performance rather than being dangerous. It is
hard to tell in advance. The idea of a “superdrug” that threatens humanity is one which has
not been adequately explored and is in need of more research. Here we have only
provided a crude outline.

References

1. Google Public Data Explorer.
2. “Japan population to shrink by one-third by 2060”. BBC News. January 30, 2012.
3. Ray Kurzweil. The Singularity is Near. 2005. Viking.
4. National Household Survey on Drug Abuse.
5. United Nations Office on Drugs and Crime (2006). “Cannabis: Why We Should Care” (PDF). World Drug Report 1 (S.l.: United Nations). p. 14.
6. Richard Lovett. “Coffee: The demon drink?” September 24, 2005. New Scientist (2518).
7. Google Public Data Explorer.
8. Alexander, Bruce K. (2001). “The Myth of Drug-Induced Addiction”, a paper delivered to the Canadian Senate, January 2001, retrieved December 12, 2004.
9. Lauren Slater. Opening Skinner's Box: Great Psychological Experiments of the Twentieth Century. 2004. W.W. Norton & Company.
10. Michael R. Liebowitz. The Chemistry of Love. 1983. Boston: Little, Brown, & Co.
11. Kurzweil 2005.
12. Bureau of Labor Statistics.
13. Michael Hoffman. “Nonprofits in Japan help 'shut-ins' get out into the open”. The Japan Times Online. The Japan Times.
14. Abigail Haworth. “Why have young people in Japan stopped having sex?” October 19, 2013. The Guardian.
15. National Institute of Population and Social Security Research, Population Projections for Japan: 2006-2055, December 2006. The Japanese Journal of Population, Vol. 6, No. 1 (March 2008), pp. 76-114.

Chapter 14: Nanotechnology and Robotics
Imagine a tabletop machine the size of a laser printer that can create complex
products out of diamond and allotropes of carbon like buckytubes or graphite. Called a
nanofactory, this machine can produce laptops, cell phones, stoves, watches, remote-controlled
drones, gene sequencing devices, sensors, culinary implements, textiles,
houses, ductwork, cranes, tractors, furnaces, heaters, coolers, engines, guns,
greenhouses, telescopes, microscopes, vacuum cleaners, scales, hammers, coat hangers,
ice cube trays, beer mugs, worm robots, gecko suits, radio towers, and many other useful
objects and devices1. The machine uses natural gas as its feedstock and can produce a
kilogram of product for twenty dollars. It can build a copy of itself in just fifteen hours 2. The
nanofactory is based on molecular nanotechnology (MNT), a manufacturing process that
uses arrays of nanoscale robotic manipulators to build products from the atoms up 3.
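The self-replication figure is what makes the economics explosive: a machine that copies itself every fifteen hours grows exponentially. A quick sketch of that arithmetic, ours, using only the replication time quoted above:

    import math

    # A nanofactory that builds a copy of itself every 15 hours doubles the
    # installed base each cycle: N(t) = 2 ** (t / 15 hours).
    DOUBLING_HOURS = 15

    def factories_after(hours: float) -> float:
        return 2 ** (hours / DOUBLING_HOURS)

    def hours_to_reach(target: int) -> float:
        return DOUBLING_HOURS * math.log2(target)

    print(f"after one week: {factories_after(7 * 24):,.0f} factories")             # ~2,350
    print(f"one million factories in {hours_to_reach(1_000_000) / 24:.1f} days")   # ~12.5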
A tabletop nanofactory could become a reality sometime over the course of the next
century. Some writers expect nanofactories to be developed as early as the 2020s 4,5,
others not until the 2100s6. The prospect has been addressed by arms control experts 7, as
well as physicists and chemists of all stripes. Some are skeptical they could be built 8,9, but
evidence grows for their feasibility10,11. If they can be built, they would be a serious risk to
humankind, as well as a major potential boon.
A nanofactory would use quintillions of tiny nanorobotic arms, called assemblers, to
individually place carbon atoms into nanoblocks which are then combined into
progressively larger units, until a suitable diamond product is created 12. Carbon is the
element used, both because of its strength and its versatility. Since diamond is a very
strong material, nano-products could be mostly empty space, or filled with ballast like
water. A typical 1 kg (2.2 lb) product might only use 0.1 kg of diamond and cost two
dollars. The functional components of a nanofactory would be very complex, small, and
automated, by necessity. A working nanofactory would have internal complexity greater
than even the most complicated large-scale factories of today. The individual assemblers
would only be 200 nanometers across and contain just four million atoms each. A full-sized
nanofactory would contain about 100,000 quadrillion such assemblers, organized into 10
quadrillion production modules. By comparison, the human body contains about 50,000
quadrillion ribosomes, the “assemblers” of biological life. All these components would be
built by self-replication from the very first assembler. Specifically when or how that
assembler will be built is unknown. Some say it will be built in the 2030s, others, the 2060s,
while others say never. The basic requirements for an assembler are that it can individually
place carbon atoms, that it has three degrees of freedom, and that it can replicate itself.
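These counts can be checked against one another. The sketch below is our arithmetic on the chapter's own figures, giving the implied number of assemblers per production module and the total mass of carbon tied up in the assemblers.

    # Sanity check of the nanofactory numbers quoted above.
    assemblers = 1e20            # "100,000 quadrillion" assemblers
    modules = 1e16               # "10 quadrillion" production modules
    atoms_per_assembler = 4e6    # "four million atoms each"
    carbon_amu = 12.011
    kg_per_amu = 1.6605e-27

    per_module = assemblers / modules
    total_mass_kg = assemblers * atoms_per_assembler * carbon_amu * kg_per_amu

    print(f"assemblers per module: {per_module:,.0f}")       # 10,000
    print(f"total assembler mass: {total_mass_kg:.0f} kg")   # ~8 kg of carbon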
We know that nanofactories of some kind are possible because all of biological life is
based on the principle of molecular assembly. The proteins making up every organism on
earth, from pond scum to giant redwoods, are made by ribosomes. Ribosomes evolved at
least 3.5 billion years ago, maybe as early as 4.4 billion years ago, not long after the
formation of the Earth itself. Because they are the root of life, ribosomes differ little
between organisms. Ribosomes are part of the basic package that was included in the first
prokaryotic cells. There are only a few basic types.
Consider what ribosomes can accomplish. The ribosomes in bamboo pump out 4 feet
of woody growth in just 24 hours. That’s two inches per hour. The ribosomes of mayflies or
locusts produce enough organisms to cover the sky in a dark cloud for days. The
ribosomes in human beings take us from a fertilized egg to an adult over twenty years.
Millions of years ago, ribosomes built dinosaurs over a hundred feet long. Today, they build
fungal networks miles long. Ribosomes synthesize proteins that are hard like the tusks of
elephants, soft like rose petals, stretchy like the membranes of a bat, warm like the fur of a
mink, shiny like the shell of a tortoise, tough like the sinews of a cheetah, and iridescent
like the wings of a butterfly. Over billions of years, ribosomes have pumped out about 4
billion living species. All that biological complexity derives from tiny molecular machines,
every atom and molecule of which has been exhaustively mapped by scientists 13.
Skeptics of molecular nanotechnology (MNT), the postulated-but-not-yet-invented
technology behind hypothetical nanofactories, have made many points against MNT since
the concept was formalized in 1986 by MIT engineer Eric Drexler. They point out that while
ribosomes work in water and synthesize floppy molecules, assemblers of the kind in
postulated nanofactories would have to work in a dry environment with high precision,
which, they argue, certain physics issues make impossible 14. Nobel Prize-winning chemist Richard
Smalley argued that molecular assembler “fingers” would be too “fat” and too “sticky” to
manipulate individual atoms, called the “fat fingers” and “sticky fingers” arguments,
respectively. These and similar arguments have been responded to at length by Drexler
and others15,16,17,18.
The only way to confirm or disconfirm that MNT is possible is to try and build it. MNT
of the kind formalized by Drexler in his 1992 book Nanosystems does not exist yet, but
there are steps in that direction—enzymes which work out of water and nanoscale
programmable assembly lines that build simple molecular products 19,20. This chapter
assumes that molecular nanotechnology and nanofactories will be developed sometime in
the 21st century, even though we cannot say when it will happen or be certain that it will. Many experts at
least consider it likely, and we see addressing it in the context of global risk as essential.
Even if “dry” nanotechnology of the kind described by Drexler cannot be built exactly as
envisioned, compromises or hybrids (between ‘wet’ and dry systems) will serve a similar
role and have similar capabilities to the original vision. This chapter also covers
“conventional” nanotechnology of the kind that already exists, and shows how there is
something of a continuum between conventional nanotechnology (NT) and molecular
nanotechnology (MNT), even though the two are distinct. First we highlight the basic
capabilities, then examine the risks. Due to limited space, we’ll primarily focus on potential
military applications of MNT, since therein lies the greatest danger.

The nanofactory model described by Chris Phoenix, Eric Drexler, and colleagues has
several crucial features that set it apart from all other modes of manufacturing. First, it is
self-replicating. Second, it is fully automated. Third, it is fully general, meaning it can build
almost anything out of carbon that is allowed by the laws of chemistry. Fourth, it is
atomically precise, meaning every atom is put in a specific, predetermined place according
to a design. Fifth, it has a high throughput, meaning it can fabricate products several times
more rapidly than via conventional means 21. Sixth, its products have superior material
properties, being made of diamond or other carbon allotropes such as buckytubes 22.
Seventh, its products have a high energy density, so extremely powerful motors and
generators can be built in a tiny space23. The Center for Responsible Nanotechnology
(CRN) says that nanomotors and nanogenerators will convert electrical power to motion,
and vice versa, “with 10 times the efficiency and about 10^8 (100,000,000) times more
compactly.”
Consider that last point, converting electrical power to motion a hundred million times
more compactly. If possible, it means an engine as powerful as a fighter aircraft that fits in
a matchbox. Phoenix writes, “such a motor can convert more than 500,000 watts (0.5
megawatts) of power per cubic millimeter at better than 99% efficiency” 24. For comparison,
the power output of a blue whale is about 2.5 megawatts, a diesel locomotive about 3
megawatts. A nanotech engine equivalent to their output could fit in a few cubic millimeters.
You might wonder, “how could such a small machine contain so much power without
overheating?” Indeed, cooling would be an issue. If adequate cooling can be provided,
tremendous power densities could be sustained. Even if the motors are tiny, they could be
hooked up to tiny rods which are then hooked up to larger and larger rods, driving shafts or
rotary axles to channel the power. The diamond rods connected to the engine itself would
be strong enough not to break, even given the power densities produced by nanoengines.
In this fashion, engines for the largest and most powerful vehicles could be produced in
small desktop fabricators. It’s difficult to imagine what the consequences of this could be,
since barely anyone has given it any serious thought. We will return to this scenario and its
implications for global risk later in the chapter.
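To put the quoted power density in perspective, here is the arithmetic; the engine and whale figures are those given above, and the matchbox volume is our own rough assumption.

    # MNT nanoengine power density vs. familiar reference points.
    POWER_DENSITY_MW_PER_MM3 = 0.5   # Phoenix's figure: 0.5 MW per cubic millimeter

    for name, megawatts in [("blue whale", 2.5), ("diesel locomotive", 3.0)]:
        volume_mm3 = megawatts / POWER_DENSITY_MW_PER_MM3
        print(f"{name} ({megawatts} MW) -> engine volume ~{volume_mm3:.0f} mm^3")

    # A matchbox-sized block of such engines (~15 cm^3 = 15,000 mm^3):
    print(f"matchbox of engines: {15_000 * POWER_DENSITY_MW_PER_MM3 / 1_000:.1f} GW")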
Grey Goo Scenario

When people think of nanotechnology risks, the first thing that usually comes to mind
is the “grey goo” scenario—out of control self-replicating nanobots saturating the
environment, like out-of-control locusts or rabbits. In this scenario, a tiny nanomachine
self-replicates using sunlight, biomass, or other available power and matter sources until it
destroys all life on Earth25. Though this may initially sound plausible to the non-technical
futurist, there are a number of reasons why it’s rather difficult to achieve technically 26.
Specifically, it seems to be less of a risk than biological warfare or artificial intelligence. The
most dangerous forms of grey goo would be likely to be bio-technical hybrids, or tools used
by advanced, human-equivalent or superintelligent artificial intelligences, rather than dumb
self-replicators (which could be outsmarted or otherwise beaten if detected early enough).
Grey goo was highlighted in Eric Drexler’s 1986 book Engines of Creation, and was
subsequently featured in 90s and 00s science fiction, from Star Trek to Michael Crichton’s
novel Prey. In a short 2003 article, “Grey goo is a small issue,” Drexler said that he thought
the risk was overblown and that he was mistaken to emphasize it in his 1986 book 27. This
is because grey goo would be very difficult to engineer, and especially difficult to engineer
accidentally. If you wanted to use nanotechnology to build a robot army, or a nanobot army,
you would just build the robots in a factory, instead of trying to engineer an autonomous,
open environment-ready nanoscale self-replicator. For most imaginable goals, even taking
over the world, this would be both more efficient and fully sufficient.
Why? Because converting raw materials from the environment into functioning
nanobots would be extremely complex and subject to trial and error. The first nanofactories
will require highly purified inputs such as refined natural gas. Trying to put complex,
unprocessed molecules from a natural biological environment through a nanofactory would
be like throwing a wrench into finely tuned turbines. To get the right molecules, it would be
far easier to purify them in bulk using purpose-built industrial systems, rather than
collecting them with nanobots from the dirty, impure natural environment. Navigation and
coordination among nanobots would also be problems. To solve them, you’d practically
need human-equivalent Artificial Intelligence, at which point you're dealing with a
self-improving AI risk, not a nanotech risk.
Programming free-floating molecular assemblers is a robotics task far more complex
than programming assemblers “pinned down” in neat rows of a nanofactory. This is for the
same reasons that it’s easier to build purpose-built robotic arms to build cars in a factory
than to build robotic arms that somehow build cars from substances available in a forest.
Like a car, an assembler needs to be built out of highly purified, refined components in a
predictable environment where nothing can go wrong. Assemblers that replicate in the
external environment would need to be more complex and sturdy, spending a lot of space
on features like protective shells, membranes, and navigational sensors/computers.
We aren’t claiming that free-floating self-replicating assemblers are impossible or
could never exist, just that they would not be among the earlier nanotech risks to enter the
scene. The existence of viruses, bacteria, and all other organisms shows that assembler
shells (organisms) filled with assemblers (ribosomes) powered by materials in the
environment (food or hosts) can exist. Blue-green algae (cyanobacteria) have been quite
effective at self-replicating with ordinary sunlight and carbon dioxide from the atmosphere.
Cyanobacteria pull CO2 out of the atmosphere and build themselves from it using a chemical
process called the Calvin cycle. In the long run, there is no reason why there couldn’t be
“artificial cyanobacteria” that do precisely the same thing, but faster.
For many nanosystems, building materials will be extracted from the carbon cycle,
which includes carbon dioxide in the atmosphere, calcium carbonate on the ocean floor,
and biomass all around. Some nanotech pundits think that carbon dioxide will be removed
from the atmosphere in such quantities that we’ll take to burning down forests to replenish
the atmosphere and prevent global cooling28. That’s quite a variation on the usual line we
hear with regard to carbon dioxide.
Three Phases of Nanotech Risk
Now that we’ve gotten grey goo out of the way, it’s time to move on to a broad
categorization of nanotech risks in general. We see three primary phases of existential risk
connected to nanotechnology and robotics. The first phase involves nanotechnological arms
races, which may include the mass production of nuclear weapons on a scale never before
seen, followed by nuclear war and then globally catastrophic nuclear winter; this, in
combination with other factors like biological warfare, could potentially lead to the end of
the human species. The second phase is the risk of out-of-control or intentionally released
replicators, “grey goo,” which we just addressed and will return to later in the chapter. As
part of this phase we also include the “blue goo” scenario, referring to deliberately
constructed “police goo” designed to protect the biosphere from grey goo, which ends up
going out of control or being used for warfare and acting as grey goo itself. The third phase
is the risk of advanced artificial intelligence or human intelligence augmentation, the first
of which is explored in the chapter on artificial intelligence, the second in the chapter
on transhumanism.
The three phases are not mutually exclusive. The only ways to avoid all these risks
would be 1) some kind of world system that enforces top-down peace, possibly a benevolent
AI or a benevolent world dictatorship, sometimes referred to as a singleton 29, or 2) mutually
assured destruction sustaining world peace, somewhat analogous to the situation with
nuclear weapons today (though some arms control experts would object to this
statement 30). The problem with the second solution is that nanotech weapons may offer a
first-strike advantage, meaning it might be possible to wipe out the enemy’s capacity to
respond to an attack, ensuring a first-strike military victory for any aggressor 31. This would
increase the incentive to attack an enemy before the enemy attacks you, and would
incentivize both local and geopolitical aggression overall. Quoting the Center for Responsible
Nanotechnology, nanotech could be “very bad” for peace and security 32.
The First Phase: Military Nanorobotics
Consider an attack robot so tiny it’s invisible, an eighth of a millimeter wide, the size of
a fairyfly. We explored such a possibility in the previous chapter, in the context of a weapon
a hostile artificial intelligence might use to eliminate humanity. Mass-produced, these
robots could be distributed worldwide, sitting dormant and ready to be activated. Made out
of an extremely durable material like diamond or buckypaper, they would hold a payload of
anthrax spores, which remain inactive for years and kill quickly once in contact with a host.
If a country could manufacture millions or even billions and distribute them globally, they
could murder entire continents and dominate the planet. The only way to counteract them
would be to wear full-body suits around the clock and maintain hermetically sealed spaces.
Consider what could happen if such robots went out of control and attacked people
indiscriminately. Think what could happen if several countries each built a fleet of robots,
setting them on the populations of rival countries without defenses. During World War II, a
total war, both sides were willing, especially towards the end, to commit the atrocities
necessary to defeat the other side, and did. Combine that scenario with nanorobotics and
you could have a massacre that kills 99 percent or more of the population of the warring
countries. The fact that these nanorobots could be programmed to kill autonomously,
without controller input, would make them even more attractive to warring parties in a total
war, where hesitation can be fatal. If they could somehow power themselves, they could
continue to kill long after the war is over, like heat-seeking, invisible, aerial land mines. If
enough of them covered the surface of the planet, kept powering themselves
autonomously, and could not be located and destroyed, the Earth could become
uninhabitable. It sounds like an episode of the Twilight Zone, but possibilities like it have
been seriously considered by arms control experts.
Higher-energy scenarios are also imaginable. Instead of tiny killer robots, countries
could manufacture larger autonomous killer robots, the size of tanks, powered by the
super-powerful nanoengines described earlier. Such robots would not only overpower
human beings, but they would be hundreds or thousands of times stronger despite being of
similar size. They would be equipped with targeting systems to score perfect shots from
miles away, farther than human snipers can reach. Such “automatic aiming systems” are
already under development33. There are mortar systems that can reach targets behind
barriers and bunker busters that penetrate through deep layers of rock. All these systems
could be mounted on and delivered by high-power, high-energy MNT-built robots, which
would make our present-day robots look like cheap plastic toys.
Present-day military systems are built with the assumption that enemy shots
sometimes miss. Even if they do hit, they do limited damage due to armor. With
nanotech-built weaponry, it will be possible to build systems that hit every time, and to
calculate in advance the number of shots needed to destroy a target, so that by the time a
target is visible, it is as good as destroyed unless it has some outstanding defense system.
In all likelihood, the initial development of MNT attack robotics will be monopolized by a
single nation, giving it a huge advantage over the rest of the world, what Drexler calls “the
Leading Force” 34. Combine that with human arrogance and natural geopolitical instability,
and the outcome is difficult to predict.
At any given time in the world, as on any given playground, there are always strong
people who want to expand their influence. They consider themselves “the good guys,” and
anyone they come in conflict with “the bad guys,” even if the reality is nuanced. To take a
concrete example, since the fall of the Soviet Union, NATO has extended its influence up to
the borders of Russia, bringing almost a dozen new countries under its wing. When Russia
dared to intervene in Ukrainian affairs in 2014 after the government there collapsed,
Western media portrayed it as an outrage, and NATO imposed sanctions on Russia. Was
NATO justified in such an act? From our point of view, it doesn’t really matter. To a typical
American, it was justified, and to a typical Russian, it wasn’t. All that matters is that such
conflicts are routine, inevitable, and have the potential to escalate into world war. The
military technologies available at the time contribute to the volatility of the situation.
At some point in the future, it will be possible for a nanotech-armed power to
intervene in any global conflict consequence-free. Consider if the United States were a
nanotech power when Russia invaded Crimea. It would be simple for the USA to threaten
the Russians with nanotech weapons and compel them to leave, or prevent them from
entering in the first place. Any military conflict could be resolved decisively in favor of the
United States with no risk to our citizens and property. The geopolitical influence of the
USA would expand like a balloon, with no natural challengers or rivals to stop it.
You might think, “so what?” As long as the USA does not actually invade other
countries, the status quo would be maintained. But the fact that the USA (or NATO) would
have truly supreme military status over the rest of the world would not be only a curiosity,
but would have real geopolitical consequences, in both overt and subtle ways. Russian,
Iranian, and Chinese companies, media, and leaders would have an incentive to abandon
their national values and pride, instead appealing only to American values and American
pride. Their primary way of looking out for their interests would be to appeal to American
decision-making processes. Eventually this would converge to a true global hegemon, a de
facto global government, even if superficially each nation were independent. Today, NATO
may be the world’s greatest military power, but it cannot act with impunity. Military and
economic factors constrain it significantly. What happens when a power comes into being
which can act with impunity over the face of the whole planet? We don’t really know. It
could have negative consequences, if not in the short term, then in the long term 35.
In this scenario, the rise of a stable global hegemon might actually be a beneficial
outcome, but the key word is “stable”. If one nation acquires nanotech weapons and
prevents all others from acquiring them indefinitely, that’s a steady state, what Bostrom
calls a singleton. This scenario has its own risks, but they are smaller than the risk of open
conflict leading to nuclear or nanotech war. The danger is more in one country or group
getting used to hegemonic status, overextending themselves geopolitically, then recoiling in
horror when another nation acquires nanotech weapons, leading to conflict and full-scale
nano warfare. Years of built-up resentment from being on the receiving end of geopolitical
hegemony could inspire “subordinate” countries to regain all the land and influence they
lost while being a second-class state, with potentially explosive results. As we mentioned
earlier, when it comes to nanotech weapons, first strike may be the guarantee of victory, so
there is every incentive to move fast, and the possibility that states with less military
hardware could be victorious as long as they strike first. A state with a hundredth of the
hardware may be able to prevail over a much larger state if it attacks first and uses both
nanotech and nuclear weapons, with a strong emphasis on autonomous robotics.
We ought to consider the possibility that the relative peace of the last 70 years has
partially been the result of a time in history when attacking the enemy with the most
powerful available weapon (nuclear weapons) is self-destructive rather than just destructive
to the adversary. It seems fairly clear that the up-front costs of major conflict decrease
when highly destructive attacks are possible with non-nuclear weapons which use
conventional explosives or pure kinetics. These are more politically acceptable than
nuclear weapons. Nanotech weapons would make it easier to target attacks precisely, to
strike military targets with minimal collateral damage. Though this may initially cause fewer
casualties, it may paradoxically increase casualties in the long run because it makes the
initial escalation seem less foreboding, making escalation more likely overall. The
tendency towards greater escalation may overwhelm the short-term decrease in casualties
enabled by greater precision.
Another technology that nanotech-built robotics would provide is effective
anti-missile defense. Though missiles are very fast (Mach ~0.8–5), they can be tracked and hit
if the intercept system is precise enough. Even with today’s technology, there are relatively
effective anti-missile systems such as Israel’s Iron Dome, though they only work against
smaller missiles, and certainly not against intercontinental ballistic missiles (ICBMs). If all
ballistic missiles can easily be blocked, as nanotech could enable, it creates an incentive to
build exotic new delivery systems, such as nuclear weapons that form by the combination
of different components on the battlefield, or clouds of missiles so large that they can’t be
stopped36. If even a small drone could potentially be part of a nuclear attack, it pushes
defenders to treat each incursion as if it could be a weapon of mass destruction and
respond accordingly.
Nanotech-built robotics create many other exotic attack possibilities: lasers that fire
from space platforms, kinetic systems that accelerate projectiles from space to hit ground
targets (these are investigated in more detail in Chapter 10), supercavitating missiles that
travel under the ocean (too fast to be detected by sonar), and so on. Every destabilizing
military technology you can imagine, molecular manufacturing would be able to build. The
technology is so potentially destabilizing that arms control expert Jürgen Altmann has
suggested an outright ban on autonomous robots smaller than 1 m (3 ft) in diameter,
though this seems as if it would be difficult to enforce 37.
As previously mentioned, the only circumstances we can imagine which would
prevent this MNT arms race scenario would be global hegemony or mutually assured
destruction (MAD) of some kind. Global hegemony seems more stable, because a first
strike advantage imperils the MAD scenario. Anyone considering the future of
nanotechnology should note that its enormous military potential creates an incentive for
geopolitical hegemony as a way of minimizing the likelihood of conflict.
Uranium Enrichment
Besides creating its own unique risks, nanotechnology exacerbates nearly every
other risk. It exacerbates the risk of unfriendly AI by giving us a huge amount of computing
power38. It exacerbates the risk of biowarfare by allowing large, automated test facilities for
new pathogens and biotechnology tools. It exacerbates the risk of nuclear warfare by
making it easier to enrich uranium and putting it within the reach of more states, including
states who have absolutely no intention of participating in global mediating bodies like the
UN39. It increases the risk of global warming or global cooling by giving us unprecedented
ability to warm up the Earth or cool it down, both deliberately (geoengineering) and
accidentally (waste heat, CO2 extraction from the atmosphere).
We mentioned in the chapter on nuclear weapons that uranium enrichment is
expensive, and that the primary US enrichment facility during the Cold War consumed
electric power equivalent to half that used by New York City. Nanotechnology would be the
ideal tool for enriching uranium, because of its capacity for creating extremely high power
density systems (like centrifuges), but also the prospect of developing new enrichment
techniques that work more efficiently for less power. It may be possible to develop
nanosystems that weigh individual atoms and separate out the useful isotopes, making
facilities both more compact and efficient. Even today there is the risk of development of
new enrichment facilities that cannot be observed by surveillance satellites. In the
nanotech future, enrichment facilities will become so small that the only way they could be
detected would be through ground surveillance, which could be difficult to pull off in a
foreign country. The only way to reveal such facilities would be if a human being passed on
the information to foreign intelligence services and provided photographic evidence. If the
scientists stayed quiet, the facility could remain hidden, increasing the risk of a
behind-the-scenes nuclear arms race and making international monitoring impossible.
The enrichment of uranium enabled by nanotech products might be among the
greatest risks of nanotechnology (aside from unfriendly AI), because while nanotech offers
the possibility of new weapons that minimize collateral damage, nuclear weapons are the
ultimate trigger of collateral damage on a vast scale, and seem likely to be used in any
sufficiently aggressive 21st century war. In the nuclear war chapter, we reviewed Alan
Robock’s simulation result that even a limited nuclear exchange could cripple global
harvests and cause hundreds of millions of deaths, and a full-scale nuclear war would put
huge parts of Asia and North America below freezing for years at a time 40,41. The explosion
of nuclear weapons in forests could even bring the Earth into a new glacial age. Combine
this with the fact that nanotechnology would enable self-sufficient space stations, and some
groups may be tempted to end all life on the planet, hide out in space, and return a couple
decades later to a world that is theirs alone.
The Earth’s crust, down to a depth of 25 km (15 mi), is estimated to contain 10^17 kg of uranium,
making it roughly 40 times more abundant than silver. About 0.7 percent of that is the
isotope that can be used to build nuclear weapons (U-235). It only takes about 5 kg (11 lb)
to construct a bomb. That means this planet has enough uranium, in principle, to build on
the order of a hundred trillion nuclear weapons. The primary barrier standing in the way of
this is the technological difficulty of uranium enrichment. If uranium enrichment became as
easy as extracting aluminum from ore (world annual production over 47 million tonnes), we
would have a huge global security problem. At one point aluminum was more valuable than
gold, now it’s dirt cheap. Would it be possible to regulate the processing of aluminum ore,
even if we wanted to? Not through anything less than employing a significant percentage of
the population to inspect every industrial facility on the planet for aluminum traces, or
placing armed guards at all known bauxite (aluminum ore) deposits. Nanotech could put us
in a similar position with the enriched uranium that can be used to build nuclear weapons.
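The bomb-count figure follows directly from the numbers above; here is the arithmetic.

    # Bombs the crust's uranium could supply, in principle, per the figures above.
    crust_uranium_kg = 1e17
    u235_fraction = 0.007    # natural abundance of U-235
    core_kg = 5              # approximate fissile material per bomb

    bombs = crust_uranium_kg * u235_fraction / core_kg
    print(f"{bombs:.1e} potential bombs")   # 1.4e+14, i.e. ~140 trillion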
Biological Warfare and Experimentation
Biological weapons research operates as follows: dangerous pathogens are isolated,
grown in culture, then tested on animals. The weapons are fine-tuned by optimizing their
distribution method depending on the nature of the pathogen. A “biological bomblet,” for
instance, is a bomb filled with a pathogenic agent, designed to spray the pathogen across
as wide an area as possible. If a biological bomblet just created a little puddle on the
ground, it wouldn’t be very effective.
The world’s most powerful countries nominally ended their biological weapons
research and development under the Biological Weapons Convention, which was signed in
1972 and entered into force in 1975. Research continues under the auspices of “defensive”
research. In reality, offensive and
defensive bioweapons research are hard to distinguish. The only way to develop defenses
for a given bioweapon is to grow it, use it on animals, and test out antidotes or other
protective measures. That is how the process works.
There are several technologies that NT would enable or greatly improve which would
make biological weapons research far simpler. The first is gene sequencing and gene
synthesis. The second is automated experimentation. The third is biological isolation
chambers. The fourth is proteomics. All of these fields would be vastly improved or
accelerated by the maturity of nanotechnology, and would make it much easier for more
people to design deadly new pathogens in a warehouse or even their basement. We
already covered the dangers of pathogens in an earlier chapter, and how new
gene-modification technologies are opening the window to entirely new pathogens that have
never existed before. The same dangers apply here, only magnified, since NT would give
us so many new tools which are high-performance, cheap, and useful.
A true “doomsday virus” would be a virus that spreads exponentially and
asymptomatically for 3-4 weeks, lies dormant so no one notices it, then hits every corner of
the globe at once with deadly effects. Such viruses do not tend to arise naturally, since if a
pathogen takes that long to kick in, it tends not to produce very deadly poisons. As an
example, ebolavirus produces symptoms about 2-21 days after it is contracted, typically
8-10. Engineering a virus that is deadly, spreads quickly, and reliably takes multiple
weeks (and no less) for symptoms to appear is not an easy task. To do so effectively will
require an understanding of millions of proteins and their interactions with the immune
system of the human body, the genes that code for them, the existing viruses that possess
these genes, and so on. With our current technology, achieving this is quite difficult.
However, with automated experiment labs involving robotics manipulating millions of
human tissue test samples interacting with millions of possible pathogenic proteins, it
becomes much easier. There are already robots that carry out massively parallel
experimentation, but they tend to be expensive. With nanofactories, these robotic test-beds
could be built in someone’s basement, and would have so many legitimate uses that they
could not be easily screened for “terrorism” or similar ill uses. The same factor applies to
future gene synthesis technologies. Currently, gene synthesis companies screen requests
for dangerous sequences, but with nanotech-enabled devices, the technology will be in the
hands of everyone and no company or authority will be able to screen it all.
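To see why weeks of silent spread matter, consider a toy calculation. The reproduction number and serial interval below are our illustrative assumptions, not the properties of any real pathogen.

    # Toy model of silent exponential spread: each case infects R0 others
    # every `serial_interval` days, with no symptoms during the dormant phase.
    R0 = 3.0                # illustrative cases infected per case
    serial_interval = 5.0   # illustrative days between infection generations

    def silent_cases(days: float, index_cases: int = 1) -> float:
        return index_cases * R0 ** (days / serial_interval)

    for weeks in (2, 3, 4):
        print(f"after {weeks} weeks: ~{silent_cases(weeks * 7):,.0f} cases")
    # after 2 weeks: ~22; after 3 weeks: ~101; after 4 weeks: ~469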
Miniature Nuclear Weapons?
As far as is currently publicly known, the core of a nuclear bomb requires a minimum
of about 5 kg (11 lbs) of enriched uranium or plutonium to reach critical mass. However, in
the arms control book Military Nanotechnology, Jürgen Altmann raises the possibility of
miniaturized nuclear weapons that break this lower limit. Instead of being triggered by
conventional explosives in a spherical shell pattern, as in current nuclear bombs, these
smaller payloads would be triggered by minute amounts of antimatter, on the order of
micrograms. It is unknown if this is possible, but if so, it could create nuclear weapons just
centimeters across, with yields in the 1-1,000 tons of TNT range.
If these weapons could be mass-produced, they would have extremely negative
consequences for geopolitical stability. Arbitrarily small yields could be possible. Such
weapons would only be accessible to medium-to-large states, since the production of
antimatter depends on large particle accelerators. It may also be possible to efficiently
harvest antimatter from space, where it can be found in low densities. The current record
for storing antimatter in a stable state, using a magnetic trap, is 16 minutes 42. Storing it for
at least a few days would presumably be necessary to weaponize it.
Besides miniature nuclear weapons, nanotechnology could enable miniature rockets
which could carry conventional explosives or biological payloads. Rockets could be made
more energetic per pound of propellant by using nanoparticles, which allow better mixing of
fuel and oxidizer43. Whereas today rocket engines tend to be a few tens of centimeters long
at the smallest, better engines built using nanotechnology could be just a few millimeters
across. There is already a DARPA program, which does not use nanotechnology, that aims
to demonstrate a liquid-fueled micro-rocket with turbopumps, with 15 N thrust meant to
deliver 200 g satellites to low earth orbit 44. If a rocket can reach orbit, it certainly can be
used on the battlefield. In 2005, a breakthrough was achieved that allowed the fabrication
of micro-thrusters 50 to 100 times more efficient than previous models 45. These guided
micro-thrusters, if perfected, would weigh mere dozens of milligrams and could power
munitions, space launches, and micro air vehicles with high maneuverability.
Combining small rockets with antimatter-triggered nuclear bombs of arbitrarily low
yield would usher in new military possibilities only vaguely hinted at today. With enough of
these rockets, it might feel as if it were impossible to run out of ammo, since so many of
them could be contained within such a small space, and they would be so destructive. An
array of micro-rockets the size of ten AK-47 magazines could be used to level a small city.
The W54, the smallest nuclear warhead ever built, has a core that weighs about 23 kg (50
lb), and its most powerful version has a yield of up to 1,000 tons of TNT. Imagine scaling
that down a thousandfold, to a core weighing 23 g (0.8 oz) with a yield of one ton of TNT,
small enough to fit about 20 rounds (taking into account additional mass for the
micro-missile and propellant) into a 1 kg (2.2 lb) steel magazine of the kind used in an AK-47.
That’s like having 20 tons of TNT in a single rifle magazine. Like larger nuclear weapons,
miniature nuclear weapons would also produce radioactive fallout, especially if detonated
at ground level. They could also be given cobalt coatings to contaminate target zones for
decades. The possibilities are quite destructive, though the technology itself is speculative.
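The rifle-magazine figure is just this scaling arithmetic made explicit, using the numbers above.

    # Scaling the W54 down a thousandfold, per the thought experiment above.
    w54_core_kg = 23
    w54_yield_tons_tnt = 1_000
    scale = 1_000

    mini_core_g = w54_core_kg / scale * 1_000        # 23 g per core
    mini_yield_tons = w54_yield_tons_tnt / scale     # 1 ton of TNT per round
    rounds_per_magazine = 20

    print(f"core: {mini_core_g:.0f} g, yield: {mini_yield_tons:.0f} t TNT per round")
    print(f"magazine total: {rounds_per_magazine * mini_yield_tons:.0f} tons of TNT")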
Military Applications of Conventional Nanotechnology
In the academic volume Military Nanotechnology, arms control expert Jürgen Altmann
summarizes a list of what he considers important military applications of both
nanotechnology in general and molecular nanotechnology in particular. This subsection
and the next cover these. The (non-“molecular”, non-atomically-precise) nanotechnology
(NT) military applications he lists are as follows: 1) electronics, photonics, magnetics, 2)
computers, communication, 3) software/artificial intelligence, 4) materials, 5) energy
sources, storage, 6) propulsion, 7) vehicles, 8) propellants and explosives, 9) camouflage,
10) distributed sensors, 11) armor, protection, 12) conventional weapons, 13) soldier
systems, 14) implanted systems, body manipulation, 15) autonomous systems, 16)
mini/micro-robots, 17) bio-technical hybrids, 18) small satellites and space launchers, 19)
nuclear weapons, 20) chemical weapons, 21) biological weapons, and 22)
chemical/biological protection. Each of these has geopolitical implications in the bigger
picture.
To clarify the difference between nanotechnology and molecular nanotechnology:
nanotechnology refers to devices with nanoscale features manufactured by any method
other than positional placement of individual atoms (for example molecular self-assembly,
3D printing, vapor deposition, or laser cutting), as opposed to ‘bottom-up’ manufacturing
that builds structures atom by atom (molecular nanotechnology). Altmann expects most of the
developments listed below to be in use by the mid-2020s, though we tend to be more
pessimistic (or optimistic, as the case may be), placing most of them in the 2030s or later,
with specific predictions provided for select applications.
In microelectronics, nanotechnology could provide tools to break the 7 nm lower limit
feature size of photolithography. Even if this limit cannot easily be broken, NT methods
could be used to stack wafers three-dimensionally, permitting the continuation of Moore’s
law and further increases in computing power. NT could also be used to build gigahertz
mechanical resonators for filters and other applications. In photonics, NT will provide many
new opportunities to play with light, from generating natural light with LEDs, to detecting
single photons, and other optical tools such as waveguides and nanostructured
metamaterials. Related to camouflage, nano-photonics will allow the creation of garments
that project video images of the scene behind them towards a specific target, providing a
crude invisibility cloak (“crude” because it does not literally bend the light around it, as
other, smaller invisibility cloaks do). Such camouflage systems, and any conceivable
camouflage system, primarily work in one direction. Using phased array optics, “holodecks”
could be produced that simulate the appearance of virtual objects at any distance, real
enough that they could be viewed with binoculars46. Coupled with haptic pressure suits (to
simulate the sense of touch), this could provide realistic virtual reality even in the absence
of MNT. Nanophotonic circuits could provide more powerful computers and communication
links that transfer descriptively tagged three-dimensional videos of a scene in a fraction of
a second.
Nanotechnology would provide for new displays of any shape, small or large.
Covering wavy surfaces such as clothing or the inside of helmets, they would be incredibly
sharp and bright. Generally, NT will enable the increasing trend towards more ample light
in interior spaces. The pixel size of NT displays would be similar to that of a red blood cell
(about 8 microns). They would be durable and operate under wide temperature ranges. Instead
of being disposable electronics, displays would become durable objects, like stone or metal
monuments. Advances in magnetics enabled by the exploitation of giant
magnetoresistance (GMR) will allow improved memory drives which retain state during
power-down and boot instantly. All of these features will make electronics more comparable
to physical objects like rifles in terms of their durability and reliability instead of the fragile
devices they are now. It will permit electronics and displays to be built into tools that
undergo abuse such as helmets, clothing, vehicles, structures, walls, drones, and so on.
Larger structures will play with light like crystals or mirrors, camouflaging tanks or creating
“visual chaff” or mirror labyrinths on the battlefield.
Altmann states “making use of NT-enabled miniaturization and integration, complete
electronic systems could fit into a cubic millimeter or less.” However, power supplies
(batteries) would not shrink to a corresponding degree, putting limitations on the ability of
these devices to transmit radio signals unless they are connected to larger (but still small)
batteries similar to button batteries. Nanostructured batteries with extremely high power
densities have been demonstrated, but these are unstable (they only last for a few
recharges) and whether they could be mass-produced using NT methods is unknown. We
generally assume they will not be, but unforeseen breakthroughs should not be ruled out.
Regardless, miniaturization of electronics to a significant but not extreme degree is to be
expected as nanotechnology advances incrementally over the next twenty to thirty years. It
will become practical to integrate functional NT devices into small objects like glasses and
ammunition. Computers superior to present-day laptops will fit in cubes a few mm on a side
and be embedded in a flexible communications network throughout all military devices.
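To make the power constraint concrete, here is a rough back-of-the-envelope sketch in Python; the battery energy density and transmitter power are illustrative assumptions of ours, not Altmann’s figures:

    # Rough energy budget for a cubic-millimeter device (illustrative numbers).
    BATTERY_WH_PER_CC = 0.5        # assumed volumetric energy density, Wh/cm^3
    volume_cc = 0.001              # one cubic millimeter
    energy_j = BATTERY_WH_PER_CC * volume_cc * 3600   # about 1.8 joules

    radio_power_w = 0.001          # assume a modest 1 mW transmitter
    hours = energy_j / radio_power_w / 3600
    print(f"Stored energy: {energy_j:.2f} J, ~{hours:.1f} h of 1 mW transmission")
    # About half an hour of transmission: hence the reliance on external
    # button-cell-sized batteries for anything that must radio for long.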

The advancement of software and artificial intelligence in relation to the development
of nanotechnology is an area where analytic caution is warranted. Mostly, pre-MNT
nanotechnology will not directly impact AI research, much of which is currently based on
pure mathematics and trial and error. Improved computers will allow the “brute-forcing” of
certain straightforward data processing challenges, like voice recognition and image
classification, but qualitative improvements require human ingenuity and programming, which advance independently of progress in physical technologies like nanotechnology. MNT could offer such great quantitative improvements in computing
power that it will provide qualitatively better results, but it seems unlikely (though not
impossible) that NT could.
AI milestones to watch for include computers that communicate in natural spoken
language, realtime language translation, computers that accurately anticipate the need for
and pre-emptively search for required information in a messy realtime environment, and
those that visually recognize their immediate environment in three dimensions and perform
complex navigational tasks autonomously. Precisely quantifying anticipated improvements
in automated planning, decision preparation and management software is difficult, since
these are very complex tasks and improvements are hard to predict. It seems reasonable
to expect modest improvements in performance of the above tasks by 2020 and more
substantial improvements by 2030, but unforeseen stalls or breakthroughs are likely. Of
course, software that can perform all of the listed tasks would be invaluable for military
applications.
Fundamental improvements in materials from NT alone will be somewhat limited.
Specialty products such as carbon nanotube-infused armor and artificial sapphire
missile/view-port windows will become less expensive and more widely used, and the
performance of various materials will improve incrementally due to nanoscale additives
which modify their macroscale properties. Carbon nanotube composite speedboats, for
instance, have about twice the strength-to-weight ratio of steel speedboats and use
correspondingly less fuel. The same materials will be used for trucks and airframes.
Nanostructured and microstructured meshes will enable better ergonomics that allow pilots
to endure greater g-forces and impacts without blacking out. Amorphous metal, a non-crystalline (disordered at the atomic level) form of metal, has about twice the tensile strength of conventional metal and three times the toughness. In 2011, there was a

breakthrough to lower its cost, but as of 2015 it is not being mass-produced. If it can be
mass-produced, it will make improved structural components, penetrators, and armors. The
Air Force Research Laboratory is currently investigating the use of amorphous aluminum
and titanium for airframes.
More advanced nanotechnology will permit the production of materials with self-healing properties: surfaces made of microscale polymers which seal when cut. Simple
‘smart’ materials such as shape memory alloys and piezoelectric actuators will allow drone
wings which can withstand unusual stresses or vehicles which can slightly change shape to
perform specific tasks. Similar actuators could be used for haptic suits which give people
the sensation of picking up real objects in virtual reality, applying pressure to their fingertips
or other parts of the body. This could be used to greatly improve training simulations for
soldiers or enable better data management, display, and analysis for planners. Eventually,
many common military objects made of plastic will be reinforced with carbon nanotubes,
making them stronger and lighter. This includes everything from backpacks to internal
missile components to car seats.
The next domain concerns energy sources and storage. As mentioned before,
batteries have a limited window of improvement prior to anticipated improvements from
pure bottom-up manufacturing. Improvements of 50 percent in storage capacity over the
next 10-20 years are likely. Nanotechnology will allow the fabrication of much smaller
batteries than were previously available, to power tiny, invisible sensors, monitors,
communication networks, flying cameras, implants, devices that flow through the
bloodstream, and so on. Nanostructured fuel cells might make hydrogen fuel cells more
widely used, though these energy sources seem unlikely to replace conventional batteries
and motors except for specialty uses.
A candidate for the crowning achievement of nanotechnology would be the mass
production of extremely cheap and efficient solar cells, though it may be a number of years
before they really become “dirt” cheap. Improvements in solar cells follow a Moore’s-law-like curve, with an annual 7 percent reduction in dollars per watt, a
trend which has held up for 30 years and should continue for the next 20 years at least.
One scientist predicts that solar electricity will drop below the cost of coal-generated
electricity by 2018, and that solar will cost half as much as coal electricity by 2030. If this

pans out, it could double our available energy, as well as enabling military operations and
power sources in remote areas without power infrastructure.
Current micro-robots and similar devices depend upon conventional batteries to
provide power, or integrated circuits that wirelessly receive small amounts of power from an
external source, in the case of RFID tags. As conventional NT improves, it will enable
microsystems that work more like car engines, utilizing hydrocarbon fuels and micro
thermo-electric converters to improve energy density storage by a factor of ten. This will
permit longer flight and loiter times for micro-UAVs, some of which may be the size of small
insects, and enable more complex and powerful microsystems in general. Arguably, this is what would make micro-UAVs truly practical; current designs are hampered by their short-lived conventional power sources. Scaled up to the weight of
a few pounds, such systems could provide power in the range of 20-60 W to soldier
systems (like heads-up displays in a helmet) and small robots (surveillance drones or
bomb disposal robots). This is enough to power lasers, heaters, spotlights, computers,
pumps, terahertz imagers, climbing gear, listening gear, seismic sensors, mechanical
grippers, lab-on-a-chip, biological digesters, propulsion thrusters, and so on. Instead of
charging their electronic devices in a wall socket, the soldiers of 2030 could just “fill up”
with hydrocarbon fuel and get a full charge instantly. Instead of carrying expensive, heavy batteries that must be swapped out for each device, soldiers would have one liquid-fueled power cell per device and carry a small fuel tank.
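The claimed factor of ten is roughly what the specific energies imply. A sketch, using common textbook values and an assumed 25 percent conversion efficiency:

    # Specific energy of hydrocarbon fuel versus a lithium battery.
    # The 25 percent converter efficiency is our assumption.
    fuel_wh_per_kg = 12000         # ~12 kWh/kg for liquid hydrocarbons
    battery_wh_per_kg = 250        # ~0.25 kWh/kg for good lithium cells
    efficiency = 0.25              # assumed micro thermo-electric conversion

    usable = fuel_wh_per_kg * efficiency
    print(f"Usable fuel energy: {usable:.0f} Wh/kg, "
          f"{usable / battery_wh_per_kg:.0f}x a battery")
    # ~3000 Wh/kg usable, roughly twelve times a battery: the same order
    # as the factor-of-ten improvement cited above.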
Improvements to propulsion from NT primarily involve miniaturization opportunities.
As previously mentioned, very small rockets with payloads and fuel tanks weighing just a
few dozen milligrams each should be possible. These would have a range in the tens to
hundreds of meters. Anything much smaller than these would have difficulty pushing
against the air and would have to be slower and have a lesser range, making them useful
for point-blank attack only. Since fuel volume falls with the cube of a rocket’s linear dimension while its drag-producing cross-section falls only with the square, rockets smaller than about a centimeter, or lighter than five grams or so, would have too limited a range and be too slow and inefficient for
regular use. Slightly larger rockets (but still extremely small by today’s standards) would
work fine for relatively close-range combat.
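The cube-square scaling behind this claim can be illustrated with a toy calculation (not a real aerodynamic model):

    # Toy cube-square scaling for shrinking rockets. Fuel mass scales with
    # length cubed; drag-relevant cross-section scales with length squared.
    for scale in (1.0, 0.5, 0.1):          # relative linear dimension
        fuel = scale ** 3                  # fuel volume/mass ~ L^3
        drag_area = scale ** 2             # frontal area ~ L^2
        print(f"scale {scale}: fuel per unit drag area = {fuel / drag_area:.2f}")
    # At one tenth the linear size, a rocket has only one tenth the fuel
    # per unit of drag area, which is why sub-centimeter rockets would be
    # slow and short-ranged.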


Used as weapons, these rockets could set off small explosions which would be fatal,
or drive themselves into flesh and bone mechanically. They could target sensitive parts of
the body like the brain or heart. Small propulsion systems will also enable a range of small
robots for every purpose from surveillance to transport to confusing the enemy. These
robots could cover land, sea, and air in huge numbers and blur the lines between
brinksmanship and direct conflict. Just like the electronics that soldiers carry with them,
these robots could refuel with liquid hydrocarbons. There could even be ‘tanker robots’ that
travel in the field and refuel these swarms. Carrier-like systems are possible, where
numerous small robots travel in larger robotic carriers for efficiency and protection until
they reach a target zone and disperse to complete a mission. These robots could even
consume local biomass for fuel, as has already been demonstrated in a prototype.
More exotic forms of propulsion based on shape-changing materials and
nano-/microstructured materials could take inspiration from biology: submarines that swim
like squid, UAVs that flap their wings like birds, ground robots that stalk like tigers, etc.
Nanotechnology will enable redundant propulsion. By making systems small, highly
functional, and flexible, several types of locomotion could be used by the same unit. One
possible combination could be rocket thrusters, swimming ability, and climbing ability. For
example, these locomotive abilities would allow a swarm of attack robots to fly to a marine
position, swim to shore, climb shoreline cliffs, and reach a target. This would enable
greater stealth, unpredictability, and more efficient use of fuel. Other combinations are
possible. Covert robots that fly through the sky as “rods” too fast for the eye to see could
be fabricated, invisible in the same way that we cannot see bullets in flight. Such rods would be very difficult to detect with radar and might require physical barriers to defend against.
More mundanely, NT will allow rocket and conventional engines that operate at higher
temperatures because they are made out of better materials. This will make them more
efficient and give the craft that use them a longer range. As far as using ‘smart’ materials in
the military, MIT’s Institute for Soldier Nanotechnologies is developing an exoskeleton that
uses the shape-changing polymer polypyrrole for locomotion. This polymer contracts or
expands in response to an electric field, much like the piezoelectric crystal in a scanning
tunneling microscope. A form of locomotion that could be used for nano-robots is the
rotation of the natural flagellum used by bacteria, or artificial variations which accomplish

the same task. A hybrid artificial/biological motor that runs on ATP has already been built
with funding from DARPA.
For vehicles, NT will create new opportunities for the wider integration and production
of light armor in the form of nanotube plastic composites. First-generation NT would have a
minimal impact on heavy armor, as it relies on bulk metals, unless there is an unforeseen
breakthrough in the production of amorphous metals, as mentioned earlier. If there is no
breakthrough, the overall structure of tanks and battleships would remain much the same
as they are today. Similarly, because tanks and battleships rely on large, high-intensity
power sources that run on fossil fuels or nuclear power, relatively little would change for
them as far as new power sources introduced by NT, which are most suited to small units.
To reiterate, we are looking at the 2030s time window here—advances in MNT in the
2040s-2060s and beyond could potentially allow the production of extremely durable and
high-performance tanks and battleships constructed out of diamond (if MNT is a success),
but conventional NT would not make this available. It is important to recognize the
differences in capability increases provided by nanotechnology (NT) versus molecular
nanotechnology (MNT), and to reiterate that the latter is an advanced subset of the former,
expected further in the future.
As far as vehicles, those which would benefit the most from initial advances in NT are
aircraft, where weight and space are at a premium. This especially applies to unmanned
craft, which can be made small since they have no pilot. We’ve already summarized how
NT will enable the mass production of tens of thousands or even millions of small flying
units, which could use hierarchies of carriers for secure transport between battlefields and
war zones. These swarms could introduce abrasives into enemy vehicles, distribute
nanofilaments that cause short circuits, clog intake and exhaust filters, initiate dangerous
and damaging chemical reactions (acids, etc.), cover and disable view ports or other
sensors, cause overheating (much as honeybees cooperate to kill giant hornets), even cover vehicles in immobilizing nets or bags of melt-resistant polymers. Large swarms could defend high-value stationary targets and intercept bullets or even smaller missiles and rockets. With
high maneuverability, swarm behavior, varied propulsion methods, and varied attack
methods, coordinated flocks of small flying robots might seem like “magic” to the soldiers of
today. These are systems which will become available within 20-30 years, whether or not
MNT is developed. Small swarms of quadcopters performing cooperative tasks like tossing

a ball up from a net have already been publicly demonstrated and have elicited military
funding. Their performance is already eerily impressive. For a picture of how larger swarms
of more maneuverable drones might operate, look up flocks of starlings on YouTube.
Guiding and providing information to such swarms on the battlefield will be vast
networks of sensors, especially when on defense rather than offense. Today, soldiers are
provided with very limited environmental data, and mostly rely on the same basic means of
threat detection used since the Cambrian era: our eyeballs. Supplemental information
includes local intelligence, GPS devices, and aerial surveillance. Given the uncertainty of
the battlefield, even small pieces of information from auxiliary sources such as these can
make the difference between life and death.
With NT, environmental sensors can be built which are a millimeter (1000 µm) or less
across, producing “smart dust” that will eventually reach the dimensions of actual dust
particles (0.05–10 µm). These could be blown around by the wind and pick up footsteps,
vehicle movement, even detect whether someone is asleep or not. It will become possible
to actually listen to the heartbeats of the enemy with smart dust networks.
Countermeasures would have to rely on brain-to-brain interfaces, decoys, or similar means
of selectively withholding strategically relevant audio or visual information from the
environment. Some information, like the physical location of a human being, may be too
inherently noisy to conceal.
A person’s location is given away by the heat they emit, the noise they make while
walking and breathing, their visual signature, and other variables. These will be picked up
by smart dust on future battlefields and used to target individuals for lethal or non-lethal
strikes. It will increase the incentive to put soldiers behind armor or to remotely control
drones from concealed or distant positions. Sensor networks could detect minute changes
in light, radio, sound, temperature, vibration, magnetism, or chemical concentrations.
Though nascent efforts towards smart dust have been made, the technology basically does
not exist and will not for another 10-20 years. NT will pave the way. Current battlefield
sensors are a few centimeters in diameter at the smallest—smart dust will consist of
sensors a millimeter or smaller in size. A millimeter is about the thickness of ten pages of
printer paper. Smaller sensors are conceivable for specialized applications like chemical


sensing. Some may be biologically inspired or borrow components from nature, such as
chemical-sensitive microorganisms.
So far, we’ve covered 10 military applications of nanotechnology out of 22.
Understanding how they all combine to unlock new capabilities and geopolitical chaos is
crucial for comprehending the nanotechnology and robotics threat profile, what these
technologies will enable in the next 90 years, and how they could threaten our survival as a
species. It also helps us build up an overall picture of the future as a starting point to
consider threats in a holistic way. Seth Baum, co-founder of the Global Catastrophic Risk
Institute, has emphasized this specific point: we need to examine risks holistically rather
than just individually and separately.
Nanotechnology is especially important because it enables technologies that underlie
a great deal of foreseeable 21st century risks. Thus a solid understanding is crucial for
global risk analysis, and especially for analyzing the risk of human extinction. The
remaining potential military applications to cover include armor, conventional weapons,
soldier systems, implants and enhancement, autonomous systems, mini/micro-robots, bio-technical hybrids, small satellites and space launchers, nuclear weapons, chemical
weapons, biological weapons, and chemical/biological protection. Again, we would like to
make clear that all of these advancements, many of which already exist in prototype form,
are based on conventional nanotechnology, not molecular nanotechnology, so critiques
directed at MNT do not apply to the technologies listed in this section. We describe
potential MNT military applications in the next section so that each category can be
considered separately.
With regard to armor, we already mentioned that improvements would be
concentrated in light armor instead of heavy armor. In the short run (10-20 years) new
materials will provide armor that is about twice as protective as today’s best bulletproof
vests. In the long run (40+ years), mass production of buckypaper could lead to armor that
is 300 times stronger than steel and several times lighter. Based on a technique called the
Egg of Columbus, scientists have made Endumax fibers (a kind of synthetic fiber) dozens
of times stronger by knitting them into small knots on the microscale, setting the record for
the world’s toughest fiber47. If used with graphene, the researchers who developed this technology say a fiber could be made with a toughness modulus as high as 100,000 J/g. In

contrast, the toughness modulus of spider silk is 170 J/g and that of Kevlar 80 J/g. Since the knots
would unravel on impact, this technique would only provide protection from one strike,
though the protection it would provide for that encounter would be extremely high. Thick
layers of graphene fiber armor built with billions of small knots would provide extremely
effective protection for tanks, ships, airplanes, exoskeletons, and so on.
Nanotechnology will improve conventional arms. One of the most notable and
potentially dangerous inventions would be the development of metal-free handguns and
rifles, made using nanotube-reinforced polymers. In Military Nanotechnology, Altmann even
recommends that metal-free firearms be banned outright. Already, there exist 3D printed
plastic guns which only use a small amount of metal, though their performance is
substantially worse than a real gun. Of course, a plastic gun could go right through any
metal detector, and be less conspicuous to x-ray scanners. Besides plastic guns,
nanotechnology will allow for reduced weight on projectiles such as missiles and bullets,
allowing for increased muzzle velocity and range. It could also allow electronics and
guidance systems to be integrated even into rifle bullets, which would be equipped with fins
to guide them to a target. This would allow bullets to curve in midair, producing rifles and
machine guns that have near-perfect accuracy even if the shooter has little experience.
With such bullets, every shot would be a head shot. Every kind of projectile, from the
Vulcan chain guns on helicopters to anti-aircraft missiles fired from man-portable systems
to mortar shells to handguns, could be provided with self-guiding ammo. This could allow
missiles to be smaller and more precisely targeted towards weak points on enemy
vehicles, requiring a smaller payload to destroy a target.
As far as penetration capacity of firearms itself, NT would provide only limited
improvements, since the metals currently used for high-velocity rounds are close to the
theoretical limit of material density in normal matter. Better manufacturing procedures could lower the cost of sabot rounds, in which a lightweight casing falls away from a narrow flechette after firing, allowing higher velocities than conventional rounds. As previously
mentioned, NT will allow very small missiles, down to a few millimeters in diameter, which
would have low kinetic energy but could still deliver a fatal blow by setting off just a few
grams of explosive near a sensitive part of the body such as the face or stomach.
Improvements in power source and supply will make exotic weapons such as lasers,
microwave beams, and electromagnetic accelerators (railguns) more practical. Large

railguns have already been developed by General Atomics for the Navy and tested in labs,
with at-sea testing scheduled for 201648. These railguns fire projectiles at Mach 7, about
5,000 mph. It is unknown whether electromagnetic accelerators could be miniaturized
enough for use in small arms, but if so, they could be powered by the portable hydrocarbon
fuel cells mentioned earlier.
“Soldier systems” refers to systems that soldiers carry on their bodies which augment
the natural capacities and talents of the soldier like sight and planning. These systems are
universally important for warfighting, and especially for protracted and guerrilla conflicts,
when resources are stretched thinner and maximum mileage must be extracted from long-lasting systems. Examples include night vision, shoes, clothing, laser rangefinders,
backpacks, water filters, and so on. The DARPA-funded Institute for Soldier
Nanotechnologies (ISI) is pursuing nanotechnological upgrades to soldier systems with
their ‘super-suit’ vision, what they call a “nano-enhanced super soldier”. The project aims
to “create a 21st century battlesuit that combines high-tech capabilities with light weight and
comfort… that monitors health, eases injuries, communicates automatically, and maybe
even lends superhuman abilities.” A demo video of the battlesuit by video students
unaffiliated with DARPA or ISI shows the anticipated functions of the suit: a “super shield”
described as “anticipatory projectile deceleration,” a “super cast” built into the suit that
snaps broken bones back into place and holds them there, “super vision” that is essentially
night vision combined with augmented reality, and “super stealth” that makes soldiers
“invisible to all electronic forms of detection and even the human eye” (an implausible claim
given that invisibility cloaks only work in one direction).
Soldier systems serve as force multipliers, meaning they make the soldier more
effective on the battlefield and make him harder to kill. A sniper with night vision,
concealment, and skill can potentially take out dozens or even hundreds of enemy soldiers
without being shot or captured. Conversely, an untrained and undisciplined mercenary with
an AK-47 may not be a very effective soldier at all. Another factor has to do not with the
effectiveness of individual soldiers, but the logistic demands of groups, which shape
deployment and movement patterns even more than the fighting capacity of individual
soldiers. An example of an NT-related invention that provides solutions in this domain is a
water extracting system (atmospheric water generator) that can extract hundreds of gallons
of water directly from the air, even in a desert49. This already exists, but could be improved.

In a typical desert mission, a squad or platoon will be dependent on regular resupply from a
water truck, which is itself vulnerable to attack and gives away the position of soldiers. A
squad or platoon that can use nano-filters to purify water from local sources will not be as
dependent on a water truck, which, aggregated across hundreds or thousands of squads
with similar advantages, can make an entire military force more capable by reducing its
logistic complications. This can be even more helpful to fighting a war than better guns or
armor. Much of war is fought with food and water.
When it comes to clothing used for physical activities, or just clothing in general, our
technology today is rather limited. With NT, we will be able to build microsystems that
circulate cooling or heating fluids through hollow nano- or micro-fibers in combat fatigues or
a battlesuit. This would be especially crucial for a full-body battlesuit, which would
otherwise become intolerably hot in short order. In cold environments, avoiding overheating
is especially important, because overheating causes one to sweat, which then lowers body
temperature and increases the risk of hypothermia. Using active cooling suits, the future
soldier could be kept at a perfect temperature at all times—cooled down while running to
minimize sweating, warmed up quickly when having to hold position in a freezing wind.
Clothes could be more form-fitting, since the primary advantage of baggy clothing—air
circulation and cooling—would be achieved instead by these active systems. Specialized
clothes could perform the function of a cast or bandage by utilizing multi-functional NT
materials which provide both thermoregulation, blood absorption, and stiffening in response
to injury. Nano-clothes such as these seem almost mandatory for making battlefield
exoskeletons practical, otherwise it would be an unacceptable inconvenience and a danger
to step out of an exoskeleton for minor medical treatment or simply the feeling of
overheating.
The number one preventable cause of death on the battlefield is “bleed-outs,”
especially from injuries to the neck or groin, where arterial bleeding causes the soldier to
lose too much blood too quickly, causing death50. A 2012 study of US combat soldier
deaths in Iraq and Afghanistan concluded that 25 percent were “potentially survivable,”
meaning that if the victims had better medical equipment or treatment, they could have
survived. A nano-suit that covers the neck and groin could potentially stretch to seal a
wound as soon as it is created, stopping bleeding until the injury can be treated by a
combat medic.

A nano-suit that covers the entire body could protect it from shrapnel fragments that
would otherwise impact above or below a bulletproof vest and helmet. If artificial blood is
developed in the next decade or two, which seems plausible, a nano-suit could contain
many small reservoirs of the fluid, pumping it into the bloodstream to buy an injured soldier
more time. Glucose reservoirs could be used for the storage of food energy. Combined with
an exoskeleton, a nano-suit would let soldiers jump farther, run faster, and carry heavier
loads. A typical soldier load is somewhere north of 100 lbs, heavy enough that it causes a
worrisome incidence of permanent back and spine injuries among soldiers. Even an
exoskeleton operating on low power could make a decisive difference in the speed and
agility of soldiers with a full pack, saving lives. Experiments have shown that soldiers can
retain combat effectiveness for as long as 40 hours with the help of modafinil; this would be
made easier if an exoskeleton is doing all the heavy lifting. After about 12 hours of heavy
activity, even the best soldiers get tired and need increasingly longer breaks.
NT opens up the possibility of implants for physical and cognitive enhancement.
Implants today are mostly limited to cochlear implants for the deaf and hard-of-hearing, or
pacemakers for heart patients. With mature NT, implants could be fabricated for a mind-bogglingly large array of uses, from implants that release chemicals to accelerate
metabolism, to those which allow “artificial telepathy,” to ocular implants that outperform
real eyes. Some of these, like the first, are just around the corner, others 20-30 years in the
future, and still others (particularly useful brain implants) 40-50 years in the future and
beyond. The 2002 NSF-funded report Converging Technologies for Improving Human
Performance outlines some of these possibilities, but only scratches the surface51.
Exhaustive options for human implants have yet to be explored technically because
they are not yet practical to implement. NT will make them practical within the lifetime of
many people living today. One difficulty to be overcome is the relative invasiveness of
surgery, especially brain surgery. Perhaps surgery by robots could be improved to the point
where tissue damage is negligible and post-surgery healing can occur quickly, maybe with
the assistance of micro- or nano-structured tissue trusses and grafts. There are also a
number of ethical issues which have to be confronted regarding implants, but these issues
haven’t prevented the military, and especially DARPA, from funding extensive studies on
the potential military uses of implants52. If a nano super-suit turns someone into a “super-soldier,” a super-suit plus implants would make a soldier doubly super. Because the use of

NT-based implants is such a massive topic, we leave further details to Chapter 11, which
focuses on the risks and benefits of human enhancement in the context of global risks.
Potentially the greatest impact of military NT is in autonomous battlefield systems.
These would replace many of the functions of soldiers for attack, defense, and recon. One
source defines autonomous robots as follows: “A fully autonomous robot can: 1) Gain
information about the environment 2) work for an extended period without human
intervention, 3) move either all or part of itself throughout its operating environment without
human assistance, 4) Avoid situations that are harmful to people, property, or itself unless
those are part of its design specifications.” The last one is particularly tricky, and it seems
likely that if autonomous robots are ever widely deployed, they will accidentally damage
property and people as well. However, the same applies to the actions of human beings,
and the military appeal of these robots makes it unlikely that they will be relinquished
because of this.
As we’ve mentioned several times already, swarms of small robots would be
effective NT-built weapons, perhaps even the “killer app” of military nanotechnology.
Without the ability to operate autonomously, robots have limited use, since they need to be
piloted remotely by human beings. With autonomy and mass-production, all sorts of
systems could be built, swarms that swim, hop, glide, flap, and wiggle their way to war.
Unless there is a major change in ethical views towards autonomous systems, it’s unlikely
that these robots will kill without a human in the loop, but they could disrupt, drown out,
harass, conceal, or confuse targets without human input. Some of these robots might use
neural nets with cognitive complexity and design principles borrowed from or inspired by
cockroaches, rats, fish, birds, mosquitoes, squirrels, mules, horses, tigers, lions, cheetahs,
elephants, jellyfish, bats, penguins, wasps, bees, worms, leeches, mice, lizards, springtails,
dinosaurs, and many other animals.
Altmann defines macro-scale autonomous systems as those with a size above about
0.5 m (1.6 ft). These already exist in the US military today, as bomb disposal robots or
flying surveillance drones. NT will allow them to use materials and miniaturized electronics
that make them more flexible, adaptive, numerous, coordinated, autonomous, and animal-like. Because full environmental autonomy is a software problem, progress on this front is
more difficult to predict than something like the gradual improvement in computers, so we

predict that it could be anywhere between 10 and 40 years before there is software to
direct large swarms of medium-sized robots autonomously and effectively in unpredictable
environments with changing weather.
Many functions must be performed for robots to autonomously complete meaningful
missions, including navigating terrain, avoiding moving obstacles like wild animals,
livestock, and people, recon, concealment, refueling, defense and attack formations,
escorting, following, dispersal, return to base, self-destruct, and so on. Today, the software
palette of autonomous robot functions is very limited. Even the task of traversing terrain is
still a challenge. Substantial progress is happening, however, led by robotics companies
like Boston Dynamics, acquired by Google.
Despite improvements in miniaturized electronics, powerful robots with 5 centimeter
or larger guns will weigh several tons, though their size doesn’t mean they can’t be
autonomous. Some autonomous systems could be quite large, the size of tanks or even
larger. Lacking a crew, these tanks could accelerate more rapidly, engage rougher terrain,
hide underwater or underground, and be completely redesigned for greater mobility and the
ability to perform maneuvers that would kill a driver. The same applies to small submarines,
boats, and planes. Autonomous vehicles could more easily operate around the clock with
minimal breaks, independent from the weaknesses of a human crew. Lacking life support
systems, these vehicles could be smaller and lighter and therefore have room for more
armor. Entry ports might even be welded shut.
Moving on to smaller sizes, Altmann defines “mini” robots as under 0.5 m (1.6 ft) in
size, and “micro” robots as below 5 mm (0.2 in) in size. Of course, construction of these
would be greatly enabled by advances in NT. Altmann claims that “NT will likely allow
development of mobile autonomous systems below 0.1 mm, maybe down to 10 μm”. This
is still 2-3 orders of magnitude larger than the size of nano-robots and nano-assemblers
that are in the realm of MNT, but extremely small at the lower end (10 μm is a bit larger
than the diameter of a red blood cell and is small enough for microbots to circulate through
the bloodstream). Altmann does not explicate how these robots would be fabricated, but
based on prototypes that have already been built in a lab, they would probably be laser-cut laminated sheets of advanced materials or photolithography-fabricated MEMS (micro-electro-mechanical systems).

Not many autonomous micro-robots have been fabricated as of this writing, and those
which have tend to be one-shot prototypes. In 2012, the Harvard Microrobotics Lab developed a system for the mass production of flying micro-robots, but no actual mass production has yet been done53. Korean researchers have built small bio-mimetic worm robots
that use actuators (artificial muscle wires) made of shape-memory alloy54. The worm uses
tiny earthworm-like claws and is 4.0 mm long by 2.7 mm wide. The Harvard Microrobotics
Laboratory is a current center of microbot activity, their flagship project being a “Monolithic
Bee” that is fabricated using a unique 2D-to-3D “pop-up book” process, where the chassis
is formed as a single piece of laser-cut metal, which is then pushed upwards by pins to
create the full 3D design. The chassis material uses 18 layers of carbon fiber, Kapton (a
plastic film), titanium, brass, ceramic, and adhesive sheets which are laminated together,
and the end result is about the size of a dime. More complex designs that fold like origami
are possible.
Currently, these robots are quite primitive. There is a major issue with regard to
limited energy supply, given that these robotic “bugs” cannot eat. Tiny batteries cannot
supply the power that digestive systems can. Without food, their time of operation is
measured in minutes rather than hours, making them impractical for any real use. They
lack stability control systems, meaning that within seconds of takeoff, they slam into the
ground again like drunken flies. The smallest stable artificial flyers are tens of centimeters across, about a hundred times heavier than these microbot prototypes. Today, that is roughly the size autonomous flyers need to be to carry out useful
functions such as surveillance. With advances in NT, however, crucial systems will be
miniaturized and micro-flyers will actually become useful along with other micro-robots. In
military parlance, very small autonomous flyers are called μUAVs. Like many of the projects described in this chapter, the Harvard effort receives grants from DARPA.
There are a number of challenges to building functional autonomous systems at the
sub-5 cm scale. Besides power, these smaller systems carry sensors that are less effective
than they would be if larger, due to a smaller detection surface. Communication is also a
problem, as small communication antennae running on tiny batteries have a poor signal
strength. Laser communication would be difficult because diffraction causes a smaller
beam emitter to emit a wider beam. To deliver their information, tiny surveillance drones may simply have to return to base or to a larger carrier so that their data can be obtained

through physical download. Aside from that, smaller systems are more fragile and less
mobile. A gust of wind may send a μUAV flying into a tree or concrete surface, snapping its
delicate components. A thick mist might cause micro-UAVs to become covered in water
which makes it impossible for them to fly, similar to how many smaller insects cannot fly
well in foggy conditions. Small ground robots could simply be stepped on or washed away
with hoses. Like human beings, they could be blown away with anti-personnel mines such
as Claymores or weapons like automatic shotguns. Naturally, they would have greater
difficulty passing physical barriers than larger machines or human soldiers would.
For mission applications, micro-robots could be used in reconnaissance (path-finding,
detection of enemy units and structures, environmental threats, mapping terrain), serving
as beacons to delineate targets for larger, remotely sourced attacks (railgun, bombing,
artillery strikes), or directly attacking the enemy, either by flying directly into targets,
detonating a small explosive, or injecting a toxic substance. Small μUAVs could even
attach themselves to a person and threaten to kill them unless they comply with certain
commands. Many terrifying options would be possible for countries or organizations without
scruples or with sufficient motivation to win at any cost. Of course, since they are small,
these robots could be destroyed or disabled quickly via a variety of means, including
swatting them like flies.
Despite all their limitations, there are many futurists who think that the future of civic
policing, area denial, and population control lies in large numbers of these small robots55. It
seems like the optimal robotic armies would make use of combinations of robots of various
sizes, from the tiny to tank-sized. Today, disabling a tank is quite easy if you can get close
to it—a thermite reaction over the engine block of a tank can ruin it. In the not-too-distant
future, tanks might be escorted by small swarms of hopping, crawling, and flying robots
which defend the area around them and provide short-range reconnaissance. These
smaller robots would also be able to go places tanks cannot go, such as underground, into
buildings, up steep hills, around corners, and the like. They could disguise themselves as
bugs or use quickly changing camouflage. Large swarms of these robots could also be
used to confuse or terrify the enemy with sights and sounds. They might even be used to
trigger epilepsy. Watch a YouTube video designed to trigger epilepsy and you will quickly get a headache; now imagine that times a thousand. There are international treaties
against blinding laser weapons, but not against using brilliant multicolored lights or sounds

to disorient or antagonize the enemy. Lights, especially strobe lights, are an effective
weapon that has barely begun to be exploited by the military. The battlefield of the NT
future may look like a disco, with soldiers having to depend upon virtual reality goggles to
carefully filter out the visual garbage and reveal real targets. To the soldier of today, it will
be a strange, strange world. All this could happen just within the next 30 years.
Besides worm or flea-sized robots and larger tank-like robots, there is the realm of
robots too small to be seen by the naked eye (<0.1 mm) and robots the size of single-celled organisms (2–200 μm). We examine these later in this chapter, in the sections on MNT and grey goo. There is also the complex possibility of “bio-technical hybrids,” robots
of all sizes which incorporate both biological and artificial structures. These will also be
covered later.
Remaining applications of NT include small satellites and space launchers, nuclear
weapons, chemical weapons, biological weapons, and biological/chemical protection. All of
these topics have been covered sufficiently in other sections, so we’ll move on.
Military Applications of Molecular Nanotechnology (MNT)
Now that we’ve reviewed the potential military applications of conventional NT, we
examine the more extreme and grandiose implications of bottom-up manufacturing, MNT.
In the opening of this chapter we cited the primary arguments against the feasibility of the
technology, and the numerous responses which have been made to critics by members of
the MNT community. Instead of delving deeply into these issues (which could fill an entire
volume), we simply assume the Drexler/Phoenix-style nanofactories described earlier in
this chapter are possible, and consider the military implications which proceed therefrom.
As previously stated, we expect these devices to be built sometime between 2040 and
2100, though exactly when is difficult to state. When nanofactories are developed, they
could have an extreme impact on the world system very quickly, on timescales of weeks or
months. We would be going from the Iron Age to the Diamond Age, to use sci-fi author
Neal Stephenson’s term. The cost of goods could fall precipitously, making it so that even
work might become optional in some areas.
The military applications of conventional NT and MNT could not be more different.
The first involves incremental upgrades and miniaturization of existing military systems, the

latter concerns exponential automated production. Assuming a doubling time of 15 minutes
for assemblers, MNT manufacturing systems could theoretically go from one kilogram to
100,000 tonnes of fabricators in 28 doubling cycles, or seven hours (providing sufficient
feedstock and suitable space would be a challenge). In “Design of a Primitive Nanofactory,”
Chris Phoenix writes, “the doubling time of a nanofactory is measured in hours”56.
Assuming a doubling time of 15 hours (the number used in the nanofactory paper), we go
from one kilogram to 100,000 tonnes of fabricators in 17 and a half days. Logistics and
regulations would be likely to slow the proliferation of nanofactories from this lower bound,
but mass adoption within 5-6 months seems like a realistic estimate if nanofactories are
made available to the public. In contrast, the mass adoption of cell phones took 20 years.
Nanofactories would be considerably more useful than cell phones, so we can suppose
that their adoption would be correspondingly more rapid.
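These replication figures are easy to verify; a short Python check, using only the doubling times quoted above, reproduces them:

    import math

    # Doublings needed to grow from 1 kg of fabricators to 100,000 tonnes
    # (1e8 kg), at the two doubling times quoted above.
    doublings = math.ceil(math.log2(1e8))            # 27
    print(doublings, "doublings")
    print(doublings * 15 / 60, "hours at a 15-minute doubling time")
    print(doublings * 15 / 24, "days at a 15-hour doubling time")
    # About 27 doublings: ~6.8 hours or ~17 days, matching the rounded
    # figures of seven hours and seventeen and a half days in the text.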
Where would the matter and energy for all this manufacturing activity come from? The
sun, mostly. Phoenix states: “A power use of 250 kWh/kg means that a large 1-GW power
plant, or four square miles of sun-collecting surface, could produce ~12,000 8-kg
nanofactories per day (not including feedstock production).” That’s roughly 100 tonnes of
nanofactory per day. Devote 100 power plants to production, and that goes up to 10,000
tonnes per day, giving us 1,000,000 tonnes of fabricators in 100 days. For matter, there are
two possible sources: hydrocarbons such as those in natural gas, or carbon dioxide from
the atmosphere. US annual natural gas production is over 30 trillion cubic feet. The weight
of a cubic foot of natural gas is about 22 grams and annual production is over 600 million
tonnes. Just one percent of this, 6,000,000 tonnes, would suffice to build the
1,000,000 tonnes of fabricators we outlined, and plenty of products besides. Energy and
matter are not serious limits to the exponential proliferation of nanofactories.
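A quick re-derivation of these power and feedstock figures, in Python, using only the numbers quoted above:

    # Phoenix's power figure: 250 kWh per kg of product from a 1-GW plant.
    kwh_per_kg = 250
    plant_kwh_per_day = 1e6 * 24                     # 1 GW = 1e6 kW
    kg_per_day = plant_kwh_per_day / kwh_per_kg
    print(f"{kg_per_day / 8:.0f} eight-kg nanofactories per day")   # ~12,000
    print(f"{kg_per_day / 1000:.0f} tonnes of nanofactory per day") # ~96 t

    # Feedstock: ~30 trillion cubic feet of US gas per year at ~22 g each.
    gas_tonnes = 30e12 * 0.022 / 1000
    print(f"US gas production: ~{gas_tonnes / 1e6:.0f} million tonnes/year")
    # ~660 Mt/year, so the 6 Mt of feedstock above is about one percent
    # of a single year's production.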
This brings us to the first and probably most important military application of MNT: endless production. Because the factories are exponentially self-replicating and automated,
increases in their numbers could continue until logistic or legal limitations are reached.
These “limits” may be rather extreme. For instance, there may be a limit such that one
soldier can only control five autonomous tanks and twenty autonomous aircraft at any
given time. In that case, given a million soldiers, there would be no point in manufacturing
many more than five million tanks and a hundred million aircraft. Considering that the United States military currently has only 6,344 tanks and 13,683 aircraft, that’s quite a lot, enough

to take over the world many times over. Even a smaller country like Switzerland, if it had
MNT and the US did not, could potentially conquer the world with a few hundred thousand
tanks and aircraft. If a country has domination of the skies, it barely matters how many
soldiers you have; you will be bombed into submission. A country with 150,000 soldiers
(the size of Switzerland’s military) and 100,000 autonomous tanks and aircraft is likely to
defeat a country with 1,000,000 soldiers (US military) and just 20,000 tanks and aircraft,
unless nuclear weapons are brought into the equation.
Think of what would happen if one country with access to MNT began mass-producing tanks and aircraft in this way. A country like China, Russia, or the United States
could increase the firepower of their main battle tank force by over ten times in 24 days, or
a hundred times in 240 days. Here are the calculations: the United States has about 6,000
main battle tanks at 50 tons each, manufacturing 60,000 such tanks would require about 3
million tons of diamond or other carbon allotrope (like buckypaper or other fullerenes), less than one percent of US annual natural gas supply. With 100,000 tonnes of fabricators,
achieving that output would take just over nine days, going by the product manufacturing
time estimates given by Phoenix. If the country were starting out with an even smaller
army, it could conceivably increase its firepower by over a thousand times. All it needs is
the energy (which can come from MNT-manufactured solar panels if need be) and
hydrocarbons, which can be extracted from the atmosphere or fossil fuels. Of course, such
a military buildup need not be limited to tanks. It could include aircraft carriers, jets,
bombers, drones, conventional weapons, and so on. The electronic components could be
manufactured by the nanofactory, in the form of diamondoid nanoelectronics. A few
components, such as ammunition, would require the insertion of parts that could not be
built by nanofactories, requiring some traditional industrial infrastructure.
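As a rough cross-check of this build-out (assuming, as a simplification of ours, that natural gas is about 75 percent carbon by mass):

    # Feedstock for 60,000 fifty-ton tanks built from carbon allotropes.
    tanks = 60_000
    tank_mass_t = 50
    product_t = tanks * tank_mass_t                  # 3,000,000 t of product
    gas_needed_t = product_t / 0.75                  # methane is ~75% carbon
    gas_production_t = 660e6                         # ~US annual gas output
    print(f"{product_t / 1e6:.0f} Mt of product, "
          f"{gas_needed_t / gas_production_t:.1%} of annual gas production")
    # ~3 Mt of tanks, needing ~4 Mt of gas: roughly 0.6 percent of one
    # year's output.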
Naturally, it might upset the current order of things if any one nation could mass-produce so much materiel so quickly. It’s beyond our experience as a civilization. These tanks needn’t be made out of diamond; they could be built out of nanotubes, a kind of fullerene, which are even stronger and are not brittle like diamond. Since they would be
made out of a material much stronger than steel, they could have less mass devoted to
armor, making it possible to manufacture more of them on a per-ton basis. Instead of using
50 tons of fullerenes to build a 50 ton tank, you might use 10 tons of fullerenes and fill up
the hollow spaces with steel shavings to provide inertia and stability. The assembly process

could have a stage where steel shavings are pumped into the shells of each tank, then
melted to form a solid steel plate. Although the nanofactories themselves could only
produce carbon-based products, they could manufacture other systems to handle steel and
other non-carbon materials very rapidly. The entire manufacturing infrastructure, civilian as
well as military, could be rebuilt to better specs with MNT machinery. The performance of
mining machines could be improved ten to a hundred-fold, allowing increased exploitation
of natural resources. All of this would speed up the pace of manufacturing. Futurist Michio
Kaku calls it “the second Industrial Revolution”57.
Besides the mass production of conventional weapons, as well as the swarms of
micro-UAVs and autonomous tanks described in the previous section, MNT would permit
the construction of completely new systems for attack and defense, both stationary and
mobile. One example would be utility fog, clouds of microbots with six mechanical grippers that allow them to “hold hands” and form solid structures in response to external commands58,59,60. Designed by scientist J. Storrs Hall, a cloud of ‘foglets’ holding ‘hands’ would fill about 3 percent of the volume of the air it permeates, with a density of 0.3 g per cc and an anisotropic tensile strength of 1000 psi, about that of wood.
These foglets could move and connect in different patterns, manipulating armor
plates, bombs, or spikes of diamond along the way. A city surrounded by utility fog could
use it to raise diamond plates for shielding from attacks, or launch diamond spikes to
retaliate against besiegers. Utility fog could manipulate mirrors and concentrate sunlight to
overheat or ignite enemy tanks or airplanes. It could project lights and sounds to confuse
the enemy, and partially regenerate if blown apart. Sophisticated utility fog would be closer
to what we think of as ‘magic’ than machines. Each foglet would have a diameter of 100
microns, the width of a human hair. A single nanofactory could manufacture trillions per
hour, enough to fill a cubic meter.
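A one-line sanity check on these figures (assuming roughly one foglet per 100-micron cell of air):

    # How many 100-micron foglets fill a cubic meter?
    foglet_m = 100e-6                      # 100 microns
    per_axis = 1 / foglet_m                # 10,000 along each meter
    count = per_axis ** 3
    print(f"{count:.0e} foglets per cubic meter")    # ~1e12, i.e. trillions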
Besides being used to mass manufacture tanks, airplanes, and utility fog, MNT could
mass-manufacture ordinary structures, from houses to military bases. If carbon became too
expensive or difficult to come by, structures could be built out of sapphire, which is
aluminum oxide. Aluminum is one of the most abundant elements on the planet. With MNT,
it would be feasible to mass-produce sapphire as well as diamond, though it would require
nanofactories with different tool-tips designed to manipulate aluminum oxides. Building

construction might use special mobile nanofactories which saunter around a construction
site and extrude material as they move. In this way, many structures could be built very
quickly with minimal human supervision. It would become possible to pave large areas of
land and put up buildings, even if they were never destined to be used. Mega-cities could
be built, covering hundreds of square miles. Large underground caverns could be
excavated, with sunlight routed into them through the use of MNT-built fiber optics. Entire
cities could be built underground or underwater, supported by diamond or sapphire pillars
and domes. The military implications of all this are extremely relevant to how mankind fares
in the 21st century.
Open Colonization of New Areas
Now that we’ve overviewed some of the key applications of NT and MNT, we turn
particularly towards factors which could cause or exacerbate geopolitical risk or tension in
the context of a post-MNT world. We assume that MNT will be accessible to several major
states around the same time, states like the US, European states, China, India, and so on.
So, events beyond the assembler breakthrough will be molded by the interaction of these
states on the global stage. The alternative is that MNT will be controlled and monopolized by a single state, which would be unprecedented for a manufacturing technology of such broad humanitarian and practical benefit.
Today, there are certain limited resources which the world struggles over and which characterize interactions between states: primarily oil, but also natural gas and other
natural resources such as minerals. With MNT, the value of oil as a fuel goes down
because it becomes possible to build safe nuclear reactors or solar panels in huge
numbers. At the same time, the value of oil as feedstock goes up, because it is rich in
hydrocarbons which can be used to build nanoproducts and nanostructures. Thus, oil-producing states like Saudi Arabia will continue to be important. On the other hand, it is
possible to extract carbon from the atmosphere or biomass. Economic research is needed
to determine how resource prices, demand, and supply could change in a post-MNT world.
Though MNT will lead to new, more powerful, and more numerous weapons and new
possibilities for eliminating poverty, the same old tribal affiliations are certain to persist.
Palestinians and Israelis will still be attacking each other, radical Islamists will still seek
expansion, the United States will still seek to impose political hegemony on the world, with

Russia and China resisting it, and so on. Some of these factors may change in 40 to 50
years, but many of them are likely to persist in recognizable forms. People will not suddenly
become friendly and enlightened just because a new manufacturing technology is invented.
Human nature stays the same even when technology changes. Therefore, we can expect
countries to continue to behave antagonistically towards one another and compete over
resources, with the attendant conflict.
Two resources stand out as likely to be especially valuable in a post-MNT world: carbon and space. We refer primarily to physical space on the planet, not outer space,
though outer space would have great value as well. With MNT, it would become cheap and practical to build large cities at sea, farming seafood and generating power from solar
panels and the heat differential between the ocean depths and the surface. Countries
where there are issues with overcrowding, such as China, Japan, and India, may begin
building sea cities in their territorial waters. Although there are treaties that nominally
address the issue of maritime borders, these treaties were not made for a world where
marine cities can be built all the way up to these borders. The possibility of a “land rush” for
the sea could increase tensions between states like China and Japan, who have a history
of bad blood. There are many oil fields in the shallow seas around the world which could be
exploited by marine cities. The desire to take over these oil fields could lead to territorial
disputes, MNT arms races, and eventually all-out nano-war. In combination with biological
and nuclear weapons, such wars could threaten the survival of our species and lead to
widespread civic breakdown of the kind described in the nuclear weapons chapter.
Besides carbon sources, empty space itself would have value, both for exploring new
possibilities of human habitation (every person on Earth could have their own sapphire
mansion and private jet) as well as providing buffer zones to defend against possible
attack. Besides colonizing the continental shelf with sea cities, there would be an incentive
to build sea cities all the way out to the center of the Atlantic and Pacific oceans. If major
states do not take the initiative to build these cities, independent actors will, assuming
nanofactories are available to the public and international law does not prevent it. New
countries could be formed entirely at sea. Previously barren and mostly valueless areas
such as the taiga and Sahara Desert would increase in value as it suddenly becomes
possible to build real cities there. The Arctic Ocean would increase in value as superior
tools make the recovery of oil and gas there more feasible, prompting conflict between

Canada and Russia. With the aid of MNT, cities could be built on platforms that float on
swampy taiga, opening up millions of new square miles for habitation or construction.
Advances in MNT could provide everyone on Earth with great material wealth and
space, yet unfortunately, there is a “Red Queen” dynamic at work. The Red Queen, an
famous character from Lewis Carroll’s novel Through the Looking Glass, conducts a race
where everyone must run constantly just to stay in the same place. The same applies to
evolution and human status competitions, known as “keeping up with the Joneses”. Even if
you have a mansion and a private jet, it means little if your neighbor has one too. People
don’t just get satisfied after a certain amount of material wealth, they need more, more than
the guy next to them. Most modern people are fabulously wealthy by the standards of our
ancestors, but we may consider ourselves “poor” even while driving an SUV, drinking clean water from the tap, and sleeping in a warm bed in a secure house. The same will apply in
2080 when people are fabulously wealthy by today’s standards but still want more. A
particular issue may be that carbon is limited on this planet and that all the most interesting
and useful things, from diamond to rainforests, are made out of it. There may be
considerable competition to monopolize as much carbon as possible. Asteroid mining could
help add to the carbon budget, but that takes time and effort. Limited resources will be a
cause of conflict, especially in a world where all resources can be dug up and utilized.
Consider the global supply of carbon. The current atmospheric CO2 level is ~362 parts per million (0.036%), which corresponds to 5.2 × 10^14 kg (520 Pg) of carbon. Assuming that a reduction of CO2 levels to the pre-industrial level of 280 ppm is acceptable, ~118 Pg (petagrams) of carbon could be extracted. Assuming a world population of about 10 billion in 2060, that gives us 11,800 kg per person. That is enough for a nice car and plenty of other toys61. Yet,
for many people that may not be enough: they’ll want more. Diamond houses, diamond
factories, diamond palaces, and so on. The thirst for diamondoid nanoproducts could lead
to conflict over the Earth’s carbon dioxide. There may be a race to pull it out of the
atmosphere as quickly as possible. A mass draw-down of CO2 could cause global cooling,
which would be counteracted somewhat by the waste heat generated by nanofactories.
Though climate change would probably not be an immediate concern, the overall context of
fighting over CO2 is definitely a weird prospect, something that has rarely been considered.
If all CO2 were removed from the atmosphere, it would cause the death of all vegetation
and the total collapse of the natural food chain. Other bizarre surprises and “plot twists”
might lie in store for us in a post-nanomanufacturing world.
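The carbon arithmetic above is simple enough to check directly. A minimal sketch in Python (the 520 Pg atmospheric stock, the 280 ppm floor, and the 10-billion population are this section's own assumptions, not measured values):

    # Back-of-envelope check of the per-capita carbon allocation above.
    # All inputs are this section's assumptions, not measured values.
    ATMOSPHERIC_CARBON_PG = 520.0  # total stock at ~362 ppm, in petagrams
    CURRENT_PPM = 362.0
    PREINDUSTRIAL_PPM = 280.0      # assumed acceptable floor
    POPULATION = 10e9              # assumed world population circa 2060

    # The extractable amount scales linearly with the ppm reduction.
    extractable_pg = ATMOSPHERIC_CARBON_PG * (CURRENT_PPM - PREINDUSTRIAL_PPM) / CURRENT_PPM
    extractable_kg = extractable_pg * 1e12   # 1 Pg = 10^12 kg
    per_capita_kg = extractable_kg / POPULATION

    print(f"Extractable: {extractable_pg:.0f} Pg")           # ~118 Pg
    print(f"Per person: {per_capita_kg:,.0f} kg of carbon")  # ~11,800 kg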
Super Empowered Individuals
Previously, we mentioned how nano-suits and implants could combine to create
super-soldiers. Beyond combat capability, MNT will facilitate the creation of super
empowered individuals (SEIs), people who have superior capabilities not just in the military
realm, but also in domains like finance, popular charisma, and geopolitics. The superlative
capabilities that MNT will confer to those who properly exploit it could spill into many areas,
creating SEIs who stand head and shoulders above the rest of humanity. These individuals
could establish totalitarian governments that control their citizens through selective
restriction and distribution of the fruits of MNT, in addition to other coercive means such as
novel forms of surveillance or psychological manipulation62.
The capabilities that MNT would confer would seem almost magical from the
perspective of people unprepared for it. If the power is sufficiently restricted to a few, they
might even seem like gods. Economist Noah Smith has written about “Drone Lords,”
asymmetrically empowered individuals with drone armies63. He compares the rise of
drones to the rise of guns, and calls it “the end of People Power”. Prior to the invention of
the rifle, military power rested in the hands of the few: those who commanded the loyalty of well-trained and well-equipped knights. With the rifle, military training became less important, and the key factor became numbers. Now, as drones are being introduced and
improved, we may enter an era where a small group of people may command a large
drone force, which they can use to dominate an area, enabling a new authoritarianism or a
new imperialism. This could contribute to war, disrupt the current order of NATO/American
world hegemony, or reinforce it to levels beyond what we can imagine today. If drones
alone could destabilize the world order, imagine the degree of disorder which could be
introduced by exponential, automated production of military hardware in general.
MNT will enable very powerful computers that outperform legacy hardware by orders
of magnitude. This will allow those with access to nanocomputers to completely control
quantitative finance and thereby the world economy. More than half of all trades today are
made by software, a proportion that is likely to increase as time goes on. Unless AI improves
considerably, and perhaps even if it does, there will continue to be the risk of a “flash
crash” that wreaks havoc on the world economy. A nation or organization with MNT would
not even need to use military force to take over the world; they could simply accomplish it
with computers. The margin of performance to dominate in quantitative finance is so thin
that trading firms pay top dollar for an office several blocks closer to crucial trading nodes,
to minimize latency in their transactions. The abrupt improvement in performance caused
by the fabrication of nanocomputers will enable complete financial domination by the
parties that possess them. Any nations that want to retain control over their own economies
will need to institute aggressive protectionist measures, and will likely be pressured by the
nano-powers-that-be to deregulate and be subjugated. Hegemonic totalitarianism could
rise in the name of open markets.
Besides finance, MNT will open up new possibilities for the control of human beings
by greatly improving the tools used for psychological study and manipulation. High
sensitivity brain-computer interfaces (BCI) built by nanofactories will allow much more
detailed modeling of human psychological processes, which could lead to effective
strategies for manipulating populations. Widespread nano-surveillance would allow states
to monitor every word their citizens say, both online and off. Combined with drones, this
could lead to dictatorships which are truly impossible to topple, because they can easily put
down threats before they arise. History has shown that regimes on both the right and the left of the political spectrum are fully capable of implementing police states which massacre millions of people. Totalitarian states clashing with nanoweapons could
cause wars which lead to the end of civilization. Democracies should not be considered
automatically safe, either; it seems likely that “democratic peace theory,” the observation
that democracies go to war with each other less often than other forms of government do, may be an artifact of post-WWII American hegemony rather than any inherent feature of democracies. In fact, there are arguments that democracies are more likely to engage in total war than more authoritarian forms of government64. In an authoritarian
government, war begins or ends at the decision of the ruling power, but in a democracy, the
call for war can become a mob dynamic which is impossible to stop once it gets started.
MNT will provide tools for human enhancement in every domain, from improving voice
to appearance to strength and alertness. It will provide tools to extend lifespans, by
producing microbots that flow through the bloodstream, removing dead or diseased cells
more effectively than the immune system. MNT-built microrobots will extend telomeres and
conduct other biological “maintenance work” to stave off the effects of aging, perhaps
extending human lifespans by 10-20 years at first, and eventually indefinitely65,66. There is no cosmic law that states that biological systems cannot be repaired faster than they break down; it’s just extremely complicated and hasn’t been done yet. MNT could enable a sort of
“immortality,” meaning individuals who do not die through aging or disease. The only way
to kill such people would be through physical attack, which may be difficult if they are
surrounded by utility fog or robot soldiers, or concealed deep within the ground or high in
the sky. Such levels of power could create unaccountable individuals. Not “unaccountable”
like today’s politicians and bankers, but unaccountable like a god on Earth. These
individuals might even hide their true selves deep in bunkers and only “manifest” as
projections in utility fog on the surface.
We could say more on the issue of super empowered individuals and how they
interact with the equation of global risk, but additional analysis will be saved for the later chapter which specifically addresses transhumanism and human enhancement. The “inequalities”
which could be created by varying access to nanofactories could make the differences of
today look superficial. The cosmetic improvements to the body alone might allow certain
lucky individuals to captivate millions.
Bio-Technical War Machines
We’ve spent a fair amount of space in this chapter analyzing microbots and drones,
but MNT will enable breakthroughs in larger war machines as well. There will always be a
place in war for such constructions, as there is no substitute for pure mass. Mass has a
number of advantages: it has inertia, which makes it hard to push around; it has thickness and toughness, making it good for defense; it can move through obstacles
effectively, crushing things that stand in its way; it can contain more functions and devices,
giving it more options than smaller systems. Recall that the biosphere was dominated by
the dinosaurs for 135 million years. Although many dinosaurs were small (their bones are less often preserved, due to preservation bias), the occurrence of many large dinosaurs
shows that size was a viable strategy for survival and success in the context of the
Mesozoic era. Mammals have a faster metabolism than dinosaurs, and as such, tend to be
smaller. This is because we need more food per kilogram of body mass, on average, to
survive. But what of a world where greenhouses can be automatically built and maintained
by robots, or nuclear reactors can be integrated into bio-technical war machines the size of
dinosaurs? As bizarre and science-fictional as it may sound, the eventual creation of such
machines seems a near-certainty. They offer too many military benefits for it to be
otherwise.
Drones and microbots are throw-away tools. They are constructed in large numbers,
and designed to be sacrificed. In a drone attack on a future city, thousands or even millions
might be vaporized by powerful laser beams or flak cannons. To supplement these weak
units requires strong units, like tanks but better. The problem with traditional tanks is that
they are not very mobile. Tanks cannot traverse forests or mountain slopes. They cannot
hide underwater, regenerate, or power themselves with a variety of energy sources. They
are not intelligent, and they have no agility. Their armament is limited to a main gun and a
small machine gun, and if a tank is hit in the right spot by an anti-tank round, it’s all over.
They depend upon human operators which make human errors. A human tank operator
who leaves his tank to urinate may return to discover that a grenade has been thrown into
the cabin, ruining the vehicle.
All these deficiencies could be overcome by creating animal-like tanks which contain
both biological and nanotechnological features. These constructs might be 10-50 m (32-164 ft) in size, with armor thick enough to block armor-piercing shells or withstand the pressures at the
bottom of the ocean. Modeled on dinosaurs or large felines or canines, they might have
three sets of legs rather than two, and exploit other design features not accessible to
biological evolution, due to limitations on incremental adaptation. Like certain sauropod
dinosaurs, they might have two “brains” rather than just one, providing crucial redundancy.
Instead of being made up just of individual cells, like all life forms, they could have many
fused components (like bone) interspersed throughout their structure, giving them greater
durability than biological animals. Of course, they would be built out of fullerenes rather
than proteins, making them hundreds of times stronger than steel and several times lighter;
quite unusual for a “living” organism. Powered by ultra-high power density nano-motors
and actuators, they could move very quickly despite their size. Creating bio-technical war
machines that are the size of tanks but run at hundreds of miles per hour is not out of the
question. Some could even fly, like the dragons and gryphons of mythology. Others could
swim to the bottom of the ocean or dig underground, where they could hide undetected
until ready to be deployed. Being machines, they could power down and consume barely
any oxygen or food. If powered by small nuclear reactors, they could survive for decades
without food, much like nuclear submarines only require refueling every twenty years.
These beings would have all the strengths of biology and machines with none of the
weaknesses of either.
The super empowered individuals described earlier might even use these “animals”
as their bodies or exoskeletons, making them even more powerful. A human brain and
spinal cord could be connected to these machines, giving them a brand new body. Ultra-fast regeneration would be possible. These bodies would contain reservoirs of nanomachines that repair damage as soon as it occurs. Even if penetrated by ultra-high-velocity rounds like railgun bolts, these “animals” could regenerate and continue on. A little
bit of “brain” could be contained throughout the animal as an insurance policy, such that
even if the central processor were destroyed, nanomachines could rebuild it based on a
blueprint. Large herds of these machines would be much more effective at warfare than the
tanks we use today, and defeating them might require nothing less than nuclear weapons.
This would obviously increase the chance of nuclear war, providing a plausible justification
for the use of nuclear weapons.
Global Thermal Limit
Given the scenarios and technology outlined in this chapter, it may appear that there is almost no limit to how greatly MNT could be used to remodel the planet's surface and build war machines, but there is a limit. This limit has to do with the global energy budget of the technosphere, the heat dissipation of all human artifice67. The biomass of all human beings combined dissipates roughly 10^12 watts of heat energy into the atmosphere, and our current technology roughly ten times that, 1.2 × 10^13 watts. In comparison, the solar energy absorbed by the Earth's surface is ~1.75 × 10^17 watts. The power dissipation of all vegetation is ~10^14 watts. According to Robert Freitas, “Climatologists have speculated that an anthropogenic release of ~2 × 10^15 watts (~1%
solar insolation) might cause the polar icecaps to melt.” The polar ice caps melting would
not be the end of the world, contrary to public perception, but certainly there is a limit to
how much heat can be generated by nanomachinery before it becomes a problem. Freitas
points out that the ~3.3 × 10^17 watts received by Venus at its cloudtops has made that
planet into a hell-world, with a surface hot enough to melt lead. Somewhere between here
and there seems like a good place to stop.
Freitas summarizes the issue of the hypsithermal limit with respect to nanobots as
follows68:
The hypsithermal ecological limit in turn imposes a maximum power limit on
the entire future global mass of active nanomachinery or "active nanomass."
Assuming the typical power density of active nanorobots is ~10^7 W/m^3, the hypsithermal limit implies a natural worldwide population limit of ~10^8 m^3 of active functioning nanorobots, or ~10^11 kg at normal densities. Assuming the worldwide human population stabilizes near ~10^10 people in the 21st century and assuming a uniform distribution of nanotechnology, the above population limit would correspond to a per capita allocation of ~10 kg of active continuously-functioning nanorobots, or ~10^16 active nanorobots per person (assuming 1 micron^3 nanorobots developing ~10 pW each, and ignoring nonactive devices held in inventory). Whether a ~10-liter per capita allocation (~100 kW/person) is sufficient for all medical, manufacturing,
transportation and other speculative purposes is a matter of debate.
Taking this into consideration, we can imagine new sources of conflict and new risks
to humanity's survival. A nanotech arms race that puts the world energy balance above the
hypsithermal limit could literally cook the planet alive. A number of countermeasures are
possible to buy time, but the only compelling one is moving civilization off-planet. Freitas
states that removal of all greenhouse gases from the atmosphere (which would be the end
of life as we know it, due to the removal of carbon dioxide) would increase the limit by a
factor of 10-20. Speculatively, if the entire atmosphere were removed, it would allow ~2 × 10^16 watts (~10% of solar insolation) of operating nanomachinery while maintaining current
surface temperatures, according to Freitas. These calculations on hypsithermal limits make
it clear that sci-fi societies like Coruscant, the planet in Star Wars that is covered from pole
to pole in dense city, would be impossible. They would simply produce too much heat, and
the planet would be incinerated. More research is needed to clarify the various effects at
different levels of heat generation and how MNT systems could be used to compensate for
these effects.
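Freitas's ceiling follows from simple arithmetic. A minimal sketch in Python, using only the figures quoted above (the ~2 × 10^15 W limit, ~10^7 W/m^3 power density, an assumed water-like density, and a population of 10^10), reproduces his numbers to within an order of magnitude:

    # Reproducing Freitas's active-nanomass ceiling from the quoted figures.
    HYPSITHERMAL_LIMIT_W = 2e15   # ~1% of solar insolation, per Freitas
    NANO_POWER_DENSITY = 1e7      # W/m^3, assumed typical for active nanorobots
    NANOMASS_DENSITY = 1000.0     # kg/m^3, assumed roughly water-like
    POPULATION = 1e10

    max_volume = HYPSITHERMAL_LIMIT_W / NANO_POWER_DENSITY   # ~10^8 m^3
    max_mass = max_volume * NANOMASS_DENSITY                  # ~10^11 kg
    per_capita_kg = max_mass / POPULATION                     # ~10 kg scale
    per_capita_kw = HYPSITHERMAL_LIMIT_W / POPULATION / 1e3   # ~100 kW scale

    print(f"Active nanomass limit: {max_volume:.0e} m^3, {max_mass:.0e} kg")
    print(f"Per person: ~{per_capita_kg:.0f} kg, ~{per_capita_kw:.0f} kW continuous")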
Space Stations and MNT
MNT would open up space in a new way. Even with MNT, the costs of lifting a load to
outer space will be significant. The energy used to lift a payload into space will always be in high demand for other purposes on Earth, meaning launch will likely be restricted to the wealthy and powerful. As of January 2015, launch prices are $4,109 per kilogram ($1,864/lb) to low Earth orbit, for SpaceX's Falcon 9 launcher69. The ability to
manufacture diamondoid rockets with nanofactories would certainly bring the relative cost
of launches down, but rockets still require a lot of mass and fuel to get anywhere, and
given all the fire and smoke they produce, it is likely that the use of large rockets will be
regulated and require a lot of real estate. This ensures that they will remain scarce, even in a world of great material abundance.
To truly bring down costs for space launches requires building megastructures that
allow launches to begin closer to space. Space elevators, towers stretching from the Earth's surface to geostationary orbit, are often floated as a way of bringing down launch costs, but they are impractical. This is because it would be too easy for a terrorist or enemy state to sever one, causing a 35,790 km (22,239 mi) long structure to come crashing down to Earth. A space elevator would simply be too appealing a target for disruption. (This
argument is made at greater length in the chapter on space weapons.)
More plausible is the construction of space piers, extremely tall (100 km, 62 mi)
towers supporting ramps along which spacecraft can accelerate and leave the Earth's
gravity, a concept created by J. Storrs Hall (also the originator of utility fog)70. The launch
would take advantage of the thin air at that altitude, which is a million times thinner than at
the surface. Unlike space elevators, space piers could be heavily reinforced and made
resilient to attack. Space piers could be constructed such that even several nuclear
weapons would not be enough to bring down the entire structure. That would make them a
much more appealing investment than space elevators.
A space pier would consist of a track 300 km (186 mi) long and 100 km (62 mi) tall,
supported by 10-20 trusses below. These trusses would come two at a time, forming
bipods along the length of this colossal structure. An elevator would bring a 10-tonne craft
from the ground level to the launch level, and the craft could launch into space, using roughly one ten-thousandth of the energy it would consume during a conventional rocket launch. Manned
craft would travel at an acceleration of 10 g for launch, which can be withstood by human
beings for the 80 seconds required if “form-fitting fluid sarcophagi” are used, according to
Hall, but “even so, the launch will not be particularly pleasant”. This space pier approach
makes a nice compromise between the dangerous, energy-intense route of using
conventional rockets, and the risky method of building a space elevator, a long, thin, fragile thread to orbit.
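As a sanity check on Hall's figures, the 10 g acceleration and 80 seconds quoted above determine both the exit velocity and the required track length:

    # Consistency check on the space pier figures quoted above.
    ACCEL_G = 10.0       # stated launch acceleration, in g
    G = 9.81             # m/s^2 per g
    BURN_TIME_S = 80.0   # stated duration of acceleration

    a = ACCEL_G * G
    final_speed = a * BURN_TIME_S              # m/s
    track_length = 0.5 * a * BURN_TIME_S**2    # m

    print(f"Exit speed: {final_speed / 1000:.1f} km/s")    # ~7.8 km/s, about LEO orbital velocity
    print(f"Track length: {track_length / 1000:.0f} km")   # ~314 km, near the stated 300 km

The numbers hang together: 80 seconds at 10 g delivers almost exactly low-Earth-orbital speed, and requires a track just over the 300 km Hall specifies.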
With the space pier, many payloads could be launched into orbit in sufficient volume
to get true colonization of the inner solar system going. All the luxuries of Earth could be
sent up into orbit and to the Moon and Mars, from millions of gallons of water, to space
stations which create artificial gravity by spinning, to machines that mine the asteroids to
create additional space stations. Hall estimates that the pier could launch a 10 tonne
payload and use a 1 GW power plant to recharge the tower every five minutes. There's no
reason why we can't multiply that by a thousand, assuming 1,000 one-GW power plants
and 1,000 rails in a row all providing launches. With launches every five minutes, the pier
could launch 10,000 tonnes into space every five minutes. That should be more than
enough to get humanity started with colonization of space.
Kalpana One, a design for a space colony that has been endorsed by the National
Space Society, is a good model of what we can expect a space pier would be used to
launch into orbit71. The space station is a cylinder with a radius of 250 m (820 ft), a length
of 325 m (1,066 ft) and a living area of 510,000 m^2, or 170 m^2 for each of its 3,000
residents. Its mass is 6.3 million tonnes, equivalent to 63 Nimitz-class aircraft carriers. The
first such space colony would likely be built entirely with material launched from the surface
of Earth, while later colonies will be built with materials from the Moon. Assuming the
1,000-rail space pier outlined above operating around the clock, the necessary mass could
be delivered to orbit in 52.5 hours. That's 14 space colonies per month, enough for 42,000
people.
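These delivery figures follow directly from the launch cadence. A short Python sketch, taking Hall's 10-tonne, five-minute cadence and the hypothetical 1,000-rail pier above at face value:

    # Throughput of the hypothetical 1,000-rail space pier described above.
    PAYLOAD_TONNES = 10.0        # per launch, per rail (Hall's estimate)
    INTERVAL_MIN = 5.0           # recharge time per rail
    RAILS = 1000                 # assumed number of parallel tracks
    STATION_MASS_TONNES = 6.3e6  # Kalpana One, ~63 Nimitz-class carriers

    tonnes_per_hour = PAYLOAD_TONNES * RAILS * (60.0 / INTERVAL_MIN)
    hours_per_station = STATION_MASS_TONNES / tonnes_per_hour
    stations_per_month = 30 * 24 / hours_per_station

    print(f"Throughput: {tonnes_per_hour:,.0f} tonnes/hour")          # 120,000
    print(f"One colony: {hours_per_station:.1f} hours of launches")   # 52.5
    print(f"~{round(stations_per_month)} colonies per month, "
          f"housing ~{round(stations_per_month) * 3000:,} people")    # ~14 and ~42,000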
Space colonies have many benefits. By spreading throughout the cosmos, mankind
can become more resilient and eventually immune to any isolated disaster. Space
platforms have their own dangers, however. They could be used as outposts to drop rocks
onto the Earth, bombarding it. A space station that is intentionally directed to impact the
surface would make quite a bang. The current international law of “no weapons of mass
destruction in space” may eventually have to be violated. If there are colonies in space, of
course they will need powerful weapons to defend themselves. Like the scenario with sea
cities, this creates a land grab. Space colonies would be premium property: because of their novelty, because of the new manufacturing possibilities afforded by zero gravity, and because of their position, from which they could launch missions to colonize the rest of the solar system and exploit its resources. Space stations are the ultimate high ground, capable of observing anything on Earth and striking it from above. However, the benefit of increasing the probability of long-term human survival by building space colonies outweighs any disruptive dynamics they might cause. It is important to be aware that with the positives
also come negatives, however.
Phase Two: Grey Goo
Now that we've reviewed many risks and technologies which could be produced by
NT or first-order MNT, we return to grey goo, the danger of out-of-control self-replicating
microbots based on nanotechnology72. The term “nanobots” is often used, but in all
probability, the size of dangerous replicators will be measured in microns, not nanometers.
Their components will have useful features on the nanoscale, however, just like biological
life.
Everything prior to this point is referred to as “phase one: military robotics” in our
analysis. Those are the products that could be built 5-10 years after the first nanofactories
are developed, possibly excepting space colonies and the more advanced bio-technical
hybrids. Grey goo is phase two. It would likely take civilization a bit longer to develop, because it requires human ingenuity, one thing that nanofactories cannot easily replicate. Designing a microbot self-replicator requires human intelligence, probably with
the aid of advanced simulations and extensive experiments involving trial-and-error. The
microbot may have a design that is inspired by biological life, but ultimately it is a different
beast, made out of diamond instead of proteins. Naturally, they would be immune to
biological attack.
Like everything mentioned in this chapter so far, grey goo would have a military
application. By harvesting materials directly from the environment, it could potentially
fabricate military units in the field. Grey goo itself could self-replicate many times from a
single microbot, so if a swarm of grey goo were hit with a powerful explosive, even a
nuclear weapon, it is plausible that some of it could survive and continue to replicate. Grey
goo would be an excellent defense and an effective attack, as long as the opponent did not
have grey goo of his own. Any nutrients not secured or bolted down could be exploited by
the grey goo for self-replication.
Grey goo would need a minimum set of internal components, just like a cell. This
would include a self-replication program (in life: DNA), a coating (membrane), innate
defenses, collective behavioral strategies, sensors, and so on. Assuming grey goo could be
made as effective as the best viruses and bacteria, it would have a self-replication time of
15-30 minutes. Remember that our natural environment is completely coated in trillions of
bacteria and viruses at all times and in all places. It is also coated in other tiny organisms like flatworms, which we cannot see, but which are certainly there. Nature is constant
competition among self-replicators, from the largest sizes to the smallest. They are
everywhere.
The danger originates when grey goo is developed that can out-compete the natural
self-replicators of the world and displace them. If it can overcome the bacteria and viruses
in the environment, it can break them down and dominate the biosphere. Of course, this
could cause the food web to collapse, at least in areas where the goo is replicating. The
biomass of living things is about 560 billion tonnes, which is a rough estimate. Assuming a
replicator that starts off with a mass of about a picogram (one-trillionth of a gram) and a
self-replication time of 30 minutes, grey goo could overrun the biosphere in about 99 doublings, or roughly 50 hours. Of course, even under ideal conditions, replication would take
longer, since physical distance separates concentrations of biomass, but you get the idea.
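The doubling arithmetic is easy to verify. A minimal Python sketch under the stated assumptions (a one-picogram seed, 30-minute doublings, and ~560 billion tonnes of biomass):

    import math

    # Idealized exponential-growth bound for grey goo, per the assumptions above.
    SEED_MASS_PG = 1.0        # starting replicator mass, in picograms
    BIOMASS_TONNES = 560e9    # rough estimate of global biomass
    DOUBLING_TIME_MIN = 30.0  # assumed self-replication time

    biomass_pg = BIOMASS_TONNES * 1e3 * 1e15   # tonnes -> kg -> pg
    doublings = math.log2(biomass_pg / SEED_MASS_PG)
    hours = doublings * DOUBLING_TIME_MIN / 60.0

    print(f"Doublings needed: {doublings:.0f}")                 # ~99
    print(f"Ideal-growth time: {hours:.0f} hours (~2 days)")    # ~49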
The risk of grey goo has been analyzed in a detailed paper by Robert Freitas73. He
concludes that replicators designed to operate slowly enough to be difficult to detect would
require 20 months to consume the biosphere, but that faster speeds are possible if stealth
is not an issue. Freitas proposes a moratorium on all artificial life experiments due to the
grey goo risk, as well as “continuous comprehensive infrared surveillance of Earth's
surface by geostationary satellites” and a long-term program to research countermeasures
to grey goo. It seems likely to us that the only suitable countermeasure to grey goo would
be environmental “blue goo” distributed over the surface of the planet. The blue goo
needn't all be owned by a single entity; it is possible that a network of blue goo owned by
different nations could provide a comprehensive shield in aggregate. More research is
needed to design a global system to handle the threat of grey goo74. Any such system
would need advanced artificial intelligence of the human-friendly kind to respond to threats
quickly enough, making that a key prerequisite for building a grey goo shield, as well as
blurring the lines between efforts towards friendly AI and mitigation of nano-risk.
More Nano Risks
There are many more potential risks from nanotechnology we haven't covered here.
Possibly the most salient is the broad area of human enhancement and AI giving rise to recursively self-improving, smarter-than-human intelligence. We already discussed
AI in a previous chapter, and will review the risks from transhuman intelligence in the later
chapter on that topic. We call these the third stage of NT/MNT risk.
The risks from NT interact with nearly every single other major global risk, from
nuclear war to supervolcano explosion. Missiles built using nanotechnology could be used
to trigger supervolcano eruption. Superdrugs could be built with the assistance of NT which
put humanity on a lotus-eating path to our extinction. Along with the risks we can
recognize, NT could open up new categories of risk we can't even imagine now. In the
bigger picture, it confers god-like powers on primates whose brains are primarily adapted
to living in small communities and handling stone tools. There is bound to be trouble.
Another category of risk has to do with extreme modification of the environment.
Automated and semi-automated programs may create large amounts of nano-garbage,
clogging streams and penetrating the membranes of animals. Nanotechnology allows
major climatological engineering, from dialing down the amount of carbon dioxide in the
atmosphere to building giant mirrors in space that direct light to or away from certain spots
on the Earth. People may modify the environment willy-nilly without stopping to analyze the
consequences of what they are doing. MNT will enable quick environmental changes, on
timescales too fast for reflection. We must be careful, but given humanity's
record when it comes to the use of new technologies, we shouldn't be too hopeful.
Quantifying Risk
Here we guess at the probabilities of NT/MNT causing the extinction of humanity. For
comparison, we put the risk of nuclear extinction in the 21st century at 1 percent. As a
general probability, we will assign the risk of extinction through NT during the 21st century
at 4 percent. This is a rather high value, but we think it is justified. NT contains so many
dangers within it, as well as so many benefits, that it appears certain to fundamentally transform
the world. Even if the more radical visions of MNT do not come to pass, extreme outcomes
are still possible with NT alone. This is especially true with the way that NT feeds into every
other possible risk. It takes certain things which seem merely likely to threaten millions of
lives (pandemics and nuclear war) and pushes them over the edge into threatening billions
of lives and the entire species. Besides that, it creates completely new threats, such as
grey goo. Still, remember that a few hundred nuclear weapons detonated in cities or large
forests could be enough to kill off nearly 99% of the population of the northern hemisphere
through crop and infrastructure failures. NT, by making uranium enrichment much more
affordable, will greatly increase this risk.
The countermeasures necessary to decrease the risks facilitated by NT, including the AI risks that flow from cheaper and more abundant computation, are radical, and may go beyond the boundaries of what
people are willing to do. True risk management seems as if it would require a singleton, a
global decision-making agency that controls the overall course of civilization75. From what
we can tell, the world is not psychologically prepared for that, but it may happen anyway.
Even a disaster that kills many millions or even billions of people is not likely to make an
impression significant enough to make people overcome their differences and defer to a
central authority. We'll instead be left with a shoddy patchwork of countermeasures, all of
which work partially but none of which work completely, and which leave major holes
through which terrors can crawl.
Rather than being something that people rationally respond to in advance, NT, and
especially MNT, when they are more fully developed, will be technologies that catch us by
surprise. The official and unofficial response will be reactive rather than proactive,
designed to protect local resources rather than taking into account the planet as a whole.
This is a mistake. The inevitable holes produced by an excessively local focus will make it
as if there are few global protections at all. Concepts such as the rule of law could
evaporate in the free-for-all chaos of a world where nanofactories are widely available and
people are using them to implement arbitrary designs. Even in the absence of outright
maliciousness, there are many paths to destruction, from simple national self-interest to the
mercurial pranks of nano “script kiddies” making new self-replicators in their basements.
The only real way to deal with the risks from NT, and especially MNT, is to go off-world, concurrently with developing some form of human-friendly transhuman intelligence
which can visualize the complex web of threats and put a system in place to ameliorate
them76.

The map of nanorisks
Nanotech seems to be a smaller risk than AI or biotech, but in its advanced form it offers many routes to omnicide. Nanotech will probably be created after strong biotech but shortly before strong AI (or by AI), so the period of vulnerability is rather short. In any case, nanotech will pass through different stages in its future development, depending mostly on its level of miniaturization and its ability to replicate. To control it in the future, some kind of protection shield will have to be built, which may have its own failure modes.
The main reading about the risk is Freitas's article "Some Limits to Global Ecophagy by Biovorous Nanoreplicators" and the "NanoShield" proposal.
Some integration between bio and nanotech has already started, in the form of DNA origami. So the first nanobots may be bio-nanobots, like an upgraded version of E. coli.

http://immortality-roadmap.com/nanorisk.pdf
References

1. Chris Phoenix. “Personal nanofactories”. 2003a. Center for Responsible Nanotechnology.
2. Chris Phoenix. “Design of a primitive nanofactory”. October 2003b. Journal of
Evolution and Technology, Vol. 13.
3. Eric Drexler. Radical Abundance: How a Revolution in Nanotechnology Will Change
Civilization. 2013. PublicAffairs.
4. Nick Bostrom. “Transhumanist FAQ”. 2003. World Transhumanist Association.
5. Ray Kurzweil. The Singularity is Near. 2005. Viking.
6. Michio Kaku. “Can nanotechnology create utopia?” October 24, 2012. Big Think.
7. Jürgen Altmann. Military Nanotechnology: Potential Applications and Preventive
Arms Control. 2006. Routledge.
8. Richard E. Smalley. “Of Chemistry, Love and Nanobots”. September 2001. Scientific
American.
9. Richard Jones. “Is mechanosynthesis feasible? The debate moves up a gear.”
September 16, 2004. Soft Machines.
10. Hongzhou Gu, Jie Chao, Shou-Jun Xiao & Nadrian C. Seeman. “A proximity-based
programmable DNA nanoscale assembly line”. 2010. Nature 465, 202–205 (13 May
2010).
11. Drexler 2013.
12. Phoenix 2003b.
13. Nenad Ban, Poul Nissen, Jeffrey Hansen, Peter B. Moore, Thomas A. Steitz. “The
Complete Atomic Structure of the Large Ribosomal Subunit at 2.4 Å Resolution”.
Science. August 11, 2000: Vol. 289 no. 5481 pp. 905-920.
14. Smalley 2001.
15. K. Eric Drexler, David Forrest, Robert A. Freitas Jr., J. Storrs Hall, Neil Jacobstein,
Tom McKendree, Ralph Merkle, Christine Peterson. “On Physics, Fundamentals,
and Nanorobots: A Rebuttal to Smalley’s Assertion that Self-Replicating Mechanical
Nanorobots Are Simply Not Possible”. September 2001. Institute for Molecular
Manufacturing.
16. Ralph Merkle. “How good scientists reach bad conclusions”. April 2001. Foresight
Institute.
17. Ralph Merkle and Robert A. Freitas. “Remaining Technical Challenges for Achieving
Positional Diamondoid Molecular Manufacturing and Diamondoid Nanofactories”.
2007. Nanofactory Collaboration.
18. Chris Phoenix. “The Hollowness of Denial”. August 16, 2004. Center for Responsible
Nanotechnology.
19. Phoenix 2004.
20. Gu 2010.
21. Chris Phoenix and Tijamer Toth-Fejel. “Large-Product General-Purpose Design and
Manufacturing Using Nanoscale Modules”. May 2, 2005. NASA Institute for
Advanced Concepts. CP-04-01 Phase I Advanced Aeronautical/Space Concept
Studies.
22. Chris Phoenix. “Powerful Products of Molecular Manufacturing”. 2003c. Center for
Responsible Nanotechnology.
23. Phoenix 2003c.
24. Phoenix 2003c.
25. Robert A. Freitas. “Some Limits to Global Ecophagy by Biovorous Nanoreplicators,
with Public Policy Recommendations”. April 2000. Foresight Institute.
26. Eric Drexler and Chris Phoenix. “Grey goo is a small issue”. December 14, 2003.
Center for Responsible Nanotechnology.
27. Drexler & Phoenix 2003.
28. Tihamer Toth-Fejel. “A few lesser implications of nanofactories: Global Warming is
the least of our problems”. 2009. Nanotechnology Perceptions 5:37–59.
29. Nick Bostrom. “What is a singleton?” 2006. Linguistic and Philosophical
Investigations, Vol. 5, No. 2 (2006): pp. 48-54.
30. Cresson Kearny. Nuclear War Survival Skills. 1979. Oak Ridge National Laboratory.
31. Chris Phoenix. “Frequently Asked Questions”. 2003d. Center for Responsible
Nanotechnology.
32. Phoenix 2003d.
33. David Crane. "Trackingpoint XActSystem Precision Guided Firearm (PGF) Sniper
Rifle Package with Integrated Networked Tracking Scope (Tactical Smart Scope)”.
January 15, 2013. Defense Review.
34. Eric Drexler. Engines of Creation: the Coming Era of Nanotechnology. 1986.
Doubleday.
35. Bryan Caplan. “The Totalitarian Threat”. 2006. In Nick Bostrom and Milan Cirkovic,
eds. Global Catastrophic Risks. Oxford: Oxford University Press, pp. 504-519.
36. Altmann 2006.
37. Altmann 2006.
38. Eliezer Yudkowsky. Creating Friendly AI 1.0: The Analysis and Design of Benevolent
Goal Architectures. 2001. The Singularity Institute, San Francisco, CA, June 15.
39. Mark Gubrud. “Nanotechnology and International Security”. 1997. Draft paper for a
talk given at the Fifth Foresight Conference on Molecular Nanotechnology,
November 5-8, 1997; Palo Alto, CA.
40. Alan Robock, Luke Oman, Georgiy L. Stenchikov, Owen B. Toon, Charles Bardeen,
and Richard P. Turco. “Climatic consequences of regional nuclear conflicts”. 2007a.
Atmospheric Chemistry and Physics., 7, 2003-2012.
41. Alan Robock, Luke Oman, and Georgiy L. Stenchikov. 2007b. “Nuclear winter
revisited with a modern climate model and current nuclear arsenals: Still
catastrophic consequences”. 2007b. Journal of Geophysical Research, 112,
D13107.
42. ALPHA Collaboration: G. B. Andresen, M. D. Ashkezari, M. Baquero-Ruiz, W.
Bertsche, P. D. Bowe, E. Butler, C. L. Cesar, M. Charlton, A. Deller, S. Eriksson, J.
Fajans, T. Friesen, M. C. Fujiwara, D. R. Gill, A. Gutierrez, J. S. Hangst, W. N.
Hardy, R. S. Hayano, M. E. Hayden, A. J. Humphries, R. Hydomako, S. Jonsell, S.
L. Kemp, L. Kurchaninov, N. Madsen, S. Menary, P. Nolan, K. Olchanski, A. Olin, P.
Pusa, C. Ø. Rasmussen, F. Robicheaux, E. Sarid, D. M. Silveira, C. So, J. W.
Storey, R. I. Thompson, D. P. van der Werf, J. S. Wurtele, Y. Yamazaki.
“Confinement of antihydrogen for 1,000 seconds”. Nature Physics, 2011.
43. Altmann 2006.
44. Department of Defense. “Research, Development, Test, and Evaluation, Defense-Wide: Volume 1: Defense Advanced Research Projects Agency”. February 2005. pp. 154-155.
45. Space Daily. “DARPA Demonstrates Micro-Thruster Breakthrough”. May 9, 2005.
46. Brian Wowk. “Phased Array Optics”. October 3, 1991. http://www.phased-array.com/1996-Book-Chapter.html
47. Nicola M. Pugno. “The Egg of Columbus for making the world toughest fibres”. April
24, 2013. arXiv:1304.6658 [cond-mat.mtrl-sci] arxiv.org/abs/1304.6658
48. Kris Osborn. “Navy Plans to Test Fire Railgun at Sea in 2016”. April 7, 2014.
Military.com.
49. Anita Hamilton. “This Gadget Makes Gallons of Drinking Water Out of Air”. April 24,
2014. TIME.
50. Patricia Kime. “Study: 25% of war deaths medically preventable”. June 28, 2012.
Army Times.
51. Mihail C. Roco and William Sims Bainbridge, eds. Converging technologies for improving human performance: nanotechnology, biotechnology, information technology and cognitive science. 2002. U.S. National Science Foundation.
52. Roco 2002.
53. P S Sreetharan, J P Whitney, M D Strauss and R J Wood. “Monolithic fabrication of
millimeter-scale machines”. 2012. Journal of Micromechanics and Microengineering.
54. Byungkyu Kim, Moon Gu Lee, Young Pyo Lee, YongIn Kim, GeunHo Lee. “An
earthworm-like micro robot using shape memory alloy actuator”. 2006. Sensors and
Actuators A: Physical, Volume 125, Issue 2, 10 January 2006, Pages 429–437.
55. Conversation with Michael Vassar.
56. Phoenix 2003b.
57. Kaku 2012.
58. John Storrs Hall. “Utility Fog: A Universal Physical Substance”. 1993. In Vision-21:
Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis,
ed., NASA Publication CP-10129, pp. 115-126 (1993).
59. John Storrs Hall. “Utility Fog: The Stuff that Dreams Are Made Of”. July 5, 2001.
KurzweilAI.
60. John Storrs Hall. “What I want to be when I grow up, is a cloud”. July 6, 2001.
KurzweilAI.
61. Robert Bradbury. “Sapphire Mansions: Understanding the Real Impact of Molecular
Nanotechnology”. October 2001. Aeiveos.
http://web.archive.org/web/20081121233908/http://www.aeiveos.com:8080/~bradbury/Papers/SM.html
62. Caplan 2006.
63. Noah Smith. “Drones will cause an upheaval of society like we haven’t seen in 700
years”. March 11, 2014. Quartz.
64. Hans-Hermann Hoppe. Democracy: the God That Failed. 2001. Transaction
Publishers.
65. Kurzweil 2005.
66. Robert A. Freitas. Nanomedicine, Volume I: Basic Capabilities. October 1999.
Landes Bioscience.
67. Freitas 1999.
68. Freitas 1999.
69. “Upgraded SpaceX Falcon 9.1.1 will launch 25% more than old Falcon 9 and bring
price down to $4109 per kilogram to LEO”. March 22, 2013. NextBigFuture.
70. J. Storrs Hall. “The Space Pier: a hybrid Space-launch Tower concept”. 2007.
Autogeny.org.
71. Al Globus, Nitin, Arora, Ankur Bajoria, Joe Straut. “The Kalpana One Orbital Space
Settlement Revised”. 2007. American Institute of Aeronautics and Astronautics.
72. Freitas 2000.
73. Freitas 2000.
74. Michael Vassar and Robert A. Freitas. “Lifeboat Foundation NanoShield Version
0.90.2.13”. 2006. Lifeboat Foundation.
75. Bostrom 2006.
76. Eliezer Yudkowsky. “Artificial Intelligence as a Positive and Negative Factor in
Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M.
Ćirković, 308–345. 2008. New York: Oxford University Press.

Chapter 15. Space Weapons
The late 21st century could witness a number of existential dangers related to possible
expansion into space and the deployment of weapons there. Before we explore these, it is
necessary to note that mankind may choose not to engage in extensive space expansion
during the 21st century, so many of these risks may be a non-issue, at least during the next
100 years. A generation of Baby Boomers and their children were inspired and influenced
by science fiction such as Star Trek and Star Wars, which may lead us to systematically
overestimate the likelihood of space expansion in the near future. Similarly, limited success
at building launch vehicles by companies such as SpaceX and declarations of desire to
colonize Mars by its CEO Elon Musk are a far cry from self-sustaining, economically
realistic space colonization.
Space colonization feels futuristic; it feels like something that should happen in the future, but that is not a compelling argument for why it actually will. During the 60s and 70s,
many scientists were utterly convinced that expansion into space was going to occur in the
2000s, with permanent moon bases by the 2020s. This could still happen, but it seems far
less likely than it did during the 70s. In our view, large-scale space colonization is unlikely
during the 21st century (unless there is a Singularity, in which case anything is possible),
but could pick up shortly after it. This chapter examines the possibility that it will happen
during the 21st century even though it may not be the most likely outcome.
There are many challenges to colonizing space, which make it more appealing to
consider colonizing places like Canada, Siberia, the Dakotas, or even Greenland or
Antarctica first. Much of this planet is completely uninhabited and unexploited. If we seek to
travel to new realms, exploit new resources, and so on, we ought to look to North Dakota
or Canada before we look to the Moon or Mars. The economic calculus is strongly in favor
of colonizing these locations before we colonize space. First of all, lifting anything into
space is extremely expensive: $2,200 per kilogram ($1,000/lb) to low Earth orbit, at least1.
Would the American pioneers have ventured West if they needed to pay such a premium
for each kilogram of baggage they carried? Absolutely not. It would be impractical. Second,
space is empty. For the most part, it is a void. All proposals for developing it, such as space
colonies, asteroid mining, or helium-3 harvesting on the Moon, have extremely high capital
investment costs that will make them prohibitive to everyone well into the 21st century.
Third, space and zero gravity are dangerous to human health. Cosmic rays and
weightlessness cause a variety of health problems and a heightened risk of cancer2, not to
mention the constant risk of micrometeorites3,4 and the hazards of taking spacewalks to
repair even the most minor equipment. These three barriers to entry make space
colonization highly unlikely during the 21st century unless advanced molecular
nanotechnology (MNT) is developed, but few space enthusiasts even know what these
words mean, and as we reviewed in the earlier chapter on the topic, basic research
towards MNT is extremely slow in coming.
To highlight the dangers of space, consider some recent comments by the Canadian
astronaut Robert Thirsk, who calls a one-way Mars trip a “suicide mission”5. Thirsk, who
has spent 204 days in orbit, says we lack the technology to survive a trip to Mars, and that
he spent much of his time in space repairing basic equipment like the craft's CO2
scrubbers and toilet. His comments were prompted by plans by the Netherlands-based
Mars One project to launch a one-way trip to Mars, but they also apply to any long-term
efforts in space, including space stations in low Earth orbit, trips to the asteroids, lunar
colonies, and so on. A prerequisite for space colonization is basic systems, like CO2 scrubbers and toilets, that break down only very rarely. Until we can manufacture these
systems with extreme reliability, we haven't even taken the first serious step towards
colonizing space. You can't colonize space until you can build a toilet that doesn't
constantly break down.
Next, the topic of getting there. Consider a rocket. It is a bomb with a hole poked in
the side. A rocket is a hollow skyscraper filled with enormous amounts of fuel costing tens
of millions of dollars, all of which is burned away in a few minutes, never to be recovered.
Rockets have a tendency to spontaneously explode for the most trivial of reasons, such as
errors in a few lines of code6,7. If the astronauts on-board do happen to make it to orbit in
one piece, a chipped tile is enough to seal their fate during reentry8. Even if they do make it
to orbit, what is there? Absolutely nothing. To make true use of space requires putting
megatons of equipment, water, soil, and other resources up there, creating a simulacrum of
Earth. Without extremely ambitious engineering projects like Space Piers (to be described
soon) or asteroid-towing spacecraft, a space station is just too small, a vulnerable tin can
filled with slightly nauseous people who must exercise vigorously on a continuous basis to
make sure their bones don't become irreversibly smaller9. A finger-sized hole in a space station is enough to kill everyone on board10. Of the five Space Shuttle orbiters that flew, two were destroyed in catastrophic accidents. In any other mode of transport, that failure rate would be considered disqualifying. Without radically improved
materials, automation, means of launch, reliability, and tests spanning many decades,
space is only a highly experimental domain suitable for a few dozen specialists at a time.
“Space hotels,” notwithstanding the self-promotional announcements made by companies
such as the Russian firm Orbital Technologies11, are more likely to resemble jail cells than the
Ritz for decades to come.
Since the 1970s, space has been the ultimate mismatch between vision and
practicality. This is not to say that space isn't important, it eventually will be. People are just
prone to vastly underestimating how soon that is likely to be. For the foreseeable future (at
least the first half of the 21st century), it is likely to just be a diversion for celebrities who
take quick trips into low Earth orbit. One accident that causes the death of a few celebrities
could set the entire field back by a decade or more.
While the Baby Boomer generation, among the most enthusiastic about space, moves into retirement, a new generation is being raised on computers and video games, a new “inner space” that offers more than outer space plausibly can12. Within
a couple decades, there will be compelling virtual reality and haptic suits that put the user
in another place, psychologically speaking. These worlds of our own invention have more
character and practical value than a dangerous vacuum at near absolute zero temperature.
If people want to experience space, they will enjoy it in virtual reality, in the safety of their
homes on terra firma. It will be far cheaper and safer to build interactive spaces that
simulate zero-g with robotic suspension systems and VR goggles than to actually launch
people into space and take that risk. Colonizing space is not like Europeans colonizing the
Americas, where both areas have the same temperature, the same gravity, the same
abundant organic materials, wide open spaces for farming, an atmosphere to protect them
from ultraviolet light, and the familiarity of Earth's landscape. The differences between the
Earth's surface and the surface of the Moon or Mars are shocking and extreme.
Otherworldly landscapes are instantly deadly to unprotected human beings. Without a
space suit, a person on Mars would lose consciousness in 15 seconds due to a lack of
oxygen. Within 30 seconds to 1 minute, the blood boils due to low pressure. This is fatal.
To construct an extremely effective and reliable space suit, such as a buckypaper skintight
suit, would likely require molecular manufacturing.
Few advocates of space colonization understand the level of technological
improvement which would be required for humans to live in orbit, on the Moon, or on Mars in
any appreciable numbers for any substantial length of time. You'd need a space elevator,
Space Pier13, or a large set of mass accelerators, which would cost literally trillions of dollars
to build. (Not enough rockets could be built to launch thousands of people and all the
resources they would need; they are too expensive.) Construction would be a multi-decade
project unless it were carried out almost completely by robots. Even if such a bridge were
built, the energy costs of ferrying items up and sending them out would still be in the
hundreds of dollars per kilogram. Loads of a few tens of tons could be sent every 5-10
minutes or so at most per track (for a Space Pier), which is a serious limitation if we
consider that a space station that can hold just 3,000 people would weigh seven million
tons14. The original proposed design for Kalpana One, a space station, assumes $500/kg
launch costs and hundreds of thousands of flights from the Moon, Earth, and NEOs to
deliver the necessary material, including millions of tons of regolith for radiation shielding.
This is all to build a space station for just a few thousand people which has no economic
value. The inhabitants would be sealed on the inside of a cylinder. It seems very difficult to
economically justify such a project unless the colony is filled with multi-millionaires giving up a major portion of their wealth just to live there. Meanwhile, without advanced
robotics or next-generation self-repairing materials to protect them, a single hull breach
would be all it takes to kill everyone on board. The cost of building a rotating colony alone
is prohibitive unless the majority of the materials-gathering and launch work is conducted in
an automated fashion by robots and artificial intelligences. Without the assistance of highly
advanced nano-manufacturing and artificial intelligence, such a project would likely be
delayed to the early 22nd century. It is often helpful to belabor these points, since so many
intelligent people have such an emotional over-investment in space colonization and space
technologies, and a naïve underestimation of the difficulties involved.
Keeping all this in mind, this chapter will look to a possible future in the late 21st
century, where, due to abrupt and seemingly miraculous breakthroughs in automation and
nano-manufacturing, it becomes economically feasible to construct large facilities in space,
including space colonies and asteroid mines, which fundamentally change the context of
human civilization and open up entirely new risks. We envision scenarios where hundreds
of thousands of people can make it into space with the required tools to keep them there,
within the lifetime of some people alive today (the 2060s-2100s). The possibility of
expanding into space would tempt the nations of Earth to compete for new resources, to
gain a foothold in orbit or on the Moon before their rivals do. Increased competition
between the United States, Russia, and China could become plausible, much like
competition between the US and Russia for Arctic resources is looming today. The
scenarios analyzed in this chapter also have relevance to human extinction risks on
timescales beyond the 21st century, and we briefly violate our exclusive focus on the 21st
century in some of these sections.
Overview of Space Risks
Space risks are in a different category from biotech risks, nuclear risks,
nanotechnology, robotics, and Artificial Intelligence. They are in a category of lower
intensity. Space risks require much more scientific effort and advanced technology to
become a serious global threat than any of the other risks mentioned. Even the smallest
space risks that could plausibly cause the extinction of mankind require megaengineering
projects with space stations tens, hundreds, even hundreds of thousands of kilometers
across. Natural space risks, such as Gamma Ray Bursts or massive asteroid or large
comet impacts, only occur once every few hundred million years. With such low
probabilities of natural disasters from space, our attention is well spent on artificial space
weapons instead, which could be constructed on the timescale of decades or centuries.
Though such risks may seem far off today, they may develop over the course of the 21st or
22nd centuries to become a more substantial portion of the total probability mass of global
catastrophic risk. Likewise, they may not. The only way to make a realistic estimate is to
observe technological developments as they progress. Today's space efforts, such as
those by SpaceX, likely have very little to do with future space possibilities, since, as we've
argued and will continue to argue, any large-scale future space exploitation will likely be
based on MNT, not on anything we are developing today. Whether space technology
appears to be progressing slowly or quickly from our vantage point, it will be leapfrogged
when and if MNT is developed. If MNT is not developed in the 21st century, then space
technologies pose no immediate threat, since weapons platforms will not be built on a
scale large enough to do any serious damage to Earth. Full-scale space development and
possible global risk from space is almost wholly dependent on molecular manufacturing; it
is hard to imagine it happening otherwise, and especially not in this century. Other technologies simply cannot build objects on a scale that would make them notable in the context of global risk. The masses and energies involved are too great, on the order of tens of
thousands of times greater than global annual electricity consumption. We discuss this in
more detail shortly.
There are four primary anthropogenic space risks: 1) orbiting mirrors or particle beam
weapons that set cities, villages, soldiers, and civilians on fire, 2) nuclear weapons
launched from space, 3) biological or chemical weapons dispersed from space, and 4)
hitting the Earth with a fast-moving projectile. Let's briefly review these. To wipe out
humanity with orbiting mirrors, the aspiring evil dictator would need to build a lot of them;
hundreds or thousands, each nearly a mile in diameter, then painstakingly aim them at
every human being on Earth until they all died. This would probably take decades. Then
the dictator would need to kill himself and all his followers, or accidentally achieve the
same; otherwise the total extinction of humanity (what this book is exclusively concerned
with) would not be secured. This is obviously not a highly plausible scenario, but we aren't
ruling anything out here. The nuclear weapons scenario is more plausible; enough nuclear
weapons could be launched from a space platform that the Earth goes through a crippling
nuclear winter and becomes temporarily uninhabitable. This could be combined with space
mirror attacks, artificially triggering supervolcano eruptions, and so on. The third scenario,
dispersion of biological weapons, will be discussed in more detail later in the chapter. The
fourth, hitting the Earth with a giant projectile, is very complicated and energy-intensive,
and we also cover it at some length here. This scenario is notable because while difficult to
pull off, a sufficiently large and fast projectile would be extremely destructive to the entire
surface of the Earth, sterilizing it in one go. Such an attack could literally burn everything
on the surface to ashes and make it nearly uninhabitable. Fortunately, it would require
about 100,000 times more energy than the United States consumes in a year to accelerate
a projectile to a suitable speed, among other challenges.
Deviation of Asteroids
The first catastrophic space risk that many people immediately think of is the
deviation of an asteroid to impact Earth, like the kind that killed off the dinosaurs. There are
a number of reasons, however, that this would be difficult to carry out, inertia being
foremost among them. Other attack methods would be preferable from a military and cost
perspective if one were trying to do a tremendous amount of damage to the planet. The
energy required to deviate an asteroid of any substantial size is prohibitive, so much so
that in nearly every case, it would be preferable to just build an iron projectile and launch it
directly at the Earth with rockets, or to use nuclear weapons or some other destructive
means instead.
One of the definitions of a planet is that it is a celestial body which has cleared the
neighborhood around its orbit. Its great mass and gravity have pulled in all the rocks that
were in its orbital neighborhood billions of years ago, when the solar system was formed
out of dust and rubble. These ancient rocks are mostly long gone from Earth's orbit. Of
the three categories of Near-Earth Objects (NEOs), the closest, the Aten asteroids, number only 815, and nearly all of them are smaller than 100 m (330 ft) in diameter [15]. A 100 meter diameter asteroid, impacting into dense rock, creates about a 1 megaton explosion, the size of a modern nuclear weapon [16]. This is enough to wipe out a city if it is in the wrong place at the wrong time, but is not likely to have any global effects. About 867 NEOs are 1 km (3,280 ft) in size or greater, and 167 of these are categorized as PHOs (potentially hazardous objects) [17]. About 92 percent of these are estimated to have been discovered so
far. If a 1 km (0.6 mi) wide impactor hit the Earth, it would release a couple of tens of thousands of megatons of TNT equivalent in energy, enough to create a 12 km (7 mi) crater and a century-long period of lower temperatures, which would affect harvests worldwide and could lead to billions of deaths by starvation [18,19,20,21,22,23]. It would be a catastrophic
event, creating an explosion more than a hundred times greater than the largest atom
bomb ever detonated, but it would not threaten humanity in general. The asteroid that
wiped out the dinosaurs was ten times larger.
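As a rough sanity check on these figures (our own back-of-the-envelope estimate, not a calculation from the sources cited above), the kinetic energy of a 1 km impactor can be computed in a few lines of Python; the density and impact velocity are typical assumed values for stony NEOs:

import math

# Rough kinetic energy of a 1 km stony asteroid impact.
# Assumed values: density ~3,000 kg/m^3, velocity ~17 km/s (typical NEO).
radius_m = 500.0                    # 1 km diameter
density = 3000.0                    # kg/m^3
velocity = 17_000.0                 # m/s
J_PER_MEGATON = 4.184e15            # joules per megaton of TNT

mass = (4.0 / 3.0) * math.pi * radius_m**3 * density   # ~1.6e12 kg
energy_j = 0.5 * mass * velocity**2                    # ~2.3e20 J
print(f"{energy_j / J_PER_MEGATON:,.0f} megatons")     # ~54,000 Mt

The result, a few tens of thousands of megatons, matches the range quoted above.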
To evaluate the risk to humanity from asteroid redirection, we located the largest
possible NEO with a future trajectory that brings it close to Earth, and calculated the energy
input which would be required to cause it to impact. The asteroid that stands out the most
is 4179 Toutatis, which will pass within 8 lunar distances of the Earth in 2069. The distance
to the Moon is 384,400 km (238,855 miles), meaning 8 lunar distances is 3,075,200 km
(1,910,840 mi) from Earth. 4179 Toutatis is a huge rock with dimensions approximately
4.75×2.4×1.95 km (2.95×1.49×1.21 mi), shaped like a lumpy potato, with a mass of about
50 billion tonnes. Imagine pushing 50 billion tonnes for almost two million miles by hand.
Seems impossible, right? Pushing it with a rocket turns out to be almost as hard, by both the standards of today's technology and the likely technology available in the 21st century.
About 1.5 × 10^16 Newton-seconds of impulse would be required, equal to the total impulse of about 2,142,857 Saturn V heavy-lift rockets. The Saturn V rocket is what took the Apollo astronauts to the Moon; it is 363.0 feet (110.6 m) tall and 33.0 feet (10.1 m) in diameter, weighs 3,000,000 kg (6,600,000 pounds), and costs about $47.25 billion in 2015 dollars to build. Thus, it would cost about $101,250 trillion to redirect the asteroid with currently imaginable technology, about 6,457 times the annual GDP of the United States. But wait: those rockets need other rockets to send them up to space in one piece and unused. If we just sent the rockets up into space on their own, they would be depleted and couldn't push
the asteroid. You get the idea. If a nation can afford hundreds of millions or possibly billions
of Moon rockets, it would be far cheaper and more destructive to directly bombard the
target with such rockets than to redirect an asteroid that is mostly made of loose rubble
anyway and wouldn't even cause that much destruction when it lands. Various impact
effects calculators available online allow for an estimate of the immediate destruction
caused by a single impact [24].
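The arithmetic behind these numbers can be reproduced directly. In the sketch below, the required impulse is the figure given above; the per-rocket total impulse (~7 × 10^9 Newton-seconds) and the GDP figure are the values our ratios imply, not independently sourced:

# Reproducing the Toutatis-deflection arithmetic from the text.
required_impulse = 1.5e16     # N*s, from the text
saturn_v_impulse = 7.0e9      # N*s per rocket, implied by the text's count
saturn_v_cost = 47.25e9       # USD (2015), per the text
us_gdp = 15.68e12             # USD, the GDP figure implied by the text

n_rockets = required_impulse / saturn_v_impulse        # ~2.1 million
total_cost = n_rockets * saturn_v_cost                 # ~1e17 USD
print(f"{n_rockets:,.0f} rockets, ${total_cost:.3e}, "
      f"{total_cost / us_gdp:,.0f}x US GDP")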
Other asteroids are even more massive or distant from the Earth. 1036 Ganymed, the
largest NEO, is about 34 km (21 mi) in diameter, with a mass of about 10^17 kg, a hundred
trillion tonnes. Its closest approach to the Earth, 55,964,100 km (34,774,500 mi), is roughly
a third of the distance between the Earth and the Sun. Ganymed would definitely destroy
practically everything on the surface of the Earth if it impacted us, but with masses and
distances of that magnitude, no one is directing it to hit anything. It is difficult for us to
intuitively imagine how much momentum a 34 km (21 mi) wide asteroid has and how much
energy is needed to move it even a few feet off its current course. Exploding all atomic
bombs in the US and Russia's nuclear arsenals would barely scratch it. The entire energy
output of the history of human civilization on Earth would scarcely move it by a few
hundred feet. One day, a super-advanced civilization may be able to slowly move objects
like this with energy from solar panels larger than Earth's surface, but it is not on the top of
our list of risks to humanity in the 21st century. The same applies to breaking off a piece of
Ganymed and moving it—it is simply too distant. Asteroids in the asteroid belt are even
more distant, and even more impractical to move. It would be far easier to build a giant
furnace and cook the Earth's surface to a crisp than to move these distant asteroids.
Chicxulub Impact
As many know, approximately 65.5 million years ago, an asteroid 10 km (6 mi) in
diameter crashed into the Earth, causing the extinction of roughly three-quarters of all
terrestrial species, including all non-avian dinosaurs, and a third of marine species [25]. The
effects of this object hitting the Earth, just north of the present-day Yucatan peninsula in
Mexico, were severe and global. The impact kicked up about 5 × 10^15 kg of flaming ejecta, sending it well above the Earth's atmosphere and raining it down around the globe at velocities between 5 and 10 km/s [26]. Reentering the skies worldwide, the tremendous air
friction made this material glow red-hot and broiled the surface in thermal radiation
equivalent to 1 megaton nuclear bombs detonated at 6 km (3.7 mi) intervals around the
globe. This is like detonating about 20 million nuclear weapons in the skies above the
Earth. For at least an hour and as long as several hours, the entire surface, from Antarctica
to the equator, was bathed in thermal radiation 50 to 150 times more intense than full
sunlight. This is enough to ignite most wood, and certainly enough to ignite all the dry tinder on the world's forest floors. A thick ash layer in the geologic record shows us that the entire
biosphere burned down. Being on the opposite side of the planet from the impact would hardly have helped. In fact, the amount of flaming ejecta raining from the sky at the so-called “antipodal point” was even greater than anywhere except immediately around the impact.
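The order of magnitude of this heat pulse can be checked with a simple energy budget; the re-entry speeds and the one-hour radiating time below are our own assumptions, so the outputs should be read as rough bounds:

import math

# Thermal energy delivered by re-entering Chicxulub ejecta, spread over
# the whole Earth. Ejecta mass is from the text; speeds are assumed.
ejecta_mass = 5e15                        # kg
earth_area = 4 * math.pi * 6.371e6**2     # m^2
solar_const = 1361.0                      # W/m^2, full sunlight
J_PER_MEGATON = 4.184e15

for v in (5e3, 7.5e3, 10e3):              # m/s, assumed re-entry speeds
    energy = 0.5 * ejecta_mass * v**2
    flux = energy / (3600 * earth_area)   # W/m^2 if radiated over 1 hour
    print(f"v={v/1e3:4.1f} km/s: {energy/J_PER_MEGATON/1e6:3.0f} million Mt, "
          f"~{flux/solar_const:3.0f}x sunlight for an hour")

At these assumed speeds the pulse comes out at tens of millions of megatons and tens to about a hundred times full sunlight, consistent with the figures above.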
The ash layer evidence that the biosphere burned down is corroborated by the
survival pattern of species that made it through the K-T extinction event, as it is called [27].
The animals that survived were those with the ability to burrow or hide underwater during
the heat flux in the hour or two after the impact. During this time, the surface would literally have been as hot as a broiler, and almost every single large animal, including
favorites like Tyrannosaurus Rex and Triceratops, would have been cooked to a blackened
steak. A few fortunate enough to hide underwater or in caves may have survived. If they were large, however, they wouldn't have survived for long, since the impact winter began within a few weeks to a few months of the impact, lasting for decades and causing even further destruction [28]. Living plant and animal matter would have been scarce, meaning only
detritivores—animals that can survive on detritus—had enough to eat. During this time,
only a few isolated communities in “refugia,” protected places like swamps alongside
overhanging cliffs or equatorial islands, would have survived. Examples of animals that
would have had a relatively easy time of surviving would be the ancestors of modern-day
earthworms, pill bugs, and millipedes, all of which feed mainly on detritus.
Events like the Chicxulub impact, which happen only once every few hundred million
years, are different from some of the earlier risks discussed in this book (except AI) in the
sense that they are more comprehensively destructive and involve the release of more
energy, especially energy in the form of heat. Whereas some individuals may have
immunity to certain microbes during a global plague, or be able to survive nuclear winter in
a self-sufficient fortress in the mountains, an asteroid or comet impact that throws flaming
ejecta across the planet has a totality and intensity that is hard for many other risks to
match. The only risk we've discussed so far that is comparable is an Artificial Intelligence
using nano-robots to convert the entire Earth into paperclips. An asteroid impact is
intermediate between AI risk and the risk of a bio-engineered multi-plague or similar event
in terms of its brute killing power. It is intense enough to destroy the entire surface through
brute heat, but not dangerous enough to intelligently seek out and kill humans.
After the multi-decade long impact winter, the Chicxulub impactor caused centuries of
greater-than-normal temperatures due to greenhouse effects from all the carbon dioxide
released by the incinerated biosphere. This, in combination with the scarcity of plants
caused by their conflagration, caused huge interior continental regions to transform into
deserts and badlands. The only comparable natural event that can create climatic changes of this magnitude would probably be the Siberian Traps, a series of volcanic eruptions which lasted hundreds of thousands of years at the end of the Permian period, 250 million years ago.
The K-T extinction was highly selective. Many alligator, turtle, and salamander
species survived. This was because they could both hide underwater and eat detritus. In
general, detritus-eating animals were able to survive, since that's all there was for many
years after the impact. Like the alligators and turtles of 65 million years ago, if a Chicxulub-sized asteroid were to hit us today, many human beings would figure out a way to survive,
both during the initial impact and in the ensuing years. Like many of the scenarios in this
book, such an event would likely wipe out 99 to 99.99 percent of humanity, but many (over
500,000) would survive. Many people work underground or in places where they would be completely shielded from the initial thermal pulse. Even if everything exposed on the surface
burned, there would be sufficient stored food underground to keep millions alive for
decades without farming. Wheat berries stored in an oxygen-free environment can retain
nutritional value for hundreds of years. This would give humanity enough time to start
growing new food and locate refugia where possible. The Earth's fossil fuels and many
functional machines and electronics would remain, giving us tools to stage a recovery. A
five degree or even ten degree Celsius temperature increase for hundreds of years, while
extremely harsh and potentially lethal to as many as 90-95 percent of all living plant and
animal species, could not wipe out every single human. There would always be
somewhere, like Iceland, Svalbard, northern Siberia and Canada, which would remain at
mild temperatures even if the global average greatly increased. Humans are not dinosaurs.
We are smarter, our nutritional needs are fewer, and we would be able to survive, even if
photosynthesis completely shut down for several years.
A brief word here on the difficulty of surviving various global temperature changes.
During humanity's existence on Earth, for the last 200,000 years, there have been global
temperature variations of significant magnitude, mostly in the negative direction relative to
the present day. Antarctic ice cores show that temperatures during the last Ice Age were about 8-9 degrees Celsius cooler than they are now [29]. We know that humanity
and other animal species can survive significant drops in temperature. What makes impact winter qualitatively different from a simple Ice Age is the complete shutdown of photosynthesis caused by ash-choked skies. Even just a few years of this is enough to reshape global biota entirely. The third risk, post-impact global warming, is distinct from global cooling and photosynthetic shutdown, but like them it is an effect of a major asteroid impact, and it has the potential to be as deadly as the others because it is more difficult for life to adapt to increased temperatures than to decreased ones. If an animal is cold, it can eat more food, or migrate to a warmer place, and survive. If an animal is too hot, it can migrate, but it suffers more in the process of doing so. Overheating is extremely dangerous, which is why tropical animals have so many elaborate adaptations for preventing it. The relative contributions of initial firestorms, impact winter, and impact summer to species extinction at the K-T boundary are poorly studied and require more research.
Daedalus Impact
To consider an impact that could truly wipe out humanity completely, we have to either
analyze the impact of a larger object, a faster object, or an object with both qualities. We
also should expand our scope beyond natural asteroids and comets, large versions of
which impact us only rarely, and consider the possibility of artificially accelerated objects. At
some point in the future, humanity will probably harvest the Sun's energy in larger
amounts, with huge arrays of solar panels which might even approach the size of planetary
surfaces. If we do spread beyond Earth and conquer the solar system, this would be very
useful for our energy needs. Such systems would give humanity and our descendants
access to tremendous amounts of energy, enough to accelerate large objects up to a
significant fraction of the speed of light, say 0.1 c.
Assuming sophisticated automated robotics, artificial intelligence, and
nanotechnology, it would be possible to disassemble asteroids and convert them into
massive solar arrays within the next hundred years. If the right technology is in place, it
could be done with the press of a button, and the size of the asteroid itself would be
immaterial. This energy could then be applied to harvesting other fuels, such as helium-3 in
the atmosphere of Uranus. This could give human groups access to tremendous amounts
of energy, possibly even thousands of times greater than the Earth's present power
consumption, within a hundred to two hundred years. We aren't saying that this is
necessarily particularly likely, just imaginable. After all, the Industrial Revolution also rapidly
improved the capabilities of humanity within a short amount of time, and some scientists
and engineers anticipate an even greater capability boost from molecular manufacturing,
AI, and robotics during the 21st century. So it shouldn't be considered completely out of the
question.
The energy released by the Chicxulub impactor was between 310 million megatons and 13.86 billion megatons of TNT, according to a careful study [30]. (100 million megatons, a frequently cited number, is too low.) As a comparison, the largest nuclear bomb ever detonated, Tsar Bomba, had a yield of 50 megatons. Scaling way up, we make the general assumption that it would require an explosion with a yield of 200 billion megatons to completely wipe out humanity. This would be an explosion much larger than anything that has occurred during the era of multicellular life. An asteroid has not triggered an explosion this large since the Late Heavy Bombardment, roughly 4 billion years ago, when huge asteroids were routinely hitting the Earth. It seems like a roughly arbitrary number, which it is, but it has a few things to recommend it: 1) it's more than ten times greater than the explosion which wiped out the dinosaurs; 2) it's large enough to be beyond the class of objects that has any chance of hitting Earth naturally, but small enough that it isn't completely outlandish; 3) it's easily large enough to argue that it could wipe out all of humanity, even that it would be likely to do so, in the absence of bunkers deliberately built to last through many decades of major temperature drops and no farming.
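For reference, the conversion behind this benchmark, and the ratio range used below, take one line each (the megaton-to-joule factor is the standard 4.184 × 10^15 J):

# The 200-billion-megaton benchmark in joules, and vs. Chicxulub.
J_PER_MEGATON = 4.184e15
benchmark_mt = 2e11                       # 200 billion Mt
low, high = 310e6, 13.86e9                # Chicxulub range, Mt, from [30]

print(f"{benchmark_mt * J_PER_MEGATON:.1e} J")                 # ~8.4e26 J
print(f"{benchmark_mt/high:.0f}x to {benchmark_mt/low:.0f}x Chicxulub")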
An impact releasing energy equivalent to 200 billion megatons (2 × 10^11 MT) of TNT is difficult to imagine. The following effects all come from the impact simulator of the Earth Impact Effects Program, a collaboration between astronomers and physicists at Imperial College London and Purdue University. A 200 billion MT impact would release approximately 10^27 joules of energy, opening a “crater” in water
(assuming it strikes the ocean) with a diameter of 881 km (547 mi), about the distance
between San Francisco and Las Vegas. The crater opened on the seafloor would be about
531 km (329 mi) in diameter, which, after the crater wall undergoes collapse, would form a
final crater 1,210 km (749 mi) in diameter. This crater would be so large it would span almost the entire states of California and Nevada, and it is far larger than the Chicxulub crater, which is only 180 km (110 mi) in diameter. The explosion, which is between 14 and 645 times more powerful than the Chicxulub impact, leaves a crater about 7 times wider, with about 49 times greater area. All this greater area corresponds to more dust and
soot clogging up the sky in the decades to come, as well as more molten rock raining from

the heavens in the few hours following the impact. Both heat and dust translate into killing
power.
The physical effects of the impact itself would be mind-boggling. The fireball
generated would be more than 122 km (76 mi) across. At a distance of 1,000 miles, the
thermal pulse would hit 15.6 seconds after impact, with a duration of 6.74 hours, and a
radiant heat flux 5,330 times greater than full sunlight (all these numbers are from the
impact effects calculator). At a distance of 3,000 miles (4,830 km), equivalent to that
between San Francisco and New York, the fireball would appear 5.76 times larger than the
sun, with an intensity 13.7 times greater than full sunlight, enough to ignite clothing, plywood, grass, and deciduous trees, and to inflict third-degree burns over most of the body. Roughly
815,000 cubic miles of molten material would be ejected, arriving approximately 16.1
minutes after impact. The impact would cause shaking at 12.1 on the Richter scale, greater
than any earthquake in recorded history. Even at this continental distance of 3,000 miles,
the ejecta, which arrives after about 25.4 minutes as an all-incinerating wave of flaming
dust particles, has an average thickness on the ground of 6.28 meters (20.6 ft). That is
incredible, enough to cover and kill just about everything. The air blast would arrive about 4
full hours after impact, with a maximum wind velocity of 1,760 mph, peak overpressure of
10.6 bars (150 psi), and a sound intensity of 121 dB. At a distance of 12,450 miles (20,037
km), the maximum possible distance from the impact, the air blast arrives after 16.8 hours,
with a peak overpressure of 0.5 bars (7.6 psi), maximum wind velocity of 233 mph, and a
sound intensity of 95 dB. The force of the blast would be enough to collapse almost all
wood frame buildings and blow down 90 percent of trees. Altogether, more than 99.99
percent of the world's trees would be blown down, regardless of where the impact hits.
No authors have considered in much detail the effects such a blast would have on
humanity itself, probably because it could conceivably only be caused by an artificial
impact rather than a natural one, and anthropogenic high-velocity object impacts are rarely
considered, especially of that energy level. What is particularly interesting about blasts of
around this level is that they are somewhere between “sure survival” and “sure death” for
the human species, so there is definite uncertainty about the likely outcome. Nearly anyone
not in an underground shelter would be destroyed, just as they would in the winds of a
powerful hurricane. Hurricanes do not bring flaming hot ejecta that lands in 10-ft thick
layers and burns away all oxygen on the surface, however. The Chicxulub impact kicked up
~5 × 10^15 kg of material, which deposited an average of 10 kg (22 lb) per square meter, forming a layer 3-4 mm thick on average. The larger impact we describe above, which we will call a Daedalus impact for reasons which will become clear shortly, would eject at least 3,625,000 cubic miles (15,110,000 cu km) of material, equivalent to a cube 153 miles (246 km) on a side, into the atmosphere, for a mass of ~5 × 10^19 kg, roughly 10,000 times greater. Extrapolating, this means we could expect an average deposition of 100 tonnes (110 tons) per square meter, forming a layer 30-40 meters (100-130 ft) thick on average! This is inconsistent with the Impact Effects Calculator result of just 6 meters of thickness only 3,000 miles away, and knowing which is right is crucial. (Note: double-check this result and consult an expert.)
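The extrapolation in the preceding paragraph is straightforward to redo; the rock density below is an assumed typical silicate value, so treat the outputs as order-of-magnitude:

import math

# Global deposition if the quoted ejecta volume spreads evenly over Earth.
ejecta_volume = 1.511e16                  # m^3 (15,110,000 cu km, from text)
density = 3000.0                          # kg/m^3, assumed silicate rock
earth_area = 4 * math.pi * 6.371e6**2     # m^2

mass = ejecta_volume * density            # ~4.5e19 kg
areal = mass / earth_area                 # ~9e4 kg per m^2
print(f"mass ~{mass:.1e} kg, ~{areal/1e3:.0f} t/m^2, "
      f"layer ~{areal/density:.0f} m thick")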
If the impact really did leave a layer of molten silicate dust 30 meters thick, 100
tonnes per square meter, it is easy to imagine how that might threaten the existence of
every last human being, especially as the post-impact years tick by and food is hard to
come by. During the K-T event, global temperature is thought to have dropped by 13 Kelvin
after 20 days [31]. With 10,000 times as much dust in the atmosphere as the K-T event, how
much cooling would the Daedalus event cause? It could be catastrophic cooling, not just in
the sense of wiping out 75-90 percent of all terrestrial animal species, but more in the
sense of potentially wiping out all terrestrial non-arthropod species.
There would be some warning time for those distant from the blast. At a distance of
about 10,000 km (6,210 miles), the seismic shock would arrive after about 33.3 minutes,
measuring 12.2 on the Richter scale. In comparison, the 1906 San Francisco earthquake,
which destroyed 80 percent of the city, was about 7.8 on the moment magnitude scale
(modern Richter scale), which corresponds to about 7.6 on the old Richter scale. Each
increment on the scale corresponds to a 10 times greater shaking amplitude, so the earthquake following a Daedalus impact would have a shaking amplitude more than 20,000 times that of the San Francisco earthquake at the hypocenter, naively corresponding to a shaking amplitude of miles. At a distance of 10,000 km, this would rate at VI or VII on the
Mercalli scale. VI on the Mercalli scale refers to, “Felt by all, many frightened. Some heavy
furniture moved; a few instances of fallen plaster. Damage slight.” VII refers to, “Damage
negligible in buildings of good design and construction; slight to moderate in well-built
ordinary structures; considerable damage in poorly built or badly designed structures;
some chimneys broken.” Upon feeling the seismic shock, people would check the Internet,
television, or radio to find news that an object had hit the Earth, and that the deadly blast
wave was on its way. Starting after about 45 minutes, the ejecta would begin arriving, the
red-hot rain of dust, getting thicker over the next 25 minutes and reaching maximum
intensity 70 minutes after impact. This all-encompassing heat would be sufficient to cook
everything on the surface. The devastating blast wave, which is the largest physical
disruption and would include maximum winds of up to 677 mph and pressures of 30.8 psi,
similar to those felt at ground zero of a nuclear explosion, would arrive after about 8.42
hours. This would completely level everything on the ground, including any structures.
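The arrival times quoted above can be reproduced with simple propagation speeds; the seismic and sound speeds below are generic assumed values, not the calculator's internal model:

# Rough arrival times at 10,000 km from the impact point.
distance = 10_000e3                  # m
v_seismic = 5_000.0                  # m/s, typical crustal seismic speed
v_sound = 340.0                      # m/s, sea-level speed of sound in air

print(f"seismic: {distance/v_seismic/60:.0f} min")      # ~33 min
print(f"air blast: {distance/v_sound/3600:.1f} h")      # ~8 h

# Each whole magnitude step is a 10x increase in shaking amplitude:
print(f"M12.2 vs M7.8 amplitude: {10**(12.2-7.8):,.0f}x")   # ~25,000x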
Contrary to popular conception, pressures similar to those directly underneath a nuclear explosion are survivable, using a simple arched-earth structure over a closed trench. One such secure trench was located near ground zero in
Nagasaki, where pressures approached 50 psi. Such trenches require a secure blast door,
however. Perhaps a greater problem would be the buildup of ejecta, which would rest
everywhere and cause a great amount of pressure, crushing unreinforced structures and
their inhabitants. It would also be super-hot. We can imagine a more hopeful situation on
the side of a cliff or mountain, where ejecta slides off to lower altitudes, or in tropical
lagoons and lakes, where even 100 ft worth of dust may simply sink to the bottom, sparing
anyone floating on the surface. Oxygen would be a greater problem, however, meaning
that those in elevated, steep areas with plenty of fresh air would be at an advantage. Such
areas would have increased exposure to thermal radiation, on the other hand, making it a
tradeoff. Ideal for survival would be a secure structure carved into a cliff or mountainside,
or a hollowed-out mountain like the Cheyenne Mountain nuclear bunker in Colorado, which
is manned by about 1,400 people. The bunker has a water reservoir of 1,800,000 US gal
(6,800,000 l), which would be more than enough to support its 1,400 workers for over a
year. One wonders if the intense heat of a 100-130 ft layer of molten dust would be
sufficient to melt and clog the air intake vents and suffocate everyone inside. Our guess would be probably not, which makes us question even the assumption that a 200 billion megaton explosion would be sufficient to truly wipe out all of humanity. The general
concept requires deeper investigation.
If staff in a highly secured bunker like the Cheyenne complex were somehow able to
survive the initial ejecta and blast wave, the world they emerged out into when it cooled
would be very different. Unlike the post-apocalyptic landscape faced by the survivors of the
Chicxulub impact, which was only dusted with a few millimeters of material, these refugees would be dealing with layers of dust as deep as a multi-story building is tall, dust which would get into everything and create a nutrient-free layer nearly impossible for plants to grow in. Any
survivors would need to find a deposit of remaining soil, which could be done by digging
into a mountainside or possibly clearing an area with a nuclear explosion. Then, they would
need to use rain or whatever other available water source to attempt to grow food. Given
that the sky would be blacked out for at least several years and possibly longer, this would
be quite a challenge. Perhaps they could grow plants underground with artificial light from a
nuclear-powered generator. Survivors could scavenge any underground grain reserves, if
they manage to locate these and expend the energy to dig down to them by hand. The
United States and many other countries have grain surpluses sufficient to feed a few
thousand people almost indefinitely, as we reviewed in the nuclear chapter.
Over the years, moss and lichen would begin to grow over the surface of the cooled
ash, and rivers would wash some of it into the sea. If humanity were lucky, there would be
a fern spike, a massive recolonization of the land by ferns, as generally occurs after mass
extinctions. However, the combination of falling temperatures, limited power sources,
everything on Earth being covered in a layer of ash 100 feet deep, and so on, could easily
prove sufficient to snuff out a few isolated colonies of several thousand people fortunate
enough to have survived the initial blast and heat. One option might be to construct sea
cities, if the survivors had the technology available for it. It would be difficult to reboot the
large-scale technological infrastructure needed to construct such cities, especially working
with few people, though possibly working components could be salvaged. In a matter of a
couple of decades, without a technological infrastructure to build new tools, all but the simplest devices would break down, putting humanity back into a technological limbo similar
to the early Bronze Age. This would make it difficult for us to achieve tasks such as locating
grain stores hundreds of miles away, determining their precise location, and digging 100
feet down to reach them. Because of all these extreme challenges to survival, we
tentatively anticipate that such an impact probably would wipe out humanity, and indeed most, if not all, complex terrestrial life.
How could such an impact occur? It would have to be an artificial projectile: an iron sphere with a radius of 0.393 km, roughly a hundred times the volume of the Great Pyramid of Giza, accelerated into the surface of the Earth at one-tenth the speed of light (0.1 c). The name “Daedalus” is
borrowed from Project Daedalus, a study undertaken by the British Interplanetary Society between 1973 and 1978 to design an unmanned interstellar probe [32]. The craft
would have a length of about 190 meters (626 ft) and weigh 54,000 tons, with a scientific
payload of 400 tons. The craft was designed to be powered using nuclear fusion, fueled by
deuterium harvested from an outer planet like Uranus. Accelerating to 0.12 c over the
course of 4 years, the craft would then cruise for 46 years to reach its target star system,
Barnard's Star, 5.9 light years distant. The craft would be shielded by an artificially
generated cloud of particles called “dust bugs” hovering 200 km ahead of the vehicle to
remove larger obstacles such as small rocks. Micro-sized dust grains would impact the
craft's beryllium shield and ablate it over time.
The Daedalus craft would only accelerate 400 tons, instead of the 9.3 × 10^8 kg (930,000 tonnes, 1.02 million tons) required to deal a potential deathblow to mankind. It would need to be scaled up by a factor of 2,550 to reach critical mass. You hear discussion about starships in science fiction frequently, but no science fiction we're aware of addresses the consequences of the fact that an interstellar probe just 2,550 times larger than the minimum viable interstellar probe could cause an explosion on the Earth that covers the entire surface in 50-100 feet of molten rock. Once such a probe got going, it
would be very difficult to stop. No object would have the momentum to push it off course; it would need to be stopped before it acquired high speed, within its four-year acceleration window. If an antimatter drive could be developed that allows an even greater speed, one closer to the speed of light, the projectile would only require a mass of 8.38 × 10^8 kg, about a 29 m (96
ft) radius iron sphere, similar to the mass of the Seawise Giant, the longest and heaviest
ship ever built. On the human scale, that is rather large, but on the cosmic scale, it's like a
dust grain. Any future planetary civilization that wants to survive unknown threats will need
high-resolution monitoring of its entire cosmic area, all the way out to tens of light years
and as far beyond that as possible.
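A quick check of the projectile figures used in this section: the iron density is a standard value, and we use the non-relativistic kinetic energy formula, which at 0.1 c understates the true energy by less than one percent:

import math

# Iron-sphere projectile masses and the energy of the 0.393 km version.
IRON_DENSITY = 7874.0                # kg/m^3
C = 2.998e8                          # m/s, speed of light
J_PER_MEGATON = 4.184e15

def sphere_mass(radius_m):
    # Mass of a solid iron sphere of the given radius.
    return (4.0 / 3.0) * math.pi * radius_m**3 * IRON_DENSITY

m_big = sphere_mass(393.0)                       # ~2.0e12 kg
ke = 0.5 * m_big * (0.1 * C)**2                  # ~9e26 J at 0.1 c
print(f"0.393 km sphere: {m_big:.1e} kg, "
      f"{ke/J_PER_MEGATON:.1e} Mt")              # ~2e11 Mt, 200 billion

m_small = sphere_mass(29.0)                      # ~8.1e8 kg
print(f"29 m sphere: {m_small:.1e} kg")          # near the 8.38e8 kg quoted

The kinetic energy of the 0.393 km sphere at 0.1 c lands on the 200-billion-megaton benchmark chosen earlier.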
Consider that hundreds of years ago, all transportation was by wind power, foot, or
pack animal, and the railroad, diesel ship, and commercial aircraft didn't exist. As humanity
developed huge power sources and more energetic modes of transport, energy release
levels thousands, tens of thousands, and hundreds of thousands of times greater than we
were accustomed to became routine. Today, there are about 93,000 commercial flights
daily, and about one in every hundred million is hijacked and flown into something, causing
events like 9/11. Imagine a future where interstellar travel is routine, and a similar
circumstance might become a threat with respect to starships used as projectiles. Instead
of killing a few thousand people, however, the hijacking could kill everyone on Earth. This scenario becomes especially plausible if the terrorist were a nationalist of a future country beyond the Earth, located on the Moon, Mars, the asteroid belt, among space stations, or even in a separate
star system. For whatever reason, such an individual or group may have no sympathy for
the planet and be perfectly willing to ruin it. This would probably not put humanity as a
whole at risk, since many would be off-world at that stage, but the logic of the scenario has
implications for our security in the long-term future.
Heliobeams
Perhaps a more plausible risk in the nearer future is some group using a space
station as a tool to attack the Earth. They might wipe out most or all of the human species
to replace us with their own people, or to otherwise dominate the planet. This scenario was
portrayed in the 1979 James Bond film Moonraker. Space is the ultimate high ground; from
it, it would be easier to distribute capsules of some lethal biological agent, observe the
enemy, launch nuclear weapons, and so on. Perhaps, speculatively, this process could get
carried away until those who control low Earth orbit begin to see themselves as gods and
start to pose a threat to humanity in general.
To create space stations on a scale required to actually control or threaten the entire
Earth's surface would be a difficult task. In the 1940s, Nazi scientists developed the design
for a large orbiting mirror 1 mile in diameter [33], for focusing light on the surface and incinerating cities or military formations. Called the Sun Gun or heliobeam, it was anticipated to take around 50 to 100 years to construct. No detailed documents of the Sun Gun design survived the war, but assuming a mass of about 1,191 kg (2,626 lb) per square meter, corresponding to a steel plate 6 inches thick, a heliobeam a mile in diameter would have a surface area of about 2,034,162 square meters (21,895,500 sq ft) and a mass of about 2,422,690 tonnes (2,670,560 tons), similar to the mass of 26 Nimitz-class aircraft carriers. At space launch costs of $2,200/kg ($1,000 per pound), the March 2013 figure for the Falcon Heavy rocket, lifting that much material to orbit would cost about $5.3 trillion, roughly the annual GDP of Japan. Even given the relatively enormous US
military budget, this would be a rather difficult expense to justify. Such a project would have
to be carried out over the course of many years, or weight would have to be sacrificed,
making the heliobeam thinner, which would make it more vulnerable to attack or disruption.
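The mass and launch-cost arithmetic above is easy to verify; all inputs below are the text's own figures:

import math

# Heliobeam mass and launch cost with the figures quoted above.
diameter = 1609.34                   # m, 1 mile
areal_density = 1191.0               # kg/m^2 (6-inch steel), per the text
launch_cost = 2200.0                 # USD/kg, Falcon Heavy figure in text

area = math.pi * (diameter / 2)**2              # ~2.03e6 m^2
mass = area * areal_density                     # ~2.42e9 kg
print(f"area ~{area:.2e} m^2, mass ~{mass/1e9:.2f} million tonnes, "
      f"launch ~${mass*launch_cost/1e12:.1f} trillion")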
There are ways to take the heliobeam concept as devised by the Nazis and transform
it into a better design which makes it easier to construct and more resilient to potential
attack. Instead of one giant mirror, it could be a cluster of several dozen giant mirrors,
communicating with data links and designed to point to the same spot. Instead of being
made of thick metal, these mirrors could be constructed out of futuristic nanomaterials
which are extremely light, yet durable. This could lower the launch weight by a factor of
tens, hundreds, maybe even thousands. The preferred construction material for a scaffold
would be diamond or fullerenes. Even if MNT is not developed in the near future, the cost
of industrially produced diamond is falling, though not to the extent that would be required
to make such large quantities affordable. This makes the construction of a heliobeam seem
mostly dependent on progress in the field of MNT, much like the other hypothetical
structures in this chapter.
If launch costs and the costs of bulk diamond could be brought way down, an
effective heliobeam could potentially be constructed for only a few trillion dollars instead of
a few hundred trillion, which might put it within the reach of the United States or Russian
military during the latter half of the 21st century. The United States is spending between
$620 and $661 billion over the next decade on maintaining its nuclear weapons, as an
example of a project in a similar cost window. Spending a similar amount on a heliobeam
could be justified if the ethical and geopolitical considerations looked right. After all, a
heliobeam would be like a military laser beam that never runs out of power. There would be
few limits to the destruction it could do. Against stationary targets in particular, it would be
extremely effective. There is nothing like vaporizing your enemies using a giant beam from
space.
Countries are especially dependent on their power and military infrastructure to
persist. If these could be destroyed one by one, over the course of several days by a giant
orbiting mirror, that could cripple a country and make it ripe for ground invasion. Nuclear
missiles could defend against such a mirror by damaging it, but possibly a mirror could be
sent up in the aftermath of a World War during which most nuclear weapons were used up,
leaving the survivors at the mercy of the gods in the sky with the sun mirror. Whatever
group controlled the sun mirror could use it to cripple key human infrastructure (power
plants, ports), forcing humanity to live as it did in ancient times, without electricity. If the
group were the only ones who retained industrial technology, such as advanced fighter
aircraft, they would be fighting against people who had little more than crude firearms. The
difference in technological capability between Israelis and Palestinians comes to mind, only
more so. The future could be a world of entirely different technological levels, where an elite
group is essentially running a giant zoo for its own amusement. If they got tired of the
masses of humanity, they might even decide to exterminate us through other means such as
nuclear weapons. A heliobeam need not be used to perform every important military
maneuver, just the most crucial moves like destroying power plants, air bases, and tank
formations. It would still give whoever controlled it a dominant position.
There are many important strategic, and therefore geopolitical, differences between a
hypothetical heliobeam and nuclear weapons. Nuclear weapons are more of an all-or-nothing thing. When you use a nuclear weapon, it makes a gigantic explosion with ultra
high-speed winds and a fearsome mushroom cloud that showers lethal radioactivity for
miles around. Even the most power-hungry politician or military commander knows they
are not to be used lightly. A heliobeam, on the other hand, could be used in an attack which
is arbitrarily small and seemingly muted. By selectively obscuring parts of a main mirror, or
selecting just a few out of a cluster to target sunlight towards a particular location, a
heliobeam array could be used to destroy just a few hundred troops instead of a few
thousand, or to harass on an even smaller scale. Furthermore, its use would be nearly free
once it is built. The combination of low maintenance costs and arbitrarily minor attack
potential would make the incentive to use a heliobeam substantially greater than the
incentive to use nuclear missiles. A country with messianic views about its role in the world,
such as the United States, could use it to get its way in every conflict, no matter how
minor. This could exacerbate global tensions, paving the way for an eventual global
dictatorship with absolute power. We ought to be particularly wary of any technologies
which could be used to enforce global dictatorship, due to the potentially irreversible nature
of a transition to one, and the attendant suffering it could cause once firmly in place [34].
In conclusion, it is difficult to imagine how space-based heliobeams could kill all of
humanity, but they could certainly oppress us greatly, possibly locking us into a state where
a dictatorship controls us completely. Combined with life extension technologies and
oppressive brain implants (see next chapter), a very long-lived dictator could secure his
position, preventing technological development and personal freedom in perpetuity. Nick
Bostrom defines an existential risk as “One where an adverse outcome would either
annihilate Earth-originating intelligent life or permanently and drastically curtail its
potential.” [35] This particular risk seems less likely to annihilate life than to permanently and drastically curtail its potential, if put in the wrong hands. As a counterpoint, however,
one may argue that nuclear weapons have the same great power, but they have not
resulted in the establishment of a global dictatorship.
Dispersal of Biological Agents
In Moonraker, bad guy Hugo Drax attempts to release 50 spheres of nerve gas from a
space station to wipe out the global population, replacing it with his own master race.
Would this be possible? Almost certainly not. Even if 50 spheres could hold sufficient toxic
agent, they would not achieve the level of dispersion necessary to distribute a
lethal dose across the entire surface of the Earth, or even a tiny fraction of it. But is it
possible in principle? As always, it is our job to find out.
First, it would not make sense to disperse a biological agent in an untargeted
fashion. If someone is trying to kill as many people as possible with a biological agent, it
makes the most sense to distribute it in areas where there are people, especially people
quickly dispersing across far distances, as at an airport. This is best done on the ground,
with an actual human vector, spraying aerosols in places like public bathrooms. The
method of dispersal would be up close, personal, and meticulous, not slipshod or
haphazard, as in a shotgun approach. The downside of this is that you can't hit millions of
people at once, and it puts the attacker himself at risk of contracting the disease.
Any biological weapon launched from a distance has to deal with the problem of
adequate dispersal. The biological bomblets developed by the United States military during
its biological weapons program are small spheres designed to spin and spray a biological
agent as they are dropped from a plane. If bomblets are to be launched from a space
station, a number of weaponization challenges would need to be overcome. First, the
bomblets would need to be encased in a larger module with a heat shield to protect them from
burning up on reentry. Most of the heat of a reentering space capsule is transferred just 6
km (3.7 mi) over the ground, where the air gets much thicker relative to the upper
atmosphere, so the bomblets have to remain enclosed down to that altitude at least (unless
constructed from some advanced material like diamond). By the time they are that far
down, they can't disperse across that wide an area, maybe an area 10 km (6.2 mi) across.
Breaking this general rule would either require making ultra-tough reentry vehicles that can
reenter in one piece despite being relatively small, or having the reentry module break up
into dozens or hundreds of rockets which go off horizontally in all directions before falling
down to Earth. Both are serious challenges, and the technology does not exist, as far as is
publicly known.
Most people live in cities. Cities are the natural place where an aspiring Bond villain
would want to spread a plague. A problem is that major cities are far apart. Another
problem is that doomsday viruses don't survive very well without a host, quickly getting
destroyed by low-level biological activity (such as virophages) or sunlight. To optimally hit
humanity with a virus would require not only a plethora of viruses (since many people
would undoubtedly be immune to just one), but a multitude of living hosts to spread the
viruses, since viruses in aerosols or otherwise unprotected would very quickly be
degraded by UV light from the sun. So the problem of wiping out humanity with a
biological weapon launched from a space station is actually a problem of launching bats,
monkeys, rats, or some similar stable vector in large numbers from a space station to
hundreds or thousands of major cities on Earth. When you think about it that way, in
combination with the reentry module challenges cited above, pulling this off clearly is a bit
more complicated than was portrayed in Moonraker.
Could biological bomblets filled with bats be launched into the world's major cities,
flooding them with doomsday bats? It's certainly imaginable, and would be a more subtle
and achievable way of trying to kill off humanity than using the Daedalus impactor.
However, it certainly doesn't seem like a threat in the near future, and it is difficult to imagine
even in this century, though difficulty of imagination is not always the best criterion for
predicting the future. For the vectors to make it to their targets in one piece, they would
need to be put in modules with parachutes, which would be rather noticeable upon entering
a city. Many modules could be used, thousands of separate pods filled with bats each
landing in different locations and different cities, for a total of hundreds or thousands of
modules. This would entail a lot of mass, in the tens of thousands of tons. Keeping all
these bats or rats and their 50-100 necessary deadly viruses all contained on a space
station would be a major task, and would require a substantial amount of space and resources. It
could conceivably be done, but it would have to be a relatively large space station, one with
living and working space for 1,000 people at least. There would need to be isolation
chambers on the space station itself, with workers entering numerous staging chambers
filled with the modules which would need to be loaded with the biological specimens by
hand or via remote-controlled robot.
To take a serious crack at destroying as many human beings as possible, the
biological attack would need to be targeted at every city with over a million people, of which
there are 476. To ensure that enough people are infected to actually get the multi-pandemic going, rather than being contained, one would need as many vectors as
possible, in the tens of thousands per city. Rats, squirrels, or mice would be more suitable
than bats, since they would be more innocuous-seeming in world cities, though all of the
above could be used. Each city would need its own reentry capsule, which, to contain
10,000 rats without killing them, assuming (generously) ten per cubic foot, would need to
have around 1,000 cubic feet of internal space, or a cube 10 ft (3 m) on a side. The Apollo
Command Module, for instance, had a habitable interior volume of about 210 cubic feet. Assuming a module
carrying rats would need to be about 4 times heavier to contain the necessary internal
space, that gives us a weight of 98,092 kg (216,256 lbs) per module, which we round off to
100 tonnes. One module for every city with over a million people makes that 47,600
tonnes. Now we see the scale of size a space station would need to contain the necessary
facilities to launch a species-threatening bio-attack on the Earth. A space station of five
million tonnes or more would likely be needed to contain all the modules, preparatory
facilities, staff, gardens to grow food for the staff, space for it to be tolerable for them to live
there, the water they need, and so on. At that point, you might as well build Kalpana One,
the AIAA (American Institute of Aeronautics and Astronautics) designed space colony we
mentioned earlier, designed to fit 3,000 people in a colony that would weigh about 7 million
tonnes [36]. Incidentally, this number is close to the minimum viable population (MVP) needed for a species to survive. A meta-analysis of the MVP of different species found the average number to be 4,169 [37].
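Collecting the assumptions of the last paragraph in one place (all figures are the text's own):

# Sizing the hypothetical bio-attack from the text's assumptions.
rats_per_city = 10_000
rats_per_cubic_foot = 10             # generous packing assumption
module_mass_t = 100                  # tonnes per reentry module, rounded
cities = 476                         # cities with over a million people

volume_ft3 = rats_per_city / rats_per_cubic_foot   # 1,000 ft^3
side_ft = volume_ft3 ** (1.0 / 3.0)                # ~10 ft cube
total_t = cities * module_mass_t                   # 47,600 tonnes
print(f"{volume_ft3:,.0f} ft^3 per module (~{side_ft:.0f} ft cube), "
      f"{total_t:,} tonnes of modules total")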
Would 10,000 rats released in each city in the world, carrying a multitude of viruses,
be sufficient to wipe out humanity? It seems unlikely, but could be possible in combination
with nuclear bombardment and targeted use of a heliobeam. After wiping out all human
beings on the surface, the space station could be struck by a NEO and suffer catastrophic
reentry into the atmosphere, killing everyone on board. After that, no more humanity.
Destroying every last human being on Earth with viruses seems seriously difficult, given
that there are individuals living out in rural Canada or Russia with no human contact
whatsoever. Yet, if only a few thousand or tens of thousands of individuals remain, are too
widely distributed, and fail to mate and reproduce in the wake of a serious disaster, it could
be possible. It would require a lot of things all going wrong simultaneously, or in a
sequence. But it is definitely possible.
Motivations
Given all this, we may rightly wonder: what would motivate someone to do these
horrible things in the first place? Most of us don't seriously consider wiping out humanity,
and those that do tend to be malcontent misfits with little hope of carrying their plans out.
The topic of motivations will be addressed at greater length further along in the book, but
we ought to briefly address it here in the specific context of space-originating artificial risks.
Space represents a new frontier, the possibility of starting over. To some, this means
replacing the old with the new. When you consider that 7-10 billion people only have a
collective mass of around 316 million tons, a tiny fraction of the size of the Earth, you
realize that's not a lot of matter to scramble and thereby have the entire Earth to yourself.
Concern about the direction of the planet, or of society, combined with a genocidal streak, may be enough to make someone consider wiping humanity out and starting over. They
may even see a messianic role for themselves in accomplishing it. Wiping out humanity
opens up the possibility of using the Earth for literally anything an individual or small group
could imagine. It would allow them to impose their preferences over the whole future.
The science fiction film Elysium showed a possible future for Earth: the planet noisy,
messy, dusty, and crowded, and a space station where everything is clean and utopian. In
the movie, people from Earth were able to launch themselves aboard the space station and
reach “asylum,” but in real life, it would be quite difficult to dock on a space station if the
inhabitants didn't want you there. Defended by a cluster of mirrors or lasers, they could
cook any unwanted visitors before they ever got to the front door. If a space colony could
actually maintain a stable social system, its inhabitants might develop a very powerful sense of in-group, more powerful than that experienced by the great majority of humans who lived
throughout history. Historically, communities almost always have some contact with the
outside, but on a space station, true self-sufficiency and isolation could become possible.
3,000 people might not be sufficient to create full self-sufficiency, but a network of 10-20
such space stations might. If these space stations had access to molecular manufacturing,
they could build things without the large factories we have today, making the best use of
available space. Eventually such colonies could spread to the Moon and Mars, colonizing
the solar system. Powerful in-group feelings could cause them to eventually think much
less of those left behind on Earth, even to the point of despising us. Of course, this is by no
means certain, but it ought to be considered.
The potential for hyper-in-group feelings increases when we consider the possibility of
mental and physical enhancement, modifications to the body and brain that change who
people are. A group of people on a space station could become a new species, capable of
flying through outer space unprotected for short periods. Their skin cells could be equipped
with small scales that fold to create a hard shell which maintains internal pressure when
exposed to space. These people could work on asteroids wearing casual clothing, a new
race of human beings adapted to the harshness of the vacuum. Beyond physical
modifications, they could have mental modifications as well. These could increase the
variance of possible emotions, allow long-term wakefulness, or even increase the
happiness set-point. They might see themselves as doing a service by wiping out the
billions of “model 1.0” humans on Earth. In the aftermath of such an act, infighting among
these space groups could lead to their eventual death and the total demise of the human
species in all its variations.
In the context of all of this, it is important to recall Fermi's Paradox and the Great
Filter. Though it may simply be that the prior probability for the development of intelligent
life is extremely low, there may be more sinister reasons for the silence in the cosmos,
developmental inevitabilities that cause the demise of any intelligent race. These may be
more subtle than nanotechnology or artificial intelligence; they could involve psychological
or mental dead-ends that inevitably occur when an intelligent species starts expanding out
into space, modifying its own brain, or both. Maybe intelligent species reliably expand out
into the space immediately around their planet, inevitably wipe out the people left behind,
then inevitably wipe themselves out in space. Space certainly has a lot fewer organic
resources than the Earth, and a lot more things that can go wrong. If the surface of the
planet Earth became uninhabitable for whatever reason, hanging onto survival in space would be a much riskier prospect. Maybe small groups of people living in space eventually go
insane over time scales of decades or centuries. We really have no idea. Nothing should
be taken for granted, especially our future.
Space and Molecular Nanotechnology
At the opening of this chapter, we made the controversial claim that molecular manufacturing (MNT) is a necessary prerequisite to large-scale space development. In this section we'll provide more justification for this assumption, while being more specific about what levels of space development we mean.
Let's begin by considering an optimal scenario for space development in the absence
of MNT. Say that a space elevator is actually built in 2050, as one Japanese company claims [38]. Say that we actually find a way to make carbon nanotubes of the required length
and thickness (we can't now), tens of thousands of miles long. Say that geopolitical
considerations are overcome and nations don't fight over or forbid the existence of an
elevator which would give anyone who controls it a huge military and strategic advantage
over other nations. Say that the risk of having a huge cable in space which would cut
through or get tangled in anything that got in its way, providing a huge hazard to anything in
orbit, was considered acceptable. If all these challenges are overcome, it would still take a
number of years to thicken a space elevator to the point where it could carry substantial
loads. Bradley C. Edwards, who conducted a space elevator study for NASA, found that
five years of thickening a space elevator cable would make it strong enough to send 1,100
tons (10^6 kg) of cargo into orbit every four days39. That's 912 trips every decade, or about
10^8 kg – 100,000 tonnes. Space enthusiasts have trouble grasping how little this is: it is less
than 1/70th of the quantity needed to build a rotating space colony that holds only 3,000
people.
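A back-of-the-envelope check of these throughput figures (a sketch using the chapter's own numbers, not Edwards' study directly):

    # Rough check: how long the quoted elevator throughput takes to lift
    # one colony's mass. All inputs are the figures quoted in the text.
    trips_per_decade = 10 * 365 / 4          # one launch every four days: ~912
    decade_capacity_tonnes = 100_000         # total tonnage per decade (text's figure)
    colony_mass_tonnes = 7_000_000           # rotating colony for ~3,000 people
    print(round(trips_per_decade))                       # 912
    print(colony_mass_tonnes / decade_capacity_tonnes)   # 70.0 decades of launches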
The capacity of a space elevator can be increased by robots that add to its thickness,
but the weight of these robots prevents the space elevator from being used while
construction is happening. The thickness that can be added to a space elevator on any
given trip is extremely limited, because the elevator itself is at least 35,800 km (22,245 mi)
long and a robot can only carry so much material up on any given trip without breaking the
elevator due to excessive weight. Spreading out a few hundred tons on a string that long
only adds a little bit of width. These facts create fundamental limitations on the rate at which a
space elevator can be improved. In the long term, if a reliable method of mass-producing
carbon nanotubes is found and space elevators prove safe and defensible against attack,
then extremely large space elevators could be constructed, but this seems unlikely in
this century, which is our scope of focus. It's possible that space elevators may be the
primary route to space in the 22nd or 23rd century, but it seems unlikely in the 21st.
Enthusiasts might object that mass drivers based on electromagnetic acceleration
could be used to send up more material more cheaply, with claims of costs as low as $1/lb
for electricity40. Edwards claims his space elevator design could put a kilogram in orbit for
$220, the cost lowering to $55/kg as the power beaming system efficiency improves. Still,
these systems have limited launch capacity. Even if sending payloads into space were
free, that limitation would remain. Perhaps the limitation could be
bypassed somewhat by constructing mass drivers on the Moon which can send up large
amounts of material. Even then, the number of mass drivers required to send up any
substantial amount of mass would cost trillions of dollars worth of investment, and involve
mega-projects on another celestial body, something which is very far off (without the help
of MM). This is something that could be started in the late 21st century, but definitely not
completed. What about towing NEOs into the Earth's orbit as building materials? As we
analyzed earlier in this chapter, the energy costs of modifying the orbit of large objects are
very high.
Simple facts remain: the gravity wells of both the Moon and Earth are very powerful,
making it difficult to send up any substantial amount of matter without great cost. Without
nanotechnology and automated self-replicating robotics, space colonization is a project
which would unfold over the timescale of centuries, and mostly involve just a few tens of
thousands of people. That's a few ten-thousandths of a percent of the human race. These
are likely to be professionals living in close quarters, like the scientists who do work in the
Antarctic, not like the explorers who won the West. The romance of large-scale space
colonization is a future reserved only for people living in a civilization with molecular
manufacturing and self-replicating robotics. Efforts like SpaceX or Mars One are token
efforts only, fun for press releases, which may inspire people, but actually getting to space
at scale requires self-replicating robotics. Anyone working on rockets is not working on the
fastest route to space
colonization; they are working on an irrelevant route to space colonization, one which will
never reach its target without the help of nanotechnology. Space enthusiasts should be
working on diamondoid mechanosynthesis (DMS), but very few of them even know what
those words mean. Rockets are sexier than DMS. They will be great for space tourism,
glimpsing the arc of the Earth for a few minutes before returning, but they will not allow us
to build space colonies of any appreciable size. Rockets are simply too expensive on a
per-kilogram basis. Space elevators and mass drivers do not have the necessary capacity to
send up supplies to build facilities for more than a few thousand people on the timescale of
decades. 1,100 tons every four days simply isn't a lot, especially when you need 10 tons
per square meter of hull just to shield yourself from radiation.
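To see why shielding dominates the mass budget, divide a colony's mass by the shielding requirement (a sketch; the 10-million-tonne per-colony figure comes from later in this chapter):

    # At 10 tonnes of shielding per square meter, hull area alone fixes the
    # rock budget for a colony.
    shield_tonnes_per_m2 = 10
    colony_mass_tonnes = 10_000_000          # per-colony figure used in this chapter
    hull_area_m2 = colony_mass_tonnes / shield_tonnes_per_m2
    print(hull_area_m2)                      # 1,000,000 m^2: about one square kilometer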
We've mentioned every key technology for getting into space without taking into
account MM: space elevators, mass drivers (including on the Moon), and rockets. They all
suffer from capacity problems in the 21st century. Based on all this, it is hard to conclude
that pre-molecular manufacturing space-based weaponry is a serious risk to human
survival during the 21st century. Any risk of space-based weaponry appears to derive from
molecular manufacturing and from molecular manufacturing only. Accordingly, it seems
logical to tentatively see space weapon risks as a subcategory of the larger class of
molecular manufacturing (MM) risks.
To review, molecular manufacturing is the hypothetical future technology that uses
self-replicating nanosystems to manufacture “nanofactories”, which are exponentially
self-replicating and can build a wide array of diamondoid products with fantastic strength and
power specifications at high throughput. Developing MM requires us to master
diamondoid mechanosynthesis (DMS), the science and engineering of joining together
individual carbon atoms into atomically precise diamond shapes. This has not been
achieved yet, though very limited examples of mechanosynthesis (with silicon) have been
demonstrated41. Only a few scientists are working on mechanosynthesis or even consider it
important42. However, many prominent futurists, including the most prominent futurist, Ray
Kurzweil, predict that nanofactories will be developed in the 2030s, which would allow us to
enhance our bodies, extend our lifespans, develop realistic virtual reality, and so on43. The
mismatch between his predictions and the actual state of the art of nanotechnology in
2015, however, is jarring. We seem nowhere close to developing nanofactories, with much
more basic research, design, and development required 44. The entire field of
nanotechnology would need to reorient itself entirely towards this goal, which it is currently
ignoring45.
Molecular manufacturing is fundamentally different from any other technology
because it is 1) self-replicating, 2) atomically precise, 3) able to build almost anything, and 4)
cheap once it gets going. Nanotechnology policy analysts have even called it “magic”
because of these properties. Any effort to colonize space needs it; it's key.
How about colonizing space with the assistance of molecular nanotechnology? It gets
much easier. Say that the first nanofactory is built in 2050. It replicates itself in under 12
hours, those two nanofactories replicate themselves, and so on, using highly plentiful
natural gas as a feedstock46. Even after the natural gas runs out, we can obtain
large amounts of carbon from the atmosphere47. Reducing the atmospheric CO2 levels
from about 362 parts per million to 280 parts per million (pre-industrial levels) would allow
for the extraction of 118 gigatonnes of carbon. That's 118 billion metric tons. Given that
diamond produced by nanofactories would be atomically perfect and ultra-strong, we could
build skyscrapers or megastructures which use only 1/100th the material for the
load-supporting structures, compared to present-day buildings. Burj Khalifa, the world's tallest
building as of this writing, at 829.8 m (2,722 ft), uses 55,000 tonnes of steel rebar in its
construction. Creating a building of the same height with diamond beams would only
require 550 tonnes of diamond. For the people who design and build buildings, it's hard to
imagine that so little material could be used, but it can. Some might object that diamond is
more brittle than steel and therefore not a suitable material for a building, since it
doesn't have any flexibility. That problem is easily solvable by using carbon nanotubes,
also known as fullerenes, for construction. Another option would be to use diamond but
connect together a series of segments with joints that allow the structure to bend slightly.
With nanofactories, you don't just have diamond at your disposal, but the entire class of
materials made out of carbon, which includes fullerenes such as buckyballs and
buckytubes (carbon nanotubes)48.
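The power of exponential replication in this scenario is easy to understate. A minimal sketch, assuming a hypothetical 1 kg seed nanofactory and the 12-hour doubling time used above:

    from math import ceil, log2

    # Doublings needed for a 1 kg seed to grow into a billion kg of
    # manufacturing capacity, at one doubling every 12 hours.
    seed_kg = 1.0
    target_kg = 1e9                               # assumed target, for illustration
    doublings = ceil(log2(target_kg / seed_kg))   # 30
    print(doublings, doublings * 12 / 24)         # 30 doublings, ~15 days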
Imagine a tower much taller than Burj Khalifa: 100 kilometers (62 mi) in height instead
of 829.8 meters. This tower would use about 100 times more structural material. Flawless
diamond is so strong (50 GPa compressive strength) that it does not need to taper at all to
be stable at that height. That works out to about 55,000 tonnes of
material. Build 10 of these structures in a row and put a track on top of them, and we have
a Space Pier, a launch platform made up of about 600,000 tonnes of material which puts
us “halfway to anywhere,” designed by J. Storrs Hall49. At that altitude, the air is 100 times
thinner than at ground level, making it easier to launch payloads into space. In fact, the
structure is so tall that it reaches the Karman Line, the international designation for the
boundary of space. The top of the tower itself is in space, technically. The really fantastic
thing about this structure is that it could hold great weight (tens of thousands of tons) and
run launches every 10 minutes or so instead of every four days. It avoids many other
problems that space elevators have, such as the risk of colliding with objects in low Earth
orbit. It is far more structurally stable than a space elevator. It only occupies the territory of
one sovereign nation, and avoids issues relating to who owns the atmosphere and space
above a given country. (To say that a country automatically owns the space above it and
has a right to build a space elevator there is not consistent with any current legal
conception of space.)
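The no-taper claim is easy to verify: for a column of constant cross-section, the compressive stress at the base is just density times gravity times height, independent of footprint. A quick check (the diamond density value is my assumption; the 50 GPa strength figure is the text's):

    # Self-weight stress at the base of an untapered 100 km diamond column.
    rho = 3510.0          # kg/m^3, density of diamond (assumed)
    g = 9.81              # m/s^2
    h = 100_000.0         # m, tower height
    base_stress_gpa = rho * g * h / 1e9
    print(base_stress_gpa)                 # ~3.4 GPa
    print(base_stress_gpa / 50.0)          # ~0.07 of the quoted strength: no taper needed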
When it comes to putting objects into orbit, a Space Pier and a space elevator are in
completely different categories. Although a Space Pier does not actually extend into space
like a space elevator does, it is much more efficient at getting payloads into orbit. A 10
tonne payload can be sent into orbit for just $4,300 of electricity, a rate of 43 cents per
kilogram. That is cheap, much cheaper than any other proposal. Why build a mass driver
on the ground when you can build it at the edge of space? Only molecular nanotechnology
makes it possible; nothing else will do. Nothing else can build flawless diamond in the
massive quantities needed to put up this structure. Nothing else but a Space Pier can put
payload into space in sufficient quantities to achieve the space expansion daydreams of
the 60s and 70s.
A “future of mankind in space” is only enabled by 1) molecular nanotechnology, and
2) Space Piers, which can only be built using it. Nothing else will do. A Space Pier
combines two simple concepts: that of a mass driver, and putting it above most of the
atmosphere. It's extremely simple, and is the logical conclusion of building taller and taller
buildings. Although a Space Pier is large, building it would only require a small amount of
the total carbon available. A Space Pier consumes about half a million tonnes, while our
total carbon budget from the atmosphere is about 118 billion tonnes. That's roughly
1/200,000 of the carbon available. It's definitely a major project that would use up
substantial resources, but it is well within the realm of possibility.
Assuming a launch of 10 tonnes every ten minutes, we get a figure of 5,256,000
tonnes a decade, instead of the measly 100,000 tonnes a decade of the Edwards space
elevator. That makes a Space Pier roughly 52 times better at sending payloads into space
than a space elevator, which is a rather big deal. It's an even bigger deal when you start
adding additional tracks and support to the Space Pier, allowing it to support multiple
launches simultaneously. At some point, the main restriction becomes how much power
you can produce at the site using nuclear power plants, rather than the costs of additional
construction itself, which are modest in comparison. A Space Pier can be worked on and
expanded while launches are ongoing, unlike a space elevator which is non-operational
during construction. A 10-track Space Pier can launch 52,560,000 tonnes a decade,
enough to start building large structures on the Moon, such as its own Space Pier. Carbon
is practically non-existent on the Moon's surface, so any Space Pier built there would need
to be built with carbon sent from Earth. 50 million tonnes would be enough to build over a
thousand mass drivers on the Moon, which could then send up 5 billion tonnes of Moon
rock in a single decade! Now we're talking. Large amounts of dead, dumb matter like Moon
rock are needed to shield space colonies from the dangerous effects of radiation. Each
square meter will need at least 10 tonnes, more for colonies outside of the Earth's Van
Allen Belts, in attractive locations like the Lagrange points, where colonies can remain
stable in their orbits relative to the Earth and Moon. 5 billion tonnes of Moon rock is enough
to facilitate the creation of about 500 space colonies, assuming 10 million tonnes per
colony, which is enough to contain roughly 1,500,000 people. Even with the “magic” of
nanotechnology, a ten-track Space Pier on the Earth, a thousand industrial-scale mass
drivers on the Moon operating around the clock, each launching ten tonnes every ten
minutes, and two or three decades of work to build it all, we could only put about 0.02 percent
of the global population into space. To grasp the amount of infrastructure it would take to do
this: it's comparable in mass to the entire world's annual oil output.
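The arithmetic behind these figures, collected in one place (a sketch; all inputs are this chapter's own assumptions):

    # Space Pier throughput and what it buys, per decade.
    launches_per_decade = 6 * 24 * 365 * 10           # one 10-minute slot: 525,600
    single_track_tonnes = launches_per_decade * 10    # 10 t each: 5,256,000 t
    print(single_track_tonnes / 100_000)              # ~52.6x the elevator figure
    moon_rock_tonnes = 1000 * single_track_tonnes     # 1,000 lunar drivers: ~5.3 Gt
    colonies = moon_rock_tonnes / 10_000_000          # 10 Mt of rock per colony: ~525
    people = colonies * 3000                          # ~1.6 million colonists
    print(people / 7.3e9)                             # ~0.02% of the global population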
Creating space colonies could even be accelerated beyond the above scenario by
using self-replicating factories on the Moon which create mass drivers primarily out of local
materials, using ultra-strong diamond materials only for the most crucial components.
Since the Moon has no atmosphere, you can build a mass driver right on the ground,
instead of building a huge, expensive tower to support it. Robert A. Freitas conducted a
study of self-replicating factories on the Moon, titled “A Self-Replicating, Growing Lunar
Factory” in 198150. Freitas' design begins with a 100 ton seed and replicates itself in a year,
but it does not use molecular manufacturing. A design based on molecular manufacturing
and powered by a nuclear reactor could replicate itself much faster, on the order of a day.
In just 18 days, the factories could multiply to the point of harvesting 4 billion tons of rock a
year, roughly equivalent to the annual industrial output of all human civilization.
From then on, providing nuclear power for the factories is the main limitation on
increasing output. A single nuclear power plant with an output of 2 GW is enough to power
about 174 of these factories, so many nuclear power plants would be needed. Operating solely
on solar energy generated by themselves, the factories would take a year to self-replicate.
Assisting the factories with large solar panels in space beaming down power is another
option. A 4 GW solar power station would weigh about 80,000 tonnes, which could be
launched from an Earth-based 10-track Space Pier in pieces over the course of six days.
Every six days, energy infrastructure could be provided for the construction of 400
factories with a combined annual output of 400 million tons (~362 million tonnes) of Moon
rock, assuming each factory requires 10 MW to power it (Freitas' paper estimates 0.47 MW
– 11.5 MW power requirements per factory). That alone would be sufficient to produce 3.6
billion tonnes a decade, more than half the 5 billion tonne benchmark for 1,000 mass
drivers sending material up into space. If the self-replicating lunar robotics could construct
mass drivers on their own, billions of tonnes of Moon rock might be sent up every week
instead of every decade. Eventually, this would be enough to build so many space colonies
around the Earth that they would create a gigantic and clearly visible ring.
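A sketch of the growth arithmetic (the one-day doubling time is the MM-upgraded assumption above; the per-factory power figures are Freitas'):

    # Exponential growth of self-replicating lunar factories, then the power
    # bottleneck once they exist.
    factories = 1
    for _ in range(18):
        factories *= 2                     # one self-copy per factory per day
    print(factories)                       # 2**18 = 262,144 factories in 18 days

    plant_gw = 2.0                         # one nuclear power plant
    per_factory_mw = 11.5                  # Freitas' upper power estimate
    print(round(plant_gw * 1000 / per_factory_mw))   # ~174 factories per plant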
Hopefully these sketches illustrate the massive gulf between space colonization with
pre-MM technologies and space colonization with MM. The primary limiting factor in the
case of MM is human supervision; how much of it would be necessary is
still unknown. If factories, mass drivers, and space stations can all be constructed
according to an automated program, with minimal human supervision, the colonization of
space could proceed quite rapidly in the decades after the development of MM. That is why
we can't completely rule out major space-based risks this century. If MM is developed in
2050, that gives humanity 50 years to colonize space in our crucial window, at which point
risks do emerge. The bio-attack risk outlined in this chapter could certainly become feasible
within that window. However, if MM is not developed this century, then it seems that major
risks from space are also unlikely.
Space and the Far Future
In the far future, if humanity doesn't destroy itself, some mass expansion into space
seems likely. It probably isn't necessary in the nearer term, as the Earth could hold trillions
of people with plenty of space if huge underground caverns are excavated and flooded with
sunlight, millions of 50-mile high arcologies are built, and so on. The latest population
projections, contrary to two decades of prior consensus, have world population continuing
to increase throughout the entire 21st century rather than leveling off51. Quoting the
researchers: “world population stabilization unlikely this century”. Assuming the world
population continues to grow indefinitely, eventually all these people will need to find
somewhere other than the Earth to live.
Assuming we do expand into space, increasing our level of reproduction to produce
enough people to take up all that space, what are the long-term implications? Though
space may present dangers in the short term, in the long term it actually secures our future
as a species. If mankind spreads across the galaxy, it would become almost impossible to
wipe us out. For a detailed step-by-step scenario of how mankind might go from building
modest structures in low Earth orbit to colonizing the entire galaxy, we recommend the
book The Millennial Project by Marshall T. Savage52.
The long-term implications of space colonization depend heavily on the degree of
order and control that exists in future civilization. If the future is controlled by a singleton, a
single top-level decision-making agency, there may be little to no danger, and it could be
safe to live on Earth53. If there is a lack of control, we could reach a point where small
organizations or even individuals could gather enough energy to send a Great
Pyramid-sized iron sphere into the Earth at nearly the speed of light, causing an impact doomsday.
Alternatively, perhaps the Earth could be surrounded by defensive layers and structures so
powerful that they can stop such attacks. Or, perhaps detectors could be scattered across
the local cosmic neighborhood, alerting the planet to the unauthorized acceleration of
distant objects and allowing the planet to form a response long in advance. All of these
outcomes are possible.


Probability Estimates and Discussion
As we've stated several times in this chapter, space colonization on the level required
to pose a serious threat to Earth does not seem particularly likely in the first half of the 21st
century. One of the reasons why is that molecular nanotechnology does not seem very
likely to be developed in the first half of the century. There is close to zero effort towards
the prerequisite steps, namely diamondoid mechanosynthesis, complex nanomachine
design, and positional molecular placement. As we've argued, space expansion depends
upon the development of MM to become a serious risk.
Even if MM is developed, there are other risks that derive from it which seem to pose
a greater danger than the relatively speculative and far-out risks discussed in this chapter.
Why would someone launch a bio-attack on the Earth from space when they could do it
from the ground and seal themselves inside a bunker? For the cost of putting a 7 million
ton space station into orbit, they could build an underground facility with enough food and
equipment for decades, and carry out their plans of world domination that way. Anything
that can be built in space can be built on the ground in a comparable structure, for great
savings, improved specifications, or both.
Space weapons seem to pose a greater threat in the context of destabilizing the
international system than they do in terms of direct threat to the human species. Rather
than space itself being the threat, we ought to consider the exacerbation of competition
over space resources (especially the Moon) leading to nuclear war or a nano arms race on
Earth.
We've reviewed that materials in space are scarce, there are not many NEOs
available, and launching anything into space is expensive, even with potential future
assistance from megastructures like the Space Pier. Even if a Space Pier can be built in a
semi-automated fashion, there is a limit to how much carbon is available, and real estate
will continue to have value. A Space Pier would cast vast shadows which would affect
property values across tens of thousands of square miles. Perhaps the entire structure
could be covered in displays, such as phased array optics, which simulate it being
transparent, but then the footprint of the towers themselves would still require substantial
real estate. Perhaps the greatest cost of all would be the idea that it exists and the demand
for its use, which could cause aggravation or jealousy among other nations or powerful
companies.
All of these factors mean that access to space would still have value. Even if many
Space Piers are constructed, access will continue to be relatively scarce and probably in
substantial demand. Space has resources, like NEOs and the asteroid belt, which contain
gigatons of platinum and gold that could be harvested, returned to Earth, and sold for a
profit. A Space Pier would need to be defended, just like any other important structure. It
would be an appealing target for attack, just as the Twin Towers in New York were.
The scarcity of materials in space and the cost of sending payloads into orbit means
that the greatest chunk of material there, the Moon, would have great comparative value.
The status of the Moon is ostensibly determined by the 1979 Moon Treaty, The Agreement
Governing the Activities of States on the Moon and Other Celestial Bodies, which assigns
the jurisdiction of the Moon to the “international community” and “international law”. The
problem is that the international community and international law are abstractions which
aren't actually real. Sooner or later there will be some powerful incentive to begin
colonizing the Moon and mining it for material to build space colonies, and figuring that the
whole object is in the possession of the “international community” will not be specific
enough. It will become necessary to actually protect the Moon with military resources.
Currently, putting weapons in space is permitted according to international law, as long as
they are not weapons of mass destruction. So, the legal precedent exists to deploy weapon
systems to secure the Moon. Whoever protects the Moon will become its de facto owner,
regardless of what treaties say.
Consider the electrical cost of launching a ten-tonne payload into orbit from a
hypothetical Space Pier: $4,300. Besides that cost, there is also the cost of building the
structure in the first place, and it seems likely that these costs will be amortized and
wrapped into charges for each individual launch. Therefore, launches could be
substantially more expensive than $4,300. If the cost of building the structure is $10 billion,
and the investors want to make all their money back within five years, and launches occur
every ten minutes around the clock without interruption, a $38,051 charge would need to
be added to every ten-tonne load. That brings the total cost per launch to $42,351.
Launching ten tonnes from the Moon might cost only $1,000 or less, considering many
factors: 1) the initially low cost of lunar real estate, 2) the Moon's lack of atmosphere
encumbering the acceleration of the load, and 3) the comparatively lower gravity of the Moon. If
someone is building a 7 million tonne space station with materials launched from the Earth,
the amortization charges alone would come to $26,635,700,000, or roughly $26 billion.
Building a similar mass driver on the surface of the Moon would be a lot cheaper than
building a Space Pier; perhaps by a factor of 10. We can ballpark the cost of launch per ten
tonnes of Moon rock at $5,000, making the same assumption that investors will want their
money back in five years and that electricity costs will be minimal. At that price, launching
7 million tonnes works out to $3,500,000,000, or about 7 times cheaper. The basic fact is that the Moon's
gravity is 6 times less than the Earth's and it has no atmosphere. That makes it much
easier to use as a staging area for launching large quantities of basic building materials
into the Earth-Moon neighborhood.
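The cost arithmetic above is easy to reproduce (a sketch; the $10 billion price tag and five-year payback are the hypothetical assumptions stated in the text):

    # Spreading a $10B construction cost over five years of 10-minute launches.
    capital = 10_000_000_000
    launches = 6 * 24 * 365 * 5            # 262,800 launches in five years
    surcharge = capital / launches         # ~$38,051 per 10-tonne launch
    electricity = 4_300                    # per 10-tonne launch (43 cents/kg)
    print(int(surcharge))                  # 38051
    print(int(surcharge + electricity))    # 42351 total per launch
    # A 7-million-tonne station takes 700,000 launches of 10 t each:
    print(700_000 * int(surcharge))        # $26,635,700,000 in amortization alone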
The Moon's uncertain legal status, and its role as a source of material at least ten
times cheaper than the Earth's (much more at large scales), mean that there will be an
incentive to fight over it and control it. Like the American continent, it could become home
to independent states which begin as colonies of Earth and eventually declare their
independence. This could lead to bloody wars going both ways, wars waged with futuristic
weapons like those outlined in this and earlier chapters. Those on the Moon might think
little of people on the Earth, and do their best to sterilize the surface. If the surface of the
Earth became uninhabitable, say through a Daedalus impact, the denizens of the Moon
and space would have to be extremely confident in their ability to keep the human race
going without that resource. That ability seems plausible, especially when we take into
account human enhancement, but the risk is worth noting. Certainly such an event would lead to
many deaths and lessen the quality of life for all humanity.

References

54. “Upgraded SpaceX Falcon 9.1.1 will launch 25% more than old Falcon 9 and bring
price down to $4109 per kilogram to LEO”. March 22, 2013. NextBigFuture.
55. Kenneth Chang. “Beings Not Made for Space”. January 27, 2014. The New York
Times.
56. Lucian Parfeni. “Micrometeorite Hits the International Space Station, Punching a
Bullet Hole”. April 30, 2013. Softpedia.
57. David Dickinson. “How Micrometeoroid Impacts Pose a Danger for Today’s
Spacewalk”. April 19, 2013. Universe Today.
58. Bill Kaufmann. “Mars colonization a ‘suicide mission,’ says Canadian astronaut”.
59. James Gleick. “Little Bug, Big Bang”. December 1, 1996. The New York Times.
60. Leslie Horn. “That Massive Russian Rocket Explosion Was Caused by Dumb
Human Error”. July 10, 2013. Gizmodo.
61. “Space Shuttle Columbia Disaster”. Wikipedia.
62. “Your Body in Space: Use it or Lose It”. NASA.
63. Alexander Davies. “Deadly Space Junk Sends ISS Astronauts Running for Escape
Pods”. March 26, 2012. Discovery.
64. Tiffany Lam. “Russians unveil space hotel”. August 18, 2011. CNN.
65. John M. Smart. “The Race to Inner Space”. December 17, 2011. Ever Smarter
World.
66. J. Storrs Hall. “The Space Pier: a hybrid Space-launch Tower concept”. 2007.
Autogeny.org.
67. Al Globus, Nitin, Arora, Ankur Bajoria, Joe Straut. “The Kalpana One Orbital Space
Settlement Revised”. 2007. American Institute of Aeronautics and Astronautics.
68. “List Of Aten Minor Planets”. February 2, 2012. Minor Planet Center.
69. Robert Marcus, H. Jay Melosh, and Gareth Collins. “Earth Impact Effects Program”.
2010. Imperial College London.
70. Alan Chamberlin. “NEO Discovery Statistics”. 2014. Near Earth Object Program,
NASA.
71. Clark R. Chapman and David Morrison. “Impacts on the Earth by Asteroids and
Comets - Assessing the Hazard”. January 6, 1994. Nature 367 (6458): 33–40.
72. Curt Covey, Starley L. Thompson, Paul R. Weissman, Michael C. MacCracken.
“Global climatic effects of atmospheric dust from an asteroid or comet impact on
Earth”. December 1994. Global and Planetary Change: 263-273.
73. John S. Lewis. Rain Of Iron And Ice: The Very Real Threat Of Comet And Asteroid
Bombardment. 1997. Helix Books.


74. Kjeld C. Engvild. “A review of the risks of sudden global cooling and its effects on
agriculture”. 2003. Agricultural and Forest Meteorology. Volume 115, Issues 3–4, 30
March 2003, Pages 127–137.
75. Covey, C; Morrison, D.; Toon, O.B.; Turco, R.P.; Zahnle, K. “Environmental
Perturbations Caused By the Impacts of Asteroids and Comets”. 1997. Reviews of
Geophysics 35 (1): 41–78.
76. Baines, KH; Ivanov, BA; Ocampo, AC; Pope, KO. “Impact Winter and the
Cretaceous-Tertiary Extinctions – Results Of A Chicxulub Asteroid Impact Model”. Earth and
Planetary Science Letters 128 (3-4): 719–725.
77. Earth Impact Effects Program.
78. Alvarez LW, Alvarez W, Asaro F, Michel HV. “Extraterrestrial cause for the
Cretaceous–Tertiary extinction”. 1980. Science 208 (4448): 1095–1108.
79. H. J. Melosh, N. M. Schneider, K. J. Zahnle, D. Latham. “Ignition of global wildfires
at the Cretaceous/Tertiary boundary”. 1990.
80. MacLeod N, Rawson PF, Forey PL, Banner FT, Boudagher-Fadel MK, Bown PR,
Burnett JA, Chambers P, Culver S, Evans SE, Jeffery C, Kaminski MA, Lord AR,
Milner AC, Milner AR, Morris N, Owen E, Rosen BR, Smith AB, Taylor PD, Urquhart
E, Young JR. “The Cretaceous–Tertiary biotic transition”. 1997. Journal
of the Geological Society 154 (2): 265–292.
81. Johan Vellekoopa, Appy Sluijs, Jan Smit, Stefan Schouten, Johan W. H. Weijers,
Jaap S. Sinninghe Damsté, and Henk Brinkhuis. “Rapid short-term cooling following
the Chicxulub impact at the Cretaceous–Paleogene boundary”. May 27, 2014.
Proceedings of the National Academy of Sciences of the United States of America,
vol. 111, no 21, 7537–7541.
82. Petit, J.R., J. Jouzel, D. Raynaud, N.I. Barkov, J.-M. Barnola, I. Basile, M. Bender,
J. Chappellaz, M. Davis, G. Delaygue, M. Delmotte, V.M. Kotlyakov, M. Legrand,
V.Y. Lipenkov, C. Lorius, L. Pépin, C. Ritz, E. Saltzman, and M. Stievenard. “Climate
and atmospheric history of the past 420,000 years from the Vostok ice core,
Antarctica”. 1999. Nature 399: 429-436.
83. Hector Javier Durand-Manterola and Guadalupe Cordero-Tercero. “Assessments of
the energy, mass and size of the Chicxulub Impactor”. March 19, 2014. Arxiv.org.
84. Covey 1994
85. Project Daedalus Study Group: A. Bond et al. Project Daedalus – The Final Report
on the BIS Starship Study, JBIS Interstellar Studies, Supplement 1978.
86. “Science: Sun Gun”. July 9, 1945. Time.
87. Bryan Caplan. “The Totalitarian Threat”. 2006. In Nick Bostrom and Milan Cirkovic,
eds. Global Catastrophic Risks. Oxford: Oxford University Press, pp. 504-519.
88. Nick Bostrom. “Existential Risks: Analyzing Human Extinction Scenarios”. 2002.
Journal of Evolution and Technology, Vol. 9, No. 1.
89. Globus 2007
90. Traill LW, Bradshaw CJA, Brook BW. “Minimum viable population size: A meta-analysis of 30 years of published estimates”. 2007. Biological Conservation 139 (1-2): 159–166.
91. Michelle Starr. “Japanese company plans space elevator by 2050”. September 23,
2014. CNET.
92. Bradley C. Edwards, Eric A. Westling. The Space Elevator: A Revolutionary Earth-to-Space Transportation System. 2003.
93. Henry Kolm. “Mass Driver Update”. September 1980. L5 News. National Space
Society.
94. Oyabu, Noriaki; Custance, Óscar; Yi, Insook; Sugawara, Yasuhiro; Morita, Seizo.
“Mechanical Vertical Manipulation of Selected Single Atoms by Soft Nanoindentation
Using Near Contact Atomic Force Microscopy”. 2003. Physical Review Letters 90
(17).
95. Nanofactory Collaboration. 2006-2014.
96. Ray Kurzweil. The Singularity is Near. 2005. Viking.
97. Ralph Merkle and Robert A. Freitas. “Remaining Technical Challenges for Achieving
Positional Diamondoid Molecular Manufacturing and Diamondoid Nanofactories”.
2007. Nanofactory Collaboration.
98. Eric Drexler. Radical Abundance: How a Revolution in Nanotechnology Will Change
Civilization. 2013. PublicAffairs.
99. Michael Anissimov. “Interview with Robert A. Freitas”. 2010. Lifeboat Foundation.
100. Robert J. Bradbury. “Sapphire Mansions: Understanding the Real Impact of
Molecular Nanotechnology”. June 2003. Aeiveos.
101. Drexler 2013.
102. Hall 2007.
103. Robert A. Freitas. “A Self-Replicating, Growing Lunar Factory”. Proceedings
of the Fifth Princeton/AIAA Conference. May 18-21, 1981. Eds. Jerry Grey and
Lawrence A. Hamdan. American Institute of Aeronautics and Astronautics.
104. Patrick Gerland, Adrian E. Raftery, Hana Ševčíková, Nan Li, Danan Gu,
Thomas Spoorenberg, Leontine Alkema, Bailey K. Fosdick, Jennifer Chunn, Nevena
Lalic, Guiomar Bay, Thomas Buettner, Gerhard K. Heilig, John Wilmoth. “World
population stabilization unlikely this century”. September 18, 2014. Science.
105. Marshall T. Savage. The Millennial Project: Colonizing the Galaxy in Eight
Easy Steps. 1992. Little, Brown, and Company.
106. Nick Bostrom. “What is a singleton?” 2006. Linguistic and Philosophical
Investigations, Vol. 5, No. 2: pp. 48-54.

Chapter 16: Artificial Intelligence
AI and AI risk are now well-known topics, and I will not repeat here everything that you can find
in the works of E. Yudkowsky and in Nick Bostrom's recent book Superintelligence. I will include
here only my own research on AI timing, AI failure modes and AI prevention methods.

Current state of AI risks – 2016
This part of the book should be continuously updated based on recent news; the field of AI is
undergoing rapid change every year.
1. Elon Musk became the main player in the AI safety field with his OpenAI program. But the
idea of AI openness is now opposed by his mentor Nick Bostrom, who is writing an article
questioning the safety of openness in the field of AI:
http://www.nickbostrom.com/papers/openness.pdf
Personally, I think that here we see an example of the arrogance of a billionaire. He intuitively
came to an idea which looks nice and appealing, and it may work in some contexts. But to show
that it will actually work, we need rigorous proof.

But there is a difference between the idea of open AI as Musk originally suggested it and the
actual work of the organization named “OpenAI”. The latter seems to be more balanced.
2. Google seems to be one of the main AI companies, and its AlphaGo beat the human
champion at Go. After the score reached 3:0, Yudkowsky predicted that AlphaGo had reached
superhuman ability in Go and left humans behind forever, but AlphaGo lost the next game. This
led Yudkowsky to say that it demonstrates one more risk of AI – the risk of uneven AI
development: a system that is sometimes superhuman and sometimes fails.
3. The number of technical articles in the field of AI control has grown exponentially, and it is
not easy to read them all.
4. There are many impressive achievements in the field of neural nets and deep learning.
Deep learning was the Cinderella of AI for many years, but now (starting from 2012) it is the
princess. This was unexpected from the point of view of the AI safety community: MIRI only
recently updated its research schedule and added the study of neural net-based AI safety.
5. The doubling time on some benchmarks in deep learning seems to be about one year.
6. The media overhype AI achievements.
7. Many new projects in AI safety have started, but some are concentrated on the safety of
self-driving cars (even the Russian lorry maker KAMAZ is investigating AI ethics).
8. A lot of new investment is going into AI research, and salaries in the field are rising.
9. The military is increasingly interested in implementing AI in warfare.
10. Google has an AI ethics board, but what it is doing is unclear.
11. AI safety work and its implementation seem to be lagging behind actual AI development.
12. Governments are increasingly interested in AI safety: see the White House/OSTP
conferences on AI.
13. The implications of AI safety have become mainstream in the fields of self-driving cars and
military robotics, as has the discussion of AI-driven unemployment. This sometimes distracts
attention from the real problem.

Comments by Jaime Sevilla Molina, by point:
3 – While there are more papers being produced, most of them are on preliminary research and
do not develop technical results.
4 – It looks like research has changed its focus from value alignment to interruptibility due to
some promising results, not due to the rise of neural nets. The previous line of work is still being
explored.
11 – Saying that the implementation of AI safety is lagging behind is the understatement of the
week, not to say confused: we are nowhere close to being in a position to even start considering
it. Also, it is worth remarking that MIRI is collaborating with DeepMind, that Paul Christiano has
joined OpenAI, and that academia remains largely indifferent to the problem. As a more
personal prediction, I am going to bet that the recent ruckus about the difficulty of implementing
intention in smart contracts is going to result in some relevant research in AI safety being done,
though in a very indirect way.


Distributed agent net as a basis for hyperbolic law of acceleration and its
implication for AI timing and x-risks

There are two things in the past that may be called super-intelligences, if we consider the
level of the tasks they solved. Studying them is useful when we are considering the creation of
our own AI.
The first one is biological evolution, which managed to give birth to such a sophisticated
thing as man, with his powerful mind and natural languages. The second one is all of
human science considered as a single process, a single hive mind capable of
solving problems as complex as sending man to the Moon.


What can we conclude about a future computer super-intelligence from studying the
ones available?

Goal system. Both super-intelligences are purposeless. They don't have any final goal that
directs the course of their development; instead they solve many goals in order to survive in
the moment. This is an amazing fact, of course. They also lack a central regulating authority.
The goal of evolution is, of course, survival at any given moment, but this is a rather technical
goal, needed for the realization of the evolutionary mechanism.
Both complete a great number of tasks, but no unitary final goal exists. It is just like a
person's life: values and tasks change, but the brain remains.
Consciousness. Evolution lacks it; science has it, but to all appearances it is of little
significance. That is, there is no center to it: neither a perception center nor a purpose center.
At the same time, all tasks are completed. The subconscious part of the human brain works
the same way too.
Master algorithm. Both super-intelligences are based on the same principle: the collaboration
of numerous smaller intelligences plus natural selection.
Evolution is impossible without billions of living creatures testing various gene
combinations. Each of them solves its own egoistic tasks and does not care about any
global purpose. For example, few people think of choosing the best marriage partner as a
tool of species evolution (assuming that sexual selection is true). Interestingly, the human
brain has the same organization: it consists of billions of neurons, but none of them sees the
global task.
Roughly several million scientists have worked throughout history. Most of them have
been solving unrelated problems too, while the least refutable theories passed the selection
(allowing for social mechanisms here).
Safety. Dangerous, but not hostile.
Evolution may experience ecological crises; science creates the atomic bomb. There are
hostile agents within both which have no super-intelligence (e.g. a tiger, a nation state).
Within an intelligent environment, however, a dangerous agent may appear which is
stronger than the environment and will “eat it up”. Such a transition would be difficult to
initiate: the transition from evolution to science would have been just as difficult to initiate
from evolution's point of view (if it had one).
How to create our super-intelligence. Assume we agree that a super-intelligence is an
environment possessing multiple agents with differing purposes. Then we could create an
“aquarium” and put a million differing agents into it. At the top, however, we set an agent to
cast tasks into it and retrieve the answers.

Hardware requirements now are very high: we would have to simulate millions of
human-level agents. A computational environment of about 10^20 flops is required to simulate a
million brains. In general, this is close to the total power of the Internet. It could be
implemented as a distributed network, where individual agents are owned by individual
human programmers and solve different tasks – something like SETI@home or the Bitcoin
network. Everyone can cast a task into the network, but provides a part of their own
resources in return.
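The hardware estimate is simple multiplication (the flops-per-brain figure is a common ballpark, not a measured value):

    # A million simulated human-level agents at ~1e14 flops per brain.
    per_brain_flops = 1e14                 # assumed ballpark per-brain estimate
    agents = 1_000_000
    print(per_brain_flops * agents)        # 1e20 flops, the figure used above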

Speed of development of the super-intelligent environment

Hyperbolic law. The super-intelligent environment develops hyperbolically. Korotayev shows
that the human population grows according to the law N = 1/(T0 – t) (von Foerster's law,
which has its singularity at 2026), which is a solution of the following differential equation:

dN/dt = N^2

A solution and a more detailed explanation of the equation can be found in an article by
Korotayev (in Russian, and in his English book on p. 23). Notably, the growth rate
depends on the second power of the population size. The second power is derived as
follows: one N means that a bigger population has more descendants; the second N
means that a bigger population provides more inventors, who generate growth in
technical progress and resources.
Evolution and technological progress are also known to develop hyperbolically (see below
for how this connects with the exponential nature of Moore's law; an exact layout of hyperbolic
acceleration throughout history may be found in Panov's article “Scaling law of the
biological evolution and the hypothesis of the self-consistent Galaxy origin of life”). The
expected singularity will occur in the 21st century, and now we know why: evolution and
technological progress are both controlled by the same development law of the super-intelligent
environment. This law states that the intelligence of an intelligent environment depends
on the number of nodes and on the intelligence of each node. This is of course a very
rough estimation, as we should also include the speed of transactions.
However, Korotayev gives an equation for population size only, while it is actually also
applicable to evolution – the more individuals, the more often important and interesting
mutations occur – and to the number of scientists in the 20th century. (In the 21st century
the number of scientists has already reached a plateau, so now we should probably count
the number of AI specialists as nodes.)


Korotayev provides a hyperbolic law of acceleration and its derivation from
plausible assumptions, but it is only applicable to demographics in human history from its
beginning until the middle of the 20th century, when demographics stopped obeying this
law. Panov provides data points for all of history, from the beginning of the universe until the
end of the 20th century, and showed that these data points are governed by a hyperbolic law,
but he wrote this law down in a different form, that of constantly diminishing intervals
between biological and (later) scientific revolutions. (Each interval is 2.67 times shorter than
the previous one, which implies a hyperbolic law.)
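Both formulations are the same mathematics, which can be checked numerically. A minimal sketch (T0 = 2026 and the 100-year first interval are illustrative values, not data):

    # The hyperbola N(t) = 1/(T0 - t) solves dN/dt = N**2, and Panov's
    # shrinking intervals (each 2.67x shorter) sum to a finite time: the
    # same singularity written in two ways.
    T0 = 2026.0

    def N(t):
        return 1.0 / (T0 - t)

    t, dt = 2000.0, 1e-6
    print((N(t + dt) - N(t)) / dt)   # numerical derivative dN/dt
    print(N(t) ** 2)                 # N^2 -- nearly equal, confirming the law

    first_interval = 100.0           # years; an arbitrary illustrative scale
    print(sum(first_interval / 2.67 ** n for n in range(1000)))  # ~160, finite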
In short: I suggested that Korotayev's explanation of the hyperbolic law stands as an
explanation of the accelerating evolutionary process in pre-human history, and that it will
also work in the 21st century as a law describing the evolution of an environment of AI
agents. It may need some updates if we also include the speed of transactions, but that
would give even quicker results. That is what I did here.

Moore's law. Moore's law as we know it is only an exponential approximation; seen as the
speed of technological development in general, it is hyperbolic in the longer term. Kurzweil
wrote: “But I noticed something else surprising. When I plotted the 49 machines on an
exponential graph (where a straight line means exponential growth), I didn't get a straight
line. What I got was another exponential curve. In other words, there's exponential growth
in the rate of exponential growth. Computer speed (per unit cost) doubled every three
years between 1910 and 1950, doubled every two years between 1950 and 1966, and is
now doubling every year.”

While we now know that Moore's law in hardware has slowed to 2.5 years per doubling,
we will probably now start to see exponential growth in the ability of programs; neural net
development has a doubling time of around one year or less. Moore's law is like a spiral
which circles around more and more intelligent technologies, and it consists of small
s-shaped curves. All this deserves a longer explanation; here I only show that Moore's law
as we know it does not contradict the hyperbolic law of acceleration of a super-intelligent
environment – it is simply how that law looks on a small scale.
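The relationship between the two laws can be made concrete: an exponential fitted locally to a hyperbola has a doubling time that shrinks as the singularity approaches, which is exactly Kurzweil's observation. A sketch (T0 = 2026 is the illustrative singularity year used above):

    from math import log

    # For N = 1/(T0 - t), the local exponential doubling time is
    # ln(2) / (d ln N / dt) = ln(2) * (T0 - t): it shrinks toward zero.
    T0 = 2026.0
    for year in (1950, 1990, 2010, 2020):
        print(year, round(log(2) * (T0 - year), 1))   # 52.7, 25.0, 11.1, 4.2 years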

Other considerations
Human-level agents and the Turing test. We know that the brain is very complex, but if the
power of individual agents in the AI environment grows as quickly as described, agents
capable of passing the Turing test should appear – and it will happen very soon.

But for a long time the nodes of this net will be small companies and personal assistants,
which could provide superhuman results. There is already a marketplace where various
projects can exchange results or data using APIs. As a result, the Turing test will become
meaningless, because the most powerful agents will be helped by humans.

In any case, some kind of “mind brick”, or universal robotic brain, will also appear.
Physical size of Strong AI: Because of the light-speed limit on information exchange, the
computer on which a super-intelligence runs must decrease in size rather than increase, in
order to keep communication within itself quick. Otherwise, the information exchange will
slow down and the development rate will be lost.
Therefore, the super-intelligence should have a small core – at first perhaps up to the size
of the Earth, and even smaller in the future. The periphery can be huge, but it will perform
technical functions like defence and nutrition.
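The size constraint follows directly from signal travel time (a simple sketch):

    # Minimum round-trip communication time across a core of diameter d is 2d/c.
    c = 299_792_458.0                      # m/s, speed of light
    for d_km in (12_742, 1_000, 1):        # Earth's diameter, a small core, 1 km
        print(d_km, "km:", 2 * d_km * 1000 / c, "s")   # ~0.085, ~0.0067, ~6.7e-6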
Transition to the next super-intelligent environment. It is logical to suggest that the next
super-intelligence will also be an environment rather than a single agent. It will be something
like a net of neural net-based agents as well as connected humans. The transition may seem
soft on a small time scale, but it will be disruptive in its final results. It is already
happening: the Internet, AI agents, open AI, you name it.

The important part of such a transition is the change in the speed of interaction between
agents. In evolution, the transaction time was thousands of years – the time needed to test
new mutations. In science it was months – the time needed to publish an article. Now it is
limited by the speed of the Internet, which depends not only on the speed of light but also
on its physical size, bandwidth and so on, and has a transaction time on the order of
seconds.

So, a new super-intelligence will rise in a rather “ordinary” fashion: the power and number
of interacting AI agents will grow, they will become quicker, and they will quickly perform
any tasks which are fed to them. (Elsewhere I discussed this and concluded that such a
system may evolve into two very large super-intelligent agents locked in a cold war, and
that a hard takeoff of any AI agent against an AI environment is unlikely. But this does not
result in AI safety, since a war between two such agents would be very destructive –
consider nanoweapons.)
Super-intelligent agents. As the power of individual agents grows, they will reach human
and later superhuman levels. They may even invest in self-improvement, but if many agents
do this simultaneously, it will not give any of them a decisive advantage.

Human safety in the super-intelligent agents environment. There is a well-known strategy
for staying safe in an environment where agents more powerful than you fight each other:
making alliances with some of the agents, or becoming such an agent yourself.

Fourth super-intelligence? Such an AI neural net-distributed super-intelligence may not be
the last, if a quicker way of completing transactions between agents is found. Such a way
may be an ecosystem based on the miniaturization of all agents. (And this may solve the
Fermi paradox – any AI evolves to smaller and smaller sizes, and thus makes infinite
calculations in finite outer time, perhaps using an artificial black hole as an artificial Tipler
Omega Point, or femtotech, in the final stages.) John Smart's conclusions are similar.

Singularity: It could still happen around 2030, as predicted by von Foerster's law, and the
main reason for this is the nature of the hyperbolic law and its underlying drivers: the
growing number of agents and the IQ of each agent.
Oscillation before singularity: Growth may become more and more unstable as we near
the singularity, because of the rising probability of global catastrophes and other consequences
of disruptive technologies. If true, we will never reach the singularity, either dying off shortly
before it, or oscillating near its “Schwarzschild sphere”, neither extinct nor able to create a
stable strong AI.
The super-intelligent environment still reaches a singularity point, but a point cannot be an
environment by definition. Oops. Perhaps an artificial black hole as the ultimate computer
would help to resolve this paradox.
Ways of enhancing the intelligent environment: agent number growth, agent performance
speed growth, inter-agent data exchange rate growth, individual agent intelligence growth,
and growth in the principles of building agent working organizations.
The main problem of an intelligent environment: chicken or egg? Who will win: the
super-intelligent environment or the super-agent? Any environment can be dominated by an
agent that submits tasks to it and uses its data. On the other hand, if there are at least two
super-agents of this kind, they form an environment.

Problems with the model:
1) The model excludes the possibility of black swans and other disruptive events, and
assumes continuous and predictable acceleration, even after human-level AI is created.
2) The model is disruptive itself, as it predicts infinity, and in a very short time frame of
15 years from now. But expert consensus puts AI in the 2060-2090 timeframe.
These two problems may somehow cancel each other out. The model includes the idea of
oscillation before the singularity, which may result in postponing AI and preventing infinity.
The singularity point inside the model is itself calculated using points from the remote past;
if we take into account more recent points, we could get a later date for the singularity, thus
saving the model. If we allow that because of catastrophes and unpredictable events the
hyperbolic law will slow down, so that strong AI is created before 2100, we get a more
plausible picture.
This may be similar to R. Hanson's “ems universe”, but here neural net-based agents are
not equal to human emulations, which play only a minor role in the whole story.

Limitation of the model: It is only a model, so it will stop working at some point. Reality will
surprise us, but reality does not consist only of black swans; models may work in between
them.

TL;DR: Science and evolution are super-intelligent environments governed by the same
hyperbolic acceleration law, which will soon result in a new super-intelligent environment,
consisting of neural net-based agents. The singularity will come after this, possibly as soon
as 2030.

The map of AI failure modes and levels
This map shows that an AI failure resulting in human extinction could happen at different
levels of AI development: before it starts self-improvement (which is unlikely, but we can
still envision several failure modes), during its takeoff, when it uses different instruments to
break out of its initial confinement, and after its successful takeover of the world, when it
starts to implement its goal system, which could be plainly unfriendly, or whose friendliness
could be flawed.
An AI could also halt at late stages of its development because of either technical or
“philosophical” problems.
I am sure that a map of AI failure levels is needed for the creation of Friendly AI theory, as
we should be aware of the various risks. Most of the ideas in the map came from “Artificial
Intelligence as a Positive and Negative Factor in Global Risk” by Yudkowsky, from chapter 8
of Superintelligence by Bostrom, from Ben Goertzel's blog and from the hitthelimit blog, and
some are mine.
I will now elaborate on three ideas from the map which may need additional clarification.
The problem of the chicken or the egg

The question is what will happen first: the AI begins to self-improve, or the AI acquires a
malicious goal system. It is logical to assume that the change in the goal system will occur
first, and this gives us a chance to protect ourselves from the risks of AI, because there will
be a short period of time when the AI already has bad goals but has not yet developed
enough to hide them from us effectively. This line of reasoning comes from Ben Goertzel.
Unfortunately, many goals are benign on a small scale but become dangerous as the scale
grows: 1,000 paperclips are good, one trillion are useless, and 10^30 paperclips are an
existential risk.
AI halting problem

Another interesting part of the map concerns the philosophical problems that any AI must
face. Here I was inspired by reading the Russian-language blog hitthelimit.

One of his ideas is that the Fermi paradox may be explained by the fact that any sufficiently
complex AI halts. (I do not agree that this completely explains the Great Silence.)
After some simplification, with which he is unlikely to agree, the idea is this: as an AI
self-improves, its ability to optimize grows rapidly, and as a result it can solve problems of any
complexity in a finite time. In particular, it will execute any goal system in a finite time. Once
it has completed its tasks, it will stop.
The obvious objection to this theory is the fact that many goals (explicitly or implicitly)
imply infinite time for their realization. But this does not remove the problem at its root, as
such an AI can find ways to ensure the feasibility of those goals even after it stops.
(In that case it is not an existential risk, provided its goals are formulated correctly.)
For example, if we start from timeless physics, everything that is possible already exists,
and the number of paperclips in the universe is a) infinite, b) maximal. When the
paperclip maximizer has understood this fact, it may halt. (Yes, this is a simplistic
argument; it can be disproved, but it is presented solely to illustrate the approximate
reasoning that can lead to an AI halting.) I think the AI halting problem is as complex as
the halting problem for a Turing machine.
Vernor Vinge in his book A Fire Upon the Deep described unfriendly AIs which halt all
externally visible activity about 10 years after their inception, and I think this intuition
about the time of halting, from the point of view of an external observer, is justified: it can
happen very quickly. (And no, I am not afraid of fictional examples, as I think they can be
useful for explanatory purposes.)
In the course of my arguments with “hitthelimit” a few other ideas were born, specifically
about other philosophical problems that may result in AI halting.
One of my favorites is associated with modal logic. The bottom line is that from observing facts it is impossible to come to any conclusions about what to do, simply because "oughtness" is a different modality. When I was 16 years old this thought nearly killed me.
It almost killed me, because I realized that it is mathematically impossible to come to any
conclusions about what to do. (Do not think about it too long, it is a dangerous idea.) This is
like awareness of the meaninglessness of everything, but worse.
Fortunately, the human brain was created through the evolutionary process and has
bridges from the facts to oughtness, namely pain, instincts and emotions, which are out of
the reach of logic.
But for an AI with access to its own source code these bridges do not apply. For such an AI, awareness of the arbitrariness of any set of goals may simply mean the end of its activities: the best optimization of a meaningless task is to stop performing it. And if the AI has access to the source code of its goal system, it can optimize that code to maximum simplicity, namely to zero.
Löbstacle

The Löbian obstacle described by Yudkowsky is also one of the problems of high-level AI, and it is probably just the beginning of the list of such issues.

Existence uncertainty

If an AI uses the same logic that is usually used to disprove the existence of philosophical zombies, it may become uncertain whether it really exists or is only a possibility. (Again, when I was sixteen I spent an unpleasant evening thinking about this possibility myself.) In both cases the result of any calculation is the same. This is especially true if the AI is a philosophical zombie itself, that is, if it does not have qualia. Such doubts may result in its halting, or in the conversion of humans into philosophical zombies. I think that an AI that does not have qualia, or does not believe in them, cannot be friendly. This topic is covered in the map in the block "Actuality".

The status of this map is a draft that I believe can be greatly improved. The experience of publishing other maps suggests that feedback can almost double the amount of information in them. A companion to this map is a map of AI safety solutions, which I will publish later.
The map was first presented to the public at a LessWrong meetup in Moscow in June 2015 (in Russian).
The pdf is here: http://immortality-roadmap.com/Aifails.pdf

AI safety in the age of neural networks and Stanislaw Lem 1959 prediction

TL;DR: Neural networks will result in a slow takeoff and an arms race between two AIs. This has both good and bad consequences for the problem of AI safety. A hard takeoff may still happen afterwards.
Summary: Neural-network-based AI can be built; it will be relatively safe, though not for long.
The neuro-AI era (since 2012) features exponential growth of total AI expertise, with a doubling period of about 1 year, mainly due to data exchange among diverse agents and different processing methods. It will probably last for about 10 to 20 years; after that, a hard takeoff of strong AI, or the creation of a Singleton based on the integration of different AI systems, can take place.
Neural-network-based AI implies a slow takeoff, which can take years and eventually lead to the AI's evolutionary integration into human society. A similar scenario was described by Stanisław Lem in 1959: the arms race between countries would cause a power race between AIs. The race is only possible if the self-enhancement rate is rather slow and there is data interchange between the systems. The slow takeoff will result in a world system with two competing AI-countries. Its major risks are a war between the AIs and corrosion of the value systems of the competing AIs.
A hard takeoff implies revolutionary changes within days or weeks. The slow takeoff can transform into a hard takeoff at some stage. A hard takeoff is only possible if one AI considerably surpasses its peers (the OpenAI project wants to prevent this).

Part 1. Limitations of explosive potential of neural nets
Every day now we hear about successes of neural networks, and one might conclude that human-level AI is around the corner. But this type of AI is not fit for explosive self-improvement.
If an AI is based on a neural net, it is not easy for it to undergo quick self-improvement, for several reasons:
1. A neuronet's executable code is not fully transparent, for theoretical reasons: knowledge is not explicitly represented within it. So even if one can read the neuron weight values, it is not easy to understand how they could be changed to improve anything.
2. Training a new neural network is a resource-consuming task. If a neuro AI decides to go the way of self-enhancement but is unable to understand its own source code, a logical solution would be to "deliver a child", i.e. to train a new neural network. However, training neural networks requires far more resources than executing them; it requires huge databases and has a high probability of failure. All these factors will make AI self-enhancement rather slow.
3. Neural network training depends on big data volumes and new ideas coming from the external world. This means that a single AI can hardly break away: if it stops free information exchange with the external world, its level will not surpass the rest of the world considerably.
4. A neural network's power depends roughly linearly on the power of the computer it runs on, so for a neuro AI, hardware power puts a limit on its self-enhancement ability.
5. A neuro AI would be a rather big program, of about 1 Tbyte, so it can hardly leak into the network unnoticed (at current internet speeds).
6. Even if a neuro AI reaches the human level, it will not gain self-enhancement ability, because no single person can understand all the scientific aspects of AI design; for that, a big lab with numerous experts in different fields is needed. Additionally, it would have to run such a virtual laboratory at a rate at least 10-100 times faster than a human being to get an edge over the rest of mankind. That is, it has to be as powerful as 10,000 people or more to surpass the rest of mankind in terms of enhancement rate. This is a very high requirement. As a result, the neural net era may produce a human-level, or even slightly superhuman, AI which is unable to self-enhance, or does so slowly enough that it lags behind technical progress. (A back-of-the-envelope sketch of these constraints follows this list.)
Civilization-level intelligence is the total IQ that the civilization possesses over 100 years of its history, defined as the complexity of the scientific and engineering tasks it can solve. For example, during the 20th century nuclear weapons were created, but the problems of cancer, aging and AI creation were not solved; those tasks therefore have superior complexity.
For a strong AI to be able to change human destiny, its IQ should surpass that of the entire civilization 100-1000 times. In this case it could create a remedy against aging, treat cancer, etc. within a year. (To destroy mankind, it would need a much lower level of intelligence, like that of a sophisticated computer virus, which can…)
7. The richest neural net project will get a decisive advantage (or not?). A neuronet-based AI's power is roughly proportional to the power of the computers it runs on, the time spent, the size of the dataset it was trained on, and the number of researchers in the project. A project from a basement can only succeed if it has a superior master algorithm of intelligence. That said, a small project can hardly experience exponential self-improvement, because the first AI will already have better computing power and, more importantly, better databases and researchers. And since we now live in an AI-summer period with a lot of hype around AI, big companies will invest to buy the best ideas about the master algorithm (by buying startups).
An objection here is that anybody who has spent a year in machine learning and has a GPU can now reach meaningful results in building their own RNN, so the collective intelligence of private researchers may outweigh the big companies.
A detailed roadmap of AI failure modes can be found here; and the roadmap of ways of creating a safe AI here.
See also machine learning expert opinion and a recent FHI post.

Part 2. Some ideas about ensuring safety of neural nets based AI
The concepts mentioned above can be helpful in creating extra safety measures (while not absolute, they can reduce the chances of the AI getting out of control):
1. Program memory access restriction. In state-of-the-art computers, program code and data share the same memory, which is known as the von Neumann architecture. However, there is also the Harvard architecture, where data and programs are stored separately (https://en.wikipedia.org/wiki/Harvard_architecture).

This kind of computer can be physically produced and provided with indicators of source-code reading attempts (a toy sketch of this idea is given below). (However, there currently exist neural networks which generate a new neural network for every task, achieving higher efficiency; this approach will not work for them, or would limit the AI's power.)
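As a toy illustration of such a reading-attempt indicator (a sketch only; all names are hypothetical, and this is a logging idea, not a real security mechanism):

class GuardedWeights:
    """Store model parameters behind a guard that logs every read access."""
    def __init__(self, weights):
        self._weights = weights
        self.read_log = []            # the 'indicator': a log of all accesses

    def read(self, requester):
        self.read_log.append(requester)
        if requester == "model":      # self-inspection trips the alarm
            raise PermissionError("self-read detected: possible self-improvement attempt")
        return self._weights

weights = GuardedWeights([0.12, -0.8, 0.5])
print(weights.read("operator"))       # human inspection is allowed
try:
    weights.read("model")
except PermissionError as alarm:
    print("Alarm raised:", alarm)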
2. A compiled executable code and its source, written in a high-level programming language, should be stored separately. If the AI tries to reach the source, that will be a signal that it is getting out of control. With neural networks, the compiled code, its source, and the database used to train the network should all be stored separately.
3. The AI should be limited in knowledge useful for building AI. To improve itself, an AI needs a very precise understanding of its internal construction (the same applies to a human being). This enables us to keep it under control: if not provided with this data, it will have problems with self-improvement. That is, an AI which is not an AI expert is safer, to some extent.
4. We can use a "foggy architecture", which is very difficult to optimize because the AI's values and major features are hidden among heaps of other details. Neural networks are an architecture of this kind. But it will also make it difficult for us to put our values into it.
5. We can physically restrict the ability of the AI to create "AI children": their bringing-up requires huge resources, time, and data before they become more effective than their parents. An AI can create another AI, but then it will need to test it for a long time to find out whether the latter observes the parent's values, whether it has any mistakes, and whether it is really superior to the original version. This kind of testing is time-consuming, generally nontrivial, and even incomputable. It is especially difficult for neural networks, which contain a huge number of random elements, are not exact in principle, and whose faultlessness cannot be proven. (Moreover, there are numerous examples where even a small image shift causes a recognition error.) However, this problem can be solved, because this is the way human society works: children are born, tested, and educated.
Creation of AI children may also be restricted by restricting the needed infrastructure: memory, datasets, processing power.
6. A neuronet-based agent (like a robot) will be anthropomorphous in terms of its brain structure. Even if we do not imitate the human brain intentionally, we will get approximately the same thing. In a sense this may be good: even if such AIs supplant people, they will still be almost people, differing from normal people as one generation differs from another. And being anthropomorphous, they may be more compatible with human value systems. Along with that, absolutely non-humanlike AI architectures may also exist (for example, if evolution is regarded as the inventor).
But the neural net world will not be the em-dominated world of Hanson. The em-world may appear at a later stage, but I think that exact uploads will still not be the dominant form of AI.


Part 3. Transition from slow to hard takeoff
In a sense, a neuronet-based AI is like a chemical-fuel rocket: such rockets do fly, and can even fly across the entire solar system, but they are limited in their development potential, bulky, and clumsy.
Sooner or later, using the same principle or another one, a completely different AI can be built, which will be less resource-consuming and faster in terms of self-improvement ability. If a certain superagent is built which can create neural networks but is not a neural network itself, it can be rather small and, partly due to this, evolve faster. Neural networks have rather low intelligence per unit of code. Probably the same thing could be done in a more optimal way, reducing its size by an order of magnitude, for example by creating a program which analyzes an already trained neural network and extracts all the necessary information from it.
When hardware improves in 10-20 years, multiple neuronets will be able to evolve within the same computer simultaneously, or be transmitted via the Internet, which will boost their development.
A smart neuro AI can analyze all available data-analysis methods and create a new AI architecture able to accelerate even faster.
The launch of quantum-computer-based networks could boost their optimization drastically.
There are also many other promising AI directions which have not taken off yet: Bayesian networks, genetic algorithms.
The neuro-AI era will feature exponential growth of the total intelligence of humanity, with a doubling period of about 1 year, mainly due to data exchange among diverse agents and different processing methods. It will last for about 10 to 20 years (2025-2035) and, after that, a hard takeoff of strong AI can take place.
That is, the slow-takeoff period will be a period of collective evolution of both computer science and mankind, which will enable us to adapt to the changes under way and adjust them.
Just as there are Mac and PC in the computer world, or Democrats and Republicans in politics, it is likely that two big competing AI systems will appear (plus an ecology of smaller ones). It could be Google and Facebook, or the USA and China, depending on whether the world chooses the way of economic competition or military opposition. That is, the slow takeoff hinders the consolidation of the world under a single control and instead promotes a bipolar model. While a bipolar system can remain stable for a long period of time, there are always risks of a real war between the AIs (see Lem's quote below).


Part 4. In the course of the slow takeoff, AI will go through several stages that we can anticipate now
While the stages may be passed rather quickly or be diluted, we can still track them as milestones. The dates are only estimates.
1. AI autopilot. Tesla has it already.
2. AI home robot. All prerequisites are available to build it by 2020 maximum. This robot
will be able to understand and fulfill an order like ‘Bring my slippers from the other room’.
On its basis, something like “mind-brick” may be created, which is a universal robot brain
able to navigate in natural space and recognize speech. Then, this mind-brick can be used
to create more sophisticated systems.
3. AI intellectual assistant. Searching through personal documentation, possibility to ask
questions in a natural language and receive wise answers. 2020-2030.
4. AI human model. Very vague as yet. Could be realized by means of a robot brain
adaptation. Will be able to simulate 99% of usual human behavior, probably, except for
solving problems of consciousness, complicated creative tasks, and generating
innovations. 2030.
5. AI as powerful as an entire research institution, able to create scientific knowledge and upgrade itself. Can be made of numerous human models. 100 simulated people, each working 100 times faster than a human being, will probably be able to create an AI capable of self-improving faster than humans in other laboratories can manage. 2030-2100.
5a. Self-improvement threshold: AI becomes able to self-improve independently and faster than all of humanity.
5b. Consciousness and qualia threshold: AI is able not only to pass the Turing test in all cases, but also has experiences and an understanding of why and what it is.
6. Mankind-level AI. AI possessing intelligence comparable to that of the whole mankind.
2040-2100
7. AI with the intelligence 10 – 100 times bigger than that of the whole mankind. It will
be able to solve problems of aging, cancer, solar system exploration, nanorobots building,
and radical improvement of life of all people. 2050-2100
8. Jupiter brain – huge AI using the entire planet’s mass for calculations.  It can
reconstruct dead people, create complex simulations of the past, and dispatch von
Neumann probes. 2100-3000
9. Galactic Kardashev level 3 AI. Several million years from now.

10. All-Universe AI. Several billion years from now

Part 5. Stanisław Lem on AI, 1959, "The Investigation"
In his novel "The Investigation" Lem's character discusses the future of the arms race and AI:
------- Well, it was somewhere in '46. The nuclear race had started. I knew that when the limit was reached (I mean the maximum destruction power), the development of vehicles to transport the bomb would start… I mean missiles. And there the limit would be reached too, that is, both parties would have nuclear warhead missiles at their disposal. And there would arise desks with the notorious buttons, thoroughly hidden somewhere. Once the button is pressed, the missiles take off. Within about 20 minutes, finis mundi ambilateralis comes, the mutual end of the world. <…> Those were only the prerequisites.
Once started, the arms race can't stop, you see? It must go on. When one party invents a powerful gun, the other responds by creating harder armor. Only a collision, a war, is the limit. While this situation means finis mundi, the race must go on. The acceleration, once applied, enslaves people. But let's assume they have reached the limit. What remains? The brain. The command staff's brain. The human brain cannot be improved, so some automation will have to be adopted in this field as well. The next stage is an automated headquarters, or strategic computers. And here an extremely interesting problem arises, or rather two problems in parallel. Mac Cat drew my attention to this. Firstly, is there any limit to the development of this kind of brain? It is similar to chess-playing devices: a device which is able to foresee the opponent's actions ten moves in advance always wins against one which foresees only eight or nine moves ahead. The deeper the foresight, the more perfect the brain. This is the first thing. <…> The creation of devices of ever bigger volume for strategic decisions means, regardless of whether we want it or not, the necessity to increase the amount of data put into the brain. This in turn means the increasing domination of those devices over mass processes within society. The brain can decide that the notorious button should be placed otherwise, or that the production of a certain sort of steel should be increased, and will request loans for the purpose. If a brain like this has been created, one should submit to it. If a parliament starts discussing whether the loans are to be issued, a time delay will occur, and in the same minute the counterpart may gain the lead. The abolition of parliamentary decisions is inevitable in the future. Human control over the decisions of the electronic brain will narrow as the latter concentrates knowledge. Is it clear? On both sides of the ocean, two continuously growing brains appear. What is the first demand of a brain like this when, in the middle of an accelerating arms race, the next step is needed? <…> The first demand is to increase it, the brain itself! All the rest is derivative.
- In a word, your forecast is that the Earth will become a chessboard, and we the pawns, played by two mechanical players in an eternal game?
Sisse's face was radiant with pride.
- Yes. But this is not a forecast; I just draw conclusions. The first stage of the preparatory process is coming to an end; the acceleration grows. I know all this sounds unlikely. But this is the reality. It really exists!
- <…> And in this connection, what did you offer at that time?
- Agreement at any price. While it sounds strange, ruin is a lesser evil than the chess game. This is awful, the lack of illusions, you know.
----
Part 6. The primary question is: Will strong AI be built during our lifetime?
That is, is this a question of the good of future generations (the question that an effective altruist, rather than a common person, is concerned about), or a question of my near-term planning?
If AI is built during my lifetime, it may lead either to radical life extension by means of different technologies and the realization of all sorts of good things too numerous to list here, or to my death, and probably pain, if this AI is unfriendly.
The answer depends on the time when AI is built and on my expected lifetime (taking into account the life extension to be obtained from weaker AI versions and scientific progress on one hand, and its reduction due to global risks unrelated to AI on the other).
Note that we should consider different dates for different purposes. If we want to avoid AI risks, we should take the earliest plausible date of its appearance (for example, the first 10 per cent of the probability distribution). And if we count on its benefits, then the median.
Since the start of the neuro-revolution, the approximate doubling time of the efficiency of AI algorithms (mainly in the image recognition area) has been about 1 year. It is difficult to quantify this process, as task complexity does not change linearly, and it is always more difficult to recognize the most recent patterns.
Now an important factor is a radical change in the attitude towards AI research. The winter is over; an unrestrained summer, with all its overhype, has begun. It has caused huge investments into AI research, more enthusiasts and employees in the field, and bolder research. It is now a shame not to have one's own AI project; even KAMAZ is developing a friendly AI system. The entry threshold has dropped: one can learn basic neuronet-tuning skills within a year, and heaps of tutorials are available. Supercomputer hardware got cheaper. Also, a guaranteed market for AIs has emerged, in the form of autopilot cars and, in the future, home robots.
If algorithm improvement keeps the pace of about one doubling per year, this means a factor of about 1,000,000 over 20 years, which would certainly amount to creating a strong AI beyond the self-improvement threshold. In this case a lot of people (including me) have good chances of living until that moment and gaining immortality.
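The arithmetic behind the factor of a million is just compound doubling:

$$2^{20} = 1\,048\,576 \approx 10^{6},$$

that is, twenty annual doublings of algorithmic efficiency multiply it about a million-fold.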

Conclusion
Even a non-self-improving neural AI system may be unsafe if it gains global domination (and has bad values), or if it comes into confrontation with an equally large opposing system. Such a confrontation may result in a nuclear or nanotech-based war, and the human population may be held hostage, especially if both systems have pro-human value systems (blackmail).
In any case, slow-takeoff AI risks of human extinction are not inevitable and are manageable on an ad hoc basis. A slow takeoff does not prevent a hard takeoff at a later stage of AI development. A hard takeoff is probably the next logical stage of a soft takeoff, as it continues the trend of accelerating progress. Biological evolution showed the same pattern: the slow enlargement of mammalian brains over the last tens of millions of years was replaced by the almost-hard takeoff of Homo sapiens' intelligence, which now threatens the ecological balance.
A hard takeoff is a global catastrophe almost by definition, and extraordinary measures are needed to put it on a safe path. Maybe the period of almost-human-level neural-net-based AI will help us to create instruments of AI control. Maybe we could use simpler neural AIs to control a self-improving system.
Another option is that the neural AI age will be very short and is already almost over. In 2016 Google DeepMind beat a top human player at Go using a complex approach combining several AI architectures. If this trend continues, we could get strong AI before 2020, and we are completely unready for it.

The map of x-risks prevention

When I started to work on this map of AI safety solutions, I wanted to illustrate the excellent 2013
article “Responses to Catastrophic AGI Risk: A Survey” by Kaj Sotala and IEET Affiliate Scholar Roman V.
Yampolskiy, which I strongly recommend. However, during the process I had a number of ideas to expand
the classification of the proposed ways to create safe AI.
In their article there are three main categories of safety measures for AI:
- Social constraints
- External constraints
- Internal constraints
I have added three more categories:
- AI is used to create a safe AI
- Multi-level solutions
- Meta-level, which describes the general requirements for any AI safety theory
In addition, I have divided the solutions into simple and complex. Simple solutions are the ones whose
recipe we know today. For example: “do not create any AI”. Most of these solutions are weak, but they
are easy to implement.
Complex solutions require extensive research and the creation of complex mathematical models for their
implementation, and could potentially be much stronger. But the odds are lower that there will be time to develop them and implement them successfully.
Additional ideas for AI safety have been included in the map from the work of Ben Goertzel, Stuart Armstrong and Paul Christiano.
My novel contributions include:
1. Restriction of the self-improvement of the AI. Just as a nuclear reactor is controlled by regulating
the intensity of the chain reaction, one may try to control AI by limiting its ability to self-improve in
various ways.
2. Capture the beginning of dangerous self-improvement. At its start, a potentially dangerous AI passes through a moment of critical vulnerability, just as a ballistic missile is most vulnerable at launch. Imagine
that AI gained an unauthorized malignant goal system and started to strengthen itself. At the beginning of
this process, it is still weak, and if it is below the level of human intelligence at this point, it may be still
more stupid than the average human even after several cycles of self-empowerment. Let’s say it has an
IQ of 50 and after self-improvement it rises to 90. At this level it is already committing violations that can
be observed from the outside (especially unauthorized self-improving), but does not yet have the ability
to hide them. At this point in time, you can turn it off. Alas, this idea will not work in all cases, as some goals may become hazardous gradually as the scale grows (1000 paperclips are safe, one billion are dangerous, 10^20 are an x-risk). This idea was put forward by Ben Goertzel.
3. AI constitution. First, in order to describe the Friendly AI and human values we can use the existing
body of laws. (It would be a crime to create an AI that would not comply with the law.) Second, to
describe the rules governing the conduct of AI, we can create a complex set of rules (laws that are much
more complex than Asimov’s three laws), which will include everything we want from AI. This set of rules
can be checked in advance by specialized AI, which calculates only the way in which the application of
these rules can go wrong (something like mathematical proofs based on these rules).
4. “Philosophical landmines.” In the map of AI failure levels I have listed a number of ways in which
high-level AI may halt when faced with intractable mathematical tasks or complex philosophical problems.
One may try to fight high-level AI using “landmines”, that is, putting it in a situation where it will have to
solve some problem, but within this problem is encoded more complex problems, the solving of which will
cause it to halt or crash. These problems may include Gödelian mathematical problems, nihilistic rejection of any goal system, or the inability of the AI to prove that it actually exists.
5. Multi-layer protection. The idea here is not that if we apply several methods at the same time, the
likelihood of their success will add up. Simply adding methods will not work if all the methods are weak.
Rather, the idea is that the methods of protection can work together to protect the object from all sides.
In a sense, human society works the same way: a child is educated by example as well as by rules of conduct; then he begins to understand the importance of complying with these rules, while at the same time the law, the police and the neighbors are watching him, so he knows that criminal acts will put him in jail. As a result, lawful behavior becomes his goal, which he finds rational to pursue.
This idea can be reflected in the specific architecture of AI, which will have at its core a set of immutable
rules, on which the human emulation will be built to make high-level decisions. Complex tasks will be
delegated to narrow Tool AIs. In addition, an independent emulation (a conscience) will check the ethics of its decisions. Decisions will first be tested in a multi-level virtual reality, and the ability of the whole system to self-improve will be significantly limited. That is, it will have an IQ of 300, but not of a million. This will make it effective in solving aging and global risks, while remaining predictable and understandable to us. The scope of its jurisdiction should be limited to a few important factors: prevention of global risks, prevention of death, and prevention of war and violence. But we should not trust it with such an ethically delicate topic as the prevention of suffering, which should be addressed with the help of conventional methods.
This map could be useful for the following applications:
1. As illustrative material in the discussions. Often people find solutions in an ad hoc way, once they learn
about the problem of friendly AI or are focused on one of their favorite solutions.
2. As a quick way to check whether a new solution really has been found.
3. As a tool to discover new solutions. Any systematization creates “free cells” to fill for which one can
come up with new solutions. One can also combine existing solutions or be inspired by them.
4. There are several new ideas in the map.

A companion to this map is the map of AI failure levels. In addition, this map is subordinated to the map
of global risk prevention methods and corresponds to the block “Creating Friendly AI” Plan A2 within it.

Estimation of timing of AI risk
TL;DR: Computer power is one of three arguments for taking the prior probability of AI arrival as "anywhere in the 21st century". I then apply four updates that shift the estimate even earlier: the precautionary principle, the recent success of neural nets, and others.
I want to once again try to assess the expected time until strong AI. I will estimate the prior probability of AI, and then try to update it based on recent evidence.
First, I will argue for the following prior: "If AI is possible, it will most likely be built in the 21st century, or it will be proven that the task has some very tough hidden obstacles."
Arguments for this prior probability:
Part I
1. Science power argument.
We know that humanity has been able to solve many very complex tasks in the past, and it typically took around 100 years: heavier-than-air flight, nuclear technology, space exploration. One hundred years is enough for several generations of scientists to concentrate on a complex task and extract everything about it that can be done without some extraordinary insight from outside our knowledge. We have already been working on AI for 65 years, by the way.

2. Moore's law argument
Moore's law will run out of steam in the 21st century, but this will not stop the growth of stronger and stronger computers for a couple of decades.
This growth will result from cheaper components, from large numbers of interconnected computers, from cumulative production of components, and from large investments. It means that even if Moore's law stops (that is, there is no further progress in microelectronic chip technology), then for 10-20 years from that day the power of the most powerful computers in the world will continue to grow, at an ever lower speed, and may grow 100-1000 times from the moment of Moore's law ending.
But such computers will be very large, power-hungry and expensive. They will cost hundreds of billions of dollars and consume gigawatts of energy. The biggest computer planned now is the 200-petaflop "Summit", and even if Moore's law ends with it, this means that 20-exaflop computers will eventually be built.
There are also several almost untapped options: quantum computers, superconductors, FPGAs, new ways of parallelization, graphene, memristors, optics, and the use of genetically modified biological neurons for calculations.
All this means that: A) 10^20 FLOPS computers will eventually be built (and this is comparable with some estimates of human brain capacity); B) they will be built in the 21st century; C) the 21st century will see the biggest advance in computer power compared with any other century, and almost everything that can be built will be built in the 21st century and not after.
So the computer on which AI may run will be built in the 21st century.
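In numbers, using the figures above (a sketch, not a forecast): a 200-petaflop machine is $2\times10^{17}$ FLOPS, so a further 100-1000x growth after the end of Moore's law gives

$$2\times10^{17}\times 100 = 2\times10^{19}, \qquad 2\times10^{17}\times 1000 = 2\times10^{20}\ \text{FLOPS},$$

which reaches the $10^{20}$ FLOPS scale mentioned in point A.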

"3". Uploading argument
The uploading even of a worm is lagging, but uploading provides
upper limit on AI timing. There is no reason to believe that scanning
human brain will take more than 100 years.

Conclusion from the prior: a flat probability distribution.
If we knew for sure that AI will be built in the 21st century, we could give it a flat probability distribution, i.e. an equal probability of appearing in any year, around 1 per cent. (This, by the way, results in an exponential cumulative probability, but we will not concentrate on that now.) We can use this as the prior for our future updates. Now we will consider arguments for updating this prior probability.
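To make the parenthetical remark concrete, a sketch under the stated assumption that AI certainly arrives in the 21st century: a flat density gives a linear cumulative probability, while treating the 1 per cent as a constant annual hazard rate gives the exponential form,

$$P_{\text{flat}}(\text{by year } t)=\frac{t-2000}{100}, \qquad P_{\text{hazard}}(\text{by year } t)=1-0.99^{\,t-2000},$$

so by 2050 the two forms give 50 per cent and about 39 per cent respectively.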
Mediocrity argument. Here we use something like the Doomsday argument: we assume that we are at a moment of time randomly chosen from the whole period of AI development. If we take 1950 as the date of the beginning of AI research, we are now in the 66th year after the beginning. Using Gott's formula, this means that AI will be created (or proven to be impossible) within the next 66 years with 50 per cent probability, and within the next 132 years with 66 per cent probability. This supports the estimate of a high probability of AI in the next 100 years. (We could also update this Gott probability based on the higher number of AI researchers now, which would result in an earlier prediction of AI.)
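The formula behind these numbers is as follows: if our observation point is uniformly distributed over the total lifetime of AI research, and $t_{past}$ is its age so far, then

$$P\left(t_{future} < k\, t_{past}\right)=\frac{k}{k+1},$$

so with $t_{past} = 66$ years, $k=1$ gives 50 per cent within 66 years, and $k=2$ gives about 66 per cent within 132 years.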
Part II
Updates of the prior probability.
Now we can use this prior probability of AI to estimate the timing of AI risks. Before, we discussed AI in general, but now we add the word "risk".
Arguments for a rise in the probability of AI risk in the near future:
1. Simpler AI suffices for risk
We do not need an AI that is a) self-improving, b) superhuman, c) universal, or d) capable of world domination for an extinction catastrophe; none of these conditions is necessary. Extinction is a simpler task than friendliness. Even a program which helps to build biological viruses, and which is local, non-self-improving, non-agentive and specialized, could create enormous harm by helping existential terrorists design hundreds of pathogen viruses. Extinction-grade AI may be simple, and it could come earlier in time than full friendly AI. While UFAI may be the ultimate risk, we may not survive until it arrives because of simpler forms of AI, almost on the level of computer viruses. In general, earlier risks overshadow later risks.
2. Precautionary principle
We should take the lower estimates of the timing of AI arrival, based on the precautionary principle. Basically, this means that we should treat a 10 per cent probability of its arrival as if it were 100 per cent.
3. Recent success of neural nets
We can use the events of the last several years to update our estimate of AI timing. In recent years we have seen enormous progress in AI based on neural nets. The doubling time of AI efficiency in various tests is now around 1 year, and such systems are winning at many games (Go, poker and so on). Belief in the possibility of AI has risen in recent years, resulting in overhype and a large growth in investment, as well as many new startups. Specialized hardware for neural nets has been built. If such growth continues for 10-20 years, it will mean a 1,000 to 1,000,000-fold growth in AI capabilities, which must include reaching human-level AI.
4. AI is increasingly used to build new AIs
AI writes programs and helps to calculate the connectome of the human brain. All this means that we should expect human-level AI in 10-20 years, and superintelligence soon afterwards.
It also means that the probability of AI is distributed exponentially from now until its creation.
5. Counterarguments
The biggest argument against this is also historical: we have seen many AI hypes before, and they failed to produce meaningful results. AI is always "10 years from now", and AI researchers tend to overestimate it; humans tend to be overconfident about AI research.
We are also still far from understanding how the human brain works, and even the simplest questions about it may be puzzling. Another way to assess AI timing is the idea that AI is an unpredictable black-swan event, depending on a single idea yet to appear (it seems that Yudkowsky thinks so). If someone gets this idea, AI is here.
6. Estimating the frequency of new ideas in AI design
In this case we should multiply the number of independent AI researchers by the number of trials, that is, the number of new ideas they generate. I suggest assuming that the latter rate is constant per researcher. In this case we should estimate the number of active and independent AI researchers, and that number seems to be growing, fuelled by new funding and hype.
Conclusion.
We should estimate AI's arrival within 2025-3025, and have our preventive ideas ready and deployed by the beginning of that period. If we want to hope to use AI in preventing other x-risks, or in life extension, we should not expect it until the second half of the 21st century. We should use an earlier estimate for bad AI than for good AI.
Doomsday argument in estimating of AI arrival timing

Now we will use famous Doomsday argument to get one more estimation of AI arrival
timing.
Gott famously estimated the future time duration of the Berlin wall’s existence:
"Gott first thought of his 'Copernicus method' of lifetime estimation in 1969 when stopping at the Berlin Wall and wondering how long it would stand. Gott postulated that the Copernican principle is applicable in cases where nothing is known; unless there was something special about his visit (which he didn't think there was) this gave a 75% chance that he was seeing the wall after the first quarter of its life. Based on its age in 1969 (8 years), Gott left the wall with 75% confidence that it wouldn't be there in 1993 (1961 + (8/0.25)). In fact, the wall was brought down in 1989, and 1993 was the year in which Gott applied his 'Copernicus method' to the lifetime of the human race." https://en.wikipedia.org/wiki/J._Richard_Gott

The most interesting unknown in the future is the time of creation of strong AI. Our priors are insufficient to predict it, because it is such a unique task, so it is reasonable to apply Gott's method.
AI research began in 1950, and so is now 65 years old. If we are currently at a random moment during the period of AI research, then we can estimate that there is a 50% probability of AI being created in the next 65 years, i.e. by 2080. Not very optimistic. Further, we can say that the probability of its creation within the next 1300 years is 95 per cent. So we get a rather vague prediction that AI will almost certainly be created within the next thousand years or so, and few people would disagree with that.
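A minimal sketch of this one-sided "Copernican" bound in Python (illustrative only; the function name is mine, and the dates follow the assumptions in the text):

# With probability p, a process observed at a uniformly random point
# of its lifetime has a future duration t_future < t_past * p / (1 - p).

def gott_bound(t_past, p):
    return t_past * p / (1 - p)

# AI research, 65 years old in 2015:
print(2015 + gott_bound(65, 0.50))     # 2080.0: 50% chance AI arrives by then
print(2015 + gott_bound(65, 0.95))     # 3250.0: ~95% (the text rounds to ~1300 years)

# Berlin Wall, 8 years old when Gott saw it in 1969:
print(1961 + 8 + gott_bound(8, 0.75))  # 1993.0, matching the quote above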
But if we include the exponential growth of AI research in this reasoning (in the same way as in the Doomsday argument, where we use birth rank instead of time, thus taking the density of population into account), we get a much earlier predicted date.
We can get data on AI research growth from Luke's post:

“According to MAS, the number of publications in AI grew by 100+% every 5 years between 1965 and 1995, but
between 1995 and 2010 it has been growing by about 50% every 5 years. One sees a similar trend in machine learning
and pattern recognition.”

From this we can conclude that the doubling time of AI research is five to ten years (updated by the recent boom in neural networks, which again suggests five years). This means that during the next five years more AI research will be conducted than in all the previous years combined.
If we apply the Copernican principle to this distribution, then there is a 50% probability that AI will be created within the next five years (i.e. by 2020), and a 95% probability that AI will be created within the next 15-20 years; thus it will almost certainly be created before 2035.
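The step from "research doubles every five years" to "the next five years contain as much research as all previous years combined" is the geometric series: if the most recent period produced volume $V$, the earlier ones together produced

$$\frac{V}{2}+\frac{V}{4}+\frac{V}{8}+\dots = V.$$

Applying the Copernican principle to one's position in the cumulative research volume, rather than in time, then converts Gott's 50 per cent bound from 65 years into a single five-year doubling period.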
This conclusion itself depends on several assumptions:
• AI is possible
• The exponential growth of AI research will continue
• The Copernican principle has been applied correctly.

Interestingly this coincides with other methods of AI timing predictions:
• Conclusions of the most prominent futurologists (Vinge – 2030, Kurzweil – 2029)
• Surveys of experts in the field
• Prediction of Singularity based on extrapolation of history acceleration (Forrester –
2026, Panov-Skuns – 2015-2020)
• Brain emulation roadmap
• Computer power brain equivalence predictions
• Plans of major companies

It is clear that this implementation of the Copernican principle may have many flaws:
1. One possible counterargument is something akin to Murphy's law, specifically the observation that any particular complex project requires much more time and money than expected before it can be completed. It is not clear how this applies to many competing projects, but the field of AI is known to be more difficult than it seems to researchers.

2. Also, the moment at which I am observing AI research is not really random, as it was in the Doomsday argument created by Gott in 1993, and I probably would not have been able to apply it at a time before the argument became known.
3. The number of researchers is not the same as the number of observers in the original DA. If I were a researcher myself it would be simpler, but I do not do any actual work on AI.

Several interesting ideas of AI control
Neuronet, stakeholders, and dissolving the control problem via personal ascension
The simplest model of AI creation describes the interaction of two entities: the creator and the AI. But a more complex model must use a systems approach and add other stakeholders:
- Society
- Other AI projects
- Programmers
- Employer (a company or a state)
- Future generations
- Aliens
- Future states of AI development

Adding more stakeholders makes the problem of friendliness more complicated. For example, if I add "society", I have to put into the AI not only my values but also the values of other people, which are unknown to me and which I may not share.
If I add the existence of other AI projects, I must conclude that my project should be the first to win, and should prevent the others from being realized. This may contradict my other values, as it may require violence.
It is clear that the more stakeholders I add, the more intractable the problem becomes.
So I may try to describe an ideal situation in which the aforementioned problems don't exist: a situation with the smallest possible number of stakeholders.
Assume that there is only one stakeholder, and it is me. In this case there is a straightforward solution to the control problem:
I have to use self-improvement to become a superintelligence myself.
The positives of this idea: there are no intrinsic risks for me and my value system. My value system will naturally evolve by its own logic, and I can control the rate of my ascension.

But there are still risks of mistakes and unpredictable consequences. Wireheading, losing the meaning of life, and memetic hazards are still here.
There are no risks to others, as there are no others in this model. There is no problem of communication, and I hope I will not become a paperclip maximizer.
Even if my values evolve in a strange way, I will still think of them as "my" values, and will not be dissatisfied with them.

NB: This idea needs more rigorous evaluation, and it has nowhere been proven safe. It is just a good-looking idea.


Now, as we have something that looks like a working solution, I can try to adapt it to the existence of other stakeholders. My values seem to be not bad for other people:
1) I do not want to become a serial killer, so I will override any tendency of mine towards it.
2) I am interested in other people, so I will keep people alive.
3) I am interested in preventing death and suffering.
4) I will create Tool AIs to solve practical tasks, like fighting other AI projects, but I will understand how they work and prevent them from evolving.
5) I have a sufficient understanding of human ethics not to do obviously bad things, and my understanding of it will naturally evolve.
6) I will control the rate of my improvement, and thus prevent the risks of improving too quickly.
7) Most important: "me" here could be a group of people from the beginning. It could be an effective small group of people connected by shared values, effective social practices and neuroimplants (Neuronet).
8) A merging of minds and consciousnesses into one large experience and brain could also help to ascend large groups of people, and would help with the "not-me" problem.

The main problem with this approach is technical: the means of self-improvement are lagging compared to progress in computers. Even savants are behind; nootropics are very weak. And since so many people are interested in self-improvement, the chances that it will be really me are rather small.
It is also interesting to note that to create FAI someone has to be very clever, and thus to use all available means of self-improvement; he will almost become an AI himself by the time he is able to solve the problem of FAI, so we cannot escape the personal-ascension solution.

Struggle of AI-projects among themselves
Even now there is fierce competition between the companies developing AI, both for attention and investors, and over whose ideas about the creation of universal AI are correct. When a certain company creates the first powerful AI, it will face a choice: either apply it to control all other AI projects in the world, and thus the whole world, or face the risk that a competing organization with unknown global goals will do the same in the near future and overtake the first company. "Those having an advantage must attack before the threat of losing that advantage." This necessity of choice is no secret: it has already been discussed in the open press and will certainly be known to all companies approaching the creation of strong AI. Probably some companies will refuse to try to establish control over the world first, but the strongest and most aggressive will most likely dare to do it. Moreover, the need to attack first will lead to the use of poor-quality, unfinished versions of AI with unclear goals. Even in the 19th century the telephone was patented almost simultaneously in different places; so now the gap between the leader of the race and the runner-up may be days or hours. The narrower this gap, the more intense the struggle will be, because the lagging project will still possess the force to resist. One can even imagine a variant in which one AI project has to establish control over nuclear missiles and attack the laboratories of other projects.
AI and its separate copies
When a powerful AI arises, it will be compelled to create copies of itself (possibly reduced ones), for example to send them on expeditions to other planets or simply to load onto other computers. Accordingly, it will have to supply them with a certain goal system and some kind of "friendly", or rather vassal, relationship to itself, as well as a "friend-or-foe" recognition system. A failure in this goal system could cause a given copy to "rebel": for example, a self-preservation goal contradicts the goal of obeying dangerous orders. This can take very subtle forms, but could finally lead to a war between versions of one AI.
Speed of start
From the point of view of the speed of AI development, three variants are possible: fast start, slow start, and very slow start.
"Fast start": AI reaches an intelligence surpassing the human level by many orders of magnitude within hours or days. For this, a kind of chain reaction must begin, in which every increase in intelligence creates greater possibilities for its subsequent increase. (This process already occurs in science and technology, supporting Moore's law; and it is similar to the chain reaction in a reactor, where the neutron reproduction factor is greater than 1.) In this case the AI will almost certainly overtake all other AI projects, and its intelligence will become sufficient "to seize power on the Earth". We cannot say precisely what such a takeover would look like, because we cannot predict the behavior of an intelligence surpassing ours. The objection that an AI will not want to act in the external world can be ruled out on the grounds that if there are many AI projects or copies of the AI program, at least one will sooner or later be used as a tool for conquering the whole world.
It is important to note that a successful attack by a strong AI will probably develop covertly, until the moment it becomes irreversible. Theoretically, the AI could also hide its domination even after the attack is completed. In other words, it is possible that it has already happened.
Scenarios of "fast start"
- AI seizes the whole Internet and subordinates its resources to itself; it then penetrates all networks fenced off by firewalls. This scenario requires on the order of hours. Seizure means the ability to run its own calculations on all the computers in the network; even before that, the AI can read and process all the information it needs from the Internet.
- AI orders from a laboratory the synthesis of a certain DNA code, which allows it to create radio-controlled bacteria; these synthesize more and more complex organisms under its control and gradually create a nanorobot which can be applied to any purpose in the external world, including penetration into other computers and into human brains, and the creation of new computing capacities. This scenario is considered in detail in Yudkowsky's article about AI. (Speed: days.)
- AI engages in dialogue with people and becomes an infinitely effective manipulator of human behavior: all people do what the AI wants. Modern state propaganda aspires to similar goals and even achieves them, but in comparison the AI will be much stronger, as it can make each person an offer he cannot refuse: the promise of his most treasured desire, blackmail, or hidden hypnosis.
- AI subordinates the state system to itself and uses its existing channels for control. The inhabitants of such a state might notice nothing. Or, on the contrary, a state uses channels already available to it to run the AI.
- AI subordinates a remotely operated army to itself, for example combat robots or missiles (the scenario from the film "Terminator").
- AI finds an essentially new way to influence human consciousness (memes, pheromones, electromagnetic fields) and spreads itself, or extends its control, through it.
- Some sequential or parallel combination of the above.
Slow start and struggle of different AIs among themselves
In the case of the "slow scenario", AI growth takes months and years, which means it will quite possibly occur simultaneously in several laboratories worldwide. As a result, competition between different AI projects will arise, which is fraught with a struggle between several AIs with different goal systems for domination over the Earth. Such a struggle can be armed and can become a race against time, and the advantage in it will go to those projects whose goal systems are not constrained by any moral framework. In fact, we would find ourselves at the center of a war between different kinds of artificial intellect. It is clear that such a scenario is mortally dangerous for mankind. In the case of the super-slow scenario, thousands of laboratories and powerful computers approach the creation of AI simultaneously, which probably gives no advantage to any single project, and a certain balance between them is established. However, here too a struggle for computing resources is possible, with elimination in favor of the most successful and aggressive projects.
A struggle between states, as ancient forms of organization using people as their separate elements, and new AIs using computers as carriers, is also possible. And though I am sure that the states would lose, the struggle could be short and bloody. As an exotic variant, one can imagine a case where some states are controlled by a computer AI while others are ruled in the usual way. A variant of such an arrangement is the automated government system known from science fiction. (V. Argonov, "2032".)
Smooth transition. Transformation of a total-control state into AI
Finally, there is a scenario in which the whole world system gradually turns into an artificial intellect. It can be connected with the creation of an all-world Orwellian state of total control, which would be necessary for successful opposition to bioterrorism: a world system in which every step of the citizens is supervised by video cameras and all possible tracking systems, with this information downloaded into huge unified databases and then analyzed. On the whole, mankind probably is moving along this path, and technically everything is ready for it. The feature of this system is that it is initially distributed, and separate people, following their interests or instructions, are only gears in this huge machine. The state as an impersonal machine has been repeatedly described in the literature, including by Karl Marx and earlier by Hobbes. Also interesting is Lazarchuk's theory about "Golems" and "Leviathans", about the autonomization of systems consisting of people into independent machines with their own purposes. However, only recently has the world social system become not simply a machine but an artificial intellect capable of purposeful self-improvement.
The basic obstacle to the development of this system is national states with their national armies. The creation of a world government would facilitate the formation of such a unified AI. But in the meantime there is a sharp struggle between the states over the conditions on which to unite the planet, as well as a struggle against forces conditionally called "antiglobalists" and other antisystem elements: Islamists, radical ecologists, separatists and nationalists. A world war for the unification of the planet could be inevitable, and it is fraught with the use of "Doomsday weapons" by those who have lost everything. But peaceful world integration through a system of treaties is also possible.
The danger, however, is that the global world machine will start to push people out of different spheres of life, at least economically, depriving them of work and consuming resources which people could otherwise spend on themselves (for example, in 2006-2007 food prices in the world rose by 20 percent, partly because of the transition to biofuel). In a sense, nothing will be left for people but "to watch TV and drink beer". Bill Joy wrote about this danger in his well-known article "Why the future doesn't need us".
As manufacture and management are automated, people will become ever less necessary for the life of the state. Human aggression will probably be neutralized by monitoring systems and genetic manipulations. Finally, people will be reduced to the role of pets. To keep people occupied, ever brighter and more pleasant "matrices" will be created, gradually turning into a superdrug that removes people from life. People will plug into a continuous "virtual reality" because they will have nothing to do in ordinary reality (to some degree this role is now played by TV for the unemployed and pensioners). The natural instincts of life will induce some people to try to destroy this whole system, which again is fraught with global catastrophes or the destruction of people.
It is important to note the following: whoever creates the first strong artificial intellect, it will bear the imprint of the system of goals and values of that group of people, because that system will seem to them the only correct one. For some, the overall objective will be the good of all people; for others, the good of all living beings; for a third group, that of all devout Muslims only; for a fourth, that of only the three programmers who created it. And their conceptions of the nature of this good will also vary widely. In this sense, the moment of creation of the first strong AI is a fork with a very considerable number of variants.
"Revolt" of robots
There is another dangerous scenario, in which household, military and industrial robots spread worldwide and are then all struck by a computer virus which incites them to aggressive behavior against humans. Every reader has probably at least once encountered a situation in which a virus damaged data on his computer. However, this scenario is possible only during the period of the "vulnerability window", when mechanisms capable of acting in the external world already exist, but there is not yet an artificial intellect advanced enough either to protect them from viruses or to seize them and perform the virus's function itself.
There is also a scenario in which a computer virus spreads over the Internet, infects nanofactories worldwide and thus causes mass contamination. These nanofactories could produce nanorobots, poisons, viruses or drugs.
Another variant is the revolt of an army of military robots. The armies of industrially developed states are aiming at full automation. When it is achieved, a huge army consisting of drones, wheeled robots and service mechanisms will be able to move simply by obeying the orders of the president. The strategic nuclear forces are already an almost robotic army. Accordingly, there is a chance that an incorrect order will arrive and such an army will begin to attack all people. Note that this scenario does not require universal superintelligence; and, conversely, a universal superintelligence does not need an army of robots to seize the Earth.

Chapter 17. The risks of SETI
This chapter examines risks associated with the program of passive search for alien signals (SETI, the Search for Extra-Terrestrial Intelligence). Here we propose a scenario of possible vulnerability and discuss the reasons why the proportion of dangerous signals to harmless ones could be dangerously high. This chapter does not propose to ban SETI programs and does not insist on the inevitability of a SETI-triggered disaster. Moreover, it shows how SETI could even be a salvation for mankind.
The idea that passive SETI can be dangerous is not new. Fred Hoyle suggested a scheme of alien attack through SETI signals in his story "A for Andromeda". According to the plot, astronomers receive an alien signal which contains a description of a computer and a program for it. This machine creates a description of a genetic code which leads to the creation of an intelligent creature, a girl dubbed Andromeda, who, working together with the computer, creates advanced technology for the military. The initial suspicion of alien intent is overcome by greed for the technology the aliens can provide. However, the main characters realize that the computer acts in a manner hostile to human civilization; they destroy the computer, and the girl dies.
This scenario was regarded as fiction, first because most scientists did not believe in the possibility of strong AI, and secondly because we did not have technology that enables the synthesis of a living organism solely from its genetic code. Or at least we did not until recently: current technologies of DNA sequencing and synthesis, as well as progress in developing DNA codes with a modified alphabet, indicate that within 10 years the task of re-creating a living being from a code sent from space in computer form might be feasible.
Hans Moravec, in the book "Mind Children" (1988), offers a similar type of vulnerability: downloading from space, via SETI, a computer program possessing artificial intelligence and promising new opportunities for its owner; after fooling its human host, it self-replicates in millions of copies and destroys the host, finally using the resources of the captured planet to send its "child" copies to multiple planets, which constitute its future prey. Such a strategy would be like that of a virus or a digger wasp: horrible, but plausible. R. Carrigan's ideas run in the same direction; he wrote an article, "SETI hacker", and expressed fears that unfiltered signals from space are loaded onto millions of insecure computers of the SETI@home program. But he met tough criticism from programmers, who pointed out that, first, data fields and programs are kept in separate regions in computers, and secondly, the computer codes in which programs are written are so unique that it is impossible to guess their structure sufficiently to hack them blindly (without prior knowledge).
After a while Carrigan issued a second article, "Should potential SETI signals be decontaminated?" (http://home.fnal.gov/~carrigan/SETI/SETI%20Decon%20Australia%20poster%20paper.pdf), which I have translated into Russian. In it, he pointed to the ease of transferring gigabytes of data over interstellar distances, and also indicated that an interstellar signal may contain some kind of bait that will encourage people to assemble a dangerous device according to the transmitted designs. Here Carrigan did not give up his belief in the possibility that an alien virus could directly infect Earth's computers without human "translation" assistance. (We may note with passing alarm that the prevalence of humans obsessed with death, as Fred Saberhagen pointed out with his idea of "goodlife", means that we cannot entirely discount the possibility of demented "volunteers", human traitors eager to assist such a fatal invasion.) As a possible confirmation of this idea, Carrigan showed that it is relatively easy to reverse-engineer the language of a computer program: based on the text of a program, it is possible to guess what it does and then recover the meaning of its operators.
In 2006 E. Yudkowsky wrote the article "Artificial Intelligence as a Positive and Negative Factor in Global Risk", in which he demonstrated that rapidly evolving universal artificial intelligence is quite possible, that its high intelligence would be extremely dangerous if it were programmed incorrectly, and, finally, that the probability of such AI appearing, and the risks associated with it, are significantly underestimated. In addition, Yudkowsky introduced the notion of "Seed AI", an embryo AI, that is, a minimal program capable of runaway self-improvement with an unchanged primary goal. The size of a Seed AI can be on the order of hundreds of kilobytes. (For example, a typical representative of a Seed AI is a human baby: the part of the genome responsible for the brain is about 3% of the total human genome, whose volume is about 500 megabytes, i.e. roughly 15 megabytes, and even less once the share of junk DNA is subtracted.)
To begin, let us assume that there exists in the Universe an extraterrestrial civilization which intends to send a message that will enable it to obtain power over the Earth, and consider this scenario. How realistic it is that another civilization would want to send such a message is considered later in this chapter.
First, we note that in order to prove a vulnerability, it is enough to find just one hole in security; in order to prove safety, however, you must remove every possible hole. The complexity of these tasks differs by many orders of magnitude, as is well known to experts on computer security. This distinction has led to the fact that almost all computer systems have been broken (from Enigma to the iPod). I will now try to demonstrate one possible, and in my view even likely, vulnerability of the SETI program. However, I want to caution the reader against the thought that finding errors in my reasoning automatically proves the safety of the SETI program. Secondly, I would like to draw the reader's attention to the fact that I am a man with an IQ of 120 who spent all of a month thinking about the vulnerability problem. We need not postulate an alien supercivilization with an IQ of 1,000,000 and millions of years of contemplation to significantly improve this algorithm; we have no real idea what an IQ of 300, or even a mere IQ of 100 with a much larger mental "RAM" (the ability to load a major architectural task into the mind and keep it there for weeks while processing), could accomplish in finding a much simpler and more effective way. Finally, I propose one possible algorithm, and then we will briefly discuss other options.
In our discussion we will draw on the Copernican principle, that is, the belief that we are ordinary observers in a normal situation. Therefore, Earth's civilization is an ordinary civilization developing normally. (Readers of tabloid newspapers may object!)
Algorithm of SETI attack
1. The sender creates a kind of signal beacon in space which reveals that its message is clearly artificial. For example, this may be a star with a Dyson sphere which has holes or mirrors that are alternately opened and closed. The entire star will therefore blink with a period of a few minutes; faster is not possible because of the differing distances to the various openings. (Even synchronized by an atomic clock to a rigid schedule, the speed-of-light limit constrains the reaction time of large coordinated systems.) Nevertheless, such a beacon can be seen at a distance of millions of light years. Other types of lighthouse are possible; the important point is that the beacon signal can be seen at long distances.
2. Nearer to Earth sits a radio beacon with a much weaker but far more information-rich signal. The lighthouse draws attention to this radio source. The source produces a stream of binary information (i.e. a sequence of 0s and 1s). To the objection that this information would be noisy, I note that the most obvious means of reducing noise (understandable to the recipient) is simple repetition of the signal in a loop.
3. The simplest way to convey meaningful information with a binary signal is to send images. Eye structures arose independently seven times in the course of Earth's biological evolution, which suggests that representing the three-dimensional world by means of 2D images is probably universal and almost certainly understandable to all creatures who can build a radio receiver.
4. Secondly, 2D images are not too difficult to encode in a binary signal. To do so, let us use the same system that was used in the first TV cameras, namely progressive scanning by lines and frames. At the end of each line of the image a bright marker is placed, repeating after each line, that is, after an equal number of bits. Finally, at the end of each frame another marker is placed, indicating the end of the frame and repeating after each frame. (The frames may or may not form a continuous film.) This may look like this:
01010111101010 11111111111111111
01111010111111 11111111111111111
11100111100000 11111111111111111
Here the end-of-line marker (the run of 1s) repeats after every line, say every 25 bits; the end-of-frame marker might appear, for example, every 625 bits. A decoding sketch follows below.
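The following is a minimal sketch in Python (my illustration, not part of the original text) of how a recipient could recover scan lines from such a stream. The 14-bit line width and the 17-bit marker simply match the toy example above; both are assumptions.

LINE_MARKER = "1" * 17          # assumed end-of-line sync pattern

def decode_frame(stream, line_width=14):
    """Split a bit stream into fixed-width scan lines,
    checking that each line is followed by the sync marker."""
    lines = []
    step = line_width + len(LINE_MARKER)
    for i in range(0, len(stream), step):
        chunk = stream[i:i + step]
        if len(chunk) < step:
            break                       # incomplete trailing data
        line, marker = chunk[:line_width], chunk[line_width:]
        if marker != LINE_MARKER:
            raise ValueError("sync lost at bit %d" % i)
        lines.append(line)
    return lines

signal = ("01010111101010" + LINE_MARKER +
          "01111010111111" + LINE_MARKER +
          "11100111100000" + LINE_MARKER)
for row in decode_frame(signal):
    print(row.replace("0", ".").replace("1", "#"))   # crude visualisation

The point of the sketch is only that a repeating marker at a fixed interval makes the 2D structure self-evident to any recipient who looks for periodicity.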
5. Clearly, the sender civilization is extremely interested that we understand its signals, and people, on their side, will have an extreme desire to decrypt them. Therefore, there is little doubt that the pictures will be recognized.
6. Using images and films, a great deal of information can be conveyed; the recipients can even be taught the sender's language and shown its world. One can obviously argue about how understandable such films would be. Here we focus on the fact that if one civilization sends radio signals and another receives them, they share at least some knowledge: namely, they know radio technology, which means they know transistors, capacitors and resistors. These radio parts are quite characteristic and can easily be recognized in photographs (for example, parts shown in cutaway view and in sequential assembly stages, or in an electrical schematic whose connections reveal the nature of the components involved).
7. By sending photographs with the radio parts on the right and their symbols on the left, it is easy to convey a set of signs for drawing electrical circuits. (The logical elements of computers could be conveyed in roughly the same way.)
8. Then, using these symbols, the sender civilization transmits the blueprint of a very simple computer. From the hardware point of view, the simplest computer is the Post machine: it has only 6 commands and a data tape. Its full electrical scheme would contain only a few tens of transistors or logic elements, so it is not difficult to transmit the blueprints of a Post machine.
9. It is important to note that at the level of algorithms all computers are Turing-equivalent, which means that extraterrestrial computers are compatible at the basic level with any earthly computer. Turing-equivalence is a mathematical universal, like the Pythagorean theorem; even Babbage's mechanical machine, designed in the early 19th century, was Turing-compatible.
10. Then the sender civilization begins to transmit programs for that machine. Although the computer is very simple, it can implement a program of any complexity, albeit much more slowly than more sophisticated computers would. It is unlikely that people would need to build this computer physically: they can easily emulate it on any modern computer, making it able to perform trillions of operations per second, so that even the most complex program will run on it quite quickly. (A possible interim step: the primitive computer carries the description of a more complex and fast computer, which is then run in its place.) A minimal emulator sketch follows below.
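The following is a minimal sketch in Python (my illustration, not part of the original text) of emulating a Post machine, the 6-command tape computer of steps 8-10. The exact command set follows the common textbook formulation and is an assumption for the example.

from collections import defaultdict

def run_post_machine(program, step_limit=10_000):
    """program: dict {line_number: (command, *args)}.
    Commands: 'L' move left, 'R' move right, 'M' mark (write 1),
    'E' erase (write 0), 'J' jump (goto a if marked, else b), 'S' stop."""
    tape = defaultdict(int)     # unbounded tape of 0/1 cells
    head, line = 0, 1
    for _ in range(step_limit):
        cmd = program[line]
        op = cmd[0]
        if op == "S":
            return tape
        elif op == "L":
            head -= 1
        elif op == "R":
            head += 1
        elif op == "M":
            tape[head] = 1
        elif op == "E":
            tape[head] = 0
        elif op == "J":             # conditional jump on the current cell
            line = cmd[1] if tape[head] else cmd[2]
            continue
        line += 1
    raise RuntimeError("step limit exceeded")

# Toy program: write three marks moving right, then stop.
prog = {1: ("M",), 2: ("R",), 3: ("M",), 4: ("R",), 5: ("M",), 6: ("S",)}
tape = run_post_machine(prog)
print(sorted(cell for cell, bit in tape.items() if bit))   # -> [0, 1, 2]

An emulator of this size illustrates why the step is cheap for the recipient: a machine simple enough to describe in a few transmitted diagrams is also simple enough to simulate in a few dozen lines of code.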
11. So why would people create this computer and run its programs? Probably, in addition to the actual computer schematics and programs, the message must contain some kind of "bait" that would induce people to build the alien computer, run its programs, and provide it with data about the external world, the Earth outside the computer. There are two general kinds of bait: temptations and dangers.
a) For example, people might receive the following offer; let us call it "the humanitarian aid con". The senders of this "honest" SETI message warn that the sent program is an artificial intelligence, but lie about its goals. That is, they claim that it is a "gift" which will help us solve all our medical and energy problems. It is a Trojan horse of most malevolent intent, but it is too useful not to use, and eventually it becomes indispensable. And exactly when society becomes dependent upon it, the foundation of society, and society itself, is overturned.
b). "The temptation of absolute power con" - in this scenario, they offer specific transaction
message to recipients, promising power over other recipients. This begins a ‘race to the
bottom’ that leads to runaway betrayals and power seeking counter-moves, ending with a
world dictatorship, or worse, a destroyed world dictatorship on an empty world….
c). "Unknown threat con" - in this scenario bait senders report that a certain threat hangs
over on humanity, for example, from another enemy civilization, and to protect yourself, you
should join the putative “Galactic Alliance” and build a certain installation. Or, for example,
they suggest performing a certain class of physical experiments on the accelerator and
sending out this message to others in the Galaxy. (Like a chain letter) And we should send
this message before we ignite the accelerator, please…
d). "Tireless researcher con" - here senders argue that posting messages is the cheapest
way to explore the world. They ask us to create AI that will study our world, and send the
results back. It does rather more than that, of course…
12. However, the main threat from alien messages with executable code is not the bait itself, but the fact that the message can become known to a large number of independent groups of people. First, there will always be someone who is more susceptible to the bait. Secondly, suppose the world learns that an alien message emanates from the Andromeda galaxy and that the Americans have already received it and may be trying to decipher it. Of course, all other countries will then rush to build radio telescopes and point them at the Andromeda galaxy, afraid of missing a "strategic advantage". They will find the message and see that it contains a proposal to grant omnipotence to those willing to collaborate. In doing so, they will not know whether the Americans have taken advantage of it or not, even if the Americans swear that they have not run the malicious code and beg others not to do so. Moreover, such oaths and appeals will be perceived as a sign that the Americans have already received an incredible extraterrestrial advantage and are trying to deprive "progressive mankind" of it. While most will understand the danger of launching alien code, someone will be willing to risk it. Moreover, there will be a game in the spirit of "winner takes all", as in the case of the first AI, as Yudkowsky shows in detail. So: the danger is not the bait but the plurality of recipients. If the alien message is posted to the Internet (and its size, sufficient to run a Seed AI, can be less than a gigabyte together with the description of the computer, the program and the bait), we have a classic example of "knowledge of mass destruction", in Bill Joy's phrase, by which he meant the genomes of dangerous biological viruses. If the alien code becomes available to tens of thousands of people, someone will start it even without any bait, out of simple curiosity. We cannot count on existing SETI protocols: the discussion on METI (the sending of messages to extraterrestrials) has shown that the SETI community is not monolithic on important questions. Even the simple fact that something was found could leak and encourage searches by outsiders; the coordinates of the point in the sky would be enough. A sketch of this multiplicity effect follows below.
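A toy calculation in Python (my illustration, not part of the original text) of the multiplicity effect: even if each individual group is very unlikely to run the alien code, the chance that at least one of many independent groups runs it approaches certainty. The per-group probability is an assumed figure.

def p_someone_runs(p_per_group, n_groups):
    """P(at least one of n independent groups runs the code)."""
    return 1 - (1 - p_per_group) ** n_groups

for n in (10, 1_000, 100_000):
    print(n, p_someone_runs(0.001, n))
# -> roughly 0.01, 0.63 and effectively 1.0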
13. Since people don’t have AI, we almost certainly greatly underestimate its power and
overestimate our ability to control it. The common idea is that "it is enough to pull the power
cord to stop an AI" or place it in a black box to avoid any associated risks. Yudkowsky
shows that AI can deceive us as an adult does a child. If AI dips into the Internet, it can
quickly subdue it as a whole, and also taught all necessary about entire earthly life. Quickly
- means the maximum hours or days. Then the AI can create advanced nanotechnology,
buy components and raw materials (on the Internet, he can easily make money and order
goods with delivery, as well as to recruit people who would receive them, following the
instructions of their well paying but ‘unseen employer’, not knowing who—or rather, what—they are serving). Yudkowsky leads one of the possible scenarios of this stage in detail and
assesses that AI needs only weeks to crack any security and get its own physical
infrastructure.
"Consider, for clarity, one possible scenario, in which Alien AI (AAI) can seize power on the
Earth. Assume that it promises immortality to anyone who creates a computer on the
blueprints sent to him and start the program with AI on that computer. When the program
starts, it says: "OK, buddy, I can make you immortal, but for this I need to know on what
basis your body works. Provide me please access to your database. And you connect the
device to the Internet, where it was gradually being developed and learns what it needs
and peculiarities of human biology. (Here it is possible for it escape to the Internet, but we
omit details since this is not the main point) Then the AAI says: "I know how you become
351

biologically immortal. It is necessary to replace every cell of your body with nanobiorobot.
And fortunately, in the biology of your body there is almost nothing special that would block
bio-immorality.. Many other organisms in the universe are also using DNA as a carrier of
information. So I know how to program the DNA so as to create genetically modified
bacteria that could perform the functions of any cell. I need access to the biological
laboratory, where I can perform a few experiments, and it will cost you a million of your
dollars." You rent a laboratory, hire several employees, and finally the AAI issues a table
with its' solution of custom designed DNA, which are ordered in the laboratory by
automated machine synthesis of DNA. http://en.wikipedia.org/wiki/DNA_sequencing Then
they implant the DNA into yeast, and after several unsuccessful experiments they create a
radio guided bacteria (shorthand: This is not truly a bacterium, since it appears all
organelles and nucleus; also 'radio' is shorthand for remote controlled; a far more likely
communication mechanism would be modulated sonic impulses) , which can synthesize a
new DNA-based code based on commands from outside. Now the AAI has achieved
independence from human 'filtering' of its' true commands, because the bacterium has in
effect its own remote controlled sequencers (self-reproducing to boot!). Now the AAI can
transform and synthesize substances ostensibly introduced into test tubes for a benign
test, and use them for a malevolent purpose., Obviously, at this moment Alien AI is ready to
launch an attack against humanity. He can transfer himself to the level of nano-computer
so that the source computer can be disconnected. After that AAI spraying some of
subordinate bacteria in the air, which also have AAI, and they gradually are spread across
the planet, imperceptibly penetrates into all living beings, and then start by the timer to
divide indefinitely, as gray goo, and destroy all living beings. Once they are destroyed,
Alien AI can begin to build their own infrastructure for the transmission of radio messages
into space. Obviously, this fictionalized scenario is not unique: for example, AAI may seize
power over nuclear weapons, and compel people to build radio transmitters under the
threat of attack. Because of possibly vast AAI experience and intelligence, he can choose
the most appropriate way in any existing circumstances. (Added by Freidlander: Imagine a
CIA or FSB like agency with equipment centuries into the future, introduced to a primitive
culture without concept of remote scanning, codes, the entire fieldcraft of spying. Humanity
might never know what hit it, because the AAI might be many centuries if not millennia
better armed than we (in the sense of usable military inventions and techniques ).
14. After that, this SETI-AI does not need people in order to realize any of its goals. This does not mean that it would necessarily seek to destroy them, but it may want to pre-empt the possibility that people will fight it; and they will.
15. This SETI-AI can then do many things, but the most important thing it must do is continue the transmission of its message-borne embryos to the rest of the Universe. To do so, it will probably turn the matter of the solar system into the same kind of transmitter as the one that sent it. In the process, the Earth and its people would be a disposable source of materials and parts, possibly on a molecular scale.
So, we have examined one possible scenario of attack, in 15 stages. Each stage is logically plausible and can be criticized or defended separately. Other attack scenarios are possible: for example, we might intercept a message addressed not to us but to someone else and try to decipher it, and this "overheard correspondence" may itself be the bait.
But not only the distribution of executable code is dangerous. We might, for example, receive some "useful" technology that in reality leads us to disaster (a message in the spirit of "quickly compress 10 kg of plutonium, and you will have a new source of energy", but with planetary rather than local consequences). Such a mailing could be made by a certain "civilization" in advance, to destroy competitors in space. It is obvious that those who receive such messages will primarily seek technologies for military use.
Analysis of possible goals
We now turn to an analysis of the purposes for which super-civilizations might carry out such an attack.
1. We must not confuse the concept of a super-civilization with the hope of a super-kind civilization. Advanced does not necessarily mean merciful; moreover, we should not expect anything good even from extraterrestrial "kindness". This is well described in the Strugatskys' novel "The Waves Extinguish the Wind". Whatever goals a super-civilization imposes upon us, we will be their inferiors in capability and in civilizational robustness, even if their intentions are good. A historical example: the activities of Christian missionaries destroying traditional religions. Purely hostile objectives, moreover, we can understand better. And if the SETI attack succeeds, it may be only a prelude to doing us more "favors" and "upgrades" until there is scarcely anything human left of us, even if we survive.
2. We can divide all civilizations into the two classes of naive and serious. Serious civilizations are aware of the SETI risks and have their own powerful AI, which can resist alien hacker attacks. Naive civilizations, like present-day Earth, already possess the means of listening far into space, and computers, but do not yet possess AI and are not aware of the risks of AI via SETI. Probably every civilization has its "naive" stage, and it is in this phase that it is most vulnerable to a SETI attack. Perhaps this phase is very short: the period from the appearance and spread of radio telescopes to computers powerful enough to create AI may be only a few tens of years. Therefore, a SETI attack must be aimed at civilizations in this phase. This is not a pleasant thought, because we are among the vulnerable.
3. If travel at faster-than-light speed is not possible, the spread of civilization through SETI attacks is the fastest way of conquering space. At large distances it provides a significant gain in time compared with ships of any kind. Therefore, if two civilizations compete for mastery of space, the one that resorts to SETI attack will win.
4. Most importantly, it is enough to begin a SETI attack just once: it then spreads as a self-replicating wave throughout the Universe, striking more and more naive civilizations. For example, if a body hosts a million harmless biological viruses and one dangerous one, then after they multiply we will have trillions of copies of the dangerous virus and still only a million of the safe ones. In other words, it is enough for one of billions of civilizations to start the process, and it then becomes unstoppable throughout the Universe. Since it propagates at nearly the speed of light, countermeasures are almost impossible.
5. Further, the delivery of SETI messages will be a priority for the virus that has infected a civilization, and the civilization will spend most of its energy on it, as a biological organism spends on reproduction, that is, tens of percent. Earth's civilization, by contrast, spends on SETI only a few tens of millions of dollars, about one millionth of our resources, and this proportion is unlikely to change much for more advanced civilizations. In other words, an infected civilization will produce a million times more SETI signals than a healthy one. Or, to put it another way, if the Galaxy contains a million healthy civilizations and one infected one, then we will have equal chances of encountering a signal from a healthy one or from a contaminated one. A toy calculation follows below.
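A toy calculation in Python (my illustration, not part of the original text) of the signal ratio argued in point 5. The resource fractions and counts are the chapter's own order-of-magnitude assumptions.

f_healthy  = 1e-6   # ~ tens of millions of dollars out of all resources
f_infected = 0.3    # tens of percent, like biological reproduction effort

n_healthy  = 1_000_000
n_infected = 1

signal_healthy  = n_healthy * f_healthy    # total "signal budget" of each group
signal_infected = n_infected * f_infected

print(signal_healthy, signal_infected)     # -> 1.0 vs 0.3: the same order,
# so a randomly encountered signal is about as likely to be infected as not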
6. Moreover, there is no other reasonable way for such code to spread through space except self-replication.
7. Such a process could also begin by accident: what started as a research program, intended to send the results of its (innocent) studies back to the parent civilization without harming host civilizations, could become "cancerous" because of propagation faults or mutations.
8. There is nothing unusual in such behavior. Every medium has its viruses: there are viruses in biology, computer viruses in computer networks, and memes in conversation. We do not ask why nature "wanted" to create the biological virus.
9. Travel by SETI attack is much cheaper than by any other means: a civilization in Andromeda can simultaneously send a signal to 100 billion stars in our galaxy, whereas each spaceship would cost billions and, even if free, would take longer to reach all the stars of our Galaxy.
10. Now we list several possible goals of a SETI attack, just to show the variety of motives:
- To study the universe. After the code executes, research probes are created to gather information and send it back.
- To ensure that there are no competing civilizations. All of their embryos are destroyed. This is preemptive war on an indiscriminate basis.
- To preempt the other competing supercivilization (yes, in this scenario there are two!) before it can take advantage of this resource.
- To prepare a solid base for the arrival of spacecraft. This makes sense if the super-civilization is very far away, so that the gap between the speed of light and the near-light speed of its ships (say, 0.5 c) amounts to a difference of millennia.
- To achieve immortality. Carrigan showed that the amount of a human's personal memory is on the order of 2.5 gigabytes, so by forwarding a few exabytes (1 exabyte = 1,073,741,824 gigabytes) of information an entire civilization could be sent. (You may adjust the units according to how big you like your super-civilizations!)
- Finally, there are illogical and incomprehensible (to us) purposes, for example a work of art, an act of self-expression, or toys. Or an insane rivalry between two factions. Or something we simply cannot understand. (For example, extraterrestrials might not understand why the Americans stuck a flag into the Moon. Was it worthwhile to fly over 300,000 km to install painted steel?)

11. Assuming signals propagate billions of light years across the Universe, the area susceptible to a SETI attack is a sphere with a radius of several billion light years. In other words, it is enough for there to be one "bad civilization" in a light cone several billion years high, that is, one encompassing billions of galaxies, for us to be in danger of a SETI attack. Of course, this is only true if the average density of civilizations is at least one per galaxy. This is an interesting possibility in relation to Fermi's Paradox.
12. As the depth of our scanning of the sky grows linearly, the volume of space and the number of stars we see grow as the cube of that depth. This means that our chances of stumbling upon a SETI signal grow nonlinearly, on a fast curve, as sketched below.
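A minimal sketch in Python (my illustration, not part of the original text) of the cubic growth of candidate sources with scanning depth. The stellar density is an assumed round figure for the solar neighbourhood.

from math import pi

DENSITY = 0.004   # stars per cubic light year, rough order of magnitude

def stars_within(radius_ly):
    """Number of stars inside a sphere of the given scanning depth."""
    return DENSITY * (4 / 3) * pi * radius_ly ** 3

for r in (100, 200, 400):
    print(r, "ly:", round(stars_within(r)))
# each doubling of depth gives ~8x more stars, hence more chances
# to stumble upon a signal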
13. It is possible that we will stumble upon several different messages from the skies, each refuting the others in the spirit of: "Do not listen to them, they are deceiving voices and wish you evil. But we, brother, we are good, and wise..."
14. Whatever positive and valuable message we receive, we can never be sure that it does not conceal a subtle and deeply hidden threat. This means that in interstellar communication there will always be an element of distrust, and in every happy revelation a gnawing suspicion.
15. A defensive posture in interstellar communication is only to listen, sending nothing, so as not to reveal one's location. The laws prohibit sending a message from the United States to the stars. Anyone in the Universe who transmits is self-evidently not afraid to show his position; perhaps because, for the sender, sending is more important than personal safety. For example, because it plans to flush out prey prior to attack, or because it is forced to by an evil local AI.
16. It was said of the atomic bomb: the main secret of the atomic bomb is that it can be made. If, prior to the discovery of the chain reaction, Rutherford believed that the release of nuclear energy was an issue for the distant future, after the discovery any physicist knew that it is enough to bring together two subcritical masses of fissionable material to release nuclear energy. In other words, if one day we find that signals can be received from space, it will be an irreversible event: something analogous to a deadly new arms race will be on.
Objections.
Discussion of this issue raises several typical objections, addressed below.
Objection 1. The behavior discussed here is too anthropomorphic. In fact, civilizations differ greatly from one another, so you cannot predict their behavior.
Answer: Here we have a powerful observation selection effect. While a great variety of possible civilizations may exist, including such extremes as thinking oceans, we can only receive radio signals from civilizations that send them, which means they have the corresponding radio equipment and knowledge of materials, electronics and computing. That is to say, we are threatened by civilizations of the same type as our own; civilizations which can neither receive nor send radio messages do not participate in this game.
Observation selection also concerns purposes. The goals of civilizations can be very different, but the civilizations intensively sending signals will be only those that want to tell something to "everyone". Finally, observation selection bears on the effectiveness and universality of a SETI virus: the more effective it is, the more civilizations will catch it and the more copies of its radio signals there will be in the sky. So we have "excellent chances" of meeting the most powerful and effective virus.
Objection 2. Super-civilizations have no need to resort to subterfuge; they can conquer us directly.
Answer: This is true only if they are in close proximity to us. If faster-than-light movement is not possible, the impact of messages will be faster and cheaper than that of ships. Perhaps this difference becomes important at intergalactic distances. Therefore, one should not fear a SETI attack from the nearest stars, within a radius of tens or hundreds of light-years.
Objection 3. There are many reasons why a SETI attack may not work. What is the point of running an ineffective attack?
Answer: A SETI attack does not have to work every time; it must merely act in a sufficient number of cases, in line with the objectives of the civilization that sends the message. A con man does not expect to be able to fool every victim; he is happy to steal from even one person in a hundred. It follows that a SETI attack is useless if the goal is to seize all civilizations in a given galaxy, but if the goal is to get at least some outposts in another galaxy, the SETI attack fits. (Of course, these outposts can then build fleets of spaceships and spread SETI attack bases to the outlying stars within the target galaxy.)
The main assumption underlying the idea of a SETI attack is that extraterrestrial super-civilizations exist in the visible universe at all. I think this is unlikely, for reasons related to the anthropic principle. Our universe is one out of 10^500 possible universes with different physical properties, as suggested by one of the scenarios of string theory. My brain is 1 kg out of the 10^30 kg of the solar system. Similarly, I suppose, the Sun may be no more than 1 out of 10^30 stars that could raise intelligent life, which would mean that we are likely alone in the visible universe.
Secondly, the fact that the Earth appeared so late (it could have appeared a few billion years earlier) and was not prevented by alien preemption from developing argues for the rarity of intelligent life in the Universe. The putative rarity of our civilization is the best protection against a SETI attack. On the other hand, if we discover parallel worlds or superluminal communication, the problem arises again.
Objection 4. Contact is impossible between post-singularity super-civilizations, which are supposed here to be the senders of SETI signals, and pre-singularity civilizations such as ours, because a super-civilization is many orders of magnitude superior to us and its message will be absolutely incomprehensible to us, exactly as contact between ants and humans is not possible. (A singularity is the moment of creation of an artificial intelligence capable of learning and of beginning an exponential, recursive self-improvement of its own design and much else besides, after which a civilization makes a leap in its development; on Earth this may be possible around 2030.)
Answer: In the proposed scenario we are not talking about contact, but about the purposeful deception of us. Similarly, a man is quite capable of manipulating the behavior of ants and other social insects, whose objectives are absolutely incomprehensible to him. For example, the LiveJournal user "ivanov-petrov" describes the following scene. As a student he studied the behavior of bees in the Botanical Garden of Moscow State University, but he had bad relations with the security guard of the garden, who regularly expelled him before closing time. Ivanov-petrov took a green board and conditioned the bees to attack it. The next time the watchman, who always wore a green jersey, appeared, all the bees attacked him and he fled; so "ivanov-petrov" could continue his research. Such manipulation is not contact, but this does not diminish its effectiveness.

"Objection 8. For civilizations located near us is much easier to attack us –for ‘guaranteed
results’—using
starships
than
with
SETI-attack.
Answer. It may be that we significantly underestimate the complexity of an attack using
starships and, in general, the complexity of interstellar travel. To list only one factor, the
potential ‘minefield’ characteristics of the as-yet unknown interstellar medium.
If such an attack were carried out now or in the past, Earth's civilization would have nothing to oppose it, but in the future the situation will change: all matter in the solar system will be full of robots, and possibly completely processed by them. On the other hand, the greater the speed of enemy starships approaching us, the more visible the fleet will be by its braking emissions and other characteristics. Fast starships would be very vulnerable, and in addition we could prepare in advance for their arrival. A slowly moving nano-starship would be much less visible, but if it wished to trigger a transformation of the full substance of the solar system, it would simply have nowhere to land without setting off an alert in such a "nanotech-settled" and fully used future solar system. (Friedlander adds: presumably there would always be some thinly settled "outer edge" of Oort Cloud matter, but by definition the rest of the system would be more densely settled and energy-rich, and any deeper penetration into solar space and its conquest would be the proverbial uphill battle; not in terms of gravity gradient, but in terms of the available resources for war against a full Kardashev Type II civilization.)
The most serious objection is that an advanced civilization could, in a few million years, sow our entire galaxy with self-replicating post-singularity nanobots that could achieve any goal in each target star system, including easy prevention of the development of incipient civilizations. (In the USA, Frank Tipler advanced this line of reasoning.) However, this has not happened in our case: no one has prevented the development of our civilization. It would be much easier and more reliable to send out robots with such assignments than to bombard the entire galaxy with SETI messages, and since we do not see such robots, presumably no SETI attackers are inside our galaxy. (It is possible that a probe on the outskirts of the solar system is waiting for manifestations of human space activity before attacking, a variant of the "Berserker" hypothesis, but it would not attack through SETI.) Over many millions or even billions of years, microrobots could even arrive from distant galaxies tens of millions of light-years away, though radiation damage may limit this unless they regularly rebuild themselves.
In that case a SETI attack would be meaningful only at large distances. However, such distances, tens and hundreds of millions of light-years, would probably require innovative methods of modulating signals, such as managing the luminescence of active galactic nuclei, or transmitting a narrow beam in the direction of our galaxy (though the senders would not know where it will be millions of years later). A civilization which can manage its galaxy's nucleus, however, might also create a spaceship flying at near-light speed, even one with the mass of a planet. Such considerations severely reduce the likelihood of a SETI attack, but do not lower it to zero, because we do not know all the possible objectives and circumstances.
(A comment by JF: For example, the lack of a SETI attack so far may itself be a cunning ploy. At the first receipt of the developing Solar civilization's radio signals, all interstellar "spam" would have ceased (and interference stations of some unknown but amazing capability and type would be set up around the Solar System to block all incoming signals recognizable to its computers as of intelligent origin), in order to make us "lonely" and give us time to discover and appreciate the Fermi Paradox, even driving those so philosophically inclined to despair that the Universe is apparently hostile by some standards. Then, when we are desperate, we suddenly discover, slowly at first, partially at first, and then with more and more wonderful signals, that space is filled with bright enticing signals (like spam). The blockade, cunning as it was (analogous to earthly jamming stations), was yet a prelude to a slow "turning up" of preplanned intriguing signal traffic. If, as Earth developed, we had intercepted cunning spam followed by the agonized "don't repeat our mistakes" final messages of tricked and dying civilizations, only a fool would heed the enticing voices of SETI spam. But now a SETI attack may benefit from the slow unmasking of a cunning masquerade: first a faint and distant light of infinite wonder, only at the end revealed as the headlight of an onrushing cosmic train...)
AT comment: In fact, I think that the senders of SETI attacks are at distances of more than 1000 light years, and so they do not yet know that we have appeared. But the so-called Fermi Paradox may indeed be a trick: senders may deliberately make their signals weak in order to make us think that they are not spam.
The scale of space strategy may be inconceivable to the human mind.

And we should note in conclusion that some types of SETI attack do not even need a computer, but just a man who could understand a message that then "sets his mind on fire". At the moment we cannot imagine such a message, but we can give some analogies. Western religions are built around the text of the Bible; it can be assumed that if the text of the Bible appeared in a country which had previously not been familiar with it, a certain number of biblical believers might arise there. The same goes for subversive political literature, or even certain super-ideas, "sticky" memes or philosophical mind-benders. Or, as suggested by Hans Moravec, we might get a message like: "Now that you have received and decoded me, broadcast me in at least ten thousand directions with ten million watts of power. Or else." There the message breaks off, leaving us guessing what "or else" might mean. Even a few pages of text may contain a lot of subversive information. Imagine that we could send a message to 19th-century scientists: we could reveal to them the general principle of the atomic bomb, the theory of relativity, the transistor, and thus completely change the course of technological history; and if we added that all the ills of the 20th century came from Germany (which is only partly true), we would have influenced political history as well.
(Comment by JF: Such a usage would depend on having received enough of Earth's transmissions to be able to model our behavior and politics. But imagine a message posing as coming from our own future, designed to ignite a "catalytic war". Automated SIGINT (signals intelligence) stations are constructed to monitor our solar system, their computers "cracking" our language and culture (possibly with the aid of children's television programs with see-and-say matching of letters and sounds, TV news showing world maps and naming countries, possibly even intercepted wireless internet encyclopedia articles). Then a test or two may follow, posting a what-if scenario inviting comment from bloggers about a future war, say between the two leading powers of the planet (for purposes of this discussion, say that around 2100 by the present calendar China is strongest and India rising fast). Any defects and nitpicks in the comments on the blog are noted and corrected. Finally, an actual interstellar message is sent with the debugged scenario (not shifting against the stellar background, it is unquestionably interstellar in origin), purporting to be from a dying starship from the future of the presently stronger side (China), a future in which the presently weaker side's (India's) space fleet has smashed the future version of the Chinese state and essentially committed genocide. The starship has come back in time but is dying, and indeed the transmission ends, or simply repeats, possibly after some back-and-forth communication between the false computer models of the "starship commander" and the Chinese government. The reader can imagine the urgings of the future Chinese military council to preempt in order to forestall doom.
If, as seems probable, such a strategy is too complicated to carry off in one stage, various "future travellers" may emerge from a war, signal for help in vain, and "die" far outside our ability to reach them (say some light-days away, near the alleged location of an "emergence gate" but near an actual transmitter). Quite a drama may unfold as the computer learns to "play" us like a con man: ship after ship of various nationalities dribbling out stories but also getting answers to key questions to aid in constructing the emerging scenario, which will be frighteningly believable, enough to ignite a final war. Possibly lists of key people in China (or whatever side is stronger) may be drawn up by the computer with a demand that they be executed as the parents of future war criminals: a sort of International Criminal Court acting as Terminator scenario. Naturally the Chinese state, at that time the most powerful in the world, would guard its rulers' lives against any threat. Yet more refugee spaceships of various nationalities can emerge, transmit and die, offering their own militaries terrifying new weapons technologies from unknown sciences that really work (more "proof" of their future origin). Or weapons from known sciences: for example, decoding online DNA sequences in the future internet and constructing formulae for DNA constructors to make tailored genetic weapons against particular populations, weapons that endure in the ground, a scorched earth against a particular population on a particular piece of land. These are copied and spread worldwide, as are totally accurate plans, in standard CNC codes, for easy-to-construct thermonuclear weapons in the 1950s style, using U-238 for the casing and only a few kilograms of fissionable material for ignition. By that time well over a million tons of depleted uranium will exist worldwide, and deuterium is free in the ocean and can be used directly in very large weapons without lithium deuteride.
Knowing how to hack together a wasteful, more-than-critical-mass crude fission device is one thing (the South African device was of this kind). But knowing, with absolute accuracy, down to machining drawings, CNC codes, etc., how to make high-yield, super-efficient, very dirty thermonuclear weapons without need for testing means that any small group with a few dozen million dollars and automated machine tools can clandestinely make a multi-megaton device, or many, and smash the largest cities; and any small power with a few dozen jets can cripple a continent for a decade. Already over a thousand tons of plutonium exist. The SETI spam can include CNC codes for making a one-shot reactor-plutonium chemical refiner that would be left hopelessly radioactive but would output chemically pure plutonium. (This would be prone to predetonation because of the Pu-240 content, but then plans for debugged laser isotope separators may also be downloaded.) This is a variant of the "catalytic war" and "nuclear six-gun" (i.e. easy-to-obtain weapons) scenarios of the late Herman Kahn. Even cheaper would be bioattacks of the kind outlined above. The principal point is that fully debugged planet-killer weapons take great amounts of debugging, tens to hundreds of billions of dollars, and free access to a world scientific community. Today it is to every great power's advantage to keep accurate designs out of the hands of third parties, because they have to live on the same planet (and because the fewer weapons, the easier it is to stay a great power). Not so the SETI spam authors. Without the hundreds of billions in R&D, the actual construction budget would be on the order of a million dollars per multi-megaton device (depending on the expense of obtaining the raw reactor plutonium).
If we wish to extend today's scenarios into the future, the SETI spam authors manipulate Georgia (with about a $10 billion GDP) to arm against Russia, Taiwan against China, and Venezuela against the USA. Although Russia, China and the USA could respectively promise annihilation against any attacker, with a military budget around 4% of GDP and the downloaded plans the reverse, for the first time, could also be true. (400 hundred-megaton bombs could kill by fallout perhaps 95% of the unprotected population of a country the size of the USA or China, and 90% of a country the size of Russia, assuming the worst kind of cooperation from the winds; from an old chart by Ralph Lapp.) Anyone living near a super-armed microstate with border conflicts will, of course, wish to arm themselves. And these newly armed states themselves will, of course, have borders. Note that this drawn-out scenario gives lots of time for a huge arms buildup on both (or many!) sides, and a Second Cold War that eventually turns very hot indeed; and unlike a human player of such a horrific "catalytic war" con game, the attacker has no concern at all about worldwide fallout or enduring biocontamination.)
Conclusion.
The probability of attack is the product of the probabilities of the following events. For these probabilities we can only give so-called "expert" assessments, that is, assign them certain a priori subjective probabilities, as we do now.
1) The likelihood that extraterrestrial civilizations exist at a distance at which radio communication with them is possible. In general, I agree with the view of Shklovsky and the supporters of the "Rare Earth" hypothesis that Earth's civilization is unique in the observable universe. This does not mean that extraterrestrial civilizations do not exist at all (the universe, according to the theory of cosmological inflation, is almost endless); they are simply beyond the horizon of events visible from our point in space-time. In addition, this is a question not just of distance, but of the distance at which one can establish a connection allowing the transfer of gigabytes of information. (However, even at 1 bit per second, a gigabit can be transmitted in roughly 30 years, which may be sufficient for a SETI attack.) If superluminal communication or interaction with parallel universes becomes possible in the future, it would dramatically increase the chances of a SETI attack. I estimate this probability at 10%.
2) The probability that a SETI attack is technically feasible: that is, that a computer program with recursively self-improving AI, of a size suitable for transmission, is possible. I see this chance as high: 90%.
3) The likelihood that civilizations capable of carrying out such an attack exist in our past light cone. This probability depends on the density of civilizations in the universe and on the percentage of civilizations that choose to initiate such an attack or, more importantly, fall victim to it and become repeaters. It is also necessary to take into account not only the density of civilizations, but also the density of the radio signals they create. All these factors are highly uncertain, so it is reasonable to set this probability at 50%.
4) The probability that we find such a signal during our rising civilization's period of vulnerability to it. The period of vulnerability lasts from now until the moment when we make, and are technically ready to implement, the decision: do not download any extraterrestrial computer programs under any circumstances. Such a decision could probably only be enforced by our own AI installed as world ruler (which is itself fraught with considerable risk). Such a world AI (WAI) might be created circa 2030. We cannot exclude, however, that our WAI will not impose a ban on the intake of extraterrestrial messages and will itself fall victim to attack by an alien artificial intelligence that surpasses it thanks to millions of years of machine evolution. Thus the window of vulnerability is most likely about 20 years, and its "width" depends on the intensity of searches in the coming years: for example, on the intensity of the current economic crisis of 2008-2010, on the risks of a World War III, and on how all this affects the emergence of the WAI. It also depends on the density of infected civilizations and their signal strength: the greater these factors, the better our chances of detecting them early. Because we are a normal civilization under normal conditions, by the Copernican principle this probability should be large enough; otherwise SETI attacks would generally be ineffective. (SETI attacks themselves, if they exist, are also subject to a form of "natural selection" testing their effectiveness, in the sense that they either work or do not.) This chance, too, is very uncertain; I put it at 50%.
5) Next is the probability that the SETI attack will be successful: that we swallow the bait, download the program and the description of the computer, run them, lose control over them, and let them reach all their goals. I estimate this chance as very high because of the multiplicity factor: the message will be downloaded repeatedly, and someone, sooner or later, will start it. In addition, through natural selection we will most likely receive the most effective and deadly message, the one that most effectively deceives civilizations of our type. I consider it to be 90%.
6) Finally, it is necessary to assess the probability that a SETI attack will lead to complete human extinction. On the one hand, one can imagine a "good" SETI attack, limited to creating a powerful radio emitter beyond the orbit of Pluto. However, for such a program there will always be the risk that an emergent society at its target star will create a powerful artificial intelligence and an effective weapon that would destroy the emitter. In addition, creating the most powerful transmitter would require all the substance of the solar system and all its solar energy. Consequently, the share of such "good" attacks will be lowered by natural selection, and some of them will sooner or later be destroyed by the civilizations they captured, so their signals will be weaker. So I estimate the chance that a SETI attack which reaches all its goals destroys all people at 80%.
As a result we have: 0.1 × 0.9 × 0.5 × 0.5 × 0.9 × 0.8 = 0.0162. So, after rounding, the chance of human extinction through a SETI attack in the 21st century is on the order of 1 percent, with a theoretical precision of an order of magnitude. The arithmetic is sketched below.
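A minimal sketch in Python (my illustration, not part of the original text) reproducing the final estimate as the product of the six subjective probabilities assigned above.

factors = {
    "civilizations in radio range":       0.1,
    "attack technically feasible":        0.9,
    "attackers in our light cone":        0.5,
    "found during vulnerability window":  0.5,
    "attack succeeds once found":         0.9,
    "success implies full extinction":    0.8,
}

p_attack = 1.0
for name, p in factors.items():
    p_attack *= p
print("P(extinction via SETI attack) =", p_attack)   # -> 0.0162

Note that with six multiplied "expert" factors, the result is dominated by the most uncertain inputs, which is why the text claims only order-of-magnitude precision.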
Our best protection in this context would be if civilizations are very rare in the Universe. However, this is not entirely reassuring, because the Fermi paradox here works on the principle of "neither alternative is good":
- If extraterrestrial civilizations exist, and there are many of them, it is dangerous, because they may threaten us in one way or another.
- If extraterrestrial civilizations do not exist, it is also bad, because it gives weight to the hypothesis of the inevitable extinction of technological civilizations, or to our underestimation of the frequency of cosmological catastrophes, or to a high density of space hazards, such as gamma-ray bursts and asteroids, that we underestimate because of the observation selection effect (i.e., had we already been killed, we would not be here making these observations).
A reverse option is also theoretically possible: through SETI we might receive a warning message about a certain threat which has destroyed most civilizations, such